From: jinxing <jinxing6042@126.com>
Subject: Re: hdfs downgrade from 2.7.2 to 2.5.0
Date: Wed, 17 Aug 2016 10:44:51 +0800
To: Chris Nauroth <cnauroth@hortonworks.com>, user@hadoop.apache.org

Hi, Chris,

It's great to get your reply; I find I can continue the upgrade for the datanodes.
Two questions:
1. I always find the datanode upgrade very slow. Each datanode in my cluster holds about 20 TB of data, and the upgrade takes approximately 30 minutes;
2. Is it currently possible to downgrade the namenodes and datanodes in my cluster? If so, what is the proper procedure?

--Jin

=D4=DA 2016=C4=EA8=D4=C217=C8=D5=A3=AC=C9=CF=CE=E72:22=A3=ACChr= is Nauroth <cnauroth@hortonworks.com> =D0=B4=B5=C0=A3=BA

Hello,
 
Running = =A1=B0hdfs dfsadmin -rollingUpgrade finalize=A1=B1 finalized the = upgrade.  This is a terminal state for the upgrade process, so = afterwards, it is no longer possible to run =A1=B0hdfs dfsadmin = -rollingUpgrade downgrade=A1=B1.
 
Rolling upgrade supports upgrading individual daemons independent of other daemons (e.g. just the DataNodes). If you want to proceed with upgrading your 2.5.0 DataNodes to 2.7.2, then I expect you can start a new rolling upgrade and proceed with the upgrade process on just the subset of DataNodes still running 2.5.0.
--Chris Nauroth
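
A minimal sketch of what that could look like, assuming the remaining 2.5.0 DataNodes are upgraded with the same per-DataNode cycle as in the steps quoted further down, and that all commands are run from the 2.7.2 installation:

    hdfs dfsadmin -rollingUpgrade prepare    # prepare a new rollback fsimage
    hdfs dfsadmin -rollingUpgrade query      # re-run until the rollback image is ready
    # ...then shut down, upgrade, and restart each DataNode still on 2.5.0 (steps 3-4 below)...
    hdfs dfsadmin -rollingUpgrade finalize   # only once every DataNode is on 2.7.2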
 
From: jinxing <jinxing6042@126.com>
Date: Tuesday, August 16, = 2016 at 6:02 AM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: hfs downgrade from = 2.7.2 to 2.5.0
 
Hello, it's great to join this mailing list.

Can I ask a question?
 
Is it possible to downgrade cluster ?
 
I have already upgraded my cluster's namenodes (with one standby for HA) and several datanodes from 2.5.0, following https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#Downgrade_and_Rollback;

I took the following steps:
1. hdfs dfsadmin -rollingUpgrade prepare;
2. hdfs dfsadmin -rollingUpgrade query;
3. hdfs dfsadmin -shutdownDatanode <host:port> upgrade
4. restart and upgrade datanode;
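
(A minimal sketch of steps 3 and 4 on a single DataNode, assuming a placeholder IPC address dn-host:50020; the real address comes from dfs.datanode.ipc.address:)

    hdfs dfsadmin -shutdownDatanode dn-host:50020 upgrade   # step 3: ask the DataNode to shut down for upgrade
    hdfs dfsadmin -getDatanodeInfo dn-host:50020            # re-run until it stops responding, i.e. the DataNode is down
    hadoop-daemon.sh start datanode                          # step 4: restart the DataNode from the upgraded installation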
 
However, I terminated the upgrade by mistake = with command "hfs dfsadmin -rollingUpgrade finalize"
 
Currently, I have two 2.7.2 namenodes, three 2.7.2 datanodes, and 63 2.5.0 datanodes; now I want to downgrade the namenodes and datanodes from 2.7.2 back to 2.5.0.

But when I try to downgrade a namenode and restart it with "-rollingUpgrade downgrade", the namenode cannot start, and I get the following exception:
2016-08-16 20:37:08,642 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /home/maintain/hadoop/data/hdfs-namenode. Reported: -63. Expecting = -57.
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setLayoutVersion(StorageInfo.java:178)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setFieldsFromProperties(StorageInfo.java:131)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:608)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.readProperties(StorageInfo.java:228)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:323)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:955)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
2016-08-16 20:37:08,645 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@dx-pipe-sata61-pm:50070
2016-08-16 20:37:08,745 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2016-08-16 20:37:08,746 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2016-08-16 20:37:08,746 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2016-08-16 20:37:08,746 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /home/maintain/hadoop/data/hdfs-namenode. Reported: -63. Expecting = -57.
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setLayoutVersion(StorageInfo.java:178)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setFieldsFromProperties(StorageInfo.java:131)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:608)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.readProperties(StorageInfo.java:228)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:323)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:955)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
 
It=A1=AFs great if someone = can help?
 

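
As context for the exception above, a minimal sketch of where the reported layout version lives, assuming the standard HDFS storage layout and the storage directory from the log (paths differ per deployment):

    # The NameNode records its layout version in the VERSION file of its storage directory.
    # After the 2.7.2 upgrade this reads -63, while the 2.5.0 binaries expect -57, which is
    # why the downgraded NameNode fails with IncorrectVersionException.
    grep layoutVersion /home/maintain/hadoop/data/hdfs-namenode/current/VERSION
    # expected output (based on the log above): layoutVersion=-63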