hadoop-hdfs-user mailing list archives

From Chris Nauroth <cnaur...@hortonworks.com>
Subject Re: hdfs downgrade from 2.7.2 to 2.5.0
Date Fri, 19 Aug 2016 20:28:18 GMT
Hello Jin,

1. I believe there may have been a recent bug around slow DataNode upgrade times, but I can’t
find the relevant JIRA issue right now.  Perhaps someone else on the list remembers.  HDFS-8791
is relevant to performance of the block layout on disk in versions 2.6.0 and later, but I
vaguely recall that there was something separate specifically related to upgrade too.
2. It is possible to downgrade, per the documentation of the HDFS Rolling Upgrade feature
[1].  Downgrading is only possible if the upgrade has not been finalized, though.

[1] http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#Downgrade
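For reference, the downgrade path described in that document looks roughly like the sketch below. Host names and ports are placeholders, and this is only an outline under the assumption of an HA cluster like Jin's; the authoritative step-by-step procedure is in the linked doc.

```shell
# Downgrade the DataNodes first, one (or a small batch) at a time.
# DATANODE_HOST:IPC_PORT is a placeholder for each DataNode's IPC address.
hdfs dfsadmin -shutdownDatanode DATANODE_HOST:IPC_PORT upgrade
# ...then restart that DataNode with the older (2.5.0) release binaries.

# After all DataNodes are downgraded, downgrade the NameNodes:
# shut down the standby, install the older release, and start it with the
# downgrade startup option, then fail over and repeat for the other NameNode.
hdfs namenode -rollingUpgrade downgrade

# Finally, finalize once everything is running the older release:
hdfs dfsadmin -rollingUpgrade finalize
```

Again, this only works while the rolling upgrade is still unfinalized.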

--Chris Nauroth

From: jinxing <jinxing6042@126.com>
Date: Tuesday, August 16, 2016 at 7:44 PM
To: Chris Nauroth <cnauroth@hortonworks.com>, "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: Re: hdfs downgrade from 2.7.2 to 2.5.0

Hi Chris,

It’s great to get your reply. I find I can continue the upgrade for the DataNodes.
Two questions:
1. I always find the DataNode upgrade very slow. Each DataNode in my cluster holds 20 TB of
data, and the upgrade takes approximately 30 minutes per node.
2. Is it currently possible to downgrade the NameNodes and DataNodes in my cluster? If so,
what is the proper procedure?

--Jin

On Aug 17, 2016, at 2:22 AM, Chris Nauroth <cnauroth@hortonworks.com<mailto:cnauroth@hortonworks.com>>
wrote:

Hello,

Running “hdfs dfsadmin -rollingUpgrade finalize” finalized the upgrade.  This is a terminal
state for the upgrade process, so afterwards, it is no longer possible to run “hdfs dfsadmin
-rollingUpgrade downgrade”.

Rolling upgrade supports upgrading individual daemons independent of other daemons (e.g. just
the DataNodes).  If you want to proceed with upgrading your 2.5.0 DataNodes to 2.7.2, then
I expect you can start a new rolling upgrade and proceed with the upgrade process on just
the subset of DataNodes still running 2.5.0.
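A rough sketch of what resuming that upgrade could look like, assuming the standard 2.7.2 rolling-upgrade commands (addresses are placeholders, and the "Proceed with rolling upgrade" message is what the docs say to wait for):

```shell
# Start a new rolling upgrade and wait until the rollback image is ready.
hdfs dfsadmin -rollingUpgrade prepare
hdfs dfsadmin -rollingUpgrade query   # repeat until "Proceed with rolling upgrade" appears

# For each DataNode still running 2.5.0 (DATANODE_HOST:IPC_PORT is a placeholder):
hdfs dfsadmin -shutdownDatanode DATANODE_HOST:IPC_PORT upgrade
hdfs dfsadmin -getDatanodeInfo DATANODE_HOST:IPC_PORT   # errors once the node is down
# ...restart that DataNode with the 2.7.2 binaries, then move to the next one.

# Finalize only when all nodes are upgraded and you are sure you will not downgrade.
hdfs dfsadmin -rollingUpgrade finalize
```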
--Chris Nauroth

From: jinxing <jinxing6042@126.com<mailto:jinxing6042@126.com>>
Date: Tuesday, August 16, 2016 at 6:02 AM
To: "user@hadoop.apache.org<mailto:user@hadoop.apache.org>" <user@hadoop.apache.org<mailto:user@hadoop.apache.org>>
Subject: hdfs downgrade from 2.7.2 to 2.5.0

Hello, it’s great to join this mailing list.

Can I ask a question?

Is it possible to downgrade a cluster?

I have already upgraded my cluster’s NameNodes (with one standby for HA) and several DataNodes
from 2.5.0, following https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#Downgrade_and_Rollback;

I took the following steps:
1. hdfs dfsadmin -rollingUpgrade prepare;
2. hdfs dfsadmin -rollingUpgrade query;
3. hdfs dfsadmin -shutdownDatanode <host:port> upgrade;
4. restart and upgrade the DataNode;

However, I terminated the upgrade by mistake with the command "hdfs dfsadmin -rollingUpgrade finalize".

Currently, I have two 2.7.2 NameNodes, three 2.7.2 DataNodes, and 63 2.5.0 DataNodes. Now
I want to downgrade the NameNodes and DataNodes from 2.7.2 back to 2.5.0.

But when I try to downgrade a NameNode and restart it with “-rollingUpgrade downgrade”, the NameNode
cannot start, and I get the following exception:
2016-08-16 20:37:08,642 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered
exception loading fsimage
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage
directory /home/maintain/hadoop/data/hdfs-namenode. Reported: -63. Expecting = -57.
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setLayoutVersion(StorageInfo.java:178)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setFieldsFromProperties(StorageInfo.java:131)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:608)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.readProperties(StorageInfo.java:228)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:323)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:955)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
2016-08-16 20:37:08,645 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@dx-pipe-sata61-pm:50070
2016-08-16 20:37:08,745 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode
metrics system...
2016-08-16 20:37:08,746 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
system stopped.
2016-08-16 20:37:08,746 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
system shutdown complete.
2016-08-16 20:37:08,746 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in
namenode join
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage
directory /home/maintain/hadoop/data/hdfs-namenode. Reported: -63. Expecting = -57.
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setLayoutVersion(StorageInfo.java:178)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setFieldsFromProperties(StorageInfo.java:131)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:608)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.readProperties(StorageInfo.java:228)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:323)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:955)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)

It would be great if someone could help.

