From: Nirmal Kumar
To: user@hadoop.apache.org
Subject: RE: Any reference for upgrade hadoop from 1.x to 2.2
Date: Fri, 6 Dec 2013 06:11:43 +0000
Thanks Sandy for the useful info. Is there any open JIRA issue for that?

-Nirmal

From: Sandy Ryza [mailto:sandy.ryza@cloudera.com]
Sent: Thursday, December 05, 2013 10:38 PM
To: user@hadoop.apache.org
Subject: Re: Any reference for upgrade hadoop from 1.x to 2.2

Unfortunately there is no way to see MR1 jobs in the MR2 job history.

-Sandy

On Thu, Dec 5, 2013 at 3:47 AM, Nirmal Kumar wrote:

Hi Adam,

Apache Hadoop-2.0.6-alpha has the following issue, which was fixed in 2.1.0-beta (Hadoop HDFS):
HDFS-4917: Start-dfs.sh cannot pass the parameters correctly
https://issues.apache.org/jira/browse/HDFS-4917?jql=project%20%3D%20HDFS%20AND%20text%20~%20upgrade

I set up Apache Hadoop 2.1.0-beta and was then able to run the commands:

./hadoop-daemon.sh start namenode -upgrade
./hdfs dfsadmin -finalizeUpgrade

2013-12-05 21:16:44,412 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = cloud (auth:SIMPLE)
2013-12-05 21:16:44,412 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2013-12-05 21:16:44,412 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2013-12-05 21:16:44,412 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2013-12-05 21:16:44,426 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2013-12-05 21:16:44,908 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2013-12-05 21:16:44,908 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2013-12-05 21:16:44,908 INFO org.apache.hadoop.util.GSet: 1.0% max memory = 889 MB
2013-12-05 21:16:44,908 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2013-12-05 21:16:44,923 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-12-05 21:16:44,930 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2013-12-05 21:16:44,930 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-12-05 21:16:44,930 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2013-12-05 21:16:44,931 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2013-12-05 21:16:44,932 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2013-12-05 21:16:44,947 INFO org.apache.hadoop.util.GSet: Computing capacity for map Namenode Retry Cache
2013-12-05 21:16:44,947 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2013-12-05 21:16:44,947 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory = 889 MB
2013-12-05 21:16:44,947 INFO org.apache.hadoop.util.GSet: capacity = 2^16 = 65536 entries
2013-12-05 21:16:45,038 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/cloud/hadoop_migration/hadoop-data/name/in_use.lock acquired by nodename 8695@Impetus-942.impetus.co.in
2013-12-05 21:16:45,128 INFO org.apache.hadoop.hdfs.server.common.Storage: Using clusterid: CID-4ece2cb2-6159-4836-a428-4f0e324dab13
2013-12-05 21:16:45,145 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /home/cloud/hadoop_migration/hadoop-data/name/current
2013-12-05 21:16:45,166 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Upgrading to sequential block IDs. Generation stamp for new blocks set to 1099511628823
2013-12-05 21:16:45,169 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loading image file /home/cloud/hadoop_migration/hadoop-data/name/current/fsimage using no compression
2013-12-05 21:16:45,169 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files = 45
2013-12-05 21:16:45,203 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files under construction = 0
2013-12-05 21:16:45,204 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Image file /home/cloud/hadoop_migration/hadoop-data/name/current/fsimage of size 4975 bytes loaded in 0 seconds.
2013-12-05 21:16:45,204 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /home/cloud/hadoop_migration/hadoop-data/name/current/fsimage
2013-12-05 21:16:45,211 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading /home/cloud/hadoop_migration/hadoop-data/name/current/edits expecting start txid #1
2013-12-05 21:16:45,211 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file /home/cloud/hadoop_migration/hadoop-data/name/current/edits
2013-12-05 21:16:45,232 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /home/cloud/hadoop_migration/hadoop-data/name/current/edits of size 4 edits # 0 loaded in 0 seconds
2013-12-05 21:16:45,233 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Starting upgrade of image directory /home/cloud/hadoop_migration/hadoop-data/name. old LV = -41; old CTime = 0. new LV = -47; new CTime = 1386258405233
2013-12-05 21:16:45,241 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Saving image file /home/cloud/hadoop_migration/hadoop-data/name/current/fsimage.ckpt_0000000000000000000 using no compression
2013-12-05 21:16:45,321 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Image file /home/cloud/hadoop_migration/hadoop-data/name/current/fsimage.ckpt_0000000000000000000 of size 4923 bytes saved in 0 seconds.
2013-12-05 21:16:45,365 INFO org.apache.hadoop.hdfs.server.namenode.FSImageTransactionalStorageInspector: No version file in /home/cloud/hadoop_migration/hadoop-data/name
2013-12-05 21:16:45,421 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Upgrade of /home/cloud/hadoop_migration/hadoop-data/name is complete.
2013-12-05 21:16:45,422 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
2013-12-05 21:16:45,741 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-12-05 21:16:45,741 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 791 msecs
2013-12-05 21:16:46,079 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 54310
2013-12-05 21:16:46,113 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2013-12-05 21:16:46,126 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2013-12-05 21:16:46,126 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2013-12-05 21:16:46,127 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON. The reported blocks 0 needs additional 15 blocks to reach the threshold 0.9990 of total blocks 15. Safe mode will be turned off automatically
2013-12-05 21:16:46,167 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-12-05 21:16:46,176 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 54310: starting
2013-12-05 21:16:46,177 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:54310
2013-12-05 21:16:46,177 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2013-12-05 21:23:08,461 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Finalizing upgrade for storage directory /home/cloud/hadoop_migration/hadoop-data/name. cur LV = -47; cur CTime = 1386258405233
2013-12-05 21:23:08,461 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Finalize upgrade for /home/cloud/hadoop_migration/hadoop-data/name is complete.
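[Editor's sketch] The upgrade-then-finalize flow in the log above can be summarized as a dry run. This is only a sketch under this thread's assumptions (daemon scripts invoked from the Hadoop bin/sbin directories); the hypothetical `run` wrapper just echoes each command so the sketch is safe to execute anywhere, and `-rollback` is the documented alternative when verification fails:

```shell
# Dry-run sketch of the HDFS upgrade flow shown in the log above.
# `run` only echoes; on a real cluster you would execute "$@" instead.
run() { echo "+ $*"; }

run ./hadoop-daemon.sh start namenode -upgrade   # converts the old layout (LV -41 -> -47 above)
# ... verify files and jobs before committing ...
run ./hdfs dfsadmin -finalizeUpgrade             # permanent: discards the pre-upgrade checkpoint
# To abandon the upgrade instead of finalizing it:
run ./hadoop-daemon.sh start namenode -rollback
```

Finalizing is irreversible, so the choice between `-finalizeUpgrade` and `-rollback` should come only after verification.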
I can now see the existing files in HDFS that were used by the earlier MapReduce jobs (input/output files) under Apache Hadoop-1.2.0.

However, I cannot see the history of those MapReduce jobs through the MapReduce JobHistory Server. Is there some way in which I can see the history of those jobs as well?

Thanks,
-Nirmal

From: Nirmal Kumar
Sent: Wednesday, December 04, 2013 7:41 PM
To: user@hadoop.apache.org
Cc: rdyer@iastate.edu
Subject: RE: Any reference for upgrade hadoop from 1.x to 2.2

Thanks Adam,

I am upgrading from *Apache Hadoop-1.2.0* to *Apache Hadoop-2.0.6-alpha*.

I am getting the same exception when using the command: ./hadoop-daemon.sh start namenode -upgrade

2013-12-05 00:56:42,312 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = cloud (auth:SIMPLE)
2013-12-05 00:56:42,312 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2013-12-05 00:56:42,312 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2013-12-05 00:56:42,312 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2013-12-05 00:56:42,317 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2013-12-05 00:56:42,784 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-12-05 00:56:42,789 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2013-12-05 00:56:42,789 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-12-05 00:56:42,789 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2013-12-05 00:56:42,840 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/cloud/hadoop_migration/hadoop-data/name/in_use.lock acquired by nodename 31742@Impetus-942.impetus.co.in
2013-12-05 00:56:42,911 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-12-05 00:56:42,912 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-12-05 00:56:42,912 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-12-05 00:56:42,913 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /home/cloud/hadoop_migration/hadoop-data/name. Reported: -41. Expecting = -40.
        at org.apache.hadoop.hdfs.server.common.Storage.setLayoutVersion(Storage.java:1079)
        at org.apache.hadoop.hdfs.server.common.Storage.setFieldsFromProperties(Storage.java:887)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:583)
        at org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:918)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:304)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:200)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:627)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:469)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:437)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:609)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:594)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1235)
2013-12-05 00:56:42,918 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-12-05 00:56:42,922 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Impetus-942.impetus.co.in/192.168.41.106
************************************************************/

I also referred to https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/Federation.html:

"Upgrading from older release to 0.23 and configuring federation

Older releases supported a single Namenode. Here are the steps to enable federation:

Step 1: Upgrade the cluster to newer release. During upgrade you can provide a ClusterID as follows:

> $HADOOP_PREFIX_HOME/bin/hdfs start namenode --config $HADOOP_CONF_DIR -upgrade -clusterId

If ClusterID is not provided, it is auto generated."

But I am getting:

[cloud@Impetus-942 hadoop-2.0.6-alpha]$ bin/hdfs start namenode --config /home/cloud/hadoop_migration/hadoop-2.0.6-alpha/etc/hadoop -upgrade -clusterId testclusterid1111
Error: Could not find or load main class start
[cloud@Impetus-942 hadoop-2.0.6-alpha]$ bin/hdfs start namenode --config /home/cloud/hadoop_migration/hadoop-2.0.6-alpha/etc/hadoop -upgrade -clusterId
Error: Could not find or load main class start
[cloud@Impetus-942 hadoop-2.0.6-alpha]$

I have the following environment variables set:

YARN_CLASSPATH=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha/bin/yarn
HADOOP_HOME=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
HADOOP_PREFIX=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
YARN_HOME=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
HADOOP_HDFS_HOME=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
HADOOP_COMMON_HOME=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
HADOOP_YARN_HOME=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45
HADOOP_CONF_DIR=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha/etc/hadoop
YARN_CONF_DIR=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha/etc/hadoop
HADOOP_MAPRED_HOME=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/cloud/bin:/usr/lib/jvm/jdk1.7.0_45/bin:/home/cloud/hadoop_migration/hadoop-2.0.6-alpha/bin:/home/cloud/Manisha/maven/apache-maven-3.1.1/bin

Regards,
-Nirmal

From: Adam Kawa [mailto:kawa.adam@gmail.com]
Sent: Tuesday, December 03, 2013 11:58 PM
To: user@hadoop.apache.org
Cc: rdyer@iastate.edu
Subject: Re: Any reference for upgrade hadoop from 1.x to 2.2

@Nirmal,

And later, you need to make a decision to finalize the upgrade or roll back.

2013/12/3 Adam Kawa wrote:

@Nirmal,

You need to run the NameNode with the upgrade option, e.g.

$ /usr/lib/hadoop/sbin/hadoop-daemon.sh start namenode -upgrade

2013/12/3 Nirmal Kumar wrote:

Hi All,

I am doing a test migration from Apache Hadoop-1.2.0 to Apache Hadoop-2.0.6-alpha on a single-node environment.

I did the following:

* Installed Apache Hadoop-1.2.0
* Ran word count sample MR jobs. The jobs executed successfully.
* Stopped all the services in Apache Hadoop-1.2.0 and was then able to start all services again.
* The previously submitted jobs are visible after the stop/start in the JobTracker URL.

Next I installed Apache Hadoop-2.0.6-alpha alongside. I used the SAME data directory locations that were in Apache Hadoop-1.2.0 in the configuration files, namely:

core-site.xml:
  hadoop.tmp.dir = /home/cloud/hadoop_migration/hadoop-data/tempdir

hdfs-site.xml:
  dfs.data.dir = /home/cloud/hadoop_migration/hadoop-data/data
  dfs.name.dir = /home/cloud/hadoop_migration/hadoop-data/name

I am UNABLE to start the NameNode from the Apache Hadoop-2.0.6-alpha installation. I am getting the error:

2013-12-03 18:28:23,941 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-12-03 18:28:24,080 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-12-03 18:28:24,081 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-12-03 18:28:24,576 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2013-12-03 18:28:24,576 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2013-12-03 18:28:24,744 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-12-03 18:28:24,749 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2013-12-03 18:28:24,762 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2013-12-03 18:28:24,762 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2013-12-03 18:28:24,762 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2013-12-03 18:28:24,762 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2013-12-03 18:28:24,763 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2013-12-03 18:28:24,763 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2013-12-03 18:28:24,763 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2013-12-03 18:28:24,763 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2013-12-03 18:28:24,771 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = cloud (auth:SIMPLE)
2013-12-03 18:28:24,771 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2013-12-03 18:28:24,771 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2013-12-03 18:28:24,771 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2013-12-03 18:28:24,776 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2013-12-03 18:28:25,230 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-12-03 18:28:25,243 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2013-12-03 18:28:25,244 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-12-03 18:28:25,244 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2013-12-03 18:28:25,288 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/cloud/hadoop_migration/hadoop-data/name/in_use.lock acquired by nodename 21371@Impetus-942.impetus.co.in
2013-12-03 18:28:25,462 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-12-03 18:28:25,462 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-12-03 18:28:25,473 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-12-03 18:28:25,474 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /home/cloud/hadoop_migration/hadoop-data/name. Reported: -41. Expecting = -40.
        at org.apache.hadoop.hdfs.server.common.Storage.setLayoutVersion(Storage.java:1079)
        at org.apache.hadoop.hdfs.server.common.Storage.setFieldsFromProperties(Storage.java:887)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:583)
        at org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:918)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:304)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:200)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:627)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:469)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:437)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:609)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:594)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1235)
2013-12-03 18:28:25,479 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-12-03 18:28:25,481 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Impetus-942.impetus.co.in/192.168.41.106
************************************************************/

Independently, both installations (Apache Hadoop-1.2.0 and Apache Hadoop-2.0.6-alpha) are working for me; I am able to run MR jobs on both installations independently.

But I aim to migrate the data and submitted jobs from Apache Hadoop-1.2.0 to Apache Hadoop-2.0.6-alpha.

Are there any HDFS compatibility issues between Apache Hadoop-1.2.0 and Apache Hadoop-2.0.6-alpha?
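[Editor's sketch] The IncorrectVersionException above is driven by the layoutVersion recorded in the storage directory's VERSION file: the directory written by 1.2.0 reports -41, 2.0.6-alpha expects -40, while 2.1.0-beta's upgrade path takes -41 to -47 (per the earlier log in this thread). You can inspect the on-disk layout version before attempting an upgrade. The VERSION contents below are fabricated sample values so the snippet is self-contained, not values taken from this cluster:

```shell
# Sketch: read the layoutVersion the NameNode checks at startup.
# A sample VERSION file is fabricated here for illustration; on a real
# cluster, read $dfs.name.dir/current/VERSION instead.
NAME_DIR=$(mktemp -d)
mkdir -p "$NAME_DIR/current"
cat > "$NAME_DIR/current/VERSION" <<'EOF'
storageType=NAME_NODE
layoutVersion=-41
EOF

# -41 is what the 1.2.0-written directory reports in this thread;
# 2.0.6-alpha expects -40 and aborts, hence the exception above.
lv=$(grep '^layoutVersion=' "$NAME_DIR/current/VERSION" | cut -d= -f2)
echo "on-disk layoutVersion: $lv"
```

Checking this value first tells you whether a given 2.x release can read (and upgrade) the metadata at all.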
Thanks,
-Nirmal

From: Nirmal Kumar
Sent: Wednesday, November 27, 2013 2:56 PM
To: user@hadoop.apache.org; rdyer@iastate.edu
Subject: RE: Any reference for upgrade hadoop from 1.x to 2.2

Hello Sandy,

The post was useful and gave an insight into the migration.

I am doing a test migration from Apache Hadoop-1.2.0 to Apache Hadoop-2.0.6-alpha on a single-node environment. I have Apache Hadoop-1.2.0 up and running.

Can you please let me know the steps one should follow for the migration? I am thinking of doing something like:

* Install Apache Hadoop-2.0.6-alpha alongside the existing Apache Hadoop-1.2.0
* Use the same HDFS locations
* Change the various required configuration files
* Stop Apache Hadoop-1.2.0 and start Apache Hadoop-2.0.6-alpha
* Verify all the services are running
* Test via MapReduce (test MRv1 and MRv2 examples)
* Check the Web UI console and verify the MRv1 and MRv2 jobs

These steps need to be performed on all the nodes in a cluster environment.

The translation table mapping old configuration to new would definitely be *very* useful.

The existing Hadoop ecosystem components also need to be considered:

* Hive scripts
* Pig scripts
* Oozie workflows

Their compatibility and version support would need to be checked. I am also thinking of risks, such as data loss, that one should keep in mind.

Also I found: http://strataconf.com/strata2014/public/schedule/detail/32247

Thanks,
-Nirmal

From: Robert Dyer [mailto:psybers@gmail.com]
Sent: Friday, November 22, 2013 9:08 PM
To: user@hadoop.apache.org
Subject: Re: Any reference for upgrade hadoop from 1.x to 2.2

Thanks Sandy! These seem helpful!

"MapReduce cluster configuration options have been split into YARN configuration options, which go in yarn-site.xml; and MapReduce configuration options, which go in mapred-site.xml. Many have been given new names to reflect the shift. ... We'll follow up with a full translation table in a future post."
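[Editor's sketch] The migration checklist quoted above can be condensed into a shell skeleton. Everything here is hedged: the hypothetical `step` function only prints each action for review, the paths mirror the ones quoted in this thread, and `HADOOP2_HOME` is an assumed variable name, not something the thread defines:

```shell
# Dry-run skeleton of the single-node 1.2.0 -> 2.0.6-alpha migration
# checklist from the message above. Nothing is executed; each step is
# echoed so the plan can be reviewed before running it for real.
HADOOP2_HOME=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha  # assumed name

step() { echo "STEP: $*"; }

step "stop all Hadoop-1.2.0 daemons"
step "point 2.x core-site.xml/hdfs-site.xml at the same dfs.name.dir and dfs.data.dir"
step "start the NameNode with: $HADOOP2_HOME/sbin/hadoop-daemon.sh start namenode -upgrade"
step "start the remaining daemons and verify all services"
step "run test MR jobs (MRv1 and MRv2 examples) and check the web UIs"
step "finalize only after verification: $HADOOP2_HOME/bin/hdfs dfsadmin -finalizeUpgrade"
```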
This type of translation table mapping old configuration to new would be *very* useful!

- Robert

On Fri, Nov 22, 2013 at 2:15 AM, Sandy Ryza wrote:

For MapReduce and YARN, we recently published a couple of blog posts on migrating:

http://blog.cloudera.com/blog/2013/11/migrating-to-mapreduce-2-on-yarn-for-users/
http://blog.cloudera.com/blog/2013/11/migrating-to-mapreduce-2-on-yarn-for-operators/

hope that helps,
Sandy

On Fri, Nov 22, 2013 at 3:03 AM, Nirmal Kumar wrote:

Hi All,

I am also looking into migrating/upgrading from Apache Hadoop 1.x to Apache Hadoop 2.x, but I didn't find any docs/guides/blogs for it.

There are, however, guides/docs for the CDH and HDP migrations/upgrades from Hadoop 1.x to Hadoop 2.x. Would referring to those be of some use? I am looking for similar guides/docs for Apache Hadoop 1.x to Apache Hadoop 2.x.

I found something on SlideShare, though I am not sure how useful it will be; I still need to verify it:

http://www.slideshare.net/mikejf12/an-example-apache-hadoop-yarn-upgrade

Any suggestions/comments will be of great help.

Thanks,
-Nirmal

From: Jilal Oussama [mailto:jilal.oussama@gmail.com]
Sent: Friday, November 08, 2013 9:13 PM
To: user@hadoop.apache.org
Subject: Re: Any reference for upgrade hadoop from 1.x to 2.2

I am looking for the same thing, if anyone can point us in a good direction please. Thank you. (Currently running Hadoop 1.2.1)

2013/11/1 YouPeng Yang wrote:

Hi users,

Are there any reference docs that introduce how to upgrade Hadoop from 1.x to 2.2?

Regards

________________________________
NOTE: This message may contain information that is confidential, proprietary, privileged or otherwise protected by law. The message is intended solely for the named addressee. If received in error, please destroy and notify the sender. Any use of this email is prohibited when received in error.
Impetus does not represent, warrant and/or guarantee, that the integrity of this communication has been maintained nor that the communication is free of errors, virus, interception or interference.

Thanks Sandy for the us= eful info.

 

Is there any open JIRA = issue for that?

 

-Nirmal

 

From: Sand= y Ryza [mailto:sandy.ryza@cloudera.com]
Sent: Thursday, December 05, 2013 10:38 PM
To: user@hadoop.apache.org
Subject: Re: Any reference for upgrade hadoop from 1.x to 2.2
=

 

Unfortunately there is no way to see MR1 jobs in the= MR2 job history.

 

-Sandy

 

On Thu, Dec 5, 2013 at 3:47 AM, Nirmal Kumar <nirmal.kumar@i= mpetus.co.in> wrote:

Hi Adam,

 

Apache Hadoop-2.0.6-alph= a has the following issue.

 

This issue got fixed in 2.1.0-beta

 

1.       Hadoop HDFS

2.       HDFS-4917

Start-dfs.s= h cannot pass the parameters correctly

 

https://issues.apache.org/jira/browse/HDFS-4917?jql=3Dproje= ct%20%3D%20HDFS%20AND%20text%20~%20upgrade

 

I setup Apache Hadoop 2.1.0-beta and then were able to run the commands :

./hadoop-dae= mon.sh start namenode -upgrade

./hdfs dfsad= min -finalizeUpgrade

 

2013-12-05 21:16:44,412 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = cloud (auth:SIMPLE)
2013-12-05 21:16:44,412 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2013-12-05 21:16:44,412 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2013-12-05 21:16:44,412 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2013-12-05 21:16:44,426 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2013-12-05 21:16:44,908 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2013-12-05 21:16:44,908 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2013-12-05 21:16:44,908 INFO org.apache.hadoop.util.GSet: 1.0% max memory = 889 MB
2013-12-05 21:16:44,908 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2013-12-05 21:16:44,923 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-12-05 21:16:44,930 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2013-12-05 21:16:44,930 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-12-05 21:16:44,930 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2013-12-05 21:16:44,931 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2013-12-05 21:16:44,932 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2013-12-05 21:16:44,947 INFO org.apache.hadoop.util.GSet: Computing capacity for map Namenode Retry Cache
2013-12-05 21:16:44,947 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2013-12-05 21:16:44,947 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory = 889 MB
2013-12-05 21:16:44,947 INFO org.apache.hadoop.util.GSet: capacity      = 2^16 = 65536 entries
2013-12-05 21:16:45,038 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/cloud/hadoop_migration/hadoop-data/name/in_use.lock acquired by nodename 8695@Impetus-942.impetus.co.in
2013-12-05 21:16:45,128 INFO org.apache.hadoop.hdfs.server.common.Storage: Using clusterid: CID-4ece2cb2-6159-4836-a428-4f0e324dab13
2013-12-05 21:16:45,145 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /home/cloud/hadoop_migration/hadoop-data/name/current
2013-12-05 21:16:45,166 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Upgrading to sequential block IDs. Generation stamp for new blocks set to 1099511628823
2013-12-05 21:16:45,169 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loading image file /home/cloud/hadoop_migration/hadoop-data/name/current/fsimage using no compression
2013-12-05 21:16:45,169 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files = 45
2013-12-05 21:16:45,203 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files under construction = 0
2013-12-05 21:16:45,204 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Image file /home/cloud/hadoop_migration/hadoop-data/name/current/fsimage of size 4975 bytes loaded in 0 seconds.
2013-12-05 21:16:45,204 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /home/cloud/hadoop_migration/hadoop-data/name/current/fsimage
2013-12-05 21:16:45,211 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading /home/cloud/hadoop_migration/hadoop-data/name/current/edits expecting start txid #1
2013-12-05 21:16:45,211 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file /home/cloud/hadoop_migration/hadoop-data/name/current/edits
2013-12-05 21:16:45,232 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /home/cloud/hadoop_migration/hadoop-data/name/current/edits of size 4 edits # 0 loaded in 0 seconds
2013-12-05 21:16:45,233 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Starting upgrade of image directory /home/cloud/hadoop_migration/hadoop-data/name.
   old LV = -41; old CTime = 0.
   new LV = -47; new CTime = 1386258405233
2013-12-05 21:16:45,241 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Saving image file /home/cloud/hadoop_migration/hadoop-data/name/current/fsimage.ckpt_0000000000000000000 using no compression
2013-12-05 21:16:45,321 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Image file /home/cloud/hadoop_migration/hadoop-data/name/current/fsimage.ckpt_0000000000000000000 of size 4923 bytes saved in 0 seconds.
2013-12-05 21:16:45,365 INFO org.apache.hadoop.hdfs.server.namenode.FSImageTransactionalStorageInspector: No version file in /home/cloud/hadoop_migration/hadoop-data/name
2013-12-05 21:16:45,421 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Upgrade of /home/cloud/hadoop_migration/hadoop-data/name is complete.
2013-12-05 21:16:45,422 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
2013-12-05 21:16:45,741 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-12-05 21:16:45,741 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 791 msecs
2013-12-05 21:16:46,079 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 54310
2013-12-05 21:16:46,113 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2013-12-05 21:16:46,126 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2013-12-05 21:16:46,126 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2013-12-05 21:16:46,127 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON.
The reported blocks 0 needs additional 15 blocks to reach the threshold 0.9990 of total blocks 15.
Safe mode will be turned off automatically
2013-12-05 21:16:46,167 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-12-05 21:16:46,176 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 54310: starting
2013-12-05 21:16:46,177 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:54310
2013-12-05 21:16:46,177 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2013-12-05 21:23:08,461 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Finalizing upgrade for storage directory /home/cloud/hadoop_migration/hadoop-data/name.
   cur LV = -47; cur CTime = 1386258405233
2013-12-05 21:23:08,461 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Finalize upgrade for /home/cloud/hadoop_migration/hadoop-data/name is complete.

 

I can now see the existing files in HDFS that were used by the earlier MapReduce jobs (input\output files) under Apache Hadoop-1.2.0.

However, I cannot see the history of those MapReduce jobs through the MapReduce JobHistory Server.

Is there some way in which I can see the history of those MapReduce jobs as well?

 

Thanks,

-Nirmal

 

 

Thanks Adam,

 

I am upgrading from *Apache Hadoop-1.2.0* to *Apache Hadoop-2.0.6-alpha*.

 

I am getting the same exception when using the command: ./hadoop-daemon.sh start namenode -upgrade

 

2013-12-05 00:56:42,312 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = cloud (auth:SIMPLE)
2013-12-05 00:56:42,312 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2013-12-05 00:56:42,312 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2013-12-05 00:56:42,312 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2013-12-05 00:56:42,317 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2013-12-05 00:56:42,784 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-12-05 00:56:42,789 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2013-12-05 00:56:42,789 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-12-05 00:56:42,789 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2013-12-05 00:56:42,840 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/cloud/hadoop_migration/hadoop-data/name/in_use.lock acquired by nodename 31742@Impetus-942.impetus.co.in
2013-12-05 00:56:42,911 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-12-05 00:56:42,912 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-12-05 00:56:42,912 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-12-05 00:56:42,913 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /home/cloud/hadoop_migration/hadoop-data/name. Reported: -41. Expecting = -40.
        at org.apache.hadoop.hdfs.server.common.Storage.setLayoutVersion(Storage.java:1079)
        at org.apache.hadoop.hdfs.server.common.Storage.setFieldsFromProperties(Storage.java:887)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:583)
        at org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:918)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:304)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:200)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:627)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:469)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:437)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:609)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:594)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1235)
2013-12-05 00:56:42,918 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-12-05 00:56:42,922 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Impetus-942.impetus.co.in/192.168.41.106
************************************************************/
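The failure above comes down to a layout-version mismatch: the storage directory written by Hadoop 1.2.0 reports layout version -41, while 2.0.6-alpha expects -40 (2.1.0-beta, whose own layout version is -47, can read it). This kind of mismatch can be spotted before attempting an upgrade by inspecting the `VERSION` file in the NameNode storage directory. The sketch below is a hypothetical pre-flight check, not Hadoop's actual compatibility logic; `read_layout_version` and `can_read_layout` are made-up helper names, and the simplification is that HDFS layout versions are negative and decrease over time, so a release can read layouts numerically greater than or equal to its own (real releases also enforce a minimum supported version):

```python
def read_layout_version(version_file):
    """Parse the Java-properties-style VERSION file and return layoutVersion."""
    props = {}
    with open(version_file) as f:
        for line in f:
            line = line.strip()
            # Skip comments and blank lines; keep key=value pairs.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    return int(props["layoutVersion"])


def can_read_layout(reported, software_layout_version):
    # Layout versions are negative and decrease as the format evolves.
    # Software can read a layout that is older than or equal to its own
    # (numerically >=); a numerically smaller reported value means the
    # on-disk format is newer than the software understands.
    return reported >= software_layout_version
```

With the values from this thread, `can_read_layout(-41, -40)` is false (hence the IncorrectVersionException from 2.0.6-alpha) while `can_read_layout(-41, -47)` is true (hence 2.1.0-beta upgrading the directory).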

 

I also referred to https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/Federation.html:

Upgrading from older release to 0.23 and configuring federation

Older releases supported a single Namenode. Here are the steps to enable federation:

Step 1: Upgrade the cluster to the newer release. During upgrade you can provide a ClusterID as follows:

> $HADOOP_PREFIX_HOME/bin/hdfs start namenode --config $HADOOP_CONF_DIR -upgrade -clusterId <cluster_ID>

If ClusterID is not provided, it is auto generated.

But I am getting:

[cloud@Impetus-942 hadoop-2.0.6-alpha]$ bin/hdfs start namenode --config /home/cloud/hadoop_migration/hadoop-2.0.6-alpha/etc/hadoop -upgrade -clusterId testclusterid1111
Error: Could not find or load main class start
[cloud@Impetus-942 hadoop-2.0.6-alpha]$ bin/hdfs start namenode --config /home/cloud/hadoop_migration/hadoop-2.0.6-alpha/etc/hadoop -upgrade -clusterId
Error: Could not find or load main class start
[cloud@Impetus-942 hadoop-2.0.6-alpha]$

 

I have the following environment variables set:

 

YARN_CLASSPATH=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha/bin/yarn
HADOOP_HOME=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
HADOOP_PREFIX=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
YARN_HOME=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
HADOOP_HDFS_HOME=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
HADOOP_COMMON_HOME=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
HADOOP_YARN_HOME=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45
HADOOP_CONF_DIR=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha/etc/hadoop
YARN_CONF_DIR=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha/etc/hadoop
HADOOP_MAPRED_HOME=/home/cloud/hadoop_migration/hadoop-2.0.6-alpha
PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/cloud/bin:/usr/lib/jvm/jdk1.7.0_45/bin:/home/cloud/hadoop_migration/hadoop-2.0.6-alpha/bin:/home/cloud/Manisha/maven/apache-maven-3.1.1/bin

 

Regards,

-Nirmal

 

From: Adam Kawa [mailto:kawa.adam@gmail.com]
Sent: Tuesday, December 03, 2013 11:58 PM
To: user@hadoop.apache.org
Cc: rdyer@iastate.edu
Subject: Re: Any reference for upgrade hadoop from 1.x to 2.2

 

@Nirmal,

 

And later, you need to make a decision to finalize the upgrade or roll back.

 

2013/12/3 Adam Kawa <kawa.adam@gmail.com>

@Nirmal,

 

You need to run the NameNode with the -upgrade option, e.g.

$ /usr/lib/hadoop/sbin/hadoop-daemon.sh start namenode -upgrade

 

2013/12/3 Nirmal Kumar <nirmal.kumar@impetus.co.in>

Hi All,

 

I am doing a test migration from Apache Hadoop-1.2.0 to Apache Hadoop-2.0.6-alpha in a single-node environment.

 

I did the following:

· Installed Apache Hadoop-1.2.0
· Ran word count sample MR jobs. The jobs executed successfully.
· Stopped all the services in Apache Hadoop-1.2.0 and then was able to start all services again.
· The previously submitted jobs are visible after the stop/start in the JobTracker URL.

 

Next I installed Apache Hadoop-2.0.6-alpha alongside.

I used the SAME data directory locations that were in Apache Hadoop-1.2.0 in the configuration files, namely:

core-site.xml
----------------
hadoop.tmp.dir    /home/cloud/hadoop_migration/hadoop-data/tempdir

hdfs-site.xml
-----------------
dfs.data.dir      /home/cloud/hadoop_migration/hadoop-data/data
dfs.name.dir      /home/cloud/hadoop_migration/hadoop-data/name

 

I am UNABLE to start the NameNode from the Apache Hadoop-2.0.6-alpha installation. I am getting the error:

 

2013-12-03 18:28:23,941 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-12-03 18:28:24,080 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-12-03 18:28:24,081 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-12-03 18:28:24,576 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2013-12-03 18:28:24,576 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2013-12-03 18:28:24,744 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-12-03 18:28:24,749 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2013-12-03 18:28:24,762 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2013-12-03 18:28:24,762 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 1
2013-12-03 18:28:24,762 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2013-12-03 18:28:24,762 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2013-12-03 18:28:24,763 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2013-12-03 18:28:24,763 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2013-12-03 18:28:24,763 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2013-12-03 18:28:24,763 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2013-12-03 18:28:24,771 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = cloud (auth:SIMPLE)
2013-12-03 18:28:24,771 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2013-12-03 18:28:24,771 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2013-12-03 18:28:24,771 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2013-12-03 18:28:24,776 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2013-12-03 18:28:25,230 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-12-03 18:28:25,243 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2013-12-03 18:28:25,244 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-12-03 18:28:25,244 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2013-12-03 18:28:25,288 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/cloud/hadoop_migration/hadoop-data/name/in_use.lock acquired by nodename 21371@Impetus-942.impetus.co.in
2013-12-03 18:28:25,462 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-12-03 18:28:25,462 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-12-03 18:28:25,473 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-12-03 18:28:25,474 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /home/cloud/hadoop_migration/hadoop-data/name. Reported: -41. Expecting = -40.
        at org.apache.hadoop.hdfs.server.common.Storage.setLayoutVersion(Storage.java:1079)
        at org.apache.hadoop.hdfs.server.common.Storage.setFieldsFromProperties(Storage.java:887)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:583)
        at org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:918)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:304)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:200)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:627)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:469)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:437)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:609)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:594)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1235)
2013-12-03 18:28:25,479 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-12-03 18:28:25,481 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Impetus-942.impetus.co.in/192.168.41.106
************************************************************/

 

Independently, both installations (Apache Hadoop-1.2.0 and Apache Hadoop-2.0.6-alpha) are working for me; I am able to run MR jobs on each of them.

But I aim to migrate the data and submitted jobs from Apache Hadoop-1.2.0 to Apache Hadoop-2.0.6-alpha.

Are there any HDFS compatibility issues from Apache Hadoop-1.2.0 to Apache Hadoop-2.0.6-alpha?

 

Thanks,

-Nirmal

 

 

Hello Sandy,

 

The post was useful and gave insight into the migration.

 

I am doing a test migration from Apache Hadoop-1.2.0 to Apache Hadoop-2.0.6-alpha in a single-node environment.

I have Apache Hadoop-1.2.0 up and running.

 

Can you please let me know the steps that one should follow for the migration?

I am thinking of doing something like:

· Install Apache Hadoop-2.0.6-alpha alongside the existing Apache Hadoop-1.2.0
· Use the same HDFS locations
· Change the various required configuration files
· Stop Apache Hadoop-1.2.0 and start Apache Hadoop-2.0.6-alpha
· Verify all the services are running
· Test via MapReduce (test MRv1 and MRv2 examples)
· Check the Web UI console and verify the MRv1 and MRv2 jobs

 

These steps need to be performed on all the nodes in a cluster environment.

 

The translation table mapping old configuration to new would definitely be *very* useful.

 

Also, the existing Hadoop ecosystem components need to be considered:

· Hive scripts
· Pig scripts
· Oozie workflows

Their compatibility and version support would need to be checked.

 

I am also thinking of risks, like data loss, that one should keep in mind.

 

Also I found: http://strataconf.com/strata2014/public/schedule/detail/32247

 

Thanks,

-Nirmal

 

From: Robert Dyer [mailto:psybers@gmail.com]
Sent: Friday, November 22, 2013 9:08 PM
To: user@hadoop.apache.org
Subject: Re: Any reference for upgrade hadoop from 1.x to 2.2

 

Thanks Sandy! These seem helpful!

 

"MapReduce cluster configuration options have been split into YARN configuration options, which go in yarn-site.xml; and MapReduce configuration options, which go in mapred-site.xml. Many have been given new names to reflect the shift. ... We'll follow up with a full translation table in a future post."

This type of translation table mapping old configuration to new would be *very* useful!

 

- Robert
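As a rough illustration of what such a translation table could look like, here is a small sketch. The renames shown (fs.default.name → fs.defaultFS, dfs.name.dir → dfs.namenode.name.dir, dfs.data.dir → dfs.datanode.data.dir) are among the properties Hadoop 2.x documents as deprecated; the table and the `translate` helper are illustrative only, not a complete mapping:

```python
# A few Hadoop 1.x -> 2.x property renames; None marks a property with no
# direct 2.x equivalent. This is a sketch, not the full translation table.
HADOOP1_TO_HADOOP2 = {
    "fs.default.name": "fs.defaultFS",
    "dfs.name.dir": "dfs.namenode.name.dir",
    "dfs.data.dir": "dfs.datanode.data.dir",
    "mapred.job.tracker": None,  # JobTracker is replaced by the YARN ResourceManager
}


def translate(old_conf):
    """Map Hadoop 1.x property names to their 2.x equivalents where known."""
    new_conf = {}
    for key, value in old_conf.items():
        new_key = HADOOP1_TO_HADOOP2.get(key, key)  # unknown keys pass through
        if new_key is not None:
            new_conf[new_key] = value
    return new_conf
```

For example, `translate({"dfs.name.dir": "/data/name"})` yields `{"dfs.namenode.name.dir": "/data/name"}`, while a key with no 2.x equivalent is dropped.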

On Fri, Nov 22, 2013 at 2:15 AM, Sandy Ryza <sandy.ryza@cloudera.com> wrote:

 

On Fri, Nov 22, 2013 at 3:03 AM, Nirmal Kumar <nirmal.kumar@impetus.co.in> wrote:

Hi All,

 

I am also looking into migrating\upgrading from Apache Hadoop 1.x to Apache Hadoop 2.x.

I didn't find any doc\guide\blog for the same.

Although there are guides\docs for the CDH and HDP migration\upgrade from Hadoop 1.x to Hadoop 2.x, would referring to those be of some use?

I am looking for similar guides\docs for Apache Hadoop 1.x to Apache Hadoop 2.x.

I found something on slideshare, though; not sure how useful it is going to be. I still need to verify that.

http://www.slideshare.net/mikejf12/an-example-apache-hadoop-yarn-upgrade

Any suggestions\comments will be of great help.

 

Thanks,

-Nirmal

 

From: Jilal Oussama [mailto:jilal.oussama@gmail.com]
Sent: Friday, November 08, 2013 9:13 PM
To: user@hadoop.apache.org
Subject: Re: Any reference for upgrade hadoop from 1.x to 2.2

 

I am looking for the same thing; if anyone can point us in a good direction, please do.

Thank you.

(Currently running Hadoop 1.2.1)

 







