From: Adam Kawa <kawa.adam@gmail.com>
To: user@hadoop.apache.org
Cc: rdyer@iastate.edu
Date: Tue, 3 Dec 2013 19:28:22 +0100
Subject: Re: Any reference for upgrade hadoop from 1.x to 2.2
@Nirmal,

And later, you need to make a decision to finalize the upgrade or roll back.
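Roughly, the choice looks like this (just a sketch; the script path is the one from the command below, and the exact layout depends on how you installed 2.x):

# once you are happy with the upgraded cluster:
$ hdfs dfsadmin -finalizeUpgrade

# or, if you need to go back to the pre-upgrade state instead:
$ /usr/lib/hadoop/sbin/hadoop-daemon.sh stop namenode
$ /usr/lib/hadoop/sbin/hadoop-daemon.sh start namenode -rollback

Note that after finalizing, a rollback is no longer possible.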


2013/12/3 Adam Kawa <kawa.adam@gmail.com>
@Nirmal,

You need to run the NameNode with the -upgrade option, e.g.
$ /usr/lib/hadoop/sbin/hadoop-daemon.sh start namenode -upgrade
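(Once the NameNode comes up on the new software, the other daemons can be started the usual way and the filesystem checked, e.g. something like the following; the script location is an assumption based on the same layout as above:

$ /usr/lib/hadoop/sbin/hadoop-daemon.sh start datanode
$ hdfs dfsadmin -report
)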


2013/12/3 Nirmal Kumar <nirmal.kumar@impetus.co.in>

Hi All,

I am doing a test migration from Apache Hadoop-1.2.0 to Apache Hadoop-2.0.6-alpha on a single node environment.

I did the following:

· Installed Apache Hadoop-1.2.0
· Ran word count sample MR jobs. The jobs executed successfully.
· Stopped all the services in Apache Hadoop-1.2.0 and was then able to start all the services again.
· The previously submitted jobs are visible after the stop/start in the JobTracker URL.

Next I installed Apache Hadoop-2.0.6-alpha alongside.

I used the SAME data directory locations that were used in Apache Hadoop-1.2.0 in the configuration files, namely:

core-site.xml
----------------
hadoop.tmp.dir    /home/cloud/hadoop_migration/hadoop-data/tempdir

hdfs-site.xml
-----------------
dfs.data.dir      /home/cloud/hadoop_migration/hadoop-data/data
dfs.name.dir      /home/cloud/hadoop_migration/hadoop-data/name
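For reference, the same settings written with the 2.x property names (the 1.x names above still work as deprecated aliases; this is only a sketch using my paths, and hadoop.tmp.dir keeps its name in core-site.xml):

<!-- hdfs-site.xml -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/cloud/hadoop_migration/hadoop-data/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/home/cloud/hadoop_migration/hadoop-data/data</value>
</property>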

I am UNABLE to start the NameNode from the Apache Hadoop-2.0.6-alpha installation; I am getting the error:

2013-12-03 18:28:23,941 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-12-03 18:28:24,080 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-12-03 18:28:24,081 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-12-03 18:28:24,576 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2013-12-03 18:28:24,576 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2013-12-03 18:28:24,744 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-12-03 18:28:24,749 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2013-12-03 18:28:24,762 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2013-12-03 18:28:24,762 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 1
2013-12-03 18:28:24,762 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2013-12-03 18:28:24,762 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2013-12-03 18:28:24,763 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2013-12-03 18:28:24,763 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2013-12-03 18:28:24,763 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2013-12-03 18:28:24,763 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2013-12-03 18:28:24,771 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = cloud (auth:SIMPLE)
2013-12-03 18:28:24,771 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2013-12-03 18:28:24,771 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2013-12-03 18:28:24,771 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2013-12-03 18:28:24,776 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2013-12-03 18:28:25,230 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-12-03 18:28:25,243 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2013-12-03 18:28:25,244 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-12-03 18:28:25,244 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2013-12-03 18:28:25,288 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/cloud/hadoop_migration/hadoop-data/name/in_use.lock acquired by nodename 21371@Impetus-942.impetus.co.in
2013-12-03 18:28:25,462 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-12-03 18:28:25,462 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-12-03 18:28:25,473 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-12-03 18:28:25,474 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /home/cloud/hadoop_migration/hadoop-data/name. Reported: -41. Expecting = -40.
        at org.apache.hadoop.hdfs.server.common.Storage.setLayoutVersion(Storage.java:1079)
        at org.apache.hadoop.hdfs.server.common.Storage.setFieldsFromProperties(Storage.java:887)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:583)
        at org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:918)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:304)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:200)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:627)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:469)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:437)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:609)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:594)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1235)
2013-12-03 18:28:25,479 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-12-03 18:28:25,481 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Impetus-942.impetus.co.in/192.168.41.106
************************************************************/
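(For what it is worth, the layout version that the name directory is actually on can be checked directly; this is just a sanity-check sketch using my path:

$ grep layoutVersion /home/cloud/hadoop_migration/hadoop-data/name/current/VERSION

which for this directory should show layoutVersion=-41, i.e. the same "Reported: -41" as in the exception above, while this NameNode build only expects -40.)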

Independently, both installations (Apache Hadoop-1.2.0 and Apache Hadoop-2.0.6-alpha) are working for me. I am able to run MR jobs on both installations independently.

But I aim to migrate the data and the jobs submitted from Apache Hadoop-1.2.0 to Apache Hadoop-2.0.6-alpha.

Are there any HDFS compatibility issues between Apache Hadoop-1.2.0 and Apache Hadoop-2.0.6-alpha?

Thanks,

-Nirmal

From: Nirmal Kumar
Sent: Wednesday, November 27, 2013 2:56 PM
To: user@hadoop.apache.org; rdyer@iastate.edu
Subject: RE: Any reference for upgrade hadoop from 1.x to 2.2

Hello Sandy,

The post was useful and gave an insight into the migration.

I am doing a test migration from Apache Hadoop-1.2.0 to Apache Hadoop-2.0.6-alpha on a single node environment.

I have Apache Hadoop-1.2.0 up and running.

Can you please let me know the steps that one should follow for the migration?

I am thinking of doing something like:

· Install Apache Hadoop-2.0.6-alpha alongside the existing Apache Hadoop-1.2.0
· Use the same HDFS locations
· Change the various required configuration files
· Stop Apache Hadoop-1.2.0 and start Apache Hadoop-2.0.6-alpha
· Verify that all the services are running
· Test via MapReduce (test MRv1 and MRv2 examples)
· Check the web UI console and verify the MRv1 and MRv2 jobs

These steps need to be performed on all the nodes in a cluster environment; a rough sketch of the sequence is below.
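A very rough sketch of the single-node sequence I have in mind (HADOOP_1_HOME and HADOOP_2_HOME are placeholders for my two install directories, script names follow the stock tarball layouts, and the backup and /input, /output paths are only examples):

# 1. stop the old cluster and back up the NameNode metadata
$ $HADOOP_1_HOME/bin/stop-all.sh
$ cp -r /home/cloud/hadoop_migration/hadoop-data/name /home/cloud/hadoop_migration/name-backup

# 2. start the 2.x NameNode with the upgrade option, then the other daemons
$ $HADOOP_2_HOME/sbin/hadoop-daemon.sh start namenode -upgrade
$ $HADOOP_2_HOME/sbin/hadoop-daemon.sh start datanode
$ $HADOOP_2_HOME/sbin/start-yarn.sh

# 3. smoke-test with a bundled example job
$ $HADOOP_2_HOME/bin/hadoop jar $HADOOP_2_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output

# 4. finalize only after everything checks out
$ $HADOOP_2_HOME/bin/hdfs dfsadmin -finalizeUpgrade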

A translation table mapping the old configuration to the new would definitely be *very* useful.

Also, the existing Hadoop ecosystem components need to be considered:

· Hive Scripts
· Pig Scripts
· Oozie Workflows

Their compatibility and version support would need to be checked.

I am also thinking about risks, such as data loss, that one should keep in mind.

Also I found: http://strataconf.com/strata2014/public/schedule/detail/32247

Thanks,

-Nirmal

From: Robert Dyer [mailto:psybers@gmail.com]
Sent: Friday, November 22, 2013 9:08 PM
To: user@hadoop.apache.org
Subject: Re: Any reference for upgrade hadoop from 1.x to 2.2

Thanks Sandy! These seem helpful!

"MapReduce cluster configuration options have been split into YARN configuration options, which go in yarn-site.xml; and MapReduce configuration options, which go in mapred-site.xml. Many have been given new names to reflect the shift. ... We'll follow up with a full translation table in a future post."

This type of translation table mapping old configuration to new would be *very* useful!
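For example, the kind of mapping meant here (just an illustrative sketch, using property names from the stock Hadoop 2 setup instructions):

<!-- mapred-site.xml: run MapReduce jobs on YARN -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml: enable the MR shuffle service on the NodeManagers -->
<!-- (on the 2.0.x alphas this value was spelled mapreduce.shuffle) -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>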

- Robert

On Fri, Nov 22, 2013 at 2:15 AM, Sandy Ryza <sandy.ryza@cloudera.com> wrote:

For MapReduce and YARN, we recently published a couple of blog posts on migrating:

http://blog.cloudera.com/blog/2013/11/migrating-to-mapreduce-2-on-yarn-for-users/

http://blog.cloudera.com/blog/2013/11/migrating-to-mapreduce-2-on-yarn-for-operators/

hope that helps,
Sandy

On Fri, Nov 22, 2013 at 3:03 AM, Nirmal Kumar <nirmal.kumar@impetus.co.in> wrote:

Hi All,

I am also looking into migrating\upgrading from Apache Hadoop 1.x to Apache Hadoop 2.x.

I didn't find any docs\guides\blogs for the same.

There are, however, guides\docs for the CDH and HDP migrations\upgrades from Hadoop 1.x to Hadoop 2.x.

Would referring to those be of some use?

I am looking for similar guides\docs for Apache Hadoop 1.x to Apache Hadoop 2.x.

I found something on SlideShare, though. Not sure how useful it is going to be; I still need to verify it.

http://www.slideshare.net/mikejf12/an-example-apache-hadoop-yarn-upgrade

Any suggestions\comments will be of great help.

Thanks,

-Nirmal

From: Jilal Oussama [mailto:jilal.oussama@gmail.com]
Sent: Friday, November 08, 2013 9:13 PM
To: user@hadoop.apache.org
Subject: Re: Any reference for upgrade hadoop from 1.x to 2.2

I am looking for the same thing; if anyone can point us in a good direction, please do.

Thank you.

(Currently running Hadoop 1.2.1)


2013/11/1 YouPeng Yang <yypvsxf19870706@gmail.com>

Hi users

Are there any reference docs that introduce how to upgrade Hadoop from 1.x to 2.2?

Regards
