Date: Fri, 11 Apr 2014 05:55:15 +0000 (UTC)
From: "Chris Nauroth (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-6233) Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.

    [ https://issues.apache.org/jira/browse/HDFS-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966263#comment-13966263 ]

Chris Nauroth commented on HDFS-6233:
-------------------------------------

Good point. :-) Thanks again.

> Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.
> ---------------------------------------------------------------------
>
>                 Key: HDFS-6233
>                 URL: https://issues.apache.org/jira/browse/HDFS-6233
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, tools
>    Affects Versions: 2.4.0
>        Environment: Windows
>            Reporter: Huan Huang
>            Assignee: Arpit Agarwal
>        Attachments: HDFS-6233.01.patch
>
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due to a hard link exception.
> Repro steps:
> * Install Hadoop 1.x
> * hadoop dfsadmin -safemode enter
> * hadoop dfsadmin -saveNamespace
> * hadoop namenode -finalize
> * Stop all services
> * Uninstall Hadoop 1.x
> * Install Hadoop 2.4
> * Start the namenode with the -upgrade option
> * Try to start the datanode; the hardlink exception below appears in the datanode service log.
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Upgrading storage directory d:\hadoop\data\hdfs\dn.
>    old LV = -44; old CTime = 0.
>    new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool (Datanode Uuid unassigned) service to myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect command line arguments.
> 	at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
> 	at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
> 	at org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
> 	at org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
> 	at org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
> 	at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
> 	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
> 	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
> 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
> 	at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to myhost/10.0.0.1:8020
> 2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
> 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
> 	at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
> 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
> 	at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,359 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid unassigned)
> 2014-04-10 22:47:12,360 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
> 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:861)
> 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
> 	at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:14,360 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2014-04-10 22:47:14,361 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
> 2014-04-10 22:47:14,362 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at myhost/10.0.0.1
> ************************************************************/
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)
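
For context on the failure in the log above: during a layout upgrade the DataNode populates the new {{current}} directory by hard-linking the existing block files (see DataStorage.linkBlocks and HardLink.createHardLinkMult in the stack trace). The usage message "hardlink create [LINKNAME] [FILENAME]" suggests the Windows command invoked there accepts a single link/file pair, while createHardLinkMult appears to pass several files at once. The sketch below only illustrates the hard-linking idea with plain Java NIO; it is not the Hadoop implementation, and the class and method names (LinkBlocksSketch, linkAll) are invented for the example.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Illustrative sketch only: hard-link every regular file under "previous"
// into the same relative location under "current". This is conceptually
// what the upgrade's block-linking step does, but Hadoop itself does not
// use this code; names here are hypothetical.
public class LinkBlocksSketch {

  static void linkAll(Path previous, Path current) throws IOException {
    final List<Path> files;
    try (Stream<Path> walk = Files.walk(previous)) {
      files = walk.filter(Files::isRegularFile).collect(Collectors.toList());
    }
    for (Path src : files) {
      Path dst = current.resolve(previous.relativize(src));
      Files.createDirectories(dst.getParent());
      // Hard link rather than copy, so the upgraded layout shares block
      // data with the old layout instead of duplicating it on disk.
      Files.createLink(dst, src);
    }
  }

  public static void main(String[] args) throws IOException {
    // Example (hypothetical paths): previous and current storage dirs.
    linkAll(Paths.get(args[0]), Paths.get(args[1]));
  }
}
{code}

Files.createLink fails on file systems without hard-link support, which is one reason a platform-specific command-line tool may be used instead; the exact Windows behavior in 2.4 is what this issue is about.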