From: Harsh J
Date: Wed, 28 Aug 2013 14:07:29 +0530
Subject: Re: Namenode joining error in HA configuration
To: user@hadoop.apache.org

Thanks. I don't believe we support Solaris 10 (i.e. we do not test intensively on it), but the piece of code behind this failure executes "bash -c exec 'df -k /namedirpath'". If such a command cannot run on Solaris 10, that is probably the central issue for you right now (though there may be other issues as well).
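If it helps to confirm that, here is a minimal standalone sketch that roughly mimics the invocation outside the NameNode. It is not the actual Hadoop source; the class name DfCheck, the exact quoting, and the default directory are illustrative. The real work is done by org.apache.hadoop.fs.DF and org.apache.hadoop.util.Shell, which throw the ExitCodeException you see in the log below when the command exits non-zero.

  import java.io.BufferedReader;
  import java.io.InputStreamReader;

  public class DfCheck {
    public static void main(String[] args) throws Exception {
      // Directory to probe; defaults to the name dir from the log below.
      String dir = args.length > 0 ? args[0] : "/tmp/hadoop-hadoop/dfs/name";

      // Approximation of the shell invocation described above. Hadoop's DF
      // class builds a similar command and hands it to Shell.runCommand().
      ProcessBuilder pb = new ProcessBuilder("bash", "-c", "exec df -k " + dir);
      pb.redirectErrorStream(true);

      Process p = pb.start();
      BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
      String line;
      while ((line = r.readLine()) != null) {
        System.out.println(line);
      }
      int rc = p.waitFor();
      System.out.println("exit code: " + rc); // non-zero matches the NameNode failure
    }
  }

If running this (or simply "bash -c 'exec df -k /tmp/hadoop-hadoop/dfs/name'") as the "hadoop" user fails or exits non-zero on Solaris 10, that would explain the ExitCodeException in your log.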
On Wed, Aug 28, 2013 at 1:39 PM, orahad bigdata wrote:
> Hi Harsh,
>
> I'm using Solaris 10 OS and Java 1.6.
>
> Yes, I'm able to run the df command against the /tmp/hadoop-hadoop/dfs/name dir
> as the hadoop user.
>
> Regards
> Jitendra
>
> On Wed, Aug 28, 2013 at 4:06 AM, Harsh J wrote:
>>
>> What OS are you starting this on?
>>
>> Are you able to run the command "df -k /tmp/hadoop-hadoop/dfs/name/"
>> as user "hadoop"?
>>
>> On Wed, Aug 28, 2013 at 12:53 AM, orahad bigdata wrote:
>> > Hi All,
>> >
>> > I'm new to Hadoop administration; can someone please help me?
>> >
>> > Hadoop version: 2.0.5-alpha, using QJM.
>> >
>> > I'm getting the error messages below while starting HDFS with
>> > 'start-dfs.sh':
>> >
>> > 2013-01-23 03:25:43,208 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Image file of size 121 loaded in 0 seconds.
>> > 2013-01-23 03:25:43,209 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /tmp/hadoop-hadoop/dfs/name/current/fsimage_0000000000000000000
>> > 2013-01-23 03:25:43,217 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
>> > 2013-01-23 03:25:43,217 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 1692 msecs
>> > 2013-01-23 03:25:43,552 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
>> > 2013-01-23 03:25:43,592 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
>> > 2013-01-23 03:25:43,699 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
>> > 2013-01-23 03:25:43,822 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
>> > 2013-01-23 03:25:43,822 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
>> > 2013-01-23 03:25:43,824 INFO org.apache.hadoop.ipc.Server: Stopping server on 8020
>> > 2013-01-23 03:25:43,829 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
>> > 2013-01-23 03:25:43,831 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
>> > 2013-01-23 03:25:43,832 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
>> > 2013-01-23 03:25:43,835 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:129)
>> >     at org.apache.hadoop.fs.DF.getFilesystem(DF.java:108)
>> >     at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:69)
>> >     at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:165)
>> >     at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:683)
>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:484)
>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:448)
>> >
>> > Thanks
>>
>> --
>> Harsh J
>

--
Harsh J