Subject: Re: Namenode joining error in HA configuration
From: orahad bigdata <oraclehad@gmail.com>
To: user@hadoop.apache.org
Date: Wed, 28 Aug 2013 13:39:27 +0530

Hi Harsh,

I'm using Solaris 10 OS and Java 1.6.

Yes, I'm able to run the df command against the /tmp/hadoop-hadoop/dfs/name dir as the hadoop user.

Regards
Jitendra

On Wed, Aug 28, 2013 at 4:06 AM, Harsh J wrote:
> What OS are you starting this on?
>
> Are you able to run the command "df -k /tmp/hadoop-hadoop/dfs/name/"
> as user "hadoop"?
>
> On Wed, Aug 28, 2013 at 12:53 AM, orahad bigdata wrote:
> > Hi All,
> >
> > I'm new to Hadoop administration. Can someone please help me?
> >
> > Hadoop version: 2.0.5-alpha, using QJM
> >
> > I'm getting the error messages below while starting Hadoop HDFS using 'start-dfs.sh':
> >
> > 2013-01-23 03:25:43,208 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Image file of size 121 loaded in 0 seconds.
> > 2013-01-23 03:25:43,209 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /tmp/hadoop-hadoop/dfs/name/current/fsimage_0000000000000000000
> > 2013-01-23 03:25:43,217 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
> > 2013-01-23 03:25:43,217 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 1692 msecs
> > 2013-01-23 03:25:43,552 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
> > 2013-01-23 03:25:43,592 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
> > 2013-01-23 03:25:43,699 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
> > 2013-01-23 03:25:43,822 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
> > 2013-01-23 03:25:43,822 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
> > 2013-01-23 03:25:43,824 INFO org.apache.hadoop.ipc.Server: Stopping server on 8020
> > 2013-01-23 03:25:43,829 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
> > 2013-01-23 03:25:43,831 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
> > 2013-01-23 03:25:43,832 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
> > 2013-01-23 03:25:43,835 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:129)
> >         at org.apache.hadoop.fs.DF.getFilesystem(DF.java:108)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:69)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:165)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:683)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:484)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:448)
> >
> > Thanks
>
> --
> Harsh J
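[Archive note: the stack trace above shows the NameNode's resource checker forking an external command (org.apache.hadoop.fs.DF) and receiving a non-zero exit status, which Shell wraps as ExitCodeException. A minimal sketch of the same check, useful for reproducing the failure by hand; the `check_df` helper name is hypothetical, and it assumes this Hadoop version shells out to `df -k <dir>`:]

```shell
#!/bin/sh
# Sketch of the disk check that NameNodeResourceChecker performs:
# run "df -k" against a storage directory and report whether the
# command itself exits non-zero (the condition that surfaces as
# Shell$ExitCodeException in the NameNode log).
check_df() {
  if df -k "$1" >/dev/null 2>&1; then
    echo "df OK for $1"
  else
    # $? here is the exit status of the failed df invocation
    echo "df FAILED for $1 (status $?)"
  fi
}

check_df /tmp                          # a path that should always resolve
check_df /tmp/hadoop-hadoop/dfs/name/  # the NameNode storage dir from the log
```

Running this as the "hadoop" user (e.g. via `su - hadoop -c ...`) shows whether the OS-level df call, rather than Hadoop itself, is what fails on the storage directory.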