Subject: Re: Can not start HA namenode with security enabled
From: Manoj Samel <manojsameltech@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 3 Feb 2015 09:24:21 -0800

Have you added all host-specific principals to the Kerberos database?
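
For example, with MIT Kerberos you can list what the KDC knows and probe the exact service principal the NameNode will request (a sketch; the hostnames and realm are taken from the config quoted below, and note that SPNEGO clients ask for the upper-case HTTP/ service name):

    # On the KDC host: list the registered principals
    kadmin.local -q "listprincs" | grep -Ei '^(hdfs|HTTP|host)/'

    # From the NameNode, after kinit with any valid principal:
    # UNKNOWN_SERVER here reproduces the error in the log below
    kvno HTTP/bgdt04.dev.hrb@BGDT.DEV.HRB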

Thanks,
On Tue, Feb 3, 2015 at 7:59 AM, 郝东 <donhoff_h@163.com> wrote:

I am converting a secure non-HA cluster into a secure HA cluster. After updating the configuration and starting all the JournalNodes, I executed the following commands on the original NameNode:
1. hdfs namenode -initializeSharedEdits  # this step succeeded
2. hadoop-daemon.sh start namenode # this step failed.

The NameNode did not start successfully. I verified that my principals are right, and I checked that DNS is configured correctly: I can use nslookup to look up and reverse-look-up the NameNode and the JournalNodes.
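
Concretely, the checks were along these lines (a sketch; the keytab path and hostnames are the ones from the config below):

    # The keytab should hold entries for every host-specific principal
    klist -kt /etc/hadoop/keytab/hadoop.service.keytab

    # Forward and reverse lookups for each NameNode and JournalNode
    nslookup bgdt01.dev.hrb
    nslookup 10.0.0.1    # substitute the address returned above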

I also checked the logs. The JournalNodes did not report any errors. The NameNode log reports some ERRORs, but I still could not work out the cause from them.

Below I have listed the main part of my hdfs-site.xml and the error log from my NameNode. Could anyone help me figure it out?

Many Thanks!

************** The main part of my hdfs-site.xml *************************

<property>
<name>dfs.nameservices</name>
<value>bgdt-dev-hrb</value>
</property>

<property>
<name>dfs.ha.namenodes.bgdt-dev-hrb</name>
<value>nn1,nn2</value>
</property>

<property>
<name>dfs.namenode.rpc-address.bgdt-dev-hrb.nn1</name>
<value>bgdt01.dev.hrb:9000</value>
</property>

<property>
<name>dfs.namenode.rpc-address.bgdt-dev-hrb.nn2</name>
<value>bgdt02.dev.hrb:9000</value>
</property>

<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://bgdt01.dev.hrb:8485;bgdt03.dev.hrb:8485;bgdt04.dev.hrb:8485/bgdt-dev-hrb</value>
</property>

<property>
<name>dfs.client.failover.proxy.provider.bgdt-dev-hrb</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence
shell(/bin/true)
</value>
</property>

<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>

<property>
<name>dfs.journalnode.edits.dir</name>
<value>/bgdt/hadoop/hdfs/jn</value>
</property>

<property>
<name>dfs.permissions.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///bgdt/hadoop/hdfs/nn</value>
<final>true</final>
</property>
<property>
<name>dfs.datanode.name.dir</name>
<value>file:///bgdt/hadoop/hdfs/dn</value>
</property>

<property>
<name>dfs.namenode.http-address.bgdt-dev-hrb.nn1</name>
<value>bgdt01.dev.hrb:50070</value>
</property>

<property>
<name>dfs.namenode.http-address.bgdt-dev-hrb.nn2</name>
<value>bgdt02.dev.hrb:50070</value>
</property>

<property>
<name>dfs.permissions.superusergroup</name>
<value>bgdtgrp</value>
</property>

<property>
<name>dfs.block.access.token.enable</name>
<value>true</value>
</property>

<property>
<name>dfs.http.policy</name>
<value>HTTP_ONLY</value>
</property>

<property>
<name>dfs.namenode.https-address.bgdt-dev-hrb.nn1</name>
<value>bgdt01.dev.hrb:50470</value>
</property>

<property>
<name>dfs.namenode.https-address.bgdt-dev-hrb.nn2</name>
<value>bgdt02.dev.hrb:50470</value>
</property>

<property>
<name>dfs.namenode.keytab.file</name>
<value>/etc/hadoop/keytab/hadoop.service.keytab</value>
</property>
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/_HOST@BGDT.DEV.HRB</value>
</property>
<property>
<name>dfs.namenode.kerberos.https.principal</name>
<value>host/_HOST@BGDT.DEV.HRB</value>
</property>

<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>

<property>
<name>dfs.web.authentication.kerberos.principal</name>
<value>http/_HOST@BGDT.DEV.HRB</value>
</property>

<property>
<name>dfs.web.authentication.kerberos.keytab</name>
<value>/etc/hadoop/keytab/hadoop.service.keytab</value>
</property>

<property>
<name>dfs.journalnode.kerberos.principal</name>
<value>hdfs/_HOST@BGDT.DEV.HRB</value>
</property>

<property>
<name>dfs.journalnode.kerberos.https.principal</name>
<value>host/_HOST@BGDT.DEV.HRB</value>
</property>

<property>
<name>dfs.journalnode.kerberos.internal.spnego.principal</name>
<value>http/_HOST@BGDT.DEV.HRB</value>
</property>

<property>
<name>dfs.journalnode.keytab.file</name>
<value>/etc/hadoop/keytab/hadoop.service.keytab</value>
</property>
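
For reference, individual keys can be echoed back with hdfs getconf to confirm which values the daemons actually load, e.g.:

    hdfs getconf -confKey dfs.namenode.kerberos.principal
    hdfs getconf -confKey dfs.journalnode.kerberos.internal.spnego.principal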

********************* The Error Log from the Namenode ******************************

2015-02-03 17:42:06,020 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3, http://bgdt01.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3
2015-02-03 17:42:06,024 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3, http://bgdt01.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3' to transaction ID 68994
2015-02-03 17:42:06,024 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3' to transaction ID 68994
2015-02-03 17:42:06,154 ERROR org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: caught exception initializing http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3
java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:464)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:456)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
    at org.apache.hadoop.security.SecurityUtil.doAsCurrentUser(SecurityUtil.java:438)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog.getInputStream(EditLogFileInputStream.java:455)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:141)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:192)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:250)
    at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
    at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
    at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
    at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
    at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
    at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
    at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:184)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:816)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:676)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:279)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:955)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
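
If it helps, the failing edits fetch can be reproduced outside the NameNode roughly like this (a sketch; it assumes curl built with GSS/Negotiate support, and uses the keytab and the URL from above):

    # Obtain a TGT as the NameNode's service principal
    kinit -kt /etc/hadoop/keytab/hadoop.service.keytab hdfs/bgdt01.dev.hrb@BGDT.DEV.HRB

    # Request the same edits segment via SPNEGO; the same UNKNOWN_SERVER
    # failure appears if the JournalNode's HTTP principal is missing
    curl --negotiate -u : "http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3"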



