Subject: Re: I am about to lose all my data please help
From: Fatih Haltas <fatih.haltas@nyu.edu>
To: user@hadoop.apache.org
Date: Tue, 25 Mar 2014 09:22:18 +0400

Ok, thanks to you all. I just removed the version information of all the datanodes and namenodes, then restarted, and it is working fine now.

On Mon, Mar 24, 2014 at 5:52 PM, praveenesh kumar wrote:

> Can you also make sure your hostname and IP address are still mapped
> correctly? What I am guessing is that when you restarted your machine,
> your /etc/hosts entries got restored (this happens in some
> distributions, depending on how you installed it). So when you try to
> restart your namenode, it might be pointing to a different IP/machine
> (typically localhost).
>
> I can't think of any other way this could happen just by restarting the
> machine.
>
>
> On Mon, Mar 24, 2014 at 5:42 AM, Stanley Shi <sshi@gopivotal.com> wrote:
>
>> Can you confirm that your namenode image and fseditlog are still there? If
>> not, then your data IS lost.
>>
>> Regards,
>> Stanley Shi
>>
>>
>> On Sun, Mar 23, 2014 at 6:24 PM, Fatih Haltas <fatih.haltas@nyu.edu> wrote:
>>
>>> No, of course not; I blanked it out.
>>>
>>>
>>> On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar wrote:
>>>
>>>> Is this property correct?
>>>>
>>>> <property>
>>>>   <name>fs.default.name</name>
>>>>   <value>-BLANKED</value>
>>>> </property>
>>>>
>>>> Regards
>>>> Prav
>>>>
>>>>
>>>> On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fatih.haltas@nyu.edu> wrote:
>>>>
>>>>> Thanks for your help, but I still could not solve my problem.
>>>>>
>>>>>
>>>>> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <sshi@gopivotal.com> wrote:
>>>>>
>>>>>> Ah yes, I overlooked this. Then please check whether the files are there
>>>>>> or not: "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>>>>>
>>>>>> Regards,
>>>>>> Stanley Shi
>>>>>>
>>>>>>
>>>>>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <azuryyyu@gmail.com> wrote:
>>>>>>
>>>>>>> I don't think this is the case, because there is:
>>>>>>>
>>>>>>> <property>
>>>>>>>   <name>hadoop.tmp.dir</name>
>>>>>>>   <value>/home/hadoop/project/hadoop-data</value>
>>>>>>> </property>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <sshi@gopivotal.com> wrote:
>>>>>>>
>>>>>>>> One possible reason is that you didn't set the namenode working
>>>>>>>> directory; by default it is in the "/tmp" folder, and the "/tmp" folder
>>>>>>>> might get deleted by the OS without any notification. If this is the
>>>>>>>> case, I am afraid you have lost all your namenode data.
>>>>>>>>
>>>>>>>> <property>
>>>>>>>>   <name>dfs.name.dir</name>
>>>>>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>>>>>       should store the name table (fsimage). If this is a comma-delimited list
>>>>>>>>       of directories then the name table is replicated in all of the
>>>>>>>>       directories, for redundancy.</description>
>>>>>>>> </property>
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Stanley Shi
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mirko.kaempf@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> What is the location of the namenode's fsimage and editlogs?
>>>>>>>>> And how much memory does the NameNode have?
>>>>>>>>>
>>>>>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>>>>>> checkpointing?
>>>>>>>>>
>>>>>>>>> Where are your HDFS blocks located; are those still safe?
>>>>>>>>>
>>>>>>>>> With this information at hand, one might be able to fix your
>>>>>>>>> setup, but do not format the old namenode before
>>>>>>>>> everything is working with a fresh one.
>>>>>>>>>
>>>>>>>>> Grab a copy of the maintenance guide:
>>>>>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>>>>>> which helps with solving this type of problem as well.
>>>>>>>>>
>>>>>>>>> Best wishes
>>>>>>>>> Mirko
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fatih.haltas@nyu.edu>:
>>>>>>>>>
>>>>>>>>>> Dear All,
>>>>>>>>>>
>>>>>>>>>> I have just restarted the machines of my hadoop clusters. Now, I am
>>>>>>>>>> trying to restart the hadoop clusters again, but I am getting an error on
>>>>>>>>>> namenode restart. I am afraid of losing my data, as it was running properly
>>>>>>>>>> for more than 3 months. Currently, I believe that if I format the namenode,
>>>>>>>>>> it will work again; however, the data will be lost. Is there any way to
>>>>>>>>>> solve this without losing the data?
>>>>>>>>>>
>>>>>>>>>> I will really appreciate any help.
>>>>>>>>>>
>>>>>>>>>> Thanks.
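The dfs.name.dir default discussed in the thread is worth making concrete: pinning the NameNode metadata to explicit directories (rather than anything derived from /tmp) prevents this failure mode. A minimal hdfs-site.xml sketch for Hadoop 1.x; the first path matches the hadoop.tmp.dir quoted in the thread, while the second (/mnt/backup) is an illustrative assumption:

```xml
<!-- Sketch only: /mnt/backup/dfs/name is a hypothetical second location. -->
<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/project/hadoop-data/dfs/name,/mnt/backup/dfs/name</value>
  <description>Comma-delimited list of directories; the fsimage and edits
      are written to every entry, so losing a single copy does not lose
      the namespace.</description>
</property>
```

With two directories on separate disks, one surviving copy of the fsimage is enough to restart the NameNode without formatting.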
>>>>>>>>>>
>>>>>>>>>> =====================
>>>>>>>>>> Here are the logs:
>>>>>>>>>> ====================
>>>>>>>>>> 2014-02-26 16:02:39,698 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>>>>>> /************************************************************
>>>>>>>>>> STARTUP_MSG: Starting NameNode
>>>>>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>>>>>> STARTUP_MSG:   args = []
>>>>>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>>>>>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>>>>>> ************************************************************/
>>>>>>>>>> 2014-02-26 16:02:40,005 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>>>>>>>>>> 2014-02-26 16:02:40,019 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
>>>>>>>>>> 2014-02-26 16:02:40,021 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>>>>>>>>>> 2014-02-26 16:02:40,021 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
>>>>>>>>>> 2014-02-26 16:02:40,169 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
>>>>>>>>>> 2014-02-26 16:02:40,193 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
>>>>>>>>>> 2014-02-26 16:02:40,194 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 64-bit
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^21 = 2097152 entries
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
>>>>>>>>>> 2014-02-26 16:02:40,273 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>>>>>> 2014-02-26 16:02:40,273 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>>>>>> 2014-02-26 16:02:40,274 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
>>>>>>>>>> 2014-02-26 16:02:40,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
>>>>>>>>>> 2014-02-26 16:02:40,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>>>>>>>>> 2014-02-26 16:02:40,724 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
>>>>>>>>>> 2014-02-26 16:02:40,749 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
>>>>>>>>>> 2014-02-26 16:02:40,780 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
>>>>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>>> 2014-02-26 16:02:40,781 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>>>
>>>>>>>>>> 2014-02-26 16:02:40,781 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>>>>> /************************************************************
>>>>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>>>>> ************************************************************/
>>>>>>>>>>
>>>>>>>>>> ===========================
>>>>>>>>>> Here is the core-site.xml:
>>>>>>>>>> ===========================
>>>>>>>>>> <?xml version="1.0"?>
>>>>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>>>>
>>>>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>>>>
>>>>>>>>>> <configuration>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>fs.default.name</name>
>>>>>>>>>>     <value>-BLANKED</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>>>>   </property>
>>>>>>>>>> </configuration>
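Stanley's advice in the thread, confirming whether the name directory still holds its files before doing anything destructive, can be sketched as a small shell check. The path is the one implied by the thread's hadoop.tmp.dir setting; the function name and its output strings are illustrative:

```shell
# Sketch: decide whether NameNode metadata survived before considering a format.
# The layout checked (current/VERSION, current/fsimage*) matches Hadoop 1.x.
check_name_dir() {
  dir="$1"
  # An intact Hadoop 1.x name directory holds current/VERSION and an fsimage.
  if [ -f "$dir/current/VERSION" ] && ls "$dir/current"/fsimage* >/dev/null 2>&1; then
    echo "intact"
  else
    echo "missing"
  fi
}

# Path implied by hadoop.tmp.dir=/home/hadoop/project/hadoop-data in the thread.
check_name_dir /home/hadoop/project/hadoop-data/dfs/name
```

If this prints "missing" for every configured dfs.name.dir entry, the namespace metadata is gone and formatting will only create an empty filesystem; if any entry prints "intact", do not format, since the NameNode can be brought back from that copy.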