Subject: I am about to lose all my data please help
From: Fatih Haltas <fh34@nyu.edu>
To: user@hadoop.apache.org
Date: Sun, 16 Mar 2014 13:07:33 +0400

Dear All,

I have just restarted the machines of my Hadoop cluster. Now I am trying to start the cluster again, but I am getting an error on NameNode startup. I am afraid of losing my data, as the cluster had been running properly for more than 3 months. I believe that if I format the NameNode it will work again, but the data will be lost. Is there any way to solve this without losing the data?

I will really appreciate any help. Thanks.

=====================
Here are the logs:
====================
2014-02-26 16:02:39,698 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
************************************************************/
2014-02-26 16:02:40,005 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-02-26 16:02:40,019 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2014-02-26 16:02:40,021 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-02-26 16:02:40,021 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2014-02-26 16:02:40,169 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2014-02-26 16:02:40,193 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2014-02-26 16:02:40,194 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 64-bit
2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^21 = 2097152 entries
2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2014-02-26 16:02:40,273 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
2014-02-26 16:02:40,273 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2014-02-26 16:02:40,274 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2014-02-26 16:02:40,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2014-02-26 16:02:40,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2014-02-26 16:02:40,724 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2014-02-26 16:02:40,749 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2014-02-26 16:02:40,780 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2014-02-26 16:02:40,781 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2014-02-26 16:02:40,781 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
************************************************************/

===========================
Here is the core-site.xml:
===========================
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>-BLANKED</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/project/hadoop-data</value>
  </property>
</configuration>
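For reference, my understanding is that on Hadoop 1.x, dfs.name.dir defaults to ${hadoop.tmp.dir}/dfs/name when it is not set explicitly, so with the core-site.xml above the NameNode metadata should live under /home/hadoop/project/hadoop-data/dfs/name. A small sketch of where I would look for the surviving fsimage before considering a reformat (the helper name below is mine, not a Hadoop API, and it assumes the 1.x default):

```python
import os

def default_name_dir(hadoop_tmp_dir):
    """Expected NameNode metadata location, assuming the Hadoop 1.x
    default dfs.name.dir = ${hadoop.tmp.dir}/dfs/name."""
    return os.path.join(hadoop_tmp_dir, "dfs", "name")

# hadoop.tmp.dir taken from the core-site.xml above
path = default_name_dir("/home/hadoop/project/hadoop-data")
print(path)  # /home/hadoop/project/hadoop-data/dfs/name

# The fsimage/edits files normally sit in the "current" subdirectory;
# if it is still populated after the reboot, the data may be recoverable
# without formatting. (Result depends on the machine, so no expected output.)
print(os.path.isdir(os.path.join(path, "current")))
```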