Subject: Re: I am about to lose all my data please help
From: Fatih Haltas <fatih.haltas@nyu.edu>
To: user@hadoop.apache.org
Date: Wed, 19 Mar 2014 16:58:10 +0400

Thanks for your help, but I still could not solve my problem.

On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <sshi@gopivotal.com> wrote:
> Ah yes, I overlooked this. Then please check whether the files are there or not:
> "ls /home/hadoop/project/hadoop-data/dfs/name"
>
> Regards,
> Stanley Shi
>
> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <azuryyyu@gmail.com> wrote:
>> I don't think this is the case, because there is:
>>
>>   <property>
>>     <name>hadoop.tmp.dir</name>
>>     <value>/home/hadoop/project/hadoop-data</value>
>>   </property>
>>
>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <sshi@gopivotal.com> wrote:
>>> One possible reason is that you didn't set the namenode working
>>> directory; by default it is under the "/tmp" folder, and the "/tmp" folder might
>>> get deleted by the OS without any notification. If this is the case, I am
>>> afraid you have lost all your namenode data.
>>>
>>> <property>
>>>   <name>dfs.name.dir</name>
>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>   <description>Determines where on the local filesystem the DFS name node
>>>     should store the name table (fsimage). If this is a comma-delimited list
>>>     of directories then the name table is replicated in all of the
>>>     directories, for redundancy.</description>
>>> </property>
>>>
>>> Regards,
>>> Stanley Shi
>>>
>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mirko.kaempf@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> What is the location of the namenode's fsimage and edit logs?
>>>> And how much memory does the NameNode have?
>>>>
>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>> checkpointing?
>>>>
>>>> Where are your HDFS blocks located? Are those still safe?
>>>>
>>>> With this information at hand, one might be able to fix your setup, but
>>>> do not format the old namenode before all is working with a fresh one.
>>>>
>>>> Grab a copy of the maintenance guide:
>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>> which helps in solving this type of problem as well.
>>>>
>>>> Best wishes
>>>> Mirko
>>>>
>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fatih.haltas@nyu.edu>:
>>>>
>>>>> Dear All,
>>>>>
>>>>> I have just restarted the machines of my Hadoop cluster. Now I am trying
>>>>> to restart the Hadoop cluster again, but I am getting an error on namenode
>>>>> restart. I am afraid of losing my data, as the cluster was running properly
>>>>> for more than 3 months. Currently, I believe that if I format the namenode
>>>>> it will work again; however, the data will be lost. Is there any way to
>>>>> solve this without losing the data?
>>>>>
>>>>> I will really appreciate any help.
>>>>>
>>>>> Thanks.
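
The first step both Stanley and Mirko point at is to confirm whether any namenode metadata still exists on disk before doing anything destructive. A minimal sketch of that check, assuming the paths quoted in this thread (the poster's `hadoop.tmp.dir` plus the `/tmp` default Stanley mentions -- both are assumptions to adapt):

```shell
#!/bin/sh
# Candidate namenode directories: the configured hadoop.tmp.dir from this
# thread, plus the /tmp default. Both paths are assumptions -- adjust them.
for d in /home/hadoop/project/hadoop-data/dfs/name /tmp/hadoop-*/dfs/name; do
  if [ -f "$d/current/VERSION" ]; then
    echo "FOUND namenode metadata in $d:"
    ls -l "$d/current"                # fsimage and edits files should be listed here
  else
    echo "no metadata under $d"
  fi
done
```

If `VERSION`/`fsimage` show up in some candidate directory, that directory is what `dfs.name.dir` should point at; if nothing shows up anywhere, the "NameNode is not formatted" error simply means the namenode cannot find its metadata.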
>>>>>
>>>>> =====================
>>>>> Here are the logs:
>>>>> =====================
>>>>> 2014-02-26 16:02:39,698 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>> /************************************************************
>>>>> STARTUP_MSG: Starting NameNode
>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>> STARTUP_MSG:   args = []
>>>>> STARTUP_MSG:   version = 1.0.4
>>>>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>> ************************************************************/
>>>>> 2014-02-26 16:02:40,005 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>>>>> 2014-02-26 16:02:40,019 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
>>>>> 2014-02-26 16:02:40,021 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>>>>> 2014-02-26 16:02:40,021 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
>>>>> 2014-02-26 16:02:40,169 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
>>>>> 2014-02-26 16:02:40,193 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
>>>>> 2014-02-26 16:02:40,194 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 64-bit
>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^21 = 2097152 entries
>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
>>>>> 2014-02-26 16:02:40,273 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>> 2014-02-26 16:02:40,273 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>> 2014-02-26 16:02:40,274 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
>>>>> 2014-02-26 16:02:40,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
>>>>> 2014-02-26 16:02:40,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>>>> 2014-02-26 16:02:40,724 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
>>>>> 2014-02-26 16:02:40,749 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
>>>>> 2014-02-26 16:02:40,780 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
>>>>> java.io.IOException: NameNode is not formatted.
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>> 2014-02-26 16:02:40,781 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>
>>>>> 2014-02-26 16:02:40,781 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>> /************************************************************
>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>> ************************************************************/
>>>>>
>>>>> ===========================
>>>>> Here is the core-site.xml
>>>>> ===========================
>>>>> <?xml version="1.0"?>
>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>
>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>
>>>>> <configuration>
>>>>>   <property>
>>>>>     <name>fs.default.name</name>
>>>>>     <value>-BLANKED</value>
>>>>>   </property>
>>>>>   <property>
>>>>>     <name>hadoop.tmp.dir</name>
>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>   </property>
>>>>> </configuration>
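
Since the core-site.xml above only sets `hadoop.tmp.dir`, the namenode directory falls back to the derived default `${hadoop.tmp.dir}/dfs/name` from the property Stanley quoted. A sketch of pinning it explicitly in hdfs-site.xml so it can never silently default elsewhere (the path is an assumption taken from this thread's config):

```xml
<!-- hdfs-site.xml: pin the namenode metadata directory explicitly.
     The path below is an assumption matching the hadoop.tmp.dir in this thread. -->
<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/project/hadoop-data/dfs/name</value>
</property>
```

A comma-separated list of directories here replicates the name table for redundancy, which would also guard against losing a single copy.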
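
Putting the thread's advice together: if a checkpoint of the namenode metadata survives anywhere (for example under a SecondaryNameNode directory), it can be copied into the configured `dfs.name.dir` instead of formatting. A hedged sketch only -- the SRC path is an assumption, and everything should be backed up before copying:

```shell
#!/bin/sh
# Hypothetical recovery sketch. SRC is an assumed SecondaryNameNode
# checkpoint location; locate the real surviving copy first and back it up.
SRC=${SRC:-/home/hadoop/project/hadoop-data/dfs/namesecondary}
DST=${DST:-/home/hadoop/project/hadoop-data/dfs/name}   # the dfs.name.dir the namenode reads

if [ -d "$SRC/current" ]; then
  mkdir -p "$DST"
  cp -a "$SRC/current" "$DST/"   # copy the checkpoint; never run "hadoop namenode -format"
  echo "copied checkpoint from $SRC to $DST"
else
  echo "no checkpoint found under $SRC -- do NOT format until one is located"
fi
```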