From: Stuti Awasthi <stutiawasthi@hcl.com>
To: user@hadoop.apache.org
Date: Fri, 31 Jan 2014 14:09:29 +0000
Subject: RE: java.io.FileNotFoundException: http://HOSTNAME:50070/getimage?getimage=1

Hadoop version is 1.0.4.

In hdfs-default.html for the 1.0.4 version we have the following properties:

dfs.http.address
dfs.secondary.http.address

dfs.namenode.http-address: I suppose this property is not valid for Hadoop 1.x.

Please suggest.

Thanks
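For reference, Hadoop 1.x still configures these HTTP endpoints through the old property names; a minimal hdfs-site.xml sketch (HOSTNAME is a placeholder as elsewhere in this thread, and 0.0.0.0 binds all interfaces, which is the 1.x default):

    <property>
      <name>dfs.http.address</name>
      <value>0.0.0.0:50070</value>  <!-- NameNode web UI and getimage servlet -->
    </property>
    <property>
      <name>dfs.secondary.http.address</name>
      <value>0.0.0.0:50090</value>  <!-- SecondaryNameNode HTTP endpoint -->
    </property>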
From: Jitendra Yadav [mailto:jeetuyadav200890@gmail.com]
Sent: Friday, January 31, 2014 7:26 PM
To: user
Subject: Re: java.io.FileNotFoundException: http://HOSTNAME:50070/getimage?getimage=1

Can you please change the below property and restart your cluster again?

FROM: dfs.http.address
TO:   dfs.namenode.http-address

Thanks
Jitendra

On Fri, Jan 31, 2014 at 7:07 PM, Stuti Awasthi <stutiawasthi@hcl.com> wrote:

Hi Jitendra,

I realized that some days back my cluster went down due to a power failure, after which the nn/current directory has edits and edits.new files, and now the SNN is not rolling these edits because of the HTTP error.
Also, currently my NN and SNN are operating on the same machine.

DFSadmin report:

Configured Capacity: 659494076416 (614.2 GB)
Present Capacity: 535599210496 (498.82 GB)
DFS Remaining: 497454006272 (463.29 GB)
DFS Used: 38145204224 (35.53 GB)
DFS Used%: 7.12%
Under replicated blocks: 283
Blocks with corrupt replicas: 3
Missing blocks: 3

-------------------------------------------------
Datanodes available: 8 (8 total, 0 dead)

Name: 10.139.9.238:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 4302274560 (4.01 GB)
Non DFS Used: 8391843840 (7.82 GB)
DFS Remaining: 69742641152 (64.95 GB)
DFS Used%: 5.22%
DFS Remaining%: 84.6%
Last contact: Fri Jan 31 18:55:18 IST 2014

Name: 10.139.9.233:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 5774745600 (5.38 GB)
Non DFS Used: 13409488896 (12.49 GB)
DFS Remaining: 63252525056 (58.91 GB)
DFS Used%: 7.01%
DFS Remaining%: 76.73%
Last contact: Fri Jan 31 18:55:19 IST 2014

Name: 10.139.9.232:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 8524451840 (7.94 GB)
Non DFS Used: 24847884288 (23.14 GB)
DFS Remaining: 49064423424 (45.69 GB)
DFS Used%: 10.34%
DFS Remaining%: 59.52%
Last contact: Fri Jan 31 18:55:21 IST 2014

Name: 10.139.9.236:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 4543819776 (4.23 GB)
Non DFS Used: 8669548544 (8.07 GB)
DFS Remaining: 69223391232 (64.47 GB)
DFS Used%: 5.51%
DFS Remaining%: 83.97%
Last contact: Fri Jan 31 18:55:19 IST 2014

Name: 10.139.9.235:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 5092986880 (4.74 GB)
Non DFS Used: 8669454336 (8.07 GB)
DFS Remaining: 68674318336 (63.96 GB)
DFS Used%: 6.18%
DFS Remaining%: 83.31%
Last contact: Fri Jan 31 18:55:19 IST 2014

Name: 10.139.9.237:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 4604301312 (4.29 GB)
Non DFS Used: 11005788160 (10.25 GB)
DFS Remaining: 66826670080 (62.24 GB)
DFS Used%: 5.59%
DFS Remaining%: 81.06%
Last contact: Fri Jan 31 18:55:18 IST 2014

Name: 10.139.9.234:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 4277760000 (3.98 GB)
Non DFS Used: 12124221440 (11.29 GB)
DFS Remaining: 66034778112 (61.5 GB)
DFS Used%: 5.19%
DFS Remaining%: 80.1%
Last contact: Fri Jan 31 18:55:18 IST 2014

Name: 10.139.9.231:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 1024864256 (977.39 MB)
Non DFS Used: 36776636416 (34.25 GB)
DFS Remaining: 44635258880 (41.57 GB)
DFS Used%: 1.24%
DFS Remaining%: 54.14%
Last contact: Fri Jan 31 18:55:20 IST 2014

From: Jitendra Yadav [mailto:jeetuyadav200890@gmail.com]
Sent: Friday, January 31, 2014 6:58 PM
To: user
Subject: Re: java.io.FileNotFoundException: http://HOSTNAME:50070/getimage?getimage=1

Hi,

Please post the output of the dfs report command; this could help us understand the cluster health.

# hadoop dfsadmin -report

Thanks
Jitendra
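The report above also shows under-replicated, corrupt, and missing blocks; a block-level listing from the stock 1.x fsck tool (a sketch, run with the same CLI as dfsadmin above) would show which files those belong to:

    # Walk the namespace and print per-file block health,
    # including the datanodes holding each replica
    hadoop fsck / -files -blocks -locations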
On Fri, Jan 31, 2014 at 6:44 PM, Stuti Awasthi <stutiawasthi@hcl.com> wrote:

Hi All,

I have suddenly started facing an issue on my Hadoop cluster. It seems that HTTP requests to port 50070 on dfs are not working properly.
I have a Hadoop cluster which has been operating for several days. Recently we are also not able to see the dfshealth.jsp page from the web console.

Problems:

1. http://<Hostname>:50070/dfshealth.jsp shows the following error:

HTTP ERROR: 404
Problem accessing /. Reason:
NOT_FOUND

2. SNN is not able to roll edits:

ERROR in SecondaryNameNode log:

java.io.FileNotFoundException: http://HOSTNAME:50070/getimage?getimage=1
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1401)
        at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:160)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$3.run(SecondaryNameNode.java:347)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$3.run(SecondaryNameNode.java:336)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.downloadCheckpointFiles(SecondaryNameNode.java:336)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:411)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:312)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:275)

ERROR in NameNode log:

2014-01-31 18:15:12,046 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 10.139.9.231
2014-01-31 18:15:12,046 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Cannot roll edit log, edits.new files already exists in all healthy directories:
  /usr/lib/hadoop/storage/dfs/nn/current/edits.new

NameNode logs which suggest that the web server started successfully on 50070:

2014-01-31 14:42:35,208 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2014-01-31 14:42:35,209 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2014-01-31 14:42:35,209 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2014-01-31 14:42:35,378 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: HOSTNAME:50070

hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/usr/lib/hadoop/storage/dfs/nn</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/usr/lib/hadoop/storage/dfs/dn</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.http.address</name>
        <value>HOSTNAME:50070</value>
    </property>
    <property>
        <name>dfs.secondary.http.address</name>
        <value>HOSTNAME:50090</value>
    </property>
    <property>
        <name>fs.checkpoint.dir</name>
        <value>/usr/lib/hadoop/storage/dfs/snn</value>
    </property>
</configuration>

/etc/hosts (note: I have also tried commenting out the 127.0.0.1 entry in the hosts file, but the issue was not resolved):

127.0.0.1       localhost

IP1    Hostname1         # NameNode - vm01 - itself
IP2    Hostname2         # DataNode - vm02
........

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Note: all Hadoop daemons are running fine and jobs are executing properly.

How can I resolve this issue? I have tried many options suggested on different forums but am still facing the same problem.
I believe this can cause a major problem later, as my edits are not getting rolled into the fsimage; this could mean data loss in case of a failure.
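One way to narrow down a 404 like this (a diagnostic sketch; assumes curl and netstat are available on the NameNode host, with HOSTNAME as above) is to hit the getimage servlet directly and confirm what is actually listening on 50070:

    # Fetch the image exactly as the SNN does; an HTML 404 page here means
    # the servlet context is broken even though the port itself is open
    curl -v "http://HOSTNAME:50070/getimage?getimage=1" -o /tmp/fsimage.test

    # Confirm the process bound to 50070 is really the NameNode's Jetty
    netstat -tlnp | grep 50070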
Please suggest.

Thanks
Stuti
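Once the getimage endpoint answers again, the stuck edits.new can usually be merged by forcing a checkpoint from the SNN; a sketch using the stock 1.x command, with the nn directory taken from the hdfs-site.xml above:

    # Force an immediate checkpoint regardless of fs.checkpoint.period
    hadoop secondarynamenode -checkpoint force

    # Afterwards nn/current should hold a fresh fsimage and a small edits
    # file, and edits.new should be gone
    ls -l /usr/lib/hadoop/storage/dfs/nn/current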