From: Mohammad Tariq <dontariq@gmail.com>
Date: Tue, 8 Oct 2013 23:51:30 +0530
Subject: Re: Error putting files in the HDFS
To: user@hadoop.apache.org

You don't have any more space left in your HDFS. Delete some old data or add additional storage.
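For example, something along these lines will show where the space is going and let you reclaim some of it (a rough sketch only; the path in the delete command is just a placeholder, and the commands are the 0.20-style ones you are already using):

    # overall capacity and live datanodes
    bin/hadoop dfsadmin -report

    # per-directory usage inside HDFS
    bin/hadoop fs -du /

    # remove data you no longer need (example path only)
    bin/hadoop fs -rmr /user/root/old-output

    # empty the trash so the blocks are actually freed
    bin/hadoop fs -expunge

If there is genuinely nothing left to delete, the other option is to give the datanode more room, for example by pointing dfs.data.dir in conf/hdfs-site.xml at a larger partition.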
Warm Regards,
Tariq
cloudfront.blogspot.com


On Tue, Oct 8, 2013 at 11:47 PM, Basu,Indrashish <indrashish@ufl.edu> wrote:


Hi,

Just to update on this, I have deleted all the old logs and files from the /tmp and /app/hadoop directories and restarted all the nodes. I now have 1 datanode available, as per the information below:

Configured Capacity: 3665985536 (3.41 GB)
Present Capacity: 24576 (24 KB)

DFS Remaining: 0 (0 KB)
DFS Used: 24576 (24 KB)
DFS Used%: 100%

Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 10.227.56.195:50010
Decommission Status : Normal
Configured Capacity: 3665985536 (3.41 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 3665960960 (3.41 GB)
DFS Remaining: 0(0 KB)
DFS Used%: 0%
DFS Remaining%: 0%
Last contact: Tue Oct 08 11:12:19 PDT 2013
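(A side note on the figures above: nearly all of the 3.41 GB is reported as "Non DFS Used", i.e. the datanode's local disk is filled by non-HDFS files, which is why DFS Remaining is 0 even though HDFS itself only holds 24 KB. Assuming dfs.data.dir sits under /app/hadoop/tmp as the datanode log further down suggests, something like this on the datanode box would show what is occupying the local disk; the paths are examples only:

    df -h /app/hadoop /tmp
    du -sh /app/hadoop/* /tmp/* 2>/dev/null

Whatever large non-Hadoop files turn up there would have to be removed or moved before HDFS can get its space back.)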


However, when I try putting the files back into HDFS, I get the same error as stated earlier. Do I need to clear some space for HDFS?

Regards,
Indrashish



On Tue, 08 Oct 2013 14:01:19 -0400, Basu,Indrashish wrote:
Hi Jitendra,

This is what I am getting in the datanode logs:

2013-10-07 11:27:41,960 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /app/hadoop/tmp/dfs/data is not formatted.
2013-10-07 11:27:41,961 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2013-10-07 11:27:42,094 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2013-10-07 11:27:42,099 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
2013-10-07 11:27:42,107 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2013-10-07 11:27:42,369 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-10-07 11:27:42,632 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2013-10-07 11:27:42,633 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2013-10-07 11:27:42,634 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2013-10-07 11:27:42,634 INFO org.mortbay.log: jetty-6.1.14
2013-10-07 11:31:29,821 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2013-10-07 11:31:29,843 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
2013-10-07 11:31:29,912 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
2013-10-07 11:31:29,922 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-10-07 11:31:29,922 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2013-10-07 11:31:29,934 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(tegra-ubuntu:50010, storageID=, infoPort=50075, ipcPort=50020)
2013-10-07 11:31:29,971 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: New storage id DS-1027334635-127.0.1.1-50010-1381170689938 is assigned to data-node 10.227.56.195:50010
2013-10-07 11:31:29,973 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.227.56.195:50010, storageID=DS-1027334635-127.0.1.1-50010-1381170689938, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/app/hadoop/tmp/dfs/data/current'}
2013-10-07 11:31:29,974 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2013-10-07 11:31:30,032 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 19 msecs
2013-10-07 11:31:30,035 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
2013-10-07 11:41:42,222 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 20 msecs
2013-10-07 12:41:43,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 22 msecs
2013-10-07 13:41:44,755 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 13 msecs


I restarted the datanode and made sure that it is up and running
(typed jps command).

Regards,
Indrashish

On Tue, 8 Oct 2013 23:25:25 +0530, Jitendra Yadav wrote:
As per your dfs report, the available DataNodes count is ZERO in your cluster.

Please check your data node logs.

Regards
Jitendra

On 10/8/13, Basu,Indrashish <indrashish@ufl.edu> wrote:

Hello,

My name is Indrashish Basu and I am a Master's student in the Department of Electrical and Computer Engineering.

Currently I am doing my research project on a Hadoop implementation for the ARM processor, and I am facing an issue while trying to run sample Hadoop source code on it. Every time I try to put files into HDFS, I get the error below.


13/10/07 11:31:29 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/bin/cpu-kmeans2D could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

        at org.apache.hadoop.ipc.Client.call(Client.java:739)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at com.sun.proxy.$Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at com.sun.proxy.$Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)

13/10/07 11:31:29 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
13/10/07 11:31:29 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/root/bin/cpu-kmeans2D" - Aborting...
put: java.io.IOException: File /user/root/bin/cpu-kmeans2D could only be replicated to 0 nodes, instead of 1
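(For context: the namenode raises this exception when it cannot find any datanode that is registered and has room to store the block. A quick way to confirm which of the two it is, assuming the default 0.20 log layout under the Hadoop home directory, is a sketch like:

    bin/hadoop dfsadmin -report
    grep "could only be replicated" logs/hadoop-*-namenode-*.log

A report showing 0 available datanodes, or 0 DFS Remaining on every datanode, points at the datanode side rather than at the client.)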


I tried recreating the namenode and datanode by deleting all the old logs on the master and the slave nodes, as well as the folders under /app/hadoop/, after which I formatted the namenode and started the processes again (bin/start-all.sh), but still no luck.

I tried generating the admin report (pasted below) after the restart; it seems the datanode is not getting started.

-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)

root@tegra-ubuntu:~/hadoop-gpu-master/hadoop-gpu-0.20.1# bin/hadoop dfsadmin -report
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: �%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)


I have tried the following methods to debug the process:

1) I went to the HADOOP home directory and removed all the old logs (rm -rf logs/*)

2) Next I deleted the contents of the directory on all my slave and
master nodes (rm -rf /app/hadoop/*)

3) I formatted the namenode (bin/hadoop namenode -format)

4) I started all the processes: first the namenode and datanode, and then MapReduce. I ran jps on the terminal to ensure that all the processes (NameNode, DataNode, JobTracker, TaskTracker) are up and running.

5) Having done this, I recreated the directories in DFS. (The full command sequence is recapped below.)
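For reference, steps 1-5 map roughly onto the following commands, run from the Hadoop home directory (a sketch of the sequence rather than new advice; /app/hadoop is the dfs/tmp location from the messages above, and the final mkdir path is only an example):

    rm -rf logs/*
    rm -rf /app/hadoop/*             # on every master and slave node
    bin/hadoop namenode -format
    bin/start-all.sh
    jps                              # NameNode, DataNode, JobTracker, TaskTracker should all show up
    bin/hadoop fs -mkdir /user/root  # recreate the working directories in DFS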

However still no luck with the process.


Can you kindly assist with this? I am new to Hadoop and have no idea how to proceed.




Regards,

--
Indrashish Basu
Graduate Student
Department of Electrical and Computer Engineering
University of Florida


--
Indrashish Basu
Graduate Student
Department of Electrical and Computer Engineering
University of Florida
