hadoop-hdfs-user mailing list archives

From "Basu,Indrashish" <indrash...@ufl.edu>
Subject Error putting files in the HDFS
Date Tue, 08 Oct 2013 17:42:45 GMT

Hello,

My name is Indrashish Basu, and I am a Master's student in the Department 
of Electrical and Computer Engineering.

I am currently doing my research project on a Hadoop implementation for 
the ARM processor, and I am facing an issue while trying to run sample 
Hadoop code on it. Every time I try to put files into HDFS, I get the 
error below.
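
For reference, the put command I am running is something like this (the 
local source path is from my setup and is shown here only for 
illustration):

bin/hadoop dfs -put cpu-kmeans2D /user/root/bin/cpu-kmeans2D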


13/10/07 11:31:29 WARN hdfs.DFSClient: DataStreamer Exception: 
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/bin/cpu-kmeans2D could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

    at org.apache.hadoop.ipc.Client.call(Client.java:739)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at com.sun.proxy.$Proxy0.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at com.sun.proxy.$Proxy0.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)

13/10/07 11:31:29 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
13/10/07 11:31:29 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/root/bin/cpu-kmeans2D" - Aborting...
put: java.io.IOException: File /user/root/bin/cpu-kmeans2D could only be replicated to 0 nodes, instead of 1


I tried resetting the NameNode and DataNode by deleting all the old 
logs on the master and slave nodes, as well as the folders under 
/app/hadoop/. I then formatted the NameNode and started the processes 
again (bin/start-all.sh), but still had no luck.

After the restart I generated the admin report (pasted below); it seems 
the DataNode is not starting.

root@tegra-ubuntu:~/hadoop-gpu-master/hadoop-gpu-0.20.1# bin/hadoop 
dfsadmin -report
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)


I have tried the following steps to debug the problem:

1) I went to the Hadoop home directory and removed all the old logs 
(rm -rf logs/*).

2) Next, I deleted the contents of the Hadoop data directory on all my 
master and slave nodes (rm -rf /app/hadoop/*).

3) I formatted the NameNode (bin/hadoop namenode -format).

4) I started all the processes: first the NameNode and DataNode, and 
then MapReduce. I ran jps on the terminal to confirm that all the 
daemons (NameNode, DataNode, JobTracker, TaskTracker) were up and 
running; the exact sequence is sketched after this list.

5) Having done this, I recreated the directories in the DFS.
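
For step 4, the sequence was roughly the following (using the stock 
start scripts from the 0.20 layout; the jps listing is what I expect 
to see on a healthy single-node setup, not a verbatim paste):

bin/start-dfs.sh      # starts the NameNode and DataNode daemons
bin/start-mapred.sh   # starts the JobTracker and TaskTracker daemons
jps                   # should list NameNode, DataNode, SecondaryNameNode,
                      # JobTracker, TaskTracker and Jps itself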

However, I still had no luck with the process.


Could you kindly assist with this? I am new to Hadoop and have no idea 
how to proceed.




Regards,

-- 
Indrashish Basu
Graduate Student
Department of Electrical and Computer Engineering
University of Florida
