hadoop-common-user mailing list archives

From "Hairong Kuang" <hair...@yahoo-inc.com>
Subject RE: "could only be replicated to 0 nodes, instead of 1"
Date Wed, 05 Dec 2007 17:38:33 GMT
Check http://namenode_host:50070/dfshealth.jsp to see whether your cluster
is out of safe mode and how many datanodes are up.

You could also check the .out/.log files under the log directory to see if
there was any error starting the datanodes or namenode.
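The same checks can be done from the command line with dfsadmin; a quick sketch, assuming the stock bin/ scripts are run on the namenode host:

```shell
# Report how many datanodes the namenode can see, plus capacity per node.
bin/hadoop dfsadmin -report

# Check whether the namenode is still in safe mode.
bin/hadoop dfsadmin -safemode get
```

If -report shows zero live datanodes, no datanode has registered with the namenode, which is exactly the condition that produces the "replicated to 0 nodes" error.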


-----Original Message-----
From: jerrro [mailto:jerrro@gmail.com] 
Sent: Wednesday, December 05, 2007 9:29 AM
To: hadoop-user@lucene.apache.org
Subject: Re: "could only be replicated to 0 nodes, instead of 1"

I did this several times, while tuning the configuration in all kinds of
ways... but still, nothing helped. Even when I stop everything, reformat,
and start it back up again, I get this error whenever I try to use dfs.

Jason Venner-2 wrote:
> This happens to me when the DFS has gotten into an inconsistent state.
> NOTE: you will lose all of the contents of your HDFS file system.
> What I have to do is stop dfs, remove the contents of the dfs 
> directories on all the machines, run hadoop namenode -format on the 
> controller, then restart dfs.
> That consistently fixes the problem for me. This may be serious 
> overkill, but it works.
> jerrro wrote:
>> I am trying to install/configure hadoop on a cluster with several 
>> computers.
>> I followed exactly the instructions in the hadoop website for 
>> configuring multiple slaves, and when I run start-all.sh I get no 
>> errors - both datanode and tasktracker are reported to be running 
>> (doing ps awux | grep hadoop on the slave nodes returns two java 
>> processes). Also, the log files are empty - nothing is printed there.

>> Still, when I try to use bin/hadoop dfs -put, I get the following 
>> error:
>> # bin/hadoop dfs -put w.txt w.txt
>> put: java.io.IOException: File /user/scohen/w4.txt could only be 
>> replicated to 0 nodes, instead of 1
>> and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).
>> I couldn't find much information about this error, but I did manage 
>> to see somewhere it might mean that there are no datanodes running. 
>> But as I said, start-all does not give any errors. Any ideas what 
>> could be the problem?
>> Thanks.
>> Jerr.
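Jason's wipe-and-reformat recovery, quoted above, looks roughly like this as shell commands; the dfs directory path here is an assumption (the 0.x default lives under /tmp) and should match the dfs.name.dir and dfs.data.dir values in your hadoop-site.xml:

```shell
# WARNING: this destroys everything stored in HDFS.
bin/stop-dfs.sh

# On the master and on every slave, remove the dfs directories.
# /tmp/hadoop-${USER}/dfs is the default location; yours may differ.
rm -rf /tmp/hadoop-${USER}/dfs

# Reformat the namenode (on the master only), then restart DFS.
bin/hadoop namenode -format
bin/start-dfs.sh
```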

