hadoop-hdfs-user mailing list archives

From "Xu, Richard" <richard...@citi.com>
Subject RE: Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 cluster
Date Tue, 31 May 2011 14:36:09 GMT
1 namenode, 1 datanode. dfs.replication=3. We also tried values of 0, 1, and 2; same result.

From: Yaozhen Pan [mailto:itzhak.pan@gmail.com]
Sent: Tuesday, May 31, 2011 10:34 AM
To: hdfs-user@hadoop.apache.org
Subject: Re: Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 cluster


How many datanodes are in your cluster, and what is the value of "dfs.replication" in hdfs-site.xml
(if not specified, the default value is 3)?

From the error log, it seems there are not enough datanodes to replicate the files in hdfs.
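
For a single-datanode cluster, the usual fix is to set the replication factor to 1 so that every block can be fully replicated. A minimal sketch of the relevant hdfs-site.xml fragment (standard Hadoop 0.20 property names; the file path assumes the default conf/ layout):

```xml
<!-- conf/hdfs-site.xml: with only one datanode, a replication
     factor greater than 1 can never be satisfied. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

Note that this only affects files written after the change; files already in HDFS keep the replication factor they were created with unless it is changed explicitly with `hadoop fs -setrep`.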
On May 31, 2011 at 22:23, "Harsh J" <harsh@cloudera.com<mailto:harsh@cloudera.com>> wrote:
Xu,

Could you post the output of `hadoop dfsadmin -report` and attach the
tail of a started DN's log?

On Tue, May 31, 2011 at 7:44 PM, Xu, Richard <richard.xu@citi.com<mailto:richard.xu@citi.com>>
wrote:
> 2. Also, Configured Cap...
This might easily be the cause. I'm not sure if it's a Solaris thing
that can lead to this, though.

> 3. in datanode server, no error in logs, but tasktracker logs has the following suspicious
thing:...
I don't see any suspicious log message in what you posted. Anyhow,
the TT does not matter here.

--
Harsh J