hadoop-user mailing list archives

From Bertrand Dechoux <decho...@gmail.com>
Subject Re: Not able to place enough replicas in Reduce
Date Mon, 24 Sep 2012 15:54:25 GMT
And do you have any remaining space in your HDFS? (Or do you have a quota?
But then the message should be different, I guess.)
What metrics do you get from the namenode? Are all datanodes (you have
only one?) live?

http://localhost:50070/dfshealth.jsp

As long as you only consume (map) you don't need much space in HDFS, but
when you produce (reduce) you definitely need some.
As Ted pointed out, your error is a standard one that occurs when Hadoop is
unable to replicate a block. It should not be related to the reduce itself,
and even less to your specific logic.
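For reference, the same capacity and datanode information shown on the
dfshealth.jsp page can be pulled from the command line. A minimal sketch,
assuming a Hadoop 1.x installation with `hadoop` on the PATH; the output
path below is hypothetical:

```shell
# Report configured capacity, DFS used/remaining, and the list of
# live vs. dead datanodes (same data as the namenode web UI).
hadoop dfsadmin -report

# If quotas might be involved, check the space quota on the job's
# output directory (replace with your actual path).
hadoop fs -count -q /user/yanglin/output
```

If "DFS Remaining" is near zero, or the datanode is listed as dead, the
"could only be replicated to 0 nodes" error would follow as soon as the
reduce starts writing output.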

Regards

Bertrand

On Mon, Sep 24, 2012 at 5:41 PM, Jason Yang <lin.yang.jason@gmail.com>wrote:

> Hi, Ted
>
> here is the result of jps:
> yanglin@ubuntu:~$ jps
> 3286 TaskTracker
> 14053 Jps
> 2623 DataNode
> 2996 JobTracker
> 2329 NameNode
> 2925 SecondaryNameNode
> ---
> It seems that the DN is working.
>
> And it does not fail immediately on entering the reduce phase; actually it
> always fails after processing some data
>
>
> 2012/9/24 Steve Loughran <stevel@hortonworks.com>
>
>>
>>
>> On 24 September 2012 15:47, Ted Reynolds <tedr@hortonworks.com> wrote:
>>
>>> Jason,
>>>
>>> The line in the JobTracker log - "Could only be replicated to 0 nodes,
>>> instead of 1" points to a problem with your data node.  It generally means
>>> that your DataNode is either down or not functioning correctly.  What is
>>> the output of the "jps" command?  ("jps" is found in <JAVA_HOME>/bin.)
>>>
>>>
>>
>> see also: http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo
>>
>> -steve
>>
>
>
>
> --
> YANG, Lin
>
>


-- 
Bertrand Dechoux
