hadoop-common-user mailing list archives

From Mike Kendall <mkend...@justin.tv>
Subject Re: how do i force replication?
Date Thu, 19 Nov 2009 18:46:19 GMT
and setrep is a good tool to add to my arsenal.  thanks.

On Thu, Nov 19, 2009 at 10:28 AM, Michael Thomas <thomas@hep.caltech.edu> wrote:

> On 11/19/2009 10:25 AM, Brian Bockelman wrote:
>
>> Hey Mike,
>>
>> 1) What was the initial replication factor requested?  It will always stay
>> at that level until you request a new one.
>> 2) I think to manually change a file's replication it is "hadoop dfsadmin
>> -setrep" or something like that.  Don't trust what I wrote, trust the help
>> output.
>>
>
> To change the replication to 5:
>
> hadoop fs -setrep 5 $filename
>
> Or to change an entire directory recursively:
>
> hadoop fs -setrep -R 5 $filename
>
> --Mike
>
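To confirm the change actually took effect, the replication factor shows up in the second column of a directory listing, and fsck reports under- or over-replicated blocks. A sketch (the paths are placeholders, not from the thread):

```shell
# The second column of the listing is the file's replication factor
hadoop fs -ls /path/to/file

# fsck reports target vs. actual replication for each block
hadoop fsck /path/to/file -files -blocks -locations
```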
>
>  3) If a file is stuck at 1 replica, it usually means that HDFS is trying
>> to replicate the block, but for some reason, the datanode can't/won't send
>> it to another datanode.  I've found things like network partition,
>> disk-level corruption, or truncation can cause this.
>>
>> Grep the NN logs for the block ID -- you'll quickly be able to determine
>> whether the NN is repeatedly trying to replicate and failing for some
>> reason.  Then, discover what datanode holds the block (or one of the
>> attempted destination nodes) and grep its log for errors.
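Brian's steps might look roughly like this on the command line. A sketch only: the block ID and log paths below are placeholders, and the actual log locations depend on the install (check $HADOOP_LOG_DIR):

```shell
# 1) Find the block IDs backing the stuck file
hadoop fsck /path/to/file -files -blocks -locations

# 2) Grep the NameNode log for one of the reported block IDs
#    (blk_1234567890 and the log path are placeholders)
grep 'blk_1234567890' /var/log/hadoop/*namenode*.log

# 3) On the datanode that holds the block (or an attempted
#    destination), grep its log for the same block ID
grep 'blk_1234567890' /var/log/hadoop/*datanode*.log
```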
>>
>> Good luck.
>>
>> Brian
>>
>> On Nov 19, 2009, at 11:58 AM, Mike Kendall wrote:
>>
>>  everything online says that replication will be taken care of
>>> automatically,
>>> but i've had a file (that i uploaded through the put command on one node)
>>> sitting with a replication of 1 for three days.
>>>
>>
>>
>
>
