hadoop-common-user mailing list archives

From Brian Bockelman <bbock...@cse.unl.edu>
Subject Re: how do i force replication?
Date Thu, 19 Nov 2009 18:25:17 GMT
Hey Mike,

1) What was the initial replication factor requested?  It will always stay at that level until
you request a new one.
2) To change a file's replication manually, I think the command is "hadoop fs -setrep" or something
like that.  Don't trust what I wrote, trust the help output.
3) If a file is stuck at 1 replica, it usually means that HDFS is trying to replicate the
block, but for some reason the datanode can't or won't send it to another datanode.  I've found
that things like a network partition, disk-level corruption, or block truncation can cause this.
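For point 2, a quick sketch of checking and raising a file's replication with the FsShell -- the path here is a made-up example, so substitute your own:

```shell
# Check the current replication factor of the file
# (it appears in the second column of the listing):
hadoop fs -ls /user/mike/data.txt

# Raise the replication factor to 3 and wait (-w) until
# the namenode reports the new replicas are in place:
hadoop fs -setrep -w 3 /user/mike/data.txt
```

Run `hadoop fs -help setrep` on your own cluster to confirm the exact flags for your release.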

Grep the NN logs for the block ID -- you'll quickly be able to determine whether the NN is
repeatedly trying to replicate and failing for some reason.  Then, discover what datanode
holds the block (or one of the attempted destination nodes) and grep its log for errors.
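The log-chasing steps above might look like this in practice -- the block ID and log paths are hypothetical examples, and your distribution may put its logs elsewhere:

```shell
# Find the file's block IDs and which datanodes hold (or should hold) them:
hadoop fsck /user/mike/data.txt -files -blocks -locations

# Grep the namenode log for one of those block IDs to see
# whether replication is being retried and failing:
grep 'blk_8482736482' $HADOOP_HOME/logs/*namenode*.log

# Then, on the datanode that holds the replica, look for
# errors mentioning the same block:
grep 'blk_8482736482' $HADOOP_HOME/logs/*datanode*.log
```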

Good luck.


On Nov 19, 2009, at 11:58 AM, Mike Kendall wrote:

> everything online says that replication will be taken care of automatically,
> but i've had a file (that i uploaded through the put command on one node)
> sitting with a replication of 1 for three days.
