hadoop-hdfs-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Throttle replication speed in case of datanode failure
Date Thu, 17 Jan 2013 19:04:10 GMT
You can limit the bandwidth (in bytes/second) via the
dfs.balance.bandwidthPerSec property in each DN's hdfs-site.xml. The default
is 1 MB/s (1048576).
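As a sketch, raising the limit to 10 MB/s would look like this in each
datanode's hdfs-site.xml (the 10485760 value here is just an illustrative
choice, not a recommendation):

```xml
<!-- hdfs-site.xml on each datanode; requires a DN restart to take effect -->
<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <!-- bytes per second; 10485760 = 10 MB/s -->
  <value>10485760</value>
</property>
```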

Also, I'm unsure whether your version already has it, but the limit can be
applied at runtime as well, via the dfsadmin -setBalancerBandwidth command.
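If your 1.x build includes it, the runtime form would look something like
the following (the 10485760 value is again just an example):

```shell
# Set the per-datanode bandwidth limit to 10 MB/s cluster-wide,
# without restarting datanodes. Takes bytes per second as its argument.
hadoop dfsadmin -setBalancerBandwidth 10485760
```

Note this changes the value only in the running datanodes; it does not
persist across a restart, so you would still want the hdfs-site.xml
setting for a permanent change.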

On Thu, Jan 17, 2013 at 8:11 PM, Brennon Church <atarukun@gmail.com> wrote:

> Hello,
> Is there a way to throttle the speed at which under-replicated blocks are
> copied across a cluster?  Either limiting the bandwidth or the number of
> blocks per time period would work.
> I'm currently running Hadoop v1.0.1.  I think the
> dfs.namenode.replication.work.multiplier.per.iteration option would do the
> trick, but that is in v1.1.0 and higher.
> Thanks.
> --Brennon

Harsh J
