hadoop-common-user mailing list archives

From Alex Loddengaard <a...@cloudera.com>
Subject Re: Delete replicated blocks?
Date Thu, 27 Aug 2009 18:00:21 GMT
I don't know for sure, but running the rebalancer might do this for you.

<http://hadoop.apache.org/common/docs/r0.20.0/hdfs_user_guide.html#Rebalancer>
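
If you want to try it, the balancer can be kicked off from the command
line roughly like this (the -threshold value is only an example; it is
the allowed per-datanode utilization deviation in percent and defaults
to 10):

hadoop balancer -threshold 10

or via the wrapper script that ships with Hadoop:

bin/start-balancer.sh -threshold 10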

Alex

On Thu, Aug 27, 2009 at 9:18 AM, Michael Thomas <thomas@hep.caltech.edu> wrote:

> dfs.replication is only used by the client at the time the files are
> written.  Changing this setting will not automatically change the
> replication level on existing files.  To do that, you need to use the
> hadoop cli:
>
> hadoop fs -setrep -R 1 /
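>
> To confirm the excess replicas actually get removed, fsck should help;
> the summary format can differ a bit between versions, but it reports an
> over-replicated block count that should drop back toward zero once the
> namenode has scheduled deletion of the extra copies:
>
> hadoop fsck /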
>
> --Mike
>
>
> Vladimir Klimontovich wrote:
> > This will happen automatically.
> > On Aug 27, 2009, at 6:04 PM, Andy Liu wrote:
> >
> >> I'm running a test Hadoop cluster, which had a dfs.replication value
> >> of 3.
> >> I'm now running out of disk space, so I've reduced dfs.replication to
> >> 1 and
> >> restarted my datanodes.  Is there a way to free up the over-replicated
> >> blocks, or does this happen automatically at some point?
> >>
> >> Thanks,
> >> Andy
> >
> > ---
> > Vladimir Klimontovich,
> > skype: klimontovich
> > GoogleTalk/Jabber: klimontovich@gmail.com
> > Cell phone: +7926 890 2349
> >
>
>
