hadoop-hdfs-user mailing list archives

From "J. Ryan Earl" <...@jryanearl.us>
Subject Re: overreplicated blocks are not getting removed
Date Wed, 08 Aug 2012 04:53:06 GMT
Replication factor is per-file, and is set when the file is written.  When you
wrote the files, RF was 3.  Changing the -default- replication factor only
affects new files; it does not affect existing files.
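To change the RF of files that already exist, you have to set it on those files explicitly; the NN will then schedule deletion of the excess replicas. A minimal sketch (the path `/user/data` is just an example):

```shell
# Set replication factor to 2 on existing files, recursively;
# -w waits until the replication change has actually completed.
hadoop fs -setrep -w -R 2 /user/data

# Verify the per-file replication factor afterwards with fsck.
hdfs fsck /user/data -files -blocks
```

Once `setrep` has run, fsck and the UI should agree on RF=2 and the over-replicated block count should drop to zero.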

On Tue, Aug 7, 2012 at 10:23 AM, Brahma Reddy Battula <
brahmareddy.battula@huawei.com> wrote:

>  Hi All,
>
>  I ended up with over-replicated blocks which are not getting deleted.
>
>  I did the following:
>
>  Started hadoop cluster with three DN's
>
>  Wrote 1k files with RF (Replication Factor) = 3
>
>  Changed RF to 2 and excluded one DN from the cluster via decommission
>
>  After the decommission succeeded, added the same DN (which was excluded)
> back to the cluster (by removing its entry in the exclude file and running
> -refreshNodes)
>
>  In the UI I can see RF=2, but the fsck report shows RF=3 and all
> blocks as over-replicated.
>
>
>  i) I don't understand why the NN is not issuing delete commands
> for the over-replicated blocks.
>
>  ii) Why do fsck and the UI show different RFs for the same file?
>
>
>  Please correct me if I am wrong...
>
>  If this is a bug, should I file an issue?
>
>
>  Thanks And Regards
>
>  Brahma Reddy
>
