hadoop-hdfs-dev mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: What about over-replicated blocks in HDFS?
Date Wed, 22 Aug 2012 08:35:05 GMT
Ajit,

The NameNode takes care of over-replication on its own; you needn't
worry about over-replicated blocks or do anything manually.
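(Editor's note: a quick way to observe this, assuming you have cluster
access; the path below is a placeholder, substitute one of your own.
The fsck summary includes an "Over-replicated blocks" count, which
returns to 0 once the NameNode has scheduled the excess replicas for
deletion.)

```shell
# Report block-level health for a path. The summary printed at the
# end includes counts of under- and over-replicated blocks.
hdfs fsck /user/ajit/data -blocks

# On older Hadoop releases the same check is invoked as:
hadoop fsck /user/ajit/data -blocks
```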

On Wed, Aug 22, 2012 at 12:58 PM, Ajit Ratnaparkhi
<ajit.ratnaparkhi@gmail.com> wrote:
> Hi,
>
> This is about case where HDFS has some data blocks which are
> over-replicated.
>
> Scenario is discussed below,
> If one of the datanodes goes down, the NameNode will see some blocks as
> under-replicated and will start replicating them to bring their
> replication level back to the expected value. If the datanode that was
> down then comes back up without any data loss, there will be blocks
> with a higher replication level than expected. Does the NameNode itself
> take care of removing the extra replicas, or do we need to schedule the
> balancer for that?
>
> -Ajit



-- 
Harsh J
