hadoop-hdfs-issues mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly
Date Thu, 31 Mar 2011 00:03:05 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13013750#comment-13013750 ]

Hairong Kuang commented on HDFS-1172:
-------------------------------------

Thanks, Matt, for pointing out the additional memory benefit that this fix could provide. This
patch could also benefit datanode decommissioning.

> Blocks in newly completed files are considered under-replicated too quickly
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-1172
>                 URL: https://issues.apache.org/jira/browse/HDFS-1172
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.21.0
>            Reporter: Todd Lipcon
>            Assignee: Hairong Kuang
>         Attachments: HDFS-1172.patch, replicateBlocksFUC.patch, replicateBlocksFUC1.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't find an
> existing JIRA. It often happens that we see the NN schedule replication on the last block
> of files very quickly after they're completed, before the other DNs in the pipeline have a
> chance to report the new block. This results in a lot of extra replication work on the cluster,
> as we replicate the block and then end up with multiple excess replicas which are very quickly
> deleted.
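
The behavior described above suggests the general shape of a mitigation: don't treat the last block of a just-completed file as under-replicated until the pipeline datanodes have had a chance to report it. The sketch below is a minimal, hypothetical illustration of that idea only; the class, method, and grace-period constant are invented for this example and are not taken from the attached patches or from the actual NameNode code.

    // Hypothetical, simplified sketch -- not the HDFS-1172 patch itself.
    // Illustrates deferring the under-replication decision for a block in a
    // just-completed file until the write-pipeline datanodes have had time
    // to report the block, avoiding excess replicas that get deleted later.

    import java.util.concurrent.TimeUnit;

    class ReplicationCheckSketch {

      /** Hypothetical grace period after file completion before the last
       *  block may be queued as under-replicated. */
      private static final long COMPLETION_GRACE_MS =
          TimeUnit.SECONDS.toMillis(30);

      /**
       * Decide whether a block should be queued for replication.
       *
       * @param reportedReplicas replicas the NN has heard about so far
       * @param expectedReplicas the file's replication factor
       * @param pipelineSize     datanodes that were in the write pipeline
       * @param completionTimeMs wall-clock time the file was completed
       */
      static boolean shouldScheduleReplication(int reportedReplicas,
                                               int expectedReplicas,
                                               int pipelineSize,
                                               long completionTimeMs) {
        if (reportedReplicas >= expectedReplicas) {
          return false; // already fully replicated
        }
        long sinceCompletion = System.currentTimeMillis() - completionTimeMs;
        if (sinceCompletion < COMPLETION_GRACE_MS
            && reportedReplicas < pipelineSize) {
          // Pipeline DNs have not all reported yet; waiting avoids
          // scheduling copies that would become excess replicas.
          return false;
        }
        return true; // genuinely under-replicated
      }
    }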

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
