hadoop-hdfs-issues mailing list archives

From "Ravi Prakash (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7787) Wrong priority of replication
Date Thu, 12 Feb 2015 18:37:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318723#comment-14318723 ]

Ravi Prakash commented on HDFS-7787:
------------------------------------

Frode!
The code for prioritizing under-replicated blocks is here: https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java#L149
{noformat}
 * <li>{@link #QUEUE_HIGHEST_PRIORITY}: the blocks that must be replicated
 *   first. That is blocks with only one copy, or blocks with zero live
 *   copies but a copy in a node being decommissioned. These blocks
 *   are at risk of loss if the disk or server on which they
 *   remain fails.</li>
{noformat}
It seems you want to split QUEUE_HIGHEST_PRIORITY into two queues: one for blocks "with only
one copy" and a more urgent one for blocks "with zero live copies but a copy in a node
being decommissioned". This seems reasonable to me. Please see if you can submit a patch;
it'd be much appreciated.
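
A minimal sketch of what that split might look like. The new constant name here is made up, not in trunk, and the remaining queue constants would shift down by one:
{noformat}
// Hypothetical sketch -- QUEUE_ONLY_DECOMMISSIONED_COPY is a made-up name.
// Blocks whose only remaining copies sit on decommissioning nodes would
// outrank blocks that still have one live copy.
static final int QUEUE_ONLY_DECOMMISSIONED_COPY = 0; // zero live copies
static final int QUEUE_HIGHEST_PRIORITY = 1;         // one live copy

private int getPriority(int curReplicas, int decommissionedReplicas,
                        int expectedReplicas) {
  if (curReplicas == 0 && decommissionedReplicas > 0) {
    return QUEUE_ONLY_DECOMMISSIONED_COPY; // replicate these before anything else
  } else if (curReplicas == 1) {
    return QUEUE_HIGHEST_PRIORITY;
  }
  // ... remaining cases as in the current implementation ...
  return QUEUE_UNDER_REPLICATED;
}
{noformat}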

You can change the rate of re-replication with configuration parameters: please look at
dfs.namenode.replication.interval, dfs.namenode.replication.work.multiplier.per.iteration,
etc. Could you please remove that point from the description of the JIRA?
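
For reference, a minimal hdfs-site.xml sketch. The values are illustrative examples, not tuning advice for your cluster; both are NameNode-side settings and take effect on restart:
{noformat}
<!-- hdfs-site.xml on the NameNode; values are illustrative only. -->
<property>
  <!-- Seconds between the NameNode's replication work computations
       (default: 3). -->
  <name>dfs.namenode.replication.interval</name>
  <value>3</value>
</property>
<property>
  <!-- Multiplier on the number of live datanodes that caps how many block
       transfers are scheduled per iteration (default: 2). -->
  <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
  <value>4</value>
</property>
{noformat}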

> Wrong priority of replication
> -----------------------------
>
>                 Key: HDFS-7787
>                 URL: https://issues.apache.org/jira/browse/HDFS-7787
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 2.6.0
>         Environment: 2 namenodes HA, 6 datanodes in two racks
>            Reporter: Frode Halvorsen
>              Labels: balance, hdfs, replication-performance
>
> Each file has a replication factor of 3, split across different racks.
> After a simulated crash of one rack (shutdown of all nodes, deletion of the data
> directories, and restart of the nodes) and decommissioning of one of the nodes in the
> other rack, replication does not follow the 'normal' rules...
> My cluster has approx. 25 million files, and the node I am now trying to decommission
> has 9 million under-replicated blocks and 3.5 million blocks with 'no live replicas'.
> After a restart of the node, it starts to replicate both types of blocks, but after a
> while it only replicates under-replicated blocks that have other live copies. I would
> think the 'normal' way to do this would be to make sure that all blocks for which this
> node keeps the only copy are the first to be replicated/balanced? Another thing is that
> this takes 'forever'. At the rate it's going now, it will run for a couple of months
> before I can take the node down for maintenance. It only has approx. 250 GB of data in
> total.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
