hadoop-hdfs-issues mailing list archives

From "Frode Halvorsen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7787) Split QUEUE_HIGHEST_PRIORITY in UnderReplicatedBlocks to give more priority to blocks on nodes being decomissioned
Date Sun, 15 Feb 2015 12:33:11 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14321963#comment-14321963 ]

Frode Halvorsen commented on HDFS-7787:
---------------------------------------

I just did a log analysis of the decommissioning node and looked at what it actually started
to replicate during a ten-minute period. I filtered on the log lines for 'Starting thread
to transfer' and counted them by whether the block was replicated to one, two or three nodes
(blocks with 2, 1 and 0 live replicas, respectively); a rough sketch of the counting follows
the numbers below. It started 5036 threads during the 10 minutes I looked at:
53 blocks to one node (2 live replicas in the cluster)
3127 blocks to two nodes (blocks with one live replica)
1856 blocks to three nodes (blocks with no live replicas)
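
Roughly, the counting was along these lines (just a sketch; it assumes the DataNode log line lists the target datanodes comma-separated after the last ' to ', and the class/file names are only placeholders):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Stream;

public class TransferTargetCount {
  public static void main(String[] args) throws IOException {
    // Buckets: number of target datanodes -> number of transfer threads started.
    Map<Integer, Long> byTargets = new TreeMap<>();
    try (Stream<String> lines = Files.lines(Paths.get(args[0]))) {
      lines.filter(l -> l.contains("Starting thread to transfer"))
           .forEach(l -> {
             // Assumption: targets are listed after the last " to ", separated by commas.
             String targets = l.substring(l.lastIndexOf(" to ") + 4);
             byTargets.merge(targets.split(",").length, 1L, Long::sum);
           });
    }
    byTargets.forEach((n, count) ->
        System.out.println(count + " blocks transferred to " + n + " node(s)"));
  }
}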


Of course this is a problem for me, as I won't be able to kill the node completely before all
blocks with no live replicas have been transferred. There are still 3.3 million of them, and at
this rate I won't be able to kill the node for another week and a half :(
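
For context: if I read the 2.6 code right, UnderReplicatedBlocks.getPriority() puts both cases into the same QUEUE_HIGHEST_PRIORITY, which would explain the mix above. A simplified sketch (paraphrased, not an exact copy of the real method):

/** Simplified sketch of how UnderReplicatedBlocks assigns priority queues:
 *  a block whose only copies sit on a decommissioning node and a block that
 *  still has one live replica both land in QUEUE_HIGHEST_PRIORITY. */
public class PrioritySketch {
  static final int QUEUE_HIGHEST_PRIORITY = 0;
  static final int QUEUE_VERY_UNDER_REPLICATED = 1;
  static final int QUEUE_UNDER_REPLICATED = 2;
  static final int QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3;
  static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;

  static int getPriority(int curReplicas, int decommissionedReplicas, int expectedReplicas) {
    if (curReplicas >= expectedReplicas) {
      return QUEUE_REPLICAS_BADLY_DISTRIBUTED;     // enough replicas, just badly placed
    } else if (curReplicas == 0) {
      // No live replicas: highest priority only if a decommissioning copy still exists.
      return decommissionedReplicas > 0 ? QUEUE_HIGHEST_PRIORITY : QUEUE_WITH_CORRUPT_BLOCKS;
    } else if (curReplicas == 1) {
      return QUEUE_HIGHEST_PRIORITY;               // one live replica: same queue as above
    } else if (curReplicas * 3 < expectedReplicas) {
      return QUEUE_VERY_UNDER_REPLICATED;
    } else {
      return QUEUE_UNDER_REPLICATED;
    }
  }

  public static void main(String[] args) {
    // Both of these come back as QUEUE_HIGHEST_PRIORITY (0):
    System.out.println(getPriority(0, 3, 3));  // only copies are on decommissioning nodes
    System.out.println(getPriority(1, 0, 3));  // one live replica elsewhere
  }
}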


> Split QUEUE_HIGHEST_PRIORITY in UnderReplicatedBlocks to give more priority to blocks on nodes being decomissioned
> ------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-7787
>                 URL: https://issues.apache.org/jira/browse/HDFS-7787
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 2.6.0
>         Environment: 2 namenodes HA, 6 datanodes in two racks
>            Reporter: Frode Halvorsen
>              Labels: balance, hdfs, replication-performance
>
> Each file has a setting of 3 replicas, split across different racks.
> After a simulated crash of one rack (shutdown of all its nodes, deletion of the data directories, and restart of the nodes) and decommission of one of the nodes in the other rack, replication does not follow the 'normal' rules...
> My cluster has approx. 25 million files, and the one node I am now trying to decommission has 9 million under-replicated blocks and 3.5 million blocks with 'no live replicas'. After a restart of the node, it starts to replicate both types of blocks, but after a while it only replicates under-replicated blocks that have other live copies. I would think the 'normal' way to do this would be to make sure that all blocks this node keeps the only copy of are the first to be replicated/balanced?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
