hadoop-hdfs-issues mailing list archives

From "Dmytro Molkov (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HDFS-1300) Decommissioning nodes does not increase replication priority
Date Wed, 14 Jul 2010 18:35:49 GMT

     [ https://issues.apache.org/jira/browse/HDFS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dmytro Molkov updated HDFS-1300:
--------------------------------

    Attachment: HDFS-1300.patch

Please have a look at the patch.
Instead of inserting the block only when it is not already in neededReplications, we should
handle it the same way we handle dead datanodes: update the block so it moves up in the
priority queues as more of its nodes get decommissioned.
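
To make the intent concrete, here is a minimal, self-contained sketch covering both the old
insert-if-absent path described in the report and the proposed update path. The class and
method names are illustrative only; they are not the actual UnderReplicatedBlocks /
neededReplications API, and the priority heuristic is an assumption made for the sketch.

{code:java}
// Self-contained sketch of the queueing idea; names are illustrative only,
// not the real HDFS UnderReplicatedBlocks/neededReplications API.
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

class NeededReplicationsSketch {
  // Three buckets, lower index = higher priority, mirroring the bucketed
  // priority queues the namenode keeps for under-replicated blocks.
  private final List<Set<String>> queues = new ArrayList<Set<String>>();

  NeededReplicationsSketch() {
    for (int i = 0; i < 3; i++) {
      queues.add(new LinkedHashSet<String>());
    }
  }

  // Priority heuristic used only for this sketch: a block whose only replicas
  // sit on decommissioning nodes is the most urgent.
  private int priority(int live, int decommissioning, int expected) {
    if (live == 0 && decommissioning > 0) {
      return 0;                       // highest priority: block at risk
    }
    return (live * 3 < expected) ? 1 : 2;
  }

  // Old behaviour being fixed: insert only if absent, so a block queued while
  // it still had live replicas keeps its stale, low priority forever.
  void addIfAbsent(String block, int live, int decommissioning, int expected) {
    for (Set<String> q : queues) {
      if (q.contains(block)) {
        return;                       // already queued -> priority never revisited
      }
    }
    queues.get(priority(live, decommissioning, expected)).add(block);
  }

  // Behaviour the patch moves to (same as the dead-datanode path): always
  // recompute the priority and move the block into the right queue.
  void update(String block, int live, int decommissioning, int expected) {
    for (Set<String> q : queues) {
      q.remove(block);
    }
    queues.get(priority(live, decommissioning, expected)).add(block);
  }
}
{code}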

> Decommissioning nodes does not increase replication priority
> ------------------------------------------------------------
>
>                 Key: HDFS-1300
>                 URL: https://issues.apache.org/jira/browse/HDFS-1300
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.20-append, 0.20.1, 0.20.2, 0.20.3, 0.21.0, 0.22.0
>            Reporter: Dmytro Molkov
>            Assignee: Dmytro Molkov
>             Fix For: 0.22.0
>
>         Attachments: HDFS-1300.patch
>
>
> Currently, when you decommission a node, each block is inserted into neededReplications
> only if it is not there yet. This causes a problem: a block can sit in a low-priority queue
> even when all of its replicas are on the nodes being decommissioned.
> The common use case for decommissioning nodes for us is to proactively exclude them before
> they go bad, so it would be great to get the blocks at risk onto live datanodes as quickly
> as possible.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

