hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3034) Need to be able to evacuate a datanode
Date Tue, 18 Mar 2008 01:20:24 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12579695#action_12579695 ]

Konstantin Shvachko commented on HADOOP-3034:

Sounds like the decommission feature.

> Need to be able to evacuate a datanode
> --------------------------------------
>                 Key: HADOOP-3034
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3034
>             Project: Hadoop Core
>          Issue Type: Improvement
>            Reporter: Ted Dunning
> It would be very helpful if there were some way to evacuate data from one or more nodes.
> This scenario arises fairly often when several nodes need to be powered down at nearly
the same time.  Currently, they can only be taken down a few at a time (n-1 nodes at a time,
where n is the replication factor), and then you have to wait until all files on those nodes
have been replicated.
> One implementation would be to allow the nodes in question to be put into read-only
mode and to mark all blocks on those nodes as not counting as replicas.  This should cause
the namenode to re-replicate these blocks, and as soon as fsck shows no under-replicated files,
the nodes will be known to be clear for power-down.
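
The decommission feature Konstantin mentions covers this use case. As a rough sketch (property and file locations are the stock configuration names of this era and should be checked against the running release; the exclude-file path is a placeholder), the namenode is pointed at an exclude file listing the nodes to drain:

```xml
<!-- hadoop-site.xml: tell the namenode which file lists datanodes
     to decommission, one hostname per line. -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/path/to/excludes</value>
</property>
```

After adding the hostnames to the exclude file, running `hadoop dfsadmin -refreshNodes` tells the namenode to begin decommissioning; `hadoop dfsadmin -report` then shows those nodes as decommissioning in progress until their blocks are re-replicated elsewhere, at which point they can be safely powered down, with no n-1-at-a-time limit.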

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
