hadoop-hdfs-issues mailing list archives

From "Vinayakumar B (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10987) Make Decommission less expensive when lot of blocks present.
Date Wed, 12 Oct 2016 13:45:20 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15568760#comment-15568760 ]

Vinayakumar B commented on HDFS-10987:
--------------------------------------

bq. To be precise, the number of blocks doesn't have to be huge. It will yield if the number
is greater than the configured per-iteration-limit.
Yes, that's correct. But before this patch, the check against the per-iteration limit was done
only after checking all of a node's blocks. So yielding happened only after the current node's
block list was fully processed.

bq. When the sleep is interrupted, it should probably not ignore. It looks like it can simply
return.
Yes. Along with that, IMO we should also add 'namesystem.isRunning()' to the while-loop
condition in 'check()' so execution ends quickly on shutdown.
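The yielding behavior discussed above can be sketched roughly as below. This is an illustrative simulation, not the actual DecommissionManager code: the class, field names ('numBlocksPerCheck', 'running'), and the yield counting are all hypothetical stand-ins for the per-iteration limit, 'namesystem.isRunning()', and the lock release.

```java
import java.util.List;

// Hypothetical sketch: yield per block instead of per node, so the
// namesystem lock is released as soon as the per-iteration limit is hit.
public class DecommissionCheckSketch {
    private final int numBlocksPerCheck;     // stand-in for the per-iteration limit
    private int numBlocksChecked = 0;
    private volatile boolean running = true; // stand-in for namesystem.isRunning()

    public DecommissionCheckSketch(int numBlocksPerCheck) {
        this.numBlocksPerCheck = numBlocksPerCheck;
    }

    /** Returns how many times the check yielded (released the lock). */
    public int check(List<List<String>> blocksPerNode) {
        int yields = 0;
        for (List<String> nodeBlocks : blocksPerNode) {
            for (String block : nodeBlocks) {
                if (!running) {
                    return yields;           // end execution fast on shutdown
                }
                // ... process one block under the namesystem lock ...
                numBlocksChecked++;
                // Checking the limit here, per block, yields sooner than
                // consulting it only after a whole node's block list.
                if (numBlocksChecked >= numBlocksPerCheck) {
                    yields++;                // release/re-acquire the lock here
                    numBlocksChecked = 0;
                    try {
                        Thread.sleep(1);     // let other namesystem users run
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return yields;       // simply return; don't ignore it
                    }
                }
            }
        }
        return yields;
    }
}
```

With a limit of 3 and nodes holding 4 and 2 blocks, the sketch yields after the 3rd and 6th blocks, i.e. twice, even though neither node's list alone had been a yield boundary before.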

> Make Decommission less expensive when lot of blocks present.
> ------------------------------------------------------------
>
>                 Key: HDFS-10987
>                 URL: https://issues.apache.org/jira/browse/HDFS-10987
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Brahma Reddy Battula
>            Assignee: Brahma Reddy Battula
>            Priority: Critical
>         Attachments: HDFS-10987.patch
>
>
> When a user wants to decommission a node that has 50M+ blocks, it can hold the namesystem
lock for a long time. We've seen it take 36+ seconds.
> As we know, during this time the Namenode is not available. Since decommissioning runs
continuously until all the blocks are replicated, the Namenode stays unavailable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

