hadoop-hdfs-issues mailing list archives

From "Jason Lowe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9434) Recommission a datanode with 500k blocks may pause NN for 30 seconds
Date Tue, 24 Nov 2015 21:15:11 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025422#comment-15025422 ]

Jason Lowe commented on HDFS-9434:

This broke the 2.6 build.  The patch assumes SLF4J, but that conversion hasn't happened on the 2.6 branch.

> Recommission a datanode with 500k blocks may pause NN for 30 seconds
> --------------------------------------------------------------------
>                 Key: HDFS-9434
>                 URL: https://issues.apache.org/jira/browse/HDFS-9434
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Tsz Wo Nicholas Sze
>             Fix For: 2.6.3
>         Attachments: h9434_20151116.patch
> In BlockManager, processOverReplicatedBlocksOnReCommission is called while holding the namespace
> lock.  A (not very useful) log message is printed in processOverReplicatedBlock for each block.
> When a storage holds a large number of blocks, printing this message per block can prevent the NN
> from processing any other operations.  We have seen it pause the NN for 30 seconds for a storage
> with 500k blocks.
> I suggest changing the log message to trace level as a quick fix.
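
The quick fix described above boils down to demoting a per-block log line to a level that is disabled by default, so a recommission scan over hundreds of thousands of blocks no longer spends time formatting log messages while holding the namespace lock. Below is a minimal, self-contained sketch of that pattern using java.util.logging (the actual BlockManager code uses a different logging API, and the names processBlock and messagesFormatted are illustrative, not from the patch):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class TraceLevelDemo {
    private static final Logger LOG =
            Logger.getLogger(TraceLevelDemo.class.getName());

    // Counts how many log messages were actually built, to show
    // the cost that the guard avoids.
    static int messagesFormatted = 0;

    static void processBlock(long blockId) {
        // Guarding with isLoggable avoids even constructing the message
        // string when the level is disabled -- the key saving when this
        // runs once per block under the namespace lock.
        if (LOG.isLoggable(Level.FINEST)) { // FINEST ~ trace level
            messagesFormatted++;
            LOG.finest("Processing over-replicated block " + blockId);
        }
    }

    public static void main(String[] args) {
        // Default production setting: trace (FINEST) disabled.
        LOG.setLevel(Level.INFO);
        for (long b = 0; b < 500_000; b++) {
            processBlock(b);
        }
        System.out.println("messages formatted: " + messagesFormatted);
    }
}
```

With the level guard in place, all 500k iterations skip message construction entirely; at the old (always-on) level, each iteration would have built a string and written a line while the lock was held.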

This message was sent by Atlassian JIRA
