hadoop-hdfs-issues mailing list archives

From "Mingliang Liu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11090) Leave safemode immediately if all blocks have reported in
Date Wed, 02 Nov 2016 17:32:58 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629742#comment-15629742

Mingliang Liu commented on HDFS-11090:

Does this mean we should leave safemode once all the blocks have been reported? I think sometimes
users intentionally set the threshold > 1 so that the NameNode never leaves safemode.
I think what Andrew suggests is not to leave safe mode if the threshold is > 1. I agree
with this.

The failing tests are related to this patch.

Please hold off on committing. I need more time to review the idea and the patch; we don't want
to leave safemode too early.

> Leave safemode immediately if all blocks have reported in
> ---------------------------------------------------------
>                 Key: HDFS-11090
>                 URL: https://issues.apache.org/jira/browse/HDFS-11090
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 2.7.3
>            Reporter: Andrew Wang
>            Assignee: Yiqun Lin
>         Attachments: HDFS-11090.001.patch
> Startup safemode is governed by two thresholds: the % of blocks reported in, and the min # of
> live datanodes. It's then extended by an interval (default 30s) after these two thresholds are met.
> Safemode extension is helpful when the cluster has data and the default % blocks threshold
> (0.99) is used. It gives DNs a little extra time to report in, thus avoiding unnecessary replication.
> However, we can leave startup safemode early if 100% of blocks have reported in.
> Note that operators sometimes change the % blocks threshold to > 1 to never automatically
> leave safemode. We should maintain this behavior.
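
The interaction of the thresholds discussed above can be sketched roughly as follows. This is a hypothetical simplification for illustration only, not the actual NameNode code (the real logic lives in the NameNode's safemode handling and is considerably more involved); the method name and parameters are made up for this sketch:

```java
// Hypothetical sketch of the safemode-exit decision being discussed.
// Not the actual HDFS implementation; names and signature are illustrative.
public class SafemodeCheck {
  /**
   * Decide whether startup safemode may be left, given the block-report
   * threshold percentage and the minimum-datanodes threshold.
   */
  public static boolean canLeaveSafemode(long blocksReported, long totalBlocks,
                                         double thresholdPct, int liveDatanodes,
                                         int minDatanodes) {
    // Operators may set the threshold > 1 so safemode is never left
    // automatically; this behavior must be preserved.
    if (thresholdPct > 1.0) {
      return false;
    }
    // The min-datanodes threshold must also be satisfied.
    if (liveDatanodes < minDatanodes) {
      return false;
    }
    // Proposed improvement: if literally every block has reported in,
    // leave immediately instead of waiting out the extension interval.
    if (totalBlocks > 0 && blocksReported == totalBlocks) {
      return true;
    }
    // Otherwise, the usual percentage threshold applies.
    return totalBlocks == 0
        || (double) blocksReported / totalBlocks >= thresholdPct;
  }
}
```

For example, with the default threshold of 0.99, `canLeaveSafemode(100, 100, 0.99, 3, 1)` would allow leaving immediately, while `canLeaveSafemode(100, 100, 1.5, 3, 1)` would not, matching the operator-override behavior described above.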

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
