hadoop-hdfs-issues mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4257) The ReplaceDatanodeOnFailure policies could have a forgiving option
Date Tue, 02 Sep 2014 17:29:22 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14118393#comment-14118393 ]

Yongjun Zhang commented on HDFS-4257:

Hi [~szetszwo], thanks for the rev, it looks good! A few very minor comments:

1. Could we add a log message right after the call to {{this.dtpReplaceDatanodeOnFailure = ReplaceDatanodeOnFailure.get(conf);}},
to indicate which policy is in use? My concern is that a user may change the policy between sessions;
having a record in the log would let us tell which policy each session used.
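To illustrate the suggestion, here is a minimal standalone sketch (not the actual DFSClient code; the class, enum, and logger setup are stand-ins) of logging the policy right after it is read from the configuration:

```java
import java.util.logging.Logger;

// Hypothetical sketch: record the chosen policy at client construction time
// so the log shows which policy each session used.
public class PolicyLogSketch {
    private static final Logger LOG = Logger.getLogger(PolicyLogSketch.class.getName());

    enum Policy { DISABLE, NEVER, DEFAULT, ALWAYS }

    private final Policy dtpReplaceDatanodeOnFailure;

    PolicyLogSketch(Policy fromConf) {
        // stands in for: this.dtpReplaceDatanodeOnFailure = ReplaceDatanodeOnFailure.get(conf);
        this.dtpReplaceDatanodeOnFailure = fromConf;
        LOG.info("Using ReplaceDatanodeOnFailure policy: " + dtpReplaceDatanodeOnFailure);
    }

    Policy policy() {
        return dtpReplaceDatanodeOnFailure;
    }
}
```

A single INFO line at construction time is cheap and makes the per-session policy recoverable from the client log.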

2. In the {{satisfy(...)}} method of the Condition interface, {{DEFAULT}} has the "final" qualifier
on all parameters, but the other implementations don't. It'd be nice to be consistent; keeping
"final" everywhere gives both the benefit of "final" and code consistency.
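For concreteness, a small sketch of the consistency point: an interface method implemented with "final" on every parameter, using the "replication"/"nExistings" names. The interface shape and the short-circuit logic here are illustrative, not the actual HDFS source:

```java
// Illustrative sketch of the Condition interface and a DEFAULT-style
// implementation with "final" on all parameters for consistency.
interface Condition {
    boolean satisfy(short replication, int nExistings,
                    boolean isAppend, boolean isHflushed);
}

class ConditionSketch {
    static final Condition DEFAULT = new Condition() {
        @Override
        public boolean satisfy(final short replication, final int nExistings,
                               final boolean isAppend, final boolean isHflushed) {
            // Illustrative condition: only consider replacement when the
            // replication factor is at least 3, and then replace when fewer
            // than half the replicas remain, or on append/hflush.
            if (replication < 3) {
                return false;
            }
            return nExistings <= replication / 2 || isAppend || isHflushed;
        }
    };
}
```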

3. The comment section and parameter specification for {{static final Condition DEFAULT =
new Condition() {...}}} use the names "r", "n" and "replication", "nExistings" interchangeably. Can
we use "replication" and "nExistings" throughout, to be consistent with other places in the same file?

Thanks a lot.

> The ReplaceDatanodeOnFailure policies could have a forgiving option
> -------------------------------------------------------------------
>                 Key: HDFS-4257
>                 URL: https://issues.apache.org/jira/browse/HDFS-4257
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs-client
>    Affects Versions: 2.0.2-alpha
>            Reporter: Harsh J
>            Assignee: Tsz Wo Nicholas Sze
>            Priority: Minor
>         Attachments: h4257_20140325.patch, h4257_20140325b.patch, h4257_20140326.patch,
h4257_20140819.patch, h4257_20140831.patch
> Similar questions have previously come up over HDFS-3091 and friends, but the essential problem
is: "Why can't I write to my cluster of 3 nodes, when I just have 1 node available at a point
in time?"
> The policies cover 4 options, with {{Default}} being the default:
> {{Disable}} -> Disables the whole replacement concept by throwing an error (at
the server) or acting as {{Never}} at the client.
> {{Never}} -> Never replaces a DN upon pipeline failures (not too desirable in many
cases).
> {{Default}} -> Replaces based on a few conditions, but the minimum never touches
1. We always fail if only one DN remains and no others can be added.
> {{Always}} -> Replaces no matter what; fails if it can't replace.
> Would it not make sense to have an option similar to Always/Default where, despite _trying_,
we do not fail if it isn't possible to keep > 1 DN in the pipeline? I think that is what
the former write behavior was, and it fit with the minimum replication factor allowed value.
> Why is it grossly wrong to pass a write from a client for a block with just 1 remaining
replica in the pipeline (the minimum of 1 grows with the replication factor demanded by
the write), when replication is taken care of immediately afterwards? How often have we seen
missing blocks arise out of allowing this plus facing a big rack(s) failure or so?
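For context, the policy described above is chosen through client-side configuration. A sketch of the relevant hdfs-site.xml keys (property names as found in Hadoop 2.x hdfs-default.xml; the values shown are just one possible choice):

```xml
<!-- Illustrative hdfs-site.xml fragment, not a recommended production setting. -->
<configuration>
  <!-- Set to false to disable the replace-datanode-on-failure feature entirely. -->
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
    <value>true</value>
  </property>
  <!-- One of NEVER, DEFAULT, ALWAYS, selecting the policy discussed above. -->
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>DEFAULT</value>
  </property>
</configuration>
```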

This message was sent by Atlassian JIRA
