hadoop-hdfs-issues mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4257) The ReplaceDatanodeOnFailure policies could have a forgiving option
Date Tue, 19 Aug 2014 15:57:19 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102348#comment-14102348 ]

Yongjun Zhang commented on HDFS-4257:
-------------------------------------

Hi Nicholas,

Thanks for the updated patch. I went through it and it looks good to me. One small comment:
my understanding is that when best effort is enabled and there is only one replica currently
being written, there is the potential of data loss if that replica's DN also goes down (see
Colin's comments above). If it makes sense to you, can we add a warning to the hdfs-default.xml
description to indicate the possibility of data loss?
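
For example (a rough sketch only; assuming the property added by this patch is named
{{dfs.client.block.write.replace-datanode-on-failure.best-effort}}, and the warning wording
below is just an illustration):

    <property>
      <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
      <value>false</value>
      <description>
        Used only when dfs.client.block.write.replace-datanode-on-failure.enable
        is true. Best effort means the client will try to replace a failed
        datanode in the write pipeline, but will continue the write even if the
        replacement fails. WARNING: with best effort enabled, if the pipeline
        shrinks to a single replica and that datanode also fails before the
        block is replicated, the data being written can be lost.
      </description>
    </property>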

As for the separate thread Colin proposed to repair the pipeline (I discussed this with Colin):
it may help alleviate the situation, but for a slow writer, if the block does not reach the
finalized state for a long time, the possibility of data loss is still there. So we need to
think more about how to do better (Colin, please correct me if I'm wrong).

Thanks.


> The ReplaceDatanodeOnFailure policies could have a forgiving option
> -------------------------------------------------------------------
>
>                 Key: HDFS-4257
>                 URL: https://issues.apache.org/jira/browse/HDFS-4257
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs-client
>    Affects Versions: 2.0.2-alpha
>            Reporter: Harsh J
>            Assignee: Tsz Wo Nicholas Sze
>            Priority: Minor
>         Attachments: h4257_20140325.patch, h4257_20140325b.patch, h4257_20140326.patch, h4257_20140819.patch
>
>
> Similar questions have previously come up on HDFS-3091 and friends, but the essential problem
> is: "Why can't I write to my cluster of 3 nodes when I have just 1 node available at a point
> in time?"
> The policies cover 4 options, with {{Default}} being the default:
> {{Disable}} -> Disables the whole replacement concept by throwing an error (at the server),
> or acts as {{Never}} at the client.
> {{Never}} -> Never replaces a DN upon pipeline failures (not too desirable in many cases).
> {{Default}} -> Replaces based on a few conditions, but the minimum never touches 1. We always
> fail if only one DN remains and no others can be added.
> {{Always}} -> Replaces no matter what. Fails if it can't replace.
> Would it not make sense to have an option similar to Always/Default where, despite _trying_,
> we do not fail if it isn't possible to have > 1 DN in the pipeline? I think that is what the
> former write behavior was, and it fit with the minimum allowed replication factor.
> Why is it grossly wrong to accept a write from a client for a block with just 1 remaining
> replica in the pipeline (the minimum of 1 grows with the replication factor demanded by the
> write), when replication is taken care of immediately afterwards? How often have we seen
> missing blocks arise from allowing this, plus facing a big rack failure or so?
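
For reference, the policies above map onto the client configuration roughly as follows (a
sketch; the property names are those used by the HDFS client, and {{Disable}} is expressed
by switching the feature off rather than by a policy value):

    <!-- hdfs-site.xml on the client -->
    <property>
      <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
      <value>true</value>
      <!-- false corresponds to Disable: the replacement feature is off -->
    </property>
    <property>
      <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
      <value>DEFAULT</value>
      <!-- NEVER, DEFAULT or ALWAYS -->
    </property>

The best-effort option discussed in the comments above would then relax ALWAYS/DEFAULT: the
client still tries to replace a failed datanode, but no longer aborts the write when no
replacement can be found.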



--
This message was sent by Atlassian JIRA
(v6.2#6252)
