hadoop-hdfs-issues mailing list archives

From "Mike Percy (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5131) Need a DEFAULT-like pipeline recovery policy that works for writers that flush
Date Fri, 30 Aug 2013 18:30:52 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754983#comment-13754983 ]

Mike Percy commented on HDFS-5131:
----------------------------------

Hi Harsh, I think it's pretty much the same feature, but with different motivations and slightly
different requirements. They could be merged, though, if we applied the changes listed in both
JIRAs to the new policy:
1) Allow only one DN if recovery is impossible, and
2) Allow this policy even for clients that call hflush()
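
For context, here is a minimal sketch of the long-lived, flushing writer pattern in question,
assuming a Hadoop 2 client on the classpath; the path, loop count, and payload are illustrative
only, not taken from this JIRA:

{code:java}
// Minimal sketch of a long-lived writer that calls hflush(), the
// Flume-style pattern discussed here. Path and payload are illustrative.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FlushingWriter {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataOutputStream out = fs.create(new Path("/tmp/events.log"))) {
      for (int i = 0; i < 1000; i++) {
        out.write(("event " + i + "\n").getBytes("UTF-8"));
        // Each hflush() is a durability point; under DEFAULT this is what
        // forces ALWAYS-style pipeline recovery on datanode failure.
        out.hflush();
      }
    }
  }
}
{code}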

                
> Need a DEFAULT-like pipeline recovery policy that works for writers that flush
> ------------------------------------------------------------------------------
>
>                 Key: HDFS-5131
>                 URL: https://issues.apache.org/jira/browse/HDFS-5131
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.0.6-alpha
>            Reporter: Mike Percy
>
> The Hadoop 2 pipeline-recovery mechanism currently has four policies: DISABLE (never do
> recovery), NEVER (never do recovery unless the client asks for it), ALWAYS (block until we
> have recovered the write pipeline to minimum replication levels), and DEFAULT (try to do
> ALWAYS, but use a heuristic to "give up" and allow writers to continue if not enough
> datanodes are available to recover the pipeline).
> The big problem with DEFAULT is that it specifically falls back to ALWAYS behavior if a
> client calls hflush(). On its face, that seems like a reasonable thing to do, but in
> practice it means that clients like Flume (as well as, I assume, HBase) simply block when
> the cluster is low on datanodes.
> The easiest way to work around this today is to set the policy to NEVER when using Flume
> to write to the cluster. But obviously that's not ideal.
> I believe what clients like Flume need is an additional policy which essentially uses the
> heuristic logic of DEFAULT even in cases where long-lived writers call hflush().
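
As a concrete illustration of the NEVER workaround described above, here is a hedged Java
sketch. The two configuration keys are the standard Hadoop 2 client-side pipeline-recovery
settings; the class name and surrounding usage are hypothetical:

{code:java}
// Sketch of the NEVER workaround: configure the client before creating
// the FileSystem. Only the two config keys below are from Hadoop itself.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class NeverPolicyClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Keep the replace-datanode-on-failure feature enabled, but never
    // actually replace a failed datanode, so writers keep going instead of
    // blocking when the cluster is low on datanodes. The trade-off: the
    // pipeline may shrink below the intended replication while writing.
    conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
    FileSystem fs = FileSystem.get(conf);
    // ... create streams, write, and hflush() as usual.
  }
}
{code}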

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
