hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2757) Should DFS outputstream's close wait forever?
Date Mon, 27 Apr 2009 21:02:30 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12703369#action_12703369 ]

dhruba borthakur commented on HADOOP-2757:

Good point about HADOOP-4703. I think the patch for HADOOP-4703 helps when the datanode(s)
in the pipeline are dead. I am actually using 0.19, so it already has the fix for HADOOP-4703.

This patch implements a soft-mount-style client. If the cluster is behaving badly (datanodes
down and/or the namenode down as well), the client will never hang. Client calls return an
error instead, and the application can handle it according to its needs. This patch is necessary
when the app wants to handle near-real-time loads.

> Should DFS outputstream's close wait forever?
> ---------------------------------------------
>                 Key: HADOOP-2757
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2757
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: dhruba borthakur
>         Attachments: softMount1.patch, softMount1.patch, softMount2.patch
> Currently {{DFSOutputStream.close()}} waits forever if the Namenode keeps throwing {{NotYetReplicated}}
> exceptions, for whatever reason. It's pretty annoying for a user. Should the loop inside close
> have a timeout? If so, how much? It could probably be something like 10 minutes.
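The question in the quoted issue, whether the retry loop inside close() should give up after a bounded interval instead of spinning forever, can be sketched as follows. This is a minimal illustration under assumed names, not Hadoop's actual implementation: the {{CompleteCall}} interface, {{closeWithTimeout}}, and the retry interval are all hypothetical stand-ins for the namenode "complete file" RPC retry loop.

```java
import java.io.IOException;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a bounded close() retry loop; names are
// illustrative and do not reflect Hadoop's real internals.
public class BoundedClose {
    static final long DEFAULT_CLOSE_TIMEOUT_MS = TimeUnit.MINUTES.toMillis(10);
    static final long RETRY_INTERVAL_MS = 400;

    /** Stand-in for the namenode call that reports whether the file's
     *  last block is sufficiently replicated. */
    interface CompleteCall {
        boolean complete() throws IOException;
    }

    /** Retries the "complete" call until it succeeds or the timeout
     *  expires, then throws rather than hanging forever. */
    static void closeWithTimeout(CompleteCall call, long timeoutMs)
            throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!call.complete()) {
            if (System.currentTimeMillis() >= deadline) {
                throw new IOException(
                        "close did not complete within " + timeoutMs + " ms");
            }
            Thread.sleep(RETRY_INTERVAL_MS);
        }
    }
}
```

With this shape, a caller sees an IOException after the deadline (the soft-mount behavior described in the comment above) instead of an indefinite hang, and can retry or fail according to its own needs.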

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
