hadoop-common-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2757) Should DFS outputstream's close wait forever?
Date Mon, 27 Apr 2009 20:36:30 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12703349#action_12703349 ]

Hairong Kuang commented on HADOOP-2757:
---------------------------------------

> In the case when all the datanode(s) in a pipeline go down but the namenode is alive,
> the close still hangs. Is this the case you are referring to?

Yes, if the dfs client hang is caused by datanodes going down, then HADOOP-4703 should fix the problem. Could you please double-check whether this is true?

> Should DFS outputstream's close wait forever?
> ---------------------------------------------
>
>                 Key: HADOOP-2757
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2757
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: dhruba borthakur
>         Attachments: softMount1.patch, softMount1.patch, softMount2.patch
>
>
> Currently {{DFSOutputStream.close()}} waits forever if the Namenode keeps throwing a {{NotYetReplicated}}
> exception, for whatever reason. It's pretty annoying for a user. Should the loop inside close
> have a timeout? If so, how long? It could probably be something like 10 minutes.
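
As an illustration of the timeout proposed above, here is a minimal sketch of a deadline-bounded close loop. The {{complete(src, clientName)}} call is modeled on the Namenode RPC that reports whether all blocks have reached their target replication, but the class, constants, and method names here are hypothetical, not the actual {{DFSClient}} implementation:

{code}
import java.io.IOException;

/**
 * Editorial sketch only: shows how the retry loop inside close() could be
 * bounded by a deadline instead of spinning forever. Everything here
 * (names, constants, structure) is illustrative, not real DFSClient code.
 */
public class BoundedCloseSketch {

  /** Stand-in for the Namenode RPC; returns true once the file is complete. */
  interface Namenode {
    boolean complete(String src, String clientName) throws IOException;
  }

  // ~10 minutes, matching the value suggested in the issue description.
  private static final long CLOSE_TIMEOUT_MS = 10L * 60 * 1000;
  private static final long RETRY_INTERVAL_MS = 400;

  static void completeFileWithTimeout(Namenode namenode, String src,
                                      String clientName) throws IOException {
    long deadline = System.currentTimeMillis() + CLOSE_TIMEOUT_MS;
    while (!namenode.complete(src, clientName)) {
      if (System.currentTimeMillis() >= deadline) {
        // Give up instead of hanging forever; the caller decides what to do.
        throw new IOException("Could not complete " + src + " within "
            + CLOSE_TIMEOUT_MS + " ms; blocks not yet replicated");
      }
      try {
        Thread.sleep(RETRY_INTERVAL_MS);
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        throw new IOException("Interrupted while closing " + src);
      }
    }
  }
}
{code}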

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

