hadoop-common-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2757) Should DFS outputstream's close wait forever?
Date Tue, 28 Apr 2009 21:41:30 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12703838#action_12703838 ]

Hairong Kuang commented on HADOOP-2757:

This patch only works when the client has a writer and the NameNode hangs. If the client does not have
a writer, or if the client has a writer but datanodes in the pipeline hang, this patch does
not solve the problem. Is this true? Maybe it makes sense to do it in the RPC client. Currently
an RPC client sends a ping to a server if it does not get a reply within one minute. The idea you
use to handle lease renewal could also be used to handle the ping message.
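The ping-with-deadline idea above can be sketched roughly as follows. This is a hypothetical illustration, not the actual `org.apache.hadoop.ipc.Client` code: `waitForReply`, `replyArrived`, and `sendPing` are made-up names, and a real client would block on the socket rather than poll.

```java
// Hypothetical sketch: instead of pinging the server indefinitely while a
// reply is outstanding, the client tracks an overall deadline and abandons
// the call once it passes. Names here are illustrative, not Hadoop's.
public class PingWithDeadline {

    /**
     * Waits for a reply, sending a ping after each pingIntervalMs of
     * silence, but gives up once deadlineMs total has elapsed.
     * Returns true if a reply arrived in time, false on timeout.
     */
    static boolean waitForReply(java.util.function.BooleanSupplier replyArrived,
                                Runnable sendPing,
                                long deadlineMs,
                                long pingIntervalMs) throws InterruptedException {
        long start = System.currentTimeMillis();
        long lastPing = start;
        while (!replyArrived.getAsBoolean()) {
            long now = System.currentTimeMillis();
            if (now - start >= deadlineMs) {
                return false;              // overall deadline exceeded: give up
            }
            if (now - lastPing >= pingIntervalMs) {
                sendPing.run();            // keep the connection alive
                lastPing = now;
            }
            Thread.sleep(10);              // poll; a real client blocks on the socket
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a reply arriving after ~50 ms, with a 500 ms deadline
        // and the one-minute ping interval mentioned above.
        long replyAt = System.currentTimeMillis() + 50;
        boolean ok = waitForReply(() -> System.currentTimeMillis() >= replyAt,
                                  () -> {}, 500, 60_000);
        System.out.println(ok ? "reply" : "timeout");
    }
}
```

With a deadline in place, a hang anywhere (NameNode or pipeline) surfaces as a bounded timeout instead of an indefinite wait.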

> Should DFS outputstream's close wait forever?
> ---------------------------------------------
>                 Key: HADOOP-2757
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2757
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: dhruba borthakur
>         Attachments: softMount1.patch, softMount1.patch, softMount2.patch
> Currently {{DFSOutputStream.close()}} waits forever if the Namenode keeps throwing {{NotYetReplicated}}
> exceptions, for whatever reason. It's pretty annoying for a user. Should the loop inside close
> have a timeout? If so, how much? It could probably be something like 10 minutes.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
