hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2757) Should DFS outputstream's close wait forever?
Date Tue, 28 Apr 2009 22:05:30 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12703852#action_12703852 ]

dhruba borthakur commented on HADOOP-2757:
------------------------------------------

> if the client has a write but datanodes in the pipeline hang, this patch does not solve the problem

You are right.

> Maybe it makes sense to do it in RPC clients. 

The clients currently use the streaming API to send data to the datanode(s). Are you saying that the client should periodically ping each of the datanode(s) using an RPC?
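
Something like the sketch below is how I read that suggestion. It is a rough illustration only: the {{ping()}} RPC on a datanode proxy is hypothetical, since the client today only streams data to the datanode(s).

{code}
// Hypothetical sketch of a background pinger for the write pipeline.
// DatanodeProxy.ping() does not exist in the current client; it stands in
// for whatever liveness RPC would be added.
import java.io.IOException;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class PipelinePingSketch {

  interface DatanodeProxy {
    void ping() throws IOException;   // hypothetical liveness RPC
  }

  // Pings every datanode in the pipeline on a fixed schedule; any failure
  // invokes the supplied recovery action (e.g. rebuild the pipeline).
  static ScheduledExecutorService startPinger(final List<DatanodeProxy> pipeline,
                                              final Runnable onDeadDatanode) {
    ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
    exec.scheduleAtFixedRate(new Runnable() {
      public void run() {
        for (DatanodeProxy dn : pipeline) {
          try {
            dn.ping();                // throws if the datanode is unreachable
          } catch (IOException e) {
            onDeadDatanode.run();     // trigger pipeline recovery
          }
        }
      }
    }, 10, 10, TimeUnit.SECONDS);
    return exec;
  }
}
{code}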

> Should DFS outputstream's close wait forever?
> ---------------------------------------------
>
>                 Key: HADOOP-2757
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2757
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: dhruba borthakur
>         Attachments: softMount1.patch, softMount1.patch, softMount2.patch
>
>
> Currently {{DFSOutputStream.close()}} waits forever if the Namenode keeps throwing a
> {{NotYetReplicated}} exception, for whatever reason. It's pretty annoying for a user. Should
> the loop inside close have a timeout? If so, how much? It could probably be something like
> 10 minutes.
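
For illustration, a minimal sketch of what a bounded loop inside close might look like. The names here ({{Namenode.complete()}}, the 10-minute constant) are stand-ins following the description above, not the actual DFSClient code.

{code}
// Sketch: bound the retry loop in close() with a deadline instead of
// waiting forever while the namenode keeps reporting NotYetReplicated.
import java.io.IOException;

class BoundedCloseSketch {

  // ~10 minutes, the figure suggested above
  private static final long CLOSE_TIMEOUT_MS = 10 * 60 * 1000;

  interface Namenode {
    // stand-in for the namenode call that returns true once the last
    // block is sufficiently replicated
    boolean complete(String src) throws IOException;
  }

  static void closeFile(Namenode namenode, String src) throws IOException {
    long deadline = System.currentTimeMillis() + CLOSE_TIMEOUT_MS;
    while (!namenode.complete(src)) {       // "not yet replicated"
      if (System.currentTimeMillis() > deadline) {
        throw new IOException("close() timed out waiting for replication of " + src);
      }
      try {
        Thread.sleep(400);                  // back off before retrying
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        throw new IOException("interrupted while closing " + src);
      }
    }
  }
}
{code}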

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

