hadoop-hdfs-issues mailing list archives

From "Tsz Wo (Nicholas), SZE (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-278) Should DFS outputstream's close wait forever?
Date Sat, 11 Jul 2009 00:57:14 GMT

    [ https://issues.apache.org/jira/browse/HDFS-278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12729918#action_12729918 ]

Tsz Wo (Nicholas), SZE commented on HDFS-278:
---------------------------------------------

{noformat}
+    this.hdfsTimeout = Client.getTimeout(conf);
{noformat}
The line above does not compile because the jar file has not yet been updated for HADOOP-6099.
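
For context, a minimal usage sketch of the call the snippet depends on, assuming a common jar that already contains HADOOP-6099; the wrapper class and field here are illustrative stand-ins, not actual DFSClient code:
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.Client;

// Illustrative only: shows the quoted assignment compiling once the common
// jar provides Client.getTimeout(Configuration), added by HADOOP-6099.
// The class and field names are stand-ins.
class HdfsTimeoutHolder {
  private final int hdfsTimeout;   // client-side timeout in milliseconds

  HdfsTimeoutHolder(Configuration conf) {
    // Fails to compile against a pre-HADOOP-6099 hadoop-common jar,
    // which is the build error described above.
    this.hdfsTimeout = Client.getTimeout(conf);
  }

  int get() {
    return hdfsTimeout;
  }
}
{noformat}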

> Should DFS outputstream's close wait forever?
> ---------------------------------------------
>
>                 Key: HDFS-278
>                 URL: https://issues.apache.org/jira/browse/HDFS-278
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Raghu Angadi
>            Assignee: dhruba borthakur
>             Fix For: 0.21.0
>
>         Attachments: softMount1.patch, softMount1.patch, softMount2.patch, softMount3.patch,
> softMount4.txt, softMount5.txt, softMount6.txt, softMount7.txt, softMount8.txt
>
>
> Currently {{DFSOutputStream.close()}} waits forever if the Namenode keeps throwing {{NotYetReplicated}}
> exceptions, for whatever reason. It's pretty annoying for a user. Should the loop inside close
> have a timeout? If so, how much? It could probably be something like 10 minutes.
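
For illustration, a rough sketch of the kind of bounded close loop the description asks about. {{completeFile()}} and {{NotReplicatedYetException}} are stand-ins for the DFSClient internals, and the 10-minute bound simply follows the suggestion above:
{noformat}
// Hypothetical sketch of a close() loop that gives up after a deadline
// instead of retrying forever. completeFile() and NotReplicatedYetException
// stand in for the real namenode call and exception.
private static final long CLOSE_TIMEOUT_MS = 10 * 60 * 1000L;  // 10 minutes

void closeWithTimeout() throws IOException {
  long deadline = System.currentTimeMillis() + CLOSE_TIMEOUT_MS;
  while (true) {
    try {
      completeFile();          // ask the namenode to finalize the file
      return;                  // success: blocks sufficiently replicated
    } catch (NotReplicatedYetException e) {
      if (System.currentTimeMillis() >= deadline) {
        throw new IOException("Unable to close file: replication not "
            + "confirmed within " + CLOSE_TIMEOUT_MS + " ms", e);
      }
      try {
        Thread.sleep(400);     // back off briefly before retrying
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        throw new IOException("Interrupted while closing file", ie);
      }
    }
  }
}
{noformat}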

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

