hadoop-hdfs-issues mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-951) DFSClient should handle all nodes in a pipeline failed.
Date Fri, 05 Feb 2010 15:07:27 GMT

    [ https://issues.apache.org/jira/browse/HDFS-951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12830125#action_12830125 ]

dhruba borthakur commented on HDFS-951:
---------------------------------------

If all datanodes in the pipeline are dead, then the application cannot write any more to the
file. (This can be improved, of course.) Are you saying that throwing exceptions to the write/close
call (after all datanodes in the pipeline have failed) is a problem?

Or are you saying that when all datanodes in the pipeline fail, all resources associated with
that OutputStream should be automatically released?
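
For reference, here is a minimal client-side sketch (not from the issue; the path and payload
are made up for illustration) of what an application sees today in this case: once every
datanode in the pipeline has failed, the pending write()/close() surfaces an IOException, and
the application has to decide what to do with the partially written file.

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class PipelineFailureExample {
    public static void main(String[] args) throws IOException {
      FileSystem fs = FileSystem.get(new Configuration());
      Path path = new Path("/tmp/hdfs-951-example");   // illustrative path
      FSDataOutputStream out = fs.create(path);
      try {
        out.write(new byte[] {1, 2, 3});
        out.close();
      } catch (IOException e) {
        // With every datanode in the pipeline dead, the stream is unusable;
        // the failure is reported here and the application must handle it
        // (give up on the file, rewrite it elsewhere, etc.).
        System.err.println("pipeline failure: " + e.getMessage());
      }
    }
  }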


> DFSClient should handle all nodes in a pipeline failed.
> -------------------------------------------------------
>
>                 Key: HDFS-951
>                 URL: https://issues.apache.org/jira/browse/HDFS-951
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: He Yongqiang
>
> processDatanodeError -> setupPipelineForAppendOrRecovery will set streamerClosed to true if all nodes in the pipeline have failed in the past, and just return.
> Back in run() in the DataStreamer, the logic
>   if (streamerClosed || hasError || dataQueue.size() == 0 || !clientRunning) {
>     continue;
>   }
> will just set closed=true in closeInternal().
> And the DFSOutputStream will not get a chance to clean up. The DFSOutputStream will throw an exception or return null for subsequent write/close calls.
> This leaves the file being written in an incomplete state.
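
As a hypothetical, simplified sketch of the sequence described above (this is not the actual
DFSClient/DataStreamer code, only the shape of the problem as reported):

  class StreamerSketch {
    private volatile boolean streamerClosed = false;
    private volatile boolean hasError = false;
    private volatile boolean clientRunning = true;
    private volatile boolean closed = false;
    private final java.util.Queue<byte[]> dataQueue = new java.util.ArrayDeque<byte[]>();

    // processDatanodeError -> setupPipelineForAppendOrRecovery: once every
    // node in the pipeline has failed, it only flips the flag and returns.
    void setupPipelineForAppendOrRecovery(boolean allNodesFailed) {
      if (allNodesFailed) {
        streamerClosed = true;     // nothing else is torn down here
      }
    }

    // One pass of the streamer loop after the flag is set: the condition
    // quoted above short-circuits, the loop winds down, and closeInternal()
    // only marks the stream closed.
    void runOnce() {
      if (streamerClosed || hasError || dataQueue.size() == 0 || !clientRunning) {
        closeInternal();           // reached without releasing buffers, threads,
        return;                    // or the file that is still open for write
      }
      // ... normal path: take a packet off dataQueue and send it ...
    }

    void closeInternal() {
      closed = true;               // later write()/close() calls now fail, and
                                   // the file is left in an incomplete state
    }
  }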

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

