hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3339) DFS Write pipeline does not detect defective datanode correctly if it times out.
Date Mon, 19 May 2008 18:09:56 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12598036#action_12598036 ]

Raghu Angadi commented on HADOOP-3339:
--------------------------------------

The test failure is another case of HADOOP-3354 and is not related to this patch. Also, HADOOP-3416 has been filed regarding DFSClient.

> DFS Write pipeline does not detect defective datanode correctly if it times out.
> --------------------------------------------------------------------------------
>
>                 Key: HADOOP-3339
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3339
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3339.patch, tmp-3339-dn.patch
>
>
> When DFSClient is writing to DFS, it does not correctly detect the culprit datanode (rather,
> the datanodes do not report it) if the bad node times out. Say the last datanode in a
> 3-node pipeline is too slow or defective. In this case, the pipeline removes the first two
> datanodes in the first two attempts. The third attempt has only the 3rd datanode in the
> pipeline, and it fails too. If the pipeline detected the bad 3rd node when the first failure
> occurred, the write would succeed on the second attempt.
> I will attach example logs of such cases. I think this should be fixed in 0.17.x.
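
For illustration, here is a minimal toy sketch of the pre-fix recovery behavior described above. This is plain Java, not actual Hadoop code; the class, method, and node names are all hypothetical. The point it demonstrates: when the client cannot tell which datanode caused a timeout, dropping the head of the pipeline on each retry evicts the two healthy nodes before the defective last one.

// Toy model (NOT Hadoop code) of the failure mode in this issue.
// Assumption: on timeout the client blames the first node in the
// pipeline, since the datanodes do not report the real culprit.
import java.util.ArrayList;
import java.util.List;

public class PipelineTimeoutDemo {

    // The write completes only if the defective node is no longer
    // in the pipeline; otherwise it times out.
    static boolean writeSucceeds(List<String> pipeline, String badNode) {
        return !pipeline.contains(badNode);
    }

    public static void main(String[] args) {
        List<String> pipeline = new ArrayList<>(List.of("dn1", "dn2", "dn3"));
        String badNode = "dn3";  // the slow/defective last datanode

        for (int attempt = 1; !pipeline.isEmpty(); attempt++) {
            if (writeSucceeds(pipeline, badNode)) {
                System.out.println("attempt " + attempt
                        + ": write succeeded with " + pipeline);
                return;
            }
            // Pre-fix heuristic: with no report from the datanodes,
            // blame the head of the pipeline and remove it.
            String removed = pipeline.remove(0);
            System.out.println("attempt " + attempt
                    + ": timeout; removed " + removed
                    + ", remaining " + pipeline);
        }
        System.out.println("write failed: all datanodes removed");
    }
}

Running this prints three timeouts: dn1 and dn2 (both healthy) are removed first, the third attempt runs with only dn3 and fails too, and the write is lost. Had the bad 3rd node been identified on the first failure, as the issue proposes, the second attempt would have succeeded with the two healthy nodes.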

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

