hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-3339) DFS Write pipeline does not detect defective datanode correctly if it times out.
Date Thu, 08 May 2008 18:03:55 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-3339:
---------------------------------

    Attachment: tmp-3339-dn.patch


The attached patch fixes the main problem described (practically all the time): it properly informs the upstream node about the downstream failure.
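As a rough sketch of the idea (hypothetical names and a simplified ack format, not the actual patch): each datanode relays the downstream acks and prepends its own status, so that when the downstream node times out the relaying node reports an explicit downstream error instead of appearing dead itself.

{code:java}
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class PipelineAckSketch {
  static final short OP_STATUS_SUCCESS = 0;
  static final short OP_STATUS_ERROR   = 1;

  /**
   * Relay the downstream ack statuses and prepend this node's own
   * status. On a downstream timeout, mark the downstream nodes as
   * bad so the upstream does not have to guess who failed.
   */
  static void relayAck(DataInputStream fromDownstream,
                       DataOutputStream toUpstream,
                       int numDownstreamNodes) throws IOException {
    short[] statuses = new short[numDownstreamNodes + 1];
    statuses[0] = OP_STATUS_SUCCESS;             // this node is fine
    try {
      for (int i = 0; i < numDownstreamNodes; i++) {
        statuses[i + 1] = fromDownstream.readShort();
      }
    } catch (IOException timeoutOrEof) {
      // Downstream did not respond: report downstream as bad, not us.
      for (int i = 1; i < statuses.length; i++) {
        statuses[i] = OP_STATUS_ERROR;
      }
    }
    for (short s : statuses) {
      toUpstream.writeShort(s);
    }
    toUpstream.flush();
  }
}
{code}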

A similar problem exists on the client side as well: if the 2nd datanode times out, most of the
time the client removes the first datanode as the bad one. The issues on the DataNode and the
Client are similar, but the same fix cannot work for both, because on the DataNode the responder
needs to properly write its state upstream, while the Client needs to properly read all the
remaining data on the socket from the first datanode.
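A minimal illustration of the client-side half (hypothetical names, not DFSClient code): before rebuilding the pipeline, drain whatever ack statuses the first datanode already sent, since one of them may identify the real culprit further downstream.

{code:java}
import java.io.DataInputStream;
import java.io.IOException;

class DrainAcksSketch {
  /**
   * Read any ack statuses still buffered on the socket from the first
   * datanode. An error status written by the first datanode points at
   * the downstream node that actually failed; without draining, the
   * client blames the first node by default.
   */
  static int readLastErrorIndex(DataInputStream fromFirstDatanode,
                                int pipelineLen) {
    int badNode = 0;  // default: blame the first node
    try {
      while (true) {
        for (int i = 0; i < pipelineLen; i++) {
          short status = fromFirstDatanode.readShort();
          if (status != 0 /* SUCCESS */) {
            badNode = i;  // ack identifies the real culprit
          }
        }
      }
    } catch (IOException timeoutOrEof) {
      // No more buffered data; fall through with best known culprit.
    }
    return badNode;
  }
}
{code}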

The main issue is that the BlockReceiver thread (and the DataStreamer in the case of DFSClient)
calls {{interrupt()}} on the 'responder' thread. But interrupting is a pretty coarse control: we
don't know what state the responder is in, and interrupting has different effects depending on
that state. To fix this properly we need to redesign how we handle these interactions.
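A small self-contained example (plain Java, not Hadoop code) of why {{interrupt()}} is coarse: the same call has completely different effects depending on what the target thread is doing at that instant.

{code:java}
public class InterruptEffects {
  public static void main(String[] args) throws Exception {
    // Case 1: thread blocked in sleep()/wait() -> InterruptedException.
    Thread sleeper = new Thread(() -> {
      try {
        Thread.sleep(60_000);
      } catch (InterruptedException e) {
        System.out.println("sleeper: woken by InterruptedException");
      }
    });
    sleeper.start();
    sleeper.interrupt();
    sleeper.join();

    // Case 2: thread busy computing -> only the interrupt flag is set;
    // nothing happens unless the thread polls the flag itself.
    Thread worker = new Thread(() -> {
      long n = 0;
      while (!Thread.currentThread().isInterrupted()) {
        n++;  // a responder that never checks the flag never notices
      }
      System.out.println("worker: saw interrupt flag after " + n + " loops");
    });
    worker.start();
    worker.interrupt();
    worker.join();

    // Case 3 (not shown): a thread blocked on a plain blocking socket
    // read is not unblocked by interrupt() at all, which is exactly
    // the state a responder is often in.
  }
}
{code}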

I am trying out a fix for DFSClient.

> DFS Write pipeline does not detect defective datanode correctly if it times out.
> --------------------------------------------------------------------------------
>
>                 Key: HADOOP-3339
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3339
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>             Fix For: 0.18.0
>
>         Attachments: tmp-3339-dn.patch
>
>
> When DFSClient is writing to DFS, it does not correctly detect the culprit datanode (rather,
datanodes do not inform it properly) if the bad node times out. Say the last datanode in a
3-node pipeline is too slow or defective. In this case, the pipeline removes the first two
datanodes in the first two attempts. The third attempt has only the 3rd datanode in the pipeline,
and it will fail too. If the pipeline detected the bad 3rd node when the first failure occurred,
the write would succeed on the second attempt.
> I will attach example logs of such cases. I think this should be fixed in 0.17.x.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

