hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3328) DFS write pipeline : only the last datanode needs to verify checksum
Date Fri, 23 May 2008 19:59:55 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12599486#action_12599486 ]

Raghu Angadi commented on HADOOP-3328:

Verified that this patch does not change any checksum guarantees provided by the current
protocol; i.e., if corruption is detected at the last node, the write will not be considered
complete at any of the datanodes. This policy needs to be checked again if the protocol changes.

> DFS write pipeline : only the last datanode needs to verify checksum
> --------------------------------------------------------------------
>                 Key: HADOOP-3328
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3328
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>         Attachments: HADOOP-3328.patch
> Currently all the datanodes in the DFS write pipeline verify checksums. Since the current
protocol includes acks from the datanodes, an ack from the last node could also serve as
verification that the checksum is ok. In that sense, only the last datanode needs to verify
the checksum.
Based on [this comment|http://issues.apache.org/jira/browse/HADOOP-1702?focusedCommentId=12575553#action_12575553]
from HADOOP-1702, CPU consumption might go down by another 25-30% (4/14) after HADOOP-1702.
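The proposed behavior can be sketched roughly as follows. This is an illustrative sketch only, not the actual DataNode code; the class and method names (`PipelineSketch`, `receivePacket`) are hypothetical, and CRC32 stands in for whatever checksum the protocol carries. Each node forwards the packet downstream as before, but only the node at the end of the pipeline pays the CPU cost of verification; a successful ack propagating back from that last node implies the checksum was ok.

```java
import java.util.zip.CRC32;

// Hypothetical sketch of "only the last datanode verifies" (names are
// illustrative, not taken from the Hadoop source).
public class PipelineSketch {
    static boolean isLastInPipeline(int position, int pipelineLength) {
        return position == pipelineLength - 1;
    }

    static void receivePacket(byte[] data, long expectedCrc,
                              int position, int pipelineLength) {
        if (isLastInPipeline(position, pipelineLength)) {
            // Only the last node computes and checks the checksum.
            CRC32 crc = new CRC32();
            crc.update(data, 0, data.length);
            if (crc.getValue() != expectedCrc) {
                throw new RuntimeException("checksum error at last datanode");
            }
        }
        // Intermediate nodes just forward the packet and relay the downstream
        // ack; an ack from the last node doubles as checksum verification.
    }
}
```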

> Also this would make it easier to use transferTo() and transferFrom() on intermediate
datanodes since they don't need to look at the data.
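Once an intermediate datanode no longer inspects packet bytes, it could relay them with `FileChannel.transferTo()`, which on Linux can copy data kernel-to-kernel without pulling it into user space. A minimal sketch of such a relay loop, assuming a file-backed source and a socket-like destination channel (the `ZeroCopyRelay` name is hypothetical):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

// Illustrative sketch: forward 'count' bytes from 'src' starting at
// 'position' to 'dst' using zero-copy transferTo().
public class ZeroCopyRelay {
    static long relay(FileChannel src, WritableByteChannel dst,
                      long position, long count) throws IOException {
        long sent = 0;
        while (sent < count) {
            // transferTo may send fewer bytes than requested, so loop.
            long n = src.transferTo(position + sent, count - sent, dst);
            if (n <= 0) {
                break;
            }
            sent += n;
        }
        return sent;
    }
}
```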

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
