hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-10178) Permanent write failures can happen if pipeline recoveries occur for the first packet
Date Fri, 18 Mar 2016 14:08:33 GMT

     [ https://issues.apache.org/jira/browse/HDFS-10178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kihwal Lee updated HDFS-10178:
------------------------------
    Attachment: HDFS-10178.v2.patch

Made {{VolumeScanner}} retain the old behavior. Fixed javac warnings.

> Permanent write failures can happen if pipeline recoveries occur for the first packet
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-10178
>                 URL: https://issues.apache.org/jira/browse/HDFS-10178
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>            Priority: Critical
>         Attachments: HDFS-10178.patch, HDFS-10178.v2.patch
>
>
> We have observed that a write fails permanently if the first packet doesn't go through
> properly and a pipeline recovery happens. If the packet header is sent out, but the data
> portion of the packet does not reach one or more datanodes in time, the pipeline recovery
> will be done against the 0-byte partial block.
> If additional datanodes are added, the block is transferred to the new nodes. After the
> transfer, each node will have a meta file containing the header and a 0-length data block
> file. The pipeline recovery seems to work correctly up to this point, but the write fails
> when the actual data packet is resent.
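
For illustration only, the sketch below shows one rough way to approximate this scenario on a {{MiniDFSCluster}}: stop a datanode around the time the very first packet is flushed, so a pipeline recovery can run while the replica is still a 0-byte partial block. This is not the regression test from the attached patches; the exact timing window described above (header received, data not yet received) is not guaranteed to be hit this way, and the path, buffer sizes, and node count are assumptions made up for the example.

{code:java}
// Hedged, illustrative sketch only -- not the test from the attached patch.
// It approximates the report on a MiniDFSCluster: stop a datanode around the
// time the very first packet is flushed so that pipeline recovery may run
// while the replica is still a 0-byte partial block. The stopped node may not
// even be in the write pipeline; the path and sizes are made-up values.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class FirstPacketPipelineRecoverySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(4).build();
    try {
      cluster.waitActive();
      FileSystem fs = cluster.getFileSystem();
      Path p = new Path("/test/first-packet-recovery");  // hypothetical path

      // Replication 3 with 4 datanodes leaves one spare node that pipeline
      // recovery can add, matching the "additional datanodes" case above.
      FSDataOutputStream out = fs.create(p, (short) 3);
      out.write(new byte[512]);  // data for the first packet, still client-side
      cluster.stopDataNode(0);   // kill a node before the first packet lands
      out.hflush();              // push the first packet; may trigger recovery
      out.write(new byte[512]);  // continue writing after recovery
      out.close();               // with the bug, the resent packet can fail the write
    } finally {
      cluster.shutdown();
    }
  }
}
{code}

If the described window is hit, the recovered pipeline ends up with 0-length block files whose meta files already contain a header, and the resend of the first data packet then fails as reported above.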



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
