hadoop-hdfs-issues mailing list archives

From "sam rash (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1057) Concurrent readers hit ChecksumExceptions if following a writer to very end of file
Date Thu, 08 Apr 2010 18:13:37 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12855055#action_12855055 ]

sam rash commented on HDFS-1057:

thanks for the additional info.

fwiw, I have an impl of #2 that does it the slightly more efficient way: BlockSender.sendChunks()
examines 'len' and, if len % bytesPerChecksum != 0, truncates len to a chunk boundary. It can
do this since it returns len. The result is that what was a single packet to the receiver
is now two, but the first one can be sent with transferToFully() using existing checksums,
and the lone partial chunk gets its own packet (in this case with the extra buffer
copy needed to recompute the checksum).
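As a rough illustration of that truncation (just a sketch; truncateToChunkBoundary is an invented helper name, not the actual sendChunks code):

```java
// Sketch of truncating a send length to the last full checksum-chunk
// boundary. The full chunks can then go out in one packet with their
// precomputed checksums (e.g., via transferToFully()), and the trailing
// partial chunk goes in its own packet with a freshly computed checksum.
// Names here are illustrative, not Hadoop's actual code.
public final class ChunkBoundary {
    static long truncateToChunkBoundary(long len, int bytesPerChecksum) {
        long partial = len % bytesPerChecksum;
        return len - partial; // partial == 0 means len is already aligned
    }

    public static void main(String[] args) {
        // 1100 bytes with 512-byte chunks: send 1024 first, then the
        // remaining 76-byte partial chunk as a second packet.
        System.out.println(truncateToChunkBoundary(1100, 512)); // 1024
        System.out.println(truncateToChunkBoundary(1024, 512)); // 1024
    }
}
```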

For this attempt, I punted on figuring out whether the block is in progress or not--I'm ok with
the slight inefficiency if it avoids race conditions. I believe we might be able to address
this with some synchronization around state changes to a block being an ongoing create or finalized.
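Roughly what I have in mind (a sketch with invented names; the real replica state machinery is more involved):

```java
// Illustrative sketch of guarding block-state transitions. BlockState,
// finalizeBlock, and isUnderConstruction are invented names, not
// Hadoop's actual replica classes. The point is that readers check the
// state under the same lock writers use to change it, so a reader can
// never observe a half-completed transition.
public class BlockState {
    public enum State { BEING_WRITTEN, FINALIZED }

    private State state = State.BEING_WRITTEN;

    public synchronized boolean isUnderConstruction() {
        return state == State.BEING_WRITTEN;
    }

    public synchronized void finalizeBlock() {
        state = State.FINALIZED;
    }
}
```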

Can you comment on where the recovery code is that I also need to tweak? (I'm very new to working
on hdfs)

I still need to do some more testing on it and clean it up; also, is there an existing unit
test for this case yet? (there certainly isn't in 0.20) I am also going to try to construct
one for that.

> Concurrent readers hit ChecksumExceptions if following a writer to very end of file
> -----------------------------------------------------------------------------------
>                 Key: HDFS-1057
>                 URL: https://issues.apache.org/jira/browse/HDFS-1057
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: data-node
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Todd Lipcon
>            Priority: Critical
> In BlockReceiver.receivePacket, it calls replicaInfo.setBytesOnDisk before calling flush().
> Therefore, if there is a concurrent reader, it's possible to race here - the reader will see
> the new length while those bytes are still in the buffers of BlockReceiver. Thus the client
> will potentially see checksum errors or EOFs. Additionally, the last checksum chunk of the
> file is made accessible to readers even though it is not stable.
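The ordering bug in the description above can be sketched as follows; receivePacket and bytesOnDisk mirror the names in the report, but this is an illustrative mock, not the actual BlockReceiver:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Mock of the race: if the visible length were advanced before flush(),
// a concurrent reader could see a length covering bytes still sitting in
// the stream's buffer. The fix direction is shown here: flush first, then
// publish the new length.
public class ReceiverOrdering {
    private final ByteArrayOutputStream disk = new ByteArrayOutputStream();
    private final BufferedOutputStream out = new BufferedOutputStream(disk);
    private volatile long bytesOnDisk = 0;

    public void receivePacket(byte[] data) {
        try {
            out.write(data);
            out.flush();                // bytes on "disk" before publishing
            bytesOnDisk += data.length; // setBytesOnDisk equivalent
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public long getBytesOnDisk()     { return bytesOnDisk; }
    public int bytesActuallyOnDisk() { return disk.size(); }
}
```

With this ordering, any length a reader observes corresponds to bytes it can actually read back.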

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
