hadoop-hdfs-issues mailing list archives

From "sam rash (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1057) Concurrent readers hit ChecksumExceptions if following a writer to very end of file
Date Thu, 08 Apr 2010 04:02:36 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854787#action_12854787 ]

sam rash commented on HDFS-1057:
--------------------------------

Hi,

I'm working on getting this functionality into an internal 20-based rev as well. Here are some solutions I've thought of (I've tried 1 and 2) in case it helps the discussion towards a good solution:

solutions (some tried, some theories):

1. Truncate in-progress blocks to chunk boundaries. This solved the checksum problem, but fails because technically sync'd data is not made available to readers (the partial chunk at the end is cut off).

2. When a reader's request results in an artificial partial chunk (in BlockSender), recompute the checksum for the partial chunk in the packet (see the first sketch below).
	- functionally solves the problem
	- introduces inefficiency, since the socket-to-socket copy can't be done
	- could possibly improve efficiency by having the client split any packet request that covers multiple full chunks plus a partial chunk at the end into two requests, with the partial chunk going as a lone request
		- results in one more packet going back and forth, but the preceding full chunks can use the more efficient socket-to-socket copies

3. Have the datanode 'refresh' the length when it actually starts reading (see the second sketch below).
	- will send more data than the client requested
	- checksum will match the data
	- client truncates the data *after* the CRC check
	- still has a race condition: the checksum for the last chunk can be out of sync with the data from a given reader's perspective
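For solution 2, here's a minimal sketch of the checksum recompute. The helper and its names are hypothetical (this is not the actual BlockSender code), and java.util.zip.CRC32 stands in for HDFS's DataChecksum:

    import java.util.zip.CRC32;

    // Hypothetical helper sketching solution 2: when the read ends in a
    // partial chunk, recompute the CRC over just the bytes being sent
    // instead of forwarding the stale on-disk checksum, so the checksum
    // in the packet always matches the data in the packet.
    final class PartialChunkChecksum {

      // Returns the CRC32 of the trailing partial chunk in 'buf'.
      static int recompute(byte[] buf, int chunkStart, int partialLen) {
        CRC32 crc = new CRC32();   // stand-in for HDFS's DataChecksum
        crc.update(buf, chunkStart, partialLen);
        return (int) crc.getValue();
      }
    }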

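And a sketch of the client side of solution 3 (again with made-up names; ChecksumVerifier is a hypothetical abstraction standing in for the client's normal per-chunk CRC verification):

    import java.io.IOException;
    import java.util.Arrays;

    // Made-up abstraction for the client's usual CRC verification.
    interface ChecksumVerifier {
      void verify(byte[] data) throws IOException;  // throws on mismatch
    }

    final class RefreshedReadClient {

      // The datanode may send more than requested because it refreshed
      // the block length; verify checksums over everything received,
      // and only then truncate to the length the caller asked for.
      static byte[] readAndTruncate(byte[] received, long requestedLen,
                                    ChecksumVerifier verifier)
          throws IOException {
        verifier.verify(received);            // CRC check over all bytes first
        int keep = (int) Math.min(requestedLen, received.length);
        return Arrays.copyOf(received, keep); // truncate *after* the check
      }
    }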

> Concurrent readers hit ChecksumExceptions if following a writer to very end of file
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-1057
>                 URL: https://issues.apache.org/jira/browse/HDFS-1057
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: data-node
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Todd Lipcon
>            Priority: Critical
>
> In BlockReceiver.receivePacket, it calls replicaInfo.setBytesOnDisk before calling flush().
> Therefore, if there is a concurrent reader, it's possible to race here - the reader will see
> the new length while those bytes are still in the buffers of BlockReceiver. Thus the client
> will potentially see checksum errors or EOFs. Additionally, the last checksum chunk of the
> file is made accessible to readers even though it is not stable.
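
A minimal sketch of the race-free ordering the description implies (hypothetical names and fields, not the real BlockReceiver internals): flush the data and checksum streams before publishing the new length, so a reader can never see a length that covers bytes still sitting in the writer's buffers.

    import java.io.IOException;
    import java.io.OutputStream;

    // Sketch only; field and method names are invented for illustration.
    final class ReceiverSketch {
      private final OutputStream dataOut;      // block data file
      private final OutputStream checksumOut;  // block metadata (crc) file
      private long bytesOnDisk;                // length visible to readers

      ReceiverSketch(OutputStream dataOut, OutputStream checksumOut) {
        this.dataOut = dataOut;
        this.checksumOut = checksumOut;
      }

      void receivePacket(byte[] data, byte[] checksums, int len)
          throws IOException {
        dataOut.write(data, 0, len);
        checksumOut.write(checksums);
        dataOut.flush();           // bytes are on disk...
        checksumOut.flush();       // ...and so are their checksums...
        bytesOnDisk += len;        // ...only then publish the new length
      }
    }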

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

