[ https://issues.apache.org/jira/browse/HDFS-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14956016#comment-14956016 ]
Daniel Templeton commented on HDFS-9220:
----------------------------------------
Thanks, [~jingzhao]. Could you add some comments to the patch to make it a little clearer what the test is doing? Something like:
{code}
// Read the data back from the small block. If the checksum is wrong, the read will throw
// an exception; if it doesn't, the checksum is correct.
{code}
I'd also love a blank line before the try to make it easier on the eyes.
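For illustration, the end result might read roughly like this (a sketch only; names such as {{fs}}, {{file}}, and {{fileLen}} are placeholders, not taken from the patch):
{code}
// Assumed context: 'fs' is the DistributedFileSystem under test and 'file' is the Path
// of the small file that was opened for append.

// Read the data back from the small block. If the checksum is wrong, the read will throw
// an exception; if it doesn't, the checksum is correct.
byte[] buf = new byte[fileLen];

try (FSDataInputStream in = fs.open(file)) {
  IOUtils.readFully(in, buf, 0, fileLen);
}
{code}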
> Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
> ----------------------------------------------------------------------------------------
>
> Key: HDFS-9220
> URL: https://issues.apache.org/jira/browse/HDFS-9220
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.7.1
> Reporter: Bogdan Raducanu
> Assignee: Jagadesh Kiran N
> Priority: Blocker
> Attachments: HDFS-9220.000.patch, test2.java
>
>
> Exception:
> 2015-10-09 14:59:40 WARN DFSClient:1150 - fetchBlockByteRange(). Got a checksum exception for /tmp/file0.05355529331575182 at BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882:0 from DatanodeInfoWithStorage[10.10.10.10]:5001
> All 3 replicas cause this exception and the read fails entirely with:
> BlockMissingException: Could not obtain block: BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882 file=/tmp/file0.05355529331575182
> Code to reproduce is attached.
> Does not happen in 2.7.0.
> Data is read correctly if checksum verification is disabled.
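The attached test2.java is the authoritative reproducer; purely as an illustrative sketch of the scenario described above (write fewer than 512 bytes, reopen the file for append, read it back with checksum verification on), it could look something like the following. The cluster setup and names here are assumptions, not the attached code:
{code}
// Sketch of the reported scenario, not the attached test2.java.
Configuration conf = new Configuration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
try {
  DistributedFileSystem fs = cluster.getFileSystem();
  Path file = new Path("/tmp/smallfile");

  // Write fewer than 512 bytes so the block ends with a partial checksum chunk.
  try (FSDataOutputStream out = fs.create(file)) {
    out.write(new byte[100]);
  }

  // Reopen the file for append and leave it open while reading.
  FSDataOutputStream append = fs.append(file);
  try {
    // Per the report, this read fails with a checksum exception on all replicas
    // and ultimately a BlockMissingException, unless checksum verification is disabled.
    try (FSDataInputStream in = fs.open(file)) {
      IOUtils.readFully(in, new byte[100], 0, 100);
    }
  } finally {
    append.close();
  }
} finally {
  cluster.shutdown();
}
{code}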
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)