hbase-issues mailing list archives

From "Mikhail Antonov (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-15908) Checksum verification is broken due to incorrect passing of ByteBuffers in DataChecksum
Date Sat, 28 May 2016 10:26:12 GMT

    [ https://issues.apache.org/jira/browse/HBASE-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15305280#comment-15305280 ]

Mikhail Antonov commented on HBASE-15908:
-----------------------------------------

FYI, I tested only on a Hadoop 2.7 cluster, and in 2.5 the checksumming is a little different
- namely, the second variant of the call to DataChecksum#verifyChunkedSums(), the one which takes
byte arrays instead of byte buffers, does not even attempt to use native checksums. But anyway.
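
For reference, a rough (untested) sketch of the heap-buffer round trip, assuming the usual
DataChecksum calls (newDataChecksum(Type, int), calculateChunkedSums(ByteBuffer, ByteBuffer),
verifyChunkedSums(ByteBuffer, ByteBuffer, String, long)); with both buffers heap-backed,
hasArray() is true on each and verification stays on the array-based path on either Hadoop version:

    import java.nio.ByteBuffer;
    import org.apache.hadoop.util.DataChecksum;

    public class ChecksumPathSketch {
      public static void main(String[] args) throws Exception {
        // CRC32C with one checksum per 512-byte chunk.
        DataChecksum sum = DataChecksum.newDataChecksum(DataChecksum.Type.CRC32C, 512);

        byte[] payload = new byte[4096];                  // stand-in for an HFile block
        ByteBuffer data = ByteBuffer.wrap(payload);       // heap buffer, hasArray() == true
        ByteBuffer checksums =
            ByteBuffer.allocate((payload.length / 512) * sum.getChecksumSize());

        sum.calculateChunkedSums(data, checksums);

        // Both buffers are array-backed, so the ByteBuffer overload stays on the
        // byte[] code path -- pure Java on Hadoop 2.5, possibly native on 2.7.
        sum.verifyChunkedSums(data, checksums, "sketch", 0);  // throws ChecksumException on mismatch
        System.out.println("array-backed verification passed");
      }
    }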

As this is practically an addendum one-liner fix to an already committed jira, I've pushed it
to master, branch-1 and branch-1.3; I'll push to branch-1.2 if the original jira is backported there.

> Checksum verification is broken due to incorrect passing of ByteBuffers in DataChecksum
> ---------------------------------------------------------------------------------------
>
>                 Key: HBASE-15908
>                 URL: https://issues.apache.org/jira/browse/HBASE-15908
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile
>    Affects Versions: 1.3.0
>            Reporter: Mikhail Antonov
>            Assignee: Mikhail Antonov
>            Priority: Blocker
>
> It looks like HBASE-11625 (cc [~stack], [~appy]) has broken checksum verification? I'm
> seeing the following on my cluster (1.3.0, Hadoop 2.7).
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile
> Trailer from file <file path>
> 	at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
> 	at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
> 	at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1135)
> 	at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
> 	at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
> 	at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
> 	at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
> 	at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
> 	at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
> 	at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
> 	at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
> 	... 6 more
> Caused by: java.lang.IllegalArgumentException: input ByteBuffers must be direct buffers
> 	at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native Method)
> 	at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
> 	at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
> 	at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateChecksum(ChecksumUtil.java:120)
> 	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateChecksum(HFileBlock.java:1785)
> 	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1728)
> 	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
> 	at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
> 	at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
> 	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:151)
> 	at org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:78)
> 	at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:487)
> 	... 16 more
> Prior to this change we wouldn't use native crc32 checksum verification, as in Hadoop's
> DataChecksum#verifyChunkedSums we would take this codepath:
> if (data.hasArray() && checksums.hasArray()) {
>   <check native checksum, but using byte[] instead of byte buffers>
> }
> So we were fine. However, now we drop below that and try to use the slightly different
> variant of native crc32 (if one is available), the one taking a ByteBuffer instead of byte[],
> which expects a DirectByteBuffer, not a heap BB.
> I think the easiest fix working on all Hadoops would be to remove the asReadOnlyBuffer()
> conversion here:
> !validateChecksum(offset, onDiskBlockByteBuffer.asReadOnlyBuffer(), hdrSize)) {
> I don't see why we need it. Let me test.
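
For context, the JDK behavior behind this is visible without any Hadoop code (a minimal sketch,
nothing beyond java.nio assumed): asReadOnlyBuffer() on a heap buffer reports hasArray() == false
while still not being direct, so Hadoop 2.7's DataChecksum#verifyChunkedSums skips the byte[]
branch and falls through to the native ByteBuffer variant, which is what throws the
IllegalArgumentException above.

    import java.nio.ByteBuffer;

    public class ReadOnlyBufferSketch {
      public static void main(String[] args) {
        ByteBuffer onDiskBlock = ByteBuffer.allocate(4096);    // heap buffer, as in HFileBlock

        // The writable heap buffer exposes its backing array.
        System.out.println(onDiskBlock.hasArray());            // true
        System.out.println(onDiskBlock.isDirect());             // false

        // The read-only view hides the array (array() would allow writes),
        // so hasArray() flips to false while the buffer is still not direct.
        ByteBuffer readOnlyView = onDiskBlock.asReadOnlyBuffer();
        System.out.println(readOnlyView.hasArray());            // false
        System.out.println(readOnlyView.isDirect());             // false

        // A plain duplicate() keeps the array visible; if only an independent
        // position/limit is needed, it also avoids the native direct-buffer path.
        System.out.println(onDiskBlock.duplicate().hasArray()); // true
      }
    }

So dropping the asReadOnlyBuffer() call (or, speculatively, using duplicate() where only an
independent position/limit is wanted) keeps hasArray() true and restores the old array-based path.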



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
