Date: Wed, 1 Jun 2016 01:01:12 +0000 (UTC)
From: "huaxiang sun (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-15908) Checksum verification is broken due to incorrect passing of ByteBuffers in DataChecksum

    [ https://issues.apache.org/jira/browse/HBASE-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308995#comment-15308995 ]

huaxiang sun commented on HBASE-15908:
--------------------------------------

Thanks [~mantonov]!

> Checksum verification is broken due to incorrect passing of ByteBuffers in DataChecksum
> ---------------------------------------------------------------------------------------
>
>                 Key: HBASE-15908
>                 URL: https://issues.apache.org/jira/browse/HBASE-15908
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile
>    Affects Versions: 1.3.0
>            Reporter: Mikhail Antonov
>            Assignee: Mikhail Antonov
>            Priority: Blocker
>             Fix For: 1.3.0
>
>         Attachments: master.v1.patch
>
>
> It looks like HBASE-11625 (cc [~stack], [~appy]) has broken checksum verification? I'm seeing the following on my cluster (1.3.0, Hadoop 2.7):
>
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file
>     at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>     at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>     at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1135)
>     at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>     at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>     at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>     at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>     at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>     at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>     at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>     at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
> ... 6 more
> Caused by: java.lang.IllegalArgumentException: input ByteBuffers must be direct buffers
>     at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native Method)
>     at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
>     at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
>     at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateChecksum(ChecksumUtil.java:120)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateChecksum(HFileBlock.java:1785)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1728)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:151)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:78)
>     at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:487)
> ... 16 more
>
> Prior to this change we didn't use native crc32 checksum verification, because in Hadoop's DataChecksum#verifyChunkedSums we would take this codepath:
>
>     if (data.hasArray() && checksums.hasArray()) {
>       ...
>     }
>
> So we were fine. Now, however, we drop below that check and try to use a slightly different variant of native crc32 (if one is available) that takes ByteBuffers instead of byte[] arrays, and that variant expects direct ByteBuffers, not heap ones.
> I think the easiest fix, one that works on all Hadoop versions, would be to remove the asReadOnlyBuffer() conversion here:
>
>     !validateChecksum(offset, onDiskBlockByteBuffer.asReadOnlyBuffer(), hdrSize)) {
>
> I don't see why we need it. Let me test.
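
For context on why the asReadOnlyBuffer() call matters: a read-only view of a heap ByteBuffer reports hasArray() == false (handing out the backing array would defeat the read-only guarantee), yet it is still not a direct buffer. It therefore falls through the hasArray() fast path quoted above and lands in the NativeCrc32 branch, which rejects non-direct buffers with exactly the IllegalArgumentException in the stack trace. A minimal, self-contained sketch of this buffer behavior (the class name is illustrative, not part of the patch):

    import java.nio.ByteBuffer;

    public class ReadOnlyBufferDemo {
        public static void main(String[] args) {
            ByteBuffer heap = ByteBuffer.allocate(64);

            // The original heap buffer is backed by an accessible byte[]:
            System.out.println(heap.hasArray());     // true
            System.out.println(heap.isDirect());     // false

            // A read-only view hides the backing array to prevent mutation,
            // but it is still not a direct buffer:
            ByteBuffer readOnly = heap.asReadOnlyBuffer();
            System.out.println(readOnly.hasArray()); // false
            System.out.println(readOnly.isDirect()); // false
        }
    }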
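
Assuming the fix is exactly the removal suggested above, the call site in HFileBlock.FSReaderImpl.readBlockDataInternal would change roughly as follows. This is a sketch built from the quoted fragment; the enclosing condition is elided, and whether master.v1.patch does precisely this is not shown here:

    // before: the read-only view reports hasArray() == false, pushing
    // DataChecksum#verifyChunkedSums onto the direct-buffer-only native path
    !validateChecksum(offset, onDiskBlockByteBuffer.asReadOnlyBuffer(), hdrSize)) {

    // after: pass the heap buffer itself, so hasArray() stays true and the
    // byte[]-based checksum path is taken regardless of the Hadoop version
    !validateChecksum(offset, onDiskBlockByteBuffer, hdrSize)) {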