Return-Path:
Delivered-To: apmail-hbase-commits-archive@www.apache.org
Received: (qmail 86089 invoked from network); 17 Sep 2010 04:21:51 -0000
Received: from unknown (HELO mail.apache.org) (140.211.11.3)
  by 140.211.11.9 with SMTP; 17 Sep 2010 04:21:51 -0000
Received: (qmail 25359 invoked by uid 500); 17 Sep 2010 04:21:51 -0000
Delivered-To: apmail-hbase-commits-archive@hbase.apache.org
Received: (qmail 25290 invoked by uid 500); 17 Sep 2010 04:21:50 -0000
Mailing-List: contact commits-help@hbase.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Reply-To: dev@hbase.apache.org
Delivered-To: mailing list commits@hbase.apache.org
Received: (qmail 25282 invoked by uid 99); 17 Sep 2010 04:21:49 -0000
Received: from athena.apache.org (HELO athena.apache.org) (140.211.11.136)
  by apache.org (qpsmtpd/0.29) with ESMTP; Fri, 17 Sep 2010 04:21:49 +0000
X-ASF-Spam-Status: No, hits=-2000.0 required=10.0 tests=ALL_TRUSTED
X-Spam-Check-By: apache.org
Received: from [140.211.11.4] (HELO eris.apache.org) (140.211.11.4)
  by apache.org (qpsmtpd/0.29) with ESMTP; Fri, 17 Sep 2010 04:21:49 +0000
Received: by eris.apache.org (Postfix, from userid 65534)
  id 07D5023889E2; Fri, 17 Sep 2010 04:21:29 +0000 (UTC)
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Subject: svn commit: r997975 - in /hbase/trunk: CHANGES.txt src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
Date: Fri, 17 Sep 2010 04:21:28 -0000
To: commits@hbase.apache.org
From: stack@apache.org
X-Mailer: svnmailer-1.0.8
Message-Id: <20100917042129.07D5023889E2@eris.apache.org>

Author: stack
Date: Fri Sep 17 04:21:28 2010
New Revision: 997975

URL: http://svn.apache.org/viewvc?rev=997975&view=rev
Log:
HBASE-3006  Reading compressed HFile blocks causes way too many DFS RPC calls
            severly impacting performance--Now add fix I intended, a spelling
            mistake in HFile

Modified:
    hbase/trunk/CHANGES.txt
    hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java

Modified: hbase/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/hbase/trunk/CHANGES.txt?rev=997975&r1=997974&r2=997975&view=diff
==============================================================================
--- hbase/trunk/CHANGES.txt (original)
+++ hbase/trunk/CHANGES.txt Fri Sep 17 04:21:28 2010
@@ -523,6 +523,9 @@ Release 0.21.0 - Unreleased
    HBASE-2986 multi writable can npe causing client hang
    HBASE-2979 Fix failing TestMultParrallel in hudson build
    HBASE-2899 hfile.min.blocksize.size ignored/documentation wrong
+   HBASE-3006 Reading compressed HFile blocks causes way too many DFS RPC
+              calls severly impacting performance
+              (Kannan Muthukkaruppan via Stack)
 
   IMPROVEMENTS
    HBASE-1760 Cleanup TODOs in HTable

Modified: hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
URL: http://svn.apache.org/viewvc/hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java?rev=997975&r1=997974&r2=997975&view=diff
==============================================================================
--- hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java (original)
+++ hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java Fri Sep 17 04:21:28 2010
@@ -19,6 +19,7 @@
  */
 package org.apache.hadoop.hbase.io.hfile;
 
+import java.io.BufferedInputStream;
 import java.io.Closeable;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
@@ -1051,10 +1052,15 @@ public class HFile {
       // decompressor reading into next block -- IIRC, it just grabs a
       // bunch of data w/o regard to whether decompressor is coming to end of a
       // decompression.
+
+      // We use a buffer of DEFAULT_BLOCKSIZE size.  This might be extreme.
+      // Could maybe do with less.  Study and figure it: TODO
       InputStream is = this.compressAlgo.createDecompressionStream(
-        new BoundedRangeFileInputStream(this.istream, offset, compressedSize,
-          pread),
-        decompressor, 0);
+        new BufferedInputStream(
+          new BoundedRangeFileInputStream(this.istream, offset, compressedSize,
+            pread),
+          Math.min(DEFAULT_BLOCKSIZE, compressedSize)),
+        decompressor, 0);
       buf = ByteBuffer.allocate(decompressedSize);
       IOUtils.readFully(is, buf.array(), 0, buf.capacity());
       is.close();
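
The effect of the patch can be sketched outside HBase: a decompressor issues many small reads against its input, and without buffering each one reaches the underlying stream (in HBase, a BoundedRangeFileInputStream, whose reads can each become a DFS RPC). Wrapping the source in a BufferedInputStream sized like the patch's Math.min(DEFAULT_BLOCKSIZE, compressedSize) collapses those small reads into one bulk read per buffer fill. A minimal self-contained sketch -- the CountingInputStream and BufferedReadDemo names are illustrative, not from the HBase source tree:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedReadDemo {
  // Counts how many read calls reach the underlying stream. In the HBase
  // case, each such call on the unbuffered range stream could turn into a
  // DFS RPC -- this counter is the stand-in for that cost.
  static class CountingInputStream extends FilterInputStream {
    int calls = 0;
    CountingInputStream(InputStream in) { super(in); }
    @Override public int read() throws IOException {
      calls++; return super.read();
    }
    @Override public int read(byte[] b, int off, int len) throws IOException {
      calls++; return super.read(b, off, len);
    }
  }

  public static void main(String[] args) throws IOException {
    byte[] data = new byte[64 * 1024];  // stand-in for one compressed block

    // Unbuffered: 64K single-byte reads each hit the source stream.
    CountingInputStream raw = new CountingInputStream(new ByteArrayInputStream(data));
    for (int i = 0; i < data.length; i++) raw.read();

    // Buffered, as in the patch: the same reads are served from a
    // block-sized buffer, so the source is hit once per buffer fill.
    CountingInputStream counted = new CountingInputStream(new ByteArrayInputStream(data));
    InputStream buffered = new BufferedInputStream(counted, Math.min(64 * 1024, data.length));
    for (int i = 0; i < data.length; i++) buffered.read();

    System.out.println("unbuffered calls: " + raw.calls);
    System.out.println("buffered calls: " + counted.calls);
  }
}
```

Reading the 64 KB block byte-by-byte costs 65536 calls on the raw stream but a single call through the buffer, which is the RPC reduction the commit is after.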