Subject: svn commit: r602199 - in /lucene/hadoop/trunk/src/contrib/hbase: ./ conf/ src/java/org/apache/hadoop/hbase/ src/test/org/apache/hadoop/hbase/
Date: Fri, 07 Dec 2007 19:49:19 -0000
To: hadoop-commits@lucene.apache.org
Reply-To: hadoop-dev@lucene.apache.org
From: stack@apache.org
Message-Id: <20071207194922.1884F1A9832@eris.apache.org>

Author: stack
Date: Fri Dec  7 11:49:19 2007
New Revision: 602199

URL: http://svn.apache.org/viewvc?rev=602199&view=rev
Log:
HADOOP-2377 Holding open MapFile.Readers is expensive, so use less of them

Modified:
    lucene/hadoop/trunk/src/contrib/hbase/CHANGES.txt
    lucene/hadoop/trunk/src/contrib/hbase/conf/hbase-default.xml
    lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HConstants.java
    lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HRegion.java
    lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HStoreFile.java
    lucene/hadoop/trunk/src/contrib/hbase/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java

Modified: lucene/hadoop/trunk/src/contrib/hbase/CHANGES.txt
URL: http://svn.apache.org/viewvc/lucene/hadoop/trunk/src/contrib/hbase/CHANGES.txt?rev=602199&r1=602198&r2=602199&view=diff
==============================================================================
--- lucene/hadoop/trunk/src/contrib/hbase/CHANGES.txt (original)
+++ lucene/hadoop/trunk/src/contrib/hbase/CHANGES.txt Fri Dec  7 11:49:19 2007
@@ -94,6 +94,7 @@
    HADOOP-2299 Support inclusive scans (Bryan Duxbury via Stack)
    HADOOP-2333 Client side retries happen at the wrong level
    HADOOP-2357 Compaction cleanup; less deleting + prevent possible file leaks
+   HADOOP-2377 Holding open MapFile.Readers is expensive, so use less of them
 
 Release 0.15.1

Modified: lucene/hadoop/trunk/src/contrib/hbase/conf/hbase-default.xml
URL: http://svn.apache.org/viewvc/lucene/hadoop/trunk/src/contrib/hbase/conf/hbase-default.xml?rev=602199&r1=602198&r2=602199&view=diff
==============================================================================
--- lucene/hadoop/trunk/src/contrib/hbase/conf/hbase-default.xml (original)
+++ lucene/hadoop/trunk/src/contrib/hbase/conf/hbase-default.xml Fri Dec  7 11:49:19 2007
@@ -153,7 +153,7 @@
     hbase.hregion.memcache.flush.size
-    16777216
+    67108864
     
     A HRegion memcache will be flushed to disk if size of the memcache
     exceeds this number of bytes.  Value is checked by a thread that runs
@@ -174,11 +174,10 @@
     hbase.hregion.max.filesize
-    67108864
+    268435456
     
     Maximum desired file size for an HRegion.
     If filesize exceeds
-    value + (value / 2), the HRegion is split in two.  Default: 64M.
-    If too large, splits will take so long, clients timeout.
+    value + (value / 2), the HRegion is split in two.  Default: 256M.

Modified: lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HConstants.java
URL: http://svn.apache.org/viewvc/lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HConstants.java?rev=602199&r1=602198&r2=602199&view=diff
==============================================================================
--- lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HConstants.java (original)
+++ lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HConstants.java Fri Dec  7 11:49:19 2007
@@ -88,7 +88,7 @@
   static final String HREGION_OLDLOGFILE_NAME = "oldlogfile.log";
 
   /** Default maximum file size */
-  static final long DEFAULT_MAX_FILE_SIZE = 64 * 1024 * 1024;   // 64MB
+  static final long DEFAULT_MAX_FILE_SIZE = 256 * 1024 * 1024;
 
   // Always store the location of the root table's HRegion.
   // This HRegion is never split.

Modified: lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HRegion.java
URL: http://svn.apache.org/viewvc/lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HRegion.java?rev=602199&r1=602198&r2=602199&view=diff
==============================================================================
--- lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HRegion.java (original)
+++ lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HRegion.java Fri Dec  7 11:49:19 2007
@@ -310,9 +310,9 @@
       fs.delete(merges);
     }
 
-    // By default, we flush the cache when 16M.
+    // By default, we flush the cache when 64M.
     this.memcacheFlushSize = conf.getInt("hbase.hregion.memcache.flush.size",
-      1024*1024*16);
+      1024*1024*64);
     this.flushListener = listener;
     this.blockingMemcacheSize = this.memcacheFlushSize *
       conf.getInt("hbase.hregion.memcache.block.multiplier", 2);

Modified: lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HStoreFile.java
URL: http://svn.apache.org/viewvc/lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HStoreFile.java?rev=602199&r1=602198&r2=602199&view=diff
==============================================================================
--- lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HStoreFile.java (original)
+++ lucene/hadoop/trunk/src/contrib/hbase/src/java/org/apache/hadoop/hbase/HStoreFile.java Fri Dec  7 11:49:19 2007
@@ -504,14 +504,19 @@
    */
   static Reference readSplitInfo(final Path p, final FileSystem fs)
   throws IOException {
+    Reference r = null;
     FSDataInputStream in = fs.open(p);
-    String rn = in.readUTF();
-    HStoreKey midkey = new HStoreKey();
-    midkey.readFields(in);
-    long fid = in.readLong();
-    boolean tmp = in.readBoolean();
-    return new Reference(rn, fid, midkey, tmp? Range.top: Range.bottom);
-
+    try {
+      String rn = in.readUTF();
+      HStoreKey midkey = new HStoreKey();
+      midkey.readFields(in);
+      long fid = in.readLong();
+      boolean tmp = in.readBoolean();
+      r = new Reference(rn, fid, midkey, tmp?
+        Range.top: Range.bottom);
+    } finally {
+      in.close();
+    }
+    return r;
   }
 
   private void createOrFail(final FileSystem fs, final Path p)

Modified: lucene/hadoop/trunk/src/contrib/hbase/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java
URL: http://svn.apache.org/viewvc/lucene/hadoop/trunk/src/contrib/hbase/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java?rev=602199&r1=602198&r2=602199&view=diff
==============================================================================
--- lucene/hadoop/trunk/src/contrib/hbase/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java (original)
+++ lucene/hadoop/trunk/src/contrib/hbase/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java Fri Dec  7 11:49:19 2007
@@ -64,15 +64,6 @@
  *
  * <p>If number of clients > 1, we start up a MapReduce job. Each map task
  * runs an individual client.  Each client does about 1GB of data.
- *
- * <p>If client == 1, the test table is created and deleted at end of each run
- * and the sequentialWrite test is run first if a test requires
- * a populated test table: e.g. if you are running the
- * sequentialRead test, the test table must hold data for it to
- * read. If client > 1, and we are running clients in a map task, the table
- * is not deleted at the end-of-run. Also, if running the
- * sequentialRead or randomRead tests, the
- * sequentialWrite test is not automatically run first.
  */
 public class PerformanceEvaluation implements HConstants {
   static final Logger LOG =
@@ -553,23 +544,10 @@
     try {
       admin = new HBaseAdmin(this.conf);
       checkTable(admin);
-
-      if (cmd.equals(RANDOM_READ) || cmd.equals(RANDOM_READ_MEM) ||
-          cmd.equals(SCAN) || cmd.equals(SEQUENTIAL_READ)) {
-        status.setStatus("Running " + SEQUENTIAL_WRITE + " first so " +
-          cmd + " has data to work against");
-        runOneClient(SEQUENTIAL_WRITE, 0, this.R, this.R, status);
-      }
-
       runOneClient(cmd, 0, this.R, this.R, status);
     } catch (Exception e) {
       LOG.error("Failed", e);
-    } finally {
-      LOG.info("Deleting table " + tableDescriptor.getName());
-      if (admin != null) {
-        admin.deleteTable(tableDescriptor.getName());
-      }
-    }
+    }
   }
 
   private void runTest(final String cmd) throws IOException {
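
The core fix in the HStoreFile.java hunk is wrapping the reads of the split-info file in try/finally so the input stream is closed whether the reads succeed or throw, which is what prevents the file-handle leaks the log message refers to. A minimal, self-contained sketch of that pattern follows; it uses plain java.io in place of Hadoop's FSDataInputStream, and the TrackingStream and readRegionName names are hypothetical, introduced only to make the close behavior observable:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class SplitInfoReaderSketch {

    // Records whether close() ran, so we can check that the finally
    // block fires even when a read throws mid-way.
    static class TrackingStream extends DataInputStream {
        boolean closed = false;
        TrackingStream(byte[] data) {
            super(new ByteArrayInputStream(data));
        }
        @Override
        public void close() throws IOException {
            closed = true;
            super.close();
        }
    }

    // Mirrors the shape of the patched readSplitInfo(): do all reads
    // inside try, close in finally, return the result after.
    static String readRegionName(TrackingStream in) throws IOException {
        String rn = null;
        try {
            rn = in.readUTF();   // throws EOFException on truncated input
        } finally {
            in.close();          // runs on success and on failure alike
        }
        return rn;
    }

    public static void main(String[] args) throws Exception {
        // Well-formed input: a single writeUTF'd string.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeUTF("region-1234");
        TrackingStream ok = new TrackingStream(bos.toByteArray());
        System.out.println(readRegionName(ok) + " closed=" + ok.closed);

        // Truncated input: readUTF throws, but the stream still closes.
        TrackingStream bad = new TrackingStream(new byte[] {0x00});
        try {
            readRegionName(bad);
        } catch (IOException expected) {
            System.out.println("failed closed=" + bad.closed);
        }
    }
}
```

The same guarantee is written today with try-with-resources, but that syntax arrived in Java 7; the try/finally form in the patch was the idiom available to this 2007 codebase.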