Subject: svn commit: r833615 - in /hadoop/hbase/branches/0.20: CHANGES.txt src/java/org/apache/hadoop/hbase/util/Migrate.java
Date: Sat, 07 Nov 2009 00:18:18 -0000
To: hbase-commits@hadoop.apache.org
From: stack@apache.org
Message-Id: <20091107001818.E546D23888AD@eris.apache.org>

Author: stack
Date: Sat Nov  7 00:18:18 2009
New Revision: 833615

URL: http://svn.apache.org/viewvc?rev=833615&view=rev
Log:
HBASE-1959 Compress tables during 0.19 to 0.20 migration

Modified:
    hadoop/hbase/branches/0.20/CHANGES.txt
    hadoop/hbase/branches/0.20/src/java/org/apache/hadoop/hbase/util/Migrate.java
Modified: hadoop/hbase/branches/0.20/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20/CHANGES.txt?rev=833615&r1=833614&r2=833615&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20/CHANGES.txt (original)
+++ hadoop/hbase/branches/0.20/CHANGES.txt Sat Nov  7 00:18:18 2009
@@ -36,6 +36,7 @@
   HBASE-1949 KeyValue expiration by Time-to-Live during major compaction is
              broken (Gary Helmling via Stack)
   HBASE-1957 Get-s can't set a Filter (Roman Kalyakin via Stack)
+  HBASE-1959 Compress tables during 0.19 to 0.20 migration (Dave Latham via Stack)

 IMPROVEMENTS
   HBASE-1899 Use scanner caching in shell count

Modified: hadoop/hbase/branches/0.20/src/java/org/apache/hadoop/hbase/util/Migrate.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20/src/java/org/apache/hadoop/hbase/util/Migrate.java?rev=833615&r1=833614&r2=833615&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20/src/java/org/apache/hadoop/hbase/util/Migrate.java (original)
+++ hadoop/hbase/branches/0.20/src/java/org/apache/hadoop/hbase/util/Migrate.java Sat Nov  7 00:18:18 2009
@@ -373,9 +373,11 @@
         Integer.parseInt(regiondir.getName()),
         Bytes.toBytes(familydir.getName()), Long.parseLong(mf.getName()), null);
       BloomFilterMapFile.Reader src = hsf.getReader(fs, false, false);
+      String compression = conf.get("migrate.compression", "NONE").trim();
+      Compression.Algorithm compressAlgorithm = Compression.Algorithm.valueOf(compression);
       HFile.Writer tgt = StoreFile.getWriter(fs, familydir,
         conf.getInt("hfile.min.blocksize.size", 64*1024),
-        Compression.Algorithm.NONE, getComparator(basedir));
+        compressAlgorithm, getComparator(basedir));
       // From old 0.19 HLogEdit.
       ImmutableBytesWritable deleteBytes =
         new ImmutableBytesWritable("HBASE::DELETEVAL".getBytes("UTF-8"));
@@ -449,6 +451,8 @@
       hri.getTableDesc().setMemStoreFlushSize(catalogMemStoreFlushSize);
       result = true;
     }
+    String compression = getConf().get("migrate.compression", "NONE").trim();
+    Compression.Algorithm compressAlgorithm = Compression.Algorithm.valueOf(compression);
     // Remove the old MEMCACHE_FLUSHSIZE if present
     hri.getTableDesc().remove(Bytes.toBytes("MEMCACHE_FLUSHSIZE"));
     for (HColumnDescriptor hcd: hri.getTableDesc().getFamilies()) {
@@ -456,7 +460,7 @@
       hcd.setBlockCacheEnabled(true);
       // Set compression to none. Previous was 'none'. Needs to be upper-case.
       // Any other compression we are turning off. Have user enable it.
-      hcd.setCompressionType(Algorithm.NONE);
+      hcd.setCompressionType(compressAlgorithm);
       result = true;
     }
     return result;
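Both hunks resolve the `migrate.compression` config value with `Compression.Algorithm.valueOf`, which is why the in-code comment stresses that the value "Needs to be upper-case": `Enum.valueOf` matches constant names case-sensitively and throws `IllegalArgumentException` on anything else. A minimal sketch of that behavior, using a stand-in enum rather than the real HBase `Compression.Algorithm` class (its exact constants are an assumption here):

```java
// Sketch only, not HBase code: shows the Enum.valueOf pattern the patch
// relies on. The Algorithm enum below is a stand-in for
// org.apache.hadoop.hbase.io.hfile.Compression.Algorithm (assumed names).
public class CompressionConfigSketch {

  // Stand-in enum; constant names are assumed, not copied from HBase.
  enum Algorithm { NONE, GZ, LZO }

  static Algorithm resolve(String configured) {
    // Mirror Migrate.java: default to "NONE" when unset, trim whitespace.
    String name = (configured == null) ? "NONE" : configured.trim();
    // valueOf is case-sensitive: "gz" is rejected, "GZ" is accepted.
    return Algorithm.valueOf(name);
  }

  public static void main(String[] args) {
    System.out.println(resolve("GZ"));   // prints "GZ"
    System.out.println(resolve(null));   // prints "NONE"
    try {
      resolve("gz");                     // lower-case name
    } catch (IllegalArgumentException e) {
      System.out.println("lower-case name rejected");
    }
  }
}
```

Because `valueOf` throws rather than falling back, a misspelled or lower-case `migrate.compression` value would abort the migration with an `IllegalArgumentException` instead of silently writing uncompressed files.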