Subject: svn commit: r784618 [8/11] - in /hadoop/hbase/trunk_on_hadoop-0.18.3/src: java/ java/org/apache/hadoop/hbase/ java/org/apache/hadoop/hbase/client/ java/org/apache/hadoop/hbase/client/tableindexed/ java/org/apache/hadoop/hbase/filter/ java/org/apache/ha...
Date: Sun, 14 Jun 2009 21:34:19 -0000
To: hbase-commits@hadoop.apache.org
From: apurtell@apache.org
Message-Id: <20090614213424.BC207238897F@eris.apache.org>

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/java/overview.html
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/java/overview.html?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/java/overview.html (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/java/overview.html Sun Jun 14 21:34:13 2009
@@ -27,9 +27,9 @@

Requirements

  • Java 1.6.x, preferably from Sun. - Use the latest version available.
  • -
  • This version of HBase will only run on Hadoop 0.20.x. +
  • Hadoop 0.19.x. This version of HBase will + only run on this version of Hadoop.
  • ssh must be installed and sshd must be running to use Hadoop's
@@ -42,33 +42,15 @@
 for how to up the limit. Also, as of 0.18.x hadoop, datanodes have an upper-bound on the number of threads they will support (dfs.datanode.max.xcievers). Default is 256. If loading lots of data into hbase, up this limit on your
-hadoop cluster.
+hadoop cluster. Also consider upping the number of datanode handlers from the default of 3. See dfs.datanode.handler.count; a sample configuration is sketched after this list.
  • The clocks on cluster members should be in basic alignment. Some skew is tolerable, but wild skew can generate odd behaviors. Run NTP on your cluster, or an equivalent.
  • -
  • HBase depends on ZooKeeper as of release 0.20.0. - In basic standalone and pseudo-distributed modes, HBase manages a ZooKeeper instance - for you but it is required that you run a ZooKeeper Quorum when running HBase - fully distributed (More on this below). -
  • -
  • This is a list of patches we recommend you apply to your running Hadoop cluster: - -
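The two datanode settings named above are configured on the Hadoop side. A minimal sketch of the relevant hadoop-site.xml entries follows; the values shown are illustrative starting points only, not recommendations:

 <configuration>
   ...
   <!-- Raise the datanode thread ceiling; the default is 256. -->
   <property>
     <name>dfs.datanode.max.xcievers</name>
     <value>2047</value>
   </property>
   <!-- Raise the number of datanode handlers from the default of 3. -->
   <property>
     <name>dfs.datanode.handler.count</name>
     <value>10</value>
   </property>
   ...
 </configuration>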

Windows

-If you are running HBase on Windows, you must install Cygwin.
-Additionally, it is strongly recommended that you add or append to the following
-environment variables. If you install Cygwin in a location that is not C:\cygwin you
-should modify the following appropriately.
+If you are running HBase on Windows, you must install Cygwin. Additionally, it is strongly recommended that you add or append to the following environment variables. If you install Cygwin in a location that is not C:\cygwin you should modify the following appropriately.

-

 HOME=c:\cygwin\home\jim
 ANT_HOME=(wherever you installed ant)
@@ -76,33 +58,27 @@
 PATH=C:\cygwin\bin;%JAVA_HOME%\bin;%ANT_HOME%\bin; other windows stuff 
 SHELL=/bin/bash
 
-
-For additional information, see the -Hadoop Quick Start Guide +For additional information, see the Hadoop Quick Start Guide

Getting Started

-What follows presumes you have obtained a copy of HBase, -see Releases, and are installing +What follows presumes you have obtained a copy of HBase and are installing for the first time. If upgrading your HBase instance, see Upgrading. -

Three modes are described: standalone, pseudo-distributed (where all servers are run on
-a single host), and distributed. If new to HBase, start by following the standalone instructions.

-Whatever your mode, define ${HBASE_HOME} to be the location of the root of your HBase installation, e.g.
+Define ${HBASE_HOME} to be the location of the root of your HBase installation, e.g.
/usr/local/hbase. Edit ${HBASE_HOME}/conf/hbase-env.sh. In this file you can set the heapsize for HBase, etc. At a minimum, set JAVA_HOME to point at the root of your Java installation.
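For example, a minimal hbase-env.sh might contain lines like the following; the JAVA_HOME path is illustrative and HBASE_HEAPSIZE is shown only as an example of an optional tunable:

 # The java implementation to use (path is illustrative).
 export JAVA_HOME=/usr/lib/jvm/java-6-sun
 # Optionally, the maximum heap to give HBase, in MB.
 export HBASE_HEAPSIZE=1000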

-

Standalone Mode

If you are running a standalone operation, there should be nothing further to configure; proceed to Running and Confirming Your Installation. If you are running a distributed operation, continue reading.

-

Distributed Operation: Pseudo- and Fully-Distributed Modes

+

Distributed Operation

Distributed mode requires an instance of the Hadoop Distributed File System (DFS). See the Hadoop requirements and instructions for how to set up a DFS. @@ -137,12 +113,13 @@

Fully-Distributed Operation

-

For running a fully-distributed operation on more than one host, the following +For running a fully-distributed operation on more than one host, the following configurations must be made in addition to those described in the pseudo-distributed operation section above. -In this mode, a ZooKeeper cluster is required.

-

In hbase-site.xml, set hbase.cluster.distributed to 'true'. -

+A ZooKeeper cluster is also required to ensure higher availability.
+In hbase-site.xml, you must also configure
+hbase.cluster.distributed to 'true'.
+

 <configuration>
   ...
@@ -157,60 +134,43 @@
   ...
 </configuration>
 
-
-

-In fully-distributed operation, you probably want to change your hbase.rootdir -from localhost to the name of the node running the HDFS namenode. In addition -to hbase-site.xml changes, a fully-distributed operation requires that you -modify ${HBASE_HOME}/conf/regionservers. -The regionserver file lists all hosts running HRegionServers, one host per line -(This file in HBase is like the hadoop slaves file at ${HADOOP_HOME}/conf/slaves). +Keep in mind that for a fully-distributed operation, you may not want your hbase.rootdir +to point to localhost (maybe, as in the configuration above, you will want to use +example.org). In addition to hbase-site.xml, a fully-distributed +operation requires that you also modify ${HBASE_HOME}/conf/regionservers. +regionserver lists all the hosts running HRegionServers, one host per line (This file +in HBase is like the hadoop slaves file at ${HADOOP_HOME}/conf/slaves).
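For example, a regionservers file for a three-node deployment would simply list one host per line (hostnames here are illustrative):

 rs1.example.org
 rs2.example.org
 rs3.example.org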

-A distributed HBase depends on a running ZooKeeper cluster. -The ZooKeeper configuration file for HBase is stored at ${HBASE_HOME}/conf/zoo.cfg. -See the ZooKeeper Getting Started Guide -for information about the format and options of that file. Specifically, look at the -Running Replicated ZooKeeper section. - - -After configuring zoo.cfg, in ${HBASE_HOME}/conf/hbase-env.sh, -set the following to tell HBase to STOP managing its instance of ZooKeeper. -

+Furthermore, you have to configure a distributed ZooKeeper cluster. +The ZooKeeper configuration file is stored at ${HBASE_HOME}/conf/zoo.cfg. +See the ZooKeeper Getting Started Guide for information about the format and options of that file. +Specifically, look at the Running Replicated ZooKeeper section. +In ${HBASE_HOME}/conf/hbase-env.sh, set the following to tell HBase not to manage its own single instance of ZooKeeper.
   ...
 # Tell HBase whether it should manage its own instance of ZooKeeper or not.
 export HBASE_MANAGES_ZK=false
 
-
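As a rough sketch, a replicated zoo.cfg for a three-member quorum might look like the following; the hostnames, dataDir path, and timing values are illustrative, so consult the ZooKeeper Getting Started Guide for authoritative settings:

 tickTime=2000
 initLimit=10
 syncLimit=5
 dataDir=/var/zookeeper
 clientPort=2181
 server.0=zk0.example.org:2888:3888
 server.1=zk1.example.org:2888:3888
 server.2=zk2.example.org:2888:3888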

-Though not recommended, it can be convenient having HBase continue to manage
-ZooKeeper even when in distributed mode (It can be good when testing or taking
-hbase for a testdrive). Change ${HBASE_HOME}/conf/zoo.cfg and
-set the server.0 property to the IP of the node that will be running ZooKeeper
-(Leaving the default value of "localhost" will make it impossible to start HBase).
+It is still possible to have HBase start and manage a single ZooKeeper instance in fully-distributed operation.
+First, change ${HBASE_HOME}/conf/zoo.cfg and configure a single node.
+Note that leaving the value "localhost" will make it impossible to start HBase.

   ...
 server.0=example.org:2888:3888
-
Then on the example.org server do the following before running HBase.
 ${HBASE_HOME}/bin/hbase-daemon.sh start zookeeper
 
- -

To stop ZooKeeper, after you've shut down hbase, do: -

-
-${HBASE_HOME}/bin/hbase-daemon.sh stop zookeeper
-
-
Be aware that this option is recommended only for testing purposes, as a failure on that node would render HBase unusable.

+

    Of note, if you have made HDFS client-side configuration changes on your Hadoop cluster, HBase will not see them unless you do one of the following:

    @@ -227,16 +187,12 @@

    If you are running in standalone, non-distributed mode, HBase by default uses the local filesystem.

    -

    If you are running a distributed cluster you will need to start the Hadoop DFS daemons and -ZooKeeper Quorum -before starting HBase and stop the daemons after HBase has shut down.

    -

    Start and +

    If you are running a distributed cluster you will need to start the Hadoop DFS daemons
+before starting HBase and stop the daemons after HBase has shut down. Start the Hadoop DFS daemons by running ${HADOOP_HOME}/bin/start-dfs.sh and stop them with ${HADOOP_HOME}/bin/stop-dfs.sh. You can ensure DFS started properly by putting and getting a file into and out of the Hadoop filesystem. HBase does not normally use the MapReduce daemons; these do not need to be started.
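    For example, a quick smoke test of DFS might look like the following (the file used and the paths are illustrative):

 ${HADOOP_HOME}/bin/start-dfs.sh
 ${HADOOP_HOME}/bin/hadoop fs -mkdir /test
 ${HADOOP_HOME}/bin/hadoop fs -put ${HADOOP_HOME}/conf/hadoop-env.sh /test/hadoop-env.sh
 ${HADOOP_HOME}/bin/hadoop fs -get /test/hadoop-env.sh /tmp/hadoop-env.sh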

    -

    Start up your ZooKeeper cluster.

    -

    Start HBase with the following command:

    @@ -270,9 +226,114 @@
     

    Example API Usage

    -For sample Java code, see org.apache.hadoop.hbase.client documentation. +

    Once you have a running HBase, you probably want a way to hook your application up to it. + If your application is in Java, then you should use the Java API. Here's an example of what + a simple client might look like. This example assumes that you've created a table called + "myTable" with a column family called "myColumnFamily". +

    + +
    +
+import java.io.IOException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
    +import org.apache.hadoop.hbase.client.HTable;
    +import org.apache.hadoop.hbase.client.Scanner;
    +import org.apache.hadoop.hbase.io.BatchUpdate;
    +import org.apache.hadoop.hbase.io.Cell;
    +import org.apache.hadoop.hbase.io.RowResult;
    +import org.apache.hadoop.hbase.util.Bytes;
    +
    +public class MyClient {
    +
    +  public static void main(String args[]) throws IOException {
    +    // You need a configuration object to tell the client where to connect.
    +    // But don't worry, the defaults are pulled from the local config file.
    +    HBaseConfiguration config = new HBaseConfiguration();
    +
    +    // This instantiates an HTable object that connects you to the "myTable"
    +    // table. 
    +    HTable table = new HTable(config, "myTable");
    +
    +    // To do any sort of update on a row, you use an instance of the BatchUpdate
    +    // class. A BatchUpdate takes a row and optionally a timestamp which your
    +    // updates will affect.  If no timestamp, the server applies current time
    +    // to the edits.
    +    BatchUpdate batchUpdate = new BatchUpdate("myRow");
    +
    +    // The BatchUpdate#put method takes a byte [] (or String) that designates
    +    // what cell you want to put a value into, and a byte array that is the
    +    // value you want to store. Note that if you want to store Strings, you
    +    // have to getBytes() from the String for HBase to store it since HBase is
    +    // all about byte arrays. The same goes for primitives like ints and longs
    +    // and user-defined classes - you must find a way to reduce it to bytes.
    +    // The Bytes class from the hbase util package has utility for going from
    +    // String to utf-8 bytes and back again and help for other base types.
    +    batchUpdate.put("myColumnFamily:columnQualifier1", 
    +      Bytes.toBytes("columnQualifier1 value!"));
    +
    +    // Deletes are batch operations in HBase as well. 
    +    batchUpdate.delete("myColumnFamily:cellIWantDeleted");
    +
    +    // Once you've done all the puts you want, you need to commit the results.
    +    // The HTable#commit method takes the BatchUpdate instance you've been 
    +    // building and pushes the batch of changes you made into HBase.
    +    table.commit(batchUpdate);
    +
    +    // Now, to retrieve the data we just wrote. The values that come back are
    +    // Cell instances. A Cell is a combination of the value as a byte array and
    +    // the timestamp the value was stored with. If you happen to know that the 
    +    // value contained is a string and want an actual string, then you must 
    +    // convert it yourself.
    +    Cell cell = table.get("myRow", "myColumnFamily:columnQualifier1");
    +    // This could throw a NullPointerException if there was no value at the cell
    +    // location.
    +    String valueStr = Bytes.toString(cell.getValue());
    +    
    +    // Sometimes, you won't know the row you're looking for. In this case, you
    +    // use a Scanner. This will give you cursor-like interface to the contents
    +    // of the table.
    +    Scanner scanner = 
    +      // we want to get back only "myColumnFamily:columnQualifier1" when we iterate
    +      table.getScanner(new String[]{"myColumnFamily:columnQualifier1"});
    +    
    +    
    +    // Scanners return RowResult instances. A RowResult is like the
    +    // row key and the columns all wrapped up in a single Object. 
    +    // RowResult#getRow gives you the row key. RowResult also implements 
    +    // Map, so you can get to your column results easily. 
    +    
    +    // Now, for the actual iteration. One way is to use a while loop like so:
    +    RowResult rowResult = scanner.next();
    +    
    +    while (rowResult != null) {
    +      // print out the row we found and the columns we were looking for
    +      System.out.println("Found row: " + Bytes.toString(rowResult.getRow()) +
    +        " with value: " + rowResult.get(Bytes.toBytes("myColumnFamily:columnQualifier1")));
    +      rowResult = scanner.next();
    +    }
    +    
    +    // The other approach is to use a foreach loop. Scanners are iterable!
    +    for (RowResult result : scanner) {
    +      // print out the row we found and the columns we were looking for
    +      System.out.println("Found row: " + Bytes.toString(rowResult.getRow()) +
    +        " with value: " + rowResult.get(Bytes.toBytes("myColumnFamily:columnQualifier1")));
    +    }
    +    
    +    // Make sure you close your scanners when you are done!
+    // It's probably best to put the iteration into a try/finally with the below
    +    // inside the finally clause.
    +    scanner.close();
    +  }
    +}
    +
    +
    + +

    There are many other methods for putting data into and getting data out of + HBase, but these examples should get you started. See the HTable javadoc for + more methods. Additionally, there are methods for managing tables in the + HBaseAdmin class.
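    For instance, the "myTable" table assumed by the client example above can be created through HBaseAdmin. The following is a minimal sketch using only API calls that appear elsewhere in this commit; the table and column family names are the ones assumed by the example:

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class MyAdminClient {

  public static void main(String args[]) throws IOException {
    // As in the client example, connection details come from the local config file.
    HBaseConfiguration config = new HBaseConfiguration();
    HBaseAdmin admin = new HBaseAdmin(config);

    // Describe the table and its single column family (family names end in ':'),
    // then create it. Creating a table that already exists throws an exception.
    HTableDescriptor desc = new HTableDescriptor("myTable");
    desc.addFamily(new HColumnDescriptor("myColumnFamily:"));
    admin.createTable(desc);
  }
}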

    -

    If your client is NOT Java, consider the Thrift or REST libraries.

    +

    If your client is NOT Java, then you should consider the Thrift or REST + libraries.

    Related Documentation

      Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/AbstractMergeTestBase.java URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/AbstractMergeTestBase.java?rev=784618&r1=784617&r2=784618&view=diff ============================================================================== --- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/AbstractMergeTestBase.java (original) +++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/AbstractMergeTestBase.java Sun Jun 14 21:34:13 2009 @@ -23,7 +23,6 @@ import java.io.UnsupportedEncodingException; import java.util.Random; -import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.io.BatchUpdate; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.regionserver.HRegion; @@ -35,7 +34,7 @@ public abstract class AbstractMergeTestBase extends HBaseClusterTestCase { static final Log LOG = LogFactory.getLog(AbstractMergeTestBase.class.getName()); - static final byte [] COLUMN_NAME = Bytes.toBytes("contents"); + static final byte [] COLUMN_NAME = Bytes.toBytes("contents:"); protected final Random rand = new Random(); protected HTableDescriptor desc; protected ImmutableBytesWritable value; @@ -127,10 +126,11 @@ HRegionIncommon r = new HRegionIncommon(region); for(int i = firstRow; i < firstRow + nrows; i++) { - Put put = new Put(Bytes.toBytes("row_" + BatchUpdate batchUpdate = new BatchUpdate(Bytes.toBytes("row_" + String.format("%1$05d", i))); - put.add(COLUMN_NAME, null, value.get()); - region.put(put); + + batchUpdate.put(COLUMN_NAME, value.get()); + region.batchUpdate(batchUpdate, null); if(i % 10000 == 0) { System.out.println("Flushing write #" + i); r.flushcache(); Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/DFSAbort.java URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/DFSAbort.java?rev=784618&r1=784617&r2=784618&view=diff ============================================================================== --- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/DFSAbort.java (original) +++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/DFSAbort.java Sun Jun 14 21:34:13 2009 @@ -40,7 +40,7 @@ try { super.setUp(); HTableDescriptor desc = new HTableDescriptor(getName()); - desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY)); + desc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY_STR)); HBaseAdmin admin = new HBaseAdmin(conf); admin.createTable(desc); } catch (Exception e) { Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HBaseTestCase.java URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HBaseTestCase.java?rev=784618&r1=784617&r2=784618&view=diff ============================================================================== --- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HBaseTestCase.java (original) +++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HBaseTestCase.java Sun Jun 14 21:34:13 2009 @@ -25,7 +25,6 @@ import java.util.Iterator; import java.util.List; import java.util.Map; -import java.util.NavigableMap; import java.util.SortedMap; import junit.framework.TestCase; @@ -35,13 +34,8 @@ import org.apache.hadoop.dfs.MiniDFSCluster; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; -import 
org.apache.hadoop.hbase.client.Delete; -import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HTable; -import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.client.Result; -import org.apache.hadoop.hbase.client.Scan; -import org.apache.hadoop.hbase.client.ResultScanner; +import org.apache.hadoop.hbase.client.Scanner; import org.apache.hadoop.hbase.io.BatchUpdate; import org.apache.hadoop.hbase.io.Cell; import org.apache.hadoop.hbase.io.RowResult; @@ -58,11 +52,11 @@ /** configuration parameter name for test directory */ public static final String TEST_DIRECTORY_KEY = "test.build.data"; - protected final static byte [] fam1 = Bytes.toBytes("colfamily1"); - protected final static byte [] fam2 = Bytes.toBytes("colfamily2"); - protected final static byte [] fam3 = Bytes.toBytes("colfamily3"); - protected static final byte [][] COLUMNS = {fam1, - fam2, fam3}; + protected final static byte [] COLFAMILY_NAME1 = Bytes.toBytes("colfamily1:"); + protected final static byte [] COLFAMILY_NAME2 = Bytes.toBytes("colfamily2:"); + protected final static byte [] COLFAMILY_NAME3 = Bytes.toBytes("colfamily3:"); + protected static final byte [][] COLUMNS = {COLFAMILY_NAME1, + COLFAMILY_NAME2, COLFAMILY_NAME3}; private boolean localfs = false; protected Path testDir = null; @@ -195,13 +189,13 @@ protected HTableDescriptor createTableDescriptor(final String name, final int versions) { HTableDescriptor htd = new HTableDescriptor(name); - htd.addFamily(new HColumnDescriptor(fam1, versions, + htd.addFamily(new HColumnDescriptor(COLFAMILY_NAME1, versions, HColumnDescriptor.DEFAULT_COMPRESSION, false, false, Integer.MAX_VALUE, HConstants.FOREVER, false)); - htd.addFamily(new HColumnDescriptor(fam2, versions, + htd.addFamily(new HColumnDescriptor(COLFAMILY_NAME2, versions, HColumnDescriptor.DEFAULT_COMPRESSION, false, false, Integer.MAX_VALUE, HConstants.FOREVER, false)); - htd.addFamily(new HColumnDescriptor(fam3, versions, + htd.addFamily(new HColumnDescriptor(COLFAMILY_NAME3, versions, HColumnDescriptor.DEFAULT_COMPRESSION, false, false, Integer.MAX_VALUE, HConstants.FOREVER, false)); return htd; @@ -290,13 +284,11 @@ break EXIT; } try { - Put put = new Put(t); - if(ts != -1) { - put.setTimeStamp(ts); - } + BatchUpdate batchUpdate = ts == -1 ? 
+ new BatchUpdate(t) : new BatchUpdate(t, ts); try { - put.add(Bytes.toBytes(column), null, t); - updater.put(put); + batchUpdate.put(column, t); + updater.commit(batchUpdate); count++; } catch (RuntimeException ex) { ex.printStackTrace(); @@ -339,23 +331,44 @@ */ public static interface Incommon { /** - * - * @param delete - * @param lockid - * @param writeToWAL + * @param row + * @param column + * @return value for row/column pair * @throws IOException */ - public void delete(Delete delete, Integer lockid, boolean writeToWAL) + public Cell get(byte [] row, byte [] column) throws IOException; + /** + * @param row + * @param column + * @param versions + * @return value for row/column pair for number of versions requested + * @throws IOException + */ + public Cell[] get(byte [] row, byte [] column, int versions) throws IOException; + /** + * @param row + * @param column + * @param ts + * @param versions + * @return value for row/column/timestamp tuple for number of versions + * @throws IOException + */ + public Cell[] get(byte [] row, byte [] column, long ts, int versions) throws IOException; + /** + * @param row + * @param column + * @param ts + * @throws IOException + */ + public void deleteAll(byte [] row, byte [] column, long ts) throws IOException; /** - * @param put + * @param batchUpdate * @throws IOException */ - public void put(Put put) throws IOException; + public void commit(BatchUpdate batchUpdate) throws IOException; - public Result get(Get get) throws IOException; - /** * @param columns * @param firstRow @@ -380,46 +393,48 @@ this.region = HRegion; } - public void put(Put put) throws IOException { - region.put(put); + public void commit(BatchUpdate batchUpdate) throws IOException { + region.batchUpdate(batchUpdate, null); } - public void delete(Delete delete, Integer lockid, boolean writeToWAL) + public void deleteAll(byte [] row, byte [] column, long ts) throws IOException { - this.region.delete(delete, lockid, writeToWAL); + this.region.deleteAll(row, column, ts, null); } - - public Result get(Get get) throws IOException { - return region.get(get, null); - } - + public ScannerIncommon getScanner(byte [][] columns, byte [] firstRow, long ts) throws IOException { - Scan scan = new Scan(firstRow); - scan.addColumns(columns); - scan.setTimeRange(0, ts); return new - InternalScannerIncommon(region.getScanner(scan)); + InternalScannerIncommon(region.getScanner(columns, firstRow, ts, null)); } - - //New - public ScannerIncommon getScanner(byte [] family, byte [][] qualifiers, - byte [] firstRow, long ts) - throws IOException { - Scan scan = new Scan(firstRow); - for(int i=0; i getFull(byte [] row) throws IOException { + return region.getFull(row, null, HConstants.LATEST_TIMESTAMP, 1, null); } - public void flushcache() throws IOException { this.region.flushcache(); @@ -440,27 +455,33 @@ this.table = table; } - public void put(Put put) throws IOException { - table.put(put); + public void commit(BatchUpdate batchUpdate) throws IOException { + table.commit(batchUpdate); } + public void deleteAll(byte [] row, byte [] column, long ts) + throws IOException { + this.table.deleteAll(row, column, ts); + } - public void delete(Delete delete, Integer lockid, boolean writeToWAL) + public ScannerIncommon getScanner(byte [][] columns, byte [] firstRow, long ts) throws IOException { - this.table.delete(delete); + return new + ClientScannerIncommon(table.getScanner(columns, firstRow, ts, null)); } - public Result get(Get get) throws IOException { - return table.get(get); + public Cell get(byte [] 
row, byte [] column) throws IOException { + return this.table.get(row, column); } - public ScannerIncommon getScanner(byte [][] columns, byte [] firstRow, long ts) + public Cell[] get(byte [] row, byte [] column, int versions) throws IOException { - Scan scan = new Scan(firstRow); - scan.addColumns(columns); - scan.setTimeStamp(ts); - return new - ClientScannerIncommon(table.getScanner(scan)); + return this.table.get(row, column, versions); + } + + public Cell[] get(byte [] row, byte [] column, long ts, int versions) + throws IOException { + return this.table.get(row, column, ts, versions); } } @@ -473,19 +494,22 @@ } public static class ClientScannerIncommon implements ScannerIncommon { - ResultScanner scanner; - public ClientScannerIncommon(ResultScanner scanner) { + Scanner scanner; + public ClientScannerIncommon(Scanner scanner) { this.scanner = scanner; } public boolean next(List values) throws IOException { - Result results = scanner.next(); + RowResult results = scanner.next(); if (results == null) { return false; } values.clear(); - values.addAll(results.list()); + for (Map.Entry entry : results.entrySet()) { + values.add(new KeyValue(results.getRow(), entry.getKey(), + entry.getValue().getTimestamp(), entry.getValue().getValue())); + } return true; } @@ -520,53 +544,25 @@ } } -// protected void assertCellEquals(final HRegion region, final byte [] row, -// final byte [] column, final long timestamp, final String value) -// throws IOException { -// Map result = region.getFull(row, null, timestamp, 1, null); -// Cell cell_value = result.get(column); -// if (value == null) { -// assertEquals(Bytes.toString(column) + " at timestamp " + timestamp, null, -// cell_value); -// } else { -// if (cell_value == null) { -// fail(Bytes.toString(column) + " at timestamp " + timestamp + -// "\" was expected to be \"" + value + " but was null"); -// } -// if (cell_value != null) { -// assertEquals(Bytes.toString(column) + " at timestamp " -// + timestamp, value, new String(cell_value.getValue())); -// } -// } -// } - - protected void assertResultEquals(final HRegion region, final byte [] row, - final byte [] family, final byte [] qualifier, final long timestamp, - final byte [] value) - throws IOException { - Get get = new Get(row); - get.setTimeStamp(timestamp); - Result res = region.get(get, null); - NavigableMap>> map = - res.getMap(); - byte [] res_value = map.get(family).get(qualifier).get(timestamp); - - if (value == null) { - assertEquals(Bytes.toString(family) + " " + Bytes.toString(qualifier) + - " at timestamp " + timestamp, null, res_value); - } else { - if (res_value == null) { - fail(Bytes.toString(family) + " " + Bytes.toString(qualifier) + - " at timestamp " + timestamp + "\" was expected to be \"" + - value + " but was null"); - } - if (res_value != null) { - assertEquals(Bytes.toString(family) + " " + Bytes.toString(qualifier) + - " at timestamp " + - timestamp, value, new String(res_value)); - } + protected void assertCellEquals(final HRegion region, final byte [] row, + final byte [] column, final long timestamp, final String value) + throws IOException { + Map result = region.getFull(row, null, timestamp, 1, null); + Cell cell_value = result.get(column); + if (value == null) { + assertEquals(Bytes.toString(column) + " at timestamp " + timestamp, null, + cell_value); + } else { + if (cell_value == null) { + fail(Bytes.toString(column) + " at timestamp " + timestamp + + "\" was expected to be \"" + value + " but was null"); + } + if (cell_value != null) { + 
assertEquals(Bytes.toString(column) + " at timestamp " + + timestamp, value, new String(cell_value.getValue())); } } + } /** * Initializes parameters used in the test environment: Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java?rev=784618&r1=784617&r2=784618&view=diff ============================================================================== --- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java (original) +++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java Sun Jun 14 21:34:13 2009 @@ -33,7 +33,6 @@ import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.io.hfile.HFile; import org.apache.hadoop.hbase.io.hfile.HFileScanner; -import org.apache.hadoop.hbase.io.hfile.Compression; import org.apache.hadoop.hbase.util.Bytes; /** @@ -188,7 +187,7 @@ @Override void setUp() throws Exception { - writer = new HFile.Writer(this.fs, this.mf, RFILE_BLOCKSIZE, (Compression.Algorithm) null, null); + writer = new HFile.Writer(this.fs, this.mf, RFILE_BLOCKSIZE, null, null); } @Override Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/MiniHBaseCluster.java URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/MiniHBaseCluster.java?rev=784618&r1=784617&r2=784618&view=diff ============================================================================== --- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/MiniHBaseCluster.java (original) +++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/MiniHBaseCluster.java Sun Jun 14 21:34:13 2009 @@ -26,7 +26,6 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.client.HConnectionManager; import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.regionserver.HRegionServer; import org.apache.hadoop.hbase.regionserver.HRegion; @@ -63,7 +62,7 @@ } catch (BindException e) { //this port is already in use. 
try to use another (for multiple testing) int port = conf.getInt("hbase.master.port", DEFAULT_MASTER_PORT); - LOG.info("Failed binding Master to port: " + port, e); + LOG.info("MiniHBaseCluster: Failed binding Master to port: " + port); port++; conf.setInt("hbase.master.port", port); continue; @@ -173,7 +172,6 @@ if (this.hbaseCluster != null) { this.hbaseCluster.shutdown(); } - HConnectionManager.deleteAllConnections(false); } /** Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java?rev=784618&r1=784617&r2=784618&view=diff ============================================================================== --- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java (original) +++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java Sun Jun 14 21:34:13 2009 @@ -38,15 +38,13 @@ import org.apache.hadoop.dfs.MiniDFSCluster; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.client.HTable; -import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.client.Result; -import org.apache.hadoop.hbase.client.Scan; -import org.apache.hadoop.hbase.client.ResultScanner; -import org.apache.hadoop.hbase.filter.PageFilter; -import org.apache.hadoop.hbase.filter.RowWhileMatchFilter; +import org.apache.hadoop.hbase.client.Scanner; +import org.apache.hadoop.hbase.filter.PageRowFilter; +import org.apache.hadoop.hbase.filter.WhileMatchRowFilter; +import org.apache.hadoop.hbase.io.BatchUpdate; +import org.apache.hadoop.hbase.io.RowResult; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.Hash; @@ -88,13 +86,12 @@ private static final int ONE_GB = 1024 * 1024 * 1000; private static final int ROWS_PER_GB = ONE_GB / ROW_LENGTH; - static final byte [] FAMILY_NAME = Bytes.toBytes("info"); - static final byte [] QUALIFIER_NAME = Bytes.toBytes("data"); + static final byte [] COLUMN_NAME = Bytes.toBytes(COLUMN_FAMILY_STR + "data"); protected static final HTableDescriptor TABLE_DESCRIPTOR; static { TABLE_DESCRIPTOR = new HTableDescriptor("TestTable"); - TABLE_DESCRIPTOR.addFamily(new HColumnDescriptor(CATALOG_FAMILY)); + TABLE_DESCRIPTOR.addFamily(new HColumnDescriptor(COLUMN_FAMILY)); } private static final String RANDOM_READ = "randomRead"; @@ -434,12 +431,11 @@ @Override void testRow(final int i) throws IOException { - Scan scan = new Scan(getRandomRow(this.rand, this.totalRows)); - scan.addColumn(FAMILY_NAME, QUALIFIER_NAME); - scan.setFilter(new RowWhileMatchFilter(new PageFilter(120))); - ResultScanner s = this.table.getScanner(scan); + Scanner s = this.table.getScanner(new byte [][] {COLUMN_NAME}, + getRandomRow(this.rand, this.totalRows), + new WhileMatchRowFilter(new PageRowFilter(120))); //int count = 0; - for (Result rr = null; (rr = s.next()) != null;) { + for (RowResult rr = null; (rr = s.next()) != null;) { // LOG.info("" + count++ + " " + rr.toString()); } s.close(); @@ -465,9 +461,7 @@ @Override void testRow(final int i) throws IOException { - Get get = new Get(getRandomRow(this.rand, this.totalRows)); - get.addColumn(FAMILY_NAME, QUALIFIER_NAME); - this.table.get(get); + this.table.get(getRandomRow(this.rand, 
this.totalRows), COLUMN_NAME); } @Override @@ -491,9 +485,9 @@ @Override void testRow(final int i) throws IOException { byte [] row = getRandomRow(this.rand, this.totalRows); - Put put = new Put(row); - put.add(FAMILY_NAME, QUALIFIER_NAME, generateValue(this.rand)); - table.put(put); + BatchUpdate b = new BatchUpdate(row); + b.put(COLUMN_NAME, generateValue(this.rand)); + table.commit(b); } @Override @@ -503,7 +497,7 @@ } class ScanTest extends Test { - private ResultScanner testScanner; + private Scanner testScanner; ScanTest(final HBaseConfiguration conf, final int startRow, final int perClientRunRows, final int totalRows, final Status status) { @@ -513,9 +507,8 @@ @Override void testSetup() throws IOException { super.testSetup(); - Scan scan = new Scan(format(this.startRow)); - scan.addColumn(FAMILY_NAME, QUALIFIER_NAME); - this.testScanner = table.getScanner(scan); + this.testScanner = table.getScanner(new byte [][] {COLUMN_NAME}, + format(this.startRow)); } @Override @@ -546,9 +539,7 @@ @Override void testRow(final int i) throws IOException { - Get get = new Get(format(i)); - get.addColumn(FAMILY_NAME, QUALIFIER_NAME); - table.get(get); + table.get(format(i), COLUMN_NAME); } @Override @@ -565,9 +556,9 @@ @Override void testRow(final int i) throws IOException { - Put put = new Put(format(i)); - put.add(FAMILY_NAME, QUALIFIER_NAME, generateValue(this.rand)); - table.put(put); + BatchUpdate b = new BatchUpdate(format(i)); + b.put(COLUMN_NAME, generateValue(this.rand)); + table.commit(b); } @Override Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestEmptyMetaInfo.java URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestEmptyMetaInfo.java?rev=784618&r1=784617&r2=784618&view=diff ============================================================================== --- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestEmptyMetaInfo.java (original) +++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestEmptyMetaInfo.java Sun Jun 14 21:34:13 2009 @@ -23,10 +23,9 @@ import java.io.IOException; import org.apache.hadoop.hbase.client.HTable; -import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.client.Result; -import org.apache.hadoop.hbase.client.Scan; -import org.apache.hadoop.hbase.client.ResultScanner; +import org.apache.hadoop.hbase.client.Scanner; +import org.apache.hadoop.hbase.io.BatchUpdate; +import org.apache.hadoop.hbase.io.RowResult; import org.apache.hadoop.hbase.util.Bytes; /** @@ -45,10 +44,9 @@ byte [] regionName = HRegionInfo.createRegionName(tableName, Bytes.toBytes(i == 0? 
"": Integer.toString(i)), Long.toString(System.currentTimeMillis())); - Put put = new Put(regionName); - put.add(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER, - Bytes.toBytes("localhost:1234")); - t.put(put); + BatchUpdate b = new BatchUpdate(regionName); + b.put(HConstants.COL_SERVER, Bytes.toBytes("localhost:1234")); + t.commit(b); } long sleepTime = conf.getLong("hbase.master.meta.thread.rescanfrequency", 10000); @@ -61,18 +59,11 @@ } catch (InterruptedException e) { // ignore } - Scan scan = new Scan(); - scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER); - scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER); - scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.STARTCODE_QUALIFIER); - scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.SPLITA_QUALIFIER); - scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.SPLITB_QUALIFIER); - ResultScanner scanner = t.getScanner(scan); + Scanner scanner = t.getScanner(HConstants.ALL_META_COLUMNS, tableName); try { count = 0; - Result r; - while((r = scanner.next()) != null) { - if (!r.isEmpty()) { + for (RowResult r: scanner) { + if (r.size() > 0) { count += 1; } } Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestHBaseCluster.java URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestHBaseCluster.java?rev=784618&r1=784617&r2=784618&view=diff ============================================================================== --- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestHBaseCluster.java (original) +++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestHBaseCluster.java Sun Jun 14 21:34:13 2009 @@ -1,5 +1,5 @@ /** - * Copyright 2009 The Apache Software Foundation + * Copyright 2007 The Apache Software Foundation * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file @@ -21,16 +21,15 @@ import java.io.IOException; import java.util.Collection; +import java.util.Iterator; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.client.HTable; -import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.client.Result; -import org.apache.hadoop.hbase.client.Scan; -import org.apache.hadoop.hbase.client.ResultScanner; +import org.apache.hadoop.hbase.client.Scanner; +import org.apache.hadoop.hbase.io.BatchUpdate; +import org.apache.hadoop.hbase.io.RowResult; import org.apache.hadoop.hbase.util.Bytes; /** @@ -76,19 +75,18 @@ private static final int FIRST_ROW = 1; private static final int NUM_VALS = 1000; - private static final byte [] CONTENTS_CF = Bytes.toBytes("contents"); - private static final String CONTENTS_CQ_STR = "basic"; - private static final byte [] CONTENTS_CQ = Bytes.toBytes(CONTENTS_CQ_STR); + private static final byte [] CONTENTS = Bytes.toBytes("contents:"); + private static final String CONTENTS_BASIC_STR = "contents:basic"; + private static final byte [] CONTENTS_BASIC = Bytes.toBytes(CONTENTS_BASIC_STR); private static final String CONTENTSTR = "contentstr"; - // - private static final byte [] ANCHOR_CF = Bytes.toBytes("anchor"); - private static final String ANCHORNUM_CQ = "anchornum-"; - private static final String ANCHORSTR_VALUE = "anchorstr"; + private static final byte [] ANCHOR = Bytes.toBytes("anchor:"); + private static final String ANCHORNUM = "anchor:anchornum-"; + private static final String ANCHORSTR = "anchorstr"; private void setup() throws IOException { desc = new HTableDescriptor("test"); - desc.addFamily(new HColumnDescriptor(CONTENTS_CF)); - desc.addFamily(new HColumnDescriptor(ANCHOR_CF)); + desc.addFamily(new HColumnDescriptor(CONTENTS)); + desc.addFamily(new HColumnDescriptor(ANCHOR)); admin = new HBaseAdmin(conf); admin.createTable(desc); table = new HTable(conf, desc.getName()); @@ -102,10 +100,10 @@ // Write out a bunch of values for (int k = FIRST_ROW; k <= NUM_VALS; k++) { - Put put = new Put(Bytes.toBytes("row_" + k)); - put.add(CONTENTS_CF, CONTENTS_CQ, Bytes.toBytes(CONTENTSTR + k)); - put.add(ANCHOR_CF, Bytes.toBytes(ANCHORNUM_CQ + k), Bytes.toBytes(ANCHORSTR_VALUE + k)); - table.put(put); + BatchUpdate b = new BatchUpdate("row_" + k); + b.put(CONTENTS_BASIC, Bytes.toBytes(CONTENTSTR + k)); + b.put(ANCHORNUM + k, Bytes.toBytes(ANCHORSTR + k)); + table.commit(b); } LOG.info("Write " + NUM_VALS + " rows. 
Elapsed time: " + ((System.currentTimeMillis() - startTime) / 1000.0)); @@ -119,27 +117,21 @@ String rowlabelStr = "row_" + k; byte [] rowlabel = Bytes.toBytes(rowlabelStr); - Get get = new Get(rowlabel); - get.addColumn(CONTENTS_CF, CONTENTS_CQ); - byte [] bodydata = table.get(get).getValue(CONTENTS_CF, CONTENTS_CQ); - assertNotNull("no data for row " + rowlabelStr + "/" + CONTENTS_CQ_STR, + byte bodydata[] = table.get(rowlabel, CONTENTS_BASIC).getValue(); + assertNotNull("no data for row " + rowlabelStr + "/" + CONTENTS_BASIC_STR, bodydata); String bodystr = new String(bodydata, HConstants.UTF8_ENCODING); String teststr = CONTENTSTR + k; assertTrue("Incorrect value for key: (" + rowlabelStr + "/" + - CONTENTS_CQ_STR + "), expected: '" + teststr + "' got: '" + + CONTENTS_BASIC_STR + "), expected: '" + teststr + "' got: '" + bodystr + "'", teststr.compareTo(bodystr) == 0); - String collabelStr = ANCHORNUM_CQ + k; + String collabelStr = ANCHORNUM + k; collabel = Bytes.toBytes(collabelStr); - - get = new Get(rowlabel); - get.addColumn(ANCHOR_CF, collabel); - - bodydata = table.get(get).getValue(ANCHOR_CF, collabel); + bodydata = table.get(rowlabel, collabel).getValue(); assertNotNull("no data for row " + rowlabelStr + "/" + collabelStr, bodydata); bodystr = new String(bodydata, HConstants.UTF8_ENCODING); - teststr = ANCHORSTR_VALUE + k; + teststr = ANCHORSTR + k; assertTrue("Incorrect value for key: (" + rowlabelStr + "/" + collabelStr + "), expected: '" + teststr + "' got: '" + bodystr + "'", teststr.compareTo(bodystr) == 0); @@ -150,48 +142,47 @@ } private void scanner() throws IOException { + byte [][] cols = new byte [][] {Bytes.toBytes(ANCHORNUM + "[0-9]+"), + CONTENTS_BASIC}; long startTime = System.currentTimeMillis(); - Scan scan = new Scan(); - scan.addFamily(ANCHOR_CF); - scan.addColumn(CONTENTS_CF, CONTENTS_CQ); - ResultScanner s = table.getScanner(scan); + Scanner s = table.getScanner(cols, HConstants.EMPTY_BYTE_ARRAY); try { int contentsFetched = 0; int anchorFetched = 0; int k = 0; - for (Result curVals : s) { - for(KeyValue kv : curVals.raw()) { - byte [] family = kv.getFamily(); - byte [] qualifier = kv.getQualifier(); - String strValue = new String(kv.getValue()); - if(Bytes.equals(family, CONTENTS_CF)) { + for (RowResult curVals : s) { + for (Iterator it = curVals.keySet().iterator(); it.hasNext(); ) { + byte [] col = it.next(); + byte val[] = curVals.get(col).getValue(); + String curval = Bytes.toString(val); + if (Bytes.compareTo(col, CONTENTS_BASIC) == 0) { assertTrue("Error at:" + Bytes.toString(curVals.getRow()) - + ", Value for " + Bytes.toString(qualifier) + " should start with: " + CONTENTSTR - + ", but was fetched as: " + strValue, - strValue.startsWith(CONTENTSTR)); + + ", Value for " + Bytes.toString(col) + " should start with: " + CONTENTSTR + + ", but was fetched as: " + curval, + curval.startsWith(CONTENTSTR)); contentsFetched++; - } else if(Bytes.equals(family, ANCHOR_CF)) { - assertTrue("Error at:" + Bytes.toString(curVals.getRow()) - + ", Value for " + Bytes.toString(qualifier) + " should start with: " + ANCHORSTR_VALUE - + ", but was fetched as: " + strValue, - strValue.startsWith(ANCHORSTR_VALUE)); + } else if (Bytes.toString(col).startsWith(ANCHORNUM)) { + assertTrue("Error at:" + Bytes.toString(curVals.getRow()) + + ", Value for " + Bytes.toString(col) + " should start with: " + ANCHORSTR + + ", but was fetched as: " + curval, + curval.startsWith(ANCHORSTR)); anchorFetched++; } else { - LOG.info("Family: " + Bytes.toString(family) + ", Qualifier: " + 
Bytes.toString(qualifier)); + LOG.info(Bytes.toString(col)); } } k++; } assertEquals("Expected " + NUM_VALS + " " + - Bytes.toString(CONTENTS_CQ) + " values, but fetched " + + Bytes.toString(CONTENTS_BASIC) + " values, but fetched " + contentsFetched, NUM_VALS, contentsFetched); - assertEquals("Expected " + NUM_VALS + " " + ANCHORNUM_CQ + + assertEquals("Expected " + NUM_VALS + " " + ANCHORNUM + " values, but fetched " + anchorFetched, NUM_VALS, anchorFetched); @@ -210,7 +201,7 @@ assertTrue(Bytes.equals(desc.getName(), tables[0].getName())); Collection families = tables[0].getFamilies(); assertEquals(2, families.size()); - assertTrue(tables[0].hasFamily(CONTENTS_CF)); - assertTrue(tables[0].hasFamily(ANCHOR_CF)); + assertTrue(tables[0].hasFamily(CONTENTS)); + assertTrue(tables[0].hasFamily(ANCHOR)); } } \ No newline at end of file Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestKeyValue.java URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestKeyValue.java?rev=784618&r1=784617&r2=784618&view=diff ============================================================================== --- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestKeyValue.java (original) +++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestKeyValue.java Sun Jun 14 21:34:13 2009 @@ -30,7 +30,6 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValue.KVComparator; -import org.apache.hadoop.hbase.KeyValue.Type; import org.apache.hadoop.hbase.util.Bytes; public class TestKeyValue extends TestCase { @@ -40,21 +39,13 @@ final byte [] a = Bytes.toBytes("aaa"); byte [] column1 = Bytes.toBytes("abc:def"); byte [] column2 = Bytes.toBytes("abcd:ef"); - byte [] family2 = Bytes.toBytes("abcd"); - byte [] qualifier2 = Bytes.toBytes("ef"); - KeyValue aaa = new KeyValue(a, column1, 0L, Type.Put, a); - assertFalse(aaa.matchingColumn(column2)); - assertTrue(aaa.matchingColumn(column1)); - aaa = new KeyValue(a, column2, 0L, Type.Put, a); - assertFalse(aaa.matchingColumn(column1)); - assertTrue(aaa.matchingColumn(family2,qualifier2)); + KeyValue aaa = new KeyValue(a, column1, a); + assertFalse(KeyValue.COMPARATOR. + compareColumns(aaa, column2, 0, column2.length, 4) == 0); column1 = Bytes.toBytes("abcd:"); - aaa = new KeyValue(a, column1, 0L, Type.Put, a); - assertTrue(aaa.matchingColumn(family2,null)); - assertFalse(aaa.matchingColumn(family2,qualifier2)); - // Previous test had an assertFalse that I don't understand - // assertFalse(KeyValue.COMPARATOR. - // compareColumns(aaa, column1, 0, column1.length, 4) == 0); + aaa = new KeyValue(a, column1, a); + assertFalse(KeyValue.COMPARATOR. 
+ compareColumns(aaa, column1, 0, column1.length, 4) == 0); } public void testBasics() throws Exception { @@ -120,31 +111,31 @@ public void testMoreComparisons() throws Exception { // Root compares long now = System.currentTimeMillis(); - KeyValue a = new KeyValue(Bytes.toBytes(".META.,,99999999999999"), now); - KeyValue b = new KeyValue(Bytes.toBytes(".META.,,1"), now); + KeyValue a = new KeyValue(".META.,,99999999999999", now); + KeyValue b = new KeyValue(".META.,,1", now); KVComparator c = new KeyValue.RootComparator(); assertTrue(c.compare(b, a) < 0); - KeyValue aa = new KeyValue(Bytes.toBytes(".META.,,1"), now); - KeyValue bb = new KeyValue(Bytes.toBytes(".META.,,1"), - Bytes.toBytes("info:regioninfo"), 1235943454602L); + KeyValue aa = new KeyValue(".META.,,1", now); + KeyValue bb = new KeyValue(".META.,,1", "info:regioninfo", + 1235943454602L); assertTrue(c.compare(aa, bb) < 0); // Meta compares - KeyValue aaa = new KeyValue( - Bytes.toBytes("TestScanMultipleVersions,row_0500,1236020145502"), now); - KeyValue bbb = new KeyValue( - Bytes.toBytes("TestScanMultipleVersions,,99999999999999"), now); + KeyValue aaa = + new KeyValue("TestScanMultipleVersions,row_0500,1236020145502", now); + KeyValue bbb = new KeyValue("TestScanMultipleVersions,,99999999999999", + now); c = new KeyValue.MetaComparator(); assertTrue(c.compare(bbb, aaa) < 0); - KeyValue aaaa = new KeyValue(Bytes.toBytes("TestScanMultipleVersions,,1236023996656"), - Bytes.toBytes("info:regioninfo"), 1236024396271L); + KeyValue aaaa = new KeyValue("TestScanMultipleVersions,,1236023996656", + "info:regioninfo", 1236024396271L); assertTrue(c.compare(aaaa, bbb) < 0); - KeyValue x = new KeyValue(Bytes.toBytes("TestScanMultipleVersions,row_0500,1236034574162"), - Bytes.toBytes(""), 9223372036854775807L); - KeyValue y = new KeyValue(Bytes.toBytes("TestScanMultipleVersions,row_0500,1236034574162"), - Bytes.toBytes("info:regioninfo"), 1236034574912L); + KeyValue x = new KeyValue("TestScanMultipleVersions,row_0500,1236034574162", + "", 9223372036854775807L); + KeyValue y = new KeyValue("TestScanMultipleVersions,row_0500,1236034574162", + "info:regioninfo", 1236034574912L); assertTrue(c.compare(x, y) < 0); comparisons(new KeyValue.MetaComparator()); comparisons(new KeyValue.KVComparator()); @@ -160,53 +151,53 @@ public void testKeyValueBorderCases() throws IOException { // % sorts before , so if we don't do special comparator, rowB would // come before rowA. 
- KeyValue rowA = new KeyValue(Bytes.toBytes("testtable,www.hbase.org/,1234"), - Bytes.toBytes(""), Long.MAX_VALUE); - KeyValue rowB = new KeyValue(Bytes.toBytes("testtable,www.hbase.org/%20,99999"), - Bytes.toBytes(""), Long.MAX_VALUE); + KeyValue rowA = new KeyValue("testtable,www.hbase.org/,1234", + "", Long.MAX_VALUE); + KeyValue rowB = new KeyValue("testtable,www.hbase.org/%20,99999", + "", Long.MAX_VALUE); assertTrue(KeyValue.META_COMPARATOR.compare(rowA, rowB) < 0); - rowA = new KeyValue(Bytes.toBytes("testtable,,1234"), Bytes.toBytes(""), Long.MAX_VALUE); - rowB = new KeyValue(Bytes.toBytes("testtable,$www.hbase.org/,99999"), Bytes.toBytes(""), Long.MAX_VALUE); + rowA = new KeyValue("testtable,,1234", "", Long.MAX_VALUE); + rowB = new KeyValue("testtable,$www.hbase.org/,99999", "", Long.MAX_VALUE); assertTrue(KeyValue.META_COMPARATOR.compare(rowA, rowB) < 0); - rowA = new KeyValue(Bytes.toBytes(".META.,testtable,www.hbase.org/,1234,4321"), Bytes.toBytes(""), + rowA = new KeyValue(".META.,testtable,www.hbase.org/,1234,4321", "", Long.MAX_VALUE); - rowB = new KeyValue(Bytes.toBytes(".META.,testtable,www.hbase.org/%20,99999,99999"), Bytes.toBytes(""), + rowB = new KeyValue(".META.,testtable,www.hbase.org/%20,99999,99999", "", Long.MAX_VALUE); assertTrue(KeyValue.ROOT_COMPARATOR.compare(rowA, rowB) < 0); } private void metacomparisons(final KeyValue.MetaComparator c) { long now = System.currentTimeMillis(); - assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,a,,0,1"), now), - new KeyValue(Bytes.toBytes(".META.,a,,0,1"), now)) == 0); - KeyValue a = new KeyValue(Bytes.toBytes(".META.,a,,0,1"), now); - KeyValue b = new KeyValue(Bytes.toBytes(".META.,a,,0,2"), now); + assertTrue(c.compare(new KeyValue(".META.,a,,0,1", now), + new KeyValue(".META.,a,,0,1", now)) == 0); + KeyValue a = new KeyValue(".META.,a,,0,1", now); + KeyValue b = new KeyValue(".META.,a,,0,2", now); assertTrue(c.compare(a, b) < 0); - assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,a,,0,2"), now), - new KeyValue(Bytes.toBytes(".META.,a,,0,1"), now)) > 0); + assertTrue(c.compare(new KeyValue(".META.,a,,0,2", now), + new KeyValue(".META.,a,,0,1", now)) > 0); } private void comparisons(final KeyValue.KVComparator c) { long now = System.currentTimeMillis(); - assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,,1"), now), - new KeyValue(Bytes.toBytes(".META.,,1"), now)) == 0); - assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,,1"), now), - new KeyValue(Bytes.toBytes(".META.,,2"), now)) < 0); - assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,,2"), now), - new KeyValue(Bytes.toBytes(".META.,,1"), now)) > 0); + assertTrue(c.compare(new KeyValue(".META.,,1", now), + new KeyValue(".META.,,1", now)) == 0); + assertTrue(c.compare(new KeyValue(".META.,,1", now), + new KeyValue(".META.,,2", now)) < 0); + assertTrue(c.compare(new KeyValue(".META.,,2", now), + new KeyValue(".META.,,1", now)) > 0); } public void testBinaryKeys() throws Exception { Set set = new TreeSet(KeyValue.COMPARATOR); - byte [] column = Bytes.toBytes("col:umn"); - KeyValue [] keys = {new KeyValue(Bytes.toBytes("aaaaa,\u0000\u0000,2"), column, 2), - new KeyValue(Bytes.toBytes("aaaaa,\u0001,3"), column, 3), - new KeyValue(Bytes.toBytes("aaaaa,,1"), column, 1), - new KeyValue(Bytes.toBytes("aaaaa,\u1000,5"), column, 5), - new KeyValue(Bytes.toBytes("aaaaa,a,4"), column, 4), - new KeyValue(Bytes.toBytes("a,a,0"), column, 0), + String column = "col:umn"; + KeyValue [] keys = {new KeyValue("aaaaa,\u0000\u0000,2", column, 2), + new 
KeyValue("aaaaa,\u0001,3", column, 3), + new KeyValue("aaaaa,,1", column, 1), + new KeyValue("aaaaa,\u1000,5", column, 5), + new KeyValue("aaaaa,a,4", column, 4), + new KeyValue("a,a,0", column, 0), }; // Add to set with bad comparator for (int i = 0; i < keys.length; i++) { @@ -235,12 +226,12 @@ } // Make up -ROOT- table keys. KeyValue [] rootKeys = { - new KeyValue(Bytes.toBytes(".META.,aaaaa,\u0000\u0000,0,2"), column, 2), - new KeyValue(Bytes.toBytes(".META.,aaaaa,\u0001,0,3"), column, 3), - new KeyValue(Bytes.toBytes(".META.,aaaaa,,0,1"), column, 1), - new KeyValue(Bytes.toBytes(".META.,aaaaa,\u1000,0,5"), column, 5), - new KeyValue(Bytes.toBytes(".META.,aaaaa,a,0,4"), column, 4), - new KeyValue(Bytes.toBytes(".META.,,0"), column, 0), + new KeyValue(".META.,aaaaa,\u0000\u0000,0,2", column, 2), + new KeyValue(".META.,aaaaa,\u0001,0,3", column, 3), + new KeyValue(".META.,aaaaa,,0,1", column, 1), + new KeyValue(".META.,aaaaa,\u1000,0,5", column, 5), + new KeyValue(".META.,aaaaa,a,0,4", column, 4), + new KeyValue(".META.,,0", column, 0), }; // This will output the keys incorrectly. set = new TreeSet(new KeyValue.MetaComparator()); @@ -269,11 +260,4 @@ assertTrue(count++ == k.getTimestamp()); } } - - public void testStackedUpKeyValue() { - // Test multiple KeyValues in a single blob. - - // TODO actually write this test! - - } } \ No newline at end of file Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestRegionRebalancing.java URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestRegionRebalancing.java?rev=784618&r1=784617&r2=784618&view=diff ============================================================================== --- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestRegionRebalancing.java (original) +++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestRegionRebalancing.java Sun Jun 14 21:34:13 2009 @@ -28,7 +28,6 @@ import org.apache.hadoop.hbase.io.BatchUpdate; import org.apache.hadoop.hbase.client.HTable; -import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.regionserver.HRegionServer; import org.apache.hadoop.hbase.regionserver.HRegion; @@ -224,10 +223,9 @@ throws IOException { HRegion region = createNewHRegion(desc, startKey, endKey); byte [] keyToWrite = startKey == null ? 
Bytes.toBytes("row_000") : startKey; - Put put = new Put(keyToWrite); - byte [][] famAndQf = KeyValue.parseColumn(COLUMN_NAME); - put.add(famAndQf[0], famAndQf[1], Bytes.toBytes("test")); - region.put(put); + BatchUpdate bu = new BatchUpdate(keyToWrite); + bu.put(COLUMN_NAME, "test".getBytes()); + region.batchUpdate(bu, null); region.close(); region.getLog().closeAndDelete(); return region; Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestScanMultipleVersions.java URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestScanMultipleVersions.java?rev=784618&r1=784617&r2=784618&view=diff ============================================================================== --- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestScanMultipleVersions.java (original) +++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestScanMultipleVersions.java Sun Jun 14 21:34:13 2009 @@ -21,12 +21,11 @@ package org.apache.hadoop.hbase; import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HTable; -import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.client.Result; -import org.apache.hadoop.hbase.client.Scan; -import org.apache.hadoop.hbase.client.ResultScanner; +import org.apache.hadoop.hbase.client.Scanner; +import org.apache.hadoop.hbase.io.BatchUpdate; +import org.apache.hadoop.hbase.io.Cell; +import org.apache.hadoop.hbase.io.RowResult; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.util.Bytes; @@ -54,7 +53,7 @@ // Create table description this.desc = new HTableDescriptor(TABLE_NAME); - this.desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY)); + this.desc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY)); // Region 0 will contain the key range [,row_0500) INFOS[0] = new HRegionInfo(this.desc, HConstants.EMPTY_START_ROW, @@ -71,11 +70,9 @@ HRegion.createHRegion(this.INFOS[i], this.testDir, this.conf); // Insert data for (int j = 0; j < TIMESTAMPS.length; j++) { - Put put = new Put(ROWS[i]); - put.setTimeStamp(TIMESTAMPS[j]); - put.add(HConstants.CATALOG_FAMILY, null, TIMESTAMPS[j], - Bytes.toBytes(TIMESTAMPS[j])); - REGIONS[i].put(put); + BatchUpdate b = new BatchUpdate(ROWS[i], TIMESTAMPS[j]); + b.put(HConstants.COLUMN_FAMILY, Bytes.toBytes(TIMESTAMPS[j])); + REGIONS[i].batchUpdate(b, null); } // Insert the region we created into the meta HRegion.addRegionToMETA(meta, REGIONS[i]); @@ -96,25 +93,19 @@ HTable t = new HTable(conf, TABLE_NAME); for (int i = 0; i < ROWS.length; i++) { for (int j = 0; j < TIMESTAMPS.length; j++) { - Get get = new Get(ROWS[i]); - get.addFamily(HConstants.CATALOG_FAMILY); - get.setTimeStamp(TIMESTAMPS[j]); - Result result = t.get(get); - int cellCount = 0; - for(@SuppressWarnings("unused")KeyValue kv : result.sorted()) { - cellCount++; - } - assertTrue(cellCount == 1); + Cell [] cells = + t.get(ROWS[i], HConstants.COLUMN_FAMILY, TIMESTAMPS[j], 1); + assertTrue(cells != null && cells.length == 1); + System.out.println("Row=" + Bytes.toString(ROWS[i]) + ", cell=" + + cells[0]); } } // Case 1: scan with LATEST_TIMESTAMP. 
Should get two rows int count = 0; - Scan scan = new Scan(); - scan.addFamily(HConstants.CATALOG_FAMILY); - ResultScanner s = t.getScanner(scan); + Scanner s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY); try { - for (Result rr = null; (rr = s.next()) != null;) { + for (RowResult rr = null; (rr = s.next()) != null;) { System.out.println(rr.toString()); count += 1; } @@ -127,11 +118,8 @@ // (in this case > 1000 and < LATEST_TIMESTAMP. Should get 2 rows. count = 0; - scan = new Scan(); - scan.setTimeRange(1000L, Long.MAX_VALUE); - scan.addFamily(HConstants.CATALOG_FAMILY); - - s = t.getScanner(scan); + s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY, HConstants.EMPTY_START_ROW, + 10000L); try { while (s.next() != null) { count += 1; @@ -145,11 +133,8 @@ // (in this case == 1000. Should get 2 rows. count = 0; - scan = new Scan(); - scan.setTimeStamp(1000L); - scan.addFamily(HConstants.CATALOG_FAMILY); - - s = t.getScanner(scan); + s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY, HConstants.EMPTY_START_ROW, + 1000L); try { while (s.next() != null) { count += 1; @@ -163,11 +148,8 @@ // second timestamp (100 < timestamp < 1000). Should get 2 rows. count = 0; - scan = new Scan(); - scan.setTimeRange(100L, 1000L); - scan.addFamily(HConstants.CATALOG_FAMILY); - - s = t.getScanner(scan); + s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY, HConstants.EMPTY_START_ROW, + 500L); try { while (s.next() != null) { count += 1; @@ -181,11 +163,8 @@ // Should get 2 rows. count = 0; - scan = new Scan(); - scan.setTimeStamp(100L); - scan.addFamily(HConstants.CATALOG_FAMILY); - - s = t.getScanner(scan); + s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY, HConstants.EMPTY_START_ROW, + 100L); try { while (s.next() != null) { count += 1; Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestSerialization.java URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestSerialization.java?rev=784618&r1=784617&r2=784618&view=diff ============================================================================== --- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestSerialization.java (original) +++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestSerialization.java Sun Jun 14 21:34:13 2009 @@ -1,5 +1,5 @@ /** - * Copyright 2009 The Apache Software Foundation + * Copyright 2007 The Apache Software Foundation * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file @@ -20,31 +20,13 @@ package org.apache.hadoop.hbase; -import java.io.ByteArrayOutputStream; -import java.io.DataOutputStream; -import java.io.IOException; -import java.util.ArrayList; -import java.util.List; -import java.util.Map; -import java.util.Set; -import java.util.NavigableSet; - -import org.apache.hadoop.hbase.client.Delete; -import org.apache.hadoop.hbase.client.Get; -import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.client.Result; -import org.apache.hadoop.hbase.client.RowLock; -import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.io.BatchOperation; import org.apache.hadoop.hbase.io.BatchUpdate; import org.apache.hadoop.hbase.io.Cell; import org.apache.hadoop.hbase.io.HbaseMapWritable; import org.apache.hadoop.hbase.io.RowResult; -import org.apache.hadoop.hbase.io.TimeRange; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Writables; -import org.apache.hadoop.io.DataInputBuffer; -import org.apache.hadoop.io.Writable; /** * Test HBase Writables serializations @@ -70,7 +52,6 @@ assertTrue(KeyValue.COMPARATOR.compare(original, newone) == 0); } - @SuppressWarnings("unchecked") public void testHbaseMapWritable() throws Exception { HbaseMapWritable hmw = new HbaseMapWritable(); @@ -176,7 +157,7 @@ assertTrue(Bytes.equals(bu.getRow(), bubu.getRow())); // Assert has same number of BatchOperations. int firstCount = 0; - for (@SuppressWarnings("unused")BatchOperation bo: bubu) { + for (BatchOperation bo: bubu) { firstCount++; } // Now deserialize again into same instance to ensure we're not @@ -185,358 +166,9 @@ // Assert rows are same again. assertTrue(Bytes.equals(bu.getRow(), bububu.getRow())); int secondCount = 0; - for (@SuppressWarnings("unused")BatchOperation bo: bububu) { + for (BatchOperation bo: bububu) { secondCount++; } assertEquals(firstCount, secondCount); } - - - // - // HBASE-880 - // - - public void testPut() throws Exception{ - byte[] row = "row".getBytes(); - byte[] fam = "fam".getBytes(); - byte[] qf1 = "qf1".getBytes(); - byte[] qf2 = "qf2".getBytes(); - byte[] qf3 = "qf3".getBytes(); - byte[] qf4 = "qf4".getBytes(); - byte[] qf5 = "qf5".getBytes(); - byte[] qf6 = "qf6".getBytes(); - byte[] qf7 = "qf7".getBytes(); - byte[] qf8 = "qf8".getBytes(); - - long ts = System.currentTimeMillis(); - byte[] val = "val".getBytes(); - - Put put = new Put(row); - put.add(fam, qf1, ts, val); - put.add(fam, qf2, ts, val); - put.add(fam, qf3, ts, val); - put.add(fam, qf4, ts, val); - put.add(fam, qf5, ts, val); - put.add(fam, qf6, ts, val); - put.add(fam, qf7, ts, val); - put.add(fam, qf8, ts, val); - - byte[] sb = Writables.getBytes(put); - Put desPut = (Put)Writables.getWritable(sb, new Put()); - - //Timing test -// long start = System.nanoTime(); -// desPut = (Put)Writables.getWritable(sb, new Put()); -// long stop = System.nanoTime(); -// System.out.println("timer " +(stop-start)); - - assertTrue(Bytes.equals(put.getRow(), desPut.getRow())); - List list = null; - List desList = null; - for(Map.Entry> entry : put.getFamilyMap().entrySet()){ - assertTrue(desPut.getFamilyMap().containsKey(entry.getKey())); - list = entry.getValue(); - desList = desPut.getFamilyMap().get(entry.getKey()); - for(int i=0; i list = null; - List desList = null; - for(Map.Entry> entry : put.getFamilyMap().entrySet()){ - assertTrue(desPut.getFamilyMap().containsKey(entry.getKey())); - list = entry.getValue(); - desList = desPut.getFamilyMap().get(entry.getKey()); - for(int i=0; i list = null; - List desList = 
null; - for(Map.Entry> entry : - delete.getFamilyMap().entrySet()){ - assertTrue(desDelete.getFamilyMap().containsKey(entry.getKey())); - list = entry.getValue(); - desList = desDelete.getFamilyMap().get(entry.getKey()); - for(int i=0; i set = null; - Set desSet = null; - - for(Map.Entry> entry : - get.getFamilyMap().entrySet()){ - assertTrue(desGet.getFamilyMap().containsKey(entry.getKey())); - set = entry.getValue(); - desSet = desGet.getFamilyMap().get(entry.getKey()); - for(byte [] qualifier : set){ - assertTrue(desSet.contains(qualifier)); - } - } - - assertEquals(get.getLockId(), desGet.getLockId()); - assertEquals(get.getMaxVersions(), desGet.getMaxVersions()); - TimeRange tr = get.getTimeRange(); - TimeRange desTr = desGet.getTimeRange(); - assertEquals(tr.getMax(), desTr.getMax()); - assertEquals(tr.getMin(), desTr.getMin()); - } - - - public void testScan() throws Exception{ - byte[] startRow = "startRow".getBytes(); - byte[] stopRow = "stopRow".getBytes(); - byte[] fam = "fam".getBytes(); - byte[] qf1 = "qf1".getBytes(); - - long ts = System.currentTimeMillis(); - int maxVersions = 2; - - Scan scan = new Scan(startRow, stopRow); - scan.addColumn(fam, qf1); - scan.setTimeRange(ts, ts+1); - scan.setMaxVersions(maxVersions); - - byte[] sb = Writables.getBytes(scan); - Scan desScan = (Scan)Writables.getWritable(sb, new Scan()); - - assertTrue(Bytes.equals(scan.getStartRow(), desScan.getStartRow())); - assertTrue(Bytes.equals(scan.getStopRow(), desScan.getStopRow())); - Set set = null; - Set desSet = null; - - for(Map.Entry> entry : - scan.getFamilyMap().entrySet()){ - assertTrue(desScan.getFamilyMap().containsKey(entry.getKey())); - set = entry.getValue(); - desSet = desScan.getFamilyMap().get(entry.getKey()); - for(byte[] column : set){ - assertTrue(desSet.contains(column)); - } - } - - assertEquals(scan.getMaxVersions(), desScan.getMaxVersions()); - TimeRange tr = scan.getTimeRange(); - TimeRange desTr = desScan.getTimeRange(); - assertEquals(tr.getMax(), desTr.getMax()); - assertEquals(tr.getMin(), desTr.getMin()); - } - - public void testResultEmpty() throws Exception { - List keys = new ArrayList(); - Result r = new Result(keys); - assertTrue(r.isEmpty()); - byte [] rb = Writables.getBytes(r); - Result deserializedR = (Result)Writables.getWritable(rb, new Result()); - assertTrue(deserializedR.isEmpty()); - } - - - public void testResult() throws Exception { - byte [] rowA = Bytes.toBytes("rowA"); - byte [] famA = Bytes.toBytes("famA"); - byte [] qfA = Bytes.toBytes("qfA"); - byte [] valueA = Bytes.toBytes("valueA"); - - byte [] rowB = Bytes.toBytes("rowB"); - byte [] famB = Bytes.toBytes("famB"); - byte [] qfB = Bytes.toBytes("qfB"); - byte [] valueB = Bytes.toBytes("valueB"); - - KeyValue kvA = new KeyValue(rowA, famA, qfA, valueA); - KeyValue kvB = new KeyValue(rowB, famB, qfB, valueB); - - Result result = new Result(new KeyValue[]{kvA, kvB}); - - byte [] rb = Writables.getBytes(result); - Result deResult = (Result)Writables.getWritable(rb, new Result()); - - assertTrue("results are not equivalent, first key mismatch", - result.sorted()[0].equals(deResult.sorted()[0])); - - assertTrue("results are not equivalent, second key mismatch", - result.sorted()[1].equals(deResult.sorted()[1])); - - // Test empty Result - Result r = new Result(); - byte [] b = Writables.getBytes(r); - Result deserialized = (Result)Writables.getWritable(b, new Result()); - assertEquals(r.size(), deserialized.size()); - } - - public void testResultArray() throws Exception { - byte [] rowA = 
Bytes.toBytes("rowA"); - byte [] famA = Bytes.toBytes("famA"); - byte [] qfA = Bytes.toBytes("qfA"); - byte [] valueA = Bytes.toBytes("valueA"); - - byte [] rowB = Bytes.toBytes("rowB"); - byte [] famB = Bytes.toBytes("famB"); - byte [] qfB = Bytes.toBytes("qfB"); - byte [] valueB = Bytes.toBytes("valueB"); - - KeyValue kvA = new KeyValue(rowA, famA, qfA, valueA); - KeyValue kvB = new KeyValue(rowB, famB, qfB, valueB); - - - Result result1 = new Result(new KeyValue[]{kvA, kvB}); - Result result2 = new Result(new KeyValue[]{kvB}); - Result result3 = new Result(new KeyValue[]{kvB}); - - Result [] results = new Result [] {result1, result2, result3}; - - ByteArrayOutputStream byteStream = new ByteArrayOutputStream(); - DataOutputStream out = new DataOutputStream(byteStream); - Result.writeArray(out, results); - - byte [] rb = byteStream.toByteArray(); - - DataInputBuffer in = new DataInputBuffer(); - in.reset(rb, 0, rb.length); - - Result [] deResults = Result.readArray(in); - - assertTrue(results.length == deResults.length); - - for(int i=0;i keys = new ArrayList(); - Result r = new Result(keys); - Result [] results = new Result [] {r}; - - ByteArrayOutputStream byteStream = new ByteArrayOutputStream(); - DataOutputStream out = new DataOutputStream(byteStream); - - Result.writeArray(out, results); - - results = null; - - byteStream = new ByteArrayOutputStream(); - out = new DataOutputStream(byteStream); - Result.writeArray(out, results); - - byte [] rb = byteStream.toByteArray(); - - DataInputBuffer in = new DataInputBuffer(); - in.reset(rb, 0, rb.length); - - Result [] deResults = Result.readArray(in); - - assertTrue(deResults.length == 0); - - results = new Result[0]; - - byteStream = new ByteArrayOutputStream(); - out = new DataOutputStream(byteStream); - Result.writeArray(out, results); - - rb = byteStream.toByteArray(); - - in = new DataInputBuffer(); - in.reset(rb, 0, rb.length); - - deResults = Result.readArray(in); - - assertTrue(deResults.length == 0); - - } - - public void testTimeRange(String[] args) throws Exception{ - TimeRange tr = new TimeRange(0,5); - byte [] mb = Writables.getBytes(tr); - TimeRange deserializedTr = - (TimeRange)Writables.getWritable(mb, new TimeRange()); - - assertEquals(tr.getMax(), deserializedTr.getMax()); - assertEquals(tr.getMin(), deserializedTr.getMin()); - - } - - public void testKeyValue2() throws Exception { - byte[] row = getName().getBytes(); - byte[] fam = "fam".getBytes(); - byte[] qf = "qf".getBytes(); - long ts = System.currentTimeMillis(); - byte[] val = "val".getBytes(); - - KeyValue kv = new KeyValue(row, fam, qf, ts, val); - - byte [] mb = Writables.getBytes(kv); - KeyValue deserializedKv = - (KeyValue)Writables.getWritable(mb, new KeyValue()); - assertTrue(Bytes.equals(kv.getBuffer(), deserializedKv.getBuffer())); - assertEquals(kv.getOffset(), deserializedKv.getOffset()); - assertEquals(kv.getLength(), deserializedKv.getLength()); - } } \ No newline at end of file Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestTable.java URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestTable.java?rev=784618&r1=784617&r2=784618&view=diff ============================================================================== --- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestTable.java (original) +++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestTable.java Sun Jun 14 21:34:13 2009 @@ -23,7 +23,6 @@ import 
org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.HTable;
-import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.io.BatchUpdate;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -58,7 +57,7 @@
     // Try doing a duplicate database create.
     msg = null;
     HTableDescriptor desc = new HTableDescriptor(getName());
-    desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+    desc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY));
     admin.createTable(desc);
     assertTrue("First table creation completed", admin.listTables().length == 1);
     boolean gotException = false;
@@ -75,7 +74,7 @@
     // Now try and do concurrent creation with a bunch of threads.
     final HTableDescriptor threadDesc = new HTableDescriptor("threaded_" + getName());
-    threadDesc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+    threadDesc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY));
     int count = 10;
     Thread [] threads = new Thread [count];
     final AtomicInteger successes = new AtomicInteger(0);
@@ -110,8 +109,8 @@
     }
     // All threads are now dead. Count up how many tables were created and
     // how many failed w/ appropriate exception.
-    assertEquals(1, successes.get());
-    assertEquals(count - 1, failures.get());
+    assertTrue(successes.get() == 1);
+    assertTrue(failures.get() == (count - 1));
   }
   /**
@@ -141,12 +140,10 @@
     HTable table = new HTable(conf, getName());
     try {
       byte[] value = Bytes.toBytes("somedata");
-      // This used to use an empty row... That must have been a bug
-      Put put = new Put(value);
-      byte [][] famAndQf = KeyValue.parseColumn(colName);
-      put.add(famAndQf[0], famAndQf[1], value);
-      table.put(put);
-      fail("Put on read only table succeeded");
+      BatchUpdate update = new BatchUpdate();
+      update.put(colName, value);
+      table.commit(update);
+      fail("BatchUpdate on read only table succeeded");
     } catch (Exception e) {
       // expected
     }

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestZooKeeper.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestZooKeeper.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestZooKeeper.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestZooKeeper.java Sun Jun 14 21:34:13 2009
@@ -25,7 +25,6 @@
 import org.apache.hadoop.hbase.client.HConnection;
 import org.apache.hadoop.hbase.client.HConnectionManager;
 import org.apache.hadoop.hbase.client.HTable;
-import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.io.BatchUpdate;
 import org.apache.hadoop.hbase.master.HMaster;
 import org.apache.hadoop.hbase.regionserver.HRegionServer;
@@ -142,9 +141,9 @@
       admin.createTable(desc);
       HTable table = new HTable("test");
-      Put put = new Put(Bytes.toBytes("testrow"));
-      put.add(Bytes.toBytes("fam"), Bytes.toBytes("col"), Bytes.toBytes("testdata"));
-      table.put(put);
+      BatchUpdate batchUpdate = new BatchUpdate("testrow");
+      batchUpdate.put("fam:col", Bytes.toBytes("testdata"));
+      table.commit(batchUpdate);
     } catch (Exception e) {
       e.printStackTrace();
       fail();
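      // [Editor's aside -- illustrative sketch, not part of the committed diff.]
      // Every client-facing hunk in this part of the commit makes the same swap:
      // the 0.20 Put API is replaced by the 0.19 BatchUpdate API. The two write
      // paths side by side, using hypothetical local names (p, bu) and assuming
      // an open HTable named table over a table with column family "fam":

      // 0.20-style, removed by this commit: family and qualifier are separate
      // byte arrays on a Put that is sent with HTable.put().
      //   Put p = new Put(Bytes.toBytes("testrow"));
      //   p.add(Bytes.toBytes("fam"), Bytes.toBytes("col"), Bytes.toBytes("testdata"));
      //   table.put(p);

      // 0.19-style, restored by this commit: the column is addressed as a single
      // "family:qualifier" string on a BatchUpdate sent with HTable.commit().
      BatchUpdate bu = new BatchUpdate("testrow");
      bu.put("fam:col", Bytes.toBytes("testdata"));
      table.commit(bu);
      // [End of editor's aside.]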