hbase-commits mailing list archives

From apurt...@apache.org
Subject svn commit: r784618 [8/11] - in /hadoop/hbase/trunk_on_hadoop-0.18.3/src: java/ java/org/apache/hadoop/hbase/ java/org/apache/hadoop/hbase/client/ java/org/apache/hadoop/hbase/client/tableindexed/ java/org/apache/hadoop/hbase/filter/ java/org/apache/ha...
Date Sun, 14 Jun 2009 21:34:19 GMT
Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/java/overview.html
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/java/overview.html?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/java/overview.html (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/java/overview.html Sun Jun 14 21:34:13 2009
@@ -27,9 +27,9 @@
 <h2><a name="requirements">Requirements</a></h2>
 <ul>
   <li>Java 1.6.x, preferably from <a href="http://www.java.com/en/download/">Sun</a>.
-  Use the latest version available.
   </li>
-  <li>This version of HBase will only run on <a href="http://hadoop.apache.org/core/releases.html">Hadoop 0.20.x</a>.  
+  <li><a href="http://hadoop.apache.org/core/releases.html">Hadoop 0.19.x</a>.  This version of HBase will 
+  only run on this version of Hadoop.
   </li>
   <li>
     ssh must be installed and sshd must be running to use Hadoop's
@@ -42,33 +42,15 @@
   for how to up the limit.  Also, as of 0.18.x hadoop, datanodes have an upper-bound
       on the number of threads they will support (<code>dfs.datanode.max.xcievers</code>).
       Default is 256.  If loading lots of data into hbase, up this limit on your
-      hadoop cluster.
+      hadoop cluster.  Also consider upping the number of datanode handlers from
+      the default of 3; see <code>dfs.datanode.handler.count</code> (example settings
+      for both properties follow this list).</li>
      <li>The clocks on cluster members should be in basic alignment.  Some skew is tolerable but
       wild skew can generate odd behaviors.  Run <a href="http://en.wikipedia.org/wiki/Network_Time_Protocol">NTP</a>
       on your cluster, or an equivalent.</li>
-      <li>HBase depends on <a href="http://hadoop.apache.org/zookeeper/">ZooKeeper</a> as of release 0.20.0.
-      In basic standalone and pseudo-distributed modes, HBase manages a ZooKeeper instance
-      for you but it is required that you run a ZooKeeper Quorum when running HBase
-      fully distributed (More on this below).
-      </li>
-      <li>This is a list of patches we recommend you apply to your running Hadoop cluster:
-      <ul>
-      <li><a hef="https://issues.apache.org/jira/browse/HADOOP-4681">HADOOP-4681 <i>"DFSClient block read failures cause open DFSInputStream to become unusable"</i></a>. This patch will help with the ever-popular, "No live nodes contain current block".
-      The hadoop version bundled with hbase has this patch applied.  Its an HDFS client
-      fix so this should do for usual usage but if your cluster is missing the patch,
-      and in particular if calling hbase from a mapreduce job, you may run into this
-      issue.
-      </li>
-      </ul>
-      </li>
 </ul>
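+<p>
+As a rough sketch, the datanode settings mentioned above would go in your cluster's hadoop
+configuration (e.g. <code>hadoop-site.xml</code>); the values below are illustrative
+assumptions only, not recommendations:
+</p>
+<pre>
+&lt;property&gt;
+  &lt;name&gt;dfs.datanode.max.xcievers&lt;/name&gt;
+  &lt;value&gt;1024&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;dfs.datanode.handler.count&lt;/name&gt;
+  &lt;value&gt;10&lt;/value&gt;
+&lt;/property&gt;
+</pre>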
 <h3>Windows</h3>
-If you are running HBase on Windows, you must install <a href="http://cygwin.com/">Cygwin</a>.
-Additionally, it is <emph>strongly recommended</emph> that you add or append to the following
-environment variables. If you install Cygwin in a location that is not <code>C:\cygwin</code> you
-should modify the following appropriately.
+If you are running HBase on Windows, you must install <a href="http://cygwin.com/">Cygwin</a>. Additionally, it is <em>strongly recommended</em> that you add or append to the following environment variables. If you install Cygwin in a location that is not <code>C:\cygwin</code>, you should modify the following appropriately.
 <p>
-<blockquote>
 <pre>
 HOME=c:\cygwin\home\jim
 ANT_HOME=(wherever you installed ant)
@@ -76,33 +58,27 @@
 PATH=C:\cygwin\bin;%JAVA_HOME%\bin;%ANT_HOME%\bin; other windows stuff 
 SHELL=/bin/bash
 </pre>
-</blockquote>
-For additional information, see the
-<a href="http://hadoop.apache.org/core/docs/current/quickstart.html">Hadoop Quick Start Guide</a>
+For additional information, see the <a href="http://hadoop.apache.org/core/docs/current/quickstart.html">Hadoop Quick Start Guide</a>
 </p>
 <h2><a name="getting_started" >Getting Started</a></h2>
 <p>
-What follows presumes you have obtained a copy of HBase,
-see <a href="http://hadoop.apache.org/hbase/releases.html">Releases</a>, and are installing
+What follows presumes you have obtained a copy of HBase and are installing
 for the first time. If upgrading your
 HBase instance, see <a href="#upgrading">Upgrading</a>.
-<p>Three modes are described: standalone, pseudo-distributed (where all servers are run on
-a single host), and distributed.  If new to hbase start by following the standalone instruction.
 </p>
 <p>
-Whatever your mode, define <code>${HBASE_HOME}</code> to be the location of the root of your HBase installation, e.g. 
+Define <code>${HBASE_HOME}</code> to be the location of the root of your HBase installation, e.g. 
 <code>/usr/local/hbase</code>.  Edit <code>${HBASE_HOME}/conf/hbase-env.sh</code>.  In this file you can
 set the heapsize for HBase, etc.  At a minimum, set <code>JAVA_HOME</code> to point at the root of
 your Java installation.
 </p>
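+<p>
+For example, a minimal <code>hbase-env.sh</code> edit might look like the following sketch
+(the JDK path is an assumption; point it at your own Java installation):
+</p>
+<pre>
+# The java implementation to use.
+export JAVA_HOME=/usr/lib/jvm/java-6-sun
+</pre>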
-<h2><a name="standalone">Standalone Mode</a></h2>
 <p>
 If you are running a standalone operation, there should be nothing further to configure; proceed to
 <a href=#runandconfirm>Running and Confirming Your Installation</a>.  If you are running a distributed 
 operation, continue reading.
 </p>
 
-<h2><a name="distributed">Distributed Operation: Pseudo- and Fully-Distributed Modes</a></h2>
+<h2><a name="distributed">Distributed Operation</a></h2>
 <p>Distributed mode requires an instance of the Hadoop Distributed File System (DFS).
 See the Hadoop <a href="http://lucene.apache.org/hadoop/api/overview-summary.html#overview_description">
 requirements and instructions</a> for how to set up a DFS.
@@ -137,12 +113,13 @@
 </p>
 
 <h3><a name="fully-distrib">Fully-Distributed Operation</a></h3>
-<p>For running a fully-distributed operation on more than one host, the following
+For running a fully-distributed operation on more than one host, the following
 configurations must be made <i>in addition</i> to those described in the
 <a href="#pseudo-distrib">pseudo-distributed operation</a> section above.
-In this mode, a ZooKeeper cluster is required.</p>  
-<p>In <code>hbase-site.xml</code>, set <code>hbase.cluster.distributed</code> to 'true'. 
-<blockquote>
+A ZooKeeper cluster is also required in this mode (more on this below).
+In <code>hbase-site.xml</code>, you must also set
+<code>hbase.cluster.distributed</code> to 'true'. 
+</p>
 <pre>
 &lt;configuration&gt;
   ...
@@ -157,60 +134,43 @@
   ...
 &lt;/configuration&gt;
 </pre>
-</blockquote>
-</p>
 <p>
-In fully-distributed operation, you probably want to change your <code>hbase.rootdir</code> 
-from localhost to the name of the node running the HDFS namenode.  In addition
-to <code>hbase-site.xml</code> changes, a fully-distributed operation requires that you 
-modify <code>${HBASE_HOME}/conf/regionservers</code>.  
-The <code>regionserver</code> file lists all hosts running HRegionServers, one host per line
-(This file in HBase is like the hadoop slaves file at <code>${HADOOP_HOME}/conf/slaves</code>).
+Keep in mind that for a fully-distributed operation, you usually do not want your <code>hbase.rootdir</code> 
+to point at localhost; instead, as in the configuration above, point it at the host running 
+the HDFS namenode (<code>example.org</code> in the example).  In addition to <code>hbase-site.xml</code>, a fully-distributed 
+operation requires that you also edit <code>${HBASE_HOME}/conf/regionservers</code>.  
+The <code>regionservers</code> file lists all hosts running HRegionServers, one host per line (this file 
+plays the same role in HBase as the hadoop slaves file at <code>${HADOOP_HOME}/conf/slaves</code>).
 </p>
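+<p>
+As a sketch (the host names and port below are illustrative assumptions only),
+<code>hbase.rootdir</code> would point at the namenode host, and the
+<code>regionservers</code> file would simply list one host per line:
+</p>
+<pre>
+&lt;property&gt;
+  &lt;name&gt;hbase.rootdir&lt;/name&gt;
+  &lt;value&gt;hdfs://example.org:9000/hbase&lt;/value&gt;
+&lt;/property&gt;
+</pre>
+<pre>
+regionserver1.example.org
+regionserver2.example.org
+regionserver3.example.org
+</pre>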
 <p>
-A distributed HBase depends on a running ZooKeeper cluster.
-The ZooKeeper configuration file for HBase is stored at <code>${HBASE_HOME}/conf/zoo.cfg</code>.
-See the ZooKeeper <a href="http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html"> Getting Started Guide</a>
-for information about the format and options of that file.  Specifically, look at the 
-<a href="http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html#sc_RunningReplicatedZooKeeper">Running Replicated ZooKeeper</a> section.
-
-
-After configuring <code>zoo.cfg</code>, in <code>${HBASE_HOME}/conf/hbase-env.sh</code>,
-set the following to tell HBase to STOP managing its instance of ZooKeeper.
-<blockquote>
+Furthermore, you have to configure a distributed ZooKeeper cluster.
+The ZooKeeper configuration file is stored at <code>${HBASE_HOME}/conf/zoo.cfg</code>.
+See the ZooKeeper <a href="http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html"> Getting Started Guide</a> for information about the format and options of that file.
+Specifically, look at the <a href="http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html#sc_RunningReplicatedZooKeeper">Running Replicated ZooKeeper</a> section (a minimal <code>zoo.cfg</code> sketch follows below).
+In <code>${HBASE_HOME}/conf/hbase-env.sh</code>, set the following to tell HBase not to manage its own single instance of ZooKeeper.
 <pre>
   ...
 # Tell HBase whether it should manage its own instance of ZooKeeper or not.
 export HBASE_MANAGES_ZK=false
 </pre>
-</blockquote>
 </p>
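+<p>
+A minimal replicated <code>zoo.cfg</code> might look like the following sketch (the
+<code>dataDir</code> path, host names and quorum size are assumptions; see the ZooKeeper
+guide linked above for the authoritative options):
+</p>
+<pre>
+tickTime=2000
+dataDir=/var/zookeeper
+clientPort=2181
+initLimit=5
+syncLimit=2
+server.0=zk0.example.org:2888:3888
+server.1=zk1.example.org:2888:3888
+server.2=zk2.example.org:2888:3888
+</pre>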
 <p>
-Though not recommended, it can be convenient having HBase continue to manage
-ZooKeeper even when in distributed mode (It can be good when testing or taking
-hbase for a testdrive).  Change <code>${HBASE_HOME}/conf/zoo.cfg</code> and
-set the server.0 property to the IP of the node that will be running ZooKeeper
-(Leaving the default value of "localhost" will make it impossible to start HBase).
+It is still possible to have HBase start and manage a single ZooKeeper instance in fully-distributed operation.
+First, edit <code>${HBASE_HOME}/conf/zoo.cfg</code> and configure a single server entry.
+Note that leaving the default value of "localhost" will make it impossible to start HBase.
 <pre>
   ...
 server.0=example.org:2888:3888
-<blockquote>
 </pre>
 Then on the example.org server do the following <i>before</i> running HBase. 
 <pre>
 ${HBASE_HOME}/bin/hbase-daemon.sh start zookeeper
 </pre>
-</blockquote>
-<p>To stop ZooKeeper, after you've shut down hbase, do:
-<blockquote>
-<pre>
-${HBASE_HOME}/bin/hbase-daemon.sh stop zookeeper
-</pre>
-</blockquote>
 Be aware that this option is only recommended for testing purposes as a failure
 on that node would render HBase <b>unusable</b>.
 </p>
 
+
 <p>Of note, if you have made <i>HDFS client configuration</i> changes on your hadoop cluster, HBase will not
 see this configuration unless you do one of the following:
 <ul>
@@ -227,16 +187,12 @@
 <p>If you are running in standalone, non-distributed mode, HBase by default uses
 the local filesystem.</p>
 
-<p>If you are running a distributed cluster you will need to start the Hadoop DFS daemons and
-ZooKeeper Quorum
-before starting HBase and stop the daemons after HBase has shut down.</p>
-<p>Start and 
+<p>If you are running a distributed cluster you will need to start the Hadoop DFS daemons 
+before starting HBase and stop the daemons after HBase has shut down.  Start the Hadoop DFS 
 daemons by running <code>${HADOOP_HOME}/bin/start-dfs.sh</code> and stop them with <code>${HADOOP_HOME}/bin/stop-dfs.sh</code>.
 You can ensure it started properly by testing the put and get of files into the Hadoop filesystem.
 HBase does not normally use the mapreduce daemons.  These do not need to be started.</p>
 
-<p>Start up your ZooKeeper cluster.</p>
-
 <p>Start HBase with the following command:
 </p>
 <pre>
@@ -270,9 +226,114 @@
 </p>
 
 <h2><a name="client_example">Example API Usage</a></h2>
-For sample Java code, see <a href="org/apache/hadoop/hbase/client/package-summary.html#client_example">org.apache.hadoop.hbase.client</a> documentation.
+<p>Once you have a running HBase, you probably want a way to hook your application up to it. 
+  If your application is in Java, then you should use the Java API. Here's an example of what 
+  a simple client might look like.  This example assumes that you've created a table called
+  "myTable" with a column family called "myColumnFamily".
+</p>
+
+<div style="background-color: #cccccc; padding: 2px">
+<code><pre>
+import java.io.IOException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class MyClient {
+
+  public static void main(String args[]) throws IOException {
+    // You need a configuration object to tell the client where to connect.
+    // But don't worry, the defaults are pulled from the local config file.
+    HBaseConfiguration config = new HBaseConfiguration();
+
+    // This instantiates an HTable object that connects you to the "myTable"
+    // table. 
+    HTable table = new HTable(config, "myTable");
+
+    // To do any sort of update on a row, you use an instance of the BatchUpdate
+    // class. A BatchUpdate takes a row and optionally a timestamp which your
+    // updates will affect.  If no timestamp, the server applies current time
+    // to the edits.
+    BatchUpdate batchUpdate = new BatchUpdate("myRow");
+
+    // The BatchUpdate#put method takes a byte [] (or String) that designates
+    // what cell you want to put a value into, and a byte array that is the
+    // value you want to store. Note that if you want to store Strings, you
+    // have to getBytes() from the String for HBase to store it since HBase is
+    // all about byte arrays. The same goes for primitives like ints and longs
+    // and user-defined classes - you must find a way to reduce it to bytes.
+    // The Bytes class from the hbase util package has utilities for going from
+    // String to utf-8 bytes and back again, and helpers for other base types.
+    batchUpdate.put("myColumnFamily:columnQualifier1", 
+      Bytes.toBytes("columnQualifier1 value!"));
+
+    // Deletes are batch operations in HBase as well. 
+    batchUpdate.delete("myColumnFamily:cellIWantDeleted");
+
+    // Once you've done all the puts you want, you need to commit the results.
+    // The HTable#commit method takes the BatchUpdate instance you've been 
+    // building and pushes the batch of changes you made into HBase.
+    table.commit(batchUpdate);
+
+    // Now, to retrieve the data we just wrote. The values that come back are
+    // Cell instances. A Cell is a combination of the value as a byte array and
+    // the timestamp the value was stored with. If you happen to know that the 
+    // value contained is a string and want an actual string, then you must 
+    // convert it yourself.
+    Cell cell = table.get("myRow", "myColumnFamily:columnQualifier1");
+    // This could throw a NullPointerException if there was no value at the cell
+    // location.
+    String valueStr = Bytes.toString(cell.getValue());
+    
+    // Sometimes, you won't know the row you're looking for. In this case, you
+    // use a Scanner. This will give you a cursor-like interface to the contents
+    // of the table.
+    Scanner scanner = 
+      // we want to get back only "myColumnFamily:columnQualifier1" when we iterate
+      table.getScanner(new String[]{"myColumnFamily:columnQualifier1"});
+    
+    
+    // Scanners return RowResult instances. A RowResult is like the
+    // row key and the columns all wrapped up in a single Object. 
+    // RowResult#getRow gives you the row key. RowResult also implements 
+    // Map, so you can get to your column results easily. 
+    
+    // Now, for the actual iteration. One way is to use a while loop like so:
+    RowResult rowResult = scanner.next();
+    
+    while (rowResult != null) {
+      // print out the row we found and the columns we were looking for
+      System.out.println("Found row: " + Bytes.toString(rowResult.getRow()) +
+        " with value: " + rowResult.get(Bytes.toBytes("myColumnFamily:columnQualifier1")));
+      rowResult = scanner.next();
+    }
+    
+    // The other approach is to use a foreach loop. Scanners are iterable!
+    for (RowResult result : scanner) {
+      // print out the row we found and the columns we were looking for
+      System.out.println("Found row: " + Bytes.toString(rowResult.getRow()) +
+        " with value: " + rowResult.get(Bytes.toBytes("myColumnFamily:columnQualifier1")));
+    }
+    
+    // Make sure you close your scanners when you are done!
+    // It's probably best to put the iteration into a try/finally with the below
+    // inside the finally clause.
+    scanner.close();
+  }
+}
+</pre></code>
+</div>
+
+<p>There are many other methods for putting data into and getting data out of 
+  HBase, but these examples should get you started. See the HTable javadoc for
+  more methods. Additionally, there are methods for managing tables in the 
+  HBaseAdmin class.</p>
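+
+<p>For instance, a minimal sketch of creating the "myTable" table used above with 
+  HBaseAdmin might look like the following (error handling omitted; the trailing-colon 
+  family name follows the convention used elsewhere in this version):</p>
+
+<div style="background-color: #cccccc; padding: 2px">
+<code><pre>
+import java.io.IOException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+
+public class MyAdminClient {
+  public static void main(String[] args) throws IOException {
+    // Defaults are pulled from the local config file, as for HTable above.
+    HBaseConfiguration config = new HBaseConfiguration();
+    HBaseAdmin admin = new HBaseAdmin(config);
+
+    // Describe the table and its single column family, then create it.
+    HTableDescriptor desc = new HTableDescriptor("myTable");
+    desc.addFamily(new HColumnDescriptor("myColumnFamily:"));
+    admin.createTable(desc);
+  }
+}
+</pre></code>
+</div>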
 
-<p>If your client is NOT Java, consider the Thrift or REST libraries.</p>
+<p>If your client is NOT Java, then you should consider the Thrift or REST 
+  libraries.</p>
 
 <h2><a name="related" >Related Documentation</a></h2>
 <ul>

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/AbstractMergeTestBase.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/AbstractMergeTestBase.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/AbstractMergeTestBase.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/AbstractMergeTestBase.java Sun Jun 14 21:34:13 2009
@@ -23,7 +23,6 @@
 import java.io.UnsupportedEncodingException;
 import java.util.Random;
 
-import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.io.BatchUpdate;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.regionserver.HRegion;
@@ -35,7 +34,7 @@
 public abstract class AbstractMergeTestBase extends HBaseClusterTestCase {
   static final Log LOG =
     LogFactory.getLog(AbstractMergeTestBase.class.getName());
-  static final byte [] COLUMN_NAME = Bytes.toBytes("contents");
+  static final byte [] COLUMN_NAME = Bytes.toBytes("contents:");
   protected final Random rand = new Random();
   protected HTableDescriptor desc;
   protected ImmutableBytesWritable value;
@@ -127,10 +126,11 @@
 
     HRegionIncommon r = new HRegionIncommon(region);
     for(int i = firstRow; i < firstRow + nrows; i++) {
-      Put put = new Put(Bytes.toBytes("row_"
+      BatchUpdate batchUpdate = new BatchUpdate(Bytes.toBytes("row_"
           + String.format("%1$05d", i)));
-      put.add(COLUMN_NAME, null,  value.get());
-      region.put(put);
+
+      batchUpdate.put(COLUMN_NAME, value.get());
+      region.batchUpdate(batchUpdate, null);
       if(i % 10000 == 0) {
         System.out.println("Flushing write #" + i);
         r.flushcache();

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/DFSAbort.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/DFSAbort.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/DFSAbort.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/DFSAbort.java Sun Jun 14 21:34:13 2009
@@ -40,7 +40,7 @@
     try {
       super.setUp();
       HTableDescriptor desc = new HTableDescriptor(getName());
-      desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+      desc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY_STR));
       HBaseAdmin admin = new HBaseAdmin(conf);
       admin.createTable(desc);
     } catch (Exception e) {

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HBaseTestCase.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HBaseTestCase.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HBaseTestCase.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HBaseTestCase.java Sun Jun 14 21:34:13 2009
@@ -25,7 +25,6 @@
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
-import java.util.NavigableMap;
 import java.util.SortedMap;
 
 import junit.framework.TestCase;
@@ -35,13 +34,8 @@
 import org.apache.hadoop.dfs.MiniDFSCluster;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.client.Delete;
-import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HTable;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scanner;
 import org.apache.hadoop.hbase.io.BatchUpdate;
 import org.apache.hadoop.hbase.io.Cell;
 import org.apache.hadoop.hbase.io.RowResult;
@@ -58,11 +52,11 @@
   /** configuration parameter name for test directory */
   public static final String TEST_DIRECTORY_KEY = "test.build.data";
 
-  protected final static byte [] fam1 = Bytes.toBytes("colfamily1");
-  protected final static byte [] fam2 = Bytes.toBytes("colfamily2");
-  protected final static byte [] fam3 = Bytes.toBytes("colfamily3");
-  protected static final byte [][] COLUMNS = {fam1,
-    fam2, fam3};
+  protected final static byte [] COLFAMILY_NAME1 = Bytes.toBytes("colfamily1:");
+  protected final static byte [] COLFAMILY_NAME2 = Bytes.toBytes("colfamily2:");
+  protected final static byte [] COLFAMILY_NAME3 = Bytes.toBytes("colfamily3:");
+  protected static final byte [][] COLUMNS = {COLFAMILY_NAME1,
+    COLFAMILY_NAME2, COLFAMILY_NAME3};
 
   private boolean localfs = false;
   protected Path testDir = null;
@@ -195,13 +189,13 @@
   protected HTableDescriptor createTableDescriptor(final String name,
       final int versions) {
     HTableDescriptor htd = new HTableDescriptor(name);
-    htd.addFamily(new HColumnDescriptor(fam1, versions,
+    htd.addFamily(new HColumnDescriptor(COLFAMILY_NAME1, versions,
       HColumnDescriptor.DEFAULT_COMPRESSION, false, false,
       Integer.MAX_VALUE, HConstants.FOREVER, false));
-    htd.addFamily(new HColumnDescriptor(fam2, versions,
+    htd.addFamily(new HColumnDescriptor(COLFAMILY_NAME2, versions,
         HColumnDescriptor.DEFAULT_COMPRESSION, false, false,
         Integer.MAX_VALUE, HConstants.FOREVER, false));
-    htd.addFamily(new HColumnDescriptor(fam3, versions,
+    htd.addFamily(new HColumnDescriptor(COLFAMILY_NAME3, versions,
         HColumnDescriptor.DEFAULT_COMPRESSION, false, false,
         Integer.MAX_VALUE,  HConstants.FOREVER, false));
     return htd;
@@ -290,13 +284,11 @@
             break EXIT;
           }
           try {
-            Put put = new Put(t);
-            if(ts != -1) {
-              put.setTimeStamp(ts);
-            }
+            BatchUpdate batchUpdate = ts == -1 ? 
+              new BatchUpdate(t) : new BatchUpdate(t, ts);
             try {
-              put.add(Bytes.toBytes(column), null, t);
-              updater.put(put);
+              batchUpdate.put(column, t);
+              updater.commit(batchUpdate);
               count++;
             } catch (RuntimeException ex) {
               ex.printStackTrace();
@@ -339,23 +331,44 @@
    */
   public static interface Incommon {
     /**
-     * 
-     * @param delete
-     * @param lockid
-     * @param writeToWAL
+     * @param row
+     * @param column
+     * @return value for row/column pair
      * @throws IOException
      */
-    public void delete(Delete delete,  Integer lockid, boolean writeToWAL)
+    public Cell get(byte [] row, byte [] column) throws IOException;
+    /**
+     * @param row
+     * @param column
+     * @param versions
+     * @return value for row/column pair for number of versions requested
+     * @throws IOException
+     */
+    public Cell[] get(byte [] row, byte [] column, int versions) throws IOException;
+    /**
+     * @param row
+     * @param column
+     * @param ts
+     * @param versions
+     * @return value for row/column/timestamp tuple for number of versions
+     * @throws IOException
+     */
+    public Cell[] get(byte [] row, byte [] column, long ts, int versions)
     throws IOException;
+    /**
+     * @param row
+     * @param column
+     * @param ts
+     * @throws IOException
+     */
+    public void deleteAll(byte [] row, byte [] column, long ts) throws IOException;
 
     /**
-     * @param put
+     * @param batchUpdate
      * @throws IOException
      */
-    public void put(Put put) throws IOException;
+    public void commit(BatchUpdate batchUpdate) throws IOException;
 
-    public Result get(Get get) throws IOException;
-    
     /**
      * @param columns
      * @param firstRow
@@ -380,46 +393,48 @@
       this.region = HRegion;
     }
     
-    public void put(Put put) throws IOException {
-      region.put(put);
+    public void commit(BatchUpdate batchUpdate) throws IOException {
+      region.batchUpdate(batchUpdate, null);
     }
     
-    public void delete(Delete delete,  Integer lockid, boolean writeToWAL)
+    public void deleteAll(byte [] row, byte [] column, long ts)
     throws IOException {
-      this.region.delete(delete, lockid, writeToWAL);
+      this.region.deleteAll(row, column, ts, null);
     }
-    
-    public Result get(Get get) throws IOException {
-      return region.get(get, null);
-    }
-    
+
     public ScannerIncommon getScanner(byte [][] columns, byte [] firstRow,
       long ts) 
     throws IOException {
-      Scan scan = new Scan(firstRow);
-      scan.addColumns(columns);
-      scan.setTimeRange(0, ts);
       return new 
-        InternalScannerIncommon(region.getScanner(scan));
+        InternalScannerIncommon(region.getScanner(columns, firstRow, ts, null));
     }
-    
-    //New
-    public ScannerIncommon getScanner(byte [] family, byte [][] qualifiers,
-        byte [] firstRow, long ts) 
-      throws IOException {
-        Scan scan = new Scan(firstRow);
-        for(int i=0; i<qualifiers.length; i++){
-          scan.addColumn(HConstants.CATALOG_FAMILY, qualifiers[i]);
-        }
-        scan.setTimeRange(0, ts);
-        return new 
-          InternalScannerIncommon(region.getScanner(scan));
-      }
-    
-    public Result get(Get get, Integer lockid) throws IOException{
-      return this.region.get(get, lockid);
+
+    public Cell get(byte [] row, byte [] column) throws IOException {
+      // TODO: Fix profligacy converting from List to Cell [].
+      Cell[] result = Cell.createSingleCellArray(this.region.get(row, column, -1, -1));
+      return (result == null)? null : result[0];
+    }
+
+    public Cell[] get(byte [] row, byte [] column, int versions)
+    throws IOException {
+      // TODO: Fix profligacy converting from List to Cell [].
+      return Cell.createSingleCellArray(this.region.get(row, column, -1, versions));
+    }
+
+    public Cell[] get(byte [] row, byte [] column, long ts, int versions)
+    throws IOException {
+      // TODO: Fix profligacy converting from List to Cell [].
+      return Cell.createSingleCellArray(this.region.get(row, column, ts, versions));
+    }
+
+    /**
+     * @param row
+     * @return values for each column in the specified row
+     * @throws IOException
+     */
+    public Map<byte [], Cell> getFull(byte [] row) throws IOException {
+      return region.getFull(row, null, HConstants.LATEST_TIMESTAMP, 1, null);
     }
-    
 
     public void flushcache() throws IOException {
       this.region.flushcache();
@@ -440,27 +455,33 @@
       this.table = table;
     }
     
-    public void put(Put put) throws IOException {
-      table.put(put);
+    public void commit(BatchUpdate batchUpdate) throws IOException {
+      table.commit(batchUpdate);
     }
     
+    public void deleteAll(byte [] row, byte [] column, long ts)
+    throws IOException {
+      this.table.deleteAll(row, column, ts);
+    }
     
-    public void delete(Delete delete,  Integer lockid, boolean writeToWAL)
+    public ScannerIncommon getScanner(byte [][] columns, byte [] firstRow, long ts) 
     throws IOException {
-      this.table.delete(delete);
+      return new 
+        ClientScannerIncommon(table.getScanner(columns, firstRow, ts, null));
     }
     
-    public Result get(Get get) throws IOException {
-      return table.get(get);
+    public Cell get(byte [] row, byte [] column) throws IOException {
+      return this.table.get(row, column);
     }
     
-    public ScannerIncommon getScanner(byte [][] columns, byte [] firstRow, long ts) 
+    public Cell[] get(byte [] row, byte [] column, int versions)
     throws IOException {
-      Scan scan = new Scan(firstRow);
-      scan.addColumns(columns);
-      scan.setTimeStamp(ts);
-      return new 
-        ClientScannerIncommon(table.getScanner(scan));
+      return this.table.get(row, column, versions);
+    }
+    
+    public Cell[] get(byte [] row, byte [] column, long ts, int versions)
+    throws IOException {
+      return this.table.get(row, column, ts, versions);
     }
   }
   
@@ -473,19 +494,22 @@
   }
   
   public static class ClientScannerIncommon implements ScannerIncommon {
-    ResultScanner scanner;
-    public ClientScannerIncommon(ResultScanner scanner) {
+    Scanner scanner;
+    public ClientScannerIncommon(Scanner scanner) {
       this.scanner = scanner;
     }
     
     public boolean next(List<KeyValue> values)
     throws IOException {
-      Result results = scanner.next();
+      RowResult results = scanner.next();
       if (results == null) {
         return false;
       }
       values.clear();
-      values.addAll(results.list());
+      for (Map.Entry<byte [], Cell> entry : results.entrySet()) {
+        values.add(new KeyValue(results.getRow(), entry.getKey(),
+          entry.getValue().getTimestamp(), entry.getValue().getValue()));
+      }
       return true;
     }
     
@@ -520,53 +544,25 @@
     }
   }
   
-//  protected void assertCellEquals(final HRegion region, final byte [] row,
-//    final byte [] column, final long timestamp, final String value)
-//  throws IOException {
-//    Map<byte [], Cell> result = region.getFull(row, null, timestamp, 1, null);
-//    Cell cell_value = result.get(column);
-//    if (value == null) {
-//      assertEquals(Bytes.toString(column) + " at timestamp " + timestamp, null,
-//        cell_value);
-//    } else {
-//      if (cell_value == null) {
-//        fail(Bytes.toString(column) + " at timestamp " + timestamp + 
-//          "\" was expected to be \"" + value + " but was null");
-//      }
-//      if (cell_value != null) {
-//        assertEquals(Bytes.toString(column) + " at timestamp " 
-//            + timestamp, value, new String(cell_value.getValue()));
-//      }
-//    }
-//  }
-
-  protected void assertResultEquals(final HRegion region, final byte [] row,
-      final byte [] family, final byte [] qualifier, final long timestamp,
-      final byte [] value)
-    throws IOException {
-      Get get = new Get(row);
-      get.setTimeStamp(timestamp);
-      Result res = region.get(get, null);
-      NavigableMap<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>> map = 
-        res.getMap();
-      byte [] res_value = map.get(family).get(qualifier).get(timestamp);
-    
-      if (value == null) {
-        assertEquals(Bytes.toString(family) + " " + Bytes.toString(qualifier) +
-            " at timestamp " + timestamp, null, res_value);
-      } else {
-        if (res_value == null) {
-          fail(Bytes.toString(family) + " " + Bytes.toString(qualifier) + 
-              " at timestamp " + timestamp + "\" was expected to be \"" + 
-              value + " but was null");
-        }
-        if (res_value != null) {
-          assertEquals(Bytes.toString(family) + " " + Bytes.toString(qualifier) +
-              " at timestamp " + 
-              timestamp, value, new String(res_value));
-        }
+  protected void assertCellEquals(final HRegion region, final byte [] row,
+    final byte [] column, final long timestamp, final String value)
+  throws IOException {
+    Map<byte [], Cell> result = region.getFull(row, null, timestamp, 1, null);
+    Cell cell_value = result.get(column);
+    if (value == null) {
+      assertEquals(Bytes.toString(column) + " at timestamp " + timestamp, null,
+        cell_value);
+    } else {
+      if (cell_value == null) {
+        fail(Bytes.toString(column) + " at timestamp " + timestamp + 
+          "\" was expected to be \"" + value + " but was null");
+      }
+      if (cell_value != null) {
+        assertEquals(Bytes.toString(column) + " at timestamp " 
+            + timestamp, value, new String(cell_value.getValue()));
       }
     }
+  }
   
   /**
    * Initializes parameters used in the test environment:

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java Sun Jun 14 21:34:13 2009
@@ -33,7 +33,6 @@
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.io.hfile.HFile;
 import org.apache.hadoop.hbase.io.hfile.HFileScanner;
-import org.apache.hadoop.hbase.io.hfile.Compression;
 import org.apache.hadoop.hbase.util.Bytes;
 
 /**
@@ -188,7 +187,7 @@
     
     @Override
     void setUp() throws Exception {
-      writer = new HFile.Writer(this.fs, this.mf, RFILE_BLOCKSIZE, (Compression.Algorithm) null, null);
+      writer = new HFile.Writer(this.fs, this.mf, RFILE_BLOCKSIZE, null, null);
     }
     
     @Override

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/MiniHBaseCluster.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/MiniHBaseCluster.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/MiniHBaseCluster.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/MiniHBaseCluster.java Sun Jun 14 21:34:13 2009
@@ -26,7 +26,6 @@
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 
-import org.apache.hadoop.hbase.client.HConnectionManager;
 import org.apache.hadoop.hbase.master.HMaster;
 import org.apache.hadoop.hbase.regionserver.HRegionServer;
 import org.apache.hadoop.hbase.regionserver.HRegion;
@@ -63,7 +62,7 @@
         } catch (BindException e) {
           //this port is already in use. try to use another (for multiple testing)
           int port = conf.getInt("hbase.master.port", DEFAULT_MASTER_PORT);
-          LOG.info("Failed binding Master to port: " + port, e);
+          LOG.info("MiniHBaseCluster: Failed binding Master to port: " + port);
           port++;
           conf.setInt("hbase.master.port", port);
           continue;
@@ -173,7 +172,6 @@
     if (this.hbaseCluster != null) {
       this.hbaseCluster.shutdown();
     }
-    HConnectionManager.deleteAllConnections(false);
   }
 
   /**

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java Sun Jun 14 21:34:13 2009
@@ -38,15 +38,13 @@
 import org.apache.hadoop.dfs.MiniDFSCluster;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.HTable;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.ResultScanner;
-import org.apache.hadoop.hbase.filter.PageFilter;
-import org.apache.hadoop.hbase.filter.RowWhileMatchFilter;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.filter.PageRowFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchRowFilter;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.RowResult;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.FSUtils;
 import org.apache.hadoop.hbase.util.Hash;
@@ -88,13 +86,12 @@
   private static final int ONE_GB = 1024 * 1024 * 1000;
   private static final int ROWS_PER_GB = ONE_GB / ROW_LENGTH;
   
-  static final byte [] FAMILY_NAME = Bytes.toBytes("info");
-  static final byte [] QUALIFIER_NAME = Bytes.toBytes("data");
+  static final byte [] COLUMN_NAME = Bytes.toBytes(COLUMN_FAMILY_STR + "data");
   
   protected static final HTableDescriptor TABLE_DESCRIPTOR;
   static {
     TABLE_DESCRIPTOR = new HTableDescriptor("TestTable");
-    TABLE_DESCRIPTOR.addFamily(new HColumnDescriptor(CATALOG_FAMILY));
+    TABLE_DESCRIPTOR.addFamily(new HColumnDescriptor(COLUMN_FAMILY));
   }
   
   private static final String RANDOM_READ = "randomRead";
@@ -434,12 +431,11 @@
     
     @Override
     void testRow(final int i) throws IOException {
-      Scan scan = new Scan(getRandomRow(this.rand, this.totalRows));
-      scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
-      scan.setFilter(new RowWhileMatchFilter(new PageFilter(120)));
-      ResultScanner s = this.table.getScanner(scan);
+      Scanner s = this.table.getScanner(new byte [][] {COLUMN_NAME},
+        getRandomRow(this.rand, this.totalRows),
+        new WhileMatchRowFilter(new PageRowFilter(120)));
       //int count = 0;
-      for (Result rr = null; (rr = s.next()) != null;) {
+      for (RowResult rr = null; (rr = s.next()) != null;) {
         // LOG.info("" + count++ + " " + rr.toString());
       }
       s.close();
@@ -465,9 +461,7 @@
     
     @Override
     void testRow(final int i) throws IOException {
-      Get get = new Get(getRandomRow(this.rand, this.totalRows));
-      get.addColumn(FAMILY_NAME, QUALIFIER_NAME);
-      this.table.get(get);
+      this.table.get(getRandomRow(this.rand, this.totalRows), COLUMN_NAME);
     }
 
     @Override
@@ -491,9 +485,9 @@
     @Override
     void testRow(final int i) throws IOException {
       byte [] row = getRandomRow(this.rand, this.totalRows);
-      Put put = new Put(row);
-      put.add(FAMILY_NAME, QUALIFIER_NAME, generateValue(this.rand));
-      table.put(put);
+      BatchUpdate b = new BatchUpdate(row);
+      b.put(COLUMN_NAME, generateValue(this.rand));
+      table.commit(b);
     }
 
     @Override
@@ -503,7 +497,7 @@
   }
   
   class ScanTest extends Test {
-    private ResultScanner testScanner;
+    private Scanner testScanner;
     
     ScanTest(final HBaseConfiguration conf, final int startRow,
         final int perClientRunRows, final int totalRows, final Status status) {
@@ -513,9 +507,8 @@
     @Override
     void testSetup() throws IOException {
       super.testSetup();
-      Scan scan = new Scan(format(this.startRow));
-      scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
-      this.testScanner = table.getScanner(scan);
+      this.testScanner = table.getScanner(new byte [][] {COLUMN_NAME},
+        format(this.startRow));
     }
     
     @Override
@@ -546,9 +539,7 @@
     
     @Override
     void testRow(final int i) throws IOException {
-      Get get = new Get(format(i));
-      get.addColumn(FAMILY_NAME, QUALIFIER_NAME);
-      table.get(get);
+      table.get(format(i), COLUMN_NAME);
     }
 
     @Override
@@ -565,9 +556,9 @@
     
     @Override
     void testRow(final int i) throws IOException {
-      Put put = new Put(format(i));
-      put.add(FAMILY_NAME, QUALIFIER_NAME, generateValue(this.rand));
-      table.put(put);
+      BatchUpdate b = new BatchUpdate(format(i));
+      b.put(COLUMN_NAME, generateValue(this.rand));
+      table.commit(b);
     }
 
     @Override

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestEmptyMetaInfo.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestEmptyMetaInfo.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestEmptyMetaInfo.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestEmptyMetaInfo.java Sun Jun 14 21:34:13 2009
@@ -23,10 +23,9 @@
 import java.io.IOException;
 
 import org.apache.hadoop.hbase.client.HTable;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.RowResult;
 import org.apache.hadoop.hbase.util.Bytes;
 
 /**
@@ -45,10 +44,9 @@
       byte [] regionName = HRegionInfo.createRegionName(tableName,
         Bytes.toBytes(i == 0? "": Integer.toString(i)),
         Long.toString(System.currentTimeMillis()));
-      Put put = new Put(regionName);
-      put.add(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER,
-          Bytes.toBytes("localhost:1234"));
-      t.put(put);
+      BatchUpdate b = new BatchUpdate(regionName);
+      b.put(HConstants.COL_SERVER, Bytes.toBytes("localhost:1234"));
+      t.commit(b);
     }
     long sleepTime =
       conf.getLong("hbase.master.meta.thread.rescanfrequency", 10000);
@@ -61,18 +59,11 @@
       } catch (InterruptedException e) {
         // ignore
       }
-      Scan scan = new Scan();
-      scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
-      scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER);
-      scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.STARTCODE_QUALIFIER);
-      scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.SPLITA_QUALIFIER);
-      scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.SPLITB_QUALIFIER);
-      ResultScanner scanner = t.getScanner(scan);
+      Scanner scanner = t.getScanner(HConstants.ALL_META_COLUMNS, tableName);
       try {
         count = 0;
-        Result r;
-        while((r = scanner.next()) != null) {
-          if (!r.isEmpty()) {
+        for (RowResult r: scanner) {
+          if (r.size() > 0) {
             count += 1;
           }
         }

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestHBaseCluster.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestHBaseCluster.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestHBaseCluster.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestHBaseCluster.java Sun Jun 14 21:34:13 2009
@@ -1,5 +1,5 @@
 /**
- * Copyright 2009 The Apache Software Foundation
+ * Copyright 2007 The Apache Software Foundation
  *
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
@@ -21,16 +21,15 @@
 
 import java.io.IOException;
 import java.util.Collection;
+import java.util.Iterator;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.HTable;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.RowResult;
 import org.apache.hadoop.hbase.util.Bytes;
 
 /**
@@ -76,19 +75,18 @@
 
   private static final int FIRST_ROW = 1;
   private static final int NUM_VALS = 1000;
-  private static final byte [] CONTENTS_CF = Bytes.toBytes("contents");
-  private static final String CONTENTS_CQ_STR = "basic";
-  private static final byte [] CONTENTS_CQ = Bytes.toBytes(CONTENTS_CQ_STR);
+  private static final byte [] CONTENTS = Bytes.toBytes("contents:");
+  private static final String CONTENTS_BASIC_STR = "contents:basic";
+  private static final byte [] CONTENTS_BASIC = Bytes.toBytes(CONTENTS_BASIC_STR);
   private static final String CONTENTSTR = "contentstr";
-  //
-  private static final byte [] ANCHOR_CF = Bytes.toBytes("anchor");
-  private static final String ANCHORNUM_CQ = "anchornum-";
-  private static final String ANCHORSTR_VALUE = "anchorstr";
+  private static final byte [] ANCHOR = Bytes.toBytes("anchor:");
+  private static final String ANCHORNUM = "anchor:anchornum-";
+  private static final String ANCHORSTR = "anchorstr";
 
   private void setup() throws IOException {
     desc = new HTableDescriptor("test");
-    desc.addFamily(new HColumnDescriptor(CONTENTS_CF));
-    desc.addFamily(new HColumnDescriptor(ANCHOR_CF));
+    desc.addFamily(new HColumnDescriptor(CONTENTS));
+    desc.addFamily(new HColumnDescriptor(ANCHOR));
     admin = new HBaseAdmin(conf);
     admin.createTable(desc);
     table = new HTable(conf, desc.getName());
@@ -102,10 +100,10 @@
     // Write out a bunch of values
 
     for (int k = FIRST_ROW; k <= NUM_VALS; k++) {
-      Put put = new Put(Bytes.toBytes("row_" + k));
-      put.add(CONTENTS_CF, CONTENTS_CQ, Bytes.toBytes(CONTENTSTR + k));
-      put.add(ANCHOR_CF, Bytes.toBytes(ANCHORNUM_CQ + k), Bytes.toBytes(ANCHORSTR_VALUE + k));
-      table.put(put);
+      BatchUpdate b = new BatchUpdate("row_" + k);
+      b.put(CONTENTS_BASIC, Bytes.toBytes(CONTENTSTR + k));
+      b.put(ANCHORNUM + k, Bytes.toBytes(ANCHORSTR + k));
+      table.commit(b);
     }
     LOG.info("Write " + NUM_VALS + " rows. Elapsed time: "
         + ((System.currentTimeMillis() - startTime) / 1000.0));
@@ -119,27 +117,21 @@
       String rowlabelStr = "row_" + k;
       byte [] rowlabel = Bytes.toBytes(rowlabelStr);
 
-      Get get = new Get(rowlabel);
-      get.addColumn(CONTENTS_CF, CONTENTS_CQ);
-      byte [] bodydata = table.get(get).getValue(CONTENTS_CF, CONTENTS_CQ);
-      assertNotNull("no data for row " + rowlabelStr + "/" + CONTENTS_CQ_STR,
+      byte bodydata[] = table.get(rowlabel, CONTENTS_BASIC).getValue();
+      assertNotNull("no data for row " + rowlabelStr + "/" + CONTENTS_BASIC_STR,
           bodydata);
       String bodystr = new String(bodydata, HConstants.UTF8_ENCODING);
       String teststr = CONTENTSTR + k;
       assertTrue("Incorrect value for key: (" + rowlabelStr + "/" +
-          CONTENTS_CQ_STR + "), expected: '" + teststr + "' got: '" +
+          CONTENTS_BASIC_STR + "), expected: '" + teststr + "' got: '" +
           bodystr + "'", teststr.compareTo(bodystr) == 0);
       
-      String collabelStr = ANCHORNUM_CQ + k;
+      String collabelStr = ANCHORNUM + k;
       collabel = Bytes.toBytes(collabelStr);
-      
-      get = new Get(rowlabel);
-      get.addColumn(ANCHOR_CF, collabel);
-      
-      bodydata = table.get(get).getValue(ANCHOR_CF, collabel);
+      bodydata = table.get(rowlabel, collabel).getValue();
       assertNotNull("no data for row " + rowlabelStr + "/" + collabelStr, bodydata);
       bodystr = new String(bodydata, HConstants.UTF8_ENCODING);
-      teststr = ANCHORSTR_VALUE + k;
+      teststr = ANCHORSTR + k;
       assertTrue("Incorrect value for key: (" + rowlabelStr + "/" + collabelStr +
           "), expected: '" + teststr + "' got: '" + bodystr + "'",
           teststr.compareTo(bodystr) == 0);
@@ -150,48 +142,47 @@
   }
   
   private void scanner() throws IOException {
+    byte [][] cols = new byte [][] {Bytes.toBytes(ANCHORNUM + "[0-9]+"),
+      CONTENTS_BASIC};
     
     long startTime = System.currentTimeMillis();
     
-    Scan scan = new Scan();
-    scan.addFamily(ANCHOR_CF);
-    scan.addColumn(CONTENTS_CF, CONTENTS_CQ);
-    ResultScanner s = table.getScanner(scan);
+    Scanner s = table.getScanner(cols, HConstants.EMPTY_BYTE_ARRAY);
     try {
 
       int contentsFetched = 0;
       int anchorFetched = 0;
       int k = 0;
-      for (Result curVals : s) {
-        for(KeyValue kv : curVals.raw()) {
-          byte [] family = kv.getFamily();
-          byte [] qualifier = kv.getQualifier();
-          String strValue = new String(kv.getValue());
-          if(Bytes.equals(family, CONTENTS_CF)) {
+      for (RowResult curVals : s) {
+        for (Iterator<byte []> it = curVals.keySet().iterator(); it.hasNext(); ) {
+          byte [] col = it.next();
+          byte val[] = curVals.get(col).getValue();
+          String curval = Bytes.toString(val);
+          if (Bytes.compareTo(col, CONTENTS_BASIC) == 0) {
             assertTrue("Error at:" + Bytes.toString(curVals.getRow()) 
-                + ", Value for " + Bytes.toString(qualifier) + " should start with: " + CONTENTSTR
-                + ", but was fetched as: " + strValue,
-                strValue.startsWith(CONTENTSTR));
+                + ", Value for " + Bytes.toString(col) + " should start with: " + CONTENTSTR
+                + ", but was fetched as: " + curval,
+                curval.startsWith(CONTENTSTR));
             contentsFetched++;
             
-          } else if(Bytes.equals(family, ANCHOR_CF)) {
-            assertTrue("Error at:" + Bytes.toString(curVals.getRow()) 
-                + ", Value for " + Bytes.toString(qualifier) + " should start with: " + ANCHORSTR_VALUE
-                + ", but was fetched as: " + strValue,
-                strValue.startsWith(ANCHORSTR_VALUE));
+          } else if (Bytes.toString(col).startsWith(ANCHORNUM)) {
+            assertTrue("Error at:" + Bytes.toString(curVals.getRow())
+                + ", Value for " + Bytes.toString(col) + " should start with: " + ANCHORSTR
+                + ", but was fetched as: " + curval,
+                curval.startsWith(ANCHORSTR));
             anchorFetched++;
             
           } else {
-            LOG.info("Family: " + Bytes.toString(family) + ", Qualifier: " + Bytes.toString(qualifier));
+            LOG.info(Bytes.toString(col));
           }
         }
         k++;
       }
       assertEquals("Expected " + NUM_VALS + " " +
-        Bytes.toString(CONTENTS_CQ) + " values, but fetched " +
+        Bytes.toString(CONTENTS_BASIC) + " values, but fetched " +
         contentsFetched,
         NUM_VALS, contentsFetched);
-      assertEquals("Expected " + NUM_VALS + " " + ANCHORNUM_CQ +
+      assertEquals("Expected " + NUM_VALS + " " + ANCHORNUM +
         " values, but fetched " + anchorFetched,
         NUM_VALS, anchorFetched);
 
@@ -210,7 +201,7 @@
     assertTrue(Bytes.equals(desc.getName(), tables[0].getName()));
     Collection<HColumnDescriptor> families = tables[0].getFamilies();
     assertEquals(2, families.size());
-    assertTrue(tables[0].hasFamily(CONTENTS_CF));
-    assertTrue(tables[0].hasFamily(ANCHOR_CF));
+    assertTrue(tables[0].hasFamily(CONTENTS));
+    assertTrue(tables[0].hasFamily(ANCHOR));
   }
 }
\ No newline at end of file

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestKeyValue.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestKeyValue.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestKeyValue.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestKeyValue.java Sun Jun 14 21:34:13 2009
@@ -30,7 +30,6 @@
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.KeyValue.KVComparator;
-import org.apache.hadoop.hbase.KeyValue.Type;
 import org.apache.hadoop.hbase.util.Bytes;
 
 public class TestKeyValue extends TestCase {
@@ -40,21 +39,13 @@
     final byte [] a = Bytes.toBytes("aaa");
     byte [] column1 = Bytes.toBytes("abc:def");
     byte [] column2 = Bytes.toBytes("abcd:ef");
-    byte [] family2 = Bytes.toBytes("abcd");
-    byte [] qualifier2 = Bytes.toBytes("ef"); 
-    KeyValue aaa = new KeyValue(a, column1, 0L, Type.Put, a);
-    assertFalse(aaa.matchingColumn(column2));
-    assertTrue(aaa.matchingColumn(column1));
-    aaa = new KeyValue(a, column2, 0L, Type.Put, a);
-    assertFalse(aaa.matchingColumn(column1));
-    assertTrue(aaa.matchingColumn(family2,qualifier2));
+    KeyValue aaa = new KeyValue(a, column1, a);
+    assertFalse(KeyValue.COMPARATOR.
+      compareColumns(aaa, column2, 0, column2.length, 4) == 0);
     column1 = Bytes.toBytes("abcd:");
-    aaa = new KeyValue(a, column1, 0L, Type.Put, a);
-    assertTrue(aaa.matchingColumn(family2,null));
-    assertFalse(aaa.matchingColumn(family2,qualifier2));
-    // Previous test had an assertFalse that I don't understand
-    //    assertFalse(KeyValue.COMPARATOR.
-    //    compareColumns(aaa, column1, 0, column1.length, 4) == 0);
+    aaa = new KeyValue(a, column1, a);
+    assertFalse(KeyValue.COMPARATOR.
+      compareColumns(aaa, column1, 0, column1.length, 4) == 0);
   }
 
   public void testBasics() throws Exception {
@@ -120,31 +111,31 @@
   public void testMoreComparisons() throws Exception {
     // Root compares
     long now = System.currentTimeMillis();
-    KeyValue a = new KeyValue(Bytes.toBytes(".META.,,99999999999999"), now);
-    KeyValue b = new KeyValue(Bytes.toBytes(".META.,,1"), now);
+    KeyValue a = new KeyValue(".META.,,99999999999999", now);
+    KeyValue b = new KeyValue(".META.,,1", now);
     KVComparator c = new KeyValue.RootComparator();
     assertTrue(c.compare(b, a) < 0);
-    KeyValue aa = new KeyValue(Bytes.toBytes(".META.,,1"), now);
-    KeyValue bb = new KeyValue(Bytes.toBytes(".META.,,1"), 
-        Bytes.toBytes("info:regioninfo"), 1235943454602L);
+    KeyValue aa = new KeyValue(".META.,,1", now);
+    KeyValue bb = new KeyValue(".META.,,1", "info:regioninfo",
+      1235943454602L);
     assertTrue(c.compare(aa, bb) < 0);
     
     // Meta compares
-    KeyValue aaa = new KeyValue(
-        Bytes.toBytes("TestScanMultipleVersions,row_0500,1236020145502"), now);
-    KeyValue bbb = new KeyValue(
-        Bytes.toBytes("TestScanMultipleVersions,,99999999999999"), now);
+    KeyValue aaa =
+      new KeyValue("TestScanMultipleVersions,row_0500,1236020145502", now);
+    KeyValue bbb = new KeyValue("TestScanMultipleVersions,,99999999999999",
+      now);
     c = new KeyValue.MetaComparator();
     assertTrue(c.compare(bbb, aaa) < 0);
     
-    KeyValue aaaa = new KeyValue(Bytes.toBytes("TestScanMultipleVersions,,1236023996656"),
-        Bytes.toBytes("info:regioninfo"), 1236024396271L);
+    KeyValue aaaa = new KeyValue("TestScanMultipleVersions,,1236023996656",
+      "info:regioninfo", 1236024396271L);
     assertTrue(c.compare(aaaa, bbb) < 0);
     
-    KeyValue x = new KeyValue(Bytes.toBytes("TestScanMultipleVersions,row_0500,1236034574162"),
-        Bytes.toBytes(""), 9223372036854775807L);
-    KeyValue y = new KeyValue(Bytes.toBytes("TestScanMultipleVersions,row_0500,1236034574162"),
-        Bytes.toBytes("info:regioninfo"), 1236034574912L);
+    KeyValue x = new KeyValue("TestScanMultipleVersions,row_0500,1236034574162",
+      "", 9223372036854775807L);
+    KeyValue y = new KeyValue("TestScanMultipleVersions,row_0500,1236034574162",
+      "info:regioninfo", 1236034574912L);
     assertTrue(c.compare(x, y) < 0);
     comparisons(new KeyValue.MetaComparator());
     comparisons(new KeyValue.KVComparator());
@@ -160,53 +151,53 @@
   public void testKeyValueBorderCases() throws IOException {
     // % sorts before , so if we don't do special comparator, rowB would
     // come before rowA.
-    KeyValue rowA = new KeyValue(Bytes.toBytes("testtable,www.hbase.org/,1234"),
-      Bytes.toBytes(""), Long.MAX_VALUE);
-    KeyValue rowB = new KeyValue(Bytes.toBytes("testtable,www.hbase.org/%20,99999"),
-      Bytes.toBytes(""), Long.MAX_VALUE);
+    KeyValue rowA = new KeyValue("testtable,www.hbase.org/,1234",
+      "", Long.MAX_VALUE);
+    KeyValue rowB = new KeyValue("testtable,www.hbase.org/%20,99999",
+      "", Long.MAX_VALUE);
     assertTrue(KeyValue.META_COMPARATOR.compare(rowA, rowB) < 0);
 
-    rowA = new KeyValue(Bytes.toBytes("testtable,,1234"), Bytes.toBytes(""), Long.MAX_VALUE);
-    rowB = new KeyValue(Bytes.toBytes("testtable,$www.hbase.org/,99999"), Bytes.toBytes(""), Long.MAX_VALUE);
+    rowA = new KeyValue("testtable,,1234", "", Long.MAX_VALUE);
+    rowB = new KeyValue("testtable,$www.hbase.org/,99999", "", Long.MAX_VALUE);
     assertTrue(KeyValue.META_COMPARATOR.compare(rowA, rowB) < 0);
 
-    rowA = new KeyValue(Bytes.toBytes(".META.,testtable,www.hbase.org/,1234,4321"), Bytes.toBytes(""),
+    rowA = new KeyValue(".META.,testtable,www.hbase.org/,1234,4321", "",
       Long.MAX_VALUE);
-    rowB = new KeyValue(Bytes.toBytes(".META.,testtable,www.hbase.org/%20,99999,99999"), Bytes.toBytes(""),
+    rowB = new KeyValue(".META.,testtable,www.hbase.org/%20,99999,99999", "",
       Long.MAX_VALUE);
     assertTrue(KeyValue.ROOT_COMPARATOR.compare(rowA, rowB) < 0);
   }
 
   private void metacomparisons(final KeyValue.MetaComparator c) {
     long now = System.currentTimeMillis();
-    assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,a,,0,1"), now),
-      new KeyValue(Bytes.toBytes(".META.,a,,0,1"), now)) == 0);
-    KeyValue a = new KeyValue(Bytes.toBytes(".META.,a,,0,1"), now);
-    KeyValue b = new KeyValue(Bytes.toBytes(".META.,a,,0,2"), now);
+    assertTrue(c.compare(new KeyValue(".META.,a,,0,1", now),
+      new KeyValue(".META.,a,,0,1", now)) == 0);
+    KeyValue a = new KeyValue(".META.,a,,0,1", now);
+    KeyValue b = new KeyValue(".META.,a,,0,2", now);
     assertTrue(c.compare(a, b) < 0);
-    assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,a,,0,2"), now),
-      new KeyValue(Bytes.toBytes(".META.,a,,0,1"), now)) > 0);
+    assertTrue(c.compare(new KeyValue(".META.,a,,0,2", now),
+      new KeyValue(".META.,a,,0,1", now)) > 0);
   }
 
   private void comparisons(final KeyValue.KVComparator c) {
     long now = System.currentTimeMillis();
-    assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,,1"), now),
-      new KeyValue(Bytes.toBytes(".META.,,1"), now)) == 0);
-    assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,,1"), now),
-      new KeyValue(Bytes.toBytes(".META.,,2"), now)) < 0);
-    assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,,2"), now),
-      new KeyValue(Bytes.toBytes(".META.,,1"), now)) > 0);
+    assertTrue(c.compare(new KeyValue(".META.,,1", now),
+      new KeyValue(".META.,,1", now)) == 0);
+    assertTrue(c.compare(new KeyValue(".META.,,1", now),
+      new KeyValue(".META.,,2", now)) < 0);
+    assertTrue(c.compare(new KeyValue(".META.,,2", now),
+      new KeyValue(".META.,,1", now)) > 0);
   }
 
   public void testBinaryKeys() throws Exception {
     Set<KeyValue> set = new TreeSet<KeyValue>(KeyValue.COMPARATOR);
-    byte [] column = Bytes.toBytes("col:umn");
-    KeyValue [] keys = {new KeyValue(Bytes.toBytes("aaaaa,\u0000\u0000,2"), column, 2),
-      new KeyValue(Bytes.toBytes("aaaaa,\u0001,3"), column, 3),
-      new KeyValue(Bytes.toBytes("aaaaa,,1"), column, 1),
-      new KeyValue(Bytes.toBytes("aaaaa,\u1000,5"), column, 5),
-      new KeyValue(Bytes.toBytes("aaaaa,a,4"), column, 4),
-      new KeyValue(Bytes.toBytes("a,a,0"), column, 0),
+    String column = "col:umn";
+    KeyValue [] keys = {new KeyValue("aaaaa,\u0000\u0000,2", column, 2),
+      new KeyValue("aaaaa,\u0001,3", column, 3),
+      new KeyValue("aaaaa,,1", column, 1),
+      new KeyValue("aaaaa,\u1000,5", column, 5),
+      new KeyValue("aaaaa,a,4", column, 4),
+      new KeyValue("a,a,0", column, 0),
     };
     // Add to set with bad comparator
     for (int i = 0; i < keys.length; i++) {
@@ -235,12 +226,12 @@
     }
     // Make up -ROOT- table keys.
     KeyValue [] rootKeys = {
-        new KeyValue(Bytes.toBytes(".META.,aaaaa,\u0000\u0000,0,2"), column, 2),
-        new KeyValue(Bytes.toBytes(".META.,aaaaa,\u0001,0,3"), column, 3),
-        new KeyValue(Bytes.toBytes(".META.,aaaaa,,0,1"), column, 1),
-        new KeyValue(Bytes.toBytes(".META.,aaaaa,\u1000,0,5"), column, 5),
-        new KeyValue(Bytes.toBytes(".META.,aaaaa,a,0,4"), column, 4),
-        new KeyValue(Bytes.toBytes(".META.,,0"), column, 0),
+        new KeyValue(".META.,aaaaa,\u0000\u0000,0,2", column, 2),
+        new KeyValue(".META.,aaaaa,\u0001,0,3", column, 3),
+        new KeyValue(".META.,aaaaa,,0,1", column, 1),
+        new KeyValue(".META.,aaaaa,\u1000,0,5", column, 5),
+        new KeyValue(".META.,aaaaa,a,0,4", column, 4),
+        new KeyValue(".META.,,0", column, 0),
       };
     // This will output the keys incorrectly.
     set = new TreeSet<KeyValue>(new KeyValue.MetaComparator());
@@ -269,11 +260,4 @@
       assertTrue(count++ == k.getTimestamp());
     }
   }
-
-  public void testStackedUpKeyValue() {
-    // Test multiple KeyValues in a single blob.
-
-    // TODO actually write this test!
-    
-  }
 }
\ No newline at end of file
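
For reference, a minimal sketch (not part of the patch) of the 0.19-era KeyValue usage the hunks above fall back to: a column is a single "family:qualifier" byte array, and column equality is checked through the shared comparator rather than matchingColumn(). The constructor and compareColumns signatures are exactly the ones used in the test; the final integer argument is carried over from the test unchanged.

    byte [] row = Bytes.toBytes("aaa");
    byte [] column = Bytes.toBytes("abc:def");       // family "abc", qualifier "def"
    KeyValue kv = new KeyValue(row, column, row);    // (row, column, value) form used above
    // compareColumns == 0 means the stored column matches the given one
    boolean sameColumn =
      KeyValue.COMPARATOR.compareColumns(kv, column, 0, column.length, 4) == 0;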

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestRegionRebalancing.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestRegionRebalancing.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestRegionRebalancing.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestRegionRebalancing.java Sun Jun 14 21:34:13 2009
@@ -28,7 +28,6 @@
 
 import org.apache.hadoop.hbase.io.BatchUpdate;
 import org.apache.hadoop.hbase.client.HTable;
-import org.apache.hadoop.hbase.client.Put;
 
 import org.apache.hadoop.hbase.regionserver.HRegionServer;
 import org.apache.hadoop.hbase.regionserver.HRegion;
@@ -224,10 +223,9 @@
   throws IOException {
     HRegion region = createNewHRegion(desc, startKey, endKey);
     byte [] keyToWrite = startKey == null ? Bytes.toBytes("row_000") : startKey;
-    Put put = new Put(keyToWrite);
-    byte [][] famAndQf = KeyValue.parseColumn(COLUMN_NAME);
-    put.add(famAndQf[0], famAndQf[1], Bytes.toBytes("test"));
-    region.put(put);
+    BatchUpdate bu = new BatchUpdate(keyToWrite);
+    bu.put(COLUMN_NAME, "test".getBytes());
+    region.batchUpdate(bu, null);
     region.close();
     region.getLog().closeAndDelete();
     return region;
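
A minimal sketch (not part of the patch) of the region-level write path this hunk reverts to; region and COLUMN_NAME come from the surrounding test, and the null second argument to batchUpdate is carried over from the test unchanged:

    byte [] row = Bytes.toBytes("row_000");
    BatchUpdate bu = new BatchUpdate(row);           // one row per BatchUpdate
    bu.put(COLUMN_NAME, Bytes.toBytes("test"));      // column given as "family:qualifier"
    region.batchUpdate(bu, null);                    // write directly into the HRegion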

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestScanMultipleVersions.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestScanMultipleVersions.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestScanMultipleVersions.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestScanMultipleVersions.java Sun Jun 14 21:34:13 2009
@@ -21,12 +21,11 @@
 package org.apache.hadoop.hbase;
 
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HTable;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
 import org.apache.hadoop.hbase.regionserver.HRegion;
 import org.apache.hadoop.hbase.util.Bytes;
 
@@ -54,7 +53,7 @@
     // Create table description
     
     this.desc = new HTableDescriptor(TABLE_NAME);
-    this.desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+    this.desc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY));
 
     // Region 0 will contain the key range [,row_0500)
     INFOS[0] = new HRegionInfo(this.desc, HConstants.EMPTY_START_ROW,
@@ -71,11 +70,9 @@
         HRegion.createHRegion(this.INFOS[i], this.testDir, this.conf);
       // Insert data
       for (int j = 0; j < TIMESTAMPS.length; j++) {
-        Put put = new Put(ROWS[i]);
-        put.setTimeStamp(TIMESTAMPS[j]);
-        put.add(HConstants.CATALOG_FAMILY, null, TIMESTAMPS[j], 
-            Bytes.toBytes(TIMESTAMPS[j]));
-        REGIONS[i].put(put);
+        BatchUpdate b = new BatchUpdate(ROWS[i], TIMESTAMPS[j]);
+        b.put(HConstants.COLUMN_FAMILY, Bytes.toBytes(TIMESTAMPS[j]));
+        REGIONS[i].batchUpdate(b, null);
       }
       // Insert the region we created into the meta
       HRegion.addRegionToMETA(meta, REGIONS[i]);
@@ -96,25 +93,19 @@
     HTable t = new HTable(conf, TABLE_NAME);
     for (int i = 0; i < ROWS.length; i++) {
       for (int j = 0; j < TIMESTAMPS.length; j++) {
-        Get get = new Get(ROWS[i]);
-        get.addFamily(HConstants.CATALOG_FAMILY);
-        get.setTimeStamp(TIMESTAMPS[j]);
-        Result result = t.get(get);
-        int cellCount = 0;
-        for(@SuppressWarnings("unused")KeyValue kv : result.sorted()) {
-          cellCount++;
-        }
-        assertTrue(cellCount == 1);
+        Cell [] cells =
+          t.get(ROWS[i], HConstants.COLUMN_FAMILY, TIMESTAMPS[j], 1);
+        assertTrue(cells != null && cells.length == 1);
+        System.out.println("Row=" + Bytes.toString(ROWS[i]) + ", cell=" +
+          cells[0]);
       }
     }
     
     // Case 1: scan with LATEST_TIMESTAMP. Should get two rows
     int count = 0;
-    Scan scan = new Scan();
-    scan.addFamily(HConstants.CATALOG_FAMILY);
-    ResultScanner s = t.getScanner(scan);
+    Scanner s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY);
     try {
-      for (Result rr = null; (rr = s.next()) != null;) {
+      for (RowResult rr = null; (rr = s.next()) != null;) {
         System.out.println(rr.toString());
         count += 1;
       }
@@ -127,11 +118,8 @@
     // (in this case > 1000 and < LATEST_TIMESTAMP. Should get 2 rows.
     
     count = 0;
-    scan = new Scan();
-    scan.setTimeRange(1000L, Long.MAX_VALUE);
-    scan.addFamily(HConstants.CATALOG_FAMILY);
-
-    s = t.getScanner(scan);
+    s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY, HConstants.EMPTY_START_ROW,
+        10000L);
     try {
       while (s.next() != null) {
         count += 1;
@@ -145,11 +133,8 @@
     // (in this case == 1000. Should get 2 rows.
     
     count = 0;
-    scan = new Scan();
-    scan.setTimeStamp(1000L);
-    scan.addFamily(HConstants.CATALOG_FAMILY);
-
-    s = t.getScanner(scan);
+    s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY, HConstants.EMPTY_START_ROW,
+        1000L);
     try {
       while (s.next() != null) {
         count += 1;
@@ -163,11 +148,8 @@
     // second timestamp (100 < timestamp < 1000). Should get 2 rows.
     
     count = 0;
-    scan = new Scan();
-    scan.setTimeRange(100L, 1000L);
-    scan.addFamily(HConstants.CATALOG_FAMILY);
-
-    s = t.getScanner(scan);
+    s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY, HConstants.EMPTY_START_ROW,
+        500L);
     try {
       while (s.next() != null) {
         count += 1;
@@ -181,11 +163,8 @@
     // Should get 2 rows.
     
     count = 0;
-    scan = new Scan();
-    scan.setTimeStamp(100L);
-    scan.addFamily(HConstants.CATALOG_FAMILY);
-
-    s = t.getScanner(scan);
+    s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY, HConstants.EMPTY_START_ROW,
+        100L);
     try {
       while (s.next() != null) {
         count += 1;
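
A minimal sketch (not part of the patch) of the 0.19-style scanner usage the hunks above switch back to; t is the HTable from the test, and the trailing long plays the role of the scan timestamp that setTimeStamp/setTimeRange expressed in the newer API:

    Scanner s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY,
        HConstants.EMPTY_START_ROW, 1000L);
    int count = 0;
    for (RowResult rr = null; (rr = s.next()) != null;) {
      count++;                                       // one RowResult per matching row
    }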

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestSerialization.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestSerialization.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestSerialization.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestSerialization.java Sun Jun 14 21:34:13 2009
@@ -1,5 +1,5 @@
 /**
- * Copyright 2009 The Apache Software Foundation
+ * Copyright 2007 The Apache Software Foundation
  *
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
@@ -20,31 +20,13 @@
 package org.apache.hadoop.hbase;
 
 
-import java.io.ByteArrayOutputStream;
-import java.io.DataOutputStream;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.NavigableSet;
-
-import org.apache.hadoop.hbase.client.Delete;
-import org.apache.hadoop.hbase.client.Get;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.RowLock;
-import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.io.BatchOperation;
 import org.apache.hadoop.hbase.io.BatchUpdate;
 import org.apache.hadoop.hbase.io.Cell;
 import org.apache.hadoop.hbase.io.HbaseMapWritable;
 import org.apache.hadoop.hbase.io.RowResult;
-import org.apache.hadoop.hbase.io.TimeRange;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Writables;
-import org.apache.hadoop.io.DataInputBuffer;
-import org.apache.hadoop.io.Writable;
 
 /**
  * Test HBase Writables serializations
@@ -70,7 +52,6 @@
     assertTrue(KeyValue.COMPARATOR.compare(original, newone) == 0);
   }
 
-  @SuppressWarnings("unchecked")
   public void testHbaseMapWritable() throws Exception {
     HbaseMapWritable<byte [], byte []> hmw =
       new HbaseMapWritable<byte[], byte[]>();
@@ -176,7 +157,7 @@
     assertTrue(Bytes.equals(bu.getRow(), bubu.getRow()));
     // Assert has same number of BatchOperations.
     int firstCount = 0;
-    for (@SuppressWarnings("unused")BatchOperation bo: bubu) {
+    for (BatchOperation bo: bubu) {
       firstCount++;
     }
     // Now deserialize again into same instance to ensure we're not
@@ -185,358 +166,9 @@
     // Assert rows are same again.
     assertTrue(Bytes.equals(bu.getRow(), bububu.getRow()));
     int secondCount = 0;
-    for (@SuppressWarnings("unused")BatchOperation bo: bububu) {
+    for (BatchOperation bo: bububu) {
       secondCount++;
     }
     assertEquals(firstCount, secondCount);
   }
-  
-  
-  //
-  // HBASE-880
-  //
-  
-  public void testPut() throws Exception{
-    byte[] row = "row".getBytes();
-    byte[] fam = "fam".getBytes();
-    byte[] qf1 = "qf1".getBytes();
-    byte[] qf2 = "qf2".getBytes();
-    byte[] qf3 = "qf3".getBytes();
-    byte[] qf4 = "qf4".getBytes();
-    byte[] qf5 = "qf5".getBytes();
-    byte[] qf6 = "qf6".getBytes();
-    byte[] qf7 = "qf7".getBytes();
-    byte[] qf8 = "qf8".getBytes();
-    
-    long ts = System.currentTimeMillis();
-    byte[] val = "val".getBytes();
-    
-    Put put = new Put(row);
-    put.add(fam, qf1, ts, val);
-    put.add(fam, qf2, ts, val);
-    put.add(fam, qf3, ts, val);
-    put.add(fam, qf4, ts, val);
-    put.add(fam, qf5, ts, val);
-    put.add(fam, qf6, ts, val);
-    put.add(fam, qf7, ts, val);
-    put.add(fam, qf8, ts, val);
-    
-    byte[] sb = Writables.getBytes(put);
-    Put desPut = (Put)Writables.getWritable(sb, new Put());
-
-    //Timing test
-//    long start = System.nanoTime();
-//    desPut = (Put)Writables.getWritable(sb, new Put());
-//    long stop = System.nanoTime();
-//    System.out.println("timer " +(stop-start));
-    
-    assertTrue(Bytes.equals(put.getRow(), desPut.getRow()));
-    List<KeyValue> list = null;
-    List<KeyValue> desList = null;
-    for(Map.Entry<byte[], List<KeyValue>> entry : put.getFamilyMap().entrySet()){
-      assertTrue(desPut.getFamilyMap().containsKey(entry.getKey()));
-      list = entry.getValue();
-      desList = desPut.getFamilyMap().get(entry.getKey());
-      for(int i=0; i<list.size(); i++){
-        assertTrue(list.get(i).equals(desList.get(i)));
-      }
-    }
-  }
-
-  
-  public void testPut2() throws Exception{
-    byte[] row = "testAbort,,1243116656250".getBytes();
-    byte[] fam = "historian".getBytes();
-    byte[] qf1 = "creation".getBytes();
-    
-    long ts = 9223372036854775807L;
-    byte[] val = "dont-care".getBytes();
-    
-    Put put = new Put(row);
-    put.add(fam, qf1, ts, val);
-    
-    byte[] sb = Writables.getBytes(put);
-    Put desPut = (Put)Writables.getWritable(sb, new Put());
-
-    assertTrue(Bytes.equals(put.getRow(), desPut.getRow()));
-    List<KeyValue> list = null;
-    List<KeyValue> desList = null;
-    for(Map.Entry<byte[], List<KeyValue>> entry : put.getFamilyMap().entrySet()){
-      assertTrue(desPut.getFamilyMap().containsKey(entry.getKey()));
-      list = entry.getValue();
-      desList = desPut.getFamilyMap().get(entry.getKey());
-      for(int i=0; i<list.size(); i++){
-        assertTrue(list.get(i).equals(desList.get(i)));
-      }
-    }
-  }
-  
-  
-  public void testDelete() throws Exception{
-    byte[] row = "row".getBytes();
-    byte[] fam = "fam".getBytes();
-    byte[] qf1 = "qf1".getBytes();
-    
-    long ts = System.currentTimeMillis();
-    
-    Delete delete = new Delete(row);
-    delete.deleteColumn(fam, qf1, ts);
-    
-    byte[] sb = Writables.getBytes(delete);
-    Delete desDelete = (Delete)Writables.getWritable(sb, new Delete());
-
-    assertTrue(Bytes.equals(delete.getRow(), desDelete.getRow()));
-    List<KeyValue> list = null;
-    List<KeyValue> desList = null;
-    for(Map.Entry<byte[], List<KeyValue>> entry :
-        delete.getFamilyMap().entrySet()){
-      assertTrue(desDelete.getFamilyMap().containsKey(entry.getKey()));
-      list = entry.getValue();
-      desList = desDelete.getFamilyMap().get(entry.getKey());
-      for(int i=0; i<list.size(); i++){
-        assertTrue(list.get(i).equals(desList.get(i)));
-      }
-    }
-  }
- 
-  public void testGet() throws Exception{
-    byte[] row = "row".getBytes();
-    byte[] fam = "fam".getBytes();
-    byte[] qf1 = "qf1".getBytes();
-    
-    long ts = System.currentTimeMillis();
-    int maxVersions = 2;
-    long lockid = 5;
-    RowLock rowLock = new RowLock(lockid);
-    
-    Get get = new Get(row, rowLock);
-    get.addColumn(fam, qf1);
-    get.setTimeRange(ts, ts+1);
-    get.setMaxVersions(maxVersions);
-    
-    byte[] sb = Writables.getBytes(get);
-    Get desGet = (Get)Writables.getWritable(sb, new Get());
-
-    assertTrue(Bytes.equals(get.getRow(), desGet.getRow()));
-    Set<byte[]> set = null;
-    Set<byte[]> desSet = null;
-    
-    for(Map.Entry<byte[], NavigableSet<byte[]>> entry :
-        get.getFamilyMap().entrySet()){
-      assertTrue(desGet.getFamilyMap().containsKey(entry.getKey()));
-      set = entry.getValue();
-      desSet = desGet.getFamilyMap().get(entry.getKey());
-      for(byte [] qualifier : set){
-        assertTrue(desSet.contains(qualifier));
-      }
-    }
-    
-    assertEquals(get.getLockId(), desGet.getLockId());
-    assertEquals(get.getMaxVersions(), desGet.getMaxVersions());
-    TimeRange tr = get.getTimeRange();
-    TimeRange desTr = desGet.getTimeRange();
-    assertEquals(tr.getMax(), desTr.getMax());
-    assertEquals(tr.getMin(), desTr.getMin());
-  }
-  
-
-  public void testScan() throws Exception{
-    byte[] startRow = "startRow".getBytes();
-    byte[] stopRow  = "stopRow".getBytes();
-    byte[] fam = "fam".getBytes();
-    byte[] qf1 = "qf1".getBytes();
-    
-    long ts = System.currentTimeMillis();
-    int maxVersions = 2;
-    
-    Scan scan = new Scan(startRow, stopRow);
-    scan.addColumn(fam, qf1);
-    scan.setTimeRange(ts, ts+1);
-    scan.setMaxVersions(maxVersions);
-    
-    byte[] sb = Writables.getBytes(scan);
-    Scan desScan = (Scan)Writables.getWritable(sb, new Scan());
-
-    assertTrue(Bytes.equals(scan.getStartRow(), desScan.getStartRow()));
-    assertTrue(Bytes.equals(scan.getStopRow(), desScan.getStopRow()));
-    Set<byte[]> set = null;
-    Set<byte[]> desSet = null;
-    
-    for(Map.Entry<byte[], NavigableSet<byte[]>> entry :
-        scan.getFamilyMap().entrySet()){
-      assertTrue(desScan.getFamilyMap().containsKey(entry.getKey()));
-      set = entry.getValue();
-      desSet = desScan.getFamilyMap().get(entry.getKey());
-      for(byte[] column : set){
-        assertTrue(desSet.contains(column));
-      }
-    }
-    
-    assertEquals(scan.getMaxVersions(), desScan.getMaxVersions());
-    TimeRange tr = scan.getTimeRange();
-    TimeRange desTr = desScan.getTimeRange();
-    assertEquals(tr.getMax(), desTr.getMax());
-    assertEquals(tr.getMin(), desTr.getMin());
-  }
-  
-  public void testResultEmpty() throws Exception {
-    List<KeyValue> keys = new ArrayList<KeyValue>();
-    Result r = new Result(keys);
-    assertTrue(r.isEmpty());
-    byte [] rb = Writables.getBytes(r);
-    Result deserializedR = (Result)Writables.getWritable(rb, new Result());
-    assertTrue(deserializedR.isEmpty());
-  }
-  
-  
-  public void testResult() throws Exception {
-    byte [] rowA = Bytes.toBytes("rowA");
-    byte [] famA = Bytes.toBytes("famA");
-    byte [] qfA = Bytes.toBytes("qfA");
-    byte [] valueA = Bytes.toBytes("valueA");
-    
-    byte [] rowB = Bytes.toBytes("rowB");
-    byte [] famB = Bytes.toBytes("famB");
-    byte [] qfB = Bytes.toBytes("qfB");
-    byte [] valueB = Bytes.toBytes("valueB");
-    
-    KeyValue kvA = new KeyValue(rowA, famA, qfA, valueA);
-    KeyValue kvB = new KeyValue(rowB, famB, qfB, valueB);
-    
-    Result result = new Result(new KeyValue[]{kvA, kvB});
-    
-    byte [] rb = Writables.getBytes(result);
-    Result deResult = (Result)Writables.getWritable(rb, new Result());
-    
-    assertTrue("results are not equivalent, first key mismatch",
-        result.sorted()[0].equals(deResult.sorted()[0]));
-    
-    assertTrue("results are not equivalent, second key mismatch",
-        result.sorted()[1].equals(deResult.sorted()[1]));
-    
-    // Test empty Result
-    Result r = new Result();
-    byte [] b = Writables.getBytes(r);
-    Result deserialized = (Result)Writables.getWritable(b, new Result());
-    assertEquals(r.size(), deserialized.size());
-  }
-  
-  public void testResultArray() throws Exception {
-    byte [] rowA = Bytes.toBytes("rowA");
-    byte [] famA = Bytes.toBytes("famA");
-    byte [] qfA = Bytes.toBytes("qfA");
-    byte [] valueA = Bytes.toBytes("valueA");
-    
-    byte [] rowB = Bytes.toBytes("rowB");
-    byte [] famB = Bytes.toBytes("famB");
-    byte [] qfB = Bytes.toBytes("qfB");
-    byte [] valueB = Bytes.toBytes("valueB");
-    
-    KeyValue kvA = new KeyValue(rowA, famA, qfA, valueA);
-    KeyValue kvB = new KeyValue(rowB, famB, qfB, valueB);
-
-    
-    Result result1 = new Result(new KeyValue[]{kvA, kvB});
-    Result result2 = new Result(new KeyValue[]{kvB});
-    Result result3 = new Result(new KeyValue[]{kvB});
-    
-    Result [] results = new Result [] {result1, result2, result3};
-    
-    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
-    DataOutputStream out = new DataOutputStream(byteStream);
-    Result.writeArray(out, results);
-    
-    byte [] rb = byteStream.toByteArray();
-    
-    DataInputBuffer in = new DataInputBuffer();
-    in.reset(rb, 0, rb.length);
-    
-    Result [] deResults = Result.readArray(in);
-    
-    assertTrue(results.length == deResults.length);
-    
-    for(int i=0;i<results.length;i++) {
-      KeyValue [] keysA = results[i].sorted();
-      KeyValue [] keysB = deResults[i].sorted();
-      assertTrue(keysA.length == keysB.length);
-      for(int j=0;j<keysA.length;j++) {
-        assertTrue("Expected equivalent keys but found:\n" +
-            "KeyA : " + keysA[j].toString() + "\n" +
-            "KeyB : " + keysB[j].toString() + "\n" + 
-            keysA.length + " total keys, " + i + "th so far"
-            ,keysA[j].equals(keysB[j]));
-      }
-    }
-    
-  }
-  
-  public void testResultArrayEmpty() throws Exception {
-    List<KeyValue> keys = new ArrayList<KeyValue>();
-    Result r = new Result(keys);
-    Result [] results = new Result [] {r};
-
-    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
-    DataOutputStream out = new DataOutputStream(byteStream);
-    
-    Result.writeArray(out, results);
-    
-    results = null;
-    
-    byteStream = new ByteArrayOutputStream();
-    out = new DataOutputStream(byteStream);
-    Result.writeArray(out, results);
-    
-    byte [] rb = byteStream.toByteArray();
-    
-    DataInputBuffer in = new DataInputBuffer();
-    in.reset(rb, 0, rb.length);
-    
-    Result [] deResults = Result.readArray(in);
-    
-    assertTrue(deResults.length == 0);
-    
-    results = new Result[0];
-
-    byteStream = new ByteArrayOutputStream();
-    out = new DataOutputStream(byteStream);
-    Result.writeArray(out, results);
-    
-    rb = byteStream.toByteArray();
-    
-    in = new DataInputBuffer();
-    in.reset(rb, 0, rb.length);
-    
-    deResults = Result.readArray(in);
-    
-    assertTrue(deResults.length == 0);
-    
-  }
-  
-  public void testTimeRange(String[] args) throws Exception{
-    TimeRange tr = new TimeRange(0,5);
-    byte [] mb = Writables.getBytes(tr);
-    TimeRange deserializedTr =
-      (TimeRange)Writables.getWritable(mb, new TimeRange());
-    
-    assertEquals(tr.getMax(), deserializedTr.getMax());
-    assertEquals(tr.getMin(), deserializedTr.getMin());
-    
-  }
-  
-  public void testKeyValue2() throws Exception {
-    byte[] row = getName().getBytes();
-    byte[] fam = "fam".getBytes();
-    byte[] qf = "qf".getBytes();
-    long ts = System.currentTimeMillis();
-    byte[] val = "val".getBytes();
-    
-    KeyValue kv = new KeyValue(row, fam, qf, ts, val);
-    
-    byte [] mb = Writables.getBytes(kv);
-    KeyValue deserializedKv =
-      (KeyValue)Writables.getWritable(mb, new KeyValue());
-    assertTrue(Bytes.equals(kv.getBuffer(), deserializedKv.getBuffer()));
-    assertEquals(kv.getOffset(), deserializedKv.getOffset());
-    assertEquals(kv.getLength(), deserializedKv.getLength());
-  }
 }
\ No newline at end of file
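
A minimal sketch (not part of the patch) of the Writables round trip the retained BatchUpdate tests exercise, in place of the removed Put/Get/Scan/Result versions. The constructor and put calls appear verbatim elsewhere in this diff; the Writables helpers are the same ones the removed tests used for Put and KeyValue:

    BatchUpdate bu = new BatchUpdate("testrow");
    bu.put("fam:col", Bytes.toBytes("testdata"));
    byte [] serialized = Writables.getBytes(bu);     // Writable -> byte []
    BatchUpdate roundTripped =
      (BatchUpdate)Writables.getWritable(serialized, new BatchUpdate());
    assertTrue(Bytes.equals(bu.getRow(), roundTripped.getRow()));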

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestTable.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestTable.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestTable.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestTable.java Sun Jun 14 21:34:13 2009
@@ -23,7 +23,6 @@
 
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.HTable;
-import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.io.BatchUpdate;
 import org.apache.hadoop.hbase.util.Bytes;
 
@@ -58,7 +57,7 @@
     // Try doing a duplicate database create.
     msg = null;
     HTableDescriptor desc = new HTableDescriptor(getName());
-    desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+    desc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY));
     admin.createTable(desc);
     assertTrue("First table creation completed", admin.listTables().length == 1);
     boolean gotException = false;
@@ -75,7 +74,7 @@
     // Now try and do concurrent creation with a bunch of threads.
     final HTableDescriptor threadDesc =
       new HTableDescriptor("threaded_" + getName());
-    threadDesc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+    threadDesc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY));
     int count = 10;
     Thread [] threads = new Thread [count];
     final AtomicInteger successes = new AtomicInteger(0);
@@ -110,8 +109,8 @@
     }
     // All threads are now dead.  Count up how many tables were created and
     // how many failed w/ appropriate exception.
-    assertEquals(1, successes.get());
-    assertEquals(count - 1, failures.get());
+    assertTrue(successes.get() == 1);
+    assertTrue(failures.get() == (count - 1));
   }
   
   /**
@@ -141,12 +140,10 @@
     HTable table = new HTable(conf, getName());
     try {
       byte[] value = Bytes.toBytes("somedata");
-      // This used to use an empty row... That must have been a bug
-      Put put = new Put(value);
-      byte [][] famAndQf = KeyValue.parseColumn(colName);
-      put.add(famAndQf[0], famAndQf[1], value);
-      table.put(put);
-      fail("Put on read only table succeeded");  
+      BatchUpdate update = new BatchUpdate();
+      update.put(colName, value);
+      table.commit(update);
+      fail("BatchUpdate on read only table succeeded");  
     } catch (Exception e) {
       // expected
     }

Modified: hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestZooKeeper.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestZooKeeper.java?rev=784618&r1=784617&r2=784618&view=diff
==============================================================================
--- hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestZooKeeper.java (original)
+++ hadoop/hbase/trunk_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/TestZooKeeper.java Sun Jun 14 21:34:13 2009
@@ -25,7 +25,6 @@
 import org.apache.hadoop.hbase.client.HConnection;
 import org.apache.hadoop.hbase.client.HConnectionManager;
 import org.apache.hadoop.hbase.client.HTable;
-import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.io.BatchUpdate;
 import org.apache.hadoop.hbase.master.HMaster;
 import org.apache.hadoop.hbase.regionserver.HRegionServer;
@@ -142,9 +141,9 @@
       admin.createTable(desc);
   
       HTable table = new HTable("test");
-      Put put = new Put(Bytes.toBytes("testrow"));
-      put.add(Bytes.toBytes("fam"), Bytes.toBytes("col"), Bytes.toBytes("testdata"));
-      table.put(put);
+      BatchUpdate batchUpdate = new BatchUpdate("testrow");
+      batchUpdate.put("fam:col", Bytes.toBytes("testdata"));
+      table.commit(batchUpdate);
     } catch (Exception e) {
       e.printStackTrace();
       fail();
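
Taken together, the client-side substitution repeated across these test files is the one below; both forms appear verbatim in the hunks and are shown side by side here only for reference (table is the HTable opened by the test):

    // 0.20-style write removed by this commit
    Put put = new Put(Bytes.toBytes("testrow"));
    put.add(Bytes.toBytes("fam"), Bytes.toBytes("col"), Bytes.toBytes("testdata"));
    table.put(put);

    // 0.19-style write the trunk_on_hadoop-0.18.3 branch goes back to
    BatchUpdate batchUpdate = new BatchUpdate("testrow");
    batchUpdate.put("fam:col", Bytes.toBytes("testdata"));
    table.commit(batchUpdate);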


