hbase-commits mailing list archives

From apurt...@apache.org
Subject svn commit: r802915 - in /hadoop/hbase/branches/0.20_on_hadoop-0.18.3: ./ bin/ src/java/ src/java/org/apache/hadoop/hbase/ src/java/org/apache/hadoop/hbase/client/ src/java/org/apache/hadoop/hbase/mapred/ src/java/org/apache/hadoop/hbase/master/ src/ja...
Date Mon, 10 Aug 2009 19:49:06 GMT
Author: apurtell
Date: Mon Aug 10 19:49:05 2009
New Revision: 802915

URL: http://svn.apache.org/viewvc?rev=802915&view=rev
Log:
pull up to latest 0.20 branch

Modified:
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/CHANGES.txt
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/bin/HBase.rb
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/ClusterStatus.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/HColumnDescriptor.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/HTableDescriptor.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/client/HConnectionManager.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/client/ScannerCallable.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/mapred/TableMap.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/master/HMaster.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/master/RegionManager.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/regionserver/HLog.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/AbstractModel.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/RowModel.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/ScannerModel.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/TableModel.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/TimestampModel.java
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/overview.html
    hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/MapFilePerformanceEvaluation.java

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/CHANGES.txt?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/CHANGES.txt (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/CHANGES.txt Mon Aug 10 19:49:05 2009
@@ -299,13 +299,23 @@
    HBASE-1717  Put on client-side uses passed-in byte[]s rather than always
                using copies
    HBASE-1647  Filter#filterRow is called too often, filters rows it shouldn't
-               have (Doğacan Güney via Ryan Rawson and Stack)
+               have (Doğacan Güney via Ryan Rawson and Stack)
    HBASE-1718  Reuse of KeyValue during log replay could cause the wrong
                data to be used
    HBASE-1573  Holes in master state change; updated startcode and server
                go into .META. but catalog scanner just got old values (redux)
    HBASE-1534  Got ZooKeeper event, state: Disconnected on HRS and then NPE
                on reinit
+   HBASE-1725  Old TableMap interface's definitions are not generic enough
+               (Doğacan Güney via Stack)
+   HBASE-1732  Flag to disable regionserver restart
+   HBASE-1604  HBaseClient.getConnection() may return a broken connection
+               without throwing an exception (Eugene Kirpichov via Stack)
+   HBASE-1739  hbase-1683 broke splitting; only split three logs no matter
+               what N was
+   HBASE-1737  Regions unbalanced when adding new node
+   HBASE-1745  [tools] Tool to kick region out of inTransistion
+   HBASE-1757  REST server runs out of fds
 
   IMPROVEMENTS
    HBASE-1089  Add count of regions on filesystem to master UI; add percentage
@@ -537,6 +547,8 @@
    HBASE-1714  Thrift server: prefix scan API
    HBASE-1719  hold a reference to the region in stores instead of only the
                region info
+   HBASE-1743  [debug tool] Add regionsInTransition list to ClusterStatus
+               detailed output
 
   OPTIMIZATIONS
    HBASE-1412  Change values for delete column and column family in KeyValue

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/bin/HBase.rb
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/bin/HBase.rb?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/bin/HBase.rb (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/bin/HBase.rb Mon Aug 10 19:49:05 2009
@@ -270,6 +270,11 @@
       status = @admin.getClusterStatus()
       if format != nil and format == "detailed"
         puts("version %s" % [ status.getHBaseVersion() ])
+        # Put regions in transition first because usually empty
+        puts("%d regionsInTransition" % status.getRegionsInTransition().size())
+        for k, v in status.getRegionsInTransition()
+          puts("    %s" % [v])
+        end
         puts("%d live servers" % [ status.getServers() ])
         for server in status.getServerInfo()
           puts("    %s:%d %d" % \

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/ClusterStatus.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/ClusterStatus.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/ClusterStatus.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/ClusterStatus.java Mon Aug 10 19:49:05 2009
@@ -26,6 +26,10 @@
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.TreeMap;
 
 import org.apache.hadoop.io.VersionedWritable;
 
@@ -41,6 +45,7 @@
  * <li>The number of requests since last report.</li>
  * <li>Detailed region server loading and resource usage information,
  *  per server and per region.</li>
+ *  <li>Regions in transition at master</li>
  * </ul>
  */
 public class ClusterStatus extends VersionedWritable {
@@ -49,6 +54,7 @@
   private String hbaseVersion;
   private Collection<HServerInfo> liveServerInfo;
   private Collection<String> deadServers;
+  private NavigableMap<String, String> intransition;
 
   /**
    * Constructor, for Writable
@@ -191,6 +197,14 @@
     this.deadServers = deadServers;
   }
 
+  public Map<String, String> getRegionsInTransition() {
+    return this.intransition;
+  }
+
+  public void setRegionsInTransition(final NavigableMap<String, String> m) {
+    this.intransition = m;
+  }
+
   //
   // Writable
   //
@@ -206,6 +220,11 @@
     for (String server: deadServers) {
       out.writeUTF(server);
     }
+    out.writeInt(this.intransition.size());
+    for (Map.Entry<String, String> e: this.intransition.entrySet()) {
+      out.writeUTF(e.getKey());
+      out.writeUTF(e.getValue());
+    }
   }
 
   public void readFields(DataInput in) throws IOException {
@@ -223,5 +242,12 @@
     for (int i = 0; i < count; i++) {
       deadServers.add(in.readUTF());
     }
+    count = in.readInt();
+    this.intransition = new TreeMap<String, String>();
+    for (int i = 0; i < count; i++) {
+      String key = in.readUTF();
+      String value = in.readUTF();
+      this.intransition.put(key, value);
+    }
   }
-}
+}
\ No newline at end of file

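[Editor's note: the ClusterStatus hunk above serializes the new regions-in-transition map with Hadoop's Writable convention: write an int count, then a UTF key/value pair per entry, and mirror that order exactly in readFields. Below is a minimal, self-contained sketch of that symmetric write/read pattern using only java.io and java.util; the class and method names are illustrative, not HBase's.]

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class MapWireFormat {
  // Write: count first, then each key/value as modified-UTF strings,
  // mirroring the pattern added to ClusterStatus.write() above.
  static void writeMap(DataOutput out, Map<String, String> m) throws IOException {
    out.writeInt(m.size());
    for (Map.Entry<String, String> e : m.entrySet()) {
      out.writeUTF(e.getKey());
      out.writeUTF(e.getValue());
    }
  }

  // Read: must consume fields in exactly the order they were written.
  static NavigableMap<String, String> readMap(DataInput in) throws IOException {
    int count = in.readInt();
    NavigableMap<String, String> m = new TreeMap<String, String>();
    for (int i = 0; i < count; i++) {
      m.put(in.readUTF(), in.readUTF());
    }
    return m;
  }

  public static void main(String[] args) throws IOException {
    NavigableMap<String, String> rit = new TreeMap<String, String>();
    rit.put("region-a", "PENDING_OPEN");
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    writeMap(new DataOutputStream(bos), rit);
    NavigableMap<String, String> back =
        readMap(new DataInputStream(new ByteArrayInputStream(bos.toByteArray())));
    System.out.println(back.equals(rit));
  }
}
```

Because there is no self-describing framing on the wire, any asymmetry between write and read (a skipped field, a reordered pair) silently corrupts every field that follows, which is why the read loop above mirrors the write loop line for line.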
Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/HColumnDescriptor.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/HColumnDescriptor.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/HColumnDescriptor.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/HColumnDescriptor.java Mon Aug 10 19:49:05 2009
@@ -54,6 +54,7 @@
   // Time-to-live feature.  Version 4 was when we moved to byte arrays, HBASE-82.
   // Version 5 was when bloom filter descriptors were removed.
   // Version 6 adds metadata as a map where keys and values are byte[].
+  // Version 7 -- add new compression and hfile blocksize to HColumnDescriptor (HBASE-1217)
   private static final byte COLUMN_DESCRIPTOR_VERSION = (byte)7;
 
   /** 

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/HTableDescriptor.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/HTableDescriptor.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/HTableDescriptor.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/HTableDescriptor.java Mon Aug 10 19:49:05 2009
@@ -51,8 +51,8 @@
   // Changes prior to version 3 were not recorded here.
   // Version 3 adds metadata as a map where keys and values are byte[].
   // Version 4 adds indexes
-  // FIXME version 5 should remove indexes
-  public static final byte TABLE_DESCRIPTOR_VERSION = 4;
+  // Version 5 removed transactional pollution -- e.g. indexes
+  public static final byte TABLE_DESCRIPTOR_VERSION = 5;
 
   private byte [] name = HConstants.EMPTY_BYTE_ARRAY;
   private String nameAsString = "";

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/client/HConnectionManager.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/client/HConnectionManager.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/client/HConnectionManager.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/client/HConnectionManager.java Mon Aug 10 19:49:05 2009
@@ -133,7 +133,6 @@
     private final long pause;
     private final int numRetries;
     private final int maxRPCAttempts;
-    private final long rpcTimeout;
 
     private final Object masterLock = new Object();
     private volatile boolean closed;
@@ -184,8 +183,7 @@
       this.pause = conf.getLong("hbase.client.pause", 2 * 1000);
       this.numRetries = conf.getInt("hbase.client.retries.number", 10);
       this.maxRPCAttempts = conf.getInt("hbase.client.rpc.maxattempts", 1);
-      this.rpcTimeout = conf.getLong("hbase.regionserver.lease.period", 60000);
-      
+
       this.master = null;
       this.masterChecked = false;
     }

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/client/ScannerCallable.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/client/ScannerCallable.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/client/ScannerCallable.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/client/ScannerCallable.java Mon Aug 10 19:49:05 2009
@@ -23,8 +23,6 @@
 
 import java.io.IOException;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.NotServingRegionException;
@@ -37,7 +35,6 @@
  * Used by {@link ResultScanner}s made by {@link HTable}.
  */
 public class ScannerCallable extends ServerCallable<Result[]> {
-  private static final Log LOG = LogFactory.getLog(ScannerCallable.class);
   private long scannerId = -1L;
   private boolean instantiated = false;
   private boolean closed = false;
@@ -102,7 +99,6 @@
 	try {
 		this.server.close(this.scannerId);
 	} catch (IOException e) {
-		LOG.warn("Ignore, probably already closed", e);
 	}
 	this.scannerId = -1L;
   }

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/mapred/TableMap.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/mapred/TableMap.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/mapred/TableMap.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/mapred/TableMap.java Mon Aug 10 19:49:05 2009
@@ -33,7 +33,7 @@
  * @param <V> Writable value class
  */
 @Deprecated
-public interface TableMap<K extends WritableComparable<K>, V extends Writable>
+public interface TableMap<K extends WritableComparable<? super K>, V extends Writable>
 extends Mapper<ImmutableBytesWritable, RowResult, K, V> {
 
 }

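[Editor's note: the TableMap hunk above (HBASE-1725) widens the key bound from `WritableComparable<K>` to `WritableComparable<? super K>`. The strict bound rejects any key class that inherits its comparable interface from a superclass rather than implementing it against itself. The sketch below reproduces the effect with hypothetical stand-in types (`Cmp`, `BaseKey`, `SubKey`); none of these are HBase classes.]

```java
// Stand-in for WritableComparable (hypothetical, for illustration only).
interface Cmp<T> { int compareTo(T o); }

// A key type that compares against itself...
class BaseKey implements Cmp<BaseKey> {
  public int compareTo(BaseKey o) { return 0; }
}

// ...and a subclass: it inherits Cmp<BaseKey>, NOT Cmp<SubKey>.
class SubKey extends BaseKey { }

// Old-style bound: K must implement Cmp of exactly K.
interface StrictMap<K extends Cmp<K>> { }

// New-style bound, as in the hunk above: Cmp of K or any supertype of K.
interface RelaxedMap<K extends Cmp<? super K>> { }

public class BoundDemo {
  // Compiles: SubKey satisfies Cmp<? super SubKey> via its inherited Cmp<BaseKey>.
  static class Works implements RelaxedMap<SubKey> { }

  // Would NOT compile: SubKey is not a Cmp<SubKey>.
  // static class Fails implements StrictMap<SubKey> { }

  public static void main(String[] args) {
    new Works();  // the relaxed bound admits the subclass key
    System.out.println("ok");
  }
}
```

The `? super K` form is the standard "consumer" wildcard: a comparator of a supertype can always compare a subtype, so the relaxed bound accepts strictly more key classes without weakening type safety.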
Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/master/HMaster.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/master/HMaster.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/master/HMaster.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/master/HMaster.java Mon Aug 10 19:49:05 2009
@@ -1025,6 +1025,11 @@
         servername = 
           Bytes.toString(rr.getValue(CATALOG_FAMILY, SERVER_QUALIFIER));
       }
+      // Take region out of the intransistions in case it got stuck there doing
+      // an open or whatever.
+      this.regionManager.clearFromInTransition(regionname);
+      // If servername is still null, then none, exit.
+      if (servername == null) break;
       // Need to make up a HServerInfo 'servername' for that is how
       // items are keyed in regionmanager Maps.
       HServerAddress addr = new HServerAddress(servername);
@@ -1053,6 +1058,7 @@
     status.setHBaseVersion(VersionInfo.getVersion());
     status.setServerInfo(serverManager.serversToServerInfo.values());
     status.setDeadServers(serverManager.deadServers);
+    status.setRegionsInTransition(this.regionManager.getRegionsInTransition());
     return status;
   }
 

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/master/RegionManager.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/master/RegionManager.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/master/RegionManager.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/master/RegionManager.java Mon Aug 10 19:49:05 2009
@@ -20,21 +20,21 @@
 package org.apache.hadoop.hbase.master;
 
 import java.io.IOException;
-import java.util.concurrent.atomic.AtomicReference;
-import java.util.concurrent.atomic.AtomicInteger;
-import java.util.concurrent.locks.Lock;
-import java.util.concurrent.locks.ReentrantLock;
+import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
-import java.util.ArrayList;
 import java.util.Map;
 import java.util.NavigableMap;
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
-import java.util.Collections;
 import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -43,15 +43,15 @@
 import org.apache.hadoop.fs.PathFilter;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.HServerAddress;
 import org.apache.hadoop.hbase.HServerInfo;
 import org.apache.hadoop.hbase.HServerLoad;
-import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.RegionHistorian;
-import org.apache.hadoop.hbase.regionserver.HRegion;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.ipc.HRegionInterface;
-import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.regionserver.HRegion;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.hbase.util.Threads;
@@ -1422,6 +1422,40 @@
     }
   }
 
+  /**
+   * @param regionname Name to clear from regions in transistion.
+   * @return True if we removed an element for the passed regionname.
+   */
+  boolean clearFromInTransition(final byte [] regionname) {
+    boolean result = false;
+    synchronized (this.regionsInTransition) {
+      if (this.regionsInTransition.isEmpty()) return result;
+      for (Map.Entry<String, RegionState> e: this.regionsInTransition.entrySet()) {
+        if (Bytes.equals(regionname, e.getValue().getRegionName())) {
+          this.regionsInTransition.remove(e.getKey());
+          LOG.debug("Removed " + e.getKey() + ", " + e.getValue());
+          result = true;
+          break;
+        }
+      }
+    }
+    return result;
+  }
+
+  /**
+   * @return Snapshot of regionsintransition as a sorted Map.
+   */
+  NavigableMap<String, String> getRegionsInTransition() {
+    NavigableMap<String, String> result = new TreeMap<String, String>();
+    synchronized (this.regionsInTransition) {
+      if (this.regionsInTransition.isEmpty()) return result;
+      for (Map.Entry<String, RegionState> e: this.regionsInTransition.entrySet()) {
+        result.put(e.getKey(), e.getValue().toString());
+      }
+    }
+    return result;
+  }
+
   /*
    * State of a Region as it transitions from closed to open, etc.  See
    * note on regionsInTransition data member above for listing of state

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/regionserver/HLog.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/regionserver/HLog.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/regionserver/HLog.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/regionserver/HLog.java Mon Aug 10 19:49:05 2009
@@ -860,7 +860,7 @@
         // Stop at logfiles.length when it's the last step
         int endIndex = step == maxSteps - 1? logfiles.length: 
           step * concurrentLogReads + concurrentLogReads;
-        for (int i = (step * 10); i < endIndex; i++) {
+        for (int i = (step * concurrentLogReads); i < endIndex; i++) {
           // Check for possibly empty file. With appends, currently Hadoop 
           // reports a zero length even if the file has been sync'd. Revisit if
           // HADOOP-4751 is committed.

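[Editor's note: the one-line HLog fix above (part of HBASE-1739) replaces a hardcoded `10` with `concurrentLogReads` as the loop's start index. With the literal, any configured batch size other than 10 either skipped log files (batch > 10) or re-read them (batch < 10) during split. The sketch below is a minimal, hypothetical model of that chunked iteration, showing that with a consistent chunk size every index is visited exactly once; names are illustrative, not HBase's.]

```java
public class ChunkBounds {
  // Partition `total` items into batches of `chunk`, clamping the last
  // batch, as the split loop in the hunk above does. Start must be
  // step * chunk -- a hardcoded start stride (the old "10") breaks
  // coverage whenever chunk != 10.
  static int[][] bounds(int total, int chunk) {
    int maxSteps = (total + chunk - 1) / chunk;  // ceiling division
    int[][] out = new int[maxSteps][2];
    for (int step = 0; step < maxSteps; step++) {
      int start = step * chunk;
      int end = (step == maxSteps - 1) ? total : start + chunk;
      out[step] = new int[] { start, end };
    }
    return out;
  }

  public static void main(String[] args) {
    // 25 logs in batches of 7: chunks [0,7) [7,14) [14,21) [21,25).
    int covered = 0;
    for (int[] b : bounds(25, 7)) {
      covered += b[1] - b[0];
    }
    System.out.println(covered);  // every log visited exactly once
  }
}
```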
Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java Mon Aug 10 19:49:05 2009
@@ -351,7 +351,7 @@
     EventType type = event.getType();
     KeeperState state = event.getState();
     LOG.info("Got ZooKeeper event, state: " + state + ", type: " +
-              type + ", path: " + event.getPath());
+      type + ", path: " + event.getPath());
 
     // Ignore events if we're shutting down.
     if (stopRequested.get()) {
@@ -361,7 +361,13 @@
 
     if (state == KeeperState.Expired) {
       LOG.error("ZooKeeper session expired");
-      restart();
+      boolean restart =
+        this.conf.getBoolean("hbase.regionserver.restart.on.zk.expire", false);
+      if (restart) {
+        restart();
+      } else {
+        abort();
+      }
     } else if (type == EventType.NodeDeleted) {
       watchMasterAddress();
     } else if (type == EventType.NodeCreated) {

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/AbstractModel.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/AbstractModel.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/AbstractModel.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/AbstractModel.java Mon Aug 10 19:49:05 2009
@@ -69,7 +69,7 @@
 
   protected byte[][] getColumns(byte[] tableName) throws HBaseRestException {
     try {
-      HTable h = new HTable(tableName);
+      HTable h = new HTable(this.conf, tableName);
       Collection<HColumnDescriptor> columns = h.getTableDescriptor()
           .getFamilies();
       byte[][] resultant = new byte[columns.size()][];
@@ -93,7 +93,6 @@
         return true;
       }
     }
-
     return false;
   }
-}
+}
\ No newline at end of file

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/RowModel.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/RowModel.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/RowModel.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/RowModel.java Mon Aug 10 19:49:05 2009
@@ -55,7 +55,7 @@
   public Result get(byte[] tableName, Get get)
   throws HBaseRestException {
     try {
-      HTable table = new HTable(tableName);
+      HTable table = new HTable(this.conf, tableName);
       return table.get(get);
     } catch (IOException e) {
       throw new HBaseRestException(e);
@@ -112,7 +112,7 @@
 
   public void post(byte[] tableName, Put put) throws HBaseRestException {
     try {
-      HTable table = new HTable(tableName);
+      HTable table = new HTable(this.conf, tableName);
       table.put(put);
     } catch (IOException e) {
       throw new HBaseRestException(e);
@@ -122,7 +122,7 @@
   public void post(byte[] tableName, List<Put> puts)
       throws HBaseRestException {
     try {
-      HTable table = new HTable(tableName);
+      HTable table = new HTable(this.conf, tableName);
       table.put(puts);
     } catch (IOException e) {
       throw new HBaseRestException(e);
@@ -150,7 +150,7 @@
   public void delete(byte[] tableName, Delete delete)
   throws HBaseRestException {
     try {
-      HTable table = new HTable(tableName);
+      HTable table = new HTable(this.conf, tableName);
       table.delete(delete);
     } catch (IOException e) {
       throw new HBaseRestException(e);

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/ScannerModel.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/ScannerModel.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/ScannerModel.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/ScannerModel.java Mon Aug 10 19:49:05 2009
@@ -208,7 +208,7 @@
       long timestamp) throws HBaseRestException {
     try {
       HTable table;
-      table = new HTable(tableName);
+      table = new HTable(this.conf, tableName);
       Scan scan = new Scan();
       scan.addColumns(columns);
       scan.setTimeRange(0, timestamp);
@@ -228,7 +228,7 @@
       byte[] startRow, long timestamp) throws HBaseRestException {
     try {
       HTable table;
-      table = new HTable(tableName);
+      table = new HTable(this.conf, tableName);
       Scan scan = new Scan(startRow);
       scan.addColumns(columns);
       scan.setTimeRange(0, timestamp);
@@ -249,7 +249,7 @@
       long timestamp, RowFilterInterface filter) throws HBaseRestException {
     try {
       HTable table;
-      table = new HTable(tableName);
+      table = new HTable(this.conf, tableName);
       Scan scan = new Scan();
       scan.addColumns(columns);
       scan.setTimeRange(0, timestamp);
@@ -271,7 +271,7 @@
       throws HBaseRestException {
     try {
       HTable table;
-      table = new HTable(tableName);
+      table = new HTable(this.conf, tableName);
       Scan scan = new Scan(startRow);
       scan.addColumns(columns);
       scan.setTimeRange(0, timestamp);

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/TableModel.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/TableModel.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/TableModel.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/TableModel.java Mon Aug 10 19:49:05 2009
@@ -68,7 +68,7 @@
       throws HBaseRestException {
     try {
       ArrayList<Result> a = new ArrayList<Result>();
-      HTable table = new HTable(tableName);
+      HTable table = new HTable(this.conf, tableName);
 
       Scan scan = new Scan();
       scan.addColumns(columnNames);

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/TimestampModel.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/TimestampModel.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/TimestampModel.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/org/apache/hadoop/hbase/rest/TimestampModel.java Mon Aug 10 19:49:05 2009
@@ -50,7 +50,7 @@
   public void delete(byte [] tableName, Delete delete)
   throws HBaseRestException {
     try {
-      HTable table = new HTable(tableName);
+      HTable table = new HTable(this.conf, tableName);
       table.delete(delete);
     } catch (IOException e) {
       throw new HBaseRestException(e);
@@ -78,7 +78,7 @@
   public Result get(final byte [] tableName, final Get get)
   throws HBaseRestException {
     try {
-      HTable table = new HTable(tableName);
+      HTable table = new HTable(this.conf, tableName);
       return table.get(get);
     } catch (IOException e) {
       throw new HBaseRestException(e);
@@ -140,7 +140,7 @@
   public void post(byte[] tableName, byte[] rowName, byte[] columnName,
       long timestamp, byte[] value) throws HBaseRestException {
     try {
-      HTable table = new HTable(tableName);
+      HTable table = new HTable(this.conf, tableName);
       Put put = new Put(rowName);
       put.setTimeStamp(timestamp);
       byte [][] famAndQf = KeyValue.parseColumn(columnName);

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/overview.html
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/overview.html?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/overview.html (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/java/overview.html Mon Aug 10 19:49:05 2009
@@ -285,7 +285,7 @@
 see this configuration unless you do one of the following:
 <ul>
     <li>Add a pointer to your <code>HADOOP_CONF_DIR</code> to <code>CLASSPATH</code> in <code>hbase-env.sh</code></li>
-    <li>Add a copy of <code>hadoop-site.xml</code> to <code>${HBASE_HOME}/conf</code>, or</li>
+    <li>Add a copy of <code>hdfs-site.xml</code> (or <code>hadoop-site.xml</code>) to <code>${HBASE_HOME}/conf</code>, or</li>
     <li>If only a small set of HDFS client configurations, add them to <code>hbase-site.xml</code></li>
 </ul>
 An example of such an HDFS client configuration is <code>dfs.replication</code>. If for example,

Modified: hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/MapFilePerformanceEvaluation.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/MapFilePerformanceEvaluation.java?rev=802915&r1=802914&r2=802915&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/MapFilePerformanceEvaluation.java (original)
+++ hadoop/hbase/branches/0.20_on_hadoop-0.18.3/src/test/org/apache/hadoop/hbase/MapFilePerformanceEvaluation.java Mon Aug 10 19:49:05 2009
@@ -30,8 +30,8 @@
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.io.MapFile;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.io.WritableComparable;
 
 /**


