hbase-commits mailing list archives

From e...@apache.org
Subject hbase git commit: Updated documentation
Date Sun, 15 Feb 2015 03:29:57 GMT
Repository: hbase
Updated Branches:
  refs/heads/branch-1.0 ecc7fb8ed -> 5fa72e037


Updated documentation


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5fa72e03
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5fa72e03
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5fa72e03

Branch: refs/heads/branch-1.0
Commit: 5fa72e037578b73adb6c1dd9fd40ef0abb31a7c8
Parents: ecc7fb8
Author: Enis Soztutar <enis@apache.org>
Authored: Sat Feb 14 19:29:35 2015 -0800
Committer: Enis Soztutar <enis@apache.org>
Committed: Sat Feb 14 19:29:35 2015 -0800

----------------------------------------------------------------------
 src/main/asciidoc/_chapters/architecture.adoc   | 67 +++++++++++------
 src/main/asciidoc/_chapters/configuration.adoc  |  4 +-
 src/main/asciidoc/_chapters/datamodel.adoc      |  2 +-
 src/main/asciidoc/_chapters/hbase_apis.adoc     | 11 ++-
 src/main/asciidoc/_chapters/ops_mgt.adoc        |  8 +--
 src/main/asciidoc/_chapters/performance.adoc    | 37 ++++++----
 src/main/asciidoc/_chapters/schema_design.adoc  | 12 ++--
 src/main/asciidoc/_chapters/security.adoc       | 75 ++++++++++++++------
 src/main/asciidoc/_chapters/tracing.adoc        |  7 +-
 .../asciidoc/_chapters/troubleshooting.adoc     |  1 +
 src/main/asciidoc/_chapters/unit_testing.adoc   | 14 ++--
 11 files changed, 156 insertions(+), 82 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/5fa72e03/src/main/asciidoc/_chapters/architecture.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
index cd9a4a9..bae4a23 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -202,24 +202,24 @@ HBaseConfiguration conf2 = HBaseConfiguration.create();
 HTable table2 = new HTable(conf2, "myTable");
 ----
 
-For more information about how connections are handled in the HBase client, see link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HConnectionManager.html[HConnectionManager].
+For more information about how connections are handled in the HBase client, see link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html[ConnectionFactory].
 
 [[client.connection.pooling]]
 ===== Connection Pooling
 
-For applications which require high-end multithreaded access (e.g., web-servers or application
servers that may serve many application threads in a single JVM), you can pre-create an `HConnection`,
as shown in the following example:
+For applications which require high-end multithreaded access (e.g., web-servers or application
servers that may serve many application threads in a single JVM), you can pre-create a `Connection`,
as shown in the following example:
 
-.Pre-Creating a `HConnection`
+.Pre-Creating a `Connection`
 ====
 [source,java]
 ----
 // Create a connection to the cluster.
-HConnection connection = HConnectionManager.createConnection(Configuration);
-HTableInterface table = connection.getTable("myTable");
-// use table as needed, the table returned is lightweight
-table.close();
-// use the connection for other access to the cluster
-connection.close();
+Configuration conf = HBaseConfiguration.create();
+try (Connection connection = ConnectionFactory.createConnection(conf)) {
+  try (Table table = connection.getTable(TableName.valueOf(tablename))) {
+    // use table as needed, the table returned is lightweight
+  }
+}
 ----
 ====
 
@@ -228,22 +228,20 @@ Constructing HTableInterface implementation is very lightweight and
resources ar
 .`HTablePool` is Deprecated
 [WARNING]
 ====
-Previous versions of this guide discussed `HTablePool`, which was deprecated in HBase 0.94,
0.95, and 0.96, and removed in 0.98.1, by link:https://issues.apache.org/jira/browse/HBASE-6580[HBASE-6500].
-Please use link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HConnection.html[HConnection]
instead.
+Previous versions of this guide discussed `HTablePool`, which was deprecated in HBase 0.94,
0.95, and 0.96, and removed in 0.98.1, by link:https://issues.apache.org/jira/browse/HBASE-6580[HBASE-6580],
and `HConnection`, which is deprecated in HBase 1.0 in favor of `Connection`.
+Please use link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Connection.html[Connection]
instead.
 ====
 
 [[client.writebuffer]]
 === WriteBuffer and Batch Methods
 
-If <<perf.hbase.client.autoflush>> is turned off on link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html[HTable],
``Put``s are sent to RegionServers when the writebuffer is filled.
-The writebuffer is 2MB by default.
-Before an (H)Table instance is discarded, either `close()` or `flushCommits()` should be
invoked so Puts will not be lost.
+In HBase 1.0 and later, link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html[HTable]
is deprecated in favor of link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table].
`Table` does not use autoflush. To do buffered writes, use the `BufferedMutator` class.
 
-NOTE: `htable.delete(Delete);` does not go in the writebuffer! This only applies to Puts.
+Before a `Table` or `HTable` instance is discarded, invoke `close()` (or, on `HTable`,
`flushCommits()`), so buffered `Put`s will not be lost.
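The buffered-write path described above can be sketched as follows (a sketch against the HBase 1.0 client API; the table, family, and column names are hypothetical examples):

```java
// Sketch: buffered writes via BufferedMutator (HBase 1.0+ client API).
// Table and column names here are hypothetical.
Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf);
     BufferedMutator mutator = connection.getBufferedMutator(TableName.valueOf("myTable"))) {
  Put put = new Put(Bytes.toBytes("row1"));
  put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr"), Bytes.toBytes("value"));
  mutator.mutate(put);  // buffered locally; sent when the buffer fills
  mutator.flush();      // or flush explicitly before relying on the write
} // close() flushes any remaining buffered mutations
```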
 
 For additional information on write durability, review the link:../acid-semantics.html[ACID
semantics] page.
 
-For fine-grained control of batching of ``Put``s or ``Delete``s, see the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch%28java.util.List%29[batch]
methods on HTable.
+For fine-grained control of batching of ``Put``s or ``Delete``s, see the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#batch%28java.util.List%29[batch]
methods on Table.
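As a sketch of the batch methods (hypothetical row keys and column names; `table` is assumed to be an open `Table` from a `Connection`), heterogeneous operations can be submitted in one call:

```java
// Sketch: fine-grained batching of mixed operations via Table.batch().
// Row keys and column names are hypothetical.
List<Row> actions = new ArrayList<>();
Put put = new Put(Bytes.toBytes("row1"));
put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr"), Bytes.toBytes("v"));
actions.add(put);
actions.add(new Delete(Bytes.toBytes("row2")));
Object[] results = new Object[actions.size()];
table.batch(actions, results);  // throws IOException, InterruptedException
```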
 
 [[client.external]]
 === External Clients
@@ -523,7 +521,7 @@ The methods exposed by `HMasterInterface` are primarily metadata-oriented
method
 
 * Table (createTable, modifyTable, removeTable, enable, disable)
 * ColumnFamily (addColumn, modifyColumn, removeColumn)
-* Region (move, assign, unassign) For example, when the `HBaseAdmin` method `disableTable`
is invoked, it is serviced by the Master server.
+* Region (move, assign, unassign) For example, when the `Admin` method `disableTable` is
invoked, it is serviced by the Master server.
 
 [[master.processes]]
 === Processes
@@ -557,7 +555,7 @@ In a distributed cluster, a RegionServer runs on a <<arch.hdfs.dn>>.
 The methods exposed by `HRegionRegionInterface` contain both data-oriented and region-maintenance
methods:
 
 * Data (get, put, delete, next, etc.)
-* Region (splitRegion, compactRegion, etc.) For example, when the `HBaseAdmin` method `majorCompact`
is invoked on a table, the client is actually iterating through all regions for the specified
table and requesting a major compaction directly to each region.
+* Region (splitRegion, compactRegion, etc.) For example, when the `Admin` method `majorCompact`
is invoked on a table, the client is actually iterating through all regions for the specified
table and requesting a major compaction directly to each region.
 
 [[regionserver.arch.processes]]
 === Processes
@@ -1331,6 +1329,35 @@ The RegionServer splits a region, offlines the split region and then
adds the da
 See <<disable.splitting>> for how to manually manage splits (and for why you
might do this).
 
 ==== Custom Split Policies
+You can override the default split policy using a custom link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.html[RegionSplitPolicy] (HBase
0.94+). Typically a custom split policy should extend
+HBase's default split policy: link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/regionserver/IncreasingToUpperBoundRegionSplitPolicy.html[IncreasingToUpperBoundRegionSplitPolicy].
+
+The policy can be set globally through the HBase configuration or on a per-table
+basis.
+
+.Configuring the Split Policy Globally in _hbase-site.xml_
+[source,xml]
+----
+<property>
+  <name>hbase.regionserver.region.split.policy</name>
+  <value>org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy</value>
+</property>
+----
+
+.Configuring a Split Policy On a Table Using the Java API
+[source,java]
+----
+HTableDescriptor tableDesc = new HTableDescriptor("test");
+tableDesc.setValue(HTableDescriptor.SPLIT_POLICY, ConstantSizeRegionSplitPolicy.class.getName());
+tableDesc.addFamily(new HColumnDescriptor(Bytes.toBytes("cf1")));
+admin.createTable(tableDesc);
+----
+
+.Configuring the Split Policy On a Table Using HBase Shell
+[source]
+----
+hbase> create 'test', {METHOD => 'table_att', CONFIG => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}},
+{NAME => 'cf1'}
+----
 
 The default split policy can be overwritten using a custom link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.html[RegionSplitPolicy(HBase
0.94+)]. Typically a custom split policy should extend HBase's default split policy: link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.html[ConstantSizeRegionSplitPolicy].
 
@@ -2281,7 +2308,7 @@ Ensure to set the following for all clients (and servers) that will
use region r
   <name>hbase.client.primaryCallTimeout.multiget</name>
   <value>10000</value>
   <description>
-      The timeout (in microseconds), before secondary fallback RPC’s are submitted for
multi-get requests (HTable.get(List<Get>)) with Consistency.TIMELINE to the secondary
replicas of the regions. Defaults to 10ms. Setting this lower will increase the number of
RPC’s, but will lower the p99 latencies.
+      The timeout (in microseconds), before secondary fallback RPC’s are submitted for
multi-get requests (Table.get(List<Get>)) with Consistency.TIMELINE to the secondary
replicas of the regions. Defaults to 10ms. Setting this lower will increase the number of
RPC’s, but will lower the p99 latencies.
   </description>
 </property>
 <property>
@@ -2317,7 +2344,7 @@ flush 't1'
 
 [source,java]
 ----
-HTableDescriptor htd = new HTableDesctiptor(TableName.valueOf(“test_table”));
+HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("test_table"));
 htd.setRegionReplication(2);
 ...
 admin.createTable(htd);
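Once a table has region replicas, a timeline-consistent read can be sketched as below (hypothetical table and row key; `Result.isStale()` reports whether the answer came from a secondary replica):

```java
// Sketch: a timeline-consistent read against region replicas (HBase 1.0+).
Get get = new Get(Bytes.toBytes("row1"));
get.setConsistency(Consistency.TIMELINE);  // allow secondary replicas to answer
Result result = table.get(get);
if (result.isStale()) {
  // the value came from a secondary replica and may lag the primary
}
```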

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fa72e03/src/main/asciidoc/_chapters/configuration.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/configuration.adoc b/src/main/asciidoc/_chapters/configuration.adoc
index 4a9835a..6f8858d 100644
--- a/src/main/asciidoc/_chapters/configuration.adoc
+++ b/src/main/asciidoc/_chapters/configuration.adoc
@@ -626,7 +626,7 @@ Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", "localhost");  // Here we are running zookeeper locally
 ----
 
-If multiple ZooKeeper instances make up your ZooKeeper ensemble, they may be specified in
a comma-separated list (just as in the _hbase-site.xml_ file). This populated `Configuration`
instance can then be passed to an link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html[HTable],
and so on.
+If multiple ZooKeeper instances make up your ZooKeeper ensemble, they may be specified in
a comma-separated list (just as in the _hbase-site.xml_ file). This populated `Configuration`
instance can then be passed to an link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table],
and so on.
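For example, a three-node ensemble might be specified as sketched below (the hostnames and table name are hypothetical):

```java
// Sketch: specifying a multi-node ZooKeeper ensemble programmatically,
// then using the resulting Configuration to open a connection (HBase 1.0+ API).
Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com");
try (Connection connection = ConnectionFactory.createConnection(config);
     Table table = connection.getTable(TableName.valueOf("myTable"))) {
  // use the table
}
```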
 
 [[example_config]]
 == Example Configurations
@@ -867,7 +867,7 @@ See the entry for `hbase.hregion.majorcompaction` in the <<compaction.parameters
 ====
 Major compactions are absolutely necessary for StoreFile clean-up.
 Do not disable them altogether.
-You can run major compactions manually via the HBase shell or via the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html#majorCompact%28java.lang.String%29[HBaseAdmin
API].
+You can run major compactions manually via the HBase shell or via the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact(org.apache.hadoop.hbase.TableName)[Admin
API].
 ====
 
 For more information about compactions and the compaction file selection process, see <<compaction,compaction>>

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fa72e03/src/main/asciidoc/_chapters/datamodel.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/datamodel.adoc b/src/main/asciidoc/_chapters/datamodel.adoc
index 91e6be8..74238ca 100644
--- a/src/main/asciidoc/_chapters/datamodel.adoc
+++ b/src/main/asciidoc/_chapters/datamodel.adoc
@@ -316,7 +316,7 @@ Note that generally the easiest way to specify a specific stop point for
a scan
 === Delete
 
 link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Delete.html[Delete] removes
a row from a table.
-Deletes are executed via link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete(org.apache.hadoop.hbase.client.Delete)[HTable.delete].
+Deletes are executed via link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete(org.apache.hadoop.hbase.client.Delete)[Table.delete].
 
 HBase does not modify data in place, and so deletes are handled by creating new markers called
_tombstones_.
 These tombstones, along with the dead values, are cleaned up on major compactions.

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fa72e03/src/main/asciidoc/_chapters/hbase_apis.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbase_apis.adoc b/src/main/asciidoc/_chapters/hbase_apis.adoc
index 7fe0d3e..d73de61 100644
--- a/src/main/asciidoc/_chapters/hbase_apis.adoc
+++ b/src/main/asciidoc/_chapters/hbase_apis.adoc
@@ -38,11 +38,9 @@ See <<external_apis>> for more information.
 
 .Create a Table Using Java
 ====
-This example has been tested on HBase 0.96.1.1.
 
 [source,java]
 ----
-
 package com.example.hbase.admin;
 
 import java.io.IOException;
@@ -51,7 +49,7 @@ import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
 import org.apache.hadoop.conf.Configuration;
 
@@ -59,7 +57,7 @@ import static com.example.hbase.Constants.*;
 
 public class CreateSchema {
 
-  public static void createOrOverwrite(HBaseAdmin admin, HTableDescriptor table) throws IOException
{
+  public static void createOrOverwrite(Admin admin, HTableDescriptor table) throws IOException
{
     if (admin.tableExists(table.getName())) {
       admin.disableTable(table.getName());
       admin.deleteTable(table.getName());
@@ -69,7 +67,7 @@ public class CreateSchema {
 
   public static void createSchemaTables (Configuration config) {
     try {
-      final HBaseAdmin admin = new HBaseAdmin(config);
+      final Connection connection = ConnectionFactory.createConnection(config);
+      final Admin admin = connection.getAdmin();
       HTableDescriptor table = new HTableDescriptor(TableName.valueOf(TABLE_NAME));
       table.addFamily(new HColumnDescriptor(CF_DEFAULT).setCompressionType(Algorithm.SNAPPY));
 
@@ -90,14 +88,13 @@ public class CreateSchema {
 
 .Add, Modify, and Delete a Table
 ====
-This example has been tested on HBase 0.96.1.1.
 
 [source,java]
 ----
 public static void upgradeFrom0 (Configuration config) {
 
   try {
-    final HBaseAdmin admin = new HBaseAdmin(config);
+    final Connection connection = ConnectionFactory.createConnection(config);
+    final Admin admin = connection.getAdmin();
     TableName tableName = TableName.valueOf(TABLE_ASSETMETA);
     HTableDescriptor table_assetmeta = new HTableDescriptor(tableName);
     table_assetmeta.addFamily(new HColumnDescriptor(CF_DEFAULT).setCompressionType(Algorithm.SNAPPY));

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fa72e03/src/main/asciidoc/_chapters/ops_mgt.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/ops_mgt.adoc b/src/main/asciidoc/_chapters/ops_mgt.adoc
index 852e76b..1402f52 100644
--- a/src/main/asciidoc/_chapters/ops_mgt.adoc
+++ b/src/main/asciidoc/_chapters/ops_mgt.adoc
@@ -106,7 +106,7 @@ private static final int ERROR_EXIT_CODE = 4;
 ----
 
 Here are some examples based on the following given case.
-There are two HTable called test-01 and test-02, they have two column family cf1 and cf2
respectively, and deployed on the 3 RegionServers.
+There are two tables, test-01 and test-02, each with two column families, cf1 and
cf2, deployed across the 3 RegionServers;
 see following table.
 
 [cols="1,1,1", options="header"]
@@ -665,7 +665,7 @@ The LoadTestTool has received many updates in recent HBase releases, including
s
 [[ops.regionmgt.majorcompact]]
 === Major Compaction
 
-Major compactions can be requested via the HBase shell or link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html#majorCompact%28java.lang.String%29[HBaseAdmin.majorCompact].
+Major compactions can be requested via the HBase shell or link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact%28java.lang.String%29[Admin.majorCompact].
 
 Note: major compactions do NOT do region merges.
 See <<compaction,compaction>> for more information about compactions.
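A request through the API can be sketched as follows (assuming an `Admin` obtained from a `Connection`; the table name is hypothetical):

```java
// Sketch: requesting a major compaction through the Admin API (HBase 1.0+).
// The request is asynchronous; the server schedules the compaction.
try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
     Admin admin = connection.getAdmin()) {
  admin.majorCompact(TableName.valueOf("myTable"));
}
```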
@@ -1352,7 +1352,7 @@ A single WAL edit goes through several steps in order to be replicated
to a slav
 . The edit is tagged with the master's UUID and added to a buffer.
   When the buffer is filled, or the reader reaches the end of the file, the buffer is sent
to a random region server on the slave cluster.
 . The region server reads the edits sequentially and separates them into buffers, one buffer
per table.
-  After all edits are read, each buffer is flushed using link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html[HTable],
HBase's normal client.
+  After all edits are read, each buffer is flushed using link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table],
HBase's normal client.
   The master's UUID and the UUIDs of slaves which have already consumed the data are preserved
in the edits they are applied, in order to prevent replication loops.
 . In the master, the offset for the WAL that is currently being replicated is registered
in ZooKeeper.
 
@@ -1994,7 +1994,7 @@ or in code it would be as follows:
 
 [source,java]
 ----
-void rename(HBaseAdmin admin, String oldTableName, String newTableName) {
+void rename(Admin admin, String oldTableName, String newTableName) {
   String snapshotName = randomName();
   admin.disableTable(oldTableName);
   admin.snapshot(snapshotName, oldTableName);

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fa72e03/src/main/asciidoc/_chapters/performance.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/performance.adoc b/src/main/asciidoc/_chapters/performance.adoc
index 36b3c70..2155d52 100644
--- a/src/main/asciidoc/_chapters/performance.adoc
+++ b/src/main/asciidoc/_chapters/performance.adoc
@@ -100,6 +100,17 @@ Using 10Gbe links between racks will greatly increase performance, and
assuming
 
 Are all the network interfaces functioning correctly? Are you sure? See the Troubleshooting
Case Study in <<casestudies.slownode>>.
 
+[[perf.network.call_me_maybe]]
+=== Network Consistency and Partition Tolerance
+The link:http://en.wikipedia.org/wiki/CAP_theorem[CAP Theorem] states that a distributed
system can maintain at most two of the following three characteristics:
+
+- *C*onsistency -- all nodes see the same data.
+- *A*vailability -- every request receives a response about whether it succeeded or failed.
+- *P*artition tolerance -- the system continues to operate even if some of its components
become unavailable to the others.
+
+Where a decision must be made, HBase favors consistency and partition tolerance. Coda Hale
explains why partition tolerance is so important in http://codahale.com/you-cant-sacrifice-partition-tolerance/.

+
+Robert Yokota used an automated testing framework called link:https://aphyr.com/tags/jepsen[Jepsen]
to test how HBase behaves in the face of network partitions, using techniques modeled
after Aphyr's link:https://aphyr.com/posts/281-call-me-maybe-carly-rae-jepsen-and-the-perils-of-network-partitions[Call
Me Maybe] series. The results, available as a link:http://eng.yammer.com/call-me-maybe-hbase/[blog
post] and an link:http://eng.yammer.com/call-me-maybe-hbase-addendum/[addendum], show that
HBase performs correctly.
+
 [[jvm]]
 == Java
 
@@ -439,7 +450,7 @@ When people get started with HBase they have a tendency to write code
that looks
 [source,java]
 ----
 Get get = new Get(rowkey);
-Result r = htable.get(get);
+Result r = table.get(get);
 byte[] b = r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("attr"));  // returns current version
of value
 ----
 
@@ -452,7 +463,7 @@ public static final byte[] CF = "cf".getBytes();
 public static final byte[] ATTR = "attr".getBytes();
 ...
 Get get = new Get(rowkey);
-Result r = htable.get(get);
+Result r = table.get(get);
 byte[] b = r.getValue(CF, ATTR);  // returns current version of value
 ----
 
@@ -475,7 +486,7 @@ A useful pattern to speed up the bulk import process is to pre-create
empty regi
 Be somewhat conservative in this, because too-many regions can actually degrade performance.
 
 There are two different approaches to pre-creating splits.
-The first approach is to rely on the default `HBaseAdmin` strategy (which is implemented
in `Bytes.split`)...
+The first approach is to rely on the default `Admin` strategy (which is implemented in `Bytes.split`)...
 
 [source,java]
 ----
@@ -511,12 +522,12 @@ The default value of `hbase.regionserver.optionallogflushinterval` is
1000ms.
 [[perf.hbase.client.autoflush]]
 === HBase Client: AutoFlush
 
-When performing a lot of Puts, make sure that setAutoFlush is set to false on your link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html[HTable]
instance.
+When performing a lot of Puts, make sure that setAutoFlush is set to false on your link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table]
instance.
 Otherwise, the Puts will be sent one at a time to the RegionServer.
-Puts added via `htable.add(Put)` and `htable.add( <List> Put)` wind up in the same
write buffer.
+Puts added via `table.put(Put)` and `table.put(List<Put>)` wind up in the same write
buffer.
 If `autoFlush = false`, these messages are not sent until the write-buffer is filled.
 To explicitly flush the messages, call `flushCommits`.
-Calling `close` on the `HTable` instance will invoke `flushCommits`.
+Calling `close` on the `Table` instance will invoke `flushCommits`.
 
 [[perf.hbase.client.putwal]]
 === HBase Client: Turn off WAL on Puts
@@ -553,7 +564,7 @@ If all your data is being written to one region at a time, then re-read
the sect
 
 Also, if you are pre-splitting regions and all your data is _still_ winding up in a single
region even though your keys aren't monotonically increasing, confirm that your keyspace actually
works with the split strategy.
 There are a variety of reasons that regions may appear "well split" but won't work with your
data.
-As the HBase client communicates directly with the RegionServers, this can be obtained via
link:hhttp://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#getRegionLocation(byte[])[HTable.getRegionLocation].
+As the HBase client communicates directly with the RegionServers, this can be obtained via
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#getRegionLocation(byte[])[Table.getRegionLocation].
 
 See <<precreate.regions>>, as well as <<perf.configurations>>
 
@@ -622,14 +633,14 @@ Always have ResultScanner processing enclosed in try/catch blocks.
 ----
 Scan scan = new Scan();
 // set attrs...
-ResultScanner rs = htable.getScanner(scan);
+ResultScanner rs = table.getScanner(scan);
try {
  for (Result r = rs.next(); r != null; r = rs.next()) {
    // process result...
  }
} finally {
  rs.close();  // always close the ResultScanner!
}
-htable.close();
+table.close();
 ----
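In HBase 1.0 and later, `ResultScanner` implements `Closeable` (and `Iterable<Result>`), so the pattern above can be written with try-with-resources, as a sketch (scan attributes elided):

```java
// Sketch: the try/finally scanner pattern expressed with try-with-resources.
Scan scan = new Scan();
// set attrs...
try (ResultScanner rs = table.getScanner(scan)) {
  for (Result r : rs) {  // ResultScanner is Iterable<Result>
    // process result...
  }
}  // scanner closed automatically, even on exception
```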
 
 [[perf.hbase.client.blockcache]]
@@ -761,16 +772,16 @@ In this case, special care must be taken to regularly perform major
compactions
 As is documented in <<datamodel>>, marking rows as deleted creates additional
StoreFiles which then need to be processed on reads.
 Tombstones only get cleaned up with major compactions.
 
-See also <<compaction>> and link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html#majorCompact%28java.lang.String%29[HBaseAdmin.majorCompact].
+See also <<compaction>> and link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact%28java.lang.String%29[Admin.majorCompact].
 
 [[perf.deleting.rpc]]
 === Delete RPC Behavior
 
-Be aware that `htable.delete(Delete)` doesn't use the writeBuffer.
+Be aware that `Table.delete(Delete)` doesn't use the writeBuffer.
It will execute a RegionServer RPC with each invocation.
-For a large number of deletes, consider `htable.delete(List)`.
+For a large number of deletes, consider `Table.delete(List)`.
 
-See http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#delete%28org.apache.hadoop.hbase.client.Delete%29
+See http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete%28org.apache.hadoop.hbase.client.Delete%29
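A batched delete can be sketched as below (hypothetical row keys; `table` is assumed to be an open `Table`); the client groups the deletes by region server, so far fewer RPCs are issued than with per-row deletes:

```java
// Sketch: batching deletes to avoid one RPC per Delete.
List<Delete> deletes = new ArrayList<>();
for (String row : Arrays.asList("row1", "row2", "row3")) {
  deletes.add(new Delete(Bytes.toBytes(row)));
}
table.delete(deletes);  // one batched call instead of three RPCs
```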
 
 [[perf.hdfs]]
 == HDFS

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fa72e03/src/main/asciidoc/_chapters/schema_design.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/schema_design.adoc b/src/main/asciidoc/_chapters/schema_design.adoc
index c930616..28f28a5 100644
--- a/src/main/asciidoc/_chapters/schema_design.adoc
+++ b/src/main/asciidoc/_chapters/schema_design.adoc
@@ -32,7 +32,7 @@ A good general introduction on the strength and weaknesses modelling on
the vari
 [[schema.creation]]
 ==  Schema Creation
 
-HBase schemas can be created or updated using the <<shell>> or by using link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html[HBaseAdmin]
in the Java API.
+HBase schemas can be created or updated using the <<shell>> or by using link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html[Admin]
in the Java API.
 
 Tables must be disabled when making ColumnFamily modifications, for example:
 
@@ -40,7 +40,7 @@ Tables must be disabled when making ColumnFamily modifications, for example:
 ----
 
 Configuration config = HBaseConfiguration.create();
-HBaseAdmin admin = new HBaseAdmin(conf);
+Connection connection = ConnectionFactory.createConnection(config);
+Admin admin = connection.getAdmin();
 String table = "myTable";
 
 admin.disableTable(table);
@@ -308,7 +308,7 @@ This is a fairly common question on the HBase dist-list so it pays to
get the ro
 === Relationship Between RowKeys and Region Splits
 
 If you pre-split your table, it is _critical_ to understand how your rowkey will be distributed
across the region boundaries.
-As an example of why this is important, consider the example of using displayable hex characters
as the lead position of the key (e.g., "0000000000000000" to "ffffffffffffffff"). Running
those key ranges through `Bytes.split` (which is the split strategy used when creating regions
in `HBaseAdmin.createTable(byte[] startKey, byte[] endKey, numRegions)` for 10 regions will
generate the following splits...
+As an example of why this is important, consider the example of using displayable hex characters
as the lead position of the key (e.g., "0000000000000000" to "ffffffffffffffff"). Running
those key ranges through `Bytes.split` (which is the split strategy used when creating regions
in `Admin.createTable(byte[] startKey, byte[] endKey, numRegions)` for 10 regions will generate
the following splits...
 
 ----
 
@@ -340,7 +340,7 @@ To conclude this example, the following is an example of how appropriate
splits
 
 [source,java]
 ----
-public static boolean createTable(HBaseAdmin admin, HTableDescriptor table, byte[][] splits)
+public static boolean createTable(Admin admin, HTableDescriptor table, byte[][] splits)
 throws IOException {
   try {
     admin.createTable( table, splits );
@@ -400,7 +400,7 @@ Take that into consideration when making your design, as well as block
size for
 
 === Counters
 
-One supported datatype that deserves special mention are "counters" (i.e., the ability to
do atomic increments of numbers). See link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#increment%28org.apache.hadoop.hbase.client.Increment%29[Increment]
in HTable.
+One supported datatype that deserves special mention is "counters" (i.e., the ability to
do atomic increments of numbers). See link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#increment%28org.apache.hadoop.hbase.client.Increment%29[Increment]
in `Table`.
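A counter update can be sketched as below (hypothetical table, family, and qualifier; the increment is applied atomically on the RegionServer and the new value is returned):

```java
// Sketch: atomic counter increment via the Table API.
long newValue = table.incrementColumnValue(
    Bytes.toBytes("row1"),   // row key
    Bytes.toBytes("cf"),     // column family
    Bytes.toBytes("hits"),   // qualifier holding the counter
    1L);                     // amount to add
```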
 
Synchronization on counters is done on the RegionServer, not in the client.
 
@@ -630,7 +630,7 @@ The rowkey of LOG_TYPES would be:
 * [type] (e.g., byte indicating hostname vs. event-type)
 * [bytes] variable length bytes for raw hostname or event-type.
 
-A column for this rowkey could be a long with an assigned number, which could be obtained
by using an link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#incrementColumnValue%28byte[],%20byte[],%20byte[],%20long%29[HBase
counter].
+A column for this rowkey could be a long with an assigned number, which could be obtained
by using an link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#incrementColumnValue%28byte[],%20byte[],%20byte[],%20long%29[HBase
counter].
 
 So the resulting composite rowkey would be:
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fa72e03/src/main/asciidoc/_chapters/security.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/security.adoc b/src/main/asciidoc/_chapters/security.adoc
index 21698fa..072f251 100644
--- a/src/main/asciidoc/_chapters/security.adoc
+++ b/src/main/asciidoc/_chapters/security.adoc
@@ -131,14 +131,19 @@ To do so, add the following to the `hbase-site.xml` file on every client:
 </property>
 ----
 
-This configuration property can also be set on a per connection basis.
-Set it in the `Configuration` supplied to `HTable`:
+This configuration property can also be set on a per-connection basis.
Set it in the `Configuration` used to create the `Connection`:
 
 [source,java]
 ----
 Configuration conf = HBaseConfiguration.create();
 conf.set("hbase.rpc.protection", "privacy");
-HTable table = new HTable(conf, tablename);
+try (Connection connection = ConnectionFactory.createConnection(conf)) {
+  try (Table table = connection.getTable(TableName.valueOf(tablename))) {
+    // ... do your stuff
+  }
+}
 ----
 
 Expect a ~10% performance penalty for encrypted communication.
@@ -265,8 +270,6 @@ Add the following to the `hbase-site.xml` file for every REST gateway:
 Substitute the appropriate credential and keytab for _$USER_ and _$KEYTAB_ respectively.
 
 The REST gateway will authenticate with HBase using the supplied credential.
-No authentication will be performed by the REST gateway itself.
-All client access via the REST gateway will use the REST gateway's credential and have its
privilege.
 
 In order to use the REST API principal to interact with HBase, it is also necessary to add
the `hbase.rest.kerberos.principal` to the `_acl_` table.
 For example, to give the REST API principal, `rest_server`, administrative access, a command
such as this one will suffice:
@@ -278,8 +281,30 @@ grant 'rest_server', 'RWCA'
 
 For more information about ACLs, please see the <<hbase.accesscontrol.configuration>>
section
 
-It should be possible for clients to authenticate with the HBase cluster through the REST
gateway in a pass-through manner via SPNEGO HTTP authentication.
-This is future work.
+HBase REST gateway supports link:http://hadoop.apache.org/docs/stable/hadoop-auth/index.html[SPNEGO
HTTP authentication] for client access to the gateway.
+To enable REST gateway Kerberos authentication for client access, add the following to the
`hbase-site.xml` file for every REST gateway.
+
+[source,xml]
+----
+<property>
+  <name>hbase.rest.authentication.type</name>
+  <value>kerberos</value>
+</property>
+<property>
+  <name>hbase.rest.authentication.kerberos.principal</name>
+  <value>HTTP/_HOST@HADOOP.LOCALDOMAIN</value>
+</property>
+<property>
+  <name>hbase.rest.authentication.kerberos.keytab</name>
+  <value>$KEYTAB</value>
+</property>
+----
+
+Substitute the keytab for the HTTP principal for _$KEYTAB_.
+
+The HBase REST gateway supports different values for `hbase.rest.authentication.type`: `simple` and `kerberos`.
+You can also implement custom authentication by implementing the Hadoop `AuthenticationHandler` interface,
then specifying the full class name as the `hbase.rest.authentication.type` value.
+For more information, refer to link:http://hadoop.apache.org/docs/stable/hadoop-auth/index.html[SPNEGO
HTTP authentication].
 
 [[security.rest.gateway]]
 === REST Gateway Impersonation Configuration
@@ -881,18 +906,24 @@ public static void grantOnTable(final HBaseTestingUtility util, final
String use
   SecureTestUtil.updateACLs(util, new Callable<Void>() {
     @Override
     public Void call() throws Exception {
-      HTable acl = new HTable(util.getConfiguration(), AccessControlLists.ACL_TABLE_NAME);
-      try {
+      try (Connection connection = ConnectionFactory.createConnection(util.getConfiguration());
+           Table acl = connection.getTable(AccessControlLists.ACL_TABLE_NAME)) {
         BlockingRpcChannel service = acl.coprocessorService(HConstants.EMPTY_START_ROW);
         AccessControlService.BlockingInterface protocol =
             AccessControlService.newBlockingStub(service);
         ProtobufUtil.grant(protocol, user, table, family, qualifier, actions);
-      } finally {
-        acl.close();
       }
       return null;
     }
   });
 }
 }
 ----
 
@@ -931,7 +962,9 @@ public static void revokeFromTable(final HBaseTestingUtility util, final
String
   SecureTestUtil.updateACLs(util, new Callable<Void>() {
     @Override
     public Void call() throws Exception {
-      HTable acl = new HTable(util.getConfiguration(), AccessControlLists.ACL_TABLE_NAME);
+      Connection connection = ConnectionFactory.createConnection(util.getConfiguration());
+      Table acl = connection.getTable(AccessControlLists.ACL_TABLE_NAME);
       try {
         BlockingRpcChannel service = acl.coprocessorService(HConstants.EMPTY_START_ROW);
         AccessControlService.BlockingInterface protocol =
@@ -1215,9 +1248,11 @@ The correct way to apply cell level labels is to do so in the application
code w
 ====
 [source,java]
 ----
-static HTable createTableAndWriteDataWithLabels(TableName tableName, String... labelExps)
+static Table createTableAndWriteDataWithLabels(TableName tableName, String... labelExps)
     throws Exception {
-  HTable table = null;
+  Table table = null;
   try {
     table = TEST_UTIL.createTable(tableName, fam);
     int i = 1;

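The hunks above replace direct `new HTable(conf, ...)` construction with a `Connection`/`Table` pair managed by try-with-resources. As a standalone illustration of the close ordering that pattern guarantees — using plain `AutoCloseable` stand-ins, not the real HBase classes — consider:

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrder {

    static final List<String> events = new ArrayList<>();

    // Stand-in for a closable resource; the real Connection and Table
    // also implement AutoCloseable, which is what try-with-resources needs.
    static AutoCloseable resource(String name) {
        events.add("open " + name);
        return () -> events.add("close " + name);
    }

    public static void main(String[] args) throws Exception {
        try (AutoCloseable connection = resource("connection");
             AutoCloseable table = resource("table")) {
            events.add("use table");
        }
        // Resources close in reverse declaration order: table first, then connection.
        System.out.println(events);
    }
}
```

Because resources are closed in reverse declaration order, the `Table` obtained from a `Connection` is always closed before the `Connection` it came from, which is the lifecycle the documentation examples above rely on.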
http://git-wip-us.apache.org/repos/asf/hbase/blob/5fa72e03/src/main/asciidoc/_chapters/tracing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/tracing.adoc b/src/main/asciidoc/_chapters/tracing.adoc
index 9a4a811..6bb8065 100644
--- a/src/main/asciidoc/_chapters/tracing.adoc
+++ b/src/main/asciidoc/_chapters/tracing.adoc
@@ -124,8 +124,9 @@ For example, if you wanted to trace all of your get operations, you change
this:
 
 [source,java]
 ----
-
-HTable table = new HTable(conf, "t1");
+Configuration config = HBaseConfiguration.create();
+Connection connection = ConnectionFactory.createConnection(config);
+Table table = connection.getTable(TableName.valueOf("t1"));
 Get get = new Get(Bytes.toBytes("r1"));
 Result res = table.get(get);
 ----
@@ -137,7 +138,7 @@ into:
 
 TraceScope ts = Trace.startSpan("Gets", Sampler.ALWAYS);
 try {
-  HTable table = new HTable(conf, "t1");
+  Table table = connection.getTable(TableName.valueOf("t1"));
   Get get = new Get(Bytes.toBytes("r1"));
   Result res = table.get(get);
 } finally {

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fa72e03/src/main/asciidoc/_chapters/troubleshooting.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc b/src/main/asciidoc/_chapters/troubleshooting.adoc
index 6d35f1d..1776c9e 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -627,6 +627,7 @@ This issue is caused by bugs in the MIT Kerberos replay_cache component,
link:ht
 These bugs caused the old version of krb5-server to erroneously block subsequent requests
sent from a Principal.
 This caused krb5-server to block the connections sent from one Client (one HTable instance
with multi-threading connection instances for each RegionServer); Messages, such as `Request
is a replay (34)`, are logged in the client log You can ignore the messages, because HTable
will retry 5 * 10 (50) times for each failed connection by default.
 HTable will throw IOException if any connection to the RegionServer fails after the retries,
so that the user client code for HTable instance can handle it further.
+NOTE: `HTable` is deprecated in HBase 1.0, in favor of `Table`.
 
 Alternatively, update krb5-server to a version which solves these issues, such as krb5-server-1.10.3.
 See JIRA link:https://issues.apache.org/jira/browse/HBASE-10379[HBASE-10379] for more details.

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fa72e03/src/main/asciidoc/_chapters/unit_testing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/unit_testing.adoc b/src/main/asciidoc/_chapters/unit_testing.adoc
index 1ffedf1..3f70001 100644
--- a/src/main/asciidoc/_chapters/unit_testing.adoc
+++ b/src/main/asciidoc/_chapters/unit_testing.adoc
@@ -42,7 +42,7 @@ This example will add unit tests to the following example class:
 
 public class MyHBaseDAO {
 
-    public static void insertRecord(HTableInterface table, HBaseTestObj obj)
+    public static void insertRecord(Table table, HBaseTestObj obj)
     throws Exception {
         Put put = createPut(obj);
         table.put(put);
@@ -129,17 +129,19 @@ Next, add a `@RunWith` annotation to your test class, to direct it to
use Mockit
 
 @RunWith(MockitoJUnitRunner.class)
 public class TestMyHBaseDAO{
   @Mock
-  private HTableInterface table;
+  private Table table;
   @Mock
-  private HTablePool hTablePool;
+  private Connection connection;
   @Captor
   private ArgumentCaptor putCaptor;
 
   @Test
   public void testInsertRecord() throws Exception {
     //return mock table when getTable is called
-    when(hTablePool.getTable("tablename")).thenReturn(table);
+    when(connection.getTable(TableName.valueOf("tablename"))).thenReturn(table);
     //create test object and make a call to the DAO that needs testing
     HBaseTestObj obj = new HBaseTestObj();
     obj.setRowKey("ROWKEY-1");
@@ -162,7 +164,7 @@ This code populates `HBaseTestObj` with ``ROWKEY-1'', ``DATA-1'', ``DATA-2''
as
 It then inserts the record into the mocked table.
 The Put that the DAO would have inserted is captured, and values are tested to verify that
they are what you expected them to be.
 
-The key here is to manage htable pool and htable instance creation outside the DAO.
+The key here is to manage Connection and Table instance creation outside the DAO.
 This allows you to mock them cleanly and test Puts as shown above.
 Similarly, you can now expand into other operations such as Get, Scan, or Delete.
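The testability point above — inject the `Table` into the DAO rather than creating it inside — can be sketched without Mockito or a live cluster. The `Put`, `Table`, `RecordingTable`, and `MyDao` types below are hypothetical stand-ins for the real HBase and DAO classes, kept minimal to show only the seam:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the HBase API, just to illustrate the DAO seam.
interface Put { String row(); }
interface Table { void put(Put p); }

// A hand-rolled fake that records every Put, playing the role a mock would.
class RecordingTable implements Table {
    final List<Put> puts = new ArrayList<>();
    public void put(Put p) { puts.add(p); }
}

class MyDao {
    // The Table is passed in, so tests can supply a fake instead of a live connection.
    static void insertRecord(Table table, String rowKey) {
        table.put(() -> rowKey);
    }
}

public class DaoSeamDemo {
    public static void main(String[] args) {
        RecordingTable table = new RecordingTable();
        MyDao.insertRecord(table, "ROWKEY-1");
        // Inspect the captured Put, as the ArgumentCaptor does in the Mockito test.
        System.out.println(table.puts.get(0).row());
    }
}
```

The Mockito version in the diff does the same thing with `@Mock` and `@Captor` instead of a hand-written fake; either way, the DAO never constructs its own `Connection`, which is what makes the Puts observable.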
 

