hbase-commits mailing list archives

From ndimi...@apache.org
Subject [2/3] hbase git commit: updating docs from master
Date Wed, 19 Apr 2017 03:43:28 GMT
updating docs from master


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b9061c55
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b9061c55
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b9061c55

Branch: refs/heads/branch-1.1
Commit: b9061c55fd36edba35c8e7f9de76c3bd40cf822d
Parents: 9d5c0db
Author: Nick Dimiduk <ndimiduk@apache.org>
Authored: Tue Apr 18 20:34:35 2017 -0700
Committer: Nick Dimiduk <ndimiduk@apache.org>
Committed: Tue Apr 18 20:34:35 2017 -0700

----------------------------------------------------------------------
 src/main/asciidoc/_chapters/architecture.adoc   |  22 +--
 src/main/asciidoc/_chapters/community.adoc      |   9 +-
 src/main/asciidoc/_chapters/configuration.adoc  | 161 ++++---------------
 src/main/asciidoc/_chapters/cp.adoc             | 105 +++++-------
 src/main/asciidoc/_chapters/developer.adoc      |  72 +++------
 src/main/asciidoc/_chapters/security.adoc       |  29 ++--
 src/main/asciidoc/_chapters/shell.adoc          |  13 ++
 src/main/asciidoc/_chapters/sql.adoc            |   2 +-
 .../asciidoc/_chapters/troubleshooting.adoc     |  14 +-
 src/main/asciidoc/_chapters/unit_testing.adoc   |  38 ++---
 src/main/asciidoc/_chapters/upgrading.adoc      |  15 +-
 11 files changed, 164 insertions(+), 316 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/b9061c55/src/main/asciidoc/_chapters/architecture.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
index e51cb14..7f9ba07 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -219,20 +219,17 @@ For applications which require high-end multithreaded access (e.g., web-servers
 ----
 // Create a connection to the cluster.
 Configuration conf = HBaseConfiguration.create();
-try (Connection connection = ConnectionFactory.createConnection(conf)) {
-  try (Table table = connection.getTable(TableName.valueOf(tablename)) {
-    // use table as needed, the table returned is lightweight
-  }
+try (Connection connection = ConnectionFactory.createConnection(conf);
+     Table table = connection.getTable(TableName.valueOf(tablename))) {
+  // use table as needed, the table returned is lightweight
 }
 ----
 ====
 
-Constructing HTableInterface implementation is very lightweight and resources are controlled.
-
 .`HTablePool` is Deprecated
 [WARNING]
 ====
-Previous versions of this guide discussed `HTablePool`, which was deprecated in HBase 0.94, 0.95, and 0.96, and removed in 0.98.1, by link:https://issues.apache.org/jira/browse/HBASE-6580[HBASE-6500], or `HConnection`, which is deprecated in HBase 1.0 by `Connection`.
+Previous versions of this guide discussed `HTablePool`, which was deprecated in HBase 0.94, 0.95, and 0.96, and removed in 0.98.1, by link:https://issues.apache.org/jira/browse/HBASE-6580[HBASE-6580], or `HConnection`, which is deprecated in HBase 1.0 by `Connection`.
 Please use link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Connection.html[Connection] instead.
 ====
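The change above collapses two nested try blocks into a single try-with-resources statement. It relies on the guarantee that resources close in reverse declaration order, so the lightweight Table is closed before its Connection. A minimal sketch of that ordering using only JDK classes (the names are illustrative stand-ins, not HBase types):

```java
// Sketch only: stand-ins for Connection/Table, not HBase classes.
public class CloseOrderDemo {

    // Simulates the combined try-with-resources block from the docs change
    // and returns the order in which events happened.
    static String closeOrder() {
        StringBuilder log = new StringBuilder();
        try (AutoCloseable connection = () -> log.append("close connection;");
             AutoCloseable table = () -> log.append("close table;")) {
            log.append("work;");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return log.toString();
    }

    public static void main(String[] args) {
        // Resources close in reverse declaration order: table, then connection.
        System.out.println(closeOrder()); // work;close table;close connection;
    }
}
```

Because the table closes first, the single combined block behaves the same as the old nested blocks while being shorter.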
 
@@ -398,7 +395,7 @@ Example: Find all columns in a row and family that start with "abc"
 
 [source,java]
 ----
-HTableInterface t = ...;
+Table t = ...;
 byte[] row = ...;
 byte[] family = ...;
 byte[] prefix = Bytes.toBytes("abc");
@@ -428,7 +425,7 @@ Example: Find all columns in a row and family that start with "abc" or "xyz"
 
 [source,java]
 ----
-HTableInterface t = ...;
+Table t = ...;
 byte[] row = ...;
 byte[] family = ...;
 byte[][] prefixes = new byte[][] {Bytes.toBytes("abc"), Bytes.toBytes("xyz")};
@@ -463,7 +460,7 @@ Example: Find all columns in a row and family between "bbbb" (inclusive) and "bb
 
 [source,java]
 ----
-HTableInterface t = ...;
+Table t = ...;
 byte[] row = ...;
 byte[] family = ...;
 byte[] startColumn = Bytes.toBytes("bbbb");
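The filters in these hunks select qualifiers by prefix or by an inclusive lexicographic range within one row and family. Their semantics can be sketched with a plain sorted map over string qualifiers (illustrative only: the real filters compare `byte[]` qualifiers, and the sample data and range bounds below are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative only: string qualifiers in a sorted map, not HBase byte[] cells.
public class QualifierFilterDemo {

    // Sample qualifiers for one row/family (made-up values).
    static final TreeMap<String, String> columns = new TreeMap<>(Map.of(
        "abc1", "v1", "abc2", "v2", "bbbb", "v3", "bbcc", "v4", "xyz9", "v5"));

    // Like ColumnPrefixFilter: keep qualifiers starting with the prefix.
    static List<String> byPrefix(String prefix) {
        // prefix + '\uffff' is an upper bound for every key sharing the prefix
        return new ArrayList<>(columns.subMap(prefix, prefix + '\uffff').keySet());
    }

    // Like an inclusive ColumnRangeFilter(start, true, end, true).
    static List<String> byRange(String start, String end) {
        return new ArrayList<>(columns.subMap(start, true, end, true).keySet());
    }

    public static void main(String[] args) {
        System.out.println(byPrefix("abc"));         // [abc1, abc2]
        System.out.println(byRange("bbbb", "bbzz")); // [bbbb, bbcc]
    }
}
```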
@@ -1415,11 +1412,6 @@ admin.createTable(tableDesc);
 hbase> create 'test', {METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}},{NAME => 'cf1'}
 ----
 
-The default split policy can be overwritten using a custom
-link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.html[RegionSplitPolicy(HBase 0.94+)].
-Typically a custom split policy should extend HBase's default split policy:
-link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.html[ConstantSizeRegionSplitPolicy].
-
 The policy can be set globally through the HBaseConfiguration used or on a per table basis:
 [source,java]
 ----

http://git-wip-us.apache.org/repos/asf/hbase/blob/b9061c55/src/main/asciidoc/_chapters/community.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/community.adoc b/src/main/asciidoc/_chapters/community.adoc
index ba07df7..f63d597 100644
--- a/src/main/asciidoc/_chapters/community.adoc
+++ b/src/main/asciidoc/_chapters/community.adoc
@@ -62,12 +62,11 @@ Any -1 on a patch by anyone vetoes a patch; it cannot be committed until the jus
 .How to set fix version in JIRA on issue resolve
 
 Here is how link:http://search-hadoop.com/m/azemIi5RCJ1[we agreed] to set versions in JIRA when we resolve an issue.
-If master is going to be 0.98.0 then:
+If master is going to be 2.0.0 and branch-1 is going to be 1.4.0, then:
 
-* Commit only to master: Mark with 0.98
-* Commit to 0.95 and master: Mark with 0.98, and 0.95.x
-* Commit to 0.94.x and 0.95, and master: Mark with 0.98, 0.95.x, and 0.94.x
-* Commit to 89-fb: Mark with 89-fb.
+* Commit only to master: Mark with 2.0.0
+* Commit to branch-1 and master: Mark with 2.0.0, and 1.4.0
+* Commit to branch-1.3, branch-1, and master: Mark with 2.0.0, 1.4.0, and 1.3.x
 * Commit site fixes: no version
 
 [[hbase.when.to.close.jira]]

http://git-wip-us.apache.org/repos/asf/hbase/blob/b9061c55/src/main/asciidoc/_chapters/configuration.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/configuration.adoc b/src/main/asciidoc/_chapters/configuration.adoc
index d189c9f..ff4bf6a 100644
--- a/src/main/asciidoc/_chapters/configuration.adoc
+++ b/src/main/asciidoc/_chapters/configuration.adoc
@@ -93,54 +93,34 @@ This section lists required services and some required system configuration.
 
 [[java]]
 .Java
-[cols="1,1,1,4", options="header"]
+[cols="1,1,4", options="header"]
 |===
 |HBase Version
-|JDK 6
 |JDK 7
 |JDK 8
 
 |2.0
-|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
 |link:http://search-hadoop.com/m/YGbbsPxZ723m3as[Not Supported]
 |yes
 
 |1.3
-|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
 |yes
 |yes
 
 
 |1.2
-|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
 |yes
 |yes
 
 |1.1
-|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
 |yes
 |Running with JDK 8 will work but is not well tested.
 
-|1.0
-|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
-|yes
-|Running with JDK 8 will work but is not well tested.
-
-|0.98
-|yes
-|yes
-|Running with JDK 8 works but is not well tested. Building with JDK 8 would require removal of the
-deprecated `remove()` method of the `PoolMap` class and is under consideration. See
-link:https://issues.apache.org/jira/browse/HBASE-7608[HBASE-7608] for more information about JDK 8
-support.
-
-|0.94
-|yes
-|yes
-|N/A
 |===
 
-NOTE: In HBase 0.98.5 and newer, you must set `JAVA_HOME` on each node of your cluster. _hbase-env.sh_ provides a handy mechanism to do this.
+NOTE: HBase will neither build nor run with Java 6.
+
+NOTE: You must set `JAVA_HOME` on each node of your cluster. _hbase-env.sh_ provides a handy mechanism to do this.
 
 [[os]]
 .Operating System Utilities
@@ -213,8 +193,10 @@ See link:http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Suppor
 [TIP]
 ====
 Hadoop 2.x is faster and includes features, such as short-circuit reads, which will help improve your HBase random read profile.
-Hadoop 2.x also includes important bug fixes that will improve your overall HBase experience.
-HBase 0.98 drops support for Hadoop 1.0, deprecates use of Hadoop 1.1+, and HBase 1.0 will not support Hadoop 1.x.
+Hadoop 2.x also includes important bug fixes that will improve your overall HBase experience. HBase does not support running with
+earlier versions of Hadoop. See the table below for requirements specific to different HBase versions.
+
+Hadoop 3.x is still in early access releases and has not yet been sufficiently tested by the HBase community for production use cases.
 ====
 
 Use the following legend to interpret this table:
@@ -225,22 +207,21 @@ Use the following legend to interpret this table:
 * "X" = not supported
 * "NT" = Not tested
 
-[cols="1,1,1,1,1,1,1,1", options="header"]
+[cols="1,1,1,1,1", options="header"]
 |===
-| | HBase-0.94.x | HBase-0.98.x (Support for Hadoop 1.1+ is deprecated.) | HBase-1.0.x (Hadoop 1.x is NOT supported) | HBase-1.1.x | HBase-1.2.x | HBase-1.3.x | HBase-2.0.x
-|Hadoop-1.0.x  | X | X | X | X | X | X | X
-|Hadoop-1.1.x | S | NT | X | X | X | X | X
-|Hadoop-0.23.x | S | X | X | X | X | X | X
-|Hadoop-2.0.x-alpha | NT | X | X | X | X | X | X
-|Hadoop-2.1.0-beta | NT | X | X | X | X | X | X
-|Hadoop-2.2.0 | NT | S | NT | NT | X  | X | X
-|Hadoop-2.3.x | NT | S | NT | NT | X  | X | X
-|Hadoop-2.4.x | NT | S | S | S | S | S | X
-|Hadoop-2.5.x | NT | S | S | S | S | S | X
-|Hadoop-2.6.0 | X | X | X | X | X | X | X
-|Hadoop-2.6.1+ | NT | NT | NT | NT | S | S | S
-|Hadoop-2.7.0 | X | X | X | X | X | X | X
-|Hadoop-2.7.1+ | NT | NT | NT | NT | S | S | S
+| | HBase-1.1.x | HBase-1.2.x | HBase-1.3.x | HBase-2.0.x
+|Hadoop-2.0.x-alpha | X | X | X | X
+|Hadoop-2.1.0-beta | X | X | X | X
+|Hadoop-2.2.0 | NT | X  | X | X
+|Hadoop-2.3.x | NT | X  | X | X
+|Hadoop-2.4.x | S | S | S | X
+|Hadoop-2.5.x | S | S | S | X
+|Hadoop-2.6.0 | X | X | X | X
+|Hadoop-2.6.1+ | NT | S | S | S
+|Hadoop-2.7.0 | X | X | X | X
+|Hadoop-2.7.1+ | NT | S | S | S
+|Hadoop-2.8.0 | X | X | X | X
+|Hadoop-3.0.0-alphax | NT | NT | NT | NT
 |===
 
 .Hadoop Pre-2.6.1 and JDK 1.8 Kerberos
@@ -264,7 +245,13 @@ data loss. This patch is present in Apache Hadoop releases 2.6.1+.
 .Hadoop 2.7.x
 [TIP]
 ====
-Hadoop version 2.7.0 is not tested or supported as the Hadoop PMC has explicitly labeled that release as not being stable.
+Hadoop version 2.7.0 is not tested or supported as the Hadoop PMC has explicitly labeled that release as not being stable. (reference the link:https://s.apache.org/hadoop-2.7.0-announcement[announcement of Apache Hadoop 2.7.0].)
+====
+
+.Hadoop 2.8.x
+[TIP]
+====
+Hadoop version 2.8.0 is not tested or supported as the Hadoop PMC has explicitly labeled that release as not being stable. (reference the link:https://s.apache.org/hadoop-2.8.0-announcement[announcement of Apache Hadoop 2.8.0].)
 ====
 
 .Replace the Hadoop Bundled With HBase!
@@ -278,88 +265,6 @@ Make sure you replace the jar in HBase everywhere on your cluster.
 Hadoop version mismatch issues have various manifestations but often all looks like its hung up.
 ====
 
-[[hadoop2.hbase_0.94]]
-==== Apache HBase 0.94 with Hadoop 2
-
-To get 0.94.x to run on Hadoop 2.2.0, you need to change the hadoop 2 and protobuf versions in the _pom.xml_: Here is a diff with pom.xml changes:
-
-[source]
-----
-$ svn diff pom.xml
-Index: pom.xml
-===================================================================
---- pom.xml     (revision 1545157)
-+++ pom.xml     (working copy)
-@@ -1034,7 +1034,7 @@
-     <slf4j.version>1.4.3</slf4j.version>
-     <log4j.version>1.2.16</log4j.version>
-     <mockito-all.version>1.8.5</mockito-all.version>
--    <protobuf.version>2.4.0a</protobuf.version>
-+    <protobuf.version>2.5.0</protobuf.version>
-     <stax-api.version>1.0.1</stax-api.version>
-     <thrift.version>0.8.0</thrift.version>
-     <zookeeper.version>3.4.5</zookeeper.version>
-@@ -2241,7 +2241,7 @@
-         </property>
-       </activation>
-       <properties>
--        <hadoop.version>2.0.0-alpha</hadoop.version>
-+        <hadoop.version>2.2.0</hadoop.version>
-         <slf4j.version>1.6.1</slf4j.version>
-       </properties>
-       <dependencies>
-----
-
-The next step is to regenerate Protobuf files and assuming that the Protobuf has been installed:
-
-* Go to the HBase root folder, using the command line;
-* Type the following commands:
-+
-
-[source,bourne]
-----
-$ protoc -Isrc/main/protobuf --java_out=src/main/java src/main/protobuf/hbase.proto
-----
-+
-
-[source,bourne]
-----
-$ protoc -Isrc/main/protobuf --java_out=src/main/java src/main/protobuf/ErrorHandling.proto
-----
-
-
-Building against the hadoop 2 profile by running something like the following command:
-
-----
-$  mvn clean install assembly:single -Dhadoop.profile=2.0 -DskipTests
-----
-
-[[hadoop.hbase_0.94]]
-==== Apache HBase 0.92 and 0.94
-
-HBase 0.92 and 0.94 versions can work with Hadoop versions, 0.20.205, 0.22.x, 1.0.x, and 1.1.x.
-HBase-0.94 can additionally work with Hadoop-0.23.x and 2.x, but you may have to recompile the code using the specific maven profile (see top level pom.xml)
-
-[[hadoop.hbase_0.96]]
-==== Apache HBase 0.96
-
-As of Apache HBase 0.96.x, Apache Hadoop 1.0.x at least is required.
-Hadoop 2 is strongly encouraged (faster but also has fixes that help MTTR). We will no longer run properly on older Hadoops such as 0.20.205 or branch-0.20-append.
-Do not move to Apache HBase 0.96.x if you cannot upgrade your Hadoop. See link:http://search-hadoop.com/m/7vFVx4EsUb2[HBase, mail # dev - DISCUSS: Have hbase require at least hadoop 1.0.0 in hbase 0.96.0?]
-
-[[hadoop.older.versions]]
-==== Hadoop versions 0.20.x - 1.x
-
-DO NOT use Hadoop versions older than 2.2.0 for HBase versions greater than 1.0. Check release documentation if you are using an older version of HBase for Hadoop related information.
-
-[[hadoop.security]]
-==== Apache HBase on Secure Hadoop
-
-Apache HBase will run on any Hadoop 0.20.x that incorporates Hadoop security features as long as you do as suggested above and replace the Hadoop jar that ships with HBase with the secure version.
-If you want to read more about how to setup Secure HBase, see <<hbase.secure.configuration,hbase.secure.configuration>>.
-
-
 [[dfs.datanode.max.transfer.threads]]
 ==== `dfs.datanode.max.transfer.threads` (((dfs.datanode.max.transfer.threads)))
 
@@ -392,8 +297,8 @@ See also <<casestudies.max.transfer.threads,casestudies.max.transfer.threads>> a
 [[zookeeper.requirements]]
 === ZooKeeper Requirements
 
-ZooKeeper 3.4.x is required as of HBase 1.0.0.
-HBase makes use of the `multi` functionality that is only available since Zookeeper 3.4.0. The `hbase.zookeeper.useMulti` configuration property defaults to `true` in HBase 1.0.0.
+ZooKeeper 3.4.x is required.
+HBase makes use of the `multi` functionality that is only available since Zookeeper 3.4.0. The `hbase.zookeeper.useMulti` configuration property defaults to `true`.
 Refer to link:https://issues.apache.org/jira/browse/HBASE-12241[HBASE-12241 (The crash of regionServer when taking deadserver's replication queue breaks replication)] and link:https://issues.apache.org/jira/browse/HBASE-6775[HBASE-6775 (Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix)] for background.
 The property is deprecated and useMulti is always enabled in HBase 2.0.
 
@@ -580,8 +485,6 @@ Check them out especially if HBase had trouble starting.
 HBase also puts up a UI listing vital attributes.
 By default it's deployed on the Master host at port 16010 (HBase RegionServers listen on port 16020 by default and put up an informational HTTP server at port 16030). If the Master is running on a host named `master.example.org` on the default port, point your browser at pass:[http://master.example.org:16010] to see the web interface.

-Prior to HBase 0.98 the master UI was deployed on port 60010, and the HBase RegionServers UI on port 60030.
-
 Once HBase has started, see the <<shell_exercises,shell exercises>> section for how to create tables, add data, scan your insertions, and finally disable and drop your tables.
 
 To stop HBase after exiting the HBase shell enter
@@ -764,7 +667,7 @@ example9
 [[hbase_env]]
 ==== _hbase-env.sh_
 
-The following lines in the _hbase-env.sh_ file show how to set the `JAVA_HOME` environment variable (required for HBase 0.98.5 and newer) and set the heap to 4 GB (rather than the default value of 1 GB). If you copy and paste this example, be sure to adjust the `JAVA_HOME` to suit your environment.
+The following lines in the _hbase-env.sh_ file show how to set the `JAVA_HOME` environment variable (required for HBase) and set the heap to 4 GB (rather than the default value of 1 GB). If you copy and paste this example, be sure to adjust the `JAVA_HOME` to suit your environment.
 
 ----
 # The java implementation to use.

http://git-wip-us.apache.org/repos/asf/hbase/blob/b9061c55/src/main/asciidoc/_chapters/cp.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/cp.adoc b/src/main/asciidoc/_chapters/cp.adoc
index 7e60f2f..2f5267f 100644
--- a/src/main/asciidoc/_chapters/cp.adoc
+++ b/src/main/asciidoc/_chapters/cp.adoc
@@ -100,12 +100,10 @@ AOP::
 
 === Coprocessor Implementation Overview
 
-. Either your class should extend one of the Coprocessor classes, such as
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.html[BaseRegionObserver],
-or it should implement the link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/Coprocessor.html[Coprocessor]
-or
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorService.html[CoprocessorService]
-interface.
+. Your class should implement one of the Coprocessor interfaces -
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/Coprocessor.html[Coprocessor],
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver],
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorService.html[CoprocessorService] - to name a few.
 
 . Load the coprocessor, either statically (from the configuration) or dynamically,
 using HBase Shell. For more details see <<cp_loading,Loading Coprocessors>>.
@@ -150,36 +148,22 @@ RegionObserver::
   A RegionObserver coprocessor allows you to observe events on a region, such as `Get`
   and `Put` operations. See
   link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver].
-  Consider overriding the convenience class
-  link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.html[BaseRegionObserver],
-  which implements the `RegionObserver` interface and will not break if new methods are added.
 
 RegionServerObserver::
   A RegionServerObserver allows you to observe events related to the RegionServer's
   operation, such as starting, stopping, or performing merges, commits, or rollbacks.
   See
   link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionServerObserver.html[RegionServerObserver].
-  Consider overriding the convenience class
-  https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseMasterAndRegionObserver.html[BaseMasterAndRegionObserver]
-  which implements both `MasterObserver` and `RegionServerObserver` interfaces and
-  will not break if new methods are added.
 
-MasterOvserver::
+MasterObserver::
   A MasterObserver allows you to observe events related to the HBase Master, such
   as table creation, deletion, or schema modification. See
   link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/MasterObserver.html[MasterObserver].
-  Consider overriding the convenience class
-  https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseMasterAndRegionObserver.html[BaseMasterAndRegionObserver],
-  which implements both `MasterObserver` and `RegionServerObserver` interfaces and
-  will not break if new methods are added.
 
 WalObserver::
   A WalObserver allows you to observe events related to writes to the Write-Ahead
   Log (WAL). See
   link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/WALObserver.html[WALObserver].
-  Consider overriding the convenience class
-  link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseWALObserver.html[BaseWALObserver],
-  which implements the `WalObserver` interface and will not break if new methods are added.
 
 <<cp_example,Examples>> provides working examples of observer coprocessors.
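The `Base*` convenience classes removed above existed so implementors would not break when new methods were added to the observer interfaces. One way an interface can grow without breaking existing implementors is Java 8 default methods; a minimal sketch of that mechanism (illustrative method names, not the real HBase observer signatures):

```java
// Illustrative observer shape, not the real HBase interfaces.
interface EventObserver {
    default String preGetOp() { return "no-op"; }
    // Added in a later release as a default: existing implementors still compile.
    default String preExists() { return "no-op"; }
}

public class ObserverDefaults {

    // Overrides only the hook it cares about, like an observer example would.
    static class AdminGuard implements EventObserver {
        @Override public String preGetOp() { return "blocked"; }
    }

    public static void main(String[] args) {
        EventObserver observer = new AdminGuard();
        System.out.println(observer.preGetOp());  // blocked
        System.out.println(observer.preExists()); // no-op
    }
}
```

The implementor compiles even though it never mentions `preExists()`, which is the same "will not break if new methods are added" property the convenience classes used to provide.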
 
@@ -196,8 +180,7 @@ In contrast to observer coprocessors, where your code is run transparently, endp
 coprocessors must be explicitly invoked using the
 link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/Table.html#coprocessorService%28java.lang.Class,%20byte%5B%5D,%20byte%5B%5D,%20org.apache.hadoop.hbase.client.coprocessor.Batch.Call%29[CoprocessorService()]
 method available in
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/Table.html[Table],
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HTableInterface.html[HTableInterface],
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/Table.html[Table]
 or
 link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HTable.html[HTable].
 
@@ -499,8 +482,8 @@ of the `users` table.
 The following Observer coprocessor prevents the details of the user `admin` from being returned in a `Get` or `Scan` of the `users` table.
 
-. Write a class that extends the
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.html[BaseRegionObserver]
+. Write a class that implements the
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver]
 class.
 
 . Override the `preGetOp()` method (the `preGet()` method is deprecated) to check
@@ -520,7 +503,7 @@ Following are the implementation of the above steps:
 
 [source,java]
 ----
-public class RegionObserverExample extends BaseRegionObserver {
+public class RegionObserverExample implements RegionObserver {
 
     private static final byte[] ADMIN = Bytes.toBytes("admin");
     private static final byte[] COLUMN_FAMILY = Bytes.toBytes("details");
@@ -627,7 +610,7 @@ The effect is that the duplicate coprocessor is effectively ignored.
 +
 [source, java]
 ----
-public class SumEndPoint extends SumService implements Coprocessor, CoprocessorService {
+public class SumEndPoint extends Sum.SumService implements Coprocessor, CoprocessorService {
 
     private RegionCoprocessorEnvironment env;
 
@@ -647,31 +630,33 @@ public class SumEndPoint extends SumService implements Coprocessor, CoprocessorS
 
     @Override
     public void stop(CoprocessorEnvironment env) throws IOException {
-        // do mothing
+        // do nothing
     }
 
     @Override
-    public void getSum(RpcController controller, SumRequest request, RpcCallback done) {
+    public void getSum(RpcController controller, Sum.SumRequest request, RpcCallback<Sum.SumResponse> done) {
         Scan scan = new Scan();
         scan.addFamily(Bytes.toBytes(request.getFamily()));
         scan.addColumn(Bytes.toBytes(request.getFamily()), Bytes.toBytes(request.getColumn()));
-        SumResponse response = null;
+
+        Sum.SumResponse response = null;
         InternalScanner scanner = null;
+
         try {
             scanner = env.getRegion().getScanner(scan);
-            List results = new ArrayList();
+            List<Cell> results = new ArrayList<>();
             boolean hasMore = false;
-                        long sum = 0L;
-                do {
-                        hasMore = scanner.next(results);
-                        for (Cell cell : results) {
-                            sum = sum + Bytes.toLong(CellUtil.cloneValue(cell));
-                     }
-                        results.clear();
-                } while (hasMore);
+            long sum = 0L;
 
-                response = SumResponse.newBuilder().setSum(sum).build();
+            do {
+                hasMore = scanner.next(results);
+                for (Cell cell : results) {
+                    sum = sum + Bytes.toLong(CellUtil.cloneValue(cell));
+                }
+                results.clear();
+            } while (hasMore);
 
+            response = Sum.SumResponse.newBuilder().setSum(sum).build();
         } catch (IOException ioe) {
             ResponseConverter.setControllerException(controller, ioe);
         } finally {
@@ -681,6 +666,7 @@ public class SumEndPoint extends SumService implements Coprocessor, CoprocessorS
                 } catch (IOException ignored) {}
             }
         }
+
         done.run(response);
     }
 }
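The endpoint above accumulates the sum with `Bytes.toLong(CellUtil.cloneValue(cell))`, which assumes each cell value is an 8-byte big-endian long. That decoding step can be reproduced with the JDK's `ByteBuffer` (a sketch, not HBase code):

```java
import java.nio.ByteBuffer;

// Sketch of the decoding used by the endpoint's summing loop: HBase's
// Bytes.toLong reads an 8-byte big-endian value, as ByteBuffer does by default.
public class SumDemo {

    static byte[] toBytes(long v) {
        return ByteBuffer.allocate(8).putLong(v).array();
    }

    static long toLong(byte[] b) {
        return ByteBuffer.wrap(b).getLong();
    }

    // Mirrors the endpoint loop: sum every cell value in the scanned results.
    static long sum(byte[][] cellValues) {
        long sum = 0L;
        for (byte[] value : cellValues) {
            sum += toLong(value);
        }
        return sum;
    }

    public static void main(String[] args) {
        byte[][] values = { toBytes(1000L), toBytes(2500L), toBytes(499L) };
        System.out.println(sum(values)); // 3999
    }
}
```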
@@ -689,33 +675,33 @@ public class SumEndPoint extends SumService implements Coprocessor, CoprocessorS
 [source, java]
 ----
 Configuration conf = HBaseConfiguration.create();
-// Use below code for HBase version 1.x.x or above.
 Connection connection = ConnectionFactory.createConnection(conf);
 TableName tableName = TableName.valueOf("users");
 Table table = connection.getTable(tableName);
 
-//Use below code HBase version 0.98.xx or below.
-//HConnection connection = HConnectionManager.createConnection(conf);
-//HTableInterface table = connection.getTable("users");
-
-final SumRequest request = SumRequest.newBuilder().setFamily("salaryDet").setColumn("gross")
-                            .build();
+final Sum.SumRequest request = Sum.SumRequest.newBuilder().setFamily("salaryDet").setColumn("gross").build();
 try {
-Map<byte[], Long> results = table.CoprocessorService (SumService.class, null, null,
-new Batch.Call<SumService, Long>() {
-    @Override
-        public Long call(SumService aggregate) throws IOException {
-BlockingRpcCallback rpcCallback = new BlockingRpcCallback();
-            aggregate.getSum(null, request, rpcCallback);
-            SumResponse response = rpcCallback.get();
-            return response.hasSum() ? response.getSum() : 0L;
+    Map<byte[], Long> results = table.coprocessorService(
+        Sum.SumService.class,
+        null,  /* start key */
+        null,  /* end   key */
+        new Batch.Call<Sum.SumService, Long>() {
+            @Override
+            public Long call(Sum.SumService aggregate) throws IOException {
+                BlockingRpcCallback<Sum.SumResponse> rpcCallback = new BlockingRpcCallback<>();
+                aggregate.getSum(null, request, rpcCallback);
+                Sum.SumResponse response = rpcCallback.get();
+
+                return response.hasSum() ? response.getSum() : 0L;
+            }
         }
-    });
+    );
+
     for (Long sum : results.values()) {
         System.out.println("Sum = " + sum);
     }
 } catch (ServiceException e) {
-e.printStackTrace();
+    e.printStackTrace();
 } catch (Throwable e) {
     e.printStackTrace();
 }
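The client above blocks on `BlockingRpcCallback.get()` until the RPC layer delivers a response. A stdlib sketch of that blocking-callback shape, built on a `CountDownLatch` (illustrative, not the HBase implementation):

```java
import java.util.concurrent.CountDownLatch;

// Illustrative blocking callback, not the HBase BlockingRpcCallback class.
public class BlockingCallback<R> {

    private final CountDownLatch done = new CountDownLatch(1);
    private volatile R result;

    // Invoked by the RPC layer when the response arrives.
    public void run(R response) {
        result = response;
        done.countDown();
    }

    // Invoked by the caller; blocks until run() has delivered the response.
    public R get() {
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
        return result;
    }

    public static void main(String[] args) {
        BlockingCallback<Long> callback = new BlockingCallback<>();
        new Thread(() -> callback.run(42L)).start(); // simulated async response
        System.out.println(callback.get()); // 42
    }
}
```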
@@ -769,15 +755,10 @@ Then you can read the configuration using code like the following:
 [source,java]
 ----
 Configuration conf = HBaseConfiguration.create();
-// Use below code for HBase version 1.x.x or above.
 Connection connection = ConnectionFactory.createConnection(conf);
 TableName tableName = TableName.valueOf("users");
 Table table = connection.getTable(tableName);
 
-//Use below code HBase version 0.98.xx or below.
-//HConnection connection = HConnectionManager.createConnection(conf);
-//HTableInterface table = connection.getTable("users");
-
 Get get = new Get(Bytes.toBytes("admin"));
 Result result = table.get(get);
 for (Cell c : result.rawCells()) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/b9061c55/src/main/asciidoc/_chapters/developer.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
index 8765600..50b9c74 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -306,38 +306,27 @@ See the <<hbase.unittests.cmds,hbase.unittests.cmds>> section in <<hbase.unittes
 [[maven.build.hadoop]]
 ==== Building against various hadoop versions.
 
-As of 0.96, Apache HBase supports building against Apache Hadoop versions: 1.0.3, 2.0.0-alpha and 3.0.0-SNAPSHOT.
-By default, in 0.96 and earlier, we will build with Hadoop-1.0.x.
-As of 0.98, Hadoop 1.x is deprecated and Hadoop 2.x is the default.
-To change the version to build against, add a hadoop.profile property when you invoke +mvn+:
+HBase supports building against Apache Hadoop versions: 2.y and 3.y (early release artifacts). By default we build against Hadoop 2.x.
+
+To build against a specific release from the Hadoop 2.y line, set e.g. `-Dhadoop-two.version=2.6.3`.
 
 [source,bourne]
 ----
-mvn -Dhadoop.profile=1.0 ...
+mvn -Dhadoop-two.version=2.6.3 ...
 ----
 
-The above will build against whatever explicit hadoop 1.x version we have in our _pom.xml_ as our '1.0' version.
-Tests may not all pass so you may need to pass `-DskipTests` unless you are inclined to fix the failing tests.
-
-.'dependencyManagement.dependencies.dependency.artifactId' for org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does not match a valid id pattern
-[NOTE]
-====
-You will see ERRORs like the above title if you pass the _default_ profile; e.g.
-if you pass +hadoop.profile=1.1+ when building 0.96 or +hadoop.profile=2.0+ when building hadoop 0.98; just drop the hadoop.profile stipulation in this case to get your build to run again.
-This seems to be a maven peculiarity that is probably fixable but we've not spent the time trying to figure it.
-====
-
-Similarly, for 3.0, you would just replace the profile value.
-Note that Hadoop-3.0.0-SNAPSHOT does not currently have a deployed maven artifact - you will need to build and install your own in your local maven repository if you want to run against this profile.
-
-In earlier versions of Apache HBase, you can build against older versions of Apache Hadoop, notably, Hadoop 0.22.x and 0.23.x.
-If you are running, for example HBase-0.94 and wanted to build against Hadoop 0.23.x, you would run with:
+To change the major release line of Hadoop we build against, add a hadoop.profile property when you invoke +mvn+:
 
 [source,bourne]
 ----
-mvn -Dhadoop.profile=22 ...
+mvn -Dhadoop.profile=3.0 ...
 ----
 
+The above will build against whatever explicit hadoop 3.y version we have in our _pom.xml_ as our '3.0' version.
+Tests may not all pass so you may need to pass `-DskipTests` unless you are inclined to fix the failing tests.
+
+To pick a particular Hadoop 3.y release, you'd set e.g. `-Dhadoop-three.version=3.0.0-alpha1`.
+
 [[build.protobuf]]
 ==== Build Protobuf
 
@@ -426,27 +415,6 @@ HBase 1.x requires Java 7 to build.
 See <<java,java>> for Java requirements per HBase release.
 ====
 
-=== Building against HBase 0.96-0.98
-
-HBase 0.96.x will run on Hadoop 1.x or Hadoop 2.x.
-HBase 0.98 still runs on both, but HBase 0.98 deprecates use of Hadoop 1.
-HBase 1.x will _not_ run on Hadoop 1.
-In the following procedures, we make a distinction between HBase 1.x builds and the awkward process involved building HBase 0.96/0.98 for either Hadoop 1 or Hadoop 2 targets.
-
-You must choose which Hadoop to build against.
-It is not possible to build a single HBase binary that runs against both Hadoop 1 and Hadoop 2.
-Hadoop is included in the build, because it is needed to run HBase in standalone mode.
-Therefore, the set of modules included in the tarball changes, depending on the build target.
-To determine which HBase you have, look at the HBase version.
-The Hadoop version is embedded within it.
-
-Maven, our build system, natively does not allow a single product to be built against different dependencies.
-Also, Maven cannot change the set of included modules and write out the correct _pom.xml_ files with appropriate dependencies, even using two build targets, one for Hadoop 1 and another for Hadoop 2.
-A prerequisite step is required, which takes as input the current _pom.xml_s and generates Hadoop 1 or Hadoop 2 versions using a script in the _dev-tools/_ directory, called _generate-hadoopX-poms.sh_ where [replaceable]_X_ is either `1` or `2`.
-You then reference these generated poms when you build.
-For now, just be aware of the difference between HBase 1.x builds and those of HBase 0.96-0.98.
-This difference is important to the build instructions.
-
 [[maven.settings.xml]]
 .Example _~/.m2/settings.xml_ File
 ====
@@ -496,9 +464,7 @@ For the build to sign them for you, you a properly configured _settings.xml_
in
 [[maven.release]]
 === Making a Release Candidate
 
-NOTE: These instructions are for building HBase 1.0.x.
-For building earlier versions, e.g. 0.98.x, the process is different.
-See this section under the respective release documentation folders.
+NOTE: These instructions are for building HBase 1.y.z.
 
 .Point Releases
 If you are making a point release (for example to quickly address a critical incompatibility
or security problem) off of a release branch instead of a development branch, the tagging
instructions are slightly different.
@@ -1110,11 +1076,13 @@ public class TestExample {
   // down in 'testExampleFoo()' where we use it to log current test's name.
   @Rule public TestName testName = new TestName();
 
-  // CategoryBasedTimeout.forClass(<testcase>) decides the timeout based on the category
-  // (small/medium/large) of the testcase. @ClassRule requires that the full testcase runs
within
-  // this timeout irrespective of individual test methods' times.
-  @ClassRule
-  public static TestRule timeout = CategoryBasedTimeout.forClass(TestExample.class);
+  // The below rule does two things. It decides the timeout based on the category
+  // (small/medium/large) of the testcase. This @Rule requires that the full testcase runs
+  // within this timeout irrespective of individual test methods' times. Second, when
+  // the test is done, it dumps to the log a count of threads still running.
+  @Rule public final TestRule timeout = CategoryBasedTimeout.builder().
+    withTimeout(this.getClass()).withLookingForStuckThread(true).build();
 
   @Before
   public void setUp() throws Exception {
@@ -1438,8 +1406,6 @@ The following interface classifications are commonly used:
 
 `@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)`::
   APIs for HBase coprocessor writers.
-  As of HBase 0.92/0.94/0.96/0.98 this api is still unstable.
-  No guarantees on compatibility with future versions.
 
 No `@InterfaceAudience` Classification::
   Packages without an `@InterfaceAudience` label are considered private.
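The stuck-thread report produced by the test-timeout rule shown earlier boils down to counting live JVM threads when a test finishes. Below is a plain-JDK sketch of that count; the class name `ThreadCountDemo` is made up for illustration and no JUnit or HBase classes are assumed:

```java
public class ThreadCountDemo {
    public static void main(String[] args) {
        // Count every live thread the JVM knows about; a timeout rule
        // can log this figure when a test ends to flag stuck threads.
        int live = Thread.getAllStackTraces().size();
        System.out.println("threads still running: " + live);
    }
}
```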

http://git-wip-us.apache.org/repos/asf/hbase/blob/b9061c55/src/main/asciidoc/_chapters/security.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/security.adoc b/src/main/asciidoc/_chapters/security.adoc
index 0ed9ba2..ccb5adb 100644
--- a/src/main/asciidoc/_chapters/security.adoc
+++ b/src/main/asciidoc/_chapters/security.adoc
@@ -202,10 +202,9 @@ Set it in the `Configuration` supplied to `Table`:
 Configuration conf = HBaseConfiguration.create();
 conf.set("hbase.rpc.protection", "privacy");
-try (Connection connection = ConnectionFactory.createConnection(conf)) {
-  try (Table table = connection.getTable(TableName.valueOf(tablename)) {
+try (Connection connection = ConnectionFactory.createConnection(conf);
+     Table table = connection.getTable(TableName.valueOf(tablename))) {
   .... do your stuff
-  }
 }
 ----
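The collapsed form above is equivalent to the nested tries it replaces because resources listed in one try-with-resources statement are closed in reverse declaration order, exactly as nesting would do. A self-contained sketch (plain JDK, no HBase classes; names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesDemo {
    static List<String> closed = new ArrayList<>();

    // Each "resource" just records its name when closed.
    static AutoCloseable resource(String name) {
        return () -> closed.add(name);
    }

    public static void main(String[] args) throws Exception {
        try (AutoCloseable connection = resource("connection");
             AutoCloseable table = resource("table")) {
            // ... do your stuff
        }
        // Closed in reverse order of declaration: table first, then connection.
        System.out.println(closed); // [table, connection]
    }
}
```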
 
@@ -1014,24 +1013,16 @@ public static void grantOnTable(final HBaseTestingUtility util, final
String use
   SecureTestUtil.updateACLs(util, new Callable<Void>() {
     @Override
     public Void call() throws Exception {
-      Configuration conf = HBaseConfiguration.create();
-      Connection connection = ConnectionFactory.createConnection(conf);
-      try (Connection connection = ConnectionFactory.createConnection(conf)) {
-        try (Table table = connection.getTable(TableName.valueOf(tablename)) {
-          AccessControlLists.ACL_TABLE_NAME);
-          try {
-            BlockingRpcChannel service = acl.coprocessorService(HConstants.EMPTY_START_ROW);
-            AccessControlService.BlockingInterface protocol =
-                AccessControlService.newBlockingStub(service);
-            ProtobufUtil.grant(protocol, user, table, family, qualifier, actions);
-          } finally {
-            acl.close();
-          }
-          return null;
-        }
+      try (Connection connection = ConnectionFactory.createConnection(util.getConfiguration());
+           Table acl = connection.getTable(AccessControlLists.ACL_TABLE_NAME)) {
+        BlockingRpcChannel service = acl.coprocessorService(HConstants.EMPTY_START_ROW);
+        AccessControlService.BlockingInterface protocol =
+          AccessControlService.newBlockingStub(service);
+        AccessControlUtil.grant(null, protocol, user, table, family, qualifier, false, actions);
       }
+      return null;
     }
-  }
+  });
 }
 ----
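The `Callable<Void>` idiom used by `updateACLs` above is worth noting: `Void` (unlike a plain `Runnable`) lets the action throw checked exceptions, at the cost of an explicit `return null` after the work. A minimal plain-JDK sketch of the pattern (`run` and `CallableVoidDemo` are made-up names):

```java
import java.util.concurrent.Callable;

public class CallableVoidDemo {
    // Accept an action as Callable<Void>, as updateACLs does above;
    // Callable.call() may throw checked exceptions, which Runnable cannot.
    static void run(Callable<Void> action) throws Exception {
        action.call();
    }

    public static void main(String[] args) throws Exception {
        run(() -> {
            System.out.println("granting permissions...");
            return null; // Void requires an explicit null return
        });
    }
}
```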
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/b9061c55/src/main/asciidoc/_chapters/shell.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/shell.adoc b/src/main/asciidoc/_chapters/shell.adoc
index 8f1f59b..1e51a20 100644
--- a/src/main/asciidoc/_chapters/shell.adoc
+++ b/src/main/asciidoc/_chapters/shell.adoc
@@ -352,6 +352,19 @@ hbase(main):022:0> Date.new(1218920189000).toString() => "Sat Aug
16 20:56:29 UT
 
 To output in a format that is exactly like that of the HBase log format will take a little
messing with link:http://download.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html[SimpleDateFormat].
 
+=== Query Shell Configuration
+To read a config from the shell:
+----
+hbase(main):001:0> @shell.hbase.configuration.get("hbase.rpc.timeout")
+=> "60000"
+----
+To set a config in the shell:
+----
+hbase(main):005:0> @shell.hbase.configuration.setInt("hbase.rpc.timeout", 61010)
+hbase(main):006:0> @shell.hbase.configuration.get("hbase.rpc.timeout")
+=> "61010"
+----
+
+
 [[tricks.pre-split]]
 === Pre-splitting tables with the HBase Shell
 You can use a variety of options to pre-split tables when creating them via the HBase Shell
`create` command.

http://git-wip-us.apache.org/repos/asf/hbase/blob/b9061c55/src/main/asciidoc/_chapters/sql.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/sql.adoc b/src/main/asciidoc/_chapters/sql.adoc
index b47104c..b1ad063 100644
--- a/src/main/asciidoc/_chapters/sql.adoc
+++ b/src/main/asciidoc/_chapters/sql.adoc
@@ -37,6 +37,6 @@ link:http://phoenix.apache.org[Apache Phoenix]
 
 === Trafodion
 
-link:https://wiki.trafodion.org/[Trafodion: Transactional SQL-on-HBase]
+link:http://trafodion.incubator.apache.org/[Trafodion: Transactional SQL-on-HBase]
 
 :numbered:

http://git-wip-us.apache.org/repos/asf/hbase/blob/b9061c55/src/main/asciidoc/_chapters/troubleshooting.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc b/src/main/asciidoc/_chapters/troubleshooting.adoc
index c6253b8..1cf93d6 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -1050,7 +1050,7 @@ If you wish to increase the session timeout, add the following to your
_hbase-si
 ----
 <property>
   <name>zookeeper.session.timeout</name>
-  <value>1200000</value>
+  <value>120000</value>
 </property>
 <property>
   <name>hbase.zookeeper.property.tickTime</name>
@@ -1365,13 +1365,13 @@ on the HBase balancer, since the HDFS balancer would degrade locality.
This advi
 is still valid if your HDFS version is lower than 2.7.1.
 +
 link:https://issues.apache.org/jira/browse/HDFS-6133[HDFS-6133] provides the ability
-to exclude a given directory from the HDFS load balancer, by setting the
-`dfs.datanode.block-pinning.enabled` property to `true` in your HDFS
-configuration and running the following hdfs command:
+to exclude favored-nodes (pinned) blocks from the HDFS load balancer, by setting the
+`dfs.datanode.block-pinning.enabled` property to `true` in the HDFS service
+configuration.
 +
-----
-$ sudo -u hdfs hdfs balancer -exclude /hbase
-----
+To have HBase use the HDFS favored-nodes feature, switch the HBase balancer class
(conf: `hbase.master.loadbalancer.class`) to `org.apache.hadoop.hbase.favored.FavoredNodeLoadBalancer`,
which is documented link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/favored/FavoredNodeLoadBalancer.html[here].
 +
 NOTE: HDFS-6133 is available in HDFS 2.7.0 and higher, but HBase does not support
 running on HDFS 2.7.0, so you must be using HDFS 2.7.1 or higher to use this feature
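Concretely, the balancer switch described above is an _hbase-site.xml_ entry along these lines (property name and class are taken from the paragraph above; this is a sketch, not a complete configuration):

```xml
<property>
  <name>hbase.master.loadbalancer.class</name>
  <value>org.apache.hadoop.hbase.favored.FavoredNodeLoadBalancer</value>
</property>
```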

http://git-wip-us.apache.org/repos/asf/hbase/blob/b9061c55/src/main/asciidoc/_chapters/unit_testing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/unit_testing.adoc b/src/main/asciidoc/_chapters/unit_testing.adoc
index 0c4d812..6131d5a 100644
--- a/src/main/asciidoc/_chapters/unit_testing.adoc
+++ b/src/main/asciidoc/_chapters/unit_testing.adoc
@@ -295,28 +295,28 @@ public class MyHBaseIntegrationTest {
 
     @Before
     public void setup() throws Exception {
-    	utility = new HBaseTestingUtility();
-    	utility.startMiniCluster();
+        utility = new HBaseTestingUtility();
+        utility.startMiniCluster();
     }
 
     @Test
-        public void testInsert() throws Exception {
-       	 HTableInterface table = utility.createTable(Bytes.toBytes("MyTest"), CF);
-       	 HBaseTestObj obj = new HBaseTestObj();
-       	 obj.setRowKey("ROWKEY-1");
-       	 obj.setData1("DATA-1");
-       	 obj.setData2("DATA-2");
-       	 MyHBaseDAO.insertRecord(table, obj);
-       	 Get get1 = new Get(Bytes.toBytes(obj.getRowKey()));
-       	 get1.addColumn(CF, CQ1);
-       	 Result result1 = table.get(get1);
-       	 assertEquals(Bytes.toString(result1.getRow()), obj.getRowKey());
-       	 assertEquals(Bytes.toString(result1.value()), obj.getData1());
-       	 Get get2 = new Get(Bytes.toBytes(obj.getRowKey()));
-       	 get2.addColumn(CF, CQ2);
-       	 Result result2 = table.get(get2);
-       	 assertEquals(Bytes.toString(result2.getRow()), obj.getRowKey());
-       	 assertEquals(Bytes.toString(result2.value()), obj.getData2());
+    public void testInsert() throws Exception {
+        Table table = utility.createTable(Bytes.toBytes("MyTest"), CF);
+        HBaseTestObj obj = new HBaseTestObj();
+        obj.setRowKey("ROWKEY-1");
+        obj.setData1("DATA-1");
+        obj.setData2("DATA-2");
+        MyHBaseDAO.insertRecord(table, obj);
+        Get get1 = new Get(Bytes.toBytes(obj.getRowKey()));
+        get1.addColumn(CF, CQ1);
+        Result result1 = table.get(get1);
+        assertEquals(Bytes.toString(result1.getRow()), obj.getRowKey());
+        assertEquals(Bytes.toString(result1.value()), obj.getData1());
+        Get get2 = new Get(Bytes.toBytes(obj.getRowKey()));
+        get2.addColumn(CF, CQ2);
+        Result result2 = table.get(get2);
+        assertEquals(Bytes.toString(result2.getRow()), obj.getRowKey());
+        assertEquals(Bytes.toString(result2.value()), obj.getData2());
     }
 }
 ----

http://git-wip-us.apache.org/repos/asf/hbase/blob/b9061c55/src/main/asciidoc/_chapters/upgrading.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc b/src/main/asciidoc/_chapters/upgrading.adoc
index b0a5565..7210040 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -74,12 +74,15 @@ In addition to the usual API versioning considerations HBase has other
compatibi
 * An API needs to be deprecated for a major version before we will change/remove it.
 * APIs available in a patch version will be available in all later patch versions. However,
new APIs may be added which will not be available in earlier patch versions.
 * New APIs introduced in a patch version will only be added in a source compatible way footnote:[See
'Source Compatibility' https://blogs.oracle.com/darcy/entry/kinds_of_compatibility]: i.e.
code that implements public APIs will continue to compile.
-* Example: A user using a newly deprecated API does not need to modify application code with
HBase API calls until the next major version.
+** Example: A user using a newly deprecated API does not need to modify application code
with HBase API calls until the next major version.
 
 .Client Binary compatibility
 * Client code written to APIs available in a given patch release can run unchanged (no recompilation
needed) against the new jars of later patch versions.
 * Client code written to APIs available in a given patch release might not run against the
old jars from an earlier patch version.
-* Example: Old compiled client code will work unchanged with the new jars.
+** Example: Old compiled client code will work unchanged with the new jars.
+* If a Client implements an HBase Interface, a recompile MAY be required when upgrading to
a newer minor version (see the release notes for warnings about incompatible changes). All
effort will be made to provide a default implementation so this case should not arise.
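The "default implementation" escape hatch mentioned above can be sketched with plain Java default methods: a client class compiled against an older version of an interface keeps compiling and running when a later minor release adds a method, provided the new method carries a default body. The names (`Watcher`, `OldClient`) are hypothetical, not real HBase APIs:

```java
interface Watcher {
    String onEvent(String e);

    // A method added in a later minor version: because it has a default
    // body, classes written against the old interface still compile
    // and run unchanged.
    default String onNewEvent(String e) { return "default:" + e; }
}

// Written before onNewEvent existed; needs no change or recompile.
class OldClient implements Watcher {
    public String onEvent(String e) { return "old:" + e; }
}

public class DefaultMethodDemo {
    public static void main(String[] args) {
        Watcher w = new OldClient();
        System.out.println(w.onEvent("x"));     // old:x
        System.out.println(w.onNewEvent("y"));  // default:y
    }
}
```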
 
 .Server-Side Limited API compatibility (taken from Hadoop)
 * Internal APIs are marked as Stable, Evolving, or Unstable
@@ -125,7 +128,7 @@ In addition to the usual API versioning considerations HBase has other
compatibi
 HBase has a lot of API points, but for the compatibility matrix above, we differentiate between
Client API, Limited Private API, and Private API. HBase uses a version of link:https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html[Hadoop's
Interface classification]. HBase's Interface classification classes can be found link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/classification/package-summary.html[here].
 
 * InterfaceAudience: captures the intended audience, possible values are Public (for end
users and external projects), LimitedPrivate (for other Projects, Coprocessors or other plugin
points), and Private (for internal use).
-* InterfaceStability: describes what types of interface changes are permitted. Possible values
are Stable, Evolving, Unstable, and Deprecated.
+* InterfaceStability: describes what types of interface changes are permitted. Possible values
are Stable, Evolving, Unstable, and Deprecated. Note that this annotation is only valid for
classes marked IA.LimitedPrivate. The stability of IA.Public classes is tied only to the
upgrade type (major, minor, or patch), and for IA.Private classes there is no stability
guarantee between releases. Refer to the Compatibility Matrix above for more details.
 
 [[hbase.client.api]]
 HBase Client API::
@@ -142,6 +145,9 @@ HBase Private API::
 [[hbase.versioning.pre10]]
 === Pre 1.0 versions
 
+.HBase Pre-1.0 versions are all EOM
+NOTE: For new installations, do not deploy 0.94.y, 0.96.y, or 0.98.y.  Deploy our stable
version. See link:https://issues.apache.org/jira/browse/HBASE-11642[EOL 0.96], link:https://issues.apache.org/jira/browse/HBASE-16215[clean
up of EOM releases], and link:http://www.apache.org/dist/hbase/[the header of our downloads].
+
 Before the semantic versioning scheme pre-1.0, HBase tracked either Hadoop's versions (0.2x)
or 0.9x versions. If you are into the arcane, checkout our old wiki page on link:http://wiki.apache.org/hadoop/Hbase/HBaseVersions[HBase
Versioning] which tries to connect the HBase version dots. Below sections cover ONLY the releases
before 1.0.
 
 [[hbase.development.series]]
@@ -257,9 +263,6 @@ A rolling upgrade from 0.94.x directly to 0.98.x does not work. The upgrade
path
 
 ==== The "Singularity"
 
-.HBase 0.96.x was EOL'd, September 1st, 2014
-NOTE: Do not deploy 0.96.x  Deploy at least 0.98.x. See link:https://issues.apache.org/jira/browse/HBASE-11642[EOL
0.96].
-
 You will have to stop your old 0.94.x cluster completely to upgrade. If you are replicating
between clusters, both clusters will have to go down to upgrade. Make sure it is a clean shutdown.
The less WAL files around, the faster the upgrade will run (the upgrade will split any log
files it finds in the filesystem as part of the upgrade process). All clients must be upgraded
to 0.96 too.
 
 The API has changed. You will need to recompile your code against 0.96 and you may need to
adjust applications to go against new APIs (TODO: List of changes).

