hbase-commits mailing list archives

From apurt...@apache.org
Subject [02/30] hbase git commit: HBASE-12918 Backport asciidoc changes (Enis Soztutar)
Date Wed, 28 Jan 2015 02:59:28 GMT
http://git-wip-us.apache.org/repos/asf/hbase/blob/7bf6c024/src/main/docbkx/upgrading.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/upgrading.xml b/src/main/docbkx/upgrading.xml
deleted file mode 100644
index 869a44d..0000000
--- a/src/main/docbkx/upgrading.xml
+++ /dev/null
@@ -1,407 +0,0 @@
-<?xml version="1.0"?>
-    <chapter xml:id="upgrading"
-      version="5.0" xmlns="http://docbook.org/ns/docbook"
-      xmlns:xlink="http://www.w3.org/1999/xlink"
-      xmlns:xi="http://www.w3.org/2001/XInclude"
-      xmlns:svg="http://www.w3.org/2000/svg"
-      xmlns:m="http://www.w3.org/1998/Math/MathML"
-      xmlns:html="http://www.w3.org/1999/xhtml"
-      xmlns:db="http://docbook.org/ns/docbook">
-<!--
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-    <title>Upgrading</title>
-    <para>You cannot skip major versions when upgrading.  If you are upgrading from
-    version 0.90.x to 0.94.x, you must first go from 0.90.x to 0.92.x and then go
-    from 0.92.x to 0.94.x.</para>
-    <para>
-        Review <xref linkend="configuration" />, in particular the section on Hadoop version.
-    </para>
-    <section xml:id="hbase.versioning">
-        <title>HBase version numbers</title>
-        <para>HBase has not walked a straight line where version numbers are concerned.
-            Since we came up out of hadoop itself, we originally tracked hadoop versioning.
-            Later we left hadoop versioning behind because we were moving at a different rate
-            to that of our parent.  If you are into the arcane, check out our old wiki page
-            on <link xlink:href="http://wiki.apache.org/hadoop/Hbase/HBaseVersions">HBase Versioning</link>
-            which tries to connect the HBase version dots.</para>
-        <section xml:id="hbase.development.series"><title>Odd/Even Versioning or "Development"" Series Releases</title>
-            <para>Ahead of big releases, we have been putting up preview versions to start the
-                feedback cycle turning over earlier.  These "Development" Series releases,
-                always odd-numbered, come with no guarantees, not even as regards being able
-                to upgrade between two sequential releases (we reserve the right to break compatibility across
-                "Development" Series releases).  Needless to say, these releases are not for
-                production deploys.  They are a preview of what is coming in the hope that
-                interested parties will take the release for a test drive and flag us early if
-                there are issues we've missed ahead of our rolling a production-worthy release.
-            </para>
-            <para>Our first "Development" Series was the 0.89 set that came out ahead of
-                HBase 0.90.0.  HBase 0.95 is another "Development" Series that portends
-                HBase 0.96.0.
-            </para>
-        </section>
-        <section xml:id="hbase.binary.compatibility">
-            <title>Binary Compatibility</title>
-            <para>When we say two HBase versions are compatible, we mean that the versions
-                are wire and binary compatible.  Compatible HBase versions mean that
-                clients can talk to compatible but differently versioned servers.
-                It means too that you can just swap out the jars of one version and replace
-                them with the jars of another, compatible version and all will just work.
-                Unless otherwise specified, HBase point versions are binary compatible.
-                You can safely do rolling upgrades between binary compatible versions; i.e.
-                across point versions: e.g. from 0.94.5 to 0.94.6<footnote><para>See
-                        <link xlink:href="http://search-hadoop.com/m/bOOvwHGW981/Does+compatibility+between+versions+also+mean+binary+compatibility%253F&amp;subj=Re+Does+compatibility+between+versions+also+mean+binary+compatibility+">Does compatibility between versions also mean binary compatibility?</link>
-                        discussion on the hbase dev mailing list.
-                </para></footnote>.
-            </para>
-        </section>
-        <section xml:id="hbase.rolling.restart">
-            <title>Rolling Upgrade between versions/Binary compatibility</title>
-            <para>Unless otherwise specified, HBase point versions are binary compatible.
-                You can do a rolling upgrade between HBase point versions;
-                for example, you can go to 0.94.6 from 0.94.5 by doing a rolling upgrade across the cluster
-                replacing the 0.94.5 binary with a 0.94.6 binary.
-            </para>
-        </section>
-</section>
-
-    <section xml:id="upgrade0.96">
-      <title>Upgrading from 0.94.x to 0.96.x</title>
-      <subtitle>The Singularity</subtitle>
-      <para>You will have to stop your old 0.94.x cluster completely to upgrade.  If you are replicating
-     between clusters, both clusters will have to go down to upgrade.  Make sure it is a clean shutdown.
-     The fewer WAL files around, the faster the upgrade will run (the upgrade will split any log files it
-     finds in the filesystem as part of the upgrade process).  All clients must be upgraded to 0.96 too.
- </para>
- <para>The API has changed.  You will need to recompile your code against 0.96 and you may need to
-     adjust applications to go against new APIs (TODO: List of changes).
- </para>
- <section>
-     <title>Executing the 0.96 Upgrade</title>
-     <note>
-     <para>HDFS and ZooKeeper should be up and running during the upgrade process.</para>
- </note>
- <para>hbase-0.96.0 comes with an upgrade script.  Run
-     <programlisting>$ bin/hbase upgrade</programlisting> to see its usage.
-     The script has two main modes: -check and -execute.
- </para>
-     <section><title>check</title>
-         <para>The <emphasis>check</emphasis> step is run against a running 0.94 cluster.
-             Run it from a downloaded 0.96.x binary.  The <emphasis>check</emphasis> step
-             is looking for the presence of <filename>HFileV1</filename> files.  These are
-             unsupported in hbase-0.96.0.  To purge them -- have them rewritten as HFileV2 --
-             you must run a compaction.
-         </para>
-         <para>The <emphasis>check</emphasis> step prints stats at the end of its run
-             (grep for “Result:” in the log), printing the absolute paths of the tables it scanned,
-             any HFileV1 files found, the regions containing said files (the regions we
-             need to major compact to purge the HFileV1s), and any corrupted files found.
-             A corrupt file is unreadable, and so its version is undefined (neither HFileV1 nor HFileV2).
-         </para>
-         <para>To run the check step, run <programlisting>$ bin/hbase upgrade -check</programlisting>.
-          Here is sample output:
-<programlisting>
-             Tables Processed:
-             hdfs://localhost:41020/myHBase/.META.
-             hdfs://localhost:41020/myHBase/usertable
-             hdfs://localhost:41020/myHBase/TestTable
-             hdfs://localhost:41020/myHBase/t
-
-             Count of HFileV1: 2
-             HFileV1:
-             hdfs://localhost:41020/myHBase/usertable    /fa02dac1f38d03577bd0f7e666f12812/family/249450144068442524
-             hdfs://localhost:41020/myHBase/usertable    /ecdd3eaee2d2fcf8184ac025555bb2af/family/249450144068442512
-
-             Count of corrupted files: 1
-             Corrupted Files:
-             hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812/family/1
-             Count of Regions with HFileV1: 2
-             Regions to Major Compact:
-             hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812
-             hdfs://localhost:41020/myHBase/usertable/ecdd3eaee2d2fcf8184ac025555bb2af
-
-             There are some HFileV1, or corrupt files (files with incorrect major version)
-</programlisting>
-             In the above sample output, there are two HFileV1 files in two regions, and one corrupt file.
-             Corrupt files should probably be removed.  The regions that have HFileV1s need to be major
-             compacted.  To major compact, start up the hbase shell and review how to compact an individual
-             region.  After the major compaction is done, rerun the check step and the HFileV1s should be
-             gone, replaced by HFileV2 instances.
-         </para>
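-         <para>For example, a table-wide major compaction can be requested from the hbase shell;
-             a sketch, with the table name taken from the sample output above:
-             <programlisting>hbase> major_compact 'usertable'</programlisting>
-             Individual regions can also be passed to <code>major_compact</code>; see the
-             shell's <code>help</code> for the exact region-name form.
-         </para>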
-         <para>By default, the check step scans the hbase root directory (defined as hbase.rootdir in the configuration).
-             To scan a specific directory only, pass the <emphasis>-dir</emphasis> option.
-             <programlisting>$ bin/hbase upgrade -check -dir /myHBase/testTable</programlisting>
-             The above command would detect HFileV1s in the /myHBase/testTable directory.
-         </para>
-         <para>
-             Once the check step reports all the HFileV1 files have been rewritten, it is safe to proceed with the
-             upgrade.
-          </para>
-     </section>
-     <section><title>execute</title>
-         <para>After the check step shows the cluster is free of HFileV1, it is safe to proceed with the upgrade.
-             Next is the <emphasis>execute</emphasis> step.  You must <emphasis>SHUTDOWN YOUR 0.94.x CLUSTER</emphasis>
-             before you can run the <emphasis>execute</emphasis> step.  The execute step will not run if it
-             detects running HBase masters or regionservers.
-         <note>
-         <para>HDFS and ZooKeeper should be up and running during the upgrade process.
-             If zookeeper is managed by HBase, then you can start zookeeper so it is available to the upgrade
-             by running <programlisting>$ ./hbase/bin/hbase-daemon.sh  start zookeeper</programlisting>
-         </para></note>
-         </para>
-         <para>
-             The <emphasis>execute</emphasis> upgrade step is made of three substeps.
-
-             <itemizedlist>
-             <listitem> <para>Namespaces: HBase 0.96.0 has support for namespaces.  The upgrade needs to reorder directories in the filesystem for namespaces to work.</para> </listitem>
-             <listitem> <para>ZNodes: All znodes are purged so that new ones can be written in their place using a new protobuf'ed format, and a few are migrated in place, e.g. replication and table state znodes.</para> </listitem>
-             <listitem> <para>WAL Log Splitting: If the 0.94.x cluster shutdown was not clean, we'll split WAL logs as part of migration before
-                     we start up on 0.96.0.  This WAL splitting runs slower than the native distributed WAL splitting because it is all inside the
-                     single upgrade process (so try to get a clean shutdown of the 0.94.x cluster if you can).
-             </para> </listitem>
-         </itemizedlist>
-         </para>
-         <para>
-             To run the <emphasis>execute</emphasis> step, first make sure you have copied the hbase-0.96.0
-             binaries everywhere, under servers and under clients.  Make sure the 0.94.x cluster is down.
-             Then do as follows:
-         <programlisting>$ bin/hbase upgrade -execute</programlisting>
-         Here is some sample output:
-         <programlisting>
-             Starting Namespace upgrade
-             Created version file at hdfs://localhost:41020/myHBase with version=7
-             Migrating table testTable to hdfs://localhost:41020/myHBase/.data/default/testTable
-             …..
-             Created version file at hdfs://localhost:41020/myHBase with version=8
-             Successfully completed NameSpace upgrade.
-             Starting Znode upgrade
-             ….
-             Successfully completed Znode upgrade
-
-             Starting Log splitting
-             …
-             Successfully completed Log splitting
-         </programlisting>
-         </para>
-         <para>
-             If the output from the execute step looks good, start hbase-0.96.0.
-         </para>
-     </section>
-     <section xml:id="096.migration.troubleshooting"><title>Troubleshooting</title>
-     <section xml:id="096.migration.troubleshooting.old.client"><title>Old Client connecting to 0.96 cluster</title>
-         <para>A 0.94.x client connecting to a 0.96 cluster will fail with an exception like the one below.  Upgrade the client.
-             <programlisting>17:22:15  Exception in thread "main" java.lang.IllegalArgumentException: Not a host:port pair: PBUF
-17:22:15  *
-17:22:15   api-compat-8.ent.cloudera.com ��  ���(
-17:22:15    at org.apache.hadoop.hbase.util.Addressing.parseHostname(Addressing.java:60)
-17:22:15    at org.apache.hadoop.hbase.ServerName.&amp;init>(ServerName.java:101)
-17:22:15    at org.apache.hadoop.hbase.ServerName.parseVersionedServerName(ServerName.java:283)
-17:22:15    at org.apache.hadoop.hbase.MasterAddressTracker.bytesToServerName(MasterAddressTracker.java:77)
-17:22:15    at org.apache.hadoop.hbase.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:61)
-17:22:15    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:703)
-17:22:15    at org.apache.hadoop.hbase.client.HBaseAdmin.&amp;init>(HBaseAdmin.java:126)
-17:22:15    at Client_4_3_0.setup(Client_4_3_0.java:716)
-17:22:15    at Client_4_3_0.main(Client_4_3_0.java:63)</programlisting>
-         </para>
-     </section>
-     </section>
- </section>
-
-
-    </section>
-
-    <section xml:id="upgrade0.94">
-      <title>Upgrading from 0.92.x to 0.94.x</title>
-      <para>We used to think that 0.92 and 0.94 were interface compatible and that you could do a
-          rolling upgrade between these versions, but then we found that
-          <link xlink:href="https://issues.apache.org/jira/browse/HBASE-5357">HBASE-5357 Use builder pattern in HColumnDescriptor</link>
-          changed method signatures so that rather than returning void they instead return HColumnDescriptor.  This
-          will throw <programlisting>java.lang.NoSuchMethodError: org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(I)V</programlisting>
-          so 0.92 and 0.94 are NOT compatible.  You cannot do a rolling upgrade between them.
-    </para>
-    </section>
-    <section xml:id="upgrade0.92">
-      <title>Upgrading from 0.90.x to 0.92.x</title>
-      <subtitle>Upgrade Guide</subtitle>
-<para>You will find that 0.92.0 runs a little differently from 0.90.x releases.  Here are a few things to watch out for when upgrading from 0.90.x to 0.92.0.
-<note><title>tl;dr</title>
-<para>
-If you have no patience, here are the important things to know before upgrading.
-<orderedlist>
-<listitem>Once you upgrade, you can’t go back.
-</listitem>
-<listitem>
-MSLAB is on by default. Watch that heap usage if you have a lot of regions.
-</listitem>
-<listitem>
-Distributed splitting is on by default.  It should make region server failover faster.
-</listitem>
-<listitem>
-There’s a separate tarball for security.
-</listitem>
-<listitem>
-If -XX:MaxDirectMemorySize is set in your hbase-env.sh, it’s going to enable the experimental off-heap cache (You may not want this).
-</listitem>
-</orderedlist>
-</para>
-</note>
-</para>
-
-<section>
-<title>You can’t go back!
-</title>
-<para>To move to 0.92.0, all you need to do is shut down your cluster, replace your HBase 0.90.x binaries with HBase 0.92.0 binaries (be sure you clear out all 0.90.x instances) and restart (you cannot do a rolling restart from 0.90.x to 0.92.x -- you must restart).
-On startup, the <varname>.META.</varname> table content is rewritten removing the table schema from the <varname>info:regioninfo</varname> column.
-Also, any flushes done post first startup will write out data in the new 0.92.0 file format, <link xlink:href="http://hbase.apache.org/book.html#hfilev2">HFile V2</link>.
-This means you cannot go back to 0.90.x once you’ve started HBase 0.92.0 over your HBase data directory.
-</para>
-</section>
-
-<section>
-<title>MSLAB is ON by default
-</title>
-<para>In 0.92.0, the <link xlink:href="http://hbase.apache.org/book.html#hbase.hregion.memstore.mslab.enabled">hbase.hregion.memstore.mslab.enabled</link> flag is set to true
-(See <xref linkend="mslab" />).  In 0.90.x it was <constant>false</constant>.  When it is enabled, memstores will allocate memory in 2MB MSLAB chunks even if the
-memstore has zero or just a few small elements.  This is usually fine, but if you had lots of regions per regionserver in a 0.90.x cluster (and MSLAB was off),
-you may find yourself OOME'ing on upgrade because the <mathphrase>thousands of regions * number of column families * 2MB MSLAB (at a minimum)</mathphrase>
-puts your heap over the top; for example, 1000 regions with 3 column families each reserve at least 1000 * 3 * 2MB = 6GB of heap before any data is stored.  Set <varname>hbase.hregion.memstore.mslab.enabled</varname> to
-<constant>false</constant> or set the MSLAB size down from 2MB by setting <varname>hbase.hregion.memstore.mslab.chunksize</varname> to something smaller.
-</para>
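-<para>A sketch of the two remedies in <filename>hbase-site.xml</filename> (pick one; the chunk size value is illustrative):
-<programlisting>
-  &lt;property&gt;
-    &lt;name&gt;hbase.hregion.memstore.mslab.enabled&lt;/name&gt;
-    &lt;value&gt;false&lt;/value&gt;
-  &lt;/property&gt;
-  &lt;!-- or, keep MSLAB but shrink its chunks --&gt;
-  &lt;property&gt;
-    &lt;name&gt;hbase.hregion.memstore.mslab.chunksize&lt;/name&gt;
-    &lt;value&gt;1048576&lt;/value&gt;
-  &lt;/property&gt;</programlisting>
-</para>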
-</section>
-
-<section><title>Distributed splitting is on by default
-</title>
-<para>Previously, WAL logs on crash were split by the Master alone.  In 0.92.0, log splitting is done by the cluster (see “HBASE-1364 [performance] Distributed splitting of regionserver commit logs”).  This should cut down significantly on the amount of time it takes to split logs and get regions back online again.
-</para>
-</section>
-
-<section><title>Memory accounting is different now
-</title>
-<para>In 0.92.0, <xref linkend="hfilev2" /> indices and bloom filters take up residence in the same LRU used for caching blocks that come from the filesystem.
-In 0.90.x, the HFile v1 indices lived outside of the LRU, so they took up space even if the index was on a ‘cold’ file, one that wasn’t being actively used.  With the indices now in the LRU, you may find you
-have less space for block caching.  Adjust your block cache accordingly.  See <xref linkend="block.cache" /> for more detail.
-The block cache default size has been changed in 0.92.0 from 0.2 (20 percent of heap) to 0.25 (25 percent).
-</para>
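-<para>A sketch of setting the cache fraction explicitly in <filename>hbase-site.xml</filename>, assuming
-the stock <varname>hfile.block.cache.size</varname> property (the 0.2 value restores the old default):
-<programlisting>
-  &lt;property&gt;
-    &lt;name&gt;hfile.block.cache.size&lt;/name&gt;
-    &lt;value&gt;0.2&lt;/value&gt;
-  &lt;/property&gt;</programlisting>
-</para>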
-</section>
-
-
-<section><title>On the Hadoop version to use
-</title>
-<para>Run 0.92.0 on Hadoop 1.0.x (or CDH3u3 when it ships).  The performance benefits are worth making the move.  Otherwise, our Hadoop prescription is as it has been; you need a Hadoop that supports a working sync.  See <xref linkend="hadoop" />.
-</para>
-
-<para>If running on Hadoop 1.0.x (or CDH3u3), enable local reads.  See the <link xlink:href="http://files.meetup.com/1350427/hug_ebay_jdcryans.pdf">Practical Caching</link> presentation for ruminations on the performance benefits of ‘going local’ (and for how to enable local reads).
-</para>
-</section>
-<section><title>HBase 0.92.0 ships with ZooKeeper 3.4.2
-</title>
-<para>If you can, upgrade your zookeeper.  If you can’t, 3.4.2 clients should work against 3.3.X ensembles (HBase makes use of the 3.4.2 API).
-</para>
-</section>
-<section>
-<title>Online alter is off by default
-</title>
-<para>In 0.92.0, we’ve added an experimental online schema alter facility (see <xref linkend="hbase.online.schema.update.enable" />).  It's off by default.  Enable it at your own risk.  Online alter and splitting tables do not play well together, so be sure your cluster is quiescent while using this feature (for now).
-</para>
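-<para>A sketch of enabling it in <filename>hbase-site.xml</filename> (at your own risk, as above):
-<programlisting>
-  &lt;property&gt;
-    &lt;name&gt;hbase.online.schema.update.enable&lt;/name&gt;
-    &lt;value&gt;true&lt;/value&gt;
-  &lt;/property&gt;</programlisting>
-</para>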
-</section>
-<section>
-<title>WebUI
-</title>
-<para>The webui has had a few additions made in 0.92.0.  It now shows a list of the regions currently transitioning, recent compactions/flushes, and a process list of running processes (usually empty if all is well and requests are being handled promptly).  Other additions include requests by region, a debugging servlet dump, etc.
-</para>
-</section>
-<section>
-<title>Security tarball
-</title>
-<para>We now ship with two tarballs: secure and insecure HBase.  Documentation on how to set up a secure HBase is on the way.
-</para>
-</section>
-
-<section><title>Experimental off-heap cache
-</title>
-<para>
-A new cache was contributed to 0.92.0 to act as a middle ground between the “on-heap” cache, which is the current LRU cache the region servers have, and the operating system cache, which is out of our control.
-To enable it, set “-XX:MaxDirectMemorySize” in hbase-env.sh to the maximum direct memory size and specify hbase.offheapcache.percentage in hbase-site.xml with the percentage that you want to dedicate to off-heap cache.  This should only be set for servers and not for clients.  Use at your own risk.
-See this blog post for additional information on this new experimental feature: <link xlink:href="http://www.cloudera.com/blog/2012/01/caching-in-hbase-slabcache/">Caching in HBase: SlabCache</link>.
-</para>
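-<para>A sketch of the two settings (the size and percentage values are illustrative, not recommendations):
-<programlisting># in conf/hbase-env.sh
-export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:MaxDirectMemorySize=2G"</programlisting>
-<programlisting>
-  &lt;!-- in conf/hbase-site.xml --&gt;
-  &lt;property&gt;
-    &lt;name&gt;hbase.offheapcache.percentage&lt;/name&gt;
-    &lt;value&gt;0.6&lt;/value&gt;
-  &lt;/property&gt;</programlisting>
-</para>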
-</section>
-
-<section><title>Changes in HBase replication
-</title>
-<para>0.92.0 adds two new features: multi-slave and multi-master replication.  The way to enable this is the same as adding a new peer, so in order to have multi-master you would just run add_peer for each cluster that acts as a master to the other slave clusters.  Collisions are handled at the timestamp level, which may or may not be what you want; this needs to be evaluated on a per-use-case basis.  Replication is still experimental in 0.92 and is disabled by default; run it at your own risk.
-</para>
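-<para>For example, from the hbase shell on a cluster acting as a master, each slave is
-registered as a peer (the peer id and ZooKeeper quorum below are illustrative):
-<programlisting>hbase> add_peer '1', 'zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase'</programlisting>
-</para>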
-</section>
-
-
-<section><title>RegionServer now aborts if OOME
-</title>
-<para>On OOME, we now have the JVM kill -9 the regionserver process so it goes down fast.  Previously, a RegionServer might stick around after incurring an OOME, limping along in some wounded state.  To disable this facility (we recommend you leave it in place), you’d need to edit the bin/hbase file.  Look for the addition of the -XX:OnOutOfMemoryError="kill -9 %p" arguments (see [HBASE-4769] - ‘Abort RegionServer Immediately on OOME’).
-</para>
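-<para>The fragment to look for in <filename>bin/hbase</filename> resembles the following
-(a sketch; the exact line may differ by release):
-<programlisting>HBASE_OPTS="$HBASE_OPTS -XX:OnOutOfMemoryError=\"kill -9 %p\""</programlisting>
-</para>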
-</section>
-
-
-<section><title>HFile V2 and the “Bigger, Fewer” Tendency
-</title>
-<para>0.92.0 stores data in a new format, <xref linkend="hfilev2" />.   As HBase runs, it will move all your data from HFile v1 to HFile v2 format.  This auto-migration will run in the background as flushes and compactions run.
-HFile V2 allows HBase to run with larger regions/files.  In fact, we encourage all HBasers going forward to tend toward Facebook axiom #1: run with larger, fewer regions.
-If you have lots of regions now -- more than 100s per host -- you should look into setting your region size up after you move to 0.92.0 (in 0.92.0, the default size is now 1G, up from 256M), and then running the online merge tool (see “HBASE-1621 merge tool should work on online cluster, but disabled table”).
-</para>
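-<para>A sketch of raising the region size in <filename>hbase-site.xml</filename>, assuming the
-stock <varname>hbase.hregion.max.filesize</varname> property (the 10GB value is illustrative):
-<programlisting>
-  &lt;property&gt;
-    &lt;name&gt;hbase.hregion.max.filesize&lt;/name&gt;
-    &lt;value&gt;10737418240&lt;/value&gt;
-  &lt;/property&gt;</programlisting>
-</para>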
-</section>
-    </section>
-    <section xml:id="upgrade0.90">
-    <title>Upgrading to HBase 0.90.x from 0.20.x or 0.89.x</title>
-          <para>This version of HBase, 0.90.x, can be started on data written by
-              HBase 0.20.x or HBase 0.89.x.  There is no need for a migration step.
-              HBase 0.89.x and 0.90.x do write out the names of region directories
-              differently -- they name them with an md5 hash of the region name rather
-              than a jenkins hash -- which means that once started, there is no
-              going back to HBase 0.20.x.
-          </para>
-          <para>
-             Be sure to remove the <filename>hbase-default.xml</filename> from
-             your <filename>conf</filename>
-             directory on upgrade.  A 0.20.x version of this file will have
-             sub-optimal configurations for 0.90.x HBase.  The
-             <filename>hbase-default.xml</filename> file is now bundled into the
-             HBase jar and read from there.  If you would like to review
-             the content of this file, see it in the src tree at
-             <filename>src/main/resources/hbase-default.xml</filename> or
-             see <xref linkend="hbase_default_configurations" />.
-          </para>
-          <para>
-            Finally, if upgrading from 0.20.x, check your
-            <varname>.META.</varname> schema in the shell.  In the past we would
-            recommend that users run with a 16kb
-            <varname>MEMSTORE_FLUSHSIZE</varname>.
-            Run <code>hbase> scan '-ROOT-'</code> in the shell. This will output
-            the current <varname>.META.</varname> schema.  Check
-            <varname>MEMSTORE_FLUSHSIZE</varname> size.  Is it 16kb (16384)?  If so, you will
-            need to change this (The 'normal'/default value is 64MB (67108864)).
-            Run the script <filename>bin/set_meta_memstore_size.rb</filename>.
-            This will make the necessary edit to your <varname>.META.</varname> schema.
-            Failure to run this change will make for a slow cluster <footnote>
-            <para>
-            See <link xlink:href="https://issues.apache.org/jira/browse/HBASE-3499">HBASE-3499 Users upgrading to 0.90.0 need to have their .META. table updated with the right MEMSTORE_SIZE</link>
-            </para>
-            </footnote>
-            .
-
-          </para>
-          </section>
-    </chapter>

http://git-wip-us.apache.org/repos/asf/hbase/blob/7bf6c024/src/main/docbkx/zookeeper.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/zookeeper.xml b/src/main/docbkx/zookeeper.xml
deleted file mode 100644
index 4a257b5..0000000
--- a/src/main/docbkx/zookeeper.xml
+++ /dev/null
@@ -1,603 +0,0 @@
-<?xml version="1.0"?>
-    <chapter xml:id="zookeeper"
-      version="5.0" xmlns="http://docbook.org/ns/docbook"
-      xmlns:xlink="http://www.w3.org/1999/xlink"
-      xmlns:xi="http://www.w3.org/2001/XInclude"
-      xmlns:svg="http://www.w3.org/2000/svg"
-      xmlns:m="http://www.w3.org/1998/Math/MathML"
-      xmlns:html="http://www.w3.org/1999/xhtml"
-      xmlns:db="http://docbook.org/ns/docbook">
-<!--
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-
-            <title>ZooKeeper<indexterm>
-                <primary>ZooKeeper</primary>
-              </indexterm></title>
-
-            <para>A distributed Apache HBase installation depends on a running ZooKeeper cluster.
-            All participating nodes and clients need to be able to access the
-            running ZooKeeper ensemble. Apache HBase by default manages a ZooKeeper
-            "cluster" for you. It will start and stop the ZooKeeper ensemble
-            as part of the HBase start/stop process. You can also manage the
-            ZooKeeper ensemble independent of HBase and just point HBase at
-            the cluster it should use. To toggle HBase management of
-            ZooKeeper, use the <varname>HBASE_MANAGES_ZK</varname> variable in
-            <filename>conf/hbase-env.sh</filename>. This variable, which
-            defaults to <varname>true</varname>, tells HBase whether to
-            start/stop the ZooKeeper ensemble servers as part of HBase
-            start/stop.</para>
-
-            <para>When HBase manages the ZooKeeper ensemble, you can specify
-            ZooKeeper configuration using its native
-            <filename>zoo.cfg</filename> file, or, the easier option is to
-            just specify ZooKeeper options directly in
-            <filename>conf/hbase-site.xml</filename>. A ZooKeeper
-            configuration option can be set as a property in the HBase
-            <filename>hbase-site.xml</filename> XML configuration file by
-            prefacing the ZooKeeper option name with
-            <varname>hbase.zookeeper.property</varname>. For example, the
-            <varname>clientPort</varname> setting in ZooKeeper can be changed
-            by setting the
-            <varname>hbase.zookeeper.property.clientPort</varname> property.
-            For all default values used by HBase, including ZooKeeper
-            configuration, see <xref linkend="hbase_default_configurations" />. Look for the
-            <varname>hbase.zookeeper.property</varname> prefix <footnote>
-                <para>For the full list of ZooKeeper configurations, see
-                ZooKeeper's <filename>zoo.cfg</filename>. HBase does not ship
-                with a <filename>zoo.cfg</filename> so you will need to browse
-                the <filename>conf</filename> directory in an appropriate
-                ZooKeeper download.</para>
-              </footnote></para>
-
-            <para>You must at least list the ensemble servers in
-            <filename>hbase-site.xml</filename> using the
-            <varname>hbase.zookeeper.quorum</varname> property. This property
-            defaults to a single ensemble member at
-            <varname>localhost</varname> which is not suitable for a fully
-            distributed HBase. (It binds to the local machine only and remote
-            clients will not be able to connect). <note xml:id="how_many_zks">
-                <title>How many ZooKeepers should I run?</title>
-
-                <para>You can run a ZooKeeper ensemble that comprises 1 node
-                only but in production it is recommended that you run a
-                ZooKeeper ensemble of 3, 5 or 7 machines; the more members an
-                ensemble has, the more tolerant the ensemble is of host
-                failures. Also, run an odd number of machines. In ZooKeeper,
-                an even number of peers is supported, but it is normally not used
-                because an even sized ensemble requires, proportionally, more peers
-                to form a quorum than an odd sized ensemble requires. For example, an
-                ensemble with 4 peers requires 3 to form a quorum, while an ensemble with
-                5 also requires 3 to form a quorum. Thus, an ensemble of 5 allows 2 peers to
-                fail, and so is more fault tolerant than the ensemble of 4, which allows
-                only 1 down peer.
-                </para>
-                <para>Give each ZooKeeper server around 1GB of RAM, and if possible, its own
-                dedicated disk (A dedicated disk is the best thing you can do
-                to ensure a performant ZooKeeper ensemble). For very heavily
-                loaded clusters, run ZooKeeper servers on separate machines
-                from RegionServers (DataNodes and TaskTrackers).</para>
-              </note></para>
-
-            <para>For example, to have HBase manage a ZooKeeper quorum on
-            nodes <emphasis>rs{1,2,3,4,5}.example.com</emphasis>, bound to
-            port 2222 (the default is 2181) ensure
-            <varname>HBASE_MANAGES_ZK</varname> is commented out or set to
-            <varname>true</varname> in <filename>conf/hbase-env.sh</filename>
-            and then edit <filename>conf/hbase-site.xml</filename> and set
-            <varname>hbase.zookeeper.property.clientPort</varname> and
-            <varname>hbase.zookeeper.quorum</varname>. You should also set
-            <varname>hbase.zookeeper.property.dataDir</varname> to other than
-            the default as the default has ZooKeeper persist data under
-            <filename>/tmp</filename> which is often cleared on system
-            restart. In the example below we have ZooKeeper persist to
-            <filename>/usr/local/zookeeper</filename>. <programlisting>
-  &lt;configuration&gt;
-    ...
-    &lt;property&gt;
-      &lt;name&gt;hbase.zookeeper.property.clientPort&lt;/name&gt;
-      &lt;value&gt;2222&lt;/value&gt;
-      &lt;description&gt;Property from ZooKeeper's config zoo.cfg.
-      The port at which the clients will connect.
-      &lt;/description&gt;
-    &lt;/property&gt;
-    &lt;property&gt;
-      &lt;name&gt;hbase.zookeeper.quorum&lt;/name&gt;
-      &lt;value&gt;rs1.example.com,rs2.example.com,rs3.example.com,rs4.example.com,rs5.example.com&lt;/value&gt;
-      &lt;description&gt;Comma separated list of servers in the ZooKeeper Quorum.
-      For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
-      By default this is set to localhost for local and pseudo-distributed modes
-      of operation. For a fully-distributed setup, this should be set to a full
-      list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
-      this is the list of servers which we will start/stop ZooKeeper on.
-      &lt;/description&gt;
-    &lt;/property&gt;
-    &lt;property&gt;
-      &lt;name&gt;hbase.zookeeper.property.dataDir&lt;/name&gt;
-      &lt;value&gt;/usr/local/zookeeper&lt;/value&gt;
-      &lt;description&gt;Property from ZooKeeper's config zoo.cfg.
-      The directory where the snapshot is stored.
-      &lt;/description&gt;
-    &lt;/property&gt;
-    ...
-  &lt;/configuration&gt;</programlisting></para>
-  <caution xml:id="zk.version">
-      <title>What version of ZooKeeper should I use?</title>
-      <para>The newer the version, the better.  For example, some folks have been bitten by
-          <link xlink:href="https://issues.apache.org/jira/browse/ZOOKEEPER-1277">ZOOKEEPER-1277</link>.
-          If running zookeeper 3.5+, you can ask hbase to make use of the new multi operation by
-          enabling <xref linkend="hbase.zookeeper.useMulti"/> in your <filename>hbase-site.xml</filename>.
-      </para>
-  </caution>
-  <caution>
-      <title>ZooKeeper Maintenance</title>
-      <para>Be sure to set up the data dir cleaner described under
-          <link xlink:href="http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_maintenance">Zookeeper Maintenance</link>, else you could
-          have 'interesting' problems a couple of months in; i.e. zookeeper could start
-          dropping sessions if it has to run through a directory of hundreds of thousands of
-          logs, which it is wont to do around leader reelection time -- a rare process, but one
-          run on occasion, whether because a machine is dropped or happens to hiccup.</para>
-  </caution>
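-  <para>If you are running ZooKeeper 3.4+, a minimal sketch of its built-in snapshot/log
-      cleaner in <filename>zoo.cfg</filename> (retention values are illustrative; older
-      ensembles need the cron-based cleanup described at the link above instead):
-      <programlisting>autopurge.snapRetainCount=3
-autopurge.purgeInterval=1</programlisting>
-  </para>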
-
-            <section>
-              <title>Using existing ZooKeeper ensemble</title>
-
-              <para>To point HBase at an existing ZooKeeper cluster, one that
-              is not managed by HBase, set <varname>HBASE_MANAGES_ZK</varname>
-              in <filename>conf/hbase-env.sh</filename> to false
-              <programlisting>
-  ...
-  # Tell HBase whether it should manage its own instance of Zookeeper or not.
-  export HBASE_MANAGES_ZK=false</programlisting> Next set ensemble locations
-              and client port, if non-standard, in
-              <filename>hbase-site.xml</filename>, or add a suitably
-              configured <filename>zoo.cfg</filename> to HBase's
-              <filename>CLASSPATH</filename>. HBase will prefer the
-              configuration found in <filename>zoo.cfg</filename> over any
-              settings in <filename>hbase-site.xml</filename>.</para>
-
-              <para>When HBase manages ZooKeeper, it will start/stop the
-              ZooKeeper servers as a part of the regular start/stop scripts.
-              If you would like to run ZooKeeper yourself, independent of
-              HBase start/stop, you would do the following</para>
-
-              <programlisting>
-${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
-</programlisting>
-
-              <para>Note that you can use HBase in this manner to spin up a
-              ZooKeeper cluster, unrelated to HBase. Just make sure to set
-              <varname>HBASE_MANAGES_ZK</varname> to <varname>false</varname>
-              if you want it to stay up across HBase restarts so that when
-              HBase shuts down, it doesn't take ZooKeeper down with it.</para>
-
-              <para>For more information about running a distinct ZooKeeper
-              cluster, see the ZooKeeper <link
-              xlink:href="http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html">Getting
-              Started Guide</link>.  Additionally, see the <link xlink:href="http://wiki.apache.org/hadoop/ZooKeeper/FAQ#A7">ZooKeeper Wiki</link> or the
-          <link xlink:href="http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup">ZooKeeper documentation</link>
-          for more information on ZooKeeper sizing.
-            </para>
-            </section>
-
-
-            <section xml:id="zk.sasl.auth">
-              <title>SASL Authentication with ZooKeeper</title>
-              <para>Newer releases of Apache HBase (&gt;= 0.92)
-              support connecting to a ZooKeeper Quorum that supports
-              SASL authentication (which is available in Zookeeper
-              versions 3.4.0 or later).</para>
-
-              <para>This describes how to set up HBase to mutually
-              authenticate with a ZooKeeper Quorum. ZooKeeper/HBase
-              mutual authentication (<link
-              xlink:href="https://issues.apache.org/jira/browse/HBASE-2418">HBASE-2418</link>)
-              is required as part of a complete secure HBase configuration
-              (<link
-              xlink:href="https://issues.apache.org/jira/browse/HBASE-3025">HBASE-3025</link>).
-
-              For simplicity of explication, this section ignores
-              additional configuration required (Secure HDFS and Coprocessor
-              configuration).  It's recommended to begin with an
-              HBase-managed Zookeeper configuration (as opposed to a
-              standalone Zookeeper quorum) for ease of learning.
-              </para>
-
-              <section><title>Operating System Prerequisites</title>
-
-              <para>
-                  You need to have a working Kerberos KDC setup. For
-                  each <code>$HOST</code> that will run a ZooKeeper
-                  server, you should have a principal
-                  <code>zookeeper/$HOST</code>.  For each such host,
-                  add a service key (using the <code>kadmin</code> or
-                  <code>kadmin.local</code> tool's <code>ktadd</code>
-                  command) for <code>zookeeper/$HOST</code> and copy
-                  this file to <code>$HOST</code>, and make it
-                  readable only to the user that will run zookeeper on
-                  <code>$HOST</code>. Note the location of this file,
-                  which we will use below as
-                  <filename>$PATH_TO_ZOOKEEPER_KEYTAB</filename>.
-              </para>
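-              <para>
-                  A sketch of those <code>kadmin.local</code> steps (the keytab path and
-                  realm handling are illustrative):
-                  <programlisting>kadmin.local -q "addprinc -randkey zookeeper/$HOST"
-kadmin.local -q "ktadd -k $PATH_TO_ZOOKEEPER_KEYTAB zookeeper/$HOST"</programlisting>
-              </para>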
-
-              <para>
-                Similarly, for each <code>$HOST</code> that will run
-                an HBase server (master or regionserver), you should
-                have a principal: <code>hbase/$HOST</code>. For each
-                host, add a keytab file called
-                <filename>hbase.keytab</filename> containing a service
-                key for <code>hbase/$HOST</code>, copy this file to
-                <code>$HOST</code>, and make it readable only to the
-                user that will run an HBase service on
-                <code>$HOST</code>. Note the location of this file,
-                which we will use below as
-                <filename>$PATH_TO_HBASE_KEYTAB</filename>.
-              </para>
-
-              <para>
-                Each user who will be an HBase client should also be
-                given a Kerberos principal. This principal should
-                usually have a password assigned to it (as opposed to,
-                as with the HBase servers, a keytab file) which only
-                this user knows. The client's principal's
-                <code>maxrenewlife</code> should be set so that it can
-                be renewed often enough for the user to complete their
-                HBase client processes. For example, if a user runs a
-                long-running HBase client process that takes at most 3
-                days, we might create this user's principal within
-                <code>kadmin</code> with: <code>addprinc -maxrenewlife
-                3days</code>. The Zookeeper client and server
-                libraries manage their own ticket refreshment by
-                running threads that wake up periodically to do the
-                refreshment.
-              </para>
-
-                <para>On each host that will run an HBase client
-                (e.g. <code>hbase shell</code>), add the following
-                file to the HBase home directory's <filename>conf</filename>
-                directory:</para>
-
-                <programlisting>
-                  Client {
-                    com.sun.security.auth.module.Krb5LoginModule required
-                    useKeyTab=false
-                    useTicketCache=true;
-                  };
-                </programlisting>
-
-                <para>We'll refer to this JAAS configuration file as
-                <filename>$CLIENT_CONF</filename> below.</para>
-              </section>
-
-              <section>
-                <title>HBase-managed Zookeeper Configuration</title>
-
-                <para>On each node that will run a zookeeper, a
-                master, or a regionserver, create a <link
-                xlink:href="http://docs.oracle.com/javase/1.4.2/docs/guide/security/jgss/tutorials/LoginConfigFile.html">JAAS</link>
-                configuration file in the conf directory of the node's
-                <filename>HBASE_HOME</filename> directory that looks like the
-                following:</para>
-
-                <programlisting>
-                  Server {
-                    com.sun.security.auth.module.Krb5LoginModule required
-                    useKeyTab=true
-                    keyTab="$PATH_TO_ZOOKEEPER_KEYTAB"
-                    storeKey=true
-                    useTicketCache=false
-                    principal="zookeeper/$HOST";
-                  };
-                  Client {
-                    com.sun.security.auth.module.Krb5LoginModule required
-                    useKeyTab=true
-                    useTicketCache=false
-                    keyTab="$PATH_TO_HBASE_KEYTAB"
-                    principal="hbase/$HOST";
-                  };
-                </programlisting>
-
-                where the <filename>$PATH_TO_HBASE_KEYTAB</filename> and
-                <filename>$PATH_TO_ZOOKEEPER_KEYTAB</filename> files are what
-                you created above, and <code>$HOST</code> is the hostname for that
-                node.
-
-                <para>The <code>Server</code> section will be used by
-                the Zookeeper quorum server, while the
-                <code>Client</code> section will be used by the HBase
-                master and regionservers. The path to this file should
-                be substituted for the text <filename>$HBASE_SERVER_CONF</filename>
-                in the <filename>hbase-env.sh</filename>
-                listing below.</para>
-
-                <para>
-                  The path to the client JAAS file created earlier should be
-                  substituted for the text <filename>$CLIENT_CONF</filename> in the
-                  <filename>hbase-env.sh</filename> listing below.
-                </para>
-
-                <para>Modify your <filename>hbase-env.sh</filename> to include the
-                following:</para>
-
-                <programlisting>
-                  export HBASE_OPTS="-Djava.security.auth.login.config=$CLIENT_CONF"
-                  export HBASE_MANAGES_ZK=true
-                  export HBASE_ZOOKEEPER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
-                  export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
-                  export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
-                </programlisting>
-
-                where <filename>$HBASE_SERVER_CONF</filename> and
-                <filename>$CLIENT_CONF</filename> are the full paths to the
-                JAAS configuration files created above.
-
-                <para>Modify your <filename>hbase-site.xml</filename> on each node
-                that will run zookeeper, master or regionserver to contain:</para>
-
-                <programlisting><![CDATA[
-                  <configuration>
-                    <property>
-                      <name>hbase.zookeeper.quorum</name>
-                      <value>$ZK_NODES</value>
-                    </property>
-                    <property>
-                      <name>hbase.cluster.distributed</name>
-                      <value>true</value>
-                    </property>
-                    <property>
-                      <name>hbase.zookeeper.property.authProvider.1</name>
-                      <value>org.apache.zookeeper.server.auth.SASLAuthenticationProvider</value>
-                    </property>
-                    <property>
-                      <name>hbase.zookeeper.property.kerberos.removeHostFromPrincipal</name>
-                      <value>true</value>
-                    </property>
-                    <property>
-                      <name>hbase.zookeeper.property.kerberos.removeRealmFromPrincipal</name>
-                      <value>true</value>
-                    </property>
-                  </configuration>
-                  ]]></programlisting>
-
-                <para>where <code>$ZK_NODES</code> is the
-                comma-separated list of hostnames of the Zookeeper
-                Quorum hosts.</para>
-
-                <para>Start your hbase cluster by running one or more
-                of the following set of commands on the appropriate
-                hosts:
-                </para>
-
-                <programlisting>
-                  bin/hbase zookeeper start
-                  bin/hbase master start
-                  bin/hbase regionserver start
-                </programlisting>
-
-              </section>
-
-              <section><title>External Zookeeper Configuration</title>
-                <para>Add a JAAS configuration file that looks like:
-
-                <programlisting>
-                  Client {
-                    com.sun.security.auth.module.Krb5LoginModule required
-                    useKeyTab=true
-                    useTicketCache=false
-                    keyTab="$PATH_TO_HBASE_KEYTAB"
-                    principal="hbase/$HOST";
-                  };
-                </programlisting>
-
-                where the <filename>$PATH_TO_HBASE_KEYTAB</filename> is the keytab
-                created above for HBase services to run on this host, and <code>$HOST</code> is the
-                hostname for that node. Put this in the HBase home's
-                configuration directory. We'll refer to this file's
-                full pathname as <filename>$HBASE_SERVER_CONF</filename> below.</para>
-
-                <para>Modify your hbase-env.sh to include the following:</para>
-
-                <programlisting>
-                  export HBASE_OPTS="-Djava.security.auth.login.config=$CLIENT_CONF"
-                  export HBASE_MANAGES_ZK=false
-                  export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
-                  export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
-                </programlisting>
-
-
-                <para>Modify your <filename>hbase-site.xml</filename> on each node
-                that will run a master or regionserver to contain:</para>
-
-                <programlisting><![CDATA[
-                  <configuration>
-                    <property>
-                      <name>hbase.zookeeper.quorum</name>
-                      <value>$ZK_NODES</value>
-                    </property>
-                    <property>
-                      <name>hbase.cluster.distributed</name>
-                      <value>true</value>
-                    </property>
-                  </configuration>
-                  ]]>
-                </programlisting>
-
-                <para>where <code>$ZK_NODES</code> is the
-                comma-separated list of hostnames of the Zookeeper
-                Quorum hosts.</para>
-
-                <para>
-                  Add a <filename>zoo.cfg</filename> for each Zookeeper Quorum host containing:
-                  <programlisting>
-                      authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
-                      kerberos.removeHostFromPrincipal=true
-                      kerberos.removeRealmFromPrincipal=true
-                  </programlisting>
-
-                  Also on each of these hosts, create a JAAS configuration file containing:
-
-                  <programlisting>
-                  Server {
-                    com.sun.security.auth.module.Krb5LoginModule required
-                    useKeyTab=true
-                    keyTab="$PATH_TO_ZOOKEEPER_KEYTAB"
-                    storeKey=true
-                    useTicketCache=false
-                    principal="zookeeper/$HOST";
-                  };
-                  </programlisting>
-
-                  where <code>$HOST</code> is the hostname of each
-                  Quorum host. We will refer to the full pathname of
-                  this file as <filename>$ZK_SERVER_CONF</filename> below.
-
-                </para>
-
-                <para>
-                  Start your Zookeepers on each Zookeeper Quorum host with:
-
-                  <programlisting>
-                    SERVER_JVMFLAGS="-Djava.security.auth.login.config=$ZK_SERVER_CONF" bin/zkServer.sh start
-                  </programlisting>
-
-                </para>
-
-                <para>
-                  Start your HBase cluster by running one or more of the following set of commands on the appropriate nodes:
-                </para>
-
-                <programlisting>
-                  bin/hbase master start
-                  bin/hbase regionserver start
-                </programlisting>
-
-
-              </section>
-
-              <section>
-                <title>Zookeeper Server Authentication Log Output</title>
-                <para>If the configuration above is successful,
-                you should see something similar to the following in
-                your Zookeeper server logs:
-                <programlisting>
-11/12/05 22:43:39 INFO zookeeper.Login: successfully logged in.
-11/12/05 22:43:39 INFO server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
-11/12/05 22:43:39 INFO zookeeper.Login: TGT refresh thread started.
-11/12/05 22:43:39 INFO zookeeper.Login: TGT valid starting at:        Mon Dec 05 22:43:39 UTC 2011
-11/12/05 22:43:39 INFO zookeeper.Login: TGT expires:                  Tue Dec 06 22:43:39 UTC 2011
-11/12/05 22:43:39 INFO zookeeper.Login: TGT refresh sleeping until: Tue Dec 06 18:36:42 UTC 2011
-..
-11/12/05 22:43:59 INFO auth.SaslServerCallbackHandler:
-  Successfully authenticated client: authenticationID=hbase/ip-10-166-175-249.us-west-1.compute.internal@HADOOP.LOCALDOMAIN;
-  authorizationID=hbase/ip-10-166-175-249.us-west-1.compute.internal@HADOOP.LOCALDOMAIN.
-11/12/05 22:43:59 INFO auth.SaslServerCallbackHandler: Setting authorizedID: hbase
-11/12/05 22:43:59 INFO server.ZooKeeperServer: adding SASL authorization for authorizationID: hbase
-                </programlisting>
-
-                </para>
-
-              </section>
-
-              <section>
-                <title>Zookeeper Client Authentication Log Output</title>
-                <para>On the Zookeeper client side (HBase master or regionserver),
-                you should see something similar to the following:
-
-                <programlisting>
-11/12/05 22:43:59 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=ip-10-166-175-249.us-west-1.compute.internal:2181 sessionTimeout=180000 watcher=master:60000
-11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Opening socket connection to server /10.166.175.249:2181
-11/12/05 22:43:59 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 14851@ip-10-166-175-249
-11/12/05 22:43:59 INFO zookeeper.Login: successfully logged in.
-11/12/05 22:43:59 INFO client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
-11/12/05 22:43:59 INFO zookeeper.Login: TGT refresh thread started.
-11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Socket connection established to ip-10-166-175-249.us-west-1.compute.internal/10.166.175.249:2181, initiating session
-11/12/05 22:43:59 INFO zookeeper.Login: TGT valid starting at:        Mon Dec 05 22:43:59 UTC 2011
-11/12/05 22:43:59 INFO zookeeper.Login: TGT expires:                  Tue Dec 06 22:43:59 UTC 2011
-11/12/05 22:43:59 INFO zookeeper.Login: TGT refresh sleeping until: Tue Dec 06 18:30:37 UTC 2011
-11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Session establishment complete on server ip-10-166-175-249.us-west-1.compute.internal/10.166.175.249:2181, sessionid = 0x134106594320000, negotiated timeout = 180000
-                </programlisting>
-                </para>
-              </section>
-
-              <section>
-                <title>Configuration from Scratch</title>
-
-                This has been tested on the current standard Amazon
-                Linux AMI.  First set up the KDC and principals as
-                described above. Next check out the code and run a sanity
-                check.
-
-                <programlisting>
-                git clone git://git.apache.org/hbase.git
-                cd hbase
-                mvn clean test -Dtest=TestZooKeeperACL
-                </programlisting>
-
-                Then configure HBase as described above.
-                Manually edit target/cached_classpath.txt (see below).
-
-                <programlisting>
-                bin/hbase zookeeper &amp;
-                bin/hbase master &amp;
-                bin/hbase regionserver &amp;
-                </programlisting>
-              </section>
-
-
-              <section>
-                <title>Future improvements</title>
-
-                <section><title>Fix target/cached_classpath.txt</title>
-                <para>
-                You must override the standard hadoop-core jar file from the
-                <code>target/cached_classpath.txt</code>
-                file with the version containing the HADOOP-7070 fix. You can use the following script to do this:
-
-                <programlisting>
-                  echo `find ~/.m2 -name "*hadoop-core*7070*SNAPSHOT.jar"` ':' `cat target/cached_classpath.txt` | sed 's/ //g' > target/tmp.txt
-                  mv target/tmp.txt target/cached_classpath.txt
-                </programlisting>
-
-                </para>
-
-                </section>
-
-                <section>
-                  <title>Set JAAS configuration
-                  programmatically</title>
-
-
-                  This would avoid the need for a separate Hadoop jar
-                  that fixes <link xlink:href="https://issues.apache.org/jira/browse/HADOOP-7070">HADOOP-7070</link>.
-                </section>
-
-                <section>
-                  <title>Elimination of
-                  <code>kerberos.removeHostFromPrincipal</code> and
-                  <code>kerberos.removeRealmFromPrincipal</code></title>
-                </section>
-
-              </section>
-
-
-            </section> <!-- SASL Authentication with ZooKeeper -->
-
-
-
-
-    </chapter>

http://git-wip-us.apache.org/repos/asf/hbase/blob/7bf6c024/src/main/site/asciidoc/acid-semantics.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/acid-semantics.adoc b/src/main/site/asciidoc/acid-semantics.adoc
new file mode 100644
index 0000000..2df559a
--- /dev/null
+++ b/src/main/site/asciidoc/acid-semantics.adoc
@@ -0,0 +1,114 @@
+////
+
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+////
+
+= Apache HBase (TM) ACID Properties
+
+== About this Document
+
+Apache HBase (TM) is not an ACID-compliant database. However, it does guarantee certain specific properties.
+
+This specification enumerates the ACID properties of HBase.
+
+== Definitions
+
+For the sake of common vocabulary, we define the following terms:
+
+Atomicity::
+  An operation is atomic if it either completes entirely or not at all.
+
+Consistency::
+  All actions cause the table to transition from one valid state directly to another (e.g. a row will not disappear during an update).
+
+Isolation::
+  An operation is isolated if it appears to complete independently of any other concurrent transaction.
+
+Durability::
+  Any update that reports "successful" to the client will not be lost.
+
+Visibility::
+  An update is considered visible if any subsequent read will see the update as having been committed.
+
+
+The terms _must_ and _may_ are used as specified by link:https://www.ietf.org/rfc/rfc2119.txt[RFC 2119].
+
+In short, the word _must_ implies that, if some case exists where the statement is not true, it is a bug. The word _may_ implies that, even if the guarantee is provided in a current release, users should not rely on it.
+
+== APIs to Consider
+- Read APIs
+* get
+* scan
+- Write APIs
+* put
+* batch put
+* delete
+- Combination (read-modify-write) APIs
+* incrementColumnValue
+* checkAndPut
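+
+A quick sketch of these APIs in use. This is illustrative only and assumes the HBase 1.0 client API (`ConnectionFactory`/`Table`); the table name `test` and column family `data` are made-up examples:
+
+[source,java]
+----
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.*;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class ApiTour {
+  public static void main(String[] args) throws IOException {
+    Configuration conf = HBaseConfiguration.create();
+    try (Connection connection = ConnectionFactory.createConnection(conf);
+         Table table = connection.getTable(TableName.valueOf("test"))) {
+      // put: a single-row write (atomic within the row, as described below)
+      Put put = new Put(Bytes.toBytes("row1"));
+      put.addColumn(Bytes.toBytes("data"), Bytes.toBytes("1"), Bytes.toBytes("value1"));
+      table.put(put);
+
+      // get: a single-row read
+      Result result = table.get(new Get(Bytes.toBytes("row1")));
+      System.out.println(Bytes.toString(
+          result.getValue(Bytes.toBytes("data"), Bytes.toBytes("1"))));
+
+      // scan: iterate over a range of rows
+      try (ResultScanner scanner = table.getScanner(new Scan())) {
+        for (Result r : scanner) {
+          System.out.println(r);
+        }
+      }
+    }
+  }
+}
+----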
+
+== Guarantees Provided
+
+.Atomicity
+.  All mutations are atomic within a row. Any put will either wholly succeed or wholly fail.footnoteref[Puts will either wholly succeed or wholly fail, provided that they are actually sent to the RegionServer.  If the writebuffer is used, Puts will not be sent until the writebuffer is filled or it is explicitly flushed.]
+.. An operation that returns a _success_ code has completely succeeded.
+.. An operation that returns a _failure_ code has completely failed.
+.. An operation that times out may have succeeded or may have failed. However, it will not have partially succeeded or failed.
+. This is true even if the mutation crosses multiple column families within a row.
+. APIs that mutate several rows will _not_ be atomic across the multiple rows. For example, a multiput that operates on rows 'a','b', and 'c' may return having mutated some but not all of the rows. In such cases, these APIs will return a list of success codes, each of which may have succeeded, failed, or timed out as described above.
+. The checkAndPut API happens atomically, like the typical _compareAndSet (CAS)_ operation found in many hardware architectures (see the sketch below).
+. Mutations are seen to happen in a well-defined order for each row, with no interleaving. For example, if one writer issues the mutation `a=1,b=1,c=1` and another writer issues the mutation `a=2,b=2,c=2`, the row must either be `a=1,b=1,c=1` or `a=2,b=2,c=2` and must *not* be something like `a=1,b=2,c=1`. +
+NOTE: This is not true _across rows_ for multirow batch mutations.
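+
+The following sketch illustrates that CAS behavior. It assumes the same HBase 1.0 client API and the hypothetical `Table table` handle from the sketch above:
+
+[source,java]
+----
+// Atomically set data:state to "done" only if it currently reads "pending".
+// checkAndPut returns false if a concurrent writer changed the cell first.
+byte[] row = Bytes.toBytes("row1");
+byte[] family = Bytes.toBytes("data");
+byte[] qualifier = Bytes.toBytes("state");
+
+Put update = new Put(row);
+update.addColumn(family, qualifier, Bytes.toBytes("done"));
+
+boolean won = table.checkAndPut(row, family, qualifier,
+    Bytes.toBytes("pending"), update);
+if (!won) {
+  // Another mutation changed data:state first; re-read and retry if needed.
+}
+----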
+
+== Consistency and Isolation
+. All rows returned via any access API will consist of a complete row that existed at some point in the table's history.
+. This is true across column families - i.e. a get of a full row that occurs concurrently with some mutations 1,2,3,4,5 will return a complete row that existed at some point in time between mutation i and i+1 for some i between 1 and 5.
+. The state of a row will only move forward through the history of edits to it.
+
+== Consistency of Scans
+A scan is *not* a consistent view of a table. Scans do *not* exhibit _snapshot isolation_.
+
+Rather, scans have the following properties:
+
+. Any row returned by the scan will be a consistent view (i.e. that version of the complete row existed at some point in time)footnoteref[consistency,A consistent view is not guaranteed for intra-row scanning -- i.e. fetching a portion of a row in one RPC then going back to fetch another portion of the row in a subsequent RPC. Intra-row scanning happens when you set a limit on how many values to return per Scan#next (See link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setBatch(int)[Scan#setBatch(int)]).]
+. A scan will always reflect a view of the data _at least as new as_ the beginning of the scan. This satisfies the visibility guarantees enumerated below.
+.. For example, if client A writes data X and then communicates via a side channel to client B, any scans started by client B will contain data at least as new as X.
+.. A scan _must_ reflect all mutations committed prior to the construction of the scanner, and _may_ reflect some mutations committed subsequent to the construction of the scanner.
+.. Scans must include _all_ data written prior to the scan (except in the case where data is subsequently mutated, in which case it _may_ reflect the mutation).
+
+Those familiar with relational databases will recognize this isolation level as "read committed".
+
+NOTE: The guarantees listed above regarding scanner consistency are referring to "transaction commit time", not the "timestamp" field of each cell. That is to say, a scanner started at time _t_ may see edits with a timestamp value greater than _t_, if those edits were committed with a "forward dated" timestamp before the scanner was constructed.
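+
+A sketch of these scanner semantics, under the same assumed client API and example names as above:
+
+[source,java]
+----
+// A scanner reflects all edits committed before it was constructed and
+// may or may not reflect edits committed afterwards ("read committed").
+Put before = new Put(Bytes.toBytes("row-a"));
+before.addColumn(Bytes.toBytes("data"), Bytes.toBytes("1"), Bytes.toBytes("x"));
+table.put(before);                        // committed before the scanner opens
+
+try (ResultScanner scanner = table.getScanner(new Scan())) {
+  Put after = new Put(Bytes.toBytes("row-z"));
+  after.addColumn(Bytes.toBytes("data"), Bytes.toBytes("1"), Bytes.toBytes("y"));
+  table.put(after);                       // committed after the scanner opened
+
+  for (Result r : scanner) {
+    // Guaranteed to include row-a; may or may not include row-z.
+    System.out.println(r);
+  }
+}
+----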
+
+== Visibility
+
+. When a client receives a "success" response for any mutation, that mutation is immediately visible to both that client and any client with whom it later communicates through side channels.footnoteref[consistency]
+. A row must never exhibit so-called "time-travel" properties. That is to say, if a series of mutations moves a row sequentially through a series of states, any sequence of concurrent reads will return a subsequence of those states. +
+For example, if a row's cells are mutated using the `incrementColumnValue` API, a client must never see the value of any cell decrease. +
+This is true regardless of which read API is used to read back the mutation.
+. Any version of a cell that has been returned to a read operation is guaranteed to be durably stored.
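+
+For example, with the assumed client API and a hypothetical counter column, the monotonic behavior looks like this:
+
+[source,java]
+----
+// incrementColumnValue is a server-side read-modify-write. A concurrent
+// reader may observe any intermediate count, but never a decrease.
+byte[] counterRow = Bytes.toBytes("counter-row");
+long v1 = table.incrementColumnValue(counterRow,
+    Bytes.toBytes("data"), Bytes.toBytes("hits"), 1);
+long v2 = table.incrementColumnValue(counterRow,
+    Bytes.toBytes("data"), Bytes.toBytes("hits"), 1);
+// v2 > v1 always holds; no reader ever sees the counter go backwards.
+----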
+
+== Durability
+. All visible data is also durable data. That is to say, a read will never return data that has not been made durable on disk.footnoteref[durability,In the context of Apache HBase, _durably on disk_ implies an `hflush()` call on the transaction log. This does not actually imply an `fsync()` to magnetic media, but rather just that the data has been written to the OS cache on all replicas of the log. In the case of a full datacenter power loss, it is possible that the edits are not truly durable.]
+. Any operation that returns a "success" code (e.g. does not throw an exception) will be made durable.footnoteref[durability]
+. Any operation that returns a "failure" code will not be made durable (subject to the Atomicity guarantees above).
+. All reasonable failure scenarios will not affect any of the guarantees of this document.
+
+== Tunability
+
+All of the above guarantees must be possible within Apache HBase. For users who would like to trade off some guarantees for performance, HBase may offer several tuning options. For example:
+
+* Visibility may be tuned on a per-read basis to allow stale reads or time travel.
+* Durability may be tuned to only flush data to disk on a periodic basis.
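+
+For example, clients since 0.98 can relax WAL durability per mutation via the `Durability` enum (a sketch reusing the assumed `table` handle; note this deliberately weakens the durability guarantee above):
+
+[source,java]
+----
+// ASYNC_WAL acknowledges the write before the WAL entry is synced,
+// trading durability for throughput; the edit may be lost on a crash.
+Put fastPut = new Put(Bytes.toBytes("row1"));
+fastPut.addColumn(Bytes.toBytes("data"), Bytes.toBytes("1"), Bytes.toBytes("v"));
+fastPut.setDurability(Durability.ASYNC_WAL);
+table.put(fastPut);
+----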
+
+== More Information
+
+For more information, see the link:book.html#client[client architecture] and link:book.html#datamodel[data model] sections in the Apache HBase Reference Guide.

http://git-wip-us.apache.org/repos/asf/hbase/blob/7bf6c024/src/main/site/asciidoc/bulk-loads.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/bulk-loads.adoc b/src/main/site/asciidoc/bulk-loads.adoc
new file mode 100644
index 0000000..4dc463d
--- /dev/null
+++ b/src/main/site/asciidoc/bulk-loads.adoc
@@ -0,0 +1,19 @@
+////
+
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+////
+
+= Bulk Loads in Apache HBase (TM)
+
+This page has been retired.  The contents have been moved to the link:book.html#arch.bulk.load[Bulk Loading] section in the Reference Guide.
+

http://git-wip-us.apache.org/repos/asf/hbase/blob/7bf6c024/src/main/site/asciidoc/cygwin.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/cygwin.adoc b/src/main/site/asciidoc/cygwin.adoc
new file mode 100644
index 0000000..bbb4b34
--- /dev/null
+++ b/src/main/site/asciidoc/cygwin.adoc
@@ -0,0 +1,193 @@
+////
+
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+////
+
+
+= Installing Apache HBase (TM) on Windows using Cygwin
+
+== Introduction
+
+link:http://hbase.apache.org[Apache HBase (TM)] is a distributed, column-oriented store, modeled after Google's link:http://research.google.com/archive/bigtable.html[BigTable]. Apache HBase is built on top of link:http://hadoop.apache.org[Hadoop] for its link:http://hadoop.apache.org/mapreduce[MapReduce] and link:http://hadoop.apache.org/hdfs[distributed file system] implementations. All these projects are open-source and part of the link:http://www.apache.org[Apache Software Foundation].
+
+== Purpose
+
+This document explains the *intricacies* of running Apache HBase on Windows using Cygwin as an all-in-one single-node installation for testing and development. The HBase link:http://hbase.apache.org/apidocs/overview-summary.html#overview_description[Overview] and link:book.html#getting_started[QuickStart] guides on the other hand go a long way in explaining how to set up link:http://hadoop.apache.org/hbase[HBase] in more complex deployment scenarios.
+
+== Installation
+
+For running Apache HBase on Windows, three technologies are required:
+
+* Java
+* Cygwin
+* SSH
+
+The following paragraphs detail the installation of each of the aforementioned technologies.
+
+=== Java
+
+HBase depends on the link:http://java.sun.com/javase/6/[Java Platform, Standard Edition, 6 Release]. So the target system has to be provided with at least the Java Runtime Environment (JRE); however, if the system will also be used for development, the Java Development Kit (JDK) is preferred. You can download the latest versions of both from link:http://java.sun.com/javase/downloads/index.jsp[Sun's download page]. Installation is a simple GUI wizard that guides you through the process.
+
+=== Cygwin
+
+Cygwin is probably the oddest technology in this solution stack. It provides a dynamic link library that emulates most of a *nix environment on Windows. On top of that, a whole bunch of the most common *nix tools are supplied. Combined, the DLL and the tools form a very *nix-like environment on Windows.
+
+For installation, Cygwin provides the link:http://cygwin.com/setup.exe[`setup.exe` utility] that tracks the versions of all installed components on the target system and provides the mechanism for installing or updating everything from the mirror sites of Cygwin.
+
+To support installation, the `setup.exe` utility uses two directories on the target system: the *Root* directory for Cygwin (defaults to _C:\cygwin_), which will become _/_ within the eventual Cygwin installation, and the *Local Package* directory (e.g. _C:\cygsetup_), which is the cache where `setup.exe` stores the packages before they are installed. The cache must not be the same folder as the Cygwin root.
+
+Perform the following steps to install Cygwin, which are elaborately detailed in the link:http://cygwin.com/cygwin-ug-net/setup-net.html[2nd chapter] of the link:http://cygwin.com/cygwin-ug-net/cygwin-ug-net.html[Cygwin User's Guide].
+
+. Make sure you have `Administrator` privileges on the target system.
+. Choose and create your *Root* and *Local Package* directories. A good suggestion is to use `C:\cygwin\root` and `C:\cygwin\setup` folders.
+. Download the `setup.exe` utility and save it to the *Local Package* directory. Run the `setup.exe` utility.
+.. Choose  the `Install from Internet` option.
+.. Choose your *Root* and *Local Package* folders.
+.. Select an appropriate mirror.
+.. Don't select any additional packages yet, as we only want to install Cygwin for now.
+.. Wait for download and install.
+.. Finish the installation.
+. Optionally, you can now also add a shortcut to your Start menu pointing to the `setup.exe` utility in the *Local Package* folder.
+. Add a `CYGWIN_HOME` system-wide environment variable that points to your *Root* directory.
+. Add `%CYGWIN_HOME%\bin` to the end of your `PATH` environment variable.
+. Reboot the system after making changes to the environment variables; otherwise the OS will not be able to find the Cygwin utilities.
+. Test your installation by running your freshly created shortcuts or the `Cygwin.bat` command in the *Root* folder. You should end up in a terminal window that is running a link:http://www.gnu.org/software/bash/manual/bashref.html[Bash shell]. Test the shell by issuing following commands:
+.. `cd /` should take you to the *Root* directory in Cygwin.
+.. The `ls` command should list all files and folders in the current directory.
+.. Use the `exit` command to end the terminal.
+. When needed, to *uninstall* Cygwin you can simply delete the *Root* and *Local Package* directory, and the *shortcuts* that were created during installation.
+
+=== SSH
+
+HBase (and Hadoop) rely on link:http://nl.wikipedia.org/wiki/Secure_Shell[*SSH*] for inter-process/inter-node *communication* and launching *remote commands*. SSH will be provisioned on the target system via Cygwin, which supports running Cygwin programs as *Windows services*!
+
+. Rerun the `setup.exe` utility.
+. Leave all parameters as is, skipping through the wizard using the `Next` button until the `Select Packages` panel is shown.
+. Maximize the window and click the `View` button to toggle to the list view, which is ordered alphabetically on `Package`, making it easier to find the packages we'll need.
+. Select the following packages by clicking the status word (normally `Skip`) so it's marked for installation. Use the `Next` button to download and install the packages.
+.. `OpenSSH`
+.. `tcp_wrappers`
+.. `diffutils`
+.. `zlib`
+. Wait for the install to complete and finish the installation.
+
+=== HBase
+
+Download the *latest release* of Apache HBase from link:http://www.apache.org/dyn/closer.cgi/hbase/[the website]. As the Apache HBase distributable is just a zipped archive, installation is as simple as unpacking the archive so it ends up in its final *installation* directory. Notice that HBase has to be installed in Cygwin and a good directory suggestion is to use `/usr/local/` (or [*Root* directory]`\usr\local` in Windows slang). You should end up with a `/usr/local/hbase-_version_` installation in Cygwin.
+
+This finishes the installation. We move on to the configuration.
+
+== Configuration
+
+There are 3 parts left to configure: *Java, SSH and HBase* itself. The following paragraphs explain each topic in detail.
+
+=== Java
+
+One important thing to remember in shell scripting in general (i.e. *nix and Windows) is that managing, manipulating and assembling path names that contain spaces can be very hard, due to the need to escape and quote those characters and strings. So we try to stay away from spaces in path names. *nix environments can help us out here very easily by using *symbolic links*.
+
+. Create a link in `/usr/local` to the Java home directory by using the following command and substituting the name of your chosen Java environment: +
+----
+ln -s /cygdrive/c/Program\ Files/Java/_jre name_ /usr/local/_jre name_
+----
+. Test your Java installation by changing directories to your Java folder with `cd /usr/local/_jre name_` and issuing the command `./bin/java -version`. This should output the version of the chosen JRE.
+
+=== SSH 
+
+Configuring *SSH* is quite elaborate, but primarily a question of launching it by default as a *Windows service*.
+
+. On Windows Vista and above make sure you run the Cygwin shell with *elevated privileges*, by right-clicking on the shortcut and using `Run as Administrator`.
+. First of all, we have to make sure the *rights on some crucial files* are correct. Use the commands underneath. You can verify all rights by using the `ls -l` command on the different files. Also, notice that the auto-completion feature in the shell using `TAB` is extremely handy in these situations.
+.. `chmod +r /etc/passwd` to make the passwords file readable for all
+.. `chmod u+w /etc/passwd` to make the passwords file writable for the owner
+.. `chmod +r /etc/group` to make the groups file readable for all
+.. `chmod u+w /etc/group` to make the groups file writable for the owner
+.. `chmod 755 /var` to make the var folder writable to owner and readable and executable to all
+. Edit the */etc/hosts.allow* file using your favorite editor (why not `vi` in the shell!) and make sure the following two lines are in there before the `PARANOID` line: +
+----
+ALL : localhost 127.0.0.1/32 : allow
+ALL : [::1]/128 : allow
+----
+. Next we have to *configure SSH* by using the script `ssh-host-config`.
+.. If this script asks to overwrite an existing `/etc/ssh_config`, answer `yes`.
+.. If this script asks to overwrite an existing `/etc/sshd_config`, answer `yes`.
+.. If this script asks to use privilege separation, answer `yes`.
+.. If this script asks to install `sshd` as a service, answer `yes`. Make sure you started your shell as Administrator!
+.. If this script asks for the CYGWIN value, just press `Enter` as the default is `ntsec`.
+.. If this script asks to create the `sshd` account, answer `yes`.
+.. If this script asks to use a different user name as service account, answer `no` as the default will suffice.
+.. If this script asks to create the `cyg_server` account, answer `yes`. Enter a password for the account.
+. *Start the SSH service* using `net start sshd` or `cygrunsrv --start sshd`. Notice that `cygrunsrv` is the utility that makes the process run as a Windows service. Confirm that you see a message stating that `the CYGWIN sshd service was started successfully.`
+. Harmonize Windows and Cygwin *user accounts* by using the commands: +
+----
+mkpasswd -cl > /etc/passwd
+mkgroup --local > /etc/group
+----
+. Test the *installation of SSH*:
+.. Open a new Cygwin terminal.
+.. Use the command `whoami` to verify your userID.
+.. Issue an `ssh localhost` to connect to the system itself.
+.. Answer `yes` when presented with the server's fingerprint.
+.. Issue your password when prompted.
+.. Test a few commands in the remote session.
+.. The `exit` command should take you back to your first shell in Cygwin.
+. A second `exit` should terminate the Cygwin shell.
+
+=== HBase
+
+If all previous configurations are working properly, we just need some tinkering with the *HBase config* files so that everything resolves properly on Windows/Cygwin. All files and paths referenced here start from the HBase *installation* directory as the working directory.
+
+. HBase uses the `./conf/hbase-env.sh` file to configure its dependencies on the runtime environment. Copy and uncomment the following lines just underneath their originals, changing them to fit your environment. They should read something like: +
+----
+export JAVA_HOME=/usr/local/_jre name_
+export HBASE_IDENT_STRING=$HOSTNAME
+----
+. HBase uses the `./conf/hbase-default.xml` file for configuration. Some properties do not resolve to existing directories because the JVM runs on Windows. This is the major issue to keep in mind when working with Cygwin: within the shell all paths are *nix-like, hence relative to the root `/`. However, every parameter that is consumed by the Windows processes themselves needs to be a Windows setting, hence `C:\`-like. Change the following properties in the configuration file, adjusting paths where necessary to conform with your own installation (a sample of the resulting XML follows this list):
+.. `hbase.rootdir` must read e.g. `file:///C:/cygwin/root/tmp/hbase/data`
+.. `hbase.tmp.dir` must read `C:/cygwin/root/tmp/hbase/tmp`
+.. `hbase.zookeeper.quorum` must read `127.0.0.1` because for some reason `localhost` doesn't seem to resolve properly on Cygwin.
+. Make sure the configured `hbase.rootdir` and `hbase.tmp.dir` *directories exist* and have the proper *rights* set up, e.g. by issuing a `chmod 777` on them.
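+
+For illustration, the first two properties above would end up looking something like this in the configuration file (the values match the example paths given; adjust them to your own installation):
+
+----
+<property>
+  <name>hbase.rootdir</name>
+  <value>file:///C:/cygwin/root/tmp/hbase/data</value>
+</property>
+<property>
+  <name>hbase.tmp.dir</name>
+  <value>C:/cygwin/root/tmp/hbase/tmp</value>
+</property>
+----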
+
+== Testing
+
+This should conclude the installation and configuration of Apache HBase on Windows using Cygwin. So it's time *to test it*.
+
+. Start a Cygwin *terminal*, if you haven't already.
+. Change directory to the HBase *installation* using `cd /usr/local/hbase-_version_`, preferably using auto-completion.
+. *Start HBase* using the command `./bin/start-hbase.sh`.
+.. When prompted to accept the SSH fingerprint, answer `yes`.
+.. When prompted, provide your password, possibly multiple times.
+.. When the command completes, the HBase server should have started.
+.. However, to be absolutely certain, check the logs in the `./logs` directory for any exceptions.
+. Next we *start the HBase shell* using the command `./bin/hbase shell`.
+. We run some simple *test commands*:
+.. Create a simple table using the command `create 'test', 'data'`.
+.. Verify the table exists using the command `list`.
+.. Insert data into the table using e.g. +
+----
+put 'test', 'row1', 'data:1', 'value1'
+put 'test', 'row2', 'data:2', 'value2'
+put 'test', 'row3', 'data:3', 'value3'
+----
+.. List all rows in the table using the command `scan 'test'`, which should list all the rows previously inserted. Notice how 3 new columns were added without changing the schema!
+.. Finally we get rid of the table by issuing `disable 'test'` followed by `drop 'test'`, verified by `list`, which should give an empty listing.
+. *Leave the shell* using `exit`.
+. To *stop the HBase server* issue the `./bin/stop-hbase.sh` command, and wait for it to complete! Killing the process might corrupt your data on disk.
+. In case of *problems*,
+.. Verify the HBase logs in the `./logs` directory.
+.. Try to fix the problem.
+.. Get help on the forums or IRC (`#hbase@freenode.net`). People are very active and keen to help out!
+.. Stop and retest the server.
+
+== Conclusion
+
+Now that your *HBase* server is running, *start coding* and build that next killer app on this particular, but scalable, datastore!
+

http://git-wip-us.apache.org/repos/asf/hbase/blob/7bf6c024/src/main/site/asciidoc/export_control.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/export_control.adoc b/src/main/site/asciidoc/export_control.adoc
new file mode 100644
index 0000000..4a4b2ae
--- /dev/null
+++ b/src/main/site/asciidoc/export_control.adoc
@@ -0,0 +1,40 @@
+////
+
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+////
+
+
+= Export Control
+
+This distribution uses or includes cryptographic software. The country in
+which you currently reside may have restrictions on the import, possession,
+use, and/or re-export to another country, of encryption software. BEFORE
+using any encryption software, please check your country's laws, regulations
+and policies concerning the import, possession, or use, and re-export of
+encryption software, to see if this is permitted. See the
+link:http://www.wassenaar.org/[Wassenaar Arrangement] for more
+information.
+
+The U.S. Government Department of Commerce, Bureau of Industry and Security 
+(BIS), has classified this software as Export Commodity Control Number (ECCN) 
+5D002.C.1, which includes information security software using or performing 
+cryptographic functions with asymmetric algorithms. The form and manner of this
+Apache Software Foundation distribution makes it eligible for export under the 
+License Exception ENC Technology Software Unrestricted (TSU) exception (see the
+BIS Export Administration Regulations, Section 740.13) for both object code and
+source code.
+
+Apache HBase uses the built-in Java cryptography libraries. See Oracle's
+information regarding
+link:http://www.oracle.com/us/products/export/export-regulations-345813.html[Java cryptographic export regulations]
+for more details.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/7bf6c024/src/main/site/asciidoc/index.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/index.adoc b/src/main/site/asciidoc/index.adoc
new file mode 100644
index 0000000..ac50fc8
--- /dev/null
+++ b/src/main/site/asciidoc/index.adoc
@@ -0,0 +1,71 @@
+////
+
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+////
+
+= Apache HBase™ Home
+
+.Welcome to Apache HBase(TM)
+link:http://www.apache.org/[Apache HBase(TM)] is the link:http://hadoop.apache.org[Hadoop] database, a distributed, scalable, big data store.
+
+.When Would I Use Apache HBase?
+Use Apache HBase when you need random, realtime read/write access to your Big Data. +
+This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.
+
+Apache HBase is an open-source, distributed, versioned, non-relational database modeled after Google's link:http://research.google.com/archive/bigtable.html[Bigtable: A Distributed Storage System for Structured Data] by Chang et al.
+
+Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
+
+.Features
+- Linear and modular scalability.
+- Strictly consistent reads and writes.
+- Automatic and configurable sharding of tables.
+- Automatic failover support between RegionServers.
+- Convenient base classes for backing Hadoop MapReduce jobs with Apache HBase tables.
+- Easy to use Java API for client access.
+- Block cache and Bloom filters for real-time queries.
+- Query predicate push down via server-side Filters.
+- Thrift gateway and a REST-ful Web service that supports XML, Protobuf, and binary data encoding options.
+- Extensible JRuby-based (JIRB) shell.
+- Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX.
+
+.Where Can I Get More Information?
+See the link:book.html#arch.overview[Architecture Overview], the link:book.html#faq[FAQ] and the other documentation links at the top!
+
+.Export Control
+The HBase distribution includes cryptographic software. See the link:export_control.html[export control notice].
+
+== News
+February 17th, 2015:: link:http://www.meetup.com/hbaseusergroup/events/219260093/[HBase meetup around Strata+Hadoop World] in San Jose
+
+January 15th, 2015:: link:http://www.meetup.com/hbaseusergroup/events/218744798/[HBase meetup @ AppDynamics] in San Francisco
+
+November 20th, 2014::  link:http://www.meetup.com/hbaseusergroup/events/205219992/[HBase meetup @ WANdisco] in San Ramon
+
+October 27th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/207386102/[HBase Meetup @ Apple] in Cupertino
+
+October 15th, 2014:: link:http://www.meetup.com/HBase-NYC/events/207655552[HBase Meetup @ Google] on the night before Strata/HW in NYC
+
+September 25th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/203173692/[HBase Meetup @ Continuuity] in Palo Alto
+
+August 28th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/197773762/[HBase Meetup @ Sift Science] in San Francisco
+
+July 17th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/190994082/[HBase Meetup @ HP] in Sunnyvale
+
+June 5th, 2014:: link:http://www.meetup.com/Hadoop-Summit-Community-San-Jose/events/179081342/[HBase BOF at Hadoop Summit], San Jose Convention Center
+
+May 5th, 2014:: link:http://www.hbasecon.com[HBaseCon2014] at the Hilton San Francisco on Union Square
+
+March 12th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/160757912/[HBase Meetup @ Ancestry.com] in San Francisco
+
+View link:old_news.html[Old News]

