hbase-commits mailing list archives

From e...@apache.org
Subject svn commit: r1462679 [10/14] - in /hbase/hbase.apache.org/trunk: ./ book/ case_studies/ community/ configuration/ developer/ getting_started/ ops_mgt/ performance/ rpc/
Date Sat, 30 Mar 2013 00:19:57 GMT
Modified: hbase/hbase.apache.org/trunk/configuration.html
URL: http://svn.apache.org/viewvc/hbase/hbase.apache.org/trunk/configuration.html?rev=1462679&r1=1462678&r2=1462679&view=diff
==============================================================================
--- hbase/hbase.apache.org/trunk/configuration.html (original)
+++ hbase/hbase.apache.org/trunk/configuration.html Sat Mar 30 00:19:55 2013
@@ -1,6 +1,6 @@
 <html><head>
       <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
-   <title>Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration</title><link rel="stylesheet" type="text/css" href="css/freebsd_docbook.css"><meta name="generator" content="DocBook XSL-NS Stylesheets V1.76.1"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="chapter" title="Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration"><div class="titlepage"><div><div><h2 class="title"><a name="configuration"></a>Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration</h2></div></div></div><div class="toc"><p><b>Table of Contents</b></p><dl><dt><span class="section"><a href="#basic.prerequisites">1.1. Basic Prerequisites</a></span></dt><dd><dl><dt><span class="section"><a href="#java">1.1.1. Java</a></span></dt><dt><span class="section"><a href="#os">1.1.2. Operating System</a></span></dt><dt><span class="section"><a href="#hadoop">1.1.3. Hadoop</a></span></dt></dl></dd><dt><span class="section"><a href="#standalone_dist">1.2. HBase run 
 modes: Standalone and Distributed</a></span></dt><dd><dl><dt><span class="section"><a href="#standalone">1.2.1. Standalone HBase</a></span></dt><dt><span class="section"><a href="#distributed">1.2.2. Distributed</a></span></dt><dt><span class="section"><a href="#confirm">1.2.3. Running and Confirming Your Installation</a></span></dt></dl></dd><dt><span class="section"><a href="#config.files">1.3. Configuration Files</a></span></dt><dd><dl><dt><span class="section"><a href="#hbase.site">1.3.1. <code class="filename">hbase-site.xml</code> and <code class="filename">hbase-default.xml</code></a></span></dt><dt><span class="section"><a href="#hbase.env.sh">1.3.2. <code class="filename">hbase-env.sh</code></a></span></dt><dt><span class="section"><a href="#log4j">1.3.3. <code class="filename">log4j.properties</code></a></span></dt><dt><span class="section"><a href="#client_dependencies">1.3.4. Client configuration and dependencies connecting to an HBase cluster</a></span></dt></dl
 ></dd><dt><span class="section"><a href="#example_config">1.4. Example Configurations</a></span></dt><dd><dl><dt><span class="section"><a href="#d1694e2061">1.4.1. Basic Distributed HBase Install</a></span></dt></dl></dd><dt><span class="section"><a href="#important_configurations">1.5. The Important Configurations</a></span></dt><dd><dl><dt><span class="section"><a href="#required_configuration">1.5.1. Required Configurations</a></span></dt><dt><span class="section"><a href="#recommended_configurations">1.5.2. Recommended Configurations</a></span></dt><dt><span class="section"><a href="#other_configuration">1.5.3. Other Configurations</a></span></dt></dl></dd></dl></div><p>This chapter is the Not-So-Quick start guide to Apache HBase (TM) configuration.  It goes
+   <title>Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration</title><link rel="stylesheet" type="text/css" href="css/freebsd_docbook.css"><meta name="generator" content="DocBook XSL-NS Stylesheets V1.76.1"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="chapter" title="Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration"><div class="titlepage"><div><div><h2 class="title"><a name="configuration"></a>Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration</h2></div></div></div><div class="toc"><p><b>Table of Contents</b></p><dl><dt><span class="section"><a href="#basic.prerequisites">1.1. Basic Prerequisites</a></span></dt><dd><dl><dt><span class="section"><a href="#java">1.1.1. Java</a></span></dt><dt><span class="section"><a href="#os">1.1.2. Operating System</a></span></dt><dt><span class="section"><a href="#hadoop">1.1.3. Hadoop</a></span></dt></dl></dd><dt><span class="section"><a href="#standalone_dist">1.2. HBase run 
 modes: Standalone and Distributed</a></span></dt><dd><dl><dt><span class="section"><a href="#standalone">1.2.1. Standalone HBase</a></span></dt><dt><span class="section"><a href="#distributed">1.2.2. Distributed</a></span></dt><dt><span class="section"><a href="#confirm">1.2.3. Running and Confirming Your Installation</a></span></dt></dl></dd><dt><span class="section"><a href="#config.files">1.3. Configuration Files</a></span></dt><dd><dl><dt><span class="section"><a href="#hbase.site">1.3.1. <code class="filename">hbase-site.xml</code> and <code class="filename">hbase-default.xml</code></a></span></dt><dt><span class="section"><a href="#hbase.env.sh">1.3.2. <code class="filename">hbase-env.sh</code></a></span></dt><dt><span class="section"><a href="#log4j">1.3.3. <code class="filename">log4j.properties</code></a></span></dt><dt><span class="section"><a href="#client_dependencies">1.3.4. Client configuration and dependencies connecting to an HBase cluster</a></span></dt></dl
 ></dd><dt><span class="section"><a href="#example_config">1.4. Example Configurations</a></span></dt><dd><dl><dt><span class="section"><a href="#d1699e2061">1.4.1. Basic Distributed HBase Install</a></span></dt></dl></dd><dt><span class="section"><a href="#important_configurations">1.5. The Important Configurations</a></span></dt><dd><dl><dt><span class="section"><a href="#required_configuration">1.5.1. Required Configurations</a></span></dt><dt><span class="section"><a href="#recommended_configurations">1.5.2. Recommended Configurations</a></span></dt><dt><span class="section"><a href="#other_configuration">1.5.3. Other Configurations</a></span></dt></dl></dd></dl></div><p>This chapter is the Not-So-Quick start guide to Apache HBase (TM) configuration.  It goes
     over system requirements, Hadoop setup, the different Apache HBase run modes, and the
    various configurations in HBase.  Please read this chapter carefully.  At a minimum
     ensure that all <a class="xref" href="#basic.prerequisites" title="1.1.&nbsp;Basic Prerequisites">Section&nbsp;1.1, &#8220;Basic Prerequisites&#8221;</a> have
@@ -13,7 +13,7 @@
         off the ground -- and then add configuration to an XML file to
         do things like override HBase defaults, tell HBase what Filesystem to
         use, and the location of the ZooKeeper ensemble
-        <sup>[<a name="d1694e14" href="#ftn.d1694e14" class="footnote">1</a>]</sup>
+        <sup>[<a name="d1699e14" href="#ftn.d1699e14" class="footnote">1</a>]</sup>
         .
     </p><p>When running in distributed mode, after you make
     an edit to an HBase configuration, make sure you copy the
@@ -30,7 +30,7 @@
         on the hadoop wiki.</p></div><div class="section" title="1.1.2.2.&nbsp;DNS"><div class="titlepage"><div><div><h4 class="title"><a name="dns"></a>1.1.2.2.&nbsp;DNS</h4></div></div></div><p>HBase uses the local hostname to self-report its IP address.
         Both forward and reverse DNS resolving must work in versions of
         HBase previous to 0.92.0
-        <sup>[<a name="d1694e63" href="#ftn.d1694e63" class="footnote">2</a>]</sup>.</p><p>If your machine has multiple interfaces, HBase will use the
+        <sup>[<a name="d1699e63" href="#ftn.d1699e63" class="footnote">2</a>]</sup>.</p><p>If your machine has multiple interfaces, HBase will use the
         interface that the primary hostname resolves to.</p><p>If this is insufficient, you can set
         <code class="varname">hbase.regionserver.dns.interface</code> to indicate the
         primary interface. This only works if your cluster configuration is
@@ -42,9 +42,9 @@
         <a class="link" href="http://en.wikipedia.org/wiki/Network_Time_Protocol" target="_top">NTP</a>
         on your cluster, or an equivalent.</p><p>If you are having problems querying data, or "weird" cluster
         operations, check system time!</p></div><div class="section" title="1.1.2.5.&nbsp; ulimit and nproc"><div class="titlepage"><div><div><h4 class="title"><a name="ulimit"></a>1.1.2.5.&nbsp;
-          <code class="varname">ulimit</code><a class="indexterm" name="d1694e103"></a>
+          <code class="varname">ulimit</code><a class="indexterm" name="d1699e103"></a>
             and
-          <code class="varname">nproc</code><a class="indexterm" name="d1694e109"></a>
+          <code class="varname">nproc</code><a class="indexterm" name="d1699e109"></a>
         </h4></div></div></div><p>Apache HBase is a database.  It uses a lot of files all at the same time.
         The default ulimit -n -- i.e. user file limit -- of 1024 on most *nix systems
        is insufficient (on Mac OS X it is 256). Any significant amount of loading will
@@ -62,15 +62,15 @@
         </p><p>You should also up the hbase users'
         <code class="varname">nproc</code> setting; under load, a low-nproc
         setting could manifest as <code class="classname">OutOfMemoryError</code>
-        <sup>[<a name="d1694e128" href="#ftn.d1694e128" class="footnote">3</a>]</sup>
-        <sup>[<a name="d1694e135" href="#ftn.d1694e135" class="footnote">4</a>]</sup>.
+        <sup>[<a name="d1699e128" href="#ftn.d1699e128" class="footnote">3</a>]</sup>
+        <sup>[<a name="d1699e135" href="#ftn.d1699e135" class="footnote">4</a>]</sup>.
        </p><p>To be clear, upping the file descriptors and nproc for the user who is
         running the HBase process is an operating system configuration, not an
         HBase configuration. Also, a common mistake is that administrators
         will up the file descriptors for a particular user but for whatever
        reason, HBase will be running as someone else. HBase prints in its
        logs, as the first line, the ulimit it is seeing. Ensure it is correct.
-        <sup>[<a name="d1694e147" href="#ftn.d1694e147" class="footnote">5</a>]</sup></p><div class="section" title="1.1.2.5.1.&nbsp;ulimit on Ubuntu"><div class="titlepage"><div><div><h5 class="title"><a name="ulimit_ubuntu"></a>1.1.2.5.1.&nbsp;<code class="varname">ulimit</code> on Ubuntu</h5></div></div></div><p>If you are on Ubuntu you will need to make the following
+        <sup>[<a name="d1699e147" href="#ftn.d1699e147" class="footnote">5</a>]</sup></p><div class="section" title="1.1.2.5.1.&nbsp;ulimit on Ubuntu"><div class="titlepage"><div><div><h5 class="title"><a name="ulimit_ubuntu"></a>1.1.2.5.1.&nbsp;<code class="varname">ulimit</code> on Ubuntu</h5></div></div></div><p>If you are on Ubuntu you will need to make the following
           changes:</p><p>In the file <code class="filename">/etc/security/limits.conf</code> add
           a line like: </p><pre class="programlisting">hadoop  -       nofile  32768</pre><p>
           Replace <code class="varname">hadoop</code> with whatever user is running
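Since HBase logs the ulimit it sees as its first line, it can also help to check the limit the JVM itself picks up. A minimal sketch, assuming an Oracle/OpenJDK JVM where com.sun.management.UnixOperatingSystemMXBean is available:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class UlimitCheck {
      public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
          UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
          // Should reflect the raised nofile limit (e.g. 32768), not the 1024 default.
          System.out.println("max file descriptors:  " + unix.getMaxFileDescriptorCount());
          System.out.println("open file descriptors: " + unix.getOpenFileDescriptorCount());
        }
      }
    }
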
@@ -87,18 +87,18 @@
         the <a class="link" href="http://hbase.apache.org/cygwin.html" target="_top">Windows
         Installation</a> guide. Also
         <a class="link" href="http://search-hadoop.com/?q=hbase+windows&amp;fc_project=HBase&amp;fc_type=mail+_hash_+dev" target="_top">search our user mailing list</a> to pick
-        up latest fixes figured by Windows users.</p></div></div><div class="section" title="1.1.3.&nbsp;Hadoop"><div class="titlepage"><div><div><h3 class="title"><a name="hadoop"></a>1.1.3.&nbsp;<a class="link" href="http://hadoop.apache.org" target="_top">Hadoop</a><a class="indexterm" name="d1694e207"></a></h3></div></div></div><p>Selecting a Hadoop version is critical for your HBase deployment. Below table shows some information about what versions of Hadoop are supported by various HBase versions. Based on the version of HBase, you should select the most appropriate version of Hadoop. We are not in the Hadoop distro selection business. You can use Hadoop distributions from Apache, or learn about vendor distributions of Hadoop at <a class="link" href="http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support" target="_top">http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support</a></p><p>
-	     </p><div class="table"><a name="d1694e215"></a><p class="title"><b>Table&nbsp;1.1.&nbsp;Hadoop version support matrix</b></p><div class="table-contents"><table summary="Hadoop version support matrix" border="1"><colgroup><col align="left" class="c1"><col align="center" class="c2"><col align="center" class="c3"><col align="center" class="c4"></colgroup><thead><tr><th align="left">               </th><th align="center">HBase-0.92.x</th><th align="center">HBase-0.94.x</th><th align="center">HBase-0.96</th></tr></thead><tbody><tr><td align="left">Hadoop-0.20.205</td><td align="center">S</td><td align="center">X</td><td align="center">X</td></tr><tr><td align="left">Hadoop-0.22.x  </td><td align="center">S</td><td align="center">X</td><td align="center">X</td></tr><tr><td align="left">Hadoop-1.0.x   </td><td align="center">S</td><td align="center">S</td><td align="center">S</td></tr><tr><td align="left">Hadoop-1.1.x   </td><td align="center">NT</td><td align="center">S</td>
 <td align="center">S</td></tr><tr><td align="left">Hadoop-0.23.x  </td><td align="center">X</td><td align="center">S</td><td align="center">NT</td></tr><tr><td align="left">Hadoop-2.x     </td><td align="center">X</td><td align="center">S</td><td align="center">S</td></tr></tbody></table></div></div><p><br class="table-break">
+        up latest fixes figured by Windows users.</p></div></div><div class="section" title="1.1.3.&nbsp;Hadoop"><div class="titlepage"><div><div><h3 class="title"><a name="hadoop"></a>1.1.3.&nbsp;<a class="link" href="http://hadoop.apache.org" target="_top">Hadoop</a><a class="indexterm" name="d1699e207"></a></h3></div></div></div><p>Selecting a Hadoop version is critical for your HBase deployment. Below table shows some information about what versions of Hadoop are supported by various HBase versions. Based on the version of HBase, you should select the most appropriate version of Hadoop. We are not in the Hadoop distro selection business. You can use Hadoop distributions from Apache, or learn about vendor distributions of Hadoop at <a class="link" href="http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support" target="_top">http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support</a></p><p>
+	     </p><div class="table"><a name="d1699e215"></a><p class="title"><b>Table&nbsp;1.1.&nbsp;Hadoop version support matrix</b></p><div class="table-contents"><table summary="Hadoop version support matrix" border="1"><colgroup><col align="left" class="c1"><col align="center" class="c2"><col align="center" class="c3"><col align="center" class="c4"></colgroup><thead><tr><th align="left">               </th><th align="center">HBase-0.92.x</th><th align="center">HBase-0.94.x</th><th align="center">HBase-0.96</th></tr></thead><tbody><tr><td align="left">Hadoop-0.20.205</td><td align="center">S</td><td align="center">X</td><td align="center">X</td></tr><tr><td align="left">Hadoop-0.22.x  </td><td align="center">S</td><td align="center">X</td><td align="center">X</td></tr><tr><td align="left">Hadoop-1.0.x   </td><td align="center">S</td><td align="center">S</td><td align="center">S</td></tr><tr><td align="left">Hadoop-1.1.x   </td><td align="center">NT</td><td align="center">S</td>
 <td align="center">S</td></tr><tr><td align="left">Hadoop-0.23.x  </td><td align="center">X</td><td align="center">S</td><td align="center">NT</td></tr><tr><td align="left">Hadoop-2.x     </td><td align="center">X</td><td align="center">S</td><td align="center">S</td></tr></tbody></table></div></div><p><br class="table-break">
 
         Where
 		</p><table border="0" summary="Simple list" class="simplelist"><tr><td>S = supported and tested,</td></tr><tr><td>X = not supported,</td></tr><tr><td>NT = it should run, but not tested enough.</td></tr></table><p>
         </p><p>
 	Because HBase depends on Hadoop, it bundles an instance of the Hadoop jar under its <code class="filename">lib</code> directory. The bundled jar is ONLY for use in standalone mode. In distributed mode, it is <span class="emphasis"><em>critical</em></span> that the version of Hadoop that is out on your cluster match what is under HBase. Replace the hadoop jar found in the HBase lib directory with the hadoop jar you are running on your cluster to avoid version mismatch issues. Make sure you replace the jar in HBase everywhere on your cluster. Hadoop version mismatch issues have various manifestations but often all looks like its hung up.
     </p><div class="section" title="1.1.3.1.&nbsp;Apache HBase 0.92 and 0.94"><div class="titlepage"><div><div><h4 class="title"><a name="hadoop.hbase-0.94"></a>1.1.3.1.&nbsp;Apache HBase 0.92 and 0.94</h4></div></div></div><p>HBase 0.92 and 0.94 versions can work with Hadoop versions, 0.20.205, 0.22.x, 1.0.x, and 1.1.x. HBase-0.94 can additionally work with Hadoop-0.23.x and 2.x, but you may have to recompile the code using the specific maven profile (see top level pom.xml)</p></div><div class="section" title="1.1.3.2.&nbsp;Apache HBase 0.96"><div class="titlepage"><div><div><h4 class="title"><a name="hadoop.hbase-0.96"></a>1.1.3.2.&nbsp;Apache HBase 0.96</h4></div></div></div><p>Apache HBase 0.96.0 requires Apache Hadoop 1.x at a minimum, and it can run equally well on hadoop-2.0.
-	As of Apache HBase 0.96.x, Apache Hadoop 1.0.x at least is required. We will no longer run properly on older Hadoops such as 0.20.205 or branch-0.20-append. Do not move to Apache HBase 0.96.x if you cannot upgrade your Hadoop<sup>[<a name="d1694e315" href="#ftn.d1694e315" class="footnote">6</a>]</sup>.</p></div><div class="section" title="1.1.3.3.&nbsp;Hadoop versions 0.20.x - 1.x"><div class="titlepage"><div><div><h4 class="title"><a name="hadoop.older.versions"></a>1.1.3.3.&nbsp;Hadoop versions 0.20.x - 1.x</h4></div></div></div><p>
+	As of Apache HBase 0.96.x, Apache Hadoop 1.0.x at least is required. We will no longer run properly on older Hadoops such as 0.20.205 or branch-0.20-append. Do not move to Apache HBase 0.96.x if you cannot upgrade your Hadoop<sup>[<a name="d1699e315" href="#ftn.d1699e315" class="footnote">6</a>]</sup>.</p></div><div class="section" title="1.1.3.3.&nbsp;Hadoop versions 0.20.x - 1.x"><div class="titlepage"><div><div><h4 class="title"><a name="hadoop.older.versions"></a>1.1.3.3.&nbsp;Hadoop versions 0.20.x - 1.x</h4></div></div></div><p>
      HBase will lose data unless it is running on an HDFS that has a durable
         <code class="code">sync</code> implementation.  DO NOT use Hadoop 0.20.2, Hadoop 0.20.203.0, and Hadoop 0.20.204.0 which DO NOT have this attribute. Currently only Hadoop versions 0.20.205.x or any release in excess of this version -- this includes hadoop-1.0.0 -- have a working, durable sync
-          <sup>[<a name="d1694e329" href="#ftn.d1694e329" class="footnote">7</a>]</sup>.  Sync has to be explicitly enabled by setting
+          <sup>[<a name="d1699e329" href="#ftn.d1699e329" class="footnote">7</a>]</sup>.  Sync has to be explicitly enabled by setting
         <code class="varname">dfs.support.append</code> equal
         to true on both the client side -- in <code class="filename">hbase-site.xml</code>
         -- and on the serverside in <code class="filename">hdfs-site.xml</code> (The sync
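A small sketch of verifying the client-side value once hbase-site.xml is on the classpath; this is illustration only, the setting itself still belongs in hbase-site.xml and hdfs-site.xml as described above:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class AppendCheck {
      public static void main(String[] args) {
        // Loads hbase-default.xml and hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        // Client-side view of the flag; the serverside hdfs-site.xml must agree.
        System.out.println("dfs.support.append = "
            + conf.getBoolean("dfs.support.append", false));
      }
    }
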
@@ -116,7 +116,7 @@
           security features as long as you do as
           suggested above and replace the Hadoop jar that ships with HBase
           with the secure version.  If you want to read more about how to setup
-          Secure HBase, see <a class="xref" href="#">???</a>.</p></div><div class="section" title="1.1.3.5.&nbsp;dfs.datanode.max.xcievers"><div class="titlepage"><div><div><h4 class="title"><a name="dfs.datanode.max.xcievers"></a>1.1.3.5.&nbsp;<code class="varname">dfs.datanode.max.xcievers</code><a class="indexterm" name="d1694e365"></a></h4></div></div></div><p>An Hadoop HDFS datanode has an upper bound on the number of
+          Secure HBase, see <a class="xref" href="#">???</a>.</p></div><div class="section" title="1.1.3.5.&nbsp;dfs.datanode.max.xcievers"><div class="titlepage"><div><div><h4 class="title"><a name="dfs.datanode.max.xcievers"></a>1.1.3.5.&nbsp;<code class="varname">dfs.datanode.max.xcievers</code><a class="indexterm" name="d1699e365"></a></h4></div></div></div><p>An Hadoop HDFS datanode has an upper bound on the number of
         files that it will serve at any one time. The upper bound parameter is
         called <code class="varname">xcievers</code> (yes, this is misspelled). Again,
         before doing any loading, make sure you have configured Hadoop's
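For illustration, the programmatic equivalent of the usual hdfs-site.xml entry. In practice the property is set in hdfs-site.xml on every datanode (followed by a restart); the 4096 figure is an assumption here, since the recommended value falls outside this hunk:

    import org.apache.hadoop.conf.Configuration;

    public class XcieversSketch {
      public static void main(String[] args) {
        Configuration hdfsConf = new Configuration();
        // Same (misspelled) key that goes into hdfs-site.xml on each datanode.
        hdfsConf.setInt("dfs.datanode.max.xcievers", 4096);
        System.out.println(hdfsConf.getInt("dfs.datanode.max.xcievers", -1));
      }
    }
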
@@ -136,7 +136,7 @@
         blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node:
         java.io.IOException: No live nodes contain current block. Will get new
         block locations from namenode and retry...</code>
-        <sup>[<a name="d1694e388" href="#ftn.d1694e388" class="footnote">8</a>]</sup></p><p>See also <a class="xref" href="#">???</a>
+        <sup>[<a name="d1699e388" href="#ftn.d1699e388" class="footnote">8</a>]</sup></p><p>See also <a class="xref" href="#">???</a>
        </p></div></div></div><div class="section" title="1.2.&nbsp;HBase run modes: Standalone and Distributed"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="standalone_dist"></a>1.2.&nbsp;HBase run modes: Standalone and Distributed</h2></div></div></div><p>HBase has two run modes: <a class="xref" href="#standalone" title="1.2.1.&nbsp;Standalone HBase">Section&nbsp;1.2.1, &#8220;Standalone HBase&#8221;</a> and <a class="xref" href="#distributed" title="1.2.2.&nbsp;Distributed">Section&nbsp;1.2.2, &#8220;Distributed&#8221;</a>. Out of the box, HBase runs in
       standalone mode. To set up a distributed deploy, you will need to
       configure HBase by editing files in the HBase <code class="filename">conf</code>
@@ -155,7 +155,7 @@
         daemons run on a single node -- a.k.a
         <span class="emphasis"><em>pseudo-distributed</em></span>-- and
         <span class="emphasis"><em>fully-distributed</em></span> where the daemons are spread
-        across all nodes in the cluster <sup>[<a name="d1694e446" href="#ftn.d1694e446" class="footnote">9</a>]</sup>.</p><p>Distributed modes require an instance of the <span class="emphasis"><em>Hadoop
+        across all nodes in the cluster <sup>[<a name="d1699e446" href="#ftn.d1699e446" class="footnote">9</a>]</sup>.</p><p>Distributed modes require an instance of the <span class="emphasis"><em>Hadoop
         Distributed File System</em></span> (HDFS). See the Hadoop <a class="link" href="http://hadoop.apache.org/common/docs/r1.1.1/api/overview-summary.html#overview_description" target="_top">
         requirements and instructions</a> for how to set up a HDFS. Before
         proceeding, ensure you have an appropriate, working HDFS.</p><p>Below we describe the different distributed setups. Starting,
@@ -174,7 +174,7 @@
               Note that the <code class="varname">hbase.rootdir</code> property points to the
               local HDFS instance.
    		  </p><p>Now skip to <a class="xref" href="#confirm" title="1.2.3.&nbsp;Running and Confirming Your Installation">Section&nbsp;1.2.3, &#8220;Running and Confirming Your Installation&#8221;</a> for how to start and verify your
-          pseudo-distributed install. <sup>[<a name="d1694e494" href="#ftn.d1694e494" class="footnote">10</a>]</sup></p><div class="note" title="Note" style="margin-left: 0.5in; margin-right: 0.5in;"><h3 class="title">Note</h3><p>Let HBase create the <code class="varname">hbase.rootdir</code>
+          pseudo-distributed install. <sup>[<a name="d1699e494" href="#ftn.d1699e494" class="footnote">10</a>]</sup></p><div class="note" title="Note" style="margin-left: 0.5in; margin-right: 0.5in;"><h3 class="title">Note</h3><p>Let HBase create the <code class="varname">hbase.rootdir</code>
            directory. If you don't, you'll get a warning saying HBase needs a
             migration run because the directory is missing files expected by
             HBase (it'll create them if you let it).</p></div><div class="section" title="1.2.2.1.1.&nbsp;Pseudo-distributed Configuration File"><div class="titlepage"><div><div><h5 class="title"><a name="pseudo.config"></a>1.2.2.1.1.&nbsp;Pseudo-distributed Configuration File</h5></div></div></div><p>Below is a sample pseudo-distributed file for the node <code class="varname">h-24-30.example.com</code>.
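The sample file itself falls outside this hunk. As a stand-in, a minimal Java sketch of the two properties a pseudo-distributed hbase-site.xml centres on; the namenode port (8020) is an assumption:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class PseudoDistributedSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Point hbase.rootdir at the local HDFS instance and mark the deploy distributed.
        conf.set("hbase.rootdir", "hdfs://h-24-30.example.com:8020/hbase");
        conf.setBoolean("hbase.cluster.distributed", true);
        System.out.println(conf.get("hbase.rootdir"));
      }
    }
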
@@ -764,7 +764,7 @@ config.set("hbase.zookeeper.quorum", "lo
         This populated <code class="classname">Configuration</code> instance can then be passed to an
         <a class="link" href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html" target="_top">HTable</a>,
         and so on.
-        </p></div></div></div><div class="section" title="1.4.&nbsp;Example Configurations"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="example_config"></a>1.4.&nbsp;Example Configurations</h2></div></div></div><div class="section" title="1.4.1.&nbsp;Basic Distributed HBase Install"><div class="titlepage"><div><div><h3 class="title"><a name="d1694e2061"></a>1.4.1.&nbsp;Basic Distributed HBase Install</h3></div></div></div><p>Here is an example basic configuration for a distributed ten
+        </p></div></div></div><div class="section" title="1.4.&nbsp;Example Configurations"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="example_config"></a>1.4.&nbsp;Example Configurations</h2></div></div></div><div class="section" title="1.4.1.&nbsp;Basic Distributed HBase Install"><div class="titlepage"><div><div><h3 class="title"><a name="d1699e2061"></a>1.4.1.&nbsp;Basic Distributed HBase Install</h3></div></div></div><p>Here is an example basic configuration for a distributed ten
         node cluster. The nodes are named <code class="varname">example0</code>,
         <code class="varname">example1</code>, etc., through node
         <code class="varname">example9</code> in this example. The HBase Master and the
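Picking up the client-configuration point from the hunk above (the config.set("hbase.zookeeper.quorum", ...) call shown in its header line), a minimal sketch of handing a populated Configuration to an HTable using the 0.92/0.94-era client API; the quorum host, table name and row key are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum", "localhost");  // hypothetical ensemble
        HTable table = new HTable(config, "myTable");        // hypothetical table
        try {
          table.get(new Get(Bytes.toBytes("myRow")));        // hypothetical row key
        } finally {
          table.close();
        }
      }
    }
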
@@ -966,7 +966,7 @@ index e70ebc6..96f8c27 100644
           Keeping 5 regions per RS would be too low for a job, whereas 1000 will generate too many maps.
       </p></div></div><div class="section" title="1.5.2.7.&nbsp;Managed Splitting"><div class="titlepage"><div><div><h4 class="title"><a name="disable.splitting"></a>1.5.2.7.&nbsp;Managed Splitting</h4></div></div></div><p>
       Rather than let HBase auto-split your Regions, manage the splitting manually
-      <sup>[<a name="d1694e2287" href="#ftn.d1694e2287" class="footnote">11</a>]</sup>.
+      <sup>[<a name="d1699e2287" href="#ftn.d1699e2287" class="footnote">11</a>]</sup>.
  With growing amounts of data, splits will continually be needed. Since
  you always know exactly what regions you have, long-term debugging and
  profiling is much easier with manual splits. It is hard to trace the logs to
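A sketch of the usual way to take splitting into your own hands: raise hbase.hregion.max.filesize so automatic splits effectively never fire, then split regions yourself (for example with the RegionSplitter tool from footnote 11, or the shell's split command). The 100 GB ceiling is an assumption; the figure the book actually suggests falls outside this hunk:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ManagedSplitSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // With a ceiling this high, regions only split when you ask them to.
        conf.setLong("hbase.hregion.max.filesize", 100L * 1024 * 1024 * 1024);
        System.out.println(conf.getLong("hbase.hregion.max.filesize", -1));
      }
    }
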
@@ -1026,26 +1026,26 @@ of all regions.
       <a class="link" href="http://search-hadoop.com/m/pduLg2fydtE/Inconsistent+scan+performance+with+caching+set+&amp;subj=Re+Inconsistent+scan+performance+with+caching+set+to+1" target="_top">Inconsistent scan performance with caching set to 1</a>
       and the issue cited therein where setting notcpdelay improved scan speeds.  You might also
       see the graphs on the tail of <a class="link" href="https://issues.apache.org/jira/browse/HBASE-7008" target="_top">HBASE-7008 Set scanner caching to a better default</a>
-      where our Lars Hofhansl tries various data sizes w/ Nagle's on and off measuring the effect.</p></div></div></div><div class="footnotes"><br><hr width="100" align="left"><div class="footnote"><p><sup>[<a id="ftn.d1694e14" href="#d1694e14" class="para">1</a>] </sup>
+      where our Lars Hofhansl tries various data sizes w/ Nagle's on and off measuring the effect.</p></div></div></div><div class="footnotes"><br><hr width="100" align="left"><div class="footnote"><p><sup>[<a id="ftn.d1699e14" href="#d1699e14" class="para">1</a>] </sup>
 Be careful editing XML.  Make sure you close all elements.
 Run your file through <span class="command"><strong>xmllint</strong></span> or similar
 to ensure well-formedness of your document after an edit session.
-</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e63" href="#d1694e63" class="para">2</a>] </sup>The <a class="link" href="https://github.com/sujee/hadoop-dns-checker" target="_top">hadoop-dns-checker</a> tool can be used to verify
+</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e63" href="#d1699e63" class="para">2</a>] </sup>The <a class="link" href="https://github.com/sujee/hadoop-dns-checker" target="_top">hadoop-dns-checker</a> tool can be used to verify
         DNS is working correctly on the cluster.  The project README file provides detailed instructions on usage.
-</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e128" href="#d1694e128" class="para">3</a>] </sup>See Jack Levin's <a class="link" href="" target="_top">major hdfs issues</a>
-                note up on the user list.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e135" href="#d1694e135" class="para">4</a>] </sup>The requirement that a database requires upping of system limits
+</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e128" href="#d1699e128" class="para">3</a>] </sup>See Jack Levin's <a class="link" href="" target="_top">major hdfs issues</a>
+                note up on the user list.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e135" href="#d1699e135" class="para">4</a>] </sup>The requirement that a database requires upping of system limits
         is not peculiar to Apache HBase.  See for example the section
         <span class="emphasis"><em>Setting Shell Limits for the Oracle User</em></span> in
         <a class="link" href="http://www.akadia.com/services/ora_linux_install_10g.html" target="_top">
-        Short Guide to install Oracle 10 on Linux</a>.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e147" href="#d1694e147" class="para">5</a>] </sup>A useful read on setting config on your Hadoop cluster is Aaron
+        Short Guide to install Oracle 10 on Linux</a>.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e147" href="#d1699e147" class="para">5</a>] </sup>A useful read on setting config on your Hadoop cluster is Aaron
             Kimball's Configuration
-            Parameters: What can you just ignore?</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e315" href="#d1694e315" class="para">6</a>] </sup>See <a class="link" href="http://search-hadoop.com/m/7vFVx4EsUb2" target="_top">HBase, mail # dev - DISCUSS: Have hbase require at least hadoop 1.0.0 in hbase 0.96.0?</a></p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e329" href="#d1694e329" class="para">7</a>] </sup>The Cloudera blog post <a class="link" href="http://www.cloudera.com/blog/2012/01/an-update-on-apache-hadoop-1-0/" target="_top">An update on Apache Hadoop 1.0</a>
+            Parameters: What can you just ignore?</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e315" href="#d1699e315" class="para">6</a>] </sup>See <a class="link" href="http://search-hadoop.com/m/7vFVx4EsUb2" target="_top">HBase, mail # dev - DISCUSS: Have hbase require at least hadoop 1.0.0 in hbase 0.96.0?</a></p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e329" href="#d1699e329" class="para">7</a>] </sup>The Cloudera blog post <a class="link" href="http://www.cloudera.com/blog/2012/01/an-update-on-apache-hadoop-1-0/" target="_top">An update on Apache Hadoop 1.0</a>
          by Charles Zedlewski has a nice exposition on how all the Hadoop versions relate.
          It is worth checking out if you are having trouble making sense of the
           Hadoop version morass.
-          </p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e388" href="#d1694e388" class="para">8</a>] </sup>See <a class="link" href="http://ccgtech.blogspot.com/2010/02/hadoop-hdfs-deceived-by-xciever.html" target="_top">Hadoop HDFS: Deceived by Xciever</a> for an informative rant on xceivering.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e446" href="#d1694e446" class="para">9</a>] </sup>The pseudo-distributed vs fully-distributed nomenclature
-            comes from Hadoop.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e494" href="#d1694e494" class="para">10</a>] </sup>See <a class="xref" href="#pseudo.extras" title="1.2.2.1.2.&nbsp;Pseudo-distributed Extras">Section&nbsp;1.2.2.1.2, &#8220;Pseudo-distributed Extras&#8221;</a> for notes on how to start extra Masters and
-              RegionServers when running pseudo-distributed.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e2287" href="#d1694e2287" class="para">11</a>] </sup>What follows is taken from the javadoc at the head of
+          </p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e388" href="#d1699e388" class="para">8</a>] </sup>See <a class="link" href="http://ccgtech.blogspot.com/2010/02/hadoop-hdfs-deceived-by-xciever.html" target="_top">Hadoop HDFS: Deceived by Xciever</a> for an informative rant on xceivering.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e446" href="#d1699e446" class="para">9</a>] </sup>The pseudo-distributed vs fully-distributed nomenclature
+            comes from Hadoop.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e494" href="#d1699e494" class="para">10</a>] </sup>See <a class="xref" href="#pseudo.extras" title="1.2.2.1.2.&nbsp;Pseudo-distributed Extras">Section&nbsp;1.2.2.1.2, &#8220;Pseudo-distributed Extras&#8221;</a> for notes on how to start extra Masters and
+              RegionServers when running pseudo-distributed.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e2287" href="#d1699e2287" class="para">11</a>] </sup>What follows is taken from the javadoc at the head of
       the <code class="classname">org.apache.hadoop.hbase.util.RegionSplitter</code> tool
       added to HBase post-0.90.0 release.
       </p></div></div></div><div id="disqus_thread"></div><script type="text/javascript">

Modified: hbase/hbase.apache.org/trunk/configuration/configuration.html
URL: http://svn.apache.org/viewvc/hbase/hbase.apache.org/trunk/configuration/configuration.html?rev=1462679&r1=1462678&r2=1462679&view=diff
==============================================================================
--- hbase/hbase.apache.org/trunk/configuration/configuration.html (original)
+++ hbase/hbase.apache.org/trunk/configuration/configuration.html Sat Mar 30 00:19:55 2013
@@ -1,6 +1,6 @@
 <html><head>
       <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
-   <title>Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration</title><link rel="stylesheet" type="text/css" href="../css/freebsd_docbook.css"><meta name="generator" content="DocBook XSL-NS Stylesheets V1.76.1"><link rel="home" href="configuration.html" title="Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration"><link rel="next" href="standalone_dist.html" title="1.2.&nbsp;HBase run modes: Standalone and Distributed"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="navheader"><table width="100%" summary="Navigation header"><tr><th colspan="3" align="center">Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration</th></tr><tr><td width="20%" align="left">&nbsp;</td><th width="60%" align="center">&nbsp;</th><td width="20%" align="right">&nbsp;<a accesskey="n" href="standalone_dist.html">Next</a></td></tr></table><hr></div><div class="chapter" title="Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration"><div class="titlepage"><d
 iv><div><h2 class="title"><a name="configuration"></a>Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration</h2></div></div></div><div class="toc"><p><b>Table of Contents</b></p><dl><dt><span class="section"><a href="configuration.html#basic.prerequisites">1.1. Basic Prerequisites</a></span></dt><dd><dl><dt><span class="section"><a href="configuration.html#java">1.1.1. Java</a></span></dt><dt><span class="section"><a href="configuration.html#os">1.1.2. Operating System</a></span></dt><dt><span class="section"><a href="configuration.html#hadoop">1.1.3. Hadoop</a></span></dt></dl></dd><dt><span class="section"><a href="standalone_dist.html">1.2. HBase run modes: Standalone and Distributed</a></span></dt><dd><dl><dt><span class="section"><a href="standalone_dist.html#standalone">1.2.1. Standalone HBase</a></span></dt><dt><span class="section"><a href="standalone_dist.html#distributed">1.2.2. Distributed</a></span></dt><dt><span class="section"><a href="standalone_dist.html#confi
 rm">1.2.3. Running and Confirming Your Installation</a></span></dt></dl></dd><dt><span class="section"><a href="config.files.html">1.3. Configuration Files</a></span></dt><dd><dl><dt><span class="section"><a href="config.files.html#hbase.site">1.3.1. <code class="filename">hbase-site.xml</code> and <code class="filename">hbase-default.xml</code></a></span></dt><dt><span class="section"><a href="config.files.html#hbase.env.sh">1.3.2. <code class="filename">hbase-env.sh</code></a></span></dt><dt><span class="section"><a href="config.files.html#log4j">1.3.3. <code class="filename">log4j.properties</code></a></span></dt><dt><span class="section"><a href="config.files.html#client_dependencies">1.3.4. Client configuration and dependencies connecting to an HBase cluster</a></span></dt></dl></dd><dt><span class="section"><a href="example_config.html">1.4. Example Configurations</a></span></dt><dd><dl><dt><span class="section"><a href="example_config.html#d1694e2061">1.4.1. Basic Dis
 tributed HBase Install</a></span></dt></dl></dd><dt><span class="section"><a href="important_configurations.html">1.5. The Important Configurations</a></span></dt><dd><dl><dt><span class="section"><a href="important_configurations.html#required_configuration">1.5.1. Required Configurations</a></span></dt><dt><span class="section"><a href="important_configurations.html#recommended_configurations">1.5.2. Recommended Configurations</a></span></dt><dt><span class="section"><a href="important_configurations.html#other_configuration">1.5.3. Other Configurations</a></span></dt></dl></dd></dl></div><p>This chapter is the Not-So-Quick start guide to Apache HBase (TM) configuration.  It goes
+   <title>Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration</title><link rel="stylesheet" type="text/css" href="../css/freebsd_docbook.css"><meta name="generator" content="DocBook XSL-NS Stylesheets V1.76.1"><link rel="home" href="configuration.html" title="Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration"><link rel="next" href="standalone_dist.html" title="1.2.&nbsp;HBase run modes: Standalone and Distributed"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="navheader"><table width="100%" summary="Navigation header"><tr><th colspan="3" align="center">Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration</th></tr><tr><td width="20%" align="left">&nbsp;</td><th width="60%" align="center">&nbsp;</th><td width="20%" align="right">&nbsp;<a accesskey="n" href="standalone_dist.html">Next</a></td></tr></table><hr></div><div class="chapter" title="Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration"><div class="titlepage"><d
 iv><div><h2 class="title"><a name="configuration"></a>Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration</h2></div></div></div><div class="toc"><p><b>Table of Contents</b></p><dl><dt><span class="section"><a href="configuration.html#basic.prerequisites">1.1. Basic Prerequisites</a></span></dt><dd><dl><dt><span class="section"><a href="configuration.html#java">1.1.1. Java</a></span></dt><dt><span class="section"><a href="configuration.html#os">1.1.2. Operating System</a></span></dt><dt><span class="section"><a href="configuration.html#hadoop">1.1.3. Hadoop</a></span></dt></dl></dd><dt><span class="section"><a href="standalone_dist.html">1.2. HBase run modes: Standalone and Distributed</a></span></dt><dd><dl><dt><span class="section"><a href="standalone_dist.html#standalone">1.2.1. Standalone HBase</a></span></dt><dt><span class="section"><a href="standalone_dist.html#distributed">1.2.2. Distributed</a></span></dt><dt><span class="section"><a href="standalone_dist.html#confi
 rm">1.2.3. Running and Confirming Your Installation</a></span></dt></dl></dd><dt><span class="section"><a href="config.files.html">1.3. Configuration Files</a></span></dt><dd><dl><dt><span class="section"><a href="config.files.html#hbase.site">1.3.1. <code class="filename">hbase-site.xml</code> and <code class="filename">hbase-default.xml</code></a></span></dt><dt><span class="section"><a href="config.files.html#hbase.env.sh">1.3.2. <code class="filename">hbase-env.sh</code></a></span></dt><dt><span class="section"><a href="config.files.html#log4j">1.3.3. <code class="filename">log4j.properties</code></a></span></dt><dt><span class="section"><a href="config.files.html#client_dependencies">1.3.4. Client configuration and dependencies connecting to an HBase cluster</a></span></dt></dl></dd><dt><span class="section"><a href="example_config.html">1.4. Example Configurations</a></span></dt><dd><dl><dt><span class="section"><a href="example_config.html#d1699e2061">1.4.1. Basic Dis
 tributed HBase Install</a></span></dt></dl></dd><dt><span class="section"><a href="important_configurations.html">1.5. The Important Configurations</a></span></dt><dd><dl><dt><span class="section"><a href="important_configurations.html#required_configuration">1.5.1. Required Configurations</a></span></dt><dt><span class="section"><a href="important_configurations.html#recommended_configurations">1.5.2. Recommended Configurations</a></span></dt><dt><span class="section"><a href="important_configurations.html#other_configuration">1.5.3. Other Configurations</a></span></dt></dl></dd></dl></div><p>This chapter is the Not-So-Quick start guide to Apache HBase (TM) configuration.  It goes
     over system requirements, Hadoop setup, the different Apache HBase run modes, and the
    various configurations in HBase.  Please read this chapter carefully.  At a minimum
     ensure that all <a class="xref" href="configuration.html#basic.prerequisites" title="1.1.&nbsp;Basic Prerequisites">Section&nbsp;1.1, &#8220;Basic Prerequisites&#8221;</a> have
@@ -13,7 +13,7 @@
         off the ground -- and then add configuration to an XML file to
         do things like override HBase defaults, tell HBase what Filesystem to
         use, and the location of the ZooKeeper ensemble
-        <sup>[<a name="d1694e14" href="#ftn.d1694e14" class="footnote">1</a>]</sup>
+        <sup>[<a name="d1699e14" href="#ftn.d1699e14" class="footnote">1</a>]</sup>
         .
     </p><p>When running in distributed mode, after you make
     an edit to an HBase configuration, make sure you copy the
@@ -30,7 +30,7 @@
         on the hadoop wiki.</p></div><div class="section" title="1.1.2.2.&nbsp;DNS"><div class="titlepage"><div><div><h4 class="title"><a name="dns"></a>1.1.2.2.&nbsp;DNS</h4></div></div></div><p>HBase uses the local hostname to self-report its IP address.
         Both forward and reverse DNS resolving must work in versions of
         HBase previous to 0.92.0
-        <sup>[<a name="d1694e63" href="#ftn.d1694e63" class="footnote">2</a>]</sup>.</p><p>If your machine has multiple interfaces, HBase will use the
+        <sup>[<a name="d1699e63" href="#ftn.d1699e63" class="footnote">2</a>]</sup>.</p><p>If your machine has multiple interfaces, HBase will use the
         interface that the primary hostname resolves to.</p><p>If this is insufficient, you can set
         <code class="varname">hbase.regionserver.dns.interface</code> to indicate the
         primary interface. This only works if your cluster configuration is
@@ -42,9 +42,9 @@
         <a class="link" href="http://en.wikipedia.org/wiki/Network_Time_Protocol" target="_top">NTP</a>
         on your cluster, or an equivalent.</p><p>If you are having problems querying data, or "weird" cluster
         operations, check system time!</p></div><div class="section" title="1.1.2.5.&nbsp; ulimit and nproc"><div class="titlepage"><div><div><h4 class="title"><a name="ulimit"></a>1.1.2.5.&nbsp;
-          <code class="varname">ulimit</code><a class="indexterm" name="d1694e103"></a>
+          <code class="varname">ulimit</code><a class="indexterm" name="d1699e103"></a>
             and
-          <code class="varname">nproc</code><a class="indexterm" name="d1694e109"></a>
+          <code class="varname">nproc</code><a class="indexterm" name="d1699e109"></a>
         </h4></div></div></div><p>Apache HBase is a database.  It uses a lot of files all at the same time.
         The default ulimit -n -- i.e. user file limit -- of 1024 on most *nix systems
        is insufficient (on Mac OS X it is 256). Any significant amount of loading will
@@ -62,15 +62,15 @@
         </p><p>You should also up the hbase users'
         <code class="varname">nproc</code> setting; under load, a low-nproc
         setting could manifest as <code class="classname">OutOfMemoryError</code>
-        <sup>[<a name="d1694e128" href="#ftn.d1694e128" class="footnote">3</a>]</sup>
-        <sup>[<a name="d1694e135" href="#ftn.d1694e135" class="footnote">4</a>]</sup>.
+        <sup>[<a name="d1699e128" href="#ftn.d1699e128" class="footnote">3</a>]</sup>
+        <sup>[<a name="d1699e135" href="#ftn.d1699e135" class="footnote">4</a>]</sup>.
        </p><p>To be clear, upping the file descriptors and nproc for the user who is
         running the HBase process is an operating system configuration, not an
         HBase configuration. Also, a common mistake is that administrators
         will up the file descriptors for a particular user but for whatever
        reason, HBase will be running as someone else. HBase prints in its
        logs, as the first line, the ulimit it is seeing. Ensure it is correct.
-        <sup>[<a name="d1694e147" href="#ftn.d1694e147" class="footnote">5</a>]</sup></p><div class="section" title="1.1.2.5.1.&nbsp;ulimit on Ubuntu"><div class="titlepage"><div><div><h5 class="title"><a name="ulimit_ubuntu"></a>1.1.2.5.1.&nbsp;<code class="varname">ulimit</code> on Ubuntu</h5></div></div></div><p>If you are on Ubuntu you will need to make the following
+        <sup>[<a name="d1699e147" href="#ftn.d1699e147" class="footnote">5</a>]</sup></p><div class="section" title="1.1.2.5.1.&nbsp;ulimit on Ubuntu"><div class="titlepage"><div><div><h5 class="title"><a name="ulimit_ubuntu"></a>1.1.2.5.1.&nbsp;<code class="varname">ulimit</code> on Ubuntu</h5></div></div></div><p>If you are on Ubuntu you will need to make the following
           changes:</p><p>In the file <code class="filename">/etc/security/limits.conf</code> add
           a line like: </p><pre class="programlisting">hadoop  -       nofile  32768</pre><p>
           Replace <code class="varname">hadoop</code> with whatever user is running
@@ -87,18 +87,18 @@
         the <a class="link" href="http://hbase.apache.org/cygwin.html" target="_top">Windows
         Installation</a> guide. Also
         <a class="link" href="http://search-hadoop.com/?q=hbase+windows&amp;fc_project=HBase&amp;fc_type=mail+_hash_+dev" target="_top">search our user mailing list</a> to pick
-        up latest fixes figured by Windows users.</p></div></div><div class="section" title="1.1.3.&nbsp;Hadoop"><div class="titlepage"><div><div><h3 class="title"><a name="hadoop"></a>1.1.3.&nbsp;<a class="link" href="http://hadoop.apache.org" target="_top">Hadoop</a><a class="indexterm" name="d1694e207"></a></h3></div></div></div><p>Selecting a Hadoop version is critical for your HBase deployment. Below table shows some information about what versions of Hadoop are supported by various HBase versions. Based on the version of HBase, you should select the most appropriate version of Hadoop. We are not in the Hadoop distro selection business. You can use Hadoop distributions from Apache, or learn about vendor distributions of Hadoop at <a class="link" href="http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support" target="_top">http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support</a></p><p>
-	     </p><div class="table"><a name="d1694e215"></a><p class="title"><b>Table&nbsp;1.1.&nbsp;Hadoop version support matrix</b></p><div class="table-contents"><table summary="Hadoop version support matrix" border="1"><colgroup><col align="left" class="c1"><col align="center" class="c2"><col align="center" class="c3"><col align="center" class="c4"></colgroup><thead><tr><th align="left">               </th><th align="center">HBase-0.92.x</th><th align="center">HBase-0.94.x</th><th align="center">HBase-0.96</th></tr></thead><tbody><tr><td align="left">Hadoop-0.20.205</td><td align="center">S</td><td align="center">X</td><td align="center">X</td></tr><tr><td align="left">Hadoop-0.22.x  </td><td align="center">S</td><td align="center">X</td><td align="center">X</td></tr><tr><td align="left">Hadoop-1.0.x   </td><td align="center">S</td><td align="center">S</td><td align="center">S</td></tr><tr><td align="left">Hadoop-1.1.x   </td><td align="center">NT</td><td align="center">S</td>
 <td align="center">S</td></tr><tr><td align="left">Hadoop-0.23.x  </td><td align="center">X</td><td align="center">S</td><td align="center">NT</td></tr><tr><td align="left">Hadoop-2.x     </td><td align="center">X</td><td align="center">S</td><td align="center">S</td></tr></tbody></table></div></div><p><br class="table-break">
+        up latest fixes figured by Windows users.</p></div></div><div class="section" title="1.1.3.&nbsp;Hadoop"><div class="titlepage"><div><div><h3 class="title"><a name="hadoop"></a>1.1.3.&nbsp;<a class="link" href="http://hadoop.apache.org" target="_top">Hadoop</a><a class="indexterm" name="d1699e207"></a></h3></div></div></div><p>Selecting a Hadoop version is critical for your HBase deployment. Below table shows some information about what versions of Hadoop are supported by various HBase versions. Based on the version of HBase, you should select the most appropriate version of Hadoop. We are not in the Hadoop distro selection business. You can use Hadoop distributions from Apache, or learn about vendor distributions of Hadoop at <a class="link" href="http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support" target="_top">http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support</a></p><p>
+	     </p><div class="table"><a name="d1699e215"></a><p class="title"><b>Table&nbsp;1.1.&nbsp;Hadoop version support matrix</b></p><div class="table-contents"><table summary="Hadoop version support matrix" border="1"><colgroup><col align="left" class="c1"><col align="center" class="c2"><col align="center" class="c3"><col align="center" class="c4"></colgroup><thead><tr><th align="left">               </th><th align="center">HBase-0.92.x</th><th align="center">HBase-0.94.x</th><th align="center">HBase-0.96</th></tr></thead><tbody><tr><td align="left">Hadoop-0.20.205</td><td align="center">S</td><td align="center">X</td><td align="center">X</td></tr><tr><td align="left">Hadoop-0.22.x  </td><td align="center">S</td><td align="center">X</td><td align="center">X</td></tr><tr><td align="left">Hadoop-1.0.x   </td><td align="center">S</td><td align="center">S</td><td align="center">S</td></tr><tr><td align="left">Hadoop-1.1.x   </td><td align="center">NT</td><td align="center">S</td>
 <td align="center">S</td></tr><tr><td align="left">Hadoop-0.23.x  </td><td align="center">X</td><td align="center">S</td><td align="center">NT</td></tr><tr><td align="left">Hadoop-2.x     </td><td align="center">X</td><td align="center">S</td><td align="center">S</td></tr></tbody></table></div></div><p><br class="table-break">
 
         Where
 		</p><table border="0" summary="Simple list" class="simplelist"><tr><td>S = supported and tested,</td></tr><tr><td>X = not supported,</td></tr><tr><td>NT = it should run, but not tested enough.</td></tr></table><p>
         </p><p>
 	Because HBase depends on Hadoop, it bundles an instance of the Hadoop jar under its <code class="filename">lib</code> directory. The bundled jar is ONLY for use in standalone mode. In distributed mode, it is <span class="emphasis"><em>critical</em></span> that the version of Hadoop that is out on your cluster match what is under HBase. Replace the hadoop jar found in the HBase lib directory with the hadoop jar you are running on your cluster to avoid version mismatch issues. Make sure you replace the jar in HBase everywhere on your cluster. Hadoop version mismatch issues have various manifestations but often all looks like its hung up.
     </p><div class="section" title="1.1.3.1.&nbsp;Apache HBase 0.92 and 0.94"><div class="titlepage"><div><div><h4 class="title"><a name="hadoop.hbase-0.94"></a>1.1.3.1.&nbsp;Apache HBase 0.92 and 0.94</h4></div></div></div><p>HBase 0.92 and 0.94 versions can work with Hadoop versions, 0.20.205, 0.22.x, 1.0.x, and 1.1.x. HBase-0.94 can additionally work with Hadoop-0.23.x and 2.x, but you may have to recompile the code using the specific maven profile (see top level pom.xml)</p></div><div class="section" title="1.1.3.2.&nbsp;Apache HBase 0.96"><div class="titlepage"><div><div><h4 class="title"><a name="hadoop.hbase-0.96"></a>1.1.3.2.&nbsp;Apache HBase 0.96</h4></div></div></div><p>Apache HBase 0.96.0 requires Apache Hadoop 1.x at a minimum, and it can run equally well on hadoop-2.0.
-	As of Apache HBase 0.96.x, Apache Hadoop 1.0.x at least is required. We will no longer run properly on older Hadoops such as 0.20.205 or branch-0.20-append. Do not move to Apache HBase 0.96.x if you cannot upgrade your Hadoop<sup>[<a name="d1694e315" href="#ftn.d1694e315" class="footnote">6</a>]</sup>.</p></div><div class="section" title="1.1.3.3.&nbsp;Hadoop versions 0.20.x - 1.x"><div class="titlepage"><div><div><h4 class="title"><a name="hadoop.older.versions"></a>1.1.3.3.&nbsp;Hadoop versions 0.20.x - 1.x</h4></div></div></div><p>
+	As of Apache HBase 0.96.x, Apache Hadoop 1.0.x at least is required. We will no longer run properly on older Hadoops such as 0.20.205 or branch-0.20-append. Do not move to Apache HBase 0.96.x if you cannot upgrade your Hadoop<sup>[<a name="d1699e315" href="#ftn.d1699e315" class="footnote">6</a>]</sup>.</p></div><div class="section" title="1.1.3.3.&nbsp;Hadoop versions 0.20.x - 1.x"><div class="titlepage"><div><div><h4 class="title"><a name="hadoop.older.versions"></a>1.1.3.3.&nbsp;Hadoop versions 0.20.x - 1.x</h4></div></div></div><p>
      HBase will lose data unless it is running on an HDFS that has a durable
         <code class="code">sync</code> implementation.  DO NOT use Hadoop 0.20.2, Hadoop 0.20.203.0, and Hadoop 0.20.204.0 which DO NOT have this attribute. Currently only Hadoop versions 0.20.205.x or any release in excess of this version -- this includes hadoop-1.0.0 -- have a working, durable sync
-          <sup>[<a name="d1694e329" href="#ftn.d1694e329" class="footnote">7</a>]</sup>.  Sync has to be explicitly enabled by setting
+          <sup>[<a name="d1699e329" href="#ftn.d1699e329" class="footnote">7</a>]</sup>.  Sync has to be explicitly enabled by setting
         <code class="varname">dfs.support.append</code> equal
         to true on both the client side -- in <code class="filename">hbase-site.xml</code>
         -- and on the serverside in <code class="filename">hdfs-site.xml</code> (The sync
@@ -116,7 +116,7 @@
           security features as long as you do as
           suggested above and replace the Hadoop jar that ships with HBase
           with the secure version.  If you want to read more about how to setup
-          Secure HBase, see <a class="xref" href="">???</a>.</p></div><div class="section" title="1.1.3.5.&nbsp;dfs.datanode.max.xcievers"><div class="titlepage"><div><div><h4 class="title"><a name="dfs.datanode.max.xcievers"></a>1.1.3.5.&nbsp;<code class="varname">dfs.datanode.max.xcievers</code><a class="indexterm" name="d1694e365"></a></h4></div></div></div><p>An Hadoop HDFS datanode has an upper bound on the number of
+          Secure HBase, see <a class="xref" href="">???</a>.</p></div><div class="section" title="1.1.3.5.&nbsp;dfs.datanode.max.xcievers"><div class="titlepage"><div><div><h4 class="title"><a name="dfs.datanode.max.xcievers"></a>1.1.3.5.&nbsp;<code class="varname">dfs.datanode.max.xcievers</code><a class="indexterm" name="d1699e365"></a></h4></div></div></div><p>An Hadoop HDFS datanode has an upper bound on the number of
         files that it will serve at any one time. The upper bound parameter is
         called <code class="varname">xcievers</code> (yes, this is misspelled). Again,
         before doing any loading, make sure you have configured Hadoop's
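<p>As a sketch, the <code class="varname">xcievers</code> bound is raised with an entry like the
following in <code class="filename">hdfs-site.xml</code> on the datanodes; 4096 is a commonly used
starting value rather than a hard requirement, so size it for your own load:
</p><pre class="programlisting">&lt;!-- hdfs-site.xml on each datanode; restart the datanodes afterwards.
     The misspelled property name is intentional -- it is what HDFS expects. --&gt;
&lt;property&gt;
  &lt;name&gt;dfs.datanode.max.xcievers&lt;/name&gt;
  &lt;value&gt;4096&lt;/value&gt;
&lt;/property&gt;</pre>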
@@ -136,25 +136,25 @@
         blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node:
         java.io.IOException: No live nodes contain current block. Will get new
         block locations from namenode and retry...</code>
-        <sup>[<a name="d1694e388" href="#ftn.d1694e388" class="footnote">8</a>]</sup></p><p>See also <a class="xref" href="">???</a>
-       </p></div></div></div><div class="footnotes"><br><hr width="100" align="left"><div class="footnote"><p><sup>[<a id="ftn.d1694e14" href="#d1694e14" class="para">1</a>] </sup>
+        <sup>[<a name="d1699e388" href="#ftn.d1699e388" class="footnote">8</a>]</sup></p><p>See also <a class="xref" href="">???</a>
+       </p></div></div></div><div class="footnotes"><br><hr width="100" align="left"><div class="footnote"><p><sup>[<a id="ftn.d1699e14" href="#d1699e14" class="para">1</a>] </sup>
 Be careful editing XML.  Make sure you close all elements.
 Run your file through <span class="command"><strong>xmllint</strong></span> or similar
 to ensure well-formedness of your document after an edit session.
-</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e63" href="#d1694e63" class="para">2</a>] </sup>The <a class="link" href="https://github.com/sujee/hadoop-dns-checker" target="_top">hadoop-dns-checker</a> tool can be used to verify
+</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e63" href="#d1699e63" class="para">2</a>] </sup>The <a class="link" href="https://github.com/sujee/hadoop-dns-checker" target="_top">hadoop-dns-checker</a> tool can be used to verify
         DNS is working correctly on the cluster.  The project README file provides detailed instructions on usage.
-</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e128" href="#d1694e128" class="para">3</a>] </sup>See Jack Levin's <a class="link" href="" target="_top">major hdfs issues</a>
-                note up on the user list.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e135" href="#d1694e135" class="para">4</a>] </sup>The requirement that a database requires upping of system limits
+</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e128" href="#d1699e128" class="para">3</a>] </sup>See Jack Levin's <a class="link" href="" target="_top">major hdfs issues</a>
+                note up on the user list.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e135" href="#d1699e135" class="para">4</a>] </sup>The need to up system limits for a database
         is not peculiar to Apache HBase.  See for example the section
         <span class="emphasis"><em>Setting Shell Limits for the Oracle User</em></span> in
         <a class="link" href="http://www.akadia.com/services/ora_linux_install_10g.html" target="_top">
-        Short Guide to install Oracle 10 on Linux</a>.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e147" href="#d1694e147" class="para">5</a>] </sup>A useful read setting config on you hadoop cluster is Aaron
+        Short Guide to install Oracle 10 on Linux</a>.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e147" href="#d1699e147" class="para">5</a>] </sup>A useful read setting config on you hadoop cluster is Aaron
             Kimballs' Configuration
-            Parameters: What can you just ignore?</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e315" href="#d1694e315" class="para">6</a>] </sup>See <a class="link" href="http://search-hadoop.com/m/7vFVx4EsUb2" target="_top">HBase, mail # dev - DISCUSS: Have hbase require at least hadoop 1.0.0 in hbase 0.96.0?</a></p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e329" href="#d1694e329" class="para">7</a>] </sup>The Cloudera blog post <a class="link" href="http://www.cloudera.com/blog/2012/01/an-update-on-apache-hadoop-1-0/" target="_top">An update on Apache Hadoop 1.0</a>
+            Parameters: What can you just ignore?</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e315" href="#d1699e315" class="para">6</a>] </sup>See <a class="link" href="http://search-hadoop.com/m/7vFVx4EsUb2" target="_top">HBase, mail # dev - DISCUSS: Have hbase require at least hadoop 1.0.0 in hbase 0.96.0?</a></p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e329" href="#d1699e329" class="para">7</a>] </sup>The Cloudera blog post <a class="link" href="http://www.cloudera.com/blog/2012/01/an-update-on-apache-hadoop-1-0/" target="_top">An update on Apache Hadoop 1.0</a>
          by Charles Zedlewski has a nice exposition on how all the Hadoop versions relate.
          It's worth checking out if you are having trouble making sense of the
           Hadoop version morass.
-          </p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e388" href="#d1694e388" class="para">8</a>] </sup>See <a class="link" href="http://ccgtech.blogspot.com/2010/02/hadoop-hdfs-deceived-by-xciever.html" target="_top">Hadoop HDFS: Deceived by Xciever</a> for an informative rant on xceivering.</p></div></div></div><div id="disqus_thread"></div><script type="text/javascript">
+          </p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e388" href="#d1699e388" class="para">8</a>] </sup>See <a class="link" href="http://ccgtech.blogspot.com/2010/02/hadoop-hdfs-deceived-by-xciever.html" target="_top">Hadoop HDFS: Deceived by Xciever</a> for an informative rant on xceivering.</p></div></div></div><div id="disqus_thread"></div><script type="text/javascript">
     var disqus_shortname = 'hbase'; // required: replace example with your forum shortname
     var disqus_url = 'http://hbase.apache.org/book';
     var disqus_identifier = 'configuration';

Modified: hbase/hbase.apache.org/trunk/configuration/example_config.html
URL: http://svn.apache.org/viewvc/hbase/hbase.apache.org/trunk/configuration/example_config.html?rev=1462679&r1=1462678&r2=1462679&view=diff
==============================================================================
--- hbase/hbase.apache.org/trunk/configuration/example_config.html (original)
+++ hbase/hbase.apache.org/trunk/configuration/example_config.html Sat Mar 30 00:19:55 2013
@@ -1,6 +1,6 @@
 <html><head>
       <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
-   <title>1.4.&nbsp;Example Configurations</title><link rel="stylesheet" type="text/css" href="../css/freebsd_docbook.css"><meta name="generator" content="DocBook XSL-NS Stylesheets V1.76.1"><link rel="home" href="configuration.html" title="Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration"><link rel="up" href="configuration.html" title="Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration"><link rel="prev" href="config.files.html" title="1.3.&nbsp;Configuration Files"><link rel="next" href="important_configurations.html" title="1.5.&nbsp;The Important Configurations"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="navheader"><table width="100%" summary="Navigation header"><tr><th colspan="3" align="center">1.4.&nbsp;Example Configurations</th></tr><tr><td width="20%" align="left"><a accesskey="p" href="config.files.html">Prev</a>&nbsp;</td><th width="60%" align="center">&nbsp;</th><td width="20%" align="right">&nbsp;<a
  accesskey="n" href="important_configurations.html">Next</a></td></tr></table><hr></div><div class="section" title="1.4.&nbsp;Example Configurations"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="example_config"></a>1.4.&nbsp;Example Configurations</h2></div></div></div><div class="section" title="1.4.1.&nbsp;Basic Distributed HBase Install"><div class="titlepage"><div><div><h3 class="title"><a name="d1694e2061"></a>1.4.1.&nbsp;Basic Distributed HBase Install</h3></div></div></div><p>Here is an example basic configuration for a distributed ten
+   <title>1.4.&nbsp;Example Configurations</title><link rel="stylesheet" type="text/css" href="../css/freebsd_docbook.css"><meta name="generator" content="DocBook XSL-NS Stylesheets V1.76.1"><link rel="home" href="configuration.html" title="Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration"><link rel="up" href="configuration.html" title="Chapter&nbsp;1.&nbsp;Apache HBase (TM) Configuration"><link rel="prev" href="config.files.html" title="1.3.&nbsp;Configuration Files"><link rel="next" href="important_configurations.html" title="1.5.&nbsp;The Important Configurations"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="navheader"><table width="100%" summary="Navigation header"><tr><th colspan="3" align="center">1.4.&nbsp;Example Configurations</th></tr><tr><td width="20%" align="left"><a accesskey="p" href="config.files.html">Prev</a>&nbsp;</td><th width="60%" align="center">&nbsp;</th><td width="20%" align="right">&nbsp;<a
  accesskey="n" href="important_configurations.html">Next</a></td></tr></table><hr></div><div class="section" title="1.4.&nbsp;Example Configurations"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="example_config"></a>1.4.&nbsp;Example Configurations</h2></div></div></div><div class="section" title="1.4.1.&nbsp;Basic Distributed HBase Install"><div class="titlepage"><div><div><h3 class="title"><a name="d1699e2061"></a>1.4.1.&nbsp;Basic Distributed HBase Install</h3></div></div></div><p>Here is an example basic configuration for a distributed ten
         node cluster. The nodes are named <code class="varname">example0</code>,
         <code class="varname">example1</code>, etc., through node
         <code class="varname">example9</code> in this example. The HBase Master and the

Modified: hbase/hbase.apache.org/trunk/configuration/important_configurations.html
URL: http://svn.apache.org/viewvc/hbase/hbase.apache.org/trunk/configuration/important_configurations.html?rev=1462679&r1=1462678&r2=1462679&view=diff
==============================================================================
--- hbase/hbase.apache.org/trunk/configuration/important_configurations.html (original)
+++ hbase/hbase.apache.org/trunk/configuration/important_configurations.html Sat Mar 30 00:19:55 2013
@@ -120,7 +120,7 @@
           Keeping 5 regions per RS would be too low for a job, whereas 1000 will generate too many maps.
       </p></div></div><div class="section" title="1.5.2.7.&nbsp;Managed Splitting"><div class="titlepage"><div><div><h4 class="title"><a name="disable.splitting"></a>1.5.2.7.&nbsp;Managed Splitting</h4></div></div></div><p>
       Rather than let HBase auto-split your Regions, manage the splitting manually
-      <sup>[<a name="d1694e2287" href="#ftn.d1694e2287" class="footnote">11</a>]</sup>.
+      <sup>[<a name="d1699e2287" href="#ftn.d1699e2287" class="footnote">11</a>]</sup>.
  With growing amounts of data, splits will continually be needed. Since
  you always know exactly what regions you have, long-term debugging and
  profiling is much easier with manual splits. It is hard to trace the logs to
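<p>One way to take splitting into your own hands, sketched here as an illustration rather than
a prescription from the text above, is to raise the region size ceiling so HBase effectively
never splits on its own, and then pre-split or split regions yourself (for example with the
<code class="classname">RegionSplitter</code> tool referenced in the footnote):
</p><pre class="programlisting">&lt;!-- hbase-site.xml: a very large maximum region size effectively disables
     automatic splits; the 100 GB ceiling shown here is only an example. --&gt;
&lt;property&gt;
  &lt;name&gt;hbase.hregion.max.filesize&lt;/name&gt;
  &lt;value&gt;107374182400&lt;/value&gt;
&lt;/property&gt;</pre>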
@@ -180,7 +180,7 @@ of all regions.
       <a class="link" href="http://search-hadoop.com/m/pduLg2fydtE/Inconsistent+scan+performance+with+caching+set+&amp;subj=Re+Inconsistent+scan+performance+with+caching+set+to+1" target="_top">Inconsistent scan performance with caching set to 1</a>
       and the issue cited therein where setting notcpdelay improved scan speeds.  You might also
       see the graphs on the tail of <a class="link" href="https://issues.apache.org/jira/browse/HBASE-7008" target="_top">HBASE-7008 Set scanner caching to a better default</a>
-      where our Lars Hofhansl tries various data sizes w/ Nagle's on and off measuring the effect.</p></div></div><div class="footnotes"><br><hr width="100" align="left"><div class="footnote"><p><sup>[<a id="ftn.d1694e2287" href="#d1694e2287" class="para">11</a>] </sup>What follows is taken from the javadoc at the head of
+      where our Lars Hofhansl tries various data sizes w/ Nagle's on and off measuring the effect.</p></div></div><div class="footnotes"><br><hr width="100" align="left"><div class="footnote"><p><sup>[<a id="ftn.d1699e2287" href="#d1699e2287" class="para">11</a>] </sup>What follows is taken from the javadoc at the head of
       the <code class="classname">org.apache.hadoop.hbase.util.RegionSplitter</code> tool
      added to HBase after the 0.90.0 release.
       </p></div></div></div><div id="disqus_thread"></div><script type="text/javascript">

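<p>If you want to experiment with Nagle's algorithm as discussed above, the usual knobs are the
tcpnodelay switches on the HBase client and server IPC. The property names below are an
assumption; verify them against the <code class="filename">hbase-default.xml</code> shipped with
your version before relying on them:
</p><pre class="programlisting">&lt;!-- hbase-site.xml on clients and RegionServers. Property names assumed;
     confirm them in hbase-default.xml for your release. --&gt;
&lt;property&gt;
  &lt;name&gt;hbase.ipc.client.tcpnodelay&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;ipc.server.tcpnodelay&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;</pre>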
Modified: hbase/hbase.apache.org/trunk/configuration/standalone_dist.html
URL: http://svn.apache.org/viewvc/hbase/hbase.apache.org/trunk/configuration/standalone_dist.html?rev=1462679&r1=1462678&r2=1462679&view=diff
==============================================================================
--- hbase/hbase.apache.org/trunk/configuration/standalone_dist.html (original)
+++ hbase/hbase.apache.org/trunk/configuration/standalone_dist.html Sat Mar 30 00:19:55 2013
@@ -18,7 +18,7 @@
        daemons run on a single node -- a.k.a.
        <span class="emphasis"><em>pseudo-distributed</em></span> -- and
         <span class="emphasis"><em>fully-distributed</em></span> where the daemons are spread
-        across all nodes in the cluster <sup>[<a name="d1694e446" href="#ftn.d1694e446" class="footnote">9</a>]</sup>.</p><p>Distributed modes require an instance of the <span class="emphasis"><em>Hadoop
+        across all nodes in the cluster <sup>[<a name="d1699e446" href="#ftn.d1699e446" class="footnote">9</a>]</sup>.</p><p>Distributed modes require an instance of the <span class="emphasis"><em>Hadoop
         Distributed File System</em></span> (HDFS). See the Hadoop <a class="link" href="http://hadoop.apache.org/common/docs/r1.1.1/api/overview-summary.html#overview_description" target="_top">
        requirements and instructions</a> for how to set up an HDFS. Before
         proceeding, ensure you have an appropriate, working HDFS.</p><p>Below we describe the different distributed setups. Starting,
@@ -37,7 +37,7 @@
               Note that the <code class="varname">hbase.rootdir</code> property points to the
               local HDFS instance.
    		  </p><p>Now skip to <a class="xref" href="standalone_dist.html#confirm" title="1.2.3.&nbsp;Running and Confirming Your Installation">Section&nbsp;1.2.3, &#8220;Running and Confirming Your Installation&#8221;</a> for how to start and verify your
-          pseudo-distributed install. <sup>[<a name="d1694e494" href="#ftn.d1694e494" class="footnote">10</a>]</sup></p><div class="note" title="Note" style="margin-left: 0.5in; margin-right: 0.5in;"><h3 class="title">Note</h3><p>Let HBase create the <code class="varname">hbase.rootdir</code>
+          pseudo-distributed install. <sup>[<a name="d1699e494" href="#ftn.d1699e494" class="footnote">10</a>]</sup></p><div class="note" title="Note" style="margin-left: 0.5in; margin-right: 0.5in;"><h3 class="title">Note</h3><p>Let HBase create the <code class="varname">hbase.rootdir</code>
            directory. If you don't, you'll get a warning saying HBase needs a
             migration run because the directory is missing files expected by
             HBase (it'll create them if you let it).</p></div><div class="section" title="1.2.2.1.1.&nbsp;Pseudo-distributed Configuration File"><div class="titlepage"><div><div><h5 class="title"><a name="pseudo.config"></a>1.2.2.1.1.&nbsp;Pseudo-distributed Configuration File</h5></div></div></div><p>Below is a sample pseudo-distributed file for the node <code class="varname">h-24-30.example.com</code>.
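<p>The sample file itself is elided by the diff, but as a minimal sketch a pseudo-distributed
<code class="filename">hbase-site.xml</code> for a node such as
<code class="varname">h-24-30.example.com</code> typically carries the two properties below; the
NameNode port is an assumption, so use whatever your local HDFS actually listens on:
</p><pre class="programlisting">&lt;configuration&gt;
  &lt;property&gt;
    &lt;name&gt;hbase.rootdir&lt;/name&gt;
    &lt;!-- Port 8020 is an assumption; match your NameNode's fs.default.name. --&gt;
    &lt;value&gt;hdfs://h-24-30.example.com:8020/hbase&lt;/value&gt;
  &lt;/property&gt;
  &lt;property&gt;
    &lt;name&gt;hbase.cluster.distributed&lt;/name&gt;
    &lt;value&gt;true&lt;/value&gt;
  &lt;/property&gt;
&lt;/configuration&gt;</pre>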
@@ -158,8 +158,8 @@ stopping hbase...............</pre><p> S
        complete. It can take longer if your cluster is composed of many
         machines. If you are running a distributed operation, be sure to wait
         until HBase has shut down completely before stopping the Hadoop
-        daemons.</p></div><div class="footnotes"><br><hr width="100" align="left"><div class="footnote"><p><sup>[<a id="ftn.d1694e446" href="#d1694e446" class="para">9</a>] </sup>The pseudo-distributed vs fully-distributed nomenclature
-            comes from Hadoop.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1694e494" href="#d1694e494" class="para">10</a>] </sup>See <a class="xref" href="standalone_dist.html#pseudo.extras" title="1.2.2.1.2.&nbsp;Pseudo-distributed Extras">Section&nbsp;1.2.2.1.2, &#8220;Pseudo-distributed Extras&#8221;</a> for notes on how to start extra Masters and
+        daemons.</p></div><div class="footnotes"><br><hr width="100" align="left"><div class="footnote"><p><sup>[<a id="ftn.d1699e446" href="#d1699e446" class="para">9</a>] </sup>The pseudo-distributed vs fully-distributed nomenclature
+            comes from Hadoop.</p></div><div class="footnote"><p><sup>[<a id="ftn.d1699e494" href="#d1699e494" class="para">10</a>] </sup>See <a class="xref" href="standalone_dist.html#pseudo.extras" title="1.2.2.1.2.&nbsp;Pseudo-distributed Extras">Section&nbsp;1.2.2.1.2, &#8220;Pseudo-distributed Extras&#8221;</a> for notes on how to start extra Masters and
               RegionServers when running pseudo-distributed.</p></div></div></div><div id="disqus_thread"></div><script type="text/javascript">
     var disqus_shortname = 'hbase'; // required: replace example with your forum shortname
     var disqus_url = 'http://hbase.apache.org/book';

Modified: hbase/hbase.apache.org/trunk/cygwin.html
URL: http://svn.apache.org/viewvc/hbase/hbase.apache.org/trunk/cygwin.html?rev=1462679&r1=1462678&r2=1462679&view=diff
==============================================================================
--- hbase/hbase.apache.org/trunk/cygwin.html (original)
+++ hbase/hbase.apache.org/trunk/cygwin.html Sat Mar 30 00:19:55 2013
@@ -1,6 +1,6 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
 
-<!-- Generated by Apache Maven Doxia at Mar 21, 2013 -->
+<!-- Generated by Apache Maven Doxia at Mar 29, 2013 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -13,7 +13,7 @@
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
     <link rel="shortcut icon" href="/images/favicon.ico" />
-    <meta name="Date-Revision-yyyymmdd" content="20130321" />
+    <meta name="Date-Revision-yyyymmdd" content="20130329" />
     <meta http-equiv="Content-Language" content="en" />
         <!--Google Analytics-->
 <script type="text/javascript">
@@ -412,7 +412,7 @@ Now your <b>HBase </b>server is running,
     <div id="footer">
        <div class="xright">      
                 
-                 <span id="publishDate">Last Published: 2013-03-21</span>
+                 <span id="publishDate">Last Published: 2013-03-29</span>
               &nbsp;| <span id="projectVersion">Version: 0.97-SNAPSHOT</span>
             &nbsp;
         </div>

Modified: hbase/hbase.apache.org/trunk/developer.html
URL: http://svn.apache.org/viewvc/hbase/hbase.apache.org/trunk/developer.html?rev=1462679&r1=1462678&r2=1462679&view=diff
==============================================================================
--- hbase/hbase.apache.org/trunk/developer.html (original)
+++ hbase/hbase.apache.org/trunk/developer.html Sat Mar 30 00:19:55 2013
@@ -53,7 +53,7 @@ mvn clean package -DskipTests
        </p></div><div class="section" title="1.3.2.&nbsp;Building in snappy compression support"><div class="titlepage"><div><div><h3 class="title"><a name="build.snappy"></a>1.3.2.&nbsp;Building in snappy compression support</h3></div></div></div><p>Pass <code class="code">-Dsnappy</code> to trigger the snappy maven profile for building
             snappy native libs into hbase.  See also <a class="xref" href="#">???</a></p></div><div class="section" title="1.3.3.&nbsp;Building the HBase tarball"><div class="titlepage"><div><div><h3 class="title"><a name="build.tgz"></a>1.3.3.&nbsp;Building the HBase tarball</h3></div></div></div><p>Do the following to build the HBase tarball.
        Passing the -Prelease option will generate javadoc and run the RAT plugin to verify licenses on source.
-        </p><pre class="programlisting">% MAVEN_OPTS="-Xmx2g" mvn clean site install assembly:assembly -DskipTests -Prelease</pre><p>
+        </p><pre class="programlisting">% MAVEN_OPTS="-Xmx2g" mvn clean install javadoc:aggregate site assembly:assembly -DskipTests -Prelease</pre><p>
 </p></div><div class="section" title="1.3.4.&nbsp;Build Gotchas"><div class="titlepage"><div><div><h3 class="title"><a name="build.gotchas"></a>1.3.4.&nbsp;Build Gotchas</h3></div></div></div><p>If you see <code class="code">Unable to find resource 'VM_global_library.vm'</code>, ignore it.
			It's not an error.  It is <a class="link" href="http://jira.codehaus.org/browse/MSITE-286" target="_top">officially ugly</a> though.
            </p></div></div><div class="section" title="1.4.&nbsp;Adding an Apache HBase release to Apache's Maven Repository"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="mvn_repo"></a>1.4.&nbsp;Adding an Apache HBase release to Apache's Maven Repository</h2></div></div></div><p>Follow the instructions at
@@ -194,7 +194,7 @@ What is the new development version for 
              # Getting the javadoc into site is a little tricky.  You have to build it independently, then
               # 'aggregate' it at top-level so the pre-site site lifecycle step can find it; that is
               # what the javadoc:javadoc and javadoc:aggregate is about.
-              $ MAVEN_OPTS=" -Xmx3g" mvn clean -DskipTests javadoc:javadoc javadoc:aggregate site  site:stage -DstagingDirectory=/Users/stack/checkouts/hbase.apache.org/trunk
+              $ MAVEN_OPTS=" -Xmx3g" mvn clean -DskipTests javadoc:aggregate site  site:stage -DstagingDirectory=/Users/stack/checkouts/hbase.apache.org/trunk
              # Check the deployed site by viewing it in a browser.
               # If all is good, commit it and it will show up at http://hbase.apache.org
               #
@@ -247,18 +247,18 @@ Apache HBase uses a patched maven surefi
 its unit test characterizations.
 </p><p>Read the below to figure out which annotation of the set small, medium, and large to
 put on your new HBase unit test.
-</p><div class="section" title="1.7.2.1.&nbsp;Small Tests"><div class="titlepage"><div><div><h4 class="title"><a name="hbase.unittests.small"></a>1.7.2.1.&nbsp;Small Tests<a class="indexterm" name="d1377e445"></a></h4></div></div></div><p>
+</p><div class="section" title="1.7.2.1.&nbsp;Small Tests"><div class="titlepage"><div><div><h4 class="title"><a name="hbase.unittests.small"></a>1.7.2.1.&nbsp;Small Tests<a class="indexterm" name="d1382e445"></a></h4></div></div></div><p>
 <span class="emphasis"><em>Small</em></span> tests are executed in a shared JVM. We put in this category all the tests that can
 be executed quickly in a shared JVM.  The maximum execution time for a small test is 15 seconds,
-and small tests should not use a (mini)cluster.</p></div><div class="section" title="1.7.2.2.&nbsp;Medium Tests"><div class="titlepage"><div><div><h4 class="title"><a name="hbase.unittests.medium"></a>1.7.2.2.&nbsp;Medium Tests<a class="indexterm" name="d1377e456"></a></h4></div></div></div><p><span class="emphasis"><em>Medium</em></span> tests represent tests that must be executed
+and small tests should not use a (mini)cluster.</p></div><div class="section" title="1.7.2.2.&nbsp;Medium Tests"><div class="titlepage"><div><div><h4 class="title"><a name="hbase.unittests.medium"></a>1.7.2.2.&nbsp;Medium Tests<a class="indexterm" name="d1382e456"></a></h4></div></div></div><p><span class="emphasis"><em>Medium</em></span> tests represent tests that must be executed
 before proposing a patch. They are designed to run in less than 30 minutes altogether,
 and are quite stable in their results. They are designed to last less than 50 seconds
 individually. They can use a cluster, and each of them is executed in a separate JVM.
-</p></div><div class="section" title="1.7.2.3.&nbsp;Large Tests"><div class="titlepage"><div><div><h4 class="title"><a name="hbase.unittests.large"></a>1.7.2.3.&nbsp;Large Tests<a class="indexterm" name="d1377e466"></a></h4></div></div></div><p><span class="emphasis"><em>Large</em></span> tests are everything else. They are typically large-scale
+</p></div><div class="section" title="1.7.2.3.&nbsp;Large Tests"><div class="titlepage"><div><div><h4 class="title"><a name="hbase.unittests.large"></a>1.7.2.3.&nbsp;Large Tests<a class="indexterm" name="d1382e466"></a></h4></div></div></div><p><span class="emphasis"><em>Large</em></span> tests are everything else. They are typically large-scale
 tests, regression tests for specific bugs, timeout tests, performance tests.
 They are executed before a commit on the pre-integration machines. They can be run on
 the developer machine as well.
-</p></div><div class="section" title="1.7.2.4.&nbsp;Integration Tests"><div class="titlepage"><div><div><h4 class="title"><a name="hbase.unittests.integration"></a>1.7.2.4.&nbsp;Integration Tests<a class="indexterm" name="d1377e476"></a></h4></div></div></div><p><span class="emphasis"><em>Integration</em></span> tests are system level tests. See
+</p></div><div class="section" title="1.7.2.4.&nbsp;Integration Tests"><div class="titlepage"><div><div><h4 class="title"><a name="hbase.unittests.integration"></a>1.7.2.4.&nbsp;Integration Tests<a class="indexterm" name="d1382e476"></a></h4></div></div></div><p><span class="emphasis"><em>Integration</em></span> tests are system level tests. See
 <a class="xref" href="#integration.tests" title="1.7.5.&nbsp;Integration Tests">Section&nbsp;1.7.5, &#8220;Integration Tests&#8221;</a> for more info.
 </p></div></div><div class="section" title="1.7.3.&nbsp;Running tests"><div class="titlepage"><div><div><h3 class="title"><a name="hbase.unittests.cmds"></a>1.7.3.&nbsp;Running tests</h3></div></div></div><p>Below we describe how to run the Apache HBase junit categories.</p><div class="section" title="1.7.3.1.&nbsp;Default: small and medium category tests"><div class="titlepage"><div><div><h4 class="title"><a name="hbase.unittests.cmds.test"></a>1.7.3.1.&nbsp;Default: small and medium category tests
 </h4></div></div></div><p>Running </p><pre class="programlisting">mvn test</pre><p> will execute all small tests in a single JVM
@@ -313,7 +313,7 @@ It must be executed from the directory w
 Running </p><pre class="programlisting">./dev-support/hbasetests.sh runAllTests</pre><p> will execute all tests.
 Running </p><pre class="programlisting">./dev-support/hbasetests.sh replayFailed</pre><p> will rerun the failed tests a
 second time, in a separate jvm and without parallelisation.
-</p></div><div class="section" title="1.7.3.7.&nbsp;Test Resource Checker"><div class="titlepage"><div><div><h4 class="title"><a name="hbase.unittests.resource.checker"></a>1.7.3.7.&nbsp;Test Resource Checker<a class="indexterm" name="d1377e604"></a></h4></div></div></div><p>
+</p></div><div class="section" title="1.7.3.7.&nbsp;Test Resource Checker"><div class="titlepage"><div><div><h4 class="title"><a name="hbase.unittests.resource.checker"></a>1.7.3.7.&nbsp;Test Resource Checker<a class="indexterm" name="d1382e604"></a></h4></div></div></div><p>
 A custom Maven SureFire plugin listener checks a  number of resources before
 and after each HBase unit test runs and logs its findings at the end of the test
 output files which can be found in <code class="filename">target/surefire-reports</code>
@@ -539,7 +539,7 @@ mvn compile
            </p><p>This convention comes from our parent project Hadoop.</p></div><div class="section" title="1.10.4.&nbsp;Invariants"><div class="titlepage"><div><div><h3 class="title"><a name="design.invariants"></a>1.10.4.&nbsp;Invariants</h3></div></div></div><p>We don't have many but what we have we list below.  All are subject to challenge of
            course but until then, please hold to the rules of the road.
            </p><div class="section" title="1.10.4.1.&nbsp;No permanent state in ZooKeeper"><div class="titlepage"><div><div><h4 class="title"><a name="design.invariants.zk.data"></a>1.10.4.1.&nbsp;No permanent state in ZooKeeper</h4></div></div></div><p>ZooKeeper state should transient (treat it like memory). If deleted, hbase
-          should be able to recover and essentially be in the same state<sup>[<a name="d1377e1044" href="#ftn.d1377e1044" class="footnote">1</a>]</sup>.
+          should be able to recover and essentially be in the same state<sup>[<a name="d1382e1044" href="#ftn.d1382e1044" class="footnote">1</a>]</sup>.
           </p></div></div><div class="section" title="1.10.5.&nbsp;Running In-Situ"><div class="titlepage"><div><div><h3 class="title"><a name="run.insitu"></a>1.10.5.&nbsp;Running In-Situ</h3></div></div></div><p>If you are developing Apache HBase, frequently it is useful to test your changes against a more-real cluster than what you find in unit tests. In this case, HBase can be run directly from the source in local-mode.
            All you need to do is run:
            </p><pre class="programlisting">${HBASE_HOME}/bin/start-hbase.sh</pre><p>
@@ -663,7 +663,7 @@ Bar bar = foo.getBar();     &lt;--- imag
               a contributor cannot be expected to be up on the
               particular vagaries and interconnections that occur
               in a project like hbase.  A committer should.
-            </p></div></div></div><div class="footnotes"><br><hr width="100" align="left"><div class="footnote"><p><sup>[<a id="ftn.d1377e1044" href="#d1377e1044" class="para">1</a>] </sup>There are currently
+            </p></div></div></div><div class="footnotes"><br><hr width="100" align="left"><div class="footnote"><p><sup>[<a id="ftn.d1382e1044" href="#d1382e1044" class="para">1</a>] </sup>There are currently
           a few exceptions that we need to fix around whether a table is enabled or disabled</p></div></div></div><div id="disqus_thread"></div><script type="text/javascript">
     var disqus_shortname = 'hbase'; // required: replace example with your forum shortname
     var disqus_url = 'http://hbase.apache.org/book';

Modified: hbase/hbase.apache.org/trunk/developer/build.html
URL: http://svn.apache.org/viewvc/hbase/hbase.apache.org/trunk/developer/build.html?rev=1462679&r1=1462678&r2=1462679&view=diff
==============================================================================
--- hbase/hbase.apache.org/trunk/developer/build.html (original)
+++ hbase/hbase.apache.org/trunk/developer/build.html Sat Mar 30 00:19:55 2013
@@ -12,7 +12,7 @@ mvn clean package -DskipTests
        </p></div><div class="section" title="1.3.2.&nbsp;Building in snappy compression support"><div class="titlepage"><div><div><h3 class="title"><a name="build.snappy"></a>1.3.2.&nbsp;Building in snappy compression support</h3></div></div></div><p>Pass <code class="code">-Dsnappy</code> to trigger the snappy maven profile for building
             snappy native libs into hbase.  See also <a class="xref" href="">???</a></p></div><div class="section" title="1.3.3.&nbsp;Building the HBase tarball"><div class="titlepage"><div><div><h3 class="title"><a name="build.tgz"></a>1.3.3.&nbsp;Building the HBase tarball</h3></div></div></div><p>Do the following to build the HBase tarball.
        Passing the -Prelease option will generate javadoc and run the RAT plugin to verify licenses on source.
-        </p><pre class="programlisting">% MAVEN_OPTS="-Xmx2g" mvn clean site install assembly:assembly -DskipTests -Prelease</pre><p>
+        </p><pre class="programlisting">% MAVEN_OPTS="-Xmx2g" mvn clean install javadoc:aggregate site assembly:assembly -DskipTests -Prelease</pre><p>
 </p></div><div class="section" title="1.3.4.&nbsp;Build Gotchas"><div class="titlepage"><div><div><h3 class="title"><a name="build.gotchas"></a>1.3.4.&nbsp;Build Gotchas</h3></div></div></div><p>If you see <code class="code">Unable to find resource 'VM_global_library.vm'</code>, ignore it.
			It's not an error.  It is <a class="link" href="http://jira.codehaus.org/browse/MSITE-286" target="_top">officially ugly</a> though.
            </p></div></div><div id="disqus_thread"></div><script type="text/javascript">


