From e...@apache.org
Subject [1/9] hbase git commit: Blanket update of src/main/docbkx from master
Date Wed, 03 Dec 2014 05:53:24 GMT
Repository: hbase
Updated Branches:
  refs/heads/branch-1 5f150e49f -> 6a0c4f3ba


http://git-wip-us.apache.org/repos/asf/hbase/blob/48d9d27d/src/main/docbkx/shell.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/shell.xml b/src/main/docbkx/shell.xml
index d1067a1..a400d8d 100644
--- a/src/main/docbkx/shell.xml
+++ b/src/main/docbkx/shell.xml
@@ -47,7 +47,7 @@
 
     <section
         xml:id="scripting">
-        <title>Scripting</title>
+        <title>Scripting with Ruby</title>
         <para>For examples of scripting Apache HBase, look in the HBase <filename>bin</filename>
             directory at the files that end in <filename>*.rb</filename>. To run one of these
             files, do as follows:</para>
@@ -55,6 +55,155 @@
     </section>
 
     <section>
+        <title>Running the Shell in Non-Interactive Mode</title>
+        <para>A new non-interactive mode has been added to the HBase Shell (<link
+                xlink:href="https://issues.apache.org/jira/browse/HBASE-11658">HBASE-11658</link>).
+            Non-interactive mode captures the exit status (success or failure) of HBase Shell
+            commands and passes that status back to the command interpreter. If you use the normal
+            interactive mode, the HBase Shell will only ever return its own exit status, which will
+            nearly always be <literal>0</literal> for success.</para>
+        <para>To invoke non-interactive mode, pass the <option>-n</option> or
+                <option>--non-interactive</option> option to HBase Shell.</para>
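+        <para>For example, the following invocation (illustrative; it assumes a table named
+            <literal>t1</literal> exists) runs a single command non-interactively and prints the
+            exit status the shell passed back:</para>
+        <screen language="bourne">$ <userinput>echo "exists 't1'" | ./hbase shell -n > /dev/null 2>&amp;1; echo $?</userinput>
+<computeroutput>0</computeroutput></screen>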
+    </section>
+    
+    <section xml:id="hbase.shell.noninteractive">
+        <title>HBase Shell in OS Scripts</title>
+        <para>You can use the HBase shell from within operating system script interpreters like the
+            Bash shell, which is the default command interpreter for most Linux and UNIX
+            distributions. The following guidelines use Bash syntax, but could be adjusted to work
+            with C-style shells such as csh or tcsh, and could probably be modified to work with the
+            Microsoft Windows script interpreter as well. Submissions are welcome.</para>
+        <note>
+            <para>Spawning HBase Shell commands in this way is slow, so keep that in mind when
+                deciding whether combining HBase operations with operating system command-line
+                scripting is appropriate.</para>
+        </note>
+        <example>
+            <title>Passing Commands to the HBase Shell</title>
+            <para>You can pass commands to the HBase Shell in non-interactive mode (see <xref
+                    linkend="hbasee.shell.noninteractive"/>) using the <command>echo</command>
+                command and the <literal>|</literal> (pipe) operator. Be sure to escape characters
+                in the HBase commands which would otherwise be interpreted by the shell. Some
+                debug-level output has been truncated from the example below.</para>
+            <screen>$ <userinput>echo "describe 'test1'" | ./hbase shell -n</userinput>
+                <computeroutput>
+Version 0.98.3-hadoop2, rd5e65a9144e315bb0a964e7730871af32f5018d5, Sat May 31 19:56:09 PDT 2014
+
+describe 'test1'
+
+DESCRIPTION                                          ENABLED
+ 'test1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NON true
+ E', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0',
+  VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIO
+ NS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS =>
+ 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false'
+ , BLOCKCACHE => 'true'}
+1 row(s) in 3.2410 seconds    
+                </computeroutput>            
+            </screen>
+            <para>To suppress all output, redirect it to <filename>/dev/null</filename>:</para>
+            <screen>$ <userinput>echo "describe 'test'" | ./hbase shell -n > /dev/null 2>&amp;1</userinput></screen>
+        </example>
+        <example>
+            <title>Checking the Result of a Scripted Command</title>
+            <para>Since scripts are not designed to be run interactively, you need a way to check
+                whether your command failed or succeeded. The HBase shell uses the standard
+                convention of returning a value of <literal>0</literal> for successful commands, and
+                some non-zero value for failed commands. Bash stores a command's return value in a
+                special variable called <varname>$?</varname>. Because that variable is
+                overwritten each time the shell runs any command, you should store the result in a
+                different, script-defined variable.</para>
+            <para>This is a naive script that shows one way to store the return value and make a
+                decision based upon it.</para>
+            <programlisting language="bourne">
+#!/bin/bash
+
+echo "describe 'test'" | ./hbase shell -n > /dev/null 2>&amp;1
+status=$?
+echo "The status was " $status  
+if ($status == 0); then
+    echo "The command succeeded"
+else
+    echo "The command may have failed."
+fi
+return $status
+            </programlisting>
+        </example>
+        <section>
+            <title>Checking for Success or Failure In Scripts</title>
+            <para>Getting an exit code of 0 means that the command you scripted definitely
+                succeeded. However, getting a non-zero exit code does not necessarily mean the
+                command failed. The command could have succeeded, but the client lost connectivity,
+                or some other event obscured its success. This is because RPC commands are
+                stateless. The only way to be sure of the status of an operation is to check. For
+                instance, if your script creates a table, but returns a non-zero exit value, you
+                should check whether the table was actually created before trying again to create
+                it.</para>
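+            <para>The following Bash fragment is a minimal sketch of that pattern. It assumes a
+                table named <literal>test</literal>, and it assumes the output of the
+                <command>exists</command> shell command contains the phrase <literal>does
+                exist</literal>; verify both against your installation.</para>
+            <programlisting language="bourne">
+#!/bin/bash
+
+echo "create 'test', 'cf'" | ./hbase shell -n > /dev/null 2>&amp;1
+if [ $? -ne 0 ]; then
+    # A non-zero status does not prove failure; check whether the table exists anyway.
+    if echo "exists 'test'" | ./hbase shell -n 2>/dev/null | grep -q "does exist"; then
+        echo "Table was created despite the non-zero exit status."
+    else
+        echo "Table was not created; retry the create command."
+    fi
+fi
+            </programlisting>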
+        </section>
+    </section>
+    
+    <section>
+        <title>Read HBase Shell Commands from a Command File</title>
+        <para>You can enter HBase Shell commands into a text file, one command per line, and pass
+            that file to the HBase Shell.</para>
+        <example>
+            <title>Example Command File</title>
+            <screen>
+create 'test', 'cf'
+list 'test'
+put 'test', 'row1', 'cf:a', 'value1'
+put 'test', 'row2', 'cf:b', 'value2'
+put 'test', 'row3', 'cf:c', 'value3'
+put 'test', 'row4', 'cf:d', 'value4'
+scan 'test'
+get 'test', 'row1'
+disable 'test'
+enable 'test'
+            </screen>
+        </example>
+        <example>
+            <title>Directing HBase Shell to Execute the Commands</title>
+            <para>Pass the path to the command file as the only argument to the <command>hbase
+                    shell</command> command. Each command is executed and its output is shown. If
+                you do not include the <command>exit</command> command in your script, you are
+                returned to the HBase shell prompt. There is no way to programmatically check each
+                individual command for success or failure. Also, though you see the output for each
+                command, the commands themselves are not echoed to the screen so it can be difficult
+                to line up the command with its output.</para>
+            <screen>
+$ <userinput>./hbase shell ./sample_commands.txt</userinput>
+<computeroutput>0 row(s) in 3.4170 seconds
+
+TABLE
+test
+1 row(s) in 0.0590 seconds
+
+0 row(s) in 0.1540 seconds
+
+0 row(s) in 0.0080 seconds
+
+0 row(s) in 0.0060 seconds
+
+0 row(s) in 0.0060 seconds
+
+ROW                   COLUMN+CELL
+ row1                 column=cf:a, timestamp=1407130286968, value=value1
+ row2                 column=cf:b, timestamp=1407130286997, value=value2
+ row3                 column=cf:c, timestamp=1407130287007, value=value3
+ row4                 column=cf:d, timestamp=1407130287015, value=value4
+4 row(s) in 0.0420 seconds
+
+COLUMN                CELL
+ cf:a                 timestamp=1407130286968, value=value1
+1 row(s) in 0.0110 seconds
+
+0 row(s) in 1.5630 seconds
+
+0 row(s) in 0.4360 seconds</computeroutput>                
+            </screen>
+        </example>
+    </section>
+    <section>
         <title>Passing VM Options to the Shell</title>
         <para>You can pass VM options to the HBase Shell using the <code>HBASE_SHELL_OPTS</code>
             environment variable. You can set this in your environment, for instance by editing

http://git-wip-us.apache.org/repos/asf/hbase/blob/48d9d27d/src/main/docbkx/thrift_filter_language.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/thrift_filter_language.xml b/src/main/docbkx/thrift_filter_language.xml
index a0535a4..74da600 100644
--- a/src/main/docbkx/thrift_filter_language.xml
+++ b/src/main/docbkx/thrift_filter_language.xml
@@ -348,13 +348,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>KeyOnlyFilter ()</programlisting>
+                                <programlisting language="java">KeyOnlyFilter ()</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>KeyOnlyFilter ()"</programlisting>
+                                <programlisting language="java">KeyOnlyFilter ()"</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -368,13 +368,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>FirstKeyOnlyFilter ()</programlisting>
+                                <programlisting language="java">FirstKeyOnlyFilter ()</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>FirstKeyOnlyFilter ()</programlisting>
+                                <programlisting language="java">FirstKeyOnlyFilter ()</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -388,13 +388,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>PrefixFilter (‘&lt;row_prefix>’)</programlisting>
+                                <programlisting language="java">PrefixFilter (‘&lt;row_prefix>’)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>PrefixFilter (‘Row’)</programlisting>
+                                <programlisting language="java">PrefixFilter (‘Row’)</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -409,13 +409,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>ColumnPrefixFilter(‘&lt;column_prefix>’)</programlisting>
+                                <programlisting language="java">ColumnPrefixFilter(‘&lt;column_prefix>’)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>ColumnPrefixFilter(‘Col’)</programlisting>
+                                <programlisting language="java">ColumnPrefixFilter(‘Col’)</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -430,13 +430,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>MultipleColumnPrefixFilter(‘&lt;column_prefix>’, ‘&lt;column_prefix>’, …, ‘&lt;column_prefix>’)</programlisting>
+                                <programlisting language="java">MultipleColumnPrefixFilter(‘&lt;column_prefix>’, ‘&lt;column_prefix>’, …, ‘&lt;column_prefix>’)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>MultipleColumnPrefixFilter(‘Col1’, ‘Col2’)</programlisting>
+                                <programlisting language="java">MultipleColumnPrefixFilter(‘Col1’, ‘Col2’)</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -449,14 +449,14 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>ColumnCountGetFilter
+                                <programlisting language="java">ColumnCountGetFilter
                         (‘&lt;limit>’)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>ColumnCountGetFilter (4)</programlisting>
+                                <programlisting language="java">ColumnCountGetFilter (4)</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -469,13 +469,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>PageFilter (‘&lt;page_size&gt;’)</programlisting>
+                                <programlisting language="java">PageFilter (‘&lt;page_size&gt;’)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>PageFilter (2)</programlisting>
+                                <programlisting language="java">PageFilter (2)</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -489,13 +489,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>ColumnPaginationFilter(‘&lt;limit>’, ‘&lt;offset>’)</programlisting>
+                                <programlisting language="java">ColumnPaginationFilter(‘&lt;limit>’, ‘&lt;offset>’)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>ColumnPaginationFilter (3, 5)</programlisting>
+                                <programlisting language="java">ColumnPaginationFilter (3, 5)</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -509,13 +509,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>InclusiveStopFilter(‘&lt;stop_row_key>’)</programlisting>
+                                <programlisting language="java">InclusiveStopFilter(‘&lt;stop_row_key>’)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>InclusiveStopFilter ('Row2')</programlisting>
+                                <programlisting language="java">InclusiveStopFilter ('Row2')</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -528,13 +528,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>TimeStampsFilter (&lt;timestamp>, &lt;timestamp>, ... ,&lt;timestamp>)</programlisting>
+                                <programlisting language="java">TimeStampsFilter (&lt;timestamp>, &lt;timestamp>, ... ,&lt;timestamp>)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>TimeStampsFilter (5985489, 48895495, 58489845945)</programlisting>
+                                <programlisting language="java">TimeStampsFilter (5985489, 48895495, 58489845945)</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -549,13 +549,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>RowFilter (&lt;compareOp>, ‘&lt;row_comparator>’)</programlisting>
+                                <programlisting language="java">RowFilter (&lt;compareOp>, ‘&lt;row_comparator>’)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>RowFilter (&lt;=, ‘xyz)</programlisting>
+                                <programlisting language="java">RowFilter (&lt;=, ‘xyz)</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -570,13 +570,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>QualifierFilter (&lt;compareOp&gt;, ‘&lt;qualifier_comparator>’)</programlisting>
+                                <programlisting language="java">QualifierFilter (&lt;compareOp&gt;, ‘&lt;qualifier_comparator>’)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>QualifierFilter (=, ‘Column1’)</programlisting>
+                                <programlisting language="java">QualifierFilter (=, ‘Column1’)</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -591,13 +591,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>QualifierFilter (&lt;compareOp>,‘&lt;qualifier_comparator>’)</programlisting>
+                                <programlisting language="java">QualifierFilter (&lt;compareOp>,‘&lt;qualifier_comparator>’)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>QualifierFilter (=,‘Column1’)</programlisting>
+                                <programlisting language="java">QualifierFilter (=,‘Column1’)</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -611,13 +611,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>ValueFilter (&lt;compareOp>,‘&lt;value_comparator>’) </programlisting>
+                                <programlisting language="java">ValueFilter (&lt;compareOp>,‘&lt;value_comparator>’) </programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>ValueFilter (!=, ‘Value’)</programlisting>
+                                <programlisting language="java">ValueFilter (!=, ‘Value’)</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -640,26 +640,26 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting><![CDATA[DependentColumnFilter (‘<family>’,‘<qualifier>’, <boolean>, <compare operator>, ‘<value
+                                <programlisting language="java"><![CDATA[DependentColumnFilter (‘<family>’,‘<qualifier>’, <boolean>, <compare operator>, ‘<value
                         comparator’)]]></programlisting>
                             </listitem>
                             <listitem>
-                                <programlisting><![CDATA[DependentColumnFilter (‘<family>’,‘<qualifier>’, <boolean>)]]></programlisting>
+                                <programlisting language="java"><![CDATA[DependentColumnFilter (‘<family>’,‘<qualifier>’, <boolean>)]]></programlisting>
                             </listitem>
                             <listitem>
-                                <programlisting>DependentColumnFilter (‘&lt;family>’,‘&lt;qualifier>’)</programlisting>
+                                <programlisting language="java">DependentColumnFilter (‘&lt;family>’,‘&lt;qualifier>’)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>DependentColumnFilter (‘conf’, ‘blacklist’, false, >=, ‘zebra’)</programlisting>
+                                <programlisting language="java">DependentColumnFilter (‘conf’, ‘blacklist’, false, >=, ‘zebra’)</programlisting>
                             </listitem>
                             <listitem>
-                                <programlisting>DependentColumnFilter (‘conf’, 'blacklist', true)</programlisting>
+                                <programlisting language="java">DependentColumnFilter (‘conf’, 'blacklist', true)</programlisting>
                             </listitem>
                             <listitem>
-                                <programlisting>DependentColumnFilter (‘conf’, 'blacklist')</programlisting>
+                                <programlisting language="java">DependentColumnFilter (‘conf’, 'blacklist')</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -683,16 +683,16 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>SingleColumnValueFilter(‘&lt;family>’,‘&lt;qualifier>’, &lt;compare operator>, ‘&lt;comparator>’, &lt;filterIfColumnMissing_boolean>, &lt;latest_version_boolean>)</programlisting>
+                                <programlisting language="java">SingleColumnValueFilter(‘&lt;family>’,‘&lt;qualifier>’, &lt;compare operator>, ‘&lt;comparator>’, &lt;filterIfColumnMissing_boolean>, &lt;latest_version_boolean>)</programlisting>
                             </listitem>
                             <listitem>
-                                <programlisting>SingleColumnValueFilter(‘&lt;family>’, ‘&lt;qualifier>, &lt;compare operator>, ‘&lt;comparator>’)</programlisting>
+                                <programlisting language="java">SingleColumnValueFilter(‘&lt;family>’, ‘&lt;qualifier>, &lt;compare operator>, ‘&lt;comparator>’)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>SingleColumnValueFilter (‘FamilyA’, ‘Column1’, &lt;=, ‘abc’, true, false)</programlisting>
+                                <programlisting language="java">SingleColumnValueFilter (‘FamilyA’, ‘Column1’, &lt;=, ‘abc’, true, false)</programlisting>
                             </listitem>
                             <listitem>
                                 <programlisting>SingleColumnValueFilter (‘FamilyA’, ‘Column1’, &lt;=, ‘abc’)</programlisting>
@@ -710,19 +710,19 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>SingleColumnValueExcludeFilter('&lt;family>', '&lt;qualifier>', &lt;compare operator>, '&lt;comparator>', &lt;latest_version_boolean>, &lt;filterIfColumnMissing_boolean>)</programlisting>
+                                <programlisting language="java">SingleColumnValueExcludeFilter('&lt;family>', '&lt;qualifier>', &lt;compare operator>, '&lt;comparator>', &lt;latest_version_boolean>, &lt;filterIfColumnMissing_boolean>)</programlisting>
                             </listitem>
                             <listitem>
-                                <programlisting>SingleColumnValueExcludeFilter('&lt;family>', '&lt;qualifier>', &lt;compare operator>, '&lt;comparator>')</programlisting>
+                                <programlisting language="java">SingleColumnValueExcludeFilter('&lt;family>', '&lt;qualifier>', &lt;compare operator>, '&lt;comparator>')</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>SingleColumnValueExcludeFilter (‘FamilyA’, ‘Column1’, ‘&lt;=’, ‘abc’, ‘false’, ‘true’)</programlisting>
+                                <programlisting language="java">SingleColumnValueExcludeFilter (‘FamilyA’, ‘Column1’, ‘&lt;=’, ‘abc’, ‘false’, ‘true’)</programlisting>
                             </listitem>
                             <listitem>
-                                <programlisting>SingleColumnValueExcludeFilter (‘FamilyA’, ‘Column1’, ‘&lt;=’, ‘abc’)</programlisting>
+                                <programlisting language="java">SingleColumnValueExcludeFilter (‘FamilyA’, ‘Column1’, ‘&lt;=’, ‘abc’)</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>
@@ -739,13 +739,13 @@ is evaluated as
                         <itemizedlist>
                             <title>Syntax</title>
                             <listitem>
-                                <programlisting>ColumnRangeFilter (‘&lt;minColumn>’, &lt;minColumnInclusive_bool>, ‘&lt;maxColumn>’, &lt;maxColumnInclusive_bool>)</programlisting>
+                                <programlisting language="java">ColumnRangeFilter (‘&lt;minColumn>’, &lt;minColumnInclusive_bool>, ‘&lt;maxColumn>’, &lt;maxColumnInclusive_bool>)</programlisting>
                             </listitem>
                         </itemizedlist>
                         <itemizedlist>
                             <title>Example</title>
                             <listitem>
-                                <programlisting>ColumnRangeFilter (‘abc’, true, ‘xyz’, false)</programlisting>
+                                <programlisting language="java">ColumnRangeFilter (‘abc’, true, ‘xyz’, false)</programlisting>
                             </listitem>
                         </itemizedlist>
                     </listitem>

http://git-wip-us.apache.org/repos/asf/hbase/blob/48d9d27d/src/main/docbkx/tracing.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/tracing.xml b/src/main/docbkx/tracing.xml
index 220cc79..b5dfd35 100644
--- a/src/main/docbkx/tracing.xml
+++ b/src/main/docbkx/tracing.xml
@@ -81,7 +81,7 @@ public void receiveSpan(Span span);
       change your config to use zipkin receiver, distribute the new configuration and then (rolling)
       restart. </para>
     <para> Here is an example of the manual setup procedure. </para>
-    <screen><![CDATA[
+    <screen language="bourne"><![CDATA[
 $ git clone https://github.com/cloudera/htrace
 $ cd htrace/htrace-zipkin
 $ mvn compile assembly:single
@@ -92,7 +92,7 @@ $ cp target/htrace-zipkin-*-jar-with-dependencies.jar $HBASE_HOME/lib/
       for a <varname>hbase.zipkin.collector-hostname</varname> and
         <varname>hbase.zipkin.collector-port</varname> property with a value describing the Zipkin
       collector server to which span information is sent. </para>
-    <programlisting><![CDATA[
+    <programlisting language="xml"><![CDATA[
 <property>
   <name>hbase.trace.spanreceiver.classes</name>
   <value>org.htrace.impl.ZipkinSpanReceiver</value>
@@ -118,7 +118,7 @@ $ cp target/htrace-zipkin-*-jar-with-dependencies.jar $HBASE_HOME/lib/
     <title>Client Modifications</title>
     <para> In order to turn on tracing in your client code, you must initialize the module that
       sends spans to the receiver once per client process. </para>
-    <programlisting><![CDATA[
+    <programlisting language="java"><![CDATA[
 private SpanReceiverHost spanReceiverHost;
 
 ...
@@ -129,13 +129,13 @@ private SpanReceiverHost spanReceiverHost;
     <para>Then you simply start a tracing span before requests you think are interesting, and close it
       when the request is done. For example, if you wanted to trace all of your get operations, you
       change this: </para>
-    <programlisting><![CDATA[
+    <programlisting language="java"><![CDATA[
 HTable table = new HTable(conf, "t1");
 Get get = new Get(Bytes.toBytes("r1"));
 Result res = table.get(get);
 ]]></programlisting>
     <para>into: </para>
-    <programlisting><![CDATA[
+    <programlisting language="java"><![CDATA[
 TraceScope ts = Trace.startSpan("Gets", Sampler.ALWAYS);
 try {
   HTable table = new HTable(conf, "t1");
@@ -146,7 +146,7 @@ try {
 }
 ]]></programlisting>
     <para>If you wanted to trace half of your 'get' operations, you would pass in: </para>
-    <programlisting><![CDATA[
+    <programlisting language="java"><![CDATA[
 new ProbabilitySampler(0.5)
 ]]></programlisting>
     <para>in lieu of <varname>Sampler.ALWAYS</varname> to <classname>Trace.startSpan()</classname>.

http://git-wip-us.apache.org/repos/asf/hbase/blob/48d9d27d/src/main/docbkx/troubleshooting.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/troubleshooting.xml b/src/main/docbkx/troubleshooting.xml
index 884c194..d57bb08 100644
--- a/src/main/docbkx/troubleshooting.xml
+++ b/src/main/docbkx/troubleshooting.xml
@@ -128,7 +128,7 @@
         this or confirm this is happening, GC logging can be turned on in the Java virtual machine. </para>
       <para> To enable, in <filename>hbase-env.sh</filename>, uncomment one of the below lines
         :</para>
-      <programlisting>
+      <programlisting language="bourne">
 # This enables basic gc logging to the .out file.
 # export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
 
@@ -194,13 +194,13 @@
 collections take but if it's too small, objects are promoted to old gen too quickly). In the
         below we constrain new gen size to 64m. </para>
       <para> Add the below line in <filename>hbase-env.sh</filename>:
-        <programlisting>
+        <programlisting language="bourne">
 export SERVER_GC_OPTS="$SERVER_GC_OPTS -XX:NewSize=64m -XX:MaxNewSize=64m"
             </programlisting>
       </para>
       <para> Similarly, to enable GC logging for client processes, uncomment one of the below lines
         in <filename>hbase-env.sh</filename>:</para>
-      <programlisting>
+      <programlisting language="bourne">
 # This enables basic gc logging to the .out file.
 # export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
 
@@ -290,7 +290,7 @@ export SERVER_GC_OPTS="$SERVER_GC_OPTS -XX:NewSize=64m -XX:MaxNewSize=64m"
         <title>zkcli</title>
         <para><code>zkcli</code> is a very useful tool for investigating ZooKeeper-related issues.
           To invoke:
-          <programlisting>
+          <programlisting language="bourne">
 ./hbase zkcli -server host:port &lt;cmd&gt; &lt;args&gt;
 </programlisting>
           The commands (and arguments) are:</para>
@@ -374,7 +374,7 @@ Swap: 16008732k total,	14348k used, 15994384k free, 11106908k cached
         <para>
           <code>jps</code> is shipped with every JDK and gives the java process ids for the current
           user (if root, then it gives the ids for all users). Example:</para>
-        <programlisting>
+        <programlisting language="bourne">
 hadoop@sv4borg12:~$ jps
 1322 TaskTracker
 17789 HRegionServer
@@ -418,7 +418,7 @@ hadoop@sv4borg12:~$ jps
         </itemizedlist>
         <para> You can then do stuff like checking out the full command line that started the
           process:</para>
-        <programlisting>
+        <programlisting language="bourne">
 hadoop@sv4borg12:~$ ps aux | grep HRegionServer
 hadoop   17789  155 35.2 9067824 8604364 ?     S&lt;l  Mar04 9855:48 /usr/java/jdk1.6.0_14/bin/java -Xmx8000m -XX:+DoEscapeAnalysis -XX:+AggressiveOpts -XX:+UseConcMarkSweepGC -XX:NewSize=64m -XX:MaxNewSize=64m -XX:CMSInitiatingOccupancyFraction=88 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/export1/hadoop/logs/gc-hbase.log -Dcom.sun.management.jmxremote.port=10102 -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.password.file=/home/hadoop/hbase/conf/jmxremote.password -Dcom.sun.management.jmxremote -Dhbase.log.dir=/export1/hadoop/logs -Dhbase.log.file=hbase-hadoop-regionserver-sv4borg12.log -Dhbase.home.dir=/home/hadoop/hbase -Dhbase.id.str=hadoop -Dhbase.root.logger=INFO,DRFA -Djava.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64 -classpath /home/hadoop/hbase/bin/../conf:[many jars]:/home/hadoop/hadoop/conf org.apache.hadoop.hbase.regionserver.HRegionServer start
         </programlisting>
@@ -607,6 +607,20 @@ java.lang.Thread.State: WAITING (on object monitor)
       <para>See <xref
           linkend="perf.hbase.client.caching" />. </para>
     </section>
+    <section>
+      <title>Performance Differences in Thrift and Java APIs</title>
+      <para>Poor performance, or even <code>ScannerTimeoutExceptions</code>, can occur if
+          <code>Scan.setCaching</code> is too high, as discussed in <xref
+          linkend="trouble.client.scantimeout"/>. If the Thrift client uses the wrong caching
+        settings for a given workload, performance can suffer compared to the Java API. To set
+        caching for a given scan in the Thrift client, use the <code>scannerGetList(scannerId,
+          numRows)</code> method, where <code>numRows</code> is an integer representing the number
+        of rows to cache. In one case, it was found that reducing the cache for Thrift scans from
+        1000 to 100 increased performance to near parity with the Java API given the same
+        queries.</para>
+      <para>See also Jesse Andersen's <link xlink:href="http://blog.cloudera.com/blog/2014/04/how-to-use-the-hbase-thrift-interface-part-3-using-scans/">blog post</link> 
+        about using Scans with Thrift.</para>
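+      <para>The following Java fragment is a sketch only, against the generated Thrift 1 client;
+        the gateway address and table name are illustrative assumptions, and exception handling is
+        omitted.</para>
+      <programlisting language="java"><![CDATA[
+// Assumes a Thrift 1 gateway on localhost:9090 and an existing table named 'test'.
+TTransport transport = new TSocket("localhost", 9090);
+transport.open();
+Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
+
+int scannerId = client.scannerOpenWithScan(
+    ByteBuffer.wrap(Bytes.toBytes("test")), new TScan(), null);
+// Fetch 100 rows per round trip rather than a larger, slower batch.
+List<TRowResult> batch;
+while (!(batch = client.scannerGetList(scannerId, 100)).isEmpty()) {
+  // process the batch
+}
+client.scannerClose(scannerId);
+transport.close();
+]]></programlisting>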
+    </section>
     <section
       xml:id="trouble.client.lease.exception">
       <title><classname>LeaseException</classname> when calling
@@ -807,14 +821,20 @@ at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
           <code>HADOOP_CLASSPATH</code> set to include the HBase dependencies. The "hbase classpath"
         utility can be used to do this easily. For example (substitute VERSION with your HBase
         version):</para>
-      <programlisting>
-          HADOOP_CLASSPATH=`hbase classpath` hadoop jar $HBASE_HOME/hbase-VERSION.jar rowcounter usertable
+      <programlisting language="bourne">
+          HADOOP_CLASSPATH=`hbase classpath` hadoop jar $HBASE_HOME/hbase-server-VERSION.jar rowcounter usertable
       </programlisting>
       <para>See <link
           xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath">
           http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath</link>
         for more information on HBase MapReduce jobs and classpaths. </para>
     </section>
+    <section xml:id="trouble.hbasezerocopybytestring">
+      <title>Launching a job, you get java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString or class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString</title>
+      <para>See <link xlink:href="https://issues.apache.org/jira/browse/HBASE-10304">HBASE-10304 Running an hbase job jar: IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString</link> and <link xlink:href="https://issues.apache.org/jira/browse/HBASE-11118">HBASE-11118 non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString"</link>.  The issue can also show up
+          when trying to run Spark jobs.  See <link xlink:href="https://issues.apache.org/jira/browse/HBASE-10877">HBASE-10877 HBase non-retriable exception list should be expanded</link>.
+      </para>
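+      <para>A workaround discussed in those issues is to force the
+          <filename>hbase-protocol</filename> jar onto the job classpath ahead of other jars. For
+        example (substitute VERSION with your HBase version; the job jar name is illustrative):</para>
+      <programlisting language="bourne">
+HADOOP_CLASSPATH=$HBASE_HOME/lib/hbase-protocol-VERSION.jar:`hbase classpath` \
+    hadoop jar MyJob.jar MyJobMainClass
+      </programlisting>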
+    </section>
   </section>
 
   <section
@@ -827,11 +847,11 @@ at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
       <title>HDFS Utilization of Tables and Regions</title>
       <para>To determine how much space HBase is using on HDFS use the <code>hadoop</code> shell
         commands from the NameNode. For example... </para>
-      <para><programlisting>hadoop fs -dus /hbase/</programlisting> ...returns the summarized disk
+      <para><programlisting language="bourne">hadoop fs -dus /hbase/</programlisting> ...returns the summarized disk
         utilization for all HBase objects. </para>
-      <para><programlisting>hadoop fs -dus /hbase/myTable</programlisting> ...returns the summarized
+      <para><programlisting language="bourne">hadoop fs -dus /hbase/myTable</programlisting> ...returns the summarized
         disk utilization for the HBase table 'myTable'. </para>
-      <para><programlisting>hadoop fs -du /hbase/myTable</programlisting> ...returns a list of the
+      <para><programlisting language="bourne">hadoop fs -du /hbase/myTable</programlisting> ...returns a list of the
         regions under the HBase table 'myTable' and their disk utilization. </para>
       <para>For more information on HDFS shell commands, see the <link
           xlink:href="http://hadoop.apache.org/common/docs/current/file_system_shell.html">HDFS
@@ -1071,7 +1091,7 @@ ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expi
         <para>If you wish to increase the session timeout, add the following to your
             <filename>hbase-site.xml</filename> to increase the timeout from the default of 60
           seconds to 120 seconds. </para>
-        <programlisting>
+        <programlisting language="xml">
 <![CDATA[<property>
     <name>zookeeper.session.timeout</name>
     <value>1200000</value>
@@ -1127,6 +1147,23 @@ ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expi
         </section>
 
       </section>
+    <section>
+      <title>Snapshot Errors Due to Reverse DNS</title>
+      <para>Several operations within HBase, including snapshots, rely on properly configured
+        reverse DNS. Some environments, such as Amazon EC2, have trouble with reverse DNS. If you
+        see errors like the following on your RegionServers, check your reverse DNS configuration:</para>
+      <screen>
+2013-05-01 00:04:56,356 DEBUG org.apache.hadoop.hbase.procedure.Subprocedure: Subprocedure 'backup1' 
+coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator.        
+      </screen>
+      <para>In general, the hostname reported by the RegionServer needs to be the same as the
+        hostname the Master is trying to reach. You can see a hostname mismatch by looking for the
+        following type of message in the RegionServer's logs at start-up.</para>
+      <screen>
+2013-05-01 00:03:00,614 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Master passed us hostname 
+to use. Was=myhost-1234, Now=ip-10-55-88-99.ec2.internal        
+      </screen>
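+      <para>One quick, illustrative way to spot-check reverse DNS on a host is to compare the
+        forward and reverse lookups; note that <command>hostname -i</command> can return a loopback
+        address on some systems.</para>
+      <screen language="bourne">$ <userinput>hostname -f</userinput>
+$ <userinput>dig +short -x $(hostname -i)</userinput></screen>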
+    </section>
       <section xml:id="trouble.rs.shutdown">
         <title>Shutdown Errors</title>
   <para />
@@ -1631,14 +1668,33 @@ security.provider.1=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.
       detail at <link
         xlink:href="http://www.centos.org/docs/5/html/5.1/Deployment_Guide/s3-proc-sys-vm.html" />. </para>
       <para>To find the current value on your system, run the following command:</para>
-      <screen>[user@host]# <userinput>cat /proc/sys/vm/min_free_kbytes</userinput></screen>
+      <screen language="bourne">[user@host]# <userinput>cat /proc/sys/vm/min_free_kbytes</userinput></screen>
       <para>Next, raise the value. Try doubling, then quadrupling the value. Note that setting the
         value too low or too high could have detrimental effects on your system. Consult your
         operating system vendor for specific recommendations.</para>
       <para>Use the following command to modify the value of <code>min_free_kbytes</code>,
         substituting <replaceable>&lt;value&gt;</replaceable> with your intended value:</para>
-      <screen>[user@host]# <userinput>echo &lt;value&gt; > /proc/sys/vm/min_free_kbytes</userinput></screen>
+      <screen language="bourne">[user@host]# <userinput>echo &lt;value&gt; > /proc/sys/vm/min_free_kbytes</userinput></screen>
     </section>
   </section>
+  <section>
+    <title>JDK Issues</title>
+  <section>
+    <title>NoSuchMethodError: java.util.concurrent.ConcurrentHashMap.keySet</title>
+<para>
+If you see this in your logs:
+    <programlisting>Caused by: java.lang.NoSuchMethodError: java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
+  at org.apache.hadoop.hbase.master.ServerManager.findServerWithSameHostnamePortWithLock(ServerManager.java:393)
+  at org.apache.hadoop.hbase.master.ServerManager.checkAndRecordNewServer(ServerManager.java:307)
+  at org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:244)
+  at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:304)
+  at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:7910)
+  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2020)
+  ... 4 more</programlisting>
+then check whether you compiled with JDK 8 and are trying to run it on JDK 7. If so, this will not work.
+Either run on JDK 8 or recompile with JDK 7. See <link xlink:href="https://issues.apache.org/jira/browse/HBASE-10607">HBASE-10607 [JDK8] NoSuchMethodError involving ConcurrentHashMap.keySet if running on JRE 7</link>.
+</para>
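+<para>
+One illustrative way to check which JDK a class was compiled with is to inspect its class-file
+version with <command>javap</command> (major version 51 is JDK 7; 52 is JDK 8). The jar path below
+is an assumption; substitute your own:
+    <programlisting language="bourne">$ javap -verbose -cp $HBASE_HOME/lib/hbase-server-VERSION.jar org.apache.hadoop.hbase.master.ServerManager | grep "major version"</programlisting>
+</para>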
+  </section>
+  </section>
 
 </chapter>

http://git-wip-us.apache.org/repos/asf/hbase/blob/48d9d27d/src/main/docbkx/unit_testing.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/unit_testing.xml b/src/main/docbkx/unit_testing.xml
new file mode 100644
index 0000000..8d8c756
--- /dev/null
+++ b/src/main/docbkx/unit_testing.xml
@@ -0,0 +1,330 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<chapter version="5.0" xml:id="unit.tests" xmlns="http://docbook.org/ns/docbook"
+    xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xi="http://www.w3.org/2001/XInclude"
+    xmlns:svg="http://www.w3.org/2000/svg" xmlns:m="http://www.w3.org/1998/Math/MathML"
+    xmlns:html="http://www.w3.org/1999/xhtml" xmlns:db="http://docbook.org/ns/docbook">
+    <!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+
+    <title>Unit Testing HBase Applications</title>
+    <para>This chapter discusses unit testing your HBase application using JUnit, Mockito, MRUnit,
+        and HBaseTestingUtility. Much of the information comes from <link
+            xlink:href="http://blog.cloudera.com/blog/2013/09/how-to-test-hbase-applications-using-popular-tools/"
+            >a community blog post about testing HBase applications</link>. For information on unit
+        tests for HBase itself, see <xref linkend="hbase.tests"/>.</para>
+
+    <section>
+        <title>JUnit</title>
+        <para>HBase uses <link xlink:href="http://junit.org">JUnit</link> 4 for unit tests.</para>
+        <para>This example will add unit tests to the following example class:</para>
+        <programlisting language="java">
+public class MyHBaseDAO {
+
+    public static void insertRecord(HTableInterface table, HBaseTestObj obj)
+    throws Exception {
+        Put put = createPut(obj);
+        table.put(put);
+    }
+    
+    // Package-private (not private) so that unit tests can call it directly.
+    static Put createPut(HBaseTestObj obj) {
+        Put put = new Put(Bytes.toBytes(obj.getRowKey()));
+        put.add(Bytes.toBytes("CF"), Bytes.toBytes("CQ-1"),
+                    Bytes.toBytes(obj.getData1()));
+        put.add(Bytes.toBytes("CF"), Bytes.toBytes("CQ-2"),
+                    Bytes.toBytes(obj.getData2()));
+        return put;
+    }
+}                
+            </programlisting>
+        <para>The first step is to add JUnit dependencies to your Maven POM file:</para>
+        <programlisting language="xml"><![CDATA[
+<dependency>
+    <groupId>junit</groupId>
+    <artifactId>junit</artifactId>
+    <version>4.11</version>
+    <scope>test</scope>
+</dependency>                
+                ]]></programlisting>
+        <para>Next, add some unit tests to your code. Tests are annotated with
+                <literal>@Test</literal>. Here, the unit tests are in bold.</para>
+        <programlisting language="java">
+public class TestMyHbaseDAOData {
+  @Test
+  public void testCreatePut() throws Exception {
+    HBaseTestObj obj = new HBaseTestObj();
+    obj.setRowKey("ROWKEY-1");
+    obj.setData1("DATA-1");
+    obj.setData2("DATA-2");
+    Put put = MyHBaseDAO.createPut(obj);
+    <userinput>assertEquals(obj.getRowKey(), Bytes.toString(put.getRow()));
+    assertEquals(obj.getData1(), Bytes.toString(put.get(Bytes.toBytes("CF"), Bytes.toBytes("CQ-1")).get(0).getValue()));
+    assertEquals(obj.getData2(), Bytes.toString(put.get(Bytes.toBytes("CF"), Bytes.toBytes("CQ-2")).get(0).getValue()));</userinput>
+  }
+}                
+            </programlisting>
+        <para>These tests ensure that your <code>createPut</code> method creates, populates, and
+            returns a <code>Put</code> object with expected values. Of course, JUnit can do much
+            more than this. For an introduction to JUnit, see <link
+                xlink:href="https://github.com/junit-team/junit/wiki/Getting-started"
+                >https://github.com/junit-team/junit/wiki/Getting-started</link>. </para>
+    </section>
+
+    <section xml:id="mockito">
+        <title>Mockito</title>
+        <para>Mockito is a mocking framework. It goes further than JUnit by allowing you to test the
+            interactions between objects without having to replicate the entire environment. You can
+            read more about Mockito at its project site, <link
+                xlink:href="https://code.google.com/p/mockito/"
+                >https://code.google.com/p/mockito/</link>.</para>
+        <para>You can use Mockito to do unit testing on smaller units. For instance, you can mock a
+                <classname>org.apache.hadoop.hbase.Server</classname> instance or a
+                <classname>org.apache.hadoop.hbase.master.MasterServices</classname> interface
+            reference rather than a full-blown
+                <classname>org.apache.hadoop.hbase.master.HMaster</classname>.</para>
+        <para>This example builds upon the example code in <xref linkend="unit.tests"/>, to test the
+                <code>insertRecord</code> method.</para>
+        <para>First, add a dependency for Mockito to your Maven POM file.</para>
+        <programlisting language="xml"><![CDATA[
+<dependency>
+    <groupId>org.mockito</groupId>
+    <artifactId>mockito-all</artifactId>
+    <version>1.9.5</version>
+    <scope>test</scope>
+</dependency>                   
+                   ]]></programlisting>
+        <para>Next, add a <code>@RunWith</code> annotation to your test class, to direct it to use
+            Mockito.</para>
+        <programlisting language="java">
+<userinput>@RunWith(MockitoJUnitRunner.class)</userinput>
+public class TestMyHBaseDAO {
+  @Mock 
+  private HTableInterface table;
+  @Mock
+  private HTablePool hTablePool;
+  @Captor
+  private ArgumentCaptor&lt;Put&gt; putCaptor;
+
+  @Test
+  public void testInsertRecord() throws Exception {
+    //return mock table when getTable is called
+    when(hTablePool.getTable("tablename")).thenReturn(table);
+    //create test object and make a call to the DAO that needs testing
+    HBaseTestObj obj = new HBaseTestObj();
+    obj.setRowKey("ROWKEY-1");
+    obj.setData1("DATA-1");
+    obj.setData2("DATA-2");
+    MyHBaseDAO.insertRecord(table, obj);
+    verify(table).put(putCaptor.capture());
+    Put put = putCaptor.getValue();
+  
+    assertEquals(Bytes.toString(put.getRow()), obj.getRowKey());
+    assert(put.has(Bytes.toBytes("CF"), Bytes.toBytes("CQ-1")));
+    assert(put.has(Bytes.toBytes("CF"), Bytes.toBytes("CQ-2")));
+    assertEquals(Bytes.toString(put.get(Bytes.toBytes("CF"),Bytes.toBytes("CQ-1")).get(0).getValue()), "DATA-1");
+    assertEquals(Bytes.toString(put.get(Bytes.toBytes("CF"),Bytes.toBytes("CQ-2")).get(0).getValue()), "DATA-2");
+  }
+}                   
+               </programlisting>
+        <para>This code populates <code>HBaseTestObj</code> with “ROWKEY-1”, “DATA-1”, “DATA-2” as
+            values. It then inserts the record into the mocked table. The Put that the DAO would
+            have inserted is captured, and values are tested to verify that they are what you
+            expected them to be.</para>
+        <para>The key here is to manage the HTablePool and HTableInterface instance creation outside the DAO.
+            This allows you to mock them cleanly and test Puts as shown above. Similarly, you can
+            now expand into other operations such as Get, Scan, or Delete.</para>
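+        <para>As an illustrative extension (a sketch only; the read path shown here is not part of
+            the example DAO), you could stub a read the same way:</para>
+        <programlisting language="java">
+Result result = mock(Result.class);
+when(result.getValue(Bytes.toBytes("CF"), Bytes.toBytes("CQ-1")))
+    .thenReturn(Bytes.toBytes("DATA-1"));
+when(table.get(any(Get.class))).thenReturn(result);
+</programlisting>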
+
+    </section>
+    <section>
+        <title>MRUnit</title>
+        <para><link xlink:href="http://mrunit.apache.org/">Apache MRUnit</link> is a library that
+            allows you to unit-test MapReduce jobs. You can use it to test HBase jobs in the same
+            way as other MapReduce jobs.</para>
+        <para>Given a MapReduce job that writes to an HBase table called <literal>MyTest</literal>,
+            which has one column family called <literal>CF</literal>, the reducer of such a job
+            could look like the following:</para>
+        <programlisting language="java"><![CDATA[
+public class MyReducer extends TableReducer<Text, Text, ImmutableBytesWritable> {
+   public static final byte[] CF = "CF".getBytes();
+   public static final byte[] QUALIFIER = "CQ-1".getBytes();
+   public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
+     //bunch of processing to extract data to be inserted; in our case, let's say we are simply
+     //appending all the records we receive from the mapper for this particular
+     //key and insert one record into HBase
+     StringBuffer data = new StringBuffer();
+     Put put = new Put(Bytes.toBytes(key.toString()));
+     for (Text val : values) {
+         data = data.append(val);
+     }
+     put.add(CF, QUALIFIER, Bytes.toBytes(data.toString()));
+     //write to HBase
+     context.write(new ImmutableBytesWritable(Bytes.toBytes(key.toString())), put);
+   }
+ }  ]]>                  
+                </programlisting>
+        <para>To test this code, the first step is to add a dependency on MRUnit to your Maven POM
+            file. </para>
+        <programlisting language="xml"><![CDATA[
+<dependency>
+   <groupId>org.apache.mrunit</groupId>
+   <artifactId>mrunit</artifactId>
+   <version>1.0.0</version>
+   <scope>test</scope>
+</dependency>                    
+                    ]]></programlisting>
+        <para>Next, use the <code>ReduceDriver</code> provided by MRUnit to test your reducer.</para>
+        <programlisting language="java"><![CDATA[
+public class MyReducerTest {
+    ReduceDriver<Text, Text, ImmutableBytesWritable, Writable> reduceDriver;
+    byte[] CF = "CF".getBytes();
+    byte[] QUALIFIER = "CQ-1".getBytes();
+
+    @Before
+    public void setUp() {
+      MyReducer reducer = new MyReducer();
+      reduceDriver = ReduceDriver.newReduceDriver(reducer);
+    }
+  
+   @Test
+   public void testHBaseInsert() throws IOException {
+      String strKey = "RowKey-1", strValue = "DATA", strValue1 = "DATA1", 
+strValue2 = "DATA2";
+      List<Text> list = new ArrayList<Text>();
+      list.add(new Text(strValue));
+      list.add(new Text(strValue1));
+      list.add(new Text(strValue2));
+      //since in our case all that the reducer is doing is appending the records that the mapper   
+      //sends it, we should get the following back
+      String expectedOutput = strValue + strValue1 + strValue2;
+     //Setup Input, mimic what mapper would have passed
+      //to the reducer and run test
+      reduceDriver.withInput(new Text(strKey), list);
+      //run the reducer and get its output
+      List<Pair<ImmutableBytesWritable, Writable>> result = reduceDriver.run();
+    
+      //extract key from result and verify
+      assertEquals(Bytes.toString(result.get(0).getFirst().get()), strKey);
+    
+      //extract value for CF/QUALIFIER and verify
+      Put a = (Put)result.get(0).getSecond();
+      String c = Bytes.toString(a.get(CF, QUALIFIER).get(0).getValue());
+      assertEquals(expectedOutput,c );
+   }
+
+}                    
+                    ]]></programlisting>
+        <para>Your MRUnit test verifies that the output is as expected, that the Put inserted into
+            HBase has the correct value, and that the column family and column qualifier have the
+            correct values.</para>
+        <para>MRUnit includes a <code>MapDriver</code> for testing map tasks, and you can use MRUnit
+            to test other operations, including reading from HBase, processing data, or writing to
+            HDFS. A minimal <code>MapDriver</code> sketch follows.</para>
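+        <para>The following is a minimal sketch of the <code>MapDriver</code> pattern. It assumes a
+            hypothetical identity-style <code>MyMapper</code> (not part of the example above) that
+            re-emits each key/value pair it receives; adapt the expected output to whatever your
+            mapper actually produces.</para>
+        <programlisting language="java"><![CDATA[
+import java.io.IOException;
+
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mrunit.mapreduce.MapDriver;
+import org.junit.Before;
+import org.junit.Test;
+
+public class MyMapperTest {
+    MapDriver<Text, Text, Text, Text> mapDriver;
+
+    @Before
+    public void setUp() {
+      // MyMapper is hypothetical; substitute your own mapper class.
+      mapDriver = MapDriver.newMapDriver(new MyMapper());
+    }
+
+    @Test
+    public void testMap() throws IOException {
+      mapDriver.withInput(new Text("RowKey-1"), new Text("DATA"))
+               .withOutput(new Text("RowKey-1"), new Text("DATA"))
+               .runTest();
+    }
+}]]>
+        </programlisting>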
+    </section>
+
+    <section>
+        <title>Integration Testing with an HBase Mini-Cluster</title>
+        <para>HBase ships with HBaseTestingUtility, which makes it easy to write integration tests
+            using a <firstterm>mini-cluster</firstterm>. The first step is to add some dependencies
+            to your Maven POM file. Check the versions to be sure they are appropriate.</para>
+        <programlisting language="xml"><![CDATA[
+<dependency>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-common</artifactId>
+    <version>2.0.0</version>
+    <type>test-jar</type>
+    <scope>test</scope>
+</dependency>
+
+<dependency>
+    <groupId>org.apache.hbase</groupId>
+    <artifactId>hbase</artifactId>
+    <version>0.98.3</version>
+    <type>test-jar</type>
+    <scope>test</scope>
+</dependency>
+        
+<dependency>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-hdfs</artifactId>
+    <version>2.0.0</version>
+    <type>test-jar</type>
+    <scope>test</scope>
+</dependency>
+
+<dependency>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-hdfs</artifactId>
+    <version>2.0.0</version>
+    <scope>test</scope>
+</dependency>                    
+                    ]]></programlisting>
+        <para>This code represents an integration test for the MyDAO insert shown in <xref
+                linkend="unit.tests"/>.</para>
+        <programlisting language="java">
+import static org.junit.Assert.assertEquals;
+
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class MyHBaseIntegrationTest {
+    private static HBaseTestingUtility utility;
+    byte[] CF = Bytes.toBytes("CF");
+    byte[] CQ1 = Bytes.toBytes("CQ-1");
+    byte[] CQ2 = Bytes.toBytes("CQ-2");
+
+    @BeforeClass
+    public static void setup() throws Exception {
+        utility = new HBaseTestingUtility();
+        utility.startMiniCluster();
+    }
+
+    @AfterClass
+    public static void tearDown() throws Exception {
+        // Shut the mini-cluster down so it does not leak between test classes.
+        utility.shutdownMiniCluster();
+    }
+
+    @Test
+    public void testInsert() throws Exception {
+        HTableInterface table = utility.createTable(Bytes.toBytes("MyTest"),
+            Bytes.toBytes("CF"));
+        HBaseTestObj obj = new HBaseTestObj();
+        obj.setRowKey("ROWKEY-1");
+        obj.setData1("DATA-1");
+        obj.setData2("DATA-2");
+        MyHBaseDAO.insertRecord(table, obj);
+        // Read back each column the DAO wrote and verify row key and value.
+        Get get1 = new Get(Bytes.toBytes(obj.getRowKey()));
+        get1.addColumn(CF, CQ1);
+        Result result1 = table.get(get1);
+        assertEquals(obj.getRowKey(), Bytes.toString(result1.getRow()));
+        assertEquals(obj.getData1(), Bytes.toString(result1.value()));
+        Get get2 = new Get(Bytes.toBytes(obj.getRowKey()));
+        get2.addColumn(CF, CQ2);
+        Result result2 = table.get(get2);
+        assertEquals(obj.getRowKey(), Bytes.toString(result2.getRow()));
+        assertEquals(obj.getData2(), Bytes.toString(result2.value()));
+    }
+}
+                </programlisting>
+        <para>This code creates an HBase mini-cluster and starts it once for the test class. Next,
+            it creates a table called <literal>MyTest</literal> with one column family,
+            <literal>CF</literal>. A record is inserted, Gets are performed from the same table,
+            and the insertion is verified.</para>
+        <note>
+            <para>Starting the mini-cluster takes about 20-30 seconds, which is why the example
+                starts it once per test class rather than once per test. That cost is usually
+                acceptable for integration testing.</para>
+        </note>
+        <para>To use an HBase mini-cluster on Microsoft Windows, you need to use a Cygwin
+            environment.</para>
+        <para>See the blog post <link
+                xlink:href="http://blog.sematext.com/2010/08/30/hbase-case-study-using-hbasetestingutility-for-local-testing-development/"
+                >HBase Case-Study: Using HBaseTestingUtility for Local Testing and
+                Development</link> (2010) for more information about HBaseTestingUtility.</para>
+    </section>
+
+</chapter>
+
+                      
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/48d9d27d/src/main/docbkx/upgrading.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/upgrading.xml b/src/main/docbkx/upgrading.xml
index 71b8516..546be22 100644
--- a/src/main/docbkx/upgrading.xml
+++ b/src/main/docbkx/upgrading.xml
@@ -76,13 +76,124 @@
                             compatibility between versions also mean binary compatibility?</link>
                         discussion on the hbase dev mailing list. </para>
         </section>
+        <section xml:id="hbase.rolling.upgrade">
+          <title><firstterm>Rolling Upgrades</firstterm></title>
+          <para>A rolling upgrade is the process by which you update the servers
+            in your cluster one server at a time. You can do a rolling upgrade across HBase
+            versions if they are binary or wire compatible.
+            See <xref linkend="hbase.rolling.restart" /> for more on what this means.
+            Coarsely, a rolling upgrade is a graceful stop of each server, an update of the
+            software, and then a restart, repeated for every server in the cluster.
+            Usually you upgrade the Master first and then the RegionServers.
+            See <xref linkend="rolling" /> for tools that can help with the rolling upgrade process.
+          </para>
+          <para>For example, in the case below, HBase was symlinked to the actual HBase install.
+            Upon upgrade, before running a rolling restart over the cluster, we changed the symlink
+            to point at the new HBase software version and then ran
+            <programlisting language="bourne">$ HADOOP_HOME=~/hadoop-2.6.0-CRC-SNAPSHOT ~/hbase/bin/rolling-restart.sh --config ~/conf_hbase</programlisting>
+            The rolling-restart script first gracefully stops and restarts the Master, and then
+            each of the RegionServers in turn. Because the symlink was changed, on restart each
+            server comes up using the new HBase version. Check the logs for errors as the
+            rolling upgrade proceeds.
+          </para>
         <section
             xml:id="hbase.rolling.restart">
-            <title>Rolling Upgrade between versions/Binary compatibility</title>
-            <para>Unless otherwise specified, HBase point versions are binary compatible. you can do
-                a rolling upgrade between hbase point versions; for example, you can go to 0.94.6
-                from 0.94.5 by doing a rolling upgrade across the cluster replacing the 0.94.5
-                binary with a 0.94.6 binary. </para>
+            <title>Rolling Upgrade between Versions that are Binary/Wire Compatible</title>
+            <para>Unless otherwise specified, HBase point versions are binary compatible. You can do
+              a <xref linkend="hbase.rolling.upgrade" /> between HBase point versions.
+                For example, you can go to 0.94.6 from 0.94.5 by doing a rolling upgrade
+                across the cluster replacing the 0.94.5 binary with a 0.94.6 binary.</para>
+              <para>In the minor-version-specific sections below, we call out where the versions
+                are wire/protocol compatible; in those cases, it is also possible to do a
+                <xref linkend="hbase.rolling.upgrade" />. For example, in
+            <xref linkend="upgrade1.0.rolling.upgrade" />, we
+              state that it is possible to do a rolling upgrade between hbase-0.98.x and hbase-1.0.0.</para>
+        </section>
+        </section>
+    </section>
+    <section xml:id="upgrade1.0">
+        <title>Upgrading from 0.98.x to 1.0.x</title>
+        <para>In this section we first note the significant changes that come in with HBase 1.0.0
+          and then we go over the upgrade process. Be sure to read the significant-changes section
+          with care so you avoid surprises.
+        </para>
+        <section xml:id="upgrade1.0.changes">
+            <title>Changes of Note!</title>
+            <para>Here we list important changes in 1.0.0 since 0.98.x, changes you should be aware
+            of and that will take effect once you upgrade.</para>
+            <section xml:id="zookeeper.3.4"><title>ZooKeeper 3.4 is required in HBase 1.0.0</title>
+              <para>See <xref linkend="zookeeper.requirements" />.</para>
+            </section>
+            <section xml:id="default.ports.changed"><title>HBase Default Ports Changed</title>
+              <para>The ports used by HBase have changed. They used to be in the 600XX range. In
+                hbase-1.0.0 they have been moved up out of the ephemeral port range and are
+                160XX instead (the Master web UI was 60010 and is now 16010; the RegionServer
+                web UI was 60030 and is now 16030, and so on). If you want to keep the old port
+                locations, copy the port setting configs from <filename>hbase-default.xml</filename>
+                into <filename>hbase-site.xml</filename>, change them back to the old values
+                from the hbase-0.98.x era, and ensure you've distributed your configurations before
+              you restart.</para>
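+              <para>For example, a minimal <filename>hbase-site.xml</filename> fragment that pins
+                the web UIs back to the pre-1.0 locations might look like the following; verify the
+                full list of changed ports against your own
+                <filename>hbase-default.xml</filename>:</para>
+              <programlisting language="xml"><![CDATA[
+<property>
+  <name>hbase.master.info.port</name>
+  <value>60010</value>
+</property>
+<property>
+  <name>hbase.regionserver.info.port</name>
+  <value>60030</value>
+</property>]]>
+              </programlisting>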
+            </section>
+            <section xml:id="upgrade1.0.hbase.bucketcache.percentage.in.combinedcache">
+                <title>hbase.bucketcache.percentage.in.combinedcache configuration has been REMOVED</title>
+                <para>You may have made use of this configuration if you are using BucketCache.
+                    If you are NOT using BucketCache, this change does not affect you.
+                    Its removal means that your L1 LruBlockCache is now sized
+                    using <varname>hfile.block.cache.size</varname> -- i.e. the way you
+                    would size the onheap L1 LruBlockCache if you were NOT doing
+                    BucketCache -- and the BucketCache size is now whatever the
+                    setting for <varname>hbase.bucketcache.size</varname> is. You may need to adjust
+                    configs to get the LruBlockCache and BucketCache sizes set to
+                    what they were in 0.98.x and previous. If you did not set this
+                    config, its default value was 0.9. If you do nothing, your
+                    BucketCache will increase in size by 10%. Your L1 LruBlockCache will
+                    become <varname>hfile.block.cache.size</varname> times your java
+                    heap size (<varname>hfile.block.cache.size</varname> is a float between 0.0 and 1.0).
+                    To read more, see
+                    <link xlink:href="https://issues.apache.org/jira/browse/HBASE-11520">HBASE-11520 Simplify offheap cache config by removing the confusing "hbase.bucketcache.percentage.in.combinedcache"</link>.
+                </para>
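+                <para>As an illustrative sketch (the sizes below are placeholders, not
+                    recommendations), restoring an explicit split would mean setting both caches
+                    directly in <filename>hbase-site.xml</filename>:</para>
+                <programlisting language="xml"><![CDATA[
+<!-- L1 on-heap cache: a fraction of the java heap, e.g. 0.25 -->
+<property>
+  <name>hfile.block.cache.size</name>
+  <value>0.25</value>
+</property>
+<!-- L2 BucketCache: now sized directly by this setting -->
+<property>
+  <name>hbase.bucketcache.size</name>
+  <value>4096</value>
+</property>]]>
+                </programlisting>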
+          </section>
+          <section xml:id="hbase-12068"><title>If you have your own custom filters...</title>
+            <para>See the release notes on the issue <link xlink:href="https://issues.apache.org/jira/browse/HBASE-12068">HBASE-12068 [Branch-1] Avoid need to always do KeyValueUtil#ensureKeyValue for Filter transformCell</link>;
+              be sure to follow the recommendations therein.
+            </para>
+          </section>
+          <section xml:id="dlr"><title>Distributed Log Replay</title>
+            <para>
+              <xref linkend="distributed.log.replay" /> is off by default in hbase-1.0.
+                Enabling it can make a big difference in improving HBase MTTR. Enable this
+                feature if you are doing a clean stop/start when you are upgrading.
+                You cannot do a rolling upgrade onto this feature (with a caveat if you are running
+                a version of HBase later than hbase-0.98.4 -- see
+                <link xlink:href="https://issues.apache.org/jira/browse/HBASE-12577">HBASE-12577 Disable distributed log replay by default</link> for more).
+            </para>
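+            <para>As a sketch, enabling the feature amounts to setting the following in
+                <filename>hbase-site.xml</filename> before a clean stop/start (verify the property
+                name against your version's <filename>hbase-default.xml</filename>):</para>
+            <programlisting language="xml"><![CDATA[
+<property>
+  <name>hbase.master.distributed.log.replay</name>
+  <value>true</value>
+</property>]]>
+            </programlisting>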
+          </section>
+        </section>
+        <section xml:id="upgrade1.0.rolling.upgrade">
+          <title>Rolling upgrade from 0.98.x to HBase 1.0.0</title>
+          <note><title>From 0.96.x to 1.0.0</title>
+            <para>You cannot do a <xref linkend="hbase.rolling.upgrade" /> from 0.96.x to 1.0.0 without
+              first doing a rolling upgrade to 0.98.x. See the comment in
+              <link xlink:href="https://issues.apache.org/jira/browse/HBASE-11164?focusedCommentId=14182330&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&#35;comment-14182330">HBASE-11164 Document and test rolling updates from 0.98 -> 1.0</link> for the why.
+              Also, because hbase-1.0.0 enables hfilev3 by default
+              (<link xlink:href="https://issues.apache.org/jira/browse/HBASE-9801">HBASE-9801 Change the default HFile version to V3</link>)
+              and support for hfilev3 only arrives in 0.98, this is another reason you cannot do a
+              rolling upgrade from hbase-0.96.x: if the rolling upgrade stalls, the 0.96.x servers
+              cannot open files written by the newer hbase-1.0.0 servers, which write hfilev3.
+            </para>
+          </note>
+          <para>There are no known issues running a <xref linkend="hbase.rolling.upgrade" /> from
+            hbase-0.98.x to hbase-1.0.0.
+          </para>
+        </section>
+        <section xml:id="upgrade1.0.from.0.94">
+          <title>Upgrading to 1.0 from 0.94</title>
+          <para>You cannot do a rolling upgrade from 0.94.x to 1.x.x. You must stop your cluster,
+            install the 1.x.x software, run the migration described at <xref linkend="executing.the.0.96.upgrade" />
+            (substituting 1.x.x wherever we make mention of 0.96.x in that section),
+            and then restart. Be sure to upgrade your ZooKeeper if it is older than the required 3.4.x.
+          </para>
+
         </section>
     </section>
 
@@ -112,7 +223,12 @@
     <section
         xml:id="upgrade0.96">
         <title>Upgrading from 0.94.x to 0.96.x</title>
-        <subtitle>The Singularity</subtitle>
+        <subtitle>The "Singularity"</subtitle>
+        <note><title>HBase 0.96.x was EOL'd, September 1st, 2014</title><para>
+            Do not deploy 0.96.x. Deploy at least 0.98.x.
+            See <link xlink:href="https://issues.apache.org/jira/browse/HBASE-11642">EOL 0.96</link>.
+        </para></note>
+
         <para>You will have to stop your old 0.94.x cluster completely to upgrade. If you are
             replicating between clusters, both clusters will have to go down to upgrade. Make sure
             it is a clean shutdown. The less WAL files around, the faster the upgrade will run (the
@@ -120,13 +236,14 @@
             process). All clients must be upgraded to 0.96 too. </para>
         <para>The API has changed. You will need to recompile your code against 0.96 and you may
             need to adjust applications to go against new APIs (TODO: List of changes). </para>
-        <section>
+        <section xml:id="executing.the.0.96.upgrade">
             <title>Executing the 0.96 Upgrade</title>
             <note>
+              <title>HDFS and ZooKeeper must be up!</title>
                 <para>HDFS and ZooKeeper should be up and running during the upgrade process.</para>
             </note>
             <para>hbase-0.96.0 comes with an upgrade script. Run
-                <programlisting>$ bin/hbase upgrade</programlisting> to see its usage. The script
+                <programlisting language="bourne">$ bin/hbase upgrade</programlisting> to see its usage. The script
                 has two main modes: -check and -execute. </para>
             <section>
                 <title>check</title>
@@ -173,7 +290,7 @@ There are some HFileV1, or corrupt files (files with incorrect major version)
                 <para>By default, the check step scans the hbase root directory (defined as
                     hbase.rootdir in the configuration). To scan a specific directory only, pass the
                         <emphasis>-dir</emphasis> option.</para>
-                <screen>$ bin/hbase upgrade -check -dir /myHBase/testTable</screen>
+                <screen language="bourne">$ bin/hbase upgrade -check -dir /myHBase/testTable</screen>
                 <para>The above command would detect HFileV1s in the /myHBase/testTable directory. </para>
                 <para> Once the check step reports all the HFileV1 files have been rewritten, it is
                     safe to proceed with the upgrade. </para>
@@ -214,7 +331,7 @@ There are some HFileV1, or corrupt files (files with incorrect major version)
                 <para> To run the <emphasis>execute</emphasis> step, make sure that first you have
                     copied hbase-0.96.0 binaries everywhere under servers and under clients. Make
                     sure the 0.94.0 cluster is down. Then do as follows:</para>
-                <screen>$ bin/hbase upgrade -execute</screen>
+                <screen language="bourne">$ bin/hbase upgrade -execute</screen>
                 <para>Here is some sample output.</para>
                 <programlisting>
 Starting Namespace upgrade
@@ -233,7 +350,7 @@ Successfully completed Log splitting
          </programlisting>
                 <para> If the output from the execute step looks good, stop the zookeeper instance
                     you started to do the upgrade:
-                    <programlisting>$ ./hbase/bin/hbase-daemon.sh stop zookeeper</programlisting>
+                    <programlisting language="bourne">$ ./hbase/bin/hbase-daemon.sh stop zookeeper</programlisting>
                     Now start up hbase-0.96.0. </para>
             </section>
             <section

http://git-wip-us.apache.org/repos/asf/hbase/blob/48d9d27d/src/main/docbkx/zookeeper.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/zookeeper.xml b/src/main/docbkx/zookeeper.xml
index 20cb578..206ccf5 100644
--- a/src/main/docbkx/zookeeper.xml
+++ b/src/main/docbkx/zookeeper.xml
@@ -90,7 +90,7 @@
     has ZooKeeper persist data under <filename>/tmp</filename> which is often cleared on system
     restart. In the example below we have ZooKeeper persist to
       <filename>/user/local/zookeeper</filename>.</para>
-  <programlisting><![CDATA[
+  <programlisting language="xml"><![CDATA[
   <configuration>
     ...
     <property>
@@ -144,7 +144,7 @@
     <para>To point HBase at an existing ZooKeeper cluster, one that is not managed by HBase, set
         <varname>HBASE_MANAGES_ZK</varname> in <filename>conf/hbase-env.sh</filename> to
       false</para>
-    <screen>
+    <screen language="bourne">
   ...
   # Tell HBase whether it should manage its own instance of Zookeeper or not.
   export HBASE_MANAGES_ZK=false</screen>
@@ -158,7 +158,7 @@
       regular start/stop scripts. If you would like to run ZooKeeper yourself, independent of HBase
       start/stop, you would do the following</para>
 
-    <screen>
+    <screen language="bourne">
 ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
 </screen>
 
@@ -223,7 +223,7 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
       <para>On each host that will run an HBase client (e.g. <code>hbase shell</code>), add the
         following file to the HBase home directory's <filename>conf</filename> directory:</para>
 
-      <programlisting>
+      <programlisting language="java">
 Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=false
@@ -242,7 +242,7 @@ Client {
         configuration file in the conf directory of the node's <filename>HBASE_HOME</filename>
         directory that looks like the following:</para>
 
-      <programlisting>
+      <programlisting language="java">
 Server {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
@@ -274,7 +274,7 @@ Client {
 
       <para>Modify your <filename>hbase-env.sh</filename> to include the following:</para>
 
-      <programlisting>
+      <programlisting language="bourne">
 export HBASE_OPTS="-Djava.security.auth.login.config=$CLIENT_CONF"
 export HBASE_MANAGES_ZK=true
 export HBASE_ZOOKEEPER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
@@ -288,7 +288,7 @@ export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_
       <para>Modify your <filename>hbase-site.xml</filename> on each node that will run zookeeper,
         master or regionserver to contain:</para>
 
-      <programlisting><![CDATA[
+      <programlisting language="xml"><![CDATA[
 <configuration>
   <property>
     <name>hbase.zookeeper.quorum</name>
@@ -330,7 +330,7 @@ bin/hbase regionserver start
     <section>
       <title>External Zookeeper Configuration</title>
       <para>Add a JAAS configuration file that looks like:</para>
-      <programlisting>
+      <programlisting language="java">
 Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
@@ -346,7 +346,7 @@ Client {
 
       <para>Modify your hbase-env.sh to include the following:</para>
 
-      <programlisting>
+      <programlisting language="bourne">
 export HBASE_OPTS="-Djava.security.auth.login.config=$CLIENT_CONF"
 export HBASE_MANAGES_ZK=false
 export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
@@ -357,7 +357,7 @@ export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_
       <para>Modify your <filename>hbase-site.xml</filename> on each node that will run a master or
         regionserver to contain:</para>
 
-      <programlisting><![CDATA[
+      <programlisting language="xml"><![CDATA[
 <configuration>
   <property>
     <name>hbase.zookeeper.quorum</name>
@@ -375,13 +375,13 @@ export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_
         Quorum hosts.</para>
 
       <para> Add a <filename>zoo.cfg</filename> for each Zookeeper Quorum host containing:</para>
-      <programlisting>
+      <programlisting language="java">
 authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
 kerberos.removeHostFromPrincipal=true
 kerberos.removeRealmFromPrincipal=true
                   </programlisting>
       <para>Also on each of these hosts, create a JAAS configuration file containing:</para>
-      <programlisting>
+      <programlisting language="java">
 Server {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
@@ -395,7 +395,7 @@ Server {
         pathname of this file as <filename>$ZK_SERVER_CONF</filename> below. </para>
 
       <para> Start your Zookeepers on each Zookeeper Quorum host with:</para>
-      <programlisting>
+      <programlisting language="bourne">
 SERVER_JVMFLAGS="-Djava.security.auth.login.config=$ZK_SERVER_CONF" bin/zkServer.sh start
                   </programlisting>
 
@@ -480,7 +480,7 @@ bin/hbase regionserver &amp;
         <para> You must override the standard hadoop-core jar file from the
             <code>target/cached_classpath.txt</code> file with the version containing the
           HADOOP-7070 fix. You can use the following script to do this:</para>
-        <screen>
+        <screen language="bourne">
 echo `find ~/.m2 -name "*hadoop-core*7070*SNAPSHOT.jar"` ':' `cat target/cached_classpath.txt` | sed 's/ //g' > target/tmp.txt
 mv target/tmp.txt target/cached_classpath.txt
                 </screen>

