hbase-commits mailing list archives

From st...@apache.org
Subject svn commit: r1455996 [6/7] - in /hbase/branches/0.94/src: docbkx/ site/ site/resources/css/ site/resources/images/ site/xdoc/
Date Wed, 13 Mar 2013 15:20:20 GMT
Modified: hbase/branches/0.94/src/docbkx/troubleshooting.xml
URL: http://svn.apache.org/viewvc/hbase/branches/0.94/src/docbkx/troubleshooting.xml?rev=1455996&r1=1455995&r2=1455996&view=diff
==============================================================================
--- hbase/branches/0.94/src/docbkx/troubleshooting.xml (original)
+++ hbase/branches/0.94/src/docbkx/troubleshooting.xml Wed Mar 13 15:20:19 2013
@@ -26,7 +26,7 @@
  * limitations under the License.
  */
 -->
-  <title>Troubleshooting and Debugging HBase</title>
+  <title>Troubleshooting and Debugging Apache HBase (TM)</title>
     <section xml:id="trouble.general">
       <title>General Guidelines</title>
       <para>
@@ -37,7 +37,7 @@
           should return some hits for those exceptions you’re seeing.
       </para>
       <para>
-          An error rarely comes alone in HBase, usually when something gets screwed up what will
+          An error rarely comes alone in Apache HBase (TM); usually when something goes wrong, what
          follows may be hundreds of exceptions and stack traces coming from all over the place.
          The best way to approach this type of problem is to walk the log up to where it all
          began.  For example, one trick with RegionServers is that they will print some
@@ -54,7 +54,7 @@
           prolonged garbage collection pauses that last longer than the default ZooKeeper session timeout.
           For more information on GC pauses, see the
           <link xlink:href="http://www.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/">3 part blog post</link>  by Todd Lipcon
-          and <xref linkend="gcpause" /> above. 
+          and <xref linkend="gcpause" /> above.
       </para>
     </section>
     <section xml:id="trouble.log">
@@ -72,7 +72,7 @@
       JobTracker:  <filename>$HADOOP_HOME/logs/hadoop-&lt;user&gt;-jobtracker-&lt;hostname&gt;.log</filename>
       </para>
       <para>
-      TaskTracker:  <filename>$HADOOP_HOME/logs/hadoop-&lt;user&gt;-jobtracker-&lt;hostname&gt;.log</filename>
+      TaskTracker:  <filename>$HADOOP_HOME/logs/hadoop-&lt;user&gt;-tasktracker-&lt;hostname&gt;.log</filename>
       </para>
       <para>
       HMaster:  <filename>$HBASE_HOME/logs/hbase-&lt;user&gt;-master-&lt;hostname&gt;.log</filename>
@@ -91,7 +91,7 @@
           <title>NameNode</title>
          <para>The NameNode log is on the NameNode server.  The HBase Master is typically run on the NameNode server, as well as ZooKeeper.</para>
           <para>For smaller clusters the JobTracker is typically run on the NameNode server as well.</para>
-         </section>        
+         </section>
         <section xml:id="trouble.log.locations.datanode">
           <title>DataNode</title>
           <para>Each DataNode server will have a DataNode log for HDFS, as well as a RegionServer log for HBase.</para>
@@ -105,32 +105,32 @@
            insight on timings at the server.  Once enabled, the amount of log
            spewed is voluminous.  It is not recommended that you leave this
            logging on for more than short bursts of time.  To enable RPC-level
-           logging, browse to the RegionServer UI and click on 
+           logging, browse to the RegionServer UI and click on
            <emphasis>Log Level</emphasis>.  Set the log level to <varname>DEBUG</varname> for the package
           <classname>org.apache.hadoop.ipc</classname> (That's right, for
           <classname>hadoop.ipc</classname>, NOT <classname>hbase.ipc</classname>).  Then tail the RegionServer's log and analyze.</para>
            <para>To disable, set the logging level back to <varname>INFO</varname> level.
            </para>
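The same toggle can be scripted against the daemon's /logLevel servlet (the endpoint behind the web UI's Log Level page) instead of clicking through the UI; a minimal sketch, assuming the default RegionServer info port (60030) and a placeholder hostname:

```shell
# Build the /logLevel URL used by the web UI's "Log Level" page.
# regionserver.example.com is a placeholder; 60030 is the default info port.
RS_HOST=regionserver.example.com
URL="http://${RS_HOST}:60030/logLevel?log=org.apache.hadoop.ipc&level=DEBUG"
echo "$URL"
# Then: curl "$URL"   (and later the same with level=INFO to disable)
```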
-         </section>                 
-       </section>      
+         </section>
+       </section>
       <section xml:id="trouble.log.gc">
         <title>JVM Garbage Collection Logs</title>
-          <para>HBase is memory intensive, and using the default GC you can see long pauses in all threads including the <emphasis>Juliet Pause</emphasis> aka "GC of Death". 
-           To help debug this or confirm this is happening GC logging can be turned on in the Java virtual machine.  
+          <para>HBase is memory intensive, and using the default GC you can see long pauses in all threads, including the <emphasis>Juliet Pause</emphasis>, aka the "GC of Death".
+           To help debug this, or to confirm it is happening, GC logging can be turned on in the Java virtual machine.
           </para>
           <para>
           To enable, in <filename>hbase-env.sh</filename> add:
-          <programlisting> 
+          <programlisting>
 export HBASE_OPTS="-XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/home/hadoop/hbase/logs/gc-hbase.log"
           </programlisting>
-           Adjust the log directory to wherever you log.  Note:  The GC log does NOT roll automatically, so you'll have to keep an eye on it so it doesn't fill up the disk. 
+           Adjust the log directory to wherever you log.  Note:  The GC log does NOT roll automatically, so you'll have to keep an eye on it so it doesn't fill up the disk.
           </para>
           <para>
          At this point you should see logs like so:
           <programlisting>
-64898.952: [GC [1 CMS-initial-mark: 2811538K(3055704K)] 2812179K(3061272K), 0.0007360 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
+64898.952: [GC [1 CMS-initial-mark: 2811538K(3055704K)] 2812179K(3061272K), 0.0007360 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
 64898.953: [CMS-concurrent-mark-start]
-64898.971: [GC 64898.971: [ParNew: 5567K->576K(5568K), 0.0101110 secs] 2817105K->2812715K(3061272K), 0.0102200 secs] [Times: user=0.07 sys=0.00, real=0.01 secs] 
+64898.971: [GC 64898.971: [ParNew: 5567K->576K(5568K), 0.0101110 secs] 2817105K->2812715K(3061272K), 0.0102200 secs] [Times: user=0.07 sys=0.00, real=0.01 secs]
           </programlisting>
           </para>
           <para>
@@ -139,20 +139,20 @@ export HBASE_OPTS="-XX:+UseConcMarkSweep
             <para>
            The third line indicates a "minor GC", which pauses the VM for 0.0101110 seconds - aka 10 milliseconds. It has reduced the "ParNew" from about 5.5m to 576k.
            Later on in this cycle we see:
-           <programlisting> 
-64901.445: [CMS-concurrent-mark: 1.542/2.492 secs] [Times: user=10.49 sys=0.33, real=2.49 secs] 
+           <programlisting>
+64901.445: [CMS-concurrent-mark: 1.542/2.492 secs] [Times: user=10.49 sys=0.33, real=2.49 secs]
 64901.445: [CMS-concurrent-preclean-start]
-64901.453: [GC 64901.453: [ParNew: 5505K->573K(5568K), 0.0062440 secs] 2868746K->2864292K(3061272K), 0.0063360 secs] [Times: user=0.05 sys=0.00, real=0.01 secs] 
-64901.476: [GC 64901.476: [ParNew: 5563K->575K(5568K), 0.0072510 secs] 2869283K->2864837K(3061272K), 0.0073320 secs] [Times: user=0.05 sys=0.01, real=0.01 secs] 
-64901.500: [GC 64901.500: [ParNew: 5517K->573K(5568K), 0.0120390 secs] 2869780K->2865267K(3061272K), 0.0121150 secs] [Times: user=0.09 sys=0.00, real=0.01 secs] 
-64901.529: [GC 64901.529: [ParNew: 5507K->569K(5568K), 0.0086240 secs] 2870200K->2865742K(3061272K), 0.0087180 secs] [Times: user=0.05 sys=0.00, real=0.01 secs] 
-64901.554: [GC 64901.555: [ParNew: 5516K->575K(5568K), 0.0107130 secs] 2870689K->2866291K(3061272K), 0.0107820 secs] [Times: user=0.06 sys=0.00, real=0.01 secs] 
-64901.578: [CMS-concurrent-preclean: 0.070/0.133 secs] [Times: user=0.48 sys=0.01, real=0.14 secs] 
+64901.453: [GC 64901.453: [ParNew: 5505K->573K(5568K), 0.0062440 secs] 2868746K->2864292K(3061272K), 0.0063360 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
+64901.476: [GC 64901.476: [ParNew: 5563K->575K(5568K), 0.0072510 secs] 2869283K->2864837K(3061272K), 0.0073320 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
+64901.500: [GC 64901.500: [ParNew: 5517K->573K(5568K), 0.0120390 secs] 2869780K->2865267K(3061272K), 0.0121150 secs] [Times: user=0.09 sys=0.00, real=0.01 secs]
+64901.529: [GC 64901.529: [ParNew: 5507K->569K(5568K), 0.0086240 secs] 2870200K->2865742K(3061272K), 0.0087180 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
+64901.554: [GC 64901.555: [ParNew: 5516K->575K(5568K), 0.0107130 secs] 2870689K->2866291K(3061272K), 0.0107820 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
+64901.578: [CMS-concurrent-preclean: 0.070/0.133 secs] [Times: user=0.48 sys=0.01, real=0.14 secs]
 64901.578: [CMS-concurrent-abortable-preclean-start]
-64901.584: [GC 64901.584: [ParNew: 5504K->571K(5568K), 0.0087270 secs] 2871220K->2866830K(3061272K), 0.0088220 secs] [Times: user=0.05 sys=0.00, real=0.01 secs] 
-64901.609: [GC 64901.609: [ParNew: 5512K->569K(5568K), 0.0063370 secs] 2871771K->2867322K(3061272K), 0.0064230 secs] [Times: user=0.06 sys=0.00, real=0.01 secs] 
-64901.615: [CMS-concurrent-abortable-preclean: 0.007/0.037 secs] [Times: user=0.13 sys=0.00, real=0.03 secs] 
-64901.616: [GC[YG occupancy: 645 K (5568 K)]64901.616: [Rescan (parallel) , 0.0020210 secs]64901.618: [weak refs processing, 0.0027950 secs] [1 CMS-remark: 2866753K(3055704K)] 2867399K(3061272K), 0.0049380 secs] [Times: user=0.00 sys=0.01, real=0.01 secs] 
+64901.584: [GC 64901.584: [ParNew: 5504K->571K(5568K), 0.0087270 secs] 2871220K->2866830K(3061272K), 0.0088220 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
+64901.609: [GC 64901.609: [ParNew: 5512K->569K(5568K), 0.0063370 secs] 2871771K->2867322K(3061272K), 0.0064230 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
+64901.615: [CMS-concurrent-abortable-preclean: 0.007/0.037 secs] [Times: user=0.13 sys=0.00, real=0.03 secs]
+64901.616: [GC[YG occupancy: 645 K (5568 K)]64901.616: [Rescan (parallel) , 0.0020210 secs]64901.618: [weak refs processing, 0.0027950 secs] [1 CMS-remark: 2866753K(3055704K)] 2867399K(3061272K), 0.0049380 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
 64901.621: [CMS-concurrent-sweep-start]
             </programlisting>
             </para>
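If you need to pick the pause times out of a long log, the ParNew pause field can be extracted mechanically; a hypothetical one-liner (plain sed, nothing HBase-specific), shown here against a two-line sample in the same format as above:

```shell
# Write a two-line sample in the GC log format shown above, then pull out
# the ParNew pause durations (the "0.0101110 secs" field).
cat > /tmp/gc-sample.log <<'EOF'
64898.971: [GC 64898.971: [ParNew: 5567K->576K(5568K), 0.0101110 secs] 2817105K->2812715K(3061272K), 0.0102200 secs] [Times: user=0.07 sys=0.00, real=0.01 secs]
64901.453: [GC 64901.453: [ParNew: 5505K->573K(5568K), 0.0062440 secs] 2868746K->2864292K(3061272K), 0.0063360 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
EOF
sed -n 's/.*ParNew: [^,]*, \([0-9.]*\) secs.*/\1/p' /tmp/gc-sample.log
```

Point the same sed expression at your real `-Xloggc` file to spot outliers quickly.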
@@ -161,20 +161,20 @@ export HBASE_OPTS="-XX:+UseConcMarkSweep
             </para>
             <para>
            There are a few more minor GCs, then there is a pause at the second-to-last line:
-            <programlisting>  
-64901.616: [GC[YG occupancy: 645 K (5568 K)]64901.616: [Rescan (parallel) , 0.0020210 secs]64901.618: [weak refs processing, 0.0027950 secs] [1 CMS-remark: 2866753K(3055704K)] 2867399K(3061272K), 0.0049380 secs] [Times: user=0.00 sys=0.01, real=0.01 secs] 
+            <programlisting>
+64901.616: [GC[YG occupancy: 645 K (5568 K)]64901.616: [Rescan (parallel) , 0.0020210 secs]64901.618: [weak refs processing, 0.0027950 secs] [1 CMS-remark: 2866753K(3055704K)] 2867399K(3061272K), 0.0049380 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
             </programlisting>
             </para>
             <para>
-            The pause here is 0.0049380 seconds (aka 4.9 milliseconds) to 'remark' the heap.  
+            The pause here is 0.0049380 seconds (aka 4.9 milliseconds) to 'remark' the heap.
             </para>
             <para>
             At this point the sweep starts, and you can watch the heap size go down:
             <programlisting>
-64901.637: [GC 64901.637: [ParNew: 5501K->569K(5568K), 0.0097350 secs] 2871958K->2867441K(3061272K), 0.0098370 secs] [Times: user=0.05 sys=0.00, real=0.01 secs] 
+64901.637: [GC 64901.637: [ParNew: 5501K->569K(5568K), 0.0097350 secs] 2871958K->2867441K(3061272K), 0.0098370 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
 ...  lines removed ...
-64904.936: [GC 64904.936: [ParNew: 5532K->568K(5568K), 0.0070720 secs] 1365024K->1360689K(3061272K), 0.0071930 secs] [Times: user=0.05 sys=0.00, real=0.01 secs] 
-64904.953: [CMS-concurrent-sweep: 2.030/3.332 secs] [Times: user=9.57 sys=0.26, real=3.33 secs] 
+64904.936: [GC 64904.936: [ParNew: 5532K->568K(5568K), 0.0070720 secs] 1365024K->1360689K(3061272K), 0.0071930 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
+64904.953: [CMS-concurrent-sweep: 2.030/3.332 secs] [Times: user=9.57 sys=0.26, real=3.33 secs]
             </programlisting>
            At this point, the CMS sweep took 3.332 seconds, and the heap went from about 2.8 GB down to about 1.3 GB.
             </para>
@@ -186,14 +186,14 @@ export HBASE_OPTS="-XX:+UseConcMarkSweep
             </para>
             <para>
              Add this to HBASE_OPTS:
-            <programlisting> 
+            <programlisting>
 export HBASE_OPTS="-XX:NewSize=64m -XX:MaxNewSize=64m &lt;cms options from above&gt; &lt;gc logging options from above&gt;"
             </programlisting>
             </para>
             <para>
             For more information on GC pauses, see the <link xlink:href="http://www.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/">3 part blog post</link>  by Todd Lipcon
             and <xref linkend="gcpause" /> above.
-            </para>                  
+            </para>
       </section>
     </section>
     <section xml:id="trouble.resources">
@@ -201,18 +201,18 @@ export HBASE_OPTS="-XX:NewSize=64m -XX:M
       <section xml:id="trouble.resources.searchhadoop">
         <title>search-hadoop.com</title>
         <para>
-        <link xlink:href="http://search-hadoop.com">search-hadoop.com</link> indexes all the mailing lists and is great for historical searches.  
+        <link xlink:href="http://search-hadoop.com">search-hadoop.com</link> indexes all the mailing lists and is great for historical searches.
        Search here first when you have an issue, as it’s more than likely someone has already had your problem.
         </para>
       </section>
       <section xml:id="trouble.resources.lists">
         <title>Mailing Lists</title>
-        <para>Ask a question on the <link xlink:href="http://hbase.apache.org/mail-lists.html">HBase mailing lists</link>.
-        The 'dev' mailing list is aimed at the community of developers actually building HBase and for features currently under development, and 'user'
-        is generally used for questions on released versions of HBase.  Before going to the mailing list, make sure your
+        <para>Ask a question on the <link xlink:href="http://hbase.apache.org/mail-lists.html">Apache HBase mailing lists</link>.
+        The 'dev' mailing list is aimed at the community of developers actually building Apache HBase and for features currently under development, and 'user'
+        is generally used for questions on released versions of Apache HBase.  Before going to the mailing list, make sure your
         question has not already been answered by searching the mailing list archives first.  Use
         <xref linkend="trouble.resources.searchhadoop" />.
-        Take some time crafting your question<footnote><para>See <link xlink="http://www.mikeash.com/getting_answers.html">Getting Answers</link></para></footnote>; a quality question that includes all context and 
+        Take some time crafting your question<footnote><para>See <link xlink:href="http://www.mikeash.com/getting_answers.html">Getting Answers</link></para></footnote>; a quality question that includes all context and
         exhibits evidence the author has tried to find answers in the manual and out on lists
         is more likely to get a prompt response.
         </para>
@@ -236,7 +236,7 @@ export HBASE_OPTS="-XX:NewSize=64m -XX:M
               <title>Master Web Interface</title>
               <para>The Master starts a web-interface on port 60010 by default.
               </para>
-              <para>The Master web UI lists created tables and their definition (e.g., ColumnFamilies, blocksize, etc.).  Additionally, 
+              <para>The Master web UI lists created tables and their definition (e.g., ColumnFamilies, blocksize, etc.).  Additionally,
               the available RegionServers in the cluster are listed along with selected high-level metrics (requests, number of regions, usedHeap, maxHeap).
               The Master web UI allows navigation to each RegionServer's web UI.
               </para>
@@ -263,13 +263,13 @@ export HBASE_OPTS="-XX:NewSize=64m -XX:M
 	ls path [watch]
 	set path data [version]
 	delquota [-n|-b] path
-	quit 
+	quit
 	printwatches on|off
 	create [-s] [-e] path data acl
 	stat path [watch]
-	close 
+	close
 	ls2 path [watch]
-	history 
+	history
 	listquota path
 	setAcl path acl
 	getAcl path
@@ -292,7 +292,7 @@ export HBASE_OPTS="-XX:NewSize=64m -XX:M
       </section>
       <section xml:id="trouble.tools.top">
         <title>top</title>
-        <para>         
+        <para>
        <code>top</code> is probably one of the most important tools when first trying to see what’s running on a machine and how the resources are consumed. Here’s an example from a production system:
         <programlisting>
 top - 14:46:59 up 39 days, 11:55,  1 user,  load average: 3.75, 3.57, 3.84
@@ -300,10 +300,10 @@ Tasks: 309 total,   1 running, 308 sleep
 Cpu(s):  4.5%us,  1.6%sy,  0.0%ni, 91.7%id,  1.4%wa,  0.1%hi,  0.6%si,  0.0%st
 Mem:  24414432k total, 24296956k used,   117476k free,     7196k buffers
 Swap: 16008732k total,	14348k used, 15994384k free, 11106908k cached
- 
-  PID USER  	PR  NI  VIRT  RES  SHR S %CPU %MEM	TIME+  COMMAND                                                                                                                                                                      
-15558 hadoop	18  -2 3292m 2.4g 3556 S   79 10.4   6523:52 java                                                                                                                                                                          
-13268 hadoop	18  -2 8967m 8.2g 4104 S   21 35.1   5170:30 java                                                                                                                                                                          
+
+  PID USER  	PR  NI  VIRT  RES  SHR S %CPU %MEM	TIME+  COMMAND
+15558 hadoop	18  -2 3292m 2.4g 3556 S   79 10.4   6523:52 java
+13268 hadoop	18  -2 8967m 8.2g 4104 S   21 35.1   5170:30 java
  8895 hadoop	18  -2 1581m 497m 3420 S   11  2.1   4002:32 java
 …
         </programlisting>
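For a non-interactive snapshot of the same columns (handy for pasting into a mail thread), plain ps works too; a sketch using standard procps options, nothing HBase-specific:

```shell
# Top memory consumers, sorted by resident set size (RSS), without an
# interactive top session; the columns mirror the ones discussed above.
ps -eo pid,user,pmem,rss,comm --sort=-rss | head -n 6
```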
@@ -351,7 +351,7 @@ hadoop@sv4borg12:~$ jps
         <programlisting>
 hadoop@sv4borg12:~$ ps aux | grep HRegionServer
 hadoop   17789  155 35.2 9067824 8604364 ?     S&lt;l  Mar04 9855:48 /usr/java/jdk1.6.0_14/bin/java -Xmx8000m -XX:+DoEscapeAnalysis -XX:+AggressiveOpts -XX:+UseConcMarkSweepGC -XX:NewSize=64m -XX:MaxNewSize=64m -XX:CMSInitiatingOccupancyFraction=88 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/export1/hadoop/logs/gc-hbase.log -Dcom.sun.management.jmxremote.port=10102 -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.password.file=/home/hadoop/hbase/conf/jmxremote.password -Dcom.sun.management.jmxremote -Dhbase.log.dir=/export1/hadoop/logs -Dhbase.log.file=hbase-hadoop-regionserver-sv4borg12.log -Dhbase.home.dir=/home/hadoop/hbase -Dhbase.id.str=hadoop -Dhbase.root.logger=INFO,DRFA -Djava.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64 -classpath /home/hadoop/hbase/bin/../conf:[many jars]:/home/hadoop/hadoop/conf org.apache.hadoop.hbase.regionserver.HRegionServer start
-        </programlisting>      
+        </programlisting>
         </para>
       </section>
       <section xml:id="trouble.tools.jstack">
@@ -371,7 +371,7 @@ hadoop   17789  155 35.2 9067824 8604364
         	at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:395)
         	at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:647)
         	at java.lang.Thread.run(Thread.java:619)
- 
+
         	The MemStore flusher thread that is currently flushing to a file:
 "regionserver60020.cacheFlusher" daemon prio=10 tid=0x0000000040f4e000 nid=0x45eb in Object.wait() [0x00007f16b5b86000..0x00007f16b5b87af0]
    java.lang.Thread.State: WAITING (on object monitor)
@@ -444,7 +444,7 @@ hadoop   17789  155 35.2 9067824 8604364
         </para>
         <para>
         	A thread that receives data from HDFS:
-        <programlisting>        	
+        <programlisting>
 "IPC Client (47) connection to sv4borg9/10.4.24.40:9000 from hadoop" daemon prio=10 tid=0x00007f16a02d0000 nid=0x4fa3 runnable [0x00007f16b517d000..0x00007f16b517dbf0]
    java.lang.Thread.State: RUNNABLE
         	at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
@@ -498,63 +498,75 @@ hadoop   17789  155 35.2 9067824 8604364
         <section xml:id="trouble.tools.opentsdb">
           <title>OpenTSDB</title>
           <para>
-          <link xlink:href="http://opentsdb.net">OpenTSDB</link> is an excellent alternative to Ganglia as it uses HBase to store all the time series and doesn’t have to downsample. Monitoring your own HBase cluster that hosts OpenTSDB is a good exercise.
+          <link xlink:href="http://opentsdb.net">OpenTSDB</link> is an excellent alternative to Ganglia as it uses Apache HBase to store all the time series and doesn’t have to downsample. Monitoring your own HBase cluster that hosts OpenTSDB is a good exercise.
           </para>
           <para>
           Here’s an example of a cluster that’s suffering from hundreds of compactions launched almost all around the same time, which severely affects the IO performance:  (TODO:  insert graph plotting compactionQueueSize)
           </para>
           <para>
-          It’s a good practice to build dashboards with all the important graphs per machine and per cluster so that debugging issues can be done with a single quick look. For example, at StumbleUpon there’s one dashboard per cluster with the most important metrics from both the OS and HBase. You can then go down at the machine level and get even more detailed metrics.
+          It’s a good practice to build dashboards with all the important graphs per machine and per cluster so that debugging issues can be done with a single quick look. For example, at StumbleUpon there’s one dashboard per cluster with the most important metrics from both the OS and Apache HBase. You can then drill down to the machine level and get even more detailed metrics.
           </para>
        </section>
        <section xml:id="trouble.tools.clustersshtop">
         <title>clusterssh+top</title>
-         <para> 
-          clusterssh+top, it’s like a poor man’s monitoring system and it can be quite useful when you have only a few machines as it’s very easy to setup. Starting clusterssh will give you one terminal per machine and another terminal in which whatever you type will be retyped in every window. This means that you can type “top” once and it will start it for all of your machines at the same time giving you full view of the current state of your cluster. You can also tail all the logs at the same time, edit files, etc.      
+         <para>
+          clusterssh+top is like a poor man’s monitoring system, and it can be quite useful when you have only a few machines as it’s very easy to set up. Starting clusterssh gives you one terminal per machine and another terminal in which whatever you type is retyped in every window. This means that you can type “top” once and it will start for all of your machines at the same time, giving you a full view of the current state of your cluster. You can also tail all the logs at the same time, edit files, etc.
           </para>
        </section>
     </section>
     </section>
-    
+
     <section xml:id="trouble.client">
       <title>Client</title>
-       <para>For more information on the HBase client, see <xref linkend="client"/>. 
+       <para>For more information on the HBase client, see <xref linkend="client"/>.
        </para>
        <section xml:id="trouble.client.scantimeout">
             <title>ScannerTimeoutException or UnknownScannerException</title>
-            <para>This is thrown if the time between RPC calls from the client to RegionServer exceeds the scan timeout.  
+            <para>This is thrown if the time between RPC calls from the client to RegionServer exceeds the scan timeout.
             For example, if <code>Scan.setCaching</code> is set to 500, then there will be an RPC call to fetch the next batch of rows every 500 <code>.next()</code> calls on the ResultScanner
             because data is being transferred in blocks of 500 rows to the client.  Reducing the setCaching value may be an option, but setting this value too low makes for inefficient
            processing of large numbers of rows.
             </para>
             <para>See <xref linkend="perf.hbase.client.caching"/>.
             </para>
-       </section>    
+       </section>
+       <section xml:id="trouble.client.lease.exception">
+            <title><classname>LeaseException</classname> when calling <classname>Scanner.next</classname></title>
+            <para>
+In some situations clients that fetch data from a RegionServer get a LeaseException instead of the usual
+<xref linkend="trouble.client.scantimeout" />.  Usually the source of the exception is
+<classname>org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:230)</classname> (line number may vary).
+It tends to happen in the context of a slow/freezing RegionServer#next call.
+It can be prevented by having <varname>hbase.rpc.timeout</varname> > <varname>hbase.regionserver.lease.period</varname>.
+Harsh J investigated the issue as part of the mailing list thread
+<link xlink:href="http://mail-archives.apache.org/mod_mbox/hbase-user/201209.mbox/%3CCAOcnVr3R-LqtKhFsk8Bhrm-YW2i9O6J6Fhjz2h7q6_sxvwd2yw%40mail.gmail.com%3E">HBase, mail # user - Lease does not exist exceptions</link>.
+            </para>
+       </section>
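The inequality above can be written straight into configuration; a sketch for <filename>hbase-site.xml</filename> with illustrative millisecond values (not tuned recommendations):

```xml
<!-- hbase-site.xml sketch: keep the RPC timeout above the scanner lease
     period, as described above. 90s/60s are illustrative values only. -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>90000</value>
</property>
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>60000</value>
</property>
```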
        <section xml:id="trouble.client.scarylogs">
             <title>Shell or client application throws lots of scary exceptions during normal operation</title>
            <para>Since 0.20.0, the default log level for <code>org.apache.hadoop.hbase.*</code> is DEBUG.</para>
             <para>
-            On your clients, edit <filename>$HBASE_HOME/conf/log4j.properties</filename> and change this: <code>log4j.logger.org.apache.hadoop.hbase=DEBUG</code> to this: <code>log4j.logger.org.apache.hadoop.hbase=INFO</code>, or even <code>log4j.logger.org.apache.hadoop.hbase=WARN</code>. 
+            On your clients, edit <filename>$HBASE_HOME/conf/log4j.properties</filename> and change this: <code>log4j.logger.org.apache.hadoop.hbase=DEBUG</code> to this: <code>log4j.logger.org.apache.hadoop.hbase=INFO</code>, or even <code>log4j.logger.org.apache.hadoop.hbase=WARN</code>.
             </para>
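The change described is a one-line edit; as a properties fragment:

```properties
# $HBASE_HOME/conf/log4j.properties - raise the client log threshold
# (use WARN instead of INFO for an even quieter client)
log4j.logger.org.apache.hadoop.hbase=INFO
```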
-       </section>    
+       </section>
        <section xml:id="trouble.client.longpauseswithcompression">
             <title>Long Client Pauses With Compression</title>
-            <para>This is a fairly frequent question on the HBase dist-list.  The scenario is that a client is typically inserting a lot of data into a 
+            <para>This is a fairly frequent question on the Apache HBase dist-list.  The scenario is that a client is typically inserting a lot of data into a
             relatively un-optimized HBase cluster.  Compression can exacerbate the pauses, although it is not the source of the problem.</para>
             <para>See <xref linkend="precreate.regions"/> on the pattern for pre-creating regions and confirm that the table isn't starting with a single region.</para>
-            <para>See <xref linkend="perf.configurations"/> for cluster configuration, particularly <code>hbase.hstore.blockingStoreFiles</code>, <code>hbase.hregion.memstore.block.multiplier</code>, 
+            <para>See <xref linkend="perf.configurations"/> for cluster configuration, particularly <code>hbase.hstore.blockingStoreFiles</code>, <code>hbase.hregion.memstore.block.multiplier</code>,
             <code>MAX_FILESIZE</code> (region size), and <code>MEMSTORE_FLUSHSIZE.</code>  </para>
-            <para>A slightly longer explanation of why pauses can happen is as follows:  Puts are sometimes blocked on the MemStores which are blocked by the flusher thread which is blocked because there are 
+            <para>A slightly longer explanation of why pauses can happen is as follows:  Puts are sometimes blocked on the MemStores which are blocked by the flusher thread which is blocked because there are
             too many files to compact because the compactor is given too many small files to compact and has to compact the same data repeatedly.  This situation can occur even with minor compactions.
-            Compounding this situation, HBase doesn't compress data in memory.  Thus, the 64MB that lives in the MemStore could become a 6MB file after compression - which results in a smaller StoreFile.  The upside is that
+            Compounding this situation, Apache HBase doesn't compress data in memory.  Thus, the 64MB that lives in the MemStore could become a 6MB file after compression - which results in a smaller StoreFile.  The upside is that
            more data is packed into the same region, but performance is achieved by being able to write larger files - which is why HBase waits until the flush size is reached before writing a new StoreFile.  And smaller StoreFiles
-            become targets for compaction.  Without compression the files are much bigger and don't need as much compaction, however this is at the expense of I/O.   
+            become targets for compaction.  Without compression the files are much bigger and don't need as much compaction, however this is at the expense of I/O.
             </para>
             <para>
             For additional information, see this thread on <link xlink:href="http://search-hadoop.com/m/WUnLM6ojHm1/Long+client+pauses+with+compression&amp;subj=Long+client+pauses+with+compression">Long client pauses with compression</link>.
             </para>
-            
-       </section>    
+
+       </section>
        <section xml:id="trouble.client.zookeeper">
             <title>ZooKeeper Client Connection Errors</title>
             <para>Errors like this...
@@ -576,11 +588,11 @@ hadoop   17789  155 35.2 9067824 8604364
  11/07/05 11:26:45 INFO zookeeper.ClientCnxn: Opening socket connection to
  server localhost/127.0.0.1:2181
 </programlisting>
-            ... are either due to ZooKeeper being down, or unreachable due to network issues.            
+            ... are either due to ZooKeeper being down, or unreachable due to network issues.
             </para>
             <para>The utility <xref linkend="trouble.tools.builtin.zkcli"/> may help investigate ZooKeeper issues.
             </para>
-       </section>    
+       </section>
        <section xml:id="trouble.client.oome.directmemory.leak">
             <title>Client running out of memory though heap size seems to be stable (but the off-heap/direct heap keeps growing)</title>
             <para>
@@ -595,15 +607,15 @@ it  a bit hefty.  You want to make this 
 server-side off-heap cache since this feature depends on being able to use big direct buffers (You may have to keep
 separate client-side and server-side config dirs).
             </para>
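One way to bound the direct-buffer growth described above is the JVM's <code>-XX:MaxDirectMemorySize</code> flag in the client's <filename>hbase-env.sh</filename>; a sketch (the 1g cap is an illustrative value, not a recommendation):

```shell
# Client-side hbase-env.sh sketch: cap off-heap (direct) buffer allocation.
# Keep this out of the server-side config if the server uses the off-heap cache.
export HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize=1g"
echo "$HBASE_OPTS"
```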
-       </section>    
+       </section>
        <section xml:id="trouble.client.slowdown.admin">
             <title>Client Slowdown When Calling Admin Methods (flush, compact, etc.)</title>
             <para>
 This is a client issue fixed by <link xlink:href="https://issues.apache.org/jira/browse/HBASE-5073">HBASE-5073</link> in 0.90.6.
-There was a ZooKeeper leak in the client and the client was getting pummeled by ZooKeeper events with each additional 
-invocation of the admin API. 
+There was a ZooKeeper leak in the client and the client was getting pummeled by ZooKeeper events with each additional
+invocation of the admin API.
             </para>
-       </section>    
+       </section>
 
        <section xml:id="trouble.client.security.rpc">
            <title>Secure Client Cannot Connect ([Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)])</title>
@@ -611,7 +623,7 @@ invocation of the admin API. 
 There can be several causes that produce this symptom.
            </para>
            <para>
-First, check that you have a valid Kerberos ticket. One is required in order to set up communication with a secure HBase cluster. Examine the ticket currently in the credential cache, if any, by running the klist command line utility. If no ticket is listed, you must obtain a ticket by running the kinit command with either a keytab specified, or by interactively entering a password for the desired principal.
+First, check that you have a valid Kerberos ticket. One is required in order to set up communication with a secure Apache HBase cluster. Examine the ticket currently in the credential cache, if any, by running the klist command line utility. If no ticket is listed, you must obtain a ticket by running the kinit command with either a keytab specified, or by interactively entering a password for the desired principal.
            </para>
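As a sketch of the ticket check described above (the principal and keytab path below are hypothetical examples, not values from this document — substitute your own):

```shell
# Inspect the current Kerberos credential cache; klist prints the cached
# ticket(s) if any, or reports that no credentials cache was found.
klist

# If no ticket is listed, obtain one.  Principal and keytab are made up:
kinit hbaseclient@EXAMPLE.COM                                         # prompts for a password
kinit -kt /etc/security/keytabs/hbase.keytab hbaseclient@EXAMPLE.COM  # non-interactive, via keytab
```

These are operational commands against a live KDC, so they are shown for illustration only.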
            <para>
 Then, consult the <link xlink:href="http://docs.oracle.com/javase/1.5.0/docs/guide/security/jgss/tutorials/Troubleshooting.html">Java Security Guide troubleshooting section</link>. The most common problem addressed there is resolved by setting javax.security.auth.useSubjectCredsOnly system property value to false.
@@ -625,16 +637,16 @@ Finally, depending on your Kerberos conf
            <para>
 You may also need to download the <link xlink:href="http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html">unlimited strength JCE policy files</link>. Uncompress and extract the downloaded file, and install the policy jars into &lt;java-home&gt;/lib/security.
            </para>
-       </section>    
+       </section>
 
     </section>
-    
+
     <section xml:id="trouble.mapreduce">
       <title>MapReduce</title>
       <section xml:id="trouble.mapreduce.local">
         <title>You Think You're On The Cluster, But You're Actually Local</title>
        <para>The following stack trace happened using <code>ImportTsv</code>, but things like this
-        can happen on any job with a mis-configuration.        
+        can happen on any job with a misconfiguration.
 <programlisting>
     WARN mapred.LocalJobRunner: job_local_0001
 java.lang.IllegalArgumentException: Can't read partitions file
@@ -659,17 +671,17 @@ Caused by: java.io.FileNotFoundException
 </programlisting>
        LocalJobRunner means the job is running locally, not on the cluster.
       </para>
-      <para>See 
+      <para>See
       <link xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath">
-      http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath</link> for more 
+      http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath</link> for more
       information on HBase MapReduce jobs and classpaths.
-      </para>        
+      </para>
       </section>
     </section>
-    
+
     <section xml:id="trouble.namenode">
       <title>NameNode</title>
-       <para>For more information on the NameNode, see <xref linkend="arch.hdfs"/>. 
+       <para>For more information on the NameNode, see <xref linkend="arch.hdfs"/>.
        </para>
        <section xml:id="trouble.namenode.disk">
             <title>HDFS Utilization of Tables and Regions</title>
@@ -679,7 +691,7 @@ Caused by: java.io.FileNotFoundException
             <para><programlisting>hadoop fs -du /hbase/myTable</programlisting> ...returns a list of the regions under the HBase table 'myTable' and their disk utilization. </para>
             <para>For more information on HDFS shell commands, see the <link xlink:href="http://hadoop.apache.org/common/docs/current/file_system_shell.html">HDFS FileSystem Shell documentation</link>.
             </para>
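As an illustration of working with that `-du` output, the per-region sizes can be totalled with awk. The paths and byte counts below are fabricated sample data, not real output from any cluster:

```shell
# Fabricated sample of 'hadoop fs -du /hbase/myTable' output: size-in-bytes, path.
cat > /tmp/du_sample.txt <<'EOF'
1234567  /hbase/myTable/0a1b2c
7654321  /hbase/myTable/3d4e5f
EOF

# Sum the first column to get the table's total footprint in bytes.
awk '{ total += $1 } END { print total }' /tmp/du_sample.txt
# prints 8888888
```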
-       </section>    
+       </section>
        <section xml:id="trouble.namenode.hbase.objects">
             <title>Browsing HDFS for HBase Objects</title>
            <para>Sometimes it will be necessary to explore the HBase objects that exist on HDFS.  These objects could include the WALs (Write Ahead Logs), tables, regions, StoreFiles, etc.
@@ -697,30 +709,30 @@ Caused by: java.io.FileNotFoundException
             <para>The HDFS directory structure of HBase WAL is..
             <programlisting>
 <filename>/hbase</filename>
-     <filename>/.logs</filename>     
+     <filename>/.logs</filename>
           <filename>/&lt;RegionServer&gt;</filename>    (RegionServers)
                <filename>/&lt;HLog&gt;</filename>           (WAL HLog files for the RegionServer)
             </programlisting>
             </para>
-		    <para>See the <link xlink:href="see http://hadoop.apache.org/common/docs/current/hdfs_user_guide.html">HDFS User Guide</link> for other non-shell diagnostic 
-		    utilities like <code>fsck</code>. 
+		    <para>See the <link xlink:href="http://hadoop.apache.org/common/docs/current/hdfs_user_guide.html">HDFS User Guide</link> for other non-shell diagnostic
+		    utilities like <code>fsck</code>.
             </para>
           <section xml:id="trouble.namenode.uncompaction">
             <title>Use Cases</title>
-              <para>Two common use-cases for querying HDFS for HBase objects is research the degree of uncompaction of a table.  If there are a large number of StoreFiles for each ColumnFamily it could 
+              <para>Two common use cases for querying HDFS for HBase objects are researching the degree of uncompaction of a table and checking StoreFile sizes after a major compaction.  If there are a large number of StoreFiles for each ColumnFamily it could
              indicate the need for a major compaction.  Additionally, if after a major compaction the resulting StoreFile is "small", it could indicate the need to reduce the number of ColumnFamilies in
              the table.
 		    </para>
 		  </section>
 
-       </section>    
+       </section>
      </section>
-        
+
     <section xml:id="trouble.network">
       <title>Network</title>
       <section xml:id="trouble.network.spikes">
         <title>Network Spikes</title>
-        <para>If you are seeing periodic network spikes you might want to check the <code>compactionQueues</code> to see if major 
+        <para>If you are seeing periodic network spikes you might want to check the <code>compactionQueues</code> to see if major
         compactions are happening.
         </para>
         <para>See <xref linkend="managed.compactions"/> for more information on managing compactions.
@@ -731,11 +743,17 @@ Caused by: java.io.FileNotFoundException
         <para>HBase expects the loopback IP Address to be 127.0.0.1.  See the Getting Started section on <xref linkend="loopback.ip" />.
         </para>
        </section>
+      <section xml:id="trouble.network.ints">
+        <title>Network Interfaces</title>
+        <para>Are all the network interfaces functioning correctly?  Are you sure?  See the Troubleshooting Case Study in <xref linkend="trouble.casestudy"/>.
+        </para>
+      </section>
+
     </section>
-        
+
     <section xml:id="trouble.rs">
       <title>RegionServer</title>
-        <para>For more information on the RegionServers, see <xref linkend="regionserver.arch"/>. 
+        <para>For more information on the RegionServers, see <xref linkend="regionserver.arch"/>.
        </para>
       <section xml:id="trouble.rs.startup">
         <title>Startup Errors</title>
@@ -743,9 +761,9 @@ Caused by: java.io.FileNotFoundException
             <title>Master Starts, But RegionServers Do Not</title>
             <para>The Master believes the RegionServers have the IP of 127.0.0.1 - which is localhost and resolves to the master's own localhost.
             </para>
-            <para>The RegionServers are erroneously informing the Master that their IP addresses are 127.0.0.1. 
+            <para>The RegionServers are erroneously informing the Master that their IP addresses are 127.0.0.1.
             </para>
-            <para>Modify <filename>/etc/hosts</filename> on the region servers, from...  
+            <para>Modify <filename>/etc/hosts</filename> on the region servers, from...
             <programlisting>
 # Do not remove the following line, or various programs
 # that require network functionality will fail.
@@ -761,7 +779,7 @@ Caused by: java.io.FileNotFoundException
             </programlisting>
             </para>
           </section>
-          
+
           <section xml:id="trouble.rs.startup.compression">
             <title>Compression Link Errors</title>
             <para>
@@ -775,8 +793,8 @@ java.lang.UnsatisfiedLinkError: no gplco
             </programlisting>
             .. then there is a path issue with the compression libraries.  See the Configuration section on <link linkend="lzo.compression">LZO compression configuration</link>.
             </para>
-          </section> 
-      </section>    
+          </section>
+      </section>
       <section xml:id="trouble.rs.runtime">
         <title>Runtime Errors</title>
 
@@ -789,7 +807,7 @@ java.lang.UnsatisfiedLinkError: no gplco
             Adding <code>-XX:+UseMembar</code> to the HBase <varname>HBASE_OPTS</varname> in <filename>conf/hbase-env.sh</filename>
             may fix it.
             </para>
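In <filename>conf/hbase-env.sh</filename> that could look like the following (a sketch — append to, rather than overwrite, any options already set):

```shell
# conf/hbase-env.sh -- append -XX:+UseMembar to whatever HBASE_OPTS already holds
export HBASE_OPTS="$HBASE_OPTS -XX:+UseMembar"
```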
-            <para>Also, are you using <xref linkend="client.rowlocks"/>?  These are discouraged because they can lock up the 
+            <para>Also, are you using <xref linkend="client.rowlocks"/>?  These are discouraged because they can lock up the
             RegionServers if not managed properly.
             </para>
         </section>
@@ -798,7 +816,7 @@ java.lang.UnsatisfiedLinkError: no gplco
            <para>
            If you see log messages like this...
 <programlisting>
-2010-09-13 01:24:17,336 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
+2010-09-13 01:24:17,336 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
 Disk-related IOException in BlockReceiver constructor. Cause is java.io.IOException: Too many open files
         at java.io.UnixFileSystem.createFileExclusively(Native Method)
         at java.io.File.createNewFile(File.java:883)
@@ -829,7 +847,7 @@ Disk-related IOException in BlockReceive
            <programlisting>
 2009-02-24 10:01:33,516 WARN org.apache.hadoop.hbase.util.Sleeper: We slept xxx ms, ten times longer than scheduled: 10000
 2009-02-24 10:01:33,516 WARN org.apache.hadoop.hbase.util.Sleeper: We slept xxx ms, ten times longer than scheduled: 15000
-2009-02-24 10:01:36,472 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: unable to report to master for xxx milliseconds - retrying      
+2009-02-24 10:01:36,472 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: unable to report to master for xxx milliseconds - retrying
            </programlisting>
            ... or see full GC compactions then you may be experiencing full GC's.
            </para>
@@ -860,12 +878,12 @@ java.io.IOException: Session Expired
        at org.apache.zookeeper.ClientCnxn$SendThread.readConnectResult(ClientCnxn.java:589)
        at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:709)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:945)
-ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expired           
+ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expired
            </programlisting>
            <para>
           The JVM is doing a long-running garbage collection which pauses every thread (aka "stop the world").
            Since the RegionServer's local ZooKeeper client cannot send heartbeats, the session times out.
-           By design, we shut down any node that isn't able to contact the ZooKeeper ensemble after getting a timeout so that it stops serving data that may already be assigned elsewhere.  
+           By design, we shut down any node that isn't able to contact the ZooKeeper ensemble after getting a timeout so that it stops serving data that may already be assigned elsewhere.
            </para>
            <para>
             <itemizedlist>
@@ -874,7 +892,7 @@ ERROR org.apache.hadoop.hbase.regionserv
               <listitem>Make sure you are not CPU starving the RegionServer thread. For example, if you are running a MapReduce job using 6 CPU-intensive tasks on a machine with 4 cores, you are probably starving the RegionServer enough to create longer garbage collection pauses.</listitem>
               <listitem>Increase the ZooKeeper session timeout</listitem>
            </itemizedlist>
-           If you wish to increase the session timeout, add the following to your <filename>hbase-site.xml</filename> to increase the timeout from the default of 60 seconds to 120 seconds. 
+           If you wish to increase the session timeout, add the following to your <filename>hbase-site.xml</filename> to increase the timeout from the default of 60 seconds to 120 seconds.
            <programlisting>
 &lt;property&gt;
     &lt;name&gt;zookeeper.session.timeout&lt;/name&gt;
@@ -888,8 +906,8 @@ ERROR org.apache.hadoop.hbase.regionserv
            </para>
            <para>
            Be aware that setting a higher timeout means that the regions served by a failed RegionServer will take at least
-           that amount of time to be transfered to another RegionServer. For a production system serving live requests, we would instead 
-           recommend setting it lower than 1 minute and over-provision your cluster in order the lower the memory load on each machines (hence having 
+           that amount of time to be transferred to another RegionServer. For a production system serving live requests, we would instead
+           recommend setting it lower than 1 minute and over-provisioning your cluster in order to lower the memory load on each machine (hence having
            less garbage to collect per machine).
            </para>
            <para>
@@ -906,7 +924,7 @@ ERROR org.apache.hadoop.hbase.regionserv
         <section xml:id="trouble.rs.runtime.double_listed_regions">
            <title>Regions listed by domain name, then IP</title>
            <para>
-           Fix your DNS.  In versions of HBase before 0.92.x, reverse DNS needs to give same answer
+           Fix your DNS.  In versions of Apache HBase before 0.92.x, reverse DNS needs to give the same answer
            as forward lookup. See <link xlink:href="https://issues.apache.org/jira/browse/HBASE-3431">HBASE 3431
           RegionServer is not using the name given it by the master; double entry in master listing of servers</link> for gory details.
           </para>
@@ -930,35 +948,41 @@ ERROR org.apache.hadoop.hbase.regionserv
            </para>
         </section>
 
-      </section>    
+      </section>
       <section xml:id="trouble.rs.shutdown">
         <title>Shutdown Errors</title>
 
-      </section>    
+      </section>
 
-    </section>    
+    </section>
 
     <section xml:id="trouble.master">
       <title>Master</title>
-       <para>For more information on the Master, see <xref linkend="master"/>. 
+       <para>For more information on the Master, see <xref linkend="master"/>.
        </para>
       <section xml:id="trouble.master.startup">
         <title>Startup Errors</title>
           <section xml:id="trouble.master.startup.migration">
              <title>Master says that you need to run the hbase migrations script</title>
              <para>Upon running that, the hbase migrations script says no files in root directory.</para>
-             <para>HBase expects the root directory to either not exist, or to have already been initialized by hbase running a previous time. If you create a new directory for HBase using Hadoop DFS, this error will occur. 
-             Make sure the HBase root directory does not currently exist or has been initialized by a previous run of HBase. Sure fire solution is to just use Hadoop dfs to delete the HBase root and let HBase create and initialize the directory itself. 
-             </para>          
+             <para>HBase expects the root directory to either not exist, or to have already been initialized by HBase running a previous time. If you create a new directory for HBase using Hadoop DFS, this error will occur.
+             Make sure the HBase root directory does not currently exist or has been initialized by a previous run of HBase. A surefire solution is to use Hadoop dfs to delete the HBase root and let HBase create and initialize the directory itself.
+             </para>
+          </section>
+          <section xml:id="trouble.master.startup.zk.buffer">
+              <title>Packet len6080218 is out of range!</title>
+              <para>If you have many regions on your cluster and you see an error
+                  like the one reported above in this section's title in your logs, see
+                  <link xlink:href="https://issues.apache.org/jira/browse/HBASE-4246">HBASE-4246 Cluster with too many regions cannot withstand some master failover scenarios</link>.</para>
           </section>
-          
-      </section>    
+
+      </section>
       <section xml:id="trouble.master.shutdown">
         <title>Shutdown Errors</title>
 
-      </section>    
+      </section>
 
-    </section>    
+    </section>
 
     <section xml:id="trouble.zookeeper">
       <title>ZooKeeper</title>
@@ -967,28 +991,28 @@ ERROR org.apache.hadoop.hbase.regionserv
           <section xml:id="trouble.zookeeper.startup.address">
              <title>Could not find my address: xyz in list of ZooKeeper quorum servers</title>
             <para>A ZooKeeper server wasn't able to start and throws that error. Here, xyz is the name of your server.</para>
-             <para>This is a name lookup problem. HBase tries to start a ZooKeeper server on some machine but that machine isn't able to find itself in the <varname>hbase.zookeeper.quorum</varname> configuration.  
-             </para>          
-             <para>Use the hostname presented in the error message instead of the value you used. If you have a DNS server, you can set <varname>hbase.zookeeper.dns.interface</varname> and <varname>hbase.zookeeper.dns.nameserver</varname> in <filename>hbase-site.xml</filename> to make sure it resolves to the correct FQDN.   
-             </para>          
+             <para>This is a name lookup problem. HBase tries to start a ZooKeeper server on some machine but that machine isn't able to find itself in the <varname>hbase.zookeeper.quorum</varname> configuration.
+             </para>
+             <para>Use the hostname presented in the error message instead of the value you used. If you have a DNS server, you can set <varname>hbase.zookeeper.dns.interface</varname> and <varname>hbase.zookeeper.dns.nameserver</varname> in <filename>hbase-site.xml</filename> to make sure it resolves to the correct FQDN.
+             </para>
           </section>
-          
-      </section>    
+
+      </section>
       <section xml:id="trouble.zookeeper.general">
           <title>ZooKeeper, The Cluster Canary</title>
          <para>ZooKeeper is the cluster's "canary in the mineshaft". It'll be the first to notice issues if there are any, so making sure it's happy is the shortcut to a humming cluster.
-          </para> 
+          </para>
           <para>
           See the <link xlink:href="http://wiki.apache.org/hadoop/ZooKeeper/Troubleshooting">ZooKeeper Operating Environment Troubleshooting</link> page. It has suggestions and tools for checking disk and networking performance; i.e. the operating environment your ZooKeeper and HBase are running in.
           </para>
          <para>Additionally, the utility <xref linkend="trouble.tools.builtin.zkcli"/> may help investigate ZooKeeper issues.
          </para>
-      </section>  
+      </section>
 
-    </section>    
+    </section>
 
     <section xml:id="trouble.ec2">
-       <title>Amazon EC2</title>      
+       <title>Amazon EC2</title>
           <section xml:id="trouble.ec2.zookeeper">
              <title>ZooKeeper does not seem to work on Amazon EC2</title>
              <para>HBase does not start when deployed as Amazon EC2 instances.  Exceptions like the below appear in the Master and/or RegionServer logs: </para>
@@ -1000,8 +1024,8 @@ ERROR org.apache.hadoop.hbase.regionserv
   java.net.ConnectException: Connection refused
              </programlisting>
              <para>
-             Security group policy is blocking the ZooKeeper port on a public address. 
-             Use the internal EC2 host names when configuring the ZooKeeper quorum peer list. 
+             Security group policy is blocking the ZooKeeper port on a public address.
+             Use the internal EC2 host names when configuring the ZooKeeper quorum peer list.
              </para>
           </section>
           <section xml:id="trouble.ec2.instability">
@@ -1015,15 +1039,15 @@ ERROR org.apache.hadoop.hbase.regionserv
              See Andrew's answer here, up on the user list: <link xlink:href="http://search-hadoop.com/m/sPdqNFAwyg2">Remote Java client connection into EC2 instance</link>.
              </para>
           </section>
-          
+
     </section>
-    
+
     <section xml:id="trouble.versions">
-       <title>HBase and Hadoop version issues</title>      
+       <title>HBase and Hadoop version issues</title>
           <section xml:id="trouble.versions.205">
              <title><code>NoClassDefFoundError</code> when trying to run 0.90.x on hadoop-0.20.205.x (or hadoop-1.0.x)</title>
-             <para>HBase 0.90.x does not ship with hadoop-0.20.205.x, etc.  To make it run, you need to replace the hadoop
-             jars that HBase shipped with in its <filename>lib</filename> directory with those of the Hadoop you want to
+             <para>Apache HBase 0.90.x does not ship with hadoop-0.20.205.x, etc.  To make it run, you need to replace the hadoop
+             jars that Apache HBase shipped with in its <filename>lib</filename> directory with those of the Hadoop you want to
              run HBase on.  If even after replacing Hadoop jars you get the below exception:
 <programlisting>
 sv4r6s38: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
@@ -1041,6 +1065,26 @@ you need to copy under <filename>hbase/l
 in your Hadoop's <filename>lib</filename> directory.  That should fix the above complaint.
 </para>
 </section>
+
+          <section xml:id="trouble.versions.client">
+             <title>...cannot communicate with client version...</title>
+<para>If you see something like the following in your logs
+<computeroutput>...
+2012-09-24 10:20:52,168 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
+org.apache.hadoop.ipc.RemoteException: Server IPC version 7 cannot communicate with client version 4
+...</computeroutput>
+...are you trying to talk to a Hadoop 2.0.x cluster from an HBase that has a Hadoop 1.0.x client?
+Use an HBase built against Hadoop 2.0 or rebuild your HBase passing the <command>-Dhadoop.profile=2.0</command>
+property to Maven (See <xref linkend="maven.build.hadoop" /> for more).
+</para>
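For example, the rebuild step could look like this, run from the root of an HBase source checkout (a sketch, not an exact recipe — flags such as -DskipTests are optional):

```shell
# Rebuild HBase against the Hadoop 2.0 profile (run in the HBase source tree).
mvn clean install -DskipTests -Dhadoop.profile=2.0
```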
+
+</section>
 </section>
-    
+
+    <section xml:id="trouble.casestudy">
+      <title>Case Studies</title>
+      <para>For Performance and Troubleshooting Case Studies, see <xref linkend="casestudies"/>.
+      </para>
+    </section>
+
   </chapter>

Modified: hbase/branches/0.94/src/docbkx/upgrading.xml
URL: http://svn.apache.org/viewvc/hbase/branches/0.94/src/docbkx/upgrading.xml?rev=1455996&r1=1455995&r2=1455996&view=diff
==============================================================================
--- hbase/branches/0.94/src/docbkx/upgrading.xml (original)
+++ hbase/branches/0.94/src/docbkx/upgrading.xml Wed Mar 13 15:20:19 2013
@@ -27,49 +27,29 @@
  */
 -->
     <title>Upgrading</title>
+    <para>You cannot skip major versions when upgrading.  If you are upgrading from
+    version 0.20.x to 0.92.x, you must first go from 0.20.x to 0.90.x and then go
+    from 0.90.x to 0.92.x.</para>
     <para>
         Review <xref linkend="configuration" />, in particular the section on Hadoop version.
     </para>
-    <section xml:id="upgrade0.90">
-    <title>Upgrading to HBase 0.90.x from 0.20.x or 0.89.x</title>
-          <para>This version of 0.90.x HBase can be started on data written by
-              HBase 0.20.x or HBase 0.89.x.  There is no need of a migration step.
-              HBase 0.89.x and 0.90.x does write out the name of region directories
-              differently -- it names them with a md5 hash of the region name rather
-              than a jenkins hash -- so this means that once started, there is no
-              going back to HBase 0.20.x.
-          </para>
-          <para>
-             Be sure to remove the <filename>hbase-default.xml</filename> from
-             your <filename>conf</filename>
-             directory on upgrade.  A 0.20.x version of this file will have
-             sub-optimal configurations for 0.90.x HBase.  The
-             <filename>hbase-default.xml</filename> file is now bundled into the
-             HBase jar and read from there.  If you would like to review
-             the content of this file, see it in the src tree at
-             <filename>src/main/resources/hbase-default.xml</filename> or
-             see <xref linkend="hbase_default_configurations" />.
-          </para>
-          <para>
-            Finally, if upgrading from 0.20.x, check your 
-            <varname>.META.</varname> schema in the shell.  In the past we would
-            recommend that users run with a 16kb
-            <varname>MEMSTORE_FLUSHSIZE</varname>.
-            Run <code>hbase> scan '-ROOT-'</code> in the shell. This will output
-            the current <varname>.META.</varname> schema.  Check
-            <varname>MEMSTORE_FLUSHSIZE</varname> size.  Is it 16kb (16384)?  If so, you will
-            need to change this (The 'normal'/default value is 64MB (67108864)).
-            Run the script <filename>bin/set_meta_memstore_size.rb</filename>.
-            This will make the necessary edit to your <varname>.META.</varname> schema.
-            Failure to run this change will make for a slow cluster <footnote>
-            <para>
-            See <link xlink:href="https://issues.apache.org/jira/browse/HBASE-3499">HBASE-3499 Users upgrading to 0.90.0 need to have their .META. table updated with the right MEMSTORE_SIZE</link>
-            </para>
-            </footnote>
-            .
-
-          </para>
-          </section>
+    <section xml:id="upgrade0.96">
+      <title>Upgrading from 0.94.x to 0.96.x</title>
+      <subtitle>The Singularity</subtitle>
+      <para>You will have to stop your old 0.94 cluster completely to upgrade.  If you are replicating
+     between clusters, both clusters will have to go down to upgrade.  Make sure it is a clean shutdown
+     between clusters, both clusters will have to go down to upgrade.  Make sure it is a clean shutdown
+     so there are no WAL files lying around (TODO: Can 0.96 read 0.94 WAL files?).  Make sure
+     ZooKeeper is cleared of state.  All clients must be upgraded to 0.96 too.
+ </para>
+ <para>The API has changed in a few areas; in particular how you use coprocessors (TODO: MapReduce too?)
+ </para>
+ <para>TODO: Write about 3.4 zk ensemble and multi support</para>
+    </section>
+    <section xml:id="upgrade0.94">
+      <title>Upgrading from 0.92.x to 0.94.x</title>
+    <para>0.92 and 0.94 are interface compatible.  You can do a rolling upgrade between these versions.
+    </para>
+    </section>
     <section xml:id="upgrade0.92">
       <title>Upgrading from 0.90.x to 0.92.x</title>
       <subtitle>Upgrade Guide</subtitle>
@@ -170,7 +150,7 @@ The block size default size has been cha
 <section><title>Experimental off-heap cache
 </title>
 <para>
-A new cache was contributed to 0.92.0 to act as a solution between using the “on-heap” cache which is the current LRU cache the region servers have and the operating system cache which is out of our control. 
+A new cache was contributed to 0.92.0 to act as a solution between using the “on-heap” cache which is the current LRU cache the region servers have and the operating system cache which is out of our control.
 To enable, set “-XX:MaxDirectMemorySize” in hbase-env.sh to the value for maximum direct memory size and specify hbase.offheapcache.percentage in hbase-site.xml with the percentage that you want to dedicate to off-heap cache. This should only be set for servers and not for clients. Use at your own risk.
 See this blog post for additional information on this new experimental feature: http://www.cloudera.com/blog/2012/01/caching-in-hbase-slabcache/
 </para>
@@ -194,8 +174,48 @@ See this blog post for additional inform
 </title>
 <para>0.92.0 stores data in a new format, <xref linkend="hfilev2" />.   As HBase runs, it will move all your data from HFile v1 to HFile v2 format.  This auto-migration will run in the background as flushes and compactions run.
 HFile V2 allows HBase run with larger regions/files.  In fact, we encourage that all HBasers going forward tend toward Facebook axiom #1, run with larger, fewer regions.
-If you have lots of regions now -- more than 100s per host -- you should look into setting your region size up after you move to 0.92.0 (In 0.92.0, default size is not 1G, up from 256M), and then running online merge tool (See “HBASE-1621 merge tool should work on online cluster, but disabled table”).
+If you have lots of regions now -- more than 100s per host -- you should look into setting your region size up after you move to 0.92.0 (In 0.92.0, default size is now 1G, up from 256M), and then running online merge tool (See “HBASE-1621 merge tool should work on online cluster, but disabled table”).
 </para>
 </section>
     </section>
+    <section xml:id="upgrade0.90">
+    <title>Upgrading to HBase 0.90.x from 0.20.x or 0.89.x</title>
+          <para>This version of 0.90.x HBase can be started on data written by
+              HBase 0.20.x or HBase 0.89.x.  There is no need of a migration step.
+              HBase 0.89.x and 0.90.x do write out the names of region directories
+              differently -- they name them with an md5 hash of the region name rather
+              than a jenkins hash -- so this means that once started, there is no
+              going back to HBase 0.20.x.
+          </para>
+          <para>
+             Be sure to remove the <filename>hbase-default.xml</filename> from
+             your <filename>conf</filename>
+             directory on upgrade.  A 0.20.x version of this file will have
+             sub-optimal configurations for 0.90.x HBase.  The
+             <filename>hbase-default.xml</filename> file is now bundled into the
+             HBase jar and read from there.  If you would like to review
+             the content of this file, see it in the src tree at
+             <filename>src/main/resources/hbase-default.xml</filename> or
+             see <xref linkend="hbase_default_configurations" />.
+          </para>
+          <para>
+            Finally, if upgrading from 0.20.x, check your
+            <varname>.META.</varname> schema in the shell.  In the past we
+            recommended that users run with a 16KB
+            <varname>MEMSTORE_FLUSHSIZE</varname>.
+            Run <code>hbase> scan '-ROOT-'</code> in the shell. This will output
+            the current <varname>.META.</varname> schema.  Check the
+            <varname>MEMSTORE_FLUSHSIZE</varname> value.  Is it 16KB (16384)?  If so, you will
+            need to change this (the 'normal'/default value is 64MB (67108864)).
+            Run the script <filename>bin/set_meta_memstore_size.rb</filename>.
+            This will make the necessary edit to your <varname>.META.</varname> schema.
+            Failure to run this change will make for a slow cluster<footnote>
+            <para>
+            See <link xlink:href="https://issues.apache.org/jira/browse/HBASE-3499">HBASE-3499 Users upgrading to 0.90.0 need to have their .META. table updated with the right MEMSTORE_SIZE</link>
+            </para>
+            </footnote>.
+          </para>
+          </section>
     </chapter>

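[Editor's note: the .META. check described in the upgrade section above boils down to a short shell session. This is a sketch only -- the scan output is elided and the attribute values shown are illustrative for a cluster still carrying the old 16384 flush size; actual output will vary.]

```
hbase(main):001:0> scan '-ROOT-'
ROW          COLUMN+CELL
 .META.,,1   column=info:regioninfo, ... MEMSTORE_FLUSHSIZE => '16384' ...
...
hbase(main):002:0> exit

$ ./bin/set_meta_memstore_size.rb   # rewrites MEMSTORE_FLUSHSIZE to the 64MB default
```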
Modified: hbase/branches/0.94/src/site/resources/css/site.css
URL: http://svn.apache.org/viewvc/hbase/branches/0.94/src/site/resources/css/site.css?rev=1455996&r1=1455995&r2=1455996&view=diff
==============================================================================
--- hbase/branches/0.94/src/site/resources/css/site.css (original)
+++ hbase/branches/0.94/src/site/resources/css/site.css Wed Mar 13 15:20:19 2013
@@ -109,6 +109,11 @@ h4 {
   background-repeat: repeat-x;
 }
 
+.section {
+  padding-bottom: 0;
+  padding-top: 0;
+}
+
 /*
 #leftColumn {
   display: none !important

Modified: hbase/branches/0.94/src/site/resources/images/hbase_logo.png
URL: http://svn.apache.org/viewvc/hbase/branches/0.94/src/site/resources/images/hbase_logo.png?rev=1455996&r1=1455995&r2=1455996&view=diff
==============================================================================
Binary files - no diff available.

Modified: hbase/branches/0.94/src/site/resources/images/hbase_logo.svg
URL: http://svn.apache.org/viewvc/hbase/branches/0.94/src/site/resources/images/hbase_logo.svg?rev=1455996&r1=1455995&r2=1455996&view=diff
==============================================================================
--- hbase/branches/0.94/src/site/resources/images/hbase_logo.svg (original)
+++ hbase/branches/0.94/src/site/resources/images/hbase_logo.svg Wed Mar 13 15:20:19 2013
@@ -1,41 +1,78 @@
-<?xml version="1.0" encoding="utf-8"?>
-<!-- Generator: Adobe Illustrator 15.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0)  -->
-<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
-<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
-	 width="792px" height="612px" viewBox="0 0 792 612" enable-background="new 0 0 792 612" xml:space="preserve">
-<path fill="#BA160C" d="M233.586,371.672h-9.895v-51.583h9.895V371.672L233.586,371.672z M223.691,307.6v-19.465h9.895V307.6
-	H223.691z M223.691,371.672h-9.896v-32.117h-63.584v32.117h-19.466v-83.537h19.466v31.954h55.128h8.457h9.896V371.672
-	L223.691,371.672z M223.691,288.135h-9.896V307.6h9.896V288.135z"/>
-<path fill="#BA160C" d="M335.939,329.334c6.812,4.218,10.219,10.652,10.219,19.303c0,6.272-2,11.571-6.002,15.897
-	c-4.325,4.758-10.165,7.137-17.519,7.137h-28.629v-19.465h28.629c2.812,0,4.218-2.109,4.218-6.327c0-4.216-1.406-6.325-4.218-6.325
-	h-28.629v-19.303h27.17c2.811,0,4.217-2.109,4.217-6.327c0-4.216-1.406-6.326-4.217-6.326h-27.17v-19.464h27.17
-	c7.353,0,13.192,2.379,17.519,7.137c3.892,4.325,5.839,9.625,5.839,15.896C344.536,318.954,341.67,325.009,335.939,329.334z
-	 M294.008,371.672h-52.312v-51.42h19.466h5.259h27.588v19.303h-32.847v12.652h32.847V371.672L294.008,371.672z M294.008,307.599
-	h-32.847v0h-19.466v-19.465h52.312V307.599z"/>
-<path fill="#878888" d="M355.123,266.419v-8.92h14.532v-5.353c0-1.932-0.644-2.899-1.933-2.899h-12.6v-8.919h12.6
-	c3.223,0,5.836,1.164,7.842,3.494c2.007,2.33,3.011,5.104,3.011,8.325v26.463h-8.921v-12.19H355.123L355.123,266.419z
-	 M473.726,278.61h-29.587c-3.469,0-6.417-1.152-8.845-3.458c-2.429-2.304-3.642-5.191-3.642-8.659v-14.049
-	c0-3.47,1.213-6.356,3.642-8.662c2.428-2.304,5.376-3.455,8.845-3.455h29.587v8.919h-29.587c-2.378,0-3.567,1.066-3.567,3.197
-	v14.049c0,2.131,1.189,3.196,3.567,3.196h29.587V278.61L473.726,278.61z M567.609,278.61h-8.996v-14.718h-22.895v14.718h-8.92
-	v-38.282h8.92v14.644h22.895v-14.644h8.996V278.61L567.609,278.61z M661.494,249.247h-31.889v5.725h29.807v8.92h-29.807v5.797
-	h31.814v8.92h-40.735v-38.282h40.809V249.247z M355.123,240.328v8.919h-12.674c-1.239,0-1.858,0.967-1.858,2.899v5.353h5.575h2.435
-	h6.522v8.92h-6.522h-2.435h-5.575v12.19h-8.92v-26.463c0-3.221,1.004-5.996,3.011-8.325c2.006-2.33,4.596-3.494,7.768-3.494H355.123
-	L355.123,240.328z M254.661,266.122v-8.92h13.083c1.288,0,1.933-1.313,1.933-3.939c0-2.676-0.645-4.015-1.933-4.015h-13.083v-8.919
-	h13.083c3.32,0,5.995,1.363,8.028,4.088c1.883,2.478,2.825,5.425,2.825,8.846c0,3.419-0.942,6.342-2.825,8.771
-	c-2.033,2.725-4.708,4.088-8.028,4.088H254.661z M177.649,278.61h-8.92v-12.19h-14.532v-8.92h14.532v-5.353
-	c0-1.932-0.644-2.899-1.932-2.899h-12.6v-8.919h12.6c3.222,0,5.835,1.164,7.842,3.494c2.007,2.33,3.01,5.104,3.01,8.325V278.61
-	L177.649,278.61z M254.661,240.328v8.919h-15.016v7.954h15.016v8.92h-15.016v12.488h-8.92v-38.282H254.661z M154.198,266.419h-7.604
-	h-1.354h-5.575v12.19h-8.92v-26.463c0-3.221,1.004-5.996,3.01-8.325c2.007-2.33,4.597-3.494,7.768-3.494h12.674v8.919h-12.674
-	c-1.239,0-1.858,0.967-1.858,2.899v5.353h5.575h1.354h7.604V266.419z"/>
-<path fill="#BA160C" d="M456.325,371.672H436.86V345.07h-31.094h-0.618v-19.466h0.618h31.094v-11.68
-	c0-4.216-1.406-6.324-4.218-6.324h-27.494v-19.465h27.494c7.03,0,12.733,2.541,17.114,7.623c4.379,5.083,6.569,11.139,6.569,18.167
-	V371.672z M405.148,345.07h-19.547h-12.165v26.602h-19.466v-57.748c0-7.028,2.19-13.083,6.569-18.167
-	c4.379-5.083,10.03-7.623,16.952-7.623h27.656V307.6h-27.656c-2.704,0-4.055,2.108-4.055,6.324v11.68h12.165h19.547V345.07z"/>
-<path fill="#BA160C" d="M564.329,345.88c0,7.03-2.109,13.031-6.327,18.006c-4.541,5.19-10.273,7.786-17.193,7.786h-72.02v-19.465
-	h72.02c2.704,0,4.055-2.109,4.055-6.327c0-4.216-1.352-6.325-4.055-6.325h-52.394c-6.92,0-12.652-2.596-17.193-7.787
-	c-4.327-4.865-6.49-10.813-6.49-17.843c0-7.028,2.218-13.083,6.651-18.167c4.434-5.083,10.112-7.623,17.032-7.623h72.021v19.464
-	h-72.021c-2.703,0-4.055,2.109-4.055,6.326c0,4.109,1.352,6.164,4.055,6.164h52.394c6.92,0,12.652,2.596,17.193,7.787
-	C562.22,332.85,564.329,338.852,564.329,345.88z"/>
-<polygon fill="#BA160C" points="661.494,307.599 591.906,307.599 591.906,320.089 656.952,320.089 656.952,339.555 591.906,339.555 
-	591.906,352.207 661.331,352.207 661.331,371.672 572.44,371.672 572.44,288.135 661.494,288.135 "/>
-</svg>
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Generator: Adobe Illustrator 15.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0)  -->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   version="1.1"
+   id="Layer_1"
+   x="0px"
+   y="0px"
+   width="792px"
+   height="612px"
+   viewBox="0 0 792 612"
+   enable-background="new 0 0 792 612"
+   xml:space="preserve"
+   inkscape:version="0.48.4 r9939"
+   sodipodi:docname="hbase_banner_logo.png"
+   inkscape:export-filename="hbase_logo_filledin.png"
+   inkscape:export-xdpi="90"
+   inkscape:export-ydpi="90"><metadata
+   id="metadata3285"><rdf:RDF><cc:Work
+       rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+         rdf:resource="http://purl.org/dc/dcmitype/StillImage" /><dc:title></dc:title></cc:Work></rdf:RDF></metadata><defs
+   id="defs3283" /><sodipodi:namedview
+   pagecolor="#ffffff"
+   bordercolor="#666666"
+   borderopacity="1"
+   objecttolerance="10"
+   gridtolerance="10"
+   guidetolerance="10"
+   inkscape:pageopacity="0"
+   inkscape:pageshadow="2"
+   inkscape:window-width="1131"
+   inkscape:window-height="715"
+   id="namedview3281"
+   showgrid="false"
+   inkscape:zoom="4.3628026"
+   inkscape:cx="328.98554"
+   inkscape:cy="299.51695"
+   inkscape:window-x="752"
+   inkscape:window-y="456"
+   inkscape:window-maximized="0"
+   inkscape:current-layer="Layer_1" />
+<path
+   d="m 233.586,371.672 -9.895,0 0,-51.583 9.895,0 0,51.583 z m -9.77344,-51.59213 -0.12156,-31.94487 9.895,0 -0.0405,31.98539 z m -0.12156,51.59213 -9.896,0 0,-32.117 -63.584,0 0,32.117 -19.466,0 0,-83.537 19.466,0 0,31.954 55.128,0 8.457,0 9.896,0 0,51.583 z m 0,-83.537 -9.896,0 0,31.98539 10.01756,-0.0405 z"
+   id="path3269"
+   inkscape:connector-curvature="0"
+   style="fill:#ba160c"
+   sodipodi:nodetypes="cccccccccccccccccccccccccccccc" />
+<path
+   d="m 335.939,329.334 c 6.812,4.218 10.219,10.652 10.219,19.303 0,6.272 -2,11.571 -6.002,15.897 -4.325,4.758 -10.165,7.137 -17.519,7.137 l -28.629,0 0,-19.465 28.629,0 c 2.812,0 4.218,-2.109 4.218,-6.327 0,-4.216 -1.406,-6.325 -4.218,-6.325 l -28.629,0 0,-19.303 27.17,0 c 2.811,0 4.217,-2.109 4.217,-6.327 0,-4.216 -1.406,-6.326 -4.217,-6.326 l -27.17,0 0,-19.464 27.17,0 c 7.353,0 13.192,2.379 17.519,7.137 3.892,4.325 5.839,9.625 5.839,15.896 0,7.787 -2.866,13.842 -8.597,18.167 z m -41.931,42.338 -52.312,0 0,-51.42 19.466,0 5.259,0 27.588,0 0,19.303 -32.847,0 0,12.652 32.847,0 0,19.465 z m 0,-64.073 -32.847,0 0.0405,13.24974 -19.466,-0.48623 -0.0405,-32.22851 52.312,0 0,19.465 z"
+   id="path3271"
+   inkscape:connector-curvature="0"
+   style="fill:#ba160c"
+   sodipodi:nodetypes="cscsccsssccsssccscsccccccccccccccccccccc" />
+<path
+   d="M355.123,266.419v-8.92h14.532v-5.353c0-1.932-0.644-2.899-1.933-2.899h-12.6v-8.919h12.6  c3.223,0,5.836,1.164,7.842,3.494c2.007,2.33,3.011,5.104,3.011,8.325v26.463h-8.921v-12.19H355.123L355.123,266.419z   M473.726,278.61h-29.587c-3.469,0-6.417-1.152-8.845-3.458c-2.429-2.304-3.642-5.191-3.642-8.659v-14.049  c0-3.47,1.213-6.356,3.642-8.662c2.428-2.304,5.376-3.455,8.845-3.455h29.587v8.919h-29.587c-2.378,0-3.567,1.066-3.567,3.197  v14.049c0,2.131,1.189,3.196,3.567,3.196h29.587V278.61L473.726,278.61z M567.609,278.61h-8.996v-14.718h-22.895v14.718h-8.92  v-38.282h8.92v14.644h22.895v-14.644h8.996V278.61L567.609,278.61z M661.494,249.247h-31.889v5.725h29.807v8.92h-29.807v5.797  h31.814v8.92h-40.735v-38.282h40.809V249.247z M355.123,240.328v8.919h-12.674c-1.239,0-1.858,0.967-1.858,2.899v5.353h5.575h2.435  h6.522v8.92h-6.522h-2.435h-5.575v12.19h-8.92v-26.463c0-3.221,1.004-5.996,3.011-8.325c2.006-2.33,4.596-3.494,7.768-3.494H355.123  L355.123,240.328z M254.661,266.122v-8.92h13.083c1.
 288,0,1.933-1.313,1.933-3.939c0-2.676-0.645-4.015-1.933-4.015h-13.083v-8.919  h13.083c3.32,0,5.995,1.363,8.028,4.088c1.883,2.478,2.825,5.425,2.825,8.846c0,3.419-0.942,6.342-2.825,8.771  c-2.033,2.725-4.708,4.088-8.028,4.088H254.661z M177.649,278.61h-8.92v-12.19h-14.532v-8.92h14.532v-5.353  c0-1.932-0.644-2.899-1.932-2.899h-12.6v-8.919h12.6c3.222,0,5.835,1.164,7.842,3.494c2.007,2.33,3.01,5.104,3.01,8.325V278.61  L177.649,278.61z M254.661,240.328v8.919h-15.016v7.954h15.016v8.92h-15.016v12.488h-8.92v-38.282H254.661z M154.198,266.419h-7.604  h-1.354h-5.575v12.19h-8.92v-26.463c0-3.221,1.004-5.996,3.01-8.325c2.007-2.33,4.597-3.494,7.768-3.494h12.674v8.919h-12.674  c-1.239,0-1.858,0.967-1.858,2.899v5.353h5.575h1.354h7.604V266.419z"
+   id="path3273"
+   style="fill:#666666"
+   fill="#878888" />
+<path
+   fill="#BA160C"
+   d="M456.325,371.672H436.86V345.07h-31.094h-0.618v-19.466h0.618h31.094v-11.68  c0-4.216-1.406-6.324-4.218-6.324h-27.494v-19.465h27.494c7.03,0,12.733,2.541,17.114,7.623c4.379,5.083,6.569,11.139,6.569,18.167  V371.672z M405.148,345.07h-19.547h-12.165v26.602h-19.466v-57.748c0-7.028,2.19-13.083,6.569-18.167  c4.379-5.083,10.03-7.623,16.952-7.623h27.656V307.6h-27.656c-2.704,0-4.055,2.108-4.055,6.324v11.68h12.165h19.547V345.07z"
+   id="path3275" />
+<path
+   fill="#BA160C"
+   d="M564.329,345.88c0,7.03-2.109,13.031-6.327,18.006c-4.541,5.19-10.273,7.786-17.193,7.786h-72.02v-19.465  h72.02c2.704,0,4.055-2.109,4.055-6.327c0-4.216-1.352-6.325-4.055-6.325h-52.394c-6.92,0-12.652-2.596-17.193-7.787  c-4.327-4.865-6.49-10.813-6.49-17.843c0-7.028,2.218-13.083,6.651-18.167c4.434-5.083,10.112-7.623,17.032-7.623h72.021v19.464  h-72.021c-2.703,0-4.055,2.109-4.055,6.326c0,4.109,1.352,6.164,4.055,6.164h52.394c6.92,0,12.652,2.596,17.193,7.787  C562.22,332.85,564.329,338.852,564.329,345.88z"
+   id="path3277" />
+<polygon
+   fill="#BA160C"
+   points="661.494,307.599 591.906,307.599 591.906,320.089 656.952,320.089 656.952,339.555 591.906,339.555   591.906,352.207 661.331,352.207 661.331,371.672 572.44,371.672 572.44,288.135 661.494,288.135 "
+   id="polygon3279" />
+</svg>
\ No newline at end of file

Modified: hbase/branches/0.94/src/site/site.vm
URL: http://svn.apache.org/viewvc/hbase/branches/0.94/src/site/site.vm?rev=1455996&r1=1455995&r2=1455996&view=diff
==============================================================================
--- hbase/branches/0.94/src/site/site.vm (original)
+++ hbase/branches/0.94/src/site/site.vm Wed Mar 13 15:20:19 2013
@@ -481,6 +481,20 @@
       #end
     #end
     ## $headContent
+<!--Google Analytics-->
+<script type="text/javascript">
+
+  var _gaq = _gaq || [];
+  _gaq.push(['_setAccount', 'UA-30210968-1']);
+  _gaq.push(['_trackPageview']);
+
+  (function() {
+    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
+    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
+    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
+  })();
+
+</script>
   </head>
   <body class="composite">
     <div id="banner">
@@ -521,7 +535,10 @@
       <hr/>
     </div>
     <div id="footer">
-      <div class="xright"> #publishDate( "right" $decoration.publishDate $decoration.version )&nbsp;| Copyright &#169;#copyright()All Rights Reserved.  </div>
+       <div class="xright">#publishDate( "right" $decoration.publishDate $decoration.version )&nbsp;
+        </div>
+        <div class="xright">Copyright &#169;<a href="http://www.apache.org">#copyright()</a>All Rights Reserved.  Apache Hadoop, Hadoop, HDFS, HBase and the HBase project logo are trademarks of the Apache Software Foundation.
+        </div>
       <div class="clear">
         <hr/>
       </div>

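[Editor's note: the ga.js loader added to site.vm above picks its script URL from the page protocol. The selection logic can be isolated as a tiny function for illustration; `gaSrc` is a hypothetical helper name, not part of the template.]

```javascript
// Hypothetical helper isolating the scheme-selection expression from the
// Google Analytics loader in site.vm: load ga.js from the SSL host when
// the enclosing page itself was served over https, else the plain host.
function gaSrc(protocol) {
  return (protocol === 'https:' ? 'https://ssl' : 'http://www') +
         '.google-analytics.com/ga.js';
}

console.log(gaSrc('https:')); // https://ssl.google-analytics.com/ga.js
console.log(gaSrc('http:'));  // http://www.google-analytics.com/ga.js
```

The async-insert pattern in the snippet (creating a `script` element and inserting it before the first existing one) keeps page rendering from blocking on the analytics download.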
Modified: hbase/branches/0.94/src/site/site.xml
URL: http://svn.apache.org/viewvc/hbase/branches/0.94/src/site/site.xml?rev=1455996&r1=1455995&r2=1455996&view=diff
==============================================================================
--- hbase/branches/0.94/src/site/site.xml (original)
+++ hbase/branches/0.94/src/site/site.xml Wed Mar 13 15:20:19 2013
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="ISO-8859-1"?>
+<?xml version="1.0" encoding="UTF-8"?>
 <!--
 /**
  * Licensed to the Apache Software Foundation (ASF) under one
@@ -24,7 +24,7 @@
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://maven.apache.org/DECORATION/1.0.0 http://maven.apache.org/xsd/decoration-1.0.0.xsd">
   <bannerLeft>
-    <name>HBase</name>
+    <name>Apache HBase</name>
     <src>images/hbase_logo.png</src>
     <href>http://hbase.apache.org/</href>
   </bannerLeft>
@@ -32,7 +32,7 @@
   <version position="right" />
   <publishDate position="right" />
   <body>
-    <menu name="HBase Project">
+    <menu name="Apache HBase Project">
       <item name="Overview" href="index.html"/>
       <item name="License" href="license.html" />
       <item name="Downloads" href="http://www.apache.org/dyn/closer.cgi/hbase/" />
@@ -40,8 +40,11 @@
       <item name="Issue Tracking" href="issue-tracking.html" />
       <item name="Mailing Lists" href="mail-lists.html" />
       <item name="Source Repository" href="source-repository.html" />
+      <item name="ReviewBoard" href="https://reviews.apache.org"/>
       <item name="Team" href="team-list.html" />
-      <item name="Sponsors" href="sponsors.html" />
+      <item name="Thanks" href="sponsors.html" />
+      <item name="Blog" href="http://blogs.apache.org/hbase/" />
+      <item name="Other resources" href="resources.html" />
     </menu>
     <menu name="Documentation">
         <item name="Getting Started" href="book/quickstart.html" />
@@ -49,15 +52,20 @@
       <item name="X-Ref" href="xref/index.html" />
       <item name="Ref Guide (multi-page)"      href="book/book.html" />
       <item name="Ref Guide (single-page)"      href="book.html" />
+      <item name="中文参考指南(单页)" href="http://abloz.com/hbase/book.html" />
       <item name="FAQ" href="book/faq.html" />
       <item name="Videos/Presentations" href="book.html#other.info" />
       <item name="Wiki" href="http://wiki.apache.org/hadoop/Hbase" />
       <item name="ACID Semantics" href="acid-semantics.html" />
-      <item name="Bulk Loads" href="bulk-loads.html" />
+      <item name="Bulk Loads" href="book.html#arch.bulk.load" />
       <item name="Metrics"      href="metrics.html" />
       <item name="HBase on Windows"      href="cygwin.html" />
       <item name="Cluster replication"      href="replication.html" />
-      <item name="Pseudo-Dist. Extras"      href="pseudo-distributed.html" />
+    </menu>
+    <menu name="ASF">
+        <item name="Apache Software Foundation"      href="http://www.apache.org/foundation/" />
+        <item name="How Apache Works"      href="http://www.apache.org/foundation/how-it-works.html" />
+        <item name="Sponsoring Apache"      href="http://www.apache.org/foundation/sponsorship.html" />
     </menu>
   </body>
     <skin>

Modified: hbase/branches/0.94/src/site/xdoc/acid-semantics.xml
URL: http://svn.apache.org/viewvc/hbase/branches/0.94/src/site/xdoc/acid-semantics.xml?rev=1455996&r1=1455995&r2=1455996&view=diff
==============================================================================
--- hbase/branches/0.94/src/site/xdoc/acid-semantics.xml (original)
+++ hbase/branches/0.94/src/site/xdoc/acid-semantics.xml Wed Mar 13 15:20:19 2013
@@ -1,6 +1,5 @@
 <?xml version="1.0" encoding="UTF-8"?>
 <!--
-  Copyright 2010 The Apache Software Foundation
 
   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
@@ -23,13 +22,13 @@
   xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
   <properties>
     <title> 
-      HBase ACID Properties
+      Apache HBase (TM) ACID Properties
     </title>
   </properties>
 
   <body>
     <section name="About this Document">
-      <p>HBase is not an ACID compliant database. However, it does guarantee certain specific
+      <p>Apache HBase (TM) is not an ACID compliant database. However, it does guarantee certain specific
       properties.</p>
       <p>This specification enumerates the ACID properties of HBase.</p>
     </section>
@@ -197,7 +196,7 @@
         </ol>
       </section>
       <section name="Tunability">
-        <p>All of the above guarantees must be possible within HBase. For users who would like to trade
+        <p>All of the above guarantees must be possible within Apache HBase. For users who would like to trade
         off some guarantees for performance, HBase may offer several tuning options. For example:</p>
         <ul>
           <li>Visibility may be tuned on a per-read basis to allow stale reads or time travel.</li>
@@ -207,7 +206,7 @@
     </section>
     <section name="More Information">
       <p>
-      For more information, see the <a href="book.html#client">client architecture</a> or <a href="book.html#datamodel">data model</a> sections in the HBase book. 
+      For more information, see the <a href="book.html#client">client architecture</a> or <a href="book.html#datamodel">data model</a> sections in the Apache HBase Reference Guide. 
       </p>
     </section>
     
@@ -218,7 +217,7 @@
           (See <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setBatch(int)">Scan#setBatch(int)</a>).
       </p>
 
-      <p>[2] In the context of HBase, &quot;durably on disk&quot; implies an hflush() call on the transaction
+      <p>[2] In the context of Apache HBase, &quot;durably on disk&quot; implies an hflush() call on the transaction
       log. This does not actually imply an fsync() to magnetic media, but rather just that the data has been
       written to the OS cache on all replicas of the log. In the case of a full datacenter power loss, it is
       possible that the edits are not truly durable.</p>


