hbase-commits mailing list archives

From mi...@apache.org
Subject [4/8] hbase git commit: HBASE-12902 Post-asciidoc conversion fix-ups
Date Fri, 23 Jan 2015 03:15:37 GMT
http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/asciidoc/_chapters/getting_started.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc b/src/main/asciidoc/_chapters/getting_started.adoc
index c67e959..9e0b5a1 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -31,7 +31,7 @@
 <<quickstart,quickstart>> will get you up and running on a single-node, standalone instance of HBase, followed by a pseudo-distributed single-machine instance, and finally a fully-distributed cluster. 
 
 [[quickstart]]
-== Quick Start - Standalone HBase
+== Quick Start
 
 This guide describes setup of a standalone HBase instance running against the local filesystem.
 This is not an appropriate configuration for a production instance of HBase, but will allow you to experiment with HBase.
@@ -56,7 +56,7 @@ Prior to HBase 0.94.x, HBase expected the loopback IP address to be 127.0.0.1. U
 
 .Example /etc/hosts File for Ubuntu
 ====
-The following [path]_/etc/hosts_ file works correctly for HBase 0.94.x and earlier, on Ubuntu. Use this as a template if you run into trouble. 
+The following _/etc/hosts_ file works correctly for HBase 0.94.x and earlier, on Ubuntu. Use this as a template if you run into trouble. 
 [listing]
 ----
 127.0.0.1 localhost
@@ -78,10 +78,10 @@ See <<java,java>> for information about supported JDK versions.
   Click on the suggested top link.
   This will take you to a mirror of _HBase
   Releases_.
-  Click on the folder named [path]_stable_ and then download the binary file that ends in [path]_.tar.gz_ to your local filesystem.
+  Click on the folder named _stable_ and then download the binary file that ends in _.tar.gz_ to your local filesystem.
   Be sure to choose the version that corresponds with the version of Hadoop you are likely to use later.
-  In most cases, you should choose the file for Hadoop 2, which will be called something like [path]_hbase-0.98.3-hadoop2-bin.tar.gz_.
-  Do not download the file ending in [path]_src.tar.gz_ for now.
+  In most cases, you should choose the file for Hadoop 2, which will be called something like _hbase-0.98.3-hadoop2-bin.tar.gz_.
+  Do not download the file ending in _src.tar.gz_ for now.
 . Extract the downloaded file, and change to the newly-created directory.
 +
 ----
@@ -90,29 +90,29 @@ $ tar xzvf hbase-<?eval ${project.version}?>-hadoop2-bin.tar.gz
 $ cd hbase-<?eval ${project.version}?>-hadoop2/
 ----
 
-. For HBase 0.98.5 and later, you are required to set the [var]+JAVA_HOME+            environment variable before starting HBase.
+. For HBase 0.98.5 and later, you are required to set the `JAVA_HOME`            environment variable before starting HBase.
  Prior to 0.98.5, HBase attempted to detect the location of Java if the variable was not set.
-  You can set the variable via your operating system's usual mechanism, but HBase provides a central mechanism, [path]_conf/hbase-env.sh_.
-  Edit this file, uncomment the line starting with [literal]+JAVA_HOME+, and set it to the appropriate location for your operating system.
-  The [var]+JAVA_HOME+ variable should be set to a directory which contains the executable file [path]_bin/java_.
+  You can set the variable via your operating system's usual mechanism, but HBase provides a central mechanism, _conf/hbase-env.sh_.
+  Edit this file, uncomment the line starting with `JAVA_HOME`, and set it to the appropriate location for your operating system.
+  The `JAVA_HOME` variable should be set to a directory which contains the executable file _bin/java_.
   Most modern Linux operating systems provide a mechanism, such as /usr/bin/alternatives on RHEL or CentOS, for transparently switching between versions of executables such as Java.
-  In this case, you can set [var]+JAVA_HOME+ to the directory containing the symbolic link to [path]_bin/java_, which is usually [path]_/usr_.
+  In this case, you can set `JAVA_HOME` to the directory containing the symbolic link to _bin/java_, which is usually _/usr_.
 +
 ----
 JAVA_HOME=/usr
 ----
 +
 NOTE: These instructions assume that each node of your cluster uses the same configuration.
-If this is not the case, you may need to set [var]+JAVA_HOME+              separately for each node.
+If this is not the case, you may need to set `JAVA_HOME`              separately for each node.
 
-. Edit [path]_conf/hbase-site.xml_, which is the main HBase configuration file.
+. Edit _conf/hbase-site.xml_, which is the main HBase configuration file.
   At this time, you only need to specify the directory on the local filesystem where HBase and Zookeeper write data.
   By default, a new directory is created under /tmp.
   Many servers are configured to delete the contents of /tmp upon reboot, so you should store the data elsewhere.
-  The following configuration will store HBase's data in the [path]_hbase_ directory, in the home directory of the user called [systemitem]+testuser+.
+  The following configuration will store HBase's data in the _hbase_ directory, in the home directory of the user called [systemitem]+testuser+.
   Paste the [markup]+<property>+ tags beneath the [markup]+<configuration>+ tags, which should be empty in a new HBase install.
 +
-.Example [path]_hbase-site.xml_ for Standalone HBase
+.Example _hbase-site.xml_ for Standalone HBase
 ====
 [source,xml]
 ----
@@ -134,22 +134,22 @@ You do not need to create the HBase data directory.
 HBase will do this for you.
 If you create the directory, HBase will attempt to do a migration, which is not what you want.
 
-. The [path]_bin/start-hbase.sh_ script is provided as a convenient way to start HBase.
+. The _bin/start-hbase.sh_ script is provided as a convenient way to start HBase.
   Issue the command, and if all goes well, a message is logged to standard output showing that HBase started successfully.
-  You can use the +jps+            command to verify that you have one running process called [literal]+HMaster+.
+  You can use the +jps+            command to verify that you have one running process called `HMaster`.
   In standalone mode HBase runs all daemons within this single JVM, i.e.
   the HMaster, a single HRegionServer, and the ZooKeeper daemon.
 +
 NOTE: Java needs to be installed and available.
-If you get an error indicating that Java is not installed, but it is on your system, perhaps in a non-standard location, edit the [path]_conf/hbase-env.sh_ file and modify the [var]+JAVA_HOME+ setting to point to the directory that contains [path]_bin/java_ your system.
+If you get an error indicating that Java is not installed, but it is on your system, perhaps in a non-standard location, edit the _conf/hbase-env.sh_ file and modify the `JAVA_HOME` setting to point to the directory that contains _bin/java_ on your system.
 
 
 .Procedure: Use HBase For the First Time
 . Connect to HBase.
 +
-Connect to your running instance of HBase using the +hbase shell+            command, located in the [path]_bin/_ directory of your HBase install.
+Connect to your running instance of HBase using the +hbase shell+            command, located in the _bin/_ directory of your HBase install.
 In this example, some usage and version information that is printed when you start HBase Shell has been omitted.
-The HBase Shell prompt ends with a [literal]+>+ character.
+The HBase Shell prompt ends with a `>` character.
 +
 ----
 
@@ -159,12 +159,12 @@ hbase(main):001:0>
 
 . Display HBase Shell Help Text.
 +
-Type [literal]+help+ and press Enter, to display some basic usage information for HBase Shell, as well as several example commands.
+Type `help` and press Enter, to display some basic usage information for HBase Shell, as well as several example commands.
 Notice that table names, rows, columns all must be enclosed in quote characters.
 
 . Create a table.
 +
-Use the [code]+create+ command to create a new table.
+Use the `create` command to create a new table.
 You must specify the table name and the ColumnFamily name.
 +
 ----
@@ -175,7 +175,7 @@ hbase> create 'test', 'cf'
 
 . List Information About your Table
 +
-Use the [code]+list+ command to 
+Use the `list` command to list information about your table.
 +
 ----
 
@@ -189,7 +189,7 @@ test
 
 . Put data into your table.
 +
-To put data into your table, use the [code]+put+ command.
+To put data into your table, use the `put` command.
 +
 ----
 
@@ -204,8 +204,8 @@ hbase> put 'test', 'row3', 'cf:c', 'value3'
 ----
 +
 Here, we insert three values, one at a time.
-The first insert is at [literal]+row1+, column [literal]+cf:a+, with a value of [literal]+value1+.
-Columns in HBase are comprised of a column family prefix, [literal]+cf+ in this example, followed by a colon and then a column qualifier suffix, [literal]+a+ in this case.
+The first insert is at `row1`, column `cf:a`, with a value of `value1`.
+Columns in HBase are comprised of a column family prefix, `cf` in this example, followed by a colon and then a column qualifier suffix, `a` in this case.
 
 . Scan the table for all data at once.
 +
@@ -237,8 +237,8 @@ COLUMN                CELL
 
 . Disable a table.
 +
-If you want to delete a table or change its settings, as well as in some other situations, you need to disable the table first, using the [code]+disable+            command.
-You can re-enable it using the [code]+enable+ command.
+If you want to delete a table or change its settings, as well as in some other situations, you need to disable the table first, using the `disable`            command.
+You can re-enable it using the `enable` command.
 +
 ----
 
@@ -259,7 +259,7 @@ hbase> disable 'test'
 
 . Drop the table.
 +
-To drop (delete) a table, use the [code]+drop+ command.
+To drop (delete) a table, use the `drop` command.
 +
 ----
 
@@ -274,7 +274,7 @@ HBase is still running in the background.
 
 
 .Procedure: Stop HBase
-. In the same way that the [path]_bin/start-hbase.sh_ script is provided to conveniently start all HBase daemons, the [path]_bin/stop-hbase.sh_            script stops them.
+. In the same way that the _bin/start-hbase.sh_ script is provided to conveniently start all HBase daemons, the _bin/stop-hbase.sh_            script stops them.
 +
 ----
 
@@ -291,7 +291,7 @@ $
 
 After working your way through <<quickstart,quickstart>>, you can re-configure HBase to run in pseudo-distributed mode.
 Pseudo-distributed mode means that HBase still runs completely on a single host, but each HBase daemon (HMaster, HRegionServer, and Zookeeper) runs as a separate process.
-By default, unless you configure the [code]+hbase.rootdir+ property as described in <<quickstart,quickstart>>, your data is still stored in [path]_/tmp/_.
+By default, unless you configure the `hbase.rootdir` property as described in <<quickstart,quickstart>>, your data is still stored in _/tmp/_.
 In this walk-through, we store your data in HDFS instead, assuming you have HDFS available.
 You can skip the HDFS configuration to continue storing your data in the local filesystem.
 
@@ -311,7 +311,7 @@ This procedure will create a totally new directory where HBase will store its da
 
 . Configure HBase.
 +
-Edit the [path]_hbase-site.xml_ configuration.
+Edit the _hbase-site.xml_ configuration.
 First, add the following property, which directs HBase to run in distributed mode, with one JVM instance per daemon.
 +
@@ -324,7 +324,7 @@ which directs HBase to run in distributed mode, with one JVM instance per daemon
 </property>
 ----
 +
-Next, change the [code]+hbase.rootdir+ from the local filesystem to the address of your HDFS instance, using the [code]+hdfs:////+ URI syntax.
+Next, change the `hbase.rootdir` from the local filesystem to the address of your HDFS instance, using the `hdfs://` URI syntax.
 In this example, HDFS is running on the localhost at port 8020.
 +
 [source,xml]
@@ -342,14 +342,14 @@ If you create the directory, HBase will attempt to do a migration, which is not
 
 . Start HBase.
 +
-Use the [path]_bin/start-hbase.sh_ command to start HBase.
+Use the _bin/start-hbase.sh_ command to start HBase.
 If your system is configured correctly, the +jps+ command should show the HMaster and HRegionServer processes running.
 
 . Check the HBase directory in HDFS.
 +
 If everything worked correctly, HBase created its directory in HDFS.
-In the configuration above, it is stored in [path]_/hbase/_ on HDFS.
-You can use the +hadoop fs+ command in Hadoop's [path]_bin/_ directory to list this directory.
+In the configuration above, it is stored in _/hbase/_ on HDFS.
+You can use the +hadoop fs+ command in Hadoop's _bin/_ directory to list this directory.
 +
 ----
 
@@ -385,7 +385,7 @@ The following command starts 3 backup servers using ports 16012/16022/16032, 160
 $ ./bin/local-master-backup.sh 2 3 5
 ----
 +
-To kill a backup master without killing the entire cluster, you need to find its process ID (PID). The PID is stored in a file with a name like [path]_/tmp/hbase-USER-X-master.pid_.
+To kill a backup master without killing the entire cluster, you need to find its process ID (PID). The PID is stored in a file with a name like _/tmp/hbase-USER-X-master.pid_.
 The only contents of the file are the PID.
 You can use the +kill -9+            command to kill that PID.
 The following command will kill the master with port offset 1, but leave the cluster running:
@@ -413,7 +413,7 @@ The following command starts four additional RegionServers, running on sequentia
 $ ./bin/local-regionservers.sh start 2 3 4 5
 ----
 +
-To stop a RegionServer manually, use the +local-regionservers.sh+            command with the [literal]+stop+ parameter and the offset of the server to stop.
+To stop a RegionServer manually, use the +local-regionservers.sh+            command with the `stop` parameter and the offset of the server to stop.
 +
 ----
 $ ./bin/local-regionservers.sh stop 3
@@ -421,7 +421,7 @@ $ .bin/local-regionservers.sh stop 3
 
 . Stop HBase.
 +
-You can stop HBase the same way as in the <<quickstart,quickstart>> procedure, using the [path]_bin/stop-hbase.sh_ command.
+You can stop HBase the same way as in the <<quickstart,quickstart>> procedure, using the _bin/stop-hbase.sh_ command.
 
 
 [[quickstart_fully_distributed]]
@@ -437,27 +437,25 @@ The architecture will be as follows:
 .Distributed Cluster Demo Architecture
 [cols="1,1,1,1", options="header"]
 |===
-| Node Name
-| Master
-| ZooKeeper
-| RegionServer
-
-
+| Node Name          | Master | ZooKeeper | RegionServer
+| node-a.example.com | yes    | yes       | no
+| node-b.example.com | backup | yes       | yes
+| node-c.example.com | no     | yes       | yes
 |===
 
 This quickstart assumes that each node is a virtual machine and that they are all on the same network.
-It builds upon the previous quickstart, <<quickstart_pseudo,quickstart-pseudo>>, assuming that the system you configured in that procedure is now [code]+node-a+.
-Stop HBase on [code]+node-a+        before continuing.
+It builds upon the previous quickstart, <<quickstart_pseudo,quickstart-pseudo>>, assuming that the system you configured in that procedure is now `node-a`.
+Stop HBase on `node-a`        before continuing.
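 
 If HBase is still running from the previous procedure, one way to stop it is the same _bin/stop-hbase.sh_ script described earlier, for example:
 
 ----
 $ ./bin/stop-hbase.sh
 ----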
 
 NOTE: Be sure that all the nodes have full access to communicate, and that no firewall rules are in place which could prevent them from talking to each other.
-If you see any errors like [literal]+no route to host+, check your firewall.
+If you see any errors like `no route to host`, check your firewall.
 
 .Procedure: Configure Password-Less SSH Access
 
-[code]+node-a+ needs to be able to log into [code]+node-b+ and [code]+node-c+ (and to itself) in order to start the daemons.
-The easiest way to accomplish this is to use the same username on all hosts, and configure password-less SSH login from [code]+node-a+ to each of the others. 
+`node-a` needs to be able to log into `node-b` and `node-c` (and to itself) in order to start the daemons.
+The easiest way to accomplish this is to use the same username on all hosts, and configure password-less SSH login from `node-a` to each of the others. 
 
-. On [code]+node-a+, generate a key pair.
+. On `node-a`, generate a key pair.
 +
 While logged in as the user who will run HBase, generate an SSH key pair, using the following command:
 +
@@ -467,19 +465,19 @@ $ ssh-keygen -t rsa
 ----
 +
 If the command succeeds, the location of the key pair is printed to standard output.
-The default name of the public key is [path]_id_rsa.pub_.
+The default name of the public key is _id_rsa.pub_.
 
 . Create the directory that will hold the shared keys on the other nodes.
 +
-On [code]+node-b+ and [code]+node-c+, log in as the HBase user and create a [path]_.ssh/_ directory in the user's home directory, if it does not already exist.
+On `node-b` and `node-c`, log in as the HBase user and create a _.ssh/_ directory in the user's home directory, if it does not already exist.
 If it already exists, be aware that it may already contain other keys.
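 +
 For example, a minimal way to create the directory with the permissions SSH typically expects:
 +
 ----
 $ mkdir -p ~/.ssh
 $ chmod 700 ~/.ssh
 ----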
 
 . Copy the public key to the other nodes.
 +
-Securely copy the public key from [code]+node-a+ to each of the nodes, by using the +scp+ or some other secure means.
-On each of the other nodes, create a new file called [path]_.ssh/authorized_keys_ _if it does
-              not already exist_, and append the contents of the [path]_id_rsa.pub_ file to the end of it.
-Note that you also need to do this for [code]+node-a+ itself.
+Securely copy the public key from `node-a` to each of the nodes, using +scp+ or some other secure means.
+On each of the other nodes, create a new file called _.ssh/authorized_keys_ _if it does not already exist_, and append the contents of the _id_rsa.pub_ file to the end of it.
+Note that you also need to do this for `node-a` itself.
 +
 ----
 $ cat id_rsa.pub >> ~/.ssh/authorized_keys
@@ -487,27 +485,27 @@ $ cat id_rsa.pub >> ~/.ssh/authorized_keys
 
 . Test password-less login.
 +
-If you performed the procedure correctly, if you SSH from [code]+node-a+ to either of the other nodes, using the same username, you should not be prompted for a password. 
+If you performed the procedure correctly, when you SSH from `node-a` to either of the other nodes using the same username, you should not be prompted for a password.
 
-. Since [code]+node-b+ will run a backup Master, repeat the procedure above, substituting [code]+node-b+ everywhere you see [code]+node-a+.
-  Be sure not to overwrite your existing [path]_.ssh/authorized_keys_ files, but concatenate the new key onto the existing file using the [code]+>>+ operator rather than the [code]+>+ operator.
+. Since `node-b` will run a backup Master, repeat the procedure above, substituting `node-b` everywhere you see `node-a`.
+  Be sure not to overwrite your existing _.ssh/authorized_keys_ files, but concatenate the new key onto the existing file using the `>>` operator rather than the `>` operator.
 
-.Procedure: Prepare [code]+node-a+
+.Procedure: Prepare `node-a`
 
 `node-a` will run your primary master and ZooKeeper processes, but no RegionServers.
-. Stop the RegionServer from starting on [code]+node-a+.
+. Stop the RegionServer from starting on `node-a`.
 
-. Edit [path]_conf/regionservers_ and remove the line which contains [literal]+localhost+. Add lines with the hostnames or IP addresses for [code]+node-b+ and [code]+node-c+.
+. Edit _conf/regionservers_ and remove the line which contains `localhost`. Add lines with the hostnames or IP addresses for `node-b` and `node-c`.
 +
-Even if you did want to run a RegionServer on [code]+node-a+, you should refer to it by the hostname the other servers would use to communicate with it.
-In this case, that would be [literal]+node-a.example.com+.
+Even if you did want to run a RegionServer on `node-a`, you should refer to it by the hostname the other servers would use to communicate with it.
+In this case, that would be `node-a.example.com`.
 This enables you to distribute the configuration to each node of your cluster without any hostname conflicts.
 Save the file.
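 +
 As an illustration, the _conf/regionservers_ file for this demonstration would then contain only the two RegionServer hosts:
 +
 ----
 $ cat conf/regionservers
 node-b.example.com
 node-c.example.com
 ----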
 
-. Configure HBase to use [code]+node-b+ as a backup master.
+. Configure HBase to use `node-b` as a backup master.
 +
-Create a new file in [path]_conf/_ called [path]_backup-masters_, and add a new line to it with the hostname for [code]+node-b+.
-In this demonstration, the hostname is [literal]+node-b.example.com+.
+Create a new file in _conf/_ called _backup-masters_, and add a new line to it with the hostname for `node-b`.
+In this demonstration, the hostname is `node-b.example.com`.
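 +
 For example, you could create the file from the shell:
 +
 ----
 $ echo "node-b.example.com" > conf/backup-masters
 ----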
 
 . Configure ZooKeeper
 +
@@ -515,7 +513,7 @@ In reality, you should carefully consider your ZooKeeper configuration.
 You can find out more about configuring ZooKeeper in <<zookeeper,zookeeper>>.
 This configuration will direct HBase to start and manage a ZooKeeper instance on each node of the cluster.
 +
-On [code]+node-a+, edit [path]_conf/hbase-site.xml_ and add the following properties.
+On `node-a`, edit _conf/hbase-site.xml_ and add the following properties.
 +
 [source,xml]
 ----
@@ -529,22 +527,22 @@ On [code]+node-a+, edit [path]_conf/hbase-site.xml_ and add the following proper
 </property>
 ----
 
-. Everywhere in your configuration that you have referred to [code]+node-a+ as [literal]+localhost+, change the reference to point to the hostname that the other nodes will use to refer to [code]+node-a+.
-  In these examples, the hostname is [literal]+node-a.example.com+.
+. Everywhere in your configuration that you have referred to `node-a` as `localhost`, change the reference to point to the hostname that the other nodes will use to refer to `node-a`.
+  In these examples, the hostname is `node-a.example.com`.
 
-.Procedure: Prepare [code]+node-b+ and [code]+node-c+
+.Procedure: Prepare `node-b` and `node-c`
 
-[code]+node-b+ will run a backup master server and a ZooKeeper instance.
+`node-b` will run a backup master server and a ZooKeeper instance.
 
 . Download and unpack HBase.
 +
-Download and unpack HBase to [code]+node-b+, just as you did for the standalone and pseudo-distributed quickstarts.
+Download and unpack HBase to `node-b`, just as you did for the standalone and pseudo-distributed quickstarts.
 
-. Copy the configuration files from [code]+node-a+ to [code]+node-b+.and
-  [code]+node-c+.
+. Copy the configuration files from `node-a` to `node-b` and `node-c`.
 +
 Each node of your cluster needs to have the same configuration information.
-Copy the contents of the [path]_conf/_ directory to the [path]_conf/_            directory on [code]+node-b+ and [code]+node-c+.
+Copy the contents of the _conf/_ directory to the _conf/_            directory on `node-b` and `node-c`.
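 +
 A sketch of one way to do this with +scp+, assuming HBase is unpacked at _~/hbase_ on every node (adjust the paths to your actual install location):
 +
 ----
 $ scp conf/* node-b.example.com:~/hbase/conf/
 $ scp conf/* node-c.example.com:~/hbase/conf/
 ----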
 
 
 .Procedure: Start and Test Your Cluster
@@ -552,12 +550,12 @@ Copy the contents of the [path]_conf/_ directory to the [path]_conf/_
 +
 If you forgot to stop HBase from previous testing, you will have errors.
 Check to see whether HBase is running on any of your nodes by using the +jps+            command.
-Look for the processes [literal]+HMaster+, [literal]+HRegionServer+, and [literal]+HQuorumPeer+.
+Look for the processes `HMaster`, `HRegionServer`, and `HQuorumPeer`.
 If they exist, kill them.
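 +
 For example, a `jps` listing similar to the following (the process IDs are illustrative and will differ) would indicate leftover daemons to kill before continuing:
 +
 ----
 $ jps
 20355 HQuorumPeer
 20071 HMaster
 20137 HRegionServer
 $ kill 20355 20071 20137
 ----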
 
 . Start the cluster.
 +
-On [code]+node-a+, issue the +start-hbase.sh+ command.
+On `node-a`, issue the +start-hbase.sh+ command.
 Your output will be similar to that below.
 +
 ----
@@ -614,9 +612,9 @@ $ jps
 .ZooKeeper Process Name
 [NOTE]
 ====
-The [code]+HQuorumPeer+ process is a ZooKeeper instance which is controlled and started by HBase.
+The `HQuorumPeer` process is a ZooKeeper instance which is controlled and started by HBase.
 If you use ZooKeeper this way, it is limited to one instance per cluster node, and is appropriate for testing only.
-If ZooKeeper is run outside of HBase, the process is called [code]+QuorumPeer+.
+If ZooKeeper is run outside of HBase, the process is called `QuorumPeer`.
 For more about ZooKeeper configuration, including using an external ZooKeeper instance with HBase, see <<zookeeper,zookeeper>>.
 ====
 
@@ -628,9 +626,9 @@ NOTE: Web UI Port Changes
 In HBase newer than 0.98.x, the HTTP ports used by the HBase Web UI changed from 60010 for the Master and 60030 for each RegionServer to 16010 for the Master and 16030 for the RegionServer.
 
 +
-If everything is set up correctly, you should be able to connect to the UI for the Master [literal]+http://node-a.example.com:60110/+ or the secondary master at [literal]+http://node-b.example.com:60110/+ for the secondary master, using a web browser.
-If you can connect via [code]+localhost+ but not from another host, check your firewall rules.
-You can see the web UI for each of the RegionServers at port 60130 of their IP addresses, or by clicking their links in the web UI for the Master.
+If everything is set up correctly, you should be able to connect to the web UI for the Master at `http://node-a.example.com:16010/` or for the secondary master at `http://node-b.example.com:16010/`, using a web browser.
+If you can connect via `localhost` but not from another host, check your firewall rules.
+You can see the web UI for each of the RegionServers at port 16030 of their IP addresses, or by clicking their links in the web UI for the Master.
 
 . Test what happens when nodes or services disappear.
 +

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/asciidoc/_chapters/hbck_in_depth.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbck_in_depth.adoc b/src/main/asciidoc/_chapters/hbck_in_depth.adoc
index b0aace7..1b30c59 100644
--- a/src/main/asciidoc/_chapters/hbck_in_depth.adoc
+++ b/src/main/asciidoc/_chapters/hbck_in_depth.adoc
@@ -45,7 +45,7 @@ At the end of the commands output it prints OK or tells you the number of INCONS
 You may also want to run hbck a few times because some inconsistencies can be transient (e.g.
 the cluster is starting up or a region is splitting). Operationally you may want to run hbck regularly and set up an alert (e.g.
 via nagios) if it repeatedly reports inconsistencies. A run of hbck will report a list of inconsistencies along with a brief description of the regions and tables affected.
-The using the [code]+-details+ option will report more details including a representative listing of all the splits present in all the tables. 
+Using the `-details` option will report more details, including a representative listing of all the splits present in all the tables.
 
 [source,bourne]
 ----
@@ -76,7 +76,7 @@ There are two invariants that when violated create inconsistencies in HBase:
 Repairs generally work in three phases -- a read-only information gathering phase that identifies inconsistencies, a table integrity repair phase that restores the table integrity invariant, and then finally a region consistency repair phase that restores the region consistency invariant.
 Starting from version 0.90.0, hbck could detect region consistency problems and report on a subset of possible table integrity problems.
 It also included the ability to automatically fix the most common inconsistencies: region assignment and deployment consistency problems.
-This repair could be done by using the [code]+-fix+ command line option.
+This repair could be done by using the `-fix` command line option.
 These fixes close regions if they are open on the wrong server or on multiple region servers, and also assign regions to region servers if they are not open.
 
 Starting from HBase versions 0.90.7, 0.92.2 and 0.94.0, several new command line options are introduced to aid repairing a corrupted HBase.
@@ -89,8 +89,8 @@ These are generally region consistency repairs -- localized single region repair
 Region consistency requires that the HBase instance has the state of the region's data in HDFS (.regioninfo files), the region's row in the hbase:meta table, and the region's deployment/assignments on region servers and the master all in accordance.
 Options for repairing region consistency include: 
 
-* [code]+-fixAssignments+ (equivalent to the 0.90 [code]+-fix+ option) repairs unassigned, incorrectly assigned or multiply assigned regions.
-* [code]+-fixMeta+ which removes meta rows when corresponding regions are not present in HDFS and adds new meta rows if they regions are present in HDFS while not in META.                To fix deployment and assignment problems you can run this command: 
+* `-fixAssignments` (equivalent to the 0.90 `-fix` option) repairs unassigned, incorrectly assigned or multiply assigned regions.
+* `-fixMeta` which removes meta rows when corresponding regions are not present in HDFS and adds new meta rows if the regions are present in HDFS while not in META. To fix deployment and assignment problems you can run this command:
 
 [source,bourne]
 ----
@@ -110,7 +110,7 @@ There are a few classes of table integrity problems that are low risk repairs.
 The first two are degenerate (startkey == endkey) regions and backwards regions (startkey > endkey). These are automatically handled by sidelining the data to a temporary directory (/hbck/xxxx). The third low-risk class is hdfs region holes.
 This can be repaired by using the:
 
-* [code]+-fixHdfsHoles+ option for fabricating new empty regions on the file system.
+* `-fixHdfsHoles` option for fabricating new empty regions on the file system.
   If holes are detected you can use -fixHdfsHoles and should include -fixMeta and -fixAssignments to make the new region consistent.
 
 [source,bourne]
@@ -119,7 +119,7 @@ This can be repaired by using the:
 $ ./bin/hbase hbck -fixAssignments -fixMeta -fixHdfsHoles
 ----
 
-Since this is a common operation, we've added a the [code]+-repairHoles+ flag that is equivalent to the previous command:
+Since this is a common operation, we've added the `-repairHoles` flag that is equivalent to the previous command:
 
 [source,bourne]
 ----
@@ -133,12 +133,12 @@ If inconsistencies still remain after these steps, you most likely have table in
 
 Table integrity problems can require repairs that deal with overlaps.
 This is a riskier operation because it requires modifications to the file system, requires some decision making, and may require some manual steps.
-For these repairs it is best to analyze the output of a [code]+hbck -details+                run so that you isolate repairs attempts only upon problems the checks identify.
+For these repairs it is best to analyze the output of a `hbck -details` run so that you isolate repair attempts only upon problems the checks identify.
 Because this is riskier, there are safeguards that should be used to limit the scope of the repairs.
 WARNING: These options are relatively new and have only been tested on online but idle HBase instances (no reads/writes). Use at your own risk in an active production environment! The options for repairing table integrity violations include:
 
-* [code]+-fixHdfsOrphans+ option for ``adopting'' a region directory that is missing a region metadata file (the .regioninfo file).
-* [code]+-fixHdfsOverlaps+ ability for fixing overlapping regions
+* `-fixHdfsOrphans` option for ``adopting'' a region directory that is missing a region metadata file (the .regioninfo file).
+* `-fixHdfsOverlaps` option for fixing overlapping regions
 
 When repairing overlapping regions, a region's data can be modified on the file system in two ways: 1) by merging regions into a larger region or 2) by sidelining regions by moving data to a ``sideline'' directory where data could be restored later.
 Merging a large number of regions is technically correct but could result in an extremely large region that requires a series of costly compactions and splitting operations.
@@ -147,13 +147,13 @@ Since these sidelined regions are already laid out in HBase's native directory a
 The default safeguard thresholds are conservative.
 These options let you override the default thresholds and enable the large region sidelining feature.
 
-* [code]+-maxMerge <n>+ maximum number of overlapping regions to merge
-* [code]+-sidelineBigOverlaps+ if more than maxMerge regions are overlapping, sideline attempt to sideline the regions overlapping with the most other regions.
-* [code]+-maxOverlapsToSideline <n>+ if sidelining large overlapping regions, sideline at most n regions.
+* `-maxMerge <n>` maximum number of overlapping regions to merge
+* `-sidelineBigOverlaps` if more than maxMerge regions are overlapping, attempt to sideline the regions overlapping with the most other regions.
+* `-maxOverlapsToSideline <n>` if sidelining large overlapping regions, sideline at most n regions.
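 
 For example, a sketch of an invocation that repairs overlaps while applying these safeguards (the numeric thresholds here are arbitrary placeholders, not recommended values):
 
 [source,bourne]
 ----
 $ ./bin/hbase hbck -fixHdfsOverlaps -maxMerge 30 -sidelineBigOverlaps -maxOverlapsToSideline 5
 ----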
 
 Since oftentimes you would just want to get the tables repaired, you can use this option to turn on all repair options:
 
-* [code]+-repair+ includes all the region consistency options and only the hole repairing table integrity options.
+* `-repair` includes all the region consistency options and only the hole repairing table integrity options.
 
 Finally, there are safeguards to limit repairs to only specific tables.
 For example, the following command would only attempt to check and repair tables TableFoo and TableBar.
@@ -167,7 +167,7 @@ $ ./bin/hbase hbck -repair TableFoo TableBar
 
 There are a few special cases that hbck can handle as well.
 Sometimes the meta table's only region is inconsistently assigned or deployed.
-In this case there is a special [code]+-fixMetaOnly+ option that can try to fix meta assignments.
+In this case there is a special `-fixMetaOnly` option that can try to fix meta assignments.
 
 ----
 
@@ -177,7 +177,7 @@ $ ./bin/hbase hbck -fixMetaOnly -fixAssignments
 ==== Special cases: HBase version file is missing
 
 HBase's data on the file system requires a version file in order to start.
-If this flie is missing, you can use the [code]+-fixVersionFile+ option to fabricating a new HBase version file.
+If this file is missing, you can use the `-fixVersionFile` option to fabricate a new HBase version file.
 This assumes that the version of hbck you are running is the appropriate version for the HBase cluster.
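 
 For example, following the invocation pattern used elsewhere in this section:
 
 [source,bourne]
 ----
 $ ./bin/hbase hbck -fixVersionFile
 ----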
 
 ==== Special case: Root and META are corrupt.
@@ -204,9 +204,9 @@ HBase can clean up parents in the right order.
 However, there could be some lingering offline split parents sometimes.
 They are in META, in HDFS, and not deployed.
 But HBase can't clean them up.
-In this case, you can use the [code]+-fixSplitParents+ option to reset them in META to be online and not split.
+In this case, you can use the `-fixSplitParents` option to reset them in META to be online and not split.
 Therefore, hbck can merge them with other regions if the option for fixing overlapping regions is used.
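 
 A sketch of invoking just this option, following the pattern above:
 
 [source,bourne]
 ----
 $ ./bin/hbase hbck -fixSplitParents
 ----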
 
-This option should not normally be used, and it is not in [code]+-fixAll+. 
+This option should not normally be used, and it is not in `-fixAll`. 
 
 :numbered:

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/asciidoc/_chapters/mapreduce.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/mapreduce.adoc b/src/main/asciidoc/_chapters/mapreduce.adoc
index 255ac0a..1228f57 100644
--- a/src/main/asciidoc/_chapters/mapreduce.adoc
+++ b/src/main/asciidoc/_chapters/mapreduce.adoc
@@ -38,37 +38,38 @@ In addition, it discusses other interactions and issues between HBase and MapRed
 .mapred and mapreduce
 [NOTE]
 ====
-There are two mapreduce packages in HBase as in MapReduce itself: [path]_org.apache.hadoop.hbase.mapred_      and [path]_org.apache.hadoop.hbase.mapreduce_.
+There are two mapreduce packages in HBase as in MapReduce itself: _org.apache.hadoop.hbase.mapred_      and _org.apache.hadoop.hbase.mapreduce_.
 The former uses the old-style API and the latter the new style.
 The latter has more facilities, though you can usually find an equivalent in the older package.
 Pick the package that goes with your mapreduce deploy.
-When in doubt or starting over, pick the [path]_org.apache.hadoop.hbase.mapreduce_.
+When in doubt or starting over, pick _org.apache.hadoop.hbase.mapreduce_.
 In the notes below, we refer to o.a.h.h.mapreduce but replace it with o.a.h.h.mapred if that is what you are using.
 ====  
 
 [[hbase.mapreduce.classpath]]
 == HBase, MapReduce, and the CLASSPATH
 
-By default, MapReduce jobs deployed to a MapReduce cluster do not have access to either the HBase configuration under [var]+$HBASE_CONF_DIR+ or the HBase classes.
+By default, MapReduce jobs deployed to a MapReduce cluster do not have access to either the HBase configuration under `$HBASE_CONF_DIR` or the HBase classes.
 
-To give the MapReduce jobs the access they need, you could add [path]_hbase-site.xml_ to the [path]_$HADOOP_HOME/conf/_ directory and add the HBase JARs to the [path]_HADOOP_HOME/conf/_        directory, then copy these changes across your cluster.
-You could add hbase-site.xml to $HADOOP_HOME/conf and add HBase jars to the $HADOOP_HOME/lib.
-You would then need to copy these changes across your cluster or edit [path]_$HADOOP_HOMEconf/hadoop-env.sh_ and add them to the [var]+HADOOP_CLASSPATH+ variable.
+To give the MapReduce jobs the access they need, you could add _hbase-site.xml_ to _$HADOOP_HOME/conf/_ and add the HBase JARs to _$HADOOP_HOME/lib/_.
+You would then need to copy these changes across your cluster, or edit _$HADOOP_HOME/conf/hadoop-env.sh_ and add them to the `HADOOP_CLASSPATH` variable.
 However, this approach is not recommended because it will pollute your Hadoop install with HBase references.
 It also requires you to restart the Hadoop cluster before Hadoop can use the HBase data.
 
 Since HBase 0.90.x, HBase adds its dependency JARs to the job configuration itself.
-The dependencies only need to be available on the local CLASSPATH.
-The following example runs the bundled HBase link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]        MapReduce job against a table named [systemitem]+usertable+ If you have not set the environment variables expected in the command (the parts prefixed by a [literal]+$+ sign and curly braces), you can use the actual system paths instead.
+The dependencies only need to be available on the local `CLASSPATH`.
+The following example runs the bundled HBase link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter] MapReduce job against a table named [systemitem]+usertable+. If you have not set the environment variables expected in the command (the parts prefixed by a `$` sign and curly braces), you can use the actual system paths instead.
 Be sure to use the correct version of the HBase JAR for your system.
-The backticks ([literal]+`+ symbols) cause ths shell to execute the sub-commands, setting the CLASSPATH as part of the command.
+The backticks (+`+ symbols) cause the shell to execute the sub-commands, setting the CLASSPATH as part of the command.
 This example assumes you use a BASH-compatible shell. 
 
+[source,bash]
 ----
 $ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server-VERSION.jar rowcounter usertable
 ----
 
-When the command runs, internally, the HBase JAR finds the dependencies it needs for zookeeper, guava, and its other dependencies on the passed [var]+HADOOP_CLASSPATH+        and adds the JARs to the MapReduce job configuration.
+When the command runs, internally, the HBase JAR finds the dependencies it needs for zookeeper, guava, and its other dependencies on the passed `HADOOP_CLASSPATH`        and adds the JARs to the MapReduce job configuration.
 See the source at TableMapReduceUtil#addDependencyJars(org.apache.hadoop.mapreduce.Job) for how this is done. 
 
 [NOTE]
@@ -80,8 +81,9 @@ You may see an error like the following:
 java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper
 ----
 
-If this occurs, try modifying the command as follows, so that it uses the HBase JARs from the [path]_target/_ directory within the build environment.
+If this occurs, try modifying the command as follows, so that it uses the HBase JARs from the _target/_ directory within the build environment.
 
+[source,bash]
 ----
 $ HADOOP_CLASSPATH=${HBASE_HOME}/hbase-server/target/hbase-server-VERSION-SNAPSHOT.jar:`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server/target/hbase-server-VERSION-SNAPSHOT.jar rowcounter usertable
 ----
@@ -94,7 +96,6 @@ Some mapreduce jobs that use HBase fail to launch.
 The symptom is an exception similar to the following:
 
 ----
-
 Exception in thread "main" java.lang.IllegalAccessError: class
     com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass
     com.google.protobuf.LiteralByteString
@@ -126,7 +127,7 @@ Exception in thread "main" java.lang.IllegalAccessError: class
 
 This is caused by an optimization introduced in link:https://issues.apache.org/jira/browse/HBASE-9867[HBASE-9867] that inadvertently introduced a classloader dependency. 
 
-This affects both jobs using the [code]+-libjars+ option and "fat jar," those which package their runtime dependencies in a nested [code]+lib+ folder.
+This affects both jobs using the `-libjars` option and "fat jar" jobs, those which package their runtime dependencies in a nested `lib` folder.
 
 In order to satisfy the new classloader requirements, hbase-protocol.jar must be included in Hadoop's classpath.
 See <<hbase.mapreduce.classpath,hbase.mapreduce.classpath>> for current recommendations for resolving classpath errors.
@@ -134,11 +135,11 @@ The following is included for historical purposes.
 
 This can be resolved system-wide by including a reference to the hbase-protocol.jar in hadoop's lib directory, via a symlink or by copying the jar into the new location.
 
-This can also be achieved on a per-job launch basis by including it in the [code]+HADOOP_CLASSPATH+ environment variable at job submission time.
+This can also be achieved on a per-job launch basis by including it in the `HADOOP_CLASSPATH` environment variable at job submission time.
 When launching jobs that package their dependencies, all three of the following job launching commands satisfy this requirement:
 
+[source,bash]
 ----
-
 $ HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass
 $ HADOOP_CLASSPATH=$(hbase mapredcp):/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass
 $ HADOOP_CLASSPATH=$(hbase classpath) hadoop jar MyJob.jar MyJobMainClass
@@ -146,8 +147,8 @@ $ HADOOP_CLASSPATH=$(hbase classpath) hadoop jar MyJob.jar MyJobMainClass
 
 For jars that do not package their dependencies, the following command structure is necessary:
 
+[source,bash]
 ----
-
 $ HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf hadoop jar MyApp.jar MyJobMainClass -libjars $(hbase mapredcp | tr ':' ',') ...
 ----
 
@@ -161,8 +162,8 @@ This functionality was lost due to a bug in HBase 0.95 (link:https://issues.apac
 The priority order for choosing the scanner caching is as follows:
 
 . Caching settings which are set on the scan object.
-. Caching settings which are specified via the configuration option +hbase.client.scanner.caching+, which can either be set manually in [path]_hbase-site.xml_ or via the helper method [code]+TableMapReduceUtil.setScannerCaching()+.
-. The default value [code]+HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING+, which is set to [literal]+100+.
+. Caching settings which are specified via the configuration option +hbase.client.scanner.caching+, which can either be set manually in _hbase-site.xml_ or via the helper method `TableMapReduceUtil.setScannerCaching()`.
+. The default value `HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING`, which is set to `100`.
 
 Optimizing the caching settings is a balance between the time the client waits for a result and the number of sets of results the client needs to receive.
 If the caching setting is too large, the client could end up waiting for a long time or the request could even time out.
@@ -178,6 +179,7 @@ See the API documentation for link:https://hbase.apache.org/apidocs/org/apache/h
 The HBase JAR also serves as a Driver for some bundled mapreduce jobs.
 To learn about the bundled MapReduce jobs, run the following command.
 
+[source,bash]
 ----
 $ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server-VERSION.jar
 An example program must be given as the first argument.
@@ -193,6 +195,7 @@ Valid program names are:
 Each of the valid program names are bundled MapReduce jobs.
 To run one of the jobs, model your command after the following example.
 
+[source,bash]
 ----
 $ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server-VERSION.jar rowcounter myTable
 ----
@@ -202,12 +205,12 @@ $ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server-VERSION.jar rowcounte
 HBase can be used as a data source, link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormat.html[TableInputFormat], and data sink, link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html[TableOutputFormat]        or link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.html[MultiTableOutputFormat], for MapReduce jobs.
 Writing MapReduce jobs that read or write HBase, it is advisable to subclass link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableMapper.html[TableMapper]        and/or link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableReducer.html[TableReducer].
 See the do-nothing pass-through classes link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/IdentityTableMapper.html[IdentityTableMapper]        and link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/IdentityTableReducer.html[IdentityTableReducer]        for basic usage.
-For a more involved example, see link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]        or review the [code]+org.apache.hadoop.hbase.mapreduce.TestTableMapReduce+ unit test. 
+For a more involved example, see link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]        or review the `org.apache.hadoop.hbase.mapreduce.TestTableMapReduce` unit test. 
 
 If you run MapReduce jobs that use HBase as a source or sink, you need to specify the source and sink table and column names in your configuration.
 
-When you read from HBase, the [code]+TableInputFormat+ requests the list of regions from HBase and makes a map, which is either a [code]+map-per-region+ or [code]+mapreduce.job.maps+ map, whichever is smaller.
-If your job only has two maps, raise [code]+mapreduce.job.maps+ to a number greater than the number of regions.
+When you read from HBase, `TableInputFormat` requests the list of regions from HBase and makes a map task for each region, or `mapreduce.job.maps` map tasks, whichever is smaller.
+If your job only has two maps, raise `mapreduce.job.maps` to a number greater than the number of regions.
 Maps will run on the adjacent TaskTracker if you are running a TaskTracker and RegionServer per node.
 When writing to HBase, it may make sense to avoid the Reduce step and write back into HBase from within your map.
 This approach works when your job does not need the sort and collation that MapReduce does on the map-emitted data.
@@ -226,15 +229,16 @@ For more on how this mechanism works, see <<arch.bulk.load,arch.bulk.load>>.
 
 == RowCounter Example
 
-The included link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]        MapReduce job uses [code]+TableInputFormat+ and does a count of all rows in the specified table.
+The included link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]        MapReduce job uses `TableInputFormat` and does a count of all rows in the specified table.
 To run it, use the following command: 
 
+[source,bash]
 ----
 $ ./bin/hadoop jar hbase-X.X.X.jar
 ----
 
 This will invoke the HBase MapReduce Driver class.
-Select [literal]+rowcounter+ from the choice of jobs offered.
+Select `rowcounter` from the choice of jobs offered.
 This will print rowcounter usage advice to standard output.
 Specify the tablename, column to count, and output directory.
 If you have classpath errors, see <<hbase.mapreduce.classpath,hbase.mapreduce.classpath>>.
@@ -251,7 +255,7 @@ Thus, if there are 100 regions in the table, there will be 100 map-tasks for the
 [[splitter.custom]]
 === Custom Splitters
 
-For those interested in implementing custom splitters, see the method [code]+getSplits+ in link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html[TableInputFormatBase].
+For those interested in implementing custom splitters, see the method `getSplits` in link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html[TableInputFormatBase].
 That is where the logic for map-task assignment resides. 
 
 [[mapreduce.example]]
@@ -266,7 +270,6 @@ There job would be defined as follows...
 
 [source,java]
 ----
-
 Configuration config = HBaseConfiguration.create();
 Job job = new Job(config, "ExampleRead");
 job.setJarByClass(MyReadJob.class);     // class that contains mapper
@@ -296,7 +299,6 @@ if (!b) {
 
 [source,java]
 ----
-
 public static class MyMapper extends TableMapper<Text, Text> {
 
   public void map(ImmutableBytesWritable row, Result value, Context context) throws InterruptedException, IOException {
@@ -313,7 +315,6 @@ This example will simply copy data from one table to another.
 
 [source,java]
 ----
-
 Configuration config = HBaseConfiguration.create();
 Job job = new Job(config,"ExampleReadWrite");
 job.setJarByClass(MyReadWriteJob.class);    // class that contains mapper
@@ -342,15 +343,14 @@ if (!b) {
 }
 ----
 
-An explanation is required of what [class]+TableMapReduceUtil+ is doing, especially with the reducer. link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html[TableOutputFormat]          is being used as the outputFormat class, and several parameters are being set on the config (e.g., TableOutputFormat.OUTPUT_TABLE), as well as setting the reducer output key to [class]+ImmutableBytesWritable+ and reducer value to [class]+Writable+.
-These could be set by the programmer on the job and conf, but [class]+TableMapReduceUtil+ tries to make things easier.
+An explanation is required of what `TableMapReduceUtil` is doing, especially with the reducer. link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html[TableOutputFormat]          is being used as the outputFormat class, and several parameters are being set on the config (e.g., TableOutputFormat.OUTPUT_TABLE), as well as setting the reducer output key to `ImmutableBytesWritable` and reducer value to `Writable`.
+These could be set by the programmer on the job and conf, but `TableMapReduceUtil` tries to make things easier.
 
-The following is the example mapper, which will create a [class]+Put+          and matching the input [class]+Result+ and emit it.
+The following is the example mapper, which will create a `Put` matching the input `Result` and emit it.
 Note: this is what the CopyTable utility does. 
 
 [source,java]
 ----
-
 public static class MyMapper extends TableMapper<ImmutableBytesWritable, Put>  {
 
 	public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException, InterruptedException {
@@ -368,14 +368,14 @@ public static class MyMapper extends TableMapper<ImmutableBytesWritable, Put>  {
 }
 ----
 
-There isn't actually a reducer step, so [class]+TableOutputFormat+ takes care of sending the [class]+Put+ to the target table. 
+There isn't actually a reducer step, so `TableOutputFormat` takes care of sending the `Put` to the target table. 
 
-This is just an example, developers could choose not to use [class]+TableOutputFormat+ and connect to the target table themselves. 
+This is just an example, developers could choose not to use `TableOutputFormat` and connect to the target table themselves. 
 
 [[mapreduce.example.readwrite.multi]]
 === HBase MapReduce Read/Write Example With Multi-Table Output
 
-TODO: example for [class]+MultiTableOutputFormat+. 
+TODO: example for `MultiTableOutputFormat`. 
 
 [[mapreduce.example.summary]]
 === HBase MapReduce Summary to HBase Example
@@ -414,7 +414,7 @@ if (!b) {
 ----          
 
 In this example mapper a column with a String-value is chosen as the value to summarize upon.
-This value is used as the key to emit from the mapper, and an [class]+IntWritable+ represents an instance counter. 
+This value is used as the key to emit from the mapper, and an `IntWritable` represents an instance counter. 
 
 [source,java]
 ----
@@ -434,7 +434,7 @@ public static class MyMapper extends TableMapper<Text, IntWritable>  {
 }
 ----          
 
-In the reducer, the "ones" are counted (just like any other MR example that does this), and then emits a [class]+Put+. 
+In the reducer, the "ones" are counted (just like any other MR example that does this), and then emits a `Put`. 
 
 [source,java]
 ----
@@ -513,9 +513,8 @@ public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritab
 It is also possible to perform summaries without a reducer - if you use HBase as the reducer. 
 
 An HBase target table would need to exist for the job summary.
-The Table method [code]+incrementColumnValue+ would be used to atomically increment values.
-From a performance perspective, it might make sense to keep a Map of values with their values to be incremeneted for each map-task, and make one update per key at during the [code]+
-            cleanup+ method of the mapper.
+The Table method `incrementColumnValue` would be used to atomically increment values.
+From a performance perspective, it might make sense to keep a Map of keys with their values to be incremented for each map-task, and make one update per key during the `cleanup` method of the mapper.
 However, your mileage may vary depending on the number of rows to be processed and unique keys.
 
 In the end, the summary results are in HBase. 
@@ -525,7 +524,7 @@ In the end, the summary results are in HBase.
 
 Sometimes it is more appropriate to generate summaries to an RDBMS.
 For these cases, it is possible to generate summaries directly to an RDBMS via a custom reducer.
-The [code]+setup+ method can connect to an RDBMS (the connection information can be passed via custom parameters in the context) and the cleanup method can close the connection. 
+The `setup` method can connect to an RDBMS (the connection information can be passed via custom parameters in the context) and the cleanup method can close the connection. 
 
 It is critical to understand that the number of reducers for the job affects the summarization implementation, and you'll have to design this into your reducer.
 Specifically, whether it is designed to run as a singleton (one reducer) or multiple reducers.
@@ -534,7 +533,6 @@ Recognize that the more reducers that are assigned to the job, the more simultan
 
 [source,java]
 ----
-
  public static class MyRdbmsReducer extends Reducer<Text, IntWritable, Text, IntWritable>  {
 
 	private Connection c = null;

