hbase-commits mailing list archives

From mi...@apache.org
Subject [3/8] hbase git commit: HBASE-12902 Post-asciidoc conversion fix-ups
Date Fri, 23 Jan 2015 03:15:36 GMT
http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/asciidoc/_chapters/ops_mgt.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/ops_mgt.adoc b/src/main/asciidoc/_chapters/ops_mgt.adoc
index c7f0e0f..b0b496a 100644
--- a/src/main/asciidoc/_chapters/ops_mgt.adoc
+++ b/src/main/asciidoc/_chapters/ops_mgt.adoc
@@ -34,9 +34,9 @@ The subject of operations is related to the topics of <<trouble,trouble>>, <<per
 == HBase Tools and Utilities
 
 HBase provides several tools for administration, analysis, and debugging of your cluster.
-The entry-point to most of these tools is the [path]_bin/hbase_ command, though some tools are available in the [path]_dev-support/_ directory.
+The entry-point to most of these tools is the _bin/hbase_ command, though some tools are available in the _dev-support/_ directory.
 
-To see usage instructions for [path]_bin/hbase_ command, run it with no arguments, or with the +-h+ argument.
+To see usage instructions for _bin/hbase_ command, run it with no arguments, or with the +-h+ argument.
 These are the usage instructions for HBase 0.98.x.
 Some commands, such as +version+, +pe+, +ltt+, +clean+, are not available in previous versions.
 
@@ -70,14 +70,14 @@ Some commands take arguments. Pass no args or -h for usage.
   CLASSNAME       Run the class named CLASSNAME
 ----
 
-Some of the tools and utilities below are Java classes which are passed directly to the [path]_bin/hbase_ command, as referred to in the last line of the usage instructions.
+Some of the tools and utilities below are Java classes which are passed directly to the _bin/hbase_ command, as referred to in the last line of the usage instructions.
 Others, such as +hbase shell+ (<<shell,shell>>), +hbase upgrade+ (<<upgrading,upgrading>>), and +hbase
         thrift+ (<<thrift,thrift>>), are documented elsewhere in this guide.
 
 === Canary
 
 There is a Canary class which can help users canary-test the HBase cluster status, at the granularity of every column family of every region, or of every RegionServer.
-To see the usage, use the [literal]+--help+ parameter. 
+To see the usage, use the `--help` parameter. 
 
 ----
 $ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.tool.Canary -help
@@ -197,17 +197,17 @@ $ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.tool.Canary -t 600000
 
 ==== Running Canary in a Kerberos-enabled Cluster
 
-To run Canary in a Kerberos-enabled cluster, configure the following two properties in [path]_hbase-site.xml_:
+To run Canary in a Kerberos-enabled cluster, configure the following two properties in _hbase-site.xml_:
 
-* [code]+hbase.client.keytab.file+
-* [code]+hbase.client.kerberos.principal+
+* `hbase.client.keytab.file`
+* `hbase.client.kerberos.principal`
 
 Kerberos credentials are refreshed every 30 seconds when Canary runs in daemon mode.
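As a sketch, the two properties might look like this in _hbase-site.xml_ (the keytab path and principal below are placeholders, not values from the source):

```xml
<!-- Illustrative values only; substitute your own keytab path and principal. -->
<property>
  <name>hbase.client.keytab.file</name>
  <value>/etc/hbase/conf/hbase.keytab</value>
</property>
<property>
  <name>hbase.client.kerberos.principal</name>
  <value>hbase/_HOST@EXAMPLE.COM</value>
</property>
```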
 
-To configure the DNS interface for the client, configure the following optional properties in [path]_hbase-site.xml_.
+To configure the DNS interface for the client, configure the following optional properties in _hbase-site.xml_.
 
-* [code]+hbase.client.dns.interface+
-* [code]+hbase.client.dns.nameserver+
+* `hbase.client.dns.interface`
+* `hbase.client.dns.nameserver`
 
 .Canary in a Kerberos-Enabled Cluster
 ====
@@ -244,10 +244,10 @@ See link:[HBASE-7351 Periodic health check script] for configurations and detail
 
 === Driver
 
-Several frequently-accessed utilities are provided as [code]+Driver+ classes, and executed by the [path]_bin/hbase_ command.
+Several frequently-accessed utilities are provided as `Driver` classes, and executed by the _bin/hbase_ command.
 These utilities represent MapReduce jobs which run on your cluster.
 They are run in the following way, replacing [replaceable]_UtilityName_ with the utility you want to run.
-This command assumes you have set the environment variable [literal]+HBASE_HOME+ to the directory where HBase is unpacked on your server.
+This command assumes you have set the environment variable `HBASE_HOME` to the directory where HBase is unpacked on your server.
 
 ----
 
@@ -299,10 +299,10 @@ See <<hfile_tool,hfile tool>>.
 === WAL Tools
 
 [[hlog_tool]]
-==== [class]+FSHLog+ tool
+==== `FSHLog` tool
 
-The main method on [class]+FSHLog+ offers manual split and dump facilities.
-Pass it WALs or the product of a split, the content of the [path]_recovered.edits_.
+The main method on `FSHLog` offers manual split and dump facilities.
+Pass it WALs or the product of a split, the content of the _recovered.edits_ directory.
 
 You can get a textual dump of a WAL file content by doing the following:
@@ -311,7 +311,7 @@ You can get a textual dump of a WAL file content by doing the following:
  $ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.FSHLog --dump hdfs://example.org:8020/hbase/.logs/example.org,60020,1283516293161/10.10.21.10%3A60020.1283973724012
 ----
 
-The return code will be non-zero if issues with the file so you can test wholesomeness of file by redirecting [var]+STDOUT+ to [code]+/dev/null+ and testing the program return.
+The return code will be non-zero if there are issues with the file, so you can test the health of the file by redirecting `STDOUT` to `/dev/null` and testing the program's return code.
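As an illustrative sketch (not from the source), the exit-status check can be wrapped like this; a stub command stands in for the real FSHLog invocation so the pattern is self-contained:

```shell
# Pattern for the check described above. In real use, replace the stubbed
# command with:
#   "${HBASE_HOME}/bin/hbase" org.apache.hadoop.hbase.regionserver.wal.FSHLog --dump <wal-file>
wal_is_sound() {
  "$@" > /dev/null 2>&1   # discard the dump output, keep only the exit status
}

wal_is_sound true  && echo "WAL ok"
wal_is_sound false || echo "WAL has issues"
```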
 
 Similarly you can force a split of a log file directory by doing:
 
@@ -332,7 +332,7 @@ You can invoke it via the hbase cli with the 'wal' command.
 .WAL Printing in older versions of HBase
 [NOTE]
 ====
-Prior to version 2.0, the WAL Pretty Printer was called the [class]+HLogPrettyPrinter+, after an internal name for HBase's write ahead log.
+Prior to version 2.0, the WAL Pretty Printer was called the `HLogPrettyPrinter`, after an internal name for HBase's write ahead log.
 In those versions, you can print the contents of a WAL using the same configuration as above, but with the 'hlog' command.
 
 ----
@@ -394,13 +394,13 @@ For performance consider the following general options:
 .Scanner Caching
 [NOTE]
 ====
-Caching for the input Scan is configured via [code]+hbase.client.scanner.caching+          in the job configuration. 
+Caching for the input Scan is configured via `hbase.client.scanner.caching` in the job configuration.
 ====
 
 .Versions
 [NOTE]
 ====
-By default, CopyTable utility only copies the latest version of row cells unless [code]+--versions=n+ is explicitly specified in the command. 
+By default, CopyTable utility only copies the latest version of row cells unless `--versions=n` is explicitly specified in the command. 
 ====
 
 See Jonathan Hsieh's link:http://www.cloudera.com/blog/2012/06/online-hbase-backups-with-copytable-2/[Online
@@ -415,7 +415,7 @@ Invoke via:
 $ bin/hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
 ----
 
-Note: caching for the input Scan is configured via [code]+hbase.client.scanner.caching+ in the job configuration. 
+Note: caching for the input Scan is configured via `hbase.client.scanner.caching` in the job configuration. 
 
 === Import
 
@@ -435,7 +435,7 @@ $ bin/hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import
 === ImportTsv
 
 ImportTsv is a utility that will load data in TSV format into HBase.
-It has two distinct usages: loading data from TSV format in HDFS into HBase via Puts, and preparing StoreFiles to be loaded via the [code]+completebulkload+. 
+It has two distinct usages: loading data from TSV format in HDFS into HBase via Puts, and preparing StoreFiles to be loaded via the `completebulkload`. 
 
 To load data via Puts (i.e., non-bulk loading):
 
@@ -525,7 +525,7 @@ For more information about bulk-loading HFiles into HBase, see <<arch.bulk.load,
 
 === CompleteBulkLoad
 
-The [code]+completebulkload+ utility will move generated StoreFiles into an HBase table.
+The `completebulkload` utility will move generated StoreFiles into an HBase table.
 This utility is often used in conjunction with output from <<importtsv,importtsv>>. 
 
 There are two ways to invoke this utility, with explicit classname and via the driver:
@@ -570,7 +570,7 @@ $ bin/hbase org.apache.hadoop.hbase.mapreduce.WALPlayer /backuplogdir oldTable1,
 ----
 
 WALPlayer, by default, runs as a mapreduce job.
-To NOT run WALPlayer as a mapreduce job on your cluster, force it to run all in the local process by adding the flags [code]+-Dmapreduce.jobtracker.address=local+ on the command line. 
+To NOT run WALPlayer as a mapreduce job on your cluster, force it to run entirely in the local process by adding the flag `-Dmapreduce.jobtracker.address=local` on the command line.
 
 [[rowcounter]]
 === RowCounter and CellCounter
@@ -583,7 +583,7 @@ It will run the mapreduce all in a single process but it will run faster if you
 $ bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter <tablename> [<column1> <column2>...]
 ----
 
-Note: caching for the input Scan is configured via [code]+hbase.client.scanner.caching+ in the job configuration. 
+Note: caching for the input Scan is configured via `hbase.client.scanner.caching` in the job configuration. 
 
 HBase ships another diagnostic mapreduce job called link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/CellCounter.html[CellCounter].
 Like RowCounter, it gathers more fine-grained statistics about your table.
@@ -598,13 +598,13 @@ The statistics gathered by RowCounter are more fine-grained and include:
 
 The program allows you to limit the scope of the run.
 Provide a row regex or prefix to limit the rows to analyze.
-Use [code]+hbase.mapreduce.scan.column.family+ to specify scanning a single column family.
+Use `hbase.mapreduce.scan.column.family` to specify scanning a single column family.
 
 ----
 $ bin/hbase org.apache.hadoop.hbase.mapreduce.CellCounter <tablename> <outputDir> [regex or prefix]
 ----
 
-Note: just like RowCounter, caching for the input Scan is configured via [code]+hbase.client.scanner.caching+ in the job configuration. 
+Note: just like RowCounter, caching for the input Scan is configured via `hbase.client.scanner.caching` in the job configuration. 
 
 === mlockall
 
@@ -639,7 +639,7 @@ Options:
 
 === +hbase pe+
 
-The +hbase pe+ command is a shortcut provided to run the [code]+org.apache.hadoop.hbase.PerformanceEvaluation+ tool, which is used for testing.
+The +hbase pe+ command is a shortcut provided to run the `org.apache.hadoop.hbase.PerformanceEvaluation` tool, which is used for testing.
 The +hbase pe+ command was introduced in HBase 0.98.4.
 
 The PerformanceEvaluation tool accepts many different options and commands.
@@ -651,7 +651,7 @@ The PerformanceEvaluation tool has received many updates in recent HBase release
 
 === +hbase ltt+
 
-The +hbase ltt+ command is a shortcut provided to run the [code]+org.apache.hadoop.hbase.util.LoadTestTool+ utility, which is used for testing.
+The +hbase ltt+ command is a shortcut provided to run the `org.apache.hadoop.hbase.util.LoadTestTool` utility, which is used for testing.
 The +hbase ltt+ command was introduced in HBase 0.98.4.
 
 You must specify either +-write+ or +-update-read+ as the first option.
@@ -721,8 +721,8 @@ See <<lb,lb>> below.
 .Kill Node Tool
 [NOTE]
 ====
-In hbase-2.0, in the bin directory, we added a script named [path]_considerAsDead.sh_ that can be used to kill a regionserver.
-Hardware issues could be detected by specialized monitoring tools before the  zookeeper timeout has expired. [path]_considerAsDead.sh_ is a simple function to mark a RegionServer as dead.
+In hbase-2.0, in the bin directory, we added a script named _considerAsDead.sh_ that can be used to kill a regionserver.
+Hardware issues could be detected by specialized monitoring tools before the ZooKeeper timeout has expired. _considerAsDead.sh_ is a simple function to mark a RegionServer as dead.
 It deletes all the znodes of the server, starting the recovery process.
 Plug in the script into your monitoring/fault detection tools to initiate faster failover.
 Be careful how you use this disruptive tool.
@@ -733,7 +733,7 @@ A downside to the above stop of a RegionServer is that regions could be offline
 Regions are closed in order.
 If there are many regions on the server, the first region to close may not be back online until all regions close and the master notices that the RegionServer's znode is gone.
 In Apache HBase 0.90.2, we added a facility for having a node gradually shed its load and then shut itself down.
-Apache HBase 0.90.2 added the [path]_graceful_stop.sh_ script.
+Apache HBase 0.90.2 added the _graceful_stop.sh_ script.
 Here is its usage:
 
 ----
@@ -748,21 +748,21 @@ Usage: graceful_stop.sh [--config &conf-dir>] [--restart] [--reload] [--thrift]
 ----
 
 To decommission a loaded RegionServer, run the following: +$
-          ./bin/graceful_stop.sh HOSTNAME+ where [var]+HOSTNAME+ is the host carrying the RegionServer you would decommission. 
+          ./bin/graceful_stop.sh HOSTNAME+ where `HOSTNAME` is the host carrying the RegionServer you would decommission. 
 
-.On [var]+HOSTNAME+
+.On `HOSTNAME`
 [NOTE]
 ====
-The [var]+HOSTNAME+ passed to [path]_graceful_stop.sh_ must match the hostname that hbase is using to identify RegionServers.
+The `HOSTNAME` passed to _graceful_stop.sh_ must match the hostname that hbase is using to identify RegionServers.
 Check the list of RegionServers in the master UI for how HBase is referring to servers.
 It's usually the hostname but can also be the FQDN.
-Whatever HBase is using, this is what you should pass the [path]_graceful_stop.sh_ decommission script.
+Whatever HBase is using, this is what you should pass the _graceful_stop.sh_ decommission script.
 If you pass IPs, the script is not yet smart enough to make a hostname (or FQDN) of it, so it will fail when it checks whether the server is currently running; the graceful unloading of regions will not run.
 ====
 
-The [path]_graceful_stop.sh_ script will move the regions off the decommissioned RegionServer one at a time to minimize region churn.
+The _graceful_stop.sh_ script will move the regions off the decommissioned RegionServer one at a time to minimize region churn.
 It will verify that the region has been deployed in the new location before it moves the next region, and so on, until the decommissioned server is carrying zero regions.
-At this point, the [path]_graceful_stop.sh_ tells the RegionServer +stop+.
+At this point, the _graceful_stop.sh_ tells the RegionServer +stop+.
 At this point the master will notice that the RegionServer is gone, but all regions will have already been redeployed, and because the RegionServer went down cleanly, there will be no WAL logs to split.
 
 .Load Balancer
@@ -797,8 +797,8 @@ Hence, it is better to manage the balancer apart from +graceful_stop+ reenabling
 
 If you have a large cluster, you may want to decommission more than one machine at a time by gracefully stopping multiple RegionServers concurrently.
 To gracefully drain multiple regionservers at the same time, RegionServers can be put into a "draining" state.
-This is done by marking a RegionServer as a draining node by creating an entry in ZooKeeper under the [path]_hbase_root/draining_ znode.
-This znode has format [code]+name,port,startcode+ just like the regionserver entries under [path]_hbase_root/rs_ znode. 
+This is done by marking a RegionServer as a draining node by creating an entry in ZooKeeper under the _hbase_root/draining_ znode.
+This znode has format `name,port,startcode` just like the regionserver entries under _hbase_root/rs_ znode. 
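For illustration only (the hostname, port, and startcode below are invented, and the default `zookeeper.znode.parent` of `/hbase` is assumed), the draining znode path can be composed like this:

```shell
HOST="rs1.example.com"        # as the master UI lists the RegionServer
PORT=60020
STARTCODE=1423024523467       # the server's startcode
# Same name,port,startcode format as the entries under /hbase/rs:
DRAINING_ZNODE="/hbase/draining/${HOST},${PORT},${STARTCODE}"
echo "$DRAINING_ZNODE"
```

The znode could then be created with a ZooKeeper client, such as the one started by +hbase zkcli+.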
 
 Without this facility, decommissioning multiple nodes may be non-optimal because regions that are being drained from one region server may be moved to other regionservers that are also draining.
 Marking RegionServers to be in the draining state prevents this from happening.
@@ -810,7 +810,7 @@ See this link:http://inchoate-clatter.blogspot.com/2012/03/hbase-ops-automation.
 
 It is good to have <<dfs.datanode.failed.volumes.tolerated,dfs.datanode.failed.volumes.tolerated>> set if you have a decent number of disks per machine for the case where a disk plain dies.
 But usually disks do the "John Wayne" -- i.e.
-take a while to go down spewing errors in [path]_dmesg_ -- or for some reason, run much slower than their companions.
+take a while to go down spewing errors in _dmesg_ -- or for some reason, run much slower than their companions.
 In this case you want to decommission the disk.
 You have two options.
 You can link:http://wiki.apache.org/hadoop/FAQ#I_want_to_make_a_large_cluster_smaller_by_taking_out_a_bunch_of_nodes_simultaneously._How_can_this_be_done.3F[decommission
@@ -835,13 +835,13 @@ These methods are detailed below.
 
 ==== Using the +rolling-restart.sh+ Script
 
-HBase ships with a script, [path]_bin/rolling-restart.sh_, that allows you to perform rolling restarts on the entire cluster, the master only, or the RegionServers only.
+HBase ships with a script, _bin/rolling-restart.sh_, that allows you to perform rolling restarts on the entire cluster, the master only, or the RegionServers only.
 The script is provided as a template for your own script, and is not explicitly tested.
 It requires password-less SSH login to be configured and assumes that you have deployed using a tarball.
 The script requires you to set some environment variables before running it.
 Examine the script and modify it to suit your needs.
 
-.[path]_rolling-restart.sh_ General Usage
+._rolling-restart.sh_ General Usage
 ====
 ----
 
@@ -851,19 +851,19 @@ Usage: rolling-restart.sh [--config <hbase-confdir>] [--rs-only] [--master-only]
 ====
 
 Rolling Restart on RegionServers Only::
-  To perform a rolling restart on the RegionServers only, use the [code]+--rs-only+ option.
+  To perform a rolling restart on the RegionServers only, use the `--rs-only` option.
   This might be necessary if you need to reboot the individual RegionServer or if you make a configuration change that only affects RegionServers and not the other HBase processes.
 
 Rolling Restart on Masters Only::
-  To perform a rolling restart on the active and backup Masters, use the [code]+--master-only+ option.
+  To perform a rolling restart on the active and backup Masters, use the `--master-only` option.
   You might use this if you know that your configuration change only affects the Master and not the RegionServers, or if you need to restart the server where the active Master is running.
 
 Graceful Restart::
-  If you specify the [code]+--graceful+ option, RegionServers are restarted using the [path]_bin/graceful_stop.sh_ script, which moves regions off a RegionServer before restarting it.
+  If you specify the `--graceful` option, RegionServers are restarted using the _bin/graceful_stop.sh_ script, which moves regions off a RegionServer before restarting it.
   This is safer, but can delay the restart.
 
 Limiting the Number of Threads::
-  To limit the rolling restart to using only a specific number of threads, use the [code]+--maxthreads+ option.
+  To limit the rolling restart to using only a specific number of threads, use the `--maxthreads` option.
 
 [[rolling.restart.manual]]
 ==== Manual Rolling Restart
@@ -882,7 +882,7 @@ It disables the load balancer before moving the regions.
 $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --reload --debug $i; done &> /tmp/log.txt &
 ----
 
-Monitor the output of the [path]_/tmp/log.txt_ file to follow the progress of the script. 
+Monitor the output of the _/tmp/log.txt_ file to follow the progress of the script. 
 
 ==== Logic for Crafting Your Own Rolling Restart Script
 
@@ -936,7 +936,7 @@ $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --
 
 Adding a new regionserver in HBase is essentially free; you simply start it like this: +$ ./bin/hbase-daemon.sh start regionserver+ and it will register itself with the master.
 Ideally you also started a DataNode on the same machine so that the RS can eventually start to have local files.
-If you rely on ssh to start your daemons, don't forget to add the new hostname in [path]_conf/regionservers_ on the master. 
+If you rely on ssh to start your daemons, don't forget to add the new hostname in _conf/regionservers_ on the master. 
 
 At this point the region server isn't serving data because no regions have moved to it yet.
 If the balancer is enabled, it will start moving regions to the new RS.
@@ -961,10 +961,10 @@ You can also filter which metrics are emitted and extend the metrics framework t
 
 For HBase 0.95 and newer, HBase ships with a default metrics configuration, or [firstterm]_sink_.
 This includes a wide variety of individual metrics, and emits them every 10 seconds by default.
-To configure metrics for a given region server, edit the [path]_conf/hadoop-metrics2-hbase.properties_ file.
+To configure metrics for a given region server, edit the _conf/hadoop-metrics2-hbase.properties_ file.
 Restart the region server for the changes to take effect.
 
-To change the sampling rate for the default sink, edit the line beginning with [literal]+*.period+.
+To change the sampling rate for the default sink, edit the line beginning with `*.period`.
 To filter which metrics are emitted or to extend the metrics framework, see the link:http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics2/package-summary.html[Hadoop Metrics2 package documentation].
 
 .HBase Metrics and Ganglia
@@ -978,7 +978,7 @@ See link:http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics2/pa
 
 === Disabling Metrics
 
-To disable metrics for a region server, edit the [path]_conf/hadoop-metrics2-hbase.properties_ file and comment out any uncommented lines.
+To disable metrics for a region server, edit the _conf/hadoop-metrics2-hbase.properties_ file and comment out any uncommented lines.
 Restart the region server for the changes to take effect.
 
 [[discovering.available.metrics]]
@@ -988,15 +988,15 @@ Rather than listing each metric which HBase emits by default, you can browse thr
 Different metrics are exposed for the Master process and each region server process.
 
 .Procedure: Access a JSON Output of Available Metrics
-. After starting HBase, access the region server's web UI, at [literal]+http://REGIONSERVER_HOSTNAME:60030+ by default (or port 16030 in HBase 1.0+).
+. After starting HBase, access the region server's web UI, at `http://REGIONSERVER_HOSTNAME:60030` by default (or port 16030 in HBase 1.0+).
 . Click the [label]#Metrics Dump# link near the top.
   The metrics for the region server are presented as a dump of the JMX bean in JSON format.
   This will dump out all metrics names and their values.
-  To include metrics descriptions in the listing -- this can be useful when you are exploring what is available -- add a query string of [literal]+?description=true+ so your URL becomes [literal]+http://REGIONSERVER_HOSTNAME:60030/jmx?description=true+.
+  To include metrics descriptions in the listing -- this can be useful when you are exploring what is available -- add a query string of `?description=true` so your URL becomes `http://REGIONSERVER_HOSTNAME:60030/jmx?description=true`.
   Not all beans and attributes have descriptions. 
-. To view metrics for the Master, connect to the Master's web UI instead (defaults to [literal]+http://localhost:60010+ or port 16010 in HBase 1.0+) and click its [label]#Metrics
+. To view metrics for the Master, connect to the Master's web UI instead (defaults to `http://localhost:60010` or port 16010 in HBase 1.0+) and click its [label]#Metrics
   Dump# link.
-  To include metrics descriptions in the listing -- this can be useful when you are exploring what is available -- add a query string of [literal]+?description=true+ so your URL becomes [literal]+http://REGIONSERVER_HOSTNAME:60010/jmx?description=true+.
+  To include metrics descriptions in the listing -- this can be useful when you are exploring what is available -- add a query string of `?description=true` so your URL becomes `http://REGIONSERVER_HOSTNAME:60010/jmx?description=true`.
   Not all beans and attributes have descriptions. 
 
 
@@ -1023,15 +1023,15 @@ This procedure uses +jvisualvm+, which is an application usually available in th
 === Units of Measure for Metrics
 
 Different metrics are expressed in different units, as appropriate.
-Often, the unit of measure is in the name (as in the metric [code]+shippedKBs+). Otherwise, use the following guidelines.
+Often, the unit of measure is in the name (as in the metric `shippedKBs`). Otherwise, use the following guidelines.
 When in doubt, you may need to examine the source for a given metric.
 
 * Metrics that refer to a point in time are usually expressed as a timestamp.
-* Metrics that refer to an age (such as [code]+ageOfLastShippedOp+) are usually expressed in milliseconds.
+* Metrics that refer to an age (such as `ageOfLastShippedOp`) are usually expressed in milliseconds.
 * Metrics that refer to memory sizes are in bytes.
-* Sizes of queues (such as [code]+sizeOfLogQueue+) are expressed as the number of items in the queue.
+* Sizes of queues (such as `sizeOfLogQueue`) are expressed as the number of items in the queue.
   Determine the byte size by multiplying the number of items by the block size (default is 64 MB in HDFS).
-* Metrics that refer to things like the number of a given type of operations (such as [code]+logEditsRead+) are expressed as an integer.
+* Metrics that refer to things like the number of a given type of operations (such as `logEditsRead`) are expressed as an integer.
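As a back-of-envelope sketch of the queue-size guideline above (the metric value here is invented for illustration):

```shell
SIZE_OF_LOG_QUEUE=10                      # example value of the sizeOfLogQueue metric
HDFS_BLOCK_SIZE=$((64 * 1024 * 1024))     # default HDFS block size, in bytes
QUEUE_BYTES=$((SIZE_OF_LOG_QUEUE * HDFS_BLOCK_SIZE))
echo "$QUEUE_BYTES"                       # approximate bytes represented by the queue
```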
 
 [[master_metrics]]
 === Most Important Master Metrics
@@ -1174,10 +1174,10 @@ It is also prepended with identifying tags [constant]+(responseTooSlow)+, [const
 
 There are two configuration knobs that can be used to adjust the thresholds for when queries are logged. 
 
-* [var]+hbase.ipc.warn.response.time+ Maximum number of milliseconds that a query can be run without being logged.
+* `hbase.ipc.warn.response.time` Maximum number of milliseconds that a query can be run without being logged.
   Defaults to 10000, or 10 seconds.
   Can be set to -1 to disable logging by time. 
-* [var]+hbase.ipc.warn.response.size+ Maximum byte size of response that a query can return without being logged.
+* `hbase.ipc.warn.response.size` Maximum byte size of response that a query can return without being logged.
   Defaults to 100 megabytes.
   Can be set to -1 to disable logging by size. 
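As a sketch, tighter-than-default thresholds could be set in _hbase-site.xml_ like this (the values below are illustrative, not recommendations):

```xml
<!-- Illustrative values only -->
<property>
  <name>hbase.ipc.warn.response.time</name>
  <value>5000</value>  <!-- log queries slower than 5 seconds -->
</property>
<property>
  <name>hbase.ipc.warn.response.size</name>
  <value>52428800</value>  <!-- log responses larger than 50 MB -->
</property>
```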
 
@@ -1185,8 +1185,8 @@ There are two configuration knobs that can be used to adjust the thresholds for
 
 The slow query log exposes two metrics to JMX.
 
-* [var]+hadoop.regionserver_rpc_slowResponse+ a global metric reflecting the durations of all responses that triggered logging.
-* [var]+hadoop.regionserver_rpc_methodName.aboveOneSec+ A metric reflecting the durations of all responses that lasted for more than one second.
+* `hadoop.regionserver_rpc_slowResponse` A global metric reflecting the durations of all responses that triggered logging.
+* `hadoop.regionserver_rpc_methodName.aboveOneSec` A metric reflecting the durations of all responses that lasted for more than one second.
 
 ==== Output
 
@@ -1293,8 +1293,8 @@ For more information, see the link:http://hbase.apache.org/apidocs/org/apache/ha
 . Configure and start the source and destination clusters.
   Create tables with the same names and column families on both the source and destination clusters, so that the destination cluster knows where to store data it will receive.
   All hosts in the source and destination clusters should be reachable to each other.
-. On the source cluster, enable replication by setting [code]+hbase.replication+            to [literal]+true+ in [path]_hbase-site.xml_.
-. On the source cluster, in HBase Shell, add the destination cluster as a peer, using the [code]+add_peer+ command.
+. On the source cluster, enable replication by setting `hbase.replication` to `true` in _hbase-site.xml_.
+. On the source cluster, in HBase Shell, add the destination cluster as a peer, using the `add_peer` command.
   The syntax is as follows:
 +
 ----
@@ -1307,7 +1307,7 @@ The ID is a string (prior to link:https://issues.apache.org/jira/browse/HBASE-11
 hbase.zookeeper.quorum:hbase.zookeeper.property.clientPort:zookeeper.znode.parent
 ----
 +
-If both clusters use the same ZooKeeper cluster, you must use a different [code]+zookeeper.znode.parent+, because they cannot write in the same folder.
+If both clusters use the same ZooKeeper cluster, you must use a different `zookeeper.znode.parent`, because they cannot write in the same folder.
 
 . On the source cluster, configure each column family to be replicated by setting its REPLICATION_SCOPE to 1, using commands such as the following in HBase Shell.
 +
@@ -1325,7 +1325,7 @@ Getting 1 rs from peer cluster # 0
 Choosing peer 10.10.1.49:62020
 ----
 
-. To verify the validity of replicated data, you can use the included [code]+VerifyReplication+ MapReduce job on the source cluster, providing it with the ID of the replication peer and table name to verify.
+. To verify the validity of replicated data, you can use the included `VerifyReplication` MapReduce job on the source cluster, providing it with the ID of the replication peer and table name to verify.
   Other options are possible, such as a time range or specific families to verify.
 +
 The command has the following form:
@@ -1334,7 +1334,7 @@ The command has the following form:
 hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication [--starttime=timestamp1] [--stoptime=timestamp [--families=comma separated list of families] <peerId><tablename>
 ----
 +
-The [code]+VerifyReplication+ command prints out [literal]+GOODROWS+            and [literal]+BADROWS+ counters to indicate rows that did and did not replicate correctly. 
+The `VerifyReplication` command prints out `GOODROWS` and `BADROWS` counters to indicate rows that did and did not replicate correctly.
 
 
 === Detailed Information About Cluster Replication
@@ -1351,7 +1351,7 @@ A single WAL edit goes through several steps in order to be replicated to a slav
 . If the changed cell corresponds to a column family that is scoped for replication, the edit is added to the queue for replication.
 . In a separate thread, the edit is read from the log, as part of a batch process.
   Only the KeyValues that are eligible for replication are kept.
-  Replicable KeyValues are part of a column family whose schema is scoped GLOBAL, are not part of a catalog such as [code]+hbase:meta+, did not originate from the target slave cluster, and have not already been consumed by the target slave cluster.
+  Replicable KeyValues are part of a column family whose schema is scoped GLOBAL, are not part of a catalog such as `hbase:meta`, did not originate from the target slave cluster, and have not already been consumed by the target slave cluster.
 . The edit is tagged with the master's UUID and added to a buffer.
   When the buffer is filled, or the reader reaches the end of the file, the buffer is sent to a random region server on the slave cluster.
 . The region server reads the edits sequentially and separates them into buffers, one buffer per table.
@@ -1374,31 +1374,31 @@ When replication is active, a subset of region servers in the source cluster is
 This responsibility must be failed over like all other region server functions should a process or node crash.
 The following configuration settings are recommended for maintaining an even distribution of replication activity over the remaining live servers in the source cluster:
 
-* Set [code]+replication.source.maxretriesmultiplier+ to [literal]+300+.
-* Set [code]+replication.source.sleepforretries+ to [literal]+1+ (1 second). This value, combined with the value of [code]+replication.source.maxretriesmultiplier+, causes the retry cycle to last about 5 minutes.
-* Set [code]+replication.sleep.before.failover+ to [literal]+30000+ (30 seconds) in the source cluster site configuration.
+* Set `replication.source.maxretriesmultiplier` to `300`.
+* Set `replication.source.sleepforretries` to `1` (1 second). This value, combined with the value of `replication.source.maxretriesmultiplier`, causes the retry cycle to last about 5 minutes.
+* Set `replication.sleep.before.failover` to `30000` (30 seconds) in the source cluster site configuration.
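Taken together, the three settings above can be sketched as an _hbase-site.xml_ fragment for the source cluster (the property names and values come from the list above; adjust them to your environment):

[source,xml]
----
<!-- Source-cluster hbase-site.xml: replication failover tuning -->
<property>
  <name>replication.source.maxretriesmultiplier</name>
  <value>300</value>
</property>
<property>
  <name>replication.source.sleepforretries</name>
  <!-- seconds between retries; combined with the multiplier, the retry cycle lasts about 5 minutes -->
  <value>1</value>
</property>
<property>
  <name>replication.sleep.before.failover</name>
  <!-- milliseconds to wait before taking over a dead region server's queues -->
  <value>30000</value>
</property>
----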
 
 .Preserving Tags During Replication
 By default, the codec used for replication between clusters strips tags, such as cell-level ACLs, from cells.
 To prevent the tags from being stripped, you can use a different codec which does not strip them.
-Configure [code]+hbase.replication.rpc.codec+ to use [literal]+org.apache.hadoop.hbase.codec.KeyValueCodecWithTags+, on both the source and sink RegionServers involved in the replication.
+Configure `hbase.replication.rpc.codec` to use `org.apache.hadoop.hbase.codec.KeyValueCodecWithTags`, on both the source and sink RegionServers involved in the replication.
 This option was introduced in link:https://issues.apache.org/jira/browse/HBASE-10322[HBASE-10322].
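As a sketch, the codec override described above would appear in _hbase-site.xml_ on both the source and sink RegionServers like this:

[source,xml]
----
<!-- Preserve cell tags (such as cell-level ACLs) during replication.
     Must be set on BOTH the source and sink RegionServers. -->
<property>
  <name>hbase.replication.rpc.codec</name>
  <value>org.apache.hadoop.hbase.codec.KeyValueCodecWithTags</value>
</property>
----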
 
 ==== Replication Internals
 
 Replication State in ZooKeeper::
   HBase replication maintains its state in ZooKeeper.
-  By default, the state is contained in the base node [path]_/hbase/replication_.
-  This node contains two child nodes, the [code]+Peers+ znode and the [code]+RS+                znode.
+  By default, the state is contained in the base node _/hbase/replication_.
+  This node contains two child nodes, the `Peers` znode and the `RS` znode.
 
-The [code]+Peers+ Znode::
-  The [code]+peers+ znode is stored in [path]_/hbase/replication/peers_ by default.
+The `Peers` Znode::
+  The `peers` znode is stored in _/hbase/replication/peers_ by default.
   It consists of a list of all peer replication clusters, along with the status of each of them.
   The value of each peer is its cluster key, which is provided in the HBase Shell.
   The cluster key contains a list of ZooKeeper nodes in the cluster's quorum, the client port for the ZooKeeper quorum, and the base znode for HBase in HDFS on that cluster.
 
-The [code]+RS+ Znode::
-  The [code]+rs+ znode contains a list of WAL logs which need to be replicated.
+The `RS` Znode::
+  The `rs` znode contains a list of WAL logs which need to be replicated.
   This list is divided into a set of queues organized by region server and the peer cluster the region server is shipping the logs to.
   The rs znode has one child znode for each region server in the cluster.
   The child znode name is the region server's hostname, client port, and start code.
@@ -1406,11 +1406,11 @@ The [code]+RS+ Znode::
 
 ==== Choosing Region Servers to Replicate To
 
-When a master cluster region server initiates a replication source to a slave cluster, it first connects to the slave's ZooKeeper ensemble using the provided cluster key . It then scans the [path]_rs/_ directory to discover all the available sinks (region servers that are accepting incoming streams of edits to replicate) and randomly chooses a subset of them using a configured ratio which has a default value of 10%. For example, if a slave cluster has 150 machines, 15 will be chosen as potential recipient for edits that this master cluster region server sends.
+When a master cluster region server initiates a replication source to a slave cluster, it first connects to the slave's ZooKeeper ensemble using the provided cluster key. It then scans the _rs/_ directory to discover all the available sinks (region servers that are accepting incoming streams of edits to replicate) and randomly chooses a subset of them using a configured ratio, which has a default value of 10%. For example, if a slave cluster has 150 machines, 15 will be chosen as potential recipients for edits that this master cluster region server sends.
 Because this selection is performed by each master region server, the probability that all slave region servers are used is very high, and this method works for clusters of any size.
 For example, a master cluster of 10 machines replicating to a slave cluster of 5 machines with a ratio of 10% causes the master cluster region servers to choose one machine each at random.
 
-A ZooKeeper watcher is placed on the [path]_${zookeeper.znode.parent}/rs_ node of the slave cluster by each of the master cluster's region servers.
+A ZooKeeper watcher is placed on the _${zookeeper.znode.parent}/rs_ node of the slave cluster by each of the master cluster's region servers.
 This watch is used to monitor changes in the composition of the slave cluster.
 When nodes are removed from the slave cluster, or if nodes go down or come back up, the master cluster's region servers will respond by selecting a new pool of slave region servers to replicate to.
 
@@ -1428,7 +1428,7 @@ This ensures that all the sources are aware that a new log exists before the reg
 The queue items are discarded when the replication thread cannot read more entries from a file (because it reached the end of the last block) and there are other files in the queue.
 This means that if a source is up to date and replicates from the log that the region server writes to, reading up to the "end" of the current file will not delete the item in the queue.
 
-A log can be archived if it is no longer used or if the number of logs exceeds [code]+hbase.regionserver.maxlogs+ because the insertion rate is faster than regions are flushed.
+A log can be archived if it is no longer used or if the number of logs exceeds `hbase.regionserver.maxlogs` because the insertion rate is faster than regions are flushed.
 When a log is archived, the source threads are notified that the path for that log changed.
 If a particular source has already finished with an archived log, it will just ignore the message.
 If the log is in the queue, the path will be updated in memory.
@@ -1463,7 +1463,7 @@ The next time the cleaning process needs to look for a log, it starts by using i
 When no region servers are failing, keeping track of the logs in ZooKeeper adds no value.
 Unfortunately, region servers do fail, and since ZooKeeper is highly available, it is useful for managing the transfer of the queues in the event of a failure.
 
-Each of the master cluster region servers keeps a watcher on every other region server, in order to be notified when one dies (just as the master does). When a failure happens, they all race to create a znode called [literal]+lock+ inside the dead region server's znode that contains its queues.
+Each of the master cluster region servers keeps a watcher on every other region server, in order to be notified when one dies (just as the master does). When a failure happens, they all race to create a znode called `lock` inside the dead region server's znode that contains its queues.
 The region server that creates it successfully then transfers all the queues to its own znode, one at a time since ZooKeeper does not support renaming queues.
 After queues are all transferred, they are deleted from the old location.
 The znodes that were recovered are renamed with the ID of the slave cluster appended with the name of the dead server.
@@ -1472,9 +1472,9 @@ Next, the master cluster region server creates one new source thread per copied
 The main difference is that those queues will never receive new data, since they do not belong to their new region server.
 When the reader hits the end of the last log, the queue's znode is deleted and the master cluster region server closes that replication source.
 
-Given a master cluster with 3 region servers replicating to a single slave with id [literal]+2+, the following hierarchy represents what the znodes layout could be at some point in time.
-The region servers' znodes all contain a [literal]+peers+          znode which contains a single queue.
-The znode names in the queues represent the actual file names on HDFS in the form [literal]+address,port.timestamp+.
+Given a master cluster with 3 region servers replicating to a single slave with id `2`, the following hierarchy represents what the znodes layout could be at some point in time.
+The region servers' znodes all contain a `peers` znode which contains a single queue.
+The znode names in the queues represent the actual file names on HDFS in the form `address,port.timestamp`.
 
 ----
 
@@ -1553,16 +1553,16 @@ The new layout will be:
 
 The following metrics are exposed at the global region server level and (since HBase 0.95) at the peer level:
 
-[code]+source.sizeOfLogQueue+::
+`source.sizeOfLogQueue`::
   number of WALs to process (excludes the one which is being processed) at the Replication source
 
-[code]+source.shippedOps+::
+`source.shippedOps`::
   number of mutations shipped
 
-[code]+source.logEditsRead+::
+`source.logEditsRead`::
   number of mutations read from WALs at the replication source
 
-[code]+source.ageOfLastShippedOp+::
+`source.ageOfLastShippedOp`::
   age of last batch that was shipped by the replication source
 
 === Replication Configuration Options
@@ -1679,7 +1679,7 @@ The disadvantages of these methods are that you can degrade region server perfor
 [[ops.snapshots.configuration]]
 === Configuration
 
-To turn on the snapshot support just set the [var]+hbase.snapshot.enabled+        property to true.
+To turn on snapshot support, just set the `hbase.snapshot.enabled` property to true.
 (Snapshots are enabled by default in 0.95+ and off by default in 0.94.6+)
 
 [source,java]
@@ -1789,7 +1789,7 @@ $ bin/hbase class org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot MySn
 ----
 
 .Limiting Bandwidth Consumption
-You can limit the bandwidth consumption when exporting a snapshot, by specifying the [code]+-bandwidth+ parameter, which expects an integer representing megabytes per second.
+You can limit the bandwidth consumption when exporting a snapshot, by specifying the `-bandwidth` parameter, which expects an integer representing megabytes per second.
 The following example limits the above example to 200 MB/sec.
 
 [source,bourne]
@@ -1856,7 +1856,7 @@ Generally less regions makes for a smoother running cluster (you can always manu
 The number of regions cannot be configured directly (unless you fully <<disable.splitting,disable.splitting>>); adjust the region size to achieve the target number of regions given the table size.
 
 When configuring regions for multiple tables, note that most region settings can be set on a per-table basis via link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html[HTableDescriptor], as well as shell commands.
-These settings will override the ones in [var]+hbase-site.xml+.
+These settings will override the ones in `hbase-site.xml`.
 That is useful if your tables have different workloads/use cases.
 
 Also note that in the discussion of region sizes here, _HDFS replication factor is not (and should not be) taken into account, whereas
@@ -1957,7 +1957,7 @@ See <<compaction,compaction>> for some details.
 
 When provisioning for large data sizes, however, it's good to keep in mind that compactions can affect write throughput.
 Thus, for write-intensive workloads, you may opt for less frequent compactions and more store files per region.
-Minimum number of files for compactions ([var]+hbase.hstore.compaction.min+) can be set to higher value; <<hbase.hstore.blockingstorefiles,hbase.hstore.blockingStoreFiles>> should also be increased, as more files might accumulate in such case.
+The minimum number of files for compactions (`hbase.hstore.compaction.min`) can be set to a higher value; <<hbase.hstore.blockingstorefiles,hbase.hstore.blockingStoreFiles>> should also be increased, as more files might accumulate in that case.
 You may also consider manually managing compactions: <<managed.compactions,managed.compactions>>
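A minimal _hbase-site.xml_ sketch of this tuning follows; the values here are illustrative assumptions, not recommendations from the text:

[source,xml]
----
<!-- Write-intensive workload: compact less often, allow more store files to accumulate -->
<property>
  <name>hbase.hstore.compaction.min</name>
  <!-- illustrative: raised above the usual minimum -->
  <value>5</value>
</property>
<property>
  <name>hbase.hstore.blockingStoreFiles</name>
  <!-- illustrative: raised so writes are not blocked as files accumulate -->
  <value>20</value>
</property>
----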
 
 [[ops.capacity.config.presplit]]

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/asciidoc/_chapters/performance.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/performance.adoc b/src/main/asciidoc/_chapters/performance.adoc
index 48fd9dd..11a0f5e 100644
--- a/src/main/asciidoc/_chapters/performance.adoc
+++ b/src/main/asciidoc/_chapters/performance.adoc
@@ -111,9 +111,9 @@ Are all the network interfaces functioning correctly? Are you sure? See the Trou
 
 In his presentation, link:http://www.slideshare.net/cloudera/hbase-hug-presentation[Avoiding Full GCs
            with MemStore-Local Allocation Buffers], Todd Lipcon describes two cases of stop-the-world garbage collections common in HBase, especially during loading: CMS failure modes and old generation heap fragmentation.
-To address the first, start the CMS earlier than default by adding [code]+-XX:CMSInitiatingOccupancyFraction+ and setting it down from defaults.
+To address the first, start the CMS earlier than default by adding `-XX:CMSInitiatingOccupancyFraction` and setting it down from defaults.
 Start at 60 or 70 percent (the lower you bring the threshold, the more GC is done and the more CPU is used). To address the second fragmentation issue, Todd added an experimental facility, 
-(((MSLAB))), that must be explicitly enabled in Apache HBase 0.90.x (Its defaulted to be on in Apache 0.92.x HBase). See [code]+hbase.hregion.memstore.mslab.enabled+ to true in your [class]+Configuration+.
+(((MSLAB))), that must be explicitly enabled in Apache HBase 0.90.x (it is on by default in Apache HBase 0.92.x). Set `hbase.hregion.memstore.mslab.enabled` to true in your `Configuration`.
 See the cited slides for background and detail.
 The latest JVMs do better with regard to fragmentation, so make sure you are running a recent release.
 Read down in the message, link:http://osdir.com/ml/hotspot-gc-use/2011-11/msg00002.html[Identifying
@@ -125,7 +125,7 @@ Disable MSLAB in this case, or lower the amount of memory it uses or float less
 If you have a write-heavy workload, check out link:https://issues.apache.org/jira/browse/HBASE-8163[HBASE-8163
             MemStoreChunkPool: An improvement for JAVA GC when using MSLAB].
 It describes configurations to lower the amount of young GC during write-heavy loadings.
-If you do not have HBASE-8163 installed, and you are trying to improve your young GC times, one trick to consider -- courtesy of our Liang Xie -- is to set the GC config [var]+-XX:PretenureSizeThreshold+ in [path]_hbase-env.sh_ to be just smaller than the size of [var]+hbase.hregion.memstore.mslab.chunksize+ so MSLAB allocations happen in the tenured space directly rather than first in the young gen.
+If you do not have HBASE-8163 installed, and you are trying to improve your young GC times, one trick to consider -- courtesy of our Liang Xie -- is to set the GC config `-XX:PretenureSizeThreshold` in _hbase-env.sh_ to be just smaller than the size of `hbase.hregion.memstore.mslab.chunksize` so MSLAB allocations happen in the tenured space directly rather than first in the young gen.
 You'd do this because these MSLAB allocations are likely going to make it to the old gen anyway, and rather than pay the price of copies between s0 and s1 in eden space followed by the copy up from young to old gen after the MSLABs have achieved sufficient tenure, save a bit of YGC churn and allocate in the old gen directly.
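As a sketch in _hbase-env.sh_, assuming the default MSLAB chunk size of 2 MB (`hbase.hregion.memstore.mslab.chunksize` = 2097152 bytes):

[source,bourne]
----
# Tenure MSLAB chunks directly in the old gen.
# The value assumes the default 2 MB chunk size; it must be just smaller
# than hbase.hregion.memstore.mslab.chunksize in your configuration.
export HBASE_OPTS="$HBASE_OPTS -XX:PretenureSizeThreshold=2097088"
----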
 
 For more information about GC logs, see <<trouble.log.gc,trouble.log.gc>>. 
@@ -145,12 +145,12 @@ See <<recommended_configurations,recommended configurations>>.
 For larger systems, managing link:[compactions and splits] may be something you want to consider.
 
 [[perf.handlers]]
-=== [var]+hbase.regionserver.handler.count+
+=== `hbase.regionserver.handler.count`
 
 See <<hbase.regionserver.handler.count,hbase.regionserver.handler.count>>. 
 
 [[perf.hfile.block.cache.size]]
-=== [var]+hfile.block.cache.size+
+=== `hfile.block.cache.size`
 
 See <<hfile.block.cache.size,hfile.block.cache.size>>.
 A memory setting for the RegionServer process. 
@@ -190,83 +190,79 @@ tableDesc.addFamily(cfDesc);
 See the API documentation for link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html[CacheConfig].
 
 [[perf.rs.memstore.size]]
-=== [var]+hbase.regionserver.global.memstore.size+
+=== `hbase.regionserver.global.memstore.size`
 
 See <<hbase.regionserver.global.memstore.size,hbase.regionserver.global.memstore.size>>.
 This memory setting is often adjusted for the RegionServer process depending on needs. 
 
 [[perf.rs.memstore.size.lower.limit]]
-=== [var]+hbase.regionserver.global.memstore.size.lower.limit+
+=== `hbase.regionserver.global.memstore.size.lower.limit`
 
 See <<hbase.regionserver.global.memstore.size.lower.limit,hbase.regionserver.global.memstore.size.lower.limit>>.
 This memory setting is often adjusted for the RegionServer process depending on needs. 
 
 [[perf.hstore.blockingstorefiles]]
-=== [var]+hbase.hstore.blockingStoreFiles+
+=== `hbase.hstore.blockingStoreFiles`
 
 See <<hbase.hstore.blockingstorefiles,hbase.hstore.blockingStoreFiles>>.
 If there is blocking in the RegionServer logs, increasing this can help. 
 
 [[perf.hregion.memstore.block.multiplier]]
-=== [var]+hbase.hregion.memstore.block.multiplier+
+=== `hbase.hregion.memstore.block.multiplier`
 
 See <<hbase.hregion.memstore.block.multiplier,hbase.hregion.memstore.block.multiplier>>.
 If there is enough RAM, increasing this can help. 
 
 [[hbase.regionserver.checksum.verify.performance]]
-=== [var]+hbase.regionserver.checksum.verify+
+=== `hbase.regionserver.checksum.verify`
 
 Have HBase write the checksum into the datablock and save having to do the checksum seek whenever you read.
 
 See <<hbase.regionserver.checksum.verify,hbase.regionserver.checksum.verify>>, <<hbase.hstore.bytes.per.checksum,hbase.hstore.bytes.per.checksum>> and <<hbase.hstore.checksum.algorithm,hbase.hstore.checksum.algorithm>>. For more information, see the release note on link:https://issues.apache.org/jira/browse/HBASE-5074[HBASE-5074 support checksums in HBase block cache]. 
 
-=== Tuning [code]+callQueue+ Options
+=== Tuning `callQueue` Options
 
 link:https://issues.apache.org/jira/browse/HBASE-11355[HBASE-11355] introduces several callQueue tuning mechanisms which can increase performance.
 See the JIRA for some benchmarking information.
 
-* To increase the number of callqueues, set +hbase.ipc.server.num.callqueue+ to a value greater than [literal]+1+.
-* To split the callqueue into separate read and write queues, set [code]+hbase.ipc.server.callqueue.read.ratio+ to a value between [literal]+0+ and [literal]+1+.
+* To increase the number of callqueues, set +hbase.ipc.server.num.callqueue+ to a value greater than `1`.
+* To split the callqueue into separate read and write queues, set `hbase.ipc.server.callqueue.read.ratio` to a value between `0` and `1`.
   This factor weights the queues toward writes (if below .5) or reads (if above .5). Another way to say this is that the factor determines what percentage of the split queues are used for reads.
   The following examples illustrate some of the possibilities.
   Note that you always have at least one write queue, no matter what setting you use.
 +
-* The default value of [literal]+0+ does not split the queue.
-* A value of [literal]+.3+ uses 30% of the queues for reading and 60% for writing.
-  Given a value of [literal]+10+ for +hbase.ipc.server.num.callqueue+, 3 queues would be used for reads and 7 for writes.
-* A value of [literal]+.5+ uses the same number of read queues and write queues.
-  Given a value of [literal]+10+ for +hbase.ipc.server.num.callqueue+, 5 queues would be used for reads and 5 for writes.
-* A value of [literal]+.6+ uses 60% of the queues for reading and 30% for reading.
-  Given a value of [literal]+10+ for +hbase.ipc.server.num.callqueue+, 7 queues would be used for reads and 3 for writes.
-* A value of [literal]+1.0+ uses one queue to process write requests, and all other queues process read requests.
-  A value higher than [literal]+1.0+                has the same effect as a value of [literal]+1.0+.
-  Given a value of [literal]+10+ for +hbase.ipc.server.num.callqueue+, 9 queues would be used for reads and 1 for writes.
+* The default value of `0` does not split the queue.
+* A value of `.3` uses 30% of the queues for reading and 70% for writing.
+  Given a value of `10` for +hbase.ipc.server.num.callqueue+, 3 queues would be used for reads and 7 for writes.
+* A value of `.5` uses the same number of read queues and write queues.
+  Given a value of `10` for +hbase.ipc.server.num.callqueue+, 5 queues would be used for reads and 5 for writes.
+* A value of `.6` uses 60% of the queues for reading and 40% for writing.
+  Given a value of `10` for +hbase.ipc.server.num.callqueue+, 7 queues would be used for reads and 3 for writes.
+* A value of `1.0` uses one queue to process write requests, and all other queues process read requests.
+  A value higher than `1.0`                has the same effect as a value of `1.0`.
+  Given a value of `10` for +hbase.ipc.server.num.callqueue+, 9 queues would be used for reads and 1 for writes.
 
 * You can also split the read queues so that separate queues are used for short reads (from Get operations) and long reads (from Scan operations), by setting the +hbase.ipc.server.callqueue.scan.ratio+ option.
  This option is a factor between 0 and 1, which determines the ratio of read queues used for Gets and Scans.
-  More queues are used for Gets if the value is below [literal]+.5+ and more are used for scans if the value is above [literal]+.5+.
+  More queues are used for Gets if the value is below `.5` and more are used for scans if the value is above `.5`.
   No matter what setting you use, at least one read queue is used for Get operations.
 +
-* A value of [literal]+0+ does not split the read queue.
-* A value of [literal]+.3+ uses 60% of the read queues for Gets and 30% for Scans.
-  Given a value of [literal]+20+ for +hbase.ipc.server.num.callqueue+ and a value of [literal]+.5
-  + for +hbase.ipc.server.callqueue.read.ratio+, 10 queues would be used for reads, out of those 10, 7 would be used for Gets and 3 for Scans.
-* A value of [literal]+.5+ uses half the read queues for Gets and half for Scans.
-  Given a value of [literal]+20+ for +hbase.ipc.server.num.callqueue+ and a value of [literal]+.5
-  + for +hbase.ipc.server.callqueue.read.ratio+, 10 queues would be used for reads, out of those 10, 5 would be used for Gets and 5 for Scans.
-* A value of [literal]+.6+ uses 30% of the read queues for Gets and 60% for Scans.
-  Given a value of [literal]+20+ for +hbase.ipc.server.num.callqueue+ and a value of [literal]+.5
-  + for +hbase.ipc.server.callqueue.read.ratio+, 10 queues would be used for reads, out of those 10, 3 would be used for Gets and 7 for Scans.
-* A value of [literal]+1.0+ uses all but one of the read queues for Scans.
-  Given a value of [literal]+20+ for +hbase.ipc.server.num.callqueue+ and a value of [literal]+.5
-  + for +hbase.ipc.server.callqueue.read.ratio+, 10 queues would be used for reads, out of those 10, 1 would be used for Gets and 9 for Scans.
-
-* You can use the new option +hbase.ipc.server.callqueue.handler.factor+ to programmatically tune the number of queues:
+* A value of `0` does not split the read queue.
+* A value of `.3` uses 70% of the read queues for Gets and 30% for Scans.
+  Given a value of `20` for +hbase.ipc.server.num.callqueue+ and a value of `.5` for `hbase.ipc.server.callqueue.read.ratio`, 10 queues would be used for reads, out of those 10, 7 would be used for Gets and 3 for Scans.
+* A value of `.5` uses half the read queues for Gets and half for Scans.
+  Given a value of `20` for +hbase.ipc.server.num.callqueue+ and a value of `.5` for `hbase.ipc.server.callqueue.read.ratio`, 10 queues would be used for reads, out of those 10, 5 would be used for Gets and 5 for Scans.
+* A value of `.6` uses 30% of the read queues for Gets and 70% for Scans.
+  Given a value of `20` for +hbase.ipc.server.num.callqueue+ and a value of `.5` for `hbase.ipc.server.callqueue.read.ratio`, 10 queues would be used for reads, out of those 10, 3 would be used for Gets and 7 for Scans.
+* A value of `1.0` uses all but one of the read queues for Scans.
+  Given a value of `20` for +hbase.ipc.server.num.callqueue+ and a value of `.5` for `hbase.ipc.server.callqueue.read.ratio`, 10 queues would be used for reads, out of those 10, 1 would be used for Gets and 9 for Scans.
+
+* You can use the new option `hbase.ipc.server.callqueue.handler.factor` to programmatically tune the number of queues:
 +
-* A value of [literal]+0+ uses a single shared queue between all the handlers.
-* A value of [literal]+1+ uses a separate queue for each handler.
-* A value between [literal]+0+ and [literal]+1+ tunes the number of queues against the number of handlers.
-  For instance, a value of [literal]+.5+ shares one queue between each two handlers.
+* A value of `0` uses a single shared queue between all the handlers.
+* A value of `1` uses a separate queue for each handler.
+* A value between `0` and `1` tunes the number of queues against the number of handlers.
+  For instance, a value of `.5` shares one queue between each two handlers.
 +
 Having more queues, such as in a situation where you have one queue per handler, reduces contention when adding a task to a queue or selecting it from a queue.
 The trade-off is that if you have some queues with long-running tasks, a handler may end up waiting to execute from that queue rather than processing another queue which has waiting tasks.
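Pulling the options above together, a _hbase-site.xml_ sketch using the 10-queue example from the text (the ratio values shown are illustrative):

[source,xml]
----
<property>
  <name>hbase.ipc.server.num.callqueue</name>
  <value>10</value>
</property>
<property>
  <name>hbase.ipc.server.callqueue.read.ratio</name>
  <!-- half the queues for reads, half for writes -->
  <value>0.5</value>
</property>
<property>
  <name>hbase.ipc.server.callqueue.scan.ratio</name>
  <!-- of the read queues, half for Gets, half for Scans -->
  <value>0.5</value>
</property>
----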
@@ -297,7 +293,7 @@ See also <<perf.compression.however,perf.compression.however>> for compression c
 [[schema.regionsize]]
 === Table RegionSize
 
-The regionsize can be set on a per-table basis via [code]+setFileSize+ on link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html[HTableDescriptor]        in the event where certain tables require different regionsizes than the configured default regionsize. 
+The regionsize can be set on a per-table basis via `setFileSize` on link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html[HTableDescriptor] when certain tables require different regionsizes than the configured default.
 
 See <<ops.capacity.regions,ops.capacity.regions>> for more information. 
 
@@ -330,8 +326,8 @@ For more information on Bloom filters in relation to HBase, see <<blooms,blooms>
 Since HBase 0.96, row-based Bloom filters are enabled by default.
 You may choose to disable them or to change some tables to use row+column Bloom filters, depending on the characteristics of your data and how it is loaded into HBase.
 
-To determine whether Bloom filters could have a positive impact, check the value of [code]+blockCacheHitRatio+ in the RegionServer metrics.
-If Bloom filters are enabled, the value of [code]+blockCacheHitRatio+ should increase, because the Bloom filter is filtering out blocks that are definitely not needed. 
+To determine whether Bloom filters could have a positive impact, check the value of `blockCacheHitRatio` in the RegionServer metrics.
+If Bloom filters are enabled, the value of `blockCacheHitRatio` should increase, because the Bloom filter is filtering out blocks that are definitely not needed. 
 
 You can choose to enable Bloom filters for a row or for a row+column combination.
 If you generally scan entire rows, the row+column combination will not provide any benefit.
@@ -348,11 +344,11 @@ Bloom filters need to be rebuilt upon deletion, so may not be appropriate in env
 
 Bloom filters are enabled on a Column Family.
 You can do this by using the setBloomFilterType method of HColumnDescriptor or using the HBase API.
-Valid values are [literal]+NONE+ (the default), [literal]+ROW+, or [literal]+ROWCOL+.
-See <<bloom.filters.when,bloom.filters.when>> for more information on [literal]+ROW+ versus [literal]+ROWCOL+.
+Valid values are `NONE` (the default), `ROW`, or `ROWCOL`.
+See <<bloom.filters.when,bloom.filters.when>> for more information on `ROW` versus `ROWCOL`.
 See also the API documentation for link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor].
 
-The following example creates a table and enables a ROWCOL Bloom filter on the [literal]+colfam1+ column family.
+The following example creates a table and enables a ROWCOL Bloom filter on the `colfam1` column family.
 
 ----
 
@@ -361,7 +357,7 @@ hbase> create 'mytable',{NAME => 'colfam1', BLOOMFILTER => 'ROWCOL'}
 
 ==== Configuring Server-Wide Behavior of Bloom Filters
 
-You can configure the following settings in the [path]_hbase-site.xml_. 
+You can configure the following settings in the _hbase-site.xml_. 
 
 [cols="1,1,1", options="header"]
 |===
@@ -487,7 +483,7 @@ A useful pattern to speed up the bulk import process is to pre-create empty regi
 Be somewhat conservative in this, because too-many regions can actually degrade performance. 
 
 There are two different approaches to pre-creating splits.
-The first approach is to rely on the default [code]+HBaseAdmin+ strategy (which is implemented in [code]+Bytes.split+)... 
+The first approach is to rely on the default `HBaseAdmin` strategy (which is implemented in `Bytes.split`)... 
 
 [source,java]
 ----
@@ -513,23 +509,23 @@ See <<manual_region_splitting_decisions,manual region splitting decisions>>
 [[def.log.flush]]
 ===  Table Creation: Deferred Log Flush 
 
-The default behavior for Puts using the Write Ahead Log (WAL) is that [class]+WAL+ edits will be written immediately.
+The default behavior for Puts using the Write Ahead Log (WAL) is that `WAL` edits will be written immediately.
 If deferred log flush is used, WAL edits are kept in memory until the flush period.
-The benefit is aggregated and asynchronous [class]+WAL+- writes, but the potential downside is that if the RegionServer goes down the yet-to-be-flushed edits are lost.
+The benefit is aggregated and asynchronous `WAL` writes, but the potential downside is that if the RegionServer goes down, the yet-to-be-flushed edits are lost.
 This is safer, however, than not using WAL at all with Puts. 
 
 Deferred log flush can be configured on tables via link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html[HTableDescriptor].
-The default value of [var]+hbase.regionserver.optionallogflushinterval+ is 1000ms. 
+The default value of `hbase.regionserver.optionallogflushinterval` is 1000ms. 
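As a sketch, raising the flush interval above its 1000 ms default in _hbase-site.xml_ (the `10000` value here is an illustrative assumption):

[source,xml]
----
<property>
  <name>hbase.regionserver.optionallogflushinterval</name>
  <!-- milliseconds; edits may be lost for up to this long if the RegionServer dies -->
  <value>10000</value>
</property>
----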
 
 [[perf.hbase.client.autoflush]]
 === HBase Client: AutoFlush
 
 When performing a lot of Puts, make sure that setAutoFlush is set to false on your link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html[HTable] instance.
 Otherwise, the Puts will be sent one at a time to the RegionServer.
-Puts added via [code]+ htable.add(Put)+ and [code]+ htable.add( <List> Put)+ wind up in the same write buffer.
-If [code]+autoFlush = false+, these messages are not sent until the write-buffer is filled.
+Puts added via `htable.add(Put)` and `htable.add(List<Put>)` wind up in the same write buffer.
+If `autoFlush = false`, these messages are not sent until the write-buffer is filled.
 To explicitly flush the messages, call `flushCommits`.
-Calling [method]+close+ on the [class]+HTable+ instance will invoke [method]+flushCommits+.
+Calling `close` on the `HTable` instance will invoke `flushCommits`.
 
 [[perf.hbase.client.putwal]]
 === HBase Client: Turn off WAL on Puts
@@ -547,8 +543,8 @@ To disable the WAL, see <<wal.disable,wal.disable>>.
 [[perf.hbase.client.regiongroup]]
 === HBase Client: Group Puts by RegionServer
 
-In addition to using the writeBuffer, grouping [class]+Put+s by RegionServer can reduce the number of client RPC calls per writeBuffer flush.
-There is a utility [class]+HTableUtil+ currently on TRUNK that does this, but you can either copy that or implement your own version for those still on 0.90.x or earlier. 
+In addition to using the writeBuffer, grouping `Put`s by RegionServer can reduce the number of client RPC calls per writeBuffer flush.
+There is a utility `HTableUtil` currently on TRUNK that does this; for those still on 0.90.x or earlier, you can either copy it or implement your own version. 
 
 [[perf.hbase.write.mr.reducer]]
 === MapReduce: Skip The Reducer
@@ -599,17 +595,17 @@ Timeouts can also happen in a non-MapReduce use case (i.e., single threaded HBas
 === Scan Attribute Selection
 
 Whenever a Scan is used to process large numbers of rows (and especially when used as a MapReduce source), be aware of which attributes are selected.
-If [code]+scan.addFamily+        is called then _all_ of the attributes in the specified ColumnFamily will be returned to the client.
+If `scan.addFamily` is called then _all_ of the attributes in the specified ColumnFamily will be returned to the client.
 If only a small number of the available attributes are to be processed, then only those attributes should be specified in the input scan because attribute over-selection is a non-trivial performance penalty over large datasets. 
 
 [[perf.hbase.client.seek]]
 === Avoid scan seeks
 
-When columns are selected explicitly with [code]+scan.addColumn+, HBase will schedule seek operations to seek between the selected columns.
+When columns are selected explicitly with `scan.addColumn`, HBase will schedule seek operations to seek between the selected columns.
 When rows have few columns and each column has only a few versions this can be inefficient.
 A seek operation is generally slower if it does not seek at least past 5-10 columns/versions or 512-1024 bytes.
 
-In order to opportunistically look ahead a few columns/versions to see if the next column/version can be found that way before a seek operation is scheduled, a new attribute [code]+Scan.HINT_LOOKAHEAD+ can be set the on Scan object.
+In order to opportunistically look ahead a few columns/versions to see if the next column/version can be found that way before a seek operation is scheduled, a new attribute `Scan.HINT_LOOKAHEAD` can be set on the Scan object.
 The following code instructs the RegionServer to attempt two iterations of next before a seek is scheduled:
 
 [source,java]
@@ -652,7 +648,7 @@ htable.close();
 === Block Cache
 
 link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scan] instances can be set to use the block cache in the RegionServer via the `setCacheBlocks` method.
-For input Scans to MapReduce jobs, this should be [var]+false+.
+For input Scans to MapReduce jobs, this should be `false`.
 For frequently accessed rows, it is advisable to use the block cache.
 
 Cache more data by moving your Block Cache offheap.
@@ -661,7 +657,7 @@ See <<offheap.blockcache,offheap.blockcache>>
 [[perf.hbase.client.rowkeyonly]]
 === Optimal Loading of Row Keys
 
-When performing a table link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[scan]        where only the row keys are needed (no families, qualifiers, values or timestamps), add a FilterList with a [var]+MUST_PASS_ALL+ operator to the scanner using [method]+setFilter+.
+When performing a table link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[scan] where only the row keys are needed (no families, qualifiers, values or timestamps), add a FilterList with a `MUST_PASS_ALL` operator to the scanner using `setFilter`.
 The filter list should include both a link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.html[FirstKeyOnlyFilter]        and a link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/KeyOnlyFilter.html[KeyOnlyFilter].
 Using this filter combination will result in a worst case scenario of a RegionServer reading a single value from disk and minimal network traffic to the client for a single row. 
 
@@ -693,38 +689,38 @@ See also <<schema.bloom,schema.bloom>>.
 [[bloom_footprint]]
 ==== Bloom StoreFile footprint
 
-Bloom filters add an entry to the [class]+StoreFile+ general [class]+FileInfo+ data structure and then two extra entries to the [class]+StoreFile+ metadata section.
+Bloom filters add an entry to the `StoreFile` general `FileInfo` data structure and then two extra entries to the `StoreFile` metadata section.
 
-===== BloomFilter in the [class]+StoreFile+[class]+FileInfo+ data structure
+===== BloomFilter in the `StoreFile` `FileInfo` data structure
 
-[class]+FileInfo+ has a [var]+BLOOM_FILTER_TYPE+ entry which is set to [var]+NONE+, [var]+ROW+ or [var]+ROWCOL.+
+`FileInfo` has a `BLOOM_FILTER_TYPE` entry which is set to `NONE`, `ROW` or `ROWCOL`.
 
-===== BloomFilter entries in [class]+StoreFile+ metadata
+===== BloomFilter entries in `StoreFile` metadata
 
-[var]+BLOOM_FILTER_META+ holds Bloom Size, Hash Function used, etc.
-Its small in size and is cached on [class]+StoreFile.Reader+ load
+`BLOOM_FILTER_META` holds Bloom Size, Hash Function used, etc.
+It is small in size and is cached on `StoreFile.Reader` load.
 
-[var]+BLOOM_FILTER_DATA+ is the actual bloomfilter data.
+`BLOOM_FILTER_DATA` is the actual bloomfilter data.
 Obtained on-demand.
 Stored in the LRU cache, if it is enabled (it is enabled by default).
 
 [[config.bloom]]
 ==== Bloom Filter Configuration
 
-===== [var]+io.hfile.bloom.enabled+ global kill switch
+===== `io.hfile.bloom.enabled` global kill switch
 
-[code]+io.hfile.bloom.enabled+ in [class]+Configuration+ serves as the kill switch in case something goes wrong.
-Default = [var]+true+.
+`io.hfile.bloom.enabled` in `Configuration` serves as the kill switch in case something goes wrong.
+Default = `true`.
 
-===== [var]+io.hfile.bloom.error.rate+
+===== `io.hfile.bloom.error.rate`
 
-[var]+io.hfile.bloom.error.rate+ = average false positive rate.
+`io.hfile.bloom.error.rate` = average false positive rate.
 Default = 1%. Decreasing the rate by ½ (e.g. to .5%) costs one additional bit per bloom entry.
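The "one extra bit per halving" arithmetic can be sanity-checked against the information-theoretic lower bound of log2(1/p) bits per entry. This is a sketch of that arithmetic only, not HBase's exact internal bloom sizing; the class and method names are illustrative.

```java
public class BloomBits {
    // Lower bound on bits needed per entry for false-positive rate p:
    // log2(1/p). Halving p adds exactly log2(2) = 1 bit under this bound.
    static double bitsPerEntry(double p) {
        return Math.log(1.0 / p) / Math.log(2.0);
    }

    public static void main(String[] args) {
        double onePercent = bitsPerEntry(0.01);   // ~6.64 bits per entry
        double halfPercent = bitsPerEntry(0.005); // ~7.64 bits per entry
        // The difference is exactly one bit.
        System.out.println(halfPercent - onePercent);
    }
}
```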
 
-===== [var]+io.hfile.bloom.max.fold+
+===== `io.hfile.bloom.max.fold`
 
-[var]+io.hfile.bloom.max.fold+ = guaranteed minimum fold rate.
+`io.hfile.bloom.max.fold` = guaranteed minimum fold rate.
 Most people should leave this alone.
 Default = 7, or can collapse to at least 1/128th of original size.
 See the _Development Process_ section of the document link:https://issues.apache.org/jira/secure/attachment/12444007/Bloom_Filters_in_HBase.pdf[BloomFilters
@@ -740,9 +736,9 @@ Hedged reads can be helpful for times where a rare slow read is caused by a tran
 
 Because an HBase RegionServer is an HDFS client, you can enable hedged reads in HBase, by adding the following properties to the RegionServer's hbase-site.xml and tuning the values to suit your environment.
 
-* .Configuration for Hedged Reads[code]+dfs.client.hedged.read.threadpool.size+ - the number of threads dedicated to servicing hedged reads.
+.Configuration for Hedged Reads
+* `dfs.client.hedged.read.threadpool.size` - the number of threads dedicated to servicing hedged reads.
   If this is set to 0 (the default), hedged reads are disabled.
-* [code]+dfs.client.hedged.read.threshold.millis+ - the number of milliseconds to wait before spawning a second read thread.
+* `dfs.client.hedged.read.threshold.millis` - the number of milliseconds to wait before spawning a second read thread.
 
 .Hedged Reads Configuration Example
 ====
@@ -782,9 +778,9 @@ See also <<compaction,compaction>> and link:http://hbase.apache.org/apidocs/org/
 [[perf.deleting.rpc]]
 === Delete RPC Behavior
 
-Be aware that [code]+htable.delete(Delete)+ doesn't use the writeBuffer.
+Be aware that `htable.delete(Delete)` doesn't use the writeBuffer.
 It will execute a RegionServer RPC with each invocation.
-For a large number of deletes, consider [code]+htable.delete(List)+. 
+For a large number of deletes, consider `htable.delete(List)`. 
 
 See link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#delete%28org.apache.hadoop.hbase.client.Delete%29      
 
@@ -818,7 +814,7 @@ See link:http://blog.cloudera.com/blog/2013/08/how-improved-short-circuit-local-
 See link:http://archive.cloudera.com/cdh4/cdh/4/hadoop/hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html[Hadoop
           shortcircuit reads configuration page] for how to enable the latter, better version of shortcircuit.
 For example, here is a minimal config.
-enabling short-circuit reads added to [path]_hbase-site.xml_: 
+enabling short-circuit reads added to _hbase-site.xml_: 
 
 [source,xml]
 ----
@@ -845,9 +841,9 @@ Be careful about permissions for the directory that hosts the shared domain sock
 
 If you are running on an old Hadoop, one that is without link:https://issues.apache.org/jira/browse/HDFS-347[HDFS-347] but that has link:https://issues.apache.org/jira/browse/HDFS-2246[HDFS-2246], you must set two configurations.
 First, the hdfs-site.xml needs to be amended.
-Set the property [var]+dfs.block.local-path-access.user+ to be the _only_        user that can use the shortcut.
+Set the property `dfs.block.local-path-access.user` to be the _only_        user that can use the shortcut.
 This has to be the user that started HBase.
-Then in hbase-site.xml, set [var]+dfs.client.read.shortcircuit+ to be [var]+true+      
+Then in hbase-site.xml, set `dfs.client.read.shortcircuit` to be `true`.
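A minimal sketch of the two settings just described (the username is an assumption; it must be whatever user started HBase):

```xml
<!-- hdfs-site.xml: only this user may use the shortcut.
     "hbase" is an assumed username, shown for illustration. -->
<property>
  <name>dfs.block.local-path-access.user</name>
  <value>hbase</value>
</property>

<!-- hbase-site.xml -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
```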
 
 Services -- at least the HBase RegionServers -- will need to be restarted in order to pick up the new configurations. 
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/asciidoc/_chapters/preface.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/preface.adoc b/src/main/asciidoc/_chapters/preface.adoc
index b3a580b..4f8941a 100644
--- a/src/main/asciidoc/_chapters/preface.adoc
+++ b/src/main/asciidoc/_chapters/preface.adoc
@@ -32,7 +32,7 @@ This is the official reference guide for the link:http://hbase.apache.org/[HBase
 Herein you will find either the definitive documentation on an HBase topic as of its standing when the referenced HBase version shipped, or it will point to the location in link:http://hbase.apache.org/apidocs/index.html[javadoc], link:https://issues.apache.org/jira/browse/HBASE[JIRA] or link:http://wiki.apache.org/hadoop/Hbase[wiki] where the pertinent information can be found.
 
 .About This Guide
-This reference guide is a work in progress. The source for this guide can be found in the [path]_src/main/docbkx_ directory of the HBase source. This reference guide is marked up using link:http://www.docbook.org/[DocBook] from which the the finished guide is generated as part of the 'site' build target. Run 
+This reference guide is a work in progress. The source for this guide can be found in the _src/main/asciidoc_ directory of the HBase source. This reference guide is marked up using Asciidoc, from which the finished guide is generated as part of the 'site' build target. Run 
 [source,bourne]
 ----
 mvn site
@@ -42,7 +42,7 @@ Amendments and improvements to the documentation are welcomed.
 Click link:https://issues.apache.org/jira/secure/CreateIssueDetails!init.jspa?pid=12310753&issuetype=1&components=12312132&summary=SHORT+DESCRIPTION[this link] to file a new documentation bug against Apache HBase with some values pre-selected.
 
 .Contributing to the Documentation
-For an overview of Docbook and suggestions to get started contributing to the documentation, see <<appendix_contributing_to_documentation,appendix contributing to documentation>>.
+For an overview of Asciidoc and suggestions to get started contributing to the documentation, see <<appendix_contributing_to_documentation,appendix contributing to documentation>>.
 
 .Providing Feedback
 This guide allows you to leave comments or questions on any page, using Disqus.

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/asciidoc/_chapters/rpc.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/rpc.adoc b/src/main/asciidoc/_chapters/rpc.adoc
index 9c8e3cc..5d8b230 100644
--- a/src/main/asciidoc/_chapters/rpc.adoc
+++ b/src/main/asciidoc/_chapters/rpc.adoc
@@ -199,24 +199,24 @@ If later, fat request has clear advantage, can roll out a v2 later.
 ==== RPC Configurations
 
 .CellBlock Codecs
-To enable a codec other than the default [class]+KeyValueCodec+, set [var]+hbase.client.rpc.codec+ to the name of the Codec class to use.
-Codec must implement hbase's [class]+Codec+ Interface.
+To enable a codec other than the default `KeyValueCodec`, set `hbase.client.rpc.codec` to the name of the Codec class to use.
+Codec must implement hbase's `Codec` Interface.
 After connection setup, all passed cellblocks will be sent with this codec.
-The server will return cellblocks using this same codec as long as the codec is on the servers' CLASSPATH (else you will get [class]+UnsupportedCellCodecException+).
+The server will return cellblocks using this same codec as long as the codec is on the servers' CLASSPATH (else you will get `UnsupportedCellCodecException`).
 
-To change the default codec, set [var]+hbase.client.default.rpc.codec+. 
+To change the default codec, set `hbase.client.default.rpc.codec`. 
 
 To disable cellblocks completely and to go pure protobuf, set the default to the empty String and do not specify a codec in your Configuration.
-So, set [var]+hbase.client.default.rpc.codec+ to the empty string and do not set [var]+hbase.client.rpc.codec+.
+So, set `hbase.client.default.rpc.codec` to the empty string and do not set `hbase.client.rpc.codec`.
 This will cause the client to connect to the server with no codec specified.
 If a server sees no codec, it will return all responses in pure protobuf.
 Running pure protobuf all the time will be slower than running with cellblocks. 
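The pure-protobuf setup described above can be sketched as a client-side configuration fragment (illustrative only):

```xml
<!-- hbase-site.xml: empty default codec, and no hbase.client.rpc.codec set,
     so the client connects with no codec and the server answers in pure protobuf -->
<property>
  <name>hbase.client.default.rpc.codec</name>
  <value></value>
</property>
```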
 
 .Compression
 Uses Hadoop's compression codecs.
-To enable compressing of passed CellBlocks, set [var]+hbase.client.rpc.compressor+ to the name of the Compressor to use.
+To enable compressing of passed CellBlocks, set `hbase.client.rpc.compressor` to the name of the Compressor to use.
 Compressor must implement Hadoop's CompressionCodec Interface.
 After connection setup, all passed cellblocks will be sent compressed.
-The server will return cellblocks compressed using this same compressor as long as the compressor is on its CLASSPATH (else you will get [class]+UnsupportedCompressionCodecException+).
+The server will return cellblocks compressed using this same compressor as long as the compressor is on its CLASSPATH (else you will get `UnsupportedCompressionCodecException`).
 
 :numbered:

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/asciidoc/_chapters/schema_design.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/schema_design.adoc b/src/main/asciidoc/_chapters/schema_design.adoc
index 9268edf..7570d6c 100644
--- a/src/main/asciidoc/_chapters/schema_design.adoc
+++ b/src/main/asciidoc/_chapters/schema_design.adoc
@@ -123,7 +123,7 @@ foo0004
 ----
 
 Now, imagine that you would like to spread these across four different regions.
-You decide to use four different salts: [literal]+a+, [literal]+b+, [literal]+c+, and [literal]+d+.
+You decide to use four different salts: `a`, `b`, `c`, and `d`.
 In this scenario, each of these letter prefixes will be on a different region.
 After applying the salts, you have the following rowkeys instead.
 Since you can now write to four separate regions, you theoretically have four times the throughput when writing that you would have if all the writes were going to the same region.
@@ -159,7 +159,7 @@ Using a deterministic hash allows the client to reconstruct the complete rowkey
 
 .Hashing Example
 [example]
-Given the same situation in the salting example above, you could instead apply a one-way hash that would cause the row with key [literal]+foo0003+ to always, and predictably, receive the [literal]+a+ prefix.
+Given the same situation in the salting example above, you could instead apply a one-way hash that would cause the row with key `foo0003` to always, and predictably, receive the `a` prefix.
 Then, to retrieve that row, you would already know the key.
 You could also optimize things so that certain pairs of keys were always in the same region, for instance.
 
@@ -292,8 +292,8 @@ See link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.ht
 
 A common problem in database processing is quickly finding the most recent version of a value.
 A technique using reverse timestamps as a part of the key can help greatly with a special case of this problem.
-Also found in the HBase chapter of Tom White's book Hadoop: The Definitive Guide (O'Reilly), the technique involves appending ([code]+Long.MAX_VALUE -
-          timestamp+) to the end of any key, e.g., [key][reverse_timestamp]. 
+Also found in the HBase chapter of Tom White's book Hadoop: The Definitive Guide (O'Reilly), the technique involves appending (`Long.MAX_VALUE - timestamp`) to the end of any key, e.g., [key][reverse_timestamp]. 
 
 The most recent value for [key] in a table can be found by performing a Scan for [key] and obtaining the first record.
 Since HBase keys are in sorted order, this key sorts before any older row-keys for [key] and thus is first. 
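The sort behavior can be sketched in a few lines of plain Java (class and method names are illustrative):

```java
public class ReverseTs {
    // Append (Long.MAX_VALUE - timestamp) so newer rows sort first.
    static long reverse(long ts) {
        return Long.MAX_VALUE - ts;
    }

    public static void main(String[] args) {
        long older = 1_000L, newer = 2_000L;
        // The newer event gets the smaller reversed value, so a Scan
        // for [key] returns it first.
        System.out.println(reverse(newer) < reverse(older)); // true
    }
}
```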
@@ -317,7 +317,7 @@ This is a fairly common question on the HBase dist-list so it pays to get the ro
 === Relationship Between RowKeys and Region Splits
 
 If you pre-split your table, it is _critical_ to understand how your rowkey will be distributed across the region boundaries.
-As an example of why this is important, consider the example of using displayable hex characters as the lead position of the key (e.g., "0000000000000000" to "ffffffffffffffff"). Running those key ranges through [code]+Bytes.split+ (which is the split strategy used when creating regions in [code]+HBaseAdmin.createTable(byte[] startKey, byte[] endKey, numRegions)+ for 10 regions will generate the following splits...
+As an example of why this is important, consider the example of using displayable hex characters as the lead position of the key (e.g., "0000000000000000" to "ffffffffffffffff"). Running those key ranges through `Bytes.split` (which is the split strategy used when creating regions in `HBaseAdmin.createTable(byte[] startKey, byte[] endKey, numRegions)`) for 10 regions will generate the following splits...
 
 ----
 
@@ -428,7 +428,7 @@ This applies to _all_ versions of a row - even the current one.
 The TTL time encoded in HBase for the row is specified in UTC. 
 
 Store files which contain only expired rows are deleted on minor compaction.
-Setting [var]+hbase.store.delete.expired.storefile+ to [code]+false+ disables this feature.
+Setting `hbase.store.delete.expired.storefile` to `false` disables this feature.
 Setting link:[minimum number of versions] to other than 0 also disables this.
 
 See link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor] for more information. 
@@ -455,14 +455,14 @@ This allows for point-in-time queries even in the presence of deletes.
 Deleted cells are still subject to TTL and there will never be more than "maximum number of versions" deleted cells.
 A new "raw" scan option returns all deleted rows and the delete markers. 
 
-.Change the Value of [code]+KEEP_DELETED_CELLS+ Using HBase Shell
+.Change the Value of `KEEP_DELETED_CELLS` Using HBase Shell
 ====
 ----
 hbase> alter 't1', NAME => 'f1', KEEP_DELETED_CELLS => true
 ----
 ====
 
-.Change the Value of [code]+KEEP_DELETED_CELLS+ Using the API
+.Change the Value of `KEEP_DELETED_CELLS` Using the API
 ====
 [source,java]
 ----
@@ -576,7 +576,7 @@ We can store them in an HBase table called LOG_DATA, but what will the rowkey be
 [[schema.casestudies.log_timeseries.tslead]]
 ==== Timestamp In The Rowkey Lead Position
 
-The rowkey [code]+[timestamp][hostname][log-event]+ suffers from the monotonically increasing rowkey problem described in <<timeseries,timeseries>>. 
+The rowkey `[timestamp][hostname][log-event]` suffers from the monotonically increasing rowkey problem described in <<timeseries,timeseries>>. 
 
 There is another pattern frequently mentioned in the dist-lists about ``bucketing'' timestamps, by performing a mod operation on the timestamp.
 If time-oriented scans are important, this could be a useful approach.
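The bucketing pattern can be sketched as follows; the bucket count and naming are illustrative, not from the HBase API:

```java
public class BucketedTs {
    // "Bucket" a monotonically increasing timestamp with a mod operation,
    // spreading writes across N key ranges instead of one hot region.
    static String rowkey(long ts, String host, String event, int buckets) {
        return (ts % buckets) + "-" + ts + "-" + host + "-" + event;
    }

    public static void main(String[] args) {
        // Consecutive timestamps land in different buckets.
        System.out.println(rowkey(100, "h1", "e", 4)); // 0-100-h1-e
        System.out.println(rowkey(101, "h1", "e", 4)); // 1-101-h1-e
    }
}
```

Note the trade-off: a time-range read now needs one Scan per bucket rather than a single contiguous Scan.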
@@ -602,14 +602,14 @@ As stated above, to select data for a particular timerange, a Scan will need to
 [[schema.casestudies.log_timeseries.hostlead]]
 ==== Host In The Rowkey Lead Position
 
-The rowkey [code]+[hostname][log-event][timestamp]+ is a candidate if there is a large-ish number of hosts to spread the writes and reads across the keyspace.
+The rowkey `[hostname][log-event][timestamp]` is a candidate if there is a large-ish number of hosts to spread the writes and reads across the keyspace.
 This approach would be useful if scanning by hostname was a priority. 
 
 [[schema.casestudies.log_timeseries.revts]]
 ==== Timestamp, or Reverse Timestamp?
 
-If the most important access path is to pull most recent events, then storing the timestamps as reverse-timestamps (e.g., [code]+timestamp = Long.MAX_VALUE –
-            timestamp+) will create the property of being able to do a Scan on [code]+[hostname][log-event]+ to obtain the quickly obtain the most recently captured events. 
+If the most important access path is to pull most recent events, then storing the timestamps as reverse-timestamps (e.g., `timestamp = Long.MAX_VALUE - timestamp`) will create the property of being able to do a Scan on `[hostname][log-event]` to quickly obtain the most recently captured events. 
 
 Neither approach is wrong; it just depends on what is most appropriate for the situation. 
 

