accumulo-commits mailing list archives

From mwa...@apache.org
Subject [accumulo-website] branch asf-site updated: Jekyll build from master:84542b9
Date Wed, 27 Dec 2017 15:18:37 GMT
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 68fcdd7  Jekyll build from master:84542b9
68fcdd7 is described below

commit 68fcdd7da9b44a33d62248e13917036e58233cb6
Author: Mike Walch <mwalch@apache.org>
AuthorDate: Wed Dec 27 10:18:15 2017 -0500

    Jekyll build from master:84542b9
    
    Updated GitHub pages version
---
 1.3/user_manual/Accumulo_Shell.html                |  35 +--
 1.3/user_manual/Administration.html                |  10 +-
 1.3/user_manual/Analytics.html                     |  40 +--
 1.3/user_manual/High_Speed_Ingest.html             |  15 +-
 1.3/user_manual/Security.html                      |  20 +-
 1.3/user_manual/Shell_Commands.html                | 295 +++++++-----------
 1.3/user_manual/Table_Configuration.html           |  85 +++---
 1.3/user_manual/Table_Design.html                  |  40 +--
 1.3/user_manual/Writing_Accumulo_Clients.html      |  25 +-
 1.3/user_manual/examples/aggregation.html          |   5 +-
 1.3/user_manual/examples/batch.html                |  10 +-
 1.3/user_manual/examples/bloom.html                |  35 +--
 1.3/user_manual/examples/bulkIngest.html           |   5 +-
 1.3/user_manual/examples/constraints.html          |   5 +-
 1.3/user_manual/examples/dirlist.html              |  25 +-
 1.3/user_manual/examples/filter.html               |  20 +-
 1.3/user_manual/examples/helloworld.html           |  35 +--
 1.3/user_manual/examples/mapred.html               |  20 +-
 1.3/user_manual/examples/shard.html                |  25 +-
 1.4/examples/batch.html                            |  10 +-
 1.4/examples/bloom.html                            |  60 ++--
 1.4/examples/bulkIngest.html                       |   5 +-
 1.4/examples/combiner.html                         |   5 +-
 1.4/examples/constraints.html                      |   5 +-
 1.4/examples/dirlist.html                          |  45 ++-
 1.4/examples/filedata.html                         |  20 +-
 1.4/examples/filter.html                           |  20 +-
 1.4/examples/helloworld.html                       |  35 +--
 1.4/examples/isolation.html                        |  10 +-
 1.4/examples/mapred.html                           |  20 +-
 1.4/examples/shard.html                            |  25 +-
 1.4/examples/visibility.html                       |  40 +--
 1.4/user_manual/Accumulo_Shell.html                |  35 +--
 1.4/user_manual/Administration.html                |  20 +-
 1.4/user_manual/Analytics.html                     |  40 +--
 1.4/user_manual/Development_Clients.html           |  35 +--
 1.4/user_manual/High_Speed_Ingest.html             |  15 +-
 1.4/user_manual/Security.html                      |  25 +-
 1.4/user_manual/Shell_Commands.html                | 340 +++++++++------------
 1.4/user_manual/Table_Configuration.html           | 155 ++++------
 1.4/user_manual/Table_Design.html                  |  40 +--
 1.4/user_manual/Writing_Accumulo_Clients.html      |  70 ++---
 1.5/examples/batch.html                            |  10 +-
 1.5/examples/bloom.html                            |  60 ++--
 1.5/examples/bulkIngest.html                       |   5 +-
 1.5/examples/classpath.html                        |  35 +--
 1.5/examples/client.html                           |  15 +-
 1.5/examples/combiner.html                         |   5 +-
 1.5/examples/constraints.html                      |   5 +-
 1.5/examples/dirlist.html                          |  45 ++-
 1.5/examples/export.html                           |  20 +-
 1.5/examples/filedata.html                         |  20 +-
 1.5/examples/filter.html                           |  20 +-
 1.5/examples/helloworld.html                       |  30 +-
 1.5/examples/isolation.html                        |  10 +-
 1.5/examples/mapred.html                           |  20 +-
 1.5/examples/maxmutation.html                      |  10 +-
 1.5/examples/regex.html                            |  15 +-
 1.5/examples/rowhash.html                          |  15 +-
 1.5/examples/shard.html                            |  25 +-
 1.5/examples/tabletofile.html                      |  15 +-
 1.5/examples/terasort.html                         |  10 +-
 1.5/examples/visibility.html                       |  40 +--
 1.6/examples/batch.html                            |  10 +-
 1.6/examples/bloom.html                            |  60 ++--
 1.6/examples/bulkIngest.html                       |   5 +-
 1.6/examples/classpath.html                        |  35 +--
 1.6/examples/client.html                           |  15 +-
 1.6/examples/combiner.html                         |   5 +-
 1.6/examples/constraints.html                      |   5 +-
 1.6/examples/dirlist.html                          |  45 ++-
 1.6/examples/export.html                           |  20 +-
 1.6/examples/filedata.html                         |  20 +-
 1.6/examples/filter.html                           |  20 +-
 1.6/examples/helloworld.html                       |  30 +-
 1.6/examples/isolation.html                        |  10 +-
 1.6/examples/mapred.html                           |  45 ++-
 1.6/examples/maxmutation.html                      |  10 +-
 1.6/examples/regex.html                            |  15 +-
 1.6/examples/reservations.html                     |  10 +-
 1.6/examples/rowhash.html                          |  15 +-
 1.6/examples/shard.html                            |  25 +-
 1.6/examples/tabletofile.html                      |  15 +-
 1.6/examples/terasort.html                         |  10 +-
 1.6/examples/visibility.html                       |  40 +--
 1.7/examples/batch.html                            |  10 +-
 1.7/examples/bloom.html                            |  60 ++--
 1.7/examples/bulkIngest.html                       |   5 +-
 1.7/examples/classpath.html                        |  35 +--
 1.7/examples/client.html                           |  15 +-
 1.7/examples/combiner.html                         |   5 +-
 1.7/examples/constraints.html                      |   5 +-
 1.7/examples/dirlist.html                          |  45 ++-
 1.7/examples/export.html                           |  20 +-
 1.7/examples/filedata.html                         |  20 +-
 1.7/examples/filter.html                           |  20 +-
 1.7/examples/helloworld.html                       |  30 +-
 1.7/examples/isolation.html                        |  10 +-
 1.7/examples/mapred.html                           |  45 ++-
 1.7/examples/maxmutation.html                      |  10 +-
 1.7/examples/regex.html                            |  15 +-
 1.7/examples/reservations.html                     |  10 +-
 1.7/examples/rowhash.html                          |  15 +-
 1.7/examples/shard.html                            |  25 +-
 1.7/examples/tabletofile.html                      |  15 +-
 1.7/examples/terasort.html                         |  10 +-
 1.7/examples/visibility.html                       |  40 +--
 1.8/examples/batch.html                            |  10 +-
 1.8/examples/bloom.html                            |  60 ++--
 1.8/examples/bulkIngest.html                       |   5 +-
 1.8/examples/classpath.html                        |  35 +--
 1.8/examples/client.html                           |  15 +-
 1.8/examples/combiner.html                         |   5 +-
 1.8/examples/constraints.html                      |   5 +-
 1.8/examples/dirlist.html                          |  45 ++-
 1.8/examples/export.html                           |  20 +-
 1.8/examples/filedata.html                         |  20 +-
 1.8/examples/filter.html                           |  20 +-
 1.8/examples/helloworld.html                       |  30 +-
 1.8/examples/isolation.html                        |  10 +-
 1.8/examples/mapred.html                           |  20 +-
 1.8/examples/maxmutation.html                      |  10 +-
 1.8/examples/regex.html                            |  15 +-
 1.8/examples/reservations.html                     |  10 +-
 1.8/examples/rgbalancer.html                       |  30 +-
 1.8/examples/rowhash.html                          |  15 +-
 1.8/examples/sample.html                           |  70 ++---
 1.8/examples/shard.html                            |  25 +-
 1.8/examples/tabletofile.html                      |  15 +-
 1.8/examples/terasort.html                         |  10 +-
 1.8/examples/visibility.html                       |  40 +--
 blog/2014/05/03/accumulo-classloader.html          |  10 +-
 .../05/27/getting-started-with-accumulo-1.6.0.html |  25 +-
 .../scaling-accumulo-with-multivolume-support.html |  15 +-
 ...eystores-for-configuring-accumulo-with-ssl.html |  40 ++-
 blog/2016/11/02/durability-performance.html        |  10 +-
 blog/2016/11/16/simpler-scripts-and-config.html    |  40 +--
 blog/2016/12/19/running-on-fedora-25.html          | 188 +++++-------
 blog/2017/04/21/introducing-uno-and-muchos.html    |  40 +--
 contributor/making-release.html                    |  20 +-
 contributor/verifying-release.html                 |   5 +-
 contributor/voting.html                            |  15 +-
 contributors-guide/index.html                      |  65 ++--
 docs/2.0/administration/caching.html               |  10 +-
 .../administration/configuration-management.html   |  20 +-
 docs/2.0/administration/in-depth-install.html      |  80 ++---
 docs/2.0/administration/kerberos.html              | 103 +++----
 docs/2.0/administration/multivolume.html           |  10 +-
 docs/2.0/administration/replication.html           |  90 +++---
 docs/2.0/administration/ssl.html                   |  36 ++-
 docs/2.0/administration/tracing.html               |  90 +++---
 docs/2.0/administration/upgrading.html             |   5 +-
 docs/2.0/development/development_tools.html        |  30 +-
 docs/2.0/development/high_speed_ingest.html        |  15 +-
 docs/2.0/development/iterators.html                |  15 +-
 docs/2.0/development/mapreduce.html                |  70 ++---
 docs/2.0/development/proxy.html                    |  65 ++--
 docs/2.0/development/security.html                 |  30 +-
 docs/2.0/development/summaries.html                |  30 +-
 docs/2.0/getting-started/clients.html              |  60 ++--
 docs/2.0/getting-started/quick-install.html        |  40 +--
 docs/2.0/getting-started/shell.html                |  45 ++-
 docs/2.0/getting-started/table_configuration.html  | 140 ++++-----
 docs/2.0/getting-started/table_design.html         |  50 ++-
 docs/2.0/troubleshooting/advanced.html             |  60 ++--
 docs/2.0/troubleshooting/basic.html                |  55 ++--
 .../troubleshooting/system-metadata-tables.html    |  10 +-
 docs/2.0/troubleshooting/tools.html                |  45 ++-
 features/index.html                                |   5 +-
 feed.xml                                           | 274 +++++++----------
 how-to-contribute/index.html                       |  20 +-
 quickstart-1.x/index.html                          |  15 +-
 release/accumulo-1.6.0/index.html                  |  20 +-
 release/accumulo-1.6.2/index.html                  |   5 +-
 release/accumulo-1.7.0/index.html                  |  15 +-
 tour/authorizations-code/index.html                |   5 +-
 tour/authorizations/index.html                     |   5 +-
 tour/basic-read-write/index.html                   |   5 +-
 tour/batch-scanner-code/index.html                 |  12 +-
 tour/batch-scanner/index.html                      |   7 +-
 tour/conditional-writer-code/index.html            |  10 +-
 tour/conditional-writer/index.html                 |  10 +-
 tour/data-model-code/index.html                    |   5 +-
 tour/data-model/index.html                         |   5 +-
 tour/getting-started/index.html                    |   5 +-
 tour/ranges-splits/index.html                      |  10 +-
 186 files changed, 2348 insertions(+), 3487 deletions(-)

diff --git a/1.3/user_manual/Accumulo_Shell.html b/1.3/user_manual/Accumulo_Shell.html
index 973732e..0f44fa5 100644
--- a/1.3/user_manual/Accumulo_Shell.html
+++ b/1.3/user_manual/Accumulo_Shell.html
@@ -166,13 +166,12 @@
 
 <p>The shell can be started by the following command:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ACCUMULO_HOME/bin/accumulo shell -u [username]
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ACCUMULO_HOME/bin/accumulo shell -u [username]
+</code></pre></div></div>
 
 <p>The shell will prompt for the corresponding password to the username specified and then display the following prompt:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Shell - Apache Accumulo Interactive Shell
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Shell - Apache Accumulo Interactive Shell
 -
 - version 1.3
 - instance name: myinstance
@@ -180,14 +179,13 @@
 -
 - type 'help' for a list of available commands
 -
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-basic-administration"><a id="Basic_Administration"></a> Basic Administration</h2>
 
 <p>The Accumulo shell can be used to create and delete tables, as well as to configure table and instance specific options.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance&gt; tables
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance&gt; tables
 !METADATA
 
 root@myinstance&gt; createtable mytable
@@ -205,44 +203,40 @@ root@myinstance testtable&gt;
 root@myinstance junk&gt; deletetable testtable
 
 root@myinstance&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The Shell can also be used to insert updates and scan tables. This is useful for inspecting tables.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance mytable&gt; scan
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance mytable&gt; scan
 
 root@myinstance mytable&gt; insert row1 colf colq value1
 insert successful
 
 root@myinstance mytable&gt; scan
 row1 colf:colq [] value1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-table-maintenance"><a id="Table_Maintenance"></a> Table Maintenance</h2>
 
 <p>The <strong>compact</strong> command instructs Accumulo to schedule a compaction of the table during which files are consolidated and deleted entries are removed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance mytable&gt; compact -t mytable
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance mytable&gt; compact -t mytable
 07 16:13:53,201 [shell.Shell] INFO : Compaction of table mytable
 scheduled for 20100707161353EDT
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The <strong>flush</strong> command instructs Accumulo to write all entries currently in memory for a given table to disk.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance mytable&gt; flush -t mytable
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance mytable&gt; flush -t mytable
 07 16:14:19,351 [shell.Shell] INFO : Flush of table mytable
 initiated...
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-user-administration"><a id="User_Administration"></a> User Administration</h2>
 
 <p>The Shell can be used to add, remove, and grant privileges to users.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance mytable&gt; createuser bob
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance mytable&gt; createuser bob
 Enter new password for 'bob': *********
 Please confirm new password for 'bob': *********
 
@@ -267,8 +261,7 @@ bob@myinstance bobstable&gt; user root
 Enter current password for 'root': *********
 
 root@myinstance bobstable&gt; revoke System.CREATE_TABLE -s -u bob
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <hr />
 
diff --git a/1.3/user_manual/Administration.html b/1.3/user_manual/Administration.html
index 3848bae..74f1d55 100644
--- a/1.3/user_manual/Administration.html
+++ b/1.3/user_manual/Administration.html
@@ -185,9 +185,8 @@
 
 <p>Choose a directory for the Accumulo installation. This directory will be referenced by the environment variable $ACCUMULO_HOME. Run the following:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ tar xzf $ACCUMULO_HOME/accumulo.tar.gz
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ tar xzf $ACCUMULO_HOME/accumulo.tar.gz
+</code></pre></div></div>
 
 <p>Repeat this step at each machine within the cluster. Usually all machines have the same $ACCUMULO_HOME.</p>
 
@@ -235,7 +234,7 @@ $ACCUMULO_HOME/conf/slaves, one per line.</li>
 <p>Specify appropriate values for the following settings in <br />
 $ACCUMULO_HOME/conf/accumulo-site.xml :</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>&lt;property&gt;
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;property&gt;
     &lt;name&gt;zookeeper&lt;/name&gt;
     &lt;value&gt;zooserver-one:2181,zooserver-two:2181&lt;/value&gt;
     &lt;description&gt;list of zookeeper servers&lt;/description&gt;
@@ -245,8 +244,7 @@ $ACCUMULO_HOME/conf/accumulo-site.xml :</p>
     &lt;value&gt;/var/accumulo/walogs&lt;/value&gt;
     &lt;description&gt;local directory for write ahead logs&lt;/description&gt;
 &lt;/property&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>This enables Accumulo to find ZooKeeper. Accumulo uses ZooKeeper to coordinate settings between processes and helps finalize TabletServer failure.</p>
 
diff --git a/1.3/user_manual/Analytics.html b/1.3/user_manual/Analytics.html
index 67f4e17..c68a3a7 100644
--- a/1.3/user_manual/Analytics.html
+++ b/1.3/user_manual/Analytics.html
@@ -178,17 +178,16 @@
 
 <p>To read from an Accumulo table create a Mapper with the following class parameterization and be sure to configure the AccumuloInputFormat.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>class MyMapper extends Mapper&lt;Key,Value,WritableComparable,Writable&gt; {
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class MyMapper extends Mapper&lt;Key,Value,WritableComparable,Writable&gt; {
     public void map(Key k, Value v, Context c) {
         // transform key and value data here
     }
 }
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To write to an Accumulo table, create a Reducer with the following class parameterization and be sure to configure the AccumuloOutputFormat. The key emitted from the Reducer identifies the table to which the mutation is sent. This allows a single Reducer to write to more than one table if desired. A default table can be configured using the AccumuloOutputFormat, in which case the output table name does not have to be passed to the Context object within the Reducer.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>class MyReducer extends Reducer&lt;WritableComparable, Writable, Text, Mutation&gt; {
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class MyReducer extends Reducer&lt;WritableComparable, Writable, Text, Mutation&gt; {
 
     public void reduce(WritableComparable key, Iterator&lt;Text&gt; values, Context c) {
         
@@ -199,14 +198,13 @@
         c.write(new Text("output-table"), m);
     }
 }
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The Text object passed as the output should contain the name of the table to which this mutation should be applied. The Text can be null in which case the mutation will be applied to the default table name specified in the AccumuloOutputFormat options.</p>
 
 <h3 id="-accumuloinputformat-options"><a id="AccumuloInputFormat_options"></a> AccumuloInputFormat options</h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Job job = new Job(getConf());
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Job job = new Job(getConf());
 AccumuloInputFormat.setInputInfo(job,
         "user",
         "passwd".getBytes(),
@@ -215,36 +213,32 @@ AccumuloInputFormat.setInputInfo(job,
 
 AccumuloInputFormat.setZooKeeperInstance(job, "myinstance",
         "zooserver-one,zooserver-two");
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>Optional settings:</strong></p>
 
 <p>To restrict Accumulo to a set of row ranges:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>ArrayList&lt;Range&gt; ranges = new ArrayList&lt;Range&gt;();
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ArrayList&lt;Range&gt; ranges = new ArrayList&lt;Range&gt;();
 // populate array list of row ranges ...
 AccumuloInputFormat.setRanges(job, ranges);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To restrict accumulo to a list of columns:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>ArrayList&lt;Pair&lt;Text,Text&gt;&gt; columns = new ArrayList&lt;Pair&lt;Text,Text&gt;&gt;();
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ArrayList&lt;Pair&lt;Text,Text&gt;&gt; columns = new ArrayList&lt;Pair&lt;Text,Text&gt;&gt;();
 // populate list of columns
 AccumuloInputFormat.fetchColumns(job, columns);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To use a regular expression to match row IDs:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>AccumuloInputFormat.setRegex(job, RegexType.ROW, "^.*");
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>AccumuloInputFormat.setRegex(job, RegexType.ROW, "^.*");
+</code></pre></div></div>
 
 <h3 id="-accumulooutputformat-options"><a id="AccumuloOutputFormat_options"></a> AccumuloOutputFormat options</h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>boolean createTables = true;
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>boolean createTables = true;
 String defaultTable = "mytable";
 
 AccumuloOutputFormat.setOutputInfo(job,
@@ -255,15 +249,13 @@ AccumuloOutputFormat.setOutputInfo(job,
 
 AccumuloOutputFormat.setZooKeeperInstance(job, "myinstance",
         "zooserver-one,zooserver-two");
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>Optional Settings:</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>AccumuloOutputFormat.setMaxLatency(job, 300); // milliseconds
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>AccumuloOutputFormat.setMaxLatency(job, 300); // milliseconds
 AccumuloOutputFormat.setMaxMutationBufferSize(job, 5000000); // bytes
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>An example of using MapReduce with Accumulo can be found at <br />
 accumulo/docs/examples/README.mapred</p>
diff --git a/1.3/user_manual/High_Speed_Ingest.html b/1.3/user_manual/High_Speed_Ingest.html
index 1ab4101..ab9f6c5 100644
--- a/1.3/user_manual/High_Speed_Ingest.html
+++ b/1.3/user_manual/High_Speed_Ingest.html
@@ -171,9 +171,8 @@
 
 <p>Pre-splitting a table ensures that there are as many tablets as desired available before ingest begins to take advantage of all the parallelism possible with the cluster hardware. Tables can be split anytime by using the shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance mytable&gt; addsplits -sf /local_splitfile -t mytable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance mytable&gt; addsplits -sf /local_splitfile -t mytable
+</code></pre></div></div>
 
 <p>For the purposes of providing parallelism to ingest it is not necessary to create more tablets than there are physical machines within the cluster as the aggregate ingest rate is a function of the number of physical machines. Note that the aggregate ingest rate is still subject to the number of machines running ingest clients, and the distribution of rowIDs across the table. The aggregation ingest rate will be suboptimal if there are many inserts into a small number of rowIDs.</p>
 
@@ -189,7 +188,7 @@
 
 <p>To configure MapReduce to format data in preparation for bulk loading, the job should be set to use a range partitioner instead of the default hash partitioner. The range partitioner uses the split points of the Accumulo table that will receive the data. The split points can be obtained from the shell and used by the MapReduce RangePartitioner. Note that this is only useful if the existing table is already split into multiple tablets.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance mytable&gt; getsplits
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance mytable&gt; getsplits
 aa
 ab
 ac
@@ -197,14 +196,12 @@ ac
 zx
 zy
 zz
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Run the MapReduce job, using the AccumuloFileOutputFormat to create the files to be introduced to Accumulo. Once this is complete, the files can be added to Accumulo via the shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance mytable&gt; importdirectory /files_dir /failures
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance mytable&gt; importdirectory /files_dir /failures
+</code></pre></div></div>
 
 <p>Note that the paths referenced are directories within the same HDFS instance over which Accumulo is running. Accumulo places any files that failed to be added to the second directory specified.</p>
 
diff --git a/1.3/user_manual/Security.html b/1.3/user_manual/Security.html
index 4a227a7..0e23cc1 100644
--- a/1.3/user_manual/Security.html
+++ b/1.3/user_manual/Security.html
@@ -170,7 +170,7 @@
 
 <p>When mutations are applied, users can specify a security label for each value. This is done as the Mutation is created by passing a ColumnVisibility object to the put() method:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Text rowID = new Text("row1");
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Text rowID = new Text("row1");
 Text colFam = new Text("myColFam");
 Text colQual = new Text("myColQual");
 ColumnVisibility colVis = new ColumnVisibility("public");
@@ -180,8 +180,7 @@ Value value = new Value("myValue");
 
 Mutation mutation = new Mutation(rowID);
 mutation.put(colFam, colQual, colVis, timestamp, value);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-security-label-expression-syntax"><a id="Security_Label_Expression_Syntax"></a> Security Label Expression Syntax</h2>
 
@@ -189,15 +188,14 @@ mutation.put(colFam, colQual, colVis, timestamp, value);
 
 <p>For example, suppose within our organization we want to label our data values with security labels defined in terms of user roles. We might have tokens such as:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>admin
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>admin
 audit
 system
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>These can be specified alone or combined using logical operators:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>// Users must have admin privileges:
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Users must have admin privileges:
 admin
 
 // Users must have admin and audit privileges
@@ -208,8 +206,7 @@ admin|audit
 
 // Users must have audit and one or both of admin or system
 (admin|system)&amp;audit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>When both <code class="highlighter-rouge">|</code> and <code class="highlighter-rouge">&amp;</code> operators are used, parentheses must be used to specify precedence of the operators.</p>
 
@@ -219,12 +216,11 @@ admin|audit
 
 <p>Authorizations are specified as a comma-separated list of tokens the user possesses:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>// user possess both admin and system level access
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// user possess both admin and system level access
 Authorization auths = new Authorization("admin","system");
 
 Scanner s = connector.createScanner("table", auths);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-secure-authorizations-handling"><a id="Secure_Authorizations_Handling"></a> Secure Authorizations Handling</h2>
 
diff --git a/1.3/user_manual/Shell_Commands.html b/1.3/user_manual/Shell_Commands.html
index 645c74d..9302936 100644
--- a/1.3/user_manual/Shell_Commands.html
+++ b/1.3/user_manual/Shell_Commands.html
@@ -154,89 +154,80 @@
 
 <p><strong>?</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: ? [ &lt;command&gt; &lt;command&gt; ] [-?] [-np]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: ? [ &lt;command&gt; &lt;command&gt; ] [-?] [-np]   
 description: provides information about the available commands   
   -?,-help  display this help   
   -np,-no-pagination  disables pagination of output   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>about</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: about [-?] [-v]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: about [-?] [-v]   
 description: displays information about this program   
   -?,-help  display this help   
   -v,-verbose displays details session information   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>addsplits</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: addsplits [&lt;split&gt; &lt;split&gt; ] [-?] [-b64] [-sf &lt;filename&gt;] -t &lt;tableName&gt;   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: addsplits [&lt;split&gt; &lt;split&gt; ] [-?] [-b64] [-sf &lt;filename&gt;] -t &lt;tableName&gt;   
 description: add split points to an existing table   
   -?,-help  display this help   
   -b64,-base64encoded decode encoded split points   
   -sf,-splits-file &lt;filename&gt; file with newline separated list of rows to add   
        to table   
   -t,-table &lt;tableName&gt;  name of a table to add split points to   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>authenticate</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: authenticate &lt;username&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: authenticate &lt;username&gt; [-?]   
 description: verifies a user's credentials   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>bye</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: bye [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: bye [-?]   
 description: exits the shell   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>classpath</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: classpath [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: classpath [-?]   
 description: lists the current files on the classpath   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>clear</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: clear [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: clear [-?]   
 description: clears the screen   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>cls</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: cls [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: cls [-?]   
 description: clears the screen   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>compact</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: compact [-?] [-override] -p &lt;pattern&gt; | -t &lt;tableName&gt;   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: compact [-?] [-override] -p &lt;pattern&gt; | -t &lt;tableName&gt;   
 description: sets all tablets for a table to major compact as soon as possible   
        (based on current time)   
   -?,-help  display this help   
   -override  override a future scheduled compaction   
   -p,-pattern &lt;pattern&gt;  regex pattern of table names to flush   
   -t,-table &lt;tableName&gt;  name of a table to flush   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>config</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: config [-?] [-d &lt;property&gt; | -f &lt;string&gt; | -s &lt;property=value&gt;] [-np]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: config [-?] [-d &lt;property&gt; | -f &lt;string&gt; | -s &lt;property=value&gt;] [-np]   
        [-t &lt;table&gt;]   
 description: prints system properties and table specific properties   
   -?,-help  display this help   
@@ -245,12 +236,11 @@ description: prints system properties and table specific properties
   -np,-no-pagination  disables pagination of output   
   -s,-set &lt;property=value&gt;  set a per-table property   
   -t,-table &lt;table&gt;  display/set/delete properties for specified table   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>createtable</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: createtable &lt;tableName&gt; [-?] [-a   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: createtable &lt;tableName&gt; [-?] [-a   
        &lt;&lt;columnfamily&gt;[:&lt;columnqualifier&gt;]=&lt;aggregation_class&gt;&gt;] [-b64]   
        [-cc &lt;table&gt;] [-cs &lt;table&gt; | -sf &lt;filename&gt;] [-ndi]  [-tl | -tm]   
 description: creates a new table, with optional aggregators and optionally   
@@ -267,40 +257,36 @@ description: creates a new table, with optional aggregators and optionally
        create a pre-split table   
   -tl,-time-logical  use logical time   
   -tm,-time-millis  use time in milliseconds   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>createuser</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: createuser &lt;username&gt; [-?] [-s &lt;comma-separated-authorizations&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: createuser &lt;username&gt; [-?] [-s &lt;comma-separated-authorizations&gt;]   
 description: creates a new user   
   -?,-help  display this help   
   -s,-scan-authorizations &lt;comma-separated-authorizations&gt;  scan authorizations   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>debug</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: debug [ on | off ] [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: debug [ on | off ] [-?]   
 description: turns debug logging on or off   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>delete</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: delete &lt;row&gt; &lt;colfamily&gt; &lt;colqualifier&gt; [-?] [-l &lt;expression&gt;] [-t   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: delete &lt;row&gt; &lt;colfamily&gt; &lt;colqualifier&gt; [-?] [-l &lt;expression&gt;] [-t   
        &lt;timestamp&gt;]   
 description: deletes a record from a table   
   -?,-help  display this help   
   -l,-authorization-label &lt;expression&gt;  formatted authorization label expression   
   -t,-timestamp &lt;timestamp&gt;  timestamp to use for insert   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>deleteiter</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: deleteiter [-?] [-majc] [-minc] -n &lt;itername&gt; [-scan] [-t &lt;table&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: deleteiter [-?] [-majc] [-minc] -n &lt;itername&gt; [-scan] [-t &lt;table&gt;]   
 description: deletes a table-specific iterator   
   -?,-help  display this help   
   -majc,-major-compaction  applied at major compaction   
@@ -308,12 +294,11 @@ description: deletes a table-specific iterator
   -n,-name &lt;itername&gt; iterator to delete   
   -scan,-scan-time  applied at scan time   
   -t,-table &lt;table&gt;  tableName   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>deletemany</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: deletemany [-?] [-b &lt;start-row&gt;] [-c   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: deletemany [-?] [-b &lt;start-row&gt;] [-c   
        &lt;&lt;columnfamily&gt;[:&lt;columnqualifier&gt;]&gt;] [-e &lt;end-row&gt;] [-f] [-np]   
        [-s &lt;comma-separated-authorizations&gt;] [-st]   
 description: scans a table and deletes the resulting records   
@@ -326,56 +311,50 @@ description: scans a table and deletes the resulting records
   -s,-scan-authorizations &lt;comma-separated-authorizations&gt;  scan authorizations   
        (all user auths are used if this argument is not specified)   
   -st,-show-timestamps  enables displaying timestamps   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>deletescaniter</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: deletescaniter [-?] [-a] [-n &lt;itername&gt;] [-t &lt;table&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: deletescaniter [-?] [-a] [-n &lt;itername&gt;] [-t &lt;table&gt;]   
 description: deletes a table-specific scan iterator so it is no longer used   
        during this shell session   
   -?,-help  display this help   
   -a,-all  delete all for tableName   
   -n,-name &lt;itername&gt; iterator to delete   
   -t,-table &lt;table&gt;  tableName   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>deletetable</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: deletetable &lt;tableName&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: deletetable &lt;tableName&gt; [-?]   
 description: deletes a table   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>deleteuser</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: deleteuser &lt;username&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: deleteuser &lt;username&gt; [-?]   
 description: deletes a user   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>droptable</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: droptable &lt;tableName&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: droptable &lt;tableName&gt; [-?]   
 description: deletes a table   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>dropuser</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: dropuser &lt;username&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: dropuser &lt;username&gt; [-?]   
 description: deletes a user   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>egrep</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: egrep &lt;regex&gt; &lt;regex&gt; [-?] [-b &lt;start-row&gt;] [-c   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: egrep &lt;regex&gt; &lt;regex&gt; [-?] [-b &lt;start-row&gt;] [-c   
        &lt;&lt;columnfamily&gt;[:&lt;columnqualifier&gt;]&gt;] [-e &lt;end-row&gt;] [-np] [-s   
        &lt;comma-separated-authorizations&gt;] [-st] [-t &lt;arg&gt;]   
 description: egreps a table in parallel on the server side (uses java regex)   
@@ -388,92 +367,83 @@ description: egreps a table in parallel on the server side (uses java regex)
        (all user auths are used if this argument is not specified)   
   -st,-show-timestamps  enables displaying timestamps   
   -t,-num-threads &lt;arg&gt;  num threads   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>execfile</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: execfile [-?] [-v]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: execfile [-?] [-v]   
 description: specifies a file containing accumulo commands to execute   
   -?,-help  display this help   
   -v,-verbose displays command prompt as commands are executed   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>exit</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: exit [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: exit [-?]   
 description: exits the shell   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>flush</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: flush [-?] -p &lt;pattern&gt; | -t &lt;tableName&gt;   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: flush [-?] -p &lt;pattern&gt; | -t &lt;tableName&gt;   
 description: makes a best effort to flush tables from memory to disk   
   -?,-help  display this help   
   -p,-pattern &lt;pattern&gt;  regex pattern of table names to flush   
   -t,-table &lt;tableName&gt;  name of a table to flush   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>formatter</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: formatter [-?] -f &lt;className&gt; | -l | -r   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: formatter [-?] -f &lt;className&gt; | -l | -r   
 description: specifies a formatter to use for displaying database entries   
   -?,-help  display this help   
   -f,-formatter &lt;className&gt;  fully qualified name of formatter class to use   
   -l,-list  display the current formatter   
   -r,-reset  reset to default formatter   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>getauths</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: getauths [-?] [-u &lt;user&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: getauths [-?] [-u &lt;user&gt;]   
 description: displays the maximum scan authorizations for a user   
   -?,-help  display this help   
   -u,-user &lt;user&gt;  user to operate on   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>getgroups</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: getgroups [-?] -t &lt;table&gt;   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: getgroups [-?] -t &lt;table&gt;   
 description: gets the locality groups for a given table   
   -?,-help  display this help   
   -t,-table &lt;table&gt;  get locality groups for specified table   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>getsplits</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: getsplits [-?] [-b64] [-m &lt;num&gt;] [-o &lt;file&gt;] [-v]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: getsplits [-?] [-b64] [-m &lt;num&gt;] [-o &lt;file&gt;] [-v]   
 description: retrieves the current split points for tablets in the current table   
   -?,-help  display this help   
   -b64,-base64encoded encode the split points   
   -m,-max &lt;num&gt;  specifies the maximum number of splits to create   
   -o,-output &lt;file&gt;  specifies a local file to write the splits to   
   -v,-verbose print out the tablet information with start/end rows   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>grant</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: grant &lt;permission&gt; [-?] -p &lt;pattern&gt; | -s | -t &lt;table&gt;  -u &lt;username&gt;   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: grant &lt;permission&gt; [-?] -p &lt;pattern&gt; | -s | -t &lt;table&gt;  -u &lt;username&gt;   
 description: grants system or table permissions for a user   
   -?,-help  display this help   
   -p,-pattern &lt;pattern&gt;  regex pattern of tables to grant permissions on   
   -s,-system  grant a system permission   
   -t,-table &lt;table&gt;  grant a table permission on this table   
   -u,-user &lt;username&gt; user to operate on   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>grep</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: grep &lt;term&gt; &lt;term&gt; [-?] [-b &lt;start-row&gt;] [-c   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: grep &lt;term&gt; &lt;term&gt; [-?] [-b &lt;start-row&gt;] [-c   
        &lt;&lt;columnfamily&gt;[:&lt;columnqualifier&gt;]&gt;] [-e &lt;end-row&gt;] [-np] [-s   
        &lt;comma-separated-authorizations&gt;] [-st] [-t &lt;arg&gt;]   
 description: searches a table for a substring, in parallel, on the server side   
@@ -486,21 +456,19 @@ description: searches a table for a substring, in parallel, on the server side
        (all user auths are used if this argument is not specified)   
   -st,-show-timestamps  enables displaying timestamps   
   -t,-num-threads &lt;arg&gt;  num threads   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>help</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: help [ &lt;command&gt; &lt;command&gt; ] [-?] [-np]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: help [ &lt;command&gt; &lt;command&gt; ] [-?] [-np]   
 description: provides information about the available commands   
   -?,-help  display this help   
   -np,-no-pagination  disables pagination of output   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>importdirectory</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: importdirectory &lt;directory&gt; &lt;failureDirectory&gt; [-?] [-a &lt;num&gt;] [-f &lt;num&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: importdirectory &lt;directory&gt; &lt;failureDirectory&gt; [-?] [-a &lt;num&gt;] [-f &lt;num&gt;]   
        [-g] [-v]   
 description: bulk imports an entire directory of data files to the current table   
   -?,-help  display this help   
@@ -509,108 +477,97 @@ description: bulk imports an entire directory of data files to the current table
   -g,-disableGC  prevents imported files from being deleted by the garbage   
        collector   
   -v,-verbose displays statistics from the import   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>info</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: info [-?] [-v]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: info [-?] [-v]   
 description: displays information about this program   
   -?,-help  display this help   
   -v,-verbose displays details session information   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>insert</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: insert &lt;row&gt; &lt;colfamily&gt; &lt;colqualifier&gt; &lt;value&gt; [-?] [-l &lt;expression&gt;] [-t   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: insert &lt;row&gt; &lt;colfamily&gt; &lt;colqualifier&gt; &lt;value&gt; [-?] [-l &lt;expression&gt;] [-t   
        &lt;timestamp&gt;]   
 description: inserts a record   
   -?,-help  display this help   
   -l,-authorization-label &lt;expression&gt;  formatted authorization label expression   
   -t,-timestamp &lt;timestamp&gt;  timestamp to use for insert   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>listscans</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: listscans [-?] [-np] [-ts &lt;tablet server&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: listscans [-?] [-np] [-ts &lt;tablet server&gt;]   
 description: list what scans are currently running in accumulo. See the   
        org.apache.accumulo.core.client.admin.ActiveScan javadoc for more information   
        about columns.   
   -?,-help  display this help   
   -np,-no-pagination  disables pagination of output   
   -ts,-tabletServer &lt;tablet server&gt;  list scans for a specific tablet server   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>masterstate</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: masterstate &lt;NORMAL|SAFE_MODE|CLEAN_STOP&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: masterstate &lt;NORMAL|SAFE_MODE|CLEAN_STOP&gt; [-?]   
 description: set the master state: NORMAL, SAFE_MODE or CLEAN_STOP   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>offline</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: offline [-?] -p &lt;pattern&gt; | -t &lt;tableName&gt;   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: offline [-?] -p &lt;pattern&gt; | -t &lt;tableName&gt;   
 description: starts the process of taking table offline   
   -?,-help  display this help   
   -p,-pattern &lt;pattern&gt;  regex pattern of table names to flush   
   -t,-table &lt;tableName&gt;  name of a table to flush   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>online</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: online [-?] -p &lt;pattern&gt; | -t &lt;tableName&gt;   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: online [-?] -p &lt;pattern&gt; | -t &lt;tableName&gt;   
 description: starts the process of putting a table online   
   -?,-help  display this help   
   -p,-pattern &lt;pattern&gt;  regex pattern of table names to flush   
   -t,-table &lt;tableName&gt;  name of a table to flush   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>passwd</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: passwd [-?] [-u &lt;user&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: passwd [-?] [-u &lt;user&gt;]   
 description: changes a user's password   
   -?,-help  display this help   
   -u,-user &lt;user&gt;  user to operate on   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>quit</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: quit [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: quit [-?]   
 description: exits the shell   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>renametable</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: renametable &lt;current table name&gt; &lt;new table name&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: renametable &lt;current table name&gt; &lt;new table name&gt; [-?]   
 description: rename a table   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>revoke</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: revoke &lt;permission&gt; [-?] -s | -t &lt;table&gt;  -u &lt;username&gt;   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: revoke &lt;permission&gt; [-?] -s | -t &lt;table&gt;  -u &lt;username&gt;   
 description: revokes system or table permissions from a user   
   -?,-help  display this help   
   -s,-system  revoke a system permission   
   -t,-table &lt;table&gt;  revoke a table permission on this table   
   -u,-user &lt;username&gt; user to operate on   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>scan</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: scan [-?] [-b &lt;start-row&gt;] [-c &lt;&lt;columnfamily&gt;[:&lt;columnqualifier&gt;]&gt;] [-e   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: scan [-?] [-b &lt;start-row&gt;] [-c &lt;&lt;columnfamily&gt;[:&lt;columnqualifier&gt;]&gt;] [-e   
        &lt;end-row&gt;] [-np] [-s &lt;comma-separated-authorizations&gt;] [-st]   
 description: scans the table, and displays the resulting records   
   -?,-help  display this help   
@@ -621,58 +578,53 @@ description: scans the table, and displays the resulting records
   -s,-scan-authorizations &lt;comma-separated-authorizations&gt;  scan authorizations   
        (all user auths are used if this argument is not specified)   
   -st,-show-timestamps  enables displaying timestamps   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>select</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: select &lt;row&gt; &lt;columnfamily&gt; &lt;columnqualifier&gt; [-?] [-np] [-s   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: select &lt;row&gt; &lt;columnfamily&gt; &lt;columnqualifier&gt; [-?] [-np] [-s   
        &lt;comma-separated-authorizations&gt;] [-st]   
 description: scans for and displays a single record   
   -?,-help  display this help   
   -np,-no-pagination  disables pagination of output   
   -s,-scan-authorizations &lt;comma-separated-authorizations&gt;  scan authorizations   
   -st,-show-timestamps  enables displaying timestamps   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>selectrow</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: selectrow &lt;row&gt; [-?] [-np] [-s &lt;comma-separated-authorizations&gt;] [-st]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: selectrow &lt;row&gt; [-?] [-np] [-s &lt;comma-separated-authorizations&gt;] [-st]   
 description: scans a single row and displays all resulting records   
   -?,-help  display this help   
   -np,-no-pagination  disables pagination of output   
   -s,-scan-authorizations &lt;comma-separated-authorizations&gt;  scan authorizations   
   -st,-show-timestamps  enables displaying timestamps   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>setauths</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: setauths [-?] -c | -s &lt;comma-separated-authorizations&gt;  [-u &lt;user&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: setauths [-?] -c | -s &lt;comma-separated-authorizations&gt;  [-u &lt;user&gt;]   
 description: sets the maximum scan authorizations for a user   
   -?,-help  display this help   
   -c,-clear-authorizations  clears the scan authorizations   
   -s,-scan-authorizations &lt;comma-separated-authorizations&gt;  set the scan   
        authorizations   
   -u,-user &lt;user&gt;  user to operate on   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>setgroups</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: setgroups &lt;group&gt;=&lt;col fam&gt;,&lt;col fam&gt; &lt;group&gt;=&lt;col fam&gt;,&lt;col fam&gt;   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: setgroups &lt;group&gt;=&lt;col fam&gt;,&lt;col fam&gt; &lt;group&gt;=&lt;col fam&gt;,&lt;col fam&gt;   
        [-?] -t &lt;table&gt;   
 description: sets the locality groups for a given table (for binary or commas,   
        use Java API)   
   -?,-help  display this help   
   -t,-table &lt;table&gt;  get locality groups for specified table   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>setiter</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: setiter [-?] -agg | -class &lt;name&gt; | -filter | -nolabel | -regex | -vers   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: setiter [-?] -agg | -class &lt;name&gt; | -filter | -nolabel | -regex | -vers   
        [-majc] [-minc] [-n &lt;itername&gt;]  -p &lt;pri&gt;  [-scan] [-t &lt;table&gt;]   
 description: sets a table-specific iterator   
   -?,-help  display this help   
@@ -688,12 +640,11 @@ description: sets a table-specific iterator
   -scan,-scan-time  applied at scan time   
   -t,-table &lt;table&gt;  tableName   
   -vers,-version  a versioning type   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>setscaniter</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: setscaniter [-?] -agg | -class &lt;name&gt; | -filter | -nolabel | -regex |   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: setscaniter [-?] -agg | -class &lt;name&gt; | -filter | -nolabel | -regex |   
        -vers  [-n &lt;itername&gt;]  -p &lt;pri&gt; [-t &lt;table&gt;]   
 description: sets a table-specific scan iterator for this shell session   
   -?,-help  display this help   
@@ -706,82 +657,72 @@ description: sets a table-specific scan iterator for this shell session
   -regex,-regular-expression  a regex matching type   
   -t,-table &lt;table&gt;  tableName   
   -vers,-version  a versioning type   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>systempermissions</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: systempermissions [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: systempermissions [-?]   
 description: displays a list of valid system permissions   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>table</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: table &lt;tableName&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: table &lt;tableName&gt; [-?]   
 description: switches to the specified table   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>tablepermissions</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: tablepermissions [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: tablepermissions [-?]   
 description: displays a list of valid table permissions   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>tables</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: tables [-?] [-l]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: tables [-?] [-l]   
 description: displays a list of all existing tables   
   -?,-help  display this help   
   -l,-list-ids  display internal table ids along with the table name   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>trace</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: trace [ on | off ] [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: trace [ on | off ] [-?]   
 description: turns trace logging on or off   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>user</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: user &lt;username&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: user &lt;username&gt; [-?]   
 description: switches to the specified user   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>userpermissions</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: userpermissions [-?] [-u &lt;user&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: userpermissions [-?] [-u &lt;user&gt;]   
 description: displays a user's system and table permissions   
   -?,-help  display this help   
   -u,-user &lt;user&gt;  user to operate on   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>users</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: users [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: users [-?]   
 description: displays a list of existing users   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>whoami</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: whoami [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: whoami [-?]   
 description: reports the current user name   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <hr />
 
diff --git a/1.3/user_manual/Table_Configuration.html b/1.3/user_manual/Table_Configuration.html
index 04e838f..87970e3 100644
--- a/1.3/user_manual/Table_Configuration.html
+++ b/1.3/user_manual/Table_Configuration.html
@@ -175,19 +175,18 @@
 
 <h3 id="-managing-locality-groups-via-the-shell"><a id="Managing_Locality_Groups_via_the_Shell"></a> Managing Locality Groups via the Shell</h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: setgroups &lt;group&gt;=&lt;col fam&gt;{,&lt;col fam&gt;}{ &lt;group&gt;=&lt;col fam&gt;{,&lt;col
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: setgroups &lt;group&gt;=&lt;col fam&gt;{,&lt;col fam&gt;}{ &lt;group&gt;=&lt;col fam&gt;{,&lt;col
 fam&gt;}} [-?] -t &lt;table&gt;
 
 user@myinstance mytable&gt; setgroups -t mytable group_one=colf1,colf2
 
 user@myinstance mytable&gt; getgroups -t mytable
 group_one=colf1,colf2
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h3 id="-managing-locality-groups-via-the-client-api"><a id="Managing_Locality_Groups_via_the_Client_API"></a> Managing Locality Groups via the Client API</h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Connector conn;
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Connector conn;
 
 HashMap&lt;String,Set&lt;Text&gt;&gt; localityGroups =
     new HashMap&lt;String, Set&lt;Text&gt;&gt;();
@@ -208,14 +207,12 @@ conn.tableOperations().setLocalityGroups("mytable", localityGroups);
 // existing locality groups can be obtained as follows
 Map&lt;String, Set&lt;Text&gt;&gt; groups =
     conn.tableOperations().getLocalityGroups("mytable");
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The assignment of Column Families to Locality Groups can be changed anytime. The physical movement of column families into their new locality groups takes place via the periodic Major Compaction process that takes place continuously in the background. Major Compaction can also be scheduled to take place immediately through the shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance mytable&gt; compact -t mytable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance mytable&gt; compact -t mytable
+</code></pre></div></div>
 
 <h2 id="-constraints"><a id="Constraints"></a> Constraints</h2>
 
@@ -223,7 +220,7 @@ Map&lt;String, Set&lt;Text&gt;&gt; groups =
 
 <p>Constraints can be enabled by setting a table property as follows:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance mytable&gt; config -t mytable -s table.constraint.1=com.test.ExampleConstraint
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance mytable&gt; config -t mytable -s table.constraint.1=com.test.ExampleConstraint
 user@myinstance mytable&gt; config -t mytable -s table.constraint.2=com.test.AnotherConstraint
 user@myinstance mytable&gt; config -t mytable -f constraint
 ---------+--------------------------------+----------------------------
@@ -232,8 +229,7 @@ SCOPE    | NAME                           | VALUE
 table    | table.constraint.1............ | com.test.ExampleConstraint
 table    | table.constraint.2............ | com.test.AnotherConstraint
 ---------+--------------------------------+----------------------------
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Currently there are no general-purpose constraints provided with the Accumulo distribution. New constraints can be created by writing a Java class that implements the org.apache.accumulo.core.constraints.Constraint interface.</p>
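A minimal sketch of such a constraint is shown below. The class name and the inner `Mutation` type are stand-ins so the sketch compiles on its own; a real implementation would import Accumulo's actual `Mutation` and `Constraint.Environment` types and implement the interface named above.

```java
import java.util.Collections;
import java.util.List;

// Hypothetical constraint that rejects non-numeric values, modeled loosely on
// the org.apache.accumulo.core.constraints.Constraint interface.
public class NumericValueConstraint {
    static final short NON_NUMERIC = 1;

    // Stand-in for org.apache.accumulo.core.data.Mutation (illustration only)
    static class Mutation {
        final byte[] value;
        Mutation(byte[] value) { this.value = value; }
    }

    public String getViolationDescription(short code) {
        return code == NON_NUMERIC ? "value is not numeric" : "unknown violation";
    }

    // Return null when the mutation is acceptable, otherwise the violation codes
    public List<Short> check(Mutation m) {
        for (byte b : m.value)
            if (b < '0' || b > '9')
                return Collections.singletonList(NON_NUMERIC);
        return null;
    }
}
```

A constraint like this, once compiled onto the server classpath, would be attached to a table with the `table.constraint.N` property shown earlier.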
 
@@ -249,9 +245,8 @@ accumulo/src/examples/main/java/accumulo/examples/constraints .</p>
 
 <p>To enable bloom filters, enter the following command in the Shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance&gt; config -t mytable -s table.bloom.enabled=true
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance&gt; config -t mytable -s table.bloom.enabled=true
+</code></pre></div></div>
 
 <p>An extensive example of using Bloom Filters can be found at <br />
 accumulo/docs/examples/README.bloom .</p>
@@ -262,31 +257,28 @@ accumulo/docs/examples/README.bloom .</p>
 
 <h3 id="-setting-iterators-via-the-shell"><a id="Setting_Iterators_via_the_Shell"></a> Setting Iterators via the Shell</h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: setiter [-?] -agg | -class &lt;name&gt; | -filter | -nolabel | 
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: setiter [-?] -agg | -class &lt;name&gt; | -filter | -nolabel | 
 -regex | -vers [-majc] [-minc] [-n &lt;itername&gt;] -p &lt;pri&gt; [-scan] 
 [-t &lt;table&gt;]
 
 user@myinstance mytable&gt; setiter -t mytable -scan -p 10 -n myiter
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h3 id="-setting-iterators-programmatically"><a id="Setting_Iterators_Programmatically"></a> Setting Iterators Programmatically</h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>scanner.setScanIterators(
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>scanner.setScanIterators(
     15, // priority
     "com.company.MyIterator", // class name
     "myiter"); // name this iterator
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Some iterators take additional parameters from client code, as in the following example:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>bscan.setIteratorOption(
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bscan.setIteratorOption(
     "myiter", // iterator reference
     "myoptionname",
     "myoptionvalue");
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Tables support separate Iterator settings to be applied at scan time, upon minor compaction and upon major compaction. For most uses, tables will have identical iterator settings for all three to avoid inconsistent results.</p>
 
@@ -298,7 +290,7 @@ user@myinstance mytable&gt; setiter -t mytable -scan -p 10 -n myiter
 
 <p>The version policy can be changed by changing the VersioningIterator options for a table as follows:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance mytable&gt; config -t mytable -s
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance mytable&gt; config -t mytable -s
 table.iterator.scan.vers.opt.maxVersions=3
 
 user@myinstance mytable&gt; config -t mytable -s
@@ -306,8 +298,7 @@ table.iterator.minc.vers.opt.maxVersions=3
 
 user@myinstance mytable&gt; config -t mytable -s
 table.iterator.majc.vers.opt.maxVersions=3
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h4 id="-logical-time"><a id="Logical_Time"></a> Logical Time</h4>
 
@@ -315,9 +306,8 @@ table.iterator.majc.vers.opt.maxVersions=3
 
 <p>A table can be configured to use logical timestamps at creation time as follows:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance&gt; createtable -tl logical
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance&gt; createtable -tl logical
+</code></pre></div></div>
 
 <h4 id="-deletes"><a id="Deletes"></a> Deletes</h4>
 
@@ -330,7 +320,7 @@ org.apache.accumulo.core.iterators.filter.Filter interface.</p>
 
 <p>The AgeOff filter can be configured to remove data older than a certain date or a fixed amount of time from the present. The following example sets a table to delete everything inserted more than 30 seconds ago:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance&gt; createtable filtertest
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance&gt; createtable filtertest
 user@myinstance filtertest&gt; setiter -t filtertest -scan -minc -majc -p
 10 -n myfilter -filter
 
@@ -364,12 +354,11 @@ foo a:b [] c
 
 user@myinstance filtertest&gt; scan
 user@myinstance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To see the iterator settings for a table, use:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@example filtertest&gt; config -t filtertest -f iterator
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@example filtertest&gt; config -t filtertest -f iterator
 ---------+------------------------------------------+------------------
 SCOPE    | NAME                                     | VALUE
 ---------+------------------------------------------+------------------
@@ -389,8 +378,7 @@ table    | table.iterator.scan.myfilter.opt.0 ..... |
 org.apache.accumulo.core.iterators.filter.AgeOffFilter
 table    | table.iterator.scan.myfilter.opt.0.ttl . | 30000
 ---------+------------------------------------------+------------------
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-aggregating-iterators"><a id="Aggregating_Iterators"></a> Aggregating Iterators</h2>
 
@@ -398,21 +386,19 @@ table    | table.iterator.scan.myfilter.opt.0.ttl . | 30000
 
 <p>For example, if an aggregating iterator were configured on a table and the following mutations were inserted:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Row     Family Qualifier Timestamp  Value
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Row     Family Qualifier Timestamp  Value
 rowID1  colfA  colqA     20100101   1
 rowID1  colfA  colqA     20100102   1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The table would reflect only one aggregate value:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>rowID1  colfA  colqA     -          2
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>rowID1  colfA  colqA     -          2
+</code></pre></div></div>
 
 <p>Aggregating iterators can be enabled for a table as follows:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance&gt; createtable perDayCounts -a
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance&gt; createtable perDayCounts -a
 day=org.apache.accumulo.core.iterators.aggregation.StringSummation
 
 user@myinstance perDayCounts&gt; insert row1 day 20080101 1
@@ -425,8 +411,7 @@ user@myinstance perDayCounts&gt; scan
 row1 day:20080101 [] 2
 row1 day:20080103 [] 1
 row2 day:20080101 [] 2
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Accumulo includes the following aggregators:</p>
 
@@ -449,17 +434,15 @@ accumulo/src/examples/main/java/org/apache/accumulo/examples/aggregation/SortedS
 
 <p>The block cache can be configured on a per-table basis, and all tablets hosted on a tablet server share a single resource pool. To configure the size of the tablet server’s block cache, set the following properties:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>tserver.cache.data.size: Specifies the size of the cache for file data blocks.
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>tserver.cache.data.size: Specifies the size of the cache for file data blocks.
 tserver.cache.index.size: Specifies the size of the cache for file indices.
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To enable the block cache for your table, set the following properties:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>table.cache.block.enable: Determines whether file (data) block cache is enabled.
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>table.cache.block.enable: Determines whether file (data) block cache is enabled.
 table.cache.index.enable: Determines whether index cache is enabled.
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The block cache can have a significant effect on alleviating hot spots, as well as reducing query latency. It is enabled by default for the !METADATA table.</p>
 
diff --git a/1.3/user_manual/Table_Design.html b/1.3/user_manual/Table_Design.html
index 2191d7f..c92c9aa 100644
--- a/1.3/user_manual/Table_Design.html
+++ b/1.3/user_manual/Table_Design.html
@@ -168,66 +168,60 @@
 
 <p>Since Accumulo tables are sorted by row ID, each table can be thought of as being indexed by the row ID. Lookups performed by row ID can be executed quickly, by doing a binary search, first across the tablets, and then within a tablet. Clients should choose a row ID carefully in order to support their desired application. A simple rule is to select a unique identifier as the row ID for each entity to be stored and assign all the other attributes to be tracked to be columns under this row [...]
 
-<div class="highlighter-rouge"><pre class="highlight"><code>    userid,age,address,account-balance
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    userid,age,address,account-balance
+</code></pre></div></div>
 
 <p>We might choose to store this data using the userid as the rowID and the rest of the data in column families:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Mutation m = new Mutation(new Text(userid));
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Mutation m = new Mutation(new Text(userid));
 m.put(new Text("age"), age);
 m.put(new Text("address"), address);
 m.put(new Text("balance"), account_balance);
 
 writer.add(m);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>We could then retrieve any of the columns for a specific userid by specifying the userid as the range of a scanner and fetching specific columns:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Range r = new Range(userid, userid); // single row
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Range r = new Range(userid, userid); // single row
 Scanner s = conn.createScanner("userdata", auths);
 s.setRange(r);
 s.fetchColumnFamily(new Text("age"));
 
 for(Entry&lt;Key,Value&gt; entry : s)
     System.out.println(entry.getValue().toString());
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-rowid-design"><a id="RowID_Design"></a> RowID Design</h2>
 
 <p>Often it is necessary to transform the rowID in order to have rows ordered in a way that is optimal for anticipated access patterns. A good example of this is reversing the order of components of internet domain names in order to group rows of the same parent domain together:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>com.google.code
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>com.google.code
 com.google.labs
 com.google.mail
 com.yahoo.mail
 com.yahoo.research
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Some data may result in the creation of very large rows - rows with many columns. In this case the table designer may wish to split up these rows for better load balancing while keeping them sorted together for scanning purposes. This can be done by appending a random substring at the end of the row:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>com.google.code_00
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>com.google.code_00
 com.google.code_01
 com.google.code_02
 com.google.labs_00
 com.google.mail_00
 com.google.mail_01
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>It could also be done by adding a string representation of some period of time such as date to the week or month:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>com.google.code_201003
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>com.google.code_201003
 com.google.code_201004
 com.google.code_201005
 com.google.labs_201003
 com.google.mail_201003
 com.google.mail_201004
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Appending dates provides the additional capability of restricting a scan to a given date range.</p>
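Such a restriction amounts to a lexicographic range over the suffixed row IDs. A minimal sketch (the helper names are illustrative, not Accumulo API; a client would pass the same start and end strings to a scanner's `Range`):

```java
public class DateSuffixRange {
    // Inclusive start, exclusive end, over date-suffixed row IDs such as
    // com.google.code_201003 (domain reversed, month appended as above)
    static boolean inRange(String row, String start, String endExclusive) {
        return row.compareTo(start) >= 0 && row.compareTo(endExclusive) < 0;
    }

    public static void main(String[] args) {
        String start = "com.google.code_201003";
        String end   = "com.google.code_201006"; // exclusive: through May 2010
        System.out.println(inRange("com.google.code_201004", start, end)); // true
        System.out.println(inRange("com.google.code_201007", start, end)); // false
    }
}
```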
 
@@ -243,7 +237,7 @@ com.google.mail_201004
 
 <p>To support efficient lookups of multiple rowIDs from the same table, the Accumulo client library provides a BatchScanner. Users specify a set of Ranges to the BatchScanner, which performs the lookups in multiple threads to multiple servers and returns an Iterator over all the rows retrieved. The rows returned are NOT in sorted order, as is the case with the basic Scanner interface.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>// first we scan the index for IDs of rows matching our query
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// first we scan the index for IDs of rows matching our query
 
 Text term = new Text("mySearchTerm");
 
@@ -264,8 +258,7 @@ bscan.fetchFamily("attributes");
 
 for(Entry&lt;Key,Value&gt; entry : scan)
     System.out.println(e.getValue());
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>One advantage of the dynamic schema capabilities of Accumulo is that different fields may be indexed into the same physical table. However, it may be necessary to create different index tables if the terms must be formatted differently in order to maintain proper sort order. For example, real numbers must be formatted differently than their usual notation in order to be sorted correctly. In these cases, usually one index per unique data type will suffice.</p>
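As one concrete case of the formatting issue above, plain decimal strings do not sort numerically, but zero-padding non-negative integers to a fixed width makes lexicographic order agree with numeric order. A minimal sketch (the encoding choice is illustrative; Accumulo does not mandate a particular one):

```java
public class SortableEncoding {
    // Zero-pad to 19 digits, enough for the full non-negative long range,
    // so that byte-wise lexicographic order matches numeric order
    static String encode(long n) {
        return String.format("%019d", n);
    }

    public static void main(String[] args) {
        System.out.println("10".compareTo("9") < 0);             // true: wrong order
        System.out.println(encode(10).compareTo(encode(9)) < 0); // false: 10 after 9
    }
}
```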
 
@@ -301,7 +294,7 @@ for(Entry&lt;Key,Value&gt; entry : scan)
 
 <p>Finally, we perform set intersection operations on the TabletServer via a special iterator called the Intersecting Iterator. Since documents are partitioned into many bins, a search of all documents must search every bin. We can use the BatchScanner to scan all bins in parallel. The Intersecting Iterator should be enabled on a BatchScanner within user query code as follows:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Text[] terms = {new Text("the"), new Text("white"), new Text("house")};
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Text[] terms = {new Text("the"), new Text("white"), new Text("house")};
 
 BatchScanner bs = conn.createBatchScanner(table, auths, 20);
 bs.setScanIterators(20, IntersectingIterator.class.getName(), "ii");
@@ -316,8 +309,7 @@ bs.setRanges(Collections.singleton(new Range()));
 for(Entry&lt;Key,Value&gt; entry : bs) {
     System.out.println(" " + entry.getKey().getColumnQualifier());
 }
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>This code effectively has the BatchScanner scan all tablets of a table, looking for documents that match all the given terms. Because all tablets are being scanned for every query, each query is more expensive than other Accumulo scans, which typically involve a small number of TabletServers. This reduces the number of concurrent queries supported and is subject to what is known as the 'straggler' problem in which every query runs as slowly as the slowest server participating.</p>
 
diff --git a/1.3/user_manual/Writing_Accumulo_Clients.html b/1.3/user_manual/Writing_Accumulo_Clients.html
index 49503ac..754bbb7 100644
--- a/1.3/user_manual/Writing_Accumulo_Clients.html
+++ b/1.3/user_manual/Writing_Accumulo_Clients.html
@@ -163,13 +163,12 @@
 
 <p>All clients must first identify the Accumulo instance with which they will be communicating. Code to do this is as follows:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>String instanceName = "myinstance";
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>String instanceName = "myinstance";
 String zooServers = "zooserver-one,zooserver-two"
 Instance inst = new ZooKeeperInstance(instanceName, zooServers);
 
 Connector conn = new Connector(inst, "user","passwd".getBytes());
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-writing-data"><a id="Writing_Data"></a> Writing Data</h2>
 
@@ -177,7 +176,7 @@ Connector conn = new Connector(inst, "user","passwd".getBytes());
 
 <p>Mutations can be created thus:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Text rowID = new Text("row1");
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Text rowID = new Text("row1");
 Text colFam = new Text("myColFam");
 Text colQual = new Text("myColQual");
 ColumnVisibility colVis = new ColumnVisibility("public");
@@ -187,8 +186,7 @@ Value value = new Value("myValue".getBytes());
 
 Mutation mutation = new Mutation(rowID);
 mutation.put(colFam, colQual, colVis, timestamp, value);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h3 id="-batchwriter"><a id="BatchWriter"></a> BatchWriter</h3>
 
@@ -196,7 +194,7 @@ mutation.put(colFam, colQual, colVis, timestamp, value);
 
 <p>Mutations are added to a BatchWriter thus:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>long memBuf = 1000000L; // bytes to store before sending a batch
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>long memBuf = 1000000L; // bytes to store before sending a batch
 long timeout = 1000L; // milliseconds to wait before sending
 int numThreads = 10;
 
@@ -206,8 +204,7 @@ BatchWriter writer =
 writer.add(mutation);
 
 writer.close();
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>An example of using the batch writer can be found at <br />
 accumulo/docs/examples/README.batch</p>
@@ -220,7 +217,7 @@ accumulo/docs/examples/README.batch</p>
 
 <p>To retrieve data, clients use a Scanner, which acts like an Iterator over keys and values. Scanners can be configured to start and stop at particular keys, and to return a subset of the columns available.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>// specify which visibilities we are allowed to see
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// specify which visibilities we are allowed to see
 Authorizations auths = new Authorizations("public");
 
 Scanner scan =
@@ -233,8 +230,7 @@ for(Entry&lt;Key,Value&gt; entry : scan) {
     String row = e.getKey().getRow();
     Value value = e.getValue();
 }
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h3 id="-batchscanner"><a id="BatchScanner"></a> BatchScanner</h3>
 
@@ -242,7 +238,7 @@ for(Entry&lt;Key,Value&gt; entry : scan) {
 
 <p>The BatchScanner is configured similarly to the Scanner; it can be configured to retrieve a subset of the columns available, but rather than passing a single Range, BatchScanners accept a set of Ranges. It is important to note that the keys returned by a BatchScanner are not in sorted order since the keys streamed are from multiple TabletServers in parallel.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>ArrayList&lt;Range&gt; ranges = new ArrayList&lt;Range&gt;();
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ArrayList&lt;Range&gt; ranges = new ArrayList&lt;Range&gt;();
 // populate list of ranges ...
 
 BatchScanner bscan =
@@ -253,8 +249,7 @@ bscan.fetchFamily("attributes");
 
 for(Entry&lt;Key,Value&gt; entry : scan)
     System.out.println(e.getValue());
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>An example of the BatchScanner can be found at <br />
 accumulo/docs/examples/README.batch</p>
diff --git a/1.3/user_manual/examples/aggregation.html b/1.3/user_manual/examples/aggregation.html
index 4a84e5e..7c4ff35 100644
--- a/1.3/user_manual/examples/aggregation.html
+++ b/1.3/user_manual/examples/aggregation.html
@@ -152,7 +152,7 @@
 copy the produced jar into the accumulo lib dir.  This is already done in the
 tar distribution.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo shell -u username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo shell -u username
 Enter current password for 'username'@'instance': ***
 
 Shell - Apache Accumulo Interactive Shell
@@ -177,8 +177,7 @@ username@instance aggtest1&gt; scan
 foo app:1 []  1,a,b,foo,w,z
 foo app:2 []  bird,cat,dog,mouse,muskrat
 username@instance aggtest1&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>In this example a table is created and the example set aggregator is
 applied to the column family app.</p>
diff --git a/1.3/user_manual/examples/batch.html b/1.3/user_manual/examples/batch.html
index 2d9b68e..1ec5048 100644
--- a/1.3/user_manual/examples/batch.html
+++ b/1.3/user_manual/examples/batch.html
@@ -161,15 +161,14 @@ list of zookeeper nodes (given as zookeepers here).</p>
 <p>Before you run this, you must ensure that the user you are running has the
 “exampleVis” authorization. (you can set this in the shell with “setauths -u username -s exampleVis”)</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u root
 &gt; setauths -u username -s exampleVis
 &gt; exit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>You must also create the table, batchtest1, ahead of time. (In the shell, use “createtable batchtest1”)</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username
 &gt; createtable batchtest1
 &gt; exit
 $ ./bin/accumulo org.apache.accumulo.examples.client.SequentialBatchWriter instance zookeepers username password batchtest1 0 10000 50 20000000 500 20 exampleVis
@@ -185,8 +184,7 @@ $ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner instance
 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : 2173.91 lookups/sec   0.05 secs
 
 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : num results : 100
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.3/user_manual/examples/bloom.html b/1.3/user_manual/examples/bloom.html
index 9eb6e31..2ac1fbe 100644
--- a/1.3/user_manual/examples/bloom.html
+++ b/1.3/user_manual/examples/bloom.html
@@ -154,7 +154,7 @@ do not exist in a table.</p>
 
 <p>Below table named bloom_test is created and bloom filters are enabled.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.3.x-incubating
 - instance name: instance
@@ -166,26 +166,23 @@ username@instance&gt; setauths -u username -s exampleVis
 username@instance&gt; createtable bloom_test
 username@instance bloom_test&gt; config -t bloom_test -s table.bloom.enabled=true
 username@instance bloom_test&gt; exit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below 1 million random values are inserted into accumulo.  The randomly
 generated rows range between 0 and 1 billion.  The random number generator is
 initialized with the seed 7.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchWriter -s 7 instance zookeepers username password bloom_test 1000000 0 1000000000 50 2000000 60000 3 exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchWriter -s 7 instance zookeepers username password bloom_test 1000000 0 1000000000 50 2000000 60000 3 exampleVis
+</code></pre></div></div>
 
 <p>Below, the table is flushed; look at the monitor page and wait for the flush to
 complete.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 username@instance&gt; flush -t bloom_test
 Flush of table bloom_test initiated...
 username@instance&gt; exit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The flush will be finished when there are no entries in memory and the 
 number of minor compactions goes to zero. Refresh the page to see changes to the table.</p>
@@ -194,21 +191,20 @@ number of minor compactions goes to zero. Refresh the page to see changes to the
 same seed is used to generate the queries, therefore everything is found in the
 table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner -s 7 instance zookeepers username password bloom_test 500 0 1000000000 50 20 exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner -s 7 instance zookeepers username password bloom_test 500 0 1000000000 50 20 exampleVis
 Generating 500 random queries...finished
 96.19 lookups/sec   5.20 secs
 num results : 500
 Generating 500 random queries...finished
 102.35 lookups/sec   4.89 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below another 500 queries are performed, using a different seed which results
 in nothing being found.  In this case the lookups are much faster because of
 the bloom filters.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ../bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner -s 8 instance zookeepers username password bloom_test 500 0 1000000000 50 20 exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ../bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner -s 8 instance zookeepers username password bloom_test 500 0 1000000000 50 20 exampleVis
 Generating 500 random queries...finished
 2212.39 lookups/sec   0.23 secs
 num results : 0
@@ -217,8 +213,7 @@ Generating 500 random queries...finished
 4464.29 lookups/sec   0.11 secs
 num results : 0
 Did not find 500 rows
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <hr />
 
@@ -250,29 +245,27 @@ seed.</p>
 NG seed 7.  Even though only one map file will likely contain entries for this
 seed, all map files will be interrogated.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner -s 7 instance zookeepers username password bloom_test1 500 0 1000000000 50 20 exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner -s 7 instance zookeepers username password bloom_test1 500 0 1000000000 50 20 exampleVis
 Generating 500 random queries...finished
 35.09 lookups/sec  14.25 secs
 num results : 500
 Generating 500 random queries...finished
 35.33 lookups/sec  14.15 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below the same lookups are done against the table with bloom filters.  The
 lookups were 2.86 times faster because only one map file was used, even though three
 map files existed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner -s 7 instance zookeepers username password bloom_test2 500 0 1000000000 50 20 exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner -s 7 instance zookeepers username password bloom_test2 500 0 1000000000 50 20 exampleVis
 Generating 500 random queries...finished
 99.03 lookups/sec   5.05 secs
 num results : 500
 Generating 500 random queries...finished
 101.15 lookups/sec   4.94 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.3/user_manual/examples/bulkIngest.html b/1.3/user_manual/examples/bulkIngest.html
index 3b9c4b7..5b0c000 100644
--- a/1.3/user_manual/examples/bulkIngest.html
+++ b/1.3/user_manual/examples/bulkIngest.html
@@ -157,13 +157,12 @@ accumulo.  Then we verify the 1000 rows are in accumulo. The
 first two arguments to all of the commands except for GenerateTestData are the
 accumulo instance name, and a comma-separated list of zookeepers.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.mapreduce.bulk.SetupTable instance zookeepers username password test_bulk row_00000333 row_00000666
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.mapreduce.bulk.SetupTable instance zookeepers username password test_bulk row_00000333 row_00000666
 $ ./bin/accumulo org.apache.accumulo.examples.mapreduce.bulk.GenerateTestData 0 1000 bulk/test_1.txt
 
 $ ./bin/tool.sh lib/accumulo-examples-*[^c].jar org.apache.accumulo.examples.mapreduce.bulk.BulkIngestExample instance zookeepers username password test_bulk bulk tmp/bulkWork
 $ ./bin/accumulo org.apache.accumulo.examples.mapreduce.bulk.VerifyIngest instance zookeepers username password test_bulk 0 1000
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>For a high level discussion of bulk ingest, see the docs dir.</p>
 
diff --git a/1.3/user_manual/examples/constraints.html b/1.3/user_manual/examples/constraints.html
index b01f7a9..47a0daf 100644
--- a/1.3/user_manual/examples/constraints.html
+++ b/1.3/user_manual/examples/constraints.html
@@ -154,7 +154,7 @@ numeric keys.  The other constraint does not allow non numeric values. Two
 inserts that violate these constraints are attempted and denied.  The scan at
 the end shows the inserts were not allowed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p pass
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p pass
 
 Shell - Apache Accumulo Interactive Shell
 - 
@@ -178,8 +178,7 @@ username@instance testConstraints&gt; insert r1! cf1 cq1 ABC
 username@instance testConstraints&gt; scan
 r1 cf1:cq1 []    1111
 username@instance testConstraints&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.3/user_manual/examples/dirlist.html b/1.3/user_manual/examples/dirlist.html
index ba447ba..d4a6a0c 100644
--- a/1.3/user_manual/examples/dirlist.html
+++ b/1.3/user_manual/examples/dirlist.html
@@ -161,42 +161,37 @@
 
 <p>To begin, ingest some data with Ingest.java.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.dirlist.Ingest instance zookeepers username password direxample dirindex exampleVis /local/user1/workspace
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.dirlist.Ingest instance zookeepers username password direxample dirindex exampleVis /local/user1/workspace
+</code></pre></div></div>
 
 <p>Note that running this example will create tables direxample and dirindex in Accumulo that you should delete when you have completed the example.
 If you modify a file or add new files in the directory ingested (e.g. /local/user1/workspace), you can run Ingest again to add new information into the Accumulo tables.</p>
 
 <p>To browse the data ingested, use Viewer.java.  Be sure to give the “username” user the authorizations to see the data.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.dirlist.Viewer instance zookeepers username password direxample exampleVis /local/user1/workspace
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.dirlist.Viewer instance zookeepers username password direxample exampleVis /local/user1/workspace
+</code></pre></div></div>
 
 <p>To list the contents of specific directories, use QueryUtil.java.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance zookeepers username password direxample exampleVis /local/user1
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance zookeepers username password direxample exampleVis /local/user1
 $ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance zookeepers username password direxample exampleVis /local/user1/workspace
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To perform searches on file or directory names, also use QueryUtil.java.  Search terms must contain no more than one wildcard and cannot contain “/”.
 <em>Note</em> these queries run on the <em>dirindex</em> table instead of the direxample table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance zookeepers username password dirindex exampleVis filename -search
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance zookeepers username password dirindex exampleVis filename -search
 $ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance zookeepers username password dirindex exampleVis 'filename*' -search
 $ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance zookeepers username password dirindex exampleVis '*jar' -search
 $ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance zookeepers username password dirindex exampleVis filename*jar -search
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To count the number of direct children (directories and files) and descendants (children and children’s descendants, directories and files), run FileCountMR over the direxample table.
 The results can be written back to the same table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/tool.sh lib/accumulo-examples-*[^c].jar org.apache.accumulo.examples.dirlist.FileCountMR instance zookeepers username password direxample direxample exampleVis exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/tool.sh lib/accumulo-examples-*[^c].jar org.apache.accumulo.examples.dirlist.FileCountMR instance zookeepers username password direxample direxample exampleVis exampleVis
+</code></pre></div></div>
 
 <p>Alternatively, you can also run FileCount.java.</p>
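The counting that FileCountMR produces can be sketched in plain Java. This is a standalone illustration of the counting logic only, not the MapReduce job or its Accumulo I/O; the class and method names below are hypothetical.

```java
import java.util.*;

// Sketch of FileCountMR's result: for each directory, count direct children
// (files and subdirectories) and all descendants. Illustration only.
public class FileCountSketch {

    public static Map<String, int[]> count(List<String> paths) {
        // Collect every node (files and intermediate directories) exactly once.
        Set<String> nodes = new TreeSet<>();
        for (String path : paths) {
            for (String p = path; p.lastIndexOf('/') > 0; p = p.substring(0, p.lastIndexOf('/'))) {
                nodes.add(p);
            }
        }
        Map<String, int[]> counts = new TreeMap<>(); // dir -> {children, descendants}
        for (String node : nodes) {
            // The immediate parent gains a direct child...
            counts.computeIfAbsent(node.substring(0, node.lastIndexOf('/')), k -> new int[2])[0]++;
            // ...and every ancestor gains a descendant.
            for (String p = node; p.lastIndexOf('/') > 0; ) {
                p = p.substring(0, p.lastIndexOf('/'));
                counts.computeIfAbsent(p, k -> new int[2])[1]++;
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, int[]> c = count(Arrays.asList(
            "/local/user1/workspace/a.java",
            "/local/user1/workspace/src/b.java",
            "/local/user1/notes.txt"));
        // /local/user1 has 2 direct children (workspace, notes.txt) and 5 descendants.
        System.out.println(Arrays.toString(c.get("/local/user1"))); // prints [2, 5]
    }
}
```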
 
diff --git a/1.3/user_manual/examples/filter.html b/1.3/user_manual/examples/filter.html
index 74e42a3..f508a08 100644
--- a/1.3/user_manual/examples/filter.html
+++ b/1.3/user_manual/examples/filter.html
@@ -155,7 +155,7 @@ ones).  Filters implement the org.apache.accumulo.core.iterators.iterators.filte
 contains a method accept(Key k, Value v).  This method returns true if the key, 
 value pair are to be delivered and false if they are to be ignored.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable filtertest
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable filtertest
 username@instance filtertest&gt; setiter -t filtertest -scan -p 10 -n myfilter -filter
 FilteringIterator uses Filters to accept or reject key/value pairs
 ----------&gt; entering options: &lt;filterPriorityNumber&gt; &lt;ageoff|regex|filterClass&gt;
@@ -170,15 +170,13 @@ username@instance filtertest&gt; scan
 username@instance filtertest&gt; insert foo a b c
 username@instance filtertest&gt; scan
 foo a:b []    c
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>… wait 30 seconds …</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; scan
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; scan
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Note the absence of the entry inserted more than 30 seconds ago.  Since the
 scope was set to “scan”, this means the entry is still in Accumulo, but is
@@ -191,7 +189,7 @@ on the “minc” and “majc” scopes you can flush and compact your table. Th
 happen automatically as a background operation on any table that is being 
 actively written to, but these are the commands to force compaction:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; setiter -t filtertest -scan -minc -majc -p 10 -n myfilter -filter
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; setiter -t filtertest -scan -minc -majc -p 10 -n myfilter -filter
 FilteringIterator uses Filters to accept or reject key/value pairs
 ----------&gt; entering options: &lt;filterPriorityNumber&gt; &lt;ageoff|regex|filterClass&gt;
 ----------&gt; set org.apache.accumulo.core.iterators.FilteringIterator option (&lt;name&gt; &lt;value&gt;, hit enter to skip): 0 ageoff
@@ -206,15 +204,14 @@ username@instance filtertest&gt; flush -t filtertest
 username@instance filtertest&gt; compact -t filtertest
 08 11:14:10,800 [shell.Shell] INFO : Compaction of table filtertest scheduled for 20110208111410EST
 username@instance filtertest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the compaction runs, the newly created files will not contain any data that should be aged off, and the
 Accumulo garbage collector will remove the old files.</p>
 
 <p>To see the iterator settings for a table, use:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; config -t filtertest -f iterator
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; config -t filtertest -f iterator
 ---------+------------------------------------------+----------------------------------------------------------
 SCOPE    | NAME                                     | VALUE
 ---------+------------------------------------------+----------------------------------------------------------
@@ -235,8 +232,7 @@ table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.
 table    | table.iterator.scan.vers.opt.maxVersions .. | 1
 ---------+------------------------------------------+----------------------------------------------------------
 username@instance filtertest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>If you would like to apply multiple filters, this can be done using a single
 iterator. Just continue adding entries during the 
diff --git a/1.3/user_manual/examples/helloworld.html b/1.3/user_manual/examples/helloworld.html
index e62f736..f94ab99 100644
--- a/1.3/user_manual/examples/helloworld.html
+++ b/1.3/user_manual/examples/helloworld.html
@@ -158,46 +158,39 @@
 
 <p>Log into the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+</code></pre></div></div>
 
 <p>Create a table called ‘hellotable’:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable hellotable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable hellotable
+</code></pre></div></div>
 
 <p>Launch a Java program that inserts data with a BatchWriter:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.helloworld.InsertWithBatchWriter instance zookeepers hellotable username password
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.helloworld.InsertWithBatchWriter instance zookeepers hellotable username password
+</code></pre></div></div>
 
 <p>Alternatively, the same data can be inserted using MapReduce writers:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.helloworld.InsertWithOutputFormat instance zookeepers hellotable username password
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.helloworld.InsertWithOutputFormat instance zookeepers hellotable username password
+</code></pre></div></div>
 
 <p>On the accumulo status page at the URL below (where ‘master’ is replaced with the name or IP of your accumulo master), you should see 50K entries.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>http://master:50095/
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>http://master:50095/
+</code></pre></div></div>
 
 <p>To view the entries, use the shell to scan the table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; table hellotable
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; table hellotable
 username@instance hellotable&gt; scan
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>You can also use a Java class to scan the table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.helloworld.ReadData instance zookeepers hellotable username password row_0 row_1001
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.helloworld.ReadData instance zookeepers hellotable username password row_0 row_1001
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.3/user_manual/examples/mapred.html b/1.3/user_manual/examples/mapred.html
index e492511..307baa9 100644
--- a/1.3/user_manual/examples/mapred.html
+++ b/1.3/user_manual/examples/mapred.html
@@ -155,17 +155,16 @@ accumulo table with aggregators.</p>
 <p>To run this example you will need a directory in HDFS containing text files.
 The accumulo readme will be used to show how to run this example.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
 $ hadoop fs -ls /user/username/wc
 Found 1 items
 -rw-r--r--   2 username supergroup       9359 2009-07-15 17:54 /user/username/wc/Accumulo.README
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The first part of running this example is to create a table with aggregation
 for the column family count.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.3.x-incubating
 - instance name: instance
@@ -175,12 +174,11 @@ Shell - Apache Accumulo Interactive Shell
 - 
 username@instance&gt; createtable wordCount -a count=org.apache.accumulo.core.iterators.aggregation.StringSummation 
 username@instance wordCount&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After creating the table, run the word count map reduce job.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>[user1@instance accumulo]$ bin/tool.sh lib/accumulo-examples-*[^c].jar org.apache.accumulo.examples.mapreduce.WordCount instance zookeepers /user/user1/wc wordCount -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[user1@instance accumulo]$ bin/tool.sh lib/accumulo-examples-*[^c].jar org.apache.accumulo.examples.mapreduce.WordCount instance zookeepers /user/user1/wc wordCount -u username -p password
 
 11/02/07 18:20:11 INFO input.FileInputFormat: Total input paths to process : 1
 11/02/07 18:20:12 INFO mapred.JobClient: Running job: job_201102071740_0003
@@ -197,13 +195,12 @@ username@instance wordCount&gt; quit
 11/02/07 18:20:22 INFO mapred.JobClient:     Map input records=255
 11/02/07 18:20:22 INFO mapred.JobClient:     Spilled Records=0
 11/02/07 18:20:22 INFO mapred.JobClient:     Map output records=1452
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the map reduce job completes, query the accumulo table to see word
 counts.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 username@instance&gt; table wordCount
 username@instance wordCount&gt; scan -b the
 the count:20080906 []    75
@@ -221,8 +218,7 @@ total count:20080906 []    1
 tserver, count:20080906 []    1
 tserver.compaction.major.concurrent.max count:20080906 []    1
 ...
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
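The effect of the StringSummation aggregator on the wordCount table can be sketched with a small in-memory model. This is an illustration of the aggregation idea only, not Accumulo's iterator machinery; the class and method names here are hypothetical.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of summing aggregation: entries inserted under the same key are
// combined, so the word-count job can emit ("the", 1) many times and a
// scan reads back a single total. Illustration only.
public class SummationSketch {
    private final Map<String, Long> table = new TreeMap<>();

    // Each insert is merged with any existing value for the key, mirroring
    // aggregation applied at compaction and scan time.
    public void insert(String key, long value) {
        table.merge(key, value, Long::sum);
    }

    public long scan(String key) {
        return table.getOrDefault(key, 0L);
    }

    public static void main(String[] args) {
        SummationSketch wordCount = new SummationSketch();
        for (String w : "the quick the lazy the".split(" "))
            wordCount.insert(w, 1);
        System.out.println("the count: " + wordCount.scan("the")); // prints "the count: 3"
    }
}
```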
 
diff --git a/1.3/user_manual/examples/shard.html b/1.3/user_manual/examples/shard.html
index 9e277d8..df3b8fc 100644
--- a/1.3/user_manual/examples/shard.html
+++ b/1.3/user_manual/examples/shard.html
@@ -160,21 +160,19 @@ document, or “sharded”. This example shows how to use the intersecting itera
 
 <p>To run these example programs, create two tables like below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable shard
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable shard
 username@instance shard&gt; createtable doc2term
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After creating the tables, index some files.  The following command indexes all of the java files in the Accumulo source code.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ cd /local/user1/workspace/accumulo/
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd /local/user1/workspace/accumulo/
 $ find src -name "*.java" | xargs ./bin/accumulo org.apache.accumulo.examples.shard.Index instance zookeepers shard username password 30
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The following command queries the index to find all files containing ‘foo’ and ‘bar’.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ cd $ACCUMULO_HOME
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd $ACCUMULO_HOME
 $ ./bin/accumulo org.apache.accumulo.examples.shard.Query instance zookeepers shard username password foo bar
 /local/user1/workspace/accumulo/src/core/src/test/java/accumulo/core/security/ColumnVisibilityTest.java
 /local/user1/workspace/accumulo/src/core/src/test/java/accumulo/core/client/mock/MockConnectorTest.java
@@ -187,18 +185,16 @@ $ ./bin/accumulo org.apache.accumulo.examples.shard.Query instance zookeepers sh
 /local/user1/workspace/accumulo/src/core/src/test/java/accumulo/core/iterators/WholeRowIteratorTest.java
 /local/user1/workspace/accumulo/src/server/src/test/java/accumulo/server/util/DefaultMapTest.java
 /local/user1/workspace/accumulo/src/server/src/test/java/accumulo/server/tabletserver/InMemoryMapTest.java
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>In order to run ContinuousQuery, we need to run Reverse.java to populate doc2term.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.shard.Reverse instance zookeepers shard doc2term username password
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.shard.Reverse instance zookeepers shard doc2term username password
+</code></pre></div></div>
 
 <p>Below ContinuousQuery is run using 5 terms.  It selects 5 random terms from each document, then continually picks one of those sets of 5 terms at random and queries with it.  It prints the number of matching documents and the query time in seconds.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.shard.ContinuousQuery instance zookeepers shard doc2term username password 5
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.shard.ContinuousQuery instance zookeepers shard doc2term username password 5
 [public, core, class, binarycomparable, b] 2  0.081
 [wordtodelete, unindexdocument, doctablename, putdelete, insert] 1  0.041
 [import, columnvisibilityinterpreterfactory, illegalstateexception, cv, columnvisibility] 1  0.049
@@ -206,8 +202,7 @@ $ ./bin/accumulo org.apache.accumulo.examples.shard.Query instance zookeepers sh
 [for, static, println, public, the] 55  0.211
 [sleeptime, wrappingiterator, options, long, utilwaitthread] 1  0.057
 [string, public, long, 0, wait] 12  0.132
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
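The query side of the sharded index can be sketched in plain Java: within each shard (partition row), every term maps to the documents containing it, and a multi-term query intersects those document sets shard by shard, which is the work the intersecting iterator performs server-side. This is an in-memory illustration only; the class and method names are hypothetical.

```java
import java.util.*;

// Sketch of a document-partitioned ("sharded") index and an intersecting
// multi-term query. Illustration only, not Accumulo's iterator.
public class ShardQuerySketch {
    // shard (partition) -> term -> documents containing the term
    private final Map<Integer, Map<String, Set<String>>> shards = new HashMap<>();

    public void index(int shard, String doc, String... terms) {
        Map<String, Set<String>> byTerm = shards.computeIfAbsent(shard, s -> new HashMap<>());
        for (String t : terms)
            byTerm.computeIfAbsent(t, k -> new TreeSet<>()).add(doc);
    }

    // Documents containing every query term, gathered shard by shard.
    public SortedSet<String> query(String... terms) {
        SortedSet<String> hits = new TreeSet<>();
        for (Map<String, Set<String>> byTerm : shards.values()) {
            Set<String> docs = null;
            for (String t : terms) {
                Set<String> d = byTerm.getOrDefault(t, Collections.emptySet());
                if (docs == null) docs = new TreeSet<>(d); // first term seeds the set
                else docs.retainAll(d);                    // later terms intersect it
            }
            if (docs != null) hits.addAll(docs);
        }
        return hits;
    }

    public static void main(String[] args) {
        ShardQuerySketch idx = new ShardQuerySketch();
        idx.index(0, "A.java", "foo", "bar", "class");
        idx.index(0, "B.java", "foo", "class");
        idx.index(1, "C.java", "foo", "bar");
        System.out.println(idx.query("foo", "bar")); // prints [A.java, C.java]
    }
}
```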
 
diff --git a/1.4/examples/batch.html b/1.4/examples/batch.html
index da6ab48..2eeddbb 100644
--- a/1.4/examples/batch.html
+++ b/1.4/examples/batch.html
@@ -169,13 +169,12 @@ list of zookeeper nodes (given as zookeepers here).</p>
 <p>Before you run this, you must ensure that the user you are running has the
 “exampleVis” authorization. (you can set this in the shell with “setauths -u username -s exampleVis”)</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e "setauths -u username -s exampleVis"
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e "setauths -u username -s exampleVis"
+</code></pre></div></div>
 
 <p>You must also create the table, batchtest1, ahead of time. (In the shell, use “createtable batchtest1”)</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -e "createtable batchtest1"
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -e "createtable batchtest1"
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.SequentialBatchWriter instance zookeepers username password batchtest1 0 10000 50 20000000 500 20 exampleVis
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner instance zookeepers username password batchtest1 100 0 10000 50 20 exampleVis
 07 11:33:11,103 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries...
@@ -189,8 +188,7 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner i
 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : 2173.91 lookups/sec   0.05 secs
 
 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : num results : 100
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
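The reason RandomBatchScanner finds every row when given the writer's seed can be sketched without a cluster: a pseudo-random generator initialized with the same seed regenerates the same row ids, so the queries exactly match what was written. The class and method names below are hypothetical.

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Sketch of seeded write/read reproducibility: the same seed yields the
// same row ids for both the writer and the scanner. Illustration only.
public class SeededRowsSketch {
    static Set<Long> rows(long seed, int count, long max) {
        Random rng = new Random(seed);
        Set<Long> rows = new HashSet<>();
        for (int i = 0; i < count; i++)
            rows.add(Math.floorMod(rng.nextLong(), max)); // row id in [0, max)
        return rows;
    }

    public static void main(String[] args) {
        Set<Long> written = rows(7, 500, 1_000_000_000L);
        Set<Long> queried = rows(7, 500, 1_000_000_000L);
        System.out.println(written.containsAll(queried)); // prints true
        // A different seed generates rows that were almost surely never written.
        Set<Long> other = rows(8, 500, 1_000_000_000L);
        other.retainAll(written);
        System.out.println("overlap: " + other.size());
    }
}
```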
 
diff --git a/1.4/examples/bloom.html b/1.4/examples/bloom.html
index c6eae69..29b2b5f 100644
--- a/1.4/examples/bloom.html
+++ b/1.4/examples/bloom.html
@@ -154,7 +154,7 @@ do not exist in a table.</p>
 
 <p>Below table named bloom_test is created and bloom filters are enabled.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.4.x
 - instance name: instance
@@ -166,43 +166,39 @@ username@instance&gt; setauths -u username -s exampleVis
 username@instance&gt; createtable bloom_test
 username@instance bloom_test&gt; config -t bloom_test -s table.bloom.enabled=true
 username@instance bloom_test&gt; exit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below 1 million random values are inserted into accumulo.  The randomly
 generated rows range between 0 and 1 billion.  The random number generator is
 initialized with the seed 7.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter -s 7 instance zookeepers username password bloom_test 1000000 0 1000000000 50 2000000 60000 3 exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter -s 7 instance zookeepers username password bloom_test 1000000 0 1000000000 50 2000000 60000 3 exampleVis
+</code></pre></div></div>
 
 <p>Below the table is flushed:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test -w'
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test -w'
 05 10:40:06,069 [shell.Shell] INFO : Flush of table bloom_test completed.
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the flush completes, 500 random queries are done against the table.  The
 same seed is used to generate the queries, therefore everything is found in the
 table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -s 7 instance zookeepers username password bloom_test 500 0 1000000000 50 20 exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -s 7 instance zookeepers username password bloom_test 500 0 1000000000 50 20 exampleVis
 Generating 500 random queries...finished
 96.19 lookups/sec   5.20 secs
 num results : 500
 Generating 500 random queries...finished
 102.35 lookups/sec   4.89 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below another 500 queries are performed, using a different seed which results
 in nothing being found.  In this case the lookups are much faster because of
 the bloom filters.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -s 8 instance zookeepers username password bloom_test 500 0 1000000000 50 20 exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -s 8 instance zookeepers username password bloom_test 500 0 1000000000 50 20 exampleVis
 Generating 500 random queries...finished
 2212.39 lookups/sec   0.23 secs
 num results : 0
@@ -211,8 +207,7 @@ Generating 500 random queries...finished
 4464.29 lookups/sec   0.11 secs
 num results : 0
 Did not find 500 rows
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <hr />
 
@@ -244,7 +239,7 @@ million inserts.  If not, then more map files will be created.</p>
 
 <p>The commands for creating the first table without bloom filters are below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.4.x
 - instance name: instance
@@ -263,12 +258,11 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter -s
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter -s 9 instance zookeepers username password bloom_test1 1000000 0 1000000000 50 2000000 60000 3 exampleVis
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The commands for creating the second table with bloom filters are below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.4.x
 - instance name: instance
@@ -288,64 +282,59 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter -s
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter -s 9 instance zookeepers username password bloom_test2 1000000 0 1000000000 50 2000000 60000 3 exampleVis
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below 500 lookups are done against the table without bloom filters using random
 number generator seed 7.  Even though only one map file will likely contain entries for this
 seed, all map files will be interrogated.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -s 7 instance zookeepers username password bloom_test1 500 0 1000000000 50 20 exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -s 7 instance zookeepers username password bloom_test1 500 0 1000000000 50 20 exampleVis
 Generating 500 random queries...finished
 35.09 lookups/sec  14.25 secs
 num results : 500
 Generating 500 random queries...finished
 35.33 lookups/sec  14.15 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below the same lookups are done against the table with bloom filters.  The
 lookups were 2.86 times faster because only one map file was used, even though three
 map files existed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -s 7 instance zookeepers username password bloom_test2 500 0 1000000000 50 20 exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -s 7 instance zookeepers username password bloom_test2 500 0 1000000000 50 20 exampleVis
 Generating 500 random queries...finished
 99.03 lookups/sec   5.05 secs
 num results : 500
 Generating 500 random queries...finished
 101.15 lookups/sec   4.94 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>You can verify the table has three files by looking in HDFS.  To look in HDFS
 you will need the table ID, because this is used in HDFS instead of the table
 name.  The following command will show table ids.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'tables -l'
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'tables -l'
 !METADATA       =&gt;         !0
 bloom_test1     =&gt;         o7
 bloom_test2     =&gt;         o8
 trace           =&gt;          1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>So the table id for bloom_test2 is o8.  The command below shows what files this
 table has in HDFS.  This assumes Accumulo is at the default location in HDFS.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -lsr /accumulo/tables/o8
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -lsr /accumulo/tables/o8
 drwxr-xr-x   - username supergroup          0 2012-01-10 14:02 /accumulo/tables/o8/default_tablet
 -rw-r--r--   3 username supergroup   52672650 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dj.rf
 -rw-r--r--   3 username supergroup   52436176 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dk.rf
 -rw-r--r--   3 username supergroup   52850173 2012-01-10 14:02 /accumulo/tables/o8/default_tablet/F00000dl.rf
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Running the PrintInfo command shows that one of the files has a bloom filter
 and its 1.5MB.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo /accumulo/tables/o8/default_tablet/F00000dj.rf
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo /accumulo/tables/o8/default_tablet/F00000dj.rf
 Locality group         : &lt;DEFAULT&gt;
   Start block          : 0
   Num   blocks         : 752
@@ -369,8 +358,7 @@ Meta block     : acu_bloom
   Raw size             : 1,540,292 bytes
   Compressed size      : 1,433,115 bytes
   Compression type     : gz
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.4/examples/bulkIngest.html b/1.4/examples/bulkIngest.html
index 89709bc..622937d 100644
--- a/1.4/examples/bulkIngest.html
+++ b/1.4/examples/bulkIngest.html
@@ -157,13 +157,12 @@ accumulo.  Then we verify the 1000 rows are in accumulo. The
 first two arguments to all of the commands except for GenerateTestData are the
 accumulo instance name, and a comma-separated list of zookeepers.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.mapreduce.bulk.SetupTable instance zookeepers username password test_bulk row_00000333 row_00000666
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.mapreduce.bulk.SetupTable instance zookeepers username password test_bulk row_00000333 row_00000666
 $ ./bin/accumulo org.apache.accumulo.examples.simple.mapreduce.bulk.GenerateTestData 0 1000 bulk/test_1.txt
 
 $ ./bin/tool.sh lib/examples-simple-*[^cs].jar org.apache.accumulo.examples.simple.mapreduce.bulk.BulkIngestExample instance zookeepers username password test_bulk bulk tmp/bulkWork
 $ ./bin/accumulo org.apache.accumulo.examples.simple.mapreduce.bulk.VerifyIngest instance zookeepers username password test_bulk 0 1000
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>For a high level discussion of bulk ingest, see the docs dir.</p>
 
diff --git a/1.4/examples/combiner.html b/1.4/examples/combiner.html
index 7872be1..36823dd 100644
--- a/1.4/examples/combiner.html
+++ b/1.4/examples/combiner.html
@@ -158,7 +158,7 @@
 copy the produced jar into the accumulo lib dir.  This is already done in the
 tar distribution.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo shell -u username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo shell -u username
 Enter current password for 'username'@'instance': ***
 
 Shell - Apache Accumulo Interactive Shell
@@ -196,8 +196,7 @@ username@instance runners&gt; scan
 123456 hstat:virtualMarathon []    6a,6b,d5,2
 123456 name:first []    Joe
 123456 stat:marathon []    220,240,690,3
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>In this example a table is created and the example stats combiner is applied to
 the column family stat and hstat.  The stats combiner computes min,max,sum, and
diff --git a/1.4/examples/constraints.html b/1.4/examples/constraints.html
index 780ab46..6858eba 100644
--- a/1.4/examples/constraints.html
+++ b/1.4/examples/constraints.html
@@ -161,7 +161,7 @@ numeric keys.  The other constraint does not allow non numeric values. Two
 inserts that violate these constraints are attempted and denied.  The scan at
 the end shows the inserts were not allowed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 
 Shell - Apache Accumulo Interactive Shell
 - 
@@ -185,8 +185,7 @@ username@instance testConstraints&gt; insert r1! cf1 cq1 ABC
 username@instance testConstraints&gt; scan
 r1 cf1:cq1 []    1111
 username@instance testConstraints&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.4/examples/dirlist.html b/1.4/examples/dirlist.html
index 832a287..b596ef5 100644
--- a/1.4/examples/dirlist.html
+++ b/1.4/examples/dirlist.html
@@ -167,9 +167,8 @@
 
 <p>To begin, ingest some data with Ingest.java.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Ingest instance zookeepers username password dirTable indexTable dataTable exampleVis 100000 /local/username/workspace
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Ingest instance zookeepers username password dirTable indexTable dataTable exampleVis 100000 /local/username/workspace
+</code></pre></div></div>
 
 <p>This may take some time if there are large files in the /local/username/workspace directory.  If you use 0 instead of 100000 on the command line, the ingest will run much faster, but it will not put any file data into Accumulo (the dataTable will be empty).
 Note that running this example will create tables dirTable, indexTable, and dataTable in Accumulo that you should delete when you have completed the example.
@@ -177,46 +176,41 @@ If you modify a file or add new files in the directory ingested (e.g. /local/use
 
 <p>To browse the data ingested, use Viewer.java.  Be sure to give the “username” user the authorizations to see the data (in this case, run</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e 'setauths -u username -s exampleVis'
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e 'setauths -u username -s exampleVis'
+</code></pre></div></div>
 
 <p>then run the Viewer:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Viewer instance zookeepers username password dirTable dataTable exampleVis /local/username/workspace
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Viewer instance zookeepers username password dirTable dataTable exampleVis /local/username/workspace
+</code></pre></div></div>
 
 <p>To list the contents of specific directories, use QueryUtil.java.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil instance zookeepers username password dirTable exampleVis /local/username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil instance zookeepers username password dirTable exampleVis /local/username
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil instance zookeepers username password dirTable exampleVis /local/username/workspace
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To perform searches on file or directory names, also use QueryUtil.java.  Search terms must contain no more than one wild card and cannot contain “/”.
 <em>Note</em> these queries run on the <em>indexTable</em> table instead of the dirTable table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil instance zookeepers username password indexTable exampleVis filename -search
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil instance zookeepers username password indexTable exampleVis filename -search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil instance zookeepers username password indexTable exampleVis 'filename*' -search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil instance zookeepers username password indexTable exampleVis '*jar' -search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil instance zookeepers username password indexTable exampleVis filename*jar -search
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To count the number of direct children (directories and files) and descendants (children and children’s descendants, directories and files), run the FileCount over the dirTable table.
 The results are written back to the same table.  FileCount reads from and writes to Accumulo.  This requires scan authorizations for the read and a visibility for the data written.
 In this example, the authorizations and visibility are set to the same value, exampleVis.  See README.visibility for more information on visibility and authorizations.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.FileCount instance zookeepers username password dirTable exampleVis exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.FileCount instance zookeepers username password dirTable exampleVis exampleVis
+</code></pre></div></div>
 
 <h2 id="directory-table">Directory Table</h2>
 
 <p>Here is a illustration of what data looks like in the directory table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]    value
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]    value
 000 dir:exec [exampleVis]    true
 000 dir:hidden [exampleVis]    false
 000 dir:lastmod [exampleVis]    1291996886000
@@ -230,8 +224,7 @@ In this example, the authorizations and visibility are set to the same value, ex
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:lastmod [exampleVis]    1308746481000
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:length [exampleVis]    9192
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:md5 [exampleVis]    274af6419a3c4c4a259260ac7017cbf1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The rows are of the form depth + path, where depth is the number of slashes (“/”) in the path padded to 3 digits.  This is so that all the children of a directory appear as consecutive keys in Accumulo; without the depth, you would for example see all the subdirectories of /local before you saw /usr.
 For directories the column family is “dir”.  For files the column family is Long.MAX_VALUE - lastModified in bytes rather than string format so that newer versions sort earlier.</p>
@@ -240,13 +233,12 @@ For directories the column family is “dir”.  For files the column family is
 
 <p>Here is an illustration of what data looks like in the index table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]
 fAccumulo.README i:002/local/Accumulo.README [exampleVis]
 flocal i:001/local [exampleVis]
 rEMDAER.olumuccA i:002/local/Accumulo.README [exampleVis]
 rlacol i:001/local [exampleVis]
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The values of the index table are null.  The rows are of the form “f” + filename or “r” + reverse file name.  This is to enable searches with wildcards at the beginning, middle, or end.</p>
 
@@ -254,13 +246,12 @@ rlacol i:001/local [exampleVis]
 
 <p>Here is an illustration of what data looks like in the data table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]    value
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]    value
 274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00filext [exampleVis]    README
 274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00name [exampleVis]    /local/Accumulo.README
 274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x00 [exampleVis]    *******************************************************************************\x0A1. Building\x0A\x0AIn the normal tarball or RPM release of accumulo, [truncated]
 274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x01 [exampleVis]
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The rows are the md5 hash of the file.  Some column family : column qualifier pairs are “refs” : hash of file name + null byte + property name, in which case the value is property value.  There can be multiple references to the same file which are distinguished by the hash of the file name.
 Other column family : column qualifier pairs are “~chunk” : chunk size in bytes + chunk number in bytes, in which case the value is the bytes for that chunk of the file.  There is an end of file data marker whose chunk number is the number of chunks for the file and whose value is empty.</p>
diff --git a/1.4/examples/filedata.html b/1.4/examples/filedata.html
index d12a7d4..7c09892 100644
--- a/1.4/examples/filedata.html
+++ b/1.4/examples/filedata.html
@@ -176,27 +176,23 @@ The example has the following classes:</p>
 
 <p>If you haven’t already run the README.dirlist example, ingest a file with FileDataIngest.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest instance zookeepers username password dataTable exampleVis 1000 $ACCUMULO_HOME/README
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest instance zookeepers username password dataTable exampleVis 1000 $ACCUMULO_HOME/README
+</code></pre></div></div>
 
 <p>Open the accumulo shell and look at the data.  The row is the MD5 hash of the file, which you can verify by running a command such as ‘md5sum’ on the file.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>&gt; scan -t dataTable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; scan -t dataTable
+</code></pre></div></div>
 
 <p>Run the CharacterHistogram MapReduce to add some information about the file.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/examples-simple*[^cs].jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram instance zookeepers username password dataTable exampleVis exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/examples-simple*[^cs].jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram instance zookeepers username password dataTable exampleVis exampleVis
+</code></pre></div></div>
 
 <p>Scan again to see the histogram stored in the ‘info’ column family.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>&gt; scan -t dataTable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; scan -t dataTable
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.4/examples/filter.html b/1.4/examples/filter.html
index 24ac428..cf1662f 100644
--- a/1.4/examples/filter.html
+++ b/1.4/examples/filter.html
@@ -158,7 +158,7 @@ Filter takes a “negate” parameter which defaults to false.  If set to true,
 return value of the accept method is negated, so that key/value pairs accepted
 by the method are omitted by the Filter.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable filtertest
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable filtertest
 username@instance filtertest&gt; setiter -t filtertest -scan -p 10 -n myfilter -ageoff
 AgeOffFilter removes entries with timestamps more than &lt;ttl&gt; milliseconds old
 ----------&gt; set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: 
@@ -169,15 +169,13 @@ username@instance filtertest&gt; insert foo a b c
 username@instance filtertest&gt; scan
 foo a:b []    c
 username@instance filtertest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>… wait 30 seconds …</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; scan
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; scan
 username@instance filtertest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Note the absence of the entry inserted more than 30 seconds ago.  Since the
 scope was set to “scan”, this means the entry is still in Accumulo, but is
@@ -195,7 +193,7 @@ AgeOffFilter, but any Filter can be configured by using the -class flag.  The
 following commands show how to enable the AgeOffFilter for the minc and majc
 scopes using the -class flag, then flush and compact the table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; setiter -t filtertest -minc -majc -p 10 -n myfilter -class org.apache.accumulo.core.iterators.user.AgeOffFilter
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; setiter -t filtertest -minc -majc -p 10 -n myfilter -class org.apache.accumulo.core.iterators.user.AgeOffFilter
 AgeOffFilter removes entries with timestamps more than &lt;ttl&gt; milliseconds old
 ----------&gt; set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: 
 ----------&gt; set AgeOffFilter parameter ttl, time to live (milliseconds): 30000
@@ -210,8 +208,7 @@ username@instance filtertest&gt; compact -t filtertest -w
 06 10:43:00,632 [shell.Shell] INFO : Compacting table ...
 06 10:43:01,307 [shell.Shell] INFO : Compaction of table filtertest completed for given range
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>By default, flush and compact execute in the background, but with the -w flag
 they will wait to return until the operation has completed.  Both are 
@@ -224,7 +221,7 @@ the old files.</p>
 
 <p>To see the iterator settings for a table, use config.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; config -t filtertest -f iterator
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; config -t filtertest -f iterator
 ---------+---------------------------------------------+---------------------------------------------------------------------------
 SCOPE    | NAME                                        | VALUE
 ---------+---------------------------------------------+---------------------------------------------------------------------------
@@ -242,8 +239,7 @@ table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.
 table    | table.iterator.scan.vers.opt.maxVersions .. | 1
 ---------+---------------------------------------------+---------------------------------------------------------------------------
 username@instance filtertest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>When setting new iterators, make sure to order their priority numbers 
 (specified with -p) in the order you would like the iterators to be applied.
diff --git a/1.4/examples/helloworld.html b/1.4/examples/helloworld.html
index 2396b70..3a433b4 100644
--- a/1.4/examples/helloworld.html
+++ b/1.4/examples/helloworld.html
@@ -158,46 +158,39 @@
 
 <p>Log into the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+</code></pre></div></div>
 
 <p>Create a table called ‘hellotable’:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable hellotable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable hellotable
+</code></pre></div></div>
 
 <p>Launch a Java program that inserts data with a BatchWriter:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithBatchWriter instance zookeepers username password hellotable 
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithBatchWriter instance zookeepers username password hellotable 
+</code></pre></div></div>
 
 <p>Alternatively, the same data can be inserted using MapReduce writers:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithOutputFormat instance zookeepers username password hellotable 
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithOutputFormat instance zookeepers username password hellotable 
+</code></pre></div></div>
 
 <p>On the accumulo status page at the URL below (where ‘master’ is replaced with the name or IP of your accumulo master), you should see 50K entries</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>http://master:50095/
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>http://master:50095/
+</code></pre></div></div>
 
 <p>To view the entries, use the shell to scan the table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; table hellotable
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; table hellotable
 username@instance hellotable&gt; scan
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>You can also use a Java class to scan the table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.ReadData instance zookeepers username password hellotable row_0 row_1001
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.ReadData instance zookeepers username password hellotable row_0 row_1001
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.4/examples/isolation.html b/1.4/examples/isolation.html
index 3da8cbe..8992a6b 100644
--- a/1.4/examples/isolation.html
+++ b/1.4/examples/isolation.html
@@ -162,7 +162,7 @@ reading the row at the same time a mutation is changing the row.</p>
 <p>Below, Interference Test is run without isolation enabled for 5000 iterations
 and it reports problems.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest instance zookeepers username password isotest 5000 false
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest instance zookeepers username password isotest 5000 false
 ERROR Columns in row 053 had multiple values [53, 4553]
 ERROR Columns in row 061 had multiple values [561, 61]
 ERROR Columns in row 070 had multiple values [570, 1070]
@@ -171,16 +171,14 @@ ERROR Columns in row 088 had multiple values [2588, 1588]
 ERROR Columns in row 106 had multiple values [2606, 3106]
 ERROR Columns in row 115 had multiple values [4615, 3115]
 finished
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, Interference Test is run with isolation enabled for 5000 iterations and
 it reports no problems.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest instance zookeepers username password isotest 5000 true
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest instance zookeepers username password isotest 5000 true
 finished
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.4/examples/mapred.html b/1.4/examples/mapred.html
index f435487..3cee163 100644
--- a/1.4/examples/mapred.html
+++ b/1.4/examples/mapred.html
@@ -155,17 +155,16 @@ accumulo table with combiners.</p>
 <p>To run this example you will need a directory in HDFS containing text files.
 The accumulo readme will be used to show how to run this example.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
 $ hadoop fs -ls /user/username/wc
 Found 1 items
 -rw-r--r--   2 username supergroup       9359 2009-07-15 17:54 /user/username/wc/Accumulo.README
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The first part of running this example is to create a table with a combiner
 for the column family count.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.4.x
 - instance name: instance
@@ -181,12 +180,11 @@ SummingCombiner interprets Values as Longs and adds them together.  A variety of
 ----------&gt; set SummingCombiner parameter lossy, if true, failed decodes are ignored. Otherwise combiner will error on failed decodes (default false): &lt;TRUE|FALSE&gt;: false 
 ----------&gt; set SummingCombiner parameter type, &lt;VARLEN|FIXEDLEN|STRING|fullClassName&gt;: STRING
 username@instance wordCount&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After creating the table, run the word count map reduce job.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/examples-simple*[^cs].jar org.apache.accumulo.examples.simple.mapreduce.WordCount instance zookeepers /user/username/wc wordCount -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/examples-simple*[^cs].jar org.apache.accumulo.examples.simple.mapreduce.WordCount instance zookeepers /user/username/wc wordCount -u username -p password
 
 11/02/07 18:20:11 INFO input.FileInputFormat: Total input paths to process : 1
 11/02/07 18:20:12 INFO mapred.JobClient: Running job: job_201102071740_0003
@@ -203,13 +201,12 @@ username@instance wordCount&gt; quit
 11/02/07 18:20:22 INFO mapred.JobClient:     Map input records=255
 11/02/07 18:20:22 INFO mapred.JobClient:     Spilled Records=0
 11/02/07 18:20:22 INFO mapred.JobClient:     Map output records=1452
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the map reduce job completes, query the accumulo table to see word
 counts.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 username@instance&gt; table wordCount
 username@instance wordCount&gt; scan -b the
 the count:20080906 []    75
@@ -227,8 +224,7 @@ total count:20080906 []    1
 tserver, count:20080906 []    1
 tserver.compaction.major.concurrent.max count:20080906 []    1
 ...
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Another example to look at is
 org.apache.accumulo.examples.simple.mapreduce.UniqueColumns.  This example
diff --git a/1.4/examples/shard.html b/1.4/examples/shard.html
index 690b431..71bda5c 100644
--- a/1.4/examples/shard.html
+++ b/1.4/examples/shard.html
@@ -160,21 +160,19 @@ document, or “sharded”. This example shows how to use the intersecting itera
 
 <p>To run these example programs, create two tables like below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable shard
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable shard
 username@instance shard&gt; createtable doc2term
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After creating the tables, index some files.  The following command indexes all of the java files in the Accumulo source code.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ cd /local/username/workspace/accumulo/
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd /local/username/workspace/accumulo/
 $ find src -name "*.java" | xargs ./bin/accumulo org.apache.accumulo.examples.simple.shard.Index instance zookeepers shard username password 30
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The following command queries the index to find all files containing ‘foo’ and ‘bar’.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ cd $ACCUMULO_HOME
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd $ACCUMULO_HOME
 $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query instance zookeepers shard username password foo bar
 /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/security/ColumnVisibilityTest.java
 /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/client/mock/MockConnectorTest.java
@@ -187,18 +185,16 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query instance zookee
 /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/iterators/WholeRowIteratorTest.java
 /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/util/DefaultMapTest.java
 /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/tabletserver/InMemoryMapTest.java
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>In order to run ContinuousQuery, we need to run Reverse.java to populate doc2term.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Reverse instance zookeepers shard doc2term username password
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Reverse instance zookeepers shard doc2term username password
+</code></pre></div></div>
 
 <p>Below ContinuousQuery is run using 5 terms.  So it selects 5 random terms from each document, then it continually randomly selects one set of 5 terms and queries.  It prints the number of matching documents and the time in seconds.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.ContinuousQuery instance zookeepers shard doc2term username password 5
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.ContinuousQuery instance zookeepers shard doc2term username password 5
 [public, core, class, binarycomparable, b] 2  0.081
 [wordtodelete, unindexdocument, doctablename, putdelete, insert] 1  0.041
 [import, columnvisibilityinterpreterfactory, illegalstateexception, cv, columnvisibility] 1  0.049
@@ -206,8 +202,7 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query instance zookee
 [for, static, println, public, the] 55  0.211
 [sleeptime, wrappingiterator, options, long, utilwaitthread] 1  0.057
 [string, public, long, 0, wait] 12  0.132
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.4/examples/visibility.html b/1.4/examples/visibility.html
index 8665916..e199fbc 100644
--- a/1.4/examples/visibility.html
+++ b/1.4/examples/visibility.html
@@ -150,7 +150,7 @@
           
           <h2 id="creating-a-new-user">Creating a new user</h2>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance&gt; createuser username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@instance&gt; createuser username
 Enter new password for 'username': ********
 Please confirm new password for 'username': ********
 root@instance&gt; user username
@@ -162,14 +162,13 @@ System permissions:
 
 Table permissions (!METADATA): Table.READ
 username@instance&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>A user does not by default have permission to create a table.</p>
 
 <h2 id="granting-permissions-to-a-user">Granting permissions to a user</h2>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; user root
 Enter password for user root: ********
 root@instance&gt; grant -s System.CREATE_TABLE -u username
 root@instance&gt; user username 
@@ -181,8 +180,7 @@ System permissions: System.CREATE_TABLE
 Table permissions (!METADATA): Table.READ
 Table permissions (vistest): Table.READ, Table.WRITE, Table.BULK_IMPORT, Table.ALTER_TABLE, Table.GRANT, Table.DROP_TABLE
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="inserting-data-with-visibilities">Inserting data with visibilities</h2>
 
@@ -191,7 +189,7 @@ tokens.  Authorization tokens are arbitrary strings taken from a restricted
 ASCII character set.  Parentheses are required to specify order of operations 
 in visibilities.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; insert row f1 q1 v1 -l A
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; insert row f1 q1 v1 -l A
 username@instance vistest&gt; insert row f2 q2 v2 -l A&amp;B
 username@instance vistest&gt; insert row f3 q3 v3 -l apple&amp;carrot|broccoli|spinach
 06 11:19:01,432 [shell.Shell] ERROR: org.apache.accumulo.core.util.BadArgumentException: cannot mix | and &amp; near index 12
@@ -199,8 +197,7 @@ apple&amp;carrot|broccoli|spinach
             ^
 username@instance vistest&gt; insert row f3 q3 v3 -l (apple&amp;carrot)|broccoli|spinach
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="scanning-with-authorizations">Scanning with authorizations</h2>
 
@@ -209,25 +206,23 @@ authorizations and each Accumulo scan has authorizations.  Scan authorizations
 are only allowed to be a subset of the user’s authorizations.  By default, a 
 user’s authorizations set is empty.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; scan
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; scan
 username@instance vistest&gt; scan -s A
 06 11:43:14,951 [shell.Shell] ERROR: java.lang.RuntimeException: org.apache.accumulo.core.client.AccumuloSecurityException: Error BAD_AUTHORIZATIONS - The user does not have the specified authorizations assigned
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="setting-authorizations-for-a-user">Setting authorizations for a user</h2>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; setauths -s A
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; setauths -s A
 06 11:53:42,056 [shell.Shell] ERROR: org.apache.accumulo.core.client.AccumuloSecurityException: Error PERMISSION_DENIED - User does not have permission to perform this action
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>A user cannot set authorizations unless the user has the System.ALTER_USER permission.
 The root user has this permission.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; user root
 Enter password for user root: ********
 root@instance vistest&gt; setauths -s A -u username
 root@instance vistest&gt; user username
@@ -237,12 +232,11 @@ row f1:q1 [A]    v1
 username@instance vistest&gt; scan
 row f1:q1 [A]    v1
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The default authorizations for a scan are the user’s entire set of authorizations.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; user root
 Enter password for user root: ********
 root@instance vistest&gt; setauths -s A,B,broccoli -u username
 root@instance vistest&gt; user username
@@ -253,13 +247,12 @@ row f2:q2 [A&amp;B]    v2
 row f3:q3 [(apple&amp;carrot)|broccoli|spinach]    v3
 username@instance vistest&gt; scan -s B
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>If you want, you can limit a user to only be able to insert data which they can read themselves.
 It can be set with the following constraint.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; user root
 Enter password for user root: ******
 root@instance vistest&gt; config -t vistest -s table.constraint.1=org.apache.accumulo.core.security.VisibilityConstraint    
 root@instance vistest&gt; user username
@@ -274,8 +267,7 @@ row f2:q2 [A&amp;B]    v2
 row f3:q3 [(apple&amp;carrot)|broccoli|spinach]    v3
 row f4:q4 [spinach|broccoli]    v4
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.4/user_manual/Accumulo_Shell.html b/1.4/user_manual/Accumulo_Shell.html
index e3b209c..0c8045e 100644
--- a/1.4/user_manual/Accumulo_Shell.html
+++ b/1.4/user_manual/Accumulo_Shell.html
@@ -166,13 +166,12 @@
 
 <p>The shell can be started by the following command:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ACCUMULO_HOME/bin/accumulo shell -u [username]
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ACCUMULO_HOME/bin/accumulo shell -u [username]
+</code></pre></div></div>
 
 <p>The shell will prompt for the corresponding password to the username specified and then display the following prompt:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Shell - Apache Accumulo Interactive Shell
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Shell - Apache Accumulo Interactive Shell
 -
 - version 1.3
 - instance name: myinstance
@@ -180,14 +179,13 @@
 -
 - type 'help' for a list of available commands
 -
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-basic-administration"><a id="Basic_Administration"></a> Basic Administration</h2>
 
 <p>The Accumulo shell can be used to create and delete tables, as well as to configure table and instance specific options.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance&gt; tables
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance&gt; tables
 !METADATA
 
 root@myinstance&gt; createtable mytable
@@ -205,20 +203,18 @@ root@myinstance testtable&gt;
 root@myinstance junk&gt; deletetable testtable
 
 root@myinstance&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The Shell can also be used to insert updates and scan tables. This is useful for inspecting tables.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance mytable&gt; scan
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance mytable&gt; scan
 
 root@myinstance mytable&gt; insert row1 colf colq value1
 insert successful
 
 root@myinstance mytable&gt; scan
 row1 colf:colq [] value1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The value in brackets “[]” would be the visibility labels. Since none were used, this is empty for this row. You can use the “-t” option to scan to see the timestamp for the cell, too.</p>
 
@@ -226,25 +222,23 @@ row1 colf:colq [] value1
 
 <p>The <strong>compact</strong> command instructs Accumulo to schedule a compaction of the table during which files are consolidated and deleted entries are removed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance mytable&gt; compact -t mytable
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance mytable&gt; compact -t mytable
 07 16:13:53,201 [shell.Shell] INFO : Compaction of table mytable
 scheduled for 20100707161353EDT
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The <strong>flush</strong> command instructs Accumulo to write all entries currently in memory for a given table to disk.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance mytable&gt; flush -t mytable
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance mytable&gt; flush -t mytable
 07 16:14:19,351 [shell.Shell] INFO : Flush of table mytable
 initiated...
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-user-administration"><a id="User_Administration"></a> User Administration</h2>
 
 <p>The Shell can be used to add, remove, and grant privileges to users.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance mytable&gt; createuser bob
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance mytable&gt; createuser bob
 Enter new password for 'bob': *********
 Please confirm new password for 'bob': *********
 
@@ -269,8 +263,7 @@ bob@myinstance bobstable&gt; user root
 Enter current password for 'root': *********
 
 root@myinstance bobstable&gt; revoke System.CREATE_TABLE -s -u bob
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <hr />
 
diff --git a/1.4/user_manual/Administration.html b/1.4/user_manual/Administration.html
index 4d8b94e..536caf4 100644
--- a/1.4/user_manual/Administration.html
+++ b/1.4/user_manual/Administration.html
@@ -185,9 +185,8 @@
 
 <p>Choose a directory for the Accumulo installation. This directory will be referenced by the environment variable $ACCUMULO_HOME. Run the following:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ tar xzf $ACCUMULO_HOME/accumulo.tar.gz
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ tar xzf $ACCUMULO_HOME/accumulo.tar.gz
+</code></pre></div></div>
 
 <p>Repeat this step at each machine within the cluster. Usually all machines have the same $ACCUMULO_HOME.</p>
 
@@ -235,7 +234,7 @@ $ACCUMULO_HOME/conf/slaves, one per line.</li>
 <p>Specify appropriate values for the following settings in <br />
 $ACCUMULO_HOME/conf/accumulo-site.xml :</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>&lt;property&gt;
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;property&gt;
     &lt;name&gt;zookeeper&lt;/name&gt;
     &lt;value&gt;zooserver-one:2181,zooserver-two:2181&lt;/value&gt;
     &lt;description&gt;list of zookeeper servers&lt;/description&gt;
@@ -245,8 +244,7 @@ $ACCUMULO_HOME/conf/accumulo-site.xml :</p>
     &lt;value&gt;/var/accumulo/walogs&lt;/value&gt;
     &lt;description&gt;local directory for write ahead logs&lt;/description&gt;
 &lt;/property&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>This enables Accumulo to find ZooKeeper. Accumulo uses ZooKeeper to coordinate settings between processes and helps finalize TabletServer failure.</p>
 
@@ -284,9 +282,8 @@ $ACCUMULO_HOME/bin/accumulo init . This script will prompt for a name for this i
 
 <p>Update your $ACCUMULO_HOME/conf/slaves (or $ACCUMULO_CONF_DIR/slaves) file to account for the addition.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ACCUMULO_HOME/bin/accumulo admin start &lt;host(s)&gt; {&lt;host&gt; ...}
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ACCUMULO_HOME/bin/accumulo admin start &lt;host(s)&gt; {&lt;host&gt; ...}
+</code></pre></div></div>
 
 <p>Alternatively, you can ssh to each of the hosts you want to add and run $ACCUMULO_HOME/bin/start-here.sh.</p>
 
@@ -296,9 +293,8 @@ $ACCUMULO_HOME/bin/accumulo init . This script will prompt for a name for this i
 
 <p>If you need to take a node out of operation, you can trigger a graceful shutdown of a tablet server. Accumulo will automatically rebalance the tablets across the available tablet servers.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ACCUMULO_HOME/bin/accumulo admin stop &lt;host(s)&gt; {&lt;host&gt; ...}
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ACCUMULO_HOME/bin/accumulo admin stop &lt;host(s)&gt; {&lt;host&gt; ...}
+</code></pre></div></div>
 
 <p>Alternatively, you can ssh to each of the hosts you want to remove and run $ACCUMULO_HOME/bin/stop-here.sh.</p>
 
diff --git a/1.4/user_manual/Analytics.html b/1.4/user_manual/Analytics.html
index 1a7c87a..1dfc2cb 100644
--- a/1.4/user_manual/Analytics.html
+++ b/1.4/user_manual/Analytics.html
@@ -178,17 +178,16 @@
 
 <p>To read from an Accumulo table create a Mapper with the following class parameterization and be sure to configure the AccumuloInputFormat.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>class MyMapper extends Mapper&lt;Key,Value,WritableComparable,Writable&gt; {
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class MyMapper extends Mapper&lt;Key,Value,WritableComparable,Writable&gt; {
     public void map(Key k, Value v, Context c) {
         // transform key and value data here
     }
 }
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To write to an Accumulo table, create a Reducer with the following class parameterization and be sure to configure the AccumuloOutputFormat. The key emitted from the Reducer identifies the table to which the mutation is sent. This allows a single Reducer to write to more than one table if desired. A default table can be configured using the AccumuloOutputFormat, in which case the output table name does not have to be passed to the Context object within the Reducer.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>class MyReducer extends Reducer&lt;WritableComparable, Writable, Text, Mutation&gt; {
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class MyReducer extends Reducer&lt;WritableComparable, Writable, Text, Mutation&gt; {
 
     public void reduce(WritableComparable key, Iterable&lt;Text&gt; values, Context c) {
         
@@ -199,14 +198,13 @@
         c.write(new Text("output-table"), m);
     }
 }
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The Text object passed as the output should contain the name of the table to which this mutation should be applied. The Text can be null in which case the mutation will be applied to the default table name specified in the AccumuloOutputFormat options.</p>
 
 <h3 id="-accumuloinputformat-options"><a id="AccumuloInputFormat_options"></a> AccumuloInputFormat options</h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Job job = new Job(getConf());
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Job job = new Job(getConf());
 AccumuloInputFormat.setInputInfo(job,
         "user",
         "passwd".getBytes(),
@@ -215,36 +213,32 @@ AccumuloInputFormat.setInputInfo(job,
 
 AccumuloInputFormat.setZooKeeperInstance(job, "myinstance",
         "zooserver-one,zooserver-two");
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>Optional settings:</strong></p>
 
 <p>To restrict Accumulo to a set of row ranges:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>ArrayList&lt;Range&gt; ranges = new ArrayList&lt;Range&gt;();
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ArrayList&lt;Range&gt; ranges = new ArrayList&lt;Range&gt;();
 // populate array list of row ranges ...
 AccumuloInputFormat.setRanges(job, ranges);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To restrict accumulo to a list of columns:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>ArrayList&lt;Pair&lt;Text,Text&gt;&gt; columns = new ArrayList&lt;Pair&lt;Text,Text&gt;&gt;();
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ArrayList&lt;Pair&lt;Text,Text&gt;&gt; columns = new ArrayList&lt;Pair&lt;Text,Text&gt;&gt;();
 // populate list of columns
 AccumuloInputFormat.fetchColumns(job, columns);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To use a regular expression to match row IDs:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>AccumuloInputFormat.setRegex(job, RegexType.ROW, "^.*");
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>AccumuloInputFormat.setRegex(job, RegexType.ROW, "^.*");
+</code></pre></div></div>
 
 <h3 id="-accumulooutputformat-options"><a id="AccumuloOutputFormat_options"></a> AccumuloOutputFormat options</h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>boolean createTables = true;
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>boolean createTables = true;
 String defaultTable = "mytable";
 
 AccumuloOutputFormat.setOutputInfo(job,
@@ -255,15 +249,13 @@ AccumuloOutputFormat.setOutputInfo(job,
 
 AccumuloOutputFormat.setZooKeeperInstance(job, "myinstance",
         "zooserver-one,zooserver-two");
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>Optional Settings:</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>AccumuloOutputFormat.setMaxLatency(job, 300); // milliseconds
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>AccumuloOutputFormat.setMaxLatency(job, 300); // milliseconds
 AccumuloOutputFormat.setMaxMutationBufferSize(job, 5000000); // bytes
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>An example of using MapReduce with Accumulo can be found at <br />
 accumulo/docs/examples/README.mapred</p>
diff --git a/1.4/user_manual/Development_Clients.html b/1.4/user_manual/Development_Clients.html
index a40d444..ac888d8 100644
--- a/1.4/user_manual/Development_Clients.html
+++ b/1.4/user_manual/Development_Clients.html
@@ -169,20 +169,18 @@
 
 <p>While normal interaction with the Accumulo client looks like this:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Instance instance = new ZooKeeperInstance(...);
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Instance instance = new ZooKeeperInstance(...);
 Connector conn = instance.getConnector(user, passwd);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To interact with the MockAccumulo, just replace the ZooKeeperInstance with MockInstance:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Instance instance = new MockInstance();
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Instance instance = new MockInstance();
+</code></pre></div></div>
 
 <p>In fact, you can use the “-fake” option to the Accumulo shell and interact with MockAccumulo:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell --fake -u root -p ''
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell --fake -u root -p ''
 
 Shell - Apache Accumulo Interactive Shell
 -
@@ -203,16 +201,14 @@ row3 cf:cq []    value3
 root@mock-instance test&gt; scan -b row2 -e row2
 row2 cf:cq []    value2
 root@mock-instance test&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>When testing Map Reduce jobs, you can also set the Mock Accumulo on the AccumuloInputFormat and AccumuloOutputFormat classes:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>// ... set up job configuration
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// ... set up job configuration
 AccumuloInputFormat.setMockInstance(job, "mockInstance");
 AccumuloOutputFormat.setMockInstance(job, "mockInstance");
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-mini-accumulo-cluster"><a id="Mini_Accumulo_Cluster"></a> Mini Accumulo Cluster</h2>
 
@@ -220,25 +216,22 @@ AccumuloOutputFormat.setMockInstance(job, "mockInstance");
 
 <p>To start it up, you will need to supply an empty directory and a root password as arguments:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>File tempDirectory = // JUnit and Guava supply mechanisms for creating temp directories
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>File tempDirectory = // JUnit and Guava supply mechanisms for creating temp directories
 MiniAccumuloCluster accumulo = new MiniAccumuloCluster(tempDirectory, "password");
 accumulo.start();
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Once we have our mini cluster running, we will want to interact with the Accumulo client API:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Instance instance = new ZooKeeperInstance(accumulo.getInstanceName(), accumulo.getZooKeepers());
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Instance instance = new ZooKeeperInstance(accumulo.getInstanceName(), accumulo.getZooKeepers());
 Connector conn = instance.getConnector("root", "password");
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Upon completion of our development code, we will want to shutdown our MiniAccumuloCluster:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>accumulo.stop()
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>accumulo.stop()
 // delete your temporary folder
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <hr />
 
diff --git a/1.4/user_manual/High_Speed_Ingest.html b/1.4/user_manual/High_Speed_Ingest.html
index fa2ef18..56ce00c 100644
--- a/1.4/user_manual/High_Speed_Ingest.html
+++ b/1.4/user_manual/High_Speed_Ingest.html
@@ -172,9 +172,8 @@
 
 <p>Pre-splitting a table ensures that there are as many tablets as desired available before ingest begins to take advantage of all the parallelism possible with the cluster hardware. Tables can be split anytime by using the shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance mytable&gt; addsplits -sf /local_splitfile -t mytable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance mytable&gt; addsplits -sf /local_splitfile -t mytable
+</code></pre></div></div>
 
 <p>For the purposes of providing parallelism to ingest it is not necessary to create more tablets than there are physical machines within the cluster as the aggregate ingest rate is a function of the number of physical machines. Note that the aggregate ingest rate is still subject to the number of machines running ingest clients, and the distribution of rowIDs across the table. The aggregate ingest rate will be suboptimal if there are many inserts into a small number of rowIDs.</p>
 
@@ -190,7 +189,7 @@
 
 <p>To configure MapReduce to format data in preparation for bulk loading, the job should be set to use a range partitioner instead of the default hash partitioner. The range partitioner uses the split points of the Accumulo table that will receive the data. The split points can be obtained from the shell and used by the MapReduce RangePartitioner. Note that this is only useful if the existing table is already split into multiple tablets.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance mytable&gt; getsplits
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance mytable&gt; getsplits
 aa
 ab
 ac
@@ -198,14 +197,12 @@ ac
 zx
 zy
 zz
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Run the MapReduce job, using the AccumuloFileOutputFormat to create the files to be introduced to Accumulo. Once this is complete, the files can be added to Accumulo via the shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance mytable&gt; importdirectory /files_dir /failures
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance mytable&gt; importdirectory /files_dir /failures
+</code></pre></div></div>
 
 <p>Note that the paths referenced are directories within the same HDFS instance over which Accumulo is running. Accumulo places any files that failed to be added to the second directory specified.</p>
 
diff --git a/1.4/user_manual/Security.html b/1.4/user_manual/Security.html
index a6c3e4f..c4f80cf 100644
--- a/1.4/user_manual/Security.html
+++ b/1.4/user_manual/Security.html
@@ -171,7 +171,7 @@
 
 <p>When mutations are applied, users can specify a security label for each value. This is done as the Mutation is created by passing a ColumnVisibility object to the put() method:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Text rowID = new Text("row1");
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Text rowID = new Text("row1");
 Text colFam = new Text("myColFam");
 Text colQual = new Text("myColQual");
 ColumnVisibility colVis = new ColumnVisibility("public");
@@ -181,8 +181,7 @@ Value value = new Value("myValue");
 
 Mutation mutation = new Mutation(rowID);
 mutation.put(colFam, colQual, colVis, timestamp, value);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-security-label-expression-syntax"><a id="Security_Label_Expression_Syntax"></a> Security Label Expression Syntax</h2>
 
@@ -190,15 +189,14 @@ mutation.put(colFam, colQual, colVis, timestamp, value);
 
 <p>For example, suppose within our organization we want to label our data values with security labels defined in terms of user roles. We might have tokens such as:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>admin
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>admin
 audit
 system
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>These can be specified alone or combined using logical operators:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>// Users must have admin privileges:
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Users must have admin privileges:
 admin
 
 // Users must have admin and audit privileges
@@ -209,8 +207,7 @@ admin|audit
 
 // Users must have audit and one or both of admin or system
 (admin|system)&amp;audit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>When both <code class="highlighter-rouge">|</code> and <code class="highlighter-rouge">&amp;</code> operators are used, parentheses must be used to specify precedence of the operators.</p>
 
@@ -220,12 +217,11 @@ admin|audit
 
 <p>Authorizations are specified as a comma-separated list of tokens the user possesses:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>// user possess both admin and system level access
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// user possess both admin and system level access
 Authorization auths = new Authorization("admin","system");
 
 Scanner s = connector.createScanner("table", auths);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-user-authorizations"><a id="User_Authorizations"></a> User Authorizations</h2>
 
@@ -235,9 +231,8 @@ Scanner s = connector.createScanner("table", auths);
 
 <p>To prevent users from writing data they cannot read, add the visibility constraint to a table. Use the -evc option in the createtable shell command to enable this constraint. For existing tables use the following shell command to enable the visibility constraint. Ensure the constraint number does not conflict with any existing constraints.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>config -t table -s table.constraint.1=org.apache.accumulo.core.security.VisibilityConstraint
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>config -t table -s table.constraint.1=org.apache.accumulo.core.security.VisibilityConstraint
+</code></pre></div></div>
 
 <p>Any user with the alter table permission can add or remove this constraint. This constraint is not applied to bulk imported data, if this a concern then disable the bulk import permission.</p>
 
diff --git a/1.4/user_manual/Shell_Commands.html b/1.4/user_manual/Shell_Commands.html
index 7074954..6dcbf69 100644
--- a/1.4/user_manual/Shell_Commands.html
+++ b/1.4/user_manual/Shell_Commands.html
@@ -154,70 +154,63 @@
 
 <p><strong>?</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: ? [ &lt;command&gt; &lt;command&gt; ] [-?] [-np] [-nw]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: ? [ &lt;command&gt; &lt;command&gt; ] [-?] [-np] [-nw]   
 description: provides information about the available commands   
   -?,-help  display this help   
   -np,-no-pagination  disables pagination of output   
   -nw,-no-wrap  disables wrapping of output   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>about</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: about [-?] [-v]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: about [-?] [-v]   
 description: displays information about this program   
   -?,-help  display this help   
   -v,-verbose  displays details session information   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>addsplits</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: addsplits [&lt;split&gt; &lt;split&gt; ] [-?] [-b64] [-sf &lt;filename&gt;] [-t &lt;tableName&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: addsplits [&lt;split&gt; &lt;split&gt; ] [-?] [-b64] [-sf &lt;filename&gt;] [-t &lt;tableName&gt;]   
 description: add split points to an existing table   
   -?,-help  display this help   
   -b64,-base64encoded  decode encoded split points   
   -sf,-splits-file &lt;filename&gt;  file with newline separated list of rows to add to   
           table   
   -t,-table &lt;tableName&gt;  name of a table to add split points to   
-</code></pre>
-</div>
+</code></pre></div></div>
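
As an aside (not part of the build diff), the addsplits options above translate to shell invocations like the following; the table name and split points are hypothetical:

```
user@instance mytable> addsplits g m t -t mytable
user@instance mytable> addsplits -sf /local/splits.txt -t mytable
```

The first form lists split points inline; the second reads a newline-separated file, as described by the -sf option.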
 
 <p><strong>authenticate</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: authenticate &lt;username&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: authenticate &lt;username&gt; [-?]   
 description: verifies a user's credentials   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>bye</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: bye [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: bye [-?]   
 description: exits the shell   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>classpath</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: classpath [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: classpath [-?]   
 description: lists the current files on the classpath   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>clear</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: clear [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: clear [-?]   
 description: clears the screen   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>clonetable</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: clonetable &lt;current table name&gt; &lt;new table name&gt; [-?] [-e &lt;arg&gt;] [-nf] [-s   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: clonetable &lt;current table name&gt; &lt;new table name&gt; [-?] [-e &lt;arg&gt;] [-nf] [-s   
           &lt;arg&gt;]   
 description: clone a table   
   -?,-help  display this help   
@@ -226,20 +219,18 @@ description: clone a table
   -nf,-noFlush  do not flush table data in memory before cloning.   
   -s,-set &lt;arg&gt;  set initial properties before the table comes online. Expects   
           &lt;prop&gt;=&lt;value&gt;,&lt;prop&gt;=&lt;value&gt;   
-</code></pre>
-</div>
+</code></pre></div></div>
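
For illustration only (hypothetical table names and property, not taken from the page being diffed), a clone that skips the in-memory flush and sets a property before the clone comes online might look like:

```
user@instance> clonetable mytable mytable_test -nf -s table.scan.max.memory=2M
```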
 
 <p><strong>cls</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: cls [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: cls [-?]   
 description: clears the screen   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>compact</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: compact [-?] [-b &lt;arg&gt;] [-e &lt;arg&gt;] [-nf] [-p &lt;pattern&gt; | -t &lt;tableName&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: compact [-?] [-b &lt;arg&gt;] [-e &lt;arg&gt;] [-nf] [-p &lt;pattern&gt; | -t &lt;tableName&gt;]   
           [-w]   
 description: sets all tablets for a table to major compact as soon as possible   
           (based on current time)   
@@ -250,12 +241,11 @@ description: sets all tablets for a table to major compact as soon as possible
   -p,-pattern &lt;pattern&gt;  regex pattern of table names to flush   
   -t,-table &lt;tableName&gt;  name of a table to flush   
   -w,-wait  wait for compact to finish   
-</code></pre>
-</div>
+</code></pre></div></div>
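
An illustrative invocation (hypothetical table and rows), combining the range and wait options described above:

```
user@instance> compact -t mytable -b row_a -e row_z -w
```

Here -b/-e limit the compaction to tablets in that row range, and -w blocks until the major compaction finishes.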
 
 <p><strong>config</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: config [-?] [-d &lt;property&gt; | -f &lt;string&gt; | -s &lt;property=value&gt;]  [-np]  [-t   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: config [-?] [-d &lt;property&gt; | -f &lt;string&gt; | -s &lt;property=value&gt;]  [-np]  [-t   
           &lt;table&gt;]   
 description: prints system properties and table specific properties   
   -?,-help  display this help   
@@ -264,12 +254,11 @@ description: prints system properties and table specific properties
   -np,-no-pagination  disables pagination of output   
   -s,-set &lt;property=value&gt;  set a per-table property   
   -t,-table &lt;table&gt;  display/set/delete properties for specified table   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>createtable</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: createtable &lt;tableName&gt; [-?] [-a   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: createtable &lt;tableName&gt; [-?] [-a   
           &lt;&lt;columnfamily&gt;[:&lt;columnqualifier&gt;]=&lt;aggregation class&gt;&gt;] [-b64] [-cc   
           &lt;table&gt;] [-cs &lt;table&gt; | -sf &lt;filename&gt;] [-evc] [-f &lt;className&gt;] [-ndi]   
           [-tl | -tm]   
@@ -289,40 +278,36 @@ description: creates a new table, with optional aggregators and optionally pre-s
           pre-split table   
   -tl,-time-logical  use logical time   
   -tm,-time-millis  use time in milliseconds   
-</code></pre>
-</div>
+</code></pre></div></div>
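
Not part of the diff, but as a sketch of the options above (hypothetical names), creating a pre-split table that uses logical time could look like:

```
user@instance> createtable mytable -sf /local/splits.txt -tl
```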
 
 <p><strong>createuser</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: createuser &lt;username&gt; [-?] [-s &lt;comma-separated-authorizations&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: createuser &lt;username&gt; [-?] [-s &lt;comma-separated-authorizations&gt;]   
 description: creates a new user   
   -?,-help  display this help   
   -s,-scan-authorizations &lt;comma-separated-authorizations&gt;  scan authorizations   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>debug</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: debug [ on | off ] [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: debug [ on | off ] [-?]   
 description: turns debug logging on or off   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>delete</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: delete &lt;row&gt; &lt;colfamily&gt; &lt;colqualifier&gt; [-?] [-l &lt;expression&gt;] [-t   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: delete &lt;row&gt; &lt;colfamily&gt; &lt;colqualifier&gt; [-?] [-l &lt;expression&gt;] [-t   
           &lt;timestamp&gt;]   
 description: deletes a record from a table   
   -?,-help  display this help   
   -l,-authorization-label &lt;expression&gt;  formatted authorization label expression   
   -t,-timestamp &lt;timestamp&gt;  timestamp to use for insert   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>deleteiter</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: deleteiter [-?] [-majc] [-minc] -n &lt;itername&gt; [-scan] [-t &lt;table&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: deleteiter [-?] [-majc] [-minc] -n &lt;itername&gt; [-scan] [-t &lt;table&gt;]   
 description: deletes a table-specific iterator   
   -?,-help  display this help   
   -majc,-major-compaction  applied at major compaction   
@@ -330,12 +315,11 @@ description: deletes a table-specific iterator
   -n,-name &lt;itername&gt;  iterator to delete   
   -scan,-scan-time  applied at scan time   
   -t,-table &lt;table&gt;  tableName   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>deletemany</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: deletemany [-?] [-b &lt;start-row&gt;] [-c   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: deletemany [-?] [-b &lt;start-row&gt;] [-c   
           &lt;&lt;columnfamily&gt;[:&lt;columnqualifier&gt;],&lt;columnfamily&gt;[:&lt;columnqualifier&gt;]&gt;]   
           [-e &lt;end-row&gt;] [-f] [-fm &lt;className&gt;] [-np] [-r &lt;row&gt;] [-s   
           &lt;comma-separated-authorizations&gt;] [-st] [-t &lt;table&gt;]   
@@ -354,12 +338,11 @@ description: scans a table and deletes the resulting records
           (all user auths are used if this argument is not specified)   
   -st,-show-timestamps  enables displaying timestamps   
   -t,-table &lt;table&gt;  table to be created   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>deleterows</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: deleterows [-?] [-b &lt;arg&gt;] [-e &lt;arg&gt;] [-f] [-t &lt;table&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: deleterows [-?] [-b &lt;arg&gt;] [-e &lt;arg&gt;] [-f] [-t &lt;table&gt;]   
 description: delete a range of rows in a table.  Note that rows matching the start   
           row ARE NOT deleted, but rows matching the end row ARE deleted.   
   -?,-help  display this help   
@@ -367,69 +350,62 @@ description: delete a range of rows in a table.  Note that rows matching the sta
   -e,-end-row &lt;arg&gt;  end row   
   -f,-force  delete data even if start or end are not specified   
   -t,-tableName &lt;table&gt;  table to delete row range   
-</code></pre>
-</div>
+</code></pre></div></div>
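
To make the start/end semantics in the description concrete, an illustrative invocation (hypothetical table and rows):

```
user@instance> deleterows -t mytable -b row_a -e row_m
```

Per the note above, row_a itself survives (start row NOT deleted) while row_m is removed (end row IS deleted).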
 
 <p><strong>deletescaniter</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: deletescaniter [-?] [-a] [-n &lt;itername&gt;] [-t &lt;table&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: deletescaniter [-?] [-a] [-n &lt;itername&gt;] [-t &lt;table&gt;]   
 description: deletes a table-specific scan iterator so it is no longer used during   
           this shell session   
   -?,-help  display this help   
   -a,-all  delete all for tableName   
   -n,-name &lt;itername&gt;  iterator to delete   
   -t,-table &lt;table&gt;  tableName   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>deletetable</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: deletetable &lt;tableName&gt; [-?] [-t &lt;arg&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: deletetable &lt;tableName&gt; [-?] [-t &lt;arg&gt;]   
 description: deletes a table   
   -?,-help  display this help   
   -t,-tableName &lt;arg&gt;  deletes a table   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>deleteuser</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: deleteuser &lt;username&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: deleteuser &lt;username&gt; [-?]   
 description: deletes a user   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>droptable</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: droptable &lt;tableName&gt; [-?] [-t &lt;arg&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: droptable &lt;tableName&gt; [-?] [-t &lt;arg&gt;]   
 description: deletes a table   
   -?,-help  display this help   
   -t,-tableName &lt;arg&gt;  deletes a table   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>dropuser</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: dropuser &lt;username&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: dropuser &lt;username&gt; [-?]   
 description: deletes a user   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>du</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: du &lt;table&gt; &lt;table&gt; [-?] [-p &lt;pattern&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: du &lt;table&gt; &lt;table&gt; [-?] [-p &lt;pattern&gt;]   
 description: Prints how much space is used by files referenced by a table.  When   
           multiple tables are specified it prints how much space is used by files   
           shared between tables, if any.   
   -?,-help  display this help   
   -p,-pattern &lt;pattern&gt;  regex pattern of table names   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>egrep</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: egrep &lt;regex&gt; &lt;regex&gt; [-?] [-b &lt;start-row&gt;] [-c   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: egrep &lt;regex&gt; &lt;regex&gt; [-?] [-b &lt;start-row&gt;] [-c   
           &lt;&lt;columnfamily&gt;[:&lt;columnqualifier&gt;],&lt;columnfamily&gt;[:&lt;columnqualifier&gt;]&gt;]   
           [-e &lt;end-row&gt;] [-f &lt;int&gt;] [-fm &lt;className&gt;] [-np] [-nt &lt;arg&gt;] [-r &lt;row&gt;]   
           [-s &lt;comma-separated-authorizations&gt;] [-st] [-t &lt;table&gt;]   
@@ -451,29 +427,26 @@ description: searches each row, column family, column qualifier and value, in
           (all user auths are used if this argument is not specified)   
   -st,-show-timestamps  enables displaying timestamps   
   -t,-tableName &lt;table&gt;  table to grep through   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>execfile</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: execfile [-?] [-v]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: execfile [-?] [-v]   
 description: specifies a file containing accumulo commands to execute   
   -?,-help  display this help   
   -v,-verbose  displays command prompt as commands are executed   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>exit</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: exit [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: exit [-?]   
 description: exits the shell   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>flush</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: flush [-?] [-b &lt;arg&gt;] [-e &lt;arg&gt;] [-p &lt;pattern&gt; | -t &lt;tableName&gt;]  [-w]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: flush [-?] [-b &lt;arg&gt;] [-e &lt;arg&gt;] [-p &lt;pattern&gt; | -t &lt;tableName&gt;]  [-w]   
 description: flushes a tables data that is currently in memory to disk   
   -?,-help  display this help   
   -b,-begin-row &lt;arg&gt;  begin row   
@@ -481,42 +454,38 @@ description: flushes a tables data that is currently in memory to disk
   -p,-pattern &lt;pattern&gt;  regex pattern of table names to flush   
   -t,-table &lt;tableName&gt;  name of a table to flush   
   -w,-wait  wait for flush to finish   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>formatter</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: formatter [-?] -f &lt;className&gt; | -l | -r  [-t &lt;table&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: formatter [-?] -f &lt;className&gt; | -l | -r  [-t &lt;table&gt;]   
 description: specifies a formatter to use for displaying table entries   
   -?,-help  display this help   
   -f,-formatter &lt;className&gt;  fully qualified name of the formatter class to use   
   -l,-list  display the current formatter   
   -r,-remove  remove the current formatter   
   -t,-table &lt;table&gt;  table to set the formatter on   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>getauths</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: getauths [-?] [-u &lt;user&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: getauths [-?] [-u &lt;user&gt;]   
 description: displays the maximum scan authorizations for a user   
   -?,-help  display this help   
   -u,-user &lt;user&gt;  user to operate on   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>getgroups</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: getgroups [-?] [-t &lt;table&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: getgroups [-?] [-t &lt;table&gt;]   
 description: gets the locality groups for a given table   
   -?,-help  display this help   
   -t,-table &lt;table&gt;  get locality groups for specified table   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>getsplits</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: getsplits [-?] [-b64] [-m &lt;num&gt;] [-o &lt;file&gt;] [-t &lt;table&gt;] [-v]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: getsplits [-?] [-b64] [-m &lt;num&gt;] [-o &lt;file&gt;] [-t &lt;table&gt;] [-v]   
 description: retrieves the current split points for tablets in the current table   
   -?,-help  display this help   
   -b64,-base64encoded  encode the split points   
@@ -524,24 +493,22 @@ description: retrieves the current split points for tablets in the current table
   -o,-output &lt;file&gt;  specifies a local file to write the splits to   
   -t,-tableName &lt;table&gt;  table to get splits on   
   -v,-verbose  print out the tablet information with start/end rows   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>grant</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: grant &lt;permission&gt; [-?] -p &lt;pattern&gt; | -s | -t &lt;table&gt;  -u &lt;username&gt;   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: grant &lt;permission&gt; [-?] -p &lt;pattern&gt; | -s | -t &lt;table&gt;  -u &lt;username&gt;   
 description: grants system or table permissions for a user   
   -?,-help  display this help   
   -p,-pattern &lt;pattern&gt;  regex pattern of tables to grant permissions on   
   -s,-system  grant a system permission   
   -t,-table &lt;table&gt;  grant a table permission on this table   
   -u,-user &lt;username&gt;  user to operate on   
-</code></pre>
-</div>
+</code></pre></div></div>
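
As an illustration of the system vs. table forms above (hypothetical user and table):

```
user@instance> grant System.CREATE_TABLE -s -u bob
user@instance> grant Table.READ -t mytable -u bob
```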
 
 <p><strong>grep</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: grep &lt;term&gt; &lt;term&gt; [-?] [-b &lt;start-row&gt;] [-c   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: grep &lt;term&gt; &lt;term&gt; [-?] [-b &lt;start-row&gt;] [-c   
           &lt;&lt;columnfamily&gt;[:&lt;columnqualifier&gt;],&lt;columnfamily&gt;[:&lt;columnqualifier&gt;]&gt;]   
           [-e &lt;end-row&gt;] [-f &lt;int&gt;] [-fm &lt;className&gt;] [-np] [-nt &lt;arg&gt;] [-r &lt;row&gt;]   
           [-s &lt;comma-separated-authorizations&gt;] [-st] [-t &lt;table&gt;]   
@@ -563,60 +530,54 @@ description: searches each row, column family, column qualifier and value in a t
           (all user auths are used if this argument is not specified)   
   -st,-show-timestamps  enables displaying timestamps   
   -t,-tableName &lt;table&gt;  table to grep through   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>help</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: help [ &lt;command&gt; &lt;command&gt; ] [-?] [-np] [-nw]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: help [ &lt;command&gt; &lt;command&gt; ] [-?] [-np] [-nw]   
 description: provides information about the available commands   
   -?,-help  display this help   
   -np,-no-pagination  disables pagination of output   
   -nw,-no-wrap  disables wrapping of output   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>history</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: history [-?] [-c]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: history [-?] [-c]   
 description: Generates a list of commands previously executed   
   -?,-help  display this help   
   -c,-Clears History, takes no arguments.  Clears History File   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>importdirectory</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: importdirectory &lt;directory&gt; &lt;failureDirectory&gt; true|false [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: importdirectory &lt;directory&gt; &lt;failureDirectory&gt; true|false [-?]   
 description: bulk imports an entire directory of data files to the current table.   
           The boolean argument determines if accumulo sets the time.   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
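
An illustrative invocation (hypothetical directories), showing the positional boolean described above:

```
user@instance mytable> importdirectory /tmp/bulk/files /tmp/bulk/failures false
```

With false, accumulo does not assign the timestamps; timestamps already written in the imported files are kept.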
 
 <p><strong>info</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: info [-?] [-v]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: info [-?] [-v]   
 description: displays information about this program   
   -?,-help  display this help   
   -v,-verbose  displays details session information   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>insert</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: insert &lt;row&gt; &lt;colfamily&gt; &lt;colqualifier&gt; &lt;value&gt; [-?] [-l &lt;expression&gt;] [-t   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: insert &lt;row&gt; &lt;colfamily&gt; &lt;colqualifier&gt; &lt;value&gt; [-?] [-l &lt;expression&gt;] [-t   
           &lt;timestamp&gt;]   
 description: inserts a record   
   -?,-help  display this help   
   -l,-authorization-label &lt;expression&gt;  formatted authorization label expression   
   -t,-timestamp &lt;timestamp&gt;  timestamp to use for insert   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>listiter</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: listiter [-?] [-majc] [-minc] [-n &lt;itername&gt;] [-scan] [-t &lt;table&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: listiter [-?] [-majc] [-minc] [-n &lt;itername&gt;] [-scan] [-t &lt;table&gt;]   
 description: lists table-specific iterators   
   -?,-help  display this help   
   -majc,-major-compaction  applied at major compaction   
@@ -624,32 +585,29 @@ description: lists table-specific iterators
   -n,-name &lt;itername&gt;  iterator to delete   
   -scan,-scan-time  applied at scan time   
   -t,-table &lt;table&gt;  tableName   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>listscans</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: listscans [-?] [-np] [-ts &lt;tablet server&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: listscans [-?] [-np] [-ts &lt;tablet server&gt;]   
 description: list what scans are currently running in accumulo. See the   
           accumulo.core.client.admin.ActiveScan javadoc for more information about   
           columns.   
   -?,-help  display this help   
   -np,-no-pagination  disables pagination of output   
   -ts,-tabletServer &lt;tablet server&gt;  list scans for a specific tablet server   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>masterstate</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: masterstate is deprecated, use the command line utility instead [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: masterstate is deprecated, use the command line utility instead [-?]   
 description: DEPRECATED: use the command line utility instead   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>maxrow</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: maxrow [-?] [-b &lt;begin-row&gt;] [-be] [-e &lt;end-row&gt;] [-ee] [-s   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: maxrow [-?] [-b &lt;begin-row&gt;] [-be] [-e &lt;end-row&gt;] [-ee] [-s   
           &lt;comma-separated-authorizations&gt;] [-t &lt;table&gt;]   
 description: find the max row in a table within a given range   
   -?,-help  display this help   
@@ -660,12 +618,11 @@ description: find the max row in a table within a given range
   -s,-scan-authorizations &lt;comma-separated-authorizations&gt;  scan authorizations   
           (all user auths are used if this argument is not specified)   
   -t,-table &lt;table&gt;  table to be created   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>merge</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: merge [-?] [-b &lt;arg&gt;] [-e &lt;arg&gt;] [-f] [-s &lt;arg&gt;] [-t &lt;table&gt;] [-v]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: merge [-?] [-b &lt;arg&gt;] [-e &lt;arg&gt;] [-f] [-s &lt;arg&gt;] [-t &lt;table&gt;] [-v]   
 description: merge tablets in a table   
   -?,-help  display this help   
   -b,-begin-row &lt;arg&gt;  begin row   
@@ -675,77 +632,69 @@ description: merge tablets in a table
   -s,-size &lt;arg&gt;  merge tablets to the given size over the entire table   
   -t,-tableName &lt;table&gt;  table to be merged   
   -v,-verbose  verbose output during merge   
-</code></pre>
-</div>
+</code></pre></div></div>
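
For illustration (hypothetical table, size, and rows), the two styles of merge described above:

```
user@instance> merge -t mytable -s 256M -v
user@instance> merge -t mytable -b row_a -e row_m
```

The first merges small tablets toward the given target size across the table; the second merges only the tablets in the given row range.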
 
 <p><strong>notable</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: notable [-?] [-t &lt;arg&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: notable [-?] [-t &lt;arg&gt;]   
 description: returns to a tableless shell state   
   -?,-help  display this help   
   -t,-tableName &lt;arg&gt;  Returns to a no table state   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>offline</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: offline [-?] [-p &lt;pattern&gt; | -t &lt;tableName&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: offline [-?] [-p &lt;pattern&gt; | -t &lt;tableName&gt;]   
 description: starts the process of taking table offline   
   -?,-help  display this help   
   -p,-pattern &lt;pattern&gt;  regex pattern of table names to flush   
   -t,-table &lt;tableName&gt;  name of a table to flush   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>online</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: online [-?] [-p &lt;pattern&gt; | -t &lt;tableName&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: online [-?] [-p &lt;pattern&gt; | -t &lt;tableName&gt;]   
 description: starts the process of putting a table online   
   -?,-help  display this help   
   -p,-pattern &lt;pattern&gt;  regex pattern of table names to flush   
   -t,-table &lt;tableName&gt;  name of a table to flush   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>passwd</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: passwd [-?] [-u &lt;user&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: passwd [-?] [-u &lt;user&gt;]   
 description: changes a user's password   
   -?,-help  display this help   
   -u,-user &lt;user&gt;  user to operate on   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>quit</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: quit [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: quit [-?]   
 description: exits the shell   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>renametable</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: renametable &lt;current table name&gt; &lt;new table name&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: renametable &lt;current table name&gt; &lt;new table name&gt; [-?]   
 description: rename a table   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>revoke</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: revoke &lt;permission&gt; [-?] -s | -t &lt;table&gt;  -u &lt;username&gt;   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: revoke &lt;permission&gt; [-?] -s | -t &lt;table&gt;  -u &lt;username&gt;   
 description: revokes system or table permissions from a user   
   -?,-help  display this help   
   -s,-system  revoke a system permission   
   -t,-table &lt;table&gt;  revoke a table permission on this table   
   -u,-user &lt;username&gt;  user to operate on   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>scan</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: scan [-?] [-b &lt;start-row&gt;] [-c   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: scan [-?] [-b &lt;start-row&gt;] [-c   
           &lt;&lt;columnfamily&gt;[:&lt;columnqualifier&gt;],&lt;columnfamily&gt;[:&lt;columnqualifier&gt;]&gt;]   
           [-e &lt;end-row&gt;] [-f &lt;int&gt;] [-fm &lt;className&gt;] [-np] [-r &lt;row&gt;] [-s   
           &lt;comma-separated-authorizations&gt;] [-st] [-t &lt;table&gt;]   
@@ -764,12 +713,11 @@ description: scans the table, and displays the resulting records
           (all user auths are used if this argument is not specified)   
   -st,-show-timestamps  enables displaying timestamps   
   -t,-tableName &lt;table&gt;  table to be scanned   
-</code></pre>
-</div>
+</code></pre></div></div>
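
An illustrative scan (hypothetical table and rows) combining the range, timestamp, and pagination options above:

```
user@instance> scan -t mytable -b row_a -e row_z -st -np
```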
 
 <p><strong>select</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: select &lt;row&gt; &lt;columnfamily&gt; &lt;columnqualifier&gt; [-?] [-np] [-s   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: select &lt;row&gt; &lt;columnfamily&gt; &lt;columnqualifier&gt; [-?] [-np] [-s   
           &lt;comma-separated-authorizations&gt;] [-st] [-t &lt;table&gt;]   
 description: scans for and displays a single record   
   -?,-help  display this help   
@@ -777,12 +725,11 @@ description: scans for and displays a single record
   -s,-scan-authorizations &lt;comma-separated-authorizations&gt;  scan authorizations   
   -st,-show-timestamps  enables displaying timestamps   
   -t,-tableName &lt;table&gt;  table   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>selectrow</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: selectrow &lt;row&gt; [-?] [-np] [-s &lt;comma-separated-authorizations&gt;] [-st] [-t   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: selectrow &lt;row&gt; [-?] [-np] [-s &lt;comma-separated-authorizations&gt;] [-st] [-t   
           &lt;table&gt;]   
 description: scans a single row and displays all resulting records   
   -?,-help  display this help   
@@ -790,35 +737,32 @@ description: scans a single row and displays all resulting records
   -s,-scan-authorizations &lt;comma-separated-authorizations&gt;  scan authorizations   
   -st,-show-timestamps  enables displaying timestamps   
   -t,-tableName &lt;table&gt;  table to row select   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>setauths</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: setauths [-?] -c | -s &lt;comma-separated-authorizations&gt;  [-u &lt;user&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: setauths [-?] -c | -s &lt;comma-separated-authorizations&gt;  [-u &lt;user&gt;]   
 description: sets the maximum scan authorizations for a user   
   -?,-help  display this help   
   -c,-clear-authorizations  clears the scan authorizations   
   -s,-scan-authorizations &lt;comma-separated-authorizations&gt;  set the scan   
           authorizations   
   -u,-user &lt;user&gt;  user to operate on   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>setgroups</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: setgroups &lt;group&gt;=&lt;col fam&gt;,&lt;col fam&gt; &lt;group&gt;=&lt;col fam&gt;,&lt;col fam&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: setgroups &lt;group&gt;=&lt;col fam&gt;,&lt;col fam&gt; &lt;group&gt;=&lt;col fam&gt;,&lt;col fam&gt; [-?]   
           [-t &lt;table&gt;]   
 description: sets the locality groups for a given table (for binary or commas, use   
           Java API)   
   -?,-help  display this help   
   -t,-table &lt;table&gt;  get locality groups for specified table   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>setiter</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: setiter [-?] -ageoff | -agg | -class &lt;name&gt; | -regex | -reqvis | -vers   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: setiter [-?] -ageoff | -agg | -class &lt;name&gt; | -regex | -reqvis | -vers   
           [-majc] [-minc] [-n &lt;itername&gt;] -p &lt;pri&gt;  [-scan] [-t &lt;table&gt;]   
 description: sets a table-specific iterator   
   -?,-help  display this help   
@@ -834,12 +778,11 @@ description: sets a table-specific iterator
   -scan,-scan-time  applied at scan time   
   -t,-table &lt;table&gt;  tableName   
   -vers,-version  a versioning type   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>setscaniter</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: setscaniter [-?] -ageoff | -agg | -class &lt;name&gt; | -regex | -reqvis | -vers   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: setscaniter [-?] -ageoff | -agg | -class &lt;name&gt; | -regex | -reqvis | -vers   
           [-n &lt;itername&gt;] -p &lt;pri&gt;  [-t &lt;table&gt;]   
 description: sets a table-specific scan iterator for this shell session   
   -?,-help  display this help   
@@ -852,90 +795,79 @@ description: sets a table-specific scan iterator for this shell session
   -reqvis,-require-visibility  a type that omits entries with empty visibilities   
   -t,-table &lt;table&gt;  tableName   
   -vers,-version  a versioning type   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>sleep</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: sleep [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: sleep [-?]   
 description: sleep for the given number of seconds   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>systempermissions</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: systempermissions [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: systempermissions [-?]   
 description: displays a list of valid system permissions   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>table</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: table &lt;tableName&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: table &lt;tableName&gt; [-?]   
 description: switches to the specified table   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>tablepermissions</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: tablepermissions [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: tablepermissions [-?]   
 description: displays a list of valid table permissions   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>tables</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: tables [-?] [-l]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: tables [-?] [-l]   
 description: displays a list of all existing tables   
   -?,-help  display this help   
   -l,-list-ids  display internal table ids along with the table name   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>trace</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: trace [ on | off ] [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: trace [ on | off ] [-?]   
 description: turns trace logging on or off   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>user</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: user &lt;username&gt; [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: user &lt;username&gt; [-?]   
 description: switches to the specified user   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>userpermissions</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: userpermissions [-?] [-u &lt;user&gt;]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: userpermissions [-?] [-u &lt;user&gt;]   
 description: displays a user's system and table permissions   
   -?,-help  display this help   
   -u,-user &lt;user&gt;  user to operate on   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>users</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: users [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: users [-?]   
 description: displays a list of existing users   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p><strong>whoami</strong></p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: whoami [-?]   
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: whoami [-?]   
 description: reports the current user name   
   -?,-help  display this help   
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <hr />
 
diff --git a/1.4/user_manual/Table_Configuration.html b/1.4/user_manual/Table_Configuration.html
index 6a66b3e..194d359 100644
--- a/1.4/user_manual/Table_Configuration.html
+++ b/1.4/user_manual/Table_Configuration.html
@@ -179,19 +179,18 @@
 
 <h3 id="-managing-locality-groups-via-the-shell"><a id="Managing_Locality_Groups_via_the_Shell"></a> Managing Locality Groups via the Shell</h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: setgroups &lt;group&gt;=&lt;col fam&gt;{,&lt;col fam&gt;}{ &lt;group&gt;=&lt;col fam&gt;{,&lt;col
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: setgroups &lt;group&gt;=&lt;col fam&gt;{,&lt;col fam&gt;}{ &lt;group&gt;=&lt;col fam&gt;{,&lt;col
 fam&gt;}} [-?] -t &lt;table&gt;
 
 user@myinstance mytable&gt; setgroups -t mytable group_one=colf1,colf2
 
 user@myinstance mytable&gt; getgroups -t mytable
 group_one=colf1,colf2
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h3 id="-managing-locality-groups-via-the-client-api"><a id="Managing_Locality_Groups_via_the_Client_API"></a> Managing Locality Groups via the Client API</h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Connector conn;
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Connector conn;
 
 HashMap&lt;String,Set&lt;Text&gt;&gt; localityGroups =
     new HashMap&lt;String, Set&lt;Text&gt;&gt;();
@@ -212,14 +211,12 @@ conn.tableOperations().setLocalityGroups("mytable", localityGroups);
 // existing locality groups can be obtained as follows
 Map&lt;String, Set&lt;Text&gt;&gt; groups =
     conn.tableOperations().getLocalityGroups("mytable");
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The assignment of Column Families to Locality Groups can be changed anytime. The physical movement of column families into their new locality groups takes place via the periodic Major Compaction process that takes place continuously in the background. Major Compaction can also be scheduled to take place immediately through the shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance mytable&gt; compact -t mytable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance mytable&gt; compact -t mytable
+</code></pre></div></div>
 
 <h2 id="-constraints"><a id="Constraints"></a> Constraints</h2>
 
@@ -227,7 +224,7 @@ Map&lt;String, Set&lt;Text&gt;&gt; groups =
 
 <p>Constraints can be enabled by setting a table property as follows:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance mytable&gt; config -t mytable -s table.constraint.1=com.test.ExampleConstraint
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance mytable&gt; config -t mytable -s table.constraint.1=com.test.ExampleConstraint
 user@myinstance mytable&gt; config -t mytable -s table.constraint.2=com.test.AnotherConstraint
 user@myinstance mytable&gt; config -t mytable -f constraint
 ---------+--------------------------------+----------------------------
@@ -236,8 +233,7 @@ SCOPE    | NAME                           | VALUE
 table    | table.constraint.1............ | com.test.ExampleConstraint
 table    | table.constraint.2............ | com.test.AnotherConstraint
 ---------+--------------------------------+----------------------------
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Currently there are no general-purpose constraints provided with the Accumulo distribution. New constraints can be created by writing a Java class that implements the org.apache.accumulo.core.constraints.Constraint interface.</p>
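As a rough illustration of the constraint contract described above, the following self-contained sketch mirrors its two methods. The `NumericRowConstraint` class and its simplified `byte[]`-based `check` signature are stand-ins for this example only; a real constraint implements org.apache.accumulo.core.constraints.Constraint and receives a Mutation.

```java
import java.util.Collections;
import java.util.List;

// Simplified sketch of the constraint pattern. The real interface is
// org.apache.accumulo.core.constraints.Constraint, whose check method receives
// a Mutation; here a bare byte[] row stands in for illustration.
class NumericRowConstraint {
    static final short VIOLATION_NON_NUMERIC = 1;

    // Returns violation codes, or an empty list when the row is acceptable.
    static List<Short> check(byte[] row) {
        for (byte b : row) {
            if (b < '0' || b > '9') {
                return Collections.singletonList(VIOLATION_NON_NUMERIC);
            }
        }
        return Collections.emptyList();
    }

    static String getViolationDescription(short violationCode) {
        return violationCode == VIOLATION_NON_NUMERIC
                ? "row was not numeric" : "unknown violation";
    }
}
```

A tablet server would call `check` for each mutation and map any returned codes to human-readable descriptions via `getViolationDescription`.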
 
@@ -253,9 +249,8 @@ accumulo/src/examples/simple/main/java/accumulo/examples/simple/constraints .</p
 
 <p>To enable bloom filters, enter the following command in the Shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance&gt; config -t mytable -s table.bloom.enabled=true
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance&gt; config -t mytable -s table.bloom.enabled=true
+</code></pre></div></div>
 
 <p>An extensive example of using Bloom Filters can be found at <br />
 accumulo/docs/examples/README.bloom.</p>
@@ -266,31 +261,28 @@ accumulo/docs/examples/README.bloom .</p>
 
 <h3 id="-setting-iterators-via-the-shell"><a id="Setting_Iterators_via_the_Shell"></a> Setting Iterators via the Shell</h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>usage: setiter [-?] -ageoff | -agg | -class &lt;name&gt; | -regex | 
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: setiter [-?] -ageoff | -agg | -class &lt;name&gt; | -regex | 
 -reqvis | -vers   [-majc] [-minc] [-n &lt;itername&gt;] -p &lt;pri&gt;   
 [-scan] [-t &lt;table&gt;]
 
 user@myinstance mytable&gt; setiter -t mytable -scan -p 10 -n myiter
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h3 id="-setting-iterators-programmatically"><a id="Setting_Iterators_Programmatically"></a> Setting Iterators Programmatically</h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>scanner.addIterator(new IteratorSetting(
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>scanner.addIterator(new IteratorSetting(
     15, // priority
     "myiter", // name this iterator
     "com.company.MyIterator" // class name
 ));
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Some iterators take additional parameters from client code, as in the following example:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>IteratorSetting iter = new IteratorSetting(...);
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>IteratorSetting iter = new IteratorSetting(...);
 iter.addOption("myoptionname", "myoptionvalue");
 scanner.addIterator(iter)
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Tables support separate Iterator settings to be applied at scan time, upon minor compaction and upon major compaction. For most uses, tables will have identical iterator settings for all three to avoid inconsistent results.</p>
 
@@ -302,7 +294,7 @@ scanner.addIterator(iter)
 
 <p>The version policy can be changed by changing the VersioningIterator options for a table as follows:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance mytable&gt; config -t mytable -s
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance mytable&gt; config -t mytable -s
 table.iterator.scan.vers.opt.maxVersions=3
 
 user@myinstance mytable&gt; config -t mytable -s
@@ -310,8 +302,7 @@ table.iterator.minc.vers.opt.maxVersions=3
 
 user@myinstance mytable&gt; config -t mytable -s
 table.iterator.majc.vers.opt.maxVersions=3
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h4 id="-logical-time"><a id="Logical_Time"></a> Logical Time</h4>
 
@@ -319,9 +310,8 @@ table.iterator.majc.vers.opt.maxVersions=3
 
 <p>A table can be configured to use logical timestamps at creation time as follows:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance&gt; createtable -tl logical
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance&gt; createtable -tl logical
+</code></pre></div></div>
 
 <h4 id="-deletes"><a id="Deletes"></a> Deletes</h4>
 
@@ -334,7 +324,7 @@ org.apache.accumulo.core.iterators.Filter class.</p>
 
 <p>The AgeOff filter can be configured to remove data older than a certain date or a fixed amount of time from the present. The following example sets a table to delete everything inserted over 30 seconds ago:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@myinstance&gt; createtable filtertest
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@myinstance&gt; createtable filtertest
 user@myinstance filtertest&gt; setiter -t filtertest -scan -minc -majc -p 10 -n myfilter -ageoff
 AgeOffFilter removes entries with timestamps more than &lt;ttl&gt; milliseconds old
 ----------&gt; set org.apache.accumulo.core.iterators.user.AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: 
@@ -348,12 +338,11 @@ foo a:b [] c
 user@myinstance filtertest&gt; sleep 4
 user@myinstance filtertest&gt; scan
 user@myinstance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To see the iterator settings for a table, use:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>user@example filtertest&gt; config -t filtertest -f iterator
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@example filtertest&gt; config -t filtertest -f iterator
 ---------+---------------------------------------------+------------------
 SCOPE    | NAME                                        | VALUE
 ---------+---------------------------------------------+------------------
@@ -370,8 +359,7 @@ table    | table.iterator.scan.myfilter.opt.ttl ...... | 3000
 table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
 table    | table.iterator.scan.vers.opt.maxVersions .. | 1
 ---------+------------------------------------------+------------------
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h3 id="-combiners"><a id="Combiners"></a> Combiners</h3>
 
@@ -379,21 +367,19 @@ table    | table.iterator.scan.vers.opt.maxVersions .. | 1
 
 <p>For example, if a summing combiner were configured on a table and the following mutations were inserted:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Row     Family Qualifier Timestamp  Value
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Row     Family Qualifier Timestamp  Value
 rowID1  colfA  colqA     20100101   1
 rowID1  colfA  colqA     20100102   1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The table would reflect only one aggregate value:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>rowID1  colfA  colqA     -          2
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>rowID1  colfA  colqA     -          2
+</code></pre></div></div>
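The collapsing behavior shown above can be sketched outside Accumulo as a plain reduction over duplicate keys. This stand-alone `SummingSketch` class is illustrative only; the real work is done server-side by org.apache.accumulo.core.iterators.user.SummingCombiner during scans and compactions.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Stand-alone illustration of the summing reduction: entries that share the
// same row/family/qualifier key collapse to a single summed value.
class SummingSketch {
    // Each entry is {row, family, qualifier, value}.
    static Map<String, Long> combine(String[][] entries) {
        Map<String, Long> combined = new LinkedHashMap<>();
        for (String[] e : entries) {
            String key = e[0] + "/" + e[1] + "/" + e[2]; // row/family/qualifier
            combined.merge(key, Long.parseLong(e[3]), Long::sum);
        }
        return combined;
    }
}
```

Feeding in the two rowID1 entries above yields a single key mapped to 2, matching the aggregate the table would reflect.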
 
 <p>Combiners can be enabled for a table using the setiter command in the shell. Below is an example.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@a14 perDayCounts&gt; setiter -t perDayCounts -p 10 -scan -minc -majc -n daycount 
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@a14 perDayCounts&gt; setiter -t perDayCounts -p 10 -scan -minc -majc -n daycount 
                        -class org.apache.accumulo.core.iterators.user.SummingCombiner
 TypedValueCombiner can interpret Values as a variety of number encodings 
   (VLong, Long, or String) before combining
@@ -411,8 +397,7 @@ root@a14 perDayCounts&gt; scan
 bar day:20080101 []    2
 foo day:20080101 []    2
 foo day:20080103 []    1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Accumulo includes some useful Combiners out of the box. To find these, look in the <br />
 <strong>org.apache.accumulo.core.iterators.user</strong> package.</p>
@@ -429,17 +414,15 @@ accumulo/src/examples/simple/main/java/org/apache/accumulo/examples/simple/combi
 
 <p>The block cache can be configured on a per-table basis, and all tablets hosted on a tablet server share a single resource pool. To configure the size of the tablet server’s block cache, set the following properties:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>tserver.cache.data.size: Specifies the size of the cache for file data blocks.
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>tserver.cache.data.size: Specifies the size of the cache for file data blocks.
 tserver.cache.index.size: Specifies the size of the cache for file indices.
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To enable the block cache for your table, set the following properties:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>table.cache.block.enable: Determines whether file (data) block cache is enabled.
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>table.cache.block.enable: Determines whether file (data) block cache is enabled.
 table.cache.index.enable: Determines whether index cache is enabled.
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The block cache can have a significant effect on alleviating hot spots, as well as reducing query latency. It is enabled by default for the !METADATA table.</p>
 
@@ -447,9 +430,8 @@ table.cache.index.enable: Determines whether index cache is enabled.
 
 <p>As data is written to Accumulo it is buffered in memory. The data buffered in memory is eventually written to HDFS on a per tablet basis. Files can also be added to tablets directly by bulk import. In the background tablet servers run major compactions to merge multiple files into one. The tablet server has to decide which tablets to compact and which files within a tablet to compact. This decision is made using the compaction ratio, which is configurable on a per table basis. To conf [...]
 
-<div class="highlighter-rouge"><pre class="highlight"><code>table.compaction.major.ratio
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>table.compaction.major.ratio
+</code></pre></div></div>
 
 <p>Increasing this ratio will result in more files per tablet and less compaction work. More files per tablet means higher query latency, so adjusting this ratio is a trade-off between ingest and query performance. The ratio defaults to 3.</p>
 
@@ -457,15 +439,13 @@ table.cache.index.enable: Determines whether index cache is enabled.
 
 <p>The number of background threads tablet servers use to run major compactions is configurable. To configure this modify the following property:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>tserver.compaction.major.concurrent.max
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>tserver.compaction.major.concurrent.max
+</code></pre></div></div>
 
 <p>Also, the number of threads tablet servers use for minor compactions is configurable. To configure this modify the following property:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>tserver.compaction.minor.concurrent.max
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>tserver.compaction.minor.concurrent.max
+</code></pre></div></div>
 
 <p>The numbers of minor and major compactions running and queued are visible on the Accumulo monitor page. This allows you to see whether compactions are backing up and whether the above settings need adjustment. When adjusting the number of threads available for compactions, consider the number of cores and the other tasks running on the nodes, such as maps and reduces.</p>
 
@@ -473,9 +453,8 @@ table.cache.index.enable: Determines whether index cache is enabled.
 
 <p>Another option to deal with the files per tablet growing too large is to adjust the following property:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>table.file.max
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>table.file.max
+</code></pre></div></div>
 
 <p>When a tablet reaches this number of files and needs to flush its in-memory data to disk, it will choose to do a merging minor compaction. A merging minor compaction will merge the tablet’s smallest file with the data in memory at minor compaction time. Therefore the number of files will not grow beyond this limit. This will make minor compactions take longer, which will cause ingest performance to decrease. This can cause ingest to slow down until major compactions have enough time t [...]
 
@@ -487,10 +466,9 @@ table.cache.index.enable: Determines whether index cache is enabled.
 
 <p>In the shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance&gt; createtable newTable
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance&gt; createtable newTable
 root@myinstance&gt; addsplits -t newTable g n t
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>This will create a new table with 4 tablets. The table will be split on the letters "g", "n", and "t", which will work nicely if the row data start with lower-case alphabetic characters. If your row data includes binary or numeric information, or if the distribution of the row information is not flat, then you would pick different split points. Now ingest and query can proceed on 4 nodes, which can improve performance.</p>
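The split points from the shell example can also be computed programmatically. The sketch below only builds the sorted set of split rows; `alphaSplits` is a hypothetical helper, and on a live cluster the set would be converted to Text values and passed to something like conn.tableOperations().addSplits(...).

```java
import java.util.SortedSet;
import java.util.TreeSet;

// Hypothetical helper: compute evenly spaced lower-case alphabetic split
// points for a desired tablet count (4 tablets -> splits g, n, t as above).
class SplitPoints {
    static SortedSet<String> alphaSplits(int tablets) {
        SortedSet<String> splits = new TreeSet<>();
        for (int i = 1; i < tablets; i++) {
            // i/tablets of the way through the 26-letter alphabet
            splits.add(String.valueOf((char) ('a' + (i * 26) / tablets)));
        }
        return splits;
    }
}
```

`alphaSplits(4)` produces the set {g, n, t}, the same three split points added in the shell session above.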
 
@@ -500,48 +478,41 @@ root@myinstance&gt; addsplits -t newTable g n t
 
 <p>Accumulo supports tablet merging, which can be used to reduce the number of split points. The following command will merge all rows from "A" to "Z" into a single tablet:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance&gt; merge -t myTable -s A -e Z
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance&gt; merge -t myTable -s A -e Z
+</code></pre></div></div>
 
 <p>If the result of a merge produces a tablet that is larger than the configured split size, the tablet may be split by the tablet server. Be sure to increase your tablet size prior to any merges if the goal is to have larger tablets:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance&gt; config -t myTable -s table.split.threshold=2G
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance&gt; config -t myTable -s table.split.threshold=2G
+</code></pre></div></div>
 
 <p>In order to merge small tablets, you can ask Accumulo to merge sections of a table that are smaller than a given size:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance&gt; merge -t myTable -s 100M
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance&gt; merge -t myTable -s 100M
+</code></pre></div></div>
 
 <p>By default, small tablets will not be merged into tablets that are already larger than the given size. This can leave isolated small tablets. To force small tablets to be merged into larger tablets, use the "--force" option:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance&gt; merge -t myTable -s 100M --force
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance&gt; merge -t myTable -s 100M --force
+</code></pre></div></div>
 
 <p>Merging away small tablets works on one section at a time. If your table contains many sections of small split points, or you are attempting to change the split size of the entire table, it will be faster to set the split threshold and merge the entire table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance&gt; config -t myTable -s table.split.threshold=256M
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance&gt; config -t myTable -s table.split.threshold=256M
 root@myinstance&gt; merge -t myTable
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-delete-range"><a id="Delete_Range"></a> Delete Range</h2>
 
 <p>Consider an indexing scheme that uses date information in each row. For example, "20110823-15:20:25.013" might be a row that specifies a date and time. In some cases, we might like to delete rows based on this date, say to remove all the data older than the current year. Accumulo supports a delete range operation which can efficiently remove data between two rows. For example:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance&gt; deleterange -t myTable -s 2010 -e 2011
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance&gt; deleterange -t myTable -s 2010 -e 2011
+</code></pre></div></div>
 
 <p>This will delete all rows starting with "2010" and stop at any row starting with "2011". You can delete any data prior to 2011 with:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@myinstance&gt; deleterange -t myTable -e 2011 --force
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@myinstance&gt; deleterange -t myTable -e 2011 --force
+</code></pre></div></div>
 
 <p>The shell will not allow you to delete an unbounded range (no start) unless you provide the "--force" option.</p>
 
@@ -557,7 +528,7 @@ root@myinstance&gt; merge -t myTable
 
 <p>In the following example we see that data inserted after the clone operation is not visible in the clone.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@a14&gt; createtable people
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@a14&gt; createtable people
 root@a14 people&gt; insert 890435 name last Doe
 root@a14 people&gt; insert 890435 name first John
 root@a14 people&gt; clonetable people test  
@@ -573,12 +544,11 @@ root@a14 test&gt; scan
 890435 name:first []    John
 890435 name:last []    Doe
 root@a14 test&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The du command in the shell shows how much space a table is using in HDFS. This command can also show how much overlapping space two cloned tables have in HDFS. In the example below, du shows table ci is using 428M. Then ci is cloned to cic, and du shows that both tables share 428M. After three entries are inserted into cic and it is flushed, du shows the two tables still share 428M but cic has 226 bytes to itself. Finally, table cic is compacted, and then du shows that each table uses 428M.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@a14&gt; du ci           
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@a14&gt; du ci           
              428,482,573 [ci]
 root@a14&gt; clonetable ci cic
 root@a14&gt; du ci cic
@@ -599,8 +569,7 @@ root@a14 cic&gt; du ci cic
              428,482,573 [ci]
              428,482,612 [cic]
 root@a14 cic&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <hr />
 
diff --git a/1.4/user_manual/Table_Design.html b/1.4/user_manual/Table_Design.html
index ed66e5e..e2e6919 100644
--- a/1.4/user_manual/Table_Design.html
+++ b/1.4/user_manual/Table_Design.html
@@ -168,66 +168,60 @@
 
 <p>Since Accumulo tables are sorted by row ID, each table can be thought of as being indexed by the row ID. Lookups performed by row ID can be executed quickly, by doing a binary search, first across the tablets, and then within a tablet. Clients should choose a row ID carefully in order to support their desired application. A simple rule is to select a unique identifier as the row ID for each entity to be stored and assign all the other attributes to be tracked to be columns under this row [...]
 
-<div class="highlighter-rouge"><pre class="highlight"><code>    userid,age,address,account-balance
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    userid,age,address,account-balance
+</code></pre></div></div>
 
 <p>We might choose to store this data using the userid as the rowID and the rest of the data in column families:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Mutation m = new Mutation(new Text(userid));
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Mutation m = new Mutation(new Text(userid));
 m.put(new Text("age"), age);
 m.put(new Text("address"), address);
 m.put(new Text("balance"), account_balance);
 
 writer.add(m);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>We could then retrieve any of the columns for a specific userid by specifying the userid as the range of a scanner and fetching specific columns:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Range r = new Range(userid, userid); // single row
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Range r = new Range(userid, userid); // single row
 Scanner s = conn.createScanner("userdata", auths);
 s.setRange(r);
 s.fetchColumnFamily(new Text("age"));
 
 for(Entry&lt;Key,Value&gt; entry : s)
     System.out.println(entry.getValue().toString());
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-rowid-design"><a id="RowID_Design"></a> RowID Design</h2>
 
 <p>Often it is necessary to transform the rowID in order to have rows ordered in a way that is optimal for anticipated access patterns. A good example of this is reversing the order of components of internet domain names in order to group rows of the same parent domain together:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>com.google.code
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>com.google.code
 com.google.labs
 com.google.mail
 com.yahoo.mail
 com.yahoo.research
-</code></pre>
-</div>
+</code></pre></div></div>
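The reversal shown above is a plain string transformation; a minimal sketch of it might look like the following (the helper class and method names here are hypothetical, not part of the Accumulo API):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class RowIdUtil {
    // Reverse the components of a domain name so that rows from the
    // same parent domain sort together in the table, e.g.
    // "code.google.com" becomes "com.google.code".
    public static String reverseDomain(String domain) {
        List<String> parts = Arrays.asList(domain.split("\\."));
        Collections.reverse(parts);
        return String.join(".", parts);
    }
}
```

The transformed value would then be used as the row ID when building each Mutation.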
 
 <p>Some data may result in the creation of very large rows - rows with many columns. In this case the table designer may wish to split up these rows for better load balancing while keeping them sorted together for scanning purposes. This can be done by appending a random substring at the end of the row:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>com.google.code_00
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>com.google.code_00
 com.google.code_01
 com.google.code_02
 com.google.labs_00
 com.google.mail_00
 com.google.mail_01
-</code></pre>
-</div>
+</code></pre></div></div>
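One way to derive the trailing shard number deterministically, rather than truly at random, is to hash some attribute of the data so that the same cell always lands in the same shard. This is a sketch under that assumption, not code from the Accumulo distribution:

```java
public class ShardedRowId {
    // Number of physical rows each large logical row is split into.
    static final int NUM_SHARDS = 4;

    // Append a two-digit shard suffix derived from a hash of the
    // column qualifier (or any other attribute). The shards still
    // sort adjacently, so a scan over the row prefix sees them all.
    public static String shardedRow(String row, String qualifier) {
        int shard = Math.floorMod(qualifier.hashCode(), NUM_SHARDS);
        return String.format("%s_%02d", row, shard);
    }
}
```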
 
 <p>It could also be done by appending a string representation of some period of time, such as the date truncated to the week or month:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>com.google.code_201003
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>com.google.code_201003
 com.google.code_201004
 com.google.code_201005
 com.google.labs_201003
 com.google.mail_201003
 com.google.mail_201004
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Appending dates provides the additional capability of restricting a scan to a given date range.</p>
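Because the yyyyMM suffix sorts lexicographically in date order, restricting a scan to a date range only requires building the start and end row IDs from the same scheme. A minimal sketch (the class and method names are illustrative; the resulting strings would be passed to an Accumulo Range):

```java
public class DateRangeRows {
    // Build inclusive start and end row IDs for one domain over a
    // span of yyyyMM buckets, matching the "row_yyyyMM" layout above.
    public static String[] monthRange(String row, String fromMonth, String toMonth) {
        return new String[] { row + "_" + fromMonth, row + "_" + toMonth };
    }
}
```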
 
@@ -243,7 +237,7 @@ com.google.mail_201004
 
 <p>To support efficient lookups of multiple rowIDs from the same table, the Accumulo client library provides a BatchScanner. Users specify a set of Ranges to the BatchScanner, which performs the lookups in multiple threads to multiple servers and returns an Iterator over all the rows retrieved. The rows returned are NOT in sorted order, as is the case with the basic Scanner interface.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>// first we scan the index for IDs of rows matching our query
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// first we scan the index for IDs of rows matching our query
 
 Text term = new Text("mySearchTerm");
 
@@ -264,8 +258,7 @@ bscan.fetchFamily("attributes");
 
 for(Entry&lt;Key,Value&gt; entry : bscan)
     System.out.println(entry.getValue());
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>One advantage of the dynamic schema capabilities of Accumulo is that different fields may be indexed into the same physical table. However, it may be necessary to create different index tables if the terms must be formatted differently in order to maintain proper sort order. For example, real numbers must be formatted differently than their usual notation in order to be sorted correctly. In these cases, usually one index per unique data type will suffice.</p>
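To make the point about real numbers concrete: plain decimal strings sort lexicographically, so "10" sorts before "9". Zero-padding to a fixed width is one common fix for non-negative integers (a sketch only; production encodings that also handle negative and floating-point values are more involved):

```java
public class SortableNumbers {
    // Encode a non-negative integer as a fixed-width, zero-padded
    // string so that lexicographic order matches numeric order.
    public static String encode(long value, int width) {
        if (value < 0) {
            throw new IllegalArgumentException("non-negative values only");
        }
        return String.format("%0" + width + "d", value);
    }
}
```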
 
@@ -301,7 +294,7 @@ for(Entry&lt;Key,Value&gt; entry : scan)
 
 <p>Finally, we perform set intersection operations on the TabletServer via a special iterator called the Intersecting Iterator. Since documents are partitioned into many bins, a search of all documents must search every bin. We can use the BatchScanner to scan all bins in parallel. The Intersecting Iterator should be enabled on a BatchScanner within user query code as follows:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Text[] terms = {new Text("the"), new Text("white"), new Text("house")};
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Text[] terms = {new Text("the"), new Text("white"), new Text("house")};
 
 BatchScanner bs = conn.createBatchScanner(table, auths, 20);
 IteratorSetting iter = new IteratorSetting(20, "ii", IntersectingIterator.class);
@@ -312,8 +305,7 @@ bs.setRanges(Collections.singleton(new Range()));
 for(Entry&lt;Key,Value&gt; entry : bs) {
     System.out.println(" " + entry.getKey().getColumnQualifier());
 }
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>This code effectively has the BatchScanner scan all tablets of a table, looking for documents that match all the given terms. Because all tablets are being scanned for every query, each query is more expensive than other Accumulo scans, which typically involve a small number of TabletServers. This reduces the number of concurrent queries supported and is subject to what is known as the ‘straggler’ problem in which every query runs as slow as the slowest server participating.</p>
 
diff --git a/1.4/user_manual/Writing_Accumulo_Clients.html b/1.4/user_manual/Writing_Accumulo_Clients.html
index 71b1c82..abb465a 100644
--- a/1.4/user_manual/Writing_Accumulo_Clients.html
+++ b/1.4/user_manual/Writing_Accumulo_Clients.html
@@ -176,9 +176,8 @@
 
 <p>In order to run client code written against Accumulo, you will need to include the jars that Accumulo depends on in your classpath. Accumulo client code depends on Hadoop and Zookeeper. For Hadoop, add the hadoop core jar, all of the jars in the Hadoop lib directory, and the conf directory to the classpath. For Zookeeper 3.3 you only need to add the Zookeeper jar, and not what is in the Zookeeper lib directory. You can run the following command on a configured Accumulo system to  [...]
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ACCUMULO_HOME/bin/accumulo classpath
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ACCUMULO_HOME/bin/accumulo classpath
+</code></pre></div></div>
 
 <p>Another option for running your code is to put a jar file in $ACCUMULO_HOME/lib/ext. After doing this you can use the accumulo script to execute your code. For example if you create a jar containing the class com.foo.Client and placed that in lib/ext, then you could use the command $ACCUMULO_HOME/bin/accumulo com.foo.Client to execute your code.</p>
 
@@ -188,13 +187,12 @@
 
 <p>All clients must first identify the Accumulo instance to which they will be communicating. Code to do this is as follows:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>String instanceName = "myinstance";
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>String instanceName = "myinstance";
 String zooServers = "zooserver-one,zooserver-two";
 Instance inst = new ZooKeeperInstance(instanceName, zooServers);
 
 Connector conn = inst.getConnector("user", "passwd");
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="-writing-data"><a id="Writing_Data"></a> Writing Data</h2>
 
@@ -202,7 +200,7 @@ Connector conn = inst.getConnector("user", "passwd");
 
 <p>Mutations can be created thus:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Text rowID = new Text("row1");
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Text rowID = new Text("row1");
 Text colFam = new Text("myColFam");
 Text colQual = new Text("myColQual");
 ColumnVisibility colVis = new ColumnVisibility("public");
@@ -212,8 +210,7 @@ Value value = new Value("myValue".getBytes());
 
 Mutation mutation = new Mutation(rowID);
 mutation.put(colFam, colQual, colVis, timestamp, value);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h3 id="-batchwriter"><a id="BatchWriter"></a> BatchWriter</h3>
 
@@ -221,7 +218,7 @@ mutation.put(colFam, colQual, colVis, timestamp, value);
 
 <p>Mutations are added to a BatchWriter thus:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>long memBuf = 1000000L; // bytes to store before sending a batch
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>long memBuf = 1000000L; // bytes to store before sending a batch
 long timeout = 1000L; // milliseconds to wait before sending
 int numThreads = 10;
 
@@ -231,8 +228,7 @@ BatchWriter writer =
 writer.add(mutation);
 
 writer.close();
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>An example of using the batch writer can be found at <br />
 accumulo/docs/examples/README.batch</p>
@@ -245,7 +241,7 @@ accumulo/docs/examples/README.batch</p>
 
 <p>To retrieve data, clients use a Scanner, which acts like an Iterator over keys and values. Scanners can be configured to start and stop at particular keys, and to return a subset of the columns available.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>// specify which visibilities we are allowed to see
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// specify which visibilities we are allowed to see
 Authorizations auths = new Authorizations("public");
 
 Scanner scan =
@@ -258,8 +254,7 @@ for(Entry&lt;Key,Value&gt; entry : scan) {
     String row = entry.getKey().getRow().toString();
     Value value = entry.getValue();
 }
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h3 id="-isolated-scanner"><a id="Isolated_Scanner"></a> Isolated Scanner</h3>
 
@@ -284,7 +279,7 @@ src/examples/src/main/java/org/apache/accumulo/examples/isolation/InterferenceTe
 
 <p>The BatchScanner is configured similarly to the Scanner; it can be configured to retrieve a subset of the columns available, but rather than passing a single Range, BatchScanners accept a set of Ranges. It is important to note that the keys returned by a BatchScanner are not in sorted order since the keys streamed are from multiple TabletServers in parallel.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>ArrayList&lt;Range&gt; ranges = new ArrayList&lt;Range&gt;();
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ArrayList&lt;Range&gt; ranges = new ArrayList&lt;Range&gt;();
 // populate list of ranges ...
 
 BatchScanner bscan =
@@ -295,8 +290,7 @@ bscan.fetchFamily("attributes");
 
 for(Entry&lt;Key,Value&gt; entry : bscan)
     System.out.println(entry.getValue());
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>An example of the BatchScanner can be found at <br />
 accumulo/docs/examples/README.batch</p>
@@ -313,19 +307,17 @@ accumulo/docs/examples/README.batch</p>
 
 <p>The configuration options for the proxy server live inside of a properties file. At the very least, you need to supply the following properties:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>protocolFactory=org.apache.thrift.protocol.TCompactProtocol$Factory
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>protocolFactory=org.apache.thrift.protocol.TCompactProtocol$Factory
 tokenClass=org.apache.accumulo.core.client.security.tokens.PasswordToken
 port=42424
 instance=test
 zookeepers=localhost:2181
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>You can find a sample configuration file in your distribution:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ACCUMULO_HOME/proxy/proxy.properties.
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ACCUMULO_HOME/proxy/proxy.properties.
+</code></pre></div></div>
 
 <p>This sample configuration file further demonstrates the ability to back the proxy server with MockAccumulo or the MiniAccumuloCluster.</p>
 
@@ -333,9 +325,8 @@ zookeepers=localhost:2181
 
 <p>After the properties file holding the configuration is created, the proxy server can be started using the following command in the Accumulo distribution (assuming your properties file is named config.properties):</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ACCUMULO_HOME/bin/accumulo proxy -p config.properties
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ACCUMULO_HOME/bin/accumulo proxy -p config.properties
+</code></pre></div></div>
 
 <h3 id="-creating-a-proxy-client"><a id="Creating_a_Proxy_Client"></a> Creating a Proxy Client</h3>
 
@@ -343,9 +334,8 @@ zookeepers=localhost:2181
 
 <p>You can find the thrift file for generating the client:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ACCUMULO_HOME/proxy/proxy.thrift.
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ACCUMULO_HOME/proxy/proxy.thrift.
+</code></pre></div></div>
 
 <p>After a client is generated, the port specified in the configuration properties above will be used to connect to the server.</p>
 
@@ -353,23 +343,21 @@ zookeepers=localhost:2181
 
 <p>The following examples have been written in Java and the method signatures may be slightly different depending on the language specified when generating the client with the Thrift compiler. After initiating a connection to the Proxy (see Apache Thrift’s documentation for examples of connecting to a Thrift service), the methods on the proxy client will be available. The first thing to do is log in:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>Map password = new HashMap&lt;String,String&gt;();
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Map password = new HashMap&lt;String,String&gt;();
 password.put("password", "secret");
 ByteBuffer token = client.login("root", password);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Once logged in, the token returned will be used for most subsequent calls to the client. Let’s create a table, add some data, scan the table, and delete it.</p>
 
 <p>First, create a table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>client.createTable(token, "myTable", true, TimeType.MILLIS);
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>client.createTable(token, "myTable", true, TimeType.MILLIS);
+</code></pre></div></div>
 
 <p>Next, add some data:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>// first, create a writer on the server
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// first, create a writer on the server
 String writer = client.createWriter(token, "myTable", new WriterOptions());
 
 // build column updates
@@ -379,12 +367,11 @@ Map&lt;ByteBuffer, List&lt;ColumnUpdate&gt; cells&gt; cellsToUpdate = //...
 client.updateAndFlush(writer, "myTable", cellsToUpdate);
 
 client.closeWriter(writer);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Scan for the data and batch the return of the results on the server:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>String scanner = client.createScanner(token, "myTable", new ScanOptions());
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>String scanner = client.createScanner(token, "myTable", new ScanOptions());
 ScanResult results = client.nextK(scanner, 100);
 
 for(KeyValue keyValue : results.getResultsIterator()) {
@@ -392,8 +379,7 @@ for(KeyValue keyValue : results.getResultsIterator()) {
 }
 
 client.closeScanner(scanner);
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <hr />
 
diff --git a/1.5/examples/batch.html b/1.5/examples/batch.html
index 4f67ce4..ea201dc 100644
--- a/1.5/examples/batch.html
+++ b/1.5/examples/batch.html
@@ -169,13 +169,12 @@ list of zookeeper nodes (given as zookeepers here).</p>
 <p>Before you run this, you must ensure that the user you are running as has the
 “exampleVis” authorization. (You can set this in the shell with “setauths -u username -s exampleVis”)</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e "setauths -u username -s exampleVis"
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e "setauths -u username -s exampleVis"
+</code></pre></div></div>
 
 <p>You must also create the table, batchtest1, ahead of time. (In the shell, use “createtable batchtest1”)</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -e "createtable batchtest1"
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -e "createtable batchtest1"
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.SequentialBatchWriter -i instance -z zookeepers -u username -p password -t batchtest1 --start 0 --num 10000 --size 50 --batchMemory 20M --batchLatency 500 --batchThreads 20 --vis exampleVis
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -i instance -z zookeepers -u username -p password -t batchtest1 --num 100 --min 0 --max 10000 --size 50 --scanThreads 20 --auths exampleVis
 07 11:33:11,103 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries...
@@ -189,8 +188,7 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -
 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : 2173.91 lookups/sec   0.05 secs
 
 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : num results : 100
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.5/examples/bloom.html b/1.5/examples/bloom.html
index dadb695..bb85b97 100644
--- a/1.5/examples/bloom.html
+++ b/1.5/examples/bloom.html
@@ -154,7 +154,7 @@ do not exist in a table.</p>
 
 <p>Below, a table named bloom_test is created and bloom filters are enabled.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.5.0
 - instance name: instance
@@ -166,43 +166,39 @@ username@instance&gt; setauths -u username -s exampleVis
 username@instance&gt; createtable bloom_test
 username@instance bloom_test&gt; config -t bloom_test -s table.bloom.enabled=true
 username@instance bloom_test&gt; exit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, 1 million random values are inserted into Accumulo.  The randomly
 generated rows range between 0 and 1 billion.  The random number generator is
 initialized with seed 7.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --vis exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --vis exampleVis
+</code></pre></div></div>
 
 <p>Below, the table is flushed:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test -w'
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test -w'
 05 10:40:06,069 [shell.Shell] INFO : Flush of table bloom_test completed.
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the flush completes, 500 random queries are done against the table.  The
 same seed is used to generate the queries, therefore everything is found in the
 table.</p>
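The seed-driven behaviour relied on above is ordinary java.util.Random determinism: two generators constructed with the same seed produce the identical sequence, so the scanner regenerates exactly the rows the writer inserted. This sketch illustrates the principle only; it is not the example's actual row-generation code:

```java
import java.util.Random;

public class SeededQueries {
    // With the same seed, two Random instances yield the same
    // sequence of row numbers; a different seed yields an (almost
    // certainly) different set of rows, which then miss the table.
    public static long[] rows(long seed, int n, long max) {
        Random rng = new Random(seed);
        long[] out = new long[n];
        for (int i = 0; i < n; i++) {
            out[i] = Math.floorMod(rng.nextLong(), max);
        }
        return out;
    }
}
```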
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --batchThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --batchThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 96.19 lookups/sec   5.20 secs
 num results : 500
 Generating 500 random queries...finished
 102.35 lookups/sec   4.89 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, another 500 queries are performed using a different seed, which results
 in nothing being found.  In this case the lookups are much faster because of
 the bloom filters.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 8 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --batchThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 8 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --batchThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 2212.39 lookups/sec   0.23 secs
 num results : 0
@@ -211,8 +207,7 @@ Generating 500 random queries...finished
 4464.29 lookups/sec   0.11 secs
 num results : 0
 Did not find 500 rows
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <hr />
 
@@ -244,7 +239,7 @@ million inserts.  If not, then more map files will be created.</p>
 
 <p>The commands for creating the first table without bloom filters are below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.5.0
 - instance name: instance
@@ -264,12 +259,11 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The commands for creating the second table with bloom filters are below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.5.0
 - instance name: instance
@@ -290,64 +284,59 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, 500 lookups are done against the table without bloom filters, using
 random number generator seed 7.  Even though only one map file will likely
 contain entries for this seed, all map files will be interrogated.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test1 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test1 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 35.09 lookups/sec  14.25 secs
 num results : 500
 Generating 500 random queries...finished
 35.33 lookups/sec  14.15 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below the same lookups are done against the table with bloom filters.  The
 lookups were 2.86 times faster because only one map file was used, even though three
 map files existed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test2 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test2 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 99.03 lookups/sec   5.05 secs
 num results : 500
 Generating 500 random queries...finished
 101.15 lookups/sec   4.94 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>You can verify the table has three files by looking in HDFS.  To look in HDFS
 you will need the table ID, because this is used in HDFS instead of the table
 name.  The following command will show table ids.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'tables -l'
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'tables -l'
 !METADATA       =&gt;         !0
 bloom_test1     =&gt;         o7
 bloom_test2     =&gt;         o8
 trace           =&gt;          1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>So the table id for bloom_test2 is o8.  The command below shows what files this
 table has in HDFS.  This assumes Accumulo is at the default location in HDFS.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -lsr /accumulo/tables/o8
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -lsr /accumulo/tables/o8
 drwxr-xr-x   - username supergroup          0 2012-01-10 14:02 /accumulo/tables/o8/default_tablet
 -rw-r--r--   3 username supergroup   52672650 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dj.rf
 -rw-r--r--   3 username supergroup   52436176 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dk.rf
 -rw-r--r--   3 username supergroup   52850173 2012-01-10 14:02 /accumulo/tables/o8/default_tablet/F00000dl.rf
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Running the rfile-info command shows that one of the files has a bloom filter
 and that it is 1.5MB.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo rfile-info /accumulo/tables/o8/default_tablet/F00000dj.rf
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo rfile-info /accumulo/tables/o8/default_tablet/F00000dj.rf
 Locality group         : &lt;DEFAULT&gt;
   Start block          : 0
   Num   blocks         : 752
@@ -371,8 +360,7 @@ Meta block     : acu_bloom
   Raw size             : 1,540,292 bytes
   Compressed size      : 1,433,115 bytes
   Compression type     : gz
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.5/examples/bulkIngest.html b/1.5/examples/bulkIngest.html
index 1afba82..b6c51d7 100644
--- a/1.5/examples/bulkIngest.html
+++ b/1.5/examples/bulkIngest.html
@@ -155,14 +155,13 @@ table called test_bulk which has two initial split points. Then 1000 rows of
 test data are created in HDFS. After that, the 1000 rows are ingested into
 Accumulo.  Then we verify that the 1000 rows are in Accumulo.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ PKG=org.apache.accumulo.examples.simple.mapreduce.bulk
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ PKG=org.apache.accumulo.examples.simple.mapreduce.bulk
 $ ARGS="-i instance -z zookeepers -u username -p password"
 $ ./bin/accumulo $PKG.SetupTable $ARGS -t test_bulk row_00000333 row_00000666
 $ ./bin/accumulo $PKG.GenerateTestData --start-row 0 --count 1000 --output bulk/test_1.txt
 $ ./bin/tool.sh lib/accumulo-examples-simple.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork
 $ ./bin/accumulo $PKG.VerifyIngest $ARGS -t test_bulk --start-row 0 --count 1000
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>For a high level discussion of bulk ingest, see the docs dir.</p>
 
diff --git a/1.5/examples/classpath.html b/1.5/examples/classpath.html
index 27c0329..f37b26d 100644
--- a/1.5/examples/classpath.html
+++ b/1.5/examples/classpath.html
@@ -155,59 +155,52 @@ table reference that jar.</p>
 
 <p>Execute the following command in the shell.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/test/src/test/resources/FooFilter.jar /user1/lib
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/test/src/test/resources/FooFilter.jar /user1/lib
+</code></pre></div></div>
 
 <p>Execute the following in the Accumulo shell to set up the classpath context</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15&gt; config -s general.vfs.context.classpath.cx1=hdfs://&lt;namenode host&gt;:&lt;namenode port&gt;/user1/lib
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15&gt; config -s general.vfs.context.classpath.cx1=hdfs://&lt;namenode host&gt;:&lt;namenode port&gt;/user1/lib
+</code></pre></div></div>
 
 <p>Create a table</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15&gt; createtable nofoo
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15&gt; createtable nofoo
+</code></pre></div></div>
 
 <p>The following command makes this table use the configured classpath context</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15 nofoo&gt; config -t nofoo -s table.classpath.context=cx1
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15 nofoo&gt; config -t nofoo -s table.classpath.context=cx1
+</code></pre></div></div>
 
 <p>The following command configures an iterator that is in FooFilter.jar</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15 nofoo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15 nofoo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
 Filter accepts or rejects each Key/Value pair
 ----------&gt; set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The commands below show the filter is working.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15 nofoo&gt; insert foo1 f1 q1 v1
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15 nofoo&gt; insert foo1 f1 q1 v1
 root@test15 nofoo&gt; insert noo1 f1 q1 v2
 root@test15 nofoo&gt; scan
 noo1 f1:q1 []    v2
 root@test15 nofoo&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, an attempt is made to add the FooFilter to a table that is not configured
 to use the classpath context cx1.  This fails until the table is configured to
 use cx1.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15 nofoo&gt; createtable nofootwo
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15 nofoo&gt; createtable nofootwo
 root@test15 nofootwo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
 2013-05-03 12:49:35,943 [shell.Shell] ERROR: java.lang.IllegalArgumentException: org.apache.accumulo.test.FooFilter
 root@test15 nofootwo&gt; config -t nofootwo -s table.classpath.context=cx1
 root@test15 nofootwo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
 Filter accepts or rejects each Key/Value pair
 ----------&gt; set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.5/examples/client.html b/1.5/examples/client.html
index b5f762b..a9817aa 100644
--- a/1.5/examples/client.html
+++ b/1.5/examples/client.html
@@ -162,15 +162,14 @@
 class name, and enough arguments to find your accumulo instance.  For example,
 the Flush class will flush a table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ PACKAGE=org.apache.accumulo.examples.simple.client
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ PACKAGE=org.apache.accumulo.examples.simple.client
 $ bin/accumulo $PACKAGE.Flush -u root -p mypassword -i instance -z zookeeper -t trace
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The very simple RowOperations class demonstrates how to read and write rows using the BatchWriter
 and Scanner:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.RowOperations -u root -p mypassword -i instance -z zookeeper 
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.RowOperations -u root -p mypassword -i instance -z zookeeper 
 2013-01-14 14:45:24,738 [client.RowOperations] INFO : This is everything
 2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:1 [] 1358192724640 false Value: This is the value for this key
 2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:2 [] 1358192724642 false Value: This is the value for this key
@@ -198,12 +197,11 @@ and Scanner:</p>
 2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:2 [] 1358192724642 false Value: This is the value for this key
 2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:3 [] 1358192724642 false Value: This is the value for this key
 2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:4 [] 1358192724642 false Value: This is the value for this key
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To create a table, write to it and read from it:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.ReadWriteExample -u root -p mypassword -i instance -z zookeeper --createtable --create --read 
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.ReadWriteExample -u root -p mypassword -i instance -z zookeeper --createtable --create --read 
 hello%00; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%01; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%02; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
@@ -214,8 +212,7 @@ hello%06; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%07; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%08; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%09; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.5/examples/combiner.html b/1.5/examples/combiner.html
index 9921586..a24fde6 100644
--- a/1.5/examples/combiner.html
+++ b/1.5/examples/combiner.html
@@ -158,7 +158,7 @@
 copy the produced jar into the accumulo lib dir.  This is already done in the
 tar distribution.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo shell -u username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo shell -u username
 Enter current password for 'username'@'instance': ***
 
 Shell - Apache Accumulo Interactive Shell
@@ -196,8 +196,7 @@ username@instance runners&gt; scan
 123456 hstat:virtualMarathon []    6a,6b,d5,2
 123456 name:first []    Joe
 123456 stat:marathon []    220,240,690,3
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>In this example, a table is created and the example stats combiner is applied to
 the column families stat and hstat.  The stats combiner computes min, max, sum, and
diff --git a/1.5/examples/constraints.html b/1.5/examples/constraints.html
index 1c93d6e..467664c 100644
--- a/1.5/examples/constraints.html
+++ b/1.5/examples/constraints.html
@@ -161,7 +161,7 @@ numeric keys.  The other constraint does not allow non numeric values. Two
 inserts that violate these constraints are attempted and denied.  The scan at
 the end shows the inserts were not allowed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 
 Shell - Apache Accumulo Interactive Shell
 - 
@@ -185,8 +185,7 @@ username@instance testConstraints&gt; insert r1! cf1 cq1 ABC
 username@instance testConstraints&gt; scan
 r1 cf1:cq1 []    1111
 username@instance testConstraints&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.5/examples/dirlist.html b/1.5/examples/dirlist.html
index a426fe7..149a021 100644
--- a/1.5/examples/dirlist.html
+++ b/1.5/examples/dirlist.html
@@ -167,9 +167,8 @@
 
 <p>To begin, ingest some data with Ingest.java.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Ingest -i instance -z zookeepers -u username -p password --vis exampleVis --chunkSize 100000 /local/username/workspace
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Ingest -i instance -z zookeepers -u username -p password --vis exampleVis --chunkSize 100000 /local/username/workspace
+</code></pre></div></div>
 
 <p>This may take some time if there are large files in the /local/username/workspace directory.  If you use 0 instead of 100000 on the command line, the ingest will run much faster, but it will not put any file data into Accumulo (the dataTable will be empty).
 Note that running this example will create tables dirTable, indexTable, and dataTable in Accumulo that you should delete when you have completed the example.
@@ -177,46 +176,41 @@ If you modify a file or add new files in the directory ingested (e.g. /local/use
 
 <p>To browse the data ingested, use Viewer.java.  Be sure to give the “username” user the authorizations to see the data. In this case, run</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e 'setauths -u username -s exampleVis'
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e 'setauths -u username -s exampleVis'
+</code></pre></div></div>
 
 <p>then run the Viewer:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Viewer -i instance -z zookeepers -u username -p password -t dirTable --dataTable dataTable --auths exampleVis --path /local/username/workspace
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Viewer -i instance -z zookeepers -u username -p password -t dirTable --dataTable dataTable --auths exampleVis --path /local/username/workspace
+</code></pre></div></div>
 
 <p>To list the contents of specific directories, use QueryUtil.java.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username/workspace
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To perform searches on file or directory names, also use QueryUtil.java.  Search terms must contain no more than one wildcard and cannot contain “/”.
 <em>Note</em>: these queries run on the <em>indexTable</em> table instead of the dirTable table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path filename --search
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path filename --search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path 'filename*' --search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path '*jar' --search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path 'filename*jar' --search
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To count the number of direct children (directories and files) and descendants (children and children’s descendants, directories and files), run the FileCount over the dirTable table.
 The results are written back to the same table.  FileCount reads from and writes to Accumulo.  This requires scan authorizations for the read and a visibility for the data written.
 In this example, the authorizations and visibility are set to the same value, exampleVis.  See README.visibility for more information on visibility and authorizations.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.FileCount -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.FileCount -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis
+</code></pre></div></div>
 
 <h2 id="directory-table">Directory Table</h2>
 
 <p>Here is an illustration of what data looks like in the directory table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]    value
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]    value
 000 dir:exec [exampleVis]    true
 000 dir:hidden [exampleVis]    false
 000 dir:lastmod [exampleVis]    1291996886000
@@ -230,8 +224,7 @@ In this example, the authorizations and visibility are set to the same value, ex
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:lastmod [exampleVis]    1308746481000
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:length [exampleVis]    9192
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:md5 [exampleVis]    274af6419a3c4c4a259260ac7017cbf1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The rows are of the form depth + path, where depth is the number of slashes (“/”) in the path, padded to 3 digits.  This is so that all the children of a directory appear as consecutive keys in Accumulo; without the depth, you would, for example, see all the subdirectories of /local before you saw /usr.
 For directories, the column family is “dir”.  For files, the column family is Long.MAX_VALUE - lastModified in bytes rather than string format, so that newer versions sort earlier.</p>
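The row-key and column-family encodings described above can be sketched in Python. This is an illustrative sketch of the layout, not the Ingest class itself; the helper names are hypothetical.

```python
import struct

LONG_MAX = 2**63 - 1  # Java Long.MAX_VALUE

def dir_row(path: str) -> str:
    """Directory-table row key: depth (number of '/' in the path,
    zero-padded to 3 digits) followed by the path itself."""
    depth = path.count("/")
    return f"{depth:03d}{path}"

def file_colfam(last_modified_ms: int) -> bytes:
    """File column family: Long.MAX_VALUE - lastModified as 8
    big-endian bytes, so newer versions sort (and scan) first."""
    return struct.pack(">q", LONG_MAX - last_modified_ms)

print(dir_row("/local/Accumulo.README"))  # 002/local/Accumulo.README
```

Because the subtraction inverts the ordering of timestamps, a plain forward scan of a file's row returns the most recently modified version first.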
@@ -240,13 +233,12 @@ For directories the column family is “dir”.  For files the column family is
 
 <p>Here is an illustration of what data looks like in the index table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]
 fAccumulo.README i:002/local/Accumulo.README [exampleVis]
 flocal i:001/local [exampleVis]
 rEMDAER.olumuccA i:002/local/Accumulo.README [exampleVis]
 rlacol i:001/local [exampleVis]
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The values of the index table are null.  The rows are of the form “f” + filename or “r” + reverse file name.  This is to enable searches with wildcards at the beginning, middle, or end.</p>
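A small Python sketch of the forward/reverse index rows and how a single-wildcard term maps onto them (illustrative helper names; the real QueryUtil class also handles mid-string wildcards):

```python
def index_rows(path: str):
    """Index-table rows for one file: 'f' + filename for prefix
    searches, 'r' + reversed filename for suffix searches."""
    name = path.rsplit("/", 1)[-1]
    return "f" + name, "r" + name[::-1]

def search_prefix(term: str) -> str:
    """Row prefix to scan for a term with at most one wildcard."""
    if term.endswith("*"):        # 'filename*' -> forward index
        return "f" + term[:-1]
    if term.startswith("*"):      # '*jar' -> reverse index
        return "r" + term[1:][::-1]
    return "f" + term             # exact name
```

For example, `index_rows("002/local/Accumulo.README")` yields `fAccumulo.README` and `rEMDAER.olumuccA`, matching the sample rows above, and a `*jar` search becomes a scan for rows starting with `rraj`.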
 
@@ -254,13 +246,12 @@ rlacol i:001/local [exampleVis]
 
 <p>Here is an illustration of what data looks like in the data table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]    value
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]    value
 274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00filext [exampleVis]    README
 274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00name [exampleVis]    /local/Accumulo.README
 274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x00 [exampleVis]    *******************************************************************************\x0A1. Building\x0A\x0AIn the normal tarball or RPM release of accumulo, [truncated]
 274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x01 [exampleVis]
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The rows are the md5 hash of the file.  Some column family : column qualifier pairs are “refs” : hash of file name + null byte + property name, in which case the value is the property value.  There can be multiple references to the same file, which are distinguished by the hash of the file name.
 Other column family : column qualifier pairs are “~chunk” : chunk size in bytes + chunk number in bytes, in which case the value is the bytes for that chunk of the file.  There is an end of file data marker whose chunk number is the number of chunks for the file and whose value is empty.</p>
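A Python sketch of the data-table layout just described. The MD5-based name hash and the helper name are illustrative assumptions, not the exact FileDataIngest implementation; the `~chunk` qualifier encoding (chunk size + chunk index as two 4-byte big-endian ints, ending with an empty end-of-file marker) follows the description above.

```python
import hashlib
import struct

def data_entries(path: str, data: bytes, chunk_size: int = 1000):
    """Yield (row, colfam, colqual, value) tuples for one file:
    the row is the hex MD5 of the file content, a 'refs' entry
    records the file name, and '~chunk' entries carry the bytes."""
    row = hashlib.md5(data).hexdigest()
    name_hash = hashlib.md5(path.encode()).hexdigest()  # illustrative name hash
    entries = [(row, "refs", name_hash + "\x00name", path.encode())]
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for i, chunk in enumerate(chunks + [b""]):  # trailing empty chunk = EOF marker
        qual = struct.pack(">ii", chunk_size, i)
        entries.append((row, "~chunk", qual, chunk))
    return entries
```

Keying on the content hash means identical files ingested under different paths share one set of chunks, with only an extra “refs” entry per name.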
diff --git a/1.5/examples/export.html b/1.5/examples/export.html
index 9c78ec5..228c793 100644
--- a/1.5/examples/export.html
+++ b/1.5/examples/export.html
@@ -156,7 +156,7 @@ the table.  A table must be offline to export it, and it should remain offline
 for the duration of the distcp.  An easy way to take a table offline without
 interrupting access to it is to clone it and take the clone offline.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15&gt; createtable table1
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15&gt; createtable table1
 root@test15 table1&gt; insert a cf1 cq1 v1
 root@test15 table1&gt; insert h cf1 cq1 v2
 root@test15 table1&gt; insert z cf1 cq1 v3
@@ -172,34 +172,31 @@ root@test15 table1&gt; clonetable table1 table1_exp
 root@test15 table1&gt; offline table1_exp
 root@test15 table1&gt; exporttable -t table1_exp /tmp/table1_export
 root@test15 table1&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After executing the export command, a few files are created in the hdfs dir.
 One of the files is a list of files to distcp as shown below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -ls /tmp/table1_export
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -ls /tmp/table1_export
 Found 2 items
 -rw-r--r--   3 user supergroup        162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
 -rw-r--r--   3 user supergroup        821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
 $ hadoop fs -cat /tmp/table1_export/distcp.txt
 hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
 hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Before the table can be imported, it must be copied using distcp.  After the
 distcp completes, the cloned table may be deleted.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
+</code></pre></div></div>
 
 <p>The Accumulo shell session below shows importing the table and inspecting it.
 The data, splits, config, and logical time information for the table were
 preserved.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15&gt; importtable table1_copy /tmp/table1_export_dest
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15&gt; importtable table1_copy /tmp/table1_export_dest
 root@test15&gt; table table1_copy
 root@test15 table1_copy&gt; scan
 a cf1:cq1 []    v1
@@ -224,8 +221,7 @@ root@test15 table1_copy&gt; scan -t !METADATA -b 5 -c srv:time
 5;b srv:time []    M1343224500467
 5;r srv:time []    M1343224500467
 5&lt; srv:time []    M1343224500467
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.5/examples/filedata.html b/1.5/examples/filedata.html
index 3d0f250..2bf5550 100644
--- a/1.5/examples/filedata.html
+++ b/1.5/examples/filedata.html
@@ -176,27 +176,23 @@ The example has the following classes:</p>
 
 <p>If you haven’t already run the README.dirlist example, ingest a file with FileDataIngest.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --chunk 1000 $ACCUMULO_HOME/README
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --chunk 1000 $ACCUMULO_HOME/README
+</code></pre></div></div>
 
 <p>Open the accumulo shell and look at the data.  The row is the MD5 hash of the file, which you can verify by running a command such as ‘md5sum’ on the file.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>&gt; scan -t dataTable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; scan -t dataTable
+</code></pre></div></div>
 
 <p>Run the CharacterHistogram MapReduce to add some information about the file.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --vis exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --vis exampleVis
+</code></pre></div></div>
 
 <p>Scan again to see the histogram stored in the ‘info’ column family.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>&gt; scan -t dataTable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; scan -t dataTable
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.5/examples/filter.html b/1.5/examples/filter.html
index 24ac428..cf1662f 100644
--- a/1.5/examples/filter.html
+++ b/1.5/examples/filter.html
@@ -158,7 +158,7 @@ Filter takes a “negate” parameter which defaults to false.  If set to true,
 return value of the accept method is negated, so that key/value pairs accepted
 by the method are omitted by the Filter.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable filtertest
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable filtertest
 username@instance filtertest&gt; setiter -t filtertest -scan -p 10 -n myfilter -ageoff
 AgeOffFilter removes entries with timestamps more than &lt;ttl&gt; milliseconds old
 ----------&gt; set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: 
@@ -169,15 +169,13 @@ username@instance filtertest&gt; insert foo a b c
 username@instance filtertest&gt; scan
 foo a:b []    c
 username@instance filtertest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>… wait 30 seconds …</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; scan
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; scan
 username@instance filtertest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Note the absence of the entry inserted more than 30 seconds ago.  Since the
 scope was set to “scan”, this means the entry is still in Accumulo, but is
@@ -195,7 +193,7 @@ AgeOffFilter, but any Filter can be configured by using the -class flag.  The
 following commands show how to enable the AgeOffFilter for the minc and majc
 scopes using the -class flag, then flush and compact the table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; setiter -t filtertest -minc -majc -p 10 -n myfilter -class org.apache.accumulo.core.iterators.user.AgeOffFilter
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; setiter -t filtertest -minc -majc -p 10 -n myfilter -class org.apache.accumulo.core.iterators.user.AgeOffFilter
 AgeOffFilter removes entries with timestamps more than &lt;ttl&gt; milliseconds old
 ----------&gt; set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: 
 ----------&gt; set AgeOffFilter parameter ttl, time to live (milliseconds): 30000
@@ -210,8 +208,7 @@ username@instance filtertest&gt; compact -t filtertest -w
 06 10:43:00,632 [shell.Shell] INFO : Compacting table ...
 06 10:43:01,307 [shell.Shell] INFO : Compaction of table filtertest completed for given range
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>By default, flush and compact execute in the background, but with the -w flag
 they will wait to return until the operation has completed.  Both are 
@@ -224,7 +221,7 @@ the old files.</p>
 
 <p>To see the iterator settings for a table, use config.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; config -t filtertest -f iterator
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; config -t filtertest -f iterator
 ---------+---------------------------------------------+---------------------------------------------------------------------------
 SCOPE    | NAME                                        | VALUE
 ---------+---------------------------------------------+---------------------------------------------------------------------------
@@ -242,8 +239,7 @@ table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.
 table    | table.iterator.scan.vers.opt.maxVersions .. | 1
 ---------+---------------------------------------------+---------------------------------------------------------------------------
 username@instance filtertest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>When setting new iterators, make sure to order their priority numbers 
 (specified with -p) in the order you would like the iterators to be applied.
diff --git a/1.5/examples/helloworld.html b/1.5/examples/helloworld.html
index 95031c1..035e2a4 100644
--- a/1.5/examples/helloworld.html
+++ b/1.5/examples/helloworld.html
@@ -157,40 +157,34 @@
 
 <p>Log into the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+</code></pre></div></div>
 
 <p>Create a table called ‘hellotable’:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable hellotable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable hellotable
+</code></pre></div></div>
 
 <p>Launch a Java program that inserts data with a BatchWriter:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithBatchWriter -i instance -z zookeepers -u username -p password -t hellotable 
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithBatchWriter -i instance -z zookeepers -u username -p password -t hellotable 
+</code></pre></div></div>
 
 <p>On the accumulo status page at the URL below (where ‘master’ is replaced with the name or IP of your accumulo master), you should see 50K entries.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>http://master:50095/
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>http://master:50095/
+</code></pre></div></div>
 
 <p>To view the entries, use the shell to scan the table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; table hellotable
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; table hellotable
 username@instance hellotable&gt; scan
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>You can also use a Java class to scan the table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.ReadData -i instance -z zookeepers -u username -p password -t hellotable --startKey row_0 --endKey row_1001
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.ReadData -i instance -z zookeepers -u username -p password -t hellotable --startKey row_0 --endKey row_1001
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.5/examples/isolation.html b/1.5/examples/isolation.html
index 1853eda..005fa74 100644
--- a/1.5/examples/isolation.html
+++ b/1.5/examples/isolation.html
@@ -162,7 +162,7 @@ reading the row at the same time a mutation is changing the row.</p>
 <p>Below, Interference Test is run without isolation enabled for 5000 iterations
 and it reports problems.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000
 ERROR Columns in row 053 had multiple values [53, 4553]
 ERROR Columns in row 061 had multiple values [561, 61]
 ERROR Columns in row 070 had multiple values [570, 1070]
@@ -171,16 +171,14 @@ ERROR Columns in row 088 had multiple values [2588, 1588]
 ERROR Columns in row 106 had multiple values [2606, 3106]
 ERROR Columns in row 115 had multiple values [4615, 3115]
 finished
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, Interference Test is run with isolation enabled for 5000 iterations and
 it reports no problems.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000 --isolated
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000 --isolated
 finished
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.5/examples/mapred.html b/1.5/examples/mapred.html
index 8e7fd8f..f9ae697 100644
--- a/1.5/examples/mapred.html
+++ b/1.5/examples/mapred.html
@@ -155,17 +155,16 @@ accumulo table with combiners.</p>
 <p>To run this example you will need a directory in HDFS containing text files.
 The accumulo readme will be used to show how to run this example.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
 $ hadoop fs -ls /user/username/wc
 Found 1 items
 -rw-r--r--   2 username supergroup       9359 2009-07-15 17:54 /user/username/wc/Accumulo.README
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The first part of running this example is to create a table with a combiner
 for the column family count.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.5.0
 - instance name: instance
@@ -181,12 +180,11 @@ SummingCombiner interprets Values as Longs and adds them together.  A variety of
 ----------&gt; set SummingCombiner parameter lossy, if true, failed decodes are ignored. Otherwise combiner will error on failed decodes (default false): &lt;TRUE|FALSE&gt;: false 
 ----------&gt; set SummingCombiner parameter type, &lt;VARLEN|FIXEDLEN|STRING|fullClassName&gt;: STRING
 username@instance wordCount&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After creating the table, run the word count map reduce job.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -p password
 
 11/02/07 18:20:11 INFO input.FileInputFormat: Total input paths to process : 1
 11/02/07 18:20:12 INFO mapred.JobClient: Running job: job_201102071740_0003
@@ -203,13 +201,12 @@ username@instance wordCount&gt; quit
 11/02/07 18:20:22 INFO mapred.JobClient:     Map input records=255
 11/02/07 18:20:22 INFO mapred.JobClient:     Spilled Records=0
 11/02/07 18:20:22 INFO mapred.JobClient:     Map output records=1452
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the map reduce job completes, query the accumulo table to see word
 counts.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 username@instance&gt; table wordCount
 username@instance wordCount&gt; scan -b the
 the count:20080906 []    75
@@ -227,8 +224,7 @@ total count:20080906 []    1
 tserver, count:20080906 []    1
 tserver.compaction.major.concurrent.max count:20080906 []    1
 ...
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Another example to look at is
 org.apache.accumulo.examples.simple.mapreduce.UniqueColumns.  This example
diff --git a/1.5/examples/maxmutation.html b/1.5/examples/maxmutation.html
index 134559e..f15330b 100644
--- a/1.5/examples/maxmutation.html
+++ b/1.5/examples/maxmutation.html
@@ -155,7 +155,7 @@ inadvertently create mutations so large, that they cause the tablet servers to
 run out of memory.  A simple constraint can be added to a table to reject very 
 large mutations.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 
 Shell - Apache Accumulo Interactive Shell
 - 
@@ -168,16 +168,14 @@ Shell - Apache Accumulo Interactive Shell
 username@instance&gt; createtable test_ingest
 username@instance test_ingest&gt; config -t test_ingest -s table.constraint.1=org.apache.accumulo.examples.simple.constraints.MaxMutationSize
 username@instance test_ingest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Now the table will reject any mutation that is larger than 1/256th of the 
 working memory of the tablet server.  The following command attempts to ingest 
 a single row with 10000 columns, which exceeds the memory limit:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.test.TestIngest -i instance -z zookeepers -u username -p password --rows 1 --cols 10000  ERROR : Constraint violates : ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.MaxMutationSize, violationCode:0, violationDescription:mutation exceeded maximum size of 188160, numberOfViolatingMutations:1)
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.test.TestIngest -i instance -z zookeepers -u username -p password --rows 1 --cols 10000  ERROR : Constraint violates : ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.MaxMutationSize, violationCode:0, violationDescription:mutation exceeded maximum size of 188160, numberOfViolatingMutations:1)
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.5/examples/regex.html b/1.5/examples/regex.html
index 6b17282..339c7c9 100644
--- a/1.5/examples/regex.html
+++ b/1.5/examples/regex.html
@@ -154,7 +154,7 @@ This is accomplished using a map-only mapreduce job and a scan-time iterator.</p
 <p>To run this example you will need some data in a table.  The following will
 put a trivial amount of data into accumulo using the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.5.0
 - instance name: instance
@@ -166,8 +166,7 @@ username@instance&gt; createtable input
 username@instance&gt; insert dogrow dogcf dogcq dogvalue
 username@instance&gt; insert catrow catcf catcq catvalue
 username@instance&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The RegexExample class sets an iterator on the scanner.  This does pattern matching
 against each key/value in accumulo, and only returns matching items.  It will do this
@@ -175,23 +174,21 @@ in parallel and will store the results in files in hdfs.</p>
 
 <p>The following will search for any rows in the input table that starts with “dog”:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RegexExample -u user -p passwd -i instance -t input --rowRegex 'dog.*' --output /tmp/output
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RegexExample -u user -p passwd -i instance -t input --rowRegex 'dog.*' --output /tmp/output
 
 $ hadoop fs -ls /tmp/output
 Found 3 items
 -rw-r--r--   1 username supergroup          0 2013-01-10 14:11 /tmp/output/_SUCCESS
 drwxr-xr-x   - username supergroup          0 2013-01-10 14:10 /tmp/output/_logs
 -rw-r--r--   1 username supergroup         51 2013-01-10 14:10 /tmp/output/part-m-00000
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>We can see the output of our little map-reduce job:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -text /tmp/output/output/part-m-00000
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -text /tmp/output/part-m-00000
 dogrow dogcf:dogcq [] 1357844987994 false    dogvalue
 $
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.5/examples/rowhash.html b/1.5/examples/rowhash.html
index f9ad984..b1146d0 100644
--- a/1.5/examples/rowhash.html
+++ b/1.5/examples/rowhash.html
@@ -154,7 +154,7 @@ writes back into that table.</p>
 <p>To run this example you will need some data in a table.  The following will
 put a trivial amount of data into accumulo using the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.5.0
 - instance name: instance
@@ -166,19 +166,17 @@ username@instance&gt; createtable input
 username@instance&gt; insert a-row cf cq value
 username@instance&gt; insert b-row cf cq value
 username@instance&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The RowHash class will insert a hash for each row in the database if it contains a 
 specified column.  Here’s how you run the map/reduce job:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq 
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq 
+</code></pre></div></div>
 
 <p>Now we can scan the table and see the hashes:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.5.0
 - instance name: instance
@@ -192,8 +190,7 @@ a-row cf-HASHTYPE:cq-MD5BASE64 []    IGPBYI1uC6+AJJxC4r5YBA==
 b-row cf:cq []    value
 b-row cf-HASHTYPE:cq-MD5BASE64 []    IGPBYI1uC6+AJJxC4r5YBA==
 username@instance&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.5/examples/shard.html b/1.5/examples/shard.html
index 8e53a84..726366e 100644
--- a/1.5/examples/shard.html
+++ b/1.5/examples/shard.html
@@ -160,21 +160,19 @@ document, or “sharded”. This example shows how to use the intersecting itera
 
 <p>To run these example programs, create two tables like below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable shard
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable shard
 username@instance shard&gt; createtable doc2term
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After creating the tables, index some files.  The following command indexes all of the java files in the Accumulo source code.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ cd /local/username/workspace/accumulo/
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd /local/username/workspace/accumulo/
 $ find core/src server/src -name "*.java" | xargs ./bin/accumulo org.apache.accumulo.examples.simple.shard.Index -i instance -z zookeepers -t shard -u username -p password --partitions 30
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The following command queries the index to find all files containing ‘foo’ and ‘bar’.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ cd $ACCUMULO_HOME
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd $ACCUMULO_HOME
 $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query -i instance -z zookeepers -t shard -u username -p password foo bar
 /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/security/ColumnVisibilityTest.java
 /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/client/mock/MockConnectorTest.java
@@ -187,19 +185,17 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query -i instance -z
 /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/iterators/WholeRowIteratorTest.java
 /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/util/DefaultMapTest.java
 /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/tabletserver/InMemoryMapTest.java
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>In order to run ContinuousQuery, we need to run Reverse.java to populate doc2term.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Reverse -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Reverse -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password
+</code></pre></div></div>
 
 <p>Below, ContinuousQuery is run using 5 terms.  It selects 5 random terms from each document, then repeatedly 
 selects one random set of 5 terms and queries with it.  It prints the number of matching documents and the query time in seconds.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.ContinuousQuery -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password --terms 5
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.ContinuousQuery -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password --terms 5
 [public, core, class, binarycomparable, b] 2  0.081
 [wordtodelete, unindexdocument, doctablename, putdelete, insert] 1  0.041
 [import, columnvisibilityinterpreterfactory, illegalstateexception, cv, columnvisibility] 1  0.049
@@ -207,8 +203,7 @@ randomly selects one set of 5 terms and queries.  It prints the number of matchi
 [for, static, println, public, the] 55  0.211
 [sleeptime, wrappingiterator, options, long, utilwaitthread] 1  0.057
 [string, public, long, 0, wait] 12  0.132
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.5/examples/tabletofile.html b/1.5/examples/tabletofile.html
index 99d6dc3..b98a909 100644
--- a/1.5/examples/tabletofile.html
+++ b/1.5/examples/tabletofile.html
@@ -153,7 +153,7 @@
 <p>To run this example you will need some data in a table.  The following will
 put a trivial amount of data into accumulo using the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.5.0
 - instance name: instance
@@ -166,15 +166,14 @@ username@instance&gt; insert dog cf cq dogvalue
 username@instance&gt; insert cat cf cq catvalue
 username@instance&gt; insert junk family qualifier junkvalue
 username@instance&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The TableToFile class configures a map-only job to read the specified columns and
 write the key/value pairs to a file in HDFS.</p>
 
 <p>The following will extract the rows containing the column “cf:cq”:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TableToFile -u user -p passwd -i instance -t input --columns cf:cq --output /tmp/output
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TableToFile -u user -p passwd -i instance -t input --columns cf:cq --output /tmp/output
 
 $ hadoop fs -ls /tmp/output
 -rw-r--r--   1 username supergroup          0 2013-01-10 14:44 /tmp/output/_SUCCESS
@@ -183,17 +182,15 @@ drwxr-xr-x   - username supergroup          0 2013-01-10 14:44 /tmp/output/_logs
 -rw-r--r--   1 username supergroup       9049 2013-01-10 14:44 /tmp/output/_logs/history/job_201301081658_0011_1357847072863_username_TableToFile%5F1357847071434
 -rw-r--r--   1 username supergroup      26172 2013-01-10 14:44 /tmp/output/_logs/history/job_201301081658_0011_conf.xml
 -rw-r--r--   1 username supergroup         50 2013-01-10 14:44 /tmp/output/part-m-00000
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>We can see the output of our little map-reduce job:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -text /tmp/output/output/part-m-00000
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -text /tmp/output/part-m-00000
 catrow cf:cq []    catvalue
 dogrow cf:cq []    dogvalue
 $
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.5/examples/terasort.html b/1.5/examples/terasort.html
index dab0164..84dadbe 100644
--- a/1.5/examples/terasort.html
+++ b/1.5/examples/terasort.html
@@ -154,7 +154,7 @@ hadoop terasort benchmark.</p>
 
 <p>To run this example you run it with arguments describing the amount of data:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TeraSortIngest \
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TeraSortIngest \
 -i instance -z zookeepers -u user -p password \
 --count 10 \
 --minKeySize 10 \
@@ -163,12 +163,11 @@ hadoop terasort benchmark.</p>
 --maxValueSize 78 \
 --table sort \
 --splits 10
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the map reduce job completes, scan the data:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 username@instance&gt; scan -t sort 
 +l-$$OE/ZH c:         4 []    GGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOO
 ,C)wDw//u= c:        10 []    CCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKK
@@ -180,8 +179,7 @@ M^*dDE;6^&lt; c:         9 []    UUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGG
 le5awB.$sm c:         6 []    WWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEE
 q__[fwhKFg c:         7 []    EEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMM
 w[o||:N&amp;H, c:         2 []    QQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYY
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Of course, a real benchmark would ingest millions of entries.</p>
 
diff --git a/1.5/examples/visibility.html b/1.5/examples/visibility.html
index 8665916..e199fbc 100644
--- a/1.5/examples/visibility.html
+++ b/1.5/examples/visibility.html
@@ -150,7 +150,7 @@
           
           <h2 id="creating-a-new-user">Creating a new user</h2>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance&gt; createuser username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@instance&gt; createuser username
 Enter new password for 'username': ********
 Please confirm new password for 'username': ********
 root@instance&gt; user username
@@ -162,14 +162,13 @@ System permissions:
 
 Table permissions (!METADATA): Table.READ
 username@instance&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>A user does not by default have permission to create a table.</p>
 
 <h2 id="granting-permissions-to-a-user">Granting permissions to a user</h2>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; user root
 Enter password for user root: ********
 root@instance&gt; grant -s System.CREATE_TABLE -u username
 root@instance&gt; user username 
@@ -181,8 +180,7 @@ System permissions: System.CREATE_TABLE
 Table permissions (!METADATA): Table.READ
 Table permissions (vistest): Table.READ, Table.WRITE, Table.BULK_IMPORT, Table.ALTER_TABLE, Table.GRANT, Table.DROP_TABLE
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="inserting-data-with-visibilities">Inserting data with visibilities</h2>
 
@@ -191,7 +189,7 @@ tokens.  Authorization tokens are arbitrary strings taken from a restricted
 ASCII character set.  Parentheses are required to specify order of operations 
 in visibilities.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; insert row f1 q1 v1 -l A
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; insert row f1 q1 v1 -l A
 username@instance vistest&gt; insert row f2 q2 v2 -l A&amp;B
 username@instance vistest&gt; insert row f3 q3 v3 -l apple&amp;carrot|broccoli|spinach
 06 11:19:01,432 [shell.Shell] ERROR: org.apache.accumulo.core.util.BadArgumentException: cannot mix | and &amp; near index 12
@@ -199,8 +197,7 @@ apple&amp;carrot|broccoli|spinach
             ^
 username@instance vistest&gt; insert row f3 q3 v3 -l (apple&amp;carrot)|broccoli|spinach
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="scanning-with-authorizations">Scanning with authorizations</h2>
 
@@ -209,25 +206,23 @@ authorizations and each Accumulo scan has authorizations.  Scan authorizations
 are only allowed to be a subset of the user’s authorizations.  By default, a 
 user’s authorizations set is empty.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; scan
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; scan
 username@instance vistest&gt; scan -s A
 06 11:43:14,951 [shell.Shell] ERROR: java.lang.RuntimeException: org.apache.accumulo.core.client.AccumuloSecurityException: Error BAD_AUTHORIZATIONS - The user does not have the specified authorizations assigned
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="setting-authorizations-for-a-user">Setting authorizations for a user</h2>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; setauths -s A
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; setauths -s A
 06 11:53:42,056 [shell.Shell] ERROR: org.apache.accumulo.core.client.AccumuloSecurityException: Error PERMISSION_DENIED - User does not have permission to perform this action
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>A user cannot set authorizations unless the user has the System.ALTER_USER permission.
 The root user has this permission.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; user root
 Enter password for user root: ********
 root@instance vistest&gt; setauths -s A -u username
 root@instance vistest&gt; user username
@@ -237,12 +232,11 @@ row f1:q1 [A]    v1
 username@instance vistest&gt; scan
 row f1:q1 [A]    v1
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The default authorizations for a scan are the user’s entire set of authorizations.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; user root
 Enter password for user root: ********
 root@instance vistest&gt; setauths -s A,B,broccoli -u username
 root@instance vistest&gt; user username
@@ -253,13 +247,12 @@ row f2:q2 [A&amp;B]    v2
 row f3:q3 [(apple&amp;carrot)|broccoli|spinach]    v3
 username@instance vistest&gt; scan -s B
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>If you want, you can restrict a user to inserting only data that they themselves can read.
 This can be enforced with the following constraint.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; user root
 Enter password for user root: ******
 root@instance vistest&gt; config -t vistest -s table.constraint.1=org.apache.accumulo.core.security.VisibilityConstraint    
 root@instance vistest&gt; user username
@@ -274,8 +267,7 @@ row f2:q2 [A&amp;B]    v2
 row f3:q3 [(apple&amp;carrot)|broccoli|spinach]    v3
 row f4:q4 [spinach|broccoli]    v4
 username@instance vistest&gt; 
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.6/examples/batch.html b/1.6/examples/batch.html
index b14a99b..01b9e3e 100644
--- a/1.6/examples/batch.html
+++ b/1.6/examples/batch.html
@@ -169,13 +169,12 @@ list of zookeeper nodes (given as zookeepers here).</p>
 <p>Before you run this, you must ensure that the user you are running as has the
 “exampleVis” authorization. (you can set this in the shell with “setauths -u username -s exampleVis”)</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e "setauths -u username -s exampleVis"
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e "setauths -u username -s exampleVis"
+</code></pre></div></div>
 
 <p>You must also create the table, batchtest1, ahead of time. (In the shell, use “createtable batchtest1”)</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -e "createtable batchtest1"
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -e "createtable batchtest1"
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.SequentialBatchWriter -i instance -z zookeepers -u username -p password -t batchtest1 --start 0 --num 10000 --size 50 --batchMemory 20M --batchLatency 500 --batchThreads 20 --vis exampleVis
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -i instance -z zookeepers -u username -p password -t batchtest1 --num 100 --min 0 --max 10000 --size 50 --scanThreads 20 --auths exampleVis
 07 11:33:11,103 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries...
@@ -189,8 +188,7 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -
 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : 2173.91 lookups/sec   0.05 secs
 
 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : num results : 100
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.6/examples/bloom.html b/1.6/examples/bloom.html
index fbf4a49..90748b4 100644
--- a/1.6/examples/bloom.html
+++ b/1.6/examples/bloom.html
@@ -154,7 +154,7 @@ do not exist in a table.</p>
 
 <p>Below, a table named bloom_test is created and bloom filters are enabled.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.6.0
 - instance name: instance
@@ -166,43 +166,39 @@ username@instance&gt; setauths -u username -s exampleVis
 username@instance&gt; createtable bloom_test
 username@instance bloom_test&gt; config -t bloom_test -s table.bloom.enabled=true
 username@instance bloom_test&gt; exit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, 1 million random values are inserted into accumulo. The randomly
 generated rows range between 0 and 1 billion. The random number generator is
 initialized with the seed 7.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --vis exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --vis exampleVis
+</code></pre></div></div>
 
 <p>Below the table is flushed:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test -w'
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test -w'
 05 10:40:06,069 [shell.Shell] INFO : Flush of table bloom_test completed.
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the flush completes, 500 random queries are done against the table. The
 same seed is used to generate the queries, therefore everything is found in the
 table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --batchThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --batchThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 96.19 lookups/sec   5.20 secs
 num results : 500
 Generating 500 random queries...finished
 102.35 lookups/sec   4.89 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below another 500 queries are performed, using a different seed which results
 in nothing being found. In this case the lookups are much faster because of
 the bloom filters.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 8 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --batchThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 8 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --batchThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 2212.39 lookups/sec   0.23 secs
 num results : 0
@@ -211,8 +207,7 @@ Generating 500 random queries...finished
 4464.29 lookups/sec   0.11 secs
 num results : 0
 Did not find 500 rows
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <hr />
 
@@ -244,7 +239,7 @@ million inserts. If not, then more map files will be created.</p>
 
 <p>The commands for creating the first table without bloom filters are below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.6.0
 - instance name: instance
@@ -264,12 +259,11 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The commands for creating the second table with bloom filters are below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.6.0
 - instance name: instance
@@ -290,65 +284,60 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, 500 lookups are done against the table without bloom filters using
 RNG seed 7. Even though only one map file will likely contain entries for this
 seed, all map files will be interrogated.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test1 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test1 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 35.09 lookups/sec  14.25 secs
 num results : 500
 Generating 500 random queries...finished
 35.33 lookups/sec  14.15 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below the same lookups are done against the table with bloom filters. The
 lookups were 2.86 times faster because only one map file was used, even though three
 map files existed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test2 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test2 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 99.03 lookups/sec   5.05 secs
 num results : 500
 Generating 500 random queries...finished
 101.15 lookups/sec   4.94 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>You can verify the table has three files by looking in HDFS. To look in HDFS
 you will need the table ID, because HDFS paths use the table ID instead of the table
 name. The following command shows the table IDs.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'tables -l'
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'tables -l'
 accumulo.metadata    =&gt;        !0
 accumulo.root        =&gt;        +r
 bloom_test1          =&gt;        o7
 bloom_test2          =&gt;        o8
 trace                =&gt;         1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>So the table ID for bloom_test2 is o8. The command below shows what files this
 table has in HDFS. This assumes Accumulo is at the default location in HDFS.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -lsr /accumulo/tables/o8
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -lsr /accumulo/tables/o8
 drwxr-xr-x   - username supergroup          0 2012-01-10 14:02 /accumulo/tables/o8/default_tablet
 -rw-r--r--   3 username supergroup   52672650 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dj.rf
 -rw-r--r--   3 username supergroup   52436176 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dk.rf
 -rw-r--r--   3 username supergroup   52850173 2012-01-10 14:02 /accumulo/tables/o8/default_tablet/F00000dl.rf
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Running the rfile-info command shows that one of the files has a bloom filter
 and that it is 1.5MB.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo rfile-info /accumulo/tables/o8/default_tablet/F00000dj.rf
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo rfile-info /accumulo/tables/o8/default_tablet/F00000dj.rf
 Locality group         : &lt;DEFAULT&gt;
   Start block          : 0
   Num   blocks         : 752
@@ -372,8 +361,7 @@ Meta block     : acu_bloom
   Raw size             : 1,540,292 bytes
   Compressed size      : 1,433,115 bytes
   Compression type     : gz
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.6/examples/bulkIngest.html b/1.6/examples/bulkIngest.html
index 348ca5f..e1c9c8b 100644
--- a/1.6/examples/bulkIngest.html
+++ b/1.6/examples/bulkIngest.html
@@ -155,14 +155,13 @@ table called test_bulk which has two initial split points. Then 1000 rows of
 test data are created in HDFS. After that the 1000 rows are ingested into
 Accumulo. Then we verify the 1000 rows are in Accumulo.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ PKG=org.apache.accumulo.examples.simple.mapreduce.bulk
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ PKG=org.apache.accumulo.examples.simple.mapreduce.bulk
 $ ARGS="-i instance -z zookeepers -u username -p password"
 $ ./bin/accumulo $PKG.SetupTable $ARGS -t test_bulk row_00000333 row_00000666
 $ ./bin/accumulo $PKG.GenerateTestData --start-row 0 --count 1000 --output bulk/test_1.txt
 $ ./bin/tool.sh lib/accumulo-examples-simple.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork
 $ ./bin/accumulo $PKG.VerifyIngest $ARGS -t test_bulk --start-row 0 --count 1000
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>For a high level discussion of bulk ingest, see the docs dir.</p>
 
diff --git a/1.6/examples/classpath.html b/1.6/examples/classpath.html
index 9cf6ed3..b80232a 100644
--- a/1.6/examples/classpath.html
+++ b/1.6/examples/classpath.html
@@ -155,59 +155,52 @@ table reference that jar.</p>
 
 <p>Execute the following command in the shell.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/test/src/test/resources/FooFilter.jar /user1/lib
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/test/src/test/resources/FooFilter.jar /user1/lib
+</code></pre></div></div>
 
 <p>Execute the following in the Accumulo shell to set up the classpath context:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test16&gt; config -s general.vfs.context.classpath.cx1=hdfs://&lt;namenode host&gt;:&lt;namenode port&gt;/user1/lib
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test16&gt; config -s general.vfs.context.classpath.cx1=hdfs://&lt;namenode host&gt;:&lt;namenode port&gt;/user1/lib
+</code></pre></div></div>
 
 <p>Create a table</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test16&gt; createtable nofoo
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test16&gt; createtable nofoo
+</code></pre></div></div>
 
 <p>The following command makes this table use the configured classpath context.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test16 nofoo&gt; config -t nofoo -s table.classpath.context=cx1
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test16 nofoo&gt; config -t nofoo -s table.classpath.context=cx1
+</code></pre></div></div>
 
 <p>The following command configures an iterator that is in FooFilter.jar.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test16 nofoo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test16 nofoo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
 Filter accepts or rejects each Key/Value pair
 ----------&gt; set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The commands below show the filter is working.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test16 nofoo&gt; insert foo1 f1 q1 v1
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test16 nofoo&gt; insert foo1 f1 q1 v1
 root@test16 nofoo&gt; insert noo1 f1 q1 v2
 root@test16 nofoo&gt; scan
 noo1 f1:q1 []    v2
 root@test15 nofoo&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, an attempt is made to add the FooFilter to a table that is not configured
 to use the classpath context cx1. This fails until the table is configured to
 use cx1.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test16 nofoo&gt; createtable nofootwo
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test16 nofoo&gt; createtable nofootwo
 root@test16 nofootwo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
 2013-05-03 12:49:35,943 [shell.Shell] ERROR: java.lang.IllegalArgumentException: org.apache.accumulo.test.FooFilter
 root@test16 nofootwo&gt; config -t nofootwo -s table.classpath.context=cx1
 root@test16 nofootwo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
 Filter accepts or rejects each Key/Value pair
 ----------&gt; set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.6/examples/client.html b/1.6/examples/client.html
index 1274b66..d0090ba 100644
--- a/1.6/examples/client.html
+++ b/1.6/examples/client.html
@@ -162,15 +162,14 @@
 class name, and enough arguments to find your accumulo instance. For example,
 the Flush class will flush a table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ PACKAGE=org.apache.accumulo.examples.simple.client
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ PACKAGE=org.apache.accumulo.examples.simple.client
 $ bin/accumulo $PACKAGE.Flush -u root -p mypassword -i instance -z zookeeper -t trace
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The very simple RowOperations class demonstrates how to read and write rows using the BatchWriter
 and Scanner:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.RowOperations -u root -p mypassword -i instance -z zookeeper
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.RowOperations -u root -p mypassword -i instance -z zookeeper
 2013-01-14 14:45:24,738 [client.RowOperations] INFO : This is everything
 2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:1 [] 1358192724640 false Value: This is the value for this key
 2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:2 [] 1358192724642 false Value: This is the value for this key
@@ -198,12 +197,11 @@ and Scanner:</p>
 2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:2 [] 1358192724642 false Value: This is the value for this key
 2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:3 [] 1358192724642 false Value: This is the value for this key
 2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:4 [] 1358192724642 false Value: This is the value for this key
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To create a table, write to it and read from it:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.ReadWriteExample -u root -p mypassword -i instance -z zookeeper --createtable --create --read
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.ReadWriteExample -u root -p mypassword -i instance -z zookeeper --createtable --create --read
 hello%00; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%01; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%02; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
@@ -214,8 +212,7 @@ hello%06; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%07; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%08; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%09; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.6/examples/combiner.html b/1.6/examples/combiner.html
index 3dad5c7..e19762c 100644
--- a/1.6/examples/combiner.html
+++ b/1.6/examples/combiner.html
@@ -158,7 +158,7 @@
 copy the produced jar into the accumulo lib dir. This is already done in the
 tar distribution.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo shell -u username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo shell -u username
 Enter current password for 'username'@'instance': ***
 
 Shell - Apache Accumulo Interactive Shell
@@ -196,8 +196,7 @@ username@instance runners&gt; scan
 123456 hstat:virtualMarathon []    6a,6b,d5,2
 123456 name:first []    Joe
 123456 stat:marathon []    220,240,690,3
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>In this example a table is created and the example stats combiner is applied to
 the column families stat and hstat. The stats combiner computes min, max, sum, and
diff --git a/1.6/examples/constraints.html b/1.6/examples/constraints.html
index cf279ae..56e72d2 100644
--- a/1.6/examples/constraints.html
+++ b/1.6/examples/constraints.html
@@ -161,7 +161,7 @@ numeric keys. The other constraint does not allow non numeric values. Two
 inserts that violate these constraints are attempted and denied. The scan at
 the end shows the inserts were not allowed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 
 Shell - Apache Accumulo Interactive Shell
 -
@@ -185,8 +185,7 @@ username@instance testConstraints&gt; insert r1! cf1 cq1 ABC
 username@instance testConstraints&gt; scan
 r1 cf1:cq1 []    1111
 username@instance testConstraints&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.6/examples/dirlist.html b/1.6/examples/dirlist.html
index 0440c33..5c19b76 100644
--- a/1.6/examples/dirlist.html
+++ b/1.6/examples/dirlist.html
@@ -167,9 +167,8 @@
 
 <p>To begin, ingest some data with Ingest.java.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Ingest -i instance -z zookeepers -u username -p password --vis exampleVis --chunkSize 100000 /local/username/workspace
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Ingest -i instance -z zookeepers -u username -p password --vis exampleVis --chunkSize 100000 /local/username/workspace
+</code></pre></div></div>
 
 <p>This may take some time if there are large files in the /local/username/workspace directory. If you use 0 instead of 100000 on the command line, the ingest will run much faster, but it will not put any file data into Accumulo (the dataTable will be empty).
 Note that running this example will create tables dirTable, indexTable, and dataTable in Accumulo that you should delete when you have completed the example.
@@ -177,46 +176,41 @@ If you modify a file or add new files in the directory ingested (e.g. /local/use
 
 <p>To browse the data ingested, use Viewer.java. Be sure to give the “username” user the authorizations to see the data (in this case, run</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e 'setauths -u username -s exampleVis'
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e 'setauths -u username -s exampleVis'
+</code></pre></div></div>
 
 <p>then run the Viewer:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Viewer -i instance -z zookeepers -u username -p password -t dirTable --dataTable dataTable --auths exampleVis --path /local/username/workspace
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Viewer -i instance -z zookeepers -u username -p password -t dirTable --dataTable dataTable --auths exampleVis --path /local/username/workspace
+</code></pre></div></div>
 
 <p>To list the contents of specific directories, use QueryUtil.java.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username/workspace
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To perform searches on file or directory names, also use QueryUtil.java. Search terms must contain no more than one wild card and cannot contain “/”.
 <em>Note</em> these queries run on the <em>indexTable</em> table instead of the dirTable table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path filename --search
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path filename --search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path 'filename*' --search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path '*jar' --search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path 'filename*jar' --search
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To count the number of direct children (directories and files) and descendants (children and children’s descendants, directories and files), run the FileCount over the dirTable table.
 The results are written back to the same table. FileCount reads from and writes to Accumulo. This requires scan authorizations for the read and a visibility for the data written.
 In this example, the authorizations and visibility are set to the same value, exampleVis. See README.visibility for more information on visibility and authorizations.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.FileCount -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.FileCount -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis
+</code></pre></div></div>
 
 <h2 id="directory-table">Directory Table</h2>
 
 <p>Here is an illustration of what data looks like in the directory table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]    value
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]    value
 000 dir:exec [exampleVis]    true
 000 dir:hidden [exampleVis]    false
 000 dir:lastmod [exampleVis]    1291996886000
@@ -230,8 +224,7 @@ In this example, the authorizations and visibility are set to the same value, ex
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:lastmod [exampleVis]    1308746481000
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:length [exampleVis]    9192
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:md5 [exampleVis]    274af6419a3c4c4a259260ac7017cbf1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The rows are of the form depth + path, where depth is the number of slashes (“/”) in the path padded to 3 digits. This is so that all the children of a directory appear as consecutive keys in Accumulo; without the depth, you would for example see all the subdirectories of /local before you saw /usr.
 For directories the column family is “dir”. For files the column family is Long.MAX_VALUE - lastModified in bytes rather than string format so that newer versions sort earlier.</p>
@@ -240,13 +233,12 @@ For directories the column family is “dir”. For files the column family is L
 
 <p>Here is an illustration of what data looks like in the index table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]
 fAccumulo.README i:002/local/Accumulo.README [exampleVis]
 flocal i:001/local [exampleVis]
 rEMDAER.olumuccA i:002/local/Accumulo.README [exampleVis]
 rlacol i:001/local [exampleVis]
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The values of the index table are null. The rows are of the form “f” + filename or “r” + reverse file name. This is to enable searches with wildcards at the beginning, middle, or end.</p>
 
@@ -254,13 +246,12 @@ rlacol i:001/local [exampleVis]
 
 <p>Here is an illustration of what data looks like in the data table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]    value
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]    value
 274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00filext [exampleVis]    README
 274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00name [exampleVis]    /local/Accumulo.README
 274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x00 [exampleVis]    *******************************************************************************\x0A1. Building\x0A\x0AIn the normal tarball release of accumulo, [truncated]
 274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x01 [exampleVis]
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The rows are the md5 hash of the file. Some column family : column qualifier pairs are “refs” : hash of file name + null byte + property name, in which case the value is property value. There can be multiple references to the same file which are distinguished by the hash of the file name.
 Other column family : column qualifier pairs are “~chunk” : chunk size in bytes + chunk number in bytes, in which case the value is the bytes for that chunk of the file. There is an end of file data marker whose chunk number is the number of chunks for the file and whose value is empty.</p>
diff --git a/1.6/examples/export.html b/1.6/examples/export.html
index 22dbb19..ddb9411 100644
--- a/1.6/examples/export.html
+++ b/1.6/examples/export.html
@@ -156,7 +156,7 @@ the table. A table must be offline to export it, and it should remain offline
 for the duration of the distcp. An easy way to take a table offline without
 interrupting access to it is to clone it and take the clone offline.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test16&gt; createtable table1
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test16&gt; createtable table1
 root@test16 table1&gt; insert a cf1 cq1 v1
 root@test16 table1&gt; insert h cf1 cq1 v2
 root@test16 table1&gt; insert z cf1 cq1 v3
@@ -172,34 +172,31 @@ root@test16 table1&gt; clonetable table1 table1_exp
 root@test16 table1&gt; offline table1_exp
 root@test16 table1&gt; exporttable -t table1_exp /tmp/table1_export
 root@test16 table1&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After executing the export command, a few files are created in the HDFS directory.
 One of the files is a list of files to distcp, as shown below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -ls /tmp/table1_export
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -ls /tmp/table1_export
 Found 2 items
 -rw-r--r--   3 user supergroup        162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
 -rw-r--r--   3 user supergroup        821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
 $ hadoop fs -cat /tmp/table1_export/distcp.txt
 hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
 hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Before the table can be imported, it must be copied using distcp. After the
 distcp completes, the cloned table may be deleted.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
+</code></pre></div></div>
 
 <p>The Accumulo shell session below shows importing the table and inspecting it.
 The data, splits, config, and logical time information for the table were
 preserved.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test16&gt; importtable table1_copy /tmp/table1_export_dest
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test16&gt; importtable table1_copy /tmp/table1_export_dest
 root@test16&gt; table table1_copy
 root@test16 table1_copy&gt; scan
 a cf1:cq1 []    v1
@@ -225,8 +222,7 @@ root@test16 table1_copy&gt; scan -t accumulo.metadata -b 5 -c srv:time
 5;b srv:time []    M1343224500467
 5;r srv:time []    M1343224500467
 5&lt; srv:time []    M1343224500467
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.6/examples/filedata.html b/1.6/examples/filedata.html
index 9666abd..305af24 100644
--- a/1.6/examples/filedata.html
+++ b/1.6/examples/filedata.html
@@ -176,27 +176,23 @@ The example has the following classes:</p>
 
 <p>If you haven’t already run the README.dirlist example, ingest a file with FileDataIngest.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --chunk 1000 $ACCUMULO_HOME/README
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --chunk 1000 $ACCUMULO_HOME/README
+</code></pre></div></div>
 
 <p>Open the Accumulo shell and look at the data. The row is the MD5 hash of the file, which you can verify by running a command such as ‘md5sum’ on the file.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>&gt; scan -t dataTable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; scan -t dataTable
+</code></pre></div></div>
 
 <p>Run the CharacterHistogram MapReduce to add some information about the file.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --vis exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --vis exampleVis
+</code></pre></div></div>
 
 <p>Scan again to see the histogram stored in the ‘info’ column family.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>&gt; scan -t dataTable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; scan -t dataTable
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.6/examples/filter.html b/1.6/examples/filter.html
index a5f57b3..9d4c629 100644
--- a/1.6/examples/filter.html
+++ b/1.6/examples/filter.html
@@ -158,7 +158,7 @@ Filter takes a “negate” parameter which defaults to false. If set to true, t
 return value of the accept method is negated, so that key/value pairs accepted
 by the method are omitted by the Filter.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable filtertest
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable filtertest
 username@instance filtertest&gt; setiter -t filtertest -scan -p 10 -n myfilter -ageoff
 AgeOffFilter removes entries with timestamps more than &lt;ttl&gt; milliseconds old
 ----------&gt; set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method:
@@ -169,15 +169,13 @@ username@instance filtertest&gt; insert foo a b c
 username@instance filtertest&gt; scan
 foo a:b []    c
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>… wait 30 seconds …</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; scan
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; scan
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Note the absence of the entry inserted more than 30 seconds ago. Since the
 scope was set to “scan”, this means the entry is still in Accumulo, but is
@@ -195,7 +193,7 @@ AgeOffFilter, but any Filter can be configured by using the -class flag. The
 following commands show how to enable the AgeOffFilter for the minc and majc
 scopes using the -class flag, then flush and compact the table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; setiter -t filtertest -minc -majc -p 10 -n myfilter -class org.apache.accumulo.core.iterators.user.AgeOffFilter
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; setiter -t filtertest -minc -majc -p 10 -n myfilter -class org.apache.accumulo.core.iterators.user.AgeOffFilter
 AgeOffFilter removes entries with timestamps more than &lt;ttl&gt; milliseconds old
 ----------&gt; set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method:
 ----------&gt; set AgeOffFilter parameter ttl, time to live (milliseconds): 30000
@@ -210,8 +208,7 @@ username@instance filtertest&gt; compact -t filtertest -w
 06 10:43:00,632 [shell.Shell] INFO : Compacting table ...
 06 10:43:01,307 [shell.Shell] INFO : Compaction of table filtertest completed for given range
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>By default, flush and compact execute in the background, but with the -w flag
 they will wait to return until the operation has completed. Both are
@@ -224,7 +221,7 @@ the old files.</p>
 
 <p>To see the iterator settings for a table, use config.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; config -t filtertest -f iterator
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; config -t filtertest -f iterator
 ---------+---------------------------------------------+---------------------------------------------------------------------------
 SCOPE    | NAME                                        | VALUE
 ---------+---------------------------------------------+---------------------------------------------------------------------------
@@ -242,8 +239,7 @@ table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.
 table    | table.iterator.scan.vers.opt.maxVersions .. | 1
 ---------+---------------------------------------------+---------------------------------------------------------------------------
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>When setting new iterators, make sure to order their priority numbers
 (specified with -p) in the order you would like the iterators to be applied.
diff --git a/1.6/examples/helloworld.html b/1.6/examples/helloworld.html
index 1e235b3..b5a97d3 100644
--- a/1.6/examples/helloworld.html
+++ b/1.6/examples/helloworld.html
@@ -157,40 +157,34 @@
 
 <p>Log into the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+</code></pre></div></div>
 
 <p>Create a table called ‘hellotable’:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable hellotable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable hellotable
+</code></pre></div></div>
 
 <p>Launch a Java program that inserts data with a BatchWriter:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithBatchWriter -i instance -z zookeepers -u username -p password -t hellotable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithBatchWriter -i instance -z zookeepers -u username -p password -t hellotable
+</code></pre></div></div>
 
 <p>On the accumulo status page at the URL below (where ‘master’ is replaced with the name or IP of your accumulo master), you should see 50K entries</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>http://master:50095/
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>http://master:50095/
+</code></pre></div></div>
 
 <p>To view the entries, use the shell to scan the table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; table hellotable
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; table hellotable
 username@instance hellotable&gt; scan
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>You can also use a Java class to scan the table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.ReadData -i instance -z zookeepers -u username -p password -t hellotable --startKey row_0 --endKey row_1001
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.ReadData -i instance -z zookeepers -u username -p password -t hellotable --startKey row_0 --endKey row_1001
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.6/examples/isolation.html b/1.6/examples/isolation.html
index c730057..b7cfb1c 100644
--- a/1.6/examples/isolation.html
+++ b/1.6/examples/isolation.html
@@ -162,7 +162,7 @@ reading the row at the same time a mutation is changing the row.</p>
 <p>Below, Interference Test is run without isolation enabled for 5000 iterations
 and it reports problems.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000
 ERROR Columns in row 053 had multiple values [53, 4553]
 ERROR Columns in row 061 had multiple values [561, 61]
 ERROR Columns in row 070 had multiple values [570, 1070]
@@ -171,16 +171,14 @@ ERROR Columns in row 088 had multiple values [2588, 1588]
 ERROR Columns in row 106 had multiple values [2606, 3106]
 ERROR Columns in row 115 had multiple values [4615, 3115]
 finished
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, Interference Test is run with isolation enabled for 5000 iterations and
 it reports no problems.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000 --isolated
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000 --isolated
 finished
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.6/examples/mapred.html b/1.6/examples/mapred.html
index b0bf38c..a1f4617 100644
--- a/1.6/examples/mapred.html
+++ b/1.6/examples/mapred.html
@@ -155,17 +155,16 @@ accumulo table with combiners.</p>
 <p>To run this example you will need a directory in HDFS containing text files.
 The accumulo readme will be used to show how to run this example.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
 $ hadoop fs -ls /user/username/wc
 Found 1 items
 -rw-r--r--   2 username supergroup       9359 2009-07-15 17:54 /user/username/wc/Accumulo.README
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The first part of running this example is to create a table with a combiner
 for the column family count.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.6.0
 - instance name: instance
@@ -181,12 +180,11 @@ SummingCombiner interprets Values as Longs and adds them together. A variety of
 ----------&gt; set SummingCombiner parameter lossy, if true, failed decodes are ignored. Otherwise combiner will error on failed decodes (default false): &lt;TRUE|FALSE&gt;: false
 ----------&gt; set SummingCombiner parameter type, &lt;VARLEN|FIXEDLEN|STRING|fullClassName&gt;: STRING
 username@instance wordCount&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After creating the table, run the word count map reduce job.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -p password
 
 11/02/07 18:20:11 INFO input.FileInputFormat: Total input paths to process : 1
 11/02/07 18:20:12 INFO mapred.JobClient: Running job: job_201102071740_0003
@@ -203,13 +201,12 @@ username@instance wordCount&gt; quit
 11/02/07 18:20:22 INFO mapred.JobClient:     Map input records=255
 11/02/07 18:20:22 INFO mapred.JobClient:     Spilled Records=0
 11/02/07 18:20:22 INFO mapred.JobClient:     Map output records=1452
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the map reduce job completes, query the accumulo table to see word
 counts.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 username@instance&gt; table wordCount
 username@instance wordCount&gt; scan -b the
 the count:20080906 []    75
@@ -227,8 +224,7 @@ total count:20080906 []    1
 tserver, count:20080906 []    1
 tserver.compaction.major.concurrent.max count:20080906 []    1
 ...
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Another example to look at is
 org.apache.accumulo.examples.simple.mapreduce.UniqueColumns. This example
@@ -244,9 +240,8 @@ displayed in the job’s configuration which is world-readable).</p>
 
 <p>To create a token file, use the create-token utility</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo create-token
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo create-token
+</code></pre></div></div>
 
 <p>It defaults to creating a PasswordToken, but you can specify the token class
 with -tc (requires the fully qualified class name). Based on the token class,
@@ -259,9 +254,8 @@ a file, but only the first one for each user will be recognized.</p>
 <p>Rather than waiting for the prompts, you can specify some options when calling
 create-token, for example</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo create-token -u root -p secret -f root.pw
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo create-token -u root -p secret -f root.pw
+</code></pre></div></div>
 
 <p>would create a token file containing a PasswordToken for
 user ‘root’ with password ‘secret’ and saved to ‘root.pw’</p>
@@ -269,9 +263,8 @@ user ‘root’ with password ‘secret’ and saved to ‘root.pw’</p>
 <p>This local file needs to be uploaded to hdfs to be used with the
 map-reduce job. For example, if the file were ‘root.pw’ in the local directory:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -put root.pw root.pw
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -put root.pw root.pw
+</code></pre></div></div>
 
 <p>This would put ‘root.pw’ in the user’s home directory in hdfs.</p>
 
@@ -280,18 +273,16 @@ map-reduce job. For example, if the file were ‘root.pw’ in the local directo
 the basic WordCount example by calling the same command as explained above
 except replacing the password with the token file (rather than -p, use -tf).</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -tf tokenfile
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -tf tokenfile
+</code></pre></div></div>
 
 <p>In the above examples, username was ‘root’ and tokenfile was ‘root.pw’</p>
 
 <p>However, if you don’t want to use the Opts class to parse arguments,
 the TokenFileWordCount is an example of using the token file manually.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TokenFileWordCount instance zookeepers username tokenfile /user/username/wc wordCount
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TokenFileWordCount instance zookeepers username tokenfile /user/username/wc wordCount
+</code></pre></div></div>
 
 <p>The results should be the same as the WordCount example except that the
 authentication token was not stored in the configuration. It was instead
diff --git a/1.6/examples/maxmutation.html b/1.6/examples/maxmutation.html
index c685409..5722ca1 100644
--- a/1.6/examples/maxmutation.html
+++ b/1.6/examples/maxmutation.html
@@ -155,7 +155,7 @@ inadvertently create mutations so large, that they cause the tablet servers to
 run out of memory. A simple constraint can be added to a table to reject very
 large mutations.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 
 Shell - Apache Accumulo Interactive Shell
 -
@@ -168,17 +168,15 @@ Shell - Apache Accumulo Interactive Shell
 username@instance&gt; createtable test_ingest
 username@instance test_ingest&gt; config -t test_ingest -s table.constraint.1=org.apache.accumulo.examples.simple.constraints.MaxMutationSize
 username@instance test_ingest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Now the table will reject any mutation that is larger than 1/256th of the
 working memory of the tablet server. The following command attempts to ingest
 a single row with 10000 columns, which exceeds the memory limit:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.test.TestIngest -i instance -z zookeepers -u username -p password --rows 1 --cols 10000
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.test.TestIngest -i instance -z zookeepers -u username -p password --rows 1 --cols 10000
 ERROR : Constraint violates : ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.MaxMutationSize, violationCode:0, violationDescription:mutation exceeded maximum size of 188160, numberOfViolatingMutations:1)
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.6/examples/regex.html b/1.6/examples/regex.html
index 51b4961..525bb85 100644
--- a/1.6/examples/regex.html
+++ b/1.6/examples/regex.html
@@ -154,7 +154,7 @@ This is accomplished using a map-only mapreduce job and a scan-time iterator.</p
 <p>To run this example you will need some data in a table. The following will
 put a trivial amount of data into accumulo using the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.6.0
 - instance name: instance
@@ -166,8 +166,7 @@ username@instance&gt; createtable input
 username@instance&gt; insert dogrow dogcf dogcq dogvalue
 username@instance&gt; insert catrow catcf catcq catvalue
 username@instance&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The RegexExample class sets an iterator on the scanner. This does pattern matching
 against each key/value in accumulo, and only returns matching items. It will do this
@@ -175,22 +174,20 @@ in parallel and will store the results in files in hdfs.</p>
 
 <p>The following will search for any rows in the input table that starts with “dog”:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RegexExample -u user -p passwd -i instance -t input --rowRegex 'dog.*' --output /tmp/output
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RegexExample -u user -p passwd -i instance -t input --rowRegex 'dog.*' --output /tmp/output
 
 $ hadoop fs -ls /tmp/output
 Found 3 items
 -rw-r--r--   1 username supergroup          0 2013-01-10 14:11 /tmp/output/_SUCCESS
 drwxr-xr-x   - username supergroup          0 2013-01-10 14:10 /tmp/output/_logs
 -rw-r--r--   1 username supergroup         51 2013-01-10 14:10 /tmp/output/part-m-00000
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>We can see the output of our little map-reduce job:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -text /tmp/output/output/part-m-00000
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -text /tmp/output/output/part-m-00000
 dogrow dogcf:dogcq [] 1357844987994 false    dogvalue
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.6/examples/reservations.html b/1.6/examples/reservations.html
index f59947a..198ec66 100644
--- a/1.6/examples/reservations.html
+++ b/1.6/examples/reservations.html
@@ -157,7 +157,7 @@ and trent to reserve room06 on 20140101. Bob ends up getting the reservation
 and everyone else is put on a wait list. The example code will take any string
 for what, when and who.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.reservations.ARS
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.reservations.ARS
 &gt;connect test16 localhost root secret ars
   connected
 &gt;
@@ -180,20 +180,18 @@ for what, when and who.</p>
   Reservation holder : mallory
   Wait list : [trent, eve]
 &gt;quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Scanning the table in the Accumulo shell after running the example shows the
 following:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test16&gt; table ars
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test16&gt; table ars
 root@test16 ars&gt; scan
 room06:20140101 res:0001 []    mallory
 room06:20140101 res:0003 []    trent
 room06:20140101 res:0004 []    eve
 room06:20140101 tx:seq []    6
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The tx:seq column is incremented for each update to the row allowing for
 detection of concurrent changes. For an update to go through, the sequence
diff --git a/1.6/examples/rowhash.html b/1.6/examples/rowhash.html
index d05cfc5..d6f11bd 100644
--- a/1.6/examples/rowhash.html
+++ b/1.6/examples/rowhash.html
@@ -154,7 +154,7 @@ writes back into that table.</p>
 <p>To run this example you will need some data in a table. The following will
 put a trivial amount of data into accumulo using the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.6.0
 - instance name: instance
@@ -166,19 +166,17 @@ username@instance&gt; createtable input
 username@instance&gt; insert a-row cf cq value
 username@instance&gt; insert b-row cf cq value
 username@instance&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The RowHash class will insert a hash for each row in the database if it contains a
 specified column. Here’s how you run the map/reduce job:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq
+</code></pre></div></div>
 
 <p>Now we can scan the table and see the hashes:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.6.0
 - instance name: instance
@@ -192,8 +190,7 @@ a-row cf-HASHTYPE:cq-MD5BASE64 []    IGPBYI1uC6+AJJxC4r5YBA==
 b-row cf:cq []    value
 b-row cf-HASHTYPE:cq-MD5BASE64 []    IGPBYI1uC6+AJJxC4r5YBA==
 username@instance&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.6/examples/shard.html b/1.6/examples/shard.html
index 0262fd5..54818d8 100644
--- a/1.6/examples/shard.html
+++ b/1.6/examples/shard.html
@@ -160,21 +160,19 @@ document, or “sharded”. This example shows how to use the intersecting itera
 
 <p>To run these example programs, create two tables like below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable shard
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable shard
 username@instance shard&gt; createtable doc2term
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After creating the tables, index some files. The following command indexes all of the java files in the Accumulo source code.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ cd /local/username/workspace/accumulo/
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd /local/username/workspace/accumulo/
 $ find core/src server/src -name "*.java" | xargs ./bin/accumulo org.apache.accumulo.examples.simple.shard.Index -i instance -z zookeepers -t shard -u username -p password --partitions 30
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The following command queries the index to find all files containing ‘foo’ and ‘bar’.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ cd $ACCUMULO_HOME
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd $ACCUMULO_HOME
 $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query -i instance -z zookeepers -t shard -u username -p password foo bar
 /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/security/ColumnVisibilityTest.java
 /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/client/mock/MockConnectorTest.java
@@ -187,19 +185,17 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query -i instance -z
 /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/iterators/WholeRowIteratorTest.java
 /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/util/DefaultMapTest.java
 /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/tabletserver/InMemoryMapTest.java
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>In order to run ContinuousQuery, we need to run Reverse.java to populate doc2term.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Reverse -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Reverse -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password
+</code></pre></div></div>
 
 <p>Below ContinuousQuery is run using 5 terms. So it selects 5 random terms from each document, then it continually
 randomly selects one set of 5 terms and queries. It prints the number of matching documents and the time in seconds.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.ContinuousQuery -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password --terms 5
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.ContinuousQuery -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password --terms 5
 [public, core, class, binarycomparable, b] 2  0.081
 [wordtodelete, unindexdocument, doctablename, putdelete, insert] 1  0.041
 [import, columnvisibilityinterpreterfactory, illegalstateexception, cv, columnvisibility] 1  0.049
@@ -207,8 +203,7 @@ randomly selects one set of 5 terms and queries. It prints the number of matchin
 [for, static, println, public, the] 55  0.211
 [sleeptime, wrappingiterator, options, long, utilwaitthread] 1  0.057
 [string, public, long, 0, wait] 12  0.132
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.6/examples/tabletofile.html b/1.6/examples/tabletofile.html
index aac7581..0921efb 100644
--- a/1.6/examples/tabletofile.html
+++ b/1.6/examples/tabletofile.html
@@ -153,7 +153,7 @@
 <p>To run this example you will need some data in a table. The following will
 put a trivial amount of data into accumulo using the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.6.0
 - instance name: instance
@@ -166,15 +166,14 @@ username@instance&gt; insert dog cf cq dogvalue
 username@instance&gt; insert cat cf cq catvalue
 username@instance&gt; insert junk family qualifier junkvalue
 username@instance&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The TableToFile class configures a map-only job to read the specified columns and
 write the key/value pairs to a file in HDFS.</p>
 
 <p>The following will extract the rows containing the column “cf:cq”:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TableToFile -u user -p passwd -i instance -t input --columns cf:cq --output /tmp/output
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TableToFile -u user -p passwd -i instance -t input --columns cf:cq --output /tmp/output
 
 $ hadoop fs -ls /tmp/output
 -rw-r--r--   1 username supergroup          0 2013-01-10 14:44 /tmp/output/_SUCCESS
@@ -183,17 +182,15 @@ drwxr-xr-x   - username supergroup          0 2013-01-10 14:44 /tmp/output/_logs
 -rw-r--r--   1 username supergroup       9049 2013-01-10 14:44 /tmp/output/_logs/history/job_201301081658_0011_1357847072863_username_TableToFile%5F1357847071434
 -rw-r--r--   1 username supergroup      26172 2013-01-10 14:44 /tmp/output/_logs/history/job_201301081658_0011_conf.xml
 -rw-r--r--   1 username supergroup         50 2013-01-10 14:44 /tmp/output/part-m-00000
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>We can see the output of our little map-reduce job:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -text /tmp/output/output/part-m-00000
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -text /tmp/output/output/part-m-00000
 catrow cf:cq []    catvalue
 dogrow cf:cq []    dogvalue
 $
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.6/examples/terasort.html b/1.6/examples/terasort.html
index cc7c117..b5e72f4 100644
--- a/1.6/examples/terasort.html
+++ b/1.6/examples/terasort.html
@@ -154,7 +154,7 @@ hadoop terasort benchmark.</p>
 
 <p>To run this example you run it with arguments describing the amount of data:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TeraSortIngest \
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TeraSortIngest \
 -i instance -z zookeepers -u user -p password \
 --count 10 \
 --minKeySize 10 \
@@ -163,12 +163,11 @@ hadoop terasort benchmark.</p>
 --maxValueSize 78 \
 --table sort \
 --splits 10 \
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the map reduce job completes, scan the data:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 username@instance&gt; scan -t sort
 +l-$$OE/ZH c:         4 []    GGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOO
 ,C)wDw//u= c:        10 []    CCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKK
@@ -180,8 +179,7 @@ M^*dDE;6^&lt; c:         9 []    UUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGG
 le5awB.$sm c:         6 []    WWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEE
 q__[fwhKFg c:         7 []    EEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMM
 w[o||:N&amp;H, c:         2 []    QQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYY
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Of course, a real benchmark would ingest millions of entries.</p>
 
diff --git a/1.6/examples/visibility.html b/1.6/examples/visibility.html
index 8a404b2..c620a86 100644
--- a/1.6/examples/visibility.html
+++ b/1.6/examples/visibility.html
@@ -150,7 +150,7 @@
           
           <h2 id="creating-a-new-user">Creating a new user</h2>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance&gt; createuser username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@instance&gt; createuser username
 Enter new password for 'username': ********
 Please confirm new password for 'username': ********
 root@instance&gt; user username
@@ -162,14 +162,13 @@ System permissions:
 
 Table permissions (accumulo.metadata): Table.READ
 username@instance&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>By default, a user does not have permission to create a table.</p>
 
 <h2 id="granting-permissions-to-a-user">Granting permissions to a user</h2>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; user root
 Enter password for user root: ********
 root@instance&gt; grant -s System.CREATE_TABLE -u username
 root@instance&gt; user username
@@ -181,8 +180,7 @@ System permissions: System.CREATE_TABLE
 Table permissions (accumulo.metadata): Table.READ
 Table permissions (vistest): Table.READ, Table.WRITE, Table.BULK_IMPORT, Table.ALTER_TABLE, Table.GRANT, Table.DROP_TABLE
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="inserting-data-with-visibilities">Inserting data with visibilities</h2>
 
@@ -191,7 +189,7 @@ tokens. Authorization tokens are arbitrary strings taken from a restricted
 ASCII character set. Parentheses are required to specify order of operations
 in visibilities.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; insert row f1 q1 v1 -l A
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; insert row f1 q1 v1 -l A
 username@instance vistest&gt; insert row f2 q2 v2 -l A&amp;B
 username@instance vistest&gt; insert row f3 q3 v3 -l apple&amp;carrot|broccoli|spinach
 06 11:19:01,432 [shell.Shell] ERROR: org.apache.accumulo.core.util.BadArgumentException: cannot mix | and &amp; near index 12
@@ -199,8 +197,7 @@ apple&amp;carrot|broccoli|spinach
             ^
 username@instance vistest&gt; insert row f3 q3 v3 -l (apple&amp;carrot)|broccoli|spinach
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="scanning-with-authorizations">Scanning with authorizations</h2>
 
@@ -209,25 +206,23 @@ authorizations and each Accumulo scan has authorizations. Scan authorizations
 are only allowed to be a subset of the user’s authorizations. By default, a
 user’s authorizations set is empty.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; scan
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; scan
 username@instance vistest&gt; scan -s A
 06 11:43:14,951 [shell.Shell] ERROR: java.lang.RuntimeException: org.apache.accumulo.core.client.AccumuloSecurityException: Error BAD_AUTHORIZATIONS - The user does not have the specified authorizations assigned
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="setting-authorizations-for-a-user">Setting authorizations for a user</h2>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; setauths -s A
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; setauths -s A
 06 11:53:42,056 [shell.Shell] ERROR: org.apache.accumulo.core.client.AccumuloSecurityException: Error PERMISSION_DENIED - User does not have permission to perform this action
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>A user cannot set authorizations unless the user has the System.ALTER_USER permission.
 The root user has this permission.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; user root
 Enter password for user root: ********
 root@instance vistest&gt; setauths -s A -u username
 root@instance vistest&gt; user username
@@ -237,12 +232,11 @@ row f1:q1 [A]    v1
 username@instance vistest&gt; scan
 row f1:q1 [A]    v1
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The default authorizations for a scan are the user’s entire set of authorizations.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; user root
 Enter password for user root: ********
 root@instance vistest&gt; setauths -s A,B,broccoli -u username
 root@instance vistest&gt; user username
@@ -253,13 +247,12 @@ row f2:q2 [A&amp;B]    v2
 row f3:q3 [(apple&amp;carrot)|broccoli|spinach]    v3
 username@instance vistest&gt; scan -s B
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>If you want, you can limit a user to inserting only data that they can read themselves.
 This can be enforced with the following constraint.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; user root
 Enter password for user root: ******
 root@instance vistest&gt; config -t vistest -s table.constraint.1=org.apache.accumulo.core.security.VisibilityConstraint
 root@instance vistest&gt; user username
@@ -274,8 +267,7 @@ row f2:q2 [A&amp;B]    v2
 row f3:q3 [(apple&amp;carrot)|broccoli|spinach]    v3
 row f4:q4 [spinach|broccoli]    v4
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.7/examples/batch.html b/1.7/examples/batch.html
index b14a99b..01b9e3e 100644
--- a/1.7/examples/batch.html
+++ b/1.7/examples/batch.html
@@ -169,13 +169,12 @@ list of zookeeper nodes (given as zookeepers here).</p>
 <p>Before you run this, you must ensure that the user you are running as has the
 “exampleVis” authorization. (You can set this in the shell with “setauths -u username -s exampleVis”.)</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e "setauths -u username -s exampleVis"
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e "setauths -u username -s exampleVis"
+</code></pre></div></div>
 
 <p>You must also create the table, batchtest1, ahead of time. (In the shell, use “createtable batchtest1”)</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -e "createtable batchtest1"
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -e "createtable batchtest1"
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.SequentialBatchWriter -i instance -z zookeepers -u username -p password -t batchtest1 --start 0 --num 10000 --size 50 --batchMemory 20M --batchLatency 500 --batchThreads 20 --vis exampleVis
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -i instance -z zookeepers -u username -p password -t batchtest1 --num 100 --min 0 --max 10000 --size 50 --scanThreads 20 --auths exampleVis
 07 11:33:11,103 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries...
@@ -189,8 +188,7 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -
 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : 2173.91 lookups/sec   0.05 secs
 
 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : num results : 100
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.7/examples/bloom.html b/1.7/examples/bloom.html
index deb02db..0b67a26 100644
--- a/1.7/examples/bloom.html
+++ b/1.7/examples/bloom.html
@@ -154,7 +154,7 @@ do not exist in a table.</p>
 
 <p>Below, a table named bloom_test is created and bloom filters are enabled.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.7.3
 - instance name: instance
@@ -166,43 +166,39 @@ username@instance&gt; setauths -u username -s exampleVis
 username@instance&gt; createtable bloom_test
 username@instance bloom_test&gt; config -t bloom_test -s table.bloom.enabled=true
 username@instance bloom_test&gt; exit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, 1 million random values are inserted into Accumulo. The randomly
 generated rows range between 0 and 1 billion. The random number generator is
 initialized with the seed 7.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --vis exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --vis exampleVis
+</code></pre></div></div>
 
 <p>Below, the table is flushed:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test -w'
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test -w'
 05 10:40:06,069 [shell.Shell] INFO : Flush of table bloom_test completed.
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the flush completes, 500 random queries are done against the table. The
 same seed is used to generate the queries, so everything is found in the
 table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --batchThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --batchThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 96.19 lookups/sec   5.20 secs
 num results : 500
 Generating 500 random queries...finished
 102.35 lookups/sec   4.89 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, another 500 queries are performed using a different seed, which results
 in nothing being found. In this case the lookups are much faster because of
 the bloom filters.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 8 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --batchThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 8 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --batchThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 2212.39 lookups/sec   0.23 secs
 num results : 0
@@ -211,8 +207,7 @@ Generating 500 random queries...finished
 4464.29 lookups/sec   0.11 secs
 num results : 0
 Did not find 500 rows
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <hr />
 
@@ -244,7 +239,7 @@ million inserts. If not, then more map files will be created.</p>
 
 <p>The commands for creating the first table without bloom filters are below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.7.3
 - instance name: instance
@@ -264,12 +259,11 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The commands for creating the second table with bloom filters are below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.7.3
 - instance name: instance
@@ -290,65 +284,60 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, 500 lookups are done against the table without bloom filters using random
 number generator seed 7. Even though only one map file will likely contain entries for this
 seed, all map files will be interrogated.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test1 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test1 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 35.09 lookups/sec  14.25 secs
 num results : 500
 Generating 500 random queries...finished
 35.33 lookups/sec  14.15 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, the same lookups are done against the table with bloom filters. The
 lookups were 2.86 times faster because only one map file was used, even though three
 map files existed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test2 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test2 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 99.03 lookups/sec   5.05 secs
 num results : 500
 Generating 500 random queries...finished
 101.15 lookups/sec   4.94 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>You can verify the table has three files by looking in HDFS. To look in HDFS
 you will need the table ID, because this is used in HDFS instead of the table
 name. The following command will show table IDs.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'tables -l'
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'tables -l'
 accumulo.metadata    =&gt;        !0
 accumulo.root        =&gt;        +r
 bloom_test1          =&gt;        o7
 bloom_test2          =&gt;        o8
 trace                =&gt;         1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>So the table ID for bloom_test2 is o8. The command below shows what files this
 table has in HDFS. This assumes Accumulo is at the default location in HDFS.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -lsr /accumulo/tables/o8
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -lsr /accumulo/tables/o8
 drwxr-xr-x   - username supergroup          0 2012-01-10 14:02 /accumulo/tables/o8/default_tablet
 -rw-r--r--   3 username supergroup   52672650 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dj.rf
 -rw-r--r--   3 username supergroup   52436176 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dk.rf
 -rw-r--r--   3 username supergroup   52850173 2012-01-10 14:02 /accumulo/tables/o8/default_tablet/F00000dl.rf
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Running the rfile-info command shows that one of the files has a bloom filter
 and that it is 1.5MB.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo rfile-info /accumulo/tables/o8/default_tablet/F00000dj.rf
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo rfile-info /accumulo/tables/o8/default_tablet/F00000dj.rf
 Locality group         : &lt;DEFAULT&gt;
   Start block          : 0
   Num   blocks         : 752
@@ -372,8 +361,7 @@ Meta block     : acu_bloom
   Raw size             : 1,540,292 bytes
   Compressed size      : 1,433,115 bytes
   Compression type     : gz
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.7/examples/bulkIngest.html b/1.7/examples/bulkIngest.html
index 348ca5f..e1c9c8b 100644
--- a/1.7/examples/bulkIngest.html
+++ b/1.7/examples/bulkIngest.html
@@ -155,14 +155,13 @@ table called test_bulk which has two initial split points. Then 1000 rows of
 test data are created in HDFS. After that, the 1000 rows are ingested into
 Accumulo. Finally, we verify that the 1000 rows are in Accumulo.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ PKG=org.apache.accumulo.examples.simple.mapreduce.bulk
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ PKG=org.apache.accumulo.examples.simple.mapreduce.bulk
 $ ARGS="-i instance -z zookeepers -u username -p password"
 $ ./bin/accumulo $PKG.SetupTable $ARGS -t test_bulk row_00000333 row_00000666
 $ ./bin/accumulo $PKG.GenerateTestData --start-row 0 --count 1000 --output bulk/test_1.txt
 $ ./bin/tool.sh lib/accumulo-examples-simple.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork
 $ ./bin/accumulo $PKG.VerifyIngest $ARGS -t test_bulk --start-row 0 --count 1000
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>For a high-level discussion of bulk ingest, see the docs dir.</p>
 
diff --git a/1.7/examples/classpath.html b/1.7/examples/classpath.html
index 9890f80..42badb4 100644
--- a/1.7/examples/classpath.html
+++ b/1.7/examples/classpath.html
@@ -155,59 +155,52 @@ table reference that jar.</p>
 
 <p>Execute the following command in the shell.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/test/src/test/resources/FooFilter.jar /user1/lib
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/test/src/test/resources/FooFilter.jar /user1/lib
+</code></pre></div></div>
 
 <p>Execute the following in the Accumulo shell to set up the classpath context:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test17&gt; config -s general.vfs.context.classpath.cx1=hdfs://&lt;namenode host&gt;:&lt;namenode port&gt;/user1/lib
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test17&gt; config -s general.vfs.context.classpath.cx1=hdfs://&lt;namenode host&gt;:&lt;namenode port&gt;/user1/lib
+</code></pre></div></div>
 
 <p>Create a table</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test17&gt; createtable nofoo
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test17&gt; createtable nofoo
+</code></pre></div></div>
 
 <p>The following command makes this table use the configured classpath context</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test17 nofoo&gt; config -t nofoo -s table.classpath.context=cx1
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test17 nofoo&gt; config -t nofoo -s table.classpath.context=cx1
+</code></pre></div></div>
 
 <p>The following command configures an iterator that is in FooFilter.jar:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test17 nofoo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test17 nofoo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
 Filter accepts or rejects each Key/Value pair
 ----------&gt; set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The commands below show that the filter is working.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test17 nofoo&gt; insert foo1 f1 q1 v1
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test17 nofoo&gt; insert foo1 f1 q1 v1
 root@test17 nofoo&gt; insert noo1 f1 q1 v2
 root@test17 nofoo&gt; scan
 noo1 f1:q1 []    v2
 root@test15 nofoo&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, an attempt is made to add the FooFilter to a table that is not configured
 to use the classpath context cx1. This fails until the table is configured to
 use cx1.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test17 nofoo&gt; createtable nofootwo
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test17 nofoo&gt; createtable nofootwo
 root@test17 nofootwo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
 2013-05-03 12:49:35,943 [shell.Shell] ERROR: java.lang.IllegalArgumentException: org.apache.accumulo.test.FooFilter
 root@test17 nofootwo&gt; config -t nofootwo -s table.classpath.context=cx1
 root@test17 nofootwo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
 Filter accepts or rejects each Key/Value pair
 ----------&gt; set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.7/examples/client.html b/1.7/examples/client.html
index 1274b66..d0090ba 100644
--- a/1.7/examples/client.html
+++ b/1.7/examples/client.html
@@ -162,15 +162,14 @@
 class name, and enough arguments to find your accumulo instance. For example,
 the Flush class will flush a table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ PACKAGE=org.apache.accumulo.examples.simple.client
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ PACKAGE=org.apache.accumulo.examples.simple.client
 $ bin/accumulo $PACKAGE.Flush -u root -p mypassword -i instance -z zookeeper -t trace
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The very simple RowOperations class demonstrates how to read and write rows using the BatchWriter
 and Scanner:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.RowOperations -u root -p mypassword -i instance -z zookeeper
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.RowOperations -u root -p mypassword -i instance -z zookeeper
 2013-01-14 14:45:24,738 [client.RowOperations] INFO : This is everything
 2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:1 [] 1358192724640 false Value: This is the value for this key
 2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:2 [] 1358192724642 false Value: This is the value for this key
@@ -198,12 +197,11 @@ and Scanner:</p>
 2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:2 [] 1358192724642 false Value: This is the value for this key
 2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:3 [] 1358192724642 false Value: This is the value for this key
 2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:4 [] 1358192724642 false Value: This is the value for this key
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To create a table, write to it and read from it:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.ReadWriteExample -u root -p mypassword -i instance -z zookeeper --createtable --create --read
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.ReadWriteExample -u root -p mypassword -i instance -z zookeeper --createtable --create --read
 hello%00; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%01; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%02; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
@@ -214,8 +212,7 @@ hello%06; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%07; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%08; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%09; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.7/examples/combiner.html b/1.7/examples/combiner.html
index 6bdf5ba..78b6010 100644
--- a/1.7/examples/combiner.html
+++ b/1.7/examples/combiner.html
@@ -158,7 +158,7 @@
 copy the produced jar into the accumulo lib dir. This is already done in the
 tar distribution.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo shell -u username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo shell -u username
 Enter current password for 'username'@'instance': ***
 
 Shell - Apache Accumulo Interactive Shell
@@ -196,8 +196,7 @@ username@instance runners&gt; scan
 123456 hstat:virtualMarathon []    6a,6b,d5,2
 123456 name:first []    Joe
 123456 stat:marathon []    220,240,690,3
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>In this example, a table is created and the example stats combiner is applied to
 the column families stat and hstat. The stats combiner computes min, max, sum, and
diff --git a/1.7/examples/constraints.html b/1.7/examples/constraints.html
index a65ede6..8aa4064 100644
--- a/1.7/examples/constraints.html
+++ b/1.7/examples/constraints.html
@@ -161,7 +161,7 @@ numeric keys. The other constraint does not allow non numeric values. Two
 inserts that violate these constraints are attempted and denied. The scan at
 the end shows the inserts were not allowed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 
 Shell - Apache Accumulo Interactive Shell
 -
@@ -185,8 +185,7 @@ username@instance testConstraints&gt; insert r1! cf1 cq1 ABC
 username@instance testConstraints&gt; scan
 r1 cf1:cq1 []    1111
 username@instance testConstraints&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.7/examples/dirlist.html b/1.7/examples/dirlist.html
index 0440c33..5c19b76 100644
--- a/1.7/examples/dirlist.html
+++ b/1.7/examples/dirlist.html
@@ -167,9 +167,8 @@
 
 <p>To begin, ingest some data with Ingest.java.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Ingest -i instance -z zookeepers -u username -p password --vis exampleVis --chunkSize 100000 /local/username/workspace
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Ingest -i instance -z zookeepers -u username -p password --vis exampleVis --chunkSize 100000 /local/username/workspace
+</code></pre></div></div>
 
 <p>This may take some time if there are large files in the /local/username/workspace directory. If you use 0 instead of 100000 on the command line, the ingest will run much faster, but it will not put any file data into Accumulo (the dataTable will be empty).
 Note that running this example will create tables dirTable, indexTable, and dataTable in Accumulo that you should delete when you have completed the example.
@@ -177,46 +176,41 @@ If you modify a file or add new files in the directory ingested (e.g. /local/use
 
 <p>To browse the data ingested, use Viewer.java. Be sure to give the “username” user the authorizations to see the data (in this case, run</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e 'setauths -u username -s exampleVis'
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e 'setauths -u username -s exampleVis'
+</code></pre></div></div>
 
 <p>then run the Viewer:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Viewer -i instance -z zookeepers -u username -p password -t dirTable --dataTable dataTable --auths exampleVis --path /local/username/workspace
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Viewer -i instance -z zookeepers -u username -p password -t dirTable --dataTable dataTable --auths exampleVis --path /local/username/workspace
+</code></pre></div></div>
 
 <p>To list the contents of specific directories, use QueryUtil.java.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username/workspace
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To perform searches on file or directory names, also use QueryUtil.java. Search terms may contain at most one wildcard and cannot contain “/”.
 <em>Note</em> that these queries run on the <em>indexTable</em> table instead of the dirTable table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path filename --search
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path filename --search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path 'filename*' --search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path '*jar' --search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path 'filename*jar' --search
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To count the number of direct children (directories and files) and descendants (children and children’s descendants, directories and files), run FileCount over the dirTable table.
 The results are written back to the same table. FileCount reads from and writes to Accumulo, so it requires scan authorizations for the read and a visibility for the data written.
 In this example, the authorizations and visibility are set to the same value, exampleVis. See README.visibility for more information on visibility and authorizations.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.FileCount -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.FileCount -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis
+</code></pre></div></div>
 
 <h2 id="directory-table">Directory Table</h2>
 
 <p>Here is an illustration of what data looks like in the directory table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]    value
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]    value
 000 dir:exec [exampleVis]    true
 000 dir:hidden [exampleVis]    false
 000 dir:lastmod [exampleVis]    1291996886000
@@ -230,8 +224,7 @@ In this example, the authorizations and visibility are set to the same value, ex
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:lastmod [exampleVis]    1308746481000
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:length [exampleVis]    9192
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:md5 [exampleVis]    274af6419a3c4c4a259260ac7017cbf1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The rows are of the form depth + path, where depth is the number of slashes (“/”) in the path, padded to 3 digits. This is so that all the children of a directory appear as consecutive keys in Accumulo; without the depth, you would, for example, see all the subdirectories of /local before you saw /usr.
 For directories the column family is “dir”. For files the column family is Long.MAX_VALUE - lastModified, stored in bytes rather than string format so that newer versions sort earlier.</p>
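This key scheme can be sketched in a few lines (the helper names here are ours, not part of the example's API; the encoding follows the row and column-family values shown in the scan output above):

```python
# Sketch of the dirlist row-key scheme: depth-prefixed paths for rows,
# and (Long.MAX_VALUE - lastModified) big-endian bytes as the file
# column family so newer versions sort earlier.

def dir_row(path):
    # depth = number of "/" in the path, zero-padded to 3 digits,
    # so all children of a directory sort as consecutive keys
    depth = path.count("/")
    return "%03d%s" % (depth, path)

LONG_MAX = 2**63 - 1

def file_colf(last_modified_ms):
    # 8 big-endian bytes; a larger (newer) timestamp yields a smaller
    # byte string, so it sorts first
    return (LONG_MAX - last_modified_ms).to_bytes(8, "big")

print(dir_row("/local/username/workspace"))  # 003/local/username/workspace
```

For example, /local/Accumulo.README contains two slashes, giving the row 002/local/Accumulo.README seen in the table above.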
@@ -240,13 +233,12 @@ For directories the column family is “dir”. For files the column family is L
 
 <p>Here is an illustration of what data looks like in the index table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]
 fAccumulo.README i:002/local/Accumulo.README [exampleVis]
 flocal i:001/local [exampleVis]
 rEMDAER.olumuccA i:002/local/Accumulo.README [exampleVis]
 rlacol i:001/local [exampleVis]
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The values of the index table are null. The rows are of the form “f” + filename or “r” + reverse file name. This is to enable searches with wildcards at the beginning, middle, or end.</p>
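A minimal sketch of how those index rows could be generated (assumed helper names; the column qualifier mirrors the depth-prefixed row key of the directory table):

```python
# Sketch of the index-table rows: "f" + filename for forward matches and
# "r" + reversed filename so trailing wildcards become prefix scans.

def index_rows(full_path):
    name = full_path.rsplit("/", 1)[-1]
    depth = full_path.count("/")
    colq = "%03d%s" % (depth, full_path)
    return [("f" + name, colq), ("r" + name[::-1], colq)]

for row, colq in index_rows("/local/Accumulo.README"):
    print(row, "i:" + colq)
```

This reproduces the fAccumulo.README and rEMDAER.olumuccA rows shown in the scan above.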
 
@@ -254,13 +246,12 @@ rlacol i:001/local [exampleVis]
 
 <p>Here is an illustration of what data looks like in the data table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]    value
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]    value
 274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00filext [exampleVis]    README
 274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00name [exampleVis]    /local/Accumulo.README
 274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x00 [exampleVis]    *******************************************************************************\x0A1. Building\x0A\x0AIn the normal tarball release of accumulo, [truncated]
 274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x01 [exampleVis]
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The rows are the md5 hash of the file. Some column family : column qualifier pairs are “refs” : hash of file name + null byte + property name, in which case the value is property value. There can be multiple references to the same file which are distinguished by the hash of the file name.
 Other column family : column qualifier pairs are “~chunk” : chunk size in bytes + chunk number in bytes, in which case the value is the bytes for that chunk of the file. There is an end of file data marker whose chunk number is the number of chunks for the file and whose value is empty.</p>
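The data-table keys can be sketched as follows (an assumed encoding that matches the scan output above, where the first chunk qualifier is \x00\x0FB@\x00\x00\x00\x00, i.e. a chunk size of 1000000 followed by chunk number 0):

```python
# Sketch of the data-table keys: the row is the md5 of the file content;
# "~chunk" qualifiers are chunk size + chunk number as 4-byte big-endian
# integers, and the end-of-file marker is the chunk with an empty value.
import hashlib
import struct

def data_row(content: bytes) -> str:
    return hashlib.md5(content).hexdigest()

def chunk_colq(chunk_size: int, chunk_number: int) -> bytes:
    return struct.pack(">II", chunk_size, chunk_number)

print(chunk_colq(1_000_000, 0))  # b'\x00\x0fB@\x00\x00\x00\x00'
```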
diff --git a/1.7/examples/export.html b/1.7/examples/export.html
index ad71970..0aea4b2 100644
--- a/1.7/examples/export.html
+++ b/1.7/examples/export.html
@@ -156,7 +156,7 @@ the table. A table must be offline to export it, and it should remain offline
 for the duration of the distcp. An easy way to take a table offline without
 interrupting access to it is to clone it and take the clone offline.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test17&gt; createtable table1
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test17&gt; createtable table1
 root@test17 table1&gt; insert a cf1 cq1 v1
 root@test17 table1&gt; insert h cf1 cq1 v2
 root@test17 table1&gt; insert z cf1 cq1 v3
@@ -172,34 +172,31 @@ root@test17 table1&gt; clonetable table1 table1_exp
 root@test17 table1&gt; offline table1_exp
 root@test17 table1&gt; exporttable -t table1_exp /tmp/table1_export
 root@test17 table1&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After executing the export command, a few files are created in the hdfs dir.
 One of the files is a list of files to distcp as shown below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -ls /tmp/table1_export
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -ls /tmp/table1_export
 Found 2 items
 -rw-r--r--   3 user supergroup        162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
 -rw-r--r--   3 user supergroup        821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
 $ hadoop fs -cat /tmp/table1_export/distcp.txt
 hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
 hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Before the table can be imported, it must be copied using distcp. After the
 distcp completes, the cloned table may be deleted.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
+</code></pre></div></div>
 
 <p>The Accumulo shell session below shows importing the table and inspecting it.
 The data, splits, config, and logical time information for the table were
 preserved.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test17&gt; importtable table1_copy /tmp/table1_export_dest
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test17&gt; importtable table1_copy /tmp/table1_export_dest
 root@test17&gt; table table1_copy
 root@test17 table1_copy&gt; scan
 a cf1:cq1 []    v1
@@ -225,8 +222,7 @@ root@test17 table1_copy&gt; scan -t accumulo.metadata -b 5 -c srv:time
 5;b srv:time []    M1343224500467
 5;r srv:time []    M1343224500467
 5&lt; srv:time []    M1343224500467
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.7/examples/filedata.html b/1.7/examples/filedata.html
index 9666abd..305af24 100644
--- a/1.7/examples/filedata.html
+++ b/1.7/examples/filedata.html
@@ -176,27 +176,23 @@ The example has the following classes:</p>
 
 <p>If you haven’t already run the README.dirlist example, ingest a file with FileDataIngest.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --chunk 1000 $ACCUMULO_HOME/README
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --chunk 1000 $ACCUMULO_HOME/README
+</code></pre></div></div>
 
 <p>Open the accumulo shell and look at the data. The row is the MD5 hash of the file, which you can verify by running a command such as ‘md5sum’ on the file.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>&gt; scan -t dataTable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; scan -t dataTable
+</code></pre></div></div>
 
 <p>Run the CharacterHistogram MapReduce to add some information about the file.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --vis exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --vis exampleVis
+</code></pre></div></div>
 
 <p>Scan again to see the histogram stored in the ‘info’ column family.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>&gt; scan -t dataTable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; scan -t dataTable
+</code></pre></div></div>
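Conceptually, the CharacterHistogram job computes a per-byte frequency count over each file's chunks. A simplified sketch (the helper name is ours; the real job stores its result in the ‘info’ column family as shown above):

```python
# Sketch of a character histogram: one counter per possible byte value.

def char_histogram(data: bytes):
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    return counts

h = char_histogram(b"abba")
print(h[ord("a")], h[ord("b")])  # 2 2
```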
 
         </div>
 
diff --git a/1.7/examples/filter.html b/1.7/examples/filter.html
index a5f57b3..9d4c629 100644
--- a/1.7/examples/filter.html
+++ b/1.7/examples/filter.html
@@ -158,7 +158,7 @@ Filter takes a “negate” parameter which defaults to false. If set to true, t
 return value of the accept method is negated, so that key/value pairs accepted
 by the method are omitted by the Filter.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable filtertest
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable filtertest
 username@instance filtertest&gt; setiter -t filtertest -scan -p 10 -n myfilter -ageoff
 AgeOffFilter removes entries with timestamps more than &lt;ttl&gt; milliseconds old
 ----------&gt; set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method:
@@ -169,15 +169,13 @@ username@instance filtertest&gt; insert foo a b c
 username@instance filtertest&gt; scan
 foo a:b []    c
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>… wait 30 seconds …</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; scan
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; scan
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Note the absence of the entry inserted more than 30 seconds ago. Since the
 scope was set to “scan”, this means the entry is still in Accumulo, but is
@@ -195,7 +193,7 @@ AgeOffFilter, but any Filter can be configured by using the -class flag. The
 following commands show how to enable the AgeOffFilter for the minc and majc
 scopes using the -class flag, then flush and compact the table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; setiter -t filtertest -minc -majc -p 10 -n myfilter -class org.apache.accumulo.core.iterators.user.AgeOffFilter
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; setiter -t filtertest -minc -majc -p 10 -n myfilter -class org.apache.accumulo.core.iterators.user.AgeOffFilter
 AgeOffFilter removes entries with timestamps more than &lt;ttl&gt; milliseconds old
 ----------&gt; set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method:
 ----------&gt; set AgeOffFilter parameter ttl, time to live (milliseconds): 30000
@@ -210,8 +208,7 @@ username@instance filtertest&gt; compact -t filtertest -w
 06 10:43:00,632 [shell.Shell] INFO : Compacting table ...
 06 10:43:01,307 [shell.Shell] INFO : Compaction of table filtertest completed for given range
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
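The accept decision the AgeOffFilter makes can be sketched from its documented parameters (this is a simplified model, not the actual iterator code): keep entries whose timestamp is within ttl milliseconds of the current time, with negate flipping the result.

```python
# Sketch of AgeOffFilter's accept logic: ttl and negate as configured
# via setiter above.

def age_off_accept(ts_ms, now_ms, ttl_ms, negate=False):
    keep = now_ms - ts_ms <= ttl_ms
    return not keep if negate else keep

# an entry 40 seconds old fails a 30-second ttl
assert not age_off_accept(ts_ms=60_000, now_ms=100_000, ttl_ms=30_000)
assert age_off_accept(ts_ms=80_000, now_ms=100_000, ttl_ms=30_000)
```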
 
 <p>By default, flush and compact execute in the background, but with the -w flag
 they will wait to return until the operation has completed. Both are
@@ -224,7 +221,7 @@ the old files.</p>
 
 <p>To see the iterator settings for a table, use config.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; config -t filtertest -f iterator
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; config -t filtertest -f iterator
 ---------+---------------------------------------------+---------------------------------------------------------------------------
 SCOPE    | NAME                                        | VALUE
 ---------+---------------------------------------------+---------------------------------------------------------------------------
@@ -242,8 +239,7 @@ table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.
 table    | table.iterator.scan.vers.opt.maxVersions .. | 1
 ---------+---------------------------------------------+---------------------------------------------------------------------------
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>When setting new iterators, make sure to order their priority numbers
 (specified with -p) in the order you would like the iterators to be applied.
diff --git a/1.7/examples/helloworld.html b/1.7/examples/helloworld.html
index 1e235b3..b5a97d3 100644
--- a/1.7/examples/helloworld.html
+++ b/1.7/examples/helloworld.html
@@ -157,40 +157,34 @@
 
 <p>Log into the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+</code></pre></div></div>
 
 <p>Create a table called ‘hellotable’:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable hellotable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable hellotable
+</code></pre></div></div>
 
 <p>Launch a Java program that inserts data with a BatchWriter:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithBatchWriter -i instance -z zookeepers -u username -p password -t hellotable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithBatchWriter -i instance -z zookeepers -u username -p password -t hellotable
+</code></pre></div></div>
 
 <p>On the accumulo status page at the URL below (where ‘master’ is replaced with the name or IP of your accumulo master), you should see 50K entries</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>http://master:50095/
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>http://master:50095/
+</code></pre></div></div>
 
 <p>To view the entries, use the shell to scan the table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; table hellotable
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; table hellotable
 username@instance hellotable&gt; scan
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>You can also use a Java class to scan the table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.ReadData -i instance -z zookeepers -u username -p password -t hellotable --startKey row_0 --endKey row_1001
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.ReadData -i instance -z zookeepers -u username -p password -t hellotable --startKey row_0 --endKey row_1001
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.7/examples/isolation.html b/1.7/examples/isolation.html
index c730057..b7cfb1c 100644
--- a/1.7/examples/isolation.html
+++ b/1.7/examples/isolation.html
@@ -162,7 +162,7 @@ reading the row at the same time a mutation is changing the row.</p>
 <p>Below, Interference Test is run without isolation enabled for 5000 iterations
 and it reports problems.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000
 ERROR Columns in row 053 had multiple values [53, 4553]
 ERROR Columns in row 061 had multiple values [561, 61]
 ERROR Columns in row 070 had multiple values [570, 1070]
@@ -171,16 +171,14 @@ ERROR Columns in row 088 had multiple values [2588, 1588]
 ERROR Columns in row 106 had multiple values [2606, 3106]
 ERROR Columns in row 115 had multiple values [4615, 3115]
 finished
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, Interference Test is run with isolation enabled for 5000 iterations and
 it reports no problems.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000 --isolated
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000 --isolated
 finished
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.7/examples/mapred.html b/1.7/examples/mapred.html
index 7464a4b..bffd999 100644
--- a/1.7/examples/mapred.html
+++ b/1.7/examples/mapred.html
@@ -155,17 +155,16 @@ accumulo table with combiners.</p>
 <p>To run this example you will need a directory in HDFS containing text files.
 The accumulo readme will be used to show how to run this example.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
 $ hadoop fs -ls /user/username/wc
 Found 1 items
 -rw-r--r--   2 username supergroup       9359 2009-07-15 17:54 /user/username/wc/Accumulo.README
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The first part of running this example is to create a table with a combiner
 for the column family count.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.7.3
 - instance name: instance
@@ -181,12 +180,11 @@ SummingCombiner interprets Values as Longs and adds them together. A variety of
 ----------&gt; set SummingCombiner parameter lossy, if true, failed decodes are ignored. Otherwise combiner will error on failed decodes (default false): &lt;TRUE|FALSE&gt;: false
 ----------&gt; set SummingCombiner parameter type, &lt;VARLEN|FIXEDLEN|STRING|fullClassName&gt;: STRING
 username@instance wordCount&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After creating the table, run the word count map reduce job.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -p password
 
 11/02/07 18:20:11 INFO input.FileInputFormat: Total input paths to process : 1
 11/02/07 18:20:12 INFO mapred.JobClient: Running job: job_201102071740_0003
@@ -203,13 +201,12 @@ username@instance wordCount&gt; quit
 11/02/07 18:20:22 INFO mapred.JobClient:     Map input records=255
 11/02/07 18:20:22 INFO mapred.JobClient:     Spilled Records=0
 11/02/07 18:20:22 INFO mapred.JobClient:     Map output records=1452
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the map reduce job completes, query the accumulo table to see word
 counts.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 username@instance&gt; table wordCount
 username@instance wordCount&gt; scan -b the
 the count:20080906 []    75
@@ -227,8 +224,7 @@ total count:20080906 []    1
 tserver, count:20080906 []    1
 tserver.compaction.major.concurrent.max count:20080906 []    1
 ...
-</code></pre>
-</div>
+</code></pre></div></div>
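The counts above come from the mapper emitting a 1 for each word and the SummingCombiner adding values for identical keys. A simplified, Accumulo-free sketch of that flow (assumed simplification; the real combiner runs server-side at scan and compaction time):

```python
# Sketch of the word-count pipeline: map each word to 1, then sum
# the values per key the way a SummingCombiner would.
from collections import Counter

def map_words(text):
    for word in text.split():
        yield word, 1

def summing_combine(pairs):
    totals = Counter()
    for word, n in pairs:
        totals[word] += n
    return dict(totals)

counts = summing_combine(map_words("the quick the lazy the"))
print(counts["the"])  # 3
```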
 
 <p>Another example to look at is
 org.apache.accumulo.examples.simple.mapreduce.UniqueColumns. This example
@@ -244,9 +240,8 @@ displayed in the job’s configuration which is world-readable).</p>
 
 <p>To create a token file, use the create-token utility</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo create-token
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo create-token
+</code></pre></div></div>
 
 <p>It defaults to creating a PasswordToken, but you can specify the token class
 with -tc (requires the fully qualified class name). Based on the token class,
@@ -259,9 +254,8 @@ a file, but only the first one for each user will be recognized.</p>
 <p>Rather than waiting for the prompts, you can specify some options when calling
 create-token, for example</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo create-token -u root -p secret -f root.pw
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo create-token -u root -p secret -f root.pw
+</code></pre></div></div>
 
 <p>would create a token file containing a PasswordToken for
 user ‘root’ with password ‘secret’, saved to ‘root.pw’.</p>
@@ -269,9 +263,8 @@ user ‘root’ with password ‘secret’ and saved to ‘root.pw’</p>
 <p>This local file needs to be uploaded to hdfs to be used with the
 map-reduce job. For example, if the file were ‘root.pw’ in the local directory:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -put root.pw root.pw
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -put root.pw root.pw
+</code></pre></div></div>
 
 <p>This would put ‘root.pw’ in the user’s home directory in hdfs.</p>
 
@@ -280,18 +273,16 @@ map-reduce job. For example, if the file were ‘root.pw’ in the local directo
 the basic WordCount example by calling the same command as explained above
 except replacing the password with the token file (rather than -p, use -tf).</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -tf tokenfile
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -tf tokenfile
+</code></pre></div></div>
 
 <p>In the above examples, username was ‘root’ and tokenfile was ‘root.pw’.</p>
 
 <p>However, if you don’t want to use the Opts class to parse arguments,
 the TokenFileWordCount is an example of using the token file manually.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TokenFileWordCount instance zookeepers username tokenfile /user/username/wc wordCount
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TokenFileWordCount instance zookeepers username tokenfile /user/username/wc wordCount
+</code></pre></div></div>
 
 <p>The results should be the same as the WordCount example except that the
 authentication token was not stored in the configuration. It was instead
diff --git a/1.7/examples/maxmutation.html b/1.7/examples/maxmutation.html
index 21ff28a..dc2c11f 100644
--- a/1.7/examples/maxmutation.html
+++ b/1.7/examples/maxmutation.html
@@ -155,7 +155,7 @@ inadvertently create mutations so large, that they cause the tablet servers to
 run out of memory. A simple constraint can be added to a table to reject very
 large mutations.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 
 Shell - Apache Accumulo Interactive Shell
 -
@@ -168,18 +168,16 @@ Shell - Apache Accumulo Interactive Shell
 username@instance&gt; createtable test_ingest
 username@instance test_ingest&gt; config -t test_ingest -s table.constraint.1=org.apache.accumulo.examples.simple.constraints.MaxMutationSize
 username@instance test_ingest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Now the table will reject any mutation that is larger than 1/256th of the
 working memory of the tablet server. The following command attempts to ingest
 a single row with 10000 columns, which exceeds the memory limit.
 Depending on the amount of Java heap your tserver(s) are given, you may have to increase the number of columns provided to see the failure.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.test.TestIngest -i instance -z zookeepers -u username -p password --rows 1 --cols 10000
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.test.TestIngest -i instance -z zookeepers -u username -p password --rows 1 --cols 10000
 ERROR : Constraint violates : ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.MaxMutationSize, violationCode:0, violationDescription:mutation exceeded maximum size of 188160, numberOfViolatingMutations:1)
-</code></pre>
-</div>
+</code></pre></div></div>
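The constraint's behavior can be sketched as a simple size check (hypothetical helper; the real MaxMutationSize derives its limit from tserver memory, here we reuse the 188160-byte limit from the error output above):

```python
# Sketch of a max-mutation-size constraint: return violation codes for
# mutations over the limit, an empty list when the mutation passes.

VIOLATION_TOO_LARGE = 0  # violation code, as in the error message above

def check_mutation(serialized_size, max_size=188_160):
    return [] if serialized_size <= max_size else [VIOLATION_TOO_LARGE]

assert check_mutation(1_000) == []
assert check_mutation(500_000) == [VIOLATION_TOO_LARGE]
```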
 
 
         </div>
diff --git a/1.7/examples/regex.html b/1.7/examples/regex.html
index 14d2960..3819dc8 100644
--- a/1.7/examples/regex.html
+++ b/1.7/examples/regex.html
@@ -154,7 +154,7 @@ This is accomplished using a map-only mapreduce job and a scan-time iterator.</p
 <p>To run this example you will need some data in a table. The following will
 put a trivial amount of data into accumulo using the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.7.3
 - instance name: instance
@@ -166,8 +166,7 @@ username@instance&gt; createtable input
 username@instance&gt; insert dogrow dogcf dogcq dogvalue
 username@instance&gt; insert catrow catcf catcq catvalue
 username@instance&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The RegexExample class sets an iterator on the scanner. This does pattern matching
 against each key/value in accumulo, and only returns matching items. It will do this
@@ -175,22 +174,20 @@ in parallel and will store the results in files in hdfs.</p>
 
 <p>The following will search for any rows in the input table that start with “dog”:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RegexExample -u user -p passwd -i instance -t input --rowRegex 'dog.*' --output /tmp/output
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RegexExample -u user -p passwd -i instance -t input --rowRegex 'dog.*' --output /tmp/output
 
 $ hadoop fs -ls /tmp/output
 Found 3 items
 -rw-r--r--   1 username supergroup          0 2013-01-10 14:11 /tmp/output/_SUCCESS
 drwxr-xr-x   - username supergroup          0 2013-01-10 14:10 /tmp/output/_logs
 -rw-r--r--   1 username supergroup         51 2013-01-10 14:10 /tmp/output/part-m-00000
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>We can see the output of our little map-reduce job:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -text /tmp/output/part-m-00000
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -text /tmp/output/part-m-00000
 dogrow dogcf:dogcq [] 1357844987994 false    dogvalue
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.7/examples/reservations.html b/1.7/examples/reservations.html
index f59947a..198ec66 100644
--- a/1.7/examples/reservations.html
+++ b/1.7/examples/reservations.html
@@ -157,7 +157,7 @@ and trent to reserve room06 on 20140101. Bob ends up getting the reservation
 and everyone else is put on a wait list. The example code will take any string
 for what, when and who.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.reservations.ARS
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.reservations.ARS
 &gt;connect test16 localhost root secret ars
   connected
 &gt;
@@ -180,20 +180,18 @@ for what, when and who.</p>
   Reservation holder : mallory
   Wait list : [trent, eve]
 &gt;quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Scanning the table in the Accumulo shell after running the example shows the
 following:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test16&gt; table ars
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test16&gt; table ars
 root@test16 ars&gt; scan
 room06:20140101 res:0001 []    mallory
 room06:20140101 res:0003 []    trent
 room06:20140101 res:0004 []    eve
 room06:20140101 tx:seq []    6
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The tx:seq column is incremented for each update to the row, allowing for
 detection of concurrent changes. For an update to go through, the sequence
diff --git a/1.7/examples/rowhash.html b/1.7/examples/rowhash.html
index ce4cf1d..7d0c4c8 100644
--- a/1.7/examples/rowhash.html
+++ b/1.7/examples/rowhash.html
@@ -154,7 +154,7 @@ writes back into that table.</p>
 <p>To run this example you will need some data in a table. The following will
 put a trivial amount of data into accumulo using the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.7.3
 - instance name: instance
@@ -166,19 +166,17 @@ username@instance&gt; createtable input
 username@instance&gt; insert a-row cf cq value
 username@instance&gt; insert b-row cf cq value
 username@instance&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The RowHash class will insert a hash for each row in the database if it contains a
 specified column. Here’s how you run the map/reduce job:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq
+</code></pre></div></div>
 
 <p>Now we can scan the table and see the hashes:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.7.3
 - instance name: instance
@@ -192,8 +190,7 @@ a-row cf-HASHTYPE:cq-MD5BASE64 []    IGPBYI1uC6+AJJxC4r5YBA==
 b-row cf:cq []    value
 b-row cf-HASHTYPE:cq-MD5BASE64 []    IGPBYI1uC6+AJJxC4r5YBA==
 username@instance&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.7/examples/shard.html b/1.7/examples/shard.html
index 0262fd5..54818d8 100644
--- a/1.7/examples/shard.html
+++ b/1.7/examples/shard.html
@@ -160,21 +160,19 @@ document, or “sharded”. This example shows how to use the intersecting itera
 
 <p>To run these example programs, create two tables like below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable shard
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable shard
 username@instance shard&gt; createtable doc2term
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After creating the tables, index some files. The following command indexes all of the java files in the Accumulo source code.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ cd /local/username/workspace/accumulo/
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd /local/username/workspace/accumulo/
 $ find core/src server/src -name "*.java" | xargs ./bin/accumulo org.apache.accumulo.examples.simple.shard.Index -i instance -z zookeepers -t shard -u username -p password --partitions 30
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The following command queries the index to find all files containing ‘foo’ and ‘bar’.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ cd $ACCUMULO_HOME
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd $ACCUMULO_HOME
 $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query -i instance -z zookeepers -t shard -u username -p password foo bar
 /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/security/ColumnVisibilityTest.java
 /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/client/mock/MockConnectorTest.java
@@ -187,19 +185,17 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query -i instance -z
 /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/iterators/WholeRowIteratorTest.java
 /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/util/DefaultMapTest.java
 /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/tabletserver/InMemoryMapTest.java
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>In order to run ContinuousQuery, we need to run Reverse.java to populate doc2term.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Reverse -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Reverse -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password
+</code></pre></div></div>
 
 <p>Below, ContinuousQuery is run using 5 terms. It selects 5 random terms from each document, then repeatedly
 selects one random set of 5 terms and queries with them. It prints the number of matching documents and the time in seconds.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.ContinuousQuery -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password --terms 5
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.ContinuousQuery -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password --terms 5
 [public, core, class, binarycomparable, b] 2  0.081
 [wordtodelete, unindexdocument, doctablename, putdelete, insert] 1  0.041
 [import, columnvisibilityinterpreterfactory, illegalstateexception, cv, columnvisibility] 1  0.049
@@ -207,8 +203,7 @@ randomly selects one set of 5 terms and queries. It prints the number of matchin
 [for, static, println, public, the] 55  0.211
 [sleeptime, wrappingiterator, options, long, utilwaitthread] 1  0.057
 [string, public, long, 0, wait] 12  0.132
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.7/examples/tabletofile.html b/1.7/examples/tabletofile.html
index 4659fa0..72257d6 100644
--- a/1.7/examples/tabletofile.html
+++ b/1.7/examples/tabletofile.html
@@ -153,7 +153,7 @@
 <p>To run this example you will need some data in a table. The following will
 put a trivial amount of data into accumulo using the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.7.3
 - instance name: instance
@@ -166,15 +166,14 @@ username@instance&gt; insert dog cf cq dogvalue
 username@instance&gt; insert cat cf cq catvalue
 username@instance&gt; insert junk family qualifier junkvalue
 username@instance&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The TableToFile class configures a map-only job to read the specified columns and
 write the key/value pairs to a file in HDFS.</p>
 
 <p>The following will extract the rows containing the column “cf:cq”:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TableToFile -u user -p passwd -i instance -t input --columns cf:cq --output /tmp/output
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TableToFile -u user -p passwd -i instance -t input --columns cf:cq --output /tmp/output
 
 $ hadoop fs -ls /tmp/output
 -rw-r--r--   1 username supergroup          0 2013-01-10 14:44 /tmp/output/_SUCCESS
@@ -183,17 +182,15 @@ drwxr-xr-x   - username supergroup          0 2013-01-10 14:44 /tmp/output/_logs
 -rw-r--r--   1 username supergroup       9049 2013-01-10 14:44 /tmp/output/_logs/history/job_201301081658_0011_1357847072863_username_TableToFile%5F1357847071434
 -rw-r--r--   1 username supergroup      26172 2013-01-10 14:44 /tmp/output/_logs/history/job_201301081658_0011_conf.xml
 -rw-r--r--   1 username supergroup         50 2013-01-10 14:44 /tmp/output/part-m-00000
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>We can see the output of our little map-reduce job:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -text /tmp/output/output/part-m-00000
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -text /tmp/output/output/part-m-00000
 catrow cf:cq []    catvalue
 dogrow cf:cq []    dogvalue
 $
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.7/examples/terasort.html b/1.7/examples/terasort.html
index cc7c117..b5e72f4 100644
--- a/1.7/examples/terasort.html
+++ b/1.7/examples/terasort.html
@@ -154,7 +154,7 @@ hadoop terasort benchmark.</p>
 
 <p>To run this example, pass arguments describing the amount of data:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TeraSortIngest \
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TeraSortIngest \
 -i instance -z zookeepers -u user -p password \
 --count 10 \
 --minKeySize 10 \
@@ -163,12 +163,11 @@ hadoop terasort benchmark.</p>
 --maxValueSize 78 \
 --table sort \
 --splits 10 \
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the map reduce job completes, scan the data:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 username@instance&gt; scan -t sort
 +l-$$OE/ZH c:         4 []    GGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOO
 ,C)wDw//u= c:        10 []    CCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKK
@@ -180,8 +179,7 @@ M^*dDE;6^&lt; c:         9 []    UUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGG
 le5awB.$sm c:         6 []    WWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEE
 q__[fwhKFg c:         7 []    EEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMM
 w[o||:N&amp;H, c:         2 []    QQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYY
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Of course, a real benchmark would ingest millions of entries.</p>
 
diff --git a/1.7/examples/visibility.html b/1.7/examples/visibility.html
index 8a404b2..c620a86 100644
--- a/1.7/examples/visibility.html
+++ b/1.7/examples/visibility.html
@@ -150,7 +150,7 @@
           
           <h2 id="creating-a-new-user">Creating a new user</h2>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance&gt; createuser username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@instance&gt; createuser username
 Enter new password for 'username': ********
 Please confirm new password for 'username': ********
 root@instance&gt; user username
@@ -162,14 +162,13 @@ System permissions:
 
 Table permissions (accumulo.metadata): Table.READ
 username@instance&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>A user does not by default have permission to create a table.</p>
 
 <h2 id="granting-permissions-to-a-user">Granting permissions to a user</h2>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; user root
 Enter password for user root: ********
 root@instance&gt; grant -s System.CREATE_TABLE -u username
 root@instance&gt; user username
@@ -181,8 +180,7 @@ System permissions: System.CREATE_TABLE
 Table permissions (accumulo.metadata): Table.READ
 Table permissions (vistest): Table.READ, Table.WRITE, Table.BULK_IMPORT, Table.ALTER_TABLE, Table.GRANT, Table.DROP_TABLE
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="inserting-data-with-visibilities">Inserting data with visibilities</h2>
 
@@ -191,7 +189,7 @@ tokens. Authorization tokens are arbitrary strings taken from a restricted
 ASCII character set. Parentheses are required to specify order of operations
 in visibilities.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; insert row f1 q1 v1 -l A
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; insert row f1 q1 v1 -l A
 username@instance vistest&gt; insert row f2 q2 v2 -l A&amp;B
 username@instance vistest&gt; insert row f3 q3 v3 -l apple&amp;carrot|broccoli|spinach
 06 11:19:01,432 [shell.Shell] ERROR: org.apache.accumulo.core.util.BadArgumentException: cannot mix | and &amp; near index 12
@@ -199,8 +197,7 @@ apple&amp;carrot|broccoli|spinach
             ^
 username@instance vistest&gt; insert row f3 q3 v3 -l (apple&amp;carrot)|broccoli|spinach
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="scanning-with-authorizations">Scanning with authorizations</h2>
 
@@ -209,25 +206,23 @@ authorizations and each Accumulo scan has authorizations. Scan authorizations
 are only allowed to be a subset of the user’s authorizations. By default, a
 user’s authorizations set is empty.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; scan
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; scan
 username@instance vistest&gt; scan -s A
 06 11:43:14,951 [shell.Shell] ERROR: java.lang.RuntimeException: org.apache.accumulo.core.client.AccumuloSecurityException: Error BAD_AUTHORIZATIONS - The user does not have the specified authorizations assigned
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <h2 id="setting-authorizations-for-a-user">Setting authorizations for a user</h2>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; setauths -s A
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; setauths -s A
 06 11:53:42,056 [shell.Shell] ERROR: org.apache.accumulo.core.client.AccumuloSecurityException: Error PERMISSION_DENIED - User does not have permission to perform this action
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>A user cannot set authorizations unless the user has the System.ALTER_USER permission.
 The root user has this permission.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; user root
 Enter password for user root: ********
 root@instance vistest&gt; setauths -s A -u username
 root@instance vistest&gt; user username
@@ -237,12 +232,11 @@ row f1:q1 [A]    v1
 username@instance vistest&gt; scan
 row f1:q1 [A]    v1
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The default authorizations for a scan are the user’s entire set of authorizations.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; user root
 Enter password for user root: ********
 root@instance vistest&gt; setauths -s A,B,broccoli -u username
 root@instance vistest&gt; user username
@@ -253,13 +247,12 @@ row f2:q2 [A&amp;B]    v2
 row f3:q3 [(apple&amp;carrot)|broccoli|spinach]    v3
 username@instance vistest&gt; scan -s B
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>If desired, you can limit a user to inserting only data which they themselves can read.
 This is done by setting the following constraint.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest&gt; user root
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance vistest&gt; user root
 Enter password for user root: ******
 root@instance vistest&gt; config -t vistest -s table.constraint.1=org.apache.accumulo.core.security.VisibilityConstraint
 root@instance vistest&gt; user username
@@ -274,8 +267,7 @@ row f2:q2 [A&amp;B]    v2
 row f3:q3 [(apple&amp;carrot)|broccoli|spinach]    v3
 row f4:q4 [spinach|broccoli]    v4
 username@instance vistest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.8/examples/batch.html b/1.8/examples/batch.html
index b0c710f..440f104 100644
--- a/1.8/examples/batch.html
+++ b/1.8/examples/batch.html
@@ -187,13 +187,12 @@ list of zookeeper nodes (given as zookeepers here).</p>
 <p>Before you run this, you must ensure that the user you are running as has the
 “exampleVis” authorization. (You can set this in the shell with “setauths -u username -s exampleVis”.)</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e "setauths -u username -s exampleVis"
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e "setauths -u username -s exampleVis"
+</code></pre></div></div>
 
 <p>You must also create the table, batchtest1, ahead of time. (In the shell, use “createtable batchtest1”)</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -e "createtable batchtest1"
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -e "createtable batchtest1"
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.SequentialBatchWriter -i instance -z zookeepers -u username -p password -t batchtest1 --start 0 --num 10000 --size 50 --batchMemory 20M --batchLatency 500 --batchThreads 20 --vis exampleVis
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -i instance -z zookeepers -u username -p password -t batchtest1 --num 100 --min 0 --max 10000 --size 50 --scanThreads 20 --auths exampleVis
 07 11:33:11,103 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries...
@@ -207,8 +206,7 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -
 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : 2173.91 lookups/sec   0.05 secs
 
 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : num results : 100
-</code></pre>
-</div>
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.8/examples/bloom.html b/1.8/examples/bloom.html
index 6023655..e556245 100644
--- a/1.8/examples/bloom.html
+++ b/1.8/examples/bloom.html
@@ -172,7 +172,7 @@ do not exist in a table.</p>
 
 <p>Below table named bloom_test is created and bloom filters are enabled.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.5.0
 - instance name: instance
@@ -184,43 +184,39 @@ username@instance&gt; setauths -u username -s exampleVis
 username@instance&gt; createtable bloom_test
 username@instance bloom_test&gt; config -t bloom_test -s table.bloom.enabled=true
 username@instance bloom_test&gt; exit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below 1 million random values are inserted into accumulo. The randomly
 generated rows range between 0 and 1 billion. The random number generator is
 initialized with the seed 7.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --vis exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --vis exampleVis
+</code></pre></div></div>
 
 <p>Below the table is flushed:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test -w'
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test -w'
 05 10:40:06,069 [shell.Shell] INFO : Flush of table bloom_test completed.
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After the flush completes, 500 random queries are done against the table. The
 same seed is used to generate the queries; therefore, everything is found in the
 table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 96.19 lookups/sec   5.20 secs
 num results : 500
 Generating 500 random queries...finished
 102.35 lookups/sec   4.89 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below another 500 queries are performed, using a different seed, which results
 in nothing being found. In this case the lookups are much faster because of
 the bloom filters.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 8 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 -batchThreads 20 -auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 8 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 -batchThreads 20 -auths exampleVis
 Generating 500 random queries...finished
 2212.39 lookups/sec   0.23 secs
 num results : 0
@@ -229,8 +225,7 @@ Generating 500 random queries...finished
 4464.29 lookups/sec   0.11 secs
 num results : 0
 Did not find 500 rows
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <hr />
 
@@ -262,7 +257,7 @@ million inserts. If not, then more map files will be created.</p>
 
 <p>The commands for creating the first table without bloom filters are below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.5.0
 - instance name: instance
@@ -282,12 +277,11 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The commands for creating the second table with bloom filters are below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.5.0
 - instance name: instance
@@ -308,65 +302,60 @@ $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
 $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
 $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below 500 lookups are done against the table without bloom filters, using random
 number generator (RNG) seed 7. Even though only one map file will likely contain entries for this
 seed, all map files will be interrogated.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test1 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test1 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 35.09 lookups/sec  14.25 secs
 num results : 500
 Generating 500 random queries...finished
 35.33 lookups/sec  14.15 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below the same lookups are done against the table with bloom filters. The
 lookups were 2.86 times faster because only one map file was used, even though three
 map files existed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test2 --num 500 --min 0 --max 1000000000 --size 50 -scanThreads 20 --auths exampleVis
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test2 --num 500 --min 0 --max 1000000000 --size 50 -scanThreads 20 --auths exampleVis
 Generating 500 random queries...finished
 99.03 lookups/sec   5.05 secs
 num results : 500
 Generating 500 random queries...finished
 101.15 lookups/sec   4.94 secs
 num results : 500
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>You can verify the table has three files by looking in HDFS. To look in HDFS
 you will need the table ID, because this is used in HDFS instead of the table
 name. The following command will show table IDs.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'tables -l'
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password -e 'tables -l'
 accumulo.metadata    =&gt;        !0
 accumulo.root        =&gt;        +r
 bloom_test1          =&gt;        o7
 bloom_test2          =&gt;        o8
 trace                =&gt;         1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>So the table ID for bloom_test2 is o8. The command below shows what files this
 table has in HDFS. This assumes Accumulo is at the default location in HDFS.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -lsr /accumulo/tables/o8
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -lsr /accumulo/tables/o8
 drwxr-xr-x   - username supergroup          0 2012-01-10 14:02 /accumulo/tables/o8/default_tablet
 -rw-r--r--   3 username supergroup   52672650 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dj.rf
 -rw-r--r--   3 username supergroup   52436176 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dk.rf
 -rw-r--r--   3 username supergroup   52850173 2012-01-10 14:02 /accumulo/tables/o8/default_tablet/F00000dl.rf
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Running the rfile-info command shows that one of the files has a bloom filter
 and it is 1.5MB.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo rfile-info /accumulo/tables/o8/default_tablet/F00000dj.rf
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo rfile-info /accumulo/tables/o8/default_tablet/F00000dj.rf
 Locality group         : &lt;DEFAULT&gt;
 Start block          : 0
 Num   blocks         : 752
@@ -390,8 +379,7 @@ Meta block     : acu_bloom
   Raw size             : 1,540,292 bytes
   Compressed size      : 1,433,115 bytes
   Compression type     : gz
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.8/examples/bulkIngest.html b/1.8/examples/bulkIngest.html
index 97aeb3b..0f708de 100644
--- a/1.8/examples/bulkIngest.html
+++ b/1.8/examples/bulkIngest.html
@@ -173,14 +173,13 @@ table called test_bulk which has two initial split points. Then 1000 rows of
 test data are created in HDFS. After that the 1000 rows are ingested into
 accumulo. Then we verify the 1000 rows are in accumulo.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ PKG=org.apache.accumulo.examples.simple.mapreduce.bulk
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ PKG=org.apache.accumulo.examples.simple.mapreduce.bulk
 $ ARGS="-i instance -z zookeepers -u username -p password"
 $ ./bin/accumulo $PKG.SetupTable $ARGS -t test_bulk row_00000333 row_00000666
 $ ./bin/accumulo $PKG.GenerateTestData --start-row 0 --count 1000 --output bulk/test_1.txt
 $ ./bin/tool.sh lib/accumulo-examples-simple.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork
 $ ./bin/accumulo $PKG.VerifyIngest $ARGS -t test_bulk --start-row 0 --count 1000
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>For a high level discussion of bulk ingest, see the docs dir.</p>
 
diff --git a/1.8/examples/classpath.html b/1.8/examples/classpath.html
index 13972f8..07cf870 100644
--- a/1.8/examples/classpath.html
+++ b/1.8/examples/classpath.html
@@ -173,59 +173,52 @@ table reference that jar.</p>
 
 <p>Execute the following command in the shell.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/test/src/test/resources/FooFilter.jar /user1/lib
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/test/src/test/resources/FooFilter.jar /user1/lib
+</code></pre></div></div>
 
 <p>Execute following in Accumulo shell to setup classpath context</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15&gt; config -s general.vfs.context.classpath.cx1=hdfs://&lt;namenode host&gt;:&lt;namenode port&gt;/user1/lib/[^.].*.jar
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15&gt; config -s general.vfs.context.classpath.cx1=hdfs://&lt;namenode host&gt;:&lt;namenode port&gt;/user1/lib/[^.].*.jar
+</code></pre></div></div>
 
 <p>Create a table</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15&gt; createtable nofoo
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15&gt; createtable nofoo
+</code></pre></div></div>
 
 <p>The following command makes this table use the configured classpath context</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15 nofoo&gt; config -t nofoo -s table.classpath.context=cx1
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15 nofoo&gt; config -t nofoo -s table.classpath.context=cx1
+</code></pre></div></div>
 
 <p>The following command configures an iterator thats in FooFilter.jar</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15 nofoo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15 nofoo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
 Filter accepts or rejects each Key/Value pair
 ----------&gt; set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The commands below show the filter is working.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15 nofoo&gt; insert foo1 f1 q1 v1
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15 nofoo&gt; insert foo1 f1 q1 v1
 root@test15 nofoo&gt; insert noo1 f1 q1 v2
 root@test15 nofoo&gt; scan
 noo1 f1:q1 []    v2
 root@test15 nofoo&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, an attempt is made to add the FooFilter to a table thats not configured
 to use the clasppath context cx1. This fails util the table is configured to
 use cx1.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15 nofoo&gt; createtable nofootwo
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15 nofoo&gt; createtable nofootwo
 root@test15 nofootwo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
 2013-05-03 12:49:35,943 [shell.Shell] ERROR: java.lang.IllegalArgumentException: org.apache.accumulo.test.FooFilter
 root@test15 nofootwo&gt; config -t nofootwo -s table.classpath.context=cx1
 root@test15 nofootwo&gt; setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
 Filter accepts or rejects each Key/Value pair
 ----------&gt; set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.8/examples/client.html b/1.8/examples/client.html
index 9ceeb3c..e13a9a6 100644
--- a/1.8/examples/client.html
+++ b/1.8/examples/client.html
@@ -180,15 +180,14 @@ Notice:    Licensed to the Apache Software Foundation (ASF) under one
 class name, and enough arguments to find your accumulo instance. For example,
 the Flush class will flush a table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ PACKAGE=org.apache.accumulo.examples.simple.client
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ PACKAGE=org.apache.accumulo.examples.simple.client
 $ bin/accumulo $PACKAGE.Flush -u root -p mypassword -i instance -z zookeeper -t trace
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The very simple RowOperations class demonstrates how to read and write rows using the BatchWriter
 and Scanner:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.RowOperations -u root -p mypassword -i instance -z zookeeper
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.RowOperations -u root -p mypassword -i instance -z zookeeper
 2013-01-14 14:45:24,738 [client.RowOperations] INFO : This is everything
 2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:1 [] 1358192724640 false Value: This is the value for this key
 2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:2 [] 1358192724642 false Value: This is the value for this key
@@ -216,12 +215,11 @@ and Scanner:</p>
 2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:2 [] 1358192724642 false Value: This is the value for this key
 2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:3 [] 1358192724642 false Value: This is the value for this key
 2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:4 [] 1358192724642 false Value: This is the value for this key
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To create a table, write to it and read from it:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.ReadWriteExample -u root -p mypassword -i instance -z zookeeper --createtable --create --read
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo $PACKAGE.ReadWriteExample -u root -p mypassword -i instance -z zookeeper --createtable --create --read
 hello%00; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%01; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%02; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
@@ -232,8 +230,7 @@ hello%06; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%07; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%08; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
 hello%09; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -&gt; world
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.8/examples/combiner.html b/1.8/examples/combiner.html
index eeab032..6127f87 100644
--- a/1.8/examples/combiner.html
+++ b/1.8/examples/combiner.html
@@ -176,7 +176,7 @@ Notice:    Licensed to the Apache Software Foundation (ASF) under one
 copy the produced jar into the accumulo lib dir. This is already done in the
 tar distribution.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/accumulo shell -u username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo shell -u username
 Enter current password for 'username'@'instance': ***
 
 Shell - Apache Accumulo Interactive Shell
@@ -214,8 +214,7 @@ username@instance runners&gt; scan
 123456 hstat:virtualMarathon []    6a,6b,d5,2
 123456 name:first []    Joe
 123456 stat:marathon []    220,240,690,3
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>In this example a table is created and the example stats combiner is applied to
 the column family stat and hstat. The stats combiner computes min,max,sum, and
diff --git a/1.8/examples/constraints.html b/1.8/examples/constraints.html
index a279a05..01d88fe 100644
--- a/1.8/examples/constraints.html
+++ b/1.8/examples/constraints.html
@@ -179,7 +179,7 @@ numeric keys. The other constraint does not allow non numeric values. Two
 inserts that violate these constraints are attempted and denied. The scan at
 the end shows the inserts were not allowed.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 
 Shell - Apache Accumulo Interactive Shell
 -
@@ -203,8 +203,7 @@ username@instance testConstraints&gt; insert r1! cf1 cq1 ABC
 username@instance testConstraints&gt; scan
 r1 cf1:cq1 []    1111
 username@instance testConstraints&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.8/examples/dirlist.html b/1.8/examples/dirlist.html
index 2b6aa90..bfba134 100644
--- a/1.8/examples/dirlist.html
+++ b/1.8/examples/dirlist.html
@@ -185,9 +185,8 @@ Notice:    Licensed to the Apache Software Foundation (ASF) under one
 
 <p>To begin, ingest some data with Ingest.java.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Ingest -i instance -z zookeepers -u username -p password --vis exampleVis --chunkSize 100000 /local/username/workspace
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Ingest -i instance -z zookeepers -u username -p password --vis exampleVis --chunkSize 100000 /local/username/workspace
+</code></pre></div></div>
 
 <p>This may take some time if there are large files in the /local/username/workspace directory. If you use 0 instead of 100000 on the command line, the ingest will run much faster, but it will not put any file data into Accumulo (the dataTable will be empty).
 Note that running this example will create tables dirTable, indexTable, and dataTable in Accumulo that you should delete when you have completed the example.
@@ -195,46 +194,41 @@ If you modify a file or add new files in the directory ingested (e.g. /local/use
 
 <p>To browse the data ingested, use Viewer.java. Be sure to give the “username” user the authorizations to see the data (in this case, run</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e 'setauths -u username -s exampleVis'
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u root -e 'setauths -u username -s exampleVis'
+</code></pre></div></div>
 
 <p>then run the Viewer:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Viewer -i instance -z zookeepers -u username -p password -t dirTable --dataTable dataTable --auths exampleVis --path /local/username/workspace
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Viewer -i instance -z zookeepers -u username -p password -t dirTable --dataTable dataTable --auths exampleVis --path /local/username/workspace
+</code></pre></div></div>
 
 <p>To list the contents of specific directories, use QueryUtil.java.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username/workspace
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To perform searches on file or directory names, also use QueryUtil.java. Search terms must contain no more than one wild card and cannot contain “/”.
 <em>Note</em> these queries run on the <em>indexTable</em> table instead of the dirTable table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path filename --search
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path filename --search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path 'filename*' --search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path '*jar' --search
 $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path 'filename*jar' --search
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>To count the number of direct children (directories and files) and descendants (children and children’s descendants, directories and files), run the FileCount over the dirTable table.
 The results are written back to the same table. FileCount reads from and writes to Accumulo. This requires scan authorizations for the read and a visibility for the data written.
 In this example, the authorizations and visibility are set to the same value, exampleVis. See README.visibility for more information on visibility and authorizations.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.FileCount -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.FileCount -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis
+</code></pre></div></div>
 
 <h2 id="directory-table">Directory Table</h2>
 
 <p>Here is a illustration of what data looks like in the directory table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]	value
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]	value
 000 dir:exec [exampleVis]    true
 000 dir:hidden [exampleVis]    false
 000 dir:lastmod [exampleVis]    1291996886000
@@ -248,8 +242,7 @@ In this example, the authorizations and visibility are set to the same value, ex
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:lastmod [exampleVis]    1308746481000
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:length [exampleVis]    9192
 002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:md5 [exampleVis]    274af6419a3c4c4a259260ac7017cbf1
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The rows are of the form depth + path, where depth is the number of slashes (“/”) in the path padded to 3 digits. This is so that all the children of a directory appear as consecutive keys in Accumulo; without the depth, you would for example see all the subdirectories of /local before you saw /usr.
 For directories the column family is “dir”. For files the column family is Long.MAX_VALUE - lastModified in bytes rather than string format so that newer versions sort earlier.</p>
@@ -258,13 +251,12 @@ For directories the column family is “dir”. For files the column family is L
 
 <p>Here is an illustration of what data looks like in the index table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]
 fAccumulo.README i:002/local/Accumulo.README [exampleVis]
 flocal i:001/local [exampleVis]
 rEMDAER.olumuccA i:002/local/Accumulo.README [exampleVis]
 rlacol i:001/local [exampleVis]
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The values of the index table are null. The rows are of the form “f” + filename or “r” + reverse file name. This is to enable searches with wildcards at the beginning, middle, or end.</p>
 
@@ -272,13 +264,12 @@ rlacol i:001/local [exampleVis]
 
 <p>Here is an illustration of what data looks like in the data table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>row colf:colq [vis]	value
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>row colf:colq [vis]	value
 274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00filext [exampleVis]    README
 274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00name [exampleVis]    /local/Accumulo.README
 274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x00 [exampleVis]    *******************************************************************************\x0A1. Building\x0A\x0AIn the normal tarball release of accumulo, [truncated]
 274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x01 [exampleVis]
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The rows are the md5 hash of the file. Some column family : column qualifier pairs are “refs” : hash of file name + null byte + property name, in which case the value is property value. There can be multiple references to the same file which are distinguished by the hash of the file name.
 Other column family : column qualifier pairs are “~chunk” : chunk size in bytes + chunk number in bytes, in which case the value is the bytes for that chunk of the file. There is an end of file data marker whose chunk number is the number of chunks for the file and whose value is empty.</p>
diff --git a/1.8/examples/export.html b/1.8/examples/export.html
index 6314ea9..08b9118 100644
--- a/1.8/examples/export.html
+++ b/1.8/examples/export.html
@@ -174,7 +174,7 @@ the table. A table must be offline to export it, and it should remain offline
 for the duration of the distcp. An easy way to take a table offline without
 interuppting access to it is to clone it and take the clone offline.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15&gt; createtable table1
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15&gt; createtable table1
 root@test15 table1&gt; insert a cf1 cq1 v1
 root@test15 table1&gt; insert h cf1 cq1 v2
 root@test15 table1&gt; insert z cf1 cq1 v3
@@ -190,34 +190,31 @@ root@test15 table1&gt; clonetable table1 table1_exp
 root@test15 table1&gt; offline table1_exp
 root@test15 table1&gt; exporttable -t table1_exp /tmp/table1_export
 root@test15 table1&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After executing the export command, a few files are created in the hdfs dir.
 One of the files is a list of files to distcp as shown below.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -ls /tmp/table1_export
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -ls /tmp/table1_export
 Found 2 items
 -rw-r--r--   3 user supergroup        162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
 -rw-r--r--   3 user supergroup        821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
 $ hadoop fs -cat /tmp/table1_export/distcp.txt
 hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
 hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Before the table can be imported, it must be copied using distcp. After the
 discp completed, the cloned table may be deleted.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
+</code></pre></div></div>
 
 <p>The Accumulo shell session below shows importing the table and inspecting it.
 The data, splits, config, and logical time information for the table were
 preserved.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>root@test15&gt; importtable table1_copy /tmp/table1_export_dest
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15&gt; importtable table1_copy /tmp/table1_export_dest
 root@test15&gt; table table1_copy
 root@test15 table1_copy&gt; scan
 a cf1:cq1 []    v1
@@ -243,8 +240,7 @@ root@test15 table1_copy&gt; scan -t accumulo.metadata -b 5 -c srv:time
 5;b srv:time []    M1343224500467
 5;r srv:time []    M1343224500467
 5&lt; srv:time []    M1343224500467
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.8/examples/filedata.html b/1.8/examples/filedata.html
index 0800087..fb65cb4 100644
--- a/1.8/examples/filedata.html
+++ b/1.8/examples/filedata.html
@@ -194,27 +194,23 @@ The example has the following classes:</p>
 
 <p>If you haven’t already run the README.dirlist example, ingest a file with FileDataIngest.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --chunk 1000 $ACCUMULO_HOME/README
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --chunk 1000 $ACCUMULO_HOME/README
+</code></pre></div></div>
 
 <p>Open the accumulo shell and look at the data. The row is the MD5 hash of the file, which you can verify by running a command such as ‘md5sum’ on the file.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>&gt; scan -t dataTable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; scan -t dataTable
+</code></pre></div></div>
 
 <p>Run the CharacterHistogram MapReduce to add some information about the file.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --vis exampleVis
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --vis exampleVis
+</code></pre></div></div>
 
 <p>Scan again to see the histogram stored in the ‘info’ column family.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>&gt; scan -t dataTable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; scan -t dataTable
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.8/examples/filter.html b/1.8/examples/filter.html
index 8f13b0e..ec3f130 100644
--- a/1.8/examples/filter.html
+++ b/1.8/examples/filter.html
@@ -176,7 +176,7 @@ Filter takes a “negate” parameter which defaults to false. If set to true, t
 return value of the accept method is negated, so that key/value pairs accepted
 by the method are omitted by the Filter.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable filtertest
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable filtertest
 username@instance filtertest&gt; setiter -t filtertest -scan -p 10 -n myfilter -ageoff
 AgeOffFilter removes entries with timestamps more than &lt;ttl&gt; milliseconds old
 ----------&gt; set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method:
@@ -187,15 +187,13 @@ username@instance filtertest&gt; insert foo a b c
 username@instance filtertest&gt; scan
 foo a:b []    c
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>… wait 30 seconds …</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; scan
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; scan
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Note the absence of the entry inserted more than 30 seconds ago. Since the
 scope was set to “scan”, this means the entry is still in Accumulo, but is
@@ -213,7 +211,7 @@ AgeOffFilter, but any Filter can be configured by using the -class flag. The
 following commands show how to enable the AgeOffFilter for the minc and majc
 scopes using the -class flag, then flush and compact the table.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; setiter -t filtertest -minc -majc -p 10 -n myfilter -class org.apache.accumulo.core.iterators.user.AgeOffFilter
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; setiter -t filtertest -minc -majc -p 10 -n myfilter -class org.apache.accumulo.core.iterators.user.AgeOffFilter
 AgeOffFilter removes entries with timestamps more than &lt;ttl&gt; milliseconds old
 ----------&gt; set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method:
 ----------&gt; set AgeOffFilter parameter ttl, time to live (milliseconds): 30000
@@ -228,8 +226,7 @@ username@instance filtertest&gt; compact -t filtertest -w
 06 10:43:00,632 [shell.Shell] INFO : Compacting table ...
 06 10:43:01,307 [shell.Shell] INFO : Compaction of table filtertest completed for given range
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>By default, flush and compact execute in the background, but with the -w flag
 they will wait to return until the operation has completed. Both are
@@ -242,7 +239,7 @@ the old files.</p>
 
 <p>To see the iterator settings for a table, use config.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance filtertest&gt; config -t filtertest -f iterator
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance filtertest&gt; config -t filtertest -f iterator
 ---------+---------------------------------------------+---------------------------------------------------------------------------
 SCOPE    | NAME                                        | VALUE
 ---------+---------------------------------------------+---------------------------------------------------------------------------
@@ -260,8 +257,7 @@ table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.
 table    | table.iterator.scan.vers.opt.maxVersions .. | 1
 ---------+---------------------------------------------+---------------------------------------------------------------------------
 username@instance filtertest&gt;
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>When setting new iterators, make sure to order their priority numbers
 (specified with -p) in the order you would like the iterators to be applied.
diff --git a/1.8/examples/helloworld.html b/1.8/examples/helloworld.html
index 5a6ec93..e28f293 100644
--- a/1.8/examples/helloworld.html
+++ b/1.8/examples/helloworld.html
@@ -175,40 +175,34 @@ Notice:    Licensed to the Apache Software Foundation (ASF) under one
 
 <p>Log into the accumulo shell:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+</code></pre></div></div>
 
 <p>Create a table called ‘hellotable’:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; createtable hellotable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; createtable hellotable
+</code></pre></div></div>
 
 <p>Launch a Java program that inserts data with a BatchWriter:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithBatchWriter -i instance -z zookeepers -u username -p password -t hellotable
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithBatchWriter -i instance -z zookeepers -u username -p password -t hellotable
+</code></pre></div></div>
 
 <p>On the accumulo status page at the URL below (where ‘master’ is replaced with the name or IP of your accumulo master), you should see 50K entries</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>http://master:9995/
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>http://master:9995/
+</code></pre></div></div>
 
 <p>To view the entries, use the shell to scan the table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance&gt; table hellotable
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>username@instance&gt; table hellotable
 username@instance hellotable&gt; scan
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>You can also use a Java class to scan the table:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.ReadData -i instance -z zookeepers -u username -p password -t hellotable --startKey row_0 --endKey row_1001
-</code></pre>
-</div>
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.ReadData -i instance -z zookeepers -u username -p password -t hellotable --startKey row_0 --endKey row_1001
+</code></pre></div></div>
 
         </div>
 
diff --git a/1.8/examples/isolation.html b/1.8/examples/isolation.html
index c9a159b..8d2b38a 100644
--- a/1.8/examples/isolation.html
+++ b/1.8/examples/isolation.html
@@ -180,7 +180,7 @@ reading the row at the same time a mutation is changing the row.</p>
 <p>Below, Interference Test is run without isolation enabled for 5000 iterations
 and it reports problems.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000
 ERROR Columns in row 053 had multiple values [53, 4553]
 ERROR Columns in row 061 had multiple values [561, 61]
 ERROR Columns in row 070 had multiple values [570, 1070]
@@ -189,16 +189,14 @@ ERROR Columns in row 088 had multiple values [2588, 1588]
 ERROR Columns in row 106 had multiple values [2606, 3106]
 ERROR Columns in row 115 had multiple values [4615, 3115]
 finished
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>Below, Interference Test is run with isolation enabled for 5000 iterations and
 it reports no problems.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000 --isolated
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000 --isolated
 finished
-</code></pre>
-</div>
+</code></pre></div></div>
 
 
         </div>
diff --git a/1.8/examples/mapred.html b/1.8/examples/mapred.html
index 54e59eb..7fa3cf3 100644
--- a/1.8/examples/mapred.html
+++ b/1.8/examples/mapred.html
@@ -173,17 +173,16 @@ accumulo table with combiners.</p>
 <p>To run this example you will need a directory in HDFS containing text files.
 The accumulo readme will be used to show how to run this example.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
 $ hadoop fs -ls /user/username/wc
 Found 1 items
 -rw-r--r--   2 username supergroup       9359 2009-07-15 17:54 /user/username/wc/Accumulo.README
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>The first part of running this example is to create a table with a combiner
 for the column family count.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
 Shell - Apache Accumulo Interactive Shell
 - version: 1.5.0
 - instance name: instance
@@ -199,12 +198,11 @@ SummingCombiner interprets Values as Longs and adds them together. A variety of
 ----------&gt; set SummingCombiner parameter lossy, if true, failed decodes are ignored. Otherwise combiner will error on failed decodes (default false): &lt;TRUE|FALSE&gt;: false
 ----------&gt; set SummingCombiner parameter type, &lt;VARLEN|FIXEDLEN|STRING|fullClassName&gt;: STRING
 username@instance wordCount&gt; quit
-</code></pre>
-</div>
+</code></pre></div></div>
 
 <p>After creating the table, run the word count map reduce job.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -p password
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -p password
 
 11/02/07 18:20:11 INFO input.FileInputFormat: Total input paths to process : 1
 11/02/07 18:20:12 INFO mapred.JobClient: Running job: job_201102071740_0003
@@ -221,13 +219,12 @@ username@instance wordCount&gt; quit
 11/02/07 18:20:22 INFO mapred.JobClient:     Map input records=255
... 6359 lines suppressed ...

-- 
To stop receiving notification emails like this one, please contact
"commits@accumulo.apache.org" <commits@accumulo.apache.org>.
