accumulo-commits mailing list archives

From: bil...@apache.org
Subject: svn commit: r1238713 - /incubator/accumulo/site/trunk/content/accumulo/example/wikisearch.mdtext
Date: Tue, 31 Jan 2012 17:50:06 GMT
Author: billie
Date: Tue Jan 31 17:50:06 2012
New Revision: 1238713

URL: http://svn.apache.org/viewvc?rev=1238713&view=rev
Log:
added padding in the tables

Modified:
    incubator/accumulo/site/trunk/content/accumulo/example/wikisearch.mdtext

Modified: incubator/accumulo/site/trunk/content/accumulo/example/wikisearch.mdtext
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/example/wikisearch.mdtext?rev=1238713&r1=1238712&r2=1238713&view=diff
==============================================================================
--- incubator/accumulo/site/trunk/content/accumulo/example/wikisearch.mdtext (original)
+++ incubator/accumulo/site/trunk/content/accumulo/example/wikisearch.mdtext Tue Jan 31 17:50:06 2012
@@ -32,6 +32,10 @@ The example uses an indexing technique h
 
 In the example, Accumulo tracks the cardinality of all terms as elements are ingested.  If the cardinality is small enough, it will track the set of documents by term directly.  For example:
 
+<style type="text/css">
+table td,th {padding-right: 10px;}
+</style>
+
 <table>
 <tr>
 <th>Row (word)</th>
@@ -274,7 +278,7 @@ For comparison, these are the cold start
 <td>8.13
 </table>
 
-Random Query Load
+### Random Query Load
 
 Random queries were generated using common English words.  A uniform random sample of 3 to 5 words taken from the 10000 most common words in Project Gutenberg's online text collection were joined with “and”.  Words containing anything other than letters (such as contractions) were not used.  A client was started simultaneously on each of the 10 servers and each ran 100 random queries (1000 queries total).
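
A sketch of how such a query generator could look in Java (placeholder code under the assumptions stated in that paragraph, not the example's actual test harness):

```java
import java.util.List;
import java.util.Random;
import java.util.StringJoiner;

public class RandomQueryGen {
    // Draws 3 to 5 letters-only words uniformly from the common-word list
    // and joins them with "and".
    public static String randomQuery(List<String> commonWords, Random rnd) {
        int terms = 3 + rnd.nextInt(3); // uniform in [3, 5]
        StringJoiner query = new StringJoiner(" and ");
        for (int i = 0; i < terms; i++) {
            String word;
            do {
                word = commonWords.get(rnd.nextInt(commonWords.size()));
            } while (!word.matches("[a-zA-Z]+")); // skip contractions etc.
            query.add(word);
        }
        return query.toString();
    }
}
```

Each of the 10 clients would then issue 100 such queries against its server.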
 
@@ -297,7 +301,7 @@ Random queries were generated using comm
 <td>275655
 </table>
 
-Query Load During Ingest
+### Query Load During Ingest
 
 The English Wikipedia data was re-ingested on top of the existing, compacted data. The following query samples were taken in 5-minute intervals while ingesting 132 articles/second:
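
A rough sketch of that sampling loop (placeholder names, not the example's harness): with ingest running elsewhere, time one query every 5 minutes and report its latency.

```java
public class QuerySampler {
    public static void sample(Runnable runQuery, int samples) throws InterruptedException {
        for (int i = 0; i < samples; i++) {
            long start = System.nanoTime();
            runQuery.run();                          // issue one random query
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println("sample " + i + ": " + ms + " ms");
            Thread.sleep(5L * 60 * 1000);            // 5-minute interval
        }
    }
}
```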
 


