Return-Path:
Delivered-To: apmail-lucene-java-commits-archive@www.apache.org
Received: (qmail 47240 invoked from network); 23 Mar 2007 17:56:32 -0000
Received: from hermes.apache.org (HELO mail.apache.org) (140.211.11.2)
by minotaur.apache.org with SMTP; 23 Mar 2007 17:56:32 -0000
Received: (qmail 89462 invoked by uid 500); 23 Mar 2007 17:56:40 -0000
Delivered-To: apmail-lucene-java-commits-archive@lucene.apache.org
Received: (qmail 89424 invoked by uid 500); 23 Mar 2007 17:56:40 -0000
Mailing-List: contact java-commits-help@lucene.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Reply-To: java-dev@lucene.apache.org
Delivered-To: mailing list java-commits@lucene.apache.org
Received: (qmail 89407 invoked by uid 99); 23 Mar 2007 17:56:40 -0000
Received: from herse.apache.org (HELO herse.apache.org) (140.211.11.133)
by apache.org (qpsmtpd/0.29) with ESMTP; Fri, 23 Mar 2007 10:56:40 -0700
X-ASF-Spam-Status: No, hits=-99.5 required=10.0
tests=ALL_TRUSTED,NO_REAL_NAME
X-Spam-Check-By: apache.org
Received: from [140.211.11.3] (HELO eris.apache.org) (140.211.11.3)
by apache.org (qpsmtpd/0.29) with ESMTP; Fri, 23 Mar 2007 10:56:30 -0700
Received: by eris.apache.org (Postfix, from userid 65534)
id 942791A983A; Fri, 23 Mar 2007 10:56:10 -0700 (PDT)
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Subject: svn commit: r521830 - in
/lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask:
./ feeds/ tasks/ utils/
Date: Fri, 23 Mar 2007 17:56:10 -0000
To: java-commits@lucene.apache.org
From: doronc@apache.org
X-Mailer: svnmailer-1.1.0
Message-Id: <20070323175610.942791A983A@eris.apache.org>
X-Virus-Checked: Checked by ClamAV on apache.org
Author: doronc
Date: Fri Mar 23 10:56:09 2007
New Revision: 521830
URL: http://svn.apache.org/viewvc?view=rev&rev=521830
Log:
Documentation updates following LUCENE-837.
Modified:
lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/Benchmark.java
lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/feeds/BasicDocMaker.java
lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/package.html
lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/tasks/SearchTravRetLoadFieldSelectorTask.java
lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/tasks/SearchTravRetTask.java
lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/tasks/SearchTravTask.java
lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/tasks/WarmTask.java
lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/utils/Config.java
Modified: lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/Benchmark.java
URL: http://svn.apache.org/viewvc/lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/Benchmark.java?view=diff&rev=521830&r1=521829&r2=521830
==============================================================================
--- lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/Benchmark.java (original)
+++ lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/Benchmark.java Fri Mar 23 10:56:09 2007
@@ -65,7 +65,7 @@
public synchronized void execute() throws Exception {
if (executed) {
- throw new Exception("Benchmark was already executed");
+ throw new IllegalStateException("Benchmark was already executed");
}
executed = true;
algorithm.execute();
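The change above swaps a generic checked Exception for the unchecked IllegalStateException when execute() is called twice. A minimal, self-contained sketch of that guard pattern (RunOnce is an invented name for illustration, not the actual Lucene class):

```java
// Minimal sketch (not the actual Lucene class) of the guard this change
// adopts: a repeated execute() now signals a programming error with the
// unchecked IllegalStateException instead of a generic checked Exception.
public class RunOnce {
    private boolean executed = false;

    public synchronized void execute() {
        if (executed) {
            throw new IllegalStateException("Benchmark was already executed");
        }
        executed = true;
        // ... the benchmark algorithm would run here ...
    }

    public static void main(String[] args) {
        RunOnce r = new RunOnce();
        r.execute(); // first call succeeds
        try {
            r.execute(); // second call is rejected
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```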
Modified: lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/feeds/BasicDocMaker.java
URL: http://svn.apache.org/viewvc/lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/feeds/BasicDocMaker.java?view=diff&rev=521830&r1=521829&r2=521830
==============================================================================
--- lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/feeds/BasicDocMaker.java (original)
+++ lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/feeds/BasicDocMaker.java Fri Mar 23 10:56:09 2007
@@ -41,7 +41,7 @@
* doc.stored=true|FALSE
* doc.tokenized=TRUE|false
* doc.term.vector=true|FALSE
- * doc.store.bytes=true|FALSE //Store the body contents raw UTF-8 bytes as a field
+ * doc.store.body.bytes=true|FALSE //Store the body contents raw UTF-8 bytes as a field
*/
public abstract class BasicDocMaker implements DocMaker {
Modified: lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/package.html
URL: http://svn.apache.org/viewvc/lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/package.html?view=diff&rev=521830&r1=521829&r2=521830
==============================================================================
--- lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/package.html (original)
+++ lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/package.html Fri Mar 23 10:56:09 2007
@@ -64,8 +64,9 @@
Benchmark "algorithm"
Supported tasks/commands
Benchmark properties
- Example input algorithm and the result benchmark
- report.
+ Example input algorithm and the result benchmark
+ report.
+ Results record counting clarified
@@ -75,12 +76,12 @@
-A benchmark is composed of some predefined tasks, allowing for creating an
-index, adding documents,
-optimizing, searching, generating reports, and more. A benchmark run takes an
-"algorithm" file
-that contains a description of the sequence of tasks making up the run, and some
-properties defining a few
+A benchmark is composed of some predefined tasks, allowing for creating an
+index, adding documents,
+optimizing, searching, generating reports, and more. A benchmark run takes an
+"algorithm" file
+that contains a description of the sequence of tasks making up the run, and some
+properties defining a few
additional characteristics of the benchmark run.
@@ -99,30 +100,30 @@
- would run your perf test
"algorithm".
java org.apache.lucene.benchmark.byTask.programmatic.Sample
-
- would run a performance test programmatically - without using an alg
- file. This is less readable, and less convinient, but possible.
+
- would run a performance test programmatically - without using an alg
+ file. This is less readable, and less convenient, but possible.
-You may find existing tasks sufficient for defining the benchmark you
-need, otherwise, you can extend the framework to meet your needs, as explained
-herein.
+You may find existing tasks sufficient for defining the benchmark you
+need, otherwise, you can extend the framework to meet your needs, as explained
+herein.
-Each benchmark run has a DocMaker and a QueryMaker. These two should usually
-match, so that "meaningful" queries are used for a certain collection.
-Properties set at the header of the alg file define which "makers" should be
-used. You can also specify your own makers, implementing the DocMaker and
-QureyMaker interfaces.
+Each benchmark run has a DocMaker and a QueryMaker. These two should usually
+match, so that "meaningful" queries are used for a certain collection.
+Properties set at the header of the alg file define which "makers" should be
+used. You can also specify your own makers, implementing the DocMaker and
+ QueryMaker interfaces.
-Benchmark .alg file contains the benchmark "algorithm". The syntax is described
-below. Within the algorithm, you can specify groups of commands, assign them
-names, specify commands that should be repeated,
+Benchmark .alg file contains the benchmark "algorithm". The syntax is described
+below. Within the algorithm, you can specify groups of commands, assign them
+names, specify commands that should be repeated,
do commands in serial or in parallel,
and also control the speed of "firing" the commands.
@@ -157,10 +158,10 @@
-
- Measuring: When a command is executed, statistics for the elapsed
- execution time and memory consumption are collected.
- At any time, those statistics can be printed, using one of the
- available ReportTasks.
+ Measuring: When a command is executed, statistics for the elapsed
+ execution time and memory consumption are collected.
+ At any time, those statistics can be printed, using one of the
+ available ReportTasks.
-
Comments start with '#'.
@@ -169,98 +170,102 @@
Serial sequences are enclosed within '{ }'.
-
- Parallel sequences are enclosed within
- '[ ]'
+ Parallel sequences are enclosed within
+ '[ ]'
-
- Sequence naming: To name a sequence, put
- '"name"' just after
- '{' or '['.
-
Example - { "ManyAdds" AddDoc } : 1000000 -
- would
- name the sequence of 1M add docs "ManyAdds", and this name would later appear
- in statistic reports.
- If you don't specify a name for a sequence, it is given one: you can see it as
- the algorithm is printed just before benchmark execution starts.
+ Sequence naming: To name a sequence, put
+ '"name"' just after
+ '{' or '['.
+
+ Example - { "ManyAdds" AddDoc } : 1000000 -
+ would
+ name the sequence of 1M add docs "ManyAdds", and this name would later appear
+ in statistic reports.
+ If you don't specify a name for a sequence, it is given one: you can see it as
+ the algorithm is printed just before benchmark execution starts.
-
Repeating:
- To repeat sequence tasks N times, add ': N' just
- after the
- sequence closing tag - '}' or
- ']' or '>'.
-
Example - [ AddDoc ] : 4 - would do 4 addDoc
- in parallel, spawning 4 threads at once.
-
Example - [ AddDoc AddDoc ] : 4 - would do
- 8 addDoc in parallel, spawning 8 threads at once.
-
Example - { AddDoc } : 30 - would do addDoc
- 30 times in a row.
-
Example - { AddDoc AddDoc } : 30 - would do
- addDoc 60 times in a row.
-
- -
- Command parameter: a command can take a single parameter.
- If the certain command does not support a parameter, or if the parameter is of
- the wrong type,
+ To repeat sequence tasks N times, add ': N' just
+ after the
+ sequence closing tag - '}' or
+ ']' or '>'.
+
+ Example - [ AddDoc ] : 4 - would do 4 addDoc
+ in parallel, spawning 4 threads at once.
+
+ Example - [ AddDoc AddDoc ] : 4 - would do
+ 8 addDoc in parallel, spawning 8 threads at once.
+
+ Example - { AddDoc } : 30 - would do addDoc
+ 30 times in a row.
+
+ Example - { AddDoc AddDoc } : 30 - would do
+ addDoc 60 times in a row.
+
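A hedged sketch of how the constructs above combine in an .alg file (the specific tasks and nesting are illustrative assumptions based on the syntax rules described here, not content from this commit):

```
# illustrative .alg fragment: a named serial sequence repeated 3 times,
# each repetition spawning 4 parallel AddDoc threads
{ "SampleBatch"
    [ "FourAdders" AddDoc ] : 4
} : 3
```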
+ -
+ Command parameter: a command can optionally take a single parameter.
+ If a command does not support a parameter, or if the parameter is of
+ the wrong type,
reading the algorithm will fail with an exception and the test will not start.
- Currently the following tasks take parameters:
-
- - AddDoc takes a numeric parameter, indicating the required size of
- added document. Note: if the DocMaker implementation used in the test
- does not support makeDoc(size), an exception would be thrown and the test
- would fail.
-
- - DeleteDoc takes numeric parameter, indicating the docid to be
- deleted. The latter is not very useful for loops, since the docid is
- fixed, so for deletion in loops it is better to use the
- doc.delete.step
- property.
-
- - SetProp takes a "name,value" param, ',' used as a separator.
-
- - SearchTravRetTask and SearchTravTask take a numeric
- parameter, indicating the required traversal size.
-
-
-
Example - AddDoc(2000) - would add a document
- of size 2000 (~bytes).
-
See conf/task-sample.alg for how this can be used, for instance, to check
- which is faster, adding
+ Currently the following tasks take optional parameters:
+
+ - AddDoc takes a numeric parameter, indicating the required size of
+ added document. Note: if the DocMaker implementation used in the test
+ does not support makeDoc(size), an exception would be thrown and the test
+ would fail.
+
+ - DeleteDoc takes a numeric parameter, indicating the docid to be
+ deleted. The latter is not very useful for loops, since the docid is
+ fixed, so for deletion in loops it is better to use the
+ doc.delete.step
+ property.
+
+ - SetProp takes a mandatory name,value param,
+ ',' used as a separator.
+
+ - SearchTravRetTask and SearchTravTask take a numeric
+ parameter, indicating the required traversal size.
+
+ - SearchTravRetLoadFieldSelectorTask takes a string
+ parameter: a comma separated list of Fields to load.
+
+
+
+ Example - AddDoc(2000) - would add a document
+ of size 2000 (~bytes).
+
+ See conf/task-sample.alg for how this can be used, for instance, to check
+ which is faster, adding
many smaller documents, or few larger documents.
- Next candidates for supporting a parameter may be the Search tasks,
- for controlling the qurey size.
+ Next candidates for supporting a parameter may be the Search tasks,
+ for controlling the query size.
-
- Statistic recording elimination: - a sequence can also end with
- '>',
+ Statistic recording elimination: - a sequence can also end with
+ '>',
in which case child tasks would not store their statistics.
This can be useful to avoid exploding stats data, for adding say 1M docs.
Example - { "ManyAdds" AddDoc > : 1000000 -
would add a million docs, measure that total, but not save stats for each addDoc.
-
Notice that the granularity of System.currentTimeMillis() (which is used
- here) is system dependant,
- and in some systems an operation that takes 5 ms to complete may show 0 ms
- latency time in performance measurements.
- Therefore it is sometimes more accurate to look at the elapsed time of a larger
- sequence, as demonstrated here.
+
+ Notice that the granularity of System.currentTimeMillis() (which is used
+ here) is system dependent,
+ and in some systems an operation that takes 5 ms to complete may show 0 ms
+ latency time in performance measurements.
+ Therefore it is sometimes more accurate to look at the elapsed time of a larger
+ sequence, as demonstrated here.
-
Rate:
- To set a rate (ops/sec or ops/min) for a sequence, add
- ': N : R' just after sequence closing tag.
+ To set a rate (ops/sec or ops/min) for a sequence, add
+ ': N : R' just after sequence closing tag.
This would specify repetition of N with rate of R operations/sec.
- Use 'R/sec' or
- 'R/min'
+ Use 'R/sec' or
+ 'R/min'
to explicitly specify that the rate is per second or per minute.
The default is per second.
-
Example - [ AddDoc ] : 400 : 3 - would do 400
- addDoc in parallel, starting up to 3 threads per second.
-
Example - { AddDoc } : 100 : 200/min - would
- do 100 addDoc serially,
+
+ Example - [ AddDoc ] : 400 : 3 - would do 400
+ addDoc in parallel, starting up to 3 threads per second.
+
+ Example - { AddDoc } : 100 : 200/min - would
+ do 100 addDoc serially,
waiting before starting next add, if otherwise rate would exceed 200 adds/min.
-
- Command names: Each class "AnyNameTask" in the
- package org.apache.lucene.benchmark.byTask.tasks,
+ Command names: Each class "AnyNameTask" in the
+ package org.apache.lucene.benchmark.byTask.tasks,
that extends PerfTask, is supported as command "AnyName" that can be
used in the benchmark "algorithm" description.
This makes it possible to add new commands simply by adding such classes.
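The "AnyNameTask" naming convention can be sketched as follows. PerfTask here is a minimal stub standing in for the real framework base class (whose actual methods may differ), and WaitTask is a hypothetical example command:

```java
// Illustrative, self-contained sketch of the command-naming convention
// described above. PerfTask here is a minimal stub standing in for the
// real framework base class, and WaitTask is a hypothetical command.
abstract class PerfTask {
    public abstract int doLogic() throws Exception;
}

public class WaitTask extends PerfTask {
    @Override
    public int doLogic() {
        // the framework would register this class under the command
        // name obtained by stripping the "Task" suffix:
        String cls = getClass().getSimpleName();                 // "WaitTask"
        System.out.println(cls.substring(0, cls.length() - 4));  // "Wait"
        return 1; // by convention a task run counts one record
    }

    public static void main(String[] args) throws Exception {
        new WaitTask().doLogic();
    }
}
```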
@@ -287,85 +292,85 @@
RepAll - all (completed) task runs.
-
- RepSumByName - all statistics,
- aggregated by name. So, if AddDoc was executed 2000 times,
- only 1 report line would be created for it, aggregating all those
- 2000 statistic records.
+ RepSumByName - all statistics,
+ aggregated by name. So, if AddDoc was executed 2000 times,
+ only 1 report line would be created for it, aggregating all those
+ 2000 statistic records.
-
- RepSelectByPref prefixWord - all
- records for tasks whose name start with
- prefixWord.
+ RepSelectByPref prefixWord - all
+ records for tasks whose name start with
+ prefixWord.
-
- RepSumByPref prefixWord - all
- records for tasks whose name start with
- prefixWord,
+ RepSumByPref prefixWord - all
+ records for tasks whose name start with
+ prefixWord,
aggregated by their full task name.
-
- RepSumByNameRound - all statistics,
- aggregated by name and by Round.
- So, if AddDoc was executed 2000 times in each of 3
- rounds, 3 report lines would be
- created for it,
- aggregating all those 2000 statistic records in each round.
- See more about rounds in the NewRound
- command description below.
+ RepSumByNameRound - all statistics,
+ aggregated by name and by Round.
+ So, if AddDoc was executed 2000 times in each of 3
+ rounds, 3 report lines would be
+ created for it,
+ aggregating all those 2000 statistic records in each round.
+ See more about rounds in the NewRound
+ command description below.
-
- RepSumByPrefRound prefixWord -
- similar to RepSumByNameRound,
- just that only tasks whose name starts with
- prefixWord are included.
+ RepSumByPrefRound prefixWord -
+ similar to RepSumByNameRound,
+ just that only tasks whose name starts with
+ prefixWord are included.
- If needed, additional reports can be added by extending the abstract class
- ReportTask, and by
+ If needed, additional reports can be added by extending the abstract class
+ ReportTask, and by
manipulating the statistics data in Points and TaskStats.
- - Control tasks: Few of the tasks control the benchmark algorithm
- all over:
+
+ - Control tasks: a few of the tasks control the benchmark algorithm
+ as a whole:
-
ClearStats - clears the entire statistics.
- Further reports would only include task runs that would start after this
- call.
+ Further reports would only include task runs that would start after this
+ call.
-
- NewRound - virtually start a new round of
- performance test.
- Although this command can be placed anywhere, it mostly makes sense at
- the end of an outermost sequence.
-
This increments a global "round counter". All task runs that
- would start now would
- record the new, updated round counter as their round number.
- This would appear in reports.
+ NewRound - virtually start a new round of
+ performance test.
+ Although this command can be placed anywhere, it mostly makes sense at
+ the end of an outermost sequence.
+
+ This increments a global "round counter". All task runs that
+ would start now would
+ record the new, updated round counter as their round number.
+ This would appear in reports.
In particular, see RepSumByNameRound above.
-
An additional effect of NewRound, is that numeric and boolean
- properties defined (at the head
- of the .alg file) as a sequence of values, e.g.
- merge.factor=mrg:10:100:10:100 would
+
+ An additional effect of NewRound is that numeric and boolean
+ properties defined (at the head
+ of the .alg file) as a sequence of values, e.g.
+ merge.factor=mrg:10:100:10:100 would
increment (cyclic) to the next value.
- Note: this would also be reflected in the reports, in this case under a
- column that would be named "mrg".
+ Note: this would also be reflected in the reports, in this case under a
+ column that would be named "mrg".
-
- ResetInputs - DocMaker and the
- various QueryMakers
+ ResetInputs - DocMaker and the
+ various QueryMakers
would reset their counters to start.
The way these Maker interfaces work, each call for makeDocument()
or makeQuery() creates the next document or query
that it "knows" to create.
- If that pool is "exhausted", the "maker" start over again.
- The resetInpus command
+ If that pool is "exhausted", the "maker" starts over again.
+ The ResetInputs command
therefore makes the rounds comparable.
It is therefore useful to invoke ResetInputs together with NewRound.
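The recommendation above, pairing ResetInputs with NewRound, might look like this in an .alg file (an illustrative sketch under assumed task counts, not taken from this commit):

```
# run the same workload for 3 comparable rounds:
# ResetInputs restarts the doc/query makers, NewRound advances the
# round counter (and cycles any multi-valued properties)
{
    { "AddBatch" AddDoc } : 1000
    ResetInputs
    NewRound
} : 3
RepSumByNameRound
```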
-
- ResetSystemErase - reset all index
- and input data and call gc.
+ ResetSystemErase - reset all index
+ and input data and call gc.
Does NOT reset statistics. This contains ResetInputs.
All writers/readers are nullified, deleted, closed.
Index is erased.
@@ -373,48 +378,48 @@
You would have to call CreateIndex once this was called...
-
- ResetSystemSoft - reset all
- index and input data and call gc.
+ ResetSystemSoft - reset all
+ index and input data and call gc.
Does NOT reset statistics. This contains ResetInputs.
All writers/readers are nullified, closed.
Index is NOT erased.
Directory is NOT erased.
- This is useful for testing performance on an existing index,
- for instance if the construction of a large index
- took a very long time and now you would to test
- its search or update performance.
+ This is useful for testing performance on an existing index,
+ for instance if the construction of a large index
+ took a very long time and now you would like to test
+ its search or update performance.
-
- Other existing tasks are quite straightforward and would
- just be briefly described here.
+ Other existing tasks are quite straightforward and would
+ just be briefly described here.
-
- CreateIndex and
- OpenIndex both leave the
- index open for later update operations.
+ CreateIndex and
+ OpenIndex both leave the
+ index open for later update operations.
CloseIndex would close it.
-
- OpenReader, similarly, would
- leave an index reader open for later search operations.
+ OpenReader, similarly, would
+ leave an index reader open for later search operations.
But this has further semantics.
- If a Read operation is performed, and an open reader exists,
- it would be used.
- Otherwise, the read operation would open its own reader
- and close it when the read operation is done.
- This allows testing various scenarios - sharing a reader,
- searching with "cold" reader, with "warmed" reader, etc.
- The read operations affected by this are:
- Warm,
- Search,
- SearchTrav (search and traverse),
- and SearchTravRet (search
- and traverse and retrieve).
- Notice that each of the 3 search task types maintains
- its own queryMaker instance.
+ If a Read operation is performed, and an open reader exists,
+ it would be used.
+ Otherwise, the read operation would open its own reader
+ and close it when the read operation is done.
+ This allows testing various scenarios - sharing a reader,
+ searching with "cold" reader, with "warmed" reader, etc.
+ The read operations affected by this are:
+ Warm,
+ Search,
+ SearchTrav (search and traverse),
+ and SearchTravRet (search
+ and traverse and retrieve).
+ Notice that each of the 3 search task types maintains
+ its own queryMaker instance.
@@ -429,10 +434,10 @@
As mentioned above for the NewRound task,
numeric and boolean properties that are defined as a sequence
of values, e.g. merge.factor=mrg:10:100:10:100
-would increment (cyclic) to the next value,
-when NewRound is called, and would also
-appear as a named column in the reports (column
-name would be "mrg" in this example).
+would increment (cyclic) to the next value,
+when NewRound is called, and would also
+appear as a named column in the reports (column
+name would be "mrg" in this example).
@@ -441,13 +446,13 @@
-
- analyzer - full
- class name for the analyzer to use.
+ analyzer - full
+ class name for the analyzer to use.
Same analyzer would be used in the entire test.
-
- directory - valid values are
+ directory - valid values are
This tells which directory to use for the performance test.
@@ -475,50 +480,51 @@
-Here is a list of currently defined properties:
+Here is a list of currently defined properties:
+
+
+
+ - Docs and queries creation:
+ - analyzer
+
- doc.maker
+
- doc.stored
+
- doc.tokenized
+
- doc.term.vector
+
- doc.store.body.bytes
+
- docs.dir
+
- query.maker
+
- file.query.maker.file
+
- file.query.maker.default.field
+
+
+
+ - Logging:
+
- doc.add.log.step
+
- doc.delete.log.step
+
- log.queries
+
- task.max.depth.log
+
+
+
+ - Index writing:
+
- compound
+
- merge.factor
+
- max.buffered
+
- directory
+
+
+
+ - Doc deletion:
+
+
+
+
+
+
+For sample use of these properties see the *.alg files under conf.
-
- - Docs and queries creation:
- - analyzer
-
- doc.maker
-
- doc.stored
-
- doc.tokenized
-
- doc.term.vector
-
- docs.dir
-
- query.maker
-
- file.query.maker.file
-
- file.query.maker.default.field
-
-
-
- - Logging:
-
- doc.add.log.step
-
- doc.delete.log.step
-
- log.queries
-
- task.max.depth.log
-
-
-
- - Index writing:
-
- compound
-
- merge.factor
-
- max.buffered
-
- directory
-
-
-
- - Doc deletion:
-
-
-
-
-
-
-For sample use of these properties see the *.alg files under conf.
-
-
Example input algorithm and the result benchmark report
@@ -535,7 +541,7 @@
# The comparison is done twice.
#
# --------------------------------------------------------
-
+
# -------------------------------------------------------------------------------------
# multi val params are iterated by NewRound's, added to reports, start with column name.
merge.factor=mrg:10:20
@@ -606,6 +612,33 @@
PopulateLong - - 1 20 1000 - - 1 - - 10003 - - - 77.0 - - 129.92 - 87,309,608 - 100,831,232
+
+
+Results record counting clarified
+
+Two columns in the results table indicate record counts: records-per-run and
+records-per-second. What do they mean?
+
+Almost every task gets 1 in this count just for being executed.
+Task sequences aggregate the counts of their child tasks,
+plus their own count of 1.
+So, a task sequence containing 5 other task sequences, each running a single
+other task 10 times, would have a count of 1 + 5 * (1 + 10) = 56.
+
+The traverse and retrieve tasks "count" more: a traverse task
+would add 1 for each traversed result (hit), and a retrieve task would
+additionally add 1 for each retrieved doc. So, regular Search would
+count 1, SearchTrav that traverses 10 hits would count 11, and a
+SearchTravRet task that retrieves (and traverses) 10, would count 21.
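The counting rule above can be checked with a short, illustrative computation (not Lucene code; the class and method names are invented for this sketch):

```java
// Illustrative check of the counting rule above (not Lucene code):
// a leaf task counts 1; a sequence counts 1 plus its children's counts.
import java.util.Collections;
import java.util.List;

public class RecordCount {
    static int seqCount(List<Integer> childCounts) {
        int sum = 1; // the sequence itself counts 1
        for (int c : childCounts) sum += c;
        return sum;
    }

    public static void main(String[] args) {
        // a sequence running one leaf task 10 times: 1 + 10 = 11
        int inner = seqCount(Collections.nCopies(10, 1));
        // a sequence containing 5 such sequences: 1 + 5 * 11 = 56
        int outer = seqCount(Collections.nCopies(5, inner));
        System.out.println(outer);
    }
}
```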
+
+Confusing? This might help: always examine the elapsedSec column,
+and always compare "apples to apples", i.e. it is interesting to check how the
+rec/s changed for the same task (or sequence) between two
+different runs, but it is not very useful to know how the rec/s
+differs between Search and SearchTrav tasks. For
+the latter, elapsedSec would bring more insight.
+
+