lucene-dev mailing list archives

From "Yonik Seeley (Commented) (JIRA)" <>
Subject [jira] [Commented] (LUCENE-3584) bulk postings should be codec private
Date Sun, 04 Dec 2011 14:13:39 GMT


Yonik Seeley commented on LUCENE-3584:

bq. where is the code to your benchmark?  I don't trust it.

I'm always skeptical of benchmarks too :-)

No benchmark code this time; I just hit Solr directly from the browser, waited for the times
to stabilize, and picked the lowest (making sure I could hit very near that low again and
it wasn't a fluke). The results are very repeatable though (and I killed the JVM and retried
to make sure hotspot would do the same thing again).

The index is from a 10M row CSV file I generated years ago. For example, the field with 10
terms is simply a single-valued field containing a random number between 0 and 9, padded out
to 10 chars.

Oh, this is Linux on a Phenom II, JDK 1.6.0_29.

> bulk postings should be codec private
> -------------------------------------
>                 Key: LUCENE-3584
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Task
>            Reporter: Robert Muir
>            Assignee: Robert Muir
>             Fix For: 4.0
>         Attachments: LUCENE-3584.patch
> In LUCENE-2723, a lot of work was done to speed up Lucene's bulk postings read API.
> There were some upsides:
> * you could specify things like 'I don't care about frequency data' up front.
>   This made things like multitermquery->filter and other consumers that don't
>   care about freqs faster. But this is unrelated to 'bulkness', and we have a
>   separate patch for this now on LUCENE-2929.
> * the buffer size for the standard codec was increased to 128, improving performance
>   for TermQueries, but this was unrelated too.
> But there were serious downsides/nocommits:
> * the API was hairy because it tried to be 'one-size-fits-all'. This made consumer code
>   complicated.
> * the API could not really be specialized to your codec: e.g. it could never take advantage
>   of the fact that, e.g., docs and freqs are aligned.
> * the API forced codecs to implement delta encoding for things like documents and positions.
>   But this is totally up to the codec how it wants to encode! Some codecs might not use
>   delta encoding.
> * using such an API for positions was only theoretical; it would have been super complicated,
>   and I doubt it would ever have been performant or maintainable.
> * there was a regression with advance(), probably because the API forced you to first do a
>   linear scan through the remaining buffer, then refill...
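For the delta-encoding point above: the scheme the old API baked in stores ascending docIDs as gaps from the previous doc, roughly like this self-contained sketch (hypothetical class and method names; the argument is that whether to use this encoding at all should be the codec's own choice):

```java
import java.util.Arrays;

public class DeltaSketch {
    // Delta-encode an ascending docID list as gaps (d-gaps).
    static int[] encode(int[] docs) {
        int[] gaps = new int[docs.length];
        int prev = 0;
        for (int i = 0; i < docs.length; i++) {
            gaps[i] = docs[i] - prev;  // gap from the previous docID
            prev = docs[i];
        }
        return gaps;
    }

    // Decode gaps back into absolute docIDs by a running sum.
    static int[] decode(int[] gaps) {
        int[] docs = new int[gaps.length];
        int doc = 0;
        for (int i = 0; i < gaps.length; i++) {
            doc += gaps[i];
            docs[i] = doc;
        }
        return docs;
    }

    public static void main(String[] args) {
        int[] docs = {3, 7, 8, 20};
        int[] gaps = encode(docs);
        System.out.println(Arrays.toString(gaps));          // [3, 4, 1, 12]
        System.out.println(Arrays.toString(decode(gaps)));  // [3, 7, 8, 20]
    }
}
```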
> I think a cleaner approach is to let codecs do whatever they want to implement the DISI
> contract. This lets codecs have the freedom to implement whatever compression/buffering
> they want for the best performance, and keeps consumers simple. If a codec uses delta
> encoding, or if it wants to defer this to the last possible minute or do it at decode
> time, that's its own business. Maybe a codec doesn't want to do any buffering at all.
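A codec-private implementation only has to honor the DISI (DocIdSetIterator) contract: nextDoc() returns ascending docIDs, advance(target) returns the first doc >= target, and NO_MORE_DOCS marks exhaustion. A minimal in-memory sketch (hypothetical, not any real codec; advance() here is a simple linear scan, where a real codec is free to use skip lists or block seeks instead):

```java
public class SimpleDisi {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;  // DISI sentinel

    private final int[] docs;  // ascending docIDs, however the codec stored them
    private int idx = -1;

    SimpleDisi(int[] docs) { this.docs = docs; }

    // Advance to the next docID, or NO_MORE_DOCS when exhausted.
    int nextDoc() {
        return ++idx < docs.length ? docs[idx] : NO_MORE_DOCS;
    }

    // Per the DISI contract: return the first docID >= target.
    int advance(int target) {
        int doc;
        while ((doc = nextDoc()) < target) {
            // linear scan; a real codec could skip blocks here instead
        }
        return doc;
    }

    public static void main(String[] args) {
        SimpleDisi it = new SimpleDisi(new int[]{1, 4, 9, 17});
        System.out.println(it.advance(5));                  // 9
        System.out.println(it.nextDoc());                   // 17
        System.out.println(it.nextDoc() == NO_MORE_DOCS);   // true
    }
}
```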

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see:


