lucene-dev mailing list archives

From "Uwe Schindler (JIRA)" <>
Subject [jira] [Commented] (LUCENE-4069) Segment-level Bloom filters for a 2 x speed up on rare term searches
Date Tue, 17 Jul 2012 08:47:35 GMT


Uwe Schindler commented on LUCENE-4069:

It would be a pain if user config settings required a custom SPI-registered class just
to decode the index contents. There is resource/classpath hell, the chance of misconfiguration,
and running Luke suddenly becomes more complex.
The line to be drawn is between what are merely config settings (field names, memory limits)
and what are fundamentally different file formats (e.g. codec choices).
The design principle that seems to have been adopted is that the former ought to be accommodated
without the need for custom SPI-registered classes, while the latter would need to locate an
implementation via SPI to decode stored content. That seems reasonable.
The choice of hash algorithm does not fundamentally alter the on-disk format (they all produce
an int), so I would suggest we treat this as a config setting rather than a fundamentally
different choice of file format.
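The argument above can be sketched in code: if every hash implementation shares one signature (bytes in, int out), the writer only needs to record the chosen hash's *name* in the segment, and the reader can resolve that name from a fixed built-in table, with no SPI lookup at all. This is a hypothetical illustration, not Lucene's actual API; the interface and registry names are invented.

```java
import java.util.Map;

// Hypothetical sketch: hash choice treated as a config setting, not a file format.
// All implementations produce an int, so the reader only needs the name that was
// recorded in the segment to pick the right one from a built-in table.
interface HashFunction {
    int hash(byte[] bytes);
}

final class HashRegistry {
    // Fixed, built-in implementations -- no SPI/classpath lookup required to read.
    private static final Map<String, HashFunction> BUILT_IN = Map.of(
        "fnv1a", bytes -> {                      // standard FNV-1a, 32-bit
            int h = 0x811c9dc5;
            for (byte b : bytes) { h ^= (b & 0xff); h *= 0x01000193; }
            return h;
        },
        "mult31", bytes -> {                     // simple multiplicative hash
            int h = 0;
            for (byte b : bytes) h = h * 31 + b;
            return h;
        }
    );

    static HashFunction forName(String name) {
        HashFunction f = BUILT_IN.get(name);
        if (f == null) throw new IllegalArgumentException("Unknown hash: " + name);
        return f;
    }
}
```

Swapping the hash changes which ints come out, but not the shape of anything written to disk, which is what makes it a setting rather than a format.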

The design principle here is very simple: we must follow the SPI pattern whenever an index
is written that could otherwise not be read with default settings (producing e.g. a
CorruptIndexException). If a codec writes some special things for specific fields, it is
required to write this information about the fields to the index. If you want to open this
index with IndexReader again, there must not be any requirement for configuration settings
on the reader itself: a simple open must be possible, and a query must be able to execute.
The IndexReader must be able to get all of this information from the index files. If a
special decoder for foobar is needed, it must be loadable via SPI. This is analogous to
postings: a new postings format needs a new SPI registration, otherwise you cannot read
the index.
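The SPI pattern described above can be sketched with the JDK's own `ServiceLoader` (Lucene 4's real equivalent is `PostingsFormat.forName(...)`; the interface and class names below are illustrative, not Lucene's). The writer records only the format's name in the segment; the reader resolves that name through SPI, so no reader-side configuration is needed, just the implementation JAR on the classpath:

```java
import java.util.ServiceLoader;

// Illustrative sketch of the SPI pattern (not Lucene's actual classes).
// Implementations are discovered from META-INF/services entries in JARs
// on the classpath; the index itself stores only the format name.
interface SegmentFormat {
    String name();   // the name that was written into the segment
}

final class SegmentFormats {
    static SegmentFormat forName(String name) {
        for (SegmentFormat f : ServiceLoader.load(SegmentFormat.class)) {
            if (f.name().equals(name)) return f;
        }
        // Without the providing JAR on the classpath, the index is unreadable;
        // failing loudly here is better than silently misreading the data.
        throw new IllegalArgumentException(
            "No SegmentFormat registered for name: " + name);
    }
}
```

With the JAR absent, `SegmentFormats.forName("MyBloomFormat")` throws, which is exactly the point: the name in the index, not reader configuration, determines what is needed to decode it.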

And it is not true that Luke becomes more complex to configure: just put the JAR containing
the SPI on the classpath and you are fine. Setting up a build environment is more complicated,
but that is more a problem of Eclipse's poor resource handling; Ant/Maven/IDEA make it easy.
> Segment-level Bloom filters for a 2 x speed up on rare term searches
> --------------------------------------------------------------------
>                 Key: LUCENE-4069
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: core/index
>    Affects Versions: 3.6, 4.0-ALPHA
>            Reporter: Mark Harwood
>            Priority: Minor
>             Fix For: 4.0
>         Attachments: BloomFilterPostingsBranch4x.patch, LUCENE-4069-tryDeleteDocument.patch,
LUCENE-4203.patch, MHBloomFilterOn3.6Branch.patch
> An addition to each segment which stores a Bloom filter for selected fields in order
to give fast-fail to term searches, helping avoid wasted disk access.
> Best suited for low-frequency fields e.g. primary keys on big indexes with many segments
but also speeds up general searching in my tests.
> Overview slideshow here:
> Benchmarks based on Wikipedia content here:
> Patch based on 3.6 codebase attached.
> There are no 3.6 API changes currently - to play just add a field with "_blm" on the
end of the name to invoke special indexing/querying capability. Clearly a new Field or schema
declaration(!) would need adding to APIs to configure the service properly.
> Also, a patch for Lucene4.0 codebase introducing a new PostingsFormat

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see:


