lucene-dev mailing list archives

From "Mark Harwood (JIRA)" <>
Subject [jira] [Commented] (LUCENE-4069) Segment-level Bloom filters for a 2 x speed up on rare term searches
Date Wed, 30 May 2012 14:45:23 GMT


Mark Harwood commented on LUCENE-4069:

Aaaargh. Unless I've missed something, I have concerns with the fundamental design of the
current Codec loading mechanism.

It seems too tightly tied to the ServiceProvider class-loading mechanism, forcing users
to write new SPI-registered classes simply to declare what amount to index-schema
configuration choices.

Example: If I take Rob's sample Codec above and choose to use a subtly different configuration
of the same PostingsFormat class for different fields, it breaks:

      Codec fooCodec = new Lucene40Codec() {
        @Override
        public PostingsFormat getPostingsFormatForField(String field) {
          if ("text".equals(field)) {
            return new FooPostingsFormat(1);
          }
          if ("title".equals(field)) {
            // same impl as "text" field, different constructor settings
            return new FooPostingsFormat(2);
          }
          return super.getPostingsFormatForField(field);
        }
      };
This causes a file-overwrite error, because PerFieldPostingsFormat derives file names from
the PostingsFormat's name, and FooPostingsFormat(1) and FooPostingsFormat(2) share the same name.
In order to safely make use of differently configured choices of the same PostingsFormat we
are forced to declare a brand new subclass with a unique new service name and entry in the
service provider registration. This is essentially where I have got to in trying to integrate
this Bloom filtering logic.
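To make the collision concrete, here is a toy model of the situation — my own illustration, not Lucene's actual SPI code, and FooFormat is a hypothetical stand-in for a PostingsFormat: formats are identified purely by a registered name, and that same name keys the per-segment files written to disk.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model (not Lucene code) of why one registered name per
// configuration is forced on us by the current design.
public class SpiNameCollision {

    // Stand-in for a PostingsFormat: one behaviour, one config setting.
    static class FooFormat {
        final String name;
        final int setting;

        FooFormat(String name, int setting) {
            this.name = name;
            this.setting = setting;
        }

        // The registered name keys the per-segment file on disk.
        String segmentFileName(int segment) {
            return "_" + segment + "_" + name + ".pos";
        }
    }

    public static void main(String[] args) {
        // Two differently configured instances sharing one registered name...
        FooFormat text = new FooFormat("Foo", 1);
        FooFormat title = new FooFormat("Foo", 2);
        // ...target the same per-segment file, hence the overwrite error.
        System.out.println(text.segmentFileName(0)
                .equals(title.segmentFileName(0))); // prints true

        // The forced workaround: a distinct registered name per configuration,
        // each needing its own SPI-visible class in real Lucene.
        Map<String, FooFormat> registry = new HashMap<>();
        registry.put("Foo1", new FooFormat("Foo1", 1));
        registry.put("Foo2", new FooFormat("Foo2", 2));
        System.out.println(registry.get("Foo1").segmentFileName(0)
                .equals(registry.get("Foo2").segmentFileName(0))); // prints false
    }
}
```

The second half is exactly the boilerplate I'm objecting to: nothing about Foo1/Foo2 differs in behaviour, only in a constructor argument, yet each needs its own registered identity.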

This dependency on writing custom classes seems to make everything a bit fragile, no? What
hope has Luke got of opening the average index without careful assembly of classpaths, etc.?
If I contrast this with the world of database schemas, it seems absurd to rely on writing
custom classes with no behaviour simply to preserve a configuration of an application's
schema settings. Even an IoC container with XML declarations would offer a more agile means
of assembling pre-configured *beans* than a Service Provider mechanism that serves only as
a registry of *classes*.

Anyone else see this as a major pain?
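For anyone following along, the fast-fail idea behind the issue quoted below is simple to sketch. This is my own minimal illustration, not the attached patch's code — the bit count and hash scheme here are arbitrary:

```java
import java.util.BitSet;

// Minimal Bloom-filter sketch of the fast-fail idea: before seeking the
// terms dictionary on disk, ask an in-memory filter whether the term can
// possibly exist in this segment. "false" is definite; "true" may be wrong.
public class SegmentBloomSketch {
    private final BitSet bits;
    private final int numBits;
    private final int numHashes;

    public SegmentBloomSketch(int numBits, int numHashes) {
        this.bits = new BitSet(numBits);
        this.numBits = numBits;
        this.numHashes = numHashes;
    }

    // Cheap seeded string hash; a real implementation would use something stronger.
    private int hash(String term, int seed) {
        int h = seed;
        for (int i = 0; i < term.length(); i++) {
            h = h * 31 + term.charAt(i);
        }
        return Math.floorMod(h, numBits);
    }

    // Called once per indexed term for the chosen field.
    public void add(String term) {
        for (int seed = 1; seed <= numHashes; seed++) {
            bits.set(hash(term, seed));
        }
    }

    // false => the term is definitely absent: skip the disk access entirely.
    public boolean mightContain(String term) {
        for (int seed = 1; seed <= numHashes; seed++) {
            if (!bits.get(hash(term, seed))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        SegmentBloomSketch filter = new SegmentBloomSketch(1 << 16, 3);
        filter.add("lucene");
        System.out.println(filter.mightContain("lucene"));      // true: never a false negative
        System.out.println(filter.mightContain("missingterm")); // almost certainly false
    }
}
```

This is why the win is biggest for primary-key-style lookups on many-segment indexes: most segments don't contain the key, so most of the per-segment term-dictionary seeks can be skipped.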

> Segment-level Bloom filters for a 2 x speed up on rare term searches
> --------------------------------------------------------------------
>                 Key: LUCENE-4069
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: core/index
>    Affects Versions: 3.6, 4.0
>            Reporter: Mark Harwood
>            Priority: Minor
>             Fix For: 4.0, 3.6.1
>         Attachments: BloomFilterCodec40.patch, MHBloomFilterOn3.6Branch.patch,
> An addition to each segment which stores a Bloom filter for selected fields in order
to give fast-fail to term searches, helping avoid wasted disk access.
> Best suited for low-frequency fields e.g. primary keys on big indexes with many segments
but also speeds up general searching in my tests.
> Overview slideshow here:
> Benchmarks based on Wikipedia content here:
> Patch based on 3.6 codebase attached.
> There are no 3.6 API changes currently - to play just add a field with "_blm" on the
end of the name to invoke special indexing/querying capability. Clearly a new Field or schema
declaration(!) would need adding to APIs to configure the service properly.
> Also, a patch for Lucene4.0 codebase introducing a new PostingsFormat

This message is automatically generated by JIRA.

