lucene-dev mailing list archives

From Sanne Grinovero <>
Subject Re: A new Lucene Directory available
Date Sun, 15 Nov 2009 13:13:31 GMT
Hi Earwin,
thanks for the insight. As I mentioned, I have no proper benchmarks to
back my statements, only observations of how it behaves, so I could
well be too optimistic.
The Infinispan team is currently profiling and speeding up some
internals, so I'll wait for those tasks to finish before starting
tests on our side; in the meantime I'm collecting suggestions on how
to test it properly. What kind of comparisons would you like to see?

I'm currently working on JIRA clustering (called Scarlet), so the
typical index usage pattern of that application is going to be my
primary test scenario.

I know about the Terracotta efforts, and I agree with you: I
collected a lot of feedback about the problems that arose by talking
directly with the people maintaining such systems. I even heard of
some success cases, but yes, they are scarce and there are real
problems; be assured that we analyzed them carefully before settling
on this design. I'm not a Terracotta expert myself, but I was helped
by specialists. My personal conclusion from these talks is that
Terracotta works, but it is too tricky to set up and not viable when
the indexes change frequently.

About the RAMDirectory comparison: as you said yourself, the bytes
aren't read constantly but only at index reopen, so I wouldn't be too
worried about the "bunch of methods", as they're executed once per
segment load; I'll improve that if possible, thanks for looking!
I'm sure many parts can be improved, and patches are welcome.
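To make the cost model concrete, here is a minimal sketch (plain Java; ChunkedInput and its fields are illustrative names, not the actual Infinispan code) of how a chunked reader can amortize cache lookups, touching the backing map only on chunk boundaries rather than on every byte:

```java
import java.util.Map;

// Hedged sketch of a chunked IndexInput-style reader: the backing map
// (a stand-in for the distributed cache) is consulted once per chunk,
// and bytes within the current chunk are served from a plain array.
class ChunkedInput {
    private final Map<Integer, byte[]> chunks; // stand-in for the cache
    private final int chunkSize;
    private long position = 0;
    private byte[] current;        // currently loaded chunk
    private int currentIndex = -1; // index of the loaded chunk
    int cacheLookups = 0;          // instrumentation for this example

    ChunkedInput(Map<Integer, byte[]> chunks, int chunkSize) {
        this.chunks = chunks;
        this.chunkSize = chunkSize;
    }

    byte readByte() {
        int chunkIndex = (int) (position / chunkSize);
        if (chunkIndex != currentIndex) {   // map lookup only on a chunk boundary
            current = chunks.get(chunkIndex);
            currentIndex = chunkIndex;
            cacheLookups++;
        }
        return current[(int) (position++ % chunkSize)];
    }
}
```

With, say, a 16 KB chunk size, reading a full chunk costs one map lookup plus 16384 array accesses, so the per-byte overhead approaches the if/increment/array-access path of a RAMDirectory.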

Instances of ChunkCacheKey are not created for each single byte read
but for each byte[] buffer, and the size of these buffers is
configurable. This was decided after observing that "chunking"
segments into smaller pieces improved performance over keeping huge
byte arrays, but if you prefer you can configure it to approach a
one-key-per-segment ratio.
Comparing to a RAMDirectory is unfair, as with InfinispanDirectory I
can scale :-) Still, I take the point: I'll run some tests in
single-node mode to compare them too, for fun, as the use cases are a
bit different, but I'm confident I could surprise you when I have the
choice of scenario.
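The key-per-buffer arithmetic can be sketched like this (FileChunkKey and keysForFile are illustrative names, not Infinispan's actual ChunkCacheKey API):

```java
// Hedged sketch: one key object exists per byte[] buffer of a file,
// so the number of keys is the file length divided by the chunk size
// (rounded up), not one per byte.
final class FileChunkKey {
    final String fileName; // which index file the chunk belongs to
    final int chunkIndex;  // which buffer within that file

    FileChunkKey(String fileName, int chunkIndex) {
        this.fileName = fileName;
        this.chunkIndex = chunkIndex;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof FileChunkKey)) return false;
        FileChunkKey k = (FileChunkKey) o;
        return chunkIndex == k.chunkIndex && fileName.equals(k.fileName);
    }

    @Override public int hashCode() {
        return 31 * fileName.hashCode() + chunkIndex;
    }

    // How many keys (buffers) a file of the given length produces.
    static long keysForFile(long fileLength, int chunkSize) {
        return (fileLength + chunkSize - 1) / chunkSize; // ceiling division
    }
}
```

For example, a 64 MB segment with 16 KB chunks produces 4096 keys, while setting the chunk size to the segment size degenerates to a single key per file.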

About JGroups I'm not technically prepared for a match, but I've
heard several stories of business-critical clusters much bigger than
20 nodes working very well. Sure, it won't scale without proper
configuration at all levels: OS, JGroups and infrastructure.

Thank you very much for your considerations; it's much appreciated.
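For reference, the "one writer, many readers" locking contract discussed in this thread can be sketched in plain Java (no Lucene or Infinispan dependency; in InfinispanDirectory the flag below would instead be an atomic operation on the distributed cache):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch of the lock semantics a Lucene LockFactory provides:
// at most one writer holds the lock at a time; readers never acquire
// it, so searches are not blocked by indexing.
class WriteLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    // Analogous to Lucene's Lock.obtain(): succeeds only if free.
    boolean obtain() {
        return held.compareAndSet(false, true);
    }

    void release() {
        held.set(false);
    }
}
```

A second IndexWriter attempting to open the same Directory would fail to obtain the lock until the first releases it, which is why writers don't scale but readers can.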

On Sun, Nov 15, 2009 at 12:39 PM, Earwin Burrfoot <> wrote:
> Terracotta guys "easy-clustered" Lucene a few years ago. I have yet
> to see a single person say it worked all right for them.
> This new directory isn't going to be faster than RAMDirectory; syncs
> on a map don't matter, as they are taken once per opened file -> once
> per reopen, which doesn't happen thousands of times a second.
> Taking a glance at the code (svn trunk), it actually is much slower.
> I mean, compare the IndexInput.readByte() implementations: a whole
> slew of code and method calls, plus a ChunkCacheKey created per byte
> read (brutal GC pressure!) VS an if, an increment and an array access
> for RAMDir.
> I wouldn't be too optimistic in the doesn't-fit-in-memory case VS
> FSDirectory either. The OS's paging/file-caching skills are hard to
> match, plus the OS file cache resides outside the Java heap, which
> (as real-life experience dictates) is immensely good for your GC
> pauses.
> Now to the networking part. Infinispan is based on JGroups. Last time
> I saw it, it exploded under moderate load on 20 nodes. I believe the
> library is still good, properly configured and for lesser loads, but
> not for distributing a Lucene index that is frequently updated and
> merged on each node of the cluster.
> Please excuse me if I'm over the top in places, and correct me if I
> am wrong.
> On Sun, Nov 15, 2009 at 07:33, Sanne Grinovero
> <> wrote:
>> Hi John,
>> I haven't run a long, reliable benchmark yet, so at the moment I
>> can't really speak in numbers.
>> Suggestions and help on performance testing are welcome: I guess it
>> will shine in some situations, not necessarily all, so no single
>> choice of concurrent writer/searcher ratio, number of nodes in the
>> cluster and resources per node will ever be entirely fair for
>> comparing this Directory with others.
>> On paper the premises are good: it's all in memory as long as it
>> fits; data is distributed across nodes and overflow to disk is
>> supported (called passivation). A permanent store can also be
>> configured, so you could set it to periodically flush incrementally
>> to slower storage such as a database, a filesystem or a cloud
>> storage service. This makes it possible to avoid losing state even
>> when all nodes are shut down.
>> A RAMDirectory is AFAIK not recommended, as you could hit memory
>> limits and because it's basically a synchronized HashMap; Infinispan
>> behaves like a ConcurrentHashMap and doesn't need that coarse
>> synchronization.
>> Even if the data is replicated across nodes, each node has its own
>> local cache, so when caches are warm and all segments fit in memory
>> it should, theoretically, be the fastest Directory available. The
>> more it has to read from disk, the more it will behave like an
>> FSDirectory with some buffering.
>> As per Lucene's design, writes can happen at only one node at a
>> time: a single IndexWriter can own the lock, but IndexReaders and
>> Searchers are not blocked, so this Directory should behave exactly
>> as if you had multiple processes sharing a local NIOFSDirectory.
>> Basically, you can't scale on writers, but you can scale
>> near-linearly on readers by adding power from more machines.
>> Besides performance, the reasons to implement this were to make it
>> easy to add or remove processing power for a service (clouds), to
>> make it easier to share indexes across nodes, and, last but not
>> least, to remove single points of failure: all data is distributed
>> and there is no notion of a master, so services keep running fine
>> when any node is killed.
>> I hope this piques your interest; sorry I couldn't provide numbers.
>> Regards,
>> Sanne
>> On Sat, Nov 14, 2009 at 11:15 PM, John Wang <> wrote:
>>> Hi Sanne:
>>>     Very interesting!
>>>     What kind of performance should we expect with this, compared
>>> to a regular FSDirectory on a local HD?
>>> Thanks,
>>> -John
>>> On Sat, Nov 14, 2009 at 11:44 AM, Sanne Grinovero
>>> <> wrote:
>>>> Hello all,
>>>> I'm a Lucene user and fan, and I wanted to tell you that we just
>>>> released a first technology preview of a distributed in-memory
>>>> Directory for Lucene.
>>>> The release announcement:
>>>> From there you'll find links to the Wiki, to the sources, to the issue
>>>> tracker. A minimal demo is included with the sources.
>>>> This was developed together with Google Summer of Code student
>>>> Lukasz Moren and much support from the Infinispan and Hibernate
>>>> Search teams; we are storing the index segments in Infinispan and
>>>> using its atomic distributed locks to implement a Lucene
>>>> LockFactory.
>>>> The initial idea was to contribute it directly to Lucene, but as
>>>> Infinispan is an LGPL dependency we had to distribute it with
>>>> Infinispan (the other way around would have introduced some legal
>>>> issues); still, we hope you appreciate the effort and are
>>>> interested in giving it a try.
>>>> All kinds of feedback are welcome, especially on benchmarking
>>>> methodologies, as I have yet to do serious performance tests.
>>>> Main code, built with Maven2:
>>>> svn co
>>>> infinispan-directory
>>>> Demo, see the Readme:
>>>> svn co
>>>> lucene-demo
>>>> Best Regards,
>>>> Sanne
>>>> --
>>>> Sanne Grinovero
>>>> Sourcesense - making sense of Open Source:
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail:
>>>> For additional commands, e-mail:
>> --
>> Sanne Grinovero
>> Sourcesense - making sense of Open Source:
> --
> Kirill Zakharenko/Кирилл Захаренко (
> Home / Mobile: +7 (495) 683-567-4 / +7 (903) 5-888-423
> ICQ: 104465785

Sanne Grinovero
Sourcesense - making sense of Open Source:

