lucene-java-user mailing list archives

From Chris D <>
Subject Scalability of Lucene indexes
Date Fri, 18 Feb 2005 16:01:44 GMT
Hi all, 

I have a question about scaling lucene across a cluster, and good ways
of breaking up the work.

We have a very large index and searches sometimes take more time than
they're allowed. What we have been doing is, during indexing, splitting
documents into 256 separate indexes (sharded by md5sum) and then
distributing the indexes to the search machines. So if a machine holds
128 indexes it has to run 128 searches per query. I gave
ParallelMultiSearcher a try and it was significantly slower than simply
iterating through the indexes one at a time.
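For what it's worth, the md5-based routing described above can be sketched
in plain Java with just the JDK. The class and method names here
(ShardRouter, shardFor) are hypothetical, not Lucene APIs; the idea is
just that the first byte of the digest gives a stable shard number in
0..255:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch: route a document key to one of 256 index shards
// by taking the first byte of its MD5 digest.
public class ShardRouter {
    static int shardFor(String docKey) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(docKey.getBytes(StandardCharsets.UTF_8));
            // Mask to 0..255; the same key always maps to the same shard,
            // so re-indexing and lookup are deterministic.
            return digest[0] & 0xFF;
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException("MD5 not available", e);
        }
    }

    public static void main(String[] args) {
        System.out.println("doc-42 -> shard " + shardFor("doc-42"));
    }
}
```

Each shard is then a normal standalone Lucene index, and a query only has
to touch the shards a given machine holds.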

Our new plan is to somehow have only one index per search machine and
a larger main index stored on the master.

What I'm interested to know is whether it would be better to keep one
extremely large index on the master and split it into several smaller
indexes (if that is possible), or to keep several smaller indexes and
merge them on each search machine into one.
I would also be interested to know how others have divided up search
work across a cluster.

