lucene-java-commits mailing list archives

From Apache Wiki <>
Subject [Lucene-java Wiki] Update of "ImproveIndexingSpeed" by MikeMcCandless
Date Sat, 09 Jun 2007 15:26:50 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-java Wiki" for change notification.

The following page has been changed by MikeMcCandless:

New page:
Here are some things to try to speed up the indexing speed of your
Lucene application.

 * '''Make sure you are using the latest version of Lucene.'''

 * '''Open a single writer and re-use it for the duration of your indexing session.'''

 * '''Flush by RAM usage instead of document count.'''

 Call writer.ramSizeInBytes() after every added doc, then call flush() when it's using too
much RAM.  This is especially good if you have small docs or highly variable doc sizes.
You need to first set maxBufferedDocs large enough to prevent the writer from flushing based
on document count.  However, don't set it too large, otherwise you may hit the LUCENE-845 issue.
 Somewhere around 2-3X your "typical" flush count should be OK.
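
 A minimal sketch of that loop, assuming the Lucene 2.x-era API (the 48 MB threshold and the docs iterable are illustrative, not from this page):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;

public class RamFlushIndexer {
    // Illustrative threshold; tune it for your heap size.
    private static final long RAM_BUFFER_BYTES = 48 * 1024 * 1024;

    public static void index(Directory dir, Iterable<Document> docs) throws Exception {
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);
        // Make doc-count-based flushing effectively unreachable
        // (but not so large that you hit LUCENE-845).
        writer.setMaxBufferedDocs(10000);
        for (Document doc : docs) {
            writer.addDocument(doc);
            // Flush by RAM usage instead of by document count.
            if (writer.ramSizeInBytes() > RAM_BUFFER_BYTES) {
                writer.flush();
            }
        }
        writer.close();
    }
}
```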

 * '''Use as much RAM as you can afford.'''

 More RAM before flushing means Lucene writes larger segments to begin with, which means less
merging later.

 * '''Turn off compound file format.'''

 Building the compound file format takes time during indexing (7-33% in testing for
[LUCENE-888]).  However, note that doing this will greatly increase the number of file descriptors
used by indexing and by searching, so you could run out of file descriptors if mergeFactor
is also large.

 * '''Increase mergeFactor, but not too much.'''

 A larger mergeFactor defers merging of segments until later, thus speeding up indexing, because
merging is a large part of indexing.  However, this will slow down searching, and you will
run out of file descriptors if you make it too large.  Values that are too large may even
slow down indexing, since merging more segments at once means much more seeking of the hard
drive.
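
 Both of the settings above are plain setters on IndexWriter in the 2.x API.  A sketch (the values shown are illustrative, not recommendations from this page):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class TunedWriter {
    public static IndexWriter open(String path) throws Exception {
        IndexWriter writer = new IndexWriter(FSDirectory.getDirectory(path),
                                             new StandardAnalyzer(), true);
        writer.setUseCompoundFile(false); // skip building compound (.cfs) files while indexing
        writer.setMergeFactor(30);        // defer merges (default is 10); illustrative value
        return writer;
    }
}
```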

 * '''Instead of indexing many small text fields, aggregate the text into a single "contents"
field and index only that (you can still store the other fields).'''
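
 A sketch of what this can look like (the field names and inputs here are made up for illustration):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class DocBuilder {
    public static Document build(String title, String author, String body) {
        Document doc = new Document();
        // Stored for retrieval at search time, but not indexed individually.
        doc.add(new Field("title", title, Field.Store.YES, Field.Index.NO));
        doc.add(new Field("author", author, Field.Store.YES, Field.Index.NO));
        // A single aggregated field carries all of the indexed text.
        String contents = title + " " + author + " " + body;
        doc.add(new Field("contents", contents, Field.Store.NO, Field.Index.TOKENIZED));
        return doc;
    }
}
```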

 * '''Turn off any features you are not in fact using.'''

 If you are storing fields but not using them at query time, don't store them.  Likewise for
term vectors.  If you are indexing many fields, turning off norms for those fields may help
performance.

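 For example, with the 2.x Field API, norms can be skipped per field (field names are illustrative, and setOmitNorms is assumed to be available on the Fieldable interface):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class NoNormsExample {
    public static Document build(String id, String text) {
        Document doc = new Document();
        // NO_NORMS indexes the value untokenized and skips norms entirely.
        doc.add(new Field("id", id, Field.Store.YES, Field.Index.NO_NORMS));
        // A tokenized field where length normalization isn't needed:
        // index it, then omit norms explicitly.
        Field body = new Field("body", text, Field.Store.NO, Field.Index.TOKENIZED);
        body.setOmitNorms(true);
        doc.add(body);
        return doc;
    }
}
```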
 * '''Use a faster analyzer.'''

 Sometimes analysis of a document takes a lot of time.  For example, StandardAnalyzer is quite
time consuming.  If you can get by with a simpler analyzer, then try it.
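
 For instance, swapping in WhitespaceAnalyzer (which only splits on whitespace, skipping StandardAnalyzer's grammar-based tokenization) is a one-line change, if its output is acceptable for your content:

```java
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class SimpleAnalyzerWriter {
    public static IndexWriter open(String path) throws Exception {
        // Same writer setup, much cheaper per-document analysis.
        return new IndexWriter(FSDirectory.getDirectory(path),
                               new WhitespaceAnalyzer(), true);
    }
}
```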

 * '''Make sure document creation is not slow.'''

 Often the process of retrieving a document from somewhere external (database, filesystem,
crawled from a Web site, etc.) is very time consuming.

 * '''Don't optimize unless you really need to (for faster searching).'''

 * '''Use multiple threads with one IndexWriter.'''

 Modern hardware is highly concurrent (multi-core CPUs, multi-channel memory architectures,
native command queueing in hard drives, etc.), so using more than one thread to add documents
can give good gains overall.  Even on older machines there is often still concurrency to be
gained between IO and CPU.  Test the number of threads to find the best performance point.
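
 A minimal sketch of sharing one writer across threads (the queue-draining scheme and error handling are simplified for illustration; IndexWriter itself is thread-safe):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import java.util.concurrent.BlockingQueue;

public class ThreadedIndexer {
    // All threads share the single IndexWriter, pulling docs from one queue.
    public static void index(final IndexWriter writer,
                             final BlockingQueue<Document> queue,
                             int numThreads) throws InterruptedException {
        Thread[] threads = new Thread[numThreads];
        for (int i = 0; i < numThreads; i++) {
            threads[i] = new Thread() {
                public void run() {
                    try {
                        Document doc;
                        // Drain the (pre-filled) queue; real code would use a
                        // sentinel or timeout to handle a still-filling queue.
                        while ((doc = queue.poll()) != null) {
                            writer.addDocument(doc);
                        }
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                }
            };
            threads[i].start();
        }
        for (int i = 0; i < numThreads; i++) {
            threads[i].join();
        }
    }
}
```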

  * Index on separate indices then merge.

    If you have a very large amount of content to index, then you can
    break your content into N "silos", index each silo on a separate
    machine, then use writer.addIndexesNoOptimize() to merge them
    all into one final index.
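
    A sketch of the final merge step, assuming each silo index was
    written to its own directory (the paths are illustrative):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SiloMerger {
    // Merge N silo indexes into one final index without optimizing.
    public static void merge(String finalPath, String[] siloPaths) throws Exception {
        IndexWriter writer = new IndexWriter(FSDirectory.getDirectory(finalPath),
                                             new StandardAnalyzer(), true);
        Directory[] silos = new Directory[siloPaths.length];
        for (int i = 0; i < siloPaths.length; i++) {
            silos[i] = FSDirectory.getDirectory(siloPaths[i]);
        }
        writer.addIndexesNoOptimize(silos);
        writer.close();
    }
}
```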

  * Use a faster machine, especially a fast IO system.

  * Run a Java profiler.

    If all else fails, profile your application to figure out where
    the time is going.  I've had success with a very simple profiler
    called JMP.  There are many others.  Often you will be pleasantly
    surprised to find some silly, unexpected method is taking far too
    much time.
