From: Apache Wiki
To: java-commits@lucene.apache.org
Reply-To: java-dev@lucene.apache.org
Date: Sat, 09 Jun 2007 16:05:16 -0000
Message-ID: <20070609160516.7677.21692@eos.apache.org>
Subject: [Lucene-java Wiki] Update of "ImproveIndexingSpeed" by MikeMcCandless

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-java Wiki" for change notification.

The following page has been changed by MikeMcCandless:
http://wiki.apache.org/lucene-java/ImproveIndexingSpeed

------------------------------------------------------------------------------
  * '''Flush by RAM usage instead of document count.'''

-   Call writer.ramSizeInBytes() after every added doc then call flush() when it's using too much RAM. This is especially good if you have small docs or highly variable doc sizes. You need to first set maxBufferedDocs large enough to prevent the writer from flushing based on document count. However, don't sett it too lage otherwise you may hit the LUCENE-845 issue. Somewhere around 2-3X your "typical" flush count should be OK.
+   Call writer.ramSizeInBytes() after every added doc, then call flush() when it's using too much RAM (a sketch follows below). This is especially good if you have small docs or highly variable doc sizes. You need to first set maxBufferedDocs large enough to prevent the writer from flushing based on document count. However, don't set it too large; otherwise you may hit [http://issues.apache.org/jira/browse/LUCENE-845 LUCENE-845]. Somewhere around 2-3X your "typical" flush count should be OK.

  * '''Use as much RAM as you can afford.'''

-   More RAM before flushing means Lucene writes larger segments to begin with which means less merging later.
+   More RAM before flushing means Lucene writes larger segments to begin with, which means less merging later. Testing in [http://issues.apache.org/jira/browse/LUCENE-843 LUCENE-843] found that around 48 MB is the sweet spot for that content set, but your application could have a different sweet spot.
+ 
+ * '''Increase mergeFactor, but not too much.'''
+ 
+   A larger mergeFactor defers merging of segments until later, thus speeding up indexing because merging is a large part of indexing. However, this will slow down searching, and you will run out of file descriptors if you make it too large (a configuration sketch follows below). Values that are too large may even slow down indexing, since merging more segments at once means much more seeking on the hard drives.

  * '''Turn off compound file format.'''

    Building the compound file format takes time during indexing (7-33% in testing for [http://issues.apache.org/jira/browse/LUCENE-888 LUCENE-888]). However, note that doing this will greatly increase the number of file descriptors used by indexing and by searching, so you could run out of file descriptors if mergeFactor is also large.

- * '''Increase mergeFactor, but not too much.'''
- 
-   Larger mergeFactors defers merging of segments until later, thus speeding up indexing because merging is a large part of indexing. However, this will slow down searching, and, you will run out of file descriptors if you make it too large. Values that are too large may even slow down indexing since merging more segments at once means much more seeking of the hard drives.

  * '''Instead of indexing many small text fields, aggregate the text into a single "contents" field and index only that (you can still store the other fields).''' (A sketch follows below.)

@@ -33, +34 @@

    Sometimes analysis of a document takes a lot of time. For example, StandardAnalyzer is quite time consuming. If you can get by with a simpler analyzer, then try it (a sketch follows below).

- * '''Make sure document creation is not slow.'''
+ * '''Speed up document construction.'''

    Often the process of retrieving a document from somewhere external (database, filesystem, crawled from a Web site, etc.) is very time consuming.

@@ -41, +42 @@

  * '''Use multiple threads with one IndexWriter.'''

-   Modern hardware is highly concurrent (multi-core CPUs, multi-channel memory archiectures, native command queueing in hard drives, etc.) so using more than one thread to add documents can give good gains overall. Even on older machines there is often still concurrency to be gained between IO and CPU. Test the number of threads to find the best performance point.
+   Modern hardware is highly concurrent (multi-core CPUs, multi-channel memory architectures, native command queuing in hard drives, etc.), so using more than one thread to add documents can give good gains overall (a threading sketch follows below). Even on older machines there is often still concurrency to be gained between IO and CPU. Test the number of threads to find the best performance point.

- * '''Index on separate indices then merge.'''
+ * '''Index into separate indices then merge.'''

    If you have a very large amount of content to index, you can break your content into N "silos", index each silo on a separate machine, then use writer.addIndexesNoOptimize to merge them all into one final index (a merge sketch follows below).

@@ -51, +52 @@

  * '''Run a Java profiler.'''

-   If all else fails, profile your application to figure out where the time is going. I've had success with a very simple profiler called JMP. There are many others. Often you will be pleasantly surprised to find some silly, unexpected method is taking far too much time.
+   If all else fails, profile your application to figure out where the time is going. I've had success with a very simple profiler called [http://www.khelekore.org/jmp JMP]. There are many others. Often you will be pleasantly surprised to find some silly, unexpected method is taking far too much time.
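A few illustrative sketches of the tips above follow. They assume a Lucene 2.2-era API; anything not named on the wiki page (the DocSource interface, the RAM_LIMIT and maxBufferedDocs values, all paths and field names) is invented for the examples. First, flushing by RAM usage instead of document count, using the writer.ramSizeInBytes() and flush() calls described in the first tip:

{{{
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;

public class FlushByRam {

  // Hypothetical supplier of documents; returns null when done.
  interface DocSource {
    Document next() throws Exception;
  }

  // Illustrative threshold; LUCENE-843 found ~48 MB good for its content set.
  static final long RAM_LIMIT = 48 * 1024 * 1024;

  static void index(Directory dir, DocSource source) throws Exception {
    IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);
    // Roughly 2-3X your typical flush count (10000 is only a placeholder):
    // large enough that doc count alone never triggers a flush, but not so
    // large that you hit LUCENE-845.
    writer.setMaxBufferedDocs(10000);
    Document doc;
    while ((doc = source.next()) != null) {
      writer.addDocument(doc);
      if (writer.ramSizeInBytes() > RAM_LIMIT) {
        writer.flush();  // flush by RAM usage instead of document count
      }
    }
    writer.close();
  }
}
}}}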
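A minimal configuration sketch for the mergeFactor and compound-file tips; the value 30 is only an example, not a recommendation, and the path is a placeholder:

{{{
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class TuneMerging {
  public static void main(String[] args) throws Exception {
    IndexWriter writer = new IndexWriter(FSDirectory.getDirectory("/path/to/index"),
                                         new StandardAnalyzer(), true);
    writer.setMergeFactor(30);         // default is 10; larger defers merging but
                                       // slows searching and eats file descriptors
    writer.setUseCompoundFile(false);  // skip the compound-file build (7-33% of
                                       // indexing time in LUCENE-888 testing), at
                                       // the cost of many more open files
    // ... add documents here ...
    writer.close();
  }
}
}}}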
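One way to aggregate many small fields into a single indexed "contents" field while still storing the originals; the field names are invented for the example:

{{{
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class AggregateFields {
  static Document makeDoc(String title, String body) {
    Document doc = new Document();
    // One aggregated, indexed catch-all field instead of many small indexed fields:
    doc.add(new Field("contents", title + " " + body,
                      Field.Store.NO, Field.Index.TOKENIZED));
    // The originals remain retrievable as stored-only fields:
    doc.add(new Field("title", title, Field.Store.YES, Field.Index.NO));
    doc.add(new Field("body", body, Field.Store.YES, Field.Index.NO));
    return doc;
  }
}
}}}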
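Swapping in a simpler analyzer is a one-line change; whether WhitespaceAnalyzer's output is acceptable depends on your content and queries:

{{{
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class SimplerAnalyzer {
  public static void main(String[] args) throws Exception {
    // WhitespaceAnalyzer does far less work per token than StandardAnalyzer.
    IndexWriter writer = new IndexWriter(FSDirectory.getDirectory("/path/to/index"),
                                         new WhitespaceAnalyzer(), true);
    // ... add documents here ...
    writer.close();
  }
}
}}}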
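A threading sketch with one shared IndexWriter; the DocQueue interface stands in for whatever thread-safe document source your application uses, and numThreads is something you should tune as the tip suggests:

{{{
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;

public class ThreadedIndexing {

  // Stand-in for a thread-safe document source; returns null when drained.
  interface DocQueue {
    Document poll() throws Exception;
  }

  static void indexAll(final IndexWriter writer, final DocQueue queue,
                       int numThreads) throws InterruptedException {
    Thread[] threads = new Thread[numThreads];
    for (int i = 0; i < threads.length; i++) {
      threads[i] = new Thread() {
        public void run() {
          try {
            Document doc;
            // IndexWriter is thread-safe, so all threads share one instance.
            while ((doc = queue.poll()) != null) {
              writer.addDocument(doc);
            }
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
      };
      threads[i].start();
    }
    for (int i = 0; i < threads.length; i++) {
      threads[i].join();
    }
  }
}
}}}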
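Finally, a merge sketch for the "silos" tip, using the writer.addIndexesNoOptimize call named above; the paths are placeholders for indexes built on separate machines:

{{{
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class MergeSilos {
  public static void main(String[] args) throws Exception {
    // Placeholder paths; each silo index was built on its own machine.
    Directory[] silos = new Directory[] {
        FSDirectory.getDirectory("/indexes/silo1"),
        FSDirectory.getDirectory("/indexes/silo2"),
        FSDirectory.getDirectory("/indexes/silo3"),
    };
    IndexWriter writer = new IndexWriter(FSDirectory.getDirectory("/indexes/final"),
                                         new StandardAnalyzer(), true);
    writer.addIndexesNoOptimize(silos);  // merge the silos without optimizing
    writer.close();
  }
}
}}}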