lucene-dev mailing list archives

From "Michael McCandless (JIRA)" <>
Subject [jira] Commented: (LUCENE-888) Improve indexing performance by increasing internal buffer sizes
Date Fri, 25 May 2007 18:41:16 GMT


Michael McCandless commented on LUCENE-888:

> I tested and reviewed your patch. It looks good and all tests pass!


> - Should we increase the buffer size for CompoundFileReader to 4KB
> not only for the merge mode but also for the normal read mode?

I'm a little nervous about that: I don't know what impact it would have
on searching, especially on queries that make heavy use of skipping.

Hmmm, actually, a CSIndexInput potentially goes through 2 buffers when
it does a read -- its own (since each CSIndexInput subclasses
BufferedIndexInput) and then the main stream of the
CompoundFileReader.  We shouldn't be doing that double copy.
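To make the concern concrete, here is a minimal sketch (hypothetical code, not the actual CSIndexInput/CompoundFileReader implementation) of what happens when two buffered layers are stacked: every byte gets copied into the inner buffer and then again into the outer one.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class DoubleBufferSketch {

    // A minimal buffered layer that counts every buffer refill.
    static class CountingBuffered extends InputStream {
        static int refills = 0;
        private final InputStream in;
        private final byte[] buf = new byte[1024];
        private int pos = 0, avail = 0;

        CountingBuffered(InputStream in) { this.in = in; }

        @Override public int read() throws IOException {
            byte[] one = new byte[1];
            return read(one, 0, 1) <= 0 ? -1 : one[0] & 0xff;
        }

        @Override public int read(byte[] b, int off, int len) throws IOException {
            if (pos == avail) {               // buffer empty: refill it
                avail = in.read(buf, 0, buf.length);
                if (avail <= 0) return -1;
                pos = 0;
                refills++;                    // one refill = one copy of the data
            }
            int n = Math.min(len, avail - pos);
            System.arraycopy(buf, pos, b, off, n);
            pos += n;
            return n;
        }
    }

    // Read 4 KB through two stacked buffered layers and count refills.
    static int countRefills() throws IOException {
        CountingBuffered.refills = 0;
        InputStream chained = new CountingBuffered(
            new CountingBuffered(new ByteArrayInputStream(new byte[4096])));
        byte[] chunk = new byte[256];
        while (chained.read(chunk, 0, chunk.length) != -1) { }
        return CountingBuffered.refills;
    }

    public static void main(String[] args) throws IOException {
        // 4 KB through two 1 KB buffers: 4 inner refills plus 4 outer
        // refills, i.e. every byte is copied twice on the way up.
        System.out.println("refills: " + countRefills());  // prints "refills: 8"
    }
}
```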

It almost seems like the double copy would not occur because
readBytes() has logic to read directly from the underlying stream when
the requested size is >= bufferSize.  However, I see at least one case
during merging where this logic doesn't kick in (and we do a double
buffer copy) because the buffers become "skewed".  I will open a
separate issue for this; I think we should fix it and gain some more
performance.
> In BufferedIndexInput.setBufferSize() a new buffer should only be
> allocated if the new size is different from the previous buffer
> size.

Ahh, good.  Will do.
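Something along these lines (a sketch of the suggested guard, not the actual setBufferSize() implementation -- note the real method must also carry over any bytes already buffered when it does reallocate):

```java
public class SetBufferSizeSketch {
    byte[] buffer = new byte[1024];
    int allocations = 0;

    // Only allocate a new buffer when the requested size actually differs.
    void setBufferSize(int newSize) {
        if (buffer.length != newSize) {  // the guard: skip no-op resizes
            buffer = new byte[newSize];
            allocations++;
        }
    }

    public static void main(String[] args) {
        SetBufferSizeSketch s = new SetBufferSizeSketch();
        s.setBufferSize(1024); // same size: no new allocation
        s.setBufferSize(4096); // different size: reallocate
        s.setBufferSize(4096); // same again: no-op
        System.out.println("allocations: " + s.allocations);  // prints "allocations: 1"
    }
}
```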

> Improve indexing performance by increasing internal buffer sizes
> ----------------------------------------------------------------
>                 Key: LUCENE-888
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>    Affects Versions: 2.1
>            Reporter: Michael McCandless
>         Assigned To: Michael McCandless
>            Priority: Minor
>         Attachments: LUCENE-888.patch
> In working on LUCENE-843, I noticed that two buffer sizes have a
> substantial impact on overall indexing performance.
> First is BufferedIndexOutput.BUFFER_SIZE (also used by
> BufferedIndexInput).  Second is CompoundFileWriter's buffer used to
> actually build the compound file.  Both are now 1 KB (1024 bytes).
> I ran the same indexing test I'm using for LUCENE-843.  I'm indexing
> ~5,500 byte plain text docs derived from the Europarl corpus
> (English).  I index 200,000 docs with compound file enabled and term
> vector positions & offsets stored plus stored fields.  I flush
> documents at 16 MB RAM usage, and I set maxBufferedDocs carefully to
> not hit LUCENE-845.  The resulting index is 1.7 GB.  The index is not
> optimized in the end and I left mergeFactor @ 10.
> I ran the tests on a quad-core OS X 10 machine with 4-drive RAID 0 IO
> system.
> At 1 KB (current Lucene trunk) it takes 622 sec to build the index; if
> I increase both buffers to 8 KB it takes 554 sec to build the index,
> which is an 11% overall gain!
> I will run more tests to see if there is a natural knee in the curve
> (buffer size above which we don't really gain much more performance).
> I'm guessing we should leave BufferedIndexInput's default BUFFER_SIZE
> at 1024, at least for now.  During searching there can be quite a few
> of this class instantiated, and likely a larger buffer size for the
> freq/prox streams could actually hurt search performance for those
> searches that use skipping.
> The CompoundFileWriter buffer is created only briefly, so I think we
> can use a fairly large (32 KB?) buffer there.  And there should not be
> too many BufferedIndexOutputs alive at once so I think a large-ish
> buffer (16 KB?) should be OK.
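For reference, the "11% overall gain" quoted above follows directly from the two timings:

```java
public class GainCheck {
    public static void main(String[] args) {
        double before = 622;  // sec at 1 KB buffers
        double after = 554;   // sec at 8 KB buffers
        double gain = (before - after) / before;
        System.out.printf("gain: %.1f%%%n", gain * 100);  // prints "gain: 10.9%"
    }
}
```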

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

