lucene-java-user mailing list archives

From Michael Sokolov <msoko...@safaribooksonline.com>
Subject Re: external file stored field codec
Date Fri, 18 Oct 2013 04:14:07 GMT
On 10/13/13 8:09 PM, Michael Sokolov wrote:
> On 10/13/2013 1:52 PM, Adrien Grand wrote:
>> Hi Michael,
>>
>> I'm not familiar enough with operating system internals to know
>> exactly what happens when a file is open, but it sounds to me like
>> having separate files per document or field adds levels of
>> indirection when loading stored fields, so I would be surprised if
>> it actually proved to be more efficient than storing everything in a
>> single file.
>>
> That's true, Adrien, there's definitely a cost to using files. There
> are some gnarly challenges here (mostly to do with the large number
> of files, as you say, and with cleaning up after deletes - deletion
> is always hard). I'm not sure it's going to be possible to both clean
> up and maintain files for stale commits; this will become problematic
> in the way that having index files on NFS mounts is problematic.
>
> I think the hope is that there will be countervailing savings during 
> writes and merges (mostly) because we may be able to cleverly avoid 
> copying the contents of stored fields being merged. There may also
> be savings at query time, due to reduced RAM requirements, since the
> large stored fields won't be paged in while queries run. As I said, 
> some simple tests do show improvements under at least some 
> circumstances, so I'm pursuing this a bit further.  I have a 
> preliminary implementation as a codec now, and I'm learning a bit 
> about Lucene's index internals. BTW SimpleTextCodec is a great tool 
> for learning and debugging.
>
> The background for this is a document store with large files (think 
> PDFs, but lots of formats) that have to be tracked, and have 
> associated metadata.  We've been storing these externally, but it 
> would be beneficial to have a single data management layer: i.e. to 
> push this down into Lucene, for a variety of reasons.  For one, we 
> could rely on Solr to do our replication for us.
>
> I'll post back when I have some measurements.
>
> -Mike
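
(First, an aside on the SimpleTextCodec mention above: trying it out
is just a codec swap on the writer config. A minimal sketch, assuming
a recent Lucene with the lucene-codecs module on the classpath --
constructor details vary a bit by version:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.codecs.simpletext.SimpleTextCodec;
    import org.apache.lucene.index.IndexWriterConfig;

    // SimpleTextCodec (from the lucene-codecs module) writes every
    // index file as plain text -- far too slow for real use, but
    // ideal for inspecting exactly what ends up in each segment.
    IndexWriterConfig config =
        new IndexWriterConfig(new StandardAnalyzer());
    config.setCodec(new SimpleTextCodec());

Anyway, back to the external-file experiment.)
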
This idea actually does seem to be working out pretty nicely.  I 
compared time to write and then to read documents that included a couple 
of small indexed fields and a binary stored field that varied in size.  
Writing to external files, via the FSFieldCodec, was 3-20 times faster 
than writing to the index in the normal way (using MMapDirectory).  
Reading was sometimes faster and sometimes slower. I also measured time 
for a forceMerge(1) at the end of each test: this was almost always 
nearly zero when binaries were external, and grew larger with more data 
in the normal case.  I believe the improvements we're seeing here result 
largely from removing the bulk of the data from the merge I/O path.
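
For anyone who wants to reproduce this kind of measurement, the test
reduces to roughly the following sketch. The document count and blob
size are arbitrary knobs, the commented-out setCodec call is where the
external-file codec would plug in, and exact constructor signatures
vary by Lucene version:

    import java.nio.file.Paths;
    import java.util.Random;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.StoredField;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.MMapDirectory;

    public class StoredFieldBench {
      public static void main(String[] args) throws Exception {
        final int numDocs = 10000;                  // arbitrary knob
        final byte[] blob = new byte[64 * 1024];    // arbitrary knob
        new Random(42).nextBytes(blob);

        try (Directory dir = new MMapDirectory(Paths.get("bench-index"))) {
          IndexWriterConfig iwc =
              new IndexWriterConfig(new StandardAnalyzer());
          // iwc.setCodec(new FSFieldCodec()); // external-file variant

          long t0 = System.nanoTime();
          try (IndexWriter writer = new IndexWriter(dir, iwc)) {
            for (int i = 0; i < numDocs; i++) {
              Document doc = new Document();
              // a couple of small indexed fields plus one big binary
              doc.add(new StringField("id", Integer.toString(i),
                  Field.Store.YES));
              doc.add(new StoredField("blob", blob));
              writer.addDocument(doc);
            }
            writer.commit();
            long t1 = System.nanoTime();
            writer.forceMerge(1);   // where external storage should win
            System.out.printf("write %.1fs, merge %.1fs%n",
                (t1 - t0) / 1e9, (System.nanoTime() - t1) / 1e9);
          }

          long t2 = System.nanoTime();
          try (DirectoryReader reader = DirectoryReader.open(dir)) {
            for (int i = 0; i < reader.maxDoc(); i++) {
              // force a stored-field fetch for every document
              reader.document(i).getBinaryValue("blob");
            }
          }
          System.out.printf("read %.1fs%n",
              (System.nanoTime() - t2) / 1e9);
        }
      }
    }
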

As with any performance measurement, a lot of factors can affect the 
results, but this effect seems pretty robust across the conditions I 
measured (different file sizes, numbers of files, and frequency of 
commits, with lots of repetition). One oddity is a large difference 
between the Mac SSD filesystem (15-20x writing, 0.6x reading via 
FSFieldCodec) and the Linux ext4 HD filesystem (3-4x writing, 1.5x 
reading).

The codec works as a wrapper around another codec (like the compressing 
codecs), intercepting binary and string stored fields larger than a 
configurable threshold, and storing a file number as a reference in the 
main index, which then functions kind of like a symlink.  The codec 
intercepts merges in order to clean up files that are no longer 
referenced, taking special care to preserve the ability of the other 
codecs to perform bulk merges.  The codec passes all the Lucene unit 
tests in the o.a.l.index package.
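
In outline the wrapper looks something like the skeleton below --
simplified and illustrative rather than the real code, with the
stored-fields interception itself elided; FilterCodec supplies all the
delegation boilerplate:

    import org.apache.lucene.codecs.Codec;
    import org.apache.lucene.codecs.FilterCodec;
    import org.apache.lucene.codecs.StoredFieldsFormat;

    // Skeleton of the wrapper: everything delegates to the wrapped
    // codec except the stored-fields format, which is where large
    // binary/string fields get spilled to external files and replaced
    // by a file-number reference (the "symlink").
    public final class FSFieldCodec extends FilterCodec {

      // Illustrative knob: fields larger than this go to external files.
      static final int THRESHOLD = 64 * 1024;

      public FSFieldCodec() {
        // The codec name must be registered via SPI
        // (META-INF/services/org.apache.lucene.codecs.Codec) so that
        // segments written with it can be read back.
        super("FSFieldCodec", Codec.getDefault());
      }

      @Override
      public StoredFieldsFormat storedFieldsFormat() {
        // The real implementation returns a format whose writer checks
        // each stored field against THRESHOLD, writes oversized values
        // to numbered external files, and hands the delegate only the
        // reference; the reader resolves references on fetch, and the
        // merge hooks delete files that are no longer referenced.
        // Plain delegation is shown here so the skeleton compiles.
        return delegate.storedFieldsFormat();
      }
    }

It gets enabled like any other codec, via
IndexWriterConfig.setCodec(new FSFieldCodec()).
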

The implementation is still very experimental; there are lots of 
details to be worked out. For example, I haven't yet measured the 
performance impact of deletions, which could be pretty significant. It 
would be really great if someone with intimate knowledge of Lucene's 
indexing internals were able to review it: I'd be happy to share the 
code and my list of TODOs and questions if there's any interest, but 
at the least I thought it would be interesting to report that the 
approach does seem to be worth pursuing.

-Mike

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org

