lucene-dev mailing list archives

From "Adrien Grand (JIRA)" <>
Subject [jira] [Commented] (LUCENE-7579) Sorting on flushed segment
Date Fri, 16 Dec 2016 09:45:58 GMT


Adrien Grand commented on LUCENE-7579:

bq. I am not happy that I had to add this new public API in the StoredFieldsReader but it's
the only way to make this optimized for the compressing case. 

I was thinking about it too, and I suspect the optimization does not buy much when blocks contain multiple documents (i.e. small docs), since the bottleneck would likely be that the sorting stored fields format keeps decompressing 16KB blocks for every single document. Maybe, rather than reusing the codec's stored fields format for the temporary stored fields, we should do the buffering in memory or on disk with a custom format that has faster random access? I would expect that to be faster in many cases, and it would let us get rid of this new API.
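The custom random-access format suggested above could look roughly like the following minimal Java sketch: an uncompressed append-only buffer with one offset per document, so reading any document is O(1) and never decompresses a shared block. The class name and layout are hypothetical, not an actual Lucene API.

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical temporary stored-fields buffer: documents are appended
// uncompressed, and a per-document start offset gives direct random access.
class TempStoredFieldsBuffer {
    private final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    private final List<Integer> offsets = new ArrayList<>(); // start offset of each doc

    // Append one document's serialized stored fields.
    void addDocument(byte[] serializedDoc) {
        offsets.add(bytes.size());
        bytes.write(serializedDoc, 0, serializedDoc.length);
    }

    // Fetch one document by ID without touching any other document.
    // (toByteArray() copies here for simplicity; a real implementation
    // would read from the backing buffer or a file directly.)
    byte[] document(int docID) {
        byte[] all = bytes.toByteArray();
        int start = offsets.get(docID);
        int end = docID + 1 < offsets.size() ? offsets.get(docID + 1) : all.length;
        return Arrays.copyOfRange(all, start, end);
    }
}
```

Trading compression for random access seems acceptable here because the data is temporary and deleted right after the sorted copy is written.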

> Sorting on flushed segment
> --------------------------
>                 Key: LUCENE-7579
>                 URL:
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Ferenczi Jim
> Today flushed segments built by an index writer with an index sort specified are not
sorted. The merge is responsible for sorting these segments, potentially together with others
that are already sorted (resulting from another merge).
> I'd like to investigate the cost of sorting the segment directly during the flush. This
could make merges faster, since there are some cheap optimizations that can only be done
if all segments to be merged are sorted.
>  For instance, the merge of points could use the bulk merge instead of rebuilding
the points from scratch.
> I made a small prototype which sorts the segment on flush here:
> The idea is simple: for points, norms, doc values and terms I use the SortingLeafReader
implementation to translate the values that we have in RAM into a sorted enumeration.
> For stored fields I use a two-pass scheme where the documents are first written to disk
unsorted and then copied to another file in the correct order. I use the same stored fields
format for both steps and simply remove the file produced by the first pass at the end of
the process.
> This prototype has no implementation for index sorting that uses term vectors yet. I'll
add this later if the tests are good enough.
> Speaking of testing, I tried this branch with [~mikemccand]'s benchmark scripts and compared
master with index sorting against my branch with index sorting on flush. I tried with sparsetaxis
and wikipedia, and the first results are weird: when I use the SerialScheduler and only one
thread to write the docs, index sorting on flush is slower, but when I use two threads,
sorting on flush is much faster, even with the SerialScheduler. I'll continue to run the tests
so that I can share something more meaningful.
> The tests are passing except one about concurrent DV updates. I don't know that part
at all, so I have not fixed the test yet. I don't even know whether we can make it work with
index sorting ;).
>  [~mikemccand] I would love to have your feedback on the prototype. Could you please
take a look? I am sure there are plenty of bugs, but I think it's a good start for evaluating
the feasibility of this feature.
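The two-pass stored fields scheme described in the issue can be sketched in plain Java: the first pass writes documents in arrival order, then an old-to-new doc ID mapping (analogous to Lucene's Sorter.DocMap) drives the second pass that copies them out in index-sort order. The helper names and the use of a single long sort key per document are illustrative assumptions, not the prototype's actual code.

```java
import java.util.Arrays;
import java.util.Comparator;

// Illustrative sketch of the two-pass idea: documents are buffered unsorted,
// then copied out in sorted order via a newToOld doc ID mapping.
class SortedCopy {
    // Build the mapping: position i holds the old doc ID that sorts to position i.
    // Assumes one long sort key per document (a simplification of an index sort).
    static int[] newToOld(long[] sortKeys) {
        Integer[] oldIds = new Integer[sortKeys.length];
        for (int i = 0; i < oldIds.length; i++) oldIds[i] = i;
        Arrays.sort(oldIds, Comparator.comparingLong(i -> sortKeys[i]));
        int[] map = new int[oldIds.length];
        for (int i = 0; i < map.length; i++) map[i] = oldIds[i];
        return map;
    }

    // Second pass: emit the unsorted documents in sorted order.
    // Each read is random access into the first-pass data, which is why a
    // format with cheap per-document access matters here.
    static String[] copySorted(String[] unsortedDocs, long[] sortKeys) {
        int[] map = newToOld(sortKeys);
        String[] sorted = new String[unsortedDocs.length];
        for (int i = 0; i < sorted.length; i++) sorted[i] = unsortedDocs[map[i]];
        return sorted;
    }
}
```

The second pass visits the first-pass file in the doc map's order, not sequentially, which is exactly where a block-compressed format pays the repeated-decompression cost discussed above.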

This message was sent by Atlassian JIRA
