lucene-dev mailing list archives

From "Michael Sun (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs
Date Mon, 05 Dec 2016 22:31:59 GMT

    [ https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723569#comment-15723569 ]

Michael Sun commented on SOLR-9764:
-----------------------------------

Ah, yes, you are right. Thanks [~varunthacker] for the suggestion. The 'inverse encoding' is a
good idea.
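
To make the idea concrete, here is a minimal sketch of what I understand by the 'inverse encoding'
(class and field names are made up, just to illustrate the idea): when a filter matches nearly all
documents, store only the ids that do NOT match instead of one bit per document.

{code:java}
// Hypothetical sketch of the 'inverse encoding' idea: when a filter matches
// nearly all documents, store only the ids that are NOT matched instead of a
// bit per document. Class and field names are made up for illustration.
class InverseDocSetSketch {
  private final int maxDoc;      // total number of docs
  private final int[] misses;    // sorted ids of the few non-matching docs

  InverseDocSetSketch(int maxDoc, int[] misses) {
    this.maxDoc = maxDoc;
    this.misses = misses;
  }

  boolean exists(int docId) {
    // a doc matches unless it appears in the (small) miss list
    return java.util.Arrays.binarySearch(misses, docId) < 0;
  }

  int size() {
    return maxDoc - misses.length;   // memory is O(misses), not O(maxDoc)
  }
}
{code}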

bq. Do you think this will be good enough for this case
On the memory-saving side, RoaringDocIdSet looks like a good solution. It would only use a small
amount of memory in this use case.

On the other hand, there are some implications for CPU usage, mainly in constructing the DocSet.
RoaringDocIdSet saves memory by choosing a different data structure for each chunk, based on the
number of matched documents in that chunk. However, the code doesn't know which data structure to
use before it has iterated over all documents in a chunk, which can result in some expensive
'shifts' in data structure and 'resizing'. For example, in this use case, for each chunk the code
basically starts filling a large short[], then shifts to a bitmap and converts the data from
short[] to bitmap, then keeps filling the bitmap, and later switches back to a small short[]. All
these steps can be expensive unless they are optimized for some use cases. In addition, all these
steps use an iterator to get matched docs one by one.
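
Roughly, the per-chunk construction I'm describing looks like the sketch below. This is simplified
and not the actual Lucene code; the threshold value and all names here are assumptions.

{code:java}
// Simplified sketch (not the actual Lucene RoaringDocIdSet code) of the
// per-chunk work when every document in a 2^16-doc chunk matches: the builder
// cannot know up front that a bitmap (or later an inverted short[]) is the
// right structure, so it buffers, switches representation, and fills docs
// one at a time from an iterator.
class ChunkBuilderSketch {
  static final int CHUNK_SIZE = 1 << 16;
  static final int THRESHOLD  = CHUNK_SIZE >>> 4;   // e.g. 4096 ids

  void buildChunk(java.util.PrimitiveIterator.OfInt matchingDocs) {
    short[] buffer = new short[THRESHOLD];           // start with a short[] of ids
    long[] bitmap = null;
    int count = 0;
    while (matchingDocs.hasNext()) {                 // matched docs arrive one by one
      int doc = matchingDocs.nextInt() & 0xFFFF;     // id within the chunk
      if (bitmap == null && count == THRESHOLD) {    // too many ids: shift to a bitmap
        bitmap = new long[CHUNK_SIZE / 64];
        for (int i = 0; i < count; i++) {            // convert what was buffered so far
          int buffered = buffer[i] & 0xFFFF;
          bitmap[buffered >>> 6] |= 1L << (buffered & 63);
        }
      }
      if (bitmap != null) {
        bitmap[doc >>> 6] |= 1L << (doc & 63);       // keep filling the bitmap
      } else {
        buffer[count] = (short) doc;
      }
      count++;
    }
    if (count > CHUNK_SIZE - THRESHOLD) {
      // nearly everything matched: switch again, to a small short[] of the
      // docs that did NOT match (the 'inverse encoding')
    }
  }
}
{code}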

Union and intersection using RoaringDocIdSet can be more expensive too, in addition to the cost
of construction. Of course, it's hard to fully understand the performance implications without
testing on a prototype. Any suggestions are welcome.
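
For comparison, here is a rough sketch of the two shapes of intersection I have in mind: a
word-wise AND for two BitDocSets versus iterator-driven probing for a roaring-style set. The
method names are made up and this is only meant to illustrate where the extra per-doc work comes
from.

{code:java}
// Rough comparison of the two shapes of intersection (method names made up).
class IntersectionSketch {
  // Two BitDocSets: a straight word-wise AND over the long[] words.
  static long[] intersectBits(long[] a, long[] b) {
    long[] out = new long[Math.min(a.length, b.length)];
    for (int i = 0; i < out.length; i++) {
      out[i] = a[i] & b[i];
    }
    return out;
  }

  // Roaring-style set vs bitmap: one side is walked doc by doc through an
  // iterator and probed against the other, which can be more work per doc.
  static int intersectionCount(java.util.PrimitiveIterator.OfInt roaringIt, long[] bits) {
    int count = 0;
    while (roaringIt.hasNext()) {
      int doc = roaringIt.nextInt();
      if ((bits[doc >>> 6] & (1L << (doc & 63))) != 0) {
        count++;
      }
    }
    return count;
  }
}
{code}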



> Design a memory efficient DocSet if a query returns all docs
> ------------------------------------------------------------
>
>                 Key: SOLR-9764
>                 URL: https://issues.apache.org/jira/browse/SOLR-9764
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public) 
>            Reporter: Michael Sun
>         Attachments: SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, SOLR_9764_no_cloneMe.patch
>
>
> In some use cases, particularly use cases with time series data that use a collection alias
> and partition data into multiple small collections by timestamp, a filter query can match all
> documents in a collection. Currently a BitDocSet is used, which contains a large array of long
> integers with every bit set to 1. After querying, the resulting DocSet saved in the filter
> cache is large and becomes one of the main memory consumers in these use cases.
> For example, suppose a Solr setup has 14 collections for data from the last 14 days, each
> collection with one day of data. A filter query for the last week of data would result in at
> least six DocSets in the filter cache, each matching all documents in one of the six
> collections.
> This issue is to design a new DocSet that is memory efficient for such a use case. The new
> DocSet removes the large array, reducing memory usage and GC pressure without losing the
> advantage of a large filter cache.
> In particular, for use cases with time series data that use a collection alias and partition
> data into multiple small collections by timestamp, the gain can be large.
> For further optimization, it may be helpful to design a DocSet with run-length encoding.
> Thanks [~mmokhtar] for the suggestion.
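
For reference, a minimal sketch of the kind of DocSet the issue description is asking for, with a
hypothetical class name: if the filter matches every document, only maxDoc needs to be kept, with
no per-document state at all.

{code:java}
// Hypothetical sketch of a DocSet for the "query matches all docs" case:
// instead of a long[] with every bit set, it only needs to remember maxDoc.
class MatchAllDocSetSketch {
  private final int maxDoc;

  MatchAllDocSetSketch(int maxDoc) {
    this.maxDoc = maxDoc;
  }

  boolean exists(int docId) {
    return docId >= 0 && docId < maxDoc;   // every doc in the index matches
  }

  int size() {
    return maxDoc;                         // O(1) memory instead of maxDoc/8 bytes
  }
}
{code}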



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org

