lucene-java-user mailing list archives

From Vitaly Funstein <vfunst...@gmail.com>
Subject SegmentReader heap usage with stored field compression on
Date Sat, 23 Aug 2014 23:08:52 GMT
Is it reasonable to assume that using stored field compression with many
stored fields per document in a very large index (100+ GB) could lead to
significant heap utilization? If I am reading the code in
CompressingStoredFieldsIndexReader correctly, there is non-trivial
accounting overhead, per segment, to maintain the fields index reader
state, and that overhead appears to be a function of both the compression
chunk size and the overall segment size.
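
In case it helps frame the question, here is a minimal sketch of how I'm trying to confirm where the heap goes, written against the 4.x APIs we are on; it assumes SegmentReader.ramBytesUsed() is available (Accountable was added in 4.9, which matches our version), and the index path is just a placeholder:

    import java.io.File;

    import org.apache.lucene.index.AtomicReaderContext;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.SegmentReader;
    import org.apache.lucene.store.FSDirectory;

    public class SegmentHeapCheck {
      public static void main(String[] args) throws Exception {
        // "/path/to/index" is a placeholder for the actual index directory.
        try (DirectoryReader reader =
                 DirectoryReader.open(FSDirectory.open(new File("/path/to/index")))) {
          long total = 0;
          for (AtomicReaderContext ctx : reader.leaves()) {
            if (ctx.reader() instanceof SegmentReader) {
              SegmentReader sr = (SegmentReader) ctx.reader();
              // ramBytesUsed() reports per-segment reader state, which should
              // include whatever the stored fields index reader holds on heap.
              long bytes = sr.ramBytesUsed();
              total += bytes;
              System.out.println(sr.getSegmentName() + ": maxDoc=" + sr.maxDoc()
                  + ", ramBytesUsed=" + bytes);
            }
          }
          System.out.println("Total ramBytesUsed across segments: " + total);
        }
      }
    }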

I'm not sure if my hunch is correct here, but we have run into situations
where loading stored fields for a relatively small number of search results
(<100K) after a single query against an index of the above size results in
an OOME with 5+ GB heap sizes, with the dominant objects in the heap dump
being SegmentReader instances... hence the question. Thank you.
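
For reference, the access pattern that hits this is just the usual stored-field loading loop over the hits of a single query; a simplified sketch of what we do (names and the hit count are illustrative):

    import org.apache.lucene.document.Document;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.TopDocs;

    public class LoadHits {
      // Run one query, then pull the stored fields for every hit. With roughly
      // 100K hits against the 100+ GB index, this is where the heap blows up.
      static void loadStoredFields(IndexSearcher searcher, Query query) throws Exception {
        TopDocs hits = searcher.search(query, 100000);
        for (ScoreDoc sd : hits.scoreDocs) {
          Document doc = searcher.doc(sd.doc); // loads all stored fields of the hit
          // ... hand the document off to the application layer ...
        }
      }
    }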
