lucene-dev mailing list archives

From "Marvin Humphrey (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-1458) Further steps towards flexible indexing
Date Tue, 25 Nov 2008 17:47:44 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12650647#action_12650647 ]

Marvin Humphrey commented on LUCENE-1458:
-----------------------------------------

>> We're trying to fake up an array of strings without having to load anything
>> into process memory.

> We could do something similar in Lucene. Not creating String objects is
> nice. 

OK, assume that you slurp all three files.  Here's the code from above, ported
from C to Java.  

{code}
int result = -1;
while (hi >= lo) {
  int  mid           = lo + ((hi - lo) / 2);
  long midTextOffset = textOffsets[mid];
  long midTextLength = textOffsets[mid + 1] - midTextOffset;
  int  comparison    = StringHelper.compareUTF8Bytes(
                          targetUTF8Bytes, 0, targetLength,
                          termUTF8Bytes, (int) midTextOffset, (int) midTextLength);
  if      (comparison < 0) { hi = mid - 1; }
  else if (comparison > 0) { lo = mid + 1; }
  else {
    result = mid;
    break;
  }
}
long offsetIntoMainTermDict = mainTermDictFilePointers[result];
...
{code}

Other than the slurping, the only significant difference is the need for the
comparison routine to take a byte[] array and an offset, rather than a char*
pointer.
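
For illustration, here's a sketch of what such a comparison routine might look
like -- the class name is made up, since no such StringHelper method exists
yet.  Because UTF-8 byte order matches Unicode code point order, a plain
unsigned-byte compare gives correct term ordering:

```java
// Hypothetical helper (name illustrative): lexicographically compares
// two UTF-8 byte ranges within larger arrays.
public class StringHelperSketch {
  public static int compareUTF8Bytes(byte[] a, int aOff, int aLen,
                                     byte[] b, int bOff, int bLen) {
    int end = Math.min(aLen, bLen);
    for (int i = 0; i < end; i++) {
      // Mask to 0xFF so bytes >= 0x80 compare as unsigned values;
      // this preserves Unicode code point order for UTF-8.
      int diff = (a[aOff + i] & 0xFF) - (b[bOff + i] & 0xFF);
      if (diff != 0) return diff;
    }
    // All shared bytes equal: the shorter range sorts first.
    return aLen - bLen;
  }
}
```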

You can also use FileChannels to memory map this stuff, right?  (Have to be
careful on 32-bit systems, though.)
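
Something along these lines, I'd imagine (class name hypothetical) -- the OS
pages the data in on demand, so nothing gets copied into process memory up
front:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical helper: memory-maps a whole file read-only via FileChannel.
public class MmapSketch {
  public static MappedByteBuffer map(Path path) throws IOException {
    try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ)) {
      // A single mapping is capped at Integer.MAX_VALUE bytes, and on
      // 32-bit JVMs virtual address space runs out well before that,
      // so large files would have to be mapped in chunks.
      return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
    }
  }
}
```

The mapping stays valid after the channel is closed, so the caller only has
to hang onto the buffer.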

> B-tree or FST/trie or ... something.

Much to my regret, my tree algorithm vocabulary is limited -- I haven't spent
enough time coding such projects to be able to intuit sophisticated solutions.
So I'll be counting on you, Jason Rutherglen, and Eks Dev to suggest
appropriate algorithms based on your experience.

Our segment-based inverted index term dictionary has a few defining
characteristics.

First, a lot of tree algorithms are optimized to a greater or lesser extent
for insertion speed, but we hardly care about that at all.  We can spend all
the cycles we need at index-time balancing nodes within a segment, and once
the tree is written out, it will never be updated.

Second, when we are writing out the term dictionary at index-time, the raw
data will be fed into the writer in sorted order as iterated values, one
term/term-info pair at a time.  Ideally, the writer would be able to serialize
the tree structure during this single pass, but it could also write a
temporary file during the terms iteration then write a final file afterwards.
The main limitation is that the writer will never be able to "see" all
terms at once as an array.
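
To make that contract concrete, here's a minimal sketch (everything named
here is made up, not an existing API): the writer accumulates its offset
table incrementally as pre-sorted terms are pushed in, and never holds the
full term set:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical single-pass term dict writer: terms arrive one at a
// time in sorted order; the writer only tracks a running byte offset.
public class TermDictWriterSketch {
  private final List<Long> textOffsets = new ArrayList<>();
  private long nextOffset = 0;

  public void add(byte[] termUTF8) {
    // Caller guarantees sorted order; a real writer might assert it.
    textOffsets.add(nextOffset);
    nextOffset += termUTF8.length;
  }

  public long[] finish() {
    // Sentinel entry so length of term i is offsets[i + 1] - offsets[i].
    textOffsets.add(nextOffset);
    long[] out = new long[textOffsets.size()];
    for (int i = 0; i < out.length; i++) out[i] = textOffsets.get(i);
    return out;
  }
}
```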

Third, at read-time we're going to have one of these trees per segment.  We'd
really like to be able to conflate them somehow.  KinoSearch actually
implements a MultiLexicon class which keeps SegLexicons in a PriorityQueue;
MultiLexicon_Next() advances the queue to the next unique term.  However,
that's slow, unwieldy, and inflexible.  Can we do better?
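
The gist of that PriorityQueue merge, sketched in Java with illustrative
names (this is not KinoSearch's actual API): each segment contributes a
sorted term iterator, the queue yields the globally smallest, and duplicates
across segments get skipped:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical MultiLexicon-style merge over per-segment sorted iterators.
public class MultiTermMergeSketch {
  // One queue entry per live segment: its current term plus its iterator.
  private static final class Entry {
    String term;
    final Iterator<String> it;
    Entry(Iterator<String> it) { this.it = it; this.term = it.next(); }
  }

  public static List<String> mergeUnique(List<Iterator<String>> segments) {
    PriorityQueue<Entry> pq =
        new PriorityQueue<>(Comparator.comparing((Entry e) -> e.term));
    for (Iterator<String> it : segments) {
      if (it.hasNext()) pq.add(new Entry(it));
    }
    List<String> out = new ArrayList<>();
    String last = null;
    while (!pq.isEmpty()) {
      Entry e = pq.poll();
      if (!e.term.equals(last)) {   // same term from another segment? skip it
        out.add(e.term);
        last = e.term;
      }
      if (e.it.hasNext()) { e.term = e.it.next(); pq.add(e); }
    }
    return out;
  }
}
```

The slowness comes from the per-term queue sift plus the duplicate check on
every advance -- which is exactly why it'd be nice to do better.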

> Actually: I just realized the terms index need not store all suffixes
> of the terms it stores. Only unique prefixes (ie a simple letter
> trie, not FST). Because, its goal is to simply find the spot in the
> main lexicon file to seek to and then scan from. This makes it even
> smaller!

It would be ideal if we could separate the keys from the values and put all
the keys in a single file.

> Though, if we want to do neat things like respelling, wildcard/prefix
> searching, etc., which reduce to graph-intersection problems, we would
> need the suffix and we would need the entire lexicon (not just every
> 128th index term) compiled into the FST.

The main purpose of breaking out a separate index structure is to avoid binary
searching over the large primary file.  There's nothing special about the
extra file -- in fact, it's a drawback that it doesn't include all terms.  If
we can jam all the data we need to binary search against into the front of the
file, but include the data for all terms in an infrequently-accessed tail, we
win.

> Further steps towards flexible indexing
> ---------------------------------------
>
>                 Key: LUCENE-1458
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1458
>             Project: Lucene - Java
>          Issue Type: New Feature
>          Components: Index
>    Affects Versions: 2.9
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>            Priority: Minor
>             Fix For: 2.9
>
>         Attachments: LUCENE-1458.patch, LUCENE-1458.patch, LUCENE-1458.patch, LUCENE-1458.patch
>
>
> I attached a very rough checkpoint of my current patch, to get early
> feedback.  All tests pass, though back compat tests don't pass due to
> changes to package-private APIs plus certain bugs in tests that
> happened to work (eg call TermPositions.nextPosition() too many times,
> which the new API asserts against).
> [Aside: I think, when we commit changes to package-private APIs such
> that back-compat tests don't pass, we could go back, make a branch on
> the back-compat tag, commit changes to the tests to use the new
> package private APIs on that branch, then fix nightly build to use the
> tip of that branch?]
> There's still plenty to do before this is committable! This is a
> rather large change:
>   * Switches to a new more efficient terms dict format.  This still
>     uses tii/tis files, but the tii only stores term & long offset
>     (not a TermInfo).  At seek points, tis encodes term & freq/prox
>     offsets absolutely instead of as deltas.  Also, tis/tii
>     are structured by field, so we don't have to record field number
>     in every term.
> .
>     On first 1 M docs of Wikipedia, tii file is 36% smaller (0.99 MB
>     -> 0.64 MB) and tis file is 9% smaller (75.5 MB -> 68.5 MB).
> .
>     RAM usage when loading terms dict index is significantly less
>     since we only load an array of offsets and an array of String (no
>     more TermInfo array).  It should be faster to init too.
> .
>     This part is basically done.
>   * Introduces modular reader codec that strongly decouples terms dict
>     from docs/positions readers.  EG there is no more TermInfo used
>     when reading the new format.
> .
>     There's nice symmetry now between reading & writing in the codec
>     chain -- the current docs/prox format is captured in:
> {code}
> FormatPostingsTermsDictWriter/Reader
> FormatPostingsDocsWriter/Reader (.frq file) and
> FormatPostingsPositionsWriter/Reader (.prx file).
> {code}
>     This part is basically done.
>   * Introduces a new "flex" API for iterating through the fields,
>     terms, docs and positions:
> {code}
> FieldProducer -> TermsEnum -> DocsEnum -> PostingsEnum
> {code}
>     This replaces TermEnum/Docs/Positions.  SegmentReader emulates the
>     old API on top of the new API to keep back-compat.
>     
> Next steps:
>   * Plug in new codecs (pulsing, pfor) to exercise the modularity /
>     fix any hidden assumptions.
>   * Expose new API out of IndexReader, deprecate old API but emulate
>     old API on top of new one, switch all core/contrib users to the
>     new API.
>   * Maybe switch to AttributeSources as the base class for TermsEnum,
>     DocsEnum, PostingsEnum -- this would give readers API flexibility
>     (not just index-file-format flexibility).  EG if someone wanted
>     to store payload at the term-doc level instead of
>     term-doc-position level, you could just add a new attribute.
>   * Test performance & iterate.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

