lucene-dev mailing list archives

From "Marvin Humphrey (JIRA)" <>
Subject [jira] Commented: (LUCENE-1458) Further steps towards flexible indexing
Date Mon, 24 Nov 2008 23:19:44 GMT


Marvin Humphrey commented on LUCENE-1458:

>> Hmm, maybe we can conflate this with a column-stride field writer
>> and require that sort fields have a fixed width?
> Yes I think column-stride fields writer should write the docID -> ord
> part of StringIndex to disk, and MultiRangeQuery in LUCENE-1461 would
> then use it. With enumerated type of fields (far fewer unique terms
> than docs), bit packing will make them compact.

How do you plan on dealing with the ord values changing as segments get 
added?  The addition of a single document triggers the rewriting of the
entire mapping.
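(For the enumerated-field case above, bit packing is compact because with N unique terms each ord needs only ceil(log2(N)) bits. Here is a minimal, self-contained sketch of that idea -- illustrative only, not Lucene's actual packed-ints code:

```java
// Sketch: pack docID -> ord values for an enumerated field into the
// minimum number of bits per value.  Class and method names are
// hypothetical, not an actual Lucene API.
final class PackedOrds {
    private final long[] blocks;
    private final int bitsPerValue;

    PackedOrds(int[] ords, int numUniqueTerms) {
        // ceil(log2(numUniqueTerms)) bits suffice for each ord
        this.bitsPerValue = Math.max(1,
            32 - Integer.numberOfLeadingZeros(Math.max(1, numUniqueTerms - 1)));
        this.blocks = new long[((ords.length * bitsPerValue) + 63) / 64];
        for (int docID = 0; docID < ords.length; docID++) {
            set(docID, ords[docID]);
        }
    }

    private void set(int index, int value) {
        long bitPos = (long) index * bitsPerValue;
        int block = (int) (bitPos >>> 6);
        int shift = (int) (bitPos & 63);
        blocks[block] |= ((long) value) << shift;
        int spill = shift + bitsPerValue - 64;  // value straddles two longs?
        if (spill > 0) {
            blocks[block + 1] |= ((long) value) >>> (bitsPerValue - spill);
        }
    }

    int get(int docID) {
        long bitPos = (long) docID * bitsPerValue;
        int block = (int) (bitPos >>> 6);
        int shift = (int) (bitPos & 63);
        long raw = blocks[block] >>> shift;
        int spill = shift + bitsPerValue - 64;
        if (spill > 0) {
            raw |= blocks[block + 1] << (bitsPerValue - spill);
        }
        return (int) (raw & ((1L << bitsPerValue) - 1));
    }
}
```

With, say, 4 unique terms, each docID costs 2 bits instead of a 32-bit int.)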

I was planning on having SortCacheWriter write out the docID -> ord
mapping, but with the understanding that there was a relatively high cost so
the module couldn't be core.   The idea was to take the cost of iterating over
the field caches during IndexReader startup, move that to index time, and write
out a file that could be memory mapped and shared among multiple search apps.
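(The write-once, mmap-and-share idea could look roughly like this -- the file
layout and class name are hypothetical, not an actual KS or Lucene format:

```java
// Sketch: at index time, write one 32-bit ord per docID to a file; at
// search time, memory-map it so multiple search processes share a single
// copy through the OS page cache instead of each iterating a field cache.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

final class SortCacheFile {
    // Index time: ords written sequentially, docID is the implicit key.
    static void write(Path path, int[] ords) throws IOException {
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            ByteBuffer buf = ByteBuffer.allocate(ords.length * 4)
                                       .order(ByteOrder.LITTLE_ENDIAN);
            buf.asIntBuffer().put(ords);
            ch.write(buf);
        }
    }

    // Search time: map the file; the buffer stays valid after close.
    static IntBuffer map(Path path) throws IOException {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            return ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size())
                     .order(ByteOrder.LITTLE_ENDIAN)
                     .asIntBuffer();
        }
    }
}
```
)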

In theory, if we were to have only per-segment docID -> ord maps, we could
perform inter-segment collation the same way that it's handled at the
MultiSearcher level -- by comparing the original strings.  It wouldn't be that
expensive in the grand scheme of things, because most of the work would be
done by comparing ord values within large segments.
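(A minimal sketch of that two-tier collation, with hypothetical names: hits
from the same segment compare by cheap ords; only hits from different
segments fall back to comparing the original strings:

```java
// Sketch of per-segment ord comparison with a cross-segment string
// fallback.  SegmentHit is illustrative, not an actual Lucene class.
final class SegmentHit {
    final int segment;   // which segment the hit came from
    final int ord;       // position in that segment's sorted term list
    final String value;  // original field value, for cross-segment ties

    SegmentHit(int segment, int ord, String value) {
        this.segment = segment;
        this.ord = ord;
        this.value = value;
    }

    static int compare(SegmentHit a, SegmentHit b) {
        if (a.segment == b.segment) {
            return Integer.compare(a.ord, b.ord);  // fast path: shared ord space
        }
        return a.value.compareTo(b.value);         // slow path: real strings
    }
}
```
)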

Unfortunately, that won't work because segment boundaries are hidden from

>> In KS, the relevant IndexReader methods no longer take a Term
>> object. (In fact, there IS no Term object any more -
>> KinoSearch::Index::Term has been removed.) Instead, they take a
>> string field and a generic "Obj".
> But you must at least require these Obj's to know how to compareTo one
> another? 


> Does this mean using per-field custom sort ordering (collator) is
> straightforward for KS?

That's one objective.  The implementation is incomplete.

Another objective is to allow non-string term types, e.g. TimeStamp,
Float... Hmm... how about FixedWidthText?
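(Once a term is just a field name plus a generic comparable object, per-field
ordering reduces to a registry of comparators.  A sketch -- the registry is
hypothetical, not an actual KS or Lucene API:

```java
// Sketch: per-field sort ordering.  A field may register its own
// Comparator (e.g. a locale-aware Collator); otherwise the term objects
// are assumed to know how to compareTo one another.
import java.text.Collator;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

final class FieldComparators {
    private final Map<String, Comparator<Object>> byField = new HashMap<>();

    void register(String field, Comparator<Object> cmp) {
        byField.put(field, cmp);
    }

    int compare(String field, Object a, Object b) {
        Comparator<Object> cmp = byField.get(field);
        if (cmp != null) {
            return cmp.compare(a, b);
        }
        // Default: the Objs must at least compareTo one another.
        @SuppressWarnings("unchecked")
        Comparable<Object> ca = (Comparable<Object>) a;
        return ca.compareTo(b);
    }
}
```

A "title" field could register `Collator.getInstance(Locale.FRENCH)` --
java.text.Collator already implements Comparator -- while a TimeStamp field
needs nothing registered, since Longs compare naturally.)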

> Further steps towards flexible indexing
> ---------------------------------------
>                 Key: LUCENE-1458
>                 URL:
>             Project: Lucene - Java
>          Issue Type: New Feature
>          Components: Index
>    Affects Versions: 2.9
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>            Priority: Minor
>             Fix For: 2.9
>         Attachments: LUCENE-1458.patch, LUCENE-1458.patch, LUCENE-1458.patch, LUCENE-1458.patch
> I attached a very rough checkpoint of my current patch, to get early
> feedback.  All tests pass, though back compat tests don't pass due to
> changes to package-private APIs plus certain bugs in tests that
> happened to work (eg call TermPositions.nextPosition() too many times,
> which the new API asserts against).
> [Aside: I think, when we commit changes to package-private APIs such
> that back-compat tests don't pass, we could go back, make a branch on
> the back-compat tag, commit changes to the tests to use the new
> package private APIs on that branch, then fix nightly build to use the
> tip of that branch?]
> There's still plenty to do before this is committable! This is a
> rather large change:
>   * Switches to a new more efficient terms dict format.  This still
>     uses tii/tis files, but the tii only stores term & long offset
>     (not a TermInfo).  At seek points, tis encodes term & freq/prox
>     offsets absolutely instead of with deltas.  Also, tis/tii
>     are structured by field, so we don't have to record field number
>     in every term.
> .
>     On first 1 M docs of Wikipedia, tii file is 36% smaller (0.99 MB
>     -> 0.64 MB) and tis file is 9% smaller (75.5 MB -> 68.5 MB).
> .
>     RAM usage when loading terms dict index is significantly less
>     since we only load an array of offsets and an array of String (no
>     more TermInfo array).  It should be faster to init too.
> .
>     This part is basically done.
>   * Introduces modular reader codec that strongly decouples terms dict
>     from docs/positions readers.  EG there is no more TermInfo used
>     when reading the new format.
> .
>     There's nice symmetry now between reading & writing in the codec
>     chain -- the current docs/prox format is captured in:
> {code}
> FormatPostingsTermsDictWriter/Reader
> FormatPostingsDocsWriter/Reader (.frq file) and
> FormatPostingsPositionsWriter/Reader (.prx file).
> {code}
>     This part is basically done.
>   * Introduces a new "flex" API for iterating through the fields,
>     terms, docs and positions:
> {code}
> FieldProducer -> TermsEnum -> DocsEnum -> PostingsEnum
> {code}
>     This replaces TermEnum/Docs/Positions.  SegmentReader emulates the
>     old API on top of the new API to keep back-compat.
> Next steps:
>   * Plug in new codecs (pulsing, pfor) to exercise the modularity /
>     fix any hidden assumptions.
>   * Expose new API out of IndexReader, deprecate old API but emulate
>     old API on top of new one, switch all core/contrib users to the
>     new API.
>   * Maybe switch to AttributeSources as the base class for TermsEnum,
>     DocsEnum, PostingsEnum -- this would give readers API flexibility
>     (not just index-file-format flexibility).  EG if someone wanted
>     to store payload at the term-doc level instead of
>     term-doc-position level, you could just add a new attribute.
>   * Test performance & iterate.
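The flex enumeration chain named in the quoted description
(FieldProducer -> TermsEnum -> DocsEnum -> PostingsEnum) can be sketched as
interfaces like the following -- hypothetical signatures for illustration,
not the actual APIs in the attached patch:

```java
// Sketch of the four-level enumeration chain replacing
// TermEnum/TermDocs/TermPositions.  Signatures are illustrative only.
interface FieldProducer {
    TermsEnum terms(String field);  // terms for one field, or null if absent
}

interface TermsEnum {
    boolean next();                 // advance to the next term in the field
    DocsEnum docs();                // docs containing the current term
}

interface DocsEnum {
    int nextDoc();                  // next docID, or -1 when exhausted
    int freq();                     // term frequency within the current doc
    PostingsEnum positions();       // positions within the current doc
}

interface PostingsEnum {
    int nextPosition();             // next position, asserted not to be
                                    // called more than freq() times
}
```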

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
