lucene-dev mailing list archives

From "Michael McCandless (JIRA)" <>
Subject [jira] Commented: (LUCENE-652) Compressed fields should be "externalized" (from Fields into Document)
Date Fri, 20 Mar 2009 17:50:50 GMT


Michael McCandless commented on LUCENE-652:

Good questions!

bq. Is an index compressed with Store.COMPRESS still readable?

Yes, we have to support that until Lucene 4.0.  But
Field.Store.COMPRESS will be removed in 3.0 (i.e. you can still read
previously compressed fields, interact with an index that has
compressed fields in it, etc.; you just can't add docs with
Field.Store.COMPRESS to an index as of 3.0).

bq. Can I uncompress fields compressed using the old tools by retrieving the byte array
and using CompressionTools?

Well... yes and no: the same codec is used, but you can't actually get
at the compressed byte[] yourself, because Lucene decompresses it for
you when the field is retrieved.

bq. Compression was also used for string fields; maybe CompressionTools also supplies a method
to compress strings (converting them to UTF-8 in the process, to be backwards compatible). This
would prevent people from calling String.getBytes() without a charset and then wondering why
they cannot read their index again...

OK I'll add them.  I'll name them compressString and decompressString.
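As a rough illustration of what such helpers could look like, here is a minimal sketch built directly on java.util.zip's Deflater/Inflater, which is the same machinery Lucene's CompressionTools wraps. The class and method bodies below are an assumption for illustration, not Lucene's actual implementation; only the proposed names compressString/decompressString and the UTF-8 round-trip come from the discussion above.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Hypothetical sketch of the proposed string helpers; not Lucene source.
public class StringCompression {

    // Encode the String as UTF-8 (for backwards compatibility), then deflate.
    public static byte[] compressString(String value) {
        byte[] input = value.getBytes(StandardCharsets.UTF_8);
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream(input.length);
        byte[] buf = new byte[1024];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    // Inflate the bytes, then decode them back from UTF-8 into a String.
    public static String decompressString(byte[] value) {
        Inflater inflater = new Inflater();
        inflater.setInput(value);
        ByteArrayOutputStream out = new ByteArrayOutputStream(value.length * 2);
        byte[] buf = new byte[1024];
        try {
            while (!inflater.finished()) {
                out.write(buf, 0, inflater.inflate(buf));
            }
        } catch (DataFormatException e) {
            throw new RuntimeException("field is not valid compressed data", e);
        }
        inflater.end();
        return new String(out.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String original = "stored field text, round-tripped through UTF-8";
        byte[] compressed = compressString(original);
        System.out.println(decompressString(compressed).equals(original)); // prints "true"
    }
}
```

Going through UTF-8 on both sides is what avoids the String.getBytes() trap mentioned above: the bytes written and the bytes read are always interpreted with the same charset, regardless of the JVM's default.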

> Compressed fields should be "externalized" (from Fields into Document)
> ----------------------------------------------------------------------
>                 Key: LUCENE-652
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>    Affects Versions: 1.9, 2.0.0, 2.1
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>            Priority: Minor
>             Fix For: 2.9
>         Attachments: LUCENE-652.patch
> Right now, as of the 2.0 release, Lucene supports compressed stored fields.  However,
> after discussion on java-dev, the suggestion arose, from Robert Engels, that it would be
> better if this logic were moved to the Document level.  This way the indexing level just
> stores opaque binary fields, and Document handles compressing/uncompressing as needed.
> This approach would have prevented issues like LUCENE-629, because merging of segments
> would never need to decompress.
> See this thread for the recent discussion:
> When we do this we should also work on the related issue LUCENE-648.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

