lucene-dev mailing list archives

From "Michael McCandless (JIRA)" <j...@apache.org>
Subject [jira] Resolved: (LUCENE-1374) Merging of compressed string Fields may hit NPE
Date Wed, 03 Sep 2008 14:01:44 GMT

     [ https://issues.apache.org/jira/browse/LUCENE-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael McCandless resolved LUCENE-1374.
----------------------------------------

    Resolution: Fixed

Committed revision 691617.

> Merging of compressed string Fields may hit NPE
> -----------------------------------------------
>
>                 Key: LUCENE-1374
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1374
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Index
>    Affects Versions: 2.4
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>             Fix For: 2.4
>
>         Attachments: LUCENE-1374.patch
>
>
> This bug was introduced with LUCENE-1219 (only present on 2.4).
> The bug happens when merging compressed string fields, but only if the bulk-merging code does not apply because the FieldInfos for the segment being merged are not congruent.  This test shows the bug:
> {code}
>   public void testMergeCompressedFields() throws IOException {
>     File indexDir = new File(System.getProperty("tempDir"), "mergecompressedfields");
>     Directory dir = FSDirectory.getDirectory(indexDir);
>     try {
>       for(int i=0;i<5;i++) {
>         // Must make a new writer & doc each time, w/
>         // different fields, so bulk merge of stored fields
>         // cannot run:
>         IndexWriter w = new IndexWriter(dir, new WhitespaceAnalyzer(), i==0, IndexWriter.MaxFieldLength.UNLIMITED);
>         w.setMergeFactor(5);
>         w.setMergeScheduler(new SerialMergeScheduler());
>         Document doc = new Document();
>         doc.add(new Field("test1", "this is some data that will be compressed this this this", Field.Store.COMPRESS, Field.Index.NO));
>         doc.add(new Field("test2", new byte[20], Field.Store.COMPRESS));
>         doc.add(new Field("field" + i, "random field", Field.Store.NO, Field.Index.TOKENIZED));
>         w.addDocument(doc);
>         w.close();
>       }
>       byte[] cmp = new byte[20];
>       IndexReader r = IndexReader.open(dir);
>       for(int i=0;i<5;i++) {
>         Document doc = r.document(i);
>         assertEquals("this is some data that will be compressed this this this", doc.getField("test1").stringValue());
>         byte[] b = doc.getField("test2").binaryValue();
>         assertTrue(Arrays.equals(b, cmp));
>       }
>     } finally {
>       dir.close();
>       _TestUtil.rmDir(indexDir);
>     }
>   }
> {code}
> This happens because in FieldsReader, when we load a field "for merge", we create a FieldForMerge instance which subsequently does not return the right values for getBinary{Value,Length,Offset}.
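
Editor's note: for readers unfamiliar with the internals, below is a minimal, hypothetical sketch of the failure mode described above. It is not the actual FieldsReader/FieldForMerge source and not the committed patch; the class names (SketchAbstractField, BuggyFieldForMerge, FixedFieldForMerge) are invented for illustration. The point is simply that a wrapper created "for merge" has to answer getBinaryValue/getBinaryLength/getBinaryOffset consistently with the bytes it holds, otherwise the code merging stored fields sees a null value or a zero length.

{code}
// Hypothetical sketch -- not Lucene source.  It only illustrates why a field
// wrapper created "for merge" must answer getBinary{Value,Length,Offset}
// consistently with the compressed bytes it actually holds.
abstract class SketchAbstractField {
  protected Object fieldsData;   // either a String or a byte[]
  protected int binaryLength;
  protected int binaryOffset;

  public byte[] getBinaryValue() {
    // null unless the wrapper really stored a byte[]
    return fieldsData instanceof byte[] ? (byte[]) fieldsData : null;
  }
  public int getBinaryLength() { return binaryLength; }
  public int getBinaryOffset() { return binaryOffset; }
}

// Broken shape: the bytes are kept but length/offset are never set, so a
// consumer that trusts these accessors copies nothing or hits an NPE.
class BuggyFieldForMerge extends SketchAbstractField {
  BuggyFieldForMerge(byte[] compressed) {
    this.fieldsData = compressed;
    // binaryLength / binaryOffset left at 0
  }
}

// Fixed shape: value, length and offset are populated together.
class FixedFieldForMerge extends SketchAbstractField {
  FixedFieldForMerge(byte[] compressed) {
    this.fieldsData = compressed;
    this.binaryLength = compressed.length;
    this.binaryOffset = 0;
  }
}
{code}

The actual fix is the attached LUCENE-1374.patch, committed in revision 691617.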

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


