incubator-blur-dev mailing list archives

From "Aaron McCurry (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (BLUR-5) Write through caching for the BlockCache
Date Tue, 23 Oct 2012 01:54:12 GMT

    [ https://issues.apache.org/jira/browse/BLUR-5?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13482032#comment-13482032 ]

Aaron McCurry commented on BLUR-5:
----------------------------------

The patch looks good.  The only issue is that I'm getting an error during some functional testing
(while working around the NullPointerException mentioned on the ML).

ERROR 20121022_21:45:11:011_EDT [Lucene Merge Thread #19] concurrent.SimpleUncaughtExceptionHandler:
Unknown error in thread [Thread[Lucene Merge Thread #19,7,main]]
org.apache.lucene.index.MergePolicy$MergeException: java.lang.RuntimeException: Buffer size
exceeded, expecting max [8192] got [16384]
	at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:535)
	at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:508)
Caused by: java.lang.RuntimeException: Buffer size exceeded, expecting max [8192] got [16384]
	at org.apache.blur.store.blockcache.BlockCache.store(BlockCache.java:82)
	at org.apache.blur.store.blockcache.BlockDirectoryCache.update(BlockDirectoryCache.java:48)
	at org.apache.blur.store.blockcache.CachedIndexOutput.writeBlock(CachedIndexOutput.java:72)
	at org.apache.blur.store.blockcache.CachedIndexOutput.writeInternal(CachedIndexOutput.java:81)
	at org.apache.blur.store.buffer.ReusedBufferedIndexOutput.writeBytes(ReusedBufferedIndexOutput.java:176)
	at org.apache.lucene.store.DataOutput.copyBytes(DataOutput.java:255)
	at org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsWriter.addRawDocuments(Lucene40StoredFieldsWriter.java:208)
	at org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsWriter.copyFieldsNoDeletions(Lucene40StoredFieldsWriter.java:321)
	at org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsWriter.merge(Lucene40StoredFieldsWriter.java:245)
	at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:247)
	at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:102)
	at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3742)
	at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3354)
	at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:402)
	at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:479)

It looks like the buffer size changes during merges; this can be determined through the IOContext.
I'm sorry that you can't test this through the server, I'm still trying to figure out that
issue.  But I think we could add a unit test that sets the IOContext to MERGE, and it should
show you the issue.  Hopefully I'll figure out the NPE problem tomorrow.
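
Roughly what I was thinking for that unit test, just as a sketch (the MergeInfo numbers, the
file name, and the way the Directory gets passed in are placeholders, not our actual test
setup; the Directory would be a BlockDirectory wired up the same way the existing block cache
tests do it):

import java.util.Random;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexOutput;
import org.apache.lucene.store.MergeInfo;

public class MergeContextWriteTest {

  // Writes through the block cache with a MERGE IOContext, the same context the
  // ConcurrentMergeScheduler uses, so the larger merge-time buffer reaches BlockCache.store.
  public void writeWithMergeContext(Directory blockDir) throws Exception {
    // In Lucene 4.x a MERGE IOContext has to carry a MergeInfo; these numbers are arbitrary.
    IOContext mergeContext = new IOContext(new MergeInfo(10000, 64 * 1024 * 1024, false, -1));

    IndexOutput output = blockDir.createOutput("merge-context-test.bin", mergeContext);
    byte[] buffer = new byte[16 * 1024];
    new Random(1).nextBytes(buffer);
    for (int i = 0; i < 64; i++) {
      // Write well past a single cache block so the write-through path has to store blocks.
      output.writeBytes(buffer, buffer.length);
    }
    output.close();
  }
}

Assuming the same 8K block size as in the trace above, the 16K writes should trip the same
"Buffer size exceeded" check without having to go through the server.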

Thanks Patrick!
                
> Write through caching for the BlockCache
> ----------------------------------------
>
>                 Key: BLUR-5
>                 URL: https://issues.apache.org/jira/browse/BLUR-5
>             Project: Apache Blur
>          Issue Type: Improvement
>            Reporter: Aaron McCurry
>         Attachments: BLUR-5.patch
>
>
> This will allow for better NRT update performance because the writer will not have to read the NRT segments from HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
