jackrabbit-users mailing list archives

From: Karl Meier <furzgesi...@gmail.com>
Subject: Re: Lucene - out of memory
Date: Wed, 25 Aug 2010 07:39:35 GMT
We had the same issue. On top of that, the repository kept getting slower
and slower. The only workaround for us was to deactivate Lucene.
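
For us that meant removing the SearchIndex element from the workspace
configuration. A minimal sketch of what we took out, assuming the stock
repository.xml layout (the element and path below are Jackrabbit's defaults):

    <!-- Inside the <Workspace> element of repository.xml. Removing this
         SearchIndex element disables Lucene indexing, and with it JCR
         queries, for the workspace. -->
    <SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
      <param name="path" value="${wsp.home}/index"/>
    </SearchIndex>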

On Wed, Aug 25, 2010 at 9:30 AM, Ard Schrijvers
<a.schrijvers@onehippo.com> wrote:

> Hello Robert,
>
> How much memory is your application running with? Are you also running
> searches at the same time, in particular searches that sort on properties?
>
> Furthermore, Lucene is quite memory-consuming, certainly in the way
> Jackrabbit uses it. You just have to make sure you have enough
> memory.
>
> Regards Ard
>
> On Wed, Aug 25, 2010 at 9:24 AM, Seidel. Robert <Robert.Seidel@aeb.de>
> wrote:
> > Hi,
> >
> > After storing data for a while (about 80,000 nodes) into a CR, an
> > out-of-memory error occurred:
> >
> > 2010-08-25 00:22:58,439 ERROR (IndexMerger.java:568) - Error while merging indexes:
> > java.lang.OutOfMemoryError: Java heap space
> >     at java.util.HashMap.resize(HashMap.java:508)
> >     at java.util.HashMap.addEntry(HashMap.java:799)
> >     at java.util.HashMap.put(HashMap.java:431)
> >     at org.apache.jackrabbit.core.query.lucene.CachingIndexReader$CacheInitializer$1.collect(CachingIndexReader.java:433)
> >     at org.apache.jackrabbit.core.query.lucene.CachingIndexReader$CacheInitializer.collectTermDocs(CachingIndexReader.java:515)
> >     at org.apache.jackrabbit.core.query.lucene.CachingIndexReader$CacheInitializer.initializeParents(CachingIndexReader.java:425)
> >     at org.apache.jackrabbit.core.query.lucene.CachingIndexReader$CacheInitializer.run(CachingIndexReader.java:386)
> >     at org.apache.jackrabbit.core.query.lucene.CachingIndexReader.<init>(CachingIndexReader.java:135)
> >     at org.apache.jackrabbit.core.query.lucene.AbstractIndex.getReadOnlyIndexReader(AbstractIndex.java:315)
> >     at org.apache.jackrabbit.core.query.lucene.MultiIndex.replaceIndexes(MultiIndex.java:665)
> >     at org.apache.jackrabbit.core.query.lucene.IndexMerger$Worker.run(IndexMerger.java:551)
> >     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:417)
> >     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:269)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:123)
> >     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:65)
> >     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:168)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
> >     at java.lang.Thread.run(Thread.java:595)
> >
> > If I add more properties or change the binary data MIME type for
> > full-text indexing, I get the error sooner.
> >
> > If I quit the application and restart it, it works again for a while.
> > It seems to me that Lucene keeps accumulating memory.
> >
> > How can I handle that situation?
> >
> > Kind regards,
> >
> > Robert Seidel
> >
>
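
If switching the index off entirely is too drastic, giving the JVM more
heap (-Xmx) and reining in the index configuration may be enough. A hedged
sketch of SearchIndex tuning parameters (the parameter names come from
Jackrabbit's SearchIndex handler; the values are illustrative assumptions,
not tested recommendations):

    <SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
      <param name="path" value="${wsp.home}/index"/>
      <!-- Assumption: capping segment size keeps individual merge steps,
           like the one failing in the stack trace above, smaller. -->
      <param name="maxMergeDocs" value="100000"/>
      <!-- Assumption: limiting how many hits are initially loaded when a
           query executes reduces the memory held by sorted result sets. -->
      <param name="resultFetchSize" value="100"/>
    </SearchIndex>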
