lucene-java-user mailing list archives

From "Aditi Goyal" <aditigupt...@gmail.com>
Subject Re: Document larger than setRAMBufferSizeMB()
Date Fri, 03 Oct 2008 09:33:29 GMT
Thanks Anshum.
Although it raises another query: committing the current buffer will commit
the docs added before, but what will happen to the current doc that threw an
error while a field was being added to it? Will that also get partially
committed?

Thanks a lot
Aditi

On Fri, Oct 3, 2008 at 2:12 PM, Anshum <anshumg@gmail.com> wrote:

> Hi Aditi,
>
> I guess increasing the buffer size would be a solution here, but you may
> not know the expected max doc size in advance. I guess the best way to
> handle that would be a regular try/catch block in which you could commit
> the current buffer. At the least, you could just continue the loop after
> doing whatever you wish to do in the exception-handling block.
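>
> A rough, untested sketch of what I mean (writer and doc being whatever you
> already use in your indexing loop):
>
>     try {
>         writer.addDocument(doc);
>     } catch (OutOfMemoryError e) {
>         // commit whatever has been buffered so far
>         // (writer.flush() on releases that don't have commit() yet)
>         writer.commit();
>         // log/skip the offending document here and carry on with the loop
>         // (catching an Error like this is a last resort, of course)
>     }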
>
> --
> Anshum Gupta
> Naukri Labs!
> http://ai-cafe.blogspot.com
>
> The facts expressed here belong to everybody, the opinions to me. The
> distinction is yours to draw............
>
> On Fri, Oct 3, 2008 at 1:56 PM, Aditi Goyal <aditigupta20@gmail.com>
> wrote:
>
> > Hi Everyone,
> >
> > I have an index which I open only once. I keep adding documents to it
> > until I reach a limit of 500.
> > After this, I close the index and open it again. (This is done to save
> > the time that would otherwise be spent opening and closing the index for
> > every document.)
> > Also, I have set setRAMBufferSizeMB to 16 MB.
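> >
> > Roughly, what I am doing looks like this (indexDir, StandardAnalyzer,
> > and makeDocument() just stand in for my actual directory, analyzer, and
> > document-building code):
> >
> >     IndexWriter writer = new IndexWriter(indexDir, new StandardAnalyzer());
> >     writer.setRAMBufferSizeMB(16.0);
> >     for (int i = 0; i < 500; i++) {
> >         writer.addDocument(makeDocument(i));
> >     }
> >     writer.close();
> >     // ... then open a new IndexWriter and repeat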
> >
> > If the document size itself is greater than 16 MB, what will happen in
> > this case? It is throwing:
> > java.lang.OutOfMemoryError: Java heap space
> > Now, my query is:
> > Can we change something in the way we parse/index to make it more
> > memory-friendly, so that it doesn't throw this exception?
> > And can it be caught and overcome gracefully?
> >
> >
> > Thanks a lot
> > Aditi
> >
>
