jackrabbit-users mailing list archives

From <rona...@nexen.com.br>
Subject Re: Re: Paging results
Date Mon, 04 Jun 2007 15:59:01 GMT
But,

As far as I know the RowIterator uses the NodeIterator internally, so it really doesn't make much of a difference, does it?

Since the problem really was too many nodes being brought into memory at once... leading to an OutOfMemory exception...

Am I right? Or would the RowIterator really help too?

Thanks

Ronaldo

-----Original Message-----
From: Paco Avila [mailto:pavila@git.es]
Sent: Monday, June 4, 2007 05:18
To: users@jackrabbit.apache.org
Subject: Re: Paging results

On Sat, 02-06-2007 at 11:31 +0200, Marcel Reutegger wrote:
> Ronaldo Florence wrote:
> > I'm trying to page the results of a Xpath query, but I'm not sure 
> > how to do this. I used the skip method on the NodeIterator class, 
> > but I can't bring every node to memory, I need to page the results 
> > on the query, I have a large amount of data so it's imperative to do so.
> >  
> > I tried the following query:
> >  
> > //site/Dados/element(*, mtx:content)[position()=1 or position()=2 or 
> > position()=3]
> 
> jackrabbit only has limited support for the position() function, 
> mainly to address same name siblings.
> 
> the skip method is exactly what you should use. the returned 
> NodeIterator loads the nodes on a lazy basis, which means jackrabbit 
> will only load nodes that you actually access and none of the skipped ones.

You can also use the RowIterator.
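
A minimal sketch of the skip()-based paging against the plain JCR query API (the offset and page size are placeholder values, and the XPath statement is the one from the original question minus the position() predicate):

import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

public class PagedQuery {

    // Prints one page of results. "offset" and "pageSize" are placeholder
    // paging parameters supplied by the caller.
    public static void printPage(Session session, long offset, int pageSize)
            throws RepositoryException {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query query = qm.createQuery(
                "//site/Dados/element(*, mtx:content)", Query.XPATH);
        QueryResult result = query.execute();

        // The returned NodeIterator is lazy: skip() advances past the
        // first "offset" hits without materializing those nodes.
        NodeIterator nodes = result.getNodes();
        nodes.skip(offset);

        for (int i = 0; i < pageSize && nodes.hasNext(); i++) {
            Node node = nodes.nextNode();
            System.out.println(node.getPath());
        }
    }
}

result.getRows() returns a RowIterator over the same hits, and it also supports skip(), so the paging pattern is the same with either iterator.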

