jackrabbit-dev mailing list archives

From Frédéric Esnault <f...@legisway.com>
Subject RE: atomic vs group node creation/storage
Date Wed, 20 Jun 2007 08:50:33 GMT
Hi Felix,

I understand the transient space memory consumption issue, and that's why I'm thinking of
something like a partial saving mechanism (i.e. saving the nodes every 1,000).
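The partial-save idea could be sketched roughly as below. Note the save() here is a simple stand-in counter, not the real JCR Session.save(), and the node-creation calls are elided; this only illustrates the "flush every BATCH_SIZE nodes" control flow.

```java
// Sketch of the partial-save mechanism: flush the transient space every
// BATCH_SIZE nodes instead of after every single node or only once at
// the very end. save() is a stand-in for persisting pending changes.
public class BatchSaveSketch {

    static final int BATCH_SIZE = 1000;

    // Returns how many times save() would be invoked for totalNodes.
    static int batchedSaveCount(int totalNodes, int batchSize) {
        int saveCalls = 0;
        int pending = 0;
        for (int i = 0; i < totalNodes; i++) {
            // ... addNode(...) and setProperty(...) would go here ...
            pending++;
            if (pending == batchSize) {   // flush every batchSize nodes
                saveCalls++;              // stand-in for session.save()
                pending = 0;
            }
        }
        if (pending > 0) {                // persist any remainder
            saveCalls++;
        }
        return saveCalls;
    }

    public static void main(String[] args) {
        // Using the 27,000-node figure from the example above.
        System.out.println("save() calls: " + batchedSaveCount(27000, BATCH_SIZE));
    }
}
```

With 27,000 nodes and a batch size of 1,000, this performs exactly 27 saves, bounding transient-space growth while keeping the number of persistence round-trips small.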

My real issue here is the persistent storage. Saving node by node, as I do there,
is not acceptable for me because, as I said, it very quickly increases the number of rows
and the table sizes in MySQL. Inserting 27,000 nodes one by one this way gave me a
default_node table with 35 million rows, more than 22 GB.

So I'm dealing with a persistent storage issue here, and that's why I'm currently working
with a "mass creation then save" strategy. But in production, users are definitely going to
use the "one by one creation/saving" strategy, which scares me...

Frédéric Esnault - R&D Engineer


-----Original Message-----
From: fmeschbe@gmail.com [mailto:fmeschbe@gmail.com] On behalf of Felix Meschberger
Sent: Wednesday, June 20, 2007 10:44
To: dev@jackrabbit.apache.org
Subject: Re: atomic vs group node creation/storage

Hi Frédéric,

Now this makes a whole lot more sense to me :-)

The first algorithm creates a number of nodes and properties in transient
space, which is currently kept in memory. The higher the number of nodes,
the higher the memory consumption. The second algorithm just creates a
single node and its properties in the transient space before saving them
away and releasing used memory (or at least making it available for GC).

This is currently an issue of the implementation of the transient space.
Stefan might have more elaborate details. For the time being, you should
probably go with the "node by node save" algorithm.
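The difference between the two algorithms can be illustrated as below. The transient space is modeled as a simple counter, which is an illustration of the memory behavior only, not of Jackrabbit internals.

```java
// Contrast of the two strategies: "mass creation then save" keeps every
// new node in the transient space until the single save() at the end,
// while "node by node save" caps transient growth at one node at a time.
public class TransientSpaceSketch {

    // Returns the peak number of unsaved nodes held in the (simulated)
    // transient space for the given strategy.
    static int peakPending(int totalNodes, boolean saveEachNode) {
        int pending = 0;
        int peak = 0;
        for (int i = 0; i < totalNodes; i++) {
            pending++;                     // addNode: grows transient space
            if (pending > peak) {
                peak = pending;
            }
            if (saveEachNode) {
                pending = 0;               // save(): releases transient items
            }
        }
        pending = 0;                       // final save() for the bulk variant
        return peak;
    }

    public static void main(String[] args) {
        System.out.println("mass creation peak:  " + peakPending(27000, false));
        System.out.println("node-by-node peak:   " + peakPending(27000, true));
    }
}
```

With 27,000 nodes, the bulk variant peaks at 27,000 pending transient items versus 1 for node-by-node saving, which is why the first algorithm's memory consumption grows with the node count.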

Hope this helps.

Regards
Felix

PS: In your initial post you seem to have switched the algorithm descriptions,
which caused some confusion :-)

