jackrabbit-dev mailing list archives

From Frédéric Esnault <f...@legisway.com>
Subject atomic vs group node creation/storage
Date Wed, 20 Jun 2007 07:10:50 GMT
Hello there!

 

It seems to me that there is a storage problem when you create a large number of nodes one by one,
using this algorithm:

1.	for each node to create

	a.	create node
	b.	fill node properties/child nodes
	c.	save session

2.	end for
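In JCR terms, the save-per-node pattern above looks roughly like the sketch below. This is only an illustration: the session is assumed to be an already-authenticated javax.jcr Session, and the node names, node type, and property are placeholders, not anything from the original report.

```java
import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class OneByOneImport {
    // Persist each node individually: one Session.save() per node created.
    public static void importNodes(Session session, int count) throws RepositoryException {
        Node root = session.getRootNode();
        for (int i = 0; i < count; i++) {
            // a. create node (name and type are illustrative placeholders)
            Node node = root.addNode("item" + i, "nt:unstructured");
            // b. fill node properties / child nodes
            node.setProperty("title", "Item " + i);
            // c. save session after every single node
            session.save();
        }
    }
}
```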

 

The number of rows (and the size) of the default_node and default_prop tables increases very fast,
to an unacceptable degree.

I ended up with a default_node table of 35 million rows after inserting about 27 000 nodes this way.

 

Then I used the other algorithm:

1.	for each node to create

	a.	create node
	b.	fill node properties/child nodes

2.	end for
3.	save session
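The batched variant defers persistence to a single save at the end. Again a sketch only, with the same assumed Session and placeholder names as the repository API allows:

```java
import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class BatchedImport {
    // Build all nodes transiently, then persist the whole batch with one save.
    public static void importNodes(Session session, int count) throws RepositoryException {
        Node root = session.getRootNode();
        for (int i = 0; i < count; i++) {
            // a. create node (illustrative name and type)
            Node node = root.addNode("item" + i, "nt:unstructured");
            // b. fill node properties / child nodes
            node.setProperty("title", "Item " + i);
        }
        // 3. save the session once, after the loop
        session.save();
    }
}
```

The only structural difference from the first sketch is where session.save() sits relative to the loop, which is what appears to drive the table growth described here.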

 

And this gives a much better situation: I currently have a repository with 36 000 content nodes, and
my tables are a reasonable size (60 000 rows in the node table,
576 000 rows in the properties table).

 

The problem here is that in a production environment, users are going to create their nodes
one by one, day after day, never in large batches.

So is there a storage problem?

 

Frederic Esnault 

