jackrabbit-dev mailing list archives

From Frédéric Esnault <f...@legisway.com>
Subject RE: atomic vs group node creation/storage
Date Thu, 21 Jun 2007 08:51:46 GMT

> So far I couldn't reproduce the problem. The size (280 MB) could have
> another reason, maybe the database does not re-use empty space for
> some reason. When I had a similar problem, I also had a really large
> database (about 1 GB), but after compressing it was only 10 MB or so.
> It was not MySQL. My suggestion is:

> - Before running the test, clean the database
> - After the test, display the number of rows in the database, and the size

> Without a reproducible test case, it is hard to find the problem. If the
> size reproducibly grows much faster than the number of rows, I would
> be interested in finding out why, or finding a workaround.

The number of rows was also increasing very fast. When my default_node table reached 22 GB,
it was holding 35 million rows.

> Query q = manager.createQuery(
>     "//contractors/element(*, nt:base)[@id=" + id + "]",
>     Query.XPATH);

The problem here is that if you use a predicate on a node type with plenty of instances (say a
contract), the search works fine as long as the predicate is selective; the problem is when the
search has to look at all the instances of this type of node.
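
To give an idea, here is roughly what our lookup looks like (a simplified sketch; the
my:contract node type name is made up, our real types differ). Constraining the element
test to a concrete type instead of nt:base at least tells the query handler which nodes
are candidates:

import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

public class ContractLookup {
    // Looks up a contract under /contractors by its "id" property.
    public static Node findById(Session session, long id) throws RepositoryException {
        QueryManager manager = session.getWorkspace().getQueryManager();
        Query q = manager.createQuery(
                "//contractors/element(*, my:contract)[@id=" + id + "]",
                Query.XPATH);
        QueryResult result = q.execute();
        NodeIterator nodes = result.getNodes();
        return nodes.hasNext() ? nodes.nextNode() : null;
    }
}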

> I was told that a few thousand child nodes is not a problem, but if
> you expect 30000 or more, then you should consider using a deeper
> hierarchy (with the current Jackrabbit) because there is a performance
> degradation.

We actually plan a 100K node repository, with an extreme limit of 250K, which could possibly
mean something like a maximum of 25K to 30K child nodes, with an extreme limit of 60K to 80K
child nodes. AND searching them ;-)
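
If we really hit those numbers, one thing we are considering (just a sketch, the bucket
names and nt:unstructured are placeholders) is hashing the id into two intermediate levels
so that no single parent ends up holding tens of thousands of children:

import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class DeepHierarchyStore {
    // Spreads contract nodes over two intermediate levels derived from the id,
    // so with 100K-250K contracts each parent only holds a handful of children.
    public static Node storeContract(Session session, long id) throws RepositoryException {
        Node root = session.getRootNode().getNode("contractors");
        int hash = (int) (id % 10000);
        String level1 = "b" + (hash / 100);   // 100 buckets
        String level2 = "b" + (hash % 100);   // 100 sub-buckets each
        Node l1 = root.hasNode(level1) ? root.getNode(level1)
                                       : root.addNode(level1, "nt:unstructured");
        Node l2 = l1.hasNode(level2) ? l1.getNode(level2)
                                     : l1.addNode(level2, "nt:unstructured");
        Node contract = l2.addNode("contract-" + id, "nt:unstructured");
        contract.setProperty("id", id);
        session.save();
        return contract;
    }
}

Of course the lookup above would then need the descendant axis
(//contractors//element(*, my:contract)[...]) instead of a direct child step.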

Frederic Esnault
