jackrabbit-users mailing list archives

From Matthijs Wensveen <m.wensv...@func.nl>
Subject Re: Alternatives to workspace.clone()?
Date Tue, 12 Aug 2008 07:01:55 GMT
Alexander Klimetschek wrote:
> On Wed, Aug 6, 2008 at 1:58 PM, Matthijs Wensveen <m.wensveen@func.nl> wrote:
>> Hi,
>> Our application imports content in one workspace and when it's done and
>> everything is verified the content is cloned to the default workspace. At
>> the top of the freshly imported content is one node with a rather large
>> subtree (10.000+ nodes). More than occasionally the clone operation fails
>> because the jvm runs out of heap space. Are there any alternatives to clone
>> that do not put the entire tree in memory before writing it to the other
>> workspace?
> Well, you could clone only sub-parts (if that is possible) or simply
> copy node-by-node and save every X nodes, so the memory footprint is
> not that big.
> Regards,
> Alex

Hi Alex,

Cloning only sub-parts is not an option. The tree that is currently 
cloned is already a sub-part of the entire content tree. Copying might 
be an option. Is it possible to set the UUID of a copied node? I need to 
be able to update the copied subtree when there are updates in the 
'prepare' workspace. There is no node.setUUID() method, but maybe I can 
set the 'jcr:uuid' property?
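Thinking out loud about Alex's node-by-node suggestion: the batching part could look roughly like this. This is only a toy sketch, not real JCR code — the Node class below is a stand-in for javax.jcr.Node, flush() stands in for session.save(), and BATCH_SIZE is an arbitrary number one would tune to the available heap.

```java
import java.util.*;

// Sketch of "copy node-by-node and save every X nodes".
// The stand-in classes keep the batching logic visible on its own;
// in real code the flush() call would be destSession.save(), which
// persists and releases the pending change set.
public class BatchedCopy {
    static final int BATCH_SIZE = 100;      // tune to available heap

    // Minimal stand-in for a JCR node: a name plus children.
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
        Node addNode(String child) {        // like javax.jcr.Node.addNode(...)
            Node n = new Node(child);
            children.add(n);
            return n;
        }
    }

    int pending = 0;                        // unsaved nodes so far
    int saves = 0;                          // how often we "saved"

    // Depth-first copy; every BATCH_SIZE new nodes we flush, so the
    // transient space never holds the whole subtree at once.
    Node copy(Node src, Node destParent) {
        Node dest = destParent.addNode(src.name);
        if (++pending >= BATCH_SIZE) flush();
        for (Node child : src.children) copy(child, dest);
        return dest;
    }

    void flush() { saves++; pending = 0; }  // stands in for session.save()
}
```

The point being that only up to BATCH_SIZE nodes are ever pending, instead of the whole 10.000+ node subtree.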

To give a little more insight: The first time a piece of content (called 
a 'domain') is imported, it is put in the 'prepare' workspace. When this 
is done and its consistency is verified, the domain is cloned to the 
default workspace. After that, updates to the domain are made in the 
prepare workspace; once the prepare workspace's domain node has been 
verified again, the default workspace's domain node is updated from it. 
At least, 
that's how we designed the process. Unfortunately the clone operation 
fails when the domain subtree is too large.
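For clarity, the intended promotion flow, reduced to a toy model. Each "workspace" here is just a map keyed by UUID; in real JCR this would be Workspace.clone(srcWorkspace, srcAbsPath, destAbsPath, removeExisting) with removeExisting = true for the update case, which matches the existing node by its jcr:uuid — which is exactly why the identifier has to survive the initial copy.

```java
import java.util.*;

// Toy model of the prepare -> default promotion. A workspace is
// modelled as a map from UUID to content, so the role the UUID
// plays in later updates is explicit.
public class Promotion {
    final Map<String, String> prepare = new HashMap<>();
    final Map<String, String> defaultWs = new HashMap<>();

    // First promotion: clone carries the UUID across unchanged.
    void cloneToDefault(String uuid) {
        defaultWs.put(uuid, prepare.get(uuid));
    }

    // Later updates: the matching UUID lets us replace the existing
    // node in place (what removeExisting = true does for a real clone).
    void updateDefault(String uuid) {
        if (!defaultWs.containsKey(uuid))
            throw new IllegalStateException("not promoted yet: " + uuid);
        defaultWs.put(uuid, prepare.get(uuid));
    }
}
```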


Matthijs Wensveen
Func. Internet Integration
W http://www.func.nl
T +31 20 4230000
F +31 20 4223500 
