From: quipere <jquipere@hotmail.com>
To: users@jackrabbit.apache.org
Date: Fri, 9 Jan 2009 01:46:34 -0800 (PST)
Subject: Re: Issue with versioning of cloned nodes
Message-ID: <21369165.post@talk.nabble.com>
In-Reply-To: <510143ac0901090109j7f8fdc28re724c013cc9dfea8@mail.gmail.com>

Yes, we have thought about this. The problem is that we need to be able to
query the contents of the nodes. They are not binary; our nodes are XML
files. The files are not particularly large, but we use a lot of them,
across multiple workspaces, so this creates a lot of redundancy.

These nodes need to exist as corresponding nodes in every workspace, since
we reference these XML files/nodes from different workspaces.
Cross-workspace references would do the trick, but I know that is not
possible because of the versioning mechanism.

Jukka Zitting wrote:
>
> Hi,
>
> On Fri, Jan 9, 2009 at 9:51 AM, quipere wrote:
>> I know the version history is shared over workspaces. But I would like
>> to know whether the versionable node is copied in every workspace where
>> it is used, or just a reference to the node in the shared version
>> history, since the data of the node is already stored in the version
>> history. I could imagine that the node will only be physically copied
>> into the workspace when it is checked out.
>
> There is a separate copy of the node in each workspace where it exists.
>
> The solution against excessive disk usage (and for major performance
> improvements) when handling separate copies of large nodes (ones with
> large binary properties) is the data store feature, which ensures that
> only a single copy of any one binary value is kept in the repository.
> Then your nodes would still be separate copies, but all their binary
> properties would be shared.
>
> BR,
>
> Jukka Zitting
>

--
View this message in context: http://www.nabble.com/DM-Rule--3%3A-Workspaces-are-for-corresponding-nodes.-tp11477567p21369165.html
Sent from the Jackrabbit - Users mailing list archive at Nabble.com.
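
[Archive note] The "corresponding nodes" discussed above are normally
created with Workspace.clone(), which gives the node the same identifier
and the same shared version history in the target workspace. Below is a
minimal sketch against the JCR 1.0 API; the workspace names ("source",
"target"), the path /content/doc, and the admin credentials are
placeholders, not details from the poster's setup, and the workspaces are
assumed to already exist.

    import javax.jcr.Node;
    import javax.jcr.Repository;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;

    import org.apache.jackrabbit.core.TransientRepository;

    public class CorrespondingNodeExample {

        public static void main(String[] args) throws Exception {
            // Hypothetical setup: a repository that already contains two
            // workspaces, "source" and "target", with a mix:versionable
            // node at /content/doc in "source".
            Repository repository = new TransientRepository();
            Session targetSession = repository.login(
                    new SimpleCredentials("admin", "admin".toCharArray()),
                    "target");
            try {
                // clone() creates the corresponding node in this workspace:
                // same identifier, same (shared) version history.
                targetSession.getWorkspace().clone(
                        "source",        // source workspace
                        "/content/doc",  // source path
                        "/content/doc",  // destination path in "target"
                        false);          // do not remove an existing node

                // The per-workspace copies are then versioned independently;
                // update() pulls the corresponding node's state across.
                Node doc = targetSession.getRootNode().getNode("content/doc");
                doc.update("source");
            } finally {
                targetSession.logout();
            }
        }
    }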
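
[Archive note] The data store Jukka mentions is a repository-level setting
rather than an API call. The excerpt below sketches how it is typically
enabled in Jackrabbit's repository.xml; the element sits directly under the
Repository element, and the path and minRecordLength values are only
illustrative defaults.

    <!-- repository.xml (excerpt): store each distinct binary value only
         once, no matter how many nodes or workspaces reference it. -->
    <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
      <param name="path" value="${rep.home}/repository/datastore"/>
      <param name="minRecordLength" value="100"/>
    </DataStore>

Note that the data store only deduplicates binary property values, so it
does not by itself remove the redundancy of content stored as structured
XML nodes, which is exactly the concern raised in the message above.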