Delivered-To: apmail-jackrabbit-dev-archive@www.apache.org
Mailing-List: contact dev-help@jackrabbit.apache.org; run by ezmlm
Precedence: bulk
Reply-To: dev@jackrabbit.apache.org
Delivered-To: mailing list dev@jackrabbit.apache.org
Message-ID: <1014439130.1260971958107.JavaMail.jira@brutus>
Date: Wed, 16 Dec 2009 13:59:18 +0000 (UTC)
From: "Stefan Guggisberg (JIRA)"
To: dev@jackrabbit.apache.org
Subject: [jira] Updated: (JCR-2442) make internal item cache hierarchy-aware
In-Reply-To: <1642815777.1260970998074.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

     [ https://issues.apache.org/jira/browse/JCR-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefan Guggisberg updated JCR-2442:
-----------------------------------

    Description:

currently there are 2 configuration parameters which affect the performance of client-side tree traversals:

- fetch-depth
- size of item cache

my goal is to minimize the number of server roundtrips triggered by traversing the node hierarchy on the client.

the current eviction policy doesn't seem to be ideal for this use case. in the case of relatively deep tree structures a request for e.g. '/foo' can easily cause a cache overflow, and root nodes might get evicted from the cache. a following request to '/foo' cannot be served from the cache but will trigger yet another deep fetch, despite the fact that the major part of the tree structure is still in the cache.

increasing the cache size OTOH bears the risk of OOM errors since the memory footprint of the cached state seems to be quite large. i tried several combinations of fetch depth and cache size, to no avail. i either ran into OOM errors or performance was unacceptably slow due to an excessive number of server roundtrips.

i further noticed that syncing existing cached state with the results of a deep fetch is rather slow, e.g. an initial request to '/foo' returns 11k items. the cache size is 10k, i.e. the cache cannot accommodate the entire result set. assuming that '/foo' has been evicted, the following request to '/foo' will trigger another deep fetch which this time takes considerably more time since the result set needs to be synced with existing cached state.

using an LRU eviction policy and touching every node along the parent hierarchy when requesting an item might be a solution.

    was:

currently there are 2 configuration parameters which affect the performance of client-side tree traversals:

- fetch-depth
- size of item cache

my goal is to minimize the number of server roundtrips triggered by traversing the node hierarchy on the client.

the current eviction policy doesn't seem to be ideal for this use case.
in the case of relatively deep tree structures a request for e.g. '/foo' can easily cause a cache overflow, and root nodes might get evicted from the cache. a following request to '/foo' cannot be served from the cache but will cause a deep fetch again, despite the fact that the major part of the tree structure is still in the cache.

increasing the cache size OTOH bears the risk of OOM errors since the memory footprint of the cached state seems to be quite large.

using an LRU eviction policy and touching every node along the parent hierarchy when requesting an item might be a solution.


> make internal item cache hierarchy-aware
> ----------------------------------------
>
>                 Key: JCR-2442
>                 URL: https://issues.apache.org/jira/browse/JCR-2442
>             Project: Jackrabbit Content Repository
>          Issue Type: Improvement
>          Components: jackrabbit-jcr2spi
>            Reporter: Stefan Guggisberg
>            Assignee: Michael Dürig
>
> currently there are 2 configuration parameters which affect the performance of client-side tree traversals:
> - fetch-depth
> - size of item cache
> my goal is to minimize the number of server roundtrips triggered by traversing the node hierarchy on the client.
> the current eviction policy doesn't seem to be ideal for this use case. in the case of relatively deep tree structures
> a request for e.g. '/foo' can easily cause a cache overflow, and root nodes might get evicted from the cache.
> a following request to '/foo' cannot be served from the cache but will trigger yet another deep fetch, despite the fact
> that the major part of the tree structure is still in the cache.
> increasing the cache size OTOH bears the risk of OOM errors since the memory footprint of the cached state seems
> to be quite large. i tried several combinations of fetch depth and cache size, to no avail. i either ran into OOM errors
> or performance was unacceptably slow due to an excessive number of server roundtrips.
> i further noticed that syncing existing cached state with the results of a deep fetch is rather slow, e.g.
> an initial request to '/foo' returns 11k items. the cache size is 10k, i.e. the cache cannot accommodate the entire
> result set. assuming that '/foo' has been evicted, the following request to '/foo' will trigger another deep
> fetch which this time takes considerably more time since the result set needs to be synced with existing cached
> state.
> using an LRU eviction policy and touching every node along the parent hierarchy when requesting an item might be a solution.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
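The LRU-plus-ancestor-touching idea proposed in the issue could be sketched roughly as follows. This is a minimal illustration only, not jcr2spi's actual ItemCache API: the class name, the generic value type, and the string-path handling are invented for the example. The point is just that a get() on '/a/b/c' refreshes '/a/b', '/a' and '/' in the LRU order, so entries near the root become the last candidates for eviction.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical hierarchy-aware LRU item cache, keyed by absolute path.
public class HierarchyAwareItemCache<V> {

    private final int maxSize;
    private final LinkedHashMap<String, V> cache;

    public HierarchyAwareItemCache(int maxSize) {
        this.maxSize = maxSize;
        // accessOrder=true turns the LinkedHashMap into an LRU map:
        // every get() moves the entry to the most-recently-used end.
        this.cache = new LinkedHashMap<String, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
                return size() > HierarchyAwareItemCache.this.maxSize;
            }
        };
    }

    /** Returns the cached item, touching every ancestor on the way. */
    public V get(String path) {
        touchAncestors(path);
        return cache.get(path);
    }

    public void put(String path, V item) {
        cache.put(path, item);
    }

    /**
     * Touching '/a/b/c' also marks '/a/b', '/a' and '/' as recently
     * used, so nodes high up in the hierarchy are evicted last.
     */
    private void touchAncestors(String path) {
        for (int i = path.lastIndexOf('/'); i > 0; i = path.lastIndexOf('/', i - 1)) {
            cache.get(path.substring(0, i)); // get() refreshes the LRU position
        }
        cache.get("/"); // the root is always touched
    }
}
```

With a plain LRU and a cache of size 3 holding '/', '/a' and '/a/b', reading '/a/b' and then inserting '/a/b/c' would evict the root '/'; with the ancestor touching above, '/a' is evicted instead and '/' survives, which is exactly the behaviour the description asks for.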