jackrabbit-dev mailing list archives

From Michael Dürig (JIRA) <j...@apache.org>
Subject [jira] Updated: (JCR-2442) make internal item cache hierarchy-aware
Date Thu, 25 Feb 2010 18:34:28 GMT

     [ https://issues.apache.org/jira/browse/JCR-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Dürig updated JCR-2442:

    Attachment: JCR-2442.patch

Possible patch.

Since JCR-2498 should fix some of the observed performance issues, the approach in this patch
is deliberately simple: use separate caches for items above and below a certain depth (nodes
at depth <= 1, properties at depth <= 2). Items at or below the threshold depth go into a
HashMap and are thus never evicted; all other items go into an LRU map.
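The two-tier scheme described above can be sketched as follows. This is a minimal illustration, not the actual jcr2spi code; the class and method names (`TwoTierItemCache`, `put`, `get`) are hypothetical, and the LRU tier is modeled with a `LinkedHashMap` in access order.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the patch's approach: items at or below a depth
// threshold live in a plain HashMap and are never evicted; deeper items go
// into a bounded, access-ordered LRU map.
class TwoTierItemCache<K, V> {
    private final int depthThreshold;
    private final Map<K, V> pinned = new HashMap<>(); // shallow items, never evicted
    private final Map<K, V> lru;                      // deeper items, LRU-evicted

    TwoTierItemCache(int depthThreshold, final int lruCapacity) {
        this.depthThreshold = depthThreshold;
        // accessOrder = true makes iteration order least- to most-recently used
        this.lru = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > lruCapacity;
            }
        };
    }

    void put(K key, int depth, V value) {
        if (depth <= depthThreshold) {
            pinned.put(key, value);
        } else {
            lru.put(key, value);
        }
    }

    V get(K key) {
        V v = pinned.get(key);
        return v != null ? v : lru.get(key);
    }
}
```

With this split, a deep traversal can churn the LRU tier as much as it likes without ever pushing the root or other shallow items out of the cache.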


> make internal item cache hierarchy-aware
> ----------------------------------------
>                 Key: JCR-2442
>                 URL: https://issues.apache.org/jira/browse/JCR-2442
>             Project: Jackrabbit Content Repository
>          Issue Type: Improvement
>          Components: jackrabbit-jcr2spi
>            Reporter: Stefan Guggisberg
>            Assignee: Michael Dürig
>         Attachments: JCR-2442.patch
> currently there are 2 configuration parameters which affect the performance of client-sided tree traversals:
> - fetch-depth
> - size of item cache
> my goal is to minimize the number of server roundtrips triggered by traversing the node hierarchy on the client.
> the current eviction policy doesn't seem to be ideal for this use case. in the case of relatively deep tree structures a request for e.g. '/foo' can easily cause a cache overflow and root nodes might get evicted from the cache.
> a following request to '/foo' cannot be served from the cache but will trigger yet another deep fetch, despite the fact that the major part of the tree structure is still in the cache.
> increasing the cache size OTOH bears the risk of OOM errors since the memory footprint of the cached state seems to be quite large. i tried several combinations of fetch depth and cache size, to no avail. i either ran into OOM errors or performance was unacceptably slow due to an excessive number of server roundtrips.
> i further noticed that sync'ing existing cached state with the results of a deep fetch is rather slow, e.g. an initial request to '/foo' returns 11k items. the cache size is 10k, i.e. the cache cannot accommodate the entire result set. assuming that /foo has been evicted, the following request to '/foo' will trigger another deep fetch which this time takes considerably more time since the result set needs to be sync'ed with existing cached state.
> using an LRU eviction policy and touching every node along the parent hierarchy when requesting an item might be a solution.
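The "touch every ancestor" idea from the last point can be sketched as a single LRU map whose lookups also mark each ancestor path as recently used, so nodes near the root are always the last candidates for eviction. Again a minimal illustration with hypothetical names (`AncestorTouchingCache`), not the actual jcr2spi API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: a bounded LRU cache keyed by path. Every get() first
// touches the ancestors of the requested path, so the parent hierarchy stays
// recently used and deep traversals cannot evict nodes near the root.
class AncestorTouchingCache<V> {
    private final Map<String, V> lru;

    AncestorTouchingCache(final int capacity) {
        this.lru = new LinkedHashMap<String, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
                return size() > capacity;
            }
        };
    }

    void put(String path, V value) {
        lru.put(path, value);
    }

    V get(String path) {
        // Touch each ancestor, deepest first, so the root ends up most
        // recently used among them.
        for (String p = parent(path); p != null; p = parent(p)) {
            lru.get(p);
        }
        return lru.get(path);
    }

    private static String parent(String path) {
        if (path.equals("/")) return null;
        int i = path.lastIndexOf('/');
        return i == 0 ? "/" : path.substring(0, i);
    }
}
```

Since the root is touched on every single lookup, it is effectively pinned as long as the cache sees any traffic at all.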

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
