jackrabbit-dev mailing list archives

From "Thomas Mueller (JIRA)" <j...@apache.org>
Subject [jira] Commented: (JCR-926) Global data store for binaries
Date Thu, 06 Sep 2007 10:26:31 GMT

    [ https://issues.apache.org/jira/browse/JCR-926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12525395 ]

Thomas Mueller commented on JCR-926:

Revision 573209: Configuration is now supported. Still the system property 'org.jackrabbit.useDataStore'
is required to enable this feature, but now the data store class (and for the FileDataStore,
the path) can be configured:

    <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
        <param name="path" value="${rep.home}/repository"/>
    </DataStore>

The DataStore API was changed slightly to support this. The DataStore configuration is optional;
if it is missing, the system works almost as before. Almost, because the BLOBValue class is no longer
used. The system property org.jackrabbit.useDataStore will be removed once this is tested.
Also, the system property org.jackrabbit.minBlobFileSize will be integrated into the DataStore
configuration. My idea is that each data store implementation (file system, database, S3?) can have
a different 'minimum size', depending on the overhead of storing / loading a value.
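To make the idea concrete, here is a minimal sketch of a per-implementation minimum size. The interface and class names below are illustrative only, not the actual Jackrabbit DataStore API: values below the threshold would stay inline in the persistence layer, larger ones would go to the data store.

```java
// Illustrative sketch only; not the actual Jackrabbit DataStore API.
public class MinSizeExample {

    // Hypothetical data store abstraction with a per-implementation threshold.
    interface SimpleDataStore {
        // Binaries smaller than this many bytes are kept inline instead.
        int getMinRecordLength();
    }

    // A file-based store has low per-record overhead, so a small threshold.
    static class FileStore implements SimpleDataStore {
        public int getMinRecordLength() { return 100; }
    }

    // A database store pays more per round trip, so a larger threshold.
    static class DbStore implements SimpleDataStore {
        public int getMinRecordLength() { return 4096; }
    }

    // Decide where a binary of the given length should be stored.
    static boolean useDataStore(SimpleDataStore store, long length) {
        return length >= store.getMinRecordLength();
    }

    public static void main(String[] args) {
        System.out.println(useDataStore(new FileStore(), 50));   // stays inline
        System.out.println(useDataStore(new DbStore(), 2000));   // below DB threshold
        System.out.println(useDataStore(new FileStore(), 2000)); // goes to the file store
    }
}
```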

By the way, the FileDataStore overhead (mainly calculating the SHA-1 digest) is quite low,
smaller than 10%. Writing and reading 5 files of 100 KB each, averaged over 5 runs:
FileDataStore: 1390 ms, FileOutputStream: 1287 ms
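The low overhead makes sense given what the store does per value. A minimal sketch of the idea, computing a SHA-1-based identifier while streaming the data (this mirrors the concept, not Jackrabbit's exact code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of a content-addressed write: the SHA-1 digest of the stream
// becomes the record identifier, so identical binaries share one record.
public class ContentHash {

    public static String identifier(InputStream in)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        // The digest is updated as the stream is read
        // (a real store would also copy the bytes to disk here).
        try (DigestInputStream din = new DigestInputStream(in, sha1)) {
            byte[] buf = new byte[8192];
            while (din.read(buf) != -1) {
                // intentionally empty: reading drives the digest
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : sha1.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "hello".getBytes("UTF-8");
        // SHA-1 of "hello": aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
        System.out.println(identifier(new ByteArrayInputStream(data)));
    }
}
```

Hashing a few hundred kilobytes this way is cheap compared to the disk I/O itself, which is consistent with the measured overhead above.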

> Global data store for binaries
> ------------------------------
>                 Key: JCR-926
>                 URL: https://issues.apache.org/jira/browse/JCR-926
>             Project: Jackrabbit
>          Issue Type: New Feature
>          Components: core
>            Reporter: Jukka Zitting
>         Attachments: dataStore.patch, DataStore.patch, DataStore2.patch, dataStore3.patch,
> dataStore4.zip, dataStore5-garbageCollector.patch, internalValue.patch, ReadWhileSaveTest.patch
> There are three main problems with the way Jackrabbit currently handles large binary values:
> 1) Persisting a large binary value blocks access to the persistence layer for extended
> amounts of time (see JCR-314)
> 2) At least two copies of binary streams are made when saving them through the JCR API:
> one in the transient space, and one when persisting the value
> 3) Versioning and copy operations on nodes or subtrees that contain large binary values
> can quickly end up consuming excessive amounts of storage space.
> To solve these issues (and to get other nice benefits), I propose that we implement a
global "data store" concept in the repository. A data store is an append-only set of binary
values that uses short identifiers to identify and access the stored binary values. The data
store would trivially fit the requirements of transient space and transaction handling due
to the append-only nature. An explicit mark-and-sweep garbage collection process could be
added to avoid concerns about storing garbage values.
> See the recent NGP value record discussion, especially [1], for more background on this.
> [1] http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200705.mbox/%3c510143ac0705120919k37d48dc1jc7474b23c9f02cbd@mail.gmail.com%3e
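The mark-and-sweep collection described in the proposal could, under simplifying assumptions (all identifiers and references held in memory; all names below are illustrative), be sketched as:

```java
import java.util.HashSet;
import java.util.Set;

// Toy mark-and-sweep over a content-addressed store: identifiers still
// referenced by repository content are marked; everything else is swept.
// In an append-only store this is safe to run while writes are happening,
// as long as records added during the scan are treated as marked.
public class MarkAndSweep {

    // Removes unreferenced identifiers from storedIds; returns what was swept.
    public static Set<String> collect(Set<String> storedIds, Set<String> referencedIds) {
        // Mark phase: everything reachable from repository content survives.
        Set<String> marked = new HashSet<>(storedIds);
        marked.retainAll(referencedIds);

        // Sweep phase: whatever was stored but never marked is garbage.
        Set<String> swept = new HashSet<>(storedIds);
        swept.removeAll(marked);
        storedIds.removeAll(swept);
        return swept;
    }

    public static void main(String[] args) {
        Set<String> store = new HashSet<>();
        store.add("aaa"); store.add("bbb"); store.add("ccc");
        Set<String> refs = new HashSet<>();
        refs.add("aaa"); refs.add("ccc");
        System.out.println(MarkAndSweep.collect(store, refs)); // "bbb" is swept
        System.out.println(store);                             // "aaa", "ccc" remain
    }
}
```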

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
