jackrabbit-dev mailing list archives

From "Thomas Mueller (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (JCR-3735) Efficient copying of binaries in Jackrabbit DataStores
Date Tue, 25 Feb 2014 08:15:20 GMT

    [ https://issues.apache.org/jira/browse/JCR-3735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13911375#comment-13911375 ]

Thomas Mueller commented on JCR-3735:

I wonder why you need to optimize the Jackrabbit 2.x DataStore. I think we should rather focus
on optimizing Oak and not spend too much time trying to optimize Jackrabbit 2.x.

Please note that in Oak, the BlobStores based on AbstractBlobStore do not create temporary
files; they avoid them by splitting large binaries into chunks. The same applies to the SegmentNodeStore.
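
As an illustration only, a chunked write can be sketched roughly as follows (hypothetical
names and chunk size, not the actual AbstractBlobStore API):

import java.io.IOException;
import java.io.InputStream;

// Rough sketch of writing a binary in fixed-size chunks so that no
// temporary file is needed. Names and the chunk size are hypothetical.
public class ChunkedWriteSketch {

    private static final int CHUNK_SIZE = 2 * 1024 * 1024; // illustrative 2 MB

    // Reads the stream chunk by chunk and hands each chunk to the store.
    static void writeChunked(InputStream in) throws IOException {
        byte[] buffer = new byte[CHUNK_SIZE];
        int read;
        while ((read = readBlock(in, buffer)) > 0) {
            storeChunk(buffer, read); // placeholder for persisting one chunk
        }
    }

    // Fills the buffer as far as the stream allows; returns bytes read.
    private static int readBlock(InputStream in, byte[] buffer) throws IOException {
        int total = 0;
        while (total < buffer.length) {
            int n = in.read(buffer, total, buffer.length - total);
            if (n < 0) {
                break;
            }
            total += n;
        }
        return total;
    }

    // Placeholder: a real store would key each chunk, e.g. by its content hash.
    private static void storeChunk(byte[] chunk, int length) {
        // persist chunk[0..length) somewhere
    }
}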

> Efficient copying of binaries in Jackrabbit DataStores
> ------------------------------------------------------
>                 Key: JCR-3735
>                 URL: https://issues.apache.org/jira/browse/JCR-3735
>             Project: Jackrabbit Content Repository
>          Issue Type: Improvement
>          Components: jackrabbit-core
>    Affects Versions: 2.7.4
>            Reporter: Amit Jain
> In the DataStore implementations an additional temporary file is created for every binary
> uploaded. This step is additional overhead when the upload process itself already creates a
> temporary file.
> So, the solution proposed is to check whether the input stream passed in is a FileInputStream
> and then use the FileChannel object associated with the input stream to copy the file.
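
For illustration, the check described above could look roughly like this (a minimal sketch,
not the actual Jackrabbit patch; the class name and the fallback copy are placeholders):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.FileChannel;

// Sketch: if the incoming stream is already backed by a file, copy it via
// its FileChannel instead of buffering it through yet another temporary file.
public class ChannelCopySketch {

    static void copyTo(InputStream in, File target) throws IOException {
        if (in instanceof FileInputStream) {
            FileChannel source = ((FileInputStream) in).getChannel();
            try (FileChannel dest = new FileOutputStream(target).getChannel()) {
                long size = source.size();
                long pos = 0;
                while (pos < size) {
                    pos += source.transferTo(pos, size - pos, dest);
                }
            }
        } else {
            // fallback: plain buffered copy for streams not backed by a file
            try (FileOutputStream out = new FileOutputStream(target)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
        }
    }
}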

This message was sent by Atlassian JIRA
