jackrabbit-users mailing list archives

From jde...@21technologies.com
Subject Jackrabbit performance with large binaries
Date Fri, 08 Dec 2006 00:33:39 GMT
Hi,
I've been storing binary files of different sizes using 
SimpleDbPersistenceManager configured to use PostgreSQL.  I have 
successfully added files from 2.5 MB (around 1 second to save) up to 
103 MB (around 80 seconds to save).  I am storing the binary files by 
building a file system structure out of nt:folder, nt:file, and 
nt:resource nodes.  The binary files are then streamed into the 
jcr:data property of the appropriate resource node:

        Node resourceNode = fileNode.addNode("jcr:content", "nt:resource");
        resourceNode.setProperty("jcr:mimeType", typeHandler.getMimeType());
        resourceNode.setProperty("jcr:encoding", typeHandler.getTextEncoding());
        resourceNode.setProperty("jcr:data", resourceInput);

        resourceInput is defined as a new BufferedInputStream(new 
FileInputStream(binaryFile), 16384).
        I then save the session.
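
For completeness, fileNode itself is created with the usual nt:folder / 
nt:file calls, roughly like this (the node and file names here are just 
illustrative):

        // Build the folder/file structure (names are illustrative).
        Node folderNode = session.getRootNode().addNode("files", "nt:folder");
        Node fileNode = folderNode.addNode("largeBinary.dat", "nt:file");
        // The jcr:content (nt:resource) child is then added as shown
        // above, and everything is persisted with session.save().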

I have been getting a lot of out-of-memory exceptions running these tests. 
 The amount of memory needed to successfully save a file increases 
linearly with the size of the file: to avoid an out-of-memory exception 
I need to give the VM at least 7.5 times as much memory as the size of 
the file I want to save.  I have a similar problem when deleting files, 
since the entire node is brought into transient memory before it is 
deleted.  Is there a better way to save binary content that doesn't 
require memory proportional to the file size? Is there any way to avoid 
bringing the entire file into memory before it's saved (or bringing the 
entire node back into memory when it's deleted)?
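
For reference, the delete path is just the standard remove-and-save 
sequence, roughly like the following (the path is illustrative):

        // Remove a previously stored file (path is illustrative).
        Node fileNode = session.getRootNode().getNode("files/largeBinary.dat");
        fileNode.remove();
        // Saving the removal seems to pull the whole node, including
        // jcr:data, back into transient memory.
        session.save();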

Thanks for the help,
Joe.