jackrabbit-users mailing list archives

From Francisco Carriedo Scher <fcarrie...@gmail.com>
Subject Re: Question : Is there any limit on node dimension?
Date Sun, 24 Jun 2012 10:46:05 GMT
Don't worry, Jackrabbit can handle files of that size properly. Just use the
DataStore (it saves the files under a file-system path you specify, far more
efficient than having an RDBMS handle large BLOBs) and you are done!
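For reference, enabling the FileDataStore is a small addition to repository.xml. This is a minimal sketch; the path and minRecordLength values below are illustrative, not the only valid choices:

```xml
<!-- Binaries larger than minRecordLength bytes are written to files
     under "path" instead of being stored with the node data. -->
<DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
  <param name="path" value="${rep.home}/repository/datastore"/>
  <param name="minRecordLength" value="100"/>
</DataStore>
```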

There is no problem getting resources out of the repository either: the file
is streamed, so there are no serious memory issues.
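To illustrate the streaming behaviour, here is a sketch using the standard JCR 2.0 API. The node names and file path are made up for the example; with a DataStore configured, `createBinary` spools the stream to disk and `getStream` reads it back, so the full file is never held in memory:

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;

import javax.jcr.Binary;
import javax.jcr.Node;
import javax.jcr.Session;

public class LargeFileExample {

    // Store a large file: the input stream is spooled into the DataStore.
    static void storeFile(Session session, String localPath) throws Exception {
        Node file = session.getRootNode().addNode("big-file", "nt:file");
        Node content = file.addNode("jcr:content", "nt:resource");
        try (InputStream in =
                new BufferedInputStream(new FileInputStream(localPath))) {
            Binary binary = session.getValueFactory().createBinary(in);
            try {
                content.setProperty("jcr:data", binary);
                session.save();
            } finally {
                binary.dispose();
            }
        }
    }

    // Read it back: Binary.getStream() streams straight from the DataStore.
    static InputStream readFile(Session session) throws Exception {
        Node content = session.getNode("/big-file/jcr:content");
        return content.getProperty("jcr:data").getBinary().getStream();
    }
}
```

Note that this requires a running Jackrabbit repository and an open `Session`, so it is a usage sketch rather than a standalone program.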

Hope that helps!


2012/6/9 Mark Herman <MHerman@nbme.org>

> Di Simone Maria Nicole wrote
> >
> > Hi everybody,
> > is there a limit or any best practice for node dimensions?
> > In my project someone would like to store very big documents (1GB) but
> > I don't agree with this idea.
> > Is there any recommendation about this topic?
> >
> What persistence manager do you plan on using?  The default uses whatever
> file system you're on, which can probably handle 1 GB files without an
> issue.
> I wouldn't be surprised if some SQL database implementations don't react
> well to being loaded with a bunch of 1 GB files.
> Do you expect Lucene to be indexing the content of this file?  In my
> experience, the indexer is fairly non-invasive, so I wouldn't expect it to
> hurt the server.  I've never thrown something that big at it, though.
> Either way, you should watch the JVM memory; I'm not sure how much
> Jackrabbit needs to hold in memory, or whether it can stream straight to
> the persistence manager.
> --
> View this message in context:
> http://jackrabbit.510166.n4.nabble.com/Question-Is-there-any-limit-on-node-dimension-tp4655240p4655338.html
> Sent from the Jackrabbit - Users mailing list archive at Nabble.com.
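On the persistence-manager question raised above: once a DataStore is configured, the persistence manager only stores node bundles and references, so even a database-backed one never sees the large binaries. A hypothetical repository.xml fragment (the Derby URL is just a placeholder):

```xml
<!-- Bundle persistence manager for node data; binaries stay in the
     DataStore, so 1 GB files never reach the database. -->
<PersistenceManager
    class="org.apache.jackrabbit.core.persistence.pool.DerbyPersistenceManager">
  <param name="url" value="jdbc:derby:${wsp.home}/db;create=true"/>
</PersistenceManager>
```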
