jackrabbit-users mailing list archives

From: go canal <goca...@yahoo.com>
Subject: Re: will jackrabbit scale to petabyte repository?
Date: Wed, 12 Aug 2009 08:33:20 GMT
> How many nodes do you plan for?

Just curious: is there any guideline on the number of nodes a single Jackrabbit instance can
support with acceptable performance?

Each file will have at least one nt:file node (plus its jcr:content child and some folder
nodes), so can I translate the question into the number of files for a very rough estimate?
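
For concreteness, this is the shape I have in mind -- a minimal sketch using the standard
JCR 1.0 API; the class, path, and file names are made up:

    import java.io.InputStream;
    import java.util.Calendar;
    import javax.jcr.Node;
    import javax.jcr.Session;

    public class ImportSketch {
        // One imported file yields at least two nodes:
        // the nt:file node itself and its mandatory jcr:content child.
        public static void importFile(Session session, InputStream in)
                throws Exception {
            Node folder = session.getRootNode().addNode("docs", "nt:folder");
            Node file = folder.addNode("report.pdf", "nt:file");        // node 1
            Node content = file.addNode("jcr:content", "nt:resource");  // node 2
            content.setProperty("jcr:mimeType", "application/pdf");
            content.setProperty("jcr:lastModified", Calendar.getInstance());
            content.setProperty("jcr:data", in); // the actual binary
            session.save();
        }
    }

So a repository with N files would hold at least 2N nodes, before counting folders.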

Another thought: is there a Jackrabbit + Hadoop configuration (using Hadoop's HDFS as the
DataStore?) that would address scalability, and maybe even performance?
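
Purely hypothetically (I am not aware of an existing implementation), I imagine the core of
such a store would just stream binaries into HDFS, roughly like this -- all paths and names
are made up:

    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsStoreSketch {
        // Hypothetical: write one binary record into HDFS under its identifier.
        // A real DataStore would also need read, delete, and garbage collection.
        public static void store(InputStream in, String identifier)
                throws Exception {
            Configuration conf = new Configuration(); // reads hdfs-site.xml etc.
            FileSystem fs = FileSystem.get(conf);
            Path path = new Path("/jackrabbit/datastore/" + identifier);
            FSDataOutputStream out = fs.create(path);
            IOUtils.copyBytes(in, out, 4096, true); // true = close both streams
        }
    }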

rgds,
canal




________________________________
From: Thomas Müller <thomas.mueller@day.com>
To: users@jackrabbit.apache.org
Sent: Wednesday, August 12, 2009 2:18:18 PM
Subject: Re: will jackrabbit scale to petabyte repository?

Hi,

> considering using jackrabbit as our jcr for a rewrite of our current app. We
> currently have about 1 PB of content and metadata that we would like to store
> in a single workspace. Will jackrabbit scale to this size? Has anyone created a
> repository of this size with jackrabbit? Should we limit the size of the
> workspaces?

How many nodes do you plan for?

If it's mainly binary data (such as files), I suggest using the data
store (http://wiki.apache.org/jackrabbit/DataStore); then the size
shouldn't be a problem.
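
For example, the file-based data store is enabled with a DataStore element in
repository.xml, along the lines of the wiki page above (path and minRecordLength
shown with typical values):

    <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
        <param name="path" value="${rep.home}/repository/datastore"/>
        <param name="minRecordLength" value="100"/>
    </DataStore>

Binaries larger than minRecordLength bytes are then stored once (content-addressed,
so identical binaries are deduplicated) in that directory instead of inside the
persistence manager.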

If there is little binary data, the problem might be backup (it
depends on the persistence manager you use).

> We are
> also considering using the ‘Amazon
> S3 Persistence Manager Project’ found in the sandbox, has anyone used it in a
> production environment?

I haven't used it, but from what I know performance might be a
problem. You would need to test it yourself.

Regards,
Thomas
