jackrabbit-users mailing list archives

From Alexander Klimetschek <aklim...@day.com>
Subject Re: One Repository or Many?
Date Thu, 30 Jul 2009 14:07:30 GMT
2009/7/30 Fabián Mandelbaum <fmandelbaum@gmail.com>:
> Hello there,
>
> I want to use JCR to store content and metadata for a CMS, and that
> CMS must support different customers. In order to separate each
> customer's storage space for both security and reliability reasons (if
> a given customer's storage space somehow gets corrupted, the rest of
> the customers are not affected), which JCR strategy is recommended for
> this?
>
> With databases one has one DB server containing many DBs, and the DBA
> configures them so that each user of the server cannot access the
> other users' DBs, and if one DB gets corrupted, that doesn't affect
> the rest (nor does it take the server down with it).
>
> As far as I know, JCR doesn't have this "one server, many DBs"
> (storage spaces) concept. The closest thing to it is JCR workspaces,
> but again, using workspaces to separate content this way is
> discouraged by JCR gurus.
>
> So, is it "one JCR repo to rule 'em all" (and if so, what are the
> recommended best practices to avoid repo corruption, thinking of
> backup and restore using the system view XML export/import), or "one
> JCR repo per customer"?
>
> Thanks in advance for your answers.

I think there is no general answer. It depends on the amount of data
and on how "large" each customer is, i.e. how many requests they will
generate and whether you also want to give them separate web app
servers (in which case separate repositories make sense). It also
depends on whether you want the ability to access or combine data
across customers, e.g. for statistics (where a single workspace would
be the simplest approach, or multiple workspaces from which you could
easily copy certain data into a separate workspace).
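The cross-workspace copy mentioned above can be sketched with the
standard JCR API. This is a minimal sketch, not a complete program: the
workspace names ("customer-a", "stats"), the paths, and the admin
credentials are hypothetical placeholders, and a real deployment would
obtain the Repository from its container rather than pass it in.

```java
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

public class CopyToStatsWorkspace {

    // Copy a customer's subtree into a shared "stats" workspace.
    // Workspace names, paths, and credentials are illustrative only.
    public static void copyReports(Repository repository) throws Exception {
        // Log in to the *destination* workspace: in JCR 1.0,
        // Workspace.copy(srcWorkspace, srcAbsPath, destAbsPath) pulls
        // content from another workspace into the current one.
        Session statsSession = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()),
                "stats");
        try {
            statsSession.getWorkspace().copy(
                    "customer-a",           // source workspace
                    "/content/reports",     // source path
                    "/customer-a/reports"); // destination path in "stats"
        } finally {
            statsSession.logout();
        }
    }
}
```

Because the copy is workspace-persistent, no save() is needed; it either
succeeds as a whole or throws.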

If you use embedded Derby, it is very easy to have many repositories,
one per app server, each storing its data locally.
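With Jackrabbit, the embedded-Derby setup is mostly a matter of
configuration. A repository.xml fragment for the per-workspace
persistence manager might look like the sketch below; treat the exact
class name and parameters as assumptions, since they vary between
Jackrabbit versions.

```xml
<!-- Sketch: per-workspace persistence via embedded Derby.
     Class name and params may differ across Jackrabbit versions. -->
<PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.DerbyPersistenceManager">
  <!-- ${wsp.home} expands to this workspace's directory, so each
       repository instance keeps its Derby files locally -->
  <param name="url" value="jdbc:derby:${wsp.home}/db;create=true"/>
  <param name="schemaObjectPrefix" value="${wsp.name}_"/>
</PersistenceManager>
```

Each app server then points its own repository home at a local
directory, giving you one self-contained repository per customer.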

Regards,
Alex

-- 
Alexander Klimetschek
alexander.klimetschek@day.com
