incubator-clerezza-dev mailing list archives

From Reto Bachmann-Gmuer <reto.bachm...@trialox.org>
Subject Re: Backup Strategy for content graph
Date Mon, 03 May 2010 11:36:44 GMT
Hi Fabian

What we could do (as a Clerezza improvement) would be to log all changes to
the graph. From that log you could read the differences; otherwise there is
no usable way to get them from the graph.
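
As a rough sketch of what such a change log could look like (a hypothetical
class, not existing Clerezza API; it just records every change as one line,
so a differential backup would only need the lines written since the last
full dump):

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    // Hypothetical sketch, not existing Clerezza API: every addition and
    // removal on the content graph gets appended to a plain-text change log.
    public class ChangeLoggingGraph {

        private final PrintWriter log;

        public ChangeLoggingGraph(String logFile) throws IOException {
            // append mode so the log survives restarts; autoflush so a crash
            // loses at most the current line
            this.log = new PrintWriter(new FileWriter(logFile, true), true);
        }

        public void logAddition(String subject, String predicate, String object) {
            log.println("A " + System.currentTimeMillis() + " "
                    + subject + " " + predicate + " " + object);
        }

        public void logRemoval(String subject, String predicate, String object) {
            log.println("D " + System.currentTimeMillis() + " "
                    + subject + " " + predicate + " " + object);
        }
    }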

Possibly, backing up the backend directly (the RDB or the Sesame store) would
be faster than getting a dump via Clerezza.
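
For example, if the Sesame store keeps its data in a directory on disk, a
nightly job could copy that directory straight to the backup location. A
minimal sketch, assuming the paths below (they are placeholders for the
store's real configuration, and the store should be idle while copying):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import java.util.stream.Stream;

    // Hypothetical sketch: recursively copy the backend's data directory to
    // a dated backup location. Both paths are assumptions.
    public class BackendBackup {

        public static void main(String[] args) throws IOException {
            Path source = Paths.get("/var/clerezza/sesame-store");   // assumed location
            Path target = Paths.get("/backup/sesame-store-" + System.currentTimeMillis());
            Files.createDirectories(target.getParent());

            try (Stream<Path> paths = Files.walk(source)) {
                paths.forEach(p -> {
                    try {
                        Files.copy(p, target.resolve(source.relativize(p)),
                                StandardCopyOption.COPY_ATTRIBUTES);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
            }
        }
    }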

Cheers,
reto

On Sun, May 2, 2010 at 4:34 PM, Tsuyoshi Ito <tsuy.ito@clerezza.com> wrote:

> Hi Fabian
>
>
> On Apr 30, 2010, at 4:41 PM, Fabian Wabbel wrote:
>
> > Hi,
> >
> > We're using the integrated backup solution from Clerezza at the moment
> > for backing up the content graph (e.g.
> > http://localhost:8383/admin/backup/download). In one of our production
> > systems the graph is quite big, and we're now facing problems copying it
> > to another machine each night (at the moment it's about 200 MB; if the
> > size keeps growing as it has, it will hit 1 GB within the next few
> > weeks). Any idea how to create differential backups, or any other advice
> > to solve this issue?
>
> I think the graph is quite big because your system stores all digital
> assets in the graph. I suggest removing the digital assets from the graph
> and storing them directly on the filesystem, or in a database optimized
> for storing digital assets (PDFs, images, etc.).
>
> Cheers
> Tsuy
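
A minimal sketch of the approach Tsuy suggests above, assuming the assets
end up in a plain directory (the class name and layout are illustrative,
not existing Clerezza API):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Hypothetical sketch, not existing Clerezza API: the binary content goes
    // to the filesystem, and only the returned URI (plus whatever metadata
    // you keep) is stored in the content graph.
    public class AssetStore {

        private final Path assetDir;

        public AssetStore(Path assetDir) throws IOException {
            this.assetDir = Files.createDirectories(assetDir);
        }

        // Writes the asset to disk and returns the URI to reference from the graph.
        public String store(String name, byte[] content) throws IOException {
            Path file = assetDir.resolve(name);
            Files.write(file, content);
            return file.toUri().toString();
        }
    }

The nightly dump then only contains the small triples pointing at those
files, and the files themselves can be backed up incrementally with ordinary
filesystem tools.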
