incubator-couchdb-user mailing list archives

From svilen
Subject Re: huge attachments - experience?
Date Tue, 26 Mar 2013 11:53:15 GMT
Jens, Nils, Dave, thanks for answering. 

it's all on a local network, or a nearly-local network. speed doesn't
matter, consistency does. each copy can get changed, but it's
usually different things that change. it's all separate files or dirs of
files (eventually the metadata could go as docs into couchdb itself,
one day). think of describing the contents of many LPs.

The most troublesome part is renaming, moving and deleting stuff -
files AND dirs - as i keep doing that all the time. i've been trying
lftp, rsync, csync2, ocsync.. all with varying success. None of them
manages dir-renames. Now i'm trying bazaar, as that does manage the
renaming, but keeping .mp3/.flac files in a vcs is somewhat.. too much.

hmm maybe i need to somehow disconnect the data itself from the
dir/file naming.
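one way that could look (a hypothetical sketch, untested - the names
and layout here are made up, not any existing tool): store each blob
under its content hash, and keep the human-readable dir/file names as
a small mapping on top. renames and moves then only rewrite the tiny
mapping, never the big audio data:

```python
import hashlib
from pathlib import Path


def blob_id(path: Path) -> str:
    """Content hash of a file; identical audio data gets the same id."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def store(path: Path, blobdir: Path, names: dict) -> str:
    """Copy a file into content-addressed storage, record its name."""
    bid = blob_id(path)
    target = blobdir / bid
    if not target.exists():          # dedup: same content stored once
        target.write_bytes(path.read_bytes())
    names[str(path)] = bid           # name -> content mapping
    return bid


def rename(old: str, new: str, names: dict) -> None:
    """A rename/move only touches the mapping, not the blob."""
    names[new] = names.pop(old)
```

syncing then splits into two much easier problems: replicating
immutable blobs (append-only, order doesn't matter) and replicating
the small names mapping - which is exactly the kind of small document
couchdb replicates well.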

eventually, if i find the time, i may make a fuse filesystem layer
using couchdb as a replicated change-log, while the actual files stay
just plain files.. but that's yet another todo.
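a rough sketch of what those change-log docs might look like (a
hypothetical schema, all field names made up): one doc per logical
file, carrying its current path and a content hash. in a real setup
these dicts would be PUT to couchdb and arrive at other copies via
normal replication; here the apply step is modelled in memory:

```python
# Hypothetical doc schema for a couchdb-backed change-log.
# The big audio data never passes through couchdb - only these
# small docs do.

def make_doc(file_id: str, path: str, content_hash: str, rev: int = 1) -> dict:
    """One doc per logical file: where it lives and what's in it."""
    return {"_id": file_id, "path": path, "hash": content_hash, "rev": rev}


def apply_doc(doc: dict, local: dict) -> list:
    """Diff a replicated doc against local state; return the actions
    the fuse layer would take on the plain files: rename the file,
    or fetch new content from a peer."""
    actions = []
    cur = local.get(doc["_id"])
    if cur is None:
        actions.append(("fetch", doc["hash"], doc["path"]))
    else:
        if cur["path"] != doc["path"]:
            actions.append(("rename", cur["path"], doc["path"]))
        if cur["hash"] != doc["hash"]:
            actions.append(("fetch", doc["hash"], doc["path"]))
    local[doc["_id"]] = doc
    return actions
```

a dir-rename then becomes a batch of small path updates to docs,
which replicate for free, and each copy just renames its plain files
to match.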


On Tue, 26 Mar 2013 11:52:39 +0100
Nils Breunese <> wrote:

> svilen wrote:
> > i need some form of synchronised storage for audio files (with
> > some metadata). Something like 10-400Mb per attachment, 1-10
> > attachments per doc, overall about 1000 docs, 3000 attachments,
> > 300G total. Now i'm using just plain filesystem but it's a pain to
> > maintain consistency across several copies.
> Do you have a master copy? Are all copies on a LAN or around the
> globe? How fast should changes propagate across all copies? Is the
> metadata stored in the audio files, or could it be? Or does the
> metadata need to be stored separately? Not that I don't like CouchDB,
> but it sounds like plain old rsync could be a reasonable solution. 
> Nils.
