couchdb-dev mailing list archives

From Adam Kocoloski <>
Subject Re: Gridfs for CouchDb
Date Mon, 29 Jul 2019 14:33:13 GMT
Hi Reddy,

Yes, something like this is possible to build on FoundationDB. The main challenge is that
every FoundationDB transaction needs to be under 10 MB, so the CouchDB layer would need to
stitch together multiple transactions in order to support larger attachments and record some
metadata at the end to make the result visible to the user.
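As a rough illustration of that stitching, here is a minimal Python sketch. It simulates the key-value store with an in-memory dict rather than a real FoundationDB client, and all names in it (put_attachment, get_attachment, the key tuples) are hypothetical, not CouchDB or FDB APIs. The point is the ordering: every chunk is written in its own "transaction", and the metadata record is committed last so a reader never observes a half-uploaded attachment.

```python
# Sketch: storing an attachment larger than one transaction's 10 MB limit
# by splitting it into chunks, each written in its own "transaction",
# then committing a metadata record last so readers never see a partial
# upload. A plain dict stands in for FoundationDB; the function names and
# key layout are illustrative assumptions, not real APIs.

TXN_LIMIT = 10 * 1024 * 1024          # FoundationDB's per-transaction cap
CHUNK_SIZE = 1 * 1024 * 1024          # stay well under the limit per write

store = {}                            # simulated key-value database

def put_attachment(doc_id, name, data):
    n_chunks = 0
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        # each chunk write models one independent transaction
        store[("chunk", doc_id, name, n_chunks)] = chunk
        n_chunks += 1
    # metadata written last: only now does the attachment become visible
    store[("meta", doc_id, name)] = {"length": len(data), "chunks": n_chunks}

def get_attachment(doc_id, name):
    meta = store.get(("meta", doc_id, name))
    if meta is None:
        raise KeyError("attachment not committed")
    return b"".join(store[("chunk", doc_id, name, i)]
                    for i in range(meta["chunks"]))

blob = b"x" * (3 * CHUNK_SIZE + 100)   # larger than one chunk
put_attachment("doc1", "video.bin", blob)
assert get_attachment("doc1", "video.bin") == blob
```

A real implementation would also need to handle a writer failing partway through (orphaned chunk keys with no metadata record), e.g. by garbage-collecting chunks whose upload never committed.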

Personally, I’d like to see a design for attachments that allows CouchDB the option to offload
the actual binary storage for attachments to an object store purpose-built for that sort of
thing, while still maintaining the CouchDB API including replication capabilities. All the
major cloud providers have object storage services, and if you’re not running on cloud infrastructure
there are open source projects like MinIO and Ceph that are far more efficient at storing
large binaries than CouchDB or FoundationDB will ever be.

Of course, I recognize that this integration is extra complexity that many administrators
do not need or want, and so we’ll require some native option for attachment storage. The
main question I have is whether we write all the extra code to support internal storage of
attachments that exceed 10 MB, knowing that we’d still deliver worse performance at higher
cost than the “object store offload” approach.

I’m curious why you proposed “attachment” vs. “largeAttachment” as a user-visible
distinction; that hadn’t occurred to me personally.

Cheers,


> On Jul 29, 2019, at 1:43 AM, Reddy B. <> wrote:
>
> Hello,
>
> MongoDB has a driver called GridFS intended to handle large files. Since they have a
> hard limit of 16 MB per document, this driver transparently splits a file into 256 KB
> chunks and then transparently reassembles it upon read. Metadata is stored so it supports
> things such as range queries (very useful in video/audio streaming scenarios; CouchDB
> supports range queries too); more information is available on this page:
>
> I was wondering if something similar could be built on top of FoundationDB and if such
> an approach would solve the current issues with large attachments. In particular, it could
> make replication easier, since only small files would need to be replicated and it would be
> easier to resume replication at a particular chunk.
>
> MongoDB stores this data in a dedicated "collection", which is not the CouchDB way. My
> thinking was that this could be opt-in: in addition to a document being able to have an
> attachment, we could introduce a new entity called largeAttachment using such a driver
> behind the scenes, and the user would choose how best to store his data based on the
> performance characteristics of each storage method and his needs (field, attachment,
> largeAttachment).
>
> I am just wondering if the idea is broadly feasible in the next FDB-based version or
> if there is an obvious showstopper / challenge that would need to be addressed first.
>
> Thank you!
> Reddy
