couchdb-user mailing list archives

From: Martin Hewitt <mar...@thenoi.se>
Subject: Performance of many documents vs large documents
Date: Tue, 10 Jan 2012 22:43:34 GMT
Hi all,

I'm currently scoping a project which will measure a variety of indicators over a long period,
and I'm trying to work out where to strike the balance of document number vs document size.

I could have one document per metric, leading to a small number of documents, but with each
document containing ticks for every 5-second interval of any given day, these documents would
quickly become huge. 

Clearly, I could decompose these huge per-metric documents into smaller documents, and
I'm in the fortunate position that, because I'm dealing with time, I can decompose by year,
month, day, hour, minute or even second.

Going all the way down to second level would clearly create a huge number of documents,
each of them very small, so that's the other extreme.

I'm aware the usual response to this is "somewhere in the middle", which is my working hypothesis
(decomposing to the day level), but I was wondering a) whether there's anything in CouchDB's
architecture that would make one side of the "middle" more suitable, or b) whether anyone has
experience architecting something like this.
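For concreteness, here's roughly the shape I have in mind for the day-level option, sketched
with the Python couchdb client (the _id scheme, field names and "metrics" database name are
just placeholders, not a settled design):

    import couchdb

    server = couchdb.Server("http://localhost:5984/")
    db = server["metrics"]  # assumes the database already exists

    # One document per metric per day; the _id encodes metric and date so
    # a run of days can be fetched with a startkey/endkey range on _all_docs.
    doc = {
        "_id": "cpu_load/2012-01-10",
        "metric": "cpu_load",
        "date": "2012-01-10",
        # ~17,280 five-second ticks per day, keyed by seconds since midnight
        "ticks": {"0": 0.42, "5": 0.44, "10": 0.47},
    }
    db.save(doc)

The idea being that a stretch of days for one metric can then be pulled back with a key range
on _all_docs, or rolled up in a view.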

Any help gratefully appreciated.

Martin