couchdb-user mailing list archives

From Nils Breunese <N.Breun...@vpro.nl>
Subject Re: Using couchdb for analytics
Date Tue, 27 Sep 2011 13:42:59 GMT
maku@makuchaku.in wrote:

> I just saw a video on how "ShareThis" is implementing its analytics (though
> on mongodb) using buffer queues
> (http://www.10gen.com/presentation/mongosv2010/sharethis).
>
> Coming back to the problem in hand, I did a quick test.
>
> _revs_limit=1000 (default)
> Counter started from 0 & was incremented to 44,000
>
> I managed to generate 1GB of disk usage in the database...
> (I added a couple of attachments (~80k) to the document to simulate a
> real-world document size.)
>
> 1GB disk usage for just incrementing a counter is a bit too large…

Instead of updating a document, you could also use a reduce function to count.

1. Add a document for every count.
2. emit(foo, 1) in your map function.
3. Use the _count built-in as the reduce function.

It might not be more efficient disk-space-wise, but it will probably save a lot
of update/conflict headaches.
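
For illustration, a minimal sketch of such a design document (the "analytics"
and "clicks" names and the doc.page field are just placeholders, not something
from this thread):

  {
    "_id": "_design/analytics",
    "views": {
      "clicks": {
        "map": "function(doc) { if (doc.page) { emit(doc.page, 1); } }",
        "reduce": "_count"
      }
    }
  }

Querying the view with group=true then returns one row per distinct key with
its total, e.g.:

  GET /yourdb/_design/analytics/_view/clicks?group=true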

Just my 2 ct.,

Nils.
------------------------------------------------------------------------
 VPRO   www.vpro.nl
------------------------------------------------------------------------
