couchdb-user mailing list archives

From Owen Marshall <omarsh...@facilityone.com>
Subject Re: Tracking file throughput?
Date Fri, 03 Jun 2011 14:47:16 GMT
At Fri, 3 Jun 2011 15:28:54 +0100,
muji wrote:

> A quick search for continuous compaction didn't yield anything, and I
> don't see anything here:
> 
> http://wiki.apache.org/couchdb/Compaction
> 
> Could you point me in the right direction please?

I *think* what Jan means is to fire off a compaction call to the database, either with each
update or after every so many updates. I looked at this as an option under similar
circumstances but didn't end up doing it, because the database was under heavy writes and
rapid compaction made me feel just too... nervous ;-)

You should experiment with the effects of this. It may be absolutely fine for you.
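If you do go that route, here's a rough sketch of the "compact every N updates" idea,
assuming Python with the requests library against CouchDB's HTTP API. The database URL,
N, and save_doc are placeholders of mine, not anything from this thread, and auth is
omitted for brevity:

    # Sketch: kick off compaction after every N document writes.
    import requests

    DB_URL = "http://localhost:5984/mydb"   # assumed database URL
    N = 100                                 # tune this experimentally

    update_count = 0

    def save_doc(doc):
        global update_count
        # Normal write; doc must carry the current _rev on updates.
        requests.put("%s/%s" % (DB_URL, doc["_id"]), json=doc).raise_for_status()
        update_count += 1
        if update_count % N == 0:
            # _compact returns 202 immediately; compaction runs in the
            # background, so this call doesn't block your writes.
            requests.post("%s/_compact" % DB_URL,
                          headers={"Content-Type": "application/json"})

Keep in mind compaction still costs I/O while it runs, which is exactly why heavy write
load made me nervous about doing this.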

> Funny you mention about caching before updating couch, that was my
> very first implementation! I was updating Redis with the throughput
> and then updating the file document once the upload completed. That
> worked very well but I wanted to remove Redis from the stack as the
> application is already pretty complex.
> 
> I'm guessing my best option is to revert back to that technique?

Maybe just prepare the data directly in your application layer and send the document out only
once, when everything completes.
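Something like the following, as a minimal sketch (Python/requests again, every name
here is invented): keep the counters in plain application state and do a single PUT
at the end.

    # Sketch: accumulate throughput in memory, write the doc once.
    import time
    import requests

    DB_URL = "http://localhost:5984/mydb"   # assumed

    class UploadTracker:
        def __init__(self, doc_id):
            self.doc_id = doc_id
            self.bytes_received = 0
            self.started = time.time()

        def on_chunk(self, chunk):
            # In-memory only -- no CouchDB traffic per chunk.
            self.bytes_received += len(chunk)

        def on_complete(self):
            elapsed = time.time() - self.started
            doc = {
                "bytes": self.bytes_received,
                "seconds": elapsed,
                "throughput": self.bytes_received / elapsed if elapsed else 0,
            }
            # One write, when the upload finishes.
            requests.put("%s/%s" % (DB_URL, self.doc_id),
                         json=doc).raise_for_status()

That gets you the single-write behavior you had with Redis, without the extra moving
part in your stack.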

> As an aside, why would my document update handler be raising
> conflicts? My understanding was that update handlers would not raise
> conflicts - is that correct?

IIRC, document update handlers *can* run into conflicts. The odds are much lower because the
handler reads and rewrites the document on the server, so the window for a competing write is
tiny, but with a ton of rapid writes, anything is possible!
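So it's worth handling the 409 anyway. A small sketch of a retry loop around an update
handler call (the design doc and handler names here are made up):

    # Sketch: call an update handler, retrying on 409 Conflict.
    import requests

    DB_URL = "http://localhost:5984/mydb"   # assumed

    def call_update_handler(doc_id, params, retries=3):
        # PUT /db/_design/{ddoc}/_update/{func}/{docid}
        url = "%s/_design/app/_update/set_progress/%s" % (DB_URL, doc_id)
        for _ in range(retries):
            resp = requests.put(url, params=params)
            if resp.status_code != 409:   # anything but a conflict
                resp.raise_for_status()
                return resp
        raise RuntimeError("update handler kept conflicting")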

-- 
Owen Marshall
FacilityONE
http://www.facilityone.com | (502) 805-2126

