couchdb-dev mailing list archives

From Paul Davis <>
Subject Re: chunkify profiling (was Re: Patch to couch_btree:chunkify)
Date Wed, 13 May 2009 20:01:31 GMT

No worries about the delay. I'd agree that the first graph doesn't
really show much, other than *maybe* that the patch reduces the
variability a bit.

On the second graph, I haven't the faintest idea why that would be.
I'll have to try to set up fprof and see if I can figure out what
exactly is taking most of the time. Perhaps we're looking at the wrong
thing by reducing term_to_binary. You did say the other day that most
of the time was spent in size/1 as opposed to term_to_binary, which
is hard to believe at best.
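For what it's worth, a minimal sketch of the kind of fprof run I have
in mind (the function name and the sleep duration are placeholders, and
Pid would be the couch_db_updater pid in a real session):

```erlang
%% Hypothetical sketch: trace one process with fprof while a workload runs,
%% then write the ACC/OWN analysis to a file for inspection.
profile_updater(Pid) ->
    fprof:trace([start, {procs, [Pid]}]),      % start tracing only the target pid
    timer:sleep(10000),                        % placeholder: let the insert workload run
    fprof:trace(stop),                         % stop tracing
    fprof:profile(),                           % compile the raw trace data
    fprof:analyse([{dest, "fprof.analysis"}]). % ACC/OWN breakdown per function
```

The ACC column in the resulting file is accumulated time including
callees, OWN is time in the function itself, which is presumably what
Adam's ratio below is computed from.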

I'll put this on the weekend agenda. Until I can show that it's
consistently faster, I'll hold off.

For reference, when you say 2K docs in batches of 1K, did you mean 200K?

Also, note to self, we should check speeds for dealing with uuids too
to see if the non-ordered mode makes a difference.


On Wed, May 13, 2009 at 3:33 PM, Adam Kocoloski <> wrote:
> Sorry for the delay on this front.  I ran hovercraft:lightning 20 times each
> with and without Paul's patch.  Each run inserted 2k docs in batches of
> 1000.  Here are two plots showing the effect of the patch:
> The first plot histograms the insert rate for the two scenarios*.  I don't
> really see much of a difference.  The second plot uses fprof to plot the
> fraction of time the couch_db_updater process spent in chunkify and any
> functions called by chunkify.  For those familiar with fprof, it's the ratio
> of ACC for couch_btree:chunkify/2 divided by OWN for the updater pid.  If
> fprof is to be believed, the trunk code is almost 2x faster.
> Adam
> * the reason the insert rate is so low is because fprof is running.  Turning
> off the profiler speeds things up by an order of magnitude, in accord with
> the numbers Chris has posted.
