couchdb-dev mailing list archives

From Paul Davis <paul.joseph.da...@gmail.com>
Subject Re: chunkify profiling (was Re: Patch to couch_btree:chunkify)
Date Thu, 14 May 2009 16:14:04 GMT
>
> You know BERT better than I do -- you said the size of a binary is stored in
> its header, correct?
>

I'm not sure now. It may only get length information when being sent
across the wire.
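(For what it's worth, this can be checked directly in the shell: a binary's byte size is kept in its header on the heap, so byte_size/1 is O(1), and the external term format written by term_to_binary/1 also carries an explicit 4-byte length for each binary — it is not only added on the wire. A quick sketch, not from the original thread:)

```erlang
%% byte_size/1 reads the length from the binary's header; it does not
%% scan the bytes, so it is O(1) regardless of binary size.
Bin = <<"hello">>,
5 = byte_size(Bin),

%% term_to_binary/1 emits the external term format: version tag 131,
%% then BINARY_EXT (tag 109) followed by a 4-byte big-endian length.
<<131, 109, Len:32, Payload/binary>> = term_to_binary(Bin),
5 = Len,
<<"hello">> = Payload.
```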

>> I'll put this on the weekend agenda. Until I can show that it's
>> consistently faster I'll hold off.
>>
>> For reference, when you say 2K docs in batches of 1K, did you mean 200K?
>
> No, I meant 2k (2 calls to _bulk_docs).  200k would have generated a
> multi-GB trace and I think fprof:profile() would have melted my MacBook
> processing it.  YMMV ;-)

I thought you knew the guys at CERN ;)

Thanks for writing this up and do please post code somewhere. This
weekend I'll take a bit of time to see if I can weasel anything better
out of the fprof stuff.
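(For reference, the usual fprof sequence looks roughly like this — a sketch using lists:sort/1 as a stand-in for the real couch_btree:chunkify/2 call. fprof:apply/3 writes every traced call to a trace file, which is why profiling 200k docs would produce a multi-GB trace:)

```erlang
%% Trace one call; fprof writes the trace to "fprof.trace" by default.
%% lists:sort/1 is a stand-in here for the actual function under test.
[1, 2, 3] = fprof:apply(lists, sort, [[3, 1, 2]]),

%% Process the trace file into call statistics (the expensive step).
ok = fprof:profile(),

%% Write the human-readable analysis to a file.
ok = fprof:analyse([{dest, "fprof.analysis"}]).
```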

Paul Davis
