On Thu, Mar 26, 2009 at 6:36 PM, Adam Wolff <email@example.com> wrote:
> Hi everyone,
> We are running an alpha version of our software against a couchdb instance
> with a handful of documents, and we're seeing response times from our views
> of ~500ms. This is measured both within our application, and hitting the
> view directly using firebug+firefox.
> The view I'm talking about matches about 5 documents and returns about 9K of
> data. I'm running:
> Apache CouchDB 0.8.1-incubating (LogLevel=info)
> Erlang (BEAM) emulator version 5.6.5 [source] [async-threads:0] [hipe]
> This is all running on my MacBook Pro 2.33GHz Core 2 Duo with 3GB of RAM.

You'll definitely want to upgrade to trunk, or to 0.9, which is just now out for pre-release testing. 500 ms is way, way slow; trunk should help, but there's probably something else going on as well.
> By logging, I can see that my reduce function is running every time I access
> the view. The response time is about the same whether I've committed a new
> version of one of the documents in the view or not. This surprised me, since
> I thought that view results were cached. I've also tried logging the amount
> of time actually spent *in* my reduce function, but that appears to be

The reduce function is generally run once per final reduce operation currently. If I'm not mistaken, this means that you get it once per key when group=true and just once when group=false.
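To make the per-key behavior above concrete, here is a minimal sketch of a CouchDB-style reduce function (the real signature is function(keys, values, rereduce)), plus some hypothetical driver code standing in for the view engine. With group=true the engine calls reduce once per distinct key; with group=false it combines the partial results through the rereduce branch:

```javascript
// CouchDB-style reduce: counts emitted rows, and sums partial counts
// when rereduce is true.
function reduceCount(keys, values, rereduce) {
  if (rereduce) {
    // values are earlier reduce outputs (numbers) -- sum them
    return values.reduce(function (a, b) { return a + b; }, 0);
  }
  // values are raw emitted values -- count them
  return values.length;
}

// Hypothetical stand-in for what the view engine does (not CouchDB code):
// two per-key reductions, then one rereduce combining them.
var first = reduceCount([["a", "doc1"], ["a", "doc2"]], [1, 1], false); // 2
var second = reduceCount([["b", "doc3"]], [1], false);                  // 1
var total = reduceCount(null, [first, second], true);                   // 3
```

With group=true you would see `first` and `second` as separate rows; with group=false only the combined `total` comes back.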
> I am seeing some very fast responses from couchdb, for straight resource
> access -- on order 10ms. But all of my views are relatively slow -- even
> ones that don't have a reduce step.
> So, I'm wondering if I have a bad version, or bad config, or if this is
> expected performance. I'm sure things are running faster in trunk, but I
> want to get a feel for what kind of performance I can expect from a view
> with a fairly complicated reduce step.

When you say fairly complicated, how do you mean? There is a size constraint on reduce output. I.e., reduce functions should return data that grows more slowly than log(# keys reduced), because the data is stored in the internal btree nodes.
> Thanks in advance for any advice,

Also, the mechanics of reduce calculations have been on the back burner for a while in terms of keeping those partial reductions around. I'm not 100% familiar with the entire code path, but I know there's definitely room for improvement; the speed optimizations are being pushed back in favor of pulling in the big features.
If nothing looks obvious, you can try pasting your M/R functions to
see if anyone spots something that looks slow.
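For reference, here is the shape of a minimal map/reduce pair of the kind worth pasting (document fields and the emit stub are made up for illustration; in CouchDB, emit is provided by the view server):

```javascript
// Test stub standing in for CouchDB's built-in emit().
var rows = [];
function emit(key, value) { rows.push([key, value]); }

// Map: one row per matching document (field names are hypothetical).
function map(doc) {
  if (doc.type === "comment") {
    emit(doc.post_id, 1);
  }
}

// Reduce: collapse emitted values to a count.
function reduce(keys, values, rereduce) {
  return values.reduce(function (a, b) { return a + b; }, 0);
}

map({ type: "comment", post_id: "p1" });
map({ type: "post",    post_id: "p1" }); // ignored by the map
map({ type: "comment", post_id: "p2" });

var count = reduce(null, rows.map(function (r) { return r[1]; }), false); // 2
```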