couchdb-user mailing list archives

From Ramkrishna Kulkarni <>
Subject CouchDB filters: {"error":"changes_reader_died"}
Date Fri, 22 Jul 2011 11:39:13 GMT
I have a DB with some 40K documents. I initiate around 40 replications at
the same time, each with a filter. The filter is a simple string comparison
on one of the fields. On average, each replication copies about 1K documents
to its target DB. This process takes several minutes (sometimes 30). I have
observed that around 90% of the time is spent filtering documents at the
source DB (I'm guessing this because the CPU is fully loaded at the source
for most of the time, and once the copying starts it finishes quickly). The
situation is a little better when fewer replications run simultaneously.
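
For context, a filter of the kind described (a simple string comparison on
one field) would look roughly like the sketch below. The field name "type"
and the design doc name are my assumptions, not from the original setup;
CouchDB evaluates such a filter once per document in the changes feed, which
is why heavy filtering load shows up on the source.

```javascript
// Hypothetical filter function (stored in a design document, e.g.
// _design/repl, under "filters"). Field name "type" is an assumption.
function filter(doc, req) {
  // Pass only documents whose "type" field equals the value supplied
  // via query_params in the replication request.
  return doc.type === req.query.type;
}

// A filtered replication would then be started with something like:
// POST /_replicate
// { "source": "sourcedb", "target": "targetdb_orders",
//   "filter": "repl/filter", "query_params": { "type": "order" } }
```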

This DB has 4 views (composite keys). I tried on 1.0.2 and 1.1.0. The server
is a 2-core box running Ubuntu 10.10.
I've seen messages like "changes_timeout" and {"error":"changes_reader_died"}.

Please let me know if there are things to keep in mind while using filters.

