couchdb-user mailing list archives

From Hans-Dieter Böhlau <boehlau...@googlemail.com>
Subject Re: CouchDB filters: {"error":"changes_reader_died"}
Date Fri, 29 Jul 2011 14:17:58 GMT
Hi, we ran into the same problem:

[error] [<0.10873.5>] changes loop timeout, no data received

We want to periodically replicate a subset of documents into a new database.
The number of documents in the source database grows over time, and we have
noticed that the time until the changes request returns a response grows with
it. In some cases it runs into a timeout.
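
For reference, our setup boils down to something like the following sketch;
the server URL, database names, filter, and field are all placeholders:

import requests

COUCH = "http://localhost:5984"  # placeholder server URL

# A design document holding the filter on the source database:
# a simple string comparison on one field.
design_doc = {
    "_id": "_design/repl",
    "filters": {
        "by_type": (
            "function (doc, req) {"
            "  return doc.type === req.query.type;"
            "}"
        )
    }
}
requests.put(COUCH + "/source_db/_design/repl", json=design_doc)

# Trigger one filtered replication of the matching subset.
requests.post(COUCH + "/_replicate", json={
    "source": "source_db",
    "target": "target_db",
    "create_target": True,
    "filter": "repl/by_type",
    "query_params": {"type": "invoice"},
})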

Can anyone please explain why filtered replication seems to be so expensive?
Does the response time depend on the number of documents or on the sequence
numbers? Or does server load have a significant impact?

I think https://issues.apache.org/jira/browse/COUCHDB-1231 points to this
issue.
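
As far as I understand, the replicator essentially issues a filtered _changes
request, and the filter runs over every change since the last checkpoint, not
only over the documents that match. The slow call can be reproduced in
isolation; a rough sketch, reusing the placeholder names from above:

import time
import requests

COUCH = "http://localhost:5984"  # placeholder

# Time the filtered changes request by itself. The duration grows with
# the total number of changes the filter has to inspect, not with the
# number of documents that actually pass it.
start = time.time()
resp = requests.get(COUCH + "/source_db/_changes",
                    params={"filter": "repl/by_type", "type": "invoice"})
print("%d matching changes in %.1fs"
      % (len(resp.json()["results"]), time.time() - start))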

Best regards,
Hans


2011/7/22 Ramkrishna Kulkarni <ramkrishna.kulkarni@gmail.com>

> I have a DB with some 40K documents. I initiate around 40 replications at
> the same time with a filter. The filter is a simple string comparison on
> one
> of the fields. On average, each replication copies 1K documents to the
> target DBs. This process takes several minutes (sometimes 30 minutes). I
> have observed that around 90% of the time is spent filtering the
> documents at the source DB (I'm guessing this because the CPU is fully
> loaded at the source for most of the time, and once the copying starts, it
> finishes pretty quickly). The situation is a little better if the number of
> simultaneous replications is lower.
>
> This DB has 4 views (composite keys). I tried on 1.0.2 and 1.1.0. The
> server is a 2-core box running Ubuntu 10.10.
> I've seen messages like "changes_timeout" and {"error":"changes_reader_died"}.
>
> Please let me know if there are things to keep in mind while using filters.
>
> Thanks,
> Ram
>
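
For illustration, the scenario described above (many filtered replications
started at once) corresponds roughly to the following sketch; all names and
counts are placeholders:

from concurrent.futures import ThreadPoolExecutor

import requests

COUCH = "http://localhost:5984"  # placeholder

def replicate(type_value):
    # Each replication re-reads and re-filters the entire source changes
    # feed through the JavaScript query server, so N concurrent
    # replications repeat the filtering work N times.
    return requests.post(COUCH + "/_replicate", json={
        "source": "source_db",
        "target": "target_" + type_value,
        "create_target": True,
        "filter": "repl/by_type",
        "query_params": {"type": type_value},
    })

# Roughly the quoted setup: ~40 filtered replications at the same time.
with ThreadPoolExecutor(max_workers=40) as pool:
    list(pool.map(replicate, ["type%02d" % i for i in range(40)]))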
