couchdb-dev mailing list archives

From "Adam Kocoloski (JIRA)" <>
Subject [jira] Commented: (COUCHDB-968) Duplicated IDs in _all_docs
Date Sun, 28 Nov 2010 01:14:39 GMT


Adam Kocoloski commented on COUCHDB-968:

Lowering the _revs_limit for db1 lets you reproduce this bug in much less time. I've found
that a _revs_limit of 5 and 15 iterations in the for loop produces a duplicate about half
the time. So:

curl localhost:5984/db1 -X PUT
curl localhost:5984/db2 -X PUT
curl localhost:5984/db1/_revs_limit -X PUT -d '5'
curl localhost:5984/_replicate -d '{"source":"db1", "target":"db2", "continuous":true}' -Hcontent-type:application/json
curl localhost:5984/_replicate -d '{"source":"db2", "target":"db1", "continuous":true}' -Hcontent-type:application/json
curl localhost:5984/db1/foo -X PUT -d '{}'
curl localhost:5984/db1/_design/update -X PUT -d '{"updates": {
       "a": "function(doc, req) { doc[\"random\"] = Math.random(); return [doc, \".\"]; }"
}}'
for i in {1..15}; do curl localhost:5984/db1/_design/update/_update/a/foo -d '{}'; done
curl localhost:5984/db1
curl localhost:5984/db1/_all_docs
curl localhost:5984/db1/_compact -X POST -Hcontent-type:application/json
sleep 2
curl localhost:5984/db1
curl localhost:5984/db1/_all_docs
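Rather than eyeballing the whole _all_docs listing for repeats, a small pipeline can flag them
directly. This is just a sketch (the helper name find_dup_ids is made up here); it assumes
CouchDB's usual one-row-per-line _all_docs output and uses only grep/sort/uniq:

```shell
# find_dup_ids: read an _all_docs response on stdin and print each
# "id":"..." field that appears more than once.
find_dup_ids() {
  grep -o '"id":"[^"]*"' |  # extract every "id":"..." field
    sort |                  # group identical IDs together
    uniq -d                 # keep only the repeated ones
}

# usage against the database above:
#   curl -s localhost:5984/db1/_all_docs | find_dup_ids
```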

> Duplicated IDs in _all_docs
> ---------------------------
>                 Key: COUCHDB-968
>                 URL:
>             Project: CouchDB
>          Issue Type: Bug
>          Components: Database Core
>    Affects Versions: 1.0, 1.0.1, 1.0.2
>         Environment: Ubuntu 10.04.
>            Reporter: Sebastian Cohnen
>            Priority: Blocker
> We have a database which is causing serious trouble with compaction and replication
> (huge memory and CPU usage, often causing CouchDB to crash because all system memory is
> exhausted). Yesterday we discovered that db/_all_docs is reporting duplicated IDs (see [1]).
> Until a few minutes ago we thought there were only a few duplicates, but today I took a
> closer look and found 10 IDs that sum up to a total of 922 duplicates. Some of them have
> only 1 duplicate, others have hundreds.
> Some facts about the database in question:
> * ~13k documents, with 3-5k revs each
> * all duplicated documents are in conflict (with 1 up to 14 conflicts)
> * compaction is run on a daily basis
> * several thousand updates per hour
> * multi-master setup with pull replication from each other
> * delayed_commits=false on all nodes
> * used couchdb versions 1.0.0 and 1.0.x (*)
> Unfortunately the database's contents are confidential and I'm not allowed to publish them.
> [1]: Part of http://localhost:5984/DBNAME/_all_docs
> ...
> {"id":"9997","key":"9997","value":{"rev":"6096-603c68c1fa90ac3f56cf53771337ac9f"}},
> {"id":"9999","key":"9999","value":{"rev":"6097-3c873ccf6875ff3c4e2c6fa264c6a180"}},
> {"id":"9999","key":"9999","value":{"rev":"6097-3c873ccf6875ff3c4e2c6fa264c6a180"}},
> ...
> [*]
> There were two (old) servers (1.0.0) in production (already having the replication and
compaction issues). Then two servers (1.0.x) were added and replication was set up to bring
them in sync with the old production servers since the two new servers were meant to replace
the old ones (to update node.js application code among other things).

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
