couchdb-user mailing list archives

From Germain Maurice <germain.maur...@linkfluence.net>
Subject Issues with replication
Date Tue, 23 Mar 2010 09:14:37 GMT
Hi everybody,

We have a database with more than 8 million documents, and its file is more
than 450 GB. We wonder whether keeping all of our docs in a single file of
this size is a good choice.

We run into issues with replication every time. We tried both one-shot and
continuous replication, and each attempt ended with a req_timedout error
followed by a CouchDB crash. We opened bug reports, which you can see here:
https://issues.apache.org/jira/browse/COUCHDB-690 and
https://issues.apache.org/jira/browse/COUCHDB-701
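
To be concrete, here is a sketch of the kind of requests we issue, using
CouchDB's standard /_replicate endpoint; the server URL and database names
below are placeholders, not our real hosts:

```shell
# Sketch of the replication requests, via CouchDB's /_replicate API.
# COUCH, "bigdb" and "other-host" are placeholders for our real setup.
COUCH=http://127.0.0.1:5984
ONE_SHOT='{"source": "bigdb", "target": "http://other-host:5984/bigdb"}'
CONTINUOUS='{"source": "bigdb", "target": "http://other-host:5984/bigdb", "continuous": true}'

# Actual invocations (require running servers on both ends):
# curl -X POST "$COUCH/_replicate" -H 'Content-Type: application/json' -d "$ONE_SHOT"
# curl -X POST "$COUCH/_replicate" -H 'Content-Type: application/json' -d "$CONTINUOUS"
echo "POST $COUCH/_replicate  $ONE_SHOT"
```

Both forms time out for us on this database.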

Another issue is compaction, which does not work reliably on one of our
hosts. We launch compaction on a database with more than 100,000 documents
(10 GB); it starts up, runs for a while, and then stops without any warning
or alert. On another host, the same database compacted fine.
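
Compaction is triggered the same way on both hosts; a sketch with placeholder
names, using CouchDB's /db/_compact endpoint (the _active_tasks check is how
we watch progress):

```shell
# Sketch of how we trigger compaction (CouchDB's _compact endpoint).
# Server URL and database name are placeholders.
COUCH=http://127.0.0.1:5984
DB=bigdb

# Actual invocation (requires a running server):
# curl -X POST "$COUCH/$DB/_compact" -H 'Content-Type: application/json'
# Progress can then be watched via:
# curl "$COUCH/_active_tasks"
echo "POST $COUCH/$DB/_compact"
```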

We really wonder how CouchDB is used in production environments. How many
documents do you store? Do you use big or small databases (in terms of file
size)? Do you rely on the built-in replication or not? Is compaction
efficient for you?

We are doing some development work to avoid the issues we encountered, and
we would appreciate feedback about best practices in a production
environment.

Best regards,
Germain
