couchdb-user mailing list archives

From James Marca <jma...@translab.its.uci.edu>
Subject Re: An old database causes couchdb 1.3.0 to crash getting single documents
Date Wed, 01 May 2013 16:13:23 GMT
On Wed, May 01, 2013 at 03:13:50PM +0400, Alexander Shorin wrote:
> On Mon, Apr 29, 2013 at 8:16 PM, James Marca
> <jmarca@translab.its.uci.edu> wrote:
> >
> > I was able to get couchdb 1.2.x running on this machine, but 1.1.x
> > dies on start.
> >
> > 1.2.x does not compact this db.  I got some quick
> > Erlang errors, then RAM usage slowly rose to 95%, so I killed it.
> >
> > Other dbs of the same "generation" work fine...I can access them and
> > build views and so on. The only difference in the dbs is the data.
> > The problem one is the first one I started using, before I decided to
> > manually shard the data.  All the dbs have identical design docs and
> > all that.  My wild guess is that something I did early on, in the
> > first batches of model runs, polluted the db.
> >
> > After a week of fighting this (my awareness of the scope of the
> > problem built slowly!), I'm thinking it might be easier to just
> > re-run my models, and re-generate the data...at least then the problem
> > is just CPU time.
> >
> > Thanks for the advice.
> >
> > James
> 
> 
> Hi James
> 
> I can share your pain: in my practice I've had a lot of broken
> databases that act in a similar way. Most of them were under a high
> concurrent write load: the "dead man's" database receives data via
> replication from 2-3 sources, plus massive bulk updates, plus triggered
> _update handlers, all at the same time. And the server always ran
> with *delayed_commits: true*. They fail with various symptoms: from
> an explicit badrecord_db error in the logs, or random weird crash
> reports in the middle of data processing, to forcing CouchDB to
> consume all system memory without stopping. However, I never hit such
> problems without delayed_commits. Do you also have this option enabled?

I have delayed_commits set to false on this machine, so that is not
the problem, nor is high activity.  In this case, I am able to crash
the db with nothing going on but a single GET for a single document.
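For reference, both the setting and the failing request are just plain HTTP against CouchDB's 1.x API; something like the following is all it takes (the database and document names here are placeholders, not the real ones):

```shell
# Read the current delayed_commits setting via the 1.x config API
# (it lives in the [couchdb] section of the ini files).
curl http://127.0.0.1:5984/_config/couchdb/delayed_commits

# A single-document GET -- on the broken db, this request alone is
# enough to trigger the crash. "mydb" and "some-doc-id" stand in for
# the actual database and document.
curl http://127.0.0.1:5984/mydb/some-doc-id
```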

I will get this issue isolated because I am curious about why it might
have happened, but I have to wait until the weekend when it is okay
for CouchDB to be unavailable.

Regards,
James


> 
> > After a week of fighting this (my awareness of the scope of the
> > problem built slowly!), I'm thinking it might be easier to just
> > re-run my models, and re-generate the data...at least then the problem
> > is just CPU time.
> 
> Yes, this is the easiest way to work around it. It's also better to
> keep a backup copy somewhere for emergency cases.
> 
> --
> ,,,^..^,,,
