couchdb-user mailing list archives

From Dave Cottlehuber <...@jsonified.com>
Subject Re: An old database causes couchdb 1.3.0 to crash getting single documents
Date Mon, 29 Apr 2013 21:38:52 GMT
On 29 April 2013 18:16, James Marca <jmarca@translab.its.uci.edu> wrote:
> On Mon, Apr 29, 2013 at 10:46:59AM +0200, Dave Cottlehuber wrote:
>> On 29 April 2013 09:50, Robert Newson <rnewson@apache.org> wrote:
>> > James,
>> >
>> > Did you compact this database with any version of couchdb between
>> > 0.9.0 and 1.3.0? I'm fairly sure we dropped the upgrade code for 0.9 a
>> > while back.
>> >
>> > B.
>>
>> Yup, 1.1.2 is the last one with 0.9 on-disk format compatibility. At a
>> minimum you'll need to go 0.9 -> 1.1.2, compact, and then the step to
>> 1.3.0 should be OK.
>>
>> Also exactly what version you're migrating from will make a difference:
>>
>>     http://wiki.apache.org/couchdb/Breaking_changes
>>
>> You may need to update your view and query code as well, and any
>> non-valid utf-8 docs that are present may be rejected. I am not sure
>> how an upgrade handles that.
>>
>> Benoit - you did a big upgrade a while back, do you remember what your
>> version stepping ended up requiring? IIRC you needed to go to 1.0.4
>> first, but I don't recall why.
>
>
> I was able to get couchdb 1.2.x running on this machine, but 1.1.x
> dies on start.

Sorry to hear that :-(

Can you share any error messages, or better yet the output from the
erlang console & debug log?
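
If it helps, on a 1.x source install you can usually turn the log level
up via local.ini and then tail the log. Paths vary by package, so treat
these as defaults to adjust:

    ; /usr/local/etc/couchdb/local.ini
    [log]
    level = debug

    tail -f /usr/local/var/log/couchdb/couch.log

Running couchdb -i also drops you into an interactive erlang shell,
which can help catch the crash as it happens.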

I wonder if your erlang version is newer than the ones available when
that couchdb release came out.
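
You can check which OTP release you're on with a one-liner:

    erl -noshell -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().'

If that prints something much newer than what the release was tested
against, that could explain the startup failure.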

> 1.2.x does not compact this db.  I got some quick
> erlang errors, then RAM usage slowly rose to 95% so I killed it.
>
> Other dbs of the same "generation" work fine...I can access them and
> build views and so on. The only difference in the dbs is the data.
> The problem one is the first one I started using, before I decided to
> manually shard the data.  All the dbs have identical design docs and
> all that.  My wild-ass guess is that something I did in the early
> batches of model runs polluted the db.
>
> After a week of fighting this (my awareness of the scope of the
> problem built slowly!), I'm thinking it might be easier to just
> re-run my models, and re-generate the data...at least then the problem
> is just CPU time.

I understand if the data's too big to send, or sensitive, but if
possible it would be useful to get a copy of the db so we can figure
out what's wrong. Let me know off-list if you can.
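
For reference, in 1.x each database is a single dbname.couch file under
database_dir (/usr/local/var/lib/couchdb on a default source install;
check [couchdb] database_dir in your local.ini). If you stop couchdb
first, handing the db over is just a file copy (yourdb is a
placeholder):

    cp /usr/local/var/lib/couchdb/yourdb.couch /tmp/yourdb.couch

And if you take another run at the 1.1.2 compaction step, the trigger
is a plain POST; you can then poll the db info and watch
compact_running until it goes back to false:

    curl -X POST -H "Content-Type: application/json" \
        http://127.0.0.1:5984/yourdb/_compact
    curl http://127.0.0.1:5984/yourdb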

A+
Dave
