couchdb-user mailing list archives

From James Marca <jma...@translab.its.uci.edu>
Subject Re: An old database causes couchdb 1.3.0 to crash getting single documents
Date Mon, 29 Apr 2013 08:13:16 GMT
On Mon, Apr 29, 2013 at 08:50:15AM +0100, Robert Newson wrote:
> James,
> 
> Did you compact this database with any version of couchdb between
> 0.9.0 and 1.3.0? I'm fairly sure we dropped the upgrade code for 0.9 a
> while back.

I honestly cannot recall when I created or compacted the file.

stat gives some clue...my recollection is that I compacted all of
these dbs at about the same time, because the compacted files were
much smaller than the uncompacted ones.


  File: ‘d00.couch’
  Size: 109907280004    Blocks: 214662712  IO Block: 4096   regular file
Device: fd00h/64768d    Inode: 415806      Links: 1
Access: (0644/-rw-r--r--)  Uid: (  103/ couchdb)   Gid: ( 1001/ couchdb)
Access: 2012-02-09 14:22:56.960536759 -0800
Modify: 2012-06-11 09:54:12.032220132 -0700
Change: 2012-12-12 20:49:08.016217579 -0800
 Birth: -

Also, I kept this machine pretty up to date with couchdb (as an
example, I just switched from 1.2.x to 1.3.x last week, not long
after 1.3 was released).

Other dbs in the directory have about the same dates in stat. So if
those dates are about right, I would have compacted around February
2012.

I think that puts them at about the 1.1.x file structure.  Is there a
magic header or something I can check to determine the version of the
file format?
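
Actually, now that I think about it: if the server can still open the
database, the GET /{db} info response includes a disk_format_version
field, which should answer this without poking at the .couch file
directly.  A quick sketch (the host and db name here are just
placeholders):

    import json
    import urllib.request

    # Placeholder host and database name.
    url = "http://127.0.0.1:5984/d00"

    # GET /{db} returns the database info document, which includes
    # a disk_format_version field alongside doc_count, disk_size, etc.
    with urllib.request.urlopen(url) as resp:
        info = json.load(resp)

    print(info["disk_format_version"])

If memory serves, 1.2.x-and-later files report 6 and the 1.0.x/1.1.x
era format reports 5, but someone should correct me if that's wrong.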


> 
> B.
> 
> On 29 April 2013 08:47, Stanley Iriele <siriele@breaktimestudios.com> wrote:
> > I find this a little strange... When you say "carried over" do you mean
> > copied the db file? Or replicated the databases? Also how big is your
> > biggest file and what is your hardware spec

What I mean is that unless the change log said I had to dump and
restore the db, or replicate from one version to the new version, I
would just use the newer version of couch in place.  These DBs were
so big that once I had the views built, I pretty much left them
alone...they are an intermediate step in a manual map-reduce-collate
operation, in which several large DBs holding analysis output were
compressed into one much smaller db.

The machine has 8G of RAM.  This is small by today's standards, but
the thing is, it should not break a sweat on individual docs...100G
split over 13 million docs works out to only about 8 KB per doc on
average.

And if I could check how big the biggest doc in the db is, I'd do
that.  Is there a way to check? (None that I know of in the standard
CouchDB API.)
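
The closest thing I can imagine is walking _all_docs with
include_docs=true and measuring each doc as serialized JSON.  A rough
sketch (placeholder host and db name again; it has to touch every
doc, so it would take ages on 13 million docs and may well trip the
same crash):

    import json
    import urllib.parse
    import urllib.request

    # Placeholder host and database name.
    base = "http://127.0.0.1:5984/d00"
    batch = 1000

    biggest_size, biggest_id = 0, None
    startkey = None

    while True:
        url = base + "/_all_docs?include_docs=true&limit=%d" % batch
        if startkey is not None:
            # Resume one past the last key of the previous page.
            url += "&startkey=%s&skip=1" % urllib.parse.quote(json.dumps(startkey))
        with urllib.request.urlopen(url) as resp:
            rows = json.load(resp)["rows"]
        if not rows:
            break
        for row in rows:
            # Serialized JSON length is only a rough proxy for on-disk
            # size, and it ignores attachments.
            size = len(json.dumps(row["doc"]))
            if size > biggest_size:
                biggest_size, biggest_id = size, row["id"]
        startkey = rows[-1]["key"]

    print("largest doc: %s (~%d bytes as JSON)" % (biggest_id, biggest_size))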

Regards,
James

> > On Apr 28, 2013 11:33 PM, "James Marca" <jmarca@translab.its.uci.edu> wrote:
> >
> >> One additional note, I just tried compacting and it did not work...the
> >> RAM hit about 95%, then CouchDB crashed.
> >>
> >> Regards,
> >>
> >> James
> >>

-- 
James E. Marca, PhD
Researcher
Institute of Transportation Studies
AIRB Suite 4000
University of California
Irvine, CA 92697-3600
jmarca@translab.its.uci.edu
(949) 824-6287
