couchdb-dev mailing list archives

From: Adam Kocoloski <kocol...@apache.org>
Subject: Re: _compact on 0.10.0 <> availability
Date: Tue, 13 Apr 2010 17:28:57 GMT
On Apr 13, 2010, at 12:39 PM, J Chris Anderson wrote:

> 
> On Apr 13, 2010, at 9:31 AM, till wrote:
> 
>> Hey devs,
>> 
>> I'm trying to compact a production database here (in the hope of
>> recovering some space), and made the following observations:
>> 
>> * the set is 212+ million docs
>> * currently 0.8 TB in size
>> * the instance (XL) has 2 cores, one is idle, the other maybe utilized at 10%
>> * memory - 2 of 15 GB taken, no spikes
>> * io - well it's EBS :(
>> 
>> When I started _compact, read operations slowed down (I'll give you 20
>> Mississippis for something that otherwise loads instantly).
>> Everything "eventually" worked, but it slowed down tremendously.
>> 
>> I restarted the CouchDB process and everything is back to "snap".
>> 
>> Does anyone have any insight on why that is the case?
> 
> I'm guessing this is an EBS / EC2 issue. You are probably saturating the IO pipeline.
> It's too bad there's not an easy way to 'nice' the compaction IO.
> 
> If you got unlucky and are on a particularly bad EBS / EC2 instance, you might do best
> to start up a new Couch in the same availability zone and replicate across to it. This
> will accomplish more-or-less the same effect as compaction.
> 
>> 
>> Till
> 
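
An aside on Chris's replicate-to-a-fresh-node suggestion: kicking that off is just a PUT to
create the target db and a POST to _replicate.  A minimal sketch -- the hostnames and db name
below are made up, so substitute your own:

    import json
    import urllib.error
    import urllib.request

    SOURCE = "http://old-couch:5984/bigdb"   # hypothetical source
    NEW_NODE = "http://new-couch:5984"       # hypothetical fresh instance in the same AZ

    # Create the target database on the new node (a 412 just means it already exists).
    try:
        urllib.request.urlopen(urllib.request.Request(NEW_NODE + "/bigdb", method="PUT"))
    except urllib.error.HTTPError as e:
        if e.code != 412:
            raise

    # Pull-replicate on the new node; every doc gets streamed across and written
    # into a freshly packed file, which is roughly what compaction buys you.
    body = json.dumps({"source": SOURCE, "target": "bigdb"}).encode()
    req = urllib.request.Request(NEW_NODE + "/_replicate", data=body,
                                 headers={"Content-Type": "application/json"})
    print(urllib.request.urlopen(req).read().decode())

The POST won't return until the replication finishes, so run it somewhere it can sit for a
while (screen, nohup, etc.).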

I'm surprised it's _that_ bad.  The compactor only submits one I/O to EBS at a time, so I
wouldn't expect other reads to be starved too much.  On the other hand, I'll bet compacting
a DB that large takes at least a month, especially if you used random IDs (the compactor's
batched inserts land all over the new by-id tree, so it rewrites far more inner nodes than
it would with sequential IDs).
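
If you want to sanity-check that guess: the compactor writes its output next to the live file
(as <dbname>.couch.compact in the database_dir), so watching that file grow gives you a copy
rate to extrapolate from.  Rough sketch -- the path below is a guess, point it at yours:

    import os
    import time

    # Hypothetical location; adjust for your database_dir setting.
    COMPACT_FILE = "/var/lib/couchdb/bigdb.couch.compact"

    start_time = time.time()
    start_size = os.path.getsize(COMPACT_FILE)

    while True:
        time.sleep(60)
        size = os.path.getsize(COMPACT_FILE)
        rate = (size - start_size) / (time.time() - start_time)  # bytes per second
        print("compact file at %.1f GB, averaging %.1f MB/s" % (size / 1e9, rate / 1e6))

You don't know the final compacted size up front, but that rate against the 0.8 TB original
at least bounds how long you're in for.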

Then again, when you compact you're messing with the page cache something fierce.  At 212M
docs you need every one of those 15 GB of RAM to keep the btree nodes cached.  The compactor
a) reads nodes that your client app may not have been touching and b) writes to a new file,
which the kernel starts to cache too.  So it's a fairly brutal process from the perspective
of the page cache.
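
If you want to see that happen, sampling Cached/Buffers out of /proc/meminfo while the
compactor runs makes the churn pretty visible (Linux-only; the field names are the kernel's
own):

    import time

    def meminfo(fields=("Cached", "Buffers", "MemFree")):
        # /proc/meminfo reports these values in kB.
        out = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                if key in fields:
                    out[key] = int(rest.split()[0])
        return out

    while True:
        snap = meminfo()
        print("  ".join("%s %d MB" % (k, v // 1024) for k, v in sorted(snap.items())))
        time.sleep(30)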

Does anyone have a sense of how deep a btree with 212M entries will be?  That is, how many
pread calls are required to pull up a doc?
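
My own back-of-the-envelope, assuming an inner-node fanout of F (the real number depends on
couch_btree's chunking, which I don't have in front of me): depth is roughly ceil(log_F(N)).

    import math

    N = 212_000_000
    for fanout in (50, 100, 500):   # assumed fanouts, not measured
        depth = math.ceil(math.log(N) / math.log(fanout))
        print("fanout %3d -> ~%d levels, i.e. ~%d preads to reach a leaf"
              % (fanout, depth, depth))

So something like 4-5 preads to walk the by-id tree, plus one more to read the doc body once
you have its offset -- though the top level or two should stay in the page cache.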

Till, do you have iostat numbers from the compaction run?

Best, Adam


