couchdb-user mailing list archives

From Tom Nichols <>
Subject Re: Insert performance
Date Tue, 05 May 2009 17:43:48 GMT
So I did a rough calculation and it looks like I'm getting less than
1 MB/s of write throughput in CouchDB --

3072 MB total / 6900 sec = 0.445 MB/s
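(The rough calculation above, as a one-liner, in case anyone wants to plug in their own numbers:)

```ruby
# Rough write-throughput estimate from the numbers above.
total_mb  = 3072.0   # data written, in MB
elapsed_s = 6900.0   # wall-clock time, in seconds

puts format("%.3f MB/s", total_mb / elapsed_s)  # => 0.445 MB/s
```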

So if the disk throughput is ~20 to 30 MB/s, then the bottleneck is
somewhere in the database.  It's obviously not going to be anywhere
close to raw disk I/O speeds, but this still seems incredibly slow.
Granted, I'm using a small instance...  I'll try a c1.medium and see
if the results are drastically different.

On Mon, May 4, 2009 at 5:29 PM, Jason Smith <> wrote:
> Tom Nichols wrote:
>> Hi, I have some questions about insert performance.
>> I have a single CouchDB 0.9.0 node running on a small EC2 instance.  I
>> attached a huge EBS volume to it and mounted it where CouchDB's data
>> files are stored.  I fired up some Ruby scripts running inserts, and
>> after a weekend I only have about 30 GB / 12M rows of data... which
>> seems small.  'top' tells me that my CPU is only about 30% utilized.
>> Any idea what I might be doing wrong?  I pretty much just followed
>> these instructions:
> Hi, Tom.  I believe I read somewhere before that the smallest EC2 instances
> have a slower and/or higher-latency connection to EBS, so you might want to
> consider a large instance, or maybe even a high-memory small instance and
> see whether you get better "hardware" performance.
> Although strangely, when I googled it, the first article I found says their
> benchmarks found no difference between EBS and even the local ephemeral
> filesystem.
> On the other hand, here is a forum posting and a random benchmark indicating
> that more expensive instances get better throughput:
> --
> Jason Smith
> Proven Corporation
> Bangkok, Thailand
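
[One client-side lever the thread doesn't mention is CouchDB's `_bulk_docs` endpoint: posting documents in batches, rather than one request per document, usually raises insert throughput considerably. A minimal Ruby sketch follows -- the `_bulk_docs` API is real (present in 0.9), but the host, port, database name, batch size, and the helper names `bulk_payload`/`bulk_insert` are placeholders, not anything from this thread:]

```ruby
require "json"
require "net/http"

# Build the JSON body for CouchDB's _bulk_docs API
# (hypothetical helper name; the endpoint itself exists as of 0.9).
def bulk_payload(docs)
  JSON.generate("docs" => docs)
end

# Post one batch of documents in a single HTTP request instead of one
# request per document (host/port/db below are placeholder values).
def bulk_insert(host, port, db, docs)
  uri = URI("http://#{host}:#{port}/#{db}/_bulk_docs")
  req = Net::HTTP::Post.new(uri.request_uri,
                            "Content-Type" => "application/json")
  req.body = bulk_payload(docs)
  Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
end

# Example usage: insert rows in batches of 1,000 rather than singly.
# rows.each_slice(1000) { |batch| bulk_insert("localhost", 5984, "testdb", batch) }
```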
