couchdb-dev mailing list archives

From "Paul Joseph Davis (JIRA)" <>
Subject [jira] Updated: (COUCHDB-768) Constant Bulk Saving results in Eventual Timeouts
Date Sat, 09 Oct 2010 19:46:10 GMT


Paul Joseph Davis updated COUCHDB-768:

    Skill Level: Regular Contributors Level (Easy to Medium)

> Constant Bulk Saving results in Eventual Timeouts
> -------------------------------------------------
>                 Key: COUCHDB-768
>                 URL:
>             Project: CouchDB
>          Issue Type: Bug
>          Components: HTTP Interface
>    Affects Versions: 0.10.2, 0.11
>         Environment: Software: Python 2.6 (couchdbkit or httplib) or curl is used to submit.
 The 0.11 install is from Debian unstable; the 0.10.2 install is from Ubuntu.
> CouchDB 0.11 is running on a Sun Fire X4600 M2, with NFS mounted storage to a Linux software
RAID10 (x4 WD20EARS SATA drives).  However, the same issue arises using the server's 3Gb/s (10k
RPM) SAS drives.  The NFS share is mounted over dual intel gigabit NICs in a round-robin configuration.
>            Reporter: A.W. Stanley
>            Priority: Minor
> Situation:
> Saving documents in bulk (lots of 1,000, 4,000, and 10,000 have been tested) to a single
database results in degraded performance, and then a string of timeouts.  The timeouts are
not logged by CouchDB, so the HTTP interface becomes unusable for a period.  It then returns
and rapidly processes the next batch of jobs (read: the timeout is temporary).
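For context, the bulk-save workload described above can be sketched roughly as follows. The batch sizes come from the report; the database name and server URL are placeholders, and the actual POST is only indicated in a comment rather than performed:

```python
import json

def bulk_docs_payload(docs):
    # Request body for POST /{db}/_bulk_docs (CouchDB HTTP API).
    return json.dumps({"docs": docs})

def batches(docs, size=1000):
    # Split a large document list into fixed-size lots
    # (lots of 1,000, 4,000, and 10,000 were tested in the report).
    for i in range(0, len(docs), size):
        yield docs[i:i + size]

# Placeholder documents; the real workload's content is not described.
docs = [{"_id": "doc-%d" % n, "value": n} for n in range(2500)]
lots = list(batches(docs, 1000))

# Each lot would then be POSTed, one after another, to
#   http://localhost:5984/<db>/_bulk_docs
# with Content-Type: application/json -- repeated constantly until
# the timeouts begin.
```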
> Replication:
> - I have had trouble reproducing the behaviour with bulk document saves, though I have
been trying to; it appears to happen only after an extended period;
> - I can replicate the behaviour by submitting a lot of individual files (single document
saves) in rapid succession.
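The more reliable reproduction above amounts to issuing one single-document PUT per file in rapid succession. A minimal sketch of the request targets, with placeholder server, database, and document ids:

```python
def doc_url(base, db, doc_id):
    # Target for a single-document save: PUT /{db}/{doc_id}
    return "%s/%s/%s" % (base, db, doc_id)

# Placeholder values; the report does not give the actual host or db name.
urls = [doc_url("http://localhost:5984", "testdb", "doc-%d" % n)
        for n in range(3)]

# Each URL would receive a PUT with a small JSON body, back to back
# with no delay -- the pattern that reproduces the timeouts.
```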
> Diagnostics:
> - I tried both true and false for delayed_commits, just to rule that out;
> - Testing outside of CouchDB (Postgres, file transfers, streaming, and other attempts to
hammer the I/O) yielded no issues with the systems involved.
> Functional Workarounds:
> - I have sharded the database in question.
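The sharding workaround is not detailed in the report. One plausible sketch is routing each document to one of several databases by a hash of its id, so no single database absorbs the full write load; the shard count and database names below are assumptions, not from the report:

```python
import hashlib

def shard_for(doc_id, n_shards=4):
    # Route each document to one of n databases by hashing its id.
    # Both the shard count and the "mydb_" naming are placeholders.
    h = int(hashlib.md5(doc_id.encode("utf-8")).hexdigest(), 16)
    return "mydb_%d" % (h % n_shards)

# Bulk saves would then go to /<shard_for(doc_id)>/_bulk_docs,
# grouped per shard, instead of a single database.
```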

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
