couchdb-user mailing list archives

From Filipe David Manana <>
Subject Re: all_dbs_active error, not sure how to "fix"
Date Fri, 22 Apr 2011 14:36:17 GMT
On Fri, Apr 22, 2011 at 3:30 PM, Jonathan Johnson <> wrote:
> By doing that, it will increase the number of possible open files
> (although I admit I'm significantly below my current limit). My
> point is that I'm never actively connecting to 130 databases, so why
> is couch keeping them open? Shouldn't it recycle databases that
> haven't been connected to recently?

Yes, it should. I'm not sure; perhaps your application or library is
doing database accesses behind the scenes.
Also, if you change your machine's clock while Couch is running, I
think it might prevent it from properly recycling databases.
Finally, if you're using Erlang OTP R14B02 on a 64-bit machine,
there's a bug in that particular release regarding insertion into
ordered ets tables, which might cause Couch not to do the recycling as
it should.
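For anyone wondering what that recycling is supposed to look like: Couch keeps at most max_dbs_open databases open at once, evicts idle ones least-recently-used first, and raises all_dbs_active only when every open database is still held by someone. Here is a toy Ruby sketch of that behavior (the names DbPool, checkout, release are illustrative, not Couch's Erlang internals):

```ruby
# Toy model of CouchDB's open-database pool: at most max_open handles,
# idle (refcount-zero) databases are recycled LRU-first, and the
# all_dbs_active error fires only when every handle is still in use.
# All names here are illustrative; this is not Couch's actual code.
class DbPool
  Entry = Struct.new(:refs)

  def initialize(max_open)
    @max_open = max_open
    @open = {} # name => Entry; Ruby hashes preserve insertion order (our LRU order)
  end

  def checkout(name)
    entry = @open.delete(name)         # re-inserting below marks it most recently used
    unless entry
      evict_idle! if @open.size >= @max_open
      raise "all_dbs_active" if @open.size >= @max_open # nothing was evictable
      entry = Entry.new(0)
    end
    entry.refs += 1
    @open[name] = entry
    entry
  end

  def release(name)
    @open[name].refs -= 1 if @open[name]
  end

  private

  # Drop the least-recently-used database that no client still holds.
  def evict_idle!
    idle = @open.find { |_, e| e.refs.zero? }
    @open.delete(idle.first) if idle
  end
end
```

If clients never release their handles (stranded connections, for instance), nothing is ever idle, and checkout fails even when only one caller is really active, which matches the symptom described below.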

> -Jon
> On Fri, Apr 22, 2011 at 9:05 AM, Filipe David Manana
> <> wrote:
>> Look at the "max_dbs_open" configuration parameter in the .ini files
>> and increase it to a higher value.
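For reference, that knob lives in the [couchdb] section of the ini files; a minimal local.ini fragment (500 here is just an example value; the 1.0.x default is 100):

```ini
; local.ini -- overrides default.ini; restart or reload config after editing
[couchdb]
max_dbs_open = 500
```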
>> On Fri, Apr 22, 2011 at 3:01 PM, Jonathan Johnson <> wrote:
>>> I'm running couchdb 1.0.2 on CentOS 5.5. The databases are on an ext4
>>> formatted drive.
>>> I have 209 databases, but they're never truly active at the same time.
>>> Our stack is written in ruby. The web layer switches between active
>>> databases depending on the url. However, we have 16 web processes, so
>>> in theory the maximum number of truly active databases is 16.
>>> We also have a daemon process that loops through a chunk of the
>>> databases periodically. However, it's one thread, and as such also
>>> only truly works with one database at a time.
>>> My understanding is that CouchRest doesn't keep HTTP connections alive
>>> across multiple requests, but I don't know that for sure. I have even
>>> gone so far as to put manual garbage-collection calls in my daemon
>>> to ensure that any stranded connection objects will be collected.
>>> With all of that, however, I eventually get into a state where I hit
>>> the all_dbs_active error. It doesn't happen often; the last time was
>>> nearly three weeks ago. However, once it gets into that state,
>>> restarting all of my clients doesn't release the databases. The only
>>> way to recover is to restart couch.
>>> open_os_files was at 2308 before I restarted it this morning, which is
>>> below the configured limit (4096).
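The open_os_files counter quoted here comes from GET /_stats. A short Ruby sketch that pulls the relevant counters out of such a response; the JSON below is an abridged, hypothetical sample in the 1.0.x shape, not real output from this server:

```ruby
require 'json'

# Abridged, hypothetical body of GET /_stats from a CouchDB 1.0.x node.
# Real responses carry more sections and more fields per counter.
stats_json = <<~JSON
  {
    "couchdb": {
      "open_os_files":  {"current": 2308, "max": 2310},
      "open_databases": {"current": 97,   "max": 100}
    }
  }
JSON

couch = JSON.parse(stats_json)["couchdb"]
open_files = couch["open_os_files"]["current"]
open_dbs   = couch["open_databases"]["current"]

puts "open OS files: #{open_files}, open databases: #{open_dbs}"
```

Watching open_databases creep toward max_dbs_open without ever falling back is a quick way to tell whether handles are being recycled or leaked.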
>>> I guess I feel like this is an issue inside couch, because even if I
>>> quit all of my active server processes that connect to couch, couch
>>> never frees up the open databases. I can hit it one-off from my
>>> browser and still get the error, even though I'm the only active
>>> connection.
>>> Has anyone else seen this? Any ideas of what I can try to prevent this
>>> from happening?
>>> Thanks!
>>> -Jon
>> --
>> Filipe David Manana,
>> "Reasonable men adapt themselves to the world.
>>  Unreasonable men adapt the world to themselves.
>>  That's why all progress depends on unreasonable men."

Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."
