couchdb-user mailing list archives

From Brian Candler <>
Subject Re: couchdb server connection refused error
Date Mon, 10 Aug 2009 19:49:14 GMT
On Mon, Aug 10, 2009 at 11:17:01AM -0700, Tommy Chheng wrote:
>>    Was the open files bottleneck hit on the client process, or the couchdb
>>    erlang process?
>>    I imagine it was the former.
>    They are both on the same machine, so it wouldn't matter because it is
>    at the OS level?
>     If too many connections are being opened, the OS will refuse to
>    open more, no matter whether it's the client or the couchdb process.

It matters because it is a per-process limit, not a system-wide limit.
(Well, there may be a system-wide limit on file descriptors too, but that
would be set elsewhere, as a sysctl tunable, I think.)
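To see the distinction from a shell, here's a quick sketch (the /proc path assumes a reasonably modern Linux; other Unixes keep the system-wide ceiling elsewhere):

```shell
# Per-process limits, as seen by the current shell and inherited by
# anything it starts:
ulimit -Sn    # soft limit on open file descriptors
ulimit -Hn    # hard limit (the soft limit can be raised up to this)

# System-wide ceiling, a separate sysctl tunable on Linux:
cat /proc/sys/fs/file-max
```

Raising the per-process limit does nothing if the system-wide ceiling is the one being hit, and vice versa.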

>      I raised the limits in the /etc/security/limits.conf file by adding
>      these lines:
>      * soft nofile 32768
>      * hard nofile 32768
>    For the uid which was running the client application?
>    I set it for all users in the limits file.

That's a bit of a sledgehammer approach, and will leave you vulnerable to
denial-of-service attacks from other users.

The limits are set the way they are to give you some protection from this. If
the client program runs as uid foo, then you should give just uid foo the
raised limit.
I don't know what the client program actually is, though. Is it a web
browser? In that case you would have to give every user who runs a web
browser on that machine this privilege. However it seems remarkable that a
web browser would open 1000+ concurrent file handles, since code running in
the browser doesn't have direct filesystem access anyway (unless you're
running Java applets?)

Or is it some middleware application, which receives requests from the
browser clients and forwards them on to the backends?

You might want to see if you can improve the application by closing files
when you're no longer using them. If your app really needs 1,000 files open
concurrently then so be it, but if it's a file descriptor leak then you'll
want to plug it; otherwise you'll just die a bit later, when you reach 32K
open files.
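The thread doesn't say what language the client is written in, so purely as an illustration, here is a Python sketch of the leak-vs-close distinction (the fd-counting helper assumes Linux's /proc filesystem):

```python
import os

def count_open_fds():
    """Count file descriptors currently open by this process.

    Assumes Linux, where /proc/self/fd lists them.
    """
    return len(os.listdir("/proc/self/fd"))

def leaky_read(path):
    # Leaky pattern: the descriptor stays open until the object is
    # garbage-collected, which in some runtimes may be much later.
    f = open(path)
    return f.read()

def tidy_read(path):
    # Fixed pattern: the context manager closes the file as soon as
    # the block exits, so the descriptor is released immediately.
    with open(path) as f:
        return f.read()
```

If every request handler uses the leaky pattern, the open-fd count climbs with request volume until the limit is hit; with the tidy pattern it stays flat.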


