incubator-couchdb-user mailing list archives

From Paul Davis <paul.joseph.da...@gmail.com>
Subject Re: couchdb server connection refused error
Date Mon, 10 Aug 2009 20:00:23 GMT
Idle thought, but I'm suddenly fairly certain that there was a fix in
0.9.1 for leaking file handles. I also realized that it must be the
server that has too many open files, as the call to curl certainly
isn't running out of descriptors, and the server running out could
definitely cause a connection refused error. Narf. Not sure why it
took so long to put that one together.
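
For what it's worth, a quick way to check whether it really is the CouchDB
side that's burning descriptors, assuming Linux (the pid below is
hypothetical; substitute the beam process id from ps):

    # Count the open descriptors of a running process via /proc.
    # Needs to run as the same user as that process, or as root.
    import os

    pid = 1234  # hypothetical: the CouchDB (beam) pid from ps
    open_fds = os.listdir("/proc/%d/fd" % pid)
    print("pid %d has %d open file descriptors" % (pid, len(open_fds)))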

Can you try upgrading to 0.9.1 to see if the error persists? There
shouldn't be any sort of incompatibility between the releases, so it'd
just be a matter of building and installing.

Paul Davis

On Mon, Aug 10, 2009 at 3:49 PM, Brian Candler <B.Candler@pobox.com> wrote:
> On Mon, Aug 10, 2009 at 11:17:01AM -0700, Tommy Chheng wrote:
>>>    Was the open files bottleneck hit on the client process, or the couchdb
>>>    erlang process?
>>>
>>>    I imagine it was the former.
>>>
>>    They are both on the same machine, so it wouldn't matter because it is
>>    at the OS level?
>>    If too many connections are being opened, the OS will refuse to
>>    open more, no matter whether it's the client or the couchdb process.
>
> It matters because it is a per-process limit, not a system-wide limit.
> (Well, there may be a system-wide limit on file descriptors too, but that
> would be set elsewhere, as a sysctl tunable, I think.)
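
As a rough illustration, a minimal Python sketch of reading the per-process
limit (the thing limits.conf and ulimit -n control), run as the same uid as
the client:

    # Show the per-process file descriptor limits, not any system-wide limit.
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("soft nofile limit: %d, hard nofile limit: %d" % (soft, hard))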
>
>>      I raised the limits in the /etc/security/limits.conf file by adding
>>      these lines:
>>
>>      * soft nofile 32768
>>      * hard nofile 32768
>>
>>    For the uid which was running the client application?
>>
>>    I set it for all users in the limits file.
>
> That's a bit of a sledgehammer approach, and will leave you vulnerable to
> denial-of-service attacks from other users.
>
> The limits are set like they are to give you some protection from this. If
> the client program runs as uid foo, then you should just give uid foo this
> benefit.
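
For example, assuming the client really does run as a uid called foo, a
narrower /etc/security/limits.conf entry would look something like:

    foo soft nofile 32768
    foo hard nofile 32768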
>
> I don't know what the client program actually is, though. Is it a web
> browser? In that case you would have to give every user who runs a web
> browser on that machine this privilege. However, it seems remarkable that a
> web browser would open 1000+ concurrent file handles, since code running in
> the browser doesn't have direct filesystem access anyway (unless you're
> running Java applets?)
>
> Or is it some middleware application, which receives requests from the
> browser clients and forwards them on to the backends?
>
> You might want to see if you can improve the application by closing files
> when you're no longer using them. If your app really needs to have 1,000
> files open concurrently, then so be it; but if it's a file descriptor leak,
> you'll want to plug it, otherwise you'll just die a bit later when you
> reach 32K open files.
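
As a sketch of the close-it-when-you're-done point, assuming the client
talks to CouchDB over HTTP from Python (the database and document names
here are made up):

    # Fetch a document and release its descriptor straight away, rather than
    # leaving the response (and its socket) open until garbage collection.
    import urllib2  # Python 2-era, contemporary with this thread

    def fetch_doc(db, doc_id):
        url = "http://127.0.0.1:5984/%s/%s" % (db, doc_id)
        resp = urllib2.urlopen(url)
        try:
            return resp.read()
        finally:
            resp.close()

    print(fetch_doc("mydb", "somedoc"))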
>
> Regards,
>
> Brian.
>
