couchdb-dev mailing list archives

From "Randall Leeds (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (COUCHDB-1171) Multiple requests to _changes feed causes {error, system_limit} "Too many processes"
Date Mon, 23 May 2011 21:03:47 GMT

     [ https://issues.apache.org/jira/browse/COUCHDB-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Randall Leeds resolved COUCHDB-1171.
------------------------------------

    Resolution: Won't Fix

This issue is not a bug in CouchDB. System limits and Erlang limits may need to be tuned
for high-volume deployments.
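As an illustrative sketch (exact file locations and init-script behavior vary by distribution and packaging, so treat the values and the ERL_FLAGS mechanism here as assumptions to verify against your setup), the relevant knobs are the OS file-descriptor limit and the Erlang VM's +P process limit:

```shell
# Allow more open file descriptors for the shell that starts CouchDB
# (one descriptor per open HTTP connection):
ulimit -n 65536

# Raise the Erlang emulator's process limit (32768 by default in older
# OTP releases) by passing the +P flag to the VM via ERL_FLAGS:
export ERL_FLAGS="+P 1048576"

# Start CouchDB in the same environment so the flags are picked up:
couchdb -b
```

Whether a packaged init script preserves ERL_FLAGS depends on the distribution; if it does not, the flag has to be added to the script itself.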

There is a wiki page on performance tuning with relevant information. The reason it only appears
with _changes is that feed=longpoll or feed=continuous are the only reliable ways to make
long-lasting HTTP connections.
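To illustrate the difference (assuming a local node on the default port and a database named db), the long-lived variants look like:

```shell
# One-shot poll: returns the current changes immediately and closes.
curl 'http://localhost:5984/db/_changes'

# Long-polling: the connection stays open until a change arrives
# (or the timeout elapses), then closes.
curl 'http://localhost:5984/db/_changes?feed=longpoll'

# Continuous: the connection stays open indefinitely, streaming one
# JSON line per change; heartbeat keeps idle connections from timing out.
curl 'http://localhost:5984/db/_changes?feed=continuous&heartbeat=10000'
```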

Raising the limits to ridiculously high values is totally acceptable and should not crater
your box (thanks, Erlang). Protecting against DDoS attacks that look like legitimate traffic
is incredibly difficult and, IMHO, outside the responsibility of CouchDB. Anyone running
mission-critical systems should probably look at DDoS protection appliances or proxies.
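As a sketch of the proxy approach, a reverse proxy such as nginx can cap concurrent connections per client before they ever reach CouchDB (the zone name, limits, and timeouts below are illustrative, not a recommended configuration):

```nginx
# Illustrative nginx front end for CouchDB: cap each client IP at a
# fixed number of simultaneous connections so a single misbehaving
# client cannot exhaust Erlang processes with open _changes feeds.
http {
    limit_conn_zone $binary_remote_addr zone=perclient:10m;

    server {
        listen 80;

        location / {
            limit_conn perclient 20;          # at most 20 concurrent connections per IP
            proxy_pass http://127.0.0.1:5984;
            proxy_buffering off;              # required for continuous feeds to stream
            proxy_read_timeout 3600s;         # tolerate long-lived _changes requests
        }
    }
}
```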

If there's a good argument the other way, please re-open the ticket.

> Multiple requests to _changes feed causes {error, system_limit} "Too many processes"
> ------------------------------------------------------------------------------------
>
>                 Key: COUCHDB-1171
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-1171
>             Project: CouchDB
>          Issue Type: Bug
>    Affects Versions: 1.0.2, 1.0.3, 1.1
>            Reporter: Alexander Shorin
>
> Originally I investigated issue 182 of the couchdb-python package, where calling the db.changes()
function more than 32768 times produces the following messages in the CouchDB log:
> [Thu, 19 May 2011 14:03:26 GMT] [info] [<0.2909.0>] 127.0.0.1 - - 'GET' /test/_changes 200
> [Thu, 19 May 2011 14:03:26 GMT] [error] [emulator] Too many processes
> [Thu, 19 May 2011 14:03:26 GMT] [error] [<0.2909.0>] Uncaught error in HTTP request: {error,system_limit}
> [Thu, 19 May 2011 14:03:26 GMT] [info] [<0.2909.0>] Stacktrace: [{erlang,spawn,
>                      [erlang,apply,
>                       [#Fun<couch_stats_collector.1.123391259>,[]]]},
>              {erlang,spawn,1},
>              {couch_httpd_db,handle_changes_req,2},
>              {couch_httpd_db,do_db_req,2},
>              {couch_httpd,handle_request_int,5},
>              {mochiweb_http,headers,5},
>              {proc_lib,init_p_do_apply,3}]
> [Thu, 19 May 2011 14:03:26 GMT] [info] [<0.2909.0>] 127.0.0.1 - - 'GET' /test/_changes 500
> More info about this issue can be found here: http://code.google.com/p/couchdb-python/issues/detail?id=182
> However, I could not reproduce this error using only the httplib module at first, but I got
the same behavior using the feed=longpool option:
> from httplib import HTTPConnection
> def test2():
>     conn = HTTPConnection('localhost:5984')
>     conn.connect()
>     i = 0
>     while True:
>         conn.putrequest('GET', '/test/_changes?feed=longpool')
>         conn.endheaders()
>         conn.getresponse().read()
>         i += 1
>         if i % 100 == 0:
>             print i
> When i gets to around 32667, the following exception is raised:
> Traceback (most recent call last):
>   File "/home/kxepal/projects/couchdb-python/issue-182/test.py", line 259, in <module>
>     test2()
>   File "/home/kxepal/projects/couchdb-python/issue-182/test.py", line 239, in test2
>     resp.read()
>   File "/usr/lib/python2.6/httplib.py", line 522, in read
>     return self._read_chunked(amt)
>   File "/usr/lib/python2.6/httplib.py", line 565, in _read_chunked
>     raise IncompleteRead(''.join(value))
> httplib.IncompleteRead: IncompleteRead(0 bytes read)
> [Thu, 19 May 2011 14:10:20 GMT] [info] [<0.3240.4>] 127.0.0.1 - - 'GET' /test/_changes?feed=longpool 200
> [Thu, 19 May 2011 14:10:20 GMT] [error] [emulator] Too many processes
> [Thu, 19 May 2011 14:10:20 GMT] [error] [<0.3240.4>] Uncaught error in HTTP request: {error,system_limit}
> [Thu, 19 May 2011 14:10:20 GMT] [info] [<0.3240.4>] Stacktrace: [{erlang,spawn,
>                      [erlang,apply,
>                       [#Fun<couch_stats_collector.1.123391259>,[]]]},
>              {erlang,spawn,1},
>              {couch_httpd_db,handle_changes_req,2},
>              {couch_httpd_db,do_db_req,2},
>              {couch_httpd,handle_request_int,5},
>              {mochiweb_http,headers,5},
>              {proc_lib,init_p_do_apply,3}]
> [Thu, 19 May 2011 14:10:20 GMT] [info] [<0.3240.4>] 127.0.0.1 - - 'GET' /test/_changes?feed=longpool 500
> Same error. I know this test function is far from a real use case, but is this behavior
correct, and couldn't it be exploited for malicious purposes?
> If I have done everything right, this exception occurs only for multiple requests to the
changes feed within a single connection; chunked lists and attachments are not affected.
> Test environment:
> Gentoo Linux 2.6.38
> CouchDB 1.0.2 release
> couchdb-python@63feefd9e3b6
> Python 2.6.6
> If any additional information is needed, I will try to provide it.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
