lucene-dev mailing list archives

From "Karl Wright (JIRA)" <j...@apache.org>
Subject [jira] Commented: (SOLR-1951) extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources
Date Mon, 14 Jun 2010 15:19:14 GMT

    [ https://issues.apache.org/jira/browse/SOLR-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12878613#action_12878613
] 

Karl Wright commented on SOLR-1951:
-----------------------------------

So, the proper solution appears to be to use HTTP keep-alive.  Jetty apparently supports the
HTTP/1.0 convention for this, which means you can get Jetty to leave the socket open if you
simply include the header:

Connection: Keep-Alive

... in each request, and then never close the socket but instead reuse it for request after request.
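To make the mechanism concrete, here is a minimal sketch (not from the issue; the host, path, and body are hypothetical placeholders) of formatting an HTTP/1.0 POST that asks the server to keep the connection open:

```java
// Sketch: an HTTP/1.0 request normally causes the server to close the
// socket after responding, unless the client sends "Connection: Keep-Alive"
// and the server honors it. Host, path, and body are placeholders.
public class KeepAliveRequest {
    // Format a minimal HTTP/1.0 POST whose headers request a persistent
    // connection, so the same socket can be reused for the next request.
    static String format(String host, String path, String body) {
        return "POST " + path + " HTTP/1.0\r\n"
             + "Host: " + host + "\r\n"
             + "Connection: Keep-Alive\r\n"
             + "Content-Type: text/plain\r\n"
             + "Content-Length: " + body.getBytes().length + "\r\n"
             + "\r\n"
             + body;
    }

    public static void main(String[] args) {
        String req = format("localhost:8983", "/solr/update/extract", "data");
        System.out.print(req);
        // On a real java.net.Socket you would write this request, read the
        // complete response, then write the next request on the same socket;
        // the behavior reported below is that those subsequent requests
        // appear to go unrecognized.
    }
}
```

The client must read each response fully (honoring Content-Length) before writing the next request, otherwise the requests and responses get out of step on the shared socket.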

But, unfortunately, keep-alive doesn't seem to work with Jetty/Solr.  Only the first request
posted per connection seems to be recognized; subsequent requests are silently eaten.
Either that, or I'm doing something fundamentally wrong.



> extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources
> ----------------------------------------------------------------------------------------------------------------------
>
>                 Key: SOLR-1951
>                 URL: https://issues.apache.org/jira/browse/SOLR-1951
>             Project: Solr
>          Issue Type: Bug
>          Components: update
>    Affects Versions: 1.4.1, 1.5
>         Environment: sun java
> solr 1.5 build based on trunk
> debian linux "lenny"
>            Reporter: Karl Wright
>         Attachments: solr-1951.zip
>
>
> When multiple threads pound on extractingUpdateRequestHandler using multipart form posting over an extended period of time, I'm seeing a huge number of sockets piling up in the following state:
> tcp6       0      0 127.0.0.1:8983          127.0.0.1:44058         TIME_WAIT
> Despite the fact that the client can only have 10 sockets open at a time, huge numbers of sockets accumulate that are in this state:
> root@duck6:~# netstat -an | fgrep :8983 | wc
>   28223  169338 2257840
> root@duck6:~#
> The sheer number of sockets lying around seems to eventually cause commons-fileupload to fail (silently - another bug) in creating a temporary file to contain the content data.  This causes Solr to erroneously return a 400 code with "missing_content_data" or some such to the indexing poster.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org

