lucene-dev mailing list archives

From "Yonik Seeley (JIRA)" <j...@apache.org>
Subject [jira] Commented: (SOLR-1951) extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources
Date Mon, 14 Jun 2010 16:19:14 GMT

    [ https://issues.apache.org/jira/browse/SOLR-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12878628#action_12878628 ]

Yonik Seeley commented on SOLR-1951:
------------------------------------

From what I've always seen, both Jetty and Tomcat handle persistent connections just fine;
problems with this are often in the clients. That said, I've done almost zero testing with
multi-part upload in the past, so separating the two issues for testing would probably be best.
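
For reference, a minimal sketch of the client side being described, using SolrJ 1.4-era APIs
(CommonsHttpSolrServer on top of a pooled Commons HttpClient 3.x). The URL, the sample file
path, and the pool cap of 10 are assumptions taken from the report, exact method signatures
vary between SolrJ versions, and whether a single content stream actually goes out as a
multipart form post depends on the SolrJ version in use:

import java.io.File;

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

/**
 * Sketch of a load-test client that shares one pooled, keep-alive HttpClient
 * across all indexing threads, so requests reuse existing connections instead
 * of opening a fresh socket per post.
 */
public class ExtractingLoadClient {
    public static void main(String[] args) throws Exception {
        // Cap the pool at the same 10 concurrent connections the reported test uses.
        MultiThreadedHttpConnectionManager mgr = new MultiThreadedHttpConnectionManager();
        mgr.getParams().setDefaultMaxConnectionsPerHost(10);
        mgr.getParams().setMaxTotalConnections(10);

        HttpClient httpClient = new HttpClient(mgr);
        CommonsHttpSolrServer solr =
            new CommonsHttpSolrServer("http://localhost:8983/solr", httpClient);

        // One request against the extracting handler; a load test would loop over
        // many files from a fixed-size thread pool, all sharing "solr" above.
        ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
        req.addFile(new File("docs/sample.pdf"));   // hypothetical test document
        req.setParam("literal.id", "doc-1");
        solr.request(req);
    }
}

The design point is simply that every worker thread draws from one pooled HttpClient, which
makes it easier to tell a server-side handle leak apart from a client that quietly closes and
reopens connections on every request.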

> extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources
> ----------------------------------------------------------------------------------------------------------------------
>
>                 Key: SOLR-1951
>                 URL: https://issues.apache.org/jira/browse/SOLR-1951
>             Project: Solr
>          Issue Type: Bug
>          Components: update
>    Affects Versions: 1.4.1, 1.5
>         Environment: sun java
> solr 1.5 build based on trunk
> debian linux "lenny"
>            Reporter: Karl Wright
>         Attachments: solr-1951.zip
>
>
> When multiple threads pound on extractingUpdateRequestHandler using multipart form posting over an extended period of time, I'm seeing a huge number of sockets piling up in the following state:
> tcp6       0      0 127.0.0.1:8983          127.0.0.1:44058         TIME_WAIT
> Despite the fact that the client can only have 10 sockets open at a time, huge numbers of sockets accumulate in this state:
> root@duck6:~# netstat -an | fgrep :8983 | wc
>   28223  169338 2257840
> root@duck6:~#
> The sheer number of sockets lying around seems to eventually cause commons-fileupload to fail (silently - another bug) in creating a temporary file to contain the content data. This causes Solr to erroneously return a 400 code with "missing_content_data" or some such to the indexing poster.
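
A per-state breakdown is a bit more telling than the raw netstat | wc count, since it
separates TIME_WAIT pile-up from, say, leaked CLOSE_WAIT handles. Below is a rough,
Linux-only sketch that reads /proc/net/tcp6 directly (the class name is made up; 2317 is
just 8983 in hex; the report lists the sockets under tcp6, so that is the file read here,
and /proc/net/tcp can be added the same way if needed):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;

/** Tallies TCP states for sockets involving port 8983 by parsing /proc/net/tcp6. */
public class SolrSocketStates {
    // 8983 decimal is 2317 hex, the form the port takes in /proc/net/tcp6.
    private static final String SOLR_PORT_HEX = "2317";

    public static void main(String[] args) throws Exception {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        BufferedReader in = new BufferedReader(new FileReader("/proc/net/tcp6"));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.trim().split("\\s+");
                // Skip the header row and malformed lines.
                if (f.length < 4 || f[1].indexOf(':') < 0) continue;
                String local  = f[1].substring(f[1].lastIndexOf(':') + 1);
                String remote = f[2].substring(f[2].lastIndexOf(':') + 1);
                if (!SOLR_PORT_HEX.equalsIgnoreCase(local)
                        && !SOLR_PORT_HEX.equalsIgnoreCase(remote)) continue;
                // Hex TCP state: 01=ESTABLISHED, 06=TIME_WAIT, 08=CLOSE_WAIT, 0A=LISTEN.
                String state = f[3];
                Integer prev = counts.get(state);
                counts.put(state, prev == null ? 1 : prev + 1);
            }
        } finally {
            in.close();
        }
        System.out.println("TCP states for port 8983: " + counts);
    }
}

Sampling those counts over the course of the load test should show whether the TIME_WAIT
bucket keeps growing while ESTABLISHED stays capped at the client's 10 connections.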

