lucene-dev mailing list archives

From Otis Gospodnetic <>
Subject Re: CLOSE_WAIT after connecting to multiple shards from a primary shard
Date Tue, 31 May 2011 17:38:29 GMT

I think I already replied to this one on general@ or some other place. Did you try the suggestion? Please send any future replies to that list instead of this dev@ list, which is for development of Lucene/Solr itself.

Sematext :: :: Solr - Lucene - Nutch
Lucene ecosystem search ::

>From: Mukunda Madhava <>
>Sent: Mon, May 30, 2011 9:43:12 PM
>Subject: CLOSE_WAIT after connecting to multiple shards from a primary shard
>We have a "primary" Solr shard, and multiple "secondary" shards. We 
>query data from the secondary shards by specifying the "shards" param in the 
>query params. 
>But we found that after receiving the data, there are a large number of 
>connections from the primary shard to the secondary shards left in CLOSE_WAIT. 
>For example: 
>tcp        1      0 primaryshardhost:56109 secondaryshardhost1:8090 CLOSE_WAIT 
>tcp        1      0 primaryshardhost:51049 secondaryshardhost1:8090 CLOSE_WAIT 
>tcp        1      0 primaryshardhost:49537 secondaryshardhost1:8089 CLOSE_WAIT 
>tcp        1      0 primaryshardhost:44109 secondaryshardhost2:8090 CLOSE_WAIT 
>tcp        1      0 primaryshardhost:32041 secondaryshardhost2:8090 CLOSE_WAIT 
>tcp        1      0 primaryshardhost:48533 secondaryshardhost2:8089 CLOSE_WAIT 
>We even changed the code to open the Solr connections as below: 
>SimpleHttpConnectionManager cm = new SimpleHttpConnectionManager(); 
>HttpClient httpClient = new HttpClient(cm); 
>solrServer = new CommonsHttpSolrServer(url, httpClient); 
>But we still see these issues. Any ideas? 
>Does Solr persist the connections to the secondary shards?
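
For context, a distributed request of the kind described in the quoted message looks roughly like this (the hostnames and ports match the netstat output above; the /solr path and the query itself are illustrative assumptions, not taken from the original post):

http://primaryshardhost:8080/solr/select?q=*:*&shards=secondaryshardhost1:8090/solr,secondaryshardhost2:8090/solr

For every such request, the primary shard opens HTTP connections to each host listed in the shards parameter, which is where the sockets in the netstat listing come from.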
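
On the HttpClient side, here is a minimal sketch of the kind of setup the quoted code appears to be aiming for, assuming commons-httpclient 3.x and SolrJ's CommonsHttpSolrServer. It swaps SimpleHttpConnectionManager for MultiThreadedHttpConnectionManager and periodically evicts idle connections, which is one common way to clear out sockets stuck in CLOSE_WAIT. The class name, URL, limits, and idle timeout are illustrative, not from the original post:

import java.net.MalformedURLException;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class ShardClientFactory {
    // Keep a reference to the connection manager so idle connections can be evicted later.
    private static final MultiThreadedHttpConnectionManager cm =
            new MultiThreadedHttpConnectionManager();

    public static CommonsHttpSolrServer create(String url) throws MalformedURLException {
        // Illustrative limits; tune to the number of shards and concurrent requests.
        cm.getParams().setDefaultMaxConnectionsPerHost(20);
        cm.getParams().setMaxTotalConnections(100);

        HttpClient httpClient = new HttpClient(cm);
        return new CommonsHttpSolrServer(url, httpClient);
    }

    public static void evictIdleConnections() {
        // Connections the remote side has already closed linger locally in CLOSE_WAIT
        // until they are reaped; call this periodically (e.g. from a scheduled task).
        cm.closeIdleConnections(30000L); // close connections idle for 30 seconds
    }
}

The other usual variation is constructing SimpleHttpConnectionManager with alwaysClose set to true, at the cost of giving up connection reuse entirely.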