hc-dev mailing list archives

From Bill Speirs <bill.spe...@gmail.com>
Subject Re: Too Many Open Files Exception
Date Tue, 01 Nov 2011 13:29:57 GMT
> Correct. The difference is that EntityUtils#consume will try to salvage
> the underlying connection, whereas HttpUriRequest#abort will not.
> The important point is to use try-finally to ensure connection release
> in all cases.
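The try-finally discipline the quoted advice recommends can be sketched with a stand-in connection class (FakeConnection and its methods are hypothetical, used only to keep the sketch self-contained and runnable without HttpClient on the classpath):

```java
// Sketch: the finally block guarantees release on both the success
// and the failure path. FakeConnection stands in for HttpClient's
// managed connection; all names here are illustrative.
class FakeConnection {
    boolean released = false;
    void execute(boolean fail) {
        if (fail) throw new RuntimeException("simulated I/O error");
    }
    void release() { released = true; }
}

public class ReleaseInFinally {
    // Returns true if the connection was released, whatever happened.
    static boolean run(boolean fail) {
        FakeConnection conn = new FakeConnection();
        try {
            conn.execute(fail);
        } catch (RuntimeException e) {
            // error path: with real HttpClient this is where abort() would go
        } finally {
            conn.release(); // always runs, success or failure
        }
        return conn.released;
    }

    public static void main(String[] args) {
        System.out.println(run(false)); // true
        System.out.println(run(true));  // true
    }
}
```

The point of the pattern is that release happens exactly once, in one place, regardless of which path the code took.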

The problem is that I have two methods to "close" the connection: one
if everything is fine, another if there is an error/exception. So
try-catch-finally doesn't really help. I think what I want is the
following. In the normal case (the try block), I simply get the entity
and close the input stream on the content. However, if something goes
wrong I should simply call abort() on the request.

try {
  response = client.execute(request);
  response.getEntity().getContent().close(); // normal case: close the content stream
} catch (IOException e) {
  request.abort(); // error case: give up on the connection
}

Does this make sense?

I suppose moving response.getEntity().getContent().close(); to a
finally block couldn't hurt, as I don't expect the input stream would
throw an exception after abort() was called... but it seems messy.

I guess I'm not sure why there isn't simply a close() method for the
request which can be called in either case.
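That missing close() could be written as a small helper that unifies the two paths: try the graceful close, and fall back to abort if it fails. The types below are hypothetical stand-ins for HttpClient's response and request, just to keep the sketch self-contained:

```java
import java.io.IOException;

// Sketch of the single close() method the mail wishes existed.
public class CloseHelper {
    interface Response { void closeContent() throws IOException; }
    interface Request  { void abort(); }

    // One method callable from both the success and the error path.
    // Returns true if the graceful close succeeded.
    static boolean close(Response response, Request request) {
        try {
            response.closeContent(); // graceful: connection can be reused
            return true;
        } catch (IOException e) {
            request.abort();         // salvage failed: drop the connection
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(close(() -> {}, () -> {}));
        System.out.println(close(() -> { throw new IOException("boom"); }, () -> {}));
    }
}
```

With a helper like this, the calling code needs only one cleanup call in a finally block instead of two distinct shutdown paths.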

> HttpClient automatically shuts down the underlying connection in case of
> an I/O exception including a timeout. The connection manager will evict
> closed connection immediately upon release.

That isn't what I'm seeing, although it's tough to reproduce. On two
separate machines running the same code I'm seeing the following:

try {
  HttpResponse response = client.execute(httpHost, request); // throws a java.net.SocketTimeoutException
} catch (IOException e) {
  // handle the exception; the execute above is then retried
}

This code is hit again, and everything hangs. I'm going to try to
enable debugging so I can find out exactly where it's hanging.
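For reference, HttpClient 4.x logs through Commons Logging, so assuming log4j is on the classpath, something like the following log4j.properties should surface the context and wire-level detail (the `org.apache.http` and `org.apache.http.wire` logger names are the ones HttpClient 4.x uses; the appender setup is just a minimal console sketch):

```
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %-5p %c - %m%n

# HttpClient context logging and raw wire traffic
log4j.logger.org.apache.http=DEBUG
log4j.logger.org.apache.http.wire=DEBUG
```

Wire logging is verbose, so it is best enabled only while reproducing the hang.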



