hc-httpclient-users mailing list archives

From Stevo Slavić <ssla...@gmail.com>
Subject Re: HttpDelete and similar could exhaust your http connection pool if not handled properly
Date Tue, 11 Jan 2011 21:21:30 GMT
@sebb

1) it's "delete" - I tried to copy multiple fragments and assemble a brief,
readable one for the mail and made an error, but I hope we understood each
other.

2) delete can be null; HttpClient.execute(HttpUriRequest request) doesn't
declare the request parameter as final, so it can point to anything after
being passed to execute.

3) please don't waste time documenting a soon-to-be-deprecated API.


@Oleg

4) So it's better / more recommended to consume the response entity's
content than to abort the request? Is the connection considered reusable
once the content is consumed, whereas with abort it gets discarded and a
new connection needs to be established?

5) I see that in 4.1(-beta2-SNAPSHOT) HttpEntity.consumeContent is
deprecated in favor of EntityUtils.consume - there it just gets the
content stream reference and closes it, so its performance is, let's say,
constant - not affected by content size. I think I read that, before 4.1,
abort was considered preferable to consumeContent for large response
content streams.
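
To make sure I understand the 4.1 way: a minimal sketch of the
consume-based cleanup I have in mind (just an illustration, not tested;
client, deleteUrl and SomeCustomException are as in the code quoted below,
EntityUtils is org.apache.http.util.EntityUtils):

        HttpDelete delete = new HttpDelete(deleteUrl);
        try {
                HttpResponse response = client.execute(delete);
                int status = response.getStatusLine().getStatusCode();
                // Consuming the entity (EntityUtils.consume tolerates a null
                // entity) is what releases the connection back to the pool.
                EntityUtils.consume(response.getEntity());
                if (status != HttpStatus.SC_OK) {
                        throw new SomeCustomException(status,
                                        response.getStatusLine().getReasonPhrase());
                }
        } catch (IOException ioe) {
                throw new SomeCustomException(ioe);
        }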

6) What if client.execute throws an exception and it's not handled (so
neither the response content is consumed nor the request aborted) - will
the connection be returned to the pool, discarded as invalid, or trapped
eternally, exhausting the pool?
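
In other words, is a defensive variant like this sketch necessary, or does
HttpClient already release the connection internally when execute fails?
(again just an illustration, same assumptions as in the sketch above):

        HttpDelete delete = new HttpDelete(deleteUrl);
        try {
                HttpResponse response = client.execute(delete);
                // Normal path: consuming the entity makes the connection
                // reusable and returns it to the pool.
                EntityUtils.consume(response.getEntity());
        } catch (IOException ioe) {
                // execute() or consume() failed: abort so the underlying
                // connection is not left checked out of the pool.
                delete.abort();
                throw new SomeCustomException(ioe);
        }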



Regards,
Stevo

On Tue, Jan 11, 2011 at 9:19 PM, Oleg Kalnichevski <olegk@apache.org> wrote:
> On Tue, 2011-01-11 at 19:17 +0100, Stevo Slavić wrote:
>> It seems they were.
>>
>> On client side of communication I now have:
>>
>>
>>               HttpDelete httpDelete = new HttpDelete(deleteUrl);
>>
>>               try {
>>                       HttpResponse response = client.execute(delete);
>>                       if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) {
>>                               throw new SomeCustomException(response.getStatusLine().getStatusCode(),
>> response.getStatusLine().getReasonPhrase());
>>                       }
>>               } catch (IOException ioe) {
>>                       throw new SomeCustomException(ioe);
>>               } finally {
>>                       if (delete != null) {
>>                               delete.abort();
>>                       }
>>               }
>>
>>
>> Without the finally block and the abort call in it (also with abort just
>> on IOException in the catch block), connections would not get returned to
>> the pool on a regular, non-exceptional return. The HttpDelete reference is
>> lost - not accessible outside of this block/method. This puts the
>> responsibility on the developer to clean up low-level resources which
>> weren't directly created or accessed - I guess it has to be like that.
>>
>
> What is wrong with just this?
>
> HttpDelete httpDelete = new HttpDelete(deleteUrl);
> HttpResponse response = client.execute(httpDelete);
> HttpEntity entity = response.getEntity();
> if (entity != null) {
>    entity.consumeContent();
> }
> if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) {
>   throw new SomeCustomException();
> }
>
> Oleg


