hc-dev mailing list archives

From Oleg Kalnichevski <ol...@apache.org>
Subject Re: BasicAsyncResponseConsumer needs OOM protection?
Date Tue, 31 May 2016 19:04:01 GMT
On Tue, 2016-05-31 at 16:03 +0000, Idzerda, Edan wrote:
> Hello all!
> 
> We are starting to migrate to ForgeRock's OpenIG as a reverse proxy. It uses the
> HttpAsyncClient, and as such it is using the BasicAsyncResponseConsumer to fetch the
> responses from the backend web site. We have run into a problem when the backend web
> sites return response data that is too large to fit in our JVM heap. We understand that
> the BasicAsyncResponseConsumer was not meant to handle arbitrarily large responses, and
> we are looking at other ways to handle these situations in the future.
> 
> However, we have observed a problem in the handling of OutOfMemoryErrors in the
> AbstractMultiworkerIOReactor. In short, when the BasicAsyncResponseConsumer throws an
> OutOfMemoryError, the worker thread in the IOReactor does not seem to return to the pool
> of available threads. Once the number of requests through the pool matches the number of
> worker threads, new requests hang indefinitely:
> 
> 2016/05/27 11:48:20:158 EDT [DEBUG] MainClientExec - [exchange: 3] start execution
> 2016/05/27 11:48:20:159 EDT [DEBUG] RequestAuthCache - Auth cache not set in the context
> 2016/05/27 11:48:20:159 EDT [DEBUG] InternalHttpAsyncClient - [exchange: 3] Request connection for {}->http://c3dufedsso2.premierinc.com:8080
> 2016/05/27 11:48:20:159 EDT [DEBUG] PoolingNHttpClientConnectionManager - Connection request: [route: {}->http://c3dufedsso2.premierinc.com:8080][total kept alive: 0; route allocated: 0 of 64; total allocated: 0 of 64]
> 
> 
> We have observed that the main thread receives a ConnectionClosedException:
> 
> org.apache.http.ConnectionClosedException: Connection closed unexpectedly
>         at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.closed(HttpAsyncRequestExecutor.java:137)
>         at org.apache.http.impl.nio.client.InternalRequestExecutor.closed(InternalRequestExecutor.java:64)
>         at org.apache.http.impl.nio.client.InternalIODispatch.onClosed(InternalIODispatch.java:71)
>         at org.apache.http.impl.nio.client.InternalIODispatch.onClosed(InternalIODispatch.java:39)
>         at org.apache.http.impl.nio.reactor.AbstractIODispatch.disconnected(AbstractIODispatch.java:102)
>         at org.apache.http.impl.nio.reactor.BaseIOReactor.sessionClosed(BaseIOReactor.java:281)
>         at org.apache.http.impl.nio.reactor.AbstractIOReactor.processClosedSessions(AbstractIOReactor.java:442)
>         at org.apache.http.impl.nio.reactor.AbstractIOReactor.hardShutdown(AbstractIOReactor.java:578)
>         at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:307)
>         at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:106)
>         at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:590)
>         at java.lang.Thread.run(Thread.java:745)
> 
> 
> And the dispatcher thread ends this way:
> 
> Exception in thread "I/O dispatcher 6" java.lang.OutOfMemoryError: Java heap space
>         at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>         at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
>         at org.apache.http.nio.util.HeapByteBufferAllocator.allocate(HeapByteBufferAllocator.java:51)
>         at org.apache.http.nio.util.ExpandableBuffer.<init>(ExpandableBuffer.java:67)
>         at org.apache.http.nio.util.SimpleInputBuffer.<init>(SimpleInputBuffer.java:47)
>         at org.apache.http.nio.protocol.BasicAsyncResponseConsumer.onEntityEnclosed(BasicAsyncResponseConsumer.java:74)
> 
> 
> I have also been able to reproduce this outside of OpenIG using a
> PoolingNHttpClientConnectionManager with default settings. To fix it, I have been testing
> a patch against httpcore 4.4.1 that catches OutOfMemoryErrors in two methods of the
> BasicAsyncResponseConsumer class. I rethrow them as new IOExceptions in order to keep the
> same method signatures, and I only catch OutOfMemoryError specifically because catching
> all Throwables broke a unit test that looks for a truncated content exception.
> 
> Using this patch, I can request HTTP responses that exceed available memory without
> resulting in a hang. Does this seem like the appropriate place to patch this error? Or
> should I create a JIRA against this issue and suggest this patch as a solution? I've
> attached a diff to this email.
> 
> Thanks for your assistance and for working on such a great set of Java libraries.
> 
> - edan
> 
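For illustration only, here is a minimal sketch of the kind of guard described in the quoted message: a hypothetical subclass of BasicAsyncResponseConsumer (the class name is made up) that catches OutOfMemoryError raised while buffering the entity and rethrows it as an IOException, keeping the original method signatures. This is not the diff attached to the original message, which is not reproduced in the archive.

    import java.io.IOException;

    import org.apache.http.HttpEntity;
    import org.apache.http.entity.ContentType;
    import org.apache.http.nio.ContentDecoder;
    import org.apache.http.nio.IOControl;
    import org.apache.http.nio.protocol.BasicAsyncResponseConsumer;

    // Hypothetical sketch, not the attached diff: convert OutOfMemoryError thrown while
    // allocating or filling the in-memory response buffer into an IOException so the
    // exchange fails cleanly instead of killing the I/O dispatcher thread.
    public class OomGuardedResponseConsumer extends BasicAsyncResponseConsumer {

        @Override
        protected void onEntityEnclosed(
                final HttpEntity entity, final ContentType contentType) throws IOException {
            try {
                super.onEntityEnclosed(entity, contentType);
            } catch (final OutOfMemoryError ex) {
                // Rethrow as a checked IOException to keep the same method signature
                throw new IOException("Response entity too large to buffer in memory", ex);
            }
        }

        @Override
        protected void onContentReceived(
                final ContentDecoder decoder, final IOControl ioctrl) throws IOException {
            try {
                super.onContentReceived(decoder, ioctrl);
            } catch (final OutOfMemoryError ex) {
                throw new IOException("Response entity too large to buffer in memory", ex);
            }
        }
    }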

Edan

Basic request / response consumers are not supposed to be used for
anything other than the most basic use cases. One is generally advised
to build custom response consumers specifically tailored to their
particular application.

Here is an example of a fully streaming reverse proxy that uses custom
message producers / consumers to stream messages through a small
fixed-size buffer:

https://github.com/apache/httpcore/blob/4.4.x/httpcore-nio/src/examples/org/apache/http/examples/nio/NHttpReverseProxy.java

Here is another example of how to consume response messages without
buffering their content in memory:

https://github.com/apache/httpasyncclient/blob/4.1.x/httpasyncclient/src/examples/org/apache/http/examples/nio/client/AsyncClientHttpExchangeStreaming.java
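
Condensed from that example, a streaming consumer looks roughly like this (the class names
and the localhost URL are placeholders): content is handed to onCharReceived in small chunks
and nothing is accumulated in memory.

    import java.io.IOException;
    import java.nio.CharBuffer;
    import java.util.concurrent.Future;

    import org.apache.http.HttpResponse;
    import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
    import org.apache.http.impl.nio.client.HttpAsyncClients;
    import org.apache.http.nio.IOControl;
    import org.apache.http.nio.client.methods.AsyncCharConsumer;
    import org.apache.http.nio.client.methods.HttpAsyncMethods;
    import org.apache.http.protocol.HttpContext;

    public class AsyncStreamingExample {

        // Processes response content as it arrives instead of buffering the whole entity
        static class MyResponseConsumer extends AsyncCharConsumer<Boolean> {

            @Override
            protected void onResponseReceived(final HttpResponse response) {
                // Inspect the status line / headers here if needed
            }

            @Override
            protected void onCharReceived(
                    final CharBuffer buf, final IOControl ioctrl) throws IOException {
                // Write each chunk out (to a stream, file, etc.); nothing is retained here
                while (buf.hasRemaining()) {
                    System.out.print(buf.get());
                }
            }

            @Override
            protected void releaseResources() {
            }

            @Override
            protected Boolean buildResult(final HttpContext context) {
                return Boolean.TRUE;
            }
        }

        public static void main(final String[] args) throws Exception {
            final CloseableHttpAsyncClient httpclient = HttpAsyncClients.createDefault();
            try {
                httpclient.start();
                final Future<Boolean> future = httpclient.execute(
                        HttpAsyncMethods.createGet("http://localhost:8080/"), // placeholder URL
                        new MyResponseConsumer(), null);
                final Boolean result = future.get();
                System.out.println(result != null && result
                        ? "Request successfully executed" : "Request failed");
            } finally {
                httpclient.close();
            }
        }
    }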

Oleg

> 
> --
> Edan Idzerda
> edan_idzerda@premierinc.com
> 



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@hc.apache.org
For additional commands, e-mail: dev-help@hc.apache.org

