hc-dev mailing list archives

From "Sam Berlin" <sber...@gmail.com>
Subject Re: [HttpCore] NIO extensions: non-blocking client side transport
Date Sat, 07 Oct 2006 16:33:14 GMT
The third choice should be an option for users who want to take full
advantage of what NIO offers.  Anything less gives up some of the main
features of NIO.  But you're correct -- not as many developers are
familiar with channels / buffers as they are with streams.  I recommend
exposing the API via the third choice, but also providing an additional
layer that can wrap the buffers in a stream.

You can do this via something like:
  Pipe pipe = Pipe.open();
  Pipe.SinkChannel /* a WritableByteChannel */ sink = pipe.sink();
  // written to with: sink.write(myBuffer);
  Pipe.SourceChannel /* a ReadableByteChannel */ source = pipe.source();
  InputStream input = Channels.newInputStream(source);
  // Now all data written to 'sink' can be read from 'input'
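
The entity code on top of that doesn't need to know anything about NIO;
it just reads from 'input' as usual.  A rough sketch (handleEntity is a
made-up name, purely for illustration):

  // hypothetical consumer; ordinary stream code works unchanged
  void handleEntity(InputStream input) throws IOException {
    byte[] tmp = new byte[2048];
    int n;
    while ((n = input.read(tmp)) != -1) {
      // process the n bytes in tmp as before
    }
  }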

The only difficulty is that sink.write(...) may block until 'input' has
consumed data from the pipe.  This means that writing from the buffer
into the sink has to happen on a different thread than the Selector
thread.
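
One way to do that is to hand the copy off to a worker, for example
(ExecutorService is just one option; the dataAvailable callback and the
surrounding names are made up for illustration):

  ExecutorService writer = Executors.newSingleThreadExecutor();

  // invoked on the selector thread once a buffer has been filled;
  // the actual write is pushed onto the worker so the selector
  // thread never blocks on the pipe
  void dataAvailable(final ByteBuffer buffer) {
    writer.execute(new Runnable() {
      public void run() {
        try {
          while (buffer.hasRemaining()) {
            sink.write(buffer); // may block until 'input' catches up
          }
        } catch (IOException ex) {
          // close the pipe / signal the error to the consumer
        }
      }
    });
  }

You could of course share a thread pool across connections instead of
dedicating one writer per pipe.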

Sam

On 10/7/06, Roland Weber <http-async@dubioso.net> wrote:
> Hi Oleg
>
> > Yes, something similar but not quite the same. I am thinking about
> > having an event driven architecture of some sort, but I certainly do not
> > want to go the same route asyncweb folks did with regards to memory
> > management. As far as I know asyncweb buffers content in memory and can
> > be prone to 'out of memory' conditions even when serving moderate
> > amounts of data under heavy load. In my humble opinion this is way worse
> > than dropping incoming connections due to worker thread pool depletion,
> > because the latter gives the clients a very clean and reliable recovery
> > mechanism, whereas the former does not. Dropping the connection due to
> > an out of memory condition, after the request has already been processed
> > and while the response is being sent out, is complete insanity.
>
> I was already wondering what drawbacks asyncweb might have.
> The story of never blocking anything just sounded too good.
>
> > So, in my opinion there are several options we could pursue.
> >
> > (1) Never ever block I/O in HTTP service. As a consequence always buffer
> > content in memory. This approach is flawed, but is relatively simple.
>
> Rather not.
>
> > (2) Always block I/O in HTTP service when serving potentially large
> > entities in order to prevent session buffer overflow. Requires a worker
> > thread per large entity content stream.
>
> Based on my limited understanding of NIO, this option sounds best.
> It should allow for both blocking and non-blocking operation, with
> a mix of buffering and non-buffering. Or am I getting something wrong?
>
> > (3) Do not use streams. Use callbacks for I/O events that take NIO
> > buffers as parameters.
>
> This *sounds* good, but somehow I don't buy the story. Our entities
> are based on streams. File IO is based on streams. Many developers
> are familiar with streams. There's nothing wrong with having callbacks
> as an option, but I'd rather not have them as the only option.
>
> cheers,
>  Roland
>

---------------------------------------------------------------------
To unsubscribe, e-mail: httpclient-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: httpclient-dev-help@jakarta.apache.org

