hc-dev mailing list archives

From Tatu Saloranta <cowtownco...@yahoo.com>
Subject Re: [HttpCore] NIO extensions: progress report
Date Mon, 28 Aug 2006 20:44:36 GMT


--- Oleg Kalnichevski <olegk@apache.org> wrote:

> On Sun, 2006-08-27 at 14:18 -0700, Tatu Saloranta wrote:
...
> > Actually, I think there are cases where this is not true:
> > specifically, when server itself has to act as a client towards
> > other systems. This is typically the way web services work: there
> > are multiple layers of
...
> Hi Tatu,
> 
> I respectfully remain unconvinced. The problem is not the blocking
> I/O as such, but rather the one thread per connection model, which
> is often

Yes, exactly, but:

> (mis)used due to its simplicity. I personally think that the
> platform you have described, which is essentially an HTTP proxy,
> may perform

In its simplest form, yes; but in practice there is usually business
logic that combines data from multiple sources dynamically, with the
different sources having different response rates and different
amounts of variation in those rates.

> better using a reasonably sized thread pool, blocking I/O
> connections and carefully chosen connection keep-alive strategy.
> NIO _may_ provide

I don't see how this would solve the problem, as long as threads
essentially must be tied up doing blocking access... unless I am
missing something (which is possible, so please read on).
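
Just to make concrete what I mean by threads being tied up, here is a
minimal sketch of the blocking model (plain java.util.concurrent,
nothing HttpCore-specific; the pool size, the request count and the
500 ms sleep standing in for a slow back-end call are assumptions of
mine, purely for illustration):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical sketch: thread-per-request with blocking back-end
    // calls. Each task pins a pool thread for the full latency of the
    // downstream call, simulated here with sleep().
    public class BlockingModelSketch {
        public static void main(String[] args) {
            ExecutorService workers = Executors.newFixedThreadPool(200);
            for (int i = 0; i < 1000; i++) {
                workers.execute(new Runnable() {
                    public void run() {
                        try {
                            // simulate a 500 ms back-end call; the thread
                            // does nothing but wait for it to come back
                            Thread.sleep(500);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
            }
            workers.shutdown();
        }
    }

With 1000 such requests in flight and only 200 threads, the remaining
800 requests simply queue up behind the blocked threads, no matter how
fast the pool itself is.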

Also, I am thinking not so much of raw performance (throughput) as of
scalability and robustness with respect to the number of concurrent
open connections. High throughput does not help if the system crashes
under high load. This is the usual "Linux vs. Solaris" comparison: in
some cases you just want the fastest possible solution, in others the
more robust operation (graceful degradation of the quality of
service).

> more advantageous only if most of those connections constantly
> generate some fair amount of traffic at a very low rate thus
> infinitely blocking worker threads and preventing them from being
> efficiently reused. At the

Yes, although I am thinking not so much of a low rate as of high
latency: if a call to another (slow) service takes 500 ms to respond,
the thread in question just sits idle, and you need a huge number of
'worker' (idler, rather...) threads just for this simple binding. And
knowing how badly Java threading scales on most platforms, with a
1000-thread pool you are SOL due to scheduling overhead.
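
To put a rough number on it (the request rate here is just an assumed
figure for illustration, not anything measured):

    threads stuck waiting ~= request rate x back-end latency
                           = 2000 requests/s x 0.5 s
                           = ~1000 threads doing nothing but waiting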

But perhaps it is possible to decouple the handling of request sending
from response processing, so that on average blocking briefly on each
of those steps would be much less problematic than statically
allocating a thread for the whole request/response transaction.

Come to think of it, maybe this is exactly what has been talked about
so far? ;-)
If so, yes, you are exactly right: this might solve many of the
problems of the simple thread-per-connection strategy.
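
Something along these lines is what I am picturing (just a sketch on
top of plain java.nio, not the HttpCore NIO API; the back-end host
name and the hard-coded request are made up):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    // One selector thread multiplexes many outgoing connections, so no
    // thread is parked for the duration of a slow back-end exchange.
    public class SelectorSketch {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            // open a handful of non-blocking connections to a
            // (hypothetical) slow back-end service
            for (int i = 0; i < 10; i++) {
                SocketChannel ch = SocketChannel.open();
                ch.configureBlocking(false);
                ch.connect(new InetSocketAddress("backend.example.com", 8080));
                ch.register(selector, SelectionKey.OP_CONNECT);
            }
            ByteBuffer buf = ByteBuffer.allocate(8192);
            while (selector.select() > 0) {
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    SocketChannel ch = (SocketChannel) key.channel();
                    if (key.isConnectable() && ch.finishConnect()) {
                        // connected: now interested in sending the request
                        key.interestOps(SelectionKey.OP_WRITE);
                    } else if (key.isWritable()) {
                        // send the request (ignoring partial writes for
                        // brevity), then wait for the response event
                        ch.write(ByteBuffer.wrap(
                            "GET / HTTP/1.1\r\nHost: backend.example.com\r\n\r\n"
                                .getBytes("US-ASCII")));
                        key.interestOps(SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        // response bytes are ready: hand them off to
                        // response processing without blocking this thread
                        buf.clear();
                        if (ch.read(buf) < 0) {
                            ch.close();
                        }
                    }
                }
            }
        }
    }

The point is that request sending and response processing are just two
events on the same connection, handled whenever the socket is actually
ready, instead of one thread camping on the connection in between.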

> same time asynchronous HTTP transport may easily succumb to the
> same problem when using a dedicated worker thread per HTTP request
> / HTTP connection.

Sure. NIO is no silver bullet: obviously the processing of requests
and replies also needs to move away from the one-thread-per-request
strategy.

-+ Tatu +-


