tomcat-dev mailing list archives

From Christopher Schultz <>
Subject Re: Connectors, blocking, and keepalive
Date Thu, 27 Feb 2014 17:56:43 GMT

On 2/25/14, 3:31 AM, Mark Thomas wrote:
> On 25/02/2014 06:03, Christopher Schultz wrote:
>> All,
>> I'm looking at the comparison table at the bottom of the HTTP
>> connectors page, and I have a few questions about it.
>> First, what does "Polling size" mean?
> Maximum number of connections in the poller. I'd simply remove it from
> the table. It doesn't add anything.

Okay, thanks.

>> Second, under the NIO connector, both "Read HTTP Body" and "Write
>> HTTP Response" say that they are "sim-Blocking"... does that mean
>> that the API itself is stream-based (i.e. blocking) but that the
>> actual under-the-covers behavior is to use non-blocking I/O?
> It means simulated blocking. The low level writes use a non-blocking
> API but blocking is simulated by not returning to the caller until the
> write completes.

That's what I was thinking. Thanks for confirming.
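The simulated-blocking technique Mark describes can be sketched in a few lines. This is a minimal illustration, not Tomcat's actual poller code: the channel must already be in non-blocking mode, and the method refuses to return until every byte has been written, parking on a selector whenever the socket's send buffer is full.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class SimulatedBlockingWrite {
    // Write the whole buffer through a non-blocking channel, but do not
    // return to the caller until every byte has been written -- so the
    // caller sees ordinary blocking-stream semantics.
    public static void writeFully(SocketChannel channel, ByteBuffer buf)
            throws IOException {
        try (Selector selector = Selector.open()) {
            // register() requires the channel to be non-blocking already
            channel.register(selector, SelectionKey.OP_WRITE);
            while (buf.hasRemaining()) {
                channel.write(buf);        // may write 0..n bytes
                if (buf.hasRemaining()) {
                    selector.select();     // park until writable again
                    selector.selectedKeys().clear();
                }
            }
        }
    }
}
```

The select-and-retry loop is the "overhead associated with the process" mentioned above: a true blocking write would simply park in the kernel instead.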

>> Is it important to make that distinction, since the end user (the
>> code) can't tell the difference?
> The end user shouldn't be able to tell the difference. It is important
> and it indicates that there is some overhead associated with the process.

Aah, okay. A "true" blocking read or write would be more efficient, but
you can't have both blocking and non-blocking operations on a connection
after it's been established?
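On the "can't have both" point: one concrete JDK-level reason (my illustration, not from the thread) is that once a channel is registered with a selector, `configureBlocking(true)` throws `IllegalBlockingModeException`, so the connector cannot simply flip a polled connection back to true blocking mode:

```java
import java.io.IOException;
import java.nio.channels.IllegalBlockingModeException;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class BlockingModeDemo {
    // Returns true if a selector-registered channel can be switched back
    // to blocking mode. The JDK forbids this, which is one reason
    // blocking semantics must be simulated on top of non-blocking IO.
    public static boolean canSwitchBack(Selector selector) throws IOException {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);
        try {
            pipe.source().configureBlocking(true);
            return true;
        } catch (IllegalBlockingModeException e) {
            return false;   // expected: registration pins non-blocking mode
        }
    }
}
```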

>> Third, under "Wait for next Request", only the BIO connector says 
>> "blocking". Does "Wait for next Request" really mean 
>> wait-for-next-keepalive-request-on-the-same-connection? That's the
>> only thing that would make sense to me.
> Correct.


>> Fourth, the "SSL Handshake" says non-blocking for NIO but blocking
>> for the BIO and APR connectors. Does that mean that SSL handshaking
>> with the NIO connector is done in such a way that it does not
>> tie-up a thread from the pool for the entire SSL handshake and
>> subsequent request? Meaning that the thread(s) that handle the SSL
>> handshake may not be the same one(s) that begin processing the
>> request itself?
> Correct. Once request processing starts (i.e. after the request
> headers have been read) the same thread is used. Up to that point,
> different threads may be used as the input is read (with the NIO
> connector) using non-blocking IO.

Good. Are there multiple stages of SSL handshaking (I know there are at
the TCP/IP and SSL levels themselves -- I mean in the Java code to set it
up) where multiple threads could participate -- serially, of course --
in the handshake? I want to develop a pipeline diagram and want to make
sure it's accurate. If the (current) reality is that a single thread
does the SSL handshake and then another thread (possibly the same one)
handles the actual request, then the diagram will be simpler.

Let me take this opportunity to mention that while I could go read the
code, I've never used Java's NIO package and would probably spend a lot
of time figuring out basic things instead of answering the higher-level
questions I'd like to address here. Not to mention that the
connector-related code is more complicated than one would expect given
the fairly small perceived set of requirements they have (i.e. take an
incoming connection and allocate a thread, then dispatch). It's
obviously far more complicated than that and there is a lot of code to
handle some very esoteric requirements, etc.

I appreciate you taking the time to answer directly instead of
recommending that I read the code. You are saving me an enormous amount
of time. ;)

>> Lastly, does anything change when Websocket is introduced into the
>> mix?
> Yes. Lots.
>> For example, when a connection is upgraded from HTTP to Websocket,
>> is there another possibility for thread-switching or anything like
>> that?
> Yes. Everything switches to non-blocking mode (or simulated
> non-blocking in the case of BIO).
>> Or is the upgrade completely-handled by the request-processing
>> thread that was already assigned to handle the HTTP request?
> The upgrade process is handled by the request processing thread but
> once the upgrade is complete (i.e. the 101 response has been returned)
> that thread returns to the pool.

Okay, so the upgrade occurs and the remainder of the request gets
re-queued. Or, rather, a thread is re-assigned when an IO event occurs.
Is there any priority assigned to events, or are they processed
essentially serially, in the order that they occurred -- that is,
dispatched to threads from the pool in the order that the IO events arrived?
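To make the dispatch model in the question concrete, here is a minimal sketch (again, not Tomcat's actual poller) of the usual selector pattern: ready channels are reported by one `select()` call and would be handed to worker threads in exactly the order the selected-key set is iterated, with no priority scheme:

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class PollerSketch {
    // Collect channel names as the selector reports them readable; a real
    // poller would hand each one to a worker thread at this point, so
    // dispatch order is simply selected-key iteration order.
    public static List<String> pollOnce(Selector selector) throws IOException {
        List<String> dispatched = new ArrayList<>();
        selector.select();    // block until at least one channel is ready
        Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
        while (keys.hasNext()) {
            SelectionKey key = keys.next();
            keys.remove();
            dispatched.add((String) key.attachment());  // "dispatch"
        }
        return dispatched;
    }
}
```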

>> Also, (forgive my Websocket ignorance) once the connection has been
>> upgraded for a single request, does it stay upgraded or is the next
>> (keepalive) request expected to be a regular HTTP request that can
>> also be upgraded?
> The upgrade is permanent. When the WebSocket processing ends, the
> socket is closed.

Okay, so if a client played its cards right, it could send a traditional
HTTP request with keepalive, make several more requests over the same
connection, and then finally upgrade to Websocket for the final request.
After that, the connection is terminated entirely.

There is an implication there: if you want to use Websocket, don't use
it for tiny request/response exchanges, because performance will
actually drop. One would be foolish to "replace" plain-old HTTP with
Websocket while still treating the two the same.

>> In the event that the request "stays upgraded", does the connection
>> go back into the request queue to be handled by another thread, or
>> does the current thread handle subsequent requests (e.g. BIO-style
>> behavior, regardless of connector)?
> Either. It depends how the upgrade handler is written. WebSocket uses
> Servlet 3.1 NIO so everything becomes non-blocking.

I think you answered this question above: the connection is closed
entirely, so there will never be another "next request" on that
connection, right?

>> I'm giving a talk at ApacheCon NA comparing the various connectors
>> and I'd like to build a couple of diagrams showing how threads are 
>> allocated, cycled, etc. so the audience can get a better handle on
>> where the various efficiencies are for each, as well as what each 
>> configuration setting can accomplish. I think I should be able to 
>> re-write a lot of the Users' Guide section on connectors (currently
>> a mere four paragraphs) to help folks understand what the
>> options are, why they are available, and why they might want to use
>> one over the other.
> I'd really encourage you to spend some time poking around in the
> low-level connector code debugging a few sample requests through the
> process.

I will definitely do that, but I wanted to get a mental framework before
I did. There's a lot of code in there... even the BIO connector isn't as
fall-off-a-log simple as one might expect.

