tomcat-users mailing list archives

From	André Warnier
Subject Re: Concept doubt about threads & servlets
Date Sat, 29 Sep 2012 21:23:14 GMT
Jose María Zaragoza wrote:
> Thanks
>>> Yes, but I am curious how you would make a browser send several
>>> requests in a row on the
>>> same connection, without waiting for the first request to return a
>>> response.
> For example, with AJAX calls
> Well, I suppose that different AJAX calls go through the same TCP
> connection, but I'm not sure.

Since this is not (to my knowledge) described in any specification, different browsers can
be doing this differently.

>> It is called HTTP pipe-lining and Tomcat supports it (and has done for as long as
>> I can remember).

I did not know this, and I was under the impression that when they make several requests 
in parallel, browsers open multiple TCP connections to the server.
But note that the same Wikipedia article seems to say, in "Implementation in web 
browsers", that most browsers do not use pipelining anyway.
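To make the "pipelining" idea concrete, here is a small sketch of what a pipelining client puts on the wire: the second request is written immediately after the first, on the same connection, without waiting for any response. The host name and paths are made-up examples, not anything from this thread.

```java
// Sketch of HTTP pipelining on the wire: several requests are written
// back-to-back on one connection before any response is read.
public class PipelineSketch {

    // Build the byte stream a pipelining client would send: each request
    // follows the previous one immediately, with no wait for a response.
    static String buildPipelined(String host, String... paths) {
        StringBuilder sb = new StringBuilder();
        for (String path : paths) {
            sb.append("GET ").append(path).append(" HTTP/1.1\r\n")
              .append("Host: ").append(host).append("\r\n")
              .append("Connection: keep-alive\r\n")
              .append("\r\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Two pipelined requests on one (imaginary) connection.
        System.out.print(buildPipelined("example.com", "/a", "/b"));
    }
}
```

The server is then expected to send the two responses back in the same order, which is exactly the ordering constraint discussed further down.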

> OK. So, ALL requests over the same TCP connection are stacked up into
> a buffer and processed one-by-one, aren't they?
> Please confirm this if it's right.
>> The behaviour will be as you describe *if* the client only uses one thread, but most
>> clients will use multiple threads.
> Umm, I thought that persistent connections were the default behaviour.
> I don't know if AJAX calls use persistent connection or open a new
> connection for each request

I don't think that this is specified anywhere, so different browsers (or other HTTP 
clients) may act differently.

> All my doubts are about the relationship between threads -
> connections - requests :
> one thread per connection, and that thread processes all requests one-by-one ?
> one thread per connection, and other threads are created to process
> each request ?

It can be either.  See this article :

It can also get a bit more complicated if you have another front-end server in front of 
Tomcat, because that front-end may keep a number of connections open to Tomcat (for 
efficiency reasons), and depending on Tomcat's configuration this may or may not result in 
Tomcat threads being held waiting on each of these connections.
And the front-end may accept requests from 1 or more client connections, and decide to 
distribute them over this pool of Tomcat connections.
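The pooling behaviour described above can be sketched in a few lines. This is a hypothetical illustration, not Tomcat or mod_proxy code: the "connections" are placeholder strings, and the pool size is an assumption.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of a front-end keeping a small pool of persistent
// back-end connections and multiplexing many client requests over them.
public class BackendPool {
    private final BlockingQueue<String> pool;

    BackendPool(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add("tomcat-conn-" + i); // stands in for an open TCP connection
        }
    }

    // Borrow a connection, "send" the request on it, then return it so
    // another client request can reuse the same back-end connection.
    String forward(String request) throws InterruptedException {
        String conn = pool.take(); // blocks if all connections are busy
        try {
            return request + " via " + conn;
        } finally {
            pool.put(conn);
        }
    }
}
```

With a pool like this, two requests from two different browsers can end up travelling over the same front-end-to-Tomcat connection, one after the other.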

Initially, HTTP 1.0 was a protocol where one client opens a connection to the server and 
sends one request, resulting in the server processing that request and sending back a 
response. Then either side may decide to drop the connection and forget all about it, 
until a new connection and request comes in.
And this for however many connections/requests in parallel.
Then comes the "keep-alive" feature of HTTP 1.1, which is an attempt at optimising this, 
by having the client/server connection stay open for a while (or for a number of 
requests) after the first request, in order to save the overhead of establishing and 
tearing down a TCP connection each time, if the client sends a number of requests in 
rapid succession.

But the general logic stays the same, and to my knowledge there is nothing in the 
specifications that specifies /how/ the webserver must handle internally a series of 
requests that come in. From the server's point of view, each HTTP request is supposed to 
be independent of the other HTTP requests that precede it or follow it in time.
(Except for such "pipelined requests", which should be processed in the order in which 
they come in.)

The basic point is that you cannot /rely/ on any specific server behaviour in this area 
when you write a web application.  Your application must be written so that it reacts 
properly if the requests are not processed in the order in which you /think/ the client 
is sending them.
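One way to build that kind of order-independence is to keep any shared state in a form where the final result does not depend on request ordering. A minimal sketch, assuming a hypothetical hit counter shared by request handlers (the `handleRequest` method and paths are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of order-independent request handling: the shared state is
// updated atomically, so the final result is the same no matter in which
// order (or on which threads) the "requests" are processed.
public class OrderIndependent {
    static final AtomicInteger hits = new AtomicInteger();

    static void handleRequest(String path) {
        hits.incrementAndGet(); // safe even if requests arrive reordered
    }

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            final int n = i;
            Thread t = new Thread(() -> handleRequest("/req/" + n));
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) t.join();
        System.out.println(hits.get()); // always 10, whatever the ordering
    }
}
```

The same principle applies to servlet code: anything that would only work if request N is fully processed before request N+1 arrives is a latent bug.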

In Tomcat, one request will at some point be processed by one thread, which will run 
whatever servlet code needs to be run to answer that request.  So there is some kind of 
link : one request = one thread = one servlet instance being run.
Now whether that /same/ Tomcat thread will process the next request from the same client 
or not, that is - in a general sense - unpredictable.
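That dispatch model can be sketched with a plain thread pool (this is an analogy, not Tomcat internals; the pool size and request count are arbitrary):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the "one request = one thread" model: each request is handed
// to some worker thread from a pool, and which worker serves which
// request is not predictable in general.
public class WorkerPoolSketch {

    static List<String> serve(int requests, int poolSize) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(poolSize);
        List<Future<String>> futures = new ArrayList<>();
        for (int i = 0; i < requests; i++) {
            final int req = i;
            // Each task plays the role of one HTTP request; its result
            // records which worker thread ran the "servlet" code.
            futures.add(workers.submit(
                () -> "request-" + req + " on " + Thread.currentThread().getName()));
        }
        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) results.add(f.get());
        workers.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        serve(8, 4).forEach(System.out::println);
    }
}
```

Running this a few times typically shows the same "client" being served by different worker threads on different runs, which is the unpredictability described above.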
