httpd-dev mailing list archives

From "Peter J. Cranstone" <>
Subject RE: Multi-threaded proxy? was Re: re-do of proxy request body handling - ready for review
Date Wed, 02 Feb 2005 18:25:44 GMT

Who is trying to serve up 2GB files?

Peter J. Cranstone

-----Original Message-----
From: Ronald Park [] 
Sent: Wednesday, February 02, 2005 11:24 AM
Subject: Re: Multi-threaded proxy? was Re: re-do of proxy request
body handling - ready for review

Imagine, just as a wild theoretical scenario (:D), that you have
the following setup:

Apache -> (proxy) -> Squid -> (cache miss) -> Apache -> (docroot)

Where the back-end Apache serves up large files (in the 2G range)
(and, yes, there are far more files than can be effectively cached).
Now imagine you have thousands of clients trying to get those files,
some of which have very slow connections.  And also imagine that
there are more front-end Apache instances than back-ends.
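A minimal sketch of the front-end tier in that chain, using standard mod_proxy directives; the hostname, port, and path are made up for illustration, not taken from any real setup:

```apache
# Front-end Apache: reverse-proxy large-file requests to the Squid tier.
# squid.internal:3128 and /files/ are hypothetical values.
ProxyPass        /files/ http://squid.internal:3128/files/
ProxyPassReverse /files/ http://squid.internal:3128/files/
# Squid, on a cache miss, in turn fetches from the back-end Apache's docroot.
```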

The back-end Apache could quickly deliver the file through to
the front-end Apache's mod_proxy if it weren't held up waiting
for each chunk to be spoon-fed over to the slow client.  Even with
relatively good clients, a number of them are likely to tie up a
back-end thread for longer than they would if the front-end
gobbled up the proxy response faster.

The problem with gobbling up the whole proxy response at once,
though, is that for these huge files the original client might
not see any response for a noticeable amount of time.  Worse, an
impatient client might give up and reissue the request,
tying up another set of threads (and internal bandwidth). :(


On Wed, 2005-02-02 at 18:51 +0100, Mladen Turk wrote:
> Paul Querna wrote:
> > 
> > One thought I have been tossing around for a long time is tying the 
> > proxy code into the Event MPM.  Instead of a thread blocking while it 
> > waits for a backend reply, the backend server's FD would be added to the
> > Event Thread, and then when the reply is ready, any available worker 
> > thread would handle it, like they do new requests.
> > 
> > This would work well for backend servers that might take a second or two
> > for a reply, but it does add at least 3 context switches.  (in some use
> > cases this would work great, in others, it would hurt performance...)
> >
> I don't think it would give any benefit. Well, perhaps only on
> forward proxies could it spare some keep-alive connections.
> Regards,
> Mladen.
Ronald Park <>
