httpd-dev mailing list archives

From Brian Candler <>
Subject Re: Should fastcgi be a proxy backend?
Date Tue, 07 Mar 2006 18:47:07 GMT
On Sun, Mar 05, 2006 at 03:06:09PM -0800, Garrett Rooney wrote:
> First of all, mod_proxy_balancer really assumes that you can make
> multiple connections to back end fastcgi processes at once.  This may
> be true for some things that speak fastcgi (python programs that use
> flup to do it sure look like they'd work for that sort of thing, but I
> haven't really tried it yet), but in general the vast majority of
> fastcgi programs are single threaded, non-asynchronous, and are
> designed to process exactly one connection at a time. This is sort of
> a problem because mod_proxy_balancer doesn't actually have any
> mechanism for coordinating between the various httpd processes about
> who is using what backend process.

I'm not sure what you mean there, in particular by 'assumes that you can
make multiple connections to back end fastcgi processes'.

What I'm familiar with is apache 1.x with mod_fcgi. In that case, the
typical fastcgi program does indeed handle a single request at once, but you
have a pool of them, analogous to httpd with its pool of worker processes.

Multiple front-end processes can open sockets to the pool, and each
connection is assigned to a different worker. In this regard, a fastcgi
process is just like an httpd process. I don't see why mod_proxy_foo can't
open multiple fastcgi connections to the pool, in the same way as it could
open multiple http connections to an apache 1.x server.

(You could think of the fastcgi protocol as just a bastardised form of HTTP.
I wonder why they didn't just use HTTP as the protocol, and add some
X-CGI-Environment: headers or somesuch)
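To make the parallel concrete, here is a rough sketch of the two wire formats side by side. The record and name-value layouts follow the FastCGI specification; the `X-CGI-Environment-*` header is the hypothetical HTTP alternative described above, not anything real.

```python
import struct

FCGI_VERSION_1 = 1
FCGI_PARAMS = 4  # record type that carries the CGI environment variables

def encode_name_value(name: bytes, value: bytes) -> bytes:
    """FastCGI length-prefixed name-value pair.

    Lengths under 128 fit in one byte; longer ones use four bytes with
    the high bit set.
    """
    out = b""
    for length in (len(name), len(value)):
        if length < 128:
            out += bytes([length])
        else:
            out += struct.pack(">I", length | 0x80000000)
    return out + name + value

def encode_record(request_id: int, content: bytes) -> bytes:
    """8-byte FastCGI record header (version, type, request id,
    content length, padding, reserved) followed by the content."""
    header = struct.pack(">BBHHBB", FCGI_VERSION_1, FCGI_PARAMS,
                         request_id, len(content), 0, 0)
    return header + content

params = (encode_name_value(b"SCRIPT_NAME", b"/app")
          + encode_name_value(b"REQUEST_METHOD", b"GET"))
record = encode_record(1, params)

# The same environment expressed as the hypothetical HTTP variant would
# just be ordinary headers on a plain request:
http_equiv = (b"GET /app HTTP/1.1\r\n"
              b"X-CGI-Environment-REQUEST_METHOD: GET\r\n\r\n")
```

The content is equivalent either way; FastCGI just trades HTTP's textual framing for binary records with a fixed header.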

> Second, mod_proxy_balancer doesn't (seem to) have any real mechanism
> for adding back end processes on the fly, which is something that
> would be really nice to be able to do.  I'd eventually love to be able
> to tell mod_proxy_fcgi that it should start up N back end processes at
> startup, and create up to M more if needed at any given time. 
> Processes should be able to be killed off if they become nonresponsive
> (or probably after processing a certain number of requests)

... sounds very similar to httpd worker process management (for non-threaded
MPMs, at least).

> , and they
> should NOT be bound up to a single httpd worker process.

In that case, is the underlying problem that mod_proxy_foo shouldn't really
hold open a *persistent* connection to the fastcgi worker pool, otherwise it
will tie up a fastcgi worker without good reason, preventing it from doing
work for someone else?
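The pool management Garrett describes (start N workers, grow to M on demand, retire workers after some number of requests) could be sketched roughly like this. The class and its parameters are hypothetical illustration, not any actual mod_proxy_fcgi API:

```python
import subprocess

class FcgiPool:
    """Toy model of the manager described above: `minimum` workers started
    up front, growth on demand up to `maximum`, and a worker retired once
    it has served `max_requests` connections."""

    def __init__(self, spawn, minimum=2, maximum=5, max_requests=500):
        self.spawn = spawn            # callable that launches one worker process
        self.maximum = maximum
        self.max_requests = max_requests
        self.idle = [self._new_worker() for _ in range(minimum)]
        self.busy = []

    def _new_worker(self):
        return {"proc": self.spawn(), "served": 0}

    def acquire(self):
        # Grow the pool on demand, but never past the configured maximum.
        if not self.idle and len(self.busy) < self.maximum:
            self.idle.append(self._new_worker())
        worker = self.idle.pop()      # IndexError here means the pool is saturated
        self.busy.append(worker)
        return worker

    def release(self, worker):
        self.busy.remove(worker)
        worker["served"] += 1
        if worker["served"] >= self.max_requests:
            worker["proc"].terminate()    # retire after its request quota
        else:
            self.idle.append(worker)

# Stand-in worker process; a real pool would exec the fastcgi application.
pool = FcgiPool(lambda: subprocess.Popen(["sleep", "60"]),
                minimum=1, maximum=2, max_requests=1)
w = pool.acquire()
pool.release(w)   # served its one allowed request, so it is terminated
```

The important property is the one in the thread: the pool is shared state, not something bound to a single front-end worker, which is exactly the coordination mod_proxy_balancer lacks.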

> So is there some reason I'm missing that justifies staying within the
> proxy framework

Maybe. You might want to consider the case where the fastcgi server is a
*remote* pool of workers, where the fastcgi messages are sent over a TCP/IP
socket, rather than a local Unix domain socket. In that case, some remote
process is responsible for managing the pool, and this is arguably very
similar to the proxy case.

OTOH, the typical approach when using such a remote pool is to have a
different port number for each fastcgi application, since I'm not sure that
the fastcgi protocol itself has any way of passing down a URL or partial
URL which could identify the particular worker of interest. If it did, a
single process listening on a single socket could manage a number of
different applications, each with a different pool of workers. In any case,
though, it probably needs a configured list of applications, as it will need
some parameters for each one (e.g. minimum and maximum size of pool, as you
suggest).

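That configured list might look something like the following. The registry, hosts, ports, and pool parameters here are all made up for illustration; the point is just that with no URL routing inside the FastCGI protocol, the mapping from URL prefix to remote pool has to live in the front end's configuration:

```python
import socket

# Hypothetical per-application registry: each application gets its own
# listen port on the remote pool host, plus its own pool parameters.
APPLICATIONS = {
    "/blog":  {"host": "10.0.0.5", "port": 9001, "min": 2, "max": 8},
    "/store": {"host": "10.0.0.6", "port": 9002, "min": 4, "max": 16},
}

def backend_for(url_path):
    """Pick the remote worker pool whose mount point prefixes the URL."""
    for prefix, app in APPLICATIONS.items():
        if url_path.startswith(prefix):
            return app
    raise LookupError("no fastcgi application mounted at %s" % url_path)

def connect(url_path):
    """Open a TCP connection to the pool serving this URL."""
    app = backend_for(url_path)
    return socket.create_connection((app["host"], app["port"]))
```

If the protocol could carry a URL prefix itself, a single listener could dispatch to all the applications and this table would collapse to one host:port.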
Just a few thoughts...

