httpd-users mailing list archives

From Joshua Slive <>
Subject Re: [users@httpd] How to limit simultaneous CGI processes?
Date Mon, 27 Sep 2004 15:50:04 GMT
On Mon, 27 Sep 2004 14:57:39 +0100, Phil Endecott
<> wrote:
> Yes, that's a good idea.  My concern about implementing a lock in the
> CGI process was that if it terminates abnormally it could fail to
> release its lock, but I think that flock() is released automatically
> when the process terminates (by whatever means), so it avoids this problem.
> But is flock() fair?  I.E. will the process that has been waiting
> longest be next to get the lock?  And how can I scale from one process
> at a time to n (for e.g. n=3) processes at a time?  (Presumably with 3
> files and something like a select(), but then how do I maintain
> fairness?) (I can probably work this out given time, but if anyone knows
> the answer that would be great...)

These are difficult problems.  As you say, they are all solvable, but
will require some very careful design and planning.  I'm no expert,
but I'm sure there are computer-sci people who could go on at length
about how to handle queues like this.
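As an aside, here is one rough sketch of the n-lock-file idea mentioned above (hypothetical, not from this thread; the slot count, lock-file names, and path are made up). Each CGI process first tries every slot non-blockingly, and only blocks if all are busy. As noted in the original mail, flock() locks are dropped automatically when the process exits, however it dies. Note that flock() makes no FIFO-fairness guarantee; the kernel picks which waiter gets the lock.

```python
# Hypothetical sketch: approximate an n-slot semaphore with flock().
# Slot count, lock-file names, and directory are illustrative.
import fcntl
import os

N_SLOTS = 3            # allow at most 3 concurrent CGI processes
LOCK_DIR = "/tmp"      # assumed location for the lock files

def acquire_slot():
    """Return an open file object holding one of the N slot locks."""
    busy = []
    # First pass: try every slot without blocking.
    for i in range(N_SLOTS):
        f = open(os.path.join(LOCK_DIR, "cgi-slot-%d.lock" % i), "w")
        try:
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            for h in busy:
                h.close()
            return f
        except OSError:
            busy.append(f)
    # All slots taken: block on the first one.  This is not FIFO-fair;
    # the kernel decides which waiting process wins the lock.
    f = busy[0]
    fcntl.flock(f, fcntl.LOCK_EX)
    for h in busy[1:]:
        h.close()
    return f

lock = acquire_slot()
# ... do the expensive CGI work here ...
lock.close()  # releases the flock; also released if the process dies
```

The lock is tied to the open file description, so no explicit cleanup is needed on abnormal termination, which was the original concern.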

Another possible solution to your problem is to set up a second Apache
running on another port and use it exclusively for the CGI script.
Then you can run that Apache with a very small MaxClients and let the
OS handle the queuing (check the ListenBacklog directive).  You can
pass requests to this other server either directly, or by proxying
them from the main server with mod_proxy (ProxyPass).
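A minimal sketch of such a setup might look like this (the port, paths, and directive values are illustrative, not from the original mail):

```apache
# --- Dedicated CGI instance, started with its own config ---
Listen 8081
# Keep concurrency low; excess requests wait in the OS accept backlog.
MaxClients 3
ListenBacklog 50
ScriptAlias /cgi-bin/ "/usr/local/apache2/cgi-bin/"

# --- In the main server's config, forward CGI requests there ---
ProxyPass        /cgi-bin/ http://localhost:8081/cgi-bin/
ProxyPassReverse /cgi-bin/ http://localhost:8081/cgi-bin/
```

The effect is that at most MaxClients CGI processes run at once, and the OS queues the rest in connection order, which sidesteps the fairness question entirely.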


The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:> for more info.
