httpd-dev mailing list archives

From "Paul J. Reder" <>
Subject Re: plz vote on tagging current CVS as APACHE_2_0_19
Date Fri, 22 Jun 2001 22:45:37 GMT

wrote:
> You're going to lose information between restarts, or you are going to
> require copying large chunks of shared memory on graceful restarts.

If the requirement is to preserve exactly all of the individual values, then yes, the
info will either be lost or lots of copying will be required. If the requirement is
to collect overall Apache statistics plus statistics for the active workers, then no,
info will not be lost and no copying is required. The values in the individual workers
that are going away are summed, and the totalled Apache results are stored at a higher
level. Those workers no longer exist.
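The roll-up described above can be sketched as follows. This is an illustrative fragment, not the actual scoreboard code; the structure and function names (worker_score, global_score, retire_workers) are hypothetical:

```c
/* Hypothetical sketch: before a worker slot is retired, its counters
 * are summed into a server-wide total, so no statistics are lost when
 * the slot itself goes away. */
#include <stddef.h>

typedef struct {
    unsigned long requests;       /* requests served by this worker */
    unsigned long bytes_served;   /* bytes sent by this worker */
} worker_score;

typedef struct {
    unsigned long total_requests;
    unsigned long total_bytes_served;
} global_score;

static void retire_workers(worker_score *workers, size_t n,
                           global_score *totals)
{
    for (size_t i = 0; i < n; i++) {
        /* Sum the dying worker's values into the higher-level record. */
        totals->total_requests     += workers[i].requests;
        totals->total_bytes_served += workers[i].bytes_served;
        workers[i].requests     = 0;
        workers[i].bytes_served = 0;
    }
}
```

Once the totals are captured, the per-worker slots can be reused or freed with nothing lost.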

> You are either going to leak shared memory like a sieve, or you are going
> to need to copy the data.  At what point are you planning to free the
> shared memory that was allocated during the first starting of the server?

During perform_idle_server_maintenance, the code checks whether a shmem chunk is
scheduled for cleanup and has no processes left under it. If so, the shmem segment
is freed as part of the perform_idle_server_maintenance garbage collection. No leak.
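A minimal sketch of that garbage-collection pass (illustrative only; the chunk structure, field names, and the plain free() stand in for the real shared-memory bookkeeping):

```c
/* Illustrative sketch: walk the list of shmem chunks and free any chunk
 * that is flagged for cleanup and has no live processes under it.
 * Everything else is left in place. */
#include <stdlib.h>

typedef struct shm_chunk {
    int scheduled_for_cleanup;   /* set on graceful restart */
    int live_processes;          /* processes still using this chunk */
    struct shm_chunk *next;
} shm_chunk;

/* Returns the new list head with all reclaimable chunks freed. */
static shm_chunk *gc_shm_chunks(shm_chunk *head)
{
    shm_chunk **pp = &head;
    while (*pp) {
        shm_chunk *c = *pp;
        if (c->scheduled_for_cleanup && c->live_processes == 0) {
            *pp = c->next;
            free(c);             /* stand-in for the real shmem free */
        }
        else {
            pp = &c->next;
        }
    }
    return head;
}
```

Because the check runs on every maintenance pass, a chunk is reclaimed as soon as its last process exits rather than lingering until shutdown.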

> > You will need to be able to start up a new process per row to take advantage
> > of these dribs and drabs that are appearing. With my design, as soon as there
> > is ThreadsPerChild number of workers on the free list, and there is a free
> > process, I can start a new process with ThreadsPerChild workers.
> What do you mean, and there is a free process?  Does that mean that if I
> am in a position where I have MaxClients = 5, and I have 5 processes with
> one thread in a long-lived request, that I won't start a new process?
> That won't work.  You need to be able to start a new process to replace an
> old one before an old process has died, otherwise, in pathological cases,
> you will have a dead server.

My feeling is that there should be another config directive, call it MaxTotalWorkers,
so you could define MaxClients=20, ThreadsPerChild=100, MaxTotalWorkers=1000. You
could then only start up 10 processes with 100 workers each, but if some of the
processes dropped below 100 workers (thus below the total of 1000 workers, i.e.
below MaxTotalWorkers - ThreadsPerChild) you could start another process.
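The spawn check that proposal implies could be sketched like this. The directive names come from the post; the function itself is hypothetical:

```c
/* Hypothetical sketch of the MaxTotalWorkers spawn decision: a new
 * process of ThreadsPerChild workers may start only if the process
 * limit is not hit and the resulting worker total would stay within
 * MaxTotalWorkers, i.e. the current total is at or below
 * MaxTotalWorkers - ThreadsPerChild. */
static int can_start_process(int current_workers, int current_procs,
                             int max_clients, int threads_per_child,
                             int max_total_workers)
{
    if (current_procs >= max_clients)
        return 0;   /* never exceed the configured process count */
    return current_workers <= max_total_workers - threads_per_child;
}
```

With the example values (MaxClients=20, ThreadsPerChild=100, MaxTotalWorkers=1000), ten full processes put the total at 1000 and block new spawns; once attrition drops the total to 900 or below, an eleventh process may start.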

You should never be able to start more processes than the config specifies. The
user set the config for a reason. Creating more processes than the config allows
violates the principle of least astonishment, in my opinion. Let the user
define what their upper bound is. They know their system. Allow them to configure
the number of extra transitional processes that they determine their system
can handle. We should not be guessing that we can start extra processes that
weren't configured. IMHO this is just asking for trouble.

> > What am I missing here? There is no overrun of the configured values. And the
> > algorithm isn't any more complex.
> The algorithm is MUCH more complex.  I know it is, because I implemented
> my entire patch in 155 lines.  That is the size of the entire patch.  I
> need to test it more, but I expect to post later today or tomorrow.

The specific algorithm I was talking about here was the one to determine if
another process with workers can be started up. Granted, the list processing
algorithm is more complex than the static table algorithm, but I think the
gains are worth it.

Now pardon me, I am going to cease debating for a bit and work on getting the
code done and tested. We'll let the code and the possibilities do the talking.

Paul J. Reder
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein
