httpd-dev mailing list archives

From Ryan Bloom <>
Subject Re: [Fwd: [Patch]: Scoreboard as linked list.]
Date Fri, 03 Aug 2001 16:42:18 GMT
On Friday 03 August 2001 06:28, Paul J. Reder wrote:
> Unfortunately, the first version of the reply went directly to Ryan. Here
> is a copy...
> Ryan Bloom wrote:
> > -1.  As I have stated multiple times, if this uses a mutex to lock the
> > list whenever something walks the scoreboard, I can't accept it.  It will
> > kill the performance for modules that I have.
> I'm very impressed that you were able to get the patch, apply it, build it
> and test it in the 30 minutes between my post and yours :)

That's easy to explain.  I didn't apply it.  I read it.  I happen to be VERY well-
versed in how Apache 2.0 works, so I tend to be able to read patches and
determine most of their effects.  If my initial reading doesn't show me
any problems, then I apply and test it.  It took me about 30 minutes to read through
the code.  It helps that I was trying to add a feature to the configure script and
couldn't figure out how to do it, so I was checking e-mail every minute or so.

> I do understand your concern, but I feel you are overreacting. When we had
> this discussion you indicated this concern, but also indicated a willingness
> to test the code if I wrote it. It is now written; I would appreciate you
> actually taking a look at it before panning it outright.

No.  I specifically stated that if we had to lock the mutex every time we had to
walk the scoreboard, I would have to veto the patch.  Since this patch very
obviously has that requirement, I have lived up to my word.

> There are three times when locking can occur:
>    1. When workers or processes come and go.
>    2. When p_i_s_m checks for idleness.
>    3. When the scoreboard is walked for status.
> 1 occurs rarely (even in my pathological test scenario of MRPC = 3000 under
> extreme abuse). In fact, 1 can be eliminated almost entirely if you set
> MRPC to 0, the exception being the replacement of segfaulted processes. Due
> to its infrequency and the short duration of the locking time, this should
> not cause any problem, and in my testing it did not seem to, since my
> solution performed at least as well as yours.
> 2 occurs once per second. This is a quick walk to check the status flag and
> tally some counts. This could be optimized in several ways *if* it proves
> to be a problem.
>      A) It could avoid locking if MRPC = 0.
>      B) It could use a read lock to permit simultaneous status walks.
>      C) It could use finer granularity locking to allow multiple walkers
> within the SB. This code is already optimized, compared to the current
> code, since it only walks through the workers in the active list. Workers
> on the free list aren't walked, and therefore do not incur that overhead.
> 3 may occur frequently (in your case) or infrequently (in everyone else's
> ;) ). Locking occurs in two parts. This can be optimized in a number of
> ways *if* it proves to be a problem.
>      A-C) As above.
>      D) The walking code itself could be optimized to do less work during
> the locked phase. For example, the mod_status code currently does all of
> its network writes in the loop while holding a lock. The code could be
> rewritten to memcpy the parts into a local buffer, then write the buffer to
> the network outside the locking region.

> Obviously we have a difference of opinion. The only way to resolve this is
> to test it. I have done my part in the testing process. Vetoing it without
> testing it is bad form.

Nonsense, people veto without testing all the time.  It can suck, and I have
complained about it in the past as well, but it happens constantly.

I have other concerns that I still haven't finished thinking through, so I haven't
posted them yet, but I will continue to analyse the patch and post comments as
I finish thinking them through.

Ryan Bloom               
Covalent Technologies
