httpd-dev mailing list archives

From "Paul J. Reder" <rede...@raleigh.ibm.com>
Subject Re: [Patch]: Scoreboard as linked list.
Date Fri, 03 Aug 2001 13:24:12 GMT
Bill Stoddard wrote:
> 
> >
> > Ryan Bloom wrote:
> >
> > >My modules are walking the scoreboard on every request to gather information
> > >that is only stored there.  Any locking will kill the performance of those modules.
> > >
> > that sounds kinda ugly performance-wise anyway,
> > just out of interest, why does your module need scoreboard info on each
> > request?
> >
> > couldn't we use a rw_lock/spin lock  instead of a mutex? that wouldn't
> > be as big a hit as a mutex
> >
> > ..Ian
> 
> There are two things impacting performance in this patch. The first is the overhead of
> following pointers.  If you do that on each request, it can add up if you have large numbers
> of concurrent clients. I don't have a feel for the overhead relative to the rest of the
> server though. The additional overhead of following 10,000 pointers may be noise in the
> server. Or maybe not.

The proposed patch does not incur a noticeable amount of extra overhead due to the pointers.
The worker is accessed directly via a pointer in the conn_rec, so no walking or dereferencing
is required. The old code had to compute indexes (and internally convert those indexes to an
actual address via array derefs). It pans out about the same.

For scoreboard walks, the current code loops through row/column indexes and computes addresses.
The proposed patch just follows process/worker pointer chains. It works out about the same.
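A sketch of the two walk styles (loop bounds and list/field names here are illustrative,
not lifted from the patch):

    /* Current: nested index loops over the scoreboard table. */
    for (i = 0; i < num_children; i++)
        for (j = 0; j < threads_per_child; j++)
            examine(&ap_scoreboard_image->servers[i][j]);

    /* Proposed: chase the process chain, then each process's workers. */
    for (ps = process_list; ps != NULL; ps = ps->next)
        for (ws = ps->workers; ws != NULL; ws = ws->next)
            examine(ws);

Either way, every live worker entry gets visited exactly once.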

> 
> The second performance issue is lock contention. Acquiring a lock with no contention is
> fast. If the lock has contention, the performance goes to hell fast. So the suggestion of
> using a rw_lock (spin on multi cpu systems) sounds just right since accesses during normal
> HTTP requests can just acquire the reader lock.

Normal HTTP requests don't need a lock at all. Updating the worker counts is done without a
lock, based on the fact that if a worker is handling a request, it cannot be in the process
of being returned to the free list. The update follows the current pattern of behavior, allowing
the workers to be updated even during a mod_status walk. The worst that can happen is that the
status report is slightly inaccurate for that precise moment.
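In other words, the owning worker is the only writer of its own entry while a request is in
flight, so the write needs no lock; a concurrent mod_status reader at worst sees a value a
moment stale. A sketch of that update (field names illustrative):

    /* Single writer: only the worker that owns ws touches it while it
     * is handling a request, so no lock is taken.  A concurrent
     * mod_status walk may read a momentarily stale value, which is
     * fine for a snapshot report. */
    static void update_worker(worker_score *ws, int new_status)
    {
        ws->status = new_status;
        ws->access_count++;
    }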

As I said in my first response to Ryan, even under very heavy abuse with a pathological
MRPC (MaxRequestsPerChild) of 3000, lock contention was low. If someone can prove to me that
contention is a problem, then we can discuss which of the several alternative optimizations
would be best.

> 
> According to Paul's testing, his patch tends to manage the processes a bit better. At any
> point in time, he has fewer processes active. Need to think about this some to determine
> why that is. If the observation holds up, this is a performance mark in favor of Paul's
> patch as fewer processes means less memory and that's goodness.

My code *does* still reach the user-defined maximum number of processes. It just takes longer
and happens less frequently than with the current code.

> 
> Paul's patch also lets us eliminate HARD_THREAD_LIMIT and HARD_SERVER_LIMIT, which is cool
> IMO. It also lets us not allocate a scoreboard for mod_status if mod_status is not loaded
> (not implemented yet, but the design enables it).
> 
> Benchmarking will tell us what we need to know on the performance front.

Yes, please, benchmark this. Show me where my results were flawed. I didn't see the 
problems in real life that some of you are seeing in theory. Show me the bottlenecks
and we can see if they can be addressed.

-- 
Paul J. Reder
-----------------------------------------------------------
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein
