httpd-dev mailing list archives

From Min Xu <...@cae.wisc.edu>
Subject Re: Strange Behavior of Apache 2.0.43 on SPARC MP system
Date Wed, 12 Feb 2003 21:08:53 GMT
On Wed, Feb 12, 2003 at 10:35:18AM -0800, Justin Erenkrantz wrote:
> --On Wednesday, February 12, 2003 11:52 AM -0600 Min Xu 
> <mxu@cae.wisc.edu> wrote:
> 
> >First, I don't think the disk should be bottleneck in any case,
> >this is because the system has 2GB memory, Solaris's file cache is
> >able to cache all the file content. top shows the following stats:
> 
> The size of memory has nothing to do with the available bandwidth 
> that the memory has.

I didn't say it does. I was trying to rule out the possibility that the
disk is the bottleneck. But you have apparently underestimated the Sun
server's memory system: ours uses Sun's Gigaplane/Sunfire system
interconnect.

> ...
> 
> All MP Sparcs share the same memory backplane.  That's why you hardly 
> ever see performance improvements past 8x CPUs because the memory 
> bandwidth kills you (the CPUs are starved for memory).  Moving to a 
> NUMA architecture might help, but I think that's not a feature 
> UltraSparc or Solaris support.  (I hear Linux has experimental NUMA 
> support now.)

It is indeed the case that Apache's performance doesn't improve much
past 8 CPUs in my experiments. However, whether this is due to limited
memory bandwidth remains to be tested. Also, I am not aware of any
literature supporting the claim that a NUMA architecture would have
higher memory bandwidth.
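One crude way I could test the memory-bandwidth hypothesis is a copy
microbenchmark. A single-threaded sketch in Python (function name and
buffer sizes are arbitrary placeholders; to approximate the aggregate
demand of an MP box, one copy loop per CPU would have to run in parallel
processes):

```python
import time

def measure_copy_bandwidth(size_mb: int = 64, repeats: int = 10) -> float:
    """Crude single-threaded memory-bandwidth estimate (MB/s) via buffer copies."""
    src = bytearray(size_mb * 1024 * 1024)
    start = time.perf_counter()
    for _ in range(repeats):
        dst = bytes(src)  # forces a full copy of the buffer
    elapsed = time.perf_counter() - start
    # Each iteration reads size_mb and writes size_mb, so count both directions.
    return (2 * size_mb * repeats) / elapsed

if __name__ == "__main__":
    print(f"approximate copy bandwidth: {measure_copy_bandwidth():.0f} MB/s")
```

If per-process numbers drop sharply as more copies run concurrently,
that would point at a saturated backplane rather than lock contention.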

> I'd recommend reading http://www.sunperf.com/perfmontools.html.  You 
> should also experiment with mod_mem_cache and mod_disk_cache.

Thanks for the suggestions; I will try mod_mem_cache and
mod_disk_cache.
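If I read the 2.0 docs right, a minimal configuration for that
experiment would look roughly like this (module paths and cache sizes
below are placeholders, not values we've validated):

```apache
# Load the caching modules (Apache 2.0; module paths are illustrative)
LoadModule cache_module     modules/mod_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so

# Serve the document tree from an in-memory cache
CacheEnable mem /
MCacheSize           4096    # total cache size in KBytes
MCacheMaxObjectCount 1000    # max number of cached objects
MCacheMaxObjectSize  65536   # largest cacheable object, in bytes

# Alternatively, mod_disk_cache: CacheEnable disk / plus a CacheRoot
```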

> >To test the context switching hypothesis and the backplane
> >hypothesis I changed all files in the repository to 2 bytes long,
> >that's an "a" plus an "eof". I rerun the experiment, the
> >performance is poorer!
> 
> There will still be overhead in the OS networking layer.  You are 
> using connection keep-alives and pipelining, right?  The fact that 
> your top output had a lot of kernel time, I'd bet you are spending a 
> lot of time contending on the virtual network (which is usually the 
> case when you are not using connection keep-alives - the TCP stack 
> just gets hammered).  I'd bet the local network is not optimized for 
> performance.  (DMA can't be used and functionality that could be 
> implemented on dedicated hardware must be done on the main CPU.)

Sounds interesting. I'd like to find a way to test whether the
networking layer is the problem.
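One simple sanity check would be to push bytes through a loopback TCP
connection and see what the stack alone sustains, since localhost
traffic never reaches a NIC and all the protocol work lands on the
CPUs. A rough Python sketch (function name and transfer sizes are
placeholders):

```python
import socket
import threading
import time

def loopback_throughput(total_mb: int = 32, chunk: int = 64 * 1024) -> float:
    """Send total_mb through a localhost TCP connection; return MB/s received."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))  # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    received = []

    def reader():
        conn, _ = server.accept()
        n = 0
        while True:
            data = conn.recv(chunk)
            if not data:
                break
            n += len(data)
        conn.close()
        received.append(n)

    t = threading.Thread(target=reader)
    t.start()

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    payload = b"a" * chunk
    start = time.perf_counter()
    for _ in range(total_mb * 1024 * 1024 // chunk):
        client.sendall(payload)
    client.close()
    t.join()
    elapsed = time.perf_counter() - start
    server.close()
    return received[0] / (1024 * 1024) / elapsed
```

Running several of these in parallel while watching kernel time would
show whether the virtual network saturates well below the HTTP numbers.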

> Please stop trying to convince us to pay attention to benchmarks 
> where the client and server are on the same machine.  There are just 
> too many variables that will screw things up.  The performance 
> characteristics change dramatically when they are physically separate 
> boxes.  -- justin

I agree. Two new 8P Sun servers with Gigabit Ethernet will soon arrive
in our lab; with them, I should be able to run the client and server on
separate machines.

Thanks for your insightful comments.

-Min

-- 
Rapid keystrokes and painless deletions often leave a writer satisfied with
work that is merely competent.
  -- "Writing Well" Donald Hall and Sven Birkerts
