httpd-dev mailing list archives

From Dean Gaudet <>
Subject mmap performance test
Date Mon, 13 Apr 1998 09:26:24 GMT
Folks may want to try this on their system of choice.  I'll tell you right
away that Linux doesn't deal well with it, but work is in progress on
fixing that.

The general idea is to simulate a working set which is larger than RAM.
Systems which don't do good readahead with mmap() may have problems with
this test, and with apache-1.3.

Create yourself a file which is as large as memory.  My test box has
128MB of RAM, so I did this:

% dd if=/dev/zero of=../htdocs/128Mb bs=65536 count=2048

Now, we launch a bunch of zeusbench clients against it.  We use multiple
invocations of zb staggered over time to simulate what it's like to
have multiple folks accessing the content in an uncoordinated manner.
If we were to use a single zb, all of its "clients" would have very high
locality of reference on the large file.

Try this script:

    zb /128Mb -p 8080 -c 1 -t 120 &
    sleep 10
    zb /128Mb -p 8080 -c 1 -t 120 &
    sleep 10
    zb /128Mb -p 8080 -c 1 -t 120 &
    sleep 10
    zb /128Mb -p 8080 -c 1 -t 120 &
    sleep 10
    zb /128Mb -p 8080 -c 1 -t 120 &
    sleep 10
    zb /128Mb -p 8080 -c 1 -t 120 &

Then run vmstat in another window and watch your system start paging
like crazy.

Now, do the entire experiment again, but in main/http_core.c insert
"#undef USE_MMAP_FILES" after all the #includes... this disables
mmap() and uses read() instead.  Under Linux you'll see a ~4x boost
in performance, because mmap() reads ahead only one 4KB page at a
time while read() reads ahead four pages.

I'm thinking that a config-time directive "mmapthresholds min max"
is necessary.

