httpd-dev mailing list archives

From Dean Gaudet <>
Subject Re: infoworld review
Date Wed, 09 Jul 1997 23:29:02 GMT
If you look at <>
it indicates that the break-even point is around 4k, with clear wins
around 8k.  But I'm guessing that mod_php might also be a bit faster at
parsing an html file than mod_include, and if people are comparing
mod_include vs. mod_php then that's not quite the same as saying that
serving all html is faster.

In any event it should be easy to at least get a module that does mmap
now, without the buff.c fixes.  Except that it'll copy enough of the
mmaped region to fill an 8k buffer first, and then spit out the rest
without copying.  That's the part I want to tweak -- I want it to
recognize that it's getting a large bwrite() and then use writev() to push
out its partial buffer and the entire buffer passed in.
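The large-write path described above could be sketched roughly like this.  This is not Apache's actual buff.c code -- the `struct outbuf` type, `BUFSIZE`, and `big_bwrite()` are hypothetical stand-ins -- it just illustrates using writev() to flush the partially filled staging buffer and the caller's large buffer in one system call, without copying the large buffer through the 8k staging area:

```c
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

#define BUFSIZE 8192            /* the 8k staging buffer from the email */

struct outbuf {
    int fd;
    char buf[BUFSIZE];
    size_t used;                /* bytes currently staged in buf */
};

/* Returns the number of bytes of 'data' accepted, or -1 on error. */
static ssize_t big_bwrite(struct outbuf *ob, const char *data, size_t len)
{
    if (ob->used + len <= BUFSIZE) {
        /* Small write: keep copying into the staging buffer as before. */
        memcpy(ob->buf + ob->used, data, len);
        ob->used += len;
        return (ssize_t)len;
    }
    /* Large write: push the partial staging buffer and the entire
     * caller buffer out together with a single writev(), so the
     * large buffer is never copied. */
    struct iovec iov[2];
    iov[0].iov_base = ob->buf;
    iov[0].iov_len  = ob->used;
    iov[1].iov_base = (void *)data;
    iov[1].iov_len  = len;
    ssize_t n = writev(ob->fd, iov, 2);
    if (n < 0)
        return -1;
    ob->used = 0;               /* sketch assumes the writev completed fully */
    return (ssize_t)len;
}
```

A real implementation would also have to loop on short writev() returns; the sketch assumes the whole vector goes out in one call.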

Also look at <> for an
example mmap patch against 1.0.5.  It brings up another point -- even if
we mmap(), our timer system requires us to write in 32k chunks so that
the timer gets reset periodically.
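The chunked-write constraint could look something like the sketch below.  It is not the patch referenced above; `reset_timeout()` is a hypothetical stub standing in for re-arming Apache's per-request alarm, and the rest just shows an mmap()ed file being pushed out in 32k slices so the timer can be reset between slices:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define CHUNK (32 * 1024)       /* write in 32k chunks per the email */

static void reset_timeout(void)
{
    /* Stand-in: in Apache this would re-arm the per-request timer. */
}

/* Send 'path' to out_fd via mmap(); returns 0 on success, -1 on error. */
static int send_mmapped_file(int out_fd, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return -1; }
    if (st.st_size == 0) { close(fd); return 0; }  /* mmap of 0 bytes fails */

    char *base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { close(fd); return -1; }

    size_t off = 0;
    while (off < (size_t)st.st_size) {
        size_t n = (size_t)st.st_size - off;
        if (n > CHUNK)
            n = CHUNK;
        ssize_t w = write(out_fd, base + off, n);
        if (w < 0)
            break;
        off += (size_t)w;
        reset_timeout();        /* timer gets reset once per 32k chunk */
    }

    munmap(base, (size_t)st.st_size);
    close(fd);
    return off == (size_t)st.st_size ? 0 : -1;
}
```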


On Wed, 9 Jul 1997, Rasmus Lerdorf wrote:

> > Regarding what came out of the discussion on linux-kernel about this: we
> > should be using mmap().  I don't know how FreeBSD handles mmap(), but
> > linux at least, according to the kernel goons, will always win with an
> > mmap(), no matter the file size. 
> I have seen real-world proof of this one.  A number of sites out there
> have reported to me that adding mod_php and having all .html files passed
> through mod_php's parser actually sped up their server.  I can only
> attribute this to the fact that mod_php mmap's the entire file.
> -Rasmus
