httpd-users mailing list archives

From: Nick Kew <n...@webthing.com>
Subject: Re: [users@httpd] Understanding Memory Usage Per Process
Date: Wed, 06 Apr 2005 10:09:01 GMT
Joshua Slive wrote:
> On Apr 5, 2005 11:11 AM, Jerry Lam <jlam@sandvine.com> wrote:
> 
>> What I did is I ran a script on the command line which takes 180M of
>> memory to finish, then I tried running the same script within Apache,
>> using mod_websh to serve the request. The Apache child process goes up
>> to 200M (from the top command) and never drops back to its initial
>> value, even after the script is done.
>> The script doesn't have a memory leak. My understanding is that
>> mod_websh runs the Tcl interpreter and executes the script; after it
>> is done, it kills the Tcl interpreter and the memory should be
>> returned to the OS (I suppose). But that doesn't seem to be the case.
>> I'm sure the memory is not locked up: if I run the same script over
>> and over (in a while-true loop) within the request, the memory usage
>> stays at exactly 200M even after some hours of execution within the
>> process.
>>
>> - Can someone explain clearly what is happening here? I would like to
>> know how memory is allocated within Apache. The only explanation I
>> have is that mod_websh requests memory from Apache, and Apache assigns
>> some blocks of memory to mod_websh; after mod_websh is done, it
>> returns the memory to Apache (in this case 200M), but the memory is
>> kept within the process to serve the next request. Please add to /
>> modify / correct this explanation if you think something about it is
>> not quite right.
> 
> 
> I don't know anything about mod_websh, but the point of modules like
> this is usually to retain the interpreter in memory between requests
> to speed execution.  If you really want something that will execute
> and then get completely cleaned up, you should try the CGI interface.

That's true, but not necessarily the whole story.  When you're dealing
with hundreds of megabytes, it seems unlikely that everything really
needs to be cached, and unhelpful to cache quite that much.

I've seen that kind of behaviour in Apache 1.3: when I tried XSLT in
1.3 (not my own module, but downloaded code registered at
modules.apache.org), I could bring down the entire server with a
single transform, and it would not release the memory.  By contrast,
running the same transform from CGI would peak at about 3MB and run
thousands of times faster.  It was one clear reason to upgrade to 2.0!

Tcl does cache bytecode, and reference-counts cached data.  So there's
certainly scope for bad things to happen if the module is buggy, or if
it just doesn't protect you from yourself.  I'm currently working on a
Tcl-in-Apache framework myself, and this is an important issue.  By
default, Tcl uses its own memory management, not memory from Apache,
but maybe mod_websh overrides this with Apache pools.  Note that pool
memory, once grabbed by a process, is normally recycled for later
requests rather than handed back to the OS, which would account for
exactly the behaviour you're seeing.  For an overview of memory
management in Apache, see http://www.apachetutor.org/dev/pools (that's
Apache 2, but the memory-management concept is inherited from Apache 1).
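
To make the pool behaviour concrete, here's a rough, untested sketch
against APR (Apache 2's runtime library).  The 180M figure and the
assumption that the module allocates from a pool are just for
illustration:

    #include <apr_general.h>
    #include <apr_pools.h>

    int main(void)
    {
        apr_pool_t *p;
        int i;

        apr_initialize();
        apr_pool_create(&p, NULL);     /* a top-level pool */

        /* Simulate a request that wants ~180M, a megabyte at a time. */
        for (i = 0; i < 180; i++)
            apr_palloc(p, 1024 * 1024);

        /* Destroying the pool returns the blocks to APR's allocator,
         * whose free list by default keeps them around for reuse; the
         * process footprint reported by top stays at ~180M. */
        apr_pool_destroy(p);

        apr_terminate();
        return 0;
    }

That recycling is normally a feature - the next request gets its
memory without touching the OS - but it means a single huge request
inflates the child for the rest of its life.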

Joshua suggested CGI, which will probably be a big improvement.  The
other option that may well help is an upgrade to Apache 2, though I'm
not sure what Tcl options are available there: mod_tcl leaves a lot to
be desired (it makes no attempt at thread-safety, and only exposes an
Apache-1-like API, with no filters; that was enough to convince me to
start my own Tcl work from scratch).
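
For contrast, the reason CGI cleans up so thoroughly is nothing
clever: each request runs in a fresh process, and the OS reclaims
everything when that process exits.  A hypothetical C stand-in for
your 180M script:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Grab and touch ~180M, standing in for the real work. */
        size_t n = (size_t)180 * 1024 * 1024;
        char *big = malloc(n);

        if (big != NULL)
            memset(big, 1, n);

        printf("Content-Type: text/plain\r\n\r\n");
        printf("done\n");

        free(big);
        return 0;  /* process exits; every page goes back to the OS */
    }

You pay the fork/exec and interpreter-startup cost on every request,
of course, which is exactly the overhead the embedded-interpreter
modules exist to avoid.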

-- 
Nick Kew


