perl-modperl mailing list archives

From "" <>
Subject Re: Hosting provider disallows mod_perl - "memory hog / unstable"
Date Wed, 01 Sep 2004 13:58:06 GMT
Hi all,
   Thank you all for your responses; I am getting a better picture now.
I guess my hosting provider's concern is that they have a lot of clients who have infrequently
running scripts. By design, mod_perl keeps things in memory for longer so that subsequent
calls do not incur a reload of the environment. By restricting use to CGI they get such infrequently
used environments unloaded ASAP.
Is there a way to configure mod_perl to aggressively unload Perl interpreters, so that
it holds onto memory for the minimum timespan?
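Something like this is what I have in mind, if it's possible (just a sketch for Apache 1.3 with prefork; the directive names are standard httpd.conf, but the values are guesses, not recommendations):

```apache
# Kill each child after a small number of requests, so any
# per-request memory growth is returned to the OS quickly.
MaxRequestsPerChild 100

# Keep very few idle children around, so idle interpreters
# are reaped instead of sitting on their memory.
StartServers    1
MinSpareServers 1
MaxSpareServers 2
```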

Jeff Norman <> wrote:
On Mon, 2004-08-30 at 14:12, Perrin Harkins wrote:

> The truth is that mod_perl uses the same amount of memory that Perl CGI
> scripts use. The difference is that CGI scripts exit as soon as they
> finish. Serving 10 simultaneous requests with CGI requires the same
> amount of memory as it does with mod_perl (with a small amount extra for
> the apache interface modules). You can do things under mod_perl like
> load tons of stuff into a global variable, but that would be the
> programmer's fault, not mod_perl's.

That's not entirely true. It is in fact the case that mod_perl's
*upper bound* on memory usage is similar to that of the equivalent
script running as a CGI.

A well-designed mod_perl application loads as many shared libraries as
possible before Apache forks off the child processes. This takes
advantage of the standard "copy-on-write" behavior of the fork() system
call: only the portions of process memory that differ from the parent
actually take up extra memory. The rest is shared with the parent until
one of them tries to write to that memory, at which time it is copied
before the change is made, effectively "unsharing" that chunk of memory.
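Concretely, preloading is usually done from the server config so it happens once in the parent, before the fork. A sketch (the module names below, including My::App, are placeholders for whatever your application actually uses):

```apache
# httpd.conf -- runs in the parent process at server startup
PerlRequire /etc/httpd/startup.pl
```

```perl
# startup.pl -- everything compiled here lives in pages that the
# children share with the parent via copy-on-write.
use strict;
use CGI ();       # example modules; substitute your own
use DBI ();
use My::App ();   # placeholder for your application code
1;
```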

Unfortunately, it's not a perfect world, and the Perl interpreter isn't
perfect either: it mixes code and data together throughout the process
address space. The effect is that as the code runs and
variables/structures are changed, some of the surrounding code in the
same memory pages gets swept up during a copy-on-write, slowly
duplicating memory between processes (where the code would ideally
remain shared indefinitely).
Fortunately, Apache has a built-in defence against this memory creep:
the MaxRequestsPerChild directive forces a child process to die and
respawn after a certain number of requests have been served, thereby
forcing the child process to "start fresh" with the maximum amount of
shared memory.
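A related knob is the Apache::SizeLimit module (for mod_perl 1.x), which kills a child only once it has actually grown, or unshared, too much memory, rather than after a fixed request count. A sketch, with made-up thresholds you'd tune for your own application:

```perl
# startup.pl -- values are in KB and are illustrative only
use Apache::SizeLimit;
$Apache::SizeLimit::MAX_PROCESS_SIZE  = 30000;  # cap on total child size
$Apache::SizeLimit::MAX_UNSHARED_SIZE = 4000;   # cap on copied-on-write pages
```

```apache
# httpd.conf -- have the limits checked on each request
PerlFixupHandler Apache::SizeLimit
```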

In the long run, this means that if you pre-load as many shared
libraries as possible and tweak the MaxRequestsPerChild directive,
you'll probably see significantly less memory usage on average. Not to
mention all the other speed and efficiency gains that mod_perl already
provides.


-- (please don't reply to @yahoo)

