perl-modperl mailing list archives

From Rolf Banting <rolf.b...@gmail.com>
Subject Re: Profiling live mod_perl backends
Date Mon, 30 Mar 2009 16:11:40 GMT
On Mon, Mar 30, 2009 at 2:14 PM, Cosimo Streppone <cosimo@streppone.it> wrote:

> On 30 Mar 2009 at 13:46:09, Rolf Banting <rolf.b.mr@gmail.com>
> wrote:
>
> > On Sun, Mar 29, 2009 at 9:52 PM, Perrin Harkins <pharkins@gmail.com>
> > wrote:
> >
> >> On Sun, Mar 29, 2009 at 4:44 PM, Cosimo Streppone <cosimo@streppone.it>
> >> wrote:
> >> > The main problem is that in the past we experienced some kind of
> >> > performance problems that only manifested themselves really clearly
> >> > in production and only at peak traffic hours.
> >> > Out of peak hours, everything was fine.
> >>
> >> That sounds like a problem with a shared resource like the database,
> >> not something you'll find by profiling the code.  You'd be better off
> >> either using DBI::Profile or using logging on your database to find
> >> the problem.
>
> I get the points.
>
> The problem that we had, this was in November last year,
> was that all the backends were at load 15.0-20.0 (normal was ~3-4)
> after an update to the application.
>
> In those cases, it's pretty clear where the problem is
> (CPU/load/etc...). What's not really clear is which point in the
> code is causing it.
>
> In our case, it was the code, and particularly a single function,
> used in many places, that spent a lot of time doing useless things.
> We sorted that out "by intuition", knowing the hot spots of the code.
>
> What I want to do now is to prevent this kind of problem, possibly
> in a more systematic and/or scientific way, and I thought of doing
> this by running automated performance/stress tests before deployment.
>
> While I try to get there, I thought it might be useful to dedicate one of
> the live backends to this "live profiling", even though the application is
> not currently having any problems, even at peak times.
>
> Maybe I just have to try and see what I get :)
>
> DBI::Profile is also another good "track" to follow.
>
> --
> Cosimo
>
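On the DBI::Profile point: the quickest way to try it, with no code changes, is
through the DBI_PROFILE environment variable when you start the process (the
script name below is just a placeholder):

```shell
# Level 1 gives a single aggregate total; level 2 breaks times down
# per statement. See the DBI::Profile docs for the full path syntax.
DBI_PROFILE=2 perl your_app.pl
```

DBI prints the collected profile data to STDERR when the handle is destroyed.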

If you know which routines you want to profile, it is easy enough to set up
wrapped versions that record execution times etc., and install them at run
time so the wrapped versions get executed.

Here's some simple test code I (mostly) found in "Higher-Order Perl" by Mark
Jason Dominus:

package TheDude;

sub make_me_proud { return "I love you pops" }

package main;

use strict;
use warnings;
use Data::Dumper;

my %calls;
my %times;

sub profile {
    my ( $func, $name ) = @_;
    my $stub = sub {
        my $start = time;
        # Call the real subroutine
        my $return  = $func->(@_);
        my $end     = time;
        my $elapsed = $end - $start;
        $calls{$name} += 1;
        $times{$name}->{'elapsed'} += $elapsed;
        return $return;
    };
    return $stub;
}

# Overwrite the subroutine with a wrapped version
no strict qw(refs);
*{"TheDude::make_me_proud"} =
    profile( \&TheDude::make_me_proud, "TheDude::make_me_proud" );

TheDude::make_me_proud() foreach ( 1 .. 1000000 );

print Dumper \%calls;
print Dumper \%times;

If you put that in a script and run it, you'll get something like:

$VAR1 = {
          'TheDude::make_me_proud' => 1000000
        };
$VAR1 = {
          'TheDude::make_me_proud' => {
                                        'elapsed' => 3
                                      }
        };

Obviously you can customise the routine returned by "profile" to do anything
you like.
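For instance, the built-in time() only has 1-second resolution, which is too
coarse for most mod_perl handlers. A variant of the same wrapper can use the
core Time::HiRes module instead; here is a sketch (TheDude and make_me_proud
are just the example names from above re-used, and profile_hires is a name I
made up):

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

my ( %calls, %times );

sub profile_hires {
    my ( $func, $name ) = @_;
    return sub {
        my $start = [gettimeofday];
        # Call the real subroutine
        my $return = $func->(@_);
        $calls{$name}++;
        # tv_interval gives a floating-point number of seconds
        $times{$name}{elapsed} += tv_interval($start);
        return $return;
    };
}

package TheDude;
sub make_me_proud { return "I love you pops" }

package main;
no strict 'refs';
*{"TheDude::make_me_proud"} =
    profile_hires( \&TheDude::make_me_proud, "TheDude::make_me_proud" );

TheDude::make_me_proud() foreach ( 1 .. 1000 );

printf "%s: %d calls, %.6f s\n", $_, $calls{$_}, $times{$_}{elapsed}
    for keys %calls;
```

The structure is identical; only the clock changed, so the 'elapsed' values
come out as fractional seconds instead of whole ones.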

Rolf
