NYTProf v2 – A major advance in perl profilers

After much hacking, and just in time for OSCON, I’m delighted to announce the release of Devel::NYTProf version 2. A powerful, efficient, feature-rich perl source code profiler.

“If NYTProf 1 is a Toyota, then 2.0 is a Cadillac”
— Adam Kaplan, author of NYTProf 1.

The Short Story

(I’ve written up the long story, for the record, in another post.)

Adam forked Devel::FastProf (the best statement-level profiler at the time), added a test harness and tests, made some internal improvements, and grafted on an html report derived from Devel::Cover. The resulting Devel::NYTProf v1 was a big hit.

Meanwhile I’d been working on Devel::FastProf, contributing some interesting new profiling features, but it had no test harness and no html reports. When Adam released NYTProf I switched, attracted not so much by the html report as by the test harness. (A lesson to anyone wanting to attract developers to an open source project.)

I started out by adding in the same new features I’d been adding to FastProf, albeit with more polish and tests. And then I got carried away…

“Holy shit! That is amazing.”
— Andy Lester, after using a recent development version.

Example Screen Shots

As an example I’ve used NYTProf to profile perlcritic 1.088 running on its own source code.

$ cd Perl-Critic-1.088
$ perl -d:NYTProf -S perlcritic .
$ nytprofhtml
$ open nytprof/index.html

The first image is the main index page, showing the top few subroutines and the start of the list of all source files.

[Screenshot: NYTProf perlcritic index page]

Each source file has links to separate line-level, block-level, and sub-level reports (though I hope to merge them in future). Clicking on a subroutine name takes you to the line-level report for the file it’s defined in and positions you at the subroutine definition.

(The color coding is based on Median Absolute Deviation of all the values in the column, same as in NYTProf v1.)

Here’s the all_perl_files() subroutine, for example:

[Screenshot: NYTProf line-level report for all_perl_files()]

The colored numbers show the number of statements executed, the total time taken, and the average. The statement times are always exclusive times: the time actually spent on that statement, including its expressions and any built-in functions it uses. It doesn’t include time spent executing statements elsewhere in subroutines called from it. In NYTProf, subroutine timings are inclusive and statement timings are exclusive.
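
A tiny made-up illustration of the distinction (the subs and timings here are hypothetical, purely to show where time would be attributed):

    # Illustrative only: hypothetical subs showing where NYTProf attributes time.
    sub inner {
        my @junk = map { $_ * $_ } 1 .. 10_000;   # statement time accrues here
    }
    sub outer {
        my $sum = 0;
        $sum += sqrt($_) for 1 .. 1_000;  # exclusive statement time for this line only
        inner();  # exclusive time here is just the call itself; the work done inside
                  # inner() shows up in inner()'s own statement times and in the
                  # inclusive subroutine time reported for inner()
    }
    outer();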

Where did you come from and where are you going?

Notice the grey text.

On lines that define a subroutine NYTProf now adds ‘comments’ giving the total number of times the sub was called, the inclusive time spent in that sub, and the average. Then it adds a break-down of the same details for every location that called the subroutine. Here’s a better example of that:

[Screenshot: NYTProf subroutine caller annotations]

On lines that call subroutines NYTProf now adds ‘comments’ giving the name of the actual subroutines called (resolving code refs to subroutine names, including the class that handled a method call). It also tells you how many calls were made and how much time was spent in that subroutine for calls made from that line. Here’s an example:

[Screenshot: NYTProf subroutines-called annotations]

When you mouse over the grey text it turns black and you can click on embedded links that take you to the callers or callees. So with a few clicks you can run up and down the call stack, exploring where the time is being spent and where the hot spots are being called from. The ability to explore the code so easily, guided by these performance signposts, is incredibly useful.

Rolling up for a higher level view

Sometimes per-statement timing can be overwhelming. In large subroutines it becomes “hard to see the wood for the trees”. So, for the first time in any Perl profiler, NYTProf now provides a block-level view of the timing data:

[Screenshot: NYTProf block-level report for all_perl_files()]

What’s happening here is that NYTProf is taking the same time measurements per statement, but instead of accumulating the time against the line the statement is on, it accumulates it against the line of the first statement in the enclosing block. (The actual line it accumulates it against isn’t ideal in some cases. I’m hoping to improve that soon.)
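
As a made-up illustration (this isn’t NYTProf output, just a sketch of where the time lands in the block-level view):

    # Illustrative only: in the block-level view, time spent on any statement
    # inside the loop body below is accumulated against the line of the first
    # statement in that block (the 'my $squared = ...' line).
    sub example {
        my @results;
        for my $item (@_) {
            my $squared = $item ** 2;      # block-level time for the loop body lands here
            push @results, $squared + 1;   # ...including time spent on this statement
        }
        return \@results;
    }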

This report is a little more tricky to read but can be very useful, especially in large subroutines. (I hope to improve the readability with some css :hover magic in future.)

The subroutine-level report is very similar except that all the timings are accumulated against the line of the first statement in the subroutine.

Have a Look

Back in June I gave a talk at the Irish Open Source Technology Conference where I showed the first version of the annotated html report (which I’d been hacking on till 3am the night before while struggling with a cold – presentations are great motivators). You can see the 15 minute video here or here.

Explore for yourself

I’ve uploaded the full set of reports for you to explore here. Take a look. Let me know what you think.

Devel::DProf – broken by the passage of time

Measure twice, cut once.

When measuring the performance of Perl code, many developers instinctively reach for Devel::DProf. The venerable perl profiler dates back to 1995.

When profiling you have two broad choices: cpu-time vs wallclock time, and subroutine-level profiling vs line-level profiling. DProf is a subroutine-level profiler. (I’ll talk more about subroutine-level profiling vs line-level profiling in an upcoming post about the soon-to-be-released NYTProf 2.0.)

Measuring cpu-time should, in theory, give you a clear picture of the time spent by perl executing your code, independent of the load on the machine or delays waiting for i/o. Sadly life isn’t so simple.

The big problem is that on most systems cpu-time measurement has a resolution of 0.01 seconds (a HZ value of 100). That may not sound like much time, but you can execute a lot of perl code, and a lot of subroutine calls, in that time.
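
You can see that granularity for yourself with perl’s built-in times function. Here’s a rough sketch (it assumes a unix-like system where POSIX::sysconf is available):

    # Rough sketch: demonstrate the coarse resolution of cpu-time measurement.
    use POSIX qw(sysconf _SC_CLK_TCK);
    my $clk_tck = sysconf(_SC_CLK_TCK);   # typically 100, i.e. 0.01s resolution
    print "clock ticks per second: $clk_tck\n";
    my $user0 = (times())[0];
    sqrt(42) for 1 .. 10_000;             # thousands of perl statements...
    my $user1 = (times())[0];
    printf "user cpu time consumed: %.3fs\n", $user1 - $user0;  # whole ticks only,
                                                                # so often 0.000 or 0.010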

So any profiler using cpu-time with such a coarse granularity is going to give ‘noisy’ results. Some subroutines may appear to be slow because the cpu-time ‘tick’ happened to occur more often while those subs were running. Other subroutines may appear to be fast because the cpu-time ‘tick’ rarely happened while they were running.

Try playing around with the venerable Devel::DProf like this:

$ perl -we 'print "sub s$_ { sqrt(42) for 1..100 }; s$_({});\n" for 1..1000' > x.pl
$ perl -d:DProf x.pl
$ dprofpp
Total Elapsed Time = 0.047999 Seconds
  User+System Time = 0.047999 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c  Name
 20.8   0.010  0.010      1   0.0100 0.0100  main::s986
 20.8   0.010  0.010      1   0.0100 0.0100  main::s29
 20.8   0.010  0.010      1   0.0100 0.0100  main::s321
 0.00       - -0.000      1        -      -  main::s322
 0.00       - -0.000      1        -      -  main::s323
 0.00       - -0.000      1        -      -  main::s329
 0.00       - -0.000      1        -      -  main::s330

Clearly nonsense. Out of 1000 subroutines that would all take about the same length of time, a cpu-time ‘tick’ happened to occur in those three. The rest of the subroutines appear to be ‘free’. (If you play with this yourself you’ll probably need to adjust the 1..100 to see the effect on your system.)

Using dprofpp -r (to “display elapsed real times” instead of cpu-times) gives a similarly coarse result. That’s because Devel::DProf artificially reduces the resolution of real time to match the coarse cpu-time! It’s also odd that dprofpp doesn’t provide an option to sort by elapsed time, but there’d be little value anyway given the low resolution.

Many people have reported strange results from Devel::DProf over the years. Design decisions that made sense back in 1995 are now causing problems as CPUs have got so much faster. Several have implemented alternative profilers.

Using cpu-time is effectively a kind of statistical sampling. That’s fine as long as you remember it and interpret the results accordingly. The work-around for the coarse granularity is to execute the code for long enough for the effects to average out. But that can be inconvenient, and how do you know how much is enough?

Personally I’d recommend using wallclock time for profiling. I want to know how much time was spent waiting for network, waiting for disk, waiting for cpu. Those are often significant factors in performance issues. Too many network round trips, or too many individual disk requests, for example, can easily kill performance. If you only profile cpu time you may not spot these kinds of problems. You may just be chasing the random ghosts of occasional cpu clock ticks.

 

So which profiler would I recommend? Stay tuned for an announcement in a day or so. Update: easy! NYTProf 2.0.

Irish Open Source Technology Conference – June 18th-20th

I’ll be speaking at the Irish Open Source Technology Conference this year. It’s on at Dublin’s CineWorld Complex, from June 18th for three days. They’re running a 2-for-1 offer on tickets at the moment.

I’ll be speaking about something Perl’ish, naturally.

The “Perl Myths” presentation I gave at IWTC earlier this year turned out to be a hit. (At least, it was after the event. There were fewer than ten people in the room at the time, including me! Perl clearly isn’t a hot topic among Irish web developers.)

My blog post, with embedded slides and video, has topped 7400 hits, plus another 3000 or so views on slideshare.

I’m upgrading my methods for this next talk. I’ve bought a great wireless USB microphone and the amazing ScreenFlow screencasting software to capture everything in detail.

So I’m going all high-tech. No more “camcorder perched on a chair at the back” for me!

It’ll be a good trial run for OSCON where I’m speaking in July.

Announcing Apache2::AuthPAM

I’ve been working on a mod_perl2 based admin interface for a pool of Gofer servers recently.

I needed to secure it and PAM plus SSL was the simplest way to go.

The Authen::PAM module is old (2005), doesn’t have good reviews, and the test results are poor, but it worked for us.

That just left the question of how to interface with Apache 2. There’s an Apache::AuthPAM module but that’s even older (2002), has more failing test reports than passes, and was mod_perl 1 only.

I emailed the author but got no reply so I’ve gone ahead and hacked it into a new module for mod_perl2: Apache2::AuthPAM. The source code is hosted by google.

Realistically I’m unlikely to touch it again, unless we find problems, so I’d be delighted if anyone wants to take it over, or even just co-maintain it.

Finding the cause of inexplicable warnings in XS code

Occasionally you may run across an odd warning like this:

   Use of uninitialized value in subroutine entry at X line Y

where the code at that line is a call to an XS subroutine (let’s call it xsub()) and you’re certain that the arguments you’re passing are not undefined.

Somewhere, deep in the XS/C code, an undefined value is being used. But where? And why is perl reporting that line?

Perl is reporting the last line of perl code that was executed at the same or higher level in the stack. So other perl code, such as a callback, may have been executed between entering xsub() and the warning being generated, but that perl code must have returned before the warning was triggered.

Assuming the XS/C code is large and complex, like mod_perl, how can you locate the code that’s triggering the warning?

Here’s a trick I’ve used a few times over the years:

    $SIG{__WARN__} = sub {
        # abort with a core dump when the warning we're hunting appears
        CORE::dump if $_[0] =~ /uninitialized value in subroutine entry/;
        warn @_;    # otherwise pass the warning through as normal
    };

That makes the program abort and generate a core dump file at the point the warning is generated. You can then use a debugger, or Devel::CoreStack, to report the C call stack at the time. It’s a savage but effective technique.

If the XS/C code was compiled with options to keep debug info (i.e., -g) then that’ll show you exactly where in the XS/C code the undefined value is being used. If not, then it’ll at least show you the name of the XS/C function and the overall call stack.

(The dump function is a curious vestige of old ways. You could instead send yourself a core-dumping signal, such as SIGABRT via kill('ABRT', $$). I’m not sure about the portability of either, for this purpose, beyond unix-like systems.)

I suggested the technique to Graham Barr recently and it proved effective in tracking down the source of that warning in a very large mod_perl application. The warning pointed the finger at a $r->internal_redirect($uri) call. The actual cause was a PerlInitHandler returning undef. (The handler was an old version of DashProfiler::start_sample_period_all_profiles.)

Anyway, it dawned on me this morning that I should update the technique. It doesn’t have to be so savage. On modern systems you don’t need to shoot the process dead to get a C stack trace.

A few approaches came to mind:

  • spawn a “gcore $$” command (or similar) to get a core file from the running process
  • spawn a “pstack $$” command (or similar) to directly dump the stack trace from the running process
  • spawn a “gdb $$ &” (to attach to the running process) followed immediately by kill('STOP', $$) to stop the process, giving the debugger time to attach and you time to investigate the state of the live process.

I think the second of those would be most useful most of the time.
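
Here’s a minimal sketch of what that second approach might look like (it assumes a pstack, or similar, command is available on your system):

    # Sketch: print a C-level stack trace when the warning fires, without
    # killing the process. Assumes a 'pstack' (or gstack) command exists.
    $SIG{__WARN__} = sub {
        system("pstack $$")   # dump this process's C call stack to stdout
            if $_[0] =~ /uninitialized value in subroutine entry/;
        warn @_;              # then pass the warning through as normal
    };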

Hopefully this will be useful to someone.

Pay no attention to that callback behind the curtain!

So you’ve got some perl code that connects to a particular database via a particular DBI driver. You want it to connect to a different database or driver. But you can’t change that part of the code. What can you do?

I ran into this problem recently. A large application is using an old version of DBIx::HA which doesn’t support DBD::Gofer. DBIx::HA can’t be upgraded (long story, don’t ask) but I wanted to use DBD::Gofer to provide client-side caching via Cache::FastMmap. (I’ll save more details of that, and the 40% reduction in database requests it gave, for another post.)

I needed a way for DBIx::HA to think that it was connecting to a particular driver and database, but for it to actually connect to another. Using $ENV{DBI_AUTOPROXY} wasn’t an option because that has a global effect, whereas I needed fine control over which connections were affected. It’s also a fairly blunt instrument in other ways.

It seemed like I was stuck. Then I remembered the DBI callback mechanism – it would provide an elegant solution to this. I added it to DBI 1.49 back in November 2005 and enhanced it further in 1.55. I’d never documented it though. I think I was never quite sure it had sufficient functionality to be really useful. Now I’m sure it has.

The DBI callback mechanism lets you intercept, and optionally replace, any method call on a DBI handle. At the extreme, it lets you become a puppet master, deceiving the application in any way you want.

Here’s how the code looked (with a few irrelevant details changed):

    # The following section of code uses the DBI Callback mechanism to
    # intercept connect() calls to DBD::Sybase and, where appropriate, 
    # reroute them to DBD::Gofer.
    our $in_callback;

    # get Gofer $drh and make it pretend to be named Sybase
    # to keep DBIx::HA 0.62 happy
    my $gofer_drh  = DBI->install_driver("Gofer");
    $gofer_drh->{Name} = "Sybase";

    # get the Sybase drh and install a callback to intercept connect()s
    my $sybase_drh = DBI->install_driver("Sybase");
    $sybase_drh->{Callbacks} = {
        connect => sub {
            # protect against recursion when gofer itself makes a connection
            return if $in_callback; local $in_callback = 1;

            my $drh = shift;
            my ($dsn, $u, $p, $attr) = @_;
            warn "connect via callback $drh $dsn\n" if $DEBUG;

            # we're only interested in connections to particular databases
            return unless $dsn =~ /some pattern/;

            # rewrite the DSN to connect to the same DSN via Gofer
            # using the null transport so we can use Gofer caching
            $dsn = "transport=null;dsn=dbi:Sybase(ReadOnly=1):$dsn";

            my $dbh = $gofer_drh->connect($dsn, $u, $p, $attr);

            if (not $dbh) { # gofer connection failed for some reason
                warn "connect via gofer failed: $DBI::errstr\n"
                    unless our $connect_via_gofer_err++; # warn once
                return; # DBI will now call original connect method
            }

            undef $_;    # tell DBI not to call original connect method
            return $dbh; # tell DBI to return this $dbh instead
        },
    };

So the application, via DBIx::HA, executed

  $dbh = DBI->connect("dbi:Sybase:foo",...)

but what it got back was a DBD::Gofer dbh, as if the application had executed

  $dbh = DBI->connect("dbi:Gofer:transport=null;dsn=dbi:Sybase(ReadOnly=1):foo",...).

I guess I should document the callback mechanism now. Meanwhile the closest thing to documentation is the test file.

I’ve always enjoyed this kind of “plumbing”. If you come up with any interesting uses of DBI callbacks, do let me know.

This is a perl blog, too. At least partly.

Just in case it wasn’t clear, perl programming is one of my interests, so this is, at least partly, a perl blog.

I’m only spelling it out like this because, by saying “perl blog”, I help to keep Mr. Schwern happy, and I’m saying “perl programming” purely for my own amusement.

While I’m here I might as well spread some Link Love to other notable perl blogs and the like: use.perl.org, perlmonks.org, news.perlfoundation.org and of course planet.perl.org super blog which lists many more.

Code duplication, cheap but not free

I’m working on a large old codebase at the moment where Repeat Yourself seems to have been standard practice. Here’s a typical example:

    if (exists(Foo::StaticData::NodeRemap->get_data()->{ $args{cid} }->{ $graph_id })
        && Foo::StaticData::NodeRemap->get_data()->{ $args{cid} }->{ $graph_id }->{ as_of_rev } <= $current_revision) {
        $self->{cid} = Foo::StaticData::NodeRemap->get_data()->{ $args{cid} }->{ $graph_id }->{ new_node_id };
    }
    elsif (exists(Foo::StaticData::NodeRemap->get_data()->{ $args{cid} }->{''})
        && Foo::StaticData::NodeRemap->get_data()->{ $args{cid} }->{''}->{ as_of_rev } <= $current_revision) {
        $self->{cid} = Foo::StaticData::NodeRemap->get_data()->{ $args{cid} }->{ '' }->{ new_node_id };
    }

The codebase has many, many, examples of this style written by a variety of developers.

What puzzles me is why this kind of code didn’t raise red flags for the developers at the time. It’s harder to read, harder to maintain, and slower than a simpler approach. Perhaps the elsif part was a copy-n-paste job, but that still doesn’t explain the three instances of Foo::StaticData::NodeRemap->get_data()->{ $args{cid} }->{ $graph_id } in the first part. Perhaps even those were copy-n-pasted.

I’d always have an urge to factor out the common expression into a temporary variable for efficiency and clarity. I wonder if my sensitivity to code duplication is partly due to needing to pay close attention to efficiency for most of my career. (I started Perl programming about 15 years ago, when cpu performance was measured in MHz.)

I also wonder how soon tools like Perl::Critic will be able to help detect duplicate code fragments: common sub-expressions that may be candidates for elimination.

I’m working on optimizing the codebase at the moment. That chunk showed up as a performance issue so I rewrote it as

    my $NodeRemap = Foo::StaticData::NodeRemap->get_data();
    for my $id ( $graph_id, '' ) {
        my $x = $NodeRemap->{ $args{cid} }->{ $id };
        next unless $x and $x->{as_of_rev} > $current_revision;
        $self->{cid} = $x->{new_node_id};
        last;
    }

Spot my mistake?

Sidebar: This post is also an experiment in posting code to my blog. I’m trying out MarsEdit. It’s good but I’d like to see a “Paste Preformatted” mechanism that would also html escape the contents of the paste buffer. It’s scriptable so I guess I could implement it myself in my copious spare time…

Loaded Perl: A history in 530,000 emails

MarkMail is a free service for searching mailing list archives. They’ve just loaded 530,000 emails from 75 perl-related mailing lists into their index.

They’ve got a home page for searching these lists at http://perl.markmail.org/.

Of course the first thing people often do with new search engines is search for themselves. I’m no exception. Where MarkMail shines is the ability to drill-down into the results in many ways with a single click (bugs, announcements, attachments etc). Worth a look.

The graph of messages per month is not just cute, you can click and drag over a range of bars to narrow the search to a specific period. It clearly shows my activity rising sharply in 2001 and then dropping to a lower level after 2004.

I’m particularly pleased that they’ve indexed the dbi-users, dbi-dev, and dbi-announce lists.

Perl Myths

Update: several more recent versions of my Perl Myths talk are available. These have significant updates. Slides can be found on slideshare.net and screencasts can be found on my blip.tv channel.

I’ve uploaded my Perl Myths presentation to slideshare.net and google video:

“Perl has its share of myths. This presentation debunks a few popular ones with hard facts. Surprise yourself with the realities.”

While I agree with Andy Lester that Good Perl code is the best form of evangelism, I wanted to put together a presentation that others could refer to when they encounter misinformation about Perl. I cover these myths that I’ve heard recently:

  • Perl is dead
  • Perl is hard to read / test / maintain
  • Perl 6 is killing Perl 5

and pull in a wealth of up-to-date information, some of it quite surprising even to those familiar with Perl and its community. There are two versions, plus a video. I recommend the one with notes (which add useful extra detail and context for the slides), which is best viewed as a PDF. There’s also one without notes, which I’ve embedded here:

I videoed an extended version of this presentation at IWTC in Dublin in February. The first 40 minutes or so correspond with the slides above. In the remaining 30 minutes or so I talk about Parrot and Perl 6. I’ve embedded the video below, but wordpress forces me to use a small size so you’ll probably prefer to view it at video.google.com: