After much hacking, and just in time for OSCON, I’m delighted to announce the release of Devel::NYTProf version 2: a powerful, efficient, feature-rich Perl source code profiler.
“If NYTProf 1 is a Toyota, then 2.0 is a Cadillac”
— Adam Kaplan, author of NYTProf 1.
The Short Story
(I’ve written up the long story, for the record, in another post.)
Adam forked Devel::FastProf (the best statement-level profiler at the time), added a test harness and tests, made some internal improvements, and grafted on an html report derived from Devel::Cover. The resulting Devel::NYTProf v1 was a big hit.
Meanwhile I’d been working on Devel::FastProf, contributing some interesting new profiling features, but it had no test harness and no html reports. When Adam released NYTProf I switched. Attracted not so much by the html report as by the test harness. (A lesson to anyone wanting to attract developers to an open source project.)
I started out by adding in the same new features I’d been adding to FastProf, albeit with more polish and tests. And then I got carried away…
“Holy shit! That is amazing.”
— Andy Lester, after using a recent development version.
Example Screen Shots
As an example I’ve used NYTProf to profile perlcritic 1.088 running on its own source code.
$ cd Perl-Critic-1.088
$ perl -d:NYTProf -S perlcritic .
$ nytprofhtml
$ open nytprof/index.html
The first image is the main index page, showing the top few subroutines and the start of the list of all source files.

Each source file has links to separate line-level, block-level, and sub-level reports (though I hope to merge them in future). Clicking on a subroutine name takes you to the line-level report for the file it’s defined in and positions you at the subroutine definition.
(The color coding is based on the Median Absolute Deviation of all the values in the column, same as in NYTProf v1.)
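For the curious, here’s a rough sketch of how a Median Absolute Deviation can be computed in Perl. This is purely illustrative and not taken from NYTProf’s source; the idea is simply that values sitting several MADs above the column median get the “hotter” colors.

# Illustrative sketch only -- not NYTProf's actual implementation.
sub median {
    my @sorted = sort { $a <=> $b } @_;
    my $mid = int(@sorted / 2);
    return @sorted % 2 ? $sorted[$mid]
                       : ($sorted[$mid - 1] + $sorted[$mid]) / 2;
}

sub mad {
    my @values = @_;
    my $m = median(@values);
    return median(map { abs($_ - $m) } @values);
}

# e.g. flag a cell as "hot" when it sits several MADs above the column median
my @times = (0.01, 0.02, 0.02, 0.03, 1.50);
printf "median=%.2f mad=%.2f\n", median(@times), mad(@times);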
Here’s the all_perl_files() subroutine, for example:

The colored numbers show the number of statements executed, the total time taken, and the average. The statement times are always exclusive times: the time actually spent on that statement, including the expressions and any built-in functions it uses, but not the time spent executing statements in subroutines called by it. In NYTProf, subroutine timings are inclusive and statement timings are exclusive.
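To make the distinction concrete, here’s a contrived example (not from the perlcritic profile; the sub names and timings are hypothetical) of how the time would be attributed:

# Contrived example -- sub names and timings are hypothetical.
sub child {
    select(undef, undef, undef, 0.5);  # ~0.5s of exclusive statement time,
                                       # charged to this line inside child()
}

sub parent {
    my $x = 1 + 1;  # cheap statement: tiny exclusive time
    child();        # this line's statement time excludes the ~0.5s spent
                    # inside child(), but parent()'s subroutine time is
                    # inclusive, so it reports roughly 0.5s in total
}

parent();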
Where did you come from and where are you going?
Notice the grey text.
On lines that define a subroutine NYTProf now adds ‘comments’ giving the total number of times the sub was called, the inclusive time spent in that sub, and the average. Then it adds a break-down of the same details for every location that called the subroutine. Here’s a better example of that:
On lines that call subroutines NYTProf now adds ‘comments’ giving the name of the actual subroutines called (resolving code refs to subroutine names, including the class that handled a method call). It also tells you how many calls were made and how much time was spent in that subroutine for calls made from that line. Here’s an example:
When you mouse over the grey text it turns black and you can click on the embedded links to take you to the callers or callees. So with a few clicks you can run up and down the call stack, exploring where the time is being spent and where the hot spots are being called from. The ability to explore the code so easily, guided by these performance signposts, is incredibly useful.
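As a hypothetical illustration of the kind of dynamic calls that get resolved to real subroutine names in those annotations (the package names here are made up, not from NYTProf or perlcritic):

package My::Policy;
sub new      { bless {}, shift }
sub violates { return () }

package main;

my $policy = My::Policy->new;
my $method = 'violates';
$policy->$method();              # reported as a call to My::Policy::violates

my $code_ref = $policy->can('violates');
$code_ref->($policy);            # the code ref is resolved to the same
                                 # named subroutine in the report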
Rolling up for a higher level view
Sometimes per-statement timing can be overwhelming. In large subroutines it becomes “hard to see the wood for the trees”. So, for the first time in any Perl profiler, NYTProf now provides a block-level view of the timing data:

What’s happening here is that NYTProf takes the same per-statement time measurements, but instead of accumulating the time against the line the statement is on, it accumulates it against the line of the first statement in the enclosing block. (The actual line it’s accumulated against isn’t ideal in some cases. I’m hoping to improve that soon.)
This report is a little more tricky to read but can be very useful, especially in large subroutines. (I hope to improve the readability with some css :hover magic in future.)
The subroutine-level report is very similar except that all the timings are accumulated against the line of the first statement in the subroutine.
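Here’s a hypothetical subroutine (not from perlcritic), annotated with comments showing where each view would accumulate the time:

sub scan_files {                 # sub-level view: all the time spent below
                                 # rolls up to the first statement of the sub
    my (@files) = @_;
    my $total = 0;
    for my $file (@files) {      # block-level view: time spent on statements
        my $size = -s $file;     # inside the loop body is accumulated against
        $total += $size || 0;    # the first statement of that enclosing block
    }
    return $total;               # line-level view: every statement keeps its
                                 # own exclusive time on its own line
}

print scan_files(glob('*')), "\n";   # total size of files in the current dir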
Have a Look
Back in June I gave a talk at the Irish Open Source Technology conference where I showed the first version of the annotated html report (which I’d been hacking on till 3am the night before while struggling with a cold; presentations are great motivators). You can see the 15-minute video here or here.
Explore for yourself
I’ve uploaded the full set of reports for you to explore here. Take a look. Let me know what you think.