Is your Perl community visible?

As I mentioned recently, I’m working on an update to my Perl Myths talk. (Which is really a review of the state of the art, state of the community, resources, and best practices. You could even call it marketing.)

In recent months, and especially while researching for this update, it’s become clear to me that the Perl community is both functioning well and growing more conscious of its own role and value.

But are the various components of “the community” sufficiently visible?

NYTProf v3 – a sneak peek

I’ve had a great week at OSCON. The talks are excellent but the real value is in the relationships formed and renewed in the “hallway track”. I’m honoured and humbled to be able to call many great people my friends.

My talk on Devel::NYTProf seemed to go well. This year I covered not just NYTProf and the new features in v3 (not yet released) but also added a section on how to use NYTProf to optimize your perl code.

Here’s a quick summary, with links to the slides and screencast, and an outline of what’s still to be done before v3 gets released (getting closer by the day).

Unattributed copying of perl blog content via Planet Perl

I recall other bloggers complaining of unattributed redistribution of their work. Now a site called rapid-dev.net has started redistributing Planet Perl posts, including mine, with an advert at the top.

I wouldn’t mind if the page had clear attribution, but it doesn’t. In fact, at the bottom it says “Author: hoanatwho”.

That doesn’t feel right. Especially as many of my posts, and probably many others from Planet Perl, use the first-person pronoun “I”.

Why does this matter? A couple of months ago Merlin Mann wrote a long but excellent piece that explains why far better than I could.

Nobody but me is allowed to decide why I make things. And — if and when I choose to give away the things that I make — nobody but me is allowed to define how or where I’ll do it. I am independent.

Merlin discusses, with his typical style, the motivations of those who make their work available for free, and the perils of presuming to understand their motives. Although written mostly about bloggers, it seems very applicable to authors of Open Source software. For me it echoes how I feel about coding and, to an extent, the freedom that Perl gives me to express my thoughts.

If you have a blog I recommend you at least make the licence for reuse clear. My blog has a “Terms of Use” link in the sidebar that refers to the Creative Commons “Attribution-Noncommercial-Share Alike 3.0” license.

Looking at the Planet Perl page I see it has no licence. Perhaps that should be fixed — even if only to say that the license of the feeds being aggregated must be respected.

Has NYTProf helped you? Tell me how…

At OSCON this year [1] I’m giving a “State-of-the-art Profiling with Devel::NYTProf” talk. It’ll be an update of the one I gave last year, including coverage of new features added since then (including, hopefully, two significant new features that are in development).

This year I’d like to spend some time talking about how to interpret the raw information and use it to guide code changes. Approaches like common sub-expression elimination and moving invariant code out of loops are straightforward. They’re ‘low-hanging fruit’ with no API changes involved. Good for a first pass through the code.
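For example, both of those techniques tend to look like this (a contrived sketch; $config, $order, and fee() are made-up names, not taken from any real report):

    # Before: the rate lookup is loop-invariant, and the same
    # sub-expression ($row->{amount} * $rate) is computed twice.
    for my $row (@rows) {
        my $rate = $config->{rates}{ $order->locale };
        $row->{net} = $row->{amount} * $rate - fee($row->{amount} * $rate);
    }

    # After: hoist the invariant lookup out of the loop and compute
    # the common sub-expression just once per iteration.
    my $rate = $config->{rates}{ $order->locale };
    for my $row (@rows) {
        my $gross = $row->{amount} * $rate;
        $row->{net} = $gross - fee($gross);
    }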

Moving loops down into lower-level code is an example of a deeper change I’ve found useful. There are many more. I’d like to collect them to add to the talk and the NYTProf documentation.
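As a rough illustration of the simplest form of that, a perl-level loop can often be replaced by a built-in that runs the iteration in C:

    # Before: the loop and the push are perl-level opcodes for every element.
    my @errors;
    for my $line (@lines) {
        push @errors, $line if $line =~ /ERROR/;
    }

    # After: the loop itself runs inside grep's C implementation;
    # only the pattern match is perl-level work per element.
    my @errors = grep { /ERROR/ } @lines;

The bigger wins come from pushing whole loops further down, into the database, a C library, or an XS module, but the principle is the same.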

So here’s a question for you: after looking at the NYTProf report, how did you identify what you needed to do to fix the problems?

I’m interested in your experiences. How you used NYTProf, how you interpreted the raw information NYTProf presented, and then, critically, how you decided what code changes to make to improve performance. What worked, what didn’t. The practice, not the theory.

Could you take a moment to think back over the times you’ve used NYTProf, the testing strategy you’ve used, and the code changes you’ve made as a result? Ideally go back and review the diffs and commit comments.

Then send me an email — tell me your story!

The more detail the better! Ideally with actual code (or pseudo-code) snippets [2].


  1. OSCON is in San Jose this year, July 20-24th. You can use the code ‘os09fos’ to get a 20% discount.
  2. Annotated diffs would be greatly appreciated. I’ll give credit for any examples used, naturally, and I’ll happily anonymize any code snippets that aren’t open source.

Thanks, Iron Man, for the good excuse to perl blog

I’ve been thinking that I haven’t blogged much lately. Assorted half-baked ideas would cross my mind and then evaporate before I’d find the time, or motivation, to actually start writing.

The folks at the Enlightened Perl Organisation have solved the motivation problem by announcing the Iron Man Blogging Challenge: in short, “maintain a rolling frequency of 4 posts every 32 days, with no more than 10 days between posts”.

So about one post a week. I can aim for that!

Can you? “The rules are very simple: you blog, about Perl. Any aspect of Perl you like. Long, short, funny, serious, advanced, basic: it doesn’t matter. It doesn’t have to be in English, either, if that’s not your native language.” Why not try? Help yourself and help perl at the same time.

I’ll try to capture the half-baked ideas for perl blog posts as they cross my mind, then build on them as time and mood allow. Hopefully about one a week will mature into an actual blog post.

Meanwhile…

I remembered that almost exactly a year ago schwern blogged that he was “horrified at the junk which shows up when you search for perl blog on Google”. It seems the situation hasn’t improved much since. The planet.perl.org site is top, but the rest are a bit of a mishmash.

The problem is partly that “perl blog” isn’t a great search term. Google naturally gives preference to words that appear in urls and titles (all else being equal), but blogs rarely explicitly call themselves blogs on their pages or urls. I suspect many on the first page of results are there because ‘blog’ appears in the url. (To help out I’ve included “perl blog” in the title of this post :)

Searching for “perl blogs” (plural) works better because it finds pages talking about perl blogs, which is useful when searching for perl blogs.

One entry in the “perl blogs” results was the Perl Foundation’s wiki page listing perl blogs. That was new to me. This blog wasn’t on it so I’ve added it. Got a perl blog, or know someone who has, that’s not listed on that page? Go and add it, now! It’ll only take a moment.

Along similar lines, I’ve added the phrases “perl blog” and “perl programming” to the sidebar of my blog pages. The first is to help people searching for “perl blog”. The second is mostly for my own amusement.

Examples of Modern Perl

In the spirit of re-tweeting, this is a short post to highlight some great examples of “modern perl”. (I’m using the term modern perl very loosely, not referring specifically to any one book, website, module, or organization.)

Firstly I’d like to highlight a couple of recent posts by Jonathan Rockway:

* Unshortening URLs with Modern Perl (also available here). An interesting example application built with modern perl modules like MooseX::Declare, MooseX::Getopt, HTTP::Engine, AnyEvent::HTTP, TryCatch, and KiokuDB.

* Multimethods. Another great example from Jonathan highlighting the combined power of MooseX::Types, MooseX::Declare, and MooseX::MultiMethods.
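To give a flavour of the declarative style (my own minimal sketch, not code from either of those posts):

    use MooseX::Declare;

    class Point {
        has 'x' => (is => 'ro', isa => 'Num', required => 1);
        has 'y' => (is => 'ro', isa => 'Num', required => 1);

        # typed method signature; $self is provided implicitly
        method distance_to (Point $other) {
            return sqrt( ($self->x - $other->x)**2
                       + ($self->y - $other->y)**2 );
        }
    }

    my $origin = Point->new(x => 0, y => 0);
    my $corner = Point->new(x => 3, y => 4);
    print $origin->distance_to($corner), "\n";    # prints 5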

Then, from his work at the BBC, Curtis “Ovid” Poe has given us a great series of thoughtful posts on the benefits of replacing multiple inheritance with roles in a complex production code-base. The slides of his Inheritance vs Roles talk are a good place to start. Then dive into the blog posts back here and work your way forward.

I ♥ modern perl!

Generate Treemaps for HTML from Perl, please.

Seeing this video of a treemap of perlcritic memory usage reminded me of something…

I’d really like to be able to use treemaps in NYTProf reports to visualize the time spent in different parts, and depths, of the package namespace hierarchy. Currently that information is reported in a series of tables.

A much better interface could be provided by treemaps. Ideally allowing the user to drill down into deeper levels of the package namespace hierarchy. (It needn’t be this flashy, just 2D would be fine :-)

In case you’re not familiar with them, treemaps are a great way to visualise hierarchical data. Here’s an example treemap of the disk space used by the files in a directory tree (from schoschie).

Perl already has a Treemap module, which can generate treemap images via the Imager module. Treemap is designed to support more output formats by sub-classing.

I guess it wouldn’t be hard to write a sub-class to generate HTML client-side image map data along with the image, so clicks on the image could be used to drill down into treemaps that have more detail of the specific area that was clicked on.

More interesting, and more flexible, would be a sub-class to generate the treemap as a Scalable Vector Graphics diagram using the SVG module (and others).
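To make the shape of the idea concrete, here’s a rough standalone sketch. It doesn’t use the Treemap module at all (a real implementation would, via a sub-class), and the package names and times are made up; it just lays out one level of a namespace as SVG bands sized by time:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Made-up exclusive times per package, in seconds.
    my %time_by_package = (
        'MyApp::Schema' => 42.0,
        'MyApp::View'   => 17.5,
        'MyApp::Util'   =>  8.2,
    );

    my ($width, $height) = (600, 300);
    my $total = 0;
    $total += $_ for values %time_by_package;

    print qq{<svg xmlns="http://www.w3.org/2000/svg" width="$width" height="$height">\n};
    my $y = 0;
    for my $pkg (sort { $time_by_package{$b} <=> $time_by_package{$a} }
                 keys %time_by_package)
    {
        # Each package gets a horizontal band proportional to its time.
        my $h = $height * $time_by_package{$pkg} / $total;
        printf qq{  <rect x="0" y="%.1f" width="%d" height="%.1f" fill="#cbe4f0" stroke="#555"/>\n},
            $y, $width, $h;
        printf qq{  <text x="5" y="%.1f" font-size="12">%s (%.1fs)</text>\n},
            $y + 15, $pkg, $time_by_package{$pkg};
        $y += $h;
    }
    print "</svg>\n";

Attaching a link or click handler to each rectangle would then give the drill-down behaviour.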

I’m not going to be able to work on either of those ideas anytime soon.

Any volunteers?

Perl DynaLoader hack using .bs files

I needed to install a perl extension from a third party. It’s an interface to a shared library, a .so file, that they also supply.

Normally I’d add the directory containing the shared library to the LD_LIBRARY_PATH environment variable. Then when the extension is loaded the system dynamic loader can find it.

For various reasons I didn’t want to do that in this case.

An alternative approach is to pre-load the shared library with a flag to make its symbols globally available for later linking. (The flag is called RTLD_GLOBAL on Linux, Solaris and other systems that use dlopen(). This hack may not work on other systems.)

But how to pre-load the shared library, only when needed, and without changing any existing perl code? This is where the pesky little .bs files that get installed with perl extensions come in handy.

They’re known as ‘bootstrap’ files. If the .bs file for an extension is not empty then DynaLoader (and XSLoader) will execute the contents just before loading the shared object for the extension.

So I put code like this into the .bs file for the extension:

    use DynaLoader;
    # Pre-load the third-party library; the '1' flag asks for its symbols
    # to be made available globally (RTLD_GLOBAL) for later linking.
    DynaLoader::dl_load_file("$ENV{...}/...so", 1)
        or die DynaLoader::dl_error();

Not a recommended approach, but neat and handy for me in this case.

NYTProf 2.09 – now handles modules using AutoLoader, like POSIX and Storable

I’ve uploaded Devel::NYTProf 2.09 to CPAN.

If you’re using VMS the big news is that Peter (Stig) Edwards has contributed patches that enable NYTProf to work on VMS. Yeah! Thanks Peter.

For the rest of us there’s only one significant new feature in this release: NYTProf now includes a heuristic (that’s geek for “it’ll be wrong sometimes”) to handle modules using AutoLoader, the most common of which are Storable and POSIX. You may have encountered a warning like this when running nytprofhtml:

Unable to open '/../../lib/Storable.pm' for reading: No such file or directory

It’s a symptom of a deeper problem caused by AutoSplit, the companion to AutoLoader. The details of the cause, effect, and fix aren’t worth going into now. If you’re interested you can read my summary to the mailing list.
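For anyone who hasn’t met it, the pattern in question looks roughly like this (a minimal sketch, with a made-up module name):

    package My::Heavy;              # hypothetical module
    use strict;
    use warnings;
    use AutoLoader qw(AUTOLOAD);

    sub cheap { 'compiled when the module is loaded' }

    1;

    __END__

    # At install time AutoSplit moves everything after __END__ into
    # separate files, e.g. auto/My/Heavy/expensive.al, which the
    # imported AUTOLOAD sub compiles on first call.
    sub expensive {
        # ... rarely needed, slow-to-compile code ...
    }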

The upshot is that NYTProf now reports times for autoloaded subs as if the sub was part of the parent module. Just what you want. Time spent in the AUTOLOAD’er sub is also reported, naturally.

There is a small chance that the heuristic will pick the wrong ‘parent’ module file for the autoloaded subroutine file. That can happen if there are other modules that use the same name for the last portion of the package name (e.g., Bar::Foo and Baz::Foo). If it ever happens to you, please let me know. Ideally with a small test case.

The list of changes can be found here (or here if you’re a detail fanatic).

One other particularly notable item is that the savesrc option wasn’t working reliably. Thanks to Andy Grundman for providing a good test case, that’s now been fixed.

Enjoy!