Hey, my own TV channel!

It felt strange when I first set up this blog. What would I write about? Who would care?

For several years now I’ve been giving talks at conferences and workshops. I’d generally upload a PDF of the slides somewhere, or at least email them to anyone who asked. I’ve now added a special page on the blog where I can list all the talks I’ve given. It acts as a single location to find all my talks, with links to slides and any related materials. (It’s currently a work in progress. I’ll be filling it in from time to time. Any major updates will be accompanied by a blog post.)

Slides, no matter how good, miss much of the real event: no ad-libs, no questions and answers. When writing slides I’m always caught between the desire to write little, so the audience can pay attention to what I’m saying, and to write lots, so people reading the slides later still get a reasonably full picture.

There’s also the problem of notes. I often use ‘presenter notes’ on the slides to give extra information: both to myself, in case I need it while presenting, and for links to data sources and credits for images used. I’ve uploaded some talks to slideshare.net, but I have to include a separate version with notes (which is useful for download and print, but almost unreadable in their viewer).

I tried making a video of a talk with a camcorder. The results weren’t great: grainy, noisy, hard to read, and massive video files.

Then I decided to try using screencasting software. I bought a great wireless USB microphone and the amazing ScreenFlow screencasting software. Now I can capture everything in fine detail and edit it easily afterwards.

Great. Now what? I needed somewhere to host the (very large) videos. I looked around and tried a few, like Vimeo, but wasn’t happy with the results. Vimeo, for example, transcodes to quite a low resolution and doesn’t let viewers download the original.

Eventually I found the wonder that is blip.tv. A whole laundry list of great features. If you produce videos of any kind, give them a look.

So, now I have my own TV channel.

Strange world!

Perl Myths – OSCON 2008

I gave an updated version of my earlier Perl Myths talk at OSCON this year. It includes updated numbers, updated job trend graphs (showing good growth in Perl jobs), and slides for the Perl6 portion that were missing from the upload of the previous talk.

Two versions of the slides are available: one with just the slides on a landscape page, and another with slides and notes on a portrait page.

I also have a screencast of the presentation which I hope to edit and upload before long. (I’ll update this page and post a new note when I do.)

Interesting Items OSCON 2008 – Dealing with Streaming Data

This is a collection of links to things discussed, or just mentioned, at OSCON that I found interesting enough to note. Hopefully one of a series for OSCON 2008, as time allows.

These items are from a great talk on “A Streaming Database” by Rafael J. Fernández-Moctezuma at PDXPUG day.

Hancock is a C-based domain-specific language designed to make it easy to read, write, and maintain programs that manipulate large amounts of relatively uniform data. In addition to C constructs, Hancock provides domain-specific forms to facilitate large-scale data processing.

The CQL continuous query language (google)

Borealis is a distributed stream processing engine. Borealis builds on previous efforts in the area of stream processing: Aurora and Medusa.

CEDR is the Complex Event Detection and Response project from Microsoft Research.

Google Protocol Buffers “allow you to define simple data structures in a special definition language, then compile them to produce classes to represent those structures in the language of your choice”.
That seems similar to Thrift, which is “a software framework for scalable cross-language services development. It combines a powerful software stack with a code generation engine to build services that work efficiently and seamlessly between languages”.

NYTProf v2 – A major advance in perl profilers

After much hacking, and just in time for OSCON, I’m delighted to announce the release of Devel::NYTProf version 2: a powerful, efficient, feature-rich Perl source code profiler.

“If NYTProf 1 is a Toyota, then 2.0 is a Cadillac”
— Adam Kaplan, author of NYTProf 1.

The Short Story

(I’ve written up the long story, for the record, in another post.)

Adam forked Devel::FastProf (the best statement-level profiler at the time), added a test harness and tests, made some internal improvements, and grafted on an html report derived from Devel::Cover. The resulting Devel::NYTProf v1 was a big hit.

Meanwhile I’d been working on Devel::FastProf, contributing some interesting new profiling features, but it had no test harness and no html reports. When Adam released NYTProf I switched. Attracted not so much by the html report as by the test harness. (A lesson to anyone wanting to attract developers to an open source project.)

I started out by adding in the same new features I’d been adding to FastProf, albeit with more polish and tests. And then I got carried away…

“Holy shit! That is amazing.”
— Andy Lester, after using a recent development version.

Example Screen Shots

As an example I’ve used NYTProf to profile perlcritic 1.088 running on its own source code.

$ cd Perl-Critic-1.088
$ perl -d:NYTProf -S perlcritic .
$ nytprofhtml
$ open nytprof/index.html

The first image is the main index page, showing the top few subroutines and the start of the list of all source files.

[Image: NYTProf perlcritic index.png]

Each source file has links to separate line-level, block-level, and sub-level reports (though I hope to merge them in future). Clicking on a subroutine name takes you to the line-level report for the file it’s defined in and positions you at the subroutine definition.

(The color coding is based on the Median Absolute Deviation of all the values in the column, the same as in NYTProf v1.)

Here’s the all_perl_files() subroutine, for example:

[Image: NYTProf perlcritic all_perl_files.png]

The colored numbers show the number of statements executed, the total time taken, and the average. The statement times are always exclusive times: time actually spent on that statement itself, its expressions, and any built-in functions it uses. It doesn’t include time spent executing statements elsewhere, in subroutines called by it. In NYTProf, subroutine timings are inclusive and statement timings are exclusive.
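To make that concrete, here’s a tiny made-up sketch (it’s not taken from the perlcritic reports, and the sub names are invented purely for illustration):

use strict;
use warnings;

sub expensive {
    my $total = 0;
    $total += $_ for 1 .. 1_000_000;   # almost all the exclusive statement time lands on this line
    return $total;
}

sub slim_wrapper {                      # hypothetical sub, just for the example
    my $x = expensive();                # little exclusive time on this line itself...
    return $x + 1;                      # ...but slim_wrapper()'s inclusive time covers expensive()
}

print slim_wrapper(), "\n";

Profiled the same way as above (perl -d:NYTProf sketch.pl, then nytprofhtml), the loop line inside expensive() carries nearly all the exclusive statement time, while slim_wrapper() still reports a large inclusive subroutine time because that includes the time spent in expensive().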

Where did you come from and where are you going?

Notice the grey text.

On lines that define a subroutine NYTProf now adds ‘comments’ giving the total number of times the sub was called, the inclusive time spent in that sub, and the average. Then it adds a break-down of the same details for every location that called the subroutine. Here’s a better example of that:

[Image: NYTProf sub-callers.png]

On lines that call subroutines NYTProf now adds ‘comments’ giving the name of the actual subroutines called (resolving code refs to subroutine names, including the class that handled a method call). It also tells you how many calls were made and how much time was spent in that subroutine for calls made from that line. Here’s an example:

[Image: NYTProf subs-called.png]

When you mouse over the grey text it turns black, and you can click on the embedded links to jump to the callers or callees. So with a few clicks you can run up and down the call stack, exploring where the time is being spent and where the hot spots are being called from. The ability to explore the code so easily, guided by these performance signposts, is incredibly useful.

Rolling up for a higher level view

Sometimes per-statement timing can be overwhelming. In large subroutines it becomes “hard to see the wood for the trees”. So, for the first time in any Perl profiler, NYTProf now provides a block-level view of the timing data:

[Image: NYTProf perlcritic all_perl_files block level.png]

What’s happening here is that NYTProf is taking the same time measurements per statement, but instead of accumulating the time against the line the statement is on, it accumulates it against the line of the first statement in the enclosing block. (The actual line it accumulates it against isn’t ideal in some cases. I’m hoping to improve that soon.)
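For example (a made-up snippet, purely to illustrate the roll-up; it isn’t part of perlcritic):

use strict;
use warnings;

my %length_of;
for my $word (qw(alpha beta gamma delta)) {
    my $upper = uc $word;               # in the block-level view the time for these
    $length_of{$upper} = length $word;  # statements is accumulated against the first
}                                       # statement of the enclosing block, not their own lines
print "$_ => $length_of{$_}\n" for sort keys %length_of;

In the line-level report each statement inside the loop body carries its own timing; in the block-level report those timings are added together, so hot blocks stand out even when no single statement looks expensive.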

This report is a little more tricky to read but can be very useful, especially in large subroutines. (I hope to improve the readability with some css :hover magic in future.)

The subroutine-level report is very similar, except that all the timings are accumulated against the line of the first statement in the subroutine.

Have a Look

Back in June I gave a talk at the Irish Open Source Technology conference where I showed the first version of the annotated html report (which I’d been hacking on till 3am the night before while struggling with a cold – presentations are great motivators). You can see the 15-minute video here or here.

Explore for yourself

I’ve uploaded the full set of reports for you to explore here. Take a look. Let me know what you think.

Perl and Parrot – Baseless Myths and Startling Realities

 
I was recently invited to speak at the Irish Web Technology Conference (26-29 May in Dublin). I’m used to preaching to the converted but this would be the first time I’ve spoken to a (presumably) more sceptical audience. I agreed to speak but haven’t yet been asked to provide an abstract.
 
Around the same time I saw a call for participation for XTech (6-9 May in Dublin). So I figured I’d submit a proposal. I’m guessing the audience would be similar so I could develop a single talk for both.
 
Here’s what I came up with in the last hour before the deadline: 
Perl5:
  • Perl5 isn’t the new kid on the block. Perl is 21 years old. Perl5 is 14 years old.
  • Perl5 hasn’t been generating buzz recently.
  • Perl5 has just been getting on with the job. Boring but true.
  • Lots of jobs, in fact. I’ll show you the surprising scale of the Perl jobs market.
 
Massive Module Market:
  • Vibrant developer community
  • Over 14,000 distributions (53,000 modules) with over 6,400 ‘owners’ (lead developers).
  • Thousands of releases per month to hundreds of modules.
  • CPAN has over 360 mirrors in 51 regions (TLDs)
  • Automated testing applied to all uploads by the CPAN Testers Network: 61 different platforms and 20 different versions of Perl.
  • I’ll take you on a lightning tour.
 
Perl5.10:
  • Five years after Perl5.8, Perl5.10 is now out.
  • Packing a powerful punch for power users.
  • I’ll show you the highlights.
 
Parrot:
  • An advanced virtual machine for dynamic languages.
  • Advanced capabilities with blinding speed.
  • Already supports over 20 languages.
  • I’ll give you a quick overview.
 
Perl6:
  • A new generation of programming languages.
  • Advancing the state of the art in powerful practical languages.
  • A specification, not an implementation.
  • Multiple implementations exist already.
  • Generating code for multiple backends: Parrot, Perl5, Lisp, JavaScript.
  • Sharing a common test suite of almost 20,000 tests.
  • Perl6 is written in the best language for the job: Perl6!
  • I’ll demonstrate Perl6 code for you.

And I’ll do all this in 40 minutes. Fasten your seat-belts!

 

The IWTC session is 75 minutes so I figure I can write a good presentation by the end of February for that and then distil the essence down to the 40 minute session I (hope to) have at XTech in May.
 
I’d welcome any comments on the abstract. Especially anything worth saying, or ideally showing, to a relatively perl-sceptical audience.
 
I don’t want to get into a language comparison debate. Perl can stand on its own. But I do want to show that for any cool gizmo that language Foo has, Perl has something similar. An obvious example is “Ruby has Rails, Perl has Catalyst (and others)”. That’s easy to say but doesn’t carry much weight. For each of those I’d really like a great example.
 
For Catalyst, a big-name web site built using it would do. Other cool gizmos need other killer examples. Got any suggestions?
 
Looking at it the other way, perl has a few cool gizmos that might be worth a mention if time allows: perltidy springs to mind. What others can you think of? And what parallels do they have in other languages?