The company I work for, TigerLead.com, has an opening for a “skilled coder / database wrangler”.
We’re looking for a skilled coder / database wrangler to play a key role within our Operations and Engineering teams. Responsibilities include working with the large databases underlying our real estate search tools, setting up services for new clients, communicating with clients to evaluate bug reports, troubleshooting technical issues escalated by our client services team, and interfacing with the engineering team on systems maintenance and development. The work involves managing hundreds of external data feeds flowing into in-house databases totaling several million property listings. These listing databases power hundreds of real estate search sites used by more than a million home-buyer leads, who are tracked and cultivated by the thousands of Realtors using our management software. This position is critical to the robustness of these systems.
If that sounds like interesting work to you, take a look at the full job posting.
TigerLead is a lovely company to work for and this is a great opportunity. Highly recommended.
I’ve recently started looking into geocoding in Perl. We’re currently using some old hand-coded logic to query the Yahoo Search API. I wanted to switch to Geo::Coder::Yahoo, but I noticed that it depended on Yahoo::Search, which hadn’t been updated since March 2007 and had accumulated a number of bug reports (which may well be closed by the time you read this).
Several of those bugs related to the fact that Yahoo::Search didn’t handle Unicode properly when using its default internal XML parser (instead of the optional XML::Simple, which does the right thing, but slowly).
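For context, basic Geo::Coder::Yahoo usage looks something like this. This is a minimal sketch: the appid value is a placeholder (you need your own registered Yahoo application ID) and the address is just an example.

```perl
use strict;
use warnings;
use Geo::Coder::Yahoo;

# 'YOUR_APP_ID' is a placeholder - register your own application ID with Yahoo
my $geocoder = Geo::Coder::Yahoo->new( appid => 'YOUR_APP_ID' );

# geocode() returns a reference to an array of result hashes
my $results = $geocoder->geocode( location => 'Hollywood & Highland, Los Angeles, CA' );

for my $place (@$results) {
    printf "%s, %s: %s, %s\n",
        $place->{city}, $place->{state},
        $place->{latitude}, $place->{longitude};
}
```

Each result hash also carries fields like zip, country, and precision, which are useful when deciding how much to trust a match.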
What happened next makes a nice little example of getting things done in the Open Source world… Continue reading
Where I’m working at the moment we’re using the Yahoo Geocoding API, but we aren’t very happy with it. I’ve been asked to look into how we can improve our geocoding.
I released Devel::NYTProf v3 on Christmas Eve 2009. Over the next couple of months a few more features were added. The v3 work had involved a complete rewrite of the subroutine profiler and heavy work on much else besides. At that point I felt I’d done enough with NYTProf for now and it was time to focus on other more pressing projects.
Over those months I’d also started working on enhancements for PostgreSQL PL/Perl. That project turned into something of an epic adventure with more than its fair share of highs and lows and twists and turns. The dust is only just settling now. I would have blogged about it but security issues arose that led the PostgreSQL team to consider removing the plperl language entirely. Fortunately I was able to help avoid that by removing Safe.pm entirely! At some point I hope to write a blog post worthy of the journey. Meanwhile, if you’re using PostgreSQL, you really do want to upgrade to the latest point-release.
One of my goals in enhancing PostgreSQL PL/Perl was to improve its integration with NYTProf. I wanted to be able to profile PL/Perl code embedded in the database server. With PostgreSQL 8.4 I could get the profiler to run, with some hackery, but in the report the subroutines were all __ANON__ and you couldn’t see the source code, so there were no statement timings. It was useless.
The key problem was that Devel::NYTProf couldn’t see into string evals properly. To fix that I had to go back spelunking deep in the NYTProf guts again; mostly in the data model and report generation code. With NYTProf v4, string evals are now treated as files, mostly, and a whole new level of insight is opened up!
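As a quick illustration (a sketch, assuming Devel::NYTProf v4 is installed), profiling a script that compiles code via a string eval now yields a per-statement view of the eval’d source, reported as if it were a file:

```perl
# demo.pl - a string eval that NYTProf v4 can report on as a "file"
use strict;
use warnings;

my $code = 'my $sum = 0; $sum += $_ for 1 .. 100_000; $sum';
my $result = eval $code;    # the eval'd source gets its own page in the v4 report
print "$result\n";

# Typical profiling session (run from the shell):
#   perl -d:NYTProf demo.pl     # writes ./nytprof.out
#   nytprofhtml                 # generates the HTML report in ./nytprof/
```

In the generated report the eval shows up alongside ordinary source files, with statement timings for each line of the eval’d code.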
In the rest of this post I’ll be describing this and other new features.
Speaking of belated screencasts, I also haven’t blogged about my visit to the Ann Arbor Perl Mongers in Michigan.
The Ann Arbor Perl Mongers group was being restarted (after a 10-year gap) by the TigerLead tech team. I work for TigerLead and was going to be in Ann Arbor for a meeting, so they asked me to give a couple of talks: Devel::NYTProf and Perl Myths.
I like giving talks at events like these because there’s no set time limit and the audience is more relaxed (the free pizza probably helped).
I’ve uploaded a screencast of the Perl Myths talk. As usual it covers the Perl jobs market, CPAN, best practices, power tools, community, and Perl 6. At almost 1 hour 20 minutes it’s significantly longer than my usual, more rushed, 40-minute version given at conferences, and includes 15 minutes of Q&A at the end.
I’ve just been updating the page where I keep links to my presentations and noticed that, not only had I not updated the section for the 2009 Italian Perl Workshop, but I hadn’t even uploaded the screencasts I’d made.
So, with apologies for the delay, here’s my entry for IPW09, with the links to the uploaded screencasts:
I had a great time at IPW08 and was delighted to be invited back for IPW09, which was another great success. My contributions were two talks. The first was called “DBI Oddmenti” and covered DBD::Gofer (16-minute screencast), DBI::Profile (7-minute screencast), and DBDI, a key component of a future DBI for Perl 6 (5-minute screencast). The second was “State-of-the-art Profiling with Devel::NYTProf” (40-minute screencast).
With 30 talks from 20 speakers on 2 tracks, IPW09 was another success for the Italian Perl Association, which was formally incorporated at the event. I’m confident that YAPC::EU 2010 is in safe hands.
I’m really looking forward to YAPC::EU. We’re combining the conference with our family summer holiday. We’ll be staying in a cottage in the village of Calci a few miles outside Pisa.
I released Devel::NYTProf 3.0 almost three months ago, on Christmas Eve.
Since then a few point releases have accumulated some changes and features worth mentioning: