A blog by Ryan Breen of CloudFloor
The WebKit team just announced the latest round of updates to the already robust WebKit Inspector, and there are a couple of new features that deserve mention:
- Timeline Panel: Like dynaTrace AJAX Edition, WebKit Inspector now provides a graphical timeline which overlays network operations, script execution, and rendering on the same view. Like dynaTrace’s PurePath feature, WebKit Inspector tracks the causal relationship between events and browser operations, so you will always know the cost of a specific user action.
With the ever increasing complexity of web applications, this is of great benefit to developers. We can now see the full performance of the application from the network through to final rendering to the screen with enough granularity of detail to take action against bottlenecks.
- Audit Panel: Providing functionality similar to YSlow and PageSpeed, the Audit Panel is a rules engine which currently provides recommendations on network and client side performance optimizations.
The team plans to open this framework so developers can contribute new rules in different categories. It would be great to see some degree of standardization so that an interested developer could easily write new rules to be executed in YSlow, PageSpeed, and WebKit Inspector.
There’s been so much going on in the performance space lately that I’ve been snowed under. It’s difficult to know where to begin chronicling all of the progress. I’ll start with a few updates from Sitepen.
- Back in April, Kris Zyp had a great article for IBM developerWorks called Ajax performance analysis. The developerWorks crew puts out some great material, and this is no exception. Simply put, it’s one of the best articles I’ve seen on the topic, and it should be required reading for every Ajax developer. He discusses Firebug, YSlow, and some client-side instrumentation techniques.
- Old friend of the Perf, Tom Trenka, had a nice post about string operations across browsers in May. One of the more interesting takeaways concerns IE7 versus IE6. The net: there's no longer any justification, if there ever was, for special-casing string concat operations for IE.
- One of my favorite tools, Firebug Lite, has seen some dramatic improvements in the Dojo Toolkit version, as discussed by Mike Wilcox in early June. The features discussed: a popup mode that remembers size and position, ReCSS (so you can reload stylesheets without reloading the app), a DOM inspector, an object inspector, and a command line. They've definitely taken Firebug Lite a long way past the initial goal of offering a bare subset of Firebug functionality to IE developers.
- A few days ago, Mike posted another article, this time with a nice addition to the recent swell of client-side profiling articles. Mike whipped up a nice generic mechanism for tracking client-side performance in a cookie to remove some of the tedium from generating a statistically relevant data set in your own browser.
- Finally, Alex Russell expands on the concept of lazy loading by creating a stub loader for Dojo. Weighing in at a slim 6kB (gzipped over the wire), this build of dojo.js is just the bootstrap code necessary for loading the main functionality, all of which is deferred until it’s actually called within an application. John Resig posted a follow-up regarding some of the clear downsides of this approach, such as the potential violation of user expectations.
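Tom's post reports measurements rather than code, but the comparison at its heart can be sketched like this (the harness below is my own; `timeIt`, `concatPlus`, and `concatJoin` are hypothetical names, not from his post):

```javascript
// Minimal micro-benchmark sketch comparing naive += concatenation with
// the array-join idiom once recommended as an IE workaround.
// The iteration count is an arbitrary choice for illustration.
function timeIt(label, iterations, fn) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) {
    fn();
  }
  return { label: label, ms: Date.now() - start };
}

// Plain += concatenation.
function concatPlus(parts) {
  var s = '';
  for (var i = 0; i < parts.length; i++) {
    s += parts[i];
  }
  return s;
}

// The array-buffer-and-join idiom.
function concatJoin(parts) {
  var buf = [];
  for (var i = 0; i < parts.length; i++) {
    buf.push(parts[i]);
  }
  return buf.join('');
}
```

Running both variants through `timeIt` with a few thousand iterations per browser is enough to see whether the join idiom still pays its way; per Tom's data, on IE7 it no longer does.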
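Alex's stub loader is Dojo-specific, but the core idea, registering module factories up front and deferring their execution until first use, can be sketched in a few lines (the `stub` object and its `define`/`require` names are illustrative, not Dojo's actual API):

```javascript
// Illustrative sketch of a stub loader: module bodies are registered
// as factory functions but not executed until the first require() call,
// so the bootstrap payload stays small.
var stub = (function () {
  var factories = {}; // name -> factory function, not yet run
  var cache = {};     // name -> initialized module

  return {
    define: function (name, factory) {
      factories[name] = factory;
    },
    require: function (name) {
      if (!(name in cache)) {
        cache[name] = factories[name](); // deferred work happens here
      }
      return cache[name];
    }
  };
})();
```

The trade-off John points out falls directly out of this structure: the cost of running a factory is paid at the moment of first use, inside a user action, rather than up front during page load.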
Hot on the heels of yesterday’s discussion of Jiffy come a few unrelated notes involving client side performance testing. It looks like this approach is finally gathering the mindshare it deserves, and it’s really cool to see all the effort going into developing these solutions.
The first is a fairly basic client-side performance tracker for Rails by Eric Falcao. Currently, it appears to track only the time from the start of the document parse to the onload event. Eric could make this a really compelling tool by providing an API that lets the developer add more granular timings as desired. That's the approach I took in the prototype Rails client-side perf tool behind the Dojo Charts measurements (and a couple of other projects), and it's also the approach used by Jiffy.
You could then hook into prototype.js or RJS code generators to auto-insert these performance counters for common actions (here’s an example — time every XHR fired on behalf of every end user). To use Eric’s words, there are some really cool ways to make this type of instrumentation “we’re-all-spoiled-with-rails simple.”
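As a sketch of what such a granular timing API might look like (this is not Jiffy's or Eric's actual interface; all names here are hypothetical):

```javascript
// Sketch of a granular client-side timing API: mark named start points,
// measure elapsed intervals against them, and keep the samples for later
// reporting. The clock is injectable so the logic is testable.
function PerfLog(now) {
  this.now = now || function () { return Date.now(); };
  this.marks = {};    // mark name -> timestamp
  this.measures = []; // accumulated { name, ms } samples
}

PerfLog.prototype.mark = function (name) {
  this.marks[name] = this.now();
};

PerfLog.prototype.measure = function (name, sinceMark) {
  var start = this.marks[sinceMark];
  if (start === undefined) return null; // never marked; nothing to measure
  var entry = { name: name, ms: this.now() - start };
  this.measures.push(entry);
  return entry;
};
```

An application (or auto-generated RJS code) would call `log.mark('xhr')` just before firing a request and `log.measure('xhr-complete', 'xhr')` in its callback; the accumulated `measures` array can then be serialized and beaconed back to the server.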
Next is a really cool cross-browser benchmark of SVG, VML, and Canvas by Ernest Delgado. Ernest uses two case studies of Google Maps charting to compare SVG (for Safari and Firefox) and VML (for IE) with a Canvas implementation. I’ve done some studies of the relative performance of VML and SVG, but I’ve never looked at how a Canvas implementation could compare.
Ernest’s findings are interesting. In my research, Firefox’s SVG implementation was notably slower than Safari’s, and Ernest’s data bears that out for Firefox 1 and 2. But Firefox 3 renders SVG in his case studies in between one-half and one-third the time of Firefox 2, so it appears the team has done some solid work on the SVG engine (or, perhaps, performance improvements elsewhere in the browser are responsible for the gains).
Compared to Safari 3, Firefox 3 turns in a mixed performance. In the first case, Firefox is significantly faster. In the second case, Safari is faster. I would like to know more about the sample size of the measurements to see if these numbers would hold up, but it definitely looks like the Mozilla team has been hard at work.
Speaking of the Mozilla team, John Resig last week described a new plugin for deep profiling jQuery. This plugin will instrument every jQuery call and give some basic stats as to call count and time spent. And this is just the beginning. Per John:
The next stage of development for this plugin would be to reveal which methods are running inside other jQuery methods – in addition to monitoring other aspects of the application (such as timers, Ajax callbacks, etc.). I’m pleased with even this most-basic result – it gives me the ability to quickly, and easily, learn much more about a jQuery-using application.
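The instrument-and-count approach John describes can be sketched generically: wrap every method of an object so each call bumps a counter and accumulates elapsed time (a minimal sketch, not the plugin's actual code; `profileMethods` is a name I made up):

```javascript
// Generic sketch of deep profiling by instrumentation: replace each
// method of an object with a wrapper that records call count and total
// time, delegating to the original so behavior is unchanged.
function profileMethods(obj, stats) {
  for (var name in obj) {
    if (typeof obj[name] !== 'function') continue;
    (function (name, original) {
      stats[name] = { calls: 0, ms: 0 };
      obj[name] = function () {
        var start = Date.now();
        try {
          return original.apply(this, arguments);
        } finally {
          stats[name].calls += 1;
          stats[name].ms += Date.now() - start;
        }
      };
    })(name, obj[name]);
  }
  return obj;
}
```

Pointing something like this at a library's public surface yields the basic calls-and-time table John describes; attributing nested calls to their callers is the harder next step he mentions.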
Clearly, the complexity of apps being run on the client side requires more measurement within the browser. That’s why we’re seeing a mini-explosion of browser side performance collection tools or demos. This is a space ripe for innovation.
I’m at the O’Reilly Velocity Conference in San Francisco today and will be sitting on a panel with Bill Scott, Ernest Mueller, and Scott Ruthfield. Steve Souders is moderating.
Bill is kicking off the show with something really exciting — the Jiffy plugin for Firebug. Jiffy relies on Scott Ruthfield’s Jiffy-Web open source analysis suite to track the performance of an application from both the client and server side. Client side performance tracking is something I’ve been a fan of for a while (I used a similar technique for the Dojo Charts benchmark last year).
This looks like a great new tool to make this type of analysis more accessible, and I’ll be attending Bill’s sessions today to get more information.
I’m doing a Webinar Thursday with Bob Buffone of Rock Star Apps and Nexaweb. I’ve never done a joint webinar before, so it should be a lot of fun. 2 hours of Ajax/RIA performance discussions — what could be better?
Bob’s blog has more details on how to sign up. It’s free, of course.
I’m fascinated by cases where seemingly banal technical details become precious commodities because very few have expended the time and energy necessary to document them. One good example is mobile browser connection profiles — there are thousands of combinations of mobile device and browser software, and each has its own particular connection limits and concurrency profile. No central body provides gratis access to this information, so those looking to study or test mobile browsers have few and costly options to choose from.
That’s why I was excited to see a post by Jason Grigsby of Cloud Four (via Ajaxian) about a research project to collect this information with some clever server-side magic. Just hit this link in your mobile device and help contribute to a worthy cause. The results will be published under a creative commons license for all to use.
I’ve talked before about the recent move by browser vendors to implement the Selectors API. There is potential for significant performance benefits from moving this code into the browser, but there is risk as well. If the provided functionality is buggy (as history tells us it must be), libraries will need to patch around these bugs on a case-by-case basis. If the spec is ambiguous or differs from de facto standards used in common practice, that’s yet more work for the library maintainers.
John Resig provided some insight with a post today into how browser vendors, the W3C, and library maintainers are coming together to smooth over the rough parts of the spec. It’s a fascinating read, providing a peek into the sausage-making process of spec wrangling for those who don’t frequent the public-webapi mailing list.
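The patch-around pattern libraries end up with tends to take the same shape everywhere: try the native Selectors API first, and fall back to the library's own engine when it is absent or throws on a selector it mishandles. A minimal sketch (here `fallbackSelect` is a toy stand-in for a library's selector engine that only handles tag names):

```javascript
// Toy stand-in for a JavaScript library's own selector engine.
// A real engine handles full CSS selectors; this handles only tag names.
function fallbackSelect(root, selector) {
  if (typeof root.getElementsByTagName !== 'function') return [];
  return Array.prototype.slice.call(root.getElementsByTagName(selector));
}

// Prefer the native Selectors API, but patch around missing or buggy
// implementations by falling back to the library engine.
function queryAll(root, selector) {
  if (typeof root.querySelectorAll === 'function') {
    try {
      return Array.prototype.slice.call(root.querySelectorAll(selector));
    } catch (e) {
      // Native implementation rejected or mishandled this selector:
      // fall through to the library engine.
    }
  }
  return fallbackSelect(root, selector);
}
```

Every per-browser bug discovered turns the `try`/`catch` above into a growing list of special cases, which is exactly the maintenance burden the spec discussions are trying to head off.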
Testing new arrangements of DOM elements to improve the object load order or parallelism can be a bit of a cumbersome task. Fire up a text editor, create a test page with a meaningful name, hit it with different browsers, and repeat a few hundred times. As an exemplar of the old aphorism that good programmers are lazy, Steve Souders (formerly of Yahoo!, now of Google) created Cuzillion to remove some of the friction from these testing cycles.
Cuzillion is a simple web app that allows for easy arrangement of different page elements (external scripts, images, stylesheets) within a DOM. These sample pages are each defined by a simple restian URL, so they can be shared with other developers as examples of what to do (or what not to do). Loading a page in Cuzillion also reports a high level number for page load time and some micro-metrics from within the page (the time to load an inline script, for example). You can use Page Detailer or HttpWatch to get a more detailed analysis of object load order.
When YSlow was released last year, one of the aspects of the project that excited me the most was the documentation it provided: just by ranking specific performance decisions made by the application, it served to educate developers on what they can do better. I could see a community developing around Cuzillion to serve a similar purpose, especially as the tool expands to handle more DOM elements or object load techniques (such as external scripts referenced via DOM insertion).
Since switching to Google hosting for my personal e-mail, all of my WordPress ‘comments pending approval’ e-mails have been silently going to my spam folder. I just finished digging through the 4000 messages that queued up. Damn comment spam.
Apologies to those whose comments were delayed. I’ve corrected the e-mail issue, and I’ll do a better job of staying on top of comment moderation from now on.
It’s great to see pseudo-standards such as Firebug’s console and profiling APIs gain traction. That makes it much easier for users to get meaningful comparative data between browsers while testing their applications.
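In practice, code written against Firebug's console API needs a guard so it doesn't blow up in browsers where `console` is missing or only partially implemented. A common defensive sketch (`ensureConsole` is my own name, and the method list is a subset for illustration):

```javascript
// Fill in missing console methods with no-ops so instrumented code runs
// unchanged in browsers without Firebug-style console APIs.
// `global` stands in for the browser's window object.
function ensureConsole(global) {
  var methods = ['log', 'info', 'warn', 'error',
                 'time', 'timeEnd', 'profile', 'profileEnd'];
  var c = global.console || (global.console = {});
  for (var i = 0; i < methods.length; i++) {
    if (typeof c[methods[i]] !== 'function') {
      c[methods[i]] = function () {}; // no-op stub
    }
  }
  return c;
}
```

With a shim like this in place, an application can leave its `console.time`/`console.profile` instrumentation in the shipped code and still run cleanly in browsers that lack those APIs.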