Posts Tagged ‘performance’

Microsoft takes on browser benchmark

Friday, March 27th, 2009

Recently, a friend sent me a link to the video and methodology of Microsoft’s browser benchmark. I was a little bored, so I finally took the time to read the long PDF on the methodology. Here is a summary of my thoughts:

  • The issue highlighted is a valid one. Micro-benchmarks do not test end-user scenarios well; a more macro view is needed. That said, completely discounting micro-benchmarks is not exactly the right thing either. Micro-benchmarks provide a good, practically unbiased (due to their lack of dependency on the network stack, web servers, load balancing, etc.) measure of a browser’s individual parts. A combination of micro- and macro-benchmarks would actually be best.
  • Moving on, the actual test Microsoft ran was geared heavily towards measuring load time. Honestly, though, load time is getting less and less important today. We are moving more and more towards heavy AJAX pages, where performance while the user is working inside the page matters more and more. This includes (but is not restricted to) re-rendering speed as the user scrolls horizontally and vertically (or diagonally, for OS X users), JavaScript processing combined with DOM manipulation and the re-rendering of the changed DOM, canvas and animation (i.e. HTML5, CSS), etc. (A sketch of the kind of in-page measurement I mean follows this list.)
  • Testing with pre-caching is reasonable, but it should not be the only holy grail. In a lab test you could easily arrange for content not to be cached. Without caching, browser performance is affected for the worse, and the benchmark would then also measure how well the browser exploits parallelism (parallel JavaScript downloads, JavaScript blocking CSS downloads, and so on). Lack of parallelism has caused huge slow-downs in less modern browsers, though Chrome 1/2, FF3.1, Safari 4, and IE8 are all trying to fix this (see: http://stevesouders.com/ua/).
  • With respect to measurement overhead, there is one approach that avoids it entirely: measurement using video recording. We can record the visual cues the browsers produce and perform a manual, human comparison. This is easy to do for web browsers, since the results of the computation are displayed directly on the screen. However, manual comparison may introduce inaccuracy, especially since the paper seems intent on measuring down to tens of milliseconds. Automating this and improving the accuracy of the timing would be awesome.
  • While we are talking about milliseconds, I disagree with this insistence on measuring browser load time to tens of milliseconds (two decimal places for timings in seconds). Users are not going to care if the page loaded 50 ms faster; measuring to a hundred milliseconds (one decimal place for timings in seconds) is accurate enough.
  • The issue of extensibility completely contradicts the point made in the first section: that a benchmark should test what users actually experience. More and more users (especially Firefox users) rely on add-ons to improve their browsing experience. I suppose extensibility should generally be addressed separately; however, it should not be discounted so completely. I know Microsoft is trying to sell this thing (IE8), so I guess it’s acceptable.
  • Inconsistent definitions: I really like this part; it is exactly as I imagine it should be. (The video-recording idea I suggested above also stems from my dislike of using browsers’ own onload mechanism to determine that a page has fully loaded.)
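To make the “macro” point above concrete, here is a minimal, hypothetical sketch (my own illustration, not the paper’s methodology) of timing an in-page, AJAX-style update including the rendering work that follows, rather than just the page load. It assumes a modern browser with performance.now() and requestAnimationFrame; the row count and output format are made up for the example.

```typescript
// Hypothetical sketch: time a DOM-heavy update plus the rendering work that
// follows, instead of only timing page load. Assumes a modern browser.

function measureDomUpdate(rows: number): Promise<number> {
  return new Promise((resolve) => {
    const start = performance.now();

    // Simulate an AJAX-style update: build and attach a chunk of new content.
    const list = document.createElement("ul");
    for (let i = 0; i < rows; i++) {
      const item = document.createElement("li");
      item.textContent = `row ${i}`;
      list.appendChild(item);
    }
    document.body.appendChild(list);

    // Two requestAnimationFrame calls: the second callback fires only after
    // the browser has completed a full frame (style, layout, paint) for the
    // DOM change, so rendering cost is included -- no reliance on onload.
    requestAnimationFrame(() => {
      requestAnimationFrame(() => resolve(performance.now() - start));
    });
  });
}

// Report to one decimal place; tens-of-milliseconds precision is plenty.
measureDomUpdate(5000).then((ms) => console.log(`DOM update: ${ms.toFixed(1)} ms`));
```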

[On inconsistent definitions:] Another factor that impacts benchmarking is having a consistent baseline of what it means to complete a task. In car and horse racing, the end is clear—there is a finish line that all contestants must cross to finish the race. Browser benchmarking needs a similar way to define when a webpage is done loading. The problem is that there is no standard which dictates when a browser can or should report that it is “done” loading a page—each browser operates in a different way, so relying on this marker could produce highly variable results.

Quoted from: Measuring Browser Performance (Microsoft)

Honestly, right now I don’t really care which is the fastest browser around. As long as they are in the same ballpark (not orders of magnitude slower), I care more about customization features. If you took a look at my two instances of Firefox (one FF3.1b3, the other FF3.0.8), you would see that both are heavily customized with heavy theming (yeah, one of them looks like Chrome) and tonnes of add-ons.

Oh, and being a Mac user, whatever the IE team develops does not directly affect me that much. ;)

Reduce # of HTTP requests

Saturday, November 1st, 2008

I think I’ve become more absent-minded recently. Several days ago I received an Amazon package at home and left it lying around, unopened, until this morning, when I remembered that the book I had ordered and coveted for several months was inside. Lol.

So I spent this morning reading Rule 1 from High Performance Web Sites (that’s the book that arrived in the package). It’s interesting, really: the design of this website tries to follow that rule pretty closely. That is, it tries to reduce the number of HTTP requests sent to the server. HTTP requests add a burden to the server and to your web browser, and everything gets slower; much slower. Firefox users will of course be familiar with tabs auto-restoring after crashes/restarts. At the point of auto-restoring, it is not uncommon for me to load 50 tabs at the same time. If each site makes 20 HTTP requests on average (Blimey! That many? You’d be surprised: some websites make over a hundred requests; heck, even my faculty website makes over 40), the auto-restore can easily amass 1,000 requests. Firefox, by default, limits concurrent HTTP requests to about 30 (check your about:config and search for network.http.max-connections). Older versions of Firefox defaulted to 2 concurrent connections per server. Ouch!
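As an aside, here is a rough sketch (my own illustration, not something from the book) of how you could count a page’s HTTP requests yourself using the Resource Timing API in a modern browser; run it from the developer console after the page has loaded.

```typescript
// Rough sketch: count a page's HTTP requests via the Resource Timing API.

function countRequests(): void {
  const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
  // +1 accounts for the HTML document itself, which is not a "resource" entry.
  const total = resources.length + 1;

  // Group by initiator (script, img, link, xmlhttprequest, ...) to see where
  // the requests are coming from.
  const byType = new Map<string, number>();
  for (const r of resources) {
    byType.set(r.initiatorType, (byType.get(r.initiatorType) ?? 0) + 1);
  }

  console.log(`~${total} HTTP requests for this page`);
  for (const [type, count] of byType) {
    console.log(`  ${type}: ${count}`);
  }
}

countRequests();
```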

I’m hoping this blog will not become a beast in terms of HTTP requests. The initial design (the default WordPress 2.6.3 theme) made over 10 requests. I’ve redesigned the page substantially and cut the number of requests by quite a bit. Even after adding line numbering (a vertical image strip) and Google Analytics (a whopping 2 requests: one for the JavaScript file, another for the actual tracking, which uses a heavily parameterized GET request for a 1×1 GIF image), this site makes a manageable 6 requests. I plan to add a JavaScript library for some of the additional things I want to do, but I’ll try to do that while reducing other requests. My first project is to replace the line-number image with JavaScript, so the library will take the place of the image download.
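For what it’s worth, the line-number replacement could look something like the hypothetical sketch below; the class names and markup structure are made up for illustration, and the real thing will depend on how this site’s code blocks are marked up.

```typescript
// Hypothetical sketch: replace the line-number image strip with script by
// counting the lines of each code block and rendering a numbered gutter,
// so the image download disappears.

function addLineNumbers(): void {
  document.querySelectorAll<HTMLPreElement>("pre.code").forEach((block) => {
    const lineCount = block.textContent?.split("\n").length ?? 0;

    // Build a gutter with one number per line (styled via CSS, no image).
    const gutter = document.createElement("div");
    gutter.className = "line-numbers";
    for (let i = 1; i <= lineCount; i++) {
      const num = document.createElement("span");
      num.textContent = String(i);
      gutter.appendChild(num);
    }

    // Wrap the gutter and the code block side by side.
    const wrapper = document.createElement("div");
    wrapper.className = "code-wrapper";
    block.parentNode?.insertBefore(wrapper, block);
    wrapper.appendChild(gutter);
    wrapper.appendChild(block);
  });
}

document.addEventListener("DOMContentLoaded", addLineNumbers);
```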

My next game will involve playing around with the cache settings, probably even installing the notoriously problematic WP Super Cache plugin.

Readers who are planning to create a website of their own may also want to consider sticking with the HTML standard instead of XHTML. That will save you tonnes of closing tags; the W3Schools HTML tag reference pages indicate whether a closing tag is optional in HTML. I have no intention of switching this blog to HTML, though. As long as I don’t get dugg or slashdotted, I don’t expect an amazingly high amount of traffic… heck, even 100 readers a day would be pretty amazing! Saving bandwidth is not my concern right now.

Anyway, for more interesting stuff and analysis on how to make your frontend better, be it a website or a webapp, try reading Steve Souders’ website.