Archive for the ‘Web Programming’ Category

Backward-compatibility vs. forward-fitting

Monday, April 12th, 2010

I have a serious problem with Adobe. I just watched a presentation video from Adobe on CS5 here (YouTube video). My problem is with the last feature they showed: exporting Flash animations to HTML5 canvas.

This seriously encourages people to stick with outdated technology (that is debatable, but it is my view) rather than move on and program directly in the technology of the future (HTML5). This, obviously, is what Adobe wants. However, Flash has never been a good platform to develop with. I’ve heard experienced Flash developers (and mind you, these are the good ones, not your run-of-the-mill Flash developers) complaining about the shortcomings of the Flash technology. Why should we let programmers get lazy and stick with a lousy platform when there is momentum to push them towards newer, cleaner technology? (Yes, part of the problem is the developers themselves; there are lots of lazy ones out there.)

Also, knowing that Adobe ships a very buggy Flash Player, I doubt that their Flash-to-Javascript/HTML5 compiler will be any good. Extrapolate a little and we might see wholesale exporting of Flash games to HTML5 in the future. Oh gosh, I don’t want to imagine that world. What I see is Javascript code exported from Flash that performs badly, probably pollutes the global namespace, and is likely bloated as well. This will keep the web crawling as slowly as ever when developers could have moved on to faster, cleaner technology. (On another note, Microsoft recently demoed IE9 with hardware-accelerated rendering of HTML5 canvas and animation; uber cool. Other browsers will definitely go there as well.)

At the end of the day, what we are missing today is backward compatibility of HTML5 canvas for browsers with no canvas implementation. What we don’t miss: forward-fitting old, outdated technology onto newer, cleaner HTML5. So if anything, I would want to focus on writing a backward-compatible Javascript abstraction of canvas, one that substitutes older technology (yes, I’m talking about Flash) where HTML5 is not available. We’ll see whether I have the time to take that up.
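To make the idea a bit more concrete, here is a minimal sketch of the detection half of such an abstraction. This is only an illustration: drawWithFlash() stands in for a hypothetical Flash-backed shim that would expose a canvas-like drawing interface, and the “stage” id is made up.

    // Minimal sketch: prefer the native canvas API, fall back to a Flash shim.
    // drawWithFlash() is a hypothetical fallback that would embed a Flash movie
    // exposing a canvas-like drawing interface; it is not implemented here.
    function getDrawingSurface(el) {
      if (el.getContext) {
        // The browser implements HTML5 canvas: use it directly.
        return el.getContext('2d');
      }
      // No canvas support (e.g. older IE): substitute the Flash-based surface.
      return drawWithFlash(el);
    }

    // Hypothetical usage, assuming <canvas id="stage"> exists in the page:
    var surface = getDrawingSurface(document.getElementById('stage'));
    surface.fillRect(10, 10, 100, 50);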

Apple might not do everything right, but I totally support their attempts to keep Adobe off the iPhone/iPad, including the infamous change to Section 3.3.1, on top of not implementing Flash at all on the iPhone and iPad.

Steve Souders’ tech talk on writing fast code

Saturday, April 4th, 2009

This guy is my hero of web performance techniques, so I was really happy when this video was published recently on the googletechtalks YouTube channel (check out the channel, it has many, many other interesting videos).

In this tech talk, he built upon his previous tech talk (which I happened to watch live, but could not find online) and talked about how you can take advantage of parallel Javascript downloads while maintaining any coupling between your external scripts and inline scripts. One of the techniques involves a modification of John Resig’s degrading script tags. He also mentioned problems with iframes preventing the onload event from firing earlier. Finally, he made clear how you can flush your data early to ensure that you utilize chunked transfer. The last bit was particularly interesting. It has been on my mind recently and I have experimented quite a bit with PHP to do this. Soon I’m going to figure out whether a Python (non-Apache) web server can do the same (I believe so); I have already written a test web server (based on the HttpServer class).
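For reference, the simplest of the coupling techniques he covers can be sketched roughly like this (my own illustration, not code from the talk; menu.js and init() are made-up names): create the script element from Javascript so the download does not block, and run the dependent inline code from the script’s onload handler (onreadystatechange on older IE).

    // Rough sketch of a script DOM element download with an onload coupling.
    function loadScript(url, callback) {
      var script = document.createElement('script');
      var done = false;
      script.type = 'text/javascript';
      // Older IE fires readystatechange instead of load.
      script.onload = script.onreadystatechange = function () {
        if (!done && (!this.readyState ||
            this.readyState === 'loaded' || this.readyState === 'complete')) {
          done = true;
          callback();
        }
      };
      script.src = url;
      document.getElementsByTagName('head')[0].appendChild(script);
    }

    // Hypothetical usage: the inline code depends on menu.js, so it only runs
    // once the external file has arrived, yet the download itself happens in
    // parallel with everything else on the page.
    loadScript('menu.js', function () {
      init();   // assumed to be defined by menu.js
    });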

In the PHP/Apache case, there were quite a few difficulties in making sure that chunked transfer actually happens. One is making sure that you have enough data (and explicitly flushing PHP’s output buffer otherwise). Another is figuring out Apache’s limitations on sending a single chunk (you have to have enough data before Apache agrees to push it; worse still, if you turn on gzipping, the data must exceed the minimum gzip window). Of course, the most interesting problem is figuring out how to structure the page so that you get the most out of chunked transfer. In other words, you want to send as much dumb data (data that can be generated very fast) as possible up front, so that the time the server spends generating the difficult content is used efficiently (meanwhile, the browser can start downloading CSS, some images, and maybe some Javascript). Also note that the browser will start partially rendering the page; however, the rest of the page will only be rendered when the rest of the data arrives (which can take a long time if you’re unlucky). So you need to make sure that the page renders fine with partial data.
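Just to show the shape of the idea (and only that), here is a sketch of the flush-early structure written against Node’s http module rather than the PHP/Apache setup described above; slowQuery() is a made-up stand-in for whatever takes the server a long time to generate, and the asset URLs are hypothetical.

    // Sketch only: write out the cheap top of the page immediately, then do
    // the expensive work, then send the rest. With no Content-Length header,
    // the response goes out as a chunked transfer and each write is flushed.
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/html' });

      // 1. The "dumb" data: the browser can start fetching CSS, Javascript
      //    and images while the server works on the hard part.
      res.write('<html><head>' +
                '<link rel="stylesheet" href="/style.css">' +
                '<script src="/app.js"></script>' +
                '</head><body><div id="header">My site</div>');

      // 2. The difficult content arrives later and closes the page.
      slowQuery(function (content) {
        res.end('<div id="content">' + content + '</div></body></html>');
      });
    }).listen(8080);

    // Stand-in for the expensive part of page generation.
    function slowQuery(callback) {
      setTimeout(function () { callback('the hard-to-compute bit'); }, 2000);
    }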

Souders’ talk revealed several considerations that my own experiments had missed. One is that some proxy servers buffer chunked transfers, with the effect that the user gets the entire page in a single monolithic transfer. Also, some browsers (Chrome and Safari) will not start partial rendering until they have received enough data (2KB for Chrome, 1KB for Safari).

Microsoft takes on browser benchmark

Friday, March 27th, 2009

Recently, a friend sent me a link to the video and methodology of Microsoft’s browser benchmark. I was a little bored just now, so I finally took the time to read the long PDF on the methodology. Here is a summary of my thoughts:

  • The issue highlighted is very valid: micro-benchmarks do not test end-user scenarios well, and a more macro view is needed. Completely discounting micro-benchmarks is not quite right either, though. Micro-benchmarks do provide a good, practically unbiased (thanks to the lack of dependency on the network stack, web servers, load balancing, etc.) measure of a browser’s individual parts. A combination of micro- and macro-benchmarks would actually be best.
  • Moving on, the actual tests Microsoft ran are very much geared towards measuring load time. Honestly, though, load time is getting less and less important today. We’re moving more and more towards heavy AJAX pages, where performance while the user is navigating within the page matters more and more. This includes (but is not restricted to) re-rendering speed as the user scrolls horizontally and vertically (or diagonally, for OS X users), Javascript processing followed by or combined with DOM manipulation and the actual re-rendering of the changed DOM, canvas and animation (i.e. HTML5, CSS), etc. (A rough sketch of what I mean by measuring this appears after the quoted passage below.)
  • Testing with pre-caching is reasonable, but it should not be the only holy grail. If you’re doing lab tests, you could easily arrange not to cache content. Without caching, browser performance changes for the worse, and the test would then show how good the browser is at exploiting parallelism (the context here involves things like parallel Javascript downloads, JS-to-CSS download blocking, etc.). Lack of parallelism has been causing huge slow-downs in less modern browsers, though Chrome 1/2, FF3.1, Safari 4, and IE8 are all trying to fix this (see: http://stevesouders.com/ua/).
  • W.r.t. measurement overhead, there is actually one approach that avoids it completely: measurement using video recording. We can record the visual cues that the browsers produce and perform a manual/human comparison. This is very easy to do for web browsers, since the results of the computation are displayed directly on the screen. However, manual comparison may introduce inaccuracy, especially since the paper seems intent on measuring down to tens of milliseconds. Hence, automating this and improving the accuracy of the timing would be awesome.
  • While we are talking about milliseconds, I disagree with this fondness for measuring browser load time to tens of milliseconds (2 decimal places for timings in seconds). Users are not gonna care if the page loaded 50ms faster; just measure to the nearest hundred milliseconds (1 decimal place for timings in seconds).
  • The treatment of extensibility completely contradicts the issue highlighted in the first section: that a benchmark should test what users will actually experience. Right now, more and more users (especially Firefox users) rely on add-ons to improve their browsing experience. I guess extensibility should generally be addressed separately; however, it should not be discounted so completely. I know that Microsoft is trying to sell this thing (IE8), so I guess it’s acceptable.
  • Inconsistent definitions: I really like this part; it is exactly as I imagine it should be. (The video-recording idea I suggested above also stems from my dislike of using the browsers’ own onload mechanism to determine that a page has fully loaded.)

[On inconsistent definitions:] Another factor that impacts benchmarking is having a consistent baseline of what it means to complete a task. In car and horse racing, the end is clear—there is a finish line that all contestants must cross to finish the race. Browser benchmarking needs a similar way to define when a webpage is done loading. The problem is that there is no standard which dictates when a browser can or should report that it is “done” loading a page—each browser operates in a different way, so relying on this marker could produce highly variable results.

Quoted from: Measuring Browser Performance (Microsoft)
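Coming back to my earlier point about in-page performance, this is the kind of rough measurement I have in mind (my own sketch, not something from the paper; the element id and the 1,000-item update are made up). The setTimeout(…, 0) gives the browser a chance to process the queued reflow before the clock stops, though it does not capture paint time precisely.

    // Rough sketch: time a DOM-heavy in-page operation instead of page load.
    function timeDomUpdate(updateFn, report) {
      var start = new Date().getTime();
      updateFn();
      // Let the browser handle the resulting reflow before stopping the clock.
      setTimeout(function () {
        report(new Date().getTime() - start);
      }, 0);
    }

    // Hypothetical usage: append 1,000 items to a list with id="results".
    timeDomUpdate(function () {
      var list = document.getElementById('results');
      for (var i = 0; i < 1000; i++) {
        var item = document.createElement('li');
        item.appendChild(document.createTextNode('item ' + i));
        list.appendChild(item);
      }
    }, function (ms) {
      alert('DOM update took roughly ' + ms + ' ms');
    });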

Honestly, right now, I don’t really care which browser is the fastest. As long as they are in the same ballpark (not orders of magnitude slower), I care more about customization features. If you took a look at my two instances of Firefox (one FF3.1b3, the other FF3.0.8), you’d see that both are heavily customized, with heavy theming (yeah, one of them looks like Chrome) and tonnes of add-ons.

Oh, and being a Mac user, whatever IE does doesn’t directly affect me that much. ;)

Reduce # of HTTP requests

Saturday, November 1st, 2008

I think I’ve become more absent-minded recently. Several days ago I received an Amazon package at home. I left it lying around, unopened, until this morning, when I remembered that I had ordered a book I’ve coveted for several months and that the book was in that package. Lol.

So I spent this morning reading Rule 1 from High Performance Web Sites (that’s the book I received in the package). It’s interesting, really: the design of this website already tries to follow that rule pretty closely. That is, it tries to reduce the number of HTTP requests sent to the server. HTTP requests add a burden to the server and to your web browser; everything gets slower, much slower. Firefox users will be very familiar with tabs auto-restoring after crashes/restarts. At the point of auto-restoring, it is not uncommon for me to load 50 tabs at the same time. If each site makes 20 HTTP requests on average (Blimey! That many? You’d be surprised: some websites make over a hundred requests; heck, even my faculty website makes over 40), the auto-restore can easily amass 1,000 requests. Firefox, by default, limits concurrent HTTP requests to about 30 (check out about:config and search for network.http.max-connections). Older versions of Firefox defaulted to 2 concurrent connections per server. Ouch!

I’m hoping that this blog will not become a beast in terms of HTTP requests. The initial design (the default WordPress 2.6.3 theme) makes over 10 requests. I’ve redesigned the page substantially and reduced the number of requests by quite a bit. Even after adding line numbering (a vertical image strip) and Google Analytics (a whopping 2 requests: one for the Javascript file, another for the actual tracking, which is a heavily parameterized GET request for a 1×1 gif image), this site makes a manageable 6 requests. I plan to add a Javascript library for some of the additional things I want to do, but I’ll try to do that while reducing other requests. My first project is to replace the line-number image with Javascript, so that the Javascript library takes the place of the image download.
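The replacement will look roughly like this (a sketch only; I haven’t settled on the final markup, so the plain <pre> selection and the line-no class are hypothetical):

    // Sketch: number the lines of each <pre> code block with Javascript
    // instead of downloading a vertical image strip, saving one HTTP request.
    function addLineNumbers() {
      var blocks = document.getElementsByTagName('pre');
      for (var i = 0; i < blocks.length; i++) {
        var lines = blocks[i].innerHTML.split('\n');
        var numbered = [];
        for (var j = 0; j < lines.length; j++) {
          // "line-no" is a hypothetical class styled to mimic the old strip.
          numbered.push('<span class="line-no">' + (j + 1) + '</span>' + lines[j]);
        }
        blocks[i].innerHTML = numbered.join('\n');
      }
    }
    window.onload = addLineNumbers;   // or chain onto an existing onload handler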

My next game will involve playing around with the cache settings, probably even installing the notoriously problematic plugin, WP Super Cache.

Readers who are planning to create a website of their own may also want to consider sticking with the HTML standard instead of XHTML. That will save you tonnes of closing tags; the w3schools HTML tag reference pages indicate when a closing tag is optional in HTML. I have no intention of switching to HTML for this blog, though. As long as I don’t get dugg or slashdotted, I don’t expect an amazingly high amount of traffic… heck, even 100 readers a day would be pretty amazing! Saving bandwidth is not my concern right now.

Anyway, for more interesting stuff and analysis on how to make your frontend better, be it a website or a webapp, try reading Steve Souders’ website.