Archive for the ‘Javascript’ Category

Backward-compatibility vs. forward-fitting

Monday, April 12th, 2010

I have a serious problem with Adobe. I just watched a presentation video from Adobe on CS 5 here (YouTube video). My problem is with the last feature they showed: exporting Flash animations to HTML5 canvas.

This seriously encourages people to stick with outdated technology (debatable, but that is my view) instead of moving on to programming directly in the future technology (HTML5). Which, obviously, is what Adobe wants. However, Flash has never been a good platform to develop on. I’ve heard experienced Flash developers (and mind you, these are the good ones, not your run-of-the-mill Flash developers) complaining about the shortcomings of the Flash technology. Why should we let programmers get lazy and stick with a lousy platform when there is momentum to push them toward newer, cleaner technology? (Yes, part of the problem is the developers themselves; there are lots of lazy ones out there.)

Also, knowing that Adobe ships a very buggy Flash Player, I doubt that their Flash-to-Javascript/HTML5 compiler will be any good. Extrapolate a little and we might see wholesale exporting of Flash games to HTML5 in the future. Oh gosh, I don’t want to imagine that world. This is what I see: Javascript code exported from Flash with lousy performance, probably polluting the global namespace, and likely bloated as well. This will keep the web crawling as slowly as ever when developers could have moved to faster, cleaner technology. (On another note, Microsoft recently demoed IE9 with hardware-accelerated rendering of HTML5 canvas and animation. Uber cool. Other browsers will definitely go there as well.)

At the end of the day, what we are missing today is backward compatibility of HTML5 canvas with browsers that have no canvas implementation. What we are not missing: forward-fitting old, outdated technology onto newer, cleaner HTML5. So if anything, I would want to focus on writing a backward-compatible Javascript abstraction of canvas, one that enables substituting HTML5 with older technology (yes, I’m talking about Flash). We’ll see whether I have the time to take that up.
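To sketch what the detection half of such an abstraction might look like (createRenderer and the 'flash' fallback are hypothetical names; the document object is passed in as a parameter only to keep the sketch self-contained):

```javascript
// Choose a drawing backend: native canvas when the browser
// supports it, otherwise hand off to a Flash-based substitute.
function createRenderer(doc) {
  var el = doc.createElement('canvas');
  if (el.getContext && el.getContext('2d')) {
    return { backend: 'canvas', context: el.getContext('2d') };
  }
  return { backend: 'flash' };  // the Flash shim would plug in here
}
```

In a real page this would be called as createRenderer(document), and the rest of the abstraction would expose a single drawing API on top of whichever backend came back.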

Apple might not do everything right, but I fully support their attempts to stamp Adobe out of the iPhone/iPad, including the infamous change to Section 3.3.1 on top of not implementing Flash at all on the iPhone and iPad.

Steve Souders’ tech talk on writing fast code

Saturday, April 4th, 2009

This guy is my hero of web performance techniques, so I was really happy when this video was published recently on the googletechtalks YouTube channel (check out the channel; it has many, many other interesting videos).

In this tech talk, he builds upon his previous tech talk (which I happened to watch live, but could not find online) and talks about how you can take advantage of parallel Javascript downloads while maintaining any coupling between your external scripts and your inline scripts. One of the techniques involves a modification of John Resig’s degrading script tags. He also mentions how iframes prevent the onload event from firing earlier. Finally, he makes clear how you can flush your data early to ensure that you utilize chunked transfer. The last bit was particularly interesting; it has been on my mind recently, and I have experimented quite a bit with PHP to perform this. Soon I’m going to figure out whether a Python (non-Apache) web server can perform similarly (I believe so). I already wrote a testing web server (based on the HttpServer class).

In the PHP/Apache case, there were quite a few difficulties in making sure that chunked transfer actually happens. One is making sure that you have enough data (and explicitly flushing PHP’s output buffer otherwise). Another involves figuring out Apache’s limitations on sending a single chunk (you have to have enough data before Apache agrees to push it; worse still, if you activate gzipping, the data must exceed the minimum gzip window). Of course, the most interesting problem is figuring out how to structure the page to get the most out of chunked transfer. In other words, you want to send as much dumb data (data that can be generated really fast) as possible up front, to use the time the server takes to generate the difficult content as efficiently as possible (meanwhile, the browser can start downloading CSS, some images, and maybe some Javascript). Also note that the browser will start partial rendering of the page; however, the rest of the page will only be rendered when the rest of the data arrives (which can take very long when you’re unlucky). So you need to make sure that the page renders fine with partial data.

Souders’ talk revealed several more considerations that escaped my own experiments. One is that some proxy servers buffer chunked transfers; the effect is that the user gets the entire page in a single monolithic transfer. Also, some browsers (Chrome and Safari) will not start partial rendering until there’s enough data (2KB for Chrome, 1KB for Safari).

Microsoft takes on browser benchmark

Friday, March 27th, 2009

Recently, a friend sent me a link to the video and methodology of Microsoft’s browser benchmark. I was a little bored, so I finally took the time to read the long PDF on the methodology. Here is a summary of my thoughts:

  • The issue highlighted is very valid: micro-benchmarks do not test end-user scenarios well, and a more macro view is needed. Completely discounting micro-benchmarks is not exactly right, though. Micro-benchmarks provide a good, practically unbiased (due to the lack of dependency on the network stack, web servers, load balancing, etc.) measure of a browser’s parts. A combination of micro- and macro-benchmarks would actually be best.
  • Moving on, the actual tests Microsoft ran were very much geared towards measuring load time. Honestly, though, load time is getting less and less important today. We’re moving more and more towards heavy AJAX pages, where performance while the user is navigating within the page matters more and more. This includes (but is not restricted to) re-rendering speed as the user scrolls horizontally and vertically (or diagonally, for OS X users), Javascript processing followed by or combined with DOM manipulation along with the actual re-rendering of the changed DOM, canvas and animation (i.e. HTML5, CSS), etc.
  • Testing with pre-caching is reasonable, but it should not be the only holy grail. In a lab test, you could easily arrange not to cache content. Without caching, browser performance changes for the worse, and the test would also measure how good the browser is at utilizing parallelism (the context here involves things like parallel Javascript downloads, Javascript blocking CSS downloads, etc.). Lack of parallelism has been causing a huge slow-down in less modern browsers, though Chrome 1/2, FF3.1, Safari 4, and IE8 are all trying to fix this.
  • W.r.t. measurement overhead: there is actually one approach that completely avoids it, namely measurement using video recording. We can record the visual cues the browsers produce and perform manual/human comparison. This is very easy to do for web browsers, since the results of the computation are directly displayed on the screen. However, manual comparison may introduce inaccuracy, especially since the paper seems intent on measuring to tens of milliseconds. Hence, automating this and improving the accuracy of the timing would be awesome.
  • While we are talking about milliseconds: I disagree with this insistence on measuring browser load time to tens of milliseconds (2 decimal places for timings in seconds). Users are not gonna care if the page loads 50ms faster; just measure to a hundred milliseconds (1 decimal place for timings in seconds).
  • The issue of extensibility completely contradicts the point highlighted in the first section: that a benchmark should test what users will experience. Right now, more and more users (especially Firefox users) rely on add-ons to improve their browsing experience. I guess extensibility should generally be addressed separately; however, it should not be discounted so completely. I know Microsoft is trying to sell this thing (IE8), all right, so I guess it’s acceptable.
  • Inconsistent definitions: I really like this part; it is exactly as I imagine it should be. (The video-recording idea I suggested above is also based on my dislike of using browsers’ own onload mechanism to determine that the page has fully loaded.)

[On inconsistent definitions:] Another factor that impacts benchmarking is having a consistent baseline of what it means to complete a task. In car and horse racing, the end is clear—there is a finish line that all contestants must cross to finish the race. Browser benchmarking needs a similar way to define when a webpage is done loading. The problem is that there is no standard which dictates when a browser can or should report that it is “done” loading a page—each browser operates in a different way, so relying on this marker could produce highly variable results.

Quoted from: Measuring Browser Performance (Microsoft)

Honestly, right now, I don’t really care which browser is the fastest. As long as they are in the same ballpark (not orders of magnitude slower), I care more about customization features. If you took a look at my two instances of Firefox (one FF3.1b3, the other FF3.0.8), you’d see they are both heavily customized with heavy theme-ing (yeah, one of them looks like Chrome) and tonnes of addons.

Oh, and being a Mac user, whatever IE does doesn’t directly affect me that much. ;)

John Resig’s degrading script tags

Thursday, November 13th, 2008

I just re-discovered this interesting Javascript trick on John Resig’s blog (link here). Under usual circumstances, I frown when I see eval in Javascript code; it’s the one construct I try to avoid like the plague. But I can’t resist an interesting usage of eval, and this is one of them. Basically, the trick allows you to have a script tag with embedded Javascript, like this:

<script src="some_js_file.js">
  // Do something with the loaded file.
  callSomething(); ...
</script>

Interesting, huh? To make this work, the loaded script file itself ends with a two-liner that finds the script element and evaluates the innerHTML of that element:

var scripts = document.getElementsByTagName('script');
eval(scripts[scripts.length - 1].innerHTML);

Ingenious. As an added bonus, the content of the script element will not be executed if the Javascript fails to download (since, obviously, the two lines we just added won’t run either).

Also interesting is how this hack utilizes the synchronous nature of Javascript loading. In most browsers except the most recent ones (most of them still in beta/nightly), whenever a script tag is encountered, rendering stops until the script is downloaded and executed. That means that by the time the script is loaded, it knows for sure that the script tag it is in is the last one in the DOM, since nothing else has been rendered yet. Thus, you can access the element as above (see the scripts[scripts.length - 1] part of the code).

I’m a little behind on how newer browsers download their Javascript, and I suspect the method above may not work in them. I guess it’ll depend on the heuristics the browsers use to make Javascript downloads non-blocking. I heard that Firefox will actually assume that the script does not do anything to the DOM and continue rendering, in which case the above technique may not work. (I’m sure I’m missing something; probably there are heuristics FF uses that I’m not aware of.)

Well, still, it is an ingenious use of eval.

Javascript: several paradigms in one

Saturday, November 8th, 2008

Now, coders familiar with Bjarne Stroustrup’s infamous text The C++ Programming Language1 would be familiar with his adage that C++ is a language made up of 4 different paradigms: C-style imperative programming, modular data abstraction, object-oriented C++, and template metaprogramming/generic programming. While some would say that you should stick with one of these paradigms when programming in C++, the truth is more of a mix. I focus on object-oriented C++ with sprinkles of templates (I stay away from advanced, Boost-y templates; they are harder to get right and harder to maintain, and time is more important to me than reducing code duplication to 0.00%).

Javascript. Well, the early Javascript programs I saw years back were purely procedural with sprinkles of functional paradigms. And they looked ugly. For a while, I decided to stay away from Javascript, only picking up this really cool language slightly over 2 years back. Man, this language is one heck of a language. Like C++, it’s a mixed bag of paradigms, some of which are:

  1. Procedural programming
  2. Functional programming
  3. Object-oriented programming
  4. Semi-generic programming
  5. “Hacks”-oriented programming

Procedural programming

This is probably the very basics of Javascript that most programmers used in the early days: simple procedure-based programming techniques. Programmers wrote libraries of functions and procedures that do certain manipulations and used them without much data abstraction. It is common to see code like this:

function highlightElement(element) {
  // ...
}

// ... and later ...
function onLoad() {
  // ...
}

Functional paradigm

Most veteran Javascript programmers are very familiar with the concept of a closure in Javascript, and many programmers use closures without realizing it. Closures are very useful; they are one of the best things Javascript offers. The essence of a closure is that you can create a function/procedure anonymously in a given scope; this anonymous function (while function and procedure differ slightly, let’s just use the term function here) keeps a closure, that is, the environment where it was declared. Therefore, the function can access all variables accessible in the scope where it was declared. Well, an example speaks better than my rambling:

var x = 1;
function createHandler(index) {
  var element = getTableRow(index);
  return function() {
    // This function can access
    // 'element', 'index', and 'x'.
    return x;
  };
}

var handler = createHandler(5);

// This function call can still access 'element'
// and 'index' although we can't directly
// access them from this scope.
handler(); // This returns 1

// This is the fun part...
x = 2;
handler(); // This returns 2!!

Object-oriented Javascript

This is my favorite by far. Javascript offers a really cool “primitive” kind of object-oriented programming. By “primitive” I mean not having the visibility modifiers that mark most other OOP languages. However, it’s still cool. To create a class, we just write the class prototype.

var Highlighter = function(highlightColor) {
  this.highlightColor_ = highlightColor;
};

Highlighter.prototype.highlightElement = function(el) { = this.highlightColor_;
};

Highlighter.prototype.setColor = function(color) {
  this.highlightColor_ = color;
};

// To create an object:
var highlighter = new Highlighter('#dedede');

You can perform inheritance and mixins by manipulating the prototype attribute of a class. Awesomeness. Furthermore, you can inject a method into a single object without putting the method on any prototype.

// Let's add method to highlighter object.
highlighter.getColor = function() {
  return this.highlightColor_;

// Now you can use getColor on highlighter object
// but not other Highlighter object.
highlighter.getColor(); // This works.
new Highlighter().getColor(); // Failed!

Semi-generic programming

Well, Javascript is dynamically typed, so it is already very generic. What I want to highlight here is that you can write code on the fly and execute it right away. You can, for example, easily ask your user to type some Javascript into a textarea, grab it as a string, and run it.

Is this a good thing? Usually not! Stay away from this as much as you can; stay away from any form of eval! (I guess an exception would be unsafely parsing JSON, though most modern browsers now have a good native JSON parser that is probably faster than using eval directly.)
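As an aside on that JSON exception, a common pattern of the era (sketched here; parseJson is a made-up helper name) prefers the native parser and only falls back to eval, which remains unsafe for untrusted input:

```javascript
// Parse a JSON string, preferring the browser's native parser.
// The eval fallback is the old unsafe route; the parentheses
// force the string to be read as an expression, not a block.
function parseJson(text) {
  if (typeof JSON !== 'undefined' && JSON.parse) {
    return JSON.parse(text);
  }
  return eval('(' + text + ')');
}
```

With this, parseJson('{"count": 3}').count gives you 3 on both new and old browsers, but only the native branch is safe against malicious strings.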

“Hacks”-oriented programming

Ok, this is pushing it a little. But hey! You will almost never be able to escape this when you’re doing the more advanced DOM manipulation. I’m almost tempted to say that you’ve never experienced the real Javascript if you haven’t needed to perform weird and crazy hacks in it (well, unless you’re working on server-side Rhino).

Well… when it’s about programming for several web browsers, some of which are as old as 6 years and quirky beyond belief2, you have to perform some hacks to make sure things work correctly across all of them. Recently, I illustrated this by writing about min-/max-width hacks for IE.

This is one fun part of the language. While it’s frustrating sometimes, you have to admit it’s fun.

So, this is Javascript. Some consider DOM manipulation another aspect of Javascript programming. Yes, it is; but DOM manipulation generally falls on the object-oriented side of Javascript (oh, and the “hacks”-oriented side). Isn’t Javascript cool? I’ve used it frequently over the past 2 years and I must say it’s one heck of a language.

And Javascript 1.7 is coming with many interesting new features.

  1. While I wouldn’t suggest this book for a first-time C++ programmer with little background in OOP or C, it is of tremendous use if you plan to master the C++ standard, short of reading the standard itself (which is actually pretty interesting reading, though I’ve never managed more than 5-6 pages without falling asleep) []
  2. []

min-width and max-width with Javascript (IE Hack)

Wednesday, November 5th, 2008

So, recently some of the pages I wrote required me to use certain CSS features not supported by IE6: max-width and min-width. They are what they say they are: a minimum and a maximum width constraint for a box element. They work like a charm in Firefox, Webkit, and Chrome; just not in IE.

Well, readers familiar with cross-browser (in)compatibility problems have probably encountered similar situations, where features work differently in different browsers. A short hacking session and a Google search usually help a lot. Not in this case. Most pages (well, I only went through the top 20-40 pages of Google results, with several combinations of keywords) mention the pretty standard CSS expression hack, like this one. That’s not good enough: CSS expressions are slow. Intuitively, an expression should re-run on every resize. But that’s not the case. Steve Souders’ research into CSS expressions is still fresh in my mind: expressions execute on every page resize and mouse move event1! Not good!! Definitely not good!

The next obvious move is to use Javascript and register your own handler. Unfortunately, whether because the CSS expression hack is too popular or because nobody publishes Javascript code that does this, I could find no results on Google. So I didn’t waste any more time and rolled my own hack. The hack is darn simple (okay, not darn simple, but not too difficult either). The idea is to enclose the div where you want max-/min-width in a wrapper div with a width of 100%. This wrapper div takes the width the enclosed div would have if you did not set max-/min-width. Then, register a listener on the window’s resize event that checks whether the wrapper div’s width exceeds the max-width or falls below the min-width, and sets the width of the enclosed div appropriately. The crucial part of the code is the following (I’ve simplified it to register directly on window.onresize, which is obviously bad practice, but it illustrates the concept well).

// This one for max-width
window.onresize = function() { =
      wrapperDiv.clientWidth > maxWidth + 1 ?
          maxWidth + 'px' : 'auto';
};

// And this one for min-width
window.onresize = function() { =
      wrapperDiv.clientWidth < minWidth + 1 ?
          minWidth + 'px' : 'auto';
};

Replace enclosedDiv and wrapperDiv with the appropriate elements as described in the earlier paragraph. By the way, you only need to do this in IE; in other browsers, just set the CSS properties directly.

Notice the maxWidth + 1 and minWidth + 1? Without the + 1, IE will lock up because of a race condition. (This page also mentions the problem, so it seems to be a common one, and not due to my incompetence.)

Right now I’m writing a library specialized for this problem. Look out for it soon. The library will handle multiple resize handlers gracefully (well, technically, you only need one resize handler that resizes all the divs on the page in the correct order; and you define the order). It should also handle the creation of the wrapper divs for IE automatically. In fact, 90% of the library is done, but I’m not confident about releasing it because I haven’t actually tested it at all! I’m going to replace my hacks with the library; if it works fine after testing, it’ll show up on this blog.

Javascript is fun (yes, cross-browser compatibility is painful, but isn’t that the thing that makes Javascript colorful?)!

  1. []