The teaching of computer science

April 7th, 2009
Kids & Computers

Today’s ACM TechNews includes an article on CMU trying to buck the trend of having few women in Computer Science. I wholeheartedly agree with the effort.

However, before we even get there, computer science is seriously in need of polished publicity. I have talked with several undergraduates and high school students recently and encouraged them to take up computer science. In general, most of them were impressed when they heard that I majored in Computer Science. They thought it was a pretty cool major. However, when I asked whether they were considering majoring in computer science (especially the high school students), the response I got was roughly this: no. “Why not?” They replied that computer science involves lots of programming, and they think that programming is hard, really hard. They also strongly equate computer science with programming and only programming. I do believe that programming is not easy, especially at the beginning, but it is not harder than, say, solving difficult physics questions, or designing a chemistry experiment to confirm a hypothesis. (I was lucky to be introduced to programming, with LOGO and Basic—yes, Basic, unfortunately—in late elementary school; but most of these people have never touched any programming language before.) A few of them also think that programming is geeky, something I would never refute; I think being geeky is cool.

Furthermore, even if programming were taught in schools before university, I have a strong feeling that it is being taught wrongly. Instead of teaching students to appreciate the beauty of programming (recursion, abstraction, closures, memoization, etc.), teachers are more focused on getting the syntax right. That, in itself, is not too bad. However, the choice of language really does reveal the weakness of the method. Teaching C++ or Java as a first programming language is definitely not a good idea: these languages are very thick with syntax. Case in point, I have seen loops being taught as syntax; as for the concept, teachers keep reiterating that a loop is used to make a particular piece of code run over and over until the condition becomes false. The explanation is correct. On the other hand, such an explanation rarely draws out the awe at, and understanding of, the power of the loop construct.

(Granted, at the high school level I am only familiar with the Cambridge ‘A’ Level syllabus for Computing, which uses C/C++, so my viewpoint might be too narrow, or even outdated.)

So the first thing we should straighten out is the way programming is taught to students. I would start with Javascript (hey, something that can be applied right away on their own websites is cool and fun!), Python, or Scheme. (Yes, you’re right: I think type systems only make teaching programming more convoluted without offering much direct benefit.) Picking good teachers is no longer optional. The aim should be to make students appreciate computer science as a diverse, exciting, and fun field. It should definitely not be to make students proficient in one (or, worse, two) programming languages. Introducing it at a younger age would probably make it even more effective, as students get more exposure to it, and the curriculum gets more refined since younger students are harder to teach. (In fact, it is probably more rational to teach visual programming techniques. About half a year ago, I watched a Google Tech Talk on tangible functional programming, where the concepts of values, pairs, and functions/lambdas are translated into an intuitive visual system that allows an even stronger form of composability.)

Certainly, putting some effort into good publicity will not hurt. That said, I do think that we should put more effort into integrating computer science into early education (just like the other sciences) to let the younger generation figure out for themselves how suitable the major is, before the stigma attached to CS becomes too deeply rooted in them.

Acknowledgment: Photo was published by shapeshift under Creative Commons (Attribution & Share-Alike); link to flickr page.

Steve Souders’ tech talk on writing fast code

April 4th, 2009

This guy is my hero of web performance techniques, so I was really happy when this video was recently published on the googletechtalks YouTube channel (check out the channel; it has many, many other interesting videos).

In this tech talk, he built upon his previous tech talk (which I happened to watch live, but could not find online) and talked about how you can take advantage of parallel Javascript downloads while maintaining any coupling you have between external scripts and inline scripts. One of the techniques involves a modification of John Resig’s degrading script tags. He also mentioned problems with iframes preventing the onload event from firing earlier. Finally, he made clear how you can flush your data early to ensure that you utilize chunked transfer. The last bit was particularly interesting. It has been on my mind recently, and I have experimented quite a bit with PHP to perform this. Soon I am going to try to figure out whether a Python (non-Apache) web server can perform similarly (I believe so); I have already written a testing web server (based on the HttpServer class).

In the PHP/Apache case, there were quite a few difficulties in making sure that chunked transfer is performed. One is making sure that you have enough data (and explicitly flushing PHP’s output buffering otherwise). Another involves figuring out Apache’s limitations on sending a single chunk (you have to have enough data before Apache agrees to push it; worse still, if you activate gzipping, the data must exceed the minimum gzip window). Of course, the most interesting problem is figuring out how to structure the page to get the most out of chunked transfer. In other words, you want to send as much dumb data (data that can be generated really fast) as possible up front, so that the time the server spends generating the difficult content is used as efficiently as possible (during that time, the browser can start downloading CSS, some images, and maybe some Javascript). Also note that the browser will start partially rendering the page; however, the rest of the page will only be rendered when the rest of the data arrives (which can take very long if you’re unlucky). So you need to make sure that the page renders fine with partial data.

Souders’ talk revealed several more considerations that escaped my own experiments. One is that some proxy servers will buffer chunked transfers; the effect is that the user gets the entire page in a single monolithic transfer anyway. Also, some browsers (Chrome and Safari) will not start partial rendering unless there is enough data (2 KB for Chrome, and 1 KB for Safari).

Deriving natural laws from raw data

April 4th, 2009

So I read this article today. Cornell researchers have successfully implemented a parallel computer system that is able to, basically, derive physics laws from observation data. This is extremely interesting. Before deciding to take computer science, I was split between majoring in physics or computer science (and, to a much lesser extent, mathematics). Seeing inter-disciplinary research like this one made me very happy and impressed. In fact, I believe that much of the more interesting research (current and future) will involve multi-disciplinary twists. Just recently I read a paper describing the translation of polynomial (or restricted-polynomial), completely-partitionable, first-order differential equations into distributed protocols (that satisfy the same mathematical properties as the derivatives)1.

So, this deduction system basically figures out derivatives (changes in values w.r.t. some other changes in the system, e.g. time, position, etc.) that describe the observables, and uses these derivatives to derive system invariants. Technically, the invariants are not the actual physical equations. However, they encode all the information needed to derive the physical equations for the system. With this technique, the Cornell researchers were able to derive conservation of energy and momentum, as well as Newton’s second law, from observations taken from simple experiments (including spring-loaded acceleration, and simple and double pendulums).
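
As a toy illustration of the first half of that pipeline (this is not the Cornell system, just a numeric sketch under made-up data): estimate a derivative from raw samples with finite differences, then check whether a candidate expression behaves like an invariant, using fake observations of an ideal spring-mass system with m = k = 1, for which E = ½v² + ½x² should stay constant.

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
  const double dt = 0.001;
  std::vector<double> x, v;
  // Fake "observations" of a unit spring-mass system: x(t) = cos(t), v(t) = -sin(t).
  for (int i = 0; i < 1000; ++i) {
    const double t = i * dt;
    x.push_back(std::cos(t));
    v.push_back(-std::sin(t));
  }
  for (std::size_t i = 1; i + 1 < x.size(); i += 200) {
    // Central finite difference: estimate dx/dt from the raw samples and
    // compare it against the observed velocity.
    const double dxdt = (x[i + 1] - x[i - 1]) / (2 * dt);
    // A candidate invariant: if it truly is one, it stays (nearly) constant.
    const double energy = 0.5 * v[i] * v[i] + 0.5 * x[i] * x[i];
    std::printf("dx/dt = %.4f  v = %.4f  E = %.4f\n", dxdt, v[i], energy);
  }
  return 0;
}

The real system, of course, searches over candidate expressions rather than checking a known one; the sketch only shows how derivatives and invariants relate to the raw data.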

Read the article for more details. Also, I’m definitely waiting to read the Science article (Vol. 323, No. 5924) that was supposed to come out on April 3 (the digital version does not seem to have come out yet, though).

  1. Gupta, I. On the Design of Distributed Protocols from Differential Equations. PODC ’04.

Copy and move semantics

March 29th, 2009

Recently, I was asked some questions about smart pointers by an academic junior (by a few years). In the discussion, we came across terms like copy-constructible and move semantics. While these concepts may be straightforward for more experienced C++ developers, they can seem rather absurd to programmers brought up in the era of garbage-collected languages.

Let’s go through the easy part first: copy semantics. When we talk about copy semantics, we mean that calls like these:

vector<int> list;
<..>
vector<int> list2(list);  // Copy constructor

vector<int> list;

/* Thanks to Richie for providing a
   correct version of assignment. */
vector<int> list2;
list2 = list;  // Copy-on-assign

will result in list2 copying the contents of list. The first example shows a copy constructor: a constructor that takes as a parameter an object of the same type as the constructor’s class (roughly speaking) and copies the contents of the passed object. It generally means that changing the contents of list will not change the contents of list2, and vice versa. A class with a copy constructor that fulfills the copy semantics is sometimes called copy-constructible.

The second example also performs a copy operation. The operation is, however, implemented as an assignment operator. This is sometimes called copy-on-assign.

We should perhaps note that the default copy-constructor and assignment operator generally do not fulfill copy semantics. In fact, they implement neither copy nor move semantics; neither here nor there. The default copy-constructor and assignment perform (more or less) a shallow copy; a mixed semantics, if you like. They directly copy the fields’ contents. If a field is of primitive type, this correctly performs the copy operation. If a field is of a type that correctly implements copy semantics, the default also correctly performs the copy; otherwise it will not adhere to proper copy semantics. Furthermore, if a field is of pointer type, it will perform a copy: a copy of the address in the pointer. Hence, both the new object and the old one will contain the same address in that pointer field. Most of the time, this is not what you want.
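
To make the pitfall concrete, here is a minimal sketch (Buffer is a made-up class for illustration, not from any library): with the compiler-generated defaults, copying a Buffer would only copy the data_ pointer, so two objects would share, and eventually double-delete, the same array. The hand-written members below implement proper copy semantics instead.

#include <cstddef>
#include <cstring>

class Buffer {
 public:
  explicit Buffer(std::size_t size) : size_(size), data_(new char[size]) {}
  ~Buffer() { delete[] data_; }

  // Copy-constructor: allocate separate storage and copy the contents,
  // instead of the default member-wise copy of the pointer.
  Buffer(const Buffer& other)
      : size_(other.size_), data_(new char[other.size_]) {
    std::memcpy(data_, other.data_, size_);
  }

  // Copy-assignment: same idea, guarding against self-assignment and
  // releasing the old storage only after the new copy succeeds.
  Buffer& operator=(const Buffer& other) {
    if (this != &other) {
      char* copy = new char[other.size_];
      std::memcpy(copy, other.data_, other.size_);
      delete[] data_;
      data_ = copy;
      size_ = other.size_;
    }
    return *this;
  }

 private:
  std::size_t size_;
  char* data_;
};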

Remember not to rely on the defaults: either implement your own copy-constructor and copy assignment, or make them private so that they will not be automatically generated. Boost has a base class called noncopyable that you may want to use. I prefer using a macro (the macro only declares the members; leaving them undefined is enough to disable the defaults):

// Declare (but never define) the copy-constructor and copy assignment,
// so that any attempted copy fails to compile (or, inside the class, to link).
#define DISABLE_COPY_AND_ASSIGNMENT(classname) \
  classname(const classname&); \
  classname& operator=(const classname&)

// In some_class.h:
class SomeClass {
 public:
  <..>
 private:
  <..>
  DISABLE_COPY_AND_ASSIGNMENT(SomeClass);
};

Moving on: move semantics arise (more or less) because copy semantics impose a performance overhead due to memory allocation and initialization. In many cases you may simply want to hand the object over to another piece of code without performing an expensive copy operation. In the olden days, you would simply allocate the object you wish to pass on behind a pointer and pass that pointer around. However, this risks two major issues: (1) as long as you hold a reference to the pointer, you may still be able to modify the object (a big issue when multi-threading is involved); (2) you have to perform manual memory management. Hence, we arrive at modern move semantics. :)
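
A quick sketch of that older style shows both problems (this reuses the SomeClass placeholder from above; createSomeClassTheOldWay and useIt are just illustrative names):

// The old raw-pointer hand-off: nothing enforces ownership.
SomeClass* createSomeClassTheOldWay() {
  SomeClass* obj = new SomeClass(<..>);
  <..>
  return obj;  // The caller is now responsible for deleting obj.
}

// Somewhere else:
void useIt() {
  SomeClass* obj = createSomeClassTheOldWay();
  // (1) Any code that stashed a copy of this pointer can still modify
  //     the object behind our back.
  // (2) We must remember to delete the object exactly once, on every
  //     path (including early returns and exceptions).
  delete obj;
}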

Today, most C++ developers are well acquainted with smart pointers (if you do not know what they are, please Google them; they are important). One of the basic smart pointers in C++0x is called unique_ptr. A unique_ptr maintains (as its name implies) a unique pointer: when one instance of unique_ptr is holding a pointer to an object, no other instance may hold a pointer to the same object. Furthermore, people have developed techniques to ensure that you never need to hold a raw pointer at all. All seems good. Well, not really. Smart pointers usually rely on stack allocation, which means that they die (along with the object they hold) when they go out of scope. Here is where move semantics become useful:

unique_ptr<SomeClass> createSomeClass() {
  unique_ptr<SomeClass> ptr(new SomeClass(<..>));
  <..>
  return ptr;
}

// Somewhere else:
unique_ptr<SomeClass> a_ptr = createSomeClass();

Note that, technically, a copy and an assignment occur here (however, many modern compilers may optimize the copy away). First, when you return ptr, a copy-constructor for unique_ptr is called with ptr as its argument. After that, this new object is assigned to a_ptr. Note that while I said copy-constructor, a more apt term would be move-constructor. Roughly, a move-constructor pilfers the pointer from the unique_ptr passed to it; the original unique_ptr no longer holds the pointer. Hence the term move semantics. Similarly, on assignment, the pointer is moved from one unique_ptr to another.
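
For intuition, here is a rough sketch of what such a move-constructor and move-assignment could look like for a simplified, unique_ptr-like class. MovablePtr is a made-up name and this is not the actual unique_ptr implementation; it is only meant to show the pilfering, assuming C++0x rvalue references:

template <typename T>
class MovablePtr {
 public:
  explicit MovablePtr(T* p = 0) : ptr_(p) {}
  ~MovablePtr() { delete ptr_; }

  // Move-constructor: take over the pointer and leave the source empty,
  // instead of copying the pointee.
  MovablePtr(MovablePtr&& other) : ptr_(other.ptr_) {
    other.ptr_ = 0;
  }

  // Move-assignment: release our own resource, then pilfer.
  MovablePtr& operator=(MovablePtr&& other) {
    if (this != &other) {
      delete ptr_;
      ptr_ = other.ptr_;
      other.ptr_ = 0;
    }
    return *this;
  }

  T* get() const { return ptr_; }

 private:
  // Copying is deliberately disabled: this class has no copy semantics.
  MovablePtr(const MovablePtr&);
  MovablePtr& operator=(const MovablePtr&);

  T* ptr_;
};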

Half the time, you will adhere closely to either copy or move semantics. However, sometimes you may want to consider partial copy semantics, e.g. a shallow copy. This is generally acceptable, since the cost of following full copy semantics may be prohibitive. However, we usually do not mix copy semantics and move semantics in one class: they generally do not play well together and will confuse other developers. (There is no such thing as mixed copy/move semantics.)

Microsoft takes on browser benchmark

March 27th, 2009

Recently, a friend sent me a link to the video and methodology of Microsoft’s browser benchmark. I was a little bored, so I finally took the time to read the long PDF on the methodology. Here is a summary of my thoughts:

  • The issue highlighted is very valid. Micro-benchmarks do not test end-user scenarios well; a more macro view is needed. Completely discounting micro-benchmarks is not exactly the right thing to do, though: micro-benchmarks provide a good, practically unbiased (due to the lack of dependency on the network stack, web servers, load-balancing, etc.) measure of a browser’s parts. A combination of micro- and macro-benchmarks would actually be best.
  • Moving on, the actual tests that Microsoft ran were very much geared towards measuring load time. Honestly, however, load time is getting less and less important today. We are moving more and more towards heavy AJAX pages, where performance while the user is navigating within the page matters more and more. This includes (but is not restricted to) re-rendering speed as the user scrolls horizontally and vertically (or diagonally, for OS X users), Javascript processing followed by or combined with DOM manipulation and the actual re-rendering of the changed DOM, canvas and animation (i.e. HTML5, CSS), etc.
  • Testing with pre-caching is reasonable, but it should not be the only holy grail. If you are doing lab tests, you can easily arrange not to cache content. Without caching, browser performance is affected for the worse, and such a test would measure how good the browser is at exploiting parallelism (the context here involves things like parallel Javascript downloads, js-to-css blocking downloads, etc.). Lack of parallelism has been causing a huge slow-down in less modern browsers, though Chrome 1/2, FF3.1, Safari 4, and IE8 are all trying to fix this (see: http://stevesouders.com/ua/).
  • W.r.t. measurement overhead, there is actually one kind of test that completely avoids it: measurement using video recording. We can record the visual cues that the browsers produce and perform a manual/human comparison. This is very easy to do for web browsers, since the result of the computation is displayed directly on the screen. However, manual comparison may introduce inaccuracy, especially since the paper seems very intent on measuring to tens of milliseconds. Hence, automating this and improving the accuracy of the timing would be awesome.
  • While we are talking about milliseconds, I disagree with this liking for measuring browser load time to tens of milliseconds (2 decimal places for timings in seconds). Users are not going to care if the page loads 50 ms faster; just measure to the nearest hundred milliseconds (1 decimal place for timings in seconds).
  • The issue of extensibility completely contradicts the point highlighted in the first section: that a benchmark should test what users will experience. Right now, more and more users (especially Firefox users) rely on add-ons to improve their browsing experience. I guess extensibility should generally be addressed separately; however, it should not be discounted so completely. I know that Microsoft is trying to sell this thing (IE8), so I guess it’s acceptable.
  • Inconsistent definitions: I really like this part; it is exactly as I imagine it should be. (The video-recording suggestion above is also based on my dislike of using the browsers’ own onload mechanism to determine that a page has fully loaded.)

[On inconsistent definitions:] Another factor that impacts benchmarking is having a consistent baseline of what it means to complete a task. In car and horse racing, the end is clear—there is a finish line that all contestants must cross to finish the race. Browser benchmarking needs a similar way to define when a webpage is done loading. The problem is that there is no standard which dictates when a browser can or should report that it is “done” loading a page—each browser operates in a different way, so relying on this marker could produce highly variable results.

Quoted from: Measuring Browser Performance (Microsoft)

Honestly, right now, I don’t really care which is the fastest browser around. As long as they are in the same ballpark (not orders of magnitude slower), I care more about customization features. If you took a look at my two instances of Firefox (one FF3.1b3, the other FF3.0.8), you would see that they are both heavily customized with heavy theme-ing (yeah, one of them looks like Chrome) and tonnes of addons.

Oh, and being a Mac user, whatever happens with IE does not directly affect me that much. ;)

P2P simulator

February 14th, 2009

I will be working on writing a simple cycle-driven peer-to-peer simulator really soon. My workload is getting really, really heavy, but it is really fun! I have been learning more and more. I hope to be able to write some stuff here soon. But don’t count on it.

Happy Valentine’s day btw (this is one of the reasons why I wrote this today).

P.S. Heck, I completely forgot to publish this one.