Archive for the ‘Software Engineering’ Category

Solving the root of the problem

Saturday, December 13th, 2008

I realized that there are two kinds of code maintainers in this world (to tell you upfront, I lied).

The first kind will work around the symptoms of a problem. The second will attempt to find the root cause of the problem and stem it at the source. They are very different kinds of maintainers. The first kind solves problems quickly and appears to have higher productivity. But only the second kind adds value to the entire enterprise of maintaining a piece of software. In fact, the first kind destroys value.

Recently, I fixed a major problem with one of my components by fixing its root cause. Then I went on to fix the other components that depend on it. Oh boy. You should have seen the spaghetti code people added to those other components (I wrote the original code of more than half of them, but I don’t recognize them anymore). People wanted to add new functionality, discovered there was a problem with the component I wrote, and instead of fixing it, they tried all sorts of workarounds. I returned to the component because there was an interesting feature I was asked to write, and I thought it was a cool one. I hit upon the same problems, and so on and so forth. Anyway…

Let’s see. Imagine you have UI building blocks that you use to, well, let’s say, show tabular data. I basically wrote a renderer that renders tabular data with the UI blocks from this mythical framework. I wrote the entire data model and the renderer quite some time ago. It exposes many APIs and event hooks for other programmers to write additional features into the table (say, making the table sortable by its headings). Others and I have added several features to the table since, and it has grown considerably. The problem is, the renderer assumed one row per logical datum, and fairly recently it started being used to render tables with several rows per logical datum (where “several” is variable, even within one table). The model and renderer still work properly. But the APIs that expose the rendered table (say, getRowUIElement or insertRowOnIndex) fail miserably.

Instead of fixing the root cause (the renderer could not provide a correct API for this new kind of table), many chose to re-code their features by accessing the UI blocks directly, breaking the abstraction provided by the renderer. That was an easy fix for most of them, though the results were very awkward (a pluggable feature required the user to pass in a function that converts a data-model index into an actual index in the UI block: if each logical datum renders into three rows, this function converts row 2 of the data model to index 6 in the actual UI block). Needless to say, the code became very messy.
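To make the awkwardness concrete, here is a minimal sketch of the kind of conversion function such a pluggable feature had to accept. The names (`model_to_ui_index`, `rows_per_item`) are mine for illustration, not from the actual codebase:

```python
def model_to_ui_index(model_row, rows_per_item):
    """Map a data-model row to the UI-block index of its first row.

    Every feature had to carry a function like this around, instead of
    the renderer exposing the mapping itself.
    """
    return model_row * rows_per_item

# With three UI rows per logical datum, data-model row 2 lands at index 6.
print(model_to_ui_index(2, 3))
```

And this only handles the uniform case; once the number of rows varies per datum, each feature ends up reimplementing the renderer's bookkeeping.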

When I encountered the same problem, I was in a better position to solve it since I knew the code well. So I practically rewrote the renderer, which turned out to be a straightforward job (really, the actual diff was less than 40 lines of code!). The nightmare began when I started making all the new features work without hacks (who loves ugly code? I don’t). It was insane! These people are no doubt smart if they could think of all these sorts of workarounds. Some are actually worth a second look, if only to appreciate the way the writer slips around the problem altogether. Yet, in almost all cases, it is infinitely easier to just change the renderer than to twist all these features into near-unreadability.
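For contrast, a rough sketch of what fixing the root cause looks like: the renderer itself tracks how many UI rows each logical datum occupies, so row-oriented APIs keep working even with a variable number of rows per datum. All names here are hypothetical, not the real API:

```python
class Renderer:
    """Toy renderer that knows each datum's UI footprint."""

    def __init__(self, rows_per_datum):
        # rows_per_datum[i] = number of UI rows datum i renders into
        # (variable, even within one table).
        self._offsets = []
        start = 0
        for n in rows_per_datum:
            self._offsets.append(start)
            start += n

    def ui_index_of(self, model_row):
        """UI index of the first row belonging to this logical datum."""
        return self._offsets[model_row]

r = Renderer([1, 3, 2])   # datum 0: 1 row, datum 1: 3 rows, datum 2: 2 rows
print(r.ui_index_of(2))   # datum 2 starts at UI index 1 + 3 = 4
```

Once the mapping lives in one place, every feature that consumed the old one-row-per-datum API can go back to asking the renderer instead of computing indices by hand.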

I’m short on sleep now, but I think it’s a job well done this time around (I don’t have much code I’m proud of, but this is certainly some of it).

Remember, think simple. Write simple code! Simple code is easy to maintain and easy to understand. Almost always, avoid workarounds. Try to discover the root cause.

Now, there is a third kind of maintainer (see, I lied). The ones who write a workaround, file a bug against the root cause, either fix the root cause themselves (or get people who are more familiar with the code to fix it), and then replace the workaround with the proper fix. These are the people you want when handling critical systems. Hey, it’s true that solving the root problem is a good thing, but sometimes you need to act decisively and fast. A workaround may be the only way to push that bugfix out before too many people are affected. But remember to fix the root cause! You do not want to build a workaround around a workaround around another workaround (which seems to be pretty common nowadays).

Parallel programming is a double-edged sword

Tuesday, November 11th, 2008

Today I read an interesting article at DDJ on understanding parallel performance. It outlines several truths about multithreaded code. One is that it is not always faster than non-threaded code.

Yes, it’s surprising. Several years back, when I was young(er) and single-core was the only affordable choice for a PC (there were servers with multiple processors, but those weren’t affordable at all), I was naive enough to think that if I made my code multithreaded, I would gain some speed. In general that’s probably true, up to a point. Too many threads imply expensive context-switching, especially on a single core (heck, it is still a problem with multiple cores, as long as there are more threads than cores, as the article clearly points out). Over the years, I have come to realize several considerations to ponder when writing multithreaded code.

The first is how CPU-bound your threads are. If your process is highly CPU-bound, it is not worth having more threads than cores. Remember, there are always other processes running as well, so the optimum number is the number of cores, or fewer as the number of cores grows. For CPU-bound processing, it is better to minimize context-switching.
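A sketch of that sizing rule in Python (note that in CPython specifically, truly CPU-bound work needs processes rather than threads because of the GIL, but the one-worker-per-core rule is the same either way):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def busy(n):
    # Stand-in for a CPU-bound task.
    return sum(i * i for i in range(n))

workers = os.cpu_count() or 1     # no more workers than cores
with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(busy, [1000] * 4))

print(results[0])
```

The point is that `workers` is derived from the core count rather than from the number of tasks; extra tasks queue up instead of each spawning a thread.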

Blocking is another consideration. Any task that may block for I/O or to acquire a lock should run on a thread pool. In this case, I would say that having two to three times as many threads as cores is optimal for most cases. Note that for highly I/O-bound processes, more threads may be needed to fully squeeze every ounce of performance out of those cores.
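A sketch of that pool sizing for blocking work, with a made-up `fetch` that simulates a blocking network call (the function and URLs are purely illustrative):

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    time.sleep(0.01)          # stand-in for a blocking network call
    return "response for " + url

# Two to three times as many threads as cores, per the rule of thumb above.
workers = (os.cpu_count() or 1) * 2
urls = ["http://example.com/" + str(i) for i in range(10)]
with ThreadPoolExecutor(max_workers=workers) as pool:
    responses = list(pool.map(fetch, urls))

print(len(responses))
```

While one thread sits blocked in `fetch`, the extra threads keep the cores busy; that is why the pool can usefully be larger than the core count here.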

Locks. When your threads require access to common resources, reduce the number of threads. Too many threads will cause lock contention everywhere and slow down processing, as CPU cycles are spent acquiring locks. Either minimize contention, which may not be possible in most cases, or segregate resources appropriately. For example, you may want just one thread writing to a common file, with any thread that wants to write to the file queueing its task to that one thread. It’s not so easy in other cases, although there is usually a way out that minimizes lock contention. For example, when writing to an object, try to have different threads write to different, ideally unrelated, parts of the object.
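The single-writer idea can be sketched like this: producer threads only enqueue, and one dedicated writer thread owns the shared resource, so the resource itself never needs a contended lock (the queue does the synchronization):

```python
import queue
import threading

log = []                           # stands in for the shared file
tasks = queue.Queue()

def writer():
    while True:
        line = tasks.get()
        if line is None:           # sentinel: no more work
            break
        log.append(line)           # only this thread ever touches `log`

w = threading.Thread(target=writer)
w.start()

def producer(i):
    tasks.put("message %d" % i)    # producers enqueue; they never write

producers = [threading.Thread(target=producer, args=(i,)) for i in range(5)]
for t in producers:
    t.start()
for t in producers:
    t.join()
tasks.put(None)                    # shut the writer down
w.join()

print(len(log))
```

Contention is now confined to the queue, which is designed for exactly that, instead of being spread over every writer of the file.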

In some cases, you can make a single thread perform much better than multiple threads (well, I said single thread, but I’m lying). The idea is to saturate that thread as much as you can with CPU-bound tasks. You may have some other threads (see, I lied) that collect resources and write out data as the single thread completes its tasks. In a web server, for example, you can have one master thread. When an incoming HTTP connection arrives, another thread processes the request into a task (perhaps a Request object?) and queues it at the master thread. The master thread may process one request, realize that it needs to read a file, queue that work to a file-reader thread, and at the same time return the partially processed request to the queue and start processing other requests. The main bottleneck here will be the queue implementation, but this is a baby example, so I’m going to stop there. Another bottleneck arises if incoming requests arrive faster than the master thread can process them, in which case we arrive at the classic case of dropping requests (à la network routers and switches). At that point, it may be time to consider having two master threads.
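Here is a toy sketch of that master-thread design, with made-up request dictionaries standing in for a real Request object; a helper thread performs the fake blocking file read and re-queues the request when done:

```python
import queue
import threading
import time

requests = queue.Queue()
done = []

def master():
    while True:
        req = requests.get()
        if req is None:                      # sentinel: shut down
            break
        if req.get("needs_file"):
            # Hand the blocking read to a helper thread, then requeue
            # the request once the data is attached.
            def read_and_requeue(r=req):
                r["data"] = "contents of " + r["path"]   # fake file read
                r["needs_file"] = False
                requests.put(r)
            threading.Thread(target=read_and_requeue).start()
        else:
            done.append(req)                 # request fully processed

m = threading.Thread(target=master)
m.start()
requests.put({"needs_file": True, "path": "index.html"})
requests.put({"needs_file": False, "path": "about.html"})

while len(done) < 2:                         # wait for both requests
    time.sleep(0.01)
requests.put(None)
m.join()
print(len(done))
```

The master thread never blocks on I/O itself; it only pulls the next runnable piece of work off the queue, which is the property that keeps it saturated.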

From the above example, you can probably tell that this method of self-managing your “threads” (in this case, instead of threads, we have requests) allows for much faster processing, as it reduces context-switching.

What about LWPs (light-weight processes)? Indeed… what about LWPs? :) LWPs are fast; I love them.

P.S. Do read the DDJ’s article above. It’s very thoughtful.