Steven Pemberton : Views and Feelings

July 2004

The Power of Two

Computers are getting faster

Try this experiment: take a piece of paper and draw a line to divide it into two lengthwise; in the right half write this year's date. Now divide the left half into two, a top part and a bottom part, and write the date 18 months ago in the top part.

Now, divide the bottom part into two, left and right, and in the right part write the date 18 months before that (or three years before now). Keep doing this, halving alternately vertically and horizontally, for as long as you can, preferably back to 1984 (if you have a thin enough pen).

Moore's Law says that, at constant price, computers double in speed every 18 months. So if the space with this year's date in it represents the speed of a new computer now, the space with the date 18 months ago represents the speed of a new computer then, and each successively smaller box represents the speed of a new computer 18 months earlier still.
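(As a back-of-the-envelope check, not part of the original column: a few lines of Python make the arithmetic concrete. The little speedup function and the figures are mine, assuming the 18-month doubling above. Twenty years is about 13.3 doublings, a factor of roughly ten thousand.)

    # Illustrative sketch of the Moore's Law arithmetic assumed above:
    # speed doubles every 18 months at constant price.
    DOUBLING_PERIOD_YEARS = 1.5

    def speedup(years):
        """Speed of a new machine relative to one bought `years` earlier."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    # From the Macintosh's introduction, 1984, to this column, 2004: 20 years.
    print(round(speedup(20)))  # about 10321, i.e. a factor of ~10,000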

This diagram should demonstrate two things: firstly, that your current computer is more powerful than all your earlier computers put together; and secondly, that when the Macintosh was introduced 20 years ago (see that tiny dot?), we had relatively tiny amounts of computer power available. (You might also want to contemplate how much computing power the moon landing had available in 1969...)
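(The first claim is just the geometric series: 1 + 2 + 4 + ... + 2^(n-1) = 2^n - 1, always one less than the next doubling, 2^n. A small sketch of my own that checks this numerically:)

    # Each box is twice the previous one, so the newest box always exceeds
    # the sum of all the earlier boxes put together:
    # 1 + 2 + 4 + ... + 2**(n-1) == 2**n - 1  (geometric series)
    for n in range(1, 15):  # 14 doublings spans roughly 1984 to 2004
        earlier = sum(2 ** i for i in range(n))
        assert 2 ** n > earlier
    print("the newest box beats all earlier boxes combined, at every step")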

So the question is this: what are we doing with all that extra computer power?

The answer is partly: pixels. If you double the resolution of your screen, you double both its width and its height, and so have four times as many pixels to write in the same amount of time. But the answer is also partly: nothing. Since we are not usually running videos constantly, our computers spend most of their time idle. It is no wonder that people are able to turn their old unwanted 386s into perfectly acceptable print servers, file servers or routers: if a computer isn't processing pixels or sound, there are plenty of cycles to spare.

So the next question is: have we become correspondingly more productive as computer users? Unless you are doing computer-based rendering, the answer must surely be no.

As an example, take one group of computer users: programmers. Most of them are writing in programming languages designed in the 1970s, or in languages that are architecturally similar: languages that were designed to allow optimal use of the computer hardware. Their only real productivity gain in all that time is reduced compilation times. Why are they not using programming languages that allow them to work ten times faster, and that require the computers to do ten times as much work to compile and optimise? I know of researchers in programming languages who are using such languages (and making a fortune writing programs for companies that need one-off programs), but there are only a handful of them.

Usability is about doing things faster, doing them correctly, and enjoying the process. Improving usability is not only about how we present information to users; we should also be thinking about how to use all that excess computer power to take the load off their shoulders.

First published in ACM/Interactions, July 2004
