Monday, April 30, 2012

A Personal History of Computer Hardware

Reading Herb Sutter's comments on changes in computer hardware ("The Free Lunch Is Over", from 2004, and "Welcome to the Jungle", from 2011) led me to think about the computers I have engaged with over the years.

I had fleeting encounters with computers as a university student; this was at a time when a whole university had just a handful of computers.  My first real engagement with computers was in the late 1960s, when I got a summer job at a computing laboratory run by CSIRO, the Commonwealth (of Australia) Scientific and Industrial Research Organisation.  The machine was a Control Data 3200, which had (I think) 32,000 24-bit words of memory.  That is 96 kilobytes (though the term "byte" was not in general use then), less than one thousandth of the memory of any video card today, let alone the memory of a whole computer.  It occupied the whole of a large room, being made of discrete transistors (not integrated circuits, i.e. "chips").  Input was by punched card, one card per line of program; you put your bundle of cards in a box and waited some hours for the program to be run, since the computer required specialised human operators.  Then you looked at the printed output, found the missing comma in your program, and tried again.  The machine had four magnetic tape units (one tape held about 5 megabytes), and there was a monstrous line printer.  I think there was also a pen plotter, though I didn't use it.  As a great privilege I got to go once or twice into the machine room and actually sit at the console and type commands.

Despite all the obvious differences, the basic architecture of both the hardware and the software was remarkably similar to that which prevailed across the whole of Sutter's "free lunch" period, 1975-2005.  There was a single processing unit, a quantity of memory (RAM), and slower but more capacious external storage, in this case provided by the magnetic tape drives.  I did some programming in assembly language, and the underlying operations that the machine carried out (load, store, add, shift, jump, and so forth) are still there, though the way these operations are carried out inside the CPU has become much more complex and there are new types of operation (I don't think there were any stack manipulation instructions then, let alone vector instructions).  The higher-level language was Fortran, and as far as I remember the cycle of compile (separately for each "compilation unit"), link, load, run was the same as that still used today with languages like C++.
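For concreteness, here is a minimal sketch of that same separate-compilation cycle as it survives in C++.  The file names, and the build commands in the final comment, are invented for illustration; the details vary by compiler.

    // square.cpp -- one "compilation unit", compiled on its own
    int square(int x) {
        return x * x;
    }

    // main.cpp -- a second compilation unit; it only declares square(),
    // and the link step later joins the two object files into a program
    #include <iostream>

    int square(int x);  // declaration; the definition lives in square.cpp

    int main() {
        std::cout << square(12) << '\n';  // prints 144
    }

    // A typical build: compile each unit, then link, then load and run:
    //   c++ -c square.cpp          (gives square.o)
    //   c++ -c main.cpp            (gives main.o)
    //   c++ square.o main.o -o demo && ./demo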

I went to England for further study, and encountered my first "departmental" computer, meaning that it belonged to the Mathematics Department, not the University as a whole.  It was a PDP-8, the size of a bar fridge; it had (I think) the equivalent of 8 kilobytes of memory, and programs were input via paper tape.  I took a course on Lisp using this machine; it was the first interactive language I encountered, one where I could change things on the fly.  Around this time I visited a friend at Cambridge University and encountered for the first time the arrangement of numerous terminals connected to a single computer.  By this time integrated circuits were being used, though the single-chip microprocessor didn't arrive until a little later.  Hard drives were also arriving, though they were the size of washing machines or bigger.

My working life was spent in University mathematics departments, so computers were always there, though often just in the background.  The system of numerous terminals connected to a single computer, probably in another building, remained dominant for quite some time.  For a while the terminals were teletypes; they physically typed onto paper.  The Control key on computer keyboards dates from the teletype era: it was used to control the teletype by, for example, advancing the paper a line (control-J) or ringing the bell on the teletype (control-G).  The resulting non-printing "control characters" are still used in computer text files.  In the 1960s a character set held only 64 characters including the control characters; there was room only for UPPER CASE letters.  When character sets with 128 characters (7 bits) came into use, lower case letters became available, and computer output became much more readable.
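Those teletype codes are easy to see from any modern language.  A small C++ illustration (the numbers are simply the ASCII codes):

    #include <iostream>

    int main() {
        // Control-J is character code 10, the line feed that advanced the
        // teletype's paper; it survives as the '\n' ending lines of text.
        std::cout << "one line\nanother line\n";

        // Control-G is character code 7, which rang the teletype's bell;
        // many terminal emulators still beep (or flash) on receiving it.
        std::cout << '\a';

        // Control characters are simply the first 32 codes:
        // control-G = 'G' - 64 = 7, control-J = 'J' - 64 = 10.
        std::cout << ('G' - 64) << ' ' << ('J' - 64) << '\n';  // prints 7 10
    }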

The teletypes gave way to the ubiquitous green-screen monitors, 80 characters across and 24 or 25 lines deep.  What look like descendants of these can still be seen at shop checkout counters.

At some point the mathematics typesetting program TeX arrived, and we all became amateur typesetters.  Before that, mathematical typing was a specialised skill, done by administrative staff using IBM golfball typewriters.  TeX allowed the production of better-looking results than any typewriter could achieve, but it wasn't easy to use, and really only people from mathematics and related disciplines took to it.  It was and is open-source software, and it remains the standard method of producing mathematical documents.
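For readers who have never seen it, here is a minimal example in LaTeX, the macro package through which most people now use TeX; the formula is an arbitrary choice.

    % A minimal LaTeX document: plain-text markup in, typeset mathematics out.
    \documentclass{article}
    \begin{document}
    The roots of $ax^2 + bx + c = 0$ are
    \[
      x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},
    \]
    which no golfball typewriter could render so cleanly.
    \end{document}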

The next big change was the spread of personal computers.  The first one of these I got to use was an Apple II that belonged to a friend.  I went round to his place, and he sat me down in front of the machine and then went out to do some errand.  I knew that in principle I couldn't harm the computer just by pressing keys, but I was still a bit nervous (it was expensive).  I touched a key, there was a loud bang, and the computer stopped working.  The machine was full of plug-in cards, and it turned out that a sharp protrusion on one card had managed to eat its way into a capacitor on a neighbouring card, resulting in a destructive short circuit.

The first computer that I owned myself (1985) was a Commodore 64; the name indicated that it had 64 kilobytes of memory in its small plastic box, that is, two thirds of the memory of the room-filling machine of the late 1960s.  It also had an inbuilt sound synthesiser chip, and it was the only computer I have ever used that had a genuine random number generator.  Usually there is a pseudo-random number generator, a small program that generates a deterministic sequence of numbers once the starting point is set, but the Commodore 64 could read the analogue noise generator circuit in its sound chip, which gave genuine, physically based random numbers.  The Commodore was much cheaper than the Apple, but it didn't have a floppy disk drive, only a very slow unit that stored data on audio cassettes.  It has been said that the Commodore 64 was the last computer that one person could understand in its entirety; it even came with a circuit diagram.
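The distinction is easy to show in code.  Here is a minimal pseudo-random generator in C++ (the constants are the classic Park-Miller "minimal standard"); given the same seed it produces the same sequence on every run, which is exactly what the Commodore's noise circuit avoided.

    #include <cstdint>
    #include <iostream>

    // A pseudo-random generator is just a small deterministic program.
    // This one is the classic Park-Miller "minimal standard":
    //     x  ->  16807 * x  mod  (2^31 - 1)
    std::uint32_t state = 1;  // the seed; it fixes the entire sequence

    std::uint32_t next_random() {
        state = static_cast<std::uint64_t>(state) * 16807u % 2147483647u;
        return state;
    }

    int main() {
        for (int i = 0; i < 5; ++i)
            std::cout << next_random() << '\n';  // the same five numbers, every run
    }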

These home computers had some of the attributes of a video game console and certainly helped the evolution of computers into multi-media machines.

In 1989 the Internet proper arrived in Australia with a satellite link from Australia to the mainland U.S. via Hawaii, and the establishment of what was called AARNET by a consortium of Australian universities and the CSIRO.  Before that there had been more localised Australian networks, with international email available, though it was not easy to use.  A lot of the network developments happened in University computer science departments, with mathematics, physics and engineering departments not far behind.  General use outside Universities didn't start in Australia until about 1993.

At home I bought an Atari, also in 1989; I was getting involved in electronic music, and the Atari was well adapted for that.  Meanwhile at work, workstations had arrived: desktop computers in their own right, with much better displays than the old terminals, and networked together.  A little later I got a Sun desktop computer at work.  It had 4 megabytes of memory (I think), but by default only an 80 megabyte hard drive.  This was nowhere near enough, so I got an additional 600 megabyte disk drive, which cost over $2000.  Twenty years later, a drive with 1,000 times the capacity costs around one twentieth of the price, not allowing for inflation.  I don't think anyone foresaw this extraordinary increase in hard drive capacity.

The Sun workstation had an additional piece of hardware that could be used as a sound card, though it was actually a general scientific data collector.  It contained a DSP (Digital Signal Processor) chip, which for certain purposes was much faster than the main processor.  DSP chips are still used in specialised applications, including sound cards.

After that the World Wide Web appeared, via the Mosaic browser.  The IBM PC and its clones gradually became dominant; at work they were connected to a central server, and were more likely to run Linux than Windows.  I also used a PC at home; I changed to the Macintosh in 2006.

A computing-related development that came at work shortly before I retired was the establishment of an "access grid room": essentially a well-equipped and well-connected video conferencing room, allowing the sharing of specialised mathematics courses between universities.  Another development late in my working life, and one related to Sutter's comments, was the building of supercomputer-class machines by hooking together networks of 100 or more PCs.  Smaller versions of these clusters were within the reach of individual University departments or research centres.  I didn't have an excuse to seek access to them.

The electronic computer was born a little before I was, but stored-program machines did not arrive until after I was born; the earliest electronic computers were not stored-program designs.  The transistor, too, was born shortly after I was, so the twin revolutions of computing as we know it and of micro-electronics have taken place within my lifetime.

Thursday, April 19, 2012

There Is No Free Lunch in the Jungle

I have not normally been posting on technical topics, and I am not a professional programmer, though I do spend a fair bit of time writing programs for artistic purposes.  Professional programmers won't find anything of technical interest here.

Recently I came across two articles by Herb Sutter, entitled "The Free Lunch Is Over", from 2004, and "Welcome to the Jungle", from 2011.  Together they chart fundamental changes in the way that computer hardware is organised, and the effect that this is having on computer programs and computer programmers.  Sutter is a programming guru who works for Microsoft, and he is particularly interested in changes to programming techniques.

In "The Free Lunch Is Over", Sutter presciently pointed out that the era of ever faster and more powerful computer processors is ending.  The free lunch was the continual increase in computer processor speeds, sustained over a very long period (Sutter says roughly 1975 to 2005, but 1975 is an approximate starting date for desktop computers; for bigger computers it surely extends further back).  This meant that software developers didn't have to worry too much about inefficient software; it might be a bit slow today, but tomorrow's machines will run it fast enough.  Sutter's article, which first appeared in 2004, pointed out that processor clock speed had started to level out.  Since then, there has been almost no increase in clock speed, which has stagnated at something under 4 gigahertz; the obstacle is the amount of heat generated in the small space of the chip.  Sutter's first era is the era of the free lunch of ever-increasing processor speeds

It is still possible to pack ever more transistors into a chip, so since 2005 there has been a proliferation of multi-core chips, where each "core" is equivalent to the whole processor of an earlier machine.  Today typical desktop machines have four cores, and even phones and tablets are beginning to have two.  Different programs can run at the same time on different cores, but to really exploit the hardware a single program has to use several cores simultaneously.  This requires a big change on the part of programmers, who need to acquire new tools and a new mindset.  Approaches to what is variously called parallel programming, concurrency or multi-threading have been around for a long time, but now they have suddenly become central.  Sutter's second era is "multi-core": machines with a relatively small number of powerful cores.  The first article takes us to this point.
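To give the flavour, here is a minimal sketch in C++ (using the standard std::thread facility from C++11) of a single program spreading one computation across several cores; the even slicing of the data is just one simple scheme among many.

    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    // One program, several cores: each thread sums its own slice of the
    // data, and the partial sums are combined at the end.
    int main() {
        std::vector<double> data(10000000, 1.0);
        unsigned n = std::thread::hardware_concurrency();  // e.g. 4 cores
        if (n == 0) n = 2;                                 // fallback if unknown

        std::vector<double> partial(n, 0.0);
        std::vector<std::thread> workers;
        std::size_t chunk = data.size() / n;

        for (unsigned t = 0; t < n; ++t) {
            std::size_t begin = t * chunk;
            std::size_t end = (t + 1 == n) ? data.size() : begin + chunk;
            // Each thread writes only to its own slot, so no locking is needed.
            workers.emplace_back([&, t, begin, end] {
                partial[t] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0.0);
            });
        }
        for (auto& w : workers) w.join();  // wait for all the cores to finish

        std::cout << std::accumulate(partial.begin(), partial.end(), 0.0) << '\n';
    }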

In the second article, Sutter considers that the "multi-core" era is already ending, even before we have learnt to cope with it.  The third era is that of "hetero-core", the era of heterogeneous cores, which according to Sutter started in 2011.  As far as the actual hardware is concerned, the third era arrived when powerful graphics cards started to be fitted to home computers for playing games.  These graphics cards contain a large number (a hundred or more, for example) of very small specialised cores, originally capable only of processing pixels for display.  These small cores have gradually become more general-purpose, and there has been considerable interest in scientific computing circles in harnessing their power for general-purpose computation, not just graphics.  This interest is now going mainstream, but it brings with it yet more challenges for programmers: added to the already difficult challenge of adapting a program to make use of multiple cores, different parts of the one program may now be running on cores with very different capabilities.
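A sketch of what this looks like in practice, written in CUDA, a C++ dialect for programming NVIDIA graphics cores (the function and variable names here are invented for illustration): main() runs on the ordinary CPU, while the function marked __global__ runs simultaneously on many small graphics cores, one array element each — two very different kinds of core inside one program.

    #include <cstdio>
    #include <cuda_runtime.h>

    // This "kernel" runs on the graphics card: each of the many small
    // cores handles one element of the array.
    __global__ void scale(float* x, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
        if (i < n) x[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;                    // about a million floats
        float* host = new float[n];
        for (int i = 0; i < n; ++i) host[i] = 1.0f;

        // Copy the data to the graphics card's own memory...
        float* dev;
        cudaMalloc((void**)&dev, n * sizeof(float));
        cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

        // ...run the kernel in blocks of 256 threads, enough to cover the array...
        scale<<<(n + 255) / 256, 256>>>(dev, n, 2.0f);

        // ...and copy the results back to the CPU's memory.
        cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
        std::printf("%f\n", host[0]);             // prints 2.000000

        cudaFree(dev);
        delete[] host;
    }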

Sutter has the "hetero-core" era ending some time in the 2020s, because he thinks that is when Moore's Law (the observation that the number of transistors on a chip doubles every two years) will finally end.  At that point our desktop and laptop and pocket computing devices will have as much power as they are ever going to get.  Sutter thinks that by then another trend will have taken over: the availability of "hardware as a service", enormous clusters of computers available to be used over the Internet by anyone, for a fee.  This presents still another challenge for programmers: a program will run partly on the 1,000 or more heterogeneous cores that the user's local machine (desktop, laptop, tablet or phone) will by then contain, and partly on a much bigger collection of cores available at the other end of a wi-fi link.  Sutter considers that building larger and larger networks of computers will be, for the foreseeable future, much easier than cramming more and more transistors into a single chip or box, so growth in computing power will take place less in individual machines and more in the availability of networks of computers.  As Sutter points out, Amazon and others already offer large clusters of computers for hire; he gives the example of a cluster with 30,000 virtual cores that was (virtually) put together for a pharmaceutical company, which hired it for one day at a cost of under $1500 per hour.  The calculations would have taken years on a desktop computer.

Interesting times!