SURVEY   THE MILLENNIUM BUG

Small cause, hugely expensive effect
IN A SENSE, reflects Robin Guenier, who led the first Year 2000 campaign in Britain, the computer was invented at just the wrong time. Had it been developed before the first world war, say, or in the past decade, the point of distinguishing between dates in different centuries would have been obvious. But the computer was born smack in the middle of the century, when some people thought humanity might never see another millennium. The first two digits of the year seemed dispensable.

This has created two distinct problems. One is in the software that runs on computers (in, for instance, the operating systems that give computers their basic capability, or the applications that enable them to perform specific tasks); the other is in the software embedded in the chips that are part of components in all sorts of systems and gadgets, from consumer electronics to the sophisticated equipment for controlling industrial or medical processes. Where either sort of software has been told to measure time in terms of years, the turn of the century brings hazards.
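
To see the hazard in miniature, consider a fragment of C. This is a hypothetical sketch, not drawn from any real system, in which the year is held as two digits, as much old code held it:

    #include <stdio.h>

    /* Hypothetical sketch: the year is stored as two digits, so an
     * interval that crosses 1999->2000 comes out negative. */
    static int years_elapsed(int from_yy, int to_yy)
    {
        return to_yy - from_yy;
    }

    int main(void)
    {
        printf("%d\n", years_elapsed(98, 99));  /* 1, as expected    */
        printf("%d\n", years_elapsed(99, 0));   /* -99, instead of 1 */
        return 0;
    }

Any schedule, interest calculation or age check built on such a subtraction fails the moment the century turns.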

In some organisations, these hazards have been known about for years. In 1989, America’s Social Security Administration tried to set up a payment schedule that ran beyond the end of the century. The computer system could not process it. This prompted the SSA to start combing through the computer code it relies on to make 50m payments a month. Visa, a big credit-card company, also spotted the problem early, but not quite early enough: for almost six months last year, it had to stop issuing cards with expiry dates in 2000 for fear they would be rejected.
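
Visa's difficulty is easy to reconstruct. If a terminal compares two-digit years, a card valid until 2000 looks long expired. Again, a hypothetical sketch; real authorisation systems are more elaborate:

    #include <stdio.h>

    /* Hypothetical expiry check with both years held as two digits. */
    static int card_expired(int expiry_yy, int now_yy)
    {
        return expiry_yy < now_yy;   /* "00" sorts before "97" */
    }

    int main(void)
    {
        printf("%d\n", card_expired(99, 97));  /* 0: accepted         */
        printf("%d\n", card_expired(0, 97));   /* 1: wrongly declined */
        return 0;
    }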



Many time-sensitive businesses have already begun to hit millennium problems; more will do so after the start of 1999, and of the financial year spanning 1999 and 2000. Systems will crash, components will fail. Generally this will merely cause irritation, not catastrophe. If a lift breaks or a mailing list becomes inaccessible, people will usually manage until the problem is solved. Some of the glitches, though, may turn out to be more serious. A broken lift in an office is a nuisance; in the emergency wing of a hospital it may be a killer. An inaccessible mailing list in a big firm may be an inconvenience; if it is a small firm’s stock-in-trade, the business will suffer.

The failures that worry people most are those that threaten public safety. In rich, well-organised countries they are likely to be tackled before those that affect public comfort. So nuclear-power stations are not likely to blow up (although they may shut down); nor aeroplanes to drop out of the sky (though they may be delayed); nor weapons to explode accidentally (though they may be unavailable for use). The threats to safety will come from more mundane failures: faulty railway signals, say. Organisations whose operations could pose a risk to life normally build in several safety checks. When they start to deal with the Year 2000 problem, they will home in first on the safety features.

But that will not stop all worries, because countries and companies will differ widely in their readiness. A disaster in one part of the world—at a Bulgarian nuclear-power plant, perhaps, or on an Indonesian flight—could affect confidence in other, better-prepared countries. In an interconnected world, people will easily exaggerate the threat from an invisible technology.

A rational choice

Even those who foresaw the Year 2000 problem in the computer’s early days chose to ignore it, for two good reasons. First, abbreviating dates made overwhelming economic sense at the time. This is hard to imagine now that memory seems almost limitless, but chart 2 is a reminder. The Gartner Group, a consultancy in Connecticut, estimates that the cost of one megabyte of magnetic disk storage (enough for a solid novel) in 1965 was $761, compared with 75 cents today and perhaps 34 cents in 2000. The two-digit date thus bought productivity gains for which the bill is now arriving.
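
A rough sum makes the point, taking Gartner's prices and assuming, purely for illustration, a master file of 10m dated records:

\[
10{,}000{,}000 \times 2\ \text{bytes} \approx 19\ \text{MB}; \qquad
19\ \text{MB} \times \$761 \approx \$14{,}500\ \text{in 1965, against roughly } \$14\ \text{today}.
\]

Multiplied across thousands of files and decades of tape and disk, the two saved digits were real money.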

The second reason for not worrying about abbreviated dates was that nobody expected software to last so long. Other investments, after all, wear out in time: buildings fall down, machines rust. When they do, companies scrap and replace them. But software turns out to be a different sort of investment. It hardly ever wears out. “In the late 1970s, when I programmed for a living,” recalls Tim Bresnahan, an economics professor at Stanford University, “no one realised that systems would be upgraded and enhanced, not replaced. It has been a big surprise to discover how long software lives.”

Companies increasingly buy off-the-shelf systems from specialist firms such as SAP or PeopleSoft. But many still rely—for, say, their payrolls and billing systems—on custom-built software which they have tweaked and updated over the years, but not jettisoned. Each tweak has buried two-digit dates deeper in a tangle of software spaghetti.

If software is in a microprocessor buried in a sliver of plastic and embedded in a valve or switch, its unforeseen longevity is an even bigger problem. These “embedded systems” may have survived, little altered, since punch-card days; because the chips are grafted into another piece of machinery, their software is less likely to be changed than the sort that runs on a mainframe or PC. As John Gage, chief scientist at Sun Microsystems, puts it: “These things may sit for years in a box out by a railroad track; you can’t change the code, and it was probably written by some kid who’s now 65 and happily retired.”

Why should the processors in embedded systems care what year it is anyway? Sometimes there is a good reason: they monitor when an appliance needs to be serviced, or on which days a bank vault should open. Sometimes a clock was built into mass-produced chips just in case it was needed in the component in which the chip would eventually be installed. SmithKline Beecham acquired two apparently identical weighing machines at a plant in Belgium. One cleanly passed a test of millennium compliance; when an assiduous employee tested the second, it failed. The manufacturer had bought the systems from two different sources.
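
The bank-vault example suggests how a hidden clock can matter even when nothing displays a date. Here is a sketch, with an illustrative scenario, using Zeller's congruence: a chip that keeps two-digit years and assumes the century is 19 computes the wrong day of the week for every date from 2000 on, so a vault timed to stay shut at weekends would open on the wrong days.

    #include <stdio.h>

    /* Zeller's congruence: returns 0 = Saturday, 1 = Sunday, ..., 6 = Friday. */
    static int day_of_week(int y, int m, int d)
    {
        if (m < 3) { m += 12; y -= 1; }   /* count Jan, Feb as months 13, 14 */
        int k = y % 100, j = y / 100;
        return (d + 13 * (m + 1) / 5 + k + k / 4 + j / 4 + 5 * j) % 7;
    }

    int main(void)
    {
        /* A chip holding only "00" that guesses the century as 19
         * thinks 1 January is a Monday; in 2000 it is a Saturday. */
        printf("%d\n", day_of_week(1900, 1, 1));  /* 2 = Monday   */
        printf("%d\n", day_of_week(2000, 1, 1));  /* 0 = Saturday */
        return 0;
    }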

Whatever else the millennium brings, it will undoubtedly make companies better informed about their IT. “Most companies”, says Capers Jones, who runs a consultancy called Software Productivity Research in Burlington, Massachusetts, “do not really know how much software they currently own, and do not even have a clue as to how much software they truly need. Even worse, most companies do not know how many of the applications they own are active and still being used, or dormant and simply taking up space.”

At least bug-hunting will give companies a better grasp of what they have, and perhaps more incentive to manage it coherently. “J.P. Morgan wouldn’t exist without IT,” says Peter Miller, the bank’s chief information officer. “If we were a car company, we’d carry our IT on our books as assets, and we would have an asset-management policy.” And even for chief information officers, debugging has produced some bonuses, such as the opportunity to regain control of their company’s computer systems, often weakened by the move to decentralised networks of PCs. Given the toil of bug-squashing, they deserve some rewards.