Hey, we beat the Y2K bug -- so far

January 03, 2000|By Mike Himowitz

Anybody need 15 gallons of bottled water, two dozen flashlight batteries, a Coleman lantern, two jars of applesauce and five cans of Chef Boyardee fat-free ravioli?

If we held a household Y2K yard sale today, those items would be on the table. And at first blush, it appears that the money we spent on them bought us little more than peace of mind.

That's because Doomsday did not arrive on schedule. The planet rolled into the year 2000 with a worldwide fireworks extravaganza but barely a hiccup from the computers that control our communications, power, water and transportation systems.

This leads naturally to a question that friends, family members and colleagues have asked me over the past few weeks: Was the whole Y2K phenomenon a case of hype, or even an outright hoax?

The answers are "maybe a little bit" and "absolutely not." The Y2K threat was real -- and we're not out of the woods yet.

By most analysts' reckoning, 80 percent to 90 percent of the world's Y2K computer troubles won't show up until businesses and government agencies have been back in normal operation for weeks, or even months. Instead of one apocalypse, we may still face hundreds or thousands of smaller computer failures -- the digital equivalent of being nibbled to death by ducks.

But the fact that ABC's Peter Jennings was able to whisk us from one spectacular New Year's celebration to another -- Sydney, Tokyo, Moscow, Jerusalem, Berlin, London, New York, Las Vegas -- without reporting any glitch more interesting than a room full of whacked-out slot machines in Delaware was a tribute to the most elaborate, expensive and coordinated technology effort in history.

As a society, we identified a serious problem and did something about it in a timely fashion. This is not the way the world normally works, and we're naturally suspicious of it.

Depending on the source of the estimate, businesses, governments and individuals have spent $300 billion to $600 billion to forestall this potential disaster, which was created by a simple shortcut that programmers have used since the dawn of computing -- representing the year by two digits instead of four.

Now you and I are smart enough to figure out that the date "1/1/00" probably means Jan. 1, 2000. But computers are dumber than we are, and without specific instructions to the contrary, they're just as likely to interpret the digits "00" as 1900 instead of 2000. This simple mistake can trash any program that makes calculations based on dates -- which means most of the software that runs our major industries, financial institutions and government agencies.
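The failure mode is simple enough to sketch in a few lines. This is a hypothetical illustration, not code from any actual affected system: a program stores only the last two digits of the year, hard-codes the century as "19," and produces nonsense the moment the calendar rolls over.

```python
# Hypothetical sketch of the two-digit-year bug: the century "19" is
# hard-coded, so "00" comes out as 1900 rather than 2000.
def naive_year(yy):
    """Expand a two-digit year the way much legacy code did."""
    return 1900 + yy

def years_between(start_yy, end_yy):
    """Say, an account's age, computed from two-digit year fields."""
    return naive_year(end_yy) - naive_year(start_yy)

# An account opened in '95, checked on 1/1/00: the program sees an
# account that is minus 95 years old.
print(years_between(95, 0))  # -95, not 5
```

Any calculation downstream of that negative number -- interest, eligibility, scheduling -- inherits the error, which is why date arithmetic was the heart of the problem.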

The campaign to track down and stamp out these bugs -- embedded in millions of lines of code created by generations of programmers in our computer software and the microchips that control everything from microwave ovens to oil refineries -- has succeeded to a remarkable extent. We no longer face The End of Civilization As We Know It.
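One widely used remediation technique in that campaign -- not named in this column, but standard practice at the time -- was "windowing": rather than rewriting every record to four-digit years, programs were patched to pick the century based on a pivot value. A minimal sketch, with an illustrative pivot of 50:

```python
def windowed_year(yy, pivot=50):
    # Two-digit years below the pivot are read as 20xx,
    # those at or above it as 19xx.
    return 2000 + yy if yy < pivot else 1900 + yy

print(windowed_year(0))   # 2000
print(windowed_year(95))  # 1995
```

Windowing was cheap and fast, which is partly why so much code could be fixed in time -- but it only pushes the ambiguity out a few decades rather than eliminating it.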

But hopefully, the Y2K frenzy has taught us important lessons -- the first and foremost of which is how dependent on technology we've become, and how fragile that technology is.

The air traffic control system, the Social Security Administration, the National Weather Service, the Internal Revenue Service and most other government agencies are absolutely dependent on computers to provide the critical services we use every day. So are the utilities, banks, hospitals, stock markets, insurance companies and other private institutions that hold our lives together.

Massive, simultaneous failures in those big-ticket systems could indeed have led to an apocalypse. Had nothing been done about Y2K, flying on New Year's would have been a high-risk occupation, this week's paycheck might never have made it to your bank account, and the newspaper you're reading now probably wouldn't have made it to your doorstep.

But there are other Y2K lessons, too. One is that the Law of Unintended Consequences can apply to the tiniest decisions in our lives, and that collectively those decisions can return to haunt us. As a result, very smart, well-intentioned people can do things that, in retrospect, seem really stupid.

The Y2K bug is one of them. It was born because programmers found a simple and effective way to conserve the two most precious resources in early computers -- memory and disk storage.

This is hard to conceive of in an age when we can buy a PC with 64 megabytes of memory and a 10-gigabyte hard drive for less than $1,000. But well into the 1980s, digital storage was thousands of times more expensive than it is today, and a two-digit year made sense to clever people trying to get the most out of limited resources.

"Any of us who are in our 50s in the computer field are old enough to have written lots of code where, if you could use two bytes instead of four bytes, you'd do it in a minute," a former programmer named Harry Lewis recalled in an article posted on Harvard's Web site.
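The saving Lewis describes is easy to quantify. A back-of-envelope sketch -- the fixed-width record layout here is invented for illustration -- shows two bytes per year field adding up fast across millions of records:

```python
# Invented fixed-width record layouts: same data, two- vs. four-digit year.
two_digit = b"SMITH     95"    # 10-byte name + 2-byte year = 12 bytes
four_digit = b"SMITH     1995" # 10-byte name + 4-byte year = 14 bytes

saved_per_record = len(four_digit) - len(two_digit)
records = 1_000_000
print(saved_per_record * records)  # 2000000 bytes saved per million records
```

On machines where memory and disk were measured in kilobytes and priced accordingly, two megabytes per million records was a saving worth taking -- exactly the "do it in a minute" trade-off Lewis recalls.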

Baltimore Sun Articles