The software that flies the Space Shuttle is the most rigorously tested in the world.
Recent developments in the investigation of the Columbia disaster have focused on the onboard computer, which was in control of the shuttle prior to and during its cataclysmic descent. All indications are that the computer sensed unusual drag on the damaged left wing and adjusted course to compensate. Unfortunately, at the hottest point of re-entry, the slightest course adjustment can cause disaster by putting heat stress on less protected parts of the shuttle. This particular adjustment not only exposed more vulnerable areas of the shuttle to extreme frictional heat, but also subjected the damaged left wing to still more drag, further weakening its protective tile. At any rate, that is one popular theory as to why Columbia broke up.
Fans of “2001: A Space Odyssey” must have felt a little pang of irony when they found out the on-board programming language of NASA’s Space Shuttle program is called HAL/S. Those same fans might recall the legend that Arthur C. Clarke chose the name by shifting each letter of IBM one step back in the alphabet to form a word that is also the common name HAL (Clarke himself insisted HAL stood for “heuristically programmed algorithmic computer”). The fact that HAL/S (High-order Assembly Language/Shuttle) was created after “2001” and runs on IBM computers only adds to the irony and causes one to wonder if HAL and HAL/S have more in common than their names.
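The letter trick is easy to check: shifting each letter of HAL one step forward in the alphabet yields IBM, and shifting IBM one step back yields HAL. A quick sketch in Python:

```python
def shift(word, offset=1):
    """Shift each letter of a word by `offset` positions in the alphabet."""
    return "".join(chr(ord(c) + offset) for c in word)

print(shift("HAL"))       # H->I, A->B, L->M: prints IBM
print(shift("IBM", -1))   # and back again: prints HAL
```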
Though both the fictional HAL and the real HAL were involved in space accidents, the similarity ends there. The computing systems aboard the shuttle fleet, built around IBM’s custom AP-101 computers, run the most rigorously tested software in the world. According to the New York Times, “It is one of a handful of projects in the world to receive a Level 5 rating from Carnegie Mellon University’s Software Engineering Institute for the reliability of its code and the rigor of its testing processes.” According to the Times report, the guidance system program has more than 400,000 lines of code. Recent versions have no known errors; at least, none that 30 years of testing has turned up. According to our programming expert Nelson King, a typical software project with 1 million lines of code contains hundreds of errors, dozens of which affect the performance of the program.
If you can’t blame the system itself for the course correction, who or what can you blame? In a sense you can blame the system, or the procedures of Mission Control. The system behaved exactly as programmed, but the program did not account for the situation. It assumed the drag on the left wing was not the result of damage but of ordinary atmospheric resistance, to be compensated for with a course correction. In programming terms, we say the system lacked intelligence; it did not know what caused the drag, and that ignorance proved fatal. Some engineers consider a lack of intelligence a bug. But properly understood, a bug is an error in a program, not a limit on a program’s intelligence. If this were a bug, all programs would be extremely buggy, because none of them understand the nuances of their circumstances.
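This is not the shuttle’s actual flight code, but the distinction can be illustrated with a toy sketch: a controller that always trims toward the draggier wing, versus one that first asks whether the drag is already explained by known damage. All names and numbers here are invented for illustration.

```python
# Hypothetical illustration only -- not the shuttle's actual logic.

def blind_controller(drag_left, drag_right):
    """Always trims toward the side with more drag, whatever the cause."""
    return drag_left - drag_right  # positive -> steer to offset left drag

def aware_controller(drag_left, drag_right, known_damage_left=False):
    """Skips the correction when the imbalance is explained by known damage."""
    imbalance = drag_left - drag_right
    if known_damage_left and imbalance > 0:
        # Drag is accounted for; correcting would only add heat stress.
        return 0
    return imbalance

# Same sensor readings (arbitrary units), different judgments:
print(blind_controller(14, 10))         # corrects by 4
print(aware_controller(14, 10, True))   # holds course: 0
```

The blind controller behaves exactly as programmed and is still wrong, which is the column’s point: the error is a missing fact about the world, not a flaw in the code.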
The greater error lies outside the computer. If you know the computer lacks intelligence, you don’t let it control the ship at crucial moments such as re-entry and landing. The idea is that a ship on autopilot is less prone to human error. True enough. But in complex situations, we should prefer human error to machine error. Humans are orders of magnitude more intelligent than computers, and that intelligence is especially needed in situations that require flexible thinking. A pilot at the controls might have seen the left-wing drag data and interpreted it differently than the computer did: “I know the wing is slightly damaged; that would account for the drag; therefore a course correction is not required.” More likely, he or she would have made the correct judgment in a flash, without conscious thought, and avoided the correction as a matter of training.
Until we know more about the cause of the crash, we can only speculate whether that would have saved Columbia and her courageous crew. But we can learn from this incident to trust human intelligence above computational precision in future missions. And we can carry this lesson beyond space exploration into every enterprise. In situations that require quick and flexible thinking, humans should be preferred over so-called intelligent systems. That was the central lesson of “2001”: for all his computational prowess, HAL had to be turned off in order to complete the mission.
James Mathewson is editor of ComputerUser magazine and ComputerUser.com.