When will productivity make a quantum leap?
Recently I had one of my monthly conversations with Vance Opperman. Those who read our masthead may recognize the name. Vance is the CEO, general counsel, and owner of our company. What the masthead doesn’t say is that Vance is also a recognized expert in business technology, law, and governance. Brilliant people like Vance don’t just enliven conversations; they inspire me with questions that force me to find answers. This meeting with Vance inspired more persistent questions than any conversation I have had since I interviewed MIT’s Michael Dertouzos for this space last March. I am still trying to answer one of the late Prof. Dertouzos’s questions (What makes a good search engine?). But Vance’s main question (what new technologies will enhance business productivity?) has taken control of my life. I suspect I will be working on Vance’s question for a while. But here is what I’ve come up with so far.
Most new technology initiatives face uphill battles for slices of slender IT budgets. After all the tech industry has been through since the Y2K fiasco, executives demand aggressive return on investment (ROI) schedules, which in turn require obvious productivity improvements. This doesn’t just mean faster machines. It also means better tools. And frankly, we are not seeing major improvements in the tools we use. Productivity has come a long way through office automation and the Web. But one frontier looms large on the horizon: reducing communication confusion. We waste an enormous amount of time clarifying e-mail, looking for information on the Web, or hunting for just the right file on the network. In short, computer-mediated communication has only enhanced the speed (and quantity) of messages, not their quality. Good communication still requires a lot of human effort.
I have studied communication for most of my life, and it seems to me that all computer-mediated communication technologies share one central flaw: They make human language fit the framework of modern computing rather than the other way around. Modern computing reduces all work to the simplest terms: yes or no, true or false, 1 or 0. But because everything must be couched in these terms, the programs we write to do this work run to millions of lines of code. The more complex the task, the longer the program. The longer the program, the better the chance that it contains at least one flaw, or bug. There is a practical limit to the complexity of the task we can reduce to yes-or-no questions. Communication systems, such as voice recognition, are stuck in a rut because they have reached that limit. Every enhancement developers make to these programs results in a less stable system.
The book I review in this issue, “The Chip,” talks about a time when electrical engineers could design circuits to do certain tasks, but no one could build the circuits because they involved soldering thousands of transistors onto a circuit board. Since the circuits were soldered by hand, the chances of a flaw in a circuit increased with the number of transistors. This problem, called the “tyranny of numbers,” brought electronics to a standstill in the ’50s. There were several potential solutions, including automating the soldering process. But most were not elegant. The ultimate solution, developed by both Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor and later Intel, was to integrate all those transistors on a single wafer of silicon. The integrated circuit solved the tyranny of numbers. Now every electronic device we use is made smart with integrated circuitry.
Nowadays, we have a new tyranny of numbers for hardware. There is, in fact, a limit to the number of transistors we can fit on a chip. In his past several Pursuits columns, Nelson King has explored various proposed solutions that don’t involve etching circuits into silicon. But we are still a few years from using the chips we have to the fullest. By the time we need to switch to new types of integrated circuits, there will be at least one viable solution to the new tyranny of numbers. The likely answer will be to take advantage of the peculiar atomic physics of materials for information processing, a.k.a. quantum computing.
We also have a tyranny of numbers on the software side, and this one affects us now. We can write programs that simulate simplified versions of human language, but natural language is far more complex than these models; such programs don’t come close to capturing actual human language. At a certain degree of complexity, we simply can’t produce stable programs that are free of flaws. I have worked on Prof. Dertouzos’s problem for a year, and I now have a good idea of how to radically improve search and filtering software. But it seems to me that we could never write the programs, because they involve too much complexity to encode in yes-or-no algorithms.
Nelson King’s Pursuits column this month talks about a variant of this problem for Web services. IBM proposed a solution called automatic programming. Because most bugs are attributed to human error, IBM is developing programs that check for flaws in other programs and fix them before they adversely affect the systems. It’s an intriguing solution, but I suspect it will not help programs that get so complex that the flaws in the checker programs start to come out. Then you would need checker programs to check the checker programs, and so on ad infinitum.
My hunch is that the solution will come from left field, as the integrated circuit did. Kilby’s solution seems so simple now, but in the context of the times, it was radical. One left-field approach I’ve been mulling is what I call multistate programs. Suppose we could write a program that did not reduce everything to yes-or-no questions but also included a third state: yes and no. Yes and no would be the state that corresponds to sentences that are true in some contexts and false in others. Most of the complexity of human language comes down to this: A sentence is meant in one light and taken in another. Think of how many times someone asks us a question and the proper answer is “Yes and no.” That is, the answer depends on other circumstances.
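For readers who like to tinker: the “yes and no” state I am describing behaves much like the third truth value in what logicians call three-valued (Kleene) logic. Here is a rough sketch in Python; the names (`Tri`, `tri_and`, and so on) are purely my own illustration, not part of any product or proposal:

```python
from enum import Enum

class Tri(Enum):
    """Three answer states: the usual two, plus 'yes and no'."""
    NO = 0
    YES = 1
    BOTH = 2  # true in some contexts, false in others

def tri_not(a: Tri) -> Tri:
    # Negating an ambiguous answer leaves it ambiguous.
    return {Tri.YES: Tri.NO, Tri.NO: Tri.YES, Tri.BOTH: Tri.BOTH}[a]

def tri_and(a: Tri, b: Tri) -> Tri:
    # A definite "no" settles a conjunction; otherwise ambiguity survives.
    if Tri.NO in (a, b):
        return Tri.NO
    if a is Tri.YES and b is Tri.YES:
        return Tri.YES
    return Tri.BOTH

def tri_or(a: Tri, b: Tri) -> Tri:
    # A definite "yes" settles a disjunction; otherwise ambiguity survives.
    if Tri.YES in (a, b):
        return Tri.YES
    if a is Tri.NO and b is Tri.NO:
        return Tri.NO
    return Tri.BOTH

# "Is the report finished?" -- done in draft, but not in final form:
answer = tri_and(Tri.YES, Tri.BOTH)  # -> Tri.BOTH, i.e., "yes and no"
```

The point of the sketch is that ambiguity propagates through the logic instead of being forced into a premature yes or no; a program built this way can carry “it depends” along until the context arrives to resolve it.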
I know it sounds like a crazy idea, but it just might work to bring an end to the tyranny of numbers in communication software. And it seems naturally paired with current work in quantum computing, some of which uses four-state qubits to process information. This may not seem like much, but it would cut programming complexity in half, ultimately enabling much more complex (and better) communications tools. When communications systems better model human language, productivity will make a quantum leap.