How is future technology relevant to current pursuits?
by Nelson King
How many times have you heard a comment like this: “Only 10 years ago, this was considered science fiction.” You’d think we 21st-century moderns would be accustomed to scientific and technological developments shaking up our world with some regularity, but we continue to be at least mildly surprised. Case in point: The day I finished working on this column, an American research group announced it had cloned a human embryo.
Developments like this prompted me to put together a series of four columns on the future of computing. The majority of COMPUTERUSER readers are business folk from small and medium-size companies who have an interest in computing. Future technology is important to this group, but not necessarily a future more than a few years out.
However, most of what I’m going to write about isn’t tomorrow’s breakthroughs, or even next year’s. Ere long, some of the science I’ll describe will be discarded as obsolete or just plain wrong. Knowing this, I decided that rather than diving straight into the main topic, microscopic computers, it might be useful to first consider the value of looking into the future of technology.
Framework on the mind
If you were in the mainframe or minicomputer business and lived through the PC revolution, you already know first-hand that those who don’t see the future are likely to be unpleasantly surprised by it. More positively, a new technology is more often than not an opportunity to make a fortune. New technologies usually raise some people and companies and ruin others. We have plenty of historical evidence for this. Still, at what point should nonspecialists pay attention to new technologies? In computing, there are always dozens of major technological innovations on the horizon–the horizon being anywhere from five to 50 years away. At what point should the average COMPUTERUSER reader start to evaluate a future technology?
If you think I’m going to give you a formula, forget it. There is no calculus that will guarantee you can find the right time to seriously consider a new technology–or whether you can profit from it. However, it isn’t necessary to evaluate new technology with precision in order to derive some value from the exercise.
There are similarities in the long-term evaluation process, whether you’re considering an upgrade to Windows XP or next decade’s use of diamondoid nanoprocessed gears. You pick up the track of a new technology when it interests you, or when it appears that it may have some impact. Over time you learn about it and develop your opinions. This is how you get ready for that day when decisions need to be made. Admittedly, most of this is a subjective process; but like I said, precision isn’t the point.
Mini this and micro that–we’re already quite familiar with the progress of miniaturization. Computers have gone from rooms full of tubes and wires to things that can slip into your shirt pocket or sit on a watchband. Now consider fitting computers on the head of a pin.
Nanotechnology, which deals with the very, very small (one nanometer is about the width of four atoms, or one-billionth of a meter), is a vast area of research. I’ll soon describe its influence on computing, but keep in mind that it will affect many fields, as this partial list of nano-words indicates: nanobiology, nanochemistry, nanocomputing, nanoelectronics, nanofabrication, nanomedicine, nanophotonics, and nanophysics.
This is a realm that is difficult to imagine; it becomes visible only under a powerful electron microscope. The idea of engineering at this level is only a few decades old. The actual ability to do the engineering began only within the last few years. Yet this is another tectonic technological shift in the works.
Time for a switch
The physical basis of computing, as we generally know it, is the principle of an electronic switch. The state of the switch, on or off, becomes the representation of the binary system (ones and zeros) from which we derive bits, bytes, and the rest of the computational capability built into logic (CPU) and memory (RAM) circuits. Put these circuits together with some wiring and you get a computer. From the very first days of practical computers, engineers have searched for ways to make the switches and circuits smaller. The more you can get into a smaller space, the more powerful the computer.
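To make the idea concrete, here is a toy sketch (in Python, purely for illustration) of how on/off switch states become a byte, and how a simple logic gate operates on those states. Nothing here is specific to any real hardware; it only mirrors the binary principle described above.

```python
# Toy illustration: a "switch" is a 0/1 state; eight of them make a byte.
switches = [0, 1, 0, 0, 0, 0, 0, 1]  # on/off states, most significant bit first

# Interpret the eight switch states as one binary number (a byte).
value = 0
for state in switches:
    value = (value << 1) | state  # shift left, then append the next bit
print(value)  # 0b01000001 == 65, the ASCII code for 'A'

# Logic circuits combine the same states; an AND gate is the simplest example.
def and_gate(a, b):
    return a & b

print(and_gate(1, 1), and_gate(1, 0))  # 1 0
```

Everything a CPU does is, at bottom, layer upon layer of such gates wired together, which is why shrinking the switch shrinks the whole machine.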
This is all Computing 101 and probably quite familiar to you. Nanotechnology for computers follows the same approach and motivation (make smaller switches and circuits), but the molecular (or even atomic) scale changes almost everything radically. At the moment we’re approaching nanotechnology from roughly two directions: the ongoing, traditional process of making computer chips by electron lithography, and the more fundamental and difficult approach of building integrated circuits by manipulating atoms and molecules directly, known as molecular engineering. Most scientists and engineers believe we are coming close to the end of downsizing traditional integrated-circuit capability. The next generation will use building blocks from nanotechnology processes.
There are numerous approaches to constructing nanocomputer components. This is where watching the future of this particular technology becomes complicated. There are switches made from carbon nanotubes (Hewlett-Packard research calls them oligomers); Bell Labs is working on a self-organizing process created by pouring an organic polymer on a base; IBM and others are working on “machining” processes that use molecular manipulation with tunneling microscopes. This is just a sampling. Not all of these approaches (and maybe none) will be the route to the first commercially viable nanocomputer component. But scientists and engineers are becoming very confident that such components will be made-they just don’t know when.
In the terms we’re discussing, human beings are quite large. The many degrees of physical separation between human scale and nanoscale represent a gulf that needs to be crossed. Put another way, after solving the problems of nanomanipulation, we also have to interface it with humans and human-scale devices. Even if we put several computers on the head of a pin, proceeding to combine them into systems, link them to devices we can see and monitor, and generally make them work within our existing human infrastructure will be no small task.
Then there’s the software. It’s axiomatic that software lags behind hardware. This isn’t always true, but in general, it takes years before software takes advantage of hardware capabilities. This will be true with nanotechnology, and I believe we must solve some very difficult issues involving the way we create software. In fact, if the rosy visions of nanotechnology have a flaw, it’s the software needed to organize and control the thousands, millions, or billions of individual “nanites” or “nanobots” that are envisioned. I’ll cover this issue in much more depth in a later column.
It should come as no surprise that there are both optimists and pessimists in the nanocomputer research community. Some believe that useful components will become available within five years. That is truly a short timeline, and if it’s accurate, a lot of companies had better get with the program soon. On the other hand, more pessimistic types believe that even the most rudimentary components are still 10 years away. In that case, we have more time to dream about uses: How about a computer screen with the size and clarity (resolution) of a paperback book? Or clothing filled with computerized sensors that can monitor health and comfort? We also have time for nightmares, such as computers embedded in the human body or surveillance by ubiquitous microscopic computers. However you look at this technology, its potential is staggering, which makes it worth watching. As for the outcome: The pessimist sees the glass half empty, the optimist sees it half full, and the realist says, “I’m thirsty.”