What are we going to do with all that storage space headed our way?
In my last column (first in a series of four), I looked into the future of computing largely concerning processors and memory that will be derived from nanotechnology. This column considers the future of peripherals and communications, which have their own leading technologies in material science and lasers.
As I wrote previously, this series of columns isn’t about cataloging the wonders of future technology. The Web and computer media are full of stories and information about what’s coming. Of course, the information amounts to thousands of undigested and unrelated pieces, and it’s often far from clear what’s significant and what’s not. I have no claim to 20/20 insight, but it’s my task here to provide some context and analysis. Above all, I’ll do my best to frame this look into the future of computer technology to help those who are (or should be) evaluating it for their business or personal use.
In almost all physical sciences, researchers are more than ever stepping into the world of fundamental building blocks–atomic and molecular structures. Far more powerful microscopes and sensing gear allow us to actually view molecules and some atomic features. We have greatly enhanced our ability to visualize the structures and their context using computer-generated images. We are increasingly able to use sophisticated mathematical models to describe and predict behavior and properties. Taken together, advances in microscopy, computer visualization, and mathematical modeling have vastly increased our analytical and predictive capacity for a world far beyond normal human senses and comprehension.
The upshot: The ability to tackle various mysteries, questions, and problems in physical science is at a more fundamental and explanatory level than ever before. Where researchers would once have experimented by brute force, mixing hundreds or thousands of chemicals to get a desired result, they can now use their tools to make often strikingly targeted and accurate predictions of which chemicals to mix. Nowhere is this transition more apparent than in material science.
While some hardcore computer folks read magazines and journals about integrated-circuit technology, I’ll bet very few read the Journal of Materials Science. Yet if you want information that consistently reveals how major findings in physics and chemistry make their way into practical computer (and other) products, tracking developments in material science is arguably the best approach. This is the world of ceramics, resins, alloys, polymers, and a thousand other physical materials–some natural, some man-made–that are used in electronics and many other industries. When you next sit in a dentist’s chair and have a tooth crowned with a resin hardened by a laser, you’re enjoying the fruits of material science closely related to the developments that led to the read/write CD. In fact, just about every form of data storage is a commercial success because of innovations in the materials of the recording medium.
Knowledge of magnetoresistance (the property of materials that change their electrical resistance when magnetized) dates from its discovery by Lord Kelvin in 1856, but practical application took another 115 years, when magnetoresistive sensors were put to work reading computer data from a disk. To achieve reliability under many operating conditions, the magnetic films and the composition of the disk itself required many years of experimentation with a wide variety of materials. This is a typical story of material science development, except that today the choice of materials would be much less arbitrary.
One computer technology currently awaiting advances in material science is the use of holography for data storage. Other forms of storage–floppy, hard disk, CD, DVD–record data on the surface of a disk, which is flat and two-dimensional. What if you could record in three dimensions with data not only side-to-side but also top-to-bottom? By some estimates you’d get a storage medium that would start out with about 400 gigabytes of space, enough to store 100 full-length movies. As the technology improved, a terabyte (one trillion bytes) would be within reach.
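The arithmetic behind those estimates is worth making explicit. A minimal sketch, assuming a 400GB first-generation disk and roughly 4GB per full-length movie (the per-movie figure is my own assumption, not from the estimates themselves):

```python
# Back-of-the-envelope capacity arithmetic for holographic storage.
# The 4 GB-per-movie figure is an illustrative assumption.
GB = 10**9

disk_capacity = 400 * GB   # estimated first-generation holographic disk
movie_size = 4 * GB        # assumed size of one full-length movie

movies_per_disk = disk_capacity // movie_size
print(movies_per_disk)     # -> 100

terabyte = 10**12
print(terabyte / disk_capacity)  # -> 2.5, i.e. 2.5x growth to reach a terabyte
```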
To create a hologram, light from a single laser is split into two beams. One, the reference beam, goes straight to the target–often photographic film, but in this case a disk. The other, the signal beam, first passes through (or reflects off) a pattern representing the digital bits and is then made to intersect the reference beam, producing an interference pattern that has both width and depth. The physics is well understood; the hard part is finding the right material to hold the pattern. It has to be reliable and competitive in price with other storage media. Various plastics are now in trial phases, but it will be several more years before commercial applications appear. Nevertheless, holographic storage is close.
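The interference at the heart of this process can be illustrated numerically. Here is a toy sketch of two coherent plane waves meeting at the recording medium (the wavelength and beam angle are arbitrary illustrative choices, not values from the text):

```python
import numpy as np

# Two coherent beams from the same laser meet at the recording medium.
wavelength = 532e-9                  # assumed green laser, in meters
k = 2 * np.pi / wavelength           # wavenumber
theta = np.radians(10)               # assumed angle between the two beams

x = np.linspace(0, 5e-6, 1000)       # positions across the medium, meters
reference = np.exp(1j * k * 0 * x)            # reference beam, normal incidence
signal = np.exp(1j * k * np.sin(theta) * x)   # signal beam, arriving at an angle

# The medium records the intensity of the combined field: bright and dark
# fringes whose spacing (and, with real data, whose pattern) encodes the bits.
intensity = np.abs(reference + signal) ** 2

fringe_spacing = wavelength / np.sin(theta)   # theoretical fringe period
print(f"fringe spacing: {fringe_spacing * 1e6:.2f} micrometers")
```

The recorded intensity swings between bright fringes (the two waves in phase) and dark ones (out of phase); it is that fixed pattern, frozen into the material, that must survive reliably for the medium to be commercially useful.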
The capacity of holographic storage is prodigious, but who needs the Library of Congress on two disks? Aside from some space savings and marginal convenience, that question has no good answer right now. How do we make intelligent use of such capacity? So what if you can get 100 movies on a holographic disk? Storage media aren’t always selected on capacity alone. Just ask former executives of SyQuest, who lost the removable-storage war despite offering larger-capacity media than their competitors.
Since 1990 we’ve had removable media go from 5.25-inch floppies (1.2MB) to 3.5-inch floppies (1.44MB) to CD-ROM (650MB) to DVD (4.7GB). Soon we will have formats that accommodate 10 to 20GB and feature technologies such as holography; by the end of this decade, such formats should hold up to a terabyte. How do we best use that capacity?
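That trajectory implies a remarkable compound growth rate. A quick sketch, with the start and end points as my own assumptions (a high-density 5.25-inch floppy circa 1990, a terabyte disk roughly twenty years later):

```python
# Removable-media capacity growth over an assumed 20-year span.
start_capacity = 1.2e6   # high-density 5.25-inch floppy, ~1.2 MB, circa 1990
end_capacity = 1e12      # projected 1 TB disk, end of the decade
years = 20               # assumed span

growth_factor = end_capacity / start_capacity
annual_rate = growth_factor ** (1 / years)

print(f"total growth: {growth_factor:,.0f}x")
print(f"implied growth: {(annual_rate - 1) * 100:.0f}% per year")
```

That works out to roughly a doubling of capacity every year, in the same neighborhood as the familiar growth curves for magnetic storage density.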
Laser, laser, burning bright
Einstein predicted the principles of stimulated emission of light, the basis of laser technology, in 1917. The first working laser appeared in 1960. A 43-year gap between theory and working device is common; tack on another 10 to 20 years for large-scale commercial success. Lead times from theory to practice have certainly shrunk since Einstein’s day, but the point is that discovery of the principles doesn’t lead to immediate acceptance. Even after acceptance, a lot of experimentation is required to verify the theory and explore practical applications.
The road to commercial products usually goes through not only technical difficulties but also a host of manufacturing, marketing, and even political hurdles. We’re familiar with the vicissitudes of the process, but it’s amazing how often arrival estimates for new technologies don’t seem to include them. It’s not idle speculation to consider the general environment of a new technology for potential roadblocks due to changing economics, political resistance, social impact, and competition.
Today, laser technology is a primary component of optical-disk storage. It’s also critical to printing and scanning, and its future looks particularly bright in communications. Lasers are used in transmission and switching, and they are a major factor in the move to optical communication systems, including optical-fiber cabling. The communications industry likes to highlight the 10,000-fold improvement in transmission quality since 1965, most of it attributable to optical systems.
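That 10,000-fold figure implies a steady compound improvement. A quick check, assuming a span of roughly 35 years from 1965 to the time of writing:

```python
# Compound annual improvement implied by a 10,000-fold gain over 35 years.
improvement = 10_000   # claimed transmission-quality improvement since 1965
years = 35             # assumed span, 1965 to about 2000

annual = improvement ** (1 / years)
print(f"about {(annual - 1) * 100:.0f}% improvement per year")
```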
However, the problems with communications systems–bandwidth availability, pricing, conglomeration, bad service–have much more to do with politics, competition (or lack thereof), and general economic conditions than with improvements in the technologies involved.
The powerful combination of computers, mathematics, and microscopy has put us on the brink of being able to model, predict, and manipulate some of the most fundamental aspects of the physical world. However, this capability is far from the end of the story. We still have to ask: Are the technologies feasible? If so, are they commercially viable? If so, how long will it take for them to arrive? And by the way, once they arrive will there be some as yet unknown factor that will affect their widespread use? These are questions worth carrying to the next column on software and complexity.