Two emerging technologies could change the way we compute.
I’ve been so caught up in current events lately that I haven’t taken the time to write down my thoughts about the future. In the current crop of computing technology, there is nothing all that new. Basic PC architecture is the same as it’s ever been: slow and painfully kludgy. The microprocessor performs one operation at a time, reading and writing from memory that needs to be continually refreshed, and making copies to magnetic media. The only way to boost performance is by adding more transistors (or processors), speeding up the clock, and growing the size of the RAM and magnetic media.
These problems are well known to technologists, but not so well known to the public. While consumers are delighted by all the things their computers can do, technology companies have been working to create alternative architectures. Here I want to briefly mention two technologies that will change the very nature of computing in the future. Consider this a jumping-off point for your own research on these topics.
As I said, microprocessor architecture is limited. Performance is a question of how many bits (32 or 64) and how many hertz. Sure, some chips have additional instruction sets for certain functions, but they all execute one instruction at a time. You can speed things up by adding processors, or by linking PCs into massively parallel clusters, but each processor still performs only one instruction at a time.
Enter field programmable gate arrays (FPGAs). FPGAs do two things differently from microprocessors: They perform multiple operations in parallel, and they can be reconfigured to do different things on the fly (hence the name). Right now their use is limited to high-end Web servers, which must process thousands of transactions at a time, and specialty systems, which need to be configured after production. Part of the reason for this is production cost; part of it is the multipurpose nature of processors. While microprocessors can perform any task, FPGAs must be programmed to perform particular tasks. So FPGAs won’t replace microprocessors. But their use will grow to the point where microprocessors and FPGAs work in concert to boost performance for certain common tasks and to ease configuration and integration issues.
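The difference between the two models is easiest to see in a toy sketch. The Python below is my own illustration, not real hardware: a processor-style loop that handles one operation per “cycle,” versus an FPGA-style array that evaluates everything in a single cycle and can be rewired to a different operation on the fly.

```python
# Toy illustration (not real hardware): a processor executes one
# operation per cycle; an FPGA's gates all evaluate in parallel,
# and the fabric can be reconfigured to compute something else.

def cpu_style(inputs):
    """Sequential: each addition takes its own 'cycle'."""
    cycles = 0
    results = []
    for a, b in inputs:
        results.append(a + b)  # one operation...
        cycles += 1            # ...per cycle
    return results, cycles

def fpga_style(inputs, operation):
    """Parallel: every pair is processed in the same 'cycle', and the
    operation itself can be swapped out (reconfigured) at any time."""
    results = [operation(a, b) for a, b in inputs]  # all at once
    return results, 1

pairs = [(1, 2), (3, 4), (5, 6), (7, 8)]
print(cpu_style(pairs))                       # ([3, 7, 11, 15], 4)
print(fpga_style(pairs, lambda a, b: a + b))  # ([3, 7, 11, 15], 1)
print(fpga_style(pairs, lambda a, b: a * b))  # ([2, 12, 30, 56], 1)
```

The last line is the “reconfigured” case: same fabric, different wiring, still one cycle.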
The problem with conventional RAM is that it doesn’t hold its charge. A computer has to refresh its memory thousands of times per second to keep the contents intact. Power down the computer and you lose everything in memory. That’s why every time you turn on a computer, you have to sit through a long boot sequence as the operating system is loaded into memory. There are instant-on computers; I had one myself. Remember Apple’s eMate, the mini laptop built on the Newton architecture? I’d be typing on the bus, come to my stop, close the lid, and go home. Open the lid (which turned the computer on and off) and what I was typing would be there waiting for me. No booting up, no saving, no worrying about data lost in crashes. Today’s instant-on computers use flash memory, a kind of memory that keeps its charge without power. The trouble is, flash is very limited and, for a variety of reasons, can’t be used in anything but handheld systems. Even there, it consumes vast quantities of power.
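To make the refresh problem concrete, here is a toy Python model (again my own illustration, not a real memory controller): each RAM cell is a leaky capacitor, and a stored bit survives only as long as something keeps topping the charge back up.

```python
# Toy model of a leaky DRAM cell: a 1 is charge on a tiny capacitor,
# and that charge bleeds away unless the bit is constantly rewritten.

class DRAMCell:
    def __init__(self, bit):
        self.charge = 1.0 if bit else 0.0

    def leak(self):
        self.charge *= 0.8          # charge bleeds away every tick

    def read(self):
        return 1 if self.charge >= 0.5 else 0  # too little charge reads as 0

def run(ticks, refresh):
    cell = DRAMCell(1)              # store a 1
    for _ in range(ticks):
        cell.leak()
        if refresh:                 # the controller rewrites the bit at full strength
            cell.charge = 1.0 if cell.read() else 0.0
    return cell.read()

print(run(10, refresh=True))   # 1 -- constant refreshing keeps the bit alive
print(run(10, refresh=False))  # 0 -- stop refreshing (or cut power) and it's gone
```

In this model the bit fades within a few ticks without refresh, which is why the controller in a real machine never gets to rest.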
Enter magnetic random access memory (MRAM). Because it stores data magnetically rather than as an electrical charge, MRAM keeps its contents while consuming hardly any power at all. A variety of obstacles keep MRAM out of your next handheld or laptop device, and most of them are economic: Right now it costs a lot more to produce MRAM “chips” than conventional electronic RAM. But MRAM is already flying aboard lots of expensive military devices that need low-power, instant-on capabilities. As its use grows in that sector, the price will come down for the consumer sector. Imagine sitting down at your desk, turning on your computer, and getting right to work. What a concept!
For more information:
James Mathewson is editor of ComputerUser magazine and ComputerUser.com