Microprocessor Research Aims At Shattering Speed Records

No end is in sight to blazingly fast advances, say designers at major chip manufacturers.

One year after surviving the year 2000 problem, computer users may be blessed with huge leaps in processing speeds and capabilities. Researchers at semiconductor manufacturers are developing new generations of chips that, in just three years, will offer 15 times as many transistors and compute several times as fast as today’s models.

These advances are likely to continue, generating even greater performance over the subsequent 10 years, semiconductor experts say. Previously predicted technology limits are falling by the wayside as designers push existing manufacturing methods to ever finer microscopic dimensions. In addition, novel experimental fabrication technologies are beginning to move out of the laboratory, further accelerating development.

Designers at leading companies such as Lucent Technologies and Intel admit that ultimate limits exist; however, current technology is not even close to those limits. For the foreseeable future, they predict, Moore’s Law, propounded by Gordon Moore, one of the founders of Intel, will continue to hold: Processor performance will double every 18 to 24 months.
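
The rule is easy to make concrete. The sketch below, in Python, projects transistor counts under the doubling rule; the starting count is the article’s Pentium II figure, while the 21-month doubling period is an assumed midpoint of the quoted 18-to-24-month range, not a figure from the article.

```python
# Moore's Law as stated above: a doubling every 18 to 24 months.
# The 21-month period is an assumed midpoint, not an article figure.

def moores_law(start_count, years, doubling_months=21):
    """Projected count after `years`, doubling every `doubling_months`."""
    return start_count * 2 ** (years * 12 / doubling_months)

pentium_ii_transistors = 7.5e6  # the article's 1998 Pentium II figure
print(f"2011: ~{moores_law(pentium_ii_transistors, 13):.1e} transistors")
# Prints ~1.3e+09: a 21-month doubling lands near the billion-transistor
# mark projected for 2011.
```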

In 1980, a Z-80 central processing unit (CPU) from Zilog was an 8-bit machine that ran at 2 megahertz. Today’s mass-market Pentium II CPUs from Intel run at 450 megahertz and have more than 7.5 million transistors.

Albert Yu, vice president and general manager of Intel’s microprocessor products group, declares, “I believe we are well on our way to making the 100-million-transistor microprocessor a reality by 2001.” And by 2011, he says, processors are likely to have a billion transistors, run at 10 gigahertz and process 100 billion instructions per second.
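
Dividing Yu’s projected instruction rate by his projected clock speed (an inference from his two figures, not a number he states) shows how much work such a processor would have to complete on every cycle:

```python
# Implied instructions per clock cycle, derived from Yu's 2011 targets.
instructions_per_second = 100e9  # 100 billion instructions per second
clock_hz = 10e9                  # 10 gigahertz
print(instructions_per_second / clock_hz)  # 10.0 instructions per cycle
```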

Two major factors determine the performance of a microprocessor: the number of transistors on a chip and the clock speed, the chip’s operating rate. The smaller the transistors, the more that can fit on a chip of a given size. Transistors are shrinking rapidly, say designers, and will continue to shrink for some time.

The number of transistors depends on line width, the designers explain. This is the narrowest dimension of a feature that can be made on the silicon chip. Reducing the line width by half allows four times as many transistors to be placed on the chip, because size is reduced in two dimensions. In the days of the Z-80 and the Intel 8085, line widths were approximately 2 microns. Now they are at 0.25 microns and shrinking fast. Manufacturers talk matter-of-factly of line widths of 0.13 microns, 0.10 microns, or less.

Lucent Technologies has a projection electron beam technology that it believes will make 0.08 microns a reality. If the Intel Pentium II at 0.25 microns has 7.5 million transistors, a chip of the same size at 0.125 microns would have 30 million. This improvement is coming, experts predict.
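
The arithmetic behind these projections is the two-dimensional scaling rule described above. A minimal sketch, using the article’s Pentium II figures:

```python
# Transistor count on a same-size die grows with the square of the
# line-width reduction, because features shrink in two dimensions.

def scaled_count(count, old_width_um, new_width_um):
    """Transistor count after a line-width shrink, same die size."""
    return count * (old_width_um / new_width_um) ** 2

pentium_ii = 7.5e6  # transistors at 0.25 microns, per the article
for width_um in (0.125, 0.08, 0.06):
    print(f"{width_um} microns: ~{scaled_count(pentium_ii, 0.25, width_um):.1e}")
# 0.125 microns -> ~3.0e+07 (the fourfold jump described above)
# 0.08 microns  -> ~7.3e+07
# 0.06 microns  -> ~1.3e+08
```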

According to Jim Boddie, director for technology development at Lucent Technologies’ wireless and multimedia group, “There’s a lot of debate about where it actually has to stop. Bell Labs has fabricated a transistor that you will be likely to see in 2010, a working transistor at 0.06 microns. We call it the nanotransistor [SIGNAL, February 1999, page 19]. It’s really an atomic scale transistor because, if you look at the vertical geometry, the gate—the part that controls the transistor—is only four atoms thick. If we can make models now, by 2010 we’ll certainly be able to do it in large quantities. If you used the technology to build DRAMs [dynamic random access memories, the standard form of memory in today’s computers], you’d get a 64-gigabit DRAM.” Lucent does not manufacture microprocessors, but it does make digital signal processors using the technology.

The technology needed for successively narrower line widths rapidly increases in expense, with chip fabrication facilities now costing well over $1 billion. However, according to Intel’s Seth Walker, worldwide demand is expected to be great enough to keep prices within reason.

Manufacturing chips with extremely small features is a more difficult problem. Today’s technology uses light projected through a mask onto a silicon wafer coated with a resist. The light alters the resist in a manner similar to exposing film. Later, a corrosive substance called an etchant is used to etch lines in the silicon where the resist was exposed to light, producing the desired pattern on the silicon. Thin lines of a conductor, usually aluminum, are used to connect transistors. The process of masking, projecting and etching is repeated many times until the microprocessor is complete.

The laws of physics pose one problem in this process. As line widths approach the wavelength of the light, the resulting lines lose sharpness. The higher the frequency of the light used to expose the resist, the smaller the wavelength and the narrower the line width. According to engineers in the field, technology is reaching the point at which light cannot produce fine enough lines.

However, this is not the end of the progress line. Companies such as Lucent are working on projection electron beam lithography, which uses a beam of electrons instead of light. Electrons have a much smaller wavelength. If electron beam lithography can be adapted to work in a production setting, it will allow much narrower line widths.

Other design questions arise, producing complex trade-offs. Joseph Schutz, a director of microprocessor design at Intel, says that as transistors become more densely packed, the “wires” connecting them, thin traces of aluminum, move closer together. Further, the wires now tend to be more like strips of tape than conventional round wires. They act like the plates of a capacitor, reducing performance. IBM recently announced that it has learned how to use copper instead of aluminum, allowing perhaps a 30-percent increase in clock speed.

Schutz explains that capacitance can be a problem. “The wires today are relatively tall and narrowly spaced, so the side-to-side capacitance is extreme, and you have to worry about signal quality and degradation in performance caused by the resulting delay.

“Say you’re trying to drive a ‘1’ in a wire, and the two wires on each side are driving a ‘0.’ You get coupling and degradation, so you need software design tools that can handle the calculations to minimize it.”

Increasingly, the delay caused by a signal’s travel from one part of a chip to another is an obstacle to higher performance. Given that the speed of light is about one foot per nanosecond, and that signals in a chip travel at perhaps 70 percent of that speed, says Schutz, it may seem strange that signal travel time across a chip the size of a fingernail could be a problem. But after adding the time it takes for transistors to switch and settle into stable states between clock pulses, that delay does in fact become serious.
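
The arithmetic is straightforward. The speed-of-light and 70-percent figures below come from the article; the 2-centimeter wire run and the 1-gigahertz clock are illustrative assumptions:

```python
# Signal travel time across a die versus one clock period.

LIGHT_CM_PER_NS = 30.0                    # ~1 foot per nanosecond
signal_cm_per_ns = 0.7 * LIGHT_CM_PER_NS  # signals move at ~70% of c

run_cm = 2.0      # assumed cross-die wire run
cycle_ns = 1.0    # one clock period at an assumed 1 GHz

delay_ns = run_cm / signal_cm_per_ns
print(f"delay: {delay_ns:.3f} ns = {delay_ns / cycle_ns:.0%} of a 1 GHz cycle")
# ~0.095 ns, nearly a tenth of the cycle, before any transistor
# switching and settling time is added on top.
```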

“If you are wiring a small chip, congestion isn’t a problem because everything is close to everything else,” Schutz says. “But as the chip gets more complex, you actually need more wires in proportion to the size. The things you need to talk to on the processor are less likely to be nearby. We need to make sure that runs between blocks that need to talk to each other are as short as possible and don’t take a scenic route.”

The same laws of physics make it undesirable for signals to travel off the CPU to remote parts of the computer, such as the main memory or the graphics card, for data. The distances involved, even a few inches, are immense at the microprocessor’s native speed. This leads to pressure to put as much of the computer as possible on a single chip. Consequently, says Schutz, as the number of transistors available on a chip grows, there is a tendency to move the slowest components that are currently off the processor onto it. “We tend usually to ask: ‘Is there something inefficient off the chip that needs to be on it?’ At clock speeds of a gigahertz, it doesn’t make sense to me to try to talk to components off the chip, because the speed of light stays the same.”

A good example is the cache memory that holds data for quick use by the processor. Caches have recently been moved onto the chip, taking perhaps inches out of the travel distance. According to Schutz, much of the rapidly growing transistor count will be used to integrate more of the computer’s functions into the CPU. The eventual result will probably be an entire computer on a chip, which would both increase performance and greatly lower assembly costs.

Increased integration of different components on a single chip is not of interest only to makers of CPUs. According to Boddie, “Today we can do the complete cellular baseband portion of a cell phone, everything except the radio, on a single device: DSP [digital signal processor], SRAM [static random access memory], flash memory, input/output, and the analog connect to the radio.”

However, he says, the very complexity of today’s processors is a stumbling block to advancement. As transistor counts increase and wiring becomes more intricate, it rapidly becomes more difficult to design connections and verify that they are correct. Doing so requires very sophisticated software for computer-aided design. “That could be the limit,” he suggests.

Others are less concerned. Schutz says, “I’m a little bit more philosophical about it, partly because I’ve been in the field for a while, partly because we’ve been saying the same thing for almost 15 years. Part of it is like the angst you always have: Yes, the next generation is harder, and yes, you need better software. We’ll find a way to solve the problem.”

Another challenge, according to Intel’s Albert Yu, is testing a vastly complex processor to ensure that it works. Some flaws are obvious, such as a processor that always adds numbers incorrectly. The harder problem, say designers, is a flaw that appears only under certain unusual conditions. Even today, such flaws sometimes are not caught until after a microprocessor has gone to market.

“Testing and compatibility validation are an unbelievably difficult challenge in designs as complex as the ones we’re contemplating for Micro 2011,” Yu declares. “Testing all possible computational and compatibility combinations begins to verge toward the infinite. It’s clear that we need a breakthrough in our validation technology before we can enter the billion-transistor realm.”

A greater number of transistors causes another problem. The more transistors of a given size a chip contains, the greater the power dissipated in operating them, and dissipated power also increases with clock frequency. A Pentium II at 450 megahertz, today’s high end in mass-market machines, dissipates 27.1 watts. As dissipated power increases, so does the temperature, which shortens the working lifetime of chips and, if high enough, actually burns them out. While liquid cooling is possible, it is complex and expensive, and industry experts want to avoid it.

However, lowering the voltage at which the chip operates decreases the dissipated power. Furthermore, the gate, the layer of metal oxide that prevents the passage of electrons when a transistor is off, gets thinner as transistors shrink, so lower voltages are necessary to keep from blowing out the oxide. Consequently, designers strive for lower operating voltages. Says Schutz, “The regime we’re in now is that we pretty much lower the voltage on every processor generation, usually by 20 to 30 percent. The current production parts are running anywhere from 2 volts to 1.6 volts.” Early processors ran at 5 volts.
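
The voltage lever Schutz describes follows from the standard relation for CMOS dynamic power, which scales with the square of the supply voltage and linearly with clock frequency. The formula is textbook physics rather than something stated in the article:

```python
# CMOS dynamic power: P ~ C * V^2 * f (a textbook relation, not an
# article figure). Voltage enters squared; frequency enters linearly.

def power_ratio(v_new, v_old, f_new=1.0, f_old=1.0):
    """Dynamic power of a new design relative to an old one, same C."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

for volts in (2.0, 1.6):
    print(f"{volts} V vs 5 V: {power_ratio(volts, 5.0):.0%} of the power")
# 2.0 V -> 16%, 1.6 V -> ~10% at equal clock speed; higher frequencies
# then claw some of that saving back, linearly.
```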

In principle, everything else being equal, lower voltages also decrease the speed at which the transistors switch. However, for a given voltage, as transistors get smaller, they get faster because the gate is thinner. The trade-offs are complex but, say designers, will be manageable for the foreseeable future.

Says Schutz, “As for ultimate clock speed, it’ll be higher than anybody can even imagine. Moore’s Law is probably not going to stop while I’m at Intel. I’m 45. Clock speed goes up one and a half times per generation. So let’s say you’re at 800 megahertz, which is where we are today, so it’s one and a half to the fifth power. That comes to about 6 gigahertz.”
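
His arithmetic checks out under his own premises:

```python
# Clock speed rising 1.5x per generation, five generations from 800 MHz.
start_mhz = 800
print(f"{start_mhz * 1.5 ** 5 / 1000:.1f} GHz")  # 6.1 GHz, close to his figure
```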