Computer-Aided Design Assumes Greater Burden in Chip Maturation

March 1999
By Fred V. Reed

Developmental software itself ultimately may be the limiting factor in future semiconductor design.

Semiconductor designers are increasing their dependence on computer-aided design and testing to advance microcircuitry beyond the current state of the art. Demand for more and more complex chips has necessitated taking design out of the hands of engineers and into the realm of cyberspace.

The same processor advances that have fueled the information revolution also have enabled substantial software advances in computer-aided design. The two disciplines have fed each other, as more powerful processors enable complex design software that in turn spawns the next generation of computer chips.

Increasingly, however, the laws of physics are weighing in with their own complications. Engineers planning future chip generations now must consider unwanted effects arising from the greater density inherent in greater chip power. These must be factored into chip planning along with conventional design elements.

Designers agree that this reliance on computer-aided design (CAD) for chip development can go on for some time, but not forever. The problem is, nobody knows just when computing will hit the wall.

Engineers must design new microcircuitry even as its complexity rises by leaps and bounds. Clock rates, for example, are pushing toward 1 gigahertz. A Pentium II chip from Intel now has 7.5 million transistors, and chips with 100 million transistors are expected shortly into the next century. Creating these huge circuits involves hundreds of people in countless disciplines, and one error, if strategically located, can render a chip design useless. The task is daunting, especially when pressure is intense to get a new product to market before the competition.

As manufacturers work to manage this rapidly growing complexity, several trends are emerging. First, the dependence of designers on sophisticated CAD software is growing. Chips are described, synthesized, analyzed, debugged and ultimately tested almost entirely with specialized software. Some designers believe that the ultimate limit on the complexity of circuits may lie with the CAD software rather than with physical limits.

Second, the question of how to test a design to determine whether it actually works has become a major problem in itself. Currently, designers cannot test a new microprocessor exhaustively—that is, offer the chip every input it might encounter in real life to assess whether it produces the right answer.

Third, as circuit elements are crammed closer and closer together on chips, electronic interactions between them grow more complex. Without clever chip design, this crowding can lead to malfunctions. The signal that goes into a line may not be the signal that comes out. Transistors can cease to operate as neat, clean switches.

Faced with these problems, designers are turning to innovative software to craft a new and complex chip. They do not begin by laying out circuit elements on a computer screen. This comes later, after many layers of abstraction. Before this, the circuitry is described in a computer language written specifically for this purpose.

Bryan Ackland, head of the digital signal processor and very large scale integration research department at Bell Labs, Murray Hill, New Jersey, the research and development arm of Lucent Technologies, says, “If I were designing the control section of a digital signal processor, I’d probably write a description of the logical function in VHDL (very high speed integrated circuit hardware description language) or in Verilog. Half the industry uses one, and the other half uses the other.”

The result is not circuit diagrams or logic gates connected by data paths, but rather computer code having no obvious relation to anything in the real world.

Chips are designed hierarchically, first at a block-diagram level, then in more detail, down to individual arithmetic and logical operators. After the description is written in a hardware description language (HDL) such as Verilog, says Ackland, “You feed it into a simulator, which is typically software, apply some inputs to it and see what comes out the other side. This lets you know that in fact it’s doing what it’s supposed to do.” This work is usually done on UNIX workstations, says Ackland, but increasingly it is performed on NT platforms.
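The simulate-and-check loop Ackland describes can be sketched in ordinary Python rather than a real HDL simulator. The 4-bit adder model and its testbench below are invented purely for illustration; a design this small can, unlike a full microprocessor, be tested exhaustively.

```python
# A toy version of "feed it into a simulator, apply some inputs and see
# what comes out the other side." The design under test is a hypothetical
# behavioral model of a 4-bit adder with carry-out.

def adder4_model(a: int, b: int) -> tuple[int, int]:
    """Behavioral model: returns (4-bit sum, carry-out)."""
    total = a + b
    return total & 0xF, (total >> 4) & 0x1

def run_testbench() -> bool:
    """Apply inputs, observe outputs, compare against the specification."""
    for a in range(16):
        for b in range(16):
            s, carry = adder4_model(a, b)
            # The specification: sum modulo 16, carry when total >= 16.
            if s != (a + b) % 16 or carry != ((a + b) >= 16):
                return False
    return True

print(run_testbench())  # True: every one of the 256 input pairs checks out
```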

Joe Schutz, director of microprocessor design at Intel Corporation, Santa Clara, California, describes the actual process of design. “The first step in designing a new CPU (central processing unit) is to collect all ideas that would improve performance and rank them in order. Some things that would work require too many transistors, so you eliminate them from the list. That is, you reduce the list to fit in the transistor budget.

“At Intel, we use a lot of our own software. We make, in software, a coarse model of the CPU built of the basic blocks. It’s written in enough detail that you can run code through it and check it. You can play with the design at this stage and make improvements. This doesn’t involve many people, and it’s not expensive. It’s really a proof-of-concept stage.

“The next step is to write it in HDL. This dramatically increases the detail that is coded in the model. This model is useful to make sure that our block-level assumptions make sense once they are implemented in more detail.” This iterative sequence continues in ever greater detail—designing and testing, designing and testing.

Bell Labs’ Ackland explains, “You can think of CAD as consisting of at least two big pieces. One is moving down the hierarchy, generating the actual design. The other is going back up the hierarchy, verifying that the new layer you’ve created is actually equivalent to the higher-level specification. For all the different layers in the design, you can have simulators at different levels of accuracy.”
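Going back up the hierarchy amounts to checking that a newly created lower-level model still matches its higher-level specification. A minimal sketch, using a one-bit full adder as the design under test (both models here are stand-ins for what real CAD equivalence checkers do at far larger scale):

```python
# "Verifying that the new layer you've created is actually equivalent to
# the higher-level specification" -- in miniature, with invented models.

def full_adder_spec(a: int, b: int, cin: int) -> tuple[int, int]:
    """Higher-level specification: binary addition of three bits."""
    total = a + b + cin
    return total & 1, total >> 1  # (sum, carry-out)

def full_adder_gates(a: int, b: int, cin: int) -> tuple[int, int]:
    """Lower-level model built only from XOR, AND and OR gates."""
    s1 = a ^ b
    total_sum = s1 ^ cin
    carry = (a & b) | (s1 & cin)
    return total_sum, carry

# Equivalence check: a 3-input block has only 8 cases, so enumerate them.
equivalent = all(
    full_adder_spec(a, b, c) == full_adder_gates(a, b, c)
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
)
print(equivalent)  # True: the gate-level layer matches the specification
```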

Schutz notes, “As the model for the chip gets more detailed, it runs more slowly in the simulator. Since the simulator is too slow to run all the software we would like, we depend on a more statistical approach to measuring performance. At this stage, snippets of actual commercial software are run on the simulator. These are chosen to be representative of the full software package. You can’t run full programs through the HDL because of the computer time that would be required. There aren’t enough computers in the universe for that.

“Finally we complete the testing with assembly language programs that we write to ensure that both the internal and external operation of the CPU meet our specifications. Since we depend on this model to generate production tester traces, we definitely have to make sure that it’s coded perfectly so that, for every cycle that the clock ticks, all of the pins behave just as expected.”

Logic and timing are distinct problems. A circuit works logically when the right elements are correctly wired together. However, as system clocks reach 1 gigahertz, an electrical signal will have only one-billionth of a second to get where it needs to be and let the circuit settle into a stable state before the next clock pulse arrives. If for any reason—interelement effects, for example, or excessive line length—a signal does not arrive in time, the logically correct circuit will not work.

“When the signals all produce the right patterns, the architects have most of their work behind them,” Schutz says. “At this point, the HDL is divided into hundreds of pieces. The individual sections of HDL are converted to schematics. The HDL description as written does not ensure clock rate performance.

“Suppose for example that the engineer who is writing the HDL needs a 32-bit OR gate (a logic gate that gives a one output if any of its inputs is a one). The definition for this is simple. The software is just a definition: the output and all the inputs in one line of code. But it may not be a practical circuit in real life because OR gates have very poor noise immunity, so the engineer doing the schematics will have to figure out how to do the same thing a different way. A register file that is a few lines of code in HDL, for example, turns into sheets of schematics. It also takes a lot of skill to do this part of design.”
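One way a circuit engineer might restructure that one-line wide OR is as a tree of narrower gates. The sketch below is illustrative only, not Intel’s method, and it models logic, not the noise-immunity problem itself; it simply confirms that a tree of 4-input ORs computes the same function as the flat 32-input definition.

```python
# Schutz's 32-bit OR: one line in HDL, but rebuilt in schematics as a
# tree of narrower gates. All structure here is an invented example.
import random

def or4(a, b, c, d):
    """A 4-input OR gate primitive."""
    return a | b | c | d

def or32_flat(bits):
    """The one-line 'HDL' definition: 1 if any of the 32 inputs is 1."""
    return 1 if any(bits) else 0

def or32_tree(bits):
    """The same function as a tree: eight OR4s, then two, then one OR2."""
    assert len(bits) == 32
    level1 = [or4(*bits[i:i + 4]) for i in range(0, 32, 4)]   # 8 outputs
    level2 = [or4(*level1[i:i + 4]) for i in range(0, 8, 4)]  # 2 outputs
    return level2[0] | level2[1]

# Spot-check equivalence on random vectors plus the all-zeros case.
random.seed(0)
vectors = [[random.randint(0, 1) for _ in range(32)] for _ in range(100)]
vectors.append([0] * 32)
print(all(or32_tree(v) == or32_flat(v) for v in vectors))  # True
```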

A knowledge of many disciplines is needed to design a chip, which is why work on them is so highly collaborative. “A tremendous amount of interaction is involved in design,” Schutz warrants. “Architects understand the software that is run on the CPU and what kind of hardware is needed to run that software efficiently. They trade off performance against die size and power. Circuit engineers understand transistors, device physics and all of the circuit implementation issues. They are masters of the many CAD tools used to design and verify the circuits. Specialized circuit designers also design the caches and other memory elements. There are mask designers who design the physical layout of the CPUs using CAD software. The design of a CPU is a huge task, and it takes many disciplines. No one person can understand all parts of a design,” he continues.

However, designing the CPU is only the first hurdle. The next problem is that as the detail of the design increases stage by stage, so does the difficulty of testing it. In fact, this difficulty grows exponentially with complexity. A simple example would be a 32-bit multiplier, which might see any of 1.8 x 10^19 inputs (2^32 possible values for the multiplier times 2^32 for the multiplicand, giving 2^64 combinations). The number of inputs that an entire microprocessor might encounter dwarfs this already unmanageable number. Even this example understates the magnitude of the problem because it does not take into account the effects of sequence. A CPU that has just completed a series of multiplications will be in a different machine state, with different values in registers, than the same CPU after a series of divisions. An error can occur in one case that would be hidden in another case. Exhaustive testing simply is not possible.
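The arithmetic behind that claim is easy to reproduce. The throughput figure below, a billion test vectors per second, is an assumed number chosen for illustration:

```python
# Why exhaustive testing of even one 32-bit multiplier is hopeless.

inputs = 2**32 * 2**32   # every multiplier value times every multiplicand
print(inputs)            # 18446744073709551616, about 1.8 x 10^19

# Even at an assumed billion vectors per second, exhausting them takes:
seconds = inputs / 1e9
years = seconds / (3600 * 24 * 365)
print(round(years))      # 585 -- centuries, for a single multiplier block
```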

“Since you can’t apply all possible inputs, you have to apply some subset that you think is representative,” says Ackland. “But as chips get more and more complex, the degree to which your test vectors (inputs) represent the overall function of the chip tends to become narrower and narrower. So you have less and less confidence that the chip can really do the job it’s supposed to do.

“The result is that people are putting more emphasis on other means of verification. For example, if you want to prove that the chip is going to run fast enough, rather than giving all possible inputs in the simulator, people will use timing-verification programs that actually look at the gate delays and try to understand whether there is any path in the circuit that isn’t going to make it.”
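A timing verifier of the sort Ackland mentions works by summing gate delays along every path, rather than by simulating inputs. A toy version, with an invented netlist and made-up picosecond delays, finds the worst-case path and compares it with the clock period:

```python
# A toy static timing check: no input vectors, just gate delays and the
# question of whether any path "isn't going to make it." The netlist and
# its delay numbers are fabricated for illustration.
from functools import cache

# Each gate: (delay in picoseconds, list of gates feeding it)
netlist = {
    "in":  (0,   []),
    "g1":  (150, ["in"]),
    "g2":  (200, ["in"]),
    "g3":  (250, ["g1", "g2"]),
    "g4":  (300, ["g2"]),
    "out": (100, ["g3", "g4"]),
}

@cache
def arrival(gate):
    """Latest signal arrival time at a gate's output (longest path in)."""
    delay, fanin = netlist[gate]
    return delay + max((arrival(g) for g in fanin), default=0)

CLOCK_PERIOD_PS = 1000  # a 1-gigahertz clock allows one nanosecond
worst = arrival("out")
print(worst, "ps;", "meets timing" if worst <= CLOCK_PERIOD_PS else "fails")
# prints: 600 ps; meets timing  (worst path: in -> g2 -> g4 -> out)
```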

Once the actual logic of the design has been verified, there is a third problem—electronic interactions between circuit elements. A circuit that is logically impeccable on paper can fail on silicon because of such things as interelement capacitance. The transistor-packing densities of chips expected in the next decade will be so great that these problems can be serious.

“As we go to finer line-widths, the models of transistors that we have traditionally used begin to fail,” Ackland states. “Traditionally, we have thought of transistors as switches. You put a few transistors together and get a well-defined gate, [you put] a few gates together and you get a well-defined register. These lines are blurring because the transistors are becoming less and less like ideal switches.

“The wires are becoming a problem,” he continues. “Parasitic resistance of wires and cross-coupling capacitance break down our old models of how things work. The nice hierarchical picture, in which you can assume that a NAND (not and) gate really operates exactly like a NAND gate under all conditions, just doesn’t work. You can understand how these circuits work at a detailed level by using a very accurate simulator. But the trouble is that when you try to do that with a large number of transistors, the simulation time just blows up on you. A lot of emphasis in the CAD world now is on how we can model signal integrity questions at a fine level of detail without bringing simulation to a grinding halt.”

Intel’s Schutz agrees and says that such electronic effects now determine questions of design. “We’re working on a CPU with a bus that’s 256 bits wide,” he allows. “Since the lines are getting closer together and taller, line-to-line coupling is a large part of the total parasitic load. This creates special problems for the circuit engineers. If an adjacent line couples to a ‘zero’ while the signal is going to a ‘one,’ the signal will slow or, in some cases, not get to a ‘one’ in time. Lines can be spaced apart or shielded by power supply lines to solve these kinds of problems. The trick is to find all of these problems in design, before silicon,” he declares.
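The slowdown Schutz describes can be approximated with back-of-envelope RC arithmetic. All component values below are invented, and treating the coupling capacitance as doubled when a neighbor switches the opposite way is the standard Miller-effect rule of thumb, not a figure from Intel:

```python
# Why an opposite-switching neighbor slows a signal: the line's driver
# sees a larger effective capacitance. All numbers here are assumptions.

C_GROUND = 2.0e-15   # 2 fF from the line to ground (assumed)
C_COUPLE = 1.5e-15   # 1.5 fF to each of two neighboring lines (assumed)
R_DRIVER = 1.0e3     # 1 kilohm effective driver resistance (assumed)

def rc_delay(c_eff):
    """Crude RC delay estimate: 0.69 * R * C (time to reach 50 percent)."""
    return 0.69 * R_DRIVER * c_eff

# Neighbors static: coupling caps count once, like extra ground caps.
quiet = rc_delay(C_GROUND + 2 * C_COUPLE)
# Both neighbors switching the opposite way: coupling caps count twice.
opposite = rc_delay(C_GROUND + 2 * (2 * C_COUPLE))

print(f"{quiet * 1e12:.2f} ps vs {opposite * 1e12:.2f} ps")
# prints: 3.45 ps vs 5.52 ps -- a 60 percent slowdown on these numbers
```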

This approach works for now, and most experts believe that the industry has not come to the end of the line yet. Every advance in computing speed brings more powerful CAD software, offsetting the growth in complexity. The question confronting designers is how long advances in CAD software can keep up.

Ackland believes that it is a reasonable question. “CAD tools have always just tracked our ability to make chips,” he says. “If you look back over the years, our chip design has been as much limited by the CAD tools as by the underlying process technology. The chips we’re building today—we could not have built six or seven years ago, even if we had had the manufacturing technology, because the CAD tools wouldn’t have been good enough.”