Active-Pixel Approach Brings Chip Minicams Into Focus

May 1999
By Fred V. Reed

Battle between imaging systems improves reliability, drives down costs, opens commercial opportunities.

Improved complementary metal-oxide semiconductor imaging technology allows entire video cameras to be integrated on a single chip, promising decreases in the price, complexity and size of cameras. Until recently, the image quality produced by these cameras has been less than ideal; however, the advent of active-pixel chips shows that the needed advancements have arrived and are expanding the technology’s practical applications.

Imaging chips currently reaching the marketplace are far better than those available just a few years ago, industry officials say. Research conducted by government agencies has boosted private sector undertakings. These items are now evolving into products that offer improved quality and bring with them numerous application opportunities.

Photobit Corporation, Pasadena, California, using technology developed at the National Aeronautics and Space Administration’s Jet Propulsion Laboratory, has manufactured a 640 pixel x 480 pixel chip that runs at 30 frames per second with autoexposure. With a pixel size of 7.9 microns x 7.9 microns, the chip features a clock rate of 24 megahertz, offers sensitivity of 1 lux and dissipates 300 milliwatts (mW) of power. Timing and control functions are integrated on the chip, as is analog-to-digital conversion. Frame rate, autoexposure parameters, exposure, window size and location, and a test mode are all programmable by the user.

Complementary metal-oxide semiconductor (CMOS) image sensor technology is relatively new to the imaging game. Until recently, charge-coupled devices (CCDs) have dominated digital imaging; however, the inherently serial nature of their readout is a limitation, according to Dr. Eric R. Fossum, chief scientist, Photobit. In addition, many of the support functions required to put a camera on a chip cannot readily be integrated into a CCD device. “The unique CCD fabrication process precludes cost-efficient integration of on-chip ancillary circuits such as timing generators, clock drivers, signal processors and analog-to-digital converters, so that implementations of a CCD-based camera require an actual set of chips,” Fossum says. “This increases power requirements and retards miniaturization of a camera.” Nonetheless, after 30 years of development, CCDs currently have good dynamic range, low noise and a high degree of responsiveness, he adds.

CMOS has the advantage of being a mainstream technology. Because it is a standard process used to manufacture processor and other logic chips, it is possible to produce the new chips in many existing silicon foundries, and integration of signal processing is easy, Fossum offers.

The two technologies are similar in several aspects. Both sensors feature an array of square or rectangular silicon photodetectors that are built into the face of the sensor chip. Photons falling on these detectors produce electrons. These are integrated, or summed, over a period of time—33 milliseconds for the Photobit chip—corresponding to a video frame rate of 30 frames per second, which is the television standard. The resulting output signal is processed to produce video.

Silicon is well suited to imaging, Fossum says. “Pixels implemented in silicon have an intrinsically panchromatic response to visible and near-infrared, or NIR, photons. A cutoff filter in the optical system [in the lens of the camera, for example] can eliminate the effect of NIR,” which makes the chip’s response closely parallel that of the human eye, he explains.

The chip’s ability to detect light is measured by its quantum efficiency (QE), the ratio of the number of electrons produced by a pixel to the number of incident photons. QE across a pixel is typically 30 percent in the visible range, Fossum says. Because part of the area of a pixel can contain an amplifier to boost the induced signal, the portion of the incident light that falls on the amplifier instead of on the photodetector is lost, reducing the sensitivity of the sensor. The ratio of the light-sensitive region to the area of the pixel is the fill-factor. Microlenses can be placed over the pixel to concentrate incident light on the photodetector, which in effect improves the fill-factor to 75 to 80 percent.
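As a back-of-envelope illustration of how these figures combine, consider the signal a single pixel collects in one exposure. The QE and fill-factor values are the ones quoted above; the photon count is a hypothetical number chosen for the sketch:

```python
# Effective sensitivity of one pixel, using the article's figures.
quantum_efficiency = 0.30   # ~30 percent: electrons per incident photon
fill_factor = 0.75          # with microlenses, 75-80 percent of pixel area

photons_on_pixel = 10_000   # hypothetical photon count during one exposure

# Photons landing outside the light-sensitive region are lost; of those
# that do land on the photodetector, only the QE fraction yield electrons.
electrons = photons_on_pixel * fill_factor * quantum_efficiency
print(electrons)  # 2250.0
```

Without the microlenses, a lower fill-factor would scale the signal down proportionally, which is why the effective improvement to 75 to 80 percent matters.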

Color is handled by microfilters over the pixels, typically red, green and blue, but sometimes cyan, magenta and yellow. The signal from a particular pixel is proportional to the intensity of light of the color determined by its particular filter. The color of light falling on each pixel is later reconstructed by interpolation during signal processing.
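The interpolation step can be sketched with a toy example. The code below assumes an RGGB Bayer-style filter layout and simple neighbor averaging (bilinear demosaicing); the article does not specify which filter pattern or interpolation algorithm any particular chip uses:

```python
# Hypothetical sketch of color reconstruction from a mosaic of filtered
# pixels -- not Photobit's actual algorithm.

def bayer_color(r, c):
    """Filter color at row r, column c in an RGGB Bayer pattern."""
    if r % 2 == 0:
        return 'R' if c % 2 == 0 else 'G'
    return 'G' if c % 2 == 0 else 'B'

def demosaic(raw):
    """Reconstruct full RGB at every pixel by averaging the nearest
    same-color neighbors (3 x 3 neighborhood)."""
    h, w = len(raw), len(raw[0])
    out = [[None] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            rgb = {}
            for color in 'RGB':
                if bayer_color(r, c) == color:
                    rgb[color] = raw[r][c]   # measured directly
                else:                        # interpolate from neighbors
                    vals = [raw[rr][cc]
                            for rr in range(max(0, r - 1), min(h, r + 2))
                            for cc in range(max(0, c - 1), min(w, c + 2))
                            if bayer_color(rr, cc) == color]
                    rgb[color] = sum(vals) / len(vals)
            out[r][c] = (rgb['R'], rgb['G'], rgb['B'])
    return out

raw = [[100] * 4 for _ in range(4)]   # hypothetical uniform gray scene
print(demosaic(raw)[1][1])            # every channel recovers 100
```

Each sensor pixel records one number, but each output pixel carries three, which is the tripling of data volume Fossum mentions later in the article.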

Depending on the chip’s design, the signal from each pixel is read out and processed, on or off the chip, to produce the output picture. However, CMOS and CCD chips conduct readout and processing in notably different ways. In a CCD chip, readout is accomplished by serially shifting the charges in the pixels out of the pixel array, in the manner of the shift registers common in computers. This requires relatively high voltage and can cause blurring and smearing, Fossum says. In addition, it is impossible to read out only a particular region of the image, making such things as electronic windowing, tilting and panning difficult. By contrast, the CMOS chip is amenable to row-and-column addressing of the sort used in dynamic random access memory chips. “Row-and-column decoders [or shift registers] select rows and pixels within the row for read-out,” Fossum explains.
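The contrast Fossum draws can be sketched in a few lines. The frame contents and window coordinates below are hypothetical; the point is that the CCD's serial shift must move every charge out in order, while row-and-column decoders let a CMOS chip deliver only a selected region:

```python
# Sketch of serial (CCD-style) versus addressed (CMOS-style) readout.

def ccd_readout(frame):
    """Serial shift: every pixel leaves the array in raster order."""
    return [p for row in frame for p in row]

def cmos_window(frame, row0, col0, height, width):
    """Addressed readout: decoders select only the requested window."""
    return [row[col0:col0 + width] for row in frame[row0:row0 + height]]

# Hypothetical 4 x 4 frame; pixel value encodes its position as 10*row+col.
frame = [[10 * r + c for c in range(4)] for r in range(4)]
print(ccd_readout(frame))              # all 16 pixels, in order
print(cmos_window(frame, 1, 1, 2, 2))  # [[11, 12], [21, 22]]
```

Reading a small window instead of the full array is what makes electronic windowing, tilting and panning practical on a CMOS sensor.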

However, it is the introduction of active pixels that has improved the quality of CMOS chips. “There are two approaches to pixels: passive and active,” he says. “In a passive pixel, charge collected by the pixel flows out into a column readout wire when an in-pixel switch is selected. An amplifier at the end of the wire converts the sensed charge into a voltage level.” However, this process results in several problems. The capacitance associated with the readout wire leads to induced noise, and passive-pixel designs do not scale well to larger chips or to faster readout, Fossum offers.

“Just a year or two ago, the commercial state of the art was represented by low-performance, monochrome, passive-pixel CMOS image sensors. In a sense, this early introduction of CMOS has hurt the technology’s reputation. These chips were useful only for toys and machine vision, where imaging performance was secondary to on-chip functionality,” he adds.

According to Fossum, active pixels make it a new ball game. The active pixel’s built-in amplifier provides gain to the signal before it reaches the analog signal processor at the bottom of each row, resulting in low noise.

A key step in putting a camera on a chip was designing a suitable analog-to-digital converter (ADC). Adequate speed was a requirement. A 1280 pixel x 720 pixel array running at 60 frames per second produces data at 55.3 megasamples per second. Depending on the chip, a sample runs to about 10 bits; however, less demanding applications, such as teleconferencing or machine vision, produce less data. In addition, the ADC must have a resolution of at least 8 bits so that it can distinguish 256 levels of brightness from each pixel. Combining 8 bits each from the red, green and blue pixels produces 24-bit color, which is the industry standard. Finally, the converter must dissipate little power, preferably less than 100 milliwatts (mW), Fossum says, to avoid “hot spots” on the chip; when transistors heat up, dark current tends to increase, adding noise to the image.
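The arithmetic behind these figures is easy to verify; the numbers below are the ones the article itself quotes:

```python
# Sample rate for a 1280 x 720 array at 60 frames per second.
pixels_per_frame = 1280 * 720
samples_per_second = pixels_per_frame * 60
print(samples_per_second / 1e6)   # 55.296 -> about 55.3 megasamples/s

# 8 bits of ADC resolution distinguish 2**8 brightness levels, and
# 8 bits each for red, green and blue gives the 24-bit color standard.
print(2 ** 8)   # 256
print(3 * 8)    # 24
```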

Two approaches to the camera-on-a-chip exist, each aimed at different applications. In the first approach, which Fossum says is advocated by Intel, Photobit and Kodak, color interpolation and image compression are done off-chip either in a host computer or in other chips. This allows flexibility and is also important because of bandwidth limitations of interface standards such as FireWire and universal serial bus. “Video chips produce a large amount of data, and color interpolation triples data volume,” Fossum points out.
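A rough calculation shows why those interface limits bite. The arithmetic below is mine, not the article's, and assumes the Photobit 640 x 480 sensor at 30 frames per second with 8-bit samples:

```python
# Raw output of a hypothetical 640 x 480, 30 frames/s, 8-bit sensor.
raw_bits_per_s = 640 * 480 * 30 * 8
print(raw_bits_per_s / 1e6)       # 73.728 Mbit/s before interpolation

# Color interpolation triples the data volume, as Fossum notes.
print(3 * raw_bits_per_s / 1e6)   # 221.184 Mbit/s after interpolation
```

For comparison, the universal serial bus of the era carried 12 megabits per second and FireWire 400 megabits per second, so even the raw stream overwhelms USB, and interpolating on-chip would strain FireWire as well.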

The second approach favors integrating as much of the necessary signal processing as possible onto the sensor chip and will be useful in applications in which miniaturization is important.

Possible applications of cameras-on-chips go beyond the obvious. Fossum tells of a paper presented by Photobit at an Institute of Electrical and Electronics Engineers conference that describes what he says is probably one of the world’s largest commercial CMOS chips, larger than 37 millimeters x 26 millimeters. Placed in a dental patient’s mouth during an X-ray, the chip captures an image of the patient’s teeth.

“The history of microelectronics teaches us that integration leads to greater reliability, lower system-power requirements, and plummeting cost/performance ratios,” Fossum offers. He expects a battle between CCDs and CMOS imaging systems as CCD manufacturers cut prices and seek to improve performance, and CMOS makers capitalize on their technology’s inherent advantages.