Turbo Codes Offer Broadcasting At Near-Channel Capacity

New family of algorithms provides breakthrough in digital satellite and radio communications.

Engineers are using a new class of algorithms capable of encoding and decoding communications at speeds close to transmission channel maximum capacities, a feat that has eluded engineers since the 1940s when a theoretical limit to channel capacity was first defined. Under development since the early 1990s, these algorithms are now being tested in proof-of-concept devices.

Known as turbo codes, these algorithms were developed at the Ecole Nationale Supérieure des Télécommunications (ENST), Brest, France. The class of iterative decoding algorithms provides error-correction performance at near-channel capacity and functions like a turbocharger in an engine. The incoming data stream is divided into blocks, and each block is encoded twice with a convolutional encoder: once in its natural order and once after being scrambled by a random interleaver. The decoder first works on the unscrambled stream to produce correction metrics, which are then interleaved and used to decode the scrambled stream. To provide optimal performance, this process is repeated over several iterations until the remaining errors have been found and corrected.
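The parallel-concatenated arrangement described above can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical example; the memory-2 recursive systematic convolutional component code, its generator polynomials and the function names are illustrative choices, not ENST's actual design.

    import random

    def rsc_parity(bits):
        # Toy memory-2 recursive systematic convolutional (RSC) encoder:
        # feedback polynomial 1 + D + D^2, feedforward 1 + D^2.
        s1 = s2 = 0
        parity = []
        for b in bits:
            a = b ^ s1 ^ s2        # recursive feedback term
            parity.append(a ^ s2)  # parity output bit
            s1, s2 = a, s1         # shift the register
        return parity

    def turbo_encode(bits, perm):
        # Parallel concatenation: transmit the systematic (original) bits,
        # parity computed on the natural order, and parity computed on the
        # interleaved order -- roughly a rate-1/3 code.
        parity1 = rsc_parity(bits)
        parity2 = rsc_parity([bits[i] for i in perm])
        return bits, parity1, parity2

    random.seed(1)
    message = [random.getrandbits(1) for _ in range(16)]
    perm = list(range(len(message)))
    random.shuffle(perm)
    systematic, p1, p2 = turbo_encode(message, perm)
    # A matching decoder would run two soft-in/soft-out decoders in a loop,
    # exchanging extrinsic information through the same permutation.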

Turbo codes provide potential advantages across many applications. For portable communications devices where power saving is important, such as cellular telephones or tactical radios, turbo codes allow increased energy efficiency without compromising performance. In larger systems such as satellite ground stations, the algorithms offer high bandwidth efficiency and reliable operation at low signal-to-noise ratios.

The Shannon limit, which defines the maximum rate at which information can be transmitted reliably over a channel with a given bandwidth and signal-to-noise ratio, has long posed a problem for engineers seeking more efficient communications. Depending on the application, steps such as saving power while increasing transmission performance or bandwidth efficiency are vital to the development of new communications equipment. Turbo codes offer the possibility of serving both ends of this requirement spectrum.
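In its standard form, Shannon's 1948 theorem gives the capacity C of a channel with bandwidth B and signal-to-noise ratio S/N as

    C = B \log_2 \left( 1 + \frac{S}{N} \right)

No code can transmit reliably above this rate, and the original ENST results reported turbo code performance within less than a decibel of the bound.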

Turbo codes had remained a theoretical possibility since they were first described in an ENST paper in 1993. Only in the past year have these algorithms begun to be tested in potential products.

Turbo codes fall into several classes. Those originally described by ENST were convolutional codes with either serial or parallel concatenation. Another class, known as turbo product codes, is much simpler to implement than the convolutional variety but offers lower performance.

Adaptive Broadband, Sunnyvale, California, a supplier of terrestrial and satellite-based communications equipment, has built a prototype satellite modem using turbo codes. Eric Jacobsen, a principal member of the firm’s technical staff, believes this is among the first proof-of-concept devices built to operate with the algorithms. The challenge was to develop a code that represents the best compromise among all the possible turbo code permutations for satellite communications, he notes.

“Everybody knows that this channel limit exists, and there’s all this performance [capacity] left for people to use that nobody could figure out how to get to. The assumption before was that the closer you wanted to get to capacity, the higher the complexity of the decoder,” Jacobsen explains. Codes exist that predate turbo codes and can operate at capacity, but they were extremely complicated to decode and impractical to implement. The big revolution behind the turbo code is that it is essentially a simple code and a somewhat complicated architecture that is practical to put in use and gets results very close to capacity, Jacobsen says.

The prototype was developed from an existing modem, with the primary modification being the extension of its frame to fit in an extra fan. The modem board was altered, and an additional card was inserted, but Jacobsen notes this was not a major redesign. Development was initiated last year, with laboratory testing beginning this past January. He explains that debate over the most efficient approach to concatenation, which arose after the turbo codes paper was released, delayed development until the issue was resolved. “It took a couple of years for things to settle down to the point where it was clear to us that there wasn’t going to be one or a narrow set of codes coming out of this,” he acknowledges.

Tests with the prototype indicate that turbo codes have certain advantages in power and bandwidth efficiency when compared with widely used schemes such as concatenated Viterbi-Reed-Solomon coding. Less power can be used while achieving exactly the same bit-error-rate performance. Jacobsen observes that this power advantage is useful for small systems such as portable communications.

Turbo codes can also be used for applications where bandwidth efficiency is necessary. For applications that are not particularly limited by power, high-rate turbo codes offer high bandwidth efficiency with good forward-error-correction protection and low code-rate overhead, he adds.

Turbo codes can also be tailored to suit a user’s needs. Adaptive Broadband developed six separate codes so users can select between power and bandwidth efficiency. All of these codes are stored in the prototype and can be selected from the front panel or from a remote port.

Though the proof-of-concept model is bulky, Jacobsen says the firm can build a smaller version. The prototype features 20 field programmable gate array (FPGA) chips and a number of supporting memories. He notes that this can be reduced to three FPGAs and two memory components in a unit that is probably capable of performing better than the current prototype. While this would be very economical and space efficient, he believes that the next logical developmental step is to move to an application-specific integrated circuit, which would be a single chip-specific solution.

However, turbo codes have their drawbacks in the form of a throughput delay that can be annoying in synchronous communications such as voice or videoconferencing. Jacobsen explains that there is a pause associated with concatenated Viterbi-Reed-Solomon codes as well because of a natural processing delay through all the decoders. But turbo codes are more complex than the traditional solutions and consume more time, he says.

Performance is proportional to the size of the interleaver. The bigger the interleaver between two codes, the better the performance generally is, Jacobsen maintains. But the throughput delay will also be longer because the code has to iterate several times through the interleaver. Because of this, there is a trade-off between how close to capacity one wishes to get and how much one can tolerate delay.

Jacobsen explains that applications such as broadcast video are not particularly delay-sensitive, but problems arise in synchronous communications, where a 1- or 2-second delay becomes noticeable. Delays are particularly apparent in low-data-rate voice transmissions. One way around this is to transmit at a higher data rate of 1 to 2 megabits per second, because the delay then becomes only a small fraction of the overall transmission time. For example, with a 64-kilobit-per-second transmission and an interleaver with a memory of 8 kilobits, one pass through the interleaver causes an eighth of a second of delay, which is significant at that rate. If the same message travels over a roughly 2-megabit-per-second carrier such as a T1 or E1 line, the 8-kilobit pass adds only a tiny fraction of that time. Because of these constraints, he contends that turbo code is probably not an optimal solution for users with applications such as low-rate voice on a single channel.
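Jacobsen's arithmetic can be checked in a few lines. This hypothetical Python fragment computes the single-pass interleaver delay at the rates he cites; the T1 and E1 figures are the standard 1.544 and 2.048 megabits per second, and each decoding iteration adds another pass.

    def interleaver_delay_s(interleaver_bits, rate_bps):
        # One pass through the interleaver takes its size in bits
        # divided by the channel rate in bits per second.
        return interleaver_bits / rate_bps

    for label, rate in [("64 kbit/s", 64_000),
                        ("T1 (1.544 Mbit/s)", 1_544_000),
                        ("E1 (2.048 Mbit/s)", 2_048_000)]:
        delay = interleaver_delay_s(8_000, rate)
        print(f"{label}: {delay * 1000:.1f} ms per pass")
        # 64 kbit/s -> 125.0 ms; T1 -> 5.2 ms; E1 -> 3.9 ms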

Another drawback common to turbo codes is that they are complicated and require more memory resources. This is largely an engineering issue, however, because increasing chip speeds and densities are quickly reaching the point where it is practical to put a turbo code on a single chip, Jacobsen notes.

Because an interleaver is a block of memory that has data written into it in a particular order and read out in a different order, sequencing this information properly is critical for a turbo code, Jacobsen says. A Viterbi-Reed-Solomon code uses a rectangular interleaver. In this configuration, the memory component is arranged in what can be described as a matrix—rows and columns of locations. The data is recorded in rows and read in columns. It is simple to implement this type of code, he claims, but this does not work well with a turbo code because there is too much structure to the memory. Jacobsen notes that turbo codes require an interleaver with a “very random-looking memory” orientation and adds that though it may sound odd, part of the process of designing a turbo code is selecting a good random interleaver.
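The contrast Jacobsen draws can be sketched in a few lines of Python. The rectangular interleaver writes rows and reads columns, while the turbo interleaver is a fixed pseudo-random permutation shared by encoder and decoder; the function names and seed here are hypothetical.

    import random

    def rectangular_interleave(bits, rows, cols):
        # Write the data row by row, then read it out column by column,
        # as in a Viterbi-Reed-Solomon interleaver.
        assert len(bits) == rows * cols
        return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

    def random_permutation(n, seed=42):
        # A fixed pseudo-random permutation; encoder and decoder must
        # agree on it, so it is generated from a shared seed.
        perm = list(range(n))
        random.Random(seed).shuffle(perm)
        return perm

    data = list(range(12))
    print(rectangular_interleave(data, 3, 4))  # structured reordering
    perm = random_permutation(12)
    print([data[i] for i in perm])             # "random-looking" reordering

In practice a designer searches for permutations with good spread properties rather than accepting an arbitrary shuffle, which is the selection process Jacobsen alludes to.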

Aside from Adaptive Broadband’s prototype satellite modem, Jacobsen sees a variety of uses, including both satellite and terrestrial broadcasts, and the cellular telephone industry is also considering turbo codes. However, he believes a different turbo code would be needed because it is a different application, and cellular telephones are more delay-sensitive. Another advantage for cellular telephones, which are often limited by their power output, would be the superior power efficiency of turbo codes, he says.

Turbo codes also have a number of applications in video and data broadcasts through satellite modems. Users of those applications are more interested in bandwidth than in power savings, Jacobsen maintains.