OFDM: Old Technology for New Markets

For years, broadcast wireless networking was limited to practical speeds of
less than 11Mbps. Now, with the advent of Orthogonal Frequency Division Multiplexing
(OFDM), wireless standards like IEEE 802.11a and 802.11g
are pushing the wireless LAN speed limit to 54Mbps and beyond.

It’s not just WLANs; vendors like Flarion and NextNet Wireless are using OFDM to
bring similar speed increases to the metropolitan area network (MAN), improving
on the much more widely deployed Code Division Multiple Access (CDMA)
networks. Indeed, even as CDMA-based third-generation (3G)
wireless data and voice services finally enter the US market, Flarion is looking
to OFDM to drive fourth-generation (4G) wireless.

And it’s not just vendors; the IEEE 802.16a committee is working on
an OFDM-based standard for MANs as well.

While it’s revolutionary to the wireless business, OFDM actually has a long
history dating back to the late 1960s. As Navin Sabharwal, Director of Residential
& Networking Technologies for Allied Business Intelligence, observes,
"OFDM-based products span everything from wireless LAN to mobile access
to digital television."

The principal force driving OFDM’s growing popularity is the demand for faster
wireless technologies, fueled by multimedia applications that require higher
speeds. In particular, In-Stat/MDR‘s Allen Nogee, Senior Analyst, Wireless
Component Technology, thinks, "OFDM has much promise in the future, especially
at the low-cost, high-capacity position."

It’s not all smooth sailing ahead for OFDM. While industry analysts think OFDM
may succeed, it does have problems to overcome. Nogee, for example, comments
that in cellular 4G, "Even if OFDM is proven to be superior technology,
[the existing CDMA-based companies and carriers] aren’t happy to share, and
in cellular, it’s more about politics sometimes than technology."

Worse, despite the efforts of the OFDM Forum, there are numerous technically
incompatible OFDM standards.

Inside OFDM

While OFDM has been around for more than 30 years, and has shown up in such
disparate places as asymmetric DSL (ADSL) broadband and
digital audio and video broadcasts, it’s only now becoming popular for wireless
communications.

To understand why, we need to take a deeper look at how OFDM works.

Typically, OFDM, a multi-carrier transmission technique that gives wireless networking
a new physical (PHY) layer, is implemented in embedded chipsets made up of radio
transceivers, Fast Fourier Transform (FFT) processors, system input/output (I/O),
serial-to-parallel (and parallel-to-serial) converters, and OFDM logic.

In practice, the OFDM chipset bundles data over narrowband carriers transmitted
in parallel at different frequencies. High bandwidth is achieved by using
"parallel subchannels (aka sub-carriers) that are as closely spaced as possible
in frequency without overlapping/interfering," according to Dr. Douglas Jones,
Professor of Electrical and Computer Engineering at the University
of Illinois at Urbana-Champaign. "By being orthogonal, they have no overlap,
and thus do not interfere at all with each other. Orthogonal means that
they are perpendicular, but in a mathematical, rather than a spatial, sense."

This differs from traditional analog modulation by transmitting a digital
signal that’s determined, Jones goes on to say, "by the use of the Fast
Fourier Transform (FFT) algorithm. It can be better because it allows precise
control of all those multiple simultaneous frequencies (carriers) used to simultaneously
carry many data bits in parallel on different frequencies." With the use
of the FFT on both transmitter and receiver, an OFDM system, "naturally
and efficiently spaces the frequencies such that they are as close together
as is possible and yet are orthogonal, so you get maximum bandwidth with no
interference between subchannels."

In addition, to get even better performance, OFDM deployments typically
protect against intersymbol interference
(ISI) by using a redundant symbol extension, the guard interval. Typically,
ISI comes from multi-path delays. Jones explains that this is "the result
of receiving not one, but several copies of the signal, due to multiple reflections
(i.e., multipath), say, off buildings, airplanes, etc., of the transmitted signal.
Since the reflections travel a longer distance to get to the receiver antenna,
they are delayed, hence multipath delay spread." The simple guard interval
protects against multipath distortion without requiring complex
error correction.
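
As a rough illustration of the guard interval idea, the following Python/NumPy
sketch copies the tail of each OFDM symbol to its front as a cyclic prefix,
which the receiver discards before running its FFT. The lengths are illustrative
assumptions, not values from any particular standard.

import numpy as np

N, GUARD = 64, 16                    # FFT size and guard length (illustrative)
symbol = np.fft.ifft(np.random.choice([-1, 1], N))   # one OFDM symbol

# Transmitter: prepend the last GUARD samples as a cyclic prefix.
tx = np.concatenate([symbol[-GUARD:], symbol])

# Receiver: throw away the guard interval before the FFT. Any echo with a
# delay spread shorter than GUARD samples smeared only into those samples.
rx_symbol = tx[GUARD:]
assert np.allclose(np.fft.fft(rx_symbol), np.fft.fft(symbol))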

Jones explains, though, that by its very nature OFDM defeats multipath distortion,
a bane of single-carrier methods like CDMA. OFDM combats it by transmitting
bits in parallel, with each bit being transmitted rather slowly. "For example,"
Jones says, "suppose I want to transmit 1,000,000 bits per second. I could transmit
them one at a time, each taking one microsecond to send. Any delay spread longer
than one microsecond would cause delayed reflections from multipath to overlap
the direct signal for the next bit, thus causing ISI. If instead I transmit
1000 bits in parallel at a time on 1000 separate OFDM subchannels, I can transmit
them 1000 times slower; that is, one millisecond to send them. A multipath delay
spread of 1 microsecond would only overlap 1/1000th of the transmission
interval for any given bit, thus causing hardly any interference at all!"
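
Jones’ arithmetic is easy to check in a few lines; the numbers below come
straight from his example and are not tied to any particular standard.

bit_rate = 1_000_000          # bits per second to transmit
delay_spread = 1e-6           # multipath delay spread: 1 microsecond

# Serial transmission: each bit lasts 1 microsecond, so a 1 microsecond
# echo overlaps 100% of the next bit interval.
serial_bit_time = 1 / bit_rate
print(delay_spread / serial_bit_time)      # 1.0 -> severe ISI

# 1000 parallel subchannels: each bit now lasts 1 millisecond, so the same
# echo overlaps only 1/1000th of a bit interval.
parallel_bit_time = 1000 / bit_rate
print(delay_spread / parallel_bit_time)    # 0.001 -> hardly any interference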

OFDM, though, has to contend with other problems besides multipath distortion.
Two of the most important are frequency offset and phase noise.

These are, at heart, both engineering problems. Both can happen when the receiver’s
voltage-controlled oscillator (VCO) is not oscillating at exactly the same carrier
frequency as the transmitter’s VCO. When the problem is constant, it’s called
frequency offset; when it varies over time, it’s called phase noise, or jitter.
In either case, it causes more errors because the sub-carriers are no longer
orthogonal and can interfere with each other.

The solution that IEEE 802.11a uses is to include a training sequence at the
beginning of every packet and to set aside four of 802.11a’s 52 subcarriers as
"pilot tones," modulated with known data using binary phase-shift keying
(BPSK). These tones let the receiver determine
the frequency offset and phase noise between the transmitter and the
receiver. Once these are known, adjusting the VCO’s frequency and adaptively
correcting for the residual error can deal with the interference.
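
One textbook way a receiver can estimate frequency offset from a repeated,
known training symbol is to measure the phase rotation between the two copies.
The Python/NumPy sketch below illustrates that idea only; it is not the exact
802.11a procedure, and the 5kHz offset is an assumed example value (the 20MHz
sample rate matches 802.11a).

import numpy as np

fs = 20e6                        # sample rate in Hz
N = 64                           # samples per training symbol
offset_hz = 5_000                # the "unknown" offset we want to estimate

training = np.fft.ifft(np.random.choice([-1, 1], N))
tx = np.tile(training, 2)        # send the training symbol twice in a row

# The frequency offset rotates the received signal a little more each sample.
n = np.arange(2 * N)
rx = tx * np.exp(2j * np.pi * offset_hz * n / fs)

# The phase of the correlation between the two copies grows linearly with the
# offset, so it can be inverted to recover the offset in hertz.
corr = np.sum(np.conj(rx[:N]) * rx[N:])
estimate = np.angle(corr) * fs / (2 * np.pi * N)
print(round(estimate))           # ~5000 Hz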

Another problem with OFDM is that, like any multi-carrier system, it has vast
variations between the peaks and valleys of its signal power, measured as the
peak-to-average ratio (PAR). PAR’s large dynamic range poses a real problem for
power amplifier (PA) designs. There are many ways to deal with PAR, and because
of this OFDM implementations tend to use incompatible methods. On packet-based
networks like 802.11a, the approach is simply to limit power output and retransmit
packets if data goes missing, presumably because of a low-powered transmission
caused by a PAR incident.
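
To see why PAR worries amplifier designers, it can be computed for a single
OFDM symbol in a few lines. This Python/NumPy sketch is an illustration, not any
product’s measurement; it simply shows that the peak power of a summed
multi-carrier signal sits well above its average.

import numpy as np

N = 64
symbols = np.random.choice([-1, 1], N)   # one OFDM symbol's worth of BPSK data
signal = np.fft.ifft(symbols)

peak_power = np.max(np.abs(signal) ** 2)
avg_power = np.mean(np.abs(signal) ** 2)
par_db = 10 * np.log10(peak_power / avg_power)
print(f"PAR for this symbol: {par_db:.1f} dB")   # typically several dB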

This also results in another engineering problem that anyone who uses an 802.11a
NIC in a laptop already knows about: OFDM cards eat power like a dad does Halloween
candy after the kids are asleep. While there are efforts afoot to put OFDM on
an electrical diet, any device using OFDM is going to consume more power than
wireless devices that don’t use OFDM for the PHY.

With these problems taken care of, a variety of different encoding methods
can be used to transmit data over the OFDM PHY. In 802.11a, these include BPSK for
6 and 9Mbps, Quadrature Phase Shift Keying (QPSK) for 12 and 18Mbps, and Quadrature
Amplitude Modulation (QAM) for speeds from 24 to 54Mbps.
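
For reference, the full 802.11a rate ladder pairs each of those modulations with
a convolutional coding rate; summarized here as a small Python lookup table:

RATES_80211A = {   # Mbps: (modulation, coding rate)
    6:  ("BPSK",   "1/2"),
    9:  ("BPSK",   "3/4"),
    12: ("QPSK",   "1/2"),
    18: ("QPSK",   "3/4"),
    24: ("16-QAM", "1/2"),
    36: ("16-QAM", "3/4"),
    48: ("64-QAM", "2/3"),
    54: ("64-QAM", "3/4"),
}

for mbps, (modulation, coding) in RATES_80211A.items():
    print(f"{mbps:>2} Mbps: {modulation}, rate-{coding} coding")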

Next: A look at New Uses for OFDM.
