

Traditionally, we still call it MAPSD, even though strictly speaking it is ML symbol detection. In general, ML detection can be viewed as a special case of MAP detection in which the symbol or sequence values are assumed equi-likely.
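As a rough illustration of this relationship, the following Python sketch (assuming BPSK symbols in additive Gaussian noise, which is not necessarily the model used elsewhere in this chapter) compares a MAP decision, which weights the likelihood p(r|s) by the prior P(s), with an ML decision that treats the two symbol values as equi-likely.

```python
import numpy as np

# Sketch only: BPSK over AWGN is assumed here for illustration.
def map_detect(r, priors, noise_var=1.0):
    """Return the symbol in {+1, -1} maximizing p(r|s) * P(s)."""
    symbols = np.array([+1.0, -1.0])
    likelihoods = np.exp(-(r - symbols) ** 2 / (2 * noise_var))
    metrics = likelihoods * np.asarray(priors)
    return symbols[np.argmax(metrics)]

def ml_detect(r, noise_var=1.0):
    """ML detection: MAP detection with equi-likely symbol values."""
    return map_detect(r, priors=[0.5, 0.5], noise_var=noise_var)

# With a skewed prior, MAP and ML can disagree on a borderline observation.
r = -0.1
print(map_detect(r, priors=[0.8, 0.2]))  # prior pulls the decision toward +1
print(ml_detect(r))                      # equal priors: decision follows the sign of r
```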

7.1.2 Soft information

So far, we have concentrated on recovering the binary symbols being transmitted. In reality, we are interested in the underlying information being sent. Often a code is used that relates the information to the transmitted symbols. Consider the Alice and Bob example in Table 1.1, in which there are three possible messages. Notice that two binary symbols are used to represent the message. Because two binary symbols can represent up to four messages, there is one pattern that is not used (s_1 = -1, s_2 = +1). This fact can be used when decoding the message.

In the Alice and Bob example, when we perform MAPSD, the detected values are s_1 = -1, s_2 = +1, an invalid combination. That means there is an error somewhere in the detected sequence. But where? To answer this question, we need to know more than just the detected symbol values. We need to know how confident we are of each symbol value.

Soft information generation is about assigning a confidence level or likelihood to each symbol value. Ideally it is a function of the likelihood that a symbol equals a certain value, given the received signal (e.g., Pr{s_2 = +1 | r}). But wait a minute. In the previous subsection, we used symbol metrics related to symbol likelihoods in MAPSD. In fact, they were proportional to symbol likelihoods. Thus, if we are using MAPSD, then we can use the symbol metrics as the soft information!

7.1.2.1 Using soft information  But what do we do with the soft information? Consider the Alice and Bob example in Table 1.1. Suppose we use MAPSD and form the symbol metrics given in Table 7.1. Notice that the symbol metrics for the two symbol values do not add to one. That is because we used metrics proportional to a posteriori likelihoods. Though not necessary, we could normalize each pair by its sum to get likelihoods that add to one.
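A minimal Python sketch of this optional normalization step, using the metric values from Table 7.1, might look as follows; the dictionary layout and names are illustrative only.

```python
# Normalize each pair of symbol metrics (values from Table 7.1) by its sum,
# so that the two entries for a symbol can be read as probabilities adding to one.
metrics = {
    ("s1", +1): 0.96975, ("s1", -1): 1.4529,
    ("s2", +1): 1.5149,  ("s2", -1): 0.90783,
}

normalized = {}
for sym in ("s1", "s2"):
    total = metrics[(sym, +1)] + metrics[(sym, -1)]
    normalized[(sym, +1)] = metrics[(sym, +1)] / total
    normalized[(sym, -1)] = metrics[(sym, -1)] / total

print(normalized)
# s1: +1 -> 0.400, -1 -> 0.600;  s2: +1 -> 0.625, -1 -> 0.375 (approximately)
```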

To decode the message using the soft information, we form message metrics by multiplying the symbol metrics corresponding to a particular message. For example, to form a metric for message 1, we would multiply the symbol metrics L(s_1 = +1) = 0.96975 and L(s_2 = -1) = 0.90783, giving 0.8804. We would need to form metrics for all possible messages, as shown in Table 7.2. The decoded message is the one with the largest metric, in this case message 3, which is correct.

Table 7.1  Example of MAPSD symbol metrics

Symbol Metric    Value
L(s_1 = +1)      0.96975
L(s_1 = -1)      1.4529
L(s_2 = +1)      1.5149
L(s_2 = -1)      0.90783
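The following Python sketch reproduces this decoding step from the Table 7.1 values. Only the symbol pattern for message 1 (s_1 = +1, s_2 = -1) is stated in the text; the patterns assigned to messages 2 and 3 here are assumptions chosen to cover the remaining valid combinations.

```python
# Decode a message from soft information: multiply the symbol metrics
# (Table 7.1) belonging to each candidate message and pick the largest product.
L = {
    ("s1", +1): 0.96975, ("s1", -1): 1.4529,
    ("s2", +1): 1.5149,  ("s2", -1): 0.90783,
}

# Valid messages; the unused pattern (s1 = -1, s2 = +1) is excluded.
messages = {
    1: (+1, -1),   # pattern given in the text
    2: (-1, -1),   # assumed assignment
    3: (+1, +1),   # assumed assignment
}

# Message metric = product of the symbol metrics for that message's pattern.
message_metrics = {
    m: L[("s1", s1)] * L[("s2", s2)] for m, (s1, s2) in messages.items()
}

decoded = max(message_metrics, key=message_metrics.get)
print(message_metrics)  # {1: 0.8804, 2: 1.3190, 3: 1.4691} (approximately)
print(decoded)          # 3 -- the message with the largest metric
```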
