
detecting the wrong sequence. Suppose we have a different performance criterion. Suppose we want to minimize the chance of detecting the wrong symbol value for a particular symbol. Does MLSD minimize that as well?

The answer is no. To minimize the chance of making a symbol error for s2, it is not enough to find the best sequence metric. We must consider all the sequence metrics and divide them into two groups: one corresponding to s2 = +1 and one corresponding to s2 = -1. We use the first group to form a symbol metric associated with s2 = +1, denoted L(s2 = +1), and the second group to form a symbol metric associated with s2 = -1, denoted L(s2 = -1). The larger of these two metrics indicates which symbol value to use for the detected value.

Consider the Alice and Bob example and suppose we wish to detect symbol s2. The sequence metrics are given in Table 6.1. Combinations 1, 3, 5, and 7 correspond to the hypothesis s2 = +1. Combinations 2, 4, 6, and 8 correspond to the hypothesis s2 = -1.

So what do we use for the symbol metric? It turns out that we want to sum the likelihoods of the different sequences, which ends up being a sum of exponentials, with exponents related to the sequence metrics. Specifically, each exponent is the negative of the sequence metric value divided by twice the noise power (200). Thus,

L(s2 = +1) = e^(-40/200) + e^(-468/200) + e^(-436/200) + e^(-144/200)    (7.1)
           = 0.8187 + 0.0963 + 0.1130 + 0.4868 = 1.5149                  (7.2)
L(s2 = -1) = e^(-680/200) + e^(-388/200) + e^(-1076/200) + e^(-64/200)   (7.3)
           = 0.0334 + 0.1437 + 0.0046 + 0.7261 = 0.9078,                 (7.4)

where e is Euler's number, approximately 2.71. Observe that the first metric is larger, so the detected value would be s2 = +1.
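
As a quick arithmetic check, the two symbol metrics can be recomputed in a few lines. The following is a minimal Python sketch, not code from the text; the sequence metric values are read off the exponents in (7.1) and (7.3), and the variable names are illustrative.

```python
import math

# Sequence metrics from Table 6.1, grouped by the hypothesis on s2:
# combinations 1, 3, 5, 7 assume s2 = +1; combinations 2, 4, 6, 8 assume s2 = -1.
metrics_plus = [40, 468, 436, 144]
metrics_minus = [680, 388, 1076, 64]
TWO_SIGMA_SQ = 200.0  # twice the noise power used in the text

def symbol_metric(sequence_metrics):
    """Sum of sequence likelihoods: exp(-metric / (2 * noise power))."""
    return sum(math.exp(-m / TWO_SIGMA_SQ) for m in sequence_metrics)

L_plus = symbol_metric(metrics_plus)    # about 1.5149, matching (7.2)
L_minus = symbol_metric(metrics_minus)  # about 0.9078, matching (7.4)

detected_s2 = +1 if L_plus > L_minus else -1
print(L_plus, L_minus, detected_s2)     # the larger metric gives s2 = +1
```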

So where did this metric come from? It is related to the likelihood or probability that s2 takes on a certain value, given the two received values. Before obtaining the received values, we would assume the likelihood of s2 being +1 or -1 is 0.5. This is referred to as the a priori (Latin) or prior likelihood, sometimes denoted P{s2 = +1}. It is the likelihood prior to receiving the data (prior to the channel). The likelihood after receiving the data is referred to as the a posteriori likelihood, sometimes denoted P{s2 = +1|r}, where r denotes the received values. As the metric is related to the likelihood of the symbol taking on a certain value after obtaining the received samples, this form of equalization is referred to as maximum a posteriori (MAP) symbol detection (MAPSD). We have to be careful, because the "S" in MAPSD refers to "symbol," whereas it refers to "sequence" in MLSD.
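
One way to see the connection, sketched here with the standard Bayes relation rather than quoted from the text, is to write the posterior probability as

P{s2 = +1|r} = P{s2 = +1} p(r|s2 = +1) / p(r).

Assuming Gaussian noise and equally likely values for the other symbols, p(r|s2 = +1) is proportional to the sum of e^(-metric/200) over the four sequences with s2 = +1, which is exactly L(s2 = +1). Comparing L(s2 = +1) with L(s2 = -1) therefore amounts to comparing the two posterior probabilities.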

There is a second aspect of MAPSD that needs to be mentioned. There is a way in the symbol metric to add prior information about the symbols. For example, suppose we are told that the probabilities associated with the symbol being +1 and -1 are 0.7 and 0.3 (instead of 0.5 and 0.5). We can take advantage of this information by including it in the metric.
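
One standard way to fold in such priors is to weight each group's summed likelihood by the prior probability of the symbol value it hypothesizes before comparing. The sketch below reuses symbol_metric and the metric lists from the Python example above; the weighting shown is the usual MAP formulation, not a formula taken from this text.

```python
# Hypothetical priors: P{s2 = +1} = 0.7, P{s2 = -1} = 0.3.
prior_plus, prior_minus = 0.7, 0.3

# Scale each summed likelihood by its prior, then compare as before.
L_plus_map = prior_plus * symbol_metric(metrics_plus)     # 0.7 * 1.5149
L_minus_map = prior_minus * symbol_metric(metrics_minus)  # 0.3 * 0.9078

detected_s2 = +1 if L_plus_map > L_minus_map else -1
```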

In general, MAP approaches incorporate prior information whereas ML approaches do not. Thus, there exists MAP sequence detection, which has a metric similar to MLSD, except there is a way to add prior information. Also, there exists ML symbol detection, in which prior information is not included. You may have noticed that our example above did not explicitly include prior information.
