
Fig. 12 shows the behavior of the optimum parameter ν as a function of the SNR. The linear dependence at high SNR is not surprising, because, for a fixed noise power σ², an SNR increase corresponds to an increase of the signal subspace eigenvalues. Consequently, the correction should scale in a similar manner. On the contrary, when the SNR is low, the parameter ν has to compensate for the channel estimation errors too, and therefore the linear dependence is no longer satisfied.
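This scaling can be checked with a toy numerical experiment. The sketch below is a minimal illustration, not the setup of Fig. 12: the random signature matrix A, the dimensions MN = 32 and K = 4 users, and the equal-power assumption are all hypothetical. It builds the covariance R = p·AAᵀ + σ²·I and prints its K largest eigenvalues, which for fixed σ² grow in proportion to the SNR.

```python
import numpy as np

rng = np.random.default_rng(0)
MN, K = 32, 4                  # observation dimension and number of users (illustrative)
sigma2 = 1.0                   # fixed noise power sigma^2

# Random unit-norm signatures standing in for the spreading waveforms (real-valued toy).
A = rng.standard_normal((MN, K))
A /= np.linalg.norm(A, axis=0)

for snr_db in (0, 10, 20):
    p = sigma2 * 10.0 ** (snr_db / 10.0)         # per-user signal power at this SNR
    R = p * (A @ A.T) + sigma2 * np.eye(MN)      # covariance: signal part + noise floor
    eigs = np.sort(np.linalg.eigvalsh(R))[::-1]
    print(snr_db, np.round(eigs[:K], 2))         # the K signal-subspace eigenvalues grow with SNR
```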

2.4. REGULARIZED DETECTORS FOR DOWNLINK TRANSMISSIONS

As previously anticipated by the CMOE detector, regularization techniques [Hans] deal with ill-conditioned problems by substituting the matrix R with a matrix characterized by a smaller eigenvalue spread. Consequently, the variance of the estimation errors decreases, at the cost of introducing some bias in the detector estimate. The goal of these techniques is to find a good trade-off between bias and variance, constructing the regularized matrix by judiciously modifying R̂. Among the eigenvalue spread reduction techniques, we can distinguish between full-rank regularization techniques, where the inverse covariance matrix R⁻¹ is approximated by a matrix with full rank MN, and reduced-rank approaches, where the inverse covariance matrix is approximated by a matrix with rank r lower than MN.
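As a minimal sketch of the two families, assuming an estimated sample covariance R̂ (here `R_hat`), the following functions implement diagonal loading, the full-rank regularizer used by the CMOE detector, and an eigendecomposition-based rank-r truncation; the function names are illustrative, and diagonal loading is only one of several full-rank options.

```python
import numpy as np

def full_rank_regularized_inverse(R_hat, nu):
    # Diagonal loading, as in the CMOE detector: (R_hat + nu*I)^{-1} keeps
    # full rank MN but has a smaller eigenvalue spread than R_hat^{-1}.
    return np.linalg.inv(R_hat + nu * np.eye(R_hat.shape[0]))

def reduced_rank_inverse(R_hat, r):
    # Keep only the r dominant eigencomponents, giving a rank-r
    # approximation of R^{-1}: sum_i (1/lambda_i) u_i u_i^H over the r largest.
    w, U = np.linalg.eigh(R_hat)        # eigenvalues in ascending order
    Ur, wr = U[:, -r:], w[-r:]          # the r largest eigenpairs
    return (Ur / wr) @ Ur.conj().T
```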

Most of the techniques proposed in the DS-CDMA literature fall into this second category. The main reason is that the projection of the received signal onto a lower-dimensional subspace speeds up the convergence of adaptive algorithms, because the number of parameters to be updated is smaller than in the full-rank case. However, since we are dealing with short data blocks, the application of adaptive algorithms seems inappropriate, because the limited observation time may not be enough to guarantee the convergence of the algorithm.
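The effect of the block length on the quality of R̂ is easy to visualize. In the hypothetical example below (the true covariance and the dimension MN = 32 are arbitrary choices), the condition number of the sample covariance grows sharply when the data block is short, which is precisely the ill-conditioning that regularization must counteract.

```python
import numpy as np

rng = np.random.default_rng(1)
MN = 32
# Toy true covariance: 4 strong signal directions over a unit noise floor.
R = np.diag(np.r_[np.full(4, 11.0), np.full(MN - 4, 1.0)])

for n in (40, 400, 4000):                    # short vs. long data blocks
    X = rng.multivariate_normal(np.zeros(MN), R, size=n)
    R_hat = X.T @ X / n                      # sample covariance from the block
    print(n, round(np.linalg.cond(R_hat), 1))  # conditioning degrades for short blocks
```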

Therefore, when batch approaches are employed, full-rank regularization techniques should also be considered, because a higher subspace dimension leads to an increased number of degrees of freedom, which can potentially be exploited to improve the

