
Decentralized Processing: An Information Theoretic Perspective

Shlomo Shamai, EE Dept., Technion, Israel
sshlomo@ee.technion.ac.il

Communications Sciences Institute
Department of Electrical Engineering
USC Viterbi School of Engineering
February 2, 2007

Joint work with Amichai Sanderovich, Yossef Steinberg and Michael Peleg, EE Dept., Technion.


Overview of presentation

Decentralized Processing: Overview
- Nomadic transmitter and remote destination.
- Simple access points (agents) that encode but do not decode.
- Information theoretic upper and lower bounds.
- The Gaussian scalar channel – capacity through the EPI.

Decentralized MIMO: multiple-antenna Tx
- Two schemes: simple and Wyner-Ziv compression.
- Multiplexing gain of the schemes.
- Upper bounds for block fading.

Impact of common feedback – cooperation mode
- Two nomadic schemes.
- Limited benefits of the feedback, due to the nomadic setting.


Decentralized Processing: Nomadic Transmitter – Overview

[Block diagram: a transmitter S sends X over the channel P(Y1, Y2 | X); agents A1 and A2 observe Y1 and Y2 and forward their encodings over lossless links of capacities C1 and C2 to the destination D.]


Scheme description

- Memoryless broadcast channel without feedback:
  $P(Y_1^n, \ldots, Y_T^n \mid X^n) = \prod_{i=1}^{n} P(Y_{1,i}, \ldots, Y_{T,i} \mid X_i)$
- The agents are connected to the destination via T (here T = 2) non-interfering, reliable links with bandwidths $C_1, \ldots, C_T$ bits/channel use.
- One-shot transmission (one block), but the setting extends to steady-state operation.
- The nomadic transmitter cannot adapt to the different access points (agents), each of which may use a different way of forwarding.
- Nomadic transmitter: the agents lack knowledge of the codebook.


Scheme description

Agents:
- The agents observe a memoryless noisy source, similarly to the CEO problem.
- Each agent encodes $n \le k$ channel outputs into an index $V_t \in \{1, \ldots, 2^{nC_t}\}$, $V_t = \phi_t(Y_t^n)$.
- Each agent sends the k/n indices to the destination.

Final destination:
- Decodes the transmitted message from the k/n received indices $\{V_1, \ldots, V_T\}$.


Problem statement

Nomadic transmitters:
- The agents are ignorant of the codebook structure.

Objective: maximize the total achievable rate of the scheme over
- the agents' encoding functions,
- the single-letter distribution used for codebook selection,
- the decoding function.


Some Relevant Literature

Distributed source coding results are directly applicable:
- The CEO problem [Berger, Zhang and Viswanathan 1996].
- Upper bound on the sum-rate distortion function for the CEO problem [Chen, Zhang, Berger and Wicker 2004].
- Distributed Wyner-Ziv source coding over a tree structure [Draper and Wornell 2004].
- Rate region of the quadratic Gaussian CEO problem [Oohama 2005].
- Network vector quantization [Fleming, Zhao and Effros 2004].
- Rate region of quadratic Gaussian two-terminal source coding [Wagner, Tavildar and Viswanath 2006].
- Backward channels, distributed WZ [Servetto 2006].


Relevant literature

Relevant channel coding problems:
- Parallel relay scheme [Schein and Gallager 2001].
- Relay channel [Cover and El Gamal 1979].
- Multihopping for relay networks [Kramer, Gastpar and Gupta 2004].
- User cooperation [Sendonaris, Erkip and Aazhang 2003].
- MIMO relay problems [Wang, Zhang and Høst-Madsen 2005].


Achievable rate

- The agents compress their received signals into $U_t$, $t = 1, \ldots, T$, where $U_t$ depends only on $Y_t$, i.e. the following Markov chains hold:
  $U_t - Y_t - (X, Y_1, \ldots, Y_{t-1}, Y_{t+1}, \ldots, Y_T, U_1, \ldots, U_{t-1}, U_{t+1}, \ldots, U_T)$
- Transmit $\{U_1, \ldots, U_T\}$ to the destination, saving rate by exploiting the correlations between them (Wyner-Ziv coding).


Achievable rate

Achievable rate:
$R < \max I(X; U_1, \ldots, U_T)$
where the joint distribution factorizes as $P(X)\, P(Y_1, \ldots, Y_T \mid X) \prod_{t=1}^{T} P(U_t \mid Y_t)$ and the maximization is over $P(X)$ and $\{P(U_t \mid Y_t)\}$, subject to
$\sum_{t \in S} C_t > I(U_S; Y_S \mid U_{S^c}) \quad \text{for all } S \subseteq \{1, \ldots, T\}.$


The Gaussian channel

For a Gaussian channel with
$Y_t = X + N_t, \quad E|X|^2 = P_X, \quad E|N_t|^2 = P_{N_t},$
the capacity is
$R \le \max_{0 \le r_t \le C_t} \; \min_{S \subseteq \{1,\ldots,T\}} \left\{ \sum_{t \in S} [C_t - r_t] + \frac{1}{2}\log_2\!\left(1 + P_X \sum_{t \in S^c} \frac{1 - 2^{-2 r_t}}{P_{N_t}}\right) \right\}$

- Proof: through the entropy power inequality and the contra-polymatroid structure of the achievable rate region [Tse et al. 2004, Oohama 2005].
- $r_t$ can be interpreted as the bandwidth used for noise quantization (a small numerical sketch of this bound follows below).
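To make the max-min structure concrete, here is a minimal numerical sketch (my own illustration, not from the talk) that evaluates the bound above by brute force; the parameter values and the grid over $(r_1,\ldots,r_T)$ are illustrative assumptions.

```python
import itertools
import numpy as np

def gaussian_nomadic_bound(C, P_X, P_N, grid=60):
    """Brute-force evaluation of
    max_{0<=r_t<=C_t} min_S { sum_{t in S}(C_t - r_t)
        + 0.5*log2(1 + P_X * sum_{t in S^c} (1 - 2**(-2*r_t))/P_N_t) }."""
    T = len(C)
    axes = [np.linspace(0.0, C[t], grid) for t in range(T)]
    subsets = [S for k in range(T + 1)
               for S in itertools.combinations(range(T), k)]
    best = 0.0
    for r in itertools.product(*axes):
        val = min(
            sum(C[t] - r[t] for t in S)
            + 0.5 * np.log2(1 + P_X * sum((1 - 2 ** (-2 * r[t])) / P_N[t]
                                          for t in range(T) if t not in S))
            for S in subsets)
        best = max(best, val)
    return best

# Illustrative numbers: two agents, unit noise, SNR = 10, C_t = 1 bit/channel use.
print(gaussian_nomadic_bound(C=[1.0, 1.0], P_X=10.0, P_N=[1.0, 1.0]))
```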


The Gaussian channel

For symmetric agents, the analytic solution gives:
$R = \frac{1}{2}\log_2\!\left(1 + 2\,\mathrm{SNR}\left(1 - \frac{\sqrt{\mathrm{SNR}^2 + 2^{4C}(1 + 2\,\mathrm{SNR})} - \mathrm{SNR}}{2^{4C}}\right)\right)$

Asymptotics (checked numerically in the sketch below):
- $\mathrm{SNR} \to \infty$: $R = 2C$
- $C \to \infty$: $R = \frac{1}{2}\log_2(1 + 2\,\mathrm{SNR})$
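A quick numerical sanity check of the closed form and its two limits (a sketch under the reconstruction above; the SNR and C values are arbitrary assumptions):

```python
import numpy as np

def R_symmetric(snr, C):
    """Closed-form rate for two symmetric agents, as written on the slide:
    R = 0.5*log2(1 + 2*snr*(1 - (sqrt(snr**2 + 2**(4*C)*(1+2*snr)) - snr)/2**(4*C)))."""
    x = (np.sqrt(snr ** 2 + 2 ** (4 * C) * (1 + 2 * snr)) - snr) / 2 ** (4 * C)
    return 0.5 * np.log2(1 + 2 * snr * (1 - x))

# C -> infinity: R approaches 0.5*log2(1 + 2*SNR)
print(R_symmetric(10.0, 20.0), 0.5 * np.log2(1 + 2 * 10.0))
# SNR -> infinity: R approaches 2*C
print(R_symmetric(1e6, 1.0), 2 * 1.0)
```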


The Gaussian channel

[Figure]


Decentralized MIMO: Scheme description

Nomadic multi-antenna transmitter

[Block diagram: a transmitter S with antenna inputs X1 and X2; agents A1 and A2 observe Y1 and Y2 and forward over lossless links of capacities C1 and C2 to the destination D.]


Decentralized MIMO: Scheme description

MIMO channel:
- t Tx antennas and r agents, each with a single antenna.
- Rayleigh fading channel: $Y = HX + N$, where $Y \in \mathbb{C}^{r \times 1}$, $X \in \mathbb{C}^{t \times 1}$, $E[X^* X] \le P$, $H = [h_1, h_2, \ldots, h_r]^T$ with $h_i \sim \mathcal{CN}(0, I_t)$, $i = 1, \ldots, r$, and $N \sim \mathcal{CN}(0, I_r)$.
- H is fully known to both the agents and the remote destination; the channel can be fast fading (Shannon capacity) or block fading (average rate).
- The agents are connected to the remote destination via orthogonal reliable links with bandwidths $C_1, \ldots, C_r$ bits/channel use.


Relevant literature – Distributed MIMO

Relevant channel coding problems:
- Gaussian MIMO channels [Telatar 1999].
- MIMO broadcast [H. Weingarten, Y. Steinberg and S. Shamai 2004].
- Capacity scaling laws in MIMO relay networks [Bolcskei, Nabar, Oyman and Paulraj 2006].
- Capacity-achieving input covariance for single-user MIMO [Tulino, Lozano and Verdu 2006].
- Quadratic Gaussian two-terminal source coding [Wagner, Tavildar and Viswanath 2005].
- Gaussian many-help-one problem [Tavildar, Viswanath and Wagner 2007].


Achievable rate – simple compression

- Each agent needs to know only its own fading $h_i$.
- The destination knows H.
- The agents compress their received signals $Y_i^n$ into the codewords $U_i^n$ (i is the agent index); the codebook size is $2^{nC_i}$.
- $U_i$ is defined by
  $U_i = Y_i + d_i, \quad d_i \sim \mathcal{N}(0, P_{D_i}), \quad P_{D_i} = \frac{|h_i|^2 P/t + 1}{2^{C_i} - 1}.$


Achievable rate – simple compression

- Transmit $\{U_1, \ldots, U_r\}$ to the destination.
- Simple compression: no use is made of the dependencies among $\{U_1, \ldots, U_r\}$.
- The destination then uses $\{U_1, \ldots, U_r\}$ to decode X.
- The achievable rate is (a Monte Carlo sketch follows below):
  $R_{SC} = E_H \log_2 \det\!\left( I_r + \frac{P}{t}\,\mathrm{diag}\!\left\{\frac{1}{P_{D_1}+1}, \ldots, \frac{1}{P_{D_r}+1}\right\} H H^* \right)$
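A minimal Monte Carlo sketch of $R_{SC}$ (my own illustration; the equal per-agent link capacity C, the number of channel draws and the seed are assumed, illustrative values):

```python
import numpy as np

def simple_compression_rate(t, r, P, C, trials=2000, seed=0):
    """Monte Carlo estimate of R_SC = E_H log2 det(I_r + (P/t) diag(1/(P_D_i+1)) H H^*),
    with P_D_i = (|h_i|^2 * P/t + 1) / (2**C - 1) as on the preceding slide."""
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(trials):
        # i.i.d. Rayleigh fading: row h_i ~ CN(0, I_t)
        H = (rng.standard_normal((r, t)) + 1j * rng.standard_normal((r, t))) / np.sqrt(2)
        P_D = (np.sum(np.abs(H) ** 2, axis=1) * P / t + 1) / (2 ** C - 1)
        A = np.eye(r) + (P / t) * np.diag(1.0 / (P_D + 1)) @ H @ H.conj().T
        rates.append(np.log2(np.abs(np.linalg.det(A))))
    return float(np.mean(rates))

print(simple_compression_rate(t=4, r=4, P=10.0, C=2.0))
```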


Achievable rate – WZ compression

- Use the dependencies between the receptions to increase the compression efficiency.
- Requires knowledge of H at all agents.
- Define $U_i = Y_i + d_i$ as before, only now $d_i \sim \mathcal{N}(0, P_{D_i}(H))$ with
  $\frac{1}{P_{D_i}(H) + 1} = 1 - 2^{-r_i(H)}.$
- $r_i(H)$ is the bandwidth wasted on noise compression (this is due to the nomadic assumption: no decoding, so only source coding is possible [Oohama 2005]).


Achievable rate – WZ compression (cont.)

The achievable rate:
$R_{WZ} = E_H\!\left[ \max_{0 \le r_i(H) \le C_i} \; \min_{S \subseteq \{1,\ldots,r\}} R_S\!\left(\{r_i(H)\}_{i=1}^{r}, H, C\right) \right]$
$R_S\!\left(\{r_i(H)\}_{i=1}^{r}, H, C\right) \triangleq \sum_{i \in S^c} \left[ C_i - r_i(H) \right] + \log_2 \det\!\left( I + \frac{P}{t}\,\mathrm{diag}\!\left\{\frac{1}{P_{D_i}+1}\right\}_{i \in S} H_S H_S^* \right)$

- Notice that $C_i - r_i(H)$ is the bandwidth left for signal transmission.
- $H_S \triangleq (H_{i,j})$, $i \in S$, $j = 1, \ldots, t$.
(A numerical sketch of the inner optimization follows below.)
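Below is a sketch of the inner max-min optimization for a single channel realization (my own illustration, not from the talk); the use of a generic scipy solver, the starting point and the parameter values are assumptions. Since a minimum of concave functions is concave, a local maximizer found this way is global.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def wz_rate_fixed_H(H, P, C):
    """max over 0<=r_i<=C_i of min_S { sum_{i in S^c}(C_i - r_i)
    + log2 det(I + (P/t) diag(1 - 2**(-r_i))_{i in S} H_S H_S^*) },
    using 1/(P_D_i+1) = 1 - 2**(-r_i) from the WZ-compression slide."""
    r_agents, t = H.shape
    subsets = [S for k in range(r_agents + 1)
               for S in itertools.combinations(range(r_agents), k)]

    def objective(rvec):
        vals = []
        for S in subsets:
            Sc = [i for i in range(r_agents) if i not in S]
            term = sum(C[i] - rvec[i] for i in Sc)
            if S:
                HS = H[list(S), :]
                D = np.diag(1.0 - 2.0 ** (-rvec[list(S)]))
                M = np.eye(len(S)) + (P / t) * D @ HS @ HS.conj().T
                term += np.log2(np.abs(np.linalg.det(M)))
            vals.append(term)
        return min(vals)

    # Rough numerical maximization (the objective is concave but non-smooth).
    res = minimize(lambda rv: -objective(rv), x0=np.array(C) / 2,
                   bounds=[(0.0, c) for c in C], method="L-BFGS-B")
    return -res.fun

rng = np.random.default_rng(1)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
print(wz_rate_fixed_H(H, P=10.0, C=[2.0, 2.0, 2.0, 2.0]))
```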


Achievable rate – WZ compression (cont.)

- The optimization over $\{r_i(H)\}_{i=1}^{r}$ in WZ compression needs to be performed for every H.
- The optimization is concave, and thus can be performed efficiently.
- As $t \to \infty$ the channel matrix hardens, and there is no need for H at the agents, since it is required only for setting the WZ binning resolution.


Multiplexing Gain

Multiplexing gain $= \lim_{P \to \infty} \frac{R(P)}{\log_2(P)}$

For simple compression, when $t \ge r$ and $C = \log_2(P)$, the compression noise $d_i$ in the definition of $U_i$ has power
$P_{D_i} = \left.\frac{|h_i|^2 P/t + 1}{2^{C} - 1}\right|_{C = \log_2(P)} = \frac{|h_i|^2 P/t + 1}{P - 1},$
upper bounded by
$P_{D}^{*} = \max_i P_{D_i} = \max_i \left.\frac{|h_i|^2 P/t + 1}{2^{C} - 1}\right|_{C = \log_2(P)} = \max_i \frac{|h_i|^2 P/t + 1}{P - 1}.$


Multiplexing Gain

Thus the full multiplexing gain is attained (the inequality holds since we use the largest compression noise; the others are smaller):
$R_{SC} \ge E_H \log_2 \det\!\left( I_r + \frac{P}{t}\,\frac{1}{1 + P_D^{*}}\, H H^* \right) = \sum_{i=1}^{r} E_{\lambda} \log_2\!\left( 1 + \frac{P \lambda_i / t}{1 + P_D^{*}} \right)$
where $\{\lambda_i\}$ are the unordered eigenvalues of $H H^*$.
$\lim_{P \to \infty} \frac{R_{SC}(P)}{\log_2(P)} \ge r$

When $t < r$, only a subset of t agents can be used. (A small numerical check of the pre-log follows below.)
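A rough numerical check of the scaling argument (a sketch, not from the slides): for one fixed channel realization and $C = \log_2(P)$, the slope of $R_{SC}$ versus $\log_2 P$ should approach r. The dimensions, seed and SNR points are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = r = 4
H = (rng.standard_normal((r, t)) + 1j * rng.standard_normal((r, t))) / np.sqrt(2)

def rate_sc_fixed(P, C):
    """R_SC for one channel realization, with quantization noise
    P_D_i = (|h_i|^2 P/t + 1)/(2**C - 1) as on the simple-compression slide."""
    P_D = (np.sum(np.abs(H) ** 2, axis=1) * P / t + 1) / (2 ** C - 1)
    A = np.eye(r) + (P / t) * np.diag(1.0 / (P_D + 1)) @ H @ H.conj().T
    return np.log2(np.abs(np.linalg.det(A)))

# With C = log2(P), the incremental slope approaches r = 4, i.e. full multiplexing gain.
Ps = [1e2, 1e4, 1e6, 1e8]
R = [rate_sc_fixed(P, np.log2(P)) for P in Ps]
for k in range(1, len(Ps)):
    print((R[k] - R[k - 1]) / (np.log2(Ps[k]) - np.log2(Ps[k - 1])))
```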


Multiplexing Gain

Wyner-Ziv compression is always at least as good as simple compression, and thus, trivially:
$\lim_{P \to \infty} \frac{R_{WZ}(P)}{\log_2(P)} \ge m \triangleq \min\{r, t\}$


Nomadic joint UB

For upper bounds on the average rate with block fading (not valid for fast fading):
- Assume that the agents can fully cooperate.
- This is equivalent to m parallel channels with signal-to-noise ratios $\{\lambda_i P / t\}$, resulting in an almost standard MIMO setting.
- In [SSSK2005] there was no H; we restate that result for a fixed H (evaluated numerically in the sketch below):
$R(H) \le \max_{\sum_i B_i \le C_{total},\; B_i \ge 0} \; \sum_i \log_2\!\left( 1 + \frac{P \lambda_i}{t}\,\frac{2^{B_i} - 1}{2^{B_i} + P \lambda_i / t} \right)$
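A minimal sketch of evaluating this joint upper bound for one channel realization (my own illustration; the scipy solver choice, the starting point and all parameter values are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def joint_upper_bound(H, P, C_total):
    """Nomadic joint upper bound for a fixed H: fully cooperating agents reduce the
    problem to m parallel channels with SNRs lambda_i*P/t and total link budget C_total."""
    r, t = H.shape
    lam = np.linalg.eigvalsh(H @ H.conj().T)
    lam = lam[lam > 1e-12]                    # keep the m = min(r, t) nonzero eigenvalues
    snr = lam * P / t

    def neg_rate(B):
        return -np.sum(np.log2(1 + snr * (2 ** B - 1) / (2 ** B + snr)))

    m = len(snr)
    res = minimize(neg_rate, x0=np.full(m, C_total / m),
                   bounds=[(0.0, C_total)] * m,
                   constraints=[{"type": "ineq", "fun": lambda B: C_total - np.sum(B)}],
                   method="SLSQP")
    return -res.fun

rng = np.random.default_rng(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
print(joint_upper_bound(H, P=10.0, C_total=8.0))
```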


Nomadic separated UB

Another upper bound is based on
$R \le \frac{1}{n} I\!\left(X^n; V_{1,\ldots,r} \mid H\right) \le \frac{1}{n} \sum_{i=1}^{r} I\!\left(X^n; V_i \mid H\right)$
- $V_i$ is the message sent from the i-th agent.
- Every term in the sum can be treated as a single-agent model; the number of antennas at the source is not important. Invoking the results of [SSSK2005] for a single antenna, for every term of the above sum (loose: not accounting for the single-source setting):
$R(H) \le \sum_{i=1}^{r} \log_2\!\left( 1 + \frac{P}{t}|h_i|^2\, \frac{2^{C_i} - 1}{2^{C_i} + \frac{P}{t}|h_i|^2} \right)$
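This separated bound is a simple per-agent sum; a short sketch of its evaluation (my own, with assumed parameter values):

```python
import numpy as np

def separated_upper_bound(H, P, C):
    """Nomadic separated upper bound:
    R(H) <= sum_i log2(1 + (P/t)|h_i|^2 (2**C_i - 1)/(2**C_i + (P/t)|h_i|^2))."""
    r, t = H.shape
    g = np.sum(np.abs(H) ** 2, axis=1) * P / t     # per-agent receive SNR (P/t)|h_i|^2
    C = np.asarray(C, dtype=float)
    return float(np.sum(np.log2(1 + g * (2 ** C - 1) / (2 ** C + g))))

rng = np.random.default_rng(3)
H = (rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))) / np.sqrt(2)
print(separated_upper_bound(H, P=10.0, C=np.full(8, 1.0)))
```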


Performance of a 4-by-4 system, with C = 2

[Plot: R [bits/sec] versus P [dB] for the nomadic joint upper bound, cut-set bound 1, the nomadic separated upper bound, cut-set bound 2, simple compression, and Wyner-Ziv compression.]


Performance of a t = 4 by r = 8 system, with C = 1

[Plot: R [bits/sec] versus P [dB] for the nomadic joint upper bound, cut-set bound 1, the nomadic separated upper bound, cut-set bound 2, simple compression, and Wyner-Ziv compression.]


Asymptotics – simple compression

Consider the case of $r, t \to \infty$ while $\tau = r/t$ and $C = C_{total}/r$ are kept fixed, independent of r.
- Here we take the min/max over $P_{D_i}$ to obtain upper/lower bounds.
- The gains become identical asymptotically: $\forall i:\ |h_i|^2 / t \to 1$ almost surely, so $P_D^{*} \to \frac{P + 1}{2^{C_{total}/r} - 1}$.
- Simple compression:
$E_H \log_2 \det\!\left( I_r + \frac{P}{t}\,\frac{1}{1 + P_D^{*}}\, H H^* \right) \to m\, E_{\lambda} \log_2\!\left( 1 + \frac{P\lambda/t}{1 + \frac{P + 1}{2^{C_{total}/r} - 1}} \right)$
- So $\lim_{r \to \infty} R_{SC}$ is determined by $C_{total}$ and P (no dependence on $\tau = r/t$).
- Notice that, by channel hardening, $E_H \log_2 \det\!\left( I_r + \frac{P}{t}\, H H^* \right) \to r \log_2(1 + P)$.


Feedback link: t = 1, r = 2

[Block diagram: a transmitter S sends X over the channel P(Y1, Y2 | X); agents A1 and A2 observe Y1 and Y2 and forward over links of capacities C1 and C2 to the destination D, which feeds back to both agents over a common link of capacity Cf.]


Relevant literature – Feedback / Cooperation

- Feedback to one agent, in the asymmetric case [Schein and Gallager 2001].
- Coding for interactive communication [Schulman 1996].
- Cooperative relay broadcast channels [Liang and Veeravalli 2005].
- Cooperative receivers [Dabora and Servetto 2005/6].
- Network coding in interference networks [Smith and Vishwanath 2006].


Feedback link – setting

- The final destination has a finite-capacity feedback link $C_f$, common to the two agents.
- Cooperation is limited to two phases.
- The final destination does not reveal the codebook to the agents.
- The feedback link is used to improve the compression quality.
- Network coding can be used.


Feedback link – setting

- First phase: agents 1 and 2 transmit $M_1$ and $M_2$, respectively.
- The final destination receives these messages and forwards $M_f$ through the feedback link, involving network coding.
- Second phase: the first agent sends $M'_1$ and the second agent sends $M'_2$ through the lossless links.


Feedback link – setting

We have that
$\frac{1}{n}\log_2\!\left(|M_t|\,|M'_t|\right) \le C_t, \quad t = 1, 2$
$\frac{1}{n}\log_2\!\left(|M_f|\right) \le C_f$
where
$M_t = \phi_t(Y_t^n), \quad t = 1, 2$
$M'_t = \phi'_t(Y_t^n, M_f)$
$M_f = \phi(M_1, M_2)$


Feedback link – achievable rate #1

- First phase: each agent uses WZ to compress $Y_t^n$ into $U_t^n$, then sends the corresponding bin index $M_t$.
- The destination uses $(M_1, M_2)$ to decode $(U_1^n, U_2^n)$, and then sends $M_f = \phi(M_1, M_2)$.
- Each agent then decodes the compressed vector $U_{3-t}^n$ from $M_f$ with the side information $Y_t^n$.
- Network coding effect, example: $M_f = \phi(M_1) \oplus \phi(M_2)$.


Feedback link – achievable rate #1

- Second phase: each agent uses WZ (with side information at the agents and the destination) to compress $Y_t^n$ into $Z_t^n$, given $U_{3-t}^n$.
- It then sends the corresponding bin index $M'_t$.
- The destination finally decodes $(Z_1^n, Z_2^n)$ from the received $M'_1, M'_2$.
- Since the agents have better side information than D, they can decode $U_{3-t}^n$.


Feedback link – achievable rate #1

Achievable rate:
$R < \max I(X; U_1, U_2, Z_1, Z_2)$
with the constraints (the conditional mutual informations can be evaluated as in the sketch below)
$I(U_1; Y_1 \mid U_2) \le C_f$
$I(U_2; Y_2 \mid U_1) \le C_f$
$I(Y_1, Y_2; U_1, U_2) \le 2C$  (standard WZ)
$I(Z_1; Y_1 \mid U_1, U_2, Z_2) \le C - I(U_1; Y_1 \mid U_2)$
$I(Z_2; Y_2 \mid U_1, U_2, Z_1) \le C - I(U_2; Y_2 \mid U_1)$
$I(Z_1, Z_2; Y_1, Y_2 \mid U_1, U_2) \le 2C - I(Y_1, Y_2; U_1, U_2)$  (conditional WZ on the remaining bandwidth)
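The constraints above are conditional mutual informations; for Gaussian test channels they reduce to log-determinant ratios of sub-covariances. Below is a small generic helper I would use to evaluate such terms numerically (my own sketch, not from the talk); the toy covariance at the end, and the values of P and q, are assumptions made only for illustration.

```python
import numpy as np

def gaussian_cond_mi(Sigma, A, B, C=()):
    """I(A; B | C) in bits for zero-mean jointly Gaussian variables with joint
    covariance Sigma: 0.5*log2( det(S_AC)*det(S_BC) / (det(S_C)*det(S_ABC)) )."""
    def logdet(idx):
        idx = list(idx)
        if not idx:
            return 0.0
        return np.linalg.slogdet(Sigma[np.ix_(idx, idx)])[1]
    A, B, C = list(A), list(B), list(C)
    val = logdet(A + C) + logdet(B + C) - logdet(C) - logdet(A + B + C)
    return 0.5 * val / np.log(2)

# Toy example with assumed parameters: X ~ N(0, P); Y_i = X + N_i with unit noise;
# U_i = Y_i + d_i, d_i ~ N(0, q); variable order is (Y1, Y2, U1, U2).
P, q = 1.0, 0.5
S_Y = P * np.ones((2, 2)) + np.eye(2)                        # Cov(Y1, Y2)
Sigma = np.block([[S_Y, S_Y], [S_Y, S_Y + q * np.eye(2)]])   # joint covariance
print(gaussian_cond_mi(Sigma, A=[2], B=[0], C=[1]))          # I(U1; Y1 | Y2)
```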


Feedback link – achievable rate #1

- For the Gaussian setting, if the side information $(U_1^n, U_2^n)$ is known at the decoder (destination), there is no rate gain in sending it to the encoders (agents): WZ = conditional rate-distortion.
- This means that for the Gaussian setting the scheme achieves the same rate as without feedback.
- For other settings (such as binary), the scheme can use the feedback for rate improvement.


Feedback link – achievable rate #2

- The destination does not decode $(U_1^n, U_2^n)$ in the first phase.
- For decoding at the agents, the first phase requires $I(U_1; Y_1 \mid Y_2) \le I(U_1; Y_1 \mid U_2)$ (since $U_1 - Y_1 - (Y_2, U_2)$).
- A1 and A2 have better side information than D.
- Second phase: each agent uses WZ to compress $(Y_t^n, U_{3-t}^n)$ into $Z_t^n$.
- It then sends the corresponding bin index $M'_t$.
- The destination finally jointly decodes $(U_1^n, Z_1^n, U_2^n, Z_2^n)$ from the received $M_1, M'_1, M_2, M'_2$.
- This is better than decoding $(U_1^n, U_2^n)$ and $(Z_1^n, Z_2^n)$ separately.


Feedback link – achievable rate #2

The achievable rate is
$R < \max I(X; U_1, U_2, Z_1, Z_2)$
but with the constraints
$I(U_1; Y_1 \mid Y_2) \le C_f$
$I(U_2; Y_2 \mid Y_1) \le C_f$  (standard WZ)
$I(Z_1; Y_1 \mid U_1, U_2, Z_2) + \max\{ I(U_1; Y_1 \mid Y_2),\; I(U_2; Y_2 \mid Z_2, U_1) \} \le C$
$I(Z_2; Y_2 \mid U_1, U_2, Z_1) + \max\{ I(U_2; Y_2 \mid Y_1),\; I(U_1; Y_1 \mid Z_1, U_2) \} \le C$
$I(Z_1, Z_2; Y_1, Y_2 \mid U_1, U_2) + I(Y_1, Y_2; U_1, U_2) \le 2C$  (remaining bandwidth under conditional WZ, with decoding at both agents and D)


Feedback link – achievable rate #2

- For the Gaussian symmetric channel, the achievable rate with no feedback meets the non-diagonal (sum) constraint with equality:
$I(U_1, U_2; Y_1, Y_2) \le 2C$
- The achievable rate of the second scheme is limited by the same constraint:
$I(Z_1, Z_2; Y_1, Y_2 \mid U_1, U_2) + I(Y_1, Y_2; U_1, U_2) \le 2C$
- Thus, for the Gaussian symmetric setting, there is no benefit from the feedback.


Feedback link – achievable rate #3

- An increase in the achievable rate for the Gaussian setting: do not require the destination to decode $(U_1^n, U_2^n)$ at all.
- This allows improved compression, at the expense of wasted bandwidth.
- The nomadic setting is not improved even by considering layering (since, again, Gaussian WZ coincides with conditional rate-distortion).


Feedback link – achievable rate #3

Achievable rate:
$R < \max I(X; Z_1, Z_2)$
with the constraints
$I(U_1; Y_1 \mid Y_2) \le C_f$
$I(U_2; Y_2 \mid Y_1) \le C_f$  (standard WZ)
$I(Z_1; Y_1, U_2 \mid Z_2) \le C - I(U_1; Y_1 \mid Y_2)$
$I(Z_2; Y_2, U_1 \mid Z_1) \le C - I(U_2; Y_2 \mid Y_1)$
$I(Z_1, Z_2; Y_1, Y_2, U_1, U_2) \le 2C - \sum_{i=1,2} I(U_i; Y_i \mid Y_{3-i})$  (WZ on the remaining bandwidth)


Achievable with feedback – numerical example

- The U's are not decoded by the destination.
- The plot also shows the Cf required for the best rate enhancement.

[Plot: "Achievable rate w/o feedback, along with the optimal feedback": R [bits/sec] versus SNR [dB], with curves for R with no feedback, R with feedback, and Cf (the optimal feedback bandwidth).]


Conclusions

- A framework for communication schemes in which a nomadic transmitter communicates through agents.
- The Gaussian case is solved, provided the agents do not know/use the codebook of the nomadic transmitter.
- An explicit solution is given for two equivalent agents.
- Simple and Wyner-Ziv compression schemes for a nomadic multi-antenna transmitter communicating through agents.
- The full multiplexing gain is demonstrated for both compression schemes, with no CSI at the transmitter!


Conclusions – Cont.

- Upper bounds for the average rate (block fading).
- Numerical results demonstrate the tightness of the bounds and the effectiveness of the WZ approach.
- For a single transmit antenna, the impact of a feedback link is investigated, in three variations.
- The nomadic setting substantially limits the gain from the feedback link.
- For efficient use of the feedback, the agents need decoding ability.


Outlook

- Distributed MIMO: improved upper bounds, based on vector versions of the EPI.
- Distributed MIMO: multiple-antenna agents.
- Distributed MIMO: including decoding agents (broadcast MIMO channel).
- Distributed MIMO: multi-cell-site joint processing.
- The impact of feedback with decoding agents.


Thank you!
