Convolutional Coding and Decoding Zhong Gu


Convolutional Coding and Decoding
Zhong Gu
Dec. 4, 2000
CprE 537, Fall 2000
Instructor: Dr. Steve Russell


Why use channel coding?
• The channel is not ideal.
• Propagation can introduce errors into the signal through path loss and path fading.
• Channel coding is introduced to overcome these problems.


Structure of Convolutional Code
Figure 1. Convolutional encoder
• At every instant, k bits are shifted into the register, k bits are shifted out, and n encoded bits are output.
• K is called the constraint length of the convolutional code.
• R = k/n is the coding rate.
• Each output bit is produced by a modulo-2 adder connected to selected register stages.


A simple convolutional code
Figure 2. K=3, k=1, n=3 convolutional encoder
• The adder vector of output bit 0 is [1 0 0];
• The adder vector of output bit 1 is [1 0 1];
• The adder vector of output bit 2 is [1 1 1].
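A minimal software sketch of this encoder may help make the structure concrete. It assumes the three adder vectors listed above and a shift register holding the two previous input bits; the names (encode, GENERATORS) are illustrative, not taken from the slides.

```python
# Sketch of the K=3, k=1, n=3 encoder of Figure 2 (illustrative names).
GENERATORS = [(1, 0, 0), (1, 0, 1), (1, 1, 1)]  # adder vectors for output bits 0, 1, 2
K = 3                                           # constraint length

def encode(bits):
    """Encode a list of 0/1 input bits; produces n = 3 output bits per input bit."""
    state = [0, 0]                 # the two previous input bits held in the register
    out = []
    for b in bits:
        window = [b] + state       # current input bit followed by the register contents
        for g in GENERATORS:
            # each output bit is the modulo-2 sum of the taps picked by its adder vector
            out.append(sum(w & t for w, t in zip(window, g)) % 2)
        state = [b] + state[:-1]   # shift the register
    return out

# Example: encode a short message (appending K-1 zeros flushes the register)
print(encode([1, 0, 1, 1] + [0] * (K - 1)))
```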


Tree Diagram
Figure 3. Tree diagram for the K=3, k=1 encoder
• If the input bit is a 0, the upper branch is followed; otherwise, the lower branch is followed, so it is easy to find the output code for a given input sequence.
• The output bit sequence is determined by the current input bit and the two previous bits.
• The current output sequence is determined by the input bit and the four possible states of the register, which are shown in Figure 3.


State Diagram
Figure 5. State diagram of a convolutional encoder
• The three bits on each transition branch denote the output sequence; dotted lines show transitions triggered by an input 1, and solid lines show transitions triggered by an input 0.
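The state diagram can also be tabulated in software. The sketch below derives the four states and their labeled transitions from the adder vectors of Figure 2; the function name and printed format are illustrative assumptions.

```python
# Sketch: enumerate the state transitions implied by Figure 5.
GENERATORS = [(1, 0, 0), (1, 0, 1), (1, 1, 1)]   # adder vectors from Figure 2

def step(state, bit):
    """state = (s1, s2), the two most recent input bits; bit is the new input."""
    window = (bit,) + state
    output = tuple(sum(w & t for w, t in zip(window, g)) % 2 for g in GENERATORS)
    return (bit, state[0]), output               # register shifts: new state = (bit, s1)

for s1 in (0, 1):
    for s2 in (0, 1):
        for bit in (0, 1):
            ns, out = step((s1, s2), bit)
            print(f"state {s1}{s2} --input {bit}--> state {ns[0]}{ns[1]}, output {out}")
```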


Maximum Likelihood Decoding
• When the encoded information is transmitted over the channel, it is distorted.
• The convolutional decoder regenerates the information by estimating the most likely path of state transitions in the trellis.
• Maximum likelihood decoding means the decoder searches all possible paths in the trellis and compares the metric between each path and the received sequence. The path with the minimum metric is selected as the output, so the maximum likelihood decoder is the optimum decoder.
• In general, a convolutional code CC(n, k, K) has 2^((K-1)k) possible states. At each sampling instant, there are 2^k merging paths for each node, and the one with the minimum distance is selected and called the surviving path. At each instant, 2^((K-1)k) surviving paths are stored in memory.
• When the whole input sequence has been processed, the decoder selects the path with the minimum metric and outputs it as the decoding result.
• In real systems, the input sequence is very long. It has been shown that a trellis depth L > 5K is long enough, so the decoder only needs to output the oldest message bit within the Viterbi trellis window at each step.


Viterbi Algorithm
1. Calculate branch metrics
The branch metric at time instant j for path α is defined as the log of the joint probability of the received n-bit symbol conditioned on the estimated transmitted n-bit symbol:

$m_j^{\alpha} = \ln \prod_{i=1}^{n} P(r_{ji} \mid c_{ji}^{\alpha}) = \sum_{i=1}^{n} \ln P(r_{ji} \mid c_{ji}^{\alpha})$

2. Calculate path metrics
The path metric for path α at time instant J is the sum of the branch metrics along this path:

$M^{\alpha} = \sum_{j=1}^{J} m_j^{\alpha}$

3. Information sequence update
At each instant, there are 2^k merging paths for every node. The decoder selects the one with the largest metric as the survivor:

$\max(M^{\alpha_1}, M^{\alpha_2}, \ldots, M^{\alpha_{2^k}})$

4. Output the decoded sequence
At instant J, the (J-L)th information symbol along the surviving path with the largest metric is output from memory.
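For hard decisions, the log-likelihood branch metric above reduces to a Hamming-distance comparison. A short derivation, assuming a binary symmetric channel with crossover probability p < 1/2 (an assumption not stated on the slide):

```latex
m_j^{\alpha} = \sum_{i=1}^{n} \ln P(r_{ji} \mid c_{ji}^{\alpha})
             = d_j^{\alpha}\ln p + (n - d_j^{\alpha})\ln(1-p)
             = n\ln(1-p) - d_j^{\alpha}\ln\frac{1-p}{p}
```

where d_j^α is the Hamming distance between the received symbol and the branch label. Since ln((1-p)/p) > 0, maximizing the summed metric is equivalent to minimizing the total Hamming distance, which is the form used on the next slide.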


Using the Viterbi Algorithm to Decode
Let m(s, t) represent the metric of state s at time t.
1. Initially, all state metrics are zero, i.e. m(0,0) = m(1,0) = m(2,0) = m(3,0) = 0.
2. For every state there are two entering branches, called the upper branch and the lower branch. The variables M_upper(s, t) and M_lower(s, t) stand for the Hamming distance between the currently received bits and the expected encoded bits for the branch ending in state s at time t.
3. Compare M_upper(s, t) + m(s*, t-1) and M_lower(s, t) + m(s*, t-1), where s is the state at time t and s* is the previous state at time (t-1) for the given branch. Choose the branch with the smaller value as the surviving branch entering the state at time t and let m(s, t) equal this value. That is:
   if M_upper(s, t) + m(s*, t-1) < M_lower(s, t) + m(s*, t-1),
   the upper branch is the surviving branch and m(s, t) = M_upper(s, t) + m(s*, t-1);
   otherwise, the lower branch is the surviving branch and m(s, t) = M_lower(s, t) + m(s*, t-1).
   Repeat the above steps until all the input data have been processed, at time T.
4. Compare m(0, T), m(1, T), m(2, T), m(3, T), choose the minimum one, and trace back the path from that state.
5. The output data are generated from the input bits along this path, as in the sketch below.
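A compact sketch of this procedure, written for the K=3, n=3 encoder of Figure 2 with the Hamming distance as branch metric. The names (branch, viterbi_decode) are illustrative, and survivor storage is simplified by keeping whole bit paths instead of traceback pointers.

```python
# Hard-decision Viterbi decoder sketch for the K=3, k=1, n=3 code (illustrative).
GENERATORS = [(1, 0, 0), (1, 0, 1), (1, 1, 1)]   # adder vectors from Figure 2
K, N = 3, len(GENERATORS)
NUM_STATES = 2 ** (K - 1)                        # 4 states, encoded as (s1 s2)

def branch(state, bit):
    """Next state and expected n-bit output for one transition."""
    s1, s2 = state >> 1, state & 1
    window = (bit, s1, s2)
    out = tuple(sum(w & t for w, t in zip(window, g)) % 2 for g in GENERATORS)
    return (bit << 1) | s1, out

def viterbi_decode(received):
    """received: flat list of hard-decision bits, length a multiple of n."""
    metric = [0] * NUM_STATES                            # step 1: all state metrics start at zero
    paths = [[] for _ in range(NUM_STATES)]
    for t in range(0, len(received), N):
        r = received[t:t + N]
        new_metric = [float("inf")] * NUM_STATES
        new_paths = [None] * NUM_STATES
        for s in range(NUM_STATES):                      # steps 2-3: extend branches, keep survivor
            for bit in (0, 1):
                ns, expected = branch(s, bit)
                d = sum(a != b for a, b in zip(r, expected))   # Hamming distance of this branch
                if metric[s] + d < new_metric[ns]:
                    new_metric[ns] = metric[s] + d
                    new_paths[ns] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    best = min(range(NUM_STATES), key=lambda s: metric[s])     # step 4: minimum final metric
    return paths[best]                                         # step 5: the decoded bit sequence

# Round trip with the encoder sketch from the earlier slide (noise-free channel):
# viterbi_decode(encode([1, 0, 1, 1, 0, 0])) -> [1, 0, 1, 1, 0, 0]
```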


A decoding example


Simulation with SNR = -9.54
[Figure: four panels showing the original signal, the transmitted signal after encoding with noise, the decoded signal, and the error.]


Simulation with SNR = 0 dB, K = 6
[Figure: four panels showing the original signal, the transmitted signal after encoding with noise, the decoded signal, and the error.]


Relationship of SNR and Error Rate
[Figure: error rate (0 to 0.5) versus SNR (0 dB down to -50 dB); the error rate increases as the SNR decreases.]
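A curve like this one could be produced with a simulation along the following lines. The sketch assumes BPSK over an AWGN channel with hard decisions and reuses encode(), viterbi_decode(), and K from the earlier sketches; the original simulation's modulation and exact SNR definition are not given on the slides, so those details are assumptions.

```python
import math
import random

def bit_error_rate(snr_db, num_bits=2000):
    """Estimate the decoded bit error rate at one SNR point (illustrative setup)."""
    msg = [random.randint(0, 1) for _ in range(num_bits)] + [0] * (K - 1)  # flush bits
    coded = encode(msg)
    sigma = math.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))     # noise std for unit-energy BPSK
    received = []
    for c in coded:
        y = (1.0 if c else -1.0) + random.gauss(0, sigma)   # BPSK symbol plus AWGN
        received.append(1 if y > 0 else 0)                  # hard decision
    decoded = viterbi_decode(received)
    errors = sum(a != b for a, b in zip(msg[:num_bits], decoded))
    return errors / num_bits

for snr in (0, -2, -4, -6, -8, -10):
    print(f"SNR {snr:>4} dB -> error rate {bit_error_rate(snr):.3f}")
```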


Conclusion
• Convolutional encoding can be used to improve the performance of wireless systems.
• The Viterbi algorithm is an optimum decoder.
• With convolutional coding, the information can be extracted without any error from a noisy channel if the SNR is high enough. When the SNR falls below some value, errors appear, and the error rate increases as the SNR decreases. When the SNR falls below a certain value, the convolutional decoder can no longer extract the information.
• Questions?
