Turbo Coding And MAP Decoding - Part 1


Intuitive Guide to Principles of Communications
www.complextoreal.com

Bayes' theorem of conditional probabilities

Let's first state the theorem of conditional probability, also called Bayes' theorem:

P(A and B) = P(A) P(B given A)

which we can write in more formal terminology as

P(A, B) = P(A) P(B | A)    (A)

where P(B | A) is referred to as the probability of event B given that A has already occurred. If event A always occurs with event B, then we can write the following expression for the absolute probability of event A:

P(A) = \sum_B P(A, B)    (B)

If events A and B are independent of each other, then (A) degenerates to

P(A, B) = P(A) P(B)    (C)

The relationship (C) is very important and we will use it heavily in the explanation of Turbo decoding in this chapter.
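As a quick numerical sanity check of relations (A), (B) and (C), here is a minimal sketch with a made-up joint distribution over two binary events; the probability values are illustrative only.

```python
import numpy as np

# Hypothetical base probabilities, chosen so that A and B are independent.
P_A = np.array([0.3, 0.7])          # P(A = 0), P(A = 1)
P_B = np.array([0.6, 0.4])          # P(B = 0), P(B = 1)
P_AB = np.outer(P_A, P_B)           # joint table, rows index A, columns index B

# Relation (A): P(A, B) = P(A) * P(B | A)
P_B_given_A = P_AB / P_AB.sum(axis=1, keepdims=True)
assert np.allclose(P_AB, P_A[:, None] * P_B_given_A)

# Relation (B): marginalizing the joint over B recovers P(A)
assert np.allclose(P_AB.sum(axis=1), P_A)

# Relation (C): independence, P(A, B) = P(A) * P(B)
assert np.allclose(P_AB, np.outer(P_A, P_B))
print("Relations (A), (B) and (C) all hold for this distribution.")
```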

If there are three events A, B and C, then Bayes' rule becomes

P(A, B | C) = P(A | C) P(B | A, C)    (D)

A-priori and a-posteriori probabilities

Here is Bayes' theorem again:

P(A | B) = \frac{P(B | A) P(A)}{P(B)}    (E)

The probability of event A conditioned on event B is given by the probability of B given A, times the a-priori probability of A, divided by the probability of B.

The probability of A, or P(A), is the base probability of event A and is called the a-priori probability. The conditional probability P(A | B) is called the a-posteriori probability, or APP. One is an independent probability; the other depends on some event occurring. We will be using the acronym APP a lot, so remember that it means the a-posteriori probability. In other words, the APP of an event is a function of another event also occurring at the same time. We can write (E) as

APP = P(A | B) = \frac{P(B | A) P(A)}{P(B)}    (F)

This says that we can determine the APP of an event by taking the probability of the observation conditioned on that event, scaling it by the event's a-priori probability, and dividing by the probability of the observation.

What these terms mean is best explained by the following two quotes.

"In epistemological terms, 'a priori' and 'a posteriori' refer primarily to how, or on what basis, a proposition might be known. In general terms, a proposition is knowable a priori if it is knowable independently of experience, while a proposition knowable a posteriori is knowable on the basis of experience. The distinction between a priori and a posteriori knowledge thus broadly corresponds to the distinction between empirical and non-empirical knowledge." [2]

"But how do we decide when we have gathered enough data to justify modifying our prediction of the probabilities? That is one of the essential problems of decision theory. How do we make the transition from a priori statistics to a posteriori probability?" [3]

The MAP algorithm helps us make the transition from a-priori knowledge to knowledge based on received data.

Structure of a Turbo Code

According to Shannon, the ultimate code would be one where a message is sent infinitely many times, each time shuffled randomly. The receiver would have infinitely many versions of the message, each corrupted randomly. From these copies, the decoder would be able to decode the transmitted message with near error-free probability. This is the theory of an ultimate code, one that can correct virtually all errors in the received signal.

Turbo codes are a step in that direction. But it turns out that for acceptable performance we do not really need to send the information an infinite number of times; just two or three times provides pretty decent results for our earthly channels.

In Turbo codes, particularly the parallel structure, Recursive Systematic Convolutional (RSC) codes working in parallel are used to create the "random" versions of the message. The parallel structure uses two or more RSC codes, each with a different interleaver. The purpose of the interleaver is to offer each encoder an uncorrelated, or "random", version of the information, resulting in parity bits from each RSC that are independent. How "independent" these parity bits are is essentially a function of the type and length/depth of the interleaver; interleaver design is a science in itself. In a typical Viterbi code, messages are decoded in blocks of only about 200 bits or so, whereas in Turbo coding the blocks are on the order of 16K bits long. The reason for this length is to effectively randomize the sequence going to the second encoder: the longer the block length, the lower the correlation between the second encoder's input and the message into the first encoder. A short sketch of a random interleaver follows below.

On the receiving side, there are the same number of decoders as on the encoder side, each working on the same information and an independent set of parity bits. This type of structure is called a Parallel Concatenated Convolutional Code, or PCCC.

The convolutional codes used in turbo codes usually have a small constraint length. Whereas a longer constraint length is an advantage in stand-alone convolutional codes, it does not lead to better performance in Turbo codes and only increases computational complexity and delay. The constituent codes in a PCCC must be RSC. The RSC property makes the systematic bit available as a common reference against which the independent parity bits from the different coders assess reliability. The decoding most often applied is an iterative form of decoding.

With two such codes, the signal produced is rate 1/3. With three encoders, the rate is 1/4, and so on. Usually two encoders are enough, as increasing the number of encoders reduces bandwidth efficiency and does not buy a proportionate increase in performance.
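To make the "uncorrelated version" idea concrete, here is a minimal sketch of a random block interleaver. The block length of 16 and the seed are arbitrary choices for illustration; real turbo interleavers are far longer and carefully designed, as noted above.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def make_interleaver(n, rng):
    """Return a random permutation of 0..n-1, used as the interleaver map."""
    return rng.permutation(n)

block_len = 16                            # real turbo blocks are ~16K bits
pi = make_interleaver(block_len, rng)
u = rng.integers(0, 2, size=block_len)    # information bits
u_interleaved = u[pi]                     # sequence seen by the second RSC

# The de-interleaver is simply the inverse permutation.
inv_pi = np.argsort(pi)
assert np.array_equal(u_interleaved[inv_pi], u)
```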

Figure 1 – A rate 1/(n+1) Parallel Concatenated Convolutional Code (PCCC) Turbo code: encoders EC1 through ECn, fed through interleavers, produce the systematic bit y_k^{1,s} and parity bits y_k^{1,p} through y_k^{n,p}

Turbo codes also come as Serial Concatenated Convolutional Codes, or SCCC. The SCCC codes appear to have better performance at higher SNRs. Whereas PCCC codes require both constituent codes to be RSC, in SCCC only the inner code must be RSC. PCCC codes also show a flattening of performance around a BER of 10^-6 that is less evident in SCCC. The SCCC constituent code rates can also be different, as shown below, and the outer code can even be a block code.

In general, the PCCC is a special form of SCCC. We can even think of the concatenation of RS/convolutional codes, used in line-of-sight links, as a form of SCCC. A Turbo SCCC may look like the figure below, with different-rate constituent codes.

Figure 2 – Serially concatenated constituent coding (SCCC): an outer rate-1/2 encoder EC1 followed by an interleaver and an inner rate-2/3 encoder EC2

There are also hybrid versions that use both PCCC and SCCC, such as shown in the figure below.

Figure 3 – Hybrid Turbo codes

There is another form called the Turbo Product Code, or TPC. This form has a very different structure from PCCC or SCCC: TPC uses block codes instead of convolutional codes. Two different block codes (usually Hamming codes) are concatenated serially without an interleaver in between.

Since the two codes are independent and operate on rows and columns respectively, this alone offers enough randomization that no interleaver is required. TPC codes, like PCCC, also perform well at low SNR and can be formed by concatenating any type of block code. The typical coding method is to array the coded data in rows and then have the second code work on the columns of the new array. For example, consider a TPC built from a (7,5) code and an (8,4) Hamming code. The (8,4) code first codes the 4 info bits into 8 by adding 4 parity bits p1; these codewords are arrayed in rows. Then the (7,5) code works on the columns and creates new parity bits p2 for each column (in this case, both codes are systematic). The net code rate is (4 x 5)/(8 x 7) = 5/14 ≈ 0.36. The decoding is done along rows and then along columns. A toy sketch of this row-then-column construction follows the figure.

Figure 4 – Turbo Product codes: the data array with row parities p1 and column parities p2
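Here is a toy sketch of the row-then-column encoding order. To keep it short, single-parity-check codes stand in for the Hamming codes of the text, and the 5x4 array size and RNG seed are illustrative assumptions; the structural point, encode rows first, then encode the columns of the resulting array, is the same.

```python
import numpy as np

def spc_encode(bits):
    """Append one even-parity bit (a stand-in for a real Hamming encoder)."""
    return np.append(bits, bits.sum() % 2)

rng = np.random.default_rng(0)
info = rng.integers(0, 2, size=(5, 4))    # 5 rows of 4 info bits each

# First code works on the rows, adding a parity bit p1 to each row ...
rows_coded = np.array([spc_encode(r) for r in info])          # shape (5, 5)

# ... then the second code works on the columns of the new array,
# adding a parity bit p2 to each column (including the p1 column).
product = np.array([spc_encode(c) for c in rows_coded.T]).T   # shape (6, 5)

print(product)    # last column holds p1 bits, last row holds p2 bits
```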

What makes all these codes "Turbo" is not their structure but a form of feedback iterative decoding. If an SCCC structure does not use iterative decoding, then it is just a plain old concatenated code, not a turbo code.

Maximum a-posteriori Probability (MAP) decoding algorithm

Turbo codes are decoded using a method called Maximum Likelihood Detection, or MLD. The filtered signal is fed to the decoders, and the decoders work on the signal amplitude to output a soft "decision". The a-priori probabilities of the input symbols are used, and a soft output indicating the reliability of the decision (amounting to a suggestion by decoder 1 to decoder 2) is calculated, which is then iterated between the two decoders.

The form of MLD decoding used by turbo codes is called Maximum a-posteriori Probability, or MAP. In communications, this algorithm was first identified in the BCJR paper (Bahl, Cocke, Jelinek and Raviv), and that is how it is known for Turbo applications. The MAP algorithm is related to many other algorithms, such as the Hidden Markov Model (HMM), which is used in voice recognition, genomics and music processing. Other similar algorithms are the Baum-Welch algorithm, expectation maximization, the forward-backward algorithm, and more. MAP is a complex algorithm, hard to understand and hard to explain.

In addition to the MAP algorithm, another algorithm called SOVA, based on Viterbi decoding, is also used. SOVA uses the Viterbi decoding method but with soft outputs instead of hard ones. SOVA maximizes the probability of the sequence, whereas MAP maximizes the bit probabilities at each time, even if that makes the resulting sequence an invalid path. MAP produces near-optimal decoding.

In turbo codes, the MAP algorithm is used iteratively to improve performance. It is like the game of 20 questions, where each previous guess helps to improve your knowledge of the hidden information. The number of iterations is often preset, as in 20 questions. More iterations are needed when the SNR is low; when the SNR is high, fewer iterations are required since the results converge quickly. Doing 20 iterations may be a waste if signal quality is good, so instead of deciding ad hoc, the algorithm is often preset with a fixed number of iterations. On average, seven iterations give adequate results, and no more than 20 are ever required. (These numbers are related to the Central Limit Theorem.)

Figure 5 – Iterative decoding in the MAP algorithm: DEC1 and DEC2 exchange information through an interleaver

Although used together, the terms MAP and iterative decoding are separate concepts. MAP refers to specific math. The iterative process, on the other hand, can be applied to any type of coding, including block coding, which is not trellis-based and may not use the MAP algorithm.

I am going to concentrate only on PCCC decoding using the iterative MAP algorithm. In Part 2, we will go through a step-by-step example; in this part, we will cover the theory of the MAP algorithm.

We are going to describe MAP decoding using the Turbo code shown in Figure 6, with two RSC encoders. Each RSC has two memory registers, so the trellis has four states and the constraint length equals 3. A schematic of the iterative exchange loop is sketched below.
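Before getting into the math, the iterative exchange just described can be sketched as a loop. Everything here is schematic: soft_decode is a hypothetical stand-in (a real SISO decoder runs the forward-backward recursions derived below), and the soft values, interleaver and weights are made up purely to show the flow of extrinsic information.

```python
import numpy as np

def soft_decode(systematic, parity, extrinsic_in):
    """Hypothetical stand-in for one SISO MAP decoder. It returns new
    extrinsic information; a real decoder would run the forward-backward
    recursion derived in the rest of this chapter."""
    return 0.5 * (systematic + parity) + 0.1 * extrinsic_in

n_iterations = 7            # a preset count; ~7 is usually adequate per the text
y_s  = np.array([ 0.9, -1.1,  0.8])    # soft systematic values (illustrative)
y_p1 = np.array([ 1.2, -0.7, -0.9])    # soft parity from encoder 1
y_p2 = np.array([-0.8, -1.0,  1.1])    # soft parity from encoder 2 (interleaved domain)
pi = np.array([2, 0, 1]); inv_pi = np.argsort(pi)   # toy interleaver

L_e = np.zeros(3)           # extrinsic info starts at zero: no prior knowledge
for _ in range(n_iterations):
    L_e1 = soft_decode(y_s, y_p1, L_e)              # decoder 1
    L_e2 = soft_decode(y_s[pi], y_p2, L_e1[pi])     # decoder 2, interleaved domain
    L_e = L_e2[inv_pi]                              # back to decoder-1 ordering

decisions = np.where(y_s + L_e > 0, 1, -1)
print(decisions)
```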

Figure 6 – A rate 1/3 PCCC Turbo code in an 8PSK channel: code bits c^{1,s}, c^{1,p}, c^{2,p} map to a transmitted symbol x; the received symbol r is soft-mapped to y^{1,s}, y^{1,p}, y^{2,p} and distributed to DEC1 and DEC2

The rate 1/3 code shown here has two identical RSC convolutional codes. The coding trellis for each is given in Figure 7. The blue lines show transitions in response to a 0 and the red lines transitions in response to a 1. In a notation such as 1/11, the first number is the input bit and the next two are the code bits. Of these, the first is what we call the systematic bit, and as you can see it is the same as the input bit; the second bit is the parity bit. Each code uses the same trellis for encoding. The labels along the branches can be read as u_k / c_k^1 c_k^2. Since this is an RSC, the first code bit is the same as the information bit: c_k^1 = u_k.

The info bits are called u_k. The coded bits are referred to by the vector c. The coded bits are then transformed into an analog symbol x and transmitted. On the receive side, a noisy version of x is received. By looking at how far the received symbol is from the decision regions, a metric of confidence is attached to each of the three bits in the symbol. Often Gray coding is used, which means that not all bits in the symbol have the same level of confidence for decoding purposes. There are special algorithms for mapping the symbols, turning one received voltage value into log2(M) soft decisions, with M being the M in M-PSK; a max-log sketch of such a demapper appears below. Let's assume that after this mapping and creation of soft metrics, the vector y is received. One pair of these soft bits is sent to the first decoder, and another set, using an interleaved version of the systematic bit and the second parity bit, is sent to the second decoder.

Each decoder works only on its own bits of information, and the decoders pass their confidence scores to each other until both agree within a certain threshold. Then the process restarts with the next symbol in a sequence or block consisting of N symbols (or bits).

Definitions

N is the frame size in transmitted symbols. So for 8PSK, there would be 3N bits transmitted per frame, of which one third are information bits. In a typical Turbo code, there may be as many as 20,000 symbols in a frame.
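Here is a sketch of the symbol-to-soft-bits step mentioned above, using the common max-log approximation. The Gray mapping, the noise variance and the sign/scaling conventions are all assumptions for illustration; the article does not specify a particular demapper.

```python
import numpy as np

# Assumed Gray-coded 8PSK constellation: 3 bits per symbol.
gray_bits = [(0,0,0), (0,0,1), (0,1,1), (0,1,0),
             (1,1,0), (1,1,1), (1,0,1), (1,0,0)]
points = np.exp(2j * np.pi * np.arange(8) / 8)    # unit-circle symbols

def soft_bits(r, noise_var):
    """Max-log demapping: one received complex value r -> log2(8) = 3 soft
    bit metrics (up to a scaling constant; positive favors bit 0 here)."""
    d2 = np.abs(r - points) ** 2                  # squared distances
    llrs = []
    for b in range(3):
        d0 = min(d2[i] for i in range(8) if gray_bits[i][b] == 0)
        d1 = min(d2[i] for i in range(8) if gray_bits[i][b] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

print(soft_bits(0.9 + 0.2j, noise_var=0.5))
```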

The sequence of information bits is given by u. The first encoder gets u = (u_1, u_2, u_3, ..., u_N) and the second encoder gets a preset reordering of this same sequence. For example, we may pick this mapping function:

u = (u_1, u_2, u_3, u_4, u_5, u_6, u_7, ..., u_N)
\tilde{u} = (u_3, u_4, u_1, u_2, u_5, u_6, u_7, ..., u_N)

The encoder mapping is given by the vector c. The pair c_k = (c_k^{1,s}, c_k^{1,p}) are the two bits produced by the first encoder, and c_k = (c_k^{2,s}, c_k^{2,p}) are the two bits produced by the second encoder. The mapping of information bits to code bits is done by a trellis such as the one described in Table I and drawn in Figure 7. A minimal sketch of such an encoder follows.

Table I – Mapping of information bits to code bits

Figure 7 – The trellis diagram of the rate 1/2 RSC code, with states s1 through s4 and branch labels such as 0/00 and 1/11

The symbol described by the vector x (3 bits per vector) is sent at each time i; let's call this vector x_i. There would be N of these symbols transmitted, for example:

x_1 = (1, 1, 1)
x_2 = (1, 1, -1)
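The following is a minimal sketch of a rate-1/2 RSC encoder with two memory cells, matching the four-state, constraint-length-3 description above. The specific generator polynomials, octal (7,5), are an assumption; the article does not state which polynomials its trellis uses.

```python
def rsc_encode(bits, state=(0, 0)):
    """Rate-1/2 RSC encoder with 2 memory cells (4 states, K = 3).
    Feedback 1 + D + D^2, feedforward 1 + D^2 (octal 7/5), an assumed
    choice; the article's trellis may use different polynomials."""
    out = []
    s1, s2 = state
    for u in bits:
        fb = u ^ s1 ^ s2          # recursive feedback bit
        c_sys = u                 # systematic bit equals the input bit
        c_par = fb ^ s2           # parity bit
        out.append((c_sys, c_par))
        s1, s2 = fb, s1           # shift the registers
    return out

print(rsc_encode([1, 0, 1, 1]))   # [(sys, parity), ...] per input bit
```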

The soft-mapped bits are y_k = (y_k^{1,s}, y_k^{1,p}, y_k^{2,p}). The goal is to take these and make a guess about the transmitted vector x, and hence the code bits, which in turn decode u, the information bits. Of these three soft bits, each decoder gets just two: the first decoder gets y_k^1 = (y_k^{1,s}, y_k^{1,p}) and the second decoder gets y_k^2 = (y_k^{2,s}, y_k^{2,p}) as its respective received data. Each decoder works with just two values; the second decoder, however, gets a reordered version of the systematic bit, so it is getting only one bit of independent information.

Log-likelihood Ratio (LLR)

Take the information bit, a binary variable u_k, where u is its value at time k. Its log-likelihood ratio (LLR) is defined as the natural log of the ratio of its base probabilities:

L(u_k) = \ln \frac{P(u_k = +1)}{P(u_k = -1)}    (1.1)

If u takes the two values +1 and -1 volts, representing the bits 0 and 1, and these are equally likely, as they are in most communication systems, then this ratio is equal to zero. This metric is used in most error correction coding and is called the log-likelihood ratio, or LLR. It is a sensitive metric, quite a bit better than a linear one: logs make it easy to deal with very small and very large numbers, as you well know.

Now note what happens to this metric if the binary variable is not equally likely, as happens in trellis decoding. From elementary probability theory, we know that the probabilities of all outcomes of an event sum to 1, so we can write the probability of u = +1 as 1 minus the probability of u = -1:

P(u_k = +1) = 1 - P(u_k = -1)

Using this, rewrite the LLR of (1.1) as

L(u_k) = \ln \frac{P(u_k = +1)}{1 - P(u_k = +1)}    (1.2)

Equation (1.2) is plotted in Figure 8 as a function of the probability of one of the events. Say we are given L(u_k) = 1.0; then the probability that u_k = +1 is equal to 0.73.
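A two-line conversion between LLRs and probabilities, per (1.2), reproduces the 0.73 figure quoted above.

```python
import math

def llr(p_plus):
    """LLR of a binary variable from P(u = +1), per equation (1.2)."""
    return math.log(p_plus / (1.0 - p_plus))

def prob_from_llr(L):
    """Invert (1.2): P(u = +1) = e^L / (1 + e^L)."""
    return math.exp(L) / (1.0 + math.exp(L))

print(prob_from_llr(1.0))   # 0.731..., the 0.73 quoted in the text
print(llr(0.5))             # 0.0 for equally likely bits
```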

Figure 8 – The range of the LLR is from -∞ to +∞, and its sign is a direct indication of the bit

As we can see, the LLR, a non-dimensional metric, is a pretty good indicator of the sign of the bit when it is positive. It is an even better indicator than the Euclidean distance, since its range is so large. The LLR is a very useful parameter, and we will see how it is used in Turbo decoding.

Now formulate the LLR of a decoded bit u_k at time k, conditioned on the received signal, a sequence of N values. The lower index of y means the sequence starts at time 1 and the upper index means it ends at N:

L(u_k) = \ln \frac{P(u_k = +1 | y_1^N)}{P(u_k = -1 | y_1^N)}

We can reformulate the numerator and denominator using Bayes' rule:

L(u_k) = \ln \frac{P(y_1^N, u_k = +1) / P(y_1^N)}{P(y_1^N, u_k = -1) / P(y_1^N)} = \ln \frac{P(y_1^N, u_k = +1)}{P(y_1^N, u_k = -1)}    (1.3)

This formulation of the LLR involves joint probabilities between the received sequence and the information bit; the numerator can be written as (1.4). For an RSC trellis, each path is uniquely specified by any pair of these three: 1. the starting state s', 2. the ending state s, 3. the input bit. If we know any two of these pieces of information, we can identify the correct path without error. In this equation we only know the whole sequence y_1^N; we do not know u_k, nor do we have either of the other pieces of information. But what we do know is that saying u_k = +1 is the same as saying that the correct path is one of the four possible paths shown in Figure 9. So the joint probability P(y_1^N, u_k = +1), the probability of u_k = +1 having received the N-value sequence, is the same as replacing the information bit with the set of starting and ending states. These formulations are equivalent.

P(u_k = +1, y_1^N) = \sum_{(s', s): u_k = +1} P(s', s, y_1^N)    (1.4)

Figure 9 – Trellis transitions possible at any time k in response to a +1 and a -1

We have changed the left-hand side of the equation by replacing u_k = +1 with the two states s' and s, where s' is the starting state and s is the ending state, summed over all allowable transitions for which u_k = +1. There are four valid transitions related to decoding a +1. Assume that there is a probability associated with each of these four transitions. This says that the probability of making a decoding decision in favor of a +1 is the sum of the probabilities of all four of these possible paths.

Now plug (1.4) into (1.3) to get the log-likelihood ratio of u_k as

L(u_k) = \ln \frac{\sum_{u_k = +1} P(s', s, y_1^N)}{\sum_{u_k = -1} P(s', s, y_1^N)}    (1.5)

Now we need to make one more conceptual leap. The probability of deciding which road was taken, i.e. the +1 road or the -1 road, is a function of where the path started and where it will end, which are usually given as boundary conditions. If we split the whole received sequence into manageable parts, it may help us identify the starting or ending states. We incorporate these ideas into (1.5) by writing

y_1^N = (y_1^{k-1}, y_k, y_{k+1}^N) = (y_p, y_k, y_f)

We take the N-value sequence and separate it into three pieces: from 1 to k-1, then the k-th point, and then from k+1 to N. We adopt this slightly easier terminology in (1.6):

P(s', s, y_1^N) = P(s', s, y_p, y_k, y_f)    (1.6)

y_p    The past sequence, the part that came before the current data point
y_k    The current data point
y_f    The future sequence, the points that come after the current point

Rewrite (1.6) using Bayes' rule of joint probabilities (D):

P(s', s, y) = P(s', s, y_p, y_k, y_f) = P(y_f | s', s, y_p, y_k) P(s', s, y_p, y_k)    (1.7)

This looks complicated, but we are just applying Bayes' rule to break the terms of (1.6) into past, present and future parts of the sequence, so that whenever we make a decision about a bit being +1 or -1, a cumulative metric takes into account the starting and ending points of the sequence. The starting and ending points of a convolutional sequence are known, and we use this information to winnow out the likelier sequences.

The term y_f is the future sequence, and we make the assumption that it is independent of the past and depends only on the present state s. We can remove these dependencies to simplify (1.7):

P(s', s, y) = P(y_f | s', s, y_p, y_k) P(s', s, y_p, y_k) = P(y_f | s) P(s', s, y_p, y_k)    (1.8)

Now apply Bayes' rule to the last term in (1.8) to get

P(s', s, y_p, y_k) = P(s, y_k | s', y_p) P(s', y_p)

P(s', s, y) = P(y_f | s) P(s, y_k | s', y_p) P(s', y_p)    (1.9)

Now define a few new terms.

\alpha_{k-1}(s') = P(s', y_p)
\beta_k(s) = P(y_f | s)    (1.10)
\gamma_k(s', s) = P(s, y_k | s', y_p)

The terms alpha, beta and gamma on the right are the metrics we will compute in MAP decoding. The numerator term of (1.5) now becomes

P(s', s, y) = \alpha_{k-1}(s') \gamma_k(s', s) \beta_k(s)    (1.11)

We can plug this into (1.5) to get the LLR equation for the MAP algorithm:

L(u_k) = \ln \frac{\sum_{u_k = +1} \alpha_{k-1}(s') \gamma_k(s', s) \beta_k(s)}{\sum_{u_k = -1} \alpha_{k-1}(s') \gamma_k(s', s) \beta_k(s)}    (1.12)

The first term in (1.11), \alpha_{k-1}(s'), is called the Forward metric; \beta_k(s) is called the Backward metric; and \gamma_k(s', s) is called the Transition metric.

The MAP algorithm allows us to compute these metrics. A decision is made at each time k about the transmitted bit u_k. Unlike Viterbi decoding, where the decision is made in favor of the likeliest sequence by carrying forward the sum of metrics, here the decision is made for the likeliest bit. The MAP algorithm is also known as the Forward-Backward Algorithm, because we assess probabilities in both forward and backward time.

How to calculate the Forward metrics

We start with the forward metric at time k,

\alpha_k(s) = \sum_{\text{all } s'} P(s, s', y_p, y_k)

which is equal to the probabilities at the previous state s' times the transition probability to the current state s:

\alpha_k(s) = \sum_{\text{all } s'} \gamma_k(s', s) \alpha_{k-1}(s')    (1.13)

The initial condition is that we always start at state 1:

\alpha_0(s) = 1 if s = 1, 0 otherwise

Example:

Figure 10 – Computing forward metrics over a four-state trellis section from time k-1 to k (branch metrics such as 4.4817 and 0.2231 are shown on the transitions, with normalized values such as 0.9526 and 0.0474 in parentheses)

The leftmost value at the top is the starting point, so it has the value \alpha_0(1) = 1; the starting value of alpha is 0 at the remaining three states. The forward metrics at time k are

\alpha_k(s) = \sum_{\text{all } s'} \gamma_k(s', s) \alpha_{k-1}(s')

\alpha_k(1) = 4.4817 x 1.0 = 4.4817
\alpha_k(2) = 0.2231 x 1.0 + 4.4817 x 0.0 = 0.2231

Notice that these numbers are larger than 1, which means they are not probabilities; they are called metrics. It also means that, since a typical Turbo code frame is thousands of trellis sections long, multiplication of these numbers may result in numerical overflow. For that reason, both alpha and beta are normalized. A small numerical sketch of this recursion follows below.
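The forward recursion (1.13) plus the normalization of (1.15) is a single matrix-vector product per trellis step. The gamma matrix below is invented for illustration, except for the two branch values 4.4817 and 0.2231 taken from the example above.

```python
import numpy as np

# gamma_k[s_prev, s] = transition metric at time k; values are made up,
# apart from the two branches quoted in the text's example.
gamma_k = np.array([[4.4817, 0.2231, 0.0,    0.0   ],
                    [0.0,    0.0,    0.9526, 0.0288],
                    [0.6065, 1.6487, 0.0,    0.0   ],
                    [0.0,    0.0,    0.0302, 0.0474]])

alpha_prev = np.array([1.0, 0.0, 0.0, 0.0])   # (1.28): trellis starts in state 1

# (1.13): alpha_k(s) = sum over s' of gamma_k(s', s) * alpha_{k-1}(s')
alpha_k = gamma_k.T @ alpha_prev              # -> [4.4817, 0.2231, 0, 0]
alpha_k /= alpha_k.sum()                      # (1.15): normalize against overflow
print(alpha_k)                                # -> [0.9526, 0.0474, 0, 0]
```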

Backward metric \beta_k(s)

Just as with the initial values of alpha, we assume that since the trellis always ends at state 1, the probabilities at the last point are equal to 1 or 0: \beta_N(1) = 1 and zero elsewhere.

This probability is equal to the transition probabilities times the probabilities at the later states, working backwards:

\beta_{k-1}(s') = \sum_{\text{all } s} \beta_k(s) \gamma_k(s', s)    (1.14)

Compare this with the forward metric equation, \alpha_k(s) = \sum_{\text{all } s'} \gamma_k(s', s) \alpha_{k-1}(s').

Figure 11 – Computing backward metrics from time N back toward time k (branch metrics such as 0.0498, 1.6487, 0.0821 and 0.3679 are shown on the transitions)

\beta_{k-2}(1) = 1.0 x 20.0885 = 20.0885
\beta_{k-2}(2) = 1.0 x 0.0498 = 0.0498

You can see these values for states 1 and 2 in the figure; they are then normalized.
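The backward recursion (1.14) mirrors the forward one. Again the gamma matrix is hypothetical, except for the two branch values 20.0885 and 0.0498 used in the example above.

```python
import numpy as np

beta_next = np.array([1.0, 0.0, 0.0, 0.0])   # (1.29): trellis ends in state 1

# Hypothetical transition metrics at this time step (same convention as above).
gamma_k = np.array([[20.0885, 0.0,    0.0498, 0.0   ],
                    [0.0498,  0.0,    0.3679, 0.0   ],
                    [0.0,     1.6487, 0.0,    0.0821],
                    [0.0,     0.0821, 0.0,    0.3679]])

# (1.14): beta_{k-1}(s') = sum over s of beta_k(s) * gamma_k(s', s)
beta_prev = gamma_k @ beta_next
print(beta_prev)   # -> [20.0885, 0.0498, 0, 0], then normalized as in the text
```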

Numerical issues

To prevent the data overflow that can happen in the very large trellis of a Turbo code, \alpha_k(s) and \beta_{k-1}(s') need to be normalized. The forward metrics are normalized by their sum:

\tilde{\alpha}_k(s) = \frac{\alpha_k(s)}{\sum_s \alpha_k(s)}    (1.15)

The reverse metric is similarly normalized. Its formulation looks different, but the denominator is the same quantity that normalized the forward metric, with the index changed to k-1:

\tilde{\beta}_{k-1}(s') = \frac{\beta_{k-1}(s')}{\sum_{s'} \sum_s \alpha_{k-2}(s') \gamma_{k-1}(s', s)}    (1.16)

How to calculate transition probabilities

Computing the transition metrics turns out to be the hardest part. To restate the definition of the transition metric from (1.10),

\gamma_k(s', s) = P(s, y_k | s', y_p)

The transition itself does not depend on the past, so we can remove the dependency on the past and then apply Bayes' theorem:

\gamma_k(s', s) = P(s, y_k | s', y_p) = P(s, y_k | s') = P(y_k | s', s) P(s | s')    (1.17)

Since the transition from s' to s is driven by the input bit, P(s | s') is just the a-priori probability of that bit, P(u_k):

\gamma_k(s', s) = P(y_k | s', s) P(u_k)    (1.18)

There are two terms in this final definition of the transition metric. Let's take the last one first. Here P(u_k) is the a-priori probability of the input bit. Take its log-likelihood:

L(u_k) = \log \frac{P(u_k = +1)}{P(u_k = -1)} = \log \frac{P(u_k = +1)}{1 - P(u_k = +1)}

Raising e to the power of this expression, we get

e^{L(u_k)} = \frac{P(u_k = +1)}{1 - P(u_k = +1)}

From here, some clever algebra gives us

P(u_k = +1) = e^{L(u_k)} (1 - P(u_k = +1)) = \frac{e^{L(u_k)}}{1 + e^{L(u_k)}} = \left( \frac{e^{-L(u_k)/2}}{1 + e^{-L(u_k)}} \right) e^{+L(u_k)/2}

and the general expression for both +1 and -1 is

P(u_k = \pm 1) = \left( \frac{e^{-L(u_k)/2}}{1 + e^{-L(u_k)}} \right) e^{\pm L(u_k)/2}    (1.19)

The common factor in parentheses is designated

A_k = \frac{e^{-L(u_k)/2}}{1 + e^{-L(u_k)}}

so the a-priori probability is now given by

P(u_k = \pm 1) = A_k e^{\pm L(u_k)/2}    (1.20)

Now for the second term in the transition metric expression of (1.18): since, given the branch, the received values depend only on the code bits c_k on that branch, P(y_k | s', s) can be written as

P(y_k | s', s) = P(y_k | c_k) = \prod_{l=1}^{n} P(y_k^l | c_k^l)

For the rate 1/2 code this amounts, up to constants, to a correlation of the received signal values with the trellis values c, namely y_k^{1,s} c_k^1 + y_k^{1,p} c_k^2; this correlation will appear in the exponent below. The received signal values y_k are assumed to be corrupted by an AWGN process, so in a Gaussian channel the probability P(y_k^l | c_k^l) is given by

P(y_k^l | c_k^l) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left( -\frac{(y_k^l - c_k^l)^2}{2\sigma^2} \right)    (1.21)

Applying (1.21) to (1.18), we get

P(y_k | s', s) = \exp\left( -\frac{(y_k^{1,s} - c_k^{1,s})^2}{2\sigma_n^2} - \sum_{i=2}^{q} \frac{(y_k^{i,p} - c_k^{i,p})^2}{2\sigma_n^2} \right)

Expanding the squares, the terms in y^2 and c^2 do not depend on the branch: the square of a c value is always equal to 1, since the square of +1 and the square of -1 are the same. Collecting these terms (and the Gaussian normalization constants) into a constant gives

P(y_k | s', s) = B_k \exp\left( \frac{y_k^{1,s} c_k^{1,s}}{\sigma_n^2} + \sum_{i=2}^{q} \frac{y_k^{i,p} c_k^{i,p}}{\sigma_n^2} \right)    (1.22)

where

B_k = \exp\left( -\frac{(y_k^{1,s})^2 + 1}{2\sigma_n^2} - \sum_{i=2}^{q} \frac{(y_k^{i,p})^2 + 1}{2\sigma_n^2} \right)

Now we can write the full equation for the transition metric by combining (1.22) and (1.20).

\gamma_k(s', s) = P(y_k | s', s) P(u_k) = A_k B_k \exp\left( \frac{L(c_k^1) c_k^1}{2} \right) \exp\left( \frac{y_k^{1,s} c_k^1}{\sigma_n^2} + \sum_{i=2}^{q} \frac{y_k^{i,p} c_k^i}{\sigma_n^2} \right)    (1.23)

A_k and B_k are constants and are not important, because when we put this expression into the LLR they cancel out. The index q = 2 for the rate 1/2 code we are using as an example.

Now we define

\sigma_n^2 = \frac{N_0}{2 E_c} = \frac{1}{2 \cdot \text{code rate} \cdot E_b/N_0} = \frac{p}{2 E_b/N_0}    (1.24)

where p is the inverse of the code rate, i.e. equal to 2 for code rate 1/2, and

\frac{E_c}{N_0} = \text{code rate} \cdot \frac{E_b}{N_0}    (1.25)

where E_b/N_0 is the ratio of the un-coded bit energy to the noise PSD, E_c is the coded bit energy, and E_c = code rate x E_b.

Substituting \sigma_n^2 = p / (2 E_b/N_0) into (1.23), we have

\gamma_k(s', s) = A_k B_k \exp\left( \frac{1}{2} L(c_k^1) c_k^1 + \frac{4 E_b/N_0}{p} \cdot \frac{1}{2} y_k^{1,s} c_k^1 + \frac{4 E_b/N_0}{p} \cdot \frac{1}{2} \sum_{i=2}^{q} y_k^{i,p} c_k^i \right)

With the channel reliability defined as

L_c = \frac{4 E_b/N_0}{p} = 4 \frac{E_c}{N_0}

this becomes

\gamma_k(s', s) = A_k B_k \exp\left( \frac{1}{2} L(c_k^1) c_k^1 + \frac{1}{2} L_c y_k^{1,s} c_k^1 \right) \exp\left( \frac{1}{2} L_c \sum_{i=2}^{q} y_k^{i,p} c_k^i \right)    (1.26)

where we define a new partial transition metric, the last factor of (1.26):

\gamma_k^e(s', s) = \exp\left( \frac{1}{2} L_c \sum_{i=2}^{q} y_k^{i,p} c_k^i \right)

In summary, the LLR of the information bit u_k at time k is

L(u_k) = \log \frac{\sum_{u_k = +1} \tilde{\alpha}_{k-1}(s') \gamma_k(s', s) \tilde{\beta}_k(s)}{\sum_{u_k = -1} \tilde{\alpha}_{k-1}(s') \gamma_k(s', s) \tilde{\beta}_k(s)}, \quad \tilde{\alpha}_k(s) = \frac{\sum_{s'} \alpha_{k-1}(s') \gamma_k(s', s)}{\sum_s \sum_{s'} \alpha_{k-1}(s') \gamma_k(s', s)}    (1.27)

\alpha_0(s) = 1 if s = 1, 0 otherwise    (1.28)

\beta_N(s) = 1 if s = 1, 0 otherwise, \quad \tilde{\beta}_{k-1}(s') = \frac{\sum_s \beta_k(s) \gamma_k(s', s)}{\sum_s \sum_{s'} \alpha_{k-2}(s') \gamma_{k-1}(s', s)}    (1.29)

\gamma_k(s', s) = \exp\left( \frac{1}{2} L^e(c_k^1) c_k^1 + \frac{1}{2} L_c y_k^{1,s} c_k^1 \right) \gamma_k^e(s', s), \quad \gamma_k^e(s', s) = \exp\left( \frac{1}{2} L_c \sum_{i=2}^{q} y_k^{i,p} c_k^i \right)    (1.30)

Substitute (1.30) into (1.27) to get

L(u_k) = \log \frac{\sum_{u_k = +1} \tilde{\alpha}_{k-1}(s') \tilde{\beta}_k(s) \exp\left( \frac{1}{2} L^e(c_k^1) c_k^1 + \frac{1}{2} L_c y_k^{1,s} c_k^1 \right) \gamma_k^e(s', s)}{\sum_{u_k = -1} \tilde{\alpha}_{k-1}(s') \tilde{\beta}_k(s) \exp\left( \frac{1}{2} L^e(c_k^1) c_k^1 + \frac{1}{2} L_c y_k^{1,s} c_k^1 \right) \gamma_k^e(s', s)}    (1.31)

If q = 2, the equation becomes

L(u_k) = \log \frac{\sum_{u_k = +1} \tilde{\alpha}_{k-1}(s') \tilde{\beta}_k(s) \exp\left( \frac{1}{2} L^e(c_k^1) c_k^1 + \frac{1}{2} L_c y_k^{1,s} c_k^1 \right) \exp\left( \frac{1}{2} L_c y_k^{2,p} c_k^2 \right)}{\sum_{u_k = -1} \tilde{\alpha}_{k-1}(s') \tilde{\beta}_k(s) \exp\left( \frac{1}{2} L^e(c_k^1) c_k^1 + \frac{1}{2} L_c y_k^{1,s} c_k^1 \right) \exp\left( \frac{1}{2} L_c y_k^{2,p} c_k^2 \right)}

The terms L^e(c_k^1) and L_c y_k^{1,s} can be pulled out of the sums, because every branch in the numerator has c_k^1 = +1 and every branch in the denominator has c_k^1 = -1; the factors of 1/2 and the opposite signs of c_k^1 combine so that each term appears once. We get

L(u_k) = L^e(c_k^1) + L_c y_k^{1,s} + \log \frac{\sum_{u_k = +1} \tilde{\alpha}_{k-1}(s') \gamma_k^e(s', s) \tilde{\beta}_k(s)}{\sum_{u_k = -1} \tilde{\alpha}_{k-1}(s') \gamma_k^e(s', s) \tilde{\beta}_k(s)}

The first term is the a-priori information about u_k, the second is the scaled systematic channel value, and the remaining log-ratio is the extrinsic information: the new, independently derived knowledge that one decoder passes to the other in each iteration.
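To tie the whole derivation together, here is a compact sketch of a single MAP pass over a toy trellis. Everything concrete in it is an assumption for illustration: the (7,5)-octal RSC trellis from the earlier encoder sketch, the three made-up received pairs, the value of L_c, and the termination of the trellis in state 1. Part 2 of this article works through a genuine step-by-step example.

```python
import numpy as np

n_states = 4                                # states encode the registers (s1, s2)

def next_state_and_parity(state, u):
    """Assumed (7,5)-octal RSC trellis, matching the earlier encoder sketch."""
    s1, s2 = state >> 1, state & 1
    fb = u ^ s1 ^ s2                        # recursive feedback bit
    return (fb << 1) | s1, fb ^ s2          # new state, parity bit

Lc, L_apriori = 2.0, 0.0                    # assumed channel reliability, no prior
y = [(0.9, 1.1), (-0.8, 0.7), (1.2, -0.6)]  # made-up (y_sys, y_par) per step
N = len(y)
bipolar = lambda b: 2 * b - 1

# Which input bit drives each (s', s) branch; -1 marks invalid transitions.
u_of_branch = np.full((n_states, n_states), -1)
for sp in range(n_states):
    for u in (0, 1):
        s, _ = next_state_and_parity(sp, u)
        u_of_branch[sp, s] = u

# Branch metrics per (1.26), dropping the constants Ak*Bk that cancel.
gamma = np.zeros((N, n_states, n_states))
for k, (ys, yp) in enumerate(y):
    for sp in range(n_states):
        for u in (0, 1):
            s, par = next_state_and_parity(sp, u)
            cs, cp = bipolar(u), bipolar(par)
            gamma[k, sp, s] = np.exp(0.5 * (L_apriori * cs
                                            + Lc * ys * cs + Lc * yp * cp))

# Forward (1.13) and backward (1.14) recursions, normalized by their sums
# for simplicity (any common scaling cancels in the LLR).
alpha = np.zeros((N + 1, n_states)); alpha[0, 0] = 1.0   # (1.28)
for k in range(N):
    alpha[k + 1] = gamma[k].T @ alpha[k]
    alpha[k + 1] /= alpha[k + 1].sum()
beta = np.zeros((N + 1, n_states)); beta[N, 0] = 1.0     # (1.29), assumed termination
for k in range(N, 0, -1):
    beta[k - 1] = gamma[k - 1] @ beta[k]
    beta[k - 1] /= beta[k - 1].sum()

# LLR of each bit, equation (1.27)
for k in range(N):
    num = den = 0.0
    for sp in range(n_states):
        for s in range(n_states):
            if u_of_branch[sp, s] < 0:
                continue
            term = alpha[k, sp] * gamma[k, sp, s] * beta[k + 1, s]
            if u_of_branch[sp, s] == 1:
                num += term
            else:
                den += term
    print(f"L(u_{k+1}) = {np.log(num / den):+.3f}")
```

Subtracting L_c y_k^{1,s} and the a-priori term from each printed L(u_k) would leave exactly the extrinsic log-ratio derived above, the quantity a full turbo decoder would interleave and hand to the other decoder.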
