The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding


IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 47, NO. 2, FEBRUARY 2001, p. 599

The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding

Thomas J. Richardson and Rüdiger L. Urbanke

Abstract—In this paper, we present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity, the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. [1] in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable, and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.

Index Terms—Belief propagation, iterative decoding, low-density parity-check (LDPC) codes, message-passing decoders, turbo codes, turbo decoding.

Manuscript received November 16, 1999; revised August 18, 2000. T. J. Richardson was with Bell Labs, Lucent Technologies, Murray Hill, NJ 07974 USA. He is now with Flarion Technologies, Bedminster, NJ 07921 USA (e-mail: richardson@flarion.com). R. L. Urbanke was with Bell Labs, Lucent Technologies, Murray Hill, NJ 07974 USA. He is now with the EPFL, LTHC-DSC, CH-1015 Lausanne, Switzerland (e-mail: rudiger.urbanke@epfl.ch). Communicated by F. R. Kschischang, Associate Editor for Coding Theory. Publisher Item Identifier S 0018-9448(01)00737-4.

I. INTRODUCTION

In the wake of the phenomenal success of turbo codes [2], another class of codes exhibiting similar characteristics and performance was rediscovered [3], [4]. This class of codes, called low-density parity-check (LDPC) codes, was first introduced by Gallager in his thesis in 1961 [5]. In the period between Gallager's thesis and the invention of turbo codes, LDPC codes and their variants were largely neglected. Notable exceptions are the work of Zyablov and Pinsker [6], Tanner [7], and Margulis [8].

In their original (regular) incarnation, the performance of LDPC codes over the binary-input additive white Gaussian noise (BIAWGN) channel is only slightly inferior to that of parallel or serially concatenated convolutional codes (turbo codes). For example, a rate one-half LDPC code of block length 10 000 requires an E_b/N_0 of roughly 1.4 dB to achieve a bit-error probability of 10^-6, whereas an equivalent turbo code with comparable complexity achieves the same performance at, roughly, 0.8 dB. Shannon capacity dictates that, in order to achieve reliable transmission at a rate of one-half bit per channel use over the continuous-input AWGN channel, an E_b/N_0 of at least 0 dB is required, and this increases to 0.187 dB if we restrict the input to be binary.

It is well known that any linear code can be expressed as the set of solutions of a parity-check equation Hx^T = 0. Furthermore, if the code is binary then H takes elements in GF(2) and the arithmetic is also over this field.
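The parity-check description of a linear code lends itself to a direct membership test. The following sketch is our own illustration, using a small arbitrary matrix H that does not come from the paper; it simply checks H x^T = 0 over GF(2):

```python
# Illustrative sketch: a binary linear code as the solution set of H x^T = 0.
# The 3x6 parity-check matrix below is a hypothetical example for demonstration.

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def is_codeword(H, x):
    """Return True iff every parity check is satisfied over GF(2)."""
    return all(sum(h * b for h, b in zip(row, x)) % 2 == 0 for row in H)

# The all-zero word trivially satisfies every check; other words must be tested.
zero = [0, 0, 0, 0, 0, 0]
```

Each row of H is one parity-check equation; a word belongs to the code exactly when all rows are satisfied simultaneously.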
A (d_v, d_c)-regular LDPC code, as originally defined by Gallager, is a binary linear code determined by the condition that every codeword bit participates in exactly d_v parity-check equations and that every such check equation involves exactly d_c codeword bits, where d_v and d_c are parameters that can be chosen freely.^1 In other words, the corresponding parity-check matrix H has d_v ones in each column and d_c ones in each row. The modifier "low-density" conveys the fact that the fraction of nonzero entries in H is small; in particular, the number of nonzero entries is linear in the block length n, as compared to "random" linear codes for which the expected number of ones grows like n^2. Following the lead of Luby et al. [1], we do not focus on particular LDPC codes in this paper, but rather analyze the performance of ensembles of codes. One way of constructing an ensemble of LDPC codes would be to consider the set of all parity-check matrices of length n which fulfill the above row and column sum constraints for some fixed parameters (d_v, d_c), and to equip this set with a uniform probability distribution. It is more convenient, however, to proceed as suggested in [1] and to define the ensemble of LDPC codes via bipartite graphs (see Section II-A), since the resulting ensemble is easier to analyze.^2 Many variations and extensions of Gallager's original definition are possible, and these extensions are important in order to construct LDPC codes that approach capacity. To mention only the two most important ones: a) constructing irregular as opposed to regular codes [1], [9], [10], i.e., variables may participate in different numbers of checks, and check equations may involve different numbers of variables; and b) allowing nodes to represent groups of bits rather than single bits [11] (see also [5]).

In his thesis, Gallager discussed several decoding algorithms that work directly on the nodes and edges of the bipartite graph (see Section II) representing the code.
^1 As we will discuss in more detail in Section II-A, the design rate of such a code is equal to 1 - d_v/d_c.
^2 In any case, these two ensembles are essentially identical if we consider large block lengths.

0018-9448/01$10.00 © 2001 IEEE

Here, one set of nodes,

the "variable" nodes, correspond to the variables, i.e., the codeword bits or, equivalently, the columns of H; the other set, the "check" nodes, correspond to the constraints or, equivalently, the rows of H. A variable node is connected to a constraint node in the graph if and only if the corresponding variable participates in the corresponding constraint. The decoding algorithms of interest work iteratively. Information is exchanged between neighboring nodes in the graph by passing messages along the edges. Each message can be associated to the codeword bit corresponding to the variable node incident to the edge carrying the message. Invariably, the messages can be interpreted as conveying an estimate of that bit's value along with some reliability information for that estimate. Associated with such a message is a hard decision: one can consider the bit's most likely value implied by the message. We will say that a message is "correct/incorrect" if its associated hard decision is correct/incorrect, i.e., does/does not agree with the true value of the codeword bit. In the sequel, the precise meaning will be clear from context.

Gallager [5] proposed the following step-by-step program to determine the worst binary-symmetric channel (BSC), i.e., the BSC with the largest crossover probability, over which an appropriately constructed (d_v, d_c)-regular LDPC code in conjunction with a given iterative decoding algorithm can be used to transmit information reliably.

1) [Code Construction] For increasing length n, construct a sequence of (d_v, d_c)-regular LDPC codes that do not contain cycles of length less than or equal to 2l(n), where l(n) tends to infinity with n.

2) [Density Evolution and Threshold Determination] Determine the average fraction of incorrect messages passed at the l-th iteration assuming that the graph does not contain cycles of length 2l or less.
For an iterative decoder with a finite message alphabet this fraction can be expressed by means of a system of coupled recursive functions which depend on the ensemble parameters (d_v, d_c) and the channel parameter. Determine the maximum channel parameter, called the threshold, with the property that for all parameters strictly less than this threshold the expected fraction of incorrect messages approaches zero as the number of iterations increases.

3) Conclude that if one applies the chosen decoding algorithm to this sequence of codes with l(n) decoding rounds while operating over a BSC with parameter less than the threshold found in Step 2), then the bit-error probability decreases exponentially in a positive power of the block length n, with constants that depend on (d_v, d_c); one actually requires that the number of decoding rounds l(n) grow with n. (Since the block-error probability is at most n times the bit-error probability, it follows that the block-error probability can be upper-bounded in the same way.)

Although the analysis presented in this paper contains many of the same elements as the program suggested by Gallager, there is one crucial difference. As remarked earlier, we do not focus on any particular (sequence of) codes but rather on (sequences of) ensembles, as suggested in [1]. The main advantage gained by this approach is the following: even in the regular case it is not an easy task to construct codes which contain only large cycles, and this task becomes almost hopeless once we allow irregular graphs. Sampling an element of an (irregular) ensemble, on the other hand, is almost trivial. This approach was applied with great success in [1] to the analysis (and design) of LDPC ensembles used over the binary erasure channel (BEC) as well as the BSC when employed in conjunction with a one-bit message-passing decoder.

The contribution of the present paper is to extend the method of analysis in [1] to a very broad class of channels and decoding algorithms and to introduce some new tools for the analysis of these decoding systems.
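For a one-bit message-passing decoder on the BSC, the coupled recursion of Step 2) collapses to a single scalar recursion. The sketch below is our own illustration, using the standard recursion for Gallager's algorithm A (not a formula stated in this section), and locates the threshold of Step 2) by bisection:

```python
def gallager_a_converges(p0, dv=3, dc=6, iters=2000):
    """Scalar density evolution for Gallager's algorithm A on the BSC.

    p0 is the crossover probability; p tracks the probability that a
    variable-to-check message is incorrect."""
    p = p0
    for _ in range(iters):
        q = (1.0 - 2.0 * p) ** (dc - 1)  # bias of the parity of dc-1 messages
        p = (p0 * (1.0 - ((1.0 + q) / 2.0) ** (dv - 1))
             + (1.0 - p0) * ((1.0 - q) / 2.0) ** (dv - 1))
    return p < 1e-6

def threshold(dv=3, dc=6, tol=1e-4):
    """Bisection for the largest crossover probability that still converges."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if gallager_a_converges(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return lo
```

For the (3, 6) ensemble this bisection settles near 0.039, illustrating how much a one-bit decoder gives up relative to belief propagation on the same channel.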
To clarify further the nature of the results we briefly describe an example and formulate some claims that represent what we consider to be the apex of the theory. Let us consider the ensemble of (3, 6)-regular LDPC codes (see Section II for its definition) of length n for use over a BIAWGNC, i.e., we transmit a codeword consisting of n bits and receive values y_i = x_i + z_i, where the z_i are independent and identically distributed (i.i.d.) zero-mean Gaussian random variables with variance sigma^2. Choose a code at random from this ensemble (see Section II), choose a message with uniform probability from the set of messages, and transmit the corresponding codeword. Decode the received word using the belief-propagation algorithm. The following statements are consequences of the theory:

[Concentration] Let P^(l) be the expected fraction of incorrect messages which are passed in the l-th iteration, where the expectation is over all instances of the code, the choice of the message, and the realization of the noise. For any eps > 0, the probability that the actual fraction of incorrect messages which are passed in the l-th iteration for any particular such instance lies outside the range (P^(l) - eps, P^(l) + eps) converges to zero exponentially fast in n.

[Convergence to Cycle-Free Case] P^(l) converges to P_inf^(l) as n tends to infinity, where P_inf^(l) is the expected fraction of incorrect messages passed in the l-th decoding round assuming that the graph does not contain cycles of length 2l or less.

[Density Evolution and Threshold Determination] P_inf^(l) is computable by a deterministic algorithm. Furthermore, there exists a channel parameter sigma* (in this case the standard deviation of the noise),^3 the threshold, with the following property: if sigma < sigma* then P_inf^(l) tends to zero as l tends to infinity; if, on the other hand, sigma > sigma*, then there exists a constant gamma(sigma) > 0 such that P_inf^(l) > gamma(sigma) for all l.
For the current example, using the methods outlined in Section III-B to efficiently implement density evolution, we get sigma* ≈ 0.88.

^3 Gallager attempted to compute the threshold for the BSC by a combinatorial approach. The complexity of his approach precluded considering more than a few iterations, though. For the threshold of the (3, 6)-regular LDPC ensemble he obtained the approximate lower bound of 0.07, whereas density evolution gives 0.084.

The first statement asserts that (almost) all codes behave alike, and so the determination of the average behavior of the ensemble suffices to characterize the individual behavior of (almost)

all codes. The second statement then claims that for long codes this average behavior is equal to the behavior which one can observe on cycle-free graphs, and that for the cycle-free case this average behavior is computable by a deterministic algorithm. Finally, from the last statement we conclude that long codes will exhibit a threshold phenomenon, clearly separating the region where reliable transmission is possible from that where it is not. As a consequence, if we want to transmit over the BIAWGNC using codes from the (3, 6)-regular LDPC ensemble and a belief-propagation decoder, we are confronted with the following dichotomy. If sigma < sigma* and if we are given any target bit-error probability, call it P_b, then we can choose an appropriate number of decoding iterations l and an appropriate block length n. We can then be assured that all but at most an exponentially (in n) small subset from the (3, 6)-regular LDPC ensemble of length n, if decoded for l rounds, will exhibit a bit-error probability of at most P_b.^4 On the other hand, if sigma > sigma* and if we are willing to perform at most l decoding rounds, where l is a fixed natural number, then all but at most an exponentially (in n) small subset of codes of the (3, 6)-regular LDPC ensemble will exhibit a bit-error probability of at least gamma(sigma)^5 if decoded by means of at most l rounds of a belief-propagation decoder. With regard to this converse, we conjecture that actually the following much stronger statement is true, namely, that all codes in the (3, 6)-regular LDPC ensemble have bit-error probability of at least gamma(sigma), regardless of their length and regardless of how many iterations are performed. This will follow if one can prove that cycles in the graph (and the resulting dependence) can only degrade the performance of the iterative decoder on average,^6 a statement which seems intuitive but has so far eluded proof.

Each of the above statements generalizes to some extent to a wide variety of codes, channels, and decoders. The concentration result holds in essentially all cases and depends mainly on the fact that decoding is "local." Similarly, the convergence of P^(l) to P_inf^(l) is very general, holding for all cases of interest. It is a consequence of the fact that typically the decoding neighborhoods become "tree-like" if we fix the number of iterations and let the block length tend to infinity. Concerning the density evolution, for decoders with a finite message alphabet the quantity P_inf^(l) can in general be expressed by means of a system of coupled recursive functions (see Section III-A). For message-passing algorithms with infinite message alphabets the situation is considerably more involved. Nevertheless, for the important case of the sum-product or belief-propagation decoder we will present in Section III-B an efficient algorithm to calculate P_inf^(l). Finally, the existence of a threshold as asserted in the last statement requires more assumptions. It depends on both the decoding algorithm used and the class of channels considered.

^4 Although this is not the main focus of this paper, one can actually achieve an exponentially decreasing probability of error. To achieve this, simply use an outer code which is capable of recovering from a small linear fraction of errors. If properly designed, the LDPC code will decrease the bit probability of error below this linear fraction with exponential probability within a fixed number of decoding rounds. The outer code then removes all remaining errors. Clearly, the incurred rate loss can be made as small as desired.
^5 Note that gamma, the probability of sending an erroneous message, is at least 0.068. To this corresponds a bit-error probability which is at least 0.05.
^6 By constructing simple examples, it is not very hard to show that for some specific bits the probability of error can actually be decreased by the presence of cycles.
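The efficient algorithm of Section III-B evolves message densities exactly. As a crude, purely illustrative stand-in of our own (not the authors' algorithm), one can approximate density evolution for belief propagation on the BIAWGN channel by Monte Carlo sampling of log-likelihood-ratio (LLR) messages for the (3, 6) ensemble:

```python
import math
import random

def mc_density_evolution(sigma, dv=3, dc=6, n=10000, iters=30, seed=0):
    """Monte Carlo approximation of density evolution for BP on the BIAWGNC.

    Tracks a population of variable-to-check LLR messages, assuming the
    all-(+1) codeword was sent.  Returns the fraction of incorrect
    (negative-LLR) messages after `iters` iterations."""
    rng = random.Random(seed)
    # Channel LLR for y = 1 + z, z ~ N(0, sigma^2), is 2y / sigma^2.
    chan = [2.0 * (1.0 + rng.gauss(0.0, sigma)) / sigma**2 for _ in range(n)]
    msgs = chan[:]          # iteration-0 variable-to-check messages
    cap = 1.0 - 1e-12       # keep atanh finite
    for _ in range(iters):
        # Check-node update: "tanh rule" over dc - 1 independent inputs.
        c2v = []
        for _ in range(n):
            prod = 1.0
            for _ in range(dc - 1):
                prod *= math.tanh(rng.choice(msgs) / 2.0)
            c2v.append(2.0 * math.atanh(max(-cap, min(cap, prod))))
        # Variable-node update: channel LLR plus dv - 1 independent inputs.
        msgs = [rng.choice(chan) + sum(rng.choice(c2v) for _ in range(dv - 1))
                for _ in range(n)]
    return sum(1 for m in msgs if m < 0.0) / n
```

Well below the threshold (e.g., sigma = 0.5) the incorrect fraction collapses toward zero within a few iterations; well above it (e.g., sigma = 1.2) the fraction stays bounded away from zero, in line with the dichotomy described above.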
As we will see, such a threshold always exists if we are dealing with a family of channels which can be ordered by physical degradation (see Section III-B1) and if the decoding algorithm respects this ordering. In some other instances, like in the case of decoders with finite message alphabets, the existence of a threshold can often be shown by analyzing the corresponding recursions directly.

Roughly speaking, the general statement is then of the following kind. We are given a particular ensemble of codes, a family of channels, and a particular decoding algorithm. From these quantities we will be able to calculate the critical channel parameter which is called the threshold. Then almost any long enough code can be used to provide sufficiently reliable transmission of information if decoded by the given decoder for a sufficiently large number of iterations, provided that the actual channel parameter is below this threshold. Conversely, reliable transmission over channels with parameter above this threshold is not possible with most codes chosen at random from "long" ensembles. Therefore, one should think of the threshold as the equivalent of a "random capacity" for a given ensemble of codes and a particular decoder, except that in the case of random capacity the channel is usually fixed and one is interested in the largest rate, whereas for the threshold we fix the rate and ask for the "worst" channel.

The outline of the paper is as follows. Section II introduces the class of codes, channels, and decoding algorithms considered in this paper. We show that under suitable symmetry conditions the error probability is independent of the transmitted codeword. Section III then focuses on the determination of P_inf^(l) and the determination of the threshold value. We first investigate decoders with finite message alphabets.
In thiscan be expressed by meanscase, as pointed out above,of multiple coupled recursive functions, and the existence ofa threshold can be shown by a closer investigation of theserecursions. We will see that by a judicious choice of the messages the resulting thresholds can be made surprisingly closeto the ultimate limit as given by the (Shannon) capacity bound.We next investigate the important case of belief-propagationdecoders. Despite the fact that the message alphabet is infinite,in this case we describe an efficient algorithm that can be usedfor a belief-propagation decoder for anyto calculatebinary-input memoryless output-symmetric channel to anydesired accuracy. We also show that in this case the existence ofa threshold is guaranteed if the channel family can be orderedby physical degradation, as is the case for many importantfamilies of channels, including the BEC, BSC, BIAWGNC, andthe binary-input Laplace (BIL) channel. In Section IV, we willshow that, as the length of the code increases, the behavior ofindividual instances concentrates around the expected behaviorand that this expected behavior converges to the behavior forthe cycle-free case.The ideas presented in this paper are broadly applicable, andextensions of the general method to irregular LDPC codes,LDPC codes over larger alphabets, turbo codes, and otherconcatenated coding schemes are outlined in Section V.

II. BASIC NOTATION AND ASSUMPTIONS

In this section we will introduce and discuss some of the basic notation and assumptions that we will use throughout this paper. We start by giving a precise definition of the ensemble of (d_v, d_c)-regular LDPC codes that we will consider. We then discuss the notion of decoding neighborhoods. This notion will play an important role when giving a proof of the concentration theorem. After a brief look at the class of binary-input memoryless channels we will introduce the class of message-passing decoders, which are iterative decoders working on the graph and which obey the extrinsic information principle well known from turbo decoding. Finally, we will see that under a suitable set of symmetry assumptions the conditional decoding probability will become independent of the transmitted codeword, allowing us in the sequel to assume that the all-one codeword was transmitted.

A. Ensembles

Let n be the length of the binary code given as the set of solutions to the parity-check equation Hx^T = 0. We construct a bipartite graph with n variable nodes and n d_v/d_c check nodes. Each variable node corresponds to one bit of the codeword, i.e., to one column of H, and each check node corresponds to one parity-check equation, i.e., to one row of H. Edges in the graph connect variable nodes to check nodes and are in one-to-one correspondence with the nonzero entries of H. Since each check equation typically decreases the number of degrees of freedom by one, it follows that the design rate of the code is

r = 1 - d_v/d_c.

The actual rate of a given code may be higher since these check equations might not all be independent, but we shall generally ignore this possibility. There are n d_v edges in the bipartite graph, d_v edges incident to each variable node on the left and d_c edges incident to each check node on the right. Fig. 1 gives an example of a (3, 6)-regular code of length 10.

Fig. 1. A (3, 6)-regular code of length 10. There are 10 variable nodes and five check nodes. The n d_v = 30 variable "sockets" and the equally many check "sockets" are labeled (not shown).

The ensemble of (d_v, d_c)-regular LDPC codes of length n which we consider in this paper is defined as follows. Assign to each node d_v or d_c "sockets" according to whether it is a variable node or a check node, respectively. Label the variable and check sockets separately with the set of letters {1, ..., n d_v} in some arbitrary fashion. Pick a permutation on n d_v letters at random with uniform probability from the set of all (n d_v)! such permutations. The corresponding (labeled) bipartite graph is then defined by identifying edges with pairs of sockets, pairing each variable socket with the check socket given by the permutation. This induces a uniform distribution on the set of labeled bipartite graphs. In practice, one usually modifies the permutation in an attempt to obtain the best possible performance for the given length. In particular, one avoids double edges between the nodes and excessive overlap of the neighbor sets of the nodes.

Strictly speaking, edges are unordered pairs of sockets, one denoting the corresponding variable node socket and the other the check node socket. It is often more convenient to think of an edge as a pair of nodes (v, c), where v and c are the variable and check nodes incident to the given edge. If the graph does not contain parallel edges then both descriptions are equivalent and we can freely switch between the two representations. Although this constitutes a moderate abuse of notation, we will maintain the notation (v, c) even if parallel edges are not excluded. The reader is then advised to think of (v, c) as some edge which connects the variable node v to the check node c. This simplifies the notation significantly and should not cause any confusion. By a directed edge we mean an ordered pair (v, c) or (c, v) corresponding to the edge (v, c). When we say that a message traverses a directed edge we mean that it traverses it in the indicated direction. When we say that a message traverses an undirected edge we mean that it traverses it in some direction.

B. Decoding Neighborhoods

A path in the graph is a directed sequence of directed edges e_1, ..., e_k such that, if e_i = (u_i, w_i), then w_i = u_{i+1} for i = 1, ..., k - 1. The length of the path is the number of directed edges in it, and we say that the path starts from u_1, ends at w_k, and connects u_1 to w_k. Given two nodes in the graph, we say that they have distance d if they are connected by a path of length d but not by a path of length less than d. (If the nodes are not connected then we say the distance is infinite.) For a given node v, we define its neighborhood of depth d, denoted by N_d(v), as the induced subgraph consisting of all nodes reached and edges traversed by paths of length at most d starting from v (including v). Note that for any two nodes u and v we have, by symmetry of the distance function,

u ∈ N_d(v) if and only if v ∈ N_d(u).    (1)

Fig. 2. The directed neighborhood of depth 2 of the directed edge (v, c).

Let e be a particular edge between variable node v and check node c, i.e., e = (v, c) according to our convention. The undirected neighborhood of depth d of e, denoted by N_d(e), is defined as N_d(v) ∪ N_d(c). We claim that for any pair of edges e and e'

e' ∈ N_d(e) if and only if e ∈ N_d(e').    (2)

By symmetry, it is clearly enough to prove the implication in one direction. Hence, assume that e' ∈ N_d(e). A directed version of e' lies on a path of length at most d starting from either v or c. It follows that there exists a path starting from either v or c and ending at either v' or c' of length at most d. Reversing the orientation of the directed edges in the path, we obtain a path of length at most d from either v' or c' to either v or c. Thus, there exists a path of length at most d starting from either v' or c' containing a directed version of the edge e. We conclude that e ∈ N_d(e').

The directed neighborhood of depth d of a directed edge (v, c) is defined as the induced subgraph containing all edges and nodes on paths of length at most d starting from v whose first edge is distinct from the directed edge (v, c). Fig. 2 gives an example of such a directed neighborhood of depth 2. A directed neighborhood of a directed edge (c, v) is defined analogously. If the induced subgraph (corresponding to a directed or undirected neighborhood) is a tree then we say that the neighborhood is tree-like; otherwise, we say that it is not tree-like. Note that the neighborhood is tree-like if and only if all involved nodes are distinct. For a fixed depth d, one can prove that the number of (distinct) nodes as well as the number of (distinct) edges in a neighborhood of a node or of an edge (undirected or directed) of depth d is upper-bounded by a constant depending only on d, d_v, and d_c (see [5, p. 92]).

C. Binary-Input Memoryless Channels

The class of channels we consider in this paper is the class of memoryless channels with binary input alphabet {±1} and discrete or continuous output alphabet. We will, however, indicate some extensions in Section V.

Example 1 [BSC]: Let x_t, x_t ∈ {±1}, be the channel input at time t, and let y_t, y_t ∈ {±1}, be the output at time t. The BSC with parameter eps is characterized by the relation y_t = x_t z_t, where z_t is a sequence of i.i.d. random variables with Pr{z_t = 1} = 1 - eps and Pr{z_t = -1} = eps. The channel is depicted in Fig. 3. It is well known (see [12, p. 186]) that the (Shannon) capacity of this channel is

C_BSC = 1 - h(eps)    (3)

where h(x) := -x log2(x) - (1 - x) log2(1 - x) is the binary entropy function [12, p. 13].

Fig. 3. The BSC with parameter eps.

Example 2 [Continuous Additive Channels: Gaussian and Laplace]: Consider a memoryless binary-input channel with continuous output alphabet and additive noise. More precisely, let the channel be modeled by y_t = x_t + z_t, where x_t ∈ {±1} and where z_t is a sequence of i.i.d. random variables with probability density p(z). The best known example is the BIAWGN channel for which

p(z) = (1 / sqrt(2 pi sigma^2)) e^{-z^2 / (2 sigma^2)}.

The capacity of the BIAWGN channel is given by

C_BIAWGN = -∫ phi(y) log2 phi(y) dy - (1/2) log2(2 pi e sigma^2)    (4)

where phi(y) := (p(y - 1) + p(y + 1))/2 is the density of the channel output under a uniform input. As a second example, we consider the BIL channel for which

p(z) = (1 / (2 lambda)) e^{-|z| / lambda}.

The capacity of the BIL channel is given by

C_BIL = -∫ phi(y) log2 phi(y) dy - log2(2 e lambda)    (5)

with phi defined as before.
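As a numerical check of (3) and (4), the sketch below evaluates both capacities directly, expressing (4) as h(Y) - h(Z) with Z the Gaussian noise (our own illustration; the integration grid and its width are arbitrary choices):

```python
import math

def h2(x):
    """Binary entropy function h(x) in bits, as in (3)."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def c_bsc(eps):
    """Capacity of the BSC, (3): C = 1 - h(eps)."""
    return 1.0 - h2(eps)

def c_biawgn(sigma, lim=10.0, step=1e-3):
    """Capacity of the BIAWGN channel, (4): C = h(Y) - (1/2) log2(2 pi e sigma^2),
    where Y = X + Z, X uniform on {+1, -1}, Z ~ N(0, sigma^2)."""
    def gauss(z):
        return math.exp(-z * z / (2.0 * sigma**2)) / math.sqrt(2.0 * math.pi * sigma**2)
    hy, y = 0.0, -1.0 - lim
    while y <= 1.0 + lim:
        phi = 0.5 * (gauss(y - 1.0) + gauss(y + 1.0))  # output density
        if phi > 0.0:
            hy -= phi * math.log2(phi) * step          # Riemann sum for h(Y)
        y += step
    return hy - 0.5 * math.log2(2.0 * math.pi * math.e * sigma**2)
```

For instance, c_bsc(0.11) is close to one-half, and c_biawgn(0.979) is likewise close to one-half, consistent with the 0.187-dB binary-input Shannon limit quoted in the Introduction for rate one-half.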

D. Message-Passing Decoders

Without loss of generality, we can assume that the channel output alphabet is equal to the decoder input alphabet. Given a code and a channel model there are, in general, many reasonable decoding algorithms based on message passing, the class of algorithms we consider in this paper. These algorithms behave as follows. At time zero, every variable node v has an associated received message, a random variable taking values in the output alphabet. Messages are exchanged between nodes in the graph along the edges in the graph in discrete time steps. First, each variable node v sends back to each neighboring check node c a message taking values in some message alphabet M. Typically, at time zero, a variable node sends its received value as its first message (this requires that the output alphabet be contained in M). Each check node processes the messages it receives and sends back to each neighboring variable node a message taking values in M. Each variable node now processes the messages it receives together with its associated received value to produce new messages, which it then sends to its neighboring check nodes. For every time l, l >= 1, a cycle or iteration of message passing proceeds with check nodes processing and transmitting messages followed by the variable nodes processing and transmitting messages.

An important condition on the processing is that a message sent from a node along an adjacent edge e may not depend on the message previously received along edge e. There is a good reason for excluding the incoming message along edge e in determining the outgoing message along edge e. In turbo-coding terminology, this guarantees that only extrinsic information is passed along. This is known to be an important property of good message-passing decoders. Even more importantly, it is exactly this restriction that makes it possible to analyze the behavior of the decoder.

Let Psi_v^(l) denote the variable node message map and let Psi_c^(l) denote the check node message map, as a function of the iteration number l. These functions represent the processing performed at the variable nodes and constraint nodes, respectively. Note that, because of the imposed restriction on the dependence of messages, the outgoing message only depends on d_v - 1 incoming messages at a variable node and d_c - 1 incoming messages at a check node. Note also that we allow these maps to depend on the iteration number. We assume that each node of the same degree invokes the same message map for each edge and that all edges connected to such a node are treated equally. For completeness, let Psi_v^(0) denote the initial message map, i.e., node v initially transmits the message given by applying Psi_v^(0) to its received value to all of its neighbors.

For any given set, let P(·) denote the space of probability distributions defined over it. In our analysis the messages will be random variables, and we are interested in tracking the evolution of their distributions during the execution of the algorithm. To this end, we define the map induced by Psi_c^(l) on distributions as follows. If the d_c - 1 incoming messages are independent random variables with given distributions, then the distribution of the outgoing message is, by definition, the distribution of the image of these random variables under Psi_c^(l). We define the map induced by Psi_v^(l) on distributions in an analogous way.

E. Symmetry Assumptions: Restriction to All-One Codeword

It is helpful to think of the messages (and the received values) in the following way. Each message represents an estimate of a particular codeword bit. More precisely, it contains an estimate of its sign and, possibly, some estimate of its reliability. To be concrete, consider a discrete case and assume that the output alphabet is
