Overview Of The H.264 / AVC Video Coding Standard


IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, JULY 2003

Overview of the H.264/AVC Video Coding Standard

Thomas Wiegand, Gary J. Sullivan, Gisle Bjontegaard, and Ajay Luthra

Abstract—H.264/AVC is the newest video coding standard of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). The main goals of the H.264/AVC standardization effort have been enhanced compression performance and provision of a "network-friendly" video representation addressing "conversational" (video telephony) and "non-conversational" (storage, broadcast, or streaming) applications. H.264/AVC has achieved a significant improvement in rate-distortion efficiency relative to existing standards. This article provides an overview of the technical features of H.264/AVC, describes profiles and applications for the standard, and outlines the history of the standardization process.

Index Terms—Video, Standards, MPEG-2, H.263, MPEG-4, AVC, H.264, JVT.

I. INTRODUCTION

H.264/AVC is the newest international video coding standard [1]. By the time of this publication, it is expected to have been approved by ITU-T as Recommendation H.264 and by ISO/IEC as International Standard 14496-10 (MPEG-4 part 10) Advanced Video Coding (AVC).

The MPEG-2 video coding standard (also known as ITU-T H.262) [2], which was developed about ten years ago primarily as an extension of prior MPEG-1 video capability with support of interlaced video coding, was an enabling technology for digital television systems worldwide. It is widely used for the transmission of standard definition (SD) and high definition (HD) TV signals over satellite, cable, and terrestrial emission, and for the storage of high-quality SD video signals onto DVDs.

However, an increasing number of services and the growing popularity of high definition TV are creating greater needs for higher coding efficiency.
Moreover, other transmission media such as Cable Modem, xDSL, or UMTS offer much lower data rates than broadcast channels, and enhanced coding efficiency can enable the transmission of more video channels or higher-quality video representations within existing digital transmission capacities.

Video coding for telecommunication applications has evolved through the development of the ITU-T H.261, H.262 (MPEG-2), and H.263 video coding standards (and later enhancements of H.263 known as H.263+ and H.263++), and has diversified from ISDN and T1/E1 service to embrace PSTN, mobile wireless networks, and LAN/Internet network delivery. Throughout this evolution, continued efforts have been made to maximize coding efficiency while dealing with the diversification of network types and their characteristic formatting and loss/error robustness requirements.

Recently, the MPEG-4 Visual (MPEG-4 part 2) standard [5] has also begun to emerge in use in some application domains of the prior coding standards. It has provided video shape coding capability, and has similarly worked toward broadening the range of environments for digital video use.

In early 1998, the Video Coding Experts Group (VCEG – ITU-T SG16 Q.6) issued a call for proposals on a project called H.26L, with the target to double the coding efficiency (which means halving the bit rate necessary for a given level of fidelity) in comparison to any other existing video coding standards for a broad variety of applications. The first draft design for that new standard was adopted in October of 1999. In December of 2001, VCEG and the Moving Picture Experts Group (MPEG – ISO/IEC JTC 1/SC 29/WG 11) formed a Joint Video Team (JVT), with the charter to finalize the draft new video coding standard for formal approval submission as H.264/AVC [1] in March 2003.

The scope of the standardization is illustrated in Fig. 1, which shows the typical video coding/decoding chain (excluding the transport or storage of the video signal).
As has been the case for all ITU-T and ISO/IEC video coding standards, only the central decoder is standardized, by imposing restrictions on the bitstream and syntax, and defining the decoding process of the syntax elements such that every decoder conforming to the standard will produce similar output when given an encoded bitstream that conforms to the constraints of the standard. This limitation of the scope of the standard permits maximal freedom to optimize implementations in a manner appropriate to specific applications (balancing compression quality, implementation cost, time to market, etc.). However, it provides no guarantees of end-to-end reproduction quality, as it allows even crude encoding techniques to be considered conforming.

Fig. 1: Scope of video coding standardization. (The chain runs Source, Pre-Processing, Encoding, Decoding, Post-Processing & Error Recovery, Destination; only the decoding stage falls within the scope of the standard.)

This paper is organized as follows. Section II provides a high-level overview of H.264/AVC applications and highlights some key technical features of the design that enable improved operation for this broad variety of applications. Section III explains the network abstraction layer and the overall structure of H.264/AVC coded video data. The video coding layer is described in Section IV. Section V explains the profiles supported by H.264/AVC and some potential application areas of the standard.

II. APPLICATIONS AND DESIGN FEATURE HIGHLIGHTS

The new standard is designed for technical solutions including at least the following application areas:
- Broadcast over cable, satellite, Cable Modem, DSL, terrestrial, etc.
- Interactive or serial storage on optical and magnetic devices, DVD, etc.
- Conversational services over ISDN, Ethernet, LAN, DSL, wireless and mobile networks, modems, etc., or mixtures of these.
- Video-on-demand or multimedia streaming services over ISDN, Cable Modem, DSL, LAN, wireless networks, etc.
- Multimedia Messaging Services (MMS) over ISDN, DSL, Ethernet, LAN, wireless and mobile networks, etc.

Moreover, new applications may be deployed over existing and future networks. This raises the question of how to handle this variety of applications and networks.

To address this need for flexibility and customizability, the H.264/AVC design covers a Video Coding Layer (VCL), which is designed to efficiently represent the video content, and a Network Abstraction Layer (NAL), which formats the VCL representation of the video and provides header information in a manner appropriate for conveyance by a variety of transport layers or storage media (see Fig. 2).

Fig. 2: Structure of H.264/AVC video encoder. (Control data and the Video Coding Layer's coded macroblocks pass through data partitioning to form coded slices/partitions, which the Network Abstraction Layer maps to transports such as H.320, MP4FF, H.323/IP, MPEG-2, etc.)

Relative to prior video coding methods, as exemplified by MPEG-2 video, some highlighted features of the design that enable enhanced coding efficiency include the following enhancements of the ability to predict the values of the content of a picture to be encoded:

- Variable block-size motion compensation with small block sizes: This standard supports more flexibility in the selection of motion compensation block sizes and shapes than any previous standard, with a minimum luma motion compensation block size as small as 4x4.
- Quarter-sample-accurate motion compensation: Most prior standards enable half-sample motion vector accuracy at most. The new design improves upon this by adding quarter-sample motion vector accuracy, as first found in an advanced profile of the MPEG-4 Visual (part 2) standard, but further reduces the complexity of the interpolation processing compared to the prior design.
- Motion vectors over picture boundaries: While motion vectors in MPEG-2 and its predecessors were required to point only to areas within the previously-decoded reference picture, the picture boundary extrapolation technique first found as an optional feature in H.263 is included in H.264/AVC.
- Multiple reference picture motion compensation: Predictively coded pictures (called "P" pictures) in MPEG-2 and its predecessors used only one previous picture to predict the values in an incoming picture. The new design extends upon the enhanced reference picture selection technique found in H.263++ to enable efficient coding by allowing an encoder to select, for motion compensation purposes, among a larger number of pictures that have been decoded and stored in the decoder. The same extension of referencing capability is also applied to motion-compensated bi-prediction, which is restricted in MPEG-2 to using two specific pictures only (one of these being the previous intra (I) or P picture in display order and the other being the next I or P picture in display order).
- Decoupling of referencing order from display order: In prior standards, there was a strict dependency between the ordering of pictures for motion compensation referencing purposes and the ordering of pictures for display purposes. In H.264/AVC, these restrictions are largely removed, allowing the encoder to choose the ordering of pictures for referencing and display purposes with a high degree of flexibility, constrained only by a total memory capacity bound imposed to ensure decoding ability. Removal of the restriction also enables removing the extra delay previously associated with bi-predictive coding.
- Decoupling of picture representation methods from picture referencing capability: In prior standards, pictures encoded using some encoding methods (namely bi-predictively-encoded pictures)
could not be used as references for prediction of other pictures in the video sequence. By removing this restriction, the new standard provides the encoder more flexibility and, in many cases, an ability to use a picture for referencing that is a closer approximation to the picture being encoded.
- Weighted prediction: A new innovation in H.264/AVC allows the motion-compensated prediction signal to be weighted and offset by amounts specified by the encoder. This can dramatically improve coding efficiency for scenes containing fades, and can be used flexibly for other purposes as well.
- Improved "skipped" and "direct" motion inference: In prior standards, a "skipped" area of a predictively-coded picture could not infer motion in the scene content. This had a detrimental effect when coding video containing global motion, so the new H.264/AVC design instead infers motion in "skipped" areas. For bi-predictively coded areas (called B slices), H.264/AVC also includes an enhanced motion inference method known as "direct" motion compensation, which improves further on prior "direct" prediction designs found in H.263+ and MPEG-4 Visual.
- Directional spatial prediction for intra coding: A new technique of extrapolating the edges of the previously-decoded parts of the current picture is applied in regions of pictures that are coded as intra (i.e., coded without reference to the content of some other picture). This improves the quality of the prediction signal, and also allows prediction from neighboring areas that were not coded using intra coding (something not enabled when using the transform-domain prediction method found in H.263+ and MPEG-4 Visual).
- In-the-loop deblocking filtering: Block-based video coding produces artifacts known as blocking artifacts. These can originate from both the prediction and residual difference coding stages of the decoding process.
Application of an adaptive deblocking filter is a well-known method of improving the resulting video quality, and when designed well, this can improve both objective and subjective video quality. Building further on a concept from an optional feature of H.263+, the deblocking filter in the H.264/AVC design is brought within the motion-compensated prediction loop, so that this improvement in quality can be used in inter-picture prediction to improve the ability to predict other pictures as well.

In addition to improved prediction methods, other parts of the design were also enhanced for improved coding efficiency, including:

- Small block-size transform: All major prior video coding standards used a transform block size of 8x8, while the new H.264/AVC design is based primarily on a 4x4 transform. This allows the encoder to represent signals in a more locally adaptive fashion, which reduces artifacts known colloquially as "ringing". (The smaller block size is also justified partly by the advances in the ability to better predict the content of the video using the techniques noted above, and by the need to provide transform regions with boundaries that correspond to those of the smallest prediction regions.)
- Hierarchical block transform: While in most cases using the small 4x4 transform block size is perceptually beneficial, there are some signals that contain sufficient correlation to call for some method of using a representation with longer basis functions. The H.264/AVC standard enables this in two ways: 1) by using a hierarchical transform to extend the effective block size used for low-frequency chroma information to an 8x8 array, and 2) by allowing the encoder to select a special coding type for intra coding, enabling extension of the length of the luma transform for low-frequency information to a 16x16 block size in a manner very similar to that applied to the chroma.
- Short word-length transform: All prior standard designs have effectively required encoders and decoders to use more complex processing for transform computation. While previous designs have generally required 32-bit processing, the H.264/AVC design requires only 16-bit arithmetic.
- Exact-match inverse transform: In previous video coding standards, the transform used for representing the video was generally specified only within an error tolerance bound, due to the impracticality of obtaining an exact match to the ideal specified inverse transform. As a result, each decoder design would produce slightly different decoded video, causing a "drift" between the encoder and decoder representation of the video and reducing effective video quality. Building on a path laid out as an optional feature in the H.263++ effort, H.264/AVC is the first standard to achieve exact equality of decoded video content from all decoders.
- Arithmetic entropy coding: An advanced entropy coding method known as arithmetic coding is included in H.264/AVC. While arithmetic coding was previously found as an optional feature of H.263, a more effective use of this technique is found in H.264/AVC to create a very powerful entropy coding method known as CABAC (context-adaptive binary arithmetic coding).
- Context-adaptive entropy coding: The two entropy coding methods applied in H.264/AVC, termed CAVLC (context-adaptive variable-length coding) and CABAC, both use context-based adaptivity to improve performance relative to prior standard designs.
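To make the short word-length and exact-match properties concrete, the sketch below applies the well-known 4x4 forward core transform of H.264/AVC to a residual block using only small-integer arithmetic. The matrix Cf is taken from the standard; the normalization factors that the real codec folds into quantization are omitted here, and the function names are illustrative rather than from the specification.

```python
# Sketch of the H.264/AVC 4x4 forward core transform (scaling omitted).
# Cf is the standard's small-integer approximation of a 4x4 DCT; because
# its entries are so small, W = Cf * X * Cf^T stays within 16-bit range
# for typical residual inputs.

Cf = [
    [1,  1,  1,  1],
    [2,  1, -1, -2],
    [1, -1, -1,  1],
    [1, -2,  2, -1],
]

def mat_mul(a, b):
    """Plain 4x4 integer matrix multiply."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(m):
    return [list(row) for row in zip(*m)]

def forward_core_transform(block):
    """Compute W = Cf * X * Cf^T for a 4x4 residual block X."""
    return mat_mul(mat_mul(Cf, block), transpose(Cf))

# A flat (DC-only) residual block concentrates all energy in W[0][0]:
flat = [[1] * 4 for _ in range(4)]
coeffs = forward_core_transform(flat)
print(coeffs[0][0])  # 16: the sum of all 16 samples
print(all(coeffs[i][j] == 0 for i in range(4) for j in range(4)
          if (i, j) != (0, 0)))  # True: no AC energy for a flat block
```

Because the standard pairs this transform with an exactly specified integer inverse, every conforming decoder reconstructs bit-identical sample values, which is what eliminates the encoder/decoder drift described above.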

Robustness to data errors/losses and flexibility for operation over a variety of network environments are enabled by a number of design aspects new to the H.264/AVC standard, including the following highlighted features.

- NAL unit syntax structure: Each syntax structure in H.264/AVC is placed into a logical data packet called a NAL unit. Rather than forcing a specific bitstream interface to the system as in prior video coding standards, the NAL unit syntax structure allows greater customization of the method of carrying the video content in a manner appropriate for each specific network.
- Flexible slice size: Unlike the rigid slice structure found in MPEG-2 (which reduces coding efficiency by increasing the quantity of header data and decreasing the effectiveness of prediction), slice sizes in H.264/AVC are highly flexible, as was the case earlier in MPEG-1.
- Flexible macroblock ordering (FMO): A new ability to partition the picture into regions called slice groups has been developed, with each slice becoming an independently-decodable subset of a slice group. When used effectively, flexible macroblock ordering can significantly enhance robustness to data losses by managing the spatial relationship between the regions that are coded in each slice. (FMO can also be used for a variety of other purposes as well.)
- Arbitrary slice ordering (ASO): Since each slice of a coded picture can be (approximately) decoded independently of the other slices of the picture, the H.264/AVC design enables sending and receiving the slices of the picture in any order relative to each other. This capability, first found in an optional part of H.263+, can improve end-to-end delay in real-time applications, particularly when used on networks having out-of-order delivery behavior (e.g., internet protocol networks).
- Redundant pictures: In order to enhance robustness to data loss, the H.264/AVC design contains a new ability to allow an encoder to send redundant representations of regions of pictures, enabling a (typically somewhat degraded) representation of regions of pictures for which the primary representation has been lost during data transmission.
- Data partitioning: Since some coded information for representation of each region (e.g., motion vectors and other prediction information) is more important or more valuable than other information for purposes of representing the video content, H.264/AVC allows the syntax of each slice to be separated into up to three different partitions for transmission, depending on a categorization of syntax elements. This part of the design builds further on a path taken in MPEG-4 Visual and in an optional part of H.263++. Here the design is simplified by having a single syntax, with partitioning of that same syntax controlled by a specified categorization of syntax elements.
- Parameter set structure: The parameter set design provides for robust and efficient conveyance of header information. As the loss of a few key bits of information (such as sequence header or picture header information) could have a severe negative impact on the decoding process when using prior standards, this key information was separated for handling in a more flexible and specialized manner in the H.264/AVC design.
- SP/SI synchronization/switching pictures: The H.264/AVC design includes a new feature consisting of picture types that allow exact synchronization of the decoding process of some decoders with an ongoing video stream produced by other encoders, without penalizing all decoders with the loss of efficiency resulting from sending an I picture.
This can enable switching a decoder between representations of the video content that used different data rates, recovery from data losses or errors, as well as enabling trick modes such as fast-forward, fast-reverse, etc.

In the following two sections, a more detailed description of the key features is given.

III. NETWORK ABSTRACTION LAYER

The network abstraction layer (NAL) is designed in order to provide "network friendliness", enabling simple and effective customization of the use of the VCL for a broad variety of systems.

The NAL facilitates the ability to map H.264/AVC VCL data to transport layers such as:
- RTP/IP for any kind of real-time wire-line and wireless Internet services (conversational and streaming),
- file formats, e.g., ISO MP4 for storage and MMS,
- H.32X for wireline and wireless conversational services, and
- MPEG-2 systems for broadcasting services, etc.

The full degree of customization of the video content to fit the needs of each particular application is outside the scope of the H.264/AVC standardization effort, but the design of the NAL anticipates a variety of such mappings. Some key concepts of the NAL are NAL units, byte stream and packet format uses of NAL units, parameter sets, and access units. A short description of these concepts is given below, whereas a more detailed description, including error resilience aspects, is provided in [6] and [7].

A. NAL units

The coded video data is organized into NAL units, each of which is effectively a packet that contains an integer number of bytes. The first byte of each NAL unit is a header byte that contains an indication of the type of data in
the NAL unit, and the remaining bytes contain payload data of the type indicated by the header.

The payload data in the NAL unit is interleaved as necessary with emulation prevention bytes, which are bytes inserted with a specific value to prevent a particular pattern of data called a start code prefix from being accidentally generated inside the payload.

The NAL unit structure definition specifies a generic format for use in both packet-oriented and bitstream-oriented transport systems, and a series of NAL units generated by an encoder is referred to as a NAL unit stream.

B. NAL units in byte stream format use

Some systems (e.g., H.320 and MPEG-2 H.222.0 systems) require delivery of the entire or partial NAL unit stream as an ordered stream of bytes or bits within which the locations of NAL unit boundaries need to be identifiable from patterns within the coded data itself.

For use in such systems, the H.264/AVC specification defines a byte stream format. In the byte stream format, each NAL unit is prefixed by a specific pattern of three bytes called a start code prefix. The boundaries of the NAL unit can then be identified by searching the coded data for the unique start code prefix pattern. The use of emulation prevention bytes guarantees that start code prefixes are unique identifiers of the start of a new NAL unit.

A small amount of additional data (one byte per video picture) is also added to allow decoders that operate in systems that provide streams of bits without alignment to byte boundaries to recover the necessary alignment from the data in the stream.

Additional data can also be inserted in the byte stream format that allows expansion of the amount of data to be sent and can aid in achieving more rapid byte alignment recovery, if desired.

C. NAL units in packet-transport system use

In other systems (e.g., internet protocol / RTP systems), the coded data is carried in packets that are framed by the system transport protocol, and identification of the boundaries of NAL units within the packets can be established without use of start code prefix patterns. In such systems, the inclusion of start code prefixes in the data would be a waste of data-carrying capacity, so instead the NAL units can be carried in data packets without start code prefixes.

D. VCL and non-VCL NAL units

NAL units are classified into VCL and non-VCL NAL units. The VCL NAL units contain the data that represents the values of the samples in the video pictures, and the non-VCL NAL units contain any associated additional information such as parameter sets (important header data that can apply to a large number of VCL NAL units) and supplemental enhancement information (timing information and other supplemental data that may enhance usability of the decoded video signal but are not necessary for decoding the values of the samples in the video pictures).

E. Parameter sets

A parameter set contains information that is expected to rarely change and that applies to the decoding of a large number of VCL NAL units. There are two types of parameter sets:
- sequence parameter sets, which apply to a series of consecutive coded video pictures called a coded video sequence, and
- picture parameter sets, which apply to the decoding of one or more individual pictures within a coded video sequence.

The sequence and picture parameter set mechanism decouples the transmission of infrequently changing information from the transmission of coded representations of the values of the samples in the video pictures. Each VCL NAL unit contains an identifier that refers to the content of the relevant picture parameter set, and each picture parameter set contains an identifier that refers to the content of the relevant sequence parameter set.
In this manner, a small amount of data (the identifier) can be used to refer to a larger amount of information (the parameter set) without repeating that information within each VCL NAL unit.

Sequence and picture parameter sets can be sent well ahead of the VCL NAL units that they apply to, and can be repeated to provide robustness against data loss. In some applications, parameter sets may be sent within the channel that carries the VCL NAL units (termed "in-band" transmission). In other applications (see Fig. 3), it can be advantageous to convey the parameter sets "out-of-band" using a more reliable transport mechanism than the video channel itself.

Fig. 3: Parameter set use with reliable "out-of-band" parameter set exchange. (The H.264/AVC encoder sends NAL units whose VCL data is encoded with parameter set #3, addressed via the slice header; parameter set #3 itself, e.g., video format PAL and entropy coding CABAC, reaches the H.264/AVC decoder over a reliable parameter set exchange channel.)

F. Access units

A set of NAL units in a specified form is referred to as an access unit. The decoding of each access unit results in one decoded picture. The format of an access unit is shown in Fig. 4.
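The byte-stream mechanics described in Sections III.A and III.B can be sketched in a few lines. The toy parser below assumes the three-byte start code prefix 0x000001, the 0x03 emulation prevention byte inserted after each 0x00 0x00 pair, and the standard one-byte NAL header layout (1-bit forbidden_zero_bit, 2-bit nal_ref_idc, 5-bit nal_unit_type); the helper names are illustrative, and real streams additionally use leading zero bytes (four-byte start codes) and other constraints that this sketch ignores.

```python
# Toy Annex-B-style byte stream parser (illustrative, not conformant).

START_CODE = b"\x00\x00\x01"  # three-byte start code prefix

def split_nal_units(stream: bytes):
    """Split a byte stream into raw NAL units by scanning for start codes."""
    starts = []
    i = stream.find(START_CODE)
    while i != -1:
        starts.append(i + len(START_CODE))
        i = stream.find(START_CODE, i + len(START_CODE))
    units = []
    for n, begin in enumerate(starts):
        end = starts[n + 1] - len(START_CODE) if n + 1 < len(starts) else len(stream)
        units.append(stream[begin:end])
    return units

def remove_emulation_prevention(payload: bytes) -> bytes:
    """Drop the 0x03 byte that encoders insert after each 0x00 0x00 pair."""
    out = bytearray()
    zeros = 0
    for b in payload:
        if zeros >= 2 and b == 0x03:
            zeros = 0          # skip the emulation prevention byte
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)

def parse_nal_header(unit: bytes):
    """Unpack the one-byte NAL header: 1 + 2 + 5 bits."""
    h = unit[0]
    return {
        "forbidden_zero_bit": h >> 7,
        "nal_ref_idc": (h >> 5) & 0x3,
        "nal_unit_type": h & 0x1F,
    }

# One NAL unit whose payload carries an escaped 0x000001 pattern:
stream = START_CODE + bytes([0x67]) + b"\x00\x00\x03\x01\x42"
unit = split_nal_units(stream)[0]
print(parse_nal_header(unit)["nal_unit_type"])  # 7 (a sequence parameter set)
print(remove_emulation_prevention(unit[1:]))    # b'\x00\x00\x01B'
```

Note how the de-escaped payload does contain the start code pattern: preventing exactly that pattern from appearing in the transmitted bytes is the sole purpose of the emulation prevention byte.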

Fig. 4: Structure of an access unit. (An access unit comprises, in order: an optional access unit delimiter, optional SEI, the primary coded picture, optional redundant coded pictures, and optional end-of-sequence and end-of-stream NAL units.)

Each access unit contains a set of VCL NAL units that together compose a primary coded picture. It may also be prefixed with an access unit delimiter to aid in locating the start of the access unit. Some supplemental enhancement information (SEI) containing data such as picture timing information may also precede the primary coded picture. The primary coded picture consists of a set of VCL NAL units consisting of slices or slice data partitions that represent the samples of the video picture.

Following the primary coded picture may be some additional VCL NAL units that contain redundant representations of areas of the same video picture. These are referred to as redundant coded pictures, and are available for use by a decoder in recovering from loss or corruption of the data in the primary coded picture. Decoders are not required to decode redundant coded pictures if they are present.

Finally, if the coded picture is the last picture of a coded video sequence (a sequence of pictures that is independently decodable and uses only one sequence parameter set), an end of sequence NAL unit may be present to indicate the end of the sequence; and if the coded picture is the last coded picture in the entire NAL unit stream, an end of stream NAL unit may be present to indicate that the stream is ending.

G. Coded video sequences

A coded video sequence consists of a series of access units that are sequential in the NAL unit stream and use only one sequence parameter set. Each coded video sequence can be decoded independently of any other coded video sequence, given the necessary parameter set information, which may be conveyed "in-band" or "out-of-band". At the beginning of a coded video sequence is an instantaneous decoding refresh (IDR) access unit. An IDR access unit contains an intra picture – a coded picture that can be decoded without decoding any previous pictures in the NAL unit stream – and the presence of an IDR access unit indicates that no subsequent picture in the stream will require reference to pictures prior to the intra picture it contains in order to be decoded. A NAL unit stream may contain one or more coded video sequences.

IV. VIDEO CODING LAYER

As in all prior ITU-T and ISO/IEC JTC1 video standards since H.261 [3], the VCL design follows the so-called block-based hybrid video coding approach (as depicted in Fig. 8), in which each coded picture is represented in block-shaped units of associated luma and chroma samples called macroblocks. The basic source-coding algorithm is a hybrid of inter-picture prediction to exploit temporal statistical dependencies and transform coding of the prediction residual to exploit spatial statistical dependencies. There is no single coding element in the VCL that provides the majority of the significant improvement in compression efficiency in relation to prior video coding standards. It is rather a plurality of smaller improvements that add up to the significant gain.

A. Pictures, frames, and fields

A coded video sequence in H.264/AVC consists of a sequence of coded pictures. A coded picture in [1] can represent either an entire frame or a single field, as was also the case for MPEG-2 video. Generally, a frame of video can be considered to contain two interleaved fields, a top and a bottom field. The top field contains the even-numbered rows 0, 2, ..., H-2, with H being the number of rows of the frame. The bottom field contains the odd-numbered rows (starting with the second line of the frame).
If the two fields of a frame were captured at different time instants, the frame is referred to as an interlaced frame, and otherwise it is referred to as a progressive frame (see Fig. 5). The coding representation in H.264/AVC is primarily agnostic with respect to this video characteristic, i.e., the underlying interlaced or progressive timing of the original captured pictures. Instead, its coding specifies a representation based primarily on geometric concepts rather than being based on timing.
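The frame/field relationship described above can be made concrete with a small sketch: given a frame stored as a list of sample rows, the top field takes the even-numbered rows and the bottom field the odd-numbered rows. The function names are illustrative only, not from the standard.

```python
# Split a frame (a list of sample rows) into its two interleaved fields.
# Top field: even-numbered rows 0, 2, ...; bottom field: odd rows 1, 3, ...

def split_fields(frame):
    top = frame[0::2]      # rows 0, 2, 4, ...
    bottom = frame[1::2]   # rows 1, 3, 5, ...
    return top, bottom

def weave_fields(top, bottom):
    """Re-interleave two fields into a frame (inverse of split_fields)."""
    frame = []
    for t, b in zip(top, bottom):
        frame.append(t)
        frame.append(b)
    return frame

frame = [[10, 11], [20, 21], [30, 31], [40, 41]]  # H = 4 rows
top, bottom = split_fields(frame)
print(top)     # [[10, 11], [30, 31]]
print(bottom)  # [[20, 21], [40, 41]]
print(weave_fields(top, bottom) == frame)  # True
```

For an interlaced source, each field corresponds to a distinct capture instant, which is why treating the two fields separately can be attractive when coding such material.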

Fig. 5: Progressive and interlaced frames and fields. (The figure contrasts a progressive frame with an interlaced frame in top-field-first order, showing the top and bottom fields along the time axis t.)

B. YCbCr color space and 4:2:0 sampling

The human visual system seems to perceive scene content in terms of brightness and color information separately, and with greater sensitivity to the details of brightness than color. Video transmission systems can be designed to take advantage of this. (This is true of conventional analog TV systems as well as digital ones.) In H.264/AVC, as in prior standards, this is done by using a YCbCr color space together with reducing the sampling resolution of the Cb and Cr chroma information.

The video color space used by H.264/AVC separates a color representation into three components called Y, Cb, and Cr. Component Y is called luma, and represents brightness. The two chroma components Cb and Cr represent the extent to which the color deviates from gray toward blue and red, respectively. (The terms luma and chroma are used in this paper and in the standard rather than the terms luminance and chrominance, in order to avoid the implication of the use of linear light transfer characteristics that is often associated with the terms luminance and chrominance.)

