Overview of the Multiview High Efficiency Video Coding (MV-HEVC) Standard


Miska M. Hannuksela1, Ye Yan2, Xuehui Huang2, and Houqiang Li2
1 Nokia Technologies, 2 University of Science and Technology of China

ABSTRACT

This paper reviews the multiview extension (MV-HEVC) of the High Efficiency Video Coding (HEVC) standard. MV-HEVC is capable of multiview video coding with or without accompanying depth views. The key design concepts and design elements of MV-HEVC are described in the paper. Furthermore, the features and characteristics of MV-HEVC compared to other standardized video codec extensions for three-dimensional (3D) video coding are reviewed.

Index Terms — HEVC, MV-HEVC, 3D video coding

1. INTRODUCTION

While stereoscopic content is most commonly used in today's 3D video content and services, an increasing amount of attention has been paid to depth-enhanced and multiview 3D video. Depth or disparity information can be used in depth-image-based rendering (DIBR) [1] to synthesize views as decoder-side post-processing. When applied with stereoscopic displays, view synthesis can be used for adjusting the disparity between the displayed views according to viewers' preferences and viewing conditions, such as viewing distance and display size, in order to reach as comfortable a 3D experience as possible. Depth-enhanced multiview video together with DIBR can also be used with multiview autostereoscopic displays to generate the required number of views for displaying from the received views. The multiview-video-plus-depth (MVD) format refers to a data format with more than one texture view, each paired with the depth view of the same viewpoint [2].

The two most recent international video coding standards, namely the Advanced Video Coding (AVC) standard (also known as H.264) [3] and the High Efficiency Video Coding (HEVC) standard (also known as H.265) [4], were initially intended for two-dimensional (2D) video.
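As a toy illustration of the DIBR-style view synthesis mentioned above, the sketch below shifts texture pixels horizontally in proportion to their depth values (Python with NumPy). The linear depth-to-disparity mapping, the absence of depth-ordered warping, and the lack of hole filling are simplifications; real DIBR as in [1] handles occlusions and disocclusions explicitly.

```python
import numpy as np

def dibr_shift(texture, depth, baseline_scale):
    """Toy 1-D DIBR: shift each pixel horizontally by a disparity
    derived from its depth value. Real DIBR additionally performs
    depth-ordered warping and hole filling; this sketch only
    illustrates the basic idea."""
    h, w = depth.shape
    synthesized = np.zeros_like(texture)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # Disparity proportional to the depth-map value
            # (baseline_scale is an illustrative constant).
            d = int(round(baseline_scale * depth[y, x]))
            xs = x + d
            if 0 <= xs < w:
                synthesized[y, xs] = texture[y, x]
                filled[y, xs] = True
    return synthesized, filled
```

With a uniform depth map the result is a pure translation; with varying depth, disocclusions show up as unfilled positions in `filled`, which is where hole filling would be applied.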
Multiview extensions for both AVC and HEVC have been developed, referred to as Multiview Video Coding (MVC) [3][5] and Multiview HEVC (MV-HEVC) [4], respectively. For depth-enhanced multiview video coding, one can use the AVC extensions referred to as multiview video and depth coding (MVC+D) [3][6] and multiview and depth video with enhanced non-base view coding (3D-AVC) [3], as well as the HEVC extensions MV-HEVC and 3D-HEVC [7][8]. A fundamental principle of MVC+D and MV-HEVC is to re-use the coding tools of the underlying 2D codec, i.e. AVC and HEVC, respectively, so that implementations can be realized through software changes to high-level syntax at the slice header level and above. In contrast, 3D-AVC and 3D-HEVC introduce new low-level coding tools and aim at improved compression efficiency compared to MVC+D and MV-HEVC, respectively.

Section 2 of this paper reviews the design of MV-HEVC. Compared to earlier reviews, such as [9], this paper describes the design of the approved MV-HEVC standard. Section 3 presents a comprehensive feature comparison between 3D video coding standards. Finally, conclusions are provided in Section 4.

978-1-4799-8339-1/15/$31.00 ©2015 IEEE

2. MULTI-LAYER HEVC EXTENSIONS

The development of the scalable extension (SHVC) [4][9][10] of HEVC started when the MV-HEVC project was already ongoing. In an early phase of the SHVC development, it was decided to use an approach requiring only high-level syntax changes as well as inter-layer processing. Furthermore, since SHVC and MV-HEVC shared the fundamental principle of using only the HEVC coding tools for slice data, their designs were unified [11][12][13]. This high-level-syntax principle is achieved by enabling the inclusion of pictures originating from direct reference layers in the reference picture list(s) used for decoding pictures of predicted layers, while otherwise these inter-layer reference pictures are treated identically to any other reference pictures.
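This high-level-syntax principle can be sketched as follows. The picture and layer structures below are hypothetical simplifications for illustration, not the normative HEVC reference picture list construction process.

```python
def build_reference_list(current, decoded_pictures, direct_reference_layers):
    """Illustrative sketch: inter-layer reference pictures (from direct
    reference layers, at the same picture order count) are placed in the
    reference picture list alongside ordinary temporal references, and
    are treated identically by the slice-data decoding process."""
    # Temporal references: earlier pictures of the same layer.
    temporal_refs = [p for p in decoded_pictures
                     if p["layer"] == current["layer"]
                     and p["poc"] < current["poc"]]
    # Inter-layer references: same-time pictures of direct reference layers.
    inter_layer_refs = [p for p in decoded_pictures
                        if p["layer"] in direct_reference_layers.get(current["layer"], [])
                        and p["poc"] == current["poc"]]
    return temporal_refs + inter_layer_refs
```

Because both kinds of entries are just reference pictures to the slice-data decoding process, no low-level coding tool needs to be aware of which kind it is using.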
Eventually, both SHVC and MV-HEVC were released as parts of HEVC version 2 [4] and share the same specification of the multi-layer extensions. As one consequence of this unified approach, views are treated as scalable layers like any other type of scalability.

An elementary unit in HEVC bitstreams is called a network abstraction layer (NAL) unit, consisting of a header and a payload. The NAL unit header contains a 6-bit NAL unit type, a 6-bit layer identifier called nuh_layer_id, and a 3-bit temporal sub-layer identifier. A video parameter set (VPS) NAL unit specifies a mapping of the layer identifiers to values of scalability dimensions, including a dependency identifier for SHVC, a view order index for MV-HEVC and 3D-HEVC, a depth flag for 3D-HEVC, and an auxiliary layer type identifier, referred to as AuxId, which may be used in any multi-layer bitstream. A selectable number of scalability dimensions can be used in a multi-layer bitstream. The VPS also indicates a mapping from each view order index to a view identifier value, where each unique view identifier value represents a distinct viewpoint. Temporal sub-layers are used for temporal scalability, i.e. they provide the capability of extracting sub-bitstreams of different picture rates. For more information on the high-level syntax and temporal scalability of HEVC, the reader is referred to [14].

Auxiliary picture layers, originally proposed in [15], enable multiplexing of supplemental coded video into a bitstream that also contains the primary coded video. It was decided to enable depth views through the auxiliary picture layer mechanism in MV-HEVC. An AuxId value equal to 2, associated with a layer in the VPS, indicates that the layer represents a sequence of depth pictures. More detailed properties of the depth auxiliary layers, such as the depth or disparity range represented by the sample values, can be provided with the depth representation information supplemental enhancement information (SEI) message [4]. Depth auxiliary layers are also associated with a view order index and a view identifier; hence, multiview depth coding is supported in MV-HEVC. It is noteworthy that 3D-HEVC uses the depth flag scalability dimension to indicate depth views. This is because 3D-HEVC enables the use of depth information for coding non-base texture views. Auxiliary picture layers, on the other hand, are not allowed to affect the decoding of the primary video. As a consequence, depth views of MV-HEVC and 3D-HEVC are not compatible.

As scalable multi-layer bitstreams enable decoding of more than one combination of layers and temporal sub-layers, the multi-layer HEVC decoding process is given as input a target output operation point, specifying the output layer set (OLS) and the highest temporal sub-layer to be decoded.
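As a concrete illustration of the NAL unit header layout described earlier in this section, the following sketch parses the two header bytes. The field widths follow HEVC version 2 (forbidden_zero_bit, 6-bit nal_unit_type, 6-bit nuh_layer_id, 3-bit nuh_temporal_id_plus1); the function name itself is ours.

```python
def parse_nal_unit_header(header_bytes):
    """Parse the 2-byte HEVC NAL unit header:
    1-bit forbidden_zero_bit, 6-bit nal_unit_type,
    6-bit nuh_layer_id, 3-bit nuh_temporal_id_plus1."""
    b0, b1 = header_bytes[0], header_bytes[1]
    return {
        "forbidden_zero_bit": b0 >> 7,
        "nal_unit_type": (b0 >> 1) & 0x3F,
        # nuh_layer_id straddles the byte boundary: 1 bit from b0, 5 from b1.
        "nuh_layer_id": ((b0 & 0x01) << 5) | (b1 >> 3),
        # TemporalId is nuh_temporal_id_plus1 minus 1.
        "temporal_id": (b1 & 0x07) - 1,
    }
```

For example, a VPS NAL unit (nal_unit_type 32) in the base layer at the lowest temporal sub-layer has the header bytes 0x40 0x01, while the same type fields with nuh_layer_id 2 would yield a non-zero layer identifier.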
An OLS represents a set of layers, which can be either necessary or unnecessary layers. A necessary layer is either an output layer, meaning that the pictures of the layer are output by the decoding process, or a reference layer, meaning that its pictures may be directly or indirectly used as a reference for prediction of pictures of any output layer. The VPS includes a specification of OLSs, and can also specify buffering requirements and parameters for OLSs. Unnecessary layers are not required to be decoded for reconstructing the output layers but can be included in OLSs for indicating buffering requirements for such sets of layers in which some layers are coded with potential future extensions.

Multi-layer HEVC extensions support hybrid codec scalability, in which the base layer is coded as a separate non-HEVC bitstream. The multi-layer HEVC decoding process takes as input decoded base layer pictures and certain properties for them. The base layer bitstream and the HEVC enhancement bitstream are separate, and it is up to the systems layer to handle synchronization between the bitstreams as well as the connection between the base and enhancement layer decoding processes. The hybrid codec scalability feature enables 3D services that are compatible with 2D AVC decoding. For example, broadcast services could provide a conventional 2D service for AVC-capable devices and build 3D capability using either MV-HEVC or 3D-HEVC on top of the AVC service.

While earlier video coding standards specified profile-level conformance points applying to a bitstream, the multi-layer HEVC extensions specify layer-wise conformance points. To be more exact, a profile-tier-level (PTL) combination is indicated for each necessary layer of each OLS, while even finer-grain temporal-sub-layer-based PTL signaling is allowed. Consequently, decoder capabilities can be indicated as a list of PTL values, where the number of list elements indicates the number of layers supported by the decoder.
Non-base layers that are not inter-layer predicted can be indicated to conform to a single-layer profile, such as the Main profile, while they also require the so-called independent non-base layer decoding (INBLD) capability to deal correctly with layer-wise decoding. The Multiview Main profile was specified for MV-HEVC and suits both non-base texture views and non-base depth views.

Table I shows an example of a bitstream containing three texture views and three depth views, both coded with the so-called IBP inter-view prediction order, where the left-side view (I view) is coded independently of the other views, the right-side view (P view) may utilize inter-view prediction from the I view, and the center view (B view) may be predicted from both the I and P views. As can be seen, the view order index values of the respective views (left, center, right) of texture and depth are identical. The VPS is used to specify the mapping of view order index and AuxId values to nuh_layer_id values, of which Table I shows one example where the depth views follow the texture views in decoding order, while other mappings and decoding orders would also be possible. The reference layers are indicated in the VPS according to the IBP inter-view prediction pattern. In this example, four OLSs are specified in the VPS. The 0th OLS contains only the texture base view conforming to the Main profile. The first and second OLSs represent stereoscopic video with a wide (side views) and a narrow (adjacent views) baseline, respectively. As can be seen from Table I, the P view is not an output layer but is a necessary layer in the narrow-baseline OLS. Each necessary non-base view conforms to the Multiview Main profile. In the fourth OLS, all texture and depth views are output. The independently coded depth view is indicated to conform to the Main profile with the INBLD capability, while all predicted views comply with the Multiview Main profile.
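The relationship between output layers, reference layers, and necessary layers in the IBP example can be sketched as follows; the layer numbering is illustrative, not mandated by the standard.

```python
def necessary_layers(output_layers, direct_reference_layers):
    """Necessary layers of an OLS: the output layers plus every layer
    directly or indirectly used as a reference by an output layer."""
    necessary = set(output_layers)
    stack = list(output_layers)
    while stack:
        for ref in direct_reference_layers.get(stack.pop(), []):
            if ref not in necessary:
                necessary.add(ref)
                stack.append(ref)
    return necessary

# IBP texture views: layer 0 = I (left) view, layer 1 = P (right) view
# predicted from I, layer 2 = B (center) view predicted from I and P.
ibp_refs = {0: [], 1: [0], 2: [0, 1]}
```

For the narrow-baseline stereo OLS that outputs the I and B views, the P view is pulled in as a necessary but non-output layer, matching the Table I example.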

Table I. Example bitstream with three texture views and three depth views with IBP inter-view prediction. [Table columns: nuh_layer_id, nuh_layer_id of reference layers, and, for each of the four OLSs (0th: base texture view; 1st: wide stereo texture; 2nd: narrow stereo texture; 3rd: 3-view MVD), the output layers and the profile of each necessary layer — Main, Multiview Main, or Main with INBLD.]

3. COMPARISON OF MVD CODING STANDARDS

Table II presents a comparison of the features of the MVD coding scenarios overviewed in Section 1. The following sub-sections provide further details on the information given in Table II, with a specific focus on MV-HEVC. The goal of this section is to assist in selecting a 3D coding format that suits the properties and purposes of the used video acquisition equipment, display, and application.

3.1. Rate-distortion (RD) performance

The RD performance between 3D-HEVC and MV-HEVC was analyzed in [16] with 3-view MVD coding using view synthesis optimization (VSO) [8] in the encoding. The JCT-3V common test conditions [17] were used, according to which the RD performance of the coded texture views as well as of three synthesized views between each pair of coded views is compared. It was found that 3D-HEVC reduced the bitrate by about 14% and 20% for coded texture views and synthesized views, respectively, compared to MV-HEVC in terms of the Bjøntegaard delta bitrate (dBR) [18]. No results for a 2-view scenario were provided in [16], but it can be assumed that the bitrate reduction of 3D-HEVC compared to MV-HEVC would be roughly halved.

Table II. Comparison of MVD coding extensions (yes* = supported with constraints explained in the text). [Table rows: data format (unpaired MVD; mixed resolution of texture w.r.t. depth; mixed resolution between texture views) and the codec options for the base and second texture views, compared across the AVC extensions MVC+D and 3D-AVC and the HEVC extensions MV-HEVC and 3D-HEVC.]

Furthermore, it is mentioned in [16] that only one depth view is coded for MV-HEVC. Encoder-side optimizations and verification of the RD performance difference between MV-HEVC and 3D-HEVC are therefore subjects of further study.

3.2. Data format

Uncompressed depth information can generally be obtained for camera-captured content in two ways, either through depth estimation or by using depth sensors. In a depth estimation process, the corresponding pixels in adjacent views are searched, resulting in a disparity picture or, equivalently, a depth picture. Many depth sensors produce pictures having a significantly lower spatial resolution than the resolution produced by typical color image sensors. Depth sensors and color image sensors are typically located adjacently, i.e. the viewpoint of the depth pictures differs from the viewpoint(s) of the color image sensors. The data format called unpaired MVD, introduced in [17], comprises texture and depth views that need not represent the same viewpoints. The unpaired MVD format provides flexibility in depth acquisition and reduced complexity in encoder-side pre-processing.

All depth-enhanced coding formats support the MVD and the unpaired MVD data formats.
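The equivalence between disparity and depth noted above holds for a rectified, parallel camera setup via Z = f·B/d, where f is the focal length in pixels, B the camera baseline, and d the disparity in pixels. A minimal sketch follows; all parameter values in the example are illustrative assumptions, not values from the paper.

```python
def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert disparity to depth for a rectified, parallel stereo setup:
    Z = f * B / d. Zero disparity corresponds to infinite depth."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px
```

For instance, with an assumed focal length of 1000 px and a 10 cm baseline, a 50-px disparity corresponds to a depth of 2 m; depth estimation therefore yields the same information whether it is stored as disparity or as depth.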
When using the unpaired MVD format, texture coding tools using depth views have to be selectively turned off in 3D-AVC and 3D-HEVC. As there are no dependencies from depth views to texture views or vice versa in MVC+D and MV-HEVC, the selection of the transmitted views, e.g. for bitrate adaptation, can be performed flexibly, e.g. resulting in bitstreams with an unequal number of texture and depth views. 3D-AVC and 3D-HEVC require more careful view extraction to ensure that cross-component prediction dependencies are obeyed in the transmitted bitstream.

When only a limited amount of receiver-side disparity adjustment is required, unpaired MVD can be used to improve RD performance, as studied in [20] and [21] for MVC+D and 3D-AVC, respectively. As no similar study was made for MV-HEVC, an experiment was performed for this paper. The JCT-3V common test conditions (CTC) [17] were used for coding MV-HEVC content with two texture views and one depth view. The baseline was adjusted by 10% at the decoding end through the view synthesis arrangement described in [20]. The results are reported in Table IIIa, indicating that, with the exception of one sequence, the transmission of one depth view provides improved compression efficiency. The use of the unpaired MVD format with MV-HEVC can therefore be justified from the RD performance point of view when the disparity adjustment needs are limited.

3.3. Mixed spatial resolution

Mixed resolution between texture and depth views may be desirable to achieve reduced computational complexity in cases where the acquired depth resolution is inherently smaller than the texture resolution, e.g. when a depth sensor has been used to acquire a depth map sequence. As can be observed from Table II, the codecs are similar when it comes to supporting a mixed resolution between texture and depth views. 3D-HEVC includes depth-based texture coding tools that require the same spatial resolution being applied to both texture and depth pictures. Hence, in order to support unequal spatial resolutions between texture and depth views, 3D-HEVC encoders need to turn off depth-based texture coding tools.

A proper selection of depth resolution can provide compression benefits when considering both coded and synthesized views, as studied in [22] and [23] in the context of MVC+D and 3D-AVC, respectively. Early results for MV-HEVC were provided in [24], while for this paper we performed a new set of simulations with a recent version of the reference software, HTM 12.1 [19], with depth pictures having half of the resolution vertically and horizontally compared to the texture pictures.
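A naive way to obtain such quarter-resolution (half in each dimension) depth pictures is 2x2 block averaging, sketched below for illustration only; this simple averaging is an assumption of the sketch, not the nonlinear resampling of [25] that the simulations actually used.

```python
import numpy as np

def downsample_depth_half(depth):
    """Halve a depth picture horizontally and vertically by 2x2 block
    averaging. For depth maps, median or min/max selection per block is
    often preferred over averaging, since averaging creates intermediate
    depth values that blur object boundaries."""
    h, w = depth.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    return depth.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

The blurring of depth edges by linear filters is one motivation for the nonlinear resampling of [25].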
The resampling was performed as explained in [25], and the JCT-3V CTC [17] was used. As can be seen from Table IIIb, the bitrate is reduced by more than 5% on average, while the RD performance of the synthesized views remains unchanged. Even though VSO was not used in these simulations, they indicate that a depth resolution lower than the texture resolution can provide a good trade-off between RD performance and complexity.

Table III. MV-HEVC performance in a) 2V+1D unpaired MVD and b) 3-view MVD with 1/4-resolution depth, relative to conventional MV-HEVC (dBR %). [Per-sequence results for sequences S01–S10 and their averages are not legible in this transcription.]

Mixed-resolution stereoscopic video has been broadly studied, e.g. in [26][27][28][29], as it could potentially achieve a quality comparable with symmetric-resolution stereoscopic video while reducing computational complexity thanks to the smaller number of samples processed. The compared MVD coding extensions use pictures of other views without resampling as inter-view reference pictures, and hence none of them supports mixed-resolution coding when inter-view prediction is applied. However, both MV-HEVC and 3D-HEVC support views having different resolutions when no prediction takes place between the views.

3.4. Compatibility

The MVD coding extensions provide seamless compatibility with 2D video decoding. The HEVC extensions have the additional benefit that they can use a base view of any coding standard, such as AVC. This feature enables i) an efficient use of hardware designs including both AVC and HEVC decoders for stereoscopic services, as well as ii) the establishment of 3D video services providing compatibility with AVC-capable 2D clients while using HEVC-based coding technology for non-base views and depth views to keep the additional bitrate as low as possible. Similarly, it may be desired to support clients with stereoscopic decoding capability and other clients with multiview decoding capability with the same content or service.
It can be seen from Table II that 3D-AVC and 3D-HEVC leave it up to the encoder to determine whether the second view conforms to MVC or MV-HEVC, respectively, or requires the dedicated coding tools of 3D-AVC or 3D-HEVC, respectively.

4. CONCLUSIONS

This paper reviewed the design of the MV-HEVC standard, which introduces only high-level changes over the HEVC standard, facilitating implementations as software updates to existing HEVC codecs. Compared to the 3D-HEVC approach of introducing dedicated 3D coding tools, MV-HEVC is inferior in rate-distortion performance but is easier to implement and provides more flexibility in terms of data format and scalability, as explained in the paper.

REFERENCES

[1] L. McMillan Jr., "An image-based approach to three-dimensional computer graphics," PhD thesis, Department of Computer Science, University of North Carolina, 1997.
[2] A. Smolic et al., "Multi-view video plus depth (MVD) format for advanced 3D video systems," Joint Video Team (JVT) document JVT-W100, Apr. 2007.
[3] ITU-T Recommendation H.264, "Advanced video coding for generic audiovisual services," Feb. 2014.
[4] ITU-T Recommendation H.265, "High efficiency video coding," Oct. 2014.
[5] Y. Chen, Y.-K. Wang, K. Ugur, M. M. Hannuksela, J. Lainema, and M. Gabbouj, "The emerging MVC standard for 3D video services," EURASIP Journal on Advances in Signal Processing, vol. 2009, Article ID 786015, 13 pages, 2009. doi:10.1155/2009/786015.
[6] Y. Chen, M. M. Hannuksela, T. Suzuki, and S. Hattori, "Overview of the MVC+D 3D video coding standard," Journal of Visual Communication and Image Representation, vol. 25, no. 4, pp. 679-688, May 2014.
[7] G. Tech, K. Wegner, Y. Chen, and S. Yea (editors), "3D-HEVC draft text 7," JCT-3V document JCT3V-K1001, Apr. 2015.
[8] Y. Chen, G. Tech, K. Wegner, and S. Yea (editors), "Test model 10 of 3D-HEVC and MV-HEVC," JCT-3V document JCT3V-J1003, Nov. 2014.
[9] G. J. Sullivan, J. M. Boyce, Y. Chen, J.-R. Ohm, C. A. Segall, and A. Vetro, "Standardized extensions of High Efficiency Video Coding (HEVC)," IEEE Journal of Selected Topics in Signal Processing, vol. 7, no. 6, pp. 1001-1016, Dec. 2013.
[10] Y. Ye and P. Andrivon, "The scalable extensions of HEVC for ultra-high-definition video delivery," IEEE MultiMedia, vol. 21, no. 3, pp. 58-64, July-Sept. 2014.
[11] K. Ugur, M. M. Hannuksela, J. Lainema, and D. Rusanovskyy, "Unification of scalable and multi-view extensions with HLS only changes," JCT-VC document JCTVC-L0188, Jan. 2013.
[12] J. Chen, J. Boyce, Y. Ye, and M. M. Hannuksela, "SHVC Working Draft 1," JCT-VC document JCTVC-L1008, Mar. 2013.
[13] G. Tech, K. Wegner, Y. Chen, M. M. Hannuksela, and J. Boyce, "MV-HEVC draft text 3," JCT-3V document JCT3V-C1004, Mar. 2013.
[14] R. Sjöberg, Y. Chen, A. Fujibayashi, M. M. Hannuksela, J. Samuelsson, T. K. Tan, Y.-K. Wang, and S. Wenger, "Overview of HEVC high-level syntax and reference picture management," IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 12, pp. 1858-1870, Dec. 2012.
[15] M. M. Hannuksela, "REXT/MV-HEVC/SHVC HLS: auxiliary picture layers," JCT-VC document JCTVC-O0041 and JCT-3V document JCT3V-F0031, Oct. 2013.
[16] Y. Chen, G. Tech, K. Müller, A. Vetro, L. Zhang, and S. Shimizu, "Comparative results for 3D-HEVC and MV-HEVC with depth coding," JCT-3V document JCT3V-G0109, Jan. 2014.
[17] K. Müller and A. Vetro, "Common test conditions of 3DV core experiments," JCT-3V document JCT3V-G1100, Jan. 2014.
[18] G. Bjøntegaard, "Calculation of average PSNR differences between RD-curves," ITU-T SG16 Q.6 (VCEG) document VCEG-M33, Apr. 2001.
[19] HTM reference software, hofer.de/svn/svn_3DVCSoftware/tags/
[20] P. Aflaki, D. Rusanovskyy, M. M. Hannuksela, and M. Gabbouj, "Unpaired multiview video plus depth compression," Proc. of International Conference on Digital Signal Processing, July 2013.
[21] L. Chen, D. Rusanovskyy, P. Aflaki, and M. M. Hannuksela, "3D-AVC: Coding of unpaired MVD data," JCT-3V document JCT3V-D0161, Apr. 2013.
[22] P. Aflaki, M. M. Hannuksela, and M. Gabbouj, "Flexible depth map spatial resolution in depth-enhanced multiview video coding," Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2014.
[23] P. Aflaki, M. M. Hannuksela, and X. Huang, "CE7: Removal of texture-to-depth resolution ratio restrictions," JCT-3V document JCT3V-E0035, July 2013.
[24] S. Shimizu and S. Sugimoto, "AHG13: Results with quarter resolution depth map coding," JCT-3V document JCT3V-G0151, Jan. 2014.
[25] P. Aflaki, M. M. Hannuksela, D. Rusanovskyy, and M. Gabbouj, "Nonlinear depth map resampling for depth-enhanced 3-D video coding," IEEE Signal Processing Letters, vol. 20, no. 1, pp. 87-90, Jan. 2013.
[26] M. G. Perkins, "Data compression of stereopairs," IEEE Transactions on Communications, vol. 40, no. 4, pp. 684-696, Apr. 1992.
[27] H. Brust, A. Smolic, K. Müller, G. Tech, and T. Wiegand, "Mixed resolution coding of stereoscopic video for mobile devices," Proc. of 3DTV Conference, May 2009.
[28] G. Saygili, C. G. Gurler, and A. M. Tekalp, "Quality assessment of asymmetric stereo video coding," Proc. of IEEE International Conference on Image Processing, pp. 4009-4012, Sept. 2010.
[29] P. Aflaki, M. M. Hannuksela, and M. Gabbouj, "Subjective quality assessment of asymmetric stereoscopic 3D video," Springer Journal of Signal, Image and Video Processing, Mar. 2013. Online: http://doi.org/10.1007/s11760-013-0439-0
