Defining Architecture Components of the Big Data Ecosystem

Yuri Demchenko, Cees de Laat
System and Network Engineering Group, University of Amsterdam
Amsterdam, The Netherlands
e-mail: {y.demchenko, C.T.A.M.deLaat}@uva.nl

Peter Membrey
Hong Kong Polytechnic University
Hong Kong SAR, China
e-mail: cspmembrey@comp.polyu.edu.hk

Abstract: Big Data are becoming a new technology focus both in science and in industry and motivate a technology shift to data centric architecture and operational models. There is a vital need to define the basic information/semantic models, architecture components and operational models that together comprise a so-called Big Data Ecosystem. This paper discusses the nature of Big Data that may originate from different scientific, industry and social activity domains and proposes an improved Big Data definition that includes the following parts: Big Data properties (also called the Big Data 5V: Volume, Velocity, Variety, Value and Veracity), data models and structures, data analytics, infrastructure and security. The paper discusses the paradigm change from traditional host or service based to data centric architecture and operational models in Big Data. The Big Data Architecture Framework (BDAF) is proposed to address all aspects of the Big Data Ecosystem and includes the following components: Big Data Infrastructure, Big Data Analytics, Data structures and models, Big Data Lifecycle Management, and Big Data Security. The paper analyses requirements to, and provides suggestions on how, the above-mentioned components can address the main Big Data challenges. The presented work intends to provide a consolidated view of the Big Data phenomenon and the related challenges to modern technologies, and to initiate a wide discussion.

Keywords: Big Data Technology, Big Data Ecosystem, Big Data Architecture Framework (BDAF), Big Data Infrastructure (BDI), Big Data Lifecycle Management (BDLM), Cloud based Big Data Infrastructure Services.
I. INTRODUCTION

Big Data, also referred to as Data Intensive Technologies, are becoming a new technology trend in science, industry and business [1, 2, 3]. Big Data are becoming related to almost all aspects of human activity, from just recording events to research, design, production and the delivery of digital services or products to the final consumer. Current technologies such as Cloud Computing and ubiquitous network connectivity provide a platform for automation of all processes in data collection, storing, processing and visualization.

The goal of our research at the current stage is to understand the nature of Big Data, their main features, trends and new possibilities in Big Data technologies development, to identify the security issues and problems related to the specific Big Data properties, and based on this to review architecture models and propose a consistent approach to defining Big Data architectures/solutions that resolve existing challenges and known issues/problems.

In this paper we continue with the Big Data definition and enhance the definition given in [3], which includes the 5V Big Data properties: Volume, Variety, Velocity, Value, Veracity, and suggest other dimensions for Big Data analysis and taxonomy, in particular comparing and contrasting Big Data technologies in e-Science, industry, business, social media and healthcare. With a long tradition of working with constantly increasing volumes of data, modern e-Science can offer industry its scientific analysis methods, while industry can bring advanced and fast developing Big Data technologies and tools to science and the wider public.

There are not many academic papers related to Big Data; in most cases they are focused on some component technology (e.g. Data Analytics or Machine Learning) or on a solution that reflects only a small part of the whole problem area. The same relates to a Big Data definition that would provide a conceptual basis for further technology development. There is no well-established terminology in this area. Currently this problem is targeted by the recently established NIST Big Data Working Group (NBD-WG) [4] that meets on a weekly basis in subgroups focused on Big Data definition, Big Data Reference Architecture, Big Data Requirements, and Big Data Security. The authors are actively contributing to the NBD-WG and have presented the approach and ideas proposed/discussed in this paper at one of the NBD-WG virtual meetings [5]. We will refer to the NBD-WG discussions and documents in many places along this paper to support our ideas or to illustrate alternative approaches.

In Big Data, data are rather a "fuel" that "powers" the whole complex of technical facilities and infrastructure components built around a specific data origin and its target use. We will call this a Big Data Ecosystem (BDE). By defining the BDE we contrast its data centric character with the traditional definition of architecture that is more applicable to facility or service centric technologies. We discuss the major (architecture) components that together constitute the Big Data Ecosystem: the 5V Big Data properties, Data Models and Structures, Big Data Infrastructure, Big Data Lifecycle Management (or data transformation flow), and Big Data Security Infrastructure.

The paper is organised as follows. Section II investigates different Big Data origin domains and target uses and, based on this, proposes a new extended/improved Big Data definition as the main component of the Big Data Ecosystem. Section III analyses the paradigm change in Big Data and Data Intensive technologies. Section IV proposes the Big Data Architecture Framework that combines all the major components of the Big Data Ecosystem; the section also briefly discusses Big Data Management issues and required Big Data structures. Section V provides suggestions about building the Big Data Infrastructure and specifically the Big Data Analytics components. Section VII refers to other works related to defining the Big Data architecture and its components. The paper concludes with a summary and suggestions for further research.

II. BIG DATA DEFINITION AND ANALYSIS

A. Big Data Nature and Application Domains

We observe that the Big Data "revolution" is happening in different human activity domains, empowered by significant growth of computing power, ubiquitous availability of computing and storage resources, and the increase of digital content production and mobility. This creates a variety of Big Data origin and usage domains.

Table 1 lists the main Big Data origin domains and targeted uses or applications; the list is not exhaustive and is presented to illustrate the need for a detailed analysis of these aspects. We refer to the discussion in [5], presented by the authors at the NBD-WG, about the relations between these two dimensions to indicate their dependence. We can assume a high relevance of Big Data to business; this actually explains the current strong interest in Big Data from business, which is becoming the main driving force in this technology domain.

TABLE 1. BIG DATA ORIGIN AND TARGET USE DOMAINS

  Big Data Origin                    Big Data Target Use
  1. Science                         (a) Scientific discovery
  2. Telecom                         (b) New technologies
  3. Industry                        (c) Manufacturing, process control, transport
  4. Business                        (d) Personal services, campaigns
  5. Living Environment, Cities      (e) Living environment support
  6. Social media and networks       (f) Healthcare support
  7. Healthcare

Science has traditionally been dealing with the challenges of handling large volumes of data in complex scientific research experiments, involving also wide cooperation among distributed groups of individual scientists and research organizations. Scientific research typically includes the collection of data in passive observation or active experiments which aim to verify one or another scientific hypothesis. Scientific research and discovery methods are typically based on an initial hypothesis and a model which can be refined based on the collected data. The refined model may lead to a new, more advanced and precise experiment and/or re-evaluation of the previous data. The future Scientific Data and Big Data Infrastructure (SDI/BDI) needs to support all data handling operations and processes, providing also access to data and to facilities for collaborating researchers. Besides traditional access control and data security issues, security services need to ensure a secure and trusted environment for researchers to conduct their research.

In business, private companies will not typically share data or expertise. When dealing with data, companies will always intend to keep control over their information assets. They may use shared third party facilities, like clouds or specialist instruments, but special measures need to be taken to ensure workspace safety and data protection, including input/output data sanitization.
Big Data in industry are related to controlling complex technological processes and objects or facilities. Modern computer-aided manufacturing produces huge amounts of data which in general need to be stored or retained to allow effective quality control or diagnostics in case of failure or crash. Similarly to e-Science, in many industrial applications/scenarios there is a need for collaboration or interaction between many workers and technologists.

The rise of Big Data is tightly connected to the social data revolution, which both provided the initial motivation for developing large scale services, global infrastructure and high performance analytical tools, and produces huge amounts of data on its own. Social networks are widely used for collecting personal information and providing better profiled personal services, starting from personal search advice to targeted advertisements and precisely targeted campaigns.

We accept that the proposed analysis is not exhaustive and can be extended and detailed, but we use it to illustrate the need for more detailed research in this area.

B. 5V of Big Data

Although "Big Data" has become a new buzzword, there is no consistent definition of Big Data, nor a detailed analysis of this new emerging technology. Most discussions until now have taken place in the blogosphere, where active contributors have generally converged on the most important features and incentives of Big Data [6, 7, 8].

We refer to our recent paper [3] where we summarized the discussions existing at that time and proposed a Big Data definition as having the following 5V properties: Volume, Velocity and Variety, which constitute the native/original Big Data properties, and Value and Veracity, which are acquired as a result of initial data classification and processing in the context of a specific process or model.

To provide background for the discussion, we quote here a few definitions by leading experts and consulting companies. We start with the IDC definition of Big Data (rather strict and conservative): "A new generation of technologies and architectures designed to economically extract value from very large volumes of a wide variety of data by enabling high velocity capture, discovery, and/or analysis" [9].

It can be complemented with a simpler definition by Jason Bloomberg [8]: "Big Data: a massive volume of both structured and unstructured data that is so large that it's difficult to process using traditional database and software techniques." This is also in accordance with the definition given by Jim Gray in his seminal book [10].

We concur with the Gartner definition of Big Data that is termed a three-part definition: "Big data is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making." [11, 12]. Further analysis of the Big Data use cases, in particular those discussed by the NBD-WG [4], reveals other aspects and Big Data features.

During the Big Data lifecycle, each stage of the data transformation or processing changes the dataset content and state and consequently may change or enrich the data model. In many cases there is a need to link original data and processed data, keeping referral integrity (see more discussion about this in the following sections).

This motivates another Big Data feature: Dynamicity/Variability, which reflects the fact that data are in constant change and may have a definite state beyond the commonly defined data in move, at rest, or being processed. Supporting these data properly will require scalable provenance models and tools that also incorporate data integrity and confidentiality.

C. From 5V to 5 Parts Big Data Definition

It is obvious that the current Big Data definition addresses only the three basic Big Data properties Volume, Velocity and Variety (the so-called 3V) and the related technology components. To improve and extend the Big Data definition as a new technology, we need to find a way to reflect all of its important features and provide a guidance/basis for further technology development. We can refer to one of the best examples, the Cloud Computing definition [18] given by NIST in 2008, which actually shaped the current cloud industry.

We propose a Big Data definition as having five parts that group the main Big Data features and related infrastructure components:

(1) Big Data Properties: 5V
    - Volume, Variety, Velocity, Value, Veracity
    - Additionally: Data Dynamicity (Variability) and Linkage
(2) New Data Models (see the sketch after this list)
    - Data linking, provenance and referral integrity
    - Data Lifecycle and Variability/Evolution
(3) New Analytics
    - Real-time/streaming analytics, interactive and machine learning analytics
(4) New Infrastructure and Tools
    - Cloud based infrastructure, storage, network, high performance computing
    - Heterogeneous multi-provider services integration
    - New Data Centric (multi-stakeholder) service models
    - New Data Centric security models for trusted infrastructure and data processing and storage
(5) Source and Target, which are important aspects that sometimes define data types and data structures, e.g. raw data, data streams, correlated data
    - High velocity/speed data capture from a variety of sensors and data sources
    - Data delivery to different visualisation and actionable systems and consumers
    - Fully digitised input and output, (ubiquitous) sensor networks, full digital control
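To make the "new data models" part of this definition more concrete, the following minimal sketch shows how a dataset descriptor could record the 5V properties together with linkage and provenance references. All class and field names, as well as the example identifier values, are illustrative assumptions made for this paper and are not part of any standard or of the NBD-WG work.

```python
# Illustrative sketch only: a minimal dataset descriptor recording 5V properties
# together with linkage and provenance references. All names and identifier
# values are hypothetical, not taken from any standard.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ProvenanceLink:
    """Reference to the dataset a derived dataset was produced from."""
    source_dataset_id: str   # persistent ID of the source dataset (referral integrity)
    transformation: str      # e.g. "filtering", "aggregation", "analytics"


@dataclass
class DatasetDescriptor:
    dataset_id: str                            # persistent identifier
    volume_bytes: int                          # Volume
    velocity: str                              # Velocity, e.g. "batch" or "streaming"
    variety: List[str]                         # Variety: formats present in the dataset
    value: Optional[str] = None                # Value: acquired after classification/processing
    veracity_score: Optional[float] = None     # Veracity: e.g. a trust/quality score in 0..1
    lifecycle_state: str = "raw"               # Dynamicity/Variability: raw, processed, published, archived
    provenance: List[ProvenanceLink] = field(default_factory=list)  # data linking


# Example: a processed dataset keeps a link to the raw data it was derived from.
raw = DatasetDescriptor("doi:10.0000/raw-001", 10**12, "streaming", ["csv", "binary"])
processed = DatasetDescriptor(
    "doi:10.0000/proc-001", 10**10, "batch", ["parquet"],
    value="customer churn features", veracity_score=0.9,
    lifecycle_state="processed",
    provenance=[ProvenanceLink(raw.dataset_id, "filtering and aggregation")],
)
print(processed.provenance[0].source_dataset_id)
```

The point of the sketch is that linkage and provenance are first-class parts of the data model, so that referral integrity can be maintained as the dataset changes state during the lifecycle.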
To reflect the major Big Data features and ecosystem components, we can summarise them in the form of an improved Gartner definition:

"Big Data (Data Intensive) Technologies are targeting to process high-volume, high-velocity, high-variety data (sets/assets) to extract intended data value and ensure high veracity of original data and obtained information that demand cost-effective, innovative forms of data and information processing (analytics) for enhanced insight, decision making, and processes control; all of those demand (should be supported by) new data models (supporting all data states and stages during the whole data lifecycle) and new infrastructure services and tools that allow obtaining (and processing) data from a variety of sources (including sensor networks) and delivering data in a variety of forms to different data and information consumers and devices."

D. Big Data Ecosystem

Big Data is not just a database or Hadoop problem, although these constitute the core technologies and components for large scale data processing and data analytics [13, 14, 15]. It is the whole complex of components needed to store, process, visualize and deliver results to target applications. Actually, Big Data is "a fuel" of all data related processes: source, target, and outcome. All this complex of interrelated components can be defined as the Big Data Ecosystem (BDE) that deals with the evolving data, models and supporting infrastructure during the whole Big Data lifecycle. In the following we will provide more details about our vision of the BDE.

III. PARADIGM CHANGE IN BIG DATA AND DATA INTENSIVE SCIENCE AND TECHNOLOGIES

The recent advancements in general ICT, Cloud Computing and Big Data technologies facilitate the paradigm change in modern e-Science and industry that is characterized by the following features [3, 16]:
 - Transformation of all processes, events and products into digital form by means of multi-dimensional, multi-faceted measurements, monitoring and control; digitising existing artifacts and other content.
 - Automation of all data production, consumption and management processes, including data collection, storing, classification, indexing and other components of general data curation and provenance.
 - Possibility to re-use and repurpose the initial data sets for new and secondary data analysis based on model improvement.
 - Global data availability and access over the network for cooperative groups of researchers or technologists, including wide public access to scientific or production data.
 - Existence of the necessary infrastructure components and management tools that allow fast infrastructure and services composition, adaptation and provisioning on demand for specific research projects and tasks.
 - Advanced security and access control technologies that ensure secure operation of the complex research and production infrastructures and allow creating a trusted secure environment for cooperating groups of researchers and technology specialists.

The following are additional factors that will create new challenges and motivate both the general and the security paradigm change in the Big Data ecosystem:
 - Virtualization: it can improve the security of the data processing environment but cannot solve data security "at rest".
 - Mobility of the different components of the typical data infrastructure: sensors or data sources, data consumers, and the data themselves (original data and staged/evolutional data). This in its own right causes the following problems:
   o On-demand infrastructure services provisioning
   o Inter-domain context communication
 - Big Data aggregation that may involve data from different administrative/logical domains and evolutionally changing data structures (also semantically different).
 - Policy granularity: Big Data may have a complex structure and require different and highly granular policies for their access control and handling.

The future Big Data Infrastructure (BDI) should support the whole data lifecycle and exploit the benefits of data storage/preservation, aggregation and provenance at large scale and over a long/unlimited period of time. Importantly, this infrastructure must ensure data security (integrity, confidentiality, availability, and accountability) and data ownership protection. With the current practice that assumes data access and use by different user groups and, in general, processing on third party facilities/datacenters, there should be a possibility to enforce a data/dataset policy requiring that the data be processed only on trusted systems and/or in compliance with other requirements. Customers must trust the BDI to process their data on BDI facilities and be assured that their stored research data are protected from non-authorised access. Privacy issues also arise from the distributed, remote character of the BDI, which can span multiple countries with different local policies. This should be provided by the access control and accounting infrastructure, which is an important component of the future BDI [3, 16, 17].

A. From Big Data to All-Data Metaphor

One of the difficulties in defining Big Data and setting a common language/vocabulary for Big Data is the different views of the potential stakeholders. For example, big business and big science are arguing how big is big data: is a Petabyte big data? Is an Exabyte big data?
Meanwhile, smaller businesses and "long-tail" science [8] (i.e., science that does not generate huge amounts of data) may conclude that they will never become Big Data players and that all this hype is not for them.

In this respect, it is important to look at the current Big Data related trends in general and to investigate/analyse what the components of the Big Data ecosystem are, how they impact the present ICT infrastructure changes in the first place, and how these changes will affect other IT domains and applications.

Following the trend in some Big Data analytics domains to collect and analyse all available data (all data that can be collected), we can extend it to the following metaphor: "From Big Data to All-Data". This is depicted in Figure 1, which illustrates that the traditional dilemma "move data to computing or computing to data" is not valid in this case, and we really need to look at the future Big Data/All-Data processing model and infrastructure differently.

The All-Data infrastructure will need to adopt generically distributed storage and computing; a complex of functionalities which we depict as the Data Bus will provide all the functionality needed to exchange data, distribute and synchronise processes, and many other functions that should cope with continuous data production, processing and consumption.

Figure 1. From Big Data to All-Data Metaphor.
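The Data Bus is described above only in terms of its responsibilities (exchanging data, distributing and synchronising processes). The following minimal sketch illustrates one possible reading of that role as a topic based publish/subscribe exchange; the interface is our own assumption for illustration and not a defined Data Bus specification.

```python
# Minimal, illustrative publish/subscribe sketch of the "Data Bus" role:
# components exchange data by topic rather than by addressing hosts directly.
# The interface is an assumption made for this sketch only.
from collections import defaultdict
from typing import Callable, Dict, List


class DataBus:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        """Register a consumer (analytics job, visualisation, archive) for a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, record: dict) -> None:
        """Deliver a data record to every consumer subscribed to the topic."""
        for handler in self._subscribers[topic]:
            handler(record)


# Example: a sensor publishes readings; storage and analytics consume them independently.
bus = DataBus()
bus.subscribe("sensor/temperature", lambda r: print("store:", r))
bus.subscribe("sensor/temperature", lambda r: print("analyse:", r))
bus.publish("sensor/temperature", {"sensor_id": "t-17", "value_c": 21.4})
```

In a real All-Data infrastructure this role would of course be played by distributed messaging and data exchange services rather than by a single in-process object; the sketch only shows the decoupling of data producers from consumers.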

B. Moving to Data-Centric Models and Technologies

Current IT and communication technologies are OS/system based and host/service centric, which means that all communication or processing is bound to the host/computer that runs the application software. This especially relates to security services that use server/host based PKI certificates and security protocols. The administrative and security domains are the key concepts around which the services and protocols are built. A domain provides a context for establishing a security context and trust relations. This creates a number of problems when data (payload or session context) are moved from one system to another or between domains, or are operated in a distributed manner.

Big Data will require different data centric operational models and protocols, which is especially important in situations when the object or event related data go through a number of transformations and become even more distributed between traditional security domains. The same relates to the current federated access control model that is based on cross administrative and security domain identities and policy management.

When moving to generically distributed data centric models, additional research is needed to address the following issues:
 - Maintaining semantic and referral integrity, in particular to support data provenance
 - Data location, search, access
 - Data integrity and identifiability, referral integrity
 - Data security and data centric access control, encryption enforced and attribute based access (see the sketch below)
 - Data ownership, personally identified data, privacy, opacity
 - Trusted virtualisation platform, data centric trust bootstrapping
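As an illustration of the data centric, attribute based access control item in the list above, the following sketch evaluates a policy that travels with the data object itself rather than with a host or domain. The attribute names and the policy structure are assumptions made for this example, not a proposed standard.

```python
# Illustrative sketch of data-centric, attribute-based access control: the policy
# is attached to the data object and evaluated against requester attributes,
# independently of the host or administrative domain where the data reside.
# Attribute names and policy structure are assumptions for this example.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DataObject:
    object_id: str
    payload: bytes
    # Policy: each attribute must match one of the allowed values.
    policy: Dict[str, List[str]] = field(default_factory=dict)


def is_access_allowed(obj: DataObject, requester_attributes: Dict[str, str]) -> bool:
    """Grant access only if every policy attribute is satisfied by the requester."""
    return all(
        requester_attributes.get(attr) in allowed
        for attr, allowed in obj.policy.items()
    )


# Example: the dataset may only be processed by project members on trusted platforms.
dataset = DataObject(
    object_id="exp-42/raw",
    payload=b"...",
    policy={"project": ["exp-42"], "platform_trust": ["trusted"]},
)
print(is_access_allowed(dataset, {"project": "exp-42", "platform_trust": "trusted"}))    # True
print(is_access_allowed(dataset, {"project": "exp-42", "platform_trust": "untrusted"}))  # False
```

The design point illustrated here is that the policy is bound to the data object, so it can be evaluated consistently as the data move between systems and administrative domains.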
IV. PROPOSED BIG DATA ARCHITECTURE FRAMEWORK

The discussion above motivates the need for a new approach to the definition of the Big Data Ecosystem that would address the major challenges related to the Big Data properties and component technologies.

In this section we propose the Big Data Architecture Framework (BDAF) that supports the extended Big Data definition given in Section II.C and the main components and processes in the Big Data Ecosystem (BDE). We base our BDAF definition on industry best practices and our experience in defining architectures for new technologies, in particular the NIST Cloud Computing Reference Architecture (CCRA) [18], the Intercloud Architecture Framework (ICAF) by the authors [19], and recent documents by the NIST Big Data Working Group [4], in particular the initial Big Data Reference Architecture [20] and the Big Data technology Roadmap [21]. We also refer to other related architecture definitions: Information as a Service by the Open Data Center Alliance [22], the TMF Big Data Analytics Architecture [23], the IBM Business Analytics and Optimisation Reference Architecture [24], and LexisNexis HPCC Systems [25].

The proposed definition of the Big Data Architecture Framework summarises the majority of the research and discussions in this area known to us. The proposed BDAF comprises the following 5 components that address different aspects of the Big Data Ecosystem and the Big Data definition, which we consider to some extent orthogonal and complementary:

(1) Data Models, Structures, Types
    - Data formats, non/relational data, file systems, etc.
(2) Big Data Management
    - Big Data Lifecycle (Management)
    - Big Data transformation/staging
    - Provenance, Curation, Archiving
(3) Big Data Analytics and Tools
    - Big Data Applications
    - Target use, presentation, visualisation
(4) Big Data Infrastructure (BDI)
    - Storage, Compute, (High Performance Computing,) Network
    - Sensor networks, target/actionable devices
    - Big Data Operational support
(5) Big Data Security
    - Data security at rest and in motion, trusted processing environments

To validate the consistency of the proposed definition in a simple way, we can look at how the proposed components are related to each other. This is illustrated in Table 2, which shows which architecture components are used or required by other components.

TABLE 2. INTERRELATION BETWEEN BDAF COMPONENTS

  Requires (row) / Used by (column)   Data Models | Data Mngnt & Lifecycle | BD Infrastructure & Operation | BD Analytics | Big Data Security
  Data Models
  Data Mngnt & Lifecycle
  BD Infrastructure & Operation
  BD Analytics
  Big Data Security

The proposed BDAF definition is rather technical and infrastructure focused and actually reflects the technology oriented stakeholders. Further research on the BDAF definition should analyse the interests and messages related to different stakeholder groups in Big Data; in particular, we will be looking for contributions from data archive providers and libraries, who are expected to play a renewed role in the BDE [26].

A. Data Models and Structures

Different stages of the Big Data transformation will require different data structures, models and formats, including also the possibility to process both structured and unstructured data [27]. The following data types can be defined according to the current NBD-WG discussions [28]:
(a) data described via a formal data model
(b) data described via a formalized grammar
(c) data described via a standard format
(d) arbitrary textual or binary data
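One simple way to make these four categories operational is to record, for every dataset handled by the infrastructure, which kind of description applies and where that description can be found, while keeping the linkage to the previous processing stage. The sketch below is an illustrative encoding of this idea; it is not an NBD-WG defined schema, and all class and field names are our own assumptions.

```python
# Illustrative encoding of the four data-description categories listed above.
# The enum values mirror (a)-(d); everything else is an assumption made for
# this sketch, not an NBD-WG defined schema.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class DataDescription(Enum):
    FORMAL_DATA_MODEL = "a"    # (a) described via a formal data model
    FORMALIZED_GRAMMAR = "b"   # (b) described via a formalized grammar
    STANDARD_FORMAT = "c"      # (c) described via a standard format
    ARBITRARY = "d"            # (d) arbitrary textual or binary data


@dataclass
class StagedData:
    dataset_id: str
    stage: str                              # e.g. "collection", "processing", "publishing"
    description: DataDescription
    description_ref: Optional[str] = None   # schema, grammar or format identifier, if any
    derived_from: Optional[str] = None      # keeps linkage to the previous stage


# Example: raw sensor dumps are arbitrary binary data; the processed stage is
# described by a formal (relational) model and stays linked to the raw stage.
raw_stage = StagedData("exp-42/raw", "collection", DataDescription.ARBITRARY)
curated_stage = StagedData(
    "exp-42/curated", "processing", DataDescription.FORMAL_DATA_MODEL,
    description_ref="relational schema v1", derived_from=raw_stage.dataset_id,
)
print(curated_stage.description.name, "derived from", curated_stage.derived_from)
```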

Figure 2 illustrates the Big Data structures, models and their linkage at different processing stages. We can see that data structures and, correspondingly, models may be different at different data processing stages; however, in many cases it is important to keep the linkage between data.

Figure 2. Big Data structures, models and their linkage at different processing stages.

We can look closer at scientific data types, their transformation and the related requirements, where we have long-time experience. The emergence of computer aided research methods is transforming the way research is done and scientific data are used. The following types of scientific data are defined [16]:
 - Raw data collected from observation and from experiment (according to an initial research model)
 - Structured data and datasets that went through data filtering and processing (supporting some particular formal model)
 - Published data that support one or another scientific hypothesis, research result or statement
 - Data linked to publications to support wide research consolidation, integration, and openness.

Once the data are published, it is essential to allow other scientists to validate and reproduce the data that they are interested in, and possibly to contribute new results. Capturing information about the processes involved in the transformation from raw data up until the generation of published data becomes an important aspect of scientific data management. Scientific data provenance becomes an issue that also needs to be taken into consideration by Big Data providers [29].

Another aspect to take into consideration is guaranteeing the reusability of published data within the scientific community. Understanding the semantics of the published data becomes an important issue for reusability, and this has traditionally been done manually. However, as we anticipate an unprecedented scale of published data to be generated in Big Data Science, attaching clear data semantics becomes a necessary condition for efficient reuse of published data. Learning from best practices in the semantic web community on how to provide reusable published data will be one of the considerations addressed by the BDI/SDI.

Big Data are typically distributed both on the collection side and on the processing/access side: data need to be collected (sometimes in a time sensitive way or with other environmental attributes), distributed and/or replicated. Linking distributed data is one of the problems to be addressed by Big Data structures and the underlying infrastructure.

As the main motivation we can mention the European Commission's initiative to support Open Access to scientific data from publicly funded projects, which suggests the introduction of the following mechanisms to allow linking publications and data [30, 31]:
 - PID - persistent data ID
 - ORCID - Open Researcher and Contributor Identifier [32].
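As an illustration of how a PID and ORCID identifiers could be used to link a published dataset to a publication and its contributors, the following hypothetical metadata record shows one possible encoding. The field names and the identifier values are illustrative assumptions and are not taken from any specific registry schema.

```python
# Hypothetical metadata record linking a published dataset to a publication and
# its contributors via a persistent data ID (PID) and ORCID identifiers.
# Field names and identifier values are illustrative only.
import json

linked_publication_record = {
    "dataset": {
        "pid": "hdl:21.11115/0000-0000-0000-EXAMPLE",   # persistent data ID (assumed value)
        "title": "Curated observation dataset",
        "lifecycle_state": "published",
    },
    "publication": {
        "doi": "10.0000/example-paper",                  # assumed value
        "title": "Results derived from the dataset",
    },
    "contributors": [
        {"name": "A. Researcher", "orcid": "0000-0002-1825-0097"},  # illustrative ORCID iD
    ],
}

# A consumer (e.g. a data archive or library service) can resolve the PID and the
# ORCID iDs to validate, reproduce or reuse the published data.
print(json.dumps(linked_publication_record, indent=2))
```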
B. Data Management and Big Data Lifecycle

With the proliferation of digital technologies into all aspects of business activities, industry and business are entering a new playground where they need to use scientific methods to benefit from the new opportunities to collect and mine data for desirable information, such as market predictions, customer behavior predictions, social group activity predictions, etc. Numerous blog articles [6, 33] and industry papers [34, 35] suggest that Big Data technologies need to adopt scientific discovery methods that include iterative model improvement, collection of improved data, and re-use of the collected data with the improved model.

We refer to the Scientific Data Lifecycle Management model described in our earlier paper [3, 16], which was also a subject of detailed research in another work [36]; it reflects the complex and iterative process of scientific research that includes a number of consecutive stages: research project or experiment planning; data collection; data processing; publishing research results; discussion and feedback; archiving (or discarding).

The required new approach to data management and processing in the Big Data industry is reflected in the Big Data Lifecycle Management (BDLM) model (see Figure 3), proposed as a result of an analysis of the existing practices in different scientific communities and industry technology domains.

Figure 3. Big Data Lifecycle in Big Data Ecosystem.

The new BDLM requires data storage and preservation at all stages, which should allow data re-use/re-purposing and secondary research/analytics on the processed data and published results. However, this is pos
