Mapping the Landscape of Human-Level Artificial General Intelligence

Articles

Mapping the Landscape of Human-Level Artificial General Intelligence

Sam S. Adams, Itamar Arel, Joscha Bach, Robert Coop, Rod Furlan, Ben Goertzel, J. Storrs Hall, Alexei Samsonovich, Matthias Scheutz, Matthew Schlesinger, Stuart C. Shapiro, John F. Sowa

We present the broad outlines of a roadmap toward human-level artificial general intelligence (henceforth, AGI). We begin by discussing AGI in general, adopting a pragmatic goal for its attainment and a necessary foundation of characteristics and requirements. An initial capability landscape will be presented, drawing on major themes from developmental psychology and illuminated by mathematical, physiological, and information-processing perspectives. The challenge of identifying appropriate tasks and environments for measuring AGI will be addressed, and seven scenarios will be presented as milestones suggesting a roadmap across the AGI landscape, along with directions for future research and collaboration.

This article is the result of an ongoing collaborative effort by the coauthors, preceding and during the AGI Roadmap Workshop held at the University of Tennessee, Knoxville, in October 2009, and from many continuing discussions since then. Some of the ideas also trace back to discussions held during two Evaluation and Metrics for Human Level AI workshops organized by John Laird and Pat Langley (one in Ann Arbor in late 2008, and one in Tempe in early 2009). Some of the conclusions of the Ann Arbor workshop were recorded (Laird et al. 2009). Inspiration was also obtained from discussion at the Future of AGI postconference workshop of the AGI-09 conference, triggered by Itamar Arel's presentation AGI Roadmap (Arel 2009), and from an earlier article on AGI road-mapping (Arel and Livingston 2009).

Of course, this is far from the first attempt to plot a course toward human-level AGI: arguably this was the goal of the founders of the field of artificial intelligence in the 1950s, and it has been pursued by a steady stream of AI researchers since, even as the majority of the AI field has focused its attention on narrower, more specific subgoals. The ideas presented here build on the ideas of others in innumerable ways, but to review the history of AI and situate the current effort in the context of its predecessors would require a much longer article than this one. Thus we have chosen to focus on the results of our AGI roadmap discussions, acknowledging in a broad way the many debts owed to prior researchers. References to the prior literature on evaluation of advanced AI systems are given by Laird et al. (2009) and Goertzel and Bugaj (2009), which may in a limited sense be considered prequels to this article.

We begin by discussing AGI in general and adopt a pragmatic goal for measuring progress toward its attainment. We also adopt, as a provisional starting point, a slightly modified version of the characteristics and requirements for AGI proposed by Laird and Wray (2010), upon which we will later construct a number of specific scenarios for assessing progress in achieving AGI. An initial capability landscape for AGI will be presented, drawing on major themes from developmental psychology and illuminated by mathematical, physiological, and information-processing perspectives. The challenge of identifying appropriate tasks and environments for measuring AGI will be taken up. Several scenarios will be presented as milestones outlining a roadmap across the AGI landscape, and directions for future work and collaboration will conclude the article.

Copyright 2012, Association for the Advancement of Artificial Intelligence. All rights reserved. ISSN 0738-4602. AI Magazine, Spring 2012.

The Goal: Human-Level General Intelligence

Simply stated, the goal of AGI research as considered here is the development and demonstration of systems that exhibit the broad range of general intelligence found in humans. This goal of developing AGI echoes that of the early years of the artificial intelligence movement, which after many valiant efforts largely settled for research into narrow AI systems that could demonstrate or surpass human performance in a specific task, but could not generalize this capability to other types of tasks or other domains.

A classic example of the narrow AI approach was IBM's Deep Blue system (Campbell, Hoane, and Hsu 2002), which successfully defeated world chess champion Garry Kasparov but could not readily apply that skill to any other problem domain without substantial human reprogramming.
In early 2011, IBM's Watson question-answering system (Ferrucci 2010) dramatically defeated two all-time champions in the quiz show Jeopardy!, but having never personally visited Chicago's O'Hare and Midway airports, it fumbled on a question that any human frequent flier in the US would have known. To apply the technology underlying Watson to another domain, such as insurance or call-center support, would require not merely education of the AI system but significant reprogramming and human scoring of relevant data: the analogue of needing to perform brain surgery on a human each time the person confronts a new sort of task. As impressive as these and other AI systems are in their restricted roles, they all lack the basic cognitive capabilities and common sense of a typical five-year-old child, let alone a fully educated adult professional.

Given the immense scope of the task of creating AGI, we believe the best path to long-term success is collaboration and coordination of the efforts of multiple research groups. A common goal and a shared understanding of the landscape ahead of us will be crucial to that success, and it was the aim of our workshop to make substantial progress in that direction.

A Pragmatic Goal for AGI

The heterogeneity of general intelligence in humans makes it practically impossible to develop a comprehensive, fine-grained measurement system for AGI. While we encourage research in defining such high-fidelity metrics for specific capabilities, we feel that at this stage of AGI development a pragmatic, high-level goal is the best we can agree upon.

Nils Nilsson, one of the early leaders of the AI field, stated such a goal in the 2005 AI Magazine article "Human-Level Artificial Intelligence? Be Serious!" (Nilsson 2005):

I claim achieving real human-level artificial intelligence would necessarily imply that most of the tasks that humans perform for pay could be automated. Rather than work toward this goal of automation by building special-purpose systems, I argue for the development of general-purpose, educable systems that can learn and be taught to perform any of the thousands of jobs that humans can perform. Joining others who have made similar proposals, I advocate beginning with a system that has minimal, although extensive, built-in capabilities. These would have to include the ability to improve through learning along with many other abilities.

Many variant approaches have been proposed for achieving such a goal, and both the AI and AGI communities have been working for decades on the myriad subgoals that would have to be achieved and integrated to deliver a comprehensive AGI system. But aside from the many technological and theoretical challenges involved in this effort, we feel the greatest impediment to progress is the absence of a common framework for collaboration and comparison of results. The AGI community has been working on defining such a framework for several years now, and as mentioned earlier, this present effort on AGI road-mapping builds on the shoulders of much prior work.

Characteristics and Requirements for AGI

As a starting point for our road-mapping effort, we have adopted a slightly modified version of Laird and Wray's cognitive architecture requirements for achieving AGI (Laird and Wray 2010). Their outline of eight characteristics for environments, tasks, and agents and 12 requirements for general cognitive architectures provides a level foundation for the comparison of appropriate scenarios for assessing AGI systems. We stress, however, that in our perspective these requirements should not be considered final or set in stone, but simply a convenient point of departure for discussion and collaboration. As our understanding of AGI improves through ongoing research, our understanding of the associated requirements is sure to evolve. In all probability, each of the many current AGI research paradigms could be used to spawn a slightly different requirements list, but we must start somewhere if we are to make progress as a community.

To test the capability of any AGI system, the characteristics of the intelligent agent and its assigned tasks within the context of a given environment must be well specified. Failure to do this may result in a convincing demonstration, but it makes it exceedingly difficult for other researchers to duplicate experiments and to compare and contrast alternative approaches and implementations. The characteristics shown in figure 1 provide the necessary (if not sufficient) degrees of dynamism and complexity that will weed out most narrow AI approaches at the outset, while continually challenging researchers to consider the larger goal of AGI during their work on subsystems and distinct capabilities. We have added the requirement of openness of the environment (C2); that is, the agent should not be able to rely on a fixed library of objects, relations, and events. We also require objects to have an internal structure that demands complex, flexible representations (C1).

Figure 1. Characteristics for AGI Environments, Tasks, and Agents.
C1. The environment is complex, with diverse, interacting, and richly structured objects.
C2. The environment is dynamic and open.
C3. Task-relevant regularities exist at multiple time scales.
C4. Other agents impact performance.
C5. Tasks can be complex, diverse, and novel.
C6. Interactions between agent, environment, and tasks are complex and limited.
C7. Computational resources of the agent are limited.
C8. Agent existence is long-term and continual.

There are nearly as many different AGI architectures as there are researchers in the field. If we are ever to be able to compare and contrast systems, let alone integrate them, a common set of architectural features must form the basis for that comparison. Figure 2 reprises Laird and Wray's requirements for general cognitive architectures and provides a framework for that basis. We have modified requirement R0, which in Laird and Wray reads "fixed structure for all tasks," to emphasize that the system may grow and develop over time, perhaps changing its structure dramatically; but these changes need to be effected by the agent itself, not by the intervention of the programmer. We anticipate further additions and changes to these lists over time as the community converges in experience with various cognitive architectures and AGI systems, but we have adopted them so we can progress as a community toward our larger, shared goals.

Figure 2. Cognitive Architecture Requirements for AGI.
R0. New tasks do not require reprogramming of the agent.
R1. Realize a symbol system.
Represent and effectively use:
R2. Modality-specific knowledge.
R3. Large bodies of diverse knowledge.
R4. Knowledge with different levels of generality.
R5. Diverse levels of knowledge.
R6. Beliefs independent of current perception.
R7. Rich, hierarchical control knowledge.
R8. Meta-cognitive knowledge.
R9. Support a spectrum of bounded and unbounded deliberation.
R10. Support diverse, comprehensive learning.
R11. Support incremental, online learning.

Current Challenges

In addition to providing a framework for collaborative research in AGI, Laird and Wray (2010) identified two significant challenges:

one of the best ways to refine and extend these sets of requirements and characteristics is to develop agents using cognitive architectures that test the sufficiency and necessity of all these and other possible characteristics and requirements on a variety of real-world tasks. One challenge is to find tasks and environments where all of these characteristics are active, and thus all of the requirements must be confronted. A second challenge is that the existence of an architecture that achieves a subset of these requirements does not guarantee that such an architecture can be extended to achieve other requirements while maintaining satisfaction of the original set of requirements.

Our 2009 AGI road-mapping workshop took up the first challenge of finding appropriate tasks and environments to assess AGI systems, while the second challenge will be more appropriately handled by individual research efforts. We also added a third challenge, that of defining the landscape of AGI, in service to the AGI road-mapping effort that has been under way for several years.

The balance of this article will deal with these two challenges in reverse order. First, we will provide an initial landscape of AGI capabilities based on developmental psychology and underpinned by mathematical, physiological, and information-processing perspectives. Second, we will discuss the issues that arise in selecting and defining appropriate tasks and environments for assessing AGI. Finally, we will present a number of scenarios and directions for future work and collaboration.

Challenge: Mapping the Landscape

There was much discussion, both in preparation for and throughout our AGI Roadmap Workshop, of the process of developing a roadmap. A traditional highway roadmap shows multiple driving routes across a landscape of cities and towns, natural features like rivers and mountains, and political features like state and national borders. A technology roadmap typically shows a single progression of developmental milestones from a known starting point to a desired result. Our first challenge in defining a roadmap for achieving AGI was that we initially had neither a well-defined starting point nor a commonly agreed-upon target result. The history of both AI and AGI is replete with this problem, which is somewhat understandable given the breadth and depth of the subjects of both human intelligence and computer technology.
We made progress by borrowing more metaphors from the highway roadmap, deciding to first define the landscape for AGI and then populate that landscape with milestones that may be traversed through multiple alternative routes.

The final destination, full human-level artificial general intelligence, encompasses a system that could learn, replicate, and possibly exceed human-level performance across the full breadth of cognitive and intellectual abilities. The starting point, however, was more problematic, since the many current approaches to achieving AGI assume different initial states. We finally settled on a developmental approach to the roadmap, following human cognitive development from birth through adulthood.

The various scenarios for assessing AGI that had been submitted by participants prior to the workshop were then arrayed as milestones between these two endpoints, without any specific routes between them aside from their relative requirements for increasing levels of human cognitive development.

Top Down: Characterizing Human Cognitive Development

The psychological approach to intelligence encompasses a broad variety of subapproaches rather than presenting a unified perspective. Viewed historically, efforts to conceptualize, define, and measure intelligence in humans reflect a distinct trend from general to specific (Gregory 1996), much like the history of AI.

Early work in defining and measuring intelligence was heavily influenced by Spearman, who in 1904 proposed the psychological factor g (for general intelligence). Spearman argued that g was biologically determined and represented the overall intellectual skill level of an individual. A related advance was made in 1905 by Binet and Simon, who developed a novel approach for measuring general intelligence in French schoolchildren.
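Terman's ratio IQ, discussed below, builds directly on such age-normed scores: mental age divided by chronological age, conventionally scaled by 100 (the article states only the ratio; the scaling factor is the standard convention, assumed here). A minimal illustrative sketch, with a hypothetical function name and sample values:

```python
def ratio_iq(mental_age_years: float, chronological_age_years: float) -> float:
    """Terman-style ratio IQ: (mental age / chronological age) * 100."""
    if chronological_age_years <= 0:
        raise ValueError("chronological age must be positive")
    return 100.0 * mental_age_years / chronological_age_years

# An 8-year-old performing at a 10-year-old's level scores above 100:
print(ratio_iq(10, 8))  # → 125.0
```

A child whose performance matches his or her chronological age scores exactly 100 under this scheme, which is why 100 is the nominal population average.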
A unique feature of the Binet-Simon scale was that it provided comprehensive age norms, so that each child could be systematically compared with others across both age and intellectual skill level. In 1916, Terman introduced the notion of an intelligence quotient, or IQ, which is computed by dividing the test taker's mental age (that is, his or her age-equivalent performance level) by the physical or chronological age.

In subsequent years, psychologists began to question the concept of intelligence as a single, undifferentiated capacity. There were two primary concerns. First, while performance within an individual across knowledge domains is somewhat correlated, it is not unusual for skill levels in one domain to be considerably higher or lower than in another (that is, intraindividual variability). Second, two individuals with comparable overall performance levels might differ significantly across specific knowledge domains (that is, interindividual variability).

These issues helped motivate a number of alternative theories, definitions, and measurement approaches, which share the idea that intelligence is multifaceted and variable both within and across individuals. Of these approaches, a particularly well-known example is Gardner's theory of multiple intelligences (Gardner 1999), which proposes eight distinct forms or types of intelligence: (1) linguistic, (2) logical-mathematical, (3) musical, (4) bodily-kinesthetic, (5) spatial, (6) interpersonal, (7) intrapersonal, and (8) naturalist. Gardner's theory suggests that each individual's intellectual skill is represented by an intelligence profile, a unique mosaic or combination of skill levels across the eight forms of intelligence.

While Gardner's theory has had significant impact within the field of adult intelligence, it has had comparatively less influence on the study of intellectual development in children. Instead, researchers in the field of cognitive development seek to (1) describe processes of intellectual change, while (2) identifying and explaining the underlying mechanisms (both biological and environmental) that make these changes possible. Contemporary theories of cognitive development are very diverse and defy simple systematization, and a thorough treatment of the field would take us too far from our focus. However, two major schools of thought, those of Piaget and Vygotsky, will serve as axes for our AGI landscape.

Piaget's Theory

In his classic work that founded the science of cognitive development, Piaget proposed that humans progress through four qualitatively distinct stages (Piaget 1953).

First, during the sensorimotor stage (0-2 years), infants acquire a rich repertoire of perceptual and motor skills (for example, reaching, grasping, crawling, and walking). This stage includes a number of major milestones, such as the ability to search for hidden objects and the ability to use objects as simple tools.

Second, infants enter the preoperational stage (2-6 years) as they acquire the capacity to mentally represent their experiences (for example, through memory, mental images, drawing, and language), but lack the ability to systematically coordinate these representations in a logically consistent manner.

During the next stage (6 years to adolescence), children at the concrete operational level master basic elements of logical and mathematical thought, including the ability to reason about classes and categories, as well as numerical operations and relations.

The final stage of development, formal operations, begins in adolescence and includes the use of deductive logic, combinatorial reasoning, and the ability to reason about hypothetical events.

Vygotsky's Theory

In contrast to Piaget's view, which focuses on the individual, Vygotsky's classic theory of cognitive development emphasizes the sociocultural perspective (Vygotsky 1986). Vygotsky's theory not only highlights the influence of the social environment, but also proposes that each culture provides a unique developmental context for the child. Three fundamental concepts from this theory are (1) internalization, (2) the zone of proximal development, and (3) tools of intellectual adaptation.

First, Vygotsky proposed that the capacity for thought begins with acquiring speech (that is, thinking out loud), which gradually becomes covert, or internalized.

Second, he emphasized that parents, teachers, and skilled peers facilitate development by helping the child function at a level just beyond what he or she is capable of doing alone; he referred to this cognitive space as the zone of proximal development.

Third, Vygotsky also stressed that each child inherits a unique set of objects, ideas, and traditions that guide learning (for example, books, calculators, and computers). These tools of intellectual adaptation not only influence the pattern of cognitive development, but also serve as constraints on the rate and extent of development.

Surveying the Landscape of Human Intelligence

While many consider the views of Piaget and Vygotsky to be at odds because of their different foci, we consider them to be complementary, partial descriptions of the same development process. Piaget's approach focused on the stages of cognitive development of an individual child, while Vygotsky considered the same development within the context of social interaction with other humans with access to, and training in the use of, cultural artifacts like tools, languages, books, and a shared environment. By placing the developmental stages of each theory on opposing axes (figure 3), we can outline the landscape of human cognitive development and provide a structure for the placement of milestones along the road to AGI.

Bottom Up: Substrata of the AGI Landscape

Our ultimate goal as a community is to create functional computer systems that demonstrate human-level general intelligence. This lofty aim requires that we deeply understand the characteristics of human cognitive behavior and development outlined above, as well as implement that understanding in a nonbiological substrate, the modern digital computer. To accomplish this, we draw inspiration and perspectives from many different disciplines, as shown in figure 4.
From physiology we seek to understand the biological implementation of human intelligence; from mathematics, the nature of information and intelligence regardless of its implementation; and from information processing, we map these insights and others into the metaphors of computer science that will most directly lead to successful computer implementation. This section describes each of these underpinnings for our road-mapping efforts.

Figure 3. The Landscape of Human Cognitive Development. (Vertical axis, Individual Capability (Piaget): Sensory/Motor (birth-2 years), Preoperational (2-6 years), Concrete Operational (6 years to adolescence), Formal Operational (adolescence to adult). Horizontal axis, Social-Cultural Engagement (Vygotsky): Internalization, Zone of Proximal Development, Tools of Intellectual Adaptation. The diagonal traces human cognitive development toward full human-level intelligence.)

The Physiological Perspective

Given advances in bioscience, it has become increasingly feasible to take a more holistic physiological perspective on cognitive development, incorporating genetic, biochemical, and neural mechanisms, among others. A unique strength of biologically inspired cognitive theories is their ability to account for universal aspects of development that emerge in a consistent pattern across a wide range of cultures, physical environments, and historical time periods (for example, motor-skill development, object perception, and language acquisition). The physiological approach also plays a prominent role in explaining differences between typical and atypical patterns of development (for example, autism, ADHD, learning disorders, and disabilities). Both physiological and information-processing perspectives favor modular accounts of cognitive development, which view the brain as divided into special-purpose input-output systems that are devoted to specific processing tasks.

The Mathematical Perspective

The mathematical perspective is typified by the recent work of Marcus Hutter and Shane Legg, who give a formal definition of general intelligence based on the Solomonoff-Levin distribution (Legg and Hutter 2007). Put very roughly, they define intelligence as the average reward-achieving capability of a system, calculated by averaging over all possible reward-summable environments, where each environment is weighted in such a way that more compactly describable programs have larger weights. Variants of this definition have been proposed to take into account the way that intelligence may be biased toward particular sorts of environments, and the fact that not all intelligent systems are explicitly reward-seeking (Goertzel 2010).

While this notion of intelligence as compression is useful, even if a mathematical definition of general intelligence were fully agreed upon, it would not address the human-level part of AGI. Human intelligence is neither completely general in the sense of a theoretical AGI like AIXI, Hutter's optimally intelligent agent (Hutter 2004), nor is it highly specialized in the sense of current AI software. It has strong general capability, yet is biased toward the class of environments in which human intelligence develops, a class of environments whose detailed formalization remains largely an unsolved problem.

The Information Processing Perspective

Much of modern cognitive science uses the computer as a metaphor for the mind. This perspective differs from the system-theoretic ideas of Piaget and Vygotsky, but at the same time provides a more direct mapping to the target implementation for AGI systems.

Figure 4. The Substrata for the AGI Landscape. (Human cognitive development rests on three substrata: physiology, the existing realization; mathematics, information theory; and information processing, the implementation metaphor.)

According to the information-processing perspective, cognitive development in infancy and childhood is due to changes in both hardware (neural maturation, for example, synaptogenesis and neural pruning) and software (for example, the acquisition of strategies for acquiring and processing information). Rather than advocating a stage-based approach, this perspective often highlights processes of change that are gradual or continuous. For example, short-term memory, typically measured by the ability to remember a string of random letters or digits, improves linearly during early childhood (Dempster 1981).

Challenge: Finding Tasks and Environments

The next challenge we address is that of defining appropriate tasks and environments for assessing progress in AGI where all of the Laird and Wray characteristics are active, and thus all of the requirements must be confronted. The usefulness of any task or environment for this purpose depends critically on how well it provides a basis for comparison between alternative approaches and architectures. With this in mind, we believe that both tasks and environments must be designed or specified with some knowledge of each other. For example, consider the competition to develop an automobile capable of driving itself across a roughly specified desert course.1 While much of the architecture and implementation would likely be useful for the later competition for autonomous city driving,2 such a different environment required significant reconsideration of the tasks and subtasks themselves.

Tasks, therefore, require a context to be useful, and environments, including the AGI's embodiment itself, must be considered in tandem with the task definition. Using this reasoning, we decided to define scenarios that combine both tasks and their necessary environments.

Further, since we are considering multiple scenarios as well as multiple approaches and architectures, it is also important to be able to compare and contrast tasks belonging to different scenarios. With this in mind, we chose to proceed by first articulating a rough heuristic list of human intelligence competencies (figure 5). As a rule of thumb, tasks may be conceived as ways to assess competencies within environments. However, contemporary cognitive science does not give us adequate guidance to formulate anything close to a complete, rigid, and refined list of competencies. What we present here must frankly be classified as an intuitive approach for thinking about task generation, rather than a rigorous analytical methodology from which tasks can be derived.

Figure 5. Some of the Important Competency Areas Associated with Human-Level General Intelligence. (Among them: subgoal creation, physical construction with objects, counting observed entities, formation of novel concepts, grounded small number arithmetic, verbal invention, comparison of quantitative properties of observed entities, social organization, and measurement using simple tools.)

Environments and Embodiments for AGI

General intelligence in humans develops within the context of a human body, complete with many thousands of input sensors and output effectors, which is itself situated within the context of a richly reactive environment, complete with many other humans, some of whom may be caregivers, teachers, collaborators, or adversaries. Human perceptual input ranges from skin sensors reacting to temperature and pressure, smell and taste sensors in the nose and mouth, and sound and light sensors in the ears and eyes, to internal proprioceptive sensors that provide
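The Legg-Hutter definition invoked in the mathematical perspective section above can be stated compactly. The following is a reconstruction from the cited work rather than a formula given in this article: the universal intelligence of an agent pi is its expected total reward in each computable, reward-summable environment mu, averaged under the Solomonoff-Levin prior, so that environments with shorter descriptions (lower Kolmogorov complexity K) weigh more:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here E is the set of computable reward-summable environments and V^{\pi}_{\mu} is the expected total reward achieved by agent pi in environment mu. The 2^{-K(\mu)} weighting is what makes this a notion of "intelligence as compression": simple environments dominate the average, but every computable environment contributes.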

