Practical Evaluation of the Lasp Programming Model at Large Scale


An Experience Report

Christopher S. Meiklejohn (Université catholique de Louvain, Louvain-la-Neuve, Belgium), Vitor Enes (HASLab / INESC TEC, Universidade do Minho, Braga, Portugal), Junghun Yoo (University of Oxford, Oxford, United Kingdom), Carlos Baquero (HASLab / INESC TEC, Universidade do Minho, Braga, Portugal), Peter Van Roy (Université catholique de Louvain, Louvain-la-Neuve, Belgium), and Annette Bieniusa (Technische Universität Kaiserslautern, Kaiserslautern, Germany)

ABSTRACT

Programming models for building large-scale distributed applications assist the developer in reasoning about consistency and distribution. However, many of the programming models for weak consistency, which promise the largest scalability gains, have little in the way of evaluation to demonstrate the promised scalability. We present an experience report on the implementation and large-scale evaluation of one of these models, Lasp, originally presented at PPDP '15, which provides a declarative, functional programming style for distributed applications. We demonstrate the scalability of Lasp's prototype runtime implementation up to 1024 nodes in the Amazon cloud computing environment. It achieves high scalability by uniquely combining hybrid gossip with a programming model based on convergent computation. We report on the engineering challenges of this implementation and its evaluation, specifically related to operating research prototypes in a production cloud environment.

ACM Reference format:
Christopher S. Meiklejohn, Vitor Enes, Junghun Yoo, Carlos Baquero, Peter Van Roy, and Annette Bieniusa. 2017. Practical Evaluation of the Lasp Programming Model at Large Scale. In Proceedings of PPDP'17, Namur, Belgium, October 9–11, 2017, 6 pages.

1 INTRODUCTION

Once a specialized field for applications that required large data sets, large-scale distributed applications have become commonplace in our globalized society. Regardless of whether you are developing a rich web application or a native mobile application, managing distributed data is challenging. For simplicity, developers today typically resort to using a single database that provides a form of strong consistency (for instance, linearizability, where a value follows the real-time order of updates). In essence, the database serves as shared memory for the clients in the system.

A single database is an obvious bottleneck, as it introduces a serialization point for all operations; this restricts the possible throughput of the system. As developers strive to provide a near-native experience where operations appear to happen immediately, and since not all clients can be geographically located close to the database, application performance can suffer as users move farther from the database; or worse, clients cannot communicate with the database at all when they are offline.
To provide a good user experience, including high availability and low latency, developers are forced to integrate replication into the system design. Systems that favor weak consistency scale better: data items can be locally replicated, locally mutated by the application, and their state can be disseminated asynchronously, outside of the critical path. Weak consistency also allows applications to continue to operate while offline. While these systems provide high scalability and high performance, programming with weak consistency can be a challenge for the application developer, as updates to data items have no guarantees on update visibility or update order. Concurrency poses an additional problem, as updates happening concurrently at different replicas may conflict.

Numerous systems and programming models [2, 3, 6, 8, 11, 14, 17, 18] have been proposed for working with weak consistency; however, few have seen adoption. Many of these systems have sound theoretical foundations, but few perform evaluations at scale to demonstrate their benefits in practice. We believe that the lack of such results comes from the difficulty of building the infrastructure required for large-scale experiments, and from the challenges of engineering an implementation of a theoretical model using existing software languages and libraries.

In this paper, we discuss the practical issues encountered when evaluating one of these programming models, Lasp [4, 5], originally presented at PPDP '15. Lasp is designed using a holistic approach in which the programming model was co-designed with its runtime system to ensure scalability. We examine the challenges of engineering an implementation capable of scaling to a large number of nodes running in a public cloud environment, using a real world application scenario. Further, we report on the engineering challenges of demonstrating the scalability of the Lasp model. Our experience report substantiates that empirically validating scalability is non-trivial, regardless of the programming model.

2 ADVERTISEMENT COUNTER

Lasp was invented to ease the development of distributed applications with weak consistency. The advertisement counter scenario from Rovio Entertainment, creator of Angry Birds, is an ideal fit for Lasp. This application counts the total number of times each advertisement is displayed across all client mobile phones, up to a given threshold for each. The application has the following properties:

- Replicated data. Data is fully replicated to every client in the system. This replicated data is under high contention by each client in the system.
- High scalability. Clients resemble individual mobile phone instances of the application, so the application should scale up to millions of clients.
- High availability. Clients need to continue operation when disconnected, as mobile phones frequently have periods of signal loss (offline operation).

As part of the large-scale evaluation done in the SyncFree project, and following the personal curiosity of the developers, we decided to invest resources in using industrial-strength engineering techniques to evaluate the scalability of this application running in a real world production cloud environment.

2.1 Lasp

Lasp [11] is a programming model that allows developers to write applications with Conflict-Free Replicated Data Types (CRDTs) [1, 16]. CRDTs are abstract data types, designed for use in concurrent and distributed programming, that have a binary merge operation to join any two replicas of a single CRDT. Under concurrent modification without coordination, different replicas of a single CRDT may diverge; the merge operation supports value convergence by ensuring that, given enough communication, all replicas converge, without coordination, to a single deterministic value regardless of the order in which data is received and merged.

Historically, before CRDTs were introduced, ad-hoc merge functions were used, often with few formal guarantees. After their development, programmers who wanted to use CRDTs in their applications had two choices: either use a single CRDT from the existing literature to store application state, fitting their problem to an existing data structure; or build a custom CRDT that fits their application domain, which requires ensuring that the merge operation is both deterministic and convergent.

Lasp improves on this choice in two ways:

- Composition. Lasp provides set-theoretic and functional combinators for composing CRDTs into larger CRDTs.
- Monotonic conditional. Lasp introduces a conditional operation that allows the execution of application logic based on monotonic conditions on CRDTs. (Monotonicity implies that once a condition becomes true, it remains true; a monotonicity check can be done without distributed coordination.)

These two concepts allow Lasp applications to be both transparently and arbitrarily distributed across a set of nodes without altering application behavior. For brevity, the reader is referred to [11] for a full treatment of the Lasp semantics.

The advertisement counter uses two data structures from Lasp: the Add-Wins Set CRDT (also known as the Observed-Remove Set), where elements can be arbitrarily removed and inserted without coordination and, under concurrent add and remove operations, the add will "win"; and the Grow-Only Counter CRDT, which models a counter that only increments. A small self-contained sketch of the Grow-Only Counter appears at the end of this section.

2.2 Overview

The design of the advertisement counter is roughly broken into three components:

- Initialization. When the advertisement counter application is first initialized, we first create Grow-Only Counters for each unique advertisement we want to track impressions for, and we then insert references to them into an initial Add-Wins Set of advertisements.
- Selection of displayable advertisements. We define a dataflow computation in Lasp that derives an Add-Wins Set of advertisements to display to the clients, based on advertisements that have valid "contracts": records that represent that an advertisement is allowed to be displayed at the current time (Figure 1).
- Enforcing invariants. Since clients increment each advertisement counter as advertisement impressions occur, when the target number of impressions is reached both the client and the server fire a trigger that removes the advertisement counter from the set of advertisements, preventing the advertisement from being displayed further. This can be done without coordination through the use of the Add-Wins Set.

[Figure 1: Asynchronous dataflow computation in Lasp that derives the set of displayable advertisements (Contracts are passed through a Filter to produce AdsToDisplay).]

The advertisement counter has two important design choices, which make its implementation in Lasp ideal:

- Offline support. As Angry Birds is a mobile application, there will be periods without connectivity. During this time, advertisements should still be displayable.
- Lower-bound invariant. Advertisements need to be displayed a minimum number of times; additional impressions are not problematic. This is a monotonic condition: once the condition is true, it remains true.
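To make the convergence and threshold behaviour described above concrete, the following is a minimal, self-contained Erlang sketch of a Grow-Only Counter with a merge function and a monotonic threshold check. It is illustrative only; the module and function names are ours and it does not use the Lasp library API.

    %% Minimal, self-contained sketch of a Grow-Only Counter CRDT,
    %% illustrating the convergence property of Section 2.1 and the
    %% monotonic lower-bound check of Section 2.2. Names are ours;
    %% this is not the Lasp library API.
    -module(gcounter_sketch).
    -export([new/0, increment/2, value/1, merge/2, reached_threshold/2]).

    %% A Grow-Only Counter is a map from replica identifier to a local count.
    new() -> #{}.

    %% Increment this replica's entry; a mutator that only inflates the state.
    increment(Replica, Counter) ->
        maps:update_with(Replica, fun(N) -> N + 1 end, 1, Counter).

    %% The observable value is the sum of all per-replica entries.
    value(Counter) ->
        lists:sum(maps:values(Counter)).

    %% Merge takes the per-replica maximum. It is commutative, associative
    %% and idempotent, so replicas converge regardless of delivery order.
    merge(A, B) ->
        maps:fold(fun(Replica, Count, Acc) ->
                      maps:update_with(Replica, fun(C) -> max(C, Count) end,
                                       Count, Acc)
                  end, A, B).

    %% Monotonic condition: once the impression threshold is reached it
    %% stays reached, so the check requires no coordination.
    reached_threshold(Threshold, Counter) ->
        value(Counter) >= Threshold.

For example, if replicas a and b each record one impression while disconnected, merging their states in either order yields a counter whose value is 2 on both replicas.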

2.3 Implementation

The advertisement counter is broken into two components that work in concert. Both components track a single replica of a set of identifiers of displayable advertisements and, for each identifier, a replica of an advertisement counter that tracks the total number of times the advertisement has been displayed to the user. Each node in our experiment runs either a single client or a single server process.

- Server processes. One or more server processes, each responsible for propagating their state to clients and disabling advertisements that have been displayed a minimum number of times by monotonically removing them from the set of displayable advertisements.
- Client processes. Many client processes that periodically propagate their state to other nodes and increment their counter replicas based on a synthetic workload.

The prototype implementation of the Lasp programming model is built in the Erlang programming language and exposed to the user as an application library. The fully instrumented Lasp advertisement counter client is implemented in 276 lines of Erlang code, and the fully instrumented advertisement counter server in 333 lines of Erlang code. Around 50% of this code is for instrumentation and orchestration, to ensure we can perform a full analysis of the application during experimentation. The Lasp runtime system takes care of cluster maintenance, data synchronization, and storage, which are done manually in the previous approaches (ad-hoc merge or custom CRDT design).

3 SYSTEM ARCHITECTURE

To perform a real world evaluation of the advertisement counter, we implemented an efficient, scalable runtime system for Lasp. Lasp's runtime system is a highly scalable, eventually consistent data store with two different dissemination mechanisms (state-based vs. delta-based) and two different cluster topologies (datacenter vs. hybrid gossip). Lasp's programming model, presented in [11], sits above the data store and exposes a programming interface.

Datacenter Lasp [11] operates using a structured overlay network. Hybrid Gossip Lasp [12] uses an unstructured overlay network and, by design, should achieve greater scalability and provide better fault tolerance [15].

3.1 Datacenter Lasp

Datacenter Lasp refers to the prototype implementation of the runtime system presented with the programming model at this conference two years ago [11].

In Datacenter Lasp, all CRDT state is both partitioned and replicated across several datacenter nodes. Client processes communicate directly with server processes that are running on datacenter nodes; client processes do not communicate among each other. Replication is used across datacenter nodes for fault tolerance, and partitioning/sharding is used for horizontal scalability: this is achieved through the use of consistent hashing and hash-space partitioning (a small sketch of this placement scheme follows below). In our experiments this is simplified and there is no partitioning, since the data set for our experiments never exceeds a single datacenter node's available capacity.
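As a rough illustration of hash-space partitioning, the sketch below hashes node names and object identifiers onto a ring of points and assigns each object to the first node whose point follows the object's hash. It is a generic sketch of the technique under our own naming (a single point per node, no virtual nodes), not the Lasp runtime's partitioning code.

    %% Schematic sketch of hash-space partitioning (Section 3.1): object
    %% identifiers are hashed onto a ring and each object is owned by the
    %% first node clockwise from its hash. Illustrative only; production
    %% systems typically place several virtual points per node.
    -module(partition_sketch).
    -export([owner/2]).

    %% Nodes is a non-empty list of node names; Key is any Erlang term.
    owner(Key, Nodes) ->
        Ring = lists:sort([{erlang:phash2({point, N}), N} || N <- Nodes]),
        KeyPoint = erlang:phash2(Key),
        case [Node || {Point, Node} <- Ring, Point >= KeyPoint] of
            [Owner | _] -> Owner;                 %% first node after the key
            []          -> element(2, hd(Ring))   %% wrap around the ring
        end.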
3.2 Hybrid Gossip Lasp

Hybrid Gossip Lasp is inspired by two hybrid gossip protocols, HyParView [10] and Plumtree [9]. In Hybrid Gossip Lasp, nodes are assembled in a peer-to-peer topology, where client processes can communicate either with server processes running on datacenter nodes or with other client processes. State is delivered transitively through other processes in the system: there is no need to communicate directly with a server process running on a datacenter node.

Hybrid Gossip Lasp uses a membership protocol, heavily inspired by HyParView, to compute an overlay network containing all of the members in the cluster. The notable differences between the HyParView protocol and our membership protocol are the result of adapting the theoretical treatment in the HyParView paper to an actual implementation that was used for this experiment.

Specifically, the original HyParView protocol was evaluated in a low-churn environment, whereas our environment has much higher churn. Churn is defined as the rate of node turnover, i.e., the percentage of nodes leaving and being replaced by new nodes per time unit. The higher churn in our environment was a byproduct of attempting to reduce experimentation time to save costs when operating large clusters: this allowed experiments that would normally take hours for cluster deployment and operations to be completed in fractions of an hour, at significant cost savings. For details on the modifications to the protocol, the reader is referred to [13].

3.3 Dissemination Protocols

The system supports two data dissemination protocols (a small sketch contrasting them follows this list):

- State-based. Objects are locally updated through mutators that inflate the state. Objects are periodically sent to peers, which merge the received object with their local state.
- Delta-based. Objects are locally updated by merging the state with the result of δ-mutators [1], called deltas, which compactly represent changed portions of state. These deltas are buffered locally and sent to each local peer in every propagation interval.
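The sketch below contrasts the two strategies for the Grow-Only Counter introduced earlier, reusing the gcounter_sketch module from Section 2.1. It is illustrative only, under our own naming, and is not the Lasp runtime's dissemination code.

    %% Sketch contrasting state-based and delta-based dissemination
    %% (Section 3.3) for the Grow-Only Counter of the earlier sketch.
    -module(dissemination_sketch).
    -export([state_based_payload/1, delta_increment/2, flush_buffer/1]).

    %% State-based: the payload sent to a peer is the full local state;
    %% the receiving peer merges it into its own replica.
    state_based_payload(Counter) ->
        Counter.

    %% Delta-based: the delta-mutator returns only the changed portion of
    %% the state (a single map entry). The caller merges the delta into
    %% its local replica and also appends it to the outgoing delta buffer.
    delta_increment(Replica, Counter) ->
        Delta = #{Replica => maps:get(Replica, Counter, 0) + 1},
        {Delta, gcounter_sketch:merge(Counter, Delta)}.

    %% At each propagation interval the buffered deltas are collapsed into
    %% a single group (itself a valid delta) and sent to each peer.
    flush_buffer(Deltas) ->
        lists:foldl(fun gcounter_sketch:merge/2, gcounter_sketch:new(), Deltas).

A peer applies a received delta exactly as it would a full state, by merging it into its replica, which is why deltas can be grouped and delivered in any order without affecting convergence.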

4 ENGINEERING SCALE

The Lasp semantics ensure that the runtime system is correct in theory for arbitrary distribution of the computation. However, engineering a scalable real-world system requires a significant amount of sophisticated tooling to ensure scalability, both for deployment and for observability during execution. Near the end of the SyncFree project, we designed an experiment with the goal of scaling to 10 000 nodes. We finally achieved a scale of 1024 nodes, at a total cloud computing cost of about €9000.

4.1 Experiment Configuration

For the purposes of the experiment, we used a total of 70 m3.2xlarge instances in the Amazon EC2 cloud computing environment, within the same region and availability zone. We used the Apache Mesos [7] cluster computing framework to subdivide each of these machines into smaller, fully isolated machines using cgroups. Each virtual machine, representing a single Lasp node, communicated with other nodes in the cluster using TCP and, given the uniform deployment across all of the allocated instances, had varying latencies to other nodes in the system depending on their physical location.

When subdividing resources for the experiment, we allocated each server task 4 GB of memory and 2 virtual CPUs, and each client task 1 GB of memory and 0.5 virtual CPUs. Here a task is a logical unit of computation that is executed on one virtual machine. We consider that these numbers vastly underrepresent the capabilities of modern mobile devices in widespread deployment today, and will therefore lead to conservative results in the evaluation. We allocate more resources to servers, specifically in Datacenter Lasp mode, as servers are required to maintain connections to more nodes in the system; the advertisement counter itself does not require different resources between the Datacenter and Hybrid Gossip modes.

4.2 Experimental Workflow

Running experiments in an unsimulated cloud environment is challenging due to the inherent nondeterminism across different executions of the same experiment. We therefore created a workflow targeted at reducing nondeterminism by controlling the experiments' setup and teardown procedures, with detailed instrumentation for post-experimental analysis. We describe that workflow below; a sketch of the per-node workload generator follows the list.

- Bootstrapping. Initially, all of the server and client processes are bootstrapped and joined into a single cluster. The experiment does not begin until we ensure that all of the nodes in the system are connected and the connection graph forms a single connected component. Each node should be reachable by every other node in the system, either directly as a local neighbor or indirectly via multiple hops. During this process, the system creates the advertisement counters and the set of displayable advertisements.
- Simulation. Once we ensure the cluster is connected, each node starts collecting metrics and generating its own workload, which randomly selects a counter to increment, based on the set of displayable advertisements, every predefined impression interval. Periodically, each process propagates local replicas to neighbor processes. It should be noted that each client has its own workload generator: using a centralized harness for running the experiment introduces coordination, which reduces the scalability of the system.
- Convergence. As each experiment has a controlled number of events that will be generated, based on the number of clients participating in the system, the experiment continues to run until each node has observed the effects of all events: we refer to this process as convergence.
- Metrics aggregation and archival. Once convergence is reached, the experiment is complete. Each node, upon observing convergence, begins uploading the metrics recorded during the experiment to a central location: these logs are used for analysis of the runtime system. Once this process is complete, the experiment harness waits for the system to fully tear down the cluster before starting a subsequent run, to prevent state leakage between runs when reusing the same hardware to reduce costs.
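The per-node workload generator of the Simulation step can be pictured as a small periodic process. The sketch below is illustrative only, with our own module and function names, and stands in for the instrumented experiment code.

    %% Sketch of a per-node synthetic workload generator (Section 4.2):
    %% every impression interval the client picks a random displayable
    %% advertisement and records an impression for it.
    -module(workload_sketch).
    -export([start/3]).

    %% Ads: list of advertisement identifiers; IntervalMs: milliseconds
    %% between impressions; Remaining: number of events this node must
    %% generate before it marks event generation as complete.
    start(Ads, IntervalMs, Remaining) ->
        spawn(fun() -> loop(Ads, IntervalMs, Remaining) end).

    loop(_Ads, _IntervalMs, 0) ->
        ok;  %% event generation complete; the convergence phase follows
    loop(Ads, IntervalMs, Remaining) ->
        timer:sleep(IntervalMs),
        Ad = lists:nth(rand:uniform(length(Ads)), Ads),
        %% In the experiment this increments the Grow-Only Counter replica
        %% associated with Ad; here we simply log the impression.
        io:format("impression for ~p~n", [Ad]),
        loop(Ads, IntervalMs, Remaining - 1).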
4.3 Experimental Infrastructure

Evaluation of a large-scale distributed programming model is difficult. This is due to failures in the underlying frameworks that are used to provide mechanisms for deployment and operations, and to inadequate tools for observing the system during execution to ensure it is operating properly.

4.3.1 Apache Mesos. While our experimentation shows Lasp scaling to 1024 nodes, we do not believe that this number is a firm upper limit. When attempting to run experiments with 2048 nodes, we quickly ran into problems with the Apache Mesos cluster computing framework. One issue is that when attempting to bootstrap a cluster containing 70 instances too quickly, instances become disconnected and need to be manually reprovisioned. This required a slower cluster deployment, where a cluster would be scaled from 35 instances, first to 50 instances, and then to 70 instances. As the 2048-node experiment required 140 m3.2xlarge instances to operate, cluster deployment would take significantly longer.

When attempting to launch 2048 tasks in Mesos (with a single task representing a single application node), instances would quickly become overloaded and fail to respond to heartbeat messages: this caused the instances to be marked as offline by Mesos and their tasks to be orphaned, which in turn required restarting the experiment and reallocating the cluster to account for the lost tasks.

4.3.2 Sprinter. Once tasks were launched by Apache Mesos, we needed a mechanism for client processes to discover other client processes in the system and connect to them. We therefore built an open source service discovery library called Sprinter, which fetches the list of running tasks from the Mesos framework, Marathon, and supplies them to the system as targets to connect to. Sprinter also performs the following functions:

- Graph analysis for connectedness. Each node uploads its local membership view to Amazon S3. The lexicographically first server periodically pulls this membership information and builds a local graph, which is analyzed to determine whether it contains all clients and whether the connection graph forms a single connected component (a sketch of this check appears at the end of this subsection).
- Delaying the experiment for connectedness. Based on this graph analysis, the experiment's start is delayed until the connection graph forms a single connected component.
- Periodic reconnection if isolated. If a node becomes isolated from the cluster, it rejoins the cluster using the information provided by Marathon.

To assist operators in debugging the experiments, a graphical tool was built to visualize the graph information from Sprinter, along with extensive logging on the server node with information about cluster conditions.
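The connectedness check performed over the uploaded membership views can be pictured as a reachability test over the union of the views. The sketch below is our own illustration and assumes the views are symmetric (connections are bidirectional); it is not Sprinter's implementation.

    %% Sketch of the graph-connectedness check (Section 4.3.2): given each
    %% node's local membership view, verify that every expected node is
    %% reachable from an arbitrary starting node.
    -module(connectedness_sketch).
    -export([single_component/2]).

    %% Views: map from node to the list of nodes in its local view.
    %% AllNodes: list of nodes expected to be in the cluster.
    single_component(_Views, []) ->
        true;
    single_component(Views, [Start | _] = AllNodes) ->
        Reachable = reach([Start], Views, sets:from_list([Start])),
        lists:all(fun(N) -> sets:is_element(N, Reachable) end, AllNodes).

    %% Depth-first traversal over the union of the membership views.
    reach([], _Views, Seen) ->
        Seen;
    reach([Node | Rest], Views, Seen) ->
        Neighbours = maps:get(Node, Views, []),
        New = [N || N <- Neighbours, not sets:is_element(N, Seen)],
        reach(New ++ Rest, Views, sets:union(Seen, sets:from_list(New))).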

4.3.3 Partisan. Distributed Erlang has known scalability problems when operated at the range of 50 or more nodes, as it tracks full membership information for the cluster at each node and maintains full connectivity between nodes using a single TCP connection that is used for both data transmission and heartbeat messages. Single connections are problematic because of head-of-line blocking when large messages are transmitted.

We knew that for the experiment to scale we would need (1) to move away from Distributed Erlang, (2) to configure network topologies for both Datacenter Lasp and Hybrid Gossip Lasp in a single specification, and (3) to specify configurations at runtime without having to modify application code. To do this we built Partisan, an open source Erlang library that provides an alternative communication layer that eschews the use of Distributed Erlang. Partisan supports multiple network configurations and topologies: a client-server star topology; a full-connectivity topology mirroring Distributed Erlang's; a static topology where per-node membership is explicitly maintained; and a random unstructured overlay membership protocol inspired by the HyParView membership protocol.

4.3.4 Workflow CRDT (W-CRDT). In our experiments, a central task could not be used to orchestrate the execution: early experiments demonstrated that the central task quickly became a bottleneck and slowed execution down to the speed of the central task. Therefore, we eliminated the central task.

However, without a central task performing orchestration, it becomes more difficult to control when nodes should perform certain actions. For example, after event generation is complete, we should wait for convergence before proceeding to metrics aggregation. We therefore needed a mechanism for asynchronously controlling the workflow of the application scenario.

We devised a novel data structure, called the Workflow CRDT (W-CRDT), that is disseminated between nodes to control when certain actions should take place. This object is not instrumented by our runtime or included in any of the application logging, to prevent the structure itself from influencing the results of the experiment. The W-CRDT is a sequence of Grow-Only Map CRDTs, where each map is a function from opaque node identifiers to booleans. The sequence is implemented with the recursive Pair CRDT (similar to a recursive list type). The W-CRDT operates as follows (a small sketch of these rules appears at the end of this subsection):

- Per-node flag. Each node's portion of a task to be completed is modeled as a flag; each node toggles its flag when it has completed its work.
- Tasks as grow-only maps. Each task that needs to be performed is represented by one grow-only map. When all of the map's flags are true, the task is considered complete. This corresponds to a barrier synchronization.
- Sequential composition of tasks. Each task can be sequenced with another task. A task starts when its preceding task has completed.
- Workflow completion. The workflow is considered complete when all of the tasks that make up the sequential composition are complete.

The W-CRDT is used to model the following sequential workflow in each experiment:

- Perform event generation. Once event generation is complete, nodes mark event generation complete.
- Block for convergence. Once convergence is reached, nodes mark convergence complete.
- Log aggregation. Once convergence is reached, nodes begin uploading their logs to a central location and mark log aggregation complete.
- Shutdown. Nodes shut down once log aggregation is complete.
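The barrier and sequencing behaviour of the W-CRDT can be sketched with ordinary Erlang maps. The code below is our own illustration of the rules above, with hypothetical module and function names; the actual W-CRDT is built from Grow-Only Maps and the recursive Pair CRDT.

    %% Sketch of the Workflow CRDT rules (Section 4.3.4): each task is a
    %% grow-only map from node identifier to a completion flag; a task is
    %% complete when every node has set its flag, and a task may start
    %% only once its predecessor in the sequence has completed.
    -module(wcrdt_sketch).
    -export([mark_done/3, task_complete/3, can_start/4]).

    %% Workflow: map from task name to a map of node -> true.
    mark_done(Task, Node, Workflow) ->
        TaskMap = maps:get(Task, Workflow, #{}),
        Workflow#{Task => TaskMap#{Node => true}}.

    %% Barrier semantics: complete when all nodes have marked the task.
    task_complete(Task, Nodes, Workflow) ->
        TaskMap = maps:get(Task, Workflow, #{}),
        lists:all(fun(N) -> maps:get(N, TaskMap, false) end, Nodes).

    %% Sequential composition: a task can start once its predecessor in
    %% the task sequence has completed on every node.
    can_start(Task, Sequence, Nodes, Workflow) ->
        case previous(Task, Sequence) of
            none -> true;
            Prev -> task_complete(Prev, Nodes, Workflow)
        end.

    previous(Task, [Prev, Task | _]) -> Prev;
    previous(Task, [_ | Rest])       -> previous(Task, Rest);
    previous(_Task, [])              -> none.

For example, a node would call mark_done(event_generation, NodeId, W) when it finishes generating events, and would proceed to log aggregation only once can_start(log_aggregation, [event_generation, convergence, log_aggregation, shutdown], Nodes, W) returns true.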
5 EVALUATION

For Datacenter Lasp, we ran experiments using state-based dissemination, with a single server and 32, 64, 128, and 256 clients, forming with the server a star graph topology. For Hybrid Gossip Lasp, we ran experiments using both dissemination strategies, with a single server and 32, 64, 128, 256, 512, and 1024 clients.

Each experiment was run twice, with the advertisement impression interval fixed at 10 seconds and the propagation interval at 5 seconds. The total number of impressions was configured to ensure that, in all executions, the experiment ran for 30 minutes.

Figure 2 and Figure 3 evaluate three different operational modes for Lasp, examining state transmission for the duration of the experiment. Two Hybrid Gossip dissemination strategies, state-based and delta-based, are evaluated using a single overlay generated by the HyParView protocol. We also evaluated Datacenter Lasp, where clients propagate changes to the server using a state-based dissemination strategy. We did not evaluate delta-based dissemination for Datacenter Lasp, as it is unrealistic to believe that the server could buffer all changes in the system. In this evaluation, we scale Datacenter Lasp up to 256 client processes: this is the largest number of client processes a single server could support. Hybrid Gossip Lasp scaled to 1024 nodes before we ran into issues with Apache Mesos.

[Figure 2: Comparison of state- and delta-based dissemination in both Datacenter and Hybrid Gossip Lasp with 32/64 clients.]

Datacenter Lasp performs the best in terms of state transmission when compared to Hybrid Gossip Lasp using the same dissemination strategy. This results from Datacenter Lasp having no redundancy at all: the star topology has a single point of failure that is used for communication between all nodes in the system. Delta-based dissemination demonstrates a clear advantage for Hybrid Gossip Lasp, where redundancy is required to keep the system operating: state transmission can be reduced without sacrificing the fault-tolerance properties of the underlying overlay network. In terms of protocol transmission in Hybrid Gossip Lasp, delta-based dissemination performs better than state-based, even though it is a more complex protocol: in delta-based dissemination a process can track which updates have been seen by its neighbor processes and will not disseminate an unchanged object, while in state-based dissemination an object is always propagated.

[Figure 3: Comparison of state- and delta-based dissemination in both Datacenter and Hybrid Gossip Lasp, with Datacenter Lasp at 256 clients (limited in scalability) and Hybrid Gossip Lasp at 1024 clients.]

Our experiments confirm several design considerations made in Lasp. First, as demonstrated by the graphs, in the Datacenter Lasp model the transmission cost is reduced, as there is no redundancy in messaging and consequently no fault tolerance. In this model, because of communication through a datacenter node, an update takes two hops to reach all clients in the system. However, this model has limited scalability, because a centralized point, which could be partitioned and replicated across multiple servers, is used as a coordination point for all clients. Hybrid Gossip Lasp adds additional redundancy by constructing a random overlay

to accommodate one another in a way that allows scalability. However, the effort required to demonstrate Lasp as both scalable and practical remained a non-trivial challenge.

