The Akamai Network: A Platform for High-Performance Internet Applications

Erik Nygren†, Ramesh K. Sitaraman†‡, Jennifer Sun†

† Akamai Technologies, 8 Cambridge Center, Cambridge, MA 02142
{nygren, ramesh}@akamai.com, jennifer sun@post.harvard.edu
‡ Department of Computer Science, University of Massachusetts, Amherst, MA 01002
ramesh@cs.umass.edu

ABSTRACT

Comprising more than 61,000 servers located across nearly 1,000 networks in 70 countries worldwide, the Akamai platform delivers hundreds of billions of Internet interactions daily, helping thousands of enterprises boost the performance and reliability of their Internet applications. In this paper, we give an overview of the components and capabilities of this large-scale distributed computing platform, and offer some insight into its architecture, design principles, operation, and management.

Categories and Subject Descriptors

C.2.1 [Network Architecture and Design]: Distributed networks; C.2.4 [Distributed Systems]: Distributed applications, Network operating systems

General Terms

Algorithms, Management, Performance, Design, Reliability, Security, Fault Tolerance.

Keywords

Akamai, CDN, overlay networks, application acceleration, HTTP, DNS, content delivery, quality of service, streaming media

1. INTRODUCTION

The Internet is radically transforming every aspect of human society by enabling a wide range of applications for business, commerce, entertainment, news, and social networking. Yet the Internet was never architected to support the levels of performance, reliability, and scalability that modern-day commercial applications demand, creating significant technical obstacles for those who wish to transact business on the Web. Moreover, these obstacles are becoming even more challenging as current and future applications evolve.

Akamai first pioneered the concept of Content Delivery Networks (CDNs) [18] more than a decade ago to help businesses overcome these technical hurdles. Since then, both the Web and the Akamai platform have evolved tremendously. Today, Akamai delivers 15-20% of all Web traffic worldwide and provides a broad range of commercial services beyond content delivery, including Web and IP application acceleration, EdgeComputing, delivery of live and on-demand high-definition (HD) media, high-availability storage, analytics, and authoritative DNS services.

This paper presents a broad overview of the current Akamai platform, including a discussion of many of the key technologies and architectural approaches used to achieve its results. We hope to offer insight into the richness of the platform and the breadth of technological research and innovation needed to make a system of this scale work.

The paper is organized as follows. We first present the problem space and look at the motivations for creating such a platform. Next, an overview of the Akamai platform is followed by an examination of how it overcomes the Internet's inherent limitations for delivering web content, media streams, and dynamic applications. We present the case that a highly distributed network is the most effective architecture for these purposes, particularly as content becomes more interactive and more bandwidth hungry. We then take a more detailed look at the main components of the Akamai platform, with a focus on its design principles and fault-tolerant architecture. Finally, we offer a cross-section of customer results to validate the real-world efficacy of the platform.
2. INTERNET APPLICATION REQUIREMENTS

Modern enterprise applications and services on the Internet require rigorous end-to-end system quality, as even small degradations in performance and reliability can have a considerable business impact. A single one-hour outage can cost a large e-commerce site hundreds of thousands to millions of dollars in lost revenue, for example.¹ In addition, outages can cause significant damage to brand reputation. The cost of enterprise application downtime is comparable, and may be measured in terms of both lost revenue and reduced productivity.

¹ For instance, a one-hour outage could cost one well-known, large online retailer $2.8 million in sales, based on 2009 revenue numbers.

Application performance is also directly tied to key business metrics such as application adoption and site conversion rates. A 2009 Forrester Consulting survey found that a majority of online shoppers cited website performance as an important factor in their online store loyalty, and that 40% of consumers will wait no more than 3 seconds for a page to load before abandoning a site [19]. We can find a more concrete quantification of this effect in an Akamai study on an e-commerce website [11]. In the study, site visitors were partitioned: half were directed to the site through Akamai (providing a high-performance experience) while the other half were sent directly to the site's origin servers. Analysis showed that the users on the high-performance site were 15% more likely to complete a purchase and 9% less likely to abandon the site after viewing just one page. For B2B applications, the story is similar. In a 2009 IDC survey, customers using Akamai's enterprise application acceleration services reported annual revenue increases of $200,000 to over $3 million directly attributable to the improved performance and reliability of their applications [20].

Unfortunately, inherent limitations in the Internet's architecture make it difficult to achieve desired levels of performance natively on the Internet. Designed as a best-effort network, the Internet provides no guarantees on end-to-end reliability or performance. On the contrary, wide-area Internet communications are subject to a number of bottlenecks that adversely impact performance, including latency, packet loss, network outages, inefficient protocols, and inter-network friction.

In addition, there are serious questions as to whether the Internet can scale to accommodate the demands of online video. Even short-term projections show required capacity levels that are an order of magnitude greater than what we see on the Internet today. Distributing HD-quality programming to a global audience requires tens of petabits per second of capacity—an increase of several orders of magnitude.

Bridging the technological gap between the limited capabilities of the Internet's infrastructure and the performance requirements of current and future distributed applications is thus critical to the continued growth and success of the Internet and its viability for business. We now take a closer look at why this is so challenging.

3. INTERNET DELIVERY CHALLENGES

Although often referred to as a single entity, the Internet is actually composed of thousands of different networks, each providing access to a small percentage of end users.² Even the largest network has only about 5% of Internet access traffic, and percentages drop off sharply from there (see Figure 1). In fact, it takes well over 650 networks to reach 90% of all access traffic. This means that centrally hosted content must travel over multiple networks to reach its end users.

² According to [13], there were over 34,600 active networks (ASes) as of June 2010.

Figure 1: Percentage of access traffic from top networks.

Unfortunately, inter-network data communication is neither an efficient nor a reliable operation and can be adversely affected by a number of factors. The most significant include:

Peering point congestion. Capacity at peering points where networks exchange traffic typically lags demand, due in large part to the economic structure of the Internet. Money flows in at the first mile (i.e., website hosting) and at the last mile (i.e., end users), spurring investment in first- and last-mile infrastructure. However, there is little economic incentive for networks to invest in the middle mile—the high-cost, zero-revenue peering points where networks are forced to cooperate with competing entities. These peering points thus become bottlenecks that cause packet loss and increase latency.

Inefficient routing protocols. Although it has managed admirably for scaling a best-effort Internet, BGP has a number of well-documented limitations. It was never designed for performance: BGP bases its route calculations primarily on AS hop count, knowing nothing about the topologies, latencies, or real-time congestion of the underlying networks. In practice, it is used primarily to enforce networks' business agreements with each other rather than to provide good end-to-end performance. For example, [34] notes that several paths between locations within Asia are actually routed through peering points in the US, greatly increasing latency. In addition, when routes stop working or connectivity degrades, BGP can be slow to converge on new routes. Finally, it is well known that BGP is vulnerable to human error as well as foul play; misconfigured or hijacked routes can quickly propagate throughout the Internet, causing route flapping, bloated paths, and even broad connectivity outages [25].
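To make this contrast concrete, the toy sketch below (our illustration, not Akamai code; the prefixes, AS numbers, and latencies are invented) selects a route by shortest AS path, as BGP's default decision process does, and shows how that choice can pass over a lower-latency alternative.

```python
# Toy illustration with hypothetical data: BGP-style route selection prefers
# the shortest AS path even when a longer path has lower measured latency.

routes = [
    # (destination prefix, AS path, measured round-trip latency in ms)
    ("203.0.113.0/24", ["AS64500", "AS64510"], 320),            # 2 AS hops, via a distant peering point
    ("203.0.113.0/24", ["AS64500", "AS64520", "AS64530"], 45),  # 3 AS hops, but geographically direct
]

def bgp_best(candidates):
    """Pick the route with the fewest AS hops; latency is deliberately ignored,
    since BGP knows nothing about it."""
    return min(candidates, key=lambda r: len(r[1]))

def latency_best(candidates):
    """Pick the route an overlay could choose if it measured performance."""
    return min(candidates, key=lambda r: r[2])

if __name__ == "__main__":
    chosen = bgp_best(routes)
    ideal = latency_best(routes)
    print("BGP-style choice:   ", chosen[1], f"({chosen[2]} ms)")
    print("Latency-aware choice:", ideal[1], f"({ideal[2]} ms)")
```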
Unreliable networks. Across the Internet, outages are happening all the time, caused by a wide variety of reasons—cable cuts, misconfigured routers, DDoS attacks, power outages, even earthquakes and other natural disasters. While failures vary in scope, large-scale occurrences are not uncommon. In January 2008, for example, parts of Southeast Asia and the Middle East experienced an estimated 75% reduction in bandwidth connectivity [43] when a series of undersea cables were accidentally cut. In December of the same year, another cable cut incident led to outages for large numbers of networks in Egypt and India. In both cases the disruptions lasted for multiple days.

Fragile peering relationships can be culprits as well. When two networks de-peer over business disputes, they can partition the Internet, such that customers from one network—as well as any networks single-homed to it—may be unable to reach customers of the other network. During the high-profile de-peering between Sprint and Cogent in October 2008, for instance, connectivity was adversely affected for an estimated 3,500 networks [35].

Finally, several high-profile examples of Internet outages caused by BGP hijacking can be found in [9], such as the global YouTube blackout inadvertently caused by Pakistan in February 2008, as well as the widespread Internet outage caused by a China Telecom route leak in April 2010.

Inefficient communications protocols. Although it was designed for reliability and congestion avoidance, TCP carries significant overhead and can have suboptimal performance for links with high latency or packet loss, both of which are common across the wide-area Internet. Middle-mile congestion exacerbates the problem, as packet loss triggers TCP retransmissions, further slowing down communications.

Additionally, for interactive applications, the multiple round trips required for HTTP requests can quickly add up, affecting application performance [41][40]. Most web browsers also limit the number of parallel connections they make for a given host name, further limiting performance over long distances for sites that consist of many objects.

TCP also becomes a serious performance bottleneck for video and other large files. Because it requires receiver acknowledgements for every window of data packets sent, throughput (when using standard TCP) is inversely related to network latency, or round-trip time (RTT). Thus, the distance between server and end user can become the overriding bottleneck in download speeds and video viewing quality. Table 1 illustrates the stark results of this effect. True HD-quality streams, for example, are not possible if the server is not relatively close by.

Table 1: Effect of Distance on Throughput and Download Time

Distance (Server to User)     | Network RTT | Typical Packet Loss | Throughput                   | 4 GB DVD Download Time
Local: 100 mi.                | 1.6 ms      | 0.6%                | 44 Mbps (high-quality HDTV)  | 12 min.
Regional: 500-1,000 mi.       | 16 ms       | 0.7%                | 4 Mbps (basic HDTV)          | 2.2 hrs.
Cross-continent: 3,000 mi.    | 48 ms       | 1.0%                | 1 Mbps (SDTV)                | 8.2 hrs.
Multi-continent: 6,000 mi.    | 96 ms       | 1.4%                | 0.4 Mbps (poor)              | 20 hrs.

Although many alternate protocols and performance enhancements to TCP have been proposed in the literature ([23], [30], [45]), these tend to be very slow to make their way into use by real-world end users, as achieving common implementation across the Internet is a formidable task.
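The inverse relationship among throughput, RTT, and loss can be approximated with the well-known Mathis et al. model, throughput ≈ MSS / (RTT · √p). The sketch below applies that model to RTT and loss values loosely taken from Table 1; it is a rough back-of-the-envelope check, not the methodology behind the table, and the absolute numbers differ, but the trend of throughput collapsing with distance is the same.

```python
import math

def tcp_throughput_mbps(rtt_ms: float, loss_rate: float, mss_bytes: int = 1460) -> float:
    """Approximate steady-state TCP throughput using the Mathis et al. model:
    throughput ~ MSS / (RTT * sqrt(p)).  A rough model, not an exact figure."""
    rtt_s = rtt_ms / 1000.0
    bits_per_sec = (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))
    return bits_per_sec / 1e6

# Distance, RTT, and loss values loosely mirroring Table 1 (illustrative assumptions).
scenarios = [
    ("Local (~100 mi)",            1.6, 0.006),
    ("Regional (500-1,000 mi)",   16.0, 0.007),
    ("Cross-continent (3,000 mi)", 48.0, 0.010),
    ("Multi-continent (6,000 mi)", 96.0, 0.014),
]

for label, rtt, loss in scenarios:
    mbps = tcp_throughput_mbps(rtt, loss)
    hours_for_4gb = (4 * 8 * 1024) / (mbps * 3600)  # time to move ~4 GB at that rate
    print(f"{label:28s} RTT={rtt:5.1f} ms  loss={loss:.1%}  ->  ~{mbps:7.1f} Mbps, 4 GB in {hours_for_4gb:5.1f} h")
```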
Scalability. Scaling Internet applications means having enough resources available to respond to instantaneous demand, whether during planned events or unexpected periods of peak traffic. Scaling and mirroring origin infrastructure is costly and time-consuming, and it is difficult to predict capacity needs in advance. Unfortunately, underprovisioning means potentially losing business, while overprovisioning means wasting money on unused infrastructure. Moreover, website demand is often very spiky, meaning that companies traditionally needed to provision for anomalous peaks like promotions, events, and attacks, investing in significant infrastructure that sits underutilized most of the time. This also has an environmental cost, as underutilized infrastructure consumes significant amounts of power [33].

Finally, it is important to note that origin scalability is only a part of the scalability challenge. End-to-end application scalability means ensuring not only that there is adequate origin server capacity, but also that adequate network bandwidth is available at all points between end users and the applications they are trying to access. As we will discuss further in Section 5.1, this is a serious problem as Internet video comes of age.

Application limitations and slow rate of change adoption. Although some of the challenges the Internet faces can be partially addressed by changes to protocols and/or client software, history shows that these are all slow to change. While enterprises want to provide the best performance to their end users, they often have little or no control over the end users' software. While the benefits of some protocol changes can be seen as soon as some clients and servers adopt them, other proposed changes can be infeasible to implement, as they require close to 100% client adoption to avoid breaking older clients. Most enterprises would also prefer not to have to keep adapting their web infrastructure to tune the performance of all of the heterogeneous client software in use. For example, Microsoft's Internet Explorer 6 (which has considerably slower performance than later versions and doesn't work reliably with protocol optimizations such as gzip compression) was still one of the most popular browsers in use in December 2009, despite being introduced more than eight years prior [29].

4. DELIVERY NETWORK OVERVIEW

The Internet delivery challenges posed above (and discussed in more detail in [27]) illustrate how difficult it can be for enterprises to achieve acceptable levels of performance, reliability, and cost-effective scalability in their Web operations. Most of the bottlenecks are outside the control of any given entity and are inherent to the way the Internet works—as a loosely coordinated patchwork of heterogeneous autonomous networks.

Over a decade ago, Akamai introduced the Content Delivery Network (CDN) concept to address these challenges. Originally, CDNs improved website performance by caching static site content at the edge of the Internet, close to end users, in order to avoid middle-mile bottlenecks as much as possible. Since then the technology has rapidly evolved beyond static web content delivery. Today, Akamai has application delivery networks that can accelerate entire web or IP-based applications, media delivery networks that provide HD-quality delivery of live and on-demand media, and EdgeComputing networks that deploy and execute entire Java J2EE applications in a distributed fashion.

In addition, service offerings have matured to meet additional enterprise needs, such as the ability to maintain visibility and control over content across the distributed network. This means providing robust security, logging, SLAs, diagnostics, reporting and analytics, and management tools. Here, as with the content delivery itself, there are challenges of scale, reliability, and performance to be overcome.

4.1 Delivery Networks as Virtual Networks

Conceptually, a delivery network is a virtual network³ built as a software layer over the actual Internet, deployed on widely distributed hardware, and tailored to meet the specific systems requirements of distributed applications and services (Figure 2). A delivery network provides enhanced reliability, performance, scalability, and security that is not achievable by directly utilizing the underlying Internet. A CDN, in the traditional sense of delivering static Web content, is one type of delivery network.

³ The concept of building a virtual network in software to make the underlying network more reliable or higher-performing has a long history in both parallel ([28], [40]) and distributed networks [6].

Figure 2: A delivery network is a virtual network built as a software layer over the Internet that is deployed on widely distributed hardware.

A different but complementary approach to addressing the challenges facing Internet applications is a clean-slate redesign of the Internet [32]. While a re-architecture of the Internet might be beneficial, its adoption in the real world is far from guaranteed. With hundreds of billions of dollars in sunk investments and entrenched adoption by tens of thousands of entities, the current Internet architecture will change slowly, if at all. For example, consider that IPv6—a needed incremental change—was first proposed in 1996 but is just beginning to ramp up in actual deployment nearly 15 years later.

The beauty of the virtual network approach is that it works over the existing Internet as-is, requiring no client software and no changes to the underlying networks. And, since it is built almost entirely in software, it can easily be adapted to future requirements as the Internet evolves.

4.2 Anatomy of a Delivery Network

The Akamai network is a very large distributed system consisting of tens of thousands of globally deployed servers that run sophisticated algorithms to enable the delivery of highly scalable distributed applications. We can think of it as being comprised of multiple delivery networks, each tailored to a different type of content—for example, static web content, streaming media, or dynamic applications. At a high level, these delivery networks share a similar architecture, which is shown in Figure 3, but the underlying technology and implementation of each system component may differ in order to best suit the specific type of content, streaming media, or application being delivered.

Figure 3: System components of a delivery network. To understand how these components interact, it is instructive to walk through a simple example of a user attempting to download a web page through the Akamai network.

The main components of Akamai's delivery networks are as follows (a toy sketch of this end-to-end request flow appears after the list):

When the user types a URL into his or her browser, the domain name of the URL is translated by the mapping system into the IP address of an edge server that will serve the content (arrow 1). To assign the user to a server, the mapping system bases its answers on large amounts of historical and current data that have been collected and processed regarding global network and server conditions. This data is used to choose an edge server that is located close to the end user.

Each edge server is part of the edge server platform, a large global deployment of servers located in thousands of sites around the world. These servers are responsible for processing requests from nearby users and serving the requested content (arrow 2).

In order to respond to a request from a user, the edge server may need to request content from an origin server.⁴ For instance, dynamic content on a web page that is customized for each user cannot be entirely cached by the edge platform and must be fetched from the origin. The transport system is used to download the required data in a reliable and efficient manner. More generally, the transport system is responsible for moving data and content over the long-haul Internet with high reliability and performance.
In many cases, the transport system may also cache static content.

⁴ The origin includes the backend web servers, application servers, and databases that host the web application, and is often owned and controlled by the content or application provider rather than by the operator of the delivery network. In the case of streaming media, the origin includes facilities for video capture and encoding of live events, as well as storage facilities for on-demand media.

The communications and control system is used for disseminating status information, control messages, and configuration updates in a fault-tolerant and timely fashion.

The data collection and analysis system is responsible for collecting and processing data from various sources such as server logs, client logs, and network and server information. The collected data can be used for monitoring, alerting, analytics, reporting, and billing.

Finally, the management portal serves two functions. First, it provides a configuration management platform that allows an enterprise customer to retain fine-grained control over how their content and applications are served to the end user. These configurations are updated across the edge platform from the management portal via the communications and control system. In addition, the management portal provides the enterprise with visibility into how their users are interacting with their applications and content, including reports on audience demographics and traffic metrics.
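The walk-through above can be condensed into a small illustrative sketch. Everything in it is hypothetical: the class names, the cache policy, and the table lookup standing in for the DNS-based mapping system are ours, not Akamai's. It simply shows the division of labor among the mapping system, edge server platform, transport system, and origin.

```python
# Illustrative sketch of the request flow described above: the mapping system
# picks an edge server, the edge serves from cache when it can, and falls back
# to the origin (via the transport system) otherwise.  All names are hypothetical.

class EdgeServer:
    def __init__(self, name, origin_fetch):
        self.name = name
        self.cache = {}                 # URL -> cached response body
        self.origin_fetch = origin_fetch

    def handle(self, url):
        if url in self.cache:           # arrow 2: serve cached content to the nearby user
            return f"[{self.name}] HIT  {url}"
        body = self.origin_fetch(url)   # transport system: long-haul fetch from the origin
        self.cache[url] = body
        return f"[{self.name}] MISS {url} (fetched from origin)"

def origin_fetch(url):
    """Stand-in for the customer's origin infrastructure."""
    return f"<content of {url}>"

class MappingSystem:
    """Stand-in for DNS-based mapping: choose an edge server close to the user."""
    def __init__(self, edges_by_region):
        self.edges_by_region = edges_by_region

    def resolve(self, user_region):     # arrow 1: hostname -> nearby edge server
        return self.edges_by_region[user_region]

edges = {"us-east": EdgeServer("edge-bos-1", origin_fetch),
         "eu-west": EdgeServer("edge-lon-3", origin_fetch)}
mapping = MappingSystem(edges)

for _ in range(2):                      # second request is served from the edge cache
    edge = mapping.resolve("us-east")
    print(edge.handle("http://www.example.com/index.html"))
```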

While all of Akamai's delivery networks incorporate the systems outlined above, the specific design of each system is influenced by application requirements. For instance, the transport system of an application delivery network will have a different set of requirements and a different architecture than that of a content delivery network. We will look at each of these system components in more detail in the upcoming sections.

4.3 System Design Principles

The complexity of a globally distributed delivery network brings about a unique set of challenges in architecture, operation, and management—particularly in an environment as heterogeneous and unpredictable as the Internet. For example, network management and data collection need to be scalable and fast across thousands of server clusters, many of which are located in unmanned, third-party data centers, and any number of which might be offline or experiencing bad connectivity at any given time. Configuration changes and software updates need to be rolled out across the network in a safe, quick, and consistent manner, without disrupting service. Enterprises also must be able to maintain visibility and fine-grained control over their content across the distributed platform.

To guide our design choices, we begin with the assumption that a significant number of failures (whether at the machine, rack, cluster, connectivity, or network level) is expected to be occurring at all times in the network. Indeed, while not standard in system design, this assumption seems natural in the context of the Internet. We have seen many reasons that Internet failures can occur in Section 3, and have observed it to be true empirically within our own network.

What this means is that we have designed our delivery networks with the philosophy that failures are normal and the delivery network must operate seamlessly despite them. Much effort is invested in designing recovery from all types of faults, including multiple concurrent faults.

This philosophy guides every level of design decision—down to the choice of which types of servers to buy: the use of robust commodity servers makes more sense in this context than more expensive servers with significant hardware redundancy. While it is still important to be able to immediately identify failing hardware (e.g., via ECC memory and disk integrity checks that enable servers to automatically take themselves out of service), there are diminishing returns from building redundancy into hardware (e.g., dual power supplies) rather than software. Deeper implications of this philosophy are discussed at length in [1].

We now mention a few key principles that pervade our platform system design:
Design for reliability. Because of the nature of our business, the goal is to attain extremely close to 100% end-to-end availability. This requires significant effort given our fundamental assumption that components will fail frequently and in unpredictable ways. We must ensure full redundancy of components (no single points of failure), build in multiple levels of fault tolerance, and use protocols such as PAXOS [26] and decentralized leader election to accommodate the possibility of failed system components (a simple leader-election sketch follows this list).

Design for scalability. With more than 60,000 machines (and growing) across the globe, all platform components must be highly scalable. At a basic level, scaling means handling more traffic, content, and customers. This also translates into handling increasingly large volumes of resulting data that must be collected and analyzed, as well as building communications, control, and mapping systems that can support an ever-increasing number of distributed machines.

Limit the necessity for human management. To a very large extent, we design the system to be autonomic. This is a corollary to the philosophy that failures are commonplace and that the system must be designed to operate in spite of them. Moreover, it is necessary in order to scale; otherwise the human operational expense becomes too high. As such, the system must be able to respond to faults, handle shifts in load and capacity, self-tune for performance, and safely deploy software and configuration updates with minimal human intervention. (To manage its 60,000-plus machines, the Akamai network operations centers currently employ around 60 people, distributed to work 24x7x365.)

Design for performance. There is continual work being done to improve the performance of the system's critical paths, not only from the perspective of improving end-user response times but for many different metrics across the platform, such as cache hit rates and network resource utilization. An added benefit of some of this work is energy efficiency; for example, kernel and other software optimizations enable greater capacity and more traffic served with fewer machines.
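As a minimal illustration of the decentralized leader election mentioned under the reliability principle (this is not Akamai's actual protocol, and it is far simpler than PAXOS), the sketch below has every node apply the same deterministic rule to its current view of live peers, so a replacement leader emerges automatically when the incumbent fails.

```python
# Minimal sketch of decentralized leader election (illustrative only): every
# node applies the same deterministic rule to the set of peers it currently
# believes to be alive, so a new leader emerges without a central coordinator.

def elect_leader(alive_nodes):
    """Deterministic rule shared by all nodes: the lowest node id wins."""
    return min(alive_nodes)

cluster = {"node-03", "node-07", "node-11", "node-19"}
print("leader:", elect_leader(cluster))          # node-03

cluster.discard("node-03")                       # leader fails; peers detect it
print("new leader:", elect_leader(cluster))      # node-07 takes over automatically
```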

We will explore these principles further as we examine each of the Akamai delivery networks in greater detail in the next sections. In Section 5 and Section 6 we outline specific challenges and solutions in the design of content, streaming media, and application delivery networks, and look at the characteristics of the transport systems, which differ for each of the delivery networks.⁵ In Section 7, we provide details on the generic system components that are shared among the Akamai delivery networks, such as the edge server platform, the mapping system, the communications and control system, and the data collection and analysis system.

⁵ The transport systems do share services and components, but are tailored to meet the requirements of the different types of applications they support.

5. HIGH-PERFORMANCE STREAMING AND CONTENT DELIVERY NETWORKS

In this section, we focus on the architectural considerations of delivery networks for web content and streaming media. A fundamental principle for enhancing performance, reliability, and scalability for content and stream delivery is minimizing long-haul communication through the middle-mile bottleneck of the Internet—a goal made feasible only by a pervasive, distributed architecture where servers sit as "close" to end users as possible. Here, closeness may be defined in both geographic and network topological measures; the ideal situation (from a user performance perspective) would consist of servers located within each user's own ISP and geography, thus minimizing the reliance on inter-network and long-distance communications.⁶

⁶ When long-haul communication is unavoidable, as in the case of cold content or live streaming, the transport system is architected to ensure that these communications happen with high reliability and performance.
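A hypothetical sketch of how such "closeness" might be scored is shown below. The weighting, the same-ISP bonus, and the candidate clusters are all invented for illustration, but they capture the idea of blending network and geographic distance and favoring servers inside the user's own ISP.

```python
# Hypothetical "closeness" scoring for candidate edge clusters.  The fields,
# weights, and candidates are invented; only the idea of combining network and
# geographic measures, and preferring the user's own ISP, is from the paper.

candidates = [
    # (cluster, same ISP as user?, measured RTT in ms, geographic distance in km)
    ("cluster-in-users-isp",   True,   8,    30),
    ("cluster-tier1-regional", False, 25,   400),
    ("cluster-remote-hub",     False, 95,  6000),
]

def closeness_score(same_isp, rtt_ms, distance_km):
    """Lower is better: a weighted blend of network and geographic distance,
    with an arbitrary bonus for sitting inside the user's own ISP."""
    score = 1.0 * rtt_ms + 0.01 * distance_km
    if same_isp:
        score -= 20          # avoid inter-network hops whenever possible
    return score

best = min(candidates, key=lambda c: closeness_score(c[1], c[2], c[3]))
print("chosen cluster:", best[0])
```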
A key question is just how distributed such an architecture needs to be. Akamai's approach generally has been to reach out to the true edge of the Internet, deploying not only in large Tier 1 and Tier 2 data centers, but also in large numbers of end-user ISPs. Rather than deploying massive server farms in a few dozen data centers, Akamai has deployed server clusters of varying size in thousands of locations—an approach that arguably adds complexity to system design and management. However, we made this architectural choice because we feel it is the one with the most efficacy.

Internet access traffic is highly fragmented across networks—the top 45 networks combined account for only half of user access traffic, and the numbers drop off dramatically from there. This means that unless a CDN is deployed in thousands of networks, a large percentage of traffic being served would still need to travel over multiple networks to reach end users. Being deployed in local ISPs is particularly critical for regions of the world with poor connectivity. More importantly, as we saw in Section 3 and Table 1, because of the way TCP works, the distance between server and end user becomes a bottleneck for video throughput. If a CDN has only a few dozen server locations, the majority of users around the world would be unable to enjoy the high-quality video streams their last-mile broadband access would otherwise allow.

Finally, being highly distributed also increases platform availability, as an outage across an entire data center (or even multiple data centers) does not need to affect delivery network performance.

For these reasons, Akamai's approach is to deploy servers as close to end users as possible, minimizing the effects of peering point congestion, latency, and network outages when delivering content. As a result, customers enjoy levels of reliability and performance that are not possible with more centralized approaches.

Finally, while peer-to-peer technologies [8] provide a highly distributed
