2017 IEEE 37th International Conference on Distributed Computing Systems Workshops

Serverless Computing: Design, Implementation, and Performance

Garrett McGrath and Paul R. Brenner
Dept. of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana
Email: mmcgrat4@nd.edu, paul.r.brenner@nd.edu

Abstract—We present the design of a novel performance-oriented serverless computing platform implemented in .NET, deployed in Microsoft Azure, and utilizing Windows containers as function execution environments. Implementation challenges such as function scaling and container discovery, lifecycle, and reuse are discussed in detail. We propose metrics to evaluate the execution performance of serverless platforms and conduct tests on our prototype as well as AWS Lambda, Azure Functions, Google Cloud Functions, and IBM's deployment of Apache OpenWhisk. Our measurements show the prototype achieving greater throughput than other platforms at most concurrency levels, and we examine the scaling and instance expiration trends in the implementations. Additionally, we discuss the gaps and limitations in our current design, propose possible solutions, and highlight future research.

I. INTRODUCTION

Following the lead of AWS Lambda [1], services such as Apache OpenWhisk [2], Azure Functions [3], Google Cloud Functions [4], Iron.io IronFunctions [5], and OpenLambda [6] have emerged and introduced serverless computing, a cloud offering where application logic is split into functions and executed in response to events. These events can be triggered from sources external to the cloud platform but also commonly occur internally between the cloud platform's service offerings, allowing developers to easily compose applications distributed across many services within a cloud.

Serverless computing is a partial realization of an event-driven ideal, in which applications are defined by actions and the events that trigger them. This language is reminiscent of active database systems, and the event-driven literature has theorized for some time about general computing systems in which actions are processed reactively to event streams [7]. Serverless function platforms fully embrace these ideas, defining actions through simple function abstractions and building out event processing logic across their clouds. IBM strongly echoes these concepts in their OpenWhisk platform (now Apache OpenWhisk), in which functions are explicitly defined in terms of event, trigger, and action [8].

Beyond the event-driven foundation, design discussions shift toward container management and software development strategies used to leverage function-centric infrastructure. Iron.io uses Docker to store function containers in private registries, pulling and running the containers when execution is required [9]. Peer work on the OpenLambda platform presents an analysis of the scaling advantages of serverless computing, as well as a performance analysis of various container transitions [10]. Other performance analyses have studied the effect of language runtime and VPC impact on AWS Lambda start times [11], and measured the potential of AWS Lambda for embarrassingly parallel high-performance scientific computing [12].

Serverless computing has proved a good fit for IoT applications, intersecting with the edge/fog computing infrastructure conversation. There are ongoing efforts to integrate serverless computing into a "hierarchy of datacenters" to empower the foreseen proliferation of IoT devices [13]. AWS has recently joined this field with their Lambda@Edge [14] product, which allows application developers to place limited Lambda functions in edge nodes. AWS has been pursuing other expansions of serverless computing as well, including Greengrass [15], which provides a single programming model across IoT and Lambda functions. Serverless computing allows application developers to decompose large applications into small functions, allowing application components to scale individually, but this presents a new problem in the coherent management of a large array of functions. AWS recently introduced Step Functions [16], which allows for easier organization and visualization of function interaction.

The application of serverless computing is an active area of development. Our previous work on serverless computing studied serverless programming paradigms such as function cascades, and experimented with deploying monolithic applications on serverless platforms [17]. Other work has studied the architecture of scalable chatbots in serverless platforms [18]. There are multiple projects aimed at extending the functionality of existing serverless platforms. Lambdash [19] is a shim allowing the easy execution of shell commands in AWS Lambda containers, enabling developers to explore the Lambda runtime environment. Other efforts such as Apex [20] and Sparta [21] allow users to deploy functions to AWS Lambda in languages not supported natively, such as Go.

Serverless computing is often championed as a cost-saving tool, and there are multiple works which report cost-saving opportunities in deploying microservices to serverless platforms rather than building out traditional applications [22], [23]. Others have tried to calculate the points at which serverless or virtual machine deployments become more cost effective [24].

Serverless computing is becoming increasingly relevant, with Gartner reporting that "the value of [serverless computing] has been clearly demonstrated, maps naturally to microservice software architecture, and is on a trajectory of increased growth and adoption" [25]. Forrester argues that "today's PaaS investments lead to serverless computing," viewing serverless computing as the next generation of cloud service abstractions [26]. Serverless computing is quickly proliferating across many cloud providers, and is powering an increasing number of mobile and IoT applications. As its scope and popularity expands, it is important to ensure the fundamental performance characteristics of serverless platforms are sound. In this work we hope to aid in this effort by detailing the implementation of a new performance-focused serverless platform, and comparing its performance to existing offerings.

II. PROTOTYPE DESIGN

We have developed a performance-oriented serverless computing platform to study serverless implementation considerations and provide a baseline for existing platform comparison. The platform is implemented in .NET, deployed to Microsoft Azure, and has a small feature-set and simple design. The prototype depends upon Azure Storage for data persistence and its messaging layer. Besides Azure Storage services, our implementation consists of two components: a web service which exposes the platform's public REST API, and a worker service which manages and executes function containers. The web service discovers available workers through a messaging layer consisting of various Azure Storage queues. Function metadata is stored in Azure Storage tables, and function code is stored in Azure Storage blobs.

Figure 1 shows an overview of the platform's components. Azure Storage was chosen because it provides highly scalable and low-latency storage primitives through a simple API, aligning well with the goals of this implementation [27]. For the sake of brevity these storage entities will be referred to as queues, tables, and blobs, with the understanding that in the context of this paper these terms apply to the respective Azure Storage services.

Fig. 1. Overview of prototype components, showing the organization of the web and worker services, as well as code, metadata, and messaging entities in Azure Storage.

A. Function Metadata

A function is associated with a number of entities across the platform, including its metadata, code, running containers, and "warm queue". Function metadata is the source of truth for function existence and is defined by four fields, sketched in code at the end of this subsection:

1) Function Identifier - Function identifiers are randomly generated GUIDs assigned during function creation and used to uniquely identify and locate function resources.
2) Language Runtime - A function's language runtime specifies the language of the function's code. Only Node.js functions are currently supported, which is our chosen language because of its availability on all major serverless computing platforms.
3) Memory Size - A function's memory size determines the maximum memory a function's container can consume. The maximum function memory size is currently set at 1 GB. The number of CPU cores assigned to a function's container is set proportionally to its memory size.
4) Code Blob URI - A zip archive containing a function's code is provided during function creation. This code is copied to a blob inside the platform's storage account, and the URI of that blob is placed in the function's metadata.

Function containers will be discussed in detail below, as will warm queues, which are queues indexed by function identifier which hold the available running containers of each function.
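To make the metadata shape concrete, the following is a minimal sketch of such a record in TypeScript. The field names are illustrative only; the prototype itself is implemented in .NET and stores these records in an Azure Storage table:

```typescript
// Minimal sketch of the four-field function metadata record.
// Names and types are illustrative, not the platform's actual schema.
interface FunctionMetadata {
  functionId: string;      // randomly generated GUID assigned at creation
  languageRuntime: "node"; // only Node.js is currently supported
  memorySizeMb: number;    // <= 1024; container CPU share scales with this
  codeBlobUri: string;     // URI of the zip archive copied into blob storage
}

// Example record as it might look after function creation.
const example: FunctionMetadata = {
  functionId: "3f2504e0-4f89-11d3-9a0c-0305e82c3301",
  languageRuntime: "node",
  memorySizeMb: 512,
  codeBlobUri: "https://account.blob.core.windows.net/code/3f2504e0.zip",
};
```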
B. Function Execution

Our implementation provides a very basic function programming model and only supports manual invocations. While the processing of event sources and the quality of programming constructs are important considerations in serverless offerings, our work focuses on the execution processing of such systems, for which manual execution support is sufficient.

Functions are executed by calling an "/invoke" route off of function resources on the REST API. The invocation call request bodies are provided to the functions as inputs, and the response bodies contain the function outputs. Execution begins in the web service, which receives the invocation calls and subsequently retrieves function metadata from table storage. An execution request object is created containing the function metadata and inputs, and then the web service attempts to locate an available container in the worker service to process the execution request.

Interaction between the web and worker services is controlled through a shared messaging layer. Specifically, there is a global "cold queue", as well as a "warm queue" for each function in the platform. These queues hold available container messages, which simply consist of a URI containing the address of the worker instance and the name of the available container. Messages in the cold queue indicate a worker has unallocated memory in which it could start a container, and visible messages in a function's warm queue indicate existing function containers not currently handling execution requests.

The web service first tries to dequeue a message from a function's warm queue. If no messages are found, the web service dequeues a message from the cold queue, which will assign a new container to the function when sent to the worker service. If all workers are fully allocated with running containers, the cold queue will be empty. Therefore, if the web service is unable to find an available container in both the warm queue and the cold queue, it will return HTTP 503 Service Unavailable because there are no resources to fulfill the execution request. For this reason, the cold queue is an excellent target for auto-scaling, as it reflects the available space across the platform.

Once a container allocation message is found in a queue, the web service sends an HTTP request to a worker service using the URI contained in the message. The worker then executes the function and returns the function outputs to the web service, which in turn responds to the invocation call.
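The dispatch path described above can be summarized in a short sketch. Plain in-memory arrays stand in for the Azure Storage queues, and all names (`locateContainer`, `tryDequeue`) are ours rather than the platform's:

```typescript
// Simplified dispatch logic: warm queue first, then the global cold
// queue, then HTTP 503. Real queues are Azure Storage queues; plain
// arrays stand in for them here.
type ContainerMsg = { workerUri: string; containerName: string };

const warmQueues = new Map<string, ContainerMsg[]>(); // keyed by function id
const coldQueue: ContainerMsg[] = [];                 // global free capacity

function tryDequeue(queue: ContainerMsg[] | undefined): ContainerMsg | null {
  return queue && queue.length > 0 ? queue.shift()! : null;
}

// Returns the container to route the execution to, or null, in which
// case the web service responds with 503 Service Unavailable.
function locateContainer(functionId: string): ContainerMsg | null {
  const warm = tryDequeue(warmQueues.get(functionId));
  if (warm !== null) return warm; // reuse an idle container
  // A cold message reserves unallocated worker memory; the worker
  // assigns it to this function on first execution.
  return tryDequeue(coldQueue);
}
```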

C. Container Allocation

Each worker manages a pool of unallocated memory which it can assign to function containers. When memory is reserved, a container name is generated, which uniquely identifies a container and its memory reservation, and is embedded in the URI sent in container allocation messages. Therefore, each message in the queues is uniquely identifiable and can be associated with a specific memory reservation within a worker service instance. Memory is allocated conservatively, and worker services assume all functions will consume their allocated memory size.

When container allocations are sent to the cold queue, they have not yet been assigned to a function. To ensure workers do not over-provision their memory pool, it is assumed the assigned function will have the maximum function memory size. Then, when a worker service receives an execution request for an unassigned allocation, it reclaims memory if the assigned function requires less than the maximum size. After the container is created and its function executed for the first time, the container allocation message is placed in that function's warm queue.

D. Container Removal

There are two ways a container can be removed. Firstly, when a function is deleted, the web service deletes the function's warm queue, which is periodically monitored for existence by the worker service instances holding containers of that function. If a worker service detects that a function's queue has been deleted, it removes that function's running containers and reclaims their memory reservations. Secondly, in our implementation a container can be removed if it is idle for an arbitrarily set period of 15 minutes, after which it is removed and its memory reclaimed. Whenever memory is reclaimed, worker services send new container allocations to the cold queue if their unused memory exceeds the maximum function memory size.

Container expiration has implications for the web service because it is possible to dequeue an expired container from a function's warm queue. In this case, when the web service sends the execution request, the worker service will return HTTP 404 Not Found. The web service will then delete the expired message from the queue and retry.
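The reservation lifecycle described in the last two subsections can be sketched as a simple counter, which is how we summarize it here. The class and method names are illustrative; the worker's real accounting lives in the .NET worker service:

```typescript
// Sketch of a worker's conservative memory accounting. Cold allocations
// reserve the platform maximum (1 GB); the surplus is reclaimed when the
// allocation is first bound to a concrete function.
const MAX_FUNCTION_MEMORY_MB = 1024;

class WorkerMemoryPool {
  constructor(private unallocatedMb: number) {}

  // Called before advertising a new allocation on the cold queue.
  reserveColdAllocation(): boolean {
    if (this.unallocatedMb < MAX_FUNCTION_MEMORY_MB) return false;
    this.unallocatedMb -= MAX_FUNCTION_MEMORY_MB;
    return true;
  }

  // Called when the first execution request binds the allocation to a
  // function whose actual memory size is now known.
  bindToFunction(functionMemoryMb: number): void {
    this.unallocatedMb += MAX_FUNCTION_MEMORY_MB - functionMemoryMb;
  }

  // Called on container expiration or function deletion.
  reclaim(functionMemoryMb: number): void {
    this.unallocatedMb += functionMemoryMb;
  }
}
```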
E. Container Image

The platform uses Docker to run Windows Nano Server containers and communicates with the Docker service through the Docker Engine API. The container image is built to include the function runtime (currently only Node.js v6.9.5) and an execution handler. Notably absent from the image is any function code. Custom containers are not built for each function in the platform; instead we attach a read-only volume containing function code when starting the container. A single-image design was chosen for multiple reasons: it is simpler to only manage a single image, attaching volumes is a fast operation, and Windows Nano Server container images are significantly larger than lightweight Linux images such as Alpine Linux, affecting both storage costs and start-up times. In addition to the read-only volume, the memory size and CPU percentage of the container are proportionally set based upon the function's memory size.

The container's execution handler is a simple Node.js server which receives function inputs from the worker service. The worker service sends function inputs to the handler in the request body of an HTTP request, the handler calls the function with the specified inputs, and responds to the worker service with the function outputs. The container is addressable on the worker service's LAN because containers are added to the default "nat" network, which is the Windows equivalent of the Linux container "bridge" network.
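A minimal handler in this style might look like the following; the port and volume path are our assumptions, not documented values of the prototype:

```typescript
// Minimal sketch of an in-container execution handler: an HTTP server
// that loads the function from a mounted read-only volume and invokes
// it with the request body as input. Port and code path are assumed.
import * as http from "http";

const CODE_PATH = "C:\\function\\index.js"; // read-only volume mount (assumed)

http.createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", async () => {
    try {
      const fn = require(CODE_PATH); // function module from the volume
      const inputs = body.length > 0 ? JSON.parse(body) : {};
      const outputs = await fn(inputs);
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(outputs ?? null));
    } catch (err) {
      res.writeHead(500);
      res.end(String(err));
    }
  });
}).listen(8080); // worker service sends execution requests here
```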

III. PERFORMANCE RESULTS

We designed two tests to measure the execution performance of our implementation, AWS Lambda, Azure Functions, Google Cloud Functions, and Apache OpenWhisk. We developed a performance tool to conduct these experiments, which deploys a Node.js test function to the different services using the Serverless Framework [28]. We also built a Serverless Plugin to enable Serverless Framework support for our platform.

This tool is designed to measure the overhead introduced by the platforms using a simple test function which immediately completes execution and returns. This function is invoked synchronously with HTTP events/triggers as supported by the various platforms, and through the function's invocation route on our platform. Manual invocation calls were not used on the other services as they are typically viewed as development and testing routes, and we believed a popular production event/trigger such as an HTTP endpoint would better reflect existing platform performance. A 512 MB function memory size was used in all platforms except Microsoft Azure, which dynamically discovers the memory requirements of functions. The prototype was deployed in Microsoft Azure, where the web service was an API App in Azure App Service, and the worker service was two DS2 v2 virtual machines running Windows Server 2016. All platform tables, queues, and blobs resided in a single Azure storage account.

Network latencies were not accounted for in our tests, but to reduce their effects we performed our experiments from virtual machines inside the same region as our target function, except in the case of OpenWhisk, which we measured from Azure's South Central US region, and from which we observed single-digit millisecond network latencies to our function endpoint in IBM's US South region.

A. Concurrency Test

Figure 2 shows the results of the concurrency test, which is designed to measure the ability of serverless platforms to performantly invoke a function at scale. Our tool maintains invocation calls to the test function by reissuing each request immediately after receiving the response from the previous call. The test begins by maintaining a single invocation call in this way, and every 10 seconds adds an additional concurrent call, up to a maximum of 15 concurrent requests to the test function. The tool measures the number of responses received per second, which should increase with the level of concurrency. This test was repeated 10 times on each of the platforms.

Fig. 2. Concurrency test results, plotting the average number of executions completed per second versus the number of concurrent execution requests to the function.

The prototype demonstrates near-linear scaling between concurrency levels 1 and 14, but sees a significant performance drop at 15 concurrent requests. This drop is due to increased latencies observed from the warm queue, indicating that the load is approaching the scalability targets of a single Azure Storage queue [29]. AWS Lambda appears to scale linearly and exhibits the highest throughput of the commercial platforms at 15 concurrent requests. Google Cloud Functions exhibits sub-linear scaling and appears to taper off as the number of concurrent requests approaches 15. The performance of Azure Functions is extremely variable, although the throughput reported is quite high in places, outperforming the other platforms at lower concurrency levels. This variability is intriguing, especially because it persists across test iterations. OpenWhisk's performance is curious, and shows low throughput until eight concurrent requests, at which point the function begins to sub-linearly scale. This behavior may be caused by OpenWhisk's container pool starting multiple containers before beginning reuse, but this behavior is dependent on the configuration of IBM's deployment.
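The closed-loop driver can be sketched briefly. This is our reconstruction of the described procedure rather than the published tool, with `invoke()` standing in for the HTTP call to the deployed test function:

```typescript
// Sketch of the closed-loop concurrency driver: each "slot" reissues
// its request as soon as the previous response arrives, and a new slot
// is added every 10 seconds up to 15 concurrent callers.
async function invoke(): Promise<void> {
  await new Promise((r) => setTimeout(r, 50)); // simulated request latency
}

let completed = 0;
let running = true;

async function slot(): Promise<void> {
  while (running) {
    await invoke();  // reissue immediately after each response
    completed += 1;
  }
}

async function concurrencyTest(maxConcurrency = 15): Promise<void> {
  for (let level = 1; level <= maxConcurrency; level++) {
    void slot(); // add one more concurrent caller
    // Hold this concurrency level for 10 seconds, sampling throughput.
    const before = completed;
    await new Promise((r) => setTimeout(r, 10_000));
    console.log(`level ${level}: ${(completed - before) / 10} responses/sec`);
  }
  running = false;
}

concurrencyTest();
```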
B. Backoff Test

Figure 3 shows the results of the backoff test, which is designed to study the cold start times and expiration behaviors of function instances in the various platforms. The backoff test sends single execution requests to the test function at increasing intervals, ranging from one to thirty minutes.

Fig. 3. Backoff test results, plotting the average execution latency of the function versus the time since the function's previous execution.

As described in the prototype design, function containers expire after 15 minutes of unuse. Figure 3 shows this behavior, and the execution latencies after 15 minutes show the cold start performance of our prototype. It appears Azure Functions also expires function resources after a few minutes, and exhibits similar cold start times as our prototype. It is important to note that although both our prototype and Azure Functions are Windows implementations, their function execution environments are very different, as our prototype uses Windows containers and Azure Functions runs in Azure App Service. OpenWhisk also appears to deallocate containers after about 10 minutes and has much lower cold start times than Azure Functions or our prototype. Most notably, AWS Lambda and Cloud Functions appear largely unaffected by function idling. Possible explanations for this behavior could be extremely fast container start times or preallocation of containers as considered below in the discussion of Windows containers.
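A corresponding sketch of the backoff driver, again a reconstruction; the tool's exact interval schedule is not specified beyond "one to thirty minutes", so a linear schedule is assumed here:

```typescript
// Sketch of the backoff test: a single invocation after each idle
// interval, with the interval growing from 1 to 30 minutes, recording
// the observed latency. invoke() again stands in for the HTTP call.
async function invoke(): Promise<void> {
  await new Promise((r) => setTimeout(r, 50)); // simulated request latency
}

async function backoffTest(): Promise<void> {
  for (let idleMinutes = 1; idleMinutes <= 30; idleMinutes++) {
    await new Promise((r) => setTimeout(r, idleMinutes * 60_000)); // idle
    const start = Date.now();
    await invoke();
    console.log(`idle ${idleMinutes} min -> ${Date.now() - start} ms`);
  }
}

backoffTest();
```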

IV. LIMITATIONS AND FUTURE WORK

A. Warm Queues

The warm queue is a FIFO queue, which is problematic for container expiration. Imagine a function under heavy load has 10 containers allocated for execution, and then load drops such that a single container could handle all of the function's executions. Ideally, the extra 9 containers would expire after a short time, but because of the FIFO queue, so long as there are 10 executions of the function per container expiration period, all containers will remain allocated to the function. Of course, the solution is to use "warm stacks" instead of "warm queues", but Azure Storage does not currently support LIFO queues. This is perhaps the largest issue with our current implementation; however, other warm stack storage options such as a Redis cache [30] or a consistent hashing [31] implementation are promising, and may offer improved performance as well.

B. Asynchronous Executions

Currently the prototype only supports synchronous invocations. In other words, a request to execute a function will return the result of that function execution; it will not simply start the function and return. Asynchronous executions by themselves are simple to support: the web service can simply respond to the invocation call and then process the execution request normally. The difficulty in asynchronous execution is in guaranteeing at-least-once execution rather than best-effort execution. It is important to understand that synchronous or asynchronous execution is only guaranteed once an invocation request returns with a successful status code. Therefore, no further work is needed for synchronous execution requests (as in our implementation), because a successful status code is only returned once execution has completed. However, asynchronous executions respond to clients before function execution, so it is necessary to have additional logic to ensure these executions complete successfully.

We believe the prototype can support this requirement by storing active executions in a set of queues and introducing a third service responsible for monitoring the status of these queue messages. Worker services would continually update message visibility delays during function execution, and the monitoring service would detect failures by looking for visible messages. Failed messages could then be re-executed. Note that this is about handling platform execution failures and not exceptions thrown by the function during execution, for which retry may also be desired.
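The proposed visibility-based monitoring can be illustrated with a small sketch. This is a speculative rendering of the idea as described, with in-memory stand-ins for queue messages and visibility timeouts; none of these names come from the prototype:

```typescript
// Sketch of at-least-once async execution via message visibility:
// a worker renews an execution message's visibility delay while the
// function runs; a monitor treats any visible message as a failed
// execution and requeues it. In-memory stand-in for storage queues.
interface ExecutionMsg {
  executionId: string;
  visibleAt: number; // epoch ms; message is "visible" once now >= visibleAt
}

const activeExecutions: ExecutionMsg[] = [];

// Worker side: called periodically while the function is executing.
function renewVisibility(msg: ExecutionMsg, delayMs = 30_000): void {
  msg.visibleAt = Date.now() + delayMs;
}

// Worker side: called when execution completes successfully.
function complete(msg: ExecutionMsg): void {
  const i = activeExecutions.indexOf(msg);
  if (i >= 0) activeExecutions.splice(i, 1);
}

// Monitoring service: a visible message means its worker stopped
// renewing it, so the execution is presumed failed and is retried.
function findFailedExecutions(now = Date.now()): ExecutionMsg[] {
  return activeExecutions.filter((m) => now >= m.visibleAt);
}
```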
C. Worker Utilization

A large area for improvement in our implementation is worker utilization. Realistic designs would require an over-allocation of worker resources, with the observation that not all functions on a worker are constantly executing, or using all of their memory reservation. Utilization in a serverless context presents competing tradeoffs between execution performance and operating costs; however, the evaluation of utilization strategies is difficult without representative datasets of execution loads on serverless platforms. Future research would benefit from increased transparency from existing platforms, and from methods of synthesizing serverless computing loads.

D. Windows Containers

Windows containers have some limitations compared to Linux containers, largely because Linux containers were designed around Linux cgroups, which support useful operations not available on Windows. Most notable in the context of serverless computing is the support of container resource updating and container pausing. A common pattern in serverless platform implementations is pausing containers when idle to prevent resource consumption, and then unpausing them before execution resumes [10], [32].

Another potentially useful operation is container resource updating. Because we reserve resources for containers before executions begin, it would be beneficial for cold start performance if we were able to start containers before they are assigned to a function, and then resize the container once an execution request is received. Future work can study how to support these semantics in Windows containers, perhaps by limiting or updating the resources of the function process itself rather than the container as a whole. Alternatively, the prototype could experiment with Linux containers to compare start-up performances and test the viability of container resizing during cold starts.
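For reference, the pause/unpause and resize pattern looks roughly as follows against the Docker Engine API on a Linux host. The endpoint paths (`/containers/{id}/pause`, `/unpause`, `/update`) are standard Docker Engine API routes, but API versioning and error handling are elided, and, as noted above, pause and update are precisely the operations that are limited for Windows containers:

```typescript
// Sketch: pausing, unpausing, and resizing a container through the
// Docker Engine API over the local Unix socket (Linux host).
import * as http from "http";

function dockerPost(path: string, body?: object): Promise<number> {
  return new Promise((resolve, reject) => {
    const data = body ? JSON.stringify(body) : "";
    const req = http.request(
      {
        socketPath: "/var/run/docker.sock",
        path,
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Content-Length": Buffer.byteLength(data),
        },
      },
      (res) => resolve(res.statusCode ?? 0)
    );
    req.on("error", reject);
    req.end(data);
  });
}

// Pause an idle container, unpause it before reuse, and resize its
// memory limit (bytes) once the real function size becomes known.
async function demo(containerId: string): Promise<void> {
  await dockerPost(`/containers/${containerId}/pause`);
  await dockerPost(`/containers/${containerId}/unpause`);
  await dockerPost(`/containers/${containerId}/update`, {
    Memory: 512 * 1024 * 1024,
  });
}
```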

E. Security

Security of serverless systems is also an open research question. Hosting arbitrary user code in containers on multi-tenant systems is a dangerous proposition, and care must be taken when constructing and running function containers to prevent vulnerabilities. This intersection of remote procedure calls (RPC) and container security represents a significant real-world test of general container security. Therefore, although serverless platforms are able to carefully craft the function containers and restrict function permissions arbitrarily, increasing the chances of secure execution, further study is needed to assess the attack surface within function execution environments.

F. Performance Measures

There are significant opportunities to expand understanding of serverless platform performance by defining performance measures and tests thereof. This work focused on the overhead introduced by the platforms during single-function execution, but

REFERENCES

[10] S. Hendrickson, S. Sturdevant, T. Harter, V. Venkataramani, A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau, "Serverless computation with OpenLambda," in Proceedings of the 8th USENIX Conference on Hot Topics in Cloud Computing, ser. HotCloud'16. Berkeley, CA, USA: USENIX Association, 2016, pp. 33–39.
[11] b, 2016.
[12] E. Jonas, "Microservices and Teraflops," Available: http://ericjonas.com/pywren.html, 2016.
[13] E. d. Lara, C. S. Gomes, S. Langridge, S. H. Mortazavi, and M. Roodi, "Poster abstract: Hierarchical serverless computing for the mobile edge," in 2016 IEEE/ACM Symposium on Edge Computing (SEC), Oct 2016, pp. 109–110.
[14] Amazon Web Services, "AWS Lambda@Edge," Available: -edge.html, 2017.
[15] ——, "AWS Greengrass," Available: https://aws.amazon.com/greengrass/, 2017.
[16] ——, "AWS Step Functions," Available: https://aws.amazon.com/step-functions/, 2017.
[17] G. McGrath, J. Short, S. Ennis, B. Judson, and P. Brenner, "Cloud event programming paradigms: Applications and analysis," in 2016 IEEE 9th International Conference on Cloud Computing (CLOUD), June 2016, pp. 400–406.
[18] M. Yan, P. Castro, P. Cheng, and V. Ishakian, "Building a chatbot with serverless computing," in Proceedings of the 1st International Workshop on Mashups of Things and APIs, ser. MOTA '16. New York, NY, USA: ACM, 2016, pp. 5:1–5:4.
[19] E. Hammond, "Lambdash: Run sh commands inside AWS Lambda environment," Available: https://github.com/alestic/lambdash, 2017.
[20] Apex, "Apex: Serverless Architecture," Available: http://apex.run/, 2017.
[21] Sparta, "Sparta: A Go framework for AWS Lambda microservices," Available: http://gosparta.io/, 2017.
[22] M. Villamizar, O. Garcs, L. Ochoa, H. Castro, L. Salamanca, M. Verano, R. Casallas, S. Gil, C. Valencia, A. Zambrano, and M. Lang, "Infrastructure cost comparison of running web applications in the cloud using AWS Lambda and monolithic and microservice architectures," in 2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), May 2016, pp. 179–182.
[23] B. Wagner and A. Sood, "Economics of Resilient Cloud Services," ArXiv e-prints, Jul. 2016.
[24] A. Warzon, "AWS Lambda pricing in context: A comparison to EC2," Available: https://www.trek10.com/blog/lambda-cost/, 2016.
[25] C. Lowery, "Emerging Technology Analysis: Serverless Computing and Function Platform as a Service," Gartner, Tech. Rep., September 2016.
[26] J. S. Hammond, J. R. Rymer, C. Mines, R. Heffner, D. Bartoletti, C. Tajima, and R. Birrell, "How To Capture The Benefits Of Microservice Design," Forrester Research, Tech. Rep., May 2016.
[27] B. Calder, J. Wang, A. Ogus, N. Nilakantan, A. Skjolsvold, S. McKelvie, Y. Xu, S. Srivastav, J. Wu, H. Simitci, J. Haridas, C. Uddaraju, H. Khatri, A. Edwards, V. Bedekar, S. Mainali, R. Abbasi, A. Agarwal, M. F. u. Haq, M. I. u. Haq, D. Bhardwaj, S. Dayanand, A. Adusumilli, M. McNett, S. Sankaran, K. Manivannan, and L. Rigas, "Windows Azure Storage: A highly available cloud storage service with strong consistency," in Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles, ser. SOSP '11. New York, NY, USA: ACM, 2011, pp. 143–157.
[28] Serverless, Inc., "Serverless Framework," Available: https://serverless.com/, 2017.
[29] "Targets," Available: rage-scalability-targets, March 2017.
[30] Redis, "Redis," Available: https://redis.io/, 2017.
[31] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan, "Chord: A scalable peer-to-peer lookup service for internet applications," in Proceedings of the 2001 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, ser. SIGCOMM '01. New York, NY, USA: ACM, 2001, pp. 149–160.
[32] T. Wagner, "Understanding container reuse in AWS Lambda," tainer-reuse-in-lambda/, 2014.

