
Service-Oriented Heterogeneous Resource Sharing for Optimizing Service Latency in Mobile Cloud

Takayuki Nishio, Ryoichi Shinkuma, Tatsuro Takahashi
Graduate School of Informatics, Kyoto University
Yoshida-honmachi, Sakyo-ku, Kyoto, Japan
ttakahashi@i.kyoto-u.ac.jp

Narayan B. Mandayam
Wireless Information Network Laboratory (WINLAB), Rutgers University
671 Route 1 South, North Brunswick, NJ, USA
narayan@winlab.rutgers.edu

ABSTRACT

Fog computing is expected to be an enabler of mobile cloud computing, which extends the cloud computing paradigm to the edge of the network. In the mobile cloud, not only central data centers but also pervasive mobile devices share their heterogeneous resources (e.g., CPUs, bandwidth, content) and support services. The mobile cloud based on such resource sharing is expected to be a powerful platform for mobile cloud applications and services. In this paper, we propose an architecture and mathematical framework for heterogeneous resource sharing based on the key idea of service-oriented utility functions. Since heterogeneous resources are often measured or quantified in disparate scales and units (e.g., power, bandwidth, latency), we present a unified framework in which all these quantities are equivalently mapped to time resources. We formulate optimization problems for maximizing (i) the sum of the utility functions and (ii) the product of the utility functions, and solve them via convex optimization approaches. Our numerical results show that service-oriented heterogeneous resource sharing reduces service latencies effectively and achieves high energy efficiency, making it attractive for use in the mobile cloud.

Categories and Subject Descriptors

C.2.4 [Computer-Communications Networks]: Distributed Systems

Keywords

cloud computing, mobile cloud, fog computing, heterogeneous resource sharing, service-oriented

MobileCloud'13, July 29, 2013, Bangalore, India. Copyright 2013 ACM 978-1-4503-2206-5/13/07.

1. INTRODUCTION

Recent advances in cloud computing platforms and mobile devices have given rise to mobile cloud computing (MCC), a new paradigm for mobile applications and services that promises to have a strong impact on our lifestyle. Initially, MCC mainly focused on improving the computing power and storage capacity of mobile nodes by outsourcing tasks to more powerful cloud data centers [1].

Mobile cloud architectures can be roughly classified into two types [2, 3]. The first type is the agent-client architecture, where only a central data center provides resources (e.g., central processing unit (CPU) and storage) for mobile devices and processes the tasks necessary for implementing a service. In this setting, mobile devices just use cloud resources and do not contribute any services [4]. The second type is a cooperation-based architecture, where not only central data centers but also mobile devices share their resources and support services. Such architectures can be widely varied as well as the most powerful, owing to the sheer number of devices available to take part in the cloud. The underlying concept behind such architectures is also referred to as fog computing [5], which extends the cloud computing paradigm to the edge of the network.

The cooperation-based mobile cloud is the most interesting and visible research area of MCC at present. The recent performance advances and diversification of mobile devices bring many heterogeneous resources into local networks, such as high-performance CPUs, high-speed Long Term Evolution (LTE) connections, high-volume storage, and multiple-sensor information. These correspond respectively to computational resources, communication resources, storage resources, and information resources. These heterogeneous resources can be leveraged opportunistically, making cooperation-based cloud computing enabled by advanced mobile devices a powerful platform for supporting a wide variety of applications.

Several technical issues still need to be solved to fully realize the cooperation-based mobile cloud. In this work, we focus on the core problem of how to coordinate the sharing of heterogeneous resources between nodes. Conventional approaches for achieving heterogeneous resource sharing usually coordinate tasks (such as numerical calculations or even downloads of data) without consideration of the specific service that is being provided [2, 3]. In such task-oriented sharing, heterogeneous resources are often measured or quantified in disparate scales and units (e.g., power, bandwidth, latency), and tasks are allocated to optimize certain metrics based on those disparate scales and units. Consider the example of a navigation service composed of the tasks of calculating the optimal route and downloading map images. Task-oriented sharing for such a service minimizes the processing (computing) time of each task. However, the navigation service has the peculiar feature that even if the route calculation is completed in just a few microseconds, users cannot enjoy the service until the downloading tasks are completed. In this case, it makes no sense to optimize the computational resources; the optimization might even waste them. What would have been better is the optimization of a service-oriented utility function that better captures the benefits of such optimization.

In this paper, we propose an architecture and mathematical framework for heterogeneous resource sharing based on the key idea of service-oriented utility functions. The proposed system model is shown in Figure 1. In our architecture, a resource coordinator orchestrates tasks and resources service by service to maximize the utilities that nodes obtain from services. Since heterogeneous resources are often measured or quantified in disparate scales and units (e.g., power, bandwidth, latency), we present a unified framework in which all these quantities are equivalently mapped to time resources. We formulate optimization problems for maximizing (i) the sum of the utility functions and (ii) the product of the utility functions, and solve them via convex optimization approaches.

This paper is organized as follows. Section 2 introduces the system model as well as service-oriented utility functions. The proposed numerical model is discussed in Section 3. We discuss optimization of resource sharing in Section 4. In Section 5, scenarios for numerical analysis are discussed and numerical results are presented. We conclude the paper in Section 6.

[Figure 1: Service-oriented mobile cloud with fog computing. Nodes send service requests and node information (location, performance) to a resource coordinator, which combines them with the task compositions for services and returns task allocations over the physical links.]

2. SYSTEM ARCHITECTURE

2.1 Assumptions

We assume that nodes use services through applications installed in them. Nodes request resources service by service. A service is composed of multiple tasks such as computing and data downloading.

Figure 2 depicts an example of a task flow for a service. Nodes want to complete the tasks included in the service as soon as possible. To process tasks, nodes need to use their resources. The set of resources that node i has is given as R_i = {R_i^m : m ∈ M}, where the label m indicates the specific type of resource from the set of all available resources M. Further, the set of the sizes of the tasks that node i has to process is given as T_i = {T_i^l : l ∈ L}, where the label l denotes the specific type of task from the set of all required tasks L. In other words, node i uses an appropriate amount of resources to accomplish the task l of size T_i^l. It is possible that accomplishing two different tasks requires the same type of resource, in which case that resource has to be shared between the two tasks.

[Figure 2: Example of the flow of tasks for a service; the value in brackets is the amount of the task. (a) Without resource sharing, node i processes Task 1 (T_i^1), Task 2 (T_i^2), and Task 3 (T_i^3) by itself. (b) With resource sharing, Task 3 is split into a remaining portion (ΔT_ii^3) and an outsourced portion (ΔT_ij^3).]

Figure 2 (b) shows an example of a service where node i requests additional resources and node j shares its resources with node i. A task of size T_i^l can be separated into smaller portions, and, when node j shares resources with node i, node i outsources some portion of the task to node j. The portion of the task outsourced to node j is denoted as ΔT_ij^l (ΔT_ij^l ≥ 0). Node i has to process the remaining (not outsourced) portion ΔT_ii^l. Note that the relationship between T_i^l and ΔT_ij^l is T_i^l = Σ_{j∈N} ΔT_ij^l, where N is the set of all nodes. Outsourcing portions of a task reduces the processing time of the task accordingly.
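To make the notation above concrete, the following is a minimal Python sketch of how a coordinator might store the resource sets R_i, the task sizes T_i, and an allocation of outsourced portions ΔT_ij^l. The class and field names are illustrative assumptions and are not taken from the paper.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Node:
    """R_i = {R_i^m : m in M} and T_i = {T_i^l : l in L} as plain dictionaries."""
    resources: Dict[str, float]   # e.g. {"cpu_ops_per_s": 1e9, "downlink_bps": 10e6}
    tasks: Dict[str, float]       # e.g. {"route_calc_ops": 5e8, "map_download_bits": 40e6}

def check_allocation(nodes: Dict[str, Node],
                     dT: Dict[str, Dict[str, Dict[str, float]]],
                     tol: float = 1e-9) -> bool:
    """Check that dT[i][j][l] = ΔT_ij^l is a valid split of every task:
    ΔT_ij^l >= 0 and sum_j ΔT_ij^l = T_i^l (the self-processed portion is dT[i][i][l])."""
    for i, node in nodes.items():
        for l, size in node.tasks.items():
            shares = [dT[i][j].get(l, 0.0) for j in dT.get(i, {})]
            if any(s < -tol for s in shares) or abs(sum(shares) - size) > tol:
                return False
    return True
```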
2.2 System model

An exemplary model of the architecture is shown in Figure 3, where neighboring mobile nodes are connected to other nodes and form a local network for resource sharing by using short-range wireless connections such as WiFi in ad-hoc mode and/or Bluetooth. Messages for resource sharing, such as resource requests, task instructions, and the results of the tasks, are transmitted via the local network. Some of the nodes have wireless Internet access using possibly 3G, LTE, and WiFi connections.

[Figure 3: System model. Neighboring nodes form a local cloud connected by physical wireless links for resource sharing; some nodes also have physical (wireless) links for Internet access to data centers, and one node acts as the local resource coordinator.]

Neighboring nodes in a local network form a group called a local cloud. Nodes share their resources with other nodes in the same local cloud. A local resource coordinator (LRC) is elected from the nodes in each local cloud. The coordinator manages resource requests and allocates tasks to the nodes in the local cloud, or to a data center in the Internet if necessary. The coordinator should be elected in accordance with its connectivity to the local network, its CPU performance, and its battery lifetime. In this paper, we focus on resource coordination and have thus omitted the algorithms for forming a local cloud and selecting the LRC. Further, we assume that the short-range connections necessary to support the formation of any networks for resource sharing are not bandwidth constrained. We also study resource allocation only under the scenario of sharing computational resources, communication resources, and information resources.

[Figure 4: Coordination flow between node i, node j, and the local resource coordinator: 1) resource request, 2) result of coordination, 3) task instruction, 4) processing tasks, 5) result of task, 6) constructing service.]

In our architecture, nodes can be both resource users and resource providers. Figure 4 illustrates an example of messaging between two nodes and a coordinator. Here, suppose that node i and node j are requesting resources, and node i is outsourcing its tasks to node j in accordance with some coordination procedure. The messages are exchanged sequentially as follows: (1) the nodes send request messages to the coordinator; these messages include what and how many resources a node has, what and how many tasks the node has to complete for a service, and the MAC and IP addresses of the node. (2) The coordinator allocates the tasks in order to maximize the utilities of the nodes for the services, as proposed in the next section, and notifies each node of what and how many tasks it should process. (3) Node i sends an instruction message to node j in accordance with the coordination result; the message includes the information necessary to process node i's tasks. (4) Node j processes the tasks in accordance with the instructions from node i. (5) Node j sends the results of the tasks to node i. (6) Node i constructs the service based on the results of the tasks.
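The six-step exchange above maps naturally onto a small set of message types. The following is a minimal sketch in Python; the class and field names are illustrative assumptions rather than a protocol defined in the paper, and steps 4 to 6 (processing, returning results, and constructing the service) are omitted.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ResourceRequest:                 # step 1: node -> coordinator
    node_id: str
    resources: Dict[str, float]        # what and how many resources the node has
    tasks: Dict[str, float]            # what and how many tasks it must complete
    address: str                       # MAC/IP address

@dataclass
class CoordinationResult:              # step 2: coordinator -> node
    allocation: Dict[str, Dict[str, float]]   # task label -> {helper node id: outsourced amount}

@dataclass
class TaskInstruction:                 # step 3: requesting node -> helper node
    task_label: str
    amount: float
    payload: bytes                     # information needed to process the outsourced portion

Allocator = Callable[[ResourceRequest, List[ResourceRequest]], Dict[str, Dict[str, float]]]

def coordinate(requests: List[ResourceRequest], allocate: Allocator) -> Dict[str, CoordinationResult]:
    """Step 2: the local resource coordinator runs some allocation policy
    (e.g. the optimization of Section 4) and notifies every node of its share."""
    return {r.node_id: CoordinationResult(allocate(r, requests)) for r in requests}
```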
2.3 Service-oriented unified utility function

We model the utility of a node as a function of its service latency. The reason we focus on service latency is that latency is a straightforward measure of the decrease in the quality of experience for the service [6]. We define the service latency as the duration from when the first task of a service starts being processed until all the tasks of that service are finished. We define the utility function of node i as U_i(t_i), a monotonically decreasing function such as -a·t_i + b, where t_i is the service latency of node i. Figure 5 shows the service latency and the utility as a function of the amount of outsourced tasks. As we can see in Figure 5, increasing the quantity of outsourced tasks increases the gain the node obtains from resource sharing. Here, t_i depends on R_i, T_i^l, ΔT_ij^l, and ΔT_ji^l (j ∈ N and l ∈ L).

[Figure 5: Gain of resource sharing. Service latency decreases with the amount of outsourced tasks ΔT; the gap between t_i' (without outsourcing) and t_i (with outsourcing) is the gain.]

Using the unified utility function enables us to compare the value of heterogeneous resources in the same dimension of time. It could be a powerful metric for heterogeneous resources, since we can treat any kind of resource as long as the resource can reduce the service latency. For example, a high-performance CPU reduces the latency for processing computational jobs, and high-speed wireless access reduces the latency for downloading data. A smart algorithm can reduce the latency for processing a computational task. Content can also be a resource, since it can remove the latency of obtaining the same content from the Internet. Sensor information can reduce latencies as well; for example, GPS information can be used to localize a node instead of other localization techniques that may be more computationally intensive and require the completion of several communication tasks.
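As a concrete illustration, the following minimal Python sketch evaluates the time-based utility and the gain of Figure 5, assuming the linear form U_i(t_i) = -a·t_i + b mentioned above; a and b are free parameters here, not values taken from the paper.

```python
def utility(latency_s: float, a: float = 1.0, b: float = 10.0) -> float:
    """Monotonically decreasing utility of the service latency t_i (seconds)."""
    return -a * latency_s + b

def gain(latency_without_sharing_s: float, latency_with_sharing_s: float,
         a: float = 1.0, b: float = 10.0) -> float:
    """Gain of resource sharing, U_i(t_i) - U_i(t_i'); for the linear utility this is
    simply a times the reduced service latency (cf. Figure 5)."""
    return (utility(latency_with_sharing_s, a, b)
            - utility(latency_without_sharing_s, a, b))
```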

3. NUMERICAL MODEL FOR SERVICE-ORIENTED RESOURCE SHARING

In this section, we discuss the proposed architecture in terms of service latency and energy consumption, which are important factors for mobile users.

3.1 Service latency

Figure 6 shows the different scenarios possible for how tasks may need to be accomplished in providing a service. For a service of type (a), we can easily calculate the latency as the sum of the latencies of all tasks. For a service of type (b), we can likewise calculate the latency as the pointwise maximum of the latencies of all tasks. However, a service generally consists of multiple sequential tasks, as shown in Figure 6 (c). Here, we define a sequence as a set of tasks which have to be processed in order. As shown in Figure 6 (d), we can describe such a service s as Q_s = {Q_s^u : u ∈ U_s}, where the label u identifies a specific sequence from the set of all sequences U_s included in the service. Since the latency of each sequence can be calculated as the sum of the latencies for processing the tasks included in the sequence, the service latency t_i of node i is written as

    t_i = \max_{u \in U_s} \sum_{m_l \in Q_s^u} t_i^{m_l},    (1)

where t_i^{m_l} is the latency required for node i to complete task l when using the resource R_i^{m_l} appropriate for that task.

[Figure 6: Task sequences for a service: (a) sequential case, (b) parallel case, (c) mixed case with an outsourced task, (d) parallelized sequences (Sequence 1, 2, 3).]

The latency t_i^{m_l} should be defined appropriately for the resource m_l. For computational and communication resources, we define the latency as T_i^l / R_i^{m_l}. This definition is accurate since, for example, both the processing time of T_i^l computational operations when using a CPU that performs R_i^{m_l} operations per second and the downloading time of data of size T_i^l with a throughput of R_i^{m_l} bps can be calculated with it.

Next, we discuss information resources. We regard these resources as alternatives to communication and computational resources. For example, when a map image is required for a service, if a node caches the image, the node can use it without downloading the image from the Internet. Similarly, in a navigation service, if a node caches a route calculation result, the node can leverage that result instead of calculating the route. If node i has alternative information for a task T_i^l, the node can finish the task without processing it using the resource R_i^{m_l}. Let us define

    \delta_j^{l_i} = \begin{cases} 0 & (I_i^l \in R_j^{info}); \\ 1 & (I_i^l \notin R_j^{info}), \end{cases}    (2)

where I_i^l is an alternative information resource for task l of node i and R_j^{info} denotes the set of all information resources of node j. Using Eq. (2), we define the latency for a task as

    t_i^{m_l}(T_i^l) = \delta_i^{l_i} \frac{T_i^l}{R_i^{m_l}}.    (3)
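The latency model of Eqs. (1)-(3) can be sketched in a few lines of Python. The function and parameter names, as well as the numbers in the example, are assumptions for illustration only.

```python
from typing import Dict, List, Set

def task_latency(task_size: float, resource_rate: float, has_info_resource: bool) -> float:
    """Eq. (3): delta * T_i^l / R_i^{m_l}; delta is 0 when an alternative information
    resource (e.g. a cached map tile or route) makes processing unnecessary."""
    return 0.0 if has_info_resource else task_size / resource_rate

def service_latency(sequences: List[List[str]],
                    task_sizes: Dict[str, float],
                    resource_rates: Dict[str, float],
                    cached: Set[str] = frozenset()) -> float:
    """Eq. (1): the slowest sequence, each sequence being a sum of task latencies."""
    return max(sum(task_latency(task_sizes[l], resource_rates[l], l in cached) for l in seq)
               for seq in sequences)

# Example: a purely sequential service (Fig. 6a) versus two parallel sequences (Fig. 6d).
rates = {"download": 5e6, "route": 2e8}        # bits/s and operations/s (assumed values)
sizes = {"download": 40e6, "route": 6e8}
print(service_latency([["download", "route"]], sizes, rates))    # sequential: 8 + 3 = 11 s
print(service_latency([["download"], ["route"]], sizes, rates))  # parallel: max(8, 3) = 8 s
```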
3.2 Procedure for sharing resources

As shown in Figure 4, in order to share resources, nodes have to exchange messages and process tasks for other nodes. For example, suppose that node i outsources a set of tasks ΔT_ij^l to node j. The instruction message from node i instructs node j how to process each task in the set, and node j processes the tasks using its resources R_j^{m_l}. When it finishes processing the tasks, node j sends the results of the tasks back to node i. We define the latency of an outsourced task, t_ij^l, as the duration from when the first message is sent until the result is received. The latency t_ij^l is written as

    t_{ij}^l = t_j^{m_l}(\Delta T_{ij}^l) + t_t(D_{ins}^l(\Delta T_{ij}^l)) + t_t(D_r^l(\Delta T_{ij}^l)) + \Delta t_j^{m_l},    (4)

where t_t(D) indicates the time needed to transmit a message of length D via short-range communications, D_ins^l(ΔT_ij^l) is the data size required for instructing the node how to process the task ΔT_ij^l, D_r^l(ΔT_ij^l) is the data size of the results for task ΔT_ij^l, and Δt_j^{m_l} is an additional latency that depends on when node j starts to process task ΔT_ij^l. In this paper, we simply define them as

    D_{ins}^l(\Delta T_{ij}^l) = w_{ins}^l \Delta T_{ij}^l, \quad D_r^l(\Delta T_{ij}^l) = w_r^l \Delta T_{ij}^l, \quad t_t(D) = D/\theta,    (5)

where w_ins^l and w_r^l are weight parameters and θ is the throughput of a short-range wireless link.

Outsourced tasks and the remaining tasks can be processed in parallel. Then, the latency for task l becomes

    t_i^{m_l} = \delta_i^{l_i} \cdot \mathrm{MAX}\big( t_{ii}^{m_l}(\Delta T_{ii}^l),\ t_{i1}^l, t_{i2}^l, \ldots, t_{iN}^l \big).    (6)
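Under the reconstructed definitions (4)-(6) above, the latency of a task that is partly outsourced can be sketched as follows; all parameter names and the optional extra delay term are illustrative assumptions.

```python
def transmit_time(data_bits: float, theta_bps: float) -> float:
    """t_t(D) = D / theta for the short-range link."""
    return data_bits / theta_bps

def outsourced_latency(dT_ij: float, R_j: float, w_ins: float, w_r: float,
                       theta_bps: float, extra_delay_s: float = 0.0) -> float:
    """Eq. (4): remote processing plus instruction and result transfers plus Δt_j."""
    return (dT_ij / R_j
            + transmit_time(w_ins * dT_ij, theta_bps)
            + transmit_time(w_r * dT_ij, theta_bps)
            + extra_delay_s)

def task_latency_with_sharing(T_i: float, dT_ij: float, R_i: float, R_j: float,
                              w_ins: float, w_r: float, theta_bps: float) -> float:
    """Eq. (6): the local remainder and the outsourced portion run in parallel."""
    local = (T_i - dT_ij) / R_i
    return max(local, outsourced_latency(dT_ij, R_j, w_ins, w_r, theta_bps))
```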

3.3 Trade-off between gain and energy consumption

As we mentioned in Section 1, we intuitively know that sharing resources with nearby nodes should be more efficient and effective than sharing with distant nodes. There is a trade-off between the gain from cooperating with distant nodes and the energy consumption. Increasing the number of cooperating nodes might enable the nodes to obtain a greater gain, since the nodes would share a larger amount of remote resources. To increase the number of cooperating nodes, however, the nodes must extend their communication range.

Suppose that there are three nodes i, j, and k, and that the distances between each pair of nodes are d_ij, d_jk, and d_ik, respectively, with d_ik and d_jk larger than d_ij. In order for node i to share resources with node j, node i has to transmit messages to node j; in that case, the communication range of node i has to be longer than d_ij. If nodes i and j cooperate with node k, their communication ranges have to be longer than d_ik and d_jk, respectively. Nodes i and j can obtain a greater gain by sharing resources with node k, but they require more transmission energy to obtain the larger communication range.

Energy is consumed when nodes send and receive data. The values E_tx(D, d) and E_rx(D) represent the energy consumed in sending and receiving a message, where D is the size of the sent or received data and d is the communication range required to share resources with a node, which is almost the same as the geographical distance between the sender and the receiver. E_tx(D, d) and E_rx(D) monotonically increase as D and d increase. In particular, E_tx increases exponentially as d increases while maintaining the signal-to-noise ratio (SNR) [7]. Energy is also consumed when nodes process tasks: E^{m_l}(T^l) is the energy consumed for processing the task T^l while using resource m_l, and it also monotonically increases as the amount of the task increases. Here, we simply define E^{m_l}(T^l) = e^{m_l}·T^l, E_rx(D) = e_rx·D, and E_tx(D, d) = e_tx·d^2·D, where e^{m_l} indicates the energy consumed in one second of fully using the resource m_l, e_rx indicates the energy consumed to receive one byte of data, and e_tx indicates the energy consumed to send one byte of data to a node at a distance of 1 m. Then, the energy consumed by node i when sharing its resources with node j is

    E_{ij} = \sum_{l \in L_{ji}} \big\{ e_{rx} w_{ins}^l \Delta T_{ji}^l + e^{m_l} \Delta T_{ji}^l + e_{tx} (d_{ij})^2 w_r^l \Delta T_{ji}^l \big\},    (7)

where L_ij denotes the set of labels of the tasks outsourced to node j by node i (so L_ji collects the tasks that node j outsources to node i). In addition, node i has to complete its remaining tasks and exchange messages for the tasks it outsources itself. Then, the total energy consumption of node i becomes

    E_i = \sum_{l \in L} e^{m_l}(\Delta T_{ii}^l) + \sum_{j \in N, j \neq i} \Big[ \sum_{l \in L_{ij}} \big\{ e_{tx} (d_{ij})^2 w_{ins}^l \Delta T_{ij}^l + e_{rx} w_r^l \Delta T_{ij}^l \big\} + E_{ij} \Big].    (8)
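A minimal Python sketch of this energy model, as reconstructed in Eqs. (7)-(8), for a single task type shared between node i and one peer; the parameter names and the split into processing, outsourcing, and helping terms are our own illustration.

```python
def sharing_energy(dT_out: float, dT_in: float, T_own: float,
                   e_proc: float, e_rx: float, e_tx: float,
                   d_m: float, w_ins: float, w_r: float) -> float:
    """Energy of node i for one task type when it outsources dT_out and accepts dT_in
    (cf. Eqs. (7)-(8)): linear processing energy, e_rx per received byte, and
    e_tx * d^2 per transmitted byte."""
    processing = e_proc * ((T_own - dT_out) + dT_in)                      # own remainder + accepted work
    outsourcing = e_tx * d_m**2 * w_ins * dT_out + e_rx * w_r * dT_out    # send instruction, receive result
    helping = e_rx * w_ins * dT_in + e_tx * d_m**2 * w_r * dT_in          # receive instruction, send result
    return processing + outsourcing + helping
```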
4. RESOURCE SHARING OPTIMIZATION

We consider a case where a coordinator ideally knows all information, such as R_i^m and T_i^l. The coordinator instructs every node to allocate its tasks to other nodes in order to (i) maximize the sum of the gains of all nodes or (ii) maximize the product of the gains of all nodes. First we discuss the general case. Then, we explicitly formulate an optimization problem for a specific case where two nodes share resources.

4.1 Objective

4.1.1 Maximizing sum of gains in utility

The simplest objective is to maximize the sum of the gains of all nodes. It is written as

    objective: \max_{\Delta T_{ij}\, (i, j \in N)} \ \sum_{i \in N} \big( U_i(t_i) - U_i(t_i^0) \big),    (9)

where ΔT_ij = {ΔT_ij^l : l ∈ L} and t_i^0 is the service latency when node i does not share resources. If we define the utility function as U_i(t_i) = -a·t_i + b (t_i ≥ 0), the problem of maximizing the sum of the gains in utility of all nodes is equal to the problem of maximizing the sum of the reduced service latencies. The objective can then be written as

    objective: \max_{\Delta T_{ij}\, (i, j \in N)} \ \sum_{i \in N} \big( t_i^0 - t_i \big).    (10)

As described in [9], a nonnegative, nonzero weighted sum of convex (concave) functions is convex (concave). Then, if t_i is convex or concave in ΔT_ij, the objective is concave or convex in ΔT_ij, respectively.

4.1.2 Maximizing product of gains in utility

An alternative objective of the optimization problem is

    objective: \max_{\Delta T_{ij}\, (i, j \in N)} \ \prod_{i \in N} \big( t_i^0 - t_i \big).    (11)

This objective is based on the idea of the Nash bargaining solution, which brings Pareto efficiency and proportional fairness. However, the objective is not necessarily convex, which makes the optimization problem hard to solve.

4.2 Constraints

Next, we discuss the constraints. In this paper, we consider constraints on the task amounts and incentives on service latency and energy consumption. The constraints are written as

    subject to: \Delta T_{ij}^l \geq 0 \quad (\text{for any } i, j \text{ and } l),    (12)
                \sum_{j \in N} \Delta T_{ij}^l \leq T_i^l \quad (\text{for any } i \text{ and } l),    (13)
                E_i - E_i^0 \leq E_i^{th} \quad (\text{for any } i),    (14)
                t_i \leq t_i^0 \quad (\text{for any } i),    (15)

where E_i^th is the threshold for energy consumption and E_i^0 is the energy consumption of node i without resource sharing. Eqs. (12) and (13) mean that the sizes of the outsourced tasks should be positive and that nodes cannot outsource more tasks than they have. Eqs. (14) and (15) mean that node i is motivated to join the resource sharing only if its additional energy consumption is kept smaller than its threshold E_i^th and if the resource sharing reduces its service latency. We consider these constraints because we can expect that nodes will not share their resources without an ensured incentive for sharing. Resource sharing can reduce not only service latency but also energy consumption [8]. Both reductions can be incentives for resource sharing, but we expect that the two do not go together, since there are trade-offs, as discussed in Section 3.3.

From Eq. (8), E_i and Σ_{j∈N} ΔT_ij^l are sums of affine functions. The constraints other than the one on t_i are therefore evidently convex or concave in ΔT_ij^l. If we use objective (i) and t_i is convex or concave, the optimization problem becomes a convex optimization problem, which can make the optimization easier than in the general case, since any local solution must be a global solution.
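The feasibility and incentive constraints (12)-(15) translate directly into a small check that a coordinator could run on a candidate allocation for one node. This is a sketch only; the function and argument names are our own, and the latencies and energies are assumed to be computed with the models of Section 3.

```python
def satisfies_constraints(dT, T, latency, latency_baseline,
                          energy, energy_baseline, energy_threshold, tol=1e-9):
    """dT[j][l] = ΔT_ij^l for a single node i (including j = i); T[l] = T_i^l."""
    if any(amount < -tol for shares in dT.values() for amount in shares.values()):
        return False                                        # (12) shares must be non-negative
    for l, total in T.items():
        if sum(shares.get(l, 0.0) for shares in dT.values()) > total + tol:
            return False                                    # (13) cannot allocate more than T_i^l
    if energy - energy_baseline > energy_threshold + tol:
        return False                                        # (14) bounded additional energy
    if latency > latency_baseline + tol:
        return False                                        # (15) sharing must not increase latency
    return True
```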

4.3 Case study: Resource sharing in two-node case

We discuss a simple case where a node shares computational, communication, and information resources with another node to minimize the latency of a service composed of a computational task and a communication task.

We consider a scenario where two nodes 1 and 2 share their computational and communication resources, the amounts of which are indicated by R_i^1 and R_i^2, respectively (i = 1 or 2). A common service requested by nodes 1 and 2 consists of a computational task and a communication task, the amounts of which are T_i^1 and T_i^2, respectively (i = 1 or 2). The latency of each task is assumed to be T_i^l / R_i^l, as described in Section 3.1.

Here, we assume that these two tasks can be processed in parallel. In this case, as discussed in Section 3.1, the service latency of node i when the nodes do not share resources becomes

    t_i = \mathrm{MAX}\Big( \delta_i^{1_i} \frac{T_i^1}{R_i^1},\ \delta_i^{2_i} \frac{T_i^2}{R_i^2} \Big).    (16)

From Eqs. (5) and (6), when nodes i and j share their resources, the latency for a task l of node i is

    t_i^l = \delta_i^{l_i} \, \mathrm{MAX}\Big( \frac{T_i^l - \Delta T_{ij}^l + \Delta T_{ji}^l}{R_i^l},\ \delta_j^{l_j} \frac{\Delta T_{ij}^l}{R_j^l} + t_t(D_{ins}^l(\Delta T_{ij}^l)) + t_t(D_r^l(\Delta T_{ij}^l)) \Big).    (17)

In this scenario, from (17), either ΔT_ij^l or ΔT_ji^l should be 0 at the optimal equilibrium. We therefore define Δ_ij^l as

    \Delta_{ij}^l = \begin{cases} \Delta T_{ij}^l & (\Delta T_{ji}^l = 0); \\ -\Delta T_{ji}^l & (\Delta T_{ij}^l = 0). \end{cases}    (18)

Then

    \Delta T_{ij}^l - \Delta T_{ji}^l = \Delta_{ij}^l.    (19)

From Eq. (19), Eq. (17) can be described as

    t_i^l(\Delta_{ij}^l) = \begin{cases} \delta_i^{l_i} \, \mathrm{MAX}\Big\{ \dfrac{T_i^l - \Delta_{ij}^l}{R_i^l},\ \delta_j^{l_j} \dfrac{\Delta_{ij}^l}{R_j^l} + \dfrac{w_{ins}^l \Delta_{ij}^l}{\theta} + \dfrac{w_r^l \Delta_{ij}^l}{\theta} \Big\} & (\Delta_{ij}^l \geq 0); \\ \delta_i^{l_i} \dfrac{T_i^l - \Delta_{ij}^l}{R_i^l} & (\Delta_{ij}^l < 0). \end{cases}

This function is convex since, as proved in [9], if f_1 and f_2 are convex functions, then their pointwise maximum f, defined by f(x) = max{f_1(x), f_2(x)}, is also convex.

Next, we discuss the energy consumption. From Eq. (8), the energy consumption of node i when sharing resources is

    E_i^l = \begin{cases} \delta_i^{l_i} \big\{ e^{m_l} (T_i^l - \Delta_{ij}^l) + e_{tx} (d_{ij})^2 w_{ins}^l \Delta_{ij}^l + e_{rx} w_r^l \Delta_{ij}^l \big\} & (\Delta_{ij}^l \geq 0); \\ \delta_i^{l_i} e^{m_l} (T_i^l - \Delta_{ij}^l) - e_{rx} w_{ins}^l \Delta_{ij}^l - e_{tx} (d_{ij})^2 w_r^l \Delta_{ij}^l & (\Delta_{ij}^l < 0). \end{cases}

The function is continuous and obviously convex for δ(I_i^l, R_i^{info}) = 0 or 1.

Using the above functions, we can write the optimization problem as

    objective: \max_{\Delta_{ij}^1, \Delta_{ij}^2} \ G_i + G_j    (20)
    subject to: G_i, G_j \geq 0,
                -T_j^l \leq \Delta_{ij}^l \leq T_i^l \quad (l = 1 \text{ and } 2),
                E_i - E_i^0 \leq E_i^{th}, \quad E_i^0 = e^1 T_i^1 + e^2 T_i^2,

where

    G_i = \mathrm{MAX}\Big( \delta_i^{1_i} \frac{T_i^1}{R_i^1},\ \delta_i^{2_i} \frac{T_i^2}{R_i^2} \Big) - \mathrm{MAX}\big( t_i^1(\Delta_{ij}^1),\ t_i^2(\Delta_{ij}^2) \big),
    G_j = \mathrm{MAX}\Big( \delta_j^{1_j} \frac{T_j^1}{R_j^1},\ \delta_j^{2_j} \frac{T_j^2}{R_j^2} \Big) - \mathrm{MAX}\big( t_j^1(\Delta_{ij}^1),\ t_j^2(\Delta_{ij}^2) \big).

MAX(t_i^1(Δ_ij^1), t_i^2(Δ_ij^2)) is convex.¹ Then, G_i and G_j are evidently concave functions. As described in [9], a nonnegative, nonzero weighted sum of convex (concave) functions is convex (concave); then G_i + G_j is concave. As mentioned above, the other constraints are also convex. Therefore, the optimization problem is a convex optimization problem.

¹ We define t_i^1(Δ_ij^1, Δ_ij^2) = t_i^1(Δ_ij^1) + 0·Δ_ij^2 and t_i^2(Δ_ij^1, Δ_ij^2) = t_i^2(Δ_ij^2) + 0·Δ_ij^1. These functions are convex since they are the sum of convex functions and zero. Then their pointwise maximum MAX(t_i^1(Δ_ij^1, Δ_ij^2), t_i^2(Δ_ij^1, Δ_ij^2)) also has to be convex. Since MAX(t_i^1(Δ_ij^1, Δ_ij^2), t_i^2(Δ_ij^1, Δ_ij^2)) is equal to MAX(t_i^1(Δ_ij^1), t_i^2(Δ_ij^2)), MAX(t_i^1(Δ_ij^1), t_i^2(Δ_ij^2)) is also convex.
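Problem (20) can be prototyped numerically. The Python sketch below assumes two tasks, all information-resource indicators δ equal to 1, and omits the energy constraint for brevity; every numeric value is an assumed illustration rather than a parameter from the paper, and scipy's SLSQP solver is used only as a readily available stand-in for a convex or reduced-gradient solver.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed setup: node 1 has a slow link but a fast CPU, node 2 the opposite, so an
# exchange (node 1 outsources downloading, node 2 outsources computing) can benefit both.
T = {1: np.array([40e6, 1e9]), 2: np.array([40e6, 1e9])}   # task sizes: [bits to download, ops]
R = {1: np.array([2e6, 2e9]),  2: np.array([20e6, 2e8])}   # resources:  [throughput bps, ops/s]
w_ins, w_r, theta = 0.01, 0.1, 50e6                        # instruction/result weights, link rate

def task_latency(T_own, R_own, R_peer, delta):
    """Piecewise latency of one task: delta >= 0 is outsourced work, delta < 0 is accepted work."""
    if delta >= 0:
        remote = delta / R_peer + (w_ins + w_r) * delta / theta
        return max((T_own - delta) / R_own, remote)
    return (T_own - delta) / R_own

def gains(x):
    """x[l] = Delta_12^l; returns [G_1, G_2] as in (20)."""
    g = []
    for i, j, sign in ((1, 2, 1.0), (2, 1, -1.0)):
        baseline = max(T[i] / R[i])
        latency = max(task_latency(T[i][l], R[i][l], R[j][l], sign * x[l]) for l in (0, 1))
        g.append(baseline - latency)
    return g

res = minimize(lambda x: -sum(gains(x)), x0=np.zeros(2), method="SLSQP",
               bounds=[(-T[2][l], T[1][l]) for l in (0, 1)],
               constraints=[{"type": "ineq", "fun": lambda x, k=k: gains(x)[k]} for k in (0, 1)])
print(res.x, gains(res.x))   # outsourced amounts and the resulting gains
```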
While we have discussed the two-node case with two tasks thus far, which lends itself to formulation as convex optimization, the extension to the general case of N nodes with L tasks may not necessarily yield such convex optimization formulations. We instead outline a heuristic approach that will be an aspect of further study. Specifically, a heuristic for the N-node optimization problem can be to use subsets of the 2-node optimization problems, where these subsets may be chosen according to various criteria: minimizing the average geographical distance between nodes, maximizing the average reduced service latency, or maximizing the product of the reduced service latencies.

5. NUMERICAL EXAMPLES

5.1 Scenario setup

To illustrate our approach, we consider a scenario where nodes share their communication and computational resources. The nodes want to use a navigation service. For the service, node i has to download load information and map images, and calculate the optimal route. We assume here that the load information is already stored in the nodes, so the latency for downloading the load information is 0. The total data size of the map images is T_i^1 and the number of operations for calculating the route is T_i^2. Node i downloads data at a throughput of R_i^1 and processes computational tasks at a speed of R_i^2. We assume that nodes can process the image downloading and the route calculation simultaneously. In this case, the optimization problem for the resource sharing becomes the same as that of (20) with δ_i^{l_i} = δ_j^{l_j} = 1 for every l.

Here, we assume that the local resource coordinator mentioned in Section 2.2 ideally solves the optimization problem discussed in Section 4 and determines the optimal allocation of tasks; however, the constraints on energy consumption are not considered, in order to observe the adverse effects of geographical distance. We solve this nonlinear optimization problem by using the generalized reduced gradient technique [10].

We describe the parameters used in the study in the captions of the specific figures. These parameters are chosen considering the performance of actual CPUs and of wireless access using 3G, LTE, and Wi-Fi [11, 12, 13].

5.2 Comparative Evaluation

We compare our service-oriented approach with a numerical upper bound and a task-oriented approach.

5.2.1 Numerical upper bound

The numerical upper bound is obtained when the sum of the reduced service latencies is maximized without considering incentives for the nodes. In particular, in this scenario, we solve the following optimization problem for the two nodes i and j:

    objective: \max_{\Delta_{ij}^1, \Delta_{ij}^2} \ G_i + G_j    (21)
    subject to: -T_j^l \leq \Delta_{ij}^l \leq T_i^l \quad (l = 1 \text{ and } 2),

where G_i and G_j are defined as in (20).

5.2.2 Task-oriented approach

