IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 17, NO. 7, JULY 2018, p. 4565

Peer-Assisted Computation Offloading in Wireless Networks

Yeli Geng, Student Member, IEEE, and Guohong Cao, Fellow, IEEE

Abstract— Computation offloading has been widely used to alleviate the performance and energy limitations of smartphones by sending computationally intensive applications to the cloud. However, mobile devices with poor cellular service quality may incur high communication latency and high energy consumption for offloading, which reduces the benefits of computation offloading. In this paper, we propose a peer-assisted computation offloading (PACO) framework to address this problem. In PACO, a client experiencing poor service quality can choose a neighbor with better service quality to be the offloading proxy. Through a peer-to-peer interface such as WiFi Direct, the client can offload computation tasks to the proxy, which further transmits them to the cloud server through cellular networks. We propose algorithms to decide which tasks should be offloaded to minimize the energy consumption. We have implemented PACO on Android and have implemented three computationally intensive applications to evaluate its performance. Experimental and simulation results show that PACO makes it possible for users with poor cellular service quality to benefit from computation offloading, and that PACO significantly reduces the delay and energy consumption compared to existing schemes.

Index Terms— Energy consumption, cellular phones, computation offloading, wireless communication.

I. INTRODUCTION

AS mobile devices are becoming increasingly powerful, computationally intensive mobile applications such as image or video processing, augmented reality, and speech recognition have experienced explosive growth. However, these computationally intensive applications may quickly drain the battery of mobile devices.
One popular solution to conserve battery life is to offload these computation tasks from mobile devices to resource-rich servers, which is referred to as computation offloading [1].

Previous research on computation offloading has focused on building frameworks that enable mobile computation offloading to software clones of smartphones in the cloud [2]–[4]. However, all these studies assume good network connectivity while neglecting real-life challenges for offloading through cellular networks. In cellular networks such as 3G, 4G and LTE, some areas have good coverage while others may not because of practical deployment issues. As a result, the wireless signal strength of a mobile device varies based on its location. Mobile users experiencing weak signal strength usually have low data-rate connections. Moreover, the data throughput depends on the traffic load in the area [5]. When the network connectivity is slow, offloading computation to the cloud may incur higher communication latency and consume more energy.

Manuscript received March 17, 2017; revised August 26, 2017 and January 19, 2018; accepted April 4, 2018. Date of publication April 24, 2018; date of current version July 10, 2018. This work was supported by the National Science Foundation under Grant CNS-1421578 and Grant CNS-1526425. The associate editor coordinating the review of this paper and approving it for publication was K. Huang. (Corresponding author: Yeli Geng.) The authors are with the School of Electrical Engineering and Computer Science, Pennsylvania State University, University Park, PA 16802 USA (e-mail: yzg5086@cse.psu.edu; gcao@cse.psu.edu). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TWC.2018.2827369
Therefore, mobile devices experiencing poor service quality (in terms of signal strength and throughput) may not benefit from computation offloading.

Some existing research has identified similar challenges for data transmission in cellular networks. Many studies propose to offload cellular traffic to WiFi networks to save energy [6], [7]. Schulman et al. [8] propose to defer data transmissions to coincide with periods of strong signal to save energy. In QATO [9], data traffic is offloaded from nodes with poor service quality to neighboring nodes with better service quality to save energy and reduce delay. However, all these works focus on traffic offloading rather than computation offloading, and computation offloading decisions have to consider the delay and energy consumption of both computation execution and data transmission.

In this paper, we propose a Peer-Assisted Computation Offloading (PACO) framework to enable computation offloading in wireless networks, which is especially helpful for mobile devices suffering from poor service quality. In PACO, clients with poor service quality can identify neighbors with better service quality and choose one of them as the offloading proxy. Through peer-to-peer interfaces such as WiFi Direct, clients can offload computation tasks to the proxy, which actually handles the computation offloading to the server.

Although leveraging nearby devices to relay traffic has been studied in prior work, using them for computation offloading in cellular networks raises new challenges which have not been addressed. One main challenge is how to make offloading decisions, i.e., determining which tasks should be offloaded to minimize the energy consumption of the mobile devices. Existing research [2], [4] considers the trade-off between the energy saved by moving computation to the cloud and the energy spent on offloading the computation. However, it does not consider the special characteristics of cellular networks when making offloading decisions.
After a data transmission, the cellular interface has to stay in a high-power state for some time, which can consume a significant amount of additional energy (referred to as the long tail problem) [10]. This long tail problem makes it hard to decide whether to offload the computation.

1536-1276 © 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
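As a rough illustration of why the tail matters, the following sketch compares transmission energy against tail energy for a small offloading request. The power and rate values are made-up but plausible placeholders, not measurements from this paper.

```python
# Hypothetical 3G-style parameters, for illustration only.
P_UP = 1.2      # W, radio power while uploading
R_UP = 1e6      # bits/s, uplink throughput
P_TAIL = 0.6    # W, average power across the high-power tail states
T_TAIL = 12.0   # s, tail duration before the radio returns to IDLE

s_bits = 50e3 * 8                  # a 50 KB offloading request, in bits
e_transfer = P_UP * s_bits / R_UP  # energy of the transmission itself
e_tail = P_TAIL * T_TAIL           # energy burned in the tail afterwards

# e_transfer comes to about 0.48 J while e_tail is about 7.2 J, so for
# small requests the tail, not the transfer, dominates the energy cost.
```

This is exactly the asymmetry that makes the offloading decision non-trivial: a task with little input/output data can still be expensive to offload once the tail is charged to it.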
We have implemented PACO on the Android platform. To evaluate its performance, we have developed three computationally intensive applications. Experimental and simulation results show that PACO can significantly reduce the energy and delay compared to other computation offloading approaches.

The main contributions of this paper are as follows.
• We introduce the idea of leveraging peers with better service quality to enable computation offloading in cellular networks.
• We design a peer-assisted computation offloading framework to detect and utilize neighbors with better service quality to save energy. We also propose algorithms to determine whether a task should be offloaded or not.
• We have implemented the framework on the Android platform and have implemented three applications to validate its effectiveness.

The remainder of this paper is organized as follows: In Section II, we present related work. Section III provides background and motivation for peer-assisted computation offloading. We present a high-level description of the PACO architecture in Section IV. We describe the design of PACO in more detail, i.e., the proxy selection mechanism in Section V and the offloading decision algorithms in Section VI. Section VII evaluates PACO's performance. Finally, Section VIII concludes the paper.

II. RELATED WORK

In this section, we review three categories of research related to our work.

A. Power Saving in Cellular Networks

In cellular networks, the wireless interface will stay in high-power states for a long time (i.e., the long tail problem) after a data transmission. Existing research [10], [11] has shown that a large amount of energy can be wasted due to this problem. As a result, many researchers have proposed to defer data transmission [8] or to aggregate the network traffic to amortize the tail energy [10], [12].

B. Computation Offloading

Computation offloading has received considerable attention recently. Some previous work has focused on building frameworks that enable computation offloading to the remote cloud, such as MAUI [2], CloneCloud [4] and ThinkAir [3]. Other work [13]–[15] has focused on the offloading decisions, i.e., which tasks of an application should be offloaded, to improve performance or save energy on the mobile devices.

There have been some studies on computation offloading which aim to reduce the cellular communication cost by solving the long tail problem. Xiang et al. [16] proposed the technique of coalesced offloading, which coordinates the offloading requests of multiple applications to amortize the tail energy. Tong and Gao [17] proposed application-aware traffic scheduling in computation offloading to minimize energy and satisfy the application delay constraint. Geng et al. [18] designed offloading algorithms to minimize the energy consumption considering the long tail problem. However, all of them assume that mobile devices have good cellular network connectivity.

Some existing research has exploited peer-to-peer offloading in different networks. In [19]–[21], the authors proposed to offload computation to neighboring nodes in disruption-tolerant networks, wireless sensor networks and small-cell networks, respectively. In [22], D2D communication was exploited to enable offloading from mobile devices to other mobile devices in cellular networks. Furthermore, Jo et al. [23] proposed a heterogeneous mobile computing architecture to exploit resources from D2D communication-based mobile devices. However, all of them offload computation to neighboring mobiles, instead of the faraway cloud. Different from them, our work exploits one-hop D2D communication to leverage a neighboring mobile device only as a relay to offload to the cloud through the cellular network.

While solving the connectivity problem by exploiting D2D communication, we also consider the long tail problem in our proposed solution to better utilize the cellular resources. To the best of our knowledge, none of the previous work attempted to solve the long tail problem when exploiting cooperative computation offloading. Our work is the first to solve both problems in mobile cloud computing.

C. Cellular Traffic Offloading

To deal with the traffic overload problem in cellular networks, some researchers proposed to offload part of the cellular traffic through other wireless networks. Some research efforts have focused on offloading 3G traffic through opportunistic mobile networks or D2D networks [24], [25]. Others utilized public WiFi for 3G traffic offloading [6], [7]. Besides offloading through WiFi or opportunistic mobile networks, existing work also leveraged mobile nodes with good signal. For example, UCAN [26] enabled the 3G base station to forward data to mobile clients with better channel quality, which then relay the data to destination clients with poor channel quality through peer-to-peer links. Different from previous work, which only considered traffic offloading, our work focuses on computation offloading.

III. BACKGROUND AND MOTIVATION

In this section, we first introduce cellular networks and their energy model, and then give the motivation for our peer-assisted computation offloading.

The Universal Mobile Telecommunications System (UMTS) is a popular 3G standard developed by 3GPP. While GSM is based on TDMA, UMTS uses Wideband CDMA (WCDMA) radio access technology and provides a transfer rate of up to 384 Kbps for its first version, Release 99. After that, High Speed Downlink Packet Access (HSDPA) was introduced, providing a higher data rate of up to 14 Mbps. In Release 6, the uplink is enhanced via High Speed Uplink Packet Access (HSUPA) to support a peak data rate of up to 7 Mbps. Later, HSDPA and HSUPA were merged into one, High Speed Packet Access (HSPA), and its evolution
[Fig. 1. The power level of the UMTS cellular interface at different states.]

HSPA+ has been introduced and standardized. HSPA+ offers a number of enhancements, supporting an increased data rate of up to 84 Mbps [27]. Long Term Evolution (LTE) is the latest extension of UMTS. LTE enhances both the radio access network and the core network. The targeted user throughput of LTE is 100 Mbps for downlink and 50 Mbps for uplink, significantly higher than existing 3G networks [28].

A. Cellular Networks and Power Model

The power model of a typical data transmission in UMTS is shown in Fig. 1. Initially, the radio interface stays in the IDLE state, consuming very low power. When there is a data transmission request, it promotes to the DCH state. After the completion of data transmission, it stays in the DCH state for some time and then demotes to the FACH state before returning to the IDLE state. The extra time spent in the high-power DCH and FACH states is called the tail time. HSPA and LTE have similar power models. Thus, we generalize the power consumption of the cellular interface into three states: promotion, data transmission and tail. The power in the promotion and tail states is denoted as P_{pro} and P_{tail}, respectively. We differentiate the power for uploads from downloads in data transmission, and denote them as P_{up} and P_{down}, respectively.

[Fig. 2. Execution model for offloaded tasks.]

B. Task Execution Model

A task can be executed locally on the mobile device or executed on the server through offloading. If task T_i is executed on the mobile device, its energy consumption is denoted as E^i_{local}. If T_i is offloaded to the remote server with the help of a proxy, the energy consumption consists of two parts: the P2P part between the client and the proxy, and the cellular part between the proxy and the server. During offloading, the offloaded task may need some input data, denoted as s^i_{up}, which is sent from the client to the proxy and then uploaded from the proxy to the server. The offloaded task may also generate some output data, denoted as s^i_{down}, which is downloaded from the server to the proxy and then sent from the proxy to the client.

For the P2P part, the client and proxy use the WiFi Direct interface to transmit the offloaded task. WiFi Direct has much higher speed and larger transmission range than its Bluetooth counterpart. For WiFi Direct, the promotion and tail energy are negligible. Thus, the energy consumption to offload task T_i over the P2P link is calculated as

    E^i_{p2p} = P_{p2p} (s^i_{up} + s^i_{down}) / r_{p2p},    (1)

where P_{p2p} is the data transmission power, and r_{p2p} is the transmission rate.

The cellular part consists of three steps: sending the upload data, executing the task on the server, and receiving the download data. Sending and receiving the data of task T_i are denoted as two subtasks T^i_{up} and T^i_{down}, and their energy consumptions are calculated as E^i_{up} = P_{up} s^i_{up} / r_{up} and E^i_{down} = P_{down} s^i_{down} / r_{down}, where the upload and download data rates are denoted as r_{up} and r_{down}.

There are two cases to calculate the energy of offloading task T_i over the cellular network. In the first case (Fig. 2(a)), after sending the upload data (T^i_{up}) to the server, the proxy is idle waiting for the download data (T^i_{down}) while T_i is executed on the server. Let Δt denote the interval between subtasks T^i_{up} and T^i_{down} (i.e., T_i's execution time on the server). Then, the energy consumption between these two subtasks (denoted as E(T^i_{up}, T^i_{down})) depends on Δt: 1) If Δt is larger than the tail time t_{tail}, the proxy will consume some extra promotion energy and tail energy. 2) If Δt is smaller than t_{tail}, there is a partial tail and no promotion energy. In summary,

    E(T^i_{up}, T^i_{down}) = { P_{pro} t_{pro} + P_{tail} t_{tail},   if Δt > t_{tail}
                              { P_{tail} · max{Δt, 0},                 otherwise.        (2)

In the second case (Fig. 2(b)), after sending the upload data (T^i_{up}) to the server, the proxy is busy offloading other tasks. Then there could be multiple subtasks between T^i_{up} and T^i_{down}. Let S_i denote the set of all offloaded tasks including T_i, and let S̄_i = S_i \ {T_i}. Each set can be considered as a sequence of subtasks ordered by their arrival times. We can use Eq. (2) to calculate the energy between adjacent subtasks and then get the overall energy consumption of set S_i and set S̄_i (denoted as E(S_i) and E(S̄_i)). Then the energy consumption of T_i in the cellular part is calculated as

    E^i_{cell} = E(S_i) − E(S̄_i).    (3)
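To make Eqs. (1)–(3) concrete, here is a small numerical sketch. It is not the paper's implementation: the power and rate constants are hypothetical placeholders, and representing each subtask as a (start, duration, power) transmission event is our own simplification.

```python
def e_p2p(s_up, s_down, p_p2p=0.8, r_p2p=20e6):
    """Eq. (1): WiFi Direct energy between client and proxy
    (promotion/tail energy assumed negligible on the P2P link)."""
    return p_p2p * (s_up + s_down) / r_p2p

def cellular_energy(events, p_pro=0.5, t_pro=2.0, p_tail=0.6, t_tail=12.0):
    """Cellular energy of a sequence of subtask transmissions on the proxy.

    events: (start, duration, power) tuples sorted by start time.
    Each gap between consecutive transmissions is charged per Eq. (2):
    a full tail plus a re-promotion if the gap exceeds t_tail,
    otherwise a partial tail.
    """
    if not events:
        return 0.0
    total = p_pro * t_pro                 # initial IDLE -> DCH promotion
    for i, (start, dur, power) in enumerate(events):
        total += power * dur              # P_up*s/r or P_down*s/r term
        if i + 1 < len(events):
            gap = events[i + 1][0] - (start + dur)
            if gap > t_tail:
                total += p_tail * t_tail + p_pro * t_pro
            else:
                total += p_tail * max(gap, 0.0)
        else:
            total += p_tail * t_tail      # tail after the last subtask
    return total

# Eq. (3): cellular energy attributable to Ti = E(S_i) - E(S_i \ {Ti}).
# Here Ti's upload/download interleave with another task Tj on the proxy.
all_events = [(0.0, 2.0, 1.2),    # Ti upload
              (5.0, 1.0, 1.2),    # Tj upload
              (10.0, 1.0, 1.0),   # Tj download
              (20.0, 1.0, 1.0)]   # Ti download
without_ti = [(5.0, 1.0, 1.2), (10.0, 1.0, 1.0)]
e_cell_ti = cellular_energy(all_events) - cellular_energy(without_ti)

# Offloading Ti pays off only if P2P plus cellular energy undercuts
# local execution (e_local here is a made-up value, about 15 J).
e_local = 15.0
e_offload = e_p2p(4e6, 2e6) + e_cell_ti   # roughly 0.24 J + 10.6 J
```

Note how the difference in Eq. (3) automatically charges T_i only for the tails and promotions its own subtasks add on top of the already-scheduled traffic, which is why interleaving tasks on a busy proxy is cheaper than offloading each in isolation.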
[Table I. Mobile devices and network types.]
[Fig. 3. Downlink throughput of different carriers at different locations.]
[Fig. 4. Energy and delay with/without computation offloading.]
[Fig. 5. Overview of the PACO architecture.]

C. Motivation for Peer-Assisted Computation Offloading

An existing study has shown that mobile devices within an area may have different service quality, especially when different service providers are used [9]. How does this difference in service quality affect the efficiency of computation offloading? To answer this question, we have run some experiments.

Our testbed consists of two types of smartphones, served by two cellular carriers, as described in Table I. We picked 6 popular locations on our campus, and used these two phones to send data to our Linux server. We measured the data throughput of the different carriers at each location, and the results are shown in Fig. 3. Then, we conducted computation offloading experiments at location 1, where Carrier 1 has extremely low data throughput but Carrier 2 has much better service quality.

We have implemented an Optical Character Recognition (OCR) application for Android smartphones which automatically recognizes the characters in images and outputs the text. The detailed setup and implementation of our testbed will be discussed in Section VII. We conduct experiments in three modes: no offloading, and offloading through each of the two cellular networks. We run the application to recognize 10 images and repeat the test several times to measure the average energy consumption and delay.

The results are shown in Fig. 4. As can be seen, offloading computation with poor service quality (Carrier1-offload) may consume more energy and increase the delay compared to executing the computation locally.
On the other hand, computation offloading under good service quality (Carrier2-offload) can significantly reduce the energy consumption and delay. Based on these results, mobile devices with poor service quality should leverage a node with better service quality for computation offloading.

IV. PACO SYSTEM ARCHITECTURE

PACO considers the service quality difference among mobile devices, and leverages peers with better service quality for computation offloading. In this section, we present a high-level overview of the PACO architecture.

The architecture of PACO is shown in Fig. 5. PACO shares a similar design with CloneCloud and ThinkAir by creating virtual machines (VMs) of a complete smartphone system in the cloud. In this way, PACO enables easy computation offloading between devices of diverging architectures, even different instruction set architectures (e.g., ARM-based smartphones and x86-based servers).

On the mobile device side, PACO consists of three components. (1) Profilers for device, application and network. The device profiler measures the mobile device's energy consumption characteristics and builds its energy model at initialization time. The application profiler tracks a number of parameters related to program execution, such as the data size, the execution time and the resource requirements of individual tasks. The network profiler continuously monitors the network condition, such as the data rate of the cellular network. (2) Neighbor discovery, which identifies neighboring nodes that support the PACO service and collects a list of network quality profiles from them. (3) Offload engine, which determines whether to offload computation tasks and to which node (proxy) to offload. If a mobile device is chosen as a PACO proxy, its offload engine also handles the communication with the cloud server. It receives offloading requests from PACO clients, and sends
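The client-side components described in this section can be sketched as a minimal skeleton. All class and method names below are hypothetical illustrations of the architecture in Fig. 5, not PACO's actual code, and the proxy-selection rule is a deliberately naive stand-in for the mechanism of Section V.

```python
from dataclasses import dataclass

@dataclass
class NetworkProfile:
    """Service-quality summary a node's network profiler advertises."""
    node_id: str
    uplink_rate: float     # bits/s over the node's cellular link
    downlink_rate: float   # bits/s

class NeighborDiscovery:
    """Tracks nearby PACO-capable nodes and their reported profiles."""
    def __init__(self):
        self.profiles = {}

    def report(self, profile):
        """Record (or refresh) a neighbor's network-quality profile."""
        self.profiles[profile.node_id] = profile

    def best_proxy(self, own_profile):
        """Pick the neighbor with the fastest uplink, but only if it
        actually beats our own cellular service quality."""
        if not self.profiles:
            return None
        best = max(self.profiles.values(), key=lambda p: p.uplink_rate)
        return best if best.uplink_rate > own_profile.uplink_rate else None

# A client with poor service quality discovers two neighbors and picks
# the one with the better uplink as its offloading proxy.
discovery = NeighborDiscovery()
discovery.report(NetworkProfile("peerA", 2e6, 5e6))
discovery.report(NetworkProfile("peerB", 8e6, 20e6))
client = NetworkProfile("client", 0.5e6, 1e6)
proxy = discovery.best_proxy(client)       # selects peerB
```

A client already enjoying good service quality gets `None` back and simply offloads over its own cellular link, matching the design goal that PACO only engages a proxy when the peer's link is genuinely better.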