IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 65, NO. 7, JULY 2017


Delay-Optimal Buffer-Aware Scheduling With Adaptive Transmission

Xiang Chen, Student Member, IEEE, Wei Chen, Senior Member, IEEE, Joohyun Lee, Member, IEEE, and Ness B. Shroff, Fellow, IEEE

Abstract—In this paper, we aim to obtain the optimal tradeoff between the average delay and the average power consumption in a communication system. In our system, arrivals occur in each timeslot according to a Bernoulli arrival process and are buffered at the transmitter while waiting to be scheduled. We consider a finite buffer and allow the scheduling decision to depend on the buffer occupancy. To capture the behavior of practical communication systems, the transmission power is assumed to be an increasing and convex function of the number of packets transmitted in each timeslot. This problem is modeled as a constrained Markov decision process (CMDP). We first prove that the optimal policy of the Lagrangian relaxation of the CMDP is deterministic and threshold-based. We then show that the optimal delay-power tradeoff curve is convex and piecewise linear, and that the optimal policies of the original problem are also threshold-based. Based on these results, we propose an algorithm to obtain the optimal policy and the optimal tradeoff curve, and show that the proposed algorithm is much more efficient than general-purpose methods. The theoretical results and the algorithm are validated by linear programming and simulations.

Index Terms—Cross-layer design, joint channel- and buffer-aware scheduling, Markov decision process, queueing, energy efficiency, average delay, delay-power tradeoff, linear programming.

I. INTRODUCTION

SCHEDULING to minimize delay or power has been widely studied and is becoming increasingly important as delay-sensitive applications emerge, such as instant messaging (IM), social network services (SNS), and streaming media.
On the other hand, the mobility and portability requirements of communication terminals impose stringent energy constraints.

Manuscript received September 7, 2016; revised February 8, 2017; accepted March 27, 2017. Date of publication April 6, 2017; date of current version July 13, 2017. The associate editor coordinating the review of this paper and approving it for publication was L. Chen. This work has been supported in part by the National Science Foundation of China (NSFC) under Grants No. 61671269, No. 61322111, No. 61621091, in part by the U.S. NSF under Grants CNS-1618566, CNS-1421576, CNS-1409336, in part by the China National 973 Program under Project No. 2013CB336600, and in part by a grant from the Army Research Office ARO-W911NF-14-1-0368. (Corresponding author: Wei Chen.)
X. Chen and W. Chen are with the Department of Electronic Engineering, Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University, Beijing 100084, China (e-mail: chen-xiang12@mails.tsinghua.edu.cn; wchen@tsinghua.edu.cn).
J. Lee is with the Department of ECE, The Ohio State University, Columbus, OH 43210 USA (e-mail: lee.7119@osu.edu).
N. B. Shroff is with the Department of ECE and the Department of CSE, The Ohio State University, Columbus, OH 43210 USA (e-mail: shroff@ece.osu.edu).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TCOMM.2017.2691711

Fig. 1. Energy/Transmission versus Bit/Transmission with adaptive M-PSK (target BER = 10^-5, noise power spectral density N_0 = -150 dBm/Hz).

In typical communication systems, for fixed channel conditions, the power efficiency (per bit transmitted) rapidly decreases as the transmission rate is increased. In other words, the power cost is convex in the transmission rate. Below are two canonical examples of communication systems that demonstrate this convex behavior.

1) The information-theoretically optimal transmission rate is R = (1/2) log_2(1 + P/N).
Therefore the power needed to transmit s bit(s) is E_s = N(4^s - 1), which is strictly increasing and convex in s.

2) Consider an adaptive M-PSK transmission system with a fixed bit error rate (BER). The BER expression for M-PSK is given in [1, eq. (8.31)]. We fix the BER at 10^-5 and the one-sided noise power spectral density at N_0 = -150 dBm/Hz. The resulting energy-bit curve, shown in Fig. 1, is strictly increasing and convex.

0090-6778 © 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

The convexity of the power cost in the transmission rate creates a natural tradeoff between power and delay. As we increase the transmission rate, the delay becomes shorter at the cost of lower power efficiency, and vice versa. Our main goal is to characterize the optimal delay-power tradeoff and obtain an optimal scheduling policy for a given average power constraint.

The optimal delay-power tradeoff and the optimal scheduling policy in the point-to-point communication scenario have been studied in [2]-[12], under the convexity assumption for the power cost. Among these works, the power cost is modeled based on Shannon's formula in [3]-[7]. Since there is no interference in the point-to-point scenario, the power cost is convex in the transmission rate (bits/transmission), similar to the information-theoretic example introduced above. The Lagrange multiplier method has been applied in these works to transform the constrained optimization into an unconstrained problem, thereby simplifying it.
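Before moving on, the convexity claimed in the first example can be checked numerically. The sketch below is an illustration (the noise power is normalized to N = 1, an assumption for readability): it evaluates E_s = N(4^s - 1) and inspects its first differences.

```python
# Sketch of the information-theoretic power-cost example (assumed
# normalization: noise power N = 1). Checks that E_s = N*(4**s - 1) is
# strictly increasing and convex in s by inspecting first differences.

def power_cost(s, noise=1.0):
    """Power needed to send s bits per slot at capacity R = 0.5*log2(1 + P/N)."""
    return noise * (4**s - 1)

costs = [power_cost(s) for s in range(6)]
diffs = [b - a for a, b in zip(costs, costs[1:])]

# Strictly increasing: all first differences are positive.
assert all(d > 0 for d in diffs)
# Convex: the first differences are nondecreasing.
assert all(d2 >= d1 for d1, d2 in zip(diffs, diffs[1:]))
print(costs)   # [0, 3, 15, 63, 255, 1023]
```

The doubling-and-worse growth of the differences is exactly the "power efficiency per bit rapidly decreases" behavior described above.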

Based on this, the properties of the delay-power tradeoff curve have been studied in [3], [6], [7], and [9], and the monotonicity of the optimal scheduling policy is investigated in [2], [4], [6], and [8]-[12]. However, most papers neither trace back to the original constrained problem nor prove the equivalence between the original problem and its Lagrangian relaxation. Only in [4] and [9]-[11] are properties of the optimal policy for the constrained problem tackled based on the results from the unconstrained problem. However, in [4], the power cost is fixed by Shannon's formula, so the results cannot be applied to more general power models. In [9], properties necessary for the proof, such as the unichain property of policies, the multimodularity of costs, and stochastically increasing buffer transition probabilities, are not proven but simply assumed. In [10] and [11], only binary control is considered, i.e., the scheduler decides only whether or not to transmit.

We studied optimal scheduling in [13]-[15], considering a single-queue single-server system with a fixed transmission rate, and obtained analytical solutions. Interestingly, in these cases, the monotonicity of the optimal policy can be obtained directly by steady-state analysis of the Markov process and a linear programming formulation. Similar approaches have been applied in [16] and [17]. We generalized our model to include adaptive transmission in [18], which is much harder to analyze because of the more complicated state transitions of the Markov chain. In this paper, we continue this line of research, analyze the problem within the CMDP framework, and present our thorough analysis and results. We first consider the Lagrangian relaxation of the problem. In the unconstrained MDP problem, we prove that the optimal policy is deterministic and threshold-based.
Then, for the CMDP problem itself, we fully characterize the optimal delay-power tradeoff. We prove that the tradeoff curve is convex and piecewise linear, and that its vertices are achieved by the optimal policies of the relaxed problem. Moreover, neighboring vertices of the tradeoff curve are achieved by policies which take different actions in only one state. These discoveries enable us to show that the solution to the overall CMDP problem is also of a threshold form, and to devise an algorithm that efficiently obtains the optimal tradeoff curve.

The remainder of this paper is organized as follows. The system model is described in Section II, where the delay-power tradeoff problem is formulated as a constrained Markov decision process. In Section III, based on the Lagrangian relaxation of the CMDP problem, it is proven that the optimal policy for the average combined cost is deterministic and threshold-based. We then conduct steady-state analysis in Section IV, based on which we prove that the optimal delay-power tradeoff curve is piecewise linear and that the optimal policies for the CMDP problem are also threshold-based. Moreover, in this section, we propose an efficient algorithm to obtain the optimal delay-power tradeoff curve, and an equivalent linear programming problem is formulated to confirm the validity and efficiency of the theoretical results and the algorithm. Simulation results are given in Section V, and Section VI concludes the paper.

Fig. 2. System Model.

II. SYSTEM MODEL

We consider the system model shown in Fig. 2. Time is divided into timeslots. Assume that at the end of each timeslot, data arrive as a Bernoulli process with parameter α. Each data arrival contains A packets. Define 𝒜 = {0, 1} and a[n] ∈ 𝒜, where a[n] = 1 or 0 indicates whether or not data arrive in timeslot n; hence Pr{a[n] = 1} = α and Pr{a[n] = 0} = 1 - α. Let s[n] denote the number of data packets transmitted in timeslot n.
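As a quick illustration of the arrival model, the mean arrival rate of αA packets per timeslot can be checked by sampling. The values α = 0.6 and A = 2 below are assumptions for illustration, not taken from the paper's simulations.

```python
import random

# Sanity check of the Bernoulli arrival model: a[n] ~ Bernoulli(alpha),
# each arrival carries A packets, so the mean rate is alpha*A per slot.
# Parameters are illustrative assumptions.
random.seed(1)
alpha, A, N = 0.6, 2, 100_000
arrivals = [A * (1 if random.random() < alpha else 0) for _ in range(N)]
rate = sum(arrivals) / N
assert abs(rate - alpha * A) < 0.02   # close to 1.2 for these parameters
```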
We assume that at most S packets can be transmitted in each timeslot because of the constraints of the transmitter. We require S ≥ A. Define 𝒮 = {0, 1, ···, S}, thus s[n] ∈ 𝒮.

Let p[n] denote the power consumption in timeslot n. Transmitting s packet(s) incurs power consumption E_s, where 0 ≤ s ≤ S. Therefore p[n] = E_{s[n]}. Transmitting 0 packets costs no power, hence E_0 = 0. Based on the analysis in the Introduction, it is important to capture the convex relationship between power and bits transmitted. Therefore we assume that E_s is strictly increasing and convex in s.

The arrivals can be stored in a finite buffer holding up to Q packets. For a typical communication system, the buffer size should be larger than the data size of an arrival, thus we set Q ≥ A. Define 𝒬 = {0, 1, ···, Q}. Let q[n] ∈ 𝒬 denote the number of packets in the buffer at the beginning of timeslot n. The amount of transmission s[n] is decided by our scheduling policy, based on the historical information of the buffer states and the data arrivals. The dynamics of the buffer are given by

q[n+1] = q[n] - s[n] + A·a[n].    (1)

To avoid overflow or underflow, the number of transmitted packets in each timeslot n should satisfy 0 ≤ q[n] - s[n] ≤ Q - A.

For each timeslot n, consider the queue length q[n] and the transmission s[n] as the state and the action of the system, respectively. According to (1), the transition probability is

Pr{q[n+1] = j | q[n] = q, s[n] = s} = α if j = q - s + A; 1 - α if j = q - s; 0 otherwise.    (2)

This shows that the probability distribution of the next state is determined by the current state and the chosen action. The queue length q[n] and the transmission power p[n] can be treated as two immediate costs, determined by the current state and the current action, respectively. Therefore, this system can be considered a Markov decision process (MDP). In the following, we will show that the queue

Fig. 3. Markov chain of q[n] (Q = 7, A = 2, S = 3).

length cost corresponds to the average delay, hence there is a tradeoff between these two costs.

A decision rule δ_n : 𝒬 × 𝒮 × ··· × 𝒮 × 𝒬 → P(𝒮) (taking n state(s) and (n-1) action(s)) specifies the action s[n] at timeslot n according to a probability distribution p_{δ_n(·)}(·) on the set of actions 𝒮, i.e.,

Pr{s[n] = s | q[1] = q_1, s[1] = s_1, ···, q[n] = q_n} = p_{δ_n(q_1, s_1, ···, q_n)}(s).    (3)

Define a transmission policy γ = (δ_1, δ_2, ···) as a sequence of decision rules, and let E^γ_{q_0}{·} denote the expectation when policy γ is applied and the initial state is q_0. The average power consumption under policy γ is

P_γ = lim_{N→∞} (1/N) E^γ_{q_0}{ Σ_{n=1}^{N} p[n] }.    (4)

Let D_γ denote the average delay under policy γ. According to Little's law, the average queueing delay is the average queue length divided by the average arrival rate, i.e.,

D_γ = (1/(αA)) lim_{N→∞} (1/N) E^γ_{q_0}{ Σ_{n=1}^{N} q[n] }.    (5)

Therefore, policy γ determines Z_γ = (P_γ, D_γ), which is a point in the delay-power plane. Define Z_γ Z_γ′ as the line segment connecting Z_γ and Z_γ′. Let Γ denote the set of all feasible policies which guarantee no overflow or underflow. Define ℛ = {Z_γ | γ ∈ Γ} as the set of all feasible points in the delay-power plane. Intuitively, since the power consumption for each data packet increases if we want to transmit faster, there is a tradeoff between the average queueing delay and the average power consumption. Define the optimal delay-power tradeoff curve ℒ = {(P, D) ∈ ℛ | for all (P′, D′) ∈ ℛ, either P′ ≥ P or D′ ≥ D}.

Since there are two costs in the MDP, minimizing the average delay under an average power constraint P_th yields the CMDP problem

min_{γ∈Γ} D_γ    (6a)
s.t. P_γ ≤ P_th.    (6b)

A. Reduction to Stationary Policies

Here, we show that in order to solve our problem, it is enough to restrict attention to the class of stationary policies.
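The cost definitions above lend themselves to a direct Monte-Carlo check. The sketch below simulates the buffer recursion (1) and estimates (4) and (5); the parameters and the simple transmit-min(q, 2) rule are illustrative assumptions, not the paper's simulation setup.

```python
import random

# Monte-Carlo check of the cost definitions (4)-(5) on a toy policy.
# Assumed setup: Q=7, A=2, alpha=0.6, power E_s = 4**s - 1, and a simple
# rule "transmit min(q, 2) packets".
random.seed(0)
Q, A, alpha = 7, 2, 0.6
E = [4**s - 1 for s in range(4)]

def policy(q):
    s = min(q, 2)
    while q - s > Q - A:          # force extra service to avoid overflow
        s += 1
    return s

N = 200_000
q, sum_q, sum_p = 0, 0, 0
for _ in range(N):
    s = policy(q)
    sum_q += q                    # queue length at the start of the slot
    sum_p += E[s]
    a = 1 if random.random() < alpha else 0
    q = q - s + A * a             # buffer recursion (1)

P_avg = sum_p / N                 # time-average power, Eq. (4)
D_avg = (sum_q / N) / (alpha * A) # Little's law, Eq. (5)
print(P_avg, D_avg)
```

For this particular rule the chain only visits states 0 and 2, so the estimates settle near P ≈ 9 and D ≈ 1 slot; the point of the sketch is only that (4) and (5) are plain time averages.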
A stationary policy for an MDP is one in which the probability distribution used to determine s[n] is a function of the current state q[n] only, i.e., δ_n : 𝒬 → P(𝒮), and the decision rules for all timeslots are the same. For a CMDP, it is proven in [19, Th. 11.3] that stationary policies are complete, which means that stationary policies can achieve performance as good as that of any other policy. Therefore we only need to consider stationary policies for this problem. By varying the value of P_th, the optimal delay-power tradeoff curve ℒ can be obtained.

Let f_{q,s} denote the probability of transmitting s packet(s) when q[n] = q, i.e.,

f_{q,s} = Pr{s[n] = s | q[n] = q}.    (7)

Therefore we have Σ_{s=0}^{S} f_{q,s} = 1 for all q = 0, ···, Q. We guarantee the avoidance of overflow and underflow by setting f_{q,s} = 0 if q - s < 0 or q - s > Q - A. Let F denote a (Q+1) × (S+1) matrix whose element in the (q+1)th row and the (s+1)th column is f_{q,s}. Matrix F thus represents a decision rule, and moreover a stationary transmission policy. As explained above, we only need to consider stationary policies, so we will use F to represent a stationary policy in the rest of the paper. Let P_F and D_F denote the average power consumption and the average queueing delay under policy F. Let ℱ denote the set of all feasible stationary policies which guarantee no overflow or underflow, and let ℱ_D denote the set of all stationary and deterministic policies which guarantee no overflow or underflow. Thus the optimization problem (6) is equivalent to

min_{F∈ℱ} D_F    (8a)
s.t. P_F ≤ P_th.    (8b)

B. Reduction to Unichains

Given a stationary policy for a Markov decision process, there is an inherent Markov reward process (MRP) with q[n] as the state variable. Let λ_{i,j} denote the transition probability from state i to state j. An example of the transition diagram is shown in Fig.
3, where λ_{i,i} for i = 0, ···, Q are omitted to keep the diagram legible.

The Markov chain can have more than one closed communication class under certain transmission policies. For example, in the chain of Fig. 3, if we apply the scheduling policy f_{0,0} = 1, f_{1,0} = 1, f_{2,2} = 1, f_{3,2} = 1, f_{4,2} = 1, f_{5,2} = 1,

f_{6,2} = 1, f_{7,2} = 1, and f_{i,j} = 0 for all other entries, it can be seen that states 4, 5, 6, and 7 are transient, while {0, 2} and {1, 3} are two closed communication classes. In this case, the limiting probability distribution and the average cost depend on the initial state and the sample path. However, the following theorem shows that we only need to study the case of a single closed communication class.

Theorem 1: If the Markov chain generated by policy F has more than one closed communication class, denoted C_1, ···, C_L with L > 1, then for every 1 ≤ l ≤ L there exists a policy F_l such that the Markov chain generated by F_l has C_l as its only closed communication class. Moreover, the limiting distribution and the average cost of the Markov chain generated by F starting from any state c ∈ C_l are the same as those of the Markov chain generated by F_l.

Proof: See Appendix A.

According to Theorem 1, the constructed policy F_l and the original policy F have the same average power and average delay for any closed communication class C_l to which policy F converges. Therefore, without loss of generality, we can focus on Markov chains with only one closed communication class, which are called unichains. For a unichain, the initial state and the sample path do not affect the limiting distribution or the average cost, which means the parameter q_0 in E^γ_{q_0}{·} does not affect the value of the expectation.

As we will demonstrate in the following two sections, the optimal policies for the constrained MDP problem and its Lagrangian relaxation are threshold-based. Here, we define a stationary policy F to be threshold-based if and only if there exist (S+1) thresholds 0 ≤ q_F(0) ≤ q_F(1) ≤ ··· ≤ q_F(S) ≤ Q such that f_{q,s} > 0 only when q_F(s-1) ≤ q ≤ q_F(s) (we set q_F(-1) = -1 for the inequality when s = 0).
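This definition can be checked mechanically on a policy matrix F. The helper below is an illustrative reformulation, not the paper's code: under the definition, each state's support must be a single action or two adjacent actions, and the supported actions must be nondecreasing in q.

```python
# Mechanical check of the threshold-based property for a policy matrix
# F[q][s] = f_{q,s}. Reformulation used here (an assumption): every
# state's support is {s} or {s, s+1}, and the supported actions do not
# decrease with the queue length q.

def is_threshold_based(F):
    """True iff each state's support is one or two adjacent actions, nondecreasing in q."""
    prev_lo, prev_hi = 0, 0
    for q, row in enumerate(F):
        support = [s for s, p in enumerate(row) if p > 0]
        if not support or support[-1] - support[0] > 1:
            return False
        lo, hi = support[0], support[-1]
        if q > 0 and (lo < prev_lo or hi < prev_hi):
            return False
        prev_lo, prev_hi = lo, hi
    return True

# Deterministic threshold rule, a randomized-at-one-state rule, and a
# non-monotone rule (all illustrative, with Q=7, S=3):
F_good = [[1 if s == min(q, 2) else 0 for s in range(4)] for q in range(8)]
F_mix = [row[:] for row in F_good]
F_mix[2] = [0, 0.3, 0.7, 0]       # state 2 randomizes between s=1 and s=2
F_bad = [[1 if s == 2 - min(q, 2) else 0 for s in range(4)] for q in range(8)]

assert is_threshold_based(F_good)
assert is_threshold_based(F_mix)   # probabilistic policies can qualify too
assert not is_threshold_based(F_bad)
```

The F_mix example mirrors the remark that follows: a policy may randomize between s and s+1 exactly at a threshold state and still be threshold-based.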
This means that, under policy F, when the queue state is larger than threshold q_F(s-1) and smaller than q_F(s), the policy transmits s packet(s); when the queue state equals threshold q_F(s), it transmits s or (s+1) packet(s). Note that under this definition, probabilistic policies can also be threshold-based.

III. OPTIMAL DETERMINISTIC THRESHOLD-BASED POLICY FOR THE LAGRANGIAN RELAXATION PROBLEM

In (8), we formulated the optimization problem as a constrained MDP, which is difficult to solve in general. Therefore, we first study the Lagrangian relaxation of (8) in this section, and prove that the optimal policy for the relaxed problem is deterministic and threshold-based. We will then use these results to show that the solution to the original non-relaxed CMDP problem is also of a threshold type.

Let μ ≥ 0 denote the Lagrange multiplier. The Lagrangian relaxation of (8) is

min_{F∈ℱ} lim_{N→∞} (1/(αA)) (1/N) E^F_{q_0}{ Σ_{n=1}^{N} (q[n] + αAμ·p[n]) } - μP_th.    (9)

In (9), the term μP_th is constant. Therefore, the Lagrangian relaxation problem minimizes a combined average cost built from (q[n] + αAμ·p[n]). This is an infinite-horizon Markov decision process with an average cost criterion, for which it is proven in [20, Th. 9.1.8] that there exists an optimal stationary deterministic policy. For a stationary deterministic policy F ∈ ℱ_D, let s_F(q) denote the number of packet(s) transmitted when q[n] = q; in other words, f_{q,s_F(q)} = 1 for all q. Define η = αAμ. Then (9) is equivalent to

min_{F∈ℱ_D} lim_{N→∞} (1/N) E^F_{q_0}{ Σ_{n=1}^{N} (q[n] + η·E_{s_F(q[n])}) }.    (10)

The optimal policy for (10) has the following property.

Theorem 2: An optimal policy F for (10) satisfies s_F(q+1) - s_F(q) = 0 or 1 for all 0 ≤ q < Q. Therefore F is a threshold-based policy.

Proof: See Appendix B.

The proof applies a nested induction within the policy iteration algorithm for the Markov decision process. Theorem 2 formalizes the intuitive conclusion that more data should be transmitted when the queue is longer. More specifically
More specificallyspeaking, for an optimal deterministic policy F, there exists(S 1) thresholds q F (0) q F (1) · · · q F (S), suchthatf q,s 1 q F (s 1) q q F (s), s 0, · · · , Sf q,s 0 else(11)where q F ( 1) 1. The form of the optimal policy satisfiesour definition of threshold-based policy in Section II.Moreover, we can have the following two corollaries.Corollary 1: Under any optimal threshold-based policy F,there will be no transmission only when q[n] 0. In otherwords, threshold q F (0) 0.Proof: This is an intuitive result, because every datapacket will be transmitted sooner or later, which costs atleast 1 power, thus not transmitting when there are backlogs is just a waste of time. The following is its rigorousproof.If there exists an optimal threshold-based policy F F Dwhere s F (q1 ) 0, q1 0. Since s F (q) has the thresholdbased property, we have s F (1) 0. Construct a policy F where s F (q) s F (q 1) for 0 q Q and s F (Q) A.It can be seen that F F D . State 0 is a transient state underpolicy F, and state Q is a transient state under policy F .States 1, · · · , Q under policy F and states 0, · · · , Q 1 underpolicy F have the exactly same state transition, except thatthe states for F are 1 smaller than the states for F. Thereforethe average power consumption under two policies is the sameand the average queue length for F is 1 smaller, which meansthe average delay for F is strictly smaller. Therefore F is notan optimal policy, which conflicts with the assumption. Hencethe optimal threshold-based policy should have that there willbe no transmissions only when q[n] 0.Corollary 2: For an optimal threshold-based policy F,there is no need to transmit more than A packets. In otherwords, threshold q F (A) q F (A 1) · · · q F (S) Q.Proof: If there exists an optimal threshold-based policyF F D where q1 is the smallest state such that s F (q1 ) A.Since s F (q) has the threshold-based property, for all q q1 ,

we have s_F(q) > A. Also, for all q < q_1, we have s_F(q) ≤ A. Construct a policy F′ with s_F′(q) = s_F(q) for q < q_1 and s_F′(q) = A for q ≥ q_1. It can be seen that F′ ∈ ℱ_D. Since q_1, ···, Q are transient states under both policies, and the transmissions in the recurrent states of the two policies are exactly the same, policy F′ has the same performance as policy F. Therefore, for an optimal threshold-based policy, there is no need to transmit more than A packets.

IV. OPTIMAL THRESHOLD-BASED POLICY FOR THE CMDP

In Section III, we proved that the policy minimizing the combined cost is deterministic and threshold-based. We now prove that the solution to the overall CMDP problem also takes a threshold form. We first conduct a steady-state analysis of the Markov decision process and discover that the feasible average delay-power region is a convex polygon, and that the optimal delay-power tradeoff curve is piecewise linear, with neighboring vertices obtained by deterministic policies which take different actions in only one state. Based on this, the optimal threshold-based policies obtained in Section III are shown to correspond to the vertices of the piecewise-linear curve. Therefore, the optimal policy for the CMDP problem, which is a convex combination of two deterministic threshold-based policies, is also shown to take a threshold form. We then provide an efficient algorithm to obtain the optimal delay-power tradeoff curve, and a linear program is formulated to confirm our results.

Based on Theorem 1, without loss of generality, we can focus on unichains, for which the steady-state probability distribution exists. Let π_F(q) denote the steady-state probability of state q under policy F. Define π_F = [π_F(0), ···, π_F(Q)]^T. Let Φ_F denote a (Q+1) × (Q+1) matrix whose element in the (i+1)th column and the (j+1)th row is λ_{i,j}, which is determined by policy F. Let I denote the
Let I denote theidentity matrix. Define 1 [1, · · · , 1]T , and 0 [0, · · · , 0]T .We won’t specify the size of I, 1 or 0 if there is no ambiguity. 1TDefine G F F I. Define H F G F (0 : (Q 1), :) 1and c .0From the definition of the steady-state distribution, we haveG F π F 0 and 1T π F 1. For a unichain, the rank of G Fis Q. Therefore, we can conclude that H F is invertible andπF H 1F c.(12)We can express the average power consumption PF andthe average delay D F using the steady-state probabilitydistribution. For state q, transmitting s packet(s) will costS s f 0,s , · · · , s with probability f q,s . Define p F [ s 0ST f],whichisafunctionofF,thusthe averagesQ,ss 0power consumptionQPF Sπ F (q)q 0 s f q,s pTF π F .s 0(13)2921Similarly, define d [0, 1, · · · , Q]T , thus the average delayDF 1αAQqπ F (q) q 01 Td π F.αA(14)A. Partially Linear Property of Scheduling PoliciesThe mapping from F to Z F (PF , D F ) has a partiallylinear property shown in the following lemma.Lemma 1: F and F are two scheduling policies that aredifferent only when q[n] q, i.e. the two matrices aredifferent only in the (q 1)th row. Define F (1 )F F where 0 1. Then1) There exists 0 1 so that PF (1 )PF PF and D F (1 )D F D F . Moreover, it should hold that is a continuous nondecreasing function of .2) When changes from 0 to 1, point Z F moves on the linesegment Z F Z F from Z F to Z F .Proof: See Appendix C.Lemma 1 indicates that the convex combination of scheduling policies which take different actions in only one state willinduce the convex combination of points in the delay-powerplane. Furthermore, we can have the following two results.Theorem 3: The set of all feasible points in the delay-powerplane, R , is a convex polygon whose vertices are all obtainedby deterministic scheduling policies. 
Moreover, the policies corresponding to neighboring vertices of ℛ take different actions in only one state.

Proof: See Appendix D.

Corollary 3: The optimal delay-power tradeoff curve ℒ is piecewise linear, decreasing, and convex. The vertices of the curve are obtained by deterministic scheduling policies. Moreover, the policies corresponding to neighboring vertices of ℒ take different actions in only one state.

Proof: See Appendix E.

B. Optimal Threshold-Based Policy for the CMDP

In the last section, we proved in Theorem 2 that the optimal policy for the combined cost is deterministic and threshold-based. Based on the steady-state analysis, the objective function of the unconstrained MDP problem (10),

lim_{N→∞} (1/N) E^F_{q_0}{ Σ_{n=1}^{N} (q[n] + η·E_{s_F(q[n])}) } = αA·D_F + η·P_F = ⟨(η, αA), Z_F⟩,    (15)

can be seen as the inner product of the vector (η, αA) and Z_F. Since ℛ is a convex polygon, the Z_F minimizing this inner product is always obtained at a vertex of ℒ, as demonstrated in Fig. 4. Since the conclusion of Theorem 2 holds for any η, the vertices of the optimal tradeoff curve are all obtained by optimal policies of the relaxed problem, which are deterministic and threshold-based. Moreover, by Corollary 3, neighboring vertices of ℒ are obtained by policies which take different actions in only one state. Therefore, we have the following theorem.

Theorem 4: Given an average power constraint, the scheduling policy F minimizing the average delay takes the following form: there exist (S+1) thresholds

q_F(0) ≤ q_F(1) ≤ ··· ≤ q_F(S), one of which we label q_F(s*), such that

f_{q,s} = 1 if q_F(s-1) < q ≤ q_F(s), s ≠ s*;
f_{q,s*} = 1 if q_F(s*-1) < q < q_F(s*);
f_{q_F(s*),s*} = ε, f_{q_F(s*),s*+1} = 1 - ε;
f_{q,s} = 0 else,    (16)

where q_F(-1) = -1 and 0 ≤ ε ≤ 1. Therefore F is a threshold-based policy.

Proof: According to Corollary 3, the policies corresponding to neighboring vertices of ℒ are deterministic and take different actions in only one state. In other words, by (11), the thresholds of F and F′ are all the same except that one pair of corresponding thresholds differs by 1. Denote the thresholds of F by q_F(0), q_F(1), ···, q_F(s*), ···, q_F(S), and the thresholds of F′ by q_F(0), q_F(1), ···, q_F(s*) - 1, ···, q_F(S), which means the two policies differ only in state q_F(s*). Since the policy achieving a point on the segment Z_F Z_F′ is a convex combination of F and F′, it must have the form shown in (16).

This form satisfies our definition of threshold-based policies in Section II. According to Theorem 4, policies corresponding to points between vertices of the optimal tradeoff curve, being mixtures of two deterministic threshold-based policies that differ in only one state, also satisfy that definition. When q_F(s-1) < q[n] < q_F(s), we transmit s packet(s). Any optimal scheduling policy F has at most two fractional elements, f_{q_F(s*),s*} and f_{q_F(s*),s*+1}, while all other elements are either 0 or 1.

C. Algorithm to Obtain the Optimal Tradeoff Curve

Here, we propose Algorithm 1 to efficiently obtain the optimal delay-power tradeoff curve. The algorithm is based on the results that the optimal delay-power tradeoff curve is piecewise linear, that its vertices are obtained by deterministic threshold-based policies, and that the policies corresponding to two neighboring vertices take different actions in only one state.

Fig. 4. The minimum inner product of points on ℒ and the weighted vector can always be obtained at vertices of ℒ.

Algorithm 1 Constructing the Optimal Delay-Power Tradeoff
1: Construct F whose thresholds satisfy q_F(s) = s for s ≤ A and q_F(s) = Q for s > A
2: Calculate D_F and P_F
3: F_c ← [F], D_c ← D_F, P_c ← P_F
4: while F_c ≠ ∅ do
5:   F_p ← F_c, D_p ← D_c, P_p ← P_c
6:   F_c ← ∅, slope ← -∞
7:   while F_p ≠ ∅ do
8:     F ← F_p.pop(0)
9:     for all 0 ≤ s′ ≤ A do
10:      Construct F′ with q_F′(s′) = q_F(s′) + 1 and q_F′(s) = q_F(s) for s ≠ s′
11:      NewPolicyProbing()   // probing all possible candidates
12:    end for
13:  end while
14:  Draw the line segment connecting (P_p, D_p) and (P_c, D_c)
15: end while

1: procedure NewPolicyProbing( )
2:   if F′ is feasible and threshold-based then
3:     Calculate D_F′ and P_F′
4:     if D_F′ = D_p and P_F′ = P_p then
5:       F_p.append(F′)   // Z_F′ coincides with Z_p
6:     else if D_F′ < D_p and P_F′ > P_p then
7:       if (D_p - D_F′)/(P_F′ - P_p) > slope then
8:         F_c ← [F′], slope ← (D_p - D_F′)/(P_F′ - P_p)
9:         D_c ← D_F′, P_c ← P_F′   // Z_F′ has the best performance so far
10:      else if (D_p - D_F′)/(P_F′ - P_p) = slope then
11:        if P_F′ = P_c then
12:          F_c.append(F′)   // Z_F′ has the same performance as the current best candidate(s)
13:        else if P_F′ > P_c then
14:          F_c ← [F′], sl…
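A brute-force counterpart of Algorithm 1 can be run on a toy instance: enumerate the deterministic threshold policies (nondecreasing action rules with unit steps, in line with Theorem 2 and Corollaries 1-2), compute each policy's point (P_F, D_F), and take the lower convex hull of these points, which by Corollary 3 is the optimal tradeoff curve. All parameters below are assumptions for illustration, and the steady-state distribution is obtained by a simple damped power iteration rather than (12).

```python
# Brute-force construction of the tradeoff-curve vertices on a toy
# instance (assumed parameters): enumerate deterministic threshold
# policies, compute (P_F, D_F) for each, then take the lower convex hull.
Q, A, S, alpha = 7, 2, 3, 0.6
E = [4**s - 1 for s in range(S + 1)]

def policies():
    """All deterministic feasible action rules s(q), nondecreasing with steps 0/1."""
    def extend(seq):
        q = len(seq)
        if q == Q + 1:
            yield tuple(seq)
            return
        lo, hi = max(0, q - (Q - A)), min(q, S)   # under/overflow feasibility
        for s in range(lo, hi + 1):
            if q == 0 or 0 <= s - seq[-1] <= 1:
                yield from extend(seq + [s])
    yield from extend([])

def perf(rule):
    """(P_F, D_F) via damped power iteration on the induced chain."""
    n = Q + 1
    pi = [1.0 / n] * n
    for _ in range(4000):
        new = [0.0] * n
        for q in range(n):
            new[q - rule[q] + A] += pi[q] * alpha
            new[q - rule[q]] += pi[q] * (1 - alpha)
        pi = [0.5 * a + 0.5 * b for a, b in zip(pi, new)]  # damping avoids periodicity
    P = sum(pi[q] * E[rule[q]] for q in range(n))
    D = sum(q * pi[q] for q in range(n)) / (alpha * A)
    return P, D

pts = sorted(perf(r) for r in policies())
hull = []
for p in pts:                          # Andrew's monotone-chain lower hull
    while len(hull) >= 2:
        (x1, y1), (x2, y2) = hull[-2], hull[-1]
        if (x2 - x1) * (p[1] - y1) <= (p[0] - x1) * (y2 - y1):
            hull.pop()
        else:
            break
    hull.append(p)
# keep only the decreasing (Pareto-optimal) part of the hull
curve = [pt for i, pt in enumerate(hull) if i == 0 or pt[1] < hull[i - 1][1]]
print(curve)
```

The resulting vertex list is piecewise linear, decreasing, and convex, as Corollary 3 predicts; Algorithm 1 reaches the same vertices far more efficiently by walking from one vertex to a neighbor that changes the action in a single state.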
