Towards Agile And Smooth Video Adaptation In Dynamic HTTP Streaming


Guibin Tian and Yong Liu
Department of Electrical and Computer Engineering
Polytechnic Institute of New York University
Brooklyn, NY, USA 11201
gtian01@students.poly.edu, yongliu@poly.edu

ABSTRACT

Dynamic Adaptive Streaming over HTTP (DASH) is widely deployed on the Internet for live and on-demand video streaming services. Video adaptation algorithms in existing DASH systems are either too sluggish to respond to congestion level shifts or too sensitive to short-term network bandwidth variations. Both degrade user video experience. In this paper, we formally study the responsiveness and smoothness trade-off in DASH through analysis and experiments. We show that client-side buffered video time is a good feedback signal to guide video adaptation. We then propose novel video rate control algorithms that balance the needs for video rate smoothness and high bandwidth utilization. We show that a small video rate margin can lead to much improved smoothness in video rate and buffer size. The proposed DASH designs are also extended to work with multiple CDN servers. We develop a fully-functional DASH system and evaluate its performance through extensive experiments on a network testbed and the Internet. We demonstrate that our DASH designs are highly efficient and robust in realistic network environments.

Categories and Subject Descriptors
H.5.1 [Information Systems]: Multimedia Information Systems—Video (e.g., tape, disk, DVI)

General Terms
Design

Keywords
Adaptation, DASH, Emulab, Multiple CDN, SVR

1. INTRODUCTION

Video traffic dominates the Internet. The recent trend in online video streaming is Dynamic Adaptive Streaming over HTTP (DASH), which provides uninterrupted video streaming service to users with dynamic network conditions and heterogeneous devices. Notably, Netflix's online video streaming service is implemented using DASH [1, 2]. In DASH, a video content is encoded into multiple versions at different rates. Each encoded video is further fragmented into small video chunks, each of which normally contains seconds or tens of seconds worth of video. Video chunks can be served to clients using standard HTTP servers in either live or on-demand fashion. Upon network condition changes, a client can dynamically switch video version for the chunks to be downloaded.

Different from traditional video streaming algorithms, DASH does not directly control the video transmission rate. The transmission rate of a chunk is totally controlled by the TCP protocol, which reacts to network congestion along the server-client path. Intuitively, if TCP throughput is high, DASH should choose a high video rate to give the user better video quality; if TCP throughput is low, DASH should switch to a low video rate to avoid playback freezes. To maximally utilize the throughput achieved by TCP and avoid video freezes, DASH video adaptation should be responsive to network congestion level shifts. On the other hand, TCP congestion control incurs inherent rate fluctuations, and cross-traffic rate has both long-term and short-term variations. Adapting video rate to short-term TCP throughput fluctuations will significantly degrade user experience. It is therefore desirable to adapt video rate smoothly.

In this paper, we propose client-side video adaptation algorithms to strike the balance between responsiveness and smoothness in DASH. Our algorithms use client-side buffered video time as feedback signal. We show that there is a fundamental conflict between buffer size smoothness and video rate smoothness, due to the inherent TCP throughput variations. We propose novel video rate adaptation algorithms that smoothly increase the video rate as the available network bandwidth increases, and promptly reduce the video rate in response to sudden congestion level shift-ups. We further show that imposing a buffer cap and reserving a small video rate margin can simultaneously decrease buffer size oscillations and video rate fluctuations. Adopting a machine-learning based TCP throughput prediction algorithm, we also extend our DASH designs to work with multiple CDN servers. Our contribution is four-fold.

1. We formally study the responsiveness and smoothness trade-off in DASH through analysis and experiments. We show that buffered video time is a good reference signal to guide video rate adaptation.

2. We propose novel rate adaptation algorithms that balance the needs for video rate smoothness and bandwidth utilization. We show that a small video rate margin can lead to much improved smoothness in video rate and buffer size.

3. We are the first to develop DASH designs that allow a client to work with multiple CDN servers. We show that machine-learning based TCP throughput estimation algorithms can effectively guide DASH server switching and achieve the multiplexing gain.

4. We implement the proposed algorithms into a fully-functional DASH system, which is evaluated through extensive experiments on a network testbed and the Internet. We demonstrate that our DASH designs are highly efficient and robust in realistic network environments.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
CoNEXT'12, December 10–13, 2012, Nice, France.
Copyright 2012 ACM 978-1-4503-1775-7/12/12 ...$15.00.

The rest of the paper is organized as follows. Section 2 describes the related work. DASH designs for a single server are developed in Section 3. Extensions to the multiple-server case are presented in Section 4. In Section 5, we report the experimental results for the single-server case and the multi-server case on both the Emulab testbed and the Internet. We conclude the paper with a summary and future work in Section 6.

2. RELATED WORK

Although DASH is a relatively new application, due to its popularity it has generated a lot of research interest recently. In [3], Watson systematically introduced the DASH framework of Netflix, which is the largest DASH stream provider in the world. In [4], the authors compared the rate adaptation of three popular DASH clients: the Netflix client, Microsoft Smooth Streaming [5], and Adobe OSMF [6]. They concluded that none of them is good enough: they are either too aggressive or too conservative, and some clients even just jump between the highest and the lowest video rate. Also, all of them have relatively long response times under network congestion level shifts. It was shown in [7] that dramatic video rate changes lead to inferior user quality-of-experience. They further proposed to gradually change the video rate based on available bandwidth measurement. In [8], the authors proposed a feedback control mechanism to control the sending buffer size on the server side. Our video adaptation is driven by buffered video time on the client side, which has direct implication on client video playback. Our scheme does not require any change on the server side. There are also papers on DASH in wireless and mobile networks. In [9], several adaptive media players on the market were tested to see how they perform in challenging streaming scenarios in a mobile 3G network. In [10], Mueller et al. implemented a DASH system and showed that it works in vehicular network environments. In [11], a DASH-like algorithm was proposed for a server to regulate the video uploading from a mobile client.

In DASH, it is important to predict TCP throughput and quickly detect congestion level shifts. One way is to monitor the path using network bandwidth measurement tools like pathload [12]. But measuring available bandwidth itself injects probing traffic into the path, it may take a long time to converge to an acceptable result, and the accuracy of such tools is not guaranteed. In [13] and [14], the authors presented history-based and machine-learning-based TCP throughput prediction. In DASH, video chunks are continuously transmitted from the server to the client, so TCP throughput data can be collected in real time. In our experiments, we found that even simple history-based TCP throughput prediction can achieve higher accuracy than those reported in [13] and [14]. For multi-server DASH, a client needs to continuously evaluate the throughput of a DASH server before switching to it. We implement the light-weight machine learning approach proposed in [14] for the TCP throughput prediction of candidate DASH servers.

3. DASH WITH SINGLE SERVER

We start with a DASH system where a client only downloads video chunks from a single server. We will extend our designs to the multiple-server case in Section 4.

3.1 Buffered Video Time

To sustain continuous playback, a video streaming client normally maintains a video buffer to absorb temporary mismatch between the video download rate and the video playback rate. In conventional single-version video streaming, the video buffer is measured by the size of the buffered video, which can be easily mapped into buffered video playback time when divided by the average video playback rate. In DASH, different video versions have different video playback rates. Since a video buffer contains chunks from different versions, there is no longer a direct mapping between buffered video size and buffered video time. To deal with multiple video versions, we use buffered video time to directly measure the length of the video playback buffer.

The buffered video time process, denoted by q(t), can be modeled as a single-server queue with constant service rate of 1, i.e., with continuous playback, in each unit of time a piece of video with unit playback time is played and dequeued from the buffer. The enqueue process is driven by the video download rate and the downloaded video version. Specifically, for a video content, there are L different versions, with different playback rates V1 < V2 < · · · < VL. All versions of the video are partitioned into chunks, each of which has the same playback time of Δ. A video chunk of version i has a size of Vi Δ. A client downloads video chunks sequentially, and for each chunk, he can choose one out of the L versions. Without loss of generality, a client starts to download chunk k from version i at time instant t_k^(s). Then the video rate requested by the client for the k-th chunk is v(k) = Vi. Let t_k^(e) be the time instant when chunk k is downloaded completely. In a "greedy" download mode, a client downloads chunk k right after chunk k−1 is completely downloaded; in other words, t_{k-1}^(e) = t_k^(s). For the buffered video time evolution, we have:

    q(t_k^(e)) = Δ + max( q(t_k^(s)) − (t_k^(e) − t_k^(s)), 0 ),    (1)

where the first term is the added video time upon the completion of the downloading of chunk k, and the second term reflects the fact that the buffered video time is consumed linearly at rate 1 during the downloading of chunk k.

Using fluid approximation, we evenly distribute the added video time of Δ over the download interval (t_k^(s), t_k^(e)], then

    dq(t)/dt = Δ / (t_k^(e) − t_k^(s)) − 1(q(t) > 0)                      (2)
             = v(k)Δ / ( v(k)(t_k^(e) − t_k^(s)) ) − 1(q(t) > 0)           (3)
             = T̄(k)/v(k) − 1(q(t) > 0),    t ∈ (t_k^(s), t_k^(e)],        (4)

where 1(·) is the indicator function, and T̄(k) is the average TCP throughput when downloading chunk k. The buffered video time remains constant when the requested video rate v(k) exactly matches T̄(k), which is not practical. In practice, v(k) can only assume one of the L predefined video rates. There will be unavoidable rate mismatches, and thus buffer fluctuations.
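The queue dynamics in (1)–(4) are straightforward to simulate. The short Python sketch below is illustrative only (it is not part of the paper's system): it replays the greedy download mode for a hypothetical sequence of per-chunk requested rates and TCP throughputs and reports the buffered video time after each chunk; the variable names and sample numbers are assumptions for illustration.

```python
# Minimal sketch (assumed, not from the paper): buffered video time evolution, eq. (1),
# under greedy downloads: q(t_k_e) = DELTA + max(q(t_k_s) - (t_k_e - t_k_s), 0).

DELTA = 4.0  # seconds of video per chunk (assumed value)

def simulate_buffer(requested_rates, throughputs, q0=0.0):
    """requested_rates[k] = v(k) in Mbps, throughputs[k] = average TCP throughput
    T(k) in Mbps while downloading chunk k. Returns the buffered video time after
    each chunk completes."""
    q = q0
    history = []
    for v_k, t_k in zip(requested_rates, throughputs):
        download_time = v_k * DELTA / t_k          # chunk size v(k)*DELTA divided by T(k)
        q = DELTA + max(q - download_time, 0.0)    # eq. (1); the buffer never goes negative
        history.append(q)
    return history

# Example: requesting 2 Mbps video over a link whose throughput drops from 3 to 1.5 Mbps.
print(simulate_buffer([2.0] * 6, [3.0, 3.0, 3.0, 1.5, 1.5, 1.5]))
```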

3.2 Control Buffer Oscillations

From (4), if the requested video rate is higher than the actual TCP throughput, the buffered video time decreases, and video playback freezes whenever q(t) goes down to zero; if the requested video rate is lower than the actual TCP throughput, the buffered video time ramps up, which means the user is stuck at a low video rate even though his connection supports a higher rate. A responsive video adaptation scheme should control the video rate to closely track TCP throughput so that q(t) stays within a bounded region. As a result, no video freeze happens, and the requested video rate matches the TCP throughput in the long run.

To maintain a stable queue length, one can employ a simple rate adaptation scheme:

    v(k) = argmin_{Vi, 1 ≤ i ≤ L} |Vi − T̂(k)|,

where T̂(k) is some estimate of the TCP throughput before downloading chunk k. In other words, a client always downloads the version with a rate "closest" to the estimated TCP throughput. However, it is well known that it is hard to accurately estimate TCP throughput. Such an open-loop design is not robust against TCP throughput estimation errors. To address this problem, we investigate a closed-loop feedback control design for video rate adaptation.

Instead of directly matching the requested video rate v(k) with the TCP throughput estimate, we use the evolution of the buffered video time q(t) as feedback signal to adjust v(k). One straightforward way is to set up a reference queue length qref, i.e., the target video buffer time, and build a PID controller to regulate the requested video rate.

PID is the most commonly used feedback controller in industrial control systems. A PID controller calculates an "error" value as the difference between a measured process variable and a desired set point, and attempts to minimize the error by adjusting the process control inputs. The PID controller calculation involves three separate parameters: the proportional, integral and derivative factors, denoted by KP, KI, and KD respectively. Heuristically, these values can be interpreted in terms of time: KP depends on the present error, KI on the accumulation of past errors, and KD is a prediction of future errors, based on the current rate of change. The weighted sum of these three actions is used to adjust the process via the output of the controller. In practice, a PID controller does not have to use all three terms, and variations of the PID controller are very common. In our design, because of the inherent rate fluctuations of TCP transmission, we use a PI controller instead of a PID controller, because the derivative term may amplify the impact of TCP fluctuations.

Figure 1: PID control oriented adaptive streaming

Figure 1 illustrates the diagram of the control system. We adopt a Proportional-Integral (PI) controller, with control output driven by the deviation of the buffered video time:

    u(t) = Kp (q(t) − qref) + KI ∫_0^t (q(τ) − qref) dτ,

where Kp and KI are the P and I control coefficients respectively. The target video rate for chunk k is

    ṽ(k) = (u(t_k^(s)) + 1) T̂(t_k^(s)),

where T̂(t_k^(s)) is the TCP throughput estimate right before downloading chunk k. Taking into account the finite discrete video rates, the actual requested video rate for chunk k is the highest video rate not exceeding ṽ(k):

    v(k) = Q(ṽ(k)) = max{ Vi : Vi ≤ ṽ(k) },    (5)

where Q(·) is the quantization function. When q(t) oscillates around qref with small amplitude, the control signal u(t) is small, and the requested video rate is set close to the predicted TCP throughput. The throughput estimation error and the video rate quantization error will be absorbed by the video buffer and the closed-loop control.
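As a concrete illustration of this buffer-oriented PI scheme, the sketch below implements the controller output u(t), the target rate ṽ(k) = (u + 1) T̂, and the quantizer Q(·) in Python. It is a minimal sketch, not the paper's implementation: the gains, target buffer level, and rate ladder are assumed values, and the integral term is accumulated per chunk rather than in continuous time.

```python
# Minimal sketch (assumed parameters): PI control of buffered video time, Sec. 3.2.

RATES = [0.35, 0.6, 1.0, 2.0, 3.0, 5.0]   # available versions V1..VL in Mbps (assumed ladder)

def quantize(target_rate):
    """Q(.): highest available rate not exceeding the target; fall back to the lowest rate."""
    candidates = [v for v in RATES if v <= target_rate]
    return max(candidates) if candidates else RATES[0]

class PIRateController:
    def __init__(self, q_ref=20.0, kp=0.1, ki=0.01):
        self.q_ref = q_ref     # target buffered video time in seconds (assumed)
        self.kp, self.ki = kp, ki
        self.integral = 0.0    # accumulated buffer deviation

    def pick_rate(self, q_now, throughput_est, dt):
        """q_now: current buffered video time; throughput_est: T^ right before the next chunk;
        dt: time since the previous decision, used to accumulate the integral term."""
        error = q_now - self.q_ref
        self.integral += error * dt
        u = self.kp * error + self.ki * self.integral   # PI control output u(t)
        v_target = (u + 1.0) * throughput_est           # target rate (u + 1) * T^
        return quantize(v_target)

ctrl = PIRateController()
print(ctrl.pick_rate(q_now=15.0, throughput_est=2.4, dt=4.0))  # buffer below target -> rate below T^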

3.3 Control Video Rate Fluctuations

To accurately control the buffer size, one has to constantly adapt the requested video rate to the realtime TCP throughput. If the achieved TCP throughput is larger than the requested video rate, the buffer size will increase and the feedback control module will increase the video rate, which, according to (4), will slow down the buffer increase, and vice versa. Since TCP throughput is by nature time varying, the requested video rate will also incur constant fluctuations.

From a control system point of view, there is a fundamental conflict between maintaining a stable video rate and a stable buffer size, due to the unavoidable network bandwidth variations. From an end user point of view, video rate fluctuations are much more perceivable than buffer size oscillations. A recent study has shown that switching back-and-forth between different video versions significantly degrades user video experience [7]. Meanwhile, buffer size variations have no direct impact on video streaming quality as long as the video buffer does not deplete. In this section, we revisit our video rate adaptation design with the following goals:

1. avoid video rate fluctuations triggered by short-term bandwidth variations and TCP throughput estimation errors;
2. increase the video rate smoothly when the available network bandwidth is consistently higher than the current video rate;
3. quickly decrease the video rate upon congestion level shift-ups to avoid video playback freezes.

To simultaneously achieve the three goals, one has to strike the right balance between the responsiveness and smoothness of video rate adaptation upon network bandwidth increases and decreases. Classical feedback control, such as the one presented in Figure 1, is no longer sufficient. We develop a new rate control system as shown in Figure 2.

Figure 2: Buffer size oriented adaptive streaming

3.3.1 Control Module

The control module still uses the buffered video time q(t) as feedback signal, since it directly reflects the mismatch between the video rate and the realtime TCP throughput. Instead of controlling q(t) to a target level qref, we only use q(t) to guide the video rate selection. To determine the video rate v(k) for chunk k, we need the TCP throughput prediction T̂(t_k^(s)) and an adjustment factor F(k), which is a function of the target buffer size, the current buffer size, the previous buffer size, and the current video rate:

    F(k) = Fq(k) · Ft(k) · Fv(k),    (6)

with

    Fq(k) = 2 e^{p (q(t_k^(s)) − qref)} / ( 1 + e^{p (q(t_k^(s)) − qref)} ),    (7)

    Ft(k) = Δ / ( Δ − (q(t_k^(s)) − q(t_{k-1}^(s))) ),    (8)

    Fv(k) = (VL + W) / ( v(k−1) + W ).    (9)

In (6), the adjustment factor F(k) is the product of three sub-factors: the buffer size adjustment Fq(k), the buffer trend adjustment Ft(k), and the video chunk size adjustment Fv(k), which we explain one by one in the following.

Figure 3: Adjustment Function for Buffer Size Deviation

Buffer Size Adjustment Fq(k) is an increasing function of the buffer size deviation q(t_k^(s)) − qref from the target buffer size in (7). A larger buffer size suggests that one can be more aggressive in choosing a higher video rate. As illustrated in Figure 3, when the buffer size matches the target qref, the adjustment is neutral (with value 1); when the deviation is small, the adjustment is approximately 1 + p(q(t_k^(s)) − qref), mimicking a simple P-controller, with stationary output of 1 and Kp = p; when the deviation is large, the adjustment factor increases/decreases smoothly with an upper bound of 2 and a lower bound of 0. This is to avoid Fq overpowering the other two factors.

Buffer Trend Adjustment Ft(k) is an increasing function of the buffer size growth q(t_k^(s)) − q(t_{k-1}^(s)) since the downloading of the previous video chunk, as calculated in (8), where Δ is the video time contained in one chunk. If there is no buffer size growth, the adjustment is neutral (with value 1). If the buffer size grows fast, it suggests that the previous video rate was too conservative, and one should increase the video rate; if the buffer size decreases fast, it suggests that the previous video rate was too aggressive, and one should decrease the video rate. From equations (1) to (4), in a greedy download mode with t_{k-1}^(e) = t_k^(s), it can be shown with fluid approximation that

    Ft(k) = T̄(k−1)/v(k−1) = dq(t)/dt + 1(q(t) > 0).    (10)

In other words, Ft(k) is the ratio between the actual download throughput and the video rate for chunk k−1. Ft is essentially a Derivative (D-)controller that responds fast to the increase/decrease trend in buffered video time.

Video Chunk Size Adjustment Fv(k) is a decreasing function of the previous video rate v(k−1), calculated in (9), where W is a constant. If v(k−1) = VL, the adjustment is neutral; if v(k−1) < VL, Fv(k) > 1. This is because HTTP adaptive streaming uses TCP transmission. If a chunk is small, TCP has to go through the slow-start process to open up its congestion window, leading to low TCP throughput even if the available bandwidth is much higher. This compensation enables fast rate increase when a DASH session starts or resumes from a low video rate. If the client connects to the server with a persistent HTTP connection, there is no such problem, because only the first chunk experiences slow start. In that scenario, we can simply set this adjustment factor to the constant 1.

Notice that each adjustment factor assumes the value 1 when the system is at the equilibrium point, i.e., q(t_k^(s)) = qref, q(t_k^(s)) = q(t_{k-1}^(s)), and v(k−1) = VL. When the system operates within the neighborhood of the equilibrium point, each adjustment factor takes a small positive or negative deviation from one. The total deviation of their product from one is approximately the summation of the individual deviations, similar to the PI controller in the previous section. Different from the PI controller, the product deviation changes smoothly within a bounded region when the system operates away from the equilibrium point.

3.3.2 Rate Switching Logic

After we obtain the final adjustment factor F(k), similar to the buffer control case in Section 3.2, we can multiply it with the TCP throughput estimate and set a target video rate ṽ(k) = F(k) T̂(t_k^(s)), then use the quantization function Q(·) in (5) to convert it to a discrete video rate v(k). If we adjusted the video rate directly according to the quantized target video rate, there would again be frequent fluctuations. To resolve this, a rate switching logic module is added after the quantizer, as shown in Figure 2. It controls video rate switches according to Algorithm 1.

Algorithm 1 Smooth Video Adaptation
  ṽ(k) ← F(k) T̂(t_k^(s));
  if q(t_k^(s)) < qref/2 then
    v(k) ← Q(T̄(k−1)); return;
  else if ṽ(k) > v(k−1) then
    Counter++;
    if Counter ≥ m then
      v(k) ← Q(T̂(t_k^(s))); Counter ← 0; return;
    end if
  else if ṽ(k) < v(k−1) then
    Counter ← 0;
  end if
  v(k) ← v(k−1); return;
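To make the control module and the switching logic concrete, the following Python sketch combines equations (6)–(9) with Algorithm 1. It is an illustrative reading of the scheme rather than the authors' code: the gain p, the constant W, the rate ladder, and the chunk duration are assumed placeholder values, equation (9) is used in the reconstructed form given above, and the switch-up threshold m is fixed here instead of the dynamic m(k) introduced below.

```python
import math

# Illustrative sketch (assumed constants) of the adjustment factor F(k), eqs. (6)-(9),
# and the rate switching logic of Algorithm 1.

RATES = [0.35, 0.6, 1.0, 2.0, 3.0, 5.0]   # V1..VL in Mbps (assumed)
V_L = RATES[-1]
DELTA = 4.0      # chunk playback time in seconds (assumed)
Q_REF = 20.0     # target buffered video time in seconds (assumed)
P_GAIN = 0.1     # p in eq. (7) (assumed)
W = 0.5          # constant in eq. (9) (assumed)

def quantize(rate):
    candidates = [v for v in RATES if v <= rate]
    return max(candidates) if candidates else RATES[0]

def adjustment_factor(q_now, q_prev, v_prev):
    f_q = 2.0 * math.exp(P_GAIN * (q_now - Q_REF)) / (1.0 + math.exp(P_GAIN * (q_now - Q_REF)))  # eq. (7)
    f_t = DELTA / (DELTA - (q_now - q_prev))   # eq. (8): buffer trend
    f_v = (V_L + W) / (v_prev + W)             # eq. (9): slow-start compensation (reconstructed form)
    return f_q * f_t * f_v                     # eq. (6)

class SwitchingLogic:
    """Algorithm 1: switch down immediately when the buffer is low; switch up only after
    the target rate has exceeded the current rate for m consecutive chunks."""
    def __init__(self, m=5):
        self.m = m
        self.counter = 0

    def next_rate(self, q_now, q_prev, v_prev, t_hat, t_bar_prev):
        v_tilde = adjustment_factor(q_now, q_prev, v_prev) * t_hat
        if q_now < Q_REF / 2:                  # danger of depletion: drop to the last measured throughput
            return quantize(t_bar_prev)
        if v_tilde > v_prev:
            self.counter += 1
            if self.counter >= self.m:
                self.counter = 0
                return quantize(t_hat)
        elif v_tilde < v_prev:
            self.counter = 0
        return v_prev                          # otherwise keep the current rate

logic = SwitchingLogic()
print(logic.next_rate(q_now=22.0, q_prev=20.0, v_prev=1.0, t_hat=2.6, t_bar_prev=2.5))
```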

If the buffer size drops below half of the target size, qref/2, it indicates that the current video rate is higher than the TCP throughput, and there is a danger of buffer depletion and playback freeze. We then immediately reduce the video rate to v(k) = Q(T̄(k−1)), where T̄(k−1) is the actual TCP throughput of the previous chunk transmission. Due to the quantization, v(k) ≤ T̄(k−1); if the TCP throughput in the current round is close to that of the previous round, the buffer size is expected to increase until it goes back above qref/2.

If the buffer size is larger than qref/2, we consider it safe to keep the current rate or switch up to a higher rate. To avoid small-timescale fluctuations, the video rate is switched up only if the target video rate ṽ(k) calculated by the controller is larger than the current rate v(k−1) for m consecutive chunks. Whenever a switch-up is triggered, the video rate is set to match the TCP throughput estimate T̂(t_k^(s)). Before the switch-up counter reaches m, if the target video rate calculated for one chunk is smaller than the current video rate, the switch-up counter is reset and starts over.

The parameter m controls the trade-off between the responsiveness and smoothness of rate adaptation. A larger m will definitely make the adaptation smoother, but sluggish. If the video rate is at a low level, the user will have to watch that video rate for a long time even if there is enough bandwidth to switch up. To address this problem, we dynamically adjust m according to the trend of buffer growth. More specifically, for chunk k, we calculate a switch-up threshold as a decreasing function of the recent buffer growth: m(k) = fm(q(t_k^(s)) − q(t_{k-1}^(s))), and the video rate is switched up if the switch-up counter reaches (m(k) + m(k−1) + m(k−2))/3. The intuition behind this design is that fast buffer growth suggests that the TCP throughput is persistently larger than the current video rate, so one should not wait too long to switch up. Similar to (10), it can be shown that if the buffer is non-empty,

    q(t_k^(s)) − q(t_{k-1}^(s)) = Δ ( 1 − v(k−1)/T̄(k−1) ).

One example dynamic-m function that we use in our experiments is a piecewise constant function obtained from empirical study:

    m(k) = 1,   if q(t_k^(s)) − q(t_{k-1}^(s)) ∈ [0.4Δ, Δ);
           5,   if q(t_k^(s)) − q(t_{k-1}^(s)) ∈ [0.2Δ, 0.4Δ);
           15,  if q(t_k^(s)) − q(t_{k-1}^(s)) ∈ [0, 0.2Δ);
           20,  otherwise.    (11)

3.4 Control Buffer Overflow

So far we have assumed a "greedy" client mode, where a client continuously sends out "GET" requests to fully load TCP and download video chunks at the highest rate possible. In practice, this may not be plausible for the following reasons: 1) A DASH server normally handles a large number of clients. If all clients send out "GET" requests too frequently, the server will soon be overwhelmed. 2) If the requested video rate is consistently lower than the TCP throughput, the buffered video time quickly ramps up, leading to buffer overflow. In Video-on-Demand (VoD), this means that the client pre-fetches way ahead of its current playback point, which is normally not allowed by a VoD server. In live video streaming, pre-fetching is simply not possible for content not yet generated, and the buffered video time is upper-bounded by the user-tolerable video playback lag, which is normally in the order of seconds. 3) Finally, fully stressing TCP and the network without any margin comes with the risk of playback freezes, especially when the client does not have a large buffer, as in the live streaming case.

To address these problems, we introduce a milder client download scheme. To avoid buffer overflow, we introduce a buffer cap qmax. Whenever the buffered video time goes over qmax, the client stays idle for a certain timespan before sending out the request for the next chunk. Also, to mitigate the TCP and network stresses, we reserve a video rate margin of 0 < ρv < 1. For any target video rate ṽ(k) calculated by the controllers in the previous sections, we only request a video rate of v(k) = Q((1 − ρv) ṽ(k)). With the video rate margin, the buffered video time will probably increase. When q(t) goes over qmax, the client simply inserts an idle time of q(t) − qmax before sending out the next download request. As will be shown in our experiments, even a small video rate margin can simultaneously reduce the buffer size oscillations and the video rate fluctuations by a lot.
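The buffer cap and rate margin of Section 3.4 amount to two small changes in the client's request loop: scale the target rate down by (1 − ρv) before quantizing, and sleep for q(t) − qmax seconds whenever the buffer exceeds the cap. The sketch below shows one way this could look in Python; qmax, ρv, the rate ladder, and the helper names are assumptions, and fetch_chunk() / get_buffered_time() stand in for the actual HTTP GET and player buffer query.

```python
import time

# Illustrative request loop with buffer cap and rate margin (Sec. 3.4); all values assumed.
RATES = [0.35, 0.6, 1.0, 2.0, 3.0, 5.0]   # available versions in Mbps (assumed)
Q_MAX = 30.0                              # buffer cap in seconds (assumed)
RHO_V = 0.1                               # video rate margin, 0 < rho_v < 1 (assumed)

def quantize(rate):
    candidates = [v for v in RATES if v <= rate]
    return max(candidates) if candidates else RATES[0]

def download_loop(controller, fetch_chunk, get_buffered_time, num_chunks):
    """controller(k) returns the target rate v~(k); fetch_chunk(k, rate) performs the HTTP
    GET for chunk k at the chosen rate; both are placeholders supplied by the caller."""
    for k in range(num_chunks):
        q = get_buffered_time()
        if q > Q_MAX:
            time.sleep(q - Q_MAX)                      # idle time q(t) - qmax before the next request
        v_k = quantize((1.0 - RHO_V) * controller(k))  # reserve a small rate margin, then quantize
        fetch_chunk(k, v_k)
```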
4. DASH WITH MULTIPLE SERVERS

While most DASH services employ multiple servers hosting the same set of video contents, each client is only assigned to one server [1]. It is obviously more advantageous if a DASH client is allowed to dynamically switch from one server to another, or, even better, to simultaneously download from multiple servers. Our video adaptation algorithms in Section 3 can be easily extended to the case where a client can download from multiple servers. We consider two cases. In the first case, given n servers, a client always connects to the server which can provide the highest video download rate. In the second case, a client simultaneously connects to s out of n servers, downloads different chunks from different servers, and then combines them in the order of the video.

4.1 TCP Throughput Prediction

In the single-server study, since we keep downloading video chunks from the same server, we can use a simple history-based TCP throughput estimation algorithm [13] to predict the TCP throughput for downloading new chunks. With multiple servers, it is necessary for a client to estimate its TCP throughput to a server even if it has not downloaded any chunk from that server. We adopt a light-weight TCP throughput prediction algorithm proposed in [14]. The authors showed that TCP throughput is mainly determined by packet loss, delay, and the size of the file to be downloaded. They propose to use the Support Vector Regression (SVR) algorithm [15] to train a TCP throughput model T̂(pl, pd, fs) out of training data consisting of samples of packet loss rate pl^(i), packet delay pd^(i), file size fs^(i), and the corresponding actual TCP throughput T^(i). To predict the current TCP throughput, one just needs to plug in the currently measured packet loss, delay, and download file size. For our purpose, we download each chunk as a separate file. Chunks from different video versions have different file sizes. In our SVR TCP model, we use the video rate Vi in place of the file size fs.

4.2 Dynamic Server Selection

In the first case, we allow a client to dynamically switch to the server from which it can obtain the highest TCP throughput. While a client downloads from its current DASH server, it constantly monitors its throughput to other candidate servers by using the SVR TCP throughput estimation model. To accommodate the SVR throughput estimation errors, which were reported to be around 20% in [14], the client switches to a new DASH server only if the estimated throughput to that server is at least 20% higher than the achieved throughput with the current DASH server.

To avoid wrong server switches triggered by SVR estimation errors, we use a trial-based transition. When a client decides to switch to a new server, it establishes TCP connections with the new server and also keeps the connections with the current server. It sends out "GET" requests to both servers. After a few chunk transmissions, the client closes the connections with the server with smaller throughput and uses the one with larger throughput as its current DASH server.
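The SVR throughput model used in Sections 4.1 and 4.2 can be reproduced in a few lines with an off-the-shelf regressor. The sketch below uses scikit-learn's SVR as a stand-in library (an assumption; the paper only specifies the SVR technique of [14], [15]), with the features described above: packet loss rate, packet delay, and the video rate Vi in place of the file size. The training arrays and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Sketch of the SVR throughput predictor of Sec. 4.1 (scikit-learn assumed as the library).
# Each training sample: (packet loss rate, packet delay in ms, video rate Vi in Mbps),
# with the measured TCP throughput in Mbps as the target. Values below are placeholders.
X_train = np.array([
    [0.001, 40.0, 1.0],
    [0.010, 80.0, 2.0],
    [0.020, 120.0, 3.0],
    [0.002, 50.0, 5.0],
])
y_train = np.array([4.5, 1.8, 0.9, 3.7])

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_train, y_train)

# Predict the throughput to a candidate server from its currently measured loss, delay,
# and the rate of the chunk we would request; used to decide whether to switch servers.
t_hat = model.predict(np.array([[0.005, 60.0, 2.0]]))[0]
print(f"predicted throughput: {t_hat:.2f} Mbps")
```

Following Section 4.2, a client would then switch servers only when such a prediction exceeds the currently achieved throughput by roughly the 20% margin that absorbs the SVR estimation error.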
4.3 Concurrent Download

In the second case, a client is allowed to simultaneously download chunks from s out of the n servers. The client-side video buffer is fed by s TCP connections from the chosen servers. The video rate adaptation algorithms in Section 3 work in a similar way, by just replacing the TCP throughput estimate from the single server with the aggregate throughput estimate over the s chosen connections.
