Network Load Balancing In Software Defined Network: A Survey


Monika Mehra
M.Tech Student, Computer Science & Engineering, Govt. Mahila Engineering College Ajmer, Ajmer, Rajasthan, India.

Sudarshan Maurya
Assistant Professor, Department of Computer Science & Engineering, Govt. Mahila Engineering College Ajmer, Ajmer, Rajasthan, India.

Naveen Kumar Tiwari
Assistant Professor, Department of Computer Science & Engineering, Rajkiya Engineering College, Kannauj, Kannauj, Uttar Pradesh, India.

Abstract

A variety of approaches have been suggested in recent years for balancing network traffic in data centers. The network topology and the choice of algorithm both affect the quality of network communication. Routing algorithms such as global load balancing (GLB) and dynamic load balancing (DLB) have been proposed; both use a fat-tree topology for data centers, with the first also applying the Software Defined Networking (SDN) architecture in cloud computing. The TCP congestion window allows only limited buffer space in data centers and clouds at any one time. This paper presents a detailed survey of the tasks performed in data center and cloud environments, the techniques used for load balancing on the network, and some of the algorithms used to improve overall network performance.

Keywords: Data Center, Cloud Computing, Software Defined Networking, Network Load Balancing, Virtualization.

Introduction

In recent years, services on the internet have grown considerably and the number of users has increased significantly. Data centers provide a variety of services to people and therefore play a very important role in today's internet services. Several authors have designed network architectures for the cloud and the data center to handle bulk traffic [1], and new schemes have been proposed that use multiple transmission paths for transmitting data over the network. One key problem is how to achieve efficient load balancing in data centers and the cloud. A related problem in the data center network is how to minimize latency and maximize throughput. To answer these problems, both hardware and software approaches have been suggested [2]. The hardware approach relies on faster switches and alternative network topologies, while load balancing using a software defined network should provide higher bandwidth utilization in the existing network; network traffic flow scheduling is required to improve on current approaches. Load balancing methods can be broadly classified into two parts: static load balancing and dynamic load balancing [3]. All application servers run as guests on a Virtual Machine Monitor (hypervisor), which provides resource sharing in clouds and data centers and transforms a large collection of interconnected commodity hardware into a cloud infrastructure with high availability, flexibility, predictable performance, reliability, and security [4].

1.1 Centralized SDN Architecture: This network has only one controller and multiple switches, so the load balancing problem primarily focuses on the servers and links used to control and manage traffic on the network; links and servers are adjusted dynamically according to the load. When more than one process shares the same resources through a resource-sharing system, the system becomes a candidate for load balancing, which yields efficient and reliable system utilization. In SDN, OpenFlow uses different scripting languages to manage all the hardware switches and routers through a unified interface on the hypervisor network [5]. It allows control of switches and flexible routing policies without getting entangled in complex switch-by-switch flow table configurations, thus significantly reducing operations management complexity.

A software defined network (SDN) changes the way a network is controlled: the controller generates new forwarding rules for the switches. Two characteristics define it: (1) traffic is handled and forwarded according to the controller's decisions, and (2) an SDN build-up controls multiple data-plane components, the network data-plane resources (routers, switches, and other middleboxes), through a well-defined application programming interface (API). OpenFlow [6] switches have one or more tables of packet-handling rules. Each rule matches a subset of traffic and executes certain actions on that traffic; actions include dropping, forwarding, or flooding. All rules are installed by a controller application, and an OpenFlow controller can behave like a router, firewall, network address translator, or something in between.

Figure 1: Architecture of SDN network

OpenFlow is the protocol used in SDN to separate the control plane from the data plane. Traditional switches forward data packets as determined by the controller; in an OpenFlow network the controller can control the switches and can itself be modified or upgraded programmatically, the most popular controller programming languages being C, Java, and Python. Because of this, OpenFlow networks are much more flexible and upgradable.
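The match/action behaviour of the OpenFlow rules described above can be illustrated with a short sketch. The FlowRule/FlowTable names, the dictionary-based match fields, and the flood fallback are illustrative assumptions, not the actual OpenFlow protocol or any particular controller's API.

```python
# Minimal sketch of an OpenFlow-style match/action flow table. The class and
# field names are illustrative assumptions, not the real OpenFlow protocol.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FlowRule:
    match: dict                 # e.g. {"dst_ip": "10.0.0.100", "tcp_dport": 80}
    action: str                 # "forward", "flood", or "drop"
    out_port: Optional[int] = None
    priority: int = 0

class FlowTable:
    """Switch-side table; rules are installed by a controller application."""
    def __init__(self) -> None:
        self.rules: List[FlowRule] = []

    def install(self, rule: FlowRule) -> None:
        # The controller pushes a new rule; highest priority is checked first.
        self.rules.append(rule)
        self.rules.sort(key=lambda r: r.priority, reverse=True)

    def lookup(self, packet: dict) -> FlowRule:
        # First matching rule wins; unmatched traffic is flooded here for
        # brevity (a real switch would usually send it to the controller).
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule
        return FlowRule(match={}, action="flood")

# Example: a hypothetical controller spreading web traffic over two servers.
table = FlowTable()
table.install(FlowRule({"dst_ip": "10.0.0.100", "tcp_dport": 80}, "forward", 1, 10))
table.install(FlowRule({"dst_ip": "10.0.0.101", "tcp_dport": 80}, "forward", 2, 10))
print(table.lookup({"dst_ip": "10.0.0.100", "tcp_dport": 80}).action)   # forward
```

In a real deployment the controller would install such rules over the OpenFlow channel rather than by calling a local method; the sketch only shows the rule-matching idea.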

Whenever we talk about huge networks, many problems occur on the network at the same time. The network sometimes shows performance issues because long and short flows do not interact well on the links that carry the traffic.

2.1 Queue Buildup [8]: When the throughput of a network device is lower than the rate of incoming data over time, because of long or short flows, data packets incur increased latency [8]. The latency grows as packets queue up in the buffer of the networking device; this is termed queue buildup. It affects network performance and degrades QoS over the data center and cloud.

2.2 Incast [8]: If multiple flows use the same interface to transfer packets, then for some short periods packets queue up and the buffer on the network device is fully utilized. Incast is likely to degrade network performance, miss the aggregator's deadline, and leave results out of the final conclusion. To mitigate this problem, the number of partitions and the bandwidth can be increased, which improves network performance and decreases the possibility of packet drops.

2.3 Buffer Overflow [8]: Data centers carry long flows (which deliver large amounts of data and tolerate latency) and short flows (which are latency-sensitive and need little bandwidth). Whenever a large number of packets arrives, the network cannot handle all of them and drops some packets because the buffer space overflows, while long TCP flows build queues on their interfaces. This is called buffer overflow; as a result the network is unable to handle any more traffic.

Motivation

Cloud computing is one of the techniques people use to deal with huge data transfers from one network to another. Congestion control algorithms are used to mitigate congestion over the network and minimize packet loss, but TCP congestion control protocols alone are not sufficient to achieve high throughput and low latency over a data center network while keeping packet losses low. In this paper we discuss load balancing, a technique for dealing with various traffic profiles that is used by the SDN architecture to improve network performance and to transfer data packets over the network efficiently.

Figure 3: Load balancing

There are two prime SDN architectures that manage network load, as shown in Figure 4: Centralized Single Control and Distributed Multiple Control. Each can further be subcategorized into a data plane and a control plane, so the load can be balanced according to the controller and according to the flow of data as well. The data plane handles link load balancing and server load balancing. The distributed multiple control architecture is subcategorized into a flat architecture and a hierarchical architecture. This helps to explore new dimensions of load balancing.

Figure 4: Load balancing architecture

3.1 Classification of load balancing

3.1.1 Static load balancing: Based on previous knowledge of the system's resources and applications, a static load balancing algorithm can be used [21, 22]. It contains predefined rules for balancing load over the network; the rules do not react to the network state, i.e., there is a fixed set of criteria for the network.

3.1.2 Dynamic load balancing: This depends not only on previous knowledge but also on the current state of the network; it reacts to the network load and distributes it according to the current state, trying to optimize the use of network resources dynamically.
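As a minimal sketch of the difference between the two classes, the snippet below contrasts a static round-robin policy with a dynamic least-load policy; the server names and connection counters are hypothetical.

```python
# Sketch contrasting a static (round-robin) policy with a dynamic (least-load)
# policy. Server names and connection counters are hypothetical.
import itertools

servers = ["srv-a", "srv-b", "srv-c"]

# Static: a predefined rule that never consults the current network state.
_round_robin = itertools.cycle(servers)
def pick_static() -> str:
    return next(_round_robin)

# Dynamic: reacts to the observed state (active connections) at request time.
active_connections = {s: 0 for s in servers}
def pick_dynamic() -> str:
    return min(active_connections, key=active_connections.get)

active_connections["srv-a"] = 25          # srv-a is already heavily loaded
print(pick_static())                       # may still return srv-a
print(pick_dynamic())                      # returns a lightly loaded server
```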

Literature survey

DCTCP [8], published in 2010, described data communication in data centers, workload characterization, and performance impairments (incast, queue buildup, buffer pressure) [8], and proposed three mechanisms:

1. Simple marking at the switch: the switch marks arriving packets (setting the ECN codepoint when its queue builds up past a threshold), signalling congestion toward the destination.
2. ECN-Echo [8] at the receiver: the DCTCP receiver differs from a standard TCP receiver but works in a similar manner; it echoes back to the sender exactly which packets were marked, which keeps the process simple, reduces load on the network, and improves performance.
3. Controller at the sender: this controls the queue length and the congestion window when queue buildup occurs; it also helps to manage network traffic and control processing.

DCTCP thereby addresses the issues defined earlier, such as queue buildup, buffer pressure, and incast. The paper focuses on soft real-time applications such as web search: a single query returns multiple results, and such applications generate a diverse mix of short and long flows. DCTCP retains properties of TCP, such as high throughput, high burst tolerance, and support for large flows, while adding algorithms that enhance performance compared to TCP. DCTCP also uses ECN [8] to notify the system that the network is overloaded and cannot handle more data; if senders keep sending, packets may be dropped.

To evaluate DCTCP, the authors first analysed traffic production on 6000 servers and divided the network into layers such as partition/aggregate. They then deployed DCTCP on 6000 servers in 150 racks, each rack holding 44 servers connected by 1 Gbps Ethernet, and checked the results for high burst tolerance and large flows. The analysis showed an 80%-90% performance enhancement compared to TCP. After its initial release, DCTCP was improved and redefined to accommodate modifications with the new goal of avoiding incast. That follow-up work uses an avoidance method and fixes the maximum number of concurrent connections to a predefined value; earlier researchers used a serialization method [16], whereas this work suggests a fixed number (n) of connections in every set. It used the NS-2 simulator to show improved network performance and a further reduction of the incast problem.
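Returning to the first mechanism above, the following is a minimal sketch of switch-side marking, assuming a single drop-tail output queue and a marking threshold K; the threshold, capacity, and packet format are illustrative assumptions.

```python
# Minimal sketch of "simple marking at the switch": mark the ECN Congestion
# Experienced bit on an arriving packet whenever the instantaneous queue
# occupancy exceeds a threshold K (K, capacity and the packet dict are assumed).
from collections import deque

K = 20                      # marking threshold, in packets (assumption)
queue: deque = deque()      # switch output queue

def enqueue(packet: dict, capacity: int = 100) -> bool:
    if len(queue) >= capacity:
        return False                     # buffer overflow: packet dropped
    packet["ecn_ce"] = len(queue) >= K   # mark if the queue has built up past K
    queue.append(packet)
    return True

for i in range(30):
    enqueue({"seq": i})
print(sum(p["ecn_ce"] for p in queue), "of", len(queue), "packets marked")
```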
Tuning ECN for Data Center Networks by Haitao Wu et al. [17] analyses network behaviour with ECN switched on and off and tunes ECN for the DCN. ECN essentially detects the likelihood of queue buildup in the buffer, which in turn helps in handling or avoiding network congestion. The paper considers three types of flows:

1) Long flows are few in number but persist for a long duration and deliver a large amount of data over the network.
2) Short flows are large in number, persist for a short duration, and try to deliver a small amount of data as quickly as possible.
3) Background flows are managerial flows that maintain the data center and keep it available to users for their tasks; they are continuous in nature and remain largely the same.

The authors observe that a larger ECN threshold increases throughput but degrades network performance through queue buildup, whereas a smaller ECN threshold decreases throughput and in some cases raises false congestion alarms.
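The sender-side reaction that DCTCP builds on top of these ECN marks is its widely published update rule: an estimate alpha of the marked fraction is maintained with an exponential moving average, and the congestion window is cut in proportion to alpha. The sketch below uses illustrative values for the gain g and the initial window.

```python
# Sketch of the DCTCP-style sender reaction to ECN marks (the standard
# published formula; the gain g and initial values here are assumptions).
class DctcpSender:
    def __init__(self, cwnd: float = 10.0, g: float = 1.0 / 16):
        self.cwnd = cwnd      # congestion window, in packets
        self.alpha = 0.0      # running estimate of the marked fraction
        self.g = g            # EWMA gain

    def on_window_acked(self, acked: int, ecn_marked: int) -> None:
        """Called once per window of data: update alpha, then cut cwnd
        in proportion to the estimated extent of congestion."""
        frac = ecn_marked / acked if acked else 0.0
        self.alpha = (1 - self.g) * self.alpha + self.g * frac
        if ecn_marked:
            self.cwnd = max(1.0, self.cwnd * (1 - self.alpha / 2))
        else:
            self.cwnd += 1.0  # normal additive increase when nothing is marked

s = DctcpSender()
s.on_window_acked(acked=10, ecn_marked=3)   # mild congestion -> small cut
print(round(s.cwnd, 2), round(s.alpha, 3))
```

Because the cut scales with alpha rather than always halving, mild congestion produces only a small reduction, which is what keeps queues short without sacrificing throughput.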

ICTCP [26] presents incast congestion control for the DCN: it characterises incast congestion on the network and gives an incast-congestion-based algorithm to resolve the problem. The algorithm has four parts: 1) control trigger, 2) per-connection control interval, 3) window adjustment on a single connection, and 4) fairness controller for multiple connections.

1) Control trigger: the control trigger uses a quota of the available bandwidth to decide when the receive window of an incoming connection may be increased toward higher throughput. Time is divided into slots; the total received traffic is first measured, and each slot is used to calculate the bandwidth consumed against the quota.

2) Per-connection control interval: this represents the interval over which a TCP connection is adjusted; the achieved throughput is measured per RTT, and ICTCP uses 2*RTT as the window-adjustment interval over the network.

3) Window adjustment on a single connection is defined in terms of two throughputs:
a) Measured throughput: the throughput actually observed over the network; it represents the current requirement of the connection over TCP and is used to compare measured against expected throughput over the receive window.
b) Expected throughput: the throughput that could be expected over the network, constrained only by the receive window.

4) Fairness controller for multiple connections: whenever the available bandwidth on the receiver interface is small, the receive window of the connection with the larger window is decreased to achieve fairness; when multiple TCP connections are used for jobs at the same time, this increases overall throughput and leaves some headroom for independent tasks.

ICTCP was implemented as an NDIS driver on Windows for the measurements and results.
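A rough sketch of the window-adjustment idea in part 3, assuming the 2*RTT control interval mentioned above; the thresholds, the MSS value, and the exact increase/decrease steps are assumptions for illustration rather than ICTCP's published constants.

```python
# Sketch of ICTCP-style receiver-side window adjustment based on the gap
# between expected and measured throughput. Thresholds, MSS, and the step
# sizes are illustrative assumptions.
MSS = 1460  # bytes, assumed

def adjust_rwnd(rwnd: int, measured_bps: float, rtt: float,
                spare_bw: float, g1: float = 0.1, g2: float = 0.5) -> int:
    expected_bps = (rwnd * 8) / rtt            # what this rwnd could deliver
    gap = (expected_bps - measured_bps) / expected_bps
    if gap <= g1 and spare_bw > 0:
        return rwnd + MSS                      # connection keeps up: grow window
    if gap >= g2:
        return max(2 * MSS, rwnd - MSS)        # far below expectation: shrink
    return rwnd                                # otherwise leave it unchanged

print(adjust_rwnd(rwnd=64 * 1024, measured_bps=4e8, rtt=0.0002, spare_bw=1e8))
```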

| Name of paper | Year | Goals | Algorithm | Simulation/Evaluation tool |
| DCTCP [8] | 2010 | Queue build-up, incast, buffer overflow, congestion control, load balancing | 1) Simple marking at the switch; 2) ECN-Echo at the receiver; 3) controller at the sender | Evaluated with benchmark traffic generated from measurements |
| Tuning ECN for DCN [17] | 2012 | Congestion control | Congestion control algorithm | Evaluated with instant-queue-based ECN |
| ICTCP [26] | 2012 | Incast congestion control | 1) Control trigger; 2) per-connection control interval; 3) window adjustment on a single connection; 4) fairness controller for multiple connections | NDIS driver on the Windows OS |
| TCP improvements for DCN [24] | 2013 | Congestion control | 1) Modified DCTCP congestion avoidance; 2) dynamic delayed ACK timeout | OMNeT |
| M21TCP [25] | 2015 | Incast congestion control | 1) Router/switch operation; 2) receiver operation; 3) sender operation | NS-3 |

Table 1: Comparisons between congestion monitoring and handling schemes

TCP Improvements for Data Center Networks [24] presents improvements for the data center network: it modifies the DCTCP algorithms to improve network performance and reduce congestion, evaluated with the OMNeT simulator. It proposes a modified congestion avoidance algorithm: if the maximum number of packets flows in one window, the network is considered congested and the flow rate is kept at a minimum, while under normal conditions the algorithm increases the usable buffer space, decreases congestion, and keeps the network flow smooth. A second algorithm, dynamic delayed ACK timeout calculation [28], is used to decrease the possibility of packet loss and increase the packet flow: packets are lost when the network shows timeout conditions, so whenever the network carries its maximum number of flows this method assigns a fixed buffer size to each window and improves network performance.

| Mechanism | Congestion monitoring method | Method of selecting paths | Simulation/Evaluation | Reported result |
| Freeway [32] | Sending rate at the controller (controller polls switches) | Least congested | Self-made simulator | Outperforms ECMP for a fat-tree topology |
| CONGA [30] | Sending rate using switch records | Least congested | OMNeT | Outperforms MPTCP in throughput |
| MPTCP [34] | TCP-based, using TCP modification | Less congested: redirect to an under-utilized path | Self-made simulator | Improved path throughput |
| CLOVE [31] | ECN using TCP | Weighted round-robin | NS-2 simulator | CLOVE-ECN outperforms the compared schemes |
| RepFlow [29] | -- | Round-robin for distributed congestion | NS-3 simulator | Achieves good flow completion time, comparable to pFabric |
| TinyFlow [27] | -- | Round-robin | NS-3 | Achieves good performance in flow completion time |
| Fastpass [33] | Sending rate at the controller (controller polls switches) | Integrated method: determine the path and sending time for each packet | Real physical network | Good end-to-end latency |
| LocalFlow [28] | Sending rate using switch records | Integrated method: distribute aggregate flows | Real physical network | Performs well in symmetric and asymmetric scenarios |

Table 2: Comparisons between congestion monitoring and handling mechanisms
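The "method of selecting paths" column in Table 2 largely reduces to two families: load-oblivious spreading (round-robin or ECMP-style hashing) and congestion-aware selection of the least-loaded path (Freeway, CONGA, CLOVE-ECN). A minimal sketch of the contrast, with hypothetical path utilisation values:

```python
# Sketch contrasting load-oblivious and congestion-aware path selection.
# Path names and utilisation values are hypothetical.
paths = {"path-1": 0.9, "path-2": 0.3, "path-3": 0.5}   # fraction of capacity used

def pick_ecmp(flow_id: int) -> str:
    # ECMP-style hashing spreads flows without looking at load.
    return sorted(paths)[hash(flow_id) % len(paths)]

def pick_least_congested() -> str:
    # Freeway/CONGA/CLOVE-style choice: put the next flow (or flowlet) on the
    # currently least-utilised path.
    return min(paths, key=paths.get)

print(pick_ecmp(flow_id=42))          # may land on the 90%-utilised path
print(pick_least_congested())         # "path-2"
```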

M21TCP [25] introduces further improvements and shows better results than DCTCP. M21TCP uses a divide-and-conquer method to split data into small packets; the sender manages all the information about the data and sends it to the receiver. The paper also describes throughput reduction, which fixes time slots for sending and receiving information, and a timing system that completes its own process: if a process fails to comply with the conditions, a time delay appears in the resulting process. The technique is used to mitigate the incast problem and improve performance. The M21TCP algorithm has three parts:

1) Router/switch operation: the routers regularly measure the network to calculate the maximum congestion window (MCW) and modify TCP packets whenever congestion on the receive window increases.
2) Receiver operation: if the receiver gets an MCW-encoded ACK, it sends back delayed ACKs; the MCW of the latest received packet is used in the ACK.
3) Sender operation: the last part shows the major difference between TCP and M21TCP, namely that M21TCP always sends packets within a particular limit. The scheme is router based, mitigates the incast problem, and demonstrates the results graphically.

A survey on load balancing in SDN released in 2017 defines several load balancing strategies and discusses how to improve the performance of an SDN network. Load imbalance is a problem that creates impairments on the network. The survey presents different types of load balancing architecture with their advantages and disadvantages, compares cloud computing and virtualization, and further explains load distribution and the types of load balancing. It then describes techniques used by SDN to improve network performance and manage network load, and establishes load balancing algorithms following the load balancing architecture shown earlier.

In 2018, the survey "Load Balancing in Data Center Networks: A Survey" tried to find solutions to the load balancing problem, collecting different theories and conclusions that contribute to improvements in flow completion time, bandwidth utilization, power consumption, and traffic analysis, along with different kinds of mechanisms [26] that compare factors, evaluation methods, approaches, and performance. It covers data center network architectures such as fat-tree, BCube, and CamCube topologies [26], which help to control traffic, manage packets, and improve overall network performance.
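Since several of the surveyed schemes assume a fat-tree data center topology, the following helper sketches the standard k-ary fat-tree sizing; the function and its return format are illustrative and not taken from the surveyed papers.

```python
# Sketch of the k-ary fat-tree sizing commonly used for data center topologies
# (parameterisation follows the widely cited k-ary fat-tree construction; the
# helper itself is illustrative).
def fat_tree_sizes(k: int) -> dict:
    assert k % 2 == 0, "a k-ary fat-tree needs an even k"
    half = k // 2
    return {
        "pods": k,
        "edge_switches": k * half,        # k/2 edge switches per pod
        "agg_switches": k * half,         # k/2 aggregation switches per pod
        "core_switches": half * half,     # (k/2)^2 core switches
        "hosts": k * half * half,         # k^3/4 hosts in total
    }

print(fat_tree_sizes(4))   # the classic 4-ary example: 16 hosts, 4 core switches
```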

CONCLUSION

This paper presents a survey of techniques and algorithms for improving the performance of clouds and data centers by improving throughput and balancing network traffic. It also covers virtualization techniques, which define how large amounts of data are divided into small chunks on the host and how hosts manage traffic on the virtual machine monitor. Data centers handle large amounts of data, and load balancing, reducing response time, and lowering the cost of large data transfers remain the key challenges for improving system performance in the cloud and the data center; several popular mechanisms and methods addressing them have been compared here. There is still always scope for improvement.

REFERENCES

[1] C. E. Leiserson, "Fat-trees: universal networks for hardware-efficient supercomputing," IEEE Transactions on Computers, 1985.
[2] W. Xia, Y. Wen, C. H. Foh, et al., "A Survey on Software-Defined Networking," 2014.
[3] R. Tong and X. Zhu, "A load balancing strategy based on the combination of static and dynamic," in Database Technology and Applications (DBTA), 2nd International Workshop on, IEEE, 2010, pp. 1-4.
[4] N. M. Mosharaf Kabir Chowdhury and R. Boutaba, "Network virtualization: state of the art and research challenges," IEEE Communications Magazine, 2009.
[5] N. McKeown, et al., "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69-74, 2008.
[6] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: Enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, Apr. 2008.
[7] N. J. Kansal and I. Chana, "Existing load balancing techniques in cloud computing: a systematic review," Journal of Information Systems and Communication, vol. 3, no. 1, pp. 87-91, 2012.
[8] M. Alizadeh, A. Greenberg, D. A. Maltz, J. Padhye, P. Patel, B. Prabhakar, S. Sengupta, and M. Sridharan, "Data Center TCP (DCTCP)," ACM SIGCOMM Computer Communication Review, vol. 40, no. 4, pp. 63-74, Aug. 2010.
[9] V. Vasudevan, A. Phanishayee, H. Shah, E. Krevat, D. G. Andersen, G. R. Ganger, G. A. Gibson, and B. Mueller, "Safe and Effective Fine-grained TCP Retransmissions for Datacenter Communication," ACM SIGCOMM Computer Communication Review, vol. 39, no. 4, pp. 303-314, Aug. 2009.
[10] P. Prakash, A. Dixit, Y. C. Hu, and R. Kompella, "The TCP Outcast Problem: Exposing Unfairness in Data Center Networks," in Proc. 9th USENIX Conference on Networked Systems Design and Implementation (NSDI'12), Berkeley, CA, USA: USENIX Association, 2012.
[11] G. Khanna, K. Beaty, G. Kar, and A. Kochut, "Application Performance Management in Virtualized Server Environments," in Network Operations and Management Symposium (NOMS 2006), 10th IEEE/IFIP, pp. 373-381, 2006.
[12] M. Moradi, M. A. Dezfuli, and M. H. Safavi, "A New Time Optimizing Probabilistic Load Balancing Algorithm in Grid Computing," Department of Computer and IT Engineering, Amirkabir University of Technology, Tehran, Iran, IEEE, vol. 1, pp. 232-237, 2010.
[13] J. Adhikari and S. Patil, "Load Balancing: The Essential Factor in Cloud Computing," International Journal of Engineering Research & Technology (IJERT), vol. 1, issue 10, pp. 1-5, 2012.
[14] M. Dash, A. Mahapatra, and N. R. Chakraborty, "Cost Effective Selection of Data Center in Cloud Environment," Special Issue of International Journal on Advanced Computer Theory and Engineering (IJACTE), vol. 2, issue 1, pp. 2319-2526, 2013.

[15] T. Sharma and V. K. Banga, "Efficient and Enhanced Algorithm in Cloud Computing," International Journal of Soft Computing and Engineering (IJSCE), vol. 3, issue 1, pp. 2231-2307, 2013.
[16] D. Escalante and A. J. Korty, "Cloud Services: Policy and Assessment," EDUCAUSE Review, vol. 46, no. 4, 2011.
[17] H. Wu, J. Ju, G. Lu, C. Guo, Y. Xiong, and Y. Zhang, "Tuning ECN for data center networks," in Proc. 8th International Conference on Emerging Networking Experiments and Technologies (CoNEXT), December 10-13, 2012.
[18] S. Osada, K. Kajita, Y. Fukushima, and T. Yokohira, "TCP Incast Avoidance Based on Connection Serialization in Data Center Networks," APCC 2013, pp. 120-125.
[19] A. M. Alakeel, "A Guide to Dynamic Load Balancing in Distributed Computer Systems," International Journal of Computer Science and Network Security, vol. 10, no. 6, pp. 153-160, 2010.
[20] R. P. Padhy and P. G. P. Rao, "Load Balancing in Cloud Computing Systems," Department of Computer Science and Engineering, National Institute of Technology, 2011.
[21] R. X. T and X. F. Z, "A Load Balancing Strategy Based on the Combination of Static and Dynamic," in Database Technology and Applications, 2nd International Workshop, 2010.
[22] A. N. Tantawi and D. Towsley, "Optimal Static Load Balancing in Distributed Computer Systems," Journal of the ACM, vol. 32, no. 2, pp. 445-465, 1985.
[23] S. H. Bokhari, "Dual Processor Scheduling with Dynamic Reassignment," IEEE Transactions on Software Engineering, vol. SE-5, no. 4, pp. 341-349, 1979.
[24] T. Das and K. Sivalingam, "TCP Improvements for Data Center Networks," COMSNETS 2013, doi: 10.1109/COMSNETS.2013.6465539.
[25] A. Adesanmi and L. Mhamdi, "M21TCP: Overcoming TCP incast congestion in data centres," in Cloud Networking (CloudNet), 2015 IEEE 4th International Conference on, Oct. 2015, pp. 20-25.
[26] H. Wu, Z. Feng, C. Guo, and Y. Zhang, "ICTCP: Incast Congestion Control for TCP in data center networks," in ACM CoNEXT, 2010.
[27] "TinyFlow: Breaking Elephants Down into Mice in Data Center Networks," in Proc. IEEE LANMAN, 2014, pp. 1-6.
[28] S. Sen, D. Shue, S. Ihm, and M. J. Freedman, "Scalable, Optimal Flow Routing in Datacenters via Local Link Balancing," in Proc. ACM CoNEXT, 2013, pp. 151-162.
[29] H. Xu and B. Li, "RepFlow: Minimizing Flow Completion Times with Replicated Flows in Data Centers," in Proc. IEEE INFOCOM, 2014.
[30] M. Alizadeh, T. Edsall, S. Dharmapurikar, R. Vaidyanathan, K. Chu, A. Fingerhut, F. Matus, R. Pan, N. Yadav, G. Varghese, et al., "CONGA: Distributed Congestion-aware Load Balancing for Datacenters," in Proc. ACM SIGCOMM, 2014, pp. 503-514.
[31] N. Katta, M. Hira, A. Ghag, C. Kim, I. Keslassy, and J. Rexford, "CLOVE: How I Learned to Stop Worrying about the Core and Love the Edge," in Proc. ACM HotNets, 2016, pp. 155-161.
[32] W. Wang, Y. Sun, K. Zheng, M. A. Kaafar, D. Li, and Z. Li, "Freeway: Adaptively Isolating the Elephant and Mice Flows on Different Transmission Paths," in Proc. IEEE ICNP, 2014, pp. 362-367.
[33] J. Perry, A. Ousterhout, H. Balakrishnan, D. Shah, and H. Fugal, "Fastpass: A Centralized 'Zero-Queue' Datacenter Network," in Proc. ACM SIGCOMM, 2014, pp. 307-318.
[34] C. Raiciu, S. Barre, C. Pluntke, A. Greenhalgh, D. Wischik, and M. Handley, "Improving Datacenter Performance and Robustness with Multipath TCP," in Proc. ACM SIGCOMM, 2011, pp. 266-277.

