WHITE PAPER

Riverbed Stingray Traffic Manager
VMware vSphere 4 Performance on Cisco Unified Computing System

Table of Contents

Introduction
Test Setup
System Under Test
Benchmarks
Results
Analysis
Comparison with Previous Tests

INTRODUCTION

This document details the performance figures obtained by the Riverbed Stingray Traffic Manager running on VMware vSphere 4.0 on the Cisco Unified Computing System (UCS) and outlines the methods used to achieve these performance figures. This testing follows a previous test conducted with a Cisco UCS blade based on a previous generation of Intel Xeon processors. The results from these two sets of tests will be compared at the end of this document.

Test Setup

The system under test utilized a Cisco B200-M2 blade server with two 64-bit six-core Intel Xeon X5680 CPUs (“Westmere-EP”) at 3.33 GHz and 48 GB of RAM. The number of load-generating clients and web servers depended on the tests being run. For some tests, two load-generating clients and four web servers were used; for other tests, three load-generating clients and three web servers were used. All load-generating clients and web servers utilized Cisco B200-M1 blade servers with two 64-bit quad-core Intel Xeon X5570 CPUs (“Nehalem-EP”) at 2.93 GHz and 48 GB of RAM. Two load-generating clients and the system under test were in one UCS chassis, while four web servers were in a separate chassis. When a third load-generating client was needed, one of the web servers was used. All systems had dual-ported Cisco UCS M81KR VIC adapters, with one port each on the A and B UCS interconnects. All interfaces on the A interconnect were on a separate subnet from those on the B interconnect.

Red Hat Enterprise Linux 5.4 was the operating system used for all machines. Stingray Traffic Manager was running on the system under test, load balancing the requests across Zeus Web Servers. The requests were generated by a program called Zeusbench, a high-performance, vendor-agnostic HTTP benchmarking tool that is included with Stingray Traffic Manager. Zeusbench reported the benchmarking results obtained on each of the clients. These results were then collated to ascertain the overall performance of the system under test.

System Under Test

Virtual Environments

The performance figures for the various virtual configurations were obtained using the 64-bit build of Stingray Traffic Manager VA 6.0r7 installed on a Red Hat Enterprise Linux 5.4 virtual machine on vSphere 4.1. When obtaining performance figures, Stingray Traffic Manager instances were the only virtual machines installed on the vSphere server.

The virtual machines were each allocated 1 GB of RAM. One, two, or three vCPUs were allocated to each virtual machine, depending on the test.

The VMware VMDirectPath capabilities of the Intel Xeon architecture, coupled with the Cisco UCS M81KR VIC adapter's ability to instantiate virtual PCIe devices (in this instance, multiple virtual network adapters), were utilized so that network traffic bypassed the VMware hypervisor and went directly from the hardware to the VM. The driver used was the VMXNET 3 driver provided by VMware.

For all tests, each Stingray Traffic Manager instance utilized a single 10G uplink on the network adapter. Stingray Traffic Manager instances were evenly divided between the A and B UCS interconnects.
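A quick way to confirm from inside a guest that the expected network driver is bound to its interface is to query the driver details with ethtool; the snippet below is a minimal sketch, and the interface name eth0 is an assumption rather than a detail taken from the test configuration:

# Check which driver the guest NIC is using; with the setup described above,
# the "driver:" field is expected to report the VMXNET 3 driver (vmxnet3).
ethtool -i eth0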
The following Linux tunings were applied to each Stingray Traffic Manager instance:

echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
echo 0 > /proc/sys/net/ipv4/tcp_syncookies
echo 8192 > /proc/sys/net/ipv4/tcp_max_syn_backlog
echo 1024 > /proc/sys/net/core/somaxconn
echo 1800000 > /proc/sys/net/ipv4/tcp_max_tw_buckets
echo "128000 200000 262144" > /proc/sys/net/ipv4/tcp_mem
echo 2097152 > /proc/sys/fs/file-max
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
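The same values could also be persisted across reboots through the standard sysctl mechanism rather than echoed into /proc. The fragment below is a minimal sketch of an equivalent /etc/sysctl.conf entry set; it is offered as an assumption for convenience, not as the method used in the tests:

# Equivalent /etc/sysctl.conf entries (apply immediately with: sysctl -p)
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 8192
net.core.somaxconn = 1024
net.ipv4.tcp_max_tw_buckets = 1800000
net.ipv4.tcp_mem = 128000 200000 262144
fs.file-max = 2097152
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1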

The following diagrams describe the different virtual machine environments:

Diagram A: 2 VMs
Diagram B: 4 VMs
Diagram C: 6 VMs
Diagram D: 8 VMs
Diagram E: 12 VMs
Diagram F: 16 VMs
Diagram G: 24 VMs

Benchmarks

The intent of the benchmarks was to obtain performance figures with 2, 4, 6, 8, 12, 16, and 24 64-bit virtual machines with varying numbers of virtual CPUs, utilizing the virtualization-optimized Cisco UCS M81KR VIC adapter. These figures could then be compared to discover how performance is affected by scaling the number of virtual machines and virtual CPUs.

The benchmarks conducted consisted of the following tests:

HTTP connections per second: The rate at which Stingray Traffic Manager can process new HTTP connections. Clients send a series of HTTP requests, each on a new connection. Stingray Traffic Manager parses the requests and forwards them on to a web server using an established keepalive connection. The web server sends back a response for each request.

HTTP requests per second: The rate at which Stingray Traffic Manager can handle HTTP requests. Each client sends its requests down an existing keepalive connection. Stingray Traffic Manager processes each request and forwards it on to a web server over another keepalive connection. The web server responds with a zero-sized file.

HTTP 2K requests per second: The rate at which Stingray Traffic Manager can handle requests for a 2 KB file.

HTTP 8K requests per second: The rate at which Stingray Traffic Manager can handle requests for an 8 KB file.

HTTP throughput: The throughput that Stingray Traffic Manager can sustain when serving large files via HTTP. The methodology used for the HTTP requests per second benchmark is used here; however, the files requested are 1 MB in size.

SSL connections per second: The rate at which Stingray Traffic Manager can decrypt new SSL connections. Clients send a series of HTTPS requests, each on a new connection, for a zero-sized file. SSL session IDs are not re-used, so each connection requires a full SSL handshake. 1024-bit RC4 encryption is used.

SSL throughput: The throughput Stingray Traffic Manager can sustain while performing SSL decryption. Each client sends its requests on an existing keepalive connection and SSL session IDs are re-used. The test measures the performance of the cipher used to encrypt and decrypt the data passed along the SSL connection.
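To make the distinction between the connection-rate and request-rate tests concrete, the sketch below shows how a Zeusbench client might drive the two HTTP tests against a traffic manager virtual IP. The flags shown (-t for test duration, -c for concurrent connections, -k for reusing keepalive connections), the host name, and the requested file are illustrative assumptions and may not match the options of the bundled Zeusbench exactly:

# Hypothetical Zeusbench invocations (flags and URL are assumptions, not taken from the test scripts)
# HTTP connections per second: each request opens a new connection
zeusbench -t 60 -c 1000 http://stm-vip/zero-byte-file
# HTTP requests per second: requests are sent down persistent keepalive connections
zeusbench -t 60 -c 1000 -k http://stm-vip/zero-byte-file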

Results

The following tables present the test results of each of the virtual machine configurations running Stingray Traffic Manager:

Test Name                   2 VM        4 VM        4 VM        6 VM        8 VM
                            2 vCPU/VM   2 vCPU/VM   3 vCPU/VM   2 vCPU/VM   1 vCPU/VM
HTTP conn/s                 114,000     206,000     324,000     313,000     218,000
HTTP req/s                  266,000     524,000     672,000     684,000     512,000
HTTP 2K req/s               202,000     374,000     523,000     496,000     367,000
HTTP 8K req/s               148,000     246,000     251,000     253,000     250,000
HTTP throughput (Gbit/s)    19.0        19.1        19.2        19.3        19.0
SSL conn/s                  10,600      20,700      31,300      30,700      20,100
SSL throughput (Gbit/s)     4.1         7.7         12.1        11.7        7.7

Test Name                   8 VM        12 VM       12 VM       16 VM       24 VM
                            2 vCPU/VM   1 vCPU/VM   2 vCPU/VM   1 vCPU/VM   1 vCPU/VM
HTTP conn/s                 300,000     286,000     257,000     271,000     221,000
HTTP req/s                  559,000     634,000     494,000     546,000     404,000
HTTP 2K req/s               398,000     496,000     383,000     413,000     299,000
HTTP 8K req/s               248,000     248,000     237,000     237,000     203,000
HTTP throughput (Gbit/s)    19.1        18.6        18.6        18.5        18.2
SSL conn/s                  30,200      28,300      29,700      28,000      27,900
SSL throughput (Gbit/s)     11.6        11.1        11.3        11.1        10.9

The following charts compare the aggregate performance of each virtual machine configuration, ordered by the number of virtual machines.

The following charts compare the aggregate performance of each virtual machine configuration, ordered by the number of physical cores being utilized.

Analysis

The charts presented above show the difference in performance between benchmarks for each virtual machine configuration. The configurations vary by the number of virtual machines and the number of virtual CPUs configured for each virtual machine.

The results for HTTP connections/second showed outstanding performance in the virtual environment. These tests are not bandwidth dependent, so, as expected, the results improved as the number of physical cores used increased, until all 12 cores were being utilized. The peak result was found when using 4 VMs with 3 vCPUs, which was slightly higher than the result when using 6 VMs with 2 vCPUs. As the number of virtual machines increased to the point where there were more vCPUs than physical cores, the results began to decrease due to CPU oversubscription overhead.

The results for HTTP requests/second also showed outstanding performance in the virtual environment. Again, peak performance was found using either 4 VMs with 3 vCPUs or 6 VMs with 2 vCPUs. There is a measurable degradation in performance at 24 VMs, likely due to a combination of the overhead of context switching between virtual machines, the impact on memory hierarchy performance, and the effect of de-scheduled virtual machines on the timing of the network flows at the TCP level. Nevertheless, the performance levels remain acceptable in this configuration, thereby validating the scenarios where Riverbed appliances are deployed at maximum density to scale out and scale down. Note that the HTTP 8K requests/second test is bandwidth-limited when using a 10G adapter, which explains why the results are fairly close across the various configurations.

The results obtained for HTTP throughput show consistent performance of between 19.0 and 19.3 Gbit/s as the number of virtual machines was increased from two to eight. A slight drop occurred when using 12 and 16 virtual machines, and a further slight drop when using 24 virtual machines. Achieving 19 Gbit/s from a 20 Gbit/s adapter in a virtualized environment is an outstanding result.

SSL processing is CPU intensive, so, as expected, the results obtained for the SSL connections/second test and the SSL throughput test improve as the number of physical cores in use increases. A peak was again achieved when using 4 VMs with 3 vCPUs or 6 VMs with 2 vCPUs, but the results stayed consistently high as the number of virtual machines increased, with some drop-off.
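A quick back-of-the-envelope calculation supports the bandwidth-limit observation for the 8 KB test. The figures below come from the results tables above, and the calculation deliberately ignores HTTP and TCP/IP header overhead:

# Payload bandwidth implied by roughly 250,000 req/s of 8 KB (8192-byte) responses:
# 250,000 * 8192 bytes * 8 bits/byte ≈ 16.4 Gbit/s before protocol overhead,
# already a large fraction of the ~19 Gbit/s sustained in the HTTP throughput test.
echo "scale=3; 250000 * 8192 * 8 / 10^9" | bc    # prints 16.384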

Comparison with Previous Tests

In November 2009, Zeus (now a part of Riverbed) released a white paper entitled "Zeus Traffic Manager VMware vSphere 4 Performance on Cisco Unified Computing System". That white paper details a set of tests similar to those described in this document, so it is interesting to compare the results.

The major difference between the two sets of tests is that in the original tests the system under test was an 8-core blade with Intel Xeon X5570 processors at 2.93 GHz, whereas the more recent testing used a 12-core blade with Intel Xeon X5680 processors at 3.33 GHz. For more details on the previous tests, please refer to the previous white paper.

The charts below compare the results from the previous and current sets of tests. From this comparison, it can be seen that not only can higher peak results be achieved with the 12-core blade, but it also clearly handles larger numbers of virtual machines more efficiently.

About Riverbed

Riverbed delivers performance for the globally connected enterprise. With Riverbed, enterprises can successfully and intelligently implement strategic initiatives such as virtualization, consolidation, cloud computing, and disaster recovery without fear of compromising performance. By giving enterprises the platform they need to understand, optimize and consolidate their IT, Riverbed helps enterprises to build a fast, fluid and dynamic IT architecture that aligns with the business needs of the organization. Additional information about Riverbed (NASDAQ: RVBD) is available at www.riverbed.com.

Riverbed Technology, Inc.
199 Fremont Street
San Francisco, CA 94105
Tel: (415) 247-8800
www.riverbed.com

Riverbed Technology Ltd.
The Jeffreys Building
Cowley Road
Cambridge CB4 0WS
United Kingdom
Tel: 44 (0) 1223 568555

Riverbed Technology Pte. Ltd.
391A Orchard Road #22-06/10
Ngee Ann City Tower A
Singapore 238873
Tel: 65 6508-7400

Riverbed Technology K.K.
Shiba-Koen Plaza Building 9F
3-6-9, Shiba, Minato-ku
Tokyo, Japan 105-0014
Tel: 81 3 5419 1990

© 2011 Riverbed Technology. All rights reserved.
