Dell EMC Networking Layer 3 Leaf-Spine Deployment and Best Practices with OS10


Deploying layer 3 leaf-spine networks in the data center with Dell EMC Networking OS10 switches

Dell EMC Networking Infrastructure Solutions
March 2018
Internal Use - Confidential

Revisions

Date        Rev.  Description      Authors
March 2018  1.0   Initial release  Andrew Waranowski, Curtis Bunch

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

Copyright 2018 Dell Inc. All rights reserved. Dell and the Dell EMC logo are trademarks of Dell Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

Table of contents

Revisions
1 Introduction
  1.1 Typographical Conventions
2 Hardware overview
  2.1 Dell EMC Networking S4148F-ON
  2.2 Dell EMC Networking S4248FB-ON
  2.3 Dell EMC Networking Z9100-ON
3 Leaf-spine overview
  3.1 Layer 3 leaf-spine topology
4 Protocols used in the leaf-spine examples
  4.1 Virtual Link Trunking (VLT)
  4.2 LACP/LAG
  4.3 Uplink Failure Detection (UFD)
    4.3.1 UFD vs. iBGP at the leaf layer
  4.4 Rapid Spanning Tree Protocol (RSTP)
  4.5 Routing protocols
    4.5.1 Border Gateway Protocol (BGP)
    4.5.2 Open Shortest Path First (OSPF)
  4.6 Virtual Router Redundancy Protocol (VRRP)
  4.7 Equal Cost Multi-Path (ECMP)
5 Layer 3 configuration planning
  5.1 BGP ASN configuration
  5.2 IP addressing
    5.2.1 Loopback addresses
    5.2.2 Point-to-point addresses
6 Example 1: Layer 3 with Dell EMC leaf and spine switches using OSPF
  6.1 S4148F-ON leaf switch configuration
  6.2 Z9100-ON spine switch configuration
  6.3 Example 1 validation
    6.3.1 show ip ospf neighbor
    6.3.2 show ip route ospf
    6.3.3 show vlt
    6.3.4 show vlt vlt-port-detail
    6.3.5 show vlt mismatch
    6.3.6 show uplink-state-group
    6.3.7 show spanning-tree rstp brief
7 Example 2: Layer 3 with Dell EMC leaf and spine switches using eBGP
  7.1 S4148F-ON leaf switch configuration
  7.2 Z9100-ON spine switch configuration
  7.3 Example 2 validation
    7.3.1 show ip bgp summary
    7.3.2 show ip route bgp
    7.3.3 Dell EMC Networking leaf switch validation commands previously covered
A Dell EMC Networking ONIE switch factory default settings
B Validated hardware and operating systems
C Technical support and resources
D Support and Feedback

1 Introduction

Data center networks have traditionally been built in a three-layer hierarchical tree consisting of access, aggregation, and core layers, as shown in Figure 1.

Figure 1: Hierarchical networking model

Due to increasing east-west traffic within the data center (server-server, server-storage, etc.), an alternative to the traditional access-aggregation-core network model is becoming more widely used. This architecture, shown in Figure 2, is known as a leaf-spine network and is a non-blocking network where all devices are exactly the same number of hops away.

Figure 2: Leaf-spine architecture

In a leaf-spine architecture, the access layer is referred to as the leaf layer. Servers and storage devices connect to leaf switches at this layer. At the next level, the aggregation and core layers are collapsed into a single spine layer. Every leaf switch connects to every spine switch to ensure that all leaf switches are no more than one hop away from one another. This minimizes latency and the likelihood of bottlenecks in the network.

A leaf-spine architecture is highly scalable. As administrators add racks to the data center, a pair of leaf switches is added to each new rack. Spine switches may be added as bandwidth requirements increase. If the capacity of the initial spine layer is exhausted, an additional layer can be deployed, creating a 3-tier model.

The interconnections between leaf and spine are dynamically routed. This deployment guide provides step-by-step configuration examples using eBGP or OSPF to provide dynamic routing. It includes examples using Dell EMC Networking switches at both the leaf and spine layers. The objective is to enable a network administrator or engineer to deploy a layer 3 leaf-spine architecture using the examples provided.

1.1 Typographical Conventions

The command line examples in this document use the following conventions:

Monospace Text             CLI examples
Underlined Monospace Text  CLI examples that wrap the page. This text is entered as a single command.
Italic Monospace Text      Variables in CLI examples
Bold Monospace Text        Used to distinguish CLI examples from surrounding text.

2 Hardware overview

This section briefly describes the hardware used to validate the examples in this guide. A complete listing of hardware and components used is provided in Appendix B.

2.1 Dell EMC Networking S4148F-ON

The Dell EMC Networking S4148F-ON is a 1-Rack Unit (RU) switch with forty-eight 10GbE ports and four 10/25/40/50/100GbE ports. Two S4148F-ON switches are used as leaf switches in the examples in this guide.

Dell EMC Networking S4148F-ON

2.2 Dell EMC Networking S4248FB-ON

The Dell EMC Networking S4248FB-ON is a 1-RU, multilayer switch with forty 10GbE ports, two 40GbE ports, and six 10/25/40/50/100GbE ports. Two S4248FB-ON switches are used as leaf switches in the examples in this guide.

Dell EMC Networking S4248FB-ON

2.3 Dell EMC Networking Z9100-ON

The Dell EMC Networking Z9100-ON is a 1-RU, multilayer switch with thirty-two 10/25/40/50/100GbE ports plus two 10GbE ports. The Z9100-ON is used as a spine switch in the examples in this guide.

Dell EMC Networking Z9100-ON

3 Leaf-spine overview

The following concepts apply to layer 3 leaf-spine topologies:

- Each leaf switch connects to every spine switch in the topology.
- Servers, storage arrays, edge routers, and similar devices always connect to leaf switches, never to spines.

Layer 3 topologies use two leaf switches at the top of each rack configured as a Virtual Link Trunking (VLT) pair. VLT allows all connections to be active while also providing fault tolerance. As administrators add racks to the data center, two leaf switches configured for VLT are added to each new rack.

The total number of leaf-spine connections is equal to the number of leaf switches multiplied by the number of spine switches. To increase fabric bandwidth, additional connections between leaf and spine switches can be implemented as long as the spine layer has the capacity for the additional connections.

3.1 Layer 3 leaf-spine topology

In a layer 3 leaf-spine network, traffic between leaf and spine switches is routed. The layer 3 / layer 2 boundary is at the leaf switches. This means that at the leaf layer and below (leaf switches and hosts), communication is achieved at layer 2, while communication at and above the leaf switches is achieved at layer 3. Spine switches are never connected to each other in a layer 3 topology. Equal cost multi-path routing (ECMP) is used to load balance traffic across the layer 3 network. Connections within racks from hosts to leaf switches are layer 2. Connections to external networks are made from a pair of edge or border leaf switches, as shown in Figure 6. Connections from the data center core can also be made directly to the spines.

Figure 6: Layer 3 leaf-spine network (spines connected by ECMP layer 3 links to VLT leaf pairs in each rack, with an edge leaf pair connecting to the external network)
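The fabric-sizing arithmetic above can be sketched in a few lines of Python. The switch counts and port speeds below are hypothetical values chosen for illustration, not figures from this guide:

```python
# Sketch of the leaf-spine sizing arithmetic described above.
# All concrete numbers are illustrative assumptions.

def fabric_links(leaves: int, spines: int, links_per_pair: int = 1) -> int:
    """Every leaf connects to every spine, so total links = leaves * spines."""
    return leaves * spines * links_per_pair

def oversubscription(server_ports: int, server_speed_gbe: int,
                     uplinks: int, uplink_speed_gbe: int) -> float:
    """Ratio of server-facing to spine-facing bandwidth on one leaf switch."""
    return (server_ports * server_speed_gbe) / (uplinks * uplink_speed_gbe)

# Example: 4 racks (8 leaf switches in VLT pairs) and 2 spines.
print(fabric_links(8, 2))                 # 16 leaf-spine links
# A leaf with 48 x 10GbE server ports and 2 x 100GbE uplinks:
print(oversubscription(48, 10, 2, 100))   # 2.4 (i.e., 2.4:1)
```

Adding a spine or a second link per leaf-spine pair raises fabric bandwidth and lowers the oversubscription ratio accordingly.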

4 Protocols used in the leaf-spine examples

This section provides an overview of the protocols used in constructing the leaf-spine network examples in this guide:

- VLT, Section 4.1
- LACP/LAG, Section 4.2
- Uplink Failure Detection (UFD), Section 4.3
- RSTP, Section 4.4
- Routing protocols, Section 4.5
  - Border Gateway Protocol (BGP)
  - Open Shortest Path First (OSPF)
- VRRP, Section 4.6
- ECMP, Section 4.7

4.1 Virtual Link Trunking (VLT)

VLT allows link aggregation group (LAG) terminations on two separate switches and supports a loop-free topology. The two switches are referred to as VLT peers and are kept synchronized via an inter-switch link (ISL) called the VLT interconnect (VLTi). A separate backup link maintains heartbeat messages across the OOB management network or using a point-to-point link between peers.

VLT provides layer 2 multipathing and load-balances traffic. VLT offers the following additional benefits:

- Eliminates spanning tree-blocked ports
- Uses all available uplink bandwidth
- Enables fast path switchover if either a link or device fails
- Ensures high availability

Note: Downstream connections from leaf switches configured for VLT do not necessarily have to be configured as LAGs if other fault-tolerant methods are preferred (e.g., multipath I/O). In this guide, examples 1 and 2 use LAGs to downstream servers while examples 3 and 4 do not.

4.2 LACP/LAG

A Link Aggregation Group (LAG) bundles multiple links into a single interface to increase bandwidth between two devices. LAGs also provide redundancy via the multiple paths. In a leaf-spine network, LAGs are typically used to attach servers or storage devices to the VLT leaf pairs.

Link Aggregation Control Protocol (LACP) is an improvement over static LAGs in that the protocol automatically fails over if there is a connectivity issue.
This is especially important if the links traverse a media converter, where it is possible to lose Ethernet connectivity while links remain in an up state.
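The VLT and LACP concepts above can be sketched as OS10-style leaf-switch configuration. This is an illustrative fragment only: the domain ID, interface ranges, backup destination address, and port-channel number are hypothetical, and the exact syntax should be verified against the OS10 release in use.

```text
! Illustrative sketch - all IDs, interfaces, and addresses are assumptions.

! VLT domain: VLTi discovery interfaces plus a backup heartbeat destination
! reachable over the OOB management network.
vlt-domain 127
 discovery-interface ethernet1/1/29-1/1/30
 backup destination 100.67.162.35

! LACP port channel to a downstream server, extended across both VLT peers.
interface port-channel 1
 description "Server LAG"
 vlt-port-channel 1
 no shutdown

! Server-facing member port runs LACP in active mode.
interface ethernet 1/1/1
 channel-group 1 mode active
 no shutdown
```

The same port-channel and `vlt-port-channel` ID would be configured on the VLT peer so the server sees one logical LAG spanning both leaf switches.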

4.3 Uplink Failure Detection (UFD)

If a leaf switch loses all connectivity to the spine layer, by default the attached hosts continue to send traffic to that leaf without a direct path to the destination. The VLTi link to the peer leaf switch handles traffic during such a network outage, but this is not considered a best practice.

Dell EMC recommends enabling UFD, which detects the loss of upstream connectivity. An uplink state group is configured on each leaf switch, which creates an association between the uplinks to the spines and the downlink interfaces.

In the event all uplinks fail on a switch, UFD automatically shuts down the downstream interfaces. This propagates to the hosts attached to the leaf switch. The host then uses its link to the remaining switch to continue sending traffic across the leaf-spine network.

4.3.1 UFD vs. iBGP at the leaf layer

Some leaf and spine implementations make use of iBGP between leaf switches instead of UFD. If UFD is not used, it is possible, due to hashing, for packets to enter a leaf switch which does not have functioning uplinks. If iBGP is enabled between the leaf switches, it is possible for them to route packets to each other, provided there are valid routes. However, the designs in this document make use of UFD, because it reduces network and configuration complexity and simplifies path determination.

Figure: Using iBGP between leaf switches vs. UFD on leaf switches - iBGP provides more path options at the cost of greater network and configuration complexity

4.4 Rapid Spanning Tree Protocol (RSTP)

As a precautionary measure, Dell EMC recommends enabling RSTP on all switches that have layer 2 interfaces.
Even though VLT environments are loop-free, STP should be configured as a best practice in case

of switch misconfiguration or improperly connected cables. In properly configured and connected leaf-spine networks, there are no ports blocked by Spanning Tree Protocol.

4.5 Routing protocols

Either of the following routing protocols may be used on layer 3 connections when designing a leaf-spine network:

- OSPF
- BGP

4.5.1 Border Gateway Protocol (BGP)

BGP may be selected for scalability and is well suited for very large networks. BGP can be configured as External BGP (eBGP) to route between autonomous systems or Internal BGP (iBGP) to route within a single autonomous system.

Layer 3 leaf-spine networks use ECMP routing. eBGP and iBGP handle ECMP differently. By default, eBGP supports ECMP without any adjustments. To keep configuration complexity to a minimum, Dell EMC recommends eBGP in leaf-spine fabric deployments.

BGP tracks IP reachability to the peer remote address and the peer local address. Whenever either address becomes unreachable, BGP brings down the session with the peer. To ensure fast convergence with BGP, Dell EMC recommends enabling neighbor fall-over. Neighbor fall-over terminates the BGP session of any directly adjacent peer if the link to reach the peer goes down, without waiting for the hold-down timer to expire.

4.5.2 Open Shortest Path First (OSPF)

OSPF is an interior gateway protocol that provides routing inside an autonomous network. OSPF routers send link-state advertisements to all other routers within the same autonomous system areas. While generally more memory and CPU intensive than BGP, OSPF may offer faster convergence. OSPF is often used in smaller networks (up to roughly 100 OSPF routers, depending on various factors).

4.6 Virtual Router Redundancy Protocol (VRRP)

VRRP is a first hop redundancy protocol. It provides gateway redundancy by enabling a pair of VRRP routers to coordinate ownership of a virtual IP address, which hosts use as their default gateway.
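The UFD and BGP neighbor fall-over recommendations above can be sketched as OS10-style leaf-switch configuration. This is an illustrative fragment only: the group number, interface ranges, neighbor address, and AS numbers are hypothetical, and syntax should be checked against the OS10 release in use.

```text
! Illustrative sketch - all IDs, interfaces, addresses, and ASNs are assumptions.

! UFD: associate spine-facing uplinks with server-facing downlinks so that
! losing all uplinks shuts the downlinks and hosts fail over to the VLT peer.
uplink-state-group 1
 upstream ethernet1/1/49-1/1/50
 downstream ethernet1/1/1-1/1/2
 enable

! eBGP to a spine, with fall-over so the session drops as soon as the
! direct link fails instead of waiting for the hold-down timer.
router bgp 64701
 neighbor 192.168.1.0
  remote-as 64600
  fall-over
  no shutdown
```

With this in place, an uplink failure is signaled to hosts immediately (via UFD) and stale BGP paths through the failed link are withdrawn without waiting on timers.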
