Cabling a fabric-attached MetroCluster configuration


ONTAP MetroCluster
NetApp
December 10, 2021

This PDF was generated from nstallfc/concept illustration of the local ha pairs in a mcc configuration.html on December 10, 2021. Always check docs.netapp.com for the latest.

Table of Contents

Cabling a fabric-attached MetroCluster configuration
  Parts of a fabric MetroCluster configuration
  Required MetroCluster FC components and naming conventions
  Configuration worksheets for FC switches and FC-to-SAS bridges
  Installing and cabling MetroCluster components
  Configuring the FC switches
  Installing FC-to-SAS bridges and SAS disk shelves

Cabling a fabric-attached MetroCluster configuration

The MetroCluster components must be physically installed, cabled, and configured at both geographic sites. The steps are slightly different for a system with native disk shelves as opposed to a system with array LUNs.

Parts of a fabric MetroCluster configuration

As you plan your MetroCluster configuration, you should understand the hardware components and how they interconnect.

Disaster Recovery (DR) groups

A fabric MetroCluster configuration consists of one or two DR groups, depending on the number of nodes in the MetroCluster configuration. Each DR group consists of four nodes.

- An eight-node MetroCluster configuration consists of two DR groups.
- A four-node MetroCluster configuration consists of one DR group.

The following illustration shows the organization of nodes in an eight-node MetroCluster configuration:

The following illustration shows the organization of nodes in a four-node MetroCluster configuration:

Key hardware elements

A MetroCluster configuration includes the following key hardware elements:

- Storage controllers

  The storage controllers are not connected directly to the storage but connect to two redundant FC switch fabrics.

- FC-to-SAS bridges

  The FC-to-SAS bridges connect the SAS storage stacks to the FC switches, providing bridging between the two protocols.

- FC switches

  The FC switches provide the long-haul backbone ISL between the two sites. The FC switches provide the two storage fabrics that allow data mirroring to the remote storage pools.

- Cluster peering network

  The cluster peering network provides connectivity for mirroring of the cluster configuration, which includes storage virtual machine (SVM) configuration. The configuration of all of the SVMs on one cluster is mirrored to the partner cluster.

Eight-node fabric MetroCluster configuration

An eight-node configuration consists of two clusters, one at each geographically separated site. cluster_A is located at the first MetroCluster site. cluster_B is located at the second MetroCluster site. Each site has one SAS storage stack. Additional storage stacks are supported, but only one is shown at each site. The HA pairs are configured as switchless clusters, without cluster interconnect switches. A switched configuration is supported, but is not shown.

An eight-node configuration includes the following connections:

- FC connections from each controller's HBAs and FC-VI adapters to each of the FC switches
- An FC connection from each FC-to-SAS bridge to an FC switch
- SAS connections between each SAS shelf and from the top and bottom of each stack to an FC-to-SAS bridge
- An HA interconnect between each controller in the local HA pair

  If the controllers support a single-chassis HA pair, the HA interconnect is internal, occurring through the backplane, meaning that an external interconnect is not required.

- Ethernet connections from the controllers to the customer-provided network that is used for cluster peering

  SVM configuration is replicated over the cluster peering network.

- A cluster interconnect between each controller in the local cluster

Four-node fabric MetroCluster configuration

The following illustration shows a simplified view of a four-node fabric MetroCluster configuration. For some connections, a single line represents multiple, redundant connections between the components. Data and management network connections are not shown.

The following illustration shows a more detailed view of the connectivity in a single MetroCluster cluster (both clusters have the same configuration):

Two-node fabric MetroCluster configuration

The following illustration shows a simplified view of a two-node fabric MetroCluster configuration. For some connections, a single line represents multiple, redundant connections between the components. Data and management network connections are not shown.

A two-node configuration consists of two clusters, one at each geographically separated site. cluster_A is located at the first MetroCluster site. cluster_B is located at the second MetroCluster site. Each site has one SAS storage stack. Additional storage stacks are supported, but only one is shown at each site.

In a two-node configuration, the nodes are not configured as an HA pair.

The following illustration shows a more detailed view of the connectivity in a single MetroCluster cluster (both clusters have the same configuration):

A two-node configuration includes the following connections:

- FC connections between the FC-VI adapters on the two controller modules
- FC connections from each controller module's HBAs to the FC-to-SAS bridge for each SAS shelf stack
- SAS connections between each SAS shelf and from the top and bottom of each stack to an FC-to-SAS bridge
- Ethernet connections from the controllers to the customer-provided network that is used for cluster peering

  SVM configuration is replicated over the cluster peering network.

Illustration of the local HA pairs in a MetroCluster configuration

In eight-node or four-node MetroCluster configurations, each site consists of storage controllers configured as one or two HA pairs. This allows local redundancy so that if one storage controller fails, its local HA partner can take over. Such failures can be handled without a MetroCluster switchover operation.

Local HA failover and giveback operations are performed with the storage failover commands, in the same manner as a non-MetroCluster configuration.

Related information

- Illustration of redundant FC-to-SAS bridges
- Redundant FC switch fabrics
- Illustration of the cluster peering network
- ONTAP concepts

Illustration of redundant FC-to-SAS bridges

FC-to-SAS bridges provide protocol bridging between SAS-attached disks and the FC switch fabric.

Related information

- Illustration of the local HA pairs in a MetroCluster configuration
- Redundant FC switch fabrics
- Illustration of the cluster peering network

Redundant FC switch fabrics

Each switch fabric includes inter-switch links (ISLs) that connect the sites. Data is replicated from site to site over the ISL. Each switch fabric must be on different physical paths for redundancy.

Related information

- Illustration of the local HA pairs in a MetroCluster configuration
- Illustration of redundant FC-to-SAS bridges
- Illustration of the cluster peering network

Illustration of the cluster peering network

The two clusters in the MetroCluster configuration are peered through a customer-provided cluster peering network. Cluster peering supports the synchronous mirroring of storage virtual machines (SVMs, formerly known as Vservers) between the sites.

Intercluster LIFs must be configured on each node in the MetroCluster configuration, and the clusters must be configured for peering. The ports with the intercluster LIFs are connected to the customer-provided cluster peering network. Replication of the SVM configuration is carried out over this network through the Configuration Replication Service.
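Because every node needs an intercluster LIF before peering can succeed, it can be useful to sanity-check an inventory ahead of time. The following is an illustrative sketch only; the node names and the dictionary shape are our own, not ONTAP data structures:

```python
# Cluster peering requires an intercluster LIF on every node of both clusters.
# This quick check walks a hand-built inventory (cluster -> node -> LIF names)
# and reports nodes that have no intercluster LIF configured.

nodes = {
    "cluster_A": {"A-1": ["intercluster1"], "A-2": []},  # A-2 is missing its LIF
    "cluster_B": {"B-1": ["intercluster1"], "B-2": ["intercluster1"]},
}

missing = [
    f"{cluster}/{node}"
    for cluster, members in nodes.items()
    for node, lifs in members.items()
    if not lifs
]
print(missing)  # → ['cluster_A/A-2']
```

In practice you would populate such an inventory from the output of the ONTAP network interface commands before attempting to peer the clusters.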

Related information

- Illustration of the local HA pairs in a MetroCluster configuration
- Illustration of redundant FC-to-SAS bridges
- Redundant FC switch fabrics
- Cluster and SVM peering express configuration
- Considerations for configuring cluster peering
- Cabling the cluster peering connections
- Peering the clusters

Required MetroCluster FC components and naming conventions

When planning your MetroCluster FC configuration, you must understand the required and supported hardware and software components. For convenience and clarity, you should also understand the naming conventions used for components in examples throughout the documentation. For example, one site is referred to as Site A and the other site is referred to as Site B.

Supported software and hardware

The hardware and software must be supported for the MetroCluster FC configuration.

NetApp Hardware Universe

When using AFF systems, all controller modules in the MetroCluster configuration must be configured as AFF systems.

Long-wave SFPs are not supported in the MetroCluster storage switches. For a table of supported SFPs, see the MetroCluster Technical Report.

Hardware redundancy in the MetroCluster FC configuration

Because of the hardware redundancy in the MetroCluster FC configuration, there are two of each component at each site. The sites are arbitrarily assigned the letters A and B, and the individual components are arbitrarily assigned the numbers 1 and 2.

Requirement for two ONTAP clusters

The fabric-attached MetroCluster FC configuration requires two ONTAP clusters, one at each MetroCluster site.

Naming must be unique within the MetroCluster configuration.

Example names:
- Site A: cluster_A
- Site B: cluster_B

Requirement for four FC switches

The fabric-attached MetroCluster FC configuration requires four FC switches (supported Brocade or Cisco models).

The four switches form two switch storage fabrics that provide the ISL between each of the clusters in the MetroCluster FC configuration.

Naming must be unique within the MetroCluster configuration.

Requirement for two, four, or eight controller modules

The fabric-attached MetroCluster FC configuration requires two, four, or eight controller modules.

In a four-node or eight-node MetroCluster configuration, the controller modules at each site form one or two HA pairs. Each controller module has a DR partner at the other site.

The controller modules must meet the following requirements:

- Naming must be unique within the MetroCluster configuration.
- All controller modules in the MetroCluster configuration must be running the same version of ONTAP.
- All controller modules in a DR group must be of the same model.

  However, in configurations with two DR groups, each DR group can consist of different controller module models.

- All controller modules in a DR group must use the same FC-VI configuration.

  Some controller modules support two options for FC-VI connectivity:

  - Onboard FC-VI ports
  - An FC-VI card in slot 1

  A mix of one controller module using onboard FC-VI ports and another using an add-on FC-VI card is not supported. For example, if one node uses onboard FC-VI configuration, then all other nodes in the DR group must use onboard FC-VI configuration as well.

Example names:
- Site A: controller_A_1

- Site B: controller_B_1

Requirement for four cluster interconnect switches

The fabric-attached MetroCluster FC configuration requires four cluster interconnect switches (if you are not using two-node switchless clusters).

These switches provide cluster communication among the controller modules in each cluster. The switches are not required if the controller modules at each site are configured as a two-node switchless cluster.

Requirement for FC-to-SAS bridges

The fabric-attached MetroCluster FC configuration requires one pair of FC-to-SAS bridges for each stack group of SAS shelves.

- FibreBridge 6500N bridges are not supported in configurations running ONTAP 9.8 and later.
- FibreBridge 7600N or 7500N bridges support up to four SAS stacks.
- FibreBridge 6500N bridges support only one SAS stack.
- Each stack can use different models of IOM.

  A mix of IOM12 modules and IOM3 modules is not supported within the same storage stack. A mix of IOM12 modules and IOM6 modules is supported within the same storage stack if your system is running a supported version of ONTAP.

  Supported IOM modules depend on the version of ONTAP you are running.

- Naming must be unique within the MetroCluster configuration.

The suggested names used as examples in this guide identify the controller module and stack that the bridge connects to, as shown below.

Pool and drive requirements (minimum supported)

Eight SAS disk shelves are recommended (four shelves at each site) to allow disk ownership on a per-shelf basis.

The MetroCluster configuration requires the minimum configuration at each site:

- Each node has at least one local pool and one remote pool at the site.

  For example, in a four-node MetroCluster configuration with two nodes at each site, four pools are required at each site.

- At least seven drives in each pool.

  In a four-node MetroCluster configuration with a single mirrored data aggregate per node, the minimum configuration requires 24 disks at the site.

In a minimum supported configuration, each pool has the following drive layout:

- Three root drives

- Three data drives
- One spare drive

In a minimum supported configuration, at least one shelf is needed per site.

MetroCluster configurations support RAID-DP and RAID4.

Drive location considerations for partially populated shelves

For correct auto-assignment of drives when using shelves that are half populated (12 drives in a 24-drive shelf), drives should be located in slots 0-5 and 18-23.

In a configuration with a partially populated shelf, the drives must be evenly distributed in the four quadrants of the shelf.

Mixing IOM12 and IOM6 modules in a stack

Your version of ONTAP must support shelf mixing. Refer to the Interoperability Matrix Tool (IMT) to see if your version of ONTAP supports shelf mixing.

NetApp Interoperability

For further details on shelf mixing, see: Hot-adding shelves with IOM12 modules to a stack of shelves with IOM6 modules

Bridge naming conventions

The bridges use the following example naming: bridge_site_stackgroup_location in pair

- site: The site on which the bridge pair physically resides. Possible values: A or B.
- stack group: The number of the stack group to which the bridge pair connects. Possible values: 1, 2, and so on. FibreBridge 7600N or 7500N bridges support up to four stacks in the stack group. The stack group can contain no more than 10 storage shelves. FibreBridge 6500N bridges support only a single stack in the stack group.
- location in pair: The bridge within the bridge pair. A pair of bridges connects to a specific stack group. Possible values: a or b.
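The naming convention above is mechanical, so it can be expressed as a small helper. This is an illustrative sketch only; the function name and validation rules are ours, not part of any NetApp tooling:

```python
# Build example bridge names following the convention described above:
# bridge_<site>_<stack group><location in pair>, for example bridge_A_1a.

def bridge_name(site, stack_group, location):
    """Return a bridge name such as 'bridge_A_1a'."""
    if site not in ("A", "B"):
        raise ValueError("site must be 'A' or 'B'")
    if location not in ("a", "b"):
        raise ValueError("location in pair must be 'a' or 'b'")
    if stack_group < 1:
        raise ValueError("stack group numbering starts at 1")
    return f"bridge_{site}_{stack_group}{location}"

# One stack group on each site yields four bridges:
names = [bridge_name(s, 1, loc) for s in ("A", "B") for loc in ("a", "b")]
print(names)  # → ['bridge_A_1a', 'bridge_A_1b', 'bridge_B_1a', 'bridge_B_1b']
```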

Example bridge names for one stack group on each site:

- bridge_A_1a
- bridge_A_1b
- bridge_B_1a
- bridge_B_1b

Configuration worksheets for FC switches and FC-to-SAS bridges

Before beginning to configure the MetroCluster sites, you can use the following worksheets to record your site information:

- Site A worksheet
- Site B worksheet

Installing and cabling MetroCluster components

The storage controllers must be cabled to the FC switches and the ISLs must be cabled to link the MetroCluster sites. The storage controllers must also be cabled to the cluster peering, data, and management networks.

Racking the hardware components

If you have not received the equipment already installed in cabinets, you must rack the components.

About this task

This task must be performed on both MetroCluster sites.

Steps

1. Plan out the positioning of the MetroCluster components.

   The rack space depends on the platform model of the controller modules, the switch types, and the number of disk shelf stacks in your configuration.

2. Properly ground yourself.

3. Install the controller modules in the rack or cabinet.

   AFF and FAS Documentation Center

4. Install the FC switches in the rack or cabinet.

5. Install the disk shelves, power them on, and then set the shelf IDs.

   - You must power-cycle each disk shelf.
   - Shelf IDs must be unique for each SAS disk shelf within each MetroCluster DR group (including both sites).
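The shelf-ID rule in step 5 (unique within each DR group, across both sites) lends itself to a quick pre-cabling check. A minimal sketch, with our own function name and data shape:

```python
# Verify that SAS shelf IDs are unique within a MetroCluster DR group.
# Per step 5 above, uniqueness must hold across both sites of the DR group.
# The inventory shape (site -> list of shelf IDs) is our own illustration.

from collections import Counter

def duplicate_shelf_ids(dr_group):
    """Return shelf IDs that appear more than once across all sites."""
    counts = Counter(sid for shelf_ids in dr_group.values() for sid in shelf_ids)
    return sorted(sid for sid, n in counts.items() if n > 1)

dr_group_1 = {
    "site_A": [10, 11, 12, 13],
    "site_B": [10, 21, 22, 23],  # shelf ID 10 reused at the partner site
}
print(duplicate_shelf_ids(dr_group_1))  # → [10]
```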

6. Install each FC-to-SAS bridge:

   a. Secure the "L" brackets on the front of the bridge to the front of the rack (flush-mount) with the four screws.

      The openings in the bridge "L" brackets are compliant with rack standard ETA-310-X for 19-inch (482.6 mm) racks.

      The ATTO FibreBridge Installation and Operation Manual for your bridge model contains more information and an illustration of the installation.

      For adequate port space access and FRU serviceability, you must leave 1U of space below the bridge pair and cover this space with a tool-less blanking panel.

   b. Connect each bridge to a power source that provides a proper ground.

   c. Power on each bridge.

      For maximum resiliency, bridges that are attached to the same stack of disk shelves must be connected to different power sources.

      The bridge Ready LED might take up to 30 seconds to illuminate, indicating that the bridge has completed its power-on self-test sequence.

Cabling the controller module's FC-VI and HBA ports to the FC switches

The FC-VI ports and HBAs (host bus adapters) must be cabled to the site FC switches on each controller module in the MetroCluster configuration.

Steps

1. Cable the FC-VI ports and HBA ports, using the table for your configuration and switch model.

   - Port assignments for FC switches when using ONTAP 9.1 and later
   - Port assignments for FC switches when using ONTAP 9.0
   - Port assignments for systems using two initiator ports

Cabling the ISLs between MetroCluster sites

You must connect the FC switches at each site through the fiber-optic inter-switch links (ISLs) to form the switch fabrics that connect the MetroCluster components.

About this task

This must be done for both switch fabrics.

Steps

1. Connect the FC switches at each site to all ISLs, using the cabling in the table that corresponds to your configuration and switch model.

   - Port assignments for FC switches when using ONTAP 9.1 and later
   - Port assignments for FC switches when using ONTAP 9.0

Related information

- Considerations for ISLs

Port assignments for systems using two initiator ports

You can configure FAS8020, AFF8020, FAS8200, and AFF A300 systems using a single initiator port for each fabric and two initiator ports for each controller.

You can follow the cabling for the FibreBridge 6500N bridge, or for the FibreBridge 7500N or 7600N bridge using only one FC port (FC1 or FC2). Instead of using four initiators, connect only two initiators and leave the other two that are connected to the switch port empty.

You must apply the correct RCF file for the FibreBridge 6500N bridge's configuration.

If zoning is performed manually, then follow the zoning used for a FibreBridge 6500N bridge, or for a FibreBridge 7500N or 7600N bridge using one FC port (FC1 or FC2). In this scenario, one initiator port rather than two is added to each zone member per fabric.

You can change the zoning or perform an upgrade from a FibreBridge 6500N bridge to a FibreBridge 7500N bridge using the procedure Hot-swapping a FibreBridge 6500N bridge with a FibreBridge 7500N or 7600N bridge in the MetroCluster Maintenance Guide.

The following table shows port assignments for FC switches when using ONTAP 9.1 and later.

Configurations using FibreBridge 6500N bridges, or FibreBridge 7500N or 7600N bridges using one FC port (FC1 or FC2) only

MetroCluster 1 or DR Group 1
Brocade switch models 6505, 6510, 6520, 7840, G620, G610, and DCX 8510-8

controller_x_1:

  Port          Connects to FC switch   Connects to switch port
  FC-VI port a  1                       0
  FC-VI port b  2                       0
  FC-VI port c  1                       1
  FC-VI port d  2                       1
  HBA port a    1                       2
  HBA port b    2                       2
  HBA port c    -                       -
  HBA port d    -                       -

Stack 1:

  bridge_x_1a   1                       8
  bridge_x_1b   2                       8

Stack y:

  bridge_x_ya   1                       11
  bridge_x_yb   2                       11

The following table shows port assignments for FC switches when using ONTAP 9.0.

MetroCluster two-node configuration
Brocade 6505, 6510, or DCX 8510-8

controller_x_1:

  Port          FC_switch_x_1   FC_switch_x_2
  FC-VI port a  0               -
  FC-VI port b  -               0
  HBA port a    1               -
  HBA port b    -               1
  HBA port c    2               -
  HBA port d    -               2

Port assignments for FC switches when using ONTAP 9.0

You need to verify that you are using the specified port assignments when you cable the FC switches. The port assignments are different between ONTAP 9.0 and later versions of ONTAP.

Ports that are not used for attaching initiator ports, FC-VI ports, or ISLs can be reconfigured to act as storage ports. However, if the supported RCFs are being used, the zoning must be changed accordingly.

If the supported RCF files are used, ISL ports may not connect to the same ports shown here and may need to be reconfigured manually.

Overall cabling guidelines

You should be aware of the following guidelines when using the cabling tables:

- The Brocade and Cisco switches use different port numbering:
  - On Brocade switches, the first port is numbered 0.
  - On Cisco switches, the first port is numbered 1.

- The cabling is the same for each FC switch in the switch fabric.
- AFF A300 and FAS8200 storage systems can be ordered with one of two options for FC-VI connectivity:
  - Onboard ports 0e and 0f configured in FC-VI mode.
  - Ports 1a and 1b on an FC-VI card in slot 1.

Brocade port usage for controller connections in an eight-node MetroCluster configuration running ONTAP 9.0

The cabling is the same for each FC switch in the switch fabric.

The following table shows controller port usage on Brocade switches:

MetroCluster eight-node configuration
Brocade 6505, 6510, or DCX 8510-8

controller_x_1:

  Port          FC_switch_x_1   FC_switch_x_2
  FC-VI port a  0               -
  FC-VI port b  -               0
  HBA port a    1               -
  HBA port b    -               1
  HBA port c    2               -
  HBA port d    -               2

controller_x_2:

  Port          FC_switch_x_1   FC_switch_x_2
  FC-VI port a  3               -
  FC-VI port b  -               3
  HBA port a    4               -
  HBA port b    -               4
  HBA port c    5               -
  HBA port d    -               5
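The Brocade eight-node controller assignments follow a fixed stride: controller n occupies switch ports 3(n-1) through 3(n-1)+2, with the a/c ports on the first switch and the b/d partners on the same port numbers of the second switch. The following sketch regenerates that mapping; it is an illustrative reconstruction of the published Brocade table only, not an official tool (the Cisco tables use their own numbering, starting at port 1):

```python
# Regenerate the Brocade eight-node controller port assignments shown above.
# Controller n (1-4) uses switch ports 3*(n-1) .. 3*(n-1)+2:
#   FC-VI a / HBA a / HBA c -> FC_switch_x_1
#   FC-VI b / HBA b / HBA d -> FC_switch_x_2 (same port numbers)

def controller_ports(n):
    base = 3 * (n - 1)
    return {
        "FC-VI port a": ("FC_switch_x_1", base),
        "FC-VI port b": ("FC_switch_x_2", base),
        "HBA port a":   ("FC_switch_x_1", base + 1),
        "HBA port b":   ("FC_switch_x_2", base + 1),
        "HBA port c":   ("FC_switch_x_1", base + 2),
        "HBA port d":   ("FC_switch_x_2", base + 2),
    }

print(controller_ports(4)["HBA port d"])  # → ('FC_switch_x_2', 11)
```

Always verify any generated plan against the published table for your exact switch model before cabling.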

controller_x_3:

  Port          FC_switch_x_1   FC_switch_x_2
  FC-VI port a  6               -
  FC-VI port b  -               6
  HBA port a    7               -
  HBA port b    -               7
  HBA port c    8               -
  HBA port d    -               8

controller_x_4:

  Port          FC_switch_x_1   FC_switch_x_2
  FC-VI port a  9               -
  FC-VI port b  -               9
  HBA port a    10              -
  HBA port b    -               10
  HBA port c    11              -
  HBA port d    -               11

Brocade port usage for FC-to-SAS bridge connections in an eight-node MetroCluster configuration running ONTAP 9.0

The following table shows bridge port usage when using FibreBridge 7500 bridges:

MetroCluster eight-node configuration
Brocade 6505, 6510, or DCX 8510-8

  FibreBridge 7500 bridge   Port   FC_switch_x_1   FC_switch_x_2
  bridge_x_1a               FC1    12              -
                            FC2    -               12
  bridge_x_1b               FC1    13              -
                            FC2    -               13
  bridge_x_2a               FC1    14              -
                            FC2    -               14

  bridge_x_2b               FC1    15              -
                            FC2    -               15
  bridge_x_3a               FC1    16              -
                            FC2    -               16
  bridge_x_3b               FC1    17              -
                            FC2    -               17
  bridge_x_4a               FC1    18              -
                            FC2    -               18
  bridge_x_4b               FC1    19              -
                            FC2    -               19

The following table shows bridge port usage when using FibreBridge 6500 bridges:

MetroCluster eight-node configuration
Brocade 6505, 6510, or DCX 8510-8

  FibreBridge 6500 bridge   Port   FC_switch_x_1   FC_switch_x_2
  bridge_x_1a               FC1    12              -
  bridge_x_1b               FC1    -               12
  bridge_x_2a               FC1    13              -
  bridge_x_2b               FC1    -               13
  bridge_x_3a               FC1    14              -
  bridge_x_3b               FC1    -               14
  bridge_x_4a               FC1    15              -
  bridge_x_4b               FC1    -               15
  bridge_x_5a               FC1    16              -

  bridge_x_5b               FC1    -               16
  bridge_x_6a               FC1    17              -
  bridge_x_6b               FC1    -               17
  bridge_x_7a               FC1    18              -
  bridge_x_7b               FC1    -               18
  bridge_x_8a               FC1    19              -
  bridge_x_8b               FC1    -               19

Brocade port usage for ISLs in an eight-node MetroCluster configuration running ONTAP 9.0

The following table shows ISL port usage:

MetroCluster eight-node configuration
Brocade 6505, 6510, or DCX 8510-8

  ISL port      FC_switch_x_1   FC_switch_x_2
  ISL port 1    20              20
  ISL port 2    21              21
  ISL port 3    22              22
  ISL port 4    23              23

Brocade port usage for controllers in a four-node MetroCluster configuration running ONTAP 9.0

The cabling is the same for each FC switch in the switch fabric.

MetroCluster four-node configuration
Brocade 6505, 6510, or DCX 8510-8

controller_x_1:

  Port          FC_switch_x_1   FC_switch_x_2
  FC-VI port a  0               -
  FC-VI port b  -               0
  HBA port a    1               -
  HBA port b    -               1
  HBA port c    2               -
  HBA port d    -               2

controller_x_2:

  Port          FC_switch_x_1   FC_switch_x_2
  FC-VI port a  3               -
  FC-VI port b  -               3
  HBA port a    4               -
  HBA port b    -               4
  HBA port c    5               -
  HBA port d    -               5

Brocade port usage for bridges in a four-node MetroCluster configuration running ONTAP 9.0

The cabling is the same for each FC switch in the switch fabric.

The following table shows bridge port usage up to port 17 when using FibreBridge 7500 bridges. Additional bridges can be cabled to ports 18 through 23.

MetroCluster four-node configuration

                                 Brocade 6510 or DCX 8510-8       Brocade 6505
  FibreBridge 7500 bridge  Port  FC_switch_x_1   FC_switch_x_2    FC_switch_x_1   FC_switch_x_2
  bridge_x_1a              FC1   6               -                6               -
                           FC2   -               6                -               6
  bridge_x_1b              FC1   7               -                7               -
                           FC2   -               7                -               7

  bridge_x_2a              FC1   8               -                12              -
                           FC2   -               8                -               12
  bridge_x_2b              FC1   9               -                13              -
                           FC2   -               9                -               13
  bridge_x_3a              FC1   10              -                14              -
                           FC2   -               10               -               14
  bridge_x_3b              FC1   11              -                15              -
                           FC2   -               11               -               15
  bridge_x_4a              FC1   12              -                16              -
                           FC2   -               12               -               16
  bridge_x_4b              FC1   13              -                17              -
                           FC2   -               13               -               17

Additional bridges can be cabled through port 19, then ports 24 through 47 (Brocade 6510 or DCX 8510-8), or through port 23 (Brocade 6505).

The following table shows bridge port usage when using FibreBridge 6500 bridges:

MetroCluster four-node configuration

                                 Brocade 6510, DCX 8510-8         Brocade 6505
  FibreBridge 6500 bridge  Port  FC_switch_x_1   FC_switch_x_2    FC_switch_x_1   FC_switch_x_2
  bridge_x_1a              FC1   6               -                6               -
  bridge_x_1b              FC1   -               6                -               6
  bridge_x_2a              FC1   7               -                7               -
  bridge_x_2b              FC1   -               7                -               7
  bridge_x_3a              FC1   8               -                12              -

  bridge_x_3b              FC1   -               8                -               12
  bridge_x_4a              FC1   9               -                13              -
  bridge_x_4b              FC1   -               9                -               13
  bridge_x_5a              FC1   10              -                14              -
  bridge_x_5b              FC1   -               10               -               14
  bridge_x_6a              FC1   11              -                15              -
  bridge_x_6b              FC1   -               11               -               15
  bridge_x_7a              FC1   12              -                16              -
  bridge_x_7b              FC1   -               12               -               16
  bridge_x_8a              FC1   13              -                17              -
  bridge_x_8b              FC1   -               13               -               17

Additional bridges can be cabled through port 19, then ports 24 through 47 (Brocade 6510, DCX 8510-8), or through port 23 (Brocade 6505).

Brocade port usage for ISLs in a four-node MetroCluster configuration running ONTAP 9.0

The following table shows ISL port usage:

MetroCluster four-node configuration

                Brocade 6510, DCX 8510-8         Brocade 6505
  ISL port      FC_switch_x_1   FC_switch_x_2    FC_switch_x_1   FC_switch_x_2
  ISL port 1    20              20               8               8
  ISL port 2    21              21               9               9
  ISL port 3    22              22               10              10
  ISL port 4    23              23               11              11

Brocade port usage for controllers in a two-node MetroCluster configuration running ONTAP 9.0

The cabling is the same for each FC switch in the switch fabric.

MetroCluster two-node configuration
Brocade 6505, 6510, or DCX 8510-8

controller_x_1:

  Port          FC_switch_x_1   FC_switch_x_2
  FC-VI port a  0               -
  FC-VI port b  -               0
  HBA port a    1               -
  HBA port b    -               1
  HBA port c    2               -
  HBA port d    -               2

Brocade port usage for bridges in a two-node MetroCluster configuration running ONTAP 9.0

The cabling is the same for each FC switch in the switch fabric.

The following table shows bridge port usage up to port 17 when using FibreBridge 7500 bridges. Additional bridges can be cabled to ports 18 through 23.

MetroCluster two-node configuration

                                 Brocade 6510, DCX 8510-8         Brocade 6505
  FibreBridge 7500 bridge  Port  FC_switch_x_1   FC_switch_x_2    FC_switch_x_1   FC_switch_x_2
  bridge_x_1a              FC1   6               -                6               -
                           FC2   -               6                -               6
  bridge_x_1b              FC1   7               -                7               -
                           FC2   -               7                -               7
  bridge_x_2a              FC1   8               -                12              -
                           FC2   -               8                -               12
  bridge_x_2b              FC1   9               -                13              -
                           FC2   -               9                -               13

  bridge_x_3a              FC1   10              -                14              -
                           FC2   -               10               -               14
  bridge_x_3b              FC1   11              -                15              -
                           FC2   -               11               -               15
  bridge_x_4a              FC1   12              -                16              -
                           FC2   -               12               -               16
  bridge_x_4b              FC1   13              -                17              -
                           FC2   -               13               -               17

Additional bridges can be cabled through port 19, then ports 24 through 47 (Brocade 6510, DCX 8510-8), or through port 23 (Brocade 6505).

The following table shows bridge port usage when using FibreBridge 6500 bridges:

MetroCluster two-node configuration

                                 Brocade 6510, DCX 8510-8         Brocade 6505
  FibreBridge 6500 bridge  Port  FC_switch_x_1   FC_switch_x_2    FC_switch_x_1   FC_switch_x_2
  bridge_x_1a              FC1   6               -                6               -
  bridge_x_1b              FC1   -               6                -               6
  bridge_x_2a              FC1   7               -                7               -
  bridge_x_2b              FC1   -               7                -               7
  bridge_x_3a              FC1   8               -                12              -
  bridge_x_3b              FC1   -               8                -               12
  bridge_x_4a              FC1   9               -                13              -
  bridge_x_4b              FC1   -               9                -               13
  bridge_x_5a              FC1   10              -                14              -

  bridge_x_5b              FC1   -               10               -               14
  bridge_x_6a              FC1   11              -                15              -
  bridge_x_6b              FC1   -               11               -               15
  bridge_x_7a              FC1   12              -                16              -
  bridge_x_7b              FC1   -               12               -               16
  bridge_x_8a              FC1   13              -                17              -
  bridge_x_8b              FC1   -               13               -               17

Additional bridges can be cabled through port 19, then ports 24 through 47 (Brocade 6510, DCX 8510-8), or through port 23 (Brocade 6505).

Brocade port usage for ISLs in a two-node MetroCluster configuration running ONTAP 9.0

The following table shows ISL port usage:

MetroCluster two-node configuration

                Brocade 6510, DCX 8510-8         Brocade 6505
  ISL port      FC_switch_x_1   FC_switch_x_2    FC_switch_x_1   FC_switch_x_2
  ISL port 1    20              20               8               8
  ISL port 2    21              21               9               9
  ISL port 3    22              22               10              10
  ISL port 4    23              23               11              11

Cisco port usage for controllers in an eight-node MetroCluster configuration running ONTAP 9.0

The following table shows controller port usage on Cisco switches:

MetroCluster eight-node configuration
Cisco 9148 or 9148S

controller_x_1:

  Port          FC_switch_x_1   FC_switch_x_2
  FC-VI port a  1               -
  FC-VI port b  -               1
  HBA port a    2               -
  HBA port b    -               2
  HBA port c    3               -
  HBA port d    -               3

controller_x_2:

  Port          FC_switch_x_1   FC_switch_x_2
  FC-VI port a  4               -
  FC-VI port b  -               4
  HBA port a    5               -
  HBA port b    -               5
  HBA port c    6               -
  HBA port d    -               6

controller_x_3:

  Port          FC_switch_x_1   FC_switch_x_2
  FC-VI port a  7               -
  FC-VI port b  -               7
  HBA port a    8               -
  HBA port b    -               8
  HBA port c    9               -
  HBA port d    -               9

controller_x_4:

  Port          FC_switch_x_1   FC_switch_x_2
  FC-VI port a  10              -
  FC-VI port b  -               10
  HBA port a    11              -
  HBA port b    -               11
  HBA port c    13              -
  HBA port d    -               13

Cisco port usage for FC-to-SAS bridges in an eight-node MetroCluster configuration running ONTAP 9.0

The following table shows bridge port usage up to port 23 when using FibreBridge 7500 bridges. Additional bridges can be attached using ports 25 through 48.

MetroCluster eight-node configuration
Cisco 9148 or 9148S

  FibreBridge 7500 bridge   Port   FC_switch_x_1   FC_switch_x_2
  bridge_x_1a               FC1    14              14
                            FC2    -               -
  bridge_x_1b               FC1    15              15
                            FC2    -               -
  bridge_x_2a               FC1    17              17
                            FC2    -               -
  bridge_x_2b               FC1    18              18
                            FC2    -               -
  bridge_x_3a               FC1    19              19
                            FC2    -               -
  bridge_x_3b               FC1    21              21
                            FC2    -               -

  bridge_x_4a               FC1    22              22
                            FC2    -               -
  bridge_x_4b               FC1    23              23
                            FC2    -               -

Additional bridges can be attached using ports 25 through 48 following the same pattern.

The following table shows bridge port usage up to port 23 when using FibreBridge 6500 bridges. Additional bridges can be attached using ports 25 through 48.

MetroCluster eight-node configuration
Cisco 9148 or 9148S

  FibreBridge 6500 bridge   Port   FC_switch_x_1   FC_switch_x_2
  bridge_x_1a               FC1    14              -
  bridge_x_1b               FC1    -               14
  bridge_x_2a               FC1    15              -
  bridge_x_2b               FC1    -               15
  bridge_x_3a               FC1    17              -
  bridge_x_3b               FC1    -               17
  bridge_x_4a               FC1    18              -
  bridge_x_4b               FC1    -               18
  bridge_x_5a               FC1    19              -
  bridge_x_5b               FC1    -               19
  bridge_x_6a               FC1    21              -
  bridge_x_6b               FC1    -               21
  bridge_x_7a               FC1    22              -
  bridge_x_7b               FC1    -               22

  bridge_x_8a               FC1    23              -
  bridge_x_8b               FC1    -               23

Additional bridges can be attached using ports 25 through 48 following the same pattern.

Cisco port usage for ISLs in an eight-node MetroCluster configuration running ONTAP 9.0

The following table shows ISL port usage:

MetroCluster eight-node configuration
Cisco 9148 or 9148S

  ISL port      FC_switch_x_1   FC_switch_x_2
  ISL port 1    12              12
  ISL port 2    16              16
  ISL port 3    20              20
  ISL port 4    24              24

Cisco port usage for controllers in a four-node MetroCluster configuration

The cabling is the same for each FC switch in the switch fabric.
