Hyper-V Cluster Configuration, Performance and Security


Understanding Windows Server Hyper-V Cluster Configuration, Performance and Security

Author: Brandon Lee

Brandon Lee has been in the IT industry for over 15 years and has worked across a range of sectors, including education, manufacturing, hospitality, and consulting for various technology companies, including Fortune 500 companies. He is a prolific blogger and contributes to the community through blog posts and technical documentation.

Understanding Hyper-V Cluster Configuration, Performance and Security

1. Windows Server Failover Clustering Overview
   o Windows Server Failover Clusters Hyper-V Specific Considerations
2. Hyper-V Configuration Best Practices
   o Use Hyper-V Core installations
   o Sizing the Hyper-V Environment Correctly
   o Network Teaming and Configuration
   o Storage Configuration
3. Hyper-V Networking Best Practices
   o Physical NIC considerations
   o Windows and Virtual Network Considerations
4. What is a Hyper-V Virtual Switch?
   o Hyper-V Virtual Switch Capabilities and Functionality
   o Hyper-V Logical Switches
   o Creating Hyper-V Virtual Switches
5. Hyper-V Advanced Virtual Machine Network Configuration
   o Virtual Machine Queue (VMQ)
   o IPsec Task Offloading
   o SR-IOV
   o DHCP Guard, Router Guard, Protected Network and Port Mirroring
6. Hyper-V Design Considerations with iSCSI Storage
   o Hyper-V Windows Configuration for iSCSI
   o Verifying Multipathing
7. What is Windows Server 2016 Storage Spaces Direct?
   o Windows Server 2016 Storage Spaces Direct Requirements
   o Windows Server 2016 Storage Spaces Direct Architecture
   o Windows Server 2016 SAN vs Storage Spaces Direct
8. Why Use Hyper-V VHDX File Format?
   o Optimizing VHDX Virtual Disk Files
   o Resizing VHDX Virtual Disk Files
9. Troubleshooting Hyper-V with Event Logs
   o Taking Hyper-V Troubleshooting with Event Viewer Further
   o System Center Virtual Machine Manager Logging
10. System Center Virtual Machine Manager SCVMM – Overview
   o System Center Virtual Machine Manager SCVMM – Features
   o Managing Hyper-V Hosts and Clusters with SCVMM
   o Is System Center Virtual Machine Manager Required?

Backup & Disaster Recovery for Virtual and Physical Data Center Vembu Technologies

What is Windows Server Failover Clustering?

Windows Server Failover Clustering is the mechanism that allows Windows roles, features, and applications to be made highly available across multiple Windows Server hosts. Why is this important? Clustering helps ensure that workloads remain resilient in the event of a hardware failure. This is especially true for virtualized workloads, where multiple virtual machines often run on a single host. If that host fails, it is not just a single workload that goes down; with dense virtual machine configurations, many production workloads could be taken offline at once.

In the context of Hyper-V, a Windows Server Failover Cluster brings multiple physical Hyper-V hosts together into a "cluster" of hosts. This aggregates CPU and memory resources attached to shared storage, which in turn makes it easy to migrate virtual machines between the Hyper-V hosts in the cluster. The shared storage can be a traditional SAN or, in Windows Server 2016, Storage Spaces Direct.

Because virtual machines reside on shared storage, a virtual machine can be restarted on a different host in the Windows Server Failover Cluster if the physical host it was running on fails. This allows business-critical workloads to be brought back online very quickly even after a host failure.

Windows Server Failover Clustering also has other benefits that are important for Hyper-V workloads. In addition to making virtual machines highly available when hosts fail, the cluster also allows for planned maintenance windows, such as patching Hyper-V hosts. Administrators can migrate virtual machines off a host, apply patches, and then repopulate the host with virtual machines.
There is also Cluster-Aware Updating, which automates this process. Windows Server Failover Clustering additionally protects against corruption if the cluster hosts become separated from one another in the classic "split-brain" scenario. If two hosts attempt to write data to the same virtual disk, corruption can occur.

Windows Server Failover Clusters have a mechanism called quorum that prevents separated Hyper-V hosts in the cluster from inadvertently corrupting data. In Windows Server 2016, a new type of quorum witness has been introduced that can be used alongside the longstanding quorum mechanisms: the cloud witness.

Windows Server Failover Clustering Basics

Now that we know what a Windows Server Failover Cluster is and why it is important, let's take a deeper look at how Failover Clustering in Windows Server works. Windows Server Failover Clustering is a feature rather than a role, since it simply helps Windows Servers accomplish their primary roles.

It is included in both the Standard and Datacenter editions of Windows Server, with no difference in Failover Clustering features and functionality between the two editions. A Windows Server Failover Cluster is composed of two or more nodes that offer resources to the cluster as a whole. Windows Server 2016 Failover Clusters allow a maximum of 64 nodes per cluster and can run up to 8,000 virtual machines per cluster. Although this post focuses on Hyper-V, Windows Server Failover Clusters can host many different types of services, including file servers, print servers, DHCP, Exchange, and SQL, to name a few.

One of the primary benefits already mentioned is the ability to prevent corruption when cluster nodes become isolated from the rest of the cluster. Cluster nodes communicate via the cluster network to determine whether the rest of the cluster is reachable. The cluster then performs a voting process that determines which cluster nodes hold the node majority or can reach the majority of the cluster resources.

Quorum is the mechanism that validates which cluster nodes have the majority of resources and therefore the winning vote when it comes to assuming ownership of resources, such as virtual machine data in a Hyper-V cluster.
This becomes glaringly important in an even-node cluster, such as a cluster with four nodes. If a network split leaves the two nodes on each side able to see only their neighbor, there is no majority. Starting with Windows Server 2012, each node has a vote in the quorum voting process by default.

A disk or file share witness provides a tie-breaking vote by allowing one side of the partitioned cluster to claim that resource. The cluster hosts that claim the disk or file share witness place a SCSI lock on the resource, which prevents the other side from obtaining the majority quorum vote. With odd-numbered cluster configurations, one side of a partitioned cluster will always have a majority, so a disk or file share witness is not needed.

Quorum received enhancements in Windows Server 2016 with the addition of the cloud witness, which uses an Azure storage account and its reachability as the witness vote. A 0-byte blob file is created in the Azure storage account for each cluster that utilizes the account.
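As a sketch of how cluster creation and witness configuration look in PowerShell (the node names, cluster name, IP address, and Azure storage account details below are placeholders, not values from this document):

```powershell
# Validate the candidate nodes before creating the cluster.
Test-Cluster -Node "HV-Node1", "HV-Node2", "HV-Node3", "HV-Node4"

# Create the failover cluster with a static management address.
New-Cluster -Name "HV-Cluster1" `
            -Node "HV-Node1", "HV-Node2", "HV-Node3", "HV-Node4" `
            -StaticAddress 10.0.0.50

# Configure a cloud witness (Windows Server 2016 and later); the
# storage account name and access key come from your Azure subscription.
Set-ClusterQuorum -CloudWitness `
                  -AccountName "mystorageaccount" `
                  -AccessKey "<storage-account-access-key>"
```

These cmdlets must run on a host with the Failover Clustering feature installed; `Test-Cluster` produces a validation report worth reviewing before `New-Cluster` is run.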

Windows Server Failover Clusters Hyper-V Specific Considerations

Using Windows Server Failover Clusters to host the Hyper-V role opens up many powerful options for running production, business-critical virtual machines. There are a few technologies to be aware of that specifically pertain to Hyper-V and other workloads:

Cluster Shared Volumes
ReFS
Storage Spaces Direct

Cluster Shared Volumes

Cluster Shared Volumes, or CSVs, provide a specific benefit for Hyper-V virtual machines: they allow more than one Hyper-V host to have read/write access to the volume or LUN where virtual machines are stored. In legacy versions of Hyper-V, before CSVs were implemented, only one Windows Server Failover Cluster host could have read/write access to a specific volume at a time. This created complexities for high availability and the other mechanisms that are crucial to running business-critical virtual machines on a Windows Server Failover Cluster.

Cluster Shared Volumes solve this problem by allowing multiple nodes in a failover cluster to simultaneously have read/write access to the same LUN provisioned with NTFS. With all Hyper-V hosts connected to the various storage LUNs, another node can quickly assume the compute and memory load when a node in the Windows Server Failover Cluster fails.

ReFS

ReFS is short for "Resilient File System" and is the newest file system released by Microsoft, speculated by many to be the eventual replacement for NTFS. ReFS offers many advantages for Hyper-V environments. It is resilient by nature, meaning there is no chkdsk functionality, as errors are corrected on the fly.

However, one of the most powerful ReFS features for Hyper-V is its block cloning technology. With block cloning, the file system merely changes metadata instead of moving actual blocks.
This means that typically I/O-intensive operations on NTFS, such as zeroing out a disk or creating and merging checkpoints, are almost instantaneous on ReFS.

ReFS should not be used with SAN/NFS configurations, however, because storage then operates in I/O-redirected mode, where all I/O is sent to the coordinator node, which can lead to severe performance issues. ReFS is recommended with Storage Spaces Direct, which does not suffer the performance hit seen in SAN/NFS configurations thanks to its use of RDMA network adapters.
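In PowerShell terms, adding a provisioned cluster disk to CSV and formatting a volume with ReFS look roughly like this (the disk name, drive letter, and label are placeholder assumptions):

```powershell
# Add an available cluster disk resource to Cluster Shared Volumes;
# "Cluster Disk 1" is a placeholder for an already-provisioned disk.
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# Format a volume with ReFS, e.g. for a Storage Spaces Direct volume;
# the drive letter and label are illustrative.
Format-Volume -DriveLetter E -FileSystem ReFS -NewFileSystemLabel "VMStore"
```

Once added, the CSV appears to every node under C:\ClusterStorage, which is where virtual machine files are typically placed.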

Storage Spaces Direct

Storage Spaces Direct is Microsoft's software-defined storage solution that creates shared storage from locally attached drives on the Windows Server Failover Cluster nodes. It was introduced with Windows Server 2016 and allows two configurations:

Converged
Hyper-converged

With Storage Spaces Direct, you can utilize caching, storage tiers, and erasure coding to create hardware-abstracted storage constructs that allow running Hyper-V virtual machines with scale and performance more cheaply and efficiently than traditional SAN storage.

Hyper-V Configuration Best Practices

There are several critical configuration areas to examine when thinking about Hyper-V configuration best practices in any environment. We will look more closely at the following:

Use Hyper-V Core installations
Sizing the Hyper-V Environment Correctly
Network Teaming and Configuration
Storage Configuration
Operating System Patch Uniformity

These areas account for a large portion of the Hyper-V configuration mistakes commonly made in production environments. Let's take a closer look at each in more detail to explain why they are extremely important to get right in a production environment and what can be done to ensure you do get them right.

Use Hyper-V Core installations

While traditional Windows administrators love using a GUI to manage servers, maintaining GUI interfaces on server operating systems is not really a good idea. It leads to a much larger installation base, as well as additional patches and upgrades, simply due to the GUI interface and any security or other vulnerabilities it may present.

Using the Windows Server 2016 Core installation to run the Hyper-V role is certainly the recommended approach for running production workloads on Hyper-V nodes.
With the wide range of management tools that can be leveraged with Hyper-V Core, such as PowerShell remoting or running the GUI Hyper-V Manager on another server, there is really no additional administrative burden to running Hyper-V Core with today's tools.
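On a Server Core installation, the Hyper-V role itself is typically added with PowerShell; a minimal sketch (the computer name is a placeholder, and -Restart reboots the host when the install completes):

```powershell
# Install the Hyper-V role plus its management tools on a Core host,
# locally or remotely; the host will restart to finish the install.
Install-WindowsFeature -Name Hyper-V `
                       -ComputerName "HV-Node1" `
                       -IncludeManagementTools -Restart
```

The same cmdlet can also install the Failover Clustering and Multipath-IO features needed later in a cluster build.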

Sizing the Hyper-V Environment Correctly

Many issues can arise from sizing a Hyper-V environment incorrectly. If a Hyper-V cluster environment is sized too small, performance issues can result from overcommitting the available resources. Oversizing a Hyper-V environment, on the other hand, can be a deterrent from a fiscal standpoint, whether seeking initial funding for a greenfield installation or for an upgrade to server resources that are due for a refresh. A final, very crucial part of correctly sizing a Hyper-V environment is properly planning for growth. Every environment will differ in this respect depending on forecast growth.

A great tool for correctly sizing the needed number of cores, memory, and disk space is the Microsoft Assessment and Planning Toolkit. It can calculate the cores, memory, and storage currently utilized by production workloads in an automated fashion, so you can easily gather current workload demands. You can then account for growth in the environment based on the projected amount of new server resources that will need to be provisioned in the future.

The Microsoft Assessment and Planning Toolkit can be downloaded from the Microsoft Download Center (details.aspx?id=7826).

The Microsoft Assessment and Planning Toolkit allows sizing new Hyper-V environments based on current workloads
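Alongside a full MAP Toolkit assessment, a quick, rough inventory of the resources currently assigned to VMs on an existing host can be gathered with the Hyper-V PowerShell module; a sketch (this measures assigned resources, not actual utilization):

```powershell
# Sum vCPUs and assigned memory across all VMs on this host.
$vms = Get-VM
$totalVcpu  = ($vms | Measure-Object -Property ProcessorCount -Sum).Sum
$totalMemGB = ($vms | Measure-Object -Property MemoryAssigned -Sum).Sum / 1GB
Write-Output "$($vms.Count) VMs, $totalVcpu vCPUs, $totalMemGB GB memory assigned"
```

Figures like these are a starting point only; the MAP Toolkit measures real demand over time, which is what sizing decisions should be based on.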

Network Teaming and Configuration

Hyper-V network design is an extremely important part of Hyper-V cluster design in a production build-out. In fact, if the network configuration and design are not done properly, you can expect problems from the outset. Microsoft recommends designing your network configuration with the following goals in mind:

Ensure network quality of service
Provide network redundancy
Isolate traffic to defined networks
Where applicable, take advantage of Server Message Block (SMB) Multichannel

Proper design of network connections for redundancy generally involves teaming connections together. There are major mistakes that can be made in the network teaming configuration that lead to serious problems when hardware fails or a failover occurs. When cabling and designing network connections on Hyper-V hosts, you want to make sure that the cabling and network adapter connections are "X'ed" out, meaning that there is no single point of failure in the network path. The whole reason to team network adapters is so that if one network card fails, the other part of the team (the other network card) keeps functioning.

Mistakes can be made, however, when setting up network teams in Hyper-V cluster configurations. A common mistake is to team ports from the same network controller. This issue does not present itself until a hardware failure of that network controller takes both of its ports offline at once.

Also, if a physical Hyper-V host contains different makes or models of network controllers, it is not best practice to create a team across those different models. The different controllers can potentially handle the teamed network traffic inconsistently.
You always want to use the same type of network controller in a team.

Properly setting up your network adapters for redundancy and combining available controller ports can bring many advantages, such as the ability to use "converged networking". Converged networking with Hyper-V is made possible by combining extremely fast NICs (generally faster than 10 Gbps) and "virtually" splitting traffic from your physical networks inside the hypervisor, so the same network adapters are used for different kinds of traffic.
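A team spanning ports on two different physical controllers, as recommended above, can be sketched with the built-in LBFO teaming cmdlets (the team and adapter names are placeholders; check Get-NetAdapter for the real names):

```powershell
# Switch-independent team across ports on two different physical NICs,
# so the loss of one controller does not take down the team.
New-NetLbfoTeam -Name "ConvergedTeam" `
                -TeamMembers "NIC1-Port1", "NIC2-Port1" `
                -TeamingMode SwitchIndependent `
                -LoadBalancingAlgorithm Dynamic
```

Switch-independent mode avoids any switch-side LACP configuration; if your switches support and require LACP, the TeamingMode would be Lacp instead.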

Hyper-V Converged Networking logical layout (image courtesy of Microsoft)

Storage Configuration

There is another "teaming" issue as it relates to Hyper-V storage. While teaming is good for other types of network traffic, you do not want to team the network controllers carrying iSCSI traffic. Instead, you want to utilize MPIO for load balancing iSCSI traffic. The problem with teaming technologies such as LACP (802.3ad) for iSCSI traffic is that aggregating links does not improve the throughput of a single I/O flow; a single flow can only traverse one path. Link aggregation helps traffic flows from different sources, with each flow sent down a different path based on a hash algorithm. MPIO, on the other hand, works between the host and the iSCSI targets and properly load balances the traffic of single flows across the different iSCSI paths.

Aside from its performance benefits, MPIO also provides redundancy: a path between the Hyper-V host and the storage system can go down and the virtual machine stays online. Multipath I/O, which is what MPIO stands for, allows for extremely performant and redundant storage paths to service Hyper-V workloads.

As an example, to enable multipath support for iSCSI storage, run the following command on your Hyper-V host(s):

Enable-MSDSMAutomaticClaim -BusType iSCSI

To enable round-robin load balancing on the paths:

Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

To set the best-practice disk timeout of 60 seconds:

Set-MPIOSetting -NewDiskTimeout 60

Another storage best practice: always consult your specific storage vendor for the recommended Windows storage setting values. This ensures performance is tuned according to their specific requirements.

Hyper-V Networking Best Practices

There are important considerations to make to ensure Hyper-V networking best practices are followed. These include the following:

Physical NIC considerations
o Firmware and drivers
o Addressing
o Enable Virtual Machine Queue (VMQ)
o Jumbo frames
o Create redundant paths
Windows and Virtual Network considerations
o Create dedicated networks for traffic types
o Use NIC teaming, except on iSCSI networks, where MPIO should be used
o Disable TCP Chimney Offloading and IPsec Offloading
o Uncheck management traffic on dedicated virtual machine virtual switches
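The MPIO cmdlets above assume the Multipath-IO feature is already present on the host; a hedged sketch of installing it and then verifying the settings took effect:

```powershell
# The MSDSM/MPIO cmdlets require the Multipath-IO feature.
Install-WindowsFeature -Name Multipath-IO -Restart

# After the iSCSI sessions are connected, confirm the load-balance
# policy and disk timeout that were configured earlier.
Get-MSDSMGlobalDefaultLoadBalancePolicy
Get-MPIOSetting
```

Seeing all expected paths claimed (and the RR policy reported) is the quick sanity check that multipathing is actually in play before putting VMs on the storage.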

Physical NIC considerations

Starting at the physical NIC layer, this is an extremely important area of scrutiny when designing Hyper-V network architecture. Making sure the latest firmware and drivers are loaded for the physical NICs ensures you have the latest features, functionality, and bug fixes in place. Generally speaking, it is always best practice to run the latest firmware and drivers on any hardware. An added benefit is that it keeps you in a supported configuration when troubleshooting an issue with a hardware vendor; the first question typically asked is "do you have the latest firmware and drivers installed?" So be sure you are running the latest firmware and drivers.

When it comes to IP addressing schemes, it goes without saying: never use DHCP for addressing the underlying network layer in a Hyper-V environment. Automatic addressing schemes can lead to communication issues down the road. A good practice is to design your IP addresses, subnets, VLANs, and any other network constructs before setting up your Hyper-V host or cluster. Putting forethought into the process helps ensure there are no issues with overlapping IPs, subnets, and so on when it comes time to implement the design. Statically assign addresses to your host or hosts in a cluster.

Today's modern physical NICs inherently have features that dramatically improve performance, especially in virtualized environments. One such technology is Virtual Machine Queue, or VMQ. VMQ enables many hardware virtualization benefits that allow much more efficient network connectivity for TCP/IP, iSCSI, and FCoE. If your physical NICs support VMQ, make sure to enable it.

Use jumbo frames for iSCSI, Live Migration, and Cluster Shared Volume (CSV) networks. Jumbo frames are defined as any Ethernet frame larger than 1500 bytes. Typically, in a virtualization environment, jumbo frames are set to a frame size of 9000 bytes or a little larger.
This may depend on the hardware you are using, such as the network switches connecting the devices. By using jumbo frames, traffic throughput can be significantly increased at lower CPU cost, allowing much more efficient transmission of frames for the generally high-traffic communication on iSCSI, Live Migration, and CSV networks.

Another key consideration for the physical network cabling of your Hyper-V host or cluster is to always have redundant paths so that there is no single point of failure. This is accomplished by cabling multiple NICs to multiple physical switches, which creates redundant paths. It ensures that if one link goes down, critical connected networks such as an iSCSI network still have a connected path.
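Enabling VMQ and jumbo frames from PowerShell can be sketched as follows; the adapter names are placeholders, and the exact jumbo-frame registry keyword and value vary by NIC vendor, so confirm them with Get-NetAdapterAdvancedProperty first:

```powershell
# Enable VMQ on a VMQ-capable adapter used for VM traffic.
Enable-NetAdapterVmq -Name "VM-Traffic-NIC"

# Set jumbo frames on an iSCSI adapter; "*JumboPacket" = 9014 is a
# common vendor convention but is not universal.
Set-NetAdapterAdvancedProperty -Name "iSCSI-NIC1" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014
```

Remember that jumbo frames must be enabled end to end: on the NICs, the physical switch ports, and the storage target, or fragmentation and connectivity problems will follow.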

Windows and Virtual Network Considerations

When creating the virtual network switches that carry critical network communication in a Hyper-V environment, it is best practice to create dedicated networks for each type of communication. In a Hyper-V cluster, the following networks are generally created to carry each specific type of traffic:

CSV or Heartbeat
iSCSI
Live Migration
Management
Virtual Machine Network

Creating dedicated networks for each type of communication segregates the various types of traffic and is best practice from both a security and a performance standpoint. There are various ways of doing this: traffic can be segregated by using multiple physical NICs, or by aggregating multiple NICs and using VLANs to segregate the traffic.

As mentioned in the physical NIC considerations, having redundant paths enables high availability. By teaming NICs, you gain both increased performance and high availability. A NIC team creates a single "virtual" NIC that Windows can use as if it were a single NIC, while it contains multiple physical NICs in the underlying connection. If one NIC is disconnected, the team continues operating on the other connected NIC. However, for iSCSI connections we do not want NIC teaming, but rather Multipath I/O (MPIO). NIC teams increase performance for unique traffic flows but do not improve the throughput of a single traffic flow, as is the case with iSCSI. With MPIO, iSCSI traffic can take advantage of all the underlying NIC connections for the flows between the hosts and the iSCSI target(s).

Do not use TCP Chimney Offloading or IPsec Offloading with Windows Server 2016. These technologies have been deprecated in Windows Server 2016 and can impact server and networking performance.
To disable TCP Chimney Offload, run the following from an elevated command prompt:

netsh int tcp show global (shows the current TCP settings)
netsh int tcp set global chimney=disabled (disables TCP Chimney Offload, if enabled)

Hyper-V allows management traffic to be enabled on newly created virtual switches. It is best practice to isolate management traffic on a dedicated virtual switch and to uncheck "Allow management operating system to share this network adapter" on any virtual switch dedicated to virtual machine traffic.
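The checkbox mentioned above maps to a switch property that can also be set from PowerShell; a sketch with a placeholder switch name:

```powershell
# "Allow management operating system to share this network adapter"
# corresponds to -AllowManagementOS on the virtual switch.
Set-VMSwitch -Name "VM-Traffic-Switch" -AllowManagementOS $false
```

Disabling this on VM-dedicated switches keeps host management traffic off the virtual machine network, which is the isolation goal described above.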

The "Allow management operating system to share this network adapter" setting

What is a Hyper-V Virtual Switch?

The Hyper-V virtual switch is a software-based, layer-2 Ethernet network switch that is available by default in Hyper-V Manager when you install the Hyper-V role on a server. The Hyper-V virtual switch allows for many types of management as well as automation via programmatically managed and extensible capabilities, and it can connect to both virtual networks and the physical network.

In addition to traditional networking in the true sense, Hyper-V virtual switches also provide policy enforcement for security, resource isolation, and SLAs. These features are powerful tools that give today's often multi-tenant environments the ability to isolate workloads and provide traffic shaping, and they also assist in protecting against malicious virtual machines.

The Hyper-V virtual switch is highly extensible. Using Network Device Interface Specification (NDIS) filters and the Windows Filtering Platform (WFP), Hyper-V virtual switches can be extended by plugins written specifically to interact with the virtual switch. These are called virtual switch extensions and can provide enhanced networking and security capabilities.

Hyper-V Virtual Switch Capabilities and Functionality

We have already touched on some of the features and functionality that give Hyper-V administrators a great deal of control and flexibility in various environments. Let's look more closely at some of the capabilities afforded by the Hyper-V virtual switch:

ARP/ND poisoning (spoofing) protection – A common method of attack on the network is MAC spoofing, which allows an attacker to appear to be coming from an illegitimate source. Hyper-V virtual switches prevent this type of behavior by providing MAC address spoofing protection.
DHCP Guard protection – With DHCP Guard, Hyper-V can protect against a rogue VM being used as a DHCP server, which helps prevent man-in-the-middle attacks.
Port ACLs – Port ACLs allow administrators to filter traffic based on MAC or IP addresses or ranges, effectively enabling network isolation and microsegmentation.
VLAN trunks to a VM – Allow Hyper-V administrators to direct specific VLAN traffic to a specific VM.
Traffic monitoring – Administrators can view traffic traversing a Hyper-V virtual switch.
Private VLANs – Private VLANs can effectively microsegment traffic, as a private VLAN is basically a VLAN within a VLAN. VMs can be allowed or prevented from communicating with other VMs within the private VLAN construct.

There are three different connectivity configurations for the Hyper-V virtual switch.
They are:

Private Virtual Switch
Internal Virtual Switch
External Virtual Switch

Private Virtual Switch

A private virtual switch allows communication only between the virtual machines connected to it.

Internal Virtual Switch

An internal virtual switch allows communication only between the virtual adapters of connected VMs and the management operating system.

External Virtual Switch

An external virtual switch allows communication between the virtual adapters of connected VMs and the management operating system, and it uses the physical adapters connected to the physical switch to communicate externally.
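Several of the protections in the capability list above (DHCP Guard, Router Guard, port mirroring) are per-VM network adapter settings rather than switch-wide ones; a sketch with a placeholder VM name:

```powershell
# Harden a VM's network adapter: block rogue DHCP and router
# advertisements, and mirror its traffic for monitoring.
Set-VMNetworkAdapter -VMName "Web01" `
                     -DhcpGuard On `
                     -RouterGuard On `
                     -PortMirroring Source
```

A second VM's adapter would be set to -PortMirroring Destination to actually receive the mirrored traffic.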

With the external virtual switch, virtual machines can be connected to the outside world without any additional routing mechanism in place. With both private and internal switches, however, there must be some type of routing functionality to get traffic from the internal/private virtual switches to the outside. The primary use case of the internal and private switches is to isolate and secure traffic: when connected to these types of virtual switches, traffic is isolated to only those virtual machines connected to the switch.

Hyper-V Logical Switches

When utilizing System Center in a Hyper-V environment, the Virtual Machine Manager (VMM) fabric enables a different kind of Hyper-V virtual switch: the logical switch. A logical switch brings together virtual switch extensions, port profiles, and port classifications so that network adapters can be configured consistently across multiple hosts. This way, multiple hosts can have the same logical switch and uplink ports associated.

This is similar in feel and function to the distributed virtual switch familiar to VMware administrators. The configuration for the distributed virtual switch is stored at the vCenter Server level and then deployed from vCenter to each host, rather than configured from the host side.

Creating Hyper-V Virtual Switches

Hyper-V standard virtual switches can be created using either the Hyper-V Manager GUI or PowerShell. We will take a look at each of these methods of configuration and deployment to see how the standard Hyper-V virtual switch can be deployed either way.

Creating a new virtual network switch in the Virtual Switch Manager of Hyper-V Manager
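On the PowerShell side, the three switch types described earlier can be created with New-VMSwitch; a sketch in which the switch names and the physical adapter (or team) name are placeholders:

```powershell
# Private: VM-to-VM only.
New-VMSwitch -Name "Private-Switch" -SwitchType Private

# Internal: VMs plus the management OS, no physical uplink.
New-VMSwitch -Name "Internal-Switch" -SwitchType Internal

# External: bound to a physical adapter or team; management OS
# sharing disabled per the earlier best practice.
New-VMSwitch -Name "External-Switch" `
             -NetAdapterName "ConvergedTeam" `
             -AllowManagementOS $false
```

Get-VMSwitch afterwards confirms the switch type and the adapter binding for each switch created.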

