SAP HANA System Replication Scale-Up - SUSE Documentation


SUSE Best Practices

SAP HANA System Replication Scale-Up - Performance Optimized Scenario

SUSE Linux Enterprise Server for SAP Applications 15

Fabian Herschel, Distinguished Architect SAP, SUSE
Bernd Schubert, SAP Solution Architect, SUSE
Lars Pinne, System Engineer, SUSE

SUSE Linux Enterprise Server for SAP Applications is optimized in various ways for SAP* applications. This guide provides detailed information about installing and customizing SUSE Linux Enterprise Server for SAP Applications for SAP HANA system replication in the performance optimized scenario. The document focuses on the steps to integrate an already installed and working SAP HANA with system replication. It is based on SUSE Linux Enterprise Server for SAP Applications 15 SP3. The concept however can also be used with SUSE Linux Enterprise Server for SAP Applications 15 SP1 or newer.

Disclaimer: Documents published as part of the SUSE Best Practices series have been contributed voluntarily by SUSE employees and third parties. They are meant to serve as examples of how particular actions can be performed. They have been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. SUSE cannot verify that actions described in these documents do what is claimed or whether actions described have unintended consequences. SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors or the consequences thereof.

Publication Date: 2022-06-08

Contents

1 About this guide
2 Supported scenarios and prerequisites
3 Scope of this document
4 Planning the installation
5 Setting up the operating system
6 Installing the SAP HANA Databases on both cluster nodes
7 Setting up SAP HANA System Replication

8 Setting up SAP HANA HA/DR providers
9 Configuring the cluster
10 Testing the cluster
11 Administration
12 Examples
13 References
14 Legal notice
15 GNU Free Documentation License

1 About this guide

1.1 Introduction

SUSE Linux Enterprise Server for SAP Applications is optimized in various ways for SAP* applications. This guide provides detailed information about installing and customizing SUSE Linux Enterprise Server for SAP Applications for SAP HANA system replication in the performance optimized scenario.

"SAP customers invest in SAP HANA" is the conclusion reached by a recent market study carried out by Pierre Audoin Consultants (PAC). In Germany, half of the companies expect SAP HANA to become the dominant database platform in the SAP environment. Often the "SAP Business Suite* powered by SAP HANA*" scenario is already being discussed in concrete terms.

SUSE is accommodating this development by offering SUSE Linux Enterprise Server for SAP Applications, the recommended and supported operating system for SAP HANA. In close collaboration with SAP, cloud service and hardware partners, SUSE provides two resource agents for customers to ensure the high availability of SAP HANA system replications.

1.1.1 Abstract

This guide describes planning, setup, and basic testing of SUSE Linux Enterprise Server for SAP Applications based on the high availability solution scenario "SAP HANA Scale-Up System Replication Performance Optimized".

From the application perspective, the following variants are covered:

- plain system replication
- system replication with secondary site read-enabled
- multi-tier (chained) system replication
- multi-target system replication
- multi-tenant database containers for all above

From the infrastructure perspective, the following variants are covered:

- 2-node cluster with disk-based SBD
- 3-node cluster with diskless SBD

- On-premises deployment on physical and virtual machines
- Public cloud deployment (usually needs additional documentation focusing on the cloud specific implementation details)

Deployment automation simplifies roll-out. There are several options available, particularly on public cloud platforms (for example …). Ask your public cloud provider or your SUSE contact for details.

See Section 2, "Supported scenarios and prerequisites" for details.

1.1.2 Scale-up versus scale-out

The first set of scenarios includes the architecture and development of scale-up solutions.

FIGURE 1: SAP HANA SYSTEM REPLICATION SCALE-UP IN THE CLUSTER

For these scenarios, SUSE has developed the scale-up resource agent package SAPHanaSR. System replication helps to replicate the database data from one computer to another computer to compensate for database failures (single-box replication).

The second set of scenarios includes the architecture and development of scale-out solutions (multi-box replication). For these scenarios, SUSE has developed the scale-out resource agent package SAPHanaSR-ScaleOut.
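Both resource agent packages ship with SUSE Linux Enterprise Server for SAP Applications. As a minimal, hedged sketch (package names as delivered with SLES for SAP Applications 15; verify availability with zypper in your environment), checking for and installing the scale-up package used throughout this guide could look like this on both cluster nodes:

# Check whether the scale-up resource agent package is available or already installed
zypper info SAPHanaSR

# Install the resource agents together with the accompanying documentation package
zypper install SAPHanaSR SAPHanaSR-doc

The scale-out package SAPHanaSR-ScaleOut would be installed analogously, but is not used in this guide.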

FIGURE 2: SAP HANA SYSTEM REPLICATION SCALE-OUT IN THE CLUSTER

With this mode of operation, internal SAP HANA high availability (HA) mechanisms and the resource agent must work together or be coordinated with each other. SAP HANA system replication automation for scale-out is described in a separate document available on our documentation Web page at https://documentation.suse.com/sbp/sap/. The document for scale-out is named "SAP HANA System Replication Scale-Out - Performance Optimized Scenario".

1.1.3 Scale-up scenarios and resource agents

SUSE has implemented the scale-up scenario with the SAPHana resource agent (RA), which performs the actual check of the SAP HANA database instances. This RA is configured as a multi-state resource. In the scale-up scenario, the promoted RA instance assumes responsibility for the SAP HANA databases running in primary mode. The non-promoted RA instance is responsible for instances that are operated in synchronous (secondary) status.

To make configuring the cluster as simple as possible, SUSE has developed the SAPHanaTopology resource agent. This RA runs on all nodes of a SUSE Linux Enterprise Server for SAP Applications cluster and gathers information about the statuses and configurations of SAP HANA system replications. It is designed as a normal (stateless) clone.

SAP HANA system replication for scale-up is supported in the following scenarios or use cases:

- Performance optimized (A ⇒ B). This scenario and setup is described in this document.

FIGURE 3: SAP HANA SYSTEM REPLICATION SCALE-UP IN THE CLUSTER - PERFORMANCE OPTIMIZED

In the performance optimized scenario an SAP HANA RDBMS on site A is synchronizing with an SAP HANA RDBMS on site B on a second node. As the SAP HANA RDBMS on the second node is configured to pre-load the tables, the takeover time is typically very short.

One big advantage of the performance optimized scenario of SAP HANA is the possibility to allow read access on the secondary database site. To support this read enabled scenario, a second virtual IP address is added to the cluster and bound to the secondary role of the system replication.

- Cost optimized (A ⇒ B, Q). This scenario and setup is described in another document available from the documentation Web page (https://documentation.suse.com/sbp/sap/). The document for cost optimized is named "Setting up a SAP HANA SR Cost Optimized Infrastructure".

FIGURE 4: SAP HANA SYSTEM REPLICATION SCALE-UP IN THE CLUSTER - COST OPTIMIZED

In the cost optimized scenario, the second node is also used for a stand-alone non-replicated SAP HANA RDBMS system (like QAS or TST). Whenever a takeover is needed, the non-replicated system must be stopped first. As the productive secondary system on this node must be limited in using system resources, the table preload must be switched off. A possible takeover needs longer than in the performance optimized use case.

In the cost optimized scenario, the secondary needs to be running in a reduced memory consumption configuration. This is why read enabled must not be used in this scenario.

As already explained, the secondary SAP HANA database must run with memory resource restrictions. The HA/DR provider needs to remove these memory restrictions when a takeover occurs. This is why multi SID (also MCOS) must not be used in this scenario.

- Multi-tier (A ⇒ B → C) and multi-target (B ⇐ A ⇒ C).

FIGURE 5: SAP HANA SYSTEM REPLICATION SCALE-UP IN THE CLUSTER - PERFORMANCE OPTIMIZED CHAIN

A multi-tier system replication has an additional target. In the past, this third site had to be connected to the secondary (chain topology). With current SAP HANA versions, the multiple target topology is allowed by SAP. Please look at the scenarios and prerequisites section below or the manual pages SAPHanaSR(7) and SAPHanaSR.py(7) for details.

FIGURE 6: SAP HANA SYSTEM REPLICATION SCALE-UP IN THE CLUSTER - PERFORMANCE OPTIMIZED MULTI-TARGET

Multi-tier and multi-target systems are implemented as described in this document. Only the first replication pair (A and B) is handled by the cluster itself.

- Multi-tenancy or MDC.

Multi-tenancy is supported for all above scenarios and use cases. This scenario is supported since SAP HANA SPS09. The setup and configuration from a cluster point of view is the same for multi-tenancy and single container. Thus you can use the above documents for both kinds of scenarios.

1.1.4 The concept of the performance optimized scenario

In case of failure of the primary SAP HANA on node 1 (node or database instance), the cluster first tries to start the takeover process. This allows to use the already loaded data at the secondary site. Typically the takeover is much faster than a local restart.

To achieve an automation of this resource handling process, you must use the SAP HANA resource agents included in SAPHanaSR. System replication of the productive database is automated with SAPHana and SAPHanaTopology.

The cluster only allows a takeover to the secondary site if the SAP HANA system replication was in sync until the point when the service of the primary got lost. This ensures that the last commits processed on the primary site are already available at the secondary site.

SAP did improve the interfaces between SAP HANA and external software, such as cluster frameworks. These improvements also include the implementation of SAP HANA call-outs in case of special events, such as status changes for services or system replication channels.

These call-outs are also called HA/DR providers. These interfaces can be used by implementing SAP HANA hooks written in Python. SUSE has enhanced the SAPHanaSR package to include such SAP HANA hooks to optimize the cluster interface. Using the SAP HANA hooks described in this document allows to inform the cluster immediately if the SAP HANA system replication is broken. In addition to the SAP HANA hook status, the cluster continues to poll the system replication status on a regular basis.

You can adjust the level of automation by setting the parameter AUTOMATED_REGISTER. If automated registration is activated, the cluster will automatically register a former failed primary to become the new secondary. Refer to the man pages SAPHanaSR(7) and ocf_suse_SAPHana(7) for details on all supported parameters and features.

Important: The solution is not designed to manually 'migrate' the primary or secondary instance using HAWK or any other cluster client commands. In the Administration section of this document we describe how to 'migrate' the primary to the secondary site using SAP and cluster commands.

1.2 Ecosystem of the document

1.2.1 Additional documentation and resources

Chapters in this manual contain links to additional documentation resources that are either available on the system or on the Internet.

For the latest documentation updates, see https://documentation.suse.com/.

You can also find numerous white papers, best practices, setup guides, and other resources at the SUSE Linux Enterprise Server for SAP Applications best practices Web page: https://documentation.suse.com/sbp/sap/.

SUSE also publishes blog articles about SAP and high availability. Join us by using the hashtag #TowardsZeroDowntime. Use the following link: …

1.2.2 Errata

To deliver urgent smaller fixes and important information in a timely manner, the Technical Information Document (TID) for this setup guide will be updated, maintained and published at a higher frequency:

- SAP HANA SR Performance Optimized Scenario - Setup Guide - Errata: https://www.suse.com/support/kb/doc/?id=7023882
- Showing SOK Status in Cluster Monitoring Tools Workaround: https://www.suse.com/support/kb/doc/?id=7023526 - see also the blog article and https://www.suse.com/support/kb/doc/?id=7023713

1.2.3 Feedback

Several feedback channels are available:

Bugs and Enhancement Requests
For services and support options available for your product, refer to http://www.suse.com/support/. To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and select Submit New SR (Service Request).

Mail
For feedback on the documentation of this product, you can send a mail to doc-team@suse.com. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

2 Supported scenarios and prerequisites

With the SAPHanaSR resource agent software package, we limit the support to scale-up (single-box to single-box) system replication with the following configurations and parameters:

- Two-node clusters are standard. Three-node clusters are fine if you install the resource agents also on that third node. But define in the cluster that SAP HANA resources must never run on that third node. In this case the third node is an additional majority maker in case of cluster separation.

- The cluster must include a valid STONITH method.
  - Any STONITH mechanism supported for production use by SUSE Linux Enterprise High Availability Extension 15 (like SBD, IPMI) is supported with SAPHanaSR.
  - This guide focuses on the SBD fencing method as this is hardware independent.
  - If you use disk-based SBD as the fencing mechanism, you need one or more shared drives. For productive environments, we recommend more than one SBD device. For details on disk-based SBD, read the product documentation for SUSE Linux Enterprise High Availability Extension and the manual pages sbd(8) and stonith_sbd(7). A minimal disk-based SBD sketch is given at the end of this section.
  - For diskless SBD, you need at least three cluster nodes. The diskless SBD mechanism has the benefit that you do not need a shared drive for fencing. Since diskless SBD is based on self-fencing, reliable detection of lost quorum is absolutely crucial.
  - Priority fencing is an optional improvement for two nodes, but does not work for three nodes.

- Both nodes are in the same network segment (layer 2). Similar methods provided by cloud environments such as overlay IP addresses and load balancer functionality are also fine. Follow the cloud specific guides to set up your SUSE Linux Enterprise Server for SAP Applications cluster.

- Technical users and groups, such as <sid>adm, are defined locally in the Linux system. If that is not possible, additional measures are needed to ensure reliable resolution of users, groups and permissions at any time. This might include caching.

- Name resolution of the cluster nodes and the virtual IP address must be done locally on all cluster nodes. If that is not possible, additional measures are needed to ensure reliable resolution of host names at any time.

- Time synchronization between the cluster nodes, such as NTP, is required.

- Both SAP HANA instances of the system replication pair (primary and secondary) have the same SAP Identifier (SID) and instance number.

- If the cluster nodes are installed in different data centers or data center areas, the environment must match the requirements of the SUSE Linux Enterprise High Availability Extension cluster product. Of particular concern are the network latency and recommended maximum distance between the nodes. Review the product documentation for SUSE Linux Enterprise High Availability Extension about those recommendations.

- How automated registration of a failed primary after takeover is handled needs to be defined.
  - As a good starting configuration for projects, we recommend to switch off the automated registration of a failed primary. The setting AUTOMATED_REGISTER="false" is the default. In this case, you need to register a failed primary manually after a takeover. For re-registration, use precisely the site names that are already known by the cluster. Use SAP tools like SAP HANA cockpit or hdbnsutil (an example re-registration call is sketched at the end of this section).
  - For optimal automation, we recommend to set AUTOMATED_REGISTER="true".

- Automated start of SAP HANA instances during system boot must be switched off.

- Multi-tenancy (MDC) databases are supported.
  - Multi-tenancy databases can be used in combination with any other setup (performance-optimized, cost-optimized, multi-tier, multi-target and read-enabled).
  - In MDC configurations, the SAP HANA RDBMS is treated as a single system including all database containers. Therefore cluster takeover decisions are based on the complete RDBMS status independent of the status of individual database containers.
  - Tests on multi-tenancy databases can force a different test procedure if you are using strong separation of the tenants. As an example, killing the complete SAP HANA instance using HDB kill does not work, because the tenants are running with different Linux user UIDs. <sid>adm is not allowed to terminate the processes of the other tenant users.

- Only one system replication between the two SAP HANA databases in the Linux cluster.

- Maximum one system replication to a HANA database outside the Linux cluster.

- Once a HANA system replication site is known to the Linux cluster, that exact site name has to be used whenever the site is registered manually.

- If a third HANA site is connected by system replication, that HANA is not controlled by another Linux cluster. If that third site should work as part of a fall-back HA cluster in a DR case, that HA cluster needs to be in standby.

- The replication mode is either sync or syncmem for the controlled replication. Replication mode async is not supported. The operation modes delta_datashipping, logreplay and logreplay_readaccess are supported.

- See also the dedicated section below on requirements for SAPHanaSR.py.

- For scale-up, the current resource agent supports SAP HANA in system replication beginning with HANA version 1.0 SPS 7 patch level 70; recommended is SPS 11. With HANA 2.0 SPS04 and later, multi-target system replication is possible as well. Even in HANA multi-target environments, the current resource agent manages only two sites. Thus only two HANA sites are part of the Linux cluster.

- Besides SAP HANA you need the SAP Host Agent installed and started on your system. For SystemV style, the sapinit script needs to be active.

- The RA's monitoring operations have to be active.

- Using the HA/DR provider hook for srConnectionChanged() by enabling SAPHanaSR.py is strongly recommended for all scenarios. This might become mandatory in future versions. Using SAPHanaSR.py is already mandatory for multi-tier and multi-target replication.

- The Linux cluster needs to be up and running to allow HA/DR provider events being written into CIB attributes. The current HANA SR status might differ from the CIB srHook attribute after cluster maintenance.

- RA and srHook runtime almost completely depends on call-outs to controlled resources, OS and Linux cluster. The infrastructure needs to allow these call-outs to return in time.

- Colocation constraints between the SAPHanaController or SAPHana RA and other resources are allowed only if they do not affect the RA's scoring. The location scoring finally depends on the system replication status and must not be over-ruled by additional constraints. Thus it is not allowed to define rules forcing a SAPHanaController or SAPHana master to follow another resource.

- Reliable access to the /hana/shared/ filesystem is crucial for HANA and the Linux cluster.

- The HANA feature Secondary Time Travel is not supported.

- The SAP HANA Fast Restart feature on RAM-tmpfs as well as HANA on persistent memory can be used, as long as they are transparent to Linux HA.

For the HA/DR provider hook scripts SAPHanaSR.py and susTkOver.py, the following requirements apply:

- SAP HANA 2.0 SPS04 or later provides the HA/DR provider hook method srConnectionChanged() with multi-target aware parameters. SAP HANA 1.0 does not provide them. The multi-target aware parameters are needed for the SAPHanaSR scale-up package.

- SAP HANA 2.0 SPS04 or later provides the HA/DR provider hook method preTakeover().

- The user <sid>adm needs execution permission as user root for the command crm_attribute.

- The hook provider needs to be added to the HANA global configuration, in memory and on disk (in persistence).

A minimal configuration sketch for the last two items is given at the end of this section. See also the manual pages SAPHanaSR(7) and SAPHanaSR.py(7) for more details and requirements.

Important: Without a valid STONITH method, the complete cluster is unsupported and will not work properly.

If you need to implement a different scenario, we strongly recommend to define a Proof of Concept (PoC) with SUSE. This PoC will focus on testing the existing solution in your scenario. Most of the above mentioned limitations are set because careful testing is needed.
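The following is a minimal, hedged sketch of the disk-based SBD prerequisite mentioned above. The device name /dev/disk/by-id/SBDA is just the example value from the parameter sheet in Section 4; adapt it to your environment and consult sbd(8) and the SUSE Linux Enterprise High Availability Extension documentation for the authoritative procedure.

# Initialize the shared SBD device (example device from the parameter sheet)
sbd -d /dev/disk/by-id/SBDA create

# Verify the SBD metadata and timeouts on the device
sbd -d /dev/disk/by-id/SBDA dump

# Minimal /etc/sysconfig/sbd (sketch, same on both nodes)
SBD_DEVICE="/dev/disk/by-id/SBDA"
SBD_WATCHDOG_DEV="/dev/watchdog"
SBD_PACEMAKER="yes"
SBD_STARTMODE="always"

The matching stonith:external/sbd resource and the related cluster properties are typically added during the cluster configuration step (Section 9).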
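As referenced above, here is a hedged example of a manual re-registration with hdbnsutil. It uses the example landscape of this guide (SID HA1, instance 10, nodes suse01 and suse02) and assumes suse01 belongs to site WDF and suse02 to site ROT: after a takeover to suse02, the former primary on suse01 is registered as the new secondary, reusing its original site name. Run it as <sid>adm (here ha1adm) on suse01 while the SAP HANA instance on that node is stopped.

# Run as ha1adm on suse01 (the node that is to become the new secondary)
hdbnsutil -sr_register --remoteHost=suse02 --remoteInstance=10 \
  --replicationMode=sync --operationMode=logreplay --name=WDF

The replication and operation modes must match the supported values listed above (sync or syncmem; delta_datashipping, logreplay or logreplay_readaccess).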
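To illustrate the last two hook requirements, the following hedged sketch shows how the SAPHanaSR.py HA/DR provider is typically added to the HANA global configuration and how <sid>adm is typically allowed to run crm_attribute via sudo. SID HA1, the path /usr/share/SAPHanaSR and the attribute pattern are example values based on this guide's landscape; see SAPHanaSR.py(7) and Section 8 for the authoritative syntax.

# Excerpt of the HANA global configuration, for example
# /hana/shared/HA1/global/hdb/custom/config/global.ini (both sites)
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR
execution_order = 1

[trace]
ha_dr_saphanasr = info

# /etc/sudoers.d/SAPHanaSR - allow ha1adm to update the srHook attribute in the CIB
ha1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_ha1_site_srHook_*

Once the hook is active and the cluster is running, the srHook attribute can be checked, for example with the SAPHanaSR-showAttr tool shipped in the SAPHanaSR package.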

3 Scope of this document

This document describes how to set up the cluster to control SAP HANA in System Replication scenarios. The document focuses on the steps to integrate an already installed and working SAP HANA with System Replication. In this document SUSE Linux Enterprise Server for SAP Applications 15 SP3 is used. This concept can also be used with SUSE Linux Enterprise Server for SAP Applications 15 SP1 or newer.

The described example setup builds an SAP HANA HA cluster in two data centers in Walldorf (WDF) and in Rot (ROT), installed on two SLES for SAP 15 SP3 systems.

FIGURE 7: CLUSTER WITH SAP HANA SR - PERFORMANCE OPTIMIZED

You can either set up the cluster using the YaST wizard, do it manually, or use your own automation.

If you prefer to use the YaST wizard, you can use the shortcut yast sap_ha to start the module. The procedure to set up SAPHanaSR using YaST is described in the product documentation of SUSE Linux Enterprise Server for SAP Applications in the section Setting Up an SAP HANA Cluster (…/SLES-SAP-guide/cha-cluster.html).

FIGURE 8: SCENARIO SELECTION FOR SAP HANA IN THE YAST MODULE SAP HA

This guide focuses on the manual setup of the cluster to explain the details and to give you the possibility to create your own automation.

The seven main setup steps are:

- Planning (see Section 4, "Planning the installation")
- OS installation (see Section 5, "Setting up the operating system")
- Database installation (see Section 6, "Installing the SAP HANA Databases on both cluster nodes")
- SAP HANA system replication setup (see Section 7, "Setting up SAP HANA System Replication")
- SAP HANA HA/DR provider hooks (see Section 8, "Setting up SAP HANA HA/DR providers")
- Cluster configuration (see Section 9, "Configuring the cluster")
- Testing (see Section 10, "Testing the cluster")
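As an early orientation for the cluster configuration step, the following hedged sketch shows roughly how the SAPHanaTopology clone and the SAPHana multi-state resource described in Section 1.1.3, plus the virtual IP address, could look in crm shell syntax. The resource names, timeouts and the values SID HA1, instance number 10 and IP 192.168.1.20 are only the example values used later in this guide; the actual configuration is developed step by step in Section 9, "Configuring the cluster".

# Hedged sketch of the core cluster resources (crm configure syntax)
primitive rsc_SAPHanaTopology_HA1_HDB10 ocf:suse:SAPHanaTopology \
  op monitor interval="10" timeout="600" \
  params SID="HA1" InstanceNumber="10"

clone cln_SAPHanaTopology_HA1_HDB10 rsc_SAPHanaTopology_HA1_HDB10 \
  meta clone-node-max="1" interleave="true"

primitive rsc_SAPHana_HA1_HDB10 ocf:suse:SAPHana \
  op start interval="0" timeout="3600" \
  op stop interval="0" timeout="3600" \
  op promote interval="0" timeout="3600" \
  op monitor interval="60" role="Master" timeout="700" \
  op monitor interval="61" role="Slave" timeout="700" \
  params SID="HA1" InstanceNumber="10" PREFER_SITE_TAKEOVER="true" \
    DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"

ms msl_SAPHana_HA1_HDB10 rsc_SAPHana_HA1_HDB10 \
  meta notify="true" clone-max="2" clone-node-max="1" interleave="true"

primitive rsc_ip_HA1_HDB10 ocf:heartbeat:IPaddr2 \
  op monitor interval="10" timeout="20" \
  params ip="192.168.1.20"

# Keep the virtual IP with the promoted (primary) SAPHana instance and start
# SAPHanaTopology before SAPHana
colocation col_saphana_ip_HA1_HDB10 2000: rsc_ip_HA1_HDB10:Started msl_SAPHana_HA1_HDB10:Master
order ord_SAPHana_HA1_HDB10 Optional: cln_SAPHanaTopology_HA1_HDB10 msl_SAPHana_HA1_HDB10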

4 Planning the installation

Planning the installation is essential for a successful SAP HANA cluster setup.

Before you start, you need the following:

- Software from SUSE: SUSE Linux Enterprise Server for SAP Applications installation media, a valid subscription, and access to update channels
- Software from SAP: SAP HANA installation media
- Physical or virtual systems including disks
- Filled parameter sheet (see Section 4.2, "Parameter sheet" below)

4.1 Minimum lab requirements and prerequisites

Note: The minimum lab requirements mentioned here are by no means SAP sizing information. These data are provided only to rebuild the described cluster in a lab for test purposes. Even for tests the requirements can increase, depending on your test scenario. For productive systems ask your hardware vendor or use the official SAP sizing tools and services.

Note: Refer to the SAP HANA TDI documentation for allowed storage configurations and file systems.

Requirements with 1 SAP HANA system replication instance per site (1 : 1) - without a majority maker (2-node cluster):

- 2 VMs with 32 GB RAM each, 50 GB disk space for the system
- 1 shared disk for SBD with 10 MB disk space

- 2 data disks (one per site) with a capacity of 96 GB each for SAP HANA
- 1 additional IP address for takeover
- 1 optional IP address for the read-enabled setup
- 1 optional IP address for the HAWK Administration GUI

Requirements with 1 SAP HANA instance per site (1 : 1) - with a majority maker (3-node cluster):

- 2 VMs with 32 GB RAM each, 50 GB disk space for the system
- 1 VM with 2 GB RAM, 50 GB disk space for the system
- 2 data disks (one per site) with a capacity of 96 GB each for SAP HANA
- 1 additional IP address for takeover
- 1 optional IP address for the read-enabled setup
- 1 optional IP address for the HAWK Administration GUI

4.2 Parameter sheet

Even if the setup of the cluster organizing two SAP HANA sites is quite simple, the installation should be planned properly. You should have all needed parameters like SID, IP addresses and much more in place. It is good practice to first fill out the parameter sheet and then begin with the installation.

TABLE 1: PARAMETER SHEET FOR PLANNING

Parameter          Value                  Role
Node 1                                    Cluster node name and IP address.
Node 2                                    Cluster node name and IP address.
Site A                                    Site name of the primary replicating SAP HANA database
Site B                                    Site name of the secondary replicating and the non-replicating SAP HANA database

Parameter          Value                  Role
SID                                       SAP System Identifier
Instance Number                           Number of the SAP HANA database. For system replication also Instance Number+1 is blocked.
Network mask
vIP primary                               Virtual IP address to be assigned to the primary SAP HANA site
vIP secondary                             Virtual IP address to be assigned to the read-enabled secondary SAP HANA site (optional)
Storage                                   Storage for HDB data and log files is connected "locally" (per node; not shared)
SBD                                       STONITH device (two for production) or diskless SBD
HAWK Port          7630
NTP Server                                Address or name of your time server

TABLE 2: PARAMETER SHEET WITH VALUES USED IN THIS DOCUMENT

Parameter          Value                  Role
Node 1             suse01, 192.168.1.11   Cluster node name and IP address.
Node 2             suse02, 192.168.1.12   Cluster node name and IP address.
SID                HA1                    SAP System Identifier
Instance Number    10                     Instance number of the SAP HANA database. For system replication also Instance Number+1 is blocked.
Network mask       255.255.255.0
vIP primary        192.168.1.20

Parameter          Value                  Role
vIP secondary      192.168.1.21           (optional)
Storage                                   Storage for HDB data and log files is connected "locally" (per node; not shared)
SBD                /dev/disk/by-id/SBDA   STONITH device (two for production) or diskless SBD
HAWK Port          7630
NTP Server         pool.ntp.org           Address or name of your time server

5 Setting up the operating system

This section contains information you should consider during the installation of the operating system.

For the scope of this document, first SUSE Linux Enterprise Server for SAP Applications is installed and configured. Then the SAP HANA database including the system replication is set up. Finally the automation with the cluster is set up and configured.

5.1 Installing SUSE Linux Enterprise Server for SAP Applications

Multiple installation guides
