Deploying Oracle 12c RAC Database On Dell EMC XC Series All-Flash


Deploying Oracle 12c RAC Database on Dell EMC XC Series All-Flash

Dell EMC Engineering
January 2017
A Dell EMC Best Practices Guide

Revisions

Date            Description
January 2017    Initial release

Acknowledgements

Authors: Chidambara Shashikiran, Henry Wong

The information in this publication is provided "as is." Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA [1/19/2017] [Best Practices Guide] [3110-BP-SDS]

Dell EMC believes the information in this document is accurate as of its publication date. The information is subject to change without notice.

Table of contents

Revisions
Executive summary
1 Product overview
  1.1 XC Series appliances
  1.2 XC Series all-flash
  1.3 XC Series Acropolis architecture
  1.4 XC Series Acropolis Block Services
2 Solution infrastructure
  2.1 Physical system configuration
    2.1.1 Oracle single instance database configuration
    2.1.2 Oracle RAC database configuration
    2.1.3 Oracle RAC database configuration using ABS
  2.2 XC Series storage and cluster configuration
  2.3 Network configuration
3 Sizing hypervisor configuration guidelines
  3.1 Oracle database VM configuration
    3.1.1 Processor and memory
    3.1.2 XC Series storage container and VMware storage virtualization
    3.1.3 VM storage controller and virtual disks
    3.1.4 VM networking
  3.2 VM guest OS configuration for Oracle guidelines
  3.3 Nutanix CVM
  3.4 Storage layout for databases
    3.4.1 Oracle ASM for Oracle single instance or RAC
    3.4.2 File system for Oracle single instance database
  3.5 Performance monitoring
    3.5.1 Nutanix Prism
    3.5.2 vSphere client
    3.5.3 CVM CLI
    3.5.4 ESX/ESXi CLI
    3.5.5 Oracle EM Express
A How to identify and query the disk IDs/WWNs
  A.1 How to identify VMware virtual disks in Linux guest
  A.2 How to identify XC Series volume group volumes in Linux guest
B Configuration details
C Technical support and resources
  C.1 Related resources

Executive summary

The Dell EMC XC Series Web-Scale Hyperconverged appliance powered by Nutanix delivers a highly resilient, converged compute and storage platform that brings the benefits of web-scale architecture to business-critical enterprise applications such as Oracle.

The XC Series platform is hypervisor agnostic, and the software installs quickly for deployment of multiple virtualized workloads. The XC Series Nutanix platform can deliver storage through multiple protocols such as NFS, SMB, and iSCSI.

This document provides guidelines for the design, configuration, and optimization of Oracle single instance and Real Application Clusters (RAC) database applications running on XC Series Nutanix infrastructure. The document also outlines the different storage presentation methods offered by Nutanix to deploy the Oracle database application.

1 Product overview

1.1 XC Series appliances

The XC Series is a hyperconverged solution that combines storage, compute, networking, and virtualization into an industry-proven x86 Dell EMC PowerEdge server running Nutanix web-scale software. By combining the hardware resources from each server node into a shared-everything model for simplified operations, improved agility, and greater flexibility, Dell EMC and Nutanix together deliver simple, cost-effective solutions for enterprise workloads. The Acropolis Distributed Storage Fabric (DSF) delivers a unified pool of storage from all nodes across the cluster, using techniques including striping, replication, auto-tiering, error detection, failover, and automatic recovery.

The XC Series infrastructure is a scale-out cluster of high-performance nodes, or servers, each running a standard hypervisor and containing processors, memory, and local storage (consisting of SSD flash for high performance and high-capacity SATA disk drives). Each node runs virtual machines just like a standard hypervisor host, as displayed in Figure 1.

Figure 1: Nutanix node architecture

In addition, the Acropolis DSF virtualizes local storage from all nodes into a unified pool. Acropolis DSF uses local SSDs and disks from all nodes to store virtual machine data. Virtual machines running on the cluster write data to the DSF as if they were writing to shared storage.

1.2 XC Series all-flash

Flash technologies are evolving rapidly, and flash-based storage is capable of delivering millions of IOPS with sub-millisecond latency. Also, flash prices have been decreasing, making flash a more cost-effective option for storage. XC Series appliances are now available in both hybrid and all-flash variants. The blazing performance, coupled with the integrated data efficiency, resiliency, and protection capabilities of the all-flash variants, makes them the preferred choice for enterprise applications such as Oracle which require high IOPS and low latency.

The solution described in this document used a 3-node XC630-10AF all-flash cluster comprising ten 800GB SSDs in each node.

1.3 XC Series Acropolis architecture

Nutanix software provides a hyperconverged platform that uses the Acropolis Distributed Storage Fabric (DSF) to share and present local storage to all the virtual machines in the cluster. The general XC Series Nutanix architecture is shown in Figure 2.

Figure 2: Nutanix architecture

Acropolis DSF virtualizes the local storage from every node and presents it to the hypervisor as one large pool of shared storage. The DSF replicates writes synchronously to at least one remote XC Nutanix node to ensure cluster resiliency and availability.

Each node runs an industry-standard hypervisor (VMware ESXi, Microsoft Hyper-V, or Acropolis Hypervisor (AHV)) and the Nutanix Controller VM (CVM). The Nutanix CVM runs the Nutanix software and serves I/O operations for the hypervisor and all VMs running on that host. Each CVM connects directly to the local storage controller and its associated disks, thereby reducing storage I/O latency. The data locality feature ensures that virtual machine I/Os are always served by the local CVM on the same hypervisor node, improving VM I/O performance regardless of where the VM runs.

1.4 XC Series Acropolis Block Services

A feature called Acropolis Block Services (ABS) was released with Acropolis OS (AOS) 4.7. It allows DSF resources to be exposed directly to a virtualized guest OS or physical hosts using the iSCSI protocol. This capability enables support for several use cases, such as shared storage for Oracle RAC and other applications that require shared storage.

The XC Series Nutanix storage configuration for ABS is handled through a construct called a volume group (VG). A VG is a collection of volumes commonly known as virtual disks (vdisks). ABS presents these vdisks to virtual machines and physical servers using the iSCSI protocol. Multiple hosts can share the vdisks associated with a VG as shown in Figure 3. This is very helpful for shared storage use cases such as Oracle RAC or Windows Server clustering.

Figure 3: XC Nutanix Acropolis Block Services architecture
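As a minimal sketch of how a volume group might be created and exposed through ABS from the CVM command line (aCLI): the volume group name, container name, vdisk size, and initiator IQN below are all placeholders, and the exact parameters vary by AOS release, so verify the syntax against the Nutanix documentation for your version.

    # Create a volume group to hold the shared vdisks (name is a placeholder)
    acli vg.create oracle-rac-vg

    # Add a vdisk to the volume group (container name and size are illustrative)
    acli vg.disk_create oracle-rac-vg container=default-container create_size=200G

    # Allow an external iSCSI initiator to attach (guest IQN is a placeholder)
    acli vg.attach_external oracle-rac-vg iqn.1988-12.com.oracle:rac-node1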

2 Solution infrastructure

The configuration and solution components are described in this section.

2.1 Physical system configuration

2.1.1 Oracle single instance database configuration

Oracle single-instance database applications can be deployed on virtual machines and can be scaled easily by adding additional virtualized database instances.

The physical configuration for this environment starts with the basic, three-node XC Series cluster shown in Figure 4. Single-instance databases can be deployed on VMs on each host of the cluster as shown in the figure. This architecture enables linear scaling of capacity and performance as the number of nodes increases.

Figure 4: Oracle single-instance configuration

As shown in Figure 4, the local storage controller on each host ensures that storage performance as well as storage capacity increases when additional nodes are added to the XC Series cluster. Each CVM is directly connected to the local storage controller and its associated disks. By using local storage controllers on each ESXi host, access to data through Acropolis DSF is localized. It does not require data to be transferred over the network, thereby improving latency.

Oracle Automatic Storage Management (ASM) is highly recommended for database-related files. Each vdisk presented to a VM is mapped as an Oracle ASM disk, and ASM disk groups are carved out of these ASM disks. The ASM disk groups are also shown in Figure 4.
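For illustration, a disk group might be carved out of the presented vdisks with SQL such as the following, run as SYSASM from the Grid Infrastructure home. The disk paths are hypothetical placeholders (for example, udev-managed device names), and external redundancy is shown on the assumption that data protection is delegated to the DSF replication factor:

    -- Create a DATA disk group over four vdisks (paths are placeholders)
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/dev/oracleasm/data1',
           '/dev/oracleasm/data2',
           '/dev/oracleasm/data3',
           '/dev/oracleasm/data4';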

2.1.2 Oracle RAC database configuration

Oracle RAC allows running multiple database instances on multiple servers in the cluster against a single database. The database spans multiple servers but appears as a single unified database to end-user applications. This architecture helps provide the highest availability and reliability to Oracle database applications.

The architecture in Figure 5 is similar to the previous configuration, but the three VMs, one on each host, are grouped together to form an Oracle RAC cluster.

Figure 5: Oracle RAC configuration

2.1.3 Oracle RAC database configuration using ABS

The XC Series Nutanix ABS feature enables DSF storage resources to be presented directly to VMs and physical servers using iSCSI. This configuration is similar to the previous configuration, but the database files are presented to the VMs using iSCSI. A volume group consisting of multiple vdisks is created to store database-related files, and the vdisks are presented to the VMs using iSCSI as shown in Figure 6.

Figure 6: Oracle RAC configuration using ABS
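From the guest side, attaching the volume group then follows standard Linux iSCSI practice. A minimal sketch, assuming the Nutanix external data services IP is 10.0.0.100 and the target IQN shown was returned by discovery (both values are placeholders):

    # Discover the iSCSI targets exposed by ABS (data services IP is a placeholder)
    iscsiadm -m discovery -t sendtargets -p 10.0.0.100:3260

    # Log in to the discovered target (IQN shown is a placeholder)
    iscsiadm -m node -T iqn.2010-06.com.nutanix:oracle-rac-vg -l

    # Verify that the new block devices are visible
    lsscsi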

2.2 XC Series storage and cluster configuration

Each XC630 Series node used in this configuration comprises the following hardware components:

- Ten 800GB SATA SSDs
- Two 8-core Intel Xeon E5-2630 v3 2.40GHz processors
- Twelve 16GB DDR4 quad-rank 2133MHz RAM modules (192GB total)

The minimum number of XC Series nodes in a cluster is three. When clustered together, the storage across the three nodes is virtualized to create a single storage pool, and one or more containers can be created on top of the storage pool. The storage container is presented to all nodes as shared storage within the cluster.

The cluster attempts to keep virtual machines and their associated storage on the same cluster node for performance consistency. However, each cluster node is connected to, and communicates with, the other nodes on a 10Gb network. This communication allows virtual machines and their associated storage to reside on different cluster nodes.

2.3 Network configuration

Figure 7 shows the ideal network topology used for this solution. Two Dell EMC Networking S4810 switches are stacked together to provide high availability. It is recommended to use at least two top-of-rack switches for redundancy. Redundancy across the two switches is provided using the LAG, or stack, connection.

Figure 7: Network configuration

3 Sizing hypervisor configuration guidelines

3.1 Oracle database VM configuration

3.1.1 Processor and memory

Sizing the vCPUs and memory of the virtual machines appropriately requires understanding the Oracle workload. Avoid overcommitting processor and memory resources on the physical node. Use as few vCPUs as possible, because performance might be adversely impacted when using excess vCPUs due to scheduling constraints.

Hyperthreading is a hardware technology on Intel processors that enables a physical processor core to act like two processors. In general, there is a performance advantage to enabling hyperthreading on the newer Intel processors.

Each VMware vSphere physical node also runs a Nutanix CVM. Therefore, consider the resources required for the CVMs. Only one CVM runs on a physical node, and it does not move to another physical node when a failure event occurs.

While it is possible to support multiple Oracle database VMs on the same physical node, for performance reasons it is better to spread them out across multiple nodes and minimize the number of database instances running on the same node. In the case of Oracle RAC, the RAC-instance VMs should run on different physical nodes. VM-host affinity or anti-affinity rules can be set up for database VMs to define where they can run within the cluster.

3.1.2 XC Series storage container and VMware storage virtualization

On XC Series storage, it is typical to have a single storage container comprising all SSDs and HDDs in the cluster, so that it can maximize the storage capacity and manage the auto-tiering more efficiently. The single container is mounted on the ESXi hosts through NFS as a datastore. Multiple virtual disks are created from the same datastore and presented to the guest OS as SCSI disks that can be used by Oracle ASM.

Nutanix ABS allows storage to be presented directly to a non-VM physical host or virtualized guest OS through iSCSI, bypassing the VMware storage virtualization layer. Additional consideration and extra configuration steps are required for both Nutanix and the guest OS.

For virtual machines on VMware, it is recommended to present storage as virtual disks with VMware storage virtualization because it offers a good balance between flexibility, performance, and ease of use.

Find more information on ABS at the Nutanix portal.

3.1.3 VM storage controller and virtual disks

Typically, an Oracle database spans multiple LUNs to increase performance by allowing parallel I/O streams. In a virtualized environment, multiple virtual disks are used instead. Nutanix recommends at least four to six database virtual disks, adding more disks depending on the capacity requirements to achieve better performance.

It is best practice to create multiple controllers and separate the guest OS virtual disk from the database virtual disks. The guest OS virtual disk should be on the primary controller. Additional controllers are created to separate the virtual disks for data and log files.

Table 1 shows an example configuration of controllers and virtual disks for an Oracle database VM.

Table 1: Virtual adapter and vdisk configuration for Oracle database VM

Controller     Adapter type         Usage                   Virtual disks
Controller 0   LSI Logic Parallel   Guest OS                1 x 100GB
Controller 1   Paravirtual          Data files, redo logs   4 x 200GB
Controller 2   Paravirtual          Archived logs           2 x 200GB

Use the default adapter type, LSI Logic Parallel, for SCSI controller 0.

Choose the Paravirtual SCSI (PVSCSI) adapter type for controllers whose virtual disks are used for data files, redo logs, and archived logs. The PVSCSI adapter allows greater I/O throughput and lower CPU utilization. VMware recommends using it for virtual machines with demanding I/O workloads.

3.1.3.1 Shared access for virtual disks

By default, VMware does not allow multiple virtual machines to access the same virtual disks. In an Oracle RAC implementation where multiple database VMs need to access the same set of virtual disks, the default protection must be disabled by setting the multi-writer option. The option can be found in the virtual machine settings in the vSphere web client. It can be set when new virtual disks are created or when the virtual machine is powered down.
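As an illustrative sketch, the same multi-writer setting can also be expressed directly in the VM's .vmx configuration; the controller position, unit number, and VMDK file name below are placeholders for wherever the shared RAC disks actually sit:

    # Shared RAC vdisk on PVSCSI controller 1, unit 0 (positions are placeholders)
    scsi1.virtualDev = "pvscsi"
    scsi1:0.fileName = "oracle-rac-data1.vmdk"
    scsi1:0.sharing = "multi-writer"

Note that multi-writer disks typically must be provisioned eager-zeroed thick; see VMware KB article 1034165 for the supported combinations.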

3.1.3.2 Queue depth and outstanding disk requests

Splitting virtual disks across multiple controllers increases the limit on outstanding I/Os that a virtual machine supports. For a demanding I/O workload environment, the default queue depth values might not be sufficient. The default PVSCSI queue depth is 64 per virtual disk and 254 per virtual controller. To increase these settings, refer to VMware KB article 2053145; a guest-side sketch is included after Table 2 below.

For vSphere versions prior to 5.5, the maximum number of outstanding disk requests for virtual machines sharing a datastore/LUN is limited by the Disk.SchedNumReqOutstanding parameter. Beginning with vSphere version 5.5, this parameter is deprecated and is set per LUN. Review and increase the setting if necessary. Refer to VMware KB article 1268 for details.

3.1.3.3 Enabling virtual disk UUID

It is important to correctly identify the virtual disks inside the guest OS before performing any disk-related operations such as formatting or partitioning the disks. The WWN is commonly used as the unique identifier for disks that support it. For the guest OS to properly see this information for the virtual disks, the disk.EnableUUID parameter must be set to TRUE in the virtual machine configuration. The steps to set this parameter can be found in appendix A.

3.1.4 VM networking

A minimum of two 10GbE interfaces is recommended for each ESXi host. The actual number required depends on how many vSwitches are configured and on the total network bandwidth requirement. Each host should connect to dual redundant switches for network path redundancy as described in section 2.3. Table 2 shows the vSwitches and their target usage.

Table 2: vSwitches and target usage

vSwitch                 Physical NICs    Speed   Network                                          Usage
Local Nutanix vSwitch   NA               NA      Intra CVM/ESXi host (svm-iscsi-pg,               Primary storage communication path
                                                 vmk-svm-iscsi-pg)
vSwitch0                vmnic0, vmnic1   10Gb    Management network, VM network                   Management traffic, VM public traffic, inter-node communication
vSwitch1                vmnic2, vmnic3   10Gb    VM network: Oracle                               Private Oracle RAC interconnect
vSwitch2                vmnic4, vmnic5   10Gb    VM network: iSCSI                                Dedicated iSCSI traffic to VMs
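As referenced in section 3.1.3.2, here is a minimal sketch of the guest-side change from VMware KB article 2053145 for raising the PVSCSI queue depths in a RHEL/Oracle Linux guest. The values shown are the maximums documented in that KB, and the initramfs rebuild step assumes a dracut-based distribution:

    # /etc/modprobe.d/pvscsi.conf
    options vmw_pvscsi cmd_per_lun=254 ring_pages=32

    # Rebuild the initramfs so the option takes effect at boot, then reboot
    dracut -f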

3.1.4.1 Nutanix local virtual switch

Each ESXi host that is part of the cluster has a local vSwitch that is created automatically. This switch is used for local communication between the CVM and the ESXi host. The host has a vmkernel interface on this vSwitch, and the CVM has an interface bound to a port group called svm-iscsi-pg which serves as the primary communication path to the storage. This switch is created automatically when the Nutanix operating system is installed. It is recommended not to modify this virtual switch configuration.

3.1.4.2 Management/inter-node network

This virtual switch is created using two 10GbE physical adapters, vmnic0 and vmnic1. The management traffic is very minimal, but Acropolis DSF needs connectivity across the CVMs. This connectivity is used for synchronous write replication and also for Nutanix cluster management. This network is used for VMware vSphere vMotion and other cluster management activities as well.

3.1.4.3 Public VM network

All public client access to the virtual machines flows through this network. The client traffic can come in bursts, and at times the largest amount of data might be transferred between clients and VMs. It is recommended to set this up on the 10Gb network.
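The screen captures of these vSwitches do not survive in this transcription; an equivalent view of a host's standard vSwitches, uplinks, MTU, and port groups can be obtained from the ESXi shell with standard esxcli commands:

    # List all standard vSwitches with their uplinks, MTU, and port groups
    esxcli network vswitch standard list

    # List port groups and their VLAN assignments
    esxcli network vswitch standard portgroup list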

3.1.4.4 Oracle RAC interconnect

When deploying Oracle RAC, Oracle recommends setting up a dedicated network for inter-RAC-node traffic. A separate vSwitch can be set up with redundant physical adapters to provide a dedicated RAC interconnect network. Only RAC traffic should go on this network.

3.1.4.5 iSCSI network

iSCSI is not typically used in an XC Series Nutanix environment. The ABS feature, part of Nutanix release 4.7, enables presenting DSF storage resources directly to virtualized guest operating systems or physical hosts using the iSCSI protocol. If this feature is used, it is recommended to dedicate at least two 10GbE interfaces to iSCSI traffic.
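A minimal sketch of building such a dedicated iSCSI vSwitch from the ESXi shell, assuming the uplinks vmnic4 and vmnic5 and the port group name from Table 2; the MTU line applies only if jumbo frames are used end to end (see section 3.1.4.6):

    # Create the vSwitch and attach the two dedicated uplinks
    esxcli network vswitch standard add --vswitch-name=vSwitch2
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic4
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic5

    # Optional: enable jumbo frames on this vSwitch
    esxcli network vswitch standard set --vswitch-name=vSwitch2 --mtu=9000

    # Add the port group used by the iSCSI VM network
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name="VM network: iSCSI"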

3.1.4.6 Other networking best practices

Additional networking best practices include the following:

- It is recommended to use dedicated NICs on the hosts for management, iSCSI, and RAC interconnect traffic. Also, dedicated VLANs are recommended to segregate each type of traffic.
- For a standard virtual switch configuration, the default load-balancing policy is recommended: Route based on originating virtual port. This helps simplify implementation compared with configurations such as LACP.
- The standard network packet size is 1500 MTU. Jumbo frames send network packets in a much larger size of 9000 MTU. Increasing the transfer unit size allows more data to be transferred in a single packet, which results in higher throughput, lower CPU utilization, and lower overhead. Use jumbo frames only when all the network devices on the network path, including the network switches, CVMs, VMs, and ESXi hosts, can support the same MTU size.

3.2 VM guest OS configuration for Oracle guidelines

Oracle and VMware support most of the mainstream Linux distributions. Dell recommends Oracle Linux or Red Hat Enterprise Linux for running Oracle databases due to the wide customer install base and the strong support of the database products by these companies. However, customers might choose any supported Linux distribution based on their preferences. The information in this document is based on Oracle Linux.

VMware recommends using the PVSCSI and VMXNET3 drivers for greater performance capability. Many mainstream Linux distributions might have these drivers included and installed by default. However, in order to ensure the drivers are at the latest version, obtain the latest version of VMware Tools and install it in the guest OS. Refer to VMware KB article 1014294 for more information about general VMware Tools installation instructions.

Installing and configuring the OS for an Oracle database in a virtual machine is similar to doing so on a physical host.

1. Install the base OS from a virtual CD or ISO.
2. Update the guest OS, software packages, and bug fixes to the latest version from the vendor's yum repository.
3. Install the latest version of VMware Tools.
4. Configure the guest OS to synchronize time from a trusted server. This is particularly important if Oracle RAC will be used. All RAC virtual machines must maintain a synchronized time across the RAC cluster. It is recommended to use NTP on Oracle Linux 6.x/Red Hat Enterprise Linux 6.x or chronyd on Oracle Linux 7.x/Red Hat Enterprise Linux 7.x.
5. Transfer the Oracle installation media to the virtual machines. If deploying multiple instances, it might be beneficial to set up an NFS mount and share the common media through NFS.
6. Disable the NetworkManager service.
7. Disable the avahi daemon service if Oracle RAC will be deployed on the virtual machine.
8. Oracle recommends tuning the swapping priority on database servers. Set vm.swappiness = 5 in /etc/sysctl.conf.
9. Review the 12c Oracle Database Preinstallation Tasks and the 12c Oracle Grid Infrastructure Installation Checklist. Documentation for other Oracle versions can be found at https://docs.oracle.com.
10. Create Oracle OS users and groups (user oracle with primary group oinstall and user grid with primary group oinstall).
11. Present virtual disks to the guest OS in vSphere.
12. Identify the virtual disks using the procedures outlined in appendix A.
13. VMware recommends setting the disk timeout value to 180 seconds. This is typically handled by VMware Tools, which creates a udev rule to set this timeout. It has been discovered that this udev rule is missing in the VMware Tools for Oracle Linux 7.x/RHEL 7.x systems. To work around this issue, the file can be created manually as /etc/udev/rules.d/99-vmware-scsi-timeout.rules with the following contents (note that the SCSI vendor string "VMware" is padded with trailing spaces to eight characters):

    ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{vendor}=="VMware  ", ATTRS{model}=="Virtual disk", RUN+="/bin/sh -c 'echo 180 >/sys$DEVPATH/timeout'"

14. Reboot the system or run the following command to apply the rule without rebooting:

    udevadm trigger --action=add

15. Verify the new setting:

    lsscsi -l

    or

    cat /sys/block/sdX/device/timeout

16. Assign ownership and permissions to the virtual disks for Oracle and make the settings persistent across restarts. This can be configured using the Linux udev facility; see section 3.4.1 for more information on configuring Oracle ASM, and see the sketch following this list for a udev example.

17. For Oracle Linux, install the Oracle preinstallation package (oracle-rdbms-server-12cR1-preinstall) from Oracle's yum repository. The package automatically configures the kernel parameters and creates the OS users and groups required for an Oracle 12c installation.
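As a sketch of the udev ownership rule referenced in step 16, assuming disk.EnableUUID is set so the guest can see each vdisk's WWN; the rule file name and the serial number shown are placeholders to be replaced with the values gathered using appendix A:

    # /etc/udev/rules.d/99-oracle-asmdevices.rules (file name and serial are placeholders)
    KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL}=="36000c29xxxxxxxxxxxxxxxxxxxxxxxx", OWNER="grid", GROUP="oinstall", MODE="0660"

    # Reload and apply the rules, then confirm the ownership
    udevadm control --reload-rules
    udevadm trigger --action=add
    ls -l /dev/sd*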
