Oracle RAC 12cR1 - NERV


Oracle RAC 12cR1
Ricardo Portilho Proni (ricardo@nervinformatica.com.br)
This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Brazil license. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/br/.

Oracle RAC: Concepts

Why use RAC?
- Availability
- Scalability
- Total Cost of Ownership (TCO)

Why not use RAC?
- Hardware cost
- License cost
- Training cost
- Complexity
- Scalability

Oracle RAC x Single Instance
- 1 Database x N Instances
- Additional background processes and daemons
- OCR
- Voting Disk

Oracle RAC evolution
- Oracle 6.0.35: VAX / VMS
- Oracle 7: PCM
- Oracle 8i: Cache Fusion I
- Oracle 9i: Cache Fusion II, Oracle Cluster Management Services
- Oracle 10gR1: Oracle Cluster Management Services becomes Cluster Ready Services (CRS); ASM (Automatic Storage Management); FAN (Fast Application Notification); integration with Database Services; AWR, ADDM, ASH, Scheduler, Enterprise Manager
- Oracle 10gR2: CRS becomes Oracle Clusterware. New Features include cluvfy and asmcmd.
- Oracle 11gR1: only 7 New Features.
- Oracle 11gR2: CRS becomes Grid Infrastructure. 32 New Features.
- Oracle 12cR1: 33 New Features.

RAC 11gR1 New Features
- Enhanced Oracle RAC Monitoring and Diagnostics in Enterprise Manager
- Enhanced Oracle Real Application Clusters Configuration Assistants
- OCI Runtime Connection Load Balancing
- Parallel Execution for Oracle Real Application Clusters
- Support for Distributed Transactions in an Oracle RAC Environment
- Enhanced Oracle RAC Switchover Support for Logical Standby Databases

RAC 11gR2 New Features
- Configuration Assistants Support New Oracle RAC Features
- Enhanced Cluster Verification Utility
- Integration of Cluster Verification Utility and Oracle Universal Installer
- Cluster Time Service
- Oracle Cluster Registry (OCR) Enhancements
- Grid Plug and Play (GPnP)
- Oracle Restart
- Policy-Based Cluster and Capacity Management
- Improved Clusterware Resource Modeling
- Role-Separated Management
- Agent Development Framework
- Zero Downtime Patching for Oracle Clusterware and Oracle RAC
- Enterprise Manager-Based Clusterware Resource Management
- Enterprise Manager Provisioning for Oracle Clusterware and Oracle Real Application Clusters
- Enterprise Manager Support for Grid Plug and Play
- Enterprise Manager Support for Oracle Restart
- Configuration Assistant Support for Removing Oracle RAC Installations

RAC 11gR2 New Features (continued)
- Oracle Universal Installer Support for Removing Oracle RAC Installations
- Improved Deinstallation Support With Oracle Universal Installer
- Downgrading Database Configured With DBControl
- Oracle Restart Integration with Oracle Universal Installer
- Out-of-Place Oracle Clusterware Upgrade
- OUI Support for Out-of-Place Oracle Clusterware Upgrade
- Server Control (SRVCTL) Enhancements
- Server Control (SRVCTL) Enhancements to Support Grid Plug and Play
- SRVCTL Support for Single-Instance Database in a Cluster
- Universal Connection Pool (UCP) Integration with Oracle Data Guard
- UCP Integration With Oracle Real Application Clusters
- Universal Connection Pool (UCP) for JDBC
- Java API for Oracle RAC FAN High Availability Events
- EMCA Supports New Oracle RAC Configuration for Enterprise Manager
- Global Oracle RAC ASH Report
- ADDM Backwards Compatibility

RAC 12cR1 New Features
- Oracle Flex Cluster
- SRVCTL Support for Oracle Flex Cluster Implementations
- Policy-Based Cluster Management and Administration
- What-If Command Evaluation
- Shared Grid Naming Service (GNS)
- Online Resource Attribute Modification
- Grid Infrastructure Script Automation for Installation and Upgrade
- Multipurpose Cluster Installation Support
- Support for IPv6 Based IP Addresses for Oracle RAC Client Connectivity
- Message Forwarding on Oracle RAC
- Sharded Queues for Performance and Scalability
- Oracle Grid Infrastructure Rolling Migration for One-Off Patches

RAC 12cR1 New Features (continued)
- Oracle Flex ASM
- Oracle ASM Shared Password File in a Disk Group
- Oracle ASM Rebalance Enhancements
- Oracle ASM Disk Resync Enhancements
- Oracle ASM chown, chgrp, chmod and Open Files Support
- Oracle ASM Support ALTER DISKGROUP REPLACE USER
- Oracle ASM File Access Control on Windows
- Oracle ASM Disk Scrubbing
- Oracle Cluster Registry Backup in ASM Disk Group Support
- Enterprise Manager Support for Oracle ASM Features
- Oracle ACFS Support for All Oracle Database Files
- Oracle ACFS and Highly Available NFS
- Oracle ACFS Snapshots Enhancements
- Oracle ACFS Replication Integration with Oracle ACFS Security and Encryption
- Oracle ACFS Security and Encryption Features
- Oracle ACFS File Tags for Grid Homes
- Oracle ACFS Plug-in APIs
- Oracle ACFS Replication and Tagging on AIX
- Oracle ACFS Replication and Tagging on Solaris
- Oracle Audit Vault Support for Oracle ACFS Security and Encryption
- Enterprise Manager Support for Oracle ACFS New Features

Hardware


Operating System

Certified Operating Systems
Linux x64:
- Oracle Linux 7 / Red Hat Enterprise Linux 7
- Oracle Linux 6 / Red Hat Enterprise Linux 6
- Oracle Linux 5 / Red Hat Enterprise Linux 5
- SUSE Linux Enterprise Server 11
Linux on System z:
- Red Hat Enterprise Linux 6
- Red Hat Enterprise Linux 5
- SUSE 11
Unix:
- Oracle Solaris 11 (SPARC) / Oracle Solaris 10 (SPARC)
- Oracle Solaris 11 (x64) / Oracle Solaris 10 (x64)
- HP-UX 11iV3
- AIX 7.1 / AIX 6.1
Windows (x64):
- Windows Server 2008 SP2: Standard, Enterprise, DataCenter, Web
- Windows Server 2008 R2: Foundation, Standard, Enterprise, DataCenter, Web
- Windows Server 2012: Standard, Datacenter, Essentials, Foundation
- Windows Server 2012 R2: Standard, Datacenter, Essentials, Foundation

Lab 1 – OEL 6 Installation. Hands On!

Lab 1.1: OEL 6 Installation
On machines nerv01 and nerv02, install OEL.
- 1st screen: Install or upgrade an existing system
- 2nd screen: Skip
- 3rd screen: Next
- 4th screen: English (English), Next
- 5th screen: Brazilian ABNT2, Next
- 6th screen: Basic Storage Devices, Next
- 7th screen: Fresh Installation, Next
- 8th screen: nerv01.localdomain, Next
- 9th screen: America/Sao Paulo, Next
- 10th screen: Nerv2017, Nerv2017, Next
- 11th screen: Create Custom Layout, Next

Lab 1.2: OEL 6 Installation
- 12th screen: create the partitions as below, then Next:
  sda1  1024 MB          /boot
  sda2  100000 MB        /
  sda3  20000 MB         /home
  sda5  16384 MB         swap
  sda6  10000 MB         /var
  sda7  10000 MB         /tmp
  sda8  remaining space  /u01
- 13th screen: Format
- 14th screen: Write changes to disk
- 15th screen: Next
- 16th screen: Desktop
- 17th screen: Reboot
- Remove the DVD.
- After boot: "Forward", "Yes, I agree to the License Agreement", "Forward", "No, I prefer to register at a later time", "Forward", "No thanks, I'll connect later", "Forward", "Forward", "Yes", "Forward", "Finish", "Yes", "OK".
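As a quick sanity check, the fixed-size partitions in this layout add up to about 154 GB before sda8 (/u01) takes the rest of the disk; a minimal shell sketch of that arithmetic, using the sizes listed in the lab:

```shell
# Sum the fixed partition sizes from the lab layout (in MB).
# sda8 (/u01) gets whatever space remains on the disk.
total=0
for mb in 1024 100000 20000 16384 10000 10000; do
  total=$((total + mb))
done
echo "fixed partitions: ${total} MB"   # prints: fixed partitions: 157408 MB
```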

Lab 2 – DNS Configuration. Hands On!

Lab 2.1: DNS Configuration
On machine nerv09, install the packages required for DNS.
# yum -y install bind bind-utils
On machine nerv09, leave ONLY the following lines in the file /etc/named.conf.
options {
        listen-on port 53 { 127.0.0.1; 192.168.15.201; };
        directory "/var/named";
        dump-file "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        // query-source address * port 53;
};
zone "." in {
        type hint;
        file "/dev/null";
};
zone "localdomain." IN {
        type master;
        file "localdomain.zone";
        allow-update { none; };
};

Lab 2.2: DNS Configuration
On machine nerv09, leave ONLY the following lines in the file /var/named/localdomain.zone.
$TTL 86400
@               IN SOA  localhost  root.localhost (
                        42      ; serial (d. adams)
                        3H      ; refresh
                        15M     ; retry
                        1W      ; expiry
                        1D )    ; minimum
                IN NS   localhost
localhost       IN A    127.0.0.1
nerv01          IN A    192.168.15.101
nerv02          IN A    192.168.15.102
nerv01-vip      IN A    192.168.15.111
nerv02-vip      IN A    192.168.15.112
rac01-scan      IN A    192.168.15.151
rac01-scan      IN A    192.168.15.152
rac01-scan      IN A    192.168.15.153

Lab 2.3: DNS Configuration
On machine nerv09, leave ONLY the following lines in the file /var/named/15.168.192.in-addr.arpa.
$ORIGIN 15.168.192.in-addr.arpa.
$TTL 1H
@       IN SOA  nerv09.localdomain. root.nerv09.localdomain. ( 2 3H 1H 1W 1H )
        IN NS   nerv09.localdomain.
101     IN PTR  nerv01.localdomain.
102     IN PTR  nerv02.localdomain.
111     IN PTR  nerv01-vip.localdomain.
112     IN PTR  nerv02-vip.localdomain.
151     IN PTR  rac01-scan.localdomain.
152     IN PTR  rac01-scan.localdomain.
153     IN PTR  rac01-scan.localdomain.
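The reverse zone name is the network part of 192.168.15.0/24 with its octets reversed, which is why the file is named 15.168.192.in-addr.arpa; a small shell sketch of that derivation:

```shell
# Derive the in-addr.arpa reverse zone name for a /24 network.
network="192.168.15"
echo "$network" | awk -F. '{ print $3 "." $2 "." $1 ".in-addr.arpa" }'
# prints: 15.168.192.in-addr.arpa
```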

Lab 2.4: DNS Configuration
On machine nerv09, start the DNS server and enable it to start automatically.
# service named start
# chkconfig named on
On machine nerv09, stop the firewall and disable its automatic start.
# service iptables stop
# service ip6tables stop
# chkconfig iptables off
# chkconfig ip6tables off

Lab 3 – OEL 6 Configuration. Hands On!

Lab 3.1 – OEL 6 Configuration
On machines nerv01 and nerv02, configure the public and private network interfaces.

Lab 3.2 – OEL 6 Configuration
On machines nerv01 and nerv02, update the operating system and install the prerequisites.
# service network restart
# yum -y update
# yum -y install oracle-rdbms-server-12cR1-preinstall
# yum -y install oracleasm-support
# yum -y install unzip wget iscsi-initiator-utils java-1.8.0-openjdk parted
# yum -y install unixODBC unixODBC.i686 unixODBC-devel unixODBC-devel.i686
# wget http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.4-1.el6.x86_64.rpm
# rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm
On machines nerv01 and nerv02, remove the DNS 8.8.8.8 from the eth0 network interface.
On machines nerv01 and nerv02, change the following line in the file /etc/fstab.
tmpfs /dev/shm tmpfs defaults,size=4g 0 0

Lab 3.3 – OEL 6 Configuration
On machines nerv01 and nerv02, ADD to the file /etc/hosts:
# Public
192.168.15.101 nerv01.localdomain nerv01
192.168.15.102 nerv02.localdomain nerv02
# Private
192.168.1.101 nerv01-priv.localdomain nerv01-priv
192.168.1.102 nerv02-priv.localdomain nerv02-priv
# Virtual
192.168.15.111 nerv01-vip.localdomain nerv01-vip
192.168.15.112 nerv02-vip.localdomain nerv02-vip
# Storage
192.168.15.201 nerv09.localdomain nerv09
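A quick way to check the name-to-IP mapping without touching a real /etc/hosts is to filter a hosts-format snippet with awk; a minimal sketch using two of the entries above (the lookup helper is hypothetical, not part of the lab):

```shell
# Look up the IP for a hostname alias in a hosts-format snippet.
hosts_snippet="192.168.15.101 nerv01.localdomain nerv01
192.168.15.111 nerv01-vip.localdomain nerv01-vip"

lookup() {
  # Print the IP of the line whose FQDN or short alias matches $1.
  echo "$hosts_snippet" | awk -v h="$1" '$2 == h || $3 == h { print $1 }'
}

lookup nerv01-vip   # prints: 192.168.15.111
```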

Lab 3.4 – OEL 6 Configuration
On machines nerv01 and nerv02, run the commands below.
# groupadd oper
# groupadd asmadmin
# groupadd asmdba
# groupadd asmoper
# usermod -g oinstall -G dba,oper,asmadmin,asmdba,asmoper oracle
# mkdir -p /u01/app/12.1.0.2/grid
# mkdir -p /u01/app/oracle/product/12.1.0.2/db_1
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01
# passwd oracle
(Use Nerv2017 as the password for the oracle user.)

Lab 3.5 – OEL 6 Configuration
On machines nerv01 and nerv02, change SELinux from "enforcing" to "permissive".
# vi /etc/selinux/config
On machines nerv01 and nerv02, disable the firewall.
# chkconfig iptables off
# chkconfig ip6tables off
On machines nerv01 and nerv02, disable NTP.
# mv /etc/ntp.conf /etc/ntp.conf.org
# reboot

Lab 3.6 – OEL 6 Configuration
On machines nerv01 and nerv02, as the oracle user, ADD AT THE END of the file /home/oracle/.bash_profile the lines below.
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=nerv01.localdomain
export ORACLE_UNQNAME=ORCL
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0.2/db_1
export GRID_HOME=/u01/app/12.1.0.2/grid
export CRS_HOME=$GRID_HOME
export ORACLE_SID=ORCL1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
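Because ORACLE_HOME is built from ORACLE_BASE, changing the base path in one place moves the whole directory tree; a minimal sketch of that expansion, using the lab's paths:

```shell
# ORACLE_HOME is derived from ORACLE_BASE, so the two stay consistent.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0.2/db_1
echo "$ORACLE_HOME"   # prints: /u01/app/oracle/product/12.1.0.2/db_1
```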

Shared Storage

Shared Storage Options


Lab 4 – Storage. Hands On!

Lab 4.1 – Storage
On machine nerv09, create 3 partitions of 5 GB and 4 of 10 GB.
On machine nerv09, configure the iSCSI server.
# yum -y install scsi-target-utils
# cat /etc/tgt/targets.conf
<target iqn.2010-10.com.nervinformatica:storage.asm01-01>
    backing-store /dev/sda5
    initiator-address 192.168.15.101
    initiator-address 192.168.15.102
</target>
<target iqn.2010-10.com.nervinformatica:storage.asm01-02>
    backing-store /dev/sda6
    initiator-address 192.168.15.101
    initiator-address 192.168.15.102
</target>
...
# service tgtd start
# chkconfig tgtd on

Lab 4.2 – Storage (ASM)
On machines nerv01 and nerv02, enable the iSCSI Initiator package.
# chkconfig iscsid on
On machines nerv01 and nerv02, discover the disks exported by the Storage.
# iscsiadm -m discovery -t sendtargets -p 192.168.15.201 -l
On machines nerv01 and nerv02, leave ONLY the new disks in the file /etc/iscsi/initiatorname.iscsi.
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-01
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-02
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-03
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-04
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-05
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-06
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-07
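The seven InitiatorName lines differ only in their two-digit suffix, so they can be generated instead of typed by hand; a small sketch, assuming the lab's IQN prefix:

```shell
# Generate the InitiatorName lines for targets asm01-01 through asm01-07.
iqn_prefix="iqn.2010-10.com.nervinformatica:storage.asm01"
for n in $(seq -f "%02g" 1 7); do
  echo "InitiatorName=${iqn_prefix}-${n}"
done
```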

Lab 4.3 – Storage (ASM)
On machines nerv01 and nerv02, check that the disks were configured locally.
# fdisk -l
On machine nerv01, partition the new disks.
# fdisk /dev/sdb
n <enter> p <enter> 1 <enter> <enter> <enter> w <enter>
# fdisk /dev/sdc
n <enter> p <enter> 1 <enter> <enter> <enter> w <enter>
...

Lab 4.4 – Storage (ASM)
On machine nerv02, detect the new disks.
# partprobe /dev/sdb
# partprobe /dev/sdc
# partprobe /dev/sdd
# partprobe /dev/sde
# partprobe /dev/sdf
# partprobe /dev/sdg
# partprobe /dev/sdh

Lab 4.5 – Storage (ASM)
On machines nerv01 and nerv02, configure ASMLib.
# /etc/init.d/oracleasm configure
oracle <enter> asmadmin <enter> y <enter> y <enter>
# /etc/init.d/oracleasm status
On machine nerv01, create the ASM disks.
# /etc/init.d/oracleasm createdisk DISK01 /dev/sdb1
# /etc/init.d/oracleasm createdisk DISK02 /dev/sdc1
# /etc/init.d/oracleasm createdisk DISK03 /dev/sdd1
# /etc/init.d/oracleasm createdisk DISK04 /dev/sde1
# /etc/init.d/oracleasm createdisk DISK05 /dev/sdf1
# /etc/init.d/oracleasm createdisk DISK06 /dev/sdg1
# /etc/init.d/oracleasm createdisk DISK07 /dev/sdh1
On machine nerv02, detect the created disks.
# /etc/init.d/oracleasm scandisks

Lab 4.6 – Storage (ASM)
On machines nerv01 and nerv02, check that the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK01
# /etc/init.d/oracleasm querydisk -v -p DISK02
# /etc/init.d/oracleasm querydisk -v -p DISK03
# /etc/init.d/oracleasm querydisk -v -p DISK04
# /etc/init.d/oracleasm querydisk -v -p DISK05
# /etc/init.d/oracleasm querydisk -v -p DISK06
# /etc/init.d/oracleasm querydisk -v -p DISK07
On machines nerv01 and nerv02, check the device ownership and permissions.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle asmadmin 8, 17 Jan 2 13:01 DISK01
brw-rw----. 1 oracle asmadmin 8, 33 Jan 2 13:01 DISK02
brw-rw----. 1 oracle asmadmin 8, 49 Jan 2 13:01 DISK03
brw-rw----. 1 oracle asmadmin 8, 65 Jan 2 13:01 DISK04
brw-rw----. 1 oracle asmadmin 8, 81 Jan 2 13:01 DISK05
brw-rw----. 1 oracle asmadmin 8, 97 Jan 2 13:01 DISK06
brw-rw----. 1 oracle asmadmin 8, 113 Jan 2 13:01 DISK07
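In that listing, major number 8 is the Linux SCSI disk driver and each disk reserves 16 minor numbers, so minor 17 maps to /dev/sdb1, 33 to /dev/sdc1, and so on. A small shell sketch of that mapping (the helper is hypothetical and assumes the plain sdX naming scheme):

```shell
# Map a minor number (for major 8) back to its sdX partition name.
minor_to_dev() {
  minor=$1
  disk_index=$((minor / 16))   # 0 -> sda, 1 -> sdb, ...
  part=$((minor % 16))         # partition number on that disk
  # 97 is ASCII 'a'; pick the disk letter by index.
  letter=$(printf "\\$(printf '%03o' $((97 + disk_index)))")
  echo "sd${letter}${part}"
}

minor_to_dev 17    # prints: sdb1 (DISK01)
minor_to_dev 113   # prints: sdh1 (DISK07)
```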

Oracle Grid Infrastructure

Components
- Oracle Cluster Registry
- Voting Disk (Quorum Disk)
- Grid Infrastructure Management Repository (MGMTDB)
- VIPs and SCAN
- Utilities: crsctl, srvctl
- Daemons: ohasd, crsd, evmd, ons, evmlogger, ologgerd, cssdmonitor, cssdagent, ocssd, octssd, osysmond, mdnsd, gpnpd, gipcd, orarootagent, oraagent, scriptagent

Lab 5 – Grid Infrastructure. Hands On!

Lab 5.1 – Grid Infrastructure
On machine nerv01, as the oracle user, unpack the Grid Infrastructure installer.
cd /home/oracle
unzip -q linuxamd64_12102_grid_1of2.zip
unzip -q linuxamd64_12102_grid_2of2.zip
On machines nerv01 and nerv02, install the Cluster Verification Utility.
# rpm -ivh /home/oracle/grid/rpm/cvuqdisk-1.0.9-1.rpm
On machine nerv01, start the Grid Infrastructure installation.
cd grid
./runInstaller

Labs 5.2 to 5.31 – Grid Infrastructure (installer screenshots only; follow the wizard on screen)

Lab 6 – Oracle Database Software. Hands On!

Lab 6.1 – Oracle Database Software
On machine nerv01, as the oracle user, unpack and run the Oracle Database Software installer.
cd /home/oracle
unzip -q linuxamd64_12102_database_1of2.zip
unzip -q linuxamd64_12102_database_2of2.zip
cd database
./runInstaller

Labs 6.2 to 6.18 – Oracle Database Software (installer screenshots only; follow the wizard on screen)

Oracle Database

RAC Database Background Processes
- ACMS: Atomic Controlfile to Memory Service
- GTX0-j: Global Transaction Process
- LMON: Global Enqueue Service Monitor
- LMD: Global Enqueue Service Daemon
- LMS: Global Cache Service Process
- LCK0: Instance Enqueue Process
- RMSn: Oracle RAC Management Processes
- RSMN: Remote Slave Monitor
Database structures:
- PFILE / SPFILE (1x)
- Control Files (1x)
- Online Redo Log Threads (x Nodes)
- UNDO Tablespaces / Datafiles (x Nodes)
- Datafiles (1x)

Lab 7.1 – Oracle Database
To log on to the +ASM1 instance, use SQL*Plus.
export ORACLE_HOME=$GRID_HOME
export ORACLE_SID=+ASM1
sqlplus / AS SYSASM
SQL> CREATE DISKGROUP DATA NORMAL REDUNDANCY DISK 'ORCL:DISK04', 'ORCL:DISK05';
SQL> CREATE DISKGROUP FRA NORMAL REDUNDANCY DISK 'ORCL:DISK06', 'ORCL:DISK07';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1.0.0.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.asm' = '12.1.0.0.0';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.0.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.0.0';
srvctl start diskgroup -g DATA -n nerv02
srvctl enable diskgroup -g DATA -n nerv02
srvctl start diskgroup -g FRA -n nerv02
srvctl enable diskgroup -g FRA -n nerv02

Labs 7.2 to 7.16 – Oracle Database (wizard screenshots only; follow the steps on screen)

Lab 7.17 – Oracle Database
To log on to the +ASM1 instance, use SQL*Plus.
export ORACLE_SID=+ASM1
sqlplus / AS SYSDBA
Why did it not work?
Check the existing disks and the available space.
SQL> SELECT NAME, TOTAL_MB, FREE_MB, HOT_USED_MB, COLD_USED_MB FROM V$ASM_DISK;
SQL> SELECT NAME, TOTAL_MB, FREE_MB, HOT_USED_MB, COLD_USED_MB FROM V$ASM_DISKGROUP;
Create a TABLESPACE in ASM.
SQL> CREATE TABLESPACE nerv DATAFILE '+DATA';
Should this be done in the ASM instance or in the Database instance?
Check the newly created DATAFILE and the existing ones.
SQL> SELECT FILE_NAME, BYTES, MAXBYTES, AUTOEXTENSIBLE, INCREMENT_BY FROM DBA_DATA_FILES;

Lab 7.18 – Oracle Database
Run asmcmd and browse the Disk Group directories.
asmcmd -p
ASMCMD [+] > help
ASMCMD [+] > lsdg
Using asmcmd, copy a DATAFILE from ASM to /home/oracle on one of the RAC machines.
Run a backup of the database.
rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
Why did it not work?

Administration

Commands deprecated in 11gR2 (reference tables)

Commands deprecated in 12cR1 (reference tables)

Difficulties
- GRID_HOME x ORACLE_HOME
- oracle x root

GRID_HOME binaries
Add $GRID_HOME/bin to the PATH, in .bash_profile:
crsctl status res -t
OR
. oraenv
ORACLE_SID = [ORCL1] ? +ASM1 <enter>
OR
cd /u01/app/12.1.0.2/grid/bin/
./crsctl status res -t
OR
/u01/app/12.1.0.2/grid/bin/crsctl status res -t
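Prepending $GRID_HOME/bin to PATH is what lets a bare crsctl resolve to the Grid home copy instead of requiring the full path. A minimal sketch of that lookup order, using a throwaway directory and a fake crsctl stand-in (both hypothetical, just to show PATH precedence):

```shell
# Demonstrate PATH lookup order with a stand-in binary.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho fake-crsctl\n' > "$bindir/crsctl"
chmod +x "$bindir/crsctl"
PATH="$bindir:$PATH"
command -v crsctl    # prints the stand-in's path, because $bindir comes first
crsctl               # runs it: prints fake-crsctl
rm -rf "$bindir"
```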

Daemons (architecture diagrams)

Cluster Startup

Logs
11gR2:
$GRID_HOME/log/<node>/
$GRID_HOME/log/<node>/alert<node>.log
12cR1:
$ORACLE_BASE/diag/crs/<node>/crs
$ORACLE_BASE/diag/crs/<node>/crs/trace/alert.log

Lab 8 – Daemons. Hands On!

Lab 8.1 – Daemons
Watch the daemons running via top.
Shut down machine nerv01. Watch (tail -f) what happens in the nerv02 Alert Log while nerv01 shuts down.
Power nerv01 back on. Watch (tail -f) what happens in the nerv02 Alert Log while nerv01 comes up.
Familiarize yourself with the log directory.
Check the state of the resources.
/u01/app/12.1.0.2/grid/bin/crsctl status res -t

Lab 8.2 – Daemons
Keep watching the Alert Logs of both machines.
Disconnect the Interconnect network cable from one node only. What happened?
Disconnect the Storage network cable from one node only. What happened?
Check the timeout parameters, then lower them to the minimum possible.
# /u01/app/12.1.0.2/grid/bin/crsctl get css reboottime
# /u01/app/12.1.0.2/grid/bin/crsctl get css misscount
# /u01/app/12.1.0.2/grid/bin/crsctl get css disktimeout
# /u01/app/12.1.0.2/grid/bin/crsctl set css reboottime 1
# /u01/app/12.1.0.2/grid/bin/crsctl set css misscount 2
# /u01/app/12.1.0.2/grid/bin/crsctl set css disktimeout 3

Load Testing

Load Testing
Types:
- TPC-C: OLTP (retail chain)
- TPC-E: OLTP (telephony)
- TPC-H: Data Warehouse
Tools:
- Hammerora
- Swingbench


Lab 9 – Load Testing. Hands On!

Lab 9.1 – Load Testing
Copy swingbench to machine nerv01, as the oracle user.
Create a TABLESPACE named SOE.
Unpack swingbench.zip.
cd /home/oracle
unzip -q swingbench261040.zip
cd swingbench/bin
Run the load-test schema creation: ./oewizard
Run the load test:
./charbench -cs //rac01-scan/ORCL -uc 10 -c ./configs/SOE_Server_Side_V2.xml

srvctl

srvctl
- From any node, controls all of them.
- Must be used as the oracle user, or as the owner of the GRID_HOME.
- The srvctl from the GRID_HOME must be used.
- The preferred command to start and stop RAC resources.
- Administers Databases, Instances, ASM, Listeners, and Services.
- A resource can be started, stopped, enabled, or disabled.

Lab 10 – srvctl. Hands On!

Lab 10.1 – srvctl
Run srvctl -h and understand the options.
Stop the Listener of only one node.
Stop the Instance of only one node.
Start the stopped Listener again.
Start the stopped Instance again.
Stop the Database, and start it again.
Stop an Instance with the ABORT option.
Start an Instance with the MOUNT option.
Kill an Instance (kill its pmon) on one of the nodes, and see what happens.

Lab 10.2 – srvctl
Put the database in ARCHIVELOG mode and run a backup.
srvctl stop database -d ORCL
srvctl start instance -d ORCL -i ORCL1 -o mount
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER SYSTEM SET db_recovery_file_dest_size=10G;
SQL> ALTER SYSTEM SET db_recovery_file_dest='+FRA';
SQL> ALTER DATABASE OPEN;
srvctl start instance -d ORCL -i ORCL2
RMAN> BACKUP DATABASE;

crsctl

crsctl
- From any node, controls all of them.
- Must be used as the root user.
- Must be used from the GRID_HOME.
- The main Grid administration command.
- A resource can be started, stopped, enabled, or disabled.
- Required to check and change parameters.
- Required for Troubleshooting and Debug.

Lab 11 – crsctl. Hands On!

Lab 11.1 – crsctl
Check the crsctl options by typing "crsctl" with no options.
Check the status of the daemons:
# /u01/app/12.1.0.2/grid/bin/crsctl check css
# /u01/app/12.1.0.2/grid/bin/crsctl check evm
# /u01/app/12.1.0.2/grid/bin/crsctl check crs
# /u01/app/12.1.0.2/grid/bin/crsctl check ctss
# /u01/app/12.1.0.2/grid/bin/crsctl check cluster
# /u01/app/12.1.0.2/grid/bin/crsctl check cluster -all
Check the installed and active versions.
# /u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion
# /u01/app/12.1.0.2/grid/bin/crsctl query crs softwareversion
# /u01/app/12.1.0.2/grid/bin/crsctl query crs releasepatch
# /u01/app/12.1.0.2/grid/bin/crsctl query crs softwarepatch
List all parameters of a resource.
# /u01/app/12.1.0.2/grid/bin/crsctl status res ora.orcl.db -f
srvctl modify database -db ORCL -startoption OPEN
srvctl modify database -db ORCL -stopoption ABORT

Lab 11.2 – crsctl
List the cluster modules.
# /u01/app/12.1.0.2/grid/bin/crsctl lsmodules crs
# /u01/app/12.1.0.2/grid/bin/crsctl lsmodules css
# /u01/app/12.1.0.2/grid/bin/crsctl lsmodules evm
Pick one of the modules reported by the previous command (lsmodules) and put it in Debug mode, while watching (tail -f) its log.
# /u01/app/12.1.0.2/grid/bin/crsctl set log css "CSSD:5"
# /u01/app/12.1.0.2/grid/bin/crsctl set log css "CSSD:2"
Stop the whole node.
# /u01/app/12.1.0.2/grid/bin/crsctl stop cluster
Stop the other node.
# /u01/app/12.1.0.2/grid/bin/crsctl stop cluster -n nerv02
Start the whole cluster.
# /u01/app/12.1.0.2/grid/bin/crsctl start cluster -all

Voting Disks

Voting Disk
- It is the hub of the "ping" between nodes.
- Can have N mirrors.
- Can be changed from any node.
- Voting Disk backups are manual.
- All Voting Disk operations must be executed as root.
- A backup must be taken after adding or removing nodes (before 11gR2).
- Based on its information, the Clusterware decides which nodes are part of the cluster (Election / Eviction / Split Brain).
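A node must see a strict majority of the voting disks to stay in the cluster, which is why voting disks are configured in odd numbers; a small sketch of that quorum arithmetic:

```shell
# With n voting disks, a node must access floor(n/2)+1 of them,
# so the cluster tolerates the loss of n - (floor(n/2)+1) disks.
for n in 1 3 5; do
  majority=$((n / 2 + 1))
  tolerated=$((n - majority))
  echo "disks=$n majority=$majority tolerated_failures=$tolerated"
done
# disks=3 gives majority=2, so three voting disks survive one failure.
```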

Lab 12 – Voting Disk. Hands On!

Lab 12.1 – Voting Disk
On machine nerv09, create 3 partitions of 1 GB (for the Voting Disk) and 3 of 2 GB (for the OCR).
On machine nerv09, reconfigure the iSCSI server file /etc/tgt/targets.conf with the 6 new partitions.
<target iqn.2010-10.com.nervinformatica:storage.asm01-08>
    backing-store /dev/sda33
    initiator-address 192.168.15.101
    initiator-address 192.168.15.102
</target>
...
# service tgtd reload

Lab 12.2 – Voting Disk
On machines nerv01 and nerv02, discover the disks exported by the Storage.
# iscsiadm -m discovery -t sendtargets -p 192.168.15.201 -l
On machines nerv01 and nerv02, add the new disks to the file /etc/iscsi/initiatorname.iscsi.
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-08
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-09
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-10
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-11
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-12
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-13

Lab 12.3 – Voting Disk
On machines nerv01 and nerv02, check that the disks were configured locally.
# fdisk -l
On machine nerv01, partition the new disks.
# fdisk /dev/sdi
n <enter> p <enter> 1 <enter> <enter> <enter> w <enter>
...

Lab 12.4 – Voting Disk
On machine nerv02, detect the new disks.
# partprobe /dev/sdi
# partprobe /dev/sdj
# partprobe /dev/sdk
# partprobe /dev/sdl
# partprobe /dev/sdm
# partprobe /dev/sdn
On machine nerv01, create the ASM disks.
# /etc/init.d/oracleasm createdisk DISK08 /dev/sdi1
# /etc/init.d/oracleasm createdisk DISK09 /dev/sdj1
# /etc/init.d/oracleasm c

