Monday, September 14, 2009


1 Objectives
1.1 Scope
2 System Configuration
2.1 Machine Configuration
2.2 External/Shared Storage
2.3 Kernel Parameters
3 Oracle Software Configuration
3.1 Directory Structure
3.2 Database Layout
3.3 Redo Logs
3.4 Controlfiles
4 Oracle Pre-Installation tasks
4.1 Installing Red Hat
4.2 Network Configuration
4.3 Copy Oracle 10.2.0.1 software onto server
4.4 Check installed packages
4.5 Validate script
4.6 Download ASM packages
4.7 Download OCFS packages
4.8 Creating Required Operating System Groups and Users
4.9 Oracle required directory creation
4.10 Verifying That the User nobody Exists
4.11 Configuring SSH on Cluster Member Nodes for oracle
4.12 Configuring SSH on Cluster Member Nodes for root
4.13 VNC setup
4.14 Kernel parameters
4.15 Verifying Hangcheck-timer Module on Kernel 2.6
4.16 Oracle user limits
4.17 Installing the cvuqdisk Package for Linux
4.18 Disk Partitioning
4.19 Checking the Network Setup with CVU
4.20 Checking the Hardware and Operating System Setup with CVU
4.21 Checking the Operating System Requirements with CVU
4.22 Verifying Shared Storage
4.23 Verifying the Clusterware Requirements with CVU
4.24 ASM package install
4.25 OCFS package install
4.26 Disable SELinux
4.27 OCFS2 Configuration
4.28 OCFS2 File system format
4.29 OCFS2 File system mount
5 Installation
5.1 CRS install
5.2 ASM Install
5.3 Install Database Software
5.4 Create RAC Database
6 Scripts and profile files
6.1 .bash_profile rac01
6.2 .bash_profile rac02
7 RAC Infrastructure Testing
7.1 RAC Voting Disk Test
7.2 RAC Cluster Registry Test
7.3 RAC ASM Tests
7.4 RAC Interconnect Tests
7.5 Loss of Oracle Config File
Appendix
1. OCR/Voting disk volumes inaccessible by rac02
2. RAC cluster went down on public network test

1 Objectives

The objectives of this document are to:

  • Record the setup and configuration of the 10g Oracle RAC environment.
  • Supplement the handover of the environment to ICM personnel.

1.1 Scope

This document provides an overview of Oracle Clusterware and Oracle Real Application Clusters (RAC) installation and configuration procedures.

2 System Configuration

2.1 Machine Configuration

The table below details the server specifications.

Node Name rac01.-asp.com
Purpose RAC Node 1
IP Address 10.13.100.11
Manufacturer HP
Model DL585 G2
Operating System Linux Red Hat Advanced Server
OS Version 4.0
Update Version Nahant Update 5
OS Patches See “Check installed packages” section
Memory 16 GB
Swap 16 GB
No. of Processors 2 x Dual-Core AMD Opteron(tm) 8218, 1024 KB cache
No of Oracle Instances 1 x Database
Adapter Target Lun SD# Mpath Size Usage
1 1 3 sdj mpath2 1GB VOTE3
1 0 3 sdc mpath2 1GB VOTE3
1 1 2 sdi mpath1 1GB VOTE2
1 0 2 sdb mpath1 1GB VOTE2
1 1 1 sdh mpath0 1GB VOTE1
1 0 1 sda mpath0 1GB VOTE1
1 1 7 sdn mpath6 400GB ASM1
1 0 7 sdg mpath6 400GB ASM1
1 1 6 sdm mpath5 175GB ASM2
1 0 6 sdf mpath5 175GB ASM2
1 1 5 sdl mpath4 5GB OCR1
1 0 5 sde mpath4 5GB OCR1
1 1 4 sdk mpath3 5GB OCR2
1 0 4 sdd mpath3 5GB OCR2

The following table details the disk configuration information used to partition the internal disks during the installation process:

File System Storage Size Mount Point
ext3 /dev/mapper/VolGroup00-LogVol00 20 GB /
ext3 /dev/cciss/c0d0p1 100 MB /boot
ext3 /dev/mapper/VolGroup00-LogVol02 2 GB /home
ext3 /dev/mapper/VolGroup00-LogVol05 4 GB /tmp
ext3 /dev/mapper/VolGroup00-LogVol03 1 GB /opt
ext3 /dev/mapper/VolGroup00-LogVol04 2 GB /var
ext3 /dev/mapper/VolGroup00-LogVol06 22 GB /u01

2.2 External/Shared Storage

File System Size Mount Point
ASM 187.9 GB +DATA
ASM 429.4 GB +BACKUP
File System Storage Size Mount Point
OCFS2 /dev/mapper/mpath0 1 GB /u02/vote1
OCFS2 /dev/mapper/mpath1 1 GB /u02/vote2
OCFS2 /dev/mapper/mpath2 1 GB /u02/vote3
OCFS2 /dev/mapper/mpath4 5 GB /u02/ocr1
OCFS2 /dev/mapper/mpath3 5 GB /u02/ocr2

2.3 Kernel Parameters

Kernel parameters were reconfigured to support the Oracle environment. The table below details all Kernel parameter changes.

Parameter Name rac01 rac02
SEMMNI 4096 4096
SHMMAX 4294967295 4294967295
SHMMNI 4096 4096
SHMALL 2097152 2097152
IP_LOCAL_PORT_RANGE (START) 1024 1024
IP_LOCAL_PORT_RANGE (RANGE) 65000 65000
RMEM_DEFAULT 262144 262144
RMEM_MAX 262144 262144
WMEM_DEFAULT 262144 262144
WMEM_MAX 262144 262144
User Process Limit 16384 16384
User Max Files 65536 65536
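
These values can be cross-checked on a running node with sysctl; a quick verification sketch (the parameter names match the /etc/sysctl.conf entries in section 4.14):

/sbin/sysctl kernel.shmmax kernel.shmmni kernel.shmall kernel.sem fs.file-max
/sbin/sysctl net.ipv4.ip_local_port_range
/sbin/sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max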

3 Oracle Software Configuration

3.1 Directory Structure

The following folders and ASM disk groups were created during the installation process for rac01 and rac02.

Description Location
Oracle Base Directory /u01/app/oracle
Oracle Inventory Directory /u01/app/oracle/oraInventory
CRS (ORACLE_HOME) /u01/crs/oracle/product/10/crs
ASM (ORACLE_HOME) /u01/app/oracle/product/10.2.0/asm
DB (ORACLE_HOME) /u01/app/oracle/product/10.2.0/db_1
Datafiles +DATA (ASM disk group)
Recovery Area +BACKUP (ASM disk group)

3.2 Database Layout

The following table details the datafile configuration:

Storage Database Tablespace Size MB
+DATA/prod/datafile/agrblob.270.649008095 AGRBLOB 1,024.000
+DATA/prod/datafile/agrdata.268.649008049 AGRDATA 1,024.000
+DATA/prod/datafile/agrdoc.271.649008107 AGRDOC 1,024.000
+DATA/prod/datafile/agrindex.269.649008085 AGRINDEX 1,024.000
+DATA/prod/datafile/agrscratch.272.649008117 AGRSCRATCH 100.000
+DATA/prod/datafile/example.264.647361533 EXAMPLE 100.000
+DATA/prod/datafile/sysaux.257.647361451 SYSAUX 470.000
+DATA/prod/datafile/system.256.647361451 SYSTEM 510.000
+DATA/prod/datafile/undotbs1.258.647361451 UNDOTBS1 40.000
+DATA/prod/datafile/undotbs2.265.647361611 UNDOTBS2 125.000
+DATA/prod/datafile/users.259.647361453 USERS 5.000
+DATA/prod/tempfile/agrtemp.273.649008193 AGRTEMP 4,096.000
+DATA/prod/tempfile/temp.263.647361531 TEMP 20.000

3.3 Redo Logs

The following table details the redo log configuration:

Group Directory Logfile
1 +DATA/prod/onlinelog/ group_1.261.647361523
1 +BACKUP/prod/onlinelog/ group_1.257.647361525
2 +DATA/prod/onlinelog/ group_2.262.647361525
2 +BACKUP/prod/onlinelog/ group_2.258.647361525
3 +DATA/prod/onlinelog/ group_3.266.647361665
3 +BACKUP/prod/onlinelog/ group_3.259.647361665
4 +DATA/prod/onlinelog/ group_4.267.647361667
4 +BACKUP/prod/onlinelog/ group_4.260.647361667

3.4 Controlfiles

The following table details the controlfile configuration.

Controlfiles Location
current.260.647361521 +DATA/prod/controlfile/
current.256.647361521 +BACKUP/prod/controlfile/

4 Oracle Pre-Installation tasks

This section details the tasks and checks carried out prior to the installation of Oracle 10g RAC.

4.1 Installing Red Hat

Ensure the following Red Hat package groups are installed before going ahead with the Oracle install:

• Development Tools
• Compatibility Arch Development Support
• Legacy Development Support


4.2 Network Configuration

Configure /etc/hosts on rac01 and rac02 as below.

Ensure that the node names are not included for the loopback address in the /etc/hosts file.

RAC01:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
# Public
10.13.100.11 rac01.-asp.com rac01
10.13.100.12 rac02.-asp.com rac02

#Private
172.16.2.1 orapriv01.-asp.com orapriv01
172.16.2.2 orapriv02.-asp.com orapriv02

#Virtual
10.13.100.13 oravip01.-asp.com oravip01
10.13.100.14 oravip02.-asp.com oravip02

RAC02:

127.0.0.1 localhost.localdomain localhost
# Public
10.13.100.11 rac01.-asp.com rac01
10.13.100.12 rac02.-asp.com rac02

#Private
172.16.2.1 orapriv01.-asp.com orapriv01
172.16.2.2 orapriv02.-asp.com orapriv02

#Virtual
10.13.100.13 oravip01.-asp.com oravip01
10.13.100.14 oravip02.-asp.com oravip02
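
Name resolution and basic connectivity can then be sanity-checked from each node with a small loop (the VIP addresses will not answer until VIPCA has been run during the CRS install):

for h in rac01 rac02 orapriv01 orapriv02
do
ping -c 1 $h
done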

4.3 Copy Oracle 10.2.0.1 software onto server

Connect to http://edelivery.oracle.com and download Oracle Database for platform "Linux Intel (64-bit)".


Click “Oracle® Database 10g Release 2 (10.2.0.1.0) Media Pack (with Oracle® Enterprise Manager 10g Release 2 Grid Control (10.2.0.1.0) for Linux x86 and Oracle® Warehouse Builder 10g Release 2 (10.2.0.1)) for Linux x86-64”


Download all five parts (part 1 of 5 through part 5 of 5).

Upload zip files onto server to /u01/10gr2, and run the following commands:

cd /u01/10gr2
mkdir 10gr2
cd 10gr2
# unzip each of the five downloaded parts from the parent directory
for i in ../*5.zip
do
unzip $i
done

4.4 Check installed packages

To interpret the package name, e.g. ocfs2-2.6.9-22.0.1.ELsmp-1.2.1-1.i686.rpm

The package name is comprised of multiple parts separated by '-'.

• ocfs2 - Package name
• 2.6.9-22.0.1.ELsmp - Kernel version and flavor
• 1.2.1 - Package version
• 1 - Package subversion
• i686 - Architecture
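
The same convention can be checked against an installed package with an rpm query; for example, the following distinguishes the 32-bit (i686) and 64-bit (x86_64) copies of glibc:

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' glibc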

4.4.1 Adding packages

Packages can be added either with the GNOME "Add/Remove Applications" tool from a VNC session, or installed directly from the CDs. Dependency issues may require packages to be removed and reinstalled.

4.4.2 Oracle required packages

Oracle x86_64 requires these packages and versions at a minimum. This list is based upon a "default-RPMs" installation of RHEL AS/ES 4. The x86_64 packages are on the Red Hat Enterprise Linux 4 x86-64 distribution. Run the following rpm command to list the installed packages and to distinguish between 32-bit and 64-bit packages:

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' \
binutils compat-db control-center gcc gcc-c++ glibc glibc-common \
gnome-libs libstdc++ libstdc++-devel make pdksh sysstat \
xscreensaver compat-libstdc++-33 glibc-kernheaders glibc-headers libaio \
libgcc glibc-devel ORBit xorg-x11-deprecated-libs | sort

Required packages Installed on rac01 Installed on rac02
binutils-2.15.92.0.2-18.x86_64 binutils-2.15.92.0.2-22 (x86_64) binutils-2.15.92.0.2-22 (x86_64)
compat-db-4.1.25-9.i386 compat-db-4.1.25-9 (i386) compat-db-4.1.25-9 (i386)
compat-db-4.1.25-9.x86_64 compat-db-4.1.25-9 (x86_64) compat-db-4.1.25-9 (x86_64)
compat-libstdc++-33-3.2.3-47.3.i386 compat-libstdc++-33-3.2.3-47.3 (i386) compat-libstdc++-33-3.2.3-47.3 (i386)
compat-libstdc++-33-3.2.3-47.3.x86_64 compat-libstdc++-33-3.2.3-47.3 (x86_64) compat-libstdc++-33-3.2.3-47.3 (x86_64)
control-center-2.8.0-12.x86_64 control-center-2.8.0-12.rhel4.5 (x86_64) control-center-2.8.0-12.rhel4.5 (x86_64)
gcc-3.4.3-22.1.x86_64 gcc-3.4.6-8 (x86_64) gcc-3.4.6-8 (x86_64)
gcc-c++-3.4.3-22.1.x86_64 gcc-c++-3.4.6-8 (x86_64) gcc-c++-3.4.6-8 (x86_64)
glibc-2.3.4-2.9.i686 glibc-2.3.4-2.36 (i686) glibc-2.3.4-2.36 (i686)
glibc-2.3.4-2.9.x86_64 glibc-2.3.4-2.36 (x86_64) glibc-2.3.4-2.36 (x86_64)
glibc-common-2.3.4-2.9.x86_64 glibc-common-2.3.4-2.36 (x86_64) glibc-common-2.3.4-2.36 (x86_64)
glibc-devel-2.3.4-2.9.i386 glibc-common-2.3.4-2 (i686) glibc-common-2.3.4-2 (i686)
glibc-devel-2.3.4-2.9.x86_64 glibc-devel-2.3.4-2.36 (x86_64) glibc-devel-2.3.4-2.36 (x86_64)
glibc-headers-2.3.4-2.9.x86_64 glibc-headers-2.3.4-2.36 (x86_64) glibc-headers-2.3.4-2.36 (x86_64)
glibc-kernheaders-2.4-9.1.87.x86_64 glibc-kernheaders-2.4-9.1.100.EL (x86_64) glibc-kernheaders-2.4-9.1.100.EL (x86_64)
gnome-libs-1.4.1.2.90-44.1.x86_64 gnome-libs-1.4.1.2.90-44.1 (x86_64) gnome-libs-1.4.1.2.90-44.1 (x86_64)
libaio-0.3.103-3.i386 libaio-0.3.105-2 (i386) libaio-0.3.105-2 (i386)
libaio-0.3.103-3.x86_64 libaio-0.3.105-2 (x86_64) libaio-0.3.105-2 (x86_64)
libgcc-3.4.3-22.1.i386 libgcc-3.4.6-8 (i386) libgcc-3.4.6-8 (i386)

libgcc-3.4.6-8 (x86_64) libgcc-3.4.6-8 (x86_64)

libstdc++-3.4.6-8 (i386) libstdc++-3.4.6-8 (i386)
libstdc++-3.4.3-22.1.x86_64 libstdc++-3.4.6-8 (x86_64) libstdc++-3.4.6-8 (x86_64)
libstdc++-devel-3.4.3-22.1.x86_64 libstdc++-devel-3.4.5-2 (x86_64) libstdc++-devel-3.4.5-2 (x86_64)
make-3.80-5.x86_64 make-3.80-5 (x86_64) make-3.80-5 (x86_64)
ORBit-0.5.17-14.i386 ORBit-0.5.17-14 (x86_64) ORBit-0.5.17-14 (x86_64)
pdksh-5.2.14-30.x86_64 pdksh-5.2.14-30.3 (x86_64) pdksh-5.2.14-30.3 (x86_64)
sysstat-5.0.5-1.x86_64 sysstat-5.0.5-7.rhel4 (x86_64) sysstat-5.0.5-7.rhel4 (x86_64)
xorg-x11-deprecated-libs-6.8.2-1.EL.13.6.i386 xorg-x11-deprecated-libs-6.8.2-1.EL.13.25.1 (i386) xorg-x11-deprecated-libs-6.8.2-1.EL.13.25.1 (i386)

xorg-x11-deprecated-libs-6.8.2-1.EL.13.25.1 (x86_64) xorg-x11-deprecated-libs-6.8.2-1.EL.13.25.1 (x86_64)
xscreensaver-4.18-5.rhel4.2.x86_64 xscreensaver-4.18-5.rhel4.10 (x86_64) xscreensaver-4.18-5.rhel4.10 (x86_64)

4.5 Validate script

Oracle provides validation scripts for most platforms and Oracle releases. The pre-install checks for 10gR2 RDBMS (10.2.x) on Linux AMD64/EM64T platforms are published as Oracle Note 342555.1.

To run the rules:

  1. Save the file as "validate.tar".
  2. Untar the files to a local directory, e.g. tar xvf validate.tar
  3. Set your environment to the instance to be validated.
  4. Execute perl validate.pl filename.txt from the command line, as in the following example:

[oracle@rac01 software]$ perl validate.pl 10gr2_rdbms_linuxamd64_hcve_043006.txt

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Health Check/Validation (V 01.07.00)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"Validation Rule Engine" will be run in following environment:
HOSTNAME : rac01.-asp.com
USERNAME : oracle
ORACLE_SID : LIVE1
ORACLE_HOME : /u01/app/oracle/product/10.2.0/db_1
If this is not correct environment
Please set correct env parameters and rerun the program
Would you like to continue [Y]es/[N]o (Hit return for [Y]es) : Y
Executing Rules
~~~~~~~~~~~~~~~
Executing Rule: OS certified? - completed successfully.
Executing Rule: User in /etc/passwd? - completed successfully.
Executing Rule: Group in /etc/group? - completed successfully.
Executing Rule: Input ORACLE_HOME - user INPUT Required.
Enter value for <>
(Hit return for [$ORACLE_HOME]) :
- completed successfully.
Executing Rule: ORACLE_HOME valid? - completed successfully.
Executing Rule: O_H perms OK? - completed successfully.
Executing Rule: Umask set to 022? - completed successfully.
Executing Rule: LDLIBRARYPATH unset? - completed successfully.
Executing Rule: JAVA_HOME unset? - completed successfully.
Executing Rule: Other O_Hs in PATH? - completed successfully.
Executing Rule: oraInventory perms - completed successfully.
Executing Rule: /tmp adequate? - completed successfully.
Executing Rule: Swap (in Mb) - completed successfully.
Executing Rule: RAM (in Mb) - completed successfully.
Executing Rule: Swap OK? - completed successfully.
Executing Rule: Disk Space OK? - completed successfully.
Executing Rule: Kernel params OK? - completed successfully.
Executing Rule: Got ld,nm,ar,make? - completed successfully.
Executing Rule: ulimits OK? - completed successfully.
Executing Rule: RHEL3 rpms(Pt1) ok? - completed successfully.
Executing Rule: RHEL3 rpms(Pt2) ok? - completed successfully.
Executing Rule: RHEL4 Pt 1 rpms ok? - completed successfully.
Executing Rule: RHEL4 Pt 2 rpms ok? - completed successfully.
Executing Rule: SuSE SLES9 rpms ok? - completed successfully.
Executing Rule: ip_local_port_range - completed successfully.
Executing Rule: Tainted Kernel? - completed successfully.
Executing Rule: other OUI up? - completed successfully.
Test "10gr2_rdbms_linuxamd64_hcve_043006" executed at Wed Feb 13 10:51:06 2008
Test Results
~~~~~~~~~~~~
ID NAME RESULT C VALUE
===== ==================== ====== = ========================================
10 OS certified? PASSED = Certified with 10gR2 RDBMS
20 User in /etc/passwd? PASSED = userOK
30 Group in /etc/group? PASSED = GroupOK
40 Input ORACLE_HOME RECORD $ORACLE_HOME
50 ORACLE_HOME valid? PASSED = OHexists
60 O_H perms OK? PASSED = CorrectPerms
70 Umask set to 022? PASSED = UmaskOK
80 LDLIBRARYPATH unset? PASSED = UnSet
90 JAVA_HOME unset? PASSED = UnSet
100 Other O_Hs in PATH? PASSED = NoneFound
110 oraInventory perms PASSED = oraInventoryNotFound
120 /tmp adequate? PASSED = TempSpaceOK
130 Swap (in Mb) PASSED > 16383
140 RAM (in Mb) PASSED > 16011
150 Swap OK? PASSED = SwapToRAMOK
160 Disk Space OK? PASSED = DiskSpaceOK
170 Kernel params OK? FAILED = SHMMAXTooSmall
180 Got ld,nm,ar,make? PASSED = ld_nm_ar_make_found
190 ulimits OK? FAILED = StackTooSmall NoFilesTooSmall Maxupro..>
204 RHEL3 rpms(Pt1) ok? PASSED = NotRHEL3
205 RHEL3 rpms(Pt2) ok? PASSED = NotRHEL3
206 RHEL4 Pt 1 rpms ok? FAILED = binutils-2.15.92.0.2-22 RHEL4rpmsPart..>
207 RHEL4 Pt 2 rpms ok? FAILED = libaio notInstalled glibc-devel (32bi..>
208 SuSE SLES9 rpms ok? PASSED = NotSuSE
209 ip_local_port_range PASSED = ip_local_port_rangeOK
210 Tainted Kernel? PASSED = NotVerifiable
220 other OUI up? PASSED = NoOtherOUI
Please see the log file for detailed results and recommendations
Log FileName: 10gr2_rdbms_linuxamd64_hcve_043006_run_4355/validate_result_10gr2_rdbms_linuxamd64_hcve_043006.log

4.6 Download ASM packages

Download the ASMLib rpm files from:

http://www.oracle.com/technology/software/tech/linux/asmlib/rhel4.html

Search for "Drivers for kernel 2.6.9-55.EL".

Library and Tools:
• oracleasm-support-2.0.3-1.x86_64.rpm
• oracleasmlib-2.0.2-1.x86_64.rpm

Drivers for kernel "2.6.9-55.ELsmp #1 SMP":
• oracleasm-2.6.9-55.ELsmp-2.0.3-1.x86_64.rpm

4.7 Download OCFS packages

Download the ocfs2 packages from:

http://oss.oracle.com/projects/ocfs2/files/RedHat/RHEL4/x86_64/old/1.2.7-1/2.6.9-55.EL/

  • ocfs2-tools-1.2.7-1.el4.x86_64.rpm
  • ocfs2-tools-debuginfo-1.2.7-1.el4.x86_64.rpm
  • ocfs2console-1.2.7-1.el4.x86_64.rpm

4.8 Creating Required Operating System Groups and Users

Create the oinstall and dba groups (the osoper group was not created, as it is optional):
/usr/sbin/groupadd oinstall
/usr/sbin/groupadd dba
Create the oracle user:
/usr/sbin/useradd -g oinstall -G dba -d /home/oracle -m oracle

This command creates the user "oracle" with primary group "oinstall" and supplementary group "dba", and creates the home directory /home/oracle. The default shell, /bin/bash, is used.
Set the password:
passwd oracle
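
The result can be verified with id; oracle should show oinstall as the primary group and dba as a supplementary group:

id oracle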

4.9 Oracle required directory creation

Oracle software will be installed on the internal disk, in file system /u01.
The OCFS2 file system will be on the external disk and will be mounted as /u02.
On servers rac01 and rac02:
mkdir -p /u01/app/oracle

chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle

mkdir /u02
chown oracle:dba /u02
chmod 775 /u02

4.10 Verifying That the User nobody Exists

1. To determine if the user exists, enter the following command:

id nobody

If this command displays information about the nobody user then do not create that user.

2. If the nobody user does not exist, then enter the following command to create it:

/usr/sbin/useradd nobody

4.11 Configuring SSH on Cluster Member Nodes For oracle

On both rac01 and rac02, as oracle:
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa

On rac01:
ssh rac01 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
ssh rac01 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
ssh rac02 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
ssh rac02 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
scp authorized_keys rac02:/home/oracle/.ssh/
chmod 600 ~/.ssh/authorized_keys
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add

On rac02:
chmod 700 ~/.ssh
ssh rac02 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
ssh rac02 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
ssh rac01 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
ssh rac01 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
scp authorized_keys rac01:/home/oracle/.ssh/
chmod 600 ~/.ssh/authorized_keys

Test by logging onto node 1 and executing ssh rac02, then log onto node 2 and execute ssh rac01.
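
Passwordless equivalence can then be spot-checked from each node in turn (run as oracle; no password or passphrase prompts should appear):

for h in rac01 rac02
do
ssh $h hostname
done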
4.12 Configuring SSH on Cluster Member Nodes for root
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
touch ~/.ssh/authorized_keys
On node 1:
ssh rac01 cat /root/.ssh/id_rsa.pub >> authorized_keys
Password:
ssh rac01 cat /root/.ssh/id_dsa.pub >> authorized_keys
Password:
ssh rac02 cat /root/.ssh/id_rsa.pub >> authorized_keys
Password:
ssh rac02 cat /root/.ssh/id_dsa.pub >> authorized_keys
Password:
scp authorized_keys rac02:/root/.ssh/
chmod 600 ~/.ssh/authorized_keys
On node 2:
ssh rac02 cat /root/.ssh/id_rsa.pub >> authorized_keys
Password:
ssh rac02 cat /root/.ssh/id_dsa.pub >> authorized_keys
Password:
ssh rac01 cat /root/.ssh/id_rsa.pub >> authorized_keys
Password:
ssh rac01 cat /root/.ssh/id_dsa.pub >> authorized_keys
Password:
scp authorized_keys rac01:/root/.ssh/
chmod 600 ~/.ssh/authorized_keys
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
Test ssh by logging onto node1 and executing following command:
ssh rac02
Then log onto node 2 and execute: ssh rac01

4.13 VNC setup

Start the VNC services and log in as the oracle user.

Connect as root, start a shell, and execute "xhost +". The xhost program adds and deletes host names or user names from the list allowed to make connections to the X server. Run xcalc to verify that X is working.

Start a new shell, log in as oracle, execute "xhost +", and run xcalc again to verify.

4.14 Kernel parameters

Edit file /etc/sysctl.conf on both servers and add these lines:
kernel.shmall = 2097152
kernel.shmmax = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=262144
net.core.wmem_max=262144
Reboot both servers
init 6 or reboot
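
As an aside, the same settings can be loaded into the running kernel without a full reboot (a reboot was still performed here):

/sbin/sysctl -p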

4.15 Verifying Hangcheck-timer Module on Kernel 2.6

As root, run the following command to check whether the hangcheck module is loaded:
/sbin/lsmod | grep hang
If the hangcheck-timer module is not listed on a node, enter a command similar to the following to load the module from the current kernel version's module directory:
insmod /lib/modules/2.6.9-55.ELsmp/kernel/drivers/char/hangcheck-timer.ko hangcheck_tick=30 hangcheck_margin=180
lsmod | grep hang
The output should be similar to the following:
hangcheck_timer 5337 0
Now add "/sbin/modprobe hangcheck-timer" to /etc/rc.local:

[root@rac02 ~]# echo "/sbin/modprobe hangcheck-timer" >> /etc/rc.local
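
Note that the hangcheck_tick and hangcheck_margin options given to insmod above are not persistent. On RHEL 4 they would normally also be recorded in /etc/modprobe.conf so that the modprobe call in /etc/rc.local picks them up, along the lines of:

options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180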

4.16 Oracle user limits

Add the following lines to /etc/security/limits.conf file:
* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536
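
Provided pam_limits is enabled for the login path (the Oracle install guide has /etc/pam.d/login include "session required pam_limits.so"), the limits take effect at the next login and can be confirmed from a fresh oracle session:

ulimit -Su   # soft max user processes, expect 2047
ulimit -Hu   # hard max user processes, expect 16384
ulimit -Sn   # soft open files, expect 1024
ulimit -Hn   # hard open files, expect 65536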

4.17 Installing the cvuqdisk Package for Linux

Download and install the operating system package cvuqdisk.
Check whether cvuqdisk is installed:
rpm -qi cvuqdisk
It is not installed if the command returns "package cvuqdisk is not installed".
If there is an existing version, remove it with:
rpm -e cvuqdisk
Install the package:
cd /u01/10gr2/10gr2/clusterware/rpm
rpm -iv cvuqdisk-1.0.1-1.rpm
Once installed, make sure "rpm -qi cvuqdisk" returns details like the following:
[root@rac01 ~]# rpm -qi cvuqdisk
Name : cvuqdisk Relocations: (not relocatable)
Version : 1.0.1 Vendor: Oracle Corp.
Release : 1 Build Date: Thu 02 Jun 2005 23:21:38 BST
Install Date: Tue 12 Feb 2008 14:00:36 GMT Build Host: stacs27.us.oracle.com
Group : none Source RPM: cvuqdisk-1.0.1-1.src.rpm
Size : 4168 License: Oracle Corp.
Signature : (none)
Summary : RPM file for cvuqdisk
Description :
This package contains the cvuqdisk program required by CVU.
cvuqdisk is a binary that assists CVU in finding scsi disks.

4.18 Disk Partitioning

fdisk -l

Disk /dev/cciss/c0d0: 73.3 GB, 73372631040 bytes
255 heads, 63 sectors/track, 8920 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/cciss/c0d0p1 * 1 13 104391 83 Linux
/dev/cciss/c0d0p2 14 8920 71545477+ 8e Linux LVM

Disk /dev/dm-7: 187.9 GB, 187904819200 bytes

255 heads, 63 sectors/track, 22844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-7 doesn't contain a valid partition table

Disk /dev/dm-8: 429.4 GB, 429496729600 bytes

255 heads, 63 sectors/track, 52216 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-8 doesn't contain a valid partition table
Disk /dev/dm-9: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Disk /dev/dm-9 doesn't contain a valid partition table
Disk /dev/dm-10: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Disk /dev/dm-10 doesn't contain a valid partition table
Disk /dev/dm-11: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Disk /dev/dm-11 doesn't contain a valid partition table
Disk /dev/dm-12: 5368 MB, 5368709120 bytes
166 heads, 62 sectors/track, 1018 cylinders
Units = cylinders of 10292 * 512 = 5269504 bytes
Disk /dev/dm-12 doesn't contain a valid partition table
Disk /dev/dm-13: 5368 MB, 5368709120 bytes
166 heads, 62 sectors/track, 1018 cylinders
Units = cylinders of 10292 * 512 = 5269504 bytes
Disk /dev/dm-13 doesn't contain a valid partition table

4.19 Checking the Network Setup with CVU

[oracle@rac01 software]$ /u01/software/clusterware/cluvfy/runcluvfy.sh comp nodereach -n rac01,rac02 -verbose
Verifying node reachability
Checking node reachability...
Check: Node reachability from node "rac01"
Destination Node Reachable?
------------------------------------ ------------------------
rac02 yes
rac01 yes
Result: Node reachability check passed from node "rac01".
Verification of node reachability was successful.

4.20 Checking the Hardware and Operating System Setup with CVU

Connect to the oracle VNC session and in a terminal run the following commands:
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
[oracle@rac01 ~]$ /u01/software/clusterware/cluvfy/runcluvfy.sh stage -post hwos -n rac01,rac02
Performing post-checks for hardware and operating system setup
Checking node reachability...
Node reachability check passed from node "rac01".
Checking user equivalence...
User equivalence check failed for user "oracle".
Check failed on nodes:
rac01
WARNING:
User equivalence is not set for nodes:
rac01
Verification will proceed with nodes:
rac02
Checking node connectivity...
Node connectivity check passed for subnet "10.13.100.0" with node(s) rac02.
Node connectivity check passed for subnet "192.168.100.0" with node(s) rac02.
Node connectivity check passed for subnet "172.16.2.0" with node(s) rac02.
Suitable interfaces for the private interconnect on subnet "10.13.100.0":
rac02 bond0:10.13.100.12
Suitable interfaces for the private interconnect on subnet "192.168.100.0":
rac02 bond1:192.168.100.12
Suitable interfaces for the private interconnect on subnet "172.16.2.0":
rac02 bond2:172.16.2.2
ERROR:
Could not find a suitable set of interfaces for VIPs.
Node connectivity check failed.
Checking shared storage accessibility...
Disk Sharing Nodes (1 in count)
------------------------------------ ------------------------
/dev/sda rac02
/dev/sdb rac02
/dev/sdc rac02
/dev/sdd rac02
/dev/sde rac02
/dev/sdf rac02
/dev/sdg rac02
Disk Sharing Nodes (1 in count)
------------------------------------ ------------------------
/dev/sdh rac02
/dev/sdi rac02
/dev/sdj rac02
/dev/sdk rac02
/dev/sdl rac02
/dev/sdm rac02
/dev/sdn rac02
Shared storage check was successful on nodes "rac02".
Post-check for hardware and operating system setup was unsuccessful on all the nodes.

VIP failed - Ignore - see Checking the Network Setup with CVU

Shared storage failed - See Verifying Shared Storage

4.21 Checking the Operating System Requirements with CVU

[oracle@rac01 ~]$ /u01/software/clusterware/cluvfy/runcluvfy.sh comp sys -n rac01,rac02 -p crs
Verifying system requirement
Checking system requirements for 'crs'...
Total memory check passed.
Free disk space check passed.
Swap space check passed.
System architecture check passed.
Kernel version check passed.
Package existence check passed for "binutils-2.15.92.0.2-13".
Group existence check passed for "dba".
Group existence check passed for "oinstall".
User existence check passed for "nobody".
System requirement passed for 'crs'
Verification of system requirement was successful.

4.22 Verifying Shared Storage

/u01/software/clusterware/cluvfy/runcluvfy.sh comp ssa -n rac01,rac02 -s /dev/mapper/mpath0 -verbose

4.23 Verifying the Clusterware Requirements with CVU

cd /u01/10gr2/10gr2/clusterware/cluvfy
./runcluvfy.sh stage -pre crsinst -n rac01,rac02
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "rac01".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as Primary] passed.
Administrative privileges check passed.
Checking node connectivity...
WARNING:
Make sure IP address "192.168.6.2" is up and is a valid IP address on node "rac01".
Node connectivity check failed for subnet "192.168.6.0".
Node connectivity check passed for subnet "192.168.5.0" with node(s) rac02,rac01.
Suitable interfaces for the private interconnect on subnet "192.168.5.0":
rac02 eth2:192.168.5.3
rac01 eth2:192.168.5.2

ERROR:

Could not find a suitable set of interfaces for VIPs.

Node connectivity check failed.
Checking system requirements for 'crs'...
Total memory check passed.
Free disk space check passed.
Swap space check passed.
System architecture check passed.
Kernel version check passed.
Package existence check passed for "binutils-2.15.92.0.2-13".
Group existence check passed for "dba".
Group existence check passed for "oinstall".
User existence check passed for "nobody".
System requirement passed for 'crs'
Pre-check for cluster services setup was unsuccessful on all the nodes.

VIP failed - Ignore - see Checking the Network Setup with CVU

4.24 ASM package install

Install the 3 ASM packages on both servers using the following command:
[root@rac02 software]# rpm -Uvh oracleasm-support-2.0.3-1.x86_64.rpm \
oracleasmlib-2.0.2-1.x86_64.rpm \
oracleasm-2.6.9-55.ELsmp-2.0.3-1.x86_64.rpm
Preparing... ########################################### [100%]
1:oracleasm-support ########################################### [ 33%]
2:oracleasm-2.6.9-55.ELsm########################################### [ 67%]
3:oracleasmlib ########################################### [100%]

On both servers
[root@rac01 software]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]

Disk Matrix at RAC01, RAC02

Mpath Size Usage
mpath2 1GB VOTE3
mpath1 1GB VOTE2
mpath0 1GB VOTE1
mpath6 400GB ASM1
mpath5 175GB ASM2
mpath4 5GB OCR1
mpath3 5GB OCR2

On RAC01

[root@rac01 software]# /etc/init.d/oracleasm createdisk VOL1 /dev/mapper/mpath5
Marking disk "/dev/mapper/mpath5" as an ASM disk: [ OK ]
[root@rac01 software]# /etc/init.d/oracleasm createdisk VOL2 /dev/mapper/mpath6
Marking disk "/dev/mapper/mpath6" as an ASM disk: [ OK ]

On RAC02

[root@rac02 ~]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
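
The disks stamped for ASMLib can then be listed on each node; VOL1 and VOL2 are expected in the output:

/etc/init.d/oracleasm listdisks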
4.24.1 Configuring the Scan Order

The Oracle ASMLib configuration file is located at /etc/sysconfig/oracleasm.

The configuration file contains many configuration variables. The ORACLEASM_SCANORDER variable specifies disks to be scanned first. The ORACLEASM_SCANEXCLUDE variable specifies the disks that are to be ignored.

Multipath Disks First

Edit the ORACLEASM_SCANORDER variable to configure ASMLib to scan the multipath disks first:

ORACLEASM_SCANORDER="multipath sd"
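
If the underlying single-path /dev/sd* devices should be hidden from ASMLib entirely, the companion variable in the same file can be set as well (an illustrative value, not one recorded in this build):

ORACLEASM_SCANEXCLUDE="sd"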

4.25 OCFS package install

On both rac01 and rac02, run the following command to install the packages:

[root@rac01 software]# rpm -Uvh ocfs2-2.6.9-55.ELsmp-1.2.8-2.el4.x86_64.rpm \
ocfs2-2.6.9-55.EL-debuginfo-1.2.8-2.el4.x86_64.rpm \
ocfs2console-1.2.7-1.el4.x86_64.rpm \
ocfs2-tools-1.2.7-1.el4.x86_64.rpm \
ocfs2-tools-debuginfo-1.2.7-1.el4.x86_64.rpm
Preparing... ########################################### [100%]
1:ocfs2-tools ########################################### [ 20%]
2:ocfs2-2.6.9-55.ELsmp ########################################### [ 40%]
3:ocfs2-2.6.9-55.EL-debug########################################### [ 60%]
4:ocfs2console ########################################### [ 80%]
5:ocfs2-tools-debuginfo ########################################### [100%]

4.26 Disable SELinux

On both servers:

To disable SELinux, run the "Security Level Configuration" GUI utility:

# /usr/bin/system-config-securitylevel &

or from the console: Applications / System Settings / Security Level.

Click the SELinux tab and uncheck the "Enabled" checkbox. After clicking [OK], you will be presented with a warning dialog; acknowledge it by clicking "Yes".
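
As an alternative to the GUI (and easier to script), the same result can be achieved on RHEL 4 by setting the mode in /etc/selinux/config before the reboot:

SELINUX=disabled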

Reboot both servers

init 6 or reboot

4.27 OCFS2 Configuration

OCFS2 is the file system used for the voting disks and the OCR. Oracle provides a tool, ocfs2console, to set up and configure the file system.

Perform these tasks on node RAC01 only.

Connect to root VNC and run the following commands:

exec /usr/bin/ssh-agent $SHELL

/usr/bin/ssh-add

ocfs2console

[ocfs2console screenshots; the resulting /etc/ocfs2/cluster.conf is shown in the Appendix]

On both servers as root

[root@rac01 ~]# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [y]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
O2CB cluster ocfs2 already online

4.27.1 OCFS2 commands

To check the status of the cluster, do:

[root@rac01 software]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold: 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Not active

To load the modules, do:

/etc/init.d/o2cb load

To online cluster ocfs2, do:

/etc/init.d/o2cb online ocfs2

Starting cluster ocfs2: OK

To offline cluster ocfs2, do:

/etc/init.d/o2cb offline ocfs2
Cleaning heartbeat on ocfs2: OK

If the cluster is set up to load on boot, the ocfs2 cluster can be started and stopped as follows:

/etc/init.d/o2cb start

Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting cluster ocfs2: OK
/etc/init.d/o2cb stop
Cleaning heartbeat on ocfs2: OK
Stopping cluster ocfs2: OK
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK

4.28 OCFS2 File system format

The format command needs the O2CB cluster started and online, because it must check that the volume is not mounted on any node in the cluster.

Connect to root VNC and run the following commands:

ocfs2console


If the list of devices presented is wrong, use the following commands to format the OCFS2 volumes manually.

Create the 3 voting disk volumes as below:

# mkfs.ocfs2 -L "vote1" /dev/mapper/mpath0
# mkfs.ocfs2 -L "vote2" /dev/mapper/mpath1
# mkfs.ocfs2 -L "vote3" /dev/mapper/mpath2

Create the 2 OCR volumes as below:
# mkfs.ocfs2 -L "ocr1" /dev/mapper/mpath4
# mkfs.ocfs2 -L "ocr2" /dev/mapper/mpath3

http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_faq.html#CONFIGURE

[root@rac01 software]# mkfs.ocfs2 -L "vote1" /dev/mapper/mpath0
mkfs.ocfs2 1.2.7
Filesystem label=vote1
Block size=4096 (bits=12)
Cluster size=4096 (bits=12)
Volume size=1073741824 (262144 clusters) (262144 blocks)
9 cluster groups (tail covers 4096 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 0 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful

[root@rac01 software]# mkfs.ocfs2 -L "vote2" /dev/mapper/mpath1
mkfs.ocfs2 1.2.7
Filesystem label=vote2
Block size=4096 (bits=12)
Cluster size=4096 (bits=12)
Volume size=1073741824 (262144 clusters) (262144 blocks)
9 cluster groups (tail covers 4096 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 0 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful
[root@rac01 software]# mkfs.ocfs2 -L "vote3" /dev/mapper/mpath2
mkfs.ocfs2 1.2.7
Filesystem label=vote3
Block size=4096 (bits=12)
Cluster size=4096 (bits=12)
Volume size=1073741824 (262144 clusters) (262144 blocks)
9 cluster groups (tail covers 4096 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 0 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful
[root@rac01 software]# mkfs.ocfs2 -L "ocr1" /dev/mapper/mpath4
mkfs.ocfs2 1.2.7
Filesystem label=ocr1
Block size=4096 (bits=12)
Cluster size=4096 (bits=12)
Volume size=5368709120 (1310720 clusters) (1310720 blocks)
41 cluster groups (tail covers 20480 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 2 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful

[root@rac01 software]# mkfs.ocfs2 -L "ocr2" /dev/mapper/mpath3
mkfs.ocfs2 1.2.7
Filesystem label=ocr2
Block size=4096 (bits=12)
Cluster size=4096 (bits=12)
Volume size=5368709120 (1310720 clusters) (1310720 blocks)
41 cluster groups (tail covers 20480 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 2 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful
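
After formatting, the labels can be cross-checked with the mounted.ocfs2 utility shipped in ocfs2-tools (a quick verification sketch):

mounted.ocfs2 -d

This lists each OCFS2 device with its UUID and label (vote1-vote3, ocr1 and ocr2 are expected).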

4.29 OCFS2 File system mount

[root@rac01 software]# mkdir /u02
[root@rac01 software]# mkdir -p /u02/vote1
[root@rac01 software]# mkdir -p /u02/vote2
[root@rac01 software]# mkdir -p /u02/vote3
[root@rac01 software]# mkdir -p /u02/ocr1
[root@rac01 software]# mkdir -p /u02/ocr2
[root@rac01 software]# chown -R oracle:oinstall /u02/vote1 /u02/vote2 /u02/vote3 /u02/ocr1 /u02/ocr2

[root@rac01 software]# chmod -R 775 /u02/vote1 /u02/vote2 /u02/vote3 /u02/ocr1 /u02/ocr2
Mpath Size Usage
mpath0 1GB vote1
mpath1 1GB vote2
mpath2 1GB vote3
mpath4 5GB ocr1
mpath3 5GB ocr2

To mount, run the following commands as root on both servers:
mount -t ocfs2 -o datavolume,nointr /dev/mapper/mpath0 /u02/vote1
mount -t ocfs2 -o datavolume,nointr /dev/mapper/mpath1 /u02/vote2
mount -t ocfs2 -o datavolume,nointr /dev/mapper/mpath2 /u02/vote3
mount -t ocfs2 -o datavolume,nointr /dev/mapper/mpath3 /u02/ocr2
mount -t ocfs2 -o datavolume,nointr /dev/mapper/mpath4 /u02/ocr1

To unmount, run umount as root against each mount point, for example:
umount /u02/vote1

[root@rac01 /]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
20642428 2847104 16746748 15% /
/dev/cciss/c0d0p1 101086 21753 74114 23% /boot
none 8197640 0 8197640 0% /dev/shm
/dev/mapper/VolGroup00-LogVol02
2064208 36060 1923292 2% /home
/dev/mapper/VolGroup00-LogVol03
1032088 171424 808236 18% /opt
/dev/mapper/VolGroup00-LogVol05
4128448 2798528 1120208 72% /tmp
/dev/mapper/VolGroup00-LogVol06
23932052 1206840 21509520 6% /u01
/dev/mapper/VolGroup00-LogVol04
2064208 154664 1804688 8% /var
/dev/mapper/mpath0 1048576 268156 780420 26% /u02/vote1
/dev/mapper/mpath1 1048576 268156 780420 26% /u02/vote2
/dev/mapper/mpath2 1048576 268156 780420 26% /u02/vote3
/dev/mapper/mpath3 5242880 268292 4974588 6% /u02/ocr2
/dev/mapper/mpath4 5242880 268292 4974588 6% /u02/ocr1

[root@rac01 /]# umount /u02/vote1
[root@rac01 /]# umount /u02/vote2
[root@rac01 /]# umount /u02/vote3
[root@rac01 /]# umount /u02/ocr1
[root@rac01 /]# umount /u02/ocr2

Edit /etc/fstab on both servers and add:
/dev/mapper/mpath0 /u02/vote1 ocfs2 _netdev,datavolume,nointr 0 0
/dev/mapper/mpath1 /u02/vote2 ocfs2 _netdev,datavolume,nointr 0 0
/dev/mapper/mpath2 /u02/vote3 ocfs2 _netdev,datavolume,nointr 0 0
/dev/mapper/mpath3 /u02/ocr2 ocfs2 _netdev,datavolume,nointr 0 0
/dev/mapper/mpath4 /u02/ocr1 ocfs2 _netdev,datavolume,nointr 0 0
The first field is the device to be mounted (e.g. /dev/mapper/mpath0).
The second field is the mount point (e.g. /u02/vote1).
The third field is the filesystem type (ocfs2).
The fourth field is the list of mount options; _netdev marks the filesystem as requiring networking, so it is mounted only after the network is up.
The fifth field (0) is used by dump to decide whether the filesystem should be backed up; zero means it is skipped.
The sixth field (0) is used by fsck to determine the order in which filesystems are checked at boot; zero disables the check.
Check that the server startup configuration has o2cb and ocfs2 "on" for runlevels 3 and 5:
chkconfig --list o2cb
o2cb 0:off 1:off 2:on 3:on 4:on 5:on 6:off
chkconfig --list ocfs2
ocfs2 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Reboot and confirm the /u02 filesystems have mounted.
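
After the reboot, the OCFS2 mounts can be confirmed in one step:

mount -t ocfs2

All five /u02 volumes should be listed with the datavolume,nointr options from /etc/fstab.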

5 Installation

Installation proceeds in three parts:

  • CRS install, in its own base directory (/u01/crs)
  • ASM install, in a separate $ASM_HOME from the database home ($ORACLE_HOME)
  • DB install, in $ORACLE_HOME

5.1 CRS install

exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
cd /u01/10gr2/10gr2/clusterware
./runInstaller

Has 'rootpre.sh' been run by root? [y/n] (n)

Y


Change the Inventory Directory to /u01/app/oracle/oraInventory and the group to dba, then click Next.
Set the CRS home path to /u01/crs/oracle/product/10/crs, then click Next.


All OK


Edit the rac01 private and virtual node names to orapriv01 and oravip01. The reason for this naming is that there is less chance of selecting an incorrect connection name.

Add rac02, with a private node name of orapriv02 and a virtual host name of oravip02.


Select bond0 as the Public interface and bond2 as the Private interface.


Please run these scripts as “root” and click OK when done.


5.1.1 VIPCA

exec /usr/bin/ssh-agent $SHELL

/usr/bin/ssh-add

Click the IP Alias Name field for rac01 and enter oravip01; the remaining fields are filled in when you tab out of the field.

[VIPCA screenshots]

ifconfig should now show the new VIP address configured:

[root@rac01 ~]# ifconfig
bond0 Link encap:Ethernet HWaddr 00:1B:78:95:0E:3A
inet addr:10.13.100.11 Bcast:10.13.100.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:1999421 errors:0 dropped:0 overruns:0 frame:0
TX packets:3637692 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:140532768 (134.0 MiB) TX bytes:1230836921 (1.1 GiB)

bond0:1 Link encap:Ethernet HWaddr 00:1B:78:95:0E:3A
inet addr:10.13.100.13 Bcast:10.13.100.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1

5.2 ASM Install

With Oracle Database 10g Release 2 (10.2), Automatic Storage Management should be installed in a separate ASM home directory.

[OUI ASM installation screenshots]

During the install, Oracle returned an error: the ASM instance started on node 1 but would not start on node 2 (RAC02).

SR 18563922.6 was raised, and the suggested solution was to add the following ASM disk string to the ASM1 and ASM2 init files:

asm_diskstring = '/dev/oracleasm/disks/*'

LSNRCTL> start LISTENER_RAC02
Starting /u01/app/oracle/product/10.2.0/asm/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 10.2.0.1.0 - Production
System parameter file is /u01/app/oracle/product/10.2.0/asm/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/10.2.0/asm/network/log/listener_rac02.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.13.100.14)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.13.100.12)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_RAC02
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 22-FEB-2008 13:00:11
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP ON
Listener Parameter File /u01/app/oracle/product/10.2.0/asm/network/admin/listener.ora
Listener Log File /u01/app/oracle/product/10.2.0/asm/network/log/listener_rac02.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.13.100.14)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.13.100.12)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

5.3 Install Database Software

Connect as oracle to the rac01 VNC session and start the Oracle Universal Installer for the database software.

[OUI database software installation screenshots]

5.4 Create RAC Database

[screenshots]

Confirm the Cluster Installation with both rac01 and rac02 selected, then click Next.

[screenshots]

Change processes from 150 to 500; keep the default block size of 8192 bytes.


Keep the default Connection Mode (Dedicated Server Mode).


Click OK


Click Exit; the installer takes about 5 minutes to finish and return to the Unix prompt.

6 Scripts and profile files

6.1 .bash_profile rac01

$HOME/.bash_profile at Rac01

export PATH
unset USERNAME
CRS_HOME=/u01/crs/oracle/product/10/crs ; export CRS_HOME
ASM_HOME=/u01/app/oracle/product/10.2.0/asm ; export ASM_HOME
RDBMS_HOME=/u01/app/oracle/product/10.2.0/db_1 ; export RDBMS_HOME
# Now set ORACLE_HOME to RDBMS
ORACLE_HOME=$RDBMS_HOME ; export ORACLE_HOME
ORACLE_SID=prod1 ; export ORACLE_SID
# set LD_LIBRARY_PATH (as oraenv)
case "$LD_LIBRARY_PATH" in
*$OLDHOME/lib*) LD_LIBRARY_PATH=`echo $LD_LIBRARY_PATH | \
sed "s;$OLDHOME/lib;$ORACLE_HOME/lib;g"` ;;
*$ORACLE_HOME/lib*) ;;
"") LD_LIBRARY_PATH=$ORACLE_HOME/lib ;;
*) LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH ;;
esac
PATH=$PATH:/usr/local/bin:$ORACLE_HOME/bin:$CRS_HOME/bin:$ASM_HOME/bin
export PATH
unset USERNAME

6.2 .bash_profile rac02

export PATH
unset USERNAME
CRS_HOME=/u01/crs/oracle/product/10/crs ; export CRS_HOME
ASM_HOME=/u01/app/oracle/product/10.2.0/asm ; export ASM_HOME
RDBMS_HOME=/u01/app/oracle/product/10.2.0/db_1 ; export RDBMS_HOME
# Now set ORACLE_HOME to RDBMS
ORACLE_HOME=$RDBMS_HOME ; export ORACLE_HOME
ORACLE_SID=prod2 ; export ORACLE_SID
# set LD_LIBRARY_PATH (as oraenv)
case "$LD_LIBRARY_PATH" in
*$OLDHOME/lib*) LD_LIBRARY_PATH=`echo $LD_LIBRARY_PATH | \
sed "s;$OLDHOME/lib;$ORACLE_HOME/lib;g"` ;;
*$ORACLE_HOME/lib*) ;;
"") LD_LIBRARY_PATH=$ORACLE_HOME/lib ;;
*) LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH ;;
esac
PATH=$PATH:/usr/local/bin:$ORACLE_HOME/bin:$CRS_HOME/bin:$ASM_HOME/bin
export PATH
unset USERNAME
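
As a quick sanity check of the profiles, run the following as oracle on each node (a sketch; srvctl and sqlplus should both resolve from the PATH set above):

. ~/.bash_profile
echo $ORACLE_HOME $ORACLE_SID
which srvctl sqlplus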

7 RAC Infrastructure Testing

The following tests were carried out on one RAC node at a time.

7.1 RAC Voting Disk Test

Test ID Category Cause of Failure Method of Test Database Crash
V1 Software\Disk Failure of SAN storage system Disconnect SAN HBA Switch to Server RAC01 or RAC02, one at a Time NO

7.2 RAC Cluster Registry Test

Test ID Category Cause of Failure Method of Test Database Crash
C1 Software\Disk Failure of SAN storage system Disconnect SAN HBA Switch to Server RAC01 or RAC02, one at a Time NO

7.3 RAC ASM Tests

Test ID Category Cause of Failure Method of Test Database Crash
A1 Software\Disk Failure of SAN storage system Disconnect SAN HBA Switch to Server RAC01 or RAC02, one at a Time NO

7.4 RAC Interconnect Tests

Test ID Category Cause of Failure Method of Test Server Crash
N1 Private Network Loss of private network Pull PRIVATE port cable from RAC01 NO CRASH; RAC moved database connections to RAC01.
N2 Public Network Loss of public network Pull PUBLIC port cable from RAC01 NO CRASH; RAC moved database connections from node 1 to node 2.
N3 Private Network Loss of private network Pull PRIVATE port cable from RAC02 NO CRASH; RAC moved database connections to RAC02.
N4 Public Network Loss of public network Pull PUBLIC port cable from RAC02 NO CRASH; RAC moved database connections from node 2 to node 1.

Appendix

Errors and warnings encountered, their causes, and the actions required.

1. OCR/Voting disk volumes inaccessible by the rac02 server

After a restart, Oracle lost 2 volumes - /dev/mapper/mpath1 & /dev/mapper/mpath3:
[root@rac01 ~]# mount -t ocfs2 -o datavolume,nointr /dev/mapper/mpath1 /u02/vote2
mount.ocfs2: Device name specified was not found while opening device /dev/mapper/mpath1
[root@rac01 ~]# mount -t ocfs2 -o datavolume,nointr /dev/mapper/mpath2 /u02/vote3
[root@rac01 ~]# mount -t ocfs2 -o datavolume,nointr /dev/mapper/mpath3 /u02/ocr2
mount.ocfs2: Device name specified was not found while opening device /dev/mapper/mpath3
[root@rac01 ~]# mount -t ocfs2 -o datavolume,nointr /dev/mapper/mpath4 /u02/ocr1
[root@rac01 mapper]# ls -lrt | grep mpath
brw-rw---- 1 root disk 253, 13 Feb 13 18:07 mpath6
brw-rw---- 1 root disk 253, 12 Feb 13 18:07 mpath5
brw-rw---- 1 root disk 253, 11 Feb 13 18:07 mpath4
brw-rw---- 1 root disk 253, 9 Feb 13 18:07 mpath2
brw-rw---- 1 root disk 253, 7 Feb 13 18:07 mpath0

This was a device mapper issue at SAN level.

2. Entire RAC cluster went down during the public network test

Public network disabled at rac01:
- Existing connections were migrated from prod1 to prod2.
- RAC02 then also rebooted, and the entire RAC database was lost.
SQL> select instance_name from v$instance;
INSTANCE_NAME
—————-
prod1
SQL> select instance_name from v$instance;
select instance_name from v$instance
ERROR at line 1:
ORA-03135: connection lost contact
SQL> select instance_name from v$instance;
INSTANCE_NAME
—————-
prod2

Investigation: OCFS2 requires the nodes to be alive on the network and sends regular keepalive packets to ensure that they are there. When a node disappears from the network, node self-fencing results. Because the OCFS2 heartbeat was configured on the public IP addresses, disabling the public network on the first node caused OCFS2 to send a shutdown signal to the second node.

Solution:

• Use the private interconnect IP addresses to configure OCFS2 in /etc/ocfs2/cluster.conf instead of the public IP addresses.
• Reboot the nodes and run the test again.

Testing: to change the heartbeat IP for OCFS2, make the changes on both nodes and reboot both servers. Once OCFS2 was on the private interconnect IP addresses, everything worked as expected.

Old /etc/ocfs2/cluster.conf file, with public IP addresses:
node:
        ip_port = 7777
        ip_address = 10.13.100.11
        number = 0
        name = rac01
        cluster = ocfs2
node:
        ip_port = 7777
        ip_address = 10.13.100.12
        number = 1
        name = rac02
        cluster = ocfs2
cluster:
        node_count = 2
        name = ocfs2

New /etc/ocfs2/cluster.conf, with private interconnect IP addresses:
node:
        ip_port = 7777
        ip_address = 172.16.2.1
        number = 0
        name = rac01
        cluster = ocfs2
node:
        ip_port = 7777
        ip_address = 172.16.2.2
        number = 1
        name = rac02
        cluster = ocfs2
cluster:
        node_count = 2
        name = ocfs2
