Tuesday, May 21, 2013

Oracle RAC Physical Standby DataGuard Creation

The following article describes how to create an Oracle RAC 11gR2 physical standby database with Active DataGuard real-time apply enabled on Linux. This procedure can easily be adapted for any other supported platform. Creating the standby database as RAC directly from the primary RAC, rather than as a standalone database, saves time and prevents the issues that can arise when converting from standalone to RAC afterwards. In addition, this approach ensures the proper placement of database files in the standby ASM disk groups.

Both systems use ASM for storage, and the storage layout is identical between the primary and the standby databases. System, data, redo log, and fast recovery area files are located on separate ASM disk groups for performance and recoverability reasons.
The source and target databases are 2-node Oracle RAC 11gR2. Maintaining the standby database as RAC with Active DataGuard and real-time apply allows it to be used for reporting and backups, and for testing by converting it to a snapshot standby database and back when the testing is complete.

The following table lists the names used in this article and their meanings.

Name                            Description
MY_DB_DOMAIN                    Database domain name
MYDB                            Database name
STNDBY                          Unique database name
stndbyhost1, stndbyhost2        Standby hosts
STNDBY1, STNDBY2                Standby instances
PRIMARY1, PRIMARY2              Primary instances
primaryhost1, primaryhost2      Primary hosts
-vip                            Suffix for a host's virtual IP
primary-grid-scan-name          Primary RAC Grid SCAN
standby-grid-scan-name          Standby RAC Grid SCAN
ASM_diskN                       ASM disk name N
Oracle Grid Infrastructure should already be installed and running on all of the standby hosts, and the Oracle database software should also be installed on all of the standby hosts. All prerequisites for an Oracle physical standby DataGuard configuration should have been completed in both environments.
The following are the actual steps needed for this process.
1. Prepare the primary database
a. Ensure that the database is in archivelog mode
SQL > SELECT LOG_MODE FROM V$DATABASE;
  LOG_MODE
  ------------
  ARCHIVELOG
b. Enable force logging, so that all database changes are logged
SQL > ALTER DATABASE FORCE LOGGING;
c. Create standby redo logs
SQL > ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 5 '<log_file_name>' SIZE <size>, GROUP 6 '<log_file_name>' SIZE <size>, GROUP 7 '<log_file_name>' SIZE <size>;
SQL > ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 8 '<log_file_name>' SIZE <size>, GROUP 9 '<log_file_name>' SIZE <size>, GROUP 10 '<log_file_name>' SIZE <size>;

Where <log_file_name> and <size> specify the location and the size of the log files.
Note the following when creating standby redo logs:
  • The quantity is determined by (maximum # of logfile groups per thread + 1) * maximum # of threads. For example, a primary with two threads and two online log groups per thread needs (2 + 1) * 2 = 6 standby log groups, which matches groups 5-10 created above.
  • They must be the same size as the online redo logs.
  • They should not be multiplexed.
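As an optional check, the standby redo logs just created can be listed from the V$STANDBY_LOG view:
SQL > SELECT group#, thread#, bytes/1024/1024 mb FROM v$standby_log ORDER BY group#;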
2. Create $ORACLE_HOME/dbs/initSTNDBY1.ora on stndbyhost1 to start an auxiliary database instance.
*.db_domain='MY_DB_DOMAIN'
*.db_name='MYDB'
*.db_unique_name='STNDBY'
Note: delete spfileSTNDBY1.ora if one exists.
3. Create adump directory on stndbyhost1 and stndbyhost2.
CMD > mkdir -p $ORACLE_BASE/admin/STNDBY/adump
4. Create and start a temporary listener with static service information on stndbyhost1 in database home.
Edit $ORACLE_HOME/network/admin/listener.ora as follows:
SID_LIST_LISTENER_DBCLONE =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = STNDBY.MY_DB_DOMAIN)
      (ORACLE_HOME = $ORACLE_HOME)
      (SID_NAME = STNDBY1)
    )
  )
    
  LISTENER_DBCLONE =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL=TCP)(HOST = stndbyhost1)(PORT = 1521))
      )
    )
  )
Start the listener.
CMD > lsnrctl start LISTENER_DBCLONE
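Optionally, confirm that the static STNDBY service is registered with the temporary listener:
CMD > lsnrctl status LISTENER_DBCLONE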
5. Add standby database to $ORACLE_HOME/network/admin/tnsnames.ora on primaryhost1.
This entry points to the listener in the previous step.
STNDBY1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stndbyhost1-vip)(PORT = 1521))
    (CONNECT_DATA = (UR=A)
      (SERVER = DEDICATED)
      (SERVICE_NAME = STNDBY.MY_DB_DOMAIN)
    )
  )
Note: (UR=A) is needed to connect to a BLOCKED unmounted database instance.
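As a quick sanity check from primaryhost1, tnsping confirms that the entry resolves and that the temporary listener is reachable (it does not test database logins):
CMD > tnsping STNDBY1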
6. Modify and reload the standby grid listeners with static service information for the standby database.
Edit $GRID_HOME/network/admin/listener.ora on both standby hosts and add the following configuration.
stndbyhost1
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL=IPC)(KEY=LISTENER))
      (ADDRESS = (PROTOCOL = TCP)(HOST = stndbyhost1-vip)(PORT = 1521))
    )
  )
 
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = STNDBY.MY_DB_DOMAIN)
      (ORACLE_HOME = $ORACLE_HOME)
      (SID_NAME = STNDBY1)
    )
  )
stndbyhost2
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = stndbyhost2-vip)(PORT = 1521))
    )
  )
 
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = STNDBY.MY_DB_DOMAIN)
      (ORACLE_HOME = $ORACLE_HOME)
      (SID_NAME = STNDBY2)
    )
  )
7. Reload the grid listeners on stndbyhost1 and stndbyhost2.
CMD > lsnrctl reload
8. Copy the password file from primaryhost1 to stndbyhost1 and stndbyhost2.
CMD > scp $ORACLE_HOME/dbs/orapwMYDB1 stndbyhost1:$ORACLE_HOME/dbs/orapwSTNDBY1
CMD > scp $ORACLE_HOME/dbs/orapwMYDB1 stndbyhost2:$ORACLE_HOME/dbs/orapwSTNDBY2
9. Add instance name to /etc/oratab.
Add the STNDBY1 instance name to /etc/oratab on stndbyhost1 and STNDBY2 on stndbyhost2 (see the example below), then set the environment. Note that the command starts with a dot (".") so that oraenv runs in the current shell and the environment settings persist in your session.
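For example, assuming a database home of /u01/app/oracle/product/11.2.0/db_1 (a hypothetical path; use your own), the /etc/oratab entry on stndbyhost1 would be:
STNDBY1:/u01/app/oracle/product/11.2.0/db_1:N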
CMD > . oraenv
10. Start the standby database in nomount state on stndbyhost1.
CMD > sqlplus / as sysdba
SQL > startup nomount
SQL > exit
11. Duplicate the database from primaryhost1.
Note: the standby database name STNDBY1 matches the one used in step 4.
Parameter notes:
  • db_create_file_dest and db_recovery_file_dest must be reset so that all files are placed at the same locations on the destination as on the source.
  • cluster_database must be initially disabled, because duplication works with one node only.
  • remote_listener is required to register the database with SCAN listener.
Allocate multiple primary (target) channels, as most of the work in an active duplication is performed by the target channels.
CMD > rman target=sys@PRIMARY1 auxiliary=sys@STNDBY1
     
run
{
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;
 
DUPLICATE DATABASE FOR STANDBY FROM ACTIVE DATABASE
NOFILENAMECHECK
SPFILE
  PARAMETER_VALUE_CONVERT 'MYDB','STNDBY'
  SET db_unique_name='STNDBY'
  SET db_file_name_convert='/MYDB/','/STNDBY/'
  SET log_file_name_convert='/MYDB/','/STNDBY/'
  SET fal_server='(DESCRIPTION = (LOAD_BALANCE = ON)(ADDRESS = (PROTOCOL = TCP)(HOST = primary-grid-scan-name)(PORT = 1521))(CONNECT_DATA = (SERVICE_NAME = MYDB.MY_DB_DOMAIN)))'
  SET standby_file_management='AUTO'
  SET log_archive_config='dg_config=(STNDBY,MYDB)'
  SET log_archive_dest_1='LOCATION=use_db_recovery_file_dest', 'valid_for=(ALL_ROLES,ALL_LOGFILES)'
  SET log_archive_dest_2='service="(DESCRIPTION = (LOAD_BALANCE = ON)(ADDRESS = (PROTOCOL = TCP)(HOST = primary-grid-scan-name)(PORT = 1521))
(CONNECT_DATA = (SERVICE_NAME = MYDB.MY_DB_DOMAIN)))"',
'LGWR ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="MYDB" net_timeout=30',
'valid_for=(all_logfiles,primary_role)'
  SET log_archive_dest_state_2='enable'
  SET remote_listener='standby-grid-scan-name:1521'
  SET cluster_database='false'
  RESET db_create_file_dest
  RESET db_recovery_file_dest
;
}
exit
12. Set standby database parameters.
The db_recovery_file_dest and db_create_file_dest parameters were reset during the duplication and should be set now; the file name conversion parameters are no longer needed and can be reset.
Run the following on stndbyhost1, replacing "+ARCH" and "+APP" with your own values:
SQL > ALTER SYSTEM SET db_recovery_file_dest='+ARCH';
SQL > ALTER SYSTEM SET db_create_file_dest='+APP';
SQL > ALTER SYSTEM SET cluster_database=true SCOPE=spfile;
SQL > ALTER SYSTEM RESET db_file_name_convert;
SQL > ALTER SYSTEM RESET log_file_name_convert;
13. Create pfile from spfile on stndbyhost1.
CMD > cd $ORACLE_HOME/dbs
SQL > CREATE pfile='p.ora' FROM spfile;
Edit pfile as follows:
  • Remove parameter local_listener, if one exists.
  • Replace all parameter prefixes MYDB[N] with STNDBY[N], while keeping parameter values.
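For example, typical RAC instance-prefixed entries (illustrative; your pfile will contain its own set) such as:
MYDB1.instance_number=1
MYDB2.instance_number=2
MYDB1.thread=1
would become:
STNDBY1.instance_number=1
STNDBY2.instance_number=2
STNDBY1.thread=1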
14. Create spfile on ASM from pfile created in the previous step.
SQL > CREATE spfile='+SYS/STNDBY/spfileSTNDBY.ora' FROM pfile='p.ora';
15. Prepare the standby database on stndbyhost1.
a. Shut down the standby database and the listener LISTENER_DBCLONE on stndbyhost1.
b. Rename or delete $ORACLE_HOME/network/admin/listener.ora, as it was used for duplication only.
16. Create initialization file on the stndbyhost1.
a. Remove spfile from $ORACLE_HOME/dbs.
b. Replace the contents of $ORACLE_HOME/dbs/initSTNDBY1.ora as follows:
SPFILE='+SYS/STNDBY/spfileSTNDBY.ora'
17. Prepare database on primaryhost1 for DataGuard.
SQL > ALTER SYSTEM SET log_archive_config='dg_config=(MYDB,STNDBY)';
SQL > ALTER SYSTEM SET log_archive_dest_1='LOCATION=+ARCH valid_for=(ALL_LOGFILES,ALL_ROLES) db_unique_name=MYDB';
SQL > ALTER SYSTEM SET log_archive_dest_2='service="(DESCRIPTION = (LOAD_BALANCE = ON)
  (ADDRESS = (PROTOCOL = TCP)(HOST = standby-grid-scan-name)(PORT = 1521)) (CONNECT_DATA = (SERVICE_NAME = STNDBY.MY_DB_DOMAIN)))"',
  'LGWR ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="STNDBY" net_timeout=30',
  'valid_for=(all_logfiles,primary_role)';
SQL > ALTER SYSTEM SET log_archive_dest_state_2='enable';
SQL > ALTER SYSTEM SET dg_broker_start=true;
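Once redo shipping is configured (and after the standby is started in step 19), the state of the transport destinations can be verified on the primary; STATUS should become VALID and ERROR should be empty:
SQL > SELECT DEST_ID, STATUS, ERROR FROM V$ARCHIVE_DEST WHERE DEST_ID IN (1,2);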
18. Register RAC on the standby.
Run the following commands on stndbyhost1 or stndbyhost2 to register the standby RAC with Oracle Grid, replacing the ASM disk groups "ASM_diskN" with the actual ones.
CMD > srvctl add database -d STNDBY -o $ORACLE_HOME -c RAC -a "ASM_disk1,ASM_disk2,...,ASM_diskN" -p "+SYS/STNDBY/spfileSTNDBY.ora" -m "MY_DB_DOMAIN" -r physical_standby -n MYDB
CMD > srvctl add instance -d STNDBY -n stndbyhost1 -i STNDBY1
CMD > srvctl add instance -d STNDBY -n stndbyhost2 -i STNDBY2
19. Start the standby RAC from any standby node.
CMD > srvctl start database -d STNDBY
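Check that both instances came up:
CMD > srvctl status database -d STNDBY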
20. Enable flashback and start the Managed Recovery Process (MRP) with real-time apply on stndbyhost1.
SQL > ALTER DATABASE FLASHBACK ON;
SQL > ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
If licensed for Active DataGuard (ADG), open the standby database in READ ONLY mode. Managed recovery has to be cancelled before the open and restarted afterwards:
SQL > ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL > ALTER DATABASE OPEN READ ONLY;
SQL > ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
21. Verify the standby database.
On primaryhost1:
SQL > ALTER SYSTEM ARCHIVE LOG CURRENT;
On both stndbyhost1 and primaryhost1, run the following query; the highest sequence number should be the same on both databases.
SQL > SELECT sequence#, first_time, next_time, applied
    FROM v$archived_log ORDER BY sequence#;
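The apply side can also be checked on stndbyhost1; with real-time apply working, the MRP0 process should show the APPLYING_LOG status:
SQL > SELECT process, status, thread#, sequence# FROM v$managed_standby;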
22. Create DataGuard configuration on the primaryhost1.
CMD > dgmgrl /
 
CREATE CONFIGURATION 'MYDB.MY_DB_DOMAIN' AS PRIMARY DATABASE IS 'MYDB' CONNECT IDENTIFIER IS '(DESCRIPTION = (LOAD_BALANCE = ON)
 (ADDRESS = (PROTOCOL = TCP)(HOST = primary-grid-scan-name)(PORT = 1521))(CONNECT_DATA = (SERVICE_NAME = MYDB.MY_DB_DOMAIN)))';
 
ADD DATABASE 'STNDBY' AS CONNECT IDENTIFIER IS '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST= standby-grid-scan-name) (PORT=1521))
 (CONNECT_DATA=(SERVICE_NAME=STNDBY.MY_DB_DOMAIN)(SERVER=DEDICATED)))';
 
ENABLE CONFIGURATION;
 
EDIT DATABASE 'STNDBY' SET PROPERTY 'Enterprise Manager Name' = 'STNDBY.MY_DB_DOMAIN';
 
EDIT DATABASE 'MYDB' SET PROPERTY 'Enterprise Manager Name' = 'MYDB.MY_DB_DOMAIN';
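Verify the broker configuration from the same DGMGRL session; after a short while the configuration status should report SUCCESS:
SHOW CONFIGURATION;
SHOW DATABASE 'STNDBY';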
23. Add the new standby database to Oracle Enterprise Manager (OEM).

Wednesday, January 30, 2013

Oracle RAC Database on Linux Using VirtualBox


August - December, 2012
This article describes the installation of Oracle Database 11g release 2 (11.2 64-bit) RAC on Linux (Oracle Linux 6.3 64-bit) using VirtualBox (4.1.14+).

Introduction

If you want to get through all steps of the Oracle RAC installation and your laptop or desktop computer has 8 GB or more of RAM, then this is entirely feasible using Oracle VirtualBox as demonstrated in this article. You can get a running RAC system which can host a small test database. The created system is not, and should not be considered, a production-ready system. It's simply to allow you to get used to installing and using RAC and test various administration procedures. The article also explains how to save the images and restore RAC from the images in a matter of minutes. Even if you break your test system, it will be easy to restore.
This article uses the 64-bit versions of Oracle Linux, version 6.3, and Oracle 11g Release 2, version 11.2.0.3. Using VirtualBox you can run multiple Virtual Machines (VMs) on a single server, allowing you to run both RAC nodes on a single machine. In addition, it allows you to set up shared virtual disks. The finished system includes two guest operating systems, two sets of Oracle Grid Infrastructure (Clusterware + ASM) and two database instances, all on a single server. The amount of disk space needed is about 32 GB; if you want to save images of the finished RAC, another 12 GB of disk space will be needed.
This article was originally inspired by the article "Oracle Database 11g Release 2 RAC On Linux Using VirtualBox" written by Tim Hall and published in his blog. It was then almost entirely revised and reworked; by now this article bears very little resemblance to the original work.
Note. When this article was written, Oracle Database 11g Release 2 (11.2.0.3) for Linux 64-bit (both clusterware and database) was available through Oracle support to licensed customers only. As has happened in the past, Oracle usually makes the latest version available to the general public fairly soon, so I thought that using the latest and greatest version of the moment, with many bugs fixed, would be the best way to go. But it is now the end of 2012 and 11.2.0.3 is still unavailable to the general public, even though this version is much better than any older one. I apologize for this inconvenience and suggest finding any possible way to get this version; it doesn't make sense to fight issues and research workarounds for bugs that have already been fixed. Ask friends who have access to Oracle support to help. And, if you can, urge Oracle to make the latest version available for download.
As of now, 11.2.0.3 can be downloaded from the Oracle support site: go to "Patches & Updates", then select "Latest Patchsets", "Oracle Database", "Linux x86-64", and "11.2.0.3.0". The number of this patch set is 10404530; it is possible to jump to the download page using this number. This patch set is a full installation of the Oracle Database software, which means that you do not need to install Oracle Database 11g Release 2 (11.2.0.1) before installing Oracle Database 11g Release 2 (11.2.0.3). For installing a RAC database you will need only 3 files:
Oracle Database (includes Oracle Database and Oracle RAC), part 1: 
  p10404530_112030_Linux-x86-64_1of7.zip  1.3G
 
Oracle Database (includes Oracle Database and Oracle RAC), part 2:
  p10404530_112030_Linux-x86-64_2of7.zip  1.1G
 
Oracle Grid Infrastructure (includes Oracle ASM, Oracle Clusterware):
  p10404530_112030_Linux-x86-64_3of7.zip  933M 

System Requirements

  • 8 GB of RAM;
  • 32 GB of free space on the hard disk;
  • This procedure was tested on 64-bit Windows 7, although there should be no problems using VirtualBox on other host OSes. Please let me know if you have success or problems on other OSes.

Download Software

Download the following software: the Oracle Linux 6.3 (64-bit) installation DVD .iso, Oracle VirtualBox (4.1.14 or newer), and the three 11.2.0.3 patch set files listed above.

Virtual Machine Setup

In this exercise, we are using VirtualBox installed on 64-bit Windows 7.
Now we must define the two virtual RAC nodes. We can save time by defining one VM, then cloning it when it is installed.
Start VirtualBox and click the "New" button on the toolbar. Click the "Next" button on the first page of the Virtual Machine Wizard.
Enter the name "rac1", OS "Linux" and Version "Oracle (64 bit)", and then click the "Next" button:
Create New VM - VM Name and OS Type
If you have 16 GB of RAM in your host system, then set Base Memory to 3072 MB, otherwise use 2048 MB, as in the screenshot below, then click the "Next" button:
Create New VM - Memory
Accept the default option to create a new virtual hard disk by clicking the "Next" button:
Create New VM - Virtual Hard Disk
Accept the default VDI type and click the "Next" button on the Virtual Disk Creation Wizard welcome screen:
New Virtual Hard Disk - VDI type
Accept the default "Dynamically allocated" option by clicking the "Next" button:
New Virtual Hard Disk - Dynamically allocated
Accept the default location and set the size to "16G" and click the "Next" button:
New Virtual Hard Disk - Virtual Disk Location And Size
Press the "Create" button on the Create New Virtual Disk Summary screen:
New Virtual Hard Disk - Summary
Press the "Create" button on the Create New Virtual Machine Summary screen:
Create New VM - Summary
The "rac1" VM will appear on the left hand pane. Click on the "Network" link on the right side:
VM Manager - rac 1
Make sure "Adapter 1" is enabled, attached to "Bridged Adapter"
Network - Adapter 1
Then click on the "Adapter 2" tab. Make sure "Adapter 2" is enabled and attach to "Internal Network". Then press "OK" button:
Network Adapter 2
Optionally, you can disable the audio card using the "Audio" link. This will save some space and avoid potential problems related to audio settings. Also, if your system has 4 CPU cores or more, it makes sense to allocate 2 CPUs to the virtual machine; you can do that in the "System" settings (a command-line alternative is shown below).
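If you prefer the command line, the same settings can be applied with VBoxManage while the VM is powered off (a sketch; adjust the VM name and CPU count to your setup):
VBoxManage modifyvm "rac1" --cpus 2 --audio none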
The virtual machine is now configured so we can start the guest operating system installation.

Guest Operating System Installation

Please note that during installation VirtualBox will capture the mouse pointer inside the VM area. To release it, press the Right Control key on the keyboard.
Place the Oracle Linux 6.3 (or newer) DVD in the DVD drive and skip the next two screenshots. If you don't have a DVD, download the .iso image and attach it to the virtual DVD drive: select the "Storage" link on the right-hand pane of the VirtualBox Manager screen to open the "Storage" screen, then select the DVD drive in the "Storage Tree" section:
Select DVD Drive
In "Attributes" section click on the DVD disk icon and choose DVD .iso file. Note that name of the file shows in the Storage Tree. Then press 'OK":
Select DVD Drive file
Start the virtual machine by clicking the "Start" button on the toolbar. The resulting console window will contain the Oracle Linux boot screen. Proceed with the "Install or upgrade an existing system":
Oracle Linux Boot
Do not perform the media test. Choose the "Skip" button.
Continue through the Oracle Linux installation as you would for a normal server. On the next three screens select the Language, Keyboard, and Basic Storage Devices type. Confirm discarding any data.
Set "Hostname" to rac1.localdomain and press "Configure Network":
Linux set hostname
In the Network Connections screen select "System eth0" interface and press "Edit":
Linux network setup
Check the box "Connect automatically". Select "IPv4 Settings" tab make sure the Method is set to "Automatic (DHCP)". Select "IPv6 Settings" tab make sure the Method is set to "Ignore". Press "Apply" button:
Linux network setup
Close the Network Connections screen and proceed to the next setup screens. Select the time zone and type in the root password: oracle.
Select "Use All Space" type of installation and check "Review and modify partitioning layout":
Linux network eth1 setup
Edit size of lv_swap device to 1500 MB; then edit size of lv_root to 14380 MB. Press "Next":
Linux partitions setup
Confirm through warnings and create partitions. Keep defaults in Boot loader screen.
In the software type installation screen select "Database Server" and check "Customize now" button. Press Next:
Linux software type
In the Customization screen select Database and uncheck all items; select Desktops and check "Desktop" and "Graphical Administration Tools"; then press Next and finish installation. Reboot.
When the system comes back, there will be a few more self-explanatory setup screens. Don't create the 'oracle' account; this will be done later. Congratulations! Linux has been installed.

Check Internet Access

We will need Internet access because additional packages will be installed online. Open terminal and try to ping any Internet site, for example:
ping yahoo.com
If ping doesn't work, troubleshoot the problem using the 'ifconfig' command and making changes in Network Connections (Linux desktop Main menu | System | Preferences | Network Connections). If you made changes in Network Connections, restart the interface by rebooting the VM or by running these two commands:
# ifdown eth0
# ifup eth0
Then check the ping again.

Oracle Clusterware Installation Prerequisites. Part 1

All actions in this section must be performed by the root user.
Run Automatic Setup by installing the 'oracle-rdbms-server-11gR2-preinstall' package. This package performs the prerequisite setup, including kernel parameter changes and creation of the Linux oracle account:
# yum install oracle-rdbms-server-11gR2-preinstall
Note. You will probably not be able to copy and paste this command yet, so type it manually. We are going to fix that shortly by installing the Guest Additions.
Install ASMLib:
# yum install oracleasm
# yum install oracleasm-support
Configure ASMLib by running this command and answering the questions:
# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: 
Writing Oracle ASM library driver configuration: done
#

Install Guest Additions

Guest Additions are optional, but highly recommended. They allow better mouse integration and bidirectional clipboard copying. Another important feature is support for shared folders, making files in the Host OS visible to the guest. The remainder of this document assumes that Guest Additions are installed.
In order to install Guest Additions, reboot the just-created VM and login as root. Then in the window menu select Devices | Install Guest Additions. Go through the download until you see the DVD Autorun screen:
VirtualBox guest additions
Press "OK", then "Run" to start installation.
Note. The installation can fail, complaining about the missing kernel-uek-devel package and providing a 'yum' command to install it. Run this command - that's why we need Internet access. Also install another package: 'yum install gcc'. Then reinstall the Guest Additions by double-clicking on the VBOXADDITIONS DVD icon on the desktop and clicking the "Open Autorun Prompt" button.
Reboot the machine. Now you should be much happier about the VirtualBox!

Oracle Clusterware Installation Prerequisites. Part 2

Create the directory in which the Oracle software will be installed.
# mkdir -p /u01
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01
Add the oracle account to the dba and vboxsf groups. The vboxsf group was created by the VirtualBox Guest Additions and allows the oracle user to access folders in the Host OS:
# usermod -G dba,vboxsf oracle
Reset oracle user password to oracle:
# passwd oracle
Changing password for user oracle.
New password: 
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password: 
passwd: all authentication tokens updated successfully.
# 
Disable secure linux by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.
SELINUX=disabled
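The change takes effect after a reboot; the current SELinux mode can be checked with:
# getenforce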
Either configure NTP properly, or make sure it is not configured, so that the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes. In this case we will deconfigure NTP.
# service ntpd stop
Shutting down ntpd:                                        [FAILED]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid
Cleanup YUM repositories:
# yum clean all
Check file system usage, about 2.8 GB is used:
# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_rac1-lv_root
                      14493616   2865472  10891888  21% /
tmpfs                  1027556       272   1027284   1% /dev/shm
/dev/sda1               495844     77056    393188  17% /boot
# 

Network Setup

All actions in this section must be performed by the root user.
Below is the TCP/IP layout of the addresses used in the public and private networks. If you need to use other addresses, make the corresponding adjustments and remember to stay consistent with them throughout the rest of the article. Please note that the subnet 192.168.56.0 is the default configuration used by VirtualBox as the Host-only network connecting the host OS and the virtual machines. VirtualBox also runs a DHCP server on this subnet, reserving the address range 100-254, so it is safe to use addresses below 100 as static addresses. You can verify these settings in: Main menu | File | Preferences | Network, then check the properties of the Host-only network. We are using this subnet for the RAC public network.
Edit "/etc/hosts" file by appending the following information:
# Private
192.168.10.1    rac1-priv.localdomain   rac1-priv
192.168.10.2    rac2-priv.localdomain   rac2-priv

# Public
192.168.56.71    rac1.localdomain        rac1
192.168.56.72    rac2.localdomain        rac2

# Virtual
192.168.56.81    rac1-vip.localdomain    rac1-vip
192.168.56.82    rac2-vip.localdomain    rac2-vip

# SCAN
192.168.56.91    rac-scan.localdomain    rac-scan
192.168.56.92    rac-scan.localdomain    rac-scan
192.168.56.93    rac-scan.localdomain    rac-scan
Note. The SCAN addresses should not really be defined in the hosts file. Instead they should be defined in DNS to round-robin between 3 addresses on the same subnet as the public IPs. For this installation, we will compromise and use the hosts file. If you are using DNS, then comment out the lines with the SCAN addresses.
Open the Network Connections tool: Linux desktop Main menu | System | Preferences | Network Connections. Select "System eth0" interface, which will be used for public network, and press "Edit":
Linux network setup
Make sure that "Connect automatically" is checked. In "IPv6 Settings" tab make sure the Method is set to "Ignore". Select "IPv4 Settings" tab; change Method to "Manual"; Press "Add" and fill Address: 192.168.56.71; Netmask: 255.255.255.0; Gateway: 0.0.0.0. Press "Apply" then done:
Linux network eth0 setup
In the Network Connections screen select "System eth1" interface, this will be used for private network, press "Edit". Then check the box "Connect automatically". In "IPv6 Settings" tab make sure the Method is set to "Ignore". Select "IPv4 Settings" tab; change Method to "Manual"; Press "Add" and fill Address: 192.168.10.1; Netmask: 255.255.255.0; Gateway: 0.0.0.0. Press "Apply" then done:
Linux network eth1 setup
Disable the firewall: Linux Main menu | System | Administration | Firewall. Click on "Disable" icon, then on "Apply".
Linux disable firewall

Downloaded Oracle Installation Files

There are two options to handle Oracle downloads:
  • Downloading or transferring files into VM and uncompressing them in VM;
  • Downloading and uncompressing in the Host OS, then making folders accessible to VM filesystem;
Obviously, the second option is much better because it doesn't use the virtual disk of the guest VM and will result in a smaller final image. The installation files can also be easily reused in another installation exercise. In this section we are going to set up VirtualBox Shared Folders.
It is assumed that you already downloaded oracle installation files and uncompressed them into the "grid" and "database" folders. In our example these folders are in "C:\TEMP\oracle_sw" folder.
C:\TEMP\oracle_sw>dir -l
total 0
drwx------+ 1 sromanenko Domain Users 0 Aug  5 18:10 database
drwx------+ 1 sromanenko Domain Users 0 Aug  5 03:08 grid
Shut down the VM. In the VirtualBox Manager click on the "Shared Folders" link in the right-hand pane. Add a shared folder by pressing the "plus" icon. Then select the path to the location of the oracle software, and check both boxes "Read-only" and "Auto-mount":
VirtualBox Shared Folders
Press "OK" to save this setting. Now Shared Folders should look like this:
VirtualBox Shared Folders
Restart the VM and login as the oracle user. Change directory to "/media/sf_oracle_sw" - this is where VirtualBox maps the Host OS shared folder. Note that VirtualBox added the prefix "sf_" to the name of the folder. List ('ls') the content of the folder:
$ cd /media/sf_oracle_sw
$ ls
database  grid
$
There is one package, 'cvuqdisk', that should be installed before the Grid installation. Install it from the Oracle grid/rpm directory as the root user:
$ su root
Password: 
# cd /media/sf_oracle_sw/grid/rpm
# rpm -Uvh cvuqdisk*

Clone the Virtual Machine

Shut down the VM.
In the VirtualBox Manager, in Network settings for the Adapter 1, change the network type it is attached to from the "Bridged Adapter" to the "Host-only Adapter".
Note. If you don't need access to the RAC database from the Host OS, then you can use "Internal Network" type of adapter. The RAC will be accessible from all other Virtual Machines in both cases. Optionally, if you need Internet access in future, this can be added after RAC is installed, see Adding Internet Access. For more details about type of Network adapters, see "Virtual Networking" chapter in the VirtualBox documentation.
In the VirtualBox Manager window start clone wizard: Main menu | Machine | Clone. Type "rac2" for the name of new machine. Make sure that "Reinitialize the MAC address of all network cards" is not checked. Then press "Next":
VirtualBox clone name
Keep default "Full Clone" option selected and press "Clone":
VirtualBox clone type
Start the cloned VM rac2 and login as the root user. Then change the hostname by editing the file "/etc/sysconfig/network", HOSTNAME parameter:
HOSTNAME=rac2.localdomain
Start "Network Connections" tool (Main menu | System | Preferences | Network Connections). Edit eth0 and eth1 interfaces and set in IPv4 addresses 192.168.56.72 and 192.168.10.2 correspondingly.
Reboot system.
Now we need to change the MAC address for both interfaces. MAC addresses must be unique, so we need to pick two new unused addresses and set them for eth0 and eth1. The easiest way is to change just the last two characters of the address. We are going to use '00' for eth0 and '01' for eth1. Make sure that these addresses don't collide with the MAC addresses of rac1. Edit the MAC address in the "Wired" tab. The screenshot below shows where to set the MAC address. Don't forget to change the MAC addresses for both interfaces. Please note that your setup will have a different set of MAC addresses because they are randomly generated by VirtualBox.
VirtualBox network MAC address change
Write down the new MAC addresses for both interfaces. Save the new settings by pressing the "Apply" button, then shut down the machine. After shutdown, return to the VirtualBox Manager, select the rac2 VM and edit the "Network" settings. Make the same changes to the MAC addresses (a command-line alternative is shown below). Don't forget to change the MAC addresses for both adapters.
VirtualBox adapter MAC address change
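Alternatively, the adapter MAC addresses can be set from the host command line with VBoxManage while the VM is powered off (illustrative addresses; use your own unique values matching the ones set inside the guest):
VBoxManage modifyvm "rac2" --macaddress1 08002712AB00 --macaddress2 08002712AB01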
Start both machines and check that they can ping each other using both public and private network. For example, on rac1:
$ ping rac2
$ ping rac2-priv
If you have problems, use 'ifconfig' command to check the configuration, then correct the problem using "Network Connections" tool.

Create Shared Disks

Shut down both virtual machines. We need to create a new virtual disk, change its attribute to Shareable, and add it to both VMs. In the current version of VirtualBox, the only way to create a new disk in the GUI is through the "Storage" page in the virtual machine's settings. Select either the rac1 or rac2 VM, then click on the "Storage" link. Select "SATA Controller" and click on the "Add Hard Disk" icon. If not sure which icon to use, the same action is available through the popup menu: right-click on the "SATA Controller" and select "Add Hard Disk".
VirtualBox Storage - SATA Controller
Press "Create new disk":
VirtualBox create Hard Disk question
Accept the default VDI type and click the "Next" button on the Virtual Disk Creation Wizard welcome screen:
New Virtual Hard Disk - VDI type
Select "Fixed size" option and press the "Next" button:
New Virtual Hard Disk - Fixed size
Change the name and location of this disk. You can keep this file in the default location - the folder of the selected VM. However, because this disk is shared, it is better to put it in the parent directory: instead of the "...\VirtualBox VMs\rac1" directory, place it in "...\VirtualBox VMs". Set the size to "2400 MB" - this will result in about 400 MB of free space in the ASM group when everything is installed. If you need more space, choose a bigger size. And, regardless of what you decide now, it will be possible to add more shared disks to the ASM group after everything is installed.
New Virtual Hard Disk - Location And Size
Create the new disk; it will already be attached to the VM.
Select this new disk. You will see in the disk Information that the type of this disk is "Normal". There was no option in the previous dialog windows to create the new disk as "Shareable", and once it is attached, this attribute cannot be changed. This is a limitation of the GUI, so we have to work around it: click on the "Remove Attachment" icon so that this VM returns to its previous storage configuration. Close the "Storage" page.
What is different now is that the new disk remains registered with VirtualBox. We will use the Virtual Media Manager (Main menu | File | Virtual Media Manager) to change its attributes. Select this new disk in the Virtual Media Manager:
VB - Shared Disk is Detached in Media Manager
Click on "Modify" icon and select "Shareable":
New Virtual Hard Disk - Location And Size
Attach this existing disk to each VM using the "Storage" page. Don't forget to select the correct controller before attaching the disk, and use the "Choose existing disk" option. (A command-line alternative is shown after the screenshot below.)
In the end, the "Storage" section of both VMs should look like this:
Attached Hard Disks
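As an alternative to the GUI steps above, the shared disk can be created, marked shareable, and attached to both VMs from the host command line (a sketch; adjust the file path, VM names, port number, and controller name to your setup; the size is in MB):
VBoxManage createhd --filename "rac_shared_disk1.vdi" --size 2400 --variant Fixed
VBoxManage modifyhd "rac_shared_disk1.vdi" --type shareable
VBoxManage storageattach "rac1" --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium "rac_shared_disk1.vdi"
VBoxManage storageattach "rac2" --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium "rac_shared_disk1.vdi"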
Start either of the machines and log in as root. The current disks can be seen by issuing the following commands.
# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb
#
Use the "fdisk" command to partition the new disk "sdb".
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xd724aa83.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-305, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-305, default 305): 
Using default value 305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
#
The sequence of answers is "n", "p", "1", "Return", "Return" and "w".
Once the new disk is partitioned, the result can be seen by repeating the previous "ls" command.
# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdb1
#
Mark the new shared disk in the ASMLib as follows.
# oracleasm createdisk DISK1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
# 
Run the "scandisks" command to refresh the ASMLib disk configuration.
# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
#
We can see the disk is now visible to ASM using the "listdisks" command.
# oracleasm listdisks
DISK1
#
Start another VM and log in as root. Check that the shared disk is visible to ASM using the "listdisks" command.
# oracleasm listdisks
DISK1
#
The virtual machines and shared disks are now configured for the grid infrastructure!

Install the Grid Infrastructure

Make sure the "rac1" and "rac2" virtual machines are started, then login to "rac1" or switch the user to oracle and start the Oracle installer.
$ cd /media/sf_oracle_sw/grid
$ ./runInstaller
Select "Skip software updates" option, press "Next":
Grid - software updates
Select the "Install and Configure Grid Infrastructure for a Cluster" option, then press the "Next" button.
Grid - Select Installation Option
Select the "Advanced Installation" option, then click the "Next" button.
Grid - Select Installation Type
On the "Grid Plug and Play information" screen, change Cluster Name to "rac-cluster" and SCAN Name to "rac-scan.localdomain", uncheck "Configure GNS" box, then press the "Next" button.
Grid - names
On the "Cluster Node Configuration" screen, click the "Add" button.
Grid - Cluster Node Configuration
Enter the details of the second node in the cluster, then click the "OK" button.
Grid - Add Cluster Node Information
Click the "SSH Connectivity..." button and enter the password for the "oracle" user. Click the "Setup" button to configure SSH connectivity, and the "Test" button to test it once it is complete. Then press "Next".
Grid - SSH Connectivity
On the "Specify Network Interface Usage" screen check the public and private networks are specified correctly. Press the "Next" button.
Grid - Network Interfaces
On the "Storage Option Information" screen keep Oracle ASM option selected and press "Next".
Grid - Storage Options
On the "Create ASM Disk Group" screen click on "Change Discovery Path" button:
Grid - Create ASM Group
Then enter "/dev/oracleasm/disks" and press "OK":
Grid - ASM Discovery Path
Keep "Disk Group Name" unchanged. Select "External" redundancy option. Check "/dev/oracleasm/disks/DISK1" in the "Add Disks" section. When done, press "Next".
Grid - Create ASM Group
On the "Specify ASM Password" screen select "Use same passwords for these accounts" option and type in "oracle" password, then press "Next". Ignore warnings about password weakness.
Grid - ASM Passwords
Keep defaults on the "Failure Isolation Support" and press "Next".
Grid - Failure Isolation Support
Keep defaults on the "Privileged Operating System Groups" and press "Next".
Grid - Privileged Operating System Groups
Keep suggested paths unchanged on the "Specify Installation Location" and press "Next".
Grid - Specify Installation Location
Keep suggested path unchanged on the "Create Inventory" and press "Next".
Grid - Create Inventory
The results of the prerequisite checks are shown on the next screen. You should see two warnings and one failure. The failure is caused by the inability to look up the SCAN in DNS, which is expected. Check the "Ignore All" box and press "Next".
Grid - Prerequisite Check Results
Press "Install" on the Summary screen.
Grid - Create Inventory
Wait while the setup takes place...
Grid - Setup
When prompted, run the configuration scripts on each node.
Grid - Execute Configuration Scripts
Execute scripts as root user, first in rac1, then in rac2.
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/11.2.0/grid/root.sh
#
When running root.sh you will be asked about the location of the local bin directory; press Enter in response. The output of root.sh should finish with "Configure Oracle Grid Infrastructure for a Cluster ... succeeded". If the script fails, correct the problem and rerun it.
Once the scripts have completed, return to the "Execute Configuration Scripts" screen on "rac1", click the "OK" button and wait for the configuration assistants to complete.
We expect the verification phase to fail with an error relating to the SCAN:
Grid - Configuration Assistants
Here are the offending lines from the log file:
INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking TCP connectivity to SCAN Listeners...
INFO: TCP connectivity to SCAN Listeners exists on all cluster nodes
INFO: Checking name resolution setup for "rac-scan.localdomain"...
INFO: ERROR: 
INFO: PRVG-1101 : SCAN name "rac-scan.localdomain" failed to resolve
INFO: ERROR: 
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.56.71) failed
INFO: ERROR: 
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.56.72) failed
INFO: ERROR: 
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.56.73) failed
INFO: ERROR: 
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: Verification of SCAN VIP and Listener setup failed
Provided this is the only error, it is safe to ignore this and continue by clicking the "Next" button. Close the Configuration Assistant on the next screen.
Check the status of running clusterware. On rac1 as root user:
# . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle

# crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                                         
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.cvu
      1        ONLINE  ONLINE       rac1                                         
ora.oc4j
      1        ONLINE  ONLINE       rac1                                         
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan2.vip
      1        ONLINE  ONLINE       rac1                                         
ora.scan3.vip
      1        ONLINE  ONLINE       rac1                                         
# 
You should see various clusterware components running on both nodes. The grid infrastructure installation is now complete!
Check filesystem usage, about 6.5 GB are used:
$ df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_rac1-lv_root
                      14493616   6462704   7294656  47% /
tmpfs                  1027552    204708    822844  20% /dev/shm
/dev/sda1               495844     77056    393188  17% /boot
$ 

Install the Database

Make sure the "rac1" and "rac2" virtual machines are started, then login to "rac1" or switch the user to oracle and start the Oracle installer.
$ cd /media/sf_oracle_sw/database
$ ./runInstaller
Uncheck the "I wish to receive security updates..." checkbox and press the "Next" button:
DB - Configure Security Updates
Check "Skip software updates" checkbox and press the "Next" button:
DB - Skip Software Updates
Accept the "Create and configure a database" option and press the "Next" button:
DB - Select Installation Option
Accept the "Server Class" option and press the "Next" button:
DB - System Class
Make sure "Oracle Real Application Cluster database installation" is chosen and both nodes are selected, and then press the "Next" button.
DB - Node Selection
Select the "Advanced install" option and press the "Next" button:
DB - Select Istall Type
Select Language on next screen and press the "Next" button.
Accept "Enterprise Edition" option and press the "Next" button:
DB - Select Database Edition
Accept default installation locations and press the "Next" button:
DB - Specify Installation Location
Accept "General Purpose / Transaction Processing" option and press the "Next" button:
DB - Specify Installation Location
You can keep default "orcl" database name or define your own. We used "ractp":
DB - Specify Database Identifier
On the "Configuration Options" screen reduce the amount of allocated memory to 750 MB - this will avoid excessive swapping and will run smoother. You are free to explore other tabs and set whatever suits your needs.
DB - Specify Configuration Options
Accept default Management option and press the "Next" button:
DB - Specify Management Options
Accept "Oracle Automatic Storage Management" option and type in "oracle" password, then press the "Next" button:
DB - Specify Storage Options
Accept default "Do not enable automated backups" option and press the "Next" button:
DB - Specify Recovery Options
Review ASM Disk Group Name which will be used by the database and press the "Next" button:
DB - Select ASM Disk Group
Select "Use the same password for all accounts" option, type in "oracle" password, then press the "Next" button:
DB - Specify Schema Passwords
Select "oinstall" group for both Database Administrator and Database Operator groups, then press the "Next" button:
DB - Privileged OS Groups
Wait for the prerequisite check to complete. If there are any problems, either fix them, or check the "Ignore All" checkbox and click the "Next" button.
DB - Perform Prerequisite Checks
If you are happy with the summary information, click the "Install" button.
DB - Summary
Wait while the installation takes place.
DB - Install Product
Once the software installation is complete the Database Configuration Assistant (DBCA) will start automatically.
DB - DBCA
Once the Database Configuration Assistant (DBCA) has finished, click the "OK" button.
DB - DBCA Complete
When prompted, run the configuration scripts on each node. When the scripts have been run on each node, click the "OK" button.
DB - Execute Configuration Scripts
Execute scripts as root user in both nodes:
# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
#
Click the "Close" button to exit the installer.
DB - Finish
The RAC database creation is now complete!

Check the Status of the RAC

There are several ways to check the status of the RAC. The srvctl utility shows the current configuration and status of the RAC database.
$ . oraenv
ORACLE_SID = [oracle] ? ractp
The Oracle base has been set to /u01/app/oracle

$ srvctl config database -d ractp
Database unique name: ractp
Database name: ractp
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ractp/spfileractp.ora
Domain: localdomain
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ractp
Database instances: ractp1,ractp2
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Database is administrator managed

$ srvctl status database -d ractp
Instance ractp1 is running on node rac1
Instance ractp2 is running on node rac2
$
The V$ACTIVE_INSTANCES view can also display the current status of the instances.
$ export ORACLE_SID=ractp1
[oracle@rac1 Desktop]$ sqlplus / as sysdba
SELECT inst_name FROM v$active_instances;

INST_NAME
--------------------------------------------------------------------------------
rac1.localdomain:ractp1
rac2.localdomain:ractp2

exit
$

Making Images of the RAC Database

At any point earlier we could have saved the image of the created virtual machine and then restored it at will. Here we are going to save images of the newly created Oracle RAC system, which we can restore on the same system or even hand over to another location and restore in a matter of minutes!
The export of a VM is a straightforward process, and saving RAC images would be an easy task if not for the shared disk. In my view, the simplest way to handle that is by detaching the shared disk from both nodes and taking care of the three parts (two self-contained VMs and one shared disk) separately. In the end there will be three files: two files for the VMs and a file representing the shared disk. These three files can be further zipped by your favorite archiver into one file which can be used for storage or transfer. After the export is done, the shared disk can easily be attached back to the nodes. The same is true for the import of the VMs back into VirtualBox along with the copy of the shared disk - the shared disk is attached to the imported VMs as an extra step. Let's perform all these actions.

Clean Shutdown of RAC

But first, we need to shut down the servers in a nice and clean manner because we want to save them in a consistent state. Shut down the database. As the oracle user execute on any node:
$ . oraenv
ORACLE_SID = [oracle] ? ractp
The Oracle base has been set to /u01/app/oracle

$ srvctl stop database -d ractp
$
Shutdown the clusterware on the first node. As root user execute:
# . oraenv
ORACLE_SID = [ractp1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle

# crsctl stop crs
...
CRS-4133: Oracle High Availability Services has been stopped.
#
Shutdown the clusterware on the second node. As root user execute:
# . oraenv
ORACLE_SID = [ractp1] ? +ASM2
The Oracle base remains unchanged with value /u01/app/oracle

# crsctl stop crs
...
CRS-4133: Oracle High Availability Services has been stopped.
#
Shutdown both virtual machines. Wait until all VM windows are closed.

Detach Shared Disk and Make a Copy Of It

In the VirtualBox Manager open Virtual Media Manager: Main menu | File | Virtual Media Manager. Then select the disk used by the RAC (rac_shared_disk1.vdi). Note that this disk shows as attached to rac1 and rac2 VMs:
VB - Detach Shared Disk in Media Manager
Click on "Release" icon and then confirm in the pop-up window. Note that this disk now shows as "Not attached". Click on "Copy" to start Disk Copying Wizard.
VB - Shared Disk is Detached in Media Manager
Accept Virtual disk to copy and press "Next".
VB - Detach Shared Disk in Media Manager
Accept Virtual disk file type as VDI and press "Next".
VB - Copying, Target File Type
Select "Fixed size" and press "Next".
VB - Copying, Target File Storage Details
On the next screen you can set location and name of the new file. When done, press "Next".
VB - Copying, Target File Storage Details
On the Summary screen review details and press "Copy" to complete copying. Close the Media Manager when copying is done.
Note. Do not simply copy the .vdi file, because the copy will retain the same disk UUID and VirtualBox will refuse to use it since such a disk is already registered. When copying through the Virtual Media Manager, a new UUID is assigned automatically.

Export VMs

In the VirtualBox Manager select a VM, then call the Appliance Export Wizard: Main menu | File | Export Appliance. Exporting is generally as simple as saving a file. Export both VMs.
Now you should have 3 files that can be further zipped into a single file with a total size of about 12 GB.

Re-attach Shared Disk to the Original RAC Setup

Fix our current working RAC setup by re-attaching the shared disk to the rac1 and rac2 VMs using the "Storage" page. Don't forget to select the correct controller before attaching the disk:
VirtualBox Storage - SATA Controller
Press "Add Hard Disk" icon and use "Choose Existing Disk" to attach rac_shared_disk1.vdi. Once Shared disk is attached to both VMs, the RAC is ready to run.

Restoring RAC from Saved Files

In this section we will import the RAC from the saved files, creating a second RAC system. Don't run both RACs at the same time because they have the same network attributes.
Open Appliance Import Wizard: Main menu | File | Import Appliance. Choose the file and press "Next":
VB - Appliance Import Wizard
On the Appliance Import Settings screen, different attributes of the new VM can be changed. We are going to accept the settings unchanged. It is interesting to note that the disks are going to be imported in the VMDK format, which is different from the original VDI format.
VB - Appliance Import Settings
Wait until the VM is imported:
VB - Appliance Import Settings
Import both VMs and copy the shared disk rac_shared_disk1_copy.vdi file into the parent directory (VirtualBox VMs). This disk could be attached to both machines, but unfortunately the current version (4.1.18) of VirtualBox doesn't preserve the type of the disk when making a copy. Attach this disk to either of the imported VMs, then select it and review the disk information:
VB - attached disk type
In VirtualBox 4.1.18, the copied disk has the "Normal" type. If you have a newer version and the type is "Shareable", then this bug has been fixed and you can proceed to the other VM. If not, detach the disk, go to the Virtual Media Manager and change the disk type to "Shareable" as described above, then return to the virtual machines and attach the shared disk.
Start the new VMs. The clusterware should start automatically, but you will need to bring up the database. Login as the oracle user and execute:
$ . oraenv
ORACLE_SID = [oracle] ? ractp
The Oracle base has been set to /u01/app/oracle

$ srvctl start database -d ractp
$
The RAC should be up and running!

Post Installation Optimization (Optional)

It has been noticed that after a while the ologgerd process can consume excessive CPU resources. You can check that by starting top, then pressing the 'c' key (cmd name/line):
ologgerd process consumes CPU
The ologgerd process is part of the Oracle Cluster Health Monitor and is used by Oracle Support to troubleshoot RAC problems. If the ologgerd process is consuming a lot of CPU, you can stop it by executing on both nodes:
# crsctl stop resource ora.crf -init
If you want to disable ologgerd permanently, then execute:
# crsctl delete resource ora.crf -init

Adding Internet Access (Optional)

Shutdown virtual machine. In the Network section of settings, select Adapter 3, make sure it is enabled and attached to "Bridged Adapter". Restart the VM and check that you can ping outside world in the terminal window:
ping yahoo.com
Repeat all these actions with another node.
Please note that using the Bridged Adapter makes your VMs accessible on the same local area network to which your workstation / laptop is connected, so the same rules apply to the network configuration of the VMs. By default, the VMs will use your local network's DHCP to obtain IP addresses; optionally you can assign static addresses. Even though your VMs will be accessible to any computers on the LAN, the Oracle listener is not listening on these IP addresses. If needed, these IPs can be added to the listener, but this is beyond the scope of this article.

Clusterware and Database Monitoring

You can check the status of clusterware by issuing this command:
# crsctl status resource -t
But it is much easier to use a tool which runs the same command at a pre-defined time interval and organizes the output into a tabular form. For this we are going to use the freeware tool Lab128 Freeware. The installation is simple: unzip the files into some folder, then run the lab128fw.exe executable.
Since we will be running the monitoring tool in the host OS (Windows), the RAC should be accessible to the host OS. If you did everything as described above, you should have Adapter 1 on both VMs attached to "Host-only Adapter" and you should be able to ping both nodes:
ping 192.168.56.71
ping 192.168.56.72
It will make sense to add these IP addresses to the %SystemRoot%\System32\drivers\etc\hosts file:
#ORACLE SCAN LISTENER
192.168.56.91       rac-scan.localdomain   rac-scan
192.168.56.92       rac-scan.localdomain   rac-scan
192.168.56.93       rac-scan.localdomain   rac-scan

#rac1 node
192.168.56.71        rac1
192.168.56.72        rac2
Start Lab128, then connect through SSH to one node: Main menu | Clusterware | Clusterware Commander, then enter User: oracle; Password: oracle; Hostname/IP: 192.168.56.71. Optionally give the "Alias name" to this connection and save it by pressing "Store" button. Then press "Connect".
SSH Login to Clusterware Commander
The Clusterware Commander window presents the status of clusterware components in a tabular view. Left columns present the Component/Resource Type, Name, Target and Online node counts. The remaining columns to the right represent nodes in the cluster with the values showing the state of the component on that node. This view is refreshed automatically with the rate defined in the "Refresh Rate, sec." box. The tabular view content can be customized by filtering out less important components. If you need more information about this window, press F1 key.
Clusterware Commander Window
The tabular view can be used to manage the components through the pop-up menu. Right-click on the cell in the node area. Depending on the context of the selected component and the node, the menu will offer various commands, see the picture above.
Additionally, you can start a Lab128 monitor for a specific database instance with many connection details filled in automatically. An example of the Login window is shown below. Just enter User: sys; Password: oracle. It is recommended to name this monitor in the "Alias Name" box (we named it RACTP1) and save it by pressing the "Store" button. This will save effort next time when connecting to the same instance. Then press the "Connect" button to open the Lab128 monitor:
Lab128 monitor login