Performance "Statspack" Report:

Performance "Statspack" Report:

============================================
This is a Statspack-like report built over a user-selected time interval. Once the interval is selected, the menu item Reports | Performance | Statspack Report becomes enabled; you can also use the shortcut Ctrl-S to call the report. The report presents the most important data used in performance analysis:
Load Profile. The "Load Profile" section shows you the load on your instance per second and per transaction. You can compare this section between two Reports to see how the load on your instance is increasing or decreasing over time.
  • Redo size - The amount of redo generated during the report interval.
  • Logical Reads - Calculated as: Consistent Gets + DB Block Gets = Logical Reads.
  • Block changes - The number of blocks modified during the sample interval
  • Physical Reads - The number of requests for a block that caused a physical I/O.
  • Physical Writes - The number of physical writes issued.
  • User Calls - The number of calls (such as login, parse, execute, or fetch) made by user processes.
  • Parses - The total of all parses, both hard and soft.
  • Hard Parses - Those parses requiring a completely new parse of the SQL statement. These consume both latches and shared pool area.
  • Soft Parses - Not listed but derived by subtracting the hard parses from parses. A soft parse reuses a previous hard parse and hence consumes far fewer resources.
  • Sorts, Logons, Executes and Transactions - are all self-explanatory. A sketch of the counters behind this section follows this list.
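As an illustration only (not the report's internal query), the cumulative counters behind the Load Profile can be inspected in V$SYSSTAT; the report takes the delta between two snapshots and divides by the elapsed seconds (per second) or by the delta of user commits + user rollbacks (per transaction):
-- Cumulative counters behind the Load Profile section.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo size', 'session logical reads', 'db block changes',
                'physical reads', 'physical writes', 'user calls',
                'parse count (total)', 'parse count (hard)',
                'sorts (memory)', 'sorts (disk)', 'logons cumulative',
                'execute count', 'user commits', 'user rollbacks');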
Instance Efficiency Percentages. Hit ratios are calculations that may provide information regarding different operations in the Oracle instance. Database tuning must never be driven by hit ratios; they only provide additional information to help understand how the instance is operating. A query sketch showing how two of these ratios are derived follows this list.
  • Buffer Nowait % This is the percentage of time that the instance made a call to get a buffer (all buffer types are included here) and that buffer was made available immediately (meaning it didn't have to wait for the buffer...hence "Buffer Nowait").
  • Buffer Hit % This means that when a request for a buffer took place, the buffer was available in memory and physical disk I/O did not need to take place.
  • Library Hit % If your Library Hit percentage is low it could mean that your shared pool size is too small or that the bind variables are not being used (or at least being used properly).
  • Execute to Parse % The formula used to get this percentage is: (execute count - parse count) / execute count. So, if you run some SQL and it has to be parsed every time you execute it (even if it was a soft parse), then your percentage would be 0%. The more times you can reuse the parsed cursor, the higher your Execute to Parse ratio is. If the application parses a SQL statement and never executes it (really bad!), this ratio will be negative. Generally you should be concerned if this ratio is below 50%.
  • Parse CPU to Parse Elapsed % Generally, this is a measure of how available your CPU cycles were for SQL parsing. If this is low, you may see "latch free" as one of your top wait events.
  • Redo NoWait % The instance didn't have to wait to use the redo log if this is 100%.
  • In-memory Sort % This means the instance could do its sorts in memory as opposed to doing physical I/O. You don't want to be doing your sorts on disk, especially in an OLTP system. If your in-memory sorting is not between 95% and 100%, try increasing SORT_AREA_SIZE or PGA_AGGREGATE_TARGET in your spfile/pfile to see if that helps.
  • Soft Parse % This is an important one, at least for OLTP systems. This means that your SQL is being reused. If this is low (< 95%) then make sure that you're using bind variables in the application and that they're being used properly.
  • Latch Hit %: This should be pretty close to 100%; if it's not then check out what your top wait events are to try to fix the problem (pay specific attention to 'latch free' event).
  • % Non-Parse CPU: Shows the percentage of CPU resources spent on actual SQL execution rather than on parsing.
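As a minimal sketch, assuming cumulative values since instance startup are acceptable (the report itself works with snapshot deltas), two of these ratios can be computed from V$SYSSTAT:
-- Soft Parse % and Execute to Parse % from cumulative V$SYSSTAT counters.
SELECT ROUND(100 * (1 - hard.value / NULLIF(total.value, 0)), 2) AS soft_parse_pct,
       ROUND(100 * (exe.value - total.value) / NULLIF(exe.value, 0), 2) AS execute_to_parse_pct
  FROM (SELECT value FROM v$sysstat WHERE name = 'parse count (total)') total,
       (SELECT value FROM v$sysstat WHERE name = 'parse count (hard)') hard,
       (SELECT value FROM v$sysstat WHERE name = 'execute count') exe;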
Top 10 Wait Events. This section shows the Top 10 timed events that should be considered when focusing the tuning effort. It is crucial in determining what some of the performance drains in your database are: it tells you the amount of time the instance spent waiting.
Here are some common reasons for high wait events (a query sketch for listing the top waits follows this list):
  • CPU Time: This is not really a wait event, but rather the amount of CPU time used during the snapshot window. If this is your largest timed event, it could mean that you have some CPU-intensive SQL going on. You may want to examine the SQL further down in the Report for statements with a large CPU Time.
  • DB file scattered read: This can be seen fairly often. Usually, if this number is high, then it means there are a lot of full table scans going on. This could be because you need indexes or the indexes you do have are not being used.
  • DB file sequential read: This could indicate poor joining orders in your SQL or waiting for writes to 'temp' space. It could mean that a lot of index reads/scans are going on. Depending on the problem it may help to tune PGA_AGGREGATE_TARGET and/or DB_CACHE_SIZE.
  • SQL*Net more data to client: This means the instance is sending a lot of data to the client. You can decrease this time by having the client bring back less data. Maybe the application doesn't need to bring back as much data as it does.
  • log file sync: A Log File Sync happens each time a commit takes place. If there are a lot of waits in this area then you may want to examine your application to see if you are committing too frequently (or at least more than you need to).
  • Log buffer space: This happens when the instance is writing to the log buffer faster than the log writer process can write it to the redo logs. You could try getting faster disks, but you may first want to try increasing the size of your redo logs; that could make a big difference (and doesn't cost much).
  • Log file switch: This could mean that your committed DML is waiting for a logfile switch to occur. Make sure the filesystem where your archive logs reside is not getting full. Also, the DBWR process may not be fast enough for your system, so you could add more DBWR processes or make your redo logs larger so log switches are not needed as often.
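For illustration only, a minimal sketch of where such a list comes from, using V$SYSTEM_EVENT (cumulative values since startup; the report works with snapshot deltas and accounts for CPU time separately):
-- Top 10 non-idle wait events by total time waited.
SELECT *
  FROM (SELECT event, total_waits,
               ROUND(time_waited_micro / 1e6, 1) AS time_waited_sec
          FROM v$system_event
         WHERE wait_class <> 'Idle'
         ORDER BY time_waited_micro DESC)
 WHERE ROWNUM <= 10;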
ASH - Top SQL Statements. The Active Session History (ASH) collection engine provides a very important report on the most active SQL statements, with a breakdown by wait events.
  • Activity,% - the percentage of the SQL statement's elapsed time in the sum of all SQL elapsed time. This column indicates how large a given SQL statement is compared to the rest of the SQL statements.
  • Time (s) - the combined elapsed time for the SQL statement. Remember, the combined elapsed time can be bigger than the measured time interval if several sessions were executing the same statement concurrently.
  • % Total Elapsed Time - the combined elapsed time from the "Time (s)" column divided by the Report elapsed time, expressed as a percentage. This can be bigger than 100%. For example, if 3 sessions were executing the same SQL statement during the entire report time, the SQL statement will show 300% in this column. In other words, this column shows how many sessions on average were executing this statement (100% means one session).
Each SQL statement has a list of events and the percentage of time spent on each event. Depending on the nature of the waits, this shows whether the statement was CPU intensive, I/O intensive, experienced row lock contention, and so on.
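Whether the samples come from the tool's own collection engine or from Oracle's V$ACTIVE_SESSION_HISTORY (which requires the Diagnostics Pack license), the underlying idea is the same; a minimal sketch against the Oracle view:
-- Each ASH sample represents roughly one second of active session time,
-- so sample counts per SQL_ID approximate the Activity,% column.
SELECT sql_id,
       COUNT(*) AS samples,
       ROUND(100 * RATIO_TO_REPORT(COUNT(*)) OVER (), 1) AS activity_pct
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - 30 / 1440    -- last 30 minutes
   AND sql_id IS NOT NULL
 GROUP BY sql_id
 ORDER BY samples DESC;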
ASH - Top Service/Module. This shows the distribution of the load between services and modules. The report has the same columns as the previous ASH - Top SQL Statements section.
Top SQL ordered by Elapsed Time. If your goal is to reduce the response time of the database, you should start with this section. Look at the total elapsed time for the query and the elapsed time per single execution. Sometimes the single execution time is reasonable but the frequency of executions points to a problem in the application. The percentage of DB Time estimates how big an impact the query makes on overall response time, and therefore how big the payoff will be if the query is significantly improved. The CPU column should be compared to the Elapsed time column to see the percentage of time spent on CPU; that will tell how CPU-intensive the query is. A comparable list can be pulled with the sketch below.
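A minimal sketch against V$SQLSTATS (cumulative values since the statements entered the shared pool; the report uses the interval delta):
-- Top 10 statements by elapsed time, with per-execution time and CPU share.
SELECT *
  FROM (SELECT sql_id, executions,
               ROUND(elapsed_time / 1e6, 1) AS elapsed_sec,
               ROUND(elapsed_time / 1e6 / NULLIF(executions, 0), 3) AS sec_per_exec,
               ROUND(100 * cpu_time / NULLIF(elapsed_time, 0), 1) AS cpu_pct
          FROM v$sqlstats
         ORDER BY elapsed_time DESC)
 WHERE ROWNUM <= 10;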
Top SQL ordered by CPU Time. If your goal is to reduce or analyze CPU usage, you should look into this section. The CPU time will probably be a substantial percentage of the total elapsed time. If the query spent all its time on CPU, the values in these columns will be very close. As with the elapsed time analysis in the previous section, you should look at the total time and the time per single execution. Sometimes the single execution time is reasonable but the frequency of executions points to a problem in the application. Your decision whether to optimize the query, fix the application, or add more hardware should be based on those factors. The percentage of DB Time estimates how big an impact the query makes on overall response time, and therefore how big the payoff will be if the query is significantly improved.
Top SQL ordered by Gets. Gets are logical reads of DB blocks. They can be satisfied from the disk (physical read) or from the DB cache. Quite often in an OLTP environment the queries with the highest gets are satisfied mostly from the cache, because cache gets have a much higher rate than physical reads. These queries have a high hit ratio (cache gets divided by the total number of gets), and typically there is a good correlation between top CPU queries and top Gets queries. In a DSS environment, gets are mostly read gets, so there is a correlation between top Reads and top Gets queries. Check whether the number of gets per execution is reasonable, taking into consideration how many rows the query is expected to return. If the number of rows is small but the number of gets per row is high, there is potential to improve the query. Generally, more than 100 gets per row in OLTP should be considered too large, but it depends on the query: if the query does aggregations across many rows, this criterion doesn't apply. Remember that a large number of gets may indicate an opportunity to improve the query.
Top SQL ordered by Reads. If your goal is to reduce or analyze the I/O load, you should look into this section. Check whether the number of reads per execution is reasonable. In an OLTP environment a large number of reads can indicate full scans or range scans on indexes with poor selectivity. You should also check the number of gets per execution. If the number of gets is reasonably small and your system is experiencing high I/O, you should increase the DB cache. This doesn't apply to a DSS database; even though a large number of reads can be normal in DSS, you should still attempt to optimize the SQL and reduce the number of reads.
Top SQL ordered by Executions. High-frequency queries can indicate a poorly designed application. An excessive number of executions can tax the CPU and network.
Top SQL ordered by Parse Calls. You should always avoid hard parses for frequent queries, which are typical of OLTP environments. Even if all parses are soft parses, they present a substantial load to the system and should be avoided. This generally can be done at the application level by re-coding and following the principle of parsing once and executing the query several times.
Top SQL ordered by Cluster Wait Time. This is a RAC-specific section listing the queries that waited most on RAC inter-instance events. Compare the values in the CWT (Cluster Wait Time) columns with the elapsed time; if they are close, then the global cache and supporting services are a bottleneck for this query. Check which tables and indexes are involved in the query and relate that information to segment statistics in the Segments and Extents screen, sorted by "gc cr blocks received" and "gc current blocks received". This will show you which segments are hot. Then you will need to take measures which, depending on the type of segment, can include reverse indexing, hash partitioning, or application partitioning. A query sketch for finding the hot segments follows.
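The sketch below uses V$SEGMENT_STATISTICS with the two statistic names mentioned above (cumulative values since instance startup):
-- Segments receiving the most global cache blocks (RAC only).
SELECT *
  FROM (SELECT owner, object_name, statistic_name, value
          FROM v$segment_statistics
         WHERE statistic_name IN ('gc cr blocks received',
                                  'gc current blocks received')
         ORDER BY value DESC)
 WHERE ROWNUM <= 10;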
OS Statistics. Most of the statistics are self-explanatory. Please note that the physical memory size is not reported correctly in some Oracle versions. The "CPU Total Busy Time" is the sum of "CPU User Time" and "CPU System Time". The "% of Elapsed Time" column shows how much of all available CPU resources has been utilized.

RAC Statistics.

Global Cache Load Profile.

  • Global Cache blocks received - number of blocks received from the remote instance over the hardware interconnect. This occurs when a process requests a consistent read for a data block that is not in its local cache. Oracle sends a request to another instance. Once the buffer has been received, this statistic is incremented.
  • Global Cache blocks served - number of blocks sent to the remote instances over the hardware interconnect.
  • GCS/GES messages received - the number of messages received from remote instances over the hardware interconnect. This statistic generally represents the overhead caused by the functioning of RAC services.
  • GCS/GES messages sent - the number of messages sent to remote instances over the hardware interconnect. This statistic generally represents the overhead caused by the functioning of RAC services.
  • DBWR Fusion writes - the number of fusion writes. In RAC, as in a single-instance Oracle database, blocks are only written to disk for aging, cache replacement, or checkpoints. When a data block is replaced from the cache due to aging, or when a checkpoint occurs and the block was changed in another instance but not written to disk, the Global Cache Service will request that instance to write the block to disk.
  • Estd Interconnect traffic - number of KB transferred by the interconnect.

Global Cache Efficiency Percentages.

  • Buffer access - local cache % - the percentage of blocks satisfied from the local cache relative to the total number of blocks requested by sessions. It is desirable to keep this ratio as high as possible because this is the least expensive and fastest way to get a database block.
  • Buffer access - remote cache % - the percentage of blocks satisfied from a remote instance's cache relative to the total number of blocks requested by sessions. The sum of this ratio and the Buffer access - local cache % described above should be kept as high as possible, because these two paths of accessing DB blocks are the fastest and least expensive to the system. Getting a block from the cache of a remote instance is about 10 times faster than reading it from disk.
  • Buffer access - disk % - the percentage of blocks read from disk into the cache relative to the total number of blocks requested by sessions. It is desirable to keep this ratio low because physical reads are the slowest way to access database blocks (see the sketch below for the raw counters).
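A minimal sketch of the raw counters feeding this breakdown, from V$SYSSTAT (the report uses snapshot deltas rather than these cumulative values):
-- Raw counters behind the buffer access breakdown (RAC).
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('session logical reads', 'physical reads',
                'gc cr blocks received', 'gc current blocks received');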

Global Cache and Enqueue Services - Workload Characteristics.

This section provides average times for obtaining global enqueue (locks) and global cache blocks.
  • Avg global enqueue get time (ms) - the time spent sending a message through the interconnect, opening a new global enqueue for the resource, or converting the access mode of the enqueue if it is already open. If the get time is more than 20 ms, then your system may be experiencing timeouts.
  • Avg global cache cr block receive time (ms) - the time spent sending a message from the requesting instance to the block-mastering instance (2-way get) and sometimes to the holding instance (3-way get). This time also includes building the consistent read image of the block in the holding instance. The CR block get time should not exceed 15 ms.
  • Avg global cache current block receive time (ms) - the time spent sending a message from the requesting instance to the block-mastering instance (2-way get) and sometimes to the holding instance (3-way get). This time also includes the log flush time in the holding instance. The current block get time should not exceed 30 ms.
  • Avg global cache cr block build time (ms), Avg global cache cr block send time (ms), Avg global cache cr block flush time (ms) - components of GCS activity to serve remote requests for consistent read (CR) blocks.
  • Global cache log flushes for cr blocks served % - the percentage of CR blocks that needed a log flush relative to the total number of CR blocks served.
  • Avg global cache current block build time (ms), Avg global cache current block send time (ms), Avg global cache current block flush time (ms) - components of GCS activity to serve remote requests for current blocks.
  • Global cache log flushes for current blocks served % - the percentage of current blocks that needed a log flush relative to the total number of current blocks served.

Global Cache and Enqueue Services - Messaging Statistics.

This section provides performance numbers on messaging, although the Oracle documentation doesn't say how to use this information for tuning or troubleshooting. A sketch for inspecting the raw counters follows the list.

  • Avg message sent queue time (ms) - calculated as v$ges_statistics 'msgs sent queue time (ms)' / 'msgs sent queued';
  • Avg message sent queue time on ksxp (ms) - calculated as v$ges_statistics 'msgs sent queue time on ksxp (ms)' / 'msgs sent queued on ksxp';
  • Avg message received queue time (ms) - calculated as v$ges_statistics 'msgs received queue time (ms)' / 'msgs received queued';
  • Avg GCS message process time (ms) - calculated as v$ges_statistics 'gcs msgs process time(ms)' / 'gcs msgs received';
  • Avg GES message process time (ms) - calculated as v$ges_statistics 'ges msgs process time(ms)' / 'ges msgs received';
  • % of direct sent messages - calculated as v$ges_statistics 'messages sent directly' / ('messages sent directly' + 'messages sent indirectly' + 'messages flow controlled');
  • % of indirect sent messages - calculated as v$ges_statistics 'messages sent indirectly' / ('messages sent directly' + 'messages sent indirectly' + 'messages flow controlled');
  • % of flow controlled messages - calculated as v$ges_statistics 'messages flow controlled' / ('messages sent directly' + 'messages sent indirectly' + 'messages flow controlled');
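A minimal sketch for inspecting some of the raw counters referenced above:
-- A few of the V$GES_STATISTICS counters used in the averages above (RAC only).
SELECT name, value
  FROM v$ges_statistics
 WHERE name IN ('msgs sent queue time (ms)', 'msgs sent queued',
                'messages sent directly', 'messages sent indirectly',
                'messages flow controlled');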

RAC 11G INSTALLATION

Oracle RAC 11g Database on Linux Using VirtualBox

By Sergei Romanenko 
August - December, 2012
This article describes the installation of Oracle Database 11g release 2 (11.2 64-bit) RAC on Linux (Oracle Linux 6.3 64-bit) using VirtualBox (4.1.14+).

Introduction

If you want to get through all steps of the Oracle RAC installation and your laptop or desktop computer has 8 GB or more of RAM, then this is entirely feasible using Oracle VirtualBox as demonstrated in this article. You can get a running RAC system which can host a small test database. The created system is not, and should not be considered, a production-ready system. It's simply to allow you to get used to installing and using RAC and test various administration procedures. The article also explains how to save the images and restore RAC from the images in a matter of minutes. Even if you break your test system, it will be easy to restore.
This article uses the 64-bit versions of Oracle Linux, version 6.3, and Oracle 11g Release 2, version 11.2.0.3. Using VirtualBox you can run multiple Virtual Machines (VMs) on a single server, allowing you to run both RAC nodes on a single machine. In addition, it allows you to set up shared virtual disks. The finished system includes two guest operating systems, two sets of Oracle Grid Infrastructure (Clusterware + ASM) and two Database instances, all on a single server. The amount of disk space needed is about 32 GB; if you want to save images of the finished RAC, another 12 GB of disk space will be needed.
This article was originally inspired by the article "Oracle Database 11g Release 2 RAC On Linux Using VirtualBox" written by Tim Hall and published in his blog. It has since been almost entirely revised and reworked, and now bears very little resemblance to the original work.
Note. When this article was written, Oracle Database 11g Release 2 (11.2.0.3) for Linux 64-bit (both clusterware and database) was available through Oracle support to licensed customers only. As had happened in the past, I expected the Oracle corporation to make the latest version available to the general public fairly soon, so I thought that using the latest and greatest version, with many bugs fixed, would be the best way to go. But it is now the end of 2012 and 11.2.0.3 is still unavailable to the general public, while this version is really much better than any older version. I apologize for this inconvenience and suggest finding any possible way to get this version; it doesn't make sense to fight issues and research workarounds for bugs that are already fixed. Ask friends who have access to Oracle support to help. And, if you can, ask the Oracle corporation to make the latest version available for download.
As of now, 11.2.0.3 can be downloaded from the Oracle support site: go to "Patches & Updates", then select "Latest Patchsets", then "Oracle Database", then "Linux x86-64", then "11.2.0.3.0". The number of this patch set is 10404530; it is possible to jump to the download page using this number. This patch set is a full installation of the Oracle Database software, which means that you do not need to install Oracle Database 11g Release 2 (11.2.0.1) before installing Oracle Database 11g Release 2 (11.2.0.3). To install the RAC database you will need only 3 files:
Oracle Database (includes Oracle Database and Oracle RAC), part 1: 
  p10404530_112030_Linux-x86-64_1of7.zip  1.3G
 
Oracle Database (includes Oracle Database and Oracle RAC), part 2:
  p10404530_112030_Linux-x86-64_2of7.zip  1.1G
 
Oracle Grid Infrastructure (includes Oracle ASM, Oracle Clusterware):
  p10404530_112030_Linux-x86-64_3of7.zip  933M 

System Requirements

  • 8 GB of RAM;
  • 32 GB of free space on the hard disk;
  • This procedure was tested on 64-bit Windows 7, although there should be no problems using VirtualBox on other host OSes. Please let me know if you have success or problems on other OSes;

Download Software

Download the following software: VirtualBox, the Oracle Linux 6.3 (64-bit) installation DVD image, and the three Oracle 11.2.0.3 files listed above.

Virtual Machine Setup

In this exercise, we are using VirtualBox installed on 64-bit Windows 7.
Now we must define the two virtual RAC nodes. We can save time by defining one VM, then cloning it when it is installed.
Start VirtualBox and click the "New" button on the toolbar. Click the "Next" button on the first page of the Virtual Machine Wizard.
Enter the name "rac1", OS "Linux" and Version "Oracle (64 bit)", and then click the "Next" button:
Create New VM - VM Name and OS Type
If you have 16 GB of RAM in your host system, then set Base Memory to 3072 MB, otherwise use 2048 MB, as in the screenshot below, then click the "Next" button:
Create New VM - Memory
Accept the default option to create a new virtual hard disk by clicking the "Next" button:
Create New VM - Virtual Hard Disk
Accept the default VDI type and click the "Next" button on the Virtual Disk Creation Wizard welcome screen:
New Virtual Hard Disk - VDI type
Accept the default "Dynamically allocated" option by clicking the "Next" button:
New Virtual Hard Disk - Dynamically allocated
Accept the default location and set the size to "16G" and click the "Next" button:
New Virtual Hard Disk - Virtual Disk Location And Size
Press the "Create" button on the Create New Virtual Disk Summary screen:
New Virtual Hard Disk - Summary
Press the "Create" button on the Create New Virtual Machine Summary screen:
Create New VM - Summary
The "rac1" VM will appear on the left hand pane. Click on the "Network" link on the right side:
VM Manager - rac 1
Make sure "Adapter 1" is enabled, attached to "Internal Network" or "Host-only Adapter". This inetrface will be used for public network, for example, for connection to the RAC datbase from other applications. More about networking will be explained later. On the screenshot below "Internal Network" is selected and name "pubnet" was given to this network:
Network - Adapter 1
Then click on the "Adapter 2" tab. Make sure "Adapter 2" is enabled and attached to "Internal Network". Name this network "privnet":
Network Adapter 2
Finally, click on "Adapter 3", enable it, and attach it to "Bridged Adapter" or "NAT". This adapter will be used for Internet access. Then press the "OK" button:
Network Adapter 3
Optionally, you can disable the audio card using "Audio" link. This will probably save some amount of space and avoid potential problems related to audio settings. Also if your system has 4 CPU cores or more, it will make sense to allocate 2 CPUs to the Virtual Machine. You can do that in "System" settings.
The virtual machine is now configured so we can start the guest operating system installation.

Guest Operating System Installation

Please note that during installation VirtualBox will keep the mouse pointer inside the VM area. To release it, press the right Control key on the keyboard.
Place the Oracle Linux 6.3 (or newer) DVD in the DVD drive and skip the next two screenshots. If you don't have a DVD, download the .iso image and attach it to the virtual DVD drive. Select the "Storage" link on the right-hand pane of the VirtualBox Manager screen to open the "Storage" screen. Then select the DVD drive in the "Storage Tree" section:
Select DVD Drive
In "Attributes" section click on the DVD disk icon and choose DVD .iso file. Note that name of the file shows in the Storage Tree. Then press 'OK":
Select DVD Drive file
Start the virtual machine by clicking the "Start" button on the toolbar. The resulting console window will contain the Oracle Linux boot screen. Proceed with the "Install or upgrade an existing system":
Oracle Linux Boot
Do not perform the media test; choose the "Skip" button.
Continue through the Oracle Linux installation as you would for a normal server. On the next three screens select the Language, Keyboard, and Basic Storage Devices type. Confirm to discard any data.
Set "Hostname" to rac1.localdomain and press "Configure Network":
Linux set hostname
In the Network Connections screen select the "System eth0" interface, which will be used for the public network, and press "Edit":
Linux network setup
Make sure that "Connect automatically" is checked. In "IPv6 Settings" tab make sure the Method is set to "Ignore". Select "IPv4 Settings" tab; change Method to "Manual"; Press "Add" and fill Address: 192.168.56.71; Netmask: 255.255.255.0; Gateway: 0.0.0.0. Press "Apply" then done:
Linux network eth0 setup
In the Network Connections screen select the "System eth1" interface, which will be used for the private network, then press "Edit". Check the box "Connect automatically". In the "IPv6 Settings" tab make sure the Method is set to "Ignore". Select the "IPv4 Settings" tab; change Method to "Manual". Press "Add" and fill in Address: 192.168.10.1; Netmask: 255.255.255.0; Gateway: 0.0.0.0. When done, press "Apply":
Linux network eth1 setup
Finally select "System eth2" interface, this will be used for Internet, then press "Edit". Check the box "Connect automatically". Select "IPv4 Settings" tab make sure the Method is set to "Automatic (DHCP)". In "IPv6 Settings" tab make sure the Method is set to "Ignore". Press "Apply" button:
Linux network setup
Close the Network Connections screen and proceed to the next setup screen. Select the time zone and type in the Root Password: oracle.
Select "Use All Space" type of installation and check "Review and modify partitioning layout":
Linux network eth1 setup
Edit the size of the lv_swap device to 1500 MB, then edit the size of lv_root to 14380 MB. Press "Next":
Linux partitions setup
Confirm through the warnings and create the partitions. Keep the defaults on the Boot loader screen.
On the software installation type screen select "Database Server" and check the "Customize now" option. Press "Next":
Linux software type
In the Customization screen select Database and uncheck all items; select Desktops and check "Desktop" and "Graphical Administration Tools"; then press "Next" and finish the installation. Reboot.
When it comes back, there will be a few more setup screens that are obvious to handle. Don't create an 'oracle' account; this will be done later. Congratulations! Linux has been installed.

Check Internet Access

We will need Internet access because additional packages will be installed online. Open a terminal and try to ping any Internet site, for example:
ping yahoo.com
If ping doesn't work, troubleshoot the problem using the 'ifconfig' command and by making changes in Network Connections (Linux desktop Main menu | System | Preferences | Network Connections). If you made changes in Network Connections, restart the interface by rebooting the VM or by running these two commands:
# ifdown eth0
# ifup eth0
Then check the ping again.

Oracle Clusterware Installation Prerequisites. Part 1

All actions in this section must be performed by the root user.
Run the Automatic Setup by installing the 'oracle-rdbms-server-11gR2-preinstall' package. This package performs the prerequisite setup, including kernel parameter changes and creation of the Linux oracle account:
# yum install oracle-rdbms-server-11gR2-preinstall
Note. You will probably not be able to copy and paste this command, so you will have to type it manually. We are going to fix that shortly by installing the Guest Additions. For now just type the commands.
Install ASMLib:
# yum install oracleasm
# yum install oracleasm-support
Configure ASMLib by running this command and answering the questions:
# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: 
Writing Oracle ASM library driver configuration: done
#

Install Guest Additions

Guest Additions are optional, but highly recommended. Guest Additions allow better mouse integration and bidirectional clipboard copying. Another important feature is support for shared folders, which makes files in the host OS visible to the guest. The remainder of this document assumes that Guest Additions are installed.
In order to install Guest Additions, reboot the just-created VM and log in as root. Then in the window menu select Devices | Install Guest Additions. Go through the download until you see the DVD Autorun screen:
VirtualBox guest additions
Press "OK", then "Run" to start installation.
Note. The installation can fail, complaining about a missing kernel-uek-devel package and providing a 'yum' command to install this package. Run this command - that's why we need Internet access. Also install another package: 'yum install gcc'. Then reinstall the Guest Additions by double-clicking the VBOXADDITIONS DVD icon on the desktop and clicking the "Open Autorun Prompt" button.
Reboot the machine. Now you should be much happier about the VirtualBox!

Oracle Clusterware Installation Prerequisites. Part 2

Create the directory in which the Oracle software will be installed.
mkdir -p  /u01
chown -R oracle:oinstall /u01
chmod -R 775 /u01/
Add the oracle account to the dba and vboxsf groups. The vboxsf group was created by the VirtualBox Guest Additions and will allow the oracle user to access folders in the host OS:
# usermod -G dba,vboxsf oracle
Reset oracle user password to oracle:
# passwd oracle
Changing password for user oracle.
New password: 
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password: 
passwd: all authentication tokens updated successfully.
# 
Disable Secure Linux (SELinux) by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.
SELINUX=disabled
Either configure NTP, or make sure it is not configured, so that the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes. In this case we will deconfigure NTP.
# service ntpd stop
Shutting down ntpd:                                        [FAILED]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid
Clean up the YUM repositories:
# yum clean all
Check file system usage, about 2.8 GB is used:
# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_rac1-lv_root
                      14493616   2865472  10891888  21% /
tmpfs                  1027556       272   1027284   1% /dev/shm
/dev/sda1               495844     77056    393188  17% /boot
# 

Network Setup

All actions in this section must be performed by the root user.
Below is the TCP/IP layout of addresses used in the public and private networks. If you need to use other addresses, make the corresponding adjustments and remember to stay consistent with those adjustments throughout the rest of the article. Please note that the subnet 192.168.56.0 is the default configuration used by VirtualBox as the Host-only network connecting the host OS and the virtual machines. VirtualBox also runs a DHCP server on this subnet, reserving the address range 100-254, so it is safe to use addresses below 100 for static addresses. You can verify these settings in: Main menu | File | Preferences | Network, then check the properties of the Host-only network. We are using this subnet for the RAC public network. Even if you don't need to connect from the host OS to the RAC, and you used the VirtualBox "Internal Network" for Adapter 1, you can still use the proposed layout without making changes.
Edit the "/etc/hosts" file by appending the following information:
# Private
192.168.10.1    rac1-priv.localdomain   rac1-priv
192.168.10.2    rac2-priv.localdomain   rac2-priv

# Public
192.168.56.71    rac1.localdomain        rac1
192.168.56.72    rac2.localdomain        rac2

# Virtual
192.168.56.81    rac1-vip.localdomain    rac1-vip
192.168.56.82    rac2-vip.localdomain    rac2-vip

# SCAN
192.168.56.91    rac-scan.localdomain    rac-scan
192.168.56.92    rac-scan.localdomain    rac-scan
192.168.56.93    rac-scan.localdomain    rac-scan
Note. The SCAN address should not really be defined in the hosts file. Instead it should be defined in DNS to round-robin between 3 addresses on the same subnet as the public IPs. For this installation, we will compromise and use the hosts file. If you are using DNS, then comment out the lines with the SCAN addresses.
We already set the IP addresses of all adapters during the Linux installation. If you followed the instructions, there is no need to change anything. But if you do need to change something, you can do so with the Network Connections tool: Linux desktop Main menu | System | Preferences | Network Connections.
Now we need to disable the firewall: Linux Main menu | System | Administration | Firewall. Click on "Disable" icon, then on "Apply".
Linux disable firewall

Downloaded Oracle Installation Files

There are two options to handle Oracle downloads:
  • Downloading or transferring the files into the VM and uncompressing them in the VM;
  • Downloading and uncompressing in the host OS, then making the folders accessible to the VM filesystem;
Obviously the second option is much better, because it doesn't consume the virtual disk of the guest VM and results in a smaller final image. Also, the installation files can easily be reused in another installation exercise. In this section we are going to set up VirtualBox Shared Folders.
It is assumed that you have already downloaded the Oracle installation files and uncompressed them into the "grid" and "database" folders. In our example these folders are in the "C:\TEMP\oracle_sw" folder.
C:\TEMP\oracle_sw>dir -l
total 0
drwx------+ 1 sromanenko Domain Users 0 Aug  5 18:10 database
drwx------+ 1 sromanenko Domain Users 0 Aug  5 03:08 grid
Shut down the VM. In the VirtualBox Manager click on the "Shared Folders" link in the right-hand pane. Add a shared folder by pressing the "plus" icon. Then select the path to the location of the Oracle software, and check both boxes "Read-only" and "Auto-mount":
VirtualBox Shared Folders
Note. You can use any name in "Folder Name". If you have the Oracle installation files at a different location, you can overwrite that name with "oracle_sw"; this will make it easier to follow the steps below.
Press "OK" to save this setting. Now Shared Folders should look like this:
VirtualBox Shared Folders
Restart the VM and log in as the oracle user. Change directory to "/media/sf_oracle_sw" - this is where VirtualBox maps the host OS shared folder. Note that VirtualBox added the prefix "sf_" to the name of the folder. List the content of the folder with 'ls':
$ cd /media/sf_oracle_sw
$ ls
database  grid
$
There is one package, 'cvuqdisk', that should be installed before the installation. Install it from the Oracle grid/rpm directory as the root user:
$ su root
Password: 
# cd /media/sf_oracle_sw/grid/rpm
# rpm -Uvh cvuqdisk*

Clone the Virtual Machine

Shut down the VM.
In the VirtualBox Manager, in the Network settings for Adapter 1, change the network type it is attached to from "Bridged Adapter" to "Host-only Adapter".
Note. If you don't need access to the RAC database from the host OS, then you can use the "Internal Network" type of adapter. The RAC will be accessible from all other virtual machines in both cases. Optionally, if you need Internet access in the future, it can be added after the RAC is installed; see Adding Internet Access. For more details about the types of network adapters, see the "Virtual Networking" chapter in the VirtualBox documentation.
In the VirtualBox Manager window start clone wizard: Main menu | Machine | Clone. Type "rac2" for the name of new machine. Make sure that "Reinitialize the MAC address of all network cards" is not checked. Then press "Next":
VirtualBox clone name
Keep default "Full Clone" option selected and press "Clone":
VirtualBox clone type
Start the cloned VM rac2 and log in as the root user. Then change the hostname by editing the HOSTNAME parameter in the "/etc/sysconfig/network" file:
HOSTNAME=rac2.localdomain
Start "Network Connections" tool (Main menu | System | Preferences | Network Connections). Edit eth0 and eth1 interfaces and set in IPv4 addresses 192.168.56.72 and 192.168.10.2 correspondingly.
Reboot system.
Now we need to change MAC address for all three interfaces. At the moment we have two VMs with the same set of MAC addresses. We can run one machine or another, but not both of them at the same time because MAC address must be unique. No changes will be made to rac1, we will pick up three new unused addresses and set them for eth0, eth1, and eth2 in rac2. The easiest way to do that is to change just last two characters of the address. We are going to change them to '00'. If the last two characters are already '00', then change to something else, '01', for example. Just make sure that these addresses don't collide with the MAC addresses of rac1. In running rac2 node, open "Network Connections" and edit MAC address in the "Wired" tab. The screenshot below shows where to set MAC address. Don't forget to change MAC addresses for all three interfaces. Please note that your setup will have a different set of MAC addresses because they are random-generated by VirtualBox.
VirtualBox network MAC address change
Write down the new MAC addresses for all three interfaces. Save the new settings by pressing the "Apply" button, then shut down the machine. After shutdown, return to the VirtualBox Manager, select the rac2 VM and edit the "Network" settings. Make the same changes to the MAC addresses. Don't forget to change the MAC addresses for all three adapters.
VirtualBox adapter MAC address change
Start both machines and check that they can ping each other over both the public and private networks. For example, on rac1:
$ ping rac2
$ ping rac2-priv
If you have problems, use the 'ifconfig' command to check the configuration, then correct the problem using the "Network Connections" tool.

Create Shared Disks

Shut down both virtual machines. We need to create a new virtual disk, change its attribute to Shareable, and add it to both VMs. In the current version of VirtualBox, the only way to create a new disk in the GUI is through the "Storage" page in the virtual machine's settings. Select either the rac1 or rac2 VM, then click on the "Storage" link. Select "SATA Controller" and click on the "Add Hard Disk" icon. If you are not sure which icon to use, the same action is available through the popup menu: right-click on the "SATA Controller" and select "Add Hard Disk".
VirtualBox Storage - SATA Controller
Press "Create new disk":
VirtualBox create Hard Disk question
Accept the default VDI type and click the "Next" button on the Virtual Disk Creation Wizard welcome screen:
New Virtual Hard Disk - VDI type
Select "Fixed size" option and press the "Next" button:
New Virtual Hard Disk - Fixed size
Change the name and location of this disk. You can keep this file in the default location - the folder of the selected VM - although, because this disk is shared, it is better to put it in the parent directory. So, instead of the "...\VirtualBox VMs\rac1" directory, place it in "...\VirtualBox VMs". Set the size to "2400 MB" - this will result in about 400 MB of free space in the ASM group when everything is installed. If you need more space, you can choose a bigger size. And, regardless of what you decide now, it will be possible to add more shared disks to the ASM group after everything is installed.
New Virtual Hard Disk - Location And Size
Create the new disk; it will already be attached to the VM.
Select this new disk. You will see in the disk Information that the type of this disk is "Normal". There was no option in the previous dialog windows to create the new disk as "Shareable", and once the disk is attached, this attribute cannot be changed. This is a limitation of the GUI, so we have to work around it: click on the "Remove Attachment" icon, which returns this VM to its previous storage configuration. Close the "Storage" page.
What is different now is that there is a new disk registered with VirtualBox. We will use the Virtual Media Manager (Main menu | File | Virtual Media Manager) to change its attributes. Select this new disk in the Virtual Media Manager:
VB - Shared Disk is Detached in Media Manager
Click on "Modify" icon and select "Shareable":
New Virtual Hard Disk - Location And Size
Attach this existing disk to each VM using the "Storage" page. Don't forget to select the correct controller before attaching the disk, and use the "Choose existing disk" option.
In the end, the "Storage" section of both VMs should look like this:
Attached Hard Disks
Start either of the machines and log in as root. The current disks can be seen by issuing the following command.
# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb
#
Use the "fdisk" command to partition the new disk "sdb".
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xd724aa83.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-305, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-305, default 305): 
Using default value 305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
#
The sequence of answers is "n", "p", "1", "Return", "Return" and "w".
Once the new disk is partitioned, the result can be seen by repeating the previous "ls" command.
# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdb1
#
Mark the new shared disk in the ASMLib as follows.
# oracleasm createdisk DISK1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
# 
Run the "scandisks" command to refresh the ASMLib disk configuration.
# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
#
We can see the disk is now visible to ASM using the "listdisks" command.
# oracleasm listdisks
DISK1
#
Start the other VM and log in as root. Check that the shared disk is visible to ASM using the "listdisks" command.
# oracleasm listdisks
DISK1
#
The virtual machines and shared disks are now configured for the grid infrastructure!

Install the Grid Infrastructure

Make sure the "rac1" and "rac2" virtual machines are started, then login to "rac1" or switch the user to oracle and start the Oracle installer.
$ cd /media/sf_oracle_sw/grid
$ ./runInstaller
Select "Skip software updates" option, press "Next":
Grid - software updates
Select the "Install and Configure Grid Infrastructure for a Cluster" option, then press the "Next" button.
Grid - Select Installation Option
Select the "Advanced Installation" option, then click the "Next" button.
Grid - Select Installation Type
On the "Grid Plug and Play information" screen, change Cluster Name to "rac-cluster" and SCAN Name to "rac-scan.localdomain", uncheck "Configure GNS" box, then press the "Next" button.
Grid - names
On the "Cluster Node Configuration" screen, click the "Add" button.
Grid - Cluster Node Configuration
Enter the details of the second node in the cluster, then click the "OK" button.
Grid - Add Cluster Node Information
Click the "SSH Connectivity..." button and enter the password for the "oracle" user. Click the "Setup" button to configure SSH connectivity, and the "Test" button to test it once it is complete. Then press "Next".
Grid - SSH Connectivity
On the "Specify Network Interface Usage" screen check the public and private networks are specified correctly. Press the "Next" button.
Grid - Network Interfaces
On the "Storage Option Information" screen keep Oracle ASM option selected and press "Next".
Grid - Storage Options
On the "Create ASM Disk Group" screen click on "Change Discovery Path" button:
Grid - Create ASM Group
Then enter "/dev/oracleasm/disks" and press "OK":
Grid - ASM Discovery Path
Keep "Disk Group Name" unchanged. Select "External" redundancy option. Check "/dev/oracleasm/disks/DISK1" in the "Add Disks" section. When done, press "Next".
Grid - Create ASM Group
On the "Specify ASM Password" screen select "Use same passwords for these accounts" option and type in "oracle" password, then press "Next". Ignore warnings about password weakness.
Grid - ASM Passwords
Keep defaults on the "Failure Isolation Support" and press "Next".
Grid - Failure Isolation Support
Keep defaults on the "Privileged Operating System Groups" and press "Next".
Grid - Privileged Operating System Groups
Keep suggested paths unchanged on the "Specify Installation Location" and press "Next".
Grid - Specify Installation Location
Keep suggested path unchanged on the "Create Inventory" and press "Next".
Grid - Create Inventory
The results of the prerequisite checks are shown on the next screen. You should see two warnings and one failure. The failure is caused by the inability to look up the SCAN in DNS, which is expected. Check the "Ignore All" box and press "Next".
Grid - Prerequisite Check Results
Press "Install" on the Summary screen.
Grid - Create Inventory
Wait while the setup takes place...
Grid - Setup
When prompted, run the configuration scripts on each node.
Grid - Execute Configuration Scripts
Execute the scripts as the root user, first on rac1, then on rac2.
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/11.2.0/grid/root.sh
#
When running root.sh you will be asked about the location of the bin directory; press Enter to accept the default. The output of root.sh should finish with "Configure Oracle Grid Infrastructure for a Cluster ... succeeded". If the script fails, correct the problem and restart it.
Once the scripts have completed, return to the "Execute Configuration Scripts" screen on "rac1", click the "OK" button and wait for the configuration assistants to complete.
We expect the verification phase to fail with an error relating to the SCAN:
Grid - Configuration Assistants
Here are the offending lines from the log file:
INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking TCP connectivity to SCAN Listeners...
INFO: TCP connectivity to SCAN Listeners exists on all cluster nodes
INFO: Checking name resolution setup for "rac-scan.localdomain"...
INFO: ERROR: 
INFO: PRVG-1101 : SCAN name "rac-scan.localdomain" failed to resolve
INFO: ERROR: 
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.56.71) failed
INFO: ERROR: 
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.56.72) failed
INFO: ERROR: 
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.56.73) failed
INFO: ERROR: 
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: Verification of SCAN VIP and Listener setup failed
Provided this is the only error, it is safe to ignore it and continue by clicking the "Next" button. Close the Configuration Assistant on the next screen.
Check the status of the running clusterware. On rac1 as the root user:
# . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle

# crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                                         
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.cvu
      1        ONLINE  ONLINE       rac1                                         
ora.oc4j
      1        ONLINE  ONLINE       rac1                                         
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan2.vip
      1        ONLINE  ONLINE       rac1                                         
ora.scan3.vip
      1        ONLINE  ONLINE       rac1                                         
# 
You should see various clusterware components running on both nodes. The grid infrastructure installation is now complete!
Check filesystem usage, about 6.5 GB are used:
$ df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_rac1-lv_root
                      14493616   6462704   7294656  47% /
tmpfs                  1027552    204708    822844  20% /dev/shm
/dev/sda1               495844     77056    393188  17% /boot
$ 

Install the Database

Make sure the "rac1" and "rac2" virtual machines are started, then login to "rac1" or switch the user to oracle and start the Oracle installer.
$ cd /media/sf_oracle_sw/database
$ ./runInstaller
Uncheck the "I wish to receive security updates..." checkbox and press the "Next" button:
DB - Configure Security Updates
Check "Skip software updates" checkbox and press the "Next" button:
DB - Skip Software Updates
Accept the "Create and configure a database" option and press the "Next" button:
DB - Select Installation Option
Accept the "Server Class" option and press the "Next" button:
DB - System Class
Make sure "Oracle Real Application Cluster database installation" is chosen and both nodes are selected, and then press the "Next" button.
DB - Node Selection
Select the "Advanced install" option and press the "Next" button:
DB - Select Install Type
Select Language on next screen and press the "Next" button.
Accept "Enterprise Edition" option and press the "Next" button:
DB - Select Database Edition
Accept default installation locations and press the "Next" button:
DB - Specify Installation Location
Accept "General Purpose / Transaction Processing" option and press the "Next" button:
DB - Specify Installation Location
You can keep the default "orcl" database name or define your own. We used "ractp":
DB - Specify Database Identifier
On the "Configuration Options" screen reduce the amount of allocated memory to 750 MB - this will avoid excessive swapping and will run smoother. You are free to explore other tabs and set whatever suits your needs.
DB - Specify Configuration Options
Accept default Management option and press the "Next" button:
DB - Specify Management Options
Accept "Oracle Automatic Storage Management" option and type in "oracle" password, then press the "Next" button:
DB - Specify Storage Options
Accept default "Do not enable automated backups" option and press the "Next" button:
DB - Specify Recovery Options
Review ASM Disk Group Name which will be used by the database and press the "Next" button:
DB - Select ASM Disk Group
Select "Use the same password for all accounts" option, type in "oracle" password, then press the "Next" button:
DB - Specify Schema Passwords
Select "oinstall" group for both Database Administrator and Database Operator groups, then press the "Next" button:
DB - Privileged OS Groups
Wait for the prerequisite check to complete. If there are any problems, either fix them, or check the "Ignore All" checkbox and click the "Next" button.
DB - Perform Prerequisite Checks
If you are happy with the summary information, click the "Install" button.
DB - Summary
Wait while the installation takes place.
DB - Install Product
Once the software installation is complete the Database Configuration Assistant (DBCA) will start automatically.
DB - DBCA
Once the Database Configuration Assistant (DBCA) has finished, click the "OK" button.
DB - DBCA Complete
When prompted, run the configuration scripts on each node. When the scripts have been run on each node, click the "OK" button.
DB - Execute Configuration Scripts
Execute the script as the root user on both nodes:
# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
#
Click the "Close" button to exit the installer.
DB - Finish
The RAC database creation is now complete!

Check the Status of the RAC

There are several ways to check the status of the RAC. The srvctl utility shows the current configuration and status of the RAC database.
$ . oraenv
ORACLE_SID = [oracle] ? ractp
The Oracle base has been set to /u01/app/oracle

$ srvctl config database -d ractp
Database unique name: ractp
Database name: ractp
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ractp/spfileractp.ora
Domain: localdomain
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ractp
Database instances: ractp1,ractp2
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Database is administrator managed

$ srvctl status database -d ractp
Instance ractp1 is running on node rac1
Instance ractp2 is running on node rac2
$
The V$ACTIVE_INSTANCES view can also display the current status of the instances.
$ export ORACLE_SID=ractp1
$ sqlplus / as sysdba

SQL> SELECT inst_name FROM v$active_instances;

INST_NAME
--------------------------------------------------------------------------------
rac1.localdomain:ractp1
rac2.localdomain:ractp2

SQL> exit
$
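Another quick cross-instance check, shown here as a minimal sketch, is to query the RAC-wide GV$INSTANCE view, which reports each instance along with its host and status (output trimmed to the relevant columns):

SQL> SELECT inst_id, instance_name, host_name, status FROM gv$instance;

   INST_ID INSTANCE_NAME    HOST_NAME            STATUS
---------- ---------------- -------------------- ------------
         1 ractp1           rac1.localdomain     OPEN
         2 ractp2           rac2.localdomain     OPEN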

Making Images of the RAC Database

At any earlier point we could have saved an image of a virtual machine and restored it at will. Here we are going to save images of the newly created Oracle RAC system, which we can restore on the same host, or even hand over to another location and restore there, in a matter of minutes!
Exporting a VM is a straightforward process, and saving the RAC images would be an easy task if not for the shared disk. In my view, the simplest way to handle it is to detach the shared disk from both nodes and deal with the three parts (two self-contained VMs and one shared disk) separately. In the end there will be three files: two for the VMs and one representing the shared disk. These three files can then be zipped by your favorite archiver into a single file for storage or transfer. After the export is done, the shared disk can easily be attached back to the nodes. The same applies to importing the VMs back into VirtualBox along with a copy of the shared disk: the shared disk is attached to the imported VMs as an extra step. Let's perform all these actions.

Clean Shutdown of RAC

But first, we need to shut down the servers in a nice and clean manner, because we want to save them in a consistent state. Shut down the database. As the oracle user, execute on any node:
$ . oraenv
ORACLE_SID = [oracle] ? ractp
The Oracle base has been set to /u01/app/oracle

$ srvctl stop database -d ractp
$
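Before stopping the clusterware, you can optionally verify that both instances are indeed down; srvctl reports them as not running:

$ srvctl status database -d ractp
Instance ractp1 is not running on node rac1
Instance ractp2 is not running on node rac2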
Shut down the clusterware on the first node. As the root user, execute:
# . oraenv
ORACLE_SID = [ractp1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle

# crsctl stop crs
...
CRS-4133: Oracle High Availability Services has been stopped.
#
Shut down the clusterware on the second node. As the root user, execute:
# . oraenv
ORACLE_SID = [ractp1] ? +ASM2
The Oracle base remains unchanged with value /u01/app/oracle

# crsctl stop crs
...
CRS-4133: Oracle High Availability Services has been stopped.
#
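Optionally, verify on each node that the stack is really down before powering off; with the clusterware stopped, crsctl can no longer contact it (the exact message may vary by version):

# crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services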
Shut down both virtual machines and wait until all VM windows are closed.

Detach Shared Disk and Make a Copy of It

In the VirtualBox Manager, open the Virtual Media Manager: Main menu | File | Virtual Media Manager. Then select the disk used by the RAC (rac_shared_disk1.vdi). Note that this disk shows as attached to the rac1 and rac2 VMs:
VB - Detach Shared Disk in Media Manager
Click on "Release" icon and then confirm in the pop-up window. Note that this disk now shows as "Not attached". Click on "Copy" to start Disk Copying Wizard.
VB - Shared Disk is Detached in Media Manager
Accept the virtual disk to copy and press "Next".
VB - Copying, Virtual Disk to Copy
Accept the VDI virtual disk file type and press "Next".
VB - Copying, Target File Type
Select "Fixed size" and press "Next".
VB - Copying, Target File Storage Details
On the next screen you can set the location and name of the new file. When done, press "Next".
VB - Copying, Target File Location
On the Summary screen review details and press "Copy" to complete copying. Close the Media Manager when copying is done.
Note: do not simply copy the .vdi file at the file-system level, because the copy will retain the same disk UUID and VirtualBox will refuse to use it, since a disk with that UUID is already registered. When copying through the Virtual Media Manager, a new UUID is assigned automatically.
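If you prefer the command line, the same copy can be made with VBoxManage, which also assigns a new UUID automatically; the file names below are the ones used in this article, so adjust the paths to your setup:

VBoxManage clonehd rac_shared_disk1.vdi rac_shared_disk1_copy.vdi --variant Fixed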

Export VMs

In the VirtualBox Manager, select a VM, then call the Appliance Export Wizard: Main menu | File | Export Appliance. Exporting is generally as simple as saving a file. Export both VMs.
Now you should have three files that can be zipped into a single archive of about 12 GB.
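The export can also be scripted from the host with VBoxManage; this is just a sketch, where the VM names match the ones created earlier and the output file names are arbitrary:

VBoxManage export rac1 -o rac1.ova
VBoxManage export rac2 -o rac2.ova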

Re-attach Shared Disk to the Original RAC Setup

Restore the current working RAC setup by re-attaching the shared disk to the rac1 and rac2 VMs using the "Storage" page. Don't forget to select the correct controller before attaching the disk:
VirtualBox Storage - SATA Controller
Press "Add Hard Disk" icon and use "Choose Existing Disk" to attach rac_shared_disk1.vdi. Once Shared disk is attached to both VMs, the RAC is ready to run.

Restoring RAC from Saved Files

In this section we will import the RAC from the saved files, creating a second RAC system. Don't run both RAC systems at the same time, because they share the same network attributes.
Open the Appliance Import Wizard: Main menu | File | Import Appliance. Choose the file and press "Next":
VB - Appliance Import Wizard
On the Appliance Import Settings screen, various attributes of the new VM can be changed. We are going to accept the settings unchanged. It is interesting to note that the disks will be imported in VMDK format, different from the original VDI format.
VB - Appliance Import Settings
Wait until the VM is imported:
VB - Appliance Import Progress
Import both VMs and copy the shared disk file rac_shared_disk1_copy.vdi into the parent directory (Virtual VMs). This disk could now be attached to both machines, but unfortunately the current version (4.1.18) of VirtualBox doesn't preserve the type of the disk when making a copy. Attach this disk to either of the imported VMs, then select it and review the disk information:
VB - attached disk type
In VirtualBox 4.1.18, the copied disk has the "Normal" type. If you have a newer version and the type is "Shareable", then this bug has been fixed and you can proceed to the other VM. If not, detach the disk, go to the Virtual Media Manager and change the disk type to "Shareable" as described above, then return to the virtual machines and attach the shared disk.
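Alternatively, the disk type can be changed from the command line (a minimal sketch; the file name matches the copy made earlier):

VBoxManage modifyhd rac_shared_disk1_copy.vdi --type shareable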
Start the new VMs. The clusterware should start automatically, but you will need to bring up the database. Log in as the oracle user and execute:
$ . oraenv
ORACLE_SID = [oracle] ? ractp
The Oracle base has been set to /u01/app/oracle

$ srvctl start database -d ractp
$
The RAC should be up and running!

Post Installation Optimization (Optional)

It has been noticed that, after a while, the ologgerd process can consume excessive CPU resources. You can check this by starting top and pressing the 'c' key to toggle between the command name and the full command line:
ologgerd process consumes CPU
The ologgerd daemon is part of the Oracle Cluster Health Monitor and is used by Oracle Support to troubleshoot RAC problems. If the ologgerd process is consuming a lot of CPU, you can stop it by executing the following on both nodes:
# crsctl stop resource ora.crf -init
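Afterwards you can confirm the resource state directly; the output below is a sketch and its exact form varies by version:

# crsctl status resource ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=OFFLINE
STATE=OFFLINE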
If you want to disable ologgerd permanently, then execute:
# crsctl delete resource ora.crf -init

Adding Internet Access (Optional)

Shut down the virtual machine. In the Network section of its settings, select Adapter 3, make sure it is enabled and attached to "Bridged Adapter". Restart the VM and check that you can ping the outside world from a terminal window:
ping yahoo.com
Repeat these actions on the other node.
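To confirm that the new adapter picked up an address from DHCP, you can inspect the interface inside the guest; here it is assumed that Adapter 3 appears as eth2, which may differ on your system:

# ifconfig eth2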
Please note that using the Bridged Adapter makes your VMs accessible on the same local area network your workstation or laptop is connected to, so the same rules apply to the network configuration of the VMs. By default, the VMs will use your local network's DHCP server to obtain IP addresses; optionally, you can assign static addresses. Even though your VMs will be reachable by any computer on the LAN, the Oracle listener does not listen on these IP addresses. If needed, these IPs can be added to the listener configuration, but this is beyond the scope of this article.

Clusterware and Database Monitoring

You can check the status of the clusterware by issuing this command:
# crsctl status resource -t
But it is much easier to use a tool that runs the same command at a pre-defined time interval and organizes the output into tabular form. For this we are going to use the freeware tool Lab128 Freeware. The installation is simple: unzip the files into a folder, then run the lab128fw.exe executable.
Since we will be running the monitoring tool on the host OS (Windows), the RAC should be reachable from the host. If you did everything as described above, Adapter 1 on both VMs is attached to "Host-only Adapter" and you should be able to ping both nodes:
ping 192.168.56.71
ping 192.168.56.72
It makes sense to add these IP addresses to the %SystemRoot%\System32\drivers\etc\hosts file:
#ORACLE SCAN LISTENER
192.168.56.91       rac-scan.localdomain   rac-scan
192.168.56.92       rac-scan.localdomain   rac-scan
192.168.56.93       rac-scan.localdomain   rac-scan

#RAC nodes
192.168.56.71        rac1
192.168.56.72        rac2
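After saving the hosts file, a quick sanity check from a Windows command prompt confirms that the names resolve:

C:\> ping rac1
C:\> ping rac2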
Start Lab128, then connect through SSH to one node: Main menu | Clusterware | Clusterware Commander, then enter User: oracle; Password: oracle; Hostname/IP: 192.168.56.71. Optionally, give this connection an "Alias name" and save it by pressing the "Store" button. Then press "Connect".
SSH Login to Clusterware Commander
The Clusterware Commander window presents the status of the clusterware components in a tabular view. The left columns show the Component/Resource Type, Name, Target and Online node counts. The remaining columns to the right represent the nodes in the cluster, with the values showing the state of the component on each node. The view is refreshed automatically at the rate defined in the "Refresh Rate, sec." box. The content of the tabular view can be customized by filtering out less important components. If you need more information about this window, press the F1 key.
Clusterware Commander Window
The tabular view can be used to manage the components through a pop-up menu. Right-click a cell in the node area; depending on the selected component and node, the menu offers various commands (see the picture above).
Additionally, you can start a Lab128 monitor for a specific database instance with many connection details filled in automatically. An example of the Login window is shown below. Just enter User: sys; Password: oracle. It is recommended to name this monitor in the "Alias Name" box (we named it RACTP1) and save it by pressing the "Store" button; this will save effort the next time we connect to the same instance. Then press the "Connect" button to open the Lab128 monitor:
Lab128 monitor login
You can read more about database monitoring with Lab128 by following the link Welcome to the World of Lab128.