Oracle RAC 11g Database on Linux Using VirtualBox
By Sergei Romanenko
August - December, 2012
This article describes the installation of Oracle Database 11g release 2 (11.2 64-bit) RAC on Linux (Oracle Linux 6.3 64-bit) using VirtualBox (4.1.14+).
- Introduction
- System Requirements
- Download Software
- Virtual Machine Setup
- Guest Operating System Installation
- Check Internet Access
- Oracle Clusterware Installation Prerequisites. Part 1
- Install Guest Additions
- Oracle Clusterware Installation Prerequisites. Part 2
- Network Setup
- Downloaded Oracle Installation Files
- Clone the Virtual Machine
- Create Shared Disks
- Install the Grid Infrastructure
- Install the Database
- Check the Status of the RAC
- Making Images of the RAC Database
- Restoring RAC from Saved Files
- Post Installation Optimization
- Clusterware and Database Monitoring
Introduction
If you want to get through all steps of the Oracle RAC installation and your laptop or desktop computer has 8 GB or more of RAM, then this is entirely feasible using Oracle VirtualBox as demonstrated in this article. You can get a running RAC system which can host a small test database. The created system is not, and should not be considered, a production-ready system. It's simply to allow you to get used to installing and using RAC and test various administration procedures. The article also explains how to save the images and restore RAC from the images in a matter of minutes. Even if you break your test system, it will be easy to restore.
This article uses the 64-bit versions of Oracle Linux, version 6.3, and Oracle 11g Release 2, version 11.2.0.3. Using VirtualBox you can run multiple Virtual Machines (VMs) on a single server, allowing you to run both RAC nodes on a single machine. In addition, it allows you to set up shared virtual disks. The finished system includes two guest operating systems, two sets of Oracle Grid Infrastructure (Clusterware + ASM) and two Database instances, all on a single server. The amount of disk space needed is about 32 GB; if you want to save images of the finished RAC, another 12 GB of disk space will be needed.
This article was originally inspired by the article "Oracle Database 11g Release 2 RAC On Linux Using VirtualBox" written by Tim Hall and published in his blog. It has since been almost entirely revised and reworked, so it now bears very little resemblance to the original work.
Note. When this article was written, Oracle Database 11g Release 2 (11.2.0.3) for Linux 64-bit (both clusterware and database) was available through Oracle support to licensed customers only. In the past, Oracle has made the latest version available to the general public fairly quickly, so I thought that using the latest version at the moment, with many bugs already fixed, would be the best way to go. But it is now the end of 2012 and 11.2.0.3 is still unavailable to the general public, even though this version is much better than any older one. I apologize for this inconvenience and suggest finding any possible way to get this version; it doesn't make sense to fight issues and research workarounds for bugs that are already fixed. Ask friends who have access to Oracle support to help. And, if you can, urge Oracle to make the latest version available for download.
As of now, 11.2.0.3 can be downloaded from the Oracle support site: go to "Patches & Updates", then "Latest Patchsets", then "Oracle Database", then "Linux x86-64", then "11.2.0.3.0". The number of this patch set is 10404530, and it is possible to jump to the download page using this number. This patch set is a full installation of the Oracle Database software, which means you do not need to install Oracle Database 11g Release 2 (11.2.0.1) before installing Oracle Database 11g Release 2 (11.2.0.3). For installing the RAC database you will need only 3 files:
- Oracle Database (includes Oracle Database and Oracle RAC), part 1: p10404530_112030_Linux-x86-64_1of7.zip (1.3G)
- Oracle Database (includes Oracle Database and Oracle RAC), part 2: p10404530_112030_Linux-x86-64_2of7.zip (1.1G)
- Oracle Grid Infrastructure (includes Oracle ASM, Oracle Clusterware): p10404530_112030_Linux-x86-64_3of7.zip (933M)
System Requirements
- 8 GB of RAM;
- 32 GB of free space on the hard disk;
- This procedure was tested on 64-bit Windows 7, although there should be no problems using VirtualBox on other host OSes. Please let me know if you had success or problems on other OSes;
Download Software
Download the following software.
- Oracle Linux;
- VirtualBox (Must be version 4.1.14 or later);
- Oracle 11g Release 2 (11.2) Software (64 bit). Please read the note about Oracle version in the Introduction section above.
Virtual Machine Setup
In this exercise, we are using VirtualBox installed on 64-bit Windows 7.
Now we must define the two virtual RAC nodes. We can save time by defining one VM, then cloning it when it is installed.
Start VirtualBox and click the "New" button on the toolbar. Click the "Next" button on the first page of the Virtual Machine Wizard.
Enter the name "rac1", OS "Linux" and Version "Oracle (64 bit)", and then click the "Next" button:
If you have 16 GB of RAM in your host system, then set Base Memory to 3072 MB, otherwise use 2048 MB, as in the screenshot below, then click the "Next" button:
Accept the default option to create a new virtual hard disk by clicking the "Next" button:
Accept the default VDI type and click the "Next" button on the Virtual Disk Creation Wizard welcome screen:
Accept the default "Dynamically allocated" option by clicking the "Next" button:
Accept the default location and set the size to "16G" and click the "Next" button:
Press the "Create" button on the Create New Virtual Disk Summary screen:
Press the "Create" button on the Create New Virtual Machine Summary screen:
The "rac1" VM will appear on the left hand pane. Click on the "Network" link on the right side:
Make sure "Adapter 1" is enabled, attached to "Internal Network" or "Host-only Adapter". This inetrface will be used for public network, for example, for connection to the RAC datbase from other applications. More about networking will be explained later. On the screenshot below "Internal Network" is selected and name "pubnet" was given to this network:
Then click on the "Adapter 2" tab. Make sure "Adapter 2" is enabled and attach to "Internal Network". Name this network "privnet":
Then finally click on the "Adapter 3", enable it, and attach to "Bridged Adapter" or "NAT". This adapter will be used for internet. Then press "OK" button:
Optionally, you can disable the audio card using the "Audio" link. This will save some space and avoid potential problems related to audio settings. Also, if your system has 4 CPU cores or more, it makes sense to allocate 2 CPUs to the virtual machine. You can do that in the "System" settings.
The virtual machine is now configured so we can start the guest operating system installation.
Guest Operating System Installation
Please note that during installation VirtualBox will keep the mouse pointer captured inside the VM area. To release it, press the Right Control key on the keyboard.
Place the Oracle Linux 6.3 (or newer) DVD in the DVD drive and skip the next two screenshots. If you don't have a DVD, download the .iso image and mount it in the virtual DVD drive. Select the "Storage" link on the right-hand pane of the VirtualBox Manager screen to open the "Storage" screen. Then select the DVD drive in the "Storage Tree" section:
In "Attributes" section click on the DVD disk icon and choose DVD .iso file. Note that name of the file shows in the Storage Tree. Then press 'OK":
Start the virtual machine by clicking the "Start" button on the toolbar. The resulting console window will contain the Oracle Linux boot screen. Proceed with the "Install or upgrade an existing system":
Do not perform the media test. Choose "Skip" button:
Continue through the Oracle Linux installation as you would for a normal server. On next three screens select Language, Keyboard, and Basic Storage Devices type. Confirm to discard any data.
Set "Hostname" to rac1.localdomain and press "Configure Network":
In the Network Connections screen select "System eth0" interface, which will be used for public network, and press "Edit":
Make sure that "Connect automatically" is checked. In "IPv6 Settings" tab make sure the Method is set to "Ignore". Select "IPv4 Settings" tab; change Method to "Manual"; Press "Add" and fill Address: 192.168.56.71; Netmask: 255.255.255.0; Gateway: 0.0.0.0. Press "Apply" then done:
In the Network Connections screen select "System eth1" interface, this will be used for private network, then press "Edit". Then check the box "Connect automatically". In "IPv6 Settings" tab make sure the Method is set to "Ignore". Select "IPv4 Settings" tab; change Method to "Manual". Press "Add" and fill Address: 192.168.10.1; Netmask: 255.255.255.0; Gateway: 0.0.0.0. When done, press "Apply":
Finally select "System eth2" interface, this will be used for Internet, then press "Edit". Check the box "Connect automatically". Select "IPv4 Settings" tab make sure the Method is set to "Automatic (DHCP)". In "IPv6 Settings" tab make sure the Method is set to "Ignore". Press "Apply" button:
Close the Network Connections screen and proceed to the next setup screen. Select the time zone and type in the Root Password: oracle.
Select "Use All Space" type of installation and check "Review and modify partitioning layout":
Edit size of lv_swap device to 1500 MB; then edit size of lv_root to 14380 MB. Press "Next":
Confirm through warnings and create partitions. Keep defaults in Boot loader screen.
In the software installation type screen select "Database Server" and check the "Customize now" option. Press "Next":
In the Customization screen select Database and uncheck all items; select Desktops and check "Desktop" and "Graphical Administration Tools"; then press Next and finish installation. Reboot.
When the system comes back, there will be a few more setup screens that are straightforward to handle. Don't create the 'oracle' account; this will be done later. Congratulations! Linux has been installed.
Check Internet Access
We will need Internet access because additional packages will be installed online. Open terminal and try to ping any Internet site, for example:
ping yahoo.com
If ping doesn't work, troubleshoot the problem using the 'ifconfig' command and making changes in Network Connections (Linux desktop Main menu | System | Preferences | Network Connections). If you made changes in Network Connections, restart the interface by rebooting the VM or running these two commands:
# ifdown eth0
# ifup eth0
Then check the ping again.
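If it still fails, a few standard commands help narrow down where the problem is (in this setup eth2 is the Internet-facing interface):

# ifconfig eth2           # did the interface receive an address from DHCP?
# route -n                # is there a default route?
# cat /etc/resolv.conf    # is a DNS nameserver configured?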
Oracle Clusterware Installation Prerequisites. Part 1
All actions in this section must be performed by the root user.
Run the Automatic Setup by installing the 'oracle-rdbms-server-11gR2-preinstall' package. This package performs the prerequisite setup, including kernel parameter changes and creation of the Linux oracle account:
# yum install oracle-rdbms-server-11gR2-preinstall
Note. You will probably not be able to copy and paste this command, so you will have to type it manually. We are going to fix that shortly by installing Guest Additions. For now, just type the commands.
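Once the package is installed, you can get a sense of what it did with a quick optional check (file locations can vary slightly between package versions):

# id oracle                              # the oracle account and its groups
# grep oracle /etc/security/limits.conf  # resource limits set for oracle
# sysctl kernel.sem fs.aio-max-nr        # two of the kernel parameters it adjusts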
Install ASMLib:
# yum install oracleasm
# yum install oracleasm-support
Configure ASMLib by running this command and answering the questions:
# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
#
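Because the '-i' flag only writes the configuration, the driver itself will be loaded at the next boot. If you want to load and check it right away, these ASMLib commands should do it (the output wording may differ between versions):

# oracleasm init       # load the driver and mount the ASMLib filesystem now
# oracleasm status     # report whether the driver is loaded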
Install Guest Additions
Guest Additions are optional, but highly recommended. They allow better mouse integration and bidirectional clipboard copying. Another important feature is support for shared folders, making files in the Host OS visible to the Guest. The remainder of this document assumes that Guest Additions are installed.
In order to install Guest Additions, reboot the just-created VM and log in as root. Then in the window menu select Devices | Install Guest Additions. Go through the download until you see the DVD Autorun screen:
Press "OK", then "Run" to start installation.
Note. The installation can fail, complaining about a missing kernel-uek-devel package and providing a 'yum' command to install this package. Run this command - that's why we need Internet access. Also install another package: 'yum install gcc'. Then reinstall Guest Additions by double-clicking the VBOXADDITIONS DVD icon on the desktop and clicking the "Open Autorun Prompt" button.
Reboot the machine. Now you should be much happier with VirtualBox!
Oracle Clusterware Installation Prerequisites. Part 2
Create the directory in which the Oracle software will be installed.
mkdir -p /u01
chown -R oracle:oinstall /u01
chmod -R 775 /u01/
Add the oracle account to the dba and vboxsf groups. The vboxsf group was created by VirtualBox Guest Additions and will allow the oracle user to access folders in the Host OS:
# usermod -G dba,vboxsf oracle
Reset the oracle user's password to oracle:
# passwd oracle
Changing password for user oracle.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
#
Disable secure linux by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.
SELINUX=disabled
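The edit takes effect at the next reboot. To stop SELinux enforcing immediately, without waiting for a reboot, you can additionally run the following (setenforce can only switch to Permissive at runtime, not fully disable SELinux):

# setenforce 0
# getenforce
Permissive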
Either configure NTP, or make sure it is not configured so the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes. In this case we will deconfigure NTP.
# service ntpd stop
Shutting down ntpd:                                        [FAILED]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid
Cleanup YUM repositories:
# yum clean all
Check file system usage; about 2.8 GB is used:
# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_rac1-lv_root
                      14493616   2865472  10891888  21% /
tmpfs                  1027556       272   1027284   1% /dev/shm
/dev/sda1               495844     77056    393188  17% /boot
#
Network Setup
All actions in this section must be performed by the root user.
Below is the layout of IP addresses used in the public and private networks. If you need to use other addresses, make the corresponding adjustments and remember to stay consistent with them throughout the rest of the article. Please note that the subnet 192.168.56.0 is the default configuration used by VirtualBox as the Host-only network connecting the host OS and virtual machines. VirtualBox also runs a DHCP server on this subnet, reserving the address range 100-254, so it is safe to use addresses below 100 for static addresses. You can verify these settings in: Main menu | File | Preferences | Network, then check the properties of the Host-only network. We are using this subnet for the RAC public network. Even if you don't need to connect from the Host OS to the RAC, and you used the VirtualBox "Internal Network" for Adapter 1, you can still use the proposed layout without changes.
Edit "/etc/hosts" file by appending the following information:
# Private
192.168.10.1    rac1-priv.localdomain   rac1-priv
192.168.10.2    rac2-priv.localdomain   rac2-priv

# Public
192.168.56.71   rac1.localdomain        rac1
192.168.56.72   rac2.localdomain        rac2

# Virtual
192.168.56.81   rac1-vip.localdomain    rac1-vip
192.168.56.82   rac2-vip.localdomain    rac2-vip

# SCAN
192.168.56.91   rac-scan.localdomain    rac-scan
192.168.56.92   rac-scan.localdomain    rac-scan
192.168.56.93   rac-scan.localdomain    rac-scan
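You can verify that the new names resolve with getent. Note that with a hosts file only the first of the three SCAN addresses will ever be returned - the compromise discussed in the note below:

# getent hosts rac1 rac2 rac1-priv rac2-priv rac1-vip rac2-vip rac-scan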
Note. The SCAN address should not really be defined in the hosts file. Instead it should be defined in DNS to round-robin between 3 addresses on the same subnet as the public IPs. For this installation, we will compromise and use the hosts file. If you are using DNS, then comment out the lines with the SCAN addresses.
We already set the IP addresses of all adapters during the Linux installation. If you followed the instructions, there is no need to change anything. But if you do need to change something, you can do so with the Network Connections tool: Linux desktop Main menu | System | Preferences | Network Connections.
Now we need to disable the firewall: Linux Main menu | System | Administration | Firewall. Click on the "Disable" icon, then on "Apply".
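The same can be done from a terminal, which is handy if the GUI tool is not available (standard RHEL 6 style service commands):

# service iptables stop
# chkconfig iptables off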
Downloaded Oracle Installation Files
There are two options to handle Oracle downloads:
- Downloading or transferring files into VM and uncompressing them in VM;
- Downloading and uncompressing in the Host OS, then making folders accessible to VM filesystem;
Obviously the second option is much better because it doesn't use the virtual disk of the Guest VM and will result in a smaller final image. Also, the installation files can easily be reused in another installation exercise. In this section we are going to set up VirtualBox Shared Folders.
It is assumed that you have already downloaded the Oracle installation files and uncompressed them into the "grid" and "database" folders. In our example these folders are in the "C:\TEMP\oracle_sw" folder.
C:\TEMP\oracle_sw>dir -l
total 0
drwx------+ 1 sromanenko Domain Users 0 Aug  5 18:10 database
drwx------+ 1 sromanenko Domain Users 0 Aug  5 03:08 grid
Shut down the VM. In the VirtualBox Manager click on the "Shared Folders" link in the right-hand pane. Add a shared folder by pressing the "plus" icon. Then select the path to the location of the Oracle software, and check both boxes "Read-only" and "Auto-mount":
Note. You can use any name in "Folder Name". If you have the Oracle installation files at a different location, you can overwrite that name with "oracle_sw". This will make it easier to follow the steps below.
Press "OK" to save this setting. Now Shared Folders should look like this:
Restart the VM and log in as the oracle user. Change directory to "/media/sf_oracle_sw" - this is where VirtualBox maps the Host OS shared folder. Note that VirtualBox added the prefix "sf_" to the name of the folder. List the content of the folder with 'ls':
$ cd /media/sf_oracle_sw
$ ls
database  grid
$
There is one package, 'cvuqdisk', that should be installed before the installation. Install it from the grid/rpm directory as the root user:
$ su root
Password:
# cd /media/sf_oracle_sw/grid/rpm
# rpm -Uvh cvuqdisk*
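A quick query confirms the package is in place (the exact version string depends on your patch set):

# rpm -q cvuqdisk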
Clone the Virtual Machine
Shutdown the VM.
In the VirtualBox Manager, in the Network settings for Adapter 1, make sure the network type it is attached to is "Host-only Adapter"; change it now if it is currently set otherwise.
Note. If you don't need access to the RAC database from the Host OS, then you can use "Internal Network" type of adapter. The RAC will be accessible from all other Virtual Machines in both cases. Optionally, if you need Internet access in future, this can be added after RAC is installed, see Adding Internet Access. For more details about type of Network adapters, see "Virtual Networking" chapter in the VirtualBox documentation.
In the VirtualBox Manager window start clone wizard: Main menu | Machine | Clone. Type "rac2" for the name of new machine. Make sure that "Reinitialize the MAC address of all network cards" is not checked. Then press "Next":
Keep default "Full Clone" option selected and press "Clone":
Start the cloned VM rac2 and log in as the root user. Then change the hostname by editing the HOSTNAME parameter in the "/etc/sysconfig/network" file:
HOSTNAME=rac2.localdomain
Start "Network Connections" tool (Main menu | System | Preferences | Network Connections). Edit eth0 and eth1 interfaces and set in IPv4 addresses 192.168.56.72 and 192.168.10.2 correspondingly.
Reboot system.
Now we need to change the MAC addresses of all three interfaces. At the moment we have two VMs with the same set of MAC addresses. We can run one machine or the other, but not both at the same time, because MAC addresses must be unique. No changes will be made to rac1; we will pick three new unused addresses and set them for eth0, eth1, and eth2 in rac2. The easiest way to do that is to change just the last two characters of each address. We are going to change them to '00'. If the last two characters are already '00', then change them to something else, '01' for example. Just make sure that these addresses don't collide with the MAC addresses of rac1. In the running rac2 node, open "Network Connections" and edit the MAC address in the "Wired" tab. The screenshot below shows where to set the MAC address. Don't forget to change the MAC addresses of all three interfaces. Please note that your setup will have a different set of MAC addresses because they are randomly generated by VirtualBox.
Write down the new MAC addresses of all three interfaces. Save the new settings by pressing the "Apply" button, then shut down the machine. After shutdown, return to the VirtualBox Manager, select the rac2 VM and edit the "Network" settings. Make the same changes to the MAC addresses. Don't forget to change the MAC addresses of all three adapters.
Start both machines and check that they can ping each other using both public and private network. For example, on rac1:
$ ping rac2
$ ping rac2-priv
If you have problems, use 'ifconfig' command to check the configuration, then correct the problem using "Network Connections" tool.
Create Shared Disks
Shut down both virtual machines. We need to create a new virtual disk, change its attribute to Shareable and add it to both VMs. In the current version of VirtualBox, the only way to create a new disk in the GUI is through the "Storage" page in the virtual machine's settings. Select either the rac1 or rac2 VM, then click on the "Storage" link. Select "SATA Controller" and click on the "Add Hard Disk" icon. If you are not sure which icon to use, the same action is available through the popup menu: right-click on the "SATA Controller" and select "Add Hard Disk".
Press "Create new disk":
Accept the default VDI type and click the "Next" button on the Virtual Disk Creation Wizard welcome screen:
Select "Fixed size" option and press the "Next" button:
Change the name and location of this disk. You can keep this file in the default location - the folder of the selected VM. However, because this disk is shared, it is better to put it in the parent directory. So, instead of the "...\VirtualBox VMs\rac1" directory, place it in "...\VirtualBox VMs". Set the size to "2400 MB" - this will result in about 400 MB of free space in the ASM group when everything is installed. If you need more space, you can choose a bigger size. And, regardless of what you decide now, it will be possible to add more shared disks to the ASM group after everything is installed.
Create the new disk; it will already be attached to the VM.
Select this new disk. You will see in the disk Information that the type of this disk is "Normal". There was no option in the previous dialog windows to create the new disk as "Shareable", and once it is attached, this attribute cannot be changed. This is a limitation of the GUI, so we have to work around it: click on the "Remove Attachment" icon, which returns this VM to its previous storage configuration. Close the "Storage" page.
What is different now is that there is a new disk registered in VirtualBox. We will use the Virtual Media Manager (Main menu | File | Virtual Media Manager) to change its attributes. Select this new disk in the Virtual Media Manager:
Click on "Modify" icon and select "Shareable":
Attach this existing disk to each VM using the "Storage" page. Don't forget to select the correct controller before attaching the disk, and use the "Choose existing disk" option.
In the end the "Storage" section of both VMs should be looking like this:
Start either of the machines and log in as root. The current disks can be seen by issuing the following commands.
# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb
#
Use the "fdisk" command to partition the new disk "sdb".
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xd724aa83.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-305, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-305, default 305):
Using default value 305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
#
The sequence of answers is "n", "p", "1", "Return", "Return" and "w".
Once the new disk is partitioned, the result can be seen by repeating the previous "ls" command.
# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdb1
#
Mark the new shared disk in the ASMLib as follows.
# oracleasm createdisk DISK1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
#
Run the "scandisks" command to refresh the ASMLib disk configuration.
# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
#
We can see the disk is now visible to ASM using the "listdisks" command.
# oracleasm listdisks
DISK1
#
Start another VM and log in as root. Check that the shared disk is visible to ASM using the "listdisks" command.
# oracleasm listdisks
DISK1
#
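If you ever need to double-check which device backs the label, ASMLib can report it (a quick sketch; the output format varies slightly between versions):

# oracleasm querydisk -d DISK1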
The virtual machines and shared disks are now configured for the grid infrastructure!
Install the Grid Infrastructure
Make sure the "rac1" and "rac2" virtual machines are started, then login to "rac1" or switch the user to oracle and start the Oracle installer.
$ cd /media/sf_oracle_sw/grid
$ ./runInstaller
Select "Skip software updates" option, press "Next":
Select the "Install and Configure Grid Infrastructure for a Cluster" option, then press the "Next" button.
Select the "Advanced Installation" option, then click the "Next" button.
On the "Grid Plug and Play information" screen, change Cluster Name to "rac-cluster" and SCAN Name to "rac-scan.localdomain", uncheck "Configure GNS" box, then press the "Next" button.
On the "Cluster Node Configuration" screen, click the "Add" button.
Enter the details of the second node in the cluster, then click the "OK" button.
Click the "SSH Connectivity..." button and enter the password for the "oracle" user. Click the "Setup" button to configure SSH connectivity, and the "Test" button to test it once it is complete. Then press "Next".
On the "Specify Network Interface Usage" screen check the public and private networks are specified correctly. Press the "Next" button.
On the "Storage Option Information" screen keep Oracle ASM option selected and press "Next".
On the "Create ASM Disk Group" screen click on "Change Discovery Path" button:
Then enter "/dev/oracleasm/disks" and press "OK":
Keep "Disk Group Name" unchanged. Select "External" redundancy option. Check "/dev/oracleasm/disks/DISK1" in the "Add Disks" section. When done, press "Next".
On the "Specify ASM Password" screen select "Use same passwords for these accounts" option and type in "oracle" password, then press "Next". Ignore warnings about password weakness.
Keep defaults on the "Failure Isolation Support" and press "Next".
Keep defaults on the "Privileged Operating System Groups" and press "Next".
Keep suggested paths unchanged on the "Specify Installation Location" and press "Next".
Keep suggested path unchanged on the "Create Inventory" and press "Next".
The results of the prerequisite checks are shown on the next screen. You should see two warnings and one failure. The failure is caused by the inability to look up the SCAN in DNS and should be expected. Check the "Ignore All" box and press "Next".
Press "Install" on the Summary screen.
Wait while the setup takes place...
When prompted, run the configuration scripts on each node.
Execute scripts as root user, first in rac1, then in rac2.
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/11.2.0/grid/root.sh
#
When running root.sh you will be asked about the location of the bin directory; press Enter to accept the default. The output of root.sh should finish with "Configure Oracle Grid Infrastructure for a Cluster ... succeeded". If the script fails, correct the problem and restart it.
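If root.sh failed partway through and a simple restart does not help, 11.2 provides the rootcrs.pl script to wipe the failed configuration on a node before rerunning root.sh; verify the exact procedure in the Grid Infrastructure installation guide for your version before using it:

# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force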
Once the scripts have completed, return to the "Execute Configuration Scripts" screen on "rac1", click the "OK" button and wait for the configuration assistants to complete.
We expect the verification phase to fail with an error relating to the SCAN:
Here are the offending lines from the log file:
INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking TCP connectivity to SCAN Listeners...
INFO: TCP connectivity to SCAN Listeners exists on all cluster nodes
INFO: Checking name resolution setup for "rac-scan.localdomain"...
INFO: ERROR:
INFO: PRVG-1101 : SCAN name "rac-scan.localdomain" failed to resolve
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.56.71) failed
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.56.72) failed
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.56.73) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: Verification of SCAN VIP and Listener setup failed
Provided this is the only error, it is safe to ignore this and continue by clicking the "Next" button. Close the Configuration Assistant on the next screen.
Check the status of running clusterware. On rac1 as root user:
# . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle
# crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
#
You should see various clusterware components running on both nodes. The grid infrastructure installation is now complete!
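Two more standard 11.2 clusterware commands give a quick health summary of the stack and of the node list:

# crsctl check cluster -all
# olsnodes -n -s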
Check filesystem usage; about 6.5 GB is used:
$ df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_rac1-lv_root
                      14493616   6462704   7294656  47% /
tmpfs                  1027552    204708    822844  20% /dev/shm
/dev/sda1               495844     77056    393188  17% /boot
$
Install the Database
Make sure the "rac1" and "rac2" virtual machines are started, then login to "rac1" or switch the user to oracle and start the Oracle installer.
$ cd /media/sf_oracle_sw/database
$ ./runInstaller
Uncheck the "I wish to receive security updates..." checkbox and press the "Next" button:
Check "Skip software updates" checkbox and press the "Next" button:
Accept the "Create and configure a database" option and press the "Next" button:
Accept the "Server Class" option and press the "Next" button:
Make sure "Oracle Real Application Cluster database installation" is chosen and both nodes are selected, and then press the "Next" button.
Select the "Advanced install" option and press the "Next" button:
Select Language on next screen and press the "Next" button.
Accept "Enterprise Edition" option and press the "Next" button:
Accept default installation locations and press the "Next" button:
Accept "General Purpose / Transaction Processing" option and press the "Next" button:
You can keep default "orcl" database name or define your own. We used "ractp":
On the "Configuration Options" screen reduce the amount of allocated memory to 750 MB - this will avoid excessive swapping and will run smoother. You are free to explore other tabs and set whatever suits your needs.
Accept default Management option and press the "Next" button:
Accept "Oracle Automatic Storage Management" option and type in "oracle" password, then press the "Next" button:
Accept default "Do not enable automated backups" option and press the "Next" button:
Review ASM Disk Group Name which will be used by the database and press the "Next" button:
Select "Use the same password for all accounts" option, type in "oracle" password, then press the "Next" button:
Select "oinstall" group for both Database Administrator and Database Operator groups, then press the "Next" button:
Wait for the prerequisite check to complete. If there are any problems, either fix them, or check the "Ignore All" checkbox and click the "Next" button.
If you are happy with the summary information, click the "Install" button.
Wait while the installation takes place.
Once the software installation is complete the Database Configuration Assistant (DBCA) will start automatically.
Once the Database Configuration Assistant (DBCA) has finished, click the "OK" button.
When prompted, run the configuration scripts on each node. When the scripts have been run on each node, click the "OK" button.
Execute scripts as root user in both nodes:
# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
#
Click the "Close" button to exit the installer.
The RAC database creation is now complete!
Check the Status of the RAC
There are several ways to check the status of the RAC. The srvctl utility shows the current configuration and status of the RAC database:

$ . oraenv
ORACLE_SID = [oracle] ? ractp
The Oracle base has been set to /u01/app/oracle
$ srvctl config database -d ractp
Database unique name: ractp
Database name: ractp
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ractp/spfileractp.ora
Domain: localdomain
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ractp
Database instances: ractp1,ractp2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed
$ srvctl status database -d ractp
Instance ractp1 is running on node rac1
Instance ractp2 is running on node rac2
$
The V$ACTIVE_INSTANCES view can also display the current status of the instances:

$ export ORACLE_SID=ractp1
[oracle@rac1 Desktop]$ sqlplus / as sysdba

SELECT inst_name FROM v$active_instances;

INST_NAME
--------------------------------------------------------------------------------
rac1.localdomain:ractp1
rac2.localdomain:ractp2

exit
$
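Another common check is the GV$INSTANCE view, which reports every instance in the cluster from a single session:

SELECT inst_id, instance_name, host_name, status FROM gv$instance;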
Making Images of the RAC Database
At any point earlier we could have saved the image of a created virtual machine and then restored it at will. Here we are going to save images of the newly created Oracle RAC system, which we can restore on the same system or even hand over to another location and restore in a matter of minutes!
The export of a VM is a straightforward process, and saving RAC images would be an easy task if not for the shared disk. In my view the simplest way to handle it is to detach the shared disk from both nodes and take care of the three parts (two self-contained VMs and one shared disk) separately. In the end there will be three files: two files for the VMs and one file representing the shared disk. These three files can be further zipped by your favorite archiver into one file, which can be used for storage or transfer. After the export is done, the shared disk can easily be attached back to the nodes. The same is true for the import of the VMs back into VirtualBox along with the copy of the shared disk - the shared disk is attached to the imported VMs as an extra step. Let's perform all these actions.
Clean Shutdown of RAC
But first, we need to shut down the servers in a nice and clean manner because we want to save them in a robust state. Shut down the database. As the oracle user, execute on any node:
$ . oraenv
ORACLE_SID = [oracle] ? ractp
The Oracle base has been set to /u01/app/oracle
$ srvctl stop database -d ractp
$
Shutdown the clusterware on the first node. As root user execute:
# . oraenv
ORACLE_SID = [ractp1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
# crsctl stop crs
...
CRS-4133: Oracle High Availability Services has been stopped.
#
Shutdown the clusterware on the second node. As root user execute:
# . oraenv
ORACLE_SID = [ractp1] ? +ASM2
The Oracle base remains unchanged with value /u01/app/oracle
# crsctl stop crs
...
CRS-4133: Oracle High Availability Services has been stopped.
#
Shutdown both virtual machines. Wait until all VM windows are closed.
Detach Shared Disk and Make a Copy Of It
In the VirtualBox Manager open Virtual Media Manager: Main menu | File | Virtual Media Manager. Then select the disk used by the RAC (rac_shared_disk1.vdi). Note that this disk shows as attached to rac1 and rac2 VMs:
Click on "Release" icon and then confirm in the pop-up window. Note that this disk now shows as "Not attached". Click on "Copy" to start Disk Copying Wizard.
Accept Virtual disk to copy and press "Next".
Accept Virtual disk file type as VDI and press "Next".
Select "Fixed size" and press "Next".
On the next screen you can set location and name of the new file. When done, press "Next".
On the Summary screen review details and press "Copy" to complete copying. Close the Media Manager when copying is done.
Note. Do not simply copy the .vdi file, because the copy would retain the same disk UID and VirtualBox would refuse to use it since such a disk already exists. When copying through the Virtual Media Manager, a new UID is assigned automatically.
Export VMs
In the VirtualBox Manager select VM, then call Appliance Export Wizard: Main menu | File | Export Appliance. Exporting is generally as simple as saving a file. Export both VMs.
Now you should have 3 files that can be further zipped into a single file with a total size of about 12 GB.
Re-attach Shared Disk to the Original RAC Setup
Fix our current working RAC setup by re-attaching the shared disk to the rac1 and rac2 VMs using the "Storage" page. Don't forget to select the correct controller before attaching the disk:
Press "Add Hard Disk" icon and use "Choose Existing Disk" to attach
rac_shared_disk1.vdi
. Once Shared disk is attached to both VMs, the RAC is ready to run.Restoring RAC from Saved Files
In this section we will import the RAC from the saved files, creating a second RAC system. Don't run both RACs at the same time because they will have the same network attributes.
Open Appliance Import Wizard: Main menu | File | Import Appliance. Choose the file and press "Next":
On the Appliance Import Settings screen, different attributes of the new VMs can be changed. We are going to accept the settings unchanged. It is interesting to note that the disks are going to be imported in the VMDK format, different from the original VDI format.
Wait until the VM is imported:
Import both VMs and copy the shared disk file rac_shared_disk1_copy.vdi into the parent directory (VirtualBox VMs). This disk could be attached to both machines, but unfortunately the current version (4.1.18) of VirtualBox doesn't preserve the type of the disk when making a copy. Attach this disk to either of the imported VMs, then select it and review the disk information:
In VirtualBox 4.1.18, the copied disk has the "Normal" type. If you have a newer version and the type is "Shareable", then this bug has been fixed and you can proceed to the other VM. If not, detach the disk, go to the Virtual Media Manager and change the disk type to "Shareable" as described above, then return to the virtual machines and attach the shared disk.
Start the new VMs. The clusterware should start automatically, but you will need to bring up the database. Log in as the oracle user and execute:
$ . oraenv
ORACLE_SID = [oracle] ? ractp
The Oracle base has been set to /u01/app/oracle
$ srvctl start database -d ractp
$
The RAC should be up and running!
Post Installation Optimization (Optional)
It has been noticed that after a while the ologgerd process can consume excessive CPU resources. You can check that by starting top, then pressing the 'c' key (cmd name/line):
The ologgerd daemon is part of the Oracle Cluster Health Monitor and is used by Oracle Support to troubleshoot RAC problems. If the ologgerd process is consuming a lot of CPU, you can stop it by executing on both nodes:
# crsctl stop resource ora.crf -init
If you want to disable ologgerd permanently, then execute:
# crsctl delete resource ora.crf -init
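If you only stopped the resource and later want the Cluster Health Monitor back, starting it again is symmetric (this applies to the stop case; after a delete the resource would have to be re-registered):

# crsctl start resource ora.crf -init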
Adding Internet Access (Optional)
Shut down the virtual machine. In the Network section of the settings, select Adapter 3, make sure it is enabled and attached to "Bridged Adapter". Restart the VM and check that you can ping the outside world from a terminal window:
ping yahoo.com
Repeat all these actions on the other node.
Please note that using the Bridged Adapter makes your VMs accessible on the same local area network your workstation / laptop is connected to, so the same rules apply to the network configuration of the VMs. By default, the VMs will use your local network's DHCP to obtain an IP address; optionally you can assign static addresses. Even though your VMs will be accessible to any computers on the LAN, the Oracle listener is not listening on these IP addresses. If needed, these IPs can be added to the listener, but this is beyond the scope of this article.
Clusterware and Database Monitoring
You can check the status of clusterware by issuing this command:
# crsctl status resource -t
But it is much easier to use a tool which runs the same command at a pre-defined interval and organizes the output into tabular form. For this we are going to use the freeware tool Lab128 Freeware. The installation is simple: unzip the files into some folder, then run the lab128fw.exe executable.
Since we will be running the monitoring tool in the host OS (Windows), the RAC should be accessible to the host OS. If you did everything as described above, you should have Adapter 1 on both VMs attached to "Host-only Adapter" and you should be able to ping both nodes:
ping 192.168.56.71
ping 192.168.56.72
It will make sense to add these IP addresses to the %SystemRoot%\System32\drivers\etc\hosts file:
#ORACLE SCAN LISTENER
192.168.56.91   rac-scan.localdomain    rac-scan
192.168.56.92   rac-scan.localdomain    rac-scan
192.168.56.93   rac-scan.localdomain    rac-scan
#rac nodes
192.168.56.71   rac1
192.168.56.72   rac2
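If an Oracle client is installed on the host, you can also sanity-check connectivity through the SCAN with an EZConnect string before setting up the monitoring tool. The service name ractp.localdomain follows from the database name and domain used earlier; adjust it if you chose different names:

sqlplus system/oracle@//rac-scan.localdomain:1521/ractp.localdomain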
Start Lab128, then connect through SSH to one node: Main menu | Clusterware | Clusterware Commander, then enter User: oracle; Password: oracle; Hostname/IP: 192.168.56.71. Optionally give the "Alias name" to this connection and save it by pressing "Store" button. Then press "Connect".
The Clusterware Commander window presents the status of clusterware components in a tabular view. Left columns present the Component/Resource Type, Name, Target and Online node counts. The remaining columns to the right represent nodes in the cluster with the values showing the state of the component on that node. This view is refreshed automatically with the rate defined in the "Refresh Rate, sec." box. The tabular view content can be customized by filtering out less important components. If you need more information about this window, press F1 key.
The tabular view can be used to manage the components through the pop-up menu. Right-click on the cell in the node area. Depending on the context of the selected component and the node, the menu will offer various commands, see the picture above.
Additionally, you can start a Lab128 monitor for a specific database instance with many connection details filled in automatically. An example of the Login window is shown below. Just enter User: sys; Password: oracle. It is recommended to name this monitor in the "Alias Name" box (we named it RACTP1) and save it by pressing the "Store" button. This will save effort next time when connecting to the same instance. Then press the "Connect" button to open the Lab128 monitor:
You can read more about database monitoring with Lab128 using this link Welcome to the World of Lab128.