The purpose of this article is to help you configure an Oracle RAC cluster. It assumes a basic understanding of the Oracle database and the operating system.
1) RAC Database Storage & Cluster File System Requirements
All datafiles, control files, redo log files, and the server parameter file (SPFILE) in Oracle RAC environments must reside on shared storage that is accessible by all the instances. There are several solutions available for storage:
- Vendor cluster file systems such as VCFS on Solaris or General Parallel File System (GPFS) on IBM platforms
- Shared storage such as NetApp filers
- Oracle Cluster File System (OCFS) for Microsoft Windows and Linux
- Raw devices with ASM
In addition to the shared storage, each node requires two local software homes:
- CRS_HOME, in which the CRS software will be installed.
- ORACLE_HOME, from which the Oracle RDBMS and ASM instances will run.
As the root user, verify the network configuration by using the ping command to test the connection from devrac1 to devrac2 and the reverse; the exact commands are listed in the network configuration checks below. In addition, the following prerequisites must be met on each node:
- SSH trust should be established between the nodes (see the sketch after this list).
- RSH is also required during installation of the software.
- The login banner should be disabled.
- User equivalence should be present for the Oracle software owner on all nodes.
- Raw devices should be initialized and owned by the Oracle software owner.
- An X terminal tool such as Exceed is required for the installation.
- All prerequisite OS patches and kernel settings must be applied; refer to the installation guide for details.
- Run the Cluster Verification Utility: /staging_area/clusterware/cluvfy/runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
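A minimal sketch of establishing the SSH trust and user equivalence mentioned above, assuming the Oracle software owner is oracle and the nodes are devrac1 and devrac2 (adapt names and paths to your environment); generate a key on each node, then exchange the public keys:

$ ssh-keygen -t rsa                                                            # run on each node as oracle; accept defaults
$ cat ~/.ssh/id_rsa.pub | ssh oracle@devrac2 'cat >> ~/.ssh/authorized_keys'   # run on devrac1: copy its key to devrac2
$ ssh oracle@devrac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys           # run on devrac1: pull devrac2's key back
$ chmod 600 ~/.ssh/authorized_keys                                             # repeat on both nodes
$ ssh devrac2 date                                                             # should return the date without a password prompt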
Minimum number of raw devices required:
- Two raw devices will be required for the OCR disk. This will contain the cluster configuration, such as which database instances run on which nodes and which services run on which databases.
- Four raw devices will be required for the database storage in which the database will reside.
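A hedged example of one way to bind and initialize these raw devices on Linux and give them the required ownership; the partition names (/dev/sdb1 and so on) and the oracle:oinstall owner are assumptions that must match your environment:

# raw /dev/raw/raw1 /dev/sdb1                         # bind raw device 1 to a shared partition (repeat for the remaining devices)
# chown oracle:oinstall /dev/raw/raw1                 # raw devices must be owned by the Oracle software owner
# chmod 660 /dev/raw/raw1
# dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=100    # optional: clear any old headers from the device

Add the bindings to /etc/sysconfig/rawdevices (or the equivalent on your platform) so that they persist across reboots.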
2) RAC ORACLE_HOME Software Requirements
Two separate ORACLE_HOMEs will be installed on each node of the cluster: the CRS_HOME for the Clusterware software and the ORACLE_HOME for the RDBMS and ASM software.
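As a hedged illustration of how the two homes might be laid out, the oracle user's environment on each node could look like the following; the paths and the instance name are hypothetical and should match your actual installation:

export ORACLE_BASE=/u01/app/oracle
export ORA_CRS_HOME=/u01/app/oracle/product/10.2.0/crs     # CRS_HOME for the Clusterware software
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1     # ORACLE_HOME for the RDBMS and ASM software
export ORACLE_SID=devdb1                                    # instance name on this node (hypothetical)
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH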
3) Clusterware Requirements
Since Oracle CRS is included with the RAC license at no extra cost, third-party clusterware such as Veritas VCS or IBM HACMP is not required for 10g RAC.
4) NIC Card Requirements
Each node needs at least two network interface cards (network adapters). One adapter is for the public network and the other is for the private network used by the interconnect. You should install additional network adapters on a node if that node uses SAN or NAS storage. The private interconnect is a separate network that you configure between the cluster nodes; it serves as the communication path between the nodes in the cluster. RAC uses it to transmit the data blocks that are shared between the instances. This interconnect should be private, meaning it must not be accessible by nodes that are not members of the cluster.
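A hedged way to confirm the two adapters on each node, assuming Linux and the eth0/eth1 interface names used in the examples below:

# /sbin/ifconfig eth0    # public interface: should show the node's public IP address
# /sbin/ifconfig eth1    # private interface: should show the private interconnect address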
5) IP Addresses & Node Names
Three IP addresses and node names are required for each node:
- One public IP address for the public node name. For the public node name, use the primary host name of each node, that is, the name displayed by the hostname command (for example, node1-pub).
- One private IP address for the private host name on the private interface (for example, 10.*.*.* or 192.168.*.*). Oracle Database uses the private IP addresses for instance-to-instance Cache Fusion traffic (an example private node name would be node1-priv).
- One virtual IP address with an associated network name (virtual host name) on the same subnet as your public interface. The virtual host name for each node should be registered with your DNS. If you do not have an available DNS, record the virtual host name and IP address in the system hosts file, /etc/hosts.
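As a hedged check, assuming DNS is available and the host names follow the examples in the next section, you can confirm that the virtual host names resolve:

# nslookup devrac1-vip.mycompany.com
# nslookup devrac2-vip.mycompany.com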
Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter eth0, then you must configure eth0 as the public interface on all nodes. Private interface names must likewise be the same for all nodes: if eth1 is the private interface name for the first node, then eth1 should be the private interface name for your second node. For the private network, the end points of all designated interconnect interfaces must be completely reachable on the network; every node must be accessible from every other node in the cluster over the private network.
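One way to verify this reachability, sketched here on the assumption that the staging area path from the earlier cluvfy example is valid for your installation, is the node connectivity component check of the Cluster Verification Utility:

# /staging_area/clusterware/cluvfy/runcluvfy.sh comp nodecon -n devrac1,devrac2 -verbose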
6) Network Configuration Checks for RAC
Node      Node Name      Type      IP Address       Registered In
devrac1   devrac1        Public    143.46.43.100    DNS (if available, else the hosts file)
devrac1   devrac1-vip    Virtual   143.46.43.104    DNS (if available, else the hosts file)
devrac1   devrac1-priv   Private   10.10.10.11      Hosts file
devrac2   devrac2        Public    143.46.43.101    DNS (if available, else the hosts file)
devrac2   devrac2-vip    Virtual   143.46.43.105    DNS (if available, else the hosts file)
devrac2   devrac2-priv   Private   10.10.10.12      Hosts file
Entries in the hosts file (/etc/hosts):
127.0.0.1       localhost.localdomain localhost
143.46.43.100   devrac1.mycompany.com devrac1
143.46.43.104   devrac1-vip.mycompany.com devrac1-vip
10.10.10.11     devrac1-priv
143.46.43.101   devrac2.mycompany.com devrac2
143.46.43.105   devrac2-vip.mycompany.com devrac2-vip
10.10.10.12     devrac2-priv
As the root user, run the following commands on each node to verify connectivity:
# ping -c 3 devrac1.mycompany.com
# ping -c 3 devrac1
# ping -c 3 devrac1-priv
# ping -c 3 devrac2.mycompany.com
# ping -c 3 devrac2
# ping -c 3 devrac2-priv
You will not be able to reach the virtual IPs (devrac1-vip, devrac2-vip) with the ping command until after Oracle Clusterware is installed and running. If the ping commands for the public or private addresses fail, resolve the issue before you proceed. Also ensure that you can reach the default gateway with a ping command.
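As a hedged example, the gateway address 143.46.43.1 below is hypothetical; identify your own default gateway first and substitute it:

# /sbin/route -n | grep '^0.0.0.0'    # the gateway column shows the default gateway address
# ping -c 3 143.46.43.1               # substitute your actual default gateway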
7) Other Pre-Requisites