Wednesday, June 15, 2011

Oracle 10g RAC on Linux Using NFS

There are various ways to implement RAC. In this article I focus on implementing it using NFS (Network File System), which I have done in practice on OEL 4.6 on HP ProLiant servers. You can also try this on VMware using ASM (Automatic Storage Management), since ASM is more widely used these days because of its many benefits; the concepts remain the same, only the shared storage setup and the related steps change.

This article describes the installation of Oracle 10g Release 2 RAC on Linux (Oracle Enterprise Linux 4.6) using NFS to provide the shared storage.
• Introduction
• Download Software
• Operating System Installation
• Oracle Installation Prerequisites
• Create Shared Disks
• Install the Clusterware Software
• Install the Database Software
• Create a Database using the DBCA
• TNS Configuration
• Check the Status of the RAC
• Direct and Asynchronous I/O

NFS is an abbreviation of Network File System, a platform-independent technology created by Sun Microsystems (now Oracle) that allows shared access to files stored on computers via an interface called the Virtual File System (VFS) that runs on top of TCP/IP. Computers that share files are considered NFS servers, while those that access shared files are considered NFS clients. An individual computer can be an NFS server, an NFS client, or both.

We can use NFS to provide shared storage for a RAC installation. In a production environment we would expect the NFS server to be a NAS, but for testing it can just as easily be another server or even one of the RAC nodes itself.

To cut costs, this articles uses one of the RAC nodes as the source of the shared storage. Obviously, this means if that node goes down the whole database is lost, so it's not a sensible idea to do this if you are testing high availability. If you have access to a NAS or a third server you can easily use that for the shared storage, making the whole solution much more resilient. Whichever route you take, the fundamentals of the installation are the same.

2) Download Software:
Download the following software.
• Oracle Enterprise Linux (4.6)
• Oracle 10g Release 2 Clusterware (CRS) and Database software

3)Operating System Installation:
This article uses Oracle Enterprise Linux 4.6, but it will work equally well on CentOS 4 or Red Hat Enterprise Linux (RHEL) 4. It should be a server installation with a minimum of 2.5 GB swap, the firewall and Secure Linux (SELinux) disabled, and the following package groups installed:
• X Window System
• GNOME Desktop Environment
• Editors
• Graphical Internet
• Server Configuration Tools
• FTP Server
• Development Tools
• Legacy Software Development
• Administration Tools
• System Tools
To be consistent with the rest of the article, the following information should be set during the installation:

Step 1: Operating System installation (Oracle Enterprise Linux 4.6)
(On both Cluster Machines atpl131 and atpl136)

1. Install Oracle Enterprise Linux 4.6 on both machines (at least 10-15 GB root space, 2.5 GB swap space, 1 GB /tmp space and 128 MB boot space), then install the patches as explained below.
2. During installation, give the machine name as "atpl131.server"; the domain should be "server" only.
3. Ensure that both machines have two Ethernet cards (eth0 and eth1).
4. Connect the first Ethernet card (eth0) of both machines with a cross cable; this forms the private network between the atpl131 and atpl136 machines.
5. Connect the other Ethernet card (eth1) of both machines via a router; this connects both machines to the public network.
6. Ensure that on both machines both network cards are activated and the network cables are connected properly: eth0 is the private and eth1 the public Ethernet card.
7. Both machines atpl131 and atpl136 must have the same time (a difference of up to 1 second is acceptable, but a larger time difference may cause errors).
8. In the file listings below, "#" at the start of a file line means a comment; elsewhere "#" is the root shell prompt.
9. The public and virtual IP addresses must be of the same class (having the same subnet).

hostname: atpl131.server
hostname: atpl131.server
IP Address eth1: (public address)
Default Gateway eth1: (public gateway)
IP Address eth0: (private address)
Default Gateway eth0: none
Virtual IP Address: (virtual address)

hostname: atpl136.server
IP Address eth1: (public address)
Default Gateway eth1: (public gateway)
IP Address eth0: (private address)
Default Gateway eth0: none
Virtual IP Address: (virtual address)

You can choose IP addresses as per your network requirements, but they must remain the same throughout the process.
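The clock requirement in item 7 above can be sanity-checked with a small script. This is only a sketch: the drift_ok helper is hypothetical, and the rsh call in the usage comment assumes the remote-shell equivalence configured later in this article.

```shell
# Hypothetical helper: succeeds only when two epoch timestamps
# differ by at most one second.
drift_ok() {
  d=$(( $1 - $2 ))
  [ "$d" -lt 0 ] && d=$(( -d ))
  [ "$d" -le 1 ]
}

# On a live cluster you would compare the two nodes' clocks, e.g.:
#   drift_ok "$(date +%s)" "$(rsh atpl136 date +%s)" || echo "clocks differ"
```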

Step 2:Install Packages
(On both Cluster Machines atpl131 and atpl136)

Once the basic installation is complete, install the following packages as the root user. Install these four packages:
compat-gcc-7.3-2.96.128.i386.rpm, compat-libstdc++-devel-7.3-2.96.128.i386.rpm
compat-libstdc++-7.3-2.96.128.i386.rpm, compat-gcc-c++-7.3-2.96.128.i386.rpm

# From Oracle Enterprise Linux Patch CD (or from Linux S/W CDs)
# cd /media/cdrecorder/patch
# rpm -Uvh compat-gcc-7.3-2.96.128.i386.rpm
# rpm -Uvh compat-gcc-c++-7.3-2.96.128.i386.rpm
# rpm -Uvh compat-libstdc++-7.3-2.96.128.i386.rpm
# rpm -Uvh compat-libstdc++-devel-7.3-2.96.128.i386.rpm
# cd /
# eject
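As a quick sanity check afterwards, you can confirm the four compat packages registered with RPM. The compat_ok function below is a hypothetical helper: it reads the output of `rpm -qa` on stdin and reports any of the four packages that are missing.

```shell
# Hypothetical helper: reads `rpm -qa` output on stdin and verifies
# that each of the four compat packages appears.
compat_ok() {
  qa=$(cat)
  for p in compat-gcc-7.3 compat-gcc-c++-7.3 \
           compat-libstdc++-7.3 compat-libstdc++-devel-7.3; do
    printf '%s\n' "$qa" | grep -q "^$p" || { echo "missing: $p"; return 1; }
  done
}

# Usage on a node:
#   rpm -qa | compat_ok && echo "compat packages OK"
```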

Oracle Installation Prerequisites
Perform the following steps as the root user.
The /etc/hosts file must contain the following information.

Step 3:CONFIGURING /etc/hosts file
(On both Cluster Machines atpl131 and atpl136)

# vi /etc/hosts

<loopback address>   localhost.server localhost
# Public
<public address>     atpl131.server atpl131
<public address>     atpl136.server atpl136
# Private
<private address>    atpl131-priv.server atpl131-priv
<private address>    atpl136-priv.server atpl136-priv
# Virtual
<virtual address>    atpl131-vip.server atpl131-vip
<virtual address>    atpl136-vip.server atpl136-vip
# NAS
<public address>     nas1.server nas1

Notice that the NAS1 entry is actually pointing to the atpl131 node. If you are using a real NAS or a third server to provide your shared storage put the correct IP address into the file.
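A quick way to catch typos in /etc/hosts is to check that every name the cluster needs appears in the file. check_hosts below is a hypothetical helper (the -vip names follow the naming convention used later with vipca); grep -w only checks presence, so it will not catch every possible mistake.

```shell
# Hypothetical helper: verify that every hostname the cluster needs
# has an entry in the given hosts file.
check_hosts() {
  f=$1
  for h in atpl131 atpl136 atpl131-priv atpl136-priv \
           atpl131-vip atpl136-vip nas1; do
    grep -qw "$h" "$f" || { echo "missing entry: $h"; return 1; }
  done
}

# Usage:
#   check_hosts /etc/hosts && echo "hosts file OK"
```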

Step 3:CONFIGURING kernel parameters in /etc/sysctl.conf file.
(On both Cluster Machines atpl131 and atpl136)

# vi /etc/sysctl.conf

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

# Additional and amended parameters suggested by Kevin Closson
net.core.rmem_default = 524288
net.core.wmem_default = 524288
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem=4096 524288 16777216
net.ipv4.tcp_wmem=4096 524288 16777216
net.ipv4.tcp_mem=16384 16384 16384
# These are optional; I added them because I gave the wrong hostname and domain name during OS installation
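Before reloading the parameters, you can check that none of the required keys was missed in /etc/sysctl.conf. check_params is a hypothetical helper that greps a sysctl file for each key in Oracle's prerequisite list above.

```shell
# Hypothetical helper: report any required kernel parameter missing
# from the given sysctl configuration file.
check_params() {
  f=$1; rc=0
  for p in kernel.shmall kernel.shmmax kernel.shmmni kernel.sem \
           fs.file-max net.ipv4.ip_local_port_range; do
    grep -q "^$p[[:space:]]*=" "$f" || { echo "missing: $p"; rc=1; }
  done
  return $rc
}

# Usage:
#   check_params /etc/sysctl.conf && echo "sysctl.conf OK"
```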

Step 4. Reload the changed kernel parameters without restarting the system.
(On both Cluster Machines atpl131 and atpl136)

# /sbin/sysctl -p

If this does not work, restart both machines.

Step 5. Add the following lines to the /etc/security/limits.conf file. ( create same file in atpl136)
(On both Cluster Machines atpl131 and atpl136)

# vi /etc/security/limits.conf

* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536

Step 6: Add the following line to the /etc/pam.d/login file if it is not already there.
(On both Cluster Machines atpl131 and atpl136)

# vi /etc/pam.d/login

session required /lib/security/

Step 7: Disable Secure Linux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows.
(On both Cluster Machines atpl131 and atpl136)

# vi /etc/selinux/config

SELINUX=disabled

Step 8:Set the hangcheck kernel module parameters by adding the following line to the /etc/modprobe.conf file.
(On both Cluster Machines atpl131 and atpl136)

# vi /etc/modprobe.conf

options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

You can also disable SELinux (Step 7) with the GUI tool (Applications -> System Settings -> Security Level): click the SELinux tab and disable the feature.

Step 9: To load the module immediately, execute the command "modprobe -v hangcheck-timer".
(On both Cluster Machines atpl131 and atpl136)

# modprobe -v hangcheck-timer

Step 10: Create the new groups and users (with the same user name, user ID and OS groups on both machines).
(On both Cluster Machines atpl131 and atpl136)

a) For atpl131:
Group create:
# groupadd oinstall
# groupadd dba
# groupadd oper
User Create:
# useradd -d /home/oracle -g oinstall -G dba -s /bin/bash oracle
# passwd oracle
# new password: oracle
# reenter password: oracle

To see the group names and corresponding IDs, and the user name and its ID, run:

# id oracle

b) For atpl136:
Repeat the same groupadd, useradd and passwd commands, then verify that the IDs match:
# id oracle

Step 11: During the operating system installation, both rsh and rsh-server were installed.
Enable remote shell and rlogin by doing the following.
(On both Cluster Machines atpl131 and atpl136)

# chkconfig rsh on
# chkconfig rlogin on
# service xinetd reload

Step 12:Create the /etc/hosts.equiv file as the root user.
(On both Cluster Machines atpl131 and atpl136)

# touch /etc/hosts.equiv
# chmod 600 /etc/hosts.equiv
# chown root:root /etc/hosts.equiv

Step 13:Edit the /etc/hosts.equiv file to include all the RAC nodes(atpl131 and atpl136):
(On both Cluster Machines atpl131 and atpl136)

# vi /etc/hosts.equiv

+atpl131 oracle
+atpl136 oracle
+atpl131-priv oracle
+atpl136-priv oracle

Step 14:Now restart both machines.

Step 15:Login as the oracle user and add the following lines at the end of the .bash_profile file.
(On both Cluster Machines atpl131 and atpl136)

Note:Edit this .bash_profile file in atpl136 also but give different ORACLE_SID there as: ORACLE_SID=RAC2; export ORACLE_SID

$ vi .bash_profile

# Oracle Settings
TMP=/tmp; export TMP

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID   # (changed to RAC2 on the second machine)
PATH=/usr/sbin:$PATH; export PATH


if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

Note: Remember to set ORACLE_SID to RAC2 on the second node; the remaining values are the same on the second machine.
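To avoid editing .bash_profile differently on each node, the SID could instead be derived from the hostname. sid_for is a hypothetical helper; the mapping follows the RAC1/RAC2 convention above.

```shell
# Hypothetical helper: map a node's short hostname to its instance SID.
sid_for() {
  case $1 in
    atpl131) echo RAC1 ;;
    atpl136) echo RAC2 ;;
    *)       echo UNKNOWN; return 1 ;;
  esac
}

# In .bash_profile you could then write:
#   ORACLE_SID=$(sid_for "$(hostname -s)"); export ORACLE_SID
```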

Step 16:Create Shared Disks (On atpl131 Machine only)

First we need to set up some NFS shares. In this case we will do this on the atpl131 node, but you can do it on a NAS or a third server if you have one available. On the atpl131 node:

Step 16.a:Create the following directories.

# mkdir /share1
# mkdir /share2

Step 16.b: Create two partitions, /dev/cciss/c0d1p11 and /dev/cciss/c0d1p12, using fdisk.

To see the attached disks

# fdisk -l

Partition the disk into two slices:
# fdisk /dev/cciss/c0d1
Command (m for help): n
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (7303-35698, default 7303): 7303
Last cylinder or +size or +sizeM or +sizeK: +10g
Command (m for help): p
(shows the partition you created)
Command (m for help): w
(this writes the partition table)
Now /dev/cciss/c0d1p11 is created.

Create the second partition (/dev/cciss/c0d1p12) the same way.

Step 16.c: Create ext3 file systems on the partitions (if they are not already formatted) and mount them on /share1 and /share2.
# mkfs.ext3 /dev/cciss/c0d1p11
# mkfs.ext3 /dev/cciss/c0d1p12
# mount /dev/cciss/c0d1p11 /share1
# mount /dev/cciss/c0d1p12 /share2

Step 16.d: Also add entries to the /etc/fstab file for these two partitions so they are mounted permanently at boot time.
# vi /etc/fstab

/dev/cciss/c0d1p11 /share1 ext3 defaults 0 0
/dev/cciss/c0d1p12 /share2 ext3 defaults 0 0

Step 16.e:Add the following lines to the /etc/exports file.

# vi /etc/exports

/share1 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/share2 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

NOTE 1: Do not repeat Step 16 on the atpl136 machine.

NOTE 2: We have set up NFS sharing by adding the entries to the /etc/exports file.

Step 17: Run the following commands to export the NFS shares.
(On both Cluster Machines atpl131 and atpl136)

# chkconfig nfs on
# service nfs restart

Step 18:Create two mount points to mount the NFS shares share1 and share2 on both machines.
(On both Cluster Machines atpl131 and atpl136)

# mkdir /u01
# mkdir /u02

Step 19:Add the following lines to the "/etc/fstab" file for nfs mounting .
(On both Cluster Machines atpl131 and atpl136)

# vi /etc/fstab

atpl131:/share1 /u01 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
atpl131:/share2 /u02 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0

1. You can also use nas1 in place of atpl131, because nas1 is a logical name for atpl131 which we set in the /etc/hosts file.
2. /share1 and /share2 are two directories created on atpl131 and NFS-mounted on both atpl131 and atpl136 for the cluster installation; we use NFS sharing here because we do not have a NAS device.
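Since a single missing mount option (notably actimeo=0) can cause problems on shared Oracle storage, it is worth verifying the option string. check_nfs_opts is a hypothetical helper that checks a comma-separated option list against the options used in the fstab entries above.

```shell
# Hypothetical helper: confirm each required NFS mount option is present
# in a comma-separated option string.
check_nfs_opts() {
  opts=$1
  for o in rw bg hard nointr tcp vers=3 rsize=32768 wsize=32768 actimeo=0; do
    case ",$opts," in
      *",$o,"*) ;;
      *) echo "missing option: $o"; return 1 ;;
    esac
  done
}

# Usage:
#   check_nfs_opts "rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0"
```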

Step 20:Mount the NFS shares on both servers.
(On both Cluster Machines atpl131 and atpl136)

# mount /u01
# mount /u02

Step 21:Create the shared CRS Configuration and Voting Disk files.
(On both Cluster Machines atpl131 and atpl136)

# touch /u01/crs_configuration

NOTE: Do not create the voting_disk file now with "# touch /u01/voting_disk"; otherwise the installer will complain that the file already exists.
When you run runInstaller it asks for the voting disk file; click Next and it creates the file in the /u01 directory. If clicking Next again raises an error, don't worry: go to the /u01 directory, change the file's permissions to 700, then click Next and it proceeds without error.

Step 22:Create the directories in which the Oracle software will be installed.
(On both Cluster Machines atpl131 and atpl136)

# mkdir -p /u01/crs/oracle/product/10.2.0/crs
# mkdir -p /u01/app/oracle/product/10.2.0/db_1
# mkdir -p /u01/oradata
# chown -R oracle:oinstall /u01 /u02

Step 23:RESTART BOTH MACHINES atpl131 and atpl136

Step 24: Log in to both atpl131 and atpl136 machines as the oracle user and run the Cluster Verification Utility (cluvfy) to check whether the nodes are clustered properly.
(On both Cluster Machines atpl131 and atpl136)

$ cd /u02/clusterware/cluvfy
$ ./ comp nodereach -n atpl131,atpl136 -verbose
$ ./ stage -pre crsinst -n atpl131,atpl136

Step 25: Install the Clusterware Software (on the atpl131 machine only, first).
Place the clusterware and database software in the /u02 directory and unzip it.

# cd /u02
# unzip <clusterware zip file>
# unzip <database zip file>

Start the Oracle installer.

$ cd /u02/clusterware
$ ./runInstaller

Step R-1. On the "Welcome" screen, click the "Next" button.

Step R-2. Accept the default inventory location by clicking the "Next" button.

Step R-3. Enter the appropriate name and Oracle Home path (only the path and name are shown here because we already set them in ".bash_profile") and click the "Next" button.

Step R-4. Wait while the prerequisite checks are done. If you have any failures, correct them and retry the tests before clicking the "Next" button.

Step R-5. The "Specify Cluster Configuration" screen shows only the atpl131 node in the cluster. Click the "Add" button to add atpl136, then continue.

Step R-6. The "Specify Network Interface Usage" screen defines how each network interface will be used. Highlight the "eth0" interface and click the "Edit" button.

Note 1: The installer now asks you to run two scripts as the root user.

Step 1. (i) First run the script on atpl131:
# /u01/app/oracle/oraInventory/
Step 1. (ii) Then run the script on atpl131:
# /u01/crs/oracle/product/10.2.0/crs/
While running these scripts, ignore any errors about directory ownership or services not being up.

After step 1, do step 2 on atpl136.
Step 2. (i) Run on atpl136:
# /u01/app/oracle/oraInventory/
Step 2. (ii) Then run on atpl136:
# /u01/crs/oracle/product/10.2.0/crs/
Again, ignore any errors about directory ownership or services not being up.

Step 3. After steps 1 and 2 above, before clicking OK on atpl131, run vipca (the Virtual IP Configuration Assistant):
# cd /u01/crs/oracle/product/10.2.0/crs/bin
# ./vipca
It asks you to choose an Ethernet card; choose the public network card (i.e. eth1), then give these values:
For atpl131:
IP alias name: atpl131-vip.server, plus its IP address and subnet mask
For atpl136:
IP alias name: atpl136-vip.server, plus its IP address and subnet mask

Step 4. Then return to atpl131, click OK and continue through the remaining screens until the installation completes.

Step 5. Launch dbca to create the database.

Sukhwinder Singh
