Oracle Clusterware 10.2.0.1 (CRS 10gR2) on SLES9

10gR2 CRS on SLES9 x86

CRS is Oracle's clusterware software and the foundation for a RAC installation.
I'm covering it in a separate document because the functionality of CRS 10gR2 is improved over 10gR1 and the API has been opened to third-party and home-made applications.
This way you can clusterize your own application on the Oracle cluster.
This is described in a document soon to be published (or so I hope).

Before installing the binaries you have to plan how to set up your system.

The cluster can be installed on a single node, but that makes little sense.
You should have at least two nodes and shared storage.

Personally I prefer several nodes connected to a SAN via redundant Fibre Channel cards (like QLogic 23xx). The cards can use multipath software (SLES9 has built-in multipath capabilities) or rely on driver failover (again like QLogic).

Every node should be connected to the others via a private network.
A switch is required (a crossover cable is not supported) and a gigabit connection is recommended (this matters for RAC).

After installing the operating system make sure you have the required packages:
 

  • gcc (SLES9 uses 3.3),
  • glibc-devel,
  • make,
  • openmotif,
  • the base X libraries,
  • libaio,
  • libaio-devel,
  • orarun.
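As a quick sketch (package names are the SLES9 defaults listed above; adjust to your service pack), you can check that they are all installed with rpm:

```shell
# Hedged sketch: verify the required packages listed above are present.
PKGS="gcc glibc-devel make openmotif libaio libaio-devel orarun"
for p in $PKGS; do
    if command -v rpm >/dev/null 2>&1; then
        # rpm -q exits non-zero when the package is not installed
        rpm -q "$p" >/dev/null 2>&1 || echo "missing: $p"
    fi
done
```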

The last package simplifies the installation process and later administration.
It provides the oracle user and groups, the start and stop scripts for your init levels, the environment for the oracle user and the right settings for the kernel parameters (they can be adjusted later).
It even provides some fixes for 9i installation bugs. These can (and should) be avoided for 10g, especially for 10gR2.
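orarun applies its kernel settings through the usual sysctl interface; as a quick sketch you can inspect one of the standard SysV IPC parameters it tunes (the value depends on your configuration):

```shell
# Sketch: peek at one of the IPC parameters orarun tunes (assumes Linux sysctl).
SHMMAX=$(sysctl -n kernel.shmmax 2>/dev/null || echo unknown)
echo "kernel.shmmax = $SHMMAX"
```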

The latest version of orarun can be found here.

Create the directory tree for the Oracle installation (see the OFA standard): the default is /opt/oracle/product/10g/db_1.
I prefer /u01/app/oracle/product/10.2/crs_1:
 

linux: # mkdir -p /u01/app/oracle/product/10.2/crs_1

Make sure to change the ownership of the tree with chown (the owner should be the oracle user and the group oinstall).
 

linux: # chown -R oracle:oinstall /u01/app/oracle

Now you can modify some files in /etc:

  • /etc/passwd: change the shell for the oracle user created by orarun (the default is /bin/false);
  • /etc/group: the oracle user should already belong to dba, oinstall and disk;
  • /etc/sysconfig/oracle for ORACLE_BASE, ORACLE_HOME, ORACLE_SID and several kernel parameters, plus the start parameters for the oracle script in /etc/init.d (useful during machine boot);
  • /etc/profile.d/oracle.sh (or oracle.csh, depending on the shell you chose above). From 10g on I prefer to unset the variables LD_ASSUME_KERNEL and LD_PRELOAD. It can be done by adding:
            • unset LD_ASSUME_KERNEL
            • unset LD_PRELOAD
before the last "fi" in the script. By unsetting LD_ASSUME_KERNEL all your programs will use the latest version of glibc, exploiting the new Linux POSIX thread library (NPTL) used heavily by Oracle executables. You will see fewer processes around and you'll get a more stable system (especially for RAC).
Also add the variable ORA_CRS_HOME=$ORACLE_BASE/product/10.2/crs_1. This is not required but is suggested: it eases your life.
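The profile additions described above can be sketched like this (the ORACLE_BASE value follows the /u01/app/oracle layout chosen earlier; adjust to yours):

```shell
# Sketch of the additions to /etc/profile.d/oracle.sh; place the unsets
# before the last "fi" in the script.
ORACLE_BASE=${ORACLE_BASE:-/u01/app/oracle}
unset LD_ASSUME_KERNEL      # let binaries use the current glibc / NPTL
unset LD_PRELOAD
ORA_CRS_HOME=$ORACLE_BASE/product/10.2/crs_1
export ORACLE_BASE ORA_CRS_HOME
```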

         

Set a password for your oracle user:
 

passwd oracle

Reconnect as the oracle user so your new environment is in place.

Now you need a shared device on which to place the Oracle Cluster Registry and the Voting Disk.
These two files are so important that in 10gR2 they can be multiplexed, as is normally done with the database redo logs and control files.

I prefer to place them on raw devices.
So I create several partitions using fdisk on my shared device.
This operation is done on one node only.
A reboot of the other nodes is not necessary! They will pick up the modification simply by issuing:
 

# partprobe

Now bind them to raw devices by modifying /etc/raw, as in this example:

raw1:oradata1_a/registrylv
raw2:oradata1_a/votinglv
raw3:ida/c0d1

The script that starts (binds) the raw devices is:
 

# /etc/init.d/raw start
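To check that the bindings took effect you can query them; a sketch (the raw tool ships with util-linux on SLES9):

```shell
# Sketch: list the active raw bindings created above.
if command -v raw >/dev/null 2>&1; then
    # prints one "/dev/raw/rawN: bound to major X, minor Y" line per binding
    raw -qa && STATUS=ok || STATUS="query failed"
else
    STATUS="raw tool not installed"
fi
echo "raw bindings check: $STATUS"
```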

Make sure the oracle user is in the disk group; this way you won't have to change the permissions on the raw devices.
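A quick way to check the group membership (a sketch, assuming GNU id):

```shell
# Sketch: confirm the oracle user belongs to the disk group, so the raw
# devices (root:disk on SLES9) are accessible without chmod/chown.
if id -nG oracle 2>/dev/null | grep -qw disk; then
    RESULT="oracle is in the disk group"
else
    RESULT="add oracle to the disk group (edit /etc/group or use usermod)"
fi
echo "$RESULT"
```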

Modify your /etc/hosts to include all the names of your nodes (physical names, virtual names and private names) and copy the file to all the cluster machines.

Example:

192.168.23.191  breonldblc03.ras breonldblc03
192.168.23.192  breonldblc04.ras breonldblc04
192.168.23.18   breonldblv02.ras breonldblv02
192.168.23.196  breonldblv03.ras breonldblv03
192.168.23.19   breonldblv04.ras breonldblv04
192.168.23.20   breonldblv05.ras breonldblv05
192.168.255.1   internal1.ras    internal1
192.168.255.2   internal2.ras    internal2

These steps are also explained here.

After doing this you need to create an SSH key for the oracle user on one node and copy it to all the others.

 

ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/opt/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /opt/oracle/.ssh/id_rsa.
Your public key has been saved in /opt/oracle/.ssh/id_rsa.pub.

While in the .ssh directory (usually /opt/oracle/.ssh), append the public key to authorized_keys2 to permit authentication without a password.
 

cat id_rsa.pub >> authorized_keys2

Now copy the whole .ssh directory to all the nodes.
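For example (a sketch; node2 is a hypothetical stand-in for each of the other cluster nodes):

```shell
# Sketch: push the whole .ssh directory to another node.
# NODE is a hypothetical name; repeat for every node in the cluster.
NODE=node2
scp -rp -o ConnectTimeout=5 ~/.ssh "oracle@$NODE:~/" \
    || echo "copy to $NODE failed"
```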

Now, from every node you have to connect to the others using all the private and public names used in /etc/hosts (with and without the domain).
Answer 'yes' to every question and make sure that you are no longer prompted for a password.
From the second attempt on, the same connection should produce no message or request at all: you must be immediately authenticated and presented with a shell prompt for the Oracle installation to proceed smoothly.
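The round of test connections can be scripted; a sketch using the example host names from the /etc/hosts above (substitute your own; BatchMode makes ssh fail instead of prompting, so any leftover prompt surfaces immediately):

```shell
# Sketch: from this node, try every name with and without the domain.
HOSTS="breonldblc03 breonldblc03.ras breonldblc04 breonldblc04.ras \
internal1 internal1.ras internal2 internal2.ras"
for h in $HOSTS; do
    # 'hostname' should be the only output; anything extra will also
    # confuse the Oracle installer later.
    ssh -o BatchMode=yes -o ConnectTimeout=5 "oracle@$h" hostname \
        || echo "check $h"
done
```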

Warning!!!!!

If you are authenticated without a password or any other request, but some output (or a warning) is shown, Oracle will interpret that as an error and stop the installation. So solve any such issue or warning before going ahead.

 

Here, you can see an example of the messages shown when establishing the initial ssh connections:

oracle@sles9rac1:~/.ssh> ssh oracle@192.168.255.1
The authenticity of host '192.168.255.1 (192.168.255.1)' can't be established.
RSA key fingerprint is 4c:70:d1:4c:6c:71:5c:19:a6:87:14:38:e5:f7:7f:51.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.255.1' (RSA) to the list of known hosts.
Last login: Wed Nov 10 16:18:43 2004 from 192.168.255.2
oracle@sles9rac1:~> exit
logout
Connection to 192.168.255.1 closed.
oracle@sles9rac1:~/.ssh> ssh oracle@192.168.255.2
The authenticity of host '192.168.255.2 (192.168.255.2)' can't be established.
RSA key fingerprint is 4c:70:d1:4c:6c:71:5c:19:a6:87:14:38:e5:f7:7f:51.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.255.2' (RSA) to the list of known hosts.
Last login: Wed Nov 10 16:20:14 2004 from 192.168.24.60
oracle@sles9rac2:~> exit
logout
Connection to 192.168.255.2 closed.

Make sure you can open X applications (connect with ssh -Y, export the DISPLAY variable, or whatever you are used to).
Change to the directory containing the installer (maybe the CD-ROM, or the directory where you decompressed the tarball downloaded from OTN).

Run the installer as oracle:
 

./runInstaller

I'm attaching some snapshots taken during my installation.

Before executing the script make sure root can export the display!!!!!
A configuration assistant is going to be launched as root!!!

 

Contact information:

fabrizio.magni _at_ gmail.com

 
Copyright © 2010-2015 - Fabrizio Magni