Thursday, October 18, 2012

Cloning Oracle 11g Grid Home 

Cloning Oracle RAC binaries has become common practice, but the same is not always true for the Grid Infrastructure home!! We recently patched one of our RAC environments and needed to apply the same patches to QA.
One option was to repeat the whole patching process on the environment to be patched. The other was to clone the already-patched environment and use it for the RAC databases.
Since cloning the Grid home is not very common and looked challenging, I decided to give it a try.

On the source node, create a copy of the Oracle grid infrastructure home. To keep the installed Oracle grid infrastructure home as a working home, make a full copy of the source Oracle grid infrastructure home and remove the unnecessary files from the copy. For example, as root on Linux systems, run the cp command:

[oracle@appsractest bin]$  cp -prf Grid_home location_of_the_copy_of_Grid_home
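For instance, with the Grid home at /u01/app/11.2.0.2/grid, the copy could look like this (the staging location /u01/app/stage is purely illustrative):

# Staging path is an assumption; any location with enough space will do
mkdir -p /u01/app/stage
cp -prf /u01/app/11.2.0.2/grid /u01/app/stage/grid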
Delete unnecessary files from the copy.

The Oracle grid infrastructure home contains files that are relevant only to the source node, so you can remove them from the copy; they live in directories such as log, crs/init, and cdata. Run the following to remove the unnecessary files from the copy of the Oracle grid infrastructure home:


[oracle@appsractest bin]$ cd /u01/app/11.2.0.2/grid

[oracle@appsractest grid]$  rm -rf  /u01/app/11.2.0.2/grid/log/host_name
[oracle@appsractest grid]$  rm -rf crs/init
[oracle@appsractest grid]$  rm -rf cdata
[oracle@appsractest grid]$  rm -rf gpnp/*
[oracle@appsractest grid]$  rm -rf network/admin/*.ora
[oracle@appsractest grid]$  find . -name '*.ouibak' -exec rm {} \;
[oracle@appsractest grid]$  find . -name '*.ouibak.1' -exec rm {} \;
[oracle@appsractest grid]$  rm -rf root.sh*
[oracle@appsractest grid]$  rm -rf  /u01/app/11.2.0.2/grid/inventory/ContentsXML/oraclehomeproperties.xml
[oracle@appsractest grid]$  cd cfgtoollogs
[oracle@appsractest cfgtoollogs]$ find . -type f -exec rm -f {} \;


Create a compressed copy of the previously copied Oracle grid infrastructure home using tar or gzip on Linux and UNIX systems. Ensure that the tool you use preserves the permissions and file time-stamps. 
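Assuming the cleaned-up copy sits at /u01/app/stage/grid (the illustrative path from above), a minimal sketch that matches the archive name extracted later:

# tar stores ownership, permissions and timestamps in the archive by default;
# extracting as root on the destination (as done below) preserves them
cd /u01/app/stage/grid
tar -czf /home/oracle/gridHome.tgz .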

The steps to create a cluster through cloning are as follows:
Prepare the new cluster nodes
Deploy Oracle Clusterware on the destination nodes
Run the clone.pl script on each destination node
Prepare crsconfig_params file on all nodes
Run the orainstRoot.sh script on each node
Run the Grid_home/root.sh script
Run the configuration assistants and CVU
Step 1: Prepare the New Cluster Nodes

On each destination node, perform the following preinstallation steps:
Specify the kernel parameters
Configure block devices for Oracle Clusterware devices
Ensure that you have set the block device permissions correctly
Use short, nondomain-qualified names for all of the names in the Hosts file
Test whether the interconnect interfaces are reachable using the ping command
Verify that the VIP addresses are not active at the start of the cloning process by using the ping command (the ping command of the VIP address must fail)
Delete all files in the Grid_home/gpnp folder

Note:
If the Grid_home/gpnp folder contains any files, then cluster creation fails and all resources are added to the existing cluster instead.
Run CVU to verify your hardware and operating system environment
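For example, CVU can be invoked from the staged installation media before the clone (the media path here is an assumption; the node name matches this environment):

# runcluvfy.sh ships on the Grid installation media (path assumed)
/stage/grid_media/runcluvfy.sh stage -pre crsinst -n appsractest -verbose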

-- Create the grid home directory on the destination node
[root@appsractest ]# mkdir -p /u01/app/11.2.0/grid
[root@appsractest ]# cd /u01/app/11.2.0/grid
[root@appsractest ]# tar -zxvf /home/oracle/gridHome.tgz
In this example, /u01/app/11.2.0/grid is the directory into which the Oracle grid infrastructure home is restored. Note that you can change the Grid home location as part of the clone process.
Change the ownership of all of the files to the oracle user and the oinstall group, and create a directory for the Oracle Inventory. The following example shows the commands to do this on a Linux system:

[root@appsractest grid]# chown -R oracle:oinstall /u01/app/11.2.0/grid
[root@appsractest grid]# mkdir -p /u01/app/oraInventory
[root@appsractest grid]# chown oracle:oinstall /u01/app/oraInventory

It is important to remove any Oracle network files from the /u01/app/11.2.0.2/grid/network/admin directory on both nodes before continuing. For example, remove any tnsnames.ora, listener.ora or sqlnet.ora files.
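For example (paths as used throughout this walkthrough):

# Remove network configuration files carried over from the source node
rm -f /u01/app/11.2.0.2/grid/network/admin/tnsnames.ora
rm -f /u01/app/11.2.0.2/grid/network/admin/listener.ora
rm -f /u01/app/11.2.0.2/grid/network/admin/sqlnet.ora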

-- clone.sh wraps the runInstaller clone command shown below
[oracle@appsractest bin]$ /home/oracle/clone.sh 
./runInstaller -clone -waitForCompletion  "ORACLE_BASE=/u01/app/oracle" "ORACLE_HOME=/u01/app/11.2.0.2/grid" "ORACLE_HOME_NAME=OraHome1Grid" "INVENTORY_LOCATION=/u01/app/oraInventory" "CLUSTER_NODES={appsractest}" "LOCAL_NODE=appsractest" -silent -noConfig -nowait 

Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-10-16_05-26-26AM. Please wait ...Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2012-10-16_05-26-26AM.log
...
Performing tests to see whether nodes  are available
............................................................... 100% Done.
Installation in progress (Tuesday, October 16, 2012 5:27:02 AM IST)
.........................................................................                                                       73% Done.
Install successful
Linking in progress (Tuesday, October 16, 2012 5:27:20 AM IST)
Link successful
Setup in progress (Tuesday, October 16, 2012 5:29:49 AM IST)
................                                                100% Done.
Setup successful

End of install phases.(Tuesday, October 16, 2012 5:30:10 AM IST)
WARNING:
The following configuration scripts need to be executed as the "root" user in each cluster node.
/u01/app/11.2.0.2/grid/rootupgrade.sh #On nodes appsractest
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The cloning of OraHome1Grid was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2012-10-16_05-26-26AM.log' for more details.
-- Repeat the above steps for all the nodes in the cluster

Run the script on the local node first. After successful completion, you can run the script in parallel on all the other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Log from the rootupgrade.sh execution -
Running Oracle 11g root script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0.2/grid

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Cluster execute the following command:
/u01/app/11.2.0.2/grid/crs/config/config.sh

This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.
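In silent mode that would look something like the following sketch (the response-file path is an assumption):

# Silent run of the Grid Infrastructure Configuration Wizard
/u01/app/11.2.0.2/grid/crs/config/config.sh -silent -responseFile /home/oracle/grid_config.rsp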

-- Run orainstRoot.sh script as follows
[root@appsractest ~]# cd /u01/app/oraInventory/
[root@appsractest oraInventory]# ./orainstRoot.sh 
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@appsractest oraInventory]# 
-- For our process, we need to prepare the following crsconfig_params file and then execute the rootcrs.pl script as follows

-- prepare crsconfig_params file /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
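The file is a flat list of name=value pairs. Here is a sketch of the entries this post ends up touching, shown with the final working values from the runs below (node lists, SCAN settings and the many other parameters are omitted):

# Excerpt only - crsconfig_params contains many more parameters;
# values shown are the final working ones from this walkthrough
ASM_DISK_GROUP=OCR_DISK
ASM_DISKS=/dev/oracleasm/disks/OCR_DISK
ASM_REDUNDANCY=EXTERNAL
ASM_DISCOVERY_STRING=/dev/oracleasm/disks/*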

[root@appsractest grid]# /u01/app/11.2.0.2/grid/perl/bin/perl -I/u01/app/11.2.0.2/grid/perl/lib -I/u01/app/11.2.0.2/grid/crs/install /u01/app/11.2.0.2/grid/crs/install/rootcrs.pl 
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
-- so it is using the crsconfig_params file we prepared earlier...
LOCAL ADD MODE 
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-2672: Attempting to start 'ora.mdnsd' on 'appsractest'
CRS-2676: Start of 'ora.mdnsd' on 'appsractest' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'appsractest'
CRS-2676: Start of 'ora.gpnpd' on 'appsractest' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'appsractest'
CRS-2672: Attempting to start 'ora.gipcd' on 'appsractest'
CRS-2676: Start of 'ora.gipcd' on 'appsractest' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'appsractest' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'appsractest'
CRS-2672: Attempting to start 'ora.diskmon' on 'appsractest'
CRS-2676: Start of 'ora.diskmon' on 'appsractest' succeeded
CRS-2676: Start of 'ora.cssd' on 'appsractest' succeeded

Disk Group OCR_DISK creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/hdd2' matches no disks
ORA-15025: could not open disk "/dev/hdd2"
ORA-15056: additional error message
Configuration of ASM ... failed
see asmca logs at /u01/app/oracle/cfgtoollogs/asmca for details
Did not succssfully configure and start ASM at /u01/app/11.2.0.2/grid/crs/install/crsconfig_lib.pm line 6464.

The issue here is that the ASM disks parameter (ASM_DISKS in crsconfig_params) had an invalid value.
fix - modified ASM_DISKS
from /dev/hdd2 to /dev/oracleasm/disks/OCR_DISK

-- next run 
[root@appsractest grid]# /u01/app/11.2.0.2/grid/perl/bin/perl -I/u01/app/11.2.0.2/grid/perl/lib -I/u01/app/11.2.0.2/grid/crs/install /u01/app/11.2.0.2/grid/crs/install/rootcrs.pl 
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'appsractest'
CRS-2676: Start of 'ora.cssdmonitor' on 'appsractest' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'appsractest'
CRS-2672: Attempting to start 'ora.diskmon' on 'appsractest'
CRS-2676: Start of 'ora.diskmon' on 'appsractest' succeeded
CRS-2676: Start of 'ora.cssd' on 'appsractest' succeeded
You need disks from at least two different failure groups, excluding quorum disks and quorum failure groups, to create a Disk Group with normal redundancy
Configuration of ASM ... failed

-- Since I'm using external redundancy, that parameter needs to be changed from NORMAL to EXTERNAL
 fix - modified ASM_REDUNDANCY=EXTERNAL
-- Next run 
Disk Group OCR_DISK creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/oracleasm/disks' matches no disks
ORA-15014: path '/dev/oracleasm/disks/OCR_DISK' is not in the discovery set
ORA-15014: path '/dev/oracleasm/disks/DATA' is not in the discovery set


-- The disk discovery string needs to be the fully qualified path, as follows
 fix - modified ASM_DISCOVERY_STRING from /dev/oracleasm/* to /dev/oracleasm/disks/*
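One way to sanity-check a discovery string before rerunning rootcrs.pl is the kfod utility in the Grid home; the invocation below is a sketch of its common usage:

# List all disks matching the discovery string
/u01/app/11.2.0.2/grid/bin/kfod asm_diskstring='/dev/oracleasm/disks/*' disks=all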

[root@appsractest grid]# /u01/app/11.2.0.2/grid/perl/bin/perl -I/u01/app/11.2.0.2/grid/perl/lib -I/u01/app/11.2.0.2/grid/crs/install /u01/app/11.2.0.2/grid/crs/install/rootcrs.pl 
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'appsractest'
CRS-2676: Start of 'ora.cssdmonitor' on 'appsractest' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'appsractest'
CRS-2672: Attempting to start 'ora.diskmon' on 'appsractest'
CRS-2676: Start of 'ora.diskmon' on 'appsractest' succeeded
CRS-2676: Start of 'ora.cssd' on 'appsractest' succeeded
ASM created and started successfully.
Disk Group OCR_DISK created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 6b227d8c9a314fe5bf3ce025c322a476.
Successfully replaced voting disk group with +OCR_DISK.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   6b227d8c9a314fe5bf3ce025c322a476 (/dev/oracleasm/disks/OCR_DISK) [OCR_DISK]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'appsractest'
CRS-2676: Start of 'ora.asm' on 'appsractest' succeeded
CRS-2672: Attempting to start 'ora.OCR_DISK.dg' on 'appsractest'
CRS-2676: Start of 'ora.OCR_DISK.dg' on 'appsractest' succeeded
ACFS-9200: Supported
ACFS-9200: Supported
CRS-2672: Attempting to start 'ora.registry.acfs' on 'appsractest'
CRS-2676: Start of 'ora.registry.acfs' on 'appsractest' succeeded
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

At the end of the Oracle Clusterware installation, manually run the configuration assistants and CVU. The following commands should be run from the first node only.
[oracle] $ /u01/app/11.2.0/grid/bin/netca \
                /orahome /u01/app/11.2.0/grid \
                /orahnam OraGridHome1 \
                /instype typical \
                /inscomp client,oraclenet,javavm,server \
                /insprtcl tcp \
                /cfg local \
                /authadp NO_VALUE \
                /responseFile /u01/app/11.2.0/grid/network/install/netca_typ.rsp \
                 /silent

[oracle@appsractest bin]$ /u01/app/11.2.0.2/grid/bin/netca /orahome /u01/app/11.2.0.2/grid /orahnam OraHome1Grid /instype typical /inscomp client,oraclenet,javavm,server /insprtcl tcp /cfg local /authadp NO_VALUE /responseFile /u01/app/11.2.0.2/grid/network/install/netca_typ.rsp /silent

Parsing command line arguments:
    Parameter "orahome" = /u01/app/11.2.0.2/grid
    Parameter "orahnam" = OraHome1Grid
    Parameter "instype" = typical
    Parameter "inscomp" = client,oraclenet,javavm,server
    Parameter "insprtcl" = tcp
    Parameter "cfg" = local
    Parameter "authadp" = NO_VALUE
    Parameter "responsefile" = /u01/app/11.2.0.2/grid/network/install/netca_typ.rsp
    Parameter "silent" = true
Done parsing command line arguments.
Oracle Net Services Configuration:
Oracle Net Listener Startup:
    Listener started successfully.
Listener configuration complete.
Oracle Net Services configuration successful. The exit code is 0

-- If you use a supported cluster file system, then you do not need to execute this command. If you are using Oracle ASM, then run the following command:
[oracle] $ /u01/app/11.2.0/grid/bin/asmca -silent -postConfigureASM -sysAsmPassword oracle -asmsnmpPassword oracle
-- Set up the Oracle ASM passwords..
[oracle@appsractest bin]$ /u01/app/11.2.0.2/grid/bin/asmca -silent -postConfigureASM -sysAsmPassword oracle -asmsnmpPassword oracle
PostConfiguration completed successfully

Now, if you check, your Grid Infrastructure with CRS is up and running (see the quick check below). The next step will be to clone the Oracle RAC binaries or install them fresh, as per your need. I will cover that topic in my next post.
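A quick way to verify, using the standard clusterware commands:

# Check clusterware health, then list all cluster resources
/u01/app/11.2.0.2/grid/bin/crsctl check crs
/u01/app/11.2.0.2/grid/bin/crsctl stat res -t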
One last thing: you have to repeat this process on every node of your destination cluster, so make sure you follow all of the steps above. You can run some of these steps on multiple nodes at a time, but doing them one by one gives you better control and makes troubleshooting easier.

Note - If you plan to run a pre-11g release 2 (11.2) database on this cluster, then you should run oifcfg as described in the Oracle Database 11g release 2 (11.2) documentation.
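For reference, oifcfg usage looks like this (the interface names and subnets below are purely illustrative):

# Show the current interface definitions
/u01/app/11.2.0.2/grid/bin/oifcfg getif
# Register public and interconnect networks (example values only)
/u01/app/11.2.0.2/grid/bin/oifcfg setif -global eth0/192.168.1.0:public
/u01/app/11.2.0.2/grid/bin/oifcfg setif -global eth1/10.0.0.0:cluster_interconnect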

1 comment:

  1. Can you please help me? I am doing exactly the same and getting [WARNING] [INS-08109] Unexpected error occurred while validating inputs at state 'ConfigWiz' while running config.sh after clone.pl. Do I need to run config.sh, or will rootcrs.pl do the job?
