Sunday, February 22, 2009

Adding a Node to a 10g RAC Cluster

PURPOSE
----------

The purpose of this note is to provide a guide that can be used to
add a cluster node to an Oracle 10g Real Application Clusters (RAC)
environment.

SCOPE & APPLICATION
--------------------------

This document can be used by DBAs and support analysts who need to
either add a cluster node or assist someone else in adding one in a
10g Unix Real Application Clusters environment. If you are on
10gR2 (10.2.0.2 or higher), please refer to the documentation for
more up-to-date steps.


ADDING A NODE TO A 10g RAC CLUSTER
--------------------------------------------

The most important steps that need to be followed are:

A. Configure the OS and hardware for the new node.
B. Add the node to the cluster.
C. Add the RAC software to the new node.
D. Reconfigure listeners for new node.
E. Add instances via DBCA.

Here is a breakdown of the above steps.


A. Configure the OS and hardware for the new node.
----------------------------------------------------

Please consult the available OS vendor documentation for this step.

See
Note 264847.1 for network requirements. Also verify that the OCR and
voting files are visible from the new node with correct permissions.
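
For example, from the new node (the raw device paths below are
placeholders; substitute your actual OCR and voting disk locations):

ls -l /dev/raw/raw1 /dev/raw/raw2

The OCR device is typically owned by root with group oinstall, and
the voting disk by the oracle software owner.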


B. Add the node to the cluster.
------------------------------

1. If the CRS Home is owned by root and you are on a version < 10.1.0.4, change
the ownership of the CRS Home directories on all nodes to the Oracle user
so that OUI can read and write to these directories.
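Example (assuming the software owner is oracle and the install
group is oinstall; run as root on each node):

chown -R oracle:oinstall $ORA_CRS_HOME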
2. Set the DISPLAY environment variable and run the addNode.sh script from
$ORA_CRS_HOME/oui/bin on one of the existing nodes as the oracle user.
Example:

DISPLAY=ipaddress:0.0; export DISPLAY
cd $ORA_CRS_HOME/oui/bin
./addNode.sh

3. The OUI Welcome screen will appear, click next.

4. On the "Specify Cluster Nodes to Add to Installation" screen, add the
public and private node names (these should exist in /etc/hosts and
should be pingable from each of the cluster nodes), click next.
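
Example /etc/hosts entries for the new node (all names and
addresses are illustrative):

138.2.238.12   node2        # public
10.10.10.2     node2-priv   # private interconnect
138.2.238.22   node2-vip    # virtual IP (used later by VIPCA)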

5. The "Cluster Node Addition Summary" screen will appear, click next.

6. The "Cluster Node Addition Progress" screen will appear. You will
then be prompted to run rootaddnode.sh as the root user. First verify
that the CLSCFG information in the rootaddnode.sh script is correct.
It should contain the new public and private node names and node
numbers. Example:

$CLSCFG -add -nn <node2>,2 -pn <node2-private>,2 -hn <node2>,2

Then run the rootaddnode.sh script on the EXISTING node you ran the
addNode.sh from. Example:

su root
cd $ORA_CRS_HOME
sh -x rootaddnode.sh

Once this is finished, click OK in the dialog box to continue.

7. At this point another dialog box will appear, this time you are
prompted to run $ORA_CRS_HOME/root.sh on all the new nodes.
If you are on version < 10.1.0.4 then
- Locate the highest numbered NEW cluster node using "$ORA_CRS_HOME/bin/olsnodes -n".
- Run the root.sh script on this highest numbered NEW cluster node.
- Run the root.sh script on the rest of the NEW nodes in any order.
For versions 10.1.0.4 and above, the root scripts can be run on the NEW
nodes in any order.

Example:

su root
cd $ORA_CRS_HOME
sh -x root.sh

If there are any problems with this step, refer to
Note 240001.1

Once this is finished, click OK in the dialog box to continue.

8. After running the CRS root.sh on all new nodes, run
$ORA_CRS_HOME/bin/racgons add_config <newnode1>:4948 <newnode2>:4948...
from any node.
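
As a quick check, once ONS is up on the new node you can ping it
(onsctl ships in the CRS home in a standard 10g install):

$ORA_CRS_HOME/bin/onsctl ping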

9. Next you will see the "End of Installation" screen. At this point you
may exit the installer.

10. Change the ownership of all CRS Homes back to root.
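Example, if you changed ownership in step 1 (run as root on each
node):

chown -R root $ORA_CRS_HOME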


C. Add the Oracle Database software (with RAC option) to the new node.
-------------------------------------------------------------------------

1. On a pre-existing node, cd to the $ORACLE_HOME/oui/bin directory and
run the addNode.sh script. Example:

DISPLAY=ipaddress:0.0; export DISPLAY
cd $ORACLE_HOME/oui/bin
./addNode.sh

2. The OUI Welcome screen will appear, click next.

3. On the "Specify Cluster Nodes to Add to Installation" screen, specify
the node you want to add, click next.

4. The "Cluster Node Addition Summary" screen will appear, click next.

5. The "Cluster Node Addition Progress" screen will appear. You will
then be prompted to run root.sh as the root user.

su root
cd $ORACLE_HOME
./root.sh

Once this is finished, click OK in the dialog box to continue.

6. Next you will see the "End of Installation" screen. At this point you
may exit the installer.

7. cd to the $ORACLE_HOME/bin directory and run the vipca tool as root
with the new node list. Example:

su root
DISPLAY=ipaddress:0.0; export DISPLAY
cd $ORACLE_HOME/bin
./vipca -nodelist <node1>,<node2>

8. The VIPCA Welcome Screen will appear, click next.

9. Add the new node's virtual IP information, click next.

10. You will then see the "Summary" screen, click finish.

11. You will now see a progress bar creating and starting the new CRS
resources. Once this is finished, click ok, view the configuration
results, and click on the exit button.

12. Verify that interconnect information is correct with:

oifcfg getif
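
Typical output looks like this (interface names and subnets will
vary):

eth0  138.2.238.0  global  public
eth1  10.10.10.0   global  cluster_interconnect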

If it is not correct, change it with:

oifcfg setif <interface-name>/<subnet>:<cluster_interconnect|public>

For example:

oifcfg setif -global eth1/10.10.10.0:cluster_interconnect

or

oifcfg setif -node <nodename> eth1/10.10.10.0:cluster_interconnect


D. Reconfigure listeners for new node.
--------------------------------------

1. Run NETCA on the NEW node to verify that the listener is configured on
the new node. Example:

DISPLAY=ipaddress:0.0; export DISPLAY
netca

2. Choose "Cluster Configuration", click next.

3. Select all nodes, click next.

4. Choose "Listener configuration", click next.

5. Choose "Reconfigure", click next.

6. Choose the listener you would like to reconfigure, click next.

7. Choose the correct protocol, click next.

8. Choose the correct port, click next.

9. Choose whether or not to configure another listener, click next.

10. You may get an error message saying, "The information provided for this
listener is currently in use by another listener...". Click yes to
continue anyway.

11. The "Listener Configuration Complete" screen will appear, click next.

12. Click "Finish" to exit NETCA.

13. Run crs_stat to verify that the listener CRS resource was created.
Example:

cd $ORA_CRS_HOME/bin
./crs_stat

14. The new listener will likely be offline. Start it by starting the
nodeapps on the new node. Example:

srvctl start nodeapps -n <newnode>

15. Use crs_stat to confirm that all VIPs, GSDs, ONS daemons, and
listeners are ONLINE.
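The tabular output is easier to scan for anything OFFLINE. Example:

cd $ORA_CRS_HOME/bin
./crs_stat -t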


E. Add instances via DBCA. (for standby databases see section F)
-----------------------------------------------------------------

1. To add new instances, launch DBCA from a pre-existing node. Example:

DISPLAY=ipaddress:0.0; export DISPLAY
dbca

2. On the welcome screen, choose "Oracle Real Application Clusters",
click next.

3. Choose "Instance Management", click next.

4. Choose "Add an Instance", click next.

5. Choose the database you would like to add an instance to and specify
a user with SYSDBA privileges, click next. Click next again on the
screen that follows.

6. Choose the correct instance name and node, click next.

7. Review the storage screen, click next.

8. Review the summary screen, click OK and wait a few seconds for the
progress bar to start.

9. Allow the progress bar to finish. When asked if you want to perform
another operation, choose "No" to exit DBCA.

10. To verify success, log into one of the instances and query from
gv$instance, you should now see all nodes.
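
Example query (run from SQL*Plus on any instance):

select inst_id, instance_name, status from gv$instance;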


F. Adding Instances to the Standby Database
---------------------------------------------

1. If you are using a RAC primary, make sure the steps from section E have
been performed. If you are using a single instance primary, add the redo
log groups and threads to the primary database via "alter database"
commands. Example commands:

alter database add logfile thread 2
group 3 ('/dev/RAC/redo2_01_100.dbf') size 100M,
group 4 ('/dev/RAC/redo2_02_100.dbf') size 100M;
alter database enable public thread 2;


2. Create a new standby controlfile from the primary database and copy it
to the standby. Example command:

alter database create standby controlfile as '/u01/stby.ctl';

3. Shut down the standby database, back up the existing standby
controlfile on the standby database, and copy the new standby
controlfile into place.
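
Example (all file paths are illustrative; use the locations from the
standby's control_files parameter):

cp /u01/oradata/stby/control01.ctl /u01/oradata/stby/control01.ctl.bak
cp /u01/stby.ctl /u01/oradata/stby/control01.ctl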

4. Adjust any init.ora or spfile parameters, such as thread, instance_name,
instance_number, local_listener, and undo_tablespace, for any new
instances.
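
Example commands for a new second instance (the instance number, SID,
and tablespace name below are illustrative):

alter system set instance_number=2 scope=spfile sid='stby2';
alter system set thread=2 scope=spfile sid='stby2';
alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='stby2';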

5. Recover the standby database.
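
For a physical standby this typically means restarting managed
recovery. Example:

alter database recover managed standby database disconnect;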
