Views expressed here are solely my own. Please make sure you test the code/queries that appear on my blog before applying them to a production environment.

Thursday, March 22, 2012

How to replace Oracle 10g RAC OCR, Voting and ASM spfile disks

Below is the procedure I followed, and the errors I encountered, when replacing the Oracle 10g RAC OCR, Voting and ASM spfile disks with new disks from a new storage system. The database is Oracle 10gR2 (10.2.0.4) RAC with two nodes, running on IBM servers with IBM AIX v6.1 OS.

I followed the procedure explained in the My Oracle Support (MOS, formerly Metalink) note below.

OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE) [ID 428681.1]

The first thing you should do is analyze the current configuration: find the current disk sizes and the permissions of their defined devices.

You need to use the root Unix user for the operations below.

Check the ASM spfile device and its hdisk; use the major and minor numbers of the device ("23,  3" below) to find the corresponding hdisk.
[root@srvdb01]:/home/root \> ls -l /dev/asmspf*
crw-rw----    1 oracle   dba          23,  3 Jul 28 2011  /dev/asmspf_disk
[root@srvdb01]:/home/root \> ls -l /dev/hdisk*|grep "23,  3"
brw-------    1 root     system       23,  3 Jul 28 2011  /dev/hdisk1

Check the corresponding hdisk size, which is about 100 MB below
[root@srvdb01]:/home/root \> bootinfo -s hdisk1
102


Check the OCR disks; they are about 300 MB each
[root@srvdb01]:/home/root \> ls -l /dev/ocr*
crw-r-----    1 root     dba          23,  4 Jul 28 2011  /dev/ocr_disk1
crw-r-----    1 root     dba          23,  5 Jul 28 2011  /dev/ocr_disk2
[root@srvdb01]:/home/root \> ls -l /dev/hdisk*|grep "23,  4"
brw-------    1 root     system       23,  4 Jul 28 2011  /dev/hdisk2
[root@srvdb01]:/home/root \> ls -l /dev/hdisk*|grep "23,  5"
brw-------    1 root     system       23,  5 Jul 28 2011  /dev/hdisk3
[root@srvdb01]:/home/root \> bootinfo -s hdisk2
307
[root@srvdb01]:/home/root \> bootinfo -s hdisk3
307


Check the Voting disks; they are about 300 MB each
[root@srvdb01]:/home/root \> ls -l /dev/vot*
crw-r--r--    1 oracle   dba          23,  6 Mar 20 10:55 /dev/voting_disk1
crw-r--r--    1 oracle   dba          23,  7 Mar 20 10:55 /dev/voting_disk2
crw-r--r--    1 oracle   dba          23,  8 Mar 20 10:55 /dev/voting_disk3
[root@srvdb01]:/home/root \> ls -l /dev/hdisk*|grep "23,  6"
brw-------    1 root     system       23,  6 Jul 28 2011  /dev/hdisk4
[root@srvdb01]:/home/root \> ls -l /dev/hdisk*|grep "23,  7"
brw-------    1 root     system       23,  7 Jul 28 2011  /dev/hdisk5
[root@srvdb01]:/home/root \> ls -l /dev/hdisk*|grep "23,  8"
brw-------    1 root     system       23,  8 Jul 28 2011  /dev/hdisk6
[root@srvdb01]:/home/root \> bootinfo -s hdisk4
307
[root@srvdb01]:/home/root \> bootinfo -s hdisk5
307
[root@srvdb01]:/home/root \> bootinfo -s hdisk6
307
[root@srvdb01]:/home/root \> 


Do the same checks on the second node as the root user
[root@srvdb02]:/home/root \> ls -l /dev/asmspf*
crw-rw----    1 oracle   dba          23,  3 Jun 29 2011  /dev/asmspf_disk
[root@srvdb02]:/home/root \> ls -l /dev/hdisk*|grep "23,  3"
brw-------    1 root     system       23,  3 Jun 30 2011  /dev/hdisk1
[root@srvdb02]:/home/root \> bootinfo -s hdisk1
102 

[root@srvdb02]:/home/root \> ls -l /dev/ocr*
crw-r-----    1 root     dba          23,  6 Mar 20 11:00 /dev/ocr_disk1
crw-r-----    1 root     dba          23,  7 Mar 20 11:00 /dev/ocr_disk2
[root@srvdb02]:/home/root \> ls -l /dev/hdisk*|grep "23,  6"
brw-------    1 root     system       23,  6 Jul 04 2011  /dev/hdisk2
[root@srvdb02]:/home/root \> ls -l /dev/hdisk*|grep "23,  7"
brw-------    1 root     system       23,  7 Jul 04 2011  /dev/hdisk3
[root@srvdb02]:/home/root \> bootinfo -s hdisk2
307
[root@srvdb02]:/home/root \> bootinfo -s hdisk3
307

[root@srvdb02]:/home/root \> ls -l /dev/vot*
crw-r--r--    1 oracle   dba          23,  8 Mar 20 11:01 /dev/voting_disk1
crw-r--r--    1 oracle   dba          23,  9 Mar 20 11:01 /dev/voting_disk2
crw-r--r--    1 oracle   dba          23, 10 Mar 20 11:01 /dev/voting_disk3
[root@srvdb02]:/home/root \> ls -l /dev/hdisk*|grep "23,  8"
brw-------    1 root     system       23,  8 Jul 04 2011  /dev/hdisk4
[root@srvdb02]:/home/root \> ls -l /dev/hdisk*|grep "23,  9"
brw-------    1 root     system       23,  9 Jul 04 2011  /dev/hdisk5
[root@srvdb02]:/home/root \> ls -l /dev/hdisk*|grep "23, 10"
brw-------    1 root     system       23, 10 Jul 04 2011  /dev/hdisk6
[root@srvdb02]:/home/root \> bootinfo -s hdisk4
307
[root@srvdb02]:/home/root \> bootinfo -s hdisk5
307
[root@srvdb02]:/home/root \> bootinfo -s hdisk6
307
[root@srvdb02]:/home/root \> 
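
If there are many devices to check, the same lookups can be wrapped in a small loop and run on each node. Below is only a sketch; it assumes the standard AIX "ls -l" column layout shown above, where the major and minor numbers are the 5th and 6th fields.

for d in /dev/asmspf_disk /dev/ocr_disk1 /dev/ocr_disk2 \
         /dev/voting_disk1 /dev/voting_disk2 /dev/voting_disk3
do
  maj=$(ls -l $d | awk '{sub(",","",$5); print $5}')    # major number (5th field, comma stripped)
  min=$(ls -l $d | awk '{print $6}')                    # minor number (6th field)
  # the hdisk that carries the same major/minor pair
  hd=$(ls -l /dev/hdisk* | awk -v M="$maj" -v m="$min" \
       '{x=$5; sub(",","",x); if (x==M && $6==m) print $NF}')
  sz=$(bootinfo -s ${hd#/dev/})                         # size in MB
  echo "$d -> $maj,$min -> $hd ${sz} MB"
done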

The new disks to be assigned as the OCR, Voting and ASM spfile disks are listed below
brw------- 1 root system 21, 6 Mar 20 10:34 hdisk43 spfile
brw------- 1 root system 21, 2 Mar 20 10:34 hdisk44 ocr1
brw------- 1 root system 21, 1 Mar 20 10:34 hdisk45 ocr2
brw------- 1 root system 21, 3 Mar 20 10:34 hdisk46 vote1
brw------- 1 root system 21, 5 Mar 20 10:34 hdisk47 vote2
brw------- 1 root system 21, 4 Mar 20 10:34 hdisk48 vote3


Now start to configure the new hdisks

First, check the sizes of the new disks by running the following command on each RAC node. As you can see from the output below, we assigned a 1 GB disk for each component.

for i in 43 44 45 46 47 48
do
bootinfo -s hdisk$i
done

[root@srvdb01]:/home/root \> for i in 43 44 45 46 47 48  
> do
> bootinfo -s hdisk$i
> done 
1024
1024
1024
1024
1024
1024
[root@srvdb01]:/home/root \> 

[root@srvdb02]:/home/root \> for i in 43 44 45 46 47 48  
> do
> bootinfo -s hdisk$i
> done
1024
1024
1024
1024
1024
1024
[root@srvdb02]:/home/root \> 

Now use the following command to set the "reserve_policy" attribute of the new disks to "no_reserve"; otherwise these disks cannot be used in a RAC configuration. Then use the second command below to make sure the operation was successful.
for i in 43 44 45 46 47 48
do
chdev -l hdisk$i -a reserve_policy=no_reserve
done

for i in 43 44 45 46 47 48
do
lsattr -El hdisk$i | grep reserve
done

[root@srvdb01]:/home/root \> for i in 43 44 45 46 47 48 
> do
> chdev -l hdisk$i -a reserve_policy=no_reserve
> done
hdisk43 changed
hdisk44 changed
hdisk45 changed
hdisk46 changed
hdisk47 changed
hdisk48 changed


As we can see, they are set accordingly.
[root@srvdb01]:/home/root \> for i in 43 44 45 46 47 48  
> do
> lsattr -El hdisk$i | grep reserve
> done
reserve_policy  no_reserve                                          Reserve Policy                   True
reserve_policy  no_reserve                                          Reserve Policy                   True
reserve_policy  no_reserve                                          Reserve Policy                   True
reserve_policy  no_reserve                                          Reserve Policy                   True
reserve_policy  no_reserve                                          Reserve Policy                   True
reserve_policy  no_reserve                                          Reserve Policy                   True
[root@srvdb01]:/home/root \> 

Do the same thing on the second node
[root@srvdb02]:/home/root \> for i in 43 44 45 46 47 48 
> do
> chdev -l hdisk$i -a reserve_policy=no_reserve
> done
hdisk43 changed
hdisk44 changed
hdisk45 changed
hdisk46 changed
hdisk47 changed
hdisk48 changed
[root@srvdb02]:/home/root \> for i in 43 44 45 46 47 48  
> do
> lsattr -El hdisk$i | grep reserve
> done
reserve_policy  no_reserve                                          Reserve Policy                   True
reserve_policy  no_reserve                                          Reserve Policy                   True
reserve_policy  no_reserve                                          Reserve Policy                   True
reserve_policy  no_reserve                                          Reserve Policy                   True
reserve_policy  no_reserve                                          Reserve Policy                   True
reserve_policy  no_reserve                                          Reserve Policy                   True
[root@srvdb02]:/home/root \> 

To find the hdisk major and minor numbers, we use the following command on each node, since the major and minor numbers of the newly assigned hdisks can be different on different RAC nodes.

for i in 43 44 45 46 47 48
do
ls -la /dev/hdisk$i
done

[root@srvdb01]:/home/root \> for i in 43 44 45 46 47 48 
> do
> ls -la /dev/hdisk$i
> done

brw-------    1 root     system       21,  1 Mar 20 10:34 /dev/hdisk43
brw-------    1 root     system       21,  2 Mar 20 10:34 /dev/hdisk44
brw-------    1 root     system       21,  4 Mar 20 10:34 /dev/hdisk45
brw-------    1 root     system       21,  5 Mar 20 10:34 /dev/hdisk46
brw-------    1 root     system       21,  3 Mar 20 10:34 /dev/hdisk47
brw-------    1 root     system       21,  6 Mar 20 10:34 /dev/hdisk48
[root@srvdb01]:/home/root \> 

[root@srvdb02]:/home/root \> for i in 43 44 45 46 47 48 
> do
> ls -la /dev/hdisk$i
> done

brw-------    1 root     system       21,  6 Mar 20 10:34 /dev/hdisk43
brw-------    1 root     system       21,  2 Mar 20 10:34 /dev/hdisk44
brw-------    1 root     system       21,  1 Mar 20 10:34 /dev/hdisk45
brw-------    1 root     system       21,  3 Mar 20 10:34 /dev/hdisk46
brw-------    1 root     system       21,  5 Mar 20 10:34 /dev/hdisk47
brw-------    1 root     system       21,  4 Mar 20 10:34 /dev/hdisk48
[root@srvdb02]:/home/root \> 

Now start creating the virtual devices that correspond to the actual hdisks, as below. As you can see above, the major and minor numbers of the same hdisk can be different on each RAC node.

We prepare the following commands for each node (a small sketch that generates these lines automatically follows the two lists below)

For the first node
mknod /dev/asmspf_disk_01 c 21 1

mknod /dev/ocr_disk_01 c 21 2
mknod /dev/ocr_disk_02 c 21 4

mknod /dev/voting_disk_01 c 21 5
mknod /dev/voting_disk_02 c 21 3
mknod /dev/voting_disk_03 c 21 6


For the second node
mknod /dev/asmspf_disk_01 c 21 6

mknod /dev/ocr_disk_01 c 21 2
mknod /dev/ocr_disk_02 c 21 1

mknod /dev/voting_disk_01 c 21 3
mknod /dev/voting_disk_02 c 21 5
mknod /dev/voting_disk_03 c 21 4
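
If you would rather not work out the major/minor pairs by hand, the following sketch prints a ready-to-review mknod line for whichever node it is run on. It assumes the hdisk-to-role mapping chosen above and the standard AIX ls -l column layout; review the output before running it.

for pair in "43 asmspf_disk_01" "44 ocr_disk_01" "45 ocr_disk_02" \
            "46 voting_disk_01" "47 voting_disk_02" "48 voting_disk_03"
do
  set -- $pair                                          # $1 = hdisk number, $2 = new device name
  maj=$(ls -l /dev/hdisk$1 | awk '{sub(",","",$5); print $5}')
  min=$(ls -l /dev/hdisk$1 | awk '{print $6}')
  echo "mknod /dev/$2 c $maj $min"
done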

Run those commands on each corresponding RAC node
First node
[root@srvdb01]:/home/root \> mknod /dev/asmspf_disk_01 c 21 1
[root@srvdb01]:/home/root \> mknod /dev/ocr_disk_01 c 21 2
[root@srvdb01]:/home/root \> mknod /dev/ocr_disk_02 c 21 4
[root@srvdb01]:/home/root \> mknod /dev/voting_disk_01 c 21 5
[root@srvdb01]:/home/root \> mknod /dev/voting_disk_02 c 21 3
[root@srvdb01]:/home/root \> mknod /dev/voting_disk_03 c 21 6


Second node
[root@srvdb02]:/home/root \> mknod /dev/asmspf_disk_01 c 21 6
[root@srvdb02]:/home/root \> mknod /dev/ocr_disk_01 c 21 2
[root@srvdb02]:/home/root \> mknod /dev/ocr_disk_02 c 21 1
[root@srvdb02]:/home/root \> mknod /dev/voting_disk_01 c 21 3
[root@srvdb02]:/home/root \> mknod /dev/voting_disk_02 c 21 5
[root@srvdb02]:/home/root \> mknod /dev/voting_disk_03 c 21 4


Now check the new situation; the devices with major number 21 are the newly configured disks

First node
[root@srvdb01]:/home/root \> ls -l /dev/asmspf*
crw-rw----    1 oracle   dba          23,  3 Jul 28 2011  /dev/asmspf_disk
crw-------    1 root     system       21,  1 Mar 20 11:26 /dev/asmspf_disk_01
[root@srvdb01]:/home/root \> ls -l /dev/ocr*
crw-r-----    1 root     dba          23,  4 Jul 28 2011  /dev/ocr_disk1
crw-r-----    1 root     dba          23,  5 Jul 28 2011  /dev/ocr_disk2
crw-------    1 root     system       21,  2 Mar 20 11:26 /dev/ocr_disk_01
crw-------    1 root     system       21,  4 Mar 20 11:26 /dev/ocr_disk_02
[root@srvdb01]:/home/root \> ls -l /dev/vot*
crw-r--r--    1 oracle   dba          23,  6 Mar 20 11:27 /dev/voting_disk1
crw-r--r--    1 oracle   dba          23,  7 Mar 20 11:27 /dev/voting_disk2
crw-r--r--    1 oracle   dba          23,  8 Mar 20 11:27 /dev/voting_disk3
crw-------    1 root     system       21,  5 Mar 20 11:26 /dev/voting_disk_01
crw-------    1 root     system       21,  3 Mar 20 11:26 /dev/voting_disk_02
crw-------    1 root     system       21,  6 Mar 20 11:26 /dev/voting_disk_03
[root@srvdb01]:/home/root \>

Second node
[root@srvdb02]:/home/root \> ls -l /dev/asmspf*
crw-rw----    1 oracle   dba          23,  3 Jun 29 2011  /dev/asmspf_disk
crw-------    1 root     system       21,  6 Mar 20 11:26 /dev/asmspf_disk_01
[root@srvdb02]:/home/root \> ls -l /dev/ocr*
crw-r-----    1 root     dba          23,  6 Mar 20 11:27 /dev/ocr_disk1
crw-r-----    1 root     dba          23,  7 Mar 20 11:27 /dev/ocr_disk2
crw-------    1 root     system       21,  2 Mar 20 11:26 /dev/ocr_disk_01
crw-------    1 root     system       21,  1 Mar 20 11:26 /dev/ocr_disk_02
[root@srvdb02]:/home/root \> ls -l /dev/vot*
crw-r--r--    1 oracle   dba          23,  8 Mar 20 11:27 /dev/voting_disk1
crw-r--r--    1 oracle   dba          23,  9 Mar 20 11:27 /dev/voting_disk2
crw-r--r--    1 oracle   dba          23, 10 Mar 20 11:27 /dev/voting_disk3
crw-------    1 root     system       21,  3 Mar 20 11:26 /dev/voting_disk_01
crw-------    1 root     system       21,  5 Mar 20 11:26 /dev/voting_disk_02
crw-------    1 root     system       21,  4 Mar 20 11:26 /dev/voting_disk_03
[root@srvdb02]:/home/root \> 

Pay attention in this step: the file ownership and device permissions should be the same as those of the old disks!
Use the following commands on each RAC node

for i in 01 02
do
chown root:dba /dev/ocr_disk_$i
chmod 640 /dev/ocr_disk_$i
done

for i in 01 02 03
do
chown oracle:dba /dev/voting_disk_$i
chmod 644 /dev/voting_disk_$i
done

chown oracle:dba /dev/asmspf_disk_01
chmod 660 /dev/asmspf_disk_01

First node
[root@srvdb01]:/home/root \> for i in 01 02 
> do
> chown root:dba /dev/ocr_disk_$i
> chmod 640 /dev/ocr_disk_$i
> done
[root@srvdb01]:/home/root \> for i in 01 02 03  
> do
> chown oracle:dba /dev/voting_disk_$i
> chmod 644 /dev/voting_disk_$i
> done
[root@srvdb01]:/home/root \> chown oracle:dba /dev/asmspf_disk_01
[root@srvdb01]:/home/root \> chmod 660 /dev/asmspf_disk_01

Second node
[root@srvdb02]:/home/root \> for i in 01 02 
> do
> chown root:dba /dev/ocr_disk_$i
> chmod 640 /dev/ocr_disk_$i
> done
[root@srvdb02]:/home/root \> for i in 01 02 03  
> do
> chown oracle:dba /dev/voting_disk_$i
> chmod 644 /dev/voting_disk_$i
> done
[root@srvdb02]:/home/root \> chown oracle:dba /dev/asmspf_disk_01
[root@srvdb02]:/home/root \> chmod 660 /dev/asmspf_disk_01


Now check the file ownership and permissions of the new disks (the ones with major number 21); they should be the same as those of the old disks with major number 23 below.

First node
[root@srvdb01]:/home/root \> ls -l /dev/asmspf*
crw-rw----    1 oracle   dba          23,  3 Jul 28 2011  /dev/asmspf_disk
crw-rw----    1 oracle   dba          21,  1 Mar 20 11:26 /dev/asmspf_disk_01
[root@srvdb01]:/home/root \> ls -l /dev/ocr*
crw-r-----    1 root     dba          23,  4 Jul 28 2011  /dev/ocr_disk1
crw-r-----    1 root     dba          23,  5 Jul 28 2011  /dev/ocr_disk2
crw-r-----    1 root     dba          21,  2 Mar 20 11:26 /dev/ocr_disk_01
crw-r-----    1 root     dba          21,  4 Mar 20 11:26 /dev/ocr_disk_02
[root@srvdb01]:/home/root \> ls -l /dev/vot*
crw-r--r--    1 oracle   dba          23,  6 Mar 20 11:37 /dev/voting_disk1
crw-r--r--    1 oracle   dba          23,  7 Mar 20 11:37 /dev/voting_disk2
crw-r--r--    1 oracle   dba          23,  8 Mar 20 11:37 /dev/voting_disk3
crw-r--r--    1 oracle   dba          21,  5 Mar 20 11:26 /dev/voting_disk_01
crw-r--r--    1 oracle   dba          21,  3 Mar 20 11:26 /dev/voting_disk_02
crw-r--r--    1 oracle   dba          21,  6 Mar 20 11:26 /dev/voting_disk_03
[root@srvdb01]:/home/root \> 

Second node
[root@srvdb02]:/home/root \> ls -l /dev/asmspf*
crw-rw----    1 oracle   dba          23,  3 Jun 29 2011  /dev/asmspf_disk
crw-rw----    1 oracle   dba          21,  6 Mar 20 11:26 /dev/asmspf_disk_01
[root@srvdb02]:/home/root \> ls -l /dev/ocr*
crw-r-----    1 root     dba          23,  6 Mar 20 11:37 /dev/ocr_disk1
crw-r-----    1 root     dba          23,  7 Mar 20 11:37 /dev/ocr_disk2
crw-r-----    1 root     dba          21,  2 Mar 20 11:26 /dev/ocr_disk_01
crw-r-----    1 root     dba          21,  1 Mar 20 11:26 /dev/ocr_disk_02
[root@srvdb02]:/home/root \> ls -l /dev/vot*
crw-r--r--    1 oracle   dba          23,  8 Mar 20 11:38 /dev/voting_disk1
crw-r--r--    1 oracle   dba          23,  9 Mar 20 11:38 /dev/voting_disk2
crw-r--r--    1 oracle   dba          23, 10 Mar 20 11:38 /dev/voting_disk3
crw-r--r--    1 oracle   dba          21,  3 Mar 20 11:26 /dev/voting_disk_01
crw-r--r--    1 oracle   dba          21,  5 Mar 20 11:26 /dev/voting_disk_02
crw-r--r--    1 oracle   dba          21,  4 Mar 20 11:26 /dev/voting_disk_03
[root@srvdb02]:/home/root \> 

Now you need to use the following command to set the "pv" attribute of each new hdisk to "clear", to mark them as new, untouched devices in the system.

for i in 43 44 45 46 47 48
do
chdev -l hdisk$i -a pv=clear
done


First node
[root@srvdb01]:/home/root \> for i in 43 44 45 46 47 48
> do
> chdev -l hdisk$i -a pv=clear
> done

hdisk43 changed
hdisk44 changed
hdisk45 changed
hdisk46 changed
hdisk47 changed
hdisk48 changed
[root@srvdb01]:/home/root \> 

Second node
[root@srvdb02]:/home/root \> for i in 43 44 45 46 47 48
> do
> chdev -l hdisk$i -a pv=clear
> done

hdisk43 changed
hdisk44 changed
hdisk45 changed
hdisk46 changed
hdisk47 changed
hdisk48 changed
[root@srvdb02]:/home/root \> 

In this step you need to clear (zero out) the new virtual devices by running the following commands on each RAC node. Wait until a "Done" message is displayed for each command, indicating that the dd has finished. Run the commands on the first node and wait until they all complete ("Done" messages), then run them on the second node and wait again for them to complete; this is very important.

for i in 01 02
do
dd if=/dev/zero of=/dev/ocr_disk_$i bs=8192 count=25000 &
done

for i in 01 02 03
do
dd if=/dev/zero of=/dev/voting_disk_$i bs=8192 count=25000 &
done

dd if=/dev/zero of=/dev/asmspf_disk_01 bs=8192 count=25000 &

First node
[root@srvdb01]:/home/root \> for i in 01 02 
> 
> do
> dd if=/dev/zero of=/dev/ocr_disk_$i bs=8192 count=25000 &
> done
[1] 852170
[2] 848092
[root@srvdb01]:/home/root \> for i in 01 02 03 
> 
> do
> dd if=/dev/zero of=/dev/voting_disk_$i bs=8192 count=25000 &
> done
[3] 1274024
[4] 532694
[5] 1503258
[root@srvdb01]:/home/root \> 25000+0 records in.
25000+0 records out.
25000+0 records in.
25000+0 records out.
dd if=/dev/zero of=/dev/asmspf_disk_01 bs=8192 count=25000 &
[6] 852178
[1]   Done                    dd if=/dev/zero of=/dev/ocr_disk_$i bs=8192 count=25000
[2]   Done                    dd if=/dev/zero of=/dev/ocr_disk_$i bs=8192 count=25000
[root@srvdb01]:/home/root \> 
[root@srvdb01]:/home/root \> 25000+0 records in.
25000+0 records out.
25000+0 records in.
25000+0 records out.
25000+0 records in.
25000+0 records out.
25000+0 records in.
25000+0 records out.

[3]   Done                    dd if=/dev/zero of=/dev/voting_disk_$i bs=8192 count=25000
[4]   Done                    dd if=/dev/zero of=/dev/voting_disk_$i bs=8192 count=25000
[5]-  Done                    dd if=/dev/zero of=/dev/voting_disk_$i bs=8192 count=25000
[6]+  Done                    dd if=/dev/zero of=/dev/asmspf_disk_01 bs=8192 count=25000
[root@srvdb01]:/home/root \> 

Second node
[root@srvdb02]:/home/root \> for i in 01 02 
> 
> do
> dd if=/dev/zero of=/dev/ocr_disk_$i bs=8192 count=25000 &
> done
[1] 1040494
[2] 1507446
[root@srvdb02]:/home/root \> 
[root@srvdb02]:/home/root \> for i in 01 02 03 
> 
> do
> dd if=/dev/zero of=/dev/voting_disk_$i bs=8192 count=25000 &
> done
[3] 823530
[4] 975062
[5] 520304
[root@srvdb02]:/home/root \> 25000+0 records in.
25000+0 records out.
25000+0 records in.
25000+0 records out.
dd if=/dev/zero of=/dev/asmspf_disk_01 bs=8192 count=25000 &
[6] 581850
[1]   Done                    dd if=/dev/zero of=/dev/ocr_disk_$i bs=8192 count=25000
[2]   Done                    dd if=/dev/zero of=/dev/ocr_disk_$i bs=8192 count=25000
[root@srvdb02]:/home/root \> 
[root@srvdb02]:/home/root \> 
[root@srvdb02]:/home/root \> 25000+0 records in.
25000+0 records out.
25000+0 records in.
25000+0 records out.
25000+0 records in.
25000+0 records out.

[3]   Done                    dd if=/dev/zero of=/dev/voting_disk_$i bs=8192 count=25000
[4]   Done                    dd if=/dev/zero of=/dev/voting_disk_$i bs=8192 count=25000
[5]-  Done                    dd if=/dev/zero of=/dev/voting_disk_$i bs=8192 count=25000
[root@srvdb02]:/home/root \> 25000+0 records in.
25000+0 records out.

[6]+  Done                    dd if=/dev/zero of=/dev/asmspf_disk_01 bs=8192 count=25000
[root@srvdb02]:/home/root \> 

The configuration of the new disks is now complete.

---------------------------
1-) REPLACE OCR DISKS

We can now continue with replacing the old OCR, Voting and ASM spfile disks with the new ones.

According to MOS note 428681.1, in Oracle 10g RAC the OCR disks can be replaced while CRS is running, so we first tried to do it without stopping CRS.

First we need to check whether there is an OCR backup. Normally there is a scheduled job backing up the OCR automatically, so most probably you will see output like the one below.

Check the OCR backups: there are several recent ones, so if something goes wrong we can use the most recent backup taken before the operation to recover the OCR.
[root@srvdb01]:/home/root \> ocrconfig -showbackup

srvdb02     2012/03/20 08:57:32     /oracle/crshome1/cdata/crs

srvdb02     2012/03/20 04:57:32     /oracle/crshome1/cdata/crs

srvdb02     2012/03/20 00:57:31     /oracle/crshome1/cdata/crs

srvdb02     2012/03/18 20:57:30     /oracle/crshome1/cdata/crs

srvdb02     2012/03/07 16:57:14     /oracle/crshome1/cdata/crs
[root@srvdb02]:/home/root \> 
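
On top of these automatic backups, it can be a good idea to take a manual logical export of the OCR just before the change. A sketch as root on one node; the target path is only an example, and as far as I know the -s online option is what allows the export while CRS is still up.

# manual logical OCR export before the replacement (example path)
ocrconfig -export /oracle/ocr_before_replace.exp -s online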

Check the current OCR info. We already have two OCR disks configured: the first one is the primary and the second one is the OCR mirror.
[root@srvdb01]:/home/root \> ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     314344
         Used space (kbytes)      :       3876
         Available space (kbytes) :     310468
         ID                       : 1673980957
         Device/File Name         : /dev/ocr_disk1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/ocr_disk2
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded

[root@srvdb01]:/home/root \> 

Now we try to replace the primary OCR disk, but we get a "PROT-16" error. This may be caused by a bug; I do not know the exact reason, but I was unable to do it this way.
Run these commands only on one node.
[root@srvdb01]:/home/root \> ocrconfig -replace ocr /dev/ocr_disk_01
PROT-16: Internal Error


Then I tried to replace the ocrmirror, but it gave another error
[root@srvdb01]:/home/root \> ocrconfig -replace ocrmirror /dev/ocr_disk_01
PROT-21: Invalid parameter


When I checked the log file, it showed the following error.
[root@srvdb01]:/oracle/crshome1/log/srvdb01/client \> vi ocrconfig_2027594.log
"ocrconfig_2027594.log" 5 lines, 442 characters 
Oracle Database 10g CRS Release 10.2.0.4.0 Production Copyright 1996, 2008 Oracle.  All rights reserved.
2012-03-20 12:10:20.964: [ OCRCONF][1]ocrconfig starts...
2012-03-20 12:10:21.019: [  OCRCLI][1]proac_replace_dev:[/dev/ocr_disk_01]: Failed. Retval [8]
2012-03-20 12:10:21.019: [ OCRCONF][1]The input OCR device either is identical to the other device or cannot be opened
2012-03-20 12:10:21.019: [ OCRCONF][1]Exiting [status=failed]...


In order not to lose any time, I set this error aside to research later and decided to go on with the offline OCR disk replacement procedure. Since I would have to stop the CRS stack anyway to replace the voting disks in Oracle 10g RAC, it was not that important whether the OCR disks were replaced while CRS was running or not.

Just continue with the offline replacement procedure.
First we need to stop CRS on all nodes for this operation.

First node
[root@srvdb01]:/home/root \> crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.


Check whether CRS has been stopped successfully
[root@srvdb01]:/home/root \> crsctl check crs
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM 


Second node
[root@srvdb02]:/home/root \> crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

[root@srvdb02]:/home/root \> crsctl check crs
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM 
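
Before touching the OCR devices, it does not hurt to confirm that no clusterware daemons are left running on either node. A quick sketch; the daemon names are the usual 10.2 ones.

# no output means the stack is really down on this node
ps -ef | grep -v grep | egrep 'crsd\.bin|ocssd\.bin|evmd\.bin'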


Run the following commands on all the RAC nodes
[root@srvdb01]:/home/root \> ocrconfig -repair ocr /dev/ocr_disk_01
[root@srvdb01]:/home/root \> ocrconfig -repair ocrmirror /dev/ocr_disk_02

[root@srvdb02]:/home/root \> ocrconfig -repair ocr /dev/ocr_disk_01
[root@srvdb02]:/home/root \> ocrconfig -repair ocrmirror /dev/ocr_disk_02

Run this command only on one node
[root@srvdb01]:/home/root \> ocrconfig -overwrite


Check the latest state of the OCR; as we can see, the old disks are replaced by the new ones
[root@srvdb01]:/home/root \> ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     314344
         Used space (kbytes)      :       3884
         Available space (kbytes) :     310460
         ID                       : 1673980957
         Device/File Name         : /dev/ocr_disk_01
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/ocr_disk_02
                                    Device/File needs to be synchronized with the other device 

         Cluster registry integrity check succeeded

[root@srvdb01]:/home/root \> 

[root@srvdb02]:/home/root \> ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     314344
         Used space (kbytes)      :       3884
         Available space (kbytes) :     310460
         ID                       : 1673980957
         Device/File Name         : /dev/ocr_disk_01
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/ocr_disk_02
                                    Device/File needs to be synchronized with the other device

         Cluster registry integrity check succeeded

[root@srvdb02]:/home/root \>

Now we can remove the virtual devices of the old OCR disks on each node.
[root@srvdb01]:/home/root \> rm /dev/ocr_disk1
[root@srvdb01]:/home/root \> rm /dev/ocr_disk2

[root@srvdb02]:/home/root \> rm /dev/ocr_disk1
[root@srvdb02]:/home/root \> rm /dev/ocr_disk2

---------------------------
2-) REPLACE VOTING DISKS

Now we can continue replacing the old voting disks with the new ones.

Check current voting disk info
[root@srvdb01]:/home/root \> crsctl query css votedisk
 0.     0    /dev/voting_disk1
 1.     0    /dev/voting_disk2
 2.     0    /dev/voting_disk3


located 3 votedisk(s).


Now add the new voting disks; do this operation only on the first node
[root@srvdb01]:/home/root \> crsctl add css votedisk /dev/voting_disk_01 -force
Now formatting voting disk: /dev/voting_disk_01
successful addition of votedisk /dev/voting_disk_01.
[root@srvdb01]:/home/root \> crsctl add css votedisk /dev/voting_disk_02 -force
Now formatting voting disk: /dev/voting_disk_02
successful addition of votedisk /dev/voting_disk_02.
[root@srvdb01]:/home/root \> crsctl add css votedisk /dev/voting_disk_03 -force
Now formatting voting disk: /dev/voting_disk_03
successful addition of votedisk /dev/voting_disk_03.


Check the current voting disk info
[root@srvdb01]:/home/root \> crsctl query css votedisk
 0.     0    /dev/voting_disk1
 1.     0    /dev/voting_disk2
 2.     0    /dev/voting_disk3
 3.     0    /dev/voting_disk_01
 4.     0    /dev/voting_disk_02
 5.     0    /dev/voting_disk_03

located 6 votedisk(s).


Delete the old voting disks.
[root@srvdb01]:/home/root \> crsctl delete css votedisk /dev/voting_disk1 -force 
successful deletion of votedisk /dev/voting_disk1.
[root@srvdb01]:/home/root \> crsctl delete css votedisk /dev/voting_disk2 -force 
successful deletion of votedisk /dev/voting_disk2.
[root@srvdb01]:/home/root \> crsctl delete css votedisk /dev/voting_disk3 -force
successful deletion of votedisk /dev/voting_disk3.


Check the current voting disk info
[root@srvdb01]:/home/root \> crsctl query css votedisk
 0.     0    /dev/voting_disk_01
 1.     0    /dev/voting_disk_02
 2.     0    /dev/voting_disk_03

located 3 votedisk(s).

We have now successfully replaced the old voting disks with the new ones.

Now we can remove the virtual devices of the old voting disks on each node.
[root@srvdb01]:/home/root \> rm /dev/voting_disk1
[root@srvdb01]:/home/root \> rm /dev/voting_disk2
[root@srvdb01]:/home/root \> rm /dev/voting_disk3

[root@srvdb02]:/home/root \> rm /dev/voting_disk1
[root@srvdb02]:/home/root \> rm /dev/voting_disk2
[root@srvdb02]:/home/root \> rm /dev/voting_disk3


---------------------------
3-) REPLACE ASM SPFILE DISK

We can continue with replacing the ASM spfile disk now.

The following operations should be performed on each RAC node.

Switch to the ASM profile.

[oracle@srvdb01]:/oracle > . .profile_asm
[oracle@srvdb01]:/oracle > env|grep ORA
ORACLE_BASE=/oracle
ORACLE_SID=+ASM1
ORA_ASM_HOME=/oracle/asmhome1
ORA_CRS_HOME=/oracle/crshome1
ORACLE_HOME=/oracle/asmhome1

Go to the init.ora file location
[oracle@srvdb01]:/oracle > cd $ORACLE_HOME/dbs

Check this file for the current spfile setting
[oracle@srvdb01]:/oracle/asmhome1/dbs > more init+ASM1.ora
SPFILE='/dev/asmspf_disk'

Also, before stopping CRS, the ASM instance should show the output below.
SYS@+ASM1 AS SYSDBA> sho parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /dev/asmspf_disk


We need to back up this file, create a new pfile from the current spfile, create a new spfile from that pfile on the new ASM spfile disk, and point the new init.ora file to the new ASM spfile location.

Back up the old pfile
[oracle@srvdb01]:/oracle/asmhome1/dbs > cp init+ASM1.ora init+ASM1.ora-201203201041


In sqlplus, create a new pfile from spfile
SYS@+ASM1 AS SYSDBA> create pfile from spfile;

File created.

Check the new pfile
[oracle@srvdb01]:/oracle/asmhome1/dbs > more init+ASM1.ora
+ASM1.asm_diskgroups='DG_DB_ASM','DG_FRA_ASM'#Manual Mount
*.asm_diskgroups='DG_DB_ASM','DG_FRA_ASM'
*.asm_diskstring='/dev/ASM*'
*.background_dump_dest='/oracle/admin/+ASM/bdump'
*.cluster_database=true
*.core_dump_dest='/oracle/admin/+ASM/cdump'
+ASM2.instance_number=2
+ASM1.instance_number=1
*.instance_type='asm'
*.large_pool_size=12M
*.processes=150
*.remote_login_passwordfile='exclusive'
*.user_dump_dest='/oracle/admin/+ASM/udump'


In sqlplus, create the new spfile in the new ASM spfile disk location
SYS@+ASM1 AS SYSDBA> create spfile='/dev/asmspf_disk_01' from pfile;

File created.

Change the new init.ora file to point to the new ASM spfile location
[oracle@srvdb01]:/oracle/asmhome1/dbs > more init+ASM1.ora
SPFILE='/dev/asmspf_disk_01'
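
Before moving on, a quick way to sanity-check that the spfile content really landed on the new raw device is to dump its first few blocks and look for parameter text. Just a sketch; the block size and count are arbitrary.

# parameter names such as asm_diskstring should be visible in the output
dd if=/dev/asmspf_disk_01 bs=8192 count=4 2>/dev/null | strings | head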

Do all the above operations on the second node as well.

On the second node
Switch to the ASM profile.

[oracle@srvdb02]:/oracle > . .profile_asm
[oracle@srvdb02]:/oracle > env|grep ORA
ORACLE_BASE=/oracle
ORACLE_SID=+ASM2
ORA_ASM_HOME=/oracle/asmhome1
ORA_CRS_HOME=/oracle/crshome1
ORACLE_HOME=/oracle/asmhome1

Go to the init.ora file location
[oracle@srvdb02]:/oracle > cd $ORACLE_HOME/dbs

Check this file for the current spfile setting
[oracle@srvdb02]:/oracle/asmhome1/dbs > more init+ASM2.ora
SPFILE='/dev/asmspf_disk'

Also, before stopping CRS, the ASM instance should show the output below.
SYS@+ASM2 AS SYSDBA> sho parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /dev/asmspf_disk


We need to back up this file, create a new pfile from the current spfile, create a new spfile from that pfile on the new ASM spfile disk, and point the new init.ora file to the new ASM spfile location.

Back up the old pfile
[oracle@srvdb02]:/oracle/asmhome1/dbs > cp init+ASM2.ora init+ASM2.ora-201203201041


In sqlplus, create a new pfile from spfile
SYS@+ASM2 AS SYSDBA> create pfile from spfile;

File created.

Check the new pfile
[oracle@srvdb02]:/oracle/asmhome1/dbs > more init+ASM2.ora
+ASM1.asm_diskgroups='DG_DB_ASM','DG_FRA_ASM'#Manual Mount
*.asm_diskgroups='DG_DB_ASM','DG_FRA_ASM'
*.asm_diskstring='/dev/ASM*'
*.background_dump_dest='/oracle/admin/+ASM/bdump'
*.cluster_database=true
*.core_dump_dest='/oracle/admin/+ASM/cdump'
+ASM2.instance_number=2
+ASM1.instance_number=1
*.instance_type='asm'
*.large_pool_size=12M
*.processes=150
*.remote_login_passwordfile='exclusive'
*.user_dump_dest='/oracle/admin/+ASM/udump'


In sqlplus, create the new spfile in the new ASM spfile disk location
SYS@+ASM2 AS SYSDBA> create spfile='/dev/asmspf_disk_01' from pfile;

File created.

Change the new init.ora file to point to the new ASM spfile location
[oracle@srvdb02]:/oracle/asmhome1/dbs > more init+ASM2.ora
SPFILE='/dev/asmspf_disk_01'

---------------------------

Now that all the configuration is finished, we can start CRS and see the result.

[root@srvdb01]:/home/root \> crsctl start crs
Attempting to start CRS stack 
The CRS stack will be started shortly
[root@srvdb01]:/home/root \> 

[root@srvdb02]:/home/root \> crsctl start crs
Attempting to start CRS stack 
The CRS stack will be started shortly
[root@srvdb02]:/home/root \> 

[root@srvdb01]:/home/root \> crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

[root@srvdb02]:/home/root \> crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

[root@srvdb01]:/home/root \> crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....T1.inst application    ONLINE    ONLINE    srvdb01   
ora....T2.inst application    ONLINE    ONLINE    srvdb02   
ora.CORET.db   application    ONLINE    ONLINE    srvdb02   
ora....SM1.asm application    ONLINE    ONLINE    srvdb01   
ora....01.lsnr application    ONLINE    ONLINE    srvdb01   
ora....b01.gsd application    ONLINE    ONLINE    srvdb01   
ora....b01.ons application    ONLINE    ONLINE    srvdb01   
ora....b01.vip application    ONLINE    ONLINE    srvdb01   
ora....SM2.asm application    ONLINE    ONLINE    srvdb02   
ora....02.lsnr application    ONLINE    ONLINE    srvdb02   
ora....b02.gsd application    ONLINE    ONLINE    srvdb02   
ora....b02.ons application    ONLINE    ONLINE    srvdb02   
ora....b02.vip application    ONLINE    ONLINE    srvdb02   
[root@srvdb01]:/home/root \> 

[root@srvdb02]:/home/root \> crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....T1.inst application    ONLINE    ONLINE    srvdb01   
ora....T2.inst application    ONLINE    ONLINE    srvdb02   
ora.CORET.db   application    ONLINE    ONLINE    srvdb02   
ora....SM1.asm application    ONLINE    ONLINE    srvdb01   
ora....01.lsnr application    ONLINE    ONLINE    srvdb01   
ora....b01.gsd application    ONLINE    ONLINE    srvdb01   
ora....b01.ons application    ONLINE    ONLINE    srvdb01   
ora....b01.vip application    ONLINE    ONLINE    srvdb01   
ora....SM2.asm application    ONLINE    ONLINE    srvdb02   
ora....02.lsnr application    ONLINE    ONLINE    srvdb02   
ora....b02.gsd application    ONLINE    ONLINE    srvdb02   
ora....b02.ons application    ONLINE    ONLINE    srvdb02   
ora....b02.vip application    ONLINE    ONLINE    srvdb02   
[root@srvdb02]:/home/root \> 

As a result, everything looks good.
Let's check whether the ASM spfile is also updated in the ASM instances.

First node
[oracle@srvdb01]:/oracle > . .profile_asm
[YOU HAVE NEW MAIL]
[oracle@srvdb01]:/oracle > env|grep ORA
ORACLE_BASE=/oracle
ORACLE_SID=+ASM1
ORA_ASM_HOME=/oracle/asmhome1
ORA_CRS_HOME=/oracle/crshome1
ORACLE_HOME=/oracle/asmhome1


In sqlplus
SYS@+ASM1 > sho parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /dev/asmspf_disk_01
SYS@+ASM1 > 

Second node
[oracle@srvdb02]:/oracle > . .profile_asm
[YOU HAVE NEW MAIL]
[oracle@srvdb02]:/oracle > env|grep ORA
ORACLE_BASE=/oracle
ORACLE_SID=+ASM2
ORA_ASM_HOME=/oracle/asmhome1
ORA_CRS_HOME=/oracle/crshome1
ORACLE_HOME=/oracle/asmhome1


In sqlplus
SYS@+ASM2 > sho parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /dev/asmspf_disk_01
SYS@+ASM2 > 

The ASM spfile replacement is also successful; now we can remove the old ASM spfile virtual device on both nodes

[root@srvdb01]:/home/root \> rm /dev/asmspf_disk
[root@srvdb02]:/home/root \> rm /dev/asmspf_disk


This is the end of the operation, which was successful. I hope everything goes fine if you ever need to replace the OCR, Voting and ASM spfile disks in your Oracle RAC systems one day.

Monday, March 19, 2012

How to create an ASM Cluster File System (ACFS) in Oracle 11g ASM

If you ever need a clustered file system in an Oracle 11g RAC configuration, you can use ACFS (ASM Cluster File System).
You can create a clustered file system which is accessible by all the nodes of the Oracle RAC and is managed by Oracle ASM.

The example below uses an Oracle 11gR2 (11.2.0.3) RAC database with two nodes on IBM servers with IBM AIX v6.1 OS.

The Oracle RAC Clusterware stack (CRS) and ASM should be up before starting the operation below.

[root@nldevrac02]:/home/root > crsctl check cluster -all 
**************************************************************
nldevrac01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
nldevrac02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[root@nldevrac01]:/home/root > crsctl stat res ora.asm
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE              , ONLINE
STATE=ONLINE on nldevrac01, ONLINE on nldevrac02

We need to do the following operations on only one of the RAC nodes.
Switch to the Oracle GI owner user in Unix, which is "oracle" for me.

[root@nldevrac01]:/home/root > su - oracle
[oracle@nldevrac01]:/oracle > 

We need to switch to the Oracle 11g Grid Infrastructure (GI) profile in Unix, since we will use the ASMCA (Automatic Storage Management Configuration Assistant) utility from the GI home binaries.

[oracle@nldevrac01]:/oracle > . .profile_grid_11g    

Check again that the Oracle-related environment variables are correct

[oracle@nldevrac01]:/oracle > env | grep ORA
ORACLE_BASE=/ora11/oracle
ORACLE_SID=+ASM1
ORACLE_UNQNAME=COREP
ORACLE_HOME=/ora11/11.2.0/grid
ORA_GRID_HOME=/ora11/11.2.0/grid

Check that the asmca utility is in the PATH environment variable and that the oracle user can execute it.
[oracle@nldevrac01]:/oracle > which asmca
/ora11/11.2.0/grid/bin/asmca
[oracle@nldevrac01]:/oracle > 

Set the Unix "DISPLAY" parameter to the IP address of your PC like below.
[oracle@nldevrac01]:/oracle > export DISPLAY=10.1.20.99:0.0

Check that it has been set correctly
[oracle@nldevrac01]:/oracle > echo $DISPLAY
10.1.20.99:0.0

You need to use X-Windows client software such as "WRQ Reflection X" (or any other) on your PC to be able to see the GUI screens of the ASMCA utility on your monitor during the operation.
Start the X-Windows client software in listening mode on your PC.

Continue with ASMCA utility.

[oracle@nldevrac01]:/ora11/11.2.0/grid/dbs > asmca

You will see the following screen on your PC monitor when you start ASMCA utility


First we need to create a new ASM diskgroup. I choose "External" for the redundancy of the new ASM diskgroup, called "DG_ACFS_DISK_01", since the redundancy of the disks is managed externally in the storage system. As the member disk, I choose the already configured "/dev/ASM_Disk_RAC_voting_02" (do not mind the name; it was intended for a different purpose and I now use it for ACFS), which points to a raw "/dev/hdisk" assigned to both RAC nodes. In the "ASM Compatibility" field below, it automatically brings "11.2.0.0.0", and it is very important that this value is set to at least that version to be able to create an ACFS on the diskgroup. I also manually set the "Database Compatibility" to "11.2.0.0.0".
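
For reference, what ASMCA does in this screen corresponds roughly to the following SQL, shown here only as a sketch (run as the GI owner connected as SYSASM; the compatible.advm attribute is my assumption of what ASMCA sets behind the scenes to allow ADVM/ACFS volumes).

sqlplus -s / as sysasm <<'EOF'
CREATE DISKGROUP DG_ACFS_DISK_01 EXTERNAL REDUNDANCY
  DISK '/dev/ASM_Disk_RAC_voting_02'
  ATTRIBUTE 'compatible.asm'   = '11.2.0.0.0',
            'compatible.rdbms' = '11.2.0.0.0',
            'compatible.advm'  = '11.2.0.0.0';
EOF

A diskgroup created this way is mounted only on the local ASM instance; ASMCA also mounts it on the other node for you, which by hand would be something like srvctl start diskgroup -g dg_acfs_disk_01 -n nldevrac02.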


When I press the OK button, after some time the following message is displayed.


The final disk group list, including the new diskgroup, is as follows. Check very carefully that the "State" column shows "MOUNTED(2 of 2)"; this means the new ASM diskgroup was successfully mounted by the ASM instances of both RAC nodes, otherwise you can run into problems in the later steps.


Check again that this new ASM diskgroup is mounted in both RAC nodes.
[root@nldevrac01]:/home/root > crsctl stat res ora.DG_ACFS_DISK_01.dg
NAME=ora.DG_ACFS_DISK_01.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE              , ONLINE
STATE=ONLINE on nldevrac01, ONLINE on nldevrac02

[root@nldevrac01]:/home/root > 

The next step is creating a volume to be used by ACFS. Click the "Volumes" tab and press the "Create" button in this window.


You need to choose a "Volume Name" for your new volume. Choose the disk group name that you created in the previous steps and just give a size to this volume for how much space you need to use.


In the next steps you will see the following windows.




Now the volume is created. In the next step we will create the ACFS. Just click the "ASM Cluster File System" tab and press the "Create" button.



On that screen, since we will use this file system for general purposes, we choose "General Purpose File System"; you can also keep your database binaries on an ACFS. In the mount point field you see how your new file system will appear on your Unix server. After pressing the OK button I got the error below.


"ASM Cluster File System creation on /dev/as/oraacfs-116 failed with the following message:" but the message is empty. This can be a bug, but what I suspect is since I run the ASMCA utility by "oracle" unix user, this user is not capable of running the commands needed for creating the ACFS and I need to get and run those command by a more privileged user which is "root" in Unix. There is also a note about this situation at the bottom of the "ASM Cluster File Systems" tab like that "Note : Some ACFS commands can be executed as privileged/root user only. If you choose any of these operations, ASMCA will generate the command that can be executed as privileged/root user manually.

So I go to the "Create ASM Cluster File System" window and press the button "Show Command" and it shows the below commands.


Create ACFS Command:
/usr/sbin/mkfs -V acfs /dev/asm/oraacfs-116

Register Mount Point Command:
/sbin/acfsutil registry -a -f /dev/asm/oraacfs-116 /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs

Then I run those commands by logging in to the system as the root user; as before, they should be run on only one of the RAC nodes.

[root@nldevrac01]:/home/root > /usr/sbin/mkfs -V acfs /dev/asm/oraacfs-116
mkfs: version                   = 11.2.0.3.0
mkfs: on-disk version           = 39.0
mkfs: volume                    = /dev/asm/oraacfs-116
mkfs: volume size               = 536870912
mkfs: Format complete.
[root@nldevrac01]:/home/root > /sbin/acfsutil registry -a -f /dev/asm/oraacfs-116 /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs
acfsutil registry: mount point /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs successfully added to Oracle Registry
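
A quick check (sketch) that the file system is now registered and mounted:

/sbin/acfsutil registry                 # lists the registered ACFS mount points
mount | grep acfs                       # the new file system should appear here once the registry mounts it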

After that, to refresh the "ASM Cluster File Systems" window, switch to the "Volumes" tab and then click the "ASM Cluster File Systems" tab again; you will see the newly created ACFS file system there.
Now you can see the newly created ACFS file system on both RAC nodes and start to use it.
[root@nldevrac01]:/home/root > df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           0.25      0.17   31%     3245     8% /
/dev/hd2           7.00      5.04   28%    42355     4% /usr
/dev/hd9var        0.25      0.22   13%      635     2% /var
/dev/hd3           2.00      1.70   16%     1180     1% /tmp
/dev/hd1           0.25      0.23    8%       50     1% /home
/proc                 -         -    -         -     -  /proc
/dev/hd10opt       0.25      0.06   78%     4927    26% /opt
/dev/sysperflv      0.25      0.21   15%       54     1% /sysperf
/dev/hd11admin      0.25      0.24    4%       17     1% /admin
/dev/oraclelv     39.50      7.46   82%    66335     4% /oracle
/dev/ora11lv      20.00      1.84   91%    55117    12% /ora11
/dev/asm/oraacfs-116      0.50      0.43   15%   147928    15% /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs
[root@nldevrac01]:/home/root > 

[root@nldevrac02]:/home/root > df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           0.25      0.18   28%     3215     7% /
/dev/hd2           7.00      5.04   28%    42338     4% /usr
/dev/hd9var        0.25      0.22   13%      630     2% /var
/dev/hd3           2.00      0.98   52%     2810     2% /tmp
/dev/hd1           0.25      0.23    8%       46     1% /home
/proc                 -         -    -         -     -  /proc
/dev/hd10opt       0.25      0.06   78%     4927    26% /opt
/dev/sysperflv      0.25      0.22   14%       54     1% /sysperf
/dev/hd11admin      0.25      0.24    4%       17     1% /admin
/dev/oraclelv     39.50     11.65   71%    56435     3% /oracle
/dev/ora11lv      20.00      1.89   91%    55416    11% /ora11
/dev/asm/oraacfs-116      0.50      0.43   15%   147928    15% /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs
[root@nldevrac02]:/home/root > 
As you can see, the ownership of the new cluster file system is root; if you want to use it with the oracle user, just change the ownership to oracle or whatever user you will use, as in the sketch after the listing below.
[root@nldevrac02]:/home/root > ls -l /ora11/oracle
total 0
drwx------    2 oracle   dba             256 Mar 12 11:06 Clusterware
drwxr-xr-x    4 root     system          256 Mar 15 17:13 acfsmounts
drwx------    3 oracle   dba             256 Mar 12 14:18 admin
drwx------    6 oracle   dba             256 Mar 15 15:33 cfgtoollogs
drwxrwxr-x   11 oracle   dba            4096 Mar 12 13:09 diag
drwx------    3 oracle   dba             256 Mar 12 11:06 nldevrac02
drwx------    3 oracle   dba             256 Mar 13 13:25 oradata
drwxr-----    3 oracle   dba             256 Mar 13 16:28 oradiag_oracle
drwxr-xr-x    3 root     system          256 Mar 15 12:46 oradiag_root
drwx------    3 oracle   dba             256 Mar 12 13:08 product
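
A concrete sketch of the ownership change mentioned above (run as root; since ACFS is a cluster file system, doing it once on one node is enough):

chown oracle:dba /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs
ls -ld /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs    # should now show oracle:dba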

Wednesday, March 7, 2012

PRKP-1001 and CRS-0215 errors in an Oracle 10gR2 RAC database

I just installed a test Oracle 10gR2 RAC database and I am getting the error below when I try to start the instance on the first node using the srvctl utility. I do not get any error when I start the instance from SQL*Plus.

This is the current state; I want to start the instance on the first node (the first component in the list below).
[root@nldevrac01]:/oracle/crshome1/bin > crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....P1.inst application    ONLINE    OFFLINE               
ora....P2.inst application    ONLINE    OFFLINE               
ora.COREP.db   application    ONLINE    OFFLINE               
ora....SM1.asm application    ONLINE    ONLINE    nldevrac01  
ora....01.lsnr application    ONLINE    ONLINE    nldevrac01  
ora....c01.gsd application    ONLINE    ONLINE    nldevrac01  
ora....c01.ons application    ONLINE    ONLINE    nldevrac01  
ora....c01.vip application    ONLINE    ONLINE    nldevrac01  
ora....SM2.asm application    ONLINE    OFFLINE               
ora....02.lsnr application    ONLINE    OFFLINE               
ora....c02.gsd application    OFFLINE   OFFLINE               
ora....c02.ons application    ONLINE    OFFLINE               
ora....c02.vip application    ONLINE    ONLINE    nldevrac01  
[root@nldevrac01]:/oracle/crshome1/bin >

When I try to start it using srvctl, it gives the error below.
[root@nldevrac01]:/oracle/crshome1/bin > srvctl start instance -d corep -i corep1
PRKP-1001 : Error starting instance COREP1 on node nldevrac01
CRS-0215: Could not start resource 'ora.COREP.COREP1.inst'.
[root@nldevrac01]:/oracle/crshome1/bin > 

Then I searched for a solution to this error, starting by checking the related log files. I found the error below in one of them.
[oracle@nldevrac01]:/oracle/orahome1/log/nldevrac01/racg > vi imonCOREP.log
...
Oracle Database 10g RACG Release 10.2.0.4.0 Production Copyright 1996, 2005, Oracle.  All rights reserved.
SQL*Plus: Release 10.2.0.4.0 - Production on Tue Mar 6 17:21:34 2012
Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
Enter user-name: Connected to an idle instance.
SQL> ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00132: syntax error or unresolved network name 'LISTENER_NLDEVRAC01'
ORA-01078: failure in processing system parameters
SQL> Disconnected

SQL*Plus: Release 10.2.0.4.0 - Production on Tue Mar 6 17:23:31 2012
Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
Enter user-name: Connected to an idle instance.
SQL> ORACLE instance shut down.
SQL> Disconnected

Searching the internet for a solution to this error, I found the fix below and it worked for me. The problem was that the srvctl utility was not aware of the TNS_ADMIN setting in the Oracle owner user's ".profile" file.
It is actually set to the value below, but srvctl is unaware of it.

[oracle@nldevrac01]:/oracle > echo $TNS_ADMIN
/oracle/asmhome1/network/admin
[oracle@nldevrac01]:/oracle >
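
As a quick side check (a sketch), with TNS_ADMIN set in the shell, the alias from the error message should resolve and the listener should answer, which shows the network configuration itself is fine and only srvctl is missing the setting:

export TNS_ADMIN=/oracle/asmhome1/network/admin
tnsping LISTENER_NLDEVRAC01    # should report OK if the alias resolves and the listener is up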

Solution :
SRVCTL works from the Oracle Cluster Registry information and does not know the TNS_ADMIN environment setting for the database and instances. The fix is adding the TNS_ADMIN environment setting to the Oracle Cluster Registry for the database and instances. The SRVCTL setenv statement that adds the TNS_ADMIN attribute to the Oracle Cluster Registry has to be executed from one node only.

Set this parameter
[oracle@nldevrac01]:/oracle > srvctl getenv database -d corep
[oracle@nldevrac01]:/oracle > srvctl setenv db -d corep -t TNS_ADMIN=/oracle/asmhome1/network/admin
[oracle@nldevrac01]:/oracle > srvctl getenv database -d corep
TNS_ADMIN=/oracle/asmhome1/network/admin

Try to start the instance now
[oracle@nldevrac01]:/oracle > srvctl start instance -d corep -i corep1                   

Check the final state: the instance on the first node and the database were started successfully after setting the above parameter (the first and third components below).
[oracle@nldevrac01]:/oracle > crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....P1.inst application    ONLINE    ONLINE    nldevrac01  
ora....P2.inst application    ONLINE    OFFLINE               
ora.COREP.db   application    ONLINE    ONLINE    nldevrac01  
ora....SM1.asm application    ONLINE    ONLINE    nldevrac01  
ora....01.lsnr application    ONLINE    ONLINE    nldevrac01  
ora....c01.gsd application    ONLINE    ONLINE    nldevrac01  
ora....c01.ons application    ONLINE    ONLINE    nldevrac01  
ora....c01.vip application    ONLINE    ONLINE    nldevrac01  
ora....SM2.asm application    ONLINE    OFFLINE               
ora....02.lsnr application    ONLINE    OFFLINE               
ora....c02.gsd application    ONLINE    OFFLINE               
ora....c02.ons application    ONLINE    OFFLINE               
ora....c02.vip application    ONLINE    ONLINE    nldevrac01  
[oracle@nldevrac01]:/oracle > 

This was a quick solution for my case; I hope it helps other people who get the same error in their configurations.