Views expressed here are solely my own. Please make sure that you test the code/queries that appear on my blog before applying them to a production environment.

Monday, March 19, 2012

How to create an ASM Cluster File System (ACFS) in Oracle 11g ASM

If you ever need a clustered file system in an Oracle 11g RAC configuration, you can use ACFS (ASM Cluster File System).
It gives you a clustered file system that is accessible by all nodes of the Oracle RAC and managed by Oracle ASM.

The example below uses an Oracle 11gR2 (11.2.0.3) RAC database with two nodes on IBM servers running IBM AIX v6.1.

The Oracle RAC clusterware stack (CRS) and ASM should be up before starting the operations below.

[root@nldevrac02]:/home/root > crsctl check cluster -all 
**************************************************************
nldevrac01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
nldevrac02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[root@nldevrac01]:/home/root > crsctl stat res ora.asm
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE              , ONLINE
STATE=ONLINE on nldevrac01, ONLINE on nldevrac02

The following operations need to be done on only one of the RAC nodes.
Switch to the Oracle GI owner user in Unix, which is "oracle" for me.

[root@nldevrac01]:/home/root > su - oracle
[oracle@nldevrac01]:/oracle > 

We need to switch to the Oracle 11g Grid Infrastructure (GI) profile in Unix, since we will use the ASMCA (Automatic Storage Management Configuration Assistant) utility from the GI home binaries.

[oracle@nldevrac01]:/oracle > . .profile_grid_11g    
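For reference, such a profile just sets the GI environment and puts the GI binaries in the PATH; a minimal sketch (the values below match the env output that follows, the exact content of your own profile may differ):

export ORACLE_BASE=/ora11/oracle
export ORACLE_HOME=/ora11/11.2.0/grid
export ORACLE_SID=+ASM1
export PATH=$ORACLE_HOME/bin:$PATH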

Check again that the Oracle-related environment variables are correct.

[oracle@nldevrac01]:/oracle > env | grep ORA
ORACLE_BASE=/ora11/oracle
ORACLE_SID=+ASM1
ORACLE_UNQNAME=COREP
ORACLE_HOME=/ora11/11.2.0/grid
ORA_GRID_HOME=/ora11/11.2.0/grid

Check that the asmca utility is included in the PATH environment variable and that the oracle user can execute it.
[oracle@nldevrac01]:/oracle > which asmca
/ora11/11.2.0/grid/bin/asmca
[oracle@nldevrac01]:/oracle > 

Set the Unix "DISPLAY" environment variable to the IP address of your PC as below.
[oracle@nldevrac01]:/oracle > export DISPLAY=10.1.20.99:0.0

Check that it has been set correctly.
[oracle@nldevrac01]:/oracle > echo $DISPLAY
10.1.20.99:0.0

You need to use X-Windows client software such as "WRQ Reflection X" on your PC to be able to see the GUI screens of the ASMCA utility on your monitor during the operation.
Start the X-Windows client software in listening mode on your PC.

Continue with ASMCA utility.

[oracle@nldevrac01]:/ora11/11.2.0/grid/dbs > asmca

You will see the following screen on your PC monitor when you start the ASMCA utility.


First we need to create a new ASM disk group. I choose "External" for the redundancy of the new ASM disk group, called "DG_ACFS_DISK_01", since the redundancy of the disks is managed externally in the storage system. As a member disk, I choose the already configured "/dev/ASM_Disk_RAC_voting_02" (ignore the name; it was intended for a different purpose and I now use it for ACFS), which points to a raw "/dev/hdisk" device assigned to both RAC nodes. The "ASM Compatibility" field below is filled with "11.2.0.0.0" automatically; it is very important that this attribute is set to this value, because otherwise you cannot create an ACFS on the disk group. I also manually set "Database Compatibility" to "11.2.0.0.0".
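The same disk group can also be created from SQL*Plus on the ASM instance; a minimal sketch under the assumptions above (note that ADVM volumes additionally require the COMPATIBLE.ADVM attribute, which ASMCA sets for you behind the scenes):

sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP DG_ACFS_DISK_01 EXTERNAL REDUNDANCY
  DISK '/dev/ASM_Disk_RAC_voting_02'
  ATTRIBUTE 'compatible.asm'   = '11.2',
            'compatible.rdbms' = '11.2',
            'compatible.advm'  = '11.2';
EOF

If you create the disk group this way, keep in mind that it is mounted only on the local ASM instance and you have to mount it on the other node yourself, whereas ASMCA mounts it on all nodes for you.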


When I press the OK button, after some time the following message is displayed.


The final disk group list, including the new disk group, is as follows. Check very carefully that the "State" column shows "MOUNTED(2 of 2)"; this means the new ASM disk group was successfully mounted by the ASM instances of both RAC nodes. Otherwise you may hit problems in the later steps.


Check again that this new ASM diskgroup is mounted in both RAC nodes.
[root@nldevrac01]:/home/root > crsctl stat res ora.DG_ACFS_DISK_01.dg
NAME=ora.DG_ACFS_DISK_01.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE              , ONLINE
STATE=ONLINE on nldevrac01, ONLINE on nldevrac02

[root@nldevrac01]:/home/root > 
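Alternatively, the same check can be done from SQL*Plus on the ASM instance; a quick sketch, run as the GI owner:

sqlplus / as sysasm <<'EOF'
SELECT inst_id, name, state FROM gv$asm_diskgroup WHERE name = 'DG_ACFS_DISK_01';
EOF

Both ASM instances should report the disk group as MOUNTED.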

The next step is creating a volume to be used by ACFS. Click the "Volumes" tab and press the "Create" button in this window.


You need to choose a "Volume Name" for your new volume. Choose the disk group name that you created in the previous steps and specify how much space you need for this volume.
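If you prefer the command line, the same volume can also be created and inspected with asmcmd from the GI home; a sketch assuming the volume name "ORAACFS" and the 512 MB size used in this example:

asmcmd volcreate -G DG_ACFS_DISK_01 -s 512M ORAACFS
asmcmd volinfo -G DG_ACFS_DISK_01 ORAACFS

volinfo shows, among other things, the ADVM device name of the volume (here /dev/asm/oraacfs-116), which is needed later for mkfs.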


In the next steps you will see the following windows.




Now the volume is created. In the next step we will create the ACFS. Just click the "ASM Cluster File System" tab and press the "Create" button.



On that screen, since we will use this file system for general purposes, we choose "General Purpose File System"; you can also keep your database binaries on an ACFS. The mount point field shows how your new file system will appear on your Unix server. After pressing the OK button I got the error below.


"ASM Cluster File System creation on /dev/as/oraacfs-116 failed with the following message:" but the message is empty. This can be a bug, but what I suspect is since I run the ASMCA utility by "oracle" unix user, this user is not capable of running the commands needed for creating the ACFS and I need to get and run those command by a more privileged user which is "root" in Unix. There is also a note about this situation at the bottom of the "ASM Cluster File Systems" tab like that "Note : Some ACFS commands can be executed as privileged/root user only. If you choose any of these operations, ASMCA will generate the command that can be executed as privileged/root user manually.

So I go back to the "Create ASM Cluster File System" window and press the "Show Command" button, which shows the commands below.


Create ACFS Command:
/usr/sbin/mkfs -V acfs /dev/asm/oraacfs-116

Register Mount Point Command:
/sbin/acfsutil registry -a -f /dev/asm/oraacfs-116 /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs

Then I run those commands after logging in to the system as the root user. As before, these should be run on only one of the RAC nodes.

[root@nldevrac01]:/home/root > /usr/sbin/mkfs -V acfs /dev/asm/oraacfs-116
mkfs: version                   = 11.2.0.3.0
mkfs: on-disk version           = 39.0
mkfs: volume                    = /dev/asm/oraacfs-116
mkfs: volume size               = 536870912
mkfs: Format complete.
[root@nldevrac01]:/home/root > /sbin/acfsutil registry -a -f /dev/asm/oraacfs-116 /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs
acfsutil registry: mount point /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs successfully added to Oracle Registry
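Registering the mount point lets the ACFS registry mount the file system automatically on all cluster nodes. If it does not show up immediately, it can also be mounted manually as root on each node; a sketch for AIX (on Linux the vfs type flag is -t instead of -v):

/usr/sbin/mount -v acfs /dev/asm/oraacfs-116 /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs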

After that, to refresh the "ASM Cluster File Systems" window, switch to the "Volumes" tab and then click the "ASM Cluster File Systems" tab again; now you can see the newly created ACFS file system.
You can now see the new ACFS file system on both of the RAC nodes and start to use it.
[root@nldevrac01]:/home/root > df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           0.25      0.17   31%     3245     8% /
/dev/hd2           7.00      5.04   28%    42355     4% /usr
/dev/hd9var        0.25      0.22   13%      635     2% /var
/dev/hd3           2.00      1.70   16%     1180     1% /tmp
/dev/hd1           0.25      0.23    8%       50     1% /home
/proc                 -         -    -         -     -  /proc
/dev/hd10opt       0.25      0.06   78%     4927    26% /opt
/dev/sysperflv      0.25      0.21   15%       54     1% /sysperf
/dev/hd11admin      0.25      0.24    4%       17     1% /admin
/dev/oraclelv     39.50      7.46   82%    66335     4% /oracle
/dev/ora11lv      20.00      1.84   91%    55117    12% /ora11
/dev/asm/oraacfs-116      0.50      0.43   15%   147928    15% /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs
[root@nldevrac01]:/home/root > 

[root@nldevrac02]:/home/root > df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           0.25      0.18   28%     3215     7% /
/dev/hd2           7.00      5.04   28%    42338     4% /usr
/dev/hd9var        0.25      0.22   13%      630     2% /var
/dev/hd3           2.00      0.98   52%     2810     2% /tmp
/dev/hd1           0.25      0.23    8%       46     1% /home
/proc                 -         -    -         -     -  /proc
/dev/hd10opt       0.25      0.06   78%     4927    26% /opt
/dev/sysperflv      0.25      0.22   14%       54     1% /sysperf
/dev/hd11admin      0.25      0.24    4%       17     1% /admin
/dev/oraclelv     39.50     11.65   71%    56435     3% /oracle
/dev/ora11lv      20.00      1.89   91%    55416    11% /ora11
/dev/asm/oraacfs-116      0.50      0.43   15%   147928    15% /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs
[root@nldevrac02]:/home/root > 
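You can also list the registered ACFS file systems and show details of a mounted one with acfsutil as root; a short sketch:

/sbin/acfsutil registry
/sbin/acfsutil info fs /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs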
As you can see, the new cluster file system is owned by root. If you want to use it with the oracle user, just change the ownership to oracle or to whatever user you will use.
[root@nldevrac02]:/home/root > ls -l /ora11/oracle
total 0
drwx------    2 oracle   dba             256 Mar 12 11:06 Clusterware
drwxr-xr-x    4 root     system          256 Mar 15 17:13 acfsmounts
drwx------    3 oracle   dba             256 Mar 12 14:18 admin
drwx------    6 oracle   dba             256 Mar 15 15:33 cfgtoollogs
drwxrwxr-x   11 oracle   dba            4096 Mar 12 13:09 diag
drwx------    3 oracle   dba             256 Mar 12 11:06 nldevrac02
drwx------    3 oracle   dba             256 Mar 13 13:25 oradata
drwxr-----    3 oracle   dba             256 Mar 13 16:28 oradiag_oracle
drwxr-xr-x    3 root     system          256 Mar 15 12:46 oradiag_root
drwx------    3 oracle   dba             256 Mar 12 13:08 product
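For example, as root on one of the nodes (since ACFS is a cluster file system, the ownership change is visible on all nodes):

chown oracle:dba /ora11/oracle/acfsmounts/dg_acfs_disk_01_oraacfs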

2 comments:

Ergem Peker said...

very well prepared hands on guide for ACFS. thanks for your post..

Ural Ural said...

Dear Ergem, thank you for your comment although it's a bit late.

Take care,
Ural