If you are not deploying an extended distance cluster, you can skip ahead to part 5 of this series.
Configure the witness site
At your witness site, all you need is a Linux machine running an NFS server. This could be something as simple as an EC2 instance running in Amazon. The Oracle whitepaper I referenced on this topic is available here. You need to create a directory and export it to both cluster nodes. For example, you could add these lines to /etc/exports on the NFS server:
/mysql-prod mysql-prod-dc1-1.example.com(rw,sync,all_squash,anonuid=54321,anongid=54322)
/mysql-prod mysql-prod-dc2-1.example.com(rw,sync,all_squash,anonuid=54321,anongid=54322)
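If the NFS server is already running, you can apply the new exports without restarting it and then confirm what is actually being exported. A quick sanity check, using the standard nfs-utils commands:

exportfs -ra
showmount -e localhost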
Once that is done, you just need to do the following as root on each node:
chkconfig netfs on      # mount network filesystems listed in /etc/fstab at boot
chkconfig rpcbind on    # rpcbind is required for NFSv3 client mounts
service rpcbind start
mkdir /voting_disk      # mount point for the NFS voting disk
Add a line like this to /etc/fstab on each node:
nfshost.example.com:/mysql-prod /voting_disk nfs _netdev,rw,bg,soft,intr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600 0 0
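With the fstab entry in place, you can mount the share and confirm the mount options took effect; for example:

mount /voting_disk
mount | grep voting_disk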
If you have not yet installed the patch for bug 19373893, you need to mount the NFS share with the hard option instead of soft. However, if you do this and your NFS server becomes unreachable (or your EC2 instance just up and disappears, as they occasionally do), your entire cluster will crash. By default, ASM will not let you add a disk image mounted over a soft-mounted NFS share; this is what the patch allows you to do.
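You can confirm whether the patch is already in place with opatch; a quick check, assuming the grid home path used throughout this series:

/u01/app/12.1.0/grid/OPatch/opatch lsinventory | grep 19373893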
Once the NFS share is mounted, we need to create a disk image to be used as a voting disk for ASM. Run this command as the oracle user/grid owner on one node:
dd if=/dev/zero of=/voting_disk/nfs_vote bs=1M count=500
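It is worth confirming that the file picked up the ownership mapped by the all_squash/anonuid/anongid export options above, so the grid owner can read and write it:

ls -l /voting_disk/nfs_vote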
With this out of the way, we can create the CRS diskgroup. There are lots of ways to do this: if you want a GUI, you can use asmca; from the command line, you can use asmcmd or SQL*Plus. I have had issues in the past adding the quorum disk with asmca, so I chose to go the SQL*Plus route.
$ sqlplus / as sysasm
SQL*Plus: Release 12.1.0.2.0 Production on Tue Nov 29 16:33:03 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> create diskgroup CRS normal redundancy failgroup DC2 disk '/dev/xvdc' name CRS_1234 failgroup DC1 disk '/dev/xvdd' name CRS_5678;
Diskgroup created.
SQL> alter system set asm_diskstring = '/dev/xvd[c-l]','/voting_disk/nfs_vote' scope=both;
System altered.
SQL> alter diskgroup CRS set attribute 'compatible.asm'='12.1.0.2.0';
Diskgroup altered.
SQL> alter diskgroup CRS add quorum failgroup AWS DISK '/voting_disk/nfs_vote' NAME CRS_NFS;
Diskgroup altered.
You may also need to run alter diskgroup CRS mount; on the other node in the cluster, depending on how you created the diskgroup.
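To confirm that the NFS disk really was added as a quorum disk, you can query v$asm_disk; a quick sketch (the where clause assumes the disk names used above, and FAILGROUP_TYPE should report QUORUM for the NFS disk):

SQL> select name, path, failgroup, failgroup_type from v$asm_disk where name like 'CRS%';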
Next, we need to tell the cluster to use the new diskgroup as the voting diskgroup instead of the DATA diskgroup created during provisioning. We are also going to relocate the cluster registry (OCR) to the CRS diskgroup.
# /u01/app/12.1.0/grid/bin/crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE af5b1804b7ef4f32bfb026d0a2862a4b (/dev/xvde) [DATA]
2. ONLINE b177453910564f80bf010ab3c8707d91 (/dev/xvdf) [DATA]
3. ONLINE 5c82a722c1464ff0bf980648479a1837 (/dev/xvdg) [DATA]
Located 3 voting disk(s).
# /u01/app/12.1.0/grid/bin/crsctl replace votedisk +CRS
Successful addition of voting disk ec3c77f781624f37bf9aafd63411f447.
Successful addition of voting disk 63ec90a312a84f1bbffa93195a294a3d.
Successful addition of voting disk 6a5be824b8714ff0bfe2310d6e80b073.
Successful deletion of voting disk af5b1804b7ef4f32bfb026d0a2862a4b.
Successful deletion of voting disk b177453910564f80bf010ab3c8707d91.
Successful deletion of voting disk 5c82a722c1464ff0bf980648479a1837.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced
# /u01/app/12.1.0/grid/bin/crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE ec3c77f781624f37bf9aafd63411f447 (/dev/xvdc) [CRS]
2. ONLINE 63ec90a312a84f1bbffa93195a294a3d (/dev/xvdd) [CRS]
3. ONLINE 6a5be824b8714ff0bfe2310d6e80b073 (/voting_disk/nfs_vote) [CRS]
Located 3 voting disk(s).
# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1508
Available space (kbytes) : 408060
ID : 2146637587
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
# /u01/app/12.1.0/grid/bin/ocrconfig -add +CRS
# /u01/app/12.1.0/grid/bin/ocrconfig -delete +DATA
# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1508
Available space (kbytes) : 408060
ID : 2146637587
Device/File Name : +CRS
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
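With the OCR relocated, it is not a bad idea to take a manual backup right away; run as root from the grid home:

# /u01/app/12.1.0/grid/bin/ocrconfig -manualbackup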
Fix the DATA diskgroup
As it stands now, the DATA diskgroup has the four disks it contains placed in four different failgroups. We need to fix this so that disks presented from the same array are in the same failgroup. I did this with SQL*Plus; it seems to work best if you first drop a single disk from each eventual failgroup, then add the disks back into a properly named failgroup.
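Before dropping anything, take a look at the current disk-to-failgroup mapping so you know which path belongs where; for example:

SQL> select name, path, failgroup from v$asm_disk where name like 'DATA%';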
SQL> alter diskgroup DATA drop disk DATA_0000 rebalance power 11 wait;
Diskgroup altered.
SQL> alter diskgroup DATA drop disk DATA_0002 rebalance power 11 wait;
Diskgroup altered.
SQL> alter diskgroup DATA add failgroup DC2 disk '/dev/xvde' name DATA_1234 failgroup DC1 disk '/dev/xvdg' name DATA_5678 rebalance power 11 wait;
Diskgroup altered.
Now, you can drop the other two disks and add them back in the proper failgroup.
SQL> alter diskgroup DATA drop disk DATA_0001 drop disk DATA_0003 rebalance power 11 wait;
Diskgroup altered.
SQL> alter diskgroup DATA add failgroup DC2 disk '/dev/xvdf' name DATA_1235 failgroup DC1 disk '/dev/xvdh' name DATA_5679 rebalance power 11 wait;
Diskgroup altered.
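Because these statements use the wait clause, they block until the rebalance finishes. If you want to watch progress from another session, v$asm_operation shows the running rebalance and returns no rows once it is done:

SQL> select group_number, operation, state, power, est_minutes from v$asm_operation;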
Create the NFS diskgroup
I ended up creating the NFS diskgroup in SQL*Plus as well, in one shot. If you aren't going to use HA NFS, you can skip this.
SQL> create diskgroup NFS normal redundancy failgroup DC2 disk '/dev/xvdi' name NFS_1234 failgroup DC2 disk '/dev/xvdj' name NFS_1235 failgroup DC1 disk '/dev/xvdk' name NFS_5678 failgroup DC1 disk '/dev/xvdl' name NFS_5679;
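As with the CRS diskgroup, you may need to mount NFS on the other node. A quick way to confirm that all of the diskgroups are mounted is asmcmd, run as the grid owner:

$ asmcmd lsdg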
In the next part, we will get the MySQL software installed and create ACFS filesystems.