Introduction
We will continue with the GNS and Flex Cluster installation (Oracle Clusterware 12cR1 with Flex Cluster). This time we will add a new HUB node to our environment, so first we clone the virtual machine template (see Installing OL6) and make some changes to prepare it.
Points to carry out
- Configure VM template
- Run prerequisite tasks
- Run addnode.sh to add the HUB node
- Validate the status of the new hub node
Assumptions
- You have at least 4GB of RAM for this VM
- VirtualBox is installed and you have enough space on your hard drive (40GB)
- You have installed OL6
Configure VM template
Setting up hostname
[root@host01 ~]# hostname host03.example.com
[root@host01 ~]# vi /etc/sysconfig/network
[root@host01 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=host03.example.com
# oracle-rdbms-server-12cR1-preinstall : Add NOZEROCONF=yes
NOZEROCONF=yes
[root@host01 ~]#
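Because host03 is a clone, its network interfaces still carry the template's addresses. As a hedged sketch (the interface names eth0/eth1 and the .101 source addresses are assumptions; the .103 targets match the addressing used later in this article), the cloned configuration could be updated along these lines:

```shell
# Assumed interface names (eth0 public, eth1 private); adjust to your VM.
# Give the clone its own addresses in the ifcfg files.
sed -i 's/192\.168\.100\.101/192.168.100.103/' /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i 's/192\.168\.50\.101/192.168.50.103/'   /etc/sysconfig/network-scripts/ifcfg-eth1

# Remove the cloned MAC-to-interface mappings so udev regenerates them on boot.
rm -f /etc/udev/rules.d/70-persistent-net.rules
reboot
```

Whatever scheme you use, host03's public and private IPs must be unique and resolvable from the existing nodes before you continue.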
Checking that the ASM disks are visible from host03
[root@host03 ~]# ls -ltr /dev/asm*
brw-rw----. 1 grid asmadmin 8, 17 Mar 24 00:53 /dev/asmdisk1p1
brw-rw----. 1 grid asmadmin 8, 33 Mar 24 00:53 /dev/asmdisk1p2
brw-rw----. 1 grid asmadmin 8, 49 Mar 24 00:53 /dev/asmdisk2p1
brw-rw----. 1 grid asmadmin 8, 65 Mar 24 00:53 /dev/asmdisk2p2
[root@host03 ~]#
Configuring SSH on host03 (you can do this step manually or wait for the GUI; below I show how to configure it manually)
https://docs.oracle.com/cd/E11882_01/install.112/e48294/manpreins.htm#CWAIX368
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa

# send the authorized_keys file from host01 to host03
[grid@host01 .ssh]$ scp authorized_keys host03:/home/grid/.ssh/
The authenticity of host 'host03 (192.168.100.103)' can't be established.
RSA key fingerprint is 0a:12:d3:88:b4:5a:63:d4:e3:5e:91:17:3b:20:21:41.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'host03,192.168.100.103' (RSA) to the list of known hosts.
grid@host03's password:
authorized_keys                               100%  466     0.5KB/s   00:00

# log in with ssh from host01 to host03
[grid@host01 .ssh]$ ssh host03
[grid@host03 ~]$ cd .ssh/
[grid@host03 .ssh]$ ls -la
total 20
drwx------. 2 grid oinstall 4096 Mar 24 01:15 .
drwx------. 6 grid oinstall 4096 Mar 24 01:10 ..
-rw-r--r--. 1 grid oinstall  466 Mar 24 01:15 authorized_keys
-rw-------. 1 grid oinstall 1675 Mar 24 01:11 id_rsa
-rw-r--r--. 1 grid oinstall  405 Mar 24 01:11 id_rsa.pub
[grid@host03 .ssh]$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxFdZTxH1igi/ZFpHkXo5MWCwJ7ed36Uyqwix364TrB3rauSD8Q/7ZdRkvT6ID6aRd6EsYbYa8DJSfqR8X40GmV9VrtkwEXvzJvFQgR1SFtjQaXXPGQSyI49+gj86CQPU4/dz+XRKW+lRh2vNszYpJCYWdt7dFfIiHM+gNQ48Mqs= grid@host01.example.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAsNtc3mUwVY9i0ndfSWdRmS1u5giE1UaOWkO3RBs9jmEpMb89v8y97e9FWceROC9CC3BN+cf7bdmrE+7FwtVRlXj46ESrSmbkOubMOhnLkHJAaXWo/iFByWtaHJgGSJUEVQEepkPH3jV6jsvNYecoqxR9kjpl6KqNEMBZBh6rUEk= grid@host02.example.com
[grid@host03 .ssh]$ cat id_rsa.pub >> authorized_keys

# after that you have to send the updated file to all the other nodes
[grid@host03 .ssh]$ scp authorized_keys host02:/home/grid/.ssh/
[grid@host03 .ssh]$ scp authorized_keys host01:/home/grid/.ssh/
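Before running cluvfy it is worth confirming that user equivalence really works in every direction, not just from host01. A minimal sketch, assuming the hostnames used in this article and that it is run as the grid user:

```shell
# From each node, run a non-interactive command on every node (including itself).
# BatchMode forces ssh to fail instead of prompting, so any failure here means
# authorized_keys is incomplete on that target.
for src in host01 host02 host03; do
  for dst in host01 host02 host03; do
    echo "== $src -> $dst =="
    ssh "$src" "ssh -o BatchMode=yes $dst hostname" </dev/null
  done
done
```

Nine hostnames printed with no password prompts means the equivalence check in cluvfy should pass.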
Run prerequisite tasks
Checking the preinstall requirements on host03 with the cluvfy command
cluvfy stage -pre crsinst -n host03
[grid@host01 ~]$ cluvfy stage -pre crsinst -n host03

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "host01"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.50.0"
Node connectivity passed for subnet "192.168.50.0" with node(s) host03
TCP connectivity check passed for subnet "192.168.50.0"

Check: Node connectivity using interfaces on subnet "192.168.100.0"
Node connectivity passed for subnet "192.168.100.0" with node(s) host03
TCP connectivity check passed for subnet "192.168.100.0"

Node connectivity check passed

Checking multicast communication...
Checking subnet "192.168.50.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.50.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.

Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "host03:/usr,host03:/var,host03:/etc,host03:/u01/app/12.1.0/grid,host03:/sbin,host03:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Kernel parameter check passed for "panic_on_oops"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP configuration file "/etc/ntp.conf" existence check passed
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
Check of common NTP Time Server passed
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Time zone consistency check passed
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking daemon "avahi-daemon" is not configured and running
Daemon not configured check passed for process "avahi-daemon"
Daemon not running check passed for process "avahi-daemon"
Starting check for Reverse path filter setting ...
Check for Reverse path filter setting passed
Starting check for Network interface bonding status of private interconnect network interfaces ...
Check for Network interface bonding status of private interconnect network interfaces passed
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed
Starting check for /boot mount ...
Check for /boot mount passed
Starting check for zeroconf check ...
Check for zeroconf check passed

Pre-check for cluster services setup was successful.
[grid@host01 ~]$
Run addnode.sh to add the HUB node
If the cluster verification command was successful you can continue adding the node; otherwise, correct the issues it reports first.
[root@host01 ~]# cd /home/grid/grid/rpm/
[root@host01 rpm]# ls
cvuqdisk-1.0.9-1.rpm
[root@host01 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
        package cvuqdisk-1.0.9-1.x86_64 is already installed
[root@host01 rpm]#
[root@host01 ~]# su - grid
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01 ~]$ cd $ORACLE_HOME
[grid@host01 grid]$ cd addnode/
[grid@host01 addnode]$ ./addnode.sh
Now follow the installer screens: type the hostname of the new node, in this case host03.example.com, and in the role column select HUB. Also check the SSH connectivity here; if you have not set it up yet, you can configure it from this screen.
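If you prefer to skip the GUI, addnode.sh can also be driven in silent mode. A hedged sketch (verify the exact parameter set against the 12.1 documentation for your patch level; with GNS in place the node VIP is assigned automatically, so no CLUSTER_NEW_VIRTUAL_HOSTNAMES entry is given here):

```shell
# Run as the grid user from $ORACLE_HOME/addnode on an existing node.
./addnode.sh -silent \
  "CLUSTER_NEW_NODES={host03}" \
  "CLUSTER_NEW_NODE_ROLES={HUB}"
```

In either mode, the installer finishes by asking you to run root.sh on host03 as root; the node only joins the cluster once that script completes.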
Validate the status of the new hub node
Finally, check the resources
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
               ONLINE  ONLINE       host03                   STABLE
ora.DATA.dg
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
               ONLINE  ONLINE       host03                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
               ONLINE  ONLINE       host03                   STABLE
ora.net1.network
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
               ONLINE  ONLINE       host03                   STABLE
ora.ons
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
               ONLINE  ONLINE       host03                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       host02                   STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       host03                   STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       host01                   STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       host01                   169.254.208.71 192.1
                                                             68.50.101,STABLE
ora.asm
      1        ONLINE  ONLINE       host01                   Started,STABLE
      2        ONLINE  ONLINE       host02                   Started,STABLE
      3        ONLINE  ONLINE       host03                   Started,STABLE
ora.cvu
      1        ONLINE  ONLINE       host01                   STABLE
ora.gns
      1        ONLINE  ONLINE       host01                   STABLE
ora.gns.vip
      1        ONLINE  ONLINE       host01                   STABLE
ora.host01.vip
      1        ONLINE  ONLINE       host01                   STABLE
ora.host02.vip
      1        ONLINE  ONLINE       host02                   STABLE
ora.host03.vip
      1        ONLINE  ONLINE       host03                   STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       host01                   Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       host01                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       host02                   STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       host03                   STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       host01                   STABLE
--------------------------------------------------------------------------------
[grid@host01 ~]$
[grid@host01 ~]$ crsctl check cluster -all
**************************************************************
host01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
host02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
host03:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[grid@host01 ~]$
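Besides the resource listing, cluvfy has a dedicated post-addition stage, and olsnodes can confirm the role of the new node. As a quick final sketch (output omitted; check the flag combination against your release's olsnodes help):

```shell
# Verify the node addition itself from an existing node, as the grid user.
cluvfy stage -post nodeadd -n host03

# List cluster nodes: -s shows status, -a shows the active node role.
# host03 should appear as Active with the Hub role.
olsnodes -s -a
```

If both come back clean, the new HUB node is fully integrated into the Flex Cluster.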