Removing and Adding Back a Failed Node in Oracle RAC 12cR2

After some time I am posting again, yeah! There is one topic that always comes up during migrations: what happens if a node of the cluster breaks completely and has to be recreated from scratch? Let's try that out.

Environment before the failure:

  • Two-node cluster {host01, host02}
  • One ASM instance running on each node {+ASM1, +ASM2}
  • One database instance running on each node {orcl1, orcl2}
[grid@host01 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.DATA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.FRA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.chad
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.net1.network
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.ons
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host01 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host01 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE host01 169.254.12.84 192.16
8.50.11 192.168.50.1
2,STABLE
ora.asm
1 ONLINE ONLINE host01 Started,STABLE
2 ONLINE ONLINE host02 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE host01 STABLE
ora.host01.vip
1 ONLINE ONLINE host01 STABLE
ora.host02.vip
1 ONLINE ONLINE host02 STABLE
ora.mgmtdb
1 ONLINE ONLINE host01 Open,STABLE
ora.orcl.db
1 ONLINE ONLINE host01 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
2 ONLINE ONLINE host02 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
ora.qosmserver
1 ONLINE ONLINE host01 STABLE
ora.scan1.vip
1 ONLINE ONLINE host02 STABLE
ora.scan2.vip
1 ONLINE ONLINE host01 STABLE
ora.scan3.vip
1 ONLINE ONLINE host01 STABLE
--------------------------------------------------------------------------------
[grid@host01 ~]$

Status after the node failed and could not be re-established:

[grid@host01 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE host01 STABLE
ora.DATA.dg
ONLINE ONLINE host01 STABLE
ora.FRA.dg
ONLINE ONLINE host01 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE host01 STABLE
ora.chad
ONLINE ONLINE host01 STABLE
ora.net1.network
ONLINE ONLINE host01 STABLE
ora.ons
ONLINE ONLINE host01 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host01 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host01 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host01 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE host01 169.254.12.84 192.16
8.50.11 192.168.50.1
2,STABLE
ora.asm
1 ONLINE ONLINE host01 Started,STABLE
2 ONLINE OFFLINE STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE host01 STABLE
ora.host01.vip
1 ONLINE ONLINE host01 STABLE
ora.host02.vip
1 ONLINE INTERMEDIATE host01 FAILED OVER,STABLE
ora.mgmtdb
1 ONLINE ONLINE host01 Open,STABLE
ora.orcl.db
1 ONLINE ONLINE host01 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
2 ONLINE OFFLINE STABLE
ora.qosmserver
1 ONLINE ONLINE host01 STABLE
ora.scan1.vip
1 ONLINE ONLINE host01 STABLE
ora.scan2.vip
1 ONLINE ONLINE host01 STABLE
ora.scan3.vip
1 ONLINE ONLINE host01 STABLE
--------------------------------------------------------------------------------
[grid@host01 ~]$

Remove the node from the cluster, working from the surviving node

Reconfigure the RDBMS services in the cluster to take into account that node 2 is gone. If any service is defined on the failed node, remove it from that node (in this case we don't have any).

As the root user, for example for an orcl_ha high-availability service:

srvctl modify service -d orcl -s orcl_ha -n host01 -i orcl1 -f

You can validate this with:

crsctl status res -t
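srvctl gives a more targeted view of the service itself. A small hedged sketch, reusing the orcl_ha example name from above (adjust to your own service names):

srvctl config service -d orcl -s orcl_ha
srvctl status service -d orcl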

Remove database instance

Check the current status:

[oracle@host01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Sun Feb 16 13:43:53 2020

Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select distinct thread# from v$log;

THREAD#
----------
1
2

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Delete the instance with DBCA in silent mode from the surviving node.

[oracle@host01 ~]$ dbca -silent -deleteInstance -nodeList host02 -gdbName orcl -instanceName orcl2 -sysDBAUserName sys -sysDBAPassword oracle_4U
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
DBCA Operation failed.
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/orcl.log" for further details.

[oracle@host01 ~]$ cat /u01/app/oracle/cfgtoollogs/dbca/orcl.log
[ 2020-02-16 13:44:35.607 CST ] The user "oracle" does not have user equivalence setup on nodes "[host02]". Please set up the user equivalence before proceeding.
PRKC-1030 : Error checking accessibility for node host02, PRKN-1038 : The command "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 host02 -n /bin/true" run on node "host01" gave an unexpected output: "ssh: connect to host host02 port 22: No route to host"
[ 2020-02-16 13:44:41.312 CST ] The Database Configuration Assistant will delete the Oracle instance and its associated OFA directory structure. All information about this instance will be deleted.

Do you want to proceed?
[ 2020-02-16 13:44:41.326 CST ] Deleting instance
[ 2020-02-16 13:44:47.567 CST ] Node host02 is not accessible. DBCA may not be able to remove all files for the instance. Do you want to continue to delete instance orcl2 on this node?
DBCA_PROGRESS : 1%
[ 2020-02-16 13:44:50.655 CST ] Unable to copy the file "host02:/etc/oratab" to "/tmp/oratab.host02".
DBCA_PROGRESS : 2%
DBCA_PROGRESS : 6%
DBCA_PROGRESS : 13%
DBCA_PROGRESS : 20%
DBCA_PROGRESS : 26%
DBCA_PROGRESS : 33%
DBCA_PROGRESS : 40%
DBCA_PROGRESS : 46%
DBCA_PROGRESS : 53%
DBCA_PROGRESS : 60%
DBCA_PROGRESS : 66%
[ 2020-02-16 13:45:27.331 CST ] Completing instance management.
[ 2020-02-16 13:45:28.375 CST ] Illegal Capacity: -1
[ 2020-02-16 13:45:28.450 CST ] DBCA_PROGRESS : DBCA Operation failed.
[oracle@host01 ~]$

It failed because the node is not reachable; you can ignore this. Check the current status of the database:

[oracle@host01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Sun Feb 16 13:47:48 2020

Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select distinct thread# from v$log;

THREAD#
----------
1

SQL> ALTER DATABASE DISABLE THREAD 2; --Just in case

Database altered.

SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
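Optionally, you can also look for instance-specific spfile entries left behind for orcl2. This is only a hedged sketch; whether anything remains depends on how far the failed DBCA run got:

-- parameters still explicitly set for the removed instance
SELECT sid, name, value FROM v$spparameter WHERE sid = 'orcl2' AND isspecified = 'TRUE';

-- if something remains (e.g. instance_number, undo_tablespace), reset it, for example:
-- ALTER SYSTEM RESET instance_number SCOPE=SPFILE SID='orcl2';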

[oracle@host01 ~]$ srvctl config database -d RAC
PRCD-1120 : The resource for database RAC could not be found.
PRCR-1001 : Resource ora.rac.db does not exist
[oracle@host01 ~]$ srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/12.2.0/dbhome_1
Oracle user: oracle
Spfile: +FRA/ORCL/PARAMETERFILE/spfile.268.1031329117
Password file: +FRA/ORCL/PASSWORD/pwdorcl.256.1031328765
Domain: example.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: FRA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances: orcl1
Configured nodes: host01
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed
[oracle@host01 ~]$
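As a quick cross-check (not part of the original run), srvctl should now report only the surviving instance as running:

srvctl status database -d orcl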

 

Remove the node from the clusterware inventory

Remove the VIP from the failed node {host02 in this case}

[grid@host01 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
...
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
....
ora.host01.vip
1 ONLINE ONLINE host01 STABLE
ora.host02.vip
1 ONLINE INTERMEDIATE host01 FAILED OVER,STABLE

Check if it is still there:

[grid@host01 ~]$ srvctl config vip -node host02
VIP exists: network number 1, hosting node host02
VIP Name: host02-vip.example.com
VIP IPv4 Address: 192.168.1.112
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
[grid@host01 ~]$

Remove it as the root user:

[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid/base
[root@host01 ~]# srvctl stop vip -i orcl2
PRCR-1001 : Resource ora.orcl2.vip does not exist
[root@host01 ~]# srvctl stop vip -i host02
[root@host01 ~]# srvctl remove vip -i host02 -f

Validate that the VIP has been removed:

[grid@host01 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
...
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
....
ora.host01.vip
1 ONLINE ONLINE host01 STABLE

If the node is pinned, unpin it first (crsctl unpin css -n <node_name>). It is not pinned in this case, so no action is required:

[grid@host01 ~]$ olsnodes -s -t
host01 Active Unpinned
host02 Inactive Unpinned
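Had host02 been reported as Pinned, the unpin would look like the following sketch, run as root from the surviving node before the delete:

crsctl unpin css -n host02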

As the root user, delete the node:

[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid/base

[root@host01 ~]# crsctl delete node -n host02
CRS-4661: Node host02 successfully deleted.

As the grid user, check the cluster after the node deletion:

[grid@host01 ~]$ cluvfy stage -post nodedel -n host02

Verifying Node Removal ...
Verifying CRS Integrity ...PASSED
Verifying Clusterware Version Consistency ...PASSED
Verifying Node Removal ...PASSED

Post-check for node removal was successful.

CVU operation performed: stage -post nodedel
Date: Feb 16, 2020 1:54:43 PM
CVU home: /u01/app/grid/12.2/gridhome1/
User: grid

As the grid user, clean up the inventory:

[grid@host01 bin]$ echo $ORACLE_HOME
/u01/app/grid/12.2/gridhome1
[grid@host01 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/grid/12.2/gridhome1 "CLUSTER_NODES={host01}" CRS=TRUE -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 16379 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[grid@host01 bin]$
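To double-check at the file level, the central inventory should now list only host01 for the Grid home. A hedged sketch, assuming the standard ContentsXML layout under the oraInventory path used in this environment:

grep -A 3 "gridhome1" /u01/app/grid/oraInventory/ContentsXML/inventory.xml
# expect a NODE_LIST for the Grid home containing only <NODE NAME="host01"/>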

Adding the Node back

Install all the prerequisites on the rebuilt node and add it back as described below.
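Once the rebuilt host02 has the OS, packages, users and SSH user equivalence back in place, it is worth running the node-addition pre-check before addnode.sh. A hedged sketch from the surviving node as the grid user:

cluvfy stage -pre nodeadd -n host02 -fixup -verbose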

[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid/base
[grid@host01 ~]$ cd $ORACLE_HOME
[grid@host01 gridhome1]$ cd addnode/
[grid@host01 addnode]$ ls
addnode_oraparam.ini addnode_oraparam.ini.sbs addnode_oraparam.ini.sbs.ouibak addnode.pl addnode.sh addnode.sh.ouibak
[grid@host01 addnode]$ ./addnode.sh
You can find the log of this install session at:
/u01/app/grid/oraInventory/logs/addNodeActions2020-02-16_02-35-49-PM.log
[grid@host01 addnode]$
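addnode.sh was run here in GUI mode. If no X display is available, a silent invocation is also supported; a sketch, assuming the host02-vip name used earlier in this cluster:

./addnode.sh -silent "CLUSTER_NEW_NODES={host02}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={host02-vip}"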

 


As the root user on host02, run orainstRoot.sh and root.sh:

[root@host02 ~]# /u01/app/grid/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/grid/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/grid/oraInventory to oinstall.
The execution of the script is complete.
[root@host02 ~]# /u01/app/grid/12.2/gridhome1/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/grid/12.2/gridhome1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/grid/12.2/gridhome1/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/base/crsdata/host02/crsconfig/rootcrs_host02_2020-02-16_03-01-01PM.log
2020/02/16 15:01:04 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2020/02/16 15:01:04 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2020/02/16 15:01:41 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2020/02/16 15:01:42 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2020/02/16 15:01:47 CLSRSC-363: User ignored prerequisites during installation
2020/02/16 15:01:47 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2020/02/16 15:01:48 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2020/02/16 15:01:49 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2020/02/16 15:01:54 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2020/02/16 15:01:54 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2020/02/16 15:01:56 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2020/02/16 15:01:58 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2020/02/16 15:02:04 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2020/02/16 15:02:04 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2020/02/16 15:02:06 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2020/02/16 15:02:22 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2020/02/16 15:02:52 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2020/02/16 15:02:54 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host02'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'host02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2020/02/16 15:03:12 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2020/02/16 15:03:14 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host02'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'host02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host02'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'host02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2020/02/16 15:03:40 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'host02'
CRS-2672: Attempting to start 'ora.evmd' on 'host02'
CRS-2676: Start of 'ora.mdnsd' on 'host02' succeeded
CRS-2676: Start of 'ora.evmd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host02'
CRS-2676: Start of 'ora.gpnpd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'host02'
CRS-2676: Start of 'ora.gipcd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host02'
CRS-2676: Start of 'ora.cssdmonitor' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host02'
CRS-2672: Attempting to start 'ora.diskmon' on 'host02'
CRS-2676: Start of 'ora.diskmon' on 'host02' succeeded
CRS-2676: Start of 'ora.cssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'host02'
CRS-2672: Attempting to start 'ora.ctssd' on 'host02'
CRS-2676: Start of 'ora.ctssd' on 'host02' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host02'
CRS-2676: Start of 'ora.asm' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host02'
CRS-2676: Start of 'ora.storage' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'host02'
CRS-2676: Start of 'ora.crf' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host02'
CRS-2676: Start of 'ora.crsd' on 'host02' succeeded
CRS-6017: Processing resource auto-start for servers: host02
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'host01'
CRS-2672: Attempting to start 'ora.net1.network' on 'host02'
CRS-2672: Attempting to start 'ora.chad' on 'host02'
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'host02'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'host01' succeeded
CRS-2676: Start of 'ora.net1.network' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'host01'
CRS-2672: Attempting to start 'ora.ons' on 'host02'
CRS-2677: Stop of 'ora.scan1.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'host02'
CRS-2676: Start of 'ora.chad' on 'host02' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'host02'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host02'
CRS-2676: Start of 'ora.ons' on 'host02' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'host02' succeeded
CRS-2676: Start of 'ora.asm' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.FRA.dg' on 'host02'
CRS-2676: Start of 'ora.FRA.dg' on 'host02' succeeded
CRS-6016: Resource auto-start has completed for server host02
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2020/02/16 15:05:06 CLSRSC-343: Successfully started Oracle Clusterware stack
2020/02/16 15:05:07 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2020/02/16 15:05:35 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/02/16 15:05:45 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@host02 ~]#
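Before looking at individual resources, a quick stack-level check from either node confirms that the clusterware is up cluster-wide (a small optional sketch):

crsctl check cluster -all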

Checking services

[grid@host02 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM2
The Oracle base has been set to /u01/app/grid/base
[grid@host02 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.DATA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.FRA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.chad
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.net1.network
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.ons
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host01 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host01 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE host01 169.254.12.84 192.16
8.50.11 192.168.50.1
2,STABLE
ora.asm
1 ONLINE ONLINE host01 Started,STABLE
2 ONLINE ONLINE host02 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE host01 STABLE
ora.host01.vip
1 ONLINE ONLINE host01 STABLE
ora.host02.vip
1 ONLINE ONLINE host02 STABLE
ora.mgmtdb
1 ONLINE ONLINE host01 Open,STABLE
ora.orcl.db
1 ONLINE ONLINE host01 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
ora.qosmserver
1 ONLINE ONLINE host01 STABLE
ora.scan1.vip
1 ONLINE ONLINE host02 STABLE
ora.scan2.vip
1 ONLINE ONLINE host01 STABLE
ora.scan3.vip
1 ONLINE ONLINE host01 STABLE
--------------------------------------------------------------------------------
[grid@host02 ~]$

Adding the database home to the new node

[oracle@host01 ~]$ . oraenv
ORACLE_SID = [oracle] ? orcl1
The Oracle base has been set to /u01/app/oracle
[oracle@host01 ~]$ cd $ORACLE_HOME
[oracle@host01 dbhome_1]$ cd addnode/
[oracle@host01 addnode]$ ./addnode.sh
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 7258 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 16379 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Logfile Location : /u01/app/grid/oraInventory/logs/sshsetup1_2020-02-16_04-17-10PM.log
ClusterLogger - log file location: /u01/app/grid/oraInventory/logs/remoteInterfaces2020-02-16_04-17-10PM.log
You can find the log of this install session at:
 /u01/app/grid/oraInventory/logs/installActions2020-02-16_04-17-10PM.log
[oracle@host01 addnode]$
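As with the Grid home, the database home addnode.sh can also be driven silently if the GUI is not convenient; a hedged sketch:

./addnode.sh -silent "CLUSTER_NEW_NODES={host02}"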

 


Root script execution output:

[root@host02 ~]# /u01/app/oracle/product/12.2.0/dbhome_1/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@host02 ~]#

Create the instance orcl2

[oracle@host01 addnode]$ dbca
[oracle@host01 addnode]$
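dbca was used here interactively. The same instance addition can be scripted in silent mode, mirroring the deleteInstance call used earlier; a sketch (same credentials as above):

dbca -silent -addInstance -nodeList host02 -gdbName orcl -instanceName orcl2 -sysDBAUserName sys -sysDBAPassword oracle_4U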

 



Final check

[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid/base
[grid@host01 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
ora.DATA.dg
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
ora.FRA.dg
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
ora.chad
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
ora.net1.network
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
ora.ons
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       host02                   STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       host01                   STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       host01                   STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       host01                   169.254.12.84 192.16
                                                             8.50.11 192.168.50.1
                                                             2,STABLE
ora.asm
      1        ONLINE  ONLINE       host01                   Started,STABLE
      2        ONLINE  ONLINE       host02                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       host01                   STABLE
ora.host01.vip
      1        ONLINE  ONLINE       host01                   STABLE
ora.host02.vip
      1        ONLINE  ONLINE       host02                   STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       host01                   Open,STABLE
ora.orcl.db
      1        ONLINE  ONLINE       host01                   Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /dbhome_1,STABLE
      2        ONLINE  ONLINE       host02                   Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /dbhome_1,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       host01                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       host02                   STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       host01                   STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       host01                   STABLE
--------------------------------------------------------------------------------
[grid@host01 ~]$

 

Ref:

How to Remove/Delete a Node From Grid Infrastructure Clusterware When the Node Has Failed (Doc ID 1262925.1)

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/cwadd/adding-and-deleting-cluster-nodes.html#GUID-929C0CD9-9B67-45D6-B864-5ED3B47FE458

 
