12.2 Grid Patching lesson learned

What happened?

During the last month I manually updated the TFA software.

I do this because the TFA release installed via the patch set is usually an older version: Oracle Support bundles the TFA release that is current at the time they build the patch set.

Last weekend I started patching the GI software 12.2 to RU Oct 2018 on a 4-node Exadata cluster.

As a best practice I do the installation manually and not via opatchauto.

First activity is:

/u01/app/ -prepatch

This ends with the following error message:

2019/03/09 13:36:12 CLSRSC-46: Error: '/u01/app/' does not exist
2019/03/09 13:36:12 CLSRSC-152: Could not set ownership on '/u01/app/'
Died at /u01/app/ line 7573.
The command '/u01/app/ -I/u01/app/ -I/u01/app/ /u01/app/ -prepatch' execution failed

Doc ID 2409411.1 describes how to fix this by modifying two files. It should be fixed in Grid release 18.


Remove the following two entries:
unix %ORA_CRS_HOME%/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar %HAS_USER% %ORA_DBA_GROUP% 0644
unix %ORA_CRS_HOME%/suptools/tfa/release/tfa_home/jlib/jewt4.jar %HAS_USER% %ORA_DBA_GROUP% 0644
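The edit itself boils down to deleting those two lines. A hedged sketch of that edit; the file name and the third sample entry below are mock stand-ins, not the real template files named in the note:

```shell
# A minimal sketch of the edit from Doc ID 2409411.1. The file used here is
# a mock stand-in -- apply the same sed to the two files named in the note,
# and keep a backup before editing anything under the Grid home.
PERM_FILE=/tmp/crsconfig_fileperms.sample   # assumed stand-in path
printf '%s\n' \
  'unix %ORA_CRS_HOME%/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar %HAS_USER% %ORA_DBA_GROUP% 0644' \
  'unix %ORA_CRS_HOME%/suptools/tfa/release/tfa_home/jlib/jewt4.jar %HAS_USER% %ORA_DBA_GROUP% 0644' \
  'unix %ORA_CRS_HOME%/bin/crsctl %HAS_USER% %ORA_DBA_GROUP% 0755' > "$PERM_FILE"
cp "$PERM_FILE" "$PERM_FILE.bak"            # backup first
sed -i -e '/jdev-rt\.jar/d' -e '/jewt4\.jar/d' "$PERM_FILE"
cat "$PERM_FILE"
```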

I made the changes, but it did not fix the problem, so I couldn't go on with the patching. To me it looked like a problem with the file permissions.

So I did more research on MOS and found the important Doc ID 1931142.1:

"How to check and fix file permissions on Grid Infrastructure environment"

Yes, this was the solution :-)

cd /u01/app/

./rootcrs.sh -init

Using configuration parameter file: /u01/app/
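According to the note, rootcrs.sh -init resets owners, groups and permissions across the Grid home. For a quick manual look beforehand, something like the following can list files with unexpected ownership; GI_HOME and EXPECTED_OWNER are placeholders, and the demo directory is created only so the snippet runs anywhere:

```shell
# List files not owned by the expected grid owner -- candidates for what
# rootcrs.sh -init would repair. Both variables are assumptions; point
# GI_HOME at the real Grid home and EXPECTED_OWNER at the grid user.
GI_HOME=${GI_HOME:-/tmp/gi_home_demo}
EXPECTED_OWNER=${EXPECTED_OWNER:-$(id -un)}
mkdir -p "$GI_HOME/bin"
touch "$GI_HOME/bin/crsctl"   # demo file so find has something to scan
find "$GI_HOME" ! -user "$EXPECTED_OWNER" -print
```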

As an add-on, the note suggests checking the complete GI installation after the "-init" with the following cluvfy command.

cluvfy comp software -n all -verbose

Verifying Software home: /u01/app/ …2894 files verified
Verifying Software home: /u01/app/ …PASSED

Verification of software was successful.

CVU operation performed: software
Date: Mar 11, 2019 10:10:11 AM
CVU home: /u01/app/
User: oracle

This is very helpful. Finally I started the GI patching without any problems.

Lesson learned

"It is a good idea to check the status of the software from time to time via cluvfy."
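One possible way to follow that advice is to schedule the check; the schedule, log path and mail address below are placeholders, not part of the original setup:

```shell
# Hypothetical cron entry for the grid owner: verify the software home every
# Sunday morning and mail the log when cluvfy reports a problem.
# 0 6 * * 0  cluvfy comp software -n all -verbose > /tmp/cluvfy_sw.log 2>&1 || mail -s "cluvfy software check failed" dba@example.com < /tmp/cluvfy_sw.log
```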

opatch lsinv doesn't show the patching level of the cluster nodes

We saw a strange behaviour while running "opatch lsinv" in our clusterware environment.

At the end of its output the opatch tool did not show the patch level and the names of the cluster nodes.

It seems that during the many patch actions on this cluster we lost the information in the inventory.xml file that CRS is equal to true.

After researching on MOS we found a solution for this problem, described in Doc ID 1053393.1.

It is possible to update the flag CRS=true via the runInstaller in the Grid environment.
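The flag lives in the central inventory's inventory.xml, whose location is recorded in /etc/oraInst.loc. A hedged sketch of the check, using a mock inventory file and a hypothetical home path so it runs anywhere:

```shell
# Mock stand-in for <inventory_loc>/ContentsXML/inventory.xml; the HOME
# name and LOC path are hypothetical examples.
INV=/tmp/inventory.xml
cat > "$INV" <<'EOF'
<HOME_LIST>
<HOME NAME="OraGIHome1" LOC="/u01/app/example_grid_home" TYPE="O" IDX="1" CRS="true"/>
</HOME_LIST>
EOF
# Without CRS="true" on the Grid home entry, opatch lsinv cannot report
# the cluster patch level; runInstaller -updateNodeList ... CRS=true fixes it.
if grep -q 'CRS="true"' "$INV"; then
  echo "CRS flag is set"
else
  echo "CRS flag missing"
fi
```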

So the steps to fix this problem are the following.

Our environment is a two-node Oracle Enterprise Linux RAC cluster with GI software.

    -updateNodeList ORACLE_HOME="/u01/app/" CRS=true

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 24575 MB Passed
The inventory pointer is located at /etc/oraInst.loc

'UpdateNodeList' was successful.

The "opatch lsinv" command now shows the correct patching level and the names of the cluster nodes.

Patch level status of Cluster nodes :

Patching Level Nodes
-------------- -----
1146027977 node2,node1

OPatch succeeded.

Additional information

Check the software patch level via the following command and compare the output with "opatch lsinv" as shown above.

[oracle@node1 ~]$crsctl query crs softwarepatch
Oracle Clusterware patch level on node node1 is [1146027977]
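To compare all nodes in one go, the check can be scripted. In the sketch below the crsctl call is stubbed with the patch level from the opatch output, so the comparison logic runs anywhere; in a real cluster, replace get_patch_level with an ssh to each node running crsctl query crs softwarepatch:

```shell
# Compare the Clusterware patch level across nodes and flag mismatches.
# NODES and get_patch_level are stand-ins for the real cluster inventory.
NODES="node1 node2"
get_patch_level() {
  # Real version (assumption): ssh "$1" .../bin/crsctl query crs softwarepatch
  echo "1146027977"   # mock value taken from the opatch output above
}
REF=""
for n in $NODES; do
  LVL=$(get_patch_level "$n")
  if [ -z "$REF" ]; then REF=$LVL; fi
  if [ "$LVL" = "$REF" ]; then
    echo "$n: $LVL (ok)"
  else
    echo "$n: $LVL (MISMATCH, expected $REF)"
  fi
done
```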

Yes, it is a good idea to check from time to time that both commands show the same output, and also to keep the opatch tool in your environment up to date.

The latest opatch version can be downloaded via the MOS link https://updates.oracle.com/download/6880880.html