How to check the Oracle Patch history

(new update: see below)

Patching is a task that is done quite often, and on an Oracle system that you know from regular maintenance, the patch status is more or less known to you.


But what if your colleague asks:

„Please check an Oracle system from a new customer and let me know if they have patched the system in the last six months.“


Now most of us would log in, set the environment (for example ORACLE_HOME or GRID_HOME), and start „opatch“ or check via the database registry views.


From my point of view, it is easier to get a complete overview of what happened on the system by taking a look at the opatch_history.txt file.

This text file exists for every Oracle Home on the server and shows the complete history.


ssh host1

su - oracle

cat /etc/oratab


. oraenv

cd /u01/app/oracle/product/

vi opatch_history.txt

There you see exactly at which time which command was run against the Oracle Home, so you can easily check when and which patches were applied recently.


Date & Time : Mon Aug 03 02:05:45 CEST 2020

Oracle Home : /u01/app/oracle/product/

OPatch Ver. :

Current Dir : /u01/app/oracle/product/

Command     : lsinventory -oh /u01/app/oracle/product/ -local -invPtrLoc /u01/app/oracle/product/

Log File    : /u01/app/oracle/product/

Another example, this time for an Oracle Grid Home (release 19c).


Grid Home is „/u01/app/19.0.0/grid“



Date & Time : Mon Aug 03 02:26:04 CEST 2020

Oracle Home : /u01/app/19.0.0/grid

OPatch Ver. :

Current Dir : /u01/app/19.0.0/grid

Command     : lsinventory -oh /u01/app/19.0.0/grid -local -invPtrLoc /u01/app/19.0.0/grid/oraInst.loc

Log File    : /u01/app/19.0.0/grid/cfgtoollogs/opatch/opatch2020-08-03_02-26-04AM_1.log
Try it and check your opatch_history.txt file. It is really helpful.
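Because the file can grow quite long, a small filter helps. The following is only a sketch (the helper name `patch_actions` is mine, and the usual file location under `$ORACLE_HOME/cfgtoollogs/opatch/` is an assumption, check your own installation): it prints only the „Date & Time“ and „Command“ lines of apply/rollback actions.

```shell
# Sketch: filter an opatch_history.txt stream down to apply/rollback
# actions and their timestamps (helper name and location are assumptions).
patch_actions() {
  awk '/^Date & Time/ { ts = $0 }            # remember the last timestamp
       /^Command/ && /apply|rollback/ {      # only patch actions
         print ts; print $0
       }'
}

# Example usage (file location may differ per installation):
# patch_actions < "$ORACLE_HOME/cfgtoollogs/opatch/opatch_history.txt"
```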


Update on this post: after Roy Swonger, VP at Oracle, asked me if I could add some additional information, of course I do.

Roy says:

One of the most common issues I see is when customers run OPatch but forget to run Datapatch to apply the SQL changes to the database. Mike had a good blog post about the dba_registry_history and dba_registry_sqlpatch views that explains which to use depending on the version you are running: 

Yes, please take a look at Mike’s blog, which is very helpful (as all his posts are 👍).

In my specific case, my colleague’s main question was: „Can you check if the customer has tried to install patches in the Grid Home during the last month?“ This question could easily be answered via the opatch_history.txt file.

So have fun during the next patching action, and don’t forget the datapatch run.
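To check the datapatch side from the shell, a query against dba_registry_sqlpatch (the view Mike’s post recommends for 12.1 and later; older releases use dba_registry_history) can be scripted like this. The snippet only prepares and prints the statement; the sqlplus call in the comment needs a running instance:

```shell
# Sketch: query the SQL patch history that datapatch maintains.
# dba_registry_sqlpatch exists from 12.1 on; older releases use
# dba_registry_history instead (see Mike's blog post).
SQL='select patch_id, action, status, action_time
       from dba_registry_sqlpatch
      order by action_time;'

# Typical invocation as SYSDBA (not executed here):
#   echo "$SQL" | sqlplus -s / as sysdba
echo "$SQL"
```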





Clone Oracle Home in Release 12.2 (RAC with 3 Nodes)

I am doing many patching activities at the moment, and for this I looked for an easy way to clone the current Oracle Home of a 12.2 installation.

Okay, that’s easy, I had written a blog post for 12.1, so let’s go.

But then I found out that this procedure with the „“ script will not work in 12.2.


In Oracle release 12.2, cloning an Oracle Home works in the following way:

Source Home   „/u01/app/oracle/product/“

Target Home    „/u01/app/oracle/product/“

The target home, which I named „dbhome_2“, is 12.2 + RU Jan 2020.

All steps need to be done as OS user „oracle“.


cp -r /u01/app/oracle/product/ /u01/app/oracle/product/ 

cd /u01/app/oracle/product/ 

./oui/bin/runInstaller -clone -waitForCompletion ORACLE_HOME="/u01/app/oracle/product/" ORACLE_HOME_NAME="RUJAN2020" ORACLE_BASE="/u01/app/oracle" CLUSTER_NODES="{node01,node02,node03}" LOCAL_NODE="node01" -silent 


Download the platform-specific RU Jan 2020 from My Oracle Support.

This RU Jan 2020 is then applied to the new „dbhome_2“ with the opatch tool (not described here).

After this is finished, clone the newly created „dbhome_2“ to all other nodes.


$ cd /u01/app/oracle/product/

$ tar czf home_2.tgz dbhome_2

$ scp home_2.tgz oracle@node02:/u01/app/oracle/product/

$ ssh node02

$ cd /u01/app/oracle/product/

$ tar xf home_2.tgz

$ cd dbhome_2/oui/bin

$ ./runInstaller -clone -waitForCompletion ORACLE_HOME="/u01/app/oracle/product/" ORACLE_HOME_NAME="RUJAN2020" ORACLE_BASE="/u01/app/oracle" CLUSTER_NODES="{node01,node02,node03}" LOCAL_NODE="node02" -silent


$ scp home_2.tgz oracle@node03:/u01/app/oracle/product/

$ ssh node03

$ cd /u01/app/oracle/product/

$ tar xf home_2.tgz

$ cd dbhome_2/oui/bin

$ ./runInstaller -clone -waitForCompletion ORACLE_HOME="/u01/app/oracle/product/" ORACLE_HOME_NAME="RUJAN2020" ORACLE_BASE="/u01/app/oracle" CLUSTER_NODES="{node01,node02,node03}" LOCAL_NODE="node03" -silent
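The per-node steps above can also be rolled into a small loop. This is only a sketch using the node names and paths from this post; it just prints the commands (drop the echo to actually execute, and add the full ORACLE_HOME/ORACLE_BASE arguments as shown above):

```shell
# Sketch: distribute and attach the cloned home on the remaining nodes.
# Node names and paths are the ones used in this post; the function only
# echoes the commands so nothing is executed by accident.
clone_to_nodes() {
  src=/u01/app/oracle/product
  tgz=home_2.tgz
  for node in node02 node03; do
    echo "scp $src/$tgz oracle@$node:$src/"
    echo "ssh $node 'cd $src && tar xzf $tgz && cd dbhome_2/oui/bin && ./runInstaller -clone -waitForCompletion LOCAL_NODE=$node -silent'"
  done
}

clone_to_nodes
```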


Done, the 12.2 clone + RU Jan 2020 is ready 👍  🙂








tfactl summary as HTML (really tricky)

I read an interesting article by Michael Schulze, Opitz Consulting, in the current „Red Stack Magazin“ (April 2020) about tools for daily Exadata maintenance, but it is not as easy as described.

For some time now, AHF (Autonomous Health Framework) has been the tool for checks of any kind on the Exadata.

I have installed and used version 19.3 here, which is an old one. In a few weeks I will update the whole environment to version 20.1.

So I tried to create the „summary HTML“ report, but it did not work as described in Michael’s article.



tfactl summary -overview -html


Sometimes you need to start the command twice, and you NEED to enter a „q“ after you see the „tfactl_summary>“ prompt. This is important, otherwise the HTML report will not be created.



tfactl summary -overview -html

WARNING – TFA Software is older than 180 days. Please consider upgrading TFA to the latest version.

  Executing Summary in Parallel on Following Nodes:

    Node : exa01                             

    Node : exa02                             

    Node : exa03                             

    Node : exa04                             

LOGFILE LOCATION : /opt/oracle.ahf/data/repository/suptools/exa01/summary/root/20200502174852/log/summary_command_20200502174852_exa01_177655.log

  Component Specific Summary collection :

    – Collecting CRS details … Done.   

    – Collecting ASM details … Done.   

    – Collecting ACFS details … Done.

    – Collecting DATABASE details … Done.

    – Collecting EXADATA details … Done.

    – Collecting PATCH details … Done.   

    – Collecting LISTENER details … Done.

    – Collecting NETWORK details … Done.

    – Collecting OS details … Done.      

    – Collecting TFA details … Done.     

    – Collecting SUMMARY details … Done.

  Remote Summary Data Collection : In-Progress – Please wait …

  – Data Collection From Node – exa02 .. Done.           

  – Data Collection From Node – exa03 .. Done.           

  – Data Collection From Node – exa04 .. Done.           

  Prepare Clusterwide Summary Overview … Done



  DETAILS                                                                                             STATUS    COMPONENT 


  .-----------------------------------------------.                                                   PROBLEM   CRS        

  | CRS_SERVER_STATUS   : ONLINE                  |                                                                        

  | CRS_STATE           : ONLINE                  |                                                                        

  | CRS_INTEGRITY_CHECK : FAIL                    |                                                                        

  | CRS_RESOURCE_STATUS : OFFLINE Resources Found |                                                                        


  .-------------------------------------------------------.                                           PROBLEM   ASM        

  | ASM_DISK_SIZE_STATUS : WARNING - Available Size < 20% |                                                                

  | ASM_BLOCK_STATUS     : PASS                           |                                                                

  | ASM_CHAIN_STATUS     : PASS                           |                                                                

  | ASM_INCIDENTS        : FAIL                           |                                                                

  | ASM_PROBLEMS         : FAIL                           |                                                                


  .-----------------------.                                                                           OFFLINE   ACFS       

  | ACFS_STATUS : OFFLINE |                                                                                                


  .-----------------------------------------------------------------------------------------------.   PROBLEM   DATABASE   

  | ORACLE_HOME_DETAILS                                                        | ORACLE_HOME_NAME |                        


  | .------------------------------------------------------------------------. | OraDB12Home1     |                        

  | | DB_CHAINS | DB_BLOCKS | INCIDENTS | PROBLEMS | DATABASE_NAME | STATUS  | |                  |                        

  | +-----------+-----------+-----------+----------+---------------+---------+ |                  |                        

  | | PROBLEM   | PASS      | PROBLEM   | PROBLEM  | i10   | PROBLEM | |                  |                        

  | | PROBLEM   | PROBLEM   | PROBLEM   | PROBLEM  | p10     | PROBLEM | |                  |                        

  | | PASS      | PASS      | PROBLEM   | PROBLEM  | i20    | PROBLEM | |                  |                        

  | '-----------+-----------+-----------+----------+---------------+---------' |                  |                        


  .--------------------------------.                                                                  PROBLEM   EXADATA    

  | SWITCH_SSH_STATUS : CONFIGURED |                                                                                       

  | CELL_SSH_STATUS   : CONFIGURED |                                                                                       

  | ENVIRONMENT_TEST  : PASS       |                                                                                       

  | LINKUP            : PASS       |                                                                                       

  | LUN_STATUS        : NORMAL     |                                                                                       

  | RS_STATUS         : RUNNING    |                                                                                       

  | CELLSRV_STATUS    : RUNNING    |                                                                                       

  | MS_STATUS         : RUNNING    |                                                                                       


  .----------------------------------------------.                                                    OK        PATCH      

  | CRS_PATCH_CONSISTENCY_ACROSS_NODES      : OK |                                                                         

  | DATABASE_PATCH_CONSISTENCY_ACROSS_NODES : OK |                                                                         


  .-----------------------.                                                                           OK        LISTENER



  .---------------------------.                                                                       OK        NETWORK



  .-----------------------.                                                                           OK        OS



  .----------------------.                                                                            OK        TFA



  .------------------------------------.                                                              OK        SUMMARY




        ### Entering in to SUMMARY Command-Line Interface ###


  Components : Select Component - select [component_number|component_name]

        1 => overview

        2 => crs_overview

        3 => asm_overview

        4 => acfs_overview

        5 => database_overview

        6 => exadata_overview

        7 => patch_overview

        8 => listener_overview

        9 => network_overview

        10 => os_overview

        11 => tfa_overview

        12 => summary_overview


        ### Exited From SUMMARY Command-Line Interface ###


REPOSITORY  : /opt/oracle.ahf/data/repository/suptools/exa01/summary/root/20200502174852/exa01

HTML REPORT : <REPOSITORY>/report/Consolidated_Summary_Report_20200502174852.html



So enter „q“ and the HTML report is created. Then open it in a browser and you see the report, which is really helpful.
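If you want the report without the manual „q“, you can try feeding the answer in via a pipe. This is only a sketch and not verified on every AHF version (the prompt may insist on a real terminal); it is guarded so it is a no-op on hosts without AHF installed:

```shell
# Sketch: answer the tfactl_summary> prompt with "q" non-interactively.
# Guarded so the snippet does nothing on hosts without AHF.
run_summary() {
  if command -v tfactl >/dev/null 2>&1; then
    printf 'q\n' | tfactl summary -overview -html
  else
    echo "tfactl not found - run this on a node with AHF installed"
  fi
}

run_summary
```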



Have fun :-) and if possible use AHF 20.1 directly, or do an update like me …..






RU Apr 2020 installed

RU update to 19.7 done.

Environment: Exadata X7-2, 4-node RAC cluster


opatch lspatches

30805684;OJVM RELEASE UPDATE: (30805684)

30869156;Database Release Update : (30869156)

29585399;OCW RELEASE UPDATE (29585399)


No errors, works perfectly. 👍
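On a 4-node cluster it is worth double-checking that every node reports the same patch list. Here is a sketch of such a check; the helper name `same_patches` is mine, and the lspatches output files have to be collected first (for example via ssh):

```shell
# Sketch: compare "opatch lspatches" output of two nodes by patch ID only
# (the text after the ";" is just a description and is ignored).
same_patches() {
  a=$(cut -d';' -f1 "$1" | sort)
  b=$(cut -d';' -f1 "$2" | sort)
  [ "$a" = "$b" ]
}

# Example usage (collect the files first):
#   ssh node01 '$ORACLE_HOME/OPatch/opatch lspatches' > n1.txt
#   ssh node02 '$ORACLE_HOME/OPatch/opatch lspatches' > n2.txt
#   same_patches n1.txt n2.txt && echo "patch level consistent"
```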


So we can go on to test upgrades with „autoupgrade“ for the RAC DBs …..


Exadata wrong Kernel version after dbnode upgrade

While doing a lot of Exadata upgrades, I ran into a problem on one of my DB nodes.

The patchmgr run finished without an error and the cluster started up.

I did a few checks, including whether all nodes have the same kernel version,

and found that node01 had an older kernel version than the other nodes:


node01: Linux node01 4.1.12-124.23.4.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64

node02: Linux node02 4.14.35-1902.9.2.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64

node03: Linux node03 4.14.35-1902.9.2.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64

node04: Linux node04 4.14.35-1902.9.2.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64
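Collecting and comparing the kernel versions by hand works for four nodes, but a small helper makes the odd one out obvious. A sketch (the helper name is mine); it expects lines of „node kernel“ as produced for example by a for/ssh loop over uname -r:

```shell
# Sketch: print every node whose kernel differs from the majority version.
# Input lines: "<node> <kernel>", e.g. collected with
#   for n in node01 node02 node03 node04; do echo "$n $(ssh $n uname -r)"; done
kernel_outliers() {
  awk '{ k[$1] = $2; c[$2]++ }
       END {
         max = 0
         for (v in c) if (c[v] > max) { max = c[v]; best = v }
         for (n in k) if (k[n] != best) print n, k[n]
       }'
}
```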


It looks like the kernel was not updated during the upgrade of node01.

I checked all logfiles but could not find an error, so I checked the installed kernels:


rpm -qa | grep -i kernel




Okay, the new kernel is installed but does not seem to be in place.

In Oracle Linux 7 you need to check the „grub.cfg“ file.


–> showing the older kernel

I changed the following line to the new kernel version:

from value „/initrdefi /initramfs-4.1.12-124.23.4.el7uek.x86_64.img“

to value „/initrdefi /initramfs-4.14.35-1902.9.2.el7uek.x86_64.img“
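Before editing, it helps to see which kernel grub will actually boot. A sketch; the grub.cfg path is the UEFI/OL7 one from this post, and the linuxefi/initrdefi entry keywords are assumptions that may differ on your layout:

```shell
# Sketch: show the kernel/initramfs lines grub.cfg will boot from.
# Default path is the UEFI location used in this post.
boot_kernel() {
  grep -E 'linuxefi|initrdefi' "${1:-/boot/efi/EFI/redhat/grub.cfg}"
}

# Example usage (as root):
#   boot_kernel | grep 4.1.12 && echo "old kernel still referenced"
```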


Then I updated the configuration and rebooted the server:

grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

Generating grub configuration file …

Found linux image: /boot/vmlinuz-4.14.35-1902.9.2.el7uek.x86_64

Found initrd image: /boot/initramfs-4.14.35-1902.9.2.el7uek.x86_64.img

Found linux image: /boot/vmlinuz-4.1.12-124.23.4.el7uek.x86_64

Found initrd image: /boot/initramfs-4.1.12-124.23.4.el7uek.x86_64.img


After rebooting node01, the db node came up with the correct kernel:

Linux node01 4.14.35-1902.9.2.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64 GNU/Linux



„Please make such a change only if you are absolutely sure and a fallback scenario is in place (for example boot via diag.iso), otherwise please open a Service Request.“

Oracle Cloud „follow up“

Follow-up: „How to set up a CentOS instance and connect via ssh“

Log in to the „Oracle Cloud“ via browser,

go to -> Compute -> Instance,

and create a new instance.

But before going on, set up an ssh key:


ssh-keygen -t rsa -N "" -b 2048 -C "oracloud" -f /Users/user/.ssh/oracloud

(Note: -f expects a key file name, not just the .ssh directory; the file name „oracloud“ here is only an example.)


Now you are ready to start the setup; upload the public key before starting the installation. (The public key upload is mandatory!)


After a few minutes the setup is done; restart the instance and check it.

For the ssh connection, use the Linux user „opc“ and your public IP address.

The „opc“ user has no login password!


If the instance starts and a „login screen“ comes up asking for „user“ and „password“, you forgot to upload the public key during setup.


Here is the standard login after a correct setup:

ssh opc@ 
[opc@pg11 ~]$ 
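With the key from above, the full connect looks like this. A sketch: the key file name and the IP address are placeholders (203.0.113.x is a documentation range), so substitute your instance’s public IP from the cloud console:

```shell
# Sketch: connect to the instance as "opc" with the generated private key.
# Key path and IP are placeholders - replace with your own values.
key="$HOME/.ssh/oracloud"   # assumed key file name from the keygen step
ip="203.0.113.10"           # placeholder, use your instance's public IP
cmd="ssh -i $key opc@$ip"
echo "$cmd"                 # printed here instead of executed
```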


For a root login enter:

sudo su

[root@pg11 opc]#

You need no root password!


That’s it, you are logged in; start with your work and have fun.

The Oracle Cloud documentation for this setup is not very handy.

I hope that these few lines help you to get a new machine in the Oracle Cloud up and running fast,

forever free :-)