Oracle Essential Support Tools in the Exadata Environment

My DOAG presentation on the topic

„Oracle Support: how do I proceed and which tools do I use in the Exadata environment?“



In the meantime Oracle has released a new version of the Trace File Analyzer, which you should use. Just search for it via the Doc ID.

TFA Collector – TFA with Database Support Tools Bundle (Doc ID 1513912.1)

One more note for everyone who has just applied PSUs etc.: unfortunately Oracle Support does not ship the TFA version mentioned above with the latest PSU. The Trace File Analyzer has to be installed separately.




Flashback in a Data Guard Environment: what are the steps?

The setup of a Data Guard environment with a physical standby via Data Guard Broker was done, including a switchover. Everything is fine and works as expected. (Oracle release with PSU Jan 2017.)

The dev team asked a few days ago for a flashback of the database, if possible. Okay, no problem. But what exactly do I have to do?

What are the steps? (The requested flashback timestamp on the primary DB is „07.09.2017 17:15“.)

Stop Apply on the Standby

DGMGRL> connect sys@gtdb10_11_DG
Connected as SYSDBA.
DGMGRL> edit database gtdb10_11 set state='APPLY-OFF';

To get the primary easily back in sync with the standby after the flashback, you need to flash back the standby to a point in time shortly before the primary's flashback timestamp.

Primary =   „07.09.2017 17:15“

Standby =  „07.09.2017 17:00“
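To avoid picking the earlier standby timestamp by hand, the offset can be computed; here is a small sketch using GNU date (the 15-minute offset and the DD.MM.YYYY format are only this example's convention):

```shell
# sketch: derive the standby flashback timestamp from the primary one
PRIMARY_TS='07.09.2017 17:15'
D="${PRIMARY_TS%% *}"                 # 07.09.2017
T="${PRIMARY_TS##* }"                 # 17:15
ISO="${D:6:4}-${D:3:2}-${D:0:2} $T"   # 2017-09-07 17:15 (GNU date friendly)
STANDBY_TS=$(date -d "$ISO 15 minutes ago" +'%d.%m.%Y %H:%M')
echo "$STANDBY_TS"
```

The result is the timestamp to use in the FLASHBACK statement on the standby.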

sqlplus / as sysdba

flashback standby DATABASE TO TIMESTAMP(to_date('07.09.2017 17:00','dd.mm.yyyy hh24:mi'));

Now to the primary database; here I describe the easiest way.

srvctl stop db -d gtdb10_12
sqlplus / as sysdba
SQL*Plus: Release Production on Mon Sep 11 13:12:32 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup mount

SQL> flashback DATABASE TO TIMESTAMP(to_date('07.09.2017 17:15','dd.mm.yyyy hh24:mi'));

After flashing back the primary, open the database with resetlogs:

SQL> alter database open resetlogs;

# Stop the mounted database
SQL> shutdown immediate

# Restart the Primary
srvctl start db -d gtdb10_12

Flashback done

# Now I restarted the standby; this is normally not needed
srvctl stop db -d gtdb10_11
srvctl start db -d gtdb10_11

# Start the Apply on the Standby Database
DGMGRL> connect sys@gtdb10_11_DG
Connected as SYSDBA.
DGMGRL> edit database gtdb10_11 set state='APPLY-ON';

The Dev Team can go on :-)


Important: you have to start the flashback on the standby site, not on the primary site. Otherwise you need to recreate the whole standby database, which can be very time-intensive, and you run the risk of working without a physical standby while it is being rebuilt. The switchover configuration would also have to be redone, because the standby database was newly created.

Restore Points

If you work with „restore points“, keep in mind that you create the first restore point on the standby database and only then the restore point on the primary site.
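As a sketch, the standby-first ordering with guaranteed restore points looks like this (the restore point name is made up for illustration):

```sql
-- on the standby (with apply stopped), create the restore point FIRST
CREATE RESTORE POINT before_change GUARANTEE FLASHBACK DATABASE;

-- then on the primary
CREATE RESTORE POINT before_change GUARANTEE FLASHBACK DATABASE;

-- later, to rewind: flash back the standby first, then the primary
FLASHBACK DATABASE TO RESTORE POINT before_change;
```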








Problems with Adaptive Features: what to do?

The adaptive features are divided into two parts:

  • Adaptive Plans
    • the possibility to change plans during execution
  • Adaptive Statistics
    • the statistics allow plans to be built from the results of previous executions

Version 12.1 

In 12.1 both components are managed by the single parameter „optimizer_adaptive_features“, which defaults to „TRUE“.

Version 12.2

In 12.2 the parameter „optimizer_adaptive_features“ is obsolete. The adaptive features are now controlled by two parameters:

  • optimizer_adaptive_statistics default „FALSE“
  • optimizer_adaptive_plans default „TRUE“

What is the recommendation if you do an upgrade? You should directly adopt the 12.2 defaults by applying the following patches:

  • Patch 22652097 splits the parameter optimizer_adaptive_features into two, as above, and disables adaptive statistics.
  • Patch 21171382 disables the automatic creation of extended statistics unless the optimizer preference AUTO_STAT_EXTENSIONS is set to ON.
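Assuming patch 22652097 is applied on 12.1, adopting the 12.2 defaults can be sketched like this:

```sql
-- sketch: after patch 22652097, the single 12.1 parameter is replaced
-- by the two 12.2-style parameters; set them to the 12.2 defaults
ALTER SYSTEM SET optimizer_adaptive_plans = TRUE SCOPE = BOTH;
ALTER SYSTEM SET optimizer_adaptive_statistics = FALSE SCOPE = BOTH;

-- verify
SHOW PARAMETER optimizer_adaptive
```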


If you upgrade to 12.1.0.2, set the Oracle 12.2 defaults directly.

The next upgrade, to release 12.2, will come sooner or later anyway.




How to fix: ‚PLAN_TABLE‘ is old version

During a lot of SQL performance activities I kept seeing the following message, but why?

- 'PLAN_TABLE' is old version

Yes, a few days earlier I had done an upgrade. Now I started to check some important SQL queries, but I kept getting the „old version“ message:


set autotrace traceonly explain

select * from dual;

and again

Note ------

'PLAN_TABLE' is old version

The problem is that this message comes up because, in my case, the plan_table still stems from the previous version. The solution is very easy.

In the new environment:
SQL> connect sys as sysdba

SQL> drop table plan_table$;

SQL> @?/rdbms/admin/catplan

This should fix the problem. And if not?

Okay, it could be that you also have to drop the plan_table as the schema user and then run the SQL again.
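To see which object is actually being picked up, you can check where a PLAN_TABLE exists; a hedged sketch (the public synonym normally resolves to SYS.PLAN_TABLE$):

```sql
-- find every object named PLAN_TABLE that could shadow the shared one
SELECT owner, object_name, object_type
  FROM all_objects
 WHERE object_name IN ('PLAN_TABLE', 'PLAN_TABLE$');

-- if an old private PLAN_TABLE shows up in your schema, drop it there
-- and re-run the autotrace test
DROP TABLE plan_table;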


Here is an example output:
- dynamic statistics used: dynamic sampling (level=2)
- automatic DOP: Computed Degree of Parallelism is 8 because of degree limit
- parallel scans affinitized for buffer cache




Exadata GI Upgrade + PSU Jul 2016


Recently I did an upgrade of the Grid Infrastructure on a few Exadata clusters.

Here is my summary of the installation.

Before you start, please read MOS note 1681467.1. This note is very helpful and describes the whole procedure in an Exadata environment.

It's not only the upgrade: in the same „session“ I also installed the GI PSU Jul 2016 and the one-off patch 23273686, because there is a known bug in the SCAN listener area (see below).

At the end of the article you will find some „real news“ about the GI (see below).

So let’s start

First of all, keep in mind that the clusterware must be „up and running“.

Step 1 Oracle environment


 export SRVM_USE_RACTRANS=true 



Step 2 GI Software Installation

The next slides show the GI Installation Procedure













When the setup reaches that point, you need to do the following, but

please don't close the installer window.

Step 3 Install latest opatch tool

Download the opatch tool, Patch 6880880 (better if you did this beforehand).

On an Exadata the installation can be done in one step via dcli:

dcli -l oracle -g dbs_group unzip -oq -d /u01/app/ -d /u01/patchdepot

Step 4 Install GI PSU JUL 2016 23273686

Node 1 srvdb01

[root@srvdb01]# /u01/app/ napply -oh /u01/app/ -local /u01/patchdepot/23273686

Node 2 srvdb02

[root@srvdb02]# /u01/app/ napply -oh /u01/app/ -local /u01/patchdepot/23273686

Because there is a known bug, you should directly install the following one-off patch 20734332; Doc ID 2166451.1 has the details:

(SCAN Listener or local listener fails to start after applying Patch 23273629 – Oracle Grid Infrastructure Patch Set Update (Jul2016))

Step 5

After you finish the PSU Jul 2016 and one-off patch installation, the root upgrade script must be started on both nodes:

Node 1 srvdb01

[root@srvdb01 grid]# /u01/app/

Node 2 srvdb02

[root@srvdb02 grid]# /u01/app/

The script runs for around 15 minutes, so stay calm.

It finishes with the following messages; here as an example from the last node, srvdb02:


Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful. 14:53:13 CLSRSC-474: Initiating upgrade of resource types

14:54:33 CLSRSC-482: Running command: 'upgrade model  -s -d -p first'

14:54:33 CLSRSC-475: Upgrade of resource types successfully initiated.

14:54:35 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster … succeeded


Step 6 Final tasks

Finally, some configuration tools run, and the GI upgrade, including PSU Jul 2016 and the one-off patch, is finished.

And now the „real news“:

The most notable change concerns the GIMR (Grid Infrastructure Management Repository).

Previously, installing the GIMR database (MGMTDB) was optional.

Starting with 12.1.0.2 it is mandatory, and the MGMTDB database is automatically created as part of the upgrade process of the 12.1.0.2 Grid Infrastructure. If you start an installation from scratch, the GIMR database is configured right away.

Some interesting GI & MGMTDB commands

[oracle@srvdb01 ~]$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is []

[oracle@srvdb01 ~]$ crsctl query crs releaseversion

Oracle High Availability Services release version on the local node is []

[oracle@srvdb01 ~]$ crsctl query crs activeversion -f

Oracle Clusterware active version on the cluster is []. The cluster upgrade state is [NORMAL]. The cluster active patch level is [3351897854].


[oracle@srvdb01 ~]$ srvctl status mgmtdb -verbose
Database is enabled
Instance -MGMTDB is running on node srvdb01. Instance status: Open.

[oracle@srvdb01 ~]$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: <CRS home>
Oracle user: oracle
Spfile: +DBFS_DG/_MGMTDB/PARAMETERFILE/spfile.268.926345767
Password file:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: srv_cl12
PDB service: srv_cl12
Cluster name: srv-cl12
Database instance: -MGMTDB

Reinstall tfactl after a GI upgrade


Recently I finished a grid upgrade (plus PSU Jul 2016). So far so good, but during a check I saw that the old tfactl tool from the previous software release was still up and running.

That could not be okay, so I started an uninstall and a fresh setup for the new release.

Which steps have to be done?

Check the current tfactl installation:

 /u01/app/grid/tfa/bin/tfactl print config

Start the uninstall on both nodes:

[root@db03 bin]# ./tfactl uninstall
TFA will be Uninstalled on Node db03: 

Removing TFA from db03 only
Please remove TFA locally on any other configured nodes

Notifying Other Nodes about TFA Uninstall...
Sleeping for 10 seconds...

Stopping TFA Support Tools...
Stopping TFA in db03...
Shutting down TFA
oracle-tfa stop/waiting
. . . . . 
Killing TFA running with pid 159597
. . . 
Successfully shutdown TFA..

Deleting TFA support files on db03:
Removing /u01/app/oracle/tfa/db03/database...
Removing /u01/app/oracle/tfa/db03/log...
Removing /u01/app/oracle/tfa/db03/output...
Removing /u01/app/oracle/tfa/db03...
Removing /u01/app/oracle/tfa...
Removing /etc/rc.d/rc0.d/K17init.tfa
Removing /etc/rc.d/rc1.d/K17init.tfa
Removing /etc/rc.d/rc2.d/K17init.tfa
Removing /etc/rc.d/rc4.d/K17init.tfa
Removing /etc/rc.d/rc6.d/K17init.tfa
Removing /etc/init.d/init.tfa...
Removing /u01/app/
Removing /u01/app/
Removing /u01/app/

The same on the other node

The new tfactl Setup

[root@db03 install]# ./tfa_setup -silent -crshome /u01/app/
TFA Installation Log will be written to File : /tmp/tfa_install_63022_2016_10_18-14_30_43.log
Starting TFA installation

Using JAVA_HOME : /u01/app/
Running Auto Setup for TFA as user root...
Installing TFA now...

TFA Will be Installed on db03...
TFA will scan the following Directories
| db03 |
| Trace Directory | Resource |
| /u01/app/ | CRS |
| /u01/app/ | CFGTOOLS |
| /u01/app/ | CRS |
| /u01/app/ | CRS |
| /u01/app/ | CRS |
| /u01/app/ | CRS |
| /u01/app/ | CRS |
| /u01/app/ | CRS |
| /u01/app/ | INSTALL |
| /u01/app/ | CRS |
| /u01/app/ | CRS |
| /u01/app/ | DBWLM |
| /u01/app/ | CRS |
| /u01/app/ | CRS |
| /u01/app/ | ASM |
| /u01/app/ | CRS |
| /u01/app/ | CRS |
| /u01/app/oraInventory/ContentsXML | INSTALL |
| /u01/app/oraInventory/logs | INSTALL |
| /u01/app/oracle/crsdata/db03/acfs | ACFS |
| /u01/app/oracle/crsdata/db03/core | CRS |
| /u01/app/oracle/crsdata/db03/crsconfig | CRS |
| /u01/app/oracle/crsdata/db03/crsdiag | CRS |
| /u01/app/oracle/crsdata/db03/cvu | CRS |
| /u01/app/oracle/crsdata/db03/evm | CRS |
| /u01/app/oracle/crsdata/db03/output | CRS |
| /u01/app/oracle/crsdata/db03/trace | CRS |

Installing TFA on db03:
HOST: db03 TFA_HOME: /u01/app/
| Host | Status of TFA | PID | Port | Version | Build ID |
| db03 | RUNNING | 63460 | 5000 | | 12127020160304140533 |

Running Inventory in All Nodes...
Enabling Access for Non-root Users on db03...

Adding default users to TFA Access list...
Summary of TFA Installation:
| db03 |
| Parameter | Value |
| Install location | /u01/app/ |
| Repository location | /u01/app/oracle/tfa/repository |
| Repository usage | 0 MB out of 10240 MB |

Installing oratop extension..
TFA is successfully installed...
And also the same on the other node.

Last but not least, check the new setup on both nodes.

Check the status and configuration
tfactl print status 
tfactl print config

That's it. :-) It is very easy and done in a few minutes.

tfactl is a helpful tool, not only for Oracle Support. Take a few minutes and go through the following My Oracle Support note: 1513912.1


Exadata ASM Disk overview via kfod


That's the question:

How can I easily create an overview of all configured ASM disks from the OS command line?

Exadata ASM Disks

ASM disks in an Exadata machine are part of the storage cells and are presented to the compute nodes via the proprietary iDB protocol.

The ASM instance runs on the compute node.

Each storage cell has 12 hard disks plus flash disks. During the Exadata setup, the grid disks were created on the hard disks.
Grid disks are not visible to the operating system; they are visible only to ASM, the database instance and related utilities, via the iDB protocol.

To get an overview of the grid disks via the command line, use the „kfod“ tool.
Here is output from kfod discovering a Full Rack:

Log in and set your GRID environment:
[ora1120@s1s2db2 asm_grash]$ $ORACLE_HOME/bin/kfod disks=all
 Disk Size Path User Group 
 1: 2260992 Mb o/ 
 2: 2260992 Mb o/ 
 3: 2260992 Mb o/ 
 4: 2260992 Mb o/ 
 5: 2260992 Mb o/ 
 6: 2260992 Mb o/ 
 7: 2260992 Mb o/ 
 8: 2260992 Mb o/ 
 9: 2260992 Mb o/ 
 10: 2260992 Mb o/ 
 11: 2260992 Mb o/ 
 12: 2260992 Mb o/ 
 13: 34608 Mb o/ 
 14: 34608 Mb o/ 
 15: 34608 Mb o/ 
 16: 34608 Mb o/ 
 17: 34608 Mb o/ 
 18: 34608 Mb o/ 
 19: 34608 Mb o/ 
 20: 34608 Mb o/ 
 21: 34608 Mb o/ 
 22: 34608 Mb o/ 
 23: 565360 Mb o/ 
 24: 565360 Mb o/ 
 25: 565360 Mb o/ 
 26: 565360 Mb o/ 
 27: 565360 Mb o/ 
 28: 565360 Mb o/ 
 29: 565360 Mb o/ 
 30: 565360 Mb o/ 
 31: 565360 Mb o/ 
 32: 565360 Mb o/ 
 33: 565360 Mb o/ 
 34: 565360 Mb o/ 
 35: 2260992 Mb o/ 
 36: 2260992 Mb o/ 
 37: 2260992 Mb o/ 
 38: 2260992 Mb o/ 
 39: 2260992 Mb o/ 
 40: 2260992 Mb o/ 
 41: 2260992 Mb o/ 
 42: 2260992 Mb o/ 
 43: 2260992 Mb o/ 
 44: 2260992 Mb o/ 
 45: 2260992 Mb o/ 
 46: 2260992 Mb o/ 
 47: 34608 Mb o/ 
 48: 34608 Mb o/ 
 49: 34608 Mb o/ 
 50: 34608 Mb o/ 
 51: 34608 Mb o/ 
 52: 34608 Mb o/ 
 53: 34608 Mb o/ 
 54: 34608 Mb o/ 
 55: 34608 Mb o/ 
 56: 34608 Mb o/ 
 57: 565360 Mb o/ 
 58: 565360 Mb o/ 
 59: 565360 Mb o/ 
 60: 565360 Mb o/ 
 61: 565360 Mb o/ 
 62: 565360 Mb o/ 
 63: 565360 Mb o/ 
 64: 565360 Mb o/ 
 65: 565360 Mb o/ 
 66: 565360 Mb o/ 
 67: 565360 Mb o/ 
 68: 565360 Mb o/ 
 69: 2260992 Mb o/ 
 70: 2260992 Mb o/ 
 71: 2260992 Mb o/ 
 72: 2260992 Mb o/ 
 73: 2260992 Mb o/ 
 74: 2260992 Mb o/ 
 75: 2260992 Mb o/ 
 76: 2260992 Mb o/ 
 77: 2260992 Mb o/ 
 78: 2260992 Mb o/ 
 79: 2260992 Mb o/ 
 80: 2260992 Mb o/ 
 81: 34608 Mb o/ 
 82: 34608 Mb o/ 
 83: 34608 Mb o/ 
 84: 34608 Mb o/ 
 85: 34608 Mb o/ 
 86: 34608 Mb o/ 
 87: 34608 Mb o/ 
 88: 34608 Mb o/ 
 89: 34608 Mb o/ 
 90: 34608 Mb o/ 
 91: 565360 Mb o/ 
 92: 565360 Mb o/ 
 93: 565360 Mb o/ 
 94: 565360 Mb o/ 
 95: 565360 Mb o/ 
 96: 565360 Mb o/ 
 97: 565360 Mb o/ 
 98: 565360 Mb o/ 
 99: 565360 Mb o/ 
 448: 2260992 Mb o/ 
 449: 2260992 Mb o/ 
 450: 2260992 Mb o/ 
 451: 2260992 Mb o/ 
 452: 2260992 Mb o/ 
 453: 2260992 Mb o/ 
 454: 2260992 Mb o/ 
 455: 34608 Mb o/ 
 456: 34608 Mb o/ 
 457: 34608 Mb o/ 
 458: 34608 Mb o/ 
 459: 34608 Mb o/ 
 460: 34608 Mb o/ 
 461: 34608 Mb o/ 
 462: 34608 Mb o/ 
 463: 34608 Mb o/ 
 464: 34608 Mb o/ 
 465: 565360 Mb o/ 
 466: 565360 Mb o/ 
 467: 565360 Mb o/ 
 468: 565360 Mb o/ 
 469: 565360 Mb o/ 
 470: 565360 Mb o/ 
 471: 565360 Mb o/ 
 472: 565360 Mb o/ 
 473: 565360 Mb o/ 
 474: 565360 Mb o/ 
 475: 565360 Mb o/ 
 476: 565360 Mb o/ 
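A listing this long is easier to digest when condensed into size classes; here is a hedged sketch with awk over invented sample lines (a real run would pipe in the output of `$ORACLE_HOME/bin/kfod disks=all` instead):

```shell
# condense a kfod listing into "size x count" classes
# NOTE: the sample lines below are invented; pipe in real kfod output instead
sample='1: 2260992 Mb o/192.168.10.3/DATA_CD_00_cel01
2: 2260992 Mb o/192.168.10.3/DATA_CD_01_cel01
3: 34608 Mb o/192.168.10.3/DBFS_DG_CD_02_cel01
4: 565360 Mb o/192.168.10.3/RECO_CD_00_cel01'
summary=$(echo "$sample" | awk '{count[$2]++} END {for (s in count) printf "%8s Mb x %d\n", s, count[s]}' | sort -n)
echo "$summary"
```

Each output line shows one disk size class and how many disks report that size, which makes asymmetries between cells easy to spot.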




X5-2 Jan2016 GI – Bug 22135419 – 12C GRID HOME PERMISSIONS NOT RESET




In the last days I did an upgrade on an Exadata X5-2 machine, including the GRID software.

During the GI patching (Patch 22243551) there was no error message.

But after a short while we got a lot of errors in the database alert.logs:

ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 1002 (dba), current egid = 1001 (oinstall)

The error message means that the „group id“ (gid) was not set correctly.

Using asmcmd was not possible.

$ asmcmd

/app/oragrid/product/: line 22: /export/home/oragrid/%ORACLE_HOME%/bin/kfod.bin: No such file or directory
/app/oragrid/product/: line 22: exec: /export/home/oragrid/%ORACLE_HOME%/bin/kfod.bin: cannot execute: No such file or directory
Use of uninitialized value $clus_mode in scalar chomp at /app/oragrid/product/ line 5015.
Use of uninitialized value $clus_mode in string eq at /app/oragrid/product/ line 5043.
Use of uninitialized value $clus_mode in string eq at /app/oragrid/product/ line 5092.
Use of uninitialized value $clus_mode in string eq at /app/oragrid/product/ line 5092.
Use of uninitialized value $clus_mode in string eq at /app/oragrid/product/ line 5092.
Use of uninitialized value $clus_mode in string eq at /app/oragrid/product/ line 5139.

This is a known Bug 22135419 – 12C GRID HOME PERMISSIONS NOT RESET.

It can be fixed by unlocking and relocking the GRID_HOME:

# /u01/app/ -unlock
# /u01/app/ -patch

While patching, the clusterware is restarted on every node (here an X5-2 with 4 nodes).

No problem, since you can do it node by node.
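Since it is done node by node anyway, the sequence can be generated up front; here is a dry-run sketch that only prints the commands (the node names, GRID_HOME and the rootcrs.sh entry point are placeholders, not taken from this cluster):

```shell
# dry run: print the per-node unlock/relock sequence without executing anything
# GRID_HOME, node names and the rootcrs.sh path are assumptions
GRID_HOME=/u01/app/grid
plan=$(for node in node1 node2 node3 node4; do
  echo "ssh root@$node $GRID_HOME/crs/install/rootcrs.sh -unlock"
  echo "ssh root@$node $GRID_HOME/crs/install/rootcrs.sh -patch"
done)
echo "$plan"
```

Reviewing the printed plan before running anything makes it harder to skip a node or run both commands on the wrong host.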

Oracle Clusterware active version on the cluster is []. The cluster upgrade state is [NORMAL]. The cluster active patch level is [942923749].

Oracle Clusterware active version on the cluster is []. The cluster upgrade state is [NORMAL]. The cluster active patch level is [942923749].

Oracle Clusterware active version on the cluster is []. The cluster upgrade state is [NORMAL]. The cluster active patch level is [942923749].

Oracle Clusterware active version on the cluster is []. The cluster upgrade state is [NORMAL]. The cluster active patch level is [942923749].

Afterwards the cluster upgrade state is [NORMAL].

The bug is very tricky because during the patching you see no error.