tfactl summary as HTML (really tricky)

I read an interesting article by Michael Schulze (Opitz Consulting) in the current "Red Stack Magazin" (April 2020) about tools for daily Exadata maintenance, but it is not as easy as described.

For some time now, AHF (Autonomous Health Framework) has been the tool for checks of any kind on Exadata.

I have version 19.3 installed here, which is an old one. I will update the whole environment to version 20.1 in a few weeks.

So I tried to create the "summary HTML" report, but it did not work as described in Michael's article.

Why?

 



tfactl summary -overview -html


 

Sometimes you need to start the command twice, and you NEED to enter a "q"

after you see the "tfactl_summary>" prompt. This is important; otherwise the HTML report will not be created.

 

Example

tfactl summary -overview -html

WARNING – TFA Software is older than 180 days. Please consider upgrading TFA to the latest version.

  Executing Summary in Parallel on Following Nodes:

    Node : exa01                             

    Node : exa02                             

    Node : exa03                             

    Node : exa04                             

LOGFILE LOCATION : /opt/oracle.ahf/data/repository/suptools/exa01/summary/root/20200502174852/log/summary_command_20200502174852_exa01_177655.log

  Component Specific Summary collection :

    – Collecting CRS details … Done.   

    – Collecting ASM details … Done.   

    – Collecting ACFS details … Done.

    – Collecting DATABASE details … Done.

    – Collecting EXADATA details … Done.

    – Collecting PATCH details … Done.   

    – Collecting LISTENER details … Done.

    – Collecting NETWORK details … Done.

    – Collecting OS details … Done.      

    – Collecting TFA details … Done.     

    – Collecting SUMMARY details … Done.

  Remote Summary Data Collection : In-Progress – Please wait …

  – Data Collection From Node – exa02 .. Done.           

  – Data Collection From Node – exa03 .. Done.           

  – Data Collection From Node – exa04 .. Done.           

  Prepare Clusterwide Summary Overview … Done

      cluster_status_summary                   

                                                                                                                           

  DETAILS                                                                                             STATUS    COMPONENT 

+---------------------------------------------------------------------------------------------------+---------+-----------+

  .-----------------------------------------------.                                                   PROBLEM   CRS        

  | CRS_SERVER_STATUS   : ONLINE                  |                                                                        

  | CRS_STATE           : ONLINE                  |                                                                        

  | CRS_INTEGRITY_CHECK : FAIL                    |                                                                        

  | CRS_RESOURCE_STATUS : OFFLINE Resources Found |                                                                        

  '-----------------------------------------------'                                                                        

  .-------------------------------------------------------.                                           PROBLEM   ASM        

  | ASM_DISK_SIZE_STATUS : WARNING - Available Size < 20% |                                                                

  | ASM_BLOCK_STATUS     : PASS                           |                                                                

  | ASM_CHAIN_STATUS     : PASS                           |                                                                

  | ASM_INCIDENTS        : FAIL                           |                                                                

  | ASM_PROBLEMS         : FAIL                           |                                                                

  '-------------------------------------------------------'                                                                

  .-----------------------.                                                                           OFFLINE   ACFS       

  | ACFS_STATUS : OFFLINE |                                                                                                

  '-----------------------'                                                                                                

  .-----------------------------------------------------------------------------------------------.   PROBLEM   DATABASE   

  | ORACLE_HOME_DETAILS                                                        | ORACLE_HOME_NAME |                        

  +----------------------------------------------------------------------------+------------------+                        

  | .------------------------------------------------------------------------. | OraDB12Home1     |                        

  | | DB_CHAINS | DB_BLOCKS | INCIDENTS | PROBLEMS | DATABASE_NAME | STATUS  | |                  |                        

  | +-----------+-----------+-----------+----------+---------------+---------+ |                  |                        

  | | PROBLEM   | PASS      | PROBLEM   | PROBLEM  | i10           | PROBLEM | |                  |

  | | PROBLEM   | PROBLEM   | PROBLEM   | PROBLEM  | p10           | PROBLEM | |                  |

  | | PASS      | PASS      | PROBLEM   | PROBLEM  | i20           | PROBLEM | |                  |

  | '-----------+-----------+-----------+----------+---------------+---------' |                  |                        

  '----------------------------------------------------------------------------+------------------'                        

  .--------------------------------.                                                                  PROBLEM   EXADATA    

  | SWITCH_SSH_STATUS : CONFIGURED |                                                                                       

  | CELL_SSH_STATUS   : CONFIGURED |                                                                                       

  | ENVIRONMENT_TEST  : PASS       |                                                                                       

  | LINKUP            : PASS       |                                                                                       

  | LUN_STATUS        : NORMAL     |                                                                                       

  | RS_STATUS         : RUNNING    |                                                                                       

  | CELLSRV_STATUS    : RUNNING    |                                                                                       

  | MS_STATUS         : RUNNING    |                                                                                       

  '--------------------------------'                                                                                       

  .----------------------------------------------.                                                    OK        PATCH      

  | CRS_PATCH_CONSISTENCY_ACROSS_NODES      : OK |                                                                         

  | DATABASE_PATCH_CONSISTENCY_ACROSS_NODES : OK |                                                                         

  '----------------------------------------------'                                                                         

  .-----------------------.                                                                           OK        LISTENER

  | LISTNER_STATUS   : OK |

  '-----------------------'

  .---------------------------.                                                                       OK        NETWORK

  | CLUSTER_NETWORK_STATUS :  |

  '---------------------------'

  .-----------------------.                                                                           OK        OS

  | MEM_USAGE_STATUS : OK |

  '-----------------------'

  .----------------------.                                                                            OK        TFA

  | TFA_STATUS : RUNNING |

  '----------------------'

  .------------------------------------.                                                              OK        SUMMARY

  | SUMMARY_EXECUTION_TIME : 0H:2M:31S |

  '------------------------------------'

+---------------------------------------------------------------------------------------------------+---------+-----------+

        ### Entering in to SUMMARY Command-Line Interface ###

tfactl_summary>list

  Components : Select Component - select [component_number|component_name]

        1 => overview

        2 => crs_overview

        3 => asm_overview

        4 => acfs_overview

        5 => database_overview

        6 => exadata_overview

        7 => patch_overview

        8 => listener_overview

        9 => network_overview

        10 => os_overview

        11 => tfa_overview

        12 => summary_overview

tfactl_summary>q

        ### Exited From SUMMARY Command-Line Interface ###

--------------------------------------------------------------------

REPOSITORY  : /opt/oracle.ahf/data/repository/suptools/exa01/summary/root/20200502174852/exa01

HTML REPORT : <REPOSITORY>/report/Consolidated_Summary_Report_20200502174852.html

--------------------------------------------------------------------

 

So enter "q" and the HTML report is created. Then open it in a browser; the report is really helpful.
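Once the summary run has finished, tfactl prints the repository path and the report path with a `<REPOSITORY>` placeholder (see the transcript above). A small sketch of my own (not a tfactl feature) to resolve the final report location from a saved transcript; the transcript here is pasted sample data from the run above:

```shell
# Resolve the HTML report path from a saved "tfactl summary" transcript.
# tfactl prints the report location with a <REPOSITORY> placeholder,
# which we expand ourselves.
cat > summary_output.txt <<'EOF'
REPOSITORY  : /opt/oracle.ahf/data/repository/suptools/exa01/summary/root/20200502174852/exa01
HTML REPORT : <REPOSITORY>/report/Consolidated_Summary_Report_20200502174852.html
EOF

repo=$(awk -F' : ' '/^REPOSITORY/ {print $2}' summary_output.txt)
report=$(awk -F' : ' '/^HTML REPORT/ {print $2}' summary_output.txt)

# Bash pattern substitution expands the placeholder
html_report="${report/<REPOSITORY>/$repo}"
echo "$html_report"
```

The resulting path is what you open in the browser.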

 

 

Have fun :-) and, if possible, use AHF 20.1 directly, or do an update like me …


RU Apr 2020 installed

RU update 19.7 done.

Environment: Exadata X7-2, 4-node RAC cluster

 

opatch lspatches

30805684;OJVM RELEASE UPDATE: 19.7.0.0.200414 (30805684)

30869156;Database Release Update : 19.7.0.0.200414 (30869156)

29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
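A quick sanity check of my own (not an opatch feature): confirm that a given patch ID appears in the `opatch lspatches` output. The output is pasted here as sample data; on a real system you would pipe `opatch lspatches` directly:

```shell
# Check that the expected RU patch ID is present in "opatch lspatches" output.
cat > lspatches.txt <<'EOF'
30805684;OJVM RELEASE UPDATE: 19.7.0.0.200414 (30805684)
30869156;Database Release Update : 19.7.0.0.200414 (30869156)
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
EOF

patch_id=30869156
# The patch ID is the first ';'-separated field of each line
if cut -d';' -f1 lspatches.txt | grep -qx "$patch_id"; then
  found=yes
else
  found=no
fi
echo "patch $patch_id installed: $found"
```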

 

No errors, works perfectly. 👍

 

So we can go on to test upgrades with "autoupgrade" for the RAC DBs …

 

Exadata: wrong kernel version after DB node upgrade

While doing a lot of Exadata upgrades I ran into a problem on one of my DB nodes.

The patchmgr run finished without an error and the cluster started up.

I did a few checks, including whether all nodes have the same kernel version,

but node01 had an older kernel version than the other nodes:

 

node01: Linux node01 4.1.12-124.23.4.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64

node02: Linux node02 4.14.35-1902.9.2.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64

node03: Linux node03 4.14.35-1902.9.2.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64

node04: Linux node04 4.14.35-1902.9.2.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64
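The mismatch above can also be spotted automatically. A small sketch of my own: find the node whose kernel differs from the majority. In real life the input would come from `ssh <node> uname -r` per node; here the output from above is pasted as sample data:

```shell
# Flag nodes whose kernel differs from the majority version.
cat > kernels.txt <<'EOF'
node01 4.1.12-124.23.4.el7uek.x86_64
node02 4.14.35-1902.9.2.el7uek.x86_64
node03 4.14.35-1902.9.2.el7uek.x86_64
node04 4.14.35-1902.9.2.el7uek.x86_64
EOF

# Majority kernel = the version seen most often
expected=$(awk '{print $2}' kernels.txt | sort | uniq -c | sort -rn | awk 'NR==1 {print $2}')
# Any node not running that version is an outlier
outliers=$(awk -v k="$expected" '$2 != k {print $1}' kernels.txt)
echo "expected: $expected"
echo "outliers: $outliers"
```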

 

It looks like the kernel was not updated during the upgrade of node01.

I checked all logfiles but could not find an error,

so I checked the installed kernels:

 

rpm -qa | grep -i kernel

kernel-transition-3.10.0-0.0.0.2.el7.x86_64

kernel-ueknano-4.14.35-1902.9.2.el7uek.x86_64

 

Okay, the new kernel is installed but apparently not active.

On Oracle Linux 7 you need to check the "grub.cfg" file:

/boot/efi/EFI/redhat/grub.cfg

–> it was still pointing to the older kernel.

I changed the following line to the new kernel version:

from "/initrdefi /initramfs-4.1.12-124.23.4.el7uek.x86_64.img"

to "/initrdefi /initramfs-4.14.35-1902.9.2.el7uek.x86_64.img"

 

Then I regenerated the configuration and rebooted the server:

grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

Generating grub configuration file …

Found linux image: /boot/vmlinuz-4.14.35-1902.9.2.el7uek.x86_64

Found initrd image: /boot/initramfs-4.14.35-1902.9.2.el7uek.x86_64.img

Found linux image: /boot/vmlinuz-4.1.12-124.23.4.el7uek.x86_64

Found initrd image: /boot/initramfs-4.1.12-124.23.4.el7uek.x86_64.img

 

After the reboot, node01 came up with the correct kernel:

Linux node01 4.14.35-1902.9.2.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64 GNU/Linux
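For future upgrades, a quick way to verify that the regenerated grub.cfg actually references the new kernel before rebooting. This is my own sketch with a trimmed sample file; on the real system you would grep /boot/efi/EFI/redhat/grub.cfg as root:

```shell
# Check which initramfs a grub.cfg references before rebooting.
cat > grub.cfg.sample <<'EOF'
linuxefi /vmlinuz-4.14.35-1902.9.2.el7uek.x86_64 ro
initrdefi /initramfs-4.14.35-1902.9.2.el7uek.x86_64.img
EOF

new_kernel=4.14.35-1902.9.2.el7uek.x86_64
if grep -q "initrdefi /initramfs-${new_kernel}.img" grub.cfg.sample; then
  echo "grub.cfg references the new kernel"
else
  echo "grub.cfg still points to an old kernel" >&2
fi
```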

 

Attention

"Please make such a change only if you are absolutely sure and a fallback scenario is in place (for example, boot via diag.iso); otherwise please open a Service Request."

Oracle Cloud follow-up

Follow-up: "How to set up a CentOS instance and connect via ssh"

Log in to the Oracle Cloud via browser,

go to -> Compute -> Instance,

and create a new instance.

But before going on, set up an ssh key:

 


ssh-keygen -t rsa -N "" -b 2048 -C "oracloud" -f /Users/user/.ssh/id_rsa

 

Now you are ready to start the setup. Upload the public key (id_rsa.pub) before

starting the installation; the public key upload is mandatory!
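To avoid typing the IP address and user name every time, you can add a host alias to your local ~/.ssh/config. The alias name "oracloud" is my own example; the IP and user are the ones used in the login below:

```
Host oracloud
    HostName 158.101.166.239
    User opc
    IdentityFile ~/.ssh/id_rsa
```

After that, `ssh oracloud` is enough.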

 

After a few minutes the setup is done; restart the instance.

Check the instance.

To connect via ssh, use the Linux user "opc" and your public IP address.

The "opc" user has no login password!

 

If the instance starts and a login screen comes up asking for

"user" and "password", you forgot to upload the public key during setup.

 

Here is the standard login after a correct setup:


ssh opc@158.101.166.239 
[opc@pg11 ~]$ 

 

For root access enter:

sudo su

[root@pg11 opc]#

No root password is needed!

 

That's it, you are logged in; start your work and have fun.

The Oracle Cloud documentation for this setup is not very handy.

I hope these few lines help you get a new machine in the Oracle Cloud up and running fast.

Forever free :-)


Manual upgrade to Oracle 19c (CDB/PDB)

 

manually to 19c …

These days it is very cool to do everything with so-called "auto tools". If you prefer to do the upgrade to 19c manually and step by step, you can follow my article and have fun; otherwise skip this blog post.

Before you start, you need to read a lot of Doc IDs from Oracle Support.

My list is not complete, but here are some very important Doc IDs for the upgrade.

Pre Activities for Upgrade 19c

(the list is not complete because there are a lot more documents)

    • Release Schedule of current DB
      • 884522.1
    • Patches to apply before Upgrade
      • 253975.1
    • Health Check Script use before Upgrade or once a year
      • 136697.1
    • DB PreUpgrade tool checklist
      • 2380601.1
    • PreUpgrade_19 Zip File latest Version (check for new Version)
      • 884522.1
    • RU Assistant (very helpful tool)
      • 2118136.2
    • Client / Server Interoperability Support Matrix for Different Oracle Versions
      • 207303.1
    • DB Upgrade Diagnostic Information
      • 556610.1
    • Oracle JVM
      • 397770.1

Okay let’s start

Install new Oracle Release 19c

    • Check if OS Version is certified by Oracle Support
      • Login and check certification

      • If the OS is not supported, please open a Service Request and ask for support
    • Download the Oracle Release from
      • OTN or Software Delivery Portal
      • in my example 19c

    • create a new Oracle home for 19c
      • use OFA (Optimal Flexible Architecture) for the setup
        • Documentation: "/u01/app/oracle/product/19.0.0/dbhome_1"
    • unpack the zip file, then start ./runInstaller
    • Follow the Installer
    • root.sh
      • fix errors before going on

 

Patch the new ORACLE_HOME with latest RU

    • Install latest version of opatch
    • Download RU

        • p30125133_190000_Linux-x86-64.zip
    • Patch 19c Home
      • opatch apply -local -oh /u01/app/oracle/product/19.0.0/dbhome_1 /home/oracle/Downloads/30125133
      • patching done

      • Test the installed RU
        • ./sqlplus / as sysdba
        • SQL*Plus: Release 19.0.0.0.0 – Production on Thu Dec 5 18:00:34 2019
          Version 19.5.0.0.0

 

Important note when planning the upgrade to 19c

    • In Oracle 19c you can run up to 3 PDBs in a CDB without the Multitenant license
    • Oracle will desupport the non-CDB architecture in version 20c
      • This is very important for the future
    • My recommendation
      • plan the changeover to the Multitenant "world" NOW!
      • It's time to say goodbye … to the non-CDB

 

Download, install and run the Database Pre-Upgrade Utility

    • Download the current pre-upgrade script from Oracle Support
      • the current version is from November 2019
    • If your version is 12.2 or higher, save the file in $ORACLE_HOME/rdbms/admin
      • unzip preupgrade_19_cbuild_5_lf.zip
        inflating: components.properties
        inflating: preupgrade.jar
        [oracle@o183 admin]$
      • new "Nov 6 13:40 preupgrade.jar"
    • Now it is time to read the documentation
    • start preupgrade.jar
      • $ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/19.0.0/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT
    • preupgrade Logfile
      • cd /u01/app/oracle/cfgtoollogs/db1_S1/preupgrade
      • check the logfile "preupgrade.log"
      • here is an example
        • preupgrade
          • Before Upgrade actions
          • After Upgrade actions
      • additional very helpful files
        • preupgrade_fixups.sql
        • postupgrade_fixups.sql
      • and check the Logfile
        • Fix all errors
        • Now you are ready for the Upgrade

 

Oracle JVM installed

    • Before doing an upgrade, check whether the JVM is installed.
    • Why?
      • If you do not need the JVM in the database, deinstall it
      • It makes life easier in some cases, especially during patching (above all for RAC DBs)
    • Check if the JVM is installed
      • select comp_name, version, status from dba_registry where comp_name like '%JAVA%';
      • select owner, status, count(*) from all_objects where object_type like '%JAVA%' group by owner, status;
      • select role from dba_roles where role like '%JAVA%';
    • The DBA_FEATURE_USAGE_STATISTICS view can also help to check for the Java feature
      • select currently_used, name from dba_feature_usage_statistics where name like '%Java%';

 

So installation, RU and preupgrade are done. Let's move on to the manual "dbupgrade" …

 


DOAG 2019 "Engineered Systems" working group meeting

Next week it is DOAG time again :-)

This year we will hold the "Praxisaustausch Engineered Systems" as part of the DOAG conference. So everyone who already works on this topic, or will do so in the future, is welcome to drop by.

We (Frank Schneede & I) will have the new Exadata product manager with us.

Gavin will present all the news about the Exadata X8M and will of course be happy to answer your questions.

Mark the date in your calendar now: Tuesday, 19 November 2019, 17:00-17:45 (open end), room Kopenhagen.

See you next week in Nuremberg …

You can find the conference presentation under "Publikationen & Vorträge"

 

Autonomous Health Framework (AHF) available


Oracle released the new Autonomous Health Framework a week ago. It is very interesting to go through the list of new and changed features.

Yes, after a very long time we have one tool that brings everything together:

  • one single interface
  • all diagnostic tools in one bundle, which makes everything easier
  • automatic proactive compliance checks that help to fix problems
  • diagnostics collected when the failure occurs, ensuring you get everything needed for the resolution

To get familiar with it, go to Oracle Support and download the AHF framework. You will find everything in Doc ID 2550798.1.

Download, install and have fun :-)

In the meantime I did an installation on an Exadata X7 4-node RAC and it works perfectly. The old data from tfactl and exachk is moved to the new directories and everything starts very smoothly. An easy and smooth setup. Now we have the tools in one location on the compute nodes.

 

 

impdp with parameter „cluster and parallel“ for a RAC Cluster

To use the "parallel" parameter during an import (impdp) on an Oracle RAC cluster, you need to prepare your environment.

The "parallel" parameter works correctly only when you do the following:

– the mount point where the export dump resides must be available on ALL cluster members

– create a service on the database for the impdp job:

srvctl add service -s impdp_service -d xdb1 -pdb xpdb1 -preferred xdb11,xdb12 -available xdb13

srvctl start service -s impdp_service -d xdb1

– check that the service is running:

srvctl status service -s impdp_service -d xdb1

Now you are ready to use the impdp "parallel" parameter.

Here is an example with "cluster=y parallel=6":

impdp system@xpdb1 directory=dump dumpfile=full_%u.dmp schemas=DB1 cluster=y parallel=6 service_name=impdp_service status=180 logfile=imp_xpdb1.log METRICS=Y logtime=all

impdp log parameters that are really helpful for analysis are:

METRICS=Y

logtime=all

Extract from the logfile:

You can see detailed information about the worker processes, for example W-1 = Worker 1:

W-1 Completed by worker 1 757 TABLE objects in 38 seconds
W-1 Completed by worker 2 764 TABLE objects in 37 seconds
W-1 Completed by worker 3 765 TABLE objects in 48 seconds
W-1 Completed by worker 4 765 TABLE objects in 53 seconds
W-1 Completed by worker 5 766 TABLE objects in 34 seconds
W-1 Completed by worker 6 765 TABLE objects in 44 seconds

W-5 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA

Worker 5 is processing TABLE_DATA

This detailed information makes analyzing the impdp process much easier; try it next time.
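The per-worker lines written with METRICS=Y are easy to post-process. A sketch of my own (not an impdp feature) that sums the imported objects and finds the slowest worker; the logfile content is the sample from above:

```shell
# Summarize the "Completed by worker" lines from an impdp log (METRICS=Y).
cat > imp_xpdb1.log <<'EOF'
W-1 Completed by worker 1 757 TABLE objects in 38 seconds
W-1 Completed by worker 2 764 TABLE objects in 37 seconds
W-1 Completed by worker 3 765 TABLE objects in 48 seconds
W-1 Completed by worker 4 765 TABLE objects in 53 seconds
W-1 Completed by worker 5 766 TABLE objects in 34 seconds
W-1 Completed by worker 6 765 TABLE objects in 44 seconds
EOF

# Field 6 = object count, field 5 = worker number, field 10 = seconds
total=$(awk '/Completed by worker/ {sum += $6} END {print sum}' imp_xpdb1.log)
slowest=$(awk '/Completed by worker/ {if ($10 > max) {max = $10; w = $5}} END {print w, max}' imp_xpdb1.log)
echo "total TABLE objects: $total"
echo "slowest worker (number seconds): $slowest"
```

A big spread in the per-worker times can hint at skewed data distribution across the workers.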

Depending on your hardware you can also use different integer values for the "parallel" parameter, but a large number will not help in every situation.

Have fun with impdp on your RAC cluster …