Problems with Adaptive Features in 12.1.0.2: What to Do?

The Adaptive Features are divided into two parts:

  • Adaptive Plans
    • the ability to change an execution plan during execution
  • Adaptive Statistics
    • the optimizer uses statistics gathered from previous executions to build better plans

Version 12.1 

In 12.1 both components are controlled by the single parameter optimizer_adaptive_features, which defaults to TRUE.

Version 12.2

In 12.2 the parameter optimizer_adaptive_features is obsolete. The adaptive features are now controlled by two parameters:

  • optimizer_adaptive_statistics, default FALSE
  • optimizer_adaptive_plans, default TRUE
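
Whether these defaults are in effect can be verified from SQL*Plus; a minimal sketch, assuming DBA privileges:

```sql
-- Show the adaptive parameters and their current values
SHOW PARAMETER optimizer_adaptive_plans
SHOW PARAMETER optimizer_adaptive_statistics
```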

What is the recommendation for 12.1.0.2 if you do an upgrade? You should adopt the 12.2 defaults directly by applying the following patches:

  • Patch 22652097 splits the parameter optimizer_adaptive_features into two, as above, and disables adaptive statistics.
  • Patch 21171382 disables the automatic creation of extended statistics unless the optimizer preference AUTO_STAT_EXTENSIONS is set to ON.
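
With both patches applied to 12.1.0.2, the 12.2 behaviour can then be set explicitly. A sketch, assuming the split parameters are available after patching:

```sql
ALTER SYSTEM SET optimizer_adaptive_plans = TRUE SCOPE=BOTH;
ALTER SYSTEM SET optimizer_adaptive_statistics = FALSE SCOPE=BOTH;
-- Optional: keep automatic extended statistics disabled (the default after patch 21171382)
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_STAT_EXTENSIONS', 'OFF')
```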

Summary

If you upgrade to 12.1.0.2, set the Oracle 12.2 defaults directly.

The next upgrade, to Release 12.2, will come sooner or later anyway.


Oracle RU is available

Today Oracle released a very large "monster" update. This July 2017 update includes for the first time the new RU (Release Update) and RUR (Release Update Revision) patches.

To be up to date, read the following My Oracle Support Doc-IDs:

At the moment I can't find a FAQ, but I think one will come in a few weeks.

Stay tuned :-)

rman auxiliary (for duplicate)

I am currently doing some tests with the Oracle 12.2 release, and for testing it is always better to have a second database; that is where RMAN comes in. So I looked around and found an easy way to duplicate a database from a backup. Here are the details:

My environment

OL 7.3 with an Oracle 12.2 + DATABASE BUNDLE PATCH: 12.2.0.1.170516 (25862693)

I set up a database via dbca using the defaults, including the fast recovery area. Oh yes, the FRA is a default now.

Next I did a backup. Here I did a few tests with image copies and "backup as copy database". After this I had a full backup on disk, and that is the point to start the duplicate from.
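
The backup that serves as the source of the duplicate can be taken like this; a sketch (plain backup sets work as well as image copies):

```sql
-- From the source database; the archived logs are needed for recovery
RMAN> backup as copy database plus archivelog;
```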

What are the steps?

First I created a pfile named initdb2.dup.

Yes, for testing I created only one controlfile.
initdb2.dup
*.audit_file_dest='/u01/app/oracle/admin/db2_s1/adump'
*.audit_trail='NONE'
*.db_name='db2'
*.db_unique_name='db2_s1'
*.control_files='/u01/app/oracle/fast_recovery_area/db2_s1.ctl'
*.db_create_file_dest='/u01/app/oracle/oradata'
*.db_recovery_file_dest_size=8G
*.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
*.enable_pluggable_database=true
*.large_pool_size=256M
*.sga_target=2G

Next I created a few directories and then I started the new instance


sqlplus / as sysdba

startup nomount pfile='/u01/app/oracle/admin/db2/pfile/initdb2.dup'

Finally I started the duplicate from "BACKUP LOCATION":

rman auxiliary /

RMAN>duplicate database to db2 backup location '/u01/app/oracle/fast_recovery_area';


That's it, finished :-)

Keep in mind that I set my environment before starting the RMAN duplicate, and the connection is made against the auxiliary instance. Yes, it is very easy to duplicate to a new database when you have a full backup on disk.
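
After the duplicate finishes, a quick sanity check on the new database might look like this (a sketch):

```sql
-- Connect to the duplicated instance and verify name, unique name, and open mode
SELECT name, db_unique_name, open_mode FROM v$database;
```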


OOW 2016, Day One

oow_2016

The first day at OpenWorld 2016. It was a day for warming up and getting oriented.

The dominant topic was "Oracle Cloud". About the new Oracle 12c Release 2, nothing was allowed to be said except for one slide in Larry Ellison's keynote. Larry Ellison will present 12c Release 2 in his second keynote on Tuesday.

Many sessions covered general topics, some of them absolute basics. A great talk came from Mauro Pagano on performance, featuring his tool SQLd360, a further development and partial rewrite of SQLT. Every DBA who works on performance should take a closer look at this tool.

Another good talk came from Simon Pane on Oracle database security: many examples that can be applied directly in daily DBA work, from password handling to SQL*Net, scripts, and more.

One of the very few talks in the Engineered Systems area was titled "Explore Database Cloud in 45 Minutes". The hands-on part of the session initially ended in a technical problem because the notebook and the projector would not cooperate. Overall, the talk presented nothing new.


ASM „corrupted metadata block“ check via amdu / kfed (Part 1)

corrupt

Last week we had a crash of our Exadata ASM instance. We were not amused about this, but we restarted the instance and went back to work as usual.

About the environment: the GRID software is Release 12.1, but the diskgroups have their compatibility set to 11.2.0.4.

To be safe we started a check on the DATA diskgroup.


ALTER DISKGROUP DATA CHECK all NOREPAIR;

The check ran online, but took nearly 25 hours. In the meantime we saw lots of errors in the ASM alert.log:

Tue Jun 21 15:47:15 2016
NOTE: disk DATA_CD_10_srv1CD13, used AU total mismatch: DD={514269, 0} AT={514270, 0}
Tue Jun 21 15:47:15 2016
GMON querying group 1 at 567 for pid 52, osid 138892
GMON checking disk 143 for group 1 at 568 for pid 52, osid 138892

A MOS note said this should not be a problem, but is that correct?

The analysis is done via an amdu dump of the diskgroup once the "CHECK ALL NOREPAIR" has finished.


amdu -diskstring 'o/*/*' -dump 'DATA'

We started the dump in a directory with enough space, because the amdu tool creates a lot of 2 GB files, depending on the size of the diskgroup. One small file is also created during the dump: the report.txt file.

The report.txt contains information about the system, OS, and version, all scanned disks, and also a list of the scanned disks which have "corrupted metadata blocks".

Here is an example:

---------------------------- SCANNING DISK N0002 -----------------------------

Disk N0002: '192.168.10.10/DATA_CD_01_srv1cd2'

AMDU-00209: Corrupt block found: Disk N0002 AU [454272] block [0] type [0]

AMDU-00201: Disk N0002: '192.168.10.10/DATA_CD_01_srv1cd2'

AMDU-00217: Message 217 not found;  product=RDBMS; facility=AMDU; arguments: [0] [1024] [blk_kfbl]

           Allocated AU's: 507621

                Free AU's: 57627

       AU's read for dump: 194

       Block images saved: 12457

        Map lines written: 194

          Heartbeats seen: 0

  Corrupt metadata blocks: 1

        Corrupt AT blocks: 0

The next question was: „How can we check if this metadata block is corrupted?“

The answer: you need the kfed tool and the "Allocated AU's: 507621" value from the report.txt file.


[oracle0@srv1db1]$ kfed read aun=507621 aus=4194304 blkn=0 dev=o/192.168.10.10/DATA_CD_00_srv1cd2

kfbh.endian:                         58 ; 0x000: 0x3a

kfbh.hard:                          162 ; 0x001: 0xa2

kfbh.type:                            0 ; 0x002: KFBTYP_INVALID

kfbh.datfmt:                          0 ; 0x003: 0x00

kfbh.block.blk:              1477423104 ; 0x004: blk=1477423104

kfbh.block.obj:              3200986444 ; 0x008: disk=732492

kfbh.check:                    67174540 ; 0x00c: 0x0401008c

kfbh.fcn.base:                    51826 ; 0x010: 0x0000ca72

kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000

kfbh.spare1:                          0 ; 0x018: 0x00000000

kfbh.spare2:                          0 ; 0x01c: 0x00000000

1EFB9400000 0000A23A 580FB000 BECB2D4C 0401008C  [:......XL-......]

1EFB9400010 0000CA72 00000000 00000000 00000000  [r...............]

1EFB9400020 00000000 00000000 00000000 00000000  [................]

  Repeat 253 times

As you can see in the example, kfbh.type = KFBTYP_INVALID, which means the metadata block is corrupt.
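
For comparison: reading a known-good block shows a valid block type. For example, AU 0 / block 0 of a disk holds the ASM disk header (a sketch; device name taken from the example above):

```shell
# A healthy block reports a valid type instead of KFBTYP_INVALID,
# e.g. the disk header in AU 0, block 0 shows KFBTYP_DISKHEAD
kfed read aun=0 aus=4194304 blkn=0 dev=o/192.168.10.10/DATA_CD_00_srv1cd2
```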

So how can I fix this?

In our situation we have a diskgroup with compatibility 11.2.0.4, so we have to run:


ALTER DISKGROUP DATA CHECK ALL REPAIR;

Yes, this can be very dangerous.

If the "CHECK ALL REPAIR" finds a corruption and tries to repair it, the diskgroup will be dismounted. This means all databases which are up and running will crash. And keep in mind that a "CHECK ALL REPAIR" will also run for about 25 hours.

Is there another solution? Yes, but it also requires a dismount of the diskgroup.

Then run the amdu tool again, this time "offline", and check the report.txt file again for corrupted metadata blocks.

More details about ASM, kfed, and amdu will be discussed in Part 2.

So stay tuned.


Exadata Lifecycle / Patching

patching_icon

Operating an Exadata Database Machine means you have to manage its lifecycle. One major task is the regular patching of the whole Exa stack.

This blog article gives you an overview of the patching.

First, remember which components are part of the lifecycle. Here are the components and the corresponding tools:

exadata

  • GRID & RDBMS
    • opatch (oplan)
  • DB Node
    • patchmgr (that’s new since Oct 2015)
  • Storage Grid
    • patchmgr
  • Network
    • patchmgr
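
To illustrate, a rolling patch of the storage cells with patchmgr looks roughly like this. A sketch: cell_group is a file listing the cell hostnames, and the exact options depend on the patch README:

```shell
# Run as root from the driving node, inside the unpacked patch directory
./patchmgr -cells cell_group -patch_check_prereq -rolling   # prerequisite check
./patchmgr -cells cell_group -patch -rolling                # rolling cell patch
```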

Before starting the patching you need to do bullet-proof planning, otherwise you will fail.

For a Quarter Rack with, let's say, 10 production databases, you need a planning phase of roughly 2-3 weeks.

How do you come up with a recommendation?

  • Analyze your ORACLE_HOMES
  • Check existing SRs for every database
  • Meet with your Application Manager
  • Use Oracle Tools like exachk
  • Use the conflict analyzer in MOS

exachk will be your best friend

Check My Oracle Support note 1070954.1 and install the latest version.
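
A typical exachk run before (and again after) patching can be as simple as this sketch:

```shell
# Start exachk from its installation directory; -a performs all checks
./exachk -a
```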

First, take a look at the table of contents

exachk_0

and one very important table is the recommended version overview

exachk_1

What will be the best recommendation?

There is no easy answer, because Oracle offers several options for the patching:

  • the QFSDP, the Quarterly Full Stack Download Patch
  • or standalone patch sets for every component like InfiniBand, cell server, DB node, and so on

So the decision has to be made by the whole team of application managers, Oracle DBAs, and system administrators.
