What has to be done when the lifecycle of an Exadata system comes to an end?
You need to perform a secure erase of the DB and storage nodes.
By the way, you can also secure erase the switches, PDUs, etc., but that is not covered in this article.
See also My Oracle Support Doc ID 2180963.1.
Steps to do
I use the method via a bootable USB stick.
Download the boot image via Patch 25470974.
You need a complete list of all DB and storage node names, including the IP addresses of each ILOM server.
Tip: Before you start, reset the ILOM password on each server, for example to "welcome1".
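Resetting the password on many ILOMs by hand is tedious, so this step can be scripted. A minimal dry-run sketch with ipmitool, assuming the ILOM host names, the old password, and the user id (2 is typically "root" on ILOM, verify with `ipmitool user list`) — the commands are only printed for review:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: reset the ILOM password on every server via ipmitool.
# ILOM_HOSTS, the user id (2) and <oldpass> are placeholders for your environment.
ILOM_HOSTS="dbadm01-ilom dbadm02-ilom"
NEW_PASS="welcome1"

build_passwd_cmds() {
  for h in $ILOM_HOSTS; do
    # user id 2 is typically 'root' on ILOM; check with 'ipmitool user list'
    echo "ipmitool -I lanplus -H $h -U root -P <oldpass> user set password 2 $NEW_PASS"
  done
}

build_passwd_cmds   # print the commands for review before actually running them
```

Review the printed commands, then run them one by one against your ILOMs.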
Prepare the USB stick
I use my MacBook and do a "dd" to copy the image to the USB stick:
dd if=image_diagnostics_126.96.36.199.0_LINUX.X64_170126.2-1.x86_64.usb of=/dev/disk2
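A word of caution around that dd: on macOS the device must be unmounted first, and writing to the wrong disk destroys data. A hedged dry-run sketch of the full preparation (image name and /dev/disk2 are placeholders — identify your stick with `diskutil list` first):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of preparing the USB stick on macOS; commands are only
# printed for review. IMG and DEV are placeholders for your environment.
IMG="image_diagnostics_<version>.usb"   # the image from Patch 25470974
DEV="/dev/disk2"                        # verify with 'diskutil list' first!

usb_prep_cmds() {
  echo "diskutil list"                               # identify the USB device
  echo "diskutil unmountDisk $DEV"                   # macOS keeps it mounted otherwise
  echo "sudo dd if=$IMG of=${DEV/disk/rdisk} bs=1m"  # raw device + block size is much faster
  echo "sync"
}

usb_prep_cmds   # review carefully before executing
```

Using the raw device (`/dev/rdisk2`) with an explicit block size typically cuts the copy time considerably compared to a plain `dd of=/dev/disk2`.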
Start an ILOM web console login for the first server.
In parallel, start an ILOM terminal console session to restart the first server.
While the reset is running, the web console comes up with the BIOS splash screen; enter the boot menu from there.
It takes a while until you see the boot menu.
Select the USB stick and press Enter to start.
The boot screen appears.
Now it is time to check the output on the ILOM terminal console. After a while you will see a "login:" prompt.
Login as "root" with password "sos1exadata" or "sos1Exadata".
If the password doesn't work, contact Oracle Support for help.
During tests for a migration of a major customer application, we saw in our AWR reports that most of the jobs are very write intensive. This was the point where we wanted to test what happens when we change the flash cache mode from Write Through to Write Back.
What are the main benefits of Write Back mode?
- It improves write-intensive operations, since writing to the flash cache is faster than writing to normal hard disks.
- On Exadata X3 and newer machines, write performance can be improved by up to 20x IOPS.
- The Write Back Flash Cache accelerates reads and writes for all workloads.
First of all I took a look in Metalink and found Doc ID 1500257.1 with more details.
What are the requirements?
Since April 2017 it is the default if the following conditions are fulfilled:
- Grid and RDBMS home
- 188.8.131.52.1 or higher
- 184.108.40.206 or higher
- 220.127.116.11 or higher
- the DATA diskgroup has HIGH redundancy
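These prerequisites can be verified up front. A minimal dry-run sketch of the checks — cell image version via dcli, patch level via OPatch, and diskgroup redundancy via asmcmd (run as the grid owner); the commands are only printed for review:

```shell
#!/usr/bin/env bash
# Hypothetical pre-check sketch for the Write Back requirements above.
# Commands are printed for review, not executed.
precheck_cmds() {
  echo 'dcli -g cell_group -l root "imageinfo -ver"'   # cell software version
  echo '$ORACLE_HOME/OPatch/opatch lspatches'          # Grid/RDBMS patch level
  echo 'asmcmd lsdg'                                   # Type column must show HIGH for DATA
}
precheck_cmds
```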
When should I use Write Back?
- It makes sense if your application is write intensive.
- You find significant waits for "free buffer waits".
- You see high I/O times for writes in your AWR reports, indicating write bottlenecks.
What are the steps to enable Write Back?
We have the possibility to do it "offline", which means stopping the whole Grid & RDBMS stack, but you can also change it in a rolling manner.
Check the current state of the flash cache:
dcli -g cell_group -l root "cellcli -e list cell attributes flashcachemode"
For the offline method, stop the whole cluster stack:
crsctl stop cluster -all -f
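If you choose the rolling variant instead, Doc ID 1500257.1 requires that the grid disks on a cell are safe to take offline before you touch it. A hedged sketch of that per-cell pre-check (commands printed for review only):

```shell
#!/usr/bin/env bash
# Hypothetical sketch for the rolling variant: verify and inactivate grid disks
# on one cell before dropping its flash cache. Printed for review only.
rolling_precheck() {
  # all grid disks on the cell must report asmdeactivationoutcome = Yes
  echo "cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome"
  # then inactivate the grid disks on this cell before dropping the flash cache
  echo "cellcli -e alter griddisk all inactive"
}
rolling_precheck
```

Only proceed with a cell when every grid disk reports "Yes" for asmdeactivationoutcome; otherwise ASM redundancy would be at risk.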
The following steps have to be done on every cell server; here cel04 serves as the example.
Drop the flash cache on that cell
CellCLI> drop flashcache;
Flash cache cel04_FLASHCACHE successfully dropped.
Shut down Cell service
CellCLI> alter cell shutdown services cellsrv;
Stopping CELLSRV services... The SHUTDOWN of CELLSRV services was successful.
Change Cell Flash Cache mode to Write Back
CellCLI> alter cell flashCacheMode=writeback;
Cell cel04 successfully altered
Restart the Cell Service
CellCLI> alter cell startup services cellsrv;
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Recreate the Flash Cache
CellCLI> create flashcache all;
Flash cache cel04_FLASHCACHE successfully created
Finally, check the state on all cell servers:
dcli -g cell_group -l root "cellcli -e list cell attributes flashcachemode"
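The CellCLI steps above can be consolidated into one script. A minimal dry-run sketch, assuming passwordless root ssh to the cells and using cel04 from the example (adjust CELLS to your environment); it prints the commands instead of running them:

```shell
#!/usr/bin/env bash
# Hypothetical consolidation of the per-cell Write Back steps above.
# CELLS is a placeholder; commands are printed for review, not executed.
CELLS="cel04"

writeback_cmds() {
  for c in $CELLS; do
    echo "# --- $c ---"
    echo "ssh root@$c cellcli -e drop flashcache"
    echo "ssh root@$c cellcli -e alter cell shutdown services cellsrv"
    echo "ssh root@$c cellcli -e \"alter cell flashCacheMode=writeback\""
    echo "ssh root@$c cellcli -e alter cell startup services cellsrv"
    echo "ssh root@$c cellcli -e create flashcache all"
  done
  echo 'dcli -g cell_group -l root "cellcli -e list cell attributes flashcachemode"'
}

writeback_cmds   # offline method: stop the cluster first, then run these lines
```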
So the first step is done, and now the tests can go on.
In a few weeks I will give feedback on the real improvements, so stay tuned.
Operating an Exadata Database Machine means you have to manage its lifecycle. One major task is the regular patching of the whole Exa stack.
This blog article gives you an overview of the patching.
First remember which components are part of the lifecycle.
Here are the components and the corresponding patching tools:
- GRID & RDBMS – opatch / opatchauto
- DB node – patchmgr (that's new since Oct 2015)
- Storage Grid – patchmgr
Before starting the patching you need to do bullet-proof planning, otherwise you will fail.
For a Quarter Rack with, let's say, 10 production databases you need a planning phase of roughly 2-3 weeks.
How do you come up with a recommendation?
- Analyze your ORACLE_HOMEs
- Check existing SRs for every database
- Meet with your application manager
- Use Oracle Tools like exachk
- Use the conflict analyzer in MOS
exachk will be your best friend.
Check My Oracle Support Note 1070954.1 and install the latest version.
First take a look at the table of contents.
One very important table is the recommended version overview.
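The exachk run itself is quickly scripted. A hedged dry-run sketch, assuming you run it from the exachk install directory (flags and behavior should be verified against Doc ID 1070954.1 for your version); commands are printed for review:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: run exachk as part of patch planning.
# Install path and flags are assumptions -- check Doc ID 1070954.1.
exachk_cmds() {
  echo "./exachk -v"    # show the installed version
  echo "./exachk -a"    # full run: best practice checks and recommended patches
}
exachk_cmds   # the HTML report contains the recommended version table mentioned above
```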
What will be the best recommendation?
There is no easy answer, since Oracle offers several options for the patching:
- the QFSDP, the Quarterly Full Stack Download Patch
- or standalone patch sets for every component like InfiniBand, cell server, DB node and so on
So the decision has to be taken by the whole team of application managers, Oracle DBAs and system administrators.