Exadata: wrong kernel version after DB node upgrade

While doing a lot of Exadata upgrades, I ran into a problem on one of my DB nodes.

The patchmgr run completed without errors and the cluster started up.

I did a few checks, including whether all nodes have the same kernel version,

but node01 was running an older kernel version than the other nodes.
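
On Exadata, a quick way to run this check on all DB nodes at once is dcli (assuming a group file dbs_group that lists the database nodes):

dcli -g dbs_group -l root uname -a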

 

node01: Linux node01 4.1.12-124.23.4.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64

node02: Linux node02 4.14.35-1902.9.2.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64

node03: Linux node03 4.14.35-1902.9.2.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64

node04: Linux node04 4.14.35-1902.9.2.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64

 

It looks like the kernel was not switched during the upgrade of node01.

I checked all logfiles but could not find an error,

so I checked the installed kernel packages:

 

rpm -qa | grep -i kernel

kernel-transition-3.10.0-0.0.0.2.el7.x86_64

kernel-ueknano-4.14.35-1902.9.2.el7uek.x86_64

 

Okay, the new kernel is installed but does not seem to be active.

On Oracle Linux 7 you need to check the „grub.cfg“ file:

/boot/efi/EFI/redhat/grub.cfg

-> it still references the older kernel
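
To quickly see which kernel and initramfs the boot entries reference, a simple grep on that file is enough:

grep -E "linuxefi|initrdefi" /boot/efi/EFI/redhat/grub.cfg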

I changed the following line to the new kernel version,

from „initrdefi /initramfs-4.1.12-124.23.4.el7uek.x86_64.img“

to „initrdefi /initramfs-4.14.35-1902.9.2.el7uek.x86_64.img“
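
For orientation: the relevant menuentry in grub.cfg pairs a linuxefi and an initrdefi line, and both should point to the same kernel version. A shortened, purely illustrative example (entry title, root device and boot parameters omitted):

menuentry 'Oracle Linux Server, with Unbreakable Enterprise Kernel 4.14.35-1902.9.2.el7uek.x86_64' {
        linuxefi /vmlinuz-4.14.35-1902.9.2.el7uek.x86_64 ro ...
        initrdefi /initramfs-4.14.35-1902.9.2.el7uek.x86_64.img
}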

 

Then I regenerated the configuration:

grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

Generating grub configuration file …

Found linux image: /boot/vmlinuz-4.14.35-1902.9.2.el7uek.x86_64

Found initrd image: /boot/initramfs-4.14.35-1902.9.2.el7uek.x86_64.img

Found linux image: /boot/vmlinuz-4.1.12-124.23.4.el7uek.x86_64

Found initrd image: /boot/initramfs-4.1.12-124.23.4.el7uek.x86_64.img
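
Before rebooting, it is worth double-checking that the new kernel is now the default boot entry, for example with grubby (available on Oracle Linux 7):

grubby --default-kernel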

 

After a reboot of node01, the DB node came up with the correct kernel:

Linux node01 4.14.35-1902.9.2.el7uek.x86_64 #2 SMP x86_64 x86_64 x86_64 GNU/Linux

 

Attention

„Please make such a change only if you are absolutely sure and a fallback scenario is in place (for example, boot via diag.iso); otherwise, please open a Service Request.“

Oracle Cloud „follow-up“

Follow-up: „How to set up a CentOS instance and connect via SSH“

Log in to the Oracle Cloud via browser,

go to -> Compute -> Instance

and create a new instance.

But before going on, set up an SSH key:

 


ssh-keygen -t rsa -N "" -b 2048 -C "oracloud" -f /Users/user/.ssh/id_rsa

 

Now you are ready to start the setup; upload the public key (id_rsa.pub) before

starting the installation. The public key upload is mandatory!
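
To copy the key into the upload dialog, you can simply print it (the path matches the key generated above):

cat /Users/user/.ssh/id_rsa.pub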

 

After a few minutes the setup is done; restart the instance

and check that it is running.

For the SSH connection, use the Linux user „opc“ in the Oracle Cloud and your public IP address.

The „opc“ user has no login password!

 

If the instance starts and a „login screen“ comes up asking for

„user“ and „password“, you forgot to upload the public key during setup.

 

Here is the standard login after a correct setup:


ssh opc@158.101.166.239 
[opc@pg11 ~]$ 
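
If the key is not your default identity, pass it explicitly with -i (key path from the ssh-keygen step above):

ssh -i /Users/user/.ssh/id_rsa opc@158.101.166.239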

 

For a root login, enter:

sudo su

[root@pg11 opc]#

No root password is needed!

 

That's it, you are logged in; start with your work and have fun.

The Oracle Cloud documentation for this setup is not very handy,

so I hope these few lines help you get a new machine in the Oracle Cloud up and running fast,

forever free :-)