MGMTDB not automatically created on Exadata X5 and GI 12.1.0.2

July 1st, 2015 No comments

While deploying an X5 Full Rack recently, the Grid Infrastructure Management Repository was not created by onecommand. The GIMR database was optional in 12.1.0.1, became mandatory in 12.1.0.2, and should be installed automatically with Oracle Grid Infrastructure 12c Release 1 (12.1.0.2). For reasons unknown to me that didn't happen and I had to create it manually. I checked all the log files but couldn't find any errors. For reference, the OEDA version used was Feb 2015 v15.050 and the image version on the Exadata was 12.1.2.1.0.141206.1.

To create the database, log in as the Grid Infrastructure owner (shown as oracle below) and create a file holding the following variables:

cat > /tmp/cfgrsp.properties
oracle.assistants.asm|S_ASMPASSWORD=[your ASM password]
oracle.assistants.asm|S_ASMMONITORPASSWORD=[your ASM password]

and run the following command:

[oracle@exa01 ~]$ export GRID_HOME=/u01/app/12.1.0.2/grid
[oracle@exa01 ~]$ $GRID_HOME/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/tmp/cfgrsp.properties
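Once configToolAllCommands completes you can verify that the repository database is registered and running — a quick check using the same grid home as above:

[oracle@exa01 ~]$ $GRID_HOME/bin/srvctl status mgmtdb
[oracle@exa01 ~]$ $GRID_HOME/bin/crsctl stat res ora.mgmtdb -t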

For reference, here is a similar bug I found on MOS:
-MGMTDB Not Created When Using EM12c Provisioning (Doc ID 1983885.1)

Categories: oracle Tags:

dbnodeupdate.sh post upgrade step fails on Exadata storage software 12.1.2.1.1

June 23rd, 2015 No comments

I've done several Exadata deployments in the past two months and had to upgrade the Exadata storage software on half of them. The reason was that units shipped before May came with Exadata storage software version 12.1.2.1.0.

The upgrade process of the database nodes ran fine, but when I ran dbnodeupdate.sh -c to complete the post-upgrade steps I got an error that the system wasn't on the expected Exadata release or kernel:

(*) 2015-06-01 14:21:21: Verifying GI and DB's are shutdown
(*) 2015-06-01 14:21:22: Verifying firmware updates/validations. Maximum wait time: 60 minutes.
(*) 2015-06-01 14:21:22: If the node reboots during this firmware update/validation, re-run './dbnodeupdate.sh -c' after the node restarts..
(*) 2015-06-01 14:21:23: Collecting console history for diag purposes

ERROR: System not on expected Exadata release or kernel, exiting


ERROR: Correct error, or to override run: ./dbnodeupdate.sh -c -q -t 12.1.2.1.1.150316.2

Indeed, the database node was running the new Exadata software but was still using the old kernel (2.6.39-400.243), while dbnodeupdate expected the node to be running the new 2.6.39-400.248 kernel:

imageinfo:
Kernel version: 2.6.39-400.243.1.el6uek.x86_64 #1 SMP Wed Nov 26 09:15:35 PST 2014 x86_64
Image version: 12.1.2.1.1.150316.2
Image activated: 2015-06-01 12:27:57 +0100
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1

The reason was that the previous run of dbnodeupdate installed the new kernel package but failed to update grub.conf. The solution is to manually add the missing kernel entry to grub.conf and reboot the server to pick up the new kernel. Here is a note with more information, which at the time I hit this problem was still internal:


Bug 20708183 – DOMU:GRUB.CONF KERNEL NOT ALWAYS UPDATED GOING TO 121211, NEW KERNEL NOT BOOTED
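For illustration, the missing stanza in /boot/grub/grub.conf would look roughly like the one below. The exact kernel release string and boot parameters must be copied from the existing entry for the old kernel, so treat this purely as a sketch:

title Oracle Linux Server (2.6.39-400.248.<build>.el6uek.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.39-400.248.<build>.el6uek.x86_64 ro root=/dev/mapper/VGExaDb-LVDbSys1 <same parameters as the old entry>
        initrd /initramfs-2.6.39-400.248.<build>.el6uek.x86_64.img

The safest approach is to copy the stanza for the old 2.6.39-400.243 kernel, change only the version strings, and make sure the default= line in grub.conf points at the new entry before rebooting.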

Categories: oracle Tags:

How do I change DNS servers on Exadata storage servers

June 19th, 2015 No comments

This is just a quick post to highlight a problem I had recently on another Exadata deployment.

For most customers the management network on Exadata is routable and the DNS servers are accessible. In a recent deployment for a financial organization, however, this wasn't the case and the storage servers were NOT able to reach the DNS servers. The customer provided a different set of DNS servers within the management network which were still able to resolve all the Exadata hostnames. If you encounter a similar problem, stop all cell services and run ipconf on each storage server to update the DNS servers.
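Roughly, the sequence on each cell looks like this — a sketch only, so check the ipconf location and options against your Exadata storage software version first:

# stop all cell services
cellcli -e "alter cell shutdown services all"

# re-run the network configuration and update the DNS server entries when prompted
/opt/oracle.cellos/ipconf

# start the cell services again
cellcli -e "alter cell startup services all"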

On each storage server there is a service called cellwall (/etc/init.d/cellwall) which runs many checks and applies a lot of iptables rules. Here are a couple of comments from the script to give you an idea:

# general lockdown from everything external, (then selectively permit)
  # general permissiveness (localhost: if you are in, you are in)
  # allow all udp traffic only from rdbms hosts on IPoIB only
      # allow DNS to work on all interfaces
      # open sport=53 only for DNS servers (mitigate remote-offlabel-port exploit)

and many more, but you can check the script to see what it does, or run iptables -L -n to list all the iptables rules.

Here is some more information on how to change IP addresses on Exadata:

UPDATE: Thanks to Jason Arneil for pointing out the proper way to update the configuration of the cell.

Categories: oracle Tags:

How to configure Power Distribution Units on Exadata X5

June 18th, 2015 No comments

I've done several Exadata deployments recently and I have to say that of all the components the PDUs were the hardest to configure. It's important to note that, unlike earlier generations of Exadata, the PDUs in X5 are Enhanced PDUs, not Standard ones.

The public documentation (Configuring the Power Distribution Units) says that on PDUs with three power input leads you need to connect the middle power lead to the power source, and the PDU should then be accessible on 192.168.0.1. Well, I've done that many times and it did NOT work. I believe the reason is that DHCP is enabled by default, which can easily be confirmed by checking the LCD screen of the PDU. I even tried setting up a DHCP server myself to make the PDU acquire an IP address, but that didn't work either.

To configure the PDU you need to connect through the serial management port. Nowadays there are no laptops with serial ports, so you will need a USB to RS-232 DB9 serial adapter; I bought mine from Amazon. You will also need a DB9 to RJ45 cable – these are quite popular and I'm sure you've seen the blue Cisco console cable before.

You need to connect the cable to the SER MGT port of the PDU and then establish a terminal connection (you can use putty too) with the following settings:
9600 baud, 8 data bits, 1 stop bit, no parity, no flow control
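On a Linux laptop the serial session can be opened with screen, for example (assuming the USB adapter shows up as /dev/ttyUSB0):

screen /dev/ttyUSB0 9600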

The username is admin and the password is adm1n.

Here are the commands you need to configure the PDU. Each network change requires a reboot of the PDU:

Welcome to Oracle PDU

pducli->username: admin
pducli->password: *****
Login OK - Admin rights!
pducli->

set pdu_name=exa01pdu01
set systime_manual_date=2015-06-18
set systime_manual_time=12:45:00
set systime_ntp_server_enable=On
set systime_ntp_server=192.168.1.2
set systime_dst_enable=On
set net_ipv4_dhcp=Off
reset=yes

set net_ipv4_ipaddr=192.168.1.10
set net_ipv4_subnet=255.255.255.0
set net_ipv4_gateway=192.168.1.1
set net_ipv4_dns1=192.168.1.3
set net_ipv4_dns2=192.168.1.4
reset=yes
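After the final reset the PDU should come up on its static address. A quick sanity check from a host on the management network, using the example addresses above:

ping -c 3 192.168.1.10

The PDU web interface should then also be reachable on the same address.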

Regarding the network connectivity – the documentation says you need two additional cables from your management network. However, if you have a half or quarter rack you can plug the PDU network connections into the management (Cisco) switch. Note that if you ever plan to upgrade to a full rack you will have to provide the two additional cables from your management network and disconnect the PDUs from the management switch.

IMPORTANT: Make sure you don't leave any active CLI sessions, otherwise you won't be able to log in remotely and a data centre visit will be required to reboot the PDU.

Categories: oracle Tags:

opatch 12.1.0.1.7 fails with System Configuration Collection Failed

June 1st, 2015 No comments

I was recently upgrading an Exadata 12.1.0.2 DBBP6 to DBBP7 and, as usual, I went for the latest OPatch version, which was 12.1.0.1.7 (Apr 2015) at the time.

After running opatchauto apply or opatchauto apply -analyze I got the following error:

System Configuration Collection failed: oracle.osysmodel.driver.sdk.productdriver.ProductDriverException: java.lang.NullPointerException
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Stream closed
        at oracle.opatchauto.gi.GILogger.writeWithoutTimeStamp(GILogger.java:450)
        at oracle.opatchauto.gi.GILogger.printStackTrace(GILogger.java:465)
        at oracle.opatchauto.gi.OPatchauto.main(OPatchauto.java:97)
Caused by: java.io.IOException: Stream closed
        at java.io.BufferedWriter.ensureOpen(BufferedWriter.java:98)
        at java.io.BufferedWriter.write(BufferedWriter.java:203)
        at java.io.Writer.write(Writer.java:140)
        at oracle.opatchauto.gi.GILogger.writeWithoutTimeStamp(GILogger.java:444)
        ... 2 more

opatchauto failed with error code 1.

This was a known problem caused by the following bugs, which are fixed in 12.1.0.1.8:
Bug 20892488 : OPATCHAUTO ANALYZE FAILING WITH GIPATCHINGHELPER::CREATESYSTEMINSTANCE FAILED
BUG 20857919 – LNX64-121023GIPSU:APPLY GIPSU FAILED WITH SYSTEM CONFIGURATION COLLECTION FAILED

As 12.1.0.1.8 was not yet available, the workaround is to use the lower OPatch version 12.1.0.1.6, which can be downloaded from this note:
Opatchauto Gives “System Configuration Collection Failed” Message (Doc ID 2001933.1)
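Before running opatchauto again it's worth confirming which OPatch version each home actually picks up — a quick check using the usual Exadata default home paths (adjust to your environment):

/u01/app/12.1.0.2/grid/OPatch/opatch version
/u01/app/oracle/product/12.1.0.2/dbhome_1/OPatch/opatch version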

You might run into the same problem if you are applying the 12.1.0.2.3 PSU.

Categories: oracle Tags:

Speaking at UKOUG Systems Event and BGOUG

May 19th, 2015 No comments

I'm pleased to say that I will be speaking at the UKOUG Systems Event 2015, held at the Cavendish Conference Center in London on 20 May 2015. My session "Oracle Exadata Meets Elastic Configurations" starts at 10:15 in the Portland Suite. Here is the agenda of the UKOUG Systems Event.

In a month's time I'll also be speaking at the Spring Conference of the Bulgarian Oracle User Group. The conference will be held from 12th to 14th June 2015 at the Novotel hotel in Plovdiv, Bulgaria. I've got the conference opening slot at 11:00 in the Moskva hall; my session topic is "Oracle Data Guard Fast-Start Failover: Live demo". Here is the agenda of the conference.

I would like to thank EDBA for making this happen!

Categories: oracle Tags: ,

applyElasticConfig.sh fails with Unable to locate any IB switches

May 15th, 2015 No comments

With the release of Exadata X5, Oracle introduced elastic configurations and changed how the initial configuration is performed. Previously you had to run applyconfig.sh, which would go across the nodes and change all the settings according to your configuration. This script has now evolved into applyElasticConfig.sh, which is part of OEDA (onecommand). During one of the recent deployments I ran into the problem below:

[root@node8 linux-x64]# ./applyElasticConfig.sh -cf Customer-exa01.xml

Applying Elastic Config...
Applying Elastic configuration...
Searching Subnet 172.16.2.x..........
5 live IPs in 172.16.2.x.............
Exadata node found 172.16.2.46.
Collecting diagnostics...
Errors occurred. Send /opt/oracle.SupportTools/onecommand/linux-x64/WorkDir/Diag-150512_160716.zip to Oracle to receive assistance.
Exception in thread "main" java.lang.NullPointerException
at oracle.onecommand.commandexec.utils.CommonUtils.getStackFromException(CommonUtils.java:1579)
at oracle.onecommand.deploy.cliXml.ApplyElasticConfig.doDaApply(ApplyElasticConfig.java:105)
at oracle.onecommand.deploy.cliXml.ApplyElasticConfig.main(ApplyElasticConfig.java:48)

Going through the logs we can see the following message:

2015-05-12 16:07:16,404 [FINE ][ main][ OcmdException:139] OcmdException from node node8.my.company.com return code = 2 output string: Unable to locate any IB switches... stack trace = java.lang.Throwable

The problem was caused by the IB switch names in my OEDA XML file being different from the ones physically in the rack – in fact, the IB switch hostnames were missing from the switches' hosts file. So if you ever run into this problem, make sure the hosts file (/etc/hosts) on each IB switch has the correct hostname in the proper format:

#IP                 FQDN                      ALIAS
192.168.1.100       exa01ib01.local.net       exa01ib01

Also make sure to reboot the IB switch after any change to its hosts file.
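A quick way to cross-check the names the software will see is to list the switches on the fabric from one of the database nodes and confirm the hostnames resolve — a small sketch (ibswitches is part of the InfiniBand diagnostics shipped on the compute nodes, and exa01ib01 is just the example name from above):

# list the IB switches visible on the fabric and compare the names with the OEDA XML
ibswitches

# confirm the switch hostname resolves
nslookup exa01ib01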

Categories: oracle Tags:

How to configure Link Aggregation Control Protocol on Exadata

May 13th, 2015 2 comments

During a recent X5 installation I had to configure Link Aggregation Control Protocol (LACP) on the client network of the compute nodes. Although the ports were running at 10Gbit and the default active/passive configuration works perfectly fine, the customer wanted an even distribution of traffic and workload across their core switches.

Link Aggregation Control Protocol (LACP), also known as 802.3ad, is a method of combining multiple physical network connections into one logical connection to increase throughput and provide redundancy should one of the links fail. The protocol requires both the server and the switch(es) to have the same settings for LACP to work properly.

To configure LACP on Exadata you need to change the bondeth0 parameters.

On each of the compute nodes open the following file:

/etc/sysconfig/network-scripts/ifcfg-bondeth0

and replace the line saying BONDING_OPTS with this one:

BONDING_OPTS="mode=802.3ad xmit_hash_policy=layer3+4 miimon=100 downdelay=200 updelay=5000 num_grat_arp=100"

and then restart the network interface:

ifdown bondeth0
ifup bondeth0
Determining if ip address 192.168.1.10 is already in use for device bondeth0...

You can check the status of the interface by querying the proc filesystem. Make sure both interfaces are up and running at the same speed. The essential part for making sure LACP is working is shown below:

cat /proc/net/bonding/bondeth0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 33
Partner Key: 34627
Partner Mac Address: 00:23:04:ee:be:c8

I had a problem where the client network did NOT come up after a server reboot. This was happening because during system boot the 10Gbit interfaces go through multiple resets, causing very fast link changes. Here is the status of the bond from that time:

cat /proc/net/bonding/bondeth0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
bond bondeth0 has no active aggregator

The solution was to decrease downdelay to 200. The issue is described in this note:
Bonding Mode 802.3ad Using 10Gbps Network – Slave NICs Fail to Come Up Consistently after Reboot (Doc ID 1621754.1)

Categories: linux, oracle Tags:

Smart firewalls

April 15th, 2015 No comments

It’s been a while since my last post but I was really busy working on a number of projects.

The purpose of this post is to highlight an issue I had while building a standby database. The environment consisted of three 11.2.0.3 databases on host A (primary), which were restored from backup onto another host B (standby); both hosts were running Linux. It's important to mention that the hosts were located in different data centres.

Once a standby database was mounted we would start shipping archive log files from the primary without adding it to the Data Guard Broker configuration at that point. We wanted to touch production as little as possible and would add the database to the broker configuration just before doing the switchover. In the meantime we would manually recover the standby database to reduce the apply lag by the time the database was added to the broker configuration. This approach worked fine for two of the databases but we got this error for the third one:

Fri Mar 13 13:33:43 2015
RFS[33]: Assigned to RFS process 29043
RFS[33]: Opened log for thread 1 sequence 29200 dbid -707326650 branch 806518278
CORRUPTION DETECTED: In redo blocks starting at block 20481 count 2048 for thread 1 sequence 29200
Deleted Oracle managed file +RECO01/testdb/archivelog/2015_03_13/thread_1_seq_29200.8481.874244023
RFS[33]: Possible network disconnect with primary database
Fri Mar 13 13:42:45 2015
Errors in file /u01/app/oracle/diag/rdbms/testdb/testdb/trace/testdb_rfs_31033.trc:

Running through the trace file the first thing which I noticed was:

Corrupt redo block 5964 detected: BAD CHECKSUM

We already had two databases shipping from host A to host B, so we ruled out a firewall issue. We then tried a couple of other things – manually recovering the standby with an incremental backup, recreating the standby, clearing all the redo/standby log groups – but nothing helped. I found only one note in MOS with a similar symptom, for Streams in 10.2.

In the end the network admins were asked to check the firewall configuration one more time. There were two firewalls – one at the location of host A and another at the location of host B.

It turned out that the firewall at host A's location had SQL*Net class inspection enabled, which was causing the corruption. The logs were shipped successfully from the primary database once this firewall feature was disabled. The strange thing was that we hadn't had any issues with the other two databases running on the same hosts – well, what can I say, smart firewalls.

Categories: oracle Tags:

RHEL6 udev and EMC PowerPath

January 26th, 2015 No comments

I'm working on an Oracle database migration project where the customer has chosen commodity x86 hardware with RHEL6 and EMC storage.

I've done many similar installations in the past and I always used the native MPIO in Linux (DM-Multipath) to load balance and fail over I/O paths. This time, however, I've got EMC PowerPath doing the load balancing and failover, with the native MPIO disabled. From my point of view it's much the same whether I'm using /dev/emcpower* or /dev/mapper/* devices. Obviously PowerPath has some advantages over the native MPIO, which I really can't judge yet. There's a good paper from EMC giving a comparison of the native MPIO in different operating systems.

As mentioned before, the aggregated logical names (pseudo names) with EMC PowerPath can be found under /dev/emcpowerX. I partitioned the disks with GPT tables and aligned the first partition to match the storage sector size. I also added the following line to the udev rules to make sure my devices get the proper permissions:

ACTION=="add", KERNEL=="emcpowerr1", OWNER:="oracle", GROUP:="dba", MODE="0600"

I restarted the server, and later udev as well, to make sure ownership and permissions were picked up correctly. Upon running asmca to create ASM with the first disk group I got the following errors:

Configuring ASM failed with the following message:
One or more disk group(s) creation failed as below:
Disk Group DATA01 creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/emcpowerr1' matches no disks
ORA-15025: could not open disk "/dev/emcpowerr1"
ORA-15056: additional error message

Well, that's strange – I was sure the file had the correct permissions. However, listing the file proved that it didn't. I repeated the process several times and always got the same result; you can use a simple touch command to reproduce it:

[root@testdb ~]# ls -al /dev/emcpowerr1
brw-rw---- 1 oracle dba 120, 241 Jan 23 12:35 /dev/emcpowerr1
[root@testdb ~]# touch /dev/emcpowerr1
[root@testdb ~]# ls -al /dev/emcpowerr1
brw-rw---- 1 root root 120, 241 Jan 23 12:35 /dev/emcpowerr1

Something was changing the ownership of the file and I didn't know what. Well, you'll be no less surprised than I was to find that Linux has an auditing framework similar to the one in the Oracle database.

auditctl allows you to audit any file for any syscall run against it. In my case I would like to know which process is changing the ownership and permissions of my device file. Another helpful command is ausyscall, which allows you to map syscall names to numbers. In other words, I would like to know the chmod syscall number on a 64-bit platform (it does matter):

[root@testdb ~]# ausyscall x86_64 chmod --exact
90

Then I would like to set up auditing for all chmod calls against my device file:

[root@testdb ~]# auditctl -a exit,always -F path=/dev/emcpowerr1 -F arch=b64 -S chmod
[root@testdb ~]# touch /dev/emcpowerr1
[root@testdb ~]# tail -f /var/log/audit/audit.log
type=SYSCALL msg=audit(1422016631.416:4208): arch=c000003e syscall=90 success=yes exit=0 a0=7f3cfbd36960 a1=61b0 a2=7fff5c59b830 a3=0 items=1 ppid=60056 pid=63212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="udevd" exe="/sbin/udevd" key=(null)
type=CWD msg=audit(1422016631.416:4208):  cwd="/"
type=PATH msg=audit(1422016631.416:4208): item=0 name="/dev/emcpowerr1" inode=28418 dev=00:05 mode=060660 ouid=54321 ogid=54322 rdev=78:f1 nametype=NORMAL
[root@testdb ~]# auditctl -D
No rules

Gotcha! So it was udev changing the permissions – but why?

I spent half a day going through logs and tracing udev but couldn't find anything.

At the end of the day I found a Red Hat article describing exactly the same problem. The solution was to have "add|change" in the ACTION directive instead of only "add".

So here is the rule you need in order for udev to set persistent ownership/permissions on EMC PowerPath device files in RHEL 6:

[root@testdb ~]# cat /etc/udev/rules.d/99-oracle-asm.rules
ACTION=="add|change", KERNEL=="emcpowerr1", OWNER:="oracle", GROUP:="dba", MODE="0600"
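To pick the rule up without rebooting, you can ask udev to reload its rules and re-trigger change events for the device — standard udevadm commands on RHEL 6 (the device name is the one from this example):

udevadm control --reload-rules
udevadm trigger --action=change
ls -al /dev/emcpowerr1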

Hope it helps and you don't have to spend half a day on it as I did.

Sve

Categories: linux, oracle Tags: