This is not really a problem, but an annoying issue I had with OEM 12c: once a standby database is promoted, its database system target shows a metric collection error or the status Pending.
The standby database doesn’t need its own system since it will join the primary database system. The solution is to associate the standby database with the primary system and then remove the standby database system.
For example, we’ve got primary and standby databases TESTDB_LON and TESTDB_RDG. Once the standby is promoted, the following targets are also created in OEM – TESTDB_LON_sys and TESTDB_RDG_sys.
The second one will always have the status Pending:
Status Pending (Post Blackout)
The way to resolve this is to associate the standby database with the primary database system. I usually also rename the primary database system to omit the location (LON and RDG):
– Go to the Targets -> Systems and choose the system you want to edit
– Then go to Database System -> Target Setup -> Edit system
– Rename the system name from TESTDB_LON_sys to TESTDB_sys
– Save changes
– Go to Database System again, Target Setup -> Edit system
– Click next to go to Step 2
– Add the standby database to the Standby Database Associations table
– Save changes
At this point we’ve got one system TESTDB_sys with two database members TESTDB_LON and TESTDB_RDG.
The next step is to remove the database system for the standby using emcli:
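A sketch of that emcli step (the target name follows the example above; oracle_dbsys is the target type OEM uses for database systems – verify the exact name with emcli get_targets first):

```shell
# Log in to EM CLI first (sysman or another admin account)
emcli login -username=sysman
# Confirm the exact target name and type before deleting
emcli get_targets
# Delete the now-redundant standby database system target
emcli delete_target -name="TESTDB_RDG_sys" -type="oracle_dbsys"
```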
Exadata X5-2 and X4-8B racks are delivered with the “Enhanced” PDU metering units connected via the Cisco switch. Although the documentation says they should have static addresses, they don’t. You need to configure them manually using a serial console connection; this is described in my earlier post here.
However, if you forget to exit the serial console connection to the PDU and later try to log in using SSH, you’ll get the following message:
login as: admin
CLI already in use!!!
Please try again later .....
Then someone has to go all the way to the data centre and reset the PDU or exit the serial console session.
This happened to me a month ago, right after I applied DBBP7 on 12.1.0.2. For some reason the ora.crf resource didn’t start automatically:
CRS-5013: Agent "ORAROOTAGENT" failed to start process "/u01/app/12.1.0.2/grid/bin/osysmond" for action "start": details at "(:CLSN00008:)" in "/u01/app/oracle/diag/crs/exa01db01/crs/trace/ohasd_orarootagent_root.trc"
CRS-2674: Start of 'ora.crf' on 'exa01db01' failed
Checking the trace file for more details, you can immediately spot where the problem is:
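Since ora.crf (Cluster Health Monitor) is an OHASD-managed resource, checking and restarting it once the root cause is fixed goes through crsctl with the -init flag:

```shell
# As root: check the state of the ora.crf resource
crsctl stat res ora.crf -init
# Start it manually after fixing the underlying problem
crsctl start res ora.crf -init
```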
This will be a simple and short post on an issue I had recently. I got the following error while running the first step of onecommand, Validate Configuration File:
2015-07-01 12:31:03,712 [INFO ][ main][ ValidationUtils:761] SUCCESS: NTP servers on machine exa01db02.local.net verified successfully
2015-07-01 12:31:03,713 [INFO ][ main][ ValidationUtils:761] SUCCESS: NTP servers on machine exa01db01.local.net verified successfully
2015-07-01 12:31:03,714 [INFO ][ main][ ValidationUtils:778] Following errors were found...
2015-07-01 12:31:03,714 [INFO ][ main][ ValidationUtils:783] ERROR: Encountered error while running NTP validation error on host: exa01cel03.local.net
2015-07-01 12:31:03,714 [INFO ][ main][ ValidationUtils:783] ERROR: Encountered error while running NTP validation error on host: exa01cel02.local.net
2015-07-01 12:31:03,714 [INFO ][ main][ ValidationUtils:783] ERROR: Encountered error while running NTP validation error on host: exa01cel01.local.net
Right, so my NTP servers were accessible from the database nodes but not from the cells. When I queried the NTP server from the cells, I got the following error:
# ntpdate -dv ntpserver1
1 Jul 09:00:09 ntpdate: ntpdate 4.2.6p5@1.2349-o Fri Feb 27 14:50:33 UTC 2015 (1)
Looking for host ntpserver1 and service ntp
host found : ntpserver1.local.net
172.16.1.100: Server dropped: no data
server 172.16.1.100, port 123
Perhaps I should have mentioned that the cells have their own firewall (cellwall), which allows only certain inbound/outbound traffic. During boot, the script builds all the rules dynamically and applies them. The above error occurred for two reasons:
A) The NTP servers were specified using hostname instead of IP addresses in OEDA
B) The management network was NOT available after the initial config (applyElasticConfig) was applied
Because of that, cellwall was not able to resolve the NTP servers’ IP addresses, and thus they were omitted from the firewall configuration. You can safely proceed with the deployment, but if you want to get rid of the annoying message the solution is simply to restart the cell firewall: /etc/init.d/cellwall restart
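In short, as root on each affected cell; the iptables check is just a quick way to confirm the NTP (UDP port 123) rules reappeared:

```shell
# Rebuild the firewall rules now that the NTP hostnames resolve
/etc/init.d/cellwall restart
# Confirm rules mentioning port 123 (NTP) are present
iptables -L -n | grep 123
```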
While deploying an X5 full rack recently, it happened that the Grid Infrastructure Management Repository was not created by onecommand. The GIMR database was optional in 12.1.0.1, became mandatory in 12.1.0.2, and should be installed automatically with Oracle Grid Infrastructure 12c Release 1 (12.1.0.2). For reasons unknown to me that didn’t happen, and I had to create it manually. I’ve checked all the log files but couldn’t find any errors. For reference, the OEDA version used was Feb 2015 v15.050 and the image version on the Exadata was 12.1.2.1.0.141206.1.
To create the database, log in as the grid user and create a file holding the following variables:
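A sketch of the manual creation (the grid home path and disk group name below are assumptions, not the original post’s values; the dbca flags follow the commonly documented manual GIMR creation procedure):

```shell
# Assumed locations -- adjust to your environment
export ORACLE_HOME=/u01/app/12.1.0.2/grid
export PATH=$ORACLE_HOME/bin:$PATH

# Create the GIMR (-MGMTDB) as a container database; DBFS_DG is an assumption
dbca -silent -createDatabase -createAsContainerDatabase true \
  -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb \
  -storageType ASM -diskGroupName DBFS_DG \
  -datafileJarLocation $ORACLE_HOME/assistants/dbca/templates \
  -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
```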
I’ve done several Exadata deployments in the past two months and had to upgrade the Exadata storage software on half of them. The reason was that units shipped before May had Exadata storage software version 12.1.2.1.0.
The upgrade process of the database nodes ran fine, but when I ran dbnodeupdate.sh -c to complete the post-upgrade steps I got an error that the system wasn’t on the expected Exadata release or kernel:
(*) 2015-06-01 14:21:21: Verifying GI and DB's are shutdown
(*) 2015-06-01 14:21:22: Verifying firmware updates/validations. Maximum wait time: 60 minutes.
(*) 2015-06-01 14:21:22: If the node reboots during this firmware update/validation, re-run './dbnodeupdate.sh -c' after the node restarts..
(*) 2015-06-01 14:21:23: Collecting console history for diag purposes
ERROR: System not on expected Exadata release or kernel, exiting
ERROR: Correct error, or to override run: ./dbnodeupdate.sh -c -q -t 12.1.2.1.1.150316.2
Indeed, the database node was running the new Exadata software but still using the old kernel (2.6.39-400.243), while dbnodeupdate was expecting the new 2.6.39-400.248 kernel:
Kernel version: 2.6.39-400.243.1.el6uek.x86_64 #1 SMP Wed Nov 26 09:15:35 PST 2014 x86_64
Image version: 12.1.2.1.1.150316.2
Image activated: 2015-06-01 12:27:57 +0100
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1
The reason was that the previous run of dbnodeupdate installed the new kernel package but failed to update grub.conf. The solution is to manually add the missing kernel entry to grub.conf and reboot the server to pick up the new kernel. Here is a note with more information, which at the time I hit this problem was still internal:
Bug 20708183 – DOMU:GRUB.CONF KERNEL NOT ALWAYS UPDATED GOING TO 121211, NEW KERNEL NOT BOOTED
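The missing grub.conf entry is a copy of the existing one with the new kernel version substituted in; a sketch (take the exact version string from rpm -q kernel-uek, and copy the kernel boot parameters from the existing entry):

```
title Oracle Linux Server (2.6.39-400.248.el6uek.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.39-400.248.el6uek.x86_64 ro root=/dev/mapper/VGExaDb-LVDbSys1
        initrd /initramfs-2.6.39-400.248.el6uek.x86_64.img
```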
This is just a quick post to highlight a problem I had recently on another Exadata deployment.
For most customers the management network on Exadata is routable and the DNS servers are accessible. However, in a recent deployment for a financial organization this wasn’t the case and the storage servers were NOT able to reach the DNS servers. The customer provided a different set of DNS servers within the management network which were still able to resolve all the Exadata hostnames. If you encounter a similar problem, stop all cell services and run ipconf on each storage server to update the DNS servers.
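A sketch of that procedure, as root on each storage server (ipconf prompts interactively for the new DNS servers; the path is the stock Exadata location):

```shell
# Stop all cell services before reconfiguring
cellcli -e 'alter cell shutdown services all'
# Re-run ipconf and supply the new DNS servers at the prompts
/opt/oracle.cellos/ipconf
# Bring the services back up
cellcli -e 'alter cell startup services all'
```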
On each storage server there is a service called cellwall (/etc/init.d/cellwall) which runs many checks and applies a lot of iptables rules. Here are a couple of comments from the script to give you an idea:
# general lockdown from everything external, (then selectively permit)
# general permissiveness (localhost: if you are in, you are in)
# allow all udp traffic only from rdbms hosts on IPoIB only
# allow DNS to work on all interfaces
# open sport=53 only for DNS servers (mitigate remote-offlabel-port exploit)
and many more, but you can check the script to see what it does, or run iptables -L -n to list all the iptables rules.
Here is some more information on how to change IP addresses on Exadata:
UPDATE: Thanks to Jason Arneil for pointing out the proper way to update the configuration of the cell.
I’ve done several Exadata deployments recently and I have to say that of all the components, the PDUs were the hardest to configure. It’s important to note that unlike earlier generations of Exadata, the PDUs in X5 are Enhanced PDUs, not Standard ones.
Reading the public documentation (Configuring the Power Distribution Units), it says that on PDUs with three power input leads you need to connect the middle power lead to the power source. Well, I’ve done that many times and it did NOT work; the documentation says the PDU should then be accessible on 192.168.0.1. I believe the reason is that DHCP is enabled by default, which can easily be confirmed by checking the LCD screen of the PDU. I even tried setting up a DHCP server myself to make the PDU acquire an IP, but that didn’t work either.
To configure the PDU you need to connect through the serial management port. Nowadays there are no more laptops with serial ports, so you will need a USB to RS-232 DB9 serial adapter; I bought mine from Amazon. You will also need a DB9 to RJ45 cable – these are quite popular and I’m sure you’ve seen the blue Cisco console cable before.
You need to connect the cable to the SET MGT port of the PDU and then establish a terminal connection (you can use putty too) with the following settings:
9600 baud, 8 bit, 1 stop bit, no parity bit, no flow control
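On Linux, with the USB adapter showing up as /dev/ttyUSB0 (the device name is an assumption), the same settings can be used with screen:

```shell
# 9600 baud; 8N1 and no flow control match the settings above
screen /dev/ttyUSB0 9600
```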
The username is admin and the password is adm1n.
Here are the commands you need to configure the PDU. Note that each network change requires a reboot of the PDU:
Welcome to Oracle PDU
Login OK - Admin rights!
Regarding the network connectivity – the documentation says you need two additional cables from your management network. However, if you have a half or quarter rack, you can plug the PDU network connections into the management (Cisco) switch. Note that if you ever plan to upgrade to a full rack, you will have to provide the two additional cables from your management network and disconnect the PDUs from the management switch.
IMPORTANT: Make sure you don’t leave any active CLI sessions open, otherwise you won’t be able to log in remotely and a data centre visit will be required to reboot the PDU.
I was recently upgrading an Exadata 12.1.0.2 system from DBBP6 to DBBP7 and, as usual, I went for the latest OPatch version, which was 12.1.0.1.7 (Apr 2015) at the time.
After running the opatchauto apply or opatchauto apply -analyze I got the following error:
System Configuration Collection failed: oracle.osysmodel.driver.sdk.productdriver.ProductDriverException: java.lang.NullPointerException
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Stream closed
Caused by: java.io.IOException: Stream closed
... 2 more
opatchauto failed with error code 1.
This was a known problem caused by the following bugs, which are fixed in OPatch 12.1.0.1.8:
Bug 20892488 – OPATCHAUTO ANALYZE FAILING WITH GIPATCHINGHELPER::CREATESYSTEMINSTANCE FAILED
Bug 20857919 – LNX64-121023GIPSU: APPLY GIPSU FAILED WITH SYSTEM CONFIGURATION COLLECTION FAILED
As 12.1.0.1.8 is not yet available, the workaround is to use the lower OPatch version 12.1.0.1.6, which can be downloaded from this note:
Opatchauto Gives “System Configuration Collection Failed” Message (Doc ID 2001933.1)
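A quick way to confirm which OPatch version a home is actually using before running opatchauto again:

```shell
# Run from the grid or database home being patched
$ORACLE_HOME/OPatch/opatch version
```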
You might run into the same problem if you are applying the 12.1.0.2.3 PSU.
I’m pleased to say that I will be speaking at the UKOUG Systems Event 2015, held at the Cavendish Conference Centre in London on 20 May 2015. My session “Oracle Exadata Meets Elastic Configurations” starts at 10:15 in the Portland Suite. Here is the agenda of the UKOUG Systems Event.
In a month’s time I’ll also be speaking at the Spring Conference of the Bulgarian Oracle User Group. The conference will be held from 12 to 14 June 2015 at the Novotel hotel in Plovdiv, Bulgaria. I’ve got the conference opening slot at 11:00 in hall Moskva; my session topic is “Oracle Data Guard Fast-Start Failover: Live Demo”. Here is the agenda of the conference.
I would like to thank EDBA for making this happen!