Archive

Archive for the ‘oracle’ Category

Introducing Oracle ASM Filter Driver

October 27th, 2015 3 comments

The Oracle ASMFD (ASM Filter Driver) was introduced in Oracle Database 12.1.0.2 and at the moment it is available on Linux systems only.

Oracle ASM Filter Driver is a kernel module, very much like ASMLIB, that resides in the I/O path of the Oracle ASM disks. It provides an interface between the Oracle binaries and the underlying operating environment.

Here are some of the features of ASMFD:

  • Reject non-Oracle I/O

The ASM filter driver will reject write I/O operations issued by non-Oracle commands. This prevents non-Oracle applications from writing to ASM disks and protects ASM from accidental corruption.

  • Device name persistence

Similarly to ASMLIB, you don’t have to configure device name persistence using UDEV.

  • Faster node recovery

According to the documentation, ASMFD allows Oracle Clusterware to perform node-level fencing without a reboot. So if CSS is not running or a node is fenced, the Oracle stack will be restarted instead of the node being rebooted. This greatly reduces recovery time, as some enterprise servers can take up to 10 minutes to boot.

  • Reduce OS resource usage

ASMFD exposes a portal device that can be used for all I/O on a particular host, thus decreasing the number of open file descriptors. Without it, each ASM process needs an open descriptor to each ASM disk. I’m not sure how much this will save you, but it might be useful if you have hundreds of ASM disks.

  • Thin Provisioning & Data Integrity

This is another new and cool feature which is very popular in the virtualization world. When enabled, disk space that is no longer in use can be returned to the array, also known as thin provisioning. This attribute can be set only if the ASM compatibility is greater than or equal to 12.1.0.0 and it requires you to use ASMFD (see the sketch below).
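For illustration, here is a minimal sketch of how the attribute could be set, assuming a disk group called DATA backed by ASMFD disks (the disk group name and host are just examples):

[grid@exa01 ~]$ sqlplus / as sysasm

SQL> -- thin_provisioned needs compatible.asm of at least 12.1
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1.0.0.0';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'thin_provisioned' = 'TRUE';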

In a way ASMFD is a replacement for ASMLIB as it includes the base ASMLIB features. However, ASMFD takes it one step further by protecting the ASM disks from non-Oracle write I/O operations to prevent accidental damage. Unlike ASMLIB, ASMFD is installed with the Oracle Grid Infrastructure installation.
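As a quick illustration of managing ASMFD, the ASMCMD AFD commands can be used to check the driver state and label disks. A minimal sketch, assuming the Grid Infrastructure environment is set for the grid owner and that /dev/sdb1 is a disk you want to hand to ASM (the label name, device and host are made-up examples):

[grid@exa01 ~]$ asmcmd afd_state                      # shows whether ASMFD is loaded and filtering is enabled
[grid@exa01 ~]$ asmcmd afd_label DATA01 /dev/sdb1     # label the disk so ASMFD manages it
[grid@exa01 ~]$ asmcmd afd_lsdsk                      # list disks currently labelled for ASMFD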

 

Brief history of ASM and the need of ASM Filter Driver

To understand ASMFD better we need to understand where the need comes from. It’s important to say that this is very specific to Linux, as other platforms have other methods to fulfil these requirements. Because that’s not the main purpose of this post and it’s rather long, I decided to keep it at the end of the post.

In Linux, as on any other platform, there is user separation which implies access restrictions. We usually install Oracle Database under the oracle user, and to do so we need writable access to the directories we plan to use. By default that would be /home/oracle/, which as you can imagine is not very handy; you might also want to install the database on a separate partition or file system. For this reason the root user creates the required directories, usually /u01 or /opt, and changes their ownership to oracle (a small example follows).
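A minimal sketch of those steps, run as root; the group name oinstall and the permissions are assumptions, adjust them to your environment:

[root@server ~]# mkdir -p /u01/app/oracle
[root@server ~]# chown -R oracle:oinstall /u01
[root@server ~]# chmod -R 775 /u01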

That would work if you want to store your database files in a file system. However, traditional file systems were not designed for database files: they need a file system check on a regular basis and sometimes they get corrupted. For that reason, and for performance, many people moved to RAW devices in the past. Another case is if you want to run RAC – you’ll need either a cluster file system or RAW devices.

Historically, with 9i and 10g, we used to create RAW devices, which are a one-to-one mapping between a device file and a logical name. For example you would create a partition on each device (/dev/sda1, /dev/sdb1) and then map those to /dev/raw/raw1, /dev/raw/raw2 and so on. Additionally, because in Linux the device files are rebuilt each time the system boots, you need to make sure the permissions and ownership are preserved and persist after a reboot. This was achieved by adding rules to your boot scripts (often rc.local). For other platforms, HP-UX for example, one had to buy an additional license (HP Serviceguard extension for RAC) which would give you the ability to have shared LVM volume groups across two or more servers.

However, the support and maintenance of raw devices was really difficult and Oracle came up with the idea to create their own volume manager to simplify database administration and eliminate the need to manage thousands of database files – Automatic Storage Management, or ASM for short. A simple description is that ASM is a very sophisticated volume manager for Oracle data. ASM can also be used if you deploy RAC, hence you don’t need cluster file systems or RAW devices anymore. Additionally it provides redundancy, so if you have JBOD you can use ASM to mirror the data. Another important feature is that you don’t need persistent device naming anymore: upon start ASM will read all the disk drives specified by asm_diskstring and use the ones on which an ASM header is found. Although ASM was released in 10.1, people were still using raw devices at the time because ASM was too new and unknown for many DBAs.

ASM logically groups all the disks (LUNs presented from the storage) into what are called ASM disk groups, and because it uses Oracle Managed Files you don’t really care anymore where your files are or what their names are. ASM is just another abstraction layer in the database file storage. ASM is available on all platforms, so in a way it standardizes the administration of database files. Often the DBAs will also administer ASM, but it could be the storage team managing it. You still had to make sure the device files had the correct permissions before ASM could use them, otherwise no disk group would be available and hence the database could not start.

At the same time, back in 2004, Oracle released another product, ASMLib, whose only purpose was to persist the device naming and preserve the device file permissions. I don’t want to go into details about ASMLib here, but there is an old and very good post on ASMLib from Wim Coekaerts (HERE). Just to mention that ASMLib is also available under RHEL; more can be found HERE.

In recent years many people, like myself, have used UDEV to persist the permissions and ownership of the device files used by ASM. I really like to have a one-to-one match between device files and ASM disk names for better understanding and to ease any future troubleshooting. A minimal example rule is shown below.
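For reference, a UDEV rule could look something like this; the WWID, ownership, group and symlink name are made-up examples and will differ on your systems:

# /etc/udev/rules.d/99-oracle-asm.rules
KERNEL=="sd?1", SUBSYSTEM=="block", ENV{ID_SERIAL}=="36000c29f0000000000000000000000a1", \
  SYMLINK+="oracleasm/asm-data01", OWNER="oracle", GROUP="dba", MODE="0660"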

ASM Filter Driver takes this one step further by introducing the features above. I can see people starting to use ASMFD to take advantage of thin provisioning or to make sure no one overwrites the ASM device files by mistake – yes, this happens, and it happened to me recently.

Categories: oracle Tags: ,

Database system target in pending status for standby database in OEM 12c

October 6th, 2015 No comments

That’s not really a problem but an annoying issue I had with OEM 12c. Once a standby database target is promoted, its database system target shows a metric collection error or a status of Pending.

The standby database doesn’t need its own system since it will join the primary database system. The solution is to associate the standby database with the primary system and then remove the standby database system.

For example, we’ve got primary and standby databases TESTDB_LON and TESTDB_RDG. Once promoted, the following targets are also created in OEM – TESTDB_LON_sys and TESTDB_RDG_sys.

The second one will always have a status of Pending:
Status Pending (Post Blackout)

The way to resolve that is to associate the standby database with the primary database system. I usually rename the primary database system as well to omit the location (LON and RDG):
– Go to the Targets -> Systems and choose the system you want to edit
– Then go to Database System -> Target Setup -> Edit system
– Rename the system name from TESTDB_LON_sys to TESTDB_sys
– Save changes
– Go to Database System again, Target Setup -> Edit system
– Click next to go to Step 2
– Add the standby database to the Standby Database Associations table
– Save changes

At this point we’ve got one system TESTDB_sys with two database members TESTDB_LON and TESTDB_RDG.

The next step is to remove the database system for the standby using emcli:

[oracle@oem12c ~]$ /opt/app/oracle/em12cr4/middleware/oms/bin/emcli login -username=sysman
Enter password :
Login successful

[oracle@oem12c ~]$ /opt/app/oracle/em12cr4/middleware/oms/bin/emcli delete_target -name="TESTDB_RDG_sys" -type="oracle_dbsys"
Target "TESTDB_RDG_sys:oracle_dbsys" deleted successfully
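If you want to double-check the result, the remaining database system targets can be listed as well – a quick sketch using the same emcli installation as above:

[oracle@oem12c ~]$ /opt/app/oracle/em12cr4/middleware/oms/bin/emcli get_targets -targets="oracle_dbsys"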

Now it’s all sorted and hopefully all targets are “green”.

Categories: oracle Tags:

Exadata X5 PDU – CLI already in use

September 18th, 2015 No comments

Exadata X5-2 and X4-8B racks are delivered with the “Enhanced” PDU metering units connected via the Cisco switch. Although the documentation says they should have static addresses, they don’t. You need to configure them manually using a serial console connection; this is described in my earlier post here.

However, if you forget to exit the serial console connection to the PDU and then try to log in using SSH later, you’ll get the following message:

login as: admin
admin@192.168.1.10's password:

CLI already in use!!!
Please try again later .....

Then someone has to go all the way to the data centre and reset the PDU or exit from the serial console.

Categories: oracle Tags:

Start of ‘ora.crf’ failed after update to 12.1.0.2 DBBP7

July 25th, 2015 2 comments

This happened to me a month ago right after I applied DBBP7 on 12.1.0.2. For some reason the ora.crf resource didn’t start automatically:

CRS-5013: Agent "ORAROOTAGENT" failed to start process "/u01/app/12.1.0.2/grid/bin/osysmond" for action "start": details at "(:CLSN00008:)" in "/u01/app/oracle/diag/crs/exa01db01/crs/trace/ohasd_orarootagent_root.trc"
CRS-2674: Start of 'ora.crf' on 'exa01db01' failed

Checking the trace file for more details, you can immediately spot where the problem is:

2015-06-04 10:35:51.156513 :CLSDYNAM:3286230784: [ ora.crf]{0:0:8275} [start] (:CLSN00008:)Utils:execCmd scls_process_spawn() failed 1
2015-06-04 10:35:51.156520 :CLSDYNAM:3286230784: [ ora.crf]{0:0:8275} [start] (:CLSN00008:) category: -1, operation: fail, loc: canexec2, OS error: 0, other: no exe permission, file [/u01/app/12.1.0.2/grid/bin/osysmond]

Indeed, osysmond is owned by the oracle user whereas it should be owned by root:

[root@exa01db01 ~]# ls -al /u01/app/12.1.0.2/grid/bin/osysmond
-rwxr-x--- 1 oracle oinstall 9441 Jun  4 10:42 /u01/app/12.1.0.2/grid/bin/osysmond

The fix for that is simple – you need to unlock and lock the GI:

[root@exa01db01 ~]# /u01/app/12.1.0.2/grid/crs/install/rootcrs.pl -unlock
[root@exa01db01 ~]# /u01/app/12.1.0.2/grid/crs/install/rootcrs.pl -patch

The osysmond has the correct permissions now and the resource ora.crf starts successfully:

[root@exa01db01 ~]# ls -al /u01/app/12.1.0.2/grid/bin/osysmond
-rwxr-x--- 1 root oinstall 9533 Jun  4 10:48 /u01/app/12.1.0.2/grid/bin/osysmond
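To double-check, the resource can be queried and started through the local OHASD stack – a small sketch:

[root@exa01db01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl stat res ora.crf -init
[root@exa01db01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl start res ora.crf -init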

 


Categories: oracle Tags:

Exadata’s onecommand fails to validate NTP servers on storage servers

July 6th, 2015 No comments

This will be a simple and short post on an issue I had recently. I got the following error while running the first step of onecommand – Validate Configuration File:

2015-07-01 12:31:03,712 [INFO  ][    main][     ValidationUtils:761] SUCCESS: NTP servers on machine exa01db02.local.net verified successfully
2015-07-01 12:31:03,713 [INFO  ][    main][     ValidationUtils:761] SUCCESS: NTP servers on machine exa01db01.local.net verified successfully
2015-07-01 12:31:03,714 [INFO  ][    main][     ValidationUtils:778] Following errors were found...
2015-07-01 12:31:03,714 [INFO  ][    main][     ValidationUtils:783] ERROR: Encountered error while running NTP validation error on host: exa01cel03.local.net
2015-07-01 12:31:03,714 [INFO  ][    main][     ValidationUtils:783] ERROR: Encountered error while running NTP validation error on host: exa01cel02.local.net
2015-07-01 12:31:03,714 [INFO  ][    main][     ValidationUtils:783] ERROR: Encountered error while running NTP validation error on host: exa01cel01.local.net

Right, so my NTP servers were accessible from the db nodes but not from the cells. When I queried the NTP server from the cells I got the following error:

# ntpdate -dv ntpserver1
1 Jul 09:00:09 ntpdate[22116]: ntpdate 4.2.6p5@1.2349-o Fri Feb 27 14:50:33 UTC 2015 (1)
Looking for host ntpserver1 and service ntp
host found : ntpserver1.local.net
transmit(172.16.1.100)
transmit(172.16.1.100)
transmit(172.16.1.100)
transmit(172.16.1.100)
transmit(172.16.1.100)
172.16.1.100: Server dropped: no data
server 172.16.1.100, port 123

Perhaps I should have mentioned that the cells have their own firewall (cellwall) which allows only certain inbound/outbound traffic. During boot the script builds all the rules dynamically and applies them. Now, the above error occurred for two reasons:

A) The NTP servers were specified using hostname instead of IP addresses in OEDA
B) The management network was NOT available after the initial config (applyElasticConfig) was applied

Because of that, cellwall was not able to resolve the NTP servers’ IP addresses and thus they were omitted from the firewall configuration. You can safely proceed with the deployment, but if you want to get rid of the annoying message the solution is simply to restart the cell firewall – /etc/init.d/cellwall restart
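If you want to restart the firewall on all cells at once, dcli can be used from a database node – a minimal sketch, assuming the usual cell_group file listing the storage servers exists:

[root@exa01db01 ~]# dcli -g cell_group -l root '/etc/init.d/cellwall restart'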

Categories: oracle, Uncategorized Tags:

MGMTDB not automatically created on Exadata X5 and GI 12.1.0.2

July 1st, 2015 No comments

While deploying an X5 Full Rack recently, it happened that the Grid Infrastructure Management Repository was not created by onecommand. The GIMR database was optional in 12.1.0.1, became mandatory in 12.1.0.2 and should be installed automatically with Oracle Grid Infrastructure 12c release 1 (12.1.0.2). For a reason unknown to me that didn’t happen and I had to create it manually. I checked all the log files but couldn’t find any errors. For reference, the OEDA version used was Feb 2015 v15.050 and the image version on the Exadata was 12.1.2.1.0.141206.1.

To create the database, log in as the grid user and create a file holding the following variables:

cat > /tmp/cfgrsp.properties <<EOF
oracle.assistants.asm|S_ASMPASSWORD=[your ASM password]
oracle.assistants.asm|S_ASMMONITORPASSWORD=[your ASM password]
EOF

and run the following command:

[oracle@exa01 ~]$ GRID_HOME=/u01/app/12.1.0.2/grid
[oracle@exa01 ~]$ $GRID_HOME/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/tmp/cfgrsp.properties
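Once the tool completes, the repository database can be verified with srvctl – a quick sketch:

[oracle@exa01 ~]$ srvctl status mgmtdb
[oracle@exa01 ~]$ srvctl config mgmtdb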

For reference, here is a similar bug I found on MOS:
-MGMTDB Not Created When Using EM12c Provisioning (Doc ID 1983885.1)

Categories: oracle Tags: , ,

dbnodeupdate.sh post upgrade step fails on Exadata storage software 12.1.2.1.1

June 23rd, 2015 No comments

I’ve done several Exadata deployments in the past two months and had to upgrade the Exadata storage software on half of them. The reason was that units shipped before May came with Exadata storage software version 12.1.2.1.0.

The upgrade of the database nodes ran fine, but when I ran dbnodeupdate.sh -c to complete the post-upgrade steps I got an error that the system wasn’t on the expected Exadata release or kernel:

(*) 2015-06-01 14:21:21: Verifying GI and DB's are shutdown
(*) 2015-06-01 14:21:22: Verifying firmware updates/validations. Maximum wait time: 60 minutes.
(*) 2015-06-01 14:21:22: If the node reboots during this firmware update/validation, re-run './dbnodeupdate.sh -c' after the node restarts..
(*) 2015-06-01 14:21:23: Collecting console history for diag purposes

ERROR: System not on expected Exadata release or kernel, exiting


ERROR: Correct error, or to override run: ./dbnodeupdate.sh -c -q -t 12.1.2.1.1.150316.2

Indeed, the database node was running the new Exadata software but still using the old kernel (2.6.39-400.243) and dbnodeupdate was expecting me to run the new 2.6.39-400.248 kernel:

imageinfo:
Kernel version: 2.6.39-400.243.1.el6uek.x86_64 #1 SMP Wed Nov 26 09:15:35 PST 2014 x86_64
Image version: 12.1.2.1.1.150316.2
Image activated: 2015-06-01 12:27:57 +0100
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1

The reason was that the previous run of dbnodeupdate installed the new kernel package but failed to update grub.conf. The solution is to manually add the missing kernel entry to grub.conf and reboot the server to pick up the new kernel (see the sketch after the bug reference below). Here is a note with more information, which at the time I had this problem was still internal:


Bug 20708183 – DOMU:GRUB.CONF KERNEL NOT ALWAYS UPDATED GOING TO 121211, NEW KERNEL NOT BOOTED
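Before editing grub.conf it is worth confirming that the new kernel package really is installed and what grub currently knows about – a minimal sketch (the paths are the OL6 defaults):

[root@exa01db01 ~]# uname -r                                           # kernel currently running
[root@exa01db01 ~]# rpm -qa | grep kernel-uek                          # kernel packages installed
[root@exa01db01 ~]# grep -E '^(default|title)' /boot/grub/grub.conf    # entries grub will boot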

 

 

Categories: oracle Tags: ,

How do I change DNS servers on Exadata storage servers

June 19th, 2015 No comments

This is just a quick post to highlight a problem I had recently on another Exadata deployment.

For most customers the management network on Exadata is routable and the DNS servers are accessible. However, in a recent deployment for a financial organization this wasn’t the case and the storage servers were NOT able to reach the DNS servers. The customer provided a different set of DNS servers within the management network which were still able to resolve all the Exadata hostnames. If you encounter a similar problem, stop all cell services and run ipconf on each storage server to update the DNS servers, roughly as in the sketch below.
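A rough sketch of what that looks like on a cell; ipconf is interactive and will prompt for the DNS servers, and the path shown is the usual one on storage servers:

[root@exa01cel01 ~]# cellcli -e alter cell shutdown services all
[root@exa01cel01 ~]# /opt/oracle.cellos/ipconf
[root@exa01cel01 ~]# cellcli -e alter cell startup services all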

On each storage server there is a service called cellwall (/etc/init.d/cellwall) which runs many checks and applies a lot of iptables rules. Here are a couple of comments from the script to give you an idea:

# general lockdown from everything external, (then selectively permit)
  # general permissiveness (localhost: if you are in, you are in)
  # allow all udp traffic only from rdbms hosts on IPoIB only
      # allow DNS to work on all interfaces
      # open sport=53 only for DNS servers (mitigate remote-offlabel-port exploit)

and many more, but you can check the script to see what it does, or run iptables -L -n to list all the iptables rules.


UPDATE: Thanks to Jason Arneil for pointing out the proper way to update the configuration of the cell.

Categories: oracle Tags:

How to configure Power Distribution Units on Exadata X5

June 18th, 2015 No comments

I’ve done several Exadata deployments recently and I have to say that of all the components the PDUs were the hardest to configure. It’s important to note that, unlike earlier generations of Exadata, the PDUs in X5 are Enhanced PDUs and not Standard ones.

The public documentation (Configuring the Power Distribution Units) says that on PDUs with three power input leads you need to connect the middle power lead to the power source, and that the PDU should then be accessible on 192.168.0.1. Well, I’ve done that many times and it did NOT work. I believe the reason is that DHCP is enabled by default, which can easily be confirmed by checking the LCD screen of the PDU. I even tried setting up a DHCP server myself to make the PDU acquire an IP address, but that didn’t work either.

To configure the PDU you need to connect through the serial management port. Nowadays there are no laptops with serial ports anymore, so you will need a USB to RS-232 DB9 serial adapter; I bought mine from Amazon. You will also need a DB9 to RJ45 cable – these are quite popular and I’m sure you’ve seen the blue Cisco console cable before.

You need to connect the cable to the SER MGT port of the PDU and then establish a terminal connection (you can use PuTTY too) with the following settings:
9600 baud, 8 bit, 1 stop bit, no parity bit, no flow control
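If you’re on Linux rather than PuTTY, screen works fine for this – a small sketch, assuming the USB adapter shows up as /dev/ttyUSB0 (screen defaults to 8N1 at the given speed):

$ screen /dev/ttyUSB0 9600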

The username is admin and the password is adm1n.

Here are the commands you need to configure the PDU. Each network change requires a reboot of the PDU:

Welcome to Oracle PDU

pducli->username: admin
pducli->password: *****
Login OK - Admin rights!
pducli->

set pdu_name=exa01pdu01
set systime_manual_date=2015-06-18
set systime_manual_time=12:45:00
set systime_ntp_server_enable=On
set systime_ntp_server=192.168.1.2
set systime_dst_enable=On
set net_ipv4_dhcp=Off
reset=yes

set net_ipv4_ipaddr=192.168.1.10
set net_ipv4_subnet=255.255.255.0
set net_ipv4_gateway=192.168.1.1
set net_ipv4_dns1=192.168.1.3
set net_ipv4_dns2=192.168.1.4
reset=yes

Regarding the network connectivity – the documentation says you need two additional cables from your management network. However, if you have a half or quarter rack you can plug the PDU network connections into the management (Cisco) switch. Note that if you ever plan to upgrade to a full rack you will have to provide the two additional cables from your management network and disconnect the PDUs from the management switch.

IMPORTANT: Make sure you don’t leave any active CLI sessions open, otherwise you won’t be able to log in remotely and a data centre visit will be required to reboot the PDU.

Update 09.03.2015:

Unfortunately, for PDUs running firmware 2.02, reset=yes must be issued after each set command. Until the “Enhanced” PDU is upgraded to firmware 2.03, ensure that you issue reset=yes each time the reminder is displayed. “Enhanced” PDU firmware 2.03 is available on MOS as patch 20917482.

Categories: oracle Tags:

opatch 12.1.0.1.7 fails with System Configuration Collection Failed

June 1st, 2015 No comments

I was recently upgrading an Exadata from 12.1.0.2 DBBP6 to DBBP7 and, as usual, I went for the latest opatch version, which was 12.1.0.1.7 (Apr 2015) at that time.

After running opatchauto apply or opatchauto apply -analyze I got the following error:

System Configuration Collection failed: oracle.osysmodel.driver.sdk.productdriver.ProductDriverException: java.lang.NullPointerException
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Stream closed
        at oracle.opatchauto.gi.GILogger.writeWithoutTimeStamp(GILogger.java:450)
        at oracle.opatchauto.gi.GILogger.printStackTrace(GILogger.java:465)
        at oracle.opatchauto.gi.OPatchauto.main(OPatchauto.java:97)
Caused by: java.io.IOException: Stream closed
        at java.io.BufferedWriter.ensureOpen(BufferedWriter.java:98)
        at java.io.BufferedWriter.write(BufferedWriter.java:203)
        at java.io.Writer.write(Writer.java:140)
        at oracle.opatchauto.gi.GILogger.writeWithoutTimeStamp(GILogger.java:444)
        ... 2 more

opatchauto failed with error code 1.

This was a known problem caused by these bugs which are fixed in 12.1.0.1.8:
Bug 20892488 : OPATCHAUTO ANALYZE FAILING WITH GIPATCHINGHELPER::CREATESYSTEMINSTANCE FAILED
BUG 20857919 – LNX64-121023GIPSU:APPLY GIPSU FAILED WITH SYSTEM CONFIGURATION COLLECTION FAILED

As 12.1.0.1.8 is not yet available, the workaround is to use a lower version of opatch, 12.1.0.1.6, which can be downloaded from this note:
Opatchauto Gives “System Configuration Collection Failed” Message (Doc ID 2001933.1)
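Swapping OPatch back is just a matter of replacing the OPatch directory in each affected home – a minimal sketch, assuming you have downloaded the 12.1.0.1.6 zip from the note above (the zip file name is illustrative and the same steps apply to the GI home with its owner):

[oracle@exa01db01 ~]$ mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatch.12.1.0.1.7
[oracle@exa01db01 ~]$ unzip -q p6880880_121010_Linux-x86-64.zip -d $ORACLE_HOME
[oracle@exa01db01 ~]$ $ORACLE_HOME/OPatch/opatch version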

You might run into the same problem if you are applying 12.1.0.2.3 PSU.

Categories: oracle Tags: , ,