
Dead Connection Detection in Oracle Database 12c

April 7th, 2016

In an earlier post I discussed what Dead Connection Detection is and why you should use it: Oracle TNS-12535 and Dead Connection Detection

The pre-12c implementation of DCD used TNS packets to “ping” the client and relied on the underlying TCP stack, which could sometimes take a long time to detect a dead connection. In 12c this has changed: DCD probes are now implemented by the TCP stack itself, using the TCP KEEPALIVE socket option to check whether the connection is still usable.

To use the new implementation, set SQLNET.EXPIRE_TIME in sqlnet.ora to the interval between probes, in minutes. If the operating system supports TCP keep-alive tuning, Oracle Net automatically uses the new method. The new mechanism is supported on all platforms except Solaris.

The following parameters are associated with the TCP keep-alive probes:
TCP_KEEPIDLE  – the idle time, with no activity on the connection, before a probe is sent. This value is taken from SQLNET.EXPIRE_TIME.
TCP_KEEPCNT   – the number of keep-alive probes to send before the connection is considered dead; always set to 10.
TCP_KEEPINTVL – the delay between successive probes when no acknowledgment is received; always set to 6.
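Putting this together, a minimal sqlnet.ora fragment enabling the new DCD mechanism might look like this (the ten-minute interval is an example; pick a value suited to your environment):

```
# sqlnet.ora -- send a keep-alive probe after 10 minutes of inactivity
SQLNET.EXPIRE_TIME = 10
```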

If you need to revert to the pre-12c DCD mechanism (a 10-byte TNS packet), add the following parameter to sqlnet.ora:
USE_NS_PROBES_FOR_DCD=true

 


Issues with Oracle Direct NFS

January 28th, 2016

This is a quick post to highlight two issues I had with Oracle dNFS. Both relate to wrong entries in the oranfstab file.

One might encounter ORA-00600 during database creation:

DBCA_PROGRESS : 7%
DBCA_PROGRESS : 8%
ORA-01501: CREATE DATABASE failed
ORA-00600: internal error code, arguments: [KSSRMP1], [], [], [], [], [], [], [], [], [], [], []
ORA-17502: ksfdcre:3 Failed to create file /oracle/ORCL/database/ORCL/controlfile/o1_mf_%u_.ctl

This was caused by a wrong entry in oranfstab: there was a difference between fstab and oranfstab for the same record:

server: zfs01
path: 192.168.10.100
export: /export/OTHERDB/database mount: /oracle/ORCL/database

The second issue was that the database wasn’t using dNFS. A simple query of v$dnfs_servers returned no rows, and there were no errors in the alert log. However, looking through the trace files, one can easily spot the following repetitive error:

KGNFS_NFSPROC3_MNT FAIL 13
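As a quick check, dNFS activity can be confirmed from the database itself; if dNFS is working, this view lists the NFS servers in use, and no rows means the database has fallen back to kernel NFS:

```
-- Lists NFS servers serving the database via dNFS; empty when dNFS is inactive
SELECT svrname, dirname FROM v$dnfs_servers;
```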

This was caused by trying to mount a share we didn’t have access to, or a non-existent share on the NFS server:

server: zfs01
path: 192.168.10.100
export: /export/NON_EXIST/database mount: /oracle/ORCL/database

The issue was fixed after correcting the typos in the oranfstab file and restarting the database.

The bottom line: make sure that fstab and oranfstab match and contain correct entries.
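A sanity check of this kind is easy to script. The sketch below, using made-up sample files rather than a real system, extracts the export/mount pairs from oranfstab and looks each one up in fstab; with this sample data it reports a mismatch for the OTHERDB export, which is exactly the kind of typo that caused the issue above:

```shell
# Sketch: cross-check that every NFS mount in oranfstab also appears in fstab.
# The files below are illustrative sample data, not a real configuration.
cat > /tmp/fstab.sample <<'EOF'
zfs01:/export/ORCL/database /oracle/ORCL/database nfs rw,hard 0 0
EOF
cat > /tmp/oranfstab.sample <<'EOF'
server: zfs01
path: 192.168.10.100
export: /export/OTHERDB/database mount: /oracle/ORCL/database
EOF
# Extract "export mount" pairs from oranfstab and look them up in fstab.
awk '/^export:/ {print $2, $4}' /tmp/oranfstab.sample | while read exp mnt; do
  if grep -q "${exp} ${mnt} " /tmp/fstab.sample; then
    echo "OK: ${exp} -> ${mnt}"
  else
    echo "MISMATCH: ${exp} -> ${mnt} not found in fstab"
  fi
done
```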


Introducing Oracle ASM Filter Driver

October 27th, 2015

The Oracle ASMFD (ASM Filter Driver) was introduced in Oracle Database 12.1.0.2 and at the moment it is available on Linux systems only.

Oracle ASM Filter Driver is a kernel module, much like ASMLIB, that resides in the I/O path of the Oracle ASM disks. It provides an interface between the Oracle binaries and the underlying operating environment.

Here are some of the features of ASMFD:

  • Reject non-Oracle I/O

The ASM filter driver rejects write I/O operations issued by non-Oracle commands. This prevents non-Oracle applications from writing to ASM disks and protects ASM from accidental corruption.

  • Device name persistence

As with ASMLIB, you don’t have to configure device name persistence using UDEV.

  • Faster node recovery

According to the documentation, ASMFD allows Oracle Clusterware to perform node-level fencing without a reboot. So if CSS is not running or a node is fenced, the Oracle stack is restarted instead of the node being rebooted. This greatly reduces recovery time, as some enterprise servers can take up to 10 minutes to boot.

  • Reduce OS resource usage

ASMFD exposes a portal device that can be used for all I/O on a particular host, thus decreasing the number of open file descriptors. Without it, each ASM process needs an open descriptor for each ASM disk. I’m not sure how much this will save you, but it might be useful if you have hundreds of ASM disks.

  • Thin Provisioning & Data Integrity

This is another new and cool feature, one that is very popular in the virtualization world. When enabled, disk space that is no longer in use can be returned to the storage array, a technique known as thin provisioning. This attribute can be set only if the disk group’s ASM compatibility is 12.1.0.0 or higher, and it requires ASMFD!
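Assuming a disk group that meets the compatibility requirement, the attribute can be enabled with a statement along these lines (the disk group name is an example):

```
-- Requires COMPATIBLE.ASM >= 12.1.0.0 and ASMFD-managed disks
ALTER DISKGROUP data SET ATTRIBUTE 'thin_provisioned' = 'TRUE';
```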

In a way, ASMFD is a replacement for ASMLIB, as it includes the base ASMLIB features. However, ASMFD takes it one step further by protecting the ASM disks from non-Oracle write I/O operations to prevent accidental damage. Unlike ASMLIB, ASMFD is installed as part of the Oracle Grid Infrastructure installation.

 

Brief history of ASM and the need for the ASM Filter Driver

To understand ASMFD better we need to understand where the need comes from. It’s important to say that this is very specific to Linux, as other platforms have other methods of fulfilling the same requirements. Because this isn’t the main purpose of the post, and it’s rather long, I decided to keep it at the end.

In Linux, as on any other platform, there is user separation, which implies access restrictions. We usually install Oracle Database as the oracle user, and to do so we need write access to the directories we plan to use. By default that would be /home/oracle/, which, as you can imagine, is not very handy; you might also want to install the database in a separate partition or file system. For this reason the root user creates the required directories, usually /u01 or /opt, and changes their ownership to oracle.

That works if you want to store your database files in a file system. However, traditional file systems were not designed for database files: they need a file system check on a regular basis and sometimes they get corrupted. For those reasons, and for performance, many people moved to RAW devices in the past. Another case is RAC: you would need either a cluster file system or RAW devices.

Historically, with 9i and 10g, we used to create RAW devices, which are a one-to-one mapping between a device file and a logical name. For example, you would create a partition on each device, /dev/sda1, /dev/sdb1, and then map those to /dev/raw/raw1, /dev/raw/raw2 and so on. Additionally, because in Linux the device files are rebuilt each time the system reboots, you needed to make sure the permissions and ownership were preserved across reboots. This was achieved by adding rules to one of the late boot scripts (often rc.local). On other platforms, HP-UX for example, one had to buy an additional license (HP Serviceguard Extension for RAC) to get the ability to share LVM volume groups across two or more servers.
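As an illustration, the raw device bindings described above were typically set up in a boot script along these lines (the device names and ownership are examples only):

```
# /etc/rc.local (illustrative pre-ASM raw device setup)
raw /dev/raw/raw1 /dev/sda1
raw /dev/raw/raw2 /dev/sdb1
chown oracle:dba /dev/raw/raw1 /dev/raw/raw2
chmod 660 /dev/raw/raw1 /dev/raw/raw2
```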

However, the support and maintenance of raw devices was really difficult, and Oracle came up with the idea of creating its own volume manager to simplify database administration and eliminate the need to manage thousands of database files: Automatic Storage Management, or ASM for short. Simply described, ASM is a very sophisticated volume manager for Oracle data. ASM can also be used if you deploy RAC, so you no longer need cluster file systems or RAW devices. Additionally it provides redundancy, so if you have JBOD you can use ASM to mirror the data. Another important feature is that you don’t need persistent device naming anymore: upon startup, ASM reads all the disk drives matched by asm_diskstring and uses the ones on which an ASM header is found. Although ASM was released in 10.1, many people were still using raw devices at the time because ASM was too new and unknown to many DBAs.
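For instance, the discovery path can be set with a pattern like the following (the path is an example; use whatever matches your device naming):

```
-- Tell the ASM instance where to discover candidate disks (illustrative pattern)
ALTER SYSTEM SET asm_diskstring = '/dev/oracleasm/*' SCOPE=BOTH;
```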

ASM logically groups all the disks (LUNs presented from the storage) into what are called ASM disk groups, and because it uses Oracle Managed Files you no longer really care where your files are or what their names are. ASM is just another abstraction layer in the database file storage. ASM is available on all platforms, so in a way it standardizes the administration of database files. Often the DBAs also administer ASM, but it could be the storage team managing it. You still had to make sure the device files had the correct permissions before ASM could use them; otherwise no disk group would be available and hence the database could not start.

At the same time, back in 2004, Oracle released another product, ASMLib, whose only purpose was to persist the device naming and preserve the device file permissions. I don’t want to go into detail about ASMLib here, but there is an old and very good post on it from Wim Coekaerts (HERE). ASMLib is also available for RHEL; more can be found HERE.

In recent years many people, myself included, have used UDEV to persist the permissions and ownership of the device files used by ASM. I really like to have a one-to-one match between device files and ASM disk names, for better understanding and to ease any future troubleshooting.
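A UDEV rule of the kind described might look like the sketch below (the device match, serial number, symlink name, and ownership are all hypothetical; real rules usually match on a unique identifier such as the device WWID):

```
# /etc/udev/rules.d/99-oracle-asm.rules (illustrative)
KERNEL=="sdb1", SUBSYSTEM=="block", ENV{ID_SERIAL}=="36000c29f0a1b2c3d4", \
    SYMLINK+="oracleasm/data01", OWNER="oracle", GROUP="dba", MODE="0660"
```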

ASM Filter Driver takes this one step further by introducing the features above. I can see people starting to use ASMFD to take advantage of thin provisioning, or to make sure no one overwrites the ASM device files by mistake; yes, this happens, and it happened to me recently.
