Archive

Posts Tagged ‘asm’

How to rename ASM disk groups in Exadata

November 25th, 2015

Deploying Exadata requires you to generate the configuration using the Oracle Exadata Deployment Assistant (OEDA). Within it the default ASM disk group names are DBFS_DG, RECOC1 and DATAC1. I usually change the last two to RECO01 and DATA01, as the defaults don't make sense to me and the only place I've seen them used is on Exadata.

I had an incident last year where an Exadata deployment failed halfway through with the default names left in place, so I had to delete the configuration and start from scratch.

To my big surprise I recently got a request where a customer wanted to change RECO01 and DATA01 to RECOC1 and DATAC1! This was a pre-prod system, already deployed and with a few databases running. The Exadata was an X5-2 running ESS 12.1.2.1.2 and GI 12.1.0.2.

If this ever happens to you, here is what you need to do:

  1. Rename grid disks.
  2. Rename ASM disk groups and ASM disk names.
  3. Modify all databases to point to the new disk groups.

Rename grid disks

Since grid disk names contain the disk group name, they need to be changed too. Although this is not mandatory, I strongly recommend it to avoid any confusion in the future.

The grid disks can be renamed very easily using cellcli, but they must NOT be in use by the GI at the time. Thus Grid Infrastructure has to be stopped on all database servers; stop it as root:

[root@exa01db01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl stop cluster -all

Then run the following magic command to get the list of all grid disks and replace the disk group names with the new ones:

[root@exa01db01 ~]# for i in `dcli -g cell_group -l root cellcli -e list griddisk | awk -F":" '{print $2}' | awk '{print $1}'`; do echo "cellcli -e alter griddisk $i name=$i"; done | grep -v DBFS | sed -e "s/RECO01/RECOC1/2" -e "s/DATA01/DATAC1/2"

You’ll get a long list of cellcli commands, twelve for each cell, which you need to run locally on each cell.
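
For illustration, each generated line looks something like the following (the grid disk name here is made up) and is meant to be run locally on the cell that owns the disk:

[root@exa01cel01 ~]# cellcli -e alter griddisk DATA01_CD_00_exa01cel01 name=DATAC1_CD_00_exa01cel01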

Once it’s done start the GI again and make sure all disk groups are mounted successfully:

[root@exa01db01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl start cluster


Rename ASM disk groups and ASM disk names

Next, rename the disk groups. To do so they must be dismounted on ALL cluster nodes before running renamedg. Connect to each ASM instance and dismount the disk groups:

SQL> alter diskgroup datac1 dismount;

Diskgroup altered.

SQL> alter diskgroup recoc1 dismount;

Diskgroup altered.

At this point you can run renamedg to rename the disk groups; here is an example for the DATA01 disk group:

[oracle@exa01db01 ~]$ renamedg -dgname DATA01 -newdgname DATAC1

Parsing parameters..
renamedg operation: -dgname DATA01 -newdgname DATAC1
Executing phase 1
Discovering the group
Checking for hearbeat...
Re-discovering the group
Generating configuration file..
Completed phase 1
Executing phase 2
Completed phase 2

Do the same for RECO01, then make sure both disk groups can be mounted successfully on all database nodes, and dismount them again so you can rename the ASM disks. In general there is a command to rename all disks in a group (ALTER DISKGROUP xx RENAME DISKS ALL), but it renames them to names of the form diskgroupname_####, where #### is the disk number. However, ASM disks follow a different naming convention on Exadata (for example RECO01_CD_01_EXA01CEL01), which is why we rename them manually.

To rename the disks, the disk group has to be mounted in restricted mode (so only one node in the cluster can mount it). Then run the two statements below to generate the rename commands for the new ASM disk names:

SQL> alter diskgroup datac1 mount restricted;

Diskgroup altered.

SQL> select 'alter diskgroup datac1 rename disk ''' || name || ''' to ''' || REPLACE(name,'DATA01','DATAC1') || ''';' from v$asm_disk where name like 'DATA%';

SQL> select 'alter diskgroup recoc1 rename disk ''' || name || ''' to ''' || REPLACE(name,'RECO01','RECOC1') || ''';' from v$asm_disk where name like 'RECO%';

Execute the ALTER statements generated by the two queries above, then mount both disk groups on all database nodes again.

There is no command to add the disk groups back as cluster resources; they are registered automatically the first time they are mounted. However, you need to remove the old disk group resources:

[oracle@exa01db01 ~]$ srvctl remove diskgroup -g DATA01
[oracle@exa01db01 ~]$ srvctl remove diskgroup -g RECO01


Modify all databases to point to the new disk groups

The last step is to change the datafiles, tempfiles and redo log files of all databases to point to the new disk groups. Make sure you disable block change tracking and flashback first, as the database might not open once the location of the BCT file has changed:

SQL> alter database disable block change tracking;
SQL> alter database flashback off;

Next, create a pfile from the spfile and substitute all occurrences of RECO01 and DATA01, or modify the spfile just before you shut the database down; see the sketch after the list below. Assuming you created a pfile, make sure all parameters refer to the new disk group names. Here are the default ones to check:

*.control_files
*.db_create_file_dest
*.db_create_online_log_dest_1
*.db_create_online_log_dest_2
*.db_recovery_file_dest
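
A minimal sketch of the pfile approach, assuming the dbm01 database from the srvctl examples below and a scratch pfile under /tmp (both names are illustrative):

SQL> create pfile='/tmp/initdbm01.ora' from spfile;

[oracle@exa01db01 ~]$ sed -i -e 's/DATA01/DATAC1/g' -e 's/RECO01/RECOC1/g' /tmp/initdbm01.ora

Once the database is back up on the new disk groups you can recreate the spfile from the edited pfile.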

Start the database in mount state and generate all the alter statements for datafiles/tempfiles and redologs:

[oracle@exa01db01 ~]$ sqlplus -s / as sysdba
set heading off
set echo off
set pagesize 140
set linesize 140
spool /tmp/rename.sql

select 'alter database rename file ''' || name || ''' to ''' || REPLACE(name,'DATA01','DATAC1') || ''';' from v$datafile;
select 'alter database rename file ''' || name || ''' to ''' || REPLACE(name,'DATA01','DATAC1') || ''';' from v$tempfile;
select 'alter database rename file ''' || member || ''' to ''' || REPLACE(member,'DATA01','DATAC1')|| ''';' from v$logfile where member like '%DATA%';
select 'alter database rename file ''' || member || ''' to ''' || REPLACE(member,'RECO01','RECOC1')|| ''';' from v$logfile where member like '%RECO%';
exit

Start another SQL*Plus session and run the spooled file from the step above (/tmp/rename.sql). At this point you can open the database (alter database open;). Once the database is open, re-enable block change tracking and flashback:

SQL> alter database enable block change tracking;
SQL> alter database flashback on;

Finally change the database dependencies and spfile location:

For 12c databases:

[oracle@exa01db01 dbs]$ srvctl modify database -d dbm01 -nodiskgroup
[oracle@exa01db01 dbs]$ srvctl modify database -d dbm01 -diskgroup "DATAC1,RECOC1"
[oracle@exa01db01 dbs]$ srvctl modify database -d dbm01 -spfile +DATAC1/DBM01/spfiledbm01.ora

For 11g databases:

[oracle@exa01db01 dbs]$ srvctl modify database -d dbm01 -z
[oracle@exa01db01 dbs]$ srvctl modify database -d dbm01 -x "DATAC1,RECOC1"
[oracle@exa01db01 dbs]$ srvctl modify database -d dbm01 -p +DATAC1/DBM01/spfiledbm01.ora
Categories: oracle

Introducing Oracle ASM Filter Driver

October 27th, 2015

The Oracle ASMFD (ASM Filter Driver) was introduced in Oracle Database 12.1.0.2 and at the moment it is available on Linux systems only.

Oracle ASM Filter Driver is a kernel module very much like the ASMLIB that resides in the I/O path of the Oracle ASM disks. It provides an interface between the Oracle binaries and the underlying operating environment.

Here are some of the features of ASMFD:

  • Reject non-Oracle I/O

The ASM Filter Driver will reject write I/O operations issued by non-Oracle commands. This prevents non-Oracle applications from writing to ASM disks and protects ASM from accidental corruption.

  • Device name persistence

Similarly to ASMLIB, you don’t have to configure device name persistence using UDEV.

  • Faster node recovery

According to the documentation, ASMFD allows Oracle Clusterware to perform node-level fencing without a reboot. So if CSS is not running or a node gets fenced, the Oracle stack is restarted instead of the node being rebooted. This greatly reduces recovery time, as some enterprise servers take up to 10 minutes to boot.

  • Reduce OS resource usage

ASMFD exposes a portal device that can be used for all I/O on a particular host, thus decreasing the number of open file descriptors. Without it, each ASM process needs an open descriptor for each ASM disk. I’m not sure how much this will save you, but it might be useful if you have hundreds of ASM disks.

  • Thin Provisioning & Data Integrity

This is another new and cool feature, very popular in the virtualization world. When enabled, disk space that is no longer in use can be returned to the storage array, which is known as thin provisioning. The attribute can be set only if the ASM compatibility is greater than or equal to 12.1.0.0, and it requires ASMFD!
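
For reference, the attribute is set per disk group; a minimal sketch, assuming a disk group named DATA that already meets the compatibility and ASMFD requirements:

SQL> alter diskgroup data set attribute 'thin_provisioned' = 'TRUE';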

In a way ASMFD is a replacement for ASMLIB, as it includes the base ASMLIB features. However, ASMFD takes it one step further by protecting the ASM disks from non-Oracle write I/O operations to prevent accidental damage. Unlike ASMLIB, ASMFD is installed with the Oracle Grid Infrastructure installation.


Brief history of ASM and the need of ASM Filter Driver

To understand ASMFD better we need to understand where the need for it comes from. It’s important to say that this is very specific to Linux, as other platforms have other methods to fulfill the same requirements. Because that’s not the main purpose of this post, and it’s a bit long, I decided to keep it at the end of the post.

In Linux, as on any other platform, there is user separation, which implies access restrictions. We usually install Oracle Database under the oracle user, and to do so we need write access to the directories we plan to use. By default that would be /home/oracle/, which as you can imagine is not very handy; you might also want to install the database in a separate partition or file system. For this reason the root user creates the required directories, usually /u01 or /opt, and changes their ownership to oracle.

That works if you want to store your database files in a file system. However, traditional file systems were not designed for database files: they need a file system check on a regular basis and they sometimes get corrupted. For that reason, and for performance, many people moved to raw devices in the past. Another case is RAC – you either need a cluster file system or raw devices.

Historically, with 9i and 10g we used to create raw devices, which are a one-to-one mapping between a device file and a logical name. For example, you would create a partition on each device (/dev/sda1, /dev/sdb1) and then map those to /dev/raw/raw1, /dev/raw/raw2 and so on. Additionally, because in Linux the device files are rebuilt each time the system boots, you need to make sure the permissions and ownership are preserved and persist after a reboot. This was achieved with additional rules in the last boot scripts (often rc.local). On other platforms, HP-UX for example, one had to buy an additional license (HP Serviceguard Extension for RAC) to get the ability to have shared LVM volume groups across two or more servers.

However, the support and maintenance of raw devices was really difficult, and Oracle came up with the idea of creating its own volume manager to simplify database administration and eliminate the need to manage thousands of database files – Automatic Storage Management, or ASM for short. A simple description is that ASM is a very sophisticated volume manager for Oracle data. ASM can also be used if you deploy RAC, so you don’t need cluster file systems or raw devices anymore. Additionally it provides redundancy, so if you have JBOD storage you can use ASM to mirror the data. Another important feature is that you don’t need persistent device naming anymore: upon startup ASM reads all the disk drives specified by asm_diskstring and uses the ones on which an ASM header is found. Although ASM was released in 10.1, people were still using raw devices at the time because ASM was too new and unknown to many DBAs.

So ASM logically groups all the disks (LUNs presented from the storage) into what are called ASM disk groups, and because it uses Oracle Managed Files you no longer really care where your files are or what their names are. ASM is just another abstraction layer in the database file storage. ASM is available on all platforms, so in a way it standardizes the administration of database files. Often the DBAs also administer ASM, but it could be the storage team managing it. You still had to make sure the device files had the correct permissions before ASM could use them; otherwise no disk group would be available and hence the database could not start.

At the same time, back in 2004, Oracle released another product, ASMLib, whose only purpose was to persist the device naming and preserve the device file permissions. I don’t want to go into details about ASMLib here, but there is an old and very good post on ASMLib from Wim Coekaerts (HERE). Just to mention that ASMLib is also available under RHEL; more can be found HERE.

In recent years many people, myself included, have used UDEV to persist the permissions and ownership of the device files used by ASM. I really like to have a one-to-one match between device files and ASM disk names for better understanding and easier troubleshooting in the future.
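
For context, here is a rough sketch of the kind of UDEV rule I mean. The device name, symlink and file name are made up, and in practice you would match on something stable such as the disk WWID (via scsi_id) rather than the kernel device name:

# /etc/udev/rules.d/99-oracle-asm.rules (hypothetical example)
KERNEL=="sdb1", SUBSYSTEM=="block", OWNER="oracle", GROUP="dba", MODE="0660", SYMLINK+="oracle/asm-data01"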

ASM Filter Driver takes this one step further by introducing the features above. I can see people starting to use ASMFD to take advantage of thin provisioning, or to make sure no one overwrites the ASM device files by mistake: yes, this happens, and it happened to me recently.

Categories: oracle

ASM diskgroup unable to mount because of duplicate disk

March 6th, 2012

This is something I have wanted to share for a long time, but for one reason or another I never did. The error itself is nothing special, but I ran into a very interesting case.

This happened when I had to shut down a server for maintenance and then boot it again. Once the OS started I noticed that the database hadn’t started automatically. When I tried to start it manually I got the following error:

SQL> startup
 ORA-01078: failure in processing system parameters
 ORA-01565: error in identifying file '+DATA/orcl/spfileorcl.ora'
 ORA-17503: ksfdopn:2 Failed to open file +DATA/orcl/spfileorcl.ora
 ORA-15077: could not locate ASM instance serving a required diskgroup

The reason for this was that the ASM disk group wasn’t mounted. Again I tried to mount it manually, and got this error:

SQL> alter diskgroup data mount;
 alter diskgroup data mount
 *
 ERROR at line 1:
 ORA-15032: not all alterations performed
 ORA-15024: discovered duplicately numbered ASM disk 29

So how could this happen? I hadn’t changed any parameters or configuration, and the database had been running for just three months. A quick script with kfed:

for i in `ls -1 /dev/oracle/*`; do echo $i; kfed read $i | grep kfdhdb.dskname; done

showed me that there really were two disks with the same ASM disk name. ASM does not support disks with duplicate disk names, even if it is the same disk accessed through a different path. The duplicate was an old disk which wasn’t removed at the time of an earlier change; only its filename was changed so that future ASM disk discovery would exclude it.

Looking at asm_diskstring I found that it had changed to ‘/dev/oracle/*’ when it should have been ‘/dev/oracle/orcl*’, which is why ASM was scanning all the disks. This made me wonder what had happened while the database was running and why.

After looking at the logs and making a few tests I came to a conclusion. At some point the DBA added new disks to ASM through dbca and changed the discovery string for an unknown reason. What happens in the background is that dbca implicitly recreates the ASM parameter file with the new discovery string entered by the DBA. This is why ASM started with the wrong discovery string when I rebooted the server and discovered more disks.

The database was started on 26 March; a month later, on 29 April, new disks were added to ASM and its parameter file was recreated; almost two months later, on 7 July, I put it back with the correct values:

-rw-r----- 1 oracle oinstall 1336 Jul 7 18:42 init.ora
-rw-r----- 1 oracle oinstall 1327 Apr 29 11:32 init.ora.662011221038

Regards,
Sve

Categories: oracle

Unable to create ASM disk under OEL 6.x

February 22nd, 2012

It has been a busy month and I’m behind with my blog posts. First of all, Happy New Year to all of you, and I wish you all the best in the new year!

This is a short note on installing Oracle Database with ASM under Oracle Enterprise Linux 6.x, which is not certified YET!

If you are running ASM with ASMLib you may get the following error when you try to create an ASM disk:

[root@db-app2 ~]# oracleasm createdisk DISK01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: failed
Clearing disk header: done


Writing the header to the disk fails for no obvious reason. Well, it turned out that Security-Enhanced Linux (SELinux) was blocking access to the disk. Because SELinux configuration is no longer part of the initial setup, I had simply forgotten about it.

To disable SELinux, edit the file /etc/selinux/config and set SELINUX=disabled, then reboot the system.
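
If you prefer the command line, a quick sketch of the same edit (assuming the default config file location):

[root@db-app2 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@db-app2 ~]# grep '^SELINUX=' /etc/selinux/config
SELINUX=disabled

After the reboot, run getenforce to confirm that SELinux is disabled: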

[root@db-app2 ~]# /usr/sbin/getenforce
Disabled


The ASM disk is created successfully this time:

[root@db-app2 ~]# oracleasm createdisk DISK01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done


Regards,
Sve

Categories: linux, oracle

Troubleshooting ASM 11.2 disk discovery

December 14th, 2011

I was doing an installation at a customer site when they asked if there was anything specific about running GI 11.2 on HP-UX, as this was their first encounter with 11g. Of course I replied that there is nothing specific: just make sure the ownership of the raw disks is correct and the ASM discovery string is right. They said this had all been done as written in the documentation, yet the disks could not be discovered. This made me curious, so I asked them to log me into the system so I could have a look.

The system was running the latest HP-UX 11.31 and we were going to install Oracle GI 11.2.0.2; the LUN was presented from HP EVA storage.

I couldn’t believe what they were saying and wanted them to show me exactly what they were doing. Unfortunately they were right: after a software-only installation of GI 11.2.0.2 we tried to create an ASM instance with asmca, but no disks were discovered, although everything looked correct.

While looking around I remembered that the disk owner patch on HP-UX is mandatory and should be installed, as the installation guide states explicitly. I asked the customer and he said that all the required patches were installed, but when I checked, this one wasn’t. The patch number per the installation guide is PHCO_41479, and the latest version is PHCO_41903. Running kfed against a disk on a system without the patch shows the following:

KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

I installed the patch, double-checked everything and thought this could be the reason we were not seeing the disk, so I tried to discover it again, but without success. The disk still couldn’t be seen by ASM, so I had to go deeper and see what asmca was actually doing. For that purpose I had to trace its system calls, and on HP-UX the utility capable of doing this is tusc. There is a MOS note describing how to trace system calls and which utilities to use on different UNIX distributions [ID 110888.1].

I ran asmca, attached tusc to its process, and then changed the discovery string to point exactly to the disk I wanted to use (in my case /dev/rdisk/disk3). This is the part of the trace that made sense to me:

access("/dev/rdisk/disk3", W_OK|R_OK) ........................................................................... = 0
.......
open("/dev/rdisk/disk3", O_RDONLY|O_NDELAY|0x800, 0) ............................................................ = 7
lseek(7, 8192, SEEK_SET) ........................................................................................ = 8192
read(7, "L V M R E C 0 1 \r/ % aeN e2\va0".., 1024) ............................................................. = 1024
lseek(7, 73728, SEEK_SET) ....................................................................................... = 73728
read(7, "L V M R E C 0 1 \r/ % aeN e2\va0".., 1024) ............................................................. = 1024
close(7) ........................................................................................................ = 0

The disk is first successfully tested for read and write access and is opened read-only in non-blocking mode. Then the first 1024 bytes are read from offset 8192 of /dev/rdisk/disk3. This looked like an LVM header, aha! So it seems the disk was once used as an LVM physical volume. Although the disk is not part of any volume group, it has an LVM header, and that’s why asmca is not showing this disk as CANDIDATE. It turned out that the storage admins did not recreate the virtual disk on the storage; the LUN had once been used for LVM on another server.

After zeroing the beginning of the disk with dd, the header looks better and the disk can be seen as CANDIDATE:

oracle@vm:/$ dd if=/dev/zero of=/dev/rdisk/disk3 bs=1024k count=10

Now tusc output shows that header is filled with zeros:

read(7, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0".., 1024) ........................................................ = 1024

Just for troubleshooting purposes I tried to read the disk header with kfed; before and after, it showed the same error:

KFED-00322: file not found; arguments: [kfbtTraverseBlock] [Invalid OSM block type] [] [0]

If you are not sure whether the disk contains valuable information, you could import the physical volume and activate the volume group to check. In my case I was sure the disk should be wiped, and a simple dd did the job.

Regards,
Sve

Categories: hp-ux, oracle, storage

Cannot drop the first disk group in ASM 11.2

October 11th, 2010

I’ve been using ASM for a long time in different configurations: with a pfile, with an spfile, and in some configurations on a shared raw device. Playing with ASM 11gR2 (Grid Infrastructure), it turned out that I could neither drop nor dismount the first disk group.

The following error can be seen in asmca when trying to dismount the first disk group:

Dismounting DiskGroup DATA failed with the following message:
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "DATA" precludes its dismount


If you go to sqlplus and try to drop the disk group, there is an error that the disk group is actually still active:

SQL> drop diskgroup data including contents;
drop diskgroup data including contents
*
ERROR at line 1:
ORA-15039: diskgroup not dropped
ORA-15027: active use of diskgroup "DATA" precludes its dismount


So, let’s try to dismount it:

SQL> alter diskgroup all dismount;
alter diskgroup all dismount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "DATA" precludes its dismount


Let’s see why the disk group is still active. A query of v$asm_file shows all files located in this disk group:

SQL> select group_number, file_number, bytes, type from v$asm_file;
GROUP_NUMBER FILE_NUMBER      BYTES TYPE
------------ ----------- ---------- --------------------
1         253       1536 ASMPARAMETERFILE


It seems that the ASM instance server parameter file is located in the first created disk group, and that’s why I cannot dismount it.

Quick check confirms that:

SQL> show parameter spfile
NAME      TYPE         VALUE
--------- ----------- ------------------------------
spfile    string       +DATA/asm/asmparameterfile/registry.253.731706127


srvctl command also can be used to confirm that:

[oracle@oradb /]$ srvctl config asm
...
Spfile: +DATA/asm/asmparameterfile/registry.253.731706127
...


As an experiment I installed just the Grid Infrastructure software and then used asmca to create the ASM instance. There is a button ASM Parameters and then Show Advanced Parameters, where it is clearly shown that the parameter file is located in $ORACLE_HOME/dbs/spfile+ASM.ora.

It seems that after the installation the file is moved into the disk group. So if you really want to drop the disk group, you should move the spfile from the disk group to the file system (or to another disk group), then shut down and start up the ASM instance, and then drop the disk group. The traditional way would be to create a pfile from the spfile and start the instance with the pfile. There are a couple of new commands available in asmcmd in ASM 11.2; one of them is spmove, which moves the spfile and automatically updates the Grid Plug and Play (GPnP) profile. Another is spget, which gives you the current location of the ASM instance server parameter file.

Start the asmcmd utility and execute spget to get the current spfile, then spmove to move the spfile from the disk group to the file system:

[oracle@oradb ~]$ asmcmd
ASMCMD> spget 
+DATA/asm/asmparameterfile/registry.253.731706127
ASMCMD> spmove '+DATA/asm/asmparameterfile/registry.253.731706127' '/oracle/grid11g/dbs/spfile+ASM.ora';
ORA-15032: not all alterations performed
ORA-15028: ASM file '+DATA/asm/asmparameterfile/registry.253.731706127' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
ASMCMD> spget
/oracle/grid11g/dbs/spfile+ASM.ora;


I got an error that the file is still being accessed, but the file was actually copied to the file system and the profile was updated. Now shut down and start up the ASM instance:

SQL> shutdown immediate
ASM diskgroups dismounted
ASM instance shutdown
SQL> startup
ASM instance started

Total System Global Area  283930624 bytes
Fixed Size		    2212656 bytes
Variable Size		  256552144 bytes
ASM Cache		   25165824 bytes
ASM diskgroups mounted
SQL> show parameter spfile;

NAME  TYPE	 VALUE
----------  ----------- ------------------------------
spfile	    string	 /oracle/grid11g/dbs/spfile+ASM.ora;


spget command also can be used to get the current spfile location from GPnP profile:

[oracle@oradb ~]$ asmcmd
ASMCMD> spget
/oracle/grid11g/dbs/spfile+ASM.ora;


So now we have the same behavior as ASM prior to 11gR2: the server parameter file is in the file system, and finally we can drop the disk group:

SQL> alter diskgroup data mount;

Diskgroup altered.
SQL> drop diskgroup data including contents;
Diskgroup dropped.


This behavior of placing the ASM instance spfile inside an ASM disk group is now recommended by Oracle. It is made possible by the ASM background process ASMB, which reads the spfile from the disk group as the instance starts. What’s more, if you want this behavior or want to use the new spmove/spcopy/spset/spget/spbackup commands, the disk group attribute COMPATIBLE.ASM must be set to 11.2.
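
Setting the attribute is a one-liner; a minimal example, assuming a disk group named DATA:

SQL> alter diskgroup data set attribute 'compatible.asm' = '11.2';

Keep in mind that disk group compatibility can only be advanced, not rolled back.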

At the end I found that there is also MOS note about this problem:
11gR2 (11.2.0.1) ORA-15027: active use of diskgroup precludes its dismount (With no database clients connected) [ID 1082876.1]

Categories: linux, oracle

Shared disk support for VirtualBox

August 9th, 2010

I’m very happy to announce that VirtualBox now supports shared disks. Finally we can attach one disk to several virtual machines and run Oracle RAC and other clusters. As Oracle promised, this feature was released with the next maintenance release (thanks!).

There is a new image write mode called shareable, and the option is now available for the VBoxManage commands createhd and modifyhd. To create a new shared image, use VBoxManage createhd with type shareable (creating a shared disk from the GUI is not possible). To mark an existing image as shared, use VBoxManage modifyhd with type shareable.
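
Here is a hedged sketch of both commands; the image path matches the 2 GB disk shown in the output below, and the exact option spelling may differ slightly between VirtualBox versions:

sve@host:~$ VBoxManage createhd --filename /home/vm/ora11g_shared.vdi --size 2048 --variant Fixed
sve@host:~$ VBoxManage modifyhd /home/vm/ora11g_shared.vdi --type shareable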

Something important: only fixed-size disks are supported. If the disk is dynamically allocated, you will encounter the following error when you try to modify the image:
ERROR: Cannot change type for medium ‘/home/vm/ora11g_shared.vdi’ to ‘Shareable’ since it is a dynamic medium storage unit

There is another minor issue: if the image is already attached to two virtual machines, modifyhd will also fail:
ERROR: Cannot change the type of medium ‘/home/vm/ora11g_shared.vdi’ because it is attached to 2 virtual machines

And finally, YES it works, I have tested it already!

sve@host:~$ VBoxManage showhdinfo /home/vm/ora11g_shared.vdi
Oracle VM VirtualBox Command Line Management Interface Version 3.2.8
(C) 2005-2010 Oracle Corporation
All rights reserved.

UUID:                     7521f059-1196-4d68-a1a6-cf0082fb446a
Accessible:               yes
Description:          
Logical size:             2048 MBytes
Current size on disk:     2048 MBytes
Type:                     shareable
Storage format:           VDI
In use by VMs:            labs1 (UUID: 25475ff4-70bc-4e2e-aa38-d8fae289273e)
                          labs2 (UUID: e4441f4c-1ef9-42e0-8e54-d2aec2c6cf4f)
Location:                 /home/vm/ora11g_shared.vdi

Regards and happy migration 😉
Sve

Categories: oracle, virtualization

Changing physical path of ASM disk group

August 11th, 2009

The purpose of this document is to show that changing the physical path of ASM disk MEMBERS is possible and carries no risk.

For the purpose of the test we create one logical volume called lvora and grant ownership of the device file to oracle:
root@node1:/# lvcreate -n lvora -L 1024 vg00
root@node1:/# chown oracle:dba /dev/vg00/rlvora

Start DBCA and create ASM instance:
– set sys password
– set disk group name to DATA
– set redundancy to External
– set Disk Discovery Path to /dev/vg00/rlv*

At this stage /dev/vg00/rlvora is the only CANDIDATE disk for the disk group, with a size of 1 GB. Select the disk and create the disk group. Now we have one mounted disk group called DATA with external redundancy, using /dev/vg00/rlvora as a MEMBER of the disk group.

To simulate a change (or failure) of the physical disk, or even moving data from one physical disk to another, we use dd to copy the raw data from /dev/vg00/rlvora to /dev/rdsk/c0t2d0 and then delete the logical volume.

We shut down the ASM instance and copy the contents of the logical volume to the raw physical disk using dd:

oracle@node1:/home/oracle$ export ORACLE_HOME=/oracle/ora10g
oracle@node1:/home/oracle$ export ORACLE_SID=+ASM
oracle@node1:/home/oracle$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Dec 13 01:50:38 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Release 10.2.0.1.0 - 64bit Production
With the Real Application Clusters option

SQL> select GROUP_NUMBER, NAME, STATE, TYPE from v$asm_diskgroup;

GROUP_NUMBER NAME                           STATE       TYPE
------------ ------------------------------ ----------- ------
1 DATA                           MOUNTED     EXTERN

SQL> select GROUP_NUMBER, DISK_NUMBER, MODE_STATUS, STATE, NAME, PATH from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MODE_ST STATE    NAME      PATH
------------ ----------- ------- -------- --------- ----------------
1             0          ONLINE  NORMAL   DATA_0000 /dev/vg00/rlvora

SQL> shutdown immediate
ASM diskgroups dismounted
ASM instance shutdown
SQL>  exit

oracle@node1:/home/oracle$ exit

root@node1:/root# chown oracle:dba /dev/rdsk/c0t2d0

root@node1:/root# dd if=/dev/vg00/rlvora of=/dev/rdsk/c0t2d0 bs=1024k
1024+0 records in
1024+0 records out
root@node1:/root#  lvremove /dev/vg00/lvora
The logical volume "/dev/vg00/lvora" is not empty;
do you really want to delete the logical volume (y/n) : y
Logical volume "/dev/vg00/lvora" has been successfully removed.
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf

We have moved data to /dev/rdsk/c0t2d0 and we have removed the logical volume.

Now if you try to mount the disk group or start the instance you will get the following error:

oracle@node1:/home/oracle$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Dec 13 02:05:48 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area  130023424 bytes
Fixed Size                  1991968 bytes
Variable Size             102865632 bytes
ASM Cache                  25165824 bytes
ORA-15032: not all alterations performed
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DATA"

SQL> select GROUP_NUMBER, NAME, STATE, TYPE from v$asm_diskgroup;

no rows selected

SQL> select GROUP_NUMBER, DISK_NUMBER, MODE_STATUS, STATE, NAME, PATH from v$asm_disk;

no rows selected

SQL> show parameter diskstring

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/vg00/rlv*

As you can see, the discovery path is still pointing to /dev/vg00/rlv*. Now we change the disk discovery path by pointing the asm_diskstring parameter to the new location of the disk, and mount the disk group:

SQL> alter system set asm_diskstring='/dev/rdsk/*' scope=both;

System altered.

SQL> select GROUP_NUMBER, DISK_NUMBER, MODE_STATUS, STATE, NAME, PATH from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MODE_ST STATE    NAME      PATH
------------ ----------- ------- -------- --------- ----------------
0            0           ONLINE  NORMAL             /dev/rdsk/c0t2d0

SQL> alter diskgroup data mount;

Diskgroup altered.

SQL> select GROUP_NUMBER, DISK_NUMBER, MODE_STATUS, STATE, NAME, PATH from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MODE_ST STATE    NAME      PATH
------------ ----------- ------- -------- --------- ----------------
1            0           ONLINE  NORMAL   DATA_0000 /dev/rdsk/c0t2d0

SQL> select GROUP_NUMBER, NAME, STATE, TYPE from v$asm_diskgroup;

GROUP_NUMBER NAME                           STATE       TYPE
------------ ------------------------------ ----------- ------
1 DATA                           MOUNTED     EXTERN

SQL> show parameter diskstring;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/rdsk/*

Final test to show that the changes are applied:

SQL> shutdown immediate
ASM diskgroups dismounted
ASM instance shutdown
SQL> startup
ASM instance started

Total System Global Area  130023424 bytes
Fixed Size                  1991968 bytes
Variable Size             102865632 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted
SQL> exit
Disconnected from Oracle Database 10g Release 10.2.0.1.0 - 64bit Production
With the Real Application Clusters option
oracle@node1:/home/oracle$

Conclusion
ASM does not keep track of the physical paths of disk group members. Put another way, neither the path nor the major/minor numbers of the physical disks matter, because the metadata is kept on the disk itself and nothing is stored in a dictionary. When the ASM instance starts, it scans the disks matching the asm_diskstring parameter and reads the header information of the discovered disks.
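
To see this for yourself you can read the header of a member disk directly with kfed; a small sketch, assuming the disk used in this test:

oracle@node1:/home/oracle$ kfed read /dev/rdsk/c0t2d0 | grep -E 'kfdhdb.dskname|kfdhdb.grpname'

The disk and disk group names printed there come straight from the on-disk header, not from any instance-level configuration.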

Categories: hp-ux, oracle

Migration of HP-UX raw devices to Oracle ASM

August 10th, 2009

This is an article I wrote about migrating Oracle datafiles from LVM raw devices to Oracle ASM.

Categories: hp-ux, oracle