Archive

Posts Tagged ‘exadata’

How to rename ASM disk groups in Exadata

November 25th, 2015

Deployment of Exadata requires you to generate a configuration using the Oracle Exadata Deployment Assistant (OEDA). Within it the default ASM disk group names are DBFS_DG, RECOC1 and DATAC1. I usually change those to RECO01 and DATA01, since the default names don't make sense to me and the only place I ever see them is on Exadata.

I had an incident last year where an Exadata deployment failed halfway through and the names were left at the defaults, so I had to delete the configuration and start from scratch.

To my big surprise, I recently got a request where the customer wanted to change RECO01 and DATA01 to RECOC1 and DATAC1! This was a pre-prod system, already deployed and with a few databases running. The Exadata was an X5-2 running ESS 12.1.2.1.2 and GI 12.1.0.2.

If this ever happens to you, here is what you need to do:

  1. Rename grid disks.
  2. Rename ASM disk groups and ASM disk names.
  3. Modify all databases to point to the new disk groups.

Rename grid disks

Since the grid disk names contain the disk group name, they need to be changed too. Although this is not mandatory, I strongly recommend it to avoid any confusion in the future.

The grid disks can be renamed very easily using cellcli, but they must NOT be in use by GI at that time. Thus Grid Infrastructure has to be stopped on all servers; stop it as root:

[root@exa01db01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl stop cluster -all

Then run the following magic command to get the list of all grid disks and replace the disk group names with the new ones:

[root@exa01db01 ~]# for i in `dcli -g cell_group -l root cellcli -e list griddisk | awk -F":" '{print $2}' | awk '{print $1}'`; do echo "cellcli -e alter griddisk $i name=$i"; done | grep -v DBFS | sed -e "s/RECO01/RECOC1/2" -e "s/DATA01/DATAC1/2"

You'll get a long list of cellcli commands – 12 for each cell – which you need to run locally on each cell.
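For illustration, the generated commands look roughly like this. The grid disk names below are made up, but yours will follow the same diskgroup_CD_nn_cellname pattern and must be run on the cell that owns them:

# run locally on the owning cell (hypothetical grid disk names)
cellcli -e alter griddisk DATA01_CD_00_exa01cel01 name=DATAC1_CD_00_exa01cel01
cellcli -e alter griddisk RECO01_CD_00_exa01cel01 name=RECOC1_CD_00_exa01cel01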

Once it's done, start GI again and make sure all disk groups are mounted successfully:

[root@exa01db01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl start cluster

 

Rename ASM disk groups and ASM disk names

Next is to rename the disk groups. To do so, they must be dismounted on ALL cluster nodes before running renamedg on a disk group. Connect to each ASM instance and dismount the disk groups:

SQL> alter diskgroup data01 dismount;

Diskgroup altered.

SQL> alter diskgroup reco01 dismount;

Diskgroup altered.

At this point you can run renamedg to rename the disk groups. Here is an example renaming DATA01 to DATAC1:

[oracle@exa01db01 ~]$ renamedg -dgname DATA01 -newdgname DATAC1

Parsing parameters..
renamedg operation: -dgname DATA01 -newdgname DATAC1
Executing phase 1
Discovering the group
Checking for hearbeat...
Re-discovering the group
Generating configuration file..
Completed phase 1
Executing phase 2
Completed phase 2

Do the same for RECO01 and after that make sure both disk groups can be mounted successfully on all database nodes, then dismount them again so you can rename the ASM disks. In general there is a command to rename all the disks (ALTER DISKGROUP XX RENAME DISKS ALL) but it renames the disks to names of the form diskgroupname_####, where #### is the disk number. ASM disks are named differently on Exadata (e.g. RECO01_CD_01_EXA01CEL01) and that's why we need to rename them manually.

To rename the disks, the disk group has to be mounted in restricted mode (so only one node in the cluster can mount it). Then run the two statements below to generate the new ASM disk names:

SQL> alter diskgroup datac1 mount restricted;

Diskgroup altered.

SQL> select 'alter diskgroup datac1 rename disk ''' || name || ''' to ''' || REPLACE(name,'DATA01','DATAC1') || ''';' from v$asm_disk where name like 'DATA%';

SQL> select 'alter diskgroup recoc1 rename disk ''' || name || ''' to ''' || REPLACE(name,'RECO01','RECOC1') || ''';' from v$asm_disk where name like 'RECO%';

Execute the ALTER statements generated by the two queries above and mount both disk groups on all database nodes again.

There is no command to add the disk groups back as Clusterware resources; they will be added automatically the first time they are mounted. However, you need to remove the old disk group resources:

[oracle@exa01db01 ~]$ srvctl remove diskgroup -g DATA01
[oracle@exa01db01 ~]$ srvctl remove diskgroup -g RECO01
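To confirm the new disk group resources got registered after that first mount, a quick sanity check (using the new disk group names):

[oracle@exa01db01 ~]$ srvctl status diskgroup -g DATAC1
[oracle@exa01db01 ~]$ srvctl status diskgroup -g RECOC1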

 

Modify all databases to point to the new disk groups

The last step is to change the datafiles/tempfiles/redo logs of all databases to point to the new disk groups. Make sure you disable block change tracking and flashback first, as the database might not open because the location of the BCT file has changed:

SQL> alter database disable block change tracking;
SQL> alter database flashback off;

Next, create a pfile from the spfile and substitute all occurrences of RECO01 and DATA01, or modify the spfile just before you shut the database down. Let's assume you have created a pfile; make sure all parameters refer to the new disk group names. Here are the default ones you need to check (a quick sed sketch follows the list):

*.control_files
*.db_create_file_dest
*.db_create_online_log_dest_1
*.db_create_online_log_dest_2
*.db_recovery_file_dest
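If you went the pfile route, the substitution can also be scripted. A minimal sketch, assuming the pfile was created as /tmp/initdbm01.ora (path and file name are just examples):

# replace the old disk group names in the pfile (example path)
sed -i -e 's/DATA01/DATAC1/g' -e 's/RECO01/RECOC1/g' /tmp/initdbm01.ora
# review the parameters listed above before starting the database with it
grep -E 'control_files|db_create|db_recovery' /tmp/initdbm01.ora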

Start the database in mount state using the modified pfile and generate all the ALTER statements for the datafiles, tempfiles and redo logs:

[oracle@exa01db01 ~]$ sqlplus -s / as sysdba
set heading off
set echo off
set pagesize 140
set linesize 140
spool /tmp/rename.sql

select 'alter database rename file ''' || name || ''' to ''' || REPLACE(name,'DATA01','DATAC1') || ''';' from v$datafile;
select 'alter database rename file ''' || name || ''' to ''' || REPLACE(name,'DATA01','DATAC1') || ''';' from v$tempfile;
select 'alter database rename file ''' || member || ''' to ''' || REPLACE(member,'DATA01','DATAC1')|| ''';' from v$logfile where member like '%DATA%';
select 'alter database rename file ''' || member || ''' to ''' || REPLACE(member,'RECO01','RECOC1')|| ''';' from v$logfile where member like '%RECO%';
exit

Start another sqlplus session and run the spool file from the above operation (rename.sql). At this point you can open the database (alter database open;). Once the database is open, make sure you re-enable block change tracking and flashback:

SQL> alter database enable block change tracking;
SQL> alter database flashback on;

Finally change the database dependencies and spfile location:

For 12c databases:

[oracle@exa01db01 dbs]$ srvctl modify database -d dbm01 -nodiskgroup
[oracle@exa01db01 dbs]$ srvctl modify database -d dbm01 -diskgroup "DATAC1,RECOC1"
[oracle@exa01db01 dbs]$ srvctl modify database -d dbm01 -spfile +DATAC1/DBM01/spfiledbm01.ora

For 11g databases:

[oracle@exa01db01 dbs]$ srvctl modify database -d dbm01 -z
[oracle@exa01db01 dbs]$ srvctl modify database -d dbm01 -a "DATAC1,RECOC1"
[oracle@exa01db01 dbs]$ srvctl modify database -d dbm01 -p +DATAC1/DBM01/spfiledbm01.ora
Categories: oracle Tags:

How to move OEM12c management agent to new location

October 29th, 2015

While working on another Exadata project recently, I found that the OEM12c agents on the compute nodes were installed in different locations on each of the three Exadatas. On one of them they were under /home/oracle/agent, on another under /opt/oracle/agent, and on the third under /oracle/agent. Obviously this was not the standard, and the agents had to be moved under /u01/app/oracle/agent. The only problem was that the three Exadatas were already discovered, along with some database targets. Fortunately this wasn't production yet, but it would still require all the agents to be reinstalled and all targets rediscovered.

Fortunately there is an easier way to move the OEM management agents to a new location without all the hassle of reinstalling the agents and rediscovering the targets. In the following example the agent was installed in /home/oracle/agent/ and I had to move it to /u01/app/oracle/agent/.

First you need to download the ConvertToStandalone.pl utility from MOS note 2021782.1 and upload it to the server under /home/oracle.

You need to create a list of plugins, otherwise the move process will fail:

[oracle@exa01db01 ~]$ /home/oracle/agent/core/12.1.0.5.0/perl/bin/perl /home/oracle/agent/core/12.1.0.5.0/sysman/install/create_plugin_list.pl -instancehome /home/oracle/agent/core/12.1.0.5.0

This will create a file /home/oracle/agent/plugins.txt which is used by the perl script later.

Export the following variables:

export OLD_AGENT_HOME=/home/oracle/agent/core/12.1.0.5.0
export ORACLE_HOME=/u01/app/oracle/agent/core/12.1.0.5.0

Another thing you need to do is modify SBIN_MODIFIED_VERSION from 12.1.0.4.0 to 12.1.0.5.0 in /home/oracle/agent/agentimage.properties, otherwise the process will fail.
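You can edit the file by hand or script it. A small sketch, assuming the property is stored as a simple key=value pair in agentimage.properties:

# bump the sbin version the tooling expects (assumes key=value format)
sed -i 's/^SBIN_MODIFIED_VERSION=12.1.0.4.0/SBIN_MODIFIED_VERSION=12.1.0.5.0/' /home/oracle/agent/agentimage.properties
grep SBIN_MODIFIED_VERSION /home/oracle/agent/agentimage.properties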

Then run the perl script which will migrate the agent home to the new location:

[oracle@exa01db01 ~]$ /home/oracle/agent/core/12.1.0.5.0/perl/bin/perl /home/oracle/ConvertToStandalone.pl -instanceHome /home/oracle/agent/agent_inst -newAgentBaseDir /u01/app/oracle/agent

Note that the script accepts two arguments: -instanceHome is the agent instance home directory (e.g. /home/oracle/agent/agent_inst/) and -newAgentBaseDir is the new base directory for the agent (/u01/app/oracle/agent/).

After the command completes you need to run root.sh as root:

[root@exa01db01 ~]# /u01/app/oracle/agent/core/12.1.0.5.0/root.sh
Finished product-specific root actions.
/etc exists

Deinstall the old agent:

[oracle@exa01db01 ~]$ /home/oracle/agent/core/12.1.0.5.0/perl/bin/perl /home/oracle/agent/core/12.1.0.5.0/sysman/install/AgentDeinstall.pl -agentHome /home/oracle/agent/core/12.1.0.5.0

Finally remove the old agent directory where a log file from the deinstall process is left:

[oracle@exa01db01 ~]$ rm -rf /home/oracle/agent

The beauty of this process is that the script creates a blackout AGT_CNT_BLK_OUT at node level and then stops the agent. It then migrates the agent to the new home, starts the agent and finally removes the blackout. The whole process takes less than five minutes.

Categories: oracle Tags:

Exadata X5 PDU – CLI already in use

September 18th, 2015

Exadata X5-2 and X4-8B racks are delivered with the “Enhanced” PDU metering units connected via the Cisco switch. Although the documentation says they should have static addresses, they don't. You need to configure them manually using a serial console connection; this is described in my earlier post here.

However, if you forget to exit the serial console connection to the PDU and later try to log in using SSH, you'll get the following message:

login as: admin
admin@192.168.1.10's password:

CLI already in use!!!
Please try again later .....

Then someone has to go all the way to the data centre and reset the PDU or exit from the serial console.

Categories: oracle Tags:

Exadata’s onecommand fails to validate NTP servers on storage servers

July 6th, 2015

This will be a simple and short post on an issue I had recently. I got the following error while running the first step of onecommand – Validate Configuration File:

2015-07-01 12:31:03,712 [INFO  ][    main][     ValidationUtils:761] SUCCESS: NTP servers on machine exa01db02.local.net verified successfully
2015-07-01 12:31:03,713 [INFO  ][    main][     ValidationUtils:761] SUCCESS: NTP servers on machine exa01db01.local.net verified successfully
2015-07-01 12:31:03,714 [INFO  ][    main][     ValidationUtils:778] Following errors were found...
2015-07-01 12:31:03,714 [INFO  ][    main][     ValidationUtils:783] ERROR: Encountered error while running NTP validation error on host: exa01cel03.local.net
2015-07-01 12:31:03,714 [INFO  ][    main][     ValidationUtils:783] ERROR: Encountered error while running NTP validation error on host: exa01cel02.local.net
2015-07-01 12:31:03,714 [INFO  ][    main][     ValidationUtils:783] ERROR: Encountered error while running NTP validation error on host: exa01cel01.local.net

Right, so my NTP servers were accessible from the db nodes but not from the cells. When I queried the NTP server from the cells I got the following error:

# ntpdate -dv ntpserver1
1 Jul 09:00:09 ntpdate[22116]: ntpdate 4.2.6p5@1.2349-o Fri Feb 27 14:50:33 UTC 2015 (1)
Looking for host ntpserver1 and service ntp
host found : ntpserver1.local.net
transmit(172.16.1.100)
transmit(172.16.1.100)
transmit(172.16.1.100)
transmit(172.16.1.100)
transmit(172.16.1.100)
172.16.1.100: Server dropped: no data
server 172.16.1.100, port 123

Perhaps I should have mentioned that the cells have their own firewall (cellwall) which only allows certain inbound/outbound traffic. During boot the script builds all the rules dynamically and applies them. Now, the above error occurred for two reasons:

A) The NTP servers were specified using hostname instead of IP addresses in OEDA
B) The management network was NOT available after the initial config (applyElasticConfig) was applied

Because of that, cellwall was not able to resolve the NTP servers' IP addresses and thus they were omitted from the firewall configuration. You can safely proceed with the deployment, but if you want to get rid of the annoying message the solution is simply to restart the cell firewall – /etc/init.d/cellwall restart
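Rather than logging in to each cell, you could restart cellwall everywhere in one go with dcli. A small sketch, assuming the same cell_group file used with dcli earlier:

# restart the cell firewall on all storage servers at once
dcli -g cell_group -l root '/etc/init.d/cellwall restart'
# optionally confirm that port 123 (NTP) rules are now present
dcli -g cell_group -l root 'iptables -L -n | grep 123'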

Categories: oracle, Uncategorized Tags:

MGMTDB not automatically created on Exadata X5 and GI 12.1.0.2

July 1st, 2015

While deploying an X5 Full Rack recently, it happened that the Grid Infrastructure Management Repository was not created by onecommand. The GIMR database was optional in 12.1.0.1, became mandatory in 12.1.0.2, and should be installed automatically with Oracle Grid Infrastructure 12c release 1 (12.1.0.2). For a reason unknown to me that didn't happen and I had to create it manually. I checked all the log files but couldn't find any errors. For reference, the OEDA version used was Feb 2015 v15.050 and the image version on the Exadata was 12.1.2.1.0.141206.1.

To create the database, log in as the grid user and create a file holding the following properties:

cat > /tmp/cfgrsp.properties
oracle.assistants.asm|S_ASMPASSWORD=[your ASM password]
oracle.assistants.asm|S_ASMMONITORPASSWORD=[your ASM password]

and run the following command:

GRID_HOME=/u01/app/12.1.0.2/grid
[oracle@exa01 ~]$ $GRID_HOME/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/tmp/cfgrsp.properties
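Once configToolAllCommands completes, you can do a quick sanity check that the repository database is registered and running (using the same GRID_HOME as above):

# check that the -MGMTDB instance is now registered and running
[oracle@exa01 ~]$ $GRID_HOME/bin/srvctl status mgmtdb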

For reference, here is a similar bug I found on MOS:
-MGMTDB Not Created When Using EM12c Provisioning (Doc ID 1983885.1)

Categories: oracle Tags:

dbnodeupdate.sh post upgrade step fails on Exadata storage software 12.1.2.1.1

June 23rd, 2015

I’ve done several Exadata deployments in the past two months and had to upgrade the Exadata storage software on half of them. The reason was that units shipped before May came with Exadata storage software version 12.1.2.1.0.

The upgrade process on the database nodes ran fine, but when I ran dbnodeupdate.sh -c to complete the post-upgrade steps I got an error that the system wasn't on the expected Exadata release or kernel:

(*) 2015-06-01 14:21:21: Verifying GI and DB's are shutdown
(*) 2015-06-01 14:21:22: Verifying firmware updates/validations. Maximum wait time: 60 minutes.
(*) 2015-06-01 14:21:22: If the node reboots during this firmware update/validation, re-run './dbnodeupdate.sh -c' after the node restarts..
(*) 2015-06-01 14:21:23: Collecting console history for diag purposes

ERROR: System not on expected Exadata release or kernel, exiting


ERROR: Correct error, or to override run: ./dbnodeupdate.sh -c -q -t 12.1.2.1.1.150316.2

Indeed, the database node was running the new Exadata software but still using the old kernel (2.6.39-400.243) and dbnodeupdate was expecting me to run the new 2.6.39-400.248 kernel:

imageinfo:
Kernel version: 2.6.39-400.243.1.el6uek.x86_64 #1 SMP Wed Nov 26 09:15:35 PST 2014 x86_64
Image version: 12.1.2.1.1.150316.2
Image activated: 2015-06-01 12:27:57 +0100
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1

The reason was that the previous run of dbnodeupdate had installed the new kernel package but failed to update grub.conf. The solution is to manually add the missing kernel entry to grub.conf and reboot the server to pick up the new kernel. Here is a note with more information, which at the time I hit this problem was still internal:


Bug 20708183 – DOMU:GRUB.CONF KERNEL NOT ALWAYS UPDATED GOING TO 121211, NEW KERNEL NOT BOOTED
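If you want to confirm you are in this situation before editing anything, comparing the installed kernels with what grub knows about is a reasonable first step. A small sketch, assuming the usual Oracle Linux 6 grub.conf location:

# kernels installed by the update
rpm -q kernel-uek
# kernels grub will actually offer at boot
grep -E '^title|vmlinuz' /boot/grub/grub.conf
# if the new 2.6.39-400.248 kernel is missing above, copy the existing
# menu entry, adjust the vmlinuz/initrd versions, then reboot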

 

 

Categories: oracle Tags:

How do I change DNS servers on Exadata storage servers

June 19th, 2015

This is just a quick post to highlight a problem I had recently on another Exadata deployment.

For most customers the management network on Exadata is routable and the DNS servers are accessible. However, in a recent deployment for a financial organization this wasn't the case and the storage servers were NOT able to reach the DNS servers. The customer provided a different set of DNS servers within the management network which were still able to resolve all the Exadata hostnames. If you encounter a similar problem, stop all cell services and run ipconf on each storage server to update the DNS servers.
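A rough outline of that procedure on one cell might look like this. This is a sketch only, assuming the standard /opt/oracle.cellos/ipconf location (see also the update at the end of this post):

# run as root on the storage server
cellcli -e alter cell shutdown services all    # stop all cell services first
/opt/oracle.cellos/ipconf                      # interactive; update the DNS server entries when prompted
cellcli -e alter cell startup services all     # bring the cell services back up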

On each storage server there is a service called cellwall (/etc/init.d/cellwall) which runs many checks and applies a lot of iptables rules. Here are a couple of comments from the script to give you an idea:

# general lockdown from everything external, (then selectively permit)
  # general permissiveness (localhost: if you are in, you are in)
  # allow all udp traffic only from rdbms hosts on IPoIB only
      # allow DNS to work on all interfaces
      # open sport=53 only for DNS servers (mitigate remote-offlabel-port exploit)

and many more, but you can check the script to see what it does, or run iptables -L -n to list all the iptables rules.

Here is some more information on how to change IP addresses on Exadata:

UPDATE: Thanks to Jason Arneil for pointing out the proper way to update the configuration of the cell.

Categories: oracle Tags:

How to configure Power Distribution Units on Exadata X5

June 18th, 2015

I’ve done several Exadata deployments recently and I have to say that of all the components, the PDUs were the hardest to configure. It's important to note that unlike earlier generations of Exadata, the PDUs in X5 are Enhanced PDUs and not Standard ones.

The public documentation (Configuring the Power Distribution Units) says that on PDUs with three power input leads you need to connect the middle power lead to the power source, after which the PDU should be accessible on 192.168.0.1. Well, I've done that many times and it did NOT work. I believe the reason is that DHCP is enabled by default, which can easily be confirmed by checking the LCD screen of the PDU. I even tried setting up a DHCP server myself to make the PDU acquire an IP, but that didn't work either.

To configure the PDU you need to connect through the serial management port. Nowadays laptops no longer have serial ports, so you will need a USB to RS-232 DB9 serial adapter; I bought mine from Amazon. You will also need a DB9 to RJ45 cable – these are quite popular and I'm sure you've seen the blue Cisco console cable before.

You need to connect the cable to the SET MGT port of the PDU and then establish a terminal connection (you can use PuTTY too) with the following settings:
9600 baud, 8 bit, 1 stop bit, no parity bit, no flow control
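On Linux the connection could be opened along these lines – a sketch, assuming the USB adapter shows up as /dev/ttyUSB0:

# open the serial console at 9600 baud (screen defaults to 8 data bits, no parity, 1 stop bit)
screen /dev/ttyUSB0 9600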

The username is admin and the password is adm1n.

Here are the commands you need to configure the PDU. Each network change requires a reboot of the PDU:

Welcome to Oracle PDU

pducli->username: admin
pducli->password: *****
Login OK - Admin rights!
pducli->

set pdu_name=exa01pdu01
set systime_manual_date=2015-06-18
set systime_manual_time=12:45:00
set systime_ntp_server_enable=On
set systime_ntp_server=192.168.1.2
set systime_dst_enable=On
set net_ipv4_dhcp=Off
reset=yes

set net_ipv4_ipaddr=192.168.1.10
set net_ipv4_subnet=255.255.255.0
set net_ipv4_gateway=192.168.1.1
set net_ipv4_dns1=192.168.1.3
set net_ipv4_dns2=192.168.1.4
reset=yes

Regarding the network connectivity – the documentation says you need two additional cables from your management network. However, if you have a half or quarter rack you can plug the PDU network connections into the management (Cisco) switch. Note that if you ever plan to upgrade to a full rack you will have to provide the two additional cables from your management network and disconnect the PDUs from the management switch.

IMPORTANT: Make sure you don’t leave any active CLI sessions, otherwise you won’t be able to login remotely and will require data centre visit to reboot the PDU.

Update 09.03.2015:

Unfortunately, for PDUs that run firmware 2.02 it is required that reset=yes be issued after each set command. Until the “Enhanced” PDU is upgraded to firmware 2.03, ensure that you issue reset=yes each time the reminder is displayed. “Enhanced” PDU firmware 2.03 is available on MOS as patch 20917482.

Categories: oracle Tags:

applyElasticConfig.sh fails with Unable to locate any IB switches

May 15th, 2015

With the release of Exadata X5, Oracle introduced elastic configurations and changed how the initial configuration is performed. Previously you had to run applyconfig.sh, which would go across the nodes and change all the settings according to your configuration. This script has now evolved into applyElasticConfig.sh, which is part of OEDA (onecommand). During one of my recent deployments I ran into the problem below:

[root@node8 linux-x64]# ./applyElasticConfig.sh -cf Customer-exa01.xml

Applying Elastic Config...
Applying Elastic configuration...
Searching Subnet 172.16.2.x..........
5 live IPs in 172.16.2.x.............
Exadata node found 172.16.2.46.
Collecting diagnostics...
Errors occurred. Send /opt/oracle.SupportTools/onecommand/linux-x64/WorkDir/Diag-150512_160716.zip to Oracle to receive assistance.
Exception in thread "main" java.lang.NullPointerException
at oracle.onecommand.commandexec.utils.CommonUtils.getStackFromException(CommonUtils.java:1579)
at oracle.onecommand.deploy.cliXml.ApplyElasticConfig.doDaApply(ApplyElasticConfig.java:105)
at oracle.onecommand.deploy.cliXml.ApplyElasticConfig.main(ApplyElasticConfig.java:48)

Going through the logs we can see the following message:

2015-05-12 16:07:16,404 [FINE ][ main][ OcmdException:139] OcmdException from node node8.my.company.com return code = 2 output string: Unable to locate any IB switches... stack trace = java.lang.Throwable

The problem was caused by the IB switch names in my OEDA XML file being different from the ones physically in the rack – the IB switch hostnames were missing from the switches' hosts file. So if you ever run into this problem, make sure the IB switch hosts file (/etc/hosts) has the correct hostname in the proper format:

#IP                 FQDN                      ALIAS
192.168.1.100       exa01ib01.local.net       exa01ib01

Also make sure to reboot the IB switch after any change to the hosts file.
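A quick way to check what the switches actually report before re-running applyElasticConfig.sh – the hostnames below are examples matching the format above:

# from a database node, check the hostname and hosts file on the switch
ssh root@exa01ib01 hostname
ssh root@exa01ib01 grep exa01ib01 /etc/hosts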

Categories: oracle Tags:

How to configure Link Aggregation Control Protocol on Exadata

May 13th, 2015

During a recent X5 installation I had to configure Link Aggregation Control Protocol (LACP) on the client network of the compute nodes. Although the ports were running at 10Gbit and the default active/passive configuration works perfectly fine, the customer wanted an even distribution of traffic and workload across their core switches.

Link Aggregation Control Protocol (LACP), also known as 802.3ad, is a method of combining multiple physical network connections into one logical connection to increase throughput and provide redundancy should one of the links fail. The protocol requires both the server and the switch(es) to have matching settings for LACP to work properly.

To configure LACP on Exadata you need to change the bondeth0 parameters.

On each of the compute nodes open the following file:

/etc/sysconfig/network-scripts/ifcfg-bondeth0

and replace the line saying BONDING_OPTS with this one:

BONDING_OPTS="mode=802.3ad xmit_hash_policy=layer3+4 miimon=100 downdelay=200 updelay=5000 num_grat_arp=100"

and then restart the network interface:

ifdown bondeth0
ifup bondeth0
Determining if ip address 192.168.1.10 is already in use for device bondeth0...

You can check the status of the interface by querying the proc filesystem. Make sure both interfaces are up and running at the same speed. The essential part showing that LACP is working is below:

cat /proc/net/bonding/bondeth0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 33
Partner Key: 34627
Partner Mac Address: 00:23:04:ee:be:c8

I had a problem where the client network did NOT come up after a server reboot. This was happening because during system boot the 10Gbit interfaces go through multiple resets, causing very fast link changes. Here is the status of the bond at that time:
cat /proc/net/bonding/bondeth0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
bond bondeth0 has no active aggregator

The solution was to decrease downdelay to 200 (as in the BONDING_OPTS line above). The issue is described in this note:

Bonding Mode 802.3ad Using 10Gbps Network – Slave NICs Fail to Come Up Consistently after Reboot (Doc ID 1621754.1)
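To verify after a reboot that both slave NICs actually joined the active aggregator, something like this is a quick check (field names as they appear in the proc file):

# both slaves should be up and report the same Aggregator ID as the active aggregator
egrep 'Slave Interface|MII Status|Aggregator ID' /proc/net/bonding/bondeth0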

 

Categories: linux, oracle Tags: