Archive

Posts Tagged ‘exadata’

Oracle Database 12.2 released for Exadata on-premises

February 11th, 2017

We live in exciting times: Oracle Database 12.2 for Exadata was released earlier today.

The 12.2 database has already been available on the Exadata Express Cloud Service and Database as a Service for a few months now.

Today, it has been released for Exadata on-premises, five days earlier than the initial Oracle announcement of 15th Feb.

The documentation suggests that to run the 12.2 database you need at least Exadata storage software 12.1.2.2.1, but you are better off going with the recommended version, 12.2.1.1.0, which was released just a few days ago. Here are a few notes on running the 12.2 database on Exadata:

Recommended Exadata storage software: 12.2.1.1.0 or higher
Supported Exadata storage software: 12.1.2.2.1 or higher

Full Exadata offload functionality for Database 12.2, including Smart Scan offloaded filtering and storage indexes, as well as IORM support for Database 12.2 container databases and pluggable databases, requires Exadata 12.2.1.1.0 or higher.

Current Oracle Database and Grid Infrastructure version must be 11.2.0.3, 11.2.0.4, 12.1.0.1 or 12.1.0.2.  Upgrades from 11.2.0.1 or 11.2.0.2 directly to 12.2.0.1 are not supported.
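Before planning the upgrade it is worth confirming what you are currently running. A minimal check, assuming the standard dcli group files (cell_group, dbs_group) created at deployment time:

# Exadata storage software version on all storage cells and compute nodes
dcli -g cell_group -l root "imageinfo -ver"
dcli -g dbs_group -l root "imageinfo -ver"

# patch level of the current Grid Infrastructure / database home
$ORACLE_HOME/OPatch/opatch lspatches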

There is a completely new note on how to upgrade to 12.2 GI and RDBMS on Exadata:

12.2 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.3 and later on Oracle Linux (Doc ID 2111010.1)

The 12.2 GI and RDBMS binaries are available from MOS as well as from eDelivery:

Patch 25528839: Grid Software clone version 12.2.0.1.0
Patch 25528830: Database Software clone version 12.2.0.1.0

The recommended Exadata storage software for running 12.2 RDBMS:

Exadata 12.2.1.1.0 release and patch (21052028) (Doc ID 2207148.1)

For more details about the 12.2.1.1.0 Exadata storage software refer to the slide deck and, of course, to the 12.2 documentation, as we all need to start getting familiar with it.

Also, last week Oracle released the February version of OEDA to support the new Exadata SL6 hardware. It does not support the 12.2 database yet, and I guess we'll see another release in February to support the 12.2 GI and RDBMS.

Happy upgrading! 🙂


Upgrading to Exadata 12.1.2.2.0 or later – mind the 32bit packages

February 2nd, 2017

This might not be relevant anymore; shame on me for keeping it as a draft for a few months. However, there are still people running older versions of the Exadata storage software, so it might still help someone out there.

With the release of Exadata storage software 12.1.2.2.0, Oracle announced that some 32bit (i686) packages will be removed from the OS as part of the upgrade.

This happened to me in the summer of last year (blog post) and I thought back then that someone had messed up the dependencies. After seeing it again for another customer a month later, I suspected it might be something else. So, after checking the release notes for all the recent patches, I found this for the 12.1.2.2.0 release:

Note that several i686 packages may be removed from the database node when being updated. Run dbnodeupdate.sh with -N flag at prereq check time to see exactly what rpms will be removed from your system.

Now, this will all be OK if you haven't installed any additional packages. If, however, like many other customers you have packages such as LDAP or Kerberos installed, then your dbnodeupdate pre-check will fail with "'Minimum' dependency check failed." and broken dependencies, since all i686 packages are flagged for removal during the dbnodeupdate pre-check.

The way around that is to run dbnodeupdate with the -N flag and check the logs to see which packages will be removed and what will be impacted, then manually remove any packages you installed yourself. After the Exadata storage software update you'd need to install the relevant versions of those packages again.
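As a rough sketch of that pre-check (the repository zip location is just an example path, and -N is the flag mentioned in the release note above):

# prereq check only (-v), with -N to list the rpms that would be removed
./dbnodeupdate.sh -u -l /u01/stage/exadata_dbnode_repo.zip -v -N

# review the dbnodeupdate log for the i686 rpms to be removed, then
# remove the ones you installed yourself, for example:
rpm -qa | grep i686
yum remove openldap-clients krb5-workstation    # example package names you may have added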

Having said that, I need to mention the note below on what software you are allowed to install on Exadata:

Is it acceptable / supported to install additional or 3rd party software on Exadata machines and how to check for conflicts? (Doc ID 1541428.1)



onecommand fails to change storage cell name

January 20th, 2017

It’s been a busy month – five Exadata deployments in the past three weeks and a new personal best: 2x Exadata X6-2 Eighth Racks with CoD and a storage upgrade deployed in only 6hrs!

An issue I encountered with the first deployment was that onecommand wouldn’t change the storage cell names. The default cell names (not hostnames!) are based on where the cells are mounted within the rack and are assigned by the elastic configuration script. The first cell name is ru02 (rack unit 02), the second is ru04, the third is ru06 and so on.

Now, if you are familiar with the cell and grid disks you would know that their names are based on the cell name. In other words, I got my cell, grid and ASM disks with the wrong names. Exachk would report the following failures for every grid disk:

Grid Disk name DATA01_CD_00_ru02 does not have cell name (exa01cel01) suffix
Naming convention not used. Cannot proceed further with
automating checks and repair for bug 12433293

Apart from exachk complaining, I wouldn’t feel comfortable with names like these on my Exadata.

Fortunately cell, grid and ASM disk names can be changed and here is how to do it:

Stop the cluster and CRS on each compute node:

/u01/app/12.1.0.2/grid/bin/crsctl stop cluster -all
/u01/app/12.1.0.2/grid/bin/crsctl stop crs

Log in to each storage server and rename the cell, the cell disks and the grid disks; use the following to build the alter commands:

You don’t need the cell services shut down, but the grid disks shouldn’t be in use, i.e. make sure to stop the cluster first!

cellcli -e alter cell name=exa01cel01
for i in `cellcli -e list celldisk | awk '{print $1}'`; do echo "cellcli -e alter celldisk $i name=$i"; done | sed -e "s/ru02/exa01cel01/2"
for i in `cellcli -e list griddisk | awk '{print $1}'`; do echo "cellcli -e alter griddisk $i name=$i"; done | sed -e "s/ru02/exa01cel01/2"
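The trailing /2 in the sed expressions is deliberate: each generated command contains the old name twice (once as the disk being altered and once as the new name), and replacing only the second occurrence rewrites just the target name. Adjust ru02/exa01cel01 for each cell accordingly.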

If you get the following error restart the cell services and try again:

GridDisk DATA01_CD_00_ru02 alter failed for reason: CELL-02548: Grid disk is in use.

Start the cluster on each compute node:

/u01/app/12.1.0.2/grid/bin/crsctl start crs


We’ve got all the cell and grid disks fixed; now we need to rename the ASM disks. To rename an ASM disk you need to mount the diskgroup in restricted mode, i.e. mounted on one node only and with no one using it. If the diskgroup is not in restricted mode you’ll get:

ORA-31020: The operation is not allowed, Reason: disk group is NOT mounted in RESTRICTED state.


Stop the cluster on the second compute node, then stop the default dbm01 database and the MGMTDB database:

srvctl stop database -d dbm01
srvctl stop mgmtdb

Mount the diskgroups in restricted mode:

If you are running 12.1.2.3.0+ with a high redundancy DATA diskgroup, it is very likely that the voting disks are in the DATA diskgroup. Because of that, you wouldn’t be able to dismount the diskgroup normally. The only way I found around that was to force stop ASM and start it manually in restricted mode:

srvctl stop asm -n exa01db01 -f

sqlplus / as sysasm

startup mount restrict

alter diskgroup all dismount;
alter diskgroup data01 mount restricted;
alter diskgroup reco01 mount restricted;
alter diskgroup dbfs_dg mount restricted;


Rename the ASM disks; use the following to build the alter commands:

select 'alter diskgroup ' || g.name || ' rename disk ''' || d.name || ''' to ''' || REPLACE(d.name,'RU02','exa01cel01')  || ''';' from v$asm_disk d, v$asm_diskgroup g where d.group_number=g.group_number and d.name like '%RU02%';

select 'alter diskgroup ' || g.name || ' rename disk ''' || d.name || ''' to ''' || REPLACE(d.name,'RU04','exa01cel02') || ''';' from v$asm_disk d, v$asm_diskgroup g where d.group_number=g.group_number and d.name like '%RU04%';

select 'alter diskgroup ' || g.name || ' rename disk ''' || d.name || ''' to ''' || REPLACE(d.name,'RU06','exa01cel03') || ''';' from v$asm_disk d, v$asm_diskgroup g where d.group_number=g.group_number and d.name like '%RU06%';
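Once the generated statements have been run, a quick sanity check (a plain v$asm_disk query, nothing Exadata specific) to confirm no disk still carries a rack-unit name:

-- any rows returned here still need renaming
select g.name diskgroup, d.name disk
from v$asm_disk d, v$asm_diskgroup g
where d.group_number = g.group_number
and (d.name like '%RU02%' or d.name like '%RU04%' or d.name like '%RU06%');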


Finally stop and start CRS on both nodes.


It was only when I thought everything was OK that I discovered one more reference to those pesky names: the failgroup names, which again are based on the storage cell name. The following will make it clearer:

select group_number,failgroup,mode_status,count(*) from v$asm_disk where group_number > 0 group by group_number,failgroup,mode_status;

GROUP_NUMBER FAILGROUP                      MODE_ST   COUNT(*)
------------ ------------------------------ ------- ----------
           1 RU02                           ONLINE          12
           1 RU04                           ONLINE          12
           1 RU06                           ONLINE          12
           1 EXA01DB01                      ONLINE           1
           1 EXA01DB02                      ONLINE           1
           2 RU02                           ONLINE          10
           2 RU04                           ONLINE          10
           2 RU06                           ONLINE          10
           3 RU02                           ONLINE          12
           3 RU04                           ONLINE          12
           3 RU06                           ONLINE          12

For each diskgroup we’ve got three failgroups (one per storage cell). The other two failgroups, EXA01DB01 and EXA01DB02, are the quorum disks.

Unfortunately, you cannot rename failgroups in ASM. My immediate thought was to drop each failgroup and add it back, hoping that would resolve the problem. However, since this was a quarter rack I couldn’t do it; here’s an excerpt from the documentation:

If a disk group is configured as high redundancy, then you can do this procedure on a Half Rack or greater. You will not be able to do this procedure on a Quarter Rack or smaller with high redundancy disk groups because ASM will not allow you to drop a failure group such that only one copy of the data remains (you’ll get an ORA-15067 error).

The last option was to recreate the diskgroups. I’ve done this many times before, when the compatible.rdbms parameter was set too high and I had to install some earlier version of 11.2. However, since Oracle decided to move the voting disks to DATA this became a bit harder. I couldn’t drop DBFS_DG because that’s where the MGMTDB was created, and I couldn’t drop DATA01 either because of the voting disks and some parameter files. I could have recreated the RECO01 diskgroup but decided to keep things “consistently wrong” across all three diskgroups.

Fortunately, this behaviour might change with the January 2017 release of OEDA. The following bug fix suggests that DBFS_DG will always be configured as high redundancy and will host the voting disks:

24329542: oeda should make dbfs_dg as high redundancy and locate ocr/vote into dbfs_dg

There is also a feature request to support failgroup rename but it’s not very popular, to be honest. Until we get this feature, exachk will report the following failure:

failgroup name (RU02) for grid disk DATA01_CD_00_exa01cel01 is not cell name
Naming convention not used. Cannot proceed further with
automating checks and repair for bug 12433293

I’ve deployed five Exadata X6-2 machines so far and had this issue on all of them.

This issue seems to be caused by a bug in OEDA. The storage cell names should have been changed as part of the “Create Cell Disks” step of onecommand. I keep the logs from some older deployments, where it’s very clear that each cell was renamed as part of this step:

Initializing cells...

EXEC## |cellcli -e alter cell name = exa01cel01|exa01cel01.local.net|root|

I couldn’t find that command in the logs of the deployments I did. Obviously, the solution for now is to manually rename the cells before you run the “Create Cell Disks” step of onecommand.

Update 04.02.2017:

This problem has been logged by someone else a month earlier under the following bug:

Bug 25317550 : OEDA FAILS TO SET CELL NAME RESULTING IN GRID DISK NAMES NOT HAVING RIGHT SUFFIX


Unable to perform initial elastic configuration on Exadata X6

January 12th, 2017

I had the pleasure to deploy another Exadata in the first week of 2017 and got my first issue this year.

As we know, starting with Exadata X5, Oracle introduced the concept of Elastic Configuration. Apart from allowing you to mix and match the number of compute nodes and storage cells, it also changed how the IP addresses are assigned on the admin (eth0) interface. Prior to X5, Exadata had default IP addresses set at the factory in the range 192.168.1.1 to 192.168.1.203, but since this could collide with the customer’s network, Oracle changed the way those IPs are assigned. In short, the IP address on eth0 on the compute nodes and storage cells is assigned within the 172.16.2.1 to 172.16.7.254 range. The first time a node boots, it assigns its hostname and IP address based on the IB ports it is connected to.

Now to the real problem. I was doing the usual stuff (changing ILOMs, setting up the Cisco and IB switches) and was about to perform the initial elastic configuration (applyElasticConfig.sh), so I had to upload all the files I needed for the deployment to the first compute node. I changed my laptop address to an IP within the same range and was surprised to get a connection timeout when I tried to ssh to the first compute node (172.16.2.44). I thought this was an unfortunate coincidence, since I had rebooted the IB switches at almost the same time I powered on the compute nodes, but I was wrong. For some reason, NONE of the servers got their eth0 IP addresses assigned, hence they were not accessible.

I was puzzled as to what was causing this issue and spent the afternoon troubleshooting it. I thought Oracle had changed the way they assign the IP addresses, but the scripts hadn’t been changed for a long time. It didn’t take long before I found out what was causing it. Three lines in the /sbin/ifup script were the reason the eth0 interface wasn’t up with the 172.16.x.x IP address:

if ip link show ${DEVICE} | grep -q "UP"; then
    exit 0
fi

This check tests whether the interface is UP before proceeding further to bring it up. The thing is, the eth0 interface has already been brought UP by the elastic configuration script to check whether there is a link on the interface. Then, at the end of the elastic configuration script, when ifup is invoked to bring the interface up, it stops execution because the interface is already UP.

The solution is really simple – comment out the three lines (lines 73-75) in the /sbin/ifup script and reboot each node.
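A quick way to do that, assuming the check sits at lines 73-75 as in the affected package versions (back up the file first):

# comment out the interface-already-UP check in /sbin/ifup
cp /sbin/ifup /sbin/ifup.orig
sed -i '73,75 s/^/#/' /sbin/ifup
reboot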

This wasn’t the first X6 I had deployed and I had never had this problem before, so I did some further investigation. The /sbin/ifup script is part of the initscripts package. It turns out that the check for the interface being UP was introduced in one minor version of the package and then removed in the latest one. Unfortunately, the last entry in the changelog is from Apr 12 2016, so that’s not very helpful, but here’s a summary:

initscripts-9.03.53-1.0.1.el6.x86_64.rpm           11-May-2016 19:49     947.9 K  <– not affected
initscripts-9.03.53-1.0.1.el6_8.1.x86_64.rpm     12-Jul-2016 16:42     948.0 K    <– affected
initscripts-9.03.53-1.0.2.el6_8.1.x86_64.rpm     13-Jul-2016 08:26     948.1 K   <– affected
initscripts-9.03.53-1.0.3.el6_8.2.x86_64.rpm     23-Nov-2016 05:06     948.3 K <– latest version, not affected

I have had this problem on three Exadata machines so far. So, if you are deploying a new Exadata in the next few days or weeks, it’s very likely that you will be affected, unless your Exadata was factory deployed after 23rd Nov 2016. That’s the day the latest initscripts package was released.

Update 24.01.2017:

This problem has been fixed in 12.1.2.3.3.161208:
25143049 – ADD NEW INITSCRIPTS RPM TO EXADATA REPOSITORY TO FIX IFUP ISSUE

The latest package (initscripts-9.03.53-1.0.3.el6_8.2.x86_64.rpm) has been added to the patch.


Exadata memory configuration

October 28th, 2016

Read this post if your Exadata compute nodes have 512/768GB of RAM or you plan to upgrade to the same.

There has been a lot of information about hugepages, so I won’t go into too much detail. For efficiency, the (x86) CPU allocates RAM in chunks (pages) of 4K bytes, and those pages can be swapped to disk. For example, if your SGA allocates 32GB this will take 8388608 pages, and given that a page table entry consumes 8 bytes, that’s 64MB of page tables to look up. Hugepages, on the other hand, are 2M each. Pages that are used as huge pages are reserved inside the kernel and cannot be used for other purposes. Huge pages cannot be swapped out under memory pressure, there is obviously decreased page table overhead, and page lookups are not required since the pages are not subject to replacement. The bottom line is that you need to use them, especially with the amount of RAM we get nowadays.

For every new Exadata deployment I usually set the amount of hugepages to 60% of the physical RAM:
256GB RAM = 150 GB (75k pages)
512GB RAM = 300 GB (150k pages)
768GB RAM = 460 GB (230k pages)

This allows databases to allocate their SGA from the hugepages. If you want to allocate the exact number of hugepages that you need, Oracle has a script which will walk through all instances and give you the number of hugepages to set on the system; you can find the Doc ID in the references below.
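Either way, the pages are reserved through the vm.nr_hugepages kernel parameter. A sketch for a 768GB node using the values above (adjust the existing sysctl entry if one is already there):

# 230000 x 2MB pages = ~449GB, matching the alert log excerpt further down
grep Hugepagesize /proc/meminfo
echo "vm.nr_hugepages = 230000" >> /etc/sysctl.conf
sysctl -p
grep HugePages_Total /proc/meminfo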

This also brings up an important point – to make sure your databases don’t allocate from both 4K and 2M pages, set the parameter use_large_pages to ONLY for all databases. Starting with 11.2.0.3 (I think) you’ll find hugepages information in the alert log when the database starts:

************************ Large Pages Information *******************
Per process system memlock (soft) limit = 681 GB

Total Shared Global Region in Large Pages = 2050 MB (100%)

Large Pages used by this instance: 1025 (2050 MB)
Large Pages unused system wide = 202863 (396 GB)
Large Pages configured system wide = 230000 (449 GB)
Large Page size = 2048 KB
********************************************************************
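For completeness, this is how I would set the parameter mentioned above on each database (it only takes effect after a restart):

SQL> alter system set use_large_pages=ONLY scope=spfile sid='*';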


Now there is one more parameter you need to change if you deploy or upgrade an Exadata with 512/768GB of RAM. That is kernel.shmall, the total amount of shared memory, in pages, that the system can use at one time. On Exadata, this parameter is set by default to the equivalent of 214GB, which is enough if your compute nodes have only 256GB of RAM. If the sum of all databases’ SGA memory is less than 214GB that’s OK, but the moment you try to start another database you’ll get the following error:

Linux-x86_64 Error: 28: No space left on device

For that reason, if you deploy or upgrade an Exadata with 512GB/768GB of physical RAM, make sure you increase kernel.shmall too!

Some Oracle docs suggest this parameter should be set to half of the physical memory, others suggest it should be set to all available memory. Here’s how to calculate it:

kernel.shmall = physical RAM size / pagesize

To get the page size, run getconf PAGE_SIZE at the command prompt. You need to set shmall to at least match the size of the hugepages, because that’s where we’d allocate the SGA memory from. So if you run an Exadata with 768GB of RAM and have 460GB of hugepages, you’ll set shmall to 120586240 (460GB / 4K page size).

Using HUGEPAGES does not alter the calculation for configuring shmall!
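Putting it together for a 768GB node with 460GB of hugepages (the same example values as above):

# page size is 4096 bytes on x86-64
getconf PAGE_SIZE
# 460GB / 4KB page size = 120586240 pages
echo "kernel.shmall = 120586240" >> /etc/sysctl.conf
sysctl -p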

Reference:
HugePages on Linux: What It Is… and What It Is Not… (Doc ID 361323.1)
Upon startup of Linux database get ORA-27102: out of memory Linux-X86_64 Error: 28: No space left on device (Doc ID 301830.1)


How to enable Exadata Write-Back Flash Cache

October 10th, 2016

Yes, this is well known and the process has been described in Exadata Write-Back Flash Cache – FAQ (Doc ID 1500257.1), but what the note fails to make clear is that you do NOT have to restart the cell services anymore and hence do not have to resync the grid disks!

I had to enable WBFC many times before and every time I would restart the cell services, as the note suggests. Well, this is not required anymore: starting with 11.2.3.3.1 it is no longer necessary to shut down the cellsrv service on the cells when changing the flash cache mode. This is not a big deal if you are deploying the Exadata just now, but it makes enabling/disabling WBFC on existing systems quicker and much easier.

The best way to do that is to use the script that Oracle has provided – setWBFC.sh. It will do all the work for you – pre-checks and changing the mode, either rolling or non-rolling.

Here are the checks it does for you:

  • Storage cells are valid storage nodes running 11.2.3.2.1 or later.
  • Griddisk status is ONLINE across all cells.
  • No ASM rebalance operations are running.
  • Flash cache state is “normal” across all cells.

Enable Write-Back Flash Cache using a ROLLING method

Before you enable WBFC run a precheck to make sure the cells are ready and there are no faults.

./setWBFC.sh -g cell_group -m WriteBack -o rolling -p

At the end of the script, which takes less than two minutes to run, you’ll see a message indicating whether the storage cells passed the prechecks:

All pre-req checks completed:                    [PASSED]
2016-10-10 10:53:03
exa01cel01: flashcache size: 5.82122802734375T
exa01cel02: flashcache size: 5.82122802734375T
exa01cel03: flashcache size: 5.82122802734375T

There are 3 storage cells to process.

Then, once you are ready, run the script to enable WBFC:

./setWBFC.sh -g cell_group -m WriteBack -o rolling

The script will go through the following steps on each cell, one cell at a time:

1. Recheck griddisks status to make sure none are OFFLINE
2. Drop flashcache
3. Change WBFC flashcachemode to WriteBack
4. Re-create the flashcache
5. Verify flashcachemode is in the correct state

On a Quarter Rack it took around four minutes to enable WBFC, and you’ll see this message at the end:

2016-10-10 11:23:24
Setting flash cache to WriteBack completed successfully.
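Once the script finishes you can confirm the new mode across all cells, for example:

dcli -g cell_group -l root cellcli -e "list cell attributes name,flashcachemode"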

Disable Write-Back Flash Cache using a ROLLING method

Disabling WBFC is not something you do every day, but sooner or later you might have to do it. I had to do it once for a customer who wanted to go back to WriteThrough because Oracle ACS said this was the default ?!

The steps to disable WBFC are the same as for enabling it, except that we need to flush all the dirty blocks off the flash cache before we drop it.

Again, run the precheck script to make sure everything looks good:

./setWBFC.sh -g cell_group -m WriteThrough -o rolling -p

If everything looks good, then run the script:

./setWBFC.sh -g cell_group -m WriteThrough -o rolling

The script will first FLUSH flashcache across all cells in parallel and wait until the flush is complete!

You can monitor the flush process using the following commands:

dcli -l root -g cell_group cellcli -e "LIST CELLDISK ATTRIBUTES name, flushstatus, flusherror" | grep FD
dcli -l root -g cell_group cellcli -e "list metriccurrent attributes name,metricvalue where name like \'FC_BY_DIRTY.*\' "

The script will then go through the following steps on each cell, one cell at a time:

1. Recheck griddisks status to make sure none are OFFLINE
2. Drop flashcache
3. Change WBFC flashcachemode to WriteThrough
4. Re-create the flashcache
5. Verify flashcachemode is in the correct state

The time it takes to flush the cache depends on how many dirty blocks you’ve got in the flash cache and on the machine workload. I did two eighth racks and, unfortunately, I didn’t check the number of dirty blocks, but it took 75 minutes on the first one and 4 hours on the second.


Extending an Exadata Eighth Rack to a Quarter Rack

October 3rd, 2016

In the past year I’ve done a lot of Exadata deployments and probably half of them were eighth racks. It’s one of those temporary things – let’s do it now and we’ll change it later. It’s the same with the upgrades – I’d never seen anyone do an upgrade from an eighth rack to a quarter. However, a month ago one of our customers asked me to upgrade their three X5-2 HC 4TB units from an eighth to a quarter rack configuration.

What’s the difference between an eighth rack and a quarter rack

X5-2 Eighth Rack and X5-2 Quarter rack have the same hardware and look exactly the same. The only difference is that only half of the compute power and storage space on an eighth rack is usable. In an eighth rack the compute nodes have half of their CPUs activated – 18 cores per server. It’s the same for the storage cells – 16 cores per cell, six hard disks and two flash cards are active.

While this is true for X3, X4 and X5, things have changed slightly for X6. Up until now, eighth rack configurations had all the hard disks and flash cards installed but only half of them were usable. The new Exadata X6-2 Eighth Rack High Capacity configuration has half of the hard disks and flash cards removed. To extend an X6-2 HC to a quarter rack you need to add high capacity disks and flash cards to the system. This is only required for High Capacity configurations, because X6-2 Eighth Rack Extreme Flash storage servers have all flash drives enabled.

What are the main steps of the upgrade:

  • Activate Database Server Cores
  • Activate Storage Server Cores and disks
  • Create eight new cell disks per cell – six hard disks and two flash disks
  • Create all grid disks (DATA01, RECO01, DBFS_DG) and add them to the disk groups
  • Expand the flashcache onto the new flash disks
  • Recreate the flashlog on all flash cards

Here are few things you need to keep in mind before you start:

  • The compute node upgrade requires a reboot for the changes to take effect.
  • The storage cell upgrade does NOT require a reboot; it is an online operation.
  • The upgrade work is low risk – your data is secure and redundant at all times.
  • This post is about X5 upgrade. If you were to upgrade X6 then before you begin you need to install the six 8 TB disks in HDD slots 6 – 11 and install the two F320 flash cards in PCIe slots 1 and 4.

Upgrade of the compute nodes

Well, this is really straightforward and you can do it at any time. Remember that you need to restart the server for the change to take effect:

dbmcli -e alter dbserver pendingCoreCount=36 force
DBServer exa01db01 successfully altered. Please reboot the system to make the new pendingCoreCount effective.

Reboot the server to activate the new cores. It will take around 10 minutes for the server to come back online.

Check the number of cores after the server comes back:

dbmcli -e list dbserver attributes coreCount
coreCount:               36/36


Make sure you’ve got the right number of cores. These systems allow capacity on demand (CoD) and in my case the customer wanted me to activate only 28 cores per server.
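For a CoD configuration the command is the same, just with the licensed core count; a sketch for the 28 cores mentioned above:

dbmcli -e alter dbserver pendingCoreCount=28 force
# reboot, then verify the active core count
dbmcli -e list dbserver attributes coreCount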

Upgrade of the storage cells

Like I said earlier, the upgrade of the storage cells does NOT require a reboot and can be done online at any time.

The following needs to be done on each cell. You can, of course, use dcli but I wanted to do that cell by cell and make sure each operation finishes successfully.

1. First, upgrade the configuration from an eighth to a quarter rack:

[root@exa01cel01 ~]# cellcli -e list cell attributes cpuCount,eighthRack
cpuCount:               16/32
eighthRack:             TRUE

[root@exa01cel01 ~]# cellcli -e alter cell eighthRack=FALSE
Cell exa01cel01 successfully altered

[root@exa01cel01 ~]# cellcli -e list cell attributes cpuCount,eighthRack
cpuCount:               32/32
eighthRack:             FALSE


2. Create cell disks on top of the newly activated physical disks

Like I said – this is an online operation and you can do it at any time:

[root@exa01cel01 ~]# cellcli -e create celldisk all
CellDisk CD_06_exa01cel01 successfully created
CellDisk CD_07_exa01cel01 successfully created
CellDisk CD_08_exa01cel01 successfully created
CellDisk CD_09_exa01cel01 successfully created
CellDisk CD_10_exa01cel01 successfully created
CellDisk CD_11_exa01cel01 successfully created
CellDisk FD_02_exa01cel01 successfully created
CellDisk FD_03_exa01cel01 successfully created


3. Expand the flashcache on to the new flash cards

This is again an online operation and it can be run at any time:

[root@exa01cel01 ~]# cellcli -e alter flashcache all
Flash cache exa01cel01_FLASHCACHE altered successfully


4. Recreate the flashlog

The flash log is always 512MB in size, but to make use of the new flash cards it has to be recreated. Use the DROP FLASHLOG command to drop the flash log, and then the CREATE FLASHLOG command to create a new one. The DROP FLASHLOG command can be run at runtime, but it does not complete until all redo data on the flash disk is written to hard disk.

Here is an important note from Oracle:

If FORCE is not specified, then the DROP FLASHLOG command fails if there is any saved redo. If FORCE is specified, then all saved redo is purged, and Oracle Exadata Smart Flash Log is removed.

[root@exa01cel01 ~]# cellcli -e drop flashlog
Flash log exa01cel01_FLASHLOG successfully dropped
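Then recreate the flash log so it spreads over all four flash cards:

cellcli -e create flashlog all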


5. Create grid disks

The best way to do that is to query the current grid disk sizes and use them when creating the new grid disks. Use the following queries to obtain the size for each grid disk. We use cell disk 02 because the first two cell disks do not have DBFS_DG grid disks on them.

[root@exa01db01 ~]# dcli -g cell_group -l root cellcli -e "list griddisk attributes name, size where name like \'DATA.*02.*\'"
exa01cel01: DATA01_CD_02_exa01cel01        2.8837890625T
[root@exa01cel01 ~]# dcli -g cell_group -l root cellcli -e "list griddisk attributes name, size where name like \'RECO.*02.*\'"
exa01cel01: RECO01_CD_02_exa01cel01        738.4375G
[root@exa01cel01 ~]# dcli -g cell_group -l root cellcli -e "list griddisk attributes name, size where name like \'DBFS_DG.*02.*\'"
exa01cel01: DBFS_DG_CD_02_exa01cel01       33.796875G

Then you can either generate the commands and run them on each cell or use dcli to create them on all three cells, making sure the grid disk prefixes match the existing ones (DATA01, RECO01 and DBFS_DG here):

dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA01_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=2.8837890625T"
dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA01_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=2.8837890625T"
dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA01_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=2.8837890625T"
dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA01_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=2.8837890625T"
dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA01_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=2.8837890625T"
dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA01_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=2.8837890625T"
dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO01_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=738.4375G"
dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO01_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=738.4375G"
dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO01_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=738.4375G"
dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO01_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=738.4375G"
dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO01_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=738.4375G"
dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO01_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=738.4375G"
dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=33.796875G"
dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=33.796875G"
dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=33.796875G"
dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=33.796875G"
dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=33.796875G"
dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=33.796875G"

6. The final step is to add the newly created grid disks to ASM

Connect to the ASM instance using sqlplus as sysasm and disable the appliance mode:

SQL> ALTER DISKGROUP DATA01 set attribute 'appliance.mode'='FALSE';
SQL> ALTER DISKGROUP RECO01 set attribute 'appliance.mode'='FALSE';
SQL> ALTER DISKGROUP DBFS_DG set attribute 'appliance.mode'='FALSE';

Add the disks to the disk groups; you can either queue them on one instance or run them on both ASM instances in parallel:

SQL> ALTER DISKGROUP DATA01 ADD DISK 'o/*/DATA01_CD_0[6-9]*','o/*/DATA01_CD_1[0-1]*' REBALANCE POWER 128;
SQL> ALTER DISKGROUP RECO01 ADD DISK 'o/*/RECO01_CD_0[6-9]*','o/*/RECO01_CD_1[0-1]*' REBALANCE POWER 128;
SQL> ALTER DISKGROUP DBFS_DG ADD DISK 'o/*/DBFS_DG_CD_0[6-9]*','o/*/DBFS_DG_CD_1[0-1]*' REBALANCE POWER 128;

Monitor the rebalance using select * from gv$asm_operation and, once it is done, change the appliance mode back to TRUE:

SQL> ALTER DISKGROUP DATA01 set attribute 'appliance.mode'='TRUE';
SQL> ALTER DISKGROUP RECO01 set attribute 'appliance.mode'='TRUE';
SQL> ALTER DISKGROUP DBFS_DG set attribute 'appliance.mode'='TRUE';

And at this point you are done with the upgrade. I strongly recommend running the (latest) exachk report and making sure there are no issues with the configuration.

A problem you might encounter is that the flash is not fully utilized; in my case I had 128MB free on each card:

[root@exa01db01 ~]# dcli -g cell_group -l root "cellcli -e list celldisk attributes name,freespace where disktype='flashdisk'"
exa01cel01: FD_00_exa01cel01         128M
exa01cel01: FD_01_exa01cel01         128M
exa01cel01: FD_02_exa01cel01         128M
exa01cel01: FD_03_exa01cel01         128M
exa01cel02: FD_00_exa01cel02         128M
exa01cel02: FD_01_exa01cel02         128M
exa01cel02: FD_02_exa01cel02         128M
exa01cel02: FD_03_exa01cel02         128M
exa01cel03: FD_00_exa01cel03         128M
exa01cel03: FD_01_exa01cel03         128M
exa01cel03: FD_02_exa01cel03         128M
exa01cel03: FD_03_exa01cel03         128M

This seems to be a known bug (Doc ID 2048491.1 in the references below) and to fix it you need to recreate both the flash cache and the flash log.
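A hedged sketch of that fix on one cell (flush the cache first if you are running in WriteBack mode):

cellcli -e alter flashcache all flush      # only needed in WriteBack mode
cellcli -e drop flashcache
cellcli -e drop flashlog
cellcli -e create flashlog all
cellcli -e create flashcache all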

References:
Extending an Eighth Rack to a Quarter Rack in Oracle Exadata Database Machine X4-2 and Later
Oracle Exadata Database Machine exachk or HealthCheck (Doc ID 1070954.1)
Exachk fails due to incorrect flashcache size after upgrading from 1/8 to a 1/4 rack (Doc ID 2048491.1)


How to resolve missing dependency on exadata-sun-computenode-minimum

August 18th, 2016

I’ve been really busy the last few months – apart from spending a lot of time on the M25 I’ve been doing a lot of Exadata installations and consolidations. I haven’t posted for some time now, but the good news is that I’ve got many drafts and presentation ideas.

This is a quick post about an issue I had recently. I had to integrate AD authentication over Kerberos on the compute nodes (blog post to follow) but had to do a compute node upgrade before that. This was an Exadata X5-2 QR running 12.1.2.1.1 which had to be upgraded to 12.1.2.3.1, and I was surprised when dbnodeupdate failed with “‘Minimum’ dependency check failed”. You’ll also notice the following in the logs:

exa01db01a: Exadata capabilities missing (capabilities required but not supplied by any package)
exa01db01a NOTE: Unexpected configuration - Contact Oracle Support

Starting with 11.2.3.3.0, the exadata-*computenode-exact and exadata-*computenode-minimum rpms were introduced. An update to 11.2.3.3.0 or later by default assumes the ‘exact’ rpm will be used to ‘update to’ with yum, hence before running the upgrade dbnodeupdate will check whether there are any missing packages/dependencies.

The best way to check what is missing is to run yum check:

[root@exa01db01a ~]# yum check
Loaded plugins: downloadonly
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of elfutils-libelf-devel >= ('0', '0.158', '3.2.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of elfutils-libelf-devel(x86-64) >= ('0', '0.158', '3.2.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of glibc-devel(x86-32) >= ('0', '2.12', '1.149.el6_6.5')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of libsepol(x86-32) >= ('0', '2.0.41', '4.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of libselinux(x86-32) >= ('0', '2.0.94', '5.8.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of elfutils-libelf(x86-32) >= ('0', '0.158', '3.2.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of libcom_err(x86-32) >= ('0', '1.42.8', '1.0.2.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of e2fsprogs-libs(x86-32) >= ('0', '1.42.8', '1.0.2.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of libaio(x86-32) >= ('0', '0.3.107', '10.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of libaio-devel(x86-32) >= ('0', '0.3.107', '10.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of libstdc++-devel(x86-32) >= ('0', '4.4.7', '11.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of compat-libstdc++-33(x86-32) >= ('0', '3.2.3', '69.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of zlib(x86-32) >= ('0', '1.2.3', '29.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of libxml2(x86-32) >= ('0', '2.7.6', '17.0.1.el6_6.1')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of elfutils >= ('0', '0.158', '3.2.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of elfutils(x86-64) >= ('0', '0.158', '3.2.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of ntsysv >= ('0', '1.3.49.3', '2.el6_4.1')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of ntsysv(x86-64) >= ('0', '1.3.49.3', '2.el6_4.1')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of glibc(x86-32) >= ('0', '2.12', '1.149.el6_6.5')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of nss-softokn-freebl(x86-32) >= ('0', '3.14.3', '18.el6_6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of libgcc(x86-32) >= ('0', '4.4.7', '11.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of libstdc++(x86-32) >= ('0', '4.4.7', '11.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of compat-libstdc++-296 >= ('0', '2.96', '144.el6')
exadata-sun-computenode-minimum-12.1.2.1.1.150316.2-1.x86_64 has missing requires of compat-libstdc++-296(x86-32) >= ('0', '2.96', '144.el6')
Error: check all

Somehow all of the x86-32 packages and three x86-64 packages had been removed. The x86-32 packages would have been removed as part of the upgrade anyway – they were not present after the upgrade. I didn’t spend too much time trying to understand why or how the packages were removed. I was told additional packages had been installed before and then removed; perhaps one of them had a few dependencies and things got messed up when it was removed.

Anyway, to solve this you need to download the patch for the same version (12.1.2.1.1). The p20746761_121211_Linux-x86-64.zip patch is still available from MOS note 888828.1. So after that you unzip it, mount the iso, test-install all the packages to make sure nothing is missing and there are no conflicts, and then finally install the packages:
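Roughly, the unzip/mount part looks like this (the iso name inside the zip depends on the patch):

unzip p20746761_121211_Linux-x86-64.zip
mkdir -p /mnt/iso
mount -o loop <iso file from the zip> /mnt/iso
cd /mnt/iso      # then change into the x86_64 rpm directory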

[root@exa01db01a x86_64]# rpm -ivh --test zlib-1.2.3-29.el6.i686.rpm glibc-2.12-1.149.el6_6.5.i686.rpm nss-softokn-freebl-3.14.3-18.el6_6.i686.rpm libaio-devel-0.3.107-10.el6.i686.rpm libaio-0.3.107-10.el6.i686.rpm e2fsprogs-libs-1.42.8-1.0.2.el6.i686.rpm libgcc-4.4.7-11.el6.i686.rpm libcom_err-1.42.8-1.0.2.el6.i686.rpm elfutils-libelf-0.158-3.2.el6.i686.rpm libselinux-2.0.94-5.8.el6.i686.rpm libsepol-2.0.41-4.el6.i686.rpm glibc-devel-2.12-1.149.el6_6.5.i686.rpm elfutils-libelf-devel-0.158-3.2.el6.x86_64.rpm libstdc++-devel-4.4.7-11.el6.i686.rpm libstdc++-4.4.7-11.el6.i686.rpm compat-libstdc++-296-2.96-144.el6.i686.rpm compat-libstdc++-33-3.2.3-69.el6.i686.rpm libxml2-2.7.6-17.0.1.el6_6.1.i686.rpm elfutils-0.158-3.2.el6.x86_64.rpm ntsysv-1.3.49.3-2.el6_4.1.x86_64.rpm
Preparing...                ########################################### [100%]

[root@exa01db01a x86_64]# rpm -ivh zlib-1.2.3-29.el6.i686.rpm glibc-2.12-1.149.el6_6.5.i686.rpm nss-softokn-freebl-3.14.3-18.el6_6.i686.rpm libaio-devel-0.3.107-10.el6.i686.rpm libaio-0.3.107-10.el6.i686.rpm e2fsprogs-libs-1.42.8-1.0.2.el6.i686.rpm libgcc-4.4.7-11.el6.i686.rpm libcom_err-1.42.8-1.0.2.el6.i686.rpm elfutils-libelf-0.158-3.2.el6.i686.rpm libselinux-2.0.94-5.8.el6.i686.rpm libsepol-2.0.41-4.el6.i686.rpm glibc-devel-2.12-1.149.el6_6.5.i686.rpm elfutils-libelf-devel-0.158-3.2.el6.x86_64.rpm libstdc++-devel-4.4.7-11.el6.i686.rpm libstdc++-4.4.7-11.el6.i686.rpm compat-libstdc++-296-2.96-144.el6.i686.rpm compat-libstdc++-33-3.2.3-69.el6.i686.rpm libxml2-2.7.6-17.0.1.el6_6.1.i686.rpm elfutils-0.158-3.2.el6.x86_64.rpm ntsysv-1.3.49.3-2.el6_4.1.x86_64.rpm
Preparing...              ########################################### [100%]
1:libgcc                  ########################################### [  5%]
2:elfutils-libelf-devel   ########################################### [ 10%]
3:nss-softokn-freebl      ########################################### [ 15%]
4:glibc                   ########################################### [ 20%]
5:glibc-devel             ########################################### [ 25%]
6:elfutils                ########################################### [ 30%]
7:zlib                    ########################################### [ 35%]
8:libaio                  ########################################### [ 40%]
9:libcom_err              ########################################### [ 45%]
10:libsepol               ########################################### [ 50%]
11:libstdc++              ########################################### [ 55%]
12:libstdc++-devel        ########################################### [ 60%]
13:libaio-devel           ########################################### [ 65%]
14:libselinux             ########################################### [ 70%]
15:e2fsprogs-libs         ########################################### [ 75%]
16:libxml2                ########################################### [ 80%]
17:elfutils-libelf        ########################################### [ 85%]
18:compat-libstdc++-296   ########################################### [ 90%]
19:compat-libstdc++-33    ########################################### [ 95%]
20:ntsysv                 ########################################### [100%]

[root@exa01db01a x86_64]# yum check
Loaded plugins: downloadonly
check all

After that the dbnodeupdate pre-check completed successfully and I upgraded the node to 12.1.2.3.1 in no time.

With Exadata you are allowed to install packages on the compute nodes as long as they don’t break any dependencies, but you cannot install anything on the storage cells. Here’s Oracle’s official statement:
Is it acceptable / supported to install additional or 3rd party software on Exadata machines and how to check for conflicts? (Doc ID 1541428.1)

Update 23.08.2016:
You might also get errors for two more packages in case you have updated from OEL5 to OEL6 and are now trying to patch the compute node:

fuse-2.8.3-4.0.2.el6.x86_64 has missing requires of kernel >= ('0', '2.6.14',
None)
2:irqbalance-1.0.7-5.0.1.el6.x86_64 has missing requires of kernel >= ('0',
'2.6.32', '358.2.1')

Refer to the following note for more information and how to fix it:


Oracle Exadata X6 released

April 5th, 2016

Oracle has just announced the next generation of Exadata Database Machine – X6-2 and X6-8.

Here are the changes for Exadata X6-2:
1) X6-2 Database Server: As always the hardware has been updated and the 2-socket database servers are now equipped with the latest twenty-two-core Intel Xeon E5-2699 v4 “Broadwell” processors, compared to X5 where we had eighteen-core Intel Xeon E5-2699 v3 processors. The memory is still DDR4; the default configuration comes with 256GB and can be expanded to 768GB. The local storage can now be upgraded to 8 drives from the default of 4, to allow more local storage in case of consolidation with Oracle OVM.
2) X6-2 Storage Server HC: The storage server gets the new version of the CPU as well, the ten-core Intel Xeon E5-2630 v4 processor (it was the eight-core Intel Xeon E5-2630 v3 in X5). The flash cards are upgraded as well, to the 3.2TB Sun Accelerator Flash F320 NVMe PCIe card for a total of 12.8TB of flash cache (2x the capacity of X5, where we had 1.6TB F160 cards).
2.1) X6-2 Storage Server EF: Similarly to the High Capacity storage server, this one gets the CPU and flash cards upgraded. The NVMe PCIe flash drives are upgraded from 1.6TB to 3.2TB, which gives you a total raw capacity of 25.6TB per server.

This time Oracle released Exadata X6-8 together with the X6-2 release. The changes aren’t many; I have to say that the X6-8 compute node looks exactly the same as the X5-8 in terms of specs, so I guess that Exadata X6-8 actually consists of X5-8 compute nodes with X6-2 storage servers. Oracle’s vision for those big monsters is that they are specifically optimized for Database as a Service (DBaaS) and database in-memory. Indeed, with 12TB of memory we can host hundreds of databases or load a whole database in memory.

By the looks of it Exadata X6-2 and Exadata X6-8 will require the latest Exadata 12.1.2.3.0 software. This software has been around for some time now and has some new features:
1) Performance Improvements for Software Upgrades – I can confirm that; in a recent upgrade to 12.1.2.3.0 the cell upgrade took a bit more than an hour.
2) VLAN tagging support in OEDA – That’s not a fundamentally new or exciting feature; VLAN tagging was available before, but now it can be done through OEDA, hence it can be part of the deployment.
3) Quorum disks on database servers to enable high redundancy on quarter and eighth racks – You can now use the database servers to deploy quorum disks and enable placement of the voting disks on high redundancy disk groups on smaller (quarter and eighth) racks. Here is more information – Managing Quorum Disks Using the Quorum Disk Manager Utility
4) Storage Index preservation during rebalance – The feature enables Storage Indexes to be moved along with the data when a disk hits a predictive failure or a true failure.
5) ASM Disk Size Checked When Reducing Grid Disk Size – This is a check on the storage server to make sure you cannot shrink a grid disk before decreasing the size of the corresponding ASM disk.

Capacity-On-Demand Licensing:
1) For Exadata X6-2 a minimum of 14 cores must be enabled per server.
2) For Exadata X6-8 a minimum of 56 cores must be enabled per server.

Here’s something interesting:
OPTIONAL CUSTOMER SUPPLIED ETHERNET SWITCH INSTALLATION IN EXADATA DATABASE MACHINE X6-2
Each Exadata Database Machine X6-2 rack has 2U available at the top of the rack that can be used by customers to optionally install their own client network Ethernet switches in the Exadata rack instead of in a separate rack. Some space, power, and cooling restrictions apply.


Exadata onecommand fails at cell disk creation

February 3rd, 2016

I was installing another Exadata last month when I got an error on the create cell disks step. I’d seen the same error before, when I was extending an Exadata configuration from two to three racks, but thought it was a one-off.

The cell disk creation failed as below:

[root@exa01db01 linux-x64]# ./install.sh -cf Customer-exa01.xml -s 8

 Initializing
 Executing Create Cell Disks
 Checking physical disks for errors before creating celldisks.........................
 Restarting cell services....................................................
 ERROR:

 Stopping the RS, CELLSRV, and MS services...
 The SHUTDOWN of services was successful.
 Starting the RS, CELLSRV, and MS services...
 Getting the state of RS services...  running
 Starting CELLSRV services...
 The STARTUP of CELLSRV services was not successful.
 CELL-01533: Unable to validate the IP addresses from the cellinit.ora file because the IP addresses may be down or misconfigured.
 Starting MS services...
 The STARTUP of MS services was successful.
 ERROR:

Going through the cell configuration, it is obvious why the process failed. The cell still had the default name, and the IP addresses that the cell services should use were still the default ones:

CellCLI> list cell detail
         name:                   ru02
         ipaddress1:             192.168.10.1/24
         ipaddress2:             192.168.10.2/24
         cellsrvStatus:          stopped
         msStatus:               running
         rsStatus:               running

In short, when you see an error like the one below, your ipaddress1 and/or ipaddress2 fields are most probably wrong:

         2       2015-12-15T17:57:03+00:00       critical        "ORA-00700: soft internal error, arguments: [main_6a], [3], [IP addresses in cellinit.ora not operational], [], [], [], [], [], [], [], [], []"

The solution to that is simple. You need to alter the cell name and IP addresses manually:

CellCLI> alter cell name=exa01cel02a,ipaddress1='192.168.10.13/22',ipaddress2='192.168.10.14/22'
Network configuration altered. Please issue the following commands as root to restart the network and open IB stack:
service openibd restart
service network restart
A restart of all services is required to put new network configuration into effect. MS-CELLSRV communication may be hampered until restart.
Cell exa01cel02a successfully altered

CellCLI> alter cell restart services all
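After the services come back, a quick check that the new name and IP addresses took effect:

cellcli -e list cell attributes name,ipaddress1,ipaddress2,cellsrvStatus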

Make sure all the cells are fixed and re-run the onecommand step; this time it will succeed:

 Successfully completed execution of step Create Cell Disks [elapsed Time [Elapsed = 128338 mS [2.0 minutes] Thu Dec 17 14:26:59 GMT 2015]]

I’ve checked some older deployments and it’s the same step that should change the cell name and restart the cell services. For some reason this didn’t happen for me. For both deployments I used OEDA v15.300 (Oct 2015), so this might be a bug in that version.
