Archive

Posts Tagged ‘oem’

Installing OEM13c management agent in silent mode

November 1st, 2016 3 comments

For many customers I work with, SSH isn't available between the OEM host and the monitored hosts, so you cannot push the management agent to the host from the OEM console. The customer might have to raise a change request to allow SSH between the hosts, but that can take a while and it's really unnecessary.

In that case the management agent has to be installed in silent mode. That is when the agent isn't pushed from OEM to the host but pulled by the host and installed locally. There are three ways to do that: using the AgentPull script, using the agentDeploy script, or using an RPM file.

When using the AgentPull script, you download a script from the OMS and then run it; it downloads the agent package and installs it on the local host. Using the agentDeploy script is very similar, but to obtain it you use EMCLI. The third method, using an RPM file, is similar as well: using EMCLI you download the RPM file and install it on the system. All of these methods require HTTP/HTTPS access to the OMS, and the agentDeploy and RPM methods also require EMCLI to be installed. For that reason I always use the AgentPull method, since it's quicker and really straightforward. Another benefit of the AgentPull method is that if you don't have HTTP/HTTPS access to the OEM you can simply copy and paste the script.

First download the script from the OEM host, using curl or wget. The monitored hosts usually don't have direct HTTP access, but many of them do have it through an HTTP proxy. Download the script and make it executable:

curl "https://oem13c.local.net:7802/em/install/getAgentImage" --insecure -o AgentPull.sh
chmod +x AgentPull.sh
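
If the host can only reach the OMS through an HTTP proxy, curl can be pointed at it with the --proxy option. A sketch, where the proxy host and port are made-up values to be replaced with your own:

curl --proxy http://proxy.local.net:3128 "https://oem13c.local.net:7802/em/install/getAgentImage" --insecure -o AgentPull.sh
chmod +x AgentPull.sh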

If a proxy server is not available, you can simply copy and paste the script; its location on the OEM server is:

/opt/app/oracle/em13c/middleware/oms/install/unix/scripts/AgentPull.sh.template

Make sure you edit the file and change the OMS host and OMS port parameters.
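
As a rough sketch, the two values can be changed with sed; the variable names below are an assumption on my part, so check what the actual template contains before editing:

sed -e 's/^OMS_HOST=.*/OMS_HOST=oem13c.local.net/' \
    -e 's/^EM_UPLOAD_PORT=.*/EM_UPLOAD_PORT=7802/' \
    AgentPull.sh.template > AgentPull.sh
chmod +x AgentPull.sh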

Check which platforms are available:

[oracle@exa01db01 ~]$ ./AgentPull.sh -showPlatforms

Platforms    Version
Linux x86-64    13.1.0.0.0

Run the script by supplying the sysman password, the platform you want to install, the agent registration password and the agent base directory:

[oracle@exa01db01 ~]$ ./AgentPull.sh LOGIN_USER=sysman LOGIN_PASSWORD=welcome1 \
PLATFORM="Linux x86-64" AGENT_REGISTRATION_PASSWORD=welcome1 \
AGENT_BASE_DIR=/u01/app/oracle/agent13c

It takes less than two minutes to install the agent and then at the end you’ll see the following messages:

Agent Configuration completed successfully
The following configuration scripts need to be executed as the "root" user. Root script to run : /u01/app/oracle/agent13c/agent_13.1.0.0.0/root.sh
Waiting for agent targets to get promoted...
Successfully Promoted agent and its related targets to Management Agent

Now log in as root and run the root.sh script.
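
Something along these lines, assuming the default agent instance directory under the agent base directory used above (adjust the paths for your environment):

[root@exa01db01 ~]# /u01/app/oracle/agent13c/agent_13.1.0.0.0/root.sh
[oracle@exa01db01 ~]$ /u01/app/oracle/agent13c/agent_inst/bin/emctl status agent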

To be honest, I use this method regardless of the circumstances; it's so much easier and faster.

 

Categories: oracle Tags:

Database system target in pending status for standby database in OEM 12c

October 6th, 2015 No comments

That's not really a problem, but an annoying issue I had with OEM 12c. Once a standby database is promoted, the database system target for it shows either a metric collection error or a status of Pending.

The standby database doesn’t need its own system since it will join the primary database system. The solution is to associate the standby database with the primary system and then remove the standby database system.

For example, we've got primary and standby databases TESTDB_LON and TESTDB_RDG. Once they are promoted, the following targets are also created in OEM: TESTDB_LON_sys and TESTDB_RDG_sys.

The second one will always have the status Pending:
Status Pending (Post Blackout)

The way to resolve that is to associate the standby database with the primary database system. I usually also rename the primary database system to drop the location suffix (LON or RDG):
– Go to the Targets -> Systems and choose the system you want to edit
– Then go to Database System -> Target Setup -> Edit system
– Rename the system name from TESTDB_LON_sys to TESTDB_sys
– Save changes
– Go to Database System again, Target Setup -> Edit system
– Click next to go to Step 2
– Add the standby database to the Standby Database Associations table
– Save changes

At this point we’ve got one system TESTDB_sys with two database members TESTDB_LON and TESTDB_RDG.

The next step is to remove the database system of the standby database using emcli:

[oracle@oem12c ~]$ /opt/app/oracle/em12cr4/middleware/oms/bin/emcli login -username=sysman
Enter password :
Login successful

[oracle@oem12c ~]$ /opt/app/oracle/em12cr4/middleware/oms/bin/emcli delete_target -name="TESTDB_RDG_sys" -type="oracle_dbsys"
Target "TESTDB_RDG_sys:oracle_dbsys" deleted successfully

Now it’s all sorted and hopefully all targets are “green”.
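
If you want to double-check from the command line, emcli can list the remaining database system targets; the name pattern below is just an illustration:

[oracle@oem12c ~]$ /opt/app/oracle/em12cr4/middleware/oms/bin/emcli get_targets -targets="%_sys:oracle_dbsys"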

Categories: oracle Tags:

EMDIAG Repvfy 12c kit – troubleshooting part 1

March 26th, 2014 No comments

The following blog post continues the EMDIAG repvfy kit series and focuses on how to troubleshoot and solve the problems reported by the kit.

The repository verification kit reports a number of problems with our repository, which we are about to troubleshoot and solve one by one. It's important to note that some of the problems are related, so solving one problem could also solve another.

Here is the output I’ve got for my OEM repository:

-- --------------------------------------------------------------------- --
-- REPVFY: 2014.0114     Repository: 12.1.0.3.0     23-Jan-2014 13:35:41 --
---------------------------------------------------------------------------
-- Module:                                          Test:   0, Level: 2 --
-- --------------------------------------------------------------------- --

verifyAGENTS

1004. Stuck PING jobs: 10
verifyASLM
verifyAVAILABILITY
1002. Disabled response metrics (16570376): 2
verifyBLACKOUTS
verifyCAT
verifyCORE
verifyECM
1002. Unregistered ECM metadata tables: 2
verifyEMDIAG
1001. Undefined verification modules: 1
verifyEVENTS
verifyEXADATA
2001. Exadata plugin version mismatches: 5
verifyJOBS
2001. System jobs running for more than 24hr: 1
verifyJVMD
verifyLOADERS
verifyMETRICS
verifyNOTIFICATIONS
verifyOMS
1002. Stuck PING jobs: 10
verifyPLUGINS
1003. Plugin metadata versions out of sync: 13
verifyREPOSITORY
verifyTARGETS
1021. Composite metric calculation with inconsistent dependant metadata versions: 3
2004. Targets without an ORACLE_HOME association: 2
2007. Targets with unpromoted ORACLE_HOME target: 2
verifyUSERS

I usually follow this sequence of actions when troubleshooting repository problems:
1. Verify the module with the detail option. Increasing the level might also show more problems, or problems related to the current one.
2. Dump the module and check for any unusual activity.
3. Check repository database alert log for any errors.
4. Check emagent logs for any errors.
5. Check OMS logs for any errors.

Troubleshoot Stuck PING jobs

Looking at the first problem reported for verifyAGENTS (Stuck PING jobs), we can easily spot the relation between the verifyAGENTS, verifyJOBS and verifyOMS modules, where the same problem occurs. For some reason there are ten ping jobs which are stuck and have been running for more than 24 hours.

The best approach is to run verify against any of these modules with the -detail option. This shows more information and eventually helps analyze the problem. Running a detailed report for AGENTS and OMS didn't help and didn't show much information related to the stuck ping jobs. However, running a detailed report for JOBS, we were able to identify the job_id, the job_name and when the job was started:

[oracle@oem bin]$ ./repvfy verify jobs -detail

JOB_ID                           EXECUTION_ID                     JOB_NAME                                 START_TIME
-------------------------------- -------------------------------- ---------------------------------------- --------------------
ECA6DE1A67B43914E0432084800AB548 ECA6DE1A67B63914E0432084800AB548 PINGCFMJOB_ECA6DE1A67B33914E0432084800AB 03-DEC-2013 19:02:29

So we can see that the stuck job was started at 19:02 on the 3rd of December, while the time of the check was the 23rd of January.

Now we can say that there is a problem with the jobs rather than the agents or the OMS; the problems in those two modules appeared as a result of the stuck job, so we should focus on the JOBS module.

Running analyze against the job shows the same thing as verify with the detail option; it is more appropriate when we have multiple job issues and want to see the details of a particular one.
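
For example, something along these lines, reusing the -guid syntax from the dump commands below (check repvfy -h5 for the exact object name supported by analyze):

[oracle@oem bin]$ ./repvfy analyze job -guid ECA6DE1A67B43914E0432084800AB548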

Dumping the job shows a lot of useful info from the MGMT_ tables; of particular interest are the details of the execution:

[oracle@oem bin]$ ./repvfy dump job -guid ECA6DE1A67B43914E0432084800AB548

[----- MGMT_JOB_EXEC_SUMMARY ------------------------------------------------]

EXECUTION_ID                     STATUS                           TLI QUEUE_ID                         TIMEZONE_REGION                SCHEDULED_TIME       EXPECTED_START_TIME  START_TIME           END_TIME                RETRIED
-------------------------------- ------------------------- ---------- -------------------------------- ------------------------------ -------------------- -------------------- -------------------- -------------------- ----------
ECA6DE1A67B63914E0432084800AB548 02-Running                         1                                  +00:00                         03-DEC-2013 18:59:25 03-DEC-2013 18:59:25 03-DEC-2013 19:02:29                               0

Again we can confirm that the job is still running, and the next step would be to dump the execution, which shows on which step the job is waiting or hanging. That's just an example, because in my case there were no steps in the job execution:

[oracle@oem bin]$ ./repvfy dump execution -guid ECA6DE1A67B43914E0432084800AB548
[oracle@oem bin]$ ./repvfy dump step -id 739148

Checking the job system health can also be useful, as it shows some job history, scheduled jobs and some performance metrics:

[oracle@oem bin]$ ./repvfy dump job_health

Back to our problem, we may query MGMT_JOB to get the job name and confirm that it's a system job run by SYSMAN:

SQL> SELECT JOB_ID, JOB_NAME,JOB_OWNER, JOB_DESCRIPTION,JOB_TYPE,SYSTEM_JOB,JOB_STATUS FROM MGMT_JOB WHERE UPPER(JOB_NAME) like '%PINGCFM%'

JOB_ID                           JOB_NAME                                                     JOB_OWNER  JOB_DESCRIPTION                                              JOB_TYPE        SYSTEM_JOB JOB_STATUS
-------------------------------- ------------------------------------------------------------ ---------- ------------------------------------------------------------ --------------- ---------- ----------
ECA6DE1A67B43914E0432084800AB548 PINGCFMJOB_ECA6DE1A67B33914E0432084800AB548                  SYSMAN     This is a Confirm EMD Down test job                          ConfirmEMDDown            2          0

We may try to stop the job using emcli and the job name:

[oracle@oem bin]$ emcli stop_job -name=PINGCFMJOB_ECA6DE1A67B33914E0432084800AB
Error: The job/execution is invalid (or non-existent)

If that doesn't work, use the emdiag kit to clean up the repository part:

./repvfy verify jobs -test 1998 -fix

Please enter the SYSMAN password:

-- --------------------------------------------------------------------- --
-- REPVFY: 2014.0114     Repository: 12.1.0.3.0     27-Jan-2014 18:18:36 --
---------------------------------------------------------------------------
-- Module: JOBS                                     Test: 1998, Level: 2 --
-- --------------------------------------------------------------------- --
-- -- -- - Running in FIX mode: Data updated for all fixed tests - -- -- --
-- --------------------------------------------------------------------- --

The repository is now OK, but this does not remove the stuck thread at the OMS level. In order for the OMS to get healthy again it needs to be restarted:

cd $OMS_HOME/bin
emctl stop oms
emctl start oms

After OMS was restarted there were no stuck jobs anymore!

I still wanted to know why that happened. Although there were a few bugs on MOS, they were not very applicable and I didn't find any of their symptoms in my case. After checking the repository database alert log I found a few disturbing messages:

  Tns error struct:
Time: 03-DEC-2013 19:04:01
TNS-12637: Packet receive failed
ns secondary err code: 12532
.....
opiodr aborting process unknown ospid (15301) as a result of ORA-609
 opiodr aborting process unknown ospid (15303) as a result of ORA-609
 opiodr aborting process unknown ospid (15299) as a result of ORA-609
2013-12-03 19:07:58.156000 +00:00

I also found a lot of similar messages on the target databases:

  Time: 03-DEC-2013 19:05:08
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505

That pretty much matches the time when the job got stuck (19:02:29), so I assume there was some network glitch at that time which caused the ping job to get stuck. The solution was simply to run repvfy with the fix option and then restart the OMS service.

In case the job gets stuck again after the restart, consider increasing the OMS property oracle.sysman.core.conn.maxConnForJobWorkers; there is a MOS note covering this scenario.
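
The property can be set with emctl on the OMS host. A rough sketch, where the value of 50 is purely an example (pick one appropriate for your environment; an OMS restart is usually needed for it to take effect):

cd $OMS_HOME/bin
./emctl set property -name oracle.sysman.core.conn.maxConnForJobWorkers -value 50
./emctl stop oms
./emctl start oms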

EMDIAG repvfy blog series:
  • EMDIAG Repvfy 12c kit – installation
  • EMDIAG Repvfy 12c kit – basics
  • EMDIAG Repvfy 12c kit – troubleshoot Availability module
  • EMDIAG Repvfy 12c kit – troubleshoot Exadata module
  • EMDIAG Repvfy 12c kit – troubleshoot Plugins module
  • EMDIAG Repvfy 12c kit – troubleshoot Targets module

 

Categories: oracle Tags: , ,

EMDIAG Repvfy 12c kit – basics

October 30th, 2013 No comments

This is the second post in the EMDIAG repvfy kit series, covering the basics of the tool. Having already installed the kit in the earlier post, it's now time to go over some basics before we start troubleshooting.

There are three main commands with repvfy:

  • verify – repository-wide verification
  • analyze – objects specific verification/analysis
  • dump  – dump specific repository object

As you can tell from the descriptions, verify is run against the whole repository and doesn't require any arguments by default, while the analyze and dump commands require a specific object to be given. To get a list of all available commands of the kit, run repvfy -h1.
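
For quick reference, these are the help switches used throughout this post, all run from the emdiag bin directory:

./repvfy -h1    # list all available commands
./repvfy -h4    # list the verification modules
./repvfy -h5    # list the objects supported by analyze
./repvfy -h6    # list the objects supported by dump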

 

Verify command

Let's make something clear at the beginning: the verify command runs a repository-wide verification consisting of many tests, which are first grouped into modules and second categorized into several levels. To get a list of available modules run repvfy -h4; there are more than 30 modules and I won't go into detail on each of them, but the most useful are Agents, Plugins, Exadata, Repository and Targets. The list of levels can be found at the end of the post; it's important to say that levels are cumulative and by default tests are run at level 2!

When investigating or debugging a problem with the repository, always start with the verify command. It's a good starting point to run verify without any arguments: it goes through all modules, gives you a summary of which problems (violations) are present and provides an initial look at the health of the repository, before you start debugging a specific problem.

So here is how verify output looked for my OEM repository:

[oracle@oem bin]$ ./repvfy verify

Please enter the SYSMAN password:

-- --------------------------------------------------------------------- --
-- REPVFY: 2013.1008     Repository: 12.1.0.3.0     29-Oct-2013 11:30:37 --
---------------------------------------------------------------------------
-- Module:                                          Test:   0, Level: 2 --
-- --------------------------------------------------------------------- --

verifyAGENTS
verifyASLM
verifyAVAILABILITY
1002. Disabled response metrics (16570376): 2
verifyBLACKOUTS
verifyCAT
verifyCORE
verifyECM
1002. Unregistered ECM metadata tables: 2
verifyEMDIAG
verifyEVENTS
verifyEXADATA
2001. Exadata plugin version mismatches: 5
verifyJOBS
verifyJVMD
verifyLOADERS
verifyMETRICS
verifyNOTIFICATIONS
verifyOMS
verifyPLUGINS
1003. Plugin metadata versions out of sync: 13
verifyREPOSITORY
verifyTARGETS
1021. Composite metric calculation with inconsistent dependant metadata versions: 3
2004. Targets without an ORACLE_HOME association: 2
2007. Targets with unpromoted ORACLE_HOME target: 2
verifyUSERS

The verify command can also be run with the -detail argument to get more details on the problems found. It also shows which test found the problem and what actions can be taken to correct it. That's useful for another reason: it prints the target name and guid, which can be used for further analysis with the analyze and dump commands.

The command can also be run with the -level argument, starting with zero for fatal errors and increasing to nine for minor errors and best practices; the list of levels can be found at the end of the post.
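
For example, to run a deeper check against a single module, something along these lines should work (a sketch, using the same option syntax as the analyze example below):

[oracle@oem bin]$ ./repvfy verify targets -level 9 -detail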

 

Analyze command

The analyze command is run against a specific target, which can be specified either by its name or by its unique identifier (guid). To get a list of supported targets run repvfy -h5. The analyze command is very similar to the verify command, except that it's run against a specific target. Again it can be run with the -level and -detail arguments, like this:

[oracle@oem bin]$ ./repvfy analyze exadata -guid 6744EED794F4CCCDBA79EC00332F65D3 -level 9

Please enter the SYSMAN password:

-- --------------------------------------------------------------------- --
-- REPVFY: 2013.1008     Repository: 12.1.0.3.0     29-Oct-2013 12:00:09 --
---------------------------------------------------------------------------
-- Module: EXADATA                                  Test:   0, Level: 9 --
-- --------------------------------------------------------------------- --

analyzeEXADATA
2001. Exadata plugin version mismatches: 1
6002. Exadata components without a backup Agent: 4
6006. Check for DB_LOST_WRITE_PROTECT: 1
6008. Check for redundant control files: 5

For that Exadata target we can see there are a few more problems found at level 9, in addition to the plugin version mismatch found earlier at level 2.

One of the next posts will be dedicated to troubleshooting and fixing problems in Exadata module.

 

Dump command

The dump command is used to dump all the information about a specific repository object; like the analyze command, it expects either a target name or a target guid. For a list of supported targets run repvfy -h6.

I won't show the full output because it dumps all the details about that target: more than 2000 lines. If you run the dump command against the same target used in analyze you will get a ton of information, such as the targets associated with this Exadata (hosts, ILOMs, databases, instances), the list of monitoring agents, the plugin version, some address details and a long list of target alerts/warnings.

It seems rather useless because it just dumps a lot of information, but it actually helped me identify the plugin version mismatch problem I had within the Exadata module.
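
For reference, the invocation against the same Exadata target would look roughly like this; given the size of the output I'd redirect it to a file:

[oracle@oem bin]$ ./repvfy dump exadata -guid 6744EED794F4CCCDBA79EC00332F65D3 > exadata_dump.txt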

 

Repository verification and object analysis levels:

0  - Fatal issues (functional breakdown)
 These tests highlight fatal errors found in the repository. These errors will prevent EM from functioning
 normally and should be addressed straight away.

1  - Critical issues (functionality blocked)

2  - Severe issues (restricted functionality)

3  - Warning issues (potential functional issue)
 These tests are meant as 'warning', to highlight issues which could lead to potential problems.

4 - Informational issues (potential functional issue)
 These tests are informational only. They represent best practices, potential issues, or just areas to verify.

5 - Currently not used

6  - Best practice violations
 These tests highlight discrepancies between the known best practices and the actual implementation
 of the EM environment.

7  - Purging issues (obsolete data)
 These tests highlight failures to clean up (all the) historical data, or problems with orphan data
 left behind in the repository.

8  - Failure Reports (historical failures)
 These tests highlight historical errors that have occurred.

9  - Tests and internal verifications
 These tests are internal tests, or temporary and diagnostics tests added to resolve specific problems.
 They are not part of the 'regular' kit, and are usually added while debugging or testing specific issues.

 

In the next post I’ll troubleshoot and fix the errors I had within the Availability module – Disabled response metrics.

 

For more information and examples refer to the following notes:
EMDIAG Repvfy 12c Kit – How to Use the Repvfy 12c kit (Doc ID 1427365.1)
EMDIAG REPVFY Kit – Overview (Doc ID 421638.1)

 

EMDIAG repvfy blog series:
  • EMDIAG Repvfy 12c kit – installation
  • EMDIAG Repvfy 12c kit – basics
  • EMDIAG Repvfy 12c kit – troubleshoot Availability module
  • EMDIAG Repvfy 12c kit – troubleshoot Exadata module
  • EMDIAG Repvfy 12c kit – troubleshoot Plugins module
  • EMDIAG Repvfy 12c kit – troubleshoot Targets module

 

Categories: oracle Tags: , ,

Why is my EM12c giving a Metric evaluation error for Exadata cell targets?

October 25th, 2013 No comments

As part of my Cloud Control journey I encountered a strange problem where I got the following error for a few Exadata Storage Server (cell) targets:

Metric evaluation error start - oracle.sysman.emSDK.agent.fetchlet.exception.FetchletException: em_error=Failed to execute_exadata_response.pl ssh -q -o ConnectTimeout=60 -o BatchMode=yes -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey -i /home/oracle/.ssh/id_dsa -l cellmonitor 10.141.8.68 cellcli -xml -e ' list cell attributes msStatus ':

Another symptom was that I received two emails from OEM, one saying that the cell and its services were up:

EM Event: Clear:exacel05.localhost.localdomain - exacel05.localhost.localdomain is Up. MS Status is RUNNING and Ping Status is SUCCESS.

and another one saying there was a Metric evaluation error for the same target:

EM Event: Critical:exacel05.localhost.localdomain - Metric evaluation error start - oracle.sysman.emSDK.agent.fetchlet.exception.FetchletException: em_error=Failed to execute_exadata_response.pl ssh -q -o ConnectTimeout=60 -o BatchMode=yes -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey -i /home/oracle/.ssh/id_dsa -l cellmonitor 10.141.8.68 cellcli -xml -e ' list cell attributes msStatus ':

I have to say that the error didn't come up by itself; it manifested after I had to redeploy the Exadata plugin on a few agents. If you have ever had to do this you would know that before removing the plugin from an agent you need to make sure the agent is not the primary monitoring agent for any Exadata targets. In my case a few of the agents were Monitoring Agents for the cells, and I had to swap them with the Backup Monitoring Agents so I would be able to redeploy the plugin on the primary monitoring agent.

After I redeployed the plugin, I tried to revert to the initial configuration, but for some reason the configuration got messed up and I ended up with different agents monitoring the cell targets than at the beginning.

It turns out that one of the monitoring agents wasn't able to connect to the cell, and that's why I got the email notifications and the Metric evaluation errors for the cells. Although that's not a real problem, it's quite annoying to receive such alerts and to have all these targets showing Metric collection error icons in OEM, or being reported with status Down.

Let’s first check which are the monitoring agents for that cell target from the OEM repository:

SQL> select target_name, target_type, agent_name, agent_type, agent_is_master
from MGMT$AGENTS_MONITORING_TARGETS
where target_name = 'exacel05.localhost.localdomain';

TARGET_NAME                      TARGET_TYPE     AGENT_NAME                         AGENT_TYPE AGENT_IS_MASTER
-------------------------------- --------------- ---------------------------------- ---------- ---------------
exacel05.localhost.localdomain   oracle_exadata  exadb03.localhost.localdomain:3872 oracle_emd               0
exacel05.localhost.localdomain   oracle_exadata  exadb02.localhost.localdomain:3872 oracle_emd               1

Looking at the cell secure log, we can see that one of the monitoring agents wasn't able to connect because of failed publickey authentication:

Oct 23 11:39:54 exacel05 sshd[465]: Connection from 10.141.8.65 port 14594
Oct 23 11:39:54 exacel05 sshd[465]: Failed publickey for cellmonitor from 10.141.8.65 port 14594 ssh2
Oct 23 11:39:54 exacel05 sshd[466]: Connection closed by 10.141.8.65
Oct 23 11:39:55 exacel05 sshd[467]: Connection from 10.141.8.66 port 27799
Oct 23 11:39:55 exacel05 sshd[467]: Found matching DSA key: cf:99:0a:37:1a:e5:84:dc:a8:8a:b9:6f:0c:fd:05:c5
Oct 23 11:39:55 exacel05 sshd[468]: Postponed publickey for cellmonitor from 10.141.8.66 port 27799 ssh2
Oct 23 11:39:55 exacel05 sshd[467]: Found matching DSA key: cf:99:0a:37:1a:e5:84:dc:a8:8a:b9:6f:0c:fd:05:c5
Oct 23 11:39:55 exacel05 sshd[467]: Accepted publickey for cellmonitor from 10.141.8.66 port 27799 ssh2
Oct 23 11:39:55 exacel05 sshd[467]: pam_unix(sshd:session): session opened for user cellmonitor by (uid=0)

That's confirmed by checking the SSH authorized_keys file, which also shows which monitoring agents were configured initially:

 [root@exacel05 .ssh]# grep exadb /home/cellmonitor/.ssh/authorized_keys | cut -d = -f 2
oracle@exadb03.localhost.localdomain
oracle@exadb04.localhost.localdomain

Another way to check which monitoring agents were configured initially is to look at the snmpSubscriber attribute of that specific cell:

[root@exacel05 ~]# cellcli -e list cell attributes snmpSubscriber
((host=exadb03.localhost.localdomain,port=3872,community=public),(host=exadb04.localhost.localdomain,port=3872,community=public))

It's obvious that exadb02 shouldn't be monitoring this target; it should be exadb04 instead. I believe that when I redeployed the Exadata plugin this agent wasn't eligible to monitor Exadata targets any more and was replaced with another one, but that's just a guess.

There are two solutions for that problem:

1. Move (relocate) target definition and monitoring to the correct agent:

I wasn't able to find a way to do that through the OEM Console, so for that purpose I used emcli. Based on the MGMT$AGENTS_MONITORING_TARGETS query and the snmpSubscriber attribute, I was able to find which agent was configured initially and which one had to be removed. Then I used emcli to relocate the monitoring of that target to the correct agent, the one which was configured initially:

[oracle@oem ~]$ emcli relocate_targets -src_agent=exadb02.localhost.localdomain:3872 -dest_agent=exadb04.localhost.localdomain:3872 -target_name=exacel05.localhost.localdomain -target_type=oracle_exadata -copy_from_src
Moved all targets from exadb02.localhost.localdomain:3872 to exadb04.localhost.localdomain:3872

2. Reconfigure the cell to use the new monitoring agent:

Add the current monitoring agent's SSH public key to the authorized_keys file on the cell:

Place the oracle user's DSA public key (/home/oracle/.ssh/id_dsa.pub) from exadb02 into exacel05:/home/cellmonitor/.ssh/authorized_keys.
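
A rough sketch of doing that by hand; the key content below is obviously abbreviated, so copy the real one from id_dsa.pub:

[oracle@exadb02 ~]$ cat /home/oracle/.ssh/id_dsa.pub
ssh-dss AAAA... oracle@exadb02.localhost.localdomain

[root@exacel05 ~]# echo "ssh-dss AAAA... oracle@exadb02.localhost.localdomain" >> /home/cellmonitor/.ssh/authorized_keys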

Then change the cell snmpSubscriber attribute as well:

[root@exacel05~]# cellcli -e "alter cell snmpSubscriber=((host='exadb03.localhost.localdomain',port=3872,community=public),(host='exadb02.localhost.localdomain',port=3872,community=public))"
Cell exacel05 successfully altered
[root@exacel05~]# cellcli -e list cell attributes snmpSubscriber
((host=exadb03.localhost.localdomain,port=3872,community=public),(host=exadb02.localhost.localdomain,port=3872,community=public))

After that the status of the Exadata Storage Server (cell) target in OEM came back up and the metrics were fine again.

 

Categories: oracle Tags: , , , ,

EMDIAG Repvfy 12c kit – installation

October 21st, 2013 No comments

Recently I've been doing a lot of OEM work for a customer and I decided to tidy and clean things up a little bit. The OEM version was 12cR2; it was used for monitoring a few Exadatas and had around 650 targets. I planned to upgrade OEM to 12cR3, upgrade all agents and plugins, promote any unpromoted targets and delete any old/stale targets; at the end of the day I wanted an up-to-date version of OEM and up-to-date information about the monitored targets.

The upgrade to 12cR3 went easily and smoothly (described here in detail), and the upgrade of agents and plugins went fairly easily as well. After some time everything was looking fine, but I wanted to be sure I hadn't missed anything, and from one thing to another I found the EMDIAG Repository Verification utility. It's something I first heard Julian Dontcheff mention at one of the BGOUG conferences and something I had always wanted to try.

So in a series of posts I will describe the installation of the emdiag kit and how I fixed some of the problems I faced with my OEM repository.

What is EMDIAG kit

Basically EMDIAG consists of three kits:
– Repository verification (repvfy) – extracts data from the repository and runs a series of tests against it to help diagnose problems with OEM
– Agent verification (agtvfy) – troubleshoots problems with OEM agents
– OMS verification (omsvfy) – troubleshoots problems with the OEM management service

In this and following posts I’ll be referring to the first one – repository verification kit.

The EMDIAG repvfy kit consists of a set of tests which are run against the OEM repository to help the EM administrator troubleshoot, analyze and resolve OEM-related problems.

The kit uses shell and Perl scripts as wrappers and a lot of SQL scripts to run the different tests and collect information from the repository.

It's good to know that the emdiag kit has been around for quite a long time and is also available for Grid Control 10g and 11g and DB Control 10g and 11g. These posts refer to the EMDIAG Repvfy 12c kit, which can be installed only in a Cloud Control 12c Management Repository. The repvfy 12c kit will also be included in RDA Release 4.30.

Apart from using the emdiag kit to find and resolve a specific problem, it's good practice to run it at least once per week or month to check for any newly reported problems.

Installing EMDIAG repvfy kit

Installation is pretty simple and straightforward. First download the EMDIAG repvfy 12c kit from the following MOS note:
EMDIAG REPVFY Kit for Cloud Control 12c – Download, Install/De-Install and Upgrade (Doc ID 1426973.1)

Just set your ORACLE_HOME to the database hosting the Cloud Control Management Repository and create a new directory, emdiag, where the tool will be unzipped:

[oracle@em ~]$ . oraenv
ORACLE_SID = [oracle] ? GRIDDB
The Oracle base for ORACLE_HOME=/opt/oracle/product/11.2.0/db_1 is /opt/oracle

[oracle@em ~]$ cd $ORACLE_HOME
[oracle@em db_1]$ mkdir emdiag
[oracle@em db_1]$ cd emdiag/
[oracle@em emdiag]$ unzip -q /tmp/repvfy12c20131008.zip
[oracle@em emdiag]$ cd bin
[oracle@em bin]$ ./repvfy install

Please enter the SYSMAN password:

...
...

COMPONENT            INFO
 -------------------- --------------------

EMDIAG Version       2013.1008
EMDIAG Edition       2
Repository Version   12.1.0.3.0
Database Version     11.2.0.3.0
Test Version         2013.1015
Repository Type      CENTRAL
Verify Tests         496
Object Tests         196
Deployment           SMALL

[oracle@em ~]$

And that's it: the emdiag kit for repository verification is now installed and we can start digging into the OEM repository. In the next post we'll cover the basics and the commands used for verification and diagnostics.

EMDIAG repvfy blog series:
  • EMDIAG Repvfy 12c kit – installation
  • EMDIAG Repvfy 12c kit – basics
  • EMDIAG Repvfy 12c kit – troubleshoot Availability module
  • EMDIAG Repvfy 12c kit – troubleshoot Exadata module
  • EMDIAG Repvfy 12c kit – troubleshoot Plugins module
  • EMDIAG Repvfy 12c kit – troubleshoot Targets module

 

Categories: oracle Tags: , ,

Oracle EM auto discovery fails if the host is under blackout

August 5th, 2013 No comments

During an Exadata project I had to put things in order and tidy up the Enterprise Manager targets. I decided to discover all targets, promote the ones which were missing and delete old/stale ones. I got a strange error and decided to share it in case someone else hits it. I'm running Oracle Enterprise Manager Cloud Control 12c Release 2.

When you try to run auto discovery from the console for a host you almost immediately get the following error:

Run Discovery Now failed on host oraexa201.host.net: oracle.sysman.core.disc.common.AutoDiscoveryException: Unable to run on discovery on demand.RunCollection: exception occurred: oracle.sysman.gcagent.task.TaskPreExecuteCheckException: non-existent, broken, or not fully loaded target

When reviewing the agent log, the following exception can be seen:

tail $ORACLE_HOME/agent/sysman/log/gcagent.log

2013-07-16 11:26:06,229 [34:2C47351F] INFO - >>> Reporting exception: oracle.sysman.emSDK.agent.client.exception.NoSuchMetricException: the DiscoverNow metric does not exist for host target oraexa201.host.net (request id 1) <<<
 oracle.sysman.emSDK.agent.client.exception.NoSuchMetricException: the DiscoverNow metric does not exist for host target oraexa201.host.net

I got another error message during my second (test) run:

2013-08-02 15:20:47,155 [33:B3DBCC59] INFO - >>> Reporting response: RunCollectionResponse ([DiscoverTargets : host.oraexa201.host.net oracle.sysman.emSDK.agent.client.exception.RunCollectionItemException: Metric evaluation failed : RunCollection: exception occurred: oracle.sysman.gcagent.task.TaskPreExecuteCheckException: non-existent, broken, or not fully loaded target @ yyyy-MM-dd HH:mm:ss,SSS]) (request id 1) <<<

Although emctl status agent shows that the last successful heartbeat and upload are up to date, you still cannot discover targets on the host.
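
A quick way to see that from the host is to check the agent status and look at the "Last successful heartbeat to OMS" and "Last successful upload" lines; the agent home path below follows the layout used above and may differ on your system:

[oracle@oraexa201 ~]$ $ORACLE_HOME/agent/bin/emctl status agent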

This is caused by the fact that the host is under BLACKOUT!

1. Through the console end the blackout for that host:

Go to Setup -> Manage Cloud Control -> Agents, find the agent for which you experience the problem and click on it. You can then clearly see that the status of the
agent is “Under Blackout”. Simply select the Agent drop-down menu -> Control and then End Blackout.

2. Using emcli, first log in, list the blackouts and then stop the blackout:

[oracle@em ~]$ emcli login -username=sysman
Enter password

Login successful

[oracle@em ~]$ emcli get_blackouts
Name                                   Created By  Status   Status ID  Next Start           Duration  Reason                      Frequency  Repeat  Start Time           End Time             Previous End         TZ Region      TZ Offset

test_blackout                          SYSMAN      Started  4          2013-08-02 15:06:43  01:00     Hardware Patch/Maintenance  once       none    2013-08-02 15:06:43  2013-08-02 16:06:43  none                 Europe/London  +00:00

List the targets which are under blackout and then stop the blackout:

[oracle@em ~]$ emcli get_blackout_targets -name="test_blackout"
Target Name                                         Target Type      Status       Status ID
has_oraexa201.host.net                              has              In Blackout  1
oraexa201.host.net                                  host             In Blackout  1
TESTDB_TESTDB1                                      oracle_database  In Blackout  1
oraexa201.host.net:3872                             oracle_emd       In Blackout  1
Ora11g_gridinfrahome1_1_oraexa201                   oracle_home      In Blackout  1
OraDb11g_home1_2_oraexa201                          oracle_home      In Blackout  1
agent12c1_3_oraexa201                               oracle_home      In Blackout  1
sbin12c1_4_oraexa201                                oracle_home      In Blackout  1
LISTENER_oraexa201.host.net                         oracle_listener  In Blackout  1
+ASM_oraexa2-cluster                                osm_cluster      In Blackout  1
+ASM4_oraexa201.host.net                            osm_instance     In Blackout  1

[oracle@em ~]$ emcli stop_blackout -name="test_blackout"
Blackout "test_blackout" stopped successfully

And now when the discovery is run again:

Run Discovery Now – Completed Successfully

I was unable to get an error initially when I set the blackout, but then got the error after restarting the EM agent.

 

22 Oct 2013 Update:

After the update to 12c (described here), a meaningful error is now raised when you try to discover targets during an agent blackout:

Run Discovery Now failed on host oraexa201.host.net: oracle.sysman.core.disc.common.AutoDiscoveryException: Unable to run on discovery on demand.the target is currently blacked out

 

 

Categories: oracle Tags: , ,