
EMDIAG Repvfy 12c kit – troubleshooting part 1

March 26th, 2014

The following blog post continues the EMDIAG repvfy kit series and focuses on how to troubleshoot and solve the problems reported by the kit.

The repository verification kit reported a number of problems with our repository, which we are about to troubleshoot and solve one by one. It's important to note that some of the problems are related, so solving one problem could also solve another.

Here is the output I’ve got for my OEM repository:

-- --------------------------------------------------------------------- --
-- REPVFY: 2014.0114     Repository: 12.1.0.3.0     23-Jan-2014 13:35:41 --
---------------------------------------------------------------------------
-- Module:                                          Test:   0, Level: 2 --
-- --------------------------------------------------------------------- --

verifyAGENTS

1004. Stuck PING jobs: 10
verifyASLM
verifyAVAILABILITY
1002. Disabled response metrics (16570376): 2
verifyBLACKOUTS
verifyCAT
verifyCORE
verifyECM
1002. Unregistered ECM metadata tables: 2
verifyEMDIAG
1001. Undefined verification modules: 1
verifyEVENTS
verifyEXADATA
2001. Exadata plugin version mismatches: 5
verifyJOBS
2001. System jobs running for more than 24hr: 1
verifyJVMD
verifyLOADERS
verifyMETRICS
verifyNOTIFICATIONS
verifyOMS
1002. Stuck PING jobs: 10
verifyPLUGINS
1003. Plugin metadata versions out of sync: 13
verifyREPOSITORY
verifyTARGETS
1021. Composite metric calculation with inconsistent dependant metadata versions: 3
2004. Targets without an ORACLE_HOME association: 2
2007. Targets with unpromoted ORACLE_HOME target: 2
verifyUSERS

I usually follow this sequence of actions when troubleshooting repository problems (a minimal command sketch follows the list):
1. Verify the module with the -detail option. Increasing the level might also reveal more problems, or problems related to the current one.
2. Dump the module and check for any unusual activity.
3. Check repository database alert log for any errors.
4. Check emagent logs for any errors.
5. Check OMS logs for any errors.
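Purely as an illustration, here is a minimal shell sketch of that sequence for the JOBS module. The log locations are typical EM 12c defaults and the environment variables are placeholders, so adjust them to your own setup:

./repvfy verify jobs -detail              # step 1: detailed verification
./repvfy verify jobs -level 9             # step 1: a higher level may reveal related problems
./repvfy dump job_health                  # step 2: module-level dump (used later in this post)
# steps 3-5: scan the repository database alert log, the agent log and the OMS log
tail -200 $ORACLE_BASE/diag/rdbms/$DB_UNIQUE_NAME/$ORACLE_SID/trace/alert_$ORACLE_SID.log
tail -200 $AGENT_INSTANCE_HOME/sysman/log/gcagent.log
tail -200 $OMS_INSTANCE_BASE/em/EMGC_OMS1/sysman/log/emoms.trc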

Troubleshoot Stuck PING jobs

Looking at the first problem reported for verifyAGENTS, the stuck PING jobs, we can easily spot the relation between the verifyAGENTS, verifyJOBS and verifyOMS modules, where the same problem occurs. For some reason there are ten ping jobs which are stuck and have been running for more than 24 hours.

The best approach is to run verify against any of these modules with the -detail option. This shows more information and eventually helps analyze the problem. Running a detailed report for AGENTS and OMS didn't help and didn't show much information related to the stuck ping jobs. However, running a detailed report for JOBS identified the job_id, the job_name and when the job was started:

[oracle@oem bin]$ ./repvfy verify jobs -detail

JOB_ID                           EXECUTION_ID                     JOB_NAME                                 START_TIME
-------------------------------- -------------------------------- ---------------------------------------- --------------------
ECA6DE1A67B43914E0432084800AB548 ECA6DE1A67B63914E0432084800AB548 PINGCFMJOB_ECA6DE1A67B33914E0432084800AB 03-DEC-2013 19:02:29

So we can see that the stuck job was started at 19:02 on the 3rd of December, while the check was run on the 23rd of January.

Now we can say that there is a problem with the jobs rather than with the agents or the OMS; the findings in those two modules appeared as a result of the stuck job, so we should focus on the JOBS module.

Running analyze against the job shows the same thing as verify with the -detail option; its usage is appropriate when there are multiple job issues and we want to see the details for a particular one, as in the sketch below.
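A hedged example of such an invocation, assuming the job guid found above and assuming that job is among the object types listed by repvfy -h5:

[oracle@oem bin]$ ./repvfy analyze job -guid ECA6DE1A67B43914E0432084800AB548 -detail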

Dumping the job shows a lot of useful information from the MGMT_ tables; of particular interest are the details of the execution:

[oracle@oem bin]$ ./repvfy dump job -guid ECA6DE1A67B43914E0432084800AB548

[----- MGMT_JOB_EXEC_SUMMARY ------------------------------------------------]

EXECUTION_ID                     STATUS                           TLI QUEUE_ID                         TIMEZONE_REGION                SCHEDULED_TIME       EXPECTED_START_TIME  START_TIME           END_TIME                RETRIED
-------------------------------- ------------------------- ---------- -------------------------------- ------------------------------ -------------------- -------------------- -------------------- -------------------- ----------
ECA6DE1A67B63914E0432084800AB548 02-Running                         1                                  +00:00                         03-DEC-2013 18:59:25 03-DEC-2013 18:59:25 03-DEC-2013 19:02:29                               0

Again we can confirm that the job is still running. The next step would be to dump the execution, which shows on which step the job is waiting or hanging. The commands below are just examples, because in my case the job execution didn't have any steps:

[oracle@oem bin]$ ./repvfy dump execution -guid ECA6DE1A67B43914E0432084800AB548
[oracle@oem bin]$ ./repvfy dump step -id 739148

Checking the job system health can also be useful; it shows some job history, scheduled jobs and some performance metrics:

[oracle@oem bin]$ ./repvfy dump job_health

Back to our problem: we can query MGMT_JOB to get the job name and confirm that it's a system job owned by SYSMAN:

SQL> SELECT JOB_ID, JOB_NAME,JOB_OWNER, JOB_DESCRIPTION,JOB_TYPE,SYSTEM_JOB,JOB_STATUS FROM MGMT_JOB WHERE UPPER(JOB_NAME) like '%PINGCFM%'

JOB_ID                           JOB_NAME                                                     JOB_OWNER  JOB_DESCRIPTION                                              JOB_TYPE        SYSTEM_JOB JOB_STATUS
-------------------------------- ------------------------------------------------------------ ---------- ------------------------------------------------------------ --------------- ---------- ----------
ECA6DE1A67B43914E0432084800AB548 PINGCFMJOB_ECA6DE1A67B33914E0432084800AB548                  SYSMAN     This is a Confirm EMD Down test job                          ConfirmEMDDown            2          0

We can try to stop the job using emcli and the job name:

[oracle@oem bin]$ emcli stop_job -name=PINGCFMJOB_ECA6DE1A67B33914E0432084800AB
Error: The job/execution is invalid (or non-existent)

If that doesn't work, use the emdiag kit to clean up the repository side:

./repvfy verify jobs -test 1998 -fix

Please enter the SYSMAN password:

-- --------------------------------------------------------------------- --
-- REPVFY: 2014.0114     Repository: 12.1.0.3.0     27-Jan-2014 18:18:36 --
---------------------------------------------------------------------------
-- Module: JOBS                                     Test: 1998, Level: 2 --
-- --------------------------------------------------------------------- --
-- -- -- - Running in FIX mode: Data updated for all fixed tests - -- -- --
-- --------------------------------------------------------------------- --

The repository is now OK, but this does not remove the stuck thread at the OMS level. For the OMS to become healthy again, it needs to be restarted:

cd $OMS_HOME/bin
emctl stop oms
emctl start oms

After the OMS was restarted there were no stuck jobs anymore!

I still wanted to know why that happened. Although there were a few bugs on MOS, they were not very applicable and I didn't find any of their symptoms in my case. After checking the repository database alert log I found a few disturbing messages:

  Tns error struct:
Time: 03-DEC-2013 19:04:01
TNS-12637: Packet receive failed
ns secondary err code: 12532
.....
opiodr aborting process unknown ospid (15301) as a result of ORA-609
 opiodr aborting process unknown ospid (15303) as a result of ORA-609
 opiodr aborting process unknown ospid (15299) as a result of ORA-609
2013-12-03 19:07:58.156000 +00:00

I also found a lot of similar messages on the target databases:

  Time: 03-DEC-2013 19:05:08
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505

That pretty much matches the time when the job got stuck, 19:02:29. So I assume there was a network glitch at that time that caused the ping job to get stuck. The solution was simply to run repvfy with the fix option and then restart the OMS service.

If the job gets stuck again after the restart, consider increasing the OMS property oracle.sysman.core.conn.maxConnForJobWorkers (the relevant MOS note covers this in more detail).
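A minimal sketch of how that property could be raised with emctl; the value 50 is purely illustrative and the right number depends on your environment:

cd $OMS_HOME/bin
./emctl set property -name oracle.sysman.core.conn.maxConnForJobWorkers -value 50
# the change typically requires an OMS restart to take effect
./emctl stop oms
./emctl start oms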

EMDIAG repvfy blog series:
  • EMDIAG Repvfy 12c kit – installation
  • EMDIAG Repvfy 12c kit – basics
  • EMDIAG Repvfy 12c kit – troubleshoot Availability module
  • EMDIAG Repvfy 12c kit – troubleshoot Exadata module
  • EMDIAG Repvfy 12c kit – troubleshoot Plugins module
  • EMDIAG Repvfy 12c kit – troubleshoot Targets module


EMDIAG Repvfy 12c kit – basics

October 30th, 2013

This is the second post in the emdiag repvfy kit series and covers the basics of the tool. Having installed the kit in the earlier post, it is now time to get the basics down before we start troubleshooting.

There are three main commands with repvfy:

  • verify – repository-wide verification
  • analyze – object-specific verification/analysis
  • dump – dump a specific repository object

As you can tell from the descriptions, verify is run against the whole repository and doesn't require any arguments by default, while the analyze and dump commands require a specific object to be given. To get a list of all available commands of the kit, run repvfy -h1.
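As a quick reference, these are the help switches referenced throughout this series:

[oracle@oem bin]$ ./repvfy -h1    # list all available commands
[oracle@oem bin]$ ./repvfy -h4    # list the verify modules
[oracle@oem bin]$ ./repvfy -h5    # list the analyze object types
[oracle@oem bin]$ ./repvfy -h6    # list the dump object types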

 

Verify command

Let's make something clear at the beginning: the verify command runs a repository-wide verification consisting of many tests, which are first grouped into modules and second categorized into several levels. To get a list of the available modules run repvfy -h4; there are more than 30 modules and I won't go into detail on each one, but the most useful are Agents, Plugins, Exadata, Repository and Targets. The list of levels can be found at the end of the post; it's important to note that the levels are cumulative and by default tests are run at level 2!

When investigating or debugging a problem with the repository, always start with the verify command. Running verify without any arguments is a good starting point: it goes through all modules, gives you a summary of any problems (violations) that are present, and provides an initial look at the health of the repository before you start debugging a specific problem.

So here is how verify output looked for my OEM repository:

[oracle@oem bin]$ ./repvfy verify

Please enter the SYSMAN password:

-- --------------------------------------------------------------------- --
-- REPVFY: 2013.1008     Repository: 12.1.0.3.0     29-Oct-2013 11:30:37 --
---------------------------------------------------------------------------
-- Module:                                          Test:   0, Level: 2 --
-- --------------------------------------------------------------------- --

verifyAGENTS
verifyASLM
verifyAVAILABILITY
1002. Disabled response metrics (16570376): 2
verifyBLACKOUTS
verifyCAT
verifyCORE
verifyECM
1002. Unregistered ECM metadata tables: 2
verifyEMDIAG
verifyEVENTS
verifyEXADATA
2001. Exadata plugin version mismatches: 5
verifyJOBS
verifyJVMD
verifyLOADERS
verifyMETRICS
verifyNOTIFICATIONS
verifyOMS
verifyPLUGINS
1003. Plugin metadata versions out of sync: 13
verifyREPOSITORY
verifyTARGETS
1021. Composite metric calculation with inconsistent dependant metadata versions: 3
2004. Targets without an ORACLE_HOME association: 2
2007. Targets with unpromoted ORACLE_HOME target: 2
verifyUSERS

The verify command can also be run with the -detail argument to get more details on the problems found. It will also show which test found each problem and what actions can be taken to correct it. That's useful for another reason: it prints the target name and guid, which can be used for further analysis with the analyze and dump commands.

The command can also be run with the -level argument, starting with zero for fatal errors and going up to nine for more minor issues and best practices; the list of levels can be found at the end of the post.
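Two illustrative invocations (the module names are the ones reported in my output above):

[oracle@oem bin]$ ./repvfy verify plugins -detail     # details, test ids and suggested corrective actions
[oracle@oem bin]$ ./repvfy verify targets -level 9    # run everything up to and including the level 9 tests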

 

Analyze command

The analyze command is run against a specific target, which can be specified either by its name or by its unique identifier (guid). To get a list of supported targets run repvfy -h5. The analyze command is very similar to the verify command, except that it is run against a specific target. Again it can be run with the -level and -detail arguments, like this:

[oracle@oem bin]$ ./repvfy analyze exadata -guid 6744EED794F4CCCDBA79EC00332F65D3 -level 9

Please enter the SYSMAN password:

-- --------------------------------------------------------------------- --
-- REPVFY: 2013.1008     Repository: 12.1.0.3.0     29-Oct-2013 12:00:09 --
---------------------------------------------------------------------------
-- Module: EXADATA                                  Test:   0, Level: 9 --
-- --------------------------------------------------------------------- --

analyzeEXADATA
2001. Exadata plugin version mismatches: 1
6002. Exadata components without a backup Agent: 4
6006. Check for DB_LOST_WRITE_PROTECT: 1
6008. Check for redundant control files: 5

For that Exadata target we can see that a few more problems are found at level 9, in addition to the plugin version mismatch found earlier at level 2.

One of the next posts will be dedicated to troubleshooting and fixing problems in Exadata module.

 

Dump command

The dump command is used to dump all the information about a specific repository object; like the analyze command, it expects either a target name or a target guid. For a list of supported targets run repvfy -h6.

I won't show any output here because it dumps all the details about that target, more than 2000 lines. If you run the dump command against the same target used in the analyze example you will get a ton of information, such as the targets associated with this Exadata (hosts, ILOMs, databases, instances), the list of monitoring agents, plugin versions, some address details and a long list of target alerts/warnings.

It might seem rather useless because it just dumps a lot of information, but it actually helped me identify the plugin version mismatch problem I had within the Exadata module.
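For reference, the invocation against the same Exadata target would simply be the following (output omitted for the reason above):

[oracle@oem bin]$ ./repvfy dump exadata -guid 6744EED794F4CCCDBA79EC00332F65D3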

 

Repository verification and object analysis levels:

0  - Fatal issues (functional breakdown)
 These tests highlight fatal errors found in the repository. These errors will prevent EM from functioning
 normally and should be addressed straight away.

1  - Critical issues (functionality blocked)

2  - Severe issues (restricted functionality)

3  - Warning issues (potential functional issue)
 These tests are meant as 'warning', to highlight issues which could lead to potential problems.

4 - Informational issues (potential functional issue)
 These tests are informational only. They represent best practices, potential issues, or just areas to verify.

5 - Currently not used

6  - Best practice violations
 These tests highlight discrepancies between the known best practices and the actual implementation
 of the EM environment.

7  - Purging issues (obsolete data)
 These tests highlight failures to clean up (all the) historical data, or problems with orphan data
 left behind in the repository.

8  - Failure Reports (historical failures)
 These tests highlight historical errors that have occurred.

9  - Tests and internal verifications
 These tests are internal tests, or temporary and diagnostics tests added to resolve specific problems.
 They are not part of the 'regular' kit, and are usually added while debugging or testing specific issues.

 

In the next post I’ll troubleshoot and fix the errors I had within the Availability module – Disabled response metrics.

 

For more information and examples refer to following notes:
EMDIAG Repvfy 12c Kit – How to Use the Repvfy 12c kit (Doc ID 1427365.1)
EMDIAG REPVFY Kit – Overview (Doc ID 421638.1)

 

EMDIAG repvfy blog series:
  • EMDIAG Repvfy 12c kit – installation
  • EMDIAG Repvfy 12c kit – basics
  • EMDIAG Repvfy 12c kit – troubleshoot Availability module
  • EMDIAG Repvfy 12c kit – troubleshoot Exadata module
  • EMDIAG Repvfy 12c kit – troubleshoot Plugins module
  • EMDIAG Repvfy 12c kit – troubleshoot Targets module

 


EMDIAG Repvfy 12c kit – installation

October 21st, 2013

Recently I've been doing a lot of OEM work for a customer and I decided to tidy and clean things up a little. The OEM version was 12cR2; it was used for monitoring a few Exadatas and had around 650 targets. I planned to upgrade OEM to 12cR3, upgrade all agents and plugins, promote any unpromoted targets and delete any old/stale targets; at the end of the day I wanted an up-to-date version of OEM and up-to-date information about the monitored targets.

The upgrade to 12cR3 went smoothly (described here in detail), and the upgrade of agents and plugins went fairly easily as well. After some time everything was looking fine, but I wanted to be sure I hadn't missed anything, and from one thing to another I found the EMDIAG Repository Verification utility. I first heard Julian Dontcheff mention it at one of the BGOUG conferences and it's something I had always wanted to try.

So in a series of posts I will describe the installation of the emdiag kit and how I fixed some of the problems I found in my OEM repository.

What is the EMDIAG kit

Basically, EMDIAG consists of three kits:
– Repository verification (repvfy) – extracts data from the repository and runs a series of tests against it to help diagnose problems with OEM
– Agent verification (agtvfy) – troubleshoots problems with OEM agents
– OMS verification (omsvfy) – troubleshoots problems with the Oracle Management Service (OMS)

In this and the following posts I'll be referring to the first one, the repository verification kit.

The EMDIAG repvfy kit consists of a set of tests that are run against the OEM repository to help the EM administrator troubleshoot, analyze and resolve OEM-related problems.

The kit uses shell and Perl scripts as wrappers and a lot of SQL scripts to run the different tests and collect information from the repository.

It's good to know that the emdiag kit has been around for quite a long time and is also available for Grid Control 10g and 11g and DB Control 10g and 11g. These posts refer to the EMDIAG Repvfy 12c kit, which can be installed only in a Cloud Control 12c Management Repository. The repvfy 12c kit will also be included in RDA Release 4.30.

Apart from using the emdiag kit to find and resolve specific problems, it's good practice to run it at least once per week or month to check for any newly reported problems.
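A minimal sketch of such a periodic check, capturing the output to a dated file (the path is illustrative and the SYSMAN password is still entered interactively):

cd $ORACLE_HOME/emdiag/bin
./repvfy verify | tee /tmp/repvfy_$(date +%Y%m%d).log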

Installing EMDIAG repvfy kit

Installation is pretty simple and straightforward. First download the EMDIAG repvfy 12c kit from the following MOS note:
EMDIAG REPVFY Kit for Cloud Control 12c – Download, Install/De-Install and Upgrade (Doc ID 1426973.1)

Just set your ORACLE_HOME to the database hosting the Cloud Control Management Repository and create a new directory, emdiag, where the tool will be unzipped:

[oracle@em ~]$ . oraenv
ORACLE_SID = [oracle] ? GRIDDB
The Oracle base for ORACLE_HOME=/opt/oracle/product/11.2.0/db_1 is /opt/oracle

[oracle@em ~]$ cd $ORACLE_HOME
[oracle@em db_1]$ mkdir emdiag
[oracle@em db_1]$ cd emdiag/
[oracle@em emdiag]$ unzip -q /tmp/repvfy12c20131008.zip
[oracle@em emdiag]$ cd bin
[oracle@em bin]$ ./repvfy install

Please enter the SYSMAN password:

...
...

COMPONENT            INFO
 -------------------- --------------------

EMDIAG Version       2013.1008
EMDIAG Edition       2
Repository Version   12.1.0.3.0
Database Version     11.2.0.3.0
Test Version         2013.1015
Repository Type      CENTRAL
Verify Tests         496
Object Tests         196
Deployment           SMALL

[oracle@em ~]$

And that's it: the emdiag kit for repository verification is now installed and we can start digging into the OEM repository. In the next post we'll cover the basics and the commands used for verification and diagnostics.

EMDIAG repvfy blog series:
  • EMDIAG Repvfy 12c kit – installation
  • EMDIAG Repvfy 12c kit – basics
  • EMDIAG Repvfy 12c kit – troubleshoot Availability module
  • EMDIAG Repvfy 12c kit – troubleshoot Exadata module
  • EMDIAG Repvfy 12c kit – troubleshoot Plugins module
  • EMDIAG Repvfy 12c kit – troubleshoot Targets module

 
