Coskan’s Approach to Oracle

August 2, 2013

oratop from MOS

Filed under: Basics, Linux, Monitoring, Performance, Tips — coskan @ 10:12 am

This is just a pointer post to a tool I discovered today: ORATOP, which monitors databases in real time, much like top on Unix flavours. It is pretty similar to MOATS (the tool from Tanel Poder and Adrian Billington, with a RAC version by Jagjeet Singh), but with a slightly better output (at least for my taste).

It doesn't need anything to be installed, which is the best part of the tool.

oratop – utility for near real-time monitoring of databases, RAC and Single Instance (Doc ID 1500864.1)

Sample output is shown below; it can be a good addition to your performance tuning toolkit.

(screenshot: oratop sample output)

April 19, 2013

Direct Path Reads Again

Filed under: Performance, Tips — coskan @ 7:20 pm

Note: This post is about diagnosing a real-life scenario caused by the direct path read decision change in 11.2.0.3, which is already covered by Tanel Poder in his awesome Optimizer statistics-driven direct path read decision for full table scans (_direct_read_decision_statistics_driven) post.
If you have already read it and know how to approach the scenario, you can easily skip this post and wait another year until I write the next one while I'm working at the bank.

Story

The number of Oracle databases in our estate is high, and they sit on shared storage arrays, which means that if one of them is naughty on I/O it can impact the others.
To be proactive, from time to time I run a bunch of queries in a loop against all of the databases to check the wait profiles from dba_hist_active_sess_history and v$active_session_history,
to see whether we can optimize some of the work and offload unnecessary I/O onto unused CPU by using more memory, or get rid of direct path reads where necessary.

The query is really simple and works for my needs.
The first query gets count(*) from ASH grouped by event, which gives the overall wait profile within ASH.
The second query gets count(*) for the I/O wait classes grouped by sql_id, sql_plan_hash_value, force_matching_signature and event, which gives the top sql_id for I/O waits (not top elapsed time, which is what actually matters to end users).

I’m not saying this is the right approach but practically it works for the purpose.

SQL (the RR columns are ratio_to_report percentages):

prompt ************************************
prompt **** ASH OVERALL WAIT PROFILE
prompt ************************************
SELECT MIN(sample_time) min_ash_available, sysdate-MIN(sample_time) available_duration FROM v$active_session_history;

select * from (
select NVL(event,'CPU') event,count(*),
round((ratio_to_report(sum(1)) over ()*100),1) rr
from gv$active_session_history
WHERE user_id<>0
AND sample_time<trunc(SYSDATE+1) AND sample_time>trunc(sysdate-1)
group by event
order by 2 desc
) where rownum<10;

prompt ************************************
prompt **** ASH I/O by SQL_ID
prompt ************************************

COLUMN force_matching_signature FOR 999999999999999999999999999
select * from (
select
sql_id ,sql_plan_hash_value,force_matching_signature,
NVL(event,'CPU') Event,
count(*),
round((ratio_to_report(sum(1)) over ()*100),1) rr
from gv$active_session_history
where
1=1
AND wait_class LIKE '%I/O'
--AND event IS null
and user_id<>0
AND sql_id IS NOT NULL
group by
sql_id,sql_plan_hash_value,event,force_matching_signature
order by 5 desc
) where rownum<30;          
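If the v$ buffer has already aged out, a similar profile can be pulled from AWR history. A minimal variant of the first query (a sketch; AWR keeps only every 10th ASH sample, so the counts are rougher estimates over a longer window):

select * from (
select NVL(event,'CPU') event,count(*),
round((ratio_to_report(sum(1)) over ()*100),1) rr
from dba_hist_active_sess_history
WHERE user_id<>0
AND sample_time>sysdate-7
group by event
order by 2 desc
) where rownum<10;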

For one of the databases, which is not looked after by me, the output was like this:


************************************
**** ASH OVERALL
************************************

EVENT                                           COUNT(*)     RR
--------------------------------------------- ---------- ------
CPU                                               109648   63.1
direct path read                                   35914   20.7
log file sync                                      26136   15.1
enq: KO - fast object checkpoint                    1089     .6
db file sequential read                              263     .2
control file sequential read                         188     .1
log buffer space                                     149     .1
asynch descriptor resize                             112     .1
enq: HW - contention                                  31     .0

************************************
**** ASH I/O
************************************

SQL_ID          SQL_PLAN_HASH_VALUE     FORCE_MATCHING_SIGNATURE EVENT                                           COUNT(*)     RR
--------------- ------------------- ---------------------------- --------------------------------------------- ---------- ------
804t23t68dbmw            1520416513         10621052756171684880 direct path read                                   11689   32.1
9wthcvamy608f            1764147946         10621052756171684880 direct path read                                   11059   30.4
7scfmpxgv5mu5            3641059178          6835008959050889988 direct path read                                    6448   17.7
804t23t68dbmw            2550353655         10621052756171684880 direct path read                                    3060    8.4
9wthcvamy608f            2550353655         10621052756171684880 direct path read                                    2787    7.7
9wthcvamy608f            3119703268         10621052756171684880 direct path read                                     777    2.1
1q21k2wqgw8m3            3586661573            29420080321854211 control file sequential read                         135     .4

From this output we can say that, for the period I checked, sessions overall are 63% on CPU and 20% on direct path read. If we check the ASH I/O breakdown, we can see the same SQLs with different plans (804t23t68dbmw, 9wthcvamy608f) contributing in total 77% of that 20% direct path read. The next step is to check how much I/O per run the sql (804t23t68dbmw) is doing:

 SQL> l
  1  select
  2     plan_hash_value plan_hash,
  3     executions,
  4     PX_SERVERS_EXECUTIONS pxe,
  5     cpu_time/1000 cpu_ms,
  6     elapsed_time/1000 ela_ms,
  7     (elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_ela,
  8     IO_INTERCONNECT_BYTES/1024/1024/1024 io_inter_gb,
  9     PHYSICAL_READ_BYTES/1024/1024/1024 PHYSICAL_READ_GB,
 10     PHYSICAL_READ_BYTES/1024/1024/decode(nvl(executions,0),0,1,executions) PIO_MB_PE,
 11     buffer_gets/decode(nvl(executions,0),0,1,executions) LIOS_PE,
 12     disk_reads/decode(nvl(executions,0),0,1,executions) PIOS_PE
 13  from
 14     gv$sql
 15  where
 16     sql_id = ('&1')
 17  order BY
 18     inst_id,
 19     sql_id,
 20     hash_value,
 21*    child_number
SQL>

 PLAN_HASH EXECUTIONS        PXE     CPU_MS     ELA_MS    AVG_ELA      IO_INTER_GB PHYSICAL_READ_GB        PIO_MB_PE       LIOS_PE       PIOS_PE
---------- ---------- ---------- ---------- ---------- ---------- ---------------- ---------------- ---------------- ------------- -------------
2550353655         45          0   88018.62 147748.384 3.28329742             9.37             9.37           213.14      27402.60      27282.00
2550353655      17142          0 16035460.2   34342974 2.00344032          2208.88          2208.71           131.94      16973.79      16901.79

131MB per run (PIO_MB_PE) looks small at first, but if you check the total reads and executions it looks a bit scary. (Note: I include IO_INTER_GB because we have Exadata environments as well; it is not relevant in this case.) I then checked how many times a day it runs and how much is read in total:

SQL> l
  1  with ds as (
  2  select min(snap_id) min_snap,max(snap_id) max_snap from
  3  dba_hist_snapshot
  4  where begin_interval_time between trunc(sysdate-1) and trunc(sysdate))
  5  select sum(executions_delta) total_execution_yesterday,sum(executions_delta)*&_tmp_pio_pe/1024/1024 total_pio_pd_TB
  6  from dba_hist_sqlstat,ds
  7  where snap_id  between ds.min_snap and ds.max_snap
  8* and sql_id='&1'

TOTAL_EXECUTION_YESTERDAY TOTAL_PIO_PD_TB
------------------------- ---------------
                    18689      2.39794088

As you can see, the daily read is almost 2.4 TB from 18K executions per day of a single query that innocently reads 131MB per execution. The table size is not that big:

SQL> select sum(bytes)/1024/1024 total_mb,sum(blocks) total_blocks 
from dba_segments where segment_name='PROBLEM_TABLE';

  TOTAL_MB TOTAL_BLOCKS
---------- ------------
       109        15360

The SGA is way big enough that this table should not need to be read via direct path:

SQL> show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
sga_target                           big integer 9G

It looks like our table (all partitions) is in the cache: 15.2K blocks cached out of a total of 15.3K blocks.

SQL> select count(distinct block#) from v$bh
  2  where objd in (select data_object_id from dba_objects where object_name='PROBLEM_TABLE' and object_type like 'TABLE%');

COUNT(DISTINCTBLOCK#)
---------------------
                15260

So, the table is small enough to fit in the cache, and it is already in the cache, so why are we scanning via direct path read?

To know the answer you need to be a good blog follower like me and immediately recall
Tanel's awesome post "Optimizer statistics-driven direct path read decision for full table scans (_direct_read_decision_statistics_driven)",
which covers a change new in 11.2.0.3, the version we are on.
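Before testing, you can check the current value of the hidden parameter with the usual x$ query (a sketch; requires SYS access):

select ksppinm name, ksppstvl value, ksppstdf is_default
from x$ksppi i, x$ksppcv v
where i.indx = v.indx
and ksppinm = '_direct_read_decision_statistics_driven';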

To make sure this behaviour change is the case, I ran the test with _direct_read_decision_statistics_driven=TRUE (the default) and then with FALSE.

SQL>alter session set "_direct_read_decision_statistics_driven"=TRUE;

SQL>@mon_sqlid2 bcfqr520321mf

DBMS_SQLTUNE.REPORT_SQL_MONITOR(SQL_ID=>'BCFQR520321MF')
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL Monitoring Report

SQL Text
------------------------------
select /*+monitor*//*coskan2*/.......... from VIEW_XXXXXXXXXXXXXXX where COLUMN='ZZZ'

Global Stats
=========================================================================================
| Elapsed |   Cpu   |    IO    | Application |  Other   | Fetch | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Waits(s) | Calls |  Gets  | Reqs | Bytes |
=========================================================================================
|    4.68 |    1.17 |     2.76 |        0.23 |     0.53 |     2 |  21228 |  171 | 164MB |
=========================================================================================

SQL Plan Monitoring Details (Plan Hash Value=2550353655)
=================================================================================================================================================================================================
| Id |                   Operation                   |         Name          |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  |  Mem  | Activity |   Activity Detail    |
|    |                                               |                       | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes | (Max) |   (%)    |     (# samples)      |
=================================================================================================================================================================================================
|  0 | SELECT STATEMENT                              |                       |         |       |         1 |     +5 |     1 |       16 |      |       |       |          |                      |
|  1 |   VIEW                                        | VIEW_XXXXXXXXXXXXXXX  |       3 | 25252 |         1 |     +5 |     1 |       16 |      |       |       |          |                      |
|  2 |    SORT ORDER BY                              |                       |       3 | 25252 |         1 |     +5 |     1 |       16 |      |       |  4096 |          |                      |
|  3 |     HASH GROUP BY                             |                       |       3 | 25252 |         1 |     +5 |     1 |       16 |      |       |    1M |          |                      |
|  4 |      VIEW                                     |                       |       3 | 25250 |         1 |     +5 |     1 |     289K |      |       |       |          |                      |
|  5 |       UNION-ALL                               |                       |         |       |         1 |     +5 |     1 |     289K |      |       |       |          |                      |
|  6 |        FILTER                                 |                       |         |       |         1 |     +5 |     1 |     289K |      |       |       |          |                      |
|  7 |         HASH GROUP BY                         |                       |       2 | 19143 |         2 |     +4 |     1 |     289K |      |       |   62M |          |                      |
|  8 |          HASH JOIN                            |                       |    990K | 19090 |         4 |     +2 |     1 |     289K |      |       |   30M |    25.00 | Cpu (1)              |
|  9 |           PART JOIN FILTER CREATE             | :BF0000               |    988K |  6099 |         3 |     +2 |     1 |     289K |      |       |       |          |                      |
| 10 |            PARTITION LIST SINGLE              |                       |    988K |  6099 |         3 |     +2 |     1 |     289K |      |       |       |          |                      |
| 11 |             TABLE ACCESS FULL                 | PROBLEM_TABLE         |    988K |  6099 |         4 |     +1 |     1 |     289K |   57 |  55MB |       |    25.00 | direct path read (1) |
| 12 |           PARTITION LIST SINGLE               |                       |    988K |  6102 |         2 |     +4 |     1 |     289K |      |       |       |          |                      |
| 13 |            TABLE ACCESS FULL                  | PROBLEM_TABLE         |    988K |  6102 |         3 |     +3 |     1 |     289K |   57 |  55MB |       |    50.00 | direct path read (2) |
| 14 |        FILTER                                 |                       |         |       |           |        |     1 |          |      |       |       |          |                      |
| 15 |         HASH GROUP BY                         |                       |       1 |  6107 |           |        |     1 |          |      |       |       |          |                      |
| 16 |          FILTER                               |                       |         |       |           |        |     1 |          |      |       |       |          |                      |
| 17 |           NESTED LOOPS                        |                       |         |       |           |        |     1 |          |      |       |       |          |                      |
| 18 |            NESTED LOOPS                       |                       |       1 |  6102 |           |        |     1 |          |      |       |       |          |                      |
| 19 |             PARTITION LIST SINGLE             |                       |       1 |  6099 |           |        |     1 |          |      |       |       |          |                      |
| 20 |              TABLE ACCESS FULL                | PROBLEM_TABLE         |       1 |  6099 |           |        |     1 |          |   57 |  55MB |       |          |                      |
| 21 |             PARTITION LIST SINGLE             |                       |       1 |     2 |           |        |       |          |      |       |       |          |                      |
| 22 |              INDEX RANGE SCAN                 | PROBLEM_TABLE_IX3     |       1 |     2 |           |        |       |          |      |       |       |          |                      |
| 23 |            TABLE ACCESS BY LOCAL INDEX ROWID  | PROBLEM_TABLE         |       1 |     3 |           |        |       |          |      |       |       |          |                      |
| 24 |           SORT AGGREGATE                      |                       |       1 |       |           |        |       |          |      |       |       |          |                      |
| 25 |            PARTITION LIST SINGLE              |                       |       1 |     4 |           |        |       |          |      |       |       |          |                      |
| 26 |             TABLE ACCESS BY LOCAL INDEX ROWID | PROBLEM_TABLE         |       1 |     4 |           |        |       |          |      |       |       |          |                      |
| 27 |              INDEX RANGE SCAN                 | PROBLEM_TABLE_IX      |       1 |     3 |           |        |       |          |      |       |       |          |                      |
=================================================================================================================================================================================================

This time we read 164MB, and if you check plan lines 11, 13 and 20,
where we access PROBLEM_TABLE via full partition scans, you can see that all the direct path reads come from that access.

Now let's revert the parameter to the pre-11.2.0.3 behaviour.


SQL>alter session set "_direct_read_decision_statistics_driven"=false;

Session altered.

SQL>@mon_sqlid2 fds2rw8vuutcn

DBMS_SQLTUNE.REPORT_SQL_MONITOR(SQL_ID=>'FDS2RW8VUUTCN')
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL Monitoring Report

SQL Text
------------------------------
select /*+monitor*//*coskan*/........................
from VIEW_XXXXXXXXXXXXXXX where COLUMN='ZZZ'

Global Stats
=================================================
| Elapsed |   Cpu   |  Other   | Fetch | Buffer |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |
=================================================
|    0.93 |    0.93 |     0.00 |     2 |  21043 |
=================================================

SQL Plan Monitoring Details (Plan Hash Value=2550353655)
=============================================================================================================================================================================
| Id |                   Operation                   |         Name          |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   |  Mem  | Activity | Activity Detail |
|    |                                               |                       | (Estim) |       | Active(s) | Active |       | (Actual) | (Max) |   (%)    |   (# samples)   |
=============================================================================================================================================================================
|  0 | SELECT STATEMENT                              |                       |         |       |         1 |     +1 |     1 |       16 |       |          |                 |
|  1 |   VIEW                                        | VIEW_XXXXXXXXXXXXXXX  |       3 | 25252 |         1 |     +1 |     1 |       16 |       |          |                 |
|  2 |    SORT ORDER BY                              |                       |       3 | 25252 |         1 |     +1 |     1 |       16 |  4096 |          |                 |
|  3 |     HASH GROUP BY                             |                       |       3 | 25252 |         1 |     +1 |     1 |       16 |    1M |          |                 |
|  4 |      VIEW                                     |                       |       3 | 25250 |         1 |     +1 |     1 |     287K |       |          |                 |
|  5 |       UNION-ALL                               |                       |         |       |         1 |     +1 |     1 |     287K |       |          |                 |
|  6 |        FILTER                                 |                       |         |       |         1 |     +1 |     1 |     287K |       |          |                 |
|  7 |         HASH GROUP BY                         |                       |       2 | 19143 |         1 |     +1 |     1 |     287K |   62M |   100.00 | Cpu (1)         |
|  8 |          HASH JOIN                            |                       |    990K | 19090 |         1 |     +1 |     1 |     287K |   31M |          |                 |
|  9 |           PART JOIN FILTER CREATE             | :BF0000               |    988K |  6099 |         1 |     +1 |     1 |     287K |       |          |                 |
| 10 |            PARTITION LIST SINGLE              |                       |    988K |  6099 |         1 |     +1 |     1 |     287K |       |          |                 |
| 11 |             TABLE ACCESS FULL                 | PROBLEM_TABLE         |    988K |  6099 |         1 |     +1 |     1 |     287K |       |          |                 |
| 12 |           PARTITION LIST SINGLE               |                       |    988K |  6102 |         1 |     +1 |     1 |     287K |       |          |                 |
| 13 |            TABLE ACCESS FULL                  | PROBLEM_TABLE         |    988K |  6102 |         1 |     +1 |     1 |     287K |       |          |                 |
| 14 |        FILTER                                 |                       |         |       |           |        |     1 |          |       |          |                 |
| 15 |         HASH GROUP BY                         |                       |       1 |  6107 |           |        |     1 |          |       |          |                 |
| 16 |          FILTER                               |                       |         |       |           |        |     1 |          |       |          |                 |
| 17 |           NESTED LOOPS                        |                       |         |       |           |        |     1 |          |       |          |                 |
| 18 |            NESTED LOOPS                       |                       |       1 |  6102 |           |        |     1 |          |       |          |                 |
| 19 |             PARTITION LIST SINGLE             |                       |       1 |  6099 |           |        |     1 |          |       |          |                 |
| 20 |              TABLE ACCESS FULL                | PROBLEM_TABLE         |       1 |  6099 |           |        |     1 |          |       |          |                 |
| 21 |             PARTITION LIST SINGLE             |                       |       1 |     2 |           |        |       |          |       |          |                 |
| 22 |              INDEX RANGE SCAN                 | PROBLEM_TABLE_IX3     |       1 |     2 |           |        |       |          |       |          |                 |
| 23 |            TABLE ACCESS BY LOCAL INDEX ROWID  | PROBLEM_TABLE         |       1 |     3 |           |        |       |          |       |          |                 |
| 24 |           SORT AGGREGATE                      |                       |       1 |       |           |        |       |          |       |          |                 |
| 25 |            PARTITION LIST SINGLE              |                       |       1 |     4 |           |        |       |          |       |          |                 |
| 26 |             TABLE ACCESS BY LOCAL INDEX ROWID | PROBLEM_TABLE         |       1 |     4 |           |        |       |          |       |          |                 |
| 27 |              INDEX RANGE SCAN                 | PROBLEM_TABLE_IX      |       1 |     3 |           |        |       |          |       |          |                 |
=============================================================================================================================================================================

Suddenly all reads are from cache, and the runtime dropped from 4.68 to 0.93 seconds.

To make myself more sure, I checked the statistics to see why this is happening.

The statistics say the table has 2 million rows and 55K blocks:

SQL>select num_rows,blocks,last_analyzed from dba_tables where table_name='PROBLEM_TABLE';

  NUM_ROWS     BLOCKS LAST_ANALYZED
---------- ---------- -------------------
   2086124      55981 18/04/2013 23:42:56

In reality the table has 600K rows and 15K blocks:

SQL>select count(*) from ptd_owner.PROBLEM_TABLE;

  COUNT(*)
----------
    611960

SQL>select sum(blocks) from dba_segments where segment_name='PROBLEM_TABLE';

SUM(BLOCKS)
-----------
      15360

Our problem is definitely that, when the direct path read decision is driven by statistics which are not representative, Oracle takes the wrong access path.

I did not stop there; I checked all_tab_modifications to see what had happened to the table since the last analyze.
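
The script itself is not shown here, but the query behind the output below would be something like this sketch:

select table_owner, table_name, partition_name,
       inserts, updates, deletes, timestamp, truncated, drop_segments
from all_tab_modifications
where table_name = 'PROBLEM_TABLE'
order by partition_name nulls first;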


####################################################
################TABLE MODIFICATIONS#################
####################################################

TABLE_OWNER   TABLE_NAME      PARTITION_NAME  INSERTS          UPDATES          DELETES TIMESTAMP TRUNCATED DROP_SEGMENTS
------------- --------------- ----------------------- ---------------- ---------------- --------- --------- -------------
PTD_OWNER     PROBLEM_TABLE                         0                0          2086124 18-APR-13 NO              0
PTD_OWNER     PROBLEM_TABLE   XXXXXXXXXXXX          0                0           267418 18-APR-13 YES             0
PTD_OWNER     PROBLEM_TABLE   YYYYYYY               0                0           830434 18-APR-13 YES             0
PTD_OWNER     PROBLEM_TABLE   ZZZ                   0                0           988272 18-APR-13 YES             0

It looks like the table partitions are truncated after the nightly stats gathering and then loaded continuously until the next truncate.

When we checked with the business, they confirmed this behaviour.

So what is next?

As the filter is on the partition key, which requires full partition access, indexing is not a solution.
Increasing the SGA would help, but why would we, when the table can already fit in the cache most of the time (until it grows past a certain point during the day)?
Changing the parameter session-wide is not possible via a trigger.
Changing the parameter for the whole database is also not something we want to implement.

The last option is to gather representative statistics when the table is about half full, and then lock them (or set the stats manually like Tanel does).
This way we avoid using undocumented parameters, and we still get buffered reads up to a certain point throughout the day (Oracle may decide to change the path if the table grows big enough).
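
A minimal sketch of that last option, using the real row and block counts from above (partition-level stats and granularity would need tailoring):

-- set representative stats manually, then lock them so the
-- nightly job does not overwrite them after the truncate
begin
  dbms_stats.set_table_stats(
    ownname => 'PTD_OWNER',
    tabname => 'PROBLEM_TABLE',
    numrows => 611960,
    numblks => 15360);
  dbms_stats.lock_table_stats(
    ownname => 'PTD_OWNER',
    tabname => 'PROBLEM_TABLE');
end;
/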

Well, that's about it. I finally managed to write a blog post after 11 months, right before a full year passed (banking is killing productivity and increasing spam comments).

I have to admit, once the scripts are ready it takes about 10 minutes to sort out a problem like this, but it took 2 hours to share it, so I hope you enjoyed it.

The morals of the story: check your systems regularly, read blogs, and god bless Tanel :)

April 11, 2012

When dbms_xplan.display_sql_plan_baseline fails to show the plan

Filed under: Performance, Plan Stability, Tips — coskan @ 8:45 pm

Last week my colleague (Paul Reed – my new content provider, with issues on his databases 🙂 ) had a problem where he needed to identify which baseline was using an index,
in a plan where a collection table (like SELECT column FROM TABLE(:B1)) was used.

When he tried DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE he got "ERROR: An error has happened, error code: 22905", which is related to the bug below.

DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE returns "ERROR: An error has happened, error code: 22905" [ID 1418186.1]

When he asked me if there was a workaround, I was under the impression that dba_sql_plan_baselines held the outline information, so I asked him to check if he could get the outline info from there.
He checked, and made us realize that the CBO developers did not make the outline information publicly available in the dba_sql_plan_baselines view (I'm still wondering why not??).

We checked the other options (dba_hist_active_sess_history.SQL_PLAN_OPERATION) and found what we needed, but that was not enough for me.
While I was thinking about how to get the outline, my colleague stumbled on the answer accidentally, which is the main reason I'm writing this blog post.

To keep the baselines safe he was backing them up to a staging table with the DBMS_SPM.pack_stgtab_baseline procedure,
and when he selected from the staging table he saw that the comp_data column is actually the outline information.

Here is how he did it, with a little bit of polishing.
(I will be using the baseline generated in this post – dbms_xplan.display_sql_plan_baseline works for it, but that is not important for the purposes of this post!)

DBA_SQL_PLAN_BASELINE content

HR@ORACOS> @printtab 'select * from dba_sql_plan_baselines where plan_name="SQL_PLAN_13w748wknkcwd8576eb1f"'
SIGNATURE                     : 1292784087274697613
SQL_HANDLE                    : SYS_SQL_11f0e4472549338d
SQL_TEXT                      : select count(*), max(col2) from t1 where flag = :n
PLAN_NAME                     : SQL_PLAN_13w748wknkcwd8576eb1f
CREATOR                       : HR
ORIGIN                        : MANUAL-LOAD
PARSING_SCHEMA_NAME           : HR
DESCRIPTION                   :
VERSION                       : 11.2.0.1.0
CREATED                       : 23-MAR-12 10.23.23.000000
LAST_MODIFIED                 : 23-MAR-12 10.23.23.000000
LAST_EXECUTED                 : 23-MAR-12 10.24.56.000000
LAST_VERIFIED                 :
ENABLED                       : YES
ACCEPTED                      : YES
FIXED                         : NO
AUTOPURGE                     : YES
OPTIMIZER_COST                : 2
MODULE                        : SQL*Plus
ACTION                        :
EXECUTIONS                    : 3
ELAPSED_TIME                  : 14772
CPU_TIME                      : 0
BUFFER_GETS                   : 9
DISK_READS                    : 0
DIRECT_WRITES                 : 0
ROWS_PROCESSED                : 3
FETCHES                       : 3
END_OF_FETCH_COUNT            : 3
-----------------
HR@ORACOS>

As you can see, the baseline is there but no outline info is available to the public.

Let's create a staging table, then pack the baseline.

Staging Table Creation

BEGIN
  DBMS_SPM.CREATE_STGTAB_BASELINE(
    table_name      => 'SPM_STAGING',
    table_owner     => 'HR'
);
END;
/

PL/SQL procedure successfully completed.

Packing the baseline

SET SERVEROUTPUT ON
DECLARE
l_plans_packed PLS_INTEGER;
BEGIN
l_plans_packed := DBMS_SPM.pack_stgtab_baseline(
table_name => 'SPM_STAGING',
table_owner => 'HR',
sql_handle => 'SYS_SQL_11f0e4472549338d',
plan_name => 'SQL_PLAN_13w748wknkcwd8576eb1f'
);
DBMS_OUTPUT.put_line('Plans Packed: ' || l_plans_packed);
END;
/

Plans Packed: 1

PL/SQL procedure successfully completed.

Let's see what is inside the SPM_STAGING table that we packed the data into.

Staging Table Content 

HR@ORACOS> @printtab 'select * from spm_staging'
VERSION                       : 2
SIGNATURE                     : 1292784087274697613
SQL_HANDLE                    : SYS_SQL_11f0e4472549338d
OBJ_NAME                      : SQL_PLAN_13w748wknkcwd8576eb1f
OBJ_TYPE                      : SQL_PLAN_BASELINE
PLAN_ID                       : 2239163167
SQL_TEXT                      : select count(*), max(col2) from t1 where flag = :n
CREATOR                       : HR
ORIGIN                        : MANUAL-LOAD
DESCRIPTION                   :
DB_VERSION                    : 11.2.0.1.0
CREATED                       : 23-MAR-12 10.23.23.000000
LAST_MODIFIED                 : 23-MAR-12 10.23.23.000000
LAST_EXECUTED                 : 23-MAR-12 10.24.56.000000
LAST_VERIFIED                 :
STATUS                        : 11
OPTIMIZER_COST                : 2
MODULE                        : SQL*Plus
ACTION                        :
EXECUTIONS                    : 3
ELAPSED_TIME                  : 14772
CPU_TIME                      : 0
BUFFER_GETS                   : 9
DISK_READS                    : 0
DIRECT_WRITES                 : 0
ROWS_PROCESSED                : 3
FETCHES                       : 3
END_OF_FETCH_COUNT            : 3
CATEGORY                      : DEFAULT
SQLFLAGS                      : 0
TASK_ID                       :
TASK_EXEC_NAME                :
TASK_OBJ_ID                   :
TASK_FND_ID                   :
TASK_REC_ID                   :
INUSE_FEATURES                : 1
PARSE_CPU_TIME                :
PRIORITY                      :
OPTIMIZER_ENV                 :
E289FB89E126A80034011000AEF9C3E2CFFA331056414555519521105555551545545558591555449665851D5511058555555155515122555415A0EA0E5551454265455454449081566E001696A35615551403025415505AE126A800050402000002000000100000000100002000000208D007000000001000243303000101000038F8000400009299010000000800E126A80065646
40202643202320000020003020A0A05050A140002000032F4010000500A0A0A0A64E803000064E08003FFFF00000804020000080032
BIND_DATA                     : BEDA0B2001004F6C4EAD000101F001200369025931
PARSING_SCHEMA_NAME           : HR
COMP_DATA                     :
<![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]><![CDATA[OPTIMIZER_FEATURES_ENABLE('11.2.0.1')]]><![CDATA[DB_VERSION('11.2.0.1')]]><![CDATA[ALL_ROWS]]><![CDATA[OUTLINE_LEAF(@"SEL$1")]]><![CDATA[INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" ("T1"."FLAG"))]]>
-----------------

PL/SQL procedure successfully completed.

As you can see, there are more columns inside the staging table, and more importantly the COMP_DATA column holds the outline information.

In this form the outline data is a bit hard to read. You can paste the information into an XML parser, or use the query below to get a better output.

 COMP_DATA content  (outline information)

select
    extractvalue(value(d), '/hint') as outline_hints
from
    xmltable('/outline_data/hint'
             passing (select xmltype(comp_data) as xmlval
                        from spm_staging)
            ) d;

OUTLINE_HINTS
--------------------------------------------------------
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
DB_VERSION('11.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" ("T1"."FLAG"))
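
As a side note, extractvalue is deprecated as of 11.2, so the same output can also be produced with xmltable columns (an equivalent sketch):

select x.outline_hint
from spm_staging s,
     xmltable('/outline_data/hint'
              passing xmltype(s.comp_data)
              columns outline_hint varchar2(4000) path '.') x;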

It is simple for this query, but in general it can be hard to guess the plan from this output. What you can do is paste the outline information into the SQL as a hint, like below.
The only line you need to remove is IGNORE_OPTIM_EMBEDDED_HINTS, so that your hints are not ignored.

var n varchar2(2)
exec :n := 'N1'
select
/*+
OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
DB_VERSION('11.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" ("T1"."FLAG"))
*/
count(*), max(col2) from t1 where flag = :n;

PL/SQL procedure successfully completed.

HR@ORACOS>   2    3    4    5    6    7    8    9

  COUNT(*) MAX(COL2)
---------- --------------------------------------------------
     49999 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Generated plan with outline

HR@ORACOS>  select * from table(dbms_xplan.display_cursor(null,null,'BASIC +OUTLINE'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select /*+ OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
DB_VERSION('11.2.0.1') ALL_ROWS OUTLINE_LEAF(@"SEL$1")
INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" ("T1"."FLAG")) */ count(*),
max(col2) from t1 where flag = :n

Plan hash value: 3625400295

---------------------------------------------
| Id  | Operation                    | Name |
---------------------------------------------
|   0 | SELECT STATEMENT             |      |
|   1 |  SORT AGGREGATE              |      |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1   |
|   3 |    INDEX RANGE SCAN          | I1   |
---------------------------------------------

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
      DB_VERSION('11.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" ("T1"."FLAG"))
      END_OUTLINE_DATA
  */

As you can see, with this method we can find the outline for the baseline and, by using that outline, the plan inside the baseline (99% of the time – the SQL and the underlying objects should be the same).

You can use this method (until Oracle exposes the outlines) when you hit the bugs below as well.
As long as you have a baseline, it can be used. I can also think of a couple of other usages, but I will keep them for future posts, so stay tuned.

Bug 8366552: ORA-6502 WHEN TRYING TO DISPLAY PLANS FROM SQL BASELINE
Bug 12335250: DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE CAUSES ORA-979 IF CURSOR_SHARING=FORCE/SIMI
Bug 12833789: DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE RAISING ERROR 22905
Bug 10637509: DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE RECEIVES ORA-6502

Bug 10254875: DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE RETURNS: ERROR CODE: 1435

Bonus Material for Geeks and SYSDBA access owners

The same information is also available with the SQL below – this SQL is multi-purpose, so you can use it for SQL patches and SQL profiles as well.


select * from (
SELECT
  sa.signature,
  st.sql_handle,
  so.name obj_name,
  decode(sa.obj_type, 1,'SQL_PROFILE', decode(sa.obj_type, 2,'SQL_PLAN_BASELINE',decode(sa.obj_type, 3,'SQL_PATCH', null))) obj_type,
  sa.plan_id,
  st.sql_text,
  sa.creator,
  dbms_smb_internal.map_origin_base_to_view(sa.origin, sa.obj_type)
                         origin,
  sa.description,
  sa.version db_version,
  sa.created,
  sa.last_modified,
  so.last_executed,
  sa.last_verified,
  so.flags status,
  sa.optimizer_cost,
  sa.module,
  sa.action,
  sa.executions,
  sa.elapsed_time,
  sa.cpu_time,
  sa.buffer_gets,
  sa.disk_reads,
  sa.direct_writes,
  sa.rows_processed,
  sa.fetches,
  sa.end_of_fetch_count,
  sa.category,
  s.flags sqlflags,
  sa.task_id,
  sa.task_exec_name,
  sa.task_obj_id,
  sa.task_fnd_id,
  sa.task_rec_id,
  s.inuse_features,
  sa.parse_cpu_time,
  sa.priority,
  sa.optimizer_env,
  sa.bind_data,
  sa.parsing_schema_name,
  sd.comp_data
FROM
  sql$ s,
  sqlobj$auxdata sa,
  sqlobj$ so,
  sql$text st,
  sqlobj$data sd
WHERE
  sa.signature = s.signature and
  sa.signature = so.signature and
  sa.category = so.category and
  sa.obj_type = so.obj_type and
  sa.plan_id = so.plan_id and
  sa.signature = st.signature and
  sa.signature = sd.signature and
  sa.category = sd.category and
  sa.obj_type = sd.obj_type and
  sa.plan_id = sd.plan_id) v
 WHERE
 obj_name='SQL_PLAN_13w748wknkcwd8576eb1f'        ----PLAN NAME
 and category is not null
 and sql_handle = 'SYS_SQL_11f0e4472549338d'      -----SQL HANDLE
 and obj_type = 'SQL_PLAN_BASELINE'
 and sql_text is not null

January 13, 2012

Session Snapper on Duty 2 – (how to support your answers, with proof)

Filed under: Basics, Performance, Tips — coskan @ 10:51 pm

Warning: This long blog post is for educational purposes, especially targeting junior DBAs and DBAs who do not know how to use Session Snapper
(the greatest Oracle tuning tool yet) by Tanel Poder. If you feel comfortable with snapper and you think your relationship with your
clients just rocks, then there is no need to read the post, but you may still give it a go 🙂

The story started with a long email thread forwarded by a developer, asking the question below about a system I am responsible for but do not feel perfectly comfortable about in terms of how things really work.

Hi Josh (note to reader: Coskan is pronounced as Joshkahn so I use Josh at office)
Any ideas on how we can work with dbas to find out why our database is only committing at a range of 
2-30 sec for 100 commits instead of the ideal 100 commits a sec for a third party process?

I answered

From my point of view 

I will be requesting:

1-	The time interval when this process runs
2-	SID information for when this process runs, if you have it
3-	Whether it gives the same results on UAT, so we can run the operation while tracing is on (which needs a Change Request – a hassle on production)

Then we can continue to understand what the problem can be

Developer replied

It runs 7/24 and UAT testing is not possible since we can't point the feeds to UAT but username is USER_XXX 
if it helps.

Long story short, all I understood was that, in the simplest words, they want to be able to process more data in the same amount of time, so I replied with the long mail below.

BEGINNING OF THE EMAIL

I did some sampling on your sessions to understand what is happening, and I think I can come up with rough conclusions and improvement suggestions.

Let's walk through what is (or may be) happening.

This is a 500-second sampling of one of your sessions:

---Some important Statistics
-------------------------------------------------------------------------------------------------------------------------------------
    SID, USERNAME  , TYPE, STATISTIC                                                 ,     HDELTA, HDELTA/SEC,    %TIME, GRAPH
-------------------------------------------------------------------------------------------------------------------------------------
     71, USER_XXX  , STAT, user commits                                              ,        326,        .65,
     71, USER_XXX  , STAT, session logical reads                                     ,      1.36M,      2.71k,

—Wait profile

     71, USER_XXX  , WAIT, buffer busy waits                                         ,     1.18ms,     2.36us,      .0%, |          |
     71, USER_XXX  , WAIT, log file sync                                             ,     36.12s,    72.24ms,     7.2%, |@         |
     71, USER_XXX  , WAIT, db file sequential read                                   ,    276.17s,   552.34ms,    55.2%, |@@@@@@    |
     71, USER_XXX  , WAIT, flashback buf free by RVWR                                ,   403.28ms,   806.55us,      .1%, |          |
     71, USER_XXX  , WAIT, undo segment extension                                    ,    72.93ms,   145.86us,      .0%, |          |
     71, USER_XXX  , WAIT, SQL*Net message to client                                 ,     2.23ms,     4.47us,      .0%, |          |
     71, USER_XXX  , WAIT, SQL*Net message from client                               ,     178.2s,    356.4ms,    35.6%, |@@@@      |
     71, USER_XXX  , WAIT, SQL*Net more data from client                             ,   710.26ms,     1.42ms,      .1%, |          |

—Wait profile breakdown per sql

---------------------------------------------------------------------------------
Active% | SQL_ID          | EVENT                               | WAIT_CLASS
---------------------------------------------------------------------------------
    20% | ggjkmtv8cxxh3   | db file sequential read             | User I/O
    19% | 64djbhqv5v93u   | db file sequential read             | User I/O
    13% | 3b8gt3dqtn345   | db file sequential read             | User I/O
     8% |                 | log file sync                       | Commit
     2% | 8zt375nzk26rh   | db file sequential read             | User I/O
     1% | 8ztz7p814q9u2   | db file sequential read             | User I/O
     1% | ggjkmtv8cxxh3   | ON CPU                              | ON CPU
     1% | 64djbhqv5v93u   | ON CPU                              | ON CPU
     0% | 64djbhqv5v93u   | flashback buf free by RVWR          | Configuration
     0% | 64djbhqv5v93u   | SQL*Net more data from client       | Network

--  End of ASH snap 1, end=2012-01-11 21:51:46, seconds=500, samples_taken=499

When we check the stats, you have done 326 commits in 500 seconds, which makes 0.65 commits per second; but if we take out the 180 seconds of idle events (SQL*Net message to client + SQL*Net message from client), it is 326 commits in 320 (500-180) seconds of active time, which is roughly 1 commit per second.

When we look at the wait profile we can see 276 seconds of "db file sequential read" (physical reads) over the 320-second non-idle period, and 36 seconds of "log file sync", which we will come to later.

If we focus on the "db file sequential read", we can see it is 86% (276/320) of the active time, and there are 3 main contributing SQLs in this case.

The SQLs are below:

SQL> @sqlid2 ggjkmtv8cxxh3

SQL_ID        HASH_VALUE SQL_TEXT
------------- ---------- ---------------------------------------------------------------------------------------------------------------
ggjkmtv8cxxh3 3503224323 DELETE FROM TABLE_T WHERE ESMP = :1

1 row selected.

  PLAN_HASH OPT_COST     PARSES   H_PARSES EXECUTIONS AVG_FETCH_PE  AVG_ROWS_PE AVG_ELA_SECONDS      AVG_LIO_PE      AVG_PIO_PE
---------- -------- ---------- ---------- ---------- ------------ ------------ --------------- --------------- ---------------
4033870845        4          0         21      20219           .0        392.7       .26172575          1198.3            28.1


Plan hash value: 4033870845

-----------------------------------------------------------------------------------------------
| Id  | Operation         | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT  |                           |       |       |     4 (100)|          |
|   1 |  DELETE           | TABLE_T                   |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| PK_TABLE_T                |     2 |    50 |     4   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------
SQL> @sqlid2 64djbhqv5v93u

SQL_ID        HASH_VALUE SQL_TEXT
------------- ---------- ------------------------------------------------------------------------------------------------------------------------------------------------------
64djbhqv5v93u 3059590266 some complex update statement which does not need to be shared

PLAN_HASH OPT_COST     PARSES   H_PARSES EXECUTIONS AVG_FETCH_PE  AVG_ROWS_PE AVG_ELA_SECONDS      AVG_LIO_PE      AVG_PIO_PE
---------- -------- ---------- ---------- ---------- ------------ ------------ --------------- --------------- ---------------
1856144614        3         26          2     473152           .0         85.4      .270458035           454.4            25.4

Plan hash value: 1856144614

-------------------------------------------------------------------------------------------
| Id  | Operation          | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |                      |       |       |     3 (100)|          |
|   1 |  UPDATE            | TABLE_XXXXXXX        |       |       |            |          |
|*  2 |   INDEX UNIQUE SCAN| PK_TABLE_XXXXXXX     |     1 |   192 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

SQL> @sqlid2 3b8gt3dqtn345

SQL_ID        HASH_VALUE SQL_TEXT
------------- ---------- ------------------------------------------------------------------------------------------------------------------------------------------------------
3b8gt3dqtn345 1838812293 some complex update statement which does not need to be shared

PLAN_HASH OPT_COST     PARSES   H_PARSES EXECUTIONS AVG_FETCH_PE  AVG_ROWS_PE AVG_ELA_SECONDS      AVG_LIO_PE      AVG_PIO_PE
--------- -------- ---------- ---------- ---------- ------------ ------------ --------------- --------------- ---------------
861798698        3          0          7      22308           .0         97.5      .113096364           382.6            10.9

Plan hash value: 861798698

------------------------------------------------------------------------------------------------
| Id  | Operation          | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |                           |       |       |     3 (100)|          |
|   1 |  UPDATE            | TABLE_ZZZZ                |       |       |            |          |
|*  2 |   INDEX UNIQUE SCAN| PK_TABLE_ZZZZ             |     1 |    69 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

There is not much we can do about the index unique scans other than changing the table structure, but I think there is room for improvement on the index range scan for sql_id ggjkmtv8cxxh3, the delete statement, which is waiting on db file sequential read for 31% (500*20% / 320) of the active time.

The reason the delete is doing an index range scan is that we are accessing the 3-column composite index PK_TABLE_T using only 1 column, COL1. If the business logic lets you supply the other 2 columns as filters as well, we will get much more throughput with less LIO and PIO; see the hypothetical rewrite after the index listing below.

TABLE_OWNER          TABLE_NAME                     INDEX_NAME                     POS# COLUMN_NAME
-------------------- ------------------------------ ------------------------------ ---- ------------------------------
APP_OWNER            TABLE_T                        PK_TABLE_T                        1 COL1
                                                                                      2 COL2
                                                                                      3 COL3
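
Assuming the business logic can supply the other two columns (COL1-COL3 are the anonymized names from the listing above), the delete would become eligible for an index unique scan, something like:

DELETE FROM TABLE_T WHERE COL1 = :1 AND COL2 = :2 AND COL3 = :3;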

While I was sampling the session above, at the same time I also sampled another session, which shows similar behaviour with different SQLs, as below.

—Some important Statistics

-------------------------------------------------------------------------------------------------------------------------------------
    SID, USERNAME  , TYPE, STATISTIC                                                 ,     HDELTA, HDELTA/SEC,    %TIME, GRAPH
-------------------------------------------------------------------------------------------------------------------------------------
    192, USER_XXX  , STAT, user commits                                              ,        343,        .69,
    192, USER_XXX  , STAT, session logical reads                                     ,      2.78M,      5.56k,

—Wait profile

  192, USER_XXX  , WAIT, buffer busy waits                                         ,      702us,      1.4us,      .0%, |          |
  192, USER_XXX  , WAIT, log file switch completion                                ,    94.34ms,   188.68us,      .0%, |          |
  192, USER_XXX  , WAIT, log file sync                                             ,     28.26s,    56.53ms,     5.7%, |@         |
  192, USER_XXX  , WAIT, db file sequential read                                   ,     364.3s,    728.6ms,    72.9%, |@@@@@@@@  |
  192, USER_XXX  , WAIT, flashback buf free by RVWR                                ,   348.79ms,   697.57us,      .1%, |          |
  192, USER_XXX  , WAIT, undo segment extension                                    ,    492.4ms,    984.8us,      .1%, |          |
  192, USER_XXX  , WAIT, SQL*Net message to client                                 ,     1.03ms,     2.07us,      .0%, |          |
  192, USER_XXX  , WAIT, SQL*Net message from client                               ,     94.02s,   188.05ms,    18.8%, |@@        |
  192, USER_XXX  , WAIT, SQL*Net more data from client                             ,   368.33ms,   736.67us,      .1%, |          |

—Wait profile breakdown per sql

---------------------------------------------------------------------------------
Active% | SQL_ID          | EVENT                               | WAIT_CLASS
---------------------------------------------------------------------------------
    37% | dkh6b1nhnhfcu   | db file sequential read             | User I/O
    32% | 29zz9n1b9sppr   | db file sequential read             | User I/O
     7% |                 | log file sync                       | Commit
     1% | dkh6b1nhnhfcu   | ON CPU                              | ON CPU
     1% | 29zz9n1b9sppr   | ON CPU                              | ON CPU
     0% | 29zz9n1b9sppr   | undo segment extension              | Configuration
     0% | 29zz9n1b9sppr   | flashback buf free by RVWR          | Configuration

--  End of ASH snap 1, end=2012-01-11 21:52:17, seconds=500, samples_taken=499

This time it is 94 seconds idle with 406 (500-94) seconds of processing time, which gives us 0.84 (343/406) commits per second,
and 392 seconds of wait (364 "db file sequential read" + 28 "log file sync").

When we look at the SQLs below, this time we can see that 160 (500*32%) of the 364 seconds (43% of the active time) is again being spent on an index range scan (sql_id 29zz9n1b9sppr) in a delete statement, again due to the lack of composite index column filtering.

Again, I believe we can improve the access if we are able to supply all the columns as filters and get an index unique scan.


TABLE_OWNER          TABLE_NAME                     INDEX_NAME                     POS# COLUMN_NAME                    DSC
-------------------- ------------------------------ ------------------------------ ---- ------------------------------ ----
APP_OWNER            TABLE_Z                        PK_TABLE_Z                      1    COL1
                                                                                    2    COL2
                                                                                    3    COL3

SQL> @sqlid2 dkh6b1nhnhfcu

SQL_ID        HASH_VALUE SQL_TEXT
------------- ---------- ------------------------------------------------------------------------------------------------------------------------------------------------------
dkh6b1nhnhfcu  558381466 INSERT INTO APP_OWNER.TABLE_Z where bla bla bla bla

PLAN_HASH OPT_COST     PARSES   H_PARSES EXECUTIONS AVG_FETCH_PE  AVG_ROWS_PE AVG_ELA_SECONDS      AVG_LIO_PE      AVG_PIO_PE
--------- -------- ---------- ---------- ---------- ------------ ------------ --------------- --------------- ---------------
        0        1        778         15     102566           .0       1065.3      .177031281          5673.4            30.4

-------------------------------------------------
| Id  | Operation                | Name | Cost  |
-------------------------------------------------
|   0 | INSERT STATEMENT         |      |     1 |
|   1 |  LOAD TABLE CONVENTIONAL |      |       |
-------------------------------------------------

SQL> @sqlid2 29zz9n1b9sppr

SQL_ID        HASH_VALUE SQL_TEXT
------------- ---------- -------------------------------------------------------------------------------------------------------
29zz9n1b9sppr 1453086391 DELETE FROM APP_OWNER.TABLE_Z WHERE ESMP = :1

1 row selected.

PLAN_HASH OPT_COST     PARSES   H_PARSES EXECUTIONS AVG_FETCH_PE  AVG_ROWS_PE AVG_ELA_SECONDS      AVG_LIO_PE      AVG_PIO_PE
---------- -------- ---------- ---------- ---------- ------------ ------------ --------------- --------------- ---------------
3229062859        4          0         30      21308           .0       1136.6      .642183193          2081.5            68.1

--------------------------------------------------------------------------------------
| Id  | Operation         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT  |                  |       |       |     4 (100)|          |
|   1 |  DELETE           | TABLE_Z          |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| PK_TABLE_Z       |     5 |   135 |     4   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

I think I have made the index range scan issue clear (for more information see the bonus material below). Now let's see what the log file sync is all about and why we are seeing it.

The log file sync event has the definition below in the Oracle documentation.

log file sync

When a user session commits, the session’s redo information needs to be flushed to the redo logfile. The user session will post the LGWR to write the log buffer to the redo log file. When the LGWR has finished writing, it will post the user session.

Wait Time: The wait time includes the writing of the log buffer and the post.

Parameter Description
buffer# The number of the physical buffer in the redo log buffer that needs to be synchronized

Potential causes

Wait Event:          log file sync
General Area:        I/O, over-committing
Possible Causes:     Slow disks that store the online logs; un-batched commits
Look for / Examine:  Check the disks that house the online redo logs for resource contention. Check the number of transactions (commits + rollbacks) each second, from V$SYSSTAT.
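As a quick illustration of that last check, a minimal sketch against V$SYSSTAT (sample it twice and divide the delta by the interval to get commits per second):

select name, value
from   v$sysstat
where  name in ('user commits', 'user rollbacks');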

The first rule of tuning log file sync is to commit less, which means that instead of committing every DML statement or every single operation, like

begin
for i in 300001..600000 loop
delete from t1 where id1=to_char(i);
commit;
end loop;
end;
/

Doing the commit once for the whole batch, like

begin
for i in 300001..600000 loop
delete from t1 where id1=to_char(i);
end loop;
commit;
end;
/

To prove how the application commits, I simply captured the SQL execution statistics before, sampled the sessions, then captured the SQL execution statistics after, to see whether the user commits statistic delta during the sampling interval matches the increase in SQL executions.

—-Note to blog reader: this is how I did it, no rocket science (once you arm yourself with the right tools); just run the block as a whole.

select sql_id,executions from v$sql where sql_id in ('29zz9n1b9sppr','ggjkmtv8cxxh3');
@snapper ash=sql_id+event,stats,gather=stw,sinclude=user%commits 20 1 71,192
select sql_id,executions from v$sql where sql_id in ('29zz9n1b9sppr','ggjkmtv8cxxh3');

Results are

SQL> @test1

SQL_ID        EXECUTIONS
------------- ----------
29zz9n1b9sppr      28897
ggjkmtv8cxxh3      28211

Sampling SID 71,192 with interval 100 seconds, taking 1 snapshots...

-- Session Snapper v3.53 by Tanel Poder ( http://blog.tanelpoder.com )

-------------------------------------------------------------------------------------------------------------------------------------
    SID, USERNAME  , TYPE, STATISTIC                                                 ,     HDELTA, HDELTA/SEC,    %TIME, GRAPH
-------------------------------------------------------------------------------------------------------------------------------------
     71, USER_XXX  , STAT, user commits                                              ,         12,        .12,
     71, USER_XXX  , TIME, repeated bind elapsed time                                ,       59us,      .59us,      .0%, |          |
     71, USER_XXX  , TIME, parse time elapsed                                        ,      127us,     1.27us,      .0%, |          |
     71, USER_XXX  , TIME, DB CPU                                                    ,   278.96ms,     2.79ms,      .3%, |          |
     71, USER_XXX  , TIME, sql execute elapsed time                                  ,     36.32s,   363.21ms,    36.3%, |@@@@      |
     71, USER_XXX  , TIME, DB time                                                   ,     40.75s,   407.51ms,    40.8%, |@@@@@     |
     71, USER_XXX  , WAIT, log file sync                                             ,      4.42s,    44.22ms,     4.4%, |@         |
     71, USER_XXX  , WAIT, db file sequential read                                   ,     35.04s,   350.43ms,    35.0%, |@@@@      |
     71, USER_XXX  , WAIT, SQL*Net message to client                                 ,      242us,     2.42us,      .0%, |          |
     71, USER_XXX  , WAIT, SQL*Net message from client                               ,     60.11s,   601.05ms,    60.1%, |@@@@@@    |
     71, USER_XXX  , WAIT, SQL*Net more data from client                             ,     3.94ms,    39.44us,      .0%, |          |
    192, USER_XXX  , STAT, user commits                                              ,         12,        .12,
    192, USER_XXX  , TIME, DB CPU                                                    ,   207.97ms,     2.08ms,      .2%, |          |
    192, USER_XXX  , TIME, sql execute elapsed time                                  ,     30.74s,   307.37ms,    30.7%, |@@@@      |
    192, USER_XXX  , TIME, DB time                                                   ,     35.14s,    351.4ms,    35.1%, |@@@@      |
    192, USER_XXX  , WAIT, log file sync                                             ,       4.4s,    43.98ms,     4.4%, |@         |
    192, USER_XXX  , WAIT, db file sequential read                                   ,     29.95s,   299.51ms,    30.0%, |@@@       |
    192, USER_XXX  , WAIT, SQL*Net message to client                                 ,      113us,     1.13us,      .0%, |          |
    192, USER_XXX  , WAIT, SQL*Net message from client                               ,     64.55s,   645.48ms,    64.5%, |@@@@@@@   |
    192, USER_XXX  , WAIT, SQL*Net more data from client                             ,    19.36ms,   193.62us,      .0%, |          |
--  End of Stats snap 1, end=2012-01-12 22:27:59, seconds=100

---------------------------------------------------------------
Active% | SQL_ID          | EVENT
---------------------------------------------------------------
    28% | 29zz9n1b9sppr   | db file sequential read             |
    12% | ggjkmtv8cxxh3   | db file sequential read             |
    11% | 3b8gt3dqtn345   | db file sequential read             |
     9% |                 | log file sync                       |
     9% | 64djbhqv5v93u   | db file sequential read             |
     4% | dkh6b1nhnhfcu   | db file sequential read             |
     1% | 8zt375nzk26rh   | db file sequential read             |
     1% | b513mwk4mf9rt   | db file sequential read             |
     1% | 0h01fy07y1c1p   | db file sequential read             |

--  End of ASH snap 1, end=2012-01-12 22:27:59, seconds=100, samples_taken=99

PL/SQL procedure successfully completed.

SQL_ID        EXECUTIONS
------------- ----------
29zz9n1b9sppr      28909
ggjkmtv8cxxh3      28223

As you can see, both SQLs' executions increased by 12 (after minus before), and both sessions show a user commits statistics delta of 12, which means there is a commit per execution, and that is what drives the log file sync event.

Having proven that the app is not doing batch commits, I wondered whether this is avoidable in terms of business logic. Doing batch commits would definitely help to increase your overall throughput, plus whole-system health.

So, in my humble opinion, without seeing any of the code logic, we have two options:

1- Providing unique index access for the delete operations by supplying all columns of the index (see the sketch below this list)
2- Decreasing the number of commits by trying to do batch commits.
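To illustrate option 1, a sketch only (I am assuming the ESMP bind maps to the leading index column and that the application can supply values for the remaining key columns):

DELETE FROM APP_OWNER.TABLE_Z
WHERE  COL1 = :1    -- leading column of PK_TABLE_Z (currently the only filter)
AND    COL2 = :2    -- assumed available to the application
AND    COL3 = :3;   -- all key columns supplied -> INDEX UNIQUE SCAN instead of RANGE SCAN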

For you to test all my actions, I have attached a simple synthetic test case to show how these implementations can help us.

Cases tested.

1-Nonbatch commit range scan
2-Nonbatch commit unique scan

3-Batch commit range scan
4-Batch commit unique scan

Timings for the tests, deleting 100K-row sets, are as follows:

Case                     Elapsed   Improvement
nonbatch-commit-range         84            0%
nonbatch-commit-unique        47           44%
batch-commit-range            55           34%
batch-commit-unique           35           58%

END OF THE EMAIL


Note to blog reader (in case you are still reading): I hope you found something interesting about how I used the right tool at the right time,
and how I answered the development questions without talking bollocks, just numbers and proofs. One thing I need to admit: there were only 2 sessions running this operation, which made my life easy.

If you wonder about the duration of the whole diagnosis: it took 15 minutes in total to find out what was going on and 1 hour to write the mail, but that 1 hour saved at least 3-4 rounds of unnecessary email ping-pong, just because nobody else could back their claims with numbers.

Quick Tip: If you are sending output to anybody, use the Courier New font and make everybody's eyes happy.

Session Snapper and its usage can be found at Tanel's website
Test case can be found  here

November 11, 2011

Where is the sql_id of active session?

Filed under: Bugs, Performance, Plan Stability, Tips — coskan @ 3:39 pm

In this post I will give information on quick ways to find the sql_id’s of sessions for which you can’t see any entry when you look at v$session.

I usually see this behavior when a SQL is burning CPU in a nested loop and, after running for a long time, the sql_id column is suddenly not updated anymore. When I tweeted about it, Tanel Poder answered (if I recall it right; damn Twitter does not have proper search and history) something like "this happens when oracle does too many recursive calls and loses track of the sql_id" (Tanel, if you are reading, please correct me if I am wrong about what you said).

On Metalink there are two notes I could find:

Reason for Apparently Incorrect or Null Sql_id in v$sql [ID 406452.1]

This is caused by an architectural change in the 10g release: from 10.2 onwards, the v$sql columns (sql_id, sql_address and so on) will only show the top level user cursor, if any (ie it will not display the recursive cursors).
When using a remote procedure call, such as when an rpc call from a client (eg forms client) requests that a plsql object is executed on the server, there is no top level sql and, consequently, there is no sql_id in v$session.

BUG:5146994 - CANNOT FIND RECURSIVE SQL IN V$SQLAREA FOR A RUNNING SNAPSHOT REFRESH JOB
BUG 5724661 Abstract: WRONG VALUES FOR SQL_ID IN V$SESSION

On Null SQL_ID For Active Sessions In V$Session [ID 1343503.1]

From 10.2 onwards sql_id will only show the top level user cursor ( if any). 

Sometimes,Session does a plsql rpc call ie a client ( eg forms client) can request a plsql object to be executed at the server. In this case there is no top level sql and as such we don't see the sql_id in v$session.

Currently, oracle don't store the current sql thru any v$ views. The only way to find that is thru event 10046.
This is covered in Unpublished bug 5528670: "NULL VALUES FOR SQL_ID, SQL_HASH_VALUE, SQL_ADDRESS AND OTHERS IN V$SESSION", Which was closed as "Not a Bug".

When I have this issue I sometimes get it even for a SQL which I run from SQL*Plus without any PL/SQL call, so I think the limitation in these notes is a bit misleading, or I am interpreting the information wrong. Anyway, finding the reasons needs too much tracing research, which you will never have time to go through when you hit the issue on a production system, so it is best I cut to the chase and explain how to find the sql_id.

The notes say there is no v$ view, which is a bit misleading, and they also say 10046 is the only way to find it, which is another documentation error, ignoring method 2 below.

On 10G there are 2 ways that I know will usually work, one of which needs a tuning and diagnostics pack license while the other does not.

On 11G there is one more way, which again needs a tuning and diagnostics pack license.

Method 1- using ASH/AWR data + v$sql combination (10g-11g license needed)

1a- Find the latest non-null sql_id for the given session and serial# in v$active_session_history

1b- If you can't find it in the first step, because the sql_id information has already aged out of v$active_session_history, then check in a similar way using dba_hist_active_sess_history

2- Once you find the sql_id/plan_hash_value pair, double check it in v$sql: if the number in the users_executing column matches what you see in v$session, then you have what you are looking for (a quick cross-check sketch is below)
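A minimal cross-check sketch (my own quick query, not the @sqlid script used in the demo below; &sql_id is whatever you found in ASH):

select inst_id, sql_id, plan_hash_value, users_executing,
       round(elapsed_time/1e6,1) ela_seconds   -- total elapsed across executions
from   gv$sql
where  sql_id = '&sql_id';

If users_executing is non-zero on the right instance, you very likely have your session's SQL.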

—Method 1 in Action

DATABASE1> @swact

    SID USERN STATE   EVENT                        SEQ# SEC_IN_STATE         P1         P2         P3  SQL_ID         CHN
------- ----- ------- ---------------------- ---------- ------------ ---------- ---------- ----------  ------------- ----
   1352 XXWEB WAITING db file scattered read      40906            0         67    2638509         32                       --->sql_id empty
    294 XXWEB WAITING db file scattered read      16727            0         82     234468         32                       --->sql_id empty
   1263 COSKA WORKING On CPU / runqueue              44            0 1413697536          1          0  f5cqythck1gfz    1

DATABASE1> @last_sql 1352 %

SAMPLE_TIME                           SESSION_ID SESSION_SERIAL# SQL_ID		PLAN_HASH_VALUE
------------------------------------- ---------- --------------- -------------  ---------------
31-JAN-11 11.00.35.524                      1352              72 3p1b93fq81x0f  2118159443	   ----> very likely our sql
31-JAN-11 11.00.34.514                      1352              72 3p1b93fq81x0f  2118159443
31-JAN-11 11.00.33.514                      1352              72 3p1b93fq81x0f  2118159443
31-JAN-11 11.00.32.514                      1352              72 3p1b93fq81x0f  2118159443
31-JAN-11 11.00.31.514                      1352              72 3p1b93fq81x0f  2118159443
31-JAN-11 11.00.30.514                      1352              72 3p1b93fq81x0f  2118159443
31-JAN-11 11.00.29.504                      1352              72 3p1b93fq81x0f  2118159443
31-JAN-11 11.00.28.494                      1352              72 3p1b93fq81x0f  2118159443
31-JAN-11 11.00.27.494                      1352              72 3p1b93fq81x0f  2118159443

9 rows selected.

DATABASE1> @last_sql 294

SAMPLE_TIME                            SESSION_ID SESSION_SERIAL# SQL_ID	PLAN_HASH_VALUE
-------------------------------------- ---------- --------------- ------------- ---------------
31-JAN-11 10.50.40.071                        294              39 3p1b93fq81x0f 2118159443	  ----> very likely our sql
31-JAN-11 10.50.39.061                        294              39 3p1b93fq81x0f 2118159443
31-JAN-11 10.50.38.061                        294              39 3p1b93fq81x0f 2118159443
31-JAN-11 10.50.37.061                        294              39 3p1b93fq81x0f 2118159443
31-JAN-11 10.50.36.061                        294              39 3p1b93fq81x0f 2118159443
31-JAN-11 10.50.35.061                        294              39 3p1b93fq81x0f 2118159443
31-JAN-11 10.50.34.051                        294              39 3p1b93fq81x0f 2118159443
31-JAN-11 10.50.33.051                        294              39 3p1b93fq81x0f 2118159443
31-JAN-11 10.50.32.051                        294              39 3p1b93fq81x0f 2118159443

9 rows selected.

DATABASE1>-----LETS CHECK THE SQL

DATABASE1> @sqlid 3p1b93fq81x0f

SQL_ID        HASH_VALUE SQL_TEXT
------------- ---------- --------------------------------------------------------------------------------------------------------
3p1b93fq81x0f 2894132238 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

 CH#   PLAN_HASH OPT_COST EXECUTIONS    FETCHES ROWS_PROCESSED     CPU_MS     ELA_MS    AVG_ELA       LIOS       PIOS USERS_EXECUTING
----  ---------- -------- ---------- ---------- -------------- ---------- ---------- ---------- ---------- ---------- --------------
   0  2118159443   956322          3          1              0 861533.028    5876120 1958.70667   21851873   21739212          2

The elapsed times plus the users_executing column are saying that these SQLs have been running for a very long time, and they are 99.999% the ones we are looking for.

There are tiny challenges with this approach, but nothing too challenging.

I used it a lot in RAC environments, on systems where multiple users and child cursors were also running,
so if you are careful enough to compare apples with apples (same instance, same plan hash value, right session serial#, longer elapsed time cursor) you are very unlikely to be misled.

The code used for last_sql is below (modified a bit to be RAC-aware):


select inst_id,sample_time,session_id,session_serial#,sql_id from (
select inst_id,sample_time,session_id,session_serial#,sql_id from
gv$active_session_history
where
sql_id is not null
 and session_id=&1 and inst_id=&2
 order by sample_time desc
) where rownum < 10;   -- keep only the most recent samples

Method 2- Oradebug dump error stack (10g-11g, no license needed). This is the most reliable and certain way; it is 100% accurate, and the good part is it does not need any input from me, since Tanel Poder already covered how to go through the error stack in his post how to read errorstack output.
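For reference, a minimal sketch of taking the dump from SQL*Plus (the OS PID here is just a placeholder for the server process of the session that lost its sql_id):

SQL> connect / as sysdba
SQL> oradebug setospid 12345          ---> placeholder: OS PID of the target server process
SQL> oradebug unlimit                 ---> lift the trace file size limit
SQL> oradebug dump errorstack 3       ---> level 3 includes the process state with cursor details
SQL> oradebug tracefile_name          ---> shows where the trace was written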

I tried 10046 and never saw any sql_id information written during the 10 minutes I gathered the trace, so I'm not 100% sure whether the whole call needs to finish before it writes the sql_id information. I personally won't use 10046 for this case.

Method 3- SQL Monitoring (11g onwards, license needed). This is my favourite; it is damn quick, and 99.9% of the time it works if the SQL was running long enough to get monitored before the track was lost. It is damn simple, like this piece of code:

select distinct inst_id,sid,status,sql_id,sql_plan_hash_value,sql_child_address,sql_exec_id
 from gv$sql_plan_monitor
 where sid in (&1) and status='EXECUTING';

—Method 3 in Action

DATABASE1> @swact

    SID USERN STATE   EVENT                        SEQ# SEC_IN_STATE         P1         P2         P3  SQL_ID         CHN
------- ----- ------- ---------------------- ---------- ------------ ---------- ---------- ----------  ------------- ----
   1352 XXWEB WAITING db file scattered read      40906            0         67    2638509         32                       --->sql_id empty
    294 XXWEB WAITING db file scattered read      16727            0         82     234468         32                       --->sql_id empty
   1263 COSKA WORKING On CPU / runqueue              44            0 1413697536          1          0  f5cqythck1gfz    1

---Check sql_monitoring
SQL> @mon_sql 1352,294

INST_ID  SID STATUS              SQL_ID        SQL_PLAN_HASH_VALUE SQL_CHILD_ADDRES SQL_EXEC_ID
-------	---- ------------------- ------------- ------------------- ---------------- -----------
1        1352 EXECUTING           3p1b93fq81x0f          2118159443 00000002D88378E8 16777216
1        1352 EXECUTING           2kgnch9h5hbkz          1874321290 00000002D6223418 16777216  ---parent pl/sql
1         294 EXECUTING           3p1b93fq81x0f          2118159443 00000002D88378E8 16777217
1         294 EXECUTING           2kgnch9h5hbkz          1874321290 00000002D6223418 16777217

The second sql_id, 2kgnch9h5hbkz, is from the PL/SQL parent call which was calling 3p1b93fq81x0f, so they both get monitored by v$sql_monitor.

Apart from all these, if you have a diagnostics pack license you can always use OEM/Grid Control to check back over the session's history, but compared with querying v$sql_monitor it is as slow as driving an electric car when you have a Ferrari.

As you can see, there are more ways than listed if your session has lost track of the sql_ids. If you have more ways (especially with tracing, like a dump sql option if such a thing exists), I would really like to hear them.

Update after first comment

Method-4 Using v$open_cursor

In the first comment Michael Dinh mentioned he uses Toad's open cursors option to hunt down the SQLs, but he was not sure about sqlplus.
I had a chance to check on one of our systems. It looks like v$open_cursor gives us information which is not as clear as the other methods, but it can still be very
useful for Standard Edition or users without a diagnostics pack license, and it can also be easier than an error stack dump, which usually needs a change request on PRD systems.

SQL> @sw 207,980

    SID USERN STATE   EVENT                    SEQ# SEC_IN_STATE SQL_ID         CHN                        P1                        P2                        P3
------- ----- ------- ------------------ ---------- ------------ ------------- ---- ------------------------- ------------------------- -------------------------
    207 XXXXX WORKING On CPU / runqueue       62033            3                    cellhash#=203CB5D5         diskhash#= 349428727      bytes= 16384
    980 XXXXX WORKING On CPU / runqueue         759            1                    cellhash#=D84BEC46         diskhash#=56AC941E       bytes= 16384

SQL> @mon_sql 207,980

   INST_ID        SID STATUS              SQL_ID        SQL_PLAN_HASH_VALUE SQL_CHILD_ADDRES SQL_EXEC_ID
---------- ---------- ------------------- ------------- ------------------- ---------------- -----------
         2        980 EXECUTING           91uc7cvw2rqab          4166729024 000000040D4560F8    33554458
         2        207 EXECUTING           91uc7cvw2rqab          4166729024 000000040D4560F8    33554456

SQL> select * from v$open_cursor where sid in (207,980) order by 2;

SADDR                   SID USER_NAME  ADDRESS          HASH_VALUE SQL_ID        SQL_TEXT LAST_SQL_ SQL_EXEC_ID CURSOR_TYPE
---------------- ---------- ---------- ---------------- ---------- ------------- -------- --------- ----------- -----------------------------------
00000004508BA640        207 XXXXXXX    00000003A72340A0 2292456266 gw771824a86ua Begin PK              33554456 OPEN
00000004508BA640        207 XXXXXXX    00000003E6F0C230 2181403846 5bvw0ka10b566 Begin PK                       OPEN
00000004508BA640        207 XXXXXXX    000000040E81D128 4111302712 6k03snbuhv01s Begin PK                       OPEN
00000004508BA640        207 XXXXXXX    00000003EA88E4F0 1535077424 68u7tx5dryv1h Begin PK                       OPEN
00000004508BA640        207 XXXXXXX    00000004592E3690  914163366 4vs91dcv7u1p6 insert i                       OPEN-RECURSIVE
00000004508BA640        207 XXXXXXX    0000000411FC64C0 1407509578 8nwyam59y9t2a Begin PK                       OPEN
00000004508BA640        207 XXXXXXX    00000004117E1340 2399958818 62h75aq7hsxt2 Begin PK                       OPEN
00000004508BA640        207 XXXXXXX    00000004125C3DC0 4163623243 91uc7cvw2rqab INSERT I              33554456 OPEN-PL/SQL
0000000452AA1BB8        980 XXXXXXX    00000004125C3DC0 4163623243 91uc7cvw2rqab INSERT I              33554458 OPEN-PL/SQL
0000000452AA1BB8        980 XXXXXXX    00000004592E3690  914163366 4vs91dcv7u1p6 insert i                       OPEN-RECURSIVE
0000000452AA1BB8        980 XXXXXXX    000000040E81D128 4111302712 6k03snbuhv01s Begin PK                       OPEN
0000000452AA1BB8        980 XXXXXXX    0000000411FC64C0 1407509578 8nwyam59y9t2a Begin PK                       OPEN
0000000452AA1BB8        980 XXXXXXX    00000003E6F0C230 2181403846 5bvw0ka10b566 Begin PK                       OPEN
0000000452AA1BB8        980 XXXXXXX    00000004117E1340 2399958818 62h75aq7hsxt2 Begin PK                       OPEN
0000000452AA1BB8        980 XXXXXXX    00000003EA88E4F0 1535077424 68u7tx5dryv1h Begin PK                       OPEN
0000000452AA1BB8        980 XXXXXXX    00000003A72340A0 2292456266 gw771824a86ua Begin PK              33554458 OPEN

Once we spot the SQLs, we need to check all of them one by one in v$sql to see if the users_executing column (UEX) is populated.

SQL> @sqlid gw771824a86ua   ---->bingo with two users and high ELAPSED TIMES. this is a pl/sql package calling the sql below

SQL_ID        HASH_VALUE SQL_TEXT
------------- ---------- --------------------------------------------------------------------------------------------------------------------------------
gw771824a86ua 2292456266 xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

INS  CH#  UEX  PLAN_HASH   OPT_COST        CPU_MS     ELA_MS    AVG_ELA OFFLOAD IO_SAVED_% IO_ELIG_GB IO_INTER_GB       LIOS       PIOS
--- ---- ---- ---------- ----------  - ---------- ---------- ---------- ------- ---------- ---------- ----------- ---------- ----------
  2    0    2          0          0     412784696  414481976  16579.279 No             .00        .00      58.007 2653518789    3248794
  3    0    0          0          0    3215639.15 3307820.05 1653.91002 No             .00        .00       3.212 1480568056     184192

----
---- Tried for all other SQLs but they do not have the users_executing column populated
----

SQL> @sqlid 91uc7cvw2rqab   -->two users and high ELAPSED TIMES on 4th child

SQL_ID        HASH_VALUE SQL_TEXT
------------- ---------- -----------------------------------------------------------------------------------------------------------------------------------
91uc7cvw2rqab 4163623243 xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

INS  CH#  UEX  PLAN_HASH   OPT_COST       CPU_MS     ELA_MS    AVG_ELA OFFLOAD IO_SAVED_% IO_ELIG_GB IO_INTER_GB       LIOS       PIOS
--- ---- ---- ---------- ----------   ---------- ---------- ---------- ------- ---------- ---------- ----------- ---------- ----------
  2    0    0 4166729024      13081   91245882.5 91690487.2 18338.0974 No             .00        .00      11.644 3739546208     657923
  2    1    0 4166729024      13081    120056449  120491925 15061.4907 No             .00        .00      15.889 1250311783     883494
  2    2    0 4166729024      13081   64616814.8 64899824.1 12979.9648 No             .00        .00      10.564  424413836     587066
  2    3    0 4166729024      13081   58282307.7 58454183.2 19484.7277 No             .00        .00       5.442 1809526416     303954
  2    4    2 4166729024      13081   78886102.5 79233558.3 19808.3896 No             .00        .00      13.555 4163654789     756589

Thanks Michael for the information; now we have one more method which does not need a license.

December 6, 2010

Alert Log Monitoring script via ADRCI

Filed under: How To, Tips — coskan @ 4:47 pm

Before I start writing the blog series I have in mind (which will be the basis of my first presentation, also still in my mind), I would like to share a simple alert log monitoring approach with adrci.

At first I was completely against the Diagnostics Dest idea, but after a bit of searching and learning new tips and tricks of the adrci command, I think I am now a big fan of diagnostics dest. Below is the script I wrote to monitor the alert log hourly, with a daily summary. The script is self-explanatory; the only thing it does is get the ADRCI homes, put them in a bash array, and grep for ORA- and TNS- errors from the last hour or the last day. It is not rocket science; it is based on a simple command which can also be added to your profiles as an alias.

# last day
adrci exec="set home ${adrci_home}; show alert -p \\\"message_text like '%ORA-%' and originating_timestamp > systimestamp-1\\\""
# last hour
adrci exec="set home ${adrci_home}; show alert -p \\\"message_text like '%ORA-%' and originating_timestamp > systimestamp-1/24\\\""

I am pretty sure this could be shortened but I did not bother to shorten it so I leave it to your imagination:)

I did not add listener checks to the hourly checks, only to the daily summary, because there can be many of them, which would cause you to get mail every hour.

The good part of using ADRCI with hourly checks is that you do not need to edit the log file to get rid of an error, because you will never see that alert again in the next hour's slot.

The script sends the output to files and reads those files back as the mail content; I am sure there may be a better way to do this.

Complete script

#################################################
###### ALERT LOG CHECKING VIA ADRCI #############
#################################################
# Author : Coskan Gundogar
# Version Date: 02/12/2010
# Usage :  
# To run for last 24 hours - ./check_alert.sh D
# To run for last hour - ./check_alert.sh 
# Edit variables below for moving between servers 
# Changes To the Script 
#

export ORACLE_SID=+ASM1
export ORACLE_HOME=/u01/crs/oracle/product/11.2.0/grid/
export PATH=$PATH:$ORACLE_HOME/bin
DAILY_LOG=/home/oracle/bin/alert_log_check_daily.txt
HOURLY_LOG=/home/oracle/bin/alert_log_check_hourly.txt
MAIL_SUBJ="PRD:WARNING HOST: " 
MAIL_RECIPIENT="your_dba_group@your_company.com"

HOST_NAME=`hostname -a`


if [ "$1" = "D" ]
then

		############################################
		###############DAILY CHECKS ################
		############DBMS AND ASM CHECK##############
		############################################

		adrci_homes=( $(adrci exec="show homes" | grep -e rdbms -e asm))

		echo '####################################################' > $DAILY_LOG 
		echo '####### ALERT LOG OUTPUT FOR LAST 24 HOURS #########' >> $DAILY_LOG
		echo '####################################################' >> $DAILY_LOG 
		echo ''  >> $DAILY_LOG 
		echo ''  >> $DAILY_LOG 
		echo ''  >> $DAILY_LOG 

		for adrci_home in ${adrci_homes[@]}
		do 
			echo $adrci_home' Alert Log' >> $DAILY_LOG 
			adrci exec="set home ${adrci_home}; show alert -p \\\"message_text like '%ORA-%' and originating_timestamp > systimestamp-1\\\"" -term >> $DAILY_LOG 
		done


		############################################
		############## DAILY CHECKS ################
		############# LISTENER  CHECK###############
		############################################


		adrci_lsnr_homes=( $(adrci exec="show homes" | grep -e tnslsnr))

		echo ''  >> $DAILY_LOG 
		echo ''  >> $DAILY_LOG 
		echo ''  >> $DAILY_LOG 
		echo '####################################################' >> $DAILY_LOG 
		echo '###### LISTENER LOG OUTPUT FOR LAST 24 Hours #######' >> $DAILY_LOG
		echo '####################################################' >> $DAILY_LOG 
		echo ''  >> $DAILY_LOG 
		echo ''  >> $DAILY_LOG 
		echo ''  >> $DAILY_LOG 


		for adrci_lsnr_home in ${adrci_lsnr_homes[@]}
		do 
			echo $adrci_lsnr_home' Listener Log' >> $DAILY_LOG 
			adrci exec="set home ${adrci_lsnr_home}; show alert -p \\\"message_text like '%TNS-%' and originating_timestamp > systimestamp-1\\\""  -term >> $DAILY_LOG 
		done
		

		num_errors=`grep -c -e 'TNS' -e 'ORA' $DAILY_LOG`
		if [ $num_errors != 0 ]
		then 
		MAIL_SUBJ=$MAIL_SUBJ$HOST_NAME" Errors Found in Daily Alert Summary"
		mailx -s "$MAIL_SUBJ" $MAIL_RECIPIENT < $DAILY_LOG
		fi

else

		############################################
		############## HOURLY CHECKS ###############
		############DBMS AND ASM CHECK##############
		############################################

		adrci_homes=( $(adrci exec="show homes" | grep -e rdbms -e asm))

		echo '####################################################' > $HOURLY_LOG 
		echo '######### ALERT LOG OUTPUT FOR LAST HOUR ###########' >> $HOURLY_LOG
		echo '####################################################' >> $HOURLY_LOG 
		echo ''  >> $HOURLY_LOG 
		echo ''  >> $HOURLY_LOG 
		echo ''  >> $HOURLY_LOG 

		for adrci_home in ${adrci_homes[@]}
		do 
			echo $adrci_home' Alert Log' >> $HOURLY_LOG 
			adrci exec="set home ${adrci_home}; show alert -p \\\"message_text like '%ORA-%' and originating_timestamp > systimestamp-1/24\\\"" -term >> $HOURLY_LOG 
		done


		num_errors=`grep -c -e 'TNS' -e 'ORA' $HOURLY_LOG`
		if [ $num_errors != 0 ]
		then 
		MAIL_SUBJ=$MAIL_SUBJ$HOST_NAME" Errors Found in Hourly Alert Summary"
		mailx -s "$MAIL_SUBJ" $MAIL_RECIPIENT < $HOURLY_LOG
		fi

fi

September 14, 2010

Easy/Fast Navigation with aliases in profiles

Filed under: Linux, Tips — coskan @ 8:45 pm

Because I am a lazy guy who wants to do everything fast, I really like having aliases in user profiles. When I log in on a box with different versions of Oracle Homes and multiple Oracle databases on the same machine, I feel naked without aliases. I am a guy who wants to tail the alert log (or any related log) while I am doing something, and not having an alias for tailing not only slows me down, it also makes me really annoyed within 5 seconds. At my current job I suffered for the first week, then immediately requested a "standard" profile, got approval from my manager, and wrote/modified the one below. Most of the ideas are taken from John Hallas's excellent blog entry, The use of functions in a .profile file. I needed to modify it because we have 10G databases with 11GR2 GI for ASM, plus some more modifications for RAC. (Probably most of you have already realized that I am good at modifying code but rubbish at writing my own; an old habit from university, I never liked programming and only needed to pass some classes :))

Here are the two functions I added to the profiles. The first one (setsid) sets the environment for a database listed in /etc/oratab; the second one (showsid) shows the aliases you can use after you call setsid. A couple of things to bear in mind: setsid assumes you have $ORACLE_BASE, which is hardcoded to /u01/app/oracle in the script, and it also assumes you use ORACLE_SID for the alert log directories. The oradiag and ldiag aliases are set by default because we use the 11G diag home on all the boxes. You may also need to check the ora_version variable, as it may not suit your settings. (OFA is needed for everything to run without modification.)

SETSID

setsid ()
{
unset ORATAB
unset ORACLE_BASE
unset ORACLE_HOME
unset ORACLE_SID
ORACLE_BASE=/u01/app/oracle
ORATAB=/etc/oratab
if tty -s
then
        if [ -f $ORATAB ]
        then
                line_count=`cat $ORATAB | grep -v ^# | sed 's/:.*//' | wc -l`
                # check that the oratab file has some contents
                if [ $line_count -ge 1 ]
                        then
                        sid_selected=0
                        while [ $sid_selected -eq 0 ]
                        do
                                sid_available=0
                                for i in `cat $ORATAB | grep -v ^# | sed 's/:.*//'`
                                        do
                                        sid_available=`expr $sid_available + 1`
                                        sid[$sid_available]=$i
                                        done
                                # get the required SID
                                case ${SETSID_AUTO:-""} in
                                        YES) # Auto set use 1st entry
                                        sid_selected=1 ;;
                                        *)
                                        i=1
                                        while [ $i -le $sid_available ]
                                        do
                                                printf "%2d- %10s\n" $i ${sid[$i]}
                                                i=`expr $i + 1`
                                        done
                                        echo ""
                                        echo "Select the Oracle SID with given number [1]:"
                                        read entry
                                        if [ -n "$entry" ]
                                        then
                                                entry=`echo "$entry" | sed "s/[a-z,A-Z]//g"`
                                                if [ -n "$entry" ]
                                                then
                                                        entry=`expr $entry`
                                                        if [ $entry -ge 1 ] && [ $entry -le $sid_available ]
                                                        then
                                                                sid_selected=$entry
                                                        fi
                                                fi
                                                else
                                                sid_selected=1
                                        fi
                                esac
                        done
                        #
                        # SET ORACLE_SID
                        #
                        export ORACLE_SID=${sid[$sid_selected]}
                        echo "Your profile configured for $ORACLE_SID with information below:"
                        echo ""
			export PATH=/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/bin
                        unset LD_LIBRARY_PATH
                        ORAENV_ASK=NO
                        . oraenv
                        alias oh="cd ${ORACLE_HOME}"
                        ora_version=`echo $ORACLE_HOME | awk 'BEGIN {FS="/"} ; {print $6}' | awk 'BEGIN {FS="."} ; {print $1}'`
                        if [ $ora_version -lt 11 ]
			then
                        alias bdump="cd /u01/app/oracle/admin/${ORACLE_SID//[!A-Za-z]}/bdump"
                        alias talert="tail -f /u01/app/oracle/admin/${ORACLE_SID//[!A-Za-z]}/bdump/alert_${ORACLE_SID}.log"
			alias valert="view /u01/app/oracle/admin/${ORACLE_SID//[!A-Za-z]}/bdump/alert_${ORACLE_SID}.log"
                        fi
                        if [ $ora_version -ge  11 ]
                        then
			export ORACLE_UNQNAME=$ORACLE_SID
                        alias talert='tail -f $ORACLE_BASE/`adrci exec="show homes" | grep $ORACLE_SID`/trace/alert_${ORACLE_SID}.log'
			alias valert='view $ORACLE_BASE/`adrci exec="show homes" | grep $ORACLE_SID`/trace/alert_${ORACLE_SID}.log'
                        alias alasm='cd $ORACLE_BASE/`adrci exec="show homes" | grep ASM`/trace'
                        alias bdump='cd $ORACLE_BASE/`adrci exec="show homes" | grep $ORACLE_SID`/trace'
                        fi
                        alias pmon='ps -fu oracle | grep pmon | grep -v grep'
			alias tns='ps -fu oracle | grep tns | grep -v grep'
                        alias oradiag='cd $ORACLE_BASE/diag/'
			alias ldiag='cd $ORACLE_BASE/diag/tnslsnr/'
			unset ORAENV_ASK
                        echo
                        #
                        #GIVE MESSAGE
                        #
                        else
                        echo "No entries in $ORATAB. no environment set"
                fi
        fi
fi
}

SHOWSID

showsid()
{
echo ""
echo "ORACLE_SID=$ORACLE_SID"
echo "ORACLE_BASE=$ORACLE_BASE"
echo "ORACLE_HOME=$ORACLE_HOME"
echo "alias oh=cd ${ORACLE_HOME}"
echo "alias pmon='ps -fu oracle | grep pmon | grep -v grep'"
echo "alias tns='ps -fu oracle | grep tns | grep -v grep'"
ora_version=`echo $ORACLE_HOME | awk 'BEGIN {FS="/"} ; {print $6}' | awk 'BEGIN {FS="."} ; {print $1}'`
if [ $ora_version -ge  11 ]
then
echo "alias talert='tail -f $ORACLE_BASE/`adrci exec="show homes" | grep $ORACLE_SID`/trace/alert_${ORACLE_SID}.log'"
echo "alias valert='view $ORACLE_BASE/`adrci exec="show homes" | grep $ORACLE_SID`/trace/alert_${ORACLE_SID}.log'"
echo "alias bdump='cd $ORACLE_BASE/`adrci exec="show homes" | grep $ORACLE_SID`/trace'"
echo "alias alasm='cd $ORACLE_BASE/`adrci exec="show homes" | grep ASM`/trace'"
echo "alias ldiag='cd $ORACLE_BASE/diag/tnslsnr/'"
echo "alias oradiag='cd $ORACLE_BASE/diag/'"
fi
if [ $ora_version -lt 11 ]
then
echo "alias bdump=cd /u01/app/oracle/admin/${ORACLE_SID//[!A-Za-z]}/bdump"
echo "alias talert=tail -f \"/u01/app/oracle/admin/${ORACLE_SID//[!A-Za-z]}/bdump/alert_${ORACLE_SID}.log\""
echo "alias valert=\"view /u01/app/oracle/admin/${ORACLE_SID//[!A-Za-z]}/bdump/alert_${ORACLE_SID}.log\""
echo "alias ldiag='cd $ORACLE_BASE/diag/tnslsnr/'"
fi
echo ""
}

Usage : (no setting when showsid runs for the first time)

gundogar@SNOW:~#cosarac1
oracle@192.168.1.111's password:
Last login: Sun Sep 12 14:31:24 2010 from cosarac1.oel.com
[oracle@cosarac1 ~]$ showsid

ORACLE_SID=
ORACLE_BASE=
ORACLE_HOME=
alias oh=cd
alias pmon='ps -fu oracle | grep pmon | grep -v grep'
alias tns='ps -fu oracle | grep tns | grep -v grep'
-bash: [: -ge: unary operator expected
-bash: [: -lt: unary operator expected

[oracle@cosarac1 ~]$ setsid
 1-      +ASM1
 2-    oracos1

Select the Oracle SID with given number [1]:
2
Your profile configured for oracos1 with information below:

The Oracle base for ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 is /u01/app/oracle/product/10.2.0/db_1

[oracle@cosarac1 ~]$ showsid

ORACLE_SID=oracos1
ORACLE_BASE=/u01/app/oracle/product/10.2.0/db_1
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
alias oh=cd /u01/app/oracle/product/10.2.0/db_1
alias pmon='ps -fu oracle | grep pmon | grep -v grep'
alias tns='ps -fu oracle | grep tns | grep -v grep'
alias bdump=cd /u01/app/oracle/admin/oracos/bdump
alias talert=tail -f "/u01/app/oracle/admin/oracos/bdump/alert_oracos1.log"
alias valert="view /u01/app/oracle/admin/oracos/bdump/alert_oracos1.log"

[oracle@cosarac1 ~]$ bdump
[oracle@cosarac1 bdump]$ pwd
/u01/app/oracle/admin/oracos/bdump
[oracle@cosarac1 bdump]$ pmpn
-bash: pmpn: command not found
[oracle@cosarac1 bdump]$ pmon
oracle    2860     1  0 Sep10 ?        00:00:21 asm_pmon_+ASM1
oracle   19925     1  0 Sep12 ?        00:00:12 ora_pmon_oracos1
[oracle@cosarac1 bdump]$ oh
[oracle@cosarac1 db_1]$ pwd
/u01/app/oracle/product/10.2.0/db_1
[oracle@cosarac1 db_1]$ tns
oracle    3287     1  0 Sep10 ?        00:00:05 /u01/crs/oracle/product/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle   26861     1  0 Sep13 ?        00:00:06 /u01/crs/oracle/product/11.2.0/grid/bin/tnslsnr LISTENER -inherit
[oracle@cosarac1 db_1]$ talert
Tue Sep 14 18:25:07 2010
Thread 1 advanced to log sequence 70 (LGWR switch)
  Current log# 2 seq# 70 mem# 0: +DATA/oracos/onlinelog/group_2.257.729388121
  Current log# 2 seq# 70 mem# 1: +FRA/oracos/onlinelog/group_2.258.729388123

Believe it or not, after a year these settings will have saved you a lot of time once you start using them.

If you have problems with formatting please contact me so I can send you the txt file.

May 27, 2010

Flashback log generation with select statements

Filed under: Basics, Tips — coskan @ 3:50 pm

There is a question on OTN Forums about excessive flashback log generation with select statements. Basically the OP says flashback logs are generated even when they only run selects.

There are different answers in the thread, some of which may be true or false, but when I first read the question I immediately wondered whether delayed block cleanout has a similar effect on flashback log generation, so I tested it.

I initially thought v$sesstat counters might reveal something, but no luck; then I checked v$flashback_database_stat for the generated flashback data size, and I think I hit the jackpot.
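For what it's worth, the cleanout work done by the select can also be confirmed from your own session's counters; a minimal sketch (run it before and after the select and compare the deltas):

select n.name, s.value
from   v$mystat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name in ('deferred (CURRENT) block cleanout applications',
                  'immediate (CR) block cleanout applications',
                  'cleanouts only - consistent read gets');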

HR@ORACOS> select * from v$flashback_database_stat;

BEGIN_TIME        END_TIME          FLASHBACK_DATA    DB_DATA  REDO_DATA ESTIMATED_FLASHBACK_SIZE
----------------- ----------------- -------------- ---------- ---------- ------------------------
20100527 15:32:53 20100527 15:50:16      875266048 1207132160 2038729728                        0
20100527 14:32:50 20100527 15:32:53      248160256  127295488  450139648               1.3215E+10
20100527 13:32:48 20100527 14:32:50       10452992   15646720    4400640               1.5549E+10
20100527 12:32:43 20100527 13:32:48      745693184  948461568 1311620608               2.2789E+10
20100527 11:25:56 20100527 12:32:43     1262026752 1984741376 2358546432               2.7212E+10

HR@ORACOS> set autotrace traceonly statistics
HR@ORACOS>  update base_table_np set y='INVALID';
 commit;

4021808 rows updated.

Statistics
----------------------------------------------------------
       2512  recursive calls
    8341430  db block gets
    4069140  consistent gets
     120569  physical reads
 1908471980  redo size
        848  bytes sent via SQL*Net to client
        793  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
    4021808  rows processed

HR@ORACOS> set autotrace off;
HR@ORACOS> select * from v$flashback_database_stat;

HR@ORACOS>
BEGIN_TIME        END_TIME          FLASHBACK_DATA    DB_DATA  REDO_DATA ESTIMATED_FLASHBACK_SIZE
----------------- ----------------- -------------- ---------- ---------- ------------------------
20100527 15:32:53 20100527 16:00:36     1236664320 2021974016 4019910656                        0
20100527 14:32:50 20100527 15:32:53      248160256  127295488  450139648               1.3215E+10
20100527 13:32:48 20100527 14:32:50       10452992   15646720    4400640               1.5549E+10
20100527 12:32:43 20100527 13:32:48      745693184  948461568 1311620608               2.2789E+10
20100527 11:25:56 20100527 12:32:43     1262026752 1984741376 2358546432               2.7212E+10

HR@ORACOS> set autotrace traceonly statistics
HR@ORACOS> select * from base_table_np;
4021808 rows selected.

Statistics
----------------------------------------------------------
        139  recursive calls
          0  db block gets
      53908  consistent gets
       4404  physical reads
    1652384  redo size							------->DELAYED BLOCKS CLEANOUTS
  175008833  bytes sent via SQL*Net to client
      88996  bytes received via SQL*Net from client
       8045  SQL*Net roundtrips to/from client
          4  sorts (memory)
          0  sorts (disk)
    4021808  rows processed

HR@ORACOS> set autotrace off
HR@ORACOS> select * from v$flashback_database_stat;    ------>CHECK THE increase in FLASHBACK_DATA
HR@ORACOS>
BEGIN_TIME        END_TIME          FLASHBACK_DATA    DB_DATA  REDO_DATA ESTIMATED_FLASHBACK_SIZE
----------------- ----------------- -------------- ---------- ---------- ------------------------
20100527 15:32:53 20100527 16:01:11     1305264128 2054594560 4021728256                        0
20100527 14:32:50 20100527 15:32:53      248160256  127295488  450139648               1.3215E+10
20100527 13:32:48 20100527 14:32:50       10452992   15646720    4400640               1.5549E+10
20100527 12:32:43 20100527 13:32:48      745693184  948461568 1311620608               2.2789E+10
20100527 11:25:56 20100527 12:32:43     1262026752 1984741376 2358546432               2.7212E+10

I am still not 100% convinced, because I have no idea what is going on behind the scenes, and I don't have Julian Dyke's (ppt file) wisdom/time to go deeper into the issue, so I stopped here.

If you have something to say about the issue, comments are welcome.

Tested on 11.2.0.1 with single active instance

July 27, 2009

When is my password file gets updated?

Filed under: Basics, Tips — coskan @ 3:51 pm

Until I read this oracle-l thread, I thought the password file was only updated/touched when the sys user password is changed; however, that is not the only case. As you can see below, it is updated/touched whenever a user password is changed; interestingly, the password file is not updated/touched for new user creations. My question is: why does Oracle update/touch the file for regular user passwords, and why doesn't it update/touch it for user creation? If anybody knows, can you please leave a comment.

SQL> host dir C:\oracle\product\product\11.1.0\db_1\database\PWDORACOS.ORA
 Volume in drive C is System
 Volume Serial Number is 303E-6F73

 Directory of C:\oracle\product\product\11.1.0\db_1\database

23/07/2009  00:01             2,560 PWDORACOS.ORA
               1 File(s)          2,560 bytes
               0 Dir(s)   5,201,625,088 bytes free

SQL> create user passw identified by passw;

User created.

SQL> host dir C:\oracle\product\product\11.1.0\db_1\database\PWDORACOS.ORA
 Volume in drive C is System
 Volume Serial Number is 303E-6F73

 Directory of C:\oracle\product\product\11.1.0\db_1\database

23/07/2009  00:01             2,560 PWDORACOS.ORA
               1 File(s)          2,560 bytes
               0 Dir(s)   5,201,625,088 bytes free

SQL> alter user passw identified by passw;

User altered.

SQL> host dir C:\oracle\product\product\11.1.0\db_1\database\PWDORACOS.ORA
 Volume in drive C is System
 Volume Serial Number is 303E-6F73

 Directory of C:\oracle\product\product\11.1.0\db_1\database

27/07/2009  16:47             2,560 PWDORACOS.ORA
               1 File(s)          2,560 bytes
               0 Dir(s)   5,201,625,088 bytes free

Footnote: the explanation below came via mail from a reader and looks logical.
—-
Anytime a password is changed, Oracle opens the password file to check if the user is a DBA user, and if so it updates the file. If not, obviously Oracle does not update it. When Oracle opens the file to do the check, it does so in read/write mode rather than in read-only mode. This is because, if the user does happen to be a DBA user, it would be more work to close the file in read mode, reopen it in read/write mode and then perform the update.
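As a side note, you can see which users are actually recorded in the password file (i.e. the SYSDBA/SYSOPER grantees the check above cares about) with a standard view:

-- users recorded in the password file (SYSDBA/SYSOPER grantees)
select username, sysdba, sysoper from v$pwfile_users;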

June 30, 2009

How to explain plan on views when you don’t have access on base tables

Filed under: PL/SQL, Tips — coskan @ 11:28 am

One of my friends asked how developers can see the explain plan on views without having access to the base tables.

After a small search on Oracle Forums I found John Spencer's solution.

Basically, he creates a procedure to run explain plan and grants execute access on that procedure.

This is the procedure

CREATE OR REPLACE PROCEDURE explain_it (p_plan_id IN VARCHAR2,
p_sql IN VARCHAR2,
p_plan OUT my_types.expl_cur) AS
l_sqlstr VARCHAR2(32767);
BEGIN
l_sqlstr := 'SELECT LPAD('' '',2*(LEVEL-1))||OPERATION||'' ''||OPTIONS||'' ''||OBJECT_NAME||'' ''||';
l_sqlstr := l_sqlstr||'DECODE(ID,0,''COST = ''||POSITION) "QUERY PLAN" FROM plan_table ';
l_sqlstr := l_sqlstr||'START WITH ID = 0 AND STATEMENT_ID = :b1 ';
l_sqlstr := l_sqlstr||'CONNECT BY PRIOR ID = PARENT_ID AND STATEMENT_ID = :b2';

EXECUTE IMMEDIATE 'EXPLAIN PLAN SET statement_id = '''||p_plan_id||''' FOR '||p_sql;

OPEN p_plan FOR l_sqlstr USING p_plan_id, p_plan_id;
END;
/

or to use with dbms_xplan.display


CREATE OR REPLACE PROCEDURE explain_it (
p_sql IN VARCHAR2,
p_plan OUT sys_refcursor) AS
l_sqlstr VARCHAR2(32767);
BEGIN
l_sqlstr := 'select * from table (dbms_xplan.display)';
EXECUTE IMMEDIATE 'EXPLAIN PLAN FOR '||p_sql;
OPEN p_plan FOR l_sqlstr ;
END;
/
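One note before the demo: the first version of the procedure references a my_types.expl_cur cursor type which is not shown in the excerpt; a minimal definition (my assumption of what the original used) would be:

CREATE OR REPLACE PACKAGE my_types AS
  TYPE expl_cur IS REF CURSOR;   -- weak ref cursor returned to the caller
END my_types;
/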

and this is how it works

SQL> create user read_only identified by read_only
  2  ;

User created.

SQL> grant create session to read_only;

Grant succeeded.

SQL> create table exp_test as select * from user_objects where rownum<1000;

Table created.

SQL> create view v_exp_test as select * from exp_test;

View created.

SQL> grant select on v_exp_test to read_only;

Grant succeeded.

SQL> grant execute on explain_it to read_only;

Grant succeeded.

----connect with read_only

SQL> connect read_only/read_only
Connected.

SQL> select count(*) from hr.v_exp_test;

  COUNT(*)
----------
       156

SQL> explain plan for select count(*) from hr.v_exp_test;
explain plan for select count(*) from hr.v_exp_test
                                         *
ERROR at line 1:
ORA-01039: insufficient privileges on underlying objects of the view

---TRY WITH NEW PROCEDURE

SQL> var my_cur refcursor;
SQL> EXEC hr.explain_it('SELECT * FROM hr.v_exp_test',:my_cur);

PL/SQL procedure successfully completed.

SQL> print my_cur;

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
-------------------------------------------------
Plan hash value: 3488715207

------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |   156 | 29640 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| EXP_TEST |   156 | 29640 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement

12 rows selected.

