Thursday, August 25, 2005

10053 - The Optimizer dump

Many seasoned Oracle developers will be familiar with the 10046 event, used to set trace levels that include WAITs and/or BINDs in the trace file output.

There is another event, 10053, which produces a dump from the query optimizer. While it isn't a particularly useful tool in day-to-day operations, it can be instructive to look at a dump to see the optimizer's point of view.

A dump can be obtained with:

ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

Then run the query and fetch the file from the directory given by USER_DUMP_DEST, where your normal trace files are written.
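As a fuller sketch of the round trip (note that the statement must undergo a hard parse, i.e. not already be in the shared pool, for a dump to be produced):

```sql
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- Run the statement to be analysed; only a hard parse triggers the dump
SELECT * FROM test_a WHERE col_b LIKE 'Tue%';

-- Switch the event off again
ALTER SESSION SET EVENTS '10053 trace name context off';

-- The dump lands in the same directory as ordinary trace files
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';
```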

The query and optimizer parameters

Looking at the dump and ignoring the header information, we first see the query that is being optimized.

Next there is a list of parameters, documented and undocumented, which may affect which plan is chosen. I won't mention the undocumented ones (mostly prefixed with an underscore), simply because there is no authoritative source on the effect of changing them. The documented ones can all be found in the Oracle Reference manual (for 9iR2, the Oracle9i Database Reference).

I will put in some brief comments specific to how these affect the optimizer.

The OPTIMIZER_MODE/GOAL determines the aim of the query: whether it should try to deliver the first row(s) of the result set as quickly as possible, or the entire result set. In some cases, such as a CREATE TABLE AS SELECT, it is obvious that the entire result set will be needed, and so ALL_ROWS is most appropriate.
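The goal can be switched at session level; for example:

```sql
-- Favour total throughput: optimize for the complete result set
ALTER SESSION SET optimizer_mode = ALL_ROWS;

-- Favour response time: optimize to deliver the first rows quickly
ALTER SESSION SET optimizer_mode = FIRST_ROWS;
```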

The next six determine which options the optimizer can or cannot consider for potential plans.

Changing the following three can affect both which plan is chosen and the resources necessary to execute that plan. Changing the resources available can directly affect the end response time even if the plan does not change.

The parameters OPTIMIZER_INDEX_CACHING and OPTIMIZER_INDEX_COST_ADJ affect how the optimizer balances the cost of index access against table access. While changing them can make a query switch to a different plan, they do not directly affect query response time. That is, you can change them and the reported query COST will change, but the time needed to execute the query will not improve unless the plan itself changes.
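As a sketch of how these two are used (the defaults, 100 and 0 respectively, apply no adjustment):

```sql
-- Treat index access as half its computed cost (default 100 = no adjustment)
ALTER SESSION SET optimizer_index_cost_adj = 50;

-- Assume 90% of index blocks are already in the buffer cache (default 0)
ALTER SESSION SET optimizer_index_caching = 90;
```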

OPTIMIZER_MAX_PERMUTATIONS. For queries involving only a few tables, the optimizer can quickly evaluate all plans to determine the best one. However, as queries include more tables, the number of potential plans rises exponentially. To stop Oracle spending all day looking at plans, this parameter tells it to stop after a given number of permutations. It is obsolete in 10g, and there's no real reason to use it anywhere else. If you've set it really low, it is probably to reduce the amount of CPU time spent parsing, which means you are probably parsing too much.

OPTIMIZER_DYNAMIC_SAMPLING is simply too big a topic for me to consider here; it is covered in the 10g documentation.

Table information

After the parameter values, we get to the tables.
I used the following script to prepare some dummy data:

drop table test_a;
drop table test_b;

create table test_a (col_a number primary key, col_b varchar2(40), col_c varchar2(40), col_d varchar2(40));

create table test_b (col_a number primary key, col_b varchar2(40), col_c varchar2(40), col_d varchar2(40));

insert into test_a
select rownum, to_char(sysdate-rownum,'Dayfm'), to_char(sysdate-rownum,'Monthfm'), rtrim(to_char(sysdate-rownum,'ddspth'))
from all_tab_columns where rownum <= 200;

update test_a set col_b = null where col_b like 'Mon%';

insert into test_b
select rownum, to_char(sysdate+rownum,'Monthfm'), rtrim(to_char(sysdate+rownum,'ddspth')), to_char(sysdate+rownum,'Dayfm')
from all_tab_columns where rownum <= 200;

update test_b set col_b = null where col_d like 'Mon%';

begin
  dbms_stats.gather_table_stats(ownname => user, tabname => 'TEST_A',
    estimate_percent => 100, method_opt => 'FOR ALL COLUMNS SIZE 50', cascade => true);
  dbms_stats.gather_table_stats(ownname => user, tabname => 'TEST_B',
    estimate_percent => 100, method_opt => 'FOR ALL COLUMNS SIZE 50', cascade => true);
end;
/


I started with the query:
select * from test_a where col_b like 'Tue%' and col_c like 'J%';

At the table level, it looks at the cardinality (number of rows), the number of blocks and the average row length. Then it shows the indexes:

Table stats Table: TEST_A Alias: TEST_A
-- Index stats
INDEX NAME: SYS_C00109029 COL#: 1
TOTAL :: LVLS: 0 #LB: 1 #DK: 200 LB/K: 1 DB/K: 1 CLUF: 2

For the indexes, it looks at the columns indexed, the number of levels (blevel), the number of leaf blocks, the number of distinct keys, leaf blocks per key, data blocks per key and the clustering factor. The clustering factor is a measure of how likely rows with the same key value are to be stored in the same block. If they are well clustered, picking up table rows via the index is cheaper, as each row is more likely to be on a block that has already been read, so less physical IO is needed. In this data, the columns derived from the month would be closely clustered together (a low clustering factor), with days less clustered and the spelt-out 'DD' least clustered (a high clustering factor). Here, the primary key has a low clustering factor, reflecting the fact that the sequentially numbered rows are stored next to each other.
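The same index figures can be cross-checked against the data dictionary; a quick sketch using the standard USER_INDEXES view:

```sql
-- blevel, leaf_blocks, distinct_keys and clustering_factor
-- correspond to LVLS, #LB, #DK and CLUF in the 10053 dump
select index_name, blevel, leaf_blocks, distinct_keys,
       avg_leaf_blocks_per_key, avg_data_blocks_per_key,
       clustering_factor
from   user_indexes
where  table_name = 'TEST_A';
```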

It then lists the 'interesting columns' (i.e. those used in joins or filter predicates), and shows the number of distinct values, the number of nulls, the density, the low and high values, and histogram information (if available).

Column: COL_B Col#: 2 Table: TEST_A Alias: TEST_A
NDV: 6 NULLS: 29 DENS: 2.9240e-03
Column: COL_C Col#: 3 Table: TEST_A Alias: TEST_A
NDV: 7 NULLS: 0 DENS: 2.5000e-03
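These column figures also come straight from the dictionary; as a cross-check:

```sql
-- num_distinct, num_nulls and density map to NDV, NULLS and DENS above
select column_name, num_distinct, num_nulls, density, histogram
from   user_tab_col_statistics
where  table_name = 'TEST_A'
and    column_name in ('COL_B', 'COL_C');
```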

The SINGLE TABLE ACCESS PATH section finishes with a determination of how many rows will be returned once the filters on the table have been applied.

Access path: tsc Resc: 2 Resp: 2
BEST_CST: 2.00 PATH: 2 Degree: 1

A cardinality is derived from the number of rows the optimizer thinks will match the given criteria. In this case it has estimated 8 out of two hundred. In reality, around 1/7 of the rows will match 'Tuesday', and about 2/7 will have a month starting with 'J' (because this was run at the end of August), which works out at about 8 rows.
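The arithmetic can be sketched as the base cardinality multiplied by the selectivity of each predicate (the fractions here are my rough approximations, not the optimizer's exact filter factors):

```sql
-- 200 rows, roughly 1 day in 7 is a Tuesday, and about 2 of the 7
-- months covered (June, July) start with 'J'
select round(200 * (1/7) * (2/7)) as estimated_rows from dual;
-- estimated_rows comes out at 8
```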


Two undocumented hints, SELECTIVITY and CARDINALITY, have become known in the Oracle developer community. Being undocumented, there is no definitive statement of what they do or how they work. The 10053 dump is one place where their effects can be directly observed, in the line(s) of the SINGLE TABLE ACCESS PATH concerning table cardinality.

The SELECTIVITY hint replaces the filter-factor percentage derived from the statistics with its own percentage.
The CARDINALITY hint replaces the end result with its own absolute value, taking no account of the actual size of the table.
The following are the adjusted queries:

select /*+SELECTIVITY (a 0.5)*/ * from test_a a
where col_b like 'Tue%' and col_c like 'J%'

select /*+CARDINALITY (a 5)*/ * from test_a a
where col_b like 'Tue%' and col_c like 'J%'

Given a more complex join query:

select * from test_a a, test_b b
where a.col_b like 'Tue%'
and b.col_b not like 'J%'
and a.col_d=b.col_c
and a.col_b=b.col_d

The cardinality/selectivity hints are still applied at the single table access path stage.

With a Selectivity of 0.5 on test_a, it will assume that half of test_a will be available for match up against the filtered rows from test_b. It does not imply or suggest that half the rows from test_a will match rows on test_b. In fact, the optimizer will apply its estimate of that percentage on top of the given selectivity.

With a cardinality of 5, it assumes 5 rows from test_a will be available for matching. Again, its own join filter percentage will be applied on top.

It is not correct to suggest that the selectivity and cardinality hints supply additional information to the optimizer. In fact they override information derived from the statistics, and so could actually mislead the optimizer. They should, like other hints, be used with caution and, like all undocumented functionality, as rarely as possible.

This is as far as I have gone with 10053 dump files.
If it has whetted your appetite, there is a more detailed paper by Wolfgang Breitling, 'A Look under the Hood of CBO: The 10053 Event'.


Tonguç said...

Thank you for your post, it was very helpful for me :)

Best regards.

Sergei Romanenko said...

10053 trace files can be enormous and difficult to examine. There is a free 10053 trace viewer which will help you navigate quickly to the most important sections of the trace file. It also uses a highlighter to improve readability of the trace, and optionally formats the final query after transformations.
I hope this is relevant to this article and will be helpful.