
6.23.2011

Jonathan Lewis's video presentation about tuning a two-table join

It is quite amazing that one could talk about tuning a simple two-table join query for an hour or more. For sure, probably nobody other than Jonathan Lewis could. Here is the URL of a video presentation JL gave very recently at the Turkish Oracle Users' Group conference.

http://jonathanlewis.wordpress.com/2011/06/23/video/

Hats off to master Jonathan Lewis!

Enjoy reading,

Jaffar

12.23.2006

Why is the index being ignored? - an interesting question on the OTN database forums.

As usual, I was answering a few interesting questions on the OTN database forums.
This morning, I came across a very interesting, funny question, where the poster gave an example with some explanation and asked why the Optimizer ignores the INDEX.

== POSTING ON OTN DATABASE FORUM
SQL> CREATE table t (a VARCHAR2(10),b VARCHAR2(10))
2 /
Table created.

SQL> BEGIN
2 FOR i IN 1..10000
3 LOOP
4 INSERT INTO t VALUES (i,'A');
5 END LOOP;
6 END;
7 .
SQL> /
PL/SQL procedure successfully completed.
SQL> COMMIT;
Commit complete.
SQL> CREATE INDEX t_a_idx ON t (a)
2 /
Index created.

SQL> SELECT * FROM emp WHERE empno>500
2 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=14 Bytes=560)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (TABLE) (Cost=2 Card=14 Bytes=560)
2 1 INDEX (RANGE SCAN) OF 'PK_EMP' (INDEX (UNIQUE)) (Cost=1 Card=14)

The above query does not ignore the index on the EMPNO column, while the same kind of query against table T ignores the index on column 'a'. Why?

SQL> SELECT *
2 FROM t WHERE a>500
3 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=9500 Bytes= 133000)
1 0 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=6 Card=9500 Bytes =133000)

SQL> SELECT *
2 FROM t WHERE a>'500'
3 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=5552 Bytes= 77728)
1 0 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=6 Card=5552 Bytes =77728)

=== END OF POSTING

The poster gave two examples, one using the EMP table and one using a newly created table. The example with the EMP table worked perfectly, i.e. the index was used. His question was: why, in the other example with a different table, does the Optimizer not use the INDEX?

Analyzing the given examples, a few points become clear.
1. After the table creation, data insertion and index creation, there is no sign in the example of statistics collection. Yet the plans show cardinality, cost and bytes, so I guess the Optimizer might have used dynamic sampling. I am not sure, because the poster did not mention the Oracle version, the Optimizer parameter values, or how the statistics were collected.


SQL> SELECT *
2 FROM t WHERE a>500
3 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=9500 Bytes= 133000)
1 0 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=6 Card=9500 Bytes =133000)

2. Column 'a' is defined with a character datatype
(a VARCHAR2(10)), while the predicate value is given as a number.

SQL> SELECT *
2 FROM t WHERE a>500
3 /

This leaves it to Oracle to do an implicit conversion, and Oracle converts the column rather than the literal, so the predicate effectively becomes TO_NUMBER(a) > 500 and an ordinary index on 'a' cannot be used. I remember, a long time ago, we faced this issue when the datatype was a character type and the predicate value was given as numeric; the Optimizer was ignoring the index then as well. I guess this could be one of the reasons for ignoring the INDEX.
Maybe the internal calculations and related theory can be explained in detail by Jonathan Lewis, if he ever reads my blog.
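A minimal sketch of the point above, using the poster's table T (the function-based index and its name are my illustration, not part of the original posting):

-- WHERE a > 500 is treated as WHERE TO_NUMBER(a) > 500, so a plain
-- index on 'a' cannot be range-scanned. Compare like with like instead:
SELECT * FROM t WHERE a > '500';          -- string vs string: index on 'a' is usable

-- Or (hypothetically) index the converted value:
CREATE INDEX t_a_num_idx ON t (TO_NUMBER(a));
SELECT * FROM t WHERE TO_NUMBER(a) > 500; -- numeric semantics, index usable

Bear in mind that a > '500' and a > 500 can return different row sets, because string ordering is not numeric ordering ('6' sorts above '500', for example).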

In another example,

SQL> SELECT *
2 FROM t WHERE a>'500'
3 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=5552 Bytes= 77728)
1 0 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=6 Card=5552 Bytes =77728)

I don't know the high and low values recorded in the statistics for column 'a'. If I remember correctly, the formula for the selectivity of a predicate c1 > 'value', when there are no histograms in place, is:

selectivity = 1 - (high - value) / (high - low)

In the given example, the poster inserted 10,000 rows using PL/SQL.
The cardinality predicted by the Optimizer is 5552, which is more than 50% of the rows. At that selectivity, Oracle will surely ignore the INDEX and favour a full table scan (another reason).
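As a sketch of how the high/low values could be checked (the DBMS_STATS call is my assumption; the poster never showed how, or whether, statistics were gathered):

EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'T', cascade => TRUE);

-- What the optimizer sees for column 'a':
SELECT column_name, num_distinct, density, low_value, high_value
FROM   user_tab_col_statistics
WHERE  table_name  = 'T'
AND    column_name = 'A';

Note that LOW_VALUE and HIGH_VALUE come back as RAW values, which is why a conversion function is needed to read them.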

Jonathan has written a nice function to convert the high/low raw values into readable numbers, because the value stored in HIGH_VALUE/LOW_VALUE (for character columns) is a raw value.

http://jonathanlewis.wordpress.com/2006/11/29/low_value-high_value/

A few other concerns.

I guess the poster used the AUTOTRACE utility to produce the execution plan. As Jonathan reiterates, the plan reported by AUTOTRACE is not necessarily the real one; it can be a lie, or simply a prediction.

I have recommended that the poster use DBMS_XPLAN.DISPLAY (on Oracle >= 9iR2) to get the execution plan together with the predicate/filter information, which is clearly missing from the AUTOTRACE output.
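For example, a sketch against the poster's table T:

EXPLAIN PLAN FOR
SELECT * FROM t WHERE a > 500;

SELECT * FROM TABLE(dbms_xplan.display);
-- The "Predicate Information" section should show something like
--   filter(TO_NUMBER("A")>500)
-- exposing the implicit conversion that the AUTOTRACE output hides.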

I have not written this to hurt the poster nor to disappoint anybody; I was just sharing my thoughts on this. Any comments are welcome.


Jaffar

12.14.2006

Change in transitive closure behaviour from 9i to 10g.

Some time back, I discussed the transitive closure behaviour when the query_rewrite_enabled parameter is set to true/false, in this blog: http://jaffardba.blogspot.com/2006/03/transitive-closure-behaviour-when.html. That was on Oracle 9iR2 (9205).
Jonathan has discussed the changes to transitive closure in 10g on his blog: http://jonathanlewis.wordpress.com/2006/12/13/cartesian-merge-join/.
There is a hidden parameter, _optimizer_transitivity_retain, which is set to true in 10g. This overcomes the big problem with transitive closure behaviour when query_rewrite_enabled is not set to true.
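The usual (unsupported) way to check the value of a hidden parameter is to query the X$ tables as SYS; a sketch, use at your own risk:

SELECT a.ksppinm parameter, b.ksppstvl value
FROM   x$ksppi a, x$ksppcv b
WHERE  a.indx = b.indx
AND    a.ksppinm = '_optimizer_transitivity_retain';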

Jaffar

12.09.2006

Optimal Logfile Size advice - Oracle10gR2

I have come across many forums & mailing lists where people ask for the optimal size of online redo log files.

I would say that, prior to 10gR2, you had to estimate the value from your system's redo generation rate and other database settings.

Although log file size is not the measure of LGWR performance, it can affect the performance of the DBWR and ARCH background processes. Therefore, log files should be large enough to avoid frequent checkpointing in the database.

Prior to Oracle 10gR2, there was no direct hint from Oracle as to what value is optimal, or at least what value to start with.

In Oracle 10gR2, the optimal log file size can be obtained from the V$INSTANCE_RECOVERY dynamic view.

select optimal_logfile_size from v$instance_recovery;
OPTIMAL_LOGFILE_SIZE
--------------------
453 (size in MB)

Note : The above value is considered optimal based on the current setting of FAST_START_MTTR_TARGET.

The Oracle documentation suggests the following: a rough guide is to switch logs at most once every twenty minutes, so size your online redo log files according to the amount of redo your system generates.
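To sanity-check that guideline against your own workload, you can count log switches per hour from V$LOG_HISTORY; a sketch (the formatting is mine):

SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;
-- More than about 3 switches per hour means you are switching more often
-- than once every twenty minutes: consider larger online redo log files.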

Happy reading,

Jaffar

11.25.2006

How Oracle 10g makes life easier in a day to day administration. (Part I)

Although Oracle 10g has been around for the past couple of years, we have just started migrating our databases to 10g. There is no question that Oracle 10g comes with a few great performance tuning features. I have come across a few new dictionary and dynamic views in Oracle 10g which are useful in day-to-day administration. Prior to 10g, getting this information required a little bit of work; in 10g, it is simply a matter of querying these views/dictionary tables.


Please note that you need the performance (tuning) and diagnostics pack licences from Oracle to use these features, and they are available only with Oracle Enterprise Edition.

OS Statistics
Prior to 10g, to know OS-level resource consumption, one needed to log in to the server and use OS-level utilities such as sar, top and vmstat. In 10g, simply querying the V$OSSTAT view provides useful information about OS resource consumption.
V$OSSTAT captures machine-level information in the database. It is a good starting point to
analyze whether there are hardware-level resource issues.
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_2010.htm#sthref3811

STAT_NAME VALUE
------------------------------------- ----------
NUM_CPUS 4 Number of CPUs / processors on the DB Server
IDLE_TIME 609528 (value measured by hundredths of a second)
BUSY_TIME 85007
USER_TIME 62146
SYS_TIME 22861
IOWAIT_TIME 0
AVG_IDLE_TIME 152332
AVG_BUSY_TIME 21205
AVG_USER_TIME 15491
AVG_SYS_TIME 5671
AVG_IOWAIT_TIME 0
OS_CPU_WAIT_TIME 5600
RSRC_MGR_CPU_WAIT_TIME 0
LOAD .29296875
PHYSICAL_MEMORY_BYTES 1.7095E+10
VM_IN_BYTES 9576448
VM_OUT_BYTES 0

DBA_HIST_OSSTAT displays historical operating system statistics.
DBA_HIST_OSSTAT_NAME displays the names of the operating system statistics.
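As an example of what can be derived from V$OSSTAT, here is a rough CPU utilization figure (a sketch; BUSY_TIME and IDLE_TIME are cumulative counters in hundredths of a second, so this is the average utilization since instance startup, not a point-in-time value):

SELECT ROUND(100 * busy.value / (busy.value + idle.value), 2) AS cpu_busy_pct
FROM   (SELECT value FROM v$osstat WHERE stat_name = 'BUSY_TIME') busy,
       (SELECT value FROM v$osstat WHERE stat_name = 'IDLE_TIME') idle;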

Tablespace Usage Statistics
In earlier versions, you had to join a few dictionary views and do some computation to know the used percentage of a tablespace. 10g makes life easier by providing it all in one view, DBA_TABLESPACE_USAGE_METRICS. (For me, this view does not always show the real values.)


select * from dba_tablespace_usage_metrics;

TABLESPACE_NAME USED_SPACE TABLESPACE_SIZE USED_PERCENT
------------------------------ ---------- --------------- ------------
SYSAUX 55448 4194302 1.32198397
SYSTEM 79448 4194302 1.89418883
TEMP 0 4194302 0
TEMPORARY 256 256000 .1
UNDOTBS1 31520 49280 63.961039
USERS 48 640 7.5

DBA_HIST_TBSPC_SPACE_USAGE
- This dictionary view contains the history of the tablespace usage metrics; each row holds one hour of history.


desc dba_hist_tbspc_space_usage;

Name Null? Type
----------------------------------------------------------------- -------- -------------
SNAP_ID NUMBER
DBID NOT NULL NUMBER
TABLESPACE_ID NUMBER
TABLESPACE_SIZE NUMBER
TABLESPACE_MAXSIZE NUMBER
TABLESPACE_USEDSIZE NUMBER
RTIME VARCHAR2(25)

select tablespace_size,tablespace_usedsize,rtime
from dba_hist_tbspc_space_usage
where tablespace_id = 2
order by rtime

TABLESPACE_SIZE TABLESPACE_USEDSIZE RTIME
76800 55360 11/25/2006 08:00:57

76800 55368 11/25/2006 09:00:08
76800 55392 11/25/2006 10:00:23
76800 55424 11/25/2006 11:00:35
76800 55448 11/25/2006 12:00:35
76800 55488 11/25/2006 13:00:47

If you want to query the usage history of any particular tablespace over time, you need to work with the RTIME column.
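Since RTIME is a VARCHAR2 column, it has to be converted before you can filter or sort by real time. A sketch, assuming the MM/DD/YYYY HH24:MI:SS format visible in the output above:

SELECT tablespace_size, tablespace_usedsize, rtime
FROM   dba_hist_tbspc_space_usage
WHERE  tablespace_id = 2
AND    TO_DATE(rtime, 'MM/DD/YYYY HH24:MI:SS') > SYSDATE - 1  -- last 24 hours
ORDER  BY TO_DATE(rtime, 'MM/DD/YYYY HH24:MI:SS');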

DB Startup time log
To my knowledge, prior to 10g there was no dictionary view or table that kept a history of database startup times and versions.

10g provides DBA_HIST_DATABASE_INSTANCE, which keeps a log of the database startup time and version.

SELECT instance_name,startup_time,version
FROM dba_hist_database_instance;

INSTANCE_NAME STARTUP_TIME VERSION
---------------- -------------------------------- -------
ORCL 07-OCT-06 16.29.29.000 10.2.0.1.0
ORCL 09-OCT-06 13.20.05.000 10.2.0.1.0

ORCL 09-OCT-06 13.38.05.000 10.2.0.1.0
ORCL 09-OCT-06 20.57.31.000 10.2.0.1.0

In my next post, I will be discussing a few other useful dictionary and dynamic views of Oracle 10gR2.




11.14.2006

The Power of ASH (Active Session History)

Oracle 10g brings many new features through which one can easily tune bad SQL and diagnose database performance issues, using the database metrics, Active Session History and the time model views.

The following query fetches the top SQL statements by time spent on CPU, on non-I/O waits, and on I/O (thanks to Kyle Hailey for this script):

select
    ash.sql_id,
    sum(decode(ash.session_state,'ON CPU',1,0)) "CPU",
    sum(decode(ash.session_state,'WAITING',1,0)) -
    sum(decode(ash.session_state,'WAITING',decode(en.wait_class,'User I/O',1,0),0)) "WAIT",
    sum(decode(ash.session_state,'WAITING',decode(en.wait_class,'User I/O',1,0),0)) "IO",
    sum(decode(ash.session_state,'ON CPU',1,1)) "TOTAL"
from v$active_session_history ash, v$event_name en
where ash.sql_id is not null
and en.event# = ash.event#
group by ash.sql_id
order by sum(decode(ash.session_state,'ON CPU',1,1)) desc

SQL_ID CPU WAIT IO TOTAL
------------- ---------- ---------- ---------- ----------
bqts5m5y267ct 0 0 20 20
4gd6b1r53yt88 0 16 1 17
35rqnp0hn3p3j 0 13 0 13
3shtm7x3a54qu 0 0 8 8
0hf43mhpx086p 0 0 4 4

Use V$SQL or V$SQLAREA to get the sql_text for any of the above sql_ids.

SELECT sql_text FROM v$sqlarea WHERE sql_id = 'bqts5m5y267ct';

dbms_xplan.display_awr can be used to extract the SQL plan for this sql_id.

SELECT * FROM table(dbms_xplan.display_awr('bqts5m5y267ct'));


The above scenario was done on Oracle 10g Release 2 on Sun Solaris 10.

5.02.2006

Tips to avoid hot block contention on small look-up tables.

MINIMIZE RECORDS_PER_BLOCK

The basic idea of this article is to store a single row in a single block.
This can be useful to avoid hot block contention on small look-up tables that are heavily accessed.

The MINIMIZE RECORDS_PER_BLOCK feature has been available since Oracle 8i; it makes it
possible to store (almost) a single record in each block.

Test case:

Normal Heap Table (without MINIMIZE RECORDS_PER_BLOCK)

17:26:39 jaffar@PRIMEDB> create table nomin1 as select * from user_objects where rownum = 1;

Table created.

Elapsed: 00:00:00.01
17:27:18 jaffar@PRIMEDB> analyze table nomin1 compute statistics;

Table analyzed.

Elapsed: 00:00:00.00
17:27:33 jaffar@PRIMEDB> select table_name,num_rows,blocks from user_tables where table_name = 'NOMIN1';

TABLE_NAME NUM_ROWS BLOCKS
------------------------------ ---------- ----------
NOMIN1 1 4

17:28:22 jaffar@PRIMEDB> insert into nomin1 select * from user_objects;

242 rows created.

Elapsed: 00:00:00.00
17:28:34 jaffar@PRIMEDB> commit;

Commit complete.

Elapsed: 00:00:00.00
17:28:35 jaffar@PRIMEDB> analyze table nomin1 compute statistics;

Table analyzed.

Elapsed: 00:00:00.00
17:28:39 jaffar@PRIMEDB> select table_name,num_rows,blocks from user_tables where table_name = 'NOMIN1';

TABLE_NAME NUM_ROWS BLOCKS
------------------------------ ---------- ----------
NOMIN1 243 8


Note : all 243 rows fit in 8 blocks.


Test case with MINIMIZE RECORDS_PER_BLOCK

17:28:47 jaffar@PRIMEDB> create table min01 as select * from user_objects where rownum = 1;

Table created.

Elapsed: 00:00:00.00
17:29:12 jaffar@PRIMEDB> analyze table min01 compute statistics;

Table analyzed.

Elapsed: 00:00:00.00
17:29:22 jaffar@PRIMEDB> select table_name,num_rows,blocks from user_tables where table_name = 'MIN01';

TABLE_NAME NUM_ROWS BLOCKS
------------------------------ ---------- ----------
MIN01 1 4


17:32:26 jaffar@PRIMEDB> alter table min01 minimize records_per_block;

Table altered.


17:32:53 jaffar@PRIMEDB> insert into min01 select * from user_objects;

243 rows created.

Elapsed: 00:00:00.09
17:33:12 jaffar@PRIMEDB> commit;

Commit complete.

Elapsed: 00:00:00.00
17:33:12 jaffar@PRIMEDB> analyze table min01 compute statistics;

Table analyzed.

Elapsed: 00:00:00.00
17:33:14 jaffar@PRIMEDB>
17:33:14 jaffar@PRIMEDB> select table_name,num_rows,blocks from user_tables where table_name = 'MIN01';

TABLE_NAME NUM_ROWS BLOCKS
------------------------------ ---------- ----------
MIN01 244 180

Note : the 244 rows now occupy 180 blocks.
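The restriction stays with the table. If it is no longer wanted, the documented counterpart removes it (existing rows keep their blocks, but new inserts can pack rows normally again):

ALTER TABLE min01 NOMINIMIZE RECORDS_PER_BLOCK;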


References:

http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a76962/ch2.htm#103385

Setup used in the test cases:

Oracle version : 9.2.0.7
db_block_size : 8192
Tablespace : locally managed (LMT) with auto extent allocation.

3.26.2006

Transitive closure behaviour when the query_rewrite_enabled parameter is set to true/false

The article covers our experiences with the impact of various settings
of the parameter query_rewrite_enabled on sql performance whilst
migrating an Oracle database from AIX to HP.

All testing for this article was performed on Oracle 9.2.0.5.

TRANSITIVE CLOSURE WITH QUERY_REWRITE_ENABLED

Transitivity and Transitive Closure

Transitivity (or Transitive Closure) is the generation of additional
predicates that may open additional access methods. It is only performed by the CBO.

We have recently migrated one of our data warehouse databases (Oracle 9i Rel.2 (9205)) from AIX to HP-SUPERDOME.
The data warehouse is 1.8TB. It was migrated using export and import utilities.

The major problem we encountered post migration was a dramatic increase in the response time of one of our queries.

The query was taking 5 minutes on AIX. The same query on HP ran for 2 days.

Following is the query and its execution plan on AIX and HP.

SELECT max(as_of_date), max(cust_code), nvl(abs(sum(run_offs_sar)), 0),
nvl(abs(sum(inst_amt_sar)), 0), nvl(abs(sum(bal_sar)), 0)
FROM account a
WHERE acct_no = '00100100120'
AND as_of_date = (SELECT max(as_of_date)
FROM account b
WHERE b.acct_no = a.acct_no
AND b.run_offs_sar <> 0)

Execution Plan (AIX)
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=602 Card=1 Bytes=55)
1 0 SORT (AGGREGATE)
2 1 VIEW (Cost=602 Card=1 Bytes=55)
3 2 FILTER
4 3 SORT (GROUP BY) (Cost=602 Card=1 Bytes=75)
5 4 HASH JOIN (Cost=529 Card=167 Bytes=12525)
6 5 PARTITION RANGE (ALL)
7 6 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'A_ACCOUNT' (Cost=264 Card=167 Bytes=4509)
8 7 INDEX (SKIP SCAN) OF 'IDX_DT_ACCT_FLAG' (NON-UNIQUE) (Cost=189 Card=167)
9 5 PARTITION RANGE (ALL)
10 9 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'A_ACCOUNT' (Cost=264 Card=167 Bytes=8016)
11 10 INDEX (SKIP SCAN) OF 'IDX_DT_ACCT_FLAG' (NON-UNIQUE) (Cost=189 Card=167)

Execution Plan (HP)
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=630319908 Card=1 Bytes=55)
1 0 SORT (AGGREGATE)
2 1 VIEW (Cost=630319908 Card=1602 Bytes=88110)
3 2 FILTER
4 3 SORT (GROUP BY) (Cost=630319908 Card=1602 Bytes=107334)
5 4 MERGE JOIN (CARTESIAN) (Cost=630319180 Card=225809 Bytes=15129203)
6 5 PARTITION RANGE (ALL)
7 6 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'A_ACCOUNT' (Cost=700 Card=440 Bytes=10560)
8 7 INDEX (SKIP SCAN) OF 'IDX_DT_ACCT_FLAG' (NON-UNIQUE) (Cost=519 Card=513)
9 5 BUFFER (SORT) (Cost=630319208 Card=513 Bytes=22059)
10 9 PARTITION RANGE (ALL)
11 10 TABLE ACCESS (FULL) OF 'A_ACCOUNT' (Cost=1432542 Card=513 Bytes=22059)

I then turned on the ‘10053 trace event with level 1’ on both databases. Comparing the trace outputs,
I found that the only difference was in the setting of the ‘query_rewrite_enabled’ parameter.
On AIX this parameter was set to TRUE and on HP it was at the default setting of FALSE.

After setting this parameter to TRUE on HP, the query achieved the same response time as on AIX.
The execution plan now is identical on both systems.

Thanks to Jonathan Lewis for explaining Transitive Closure behavior:
<<
When you have a problem like this, use dbms_xplan()
to get the more detailed execution plan that you don't
get from autotrace (until 10.2). In this case I think you
would find that you have a join predicate that disappears
(to produce the cartesian join) because transitive closure
has been used to copy a filter predicate from one side of
the join to the other.
============
In the case where the constant predicate is an equality the
optimizer will generate a second constant predicate, but
lose the join predicate if query_rewrite_enabled = false, hence ...

On the other hand, if query_rewrite_enabled = true, then
8i and 9i will keep the join predicate even in this case, hence ...
============

>>

Setting the parameter query_rewrite_enabled first to FALSE and then to TRUE, I ran the query with 'explain plan for'
to produce the execution plan. Using 'select * from table(dbms_xplan.display)', I got the following predicate
information used/created by the Optimizer:

with query_rewrite_enabled=FALSE

3 - filter("A"."AS_OF_DATE"=MAX("B"."AS_OF_DATE"))
7 - filter("B"."RUN_OFFS_SAR"<>0)
8 - access("B"."ACCT_NO"='00100100120')
filter("B"."ACCT_NO"='00100100120')
11 - filter("A"."ACCT_NO"='00100100120')

with query_rewrite_enabled=TRUE

3 - filter("A"."AS_OF_DATE"=MAX("B"."AS_OF_DATE"))
5 - access("B"."ACCT_NO"="A"."ACCT_NO") ====
7 - filter("B"."RUN_OFFS_SAR"<>0)
8 - access("B"."ACCT_NO"='00100100120')
filter("B"."ACCT_NO"='00100100120')
11 - access("A"."ACCT_NO"='00100100120')
filter("A"."ACCT_NO"='00100100120')


When the parameter was set to FALSE, the optimizer dropped the following from the predicate information,
compared to the run with the parameter set to TRUE:

’access("B"."ACCT_NO"="A"."ACCT_NO")’ and ‘access("A"."ACCT_NO"='00100100120')’

In the following example, I have used the two traditional tables, EMP and DEPT, for my testing.

Alter session set query_rewrite_enabled=FALSE;

EXPLAIN PLAN FOR
SELECT e.ename,d.dname
FROM emp e,dept d
WHERE d.deptno=10
AND e.deptno =d.deptno

select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 24939 | 462K| 51 |
| 1 | NESTED LOOPS | | 24939 | 462K| 51 |
| 2 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 11 | 1 |
|* 3 | INDEX UNIQUE SCAN | SYS_C002803 | 1 | | |
|* 4 | TABLE ACCESS FULL | EMP | 24939 | 194K| 50 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------
3 - access("D"."DEPTNO"=10)
4 - filter("E"."DEPTNO"=10)

In the predicate information above, the optimizer doesn’t use the join predicate e.deptno=d.deptno. Instead,
it substitutes the value 10 from the referenced column (d.deptno) into the filter e.deptno=10.

Jonathan Lewis, in his book “Cost-Based Oracle Fundamentals” (Chapter 6, the selectivity issue), suggests some
workarounds for this problem, such as repeating the join predicate:

EXPLAIN PLAN FOR
SELECT e.ename,d.dname
FROM emp e,dept d
WHERE d.deptno=10
AND e.deptno = d.deptno
AND e.deptno = d.deptno
select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT

-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8313 | 154K| 51 |
| 1 | NESTED LOOPS | | 8313 | 154K| 51 |
| 2 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 11 | 1 |
|* 3 | INDEX UNIQUE SCAN | SYS_C002803 | 1 | | |
|* 4 | TABLE ACCESS FULL | EMP | 8313 | 66504 | 50 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("D"."DEPTNO"=10)
4 - filter("E"."DEPTNO"=10 AND "E"."DEPTNO"="D"."DEPTNO")

The above output clearly shows the difference in the execution plan of the query: cost, rows and filter.
Some other workarounds are:

EXPLAIN PLAN FOR
SELECT e.ename,d.dname
FROM emp e,dept d
WHERE d.deptno = 10
AND e.deptno = d.deptno+0

EXPLAIN PLAN FOR
SELECT e.ename,d.dname
FROM emp e,dept d
WHERE d.deptno = 10
AND e.deptno = 10
AND e.deptno = d.deptno

Note:

To use dbms_xplan.display(), you need the PLAN_TABLE shipped with Oracle releases from 9.2.0.6 up to 10g Release 1.

From Release 10.2, AUTOTRACE itself uses the dbms_xplan.display format.

Conclusion:

If you see a similar performance degradation after an Oracle upgrade
or migration, it can be very worthwhile to perform the above checks.

Acknowledgements:

I owe thanks to Jonathan Lewis for his kind support and also thanks to my colleague Wayne Phillips for
his technical review and corrections.

References:

Cost-Based Oracle Fundamentals – by Jonathan Lewis
Metalink Note. 68979.1 - Transitivity and Transitive Closure

Mr. Syed Jaffar Hussain[sjaffarhussain@gmail.com] is the Sr. Oracle DBA at a private bank in Saudi Arabia.
He has worked in IT for 12 years, and as an Oracle DBA for 6 years.