3.26.2006

Transitive Closure behaviour when query_rewrite_enabled parameter set true/false

This article covers our experience with the impact of the query_rewrite_enabled
parameter on SQL performance while migrating an Oracle database from AIX to HP.

All testing for this article was performed on Oracle 9.2.0.5.

TRANSITIVE CLOSURE WITH QUERY_REWRITE_ENABLED

Transitivity and Transitive Closure

Transitivity (or Transitive Closure) is the generation of additional
predicates that may open up additional access paths. It is performed only by the cost-based optimizer (CBO).
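As a rough illustration (a toy model of the idea only, not Oracle's internal implementation), transitive closure over equality predicates can be thought of as propagating constant predicates across join predicates until no new predicate can be derived:

```python
# Toy sketch of transitive closure over equality predicates.
# Assumption: this models only the general idea (constant propagation
# across join predicates), not the CBO's actual mechanism.

def transitive_closure(join_preds, const_preds):
    """join_preds: set of (col_a, col_b) equality join predicates.
    const_preds: dict mapping a column to its constant predicate value.
    Propagates constants across the join predicates to a fixed point
    and returns the augmented dict of constant predicates."""
    derived = dict(const_preds)
    changed = True
    while changed:
        changed = False
        for a, b in join_preds:
            if a in derived and b not in derived:
                derived[b] = derived[a]   # from a = b and a = const, derive b = const
                changed = True
            elif b in derived and a not in derived:
                derived[a] = derived[b]   # from a = b and b = const, derive a = const
                changed = True
    return derived

# From e.deptno = d.deptno and d.deptno = 10, closure derives e.deptno = 10.
preds = transitive_closure({("e.deptno", "d.deptno")}, {"d.deptno": 10})
print(preds)
```

The interesting question, which this article explores, is what happens to the original join predicate once the new constant predicate has been generated: depending on query_rewrite_enabled, the optimizer may keep it or drop it.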

We recently migrated one of our data warehouse databases (Oracle 9i Release 2, 9.2.0.5) from AIX to an HP Superdome.
The data warehouse is 1.8TB and was migrated using the export and import utilities.

The major problem we encountered post-migration was a dramatic increase in the response time of one of our queries.

The query was taking 5 minutes on AIX. The same query on HP ran for 2 days.

Below are the query and its execution plans on AIX and HP.

SELECT max(as_of_date), max(cust_code), nvl(abs(sum(run_offs_sar)), 0),
       nvl(abs(sum(inst_amt_sar)), 0), nvl(abs(sum(bal_sar)), 0)
FROM   account a
WHERE  acct_no = '00100100120'
AND    as_of_date = (SELECT max(as_of_date)
                     FROM   account b
                     WHERE  b.acct_no = a.acct_no
                     AND    b.run_offs_sar <> 0);

Execution Plan (AIX)
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=602 Card=1 Bytes=55)
1 0 SORT (AGGREGATE)
2 1 VIEW (Cost=602 Card=1 Bytes=55)
3 2 FILTER
4 3 SORT (GROUP BY) (Cost=602 Card=1 Bytes=75)
5 4 HASH JOIN (Cost=529 Card=167 Bytes=12525)
6 5 PARTITION RANGE (ALL)
7 6 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'A_ACCOUNT' (Cost=264 Card=167 Bytes=4509)
8 7 INDEX (SKIP SCAN) OF 'IDX_DT_ACCT_FLAG' (NON-UNIQUE) (Cost=189 Card=167)
9 5 PARTITION RANGE (ALL)
10 9 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'A_ACCOUNT' (Cost=264 Card=167 Bytes=8016)
11 10 INDEX (SKIP SCAN) OF 'IDX_DT_ACCT_FLAG' (NON-UNIQUE) (Cost=189 Card=167)

Execution Plan (HP)
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=630319908 Card=1 Bytes=55)
1 0 SORT (AGGREGATE)
2 1 VIEW (Cost=630319908 Card=1602 Bytes=88110)
3 2 FILTER
4 3 SORT (GROUP BY) (Cost=630319908 Card=1602 Bytes=107334)
5 4 MERGE JOIN (CARTESIAN) (Cost=630319180 Card=225809 Bytes=15129203)
6 5 PARTITION RANGE (ALL)
7 6 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'A_ACCOUNT' (Cost=700 Card=440 Bytes=10560)
8 7 INDEX (SKIP SCAN) OF 'IDX_DT_ACCT_FLAG' (NON-UNIQUE) (Cost=519 Card=513)
9 5 BUFFER (SORT) (Cost=630319208 Card=513 Bytes=22059)
10 9 PARTITION RANGE (ALL)
11 10 TABLE ACCESS (FULL) OF 'A_ACCOUNT' (Cost=1432542 Card=513 Bytes=22059)

I then enabled the 10053 trace event at level 1 on both databases. Comparing the trace outputs,
I found that the only difference was the setting of the query_rewrite_enabled parameter:
on AIX it was set to TRUE, while on HP it was at its default setting of FALSE.

After setting this parameter to TRUE on HP, the query achieved the same response time as on AIX,
and the execution plans on the two systems are now identical.

Thanks to Jonathan Lewis for explaining Transitive Closure behavior:
<<
When you have a problem like this, use dbms_xplan()
to get the more detailed execution plan that you don't
get from autotrace (until 10.2). In this case I think you
would find that you have a join predicate that disappears
(to produce the cartesian join) because transitive closure
has been used to copy a filter predicate from one side of
the join to the other.
============
In the case where the constant predicate is an equality the
optimizer will generate a second constant predicate, but
lose the join predicate if query_rewrite_enabled = false, hence ...

On the other hand, if query_rewrite_enabled = true, then
8i and 9i will keep the join predicate even in this case, hence ...
============

>>

With query_rewrite_enabled set first to FALSE and then to TRUE, I ran the query under EXPLAIN PLAN FOR
and displayed each plan with 'select * from table(dbms_xplan.display);'. This produced the following
predicate information used/created by the optimizer:

with query_rewrite_enabled=FALSE

3 - filter("A"."AS_OF_DATE"=MAX("B"."AS_OF_DATE"))
7 - filter("B"."RUN_OFFS_SAR"<>0)
8 - access("B"."ACCT_NO"='00100100120')
filter("B"."ACCT_NO"='00100100120')
11 - filter("A"."ACCT_NO"='00100100120')

with query_rewrite_enabled=TRUE

3 - filter("A"."AS_OF_DATE"=MAX("B"."AS_OF_DATE"))
5 - access("B"."ACCT_NO"="A"."ACCT_NO")
7 - filter("B"."RUN_OFFS_SAR"<>0)
8 - access("B"."ACCT_NO"='00100100120')
filter("B"."ACCT_NO"='00100100120')
11 - access("A"."ACCT_NO"='00100100120')
filter("A"."ACCT_NO"='00100100120')


When the parameter was set to FALSE, the optimizer dropped the following lines from the predicate information,
compared with the run where the parameter was set to TRUE:

access("B"."ACCT_NO"="A"."ACCT_NO") and access("A"."ACCT_NO"='00100100120')

For the following example, I used the two traditional demo tables, emp and dept.

Alter session set query_rewrite_enabled=FALSE;

EXPLAIN PLAN FOR
SELECT e.ename, d.dname
FROM   emp e, dept d
WHERE  d.deptno = 10
AND    e.deptno = d.deptno;

select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 24939 | 462K| 51 |
| 1 | NESTED LOOPS | | 24939 | 462K| 51 |
| 2 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 11 | 1 |
|* 3 | INDEX UNIQUE SCAN | SYS_C002803 | 1 | | |
|* 4 | TABLE ACCESS FULL | EMP | 24939 | 194K| 50 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------
3 - access("D"."DEPTNO"=10)
4 - filter("E"."DEPTNO"=10)

In the predicate information above, the optimizer doesn't use the join predicate e.deptno = d.deptno. Instead,
it substitutes the value 10 from the referenced column (d.deptno) into the filter e.deptno = 10.

Jonathan Lewis, in his book "Cost-Based Oracle Fundamentals" (Chapter 6, Selectivity Issues), suggests some
workarounds for this problem, such as repeating the join predicate:

EXPLAIN PLAN FOR
SELECT e.ename, d.dname
FROM   emp e, dept d
WHERE  d.deptno = 10
AND    e.deptno = d.deptno
AND    e.deptno = d.deptno;

select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT

-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8313 | 154K| 51 |
| 1 | NESTED LOOPS | | 8313 | 154K| 51 |
| 2 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 11 | 1 |
|* 3 | INDEX UNIQUE SCAN | SYS_C002803 | 1 | | |
|* 4 | TABLE ACCESS FULL | EMP | 8313 | 66504 | 50 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("D"."DEPTNO"=10)
4 - filter("E"."DEPTNO"=10 AND "E"."DEPTNO"="D"."DEPTNO")

The output above clearly shows the difference in the execution plan: cost, row estimates and filter predicates.
Some other workarounds are:

EXPLAIN PLAN FOR
SELECT e.ename, d.dname
FROM   emp e, dept d
WHERE  d.deptno = 10
AND    e.deptno = d.deptno + 0;

EXPLAIN PLAN FOR
SELECT e.ename, d.dname
FROM   emp e, dept d
WHERE  d.deptno = 10
AND    e.deptno = 10
AND    e.deptno = d.deptno;

Note:

To use dbms_xplan.display, you need the plan_table definition from Oracle 9.2.0.6 up to 10g Release 1.

From Release 10.2, autotrace uses the dbms_xplan.display format.

Conclusion:

If you see a similar performance degradation after an Oracle upgrade
or migration, it can be well worth performing the checks above.

Acknowledgements:

I owe thanks to Jonathan Lewis for his kind support and also thanks to my colleague Wayne Phillips for
his technical review and corrections.

References:

Cost-Based Oracle Fundamentals – by Jonathan Lewis
Metalink Note 68979.1 - Transitivity and Transitive Closure

Mr. Syed Jaffar Hussain [sjaffarhussain@gmail.com] is a senior Oracle DBA at a private bank in Saudi Arabia.
He has worked in IT for 12 years, and as an Oracle DBA for 6 years.

2.09.2006

Oracle 10g Upgrade (OCP)

I have just cleared my OCP 10g upgrade (1Z0-040) exam.
A couple of weeks ago, I attended the 10g New Features for Database Administrators course and prepared myself for the exam.
During the course, I came across a few new features that we could use in our databases.
I will discuss them in my coming posts.

It's time to party.

12.28.2005

Index Rebuild Part II

I am not getting into the debate of when to rebuild an index, since that has already been discussed by many experts on their sites. I am just covering the internal mechanism and the pros and cons of rebuilding indexes.

Oracle has 4 main features with regard to its internal maintenance of
indexes that make index rebuilds such a rare requirement.

1) 50-50 block split and self-balancing mechanism
2) 90-10 block split mechanism for monotonically increasing values
3) Reusability of deleted row space within an index node
4) Reusability of emptied nodes for subsequent index splits.

These 4 features combined almost eliminate the need for index rebuilds.
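To get a feel for features 1 and 2, here is a toy simulation (my own simplified model under stated assumptions, not Oracle's B-tree code) contrasting 50-50 leaf splits under randomly distributed inserts with 90-10 splits under monotonically increasing keys:

```python
# Toy model: leaves is a list of sorted leaf "blocks" with a small capacity.
# Assumption: 90-10 split when the rightmost block overflows on the highest
# key seen so far; 50-50 split everywhere else. Not Oracle internals.

LEAF_CAPACITY = 4

def insert(leaves, key):
    """Insert key into the appropriate leaf; split the leaf on overflow."""
    # Find the target leaf: first leaf whose highest key covers this key,
    # otherwise the rightmost leaf.
    for i, leaf in enumerate(leaves):
        if leaf and key <= leaf[-1]:
            break
    else:
        i = len(leaves) - 1
    leaf = leaves[i]
    leaf.append(key)
    leaf.sort()
    if len(leaf) > LEAF_CAPACITY:
        if i == len(leaves) - 1 and key == leaf[-1]:
            # 90-10 style: keep the full block, start a new one with the top key
            leaves[i:i+1] = [leaf[:-1], leaf[-1:]]
        else:
            # 50-50 style: both halves end up roughly half empty
            mid = len(leaf) // 2
            leaves[i:i+1] = [leaf[:mid], leaf[mid:]]
    return leaves

seq = [[]]
for k in range(1, 13):                               # monotonically increasing keys
    insert(seq, k)

rand = [[]]
for k in [7, 2, 11, 4, 9, 1, 12, 5, 8, 3, 10, 6]:    # same keys, random order
    insert(rand, k)

print(len(seq), len(rand))
```

Under monotonically increasing keys, the 90-10 splits leave every earlier block packed full; under random inserts, each 50-50 split leaves two blocks about half empty, and it is precisely that free space that later inserts and reused deleted space refill.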

In fact most indexes that have index entries randomly distributed throughout
the index tree structure remain both balanced (as all Oracle indexes must
be) and fragmentation free *because* leaf row entries are reused.
Inserts/Updates and Deletes result in "holes" being created but importantly
"refilled" typically at even rates. The issue of wasted deleted space is
therefore typically a non-issue.

Rebuilding such indexes can actually be detrimental to overall performance
for a number of reasons. Firstly, it requires a significant amount of
resources and can conflict with the general running of the database. But
perhaps more importantly, it can actually be self-defeating in what rebuilds
are supposed to achieve. That's because after an index rebuild, the index is
more tightly packed with less overall free space (else why rebuild). This
means however that index splits are more likely to now occur which directly
impacts performance due to the additional I/O and CPU this entails. And
after the block split, we now have two blocks each with 50% free space.
After a period of time, the index potentially has "issues" due to
insufficient used space and the vicious rebuild cycle continues. The better
course of action is to do nothing and let the index evolve to its natural
"equilibrium".

When should one perform a rebuild?

Firstly, if the index value were to have monotonically increasing values
then any deleted space could be a problem as this space may not be reused
(making feature 3 above redundant). However, if sufficient entries are
deleted resulting in index nodes being fully emptied (say via a bulk delete)
then feature 4 would kick in and the deleted space could be reused. The
question now becomes one of *when* would the equivalent amount of index
entries be reinserted from the time of the deletions, as index scans (in all
their manifestations) would be impacted during this interim period. So
monotonically increasing values *and* sparse deletions would present one
case for an index rebuild. These types of indexes can be identified as
having predominately 90-10 splits rather than the usual 50-50 split.

Another case would be an index that has deletions without subsequent inserts
or inserts within an acceptable period of time. Such a case would result in
wasted space that can't be effectively reused as there's not the sufficient
insert activity to reclaim the space. However, in this scenario, it's really
the *table* itself rather than the indexes directly that should be rebuilt,
because such "shrinkage" results in both the table and associated indexes
being fragmented with HWMs that need resetting (to prevent performance
issues with full table scans and all types of index scans). Yes, the index
needs rebuilding but only as a result of the dependent table being rebuilt
as well.


The information above is taken from multiple Metalink notes.

Index Rebuild Part I

A couple of days ago, at the development team's request, I split a partition of one of our partitioned tables, which had local indexes. The partition I split belonged to the current month, so the local index partitions became unusable and I had to rebuild them.
I was surprised to see that the rebuild took double the space of the original index, releasing the extra 50% once the rebuild finished. I was surprised because I was under the impression that only an online index rebuild takes double space, since it creates a journal table and stores the changes there until the rebuild finishes.

--- The reason a rebuild takes double the space is:

"For a rebuild, the index has to be scanned, the results sorted, and new extents built
to hold the newly constructed index. These extents must co-exist with the original index until the process is complete and the original can be dropped. Thus, the rebuild requires a sort and sufficient free space to hold two versions of the index temporarily."
-------------
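As a practical check (the index and tablespace names below are just examples to adapt), you can compare the current size of the index against the free space in its tablespace before starting the rebuild, to make sure two copies can co-exist:

```sql
-- Example only: substitute your own index and tablespace names.
-- Current size of the index segment, in MB:
SELECT SUM(bytes)/1024/1024 AS index_mb
FROM   dba_segments
WHERE  segment_name = 'IDX_DT_ACCT_FLAG'
AND    segment_type LIKE 'INDEX%';

-- Free space available in the target tablespace, in MB:
SELECT SUM(bytes)/1024/1024 AS free_mb
FROM   dba_free_space
WHERE  tablespace_name = 'INDX_TS';
```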

I have found a few good notes on rebuilding indexes offline and online, and the pros and cons of each. I thought they would be helpful to others.

An index can be rebuilt either online or offline.

Online Index Rebuild Features:
+ ALTER INDEX REBUILD ONLINE;
+ DMLs are allowed on the base table
+ It is comparatively Slow
+ Base table is referred for the new index
+ Base table is locked in shared mode and DDLs are not possible
+ Intermediate table stores the data changes in the base table, during the index rebuild to update the new index later

Offline Index Rebuild Features:
+ ALTER INDEX REBUILD; (Default)
+ Does not refer the base table and the base table is exclusively locked
+ New index is created from the old index
+ No DML and DDL possible on the base table
+ Comparatively faster

So, the base table is not referred for data when the index is rebuilt offline.

When an index is rebuilt offline, there is no FTS on the base table. When an index is rebuilt online, all the blocks of the base table are accessed.

These conclusions hold except in the scenario where the index is made unusable, data is then
loaded into the base table, and finally the index is rebuilt.

As the documentation says, while rebuilding an index OFFLINE the base table is not referenced. But there are situations where the base table is accessed, just as in an index create; they are:

+ the index is made "unusable"
+ data is loaded with SQL*Loader with skip_unusable_indexes = TRUE
+ the index is REBUILT OFFLINE

Basically, when an index is made "unusable", it is normally meant that the index will not be used at all (that is, it will be dropped later) or that it has to be dropped and re-created. Here the index being unusable takes priority over the offline rebuild. An unusable index has to refer to the base table while being rebuilt.
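For reference, the kind of statements involved look like this (a sketch only; the index and partition names are hypothetical):

```sql
-- Locate local index partitions left UNUSABLE (e.g. by a partition split):
SELECT index_name, partition_name, status
FROM   user_ind_partitions
WHERE  status = 'UNUSABLE';

-- Rebuild each affected partition. The default is offline; append ONLINE
-- to allow DML against the base table during the rebuild.
ALTER INDEX idx_dt_acct_flag REBUILD PARTITION p_2005_12;
```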

12.27.2005

Oracle 10g AS Forms and Reports deployment

Wow, here comes my first production assignment after attending the Oracle 10g AS courses.
The requirement was to upgrade Oracle 9i AS Forms & Reports to Oracle 10g Forms & Reports.
A good thing about 10g compared to 9i is that you get start and stop GUI options in the Oracle folder (Windows), which ultimately run OPMNCTL STOPALL.
The procedure I followed:

1. Ensure that the server IP address, DNS and localhost name (server name) are registered in the Windows
hosts file.
2. Start the installation of Oracle 10g from the CD (middle-tier only; since Forms and Reports come on a
separate CD in 10g Rel.1, I had no other option but middle-tier).
3. Specify the iasadmin username and password; this password is used later to connect as ias_admin in
the emctl tool (Enterprise Manager Console).
4. Once the installation is successful, check the install log for the port and other details of the HTTP server.
5. Application deployment (Forms and Reports):
copied the source to d:\App1
specified the location in the httpd.conf file, so that AS can recognize the location of the source:
edit \OracleASHome\Apache\Apache\conf\httpd.conf,
replace the value of DocumentRoot (DocumentRoot "d:\App1"),
update the matching <Directory> entry,
and save the file.
6. Configure the tnsnames.ora file for access to the target database.
7. Stop and start all the processes.
8. From the browser, run it at http://hostname.
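For illustration, the relevant httpd.conf fragment ends up looking something like this (the path and the access directives are an example in the Apache 1.3 syntax that Oracle AS uses; adapt them to your own layout):

```apache
# Point the document root at the deployed application source
DocumentRoot "d:/App1"

# Matching directory permissions for the new document root
<Directory "d:/App1">
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
```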

Wow, it was so easy, and I was happy that I succeeded on the first attempt.