12.24.2006
Happy EID, Merry Christmas, and a very happy and prosperous New Year to everyone.
For EID, I get 9 days off. Ummm...
My kids have extracted a strong promise from me to take them out every day.
My brother is also going to join us from Madina during the holidays, and I expect a lot of fun and a good time.
Once again, happy EID, Merry Christmas, and a very happy NEW YEAR to all.
May the new year bring more, and much needed, happiness to everyone.
Jaffar
Whatever topic is discussed on this blog reflects my own findings and views, which will not necessarily match those of others. I strongly recommend that you test before you implement any piece of advice given on my blog.
12.23.2006
Why is the index being ignored? - an interesting question on the OTN database forums.
As usual, I was answering a few interesting questions on the OTN Database forums.
This morning, I came across a very interesting, funny question, where the poster gave an example with some explanation and asked why the Optimizer ignores the INDEX.
== POSTING ON OTN DATABASE FORUM
SQL> CREATE table t (a VARCHAR2(10),b VARCHAR2(10))
2 /
Table created.
SQL> BEGIN
2 FOR i IN 1..10000
3 LOOP
4 INSERT INTO t VALUES (i,'A');
5 END LOOP;
6 END;
7 .
SQL> /
PL/SQL procedure successfully completed.
SQL> COMMIT;
Commit complete.
SQL> CREATE INDEX t_a_idx ON t (a)
2 /
Index created.
SQL> SELECT * FROM emp WHERE empno>500
2 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=14 Bytes=560)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (TABLE) (Cost=2 Card=14 Bytes=560)
2 1 INDEX (RANGE SCAN) OF 'PK_EMP' (INDEX (UNIQUE)) (Cost=1 Card=14)
The above query is not ignoring the possibility of using an index on the empno column, while the same query against table t ignores the index on column 'a' - why??
SQL> SELECT *
2 FROM t WHERE a>500
3 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=9500 Bytes= 133000)
1 0 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=6 Card=9500 Bytes =133000)
SQL> SELECT *
2 FROM t WHERE a>'500'
3 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=5552 Bytes= 77728)
1 0 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=6 Card=5552 Bytes =77728)
=== END OF POSTING
The poster gave two examples, one using the EMP table and one using a newly created table. The example with the EMP table worked perfectly, i.e. the index was used. His question was: why, in the other example with the different table, does the Optimizer not use the INDEX?
Analyzing the given examples, a few points can easily be clarified.
1. After the table creation, data insertion, and index creation, there is no sign in the example of any statistics collection. Yet the plans show cardinality, cost, and bytes, so I guess the Optimizer might have used dynamic sampling. I am not sure, because the poster mentioned neither the Oracle version, nor the optimizer parameter values, nor how the statistics were collected. (A sketch of gathering statistics explicitly follows the plan below.)
SQL> SELECT *
2 FROM t WHERE a>500
3 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=9500 Bytes= 133000)
1 0 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=6 Card=9500 Bytes =133000)
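To take missing statistics out of the equation, the poster could simply gather them and re-check the plan. A minimal sketch (the table name T is from the example above; the parameter choices are my own assumptions):

BEGIN
   -- Gather table statistics for T; cascade => TRUE also covers the index t_a_idx.
   DBMS_STATS.GATHER_TABLE_STATS(
      ownname          => USER,
      tabname          => 'T',
      cascade          => TRUE,
      estimate_percent => 100);  -- small table, so compute exact statistics
END;
/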
2. Column 'a' is defined as a character datatype (a VARCHAR2(10)), while the predicate value is given as a number.
SQL> SELECT *
2 FROM t WHERE a>500
3 /
This leaves it to Oracle to do an implicit datatype conversion. I remember, a long time ago, we faced this issue when the datatype was a character type and the predicate value was given as a number: the Optimizer ignored the index. I guess this could be one of the reasons for ignoring the INDEX here.
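In other words, Oracle effectively rewrites the predicate with TO_NUMBER() around the indexed column, and a function wrapped around the column disables an ordinary index on it. A minimal sketch of the point and the two obvious fixes (my own illustration based on the example above, assuming every value in 'a' converts cleanly to a number; t_a_num_idx is a made-up name):

-- WHERE a > 500 is treated internally as:  WHERE TO_NUMBER(a) > 500
-- so the ordinary index t_a_idx on 'a' cannot be range-scanned.

-- Fix 1: compare like with like; the index on 'a' remains usable.
-- (Note the different semantics: '9' > '500' as strings, which is exactly
-- why the two plans above show different cardinalities.)
SELECT * FROM t WHERE a > '500';

-- Fix 2: if a numeric comparison is really intended, a function-based
-- index matches the converted predicate.
CREATE INDEX t_a_num_idx ON t (TO_NUMBER(a));
SELECT * FROM t WHERE TO_NUMBER(a) > 500;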
Maybe the internal calculations and the rest of the theory can be explained in detail by Jonathan Lewis, if at all he reads my blog.
In another example,
SQL> SELECT *
2 FROM t WHERE a>'500'
3 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=5552 Bytes= 77728)
1 0 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=6 Card=5552 Bytes =77728)
I don't know the high value and low value for column 'a' (statistics). If I remember correctly, the formula for the selectivity calculation, when there are no histograms in place, is:

selectivity(c1 > value) = (high - value) / (high - low)
In the given example, the poster inserted 10,000 rows using PL/SQL.
The cardinality predicted by the Optimizer is 5,552, which is more than 50% of the rows (5552/10000 is roughly 55.5%). With that selectivity, Oracle will surely ignore the INDEX and favour a full table scan (another reason).
Jonathan has written a nice function to convert the high/low raw values into readable values, because what is stored in the high/low columns (for character data) is a raw value.
http://jonathanlewis.wordpress.com/2006/11/29/low_value-high_value/
A few other concerns:
I guess the poster used the AUTOTRACE utility to produce the execution plan. As Jonathan reiterates, the execution plan shown by AUTOTRACE is not necessarily the real one; it is only a prediction, and it can lie.
I recommended that the poster use DBMS_XPLAN.DISPLAY (on Oracle 9iR2 or later) to get the execution plan together with the predicate/filter information, which is clearly missing from the AUTOTRACE output.
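For example, a minimal way to see the filter predicates (using the table and query from the posting; the exact output will vary by system):

SQL> EXPLAIN PLAN FOR SELECT * FROM t WHERE a > 500;

Explained.

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

The 'Predicate Information' section at the bottom of the output would show something like filter(TO_NUMBER("A")>500), making the implicit datatype conversion visible at a glance.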
I have not written this to hurt the poster, nor to disappoint anybody; I was just sharing my thoughts on this. Any comments are welcome.
Jaffar
12.14.2006
Change in transitive closure from 9i to 10g.
Some time back, I discussed transitive closure behaviour when the query_rewrite_enabled parameter is set to true/false on this blog: http://jaffardba.blogspot.com/2006/03/transitive-closure-behaviour-when.html. That was on Oracle 9iR2 (9205).
Jonathan has discussed the changes to transitive closure in 10g on his blog: http://jonathanlewis.wordpress.com/2006/12/13/cartesian-merge-join/.
There is a (hidden) parameter, _optimizer_transitivity_retain, which is set to true in version 10g. This overcomes the big problem with transitive closure behaviour when query_rewrite_enabled is not set to true.
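As a quick reminder of what transitive closure means in practice, a minimal sketch (the table and column names are made up for illustration):

SQL> SELECT *
  2  FROM t1, t2
  3  WHERE t1.id = t2.id
  4  AND t1.id = 10;

From 't1.id = t2.id' and 't1.id = 10', the optimizer can derive the extra predicate 't2.id = 10'. The question across versions is whether the original join predicate is retained alongside the generated one; in 10g, with _optimizer_transitivity_retain set to true, it is, which avoids the surprise Cartesian merge joins described in Jonathan's post.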
Jaffar
12.10.2006
Migrating to Automatic Undo Management
Although Oracle strongly recommends using Automatic Undo Management instead of manual rollback segments, at my previous organization we shifted from automatic undo management back to manual rollback segments. I discussed the reason behind this shift in my old article titled 'Auto Undo Management performance degradation'.
http://jaffardba.blogspot.com/2006_04_01_jaffardba_archive.html
The problem was with the Oracle 9i (9205) version. Now I hear that the database has already been upgraded to 10gR2.
I am really inspired by the following 10gR2 features of Automatic Undo Management. Therefore, I will recommend that my ex-boss revert to the UNDO MANAGEMENT option.
I can convince him with the following valid reasons:
Reason One: - Fast Ramp-Up in 10gR2.
In Oracle 10gR2, when the database starts up, undo segments whose status was online during the shutdown are brought online again. This avoids the overhead of bringing undo segments online on a heavily loaded OLTP database.
Reason Two: - An easy migration step.
To get an idea of how large the UNDO tablespace should be, you can simply use the DBMS_UNDO_ADV.RBU_MIGRATION function provided in 10gR2.
DECLARE
   utbsiz_in_mb NUMBER;
BEGIN
   -- Suggested undo tablespace size (in MB) for migrating to automatic undo.
   utbsiz_in_mb := DBMS_UNDO_ADV.RBU_MIGRATION;
   DBMS_OUTPUT.PUT_LINE('Suggested size: ' || utbsiz_in_mb || ' MB');
END;
/
(Run with SET SERVEROUTPUT ON in SQL*Plus to see the output.)
http://www.oracle.com/technology/oramag/oracle/05-jul/o45tuning.html
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14231/undo.htm
Reason Three: - Undo guarantee and automatic tuning of undo retention.
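For instance, enabling the retention guarantee is a one-line change (the tablespace name undotbs1 is illustrative):

SQL> ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;

With the guarantee in place, Oracle will not overwrite unexpired undo even under space pressure, so long-running queries get the undo they need (at the risk of DML failing if the undo tablespace cannot grow).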
Happy Reading,
Jaffar
12.09.2006
Optimal Logfile Size advice - Oracle 10gR2
I have come across many forums and mailing lists where people ask for the optimal size for online redo log files.
I would say that, prior to 10gR2, you had to estimate the value from your system's redo generation and other database settings.
Although log file size is not the metric by which to measure or improve LGWR performance, it can affect the performance of the DBWR and ARCH background processes. Therefore, log files should be large enough to avoid frequent checkpointing in the database.
Prior to Oracle 10gR2, there was no direct hint from Oracle that you should set this value as optimal, or at least start with this value.
In Oracle 10gR2, the optimal log file size can be obtained from the v$instance_recovery dynamic view.
select optimal_logfile_size from v$instance_recovery;

OPTIMAL_LOGFILE_SIZE
--------------------
                 453

(The value is reported in MB.)
Note: the above value is considered optimal based on the current setting of FAST_START_MTTR_TARGET.
The Oracle documentation suggests the following: a rough guide is to switch logs at most once every twenty minutes, and to size your online redo log files according to the amount of redo your system generates.
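To see how your current log files compare with that guideline, you can check the log switch frequency per hour (a simple sketch against v$log_history; three or more switches in an hour means you are switching more often than every twenty minutes):

select to_char(first_time, 'YYYY-MM-DD HH24') hr,
       count(*) log_switches
from v$log_history
group by to_char(first_time, 'YYYY-MM-DD HH24')
order by 1;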
Happy reading,
Jaffar
12.04.2006
Make sure you back up the old version of statistics before you start collecting new statistics.
After reading Jonathan's blog post on saving statistics, I thought I would discuss this a little on my own blog. I believe very few DBAs back up the old table/schema/database statistics before, or while, collecting new statistics.
If you don't back up the old version of the statistics, and the optimizer behaves differently because of the new statistics, then you are in trouble. If you have a backup of the old statistics, you can restore them to get the old optimizer behaviour back.
Many Oracle experts, and Oracle itself, now recommend backing up the old statistics before collecting new ones, so that you have the option of restoring the previous version.
There are a couple of workarounds to back up the old version of the statistics in earlier versions of Oracle.
In Oracle 10g.
From Oracle 10g onwards, whenever statistics are modified, the old version of the statistics is saved automatically. (If you use ANALYZE in 10g to collect statistics, this automatic saving of the old statistics does not take place.)
The DBA_OPTSTAT_OPERATIONS dictionary view contains the history of statistics operations performed at the various levels (table/schema/database). Its columns are:
OPERATION VARCHAR2(64)
TARGET VARCHAR2(64)
START_TIME TIMESTAMP(6) WITH TIME ZONE
END_TIME TIMESTAMP(6) WITH TIME ZONE
Using the start_time and end_time information from this view, you can always restore an old version of the statistics.
Use DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY to get the oldest timestamp for which statistics history is available; restoring statistics from before that timestamp is not possible.
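A minimal restore sketch (the schema, table, and timestamp are illustrative; RESTORE_TABLE_STATS is the documented 10g procedure, with siblings at schema and database level):

-- Find a point in time to go back to.
select operation, target, start_time
from dba_optstat_operations
order by start_time;

-- Restore the statistics for a table as they were at that time.
BEGIN
   DBMS_STATS.RESTORE_TABLE_STATS(
      ownname         => 'SCOTT',
      tabname         => 'EMP',
      as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' DAY);
END;
/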
The PURGE_STATS procedure can be used to purge the old statistics manually.
The old statistics are purged automatically at regular intervals, based on the statistics history retention setting. The default retention is 31 days. If you would like to modify the history retention setting, use DBMS_STATS.ALTER_STATS_HISTORY_RETENTION.
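For example, to keep 60 days of history instead (the value is just an illustration):

EXEC DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(60)
EXEC DBMS_STATS.PURGE_STATS(SYSDATE - 60)

The first call changes the automatic retention; the second manually purges anything older than 60 days.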
Note: automatic purging depends on the STATISTICS_LEVEL parameter setting; it must be TYPICAL or ALL for this feature to work in 10g.
In previous versions of Oracle, you can use dbms_stats.create_stat_table to create a table in which to store the old version of the statistics.
Jonathan Lewis has discussed, with good examples, how to create the table to store the old statistics on his blog, in 'Saving Statistics':
http://jonathanlewis.wordpress.com/2006/12/03/saving-statistics/
The other way to back up and restore old statistics: exporting and importing statistics.
Statistics can be exported and imported. This also makes it possible to copy statistics from a production database to a test/development database.
You need a table to hold the statistics before you start exporting them; use the dbms_stats.create_stat_table procedure to create it. The DBMS_STATS.EXPORT_*_STATS and DBMS_STATS.IMPORT_*_STATS procedures are then used to export and import the statistics, as in the sketch below.
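A minimal end-to-end sketch at table level (the schema, table, and statistics-table names are illustrative; all three are documented DBMS_STATS procedures):

BEGIN
   -- One-off: create the holding table for exported statistics.
   DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'STATS_BACKUP');
   -- Before gathering new statistics, export the current ones.
   DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP',
                                 stattab => 'STATS_BACKUP');
   -- If the new statistics misbehave, import the saved ones back.
   DBMS_STATS.IMPORT_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP',
                                 stattab => 'STATS_BACKUP');
END;
/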
I hope I follow this procedure myself with my databases that are not yet on 10g.
Happy reading,
Jaffar