12.24.2006

Happy EID, Merry Christmas and A very happy NEW YEAR!

Happy EID, Merry Christmas and a very happy and prosperous new year to everyone.

For EID, I have 9 days off. Ummm...

My kids have taken a strong promise from me to take them out every day.
My brother is also going to join us from Madina during the holidays, and I expect a lot of fun and a good time.

Once again, happy EID, Merry Christmas and a very happy NEW YEAR to all.

May the new year bring more of the much-needed happiness to everyone.

Jaffar

12.23.2006

Why is the index being ignored? - an interesting question on the OTN database forum.

As usual, I was answering a few interesting questions on the OTN Database forum.
This morning, I came across a very interesting, funny question: the poster gave an example with some explanation and asked why the Optimizer ignores the INDEX.

== POSTING ON OTN DATABASE FORUM
SQL> CREATE table t (a VARCHAR2(10),b VARCHAR2(10))
2 /
Table created.

SQL> BEGIN
2 FOR i IN 1..10000
3 LOOP
4 INSERT INTO t VALUES (i,'A');
5 END LOOP;
6 END;
7 .
SQL> /
PL/SQL procedure successfully completed.
SQL> COMMIT;
Commit complete.
SQL> CREATE INDEX t_a_idx ON t (a)
2 /
Index created.

SQL> SELECT * FROM emp WHERE empno>500
2 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=14 Bytes=560)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (TABLE) (Cost=2 Card=14 Bytes=560)
2 1 INDEX (RANGE SCAN) OF 'PK_EMP' (INDEX (UNIQUE)) (Cost=1 Card=14)

The above query is not ignoring the possibility of using the index on the empno column, while the same query against table t ignores the index on column 'a' - why??

SQL> SELECT *
2 FROM t WHERE a>500
3 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=9500 Bytes=133000)
1 0 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=6 Card=9500 Bytes=133000)

SQL> SELECT *
2 FROM t WHERE a>'500'
3 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=5552 Bytes=77728)
1 0 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=6 Card=5552 Bytes=77728)

=== END OF POSTING

The poster gave two examples: one using the EMP table and one using a newly created table. The EMP example worked as expected, i.e. the index was used. His question was why, in the other example, against the new table, the Optimizer doesn't use the INDEX.

Analyzing the given examples, a few points can easily be cleared up.
1. After the table creation, data insertion and index creation, there is no sign in the example of statistics collection. Yet the plans show cardinality, cost and bytes, so I guess the Optimizer might have used dynamic sampling. I am not sure, because the poster didn't mention the Oracle version, the optimizer parameter values, or how he collected the statistics.


SQL> SELECT *
2 FROM t WHERE a>500
3 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=9500 Bytes=133000)
1 0 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=6 Card=9500 Bytes=133000)

2. Column 'a' is defined as a character datatype (a VARCHAR2(10)), while the predicate value is given as a number.

SQL> SELECT *
2 FROM t WHERE a>500
3 /

This leaves it to Oracle to do an implicit conversion. I remember, a long time ago, we faced this issue when the datatype was character and the predicate value was numeric: the Optimizer ignored the index. I guess this could be one of the reasons for ignoring the INDEX here.
Perhaps the internal calculations and the rest of the theory can be explained in detail by Jonathan Lewis, if he ever reads my blog.
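To make the point concrete, here is a small sketch of what the implicit conversion effectively does (my own illustration, not the poster's session):

-- With a numeric literal, Oracle must rewrite the predicate as
-- TO_NUMBER(a) > 500; wrapping the indexed column in a function
-- makes the index on A unusable, exactly as if we had written:
SELECT * FROM t WHERE TO_NUMBER(a) > 500;

-- With a string literal, column A stays bare, so the index
-- at least remains a candidate for the optimizer:
SELECT * FROM t WHERE a > '500';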

In another example,

SQL> SELECT *
2 FROM t WHERE a>'500'
3 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=5552 Bytes= 77728)
1 0 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=6 Card=5552 Bytes =77728)

I don't know the high value and low value for column 'a' (statistics). If I remember correctly, the formula for the selectivity of c1 > 'value', when there are no histograms in place, is:
selectivity(c1 > 'value') = (high - 'value') / (high - low)
In the given example, the poster inserted 10,000 rows using PL/SQL.
The cardinality predicted by the Optimizer is 5552, which is more than 50% of the rows; no surprise that Oracle ignores the INDEX and favours a full table scan (another reason).
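Putting rough numbers on it (my own arithmetic, based only on the figures in the plan above):

-- 10,000 rows in T, predicted cardinality 5552:
--   selectivity = 5552 / 10000 = 0.5552, i.e. about 55% of the table
-- Fetching 55% of the rows one ROWID at a time through the index
-- would cost far more than a single full scan of this small table.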

Jonathan has written a nice function to convert the raw high/low value into a readable number, because the value stored in the high/low column (for character columns) is a raw value.

http://jonathanlewis.wordpress.com/2006/11/29/low_value-high_value/

A few other concerns.

I guess the poster used the AUTOTRACE utility to predict the execution plan. As Jonathan reiterates, the execution plan produced by AUTOTRACE is not necessarily the real one; it could be a lie, or simply a prediction.

I have recommended that the poster use DBMS_XPLAN.DISPLAY (if the Oracle version is >= 9iR2) to get the execution plan along with the predicate/filter information, which is clearly missing from the AUTOTRACE output.
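For reference, a minimal sketch of that approach against the poster's table (standard EXPLAIN PLAN / DBMS_XPLAN usage; the exact output will vary by version):

EXPLAIN PLAN FOR
SELECT * FROM t WHERE a > 500;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- The Predicate Information section would show something like
--    filter(TO_NUMBER("A")>500)
-- making the implicit conversion, which AUTOTRACE never showed, visible.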

I have not written this to hurt the poster nor to disappoint anybody; I was just sharing my thoughts on this. Any comments are welcome.


Jaffar

12.14.2006

Change in transitive closure from 9i to 10g.

Some time back, I discussed transitive closure behavior when the query_rewrite_enabled parameter is set to true/false on this blog: http://jaffardba.blogspot.com/2006/03/transitive-closure-behaviour-when.html. That was on Oracle 9iR2 (9205).
Jonathan has discussed the changes to transitive closure in 10g on his blog: http://jonathanlewis.wordpress.com/2006/12/13/cartesian-merge-join/.
There is a (hidden) parameter, _optimizer_transitivity_retain, which is set to true in 10g. This overcomes the big problem with transitive closure behavior when query_rewrite_enabled is not set to true.
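For the curious, the usual way to peek at a hidden parameter is the query below, run as SYS (it relies on x$ internals, so treat it as an assumption that may vary between versions):

SELECT a.ksppinm  parameter_name,
       b.ksppstvl parameter_value
FROM   x$ksppi a, x$ksppcv b
WHERE  a.indx = b.indx
AND    a.ksppinm = '_optimizer_transitivity_retain';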

Jaffar

12.10.2006

Migrating to Automatic Undo Management

Although Oracle strongly recommends Automatic Undo Management over manual rollback segments, at my previous organization we shifted from automatic undo management to manual rollback segments. I discussed the reason behind this shift in my old article titled 'Auto Undo Management performance degradation'.
http://jaffardba.blogspot.com/2006_04_01_jaffardba_archive.html

The problem was with Oracle 9i (9205). Now I hear that the database has already been upgraded to 10gR2.

I am really impressed by the following 10gR2 features for Automatic Undo Management. Therefore, I will recommend that my ex-boss revert to the UNDO MANAGEMENT option.

I can convince him with the following valid reasons:

Reason One: - Fast Ramp-Up in 10gR2.

In Oracle 10gR2, when the database starts up, undo segments whose status was online during the shutdown will be brought online again. This avoids the overhead of bringing undo segments online on a heavily loaded OLTP database.

Reason Two: - Easy migration step.

An idea of how large the UNDO tablespace should be can easily be obtained by using the DBMS_UNDO_ADV.RBU_MIGRATION function provided in 10gR2.

SET SERVEROUTPUT ON
DECLARE
   utbsiz_in_mb NUMBER;
BEGIN
   utbsiz_in_mb := DBMS_UNDO_ADV.RBU_MIGRATION;
   DBMS_OUTPUT.PUT_LINE('Required undo tablespace size: ' || utbsiz_in_mb || ' MB');
END;
/


http://www.oracle.com/technology/oramag/oracle/05-jul/o45tuning.html
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14231/undo.htm

Reason Three:
Undo guarantee and Automatic Tuning of Undo Retention.


Happy Reading,
Jaffar

12.09.2006

Optimal Logfile Size advice - Oracle10gR2

I have come across many forums & mailing lists where people ask for the optimal size for online redo log files.

I would say that prior to 10gR2, you needed to estimate this value from the system's redo generation and other database settings.

Although log file size is not the way to measure or improve LGWR performance, it can affect the performance of the DBWR and ARCH background processes. Therefore, log files should be large enough to avoid frequent checkpointing in the database.

Prior to Oracle 10gR2, there was no direct hint from Oracle that you should set this value as optimal, or at least start with it.

In Oracle 10gR2, the optimal size for a log file can be obtained from the v$instance_recovery dynamic view.

select optimal_logfile_size from v$instance_recovery;
OPTIMAL_LOGFILE_SIZE
--------------------
453 (size in MB)

Note: The above value is considered optimal based on the current setting of FAST_START_MTTR_TARGET.

The Oracle documentation suggests the following: a rough guide is to switch logs at most once every twenty minutes, and to size your online redo log files according to the amount of redo your system generates.
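A quick way to sanity-check that guideline is to count log switches per hour from v$log_history; a sketch (adjust the window to taste):

SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') log_hour,
       COUNT(*) log_switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;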

Happy reading,

Jaffar

12.04.2006

Make sure you back up the old version of statistics before you start collecting new statistics.

I believe very few DBAs back up old table/schema/database statistics before collecting new ones; I guess the majority of DBAs don't back up the existing statistics at all.

If you don't back up the old version of the statistics and the optimizer behaves differently because of the new statistics, you are in trouble. If you do have a backup of the old statistics, you can restore them to get the optimizer behaviour back as it was.

Many Oracle experts, and Oracle itself, now recommend backing up the old statistics before collecting new ones, so that you have the option to restore the previous version.

There are a couple of workarounds for backing up the old version of statistics in earlier versions of Oracle.

In Oracle 10g.

From Oracle 10g onwards, whenever statistics are modified, the old version of the statistics is saved automatically. (If you use ANALYZE in 10g to collect statistics, this automatic saving of old statistics won't take place.)

DBA_OPTSTAT_OPERATIONS is a dictionary view that contains the history of statistics operations performed at the table/schema/database level.

OPERATION VARCHAR2(64)
TARGET VARCHAR2(64)
START_TIME TIMESTAMP(6) WITH TIME ZONE
END_TIME TIMESTAMP(6) WITH TIME ZONE

Using the start_time and end_time from the above dictionary view, you can always restore an old version of the statistics.
Use DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY to get the oldest timestamp for which statistics history is available; restoring statistics from before that timestamp is not possible.
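A small sketch of a restore (the SCOTT/EMP names and the one-day window are placeholders, not from the original post):

-- Oldest point in time to which statistics can be restored:
SELECT DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY FROM dual;

-- Restore the table statistics as they were one day ago:
BEGIN
   DBMS_STATS.RESTORE_TABLE_STATS(
      ownname         => 'SCOTT',
      tabname         => 'EMP',
      as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' DAY);
END;
/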

The PURGE_STATS procedure can be used to purge old statistics manually.

Old statistics are purged automatically at regular intervals, based on the statistics history retention setting. The default retention is 31 days. If you would like to modify the retention, use DBMS_STATS.ALTER_STATS_HISTORY_RETENTION.
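For example, to check the current retention and then stretch it to 60 days:

SELECT DBMS_STATS.GET_STATS_HISTORY_RETENTION FROM dual;

EXEC DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(60);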

Note: Automatic purging depends on the STATISTICS_LEVEL parameter setting; it must be TYPICAL or ALL for this feature to work in 10g.

In previous versions of Oracle, you can create a table to store the old version of the statistics using dbms_stats.create_stat_table.

Jonathan Lewis has discussed, with good examples, how to create a table to store old statistics on his blog, 'Saving Statistics':
http://jonathanlewis.wordpress.com/2006/12/03/saving-statistics/


The other way to back up and restore old statistics - exporting and importing statistics.

Statistics can be exported and imported. This enables you to copy statistics from a production database to a test/development database.

You need a table to hold the statistics before you start exporting them; use the dbms_stats.create_stat_table procedure to create it. The DBMS_STATS.EXPORT_*_STATS and DBMS_STATS.IMPORT_*_STATS procedures are then used to export/import the statistics.
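A minimal sketch of the round trip (SCOTT/EMP and the MY_STATS table name are placeholders):

-- On the source database:
EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'MY_STATS');
EXEC DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP', stattab => 'MY_STATS');

-- Move the MY_STATS table to the target database (e.g. with exp/imp), then:
EXEC DBMS_STATS.IMPORT_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP', stattab => 'MY_STATS');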

I hope to follow this procedure with my databases which are not yet on 10g.

Happy reading,


Jaffar

After reading Jonathan's blog on SAVING STATISTICS, I thought of discussing a little bit of it on my blog.

11.30.2006

Analysing Statspack (series)

I have come across many requests, on Oracle forums and elsewhere, on how to read a Statspack report and analyze its data to reach the bottleneck behind a performance issue.
Now there is good news: Jonathan Lewis is going to discuss one Statspack report in depth, and how to analyze Statspack data, in a set of 10 articles on his blog.

http://jonathanlewis.wordpress.com/


Happy reading,

Jaffar

11.29.2006

New Job - New Responsibilities.

It's been exactly a month in my new job - how time goes!!!!!
Previously, I worked at Saudi Fransi Bank as a Sr. Oracle DBA for 2.5 years. Now I am working with INTRACOM MIDDLE EAST as Sr. Oracle DBA for the ITC project.
Intracom, a leading company in Greece, got a 2-year project with ITC. ITC provides broadband, fiber optics and wireless technologies to various companies, banks and individual customers.
For me it's a change, working for a telecom company after 5 years of experience in banking.
During my tenure at the bank, I carried out many advanced and challenging jobs. It was a really great and unforgettable experience. All my colleagues and my immediate boss were very good and very cooperative. Despite the fact that I was a consultant, I was treated as their employee, and the bank also provided me with a couple of Oracle University training courses that really helped me to pass my 10g upgrade exam; the courses on 10gAS were also very helpful for understanding Oracle Application Server.
The following are a few tasks that I carried out at the bank with the help of my colleagues:

1. Physical standby database for the core banking database.
2. Migration of a 1.7TB data warehouse database from 8i to 9i on AIX, and later from AIX to HP-Superdome.
3. Complete data warehouse database restore and recovery.
4. Disaster recovery site for the data warehouse database.
5. Tuning of the local share market trading database to achieve the highest number of transactions per second.

Presently, I am responsible for a few well-known databases/applications used in telecommunications, like CARMER, REMEDY, EHEALTH, etc.

Future plans are to implement high availability solutions for the applications and database (RAC).

I am also planning to take a RAC course in the coming months.

That's all for now, folks,

Cheers,

Jaffar

11.25.2006

String of Disasters - a human error.

I was reading Alex Gorbachev's article on his blog, 'Which Risks Are You Protected From?' (http://www.pythian.com/blogs/author/alex/), in which he discussed how human errors are underestimated and can lead to serious consequences.
While reading the article, I recalled my hardest time at one of my previous organizations, where I faced the music of disaster and spent 4 days bringing the database back.
At that company, where state-of-the-art technology was used in terms of system hardware and software, I came across a string of disasters due to a human error.
My first experience of disaster recovery was for a 2TB data warehouse database. Due to a human error, one of the disks on the UNIX OS got corrupted, including its mirror disk, and unfortunately the filesystem holding all the redo log groups and their members was placed on the corrupted disk.
Since we lost all the redo groups and their members, we were left with only one option: a full database restore and recovery, and opening the database with RESETLOGS.
We used to back up the complete database every weekend and back up archived logs every 4 hours. The database generated around 100 archived logs a day.
The disaster occurred just before the next backup was scheduled.
When the restore was started, we faced other problems with the backup utility we were using.
However, the complete database was restored in 3.5 days, and recovery took a good amount of time as one week of archived logs had to be applied. Finally, at the end of the 4th day, the database was back online and available to the users.
Different kinds of human errors happened which caused us to restore other databases two more times. No matter how good the resources (hardware) you have, there is always the possibility of human error.
For me it was a good learning experience, handling restore and recovery for the very first time in production, and this disaster cleared up many of my backup/recovery doubts.
I feel that, sometimes, people learn by making mistakes.

Jaffar

How Oracle 10g makes life easier in a day to day administration. (Part I)

Although Oracle 10g has been around for the past couple of years, we have just started migrating our databases to 10g. There is no question that Oracle 10g comes with some great performance tuning features. I have come across a few new dictionary and dynamic views in Oracle 10g which are useful in day-to-day administration. Prior to 10g, getting this information required a little bit of work; in 10g, it is simply a matter of selecting from these views/dictionary tables.


Please note that you need the performance and diagnostics pack licenses from Oracle to use these features, and they are available only in Oracle Enterprise Edition.

OS Statistics
Prior to 10g, to know OS-level resource consumption, one needed to log in to the server and use OS-level utilities like sar, top and vmstat. Now, in 10g, simply querying the v$osstat view provides useful information about OS resource consumption.
v$osstat captures machine-level information in the database. It is a good starting point for analyzing whether there are hardware-level resource issues.
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_2010.htm#sthref3811
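The listing below came from a simple query along these lines (my own formatting, trimmed for readability):

select stat_name, value from v$osstat;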

STAT_NAME VALUE
------------------------------------- ----------
NUM_CPUS 4 (number of CPUs/processors on the DB server)
IDLE_TIME 609528 (time values are in hundredths of a second)
BUSY_TIME 85007
USER_TIME 62146
SYS_TIME 22861
IOWAIT_TIME 0
AVG_IDLE_TIME 152332
AVG_BUSY_TIME 21205
AVG_USER_TIME 15491
AVG_SYS_TIME 5671
AVG_IOWAIT_TIME 0
OS_CPU_WAIT_TIME 5600
RSRC_MGR_CPU_WAIT_TIME 0
LOAD .29296875
PHYSICAL_MEMORY_BYTES 1.7095E+10
VM_IN_BYTES 9576448
VM_OUT_BYTES 0

DBA_HIST_OSSTAT displays historical operating system statistics.
DBA_HIST_OSSTAT_NAME displays the names of the operating system statistics.

Tablespace Usage Statistics
In earlier versions, it required joining a few dictionary views and some computation to find the used percentage of a tablespace. 10g makes life easier by providing it all in one view, dba_tablespace_usage_metrics. (For me, though, this view doesn't return the real values.)


select * from dba_tablespace_usage_metrics;

TABLESPACE_NAME USED_SPACE TABLESPACE_SIZE USED_PERCENT
------------------------------ ---------- --------------- ------------
SYSAUX 55448 4194302 1.32198397
SYSTEM 79448 4194302 1.89418883
TEMP 0 4194302 0
TEMPORARY 256 256000 .1
UNDOTBS1 31520 49280 63.961039
USERS 48 640 7.5

DBA_HIST_TBSPC_SPACE_USAGE
- This dictionary view contains the history of tablespace usage metrics; each row holds one hour of history.


desc dba_hist_tbspc_space_usage;

Name Null? Type
----------------------------------------------------------------- -------- -------------
SNAP_ID NUMBER
DBID NOT NULL NUMBER
TABLESPACE_ID NUMBER
TABLESPACE_SIZE NUMBER
TABLESPACE_MAXSIZE NUMBER
TABLESPACE_USEDSIZE NUMBER
RTIME VARCHAR2(25)

select tablespace_size,tablespace_usedsize,rtime
from dba_hist_tbspc_space_usage
where tablespace_id = 2
order by rtime;

TABLESPACE_SIZE TABLESPACE_USEDSIZE RTIME
76800 55360 11/25/2006 08:00:57

76800 55368 11/25/2006 09:00:08
76800 55392 11/25/2006 10:00:23
76800 55424 11/25/2006 11:00:35
76800 55448 11/25/2006 12:00:35
76800 55488 11/25/2006 13:00:47

If you want to query the history (usage of any particular tablespace), you need to play around with the RTIME column.
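For example, since RTIME is a VARCHAR2, filtering on a time window needs a conversion; a sketch (the format mask matches the sample output above):

SELECT tablespace_size, tablespace_usedsize, rtime
FROM   dba_hist_tbspc_space_usage
WHERE  tablespace_id = 2
AND    TO_DATE(rtime, 'MM/DD/YYYY HH24:MI:SS') > SYSDATE - 7
ORDER  BY TO_DATE(rtime, 'MM/DD/YYYY HH24:MI:SS');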

DB Startup Time Log
As far as I know, prior to 10g there was no dictionary view or table that kept the history of database startup times and versions.

10g provides DBA_HIST_DATABASE_INSTANCE, which keeps a log of database startup times and versions.

SELECT instance_name,startup_time,version
FROM dba_hist_database_instance;

INSTANCE_NAME STARTUP_TIME VERSION
---------------- -------------------------------- -------
ORCL 07-OCT-06 16.29.29.000 10.2.0.1.0
ORCL 09-OCT-06 13.20.05.000 10.2.0.1.0

ORCL 09-OCT-06 13.38.05.000 10.2.0.1.0
ORCL 09-OCT-06 20.57.31.000 10.2.0.1.0

In my next post, I will discuss a few of the other useful dictionary and dynamic views in Oracle 10gR2.




11.20.2006

Oracle DBA Toolbar

Download 'Oracle DBA Toolbar' from OTN.

http://www.oracle.com/technology/toolbar/install/index.html

The Oracle DBA Toolbar (available for Internet Explorer only at the moment - yeah, yeah, we know; a Mozilla/Firefox version is forthcoming) gives you one-click access to key DBA resources on OTN.

11.14.2006

The Power of ASH (Active Session History)

Oracle 10g brings many new features through which one can easily tune bad SQLs and diagnose database performance issues, using database metrics, active session history and the time model views.

The following query fetches the top SQLs that spent the most time on CPU/wait/IO. (Thanks to Kyle Hailey for this script.)

select
    ash.sql_id,
    sum(decode(ash.session_state,'ON CPU',1,0)) "CPU",
    sum(decode(ash.session_state,'WAITING',1,0)) -
      sum(decode(ash.session_state,'WAITING', decode(en.wait_class,'User I/O',1,0),0)) "WAIT",
    sum(decode(ash.session_state,'WAITING', decode(en.wait_class,'User I/O',1,0),0)) "IO",
    sum(decode(ash.session_state,'ON CPU',1,1)) "TOTAL"
from v$active_session_history ash, v$event_name en
where ash.sql_id is not null
and en.event# = ash.event#
group by ash.sql_id
order by sum(decode(ash.session_state,'ON CPU',1,1)) desc;

SQL_ID CPU WAIT IO TOTAL
------------- ---------- ---------- ---------- ----------
bqts5m5y267ct 0 0 20 20
4gd6b1r53yt88 0 16 1 17
35rqnp0hn3p3j 0 13 0 13
3shtm7x3a54qu 0 0 8 8
0hf43mhpx086p 0 0 4 4

Use v$sql or v$sqlarea to get the sql_text for any of the above sql_ids.

SELECT sql_text FROM v$sqlarea WHERE sql_id = 'bqts5m5y267ct';

dbms_xplan.display_awr can be used to extract the SQL plan for a given sql_id.

SELECT * FROM table(dbms_xplan.display_awr('bqts5m5y267ct'));


The above scenario was done on Oracle 10g Release 2 on Sun Solaris 10.

11.05.2006

Jonathan's Blog.

One of the well-known Oracle experts, authors and presenters, Mr. Jonathan Lewis, has started blogging on Oracle technology.
He was already updating his blog-like site regularly, but it's nice to see him blogging every day at http://jonathanlewis.wordpress.com/.

Great job Jonathan and hats off to you.

Jaffar

10.18.2006

Interview in Oracle Magazine in the peer-to-peer section.

After receiving the Oracle ACE award, I was interviewed by Oracle Magazine.
My interview is in the November/December edition in the Peer-to-Peer section.
If you would like to read my interview, following is the URL.

http://www.oracle.com/technology/oramag/oracle/06-nov/o66peer.html

Thanks for your time.

Jaffar

9.04.2006

French Classes

Our bank has hired a French instructor to teach French to its employees.
At the beginning I didn't get the chance to enroll, but this time I have enrolled myself in the French beginners class.
So far, I have attended two classes. It looks hard but, at the same time, very interesting.
I hope I can carry on and finish all the levels.

Je suis étudiant en français = I am studying French.

8.20.2006

A good site to start learning ORACLE APPS.

Today, I came across a good Oracle Apps site, maintained by Mr. Atul Kumar. I found the site and its blog interesting and helpful for people who want to become Oracle Apps DBAs.

Main site : http://teachmeoracle.com/
Blog : http://becomeappsdba.blogspot.com/

The blog also has a few Oracle Apps DBA questions.

Cheers..

6.23.2006

Oracle ACE award!!!!!!!!!

I have received an email from Oracle saying that they have confirmed my ACE award and added my profile to the Oracle ACEs list.
Well, it's really a big gift for me before I go on vacation.

http://www.oracle.com/technology/community/oracle_ace/ace1.html#hussain

Cheers.

6.14.2006

Vacation TIME.

I will be going on my annual 30-day vacation to India.
We did a lot of shopping for family members, and my kids are very excited to see their grandparents and other relatives. I believe they are even more excited because they are going to travel by plane.
I am going to miss my favourite OTN Database discussion forum, where I often contribute and learn many new things every day.
I hope I will get my ACE award from Oracle. It's been 2 months since I sent my ACE award nomination form, and I haven't received any response.

Cheers.

5.02.2006

Tips to avoid hot block contention for small look-up tables.

MINIMIZE RECORDS_PER_BLOCK

The basic idea of this article is to store a single row in a single block.
This can be useful to avoid hot block contention on small look-up tables which are accessed heavily.

The MINIMIZE RECORDS_PER_BLOCK feature arrived with Oracle 8i; it gives you a way to store (almost) a single record per block.

Test case:

Normal Heap Table (without MINIMIZE RECORDS_PER_BLOCK)

17:26:39 jaffar@PRIMEDB> create table nomin1 as select * from user_objects where rownum = 1;

Table created.

Elapsed: 00:00:00.01
17:27:18 jaffar@PRIMEDB> analyze table nomin1 compute statistics;

Table analyzed.

Elapsed: 00:00:00.00
17:27:33 jaffar@PRIMEDB> select table_name,num_rows,blocks from user_tables where table_name = 'NOMIN1';

TABLE_NAME NUM_ROWS BLOCKS
------------------------------ ---------- ----------
NOMIN1 1 4

17:28:22 jaffar@PRIMEDB> insert into nomin1 select * from user_objects;

242 rows created.

Elapsed: 00:00:00.00
17:28:34 jaffar@PRIMEDB> commit;

Commit complete.

Elapsed: 00:00:00.00
17:28:35 jaffar@PRIMEDB> analyze table nomin1 compute statistics;

Table analyzed.

Elapsed: 00:00:00.00
17:28:39 jaffar@PRIMEDB> select table_name,num_rows,blocks from user_tables where table_name = 'NOMIN1';

TABLE_NAME NUM_ROWS BLOCKS
------------------------------ ---------- ----------
NOMIN1 243 8


Note: All 243 rows fit in 8 blocks.


Test case with MINIMIZE RECORDS_PER_BLOCK

17:28:47 jaffar@PRIMEDB> create table min01 as select * from user_objects where rownum = 1;

Table created.

Elapsed: 00:00:00.00
17:29:12 jaffar@PRIMEDB> analyze table min01 compute statistics;

Table analyzed.

Elapsed: 00:00:00.00
17:29:22 jaffar@PRIMEDB> select table_name,num_rows,blocks from user_tables where table_name = 'MIN01';

TABLE_NAME NUM_ROWS BLOCKS
------------------------------ ---------- ----------
MIN01 1 4


17:32:26 jaffar@PRIMEDB> alter table min01 minimize records_per_block;

Table altered.


17:32:53 jaffar@PRIMEDB> insert into min01 select * from user_objects;

243 rows created.

Elapsed: 00:00:00.09
17:33:12 jaffar@PRIMEDB> commit;

Commit complete.

Elapsed: 00:00:00.00
17:33:12 jaffar@PRIMEDB> analyze table min01 compute statistics;

Table analyzed.

Elapsed: 00:00:00.00
17:33:14 jaffar@PRIMEDB>
17:33:14 jaffar@PRIMEDB> select table_name,num_rows,blocks from user_tables where table_name = 'MIN01';

TABLE_NAME NUM_ROWS BLOCKS
------------------------------ ---------- ----------
MIN01 244 180

Note: The 244 rows now occupy 180 blocks.


References:

http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a76962/ch2.htm#103385

Setup used in the test cases:

Oracle Version : 9.2.0.7
db_block_size : 8192
Tablespace : LMT + auto extent size.

4.23.2006

Tuning queries with TKPROF output

Being a regular visitor to http://www.jlcomp.demon.co.uk, today I came across a good explanation/article by Mr. Jonathan Lewis about measuring/estimating query performance using TKPROF output.

http://www.jlcomp.demon.co.uk/tkprof_01.html

Happy reading,

Jaffar

4.01.2006

Auto Undo Management Performance Degradation

Auto Undo Management + Fast Ramp-up [Oracle 10g Rel.2, a new feature]


The Automatic Undo Management feature brings 10 undo segments online during database startup on most systems.

Created Undo Segment _SYSSMU1$
Created Undo Segment _SYSSMU2$
Created Undo Segment _SYSSMU3$
Created Undo Segment _SYSSMU4$
Created Undo Segment _SYSSMU5$
Created Undo Segment _SYSSMU6$
Created Undo Segment _SYSSMU7$
Created Undo Segment _SYSSMU8$
Created Undo Segment _SYSSMU9$
Created Undo Segment _SYSSMU10$

The strategy with Automatic Undo is to have no more than one transaction per undo segment if possible; therefore, Oracle keeps creating new segments as long as there is space in the undo tablespace.

There is performance degradation, and sometimes the database hangs, when undo segments come online; this is mostly seen on heavily loaded OLTP database systems. I experienced the problem on a database (9205) which is extremely OLTP, with 400+ transactions per second. Refer to Metalink note 301432.1.

Setting the event '10511 trace name context forever, level 2' at the instance level prevents undo segments from going offline.
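For instance (a sketch; events set this way take effect only after a restart):

# init.ora entry:
event="10511 trace name context forever, level 2"

-- or, when running with an spfile:
ALTER SYSTEM SET EVENT = '10511 trace name context forever, level 2' SCOPE = SPFILE;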

But the problem is, when the database is shut down and started up again, it again comes up with 10 undo segments online and repeats the same behavior, i.e. bringing more undo segments online as required.

Oracle 10g Release 2 comes with a solution for the undo segment issues discussed above: Fast Ramp-Up.

In previous versions, when the instance was restarted or when you switched undo tablespaces,
the ramp-up time for the system to grow the number of online undo segments could be several minutes.
Generally, this time delay was unacceptable.
After an instance startup or a switch of undo tablespaces in Oracle Database 10g, the database decides how many
undo segments to place online, based on existing data stored in AWR.

Conclusion:
I have not tested the Fast Ramp-Up feature yet; I am waiting for the opportunity to upgrade our database to Oracle 10g Release 2 to test it.

References :

Metalink Note : 301432.1 Performance Slowdown During Heavy Undo Segment Onlining
Metalink Note : 266159.1 SMON: ABOUT TO RECOVER UNDO SEGMENT %s messages in alert log

3.26.2006

Transitive Closure behaviour when the query_rewrite_enabled parameter is set to true/false

The article covers our experiences with the impact of various settings
of the parameter query_rewrite_enabled on sql performance whilst
migrating an Oracle database from AIX to HP.

All testing for this article was performed on Oracle 9.2.0.5.

TRANSITIVE CLOSURE WITH QUERY_REWRITE_ENABLED

Transitivity and Transitive Closure

Transitivity (or Transitive Closure) is the generation of additional
predicates that may open additional access methods. It is only performed by the CBO.

We have recently migrated one of our data warehouse databases (Oracle 9i Rel.2 (9205)) from AIX to HP-SUPERDOME.
The data warehouse is 1.8TB. It was migrated using export and import utilities.

The major problem we encountered post migration was a dramatic increase in the response time of one of our queries.

The query was taking 5 minutes on AIX. The same query on HP ran for 2 days.

Following is the query and its execution plan on AIX and HP.

SELECT max(as_of_date), max(cust_code), nvl(abs(sum(run_offs_sar)), 0),
nvl(abs(sum(inst_amt_sar)), 0), nvl(abs(sum(bal_sar)), 0)
FROM account a
WHERE acct_no = '00100100120'
AND as_of_date = (SELECT max(as_of_date)
FROM account b
WHERE b.acct_no = a.acct_no
AND b.run_offs_sar <> 0)

Execution Plan (AIX)
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=602 Card=1 Bytes=55)
1 0 SORT (AGGREGATE)
2 1 VIEW (Cost=602 Card=1 Bytes=55)
3 2 FILTER
4 3 SORT (GROUP BY) (Cost=602 Card=1 Bytes=75)
5 4 HASH JOIN (Cost=529 Card=167 Bytes=12525)
6 5 PARTITION RANGE (ALL)
7 6 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'A_ACCOUNT' (Cost=264 Card=167 Bytes=4509)
8 7 INDEX (SKIP SCAN) OF 'IDX_DT_ACCT_FLAG' (NON-UNIQUE) (Cost=189 Card=167)
9 5 PARTITION RANGE (ALL)
10 9 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'A_ACCOUNT' (Cost=264 Card=167 Bytes=8016)
11 10 INDEX (SKIP SCAN) OF 'IDX_DT_ACCT_FLAG' (NON-UNIQUE) (Cost=189 Card=167)

Execution Plan (HP)
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=630319908 Card=1 Bytes=55)
1 0 SORT (AGGREGATE)
2 1 VIEW (Cost=630319908 Card=1602 Bytes=88110)
3 2 FILTER
4 3 SORT (GROUP BY) (Cost=630319908 Card=1602 Bytes=107334)
5 4 MERGE JOIN (CARTESIAN) (Cost=630319180 Card=225809 Bytes=15129203)
6 5 PARTITION RANGE (ALL)
7 6 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'A_ACCOUNT' (Cost=700 Card=440 Bytes=10560)
8 7 INDEX (SKIP SCAN) OF 'IDX_DT_ACCT_FLAG' (NON-UNIQUE) (Cost=519 Card=513)
9 5 BUFFER (SORT) (Cost=630319208 Card=513 Bytes=22059)
10 9 PARTITION RANGE (ALL)
11 10 TABLE ACCESS (FULL) OF 'A_ACCOUNT' (Cost=1432542 Card=513 Bytes=22059)

I then turned on the ‘10053 trace event with level 1’ on both databases. Comparing the trace outputs,
I found that the only difference was in the setting of the ‘query_rewrite_enabled’ parameter.
On AIX this parameter was set to TRUE and on HP it was at the default setting of FALSE.

After setting this parameter to TRUE on HP, the query achieved the same response time as on AIX.
The execution plan now is identical on both systems.

Thanks to Jonathan Lewis for explaining Transitive Closure behavior:
<<
When you have a problem like this, use dbms_xplan()
to get the more detailed execution plan that you don't
get from autotrace (until 10.2). In this case I think you
would find that you have a join predicate that disappears
(to produce the cartesian join) because transitive closure
has been used to copy a filter predicate from one side of
the join to the other.
============
In the case where the constant predicate is an equality the
optimizer will generate a second constant predicate, but
lose the join predicate if query_rewrite_enabled = false, hence ...

On the other hand, if query_rewrite_enabled = true, then
8i and 9i will keep the join predicate even in this case, hence ...
============

>>

By setting the parameter ‘query_rewrite_enabled=FALSE’, then to ‘TRUE’, I ran the query with ‘explain plan for’
to produce the explain plan. Using the ‘select * from table(dbms_xplan.display) dual’, I got the following predicate
information used/created by the Optimizer:

with query_rewrite_enabled=FALSE

3 - filter("A"."AS_OF_DATE"=MAX("B"."AS_OF_DATE"))
7 - filter("B"."RUN_OFFS_SAR"<>0)
8 - access("B"."ACCT_NO"='00100100120')
filter("B"."ACCT_NO"='00100100120')
11 - filter("A"."ACCT_NO"='00100100120')

with query_rewrite_enabled=TRUE

3 - filter("A"."AS_OF_DATE"=MAX("B"."AS_OF_DATE"))
5 - access("B"."ACCT_NO"="A"."ACCT_NO") ====
7 - filter("B"."RUN_OFFS_SAR"<>0)
8 - access("B"."ACCT_NO"='00100100120')
filter("B"."ACCT_NO"='00100100120')
11 - access("A"."ACCT_NO"='00100100120')
filter("A"."ACCT_NO"='00100100120')


When the parameter was set to FALSE, the optimizer was dropping the following in the predicate information,
compared to the one where the parameter was set to TRUE.

‘access("B"."ACCT_NO"="A"."ACCT_NO")’ and ‘access("A"."ACCT_NO"='00100100120')’

In this example, I have used the two traditional tables, emp & dept for my testing.

Alter session set query_rewrite_enabled=FALSE;

EXPLAIN PLAN FOR
SELECT e.ename,d.dname
FROM emp e,dept d
WHERE d.deptno=10
AND e.deptno = d.deptno;

select * from table(dbms_xplan.display) dual;

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 24939 | 462K| 51 |
| 1 | NESTED LOOPS | | 24939 | 462K| 51 |
| 2 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 11 | 1 |
|* 3 | INDEX UNIQUE SCAN | SYS_C002803 | 1 | | |
|* 4 | TABLE ACCESS FULL | EMP | 24939 | 194K| 50 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------
3 - access("D"."DEPTNO"=10)
4 - filter("E"."DEPTNO"=10)

In the predicate information above, the optimizer doesn’t use predicate join e.deptno=d.deptno. Instead,
it substitutes the referenced column’s (d.deptno) value of 10 in the filter e.deptno=10.

Jonathan Lewis, in his book “CBO Fundamentals”, Chapter 6, Selectivity Issue, suggests some bypasses for
this problem, such as:

EXPLAIN PLAN FOR
SELECT e.ename,d.dname
FROM emp e,dept d
WHERE d.deptno=10
AND e.deptno = d.deptno
AND e.deptno = d.deptno;
select * from table(dbms_xplan.display) dual;

PLAN_TABLE_OUTPUT

-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8313 | 154K| 51 |
| 1 | NESTED LOOPS | | 8313 | 154K| 51 |
| 2 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 11 | 1 |
|* 3 | INDEX UNIQUE SCAN | SYS_C002803 | 1 | | |
|* 4 | TABLE ACCESS FULL | EMP | 8313 | 66504 | 50 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("D"."DEPTNO"=10)
4 - filter("E"."DEPTNO"=10 AND "E"."DEPTNO"="D"."DEPTNO")

The above output clearly shows the difference in the execution plan of the query, cost, rows and filter.
Some other workarounds are:

EXPLAIN PLAN FOR
SELECT e.ename,d.dname
FROM emp e,dept d
WHERE d.deptno = 10
AND e.deptno = d.deptno+0

EXPLAIN PLAN FOR
SELECT e.ename,d.dname
FROM emp e,dept d
WHERE d.deptno = 10
AND e.deptno = 10
AND e.deptno = d.deptno

Note:

To use dbms_xplan.display(), you need a plan_table from Oracle >= 9.2.0.6 and <= 10g Release 1.

From Release 10.2, AUTOTRACE uses the dbms_xplan.display format.

Conclusion:

If you see a similar performance degradation after an Oracle upgrade or migration, it can be very worthwhile to perform the above checks.

Acknowledgements:

I owe thanks to Jonathan Lewis for his kind support, and also to my colleague Wayne Phillips for his technical review and corrections.

References:

Cost-Based Oracle Fundamentals – by Jonathan Lewis
Metalink Note. 68979.1 - Transitivity and Transitive Closure

Mr. Syed Jaffar Hussain [sjaffarhussain@gmail.com] is a Sr. Oracle DBA at a private bank in Saudi Arabia.
He has worked in IT for 12 years, and as an Oracle DBA for 6 years.

2.09.2006

Oracle 10g Upgrade (OCP)

I have just cleared my OCP 10g upgrade (IZO-040) exam.
Couple of weeks ago, I have attend the 10g New Features for Database Administor course and prepared my self for the exam.
During my course, I have come across few of new features which we could use in our database.
I will discuss about them in my coming posts.

It's time to party.