Shrink space on a 65G LOB segment running for more than 5 days

A shrink space operation on a 65G LOB segment in one of our business-critical two-instance RAC databases (10gR2) has been running for more than five days now. As usual, when I opened a TAR with Oracle Support, they coolly informed me: look mate, you are hitting a bug (5768710), and you either need to upgrade the database to 11g or apply a patch (8598989).

The LOB segment contains 8,128,896 blocks across 1,183 extents and is around 65GB in size. We had actually truncated the source table and then tried shrinking this LOB segment in order to release the free space.
I tested in the development and UAT environments (single-instance databases) and it took almost a day and a half to complete the task. However, the following statement got stuck on production:

ALTER TABLE tablename MODIFY LOB (column_name) (SHRINK SPACE) NOLOGGING PARALLEL 5;

Due to the bug, I am unable to predict how long this operation will take, as no information is listed in the v$session_longops dynamic view either. Unfortunately, Oracle Support can't confirm when the operation will finish. The only saving grace for me is that the other sessions working on the table run fine: there is neither any performance impact nor any excessive redo generation.
After analyzing my trace files (10046 and oradebug short_stack), Oracle Support confirmed that the operation is not hung; it is progressing.
Given that there is no impact on the database, they requested that I leave it running, since killing the session may trigger SMON recovery on this table.
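Since v$session_longops shows nothing, a rough way to confirm the shrink is still doing work is to sample the session's undo usage a few minutes apart; if the counters keep growing between samples, the operation is progressing. A sketch (the :sid bind is just a placeholder for the shrinking session's SID):

```sql
-- Sample this a few minutes apart: growing used_ublk / used_urec
-- means the shrink is still generating undo, i.e. progressing.
SELECT s.sid,
       s.event,
       t.used_ublk AS undo_blocks,
       t.used_urec AS undo_records
FROM   v$session      s
       JOIN v$transaction t ON t.ses_addr = s.saddr
WHERE  s.sid = :sid;
```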

My boss says that whatever I handle ends up hitting a bug; one of the rare unfortunate days of my life.

Happy reading,



Clusterized (Cluster Aware) Commands

Unlike previous Oracle versions, beginning with Oracle 11gR2 the Oracle Clusterware stack on all nodes in a cluster can be managed from any node using the following clusterized (cluster-aware) commands:

$ crsctl check cluster -all - to check the status of the Clusterware on all nodes
$ crsctl stop cluster -all - to stop the Oracle Clusterware stack on all nodes
$ crsctl start cluster -all - to start the Oracle Clusterware stack on all nodes 


Voting Disk Backup Procedure Changed in Oracle 11g Release 2

Surely we are going to see many posts on Oracle 11gR2 new features from many people, and here is my first one.

As part of our forthcoming book 'Oracle 11g Clusterware' from Packt Publishing, I have done an Oracle Clusterware upgrade from 11gR1 to 11gR2, and it was a good learning experience.

Voting Disk Backup Procedure Change
In prior releases, backing up the voting disks using a dd command was a required post-installation task. With Oracle Clusterware release 11.2 and later, backing up and restoring a voting disk using the dd command is not supported.

Backing up voting disks manually is no longer required, as voting disks are backed up automatically in the OCR as part of any configuration change and voting disk data is automatically restored to any added voting disks.
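A quick way to see this in action on an 11gR2 cluster (the commands are standard Clusterware tools; the disk path in the last line is only a hypothetical example):

```shell
# List the current voting disks (no manual dd backup required in 11gR2)
crsctl query css votedisk

# Voting disk data is kept in the OCR, so the automatic OCR backups
# cover it; list them with:
ocrconfig -showbackup

# Adding a voting disk repopulates it automatically from the OCR
# (run as root; the path below is only an example)
# crsctl add css votedisk /dev/raw/raw5
```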


Oracle 11g Scheduler book by Packt Publishing

Mr. Ronald Rood has written a book on Oracle Scheduler, titled 'Mastering Oracle Scheduler in Oracle 11g Databases', and the book will be out sometime in May.

When the publishers were seeking technical reviewers for this book, I expressed my interest in being part of the reviewing team. My initial impression of this book was not a satisfactory one; I asked myself, a book on the Scheduler? Believe me, after reviewing the book I realized the real strengths of Oracle Scheduler and how fully it can be utilized in complex scenarios. Now I am really happy to be part of the reviewing team for this book. I learned a few new Oracle 11g Scheduler enhancements and plan to implement some of the features that suit our requirements.

The book is due this May; for more details and pre-order information, you can use the following link:


Oracle to acquire SUN?

I have seen a couple of mails today on the freelists group about the news that 'Oracle is going to acquire Sun'. Umm.. quite an interesting deal; one has to wait and see what the reaction of HP will be, especially after their (Oracle and HP) recent Exadata innovation, and what IBM's reaction to this deal is.

Happy reading



Node slave processes (pz99) on 10g RAC

We have observed strange slave processes, 'pz99', on our RAC databases for every connection to the database, and we were wondering what type of processes they are. Thank god, our doubt was cleared without much hassle. According to ML Doc 734139.1: 'From Oracle 10g onwards you will find on every node slave processes like pz99. These are the processes that query the gv$ views.'
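In other words, any cross-instance query fans out to these PZ slaves on each node, while a plain v$ query does not. For example:

```sql
-- Querying a gv$ view makes each instance service the request
-- through its PZ slave processes (e.g. pz99); an equivalent
-- local v$session query involves no such slaves.
SELECT inst_id, COUNT(*) AS sessions
FROM   gv$session
GROUP  BY inst_id;
```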

Happy reading



Oracle DB & RAC 10gR2 on IBM AIX : Tips and Considerations by Erik Salander

While browsing this afternoon, I came across the paper 'Oracle DB & RAC 10gR2 on IBM AIX: Tips and Considerations' by Erik Salander. Well, if you have any plans to set up RAC on AIX, I recommend you have a look at this paper.

If you want the abstract of this paper before you download it, here it is:

This paper consolidates the information that needs to be considered when implementing an Oracle Database 10gR2 or Oracle RAC (Real Application Clusters) 10gR2 on AIX.

This paper is written to a level of detail that assumes readers have an in-depth knowledge of AIX, Oracle Database 10g, Oracle RAC 10g and the related products.

Happy Reading,



How to start up RAC database services automatically

As of Oracle 10gR2 (any patch set), when a RAC database is started with 'srvctl start database -d DBNAME', unfortunately the associated database services do not start up automatically. Therefore, services must be started manually after the database startup. This can be painful in some situations, for example when you have many databases running on RAC with a daily or weekly cold backup schedule. When the services are not up, clients are unable to connect to the respective databases if they use the service name to connect.

A possible workaround is to write FAN server-side callouts. You may download the 'Start Services on Instance Up' Perl scripts from http://www.oracle.com/technology/sample_code/products/rac/index.html. It is a sample Perl script which can be used as a FAN server-side callout to start services when an instance-up event is received on the node. You need to put the scripts under $CRS_HOME/racg/usrco. The accompanying PDF explains how to deploy the scripts, set permissions and so on.
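To give a feel for how such a callout works, here is a minimal sketch in shell rather than Perl. It is hypothetical and deliberately simplified: real FAN event payloads carry more name=value fields than the two parsed here, so treat this only as an illustration of the mechanism, not as the Oracle-supplied script.

```shell
#!/bin/sh
# Hypothetical FAN server-side callout sketch: start a database's
# services when an INSTANCE "up" event arrives. Place it under
# $CRS_HOME/racg/usrco and make it executable; Clusterware passes
# the event type and name=value pairs as command-line arguments.

EVENT_TYPE=$1          # e.g. INSTANCE
shift

DB="" STATUS=""
for ARG in "$@"; do
  case "$ARG" in
    database=*) DB=${ARG#database=} ;;
    status=*)   STATUS=${ARG#status=} ;;
  esac
done

if [ "$EVENT_TYPE" = "INSTANCE" ] && [ "$STATUS" = "up" ] && [ -n "$DB" ]; then
  srvctl start service -d "$DB"
fi
```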

In 11g, when services are already started on some remote nodes, starting the instance on the local node will automatically start the services on it.


ML 416178.1: After Srvctl Start Database, Database Services Will Not Start Up Automatically

Happy reading,



99% memory consumption on idle RAC nodes

We have observed that all of our RAC nodes (an 8-node setup) on HP-UX Superdome (Itanium 2) servers are consuming almost 99% of memory round the clock. There are no databases running; just the Oracle Clusterware is installed on these nodes. The plan is that we will soon be using this setup for production.

We initially thought there could be a memory leak causing the high memory consumption. Upon opening a TAR for this issue, Oracle Support advised that the memory consumption by the Clusterware is pretty normal and that none of the Oracle Clusterware processes show high memory usage. Shutting down the cluster in order to test the memory consumption didn't help either, as consumption stayed the same.

Then we thought of investigating from the OS point of view, and we found that the kernel parameter 'dbc_max_pct' was set to its default value, i.e. 50%. The general recommendation for most environments is 10%. Thank god, it is a dynamic parameter which can be modified without a reboot. Upon resetting the value to 10%, the memory consumption dropped drastically from 99% to 35% on all the nodes.
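For reference, on HP-UX 11i v2 the check and change look roughly like this (a sketch; on older releases the tool is kmtune rather than kctune, so verify against your OS version first):

```shell
# Show the current buffer-cache ceiling (default is 50%)
kctune dbc_max_pct

# Lower it to the commonly recommended 10% -- dbc_max_pct is
# dynamic, so no reboot is required
kctune dbc_max_pct=10
```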

I have also found a very good Metalink note (68105.1: Commonly Misconfigured HP-UX Kernel Parameters) which explains the most commonly misconfigured HP-UX kernel parameters. If you are running any databases on HP-UX, I strongly recommend you have a look at this note and make changes accordingly.

Happy Reading,



Tough time ahead (going to have fun with RAC)

It is going to be a tough weekend for me, and at the same time a very challenging one too. Well, this weekend, in simple terms, I can say that I am going to have fun with RAC.

We currently have a 4-node development and an 8-node production RAC environment at our premises. Development is currently running on an Oracle 10gR2 patch set which we are planning to upgrade. My god, there are over 20 databases running across these 4 nodes. Uff.. upgrading the cluster, ASM and all those databases to the newer patch set.. tough time!

Along with the upgrade activity, we also plan the following at the same time:

Changing the IPs of public and interconnect on production.
Moving a RAC database from development to production.

Apart from the above, there are a couple of small changes to be done.

I know that making such a big change in one go is not recommended and is a very tough task too. Thankfully, we are not really in production yet, so we have ample downtime to carry out and complete all these tasks.

Wish me the best of luck, and I will blog about any issues that I face during this activity.

Happy reading,