1.31.2009

99% memory consumption on idle RAC nodes

We observed that all RAC nodes in our 8-node setup on HP-UX Superdome (Itanium II) servers were consuming almost 99% of memory round the clock. There were no databases running; only Oracle Clusterware was installed on these nodes. Our plan is to use this setup for production soon.

We initially thought a memory leak could be causing the high memory consumption. After we opened a TAR for this issue, Oracle Support advised that the memory consumption by the Clusterware was quite normal and that none of the Clusterware processes were using excessive memory. Shutting down the cluster to test didn't help either; the memory consumption remained the same.

We then decided to investigate from the OS point of view and found that the kernel parameter 'dbc_max_pct', which caps the percentage of physical memory the dynamic buffer cache may use, was set to its default value of 50%. The general recommendation for most environments is 10%. Thank god, it is a dynamic parameter that can be modified without a reboot. After resetting the value to 10%, the memory consumption dropped drastically from 99% to 35% on all the nodes.
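For anyone who wants to check this on their own boxes, here is a minimal sketch using kctune, which is available on HP-UX 11i v2 and later (older releases use kmtune for the same purpose); the exact syntax may vary slightly by OS version:

    # Show the current value of the dynamic buffer cache ceiling
    kctune dbc_max_pct

    # Lower the ceiling to 10% (dynamic tunable; no reboot required)
    kctune dbc_max_pct=10

Since the parameter is dynamic, the freed-up buffer cache memory should be released back to the system shortly after the change.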

I have also found a very good Metalink note (68105.1: Commonly Misconfigured HP-UX Kernel Parameters) which explains the most commonly misconfigured HP-UX kernel parameters. Dears, if you are running any databases on HP-UX, I strongly recommend you have a look at this note and make changes accordingly.

Happy Reading,

Jaffar

1.20.2009

Tough time ahead (going to have fun with RAC)

It is going to be a tough weekend for me, and at the same time a very challenging one. In simple terms, I can say that this weekend I am going to have fun with RAC.

We currently have a 4-node development and an 8-node production RAC environment on our premises. The development environment is running Oracle 10g R2 with patchset 10.2.0.3, which we are planning to upgrade to 10.2.0.4. My god, there are over 20 databases running across these 4 nodes. Uff.. upgrading the clusterware, ASM, and all those databases to the 10.2.0.4 patchset.. a tough time!

Along with the upgrade activity, we also plan to do the following at the same time:

Changing the public and interconnect IPs on production (see the sketch after this list).
Moving a RAC database from development to production.
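To give an idea of what the IP change involves, below is a rough sketch of the 10g R2 commands for re-registering the interconnect subnet and the public VIP; the interface names, node name, subnets, and addresses are placeholders, not our real values:

    # List the interfaces currently registered with the clusterware
    oifcfg getif

    # Re-register the cluster interconnect on its new subnet
    # (lan2 and 10.10.10.0 are placeholder values)
    oifcfg delif -global lan2
    oifcfg setif -global lan2/10.10.10.0:cluster_interconnect

    # Point a node's VIP at the new public address/subnet
    # (node1, the address, netmask and lan0 are placeholder values)
    srvctl modify nodeapps -n node1 -A 192.168.1.101/255.255.255.0/lan0

The nodeapps change has to be repeated for each node, with the VIP resources stopped while the modification is made.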

Apart from the above, there are a couple of small changes to be done.

I know that making so many big changes in one go is not recommended, and it is a very tough task too. Thankfully, we are not really in production yet, so we have ample downtime to carry out and complete all these tasks.

Wish me the best of luck; I will blog about any issues I face during this activity.

Happy reading,

Jaffar