We have been dealing with strange ASM behavior over the past few months across all ASM instances in our multiple Oracle 11gR2 (11.2.0.2) cluster environments on HP-UX 11.31. Even simple, routine ASM tasks, such as adding a new disk to a disk group, mounting/dismounting a disk group, or querying for CANDIDATE ASM disks, were taking a minimum of 20 minutes and sometimes never completing at all. This caused significant performance degradation across all database instances in the cluster, and most of the databases suffered from 'Disk file operations I/O' waits, among other consequences.
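For context, these are nothing more exotic than the standard ASM commands below (the disk group name, disk path and rebalance power are just illustrative placeholders, not our actual values):

SQL> ALTER DISKGROUP DATA ADD DISK '/dev/rdisk/disk42' REBALANCE POWER 4;
SQL> ALTER DISKGROUP DATA DISMOUNT;
SQL> ALTER DISKGROUP DATA MOUNT;
SQL> SELECT path, header_status FROM v$asm_disk WHERE header_status = 'CANDIDATE';

Run from the ASM instance, each of these would either sit for 20+ minutes or simply never return.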
I must say, Oracle Support did make a few attempts to address the issue, suggesting that we increase the ASM instance SGA, enable async I/O at the OS level, reduce the number of disks, etc., all of them unsuccessful. Finally, they logged bug 18223021 for our issue, and we are yet to receive the fix.
Below are a few of the consequences of this behavior that we are confronting:
- Oracle 10g database instances running on the node where a new ASM disk was being added to an existing disk group suffered control file locking and database hang issues
- We have to ensure the disk group to which a new disk is being added is not mounted on the other ASM instances (on the other nodes) before starting; a sample check is shown after this list
- When adding a disk hung indefinitely on one of the ASM instances, the only way to speed things up and complete the procedure was to shut down that instance, dismounting the ASM disk group in the process
- Most databases are suffering from 'Disk file operations I/O' waits
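To illustrate the check mentioned above, a quick cross-instance look at the disk group state and at the wait event can be done with simple queries along these lines (the disk group name 'DATA' is again just a placeholder):

-- confirm the disk group is mounted only on the local ASM instance
SQL> SELECT i.inst_id, i.instance_name, g.state
       FROM gv$asm_diskgroup g JOIN gv$instance i ON g.inst_id = i.inst_id
      WHERE g.name = 'DATA';

-- from a database instance: sessions currently stuck on the wait event
SQL> SELECT inst_id, event, COUNT(*)
       FROM gv$session
      WHERE event = 'Disk file operations I/O'
      GROUP BY inst_id, event;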
If you have MOS access, you may refer to bug 18223021 for more details.
Stay tuned for the fix and solution...