Exadata X8M - World's Fastest Database Machine

Exadata X8M launched during OOW 2019 and was termed the world's fastest database machine. Let's walk through the new additions that make the X8M the world's fastest database machine.

Exadata X8M is the industry's first database machine integrated with Intel Optane DC persistent memory (read more about this) and 100 gigabit RDMA over Converged Ethernet (RoCE). This dramatically improves performance for all sorts of workloads, such as OLTP, analytics, IoT, and high-frequency trading, by eliminating storage access bottlenecks. Persistent memory with RoCE networking reduces IO latency significantly and boosts performance by 2.5X.

It uses RDMA directly from the database to access persistent memory in the smart storage servers, bypassing the entire OS, IO, and network software stacks. This delivers higher throughput with lower latency, and it also frees CPU resources on the storage servers to execute more Smart Scan queries for analytic workloads.

Its in-memory performance, combined with all the advantages of shared storage, benefits both analytics and OLTP workloads. Direct database access to shared persistent memory is the real game changer for applications that demand large amounts of data.

For more details, read the link below:



    Intermittent cellsrv crashes with ORA-07445 after upgrading to

    Exadata X6-2 full and half racks were patched recently with the Aug/2018 quarterly patch set. Soon afterwards, cellsrv started crashing intermittently with an ORA-07445 error.

    The following error is noticed in the cellsrv alert.log:

    ORA-07445: exception encountered: core dump [0000000000000000+0] [11] [0x000000000] [] [] [] 

    The above is registered as a bug that occurs when the cell storage software is patched to one of the affected versions. Therefore, if you are planning to patch your cell storage software with one of those versions, ensure you also apply patch 28181789 to avoid cellsrv intermittently crashing with the ORA-07445 error. Otherwise, upgrade the storage software to 18.1.5, which includes the fix.


    Customers may experience intermittent cellsrv crashes with ORA-07445: [0000000000000000+0] [11] [0x000000000] after upgrading the storage cell software to an affected version.


    Upgrade the storage cell software to a release that includes the fix.


    Bug 28181789 - ORA-07445: [0000000000000000+0] AFTER UPGRADING CELL TO


    Patch 28181789 is available for the affected storage software releases. Follow the README instructions to apply the patch.
    Alternatively, apply 18.1.5 or later, which includes the fix for bug 28181789.

    Exadata: Intermittent cellsrv crashes with ORA-07445: [0000000000000000+0] [11] [0x000000000] after upgrading (Doc ID 2421083.1)


    Oracle Exadata X8 key capabilities summary

    Below is the summary of some of the key benefits of Exadata X8 database machine, software and hardware:

    Extreme (unmatched) performance
    According to Oracle, the new X8 database machine is capable of delivering up to 60% faster throughput than earlier Exadata database machines. To put that in perspective, it can scan a 50GB database in under one second.

    Cost-effective extended storage
    Petabytes of cost-effective storage with optional software licensing. This can significantly reduce the storage cost.

    High memory
    Can accommodate up to 28.5 TB of system memory. Good for memory-heavy workloads, such as in-memory databases.

    Increased storage capacity
    In contrast to earlier models, X8 comes with a 40% increase in disk capacity. Each X8 EF (Extreme Flash) system comes with 50TB of raw flash capacity, while the X8 HC with the XT storage option comes with 160TB of raw disk capacity.

    High-performance connectivity
    X8 also comes with significant improvements in connectivity. It supports up to 400Gb of client connectivity over multiple 25Gb Ethernet links.

    OLTP Read/Write performance
    A full X8 rack can typically do 6.5 million random reads and 5.7 million random writes per second with 8K database IOs.

    Automated features
    In addition to the hardware improvements above, X8 incorporates autonomous database capabilities. With ML and AI capabilities, the databases are auto-tuned and auto-maintained.

    List of technical specifications:

    • Up to 912 CPU cores and 28.5 TB memory per rack for database processing
    • Up to 576 CPU cores per rack dedicated to SQL processing in storage
    • From 2 to 19 database servers per rack
    • From 3 to 18 storage servers per rack
    • Up to 920 TB of flash capacity (raw) per rack
    • Up to 3.0 PB of disk capacity (raw) per rack
    • Hybrid Columnar Compression often delivers 10X-15X compression ratios
    • 40 Gb/second (QDR) InfiniBand Network
    • Complete redundancy for high availability


    Monitoring & Troubleshooting Oracle Cloud at Customer

    The prime advantage of cloud at customer is delivering all the cloud benefits at your own data center, and Oracle Cloud at Customer provides exactly that. When Oracle Cloud at Customer is chosen, Oracle is responsible for installing, configuring, and managing the software and hardware required to run it. However, customers are responsible for monitoring and troubleshooting the resources instantiated on Oracle Cloud at Customer.

    Customers are required to understand the difference between system and user-space monitoring and the tools each requires. The Oracle Cloud at Customer subscription consists of the following components:

    • Hardware and Software
    • Control panel software
    • The Oracle Advanced Support Gateway (OASG)
    • The Oracle Cloud Service

    System monitoring vs User Space Monitoring

    Typically, Oracle Cloud at Customer is monitored at two levels:
    1. System
    2. User space
    Oracle monitors the system and the customer monitors the user space.

    The system or machine resources, such as the hardware, control panel, and cloud services on Oracle Cloud at Customer, are managed remotely by Oracle using the Oracle Advanced Support Gateway (OASG). The OASG is used by, and accessible to, Oracle-authorized personnel only.

    The user space components consist of the following:

    • Oracle Cloud accounts
    • VMs instances on IaaS or PaaS
    • Databases that are provisioned within the PaaS subscription
    • Applications (Oracle or any third-party)
    Oracle manages the following hardware and software components:
    • Ethernet switches
    • Power Supplies
    • Exadata Storage Servers
    • Hypervisor running on the physical servers
    Customers can assign administrators to manage cloud accounts. Customers are also free to use any external monitoring agents to monitor user-space components.


    Why is Oracle Cloud at Customer a good option?

    For most organizations, one of the major concerns in moving to the cloud is security. Though the cloud concept has been around for quite some time, a majority of customers are still concerned about putting their data in the cloud. To gain their confidence, and at the same time take full advantage of cloud technologies, various cloud vendors have started offering cloud-at-customer solutions. In this blog post, I am going to discuss the Oracle Cloud at Customer solution, its advantages, subscription model, etc.

    Oracle Cloud at Customer delivers the full advantages of cloud technologies at your data center. You subscribe to hardware and software together when you go for the cloud at customer option. Though Oracle does the initial setup, configuration, and day-to-day system management, you still have all the security and network advantages of your own data center.

    Typically, the cloud at customer option consists of the following:

    • The hardware required to run Cloud at customer
    • Control panel software
    • The Oracle Advanced Support Gateway
    • Oracle Cloud services
     As a customer, your responsibility involves managing the cloud account and subscribed services. At any time, you can check your account balance and your current Oracle Cloud at Customer service usage. You can also view your usage by region, by service, or by a specific time period.
    To check your account balance and usage, Oracle recommends that you sign in to your Oracle Cloud Account in an Oracle data region. From there, you can view your overall account usage and Universal Credits balance.

    In a nutshell, cloud at customer brings cloud solutions to your data center, where you can apply all the rules of your data center while taking full advantage of cloud solutions.


    Network design for Oracle Cloud Infrastructure

    Assuming you are planning to migrate your resources from the Oracle Cloud Infrastructure Compute Classic environment to Oracle Cloud Infrastructure, this blog post explains the details of network design for the Cloud Infrastructure environment. It's important to understand and map the network design and details of both environments.

    Cloud Infrastructure Compute Classic networking has an IP Networks and Shared Network model. On the other hand, Cloud Infrastructure has a network model of Virtual Cloud Networks (VCNs), subnets, and availability domains.

    Before migration, you must map the network resources between the environments. Source -> Target:
    Shared network -> VCN, IP Network -> IP Network, VPN -> IPSec VPN and Fast Connect classic -> FastConnect.

    Consider creating the network elements listed below in Oracle Cloud Infrastructure:

    • VCN and Subnet CIDR Prefixes
    • DNS Names 
    Use the procedure below to configure the cloud network for the Cloud Infrastructure environment:

    1. Create one or more VCNs.
    2. Create an Internet gateway and/or NAT gateway. An Internet gateway is a virtual router that allows resources in a public subnet direct access to the public Internet. A NAT gateway allows resources that don't have public IP addresses to access the Internet, without exposing those resources to incoming traffic from the Internet.
    3. Configure a service gateway, if required. A service gateway provides a path for private network traffic between your VCN and a public Oracle service such as Oracle Cloud Infrastructure Object Storage.
    4. Create one or more subnets in each VCN.
    5. Configure local peering gateways between VCNs, if required.
    6. Configure security lists, security rules, and route tables for each subnet.
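    The steps above can be sketched with the OCI command-line interface. This is a minimal, hypothetical example; the compartment OCID, CIDR blocks, and display names are placeholders you would replace with your own values:

    ```
    # Step 1: create a VCN
    oci network vcn create --compartment-id <compartment-ocid> \
        --cidr-block 10.0.0.0/16 --display-name migration-vcn

    # Step 2: create an Internet gateway attached to the VCN
    oci network internet-gateway create --compartment-id <compartment-ocid> \
        --vcn-id <vcn-ocid> --is-enabled true --display-name migration-igw

    # Step 4: create a subnet inside the VCN
    oci network subnet create --compartment-id <compartment-ocid> \
        --vcn-id <vcn-ocid> --cidr-block 10.0.1.0/24 --display-name migration-subnet
    ```

    Security lists and route tables (step 6) are managed with the corresponding oci network security-list and oci network route-table commands.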

    Migrating Oracle Cloud Infrastructure Classic Workloads to Oracle Cloud Infrastructure - Migration Tools

    If you are planning to migrate your resources from Oracle Cloud Infrastructure Classic to Oracle Cloud Infrastructure, Oracle provides a variety of tools to achieve this. In this blog post, I will discuss some of the tools that can be used to migrate Oracle Cloud Infrastructure Classic workload resources to Oracle Cloud Infrastructure.

    The tools below can be used to identify resources in Oracle Cloud Infrastructure Classic environments and migrate them to an Oracle Cloud Infrastructure tenancy. Using these tools, one can set up the required network and migrate VMs and block storage volumes to the target systems.

    Tools for migrating infrastructure resources: Compute, VMs, and Block Storage

    • Oracle Cloud Infrastructure Classic Discovery and Translation Tool: as the name suggests, this is a discovery tool that assists in discovering resources in your Cloud Infrastructure Classic environments, such as Compute Classic, Object Storage Classic, and Load Balancing Classic accounts. Additionally, it can report the items in the specified environment, the list of VMs in the source environment, and the networking information of the source system.
    • Oracle Cloud Infrastructure Classic VM and Block Storage Tool: This tool automates the process of migrating VMs and Block Storage over to the target environment. 
    • Oracle Cloud Infrastructure Classic Block Volume Backup and Restore Tool: This tool is used to migrate your remote snapshots of storage volumes as well as scheduled backups.

    Tools for migrating databases

    To migrate databases to Oracle Cloud Infrastructure, you can use the Oracle Cloud Infrastructure Classic Database Migration tool. This tool uses Oracle RMAN to back up and restore the database on the target system.

    Alternatively, an Oracle Data Guard solution can also be used to migrate single-instance or RAC databases to Oracle Cloud Infrastructure.

    Tools for migrating Object Storage

    • The rclone command can be used to migrate your object storage data if you don't use the Oracle Cloud Infrastructure Storage Software Appliance.
    • If Oracle Cloud Infrastructure Storage Software Appliance is used to store your object data, then you can migrate your data to your Oracle Cloud Infrastructure Object Storage account by using the Oracle Cloud Infrastructure Storage Gateway.
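    As a rough sketch, an rclone-based migration looks like the following, assuming you have already defined two remotes in your rclone configuration (the remote names classic and oci, and the container/bucket names, are placeholders):

    ```
    # Copy all objects from a Classic container to an OCI Object Storage bucket
    rclone copy classic:source-container oci:target-bucket --progress

    # Verify the transfer by comparing source and destination
    rclone check classic:source-container oci:target-bucket
    ```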


    How to Change the Remote Trail Location at the Source – GoldenGate 12c

    There was a requirement to modify the remote trail location because of a typo in the mount point. To change the remote trail location in GoldenGate, first verify the extract trail information using the command below.

    GGSCI (hie-p-ggate) 48> INFO EXTTRAIL

           Extract Trail: /golgengate/ggs_home/dirdat/test/ru
            Seqno Length: 6
       Flip Seqno Length: yes
                 Extract: DPMP1
                   Seqno: 0
                     RBA: 0
               File Size: 500M

           Extract Trail: D:\app\oracle\product\ogg_1\lt
            Seqno Length: 6
       Flip Seqno Length: yes
                 Extract: EXT1
                   Seqno: 0
                     RBA: 0
               File Size: 100M
    GGSCI (hie-p-ggate) 49>

    First, delete the current remote trail configuration.

    GGSCI (hie-p-ggate) 50> delete rmttrail /golgengate/ggs_home/dirdat/test/ru, extract dpmp1
    Deleting extract trail /golgengate/ggs_home/dirdat/test/ru for extract DPMP1

    After removing it, add the remote trail again with the new location.

    GGSCI (hie-p-ggate) 51> add rmttrail /goldengate/ggs_home/dirdat/test/ru, extract dpmp1
    RMTTRAIL added.

    After deleting and re-adding, we can now see the extract trail for the target with the new location.

    GGSCI (hie-p-ggate) 52> INFO EXTTRAIL

           Extract Trail: /goldengate/ggs_home/dirdat/test/ru
            Seqno Length: 6
       Flip Seqno Length: yes
                 Extract: DPMP1
                   Seqno: 0
                     RBA: 0
               File Size: 500M

           Extract Trail: D:\app\oracle\product\ogg_1\lt
            Seqno Length: 6
       Flip Seqno Length: yes
                 Extract: EXT1
                   Seqno: 0
                     RBA: 0
               File Size: 100M

    GGSCI (hie-p-ggate) 53>


    Oracle Database Upgrade made easy with AutoUpgrade utility

    Upgrading an Oracle database is undoubtedly a daunting task that requires careful study, planning, and execution to prevent any potential post-upgrade shortcomings. Since Oracle is determined to release a new version every year, at some point in time we will all need to upgrade databases more often than we used to.

    Thanks to the AutoUpgrade tool (utility), available via MOS Doc ID 2485457.1, the whole upgrade procedure is automated without much human intervention or input. For the latest AutoUpgrade version, always refer to the MOS note and download it from there. With 12.2 (DBJAN2019RU), 18.5, and 19.3, the AutoUpgrade utility is available by default under the Oracle home.

    AutoUpgrade is a command-line tool that can be used to upgrade one or many Oracle databases with one command and a single configuration file. The utility automates the upgrade process: pre-upgrade tasks, automated fix-ups, the database upgrade itself, and post-upgrade tasks. This saves a huge amount of time and money when upgrading hundreds of databases in any environment.
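    As an illustration, a minimal AutoUpgrade run looks like the following. The configuration values (log directory, Oracle home paths, and SID) are placeholders for your environment:

    ```
    # sample config file: db_upg.cfg
    global.autoupg_log_dir=/home/oracle/autoupgrade
    upg1.source_home=/u01/app/oracle/product/12.2.0/dbhome_1
    upg1.target_home=/u01/app/oracle/product/19.0.0/dbhome_1
    upg1.sid=ORCL

    # analyze first, then deploy the actual upgrade
    $ java -jar autoupgrade.jar -config db_upg.cfg -mode analyze
    $ java -jar autoupgrade.jar -config db_upg.cfg -mode deploy
    ```

    The analyze mode runs only the pre-upgrade checks, so it is safe to execute against a running database before committing to the deploy run.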

    I am pretty sure AutoUpgrade makes a DBA's life easier when it comes to Oracle database upgrades. Have fun and test the capabilities of the tool.

    MOS Doc ID : AutoUpgrade Tool (Doc ID 2485457.1)


    What's in Exadata System Software 19.2 and the Exadata X8 Database Machine?

    This blog post quickly scans through the new features of Exadata Database Machine software 19.2, as well as the hardware capacity changes in the Exadata X8 server.

    • Exadata Database Machine software 19.2.0 supports Exadata X8-2 and X8-8 hardware
    • Changes to IORM's flashcachesize and disk I/O limit attributes
    • To control the cost of Exadata storage, X8 introduces a new configuration, Exadata Extended (XT) Storage
    • The XT model comes with 14TB hard drives and HCC compression capability
    • The XT model doesn't have flash drives
    • This lower-cost storage option comes with one CPU, less memory, and without the core feature of SQL offloading
    • Exadata X8 server has the below hardware capacity per rack:
      • Limit of 912 CPU cores and 28.5 TB memory
      • 2-19 database servers
      • 3-18 cell storage
      • 920 TB of RAW flash capacity
      • 3 PB of RAW disk capacity




    Oracle 19c ASMCA interface

    This blog post walks through some very cool screenshots of the 19c ASMCA interface. I have no exposure to 18c ASMCA, but the landing page of 19c ASMCA is really cool. Here are the screenshots for you:


    ASM Instance Management

    Disk group Management

    DG attributes

    Root setup


    What's new in 19c - Part III (Data Guard)

    Business continuity (disaster recovery) has been a key aspect of every business for a long time now. Oracle Data Guard is one of the best solutions for business-critical applications running on Oracle databases. Since its inception, a lot has been enhanced in standby database functionality to meet market demand.

    This blog post is dedicated to and focused on some key enhancements introduced in 19c Data Guard. Below is my hand-picked list of new features that really got my attention:

    Fast-Start-Failover (FSFO) in Observer-only Mode

    Configuring FSFO has been a big debate in the Oracle community for quite some time. Some recommend it and some are not in favor of enabling it. Personally, I was not in favor of this feature, though the decision depends a lot on various factors.

    With 19c, FSFO can be configured in observe-only mode (validate without real action), which allows DBAs to test an automatic failover configuration without actually causing any damage to the production databases. When FSFO is configured in observer-only mode, no actual changes are made to the Data Guard Broker settings, and no application changes are required. When the conditions for FSFO are met, the Data Guard Broker adds messages to the observer log indicating that FSFO would have been initiated. This makes it easier to justify using FSFO to reduce the recovery time for a real failover.

    To enable FSFO in observer mode, use the below syntax:
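    Based on the 19c Data Guard Broker documentation, observe-only mode is enabled from DGMGRL as follows:

    ```
    DGMGRL> ENABLE FAST_START FAILOVER OBSERVE ONLY;
    ```

    Disabling it with DISABLE FAST_START FAILOVER returns the configuration to its previous state, since no broker settings were actually changed.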


    Automatic Flashback Standby

    Prior to Oracle 19c, when a flashback database or point-in-time operation was performed on a primary database, the standby database had to be brought to the same point in time as its primary with a manual procedure (for example, FLASHBACK STANDBY DATABASE TO SCN resetlogs_change# - 2;). This functionality is automated in 19c: the standby database is flashed back automatically whenever a flashback database operation is performed on the primary. By automating this process, it drastically reduces time and effort and improves RTO.

    So, when any flashback database or point-in-time recovery operation is performed on the primary database, the standby automatically follows the primary, and the MRP on the standby database performs the following actions:

    • detects the new incarnation
    • flashes back the standby to the same point in time as its primary
    • restarts standby recovery and moves the standby to the new branch of redo
    ** Note: Flashback operation success is subject to flashback data availability

    The automatic flashback standby operation takes place when the standby database is in MOUNT state. If the standby database is open in READ ONLY mode, error messages are recorded in the alert log, and the recovery process (MRP) automatically executes the flashback operation the next time the standby database is restarted.

    DML Operations on Active Data Guard

    Performing DML operations on Active Data Guard was something long awaited. I remember some applications that need to log an entry into the database whenever they connect. This prevented many applications from being used with Data Guard, especially for testing.

    So, it's finally here with Oracle 19c. Oracle doesn't recommend heavy DML operations on an active standby database, given the expected performance impact on the primary database, but it's good for applications that are mostly read-only with occasional DML executions.

    To configure DML redirection, set the ADG_REDIRECT_DML initialization parameter to TRUE or execute the following SQL statement:
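    For example, following the 19c documentation, the parameter can be set system-wide or enabled for just the current session:

    ```
    SQL> ALTER SYSTEM SET ADG_REDIRECT_DML=TRUE SCOPE=BOTH;

    -- or, for the current session only:
    SQL> ALTER SESSION ENABLE ADG_REDIRECT_DML;
    ```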


    Subsequently, perform the DML operations:

    SQL> INSERT INTO table VALUES (.......);

    ** The setting can be made at the database or session level.

    Upon setting the above configuration, DML operations on the active standby database are transparently redirected to the primary database, including DML operations that are part of PL/SQL blocks. The Active Data Guard session waits until the corresponding changes are shipped to and applied on the ADG standby database, maintaining read consistency.

    To redirect PL/SQL operations from an active standby database to the primary database, configure the following:
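    Per the 19c documentation, PL/SQL redirection is enabled at the session level:

    ```
    SQL> ALTER SESSION ENABLE ADG_REDIRECT_PLSQL;
    ```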


    Subsequently, perform the PL/SQL operations:
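    For instance, a simple PL/SQL block issued on the standby is then redirected to the primary; the table name below is hypothetical:

    ```
    SQL> BEGIN
      2    INSERT INTO app_log (msg) VALUES ('connected');
      3    COMMIT;
      4  END;
      5  /
    ```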


    Image source (https://blogs.oracle.com/oracle-database/oracle-database-19c-now-available-on-oracle-exadata)

    Automatic outage resolution with Data Guard

    Delayed redo transport and gap resolution in Data Guard are commonly caused by network hangs, disconnects, and disk I/O issues. With the new DATA_GUARD_MAX_IO_TIME and DATA_GUARD_MAX_LONGIO_TIME parameters in 19c, DBAs can tune how long Data Guard waits before flagging these conditions, based on their network and disk I/O behavior. Data Guard has an internal mechanism to detect these hung processes and terminate them, allowing normal outage resolution to occur.

    Here is the list of new parameters for Data Guard:
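    The two parameters called out above can be set with ALTER SYSTEM; the values below are illustrative only:

    ```
    SQL> ALTER SYSTEM SET DATA_GUARD_MAX_IO_TIME=120 SCOPE=BOTH;
    SQL> ALTER SYSTEM SET DATA_GUARD_MAX_LONGIO_TIME=120 SCOPE=BOTH;
    ```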

    Stay tuned for more 19c new features.


    What's new in 19c - Part II (Automatic Storage Management - ASM)

    There are not too many features to talk about in 19c ASM. Below are my hand-picked 19c ASM features for this blog post.

    Automatic block corruption recovery 

    With Oracle 19c, the CONTENT.CHECK disk group attribute is set to true by default on Exadata and in cloud environments. During a data copy operation, if the Oracle ASM relocation process detects block corruption, it performs automatic block corruption recovery by replacing the corrupted blocks with an uncorrupted mirror copy, if one is available.

    Parity Protected Files Support

    The level of data mirroring is controlled through the ASM disk group REDUNDANCY attribute. When two- or three-way ASM mirroring is configured for a disk group that stores write-once files, like archived logs and backup sets, a great deal of space is wasted. To reduce the storage overhead for such file types, ASM now introduces the PARITY value for the REDUNDANCY file type property. The PARITY value specifies single parity for redundancy. Set the REDUNDANCY setting to PARITY to enable this feature.

    The redundancy of a file can be modified after its creation. When the property is changed from HIGH, NORMAL, or UNPROTECTED to PARITY, only files created after the change are affected; existing files are not impacted.

    A few enhancements have also been made to Oracle ACFS, Oracle ADVM, and ACFS replication. Refer to the 19c ASM new features documentation for more details.

    ** Leaf nodes are de-supported as part of Oracle Flex Cluster architecture from 19c.


    What's new in 19c - Part I (Grid Infrastructure)

    Every new Oracle release comes with a bundle of new features and enhancements. Though not every new feature is really needed by everyone, there are a few that are worth considering. As part of this 19c new features article series, this post is about the new features introduced in Grid Infrastructure. It focuses on some really useful GI features, along with the deprecated and de-supported features in 19.2.

    Dry-run to validate Cluster upgrade readiness

    Whether it's a new installation or an upgrade from a previous version to the latest one, system readiness is the key factor for success. With 19c, a cluster upgrade can have a dry run to verify system readiness without actually performing the upgrade. To determine whether the system is ready, run the upgrade in dry-run mode. During the dry-run upgrade, you can click the Help button on any installer page to understand what is being done or asked.

    Use the command below from the 19c binaries home to run the cluster upgrade in Dry-run mode:

    $ gridSetup.sh -dryRunForUpgrade

    Once you run through all the interactive screens of the dry run, check the gridSetupActions<timestamp>.log file for errors and fix them before the real upgrade run.

    Multiple ASMBn

    It is a common practice to have multiple disk groups in a RAC environment. It is also possible to have some disk groups in the MOUNT state and some in the DISMOUNT state on a database node. However, when a database instance on a node tries to start up against a DISMOUNTED disk group, it throws errors.

    The multiple ASMB project allows the database to use disk groups across multiple ASM instances simultaneously. This enhancement adds high availability to the RAC stack by allowing the database to use multiple disk groups even if a given ASM instance happens to have some disk groups DISMOUNTED.

    AddNode and Cloning with Installer Wizard

    Adding a new node and installing a gold image (cloning) are simplified and made easy in 19c. Both are now available directly from the installer wizard; you no longer need to use the addnode.sh and clone.pl scripts. These scripts will be deprecated in upcoming releases.

    Run ./gridSetup.sh to start the installer.

    In an upcoming blog post, I will discuss the 19c ASM features.


    Oracle 19c and my favorite list

    Today (14-Feb-2019) Oracle officially released the 19c documentation and Oracle Database 19c for Exadata through the eDelivery channel. Since the news is out, the Oracle community has been busy talking about 19c availability and sharing articles about it.

    I spent a little time scanning through some of the really useful 19c features for DBAs, and here is my list:
    • Availability
      • Simplified DB parameter management in Broker Configuration
      • Flashback Standby DB when Primary DB is flashed back
      • Active Data Guard DML Redirection
      • New parameter for tuning automatic outage resolution with DG
    • Data Warehousing
      • SQL Diagnostic and Repair Enhancements
      • Automatic Indexing 
      • Performance Enhancement for in-memory external tables
      • Real-Time Statistics
      • High Frequency Automatic Optimizer Statistics Collection
    • Automated install, config and patch
    • Automated upgrade, migration and utilities
    • Performance
      • SQL Quarantine 
      • Real-time monitoring for Developers
      • Workload capture and Replay in a PDB
    • RAC and Grid
      • Automated Transaction Draining for Oracle Grid Infrastructure Upgrades
      • Zero-downtime Oracle Grid Infrastructure Patching

    I will start writing a series of articles about my favorite Oracle 19c features. Stay tuned.



    ORA-600 [ossnet_assign_msgid_1] on Exadata

    On an Exadata system running Oracle 12.1, a MERGE statement with parallelism was frequently failing with the ORA error below:

                       ORA-12805: parallel query server died unexpectedly

    A quick look at the alert.log revealed an ORA-600:

    ORA-00600: internal error code, arguments: [ossnet_assign_msgid_1], [],[ ] 

    The best and easiest way to diagnose any ORA-600 error is to use the ORA-600 lookup tool available on MOS.

    In our case, with a large hash join, the following MOS note helped fix the issue:

    On Exadata Systems large hash joins can fail with ORA-600 [OSSNET_ASSIGN_MSGID_1] (Doc ID 2254344.1)

    On Exadata systems, large hash joins can fail with ORA-600 [OSSNET_ASSIGN_MSGID_1], and the root cause is often a too-small default value for _smm_auto_min_io_size and _smm_auto_max_io_size.

    The workaround is to set the following underscore (_) parameters:

    _smm_auto_max_io_size = 2048
    _smm_auto_min_io_size = 256
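    As a sketch of how the MOS note's workaround would be applied, the parameters can be set in the spfile and take effect after an instance restart (always confirm with Oracle Support before setting underscore parameters):

    ```
    SQL> ALTER SYSTEM SET "_smm_auto_max_io_size"=2048 SCOPE=SPFILE;
    SQL> ALTER SYSTEM SET "_smm_auto_min_io_size"=256 SCOPE=SPFILE;
    ```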

    In some cases, the MOS note below helps fix the ORA-600 [ossnet_assign_msgid_1] error.

    ORA-600 [ossnet_assign_msgid_1] (Doc ID 1522389.1)


    Automated Cell Maintenance

    One of the key activities for a DBA is to maintain the database servers and Oracle environments well. In a complex Oracle environment, managing and maintaining file system space plays a very crucial role. When a file system (FS) where Oracle binaries are stored runs out of space, it can lead to all sorts of consequences, and in some situations it can cause a service interruption.

    One of the routine activities for a DBA on a very busy system is to maintain FS space by regularly purging or cleaning old log and trace files. Some DBAs perform these activities through a scheduled job. However, Oracle has introduced automatic maintenance jobs. For example, in a cluster environment, the logs are maintained in terms of size as well as retention of historical copies. On Exadata too, Oracle has automated cell maintenance.

    In this blog post, we will run through some useful information about automated cell maintenance activities.

    The Management Server (MS) component carries the responsibility for automatic space management. For example, when there is a shortage of space in the ADR, the MS deletes files according to the default policy below:

    • Files older than 7 days in the ADR and LOG_HOME directories are deleted
    • alert.log is renamed once it reaches 10MB, and the historical copies are kept for 7 days
    • Upon 80% utilization, the MS triggers the deletion policy for the / (root) and /var/log/oracle file systems
    • Similarly, the deletion policy is activated when the /opt file system reaches 90% utilization
    • Alerts are cleared based on the criteria and policies

    The default retention policy is set to 7 days. If you want to modify the default behavior, change the metricHistoryDays and diagHistoryDays attributes with the ALTER CELL command.
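    For example, to raise the retention from the 7-day default to 14 days on a cell, run the following from CellCLI on the storage server (the value 14 is illustrative):

    ```
    CellCLI> ALTER CELL metricHistoryDays=14, diagHistoryDays=14
    ```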

    Read the Oracle documentation for more insights into the automatic cell maintenance tasks.



    Automated Cloud Scale Performance Monitoring capabilities with Exadata Software version 19.1

    Starting with v12.2, the multiple components of Oracle Autonomous Health Framework (AHF) work together 24x7, autonomously keeping the database system healthy and reducing human intervention and reaction time by utilizing machine learning technologies.

    There is no doubt that Exadata Database Machine delivers extreme performance for all sorts of workloads. However, diagnosing critical performance issues still requires some manual work and human intervention to identify root causes. This blog post highlights a new autonomous performance enhancement introduced with Exadata System Software 19.1.

    Exadata software release 19.1 comes with automated, cloud-scale performance monitoring for a wide range of subsystems, such as CPU, memory, file system, I/O, and network. This feature is built from a combination of years of real-world performance triage experience at Oracle Support, industry best practices, and artificial intelligence (AI) technologies. It simplifies root cause analysis: it can automatically detect critical runtime performance issues and figure out the root cause without human intervention.

    Taking a few real-world scenarios: as DBAs, we have all come across occasions where a spinning process on a database server eats up all the system resources, causing extremely poor database performance. With this enhancement, Exadata System Software automatically identifies the exact process that is spinning and generates an alert with a root cause analysis. Another typical example is automatic detection of misconfigured huge pages settings on the server, with alerts sent accordingly. We know how badly a server can perform when the huge pages setting is not right.

    No additional configuration or special skill set is required for this. The Management Server (MS) is responsible for performing these activities. All you need is Exadata software version 19.1 or higher, with alerts configured on the servers.

    For more details, read the oracle documentation.


    Stay tuned and hungry for more Exadata software 19.1 new features.