Exadata Storage Expansion
Most of us know the capabilities that the Exadata Database Machine delivers. It is a known fact that Exadata comes in fixed rack sizes: 1/8 rack (2 DB nodes, 3 cells), quarter rack (2 DB nodes, 3 cells), half rack (4 DB nodes, 7 cells) and full rack (8 DB nodes, 14 cells). When you want to expand the capacity, it must be in fixed increments as well: 1/8 to quarter, quarter to half, and half to full.
With the Exadata X5 Elastic configuration, one can also have customized sizing by extending the capacity of the rack, adding any number of DB servers, storage servers, or a combination of both, up to the maximum allowed capacity of the rack.
In this blog post, I will summarize and walk through the procedure for extending Exadata storage capacity, i.e., adding a new cell to an existing Exadata Database Machine.
Preparing to Extend Exadata Database Machine
· Ensure the hardware is placed in the rack and all necessary network and cabling requirements are completed. (Two IP addresses from the management network are required for the new cell.)
· Re-image or upgrade the image:
o Extract the imageinfo output from one of the existing cell servers.
o Log in to the new cell through the ILOM, connect to the console as the root user, and get the imageinfo output.
o If the image version on the new cell doesn't match the existing image version, either download that exact image version and re-image the new cell, or upgrade the image on the existing servers.
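As a sketch, the image versions can be compared with the imageinfo utility; the cell_group file listing the existing cells is an assumed dcli group file:

```shell
# From a DB node: report the image version of every existing cell
/usr/local/bin/dcli -g cell_group -l root "imageinfo -ver"

# On the new cell's console (logged in as root through the ILOM)
imageinfo -ver
```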
- Add the IP addresses acquired for the new cell to the /etc/oracle/cell/network-config/cellip.ora file on each DB node. To do this, perform the steps below from the first DB server in the cluster:
- cd /etc/oracle/cell/network-config
- cp cellip.ora cellip.ora.orig
- cp cellip.ora cellip.ora-bak
- Add the new entries to /etc/oracle/cell/network-config/cellip.ora-bak.
- /usr/local/bin/dcli -g database_nodes -l root -f cellip.ora-bak -d /etc/oracle/cell/network-config/cellip.ora
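For reference, each line in cellip.ora names one cell by its storage network IP, and the new cell simply gets one more line in the same format; the IP addresses below are placeholders:

```shell
# Example contents of cellip.ora-bak after appending the new cell's entry
cell="192.168.10.3"
cell="192.168.10.4"
cell="192.168.10.5"
```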
- If ASR alerting was set up on the existing storage cells, configure cell ASR alerting for the cell being added.
- List the cell attributes required for configuring cell ASR alerting. Run the following command from any existing storage grid cell:
- Apply the same SNMP subscriber values to the new cell by running the command below as the celladmin user:
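A minimal sketch of both steps, assuming CellCLI and a placeholder ASR Manager host:

```shell
# On an existing cell: list the SNMP subscriber used for ASR alerting
cellcli -e "list cell attributes snmpSubscriber"

# On the new cell, as celladmin: apply the same value
# (host, port and community below are placeholders)
cellcli -e "alter cell snmpSubscriber=((host='asr-host.example.com',port=162,community=public,type=asr))"
```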
- Configure cell alerting for the cell being added.
- List the cell attributes required for configuring cell alerting. Run the following command from any existing storage grid cell:
- Apply the same values to the new cell by running the command below as the celladmin user:
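A sketch of both steps, assuming mail-based alerting; the server names and addresses are placeholders:

```shell
# On an existing cell: list the alerting attributes
cellcli -e "list cell attributes notificationMethod,notificationPolicy,smtpServer,smtpFromAddr,smtpToAddr"

# On the new cell, as celladmin: apply the same values (placeholders shown)
cellcli -e "alter cell notificationMethod='mail',notificationPolicy='critical,warning,clear',smtpServer='mail.example.com',smtpFromAddr='exadata@example.com',smtpToAddr='dba@example.com'"
```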
- Create cell disks on the cell being added.
- Log in to the cell as celladmin and run the following command:
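Creating the cell disks is a single CellCLI command:

```shell
# As celladmin on the new cell: create a cell disk on every physical disk
cellcli -e "create celldisk all"
```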
- Check that the flash log was created by default:
You should see the name of the flash log. It should look like cellnodename_FLASHLOG, and its status should be "normal".
If the flash log does not exist, create it using:
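Both the check and the conditional create can be sketched as:

```shell
# Verify the flash log exists and its status is "normal"
cellcli -e "list flashlog detail"

# Only if the flash log does not exist, create it
cellcli -e "create flashlog all"
```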
- Check the current flash cache mode and compare it to the flash cache mode on existing cells:
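The mode can be queried with CellCLI; run it on the new cell and on an existing cell and compare:

```shell
# The values on the new cell and the existing cells should match
cellcli -e "list cell attributes flashcachemode"
```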
To change the flash cache mode to match the flash cache mode of existing cells, do the following:
i. If the flash cache exists and the cell is in WriteBack flash cache mode, you must first flush the flash cache:
Wait for the command to return.
ii. Drop the flash cache:
iii. Change the flash cache mode:
The value of the flashCacheMode attribute is either writeback or writethrough. The value must match the flash cache mode of the other storage cells in the cluster.
iv. Create the flash cache:
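Steps i through iv above can be sketched as follows, with writeback shown as the example target mode; note that changing the mode requires a cellsrv restart:

```shell
# i.  Flush the flash cache first if the cell is in WriteBack mode,
#     and wait for the command to return
cellcli -e "alter flashcache all flush"

# ii. Drop the flash cache
cellcli -e "drop flashcache"

# iii. Change the flash cache mode (writeback or writethrough,
#      matching the other cells in the cluster)
cellcli -e "alter cell shutdown services cellsrv"
cellcli -e "alter cell flashCacheMode=writeback"
cellcli -e "alter cell startup services cellsrv"

# iv. Re-create the flash cache
cellcli -e "create flashcache all"
```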
- Create grid disks on the cell being added.
- Query the size and cachingPolicy attributes of the existing grid disks from an existing cell.
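The query can be sketched as:

```shell
# On an existing cell: report each grid disk's name, size and caching policy
cellcli -e "list griddisk attributes name,size,cachingPolicy"
```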
- For each disk group found by the above command, create grid disks on the new cell that is being added to the cluster. Match the size and the cachingpolicy of the existing grid disks for the disk group reported by the command above. Grid disks should be created in the order of increasing offset to ensure similar layout and performance characteristics as the existing cells. For example, the "list griddisk" command could return something like this:
When creating grid disks, begin with DATAC1, then RECOC1, and finally DBFS_DG using the following command:
CAUTION: Be sure to specify the EXACT size shown along with the unit (either T or G).
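A sketch of the create commands, issued in the DATAC1, RECOC1, DBFS_DG order described above; the sizes and caching policies below are placeholders only, and must be replaced with the exact values reported by the existing cells:

```shell
# As celladmin on the new cell -- sizes shown are placeholders;
# use the EXACT size and unit (T or G) reported by "list griddisk"
cellcli -e "create griddisk all harddisk prefix=DATAC1, size=2.8837890625T, cachingPolicy=default"
cellcli -e "create griddisk all harddisk prefix=RECOC1, size=738.4375G, cachingPolicy=none"
cellcli -e "create griddisk all harddisk prefix=DBFS_DG, size=33.796875G, cachingPolicy=default"
```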
- Verify the newly created grid disks are visible from the Oracle RAC nodes. Log in to each Oracle RAC node and run the following command:
This should list all of the grid disks created in the previous step.
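One way to sketch the verification, assuming the Grid Infrastructure kfod utility; $GI_HOME and the cell name are placeholders:

```shell
# As the Grid Infrastructure owner on each RAC node:
# discover all grid disks and filter for the new cell's name
$GI_HOME/bin/kfod op=disks disks=all | grep -i newcell01
```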
- Add the newly created grid disks to the respective existing ASM disk groups.
The command above kicks off an ASM rebalance at the default power level. Monitor the progress of the rebalance by querying gv$asm_operation:
Once the rebalance completes, the addition of the cell to the Oracle RAC cluster is complete.
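The disk addition and the rebalance check can be sketched as follows; the disk group names match the earlier examples, while the cell IP and rebalance power are placeholders:

```shell
# As the Grid owner on one RAC node, connected to the ASM instance
sqlplus -s "/ as sysasm" <<'EOF'
ALTER DISKGROUP DATAC1  ADD DISK 'o/192.168.10.5/DATAC1*'  REBALANCE POWER 4;
ALTER DISKGROUP RECOC1  ADD DISK 'o/192.168.10.5/RECOC1*'  REBALANCE POWER 4;
ALTER DISKGROUP DBFS_DG ADD DISK 'o/192.168.10.5/DBFS_DG*' REBALANCE POWER 4;

-- Monitor the rebalance; it is complete when this returns no rows
SELECT * FROM gv$asm_operation;
EOF
```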
- Download and run the latest exachk to ensure that the resulting configuration implements the latest best practices for Oracle Exadata.
Reimaging Exadata Cell Node Guidance (Doc ID 2151671.1)