Exadata Storage Expansion
Most of us know the capabilities that the Exadata Database Machine delivers. It is a known fact that Exadata comes in fixed rack sizes: 1/8 rack (2 db nodes, 3 cells), quarter rack (2 db nodes, 3 cells), half rack (4 db nodes, 7 cells), and full rack (8 db nodes, 14 cells). When you want to expand the capacity, it must be in fixed sizes as well: 1/8 to quarter, quarter to half, and half to full.
With the Exadata X5 Elastic configuration, you can also size a rack to your needs by extending its capacity with any number of database servers, storage servers, or a combination of both, up to the maximum allowed capacity of the rack.
In this blog post, I will summarize and walk through the procedure for extending Exadata storage capacity, i.e., adding a new cell to an existing Exadata Database Machine.
Preparing to Extend Exadata Database Machine
· Ensure the hardware is placed in the rack and all necessary network and cabling requirements are completed. (Two IP addresses from the management network are required for the new cell.)
· Re-image or upgrade the image:
o Extract the imageinfo from one of the existing cell servers.
o Log in to the new cell through ILOM, connect to the console as the root user, and get the imageinfo.
o If the image version on the new cell doesn't match the existing image version, either download the exact image version and re-image the new cell, or upgrade the image on the existing servers. A quick way to compare the two versions is shown below.
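To compare image versions, you can run the imageinfo utility on both servers (a minimal sketch; the prompts are illustrative, and the -ver option prints just the version string):
# As root on an existing cell:
imageinfo -ver
# As root on the new cell's console (via ILOM):
imageinfo -ver
The two versions must match exactly before the new cell is added to the storage grid.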
- Add the IP addresses acquired for the new cell to the /etc/oracle/cell/network-config/cellip.ora file on each DB node. To do this, perform the steps below from the first database server in the cluster:
- cd /etc/oracle/cell/network-config
- cp cellip.ora cellip.ora.orig
- cp cellip.ora cellip.ora-bak
- Add the new entries to /etc/oracle/cell/network-config/cellip.ora-bak.
- /usr/local/bin/dcli -g database_nodes -l root -f cellip.ora-bak -d /etc/oracle/cell/network-config/cellip.ora
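For reference, each line in cellip.ora names one cell by its interconnect address(es); the addresses below are hypothetical:
cell="192.168.10.7;192.168.10.8"
After the dcli copy, verify the new entry is present on every database node.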
- If ASR alerting was set up on the existing storage cells, configure cell ASR alerting for the cell being added.
- List the cell attributes required for configuring cell ASR alerting. Run the following command from any existing storage grid cell:
o CellCLI> list cell attributes snmpsubscriber
- Apply the same SNMP values to the new cell by running the following command as the celladmin user, for example:
o CellCLI> alter cell snmpSubscriber=((host='10.20.14.21',port=162,community=public))
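To confirm the subscriber was registered on the new cell, you can list the attribute again (a minimal check):
CellCLI> list cell attributes snmpSubscriber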
- Configure cell alerting for the cell being added.
- List the cell attributes required for configuring cell alerting. Run the following command from any existing storage grid cell:
o CellCLI> list cell attributes notificationMethod,notificationPolicy,smtpToAddr,smtpFrom,smtpFromAddr,smtpServer,smtpUseSSL,smtpPort
- Apply the same values to the new cell by running the following command as the celladmin user, for example:
o CellCLI> alter cell notificationMethod='mail,snmp',notificationPolicy='critical,warning,clear',smtpToAddr='dba@email.com',smtpFrom='Exadata',smtpFromAddr='dba@email.com',smtpServer='10.20.14.21',smtpUseSSL=FALSE,smtpPort=25
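Once set, you can have the cell send a test message to verify the mail settings end to end (a quick check; VALIDATE MAIL sends a test email to the configured smtpToAddr):
CellCLI> alter cell validate mail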
- Create cell disks on the cell being added.
- Log in to the cell as celladmin and run the following command:
o CellCLI> create celldisk all
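To verify that the cell disks were created and are healthy (a minimal check; every cell disk should report a "normal" status):
CellCLI> list celldisk attributes name,status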
- Check that the flash log was created by default:
o CellCLI> list flashlog
You should see the name of the flash log. It should look like cellnodename_FLASHLOG, and its status should be "normal".
If the flash log does not exist, create it using:
CellCLI> create flashlog all
- Check the current flash cache mode and compare it to the flash cache mode on existing cells:
o CellCLI> list cell attributes flashcachemode
To change the flash cache mode to match the flash cache mode of existing cells, do the following:
i. If the flash cache exists and the cell is in WriteBack flash cache mode, you must first flush the flash cache:
CellCLI> alter flashcache all flush
Wait for the command to return.
ii. Drop the flash cache:
CellCLI> drop flashcache all
iii. Change the flash cache mode:
CellCLI> alter cell flashCacheMode=writeback_or_writethrough
The value of the flashCacheMode attribute is either writeback or writethrough. The value must match the flash cache mode of the other storage cells in the cluster.
iv. Create the flash cache:
CellCLI> create flashcache all
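To confirm the cell came back with the intended configuration (a minimal check):
CellCLI> list cell attributes flashCacheMode
CellCLI> list flashcache attributes name,status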
- Create grid disks on the cell being added.
- Query the size and cachingpolicy of the existing grid disks from an existing cell.
o CellCLI> list griddisk attributes name,asmDiskGroupName,cachingpolicy,size,offset
- For each disk group found by the above command, create grid disks on the new cell that is being added to the cluster. Match the size and the cachingpolicy of the existing grid disks for the disk group reported by the command above. Grid disks should be created in the order of increasing offset to ensure similar layout and performance characteristics as the existing cells. For example, the "list griddisk" command could return something like this:
o DATAC1 default 5.6953125T 32M
o DBFS_DG default 33.796875G 7.1192474365234375T
o RECOC1 none 1.42388916015625T 5.6953582763671875T
When creating grid disks, begin with DATAC1, then RECOC1, and finally DBFS_DG, using the following commands:
CellCLI> create griddisk ALL HARDDISK PREFIX=DATAC1, size=5.6953125T, cachingpolicy='default', comment="Cluster cluster-clux6 DR diskgroup DATAC1"
CellCLI> create griddisk ALL HARDDISK PREFIX=RECOC1, size=1.42388916015625T, cachingpolicy='none', comment="Cluster cluster-clux6 DR diskgroup RECOC1"
CellCLI> create griddisk ALL HARDDISK PREFIX=DBFS_DG, size=33.796875G, cachingpolicy='default', comment="Cluster cluster-clux6 DR diskgroup DBFS_DG"
CAUTION: Be sure to specify the EXACT size shown along with the unit (either T or G).
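To verify the result, list the new grid disks and check their state (a minimal check; names combine the prefix with the underlying cell disk name):
CellCLI> list griddisk attributes name,size,status
Each grid disk should report an "active" status.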
- Verify the newly created grid disks are visible from the Oracle RAC nodes. Log in to each Oracle RAC node and run the following command:
· $GI_HOME/bin/kfod op=disks disks=all | grep cellName_being_added
This should list all the grid disks created in the grid disk creation step above.
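Each disk should appear with an Exadata path of the form o/<cell interconnect IP>/<griddisk name>, for example (the address and cell name here are hypothetical):
o/192.168.10.7/DATAC1_CD_00_cel07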
- Add the newly created grid disks to the respective existing ASM disk groups.
· SQL> alter diskgroup disk_group_name add disk 'comma_separated_disk_names';
The command above kicks off an ASM rebalance at the default power level. Monitor the progress of the rebalance by querying gv$asm_operation:
SQL> select * from gv$asm_operation;
Once the rebalance completes, the addition of the cell to the Oracle RAC cluster is complete.
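As a concrete sketch (the disk group name, interconnect IPs, and prefix below are hypothetical; the 'o/<cell IPs>/<griddisk pattern>' discovery string and the REBALANCE POWER clause are standard ASM syntax):
SQL> alter diskgroup DATAC1 add disk 'o/192.168.10.7;192.168.10.8/DATAC1_*' rebalance power 32;
A higher power limit speeds up the rebalance at the cost of more I/O overhead on the cluster.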
- Download and run the latest exachk to ensure that the resulting configuration implements the latest best practices for Oracle Exadata.
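A typical invocation, run as root from the directory where exachk was staged (a sketch; the -a option runs all checks):
# ./exachk -a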
References:
http://docs.oracle.com/cd/E80920_01/DBMMR/extending-exadata.htm#DBMMR21158
Reimaging Exadata Cell Node Guidance (Doc ID 2151671.1)