We have looked at the hardware that makes up the Exadata Database Machine; now we are going to delve into the software. Exadata's database nodes run Oracle Clusterware, the ASM instances, and the database instances. You can choose to create either a single cluster or multiple clusters, and, similarly, you can create either a single database on a cluster or multiple databases.

For example, if you were to create three databases (dev, int, and QA), you would have two choices: (1) the one-cluster solution: create one cluster containing all three databases; (2) the three-cluster solution: create three separate clusters, each containing one database.

With the first option, you have the flexibility to add and remove instances of a database effortlessly. For example, with eight nodes in a full rack, you might assign two nodes to dev, two nodes to int, and four nodes to QA. Now suppose a full production stress test is planned and QA temporarily needs all eight of the machine's nodes to match the eight nodes in production. In that case, all you have to do is stop the dev and int instances and start four additional QA instances on those nodes. Once the stress test is finalised, you can stop those four QA instances and restart the dev and int instances, easily transitioning back to your original configuration.
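As a rough sketch of how simple that switch is, the commands below use srvctl; the database names (dev, int, qa), instance names, and node layout are hypothetical, and the sketch assumes the extra QA instances (qa5 to qa8) have already been defined on the dev and int nodes (for example with srvctl add instance).

```bash
# Free up the four nodes currently running dev and int
srvctl stop instance -d dev -i dev1,dev2
srvctl stop instance -d int -i int1,int2

# Start the additional QA instances defined on those nodes,
# giving QA all eight nodes for the stress test
srvctl start instance -d qa -i qa5,qa6,qa7,qa8

# ... run the stress test ...

# Revert to the original layout
srvctl stop instance -d qa -i qa5,qa6,qa7,qa8
srvctl start instance -d dev -i dev1,dev2
srvctl start instance -d int -i int1,int2
```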

If you are running many production databases on a single rack, you can still make use of this technique. If a specific database temporarily needs more computing power to cope with a seasonal peak in demand, you can simply shut down an instance of a database that is not needed at that moment and start an instance of the busier database on that node. Once the demand has subsided, you can return to the default layout. You can also run two instances on the same node, but of course they will compete for resources, which can become a problem if you are running near capacity. To mitigate this, you can control how the instances share I/O using the I/O Resource Manager (IORM), as sketched below. The drawback of this option is that you are still operating Exadata as a single cluster, so when you upgrade the cluster, all of the databases will need to be upgraded.
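As a rough illustration, an interdatabase IORM plan is set on each storage cell with CellCLI; the database names and allocation percentages below are purely hypothetical, and the available directives vary by Exadata Storage Server release.

```bash
# Run on each storage cell; database names and percentages are examples only
cellcli -e "ALTER IORMPLAN dbPlan=((name=prod, level=1, allocation=75), \
                                   (name=dev, level=1, allocation=25), \
                                   (name=other, level=2, allocation=100))"

# Confirm the plan now in force on this cell
cellcli -e "LIST IORMPLAN DETAIL"
```

Here prod is guaranteed the bulk of the I/O at level 1, dev shares the rest of that level, and the catch-all "other" directive covers any database not named explicitly.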

With the second option, there is an individual cluster for each database: a complete separation. You can upgrade or manipulate each cluster any way you want without having any effect on the others. However, when one database needs more power, you cannot simply start up another instance. Instead, you have to remove a node from one cluster and add it to the cluster where it is needed, which is considerably more involved than simply stopping and starting instances, as the outline below suggests.
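To give a sense of why moving a node between clusters is heavier, the outline below shows the kind of steps involved, using hypothetical node, database, and instance names; it is only a sketch, not a complete procedure, and the exact steps (inventory updates, VIP configuration, database home extension, and so on) depend on your Grid Infrastructure version and Oracle's documented node add/remove process.

```bash
# On the cluster giving up the node: stop and remove the database instance there
srvctl stop instance -d dev -i dev2
srvctl remove instance -d dev -i dev2

# Deconfigure Clusterware on the departing node (run as root on that node),
# then delete the node from the cluster (run from a remaining node)
# $GRID_HOME/crs/install/rootcrs.pl -deconfig -force
crsctl delete node -n exadb04

# On the cluster gaining the node: extend Grid Infrastructure to it
$GRID_HOME/addnode/addnode.sh -silent "CLUSTER_NEW_NODES={exadb04}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={exadb04-vip}"

# Extend the database home, then add and start the new instance
srvctl add instance -d qa -i qa5 -n exadb04
srvctl start instance -d qa -i qa5
```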

Since the cells contain the disks, how do the Exadata compute nodes access them? To put it another way, how do the ASM instances running on the compute nodes access the disks? The disks are accessible only to the cells, not to the compute nodes, so the compute nodes see the disks through the cells.
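As a rough illustration (the paths and addresses are examples only), each compute node lists the cells it can reach in cellip.ora, and ASM discovers the grid disks those cells expose through an "o/..." disk string rather than through local block devices.

```bash
# On a compute node: the cells this node can reach (addresses are examples)
cat /etc/oracle/cell/network-config/cellip.ora
# cell="192.168.10.3"
# cell="192.168.10.4"
# cell="192.168.10.5"

# ASM then discovers the cell-hosted grid disks with a wildcard disk string,
# typically set on the ASM instances as:
#   asm_diskstring = 'o/*/*'
```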

The Exadata Database Machine's flash disks are accessible to the cell as storage devices as well, much like the normal disks. As a result, the flash disks can either be added to the pool of ASM disks and used by the database for ultra-fast access, or they can be used to create the Smart Flash Cache layer, a secondary cache between the database buffer cache and storage. This layer does not follow the same logic as the database buffer cache: rather than caching everything that passes through it, it caches only the data that is accessed most frequently. Requests for data not found in the Smart Flash Cache are automatically routed to the disks.
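As a sketch of those two options (CellCLI commands run on a cell; in practice you would dedicate the flash to one use or split it, and the grid disk prefix is illustrative):

```bash
# Option A: expose the flash as grid disks that ASM can add to a disk group
cellcli -e "CREATE GRIDDISK ALL FLASHDISK PREFIX=flash"

# Option B: dedicate the flash to the Smart Flash Cache layer instead
cellcli -e "DROP FLASHCACHE"        # only if an existing flash cache must be removed first
cellcli -e "CREATE FLASHCACHE ALL"

# Inspect the result
cellcli -e "LIST FLASHCACHE DETAIL"
```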

For more information, please contact Pebble IT. Next time, we will look at Oracle Exadata's competitive edge, including cell offloading, Smart Scan, iDB, Storage Indexes, Smart Cache, InfiniBand, and more.