RAC – Voting Disk

The voting disk is a file that resides on shared storage and must be accessible by all nodes in the cluster. Every node in the cluster registers its heartbeat information in the voting disk to confirm that it is operational; if a node's heartbeat information is missing from the voting disk, that node is evicted from the cluster. The CSS (Cluster Synchronization Services) daemon in the Clusterware maintains each node's heartbeat to the voting disk. When a node is unable to send its heartbeat to the voting disk, it reboots itself, which helps avoid split-brain syndrome.

For high availability, Oracle recommends that you configure an odd number of voting disks, with a minimum of three.

According to Oracle – “An absolute majority of voting disks configured (more than half) must be available and responsive at all times for Oracle Clusterware to operate.” This means that to survive the loss of ‘N’ voting disks, you must configure at least ‘2N+1’ voting disks.

For example, if you have 5 voting disks configured for your two-node environment, the cluster can survive even after the loss of 2 voting disks, because the remaining 3 still form a majority.
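The majority arithmetic above can be sketched in a few lines of shell; the disk count here is just an illustrative value, not a recommendation for every cluster:

```shell
# Sketch of the voting-disk majority arithmetic.
disks=5                             # total voting disks configured (example value)
majority=$(( disks / 2 + 1 ))       # "more than half" must stay available
survivable=$(( disks - majority ))  # how many disks you can afford to lose
echo "With $disks voting disks, majority is $majority; can lose $survivable"
```

With 5 disks this prints a majority of 3 and a tolerance of 2 lost disks, matching the 2N+1 rule (N=2).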

Keep in mind that having multiple voting disks is only worthwhile if you place them on different disks/volumes/SAN arrays, so that the cluster can survive the loss of one disk/volume/array. There is no point in configuring multiple voting disks on a single disk/LUN/array.

There is also a special scenario in which all nodes in the cluster can see all of the voting disks, but the cluster interconnect between the nodes has failed. To avoid split-brain syndrome in this scenario, a node eviction must still happen. The question is: which node gets evicted?

According to Oracle – “The node with the lower node number will survive the eviction (the first node to join the cluster).” So the very first node that joined the cluster survives.


1.) Obtaining voting disk information –

                           $ crsctl query css votedisk

2.) Adding Voting Disks

First shut down Oracle Clusterware on all nodes, then run the following command as the root user.

                           # crsctl add css votedisk [path of voting disk]

3.) Removing a voting disk:

First shut down Oracle Clusterware on all nodes, then run the following command as the root user.

                          # crsctl delete css votedisk [path of voting disk]

Do not use the -force option to add or remove a voting disk while the Oracle Clusterware stack is active; doing so can corrupt the cluster configuration. When the cluster is down, you can use -force with either of these commands to modify the voting disk configuration without interacting with active Oracle Clusterware daemons.
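Putting steps 2 and 3 together, a typical offline session might look like the sketch below. The device paths are hypothetical examples, and the exact crsctl syntax can vary between Clusterware releases, so check the documentation for your version:

```shell
# Run as root; all device paths below are hypothetical.
crsctl stop crs                                 # stop Clusterware on every node first
crsctl add css votedisk /dev/raw/raw3 -force    # cluster is down, so -force is appropriate here
crsctl delete css votedisk /dev/raw/raw2 -force # likewise for removal
crsctl start crs                                # bring Clusterware back up
crsctl query css votedisk                       # confirm the new voting disk layout
```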

4.) Backing up Voting Disks

Perform a backup whenever there is a change in the configuration, such as adding/deleting nodes or adding/deleting voting disks.

                          $ dd if=current_voting_disk of=backup_file_name

If your voting disk is stored on a raw device, specify the device name –


                          $ dd if=/dev/sdd1 of=/tmp/vd1_.dmp
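To experiment with this dd pattern safely, the sketch below uses an ordinary scratch file as a stand-in for the voting disk device (all paths are illustrative); the same commands apply to a real device such as /dev/sdd1, and cmp confirms the backup is byte-identical:

```shell
# Create a stand-in "voting disk" so the dd pattern can be tried safely;
# on a real system the if= source would be the actual device or file.
dd if=/dev/urandom of=/tmp/fake_votedisk bs=1024 count=4 2>/dev/null
# Back it up with dd, exactly as for a real voting disk.
dd if=/tmp/fake_votedisk of=/tmp/vd1_backup.dmp 2>/dev/null
# Verify the backup is byte-for-byte identical to the source.
cmp -s /tmp/fake_votedisk /tmp/vd1_backup.dmp && echo "backup verified"
```

Verifying the copy with cmp (or a checksum tool) before relying on it is cheap insurance, since dd will happily produce a truncated copy if it hits an I/O error.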

5.) Recovering Voting Disks

A bad voting disk can be recovered from a backup copy.

                          $ dd if=backup_file_name of=current_voting_disk

These are some of the common operations on voting disks, and that concludes this part.
