This chapter describes how to configure and use the Cluster Edition with the Veritas Storage Foundation for Sybase CE (SF for Sybase CE).
The integration of the Cluster Edition version 15.0.3 and later with the Veritas SF for Sybase CE allows the Cluster Edition to leverage the Veritas Storage Foundation storage management technologies, Cluster Server (VCS) application, and cluster management features.
Integrating the Cluster Edition version 15.0.3 with the SF for Sybase CE provides cluster availability and data integrity. Other versions of the Veritas Storage Foundation do not contain the necessary integration components and should not be used.
The Cluster Edition with SF for Sybase CE includes:
Storage Foundation Cluster File System – a generic clustered file system you can use with the Cluster Edition installation files (the files and directories located in $SYBASE), database devices, quorum devices, and other application files
Cluster Volume Manager – creates the logical volumes the cluster nodes share
Dynamic multipathing – improves storage availability and performance
Service group–based application management – provides monitoring and failover capabilities, and lets you create dependencies between applications so that, if a component the application requires fails (such as a disk volume), the entire application can fail over (see the sketch following this list)
Integrated management of multiple clusters, applications, and database servers using VCS agents and management consoles
Support for hardware replication technologies and block-level replication using Veritas Volume Replicator
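The service group–based failover model can be illustrated with a short sketch. The following Python example is a simplified illustration only; the class and function names (Resource, ServiceGroup, fail_over) are hypothetical and are not part of VCS or the Cluster Edition. It shows how a failure in any component an application depends on, such as a disk volume, can cause the entire application to fail over to a standby node.

# Illustrative sketch only: a toy model of service-group failover, not the VCS API.
# All names (Resource, ServiceGroup, fail_over) are hypothetical.

class Resource:
    """A monitored component of a service group (for example, a disk volume or a server process)."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

class ServiceGroup:
    """Groups resources so that a failure of any required resource fails over the whole group."""
    def __init__(self, name, resources, nodes):
        self.name = name
        self.resources = resources
        self.nodes = list(nodes)          # ordered list; the first node is the active one
        self.active_node = self.nodes[0]

    def monitor(self):
        # If any required resource is unhealthy, fail over the entire group.
        failed = [r.name for r in self.resources if not r.healthy]
        if failed:
            self.fail_over(failed)

    def fail_over(self, failed_resources):
        standbys = [n for n in self.nodes if n != self.active_node]
        if not standbys:
            raise RuntimeError(f"No standby node available for group {self.name}")
        print(f"{self.name}: {failed_resources} failed on {self.active_node}; "
              f"failing over to {standbys[0]}")
        self.active_node = standbys[0]

# Example: a database service group that depends on a disk volume.
volume = Resource("data_volume")
server = Resource("database_server")
group = ServiceGroup("ase_group", [volume, server], nodes=["node1", "node2"])

volume.healthy = False   # simulate a disk-volume failure
group.monitor()          # the whole group fails over to node2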
The Cluster Edition and the SF for Sybase CE include two key elements of cluster operation: membership management and I/O fencing:
Membership management – the Cluster Edition and the SF for Sybase CE each maintain their own membership manager. The membership manager is responsible for:
Coordinating logging in to and logging off from the cluster
Detecting failures
Determining which members of the cluster remain alive in the event of a communications loss (known as arbitration)
Maintaining a consistent cluster view
The Veritas Cluster Membership plug-in (VCMP) allows the Cluster Edition membership service to synchronize with the underlying Veritas membership manager, which avoids a situation where two membership managers are not coordinated, arbitrate a failure differently, and cause the cluster to shut down. Using the VCMP ensures that the Cluster Edition arbitrates in favor of instances that run on nodes within the Veritas membership view.
I/O fencing – so-called because the membership manager builds a fence around the data storage and allows only instances or nodes that behave properly to perform writes. Use I/O fencing to prevent data corruption from uncooperative cluster members (a sketch of this coordination follows the list below). The Cluster Edition and the SF for Sybase CE coordinate I/O fencing such that:
The SF for Sybase CE manages and performs all fencing, and
The Cluster Edition can communicate to the SF for Sybase CE that a fencing action is necessary because of a Cluster Edition membership change.
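The following Python sketch illustrates, in simplified form, how arbitration and I/O fencing work together: on a communications loss, only nodes that remain in the underlying membership view survive, and evicted nodes are fenced off from shared storage before the survivors continue writing. The names and data structures (arbitrate, FencedStorage) are hypothetical and do not represent the actual Cluster Edition or SF for Sybase CE interfaces.

# Illustrative sketch only: simplified arbitration and I/O fencing.
# The functions and classes here are hypothetical, not the actual
# Cluster Edition or SF for Sybase CE interfaces.

def arbitrate(cluster_nodes, reachable_nodes, veritas_view):
    """On a communications loss, keep only the nodes that both remain reachable
    and appear in the underlying (Veritas) membership view."""
    survivors = set(reachable_nodes) & set(veritas_view)
    evicted = set(cluster_nodes) - survivors
    return survivors, evicted

class FencedStorage:
    """A write gate: only nodes inside the fence may write to shared storage."""
    def __init__(self, nodes):
        self.allowed = set(nodes)

    def fence_off(self, nodes):
        # Evicted nodes lose write access before the surviving members resume work.
        self.allowed -= set(nodes)

    def write(self, node, block, data):
        if node not in self.allowed:
            raise PermissionError(f"{node} is fenced off; write to block {block} rejected")
        print(f"{node} wrote block {block}: {data!r}")

nodes = {"node1", "node2", "node3"}
storage = FencedStorage(nodes)

# Simulate a communications loss: node3 is unreachable and has dropped out of
# the underlying membership view.
survivors, evicted = arbitrate(nodes, reachable_nodes={"node1", "node2"},
                               veritas_view={"node1", "node2"})
storage.fence_off(evicted)

storage.write("node1", 42, "committed page")      # allowed
try:
    storage.write("node3", 42, "stale page")       # rejected: node3 is fenced off
except PermissionError as err:
    print(err)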
Figure 12-1 shows the components that make up the Cluster Edition and the SF for Sybase CE:
Figure 12-1: Components in clustered Veritas system
The components supplied by Sybase are:
Cluster Edition – relational database server running on the node
Quorum device – includes configuration information for the cluster and is shared by all cluster members
Veritas Cluster Membership plug-in (VCMP) – receives membership change messages from VxFEND and communicates them to the Cluster Edition’s membership service, which blocks membership changes until the VCMP permits them to proceed
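The block-until-permitted handshake performed by the VCMP can be sketched as follows. This Python example is illustrative only and uses hypothetical class names (MembershipPlugin, MembershipService); it is not the actual VCMP interface. The membership service blocks a pending membership change until the plug-in signals that the underlying cluster software has acknowledged the change.

# Illustrative sketch only: how a membership service might block a membership
# change until a plug-in permits it, in the spirit of the VCMP handshake.
# Class and method names are hypothetical.

import threading

class MembershipPlugin:
    """Stands in for the VCMP: it releases the gate once the underlying
    cluster software has acknowledged the membership change."""
    def __init__(self):
        self._permit = threading.Event()

    def on_underlying_membership_change(self):
        # Called when the underlying membership manager reports the change.
        self._permit.set()

    def wait_for_permission(self, timeout=None):
        return self._permit.wait(timeout)

class MembershipService:
    """Stands in for the database membership service."""
    def __init__(self, plugin):
        self.plugin = plugin

    def apply_membership_change(self, new_members):
        # Block until the plug-in permits the change to proceed.
        if not self.plugin.wait_for_permission(timeout=30):
            raise TimeoutError("membership change not permitted in time")
        print(f"membership change applied: {sorted(new_members)}")

plugin = MembershipPlugin()
service = MembershipService(plugin)

# Simulate the underlying cluster software acknowledging the change shortly after.
threading.Timer(0.1, plugin.on_underlying_membership_change).start()
service.apply_membership_change({"node1", "node2"})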
The components supplied by Veritas are:
LLT (low latency transport) – allows VCS to communicate across the cluster, carrying heartbeats and detecting failures (a heartbeat-detection sketch follows this list).
GAB (global atomic broadcast) – coordinates the VCS membership. The LLT provides the GAB with information about failure events (for example, no heartbeat).
CVM (cluster volume manager) – receives membership information from the GAB and coordinates with VxFEN.
CFS (cluster file system) – a file system that can be simultaneously mounted and accessed by multiple nodes in a cluster. CFS uses a distributed lock manager to maintain consistency across nodes. CVM, with a storage area network (SAN), provides the underlying storage.
VxFEN (kernel-side I/O fencing control) – receives membership information from GAB and performs fencing when appropriate.
VxFEND (user space I/O fencing control daemon) – a daemon running in user space (as opposed to running in the kernel) that communicates with VxFEN. During a membership change, VxFEN sends a message to VxFEND indicating the change of membership, and VxFEND informs the Cluster Edition about the membership change through the VCMP.
VCS Agent – the VCS monitoring agent for the Cluster Edition. If the Cluster Edition fails, the VCS agent can trigger a host panic when necessary. VCS also uses the agent to start and stop the Cluster Edition.
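The heartbeat-based failure detection that LLT and GAB provide can be sketched as follows. This Python example is a simplified illustration with hypothetical names and thresholds (HeartbeatMonitor, MembershipCoordinator, HEARTBEAT_TIMEOUT); it is not the LLT or GAB implementation. A node that stops sending heartbeats past the timeout is reported as failed and removed from the membership view.

# Illustrative sketch only: heartbeat-based failure detection feeding a
# membership coordinator, in the spirit of LLT reporting failures to GAB.
# All names and thresholds are hypothetical.

import time

HEARTBEAT_TIMEOUT = 0.5   # seconds without a heartbeat before a node is considered failed

class HeartbeatMonitor:
    """Tracks the last heartbeat seen from each peer node."""
    def __init__(self, peers):
        now = time.monotonic()
        self.last_seen = {peer: now for peer in peers}

    def heartbeat(self, peer):
        self.last_seen[peer] = time.monotonic()

    def failed_peers(self):
        now = time.monotonic()
        return [p for p, t in self.last_seen.items()
                if now - t > HEARTBEAT_TIMEOUT]

class MembershipCoordinator:
    """Removes failed peers from the membership view when told about them."""
    def __init__(self, members):
        self.members = set(members)

    def report_failures(self, failed):
        for peer in failed:
            if peer in self.members:
                self.members.discard(peer)
                print(f"{peer} removed from membership: no heartbeat")

monitor = HeartbeatMonitor(["node2", "node3"])
coordinator = MembershipCoordinator(["node1", "node2", "node3"])

monitor.heartbeat("node2")            # node2 keeps sending heartbeats
time.sleep(0.6)                       # node3 stays silent past the timeout
monitor.heartbeat("node2")
coordinator.report_failures(monitor.failed_peers())
print("current membership:", sorted(coordinator.members))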