gfs2/dlm usage in cluster4

GFS2 clustering is driven by the dlm, which depends on dlm_controld to
provide clustering from userspace.  dlm_controld clustering is built on
corosync cluster/group membership and messaging.

Follow these steps to manually configure and run gfs2/dlm/corosync.

----------------------------------------------------------------------------

1. create /etc/corosync/corosync.conf and copy to all nodes

In this sample, replace cluster_name and IP addresses, and add nodes as
needed.  If using only two nodes, uncomment the two_node line.
See corosync.conf(5) for more information.

totem {
        version: 2
        secauth: off
        cluster_name: abc
}

nodelist {
        node {
                ring0_addr: 10.10.10.1
                nodeid: 1
        }
        node {
                ring0_addr: 10.10.10.2
                nodeid: 2
        }
        node {
                ring0_addr: 10.10.10.3
                nodeid: 3
        }
}

quorum {
        provider: corosync_votequorum
#       two_node: 1
}

logging {
        to_syslog: yes
}

----------------------------------------------------------------------------

2. start corosync on all nodes

directly
# corosync

or through systemd
# systemctl start corosync

Run corosync-quorumtool to verify that all nodes are listed.

----------------------------------------------------------------------------

3. create /etc/dlm/dlm.conf and copy to all nodes

* To use no fencing, use this line:

enable_fencing=0

* To use no fencing, but exercise fencing functions, use this line:

fence_all /bin/true

The "true" binary will be executed for all nodes and will succeed
(exit 0) immediately.

* To use manual fencing, use this line:

fence_all /bin/false

The "false" binary will be executed for all nodes and will fail
(exit 1) immediately.

When a node fails, manually run: dlm_tool fence_ack <nodeid>

* To use stonith/pacemaker for fencing, use this line:

fence_all /usr/sbin/dlm_stonith

The "dlm_stonith" binary will be executed for all nodes.  If
stonith/pacemaker systems are not available, dlm_stonith will fail and
this config becomes the equivalent of the previous /bin/false config.

* To use an APC power switch, use these lines:

device apc /usr/sbin/fence_apc ipaddr=1.1.1.1 login=admin password=pw
connect apc node=1 port=1
connect apc node=2 port=2
connect apc node=3 port=3

Other network switch based agents are configured similarly.

* To use sanlock/watchdog fencing, use these lines:

device wd /usr/sbin/fence_sanlock path=/dev/fence/leases
connect wd node=1 host_id=1
connect wd node=2 host_id=2
unfence wd

See fence_sanlock(8) for more information.

* For other fencing configurations see the dlm.conf(5) man page.

----------------------------------------------------------------------------

4. start dlm_controld on all nodes

directly
# modprobe dlm
# dlm_controld

or through systemd
# systemctl start dlm

Run "dlm_tool status" to verify that all nodes are listed.

----------------------------------------------------------------------------

5. if using clvm, start clvmd on all nodes

directly
# clvmd

or through systemd
# systemctl start lvm2-cluster-activation

(Before this, lvm.conf may need to be configured on all nodes: run
lvmconf --enable-cluster to set locking_type 3 and use_lvmetad 0.)

----------------------------------------------------------------------------

6. make new gfs2 file systems

# mkfs.gfs2 -p lock_dlm -t cluster_name:fs_name -j num /path/to/storage

The cluster_name must match the name used in step 1 above.

The fs_name must be a unique name in the cluster.

The -j option sets the number of journals to create; there must be one
for each node that will mount the fs.
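For example, a minimal sketch using the sample cluster name "abc" from
step 1 and three journals for the three nodes listed there (the fs name
"fs1" and the device path are placeholders, not from this document):

# mkfs.gfs2 -p lock_dlm -t abc:fs1 -j 3 /dev/vg_abc/lv_fs1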
----------------------------------------------------------------------------

7. mount gfs2 file systems

# modprobe gfs2
# mount /path/to/storage /mountpoint

The fs can be mounted on any or all cluster nodes.

Run "dlm_tool ls" to verify the nodes that have each fs mounted.

----------------------------------------------------------------------------

8. shut down

directly
# umount -a -t gfs2
# killall clvmd
# killall dlm_controld
# killall corosync

or through systemd
# umount -a -t gfs2
# systemctl stop lvm2-cluster-activation
# systemctl stop dlm
# systemctl stop corosync

----------------------------------------------------------------------------

More setup information:
dlm_controld(8), dlm_tool(8), dlm.conf(5), corosync(8), corosync.conf(5)

----------------------------------------------------------------------------

"cluster4" refers to the versions of corosync, dlm, lvm2, and gfs2
released in RHEL7.
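----------------------------------------------------------------------------

As a consolidated sketch, bringing up one node with the systemd
variants of steps 2-7 might look like the following (the device path
and mount point are placeholders, not from this document):

# systemctl start corosync
# corosync-quorumtool
# systemctl start dlm
# dlm_tool status
# systemctl start lvm2-cluster-activation
# modprobe gfs2
# mount /dev/vg_abc/lv_fs1 /mnt/gfs2
# dlm_tool ls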