How to configure High Availability Linux Cluster with Pacemaker in CentOS

Source: https://www.linuxhelp.com/how-to-configure-high-availability-linux-cluster-with-pacemaker-in-centos/
A cluster is a group of servers that can be treated as a single system. It is used to provide high availability without downtime. The procedure to configure a high availability Linux cluster with Pacemaker in CentOS is explained below.
Testing Environment
Operating System: CentOS 7
Node 1: hostname - linuxhelp1.cluster.com, IP - 192.168.5.148
Node 2: hostname - linuxhelp2.cluster.com, IP - 192.168.5.149
Pacemaker needs to resolve the hostnames of the cluster nodes. A DNS server can handle this; if one is unavailable, add the entries to /etc/hosts as follows.
[root@linuxhelp1 ~]# cat /etc/hosts
127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
::1           localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.5.148 linuxhelp1.cluster.com linuxhelp1
192.168.5.149 linuxhelp2.cluster.com linuxhelp2
Note: These entries must be made on every node.
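A quick way to confirm that the names resolve on each node is to query the hosts database; given the entries above, the output should look like this:

[root@linuxhelp1 ~]# getent hosts linuxhelp2
192.168.5.149   linuxhelp2.cluster.com linuxhelp2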
Run the following command to install the Apache package on both nodes.
[root@linuxhelp1 ~]# yum install httpd -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirror.nbrc.ac.in
 * extras: mirror.nbrc.ac.in
 * updates: mirror.nbrc.ac.in
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-40.el7.centos.4 will be installed
--> Processing Dependency: httpd-tools = 2.4.6-40.el7.centos.4 for package: httpd-2.4.6-40.el7.centos.4.x86_64
--> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-40.el7.centos.4.x86_64
--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-40.el7.centos.4.x86_64
--> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-40.el7.centos.4.x86_64
--> Running transaction check
---> Package apr.x86_64 0:1.4.8-3.el7 will be installed
---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed
. . .
Installed:
  httpd.x86_64 0:2.4.6-40.el7.centos.4

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7
  apr-util.x86_64 0:1.5.2-6.el7
  httpd-tools.x86_64 0:2.4.6-40.el7.centos.4
  mailcap.noarch 0:2.1.41-2.el7

Complete!
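Optionally, confirm that the package landed on each node:

[root@linuxhelp1 ~]# rpm -q httpd
httpd-2.4.6-40.el7.centos.4.x86_64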
Disable SELinux and firewall
On both nodes, put SELinux into permissive mode and disable the firewall. First, change the SELINUX value in the configuration file as follows.
[root@linuxhelp1 ~]# vim /etc/sysconfig/selinux
SELINUX=permissive
[root@linuxhelp1 ~]# setenforce 0
[root@linuxhelp1 ~]# systemctl stop firewalld
[root@linuxhelp1 ~]# systemctl disable firewalld
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'
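Note: disabling the firewall is the quickest route for a lab setup. If you would rather keep firewalld running, a sketch of the alternative is to open the cluster and web traffic using the service definitions that ship with CentOS 7:

[root@linuxhelp1 ~]# firewall-cmd --permanent --add-service=high-availability
[root@linuxhelp1 ~]# firewall-cmd --permanent --add-service=http
[root@linuxhelp1 ~]# firewall-cmd --reload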
To install the Pacemaker packages
Run the following command to install the Pacemaker packages on both nodes.
[root@linuxhelp1 ~]# yum install pacemaker pcs fence-agents-all -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirror.nbrc.ac.in
 * extras: mirror.nbrc.ac.in
 * updates: mirror.nbrc.ac.in
Resolving Dependencies
--> Running transaction check
---> Package fence-agents-all.x86_64 0:4.0.11-27.el7_2.7 will be installed
--> Processing Dependency: fence-virt for package: fence-agents-all-4.0.11-27.el7_2.7.x86_64
--> Processing Dependency: fence-agents-wti for package: fence-agents-all-4.0.11-27.el7_2.7.x86_64
--> Processing Dependency: fence-agents-vmware-soap for package: fence-agents-all-4.0.11-27.el7_2.7.x86_64
--> Processing Dependency: fence-agents-scsi for package: fence-agents-all-4.0.11-27.el7_2.7.x86_64
--> Processing Dependency: fence-agents-rsb for package: fence-agents-all-4.0.11-27.el7_2.7.x86_64
. . .
  rubygem-io-console.x86_64 0:0.4.2-25.el7_1
  rubygem-json.x86_64 0:1.7.7-25.el7_1
  rubygem-psych.x86_64 0:2.0.0-25.el7_1
  rubygem-rdoc.noarch 0:4.0.0-25.el7_1
  rubygems.noarch 0:2.0.14-25.el7_1
  sg3_utils.x86_64 0:1.37-5.el7
  telnet.x86_64 1:0.17-59.el7

Dependency Updated:
  net-snmp-libs.x86_64 1:5.7.2-24.el7_2.1
  python-chardet.noarch 0:2.2.1-1.el7_1

Complete!
After the installation, set a password for the “hacluster” user on both nodes. First, verify that the user account was created by running the following command.
[root@linuxhelp1 ~]# tail /etc/passwd
abrt:x:173:173::/etc/abrt:/sbin/nologin
pulse:x:171:171:PulseAudio System Daemon:/var/run/pulse:/sbin/nologin
gdm:x:42:42::/var/lib/gdm:/sbin/nologin
gnome-initial-setup:x:993:991::/run/gnome-initial-setup/:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
user1:x:1000:1000:user1:/home/user1:/bin/bash
apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
hacluster:x:189:189:cluster user:/home/hacluster:/sbin/nologin
Then run the following command to set the password for the “hacluster” account.
[root@linuxhelp1 ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
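Use the same password on both nodes. If you prefer to script this step instead of typing interactively, the passwd shipped with CentOS can read the password from stdin (the password below is a placeholder):

[root@linuxhelp2 ~]# echo "Y0urP@ssw0rd" | passwd --stdin hacluster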
Start and enable the pcsd service on both nodes.
[root@linuxhelp1 ~]# systemctl start pcsd
[root@linuxhelp1 ~]# systemctl enable pcsd
ln -s '/usr/lib/systemd/system/pcsd.service' '/etc/systemd/system/multi-user.target.wants/pcsd.service'
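Optionally, confirm that the daemon is running on each node before proceeding:

[root@linuxhelp1 ~]# systemctl is-active pcsd
active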
To create the cluster
Before creating the cluster, authenticate the nodes that are to be added by using the following command.
[root@linuxhelp1 ~]# pcs cluster auth linuxhelp1 linuxhelp2 -u hacluster
Password:
linuxhelp2: Authorized
linuxhelp1: Authorized
Then use the following command to create the cluster and add the authenticated nodes to it.
[root@linuxhelp1 ~]# pcs cluster setup --name Cluster linuxhelp1 linuxhelp2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services...
Removing all cluster configuration files...
linuxhelp1: Succeeded
linuxhelp2: Succeeded
Synchronizing pcsd certificates on nodes linuxhelp1, linuxhelp2...
linuxhelp2: Success
linuxhelp1: Success
Restarting pcsd on the nodes in order to reload the certificates...
linuxhelp2: Success
linuxhelp1: Success
After the cluster is created, make sure the nodes were added to it by checking the generated corosync configuration file.
[root@linuxhelp1 ~]# cat /etc/corosync/corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: Cluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: linuxhelp1
        nodeid: 1
    }

    node {
        ring0_addr: linuxhelp2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
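Note that two_node: 1 tells corosync's votequorum to keep this two-node cluster quorate even when only one node survives. As an optional consistency check, pcs can fetch the corosync.conf from a node so you can compare the copies on both machines (output should match the file above):

[root@linuxhelp1 ~]# pcs cluster corosync linuxhelp2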
Run the following command to start the cluster services.
[root@linuxhelp1 ~]# pcs cluster start --all
linuxhelp2: Starting Cluster...
linuxhelp1: Starting Cluster...
[root@linuxhelp1 ~]# pcs status
Cluster name: Cluster
WARNING: no stonith devices and stonith-enabled is not false
Last updated: Wed Sep 14 16:30:06 2016
Last change: Wed Sep 14 16:29:52 2016 by hacluster via crmd on linuxhelp2
Stack: unknown
Current DC: NONE
2 nodes and 0 resources configured

Node linuxhelp1: UNCLEAN (offline)
Node linuxhelp2: UNCLEAN (offline)

Full list of resources:

PCSD Status:
  linuxhelp1: Online
  linuxhelp2: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
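The UNCLEAN (offline) state is normal for the first few seconds after pcs cluster start, while pacemaker is still joining the nodes (note Stack: unknown and Current DC: NONE above). Re-run pcs status a moment later, or take a one-shot snapshot of the cluster state:

[root@linuxhelp1 ~]# crm_mon -1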
Enable the cluster services by running the following command.
[root@linuxhelp1 ~]# pcs cluster enable --all
linuxhelp1: Cluster Enabled
linuxhelp2: Cluster Enabled
To add resources into the cluster
Next, let's add resources to the cluster. Here we add two resources: a web server (httpd) and a virtual IP address. These resources are added to the cluster so that the service remains available without downtime.
[root@linuxhelp1 ~]# pcs resource create VirtIP IPaddr ip=192.168.5.150 cidr_netmask=24 op monitor interval=30s
[root@linuxhelp1 ~]# pcs resource create Httpd apache configfile="/etc/httpd/conf/httpd.conf" op monitor interval=30s
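If you are unsure which options a resource agent accepts (for example, that the apache agent takes configfile), pcs can print the agent's metadata:

[root@linuxhelp1 ~]# pcs resource describe ocf:heartbeat:apache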
Run the following command to check the status of the cluster.
[root@linuxhelp1 ~]# pcs status
Cluster name: Cluster
WARNING: no stonith devices and stonith-enabled is not false
Last updated: Wed Sep 14 17:13:47 2016
Last change: Wed Sep 14 17:11:54 2016 by root via cibadmin on linuxhelp1
Stack: corosync
Current DC: linuxhelp1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
2 nodes and 2 resources configured

Online: [ linuxhelp1 linuxhelp2 ]

Full list of resources:

 VirtIP (ocf::heartbeat:IPaddr): Stopped
 Httpd (ocf::heartbeat:apache): Stopped

PCSD Status:
  linuxhelp1: Online
  linuxhelp2: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
Check the status of the resources alone.
[root@linuxhelp1 ~]# pcs status resources
 VirtIP (ocf::heartbeat:IPaddr): Stopped
 Httpd (ocf::heartbeat:apache): Stopped
The resources show as Stopped because STONITH is still enabled while no fence devices are configured; this is addressed below. First, add a colocation constraint so that the web server always runs on the same node as the virtual IP.
[root@linuxhelp1 ~]# pcs constraint colocation add Httpd with VirtIP INFINITY
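The colocation rule only guarantees that both resources run on the same node. A common companion (not part of the original walkthrough) is an ordering constraint, so the virtual IP is brought up before Apache starts; pcs should report something like the output shown:

[root@linuxhelp1 ~]# pcs constraint order VirtIP then Httpd
Adding VirtIP Httpd (kind: Mandatory) (Options: first-action=start then-action=start)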
Then set the following cluster properties. Disabling STONITH is what allows the resources to start in this test setup; it and ignoring quorum loss are acceptable only for a two-node lab like this one.
[root@linuxhelp1 ~]# pcs property set stonith-enabled=false
[root@linuxhelp1 ~]# pcs property set no-quorum-policy=ignore
[root@linuxhelp1 ~]# pcs property set default-resource-stickiness="INFINITY"
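You can verify that the properties took effect; the output should include lines like the following (other default properties trimmed):

[root@linuxhelp1 ~]# pcs property list
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: Cluster
 default-resource-stickiness: INFINITY
 no-quorum-policy: ignore
 stonith-enabled: false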
Run the following command to check the status of the resources and clusters.
[root@linuxhelp1 ~]# pcs status resources
 VirtIP (ocf::heartbeat:IPaddr): Started linuxhelp1
 Httpd (ocf::heartbeat:apache): Started linuxhelp1
[root@linuxhelp1 ~]# pcs status
Cluster name: Cluster
Last updated: Wed Sep 14 17:52:55 2016
Last change: Wed Sep 14 17:40:24 2016 by root via cibadmin on linuxhelp1
Stack: corosync
Current DC: linuxhelp1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
2 nodes and 2 resources configured

Online: [ linuxhelp1 linuxhelp2 ]

Full list of resources:

 VirtIP (ocf::heartbeat:IPaddr): Started linuxhelp1
 Httpd (ocf::heartbeat:apache): Started linuxhelp1

PCSD Status:
  linuxhelp1: Online
  linuxhelp2: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
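With both resources started on linuxhelp1, you can also confirm that the virtual IP is actually bound on that node; the command below should print an inet line for 192.168.5.150 (the interface name will vary with your system):

[root@linuxhelp1 ~]# ip addr show | grep 192.168.5.150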
To check the failover
After configuring the cluster, check the failover between the nodes. Create an index file on each node with different content, so it is obvious which node is serving the page.
[root@linuxhelp1 ~]# vim /var/www/html/index.html
Entry:
This is Node1
[root@linuxhelp2 ~]# vim /var/www/html/index.html
Entry:
This is Node2
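You can also fetch the page over the virtual IP from the command line; while node 1 holds the resources, it should return node 1's page:

[root@linuxhelp1 ~]# curl http://192.168.5.150
This is Node1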
Load the following link in a browser: http://<IP_ADDR> (use the virtual IP, 192.168.5.150). The browser displays the page from node 1. Now stop the cluster service on node 1.
[root@linuxhelp1 ~]# pcs cluster stop linuxhelp1
linuxhelp1: Stopping Cluster (pacemaker)...
linuxhelp1: Stopping Cluster (corosync)...
Access the web service again using the same IP; the page is now served automatically from node 2.
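To bring node 1 back into the cluster, start its cluster services again. Because resource stickiness was set to INFINITY earlier, the resources will stay on node 2 rather than failing back automatically:

[root@linuxhelp1 ~]# pcs cluster start linuxhelp1
linuxhelp1: Starting Cluster...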