This article will walk through the steps required to build a highly-available Apache cluster on CentOS 7. In CentOS 7 (as in Red Hat Enterprise Linux 7) the cluster stack has moved to Pacemaker/Corosync, with a new command line tool, pcs, to manage the cluster (replacing commands such as ccs and clusvcadm in earlier releases).
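For readers coming from the CentOS 6 / RHEL 6 stack, a rough and purely illustrative mapping of a few common operations to their pcs equivalents is shown below; the exact flags on the older tools varied between releases and the node names here are placeholders, so treat this as a sketch rather than a reference:
# CentOS 6 (ccs / clusvcadm)                       # CentOS 7 (pcs)
ccs -h node01 --createcluster webcluster      ->   pcs cluster setup --name webcluster node01 node02
clusvcadm -r <service> -m <node>              ->   pcs resource move <resource> <node>
clusvcadm -d <service> / clusvcadm -e <service> -> pcs resource disable / enable <resource>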
The cluster will be a two-node cluster comprising nodes centos05 and centos07, and iSCSI shared storage will be presented from node fedora01. There will be an 8GB LUN presented for shared storage, and a 1GB LUN for fencing purposes. I have covered setting up iSCSI storage with SCSI-3 persistent reservations in a previous article. There is no need to use CLVMD in this example, as we will be utilising a simple failover filesystem instead.
The first step is to add appropriate entries to /etc/hosts on both nodes for all nodes, including the storage node, to safeguard against DNS failure:
# vi /etc/hosts
10.1.1.107 centos05
10.1.1.108 fedora01
10.1.1.111 centos07
Next, bring both cluster nodes fully up-to-date, and reboot them:
# yum -y update
# systemctl reboot
When the systems are back online, install the appropriate packages for cluster setup, the service we're running (Apache), and iscsi-initiator-utils for iSCSI initiation:
# yum -y install pcs fence-agents-all iscsi-initiator-utils httpd wget
Confirm that the firewall is running under FirewallD control:
# firewall-cmd --state
running
Add the high-availability service to the running, and permanent, firewall configuration:
# firewall-cmd --permanent --add-service=high-availability
success
# firewall-cmd --add-service=high-availability
success
# firewall-cmd --list-services
dhcpv6-client high-availability ssh
Set a password for the hacluster user. It is advised to set the same password on both nodes:
# passwd hacluster
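If you are scripting the build, a non-interactive alternative on RHEL/CentOS (which support the --stdin option to passwd) would be the following, run on both nodes with a password of your choosing:
# echo 'YourSecurePassword' | passwd --stdin hacluster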
Start the pcsd.service unit, and set it to be enabled at the appropriate target:
# systemctl start pcsd.service
# systemctl is-active pcsd.service
active
# systemctl enable pcsd.service
ln -s '/usr/lib/systemd/system/pcsd.service' '/etc/systemd/system/multi-user.target.wants/pcsd.service'
# systemctl is-enabled pcsd.service
enabled
Next, from one node only, authorise both cluster nodes:
# pcs cluster auth centos05 centos07
Username: hacluster
Password:
centos05: Authorized
centos07: Authorized
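If the authorisation step fails or times out, it is worth confirming that the high-availability firewalld service actually opened the pcsd port (2224/tcp) on both nodes; one way to check, assuming your firewalld build supports the --info-service option, is:
# firewall-cmd --info-service=high-availability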
iSCSI Configuration
As previously pointed out, I've covered this in depth in a previous article, so I'll only provide a cursory overview here.
Create the appropriate LVM devices for use as backing stores for the failover filesystem and fence device:
[root@fedora01 ~]# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda2  fedora lvm2 a--  19.51g      0
  /dev/sdb          lvm2 a--  20.00g 20.00g
[root@fedora01 ~]# vgcreate vg_data /dev/sdb
  Volume group "vg_data" successfully created
[root@fedora01 ~]# lvcreate -L 8G -n lv_centos07_fs vg_data
  Logical volume "lv_centos07_fs" created
[root@fedora01 ~]# lvcreate -L 1G -n lv_centos07_fence vg_data
  Logical volume "lv_centos07_fence" created
Grab the initiator names from both cluster nodes:
[root@centos05 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:93b6e953b121
[root@centos07 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:2a7df8c5f243
Use targetcli to configure the iSCSI storage LUNs, and add appropriate ACLs:
[root@fedora01 ~]# targetcli
/> cd /backstores/block
/backstores/block> create 8g-centos07-fs /dev/vg_data/lv_centos07_fs
Created block storage object 8g-centos07-fs using /dev/vg_data/lv_centos07_fs.
/backstores/block> create 1g-centos07-fence /dev/vg_data/lv_centos07_fence
Created block storage object 1g-centos07-fence using /dev/vg_data/lv_centos07_fence.
/backstores/block> cd /iscsi
/iscsi> create
Created target iqn.2003-01.org.linux-iscsi.fedora01.x8664:sn.87fc672b33bf.
Created TPG 1.
/iscsi> cd iqn.2003-01.org.linux-iscsi.fedora01.x8664:sn.87fc672b33bf/tpg1/luns
/iscsi/iqn.20...3bf/tpg1/luns> create /backstores/block/8g-centos07-fs
Created LUN 0.
/iscsi/iqn.20...3bf/tpg1/luns> create /backstores/block/1g-centos07-fence
Created LUN 1.
/iscsi/iqn.20...3bf/tpg1/luns> cd ../acls
/iscsi/iqn.20...3bf/tpg1/acls> create iqn.1994-05.com.redhat:93b6e953b121
Created Node ACL for iqn.1994-05.com.redhat:93b6e953b121
Created mapped LUN 1.
Created mapped LUN 0.
/iscsi/iqn.20...3bf/tpg1/acls> create iqn.1994-05.com.redhat:2a7df8c5f243
Created Node ACL for iqn.1994-05.com.redhat:2a7df8c5f243
Created mapped LUN 1.
Created mapped LUN 0.
/iscsi/iqn.20...3bf/tpg1/acls> cd /
/> saveconfig
/> exit
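On a LIO/targetcli-based target such as fedora01, the configuration saved with saveconfig is restored at boot by the target service, so it is worth making sure that service is enabled on the storage node (service name assumed here to be target.service, as shipped alongside targetcli):
[root@fedora01 ~]# systemctl enable target.service
[root@fedora01 ~]# systemctl start target.service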
Now, on each cluster node, discover the newly created target and log in:
# iscsiadm --mode discovery --type sendtargets --portal 10.1.1.108
# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.fedora01.x8664:sn.87fc672b33bf -l -p 10.1.1.108:3260
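To confirm that both nodes logged in successfully before continuing, you can list the active iSCSI sessions (standard iscsiadm usage, shown purely as a sanity check):
# iscsiadm -m session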
Start and enable the iscsi and iscsid services, if they're not already. Mine were in a strange state, as seen below (iscsid had been started but wasn't enabled, and iscsi had been enabled but wasn't started):
# systemctl start iscsi.service
# systemctl is-enabled iscsi.service
enabled
# systemctl is-active iscsid.service
active
# systemctl is-enabled iscsid.service
# systemctl enable iscsid.service
Use fdisk to confirm that the LUNs are available:
# fdisk -l
...
Disk /dev/sdb: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes

Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes
Take note of the devices – here /dev/sdb is our failover filesystem of size 8GB, and /dev/sdc is the fence device of size 1GB. For consistency, however, we won't use these device names; we'll use the devices under /dev/disk/by-id, so look up the corresponding entries:
# ls -l /dev/disk/by-id
...
lrwxrwxrwx. 1 root root 9 Aug 11 20:25 wwn-0x60014055f0cfae3d6254576932ddc1f7 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Aug 11 20:25 wwn-0x6001405708e9716ed8644369541e0b80 -> ../../sdc
So wwn-0x60014055f0cfae3d6254576932ddc1f7 is the 8G LUN and wwn-0x6001405708e9716ed8644369541e0b80 is the 1G LUN. We will reference these devices where required.
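If you are unsure which by-id symlink corresponds to which LUN, cross-checking the sizes with lsblk is a quick way to verify (standard lsblk columns; device names may differ on your system):
# lsblk -o NAME,SIZE,TYPE /dev/sdb /dev/sdc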
Cluster Configuration
Create and start the cluster. All pcs commands should be executed from a single node unless otherwise noted.
# pcs cluster setup --start --name webcluster centos05 centos07
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services...
Removing all cluster configuration files...
centos05: Succeeded
centos05: Starting Cluster...
centos07: Succeeded
centos07: Starting Cluster...
Enable the cluster to start automatically at boot:
# pcs cluster enable --all
centos05: Cluster Enabled
centos07: Cluster Enabled
If you don't do this, you'll have to run pcs cluster start manually on a node after it reboots.
Check the cluster status:
# pcs cluster status
Cluster Status:
 Last updated: Mon Aug 11 20:29:53 2014
 Last change: Mon Aug 11 20:29:53 2014 via crmd on centos07
 Stack: corosync
 Current DC: centos07 (2) - partition with quorum
 Version: 1.1.10-32.el7_0-368c726
 2 Nodes configured
 0 Resources configured

PCSD Status:
  centos05: Online
  centos07: Online
As we can see, both nodes are online and the cluster is quorate. Next, add a STONITH device (i.e. a fencing device); in our case this is the 1GB LUN presented to both nodes over iSCSI. Note the use of the /dev/disk/by-id path to the device:
# pcs stonith create iscsi-stonith-device fence_scsi devices=/dev/disk/by-id/wwn-0x6001405708e9716ed8644369541e0b80 meta provides=unfencing
# pcs stonith show iscsi-stonith-device
 Resource: iscsi-stonith-device (class=stonith type=fence_scsi)
  Attributes: devices=/dev/disk/by-id/wwn-0x6001405708e9716ed8644369541e0b80
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (iscsi-stonith-device-monitor-interval-60s)
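Once the cluster has unfenced the nodes, you can optionally confirm that both nodes' keys are registered on the fence LUN using sg_persist from the sg3_utils package (install it separately if it isn't present; this is a read-only check of the SCSI-3 persistent reservations that fence_scsi relies on):
# sg_persist -n -i -k -d /dev/disk/by-id/wwn-0x6001405708e9716ed8644369541e0b80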
Next, create a partition on the 8GB LUN – this will house a filesystem to be used as the DocumentRoot for our Apache installation:
# fdisk /dev/disk/by-id/wwn-0x60014055f0cfae3d6254576932ddc1f7
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xcf0ffc26.
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (8192-16777215, default 8192):
Using default value 8192
Last sector, +sectors or +size{K,M,G} (8192-16777215, default 16777215):
Using default value 16777215
Partition 1 of type Linux and of size 8 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Create a filesystem on the new partition:
# mkfs.ext4 /dev/disk/by-id/wwn-0x60014055f0cfae3d6254576932ddc1f7-part1
On the other node, run partprobe so that the new partition is visible without the need to reboot:
# partprobe
Temporarily mount the new filesystem on one node, and sparsely populate the DocumentRoot for testing, remembering to unmount once done:
# mount /dev/disk/by-id/wwn-0x60014055f0cfae3d6254576932ddc1f7-part1 /var/www
# mkdir /var/www/html
# mkdir /var/www/cgi-bin
# mkdir /var/www/error
# restorecon -R /var/www
# echo "Test" > /var/www/html/index.html
# umount /var/www
Create the filesystem cluster resource (fs_res), in a new resource group (apachegroup) which will be used to group the resources together as one unit:
# pcs resource create fs_res Filesystem device="/dev/disk/by-id/wwn-0x60014055f0cfae3d6254576932ddc1f7-part1" directory="/var/www" fstype="ext4" --group apachegroup
# pcs resource show
 Resource Group: apachegroup
     fs_res     (ocf::heartbeat:Filesystem):    Started
The Apache cluster health check uses the Apache server-status handler, so add the following to your httpd.conf:
# vi /etc/httpd/conf/httpd.conf
<Location /server-status>
SetHandler server-status
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>
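Note that CentOS 7 ships Apache 2.4, where the 2.2-style Order/Deny/Allow directives rely on mod_access_compat. If you prefer the native 2.4 access-control syntax, an equivalent stanza (assuming access should be restricted to the local host) would be:
<Location /server-status>
SetHandler server-status
Require local
</Location>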
Add an IPaddr2 address resource (vip_res); this will be the floating virtual IP address that will fail over between the nodes. It will be added to the same resource group as the filesystem resource we just created:
# pcs resource create vip_res IPaddr2 ip=10.1.1.124 cidr_netmask=24 --group apachegroup
# pcs resource show
 Resource Group: apachegroup
     fs_res     (ocf::heartbeat:Filesystem):    Started
     vip_res    (ocf::heartbeat:IPaddr2):       Started
Finally, create an Apache resource:
# pcs resource create httpd_res apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group apachegroup
# pcs resource show
 Resource Group: apachegroup
     fs_res     (ocf::heartbeat:Filesystem):    Started
     vip_res    (ocf::heartbeat:IPaddr2):       Started
     httpd_res  (ocf::heartbeat:apache):        Started
Open the firewall on both nodes to allow HTTP access:
# firewall-cmd --add-service=http
# firewall-cmd --add-service=http --permanent
The cluster configuration is now complete.
Testing
Browsing to http://<vip_address>/index.html should yield the result “Test”. Checking the cluster status in this case, all resources are online on node centos07:
# pcs status
Cluster name: webcluster
Last updated: Mon Aug 11 20:41:06 2014
Last change: Mon Aug 11 20:39:40 2014 via cibadmin on centos05
Stack: corosync
Current DC: centos07 (2) - partition with quorum
Version: 1.1.10-32.el7_0-368c726
2 Nodes configured
4 Resources configured
Online: [ centos05 centos07 ]
Full list of resources:
 iscsi-stonith-device  (stonith:fence_scsi):   Started centos05
 Resource Group: apachegroup
     fs_res     (ocf::heartbeat:Filesystem):    Started centos07
     vip_res    (ocf::heartbeat:IPaddr2):       Started centos07
     httpd_res  (ocf::heartbeat:apache):        Started centos07
PCSD Status:
centos05: Online
centos07: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
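Rather than a browser, you can also test the virtual IP from the command line on any host on the cluster network, for example:
# curl http://10.1.1.124/index.html
Test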
Fail the resources over by selecting one of the resources in the resource group and issuing a pcs resource move upon it:
# pcs resource move httpd_res
# pcs status
Cluster name: webcluster
Last updated: Mon Aug 11 20:41:30 2014
Last change: Mon Aug 11 20:41:27 2014 via crm_resource on centos05
Stack: corosync
Current DC: centos07 (2) - partition with quorum
Version: 1.1.10-32.el7_0-368c726
2 Nodes configured
4 Resources configured
Online: [ centos05 centos07 ]
Full list of resources:
 iscsi-stonith-device  (stonith:fence_scsi):   Started centos05
 Resource Group: apachegroup
     fs_res     (ocf::heartbeat:Filesystem):    Started centos05
     vip_res    (ocf::heartbeat:IPaddr2):       Started centos05
     httpd_res  (ocf::heartbeat:apache):        Started centos05
PCSD Status:
centos05: Online
centos07: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
We can then shift the resources back to the original node by running pcs resource clear on the moved resource (clear removes the location constraint that move created, allowing the cluster to rebalance the group):
# pcs resource clear httpd_res
# pcs status
Cluster name: webcluster
Last updated: Mon Aug 11 20:41:48 2014
Last change: Mon Aug 11 20:41:46 2014 via crm_resource on centos05
Stack: corosync
Current DC: centos07 (2) - partition with quorum
Version: 1.1.10-32.el7_0-368c726
2 Nodes configured
4 Resources configured
Online: [ centos05 centos07 ]
Full list of resources:
 iscsi-stonith-device  (stonith:fence_scsi):   Started centos05
 Resource Group: apachegroup
     fs_res     (ocf::heartbeat:Filesystem):    Started centos07
     vip_res    (ocf::heartbeat:IPaddr2):       Started centos07
     httpd_res  (ocf::heartbeat:apache):        Started centos07
PCSD Status:
centos05: Online
centos07: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
You can use df or findmnt to confirm filesystem failover, and ip addr show to confirm IP address failover.
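For example, on the node currently hosting the resource group, the following (output and interface names will vary by system) should show the mounted DocumentRoot and the floating address:
# findmnt /var/www
# ip addr show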
Source: http://www.tokiwinter.com/building-a-highly-available-apache-cluster-on-centos-7/