Percona Galera Cluster

Install Percona Galera Cluster on CentOS 6.5


Galera Cluster for MySQL is a multi-master cluster using synchronous replication. It is scalable, easy to use and provides high availability.
Prerequisites:
  • Disable SELinux
  • Open TCP ports 3306, 4444, 4567 and 4568, or disable iptables (example commands below)
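On CentOS 6 this can be done roughly as follows; this is a minimal sketch, so adjust it to your own environment (the SELinux change in /etc/selinux/config only takes full effect after a reboot):
# put SELinux into permissive mode for the current boot
setenforce 0
# disable it permanently (applies after reboot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# either stop iptables entirely...
service iptables stop && chkconfig iptables off
# ...or open only the ports Galera needs
iptables -I INPUT -p tcp -m multiport --dports 3306,4444,4567,4568 -j ACCEPT
service iptables save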
We’ll start out by setting up the Percona yum repo and installing the necessary software.
Setup the Percona Yum Repo
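If the Percona repo is not already configured, it can be installed via the percona-release package. The URL and version below are an assumption and may have changed since this was written, so check Percona's download page for the current package:
rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm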
Install Galera RPMs from Percona
yum install Percona-XtraDB-Cluster-server-56 Percona-XtraDB-Cluster-client-56 Percona-XtraDB-Cluster-galera-2 -y
 
Configure Nodes

Three or more nodes is the recommended cluster size; an odd number helps the cluster keep quorum if a node fails. Keep in mind that the cluster is only as fast as its slowest member, so using identical (or very similar) hardware or virtual resource configurations on every node is highly recommended.
We will have three nodes here, db01, db02 and db03.
  • db01 – 10.77.1.51
  • db02 – 10.77.1.52
  • db03 – 10.77.1.53
Setup conf files on all nodes
The conf file (/etc/my.cnf) is identical on every node except for the IP address in wsrep_node_address. The example below is for db01; on db02 and db03 only that option changes.
 
 
[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of all nodes
wsrep_cluster_address=gcomm://10.77.1.51,10.77.1.52,10.77.1.53
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# db01 address - put db02 and db03 address in on other nodes.
wsrep_node_address=10.77.1.51
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=db_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:Mys3cretPAssword"
Bootstrap first node
We’ll now use the init script to start the instance on db01 with the additional parameter --wsrep-cluster-address="gcomm://". We don’t want it to start with the value from my.cnf, because the other servers listed there aren’t running yet and the cluster would not bootstrap properly.
 
 
/etc/init.d/mysql start --wsrep-cluster-address="gcomm://"
Set MySQL root Password
mysqladmin password MySecretPassWord
Create sstuser
The sstuser is the MySQL account Galera uses for State Snapshot Transfers (SST), i.e. to sync new or rejoining nodes, and it must match the credentials given in wsrep_sst_auth in my.cnf. You could use root here, but that’s not a good idea.
echo "create user 'sstuser'@'localhost' identified by 'Mys3cretPAssword';" | mysql -uroot -p
echo "grant reload, lock tables, replication client on *.* to 'sstuser'@'localhost';" | mysql -uroot -p
echo "flush privileges;" | mysql -uroot -p
Check status of 1 node cluster
Let’s take a minute to verify the cluster is in fact bootstrapped.
echo "show status like 'wsrep%';" | mysql -uroot -p
Copy conf to other nodes in cluster
Now copy the above /etc/my.cnf to the other nodes (db02 and db03), changing only wsrep_node_address to each server’s own IP address, and start mysql as normal with no additional flags. On startup, each node reads wsrep_cluster_address=gcomm://10.77.1.51,10.77.1.52,10.77.1.53, knows it is a member of that cluster, and automatically synchronizes all data, including users.
# on db02 and db03 after conf has been setup
/etc/init.d/mysql start
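
Once db02 and db03 have joined, the status check from earlier can be repeated on any node; wsrep_cluster_size should now report 3:
echo "show status like 'wsrep_cluster_size';" | mysql -uroot -p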
