Galera Cluster for MySQL is a multi-master cluster that uses synchronous replication. It is scalable, easy to use, and provides high availability.
Prerequisites:
- Disable SELinux
- Open TCP Ports 3306, 4444, 4567 and 4568 or disable iptables
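A quick sketch of these prerequisites on a RHEL/CentOS-style host might look like the following (this assumes iptables is in use and SELinux is configured in /etc/selinux/config; adapt to your environment):
# Switch SELinux to permissive now and disable it persistently
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Allow the MySQL, SST, and Galera replication ports, then save the rules
iptables -I INPUT -p tcp -m multiport --dports 3306,4444,4567,4568 -j ACCEPT
service iptables save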
We’ll start by setting up the Percona yum repository and installing the necessary software.
Set up the Percona Yum Repo
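One common way to do this is to install the percona-release package directly from Percona's yum repository; the exact URL may differ depending on your distribution and the release current at the time, so treat this as a sketch:
yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm -y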
Install Galera RPMs from Percona
yum install Percona-XtraDB-Cluster-server-56 Percona-XtraDB-Cluster-client-56 Percona-XtraDB-Cluster-galera-2 -y
Configure Nodes
Three or more is the suggested number of nodes in the cluster. When setting this up, keep in mind that the cluster is only as fast as its weakest node, so using identical or very similar hardware (or virtual resource) configurations is highly recommended.
We will have three nodes here: db01, db02, and db03.
- db01 – 10.77.1.51
- db02 – 10.77.1.52
- db03 – 10.77.1.53
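If db01, db02, and db03 aren't already resolvable via DNS, an optional step is to map them in /etc/hosts on each node; Galera itself only uses the IP addresses in the configuration below, so this is purely for convenience:
10.77.1.51   db01
10.77.1.52   db02
10.77.1.53   db03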
Set up the conf file on all nodes
The conf file should look like this on all nodes, with the only change being the IP address in wsrep_node_address. The example below is for db01, but only because its IP appears in that configuration option.
wsrep_provider=/usr/lib64/libgalera_smm.so
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_node_address=10.77.1.51
wsrep_sst_method=xtrabackup-v2
wsrep_cluster_name=db_cluster
wsrep_sst_auth="sstuser:Mys3cretPAssword"
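The bootstrap step below overrides the cluster address from my.cnf, which implies the file also contains a wsrep_cluster_address line listing the cluster members. Assuming the three node IPs above, that line would look something like this:
wsrep_cluster_address=gcomm://10.77.1.51,10.77.1.52,10.77.1.53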
Bootstrap first node
We’ll now use the init script to start the instance on db01 with the additional parameter --wsrep-cluster-address="gcomm://". We don’t want it to start with the value in my.cnf, because the cluster and the servers listed there don’t exist yet and the cluster would not bootstrap properly.
/etc/init.d/mysql start --wsrep-cluster-address="gcomm://"
Set MySQL root Password
mysqladmin password MySecretPassWord
Create sstuser
The sstuser is the account Galera uses for state transfers to keep nodes in sync, and it is specified in my.cnf (wsrep_sst_auth). You could use root here, but that’s not a good idea.
echo "create user 'sstuser'@'localhost' identified by 'Mys3cretPAssword';" | mysql -uroot -p |
echo "grant reload, lock tables, replication client on *.* to 'sstuser'@'localhost';" | mysql -uroot -p |
echo "flush privileges;" | mysql -uroot -p |
Check status of the one-node cluster
Let’s take a minute to verify that the cluster is in fact bootstrapped.
echo "show status like 'wsrep%';" | mysql -uroot -p |
Copy conf to other nodes in cluster
Now copy the above /etc/my.cnf to the other nodes (db02 and db03), replacing only wsrep_node_address with the correct IP address of each server, and start mysql as normal with no additional flags. Upon starting, the service will read wsrep_cluster_address=gcomm://10.77.1.51 and so on from my.cnf, so it knows it is a member of that cluster and will automatically synchronize all data, including users.
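A sketch of that process, run from db01 and assuming root SSH access to the other nodes, might be:
# Push the config to each node, fix its wsrep_node_address, then start mysql normally
scp /etc/my.cnf db02:/etc/my.cnf
ssh db02 "sed -i 's/^wsrep_node_address=.*/wsrep_node_address=10.77.1.52/' /etc/my.cnf && /etc/init.d/mysql start"
scp /etc/my.cnf db03:/etc/my.cnf
ssh db03 "sed -i 's/^wsrep_node_address=.*/wsrep_node_address=10.77.1.53/' /etc/my.cnf && /etc/init.d/mysql start"
Each node will then pull a state snapshot from the running cluster via the sstuser created earlier.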