DRBD on CentOS 7.0, Oracle Linux 7.0 and Red Hat 7.0

The Distributed Replicated Block Device (DRBD) is a distributed replicated storage system for the Linux platform. DRBD is essentially a network-based RAID 1: if you need to protect the data on a disk by mirroring it to another disk over the network, DRBD is what you configure.



Requirements
- Two partitions of the same size, one on each machine (you could also set up DRBD between two whole disks). In this tutorial I will use the partition sdb1.
- Networking between the machines (node1 & node2):
  + node1: 192.168.1.1
  + node2: 192.168.1.2
- Working DNS resolution (/etc/hosts file)
- NTP-synchronized time on both nodes
- Firewall turned off, or port 7788 allowed (see the sketch below)
Run the following steps on both machines.
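If you prefer to keep firewalld running, here is a minimal sketch for opening the DRBD port on both nodes (assuming the default zone):

[root@node1 ~]# firewall-cmd --permanent --add-port=7788/tcp
[root@node1 ~]# firewall-cmd --reload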


Install the ELRepo repository on both systems:

Import the public key:
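[root@node1 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org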


Detailed info on the GPG key used by the ELRepo Project can be found at https://www.elrepo.org/tiki/key

To install ELRepo for RHEL-7, SL-7 or CentOS-7:
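At the time of writing the EL7 release package was the following; check elrepo.org for the current version number:

[root@node1 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm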


To make use of the ELRepo mirror system, also install yum-plugin-fastestmirror.

To install ELRepo for RHEL-6, SL-6 or CentOS-6:
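Again, the version number may have moved on; the EL6 release package at the time of writing:

[root@node1 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm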


To make use of the ELRepo mirror system, also install yum-plugin-fastestmirror.


Install DRBD:

[root@node1 ~]# yum -y install drbd84-utils kmod-drbd84
[root@node2 ~]# yum -y install drbd84-utils kmod-drbd84

Load the drbd module manually on both machines (or reboot):
/sbin/modprobe drbd
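You can confirm the module is loaded; a drbd line should show up in lsmod:

[root@node1 ~]# lsmod | grep drbd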

Partition the DRBD backing disk on both machines if needed:
[root@node1 ~]# fdisk  /dev/sdb
[root@node2 ~]# fdisk  /dev/sdb

Example:
[root@node1 yum.repos.d]# fdisk  /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x2a0f1472.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help):

Input p to print the partition info:

Command (m for help): p

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders, total 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2a0f1472

   Device Boot      Start         End      Blocks   Id  System

Command (m for help):
Input n to create a new partition, choose p (primary) and partition number 1, then accept the defaults for the first and last sector:

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-4194303, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-4194303, default 4194303):
Using default value 4194303

Input w to write the changes to disk:

Command (m for help): w
The partition table has been altered!
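To confirm the kernel sees the new sdb1 partition (lsblk ships with util-linux on EL7):

[root@node1 ~]# lsblk /dev/sdb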

Create the Distributed Replicated Block Device resource file (/etc/drbd.d/clusterdb.res):


[root@node1 ~]# vi /etc/drbd.d/clusterdb.res

The file content is as follows:


resource clusterdb {
     startup {
           wfc-timeout 30;
           outdated-wfc-timeout 20;
           degr-wfc-timeout 30;
     }
     net {
           cram-hmac-alg sha1;
           shared-secret sync_disk;
     }
     syncer {
           rate 10M;
           al-extents 257;
           on-no-data-accessible io-error;
     }
     on node1 {
           device /dev/drbd0;
           disk /dev/sdb1;
           address 192.168.1.1:7788;
           meta-disk internal;
     }
     on node2 {
           device /dev/drbd0;
           disk /dev/sdb1;
           address 192.168.1.2:7788;
           meta-disk internal;
     }
}
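Before creating any metadata you can ask drbdadm to parse the file as a quick sanity check; syntax errors in clusterdb.res will be reported here:

[root@node1 ~]# drbdadm dump clusterdb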
Make sure that DNS resolution is working. Add both nodes to /etc/hosts:
192.168.1.1 node1 node1.example.com
192.168.1.2 node2 node2.example.com
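You can verify the entries with getent, which queries the same resolver libc uses:

[root@node1 ~]# getent hosts node1 node2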

Set an NTP server and add it to the crontab on both machines:
vi /etc/crontab
5 * * * * root ntpdate your.ntp.server
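On EL7 you could instead rely on the distribution's default time daemon rather than a cron job (a sketch assuming the chrony package, which CentOS 7 installs by default):

[root@node1 ~]# systemctl start chronyd
[root@node1 ~]# systemctl enable chronyd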

Copy the DRBD configuration and the hosts file to node2:
 [root@node1 ~]# scp /etc/drbd.d/clusterdb.res node2:/etc/drbd.d/clusterdb.res
 [root@node1 ~]# scp /etc/hosts node2:/etc/

Initialize the DRBD meta data storage on both machines:
[root@node1 ~]# drbdadm create-md clusterdb
[root@node2 ~]# drbdadm create-md clusterdb
 
***************** 
If the device we are trying to initialize already contains a filesystem, we’ll obtain the following error:
 
[root@node1 ~]# drbdadm create-md clusterdb
v08 Magic number not found
md_offset 2583687168
al_offset 2583654400
bm_offset 2583572480

Found ext3 filesystem which uses 2523136 kB
current configuration leaves usable 2523020 kB

Device size would be truncated, which
would corrupt data and result in
'access beyond end of device' errors.
You need to either
   * use external meta data (recommended)
   * shrink that filesystem first
   * zero out the device (destroy the filesystem)
Operation refused.

Command 'drbdmeta /dev/drbd1 v08 /dev/mapper/VolGroup01-LogVol00 internal create-md' terminated with exit code 40
drbdadm aborting
At this time we have 3 options, as the DRBD error correctly says:
- Put the metadata on another disk/partition (external meta data).
- Shrink the filesystem so the metadata fits on the volume/partition.
- Zero out the device (destroy the filesystem).

I have decided to use option 3 (destroy the filesystem), so I'll execute:
[root@node1 ~]# dd if=/dev/zero bs=1M count=1 of=/dev/mapper/VolGroup01-LogVol00; sync
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0104967 seconds, 99.9 MB/s

Once we have fixed this problem, we can rerun the create-md command and get a success message like the following:
 There appears to be a v08 flexible-size internal meta data block
 already in place on /dev/sdb1 at byte offset 2146430976
 Do you really want to overwrite the existing v08 meta-data?
 [need to type 'yes' to confirm] yes
 Writing meta data...
 initializing activity log
 NOT initialized bitmap
 New drbd meta data block successfully created.

********************************


Start the DRBD service on both nodes (remembering that we will be shutting it down again as it will be cluster managed):

# service drbd start

or

# systemctl start drbd.service



On the MASTER NODE only, promote the volume to the primary role:
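[root@node1 ~]# drbdadm primary --force clusterdb

(This is the 8.4 syntax, matching the drbd84 packages installed earlier.)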
Note that in DRBD versions <8.4 (e.g. 8.3) you will need to use a different command:
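[root@node1 ~]# drbdadm -- --overwrite-data-of-peer primary clusterdb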

You can cat /proc/drbd to watch the synchronisation operation in progress, or run service drbd status (sample output is shown further below).
On both nodes, create the mountpoint for the DRBD-replicated volume, which in our case will be /data. It is needless to say (but I'll say it anyway) that this volume must only ever be mounted on a single node at a time.
From the MASTER NODE, create an ext4 filesystem and mount/umount the new filesystem to test. Ensure that you use /dev/drbd0 – i.e. the DRBD device – not the backing store device. The exact commands are shown in the mount section below.
If you later place DRBD under cluster management, remember to stop the service on both nodes first:

# systemctl stop drbd.service

You have now finished the installation of DRBD. Run the following command to watch the two machines synchronizing with each other:
[root@node1 yum.repos.d]# cat /proc/drbd
 version: 8.3.16 (api:88/proto:86-97)
 GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build32R6, 2013-09-27 15:59:12
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
 ns:78848 nr:0 dw:0 dr:79520 al:0 bm:4 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:2017180
 [>....................] sync'ed: 4.0% (2017180/2096028)K
 finish: 0:02:58 speed: 11,264 (11,264) K/sec
ns:1081628 nr:0 dw:33260 dr:1048752 al:14 bm:64 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

In the following steps I will show you how to mount the data. In single-primary mode the data can be mounted only on the primary node, so promote a node to primary with this command:
[root@node1 ~]# drbdadm primary clusterdb

You can also demote a node to secondary with the following command:
[root@node1 ~]# drbdadm secondary clusterdb

On the primary node, the first time you use the device you should format it with an ext4 filesystem:
[root@node1 yum.repos.d]# mkfs.ext4 /dev/drbd0
 mke2fs 1.41.12 (17-May-2010)
 Filesystem label=
 OS type: Linux
 Block size=4096 (log=2)
 Fragment size=4096 (log=2)
 Stride=0 blocks, Stripe width=0 blocks
 131072 inodes, 524007 blocks
 26200 blocks (5.00%) reserved for the super user
 First data block=0
 Maximum filesystem blocks=536870912
 16 block groups
 32768 blocks per group, 32768 fragments per group
 8192 inodes per group
 Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912

You can now mount the DRBD device on your primary node:
[root@node1 ~]# mkdir /data
[root@node1 ~]# mount /dev/drbd0  /data

Check:
[root@node1 ~]# df -h
Filesystem                              Size  Used Avail Use% Mounted on
/dev/mapper/vg_unixmencentos65-lv_root   19G  3.6G   15G  20% /
tmpfs                                   1.2G   44M  1.2G   4% /dev/shm
/dev/sda1                               485M   80M  380M  18% /boot
/dev/drbd0                              2.0G   36M  1.9G   2% /data
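At this point you can test a manual failover by hand, a sketch using only the commands introduced above (unmount and demote on node1 first, then promote and mount on node2):

[root@node1 ~]# umount /data
[root@node1 ~]# drbdadm secondary clusterdb
[root@node2 ~]# drbdadm primary clusterdb
[root@node2 ~]# mkdir -p /data
[root@node2 ~]# mount /dev/drbd0 /data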

Comments

  1. Thanks - Most helpful.
I did find I had to put the FQDN of the nodes into the clusterdb.res file, and I also had to use `drbdadm attach all` on both nodes before I could set the primary. The output of `drbd-overview` was just showing Diskless/Diskless before I attached the devices.

