DRBD (Distributed Replicated Block Device) is a technology that replicates data between hosts over TCP/IP. It is used to build HA clusters and can be thought of as a RAID-1 implementation over the network.
As you may all know, the DRBD kernel module is now included in the Hardy Heron Server Edition kernel, so there is no more source downloading and compiling, which makes it easier to install and configure. Here I'll show you how to install DRBD and make a simple configuration using one resource (testing). I'll not cover how to install and configure heartbeat for automatic failover (that will be shown in a future post).
First of all, we will have to install Ubuntu Hardy Heron Server Edition on two servers and manually edit the partition table. We do this to leave FREE SPACE that will be used later on as the block device for DRBD. If you've seen the DRBD + NFS HowTo on HowToForge.com, note that creating the partitions for DRBD and leaving them unmounted will NOT work here, and we won't be able to create the resource for DRBD. This is why we leave the FREE SPACE and create the partition later on, once the system is installed.
So, after the installation we will have to create the partition, or partitions (in case we are creating a separate partition for the meta-data; in this case it will be internal), that DRBD will use as a block device. For this we will use fdisk as follows:
fdisk /dev/sda
n (to create a new partition)
l (to create it as a logical partition; the type defaults to 83, Linux)
w (to write the changes)
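Before rebooting, it is worth double-checking that the new logical partition exists and has the expected type. Listing the partition table again is enough (the partition number shown here, /dev/sda7, matches the one used later in drbd.conf; adjust it if your layout differs):

```shell
# List the partition table on the disk we just edited.
# The new partition (/dev/sda7 in this example) should appear with Id 83 (Linux).
sudo fdisk -l /dev/sda
```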
After creating the partitions we will have to REBOOT both servers so that the kernel uses the new partition table. After reboot we have to install drbd8-utils on both servers:
sudo apt-get install drbd8-utils
Now that we have drbd8-utils installed, we can configure /etc/drbd.conf, where we will define a simple DRBD resource, as follows:
resource testing { # name of the resource
  protocol C;

  on drbd1 { # hostname of the first server
    device /dev/drbd0;         # name of the DRBD device
    disk /dev/sda7;            # partition to use, which was created using fdisk
    address 172.16.0.130:7788; # IP address and port number used by DRBD
    meta-disk internal;        # where to store the meta-data
  }

  on drbd2 { # hostname of the second server
    device /dev/drbd0;
    disk /dev/sda7;
    address 172.16.0.131:7788;
    meta-disk internal;
  }

  disk {
    on-io-error detach;
  }

  net {
    max-buffers 2048;
    ko-count 4;
  }

  syncer {
    rate 10M;
    al-extents 257;
  }

  startup {
    wfc-timeout 0;
    degr-wfc-timeout 120; # 2 minutes
  }
}
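Before going further, it is worth sanity-checking the configuration file. drbdadm can parse the configuration and echo it back; if there is a syntax error it will complain instead:

```shell
# Parse /etc/drbd.conf and print the parsed configuration for the
# "testing" resource; a syntax error aborts with an error message.
sudo drbdadm dump testing
```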
Note that we are using drbd1 and drbd2 as hostnames. These hostnames must be configured, and each server should be able to ping the other via them (that means we need a DNS server, or entries for both servers in /etc/hosts).
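For example, assuming the two servers use the IP addresses from the configuration above, the following entries in /etc/hosts on both machines are enough:

```
172.16.0.130    drbd1
172.16.0.131    drbd2
```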
After creating the configuration in /etc/drbd.conf, we can now create the DRBD resource. For this we issue the following on both servers:
sudo drbdadm create-md testing
After issuing this command, we will be asked to confirm the creation of the meta-data on the block device.
Now we have to power off both servers. After powering them off, we start our first server; during boot, DRBD will wait for its peer and ask for confirmation to continue without it.
After confirming with 'yes', we can start the second server. Once the second server is running, both nodes' resources are Secondary, so we have to promote one of them to Primary. For this, we issue the following on the server we would like to hold the Primary role:
sudo drbdadm -- --overwrite-data-of-peer primary all
We verify this by issuing:
cat /proc/drbd
The output shows the connection state and the roles of both nodes (one Primary, one Secondary).
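As a rough illustration (version numbers and the transfer counters will differ on your system), the output of a healthy, connected pair looks something like this, with st: showing Primary/Secondary on the node we just promoted:

```
version: 8.0.11 (api:86/proto:86)
 0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0
```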
Well, up to this point, I've shown you how to install and configure DRBD on Hardy, and how to make one of the servers hold the resource as Primary. But we still don't have automatic failover or automatic mounting. In a future post I'll show how to configure heartbeat to provide automatic failover and take control of the resources, as well as how to configure STONITH with the meatware device, so that we won't end up in a split-brain condition (or at least we'll try). I'll also show how to configure NFS and MySQL to use this DRBD resource.
BTW, if you have questions you know where to find me :).