GlusterFS Installation

This page describes how to install the required GlusterFS services on CentOS 7.


Requirements

For the purpose of our network storage, we decided to have two different servers both hosting the same storage. This gives the service redundancy: if one of the servers were to go down, our client machines would still have access through the other. For the client machine, I used the Ansible machine that was provisioned in the previous lab. Below are the machines we had at our disposal and the information related to them.

| Host Name | IP Address | OS | Disk | Purpose |
| --- | --- | --- | --- | --- |
| gluster01-walber | 10.0.5.201 | CentOS 7 | /dev/sdb | Storage Node #1 |
| gluster02-walber | 10.0.5.202 | CentOS 7 | /dev/sdb | Storage Node #2 |
| ansible01-walber | 10.0.5.91 | CentOS 7 | n/a | Client Machine |

We also need to have DNS records configured for each of these machines so that they are reachable by hostname. Without working name resolution, the gluster peer probe and volume mount commands later in this guide, which reference the nodes by hostname, will fail.
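If DNS records are not available in your environment, a minimal fallback sketch is to append entries to /etc/hosts on all three machines. The IPs come from the table above and the domain matches the walber.local names used later in this guide; the ansible01 FQDN follows the same naming scheme but is my assumption:

```bash
# Append name resolution entries to /etc/hosts on all three machines.
# IPs are from the table above; the ansible01 FQDN is an assumption.
cat >> /etc/hosts <<'EOF'
10.0.5.201  gluster01-walber.walber.local  gluster01-walber
10.0.5.202  gluster02-walber.walber.local  gluster02-walber
10.0.5.91   ansible01-walber.walber.local  ansible01-walber
EOF
```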

Installing GlusterFS

This step needs to be repeated on both gluster01 and gluster02. Installing and running the service on both nodes lays the groundwork for the redundancy mentioned earlier.

  1. Install the centos-release-gluster package:
    1. yum install -y centos-release-gluster
  2. Once the repository is added to your system, you can now install GlusterFS.
    1. yum install -y glusterfs-server
  3. After installation completes, we can start, enable, and check on the service using:
    1. systemctl start glusterd && systemctl enable glusterd && systemctl status glusterd
    2. Note that the commands are chained with && rather than a single &, which would background each command instead of running them in sequence.
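As a quick sanity check after the steps above, you can confirm on each node that the daemon came up and is set to start at boot (a minimal sketch; the version output will vary with your installed release):

```bash
# Verify glusterd on each storage node.
systemctl is-active glusterd    # expect: active
systemctl is-enabled glusterd   # expect: enabled
gluster --version               # prints the installed GlusterFS release
```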

Configuring Firewall

There are two ways that I found to properly configure the firewall for our services. The first is to completely disable the firewall, which works in our learning environment but would not be an option in a real-world scenario. The second option is to accept all traffic coming from a specific IP address. By default, GlusterFS listens on TCP/24007 for management, but each brick you add listens on an additional port of its own, which makes port-by-port rules tedious to maintain; allowing the peer and client addresses wholesale avoids that.

Disabling Firewall

  1. systemctl stop firewalld && systemctl disable firewalld

Accept traffic from an IP

  1. firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="<ipaddress>" accept'
  2. firewall-cmd --reload
  3. The --permanent flag is required here: firewall-cmd --reload discards runtime-only rules and reloads the permanent configuration, so a rule added without it would vanish on reload.
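For example, on gluster01 you could allow the other peer and the client wholesale, using the addresses from the table above (a sketch; on gluster02, substitute 10.0.5.201 for the peer address):

```bash
# Permanently accept all traffic from the gluster peer and the client,
# then reload so the permanent rules take effect.
for ip in 10.0.5.202 10.0.5.91; do
    firewall-cmd --permanent --zone=public \
        --add-rich-rule="rule family=\"ipv4\" source address=\"${ip}\" accept"
done
firewall-cmd --reload
```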

Adding Storage

Assuming that you have one spare hard disk on your machine, /dev/sdb is the one I will use here for a brick. Create a single partition on the spare disk as shown below. As with the steps above, you will need to configure both nodes.

  1. We can use fdisk to easily configure drives on Linux; this command opens the fdisk menu for our drive:
    1. fdisk /dev/sdb
  2. Once opened, select n for New Partition. Then select the defaults for each option. This will create a new partition on our drive with the preferred configuration for our GlusterFS volume.
  3. We can then use p to list the new partition, and make sure that everything looks correct and there are no errors.
  4. Make sure to select w to write the changes before leaving fdisk; otherwise, none of your configuration will be saved.
  5. We then format the partition with whichever filesystem we want; we used ext4:
    1. mkfs.ext4 /dev/sdb1
  6. Then create a mount point at /network-storage/brick1/ and mount the partition on it:
    1. mkdir -p /network-storage/brick1
    2. mount /dev/sdb1 /network-storage/brick1
  7. The last step is to add an entry to /etc/fstab so that the mount persists after a reboot:
    1. echo "/dev/sdb1 /network-storage/brick1 ext4 defaults 0 0" | tee --append /etc/fstab
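If you would rather script the disk preparation than walk through fdisk's menu, here is a sketch of the same steps using parted instead. This is my substitution, not part of the original walkthrough, and it is destructive to /dev/sdb, so double-check the device name on both nodes:

```bash
# Label the spare disk, create one partition spanning it, format it,
# mount it at the brick path used in this guide, and persist the mount.
# Destroys any existing data on /dev/sdb.
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary ext4 1MiB 100%
mkfs.ext4 /dev/sdb1
mkdir -p /network-storage/brick1
mount /dev/sdb1 /network-storage/brick1
echo "/dev/sdb1 /network-storage/brick1 ext4 defaults 0 0" | tee --append /etc/fstab
```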

Configuring a Storage Pool

Before creating a volume, we need to create a trusted storage pool by adding gluster02-walber. You can run GlusterFS configuration commands on any one server in the cluster, and the same change will be applied across all other servers in the pool.

  1. For this step, you only need to configure this on gluster01-walber.
    1. gluster peer probe gluster02-walber.walber.local; this should give you a success message.
  2. You can then check the status of the storage pool by running:
    1. gluster peer status
  3. As well as list the storage pools with:
    1. gluster pool list
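Compactly, with the results you should expect as comments (a sketch; the exact output formatting varies by GlusterFS version):

```bash
# Run on gluster01 only.
gluster peer probe gluster02-walber.walber.local   # expect: peer probe: success
gluster peer status   # expect one peer with State: Peer in Cluster (Connected)
gluster pool list     # lists both nodes with their UUIDs and connection state
```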

Setting Up the GlusterFS Volume

Now that we have GlusterFS installed and configured, our peer connection set up, and our drives mounted, we can create the volume for Gluster. On both nodes, create a new directory named d1 under /network-storage/brick1/ to serve as the brick directory.

  1. In order to create the replicated volumes we run:
    1. gluster volume create gv0 replica 2 gluster01-walber.walber.local:/network-storage/brick1/d1 gluster02-walber.walber.local:/network-storage/brick1/d1
  2. Once that runs with no errors, start the volume using:
    1. gluster volume start gv0; you should receive a success message. (The volume name is gv0, as set in the create command; d1 is only the brick directory.)
  3. We can also check the volume's details with:
    1. gluster volume info gv0
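Putting this section together, a minimal sketch using the hostnames and paths from above:

```bash
# On both nodes: create the brick directory inside the mounted filesystem.
mkdir -p /network-storage/brick1/d1

# On gluster01 only: create, start, and inspect the 2-way replicated volume.
gluster volume create gv0 replica 2 \
    gluster01-walber.walber.local:/network-storage/brick1/d1 \
    gluster02-walber.walber.local:/network-storage/brick1/d1
gluster volume start gv0
gluster volume info gv0
```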

GlusterFS Client

At this point, your volumes should be up and running and all that's left to do is connect your client machine to the server.

  1. In order to connect to Gluster from our client machine, we have to install the GlusterFS client packages, which support mounting the GlusterFS filesystem. During these steps, run commands as root using su -. Since our client is CentOS 7 like the servers, we use yum here; apt-get and the glusterfs-client package name belong to Debian/Ubuntu, and on CentOS the FUSE mount support is provided by glusterfs-fuse:
    1. yum install -y centos-release-gluster
    2. yum install -y glusterfs glusterfs-fuse
  2. Make a new directory to mount the Gluster volume to. In my network, I used /mnt/gfs:
    1. mkdir -p /mnt/gfs
  3. Now, we can mount the GlusterFS volume to the mountpoint that we just created:
    1. mount -t glusterfs gluster01-walber.walber.local:/gv0 /mnt/gfs (the path after the colon is the volume name gv0, not the brick directory)
    2. You may have to add firewall rules similar to before on the storage nodes so that they accept Gluster connections from the client's IP.
  4. To confirm that the volume is mounted:
    1. df -hP /mnt/gfs
  5. Similar to before, you should add an entry to /etc/fstab so that the volume automatically mounts during reboots:
    1. gluster01-walber.walber.local:/gv0 /mnt/gfs glusterfs defaults,_netdev 0 0
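One caveat worth noting: the hostname in the mount line is only used to fetch the volume layout at mount time; after that, the client talks to both storage nodes directly. If gluster01 happens to be down at boot, though, the mount will fail. A hedged sketch of a more resilient fstab entry, assuming the backupvolfile-server mount option shipped with the GlusterFS 3.x FUSE client:

```bash
# Sketch: fall back to gluster02 for the volume file if gluster01 is
# unreachable at mount time. The backupvolfile-server option is assumed
# to be supported by your glusterfs-fuse version.
echo "gluster01-walber.walber.local:/gv0 /mnt/gfs glusterfs defaults,_netdev,backupvolfile-server=gluster02-walber.walber.local 0 0" | tee --append /etc/fstab
```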

At this point, you should have a working GlusterFS filesystem. Files created on the client machine should be visible on both machines running the GlusterFS server, as well as on any other client machines that you configure and connect.
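As a final smoke test, you can create a file from the client and confirm it lands on both bricks (the file name here is just a hypothetical example):

```bash
# On the client:
touch /mnt/gfs/replication-test.txt

# On either storage node, the file should appear inside the brick directory:
ls /network-storage/brick1/d1/
```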
