Network Storage Project Setup

Step 1 - Configure basic networking

  • Configure the basics on each node: IP address, hostname, subnet mask, gateway, and DNS (see the sketch below)
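
As a rough sketch (assuming CentOS 7 with NetworkManager; the interface name ens192, the gateway 10.0.5.1, the DNS server 10.0.5.10, and the hostname gluster1 are placeholder values, not taken from this project), the first node (10.0.5.60) could be configured like this:

# hostnamectl set-hostname gluster1
# nmcli con mod ens192 ipv4.method manual ipv4.addresses 10.0.5.60/24 ipv4.gateway 10.0.5.1 ipv4.dns 10.0.5.10
# nmcli con up ens192

Repeat on 10.0.5.61 with its own address and hostname.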

Step 2 - Format and mount the bricks

Note: We are going to use the XFS filesystem for the backend bricks, but Gluster is designed to work on top of any filesystem that supports extended attributes.

# mkfs.xfs -i size=512 /dev/sdb1
# mkdir -p /data/brick1
# echo '/dev/sdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
# mount -a && mount

/dev/sdb1 should now be mounted at /data/brick1.
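
As a quick sanity check (not part of the original steps), confirm that the mount is active and shows the xfs filesystem type:

# df -hT /data/brick1

Repeat Step 2 on the second server as well, since each node supplies its own brick.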

Step 3 - Installing GlusterFS

Install the software

# yum install glusterfs-server
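
Depending on the CentOS release and which repositories are enabled, glusterfs-server may not be in the base repos. On CentOS 7 it is normally provided by the Storage SIG repository, which (assuming that is the case here) can be enabled first with:

# yum install centos-release-gluster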

Start the GlusterFS management daemon:

# service glusterd start
# service glusterd status

Running the status command should return output similar to the following:

glusterd.service - LSB: glusterfs server
   Loaded: loaded (/etc/rc.d/init.d/glusterd)
Active: active (running) since Mon, 13 Aug 2012 13:02:11 -0700; 2s ago
Process: 19254 ExecStart=/etc/rc.d/init.d/glusterd start (code=exited, status=0/SUCCESS)
CGroup: name=systemd:/system/glusterd.service
   ├ 19260 /usr/sbin/glusterd -p /run/glusterd.pid
   ├ 19304 /usr/sbin/glusterfsd --xlator-option georep-server.listen-port=24009 -s localhost...
   └ 19309 /usr/sbin/glusterfs -f /var/lib/glusterd/nfs/nfs-server.vol -p /var/lib/glusterd/...
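
Boot persistence is not covered by the steps above, but you will likely want glusterd to start automatically after a reboot. Depending on whether the node uses SysV init scripts or systemd, one of the following should do it:

# chkconfig glusterd on
# systemctl enable glusterd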

Step 4 - Configure the firewall

The gluster processes on the nodes need to be able to communicate with each other. To simplify this setup, configure the firewall on each node to accept all traffic from the other node.

# iptables -I INPUT -p all -s 10.0.5.61 -j ACCEPT
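
The rule above is run on 10.0.5.60 and allows all traffic from 10.0.5.61. Run the mirror-image rule on 10.0.5.61, and (assuming the iptables service manages the firewall rather than firewalld) save the rules so they survive a reboot:

# iptables -I INPUT -p all -s 10.0.5.60 -j ACCEPT
# service iptables save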

Step 5 - Configure the trusted pool

From 10.0.5.60

# gluster peer probe 10.0.5.61

From 10.0.5.61

# gluster peer probe 10.0.5.60

Check the peer status on 10.0.5.60

# gluster peer status

The output should look something like this:

Number of Peers: 1

Hostname: 10.0.5.61
Uuid: f0e7b138-4874-4bc0-ab91-54f20c7068b4
State: Peer in Cluster (Connected)
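
As an extra check (not in the original steps), gluster pool list shows every node in the trusted pool, including the local one:

# gluster pool list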

Step 6 - Set up a GlusterFS volume

On both servers:

# mkdir -p /data/brick1/gv1

From any server:

# gluster volume create gv1 replica 2 10.0.5.60:/data/brick1/gv1 10.0.5.61:/data/brick1/gv1
# gluster volume start gv1

Confirm that the volume has started:

# gluster volume info

The output should look similar to the following:

Volume Name: gv1
Type: Replicate
Volume ID: f25cc3d8-631f-41bd-96e1-3e22a4c6f71f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.0.5.60:/data/brick1/gv1
Brick2: 10.0.5.61:/data/brick1/gv1
Options Reconfigured:
transport.address-family: inet
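
As a supplementary check (not part of the original steps), gluster volume status confirms that the brick processes on both servers are online:

# gluster volume status gv1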

TROUBLESHOOTING

If the volume does not start, check /var/log/glusterfs/glusterd.log for details on what went wrong.
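
For example, to see the most recent entries:

# tail -n 50 /var/log/glusterfs/glusterd.log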

Step 7 - Testing the GlusterFS volume

For this step, we will use one of the servers to mount the volume. Typically, you would do this from an external machine known as a "client". Since doing it that way would require additional packages to be installed on the client machine, we will use one of the servers as a simple place to test first, as though it were that "client".

# mount -t glusterfs 10.0.5.60:/gv1 /mnt
# for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done

Check the client mount point

# ls -lA /mnt/copy* | wc -l

The command should return 100, confirming that all 100 copies landed on the mounted volume.
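
Because gv1 is a replica 2 volume, each server also holds a full copy of the data on its brick, so the same count should come back when run directly against the brick on either server:

# ls -lA /data/brick1/gv1/copy* | wc -l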