Project 3 Web Application and Database Redundancy - Oliver-Mustoe/Oliver-Mustoe-Tech-Journal GitHub Wiki

This page journals content related to SEC-440 project 3. See network diagram for visual representation.

DB server setup

On each "u" server I set the network to LAN, set a hostname, and created a user:

sudo hostnamectl set-hostname "db01-oliver"
sudo adduser olivermustoe
sudo usermod -aG sudo olivermustoe

(NOTE: Same commands were run on u2 and u3 EXCEPT the hostname was set to "db02-oliver" and "db03" respectively)

I also disabled root SSH login and locked the champuser account:

sudo usermod -L champuser

(Below shows disabling root login in /etc/ssh/sshd_config)
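The sshd_config change can be sketched as follows (the sed one-liner is my shorthand; the file can equally be edited by hand with sudo vi):

```shell
# Disable root SSH login in /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
# Apply the change
sudo systemctl restart ssh
```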

Then I set up the following netplans in /etc/netplan/00-installer-config.yaml with sudo vi on each server:

  • U1's netplan

  • U2's netplan

  • U3's netplan
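U1's netplan can be sketched roughly as follows, assuming the 10.0.5.201/24 address from the Galera config further down (the interface name, gateway, and DNS are assumptions; U2 and U3 would use .202 and .203 respectively):

```yaml
network:
  version: 2
  ethernets:
    ens160:                   # interface name is an assumption
      addresses:
        - 10.0.5.201/24       # db01's address per the Galera config
      routes:
        - to: default
          via: 10.0.5.2       # gateway is an assumption
      nameservers:
        addresses: [10.0.5.2] # DNS is an assumption
```

Each netplan is applied with sudo netplan apply.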

Mariadb setup

DB01 - initial node in Galera cluster

I installed mariadb on u1 (db01):

sudo apt update -y
sudo apt install mariadb-server -y

Commented out the default bind-address line (bind-address = 127.0.0.1) in /etc/mysql/mariadb.conf.d/50-server.cnf:

Then I added the following to the end of /etc/mysql/mariadb.conf.d/50-server.cnf:

[galera]
# Mandatory settings
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = "ocluster"
wsrep_cluster_address = "gcomm://10.0.5.201,10.0.5.202,10.0.5.203"
binlog_format = row
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2

# Allow server to accept connections on all interfaces.
bind-address = 0.0.0.0

# Node information
wsrep_node_name = "db01"
wsrep_node_address = "10.0.5.201"

# More settings
#wsrep_provider_options="gcache.size=300M;gcache.page_size=300M"
#wsrep_slave_threads = 1
wsrep_sst_method=rsync
#innodb_flush_log_at_trx_commit = 0

I then started the new cluster on db01:

sudo galera_new_cluster

Then I checked it:

sudo systemctl status mariadb.service

Also checked with:

sudo mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

DB02/03 - joining to Galera cluster

Then on u2 (db02) I installed mariadb:

sudo apt update -y
sudo apt install mariadb-server -y

Commented out the bind address in /etc/mysql/mariadb.conf.d/50-server.cnf like on db01, then added the following to the end of /etc/mysql/mariadb.conf.d/50-server.cnf:

[galera]
# Mandatory settings
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = "ocluster"
wsrep_cluster_address = "gcomm://10.0.5.201,10.0.5.202,10.0.5.203"
binlog_format = row
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2

# Allow server to accept connections on all interfaces.
bind-address = 0.0.0.0

# Node information
wsrep_node_name = "db02"
wsrep_node_address = "10.0.5.202"

# More settings
#wsrep_provider_options="gcache.size=300M;gcache.page_size=300M"
#wsrep_slave_threads = 1
wsrep_sst_method=rsync
#innodb_flush_log_at_trx_commit = 0

Then I restarted mariadb and checked the status:

sudo systemctl restart mariadb.service
sudo systemctl status mariadb.service

And on db01 I checked that the cluster size had increased:
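The check is the same wsrep_cluster_size query as before; with db02 joined, the value should read 2:

```shell
# On db01: with db02 joined, wsrep_cluster_size should now report 2
sudo mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
```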

I then repeated the same process for u3 (db03): install mariadb, comment out the bind address in the 50-server.cnf file, add a galera config to the end of the file, and restart mariadb. The following galera config was added to the end of /etc/mysql/mariadb.conf.d/50-server.cnf:

[galera]
# Mandatory settings
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = "ocluster"
wsrep_cluster_address = "gcomm://10.0.5.201,10.0.5.202,10.0.5.203"
binlog_format = row
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2

# Allow server to accept connections on all interfaces.
bind-address = 0.0.0.0

# Node information
wsrep_node_name = "db03"
wsrep_node_address = "10.0.5.203"

# More settings
#wsrep_provider_options="gcache.size=300M;gcache.page_size=300M"
#wsrep_slave_threads = 1
wsrep_sst_method=rsync
#innodb_flush_log_at_trx_commit = 0

Checking that db03 was added to the cluster:
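With all three nodes in, the same cluster-size query should now report 3 from any node:

```shell
# On any node: wsrep_cluster_size should report 3 once db03 has joined
sudo mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
```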

More testing done later:

Further Galera cluster setup

(NOTE: I originally made an incorrect haproxy config before securing the database and creating a user, but since I fixed it afterwards, I moved that entire section to after securing the database and creating a user.)

Then, on db01, I ran the following command to secure the database:

sudo mysql_secure_installation

I tested that my db servers were syncing by logging in as root and checking the cluster size:
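Beyond checking the cluster size, replication itself can be spot-checked by creating a throwaway database on one node and looking for it on another (the sync_test name is my own placeholder):

```shell
# On db01: create a throwaway database
sudo mysql -u root -p -e "CREATE DATABASE sync_test;"
# On db02 or db03: it should appear immediately via Galera replication
sudo mysql -u root -p -e "SHOW DATABASES LIKE 'sync_test';"
# Clean up from any node; the DROP replicates as well
sudo mysql -u root -p -e "DROP DATABASE sync_test;"
```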

Back on db01 I also created a new superuser:

sudo mysql -u root -p
CREATE USER 'olivermustoe'@'%' IDENTIFIED BY '<PASSWORD>';
GRANT ALL PRIVILEGES ON *.* TO 'olivermustoe'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
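Since CREATE USER replicates across the Galera cluster, a quick way to confirm the account exists cluster-wide is to check from another node, e.g. db02:

```shell
# On db02: the user created on db01 should already be present
sudo mysql -u root -p -e "SELECT User, Host FROM mysql.user WHERE User='olivermustoe';"
```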

Haproxy to DBs

On ha01 I updated /etc/haproxy/haproxy.cfg with the following (I saved haproxy.cfg to GitHub afterwards; if used from GitHub, it would have to be renamed to "haproxy.cfg" inside /etc/haproxy):

frontend db
    bind *:3306
    mode tcp
    default_backend dbpool
backend dbpool
    balance roundrobin
    mode tcp
    server db01 10.0.5.201:3306 check
    server db02 10.0.5.202:3306 check
    server db03 10.0.5.203:3306 check

Then I restarted haproxy:

sudo systemctl restart haproxy
sudo systemctl status haproxy
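A quick sanity check that haproxy is actually bound on the MariaDB port:

```shell
# haproxy should now be listening on 3306 on all interfaces
sudo ss -tlnp | grep -w 3306
```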

I then repeated this haproxy.cfg edit and restart on ha2.

Testing from a client

After installing the mariadb client on my xubuntulan machine:

sudo apt install mariadb-client -y

I could see that my load balancers were working with sudo mysql -h 10.0.6.10 -u olivermustoe -p -e "select @@hostname":
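Running the query a few times makes the roundrobin balancing visible, since @@hostname should cycle through db01, db02, and db03 (a sketch; each invocation prompts for the password):

```shell
# Each query should land on a different backend in turn
for i in 1 2 3; do
  mysql -h 10.0.6.10 -u olivermustoe -p -e "select @@hostname"
done
```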

I then created a project using my web servers and DBs to build a redundant login application. A guide to the login web app's installation, plus the files themselves, can be found in p3_files.
