OpenShift and Percona Distribution for PostgreSQL Operator - stanislawbartkowski/CP4D GitHub Wiki
https://github.com/percona/percona-postgresql-operator
Practical information on setting up a PostgreSQL instance on OpenShift.
Create a separate project for the PostgreSQL instance, for example postgres. Installation and management can be done by the cluster-admin user or by a separate user having admin authority only in the postgres project.
The operator itself can be deployed only by cluster-admin.
Browse to OpenShift console Operator Hub panel.
Install the operator to postgres project.
This can be done by a user with admin authority in the postgres project.
The installation is straightforward.
OpenShift console -> postgres project -> Installed Operators -> Percona Distribution for PostgreSQL Operator
Click Create PerconaPGCluster and then the Create button. Wait until all pods are created and running.
Open "Terminal" in the cluster1- pod, run the psql command and execute a few SQL statements.
psql
psql (13.4)
Type "help" for help.
postgres=#
create database testdb;
\c testdb;
create table test (x int);
insert into test values (0),(1),(2);
select * from test;
x
---
0
1
2
(3 rows)
| Pod | Description |
|---|---|
| cluster1-pgbouncer | Connection pooler and load balancer for the PostgreSQL cluster. |
| cluster1-, cluster1-repl | Three PostgreSQL instances running in Master/Standby mode: one is active and two are standbys. Every transaction committed on the active instance is replicated to the standbys. Applications should avoid connecting to any of these pods directly because the standby instances run in read-only mode; connecting through pgbouncer is recommended. |
| cluster1-backrest-shared-repo- | Backup/Restore instance. Additional DR feature running the pgbackrest utility. The backup schedule is defined in the PerconaPGCluster instance. |
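The pods listed above can be verified with a quick check like the following (a sketch, assuming the operator was installed into the postgres project and the default cluster name cluster1):

```shell
# List the pods created by the operator in the postgres project.
# Expect cluster1-..., cluster1-repl..., cluster1-pgbouncer... and
# cluster1-backrest-shared-repo-... pods, all in Running state.
oc get pods -n postgres
```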
Replace the selector with a valid pod label (here name=cluster1-pgbouncer). The selector should point to the pgbouncer pod.
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  selector:
    name: cluster1-pgbouncer
  type: NodePort
  ports:
  - port: 5432
    protocol: TCP
    targetPort: 5432
EOF
Check that the service selector resolves to a valid pod IP: the Endpoints field should contain the pgbouncer pod IP.
oc describe svc/postgresql
Name: postgresql
Namespace: postgresql
Labels: <none>
Annotations: <none>
Selector: name=cluster1-pgbouncer
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.30.196.101
IPs: 172.30.196.101
Port: <unset> 5432/TCP
TargetPort: 5432/TCP
NodePort: <unset> 32026/TCP
Endpoints: 10.254.20.71:5432
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
The external NodePort is assigned as 32026.
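The assigned NodePort can also be read directly from the service definition, for example (a sketch; the service name postgresql matches the definition above):

```shell
# Read the NodePort assigned to the postgresql service.
# The value is allocated by OpenShift and varies per cluster.
NODEPORT=$(oc get svc postgresql -n postgres -o jsonpath='{.spec.ports[0].nodePort}')
echo "$NODEPORT"
```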
vi /etc/haproxy/haproxy.cfg

frontend ingress-postgresql
    bind *:5432
    default_backend ingress-postgresql
    mode tcp
    option tcplog

backend ingress-postgresql
    balance source
    mode tcp
    server master0 10.17.43.9:32026 check
    server master1 10.17.46.40:32026 check
    server master2 10.17.48.179:32026 check
    server worker0 10.17.57.166:32026 check
    server worker1 10.17.59.104:32026 check
    server worker2 10.17.61.175:32026 check

systemctl restart haproxy
Assuming that kist is the hostname of the haproxy node.
nc -zv kist 5432
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: Connected to 9.46.197.101:5432.
Ncat: 0 bytes sent, 0 bytes received in 0.26 seconds.
Get password for postgres admin user.
oc extract secret/cluster1-postgres-secret --keys=password --to=-
# password
gHUksg0rPQJlqDFQFvuqxppC
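The extracted password can also be passed to psql non-interactively through the PGPASSWORD environment variable (a sketch; secret and host names as used above):

```shell
# oc extract writes the secret value to stdout (the "# password" line goes
# to stderr), so the substitution captures only the password itself.
export PGPASSWORD=$(oc extract secret/cluster1-postgres-secret --keys=password --to=-)
psql -h kist -U postgres postgres -c 'select version();'
```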
Connect as the postgres user. Important: when connecting through pgbouncer, the database name should be specified, and use the -W option to force a password prompt.
psql -h kist -U postgres postgres
Password for user postgres:
psql (13.4)
Type "help" for help.
postgres=#
If the connection as the postgres user is rejected, log on to the active PostgreSQL container, create a user and database there, and log on remotely as the newly created user.
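A minimal sketch of creating such a user and database, run inside the active PostgreSQL container (queryuser/querydb match the example below; the password is a placeholder to replace):

```shell
# Run inside the active PostgreSQL container, where psql connects locally
# as the postgres superuser. 'ChangeMe123' is a placeholder password.
psql -c "CREATE USER queryuser WITH PASSWORD 'ChangeMe123';"
psql -c "CREATE DATABASE querydb OWNER queryuser;"
```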
psql -h kist -U queryuser querydb
Password for user queryuser:
psql (13.4)
Type "help" for help.
querydb=>