Setting up a Db2 DPF environment

Introduction

This is a quick test case for implementing Db2 DPF (Database Partitioning Feature).

Db2 version: v11.5.7.0
Operating system: SUSE Linux 15

2 physical hosts
2 logical nodes on each physical host

Total of 4 nodes

  • Hosts:

        hostHADR1 - 192.168.145.136
        hostHADR2 - 192.168.145.138

Tool used to share the Db2 instance home: NFS

Creating the DPF Instance

1) Create the instance owner user "db2dpf" with the same UID on EACH host, in a group with the same GID (group ID)

For these hosts I used:

groupadd -g 1050 db2adm1                               # same GID on both hosts

useradd -u 1035 -g db2adm1 -m -d /home/db2dpf db2dpf   # same UID on both hosts
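
To confirm the IDs really match (a quick check, not part of the original steps), run on each host:

id db2dpf                 # UID (1035) and GID (1050) must be identical on both hosts
getent group db2adm1      # confirms the group exists with the same GID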

2) Set up the NFS server on one of the hosts

The instance owner's home directory must be shared across all hosts.

Host hostHADR2 was chosen to be the NFS server.

zypper -n in nfs-kernel-server

systemctl enable nfsserver

systemctl restart nfsserver

If the firewall is enabled:

firewall-cmd --add-service=nfs --permanent

firewall-cmd --reload

Edit /etc/exports to add the instance's home directory. This is how the file was left:

hostHADR2:/opt/ibm/db2/V11.5/instance # cat /etc/exports
# See the exports(5) manpage for a description of the syntax of this file.
# This file contains a list of all directories that are to be exported to
# other computers via NFS (Network File System).
# This file used by rpc.nfsd and rpc.mountd. See their manpages for details
# on how make changes in this file effective.

/home/db2dpf 192.168.145.136/24(rw,sync,root_squash,subtree_check)

Re-export the directories to NFS:

exportfs -ra
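
To verify the share is being exported before mounting it on the other host (an optional check):

showmount -e hostHADR2     # should list /home/db2dpf and the allowed client network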

3) Edit /etc/hosts on both hosts so each host can resolve the other

The /etc/hosts file ended up like this:

192.168.145.138         hostHADR1.com.br                hostHADR1
192.168.145.136         hostHADR2.com.br                hostHADR2
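
A quick way to confirm both hosts resolve each other (optional check, using the names above):

getent hosts hostHADR1 hostHADR2     # should print the IPs from /etc/hosts
ping -c 1 hostHADR1.com.br
ping -c 1 hostHADR2.com.br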

4) Create the instance normally on BOTH hosts:

/opt/ibm/db2/V11.5/instance # ./db2icrt -u db2dpf db2dpf
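
To confirm the instance was created on each host (optional check), list the instances defined for this Db2 copy:

/opt/ibm/db2/V11.5/instance/db2ilist     # should print db2dpf on both hosts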

5) Share the instance's home directory between the hosts

On the target machine (in this case, hostHADR1), run:

mount -t nfs 192.168.145.136:/home/db2dpf /home/db2dpf
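
To make this mount persistent across reboots, an entry like the following can be added to /etc/fstab on hostHADR1 (a sketch; the mount options may need tuning for your environment):

192.168.145.136:/home/db2dpf  /home/db2dpf  nfs  rw,hard  0  0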

6) Set up passwordless SSH for the Db2 instance owner

Each database partition server must be able to run remote commands on all the other database partition servers.

So, run the following command from one of the hosts as the db2dpf user (the Db2 instance owner):

ssh-keygen -t rsa

Example:

hostHADR1:~/.ssh # ssh-keygen -t rsa   //just hit ENTER for all options
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:EWFudSQJf+oLD3PD0X11fC8F9sZJVpy2JJ76RzRXpS4 root@hostHADR1
The key's randomart image is:
+---[RSA 3072]----+
|        =oooo o+*|
|       o +.o ooX+|
|        + . o *.%|
|       . . + = **|
|        S o E = =|
|         o o . + |
|        + = . .  |
|         * o . . |
|          o   .  |
+----[SHA256]-----+

hostHADR1:~/.ssh # ls -ltr
total 8
-rw-r--r-- 1 root root  568 Jan  7 17:03 id_rsa.pub
-rw------- 1 root root 2602 Jan  7 17:03 id_rsa

Because the home directory for the instance is on the shared Db2 home file system, only one pair of keys is required for the instance.

So, also run:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

SSH to and from each host to make sure it works.
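
A small loop (run as db2dpf on each host) can confirm that no password or prompt is needed in any direction; BatchMode makes ssh fail instead of prompting:

for h in hostHADR1 hostHADR2 hostHADR1.com.br hostHADR2.com.br; do
    ssh -o BatchMode=yes $h hostname || echo "passwordless ssh to $h FAILED"
done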


7) Edit <db2_instance_home>/sqllib/db2nodes.cfg to match the DPF nodes you want

For this example, db2nodes.cfg contains the following:

db2dpf@hostHADR1:~/sqllib> cat db2nodes.cfg
0 hostHADR1.com.br 0
1 hostHADR1.com.br 1
2 hostHADR2.com.br 0
3 hostHADR2.com.br 1
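
For reference, each line is: partition number, hostname, logical port (and optionally a netname). The logical ports on a host must start at 0 and be unique on that host, which is why each host above carries ports 0 and 1. An annotated view (the annotations are explanatory only, do not put them in the real file):

0 hostHADR1.com.br 0     <- partition 0, first logical node on hostHADR1
1 hostHADR1.com.br 1     <- partition 1, second logical node on hostHADR1
2 hostHADR2.com.br 0     <- partition 2, first logical node on hostHADR2
3 hostHADR2.com.br 1     <- partition 3, second logical node on hostHADR2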

8) Export DB2INSTANCE on BOTH hosts

This is required if you have more than one Db2 instance defined on your system:

export DB2INSTANCE=db2dpf
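
If the variable is not set automatically at login, sourcing the instance profile also sets it (along with PATH and the other instance environment variables); this assumes the instance home used above:

. /home/db2dpf/sqllib/db2profile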

9) Start the instance from any host:

db2dpf@hostHADR1:~> db2start
08/15/2022 16:02:14     3   0   SQL1063N  DB2START processing was successful.
08/15/2022 16:02:14     2   0   SQL1063N  DB2START processing was successful.
08/15/2022 16:02:24     1   0   SQL1063N  DB2START processing was successful.
08/15/2022 16:02:25     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.
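
A quick sanity check that every partition in db2nodes.cfg answers (db2_all runs the given command once per partition):

db2_all hostname     # should produce one "completed ok" line per partition (4 in total)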

Creating the DPF Database

1) Create the database

A local directory was created on each host for the ON and DBPATH ON clauses.

Definitions:

The default location for the database path is defined by the dftdbpath configuration parameter in the DBM CFG.

For a partitioned database, the database path must be a local file system on each server.

"
The database path is the location where a hierarchical directory structure is created. The structure holds the following files that are needed for the operation of the database:

-Buffer pool information
-Table space information
-Storage path information
-Database configuration information
-History file with information about backups, restores, loading of tables, reorganization of tables, altering of table spaces, and other database changes
-Log control files with information about active logs

The DBPATH ON parameter is used to place these files and information in a directory that is separate from the storage paths where the database data is kept.
"
CREATE DATABASE command
https://www.ibm.com/docs/en/db2/11.5?topic=commands-create-database


The "ON" clause is for the default path to the Containers.

This is just a test in a virtual machine; it does not follow best practices for paths or additional clauses when creating the database.

So, creating the database:

db2 "create db db_dpf on /db_dpf/storage dbpath on /db_dpf/dbpath"

PS.:

The "create database" command hung and I got the following during the created database in the db2diag.log from one of the Nodes:

"
2022-08-16-15.22.17.651790-180 I1931E403             LEVEL: Warning
PID     : 11698                TID : 139894944950016 PROC : db2sysc 2
INSTANCE: db2dpf               NODE : 002
HOSTNAME: hostHADR2
EDUID   : 18                   EDUNAME: db2fcms-0 2
FUNCTION: DB2 UDB, fast comm manager, sqkfTcpServices::identifyUntrustedIncomingEvent, probe:45
MESSAGE : preconnect with node 1 got rc of 0x810f0019

2022-08-16-15.22.17.651879-180 I2335E2989            LEVEL: Error
PID     : 11698                TID : 139894944950016 PROC : db2sysc 2
INSTANCE: db2dpf               NODE : 002
HOSTNAME: hostHADR2
EDUID   : 18                   EDUNAME: db2fcms-0 2
FUNCTION: DB2 UDB, fast comm manager, sqkfTcpServices::identifyUntrustedIncomingEvent, probe:41
MESSAGE : Error while waiting for connect() completion with node 0. rc = -2129723367. status = 3; port = 20028
DATA #1 : Hexdump, 488 bytes
0x00000002012A4420 : 0000 0000 0000 0000 0000 0000 0000 0000    ................
0x00000002012A4430 : 0000 0000 0000 0000 0000 0000 0000 0000    ................
0x00000002012A4440 : 0000 0000 0000 0000 0000 0000 0000 0000    ................
0x00000002012A4450 : 0000 0000 0000 0000 0000 0000 0000 0000    ................
"

To fix that, I edited /etc/services so both hosts reserve the same range of ports for FCM communication between the partitions (the DB2_<instance> to DB2_<instance>_END range must contain at least one port per logical partition on the host).

On both hosts, /etc/services got the following entries:

DB2_db2dpf      20061/tcp
DB2_db2dpf_END  20064/tcp
db2c_db2dpf     25014/tcp

2) Connecting to the database on each partition to test:

db2dpf@hostHADR2:~> db2_all db2 connect to db_dpf1


   Database Connection Information

 Database server        = DB2/LINUXX8664 11.5.7.0
 SQL authorization ID   = DB2DPF
 Local database alias   = DB_DPF1

hostHADR1.com.br: db2 connect to db_dpf1 completed ok


   Database Connection Information

 Database server        = DB2/LINUXX8664 11.5.7.0
 SQL authorization ID   = DB2DPF
 Local database alias   = DB_DPF1

hostHADR1.com.br: db2 connect to db_dpf1 completed ok


   Database Connection Information

 Database server        = DB2/LINUXX8664 11.5.7.0
 SQL authorization ID   = DB2DPF
 Local database alias   = DB_DPF1

hostHADR2.com.br: db2 connect to db_dpf1 completed ok


   Database Connection Information

 Database server        = DB2/LINUXX8664 11.5.7.0
 SQL authorization ID   = DB2DPF
 Local database alias   = DB_DPF1

hostHADR2.com.br: db2 connect to db_dpf1 completed ok

3) Creating a database partition group on all partitions:

db2dpf@hostHADR2:~> db2 "CREATE DATABASE PARTITION GROUP group1 ON DBPARTITIONNUMS (0,1,2,3)"
DB20000I  The SQL command completed successfully.
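
To confirm which partitions the new group spans, the catalog view SYSCAT.DBPARTITIONGROUPDEF can be queried:

db2 "SELECT DBPGNAME, DBPARTITIONNUM, IN_USE FROM SYSCAT.DBPARTITIONGROUPDEF WHERE DBPGNAME = 'GROUP1'"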

4) Creating a tablespace in that database partition group:

db2dpf@hostHADR2:~> db2 "create tablespace allnodests in group1"
DB20000I  The SQL command completed successfully.

5) Creating a partitioned table in that tablespace:

db2dpf@hostHADR2:~> db2 "CREATE TABLE allnodestab(a integer, b char(10)) IN allnodests"
DB20000I  The SQL command completed successfully.
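
To see that rows really get distributed across all four partitions, insert a few rows and group by DBPARTITIONNUM (a built-in function that returns the partition number holding each row). Sample values only:

db2 "INSERT INTO allnodestab VALUES (1,'a'),(2,'b'),(3,'c'),(4,'d'),(5,'e'),(6,'f'),(7,'g'),(8,'h')"
db2 "SELECT DBPARTITIONNUM(a) AS PART_NUM, COUNT(*) AS ROW_COUNT FROM allnodestab GROUP BY DBPARTITIONNUM(a)"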