postgres benchmark pgbench dbchaos
Some of the commonly used tools for benchmarking PostgreSQL include pgbench, HammerDB, Apache JMeter, pgBadger, pg_stat_statements, and pgreplay.
pgbench
- https://www.cloudbees.com/blog/tuning-postgresql-with-pgbench
- https://www.postgresql.org/docs/current/pgbench.html
- https://www.systutorials.com/docs/linux/man/1-pgbench/
With pgbench, you can either use a sample database provided with pgbench or have pgbench run custom queries against an application database.
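For custom workloads, pgbench takes a script file via -f. A minimal read-only sketch (the file name select_only.sql is ours, and the hard-coded id range assumes the scale-50 database built below):

```
-- select_only.sql: pick a random account id and read its balance
\set aid random(1, 5000000)
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
```

Run it with, for example:

```
pgbench -f select_only.sql -c 10 -T 60 example
```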
0. Install
yum whatprovides pgbench   # find which package provides pgbench (it ships in postgresql-contrib)
yum install postgresql-contrib-13.7-2.module+el8.6.0+15347+b8eabcef.x86_64
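To confirm the client is available (the package name and version above are distro-specific):

```
pgbench --version
```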
1. Create database for pgbench test
# CREATE DATABASE example;
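Equivalently, the database can be created from the shell (assuming the postgres superuser; adjust -U/-h/-p to your setup):

```
createdb -U postgres example
```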
2. Setting up the pgbench sample database
By default, pgbench creates the tables listed below with the row counts shown (100,000 rows in pgbench_accounts). This creates a simple 16MB database.
pgbench -i postgres
table            # of rows
--------------------------
pgbench_branches         1
pgbench_tellers         10
pgbench_accounts    100000
pgbench_history          0
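To verify the row counts after initialization (add connection options as in the commands above if needed):

```
psql example -c "SELECT count(*) FROM pgbench_accounts;"   # expect 100000 at the default scale
```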
pgbench -U ebkadm -h 127.0.0.1 -p 5432 -i -s 50 example
$ pgbench -i -s 50 example # -i (initialize) option
Since we will be using pgbench to measure changes in performance, a small 16MB database is not enough to stress the instance. This is where the -s (scaling) option comes into play: it multiplies the number of rows inserted into each table. In the commands above we passed a scaling factor of 50, which tells pgbench to create a database 50 times the default size. As a result, the pgbench_accounts table now holds 5,000,000 rows and the database is roughly 800MB (50 x 16MB).
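A quick way to confirm the resulting size (the 800MB figure is approximate; the exact number varies slightly by version):

```
psql example -c "SELECT pg_size_pretty(pg_database_size('example'));"
```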
3. Establishing a baseline
pgbench -c 200 -j 10 -T 60 -U postgres -h 192.168.187.134 -p 5432 postgres
The test was run in simple query mode with 200 clients and 10 threads. The benchmark was executed for a duration of 60 seconds.
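The query mode reported in the results is controlled by pgbench's -M option (simple is the default; extended and prepared are also available). A variant of the same run using prepared statements, which typically yields higher throughput:

```
pgbench -M prepared -c 200 -j 10 -T 60 -U postgres -h 192.168.187.134 -p 5432 postgres
```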
# pgbench -U ebkadm -h 127.0.0.1 -p 5432 -c 10 -j 2 -t 10000 example
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 50
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
latency average = 38.777 ms
tps = 257.886690 (including connections establishing)
tps = 257.901055 (excluding connections establishing)
$ pgbench -c 10 -j 2 -t 10000 example
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 50
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
latency average: 4.176 ms
tps = 2394.718707 (including connections establishing)
tps = 2394.874350 (excluding connections establishing)
- -c (clients): the number of client sessions to open; the command above opens 10 sessions.
- -j (threads): the number of worker threads pgbench uses; a value of 2 tells pgbench to run the clients across two worker threads. (These are pgbench worker threads, not server processes.)
- -t (transactions): the number of transactions each client executes; here each of the 10 client sessions runs 10,000 transactions.
To summarize, the baseline test run used two pgbench worker threads simulating 10 clients, each executing 10,000 transactions, for a total of 100,000 transactions.
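As an alternative to a fixed transaction count, -T runs the benchmark for a fixed number of seconds, which is often easier to compare across configurations. A time-boxed equivalent of the run above might look like:

```
pgbench -c 10 -j 2 -T 60 example
```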
pgbench -U postgres -p 5555 -c 10 -j 2 -t 10000 mydb
- -c: number of clients
- -j: number of threads
- -t: number of transactions per client
With 10 clients each running 10,000 transactions: 10 x 10,000 = 100,000 transactions in total.
4. Change configuration - tuning
For example, in postgresql.conf:
#shared_buffers = 128MB
shared_buffers = 256MB
# systemctl restart postgresql
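After the restart, it is worth confirming that the new value is active:

```
psql -U postgres -c "SHOW shared_buffers;"
```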
5. Rerun the test
$ pgbench -c 10 -j 2 -t 10000 example