PostgreSQL benchmark on FreeBSD, CentOS, Ubuntu, Debian and openSUSE

In this post, I’m going to show benchmark results of the recently released PostgreSQL 10.1. I’ve benchmarked the DB on these OSes (all are 64-bit):

  • Ubuntu 16.04, kernel 4.10.0-38-generic
  • openSUSE 42.3, kernel 4.4.87-25-default
  • CentOS 7.4, kernel 3.10.0-693.2.2.el7.x86_64
  • Debian 9.2, kernel 4.9.0-4-amd64
  • FreeBSD 11.1

Test methodology

The goal of the benchmark was to measure PostgreSQL performance under conditions resembling a (typical) production deployment:

  • clients use persistent connections, as they would behind a connection pooler, so there is no constant reconnecting to the DB (I did not run an actual pooler; instead I simply omitted pgbench's -C flag, which would otherwise open a new connection for every transaction; see the example after this list)
  • clients connect over the network, not via a Unix socket
  • the PostgreSQL data directory resides on a RAID 1 mirror
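
The difference in connection handling can be illustrated with pgbench itself (host IP and user are taken from the test script further below; the duration here is just an example):

# default behaviour: each client opens one connection and keeps it for the
# whole run, roughly what an application behind a connection pool would see
pgbench -h 192.168.1.120 -U pgbench -c 10 -T 60 -S pgbench

# with -C, pgbench opens a new connection for every transaction; this is the
# behaviour the benchmark deliberately avoids
pgbench -h 192.168.1.120 -U pgbench -c 10 -T 60 -S -C pgbench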

For each of the tested OSes, a benchmarking database of ~74GiB was created:

pgbench -i -s 5000 pgbench
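
If you want to sanity-check the resulting size, pg_database_size() reports it directly (host and user as used elsewhere in this post):

psql -h 192.168.1.120 -U pgbench -d pgbench -c "SELECT pg_size_pretty(pg_database_size('pgbench'));"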

The test infrastructure consisted of two dedicated servers connected by a 1 Gbit/s network.

I was interested in these benchmark combinations:

  • 32GiB read only: read-only test (SELECTs only, no data modifications) on the 32GiB instance; the dataset does not fit into the PostgreSQL cache
  • 200GiB read only: read-only test on the 200GiB instance; the dataset fits into the PostgreSQL cache
  • 32GiB TPC-B: read-write test; the dataset does not fit into the PostgreSQL cache
  • 200GiB TPC-B: read-write test; the dataset fits into the PostgreSQL cache

pgbench setup

The pgbench program, version 10.1, running on a separate FreeBSD 11.1 machine was used to generate the load. The test script consisted of three parts: vacuum + warmup, a read-only benchmark, and a read-write benchmark. Before each read-write benchmark, the pgbench tables were vacuumed (the -v flag was used). During the test, I gradually increased the number of concurrent clients accessing the database.

#!/bin/sh

THREADS=8
DURATION=1800
PGIP=192.168.1.120

# vacuum + warmup: read-only run with 10 clients; -v vacuums the pgbench tables first
pgbench -h ${PGIP} -U pgbench -j ${THREADS} -c 10 -T ${DURATION} -S -v pgbench

# read-only (SELECT-only, -S) runs with an increasing number of concurrent clients
for clients in 1 10 20 30 40 50 60 70 80 90 100 110 120
do
  echo "RO ${clients}"
  pgbench -h ${PGIP} -U pgbench -j ${THREADS} -c ${clients} -T ${DURATION} -S pgbench > pgbench_ro_${clients}.log
done

# read-write (TPC-B-like) runs, with a vacuum (-v) before each one
for clients in 1 10 20 30 40 50 60 70 80 90 100 110 120
do
  echo "RW ${clients}"
  pgbench -h ${PGIP} -U pgbench -j ${THREADS} -c ${clients} -T ${DURATION} -v pgbench > pgbench_rw_${clients}.log
done

PostgreSQL server setup

For the Linux distributions, PostgreSQL was installed on an ext4 file system on a RAID 1 array (software RAID using mdraid) of two SSDs, with atime disabled. On FreeBSD, the OpenZFS file system was used on two SSDs in a RAID 1 (mirror) setup. The ZFS dataset holding the PostgreSQL data was created with the following parameters:

zfs get recordsize,logbias,primarycache,atime,compression zroot/var/db/postgres
NAME                   PROPERTY      VALUE         SOURCE
zroot/var/db/postgres  recordsize    8K            local
zroot/var/db/postgres  logbias       throughput    local
zroot/var/db/postgres  primarycache  all           default
zroot/var/db/postgres  atime         off           inherited from zroot
zroot/var/db/postgres  compression   lz4           local
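
For reference, the storage layout could be recreated roughly as follows; the device names and mount point are placeholders, and the exact commands I used may have differed:

# Linux: mdraid RAID 1 + ext4, mounted without atime updates
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext4 /dev/md0
mount -o noatime /dev/md0 /var/lib/postgresql

# FreeBSD: ZFS dataset with the properties listed above (pool already exists)
zfs create -o recordsize=8K -o logbias=throughput -o compression=lz4 zroot/var/db/postgres
zfs set atime=off zroot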

The PostgreSQL server configuration was identical on all OSes except for file paths (each OS uses a different directory structure). Content of postgresql.conf (only the interesting parts) for the 32GiB instance:

autovacuum = off
default_statistics_target = 100
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
effective_cache_size = 24GB
work_mem = 104MB
wal_buffers = 16MB
shared_buffers = 8GB
max_connections = 300

Content of postgresql.conf for the 200GiB instance:

autovacuum = off
default_statistics_target = 100
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
effective_cache_size = 144GB
work_mem = 640MB
wal_buffers = 16MB
shared_buffers = 48GB
max_connections = 300
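
After editing postgresql.conf and restarting the server, the effective values can be double-checked via pg_settings (connection parameters as in the test script):

psql -h 192.168.1.120 -U pgbench -d pgbench -c \
  "SELECT name, setting, unit FROM pg_settings
   WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size');"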

Benchmarking

I benchmarked PostgreSQL on five different operating systems, in two modes - read-only and TPC-B (read-write) - with two different memory profiles. The benchmark of each OS took approximately 30 hours (not including the time needed to set up the OS). The output of each pgbench run was saved for later evaluation.
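
Since every run was redirected to its own log file by the script above, a small loop is enough to pull the TPS numbers out for comparison (a sketch; it assumes the pgbench_ro_*/pgbench_rw_* file names from the script):

# print the "excluding connections establishing" TPS from every saved log
for f in pgbench_ro_*.log pgbench_rw_*.log
do
  tps=$(awk '/excluding connections establishing/ {print $3}' "$f")
  echo "${f}: ${tps}"
done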

Results - Read Only

Results - TPC-B

Summary

The benchmark showed that the differences between the various GNU/Linux distributions are not very significant. The best performing OS in the read-only benchmark was openSUSE 42.3, while FreeBSD was about 40% slower. Unfortunately, I did not find out what caused such mediocre FreeBSD performance.

A more realistic picture of PostgreSQL performance was provided by the read-write (TPC-B) benchmark. Among the GNU/Linux distributions, CentOS 7.4 was the best performer, while Debian 9.2 was the slowest. I was positively surprised by FreeBSD 11.1, which was more than twice as fast as the best-performing Linux despite the fact that FreeBSD used ZFS, a copy-on-write file system. I assumed that this difference was caused by the Linux software RAID overhead, so I ran three more TPC-B benchmarks with 100 concurrent clients, this time without software RAID:

  • FreeBSD 11.1 + UFS: 5623.86 TPS
  • FreeBSD 11.1 + ZFS: 8331.85 TPS
  • CentOS 7.4 + ext4: 8987.65 TPS
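
Each of these additional runs used the same read-write invocation as in the script above, just pinned to 100 clients (IP address as before):

pgbench -h 192.168.1.120 -U pgbench -j 8 -c 100 -T 1800 -v pgbench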

The results show the inefficiency of Linux software RAID (or the efficiency of ZFS RAID). CentOS 7.4 without software RAID is only slightly faster than FreeBSD 11.1 with a ZFS mirror (for TPC-B with 100 concurrent clients).

EDIT: there is a discussion on HN