Performance comparison: InfluxDB, TimescaleDB, IoTDB, TDEngine, OpenTSDB vs. TickTock - ytyou/ticktock GitHub Wiki

Table of Contents

1. Introduction

2. IoTDB Benchmark

3. Experiment settings

3.1. Hardware

3.2. Software

3.3. IoTDB Benchmark Settings

4. Throughput comparison

4.1. Write Throughput

4.2. Read Throughput

5. Response time comparison

5.1. Write Response Time

5.2. Read Response Time

6. CPU, memory, IO comparison

6.1. CPU

6.2. Memory

6.3. Read IO

6.4. Write IO

7. Conclusion & Future

1. Introduction

In this wiki, we compared the performance of TickTock with InfluxDB on Raspberry Pi (ARM 32-bit), since InfluxDB is one of the very few TSDBs that can run on Raspberry Pi. We would also like to understand the performance difference between TickTock and other popular TSDBs, so here we present a performance evaluation of InfluxDB, TimescaleDB, IoTDB, TDEngine, OpenTSDB, and TickTock on X86. A few quick notes:

  • We use docker to run the TSDBs.
  • We use the IoTDB benchmark.
  • We use a mixed read-write workload scenario (read : write = 1 : 9).

Also, please be advised that performance is not the only aspect on which to compare TSDBs, even though it may be one of the most important. You may also have to consider aspects such as ease of use, API adoption, reliability, community support, and cost. This report focuses only on performance.

2. IoTDB Benchmark

We selected IoTDB-benchmark for the performance evaluation. IoTDB-benchmark was developed by THULAB at Tsinghua University, Beijing, to compare the performance of different TSDBs with flexible test settings and IoT industrial scenarios. It was published in CoRR 2019 (Computing Research Repository). You can download the PDF paper here.

@article{DBLP:journals/corr/abs-1901-08304,
 author    = {Rui Liu and Jun Yuan},
 title     = {Benchmark Time Series Database with IoTDB-Benchmark for IoT Scenarios},
 journal   = {CoRR},
 volume    = {abs/1901.08304},
 year      = {2019},
 url       = {http://arxiv.org/abs/1901.08304},
 timestamp = {Sat, 02 Feb 2019 16:56:00 +0100},
 }

IoTDB-benchmark simulates a wind power company operating several wind farms. There are many wind turbines (i.e., devices) in a wind farm, and each device has many sensors that periodically collect different metrics, such as wind speed and temperature, and send them to TSDBs.
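
As a concrete illustration, one reading from one sensor, written through an OpenTSDB-style interface (which TickTock also accepts, as noted below), might look roughly like the line below. The metric name, timestamp, value, and tags are ours for illustration; the exact metric/tag layout used by the benchmark's adaptors may differ.

    put wind_speed 1645000000000 12.3 device=d_1 sensor=s_1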

IoTDB-benchmark is a great benchmark for the following reasons:

  • It provides detailed measurement metrics such as throughput, latency (average, p10, p25, median, p75, p95, p99, p999).
  • It provides adaptors to various TSDBs such as InfluxDB, OpenTSDB, and TimescaleDB. TickTock reuses the OpenTSDB adaptor. THULAB at Tsinghua University also developed its own TSDB, IoTDB, which is an Apache project, so the benchmark supports IoTDB as well.
  • It supports out-of-order writes, a common scenario that many other TSDB benchmarks and TSDBs do not support.
  • It supports different test scenarios, e.g., write-only and mixed read-write.
  • It provides various data distributions to simulate industrial scenarios.

IoTDB-benchmark provides an open-source implementation in github/iotdb-benchmark. We used a forked version at github.com/ytyou/iotdb-benchmark, with a few modifications to make the comparison fair:

  • We added connection pools to the InfluxDB 2.0, TDEngine, and OpenTSDB clients, as the other TSDB clients already have them.
  • We added support for both asynchronous writes (in the DEV branch) and synchronous writes (in the HTTP branch) to the OpenTSDB client.

Please read the user guide.

3. Experiment settings

3.1. Hardware

We ran our experiments on an IdeaPad Gaming laptop with the specifications below:

  • CPU: AMD Ryzen 5 5600H, 6 cores / 12 hyper-threads, 3.3GHz base clock
  • Memory: 20GB DDR4 3200MHz
  • Disk: 1TB 5400 RPM HDD
  • OS: Ubuntu 20.04.3 LTS

We ran each TSDB in a docker container with 2 dedicated vCPUs and 4GB memory (docker options: --cpuset-cpus 0-1 -m 4g). Since TickTock can reach 3.2M data points/second, the network traffic would be about 320MB/s (3.2M points/s x 100 bytes per data point), i.e., roughly 2.6Gbps. This is higher than typical router capacity (1Gbps) and approaches high-end router capacity (10Gbps). To avoid network congestion at high ingestion rates, we ran the benchmark on the same laptop instead of on a separate machine.

3.2. Software

Here is the list of TSDB versions and the corresponding docker run commands we used.

  • InfluxDB 2.0

    docker run -dit --cpuset-cpus 0-1 -m 4g --name influxdb --hostname influxdb -p 8086:8086 -p 8083:8083 -v /var/lib/influxdb2:/var/lib/influxdb2 -e DOCKER_INFLUXDB_INIT_MODE=setup -e DOCKER_INFLUXDB_INIT_USERNAME=root -e DOCKER_INFLUXDB_INIT_PASSWORD=Benchmark_123 -e DOCKER_INFLUXDB_INIT_ORG=test -e DOCKER_INFLUXDB_INIT_BUCKET=test -e DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=token influxdb:2.0

  • TDEngine 2.0.22.0

    docker run -d --name tdengine --hostname tdengine --cpuset-cpus 0-1 -m 4g -p 6030-6039:6030-6039/tcp -p 6030-6039:6030-6039/udp tdengine/tdengine:2.0.22.0

  • TimescaleDB 2.6.0-pg12

    docker run -d --name timescaledb --hostname timescaledb --cpuset-cpus 0-1 -m 4g -p 5432:5432 -e POSTGRES_PASSWORD=postgres timescale/timescaledb:latest-pg12

    We have to increase max_connections in /var/lib/postgresql/data/postgresql.conf.
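
    For example, a minimal sketch of the change (the value is illustrative; it only needs to exceed the number of benchmark clients, and PostgreSQL must be restarted afterwards):

      max_connections = 500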

  • IoTDB 0.12.4

    docker run -d --name iotdb --hostname iotdb --cpuset-cpus 0-1 -m 4g -p 6667:6667 -p 31999:31999 -p 8181:8181 apache/iotdb:0.12.4

  • OpenTSDB 9aa7f88

    docker run -d --name opentsdb --cpuset-cpus 0-1 -m 4g -h opentsdb -p 4242:4242 -v /opt/opentsdb:/etc/opentsdb petergrace/opentsdb-docker

    You need the following lines in /opt/opentsdb/opentsdb.conf:

      tsd.network.port = 4242
      tsd.http.staticroot = /usr/local/share/opentsdb/static/
      tsd.http.cachedir = /tmp/opentsdb
      tsd.core.plugin_path = /opentsdb-plugins
      tsd.core.auto_create_metrics = true
      tsd.http.request.enable_chunked = true
      tsd.http.request.max_chunk = 1048576
      tsd.storage.fix_duplicates = true
    
  • TickTock 0.4.0-beta

    docker run -d --privileged --name ticktock -h ticktock -p 6182:6182 -p 6181:6181 --cpuset-cpus 0-1 -m 4g ytyou/ticktock:0.4.0-beta --tsdb.timestamp.resolution millisecond --tcp.buffer.size 10mb --http.listener.count 10 --tcp.listener.count 10 --http.responders.per.listener 1 --tcp.responders.per.listener 1

3.3. IoTDB benchmark settings

We use a mixed read-write scenario with a read : write ratio of 1 : 9. In our experience with DevOps workloads, TSDBs mostly handle writes sent by the machines being monitored; the query workload is comparatively small, often less than 10%.

As explained above, the IoTDB benchmark simulates a wind farm with a number of devices and multiple sensors per device (the sensor number is also the write batch size), written by a number of clients. In our experiments, we aimed to use 300 clients, each bound to 1 device with 10 sensors, for a total of 3000 metrics (300 devices * 10 sensors). However, we had to adjust these numbers for some TSDBs; please see below for details. Our goal is to compare maximum throughput, so we require that at least one of the CPU, memory, or IO resources is saturated or close to saturation.

  • InfluxDB, TDEngine, OpenTSDB, TickTock: 300 clients, 300 devices, 10 sensors/device
  • IoTDB: 100 clients, 100 devices, 10 sensors/device (300 clients, 300 devices, 10 sensors/device causes out-of-memory errors in a 4GB docker container)
  • TimescaleDB: 10 clients, 10 devices, 100 sensors/device (300 clients, 300 devices, 10 sensors/device cannot saturate the server's CPU. Note that this is unfair to the other TSDBs: since the sensor number is also the write batch size, TimescaleDB uses a much larger write batch.)

We ran the tests with 100K loops (for InfluxDB, TimescaleDB, and OpenTSDB) or 1M loops (for TDEngine, IoTDB, and TickTock), which corresponds to 2.7 billion or 27 billion data points ingested, respectively. The tests lasted 3 to 18 hours depending on throughput. A sketch of the corresponding benchmark configuration is shown below.
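
For reference, here is a minimal sketch of the core iotdb-benchmark config.properties settings for this workload. The keys follow the iotdb-benchmark configuration format, but the names and values below are illustrative rather than a copy of the exact configuration files we used.

    # Illustrative iotdb-benchmark config.properties sketch (not the exact file we used).
    # For IoTDB we used 100 clients/devices; for TimescaleDB, 10 clients/devices with 100 sensors each.
    DB_SWITCH=TickTock
    CLIENT_NUMBER=300
    DEVICE_NUMBER=300
    SENSOR_NUMBER=10
    # LOOP was 100000 for InfluxDB, TimescaleDB, OpenTSDB and 1000000 for TDEngine, IoTDB, TickTock.
    LOOP=1000000
    # OPERATION_PROPORTION is set so that write : read = 9 : 1 across the five query types.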

4. Throughput comparison

4.1. Write throughput

Among all TSDBs, OpenTSDB has the worst write throughput and TickTock the best; IoTDB is second best. It is worth noting that TickTock uses asynchronous (TCP) writes in this test. Please refer to this page for an explanation of synchronous (HTTP) and asynchronous (TCP) write support in TickTock. We also ran a test using synchronous writes for TickTock, and the throughput was 2.6M data points/s. We will publish another report comparing the performance of synchronous and asynchronous writes.
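
To make the distinction concrete, here is a rough sketch of the two write paths against TickTock. It assumes TickTock's OpenTSDB-compatible /api/put endpoint, the port mapping from the docker command in section 3.2 (HTTP on 6182, TCP on 6181), and an illustrative metric name, tags, and millisecond timestamp.

    # Synchronous (HTTP) write: OpenTSDB-style /api/put; the client waits for the HTTP response.
    curl -s -XPOST http://localhost:6182/api/put -d \
      '{"metric":"wind_speed","timestamp":1645000000000,"value":12.3,"tags":{"device":"d_1","sensor":"s_1"}}'

    # Asynchronous (TCP) write: a plain OpenTSDB-style "put" line; no per-write response is awaited.
    echo "put wind_speed 1645000000000 12.3 device=d_1 sensor=s_1" | nc -q 1 localhost 6181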

Also, please be reminded that TimescaleDB used a write batch size of 100 instead of the 10 used by the other TSDBs.

4.2. Read throughput

There are 5 types of queries with different query patterns. TickTock has the best throughput on all of them, followed by IoTDB. Interestingly, TimescaleDB has the worst query throughput (even worse than OpenTSDB). We suspect this may be due to the much higher number of sensors per device it used (100) compared to the others (10).

Please note that the vertical axis is on a logarithmic scale.

5. Response time comparison

5.1. Write response time

OpenTSDB, TickTock, and IoTDB have the best write response times. Their response times stay below 1 millisecond per operation up to P99 (except IoTDB's P99). Each operation contains a batch of data points (i.e., the sensor number).

Please note that the vertical axis is on a logarithmic scale, so values below 1 millisecond appear as negative.

OpenTSDB and TickTock use asynchronous writes, so their writes are very fast. It is worth mentioning that the OpenTSDB docker image backs HBase with plain local files rather than Hadoop and ZooKeeper; HBase together with Hadoop and ZooKeeper would be too large to fit into a 4GB docker container.

In terms of P999 write response time, all TSDBs' results are much closer to each other than at lower percentiles. TickTock unfortunately has the worst P999 write response time; there is still room for improvement here.

(Please note that TimescaleDB uses a batch size of 100 instead of the 10 used by the others. We have taken this into account by dividing TimescaleDB's original write response time per operation by 10. TimescaleDB's read response times remain unchanged, since reads do not depend on batch size.)

5.2. Read response time

5.2.1. PRECISE_POINT

Precise point query: select v1... from data where time=? and device in ?

TickTock has the best response time for PRECISE_POINT queries, followed by TimescaleDB, TDEngine, IoTDB, InfluxDB, and OpenTSDB, in that order. InfluxDB has the smallest variation across percentiles, so it gets closer and closer to the other TSDBs as the percentile increases.

Also note that OpenTSDB's query response time is much higher than the others'. We are not sure whether this is caused by not using Hadoop.

The remaining 4 query types show a similar pattern to PRECISE_POINT, so their explanations are omitted for brevity.

5.2.2. TIME_RANGE

Time range query: select v1... from data where time > ? and time < ? and device in ?

5.2.3. AGG_RANGE

Aggregation query with time filter: select func(v1)... from data where device in ? and time > ? and time < ?.

5.2.4. GROUP_BY

A group-by-time-range query is hard to express in standard SQL but is useful for time series data, e.g., for down-sampling. Suppose a time series covers 1 day of data. By grouping the data into 1-hour buckets, we get a new time series that contains only 24 data points.
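
By analogy with the other query patterns above, a group-by (down-sampling) query can be sketched as: select func(v1)... from data where device in ? and time > ? and time < ? group by 1h. The exact syntax varies by TSDB.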

5.2.5. LATEST_POINT

Latest point query: select time, v1... where device = ? and time = max(time)

6. CPU, memory, IO comparison

6.1. CPU

The figures above are listed in the order of OpenTSDB, InfluxDB, TimescaleDB, TDEngine, IoTDB, and TickTock. The same order applies to the memory, read IO, and write IO figures in the sections below.

Recall that we ran each TSDB in a dedicated 2-vCPU docker container. The figures above show that the CPU usage of all TSDBs is close to 200%. We consider the CPUs saturated, although there was still some headroom (about 10-20%) for TDEngine and IoTDB.

6.2. Memory

Recall that each docker container has 4GB memory. We monitored each TSDB process's resident set size (RSS) instead of the total memory used by the container. RSS is the portion of a process's memory held in main memory, whereas the total memory used by a container also includes temporary data caches. RSS is therefore a more accurate and realistic measure for comparing the memory required by the TSDBs.
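
As a sketch, here is one way to sample a TSDB process's RSS from the host (not necessarily the exact tooling we used; the container name follows the docker commands in section 3.2):

    # Find the container's main process id on the host, then read its RSS from /proc.
    PID=$(docker inspect -f '{{.State.Pid}}' ticktock)
    grep VmRSS /proc/$PID/status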

In the figures above, TDEngine used the smallest amount of RSS, about 500MB. TimescaleDB and IoTDB used almost all of the memory available to them (i.e., 4GB). TickTock used close to 3GB. InfluxDB and OpenTSDB used about 1GB and 1.2GB, respectively.

6.3. Read IO

InfluxDB had almost no read IO activity. TimescaleDB and IoTDB had no activity most of the time, with a few spikes larger than 4MB/s and 1MB/s, respectively. TDEngine had read IO (but less than 1KB/s) most of the time.

OpenTSDB had a lot of read IO, 5-10MB/s, consistently. This is the reason why OpenTSDB's read response time is quite large.

TickTock had very regular read IO spikes every 5 minutes, ranging from 20KB/s up to 200-400KB/s (plus one 800KB/s spike).

6.4. Write IO

IoTDB had the most intensive write IO, about 10MB/s to 12MB/s at all times. Interestingly, its write response time is still very small, less than 1 millisecond. This makes us suspect that IoTDB might also use asynchronous writes internally, but we cannot verify this yet.

TimescaleDB consistently kept write IO at 1.5MB/s to 1.8MB/s. We attribute this consistency to PostgreSQL, the relational DBMS that TimescaleDB is built on.

InfluxDB's write IO was about 500KB/s most of the time, with a few spikes up to 2MB/s.

TDEngine's write IO fluctuated between 100KB/s and 300KB/s.

TickTock's write IO spiked from 0 to 3MB/s every 5 minutes most of the time, except for a 5MB/s spike at the beginning.

OpenTSDB had no write IO at all. We repeated the tests several times and the results were the same. This surprised us and we do not have a good explanation. We suspect that HBase may keep its data in memory, which would explain the absence of write IO, but it does not explain why OpenTSDB had such intensive read IO (5MB/s to 10MB/s). It remains a mystery to us. We are not sure whether it is related to the fact that the OpenTSDB docker image uses plain files instead of Hadoop.

7. Conclusion & Future

  • TickTock achieves 3.2M data points/s write throughput. The second best is IoTDB (1.5M data points/s), and the worst is OpenTSDB (47K data points/s).
  • TickTock has the best read throughput.
  • OpenTSDB, IoTDB, and TickTock perform writes very fast, at less than 1 millisecond per operation below P999. TickTock unfortunately has the worst P999 write response time; there is still room for improvement here.
  • TickTock has the best read response time.
  • The CPUs of all TSDBs were close to saturation, so the reported throughputs are indeed maximum throughputs.
  • TDEngine used the smallest amount of memory, 500MB. IoTDB and TimescaleDB used all memory available (4GB). TickTock used 3GB memory.
  • All TSDBs except OpenTSDB had very small read IO (less than 1KB/s) most of the time, with occasional spikes. TickTock had regular read IO spikes every 5 minutes, up to 400KB/s at most. Considering that a 5400 RPM HDD can deliver about 75MB/s of read/write, TickTock's read IO is easily sustainable by the HDD.
  • IoTDB has the most intensive write IO, about 10MB/s to 15MB/s. TDEngine has the lightest write IO, fluctuating between 100KB/s and 300KB/s. InfluxDB is also very good, at about 500KB/s consistently. TickTock has regular spikes every 5 minutes, up to 3MB/s most of the time.

In the future, we will evaluate how TickTock behaves and scales with different settings. Stay tuned, and thanks for your interest in TickTock!