TickTockDB vs. InfluxDB: max cardinality comparison in PI 0 W (ARMv6, 32bit OS) 2
Table of Contents
1. Introduction
2. IoTDB-benchmark Introduction
3. Experiment settings
4. Max cardinality: Resource consumption comparison
5. Conclusion
1. Introduction
In our previous wiki, we compared the max cardinality that TickTockDB and InfluxDB can handle on the RPI-zero-Wireless (ARMv6, 32bit OS). Both TickTockDB and InfluxDB used the InfluxDB line protocol for writes. Note that we used 200 sensors per device (`SENSOR_NUMBER=200` in the benchmark config), which means that each write request consisted of 200 data points in one line.
That is not a typical DevOps scenario. One line in the InfluxDB line protocol usually contains only the metrics collected by one collector in one round. For example, the CPU collector and the diskio collector in Telegraf (InfluxDB's default client) collect 10 and 11 metrics per round, respectively:
```
cpu,cpu=cpu0,host=yi-IdeaPad usage_irq=0,usage_steal=0,usage_guest=0,usage_guest_nice=0,usage_user=0.19980019983607125,usage_system=0.1998001997997278,usage_idle=99.60039960003711,usage_nice=0,usage_iowait=0,usage_softirq=0 1677981790000000000
cpu,cpu=cpu1,host=yi-IdeaPad usage_user=0.10000000009313226,usage_system=0,usage_iowait=0,usage_softirq=0,usage_guest_nice=0,usage_idle=99.89999999990687,usage_nice=0,usage_irq=0,usage_steal=0,usage_guest=0 1677981790000000000
diskio,host=yi-IdeaPad,name=sda1 read_bytes=28385280i,write_time=0i,io_time=2416i,merged_reads=0i,merged_writes=0i,writes=0i,write_bytes=0i,read_time=2148i,weighted_io_time=2148i,iops_in_progress=0i,reads=429i 1677981790000000000
```
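For illustration, here is a minimal sketch of what one such write looks like programmatically. This is not from the benchmark itself: the port (6182) and the `/api/write` endpoint are assumed TickTockDB defaults, and the measurement, tag, and field names are made up.

```python
import time
import urllib.request

# Build one InfluxDB line-protocol line: one collector round = one line.
# Measurement, tags, and field names here are made up for illustration.
fields = ",".join(f"sensor_{i}={i * 0.1}" for i in range(10))  # 10 data points
line = f"cpu,host=pi0w,device=d001 {fields} {time.time_ns()}"

# POST to TickTockDB; port 6182 and /api/write are assumed defaults.
# For InfluxDB 1.x, the equivalent would be http://localhost:8086/write?db=mydb.
req = urllib.request.Request(
    "http://localhost:6182/api/write",
    data=line.encode("utf-8"),
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # expect 200 on success
```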
To simulate a typical DevOps scenario, we reduced the number of sensors per device in the IoTDB-benchmark config from 200 to 10, and again compared the max cardinality that TickTockDB and InfluxDB can handle. We kept a 10-second interval between consecutive operations from a client, simulating clients that collect metrics every 10 seconds (see the config sketch below).
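A sketch of the relevant IoTDB-benchmark settings. Only `SENSOR_NUMBER` appears verbatim in our notes; `DEVICE_NUMBER` and `POINT_STEP` are standard IoTDB-benchmark keys, shown here with illustrative values for the 10k-cardinality test rather than our exact config file.

```properties
# config.properties (excerpt) -- illustrative values
# simulated devices; varied from 1k (10k cardinality) to 9k (90k) across tests
DEVICE_NUMBER=1000
# sensors per device, reduced from 200 in the previous wiki
SENSOR_NUMBER=10
# ms between consecutive points from a client, i.e., the 10-second interval
POINT_STEP=10000
```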
2. IoTDB-benchmark Introduction
Same as the previous performance evaluation.
3. Experiment settings
Same as the previous performance evaluation, except:
- Use TickTockDB 0.11.1 (instead of 0.11.0).
- Each test lasted for 10 hours (instead of 6 hours), simulating a scenario in which clients collect metrics and send them to TickTockDB/InfluxDB every 10 seconds, continuously for 10 hours.
4. Max cardinality: Resource consumption comparison
Note that cardinality is equal to (device number * sensor number); the short sketch below enumerates the device/sensor combinations used in this section.
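A throwaway sketch of the test matrix; the device counts are taken from the tests described in sections 4.1-4.5.

```python
SENSORS_PER_DEVICE = 10

# Device counts per database, taken from the tests described below.
tests = {"InfluxDB": [1_000, 2_000], "TickTockDB": [1_000, 5_000, 8_000, 9_000]}

for db, device_counts in tests.items():
    for devices in device_counts:
        cardinality = devices * SENSORS_PER_DEVICE
        print(f"{db}: {devices} devices x {SENSORS_PER_DEVICE} sensors/device = {cardinality}")
```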
4.1 CPU
The above figure shows that InfluxDB can handle up to 10k cardinality, while TickTockDB can handle up to 80k.
The leftmost test case injects 10k cardinality (= 1k devices * 10 sensors/device) into InfluxDB. CPU.idle dropped to 5%. Note that about 10% of CPU usage was attributed to an OS metrics collector running on the PI0-w (which is how we can plot these figures), so InfluxDB itself consumed roughly 85% of CPU.
The second test case from the left increased the cardinality to 20k (i.e., 2k devices) for InfluxDB. CPU.idle was 0% (i.e., the CPU was completely saturated). In fact, the test lasted 43,264.59 seconds, longer than the planned 36,000 seconds, and 13,570 out of 64,766,230 data points failed to insert into InfluxDB. (At 20k points per round, ~64.8M points over 43,264 seconds works out to roughly one round every 13.4 seconds, i.e., writes could not keep up with the 10-second schedule.)
The third test case from the left injects 10k cardinality (= 1k devices * 10 sensors/device) into TickTockDB. TickTockDB used only 10% of CPU; another 10% went to the OS collector, so CPU.idle was 80%. This shows that TickTockDB is much more lightweight than InfluxDB.
We kept increasing the cardinality to 50k, 80k, and 90k to stress-test TickTockDB. The PI0-w was saturated at 90k cardinality (CPU.idle was 0%), so we conclude that the max cardinality TickTockDB can sustain is 80k.
4.2 IO Util
The above figure shows the IO util metric. IO util for InfluxDB was much higher than for TickTockDB in general. At 10k cardinality, IO util was already 70% for InfluxDB. At 20k cardinality, IO util was actually lower, because the CPU was completely saturated and writes couldn't keep up; this is also why the test lasted longer than planned.
IO util stayed at 1-2% for TickTockDB throughout all the tests, from 10k to 90k cardinality. This means that, on the PI0-w, TickTockDB is CPU bound rather than IO bound.
4.3 Write bytes rate
The write bytes rate shows a very similar pattern to IO util.
4.4 Read bytes rate
Both InfluxDB and TickTockDB had very low read bytes rates. In InfluxDB's 20k cardinality test, the read bytes rate spiked; we believe this was due to a background compaction task.
4.5 Memory
InfluxDB used much more memory than TickTockDB. At 10k cardinality, InfluxDB used close to 180-200MB, while TickTockDB used about 30MB by the end of the test.
5. Conclusion
- We compared TickTockDB with InfluxDB on the PI-zero-wireless (ARMv6, 32bit OS) in terms of max cardinality. Instead of backfill scenarios, we simulated normal scenarios in which a set of clients each send a set of time series (10 sensors per client in one write) at a 10-second interval.
- InfluxDB's max cardinality is 10k (i.e., 1k devices and 10 sensors/device).
- TickTockDB's max cardinality is 80k (i.e., 8k devices and 10 sensors/device).
- At the same cardinality load, TickTockDB consumes much less CPU, IO, and memory than InfluxDB.
- On the PI-zero-W, CPU is the bottleneck for TickTockDB; it saturated earliest among all OS resources.