FAQ - ytyou/ticktock GitHub Wiki

1. What is the latest release?

TickTockDB (TT for short) is being actively optimized. The latest version is v0.20.0-beta, released on 4/21/2024.

Warning: This release is NOT backward compatible with previous releases. If you need to migrate existing data, please see this wiki page for instructions. This release contains query optimizations that make querying data over long time ranges faster.

2. Does TT support string and boolean values like InfluxDB does?

No, only numerical values are supported. You can, however, use strings and booleans as tag values.
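For example, a sketch of a write where the measurement value is numeric and strings/booleans ride along as tag values. It assumes the default HTTP port 6182 and the OpenTSDB-compatible /api/put endpoint; the metric and tag names are made up, so adjust everything to your setup:

```bash
# Numeric measurement value; the string "ok" and the boolean-like "true" are tag values.
curl -X POST 'http://localhost:6182/api/put' -d '
{
  "metric": "http.request.latency",
  "timestamp": 1700000000,
  "value": 42.7,
  "tags": { "host": "pi5", "status": "ok", "cached": "true" }
}'
```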

3. Does TT support HTTP pipelining like OpenTSDB does?

No. You can use TCP as a workaround.
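A minimal sketch of the TCP workaround, assuming the default TCP port 6181 (check conf/tt.conf for the actual value): many put lines can be streamed back-to-back over a single connection, which gives a similar effect to pipelining.

```bash
# Stream several OpenTSDB-style put lines over one TCP connection.
# Port 6181 is an assumed default; nc flags vary slightly by netcat flavor.
printf 'put cpu.usr 1700000000 12.5 host=pi5\nput cpu.sys 1700000000 3.2 host=pi5\n' \
  | nc -w 1 localhost 6181
```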

4. What collector agents are supported?

TCollector (OpenTSDB's default agent), Telegraf (InfluxDB's default agent), StatsD, and collectd.

5. Failed to bind to any network interfaces, errno=98

If you see something like this in the log: 2023-11-30 11:50:58.307 [ERROR] [main] Failed to bind to any network interfaces, errno=98, it means the ports used by TT (default 6180-6183) are already occupied. The most likely causes are:

  • Another TT instance is running on the same host. [Solution: stop it.]
  • A previous TT instance exited ungracefully (e.g., it was killed) a few seconds ago, and the OS has not released the ports yet. [Solution: wait a while, or add -r to the start command, e.g., bin/tt -c conf/tt.conf -r; see the check below.]
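If you are not sure which case applies, a quick check of the port range (assuming the default 6180-6183) shows whether another process is still holding the ports:

```bash
# See which process, if any, is listening on TT's default ports.
ss -tlnp | grep ':618[0-3]'

# If nothing is listed but the bind still fails, the OS may not have released
# the ports yet; wait a bit, or restart with -r:
bin/tt -c conf/tt.conf -r
```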

6. Too many opened files

If you see a Too many opened files error in the log, or TT simply crashes due to too many open files, please increase your nofile limit. For details, please read increasing nofile in ulimit in the User Guide.
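A minimal sketch of checking and raising the limit for the shell that starts TT (65536 is just an example value; for a permanent change, use /etc/security/limits.conf or your service manager instead):

```bash
# Show the current per-process open-file limit.
ulimit -n

# Raise it for this shell, then start TT from the same shell.
ulimit -n 65536
bin/tt -c conf/tt.conf
```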

7. Why are header files so big and data files so small?

It is likely because the number of time series (i.e., cardinality) in your environment is very small, and the default config tsdb.page.count=32768 is not appropriate for your case.

The size of a data file is determined by the actual amount of data in a TSDB (one TSDB per day). A data file can hold from 1 to tsdb.page.count pages. If you have a small amount of data, the data file will be small. If you have more data than fits into tsdb.page.count pages, a new data file is created.

However, the size of a header file is predetermined by tsdb.page.count, not by your actual data size. Even if you don't have many time series or data points, the header file is the same size, so some disk space is wasted.

So if your cardinality is very small, we suggest you update tsdb.page.count to an appropriate number (e.g., 10 times your cardinality).
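For example, a sketch assuming a cardinality of roughly 100 time series and the usual key = value format of conf/tt.conf:

```
# conf/tt.conf
# ~100 time series: about 10x cardinality keeps header files from wasting space.
tsdb.page.count = 1024
```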

Please refer to Issue #62.

8. Why do index files suddenly grow so big?

The size of index files is determined by the number of time series. This symptom normally means the number of time series is growing rapidly.

If this is not the case, the growth might be caused by a corrupted ticktock.meta file in your data directory; please check that file. The corruption might be due to write requests in bad formats, in either the OpenTSDB put protocol or the InfluxDB line protocol. We trust clients to send requests in the correct formats; if they don't, clients should have caught the problem during testing.

At least up to v0.12.1-beta, malformed write requests might leave TickTockDB's metadata in a bad state and crash TickTockDB. We plan to add more protections in releases after v0.12.1-beta.
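For reference, well-formed writes look roughly like the following sketch. It assumes the default TCP port 6181, and that your TT listener accepts both formats on that port; the metric and tag names are made up:

```bash
# OpenTSDB put protocol: put <metric> <unix-timestamp> <numeric-value> <tag=value> ...
echo 'put cpu.usr 1700000000 12.5 host=pi5' | nc -w 1 localhost 6181

# InfluxDB line protocol: <measurement>[,<tag=value>...] <field=value>[,...] [timestamp]
# (adjust the transport/port to whichever Influx-compatible listener your config exposes)
echo 'cpu,host=pi5 usr=12.5 1700000000' | nc -w 1 localhost 6181
```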

Please refer to Issue #64.

9. Why doesn't Grafana (v10.2.3) show any metrics in the 'Metric Name' drop-down input box?

Grafana v10.2.3 doesn't call TT with the correct parameters when the input box is empty. It is supposed to ask TT for all metrics via the API /api/suggest?type=metric&q=&max=1000 (note the empty parameter q=). But it sends q=cpu, which matches only metrics with the cpu prefix. If there are no such metrics in TT, the response will be empty.
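You can verify that TT itself returns the full metric list by calling the API directly (a sketch assuming the default HTTP port 6182):

```bash
# Ask TT for up to 1000 metric names; note the empty q= parameter.
curl 'http://localhost:6182/api/suggest?type=metric&q=&max=1000'

# For comparison, a prefix query like the one Grafana ends up sending (q=cpu):
curl 'http://localhost:6182/api/suggest?type=metric&q=cpu&max=1000'
```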

Please refer to Issue #65.

10. Do we have to gracefully shut down TT?

We STRONGLY suggest you shut down TT gracefully by calling <ticktock>/admin/stop.sh, although TT can still survive a kill -9 or the host suddenly losing power. Without a graceful shutdown, TT may lose up to 5 minutes (the default) of data held in caches.

ylin30@pi5:~/ticktock $ ./admin/stop.sh
Exiting now.
ylin30@pi5:~/ticktock $ Start shutdown process...
Shutdown process complete

Besides, TT might lose settings that have not been flushed to files yet. E.g., in v0.20.0, if data are rolled up manually with <ticktock>/admin/rollup.sh, you must shut down TT gracefully in order to update the configs so that the newly rolled-up data can be used.

11. How has query performance improved since v0.20.0?

We introduced rollup data in v0.20.0 to improve query performance. All raw data are rolled up into 1-hour downsampled data (1 data point per hour per time series). If a query uses a downsample interval equal to or larger than 1 hour (e.g., 1h-avg), TT reads the rolled-up data instead of the raw data (which is obviously much larger on disk and thus slower to read). FYI, we only support min/max/sum/avg/count downsampling for each time series.
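For example, a query sketch using the OpenTSDB-style /api/query endpoint (assuming the default HTTP port 6182 and a made-up metric name); because the downsample interval is 1 hour, TT can answer it from the 1-hour rollup data:

```bash
# Last 30 days, downsampled to 1-hour averages per time series.
curl -X POST 'http://localhost:6182/api/query' -d '
{
  "start": "30d-ago",
  "queries": [
    { "metric": "cpu.usr", "aggregator": "avg", "downsample": "1h-avg", "tags": { "host": "pi5" } }
  ]
}'
```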

Further, we provide 1-day rollup data to reduce data size for large queries. It is enabled by default on x86 hosts but disabled on ARM hosts (e.g., Raspberry Pi), because Pi disks are usually small and 1-hour rollup is fast enough there. On ARM hosts, you need to run <ticktock>/admin/rollup.sh manually to roll a TSDB up to 1-day rollup data; run it multiple times if you have multiple TSDBs. Note that, in v0.20.0, you have to gracefully shut down TT (./admin/stop.sh) and restart it in order to use the 1-day rollup data.
