Test Plan - ytyou/ticktock GitHub Wiki

1 Normal scenarios

1.1 Write

Generate a number of random data points and write them into the database.
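The random points used throughout these scenarios can come from one shared helper. A minimal sketch (the metric name, tag pool, and one-point-per-second spacing are illustrative choices, not requirements of the plan):

```python
import random
import time

def generate_points(n, metric="test.metric", num_tags=2, seed=42):
    """Generate n random data points as (metric, timestamp, value, tags) tuples.

    Timestamps are strictly increasing (in-order baseline); tag values are
    drawn from a small pool so queries by tag return multiple series.
    """
    rng = random.Random(seed)
    start = int(time.time()) - n  # one point per second, ending now
    points = []
    for i in range(n):
        tags = {f"tag{k}": f"val{rng.randint(1, 3)}" for k in range(num_tags)}
        points.append((metric, start + i, round(rng.uniform(0, 100), 3), tags))
    return points
```

Fixing the seed keeps each test run reproducible, which matters when diffing results against OpenTSDB.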

Write Protocol

  1. Write using OpenTSDB telnet format, with TCP;
  2. Write using OpenTSDB telnet format, with HTTP;
  3. Write using OpenTSDB JSON format, with HTTP;
  4. Write using InfluxDB line protocol, with TCP;
  5. Write using InfluxDB line protocol, with HTTP;
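The five write paths differ only in how a data point is serialized and which listener it is sent to. A sketch of the three encodings plus a raw-TCP sender (host/port of the relevant listener are left to the test config; the InfluxDB timestamp precision is assumed to be seconds here and must match what the server expects):

```python
import socket

def to_opentsdb_telnet(metric, ts, value, tags):
    """OpenTSDB telnet format: 'put <metric> <ts> <value> <k=v> ...' + newline."""
    tag_str = " ".join(f"{k}={v}" for k, v in sorted(tags.items()))
    return f"put {metric} {ts} {value} {tag_str}\n"

def to_opentsdb_json(points):
    """Request body for HTTP writes to /api/put: a list of data-point objects."""
    return [{"metric": m, "timestamp": ts, "value": v, "tags": tags}
            for m, ts, v, tags in points]

def to_influx_line(metric, ts, value, tags):
    """InfluxDB line protocol: '<metric>,<k=v,...> value=<v> <ts>' + newline."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    return f"{metric},{tag_str} value={value} {ts}\n"

def send_tcp(host, port, lines):
    """Write newline-terminated lines over a raw TCP connection."""
    with socket.create_connection((host, port)) as s:
        s.sendall("".join(lines).encode())
```

The same generated points should be pushed through every encoding so that query results can be compared across write paths.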

Compression Versions

All of the supported compression versions (currently v0, v1, v2, and v3) should be tested.
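Cycling through the versions means restarting the server with a different compressor setting before each round. A sketch of the config fragment (the option name `tsdb.compressor.version` is taken from the TickTock documentation; verify it against the build under test):

```
# ticktock.conf -- select the compressor before starting the server
tsdb.compressor.version = 2    # one of 0, 1, 2, 3
```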

1.2 Query

  1. Query the raw data and check that all data points match;
  2. For every combination of the supported aggregators and downsamplers, query the database and make sure the results match the OpenTSDB results;
  3. Query with every combination of tags and make sure the results match the OpenTSDB results:
  • Query without tags;
  • Query with some, but not all, tags;
  • Query with ALL tags;
  • Query with tag=val*;
  • Query with tag=val1|val2;
  4. Corner cases:
  • Query non-existing metrics/tags and expect an empty result;
  • Query with non-existing aggregators/downsamplers and expect an error;
  • Query with a malformed query and expect an error;
  • Test special characters in metric/tag names/values;

Some of the above tests should use a dedicated query listener.
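Since TickTock and OpenTSDB both accept OpenTSDB-style `/api/query` requests, one helper can build the request and be pointed at either server, so the two responses can be diffed directly. A sketch (the default `start` window and aggregator are arbitrary test choices):

```python
import json
import urllib.request

def build_query(metric, aggregator="none", downsample=None, tags=None,
                start="1h-ago"):
    """Build an OpenTSDB-style /api/query request body."""
    sub = {"metric": metric, "aggregator": aggregator}
    if downsample:
        sub["downsample"] = downsample
    if tags:
        sub["tags"] = tags
    return {"start": start, "queries": [sub]}

def run_query(host, port, body):
    """POST the query body to the server and return the parsed JSON response."""
    req = urllib.request.Request(
        f"http://{host}:{port}/api/query",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Iterating `build_query` over the cross product of aggregators, downsamplers, and tag combinations covers items 2 and 3 above; the corner cases reuse the same helper and assert on the error response instead.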

2 With Out-Of-Order (OOO) Data Points

Generate a number of random data points, some of them out of order, and write them into the database. Then:

  1. Query the raw data and check that all data points match;
  2. For every combination of the supported aggregators and downsamplers, query the database and make sure the results match the OpenTSDB results;
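Out-of-order input can be produced by perturbing the in-order stream; the database must still return the points sorted by timestamp. A sketch (the swap-adjacent strategy and the fraction are illustrative):

```python
import random

def make_out_of_order(points, fraction=0.2, seed=7):
    """Swap ~fraction of adjacent points so some arrive out of timestamp order.

    points: list of (metric, ts, value, tags) tuples. The returned list has
    exactly the same contents, just permuted, so the query results should be
    identical to the in-order run.
    """
    rng = random.Random(seed)
    pts = list(points)
    for i in range(len(pts) - 1):
        if rng.random() < fraction:
            pts[i], pts[i + 1] = pts[i + 1], pts[i]
    return pts
```

Because only the arrival order changes, the raw-query check from scenario 1 can be reused verbatim.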

3 With Duplicate Data Points

Generate a number of random data points, some of them out of order and some duplicated, and write them into the database. Then:

  1. Query the raw data and check that all data points match;
  2. For every combination of the supported aggregators and downsamplers, query the database and make sure the results match the OpenTSDB results;
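Duplicates should cover both the identical case (same value resent) and the conflicting case (same metric/timestamp/tags, different value), since which value wins under a conflict depends on the database's documented semantics. A sketch:

```python
import random

def add_duplicates(points, fraction=0.1, seed=11):
    """Return points plus ~fraction duplicates inserted next to the originals.

    A duplicate repeats metric, timestamp, and tags; with 50% probability it
    also repeats the value, otherwise it carries a conflicting value.
    """
    rng = random.Random(seed)
    out = []
    for p in points:
        out.append(p)
        if rng.random() < fraction:
            m, ts, v, tags = p
            dup_v = v if rng.random() < 0.5 else round(v + 1.0, 3)
            out.append((m, ts, dup_v, tags))
    return out
```

When verifying, collapse the expected list to one point per (metric, timestamp, tags) key first, using whichever conflict-resolution rule the database specifies.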

4 With Compaction

  1. Generate a number of random data points and write them into the database.
  2. Query the raw data and check that all data points match.
  3. Trigger compaction.
  4. Query the raw data and check that all data points match.
  5. For every combination of the supported aggregators and downsamplers, query the database and make sure the results match the OpenTSDB results.
  6. Restart the database.
  7. Query the raw data and check that all data points match.
  8. For every combination of the supported aggregators and downsamplers, query the database and make sure the results match the OpenTSDB results.
  9. Write some additional random data points into the already-compacted database.
  10. Query the raw data and check that all data points match.
  11. For every combination of the supported aggregators and downsamplers, query the database and make sure the results match the OpenTSDB results.
  12. Trigger compaction again.
  13. Query the raw data and check that all data points match.
  14. For every combination of the supported aggregators and downsamplers, query the database and make sure the results match the OpenTSDB results.
  15. Restart the database.
  16. Query the raw data and check that all data points match.
  17. For every combination of the supported aggregators and downsamplers, query the database and make sure the results match the OpenTSDB results.
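The repeated "query the raw data, check all data points match" steps reduce to one verification helper run after every action (how compaction is triggered and how the server is restarted are deployment-specific and left out here). A sketch of the check against an OpenTSDB-style raw-query response:

```python
def points_match(expected, response, tol=1e-6):
    """Check that every generated point appears in a raw /api/query response.

    expected: list of (metric, ts, value, tags) tuples.
    response: parsed JSON, a list of result objects each carrying a
    "dps" map of {timestamp-string: value}, as OpenTSDB returns it.
    """
    dps = {}
    for series in response:
        for ts, val in series.get("dps", {}).items():
            dps[int(ts)] = val
    return all(ts in dps and abs(dps[ts] - v) <= tol
               for _, ts, v, _ in expected)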

5 With Recovery

  1. Generate a small number of random data points and write them into the database. Keep the count small enough that the points are NOT flushed to disk.
  2. Query the raw data and check that all data points match.
  3. Wait for append.log to be generated.
  4. Kill the database forcefully (kill -9, NOT a normal shutdown).
  5. Start the database again.
  6. Query the raw data and check that all data points were recovered.
  7. For every combination of the supported aggregators and downsamplers, query the database and make sure the results match the OpenTSDB results.
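Steps 4 and 5 can be automated around a process handle; SIGKILL bypasses any shutdown hooks so nothing buffered in memory is flushed on the way down, forcing recovery to replay append.log. A sketch (the server command line and warmup delay are placeholders for the actual test config):

```python
import signal
import subprocess
import time

def kill_dash_9(proc):
    """Force-kill the server (equivalent to kill -9), skipping normal shutdown."""
    proc.send_signal(signal.SIGKILL)
    proc.wait()

def restart(start_cmd, warmup=2.0):
    """Start the server again and give it time to replay append.log."""
    proc = subprocess.Popen(start_cmd)
    time.sleep(warmup)  # crude; polling the HTTP port would be more robust
    return proc
```

After the restart, rerun the same raw-data check used in step 2 to confirm the points survived the crash.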