Performance tests - global-121/121-platform GitHub Wiki

Load testing the platform

You can create large data sets for load/performance testing in multiple ways:

By using a seed script which also creates mock data

  • You can also use the seed endpoint to immediately create mock data
  • Currently, it works only for nlrc-multiple, and within that it only loads data for the OCW program. If needed for other programs, this one can be cloned and adapted.
  • You can specify with parameters how many registrations, transactions and messages you want
  • The underlying scripts multiply data directly in the database, which is very fast.
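As a sketch of how such a call could look, the snippet below builds the seed-endpoint URL with mock-data parameters. Note that the path and parameter names here are assumptions for illustration only; check the 121-service Swagger UI for the actual endpoint and query parameters.

```typescript
// Sketch: build the seed-endpoint URL with mock-data parameters.
// The path and parameter names are assumptions for illustration;
// check the 121-service Swagger UI for the actual API.
const baseUrl = 'http://localhost:3000/api';

function buildSeedUrl(opts: {
  script: string; // e.g. 'nlrc-multiple'
  registrations: number; // how many mock registrations to create
  transactions: number; // mock transactions to create
  messages: number; // mock messages to create
}): string {
  const query = new URLSearchParams({
    script: opts.script,
    mockRegistrations: String(opts.registrations),
    mockTransactions: String(opts.transactions),
    mockMessages: String(opts.messages),
  });
  return `${baseUrl}/scripts/reset?${query}`;
}

// Then POST to the URL, e.g. with fetch (Node 18+):
// await fetch(buildSeedUrl({ script: 'nlrc-multiple', registrations: 100000,
//   transactions: 2, messages: 2 }), { method: 'POST' });
console.log(
  buildSeedUrl({ script: 'nlrc-multiple', registrations: 100000, transactions: 2, messages: 2 }),
);
```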

Via 121's import function

  • You can use 121's import function to import any large data set that you want.
  • This will take a bit of time, but will give you more control over diverse data.
  • Note that this will not give you any transactions, messages, etc.

Generating test data

The following library can generate a CSV-file with large amounts of data: https://github.com/TheBlackHacker/csv-test-data-generator

Download generator.js (plus related code) from the link above. One of the following commands can be used to generate a test data set:

NLRC

node generator.js \ 
    "id,note,phoneNumber,preferredLanguage,fspName,paymentAmountMultiplier,namePartnerOrganization,nameFirst,nameLast,vnumber,whatsappPhoneNumber" \
    "seq,alpha(0),digit(11),pick(en|nl),pick(Intersolve-whatsapp|Intersolve-no-whatsapp),pick(1|2),alpha(10),first,last,digit(10),digit(11)"  \
    5000 121-registered-pa_5000.csv
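If you prefer a self-contained script over the external generator, the same kind of CSV can be sketched in TypeScript. The column set below mirrors the NLRC command above; the random helpers and name values are ad hoc illustrations, not part of the platform.

```typescript
// Sketch: generate a 121 import CSV with the same columns as the NLRC
// generator command above. The helpers are ad hoc, not platform code.
const HEADER = [
  'id', 'note', 'phoneNumber', 'preferredLanguage', 'fspName',
  'paymentAmountMultiplier', 'namePartnerOrganization', 'nameFirst',
  'nameLast', 'vnumber', 'whatsappPhoneNumber',
].join(',');

const digits = (n: number): string =>
  Array.from({ length: n }, () => Math.floor(Math.random() * 10)).join('');
const pick = <T>(options: T[]): T =>
  options[Math.floor(Math.random() * options.length)];

function generateRows(count: number): string[] {
  const rows = [HEADER];
  for (let i = 1; i <= count; i++) {
    rows.push([
      String(i),                                              // id: seq
      '',                                                     // note: alpha(0)
      digits(11),                                             // phoneNumber
      pick(['en', 'nl']),                                     // preferredLanguage
      pick(['Intersolve-whatsapp', 'Intersolve-no-whatsapp']), // fspName
      pick(['1', '2']),                                       // paymentAmountMultiplier
      `Org${i}`,                                              // namePartnerOrganization
      `First${i}`,                                            // nameFirst
      `Last${i}`,                                             // nameLast
      digits(10),                                             // vnumber
      digits(11),                                             // whatsappPhoneNumber
    ].join(','));
  }
  return rows;
}

// Mirror the command above by writing 5000 rows to a file:
// import { writeFileSync } from 'fs';
// writeFileSync('121-registered-pa_5000.csv', generateRows(5000).join('\n'));
```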

Jest Performance Test Suite

You can execute or add new performance tests using the Jest framework. Jest is widely used for API integration tests, but with a few extra tweaks it can also serve as a load-testing tool. The performance test suite is located under 121-platform/services/121-service/test/performance
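Jest has no built-in latency assertion, so a performance test typically times an operation and fails if it exceeds a budget. A minimal sketch of that pattern (the helper name, budget, and the commented test are illustrative, not the suite's actual utilities):

```typescript
// Sketch: time an async operation and fail if it exceeds a budget.
// Helper name and the example budget are illustrative only.
async function expectUnderMs<T>(
  budgetMs: number,
  operation: () => Promise<T>,
): Promise<T> {
  const start = Date.now();
  const result = await operation();
  const elapsed = Date.now() - start;
  if (elapsed > budgetMs) {
    throw new Error(`Operation took ${elapsed}ms, budget was ${budgetMs}ms`);
  }
  return result;
}

// In a Jest test this could be used as:
// it('completes a 100k-registration payment within budget', async () => {
//   await expectUnderMs(120_000, () => doPayment(programId, paymentNr));
// });
```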

Performance Tests

This directory contains performance tests for the 121-service that are automatically executed in CI via a scheduled cronjob. Tests are distributed across 3 shards to enable parallel execution and optimise CI runtime.

Test Distribution Strategy

The workflow uses a predefined exclusion-based approach:

  • Shard 1: All tests EXCEPT two specific long-running tests
  • Shard 2: payment-100k-registration-intersolve-visa.test.ts (long-running payment test)
  • Shard 3: performance-during-payment.test.ts (performance monitoring test)

Adding New Performance Tests

To add a new performance test:

  1. Create a new test file in the /test/performance/ directory
  2. By default, it will be automatically added to Shard 1
  3. If your test is extremely long-running or resource-intensive, consider:
    • Updating the workflow to move it to its own shard, OR
    • Adding it to the exclusion list for Shard 1 and creating a dedicated shard

Automatic Discovery

The GitHub Actions workflow automatically:

  • Scans all *.test.ts files in /test/performance/
  • Assigns all tests to Shard 1 EXCEPT the two predefined long-running tests
  • Runs the two long-running tests in separate shards (2 and 3)
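The assignment logic above can be sketched as follows. The real logic lives in the GitHub Actions workflow file, not in application code; this TypeScript version is only an illustration of the rule.

```typescript
// Sketch of the workflow's shard-assignment rule: every *.test.ts file
// goes to shard 1, except the two predefined long-running tests, which
// get their own dedicated shards.
const DEDICATED_SHARDS: Record<string, number> = {
  'payment-100k-registration-intersolve-visa.test.ts': 2,
  'performance-during-payment.test.ts': 3,
};

function assignShards(testFiles: string[]): Map<number, string[]> {
  const shards = new Map<number, string[]>([[1, []], [2, []], [3, []]]);
  for (const file of testFiles) {
    const shard = DEDICATED_SHARDS[file] ?? 1; // default: shard 1
    shards.get(shard)!.push(file);
  }
  return shards;
}
```

For example, a newly added `my-new.test.ts` lands in shard 1 automatically, while the two predefined tests stay in shards 2 and 3.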

When to Modify the Workflow

You need to update the workflow file if:

  • Your new test is extremely resource-intensive and should run alone
  • Shard 1 becomes too crowded and needs rebalancing
  • You want to add a 4th shard for better distribution

Environment Variables

All performance tests automatically receive:

  • HIGH_DATA_VOLUME=true - Use high data volumes for realistic performance testing
  • CI=true - Indicates running in CI environment
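A test can branch on these variables to scale its data volume. A minimal sketch (the registration counts are illustrative, not the suite's actual values):

```typescript
// Sketch: scale the data volume based on the env vars set by CI.
// The counts are illustrative, not the suite's actual values.
function getRegistrationCount(env: Record<string, string | undefined>): number {
  return env.HIGH_DATA_VOLUME === 'true' ? 100_000 : 100; // small set locally
}

// In a test: const count = getRegistrationCount(process.env);
console.log(getRegistrationCount({ HIGH_DATA_VOLUME: 'true', CI: 'true' })); // → 100000
```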

Performing the tests

You should test performance against a production build. The backend has only one build (there is no separate production-optimized build), so you can run it as usual when running performance tests against it. When testing, throttle the internet connection to 'Fast 3G' to simulate a slower connection.

IMPORTANT: Be careful when testing bulk messages: we don't want to spend our Twilio funds, and we want to test our own code rather than someone else's. Make sure to set the MOCK_TWILIO and MOCK_INTERSOLVE environment variables.

When performing the tests, at least the core features should be tested:

  • Registering a PA
  • Bulk actions (e.g. including registrations)
  • Using the PA table (filtering etc.)
  • Doing a payment
  • Exporting

Measurements

  • Screens shouldn't take longer than 2 seconds to load
  • API calls shouldn't time out