Contribution Guidelines for Development - PROCEED-Labs/proceed GitHub Wiki

Contributor License Agreement

Before you can submit Pull Requests, you must accept the Contributor License Agreement.

Code Understandability

Understandability of the source code is very important to us, so please consider the following points:

  • prefer long, descriptive names over short, abbreviated ones (variables, files, resources, etc.): bpmn-activity-modifications-in-xml.js instead of act-mod.js
  • comment the source code so that it is absolutely clear what it does
  • document overarching, internal concepts about the program code and structures in the wiki
  • document concepts that influence the (graphical) interaction with the PROCEED software in the public user documentation
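For example, a small helper with a comment and JSDoc block that make its intent obvious (purely illustrative; function and parameter names are hypothetical, and real code would use a proper BPMN/XML library instead of a regex):

```javascript
/**
 * Replaces the name of a BPMN activity inside an XML string.
 *
 * @param {string} bpmnXml - the process definition as XML text
 * @param {string} activityId - id attribute of the activity to rename
 * @param {string} newName - the name to set
 * @returns {string} the modified XML
 */
function renameBpmnActivityInXml(bpmnXml, activityId, newName) {
  // naive string-based replacement, only to keep the example self-contained
  const pattern = new RegExp(`(<[^>]*id="${activityId}"[^>]*name=")[^"]*(")`);
  return bpmnXml.replace(pattern, `$1${newName}$2`);
}
```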

Naming Convention

  • Variables as camelCase with an initial lowercase letter: const cpuAverage
  • Resources and files containing graphical content, like .vue files, use PascalCase with an initial uppercase letter: ProcessEditorView.vue
  • Functionalities and libraries, usually .js files, use kebab-case: bpmn-modifications.js
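The conventions above in one sketch (all identifiers are hypothetical):

```javascript
// file name in kebab-case: cpu-load-utils.js (hypothetical)
// a graphical component would instead use PascalCase: ProcessEditorView.vue

// variables and functions in camelCase with an initial lowercase letter
function computeAverageCpuLoad(cpuLoadSamples) {
  return cpuLoadSamples.reduce((sum, load) => sum + load, 0) / cpuLoadSamples.length;
}

const cpuAverage = computeAverageCpuLoad([0.2, 0.4, 0.6]);
```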

Code Conventions

  • use Prettier as explained in the Installation Guide
    • Prettier runs automatically before every commit via husky/git hooks, and formatting is also checked in the CI pipeline
    • we also use ESLint, but only for restricting some JS environment features

Trunk Based Development

Our monorepo is set up for Trunk Based Development: all contributors frequently merge directly into main without long-lasting feature branches. This allows for Continuous Deployment and avoids merge hell. One important rule is to never break the build, so all unstable changes should be committed in a way that does not disrupt production (see Feature Flags below).

You should create branches/Pull Requests very often, for every little achievement. Sometimes this is not possible; then use Feature Flags. But Feature Flags are not always practical either (e.g. when they would require too many if-statements), in which case there is a small exception that should not be used often:

image

Changing the Database Schema for Development

The database must be started and set up before the MS is started, so yarn dev-ms-db should run before yarn dev-ms. It starts and sets up a Docker-based Postgres database with the Prisma schema of the main branch.

Note

yarn dev-ms-db:

  1. starts a Docker container with a Postgres DB (if not already started)
  2. creates the default db proceed_db (if not already existing)
  3. optionally, if on main, applies the table schema of the main branch to the default db
  4. writes the database URL of the default db into the MS .env file

If you want to develop a new feature that does not require changing the database schema, create a new branch and just use the previous commands. The new branch then uses the database schema of main: yarn install (if there are new dependencies), yarn dev-ms-db, yarn dev-ms

If you want to develop a new feature that requires changing the database schema, you do not want to change the schema of the default db, because this would change the database of the main branch and lead to database inconsistencies when you switch back and forth between main and branches with a changed schema. Therefore, after creating a new branch, change the database structure as follows:

  1. change db schema in schema.prisma and add default values if you change an existing table (@default( ... ))
  2. run yarn dev-ms-db-new-structure --name "new column name added" to create a new branch db, apply the new schema and create a migration history
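As an illustration, such a schema change could look like this (the model and field names are hypothetical; only the @default attribute, which keeps existing rows valid, is the point here):

```prisma
model Process {
  id         String  @id
  name       String
  // new column added on the feature branch
  isArchived Boolean @default(false)
}
```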

Note

yarn dev-ms-db-new-structure <--name "new column name added">:

  1. starts a Docker container with a Postgres DB (if not already started)
  2. creates the new branch db proceed_db_<branch-name> (if not yet existing)
  3. applies the changed schema to the branch db and creates a migration history with the given comment string
  4. writes the database URL of the branch db into the MS .env file

Note

Optional: yarn dev-ms-db-generate generates the TypeScript types (usually auto-executed with yarn install or prisma migrate)

Problem: if you want to go back to the main branch, the db does not change automatically, because the branch db URL is written into the .env file, which is not versioned by git. To use the main db again, run yarn dev-ms-db, which writes the main/default db URL into the MS .env file. This also works if you are still on the feature branch.

If you go back from main to your feature branch with the changed schema, just run yarn dev-ms-db-new-structure to switch to the correct database.

After you have finished your development, run yarn dev-ms-db-delete on the feature branch to delete the development database.

Note

yarn dev-ms-db-delete <--all | --branch <branch-name>>:

  1. starts a Docker container with a Postgres DB (if not already started)
  2. if no options are given, deletes the development database of the current branch; with --all, deletes the development databases of all branches; with --branch <name>, deletes the one of the specified branch

Feature Flags

In order to enable Trunk Based Development, we make use of Feature Flags. They are simple Booleans collected in our /FeatureFlags.js file that enable/disable certain features in the system (with simple if-statements). We use these flags to avoid long-lasting feature branches: they allow a developer to push not-yet-finished code to the main branch, where it stays inactive because it sits inside a deactivated if-statement for the feature flag.

This allows short-lived branches (Trunk Based Development) and avoids big merge conflicts, because every developer can see the latest development code on the main branch.

The development of a new feature should generally look like this:

  1. The developer creates a new branch and adds a new variable (i.e. feature flag) to the /FeatureFlags.js file in the root directory. This flag should be false by default.
    1. The dev commits the /FeatureFlags.js file in git.
    2. The dev switches the feature flag to true on the local development system.
    3. The dev adds if-statements in the code (which check the value of the feature flag) and implements the feature inside the if-statement. Any new, unstable, or unfinished code should not execute while the flag is false. Attention: any existing features should be untouched and continue working as if the changes weren't present (see this for strategies).
    4. Before committing the new code, the dev sets the feature flag in the FeatureFlags.js file back to false, so that the feature is not activated by default. (Alternatively, the dev does not commit the FeatureFlags.js file.) The work can and should be merged at any time without affecting production code.
  2. Any other dev wanting to work on this new feature can pull the branch and can locally activate the feature in the FeatureFlags.js file.
  3. After sufficient testing of the new feature, the flag can be removed. If the feature is general-purpose, like a refactor or a switch to a different logging system, the feature flag can be removed completely: every if-statement is removed and the new code becomes a regular, permanent part of the codebase. (If the feature should only be available in specific versions of the MS, the dev converts the feature flag into an environment variable.)
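The pattern from the steps above can be sketched like this (the flag name and the feature code are hypothetical; in the repo the flags live in the /FeatureFlags.js file):

```javascript
// In the repo this object lives in /FeatureFlags.js; inlined here to stay self-contained.
const FeatureFlags = {
  newProcessExport: false, // default false, so main never runs the unfinished code
};

function exportProcess(process) {
  if (FeatureFlags.newProcessExport) {
    // new, unfinished implementation: only runs when the flag is switched on locally
    return { format: 'bpmn-v2', process };
  }
  // existing behavior stays untouched while the flag is off
  return { format: 'bpmn', process };
}

console.log(exportProcess({ id: 1 }).format); // 'bpmn' while the flag is false
```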

Tip

Instead of using the FeatureFlags.js file, the developer can also use environment variables as feature flags; the if-statement would then read process.env.FEATURE_FLAG. Environment variables can be set in multiple ways; for example, Next.js automatically reads them from .env files (hint: .env files are excluded from git). Usually, we use environment variables to enable specific functionality when hosting the Management System, but using them for developing features behind a feature flag can also be useful, because it is sometimes easier to switch between multiple .env files.
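An environment-variable-based flag could look like this (FEATURE_NEW_EXPORT is a hypothetical variable name):

```javascript
function isNewExportEnabled() {
  // environment variables are always strings, so compare explicitly
  return process.env.FEATURE_NEW_EXPORT === 'true';
}

if (isNewExportEnabled()) {
  console.log('new export code path active');
} else {
  console.log('stable code path');
}
```

Note that in Next.js, environment variables used in browser-side code must be prefixed with NEXT_PUBLIC_; variables without that prefix are only available on the server.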

Note

If a feature should only be selectively enabled at runtime (for example, the automation section if it were a paid feature), we normally don't use a feature flag or environment variable. Instead, the feature is converted to a resource in our IAM CASL abilities management. The check then becomes a normal permission check like for all other features, with our <AuthCan> component etc., that can incorporate different layers like user/space/environment permissions. This gives us more fine-grained control (enabling only for certain users/organizations) and is simpler than checking process.env AND the user/space abilities. But it moves the feature to the DB (for abilities) and involves more initial setup for the CASL types, so it shouldn't be done too early, only after the feature has been tested sufficiently in step 2.

Testing

Automatic checks

Locally on your computer, before you create a new version in git (a "commit"), a linting check runs automatically.
Before you push the created version from your local computer to the remote GitHub repository, the Engine and MS unit tests are executed automatically.

Hint: both are configured with husky hooks inside the root package.json file.

After pushing your code to GitHub, our continuous deployment pipeline runs the E2E-Tests for the MS and the Engine.

Writing and running Tests

Please try to write test code before you start writing the actual functionality (Test-Driven Development). We are aware that this is not always easy (especially if a lot of mocking is involved), so it is sometimes okay to not have everything tested. You can find example tests in the project folders.

  • Unit Tests: we usually use Jest
    • Before every push to the repo, the tests of the Engine and the MS are automatically executed
    • Start the unit tests manually: yarn test (Engine) or yarn test-ms (MS)
  • E2E-Tests:
    • For the Management System, we mainly test the graphical interfaces using Playwright in our E2E-Tests. See here for a usage guide.
    • For the Engine, we mainly test the API in our E2E-Tests.
    • Start the E2E-Tests manually: yarn test-e2e (Engine) or with the Playwright VS Code Extension (MS)

Libraries

We try to have a low footprint and an understanding of the underlying code, so:

  • if possible, don't introduce new libraries
  • don't duplicate code, reuse it

image

  • Add dependencies: navigate into the project folder, e.g. cd src/management-system, and use yarn add [--dev] eslint to add a dependency => this automatically adds the dependency to the package.json of the current project but hoists the installation to root/node_modules (instead of the node_modules folder of the current project)
    • there is only one yarn.lock file in the root of the PROCEED repo

Debugging

Read how to Debug the MS and the Engine

Errors

see the wiki page about how to handle Errors

Building

To build the bundled and minified JavaScript files, run the following commands:

Engine:

# Node.js
yarn build

# Browser version
yarn build-engine-web

The results will be generated inside the /build/engine/ folder.

Management System:

# Next.js standalone build
yarn build-ms

The results will be generated inside the /src/management-system-v2/.next/* folder.

Generate JSDoc API

To generate the JSDoc API documentation, use yarn jsdoc. Afterwards, the generated HTML files can be found in ./jsdoc/output_html; open index.html to view the documentation.

To configure JSDoc, change the jsdoc.config.json file; for details, see the JSDoc README.

Docker

The Engine can also be started from a Docker image (automatically fetched from Docker Hub). To run a Docker container, execute the following:

Engine (only useful on Linux because of the possible --network host parameter in Docker):

yarn docker:run

To stop a running Docker container, execute the following:

yarn docker:stop

MS Server:

Note: To make use of our monorepo setup and avoid lengthy install times, we reuse the MS build inside the MS Docker image. This means you first have to build the MS with yarn build-ms and then run these commands:

yarn docker:build-ms

And to start the server:

yarn docker:run-ms

That command uses the .env.development file inside the MS-v2 folder. You can change the script there to use different environment variables.

For the exact Docker commands, look into the Dockerfiles for the Engine and the Server. They explain multiple options and possibilities for starting and configuring the Docker container.

Linux Systemd Service

If you want the Engine to start automatically at OS start, you can use a systemd service on most current Linux systems.

You can find a template service file called proceed-engine.service inside the build folder (e.g. /build/engine). Copy the file to /etc/systemd/system/.

Next, replace the keywords <user>, <dir-where-proceed-engine-is-installed> and <path-to-node-binary> with the respective values (without '<' and '>').

Now, register the new file with the system via sudo systemctl daemon-reload. Afterwards, you can control the PROCEED Engine with:

sudo systemctl start|stop|restart proceed-engine.service

Use

systemctl status proceed-engine.service

and

journalctl -ef --unit proceed-engine.service

to see the status and log entries.

If you want to (not) load the PROCEED Engine at startup, just type

sudo systemctl enable|disable proceed-engine.service