Knowledge Base - KonstantinosLamprakis/42_ft_transcendence GitHub Wiki

Project structure

pong-tournament-root/
├── services/
│   ├── auth-service/            # Auth microservice (Google OAuth + JWT + 2FA)
│   ├── game-service/            # Game engine microservice (Pong logic, scoring)
│   ├── chat-service/            # Real-time chat microservice (WebSocket support)
│   ├── api-gateway/             # API gateway aggregating microservices, SSR renderer
│   ├── monitoring/              # Prometheus config + Grafana dashboards
│   ├── logging/                 # ELK stack config (Docker + pipelines)
│   └── common/                  # Shared code (utils, types, DB schema)
│
├── docker-compose.yml           # Orchestrate all microservices + ELK + Prometheus + Grafana
└── README.md

Commands

  • npm install:
    • installs node dependencies from package.json
    • npm install <package-name>: also adds this specific package to your package.json
    • generates package-lock.json, which locks the versions of your packages so they are the same every time npm install runs and sets up the project. This should NOT be in .gitignore, as it keeps everyone on the same package versions and avoids inconsistencies
  • npm start: runs the "special" script called start, the starting point of your app (e.g. "start": "node dist/server.js")
  • npm run <script_name>: runs the named script from the scripts section of package.json
  • docker-compose up --build: builds the images and starts all containers defined in docker-compose.yml

package.json

  • a crucial file in any Node.js project
  • defines project metadata (name, version, description, etc.)
  • lists dependencies
  • specifies scripts to automate common tasks like building or starting your app
  • helps manage versions and makes your project portable and shareable
  • tsc: compiles TypeScript to JS, usually into a dist/ folder
  • ts-node: compiles TypeScript on the fly, during execution. Great for dev, but not for prod due to lower performance and stability
  • Semantic Versioning (SemVer):
    • X.Y.Z:
      • X is a major change, API backward incompatible (e.g. 2.3.1 → 3.0.0)
      • Y is a minor change, API backward compatible (e.g. 2.3.1 → 2.4.0)
      • Z is a patch, API backward compatible (e.g. 2.3.1 → 2.3.2)

tsconfig.json

  • the configuration file for tsc, the TypeScript compiler; it defines options such as the target JS version, module system, strictness, and output directory (e.g. dist/)

Single Page Application (SPA)

  • a web app that loads a single HTML page and dynamically updates the content as the user interacts with the app, without reloading the entire page
  • JavaScript updates the page content and UI dynamically, usually by manipulating the DOM or using frameworks like React, Vue, Angular, etc
  • No full page reloads: navigation between "pages" happens within the app, no full page refresh needed
  • Faster and smoother user experience, because you avoid the repeated loading of entire HTML pages
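A minimal sketch of the idea (the routes and element id below are invented for the example, not taken from this repo):

```typescript
// Tiny client-side router: URLs change via the History API,
// content changes via DOM updates, and the page never reloads.
const routes: Record<string, () => string> = {
  '/': () => '<h1>Home</h1>',
  '/game': () => '<h1>Pong</h1>',
};

function render(path: string): void {
  const app = document.getElementById('app');
  if (app) app.innerHTML = (routes[path] ?? (() => '<h1>Not found</h1>'))();
}

function navigate(path: string): void {
  history.pushState({}, '', path); // update the URL bar without a reload
  render(path);
}

// Keep the browser back/forward buttons working.
window.addEventListener('popstate', () => render(location.pathname));
```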

Server-Side Rendering (SSR)

  • web pages are rendered on the server before being sent to the browser
  • instead of sending a mostly empty HTML shell and loading content with JavaScript (like in a Single Page Application), the server generates the full HTML page and sends it to the client
  • browser receives a fully rendered page ready to display immediately
  • after the initial load, JavaScript can take over to make the page interactive (this is often called hydration).
  • examples: Next.js (React), Nuxt.js (Vue)
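A hand-rolled sketch of the idea with Fastify (the route and markup are invented for illustration; this is not the project's actual renderer):

```typescript
import Fastify from 'fastify';

const app = Fastify();

// The server builds the complete HTML before sending it, so the browser
// can display content immediately; /client.js then hydrates the page.
app.get('/profile/:name', async (request, reply) => {
  const { name } = request.params as { name: string };
  return reply.type('text/html').send(`<!DOCTYPE html>
<html>
  <body>
    <h1>Profile of ${name}</h1>
    <script src="/client.js"></script>
  </body>
</html>`);
});

await app.listen({ port: 3000 });
```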

Cross-Origin Resource Sharing (CORS)

  • the server decides who can access its resources from a different origin, and the browser enforces these rules
  • how does CORS work?
    • the browser sends an HTTP request to a different origin
    • the server responds with special CORS headers that say who is allowed to access
    • the most important header is Access-Control-Allow-Origin
    • if the server allows the requesting origin, the browser lets the frontend JavaScript read the response
    • if not allowed, the browser blocks it, and you get a CORS error in the console
  • an extra use case:
    • browsers enforce a Same-Origin Policy to protect users: scripts on a web page can only make requests to the same domain, protocol, and port the page was loaded from
    • without CORS, a web page from https://example.com can't normally request data from https://api.example2.com
    • CORS relaxes this restriction in a controlled way, allowing servers to specify who can access their resources
  • example:
    • the user fetches the frontend page from the web-server microservice (you don't need CORS in this service, as it is the very first one)
    • then the user accesses all other microservices through server.com:3000, which leads to the api-rest-gateway microservice. This is cross-origin (even on the same hostname, a different port is a different origin), so CORS must be implemented for api-rest-gateway; see the sketch below
    • you don't need to implement CORS for any of the internal microservices, as they are not exposed directly to the frontend and are reached only through api-rest-gateway
    • you could avoid CORS completely, even on api-rest-gateway, by using a reverse proxy that maps e.g. server.com/api to server.com:3000
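In Fastify the usual way to send these headers is the @fastify/cors plugin; a sketch for the gateway (the allowed origin is a placeholder, not the project's real URL):

```typescript
import Fastify from 'fastify';
import cors from '@fastify/cors';

const app = Fastify();

// Sets Access-Control-Allow-Origin (and friends) on responses, so the
// browser lets frontend JavaScript from that origin read them.
await app.register(cors, {
  origin: 'https://server.com', // placeholder for the web-server's origin
  methods: ['GET', 'POST', 'PUT', 'DELETE'],
});

app.get('/api/health', async () => ({ ok: true }));

await app.listen({ port: 3000 });
```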

Docker:

  • Docker daemon (also called Docker Engine): the core of Docker, responsible for building, running, and managing all containers. Docker Desktop is just a wrapper around it that also offers extras like a GUI and a CLI client. The important thing is that Docker Desktop (and alternatives, e.g. Colima) runs Docker Engine out of the box inside a VM, which is mandatory on macOS and Windows but not on Linux
  • Docker CLI: a command-line client to access and control the Docker daemon
  • Docker containers: the actual containers you want to deploy. They could also run on other container runtimes instead of Docker Engine

Useful commands to manage the Docker daemon (on Linux with systemd):

sudo systemctl start docker      # start the daemon now
sudo systemctl stop docker       # stop the daemon now
sudo systemctl disable docker    # do not start the daemon on boot
sudo systemctl enable docker     # start the daemon automatically on boot
sudo systemctl restart docker    # stop and start again (e.g. after config changes)
sudo systemctl status docker     # check whether the daemon is running

CommonJS VS ESM

  • ESM (ECMAScript Modules): is the modern JavaScript module system, officially standardized in ES6 (2015)
    • you need to add "type": "module" at package.json to enable ESM
    • lets you export and import modules more easily than CommonJS
  • CommonJS: the older module system, used by Node.js by default before ESM was supported. It is rarely chosen for new code, but still common in legacy codebases
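Side by side, the two syntaxes look like this (module and function names are illustrative):

```typescript
// ESM: static import/export, enabled by "type": "module" in package.json.
import { readFile } from 'node:fs/promises';

export async function loadConfig(path: string): Promise<unknown> {
  return JSON.parse(await readFile(path, 'utf8'));
}

// The CommonJS equivalent would be (shown as comments so this file stays
// valid ESM):
//   const { readFile } = require('node:fs/promises');
//   module.exports.loadConfig = async (path) =>
//     JSON.parse(await readFile(path, 'utf8'));
```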

Fastify Middleware vs Hooks

  • Middleware in Fastify is a function that sits between the request and the route handler, commonly used for logging, authentication, CORS, etc.; it is the traditional approach of other Node frameworks, e.g. Express
  • The better and more modern practice in Fastify is to use hooks for this instead, as in the sketch below
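A sketch of an auth-style onRequest hook (the header check is a placeholder, not the project's real auth logic):

```typescript
import Fastify from 'fastify';

const app = Fastify();

// Runs before every route handler, where Express would use middleware.
// Returning the reply after send() stops the request lifecycle here.
app.addHook('onRequest', async (request, reply) => {
  if (!request.headers.authorization) {
    return reply.code(401).send({ error: 'Unauthorized' });
  }
});

app.get('/ping', async () => 'pong');

await app.listen({ port: 3000 });
```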

Proxy vs Reverse Proxy

  • A forward proxy is used by the client/browser while accessing the internet: it forwards the client's requests to different servers/URLs
  • A reverse proxy is used on the server side: it maps the client's requests to different internal microservices. It hides the server architecture from the client and avoids CORS problems, e.g. it can map www.test.com/api to the api-handler:3000 microservice
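A sketch of such a mapping, assuming the @fastify/http-proxy plugin (the service name and ports are the example ones from above):

```typescript
import Fastify from 'fastify';
import proxy from '@fastify/http-proxy';

const app = Fastify();

// Every request under /api is forwarded to the internal api-handler
// service; the client only ever talks to this one origin, so no CORS.
await app.register(proxy, {
  upstream: 'http://api-handler:3000',
  prefix: '/api',
});

await app.listen({ port: 80 });
```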

Fastify websockets

  • A WebSocket in Fastify enables real-time, two-way communication between the client and server, unlike HTTP, which is request/response-based and one-directional per request
  • Useful for real-time applications, as it is faster than repeated HTTP requests, e.g. live chat, multiplayer games, Google-Docs-style collaboration
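A minimal echo server sketch, assuming the @fastify/websocket plugin (recent versions pass the raw WebSocket to the handler; older ones pass a connection object whose .socket is the WebSocket):

```typescript
import Fastify from 'fastify';
import websocket from '@fastify/websocket';

const app = Fastify();
await app.register(websocket);

// One long-lived connection, with messages flowing in both directions.
app.get('/ws', { websocket: true }, (socket) => {
  socket.on('message', (message: Buffer) => {
    socket.send(`echo: ${message.toString()}`);
  });
});

await app.listen({ port: 3000 });

// Manual test, as mentioned below: npx wscat -c ws://localhost:3000/ws
```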

npm vs nvm vs pnpm vs yarn vs node

  • node: it is a JavaScript runtime built on Chrome's V8 engine, allows you to run JavaScript code outside the browser
  • npx: a tool that runs node packages without modifying package.json and without installing them permanently in node_modules. Especially useful for one-off tools, e.g. npx wscat -c ws://localhost:3000 to test your server
  • npm: the package manager that comes with Node.js. Slow, not so good
  • yarn: a package manager developed by Facebook, a bit better
  • pnpm: a package manager, the best of the three: fast, and disk-efficient because packages are hard-linked from a shared store
  • nvm: a version manager for node
  • fnm: a version manager for node, faster than nvm

Painful lessons learned:

  • it's OK to use some deprecated/vulnerable packages (you get warnings when you run npm install). The truth is that even the latest versions can have vulnerabilities; trying to fix them all will consume a lot of time, may break your code, and the next day yet another problem will appear. Just leave them as they are, unless you have a strong reason not to.
  • each microservice should be as independent as possible. Sometimes it is better to have duplicate packages rather than global ones, as you stay more flexible: each person has ownership of their own microservice and you don't need to worry that a change will break other microservices. For example:
    • each microservice has its own package.json. We don't use workspaces, as they caused a lot of libraries not to be recognized properly by the compiler
    • we avoid using common/types to share types between backend/frontend; instead we have duplicate code. I don't really like this solution, but keeping backend and frontend in sync through a common/types service was a pain. TypeScript doesn't allow you to import other .js files, so enums basically cannot be imported; only types from .d.ts files can. And even if you solve that, TypeScript complains that rootDir doesn't contain the file for the types. Setting rootDir to the repo root is bad practice and would also require a lot of paths to be changed.
    • it's not possible for all microservices to be perfectly in alignment. Some might use CommonJS while others use ESM; some run Fastify 4, others Fastify 5; some have tests, others none. This is still OK: every microservice is independent and has an owner, and only if the owner decides a change is needed, and has the capacity to do it, does that change happen.
    • currently, most services have a single server.ts file that contains all the logic, e.g. routes, business logic, config, types. This is an antipattern, but since our team is small, the project itself is not big, and our goal is fast development, I would say it is the right approach so far.

High-coupling examples we need to think about:

  • all the types/enums used by the frontend are duplicated: once for the backend endpoint and once for the frontend client
  • service names and ports appear both in docker-compose and in api-rest-gateway; the api-rest-gateway port also appears in the frontend

&& vs & vs ;

In scripts, e.g. a Makefile or package.json scripts, when you want to execute several commands you use one of these separators:

  • &: a bit dangerous; it starts the first command in the background and runs the second in parallel, regardless of the first one's outcome (e.g. node server.js & node worker.js)
  • &&: runs the first command and, only if it succeeds, runs the second (e.g. tsc && node dist/server.js)
  • ; : runs the first command and then the second, whether or not the first succeeded

Security-wise decisions in our project:

  • we use https and wss only for api-rest-gateway and web-server, which are exposed to the external user, while all other services communicate over http (which is acceptable, as this traffic stays inside the internal Docker network). http is also redirected automatically to https; see the sketch after this list
  • .env files are not copied in the Dockerfile; instead they are injected by docker-compose directly into the runtime environment, for security reasons
  • certs are copied in the Dockerfile. This is not ideal: if someone gains access to the image, they also get access to the certs. The alternative would be a bind mount with the certs, which is forbidden. It is still not a big security problem, as no one else should have access to the image. The certs are generated automatically through the Makefile, and Docker expects to find them in the relevant services (api-rest-gateway and web-server)
  • certs and .env files are excluded from the repo through the .gitignore file
  • we don't need to use the Google Client Secret (only the Google Client ID, which can be public). The secret is only needed to make calls to Google services on behalf of a user, not just to authenticate them.
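The automatic redirect can be as small as a plain-HTTP listener running next to the HTTPS server; a sketch (the ports are placeholders, not the project's real ones):

```typescript
import http from 'node:http';

// Accepts plain-HTTP requests and answers each with a permanent redirect
// to the same path on the HTTPS port.
http
  .createServer((req, res) => {
    const host = (req.headers.host ?? 'localhost').split(':')[0];
    res.writeHead(301, { Location: `https://${host}:8443${req.url ?? '/'}` });
    res.end();
  })
  .listen(8080);
```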

How does 2FA work?

  • The server creates a secret per user and gives it back to the user in plain text (and usually as a QR code). The user scans or types this secret into any authenticator app.
  • Whenever the user tries to log in, the server prompts for a TOTP (time-based OTP). The user opens the authenticator app, which generates an OTP based on the shared secret and the current time. The server takes the OTP from the user's input, generates the same OTP for that user on its side, and compares the two; if they match, it returns a login token to the user. The OTP isn't generated from the exact current timestamp but from a whole time window, e.g. the last minute, so small clock differences are tolerated.
  • The login token is basically just a payload (usually containing the user id/username) signed with the server's key and sent to the user at login. (Signed, not encrypted: anyone can read the payload, but no one can forge it without the server's key.) So whenever the user sends something to the server, the server verifies the token's signature, reads the username/id from the payload, and can be sure this is the user who logged in before.
  • The big picture: in order to get your token, you now need both the username/password combination and the authenticator app.
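A sketch of the whole flow using otplib and jsonwebtoken, which are common choices for TOTP and JWT (not necessarily the exact libraries used in this repo):

```typescript
import { authenticator } from 'otplib';
import jwt from 'jsonwebtoken';

const JWT_KEY = process.env.JWT_KEY ?? 'dev-only-secret'; // placeholder key

// Enrollment: per-user secret plus an otpauth:// URL to show as a QR code.
const secret = authenticator.generateSecret();
console.log(authenticator.keyuri('alice', 'pong-tournament', secret));

// Login: otplib checks the OTP against a small time window around "now",
// so a code generated a few seconds ago is still accepted.
function login(userId: string, otpFromUser: string): string | null {
  if (!authenticator.check(otpFromUser, secret)) return null;
  // Signed (not encrypted) token carrying the user id as payload.
  return jwt.sign({ sub: userId }, JWT_KEY, { expiresIn: '1h' });
}

// Any later request: verify the signature and recover the user id.
function authenticate(token: string): string {
  return (jwt.verify(token, JWT_KEY) as { sub: string }).sub;
}
```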