# Practices
Here you can find the practices to be followed in the AI4SD project, together with the supporting tools chosen for each of them.
## Version Control

- Trunk-based development is used, with short-lived feature branches.
- Commits are standardized (tags, message formats, headers, etc.); see the example below.
- Commits are small and incremental.
- Commits have a comprehensive description that translates into a meaningful history.
- Commits are not squashed when integrated into master.
- Master never fails to run / compile / pass QA checks.
- Repository hygiene is maintained (no obsolete or unnecessary binaries, dependencies, branches, etc.).
- Secrets (passwords, credentials, etc.) are not present in the repository.
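
As an illustration of a standardized commit message, here is a sketch assuming a Conventional Commits-style format; the type prefix, scope, and issue reference are hypothetical, and the team's actual convention is whatever it agrees on and documents:

```
feat(analysis-api): add input validation to the report endpoint

Reject requests without a repository URL before any work is queued,
so that malformed submissions fail fast with a clear error message.

Refs: #42
```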
## Development tools

- 3rd-party libraries are included via package managers (e.g. npm, ivy, pip, composer).
- The development environment is reproducible (e.g. docker, docker-compose, vagrant).
## Documentation

- Technical documentation is kept up to date.
- A development guide exists for starting work on the project (e.g. instructions to compile, test, and run).
- APIs, formats and protocols are documented (OpenAPI, Swagger, file formats, CLI usage); a sketch follows this list.
- Coding guidelines are explicitly defined.
- Potential classes of security vulnerabilities are identified.
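
As one way of keeping API documentation close to the code, the sketch below assumes a Python service built with FastAPI; the framework, endpoint, and models are illustrative assumptions rather than choices prescribed by this page. FastAPI derives an OpenAPI/Swagger description from these declarations and serves it at `/docs` and `/openapi.json`:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Assistant API", version="0.1.0")

class AnalysisRequest(BaseModel):
    """Payload accepted by the analysis endpoint (hypothetical)."""
    repository_url: str
    branch: str = "main"

class AnalysisResponse(BaseModel):
    """Result returned to the caller (hypothetical)."""
    issues_found: int
    report_url: str

@app.post("/analysis", response_model=AnalysisResponse, summary="Run an analysis")
def run_analysis(request: AnalysisRequest) -> AnalysisResponse:
    """Start an analysis of the given repository and return a report link.

    This docstring and the declared models end up in the generated
    OpenAPI document, so the reference documentation stays in sync
    with the code.
    """
    # Placeholder implementation for the sketch.
    return AnalysisResponse(issues_found=0, report_url="https://example.invalid/report/1")
```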
## Quality Assurance

- Automated unit/integration tests (e.g. xUnit) and acceptance tests (e.g. BDD, Selenium); see the test sketch at the end of this section.
- Tests have good breadth and quality (e.g. coverage, mutation analysis).
- Tools are used to find violations of coding guidelines (e.g. linters), maintainability issues (e.g. static analysis) and security vulnerabilities (e.g. static analysis, others).

### Code Review

- Pull requests describe the most important decisions taken in their context.
- Pull requests are cross-linked to relevant PBIs.
- Pull requests state the review strategy (e.g. pair prog., ensemble prog., async reviews).
- If using sync reviews (ensemble/pair prog.), pull requests have as "assignees" the team members involved.
- If using async reviews, pull requests have reasonable discussion before being merged, and are reviewed by other team members before being merged.
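
To ground the automated-tests item, here is a minimal sketch using pytest as the xUnit-style framework; the function under test is hypothetical and exists only for the example:

```python
import pytest

def normalize_score(raw: float) -> float:
    """Hypothetical helper under test: clamp a raw score into [0, 1]."""
    return min(max(raw, 0.0), 1.0)

def test_normalize_score_clamps_out_of_range_values():
    assert normalize_score(1.7) == 1.0
    assert normalize_score(-0.2) == 0.0

@pytest.mark.parametrize("raw", [0.0, 0.5, 1.0])
def test_normalize_score_keeps_in_range_values(raw):
    assert normalize_score(raw) == raw
```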
## Integration Pipeline

The pipeline runs the following checks:

- Automated tests (xUnit, BDD, Selenium).
- Violations of coding guidelines (e.g. linters).
- Maintainability issues (e.g. static analysis).
- Security vulnerabilities (e.g. static analysis, others).

If one of the checks goes over a defined threshold (a sketch of such a gate follows this list):

- It blocks the integration if it is a feature branch.
- It blocks the deployment if it is the main branch.
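
The threshold rule can be enforced in many ways (most analysis tools and CI systems have built-in gates). The following is only a sketch of the idea, assuming a coverage.py JSON report and an illustrative 80% threshold, run as a pipeline step whose non-zero exit status blocks the branch integration or the main-branch deployment:

```python
import json
import sys

COVERAGE_THRESHOLD = 80.0  # illustrative value, not one mandated by this page

def main(report_path: str) -> int:
    """Return a non-zero exit status when coverage drops below the threshold,
    so the CI job fails and blocks the merge or the deployment."""
    with open(report_path) as handle:
        report = json.load(handle)
    # Assumes the layout of a coverage.py "coverage json" report.
    coverage = report["totals"]["percent_covered"]
    if coverage < COVERAGE_THRESHOLD:
        print(f"Coverage {coverage:.1f}% is below the {COVERAGE_THRESHOLD:.1f}% threshold")
        return 1
    print(f"Coverage {coverage:.1f}% meets the threshold")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```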
## Integration, Deployment & Operations

- The build is self-contained, with no manually managed external dependencies (e.g. docker-compose, salt, chef).
- The product is automatically built and packaged for deployment (e.g. Docker images, .apk, .ipa).
- The product is deployed automatically.
- The product is deployed at the very least once per sprint, but preferably multiple times.
- App telemetry is in place (e.g. exception tracking with Sentry, real-time monitoring); see the sketch below.
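
As a sketch of the telemetry item, assuming one of the product's services is in Python and uses Sentry's SDK for exception tracking; the DSN, sampling rate, and handler are placeholders, and other language SDKs or monitoring stacks would work equally well:

```python
import sentry_sdk

# Initialise Sentry once at application start-up. The DSN is a placeholder
# and should come from an environment variable or secret store, never from
# the repository itself.
sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    traces_sample_rate=0.2,  # sample 20% of transactions for performance data
    environment="production",
)

def process(payload: dict) -> None:
    """Hypothetical business logic used by the example handler."""
    if "repository_url" not in payload:
        raise ValueError("missing repository_url")

def handle_request(payload: dict) -> None:
    """Unhandled exceptions raised here are reported to Sentry automatically
    once the SDK is initialised; this handler also captures and re-raises
    explicitly so the caller still sees the failure."""
    try:
        process(payload)
    except Exception as exc:
        sentry_sdk.capture_exception(exc)
        raise
```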
## Final outcomes
### Business value

- Perceived value (consistent set of features; features polished enough for production).
- How happy are the clients?
- Was the actual effort appropriate for the curricular unit (116h/student)?
### Perceived external product maturity (usability, interaction, user interfaces)

- Technical quality of interfaces and flows.
- Does the set of features implemented result in a viable solution for the problem?
- Does the product have a “professional” feel to it?
- Does the product exhibit the quality attributes required?
- Would the product be ready to deliver to a client?
### Perceived internal product maturity (e.g. modularity, understandability)

- Do the chosen technologies fit well in the overall architecture?
- Does the implementation exhibit a good application of the chosen technologies?
- Is the implementation easy to understand, maintainable and robust?
- Overall technical quality (code quality; understandability; “I could continue the development quickly”).