Best Practices
On this wiki page we can collect and share what we consider "best practices" or guidelines for software development at Serlo. Feel free to modify, discuss and improve this page.
- The Mistakes I Made As a Beginner Programmer by Samer Buna
- 7 daily practices of top developers by Bennett Garner
- Clean code for JavaScript (but applicable to any language)
For every feature we use a dedicated feature branch. Before merging to staging/production/main, we merge the upstream branches into our feature branch, so that there are no conflicts in the "stable" branches. We prefer this over using the Git rebase flow.
Try to write code and structure projects in a way that (A) is recommended by the documentation of the tools we use and (B) resembles other projects on the internet.
We use a lot of tools to save time and effort. However, unforeseeable issues can arise if we use them in an unintended / unconventional way.
✔ Try to make your code look like the code you find in the documentation examples.
Other open source projects & people might have invested a lot of time to find a good solution to a problem that is similar to ours.
✔ When working on a solution, try to find other projects/people that have already invested time into finding a good solution.
Benefits:
- Tool documentation doubles as documentation for our own code
- Easier for new developers to understand the codebase / project
- Many developers have probably run into the same issues we will, so help can be found online
- Less nasty issues that take a long time to fix
✔ Only build custom solutions when you see a large enough benefit in doing so.
Some situations where this guideline could have prevented issues (feel free to add yours):
- In the editor, block-level math elements were treated by Slate as inline elements. This caused Slate's normalization to delete nodes. Slate was not used the way it was intended to be used, which led to hard-to-debug issues. (Lars)
- A custom-built Dockerfile caused some nasty dependency issues. For optimization reasons there was a `yarn add some-dependency` in the Dockerfile. This, however, caused two different versions of the dependency to be installed. An unconventional solution that caused issues and was hard to debug. (Lars)
✔ Try to implement the shortest and simplest solution that works.
When you are deeply involved in a question or a problem, it might be tempting to implement a complex solution. However, what is manageable for you right now might be too complex for others, or even for yourself in the future. Furthermore, complexity adds up if you need to keep multiple interacting parts in your working memory at the same time.
Over the last years, a best practice has developed at Serlo: we order code in source files from abstract to concrete, from general concepts to the more specialized concepts that the general ones depend on. Here, concepts mean software structures like functions, classes, data structures, etc. So always start with the most general concept and then move on to the concepts it depends on. This makes the source code more readable and understandable. Example:
```js
/* ✅ Good */
function main() {
  do_stuff()
}

function do_stuff() {
  do_more_concrete_stuff()
}

function do_more_concrete_stuff() {
  partey()
}
```
What we do not want to do:
```js
/* ❌ Bad */
function do_more_concrete_stuff() {
  partey()
}

function do_stuff() {
  do_more_concrete_stuff()
}

function main() {
  do_stuff()
}
```
Reasons:
- The most abstract / general constructs tend to also be the most important ones. In the example above, `main()` describes what the script does and thus tells a high-level story for new developers about why the code is needed.
- The most abstract / general constructs are also the ones that are exported by a module and thus need to be read more often.
- The more concrete / specialized constructs can only be understood once you have the high-level overview of the code (which is given by the more general constructs).
The pattern of defining a function to render JSX inside another component is quite common across our frontend code base. It can be a quick way to prototype certain user interfaces and is recommended when working with render props (a function that can be passed into another component and renders JSX), as the inner function has access to the whole closure of the component. However, outside of these two use cases it may lead to issues in the long run, especially if one is not aware of how React reconciliation and rendering work.
Here is an important point to be aware of when using this pattern, followed by the best practice that we should mostly use instead:
A function like `renderInput` defined inside a component is not a regular React component. It doesn't have its own lifecycle or state, and it typically doesn't receive props; instead it accesses state & props from the parent component's closure. Most importantly, it cannot be rendered like a normal React component with angle brackets. Instead, you must always call it as a regular function: `{renderInput()}`.
If you exclusively call it as a function, it doesn't lead to any obvious problems beyond readability concerns and a minimal performance overhead (the function is recreated on every render, but that is extremely cheap in V8 and other JavaScript engines). The good thing is that upon rendering, the returned JSX gets inlined and no dedicated React id is assigned to the return values of the inner render functions. However, refactoring the inner function into an inner component makes things really weird:
If you were to do said refactoring and render the inner `<Input />` component, the component is recreated with a new React id upon every render. This throws the whole reconciliation process out of whack and makes the component unstable, which results in a myriad of focus & state management problems.
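For illustration, here is a hedged sketch of the two variants under discussion; the component and state names (`SearchBox`, `query`) are invented for this example:

```tsx
import { useState } from 'react'

function SearchBox() {
  const [query, setQuery] = useState('')

  // Plain render function: reads `query` and `setQuery` from the closure
  // and must be invoked as {renderInput()}; it is not a component.
  function renderInput() {
    return <input value={query} onChange={(e) => setQuery(e.target.value)} />
  }

  // ❌ Refactoring renderInput into a component defined here, inside
  // SearchBox, would create a brand-new component type on every render,
  // so React would unmount and remount it (losing focus and local state).

  return <div>{renderInput()}</div>
}
```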
If you do want to use render functions, use them sparingly for prototyping. Otherwise, let's refactor them into standalone components:
The solution here is to use dedicated components that live on their own and receive props. They should never be defined within another component and do not need access to any closures. They are self-contained and deterministically turn the passed props and the state they define into a user interface. Always keep this equation in mind: `UI = fn(state + props)`
There is some overhead in having to move the components out and define + type the props, but ideally we'll slowly migrate away from the `renderComponent` pattern and follow this best practice.
Consequently, you'd refactor the inner `renderInput` into a standalone component and pass down everything it accessed from the closure as props. ESLint and TypeScript will already make sure of that, but don't forget to add the types.
```tsx
function Input({ query, setQuery, ... }: InputProps) {
  // Component implementation...
}

// In the parent component:
<Input
  query={query}
  setQuery={setQuery}
  ...
/>
```
This way, `Input` is a regular React component that can be rendered with angle brackets for better readability, negligible performance gains, and the added benefit that each component only has access to the minimum amount of information it needs.
Consistent and efficient testing is crucial for ensuring the reliability of our software.
When writing tests that interact with the DOM, we have settled on using `data-qa` attributes.
In the e2e test repository, a custom locator from CodeceptJS was enabled. It allows you to select any element by its `data-qa` attribute simply by prefixing your action or assertion with `$`. E.g. `[data-qa="quickbar"]` can be clicked by writing `I.click('$quickbar')`. 🎉
Please stick to selecting and clicking elements by their `data-qa` attributes as opposed to CSS class names or similar, as this prevents tests from breaking when unrelated changes are made in our app (such as using different styles or HTML elements).
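As a hedged illustration of how the pieces fit together (the button markup and `openQuickbar` handler are invented; the `quickbar` name follows the example above):

```tsx
// In the app: tag the element with a data-qa attribute
<button data-qa="quickbar" onClick={openQuickbar}>
  Open quickbar
</button>
```

```js
// In the e2e test: select it via the $ prefix instead of CSS classes
I.click('$quickbar');
I.seeElement('$quickbar');
```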
For better readability and understanding of what each test does, separate different actions and assertions with a newline, as shown here:
```js
I.say('Toggle bold on');
I.pressKey(['CommandOrControl', 'B']);
I.seeElement({ css: 'b' });

I.say('Toggle bold off');
I.pressKey(['CommandOrControl', 'B']);
I.dontSeeElement({ css: 'b' });
```
You can use `I.say("Action about to be performed")` to make the terminal output a little bit easier to scan.
Use assertions that clearly describe what they're checking. It helps anyone reading the test to understand the intent quickly.
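For example, a small sketch contrasting the two (the `$save-success-message` name is invented for illustration):

```js
// Harder to read: what does this selector represent?
I.seeElement('.modal div:nth-child(2) b');

// Clearer: the assertion states the intent
I.seeElement('$save-success-message');
```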
Each test should be an independent unit. This means one test shouldn't rely on another test or its state. If tests share setup, consider using `Before` hooks to centralize common setup steps.
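A minimal sketch of centralizing shared setup with CodeceptJS's `Before` hook; the feature name, page URL, and element name are made up for illustration:

```js
Feature('Article editor');

// Runs before every Scenario in this file, so each test starts from the same state
Before(({ I }) => {
  I.amOnPage('/create/article');
});

Scenario('shows the editor', ({ I }) => {
  I.seeElement('$editor');
});
```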
In order for us to better understand the code, we follow the convention that the left value in `assert_eq` and `assert_neq` is the actual value and the right one is the expected value.