Development Workflow - humanbit-dev-org/templates GitHub Wiki
This guide walks through bootstrapping a boilerplate for new projects in a practical, step-by-step fashion, progressing from top to bottom.
See each dedicated section for reference details.
Since Docker on Windows relies on the real Linux kernel that WSL 2 provides through virtualization to run containers,
all related procedures are covered first, before moving on to processes involving external applications.
1. Enable Virtualization (VT-x/AMD-V) in the BIOS/UEFI [Windows only]
Enabling Virtualization (VT-x/AMD-V) in the BIOS/UEFI interface is mandatory for WSL 2, because WSL 2 relies on virtualization technology to run the Linux kernel.
Without it, WSL 2 will fail to start, restricting the system to WSL 1 or producing error messages.
Although WSL 1 can function without it, enabling virtualization ensures compatibility with WSL 2 for better performance and features.
NOTE: The BIOS/UEFI interface is independent of the operating system and won't be affected by formatting the computer.
- Restart your computer and, during startup, press the BIOS/UEFI interface key (typically `Delete`, `F2`, `F10`, or `F1`, depending on your motherboard/brand).
- Once in the BIOS/UEFI interface, navigate to the Advanced or CPU Configuration settings (this varies by manufacturer).
- Look for an option called Intel VT-x (for Intel) or AMD-V (for AMD) and enable it.
- Save your changes and exit the BIOS/UEFI interface (usually by pressing `F10` and confirming). After rebooting, virtualization will be enabled.
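Once you can open a Linux shell (e.g., inside WSL later in this guide), you can verify that the kernel actually sees hardware virtualization. The sketch below is an assumption about the usual `/proc/cpuinfo` layout: it simply counts the CPU flags that advertise VT-x or AMD-V.

```shell
# Count CPU flags advertising Intel VT-x (vmx) or AMD-V (svm).
# A count of 0 means virtualization is disabled in firmware or not exposed.
grep -cE 'vmx|svm' /proc/cpuinfo || true
```

A nonzero count confirms the BIOS/UEFI change took effect.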
2. Activate Windows Subsystem for Linux (WSL) [Windows only]
WSL enables the feature that allows Linux to run on Windows.
You'll need a Linux distribution to access the command line, where the kernel (the operating system's core) operates.
- Open Windows Terminal (PowerShell or Command Prompt) as administrator (and leave it open for subsequent steps). (TODO: TEST STEPS IN POWERSHELL)
- Install WSL:
  For this setup, where only CLI access is needed, Debian is a good lightweight option with standard features.
  It's minimal, fast, and consumes fewer resources, perfect for just using WSL features without needing a full OS. Run the following streamlined command:
  `wsl --install --distribution Debian`
  At the end of this step, you'll be prompted to create your UNIX user.
- Set the default distribution for WSL (this is the distro Docker Desktop's WSL integration will use):
  Set Debian (or your desired distro) as the default:
  `wsl --setdefault Debian`
  This ensures that Docker Desktop does not override the default distro with its own.
4. WSL version [REFERENCE]
Windows Subsystem for Linux (WSL) provides the infrastructure to run Linux binaries directly on Windows.
WSL 1: Translates Linux system calls into Windows system calls.
WSL 2: Runs a full Linux kernel in a lightweight virtual machine (offering improved performance and compatibility).
- To check which versions are installed, run:
  `wsl --list --verbose`
  This will display all installed WSL distributions along with their status and version (WSL 1 or WSL 2). If no distributions are installed, the list will be empty.
- Convert an existing installation to WSL 2:
  `wsl --set-version <distribution> 2`
- Default future installations to WSL 2:
  `wsl --set-default-version 2`
Since Windows 10 version 2004, and Windows 11, WSL 2 is the default version for new installations.
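As a reference for what `wsl --list --verbose` reports, the sketch below parses a sample of its typical output (the sample text and distro names are assumptions) and prints any distribution still on WSL 1, which would be a candidate for `wsl --set-version <distribution> 2`:

```shell
# Hypothetical `wsl --list --verbose` output; the leading `*` marks the default distro.
sample='  NAME      STATE           VERSION
* Debian    Running         2
  Ubuntu    Stopped         1'

# Print the name of every distribution whose VERSION column is 1.
printf '%s\n' "$sample" | awk 'NR > 1 && $NF == "1" { print $(NF - 2) }'
# prints "Ubuntu"
```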
Many commands require `sudo` access to install packages.
Completing the procedure below allows these commands to run without needing a password.
Configuration for Root, Zsh, and Shell Access
To streamline this setup and avoid repeated password prompts, first configure root access and passwordless sudo.
- Make root the default user and passwordless:
  - Set root as the default user:
    - Open `/etc/wsl.conf`:
      `sudo nano /etc/wsl.conf`
    - Add the following to set the root user as default:
      [user]
      default=root
    - Save and exit: press `Ctrl + O` to write the file, `Enter` to confirm, and `Ctrl + X` to exit.
  - Then restart WSL:
    - Shut it down:
      `wsl --shutdown`
    - Relaunch it:
      `wsl`
  - Make root passwordless for sudo:
    - Open the `sudoers` file:
      `sudo visudo`
    - Add this line for passwordless sudo (use the arrow keys in Nano):
      root ALL=(ALL) NOPASSWD:ALL
    - Save the file: press `Ctrl + O` to write the file, `Enter` to confirm, and `Ctrl + X` to exit.
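The manual Nano edits above can also be scripted. The sketch below is a dry run against a scratch directory standing in for `/etc`, so it is safe to execute anywhere; pointing the same commands at the real `/etc` as root applies the configuration for real (the `sudoers.d` drop-in file is an alternative to editing the main `sudoers` file with `visudo`).

```shell
# Use a temporary directory as a stand-in for /etc so this is safe to dry-run.
ETC=$(mktemp -d)
mkdir -p "$ETC/sudoers.d"

# Equivalent of adding the [user] section to /etc/wsl.conf in Nano.
printf '[user]\ndefault=root\n' >> "$ETC/wsl.conf"

# Equivalent of the visudo rule, written as a sudoers.d drop-in instead.
echo 'root ALL=(ALL) NOPASSWD:ALL' > "$ETC/sudoers.d/root-nopasswd"
chmod 0440 "$ETC/sudoers.d/root-nopasswd"

cat "$ETC/wsl.conf"
```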
Install Zsh and Oh My Zsh, and Make Zsh Like Bash [Windows only]
If you're looking for a stable, standard shell for scripting and don't need advanced features, Bash is great. If you want advanced customization, powerful features, and an enhanced user experience, Zsh is worth exploring.
- Open your Linux distribution (e.g., Debian) from the Start menu or by running `wsl` in a Windows Terminal.
- Install Zsh:
  `apt update && apt install zsh`
- Install Oh My Zsh (using it makes it much easier to get started with customization):
  `sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"`
  This will install Oh My Zsh and change your shell configuration to use it.
- Set Zsh as the default shell:
  `chsh -s $(which zsh)`
- Modify the Zsh prompt to look like Bash:
  - Open `.zshrc` for editing:
    `nano ~/.zshrc`
  - Set the prompt to mimic Bash:
    `PROMPT='%n@%m:%~%# '`
  - Save, exit, and apply the changes (e.g., by running `source ~/.zshrc`).
Enable Access to Zsh from Windows Command Prompt and PowerShell [Windows only]
- For Command Prompt:
  - Create a batch file for `zsh`. Open PowerShell as administrator and run:
    `echo @wsl zsh > C:\Windows\System32\zsh.bat`
- For PowerShell:
  - Open PowerShell as administrator.
  - First, change the execution policy to allow scripts, or you'll get an error in the next steps:
    `Set-ExecutionPolicy RemoteSigned -Scope CurrentUser`
    RemoteSigned: allows local scripts to run without being signed, but scripts downloaded from the internet will need to be signed.
    CurrentUser: only changes the execution policy for the current user (which will be "admin"), not globally (which is useful).
  - When prompted, type `Y` and press `Enter` to confirm.
  - Open your profile:
    `notepad $PROFILE`
  - Add the following function to the file, save it, and restart PowerShell:
    `function zsh { wsl zsh }`
Switch between shells
- Return the name of the current shell:
  `echo $0`
- Check the currently logged-in user:
  `whoami`
- Switch to Bash:
  `bash`
- Switch to Zsh:
  `zsh`
- Exit WSL to return to Windows Terminal (it may take a few tries, as you might have more than one shell open): [Windows only]
  `exit`
4. Download and install Visual Studio Code
Extensions:
- Remote Explorer
- Dev Containers
Although Docker's official extension remains a solid option, the combination above offers greater flexibility in the long run.
5. Download and install Docker Desktop
6. Download and install GitHub Desktop (GUI) and GitHub CLI (CLI)
7. Download and install PuTTY [Windows only]
8. Download and install WinSCP [Windows only]
9. Download and install Cyberduck [macOS only]
Docker is a platform for building, shipping, and running applications in containers.
Docker containers create isolated environments, allowing applications to run with all dependencies and configurations encapsulated.
The Docker setup and environment configuration aim to keep both parts of the application functioning smoothly together, with Docker containers separating the Laravel and Next.js environments for flexibility and ease of deployment.
Key Concepts:
- Containers: Lightweight, isolated environments that include everything needed to run an app.
- Images: Read-only templates used to create containers. They can be shared via Docker Hub.
- Volumes: Store persistent data that remains even after a container is deleted.
Limits:
- No fixed container limit, but resource use depends on your system (CPU, RAM).
- Docker's free tier limits anonymous image pulls to 100 every 6 hours.
Docker Hub is a public repository for storing and sharing Docker images.
Free accounts have limitations (e.g., 1 private repo, 100 GB storage, pull rate limits).
This process doesn't involve a GUI on the Docker Hub interface; it's all done via the terminal.
- Log in to Docker Hub from the terminal:
  `docker login`
  Enter your Docker Hub username and password when prompted.
- Tag your image with your Docker Hub username:
  `docker tag <image-name> <your-dockerhub-username>/<repository-name>:<tag>`
  Example:
  `docker tag my-app:latest yourusername/my-app:latest`
- Push the image to Docker Hub:
  `docker push <your-dockerhub-username>/<repository-name>:<tag>`
  Example:
  `docker push yourusername/my-app:latest`
  Your image will now be available on Docker Hub under your account.
Enabling WSL 2 is a prerequisite for using Docker's Linux kernel environment for containerization on Windows (covered in the Setup section).
- Create a directory to store configuration files and data for the virtualized containers, ensuring both `docker-compose.yml` and `Dockerfile` are placed in it, as these will be used to set up the containers.

docker-compose.yml
Next, define the services you intend to use within the `docker-compose.yml` file. Each `container_name` must be unique.

```yaml
version: '3.9' # Optional, used for backward compatibility

services:
  php-apache: # PHP and Apache web server
    container_name: php-apache # Name of the container
    build: . # Build using the Dockerfile in the current directory
    volumes:
      - ./src:/var/www/html # Map local `src` to `/var/www/html` in the container
    ports:
      - 8080:80 # Expose container port 80 on host port 8080

  db: # MySQL database
    container_name: db # Name of the container
    image: mysql:latest # Use the latest MySQL image
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} # Set MySQL root password
      MYSQL_USER: webuser # Create MySQL user `webuser`
      MYSQL_PASSWORD: ${MYSQL_PASSWORD} # Password for `webuser`
    volumes:
      - ./db-data:/var/lib/mysql # Persist database data
    ports:
      - 3306:3306 # Expose MySQL port

  phpmyadmin_global: # phpMyAdmin for remote database access
    container_name: phpmyadmin_global # Name of the container
    image: phpmyadmin/phpmyadmin:latest # Use the latest phpMyAdmin image
    platform: linux/amd64 # Specify platform architecture
    environment:
      PMA_HOST: ${PMA_HOST} # Remote database host
      PMA_USER: webuser # Database user
      PMA_PASSWORD: ${PMA_PASSWORD} # Password for `webuser`
      PMA_PORT: 3306 # Database port
    ports:
      - 8078:80 # Expose container port 80 on host port 8078

  phpmyadmin_local: # phpMyAdmin for local database access
    container_name: phpmyadmin_local # Name of the container
    image: phpmyadmin/phpmyadmin:latest # Use the latest phpMyAdmin image
    platform: linux/amd64 # Specify platform architecture
    environment:
      PMA_HOST: db # Connect to the `db` service
      PMA_USER: webuser # Database user
      PMA_PASSWORD: ${PMA_PASSWORD} # Password for `webuser`
      PMA_PORT: 3306 # Database port
    ports:
      - 8079:80 # Expose container port 80 on host port 8079
    depends_on:
      - db # This service depends on `db`
```
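The `${...}` placeholders above are read from a `.env` file that Docker Compose loads automatically from the same directory. A hypothetical example (every value below is a placeholder to replace with your own):

```shell
# Hypothetical .env, loaded automatically by Docker Compose from this directory.
# Root password for the `db` service
MYSQL_ROOT_PASSWORD=change-me-root
# Password for the `webuser` account
MYSQL_PASSWORD=change-me-webuser
# Remote database host used by phpmyadmin_global (example IP)
PMA_HOST=203.0.113.10
# Password phpMyAdmin uses to log in as `webuser`
PMA_PASSWORD=change-me-webuser
```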
`docker-compose.yml` defines the services, networks, and volumes for a multi-container Docker application, allowing you to manage multiple containers as a single application.
NOTE: In YAML files (like `docker-compose.yml`), a hyphen indicates an item in a list.
When you run `docker-compose up`, each service (such as `php-apache`, `db`, `phpmyadmin`) is started as an individual container, and they are grouped together under a label based on the name of the folder where `docker-compose.yml` is located.

Dockerfile
As previously mentioned, use the `Dockerfile` for the `php-apache` service to add dependencies and properly configure the container.

```dockerfile
# Base image for the PHP and Apache environment
FROM php:8.3-apache
# FROM php:apache # Pull the latest Apache version

# Set the working directory for Apache
WORKDIR /var/www/html

# Update package lists and install required libraries for system utilities
RUN apt update -y && \
    apt install -y libicu-dev unzip zip curl ca-certificates build-essential software-properties-common gnupg && \
    apt clean && \
    update-ca-certificates

# Install Node.js, npm, and pnpm from NodeSource for the latest versions
RUN curl -fsSL https://deb.nodesource.com/setup_18.x | bash - && \
    apt update -y && \
    apt install -y nodejs && \
    npm install -g pnpm

# Install PHP extensions for database and internationalization support
RUN docker-php-ext-install gettext intl pdo_mysql mysqli && \
    docker-php-ext-enable mysqli

# Copy Composer from its latest version image
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer

# Add Sass to the environment path for easy access
ENV PATH=$PATH:/usr/local/dart-sass

# Enable Apache's `mod_rewrite` module for URL rewriting
RUN a2enmod rewrite

# Install Sass for CSS preprocessing
WORKDIR /usr/local
ARG SASS_VERSION=1.74.1
ARG SASS_URL="https://github.com/sass/dart-sass/releases/download/${SASS_VERSION}/dart-sass-${SASS_VERSION}-linux-x64.tar.gz"
RUN curl -OL $SASS_URL && \
    tar -xzf dart-sass-${SASS_VERSION}-linux-x64.tar.gz && \
    rm -rf dart-sass-${SASS_VERSION}-linux-x64.tar.gz

# Reset the working directory back to the default
WORKDIR /var/www/html
```
`Dockerfile` defines the instructions for building a Docker image, specifying the base image, environment setup, dependency installation (our use case here), and runtime configuration.
- Once the files for building the images and containers are defined, run `docker-compose up -d`, where `-d` stands for "detached": the containers run in the background without printing their output to the console. This command should be executed in the terminal from the directory where the previous files were placed.
  - First-time run: use the `docker-compose up -d` command.
  - Subsequent runs: you can start the containers directly from the Docker Desktop interface.
  - Run `docker-compose build --no-cache` whenever you've updated the `Dockerfile` or need to apply changes to the image.
Global (shared)
THE FOLLOWING OPERATIONS ARE RUN ONLY ONCE PER SERVER.
- Enable database connections from any IP:
  Open the `mysqld.cnf` file, either "physically" via GUI on the database server (that is, NOT in the container) or by running `sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf` from the terminal (can be executed anywhere). Then set `bind-address` to any one of the following three options:
  bind-address = *
  bind-address = ::
  bind-address = 0.0.0.0
- Create a MySQL user `'webuser'@'%'` (indicating a `webuser` account accessible from any host).
  i. Choose a login option:
  - Log into MySQL as root using the OS password:
    `sudo mysql`
  - Or log into MySQL as root using the MySQL password:
    `mysql -u root -p`
  ii. Create a user with a password:
  `CREATE USER 'webuser'@'%' IDENTIFIED BY '<password>';`
  Replace `<password>` with the desired password.
THE FOLLOWING OPERATIONS ARE RUN ONLY ONCE PER PROJECT.
- Create the database with UTF-8 encoding:
  `CREATE DATABASE <database-name> CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;`
  Replace `<database-name>` with the actual database name (use lowercase with no spaces or hyphens for best practice).
- Grant the user privileges to operate on the database:
  `GRANT ALL PRIVILEGES ON <database-name>.* TO 'webuser'@'%';`
  Replace `<database-name>` with the name you set in the previous step.
- Refresh privileges:
  `FLUSH PRIVILEGES;`
- Exit MySQL:
  `EXIT;`
  This closes the connection and completes the process.
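The interactive session above can be collapsed into one scriptable step. In the sketch below, `MYSQL` defaults to `cat`, so the statements are only previewed (a dry run); setting `MYSQL='mysql -u root -p'` would execute them on the server for real. The database name `myproject` and the password `change-me` are placeholders.

```shell
# Dry run by default: MYSQL=cat just echoes the statements instead of running them.
# Set MYSQL='mysql -u root -p' to actually execute them on the database server.
MYSQL=${MYSQL:-cat}
$MYSQL <<'SQL'
CREATE USER 'webuser'@'%' IDENTIFIED BY 'change-me';
CREATE DATABASE myproject CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
GRANT ALL PRIVILEGES ON myproject.* TO 'webuser'@'%';
FLUSH PRIVILEGES;
SQL
```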
Local (individual) [OPTIONAL]
THE FOLLOWING OPERATIONS ARE RUN ONLY ONCE PER PROJECT.
- At this point, switch to VS Code.
  - Open the Remote Explorer extension on the left, and ensure "Dev Containers" is selected in the dropdown at the top.
  - In the "DEV CONTAINERS" section just below (visible thanks to the Dev Containers extension installed earlier), expand it to see a list of all containers.
  - Select the `db` container; this is NOT the one used for working on the code.
  - To operate inside a container from the CLI, use the command `docker exec -it {container_id} bash` from the directory containing the Docker setup files, replacing `{container_id}` with the ID of the container you want to access. [OPTIONAL]
- Create a MySQL user `'webuser'@'localhost'` (indicating a `webuser` account accessible only from `localhost`).
  i. Choose a login option:
  - Log into MySQL as root using the OS password. (TODO: TESTING)
    `sudo mysql`
  - Or log into MySQL as root using the MySQL password:
    `mysql -u root -p`
  ii. Create a user with a password:
  `CREATE USER 'webuser'@'localhost' IDENTIFIED BY '<password>';`
  Replace `<password>` with the desired password.
- Create the database with UTF-8 encoding:
  `CREATE DATABASE <database-name> CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;`
  Replace `<database-name>` with the actual database name (use lowercase with no spaces or hyphens for best practice).
- Grant the user privileges to operate on the database:
  `GRANT ALL PRIVILEGES ON <database-name>.* TO 'webuser'@'localhost';`
  Replace `<database-name>` with the name you set in the previous step.
- Refresh privileges:
  `FLUSH PRIVILEGES;`
- Exit MySQL:
  `EXIT;`
  This closes the connection and completes the process.
Laravel handles the backend (API, database interactions, and server-side functionality), while Next.js takes care of the frontend.
This combination allows Laravel to serve as the API-only backend while Next.js builds the user interface with React.
- If you haven't already, switch to VS Code.
- Open the Remote Explorer extension on the left, and ensure "Dev Containers" is selected in the dropdown at the top.
- In the "DEV CONTAINERS" section just below (visible thanks to the Dev Containers extension installed earlier), expand it to see a list of all containers.
- Select the `php-apache` container; this is the one used for working on the code.
  - To operate inside a container from the CLI, use the command `docker exec -it {container_id} bash` from the directory containing the Docker setup files, replacing `{container_id}` with the ID of the container you want to access. [OPTIONAL]
  NOTE: When starting the `php-apache` container, it defaults to the `/var/www/html` directory because this is set in `docker-compose.yml`.
  Reference:
  volumes:
    - ./src:/var/www/html
- The following steps should be done in the root directory (the one named after the project).
- You can initialize a Laravel project with either:
  - The command `composer create-project laravel/laravel <project_name>`, if you want to start from scratch rather than from the GitHub `clone`.
  - Or by cloning the repository from the CLI with `git clone <repo-name>`, or (the preferred method for this project):
    - Go to the repo templates.
    - Create a new project by clicking "Use this template" > "Create a new repository" (top right).
    - Configuration/Settings:
      - Set it to "Private".
      - Under "Owner": "humanbit-dev-org".
      - Leave "Include all branches" unchecked.
Run
composer install
:composer install
-
Download the
.env
file fromprogetti aperti
and paste it into the root directory.
APP_URL
andDB_DATABASE
must be changed according to the project's configuration.NOTE: Make sure to re-add the initial dot, as Google Drive removes it.
-
If you want to use the global database (shared), set the host to the server's IP.
-
If you want to use the local database (individual), set the host to the container name (in this case,
db
).สแด๊ฐแดสแดษดแดแด
docker-compose.yml
db: container_name: db
NOTE: If starting from scratch, run
cp .env.example .env
to copy the example configuration file and set the database parameters accordingly.
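As a reference, the database-related keys of the `.env` file might look like the fragment below (hypothetical values: `DB_HOST` is the compose service name `db` for the local database, or the shared server's IP for the global one; `myproject` and `change-me` are placeholders):

```shell
# Hypothetical .env database section for this setup
APP_URL=http://localhost:8080/templates/public
DB_CONNECTION=mysql
# Local DB: compose service name; global DB: the server's IP
DB_HOST=db
DB_PORT=3306
# Placeholder: use the project's actual database name
DB_DATABASE=myproject
DB_USERNAME=webuser
# Placeholder: the password set for webuser
DB_PASSWORD=change-me
```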
- Run this command to create a unique application key for security:
  `php artisan key:generate`
- Next, run this command to create the tables in the database:
  `php artisan migrate`
  For table drop and recreation, run `php artisan migrate:fresh`.
  For table drop, recreation, and seeding, run `php artisan migrate:fresh --seed`.
- Finally, run this command to seed the database with sample data: [OPTIONAL]
  `php artisan db:seed`
Note regarding steps 5 and 6 above:
These operations should be performed only once during initial project setup, and then as needed for updates to Migrations, Factories, or Seeders.
Global database (shared): this should be done by only one team member.
Local database (individual): this should be done by each individual team member.
Backpack for Laravel is a tool to quickly set up and manage admin panels (CRUD interfaces) for database records in Laravel apps.
The following steps should be done in the root directory (the one named after the project).
- Inside Docker, the application runs on internal port 80, while the external port for accessing it is 8080. Temporarily update the `.env` file to match this setup:
  `APP_URL=http://localhost:80/templates/public`
  Reference:
  Placeholder.
- Install Backpack for Laravel via Composer:
  `composer require backpack/crud`
- Run the Backpack installation command and follow the prompts:
  `php artisan backpack:install`
- Set the correct permissions for the storage directory:
  `chmod -R 775 storage/`
  `chown -R www-data:www-data storage`
- Revert the `.env` file to use port 8080 like so:
  `APP_URL=http://localhost:8080/templates/public`
- The admin panel will now be accessible at:
  `http://localhost:8080/templates/public/admin`
Follow these steps to set up and start the Next.js frontend within the container.
Make sure to execute these commands each time the container restarts:
- Navigate to the frontend directory:
  Change into the frontend directory, where all the Next.js frontend code is located.
  `cd frontend/`
  Next.js telemetry management link.
- Install dependencies with `pnpm`: [INITIAL INSTALL & DEPENDENCY UPDATES]
  Run `pnpm install` to ensure all project dependencies are installed:
  `pnpm install`
  If you encounter an error due to a missing `pnpm`, install it globally with:
  `npm install -g pnpm`
- Start the development server:
  Run the Next.js development server, allowing you to view your application locally:
  `pnpm dev`
Compile Sass files:
Sass and Prettier Watch
Since this virtualized environment stores files on the host OS, file changes aren't detected on save.
To force the container to capture these events:-
Copy the
.prettierignore
and.prettierrc.json
files into the project directory. -
Install the necessary packages:
pnpm install --save-dev prettier onchange @prettier/plugin-php concurrently
-
In
package.json
, define the following script to watch both Prettier and Sass changes in parallel:"watch-all": "concurrently \"onchange \\\"../**/*\\\" --poll 1000 -- prettier --write --ignore-unknown {{changed}}\" \"sass --watch src/static/scss/:src/static/css/ --poll\""
-
Run the combined watch script with:
pnpm run watch-all
This setup will simultaneously watch for changes in both Sass and Prettier, automatically compiling and formatting as needed.
Start Sass in watch mode to automatically compile Sass files to CSS whenever changes are made.
This command will keep the Sass compiler running and watching for changes in theassets/sass
directory, outputting the CSS toassets/css
.sass --watch src/static/scss/:src/static/css/ --poll
-
-
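For context, the `scripts` block of `package.json` would then contain the `watch-all` entry alongside the usual Next.js scripts (the `dev` and `build` entries below are assumptions based on a standard Next.js setup, not taken from this project):

```json
{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "watch-all": "concurrently \"onchange \\\"../**/*\\\" --poll 1000 -- prettier --write --ignore-unknown {{changed}}\" \"sass --watch src/static/scss/:src/static/css/ --poll\""
  }
}
```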
Adding packages, plugins, or libraries: [OPTIONAL]
To install new packages, plugins, or libraries, use
pnpm
with the appropriate package name. For example:pnpm add <package-name>
This will add the package to the project and make it available for use within your Next.js application.
สแด๊ฐแดสแดษดแดแด
Make a log in Next.js:
pnpm run build > build.log 2>&1
Clean possibly troublesome cache files:
pnpm store prune && rm -rf .next node_modules pnpm-lock.yaml && pnpm install && pnpm run build
pnpm store prune && rm -rf .next node_modules pnpm-lock.yaml && pnpm install && pnpm run lint && pnpm run build
Prettier
pnpm prettier --check ../**/* --ignore-path .prettierignore
Backpack?
php artisan cache:clear
php artisan config:clear
php artisan route:clear
GitHub Desktop is arguably the most popular and fully featured Git client.
The hyperlinks in the following section contain a basic explanation on hover.
- Local access
  - Use `clone` in GitHub Desktop to make the repository available locally if it's your first time with a new project. Operating locally allows each team member to develop independently without affecting the main project until changes are reviewed and merged.
- Work on your local installation
  - Continue coding and saving locally. After saving, you'll see both the old and new code side by side in GitHub Desktop, allowing you to compare the discrepancies between the cloned files and your changed ones.
- Sync with the latest changes
  - If you're collaborating with other team members and need their latest adjustments before committing, perform `fetch` to check for updates.
  - If you have unfinished tasks and need to switch branches or repositories, `stash` your modifications; otherwise, your progress will be lost.
  - Once you're ready, create a `commit` to save your local activity. Changes should always be committed in the local environment, as doing so directly on the GitHub website's interface may lead to conflicts or unintended errors.
  - After committing, always `fetch` and `pull` to ensure you have the latest changes made by others and avoid conflicts.
    - If there are no conflicts, the changes will be merged automatically.
    - If conflicts occur, you'll need to resolve them manually.
- `push` your changes to GitHub
  - After committing, `push` your changes to the remote repository to make them available to other team members.
  - Applying specific commits from one `branch` to another can be achieved by cherry-picking.
  - Should any issues arise after changes are pushed and merged, GitHub allows you to roll back (`reset`/`revert`) and undo the changes.
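For reference, the same cycle can be driven from the command line with Git itself. The sketch below demonstrates the local half (stage and commit) in a throwaway repository, so it is safe to run anywhere; the remote half (fetch, pull, push) is shown as comments because it needs a configured remote. All names and messages are placeholders.

```shell
# Throwaway repository standing in for your local clone.
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name Dev

echo 'feature work' > app.txt            # "continue coding and saving locally"
git stash list                            # `git stash` shelves unfinished work when switching branches
git add -A                                # stage the changes
git commit -qm "Describe the change"      # commit locally first

git log --oneline | wc -l                 # counts the local commits recorded so far
# Then, against the real remote:
#   git fetch && git pull    # sync with teammates' changes first
#   git push origin main     # publish your commit
```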