Services
Version: v0.4.2
Last Updated: February 1, 2026
Complete service documentation for the nself CLI stack.
- Service Overview
- Core Services
- Optional Services
- Monitoring Services
- Administrative Services
- Service Integration
- Best Practices
nself CLI provides a complete backend infrastructure stack with the following services:
| Tier | Services | Always Enabled | Enable Via |
|---|---|---|---|
| Core | 4 services | Yes | Automatic |
| Optional | 7 services | No | Environment variables |
| Monitoring | 10 services | No | MONITORING_ENABLED=true |
| Admin | 1 service | No | NSELF_ADMIN_ENABLED=true |
nginx (Port 80/443)
├─→ hasura (Port 8080)
│ └─→ postgres (Port 5432)
├─→ auth (Port 4000)
│ ├─→ postgres
│ └─→ hasura
├─→ storage (Port 5001)
│ ├─→ minio (Port 9000)
│ ├─→ postgres
│ └─→ hasura
└─→ minio (Console Port 9001)
redis (Port 6379) ── Independent
mailpit (Port 8025) ── Independent
meilisearch (Port 7700) ── Independent
These four services are always enabled and form the foundation of your backend.
Production-grade relational database with 60+ extensions
| Property | Value |
|---|---|
| Image | postgres:16-alpine |
| Port | 5432 |
| Data Location | /var/lib/postgresql/data (Docker volume) |
| Default Database | {project_name}_dev |
| Default User | postgres |
| Health Check | pg_isready -U postgres |
- Latest PostgreSQL 16 - Modern SQL features
- Alpine Linux - Minimal image size (~80MB)
- 60+ Extensions - pgcrypto, uuid-ossp, pg_trgm, etc.
- Full-text Search - Built-in FTS capabilities
- JSON Support - Native JSON and JSONB types (see the query sketch after this list)
- Connection Pooling - Via PgBouncer (optional)
- Replication - Streaming replication support
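Two of the features above, JSONB and built-in full-text search, can be exercised directly from application code. A minimal sketch using the pg Pool settings shown later on this page; the profiles table and its columns are hypothetical:
import { Pool } from 'pg'

const pool = new Pool({
  host: 'localhost',
  port: 5432,
  database: 'myapp_dev',
  user: 'postgres',
  password: 'postgres-dev-password',
})

// JSONB containment: rows whose preferences include {"theme": "dark"}
// (assumes a hypothetical "profiles" table with a JSONB "preferences" column)
const darkTheme = await pool.query(
  'SELECT id, preferences FROM profiles WHERE preferences @> $1::jsonb',
  [JSON.stringify({ theme: 'dark' })]
)

// Built-in full-text search on a text column
const matches = await pool.query(
  "SELECT id, bio FROM profiles WHERE to_tsvector('english', bio) @@ plainto_tsquery('english', $1)",
  ['graphql developer']
)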
Cryptography:
CREATE EXTENSION pgcrypto; -- Cryptographic functions
CREATE EXTENSION "uuid-ossp"; -- UUID generationFull-Text Search:
CREATE EXTENSION pg_trgm; -- Similarity search
CREATE EXTENSION pg_freespacemap;
CREATE EXTENSION btree_gin; -- GIN indexes for full-text
CREATE EXTENSION btree_gist; -- GIST indexes
Performance:
CREATE EXTENSION pg_stat_statements; -- Query performance
CREATE EXTENSION pg_buffercache; -- Cache stats
CREATE EXTENSION pg_prewarm; -- Buffer prewarming
Data Types:
CREATE EXTENSION hstore; -- Key-value store
CREATE EXTENSION ltree; -- Hierarchical tree
CREATE EXTENSION citext; -- Case-insensitive text
CREATE EXTENSION isn; -- ISBN/ISSN, etc.
Geographic (Optional):
CREATE EXTENSION postgis; -- Geographic objects
CREATE EXTENSION postgis_topology;
CREATE EXTENSION address_standardizer;
Time-Series (Optional):
CREATE EXTENSION timescaledb; -- Time-series optimization
Job Scheduling (Optional):
CREATE EXTENSION pg_cron; -- Cron-like job scheduler
Default Configuration (.backend/.env):
POSTGRES_DB=myapp_dev
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres-dev-password
POSTGRES_PORT=5432
Performance Tuning:
For development (8GB RAM):
# Add to docker-compose.yml postgres service
command:
- postgres
- -c
- shared_buffers=2GB
- -c
- effective_cache_size=6GB
- -c
- maintenance_work_mem=512MB
- -c
- checkpoint_completion_target=0.9
- -c
- wal_buffers=16MB
- -c
- default_statistics_target=100
- -c
- random_page_cost=1.1
- -c
- effective_io_concurrency=200
- -c
- work_mem=16MB
- -c
- min_wal_size=1GB
- -c
- max_wal_size=4GB
For production (32GB RAM):
command:
- postgres
- -c
- shared_buffers=8GB
- -c
- effective_cache_size=24GB
- -c
- maintenance_work_mem=2GB
- -c
- work_mem=64MB
- -c
- max_connections=200
From Host Machine:
# Using psql
psql -h localhost -p 5432 -U postgres -d myapp_dev
# Using Docker exec
nself exec postgres psql -U postgres -d myapp_dev
From Application:
// Node.js (pg)
const { Pool } = require('pg')
const pool = new Pool({
host: 'localhost',
port: 5432,
database: 'myapp_dev',
user: 'postgres',
password: 'postgres-dev-password',
})
// Connection string
const connectionString = 'postgresql://postgres:postgres-dev-password@localhost:5432/myapp_dev'
GUI Tools:
- pgAdmin 4: Free, feature-rich
- TablePlus: Modern, beautiful UI (paid)
- DBeaver: Open-source, Java-based
- Postico: macOS native (paid)
Create Backup:
# Using nself CLI
nself db:dump backup.sql
# Manual backup
docker exec myapp_postgres pg_dump -U postgres myapp_dev > backup.sql
# Compressed backup
docker exec myapp_postgres pg_dump -U postgres myapp_dev | gzip > backup.sql.gz
Restore Backup:
# Using nself CLI
nself db:restore backup.sql
# Manual restore
cat backup.sql | docker exec -i myapp_postgres psql -U postgres -d myapp_dev
# From compressed
gunzip < backup.sql.gz | docker exec -i myapp_postgres psql -U postgres -d myapp_dev
Create Database:
CREATE DATABASE my_new_db;
Create User:
CREATE USER myuser WITH PASSWORD 'mypassword';
GRANT ALL PRIVILEGES ON DATABASE myapp_dev TO myuser;
List Databases:
\l
-- or
SELECT datname FROM pg_database;
List Tables:
\dt
-- or
SELECT tablename FROM pg_tables WHERE schemaname = 'public';
Table Size:
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
Instant GraphQL API for your PostgreSQL database
| Property | Value |
|---|---|
| Image | hasura/graphql-engine:v2.44.0 |
| Port | 8080 |
| Console | http://localhost:8080/console |
| API Endpoint | http://api.localhost/v1/graphql |
| Admin Secret | hasura-admin-secret-dev (dev) |
| Health Check | http://localhost:8080/healthz |
- Auto-Generated API - CRUD operations from database schema
- Real-time Subscriptions - Live data updates via WebSocket
- Authorization - Row-level permissions and role-based access
- Event Triggers - React to database changes
- Remote Schemas - Combine multiple GraphQL APIs
- Actions - Custom business logic endpoints
- RESTified Endpoints - Convert GraphQL to REST
- Query Caching - Response caching for performance
- API Limits - Rate limiting and depth limiting
Environment Variables:
HASURA_GRAPHQL_DATABASE_URL=postgres://postgres:postgres-dev-password@postgres:5432/myapp_dev
HASURA_GRAPHQL_ADMIN_SECRET=hasura-admin-secret-dev
HASURA_GRAPHQL_JWT_SECRET={"type":"HS256","key":"development-secret-key-minimum-32-characters-long"}
HASURA_GRAPHQL_ENABLE_CONSOLE=true
HASURA_GRAPHQL_DEV_MODE=true
HASURA_GRAPHQL_ENABLED_LOG_TYPES=startup,http-log,webhook-log,websocket-log
HASURA_GRAPHQL_ENABLE_TELEMETRY=false
HASURA_GRAPHQL_CORS_DOMAIN=*
HASURA_GRAPHQL_UNAUTHORIZED_ROLE=public
Access Console:
# Direct access
open http://localhost:8080/console
# Or via nself CLI
nself console
Console Features:
- DATA: Manage tables, relationships, permissions
- ACTIONS: Define custom GraphQL mutations
- REMOTE SCHEMAS: Integrate external GraphQL APIs
- EVENTS: Set up event triggers and scheduled triggers
- API: GraphQL playground
- SETTINGS: Metadata management
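Beyond the console, any HTTP client can query the GraphQL endpoint directly. A minimal sketch using fetch with the dev admin secret from the configuration above (server-side only; never ship the admin secret to a browser):
const response = await fetch('http://api.localhost/v1/graphql', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-hasura-admin-secret': 'hasura-admin-secret-dev',
  },
  body: JSON.stringify({
    query: 'query GetUsers { users { id name email } }',
  }),
})

const { data, errors } = await response.json()
if (errors) throw new Error(errors[0].message)
console.log(data.users)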
Basic Query:
query GetUsers {
users {
id
name
email
created_at
}
}
Filtered Query:
query GetActiveUsers {
users(where: { active: { _eq: true } }) {
id
name
}
}
Pagination:
query GetUsersPaginated {
users(limit: 10, offset: 0, order_by: { created_at: desc }) {
id
name
}
}
Relationships:
query GetUsersWithPosts {
users {
id
name
posts {
id
title
content
}
}
}
Aggregations:
query GetUserStats {
users_aggregate {
aggregate {
count
max {
created_at
}
min {
created_at
}
}
}
}
Insert:
mutation CreateUser {
insert_users_one(object: { name: "John Doe", email: "[email protected]" }) {
id
name
}
}
Update:
mutation UpdateUser {
update_users_by_pk(pk_columns: { id: "123" }, _set: { name: "Jane Doe" }) {
id
name
}
}
Delete:
mutation DeleteUser {
delete_users_by_pk(id: "123") {
id
}
}
Batch Insert:
mutation CreateMultipleUsers {
insert_users(
objects: [
{ name: "Alice", email: "[email protected]" }
{ name: "Bob", email: "[email protected]" }
]
) {
affected_rows
returning {
id
name
}
}
}
Subscribe to Table:
subscription WatchUsers {
users {
id
name
email
}
}
Filtered Subscription:
subscription WatchActiveUsers {
users(where: { active: { _eq: true } }) {
id
name
status
}
}
Subscribe to Specific Record:
subscription WatchUser($userId: uuid!) {
users_by_pk(id: $userId) {
id
name
email
updated_at
}
}
Define Permissions in Console:
- Go to DATA → [table] → Permissions
- Add role (e.g., "user")
- Configure operations:
Select Permission (Read):
{
"id": {
"_eq": "X-Hasura-User-Id"
}
}
Insert Permission (Create):
{
"user_id": {
"_eq": "X-Hasura-User-Id"
}
}
Update Permission:
{
"id": {
"_eq": "X-Hasura-User-Id"
}
}
Delete Permission:
{
"id": {
"_eq": "X-Hasura-User-Id"
}
}
Create Event Trigger:
- Go to EVENTS → Event Triggers → Create
- Configure:
- Table: users
- Operation: Insert
- Webhook: http://myapp:3000/webhooks/user-created
Payload:
{
"event": {
"op": "INSERT",
"data": {
"old": null,
"new": {
"id": "123",
"name": "John",
"email": "[email protected]"
}
}
},
"created_at": "2026-02-01T12:00:00Z",
"id": "event-id",
"trigger": {
"name": "user_created"
},
"table": {
"schema": "public",
"name": "users"
}
}
Define Custom Action:
type Mutation {
sendEmail(to: String!, subject: String!, body: String!): SendEmailOutput
}
type SendEmailOutput {
success: Boolean!
messageId: String
}
Handler Endpoint:
// POST /actions/send-email
app.post('/actions/send-email', async (req, res) => {
const { to, subject, body } = req.body.input
// Send email logic
const result = await sendEmail(to, subject, body)
res.json({
success: true,
messageId: result.id,
})
})
Add Remote Schema:
- Go to REMOTE SCHEMAS → Add
- GraphQL Server URL: https://api.github.com/graphql
- Headers: { "Authorization": "Bearer YOUR_GITHUB_TOKEN" }
Query Remote Schema:
query {
github_viewer {
login
name
repositories(first: 5) {
nodes {
name
description
}
}
}
}
Export Metadata:
# Using Hasura CLI
hasura metadata export
# Using nself
nself exec hasura hasura-cli metadata export
Apply Metadata:
hasura metadata apply
Reload Metadata:
hasura metadata reload
Complete authentication service with social providers
| Property | Value |
|---|---|
| Image | nhost/hasura-auth:0.36.0 |
| Port | 4000 |
| API Endpoint | http://auth.localhost/v1/auth |
| Health Check | http://localhost:4000/healthz |
| JWT Algorithm | HS256 |
| Token Expiry | 15 min (access), 30 days (refresh) |
- Email/Password - Traditional authentication
- Magic Links - Passwordless email login
- Social OAuth - Google, GitHub, Apple, Facebook, etc.
- Multi-Factor Auth - TOTP (Google Authenticator)
- Email Verification - Confirm email addresses
- Password Reset - Secure password recovery
- Session Management - JWT tokens with refresh
- WebAuthn - Biometric authentication (optional)
- Anonymous Users - Guest access
OAuth Providers:
- Google
- GitHub
- Apple
- Microsoft
- GitLab
- Bitbucket
- Discord
- Twitch
- Spotify
- ID.me (for military/veterans)
Enable Providers (.backend/.env):
# Email/Password (always enabled)
AUTH_EMAIL_SIGNIN_EMAIL_VERIFIED_REQUIRED=false
# Magic Links
AUTH_EMAIL_PASSWORDLESS_ENABLED=true
# Google OAuth
AUTH_PROVIDER_GOOGLE_ENABLED=true
AUTH_PROVIDER_GOOGLE_CLIENT_ID=your-client-id
AUTH_PROVIDER_GOOGLE_CLIENT_SECRET=your-client-secret
# GitHub OAuth
AUTH_PROVIDER_GITHUB_ENABLED=true
AUTH_PROVIDER_GITHUB_CLIENT_ID=your-client-id
AUTH_PROVIDER_GITHUB_CLIENT_SECRET=your-client-secret
# Apple OAuth
AUTH_PROVIDER_APPLE_ENABLED=true
AUTH_PROVIDER_APPLE_CLIENT_ID=your-client-id
AUTH_PROVIDER_APPLE_TEAM_ID=your-team-id
AUTH_PROVIDER_APPLE_KEY_ID=your-key-id
AUTH_PROVIDER_APPLE_PRIVATE_KEY=your-private-key
# Multi-Factor Auth
AUTH_MFA_ENABLED=true
AUTH_MFA_TOTP_ISSUER=MyApp
Sign Up (Email/Password):
POST http://auth.localhost/v1/auth/signup/email-password
{
"email": "[email protected]",
"password": "SecurePassword123!",
"options": {
"userData": {
"displayName": "John Doe"
}
}
}
Sign In (Email/Password):
POST http://auth.localhost/v1/auth/signin/email-password
{
"email": "[email protected]",
"password": "SecurePassword123!"
}
Response:
{
"session": {
"accessToken": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
"accessTokenExpiresIn": 900,
"refreshToken": "uuid-refresh-token",
"user": {
"id": "uuid",
"email": "[email protected]",
"displayName": "John Doe",
"emailVerified": false,
"phoneNumber": null,
"createdAt": "2026-02-01T12:00:00Z"
}
}
}
Magic Link (Passwordless):
POST http://auth.localhost/v1/auth/signin/passwordless/email
{
"email": "[email protected]",
"options": {
"redirectTo": "http://myapp.localhost/auth/callback"
}
}
OAuth Sign In:
# Redirect user to:
GET http://auth.localhost/v1/auth/signin/provider/google?redirectTo=http://myapp.localhost/auth/callback
# User authorizes, then redirected back with:
http://myapp.localhost/auth/callback?refreshToken=uuid-refresh-token
Refresh Token:
POST http://auth.localhost/v1/auth/token
{
"refreshToken": "uuid-refresh-token"
}
Sign Out:
POST http://auth.localhost/v1/auth/signout
{
"refreshToken": "uuid-refresh-token"
}
Change Password:
POST http://auth.localhost/v1/auth/user/password
Headers:
Authorization: Bearer {accessToken}
{
"newPassword": "NewSecurePassword123!"
}
Request Password Reset:
POST http://auth.localhost/v1/auth/user/password/reset
{
"email": "[email protected]",
"options": {
"redirectTo": "http://myapp.localhost/auth/reset-password"
}
}
JavaScript/TypeScript:
import { NhostClient } from '@nhost/nhost-js'
const nhost = new NhostClient({
subdomain: 'localhost',
region: '',
authUrl: 'http://auth.localhost/v1/auth',
graphqlUrl: 'http://api.localhost/v1/graphql',
storageUrl: 'http://storage.localhost/v1/storage',
})
// Sign up
const { session, error } = await nhost.auth.signUp({
email: '[email protected]',
password: 'SecurePassword123!',
})
// Sign in
const { session, error } = await nhost.auth.signIn({
email: '[email protected]',
password: 'SecurePassword123!',
})
// Get user
const user = nhost.auth.getUser()
// Sign out
await nhost.auth.signOut()
React:
import { NhostProvider, useAuth, useSignInEmailPassword } from '@nhost/react';
function App() {
return (
<NhostProvider nhost={nhost}>
<AuthComponent />
</NhostProvider>
);
}
function AuthComponent() {
const { isAuthenticated, user } = useAuth();
const { signInEmailPassword, isLoading, error } = useSignInEmailPassword();
const handleSignIn = async () => {
await signInEmailPassword('[email protected]', 'password');
};
if (isAuthenticated) {
return <div>Welcome {user?.displayName}!</div>;
}
return <button onClick={handleSignIn}>Sign In</button>;
}
Access Token (decoded):
{
"sub": "user-uuid",
"iat": 1706788800,
"exp": 1706789700,
"https://hasura.io/jwt/claims": {
"x-hasura-allowed-roles": ["user", "me"],
"x-hasura-default-role": "user",
"x-hasura-user-id": "user-uuid",
"x-hasura-user-is-anonymous": "false"
}
}
Default User Fields:
- id - UUID
- email - Email address
- displayName - User's display name
- phoneNumber - Phone number (optional)
- avatarUrl - Profile picture URL
- emailVerified - Email verification status
- phoneNumberVerified - Phone verification status
- locale - User's preferred locale
- createdAt - Account creation timestamp
- metadata - Custom JSON metadata
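On a custom backend you can verify the access token yourself and read the Hasura claims shown above. A minimal sketch assuming the jsonwebtoken package (not part of the stack) and the HS256 dev key from HASURA_GRAPHQL_JWT_SECRET:
import jwt, { JwtPayload } from 'jsonwebtoken'

// Must match the "key" field of HASURA_GRAPHQL_JWT_SECRET
const JWT_KEY = 'development-secret-key-minimum-32-characters-long'

function getUserIdFromToken(accessToken: string): string {
  // Throws if the signature is invalid or the token has expired
  const decoded = jwt.verify(accessToken, JWT_KEY, { algorithms: ['HS256'] }) as JwtPayload
  const claims = decoded['https://hasura.io/jwt/claims']
  return claims['x-hasura-user-id']
}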
Update User Metadata:
POST http://auth.localhost/v1/auth/user
Headers:
Authorization: Bearer {accessToken}
{
"metadata": {
"customField": "value",
"preferences": {
"theme": "dark",
"notifications": true
}
}
}
Reverse proxy with SSL/TLS termination
| Property | Value |
|---|---|
| Image | nginx:alpine |
| Ports | 80 (HTTP), 443 (HTTPS) |
| Config | .backend/nginx/nginx.conf |
| Sites | .backend/nginx/conf.d/ |
| SSL Certs | .backend/ssl/certificates/ |
- Reverse Proxy - Route requests to services
- SSL/TLS - Automatic HTTPS
- Load Balancing - Distribute traffic
- Static Files - Serve frontend assets
- WebSocket - Proxy WebSocket connections
- Compression - Gzip compression
- Caching - Response caching
- Rate Limiting - Prevent abuse
Main Config (.backend/nginx/nginx.conf):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# Gzip compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml text/javascript
application/json application/javascript application/xml+rss
application/rss+xml application/atom+xml image/svg+xml
text/x-component text/x-cross-domain-policy;
# Include site configurations
include /etc/nginx/conf.d/*.conf;
}
Site Config (.backend/nginx/conf.d/default.conf):
# API subdomain
server {
listen 80;
server_name api.localhost;
location / {
proxy_pass http://hasura:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
# Auth subdomain
server {
listen 80;
server_name auth.localhost;
location / {
proxy_pass http://auth:4000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# Storage subdomain
server {
listen 80;
server_name storage.localhost;
location / {
proxy_pass http://storage:5001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# File upload settings
client_max_body_size 100M;
proxy_request_buffering off;
}
}
Enable HTTPS:
server {
listen 443 ssl http2;
server_name api.localhost;
ssl_certificate /etc/nginx/ssl/localhost/cert.pem;
ssl_certificate_key /etc/nginx/ssl/localhost/key.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
proxy_pass http://hasura:8080;
# ... proxy settings
}
}
# Redirect HTTP to HTTPS
server {
listen 80;
server_name api.localhost;
return 301 https://$server_name$request_uri;
}
Enable these services via environment variables in .backend/.env.
S3-compatible object storage
Enable: MINIO_ENABLED=true
| Property | Value |
|---|---|
| Image | minio/minio:latest |
| API Port | 9000 |
| Console Port | 9001 |
| Console URL | http://localhost:9001 |
| Access Key | minioadmin |
| Secret Key | minioadmin |
- S3 Compatible - Works with AWS S3 SDKs
- Bucket Management - Create/delete buckets
- Object Versioning - Track file versions
- Lifecycle Policies - Auto-delete old files
- Access Control - Bucket and object policies
- Encryption - Server-side encryption
- Browser UI - Web console for management
Create Bucket:
# Using MinIO Client (mc)
docker exec -it myapp_minio mc mb /data/my-bucket
# Or via MinIO console
open http://localhost:9001
Upload File:
# Using mc
docker exec -it myapp_minio mc cp /path/to/file /data/my-bucket/
# Using the AWS S3 SDK (see the JavaScript example below)
Access Control Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "AWS": ["*"] },
"Action": ["s3:GetObject"],
"Resource": ["arn:aws:s3:::my-bucket/*"]
}
]
}
JavaScript (AWS SDK):
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
const s3 = new S3Client({
region: 'us-east-1',
endpoint: 'http://localhost:9000',
credentials: {
accessKeyId: 'minioadmin',
secretAccessKey: 'minioadmin',
},
forcePathStyle: true,
})
// Upload file
await s3.send(
new PutObjectCommand({
Bucket: 'my-bucket',
Key: 'file.txt',
Body: fileBuffer,
})
)
High-performance in-memory data store
Enable: REDIS_ENABLED=true
| Property | Value |
|---|---|
| Image | redis:7-alpine |
| Port | 6379 |
| Persistence | RDB + AOF |
| Eviction | allkeys-lru |
- Caching - Lightning-fast cache layer
- Session Storage - User sessions
- Pub/Sub - Message broker
- Rate Limiting - Request throttling
- Job Queues - Background jobs (with Bull; see the sketch after this list)
- Sorted Sets - Leaderboards, rankings
- Geospatial - Location-based queries
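The usage examples below cover sessions, caching, rate limiting, and pub/sub. For the job-queue use case, a minimal sketch with BullMQ (Bull's successor; an assumption here, not something nself installs for you):
import { Queue, Worker } from 'bullmq'

const connection = { host: 'localhost', port: 6379 }

// Producer: enqueue a background job
const emailQueue = new Queue('emails', { connection })
await emailQueue.add('welcome-email', { userId: '123', template: 'welcome' })

// Consumer: process jobs, typically in a separate process
new Worker(
  'emails',
  async (job) => {
    console.log(`Sending ${job.name} to user ${job.data.userId}`)
  },
  { connection }
)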
Session Storage:
import Redis from 'ioredis'
const redis = new Redis(6379, 'localhost')
// Store session
await redis.setex(`session:${sessionId}`, 3600, JSON.stringify(userData))
// Get session
const session = await redis.get(`session:${sessionId}`)
Caching:
// Cache API response
await redis.setex(`cache:users:${userId}`, 300, JSON.stringify(userData))
// Get cached data
const cached = await redis.get(`cache:users:${userId}`)
if (cached) return JSON.parse(cached)
Rate Limiting:
// Increment request count
const count = await redis.incr(`ratelimit:${ip}:${minute}`)
await redis.expire(`ratelimit:${ip}:${minute}`, 60)
if (count > 100) {
throw new Error('Rate limit exceeded')
}
Pub/Sub:
// Publisher
await redis.publish(
'notifications',
JSON.stringify({
type: 'new_message',
userId: '123',
})
)
// Subscriber
redis.subscribe('notifications')
redis.on('message', (channel, message) => {
console.log('Received:', JSON.parse(message))
})
File upload/download service
Enable: Automatically enabled with MinIO
| Property | Value |
|---|---|
| Image | nhost/hasura-storage:0.6.1 |
| Port | 5001 |
| API Endpoint | http://storage.localhost/v1/storage |
| Backend | MinIO |
- File Upload - Direct uploads to S3/MinIO
- Access Control - Permission-based access
- Image Transformation - Resize, crop, format
- Pre-signed URLs - Temporary download links (example at the end of this section)
- Virus Scanning - Optional ClamAV integration
HTTP:
POST http://storage.localhost/v1/storage/files
Headers:
Authorization: Bearer {accessToken}
Body (multipart/form-data):
file: [file data]
bucketId: "public"JavaScript:
import { NhostClient } from '@nhost/nhost-js';
const nhost = new NhostClient({...});
const { fileMetadata, error } = await nhost.storage.upload({
file,
bucketId: 'public',
});
Download File:
GET http://storage.localhost/v1/storage/files/{fileId}
Image Transformation:
GET http://storage.localhost/v1/storage/files/{fileId}?w=300&h=300&q=80
Parameters:
- w - Width
- h - Height
- q - Quality (0-100)
- b - Blur
- fit - Fit mode (cover, contain, fill)
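For the pre-signed URL feature listed above, a hedged sketch with the Nhost JS SDK; the getPresignedUrl call and its return shape should be checked against the SDK version you actually have installed:
// Generate a temporary, expiring download link for a protected file
const { presignedUrl, error } = await nhost.storage.getPresignedUrl({
  fileId: 'your-file-id',
})

if (error) throw error
console.log(presignedUrl.url) // safe to hand to the client until it expires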
Email testing for development
Enable: MAILPIT_ENABLED=true (enabled by default in dev)
| Property | Value |
|---|---|
| Image | axllent/mailpit:latest |
| SMTP Port | 1025 |
| UI Port | 8025 |
| Web UI | http://localhost:8025 |
- Email Capture - Catch all outgoing emails
- Web UI - View emails in browser
- API Access - Programmatic access (see the sketch after this list)
- Search - Full-text search emails
- Attachments - View email attachments
- No Auth - No login required (dev only)
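The API access mentioned above is handy in automated tests. A rough sketch of asserting that an email was captured, assuming Mailpit's REST endpoint GET /api/v1/messages (field names may differ between Mailpit versions):
const res = await fetch('http://localhost:8025/api/v1/messages')
const { messages } = await res.json()

// Field casing follows Mailpit's JSON output; adjust if your version differs
const captured = messages.some((m: any) => m.Subject?.includes('Test Email'))
console.log(captured ? 'email captured' : 'email not found')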
SMTP:
import nodemailer from 'nodemailer'
const transporter = nodemailer.createTransport({
host: 'localhost',
port: 1025,
secure: false, // Mailpit needs no authentication in dev
})
await transporter.sendMail({
from: '[email protected]',
to: '[email protected]',
subject: 'Test Email',
html: '<h1>Hello!</h1><p>This is a test email.</p>',
})
View Emails:
open http://localhost:8025
Lightning-fast full-text search
Enable: MEILISEARCH_ENABLED=true
| Property | Value |
|---|---|
| Image | getmeili/meilisearch:latest |
| Port | 7700 |
| API Endpoint | http://localhost:7700 |
- Instant Search - Search-as-you-type
- Typo Tolerance - Handles spelling errors
- Faceted Search - Filter and facet results (requires filterable attributes; see the sketch after this list)
- Ranking - Custom ranking rules
- Multi-Language - 100+ languages
- Synonyms - Define search synonyms
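Filters and facets only apply to attributes that have been declared filterable, so the filter: 'active = true' used in the search example below needs a one-time settings update. A minimal sketch with the official JS client:
import { MeiliSearch } from 'meilisearch'

const client = new MeiliSearch({ host: 'http://localhost:7700' })
const index = client.index('users')

// Required before filtering/faceting on "active"
await index.updateFilterableAttributes(['active'])

// Optional: Meilisearch's default ranking rules, shown here for reference
await index.updateRankingRules([
  'words', 'typo', 'proximity', 'attribute', 'sort', 'exactness',
])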
HTTP:
POST http://localhost:7700/indexes/users/documents
[
{"id": 1, "name": "Alice", "email": "[email protected]"},
{"id": 2, "name": "Bob", "email": "[email protected]"}
]
JavaScript:
import { MeiliSearch } from 'meilisearch'
const client = new MeiliSearch({
host: 'http://localhost:7700',
})
const index = client.index('users')
await index.addDocuments([
{ id: 1, name: 'Alice', email: '[email protected]' },
{ id: 2, name: 'Bob', email: '[email protected]' },
])
Search:
const results = await index.search('ali', {
limit: 10,
attributesToHighlight: ['name'],
filter: 'active = true',
})
Custom business logic endpoints
Enable: FUNCTIONS_ENABLED=true
| Property | Value |
|---|---|
| Runtime | Node.js, Python, Go |
| Deployment | Hot reload in dev |
| Endpoint | http://functions.localhost |
functions/
├── hello.js # Simple function
├── email.js # Send email function
└── package.json # Dependencies
functions/hello.js:
export default async (req, res) => {
const { name } = req.body
res.json({
message: `Hello, ${name}!`,
timestamp: new Date().toISOString(),
})
}
Call Function:
POST http://functions.localhost/hello
{"name": "World"}Machine learning experiment tracking
Enable: MLFLOW_ENABLED=true
| Property | Value |
|---|---|
| Image | mlflow/mlflow:latest |
| Port | 5000 |
| UI | http://localhost:5000 |
| Backend | PostgreSQL |
| Artifacts | MinIO |
import mlflow
mlflow.set_tracking_uri('http://localhost:5000')
mlflow.set_experiment('my-experiment')
with mlflow.start_run():
mlflow.log_param('learning_rate', 0.01)
mlflow.log_metric('accuracy', 0.95)
mlflow.log_artifact('model.pkl')
Enable: MONITORING_ENABLED=true
Enables 10 services for complete observability:
| Service | Port | Purpose |
|---|---|---|
| Prometheus | 9090 | Metrics collection |
| Grafana | 3000 | Dashboards |
| Loki | 3100 | Log aggregation |
| Promtail | 9080 | Log collection |
| Alertmanager | 9093 | Alerts |
| Node Exporter | 9100 | Host metrics |
| cAdvisor | 8080 | Container metrics |
| Jaeger | 16686 | Distributed tracing |
| Tempo | 3200 | Trace storage |
| OTEL Collector | 4317 | Telemetry |
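Application traces reach Jaeger and Tempo through the OTEL Collector on port 4317. A rough sketch of wiring a Node service to it with the OpenTelemetry SDK; the package names reflect a typical OTel setup and are an assumption, not something nself provisions:
import { NodeSDK } from '@opentelemetry/sdk-node'
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc'
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node'

// Send traces to the OTEL Collector (gRPC), which forwards them to Jaeger/Tempo
const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({ url: 'http://localhost:4317' }),
  instrumentations: [getNodeAutoInstrumentations()],
})

sdk.start()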
Web-based administration panel
Enable: NSELF_ADMIN_ENABLED=true
| Property | Value |
|---|---|
| Port | 3021 (NOT 3100) |
| URL | http://localhost:3021 |
Features:
- Service health dashboard
- Log viewer
- Database browser
- User management
- Configuration editor
- Metrics overview
Backend: nself CLI
Frontend: Next.js
Features: Auth, GraphQL API, File Storage, Search
Frontend Integration:
// lib/nhost.ts
import { NhostClient } from '@nhost/nhost-js';
export const nhost = new NhostClient({
subdomain: 'localhost',
authUrl: 'http://auth.localhost/v1/auth',
graphqlUrl: 'http://api.localhost/v1/graphql',
storageUrl: 'http://storage.localhost/v1/storage',
});
// app/providers.tsx
import { NhostProvider } from '@nhost/nextjs';
export function Providers({ children }) {
return (
<NhostProvider nhost={nhost}>
{children}
</NhostProvider>
);
}
- Use Dev Secrets - Don't use production secrets locally
- Enable Console - Hasura console for rapid development
- Hot Reload - Enable file watching
- Mailpit - Test emails without sending
- Log Verbosity - Enable detailed logs
- Disable Console - Turn off Hasura console
- Strong Secrets - Use 32+ character random strings
- Enable SSL - Always use HTTPS
- Resource Limits - Set memory/CPU limits
- Monitoring - Enable full monitoring stack
- Change Default Passwords - Never use defaults in production
- JWT Secrets - Use strong, unique secrets
- CORS - Restrict to your domains
- Rate Limiting - Protect against abuse
- Firewall - Restrict database access
- Connection Pooling - Use PgBouncer
- Caching - Enable Redis for sessions
- CDN - Use CDN for static assets
- Compression - Enable gzip in nginx
- Indexes - Add database indexes
- Configuration Guide → Configuration.md
- Database Migrations → Migrations.md
- Troubleshooting → Troubleshooting.md
- Architecture Deep Dive → Architecture.md