Backend Database - Mardens-Inc/Pricing-App GitHub Wiki
This document explains the database design, schema conventions, and patterns used in the Pricing App.
- Database Overview
- Connection Management
- Table Naming Strategy
- Schema Conventions
- ID Hashing System
- Table Structures
- Query Patterns
- Migrations
- Performance Considerations
## Database Overview

- Database: MySQL 5.7+ or 8.0+
- Driver: SQLx 0.8.2 (async, compile-time checked)
- Connection Library: database-common-lib (custom)
- Database Name: `pricing`
- Dynamic Table Names: Each location gets its own inventory table
- No Foreign Keys: Between inventory tables (different tables per location)
- TEXT Columns: For flexible inventory data (no fixed schema)
- Shared Tables: `locations` (master list), `history` (audit trail)
- Auto-Timestamps: `DEFAULT CURRENT_TIMESTAMP` and `ON UPDATE CURRENT_TIMESTAMP` clauses
## Connection Management

Database credentials are stored in `dev-server.json`:

```json
{
  "host": "localhost",
  "user": "db_user",
  "password": "db_password",
  "hash": "some_hash_value"
}
```

This file is gitignored for security.
```rust
use database_common_lib::database_connection::{DatabaseConnectionData, create_pool};
use sqlx::MySqlPool;

// Create pool from configuration
let data = DatabaseConnectionData::from_file("dev-server.json")?;
let pool = create_pool(&data).await?;

// Pool is shared across all workers via web::Data
HttpServer::new(move || {
    App::new()
        .app_data(web::Data::new(pool.clone()))
})
```

The connection pool in `lib.rs`:

```rust
MySqlPoolOptions::new()
    .max_connections(5) // Adjust based on load
    .connect(&connection_string)
    .await?
```

## Table Naming Strategy

These tables exist once in the database:
| Table Name | Purpose | Example |
|---|---|---|
| `locations` | Master list of all databases/locations | Row: `{id: 1, name: "Scarborough", ...}` |
| `history` | Audit trail across all locations | Row: `{id: 1, location_id: 5, ...}` |
Each location gets its own set of tables:

| Pattern | Purpose | Example |
|---|---|---|
| `{location_id}` | Inventory records | `123` (if location_id is 123) |
| `options_{location_id}` | Database options | `options_123` |
| `columns_{location_id}` | Column configuration | `columns_123` |
| `substitutions_{location_id}` | Item substitutions | `substitutions_123` |
| `print_options_{location_id}` | Print settings | `print_options_123` |
Advantages:
- Data isolation per location
- Independent schema evolution
- Easier backups (per location)
- Better query performance (smaller tables)
- Flexible column sets per location
Trade-offs:
- Cannot use foreign keys between inventory tables
- Migration scripts are more complex
- Must track table names dynamically
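Since table names must be tracked dynamically, the naming pattern above can be captured in one place. This is an illustrative sketch only; the `table_names` helper is hypothetical, not part of the codebase:

```rust
// Hypothetical helper illustrating the per-location table naming pattern.
fn table_names(location_id: u64) -> Vec<String> {
    vec![
        format!("{location_id}"),               // inventory records
        format!("options_{location_id}"),       // database options
        format!("columns_{location_id}"),       // column configuration
        format!("substitutions_{location_id}"), // item substitutions
        format!("print_options_{location_id}"), // print settings
    ]
}

fn main() {
    let names = table_names(123);
    assert_eq!(names[0], "123");
    assert_eq!(names[3], "substitutions_123");
    println!("{:?}", names);
}
```

Centralizing the pattern like this keeps migration tools and query builders from drifting apart on table-name formatting.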
```rust
// inventory_db.rs - Creating inventory table
pub async fn create(id: u64, headers: Vec<String>, data: &DatabaseConnectionData) -> Result<()> {
    let pool = create_pool(data).await?;
    let mut query = format!(
        "CREATE TABLE IF NOT EXISTS `{}` (id INT AUTO_INCREMENT PRIMARY KEY, ",
        id // Table name is the location ID
    );
    for header in headers {
        query.push_str(&format!("`{}` TEXT DEFAULT NULL, ", header));
    }
    query.push_str(
        "`date` DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL, \
         `last_modified_date` DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL);"
    );
    pool.execute(query.as_str()).await?;
    Ok(())
}
```

## Schema Conventions

```sql
id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY
-- or
id INT AUTO_INCREMENT PRIMARY KEY
```

- Always named `id`
- Always auto-increment
- BIGINT for master tables, INT for per-location tables
```sql
-- Creation timestamp
date DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL
-- or
post_date DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL

-- Update timestamp
last_modified_date DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL
```

- `date` or `post_date` for creation time
- `last_modified_date` for last update
- `DEFAULT CURRENT_TIMESTAMP` sets on creation
- `ON UPDATE CURRENT_TIMESTAMP` updates automatically
```sql
-- Text data (variable length)
name VARCHAR(255) NOT NULL
description TEXT DEFAULT NULL

-- Numeric data
id BIGINT UNSIGNED
price DECIMAL(10,2)
quantity INT

-- Boolean flags
visible BOOLEAN DEFAULT TRUE NOT NULL
active BOOLEAN DEFAULT FALSE

-- Dates
date DATETIME DEFAULT CURRENT_TIMESTAMP
```

- Table names: lowercase, underscore_separated (or numeric for inventory)
- Column names: lowercase, underscore_separated
- Boolean columns: `visible`, `active`, `is_deleted` (descriptive names)
- Foreign keys: `{table}_id` (e.g., `location_id`)
## ID Hashing System

- Security: Prevents enumeration attacks (`/api/list/1`, `/api/list/2`, etc.)
- Privacy: Obscures database structure
- Flexibility: Can change internal IDs without breaking URLs

Uses the serde_hash library with the hashids algorithm:
```rust
use serde_hash::hashids::{encode_single, decode_single};

// Configuration (in serde_hash library)
// Salt: "pricing-app"
// Min length: 16 characters
// Alphabet: URL-safe characters

// Encoding (database → API)
let hashed_id = encode_single(123);
// Result: "x7J8kLm9N2pQr4Tv"

// Decoding (API → database)
let db_id = decode_single("x7J8kLm9N2pQr4Tv")?;
// Result: 123
```

```rust
// Endpoint receives hashed ID
#[get("/api/list/{id}")]
pub async fn get_location(
    pool: web::Data<MySqlPool>,
    id: web::Path<String>, // Hashed ID string
) -> Result<impl Responder> {
    // 1. Decode at entry point
    let db_id = decode_single(&id)?;
    // 2. Use raw ID for database operations
    let location = list_db::get_by_id(&pool, db_id).await?;
    // 3. Response struct re-encodes ID via serde
    Ok(web::Json(location))
}
```

```rust
#[derive(Serialize)]
pub struct Location {
    #[serde(serialize_with = "serde_hash::hashids::serialize")]
    pub id: u64, // Automatically encoded to string in JSON
    pub name: String,
    pub location: String,
}
// JSON response: {"id": "x7J8kLm9N2pQr4Tv", "name": "...", "location": "..."}
```

## Table Structures

```sql
CREATE TABLE IF NOT EXISTS locations (
    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    location VARCHAR(255) NOT NULL,
    po VARCHAR(255) NOT NULL,
    vendor VARCHAR(255) NOT NULL,
    department VARCHAR(255) NOT NULL,
    image VARCHAR(255) NOT NULL,
    post_date DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
    visible BOOLEAN DEFAULT TRUE NOT NULL
);
```

Location: `src-actix/list/list_db.rs:8`
Purpose: Master list of all databases/locations
Columns:

- `id` - Unique identifier
- `name` - Display name (e.g., "Scarborough Store")
- `location` - Physical location
- `po`, `vendor`, `department` - Default values for new records
- `image` - Icon filename
- `post_date` - Creation date
- `visible` - Show/hide from list
```sql
CREATE TABLE IF NOT EXISTS `{location_id}` (
    id INT AUTO_INCREMENT PRIMARY KEY,
    -- Dynamic columns (TEXT type) --
    `Column Name 1` TEXT DEFAULT NULL,
    `Column Name 2` TEXT DEFAULT NULL,
    -- ... more columns ...
    date DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
    last_modified_date DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL
);
```

Location: `src-actix/inventory/inventory_db.rs:33`
Purpose: Store inventory records for a specific location
Key Features:
- Table name is the location ID (numeric)
- Dynamic columns based on Excel/CSV import
- All data columns are TEXT (flexible schema)
- Standard timestamp columns
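Because the column set varies per location, INSERT statements have to be assembled from the stored headers rather than written out ahead of time. A minimal sketch of that assembly, assuming a hypothetical `build_insert_sql` helper (values would still be bound as parameters via `bind()`):

```rust
// Hypothetical builder: an INSERT statement for a per-location table whose
// columns come from the imported header list. Only the table/column names
// are interpolated; values stay as `?` placeholders for parameter binding.
fn build_insert_sql(location_id: u64, headers: &[&str]) -> String {
    let cols: Vec<String> = headers.iter().map(|h| format!("`{h}`")).collect();
    let placeholders: Vec<&str> = headers.iter().map(|_| "?").collect();
    format!(
        "INSERT INTO `{}` ({}) VALUES ({})",
        location_id,
        cols.join(", "),
        placeholders.join(", ")
    )
}

fn main() {
    let sql = build_insert_sql(123, &["Description", "Price"]);
    assert_eq!(sql, "INSERT INTO `123` (`Description`, `Price`) VALUES (?, ?)");
    println!("{sql}");
}
```

Keeping identifiers backtick-quoted and values parameterized mirrors the pattern used elsewhere in this codebase.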
```sql
CREATE TABLE IF NOT EXISTS `columns_{location_id}` (
    id INT AUTO_INCREMENT PRIMARY KEY,
    column_name VARCHAR(255) NOT NULL,
    column_type VARCHAR(50),
    visible BOOLEAN DEFAULT TRUE,
    searchable BOOLEAN DEFAULT TRUE,
    editable BOOLEAN DEFAULT TRUE,
    required BOOLEAN DEFAULT FALSE,
    width INT,
    order_index INT,
    date DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
    last_modified_date DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL
);
```

Location: `src-actix/inventory/columns/columns_db.rs`
Purpose: Configuration for each column in inventory table
Columns:

- `column_name` - Matches column name in inventory table
- `column_type` - Display type (text, number, date, etc.)
- `visible` - Show in table view
- `searchable` - Include in search
- `editable` - Allow editing
- `required` - Require value for new records
- `width` - Column width in UI
- `order_index` - Display order
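For illustration, a row of this table might map onto a Rust struct like the following sketch. The `ColumnConfig` type and its constructor are hypothetical; the field defaults simply mirror the SQL `DEFAULT` clauses above:

```rust
// Hypothetical Rust mirror of a `columns_{location_id}` row.
// Defaults follow the SQL DEFAULT clauses in the schema above.
#[derive(Debug, Clone)]
struct ColumnConfig {
    column_name: String,
    column_type: Option<String>,
    visible: bool,    // DEFAULT TRUE
    searchable: bool, // DEFAULT TRUE
    editable: bool,   // DEFAULT TRUE
    required: bool,   // DEFAULT FALSE
    width: Option<i32>,
    order_index: Option<i32>,
}

impl ColumnConfig {
    fn new(name: &str) -> Self {
        Self {
            column_name: name.to_string(),
            column_type: None,
            visible: true,
            searchable: true,
            editable: true,
            required: false,
            width: None,
            order_index: None,
        }
    }
}

fn main() {
    let cfg = ColumnConfig::new("Price");
    assert!(cfg.visible && cfg.searchable && cfg.editable && !cfg.required);
    println!("{cfg:?}");
}
```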
```sql
CREATE TABLE IF NOT EXISTS `options_{location_id}` (
    id INT AUTO_INCREMENT PRIMARY KEY,
    can_print_label BOOLEAN DEFAULT TRUE,
    allow_inventorying BOOLEAN DEFAULT FALSE,
    date DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
    last_modified_date DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL
);
```

Location: `src-actix/inventory/options/options_db.rs`
Purpose: Per-location configuration settings
```sql
CREATE TABLE IF NOT EXISTS history (
    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    location_id BIGINT UNSIGNED NOT NULL,
    record_id INT NOT NULL,
    user_id BIGINT UNSIGNED NOT NULL,
    action VARCHAR(50) NOT NULL,
    diff JSON,
    date DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL
);
```

Location: `src-actix/history/history_db.rs`
Purpose: Audit trail of all changes across all locations
Columns:

- `location_id` - Which database was modified
- `record_id` - Which record in that database
- `user_id` - Who made the change
- `action` - Type of change (created, updated, deleted)
- `diff` - JSON containing before/after values
- `date` - When it happened
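The `diff` column holds before/after values for each change. A std-only sketch of computing such a diff from two row snapshots (the `build_diff` helper is hypothetical; the real code stores the result as JSON):

```rust
use std::collections::BTreeMap;

// Hypothetical diff: for each field whose value changed, keep
// (before, after). The result would be serialized to JSON and
// written to the history table's `diff` column.
fn build_diff(
    before: &BTreeMap<String, String>,
    after: &BTreeMap<String, String>,
) -> BTreeMap<String, (Option<String>, String)> {
    let mut diff = BTreeMap::new();
    for (key, new_val) in after {
        if before.get(key) != Some(new_val) {
            diff.insert(key.clone(), (before.get(key).cloned(), new_val.clone()));
        }
    }
    diff
}

fn main() {
    let before = BTreeMap::from([
        ("price".to_string(), "4.99".to_string()),
        ("qty".to_string(), "10".to_string()),
    ]);
    let after = BTreeMap::from([
        ("price".to_string(), "3.99".to_string()),
        ("qty".to_string(), "10".to_string()),
    ]);
    let diff = build_diff(&before, &after);
    assert_eq!(diff.len(), 1); // only `price` changed
    println!("{:?}", diff);
}
```

Recording only changed fields keeps `diff` rows small even for wide inventory tables.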
## Query Patterns

```rust
sqlx::query("INSERT INTO locations (name, location, po, vendor, department, image) VALUES (?, ?, ?, ?, ?, ?)")
    .bind(&name)
    .bind(&location)
    .bind(&po)
    .bind(&vendor)
    .bind(&department)
    .bind(&image)
    .execute(&pool)
    .await?;
```

```rust
let location = sqlx::query_as::<_, Location>(
    "SELECT * FROM locations WHERE id = ? LIMIT 1"
)
.bind(id)
.fetch_one(&pool)
.await?;
```

```rust
let locations = sqlx::query_as::<_, Location>(
    "SELECT * FROM locations ORDER BY post_date DESC"
)
.fetch_all(&pool)
.await?;
```

```rust
sqlx::query("UPDATE locations SET name = ?, location = ? WHERE id = ?")
    .bind(&name)
    .bind(&location)
    .bind(id)
    .execute(&pool)
    .await?;
```

```rust
sqlx::query("DELETE FROM locations WHERE id = ?")
    .bind(id)
    .execute(&pool)
    .await?;
```

For inventory tables with variable columns:
```rust
// Building dynamic query
let mut query = format!("SELECT * FROM `{}`", location_id);
let mut conditions = Vec::new();
let mut params = Vec::new();

// Add search conditions
if let Some(search_term) = search {
    for column in columns {
        conditions.push(format!("`{}` LIKE ?", column));
        params.push(format!("%{}%", search_term));
    }
}
if !conditions.is_empty() {
    query.push_str(" WHERE ");
    query.push_str(&conditions.join(" OR "));
}

// Bind parameters dynamically
let mut sql_query = sqlx::query(&query);
for param in &params {
    sql_query = sql_query.bind(param);
}
let rows = sql_query.fetch_all(&pool).await?;
```

```rust
let mut tx = pool.begin().await?;
sqlx::query("INSERT INTO locations (...) VALUES (...)")
    .bind(...)
    .execute(&mut *tx)
    .await?;
sqlx::query("INSERT INTO history (...) VALUES (...)")
    .bind(...)
    .execute(&mut *tx)
    .await?;
tx.commit().await?;
```

```rust
use serde_json::Value;

// Storing JSON
let diff_json = serde_json::to_string(&diff)?;
sqlx::query("INSERT INTO history (diff) VALUES (?)")
    .bind(&diff_json)
    .execute(&pool)
    .await?;

// Reading JSON
let row = sqlx::query("SELECT diff FROM history WHERE id = ?")
    .bind(id)
    .fetch_one(&pool)
    .await?;
let diff: Value = row.try_get("diff")?;
```

## Migrations

Each module has an `initialize()` function that creates tables if they don't exist:
```rust
// Called during server startup
pub async fn initialize(pool: &MySqlPool) -> Result<()> {
    pool.execute(
        r#"CREATE TABLE IF NOT EXISTS locations (
            ...
        )"#
    ).await?;
    // Safe migration: Add column if missing
    let _ = pool.execute(
        "ALTER TABLE locations ADD COLUMN IF NOT EXISTS visible BOOLEAN DEFAULT TRUE"
    ).await;
    Ok(())
}
```

Location: Each module's `*_db.rs` file
For complex migrations, use the tools in the `tools/` directory:

```shell
# General migration
cargo run --example migrate_tool

# Column mapping migration
cargo run --example column_mapping_migration_tool

# Rename tables
cargo run --example table_name_migration_tool

# Options migration
cargo run --example options_migration_tool

# History migration
cargo run --example history_migration_tool

# Clean column names
cargo run --example clean_all_column_names_tool
```

See Backend Migration Tools for details.
- Always use `IF NOT EXISTS` for table creation
- Add columns with `ADD COLUMN IF NOT EXISTS` where the server supports it (MariaDB; stock MySQL, including 8.0, does not accept this clause)
- Otherwise, check `information_schema` for the column before adding it
- Never drop columns without explicit user request
- Test migrations on a copy of production data
- Back up the database before running migrations
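Where `ADD COLUMN IF NOT EXISTS` is unavailable, the existence check can be expressed as an `information_schema` query. A hedged sketch of building the two statements (both helper functions are hypothetical; the probe would be run first, and the ALTER executed only when the count is zero):

```rust
// Hypothetical builders for a check-then-add migration on servers
// without ADD COLUMN IF NOT EXISTS support.
fn column_exists_sql(table: &str, column: &str) -> String {
    format!(
        "SELECT COUNT(*) FROM information_schema.COLUMNS \
         WHERE TABLE_SCHEMA = DATABASE() \
         AND TABLE_NAME = '{table}' AND COLUMN_NAME = '{column}'"
    )
}

fn add_column_sql(table: &str, column: &str, definition: &str) -> String {
    format!("ALTER TABLE `{table}` ADD COLUMN `{column}` {definition}")
}

fn main() {
    let check = column_exists_sql("locations", "visible");
    assert!(check.contains("TABLE_NAME = 'locations'"));
    let alter = add_column_sql("locations", "visible", "BOOLEAN DEFAULT TRUE");
    assert_eq!(
        alter,
        "ALTER TABLE `locations` ADD COLUMN `visible` BOOLEAN DEFAULT TRUE"
    );
    println!("{check}\n{alter}");
}
```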
## Performance Considerations

```sql
-- Master tables
CREATE INDEX idx_locations_visible ON locations(visible);
CREATE INDEX idx_locations_post_date ON locations(post_date);

-- History table
CREATE INDEX idx_history_location ON history(location_id);
CREATE INDEX idx_history_record ON history(record_id);
CREATE INDEX idx_history_date ON history(date);

-- Per-location tables (if needed)
CREATE INDEX idx_inv_date ON `{location_id}`(date);
```

Good index candidates:

- Columns used frequently in WHERE clauses
- Foreign key columns
- Columns used in ORDER BY
- Large tables (>10,000 rows)

Caution: Too many indexes slow down INSERT/UPDATE operations.
```rust
// Always limit results when possible
sqlx::query("SELECT * FROM locations LIMIT 100")
    .fetch_all(&pool)
    .await?;
```

```rust
// For pagination, use LIMIT with OFFSET
sqlx::query(&format!("SELECT * FROM `{}` LIMIT ? OFFSET ?", location_id))
    .bind(limit)
    .bind(offset)
    .fetch_all(&pool)
    .await?;
// For large offsets, consider keyset pagination instead
```

```rust
// Specify only needed columns
sqlx::query("SELECT id, name, location FROM locations")
    .fetch_all(&pool)
    .await?;
```

```rust
// Adjust based on server capacity and load
MySqlPoolOptions::new()
    .max_connections(10) // More for high traffic
    .min_connections(2)  // Keep connections warm
    .acquire_timeout(Duration::from_secs(5))
    .connect(&url)
    .await?;
```

```rust
use stopwatch::Stopwatch;

let sw = Stopwatch::start_new();
let result = query.fetch_all(&pool).await?;
let elapsed = sw.elapsed_ms();
if elapsed > 1000 {
    log::warn!("Slow query: {}ms - {}", elapsed, query_string);
}
```

## Troubleshooting

**Table doesn't exist**

Cause: Module's `initialize()` not called during startup
Solution: Check `lib.rs` to ensure the module's `initialize()` is called:

```rust
inventory::inventory_db::initialize(&pool).await?;
```

**Unknown column**

Cause: Old database schema, migration not run

Solution: Check the module's `initialize()` for `ADD COLUMN IF NOT EXISTS`

**Connection pool timeouts**

Cause: Not enough connections, or connections leaking

Solution:

- Increase `max_connections` in pool options
- Ensure queries are properly awaited (not leaked)
- Use connection pooling wisely (don't hold connections)

**Slow queries**

Cause: Missing indexes or an inefficient query

Solution:

- Enable query logging: `RUST_LOG=sqlx=debug`
- Check the execution plan: `EXPLAIN SELECT ...`
- Add appropriate indexes
- Optimize WHERE conditions
- Always use prepared statements (SQLx does this automatically with `bind()`)
- Decode IDs at endpoint entry, use raw IDs internally
- Use transactions for multi-table operations
- Log slow queries for monitoring
- Handle errors gracefully (don't expose SQL errors to users)
- Use `CREATE TABLE IF NOT EXISTS` for idempotent migrations
- Keep dynamic table names in a consistent pattern
- Document schema changes in code comments
- Backend Modules - See how each module uses the database
- Backend API Design - Learn endpoint patterns
- Backend Migration Tools - Deep dive into migration utilities
Last Updated: 2025-11-04