Backend Database - Mardens-Inc/Pricing-App GitHub Wiki

This document explains the database design, schema conventions, and patterns used in the Pricing App.


Database Overview

Technology Stack

  • Database: MySQL 5.7+ or 8.0+
  • Driver: SQLx 0.8.2 (async, compile-time checked)
  • Connection Library: database-common-lib (custom)
  • Database Name: pricing

Key Design Decisions

  1. Dynamic Table Names: Each location gets its own inventory table
  2. No Foreign Keys: Between inventory tables (different tables per location)
  3. TEXT Columns: For flexible inventory data (no fixed schema)
  4. Shared Tables: locations (master list), history (audit trail)
  5. Auto-Timestamps: DEFAULT CURRENT_TIMESTAMP and ON UPDATE triggers

Connection Management

Configuration

Database credentials are stored in dev-server.json:

{
  "host": "localhost",
  "user": "db_user",
  "password": "db_password",
  "hash": "some_hash_value"
}

This file is gitignored for security.

Connection Pool

use database_common_lib::database_connection::{DatabaseConnectionData, create_pool};
use sqlx::MySqlPool;

// Create pool from configuration
let data = DatabaseConnectionData::from_file("dev-server.json")?;
let pool = create_pool(&data).await?;

// Pool is shared across all workers via web::Data
HttpServer::new(move || {
    App::new()
        .app_data(web::Data::new(pool.clone()))
})

Pool Configuration

The connection pool is configured in lib.rs:

MySqlPoolOptions::new()
    .max_connections(5)  // Adjust based on load
    .connect(&connection_string)
    .await?

Table Naming Strategy

Master Tables (Shared)

These tables exist once in the database:

| Table Name | Purpose | Example |
|------------|---------|---------|
| `locations` | Master list of all databases/locations | Row: `{id: 1, name: "Scarborough", ...}` |
| `history` | Audit trail across all locations | Row: `{id: 1, location_id: 5, ...}` |

Dynamic Tables (Per-Location)

Each location gets its own set of tables:

| Pattern | Purpose | Example |
|---------|---------|---------|
| `{location_id}` | Inventory records | `123` (if location_id is 123) |
| `options_{location_id}` | Database options | `options_123` |
| `columns_{location_id}` | Column configuration | `columns_123` |
| `substitutions_{location_id}` | Item substitutions | `substitutions_123` |
| `print_options_{location_id}` | Print settings | `print_options_123` |
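
Because every per-location table follows the same pattern, a small helper can centralize name construction. This is a sketch with a hypothetical function name; the codebase may format names inline instead:

```rust
/// Build the per-location table name for a given prefix.
/// An empty prefix yields the bare numeric inventory table name.
fn location_table(prefix: &str, location_id: u64) -> String {
    if prefix.is_empty() {
        location_id.to_string()
    } else {
        format!("{prefix}_{location_id}")
    }
}

fn main() {
    assert_eq!(location_table("", 123), "123");
    assert_eq!(location_table("options", 123), "options_123");
    assert_eq!(location_table("print_options", 123), "print_options_123");
    println!("ok");
}
```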

Why Dynamic Tables?

Advantages:

  • Data isolation per location
  • Independent schema evolution
  • Easier backups (per location)
  • Better query performance (smaller tables)
  • Flexible column sets per location

Trade-offs:

  • Cannot use foreign keys between inventory tables
  • Migration scripts are more complex
  • Must track table names dynamically

Creating Dynamic Tables

// inventory_db.rs - Creating inventory table
pub async fn create(id: u64, headers: Vec<String>, data: &DatabaseConnectionData) -> Result<()> {
    let pool = create_pool(data).await?;

    let mut query = format!(
        "CREATE TABLE IF NOT EXISTS `{}` (id INT AUTO_INCREMENT PRIMARY KEY, ",
        id  // Table name is the location ID
    );

    for header in headers {
        query.push_str(&format!("`{}` TEXT DEFAULT NULL, ", header));
    }

    query.push_str(
        "`date` DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL, \
         `last_modified_date` DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL);"
    );

    pool.execute(query.as_str()).await?;
    Ok(())
}
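
Because identifiers (table and column names) cannot be bound as SQL parameters, the header names above are interpolated directly into the `CREATE TABLE` string — so headers coming from an uploaded spreadsheet should be sanitized first. A minimal std-only sketch (hypothetical helper, not taken from the repo):

```rust
/// Keep only characters that are safe inside a backtick-quoted MySQL
/// identifier: ASCII alphanumerics, spaces, and underscores.
/// Backticks, semicolons, and control characters are stripped.
fn sanitize_identifier(raw: &str) -> String {
    raw.chars()
        .filter(|c| c.is_ascii_alphanumeric() || *c == ' ' || *c == '_')
        .collect()
}

fn main() {
    assert_eq!(sanitize_identifier("Item Price"), "Item Price");
    assert_eq!(sanitize_identifier("bad`name; DROP"), "badname DROP");
    println!("ok");
}
```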

Schema Conventions

Primary Keys

id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY
-- or
id INT AUTO_INCREMENT PRIMARY KEY
  • Always named id
  • Always auto-increment
  • BIGINT for master tables, INT for per-location tables

Timestamps

-- Creation timestamp
date DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL

-- or
post_date DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL

-- Update timestamp
last_modified_date DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL
  • date or post_date for creation time
  • last_modified_date for last update
  • DEFAULT CURRENT_TIMESTAMP sets on creation
  • ON UPDATE CURRENT_TIMESTAMP updates automatically

Column Types

-- Text data (variable length)
name VARCHAR(255) NOT NULL
description TEXT DEFAULT NULL

-- Numeric data
id BIGINT UNSIGNED
price DECIMAL(10,2)
quantity INT

-- Boolean flags
visible BOOLEAN DEFAULT TRUE NOT NULL
active BOOLEAN DEFAULT FALSE

-- Dates
date DATETIME DEFAULT CURRENT_TIMESTAMP

Naming Conventions

  • Table names: lowercase, underscore_separated (or numeric for inventory)
  • Column names: lowercase, underscore_separated
  • Boolean columns: visible, active, is_deleted (descriptive names)
  • Foreign keys: {table}_id (e.g., location_id)

ID Hashing System

Why Hash IDs?

  1. Security: Prevents enumeration attacks (/api/list/1, /api/list/2, etc.)
  2. Privacy: Obscures database structure
  3. Flexibility: Can change internal IDs without breaking URLs

Implementation

Uses the serde_hash library, which implements the Hashids algorithm:

use serde_hash::hashids::{encode_single, decode_single};

// Configuration (in serde_hash library)
// Salt: "pricing-app"
// Min length: 16 characters
// Alphabet: URL-safe characters

// Encoding (database → API)
let hashed_id = encode_single(123);
// Result: "x7J8kLm9N2pQr4Tv"

// Decoding (API → database)
let db_id = decode_single("x7J8kLm9N2pQr4Tv")?;
// Result: 123

Usage Pattern

// Endpoint receives hashed ID
#[get("/api/list/{id}")]
pub async fn get_location(
    pool: web::Data<MySqlPool>,
    id: web::Path<String>,  // Hashed ID string
) -> Result<impl Responder> {
    // 1. Decode at entry point
    let db_id = decode_single(&id)?;

    // 2. Use raw ID for database operations
    let location = list_db::get_by_id(&pool, db_id).await?;

    // 3. Response struct re-encodes ID via serde
    Ok(web::Json(location))
}

Automatic Encoding in Structs

#[derive(Serialize)]
pub struct Location {
    #[serde(serialize_with = "serde_hash::hashids::serialize")]
    pub id: u64,  // Automatically encoded to string in JSON

    pub name: String,
    pub location: String,
}

// JSON response: {"id": "x7J8kLm9N2pQr4Tv", "name": "...", "location": "..."}

Table Structures

Locations Table (Master List)

CREATE TABLE IF NOT EXISTS locations (
    id         BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name       VARCHAR(255)                       NOT NULL,
    location   VARCHAR(255)                       NOT NULL,
    po         VARCHAR(255)                       NOT NULL,
    vendor     VARCHAR(255)                       NOT NULL,
    department VARCHAR(255)                       NOT NULL,
    image      VARCHAR(255)                       NOT NULL,
    post_date  DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
    visible    BOOLEAN DEFAULT TRUE               NOT NULL
);

Location: src-actix/list/list_db.rs:8

Purpose: Master list of all databases/locations

Columns:

  • id - Unique identifier
  • name - Display name (e.g., "Scarborough Store")
  • location - Physical location
  • po, vendor, department - Default values for new records
  • image - Icon filename
  • post_date - Creation date
  • visible - Show/hide from list

Inventory Table (Per-Location)

CREATE TABLE IF NOT EXISTS `{location_id}` (
    id                   INT AUTO_INCREMENT PRIMARY KEY,
    -- Dynamic columns (TEXT type) --
    `Column Name 1`      TEXT DEFAULT NULL,
    `Column Name 2`      TEXT DEFAULT NULL,
    -- ... more columns ...
    date                 DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
    last_modified_date   DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL
);

Location: src-actix/inventory/inventory_db.rs:33

Purpose: Store inventory records for a specific location

Key Features:

  • Table name is the location ID (numeric)
  • Dynamic columns based on Excel/CSV import
  • All data columns are TEXT (flexible schema)
  • Standard timestamp columns

Columns Table (Per-Location)

CREATE TABLE IF NOT EXISTS `columns_{location_id}` (
    id                    INT AUTO_INCREMENT PRIMARY KEY,
    column_name           VARCHAR(255) NOT NULL,
    column_type           VARCHAR(50),
    visible               BOOLEAN DEFAULT TRUE,
    searchable            BOOLEAN DEFAULT TRUE,
    editable              BOOLEAN DEFAULT TRUE,
    required              BOOLEAN DEFAULT FALSE,
    width                 INT,
    order_index           INT,
    date                  DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
    last_modified_date    DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL
);

Location: src-actix/inventory/columns/columns_db.rs

Purpose: Configuration for each column in inventory table

Columns:

  • column_name - Matches column name in inventory table
  • column_type - Display type (text, number, date, etc.)
  • visible - Show in table view
  • searchable - Include in search
  • editable - Allow editing
  • required - Require value for new records
  • width - Column width in UI
  • order_index - Display order
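
In handler code a row of this table typically maps onto a plain struct. A sketch of that shape (illustrative only; real code would likely derive `sqlx::FromRow` and the serde traits):

```rust
/// Mirror of one row in `columns_{location_id}` (illustrative shape).
#[derive(Debug, Clone, PartialEq)]
struct ColumnConfig {
    id: i32,
    column_name: String,
    column_type: Option<String>,
    visible: bool,
    searchable: bool,
    editable: bool,
    required: bool,
    width: Option<i32>,
    order_index: Option<i32>,
}

fn main() {
    let col = ColumnConfig {
        id: 1,
        column_name: "price".into(),
        column_type: Some("number".into()),
        visible: true,
        searchable: true,
        editable: true,
        required: false,
        width: None,
        order_index: Some(0),
    };
    assert!(col.visible && !col.required);
    println!("ok");
}
```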

Options Table (Per-Location)

CREATE TABLE IF NOT EXISTS `options_{location_id}` (
    id                    INT AUTO_INCREMENT PRIMARY KEY,
    can_print_label       BOOLEAN DEFAULT TRUE,
    allow_inventorying    BOOLEAN DEFAULT FALSE,
    date                  DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
    last_modified_date    DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL
);

Location: src-actix/inventory/options/options_db.rs

Purpose: Per-location configuration settings

History Table (Shared)

CREATE TABLE IF NOT EXISTS history (
    id                 BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    location_id        BIGINT UNSIGNED NOT NULL,
    record_id          INT NOT NULL,
    user_id            BIGINT UNSIGNED NOT NULL,
    action             VARCHAR(50) NOT NULL,
    diff               JSON,
    date               DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL
);

Location: src-actix/history/history_db.rs

Purpose: Audit trail of all changes across all locations

Columns:

  • location_id - Which database was modified
  • record_id - Which record in that database
  • user_id - Who made the change
  • action - Type of change (created, updated, deleted)
  • diff - JSON containing before/after values
  • date - When it happened

Query Patterns

Basic CRUD Operations

Create

sqlx::query("INSERT INTO locations (name, location, po, vendor, department, image) VALUES (?, ?, ?, ?, ?, ?)")
    .bind(&name)
    .bind(&location)
    .bind(&po)
    .bind(&vendor)
    .bind(&department)
    .bind(&image)
    .execute(&pool)
    .await?;

Read (Single)

let location = sqlx::query_as::<_, Location>(
    "SELECT * FROM locations WHERE id = ? LIMIT 1"
)
.bind(id)
.fetch_one(&pool)
.await?;

Read (Multiple)

let locations = sqlx::query_as::<_, Location>(
    "SELECT * FROM locations ORDER BY post_date DESC"
)
.fetch_all(&pool)
.await?;

Update

sqlx::query("UPDATE locations SET name = ?, location = ? WHERE id = ?")
    .bind(&name)
    .bind(&location)
    .bind(id)
    .execute(&pool)
    .await?;

Delete

sqlx::query("DELETE FROM locations WHERE id = ?")
    .bind(id)
    .execute(&pool)
    .await?;

Dynamic Queries

For inventory tables with variable columns:

// Building dynamic query
let mut query = format!("SELECT * FROM `{}`", location_id);
let mut conditions = Vec::new();
let mut params = Vec::new();

// Add search conditions
if let Some(search_term) = search {
    for column in columns {
        conditions.push(format!("`{}` LIKE ?", column));
        params.push(format!("%{}%", search_term));
    }
}

if !conditions.is_empty() {
    query.push_str(" WHERE ");
    query.push_str(&conditions.join(" OR "));
}

// Bind parameters dynamically
let mut sql_query = sqlx::query(&query);
for param in &params {
    sql_query = sql_query.bind(param);
}

let rows = sql_query.fetch_all(&pool).await?;
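
The string-building half of this pattern becomes unit-testable when extracted into a pure function. A sketch with a hypothetical helper name (the repo builds the string inline):

```rust
/// Build a search query over the given columns; returns the SQL text.
/// One `?` placeholder (and thus one bound parameter) per column.
fn build_search_query(location_id: u64, columns: &[&str]) -> String {
    let mut query = format!("SELECT * FROM `{location_id}`");
    let conditions: Vec<String> = columns
        .iter()
        .map(|c| format!("`{c}` LIKE ?"))
        .collect();
    if !conditions.is_empty() {
        query.push_str(" WHERE ");
        query.push_str(&conditions.join(" OR "));
    }
    query
}

fn main() {
    let q = build_search_query(123, &["name", "price"]);
    assert_eq!(q, "SELECT * FROM `123` WHERE `name` LIKE ? OR `price` LIKE ?");
    assert_eq!(build_search_query(123, &[]), "SELECT * FROM `123`");
    println!("ok");
}
```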

Transactions

let mut tx = pool.begin().await?;

sqlx::query("INSERT INTO locations (...) VALUES (...)")
    .bind(...)
    .execute(&mut *tx)
    .await?;

sqlx::query("INSERT INTO history (...) VALUES (...)")
    .bind(...)
    .execute(&mut *tx)
    .await?;

tx.commit().await?;

Working with JSON Columns

use serde_json::Value;

// Storing JSON
let diff_json = serde_json::to_string(&diff)?;
sqlx::query("INSERT INTO history (diff) VALUES (?)")
    .bind(&diff_json)
    .execute(&pool)
    .await?;

// Reading JSON
let row = sqlx::query("SELECT diff FROM history WHERE id = ?")
    .bind(id)
    .fetch_one(&pool)
    .await?;

let diff: Value = row.try_get("diff")?;
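
The schema does not fix the shape of `diff`; one common convention is a map of field name to old/new values. A std-only sketch of that assumed shape (real code should use serde_json, since this hand-rolled version does no escaping):

```rust
/// Build a minimal JSON diff string from (field, old, new) triples.
/// Illustration only: values are not escaped, so any real payload
/// should be serialized with serde_json instead.
fn build_diff(changes: &[(&str, &str, &str)]) -> String {
    let fields: Vec<String> = changes
        .iter()
        .map(|(field, old, new)| {
            format!("\"{field}\":{{\"old\":\"{old}\",\"new\":\"{new}\"}}")
        })
        .collect();
    format!("{{{}}}", fields.join(","))
}

fn main() {
    let diff = build_diff(&[("price", "4.99", "3.99")]);
    assert_eq!(diff, "{\"price\":{\"old\":\"4.99\",\"new\":\"3.99\"}}");
    println!("ok");
}
```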

Migrations

Initialization Pattern

Each module has an initialize() function that creates tables if they don't exist:

// Called during server startup
pub async fn initialize(pool: &MySqlPool) -> Result<()> {
    pool.execute(
        r#"CREATE TABLE IF NOT EXISTS locations (
            ...
        )"#
    ).await?;

    // Migration attempt: note that `IF NOT EXISTS` on ADD COLUMN is
    // MariaDB syntax. The result is ignored (`let _ =`) so that a
    // duplicate-column or unsupported-syntax error doesn't abort startup.
    let _ = pool.execute(
        "ALTER TABLE locations ADD COLUMN IF NOT EXISTS visible BOOLEAN DEFAULT TRUE"
    ).await;

    Ok(())
}

Location: Each module's *_db.rs file

Migration Tools

For complex migrations, use tools in tools/ directory:

# General migration
cargo run --example migrate_tool

# Column mapping migration
cargo run --example column_mapping_migration_tool

# Rename tables
cargo run --example table_name_migration_tool

# Options migration
cargo run --example options_migration_tool

# History migration
cargo run --example history_migration_tool

# Clean column names
cargo run --example clean_all_column_names_tool

See Backend Migration Tools for details.

Safe Migration Practices

  1. Always use IF NOT EXISTS for table creation
  2. Note that ADD COLUMN IF NOT EXISTS is MariaDB syntax; stock MySQL (5.7 and 8.0) does not accept it
  3. On MySQL, check information_schema.COLUMNS before adding a column, or run the ALTER and ignore a duplicate-column error
  4. Never drop columns without explicit user request
  5. Test migrations on copy of production data
  6. Back up database before running migrations
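
The column-existence check mentioned above can be expressed as a query against information_schema. A sketch with a hypothetical helper (shown with interpolation for readability; binding `?` parameters is preferable in real code):

```rust
/// Build a query that counts matching columns in information_schema.
/// Hypothetical helper: the names are embedded as string literals
/// purely for illustration; production code should bind them instead
/// (`WHERE table_name = ? AND column_name = ?`).
fn column_exists_query(table: &str, column: &str) -> String {
    format!(
        "SELECT COUNT(*) FROM information_schema.COLUMNS \
         WHERE table_schema = DATABASE() \
         AND table_name = '{table}' AND column_name = '{column}'"
    )
}

fn main() {
    let q = column_exists_query("locations", "visible");
    assert!(q.contains("table_name = 'locations'"));
    assert!(q.contains("column_name = 'visible'"));
    println!("ok");
}
```

A count of zero means the `ALTER TABLE ... ADD COLUMN` is safe to run.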

Performance Considerations

Indexing

Recommended Indexes

-- Master tables
CREATE INDEX idx_locations_visible ON locations(visible);
CREATE INDEX idx_locations_post_date ON locations(post_date);

-- History table
CREATE INDEX idx_history_location ON history(location_id);
CREATE INDEX idx_history_record ON history(record_id);
CREATE INDEX idx_history_date ON history(date);

-- Per-location tables (if needed)
CREATE INDEX idx_inv_date ON `{location_id}`(date);

When to Add Indexes

  • Columns used in WHERE clauses frequently
  • Foreign key columns
  • Columns used in ORDER BY
  • Large tables (>10,000 rows)

Caution: Too many indexes slow down INSERT/UPDATE operations.

Query Optimization

Use LIMIT

// Always limit results when possible
sqlx::query("SELECT * FROM locations LIMIT 100")
    .fetch_all(&pool)
    .await?;

Use OFFSET Efficiently

// For pagination, use LIMIT with OFFSET; the table name must be
// interpolated into the string before it reaches sqlx
let query = format!("SELECT * FROM `{}` LIMIT ? OFFSET ?", location_id);
sqlx::query(&query)
    .bind(limit)
    .bind(offset)
    .fetch_all(&pool)
    .await?;

// For large offsets, consider keyset pagination instead
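
The keyset alternative mentioned in the comment replaces OFFSET with a WHERE clause on the last-seen primary key, so the database can seek directly instead of scanning past every skipped row. A sketch with a hypothetical helper:

```rust
/// Build a keyset-pagination query: fetch rows after the last-seen id.
/// Bind the previous page's final id and the page size at call time.
fn keyset_page_query(location_id: u64) -> String {
    format!(
        "SELECT * FROM `{location_id}` WHERE id > ? ORDER BY id LIMIT ?"
    )
}

fn main() {
    assert_eq!(
        keyset_page_query(123),
        "SELECT * FROM `123` WHERE id > ? ORDER BY id LIMIT ?"
    );
    println!("ok");
}
```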

Avoid SELECT *

// Specify only needed columns
sqlx::query("SELECT id, name, location FROM locations")
    .fetch_all(&pool)
    .await?;

Connection Pool Tuning

// Adjust based on server capacity and load
MySqlPoolOptions::new()
    .max_connections(10)        // More for high traffic
    .min_connections(2)         // Keep connections warm
    .acquire_timeout(Duration::from_secs(5))
    .connect(&url)
    .await?;

Monitoring Slow Queries

use stopwatch::Stopwatch;

let sw = Stopwatch::start_new();
let result = query.fetch_all(&pool).await?;
let elapsed = sw.elapsed_ms();

if elapsed > 1000 {
    log::warn!("Slow query: {}ms - {}", elapsed, query_string);
}
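
If pulling in the stopwatch crate is undesirable, std::time::Instant provides the same measurement with no extra dependency (a sketch; `timed` is a hypothetical helper):

```rust
use std::time::Instant;

/// Run a closure and return its result plus elapsed milliseconds.
fn timed<T>(f: impl FnOnce() -> T) -> (T, u128) {
    let start = Instant::now();
    let result = f();
    (result, start.elapsed().as_millis())
}

fn main() {
    // Stand-in for a query: any work to be measured
    let (sum, ms) = timed(|| (0..1000u32).sum::<u32>());
    if ms > 1000 {
        eprintln!("Slow query: {ms}ms");
    }
    assert_eq!(sum, 499_500);
}
```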

Common Issues & Solutions

Issue: Table doesn't exist

Cause: Module's initialize() not called during startup

Solution: Check lib.rs to ensure module's initialize() is called:

inventory::inventory_db::initialize(&pool).await?;

Issue: Column doesn't exist

Cause: Old database schema, migration not run

Solution: Check module's initialize() for ADD COLUMN IF NOT EXISTS

Issue: Connection pool exhausted

Cause: Not enough connections or connections leaking

Solution:

  1. Increase max_connections in pool options
  2. Ensure queries are properly awaited (not leaked)
  3. Use connection pooling wisely (don't hold connections)

Issue: Slow queries

Cause: Missing indexes or inefficient query

Solution:

  1. Enable query logging: RUST_LOG=sqlx=debug
  2. Check execution plan: EXPLAIN SELECT ...
  3. Add appropriate indexes
  4. Optimize WHERE conditions

Best Practices

  1. Always use prepared statements (SQLx does this automatically with bind())
  2. Decode IDs at endpoint entry, use raw IDs internally
  3. Use transactions for multi-table operations
  4. Log slow queries for monitoring
  5. Handle errors gracefully (don't expose SQL errors to users)
  6. Use CREATE TABLE IF NOT EXISTS for idempotent migrations
  7. Keep dynamic table names in a consistent pattern
  8. Document schema changes in code comments


Last Updated: 2025-11-04
