Backend Overview - Mardens-Inc/Pricing-App GitHub Wiki

The Pricing App backend is built with Rust using the Actix-web framework. It provides a RESTful API with real-time capabilities, handles file processing, manages database operations, and integrates with external POS systems.

Architecture

High-Level Architecture

┌──────────────────────────────────────────────────────────┐
│                    HTTP Requests                         │
└─────────────────────┬────────────────────────────────────┘
                      │
┌─────────────────────┴────────────────────────────────────┐
│              Actix-web Server (Port 1421)                │
│  ┌────────────────────────────────────────────────────┐  │
│  │       Authentication Middleware (Optional)         │  │
│  └────────────────────┬───────────────────────────────┘  │
│                       │                                  │
│  ┌────────────────────┴───────────────────────────────┐  │
│  │              Endpoint Handlers                     │  │
│  │  - Decode hashed IDs                               │  │
│  │  - Validate request data                           │  │
│  │  - Call database operations                        │  │
│  │  - Encode response IDs                             │  │
│  └────────────────────┬───────────────────────────────┘  │
│                       │                                  │
│  ┌────────────────────┴───────────────────────────────┐  │
│  │          Database Operations Layer                 │  │
│  │  - Execute SQL queries                             │  │
│  │  - Handle transactions                             │  │
│  │  - Return structured data                          │  │
│  └────────────────────┬───────────────────────────────┘  │
└───────────────────────┼──────────────────────────────────┘
                        │
┌───────────────────────┴──────────────────────────────────┐
│                  MySQL Database                          │
│  - locations (master list)                               │
│  - {location_id} (inventory tables)                      │
│  - options_{location_id}                                 │
│  - history (shared audit trail)                          │
└──────────────────────────────────────────────────────────┘

         ┌─────────────────────────┐
         │  Broadcast Channels     │
         │  (SSE for real-time)    │
         └─────────────────────────┘

Core Concepts

  1. Modular Design: Features are isolated in self-contained modules
  2. Async/Await: All I/O operations are non-blocking
  3. Type Safety: Rust's type system prevents common bugs
  4. OpenAPI-First: API documentation generated from code
  5. Real-time Updates: Server-Sent Events for live data

Project Structure

src-actix/
├── main.rs                    # Binary entry point (starts server)
├── lib.rs                     # Library entry (exports configure function)
├── build.rs                   # Build script (creates directories)
│
├── constants.rs               # Global configuration
├── api_doc.rs                 # OpenAPI documentation config
├── mysql_row_wrapper.rs       # Database row serialization helper
│
├── icons_endpoint.rs          # Icon upload/management
├── server_information_endpoint.rs  # Version info endpoint
│
├── inventory/                 # Core inventory management
│   ├── mod.rs
│   ├── inventory_db.rs        # Database operations
│   ├── inventory_endpoint.rs  # HTTP handlers
│   ├── columns/               # Column configuration submodule
│   ├── options/               # Database options submodule
│   ├── substitutions/         # Item substitutions submodule
│   └── foreign_departments/   # Department management submodule
│
├── sheets/                    # Excel/CSV processing
│   ├── mod.rs
│   ├── spreadsheet_endpoint.rs # Upload/preview/apply endpoints
│   ├── csv.rs                 # CSV parser
│   ├── excel.rs               # Excel parser
│   └── templates/             # Predefined mappings
│
├── list/                      # Location/database management
│   ├── mod.rs
│   ├── list_data.rs           # Data structures
│   ├── list_db.rs             # Database operations
│   └── list_endpoint.rs       # HTTP handlers
│
├── history/                   # Change tracking
│   ├── mod.rs
│   ├── history_data.rs        # Data structures
│   ├── history_db.rs          # Database operations
│   └── history_endpoint.rs    # HTTP handlers
│
└── pos_system/                # POS integration
    ├── mod.rs
    ├── pos_data.rs            # Data structures
    ├── pos_endpoint.rs        # HTTP handlers
    └── ftp_data.rs            # FTP client wrapper

Module Pattern

Every feature module follows a consistent structure. This makes the codebase predictable and maintainable.

Standard Module Structure

feature/
├── mod.rs               # Public interface
├── feature_data.rs      # DTOs and structs
├── feature_db.rs        # Database operations
└── feature_endpoint.rs  # HTTP handlers

File Responsibilities

mod.rs - Public Interface

  • Exports public types and functions
  • Defines module configuration
  • Example:
    pub mod list_data;
    pub mod list_db;
    pub mod list_endpoint;
    
    pub use list_data::*;
    pub use list_endpoint::configure;

*_data.rs - Data Structures

  • DTOs (Data Transfer Objects)
  • Request/response structures
  • Serialization/deserialization logic
  • Example:
    #[derive(Debug, Serialize, Deserialize)]
    pub struct LocationRequest {
        pub name: String,
        pub location: String,
        pub icon: Option<String>,
    }

*_db.rs - Database Operations

  • initialize() function to create tables
  • CRUD operations
  • Complex queries
  • Transaction handling
  • Example:
    pub async fn initialize(pool: &MySqlPool) -> Result<()> {
        sqlx::query("CREATE TABLE IF NOT EXISTS locations (...)")
            .execute(pool)
            .await?;
        Ok(())
    }
    
    pub async fn get_all(pool: &MySqlPool) -> Result<Vec<Location>> {
        // Query implementation
    }
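
  A minimal sketch of what get_all might look like, assuming the project's Location row struct derives sqlx::FromRow (the real query lives in the module's *_db.rs):
    use sqlx::MySqlPool;

    pub async fn get_all(pool: &MySqlPool) -> anyhow::Result<Vec<Location>> {
        // Assumes #[derive(sqlx::FromRow)] on Location so rows map automatically.
        let locations = sqlx::query_as::<_, Location>("SELECT * FROM locations")
            .fetch_all(pool)
            .await?;
        Ok(locations)
    }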

*_endpoint.rs - HTTP Handlers

  • Route definitions
  • Request validation
  • ID encoding/decoding
  • OpenAPI documentation
  • Example:
    #[utoipa::path(
        get,
        path = "/api/list",
        responses(
            (status = 200, description = "List all locations")
        ),
        tag = "Locations"
    )]
    pub async fn get_all_locations(
        pool: web::Data<MySqlPool>
    ) -> Result<impl Responder> {
        let locations = list_db::get_all(&pool).await?;
        Ok(web::Json(locations))
    }
    
    pub fn configure(cfg: &mut web::ServiceConfig) {
        cfg.service(
            web::scope("/api/list")
                .route("", web::get().to(get_all_locations))
        );
    }

Server Initialization

Startup Flow

The server initialization happens in lib.rs:

// 1. Initialize logging
pretty_env_logger::init();

// 2. Create database connection pool
let pool = create_connection_pool().await?;

// 3. Initialize all module databases
inventory::columns::columns_db::initialize(&pool).await?;
list::list_db::initialize(&pool).await?;
history::history_db::initialize(&pool).await?;
// ... other modules

// 4. Setup SSE broadcast channels
static BROADCAST_CHANNELS: OnceLock<DashMap<String, Sender<String>>> = OnceLock::new();
BROADCAST_CHANNELS.get_or_init(|| DashMap::new());

// 5. Create HTTP server
HttpServer::new(move || {
    App::new()
        .app_data(web::Data::new(pool.clone()))
        .configure(inventory::configure)
        .configure(sheets::configure)
        .configure(list::configure)
        // ... other modules
        .service(SwaggerUi::new("/docs/swagger/{_:.*}"))
})
.bind(("0.0.0.0", PORT))?
.workers(4)
.run()
.await

Key Initialization Steps

  1. Logging: Uses pretty_env_logger for leveled, readable log output
  2. Database Pool: Shared connection pool across all workers
  3. Table Creation: Each module creates its tables if they do not already exist
  4. Broadcast Channels: Setup for real-time updates
  5. Route Configuration: Each module registers its endpoints
  6. Static Files: Serves the frontend from embedded assets or the filesystem
  7. API Docs: Swagger UI and RapiDoc mounted at /docs/

Request Flow

Typical Request Lifecycle

1. HTTP Request arrives
   ↓
2. Actix-web routing matches endpoint
   ↓
3. [Optional] Authentication middleware validates user
   ↓
4. Endpoint handler function called
   ├─ Decode hashed ID (if present)
   ├─ Extract and validate request data
   ├─ Get database pool from app_data
   └─ Call database function
   ↓
5. Database operation executes
   ├─ SQL query runs against MySQL
   ├─ Results deserialized to structs
   └─ Returns Result<T, Error>
   ↓
6. Endpoint processes result
   ├─ Encode IDs for response
   ├─ [Optional] Broadcast SSE update
   └─ Serialize response to JSON
   ↓
7. HTTP Response sent to client

Example: Get Location by ID

// Request: GET /api/list/{id}

#[utoipa::path(
    get,
    path = "/api/list/{id}",
    params(("id" = String, Path, description = "Location ID")),
    responses((status = 200, description = "Location found"))
)]
pub async fn get_location(
    pool: web::Data<MySqlPool>,
    id: web::Path<String>,
) -> Result<impl Responder> {
    // 1. Decode hashed ID
    let decoded_id = serde_hash::hashids::decode_single(&id)?;

    // 2. Call database function
    let location = list_db::get_by_id(&pool, decoded_id).await?;

    // 3. Return JSON response (IDs already encoded in struct)
    Ok(web::Json(location))
}

Key Design Decisions

1. Why Actix-web?

Chosen: Actix-web 4.9
Alternatives: Rocket, Axum, Warp

Reasons:

  • Excellent performance (one of the fastest Rust web frameworks)
  • Mature ecosystem with good middleware support
  • Built-in support for async/await
  • Extractors make request handling clean
  • Active development and maintenance

2. Why SQLx instead of ORM?

Chosen: SQLx 0.8.2
Alternatives: Diesel, SeaORM

Reasons:

  • Compile-time SQL verification
  • Async-first design (works great with Actix)
  • No runtime overhead from an ORM layer
  • Direct control over SQL for complex queries
  • Dynamic table names are easier (needed for per-location tables)

3. Why ID Hashing?

Pattern: User-facing IDs are hashed strings (e.g., "x7J8kLm9N2pQr4Tv")

Reasons:

  • Security: Prevents ID enumeration attacks
  • Privacy: Obscures internal database structure
  • Flexibility: Can change internal IDs without breaking API

Implementation:

use serde_hash::hashids::{encode_single, decode_single};

// Encoding (database → API)
let hashed_id = encode_single(database_id); // u64 → String

// Decoding (API → database)
let database_id = decode_single(&hashed_id)?; // String → u64
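
A small illustrative sketch of where encoding happens on the way out of the API (the LocationResponse type and to_response helper below are hypothetical, not the project's actual types):

use serde::Serialize;
use serde_hash::hashids::encode_single;

// Hypothetical response DTO: the user-facing `id` is the hashed string,
// never the raw database integer.
#[derive(Serialize)]
struct LocationResponse {
    id: String,
    name: String,
}

fn to_response(database_id: u64, name: String) -> LocationResponse {
    LocationResponse {
        id: encode_single(database_id), // u64 -> opaque string
        name,
    }
}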

4. Why Dynamic Table Names?

Pattern: Each location gets its own inventory table named {location_id}

Reasons:

  • Data isolation per location
  • Independent schema evolution per location
  • Easier data management and backups
  • Better query performance (smaller tables)

Trade-offs:

  • Cannot use foreign keys between inventory tables
  • Migration scripts are more complex
  • Must track table names dynamically
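
A rough sketch of what a query against a per-location table can look like (the function name and column handling are illustrative; because table names cannot be bound as SQL parameters, the already-decoded location ID is formatted directly into the statement):

use sqlx::mysql::{MySqlPool, MySqlRow};

// Hypothetical example: fetch all rows from one location's inventory table.
// `location_id` must already be decoded/validated, since the table name is
// interpolated into the SQL string rather than bound as a parameter.
async fn get_inventory_rows(pool: &MySqlPool, location_id: u64) -> anyhow::Result<Vec<MySqlRow>> {
    let sql = format!("SELECT * FROM `{}`", location_id);
    let rows = sqlx::query(&sql).fetch_all(pool).await?;
    Ok(rows)
}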

5. Why Server-Sent Events?

Pattern: SSE for real-time updates instead of WebSockets

Reasons:

  • Simpler than WebSockets (one-way only)
  • Auto-reconnection handled by browser
  • Works through HTTP proxies
  • Sufficient for our use case (server → client only)

Implementation:

// Broadcasting updates
broadcast_inventory_update(
    &inventory_id,
    "record_updated",
    &record_id,
    &json!({"data": value})
);

// Client listens via EventSource API
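
The helper itself is not shown on this page; below is a minimal sketch of how it could be wired to the broadcast channels created during initialization (assumptions: tokio broadcast channels keyed by inventory ID in the DashMap declared in lib.rs):

use std::sync::OnceLock;
use dashmap::DashMap;
use tokio::sync::broadcast::Sender;

// Declared once in lib.rs (see the initialization section above).
static BROADCAST_CHANNELS: OnceLock<DashMap<String, Sender<String>>> = OnceLock::new();

// Hypothetical sketch: look up the channel for this inventory and push a
// JSON payload to every connected SSE subscriber.
fn broadcast_inventory_update(inventory_id: &str, event: &str, record_id: &str, data: &serde_json::Value) {
    if let Some(channels) = BROADCAST_CHANNELS.get() {
        if let Some(tx) = channels.get(inventory_id) {
            let payload = serde_json::json!({
                "event": event,
                "record": record_id,
                "data": data,
            });
            // A send error only means there are currently no subscribers.
            let _ = tx.send(payload.to_string());
        }
    }
}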

6. Why OpenAPI Code Generation?

Pattern: #[utoipa::path] macro on every endpoint

Reasons:

  • Documentation always in sync with code
  • Interactive API testing (Swagger UI)
  • Type-safe documentation (checked at compile time)
  • Auto-generates OpenAPI 3.0 spec

Example:

#[utoipa::path(
    post,
    path = "/api/inventory/{id}",
    request_body = RecordRequest,
    responses(
        (status = 200, description = "Record created"),
        (status = 400, description = "Invalid data")
    ),
    tag = "Inventory"
)]
pub async fn create_record(...) { }
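
These per-endpoint annotations are collected in api_doc.rs via utoipa's OpenApi derive. A minimal sketch (the exact paths and tag descriptions listed here are illustrative):

use utoipa::OpenApi;

#[derive(OpenApi)]
#[openapi(
    paths(
        // Every #[utoipa::path]-annotated handler is registered here.
        crate::list::list_endpoint::get_all_locations,
        crate::inventory::inventory_endpoint::create_record,
    ),
    tags(
        (name = "Locations", description = "Location management"),
        (name = "Inventory", description = "Inventory records")
    )
)]
pub struct ApiDoc;

The derived ApiDoc struct produces the OpenAPI 3.0 spec that backs the Swagger UI and RapiDoc pages mounted under /docs/.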

Code Organization

Naming Conventions

  • Files: snake_case (e.g., inventory_endpoint.rs)
  • Functions: snake_case (e.g., get_all_locations())
  • Structs/Enums: PascalCase (e.g., LocationRequest)
  • Constants: SCREAMING_SNAKE_CASE (e.g., PORT)
  • Module names: snake_case (e.g., mod list;)

Error Handling

// Use anyhow::Result for function returns
pub async fn get_location(pool: &MySqlPool, id: u64) -> anyhow::Result<Location> {
    // Use ? operator for propagation
    let result = sqlx::query_as("SELECT * FROM locations WHERE id = ?")
        .bind(id)
        .fetch_one(pool)
        .await?;

    Ok(result)
}

// HTTP errors use database_common_lib::http_error::Result
pub async fn endpoint_handler(...) -> http_error::Result<impl Responder> {
    // Can convert anyhow errors to HTTP errors
    let location = get_location(id).await
        .map_err(|e| http_error::bad_request(e.to_string()))?;

    Ok(web::Json(location))
}

Logging

use log::{debug, info, warn, error};

// Debug level for detailed info
debug!("Processing request for location_id: {}", id);

// Info for important events
info!("Created new location: {}", name);

// Warn for recoverable issues
warn!("Slow query detected: {}ms", duration);

// Error for failures
error!("Database error: {}", err);

// Enable in development:
// RUST_LOG=debug cargo run

Async Patterns

// All I/O should be async
async fn database_operation(pool: &MySqlPool) -> Result<()> {
    sqlx::query("...").execute(pool).await?;
    Ok(())
}

// Use tokio for concurrent operations
let (result1, result2) = tokio::join!(
    operation1(),
    operation2(),
);

// Use tokio::spawn for background tasks
tokio::spawn(async move {
    // Background work
});

Build Process

Compilation

# Development build (fast compile, includes debug info)
cargo build

# Release build (optimized, slower compile)
cargo build --release

Build Script (build.rs)

Runs before compilation to set up the environment:

// Creates necessary directories
fn main() -> std::io::Result<()> {
    std::fs::create_dir_all("uploads")?;
    std::fs::create_dir_all("icons")?;
    std::fs::create_dir_all("target/dev-env")?;
    std::fs::create_dir_all("target/wwwroot")?;
    Ok(())
}

Cargo Features

The project doesn't use feature flags extensively, but key dependencies do:

[dependencies]
sqlx = { version = "0.8.2", features = ["mysql", "chrono", "runtime-tokio-rustls"] }
tokio = { version = "1.40.0", features = ["rt", "rt-multi-thread", "macros"] }

Development Tips

1. Adding a New Module

See Common Tasks for a detailed guide.

2. Debugging SQL Queries

# Enable SQLx query logging
RUST_LOG=sqlx=debug cargo run

3. Testing Endpoints

Use Swagger UI at http://localhost:1421/docs/swagger/

4. Performance Profiling

use stopwatch::Stopwatch;

let sw = Stopwatch::start_new();
// ... operation ...
log::debug!("Operation took {}ms", sw.elapsed_ms());

5. Database Connection Issues

Check connection pool configuration in lib.rs:

MySqlPoolOptions::new()
    .max_connections(5) // Adjust if needed
    .connect(&connection_string)
    .await?

Last Updated: 2025-11-04
