SQL Layer Across Languages - HeyItWorked/babel-shelf GitHub Wiki
Every implementation talks to the same Postgres 16 database with the same schema. The SQL is nearly identical — what differs is how each language sends queries, receives results, and handles errors.
```sql
-- db/init.sql
CREATE TABLE IF NOT EXISTS books (
  id SERIAL PRIMARY KEY,
  title VARCHAR(255) NOT NULL,
  author VARCHAR(255) NOT NULL,
  status VARCHAR(20) NOT NULL DEFAULT 'want to read'
    CHECK (status IN ('reading', 'finished', 'want to read'))
);
```
- `SERIAL` = auto-incrementing integer (Postgres-specific)
- `CHECK` constraint = the database enforces valid status values, so even if app validation fails, bad data can't get in
| Language | Driver | Connection Type | Pooling |
|---|---|---|---|
| Go | lib/pq | `*sql.DB` (built-in pool) | Yes — `sql.DB` is already a pool |
| TypeScript | pg | `Pool` (explicit pool) | Yes — `new Pool(...)` |
| Gleam | pog | `pog.Connection` (Erlang pool) | Yes — built-in via pgo |
| Python | psycopg2 | `connection` (single) | Manual (use `psycopg2.pool`) |
```go
// Go — sql.DB IS the pool (misleading name)
db, err = sql.Open("postgres", databaseUrl)
// sql.Open doesn't actually connect — it just creates the pool
// db.Ping() forces the first real connection
```

```typescript
// TypeScript — explicit pool
const pool = new Pool({ connectionString: databaseUrl })
await pool.query("SELECT 1") // forces connection
```

```gleam
// Gleam — parse URL into config, then connect
let assert Ok(config) = pog.url_config("bookshelf", database_url)
let conn = pog.connect(config)
```

Gotcha: Go's `sql.Open` is lazy — it doesn't connect until the first query. That's why `db.Ping()` is called separately. TS's `pool.query("SELECT 1")` serves the same purpose. Gleam's pog starts the pool on connect — no separate ping needed.
```go
// Go — mutable package-level variable
var db *sql.DB // set in main(), used in db.go
```

```typescript
// TypeScript — module variable with setter
export let pool: Pool
export function setPool(p: Pool) { pool = p }
```

```gleam
// Gleam — no global state, connection passed as argument
pub fn get_book_by_id(conn: pog.Connection, id: Int) -> Result(Book, AppError) {
  // conn comes from the caller — main() creates it, handlers pass it through
}
```

Why the patterns differ: Go files in the same package share all package-level variables. TS modules are isolated — you can't reassign an imported binding, hence the `setPool()` function. Gleam has no mutable module state at all — the connection must be passed explicitly to every function that needs it. More verbose, but nothing hidden.
Every implementation uses these exact SQL queries:
| Operation | SQL |
|---|---|
| Create | INSERT INTO books (title, author, status) VALUES ($1, $2, $3) RETURNING id, title, author, status |
| List | SELECT id, title, author, status FROM books |
| Get | SELECT id, title, author, status FROM books WHERE id = $1 |
| Update | UPDATE books SET title=$1, author=$2, status=$3 WHERE id=$4 RETURNING id, title, author, status |
| Delete | DELETE FROM books WHERE id = $1 |
RETURNING clause: Postgres-specific. Gets back the row in the same roundtrip instead of doing INSERT then SELECT. All implementations use this for create and update.
Explicit columns: Both Go and TS list id, title, author, status instead of SELECT *. If a column is added later, SELECT * would break Go's Scan (expects exactly 4 fields) and silently add unexpected data in TS.
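The arity hazard is easy to reproduce in Python, whose tuple unpacking is positional in the same way as Go's `Scan` (illustrative rows, not repo code):

```python
# Simulate a row as psycopg2 returns it: a plain tuple in column order.
row_four = (1, "Dune", "Frank Herbert", "reading")
book_id, title, author, status = row_four  # 4 columns, 4 targets: fine

# Now pretend a fifth column was added and the query used SELECT *.
row_five = (1, "Dune", "Frank Herbert", "reading", "extra-column")
try:
    book_id, title, author, status = row_five
except ValueError as err:
    # Unpacking 5 values into 4 names fails loudly, like Go's Scan arity error.
    print(err)  # too many values to unpack (expected 4)
```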
All implementations use positional parameters — never string concatenation:
```go
// Go — variadic args after the SQL string
db.QueryRow("SELECT ... WHERE id = $1", id)
```

```typescript
// TypeScript — array of params
pool.query<Book>("SELECT ... WHERE id = $1", [id])
```

```gleam
// Gleam — chained parameter calls, each binds the next $N
pog.query("SELECT ... WHERE id = $1")
|> pog.parameter(pog.int(id))
```

```python
# Python — tuple of params
cursor.execute("SELECT ... WHERE id = %s", (id,))
# Note: Python uses %s, not $1 — psycopg2 converts internally
```

Security: Parameterized queries prevent SQL injection. The driver sends the query and parameters separately — the database never sees user input mixed into the SQL string.
Gleam's twist: Parameters are typed at the call site — pog.int(id), pog.text(title). Go and TS infer types at runtime. This means Gleam catches type mismatches at compile time, while Go/TS discover them when Postgres rejects the query.
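A small Python illustration of why the separation matters (the input string is made up; the f-string line is exactly the pattern every implementation avoids):

```python
malicious = "1; DROP TABLE books; --"

# BAD: string interpolation splices user input into the SQL text itself.
unsafe = f"SELECT id, title FROM books WHERE id = {malicious}"
# unsafe now contains "DROP TABLE books" as part of the query string.

# GOOD: with parameters, the SQL text never changes. The value travels
# separately, so the driver and Postgres treat it as data, not as SQL.
safe_sql = "SELECT id, title FROM books WHERE id = %s"
safe_params = (malicious,)
assert "DROP TABLE" not in safe_sql
```

The malicious payload still exists in `safe_params`, but it can only ever be compared against the `id` column; it can never become executable SQL.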
```go
var book Book
err := row.Scan(&book.Id, &book.Title, &book.Author, &book.Status)
```

You must:
- Declare variables first
- Pass pointers (`&`) so `Scan` can write into them
- Match column order exactly (not names)
- Get the count right — 4 columns = 4 arguments

If you get it wrong: runtime panic or silent data corruption (values in wrong fields).
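The silent-corruption case is the nastier failure. A Python illustration of the same positional mistake (stand-in row, not repo code):

```python
row = (1, "Dune", "Frank Herbert", "reading")

# Swapping two targets still "works": no error, just wrong data,
# exactly like passing Scan pointers in the wrong order in Go.
book_id, author, title, status = row
print(title)   # Frank Herbert -- the author, silently in the wrong field
```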
```typescript
const result = await pool.query<Book>(sql, [id])
return result.rows[0] // { id: 1, title: "Dune", ... }
```

pg automatically creates an object where column names become property keys. No manual mapping needed.

If you get it wrong: if you rename a SQL column, the old key becomes `undefined`. No compile error, no runtime error — just missing data.
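The name-based mapping pg does can be sketched in Python by zipping column names with the row tuple (psycopg2 exposes the real names via `cursor.description`; here they are hard-coded for illustration):

```python
columns = ["id", "title", "author", "status"]
row = (1, "Dune", "Frank Herbert", "reading")

book = dict(zip(columns, row))
print(book["title"])         # Dune

# Rename the SQL column (title -> book_title) and the old key just vanishes:
renamed = dict(zip(["id", "book_title", "author", "status"], row))
print(renamed.get("title"))  # None -- same silent failure as TS's undefined
```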
```gleam
// Gleam — pog 4.x with gleam/dynamic/decode
fn book_decoder() -> decode.Decoder(Book) {
  use id <- decode.field(0, decode.int)             // column 0 → Int
  use title <- decode.field(1, decode.string)       // column 1 → String
  use author <- decode.field(2, decode.string)      // column 2 → String
  use status_str <- decode.field(3, decode.string)  // column 3 → String
  let assert Ok(status) = status_from_string(status_str)
  decode.success(Book(id:, title:, author:, status:))
}
```

Each `use` line says "grab column N as type T." The decoder is attached to the query with `pog.returning(book_decoder())` — pog applies it to every row automatically. If the shape doesn't match, you get a `Result.Error`, not a crash.
Note: Earlier Gleam code used dynamic.decode4(...) (the old API). pog 4.x uses the newer gleam/dynamic/decode module with use-based chaining — more readable and composable. See Gleam-pog for the full decoder guide.
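For comparison, a rough Python analogue of the decoder idea: check shape and types up front and hand back an error value instead of crashing. `decode_book` and its error strings are hypothetical, not code from the repo:

```python
from dataclasses import dataclass

VALID_STATUSES = {"reading", "finished", "want to read"}

@dataclass
class Book:
    id: int
    title: str
    author: str
    status: str

def decode_book(row):
    """Return (Book, None) on success, (None, reason) on a shape mismatch."""
    if len(row) < 4:
        return None, f"expected 4 columns, got {len(row)}"
    book_id, title, author, status = row[:4]  # extra columns are ignored
    if not isinstance(book_id, int):
        return None, "column 0 is not an int"
    if status not in VALID_STATUSES:
        return None, f"unknown status: {status!r}"
    return Book(book_id, title, author, status), None

book, err = decode_book((1, "Dune", "Frank Herbert", "reading"))
assert err is None and book.title == "Dune"

_, err = decode_book((1, "Dune", "Frank Herbert"))  # too few columns
assert err == "expected 4 columns, got 3"
```

Like the Gleam decoder, it ignores extra trailing columns but reports a too-short row as an error value the caller must handle.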
| Aspect | Go | TypeScript | Gleam |
|---|---|---|---|
| Column order matters? | Yes (Scan is positional) | No (name-based) | Yes (element index) |
| Type mismatch detected? | Runtime panic | Silent (JS coercion) | Compile-time |
| Extra column added | Scan crashes (wrong arity) | Silently included | Decoder ignores it |
| Column renamed | Scan still works (positional) | Property becomes undefined | Decoder still works (positional) |
All three need to check if the DELETE actually affected a row:
```go
// Go — check RowsAffected
result, err := db.Exec("DELETE FROM books WHERE id = $1", id)
rowsAffected, _ := result.RowsAffected()
if rowsAffected == 0 {
    return fmt.Errorf("book with id %d not found", id)
}
```

```typescript
// TypeScript — check rowCount
const result = await pool.query(sql, [id])
if (result.rowCount === null || result.rowCount === 0) {
  return false
}
return true
```

Same pattern, different API names: Go calls it `RowsAffected()`, TS calls it `rowCount`, Gleam checks `returned.count`. All three answer "did anything actually get deleted?"
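Python follows the same pattern through `cursor.rowcount` after `execute`. A sketch with a stand-in cursor class so the logic runs without a live database (`FakeCursor` is hypothetical, not repo code):

```python
def delete_book(cursor, book_id):
    """Return True if a row was deleted, False if the id didn't exist."""
    cursor.execute("DELETE FROM books WHERE id = %s", (book_id,))
    return cursor.rowcount > 0

# Stand-in for a psycopg2 cursor, just enough to exercise the check.
class FakeCursor:
    def __init__(self, rowcount):
        self.rowcount = rowcount
    def execute(self, sql, params):
        pass

assert delete_book(FakeCursor(rowcount=1), 1) is True     # row existed
assert delete_book(FakeCursor(rowcount=0), 999) is False  # nothing deleted
```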
```gleam
// Gleam — check count on the Returned record
case returned.count {
  0 -> Error(NotFound)
  _ -> Ok(Nil)
}
```

For single-row lookups ("get by id"), each implementation signals "no rows" differently:

```go
// Go — Scan returns sql.ErrNoRows
err := row.Scan(...)
if err != nil {
    return Book{}, err // caller treats any error as "not found"
}
```

```typescript
// TypeScript — rows[0] is undefined
return result.rows[0] // undefined if no match, caller checks with if (!book)
```

```gleam
// Gleam — pattern match on the rows list
case returned.rows {
  [book] -> Ok(book)    // exactly one row
  _ -> Error(NotFound)  // empty list = 404
}
```

Go is less precise here: any `Scan` error (including connection errors) gets treated as "not found." A production app would check `err == sql.ErrNoRows` specifically. Gleam is the most explicit: the `case` separates "no rows" from "query error" — query errors are caught earlier by `result.try`, so the `case` only runs on successful queries.
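Python's psycopg2 lands closest to TypeScript here: `cursor.fetchone()` returns `None` when nothing matched, and the caller checks for it. A sketch with a stand-in cursor (`StubCursor` is hypothetical, not repo code):

```python
def get_book(cursor, book_id):
    cursor.execute(
        "SELECT id, title, author, status FROM books WHERE id = %s",
        (book_id,),
    )
    row = cursor.fetchone()  # None when no row matched
    if row is None:
        return None          # caller turns this into a 404
    return row

# Stand-in for a psycopg2 cursor, preloaded with a single result row.
class StubCursor:
    def __init__(self, row):
        self._row = row
    def execute(self, sql, params):
        pass
    def fetchone(self):
        return self._row

assert get_book(StubCursor((1, "Dune", "Frank Herbert", "reading")), 1) is not None
assert get_book(StubCursor(None), 42) is None
```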
| | Go | TypeScript | Gleam |
|---|---|---|---|
| Start | `sql.Open` (lazy) | `new Pool` (lazy) | `pog.connect` (starts pool) |
| Verify | `db.Ping()` | `pool.query("SELECT 1")` | (automatic on connect) |
| Use | `db.QueryRow` / `db.Query` | `pool.query` | `pog.query \|> pog.execute` |
| Cleanup | `defer db.Close()` | `await pool.end()` | `pog.disconnect(conn)` |
| Test setup | `TestMain()` | `beforeAll()` | (TBD — issue #12) |
| Test cleanup | `defer db.Close()` | `afterAll → pool.end()` | (TBD — issue #12) |
All three drivers manage a connection pool internally. None opens a new TCP connection per query — they reuse connections from the pool.
| Rule | Enforced by DB? | Also checked in app? |
|---|---|---|
| `id` is auto-generated | Yes (SERIAL) | No — apps never set `id` on INSERT |
| `title` is required | Yes (NOT NULL) | Yes — handlers check before INSERT |
| Valid status values | Yes (CHECK constraint) | Yes — handlers validate before INSERT |
| `status` defaults to "want to read" | Yes (DEFAULT) | Yes — handlers set default too |
Belt and suspenders: The app validates before sending to the DB, and the DB validates again. If the app had a bug that skipped validation, the CHECK constraint would still reject bad data. This is defensive programming — validate at the boundary and at the storage layer.
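The app-side half of that pairing can be one small function mirroring the schema rules. `validate_book` is a hypothetical helper, not code from any of the implementations:

```python
VALID_STATUSES = {"reading", "finished", "want to read"}
DEFAULT_STATUS = "want to read"

def validate_book(title, author, status=None):
    """Mirror the DB rules before the INSERT ever happens."""
    errors = []
    if not title:
        errors.append("title is required")            # NOT NULL
    if not author:
        errors.append("author is required")           # NOT NULL
    status = status or DEFAULT_STATUS                 # DEFAULT 'want to read'
    if status not in VALID_STATUSES:
        errors.append(f"invalid status: {status!r}")  # CHECK constraint
    return status, errors

status, errors = validate_book("Dune", "Frank Herbert")
assert status == "want to read" and errors == []

_, errors = validate_book("", "Frank Herbert", "paused")
assert len(errors) == 2  # missing title + invalid status
```

Even if this function were skipped by mistake, the NOT NULL and CHECK constraints in `db/init.sql` would still reject the row at the storage layer.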