Gleam pog - HeyItWorked/babel-shelf GitHub Wiki

Gleam + pog — Talking to Postgres

Library: pog (v4.x) | Underlying driver: pgo (Erlang) | Style: Query builder pipeline

pog is Gleam's PostgreSQL client. It wraps an Erlang connection pool and adds Gleam's type safety on top. If you've used Python's psycopg2 or Node's pg, the SQL is the same — the difference is how you build queries and decode results.

The name: pog is a rename of gleam_pgo. Same author, same library, shorter name.


Connecting — No Global State

Go and TypeScript store the connection in a module-level variable

// Go — mutable package variable
var db *sql.DB
func main() {
    db, _ = sql.Open("postgres", url)
}
// TypeScript — module variable with setter
export let pool: Pool
export function setPool(p: Pool) { pool = p }

Gleam passes it as an argument

There's no mutable global state in Gleam. You create a connection and thread it through every function that needs it:

// Gleam — connection created in main, passed to every db function
let assert Ok(config) = pog.url_config("bookshelf", database_url)
let assert Ok(conn) = pog.connect(config)

// every db function takes conn as its first argument
db.insert_book(conn, "Dune", "Frank Herbert", models.WantToRead)
db.get_book_by_id(conn, 1)
# Python equivalent — explicit connection passing
conn = psycopg2.connect(database_url)
insert_book(conn, "Dune", "Frank Herbert", "want to read")
get_book_by_id(conn, 1)

Why this matters: In Go/TS, any file can reach the db connection — convenient but makes testing and reasoning harder. In Gleam, every function declares "I need a database connection" in its signature. Nothing hidden.
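Why explicit passing helps testing can be shown with a small Python sketch (hypothetical, not from the wiki): because the connection is an argument, a test can hand in a fake that records queries instead of a real pool.

```python
# Sketch only — FakeConn and insert_book are invented for illustration.
class FakeConn:
    def __init__(self):
        self.queries = []

    def execute(self, sql, params):
        # record instead of talking to Postgres
        self.queries.append((sql, params))

def insert_book(conn, title, author):
    # same shape as db.insert_book(conn, ...) above
    conn.execute(
        "INSERT INTO books (title, author) VALUES (%s, %s)",
        (title, author),
    )

conn = FakeConn()
insert_book(conn, "Dune", "Frank Herbert")
assert conn.queries[0][1] == ("Dune", "Frank Herbert")
```

No monkeypatching of a module-level `db` variable is needed — the dependency is right there in the signature.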

Configuration — Builder Pattern

pog uses a builder pattern to configure the connection pool:

// From a DATABASE_URL (most common in Docker/prod)
let assert Ok(config) = pog.url_config("bookshelf", database_url)

// Or build it field by field
let config = pog.default_config("bookshelf")
  |> pog.host("localhost")
  |> pog.port(5432)
  |> pog.database("bookshelf")
  |> pog.user("shelf")
  |> pog.password("shelf")
  |> pog.pool_size(10)
# Python — keyword arguments
conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="bookshelf",
    user="shelf",
    password="shelf",
)

The first argument to pog.url_config / pog.default_config is a pool name — an Erlang atom that identifies this connection pool. Think of it as naming your connection so you can look it up later.


The Query Pipeline — Build, Bind, Decode, Execute

This is pog's core pattern. Every query follows four steps:

pog.query("SELECT id, title FROM books WHERE id = $1")  // 1. SQL
  |> pog.parameter(pog.int(42))                          // 2. Bind params
  |> pog.returning(my_decoder)                           // 3. Row decoder
  |> pog.execute(conn)                                   // 4. Run it
# Python equivalent — all four steps happen in one call
cursor.execute("SELECT id, title FROM books WHERE id = %s", (42,))
row = cursor.fetchone()  # decoding is manual
book = Book(id=row[0], title=row[1])

The Gleam version is more verbose but each step is typed. If you pass a string where the SQL expects an int, the compiler catches it — not the database at runtime.

Step 1: pog.query(sql) — Create the Query

pog.query("INSERT INTO books (title, author) VALUES ($1, $2) RETURNING id, title, author")

Returns a Query(Nil) — a query that decodes to nothing by default. The $1, $2 placeholders are standard Postgres positional parameters, the same ones you'd use from Go or TypeScript.

Step 2: pog.parameter(value) — Bind Parameters

Each call binds the next $N placeholder:

pog.query("INSERT INTO books (title, author) VALUES ($1, $2)")
  |> pog.parameter(pog.text("Dune"))           // binds $1
  |> pog.parameter(pog.text("Frank Herbert"))   // binds $2

Parameter type constructors — you must wrap values in their Postgres type:

| Gleam | Postgres type | Example |
| --- | --- | --- |
| pog.int(42) | integer | IDs, counts |
| pog.text("hello") | text/varchar | Strings |
| pog.float(3.14) | float | Decimals |
| pog.bool(True) | boolean | Flags |
| pog.null() | NULL | Missing values |
# Python — psycopg2 infers types automatically
cursor.execute("INSERT INTO books VALUES (%s, %s)", ("Dune", "Frank Herbert"))
# No need to say "this is a string" — psycopg2 figures it out

Why Gleam is explicit: Python's driver guesses types at runtime (and sometimes guesses wrong). Gleam makes you declare types upfront — no surprises.

Step 3: pog.returning(decoder) — Tell pog How to Read Rows

This is the biggest difference from Go and TypeScript. You must write a decoder that maps each column position to a Gleam type. (Covered in detail in the next section.)

pog.query("SELECT id, title FROM books")
  |> pog.returning({
    use id <- decode.field(0, decode.int)
    use title <- decode.field(1, decode.string)
    decode.success(#(id, title))
  })

If you skip returning, pog returns Returned(Nil) — you get the row count but no decoded data. Useful for DELETE.
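The psycopg2 analogue of skipping `returning` is reading `cursor.rowcount` and never fetching rows. A runnable sketch (FakeCursor is a stand-in for a real cursor so this works without a database):

```python
# Sketch only — FakeCursor is invented so the pattern runs standalone.
class FakeCursor:
    def __init__(self):
        self.rowcount = 0

    def execute(self, sql, params):
        self.rowcount = 1  # pretend exactly one row matched the DELETE

cur = FakeCursor()
cur.execute("DELETE FROM books WHERE id = %s", (42,))
deleted = cur.rowcount  # like pog's Returned(count: 1, rows: [])
assert deleted == 1
```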

Step 4: pog.execute(conn) — Run the Query

pog.query("SELECT ...")
  |> pog.returning(decoder)
  |> pog.execute(conn)
// Returns: Result(Returned(Book), QueryError)

The return type has two layers:

  • Outer Result: Did the query succeed or fail?
  • Inner Returned(t): How many rows, and what data?
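In Python terms (a loose sketch with invented names), the outer Result behaves like an exception and the inner Returned(t) like a (count, rows) record:

```python
from dataclasses import dataclass, field

@dataclass
class Returned:
    count: int
    rows: list = field(default_factory=list)

def execute(fail=False):
    # outer layer: raise (Error) or return normally (Ok)
    if fail:
        raise ConnectionError("ConnectionUnavailable")
    # inner layer: how many rows, and what data
    return Returned(count=2, rows=["book1", "book2"])

ok = execute()
assert (ok.count, ok.rows) == (2, ["book1", "book2"])
```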

Decoders — Mapping Rows to Types

The Problem

Postgres returns rows as untyped tuples. Go uses row.Scan(&book.Id, ...) to fill struct fields. TypeScript's pg auto-maps column names to object keys. Gleam needs a decoder — a recipe that says "column 0 is an int, column 1 is a string, etc."

The Modern Decode API (gleam/dynamic/decode)

pog 4.x uses Gleam's gleam/dynamic/decode module (not the older dynamic.decode4):

import gleam/dynamic/decode

fn book_decoder() -> decode.Decoder(Book) {
  use id <- decode.field(0, decode.int)         // column 0 → Int
  use title <- decode.field(1, decode.string)    // column 1 → String
  use author <- decode.field(2, decode.string)   // column 2 → String
  use status_str <- decode.field(3, decode.string) // column 3 → String
  let assert Ok(status) = status_from_string(status_str)
  decode.success(Book(id:, title:, author:, status:))
}
# Python equivalent — manual row-to-object mapping
def decode_book(row: tuple) -> Book:
    id, title, author, status_str = row  # positional unpacking
    status = BookStatus(status_str)      # convert string to enum
    return Book(id=id, title=title, author=author, status=status)

How use Works in Decoders

The use keyword here isn't error handling — it's callback chaining. Each use line desugars to a nested callback:

// What you write:
use id <- decode.field(0, decode.int)
use title <- decode.field(1, decode.string)
decode.success(#(id, title))

// What the compiler sees:
decode.field(0, decode.int, fn(id) {
  decode.field(1, decode.string, fn(title) {
    decode.success(#(id, title))
  })
})

decode.field(index, type) means "when decoding a row, grab position index and decode it as type." If any field fails to decode, the whole decoder returns an error — no partial results.
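A Python rendering of that all-or-nothing behaviour (a sketch; the error shape is invented):

```python
# Each positional check must pass or the whole decode fails — there is
# no partially-built value, mirroring Gleam's decoder semantics.
def decode_row(row):
    checks = [(0, int, "Int"), (1, str, "String")]
    errors = [
        f"column {i}: expected {name}"
        for i, typ, name in checks
        if not isinstance(row[i], typ)
    ]
    if errors:
        return ("error", errors)
    return ("ok", {"id": row[0], "title": row[1]})

assert decode_row((1, "Dune")) == ("ok", {"id": 1, "title": "Dune"})
assert decode_row(("oops", "Dune"))[0] == "error"  # no partial result
```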

Handling Custom Types in Decoders

The database stores "want to read" as a string, but Gleam's Book expects a BookStatus custom type. The conversion happens inside the decoder:

use status_str <- decode.field(3, decode.string)
let assert Ok(status) = status_from_string(status_str)
decode.success(Book(id:, title:, author:, status:))

Why let assert? The database has a CHECK constraint that only allows 'reading', 'finished', 'want to read'. So status_from_string can't fail unless the schema is broken. let assert means "I'm certain this will match — crash if it doesn't." It's Gleam's version of an assertion.

# Python equivalent — same confidence, same crash if wrong
status = BookStatus(status_str)  # raises ValueError if invalid

If you weren't confident, you'd use case instead of let assert and handle the error gracefully.
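What that graceful branch looks like, sketched in Python (the helper names are invented; in Gleam you would pattern-match the Result with case):

```python
VALID_STATUSES = {"reading", "finished", "want to read"}

def status_from_string(s):
    # returns ("ok", value) or ("error", reason), like a Gleam Result
    if s in VALID_STATUSES:
        return ("ok", s)
    return ("error", f"unknown status: {s}")

# `let assert` style — certain it matches, crash otherwise
def decode_status_strict(s):
    tag, value = status_from_string(s)
    assert tag == "ok", value
    return value

# `case` style — handle the error instead of crashing
def decode_status_graceful(s):
    tag, value = status_from_string(s)
    if tag == "error":
        return ("error", "DecodeError: " + value)
    return ("ok", value)
```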


The Returned Type — What Queries Give Back

pub type Returned(t) {
  Returned(count: Int, rows: List(t))
}
  • count: Number of affected/returned rows (like Go's RowsAffected() or TS's rowCount)
  • rows: List of decoded values (empty for DELETE without RETURNING)
// After executing a SELECT that found 3 books:
Ok(pog.Returned(count: 3, rows: [book1, book2, book3]))

// After executing a DELETE that removed 1 row (no RETURNING):
Ok(pog.Returned(count: 1, rows: []))

// After executing a DELETE where no row matched:
Ok(pog.Returned(count: 0, rows: []))

Pattern matching on Returned is how you handle "not found":

case returned.rows {
  [book] -> Ok(book)     // exactly one row — success
  [] -> Error(NotFound)  // no rows — 404
  _ -> Error(DatabaseError("unexpected multiple rows"))
}
# Python equivalent
rows = cursor.fetchall()
if len(rows) == 1:
    return rows[0]
elif len(rows) == 0:
    raise NotFoundError()

Error Types — What Can Go Wrong

pog.execute returns Result(Returned(t), QueryError). The QueryError type covers everything that can fail:

| Variant | When it happens | Example |
| --- | --- | --- |
| ConstraintViolated | INSERT/UPDATE breaks a DB constraint | Duplicate key, CHECK violation |
| PostgresqlError | General Postgres error | Syntax error, permission denied |
| UnexpectedArgumentCount | Wrong number of $N params | SQL has $1, $2 but you bound 3 |
| UnexpectedArgumentType | Param type doesn't match column | Passed pog.int() for a text column |
| UnexpectedResultType | Decoder doesn't match the returned row | Column order changed |
| QueryTimeout | Query took too long | Default: 5 seconds |
| ConnectionUnavailable | Can't reach Postgres | Wrong host, pool exhausted |
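Under the hood, Postgres reports failures via five-character SQLSTATE codes, which a driver can classify into categories like these. A simplified Python sketch (the mapping is illustrative, but "23505" and "57014" are real SQLSTATEs: unique_violation and query_canceled):

```python
def classify(sqlstate: str) -> str:
    # SQLSTATE class "23" covers integrity constraint violations
    if sqlstate.startswith("23"):
        return "ConstraintViolated"
    # 57014 is query_canceled, raised when statement_timeout fires
    if sqlstate == "57014":
        return "QueryTimeout"
    return "PostgresqlError"

assert classify("23505") == "ConstraintViolated"  # duplicate key
assert classify("57014") == "QueryTimeout"
assert classify("42601") == "PostgresqlError"     # syntax error
```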

Converting Library Errors to App Errors

You typically don't expose pog.QueryError to your handlers. Instead, wrap it:

pub type AppError {
  NotFound
  DatabaseError(String)
}

fn map_error(res: Result(a, pog.QueryError)) -> Result(a, AppError) {
  result.map_error(res, fn(e) { DatabaseError(string.inspect(e)) })
}
# Python equivalent — catching library exceptions and re-raising as app exceptions
try:
    cursor.execute(sql, params)
except psycopg2.Error as e:
    raise DatabaseError(str(e))

Then in your db functions:

pub fn get_book_by_id(conn, id) -> Result(Book, AppError) {
  use returned <- result.try(
    pog.query("SELECT ... WHERE id = $1")
    |> pog.parameter(pog.int(id))
    |> pog.returning(book_decoder())
    |> pog.execute(conn)
    |> map_error,     // pog.QueryError → AppError
  )

  case returned.rows {
    [book] -> Ok(book)
    _ -> Error(NotFound)  // domain-level error
  }
}

The use returned <- result.try(...) line is doing two things:

  1. If the query failed → return Error(DatabaseError(...)) immediately
  2. If it succeeded → unwrap returned and continue to the case
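The same early-return flow, sketched in Python (run_query and the tagged-tuple Result shape are stand-ins, not a real driver API):

```python
def get_book_by_id(run_query, book_id):
    result = run_query(book_id)  # ("ok", rows) or ("error", reason)
    if result[0] == "error":     # query failed → return immediately
        return ("error", "DatabaseError: " + result[1])
    rows = result[1]             # unwrapped, like `returned` after result.try
    if len(rows) == 1:
        return ("ok", rows[0])
    return ("error", "NotFound")

assert get_book_by_id(lambda _: ("ok", ["Dune"]), 1) == ("ok", "Dune")
assert get_book_by_id(lambda _: ("ok", []), 2) == ("error", "NotFound")
```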

Putting It All Together

Here's the full lifecycle of a query, annotated:

// 1. Build the SQL with placeholders
pog.query("SELECT id, title, author, status FROM books WHERE id = $1")

// 2. Bind the parameter (type-safe — can't pass a string for $1 if it's an int column)
  |> pog.parameter(pog.int(id))

// 3. Attach a decoder (tells pog how to convert the raw row into a Book)
  |> pog.returning(book_decoder())

// 4. Execute against the connection pool
  |> pog.execute(conn)

// Returns: Result(Returned(Book), QueryError)
//
// Ok(Returned(count: 1, rows: [Book(id: 1, title: "Dune", ...)]))  ← found
// Ok(Returned(count: 0, rows: []))                                   ← not found
// Error(ConnectionUnavailable)                                        ← db down

The Same Query in All Three Languages

// Go — imperative, manual scan
row := db.QueryRow("SELECT id, title, author, status FROM books WHERE id = $1", id)
var book Book
err := row.Scan(&book.Id, &book.Title, &book.Author, &book.Status)
// TypeScript — concise, auto-mapped
const result = await pool.query<Book>("SELECT id, title, author, status FROM books WHERE id = $1", [id])
return result.rows[0]
// Gleam — pipeline, explicit decoder
pog.query("SELECT id, title, author, status FROM books WHERE id = $1")
  |> pog.parameter(pog.int(id))
  |> pog.returning(book_decoder())
  |> pog.execute(conn)

Three languages, same SQL, three different philosophies for getting data out.
