MySQL - chung-leong/zigar GitHub Wiki

In this example we're going to use the MyZql library to store and retrieve data from a MariaDB/MySQL database. It demonstrates how to create an async API.

As always, we begin by creating the basic app skeleton:

mkdir myzql
cd myzql
npm init -y
npm install node-zigar fastify @fastify/formbody
mkdir src zig

We then proceed to install MyZql. Go into the zig sub-directory and create an empty build.zig:

cd zig
touch build.zig

Then go to the project's GitHub page and select the tag 0.15.1:

Github - myzql

Click the "Code" button and copy the URL for the zip package:

Github - myzql

Fetch it using zig:

zig fetch --save https://github.com/speed2exe/myzql/archive/refs/tags/0.15.1.zip

That'll fetch the package and create a build.zig.zon listing it as a dependency. The empty build.zig exists only to enable the fetch --save command; it won't otherwise be used. The import of the package happens in build.extra.zig. Create a bare-bones copy by running the following command:

npx node-zigar extra

build.extra.zig will look like this initially:

const std = @import("std");

pub fn getImports(b: *std.Build, args: anytype) []const std.Build.Module.Import {
    _ = b;
    _ = args;
    // args contains the following:
    //
    //     library: *std.Build.Step.Compile,
    //     target: std.Build.ResolvedTarget,
    //     optimize: std.builtin.OptimizeMode,
    return &.{};
}

pub fn getCSourceFiles(b: *std.Build, args: anytype) []const []const u8 {
    _ = b;
    _ = args;
    // args contains the following:
    //
    //     library: *std.Build.Step.Compile,
    //     module: *std.Build.Module,
    //     target: std.Build.ResolvedTarget,
    //     optimize: std.builtin.OptimizeMode,
    return &.{};
}

pub fn getIncludePaths(b: *std.Build, args: anytype) []const []const u8 {
    _ = b;
    _ = args;
    // args contains the following:
    //
    //     library: *std.Build.Step.Compile,
    //     module: *std.Build.Module,
    //     target: std.Build.ResolvedTarget,
    //     optimize: std.builtin.OptimizeMode,
    return &.{};
}

Add myzql as an import in getImports():

pub fn getImports(b: *std.Build, args: anytype) []const std.Build.Module.Import {
    const myzql = b.dependency("myzql", .{
        .target = args.target,
        .optimize = args.optimize,
    }).module("myzql");
    return &.{
        .{ .name = "myzql", .module = myzql },
    };
}

The other two functions aren't needed and can be removed.

Save the following code as mysql.zig:

const std = @import("std");
const zigar = @import("zigar");
const myzql = @import("myzql");
const Conn = myzql.conn.Conn;

const DatabaseParams = struct {
    host: []const u8,
    port: u16 = 3306,
    username: [:0]const u8,
    password: [:0]const u8,
    database: [:0]const u8,
    threads: usize = 1,
};

var work_queue: zigar.thread.WorkQueue(worker) = .{};
var gpa = std.heap.DebugAllocator(.{}).init;
const allocator = gpa.allocator();

pub fn openDatabase(params: DatabaseParams) !void {
    try work_queue.init(.{
        .allocator = allocator,
        .n_jobs = params.threads,
        .thread_start_params = .{params},
    });
    try work_queue.wait();
}

pub fn closeDatabase(promise: zigar.function.Promise(void)) void {
    work_queue.deinitAsync(promise);
}

const worker = struct {
    threadlocal var client: Conn = undefined;

    pub fn onThreadStart(params: DatabaseParams) !void {
        const address = try std.net.Address.parseIp(params.host, params.port);
        client = try Conn.init(
            zigar.mem.getDefaultAllocator(),
            &.{
                .username = params.username,
                .password = params.password,
                .database = params.database,
                .address = address,
            },
        );
        errdefer client.deinit();
        try client.ping();
    }

    pub fn onThreadEnd() void {
        client.deinit();
    }
};

zigar.thread.WorkQueue is a parameterized struct that contains a non-blocking queue and a thread pool. When we push a work unit onto the queue, one of its threads will pick it up and perform the work. The types of work that can be performed are defined in the namespace passed to WorkQueue(), worker in this case. At the moment we only have the initialization function onThreadStart() and the clean-up function onThreadEnd(). As their names suggest, these are run in each thread when it starts and when it ends.

onThreadStart() tries to connect to the database. If successful, it stores the connection in a threadlocal variable. It receives its parameters from WorkQueue, which in turn receives them from openDatabase().

openDatabase() calls work_queue.wait() to ensure that all threads have successfully opened a connection. If any call to onThreadStart() results in an error, wait() returns that error.

onThreadEnd() closes each thread's database connection. Since it accepts no arguments, there was no need to provide thread_end_params to work_queue.init().
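
Had onThreadEnd() accepted arguments, we would supply them through a matching field in the init options. A hypothetical sketch (the thread_end_params field name is inferred from the text above, by symmetry with thread_start_params; verify it against the Zigar documentation):

```zig
try work_queue.init(.{
    .allocator = allocator,
    .n_jobs = params.threads,
    .thread_start_params = .{params},
    // hypothetical: only needed if onThreadEnd() took arguments
    .thread_end_params = .{params},
});
```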

To test our code we need a working MySQL or MariaDB server. If you have Docker installed on your computer, getting one up is easy enough:

docker run --detach --name some-mariadb --env MARIADB_ALLOW_EMPTY_ROOT_PASSWORD=1 mariadb:latest

The following command yields the IP address of the server:

docker inspect --format '{{ .NetworkSettings.IPAddress }}' some-mariadb
172.17.0.2

And this one is for connecting to the server:

docker run -it --rm mariadb mariadb --host 172.17.0.2 --user root --disable-ssl

Run the following SQL script to create a test database:

CREATE USER zig_user IDENTIFIED BY 'password123';
GRANT SELECT, INSERT, UPDATE, DELETE ON *.* TO zig_user;
CREATE DATABASE testdb;
USE testdb;
CREATE TABLE person (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255),
    age INT
);
INSERT INTO person (name, age) VALUES ('Tony Stark', 53);
EXIT;

Now, create index.js in the src sub-directory:

import { closeDatabase, openDatabase } from '../zig/mysql.zig';

openDatabase({
    host: '172.17.0.2',
    username: 'zig_user',
    password: 'password123',
    database: 'testdb',
    threads: 4,
});
closeDatabase();

Add a script command to package.json:

  "scripts": {
    "start": "node --loader=node-zigar --no-warnings src/index.js",

Then run the script:

npm run start

After compiling the Zig code, the script will try to connect to the database, then promptly exit. If there's a problem, you might see something like this:

warning: error packet: (code: 1045, message: Access denied for user 'zig_user'@'172.17.0.1' (using password: YES))
warning: error packet: (code: 1045, message: Access denied for user 'zig_user'@'172.17.0.1' (using password: YES))
warning: error packet: (code: 1045, message: Access denied for user 'zig_user'@'172.17.0.1' (using password: YES))
warning: error packet: (code: 1045, message: Access denied for user 'zig_user'@'172.17.0.1' (using password: YES))
node:internal/process/esm_loader:40
      internalBinding('errors').triggerUncaughtException(
                                ^

[Error: Error packet] { number: 75 }

Let us now prepare an actual SQL statement. First, we'll grab the PrepareResult struct from MyZql:

const Conn = myzql.conn.Conn;
const PrepareResult = myzql.result.PrepareResult;

Then in the worker namespace we add the following function:

    fn Prepare(comptime sql: []const u8) type {
        return struct {
            comptime sql: []const u8 = sql,
            prep_res: PrepareResult = undefined,
        };
    }

Then we add a threadlocal variable for the table person:

    const queries = struct {
        pub const person = struct {
            pub threadlocal var select: Prepare(
                \\SELECT * FROM person
            ) = .{};
        };
    };

Our plan is to keep variables related to a given table in a separate namespace, for neatness' sake. The PrepareResult structs need to be threadlocal, since they're specific to the connection employed by each thread. The SQL statements themselves don't need to be; that's why we're storing them in a comptime field.

In case you've never used the feature before, \\ is how multi-line string literals are written in Zig. SQL statements are often best expressed in this manner.

In onThreadStart() we loop through all statements and prepare them:

    pub fn onThreadStart(params: DatabaseParams) !void {
        const allocator = zigar.mem.getDefaultAllocator();
        const address = try std.net.Address.parseIp(params.host, params.port);
        client = try Conn.init(
            allocator,
            &.{
                .username = params.username,
                .password = params.password,
                .database = params.database,
                .address = address,
            },
        );
        errdefer client.deinit();
        inline for (comptime std.meta.declarations(queries)) |qs_decl| {
            const query_set = @field(queries, qs_decl.name);
            inline for (comptime std.meta.declarations(query_set)) |q_decl| {
                const query = &@field(query_set, q_decl.name);
                query.prep_res = try client.prepare(allocator, query.sql);
                errdefer query.prep_res.deinit(allocator);
                _ = try query.prep_res.expect(.stmt);
            }
        }
    }

And we deinitialize them in onThreadEnd():

    pub fn onThreadEnd() void {
        const allocator = zigar.mem.getDefaultAllocator();
        inline for (comptime std.meta.declarations(queries)) |qs_decl| {
            const query_set = @field(queries, qs_decl.name);
            inline for (comptime std.meta.declarations(query_set)) |q_decl| {
                const query = @field(query_set, q_decl.name);
                query.prep_res.deinit(allocator);
            }
        }
        client.deinit();
    }

Run npm run start again to verify that the code works.

Now let us start pulling data from the database. First, we'll define a struct for the table person. Add the following definition at the top level:

pub const Person = struct {
    id: u32 = 0,
    name: []const u8,
    age: u8,
};

Then in the worker namespace, add this little function:

    pub fn findPersons() !StructIterator(Person) {
        return try StructIterator(Person).init(queries.person.select.prep_res, .{});
    }

It simply initializes the StructIterator struct, which performs the actual work:

    fn StructIterator(comptime T: type) type {
        return struct {
            rows: ResultSet(BinaryResultRow),

            pub fn init(prep_res: PrepareResult, params: anytype) !@This() {
                const stmt = try prep_res.expect(.stmt);
                const query_res = try client.executeRows(allocator, &stmt, params);
                const rows = try query_res.expect(.rows);
                return .{ .rows = rows };
            }

            pub fn next(self: *@This()) !?T {
                const rows_iter = self.rows.iter();
                if (try rows_iter.next()) |row| {
                    var result: T = undefined;
                    try row.scan(&result);
                    return result;
                } else {
                    return null;
                }
            }
        };
    }

init() executes the query and obtains a ResultSet(BinaryResultRow) struct. On each call, next() creates a row iterator and uses it to read the next row, placing the row's contents into a struct with scan(). That struct is then returned as the iterator's next item.

The above code makes use of a couple of extra types from MyZql. We need to add those:

const ResultSet = myzql.result.ResultSet;
const BinaryResultRow = myzql.result.BinaryResultRow;

Finally, we need to make findPersons() available at the top level:

pub const findPersons = work_queue.asyncify(worker.findPersons);

When given a function returning an iterator, asyncify() yields a new function that returns an AsyncGenerator on the JavaScript side. It's equivalent to writing this:

pub fn findPersons(generator: zigar.function.GeneratorOf(worker.findPersons)) !void {
    try work_queue.push(worker.findPersons, .{}, generator);
}

To see the code in action, let us modify index.js:

import { closeDatabase, findPersons, openDatabase } from '../zig/mysql.zig';

(async () => {
    openDatabase({
        host: '172.17.0.2',
        username: 'zig_user',
        password: 'password123',
        database: 'testdb',
        threads: 4,
    });
    for await (const person of findPersons()) {
        console.log(person.valueOf());
    }
    closeDatabase();
})();

The output should look like this:

{
  id: 1,
  name: [
     84, 111, 110, 121,
     32,  83, 116,  97,
    114, 107
  ],
  age: 53
}
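
Those numbers in the name field are simply the UTF-8 bytes of the string. Decoding them in plain JavaScript confirms as much:

```javascript
// The name field arrives as raw UTF-8 bytes; decode them into a string.
const bytes = Uint8Array.from([84, 111, 110, 121, 32, 83, 116, 97, 114, 107]);
const name = new TextDecoder().decode(bytes);
console.log(name); // Tony Stark
```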

Okay, the code seems to be working, but the objects returned by the generator aren't convenient to work with. We can make our API friendlier with the help of meta-types:

pub const @"meta(zigar)" = struct {
    pub fn isFieldString(comptime T: type, comptime _: std.meta.FieldEnum(T)) bool {
        return true;
    }

    pub fn isDeclPlain(comptime T: type, comptime _: std.meta.DeclEnum(T)) bool {
        return true;
    }
};

The first function causes all occurrences of []const u8 within structs to be treated as strings, while the second makes all functions return plain JavaScript objects.

Now we can simply do this in the loop:

    for await (const person of findPersons()) {
        console.log(person);
    }

Which yields the following:

{ id: 1, name: 'Tony Stark', age: 53 }

As a final exercise we're going to add a function for inserting new rows. Before we start we'll very quickly fashion a basic web app with an HTML form:

import FormBody from '@fastify/formbody';
import Fastify from 'fastify';
import { PassThrough } from 'stream';
import { closeDatabase, findPersons, openDatabase } from '../zig/mysql.zig';

const fastify = Fastify();
fastify.register(FormBody);
fastify.get('/', async (req, reply) => {
    const stream = new PassThrough();
    reply.type('html');
    reply.send(stream);
    stream.write(`<!doctype html>`);
    stream.write(`<html lang="en"><head><meta charset="UTF-8" /><title>MyZql test</title></head><body>`);
    stream.write(`<form method="POST"><ul>`);
    for await (const person of findPersons()) {
        stream.write(`<li>${person.name} (${person.age})</li>`);
    }
    stream.write(`<li><input name="name"> (<input name="age" size="2">) <button>Add</button></li>`)
    stream.write(`</ul></form>`);
    stream.write(`</body></html>`);
    stream.end();
});
fastify.post('/', async (req, reply) => {
    console.log(req.body);
    reply.redirect('/', 302);
})
fastify.addHook('onClose', () => closeDatabase());

openDatabase({
    host: '172.17.0.2',
    username: 'zig_user',
    password: 'password123',
    database: 'testdb',
    threads: 4,
});
const address = await fastify.listen({ port: 3000 });
console.log(`Listening at ${address}`);

It's as simple as can be:

Browser

At the moment the app dumps the submitted form data into the console. Let us fix that.

First, we'll add the prepared statement:

            pub threadlocal var insert: Prepare(
                \\INSERT INTO person (name, age) VALUES(?, ?)
            ) = .{};

Then the worker function:

    pub fn insertPerson(person: Person) !u32 {
        const stmt = try queries.person.insert.prep_res.expect(.stmt);
        const exe_res = try client.execute(&stmt, .{ person.name, person.age });
        const ok = try exe_res.expect(.ok);
        return @intCast(ok.last_insert_id);
    }

Finally, we make the function callable from JavaScript:

pub const insertPerson = work_queue.promisify(worker.insertPerson);

promisify() works the same way as asyncify(), except that it always returns a function that returns a promise.
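
For comparison with the asyncify() example earlier, the hand-written equivalent would look roughly like this. This is only a sketch: PromiseOf is assumed here by symmetry with zigar.function.GeneratorOf, so check the Zigar documentation for the actual helper name:

```zig
// Sketch only: PromiseOf is assumed by analogy with GeneratorOf.
pub fn insertPerson(person: Person, promise: zigar.function.PromiseOf(worker.insertPerson)) !void {
    try work_queue.push(worker.insertPerson, .{person}, promise);
}
```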

That's it! Now it's just a matter of plugging it into the POST handler:

fastify.post('/', async (req, reply) => {
    const id = await insertPerson(req.body);
    console.log({ id });
    reply.redirect('/', 302);
})

And it works as intended:

Browser

Well, almost. Our app doesn't work if we try to add Thor to the list, due to a poor choice of integer type:

Browser

Configuring the app for deployment

Follow the same steps as described in the hello world example. First, change the import statement:

import { closeDatabase, findPersons, insertPerson, openDatabase } from '../lib/mysql.zigar';

Then create node-zigar.config.json:

{
  "optimize": "ReleaseSmall",
  "modules": {
    "lib/mysql.zigar": {
      "source": "zig/mysql.zig"
    }
  },
  "targets": [
    { "platform": "linux", "arch": "x64" },
    { "platform": "linux", "arch": "arm64" }
  ]
}

Add a command for building the libraries to package.json:

    "build": "node-zigar build"

Then run it:

npm run build

As we've now set the optimization level to ReleaseSmall, our app will no longer complain about Thor's age, since runtime safety is turned off. He'll just be 1024 years too young.

Source code

You can find the complete source code for this example here.

Conclusion

I hope this tutorial gave you some insight into how to work with MySQL/MariaDB using Zig. It's designed to showcase Zigar's new support for the async programming model. With the help of the built-in WorkQueue, taking advantage of the processing power of modern multicore CPUs is just a matter of writing a couple of functions.

If there's anything in the tutorial that you don't quite understand, feel free to post a comment in the project's discussion section.


Additional examples.
