MySQL - chung-leong/zigar GitHub Wiki
In this example we're going to use the MyZql library to store and retrieve data from a MariaDB/MySQL database. It demonstrates how to create an async API.
As always, we begin by creating the basic app skeleton:
mkdir myzql
cd myzql
npm init -y
npm install node-zigar fastify @fastify/formbody
mkdir src zig
Create a customizable copy of build.zig
by running the following command:
cd zig
npx node-zigar build-custom
Then install MyZql. Go to the project's GitHub page and get the URL for the zip package. Fetch it using zig:
zig fetch https://github.com/speed2exe/myzql/archive/d6c1f3ba3fb2896c5bbfaac96750b414f780492d.zip
MyZql currently doesn't have a build.zig.zon
, so we can't use the --save
option. We have to
manually create the build.zig.zon
for this project:
.{
.name = .mysql_demo,
.version = "0.0.0",
.dependencies = .{
.myzql = .{
.url = "https://github.com/speed2exe/myzql/archive/d6c1f3ba3fb2896c5bbfaac96750b414f780492d.zip",
.hash = "12205d9e78e4951112a13ea04ef732b4415f4a60f18f5f886eaa38d3527063e4195f",
},
},
.paths = .{
"build.zig",
"build.zig.zon",
"src",
},
.fingerprint = 0xc120485323d8f57b,
}
Make sure the hash matches the one given by the zig fetch
command.
Open build.zig
and add myzql as a dependency:
const zigar = b.createModule(.{
.root_source_file = .{ .cwd_relative = zig_path ++ "zigar.zig" },
});
const myzql = b.dependency("myzql", .{}).module("myzql");
And insert it into the list of imports:
const imports = [_]std.Build.Module.Import{
.{ .name = "zigar", .module = zigar },
.{ .name = "myzql", .module = myzql },
};
Save the following code as mysql.zig
:
const std = @import("std");
const zigar = @import("zigar");
const myzql = @import("myzql");
const Conn = myzql.conn.Conn;
const DatabaseParams = struct {
host: []const u8,
port: u16 = 3306,
username: [:0]const u8,
password: [:0]const u8,
database: [:0]const u8,
threads: usize = 1,
};
var work_queue: zigar.thread.WorkQueue(thread_ns) = .{};
var gpa = std.heap.DebugAllocator(.{}).init;
const allocator = gpa.allocator();
pub fn openDatabase(params: DatabaseParams) !void {
try work_queue.init(.{
.allocator = allocator,
.n_jobs = params.threads,
.thread_start_params = .{params},
});
try work_queue.wait();
}
pub fn closeDatabase(promise: zigar.function.Promise(void)) void {
work_queue.deinitAsync(promise);
}
const thread_ns = struct {
threadlocal var client: Conn = undefined;
pub fn onThreadStart(params: DatabaseParams) !void {
const address = try std.net.Address.parseIp(params.host, params.port);
client = try Conn.init(
zigar.mem.getDefaultAllocator(),
&.{
.username = params.username,
.password = params.password,
.database = params.database,
.address = address,
},
);
errdefer client.deinit();
try client.ping();
}
pub fn onThreadEnd() void {
client.deinit();
}
};
zigar.thread.WorkQueue
is a parameterized struct
that contains a non-blocking queue and a
thread pool. When we push a work unit onto the queue, one of its threads will pick it up and
perform the work. The types of work that can be performed are contained in the namespace passed to
WorkQueue()
, thread_ns
in this case. At the moment we only have the initialization function
onThreadStart()
and the clean-up function onThreadEnd()
. As their names suggest, these are run
in each thread when it starts and when it ends.
onThreadStart()
tries to connect to the database. If successful, it stores the connection in a
threadlocal
variable. It receives parameters from WorkQueue
, which in turn receives them from
openDatabase()
.
openDatabase()
calls work_queue.wait()
to ensure that all threads have successfully opened a
connection. If any call to onThreadStart()
had resulted in an error, wait()
would return
that error.
onThreadEnd()
closes each thread's database connection. Since it accepts no arguments, there
was no need to provide thread_end_params
to work_queue.init()
.
To test our code we need a working MySQL or MariaDB server. If you have Docker installed on your computer, getting one up is easy enough:
docker run --detach --name some-mariadb --env MARIADB_ALLOW_EMPTY_ROOT_PASSWORD=1 mariadb:latest
The following command yields the IP address of the server:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' some-mariadb
172.17.0.2
And this one is for connecting to the server:
docker run -it --rm mariadb mariadb --host 172.17.0.2 --user root --disable-ssl
Run the following SQL script to create a test database:
CREATE USER zig_user IDENTIFIED BY 'password123';
GRANT SELECT, INSERT, UPDATE, DELETE ON *.* TO zig_user;
CREATE DATABASE testdb;
USE testdb;
CREATE TABLE person (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(255),
age INT
);
INSERT INTO person (name, age) VALUES ('Tony Stark', 53);
EXIT;
Now, create index.js
in the src
sub-directory:
import { closeDatabase, openDatabase } from '../zig/mysql.zig';
openDatabase({
host: '172.17.0.2',
username: 'zig_user',
password: 'password123',
database: 'testdb',
threads: 4,
});
closeDatabase();
Add a script command to package.json
:
"scripts": {
"start": "node --loader=node-zigar --no-warnings src/index.js",
Then run the script:
npm run start
After compiling the Zig code, the script will try to connect to the database, then promptly exit. If there's a problem, you might see something like this:
warning: error packet: (code: 1045, message: Access denied for user 'zig_user'@'172.17.0.1' (using password: YES))
warning: error packet: (code: 1045, message: Access denied for user 'zig_user'@'172.17.0.1' (using password: YES))
warning: error packet: (code: 1045, message: Access denied for user 'zig_user'@'172.17.0.1' (using password: YES))
warning: error packet: (code: 1045, message: Access denied for user 'zig_user'@'172.17.0.1' (using password: YES))
node:internal/process/esm_loader:40
internalBinding('errors').triggerUncaughtException(
^
[Error: Error packet] { number: 75 }
Let us now prepare an actual SQL statement. First, we'll grab the PrepareResult
struct from
MyZql:
const Conn = myzql.conn.Conn;
const PrepareResult = myzql.result.PrepareResult;
Then in the thread_ns
namespace we add the following function:
fn Prepare(comptime sql: []const u8) type {
return struct {
comptime sql: []const u8 = sql,
prep_res: PrepareResult = undefined,
};
}
Then we add a threadlocal
variable for the table person
:
const queries = struct {
pub const person = struct {
pub threadlocal var select: Prepare(
\\SELECT * FROM person
) = .{};
};
};
Our plan is to keep variables related to a given table in a separate namespace for the sake of
neatness. The PrepareResult
structs need to be threadlocal
since they are specific to the
connection employed by each thread. The SQL statements themselves don't need to be. That's why
we're storing them in a comptime field.
In case you've never used the feature before, \\
is how multiline string literals are written in
Zig. SQL statements are often best expressed in this manner.
In onThreadStart()
we loop through all statements and prepare them:
pub fn onThreadStart(params: DatabaseParams) !void {
const allocator = zigar.mem.getDefaultAllocator();
const address = try std.net.Address.parseIp(params.host, params.port);
client = try Conn.init(
allocator,
&.{
.username = params.username,
.password = params.password,
.database = params.database,
.address = address,
},
);
errdefer client.deinit();
inline for (comptime std.meta.declarations(queries)) |qs_decl| {
const query_set = @field(queries, qs_decl.name);
inline for (comptime std.meta.declarations(query_set)) |q_decl| {
const query = &@field(query_set, q_decl.name);
query.prep_res = try client.prepare(allocator, query.sql);
errdefer query.prep_res.deinit(allocator);
_ = try query.prep_res.expect(.stmt);
}
}
}
And we deinitialize them in onThreadEnd()
:
pub fn onThreadEnd() void {
const allocator = zigar.mem.getDefaultAllocator();
inline for (comptime std.meta.declarations(queries)) |qs_decl| {
const query_set = @field(queries, qs_decl.name);
inline for (comptime std.meta.declarations(query_set)) |q_decl| {
const query = @field(query_set, q_decl.name);
query.prep_res.deinit(allocator);
}
}
client.deinit();
}
Run npm run start
again to verify that the code works.
Now let us start pulling data from the database. First, we'll define a struct for
the table person
. Add the following definition at the top level:
pub const Person = struct {
id: u32 = 0,
name: []const u8,
age: u8,
};
Then in the thread_ns
namespace, add this little function:
pub fn findPersons(allocator: std.mem.Allocator) !StructIterator(Person) {
_ = allocator; // forwarded from the work queue; unused by this iterator
return try StructIterator(Person).init(queries.person.select.prep_res, .{});
}
It simply initializes the StructIterator
struct, which performs the actual work:
fn StructIterator(comptime T: type) type {
return struct {
rows: ResultSet(BinaryResultRow),
pub fn init(prep_res: PrepareResult, params: anytype) !@This() {
const stmt = try prep_res.expect(.stmt);
const query_res = try client.executeRows(&stmt, params);
const rows = try query_res.expect(.rows);
return .{ .rows = rows };
}
pub fn next(self: *@This()) !?T {
const rows_iter = self.rows.iter();
if (try rows_iter.next()) |row| {
var result: T = undefined;
try row.scan(&result);
return result;
} else {
return null;
}
}
};
}
init()
executes the query and obtains a ResultSet(BinaryResultRow)
struct. next()
then
creates a row iterator on each call and uses it to read the next row. It places the row's
contents into a struct using scan()
. That is then returned as the iterator's next item.
The above code makes use of a couple of extra types from MyZql. We need to add those:
const ResultSet = myzql.result.ResultSet;
const BinaryResultRow = myzql.result.BinaryResultRow;
Finally, we need a function at the top level for pushing work units onto the work queue:
pub fn findPersons(allocator: std.mem.Allocator, generator: zigar.function.GeneratorOf(thread_ns.findPersons)) !void {
try work_queue.push(thread_ns.findPersons, .{allocator}, generator);
}
WorkQueue.push()
accepts three arguments: a function in the namespace given to WorkQueue()
, a
tuple containing arguments for that function, and an optional destination for its return value,
which can be either a Promise
or a Generator
struct. In this case, because
thread_ns.findPersons()
returns an iterator, a generator is expected.
Generator
contains an anyopaque
pointer and a function pointer. It serves as an interface to
an AsyncGenerator
object on the JavaScript side. The function pointer is of the type
fn (*anyopaque, error{ ... }!?T) bool
, with the second argument being the generator's "payload".
When null is passed, the generator shuts down. When an error is passed, that gets thrown on the
JavaScript side. The return value signals whether the caller expects more data. It'll be false
if
a break
, return
, or throw
occurs within a for await (...)
loop.
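That handshake can be sketched with a plain JavaScript async generator standing in for the Zig-side producer (illustration only, no Zigar involved): breaking out of a for await loop shuts the producer down, which is the moment the Generator's function pointer would see a false return value.

```javascript
// A plain JS async generator standing in for the Zig-side producer.
async function* numbers() {
  let i = 0;
  try {
    while (true) yield i++;
  } finally {
    // Runs when the consumer breaks out of the loop, just as
    // the Zig-side callback would return false at that point.
    console.log('producer shut down');
  }
}

const seen = [];
for await (const n of numbers()) {
  seen.push(n);
  if (n >= 2) break; // invokes the generator's return() under the hood
}
console.log(seen); // [ 0, 1, 2 ]
```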
GeneratorOf()
is a convenience function that defines a Generator
using the given function's
return value. It automatically merges the error set of the function with the error set of the
iterator's next()
method.
To see the code in action, let us modify index.js
:
import { closeDatabase, findPersons, openDatabase } from '../zig/mysql.zig';
(async () => {
openDatabase({
host: '172.17.0.2',
username: 'zig_user',
password: 'password123',
database: 'testdb',
threads: 4,
});
for await (const person of findPersons()) {
console.log(person.valueOf());
}
closeDatabase();
})();
The output should look like this:
{
id: 1,
name: [
84, 111, 110, 121,
32, 83, 116, 97,
114, 107
],
age: 53
}
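The name field comes back as raw bytes because the Zig type is []const u8. Decoding those bytes as UTF-8 yields the expected text; the .string accessor used later in the web app is a convenient shortcut for the same conversion:

```javascript
// The byte values from the output above are UTF-8 code units.
const bytes = Uint8Array.of(84, 111, 110, 121, 32, 83, 116, 97, 114, 107);
const name = new TextDecoder().decode(bytes);
console.log(name); // Tony Stark
```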
Okay, as a final exercise we're going to add a function for inserting new rows. Before we start we'll very quickly fashion a basic web app with an HTML form:
import FormBody from '@fastify/formbody';
import Fastify from 'fastify';
import { PassThrough } from 'stream';
import { closeDatabase, findPersons, openDatabase } from '../zig/mysql.zig';
const fastify = Fastify();
fastify.register(FormBody);
fastify.get('/', async (req, reply) => {
const stream = new PassThrough();
reply.type('html');
reply.send(stream);
stream.write(`<!doctype html>`);
stream.write(`<html lang="en"><head><meta charset="UTF-8" /><title>MyZql test</title></head><body>`);
stream.write(`<form method="POST"><ul>`);
for await (const person of findPersons()) {
stream.write(`<li>${person.name.string} (${person.age})</li>`);
}
stream.write(`<li><input name="name"> (<input name="age" size="2">) <button>Add</button></li>`)
stream.write(`</ul></form>`);
stream.write(`</body></html>`);
stream.end();
});
fastify.post('/', async (req, reply) => {
console.log(req.body);
reply.redirect('/', 302);
})
fastify.addHook('onClose', () => closeDatabase());
openDatabase({
host: '172.17.0.2',
username: 'zig_user',
password: 'password123',
database: 'testdb',
threads: 4,
});
const address = await fastify.listen({ port: 3000 });
console.log(`Listening at ${address}`);
It's as simple as can be.
Right now the app just dumps the submitted form data into the console. Let us fix that.
First, we'll add the prepared statement:
pub threadlocal var insert: Prepare(
\\INSERT INTO person (name, age) VALUES(?, ?)
) = .{};
Then the worker function:
pub fn insertPerson(person: Person) !u32 {
const stmt = try queries.person.insert.prep_res.expect(.stmt);
const exe_res = try client.execute(&stmt, .{ person.name, person.age });
const ok = try exe_res.expect(.ok);
return @intCast(ok.last_insert_id);
}
And finally the work-submitting function, callable from JavaScript:
pub fn insertPerson(person: Person, promise: zigar.function.PromiseOf(thread_ns.insertPerson)) !void {
try work_queue.push(thread_ns.insertPerson, .{person}, promise);
}
Here we're using PromiseOf()
to define a Promise(!u32)
.
That's it! Now it's just a matter of plugging it into the POST handler:
fastify.post('/', async (req, reply) => {
const id = await insertPerson(req.body);
console.log({ id });
reply.redirect('/', 302);
})
And it works as intended.
Well, almost. Our app doesn't work if we try to add Thor to the list, due to a poor choice of integer type.
Follow the same steps as described in the hello world example. First change the import statement:
import { closeDatabase, findPersons, insertPerson, openDatabase } from '../lib/mysql.zigar';
Then create node-zigar.config.json
:
{
"optimize": "ReleaseSmall",
"sourceFiles": {
"lib/mysql.zigar": "zig/mysql.zig"
},
"targets": [
{ "platform": "linux", "arch": "x64" },
{ "platform": "linux", "arch": "arm64" }
]
}
Add a command for building the libraries to package.json
:
"build": "node-zigar build"
Then run it:
npm run build
As we've now set the optimization level to ReleaseSmall
, our app will no longer complain about
Thor's age since runtime safety is turned off. He'll just be 1024 years too young.
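To see why, note that with safety checks compiled out, an out-of-range conversion to u8 in practice just keeps the low 8 bits of the value. The same truncation can be reproduced in JavaScript with a Uint8Array (1100 is a hypothetical age, chosen here for illustration):

```javascript
// Storing an out-of-range value in a Uint8Array keeps only the
// low 8 bits, mirroring what the unchecked u8 conversion does.
const ages = new Uint8Array(1);
ages[0] = 1100;       // hypothetical age typed into the form
console.log(ages[0]); // 76, which is exactly 1024 years too young
```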
You can find the complete source code for this example here.
I hope this tutorial gave you some insight into how to work with MySQL/MariaDB using Zig. It's
designed to showcase Zigar's new support for the async programming model. With the help of the
builtin WorkQueue
, taking advantage of the processing power of modern multicore CPUs is
just a matter of writing a couple of functions.
If there's anything in the tutorial that you don't quite understand, feel free to post a comment in this project's discussion section.