Image filter (Vite)

In this example, we're going to build a web app that applies a filter to an image. We'll create two versions: one that processes the image in the UI thread and another that offloads the task to web workers.

This example makes use of WebAssembly threads. As support in Zig 0.14.0 is still immature, you'll need to patch the standard library before continuing.

Creating the app

First, we'll create the basic skeleton:

npm create vite@latest
Need to install the following packages:
create-vite@<version>
Ok to proceed? (y) y
✔ Project name: … filter
✔ Select a framework: › React
✔ Select a variant: › JavaScript + SWC
cd filter
npm install
npm install --save-dev rollup-plugin-zigar
mkdir zig img

Add the plugin to vite.config.js:

import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react-swc'
import zigar from 'rollup-plugin-zigar'

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [react(), zigar({ topLevelAwait: false })],
})

Replace the code in App.jsx with the following:

import { useCallback, useEffect, useRef, useState } from 'react';
import SampleImage from '../img/sample.png';
import './App.css';

function App() {
  const srcCanvasRef = useRef();
  const dstCanvasRef = useRef();
  const fileInputRef = useRef();
  const [ bitmap, setBitmap ] = useState();
  const [ intensity, setIntensity ] = useState(0.3);

  const onOpenClick = useCallback(() => {
    fileInputRef.current.click();
  }, []);
  const onFileChange = useCallback(async (evt) => {
    const [ file ] = evt.target.files;
    if (file) {
      const bitmap = await createImageBitmap(file);
      setBitmap(bitmap);
    }
  }, []);
  const onRangeChange = useCallback((evt) => {
    setIntensity(evt.target.value);
  }, [])
  useEffect(() => {
    // load initial sample image
    (async () => {
      const img = new Image();
      img.src = SampleImage;
      await img.decode();
      const bitmap = await createImageBitmap(img);
      setBitmap(bitmap);
    })();
  }, [ SampleImage ]);
  useEffect(() => {
    // update bitmap after user has selected a different one
    if (bitmap) {
      const srcCanvas = srcCanvasRef.current;
      srcCanvas.width = bitmap.width;
      srcCanvas.height = bitmap.height;
      const ctx = srcCanvas.getContext('2d', { willReadFrequently: true });
      ctx.drawImage(bitmap, 0, 0);
    }
  }, [ bitmap ]);
  useEffect(() => {
    // update the result when the bitmap or intensity parameter changes
    if (bitmap) {
      const srcCanvas = srcCanvasRef.current;
      const dstCanvas = dstCanvasRef.current;
      const srcCTX = srcCanvas.getContext('2d', { willReadFrequently: true });
      const { width, height } = srcCanvas;
      const srcImageData = srcCTX.getImageData(0, 0, width, height);
      const dstImageData = srcImageData;
      dstCanvas.width = width;
      dstCanvas.height = height;
      const dstCTX = dstCanvas.getContext('2d');
      dstCTX.putImageData(dstImageData, 0, 0);
    }
  }, [ bitmap, intensity ]);
  return (
    <div className="App">
      <div className="nav">
        <span className="button" onClick={onOpenClick}>Open</span>
        <input ref={fileInputRef} type="file" className="hidden" accept="image/*" onChange={onFileChange}/>
      </div>
      <div className="contents">
        <div className="pane align-right">
          <canvas ref={srcCanvasRef}></canvas>
        </div>
        <div className="pane align-left">
          <canvas ref={dstCanvasRef}></canvas>
          <div className="controls">
            Intensity: <input type="range" min={0} max={1} step={0.0001} value={intensity} onChange={onRangeChange}/>
          </div>
        </div>
      </div>
    </div>
  )
}

export default App

Basically, we have two HTML canvases in our app. We load the initial image with the first useEffect hook, placing the resulting bitmap into the state variable bitmap:

  useEffect(() => {
    // load initial sample image
    (async () => {
      const img = new Image();
      img.src = SampleImage;
      await img.decode();
      const bitmap = await createImageBitmap(img);
      setBitmap(bitmap);
    })();
  }, [ SampleImage ]);

The async IIFE is necessary here, as useEffect doesn't expect a promise from the callback function. We need to put SampleImage in the dependency array because it can change due to Vite's Hot Module Replacement (HMR) feature. We want the image to reload when sample.png is changed.
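For comparison, this is roughly what the incorrect version would look like. An async callback always returns a promise, which React would mistake for a cleanup function and warn about in development:

// wrong: useEffect would treat the returned promise as a cleanup function
useEffect(async () => {
  const img = new Image();
  img.src = SampleImage;
  await img.decode();
  setBitmap(await createImageBitmap(img));
}, [ SampleImage ]);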

The second useEffect hook, activated when bitmap changes, draws the bitmap on the first canvas:

  useEffect(() => {
    // update bitmap after user has selected a different one
    if (bitmap) {
      const srcCanvas = srcCanvasRef.current;
      srcCanvas.width = bitmap.width;
      srcCanvas.height = bitmap.height;
      const ctx = srcCanvas.getContext('2d', { willReadFrequently: true });
      ctx.drawImage(bitmap, 0, 0);
    }
  }, [ bitmap ]);

The third useEffect hook then obtains an ImageData object from the first canvas and draws it on the second canvas:

  useEffect(() => {
    // update the result when the bitmap or intensity parameter changes
    if (bitmap) {
      const srcCanvas = srcCanvasRef.current;
      const dstCanvas = dstCanvasRef.current;
      const srcCTX = srcCanvas.getContext('2d', { willReadFrequently: true });
      const { width, height } = srcCanvas;
      const srcImageData = srcCTX.getImageData(0, 0, width, height);
      const dstImageData = srcImageData;
      dstCanvas.width = width;
      dstCanvas.height = height;
      const dstCTX = dstCanvas.getContext('2d');
      dstCTX.putImageData(dstImageData, 0, 0);
    }
  }, [ bitmap, intensity ]);

We need a new index.css:

:root {
  font-family: Inter, system-ui, Avenir, Helvetica, Arial, sans-serif;
  line-height: 1.5;
  font-weight: 400;

  color-scheme: light dark;
  color: rgba(255, 255, 255, 0.87);
  background-color: #242424;

  font-synthesis: none;
  text-rendering: optimizeLegibility;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}

body {
  margin: 0;
  display: flex;
  flex-direction: column;
  place-items: center;
  min-width: 320px;
  min-height: 100vh;
}

And App.css:

#root {
  flex: 1 1 100%;
  width: 100%;
}

.App {
  display: flex;
  position: relative;
  flex-direction: column;
  width: 100%;
  height: 100%;
}

.App .nav {
  position: fixed;
  width: 100%;
  color: #000000;
  background-color: #999999;
  font-weight: bold;
  flex: 0 0 auto;
  padding: 2px 2px 1px 2px;
}

.App .nav .button {
  padding: 2px;
  cursor: pointer;
}

.App .nav .button:hover {
  color: #ffffff;
  background-color: #000000;
  padding: 2px 10px 2px 10px;
}

.App .contents {
  display: flex;
  width: 100%;
  margin-top: 2em;
}

.App .contents .pane {
  flex: 1 1 50%;
  padding: 5px 5px 5px 5px;
}

.App .contents .pane CANVAS {
  border: 1px dotted rgba(255, 255, 255, 0.10);
  max-width: 100%;
  max-height: 90vh;
}

.App .contents .pane .controls INPUT {
  vertical-align: middle;
  width: 50%;
}

@media screen and (max-width: 600px) {
  .App .contents {
    flex-direction: column;
  }

  .App .contents .pane {
    padding: 1px 2px 1px 2px;
  }

  .App .contents .pane .controls {
    padding-left: 4px;
  }
}

.hidden {
  position: absolute;
  visibility: hidden;
  z-index: -1;
}

.align-left {
  text-align: left;
}

.align-right {
  text-align: right;
}

Finally, download the following image into img as sample.png (or choose an image of your own):

Sample image

With everything in place, we can start Vite in dev mode:

npm run dev

You should see the following in the browser:

Without filter

Nothing will happen when you move the slider, as we haven't yet implemented the filtering functionality. Now that we can see the basic code for our app is working, let's proceed to implement it.

First, download sepia.zig into the zig sub-directory.

The code in question was translated from a Pixel Bender filter using pb2zig. Consult the intro page for an explanation of how it works.
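To give a rough sense of what the kernel does, here is an illustrative sketch (not the actual pb2zig output): a sepia filter runs each pixel through a color matrix and blends the result with the original pixel according to the intensity parameter:

// illustrative sketch only; the real code in sepia.zig is generated by pb2zig
fn sepia(pixel: @Vector(4, f32), intensity: f32) @Vector(4, f32) {
    const r = pixel[0] * 0.393 + pixel[1] * 0.769 + pixel[2] * 0.189;
    const g = pixel[0] * 0.349 + pixel[1] * 0.686 + pixel[2] * 0.168;
    const b = pixel[0] * 0.272 + pixel[1] * 0.534 + pixel[2] * 0.131;
    const toned = @Vector(4, f32){ r, g, b, pixel[3] };
    // interpolate between the original and the sepia-toned pixel
    return pixel + (toned - pixel) * @as(@Vector(4, f32), @splat(intensity));
}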

At the top of App.jsx, insert an import statement for the function createOutput():

import { createOutput } from '../zig/sepia.zig';

In our third useEffect hook, we make use of it:

  useEffect(() => {
    // update the result when the bitmap or intensity parameter changes
    (async() => {
      if (bitmap) {
        const srcCanvas = srcCanvasRef.current;
        const dstCanvas = dstCanvasRef.current;
        const srcCTX = srcCanvas.getContext('2d', { willReadFrequently: true });
        const { width, height } = srcCanvas;
        const srcImageData = srcCTX.getImageData(0, 0, width, height);
        const input = { src: srcImageData };
        const params = { intensity };
        const output = await createOutput(width, height, input, params);
        const dstImageData = new ImageData(output.dst.data.clampedArray, width, height);
        dstCanvas.width = width;
        dstCanvas.height = height;
        const dstCTX = dstCanvas.getContext('2d');
        dstCTX.putImageData(dstImageData, 0, 0);
      }
    })();
  }, [ bitmap, intensity, createOutput ]);

createOutput() has the following declaration:

pub fn createOutput(
    allocator: std.mem.Allocator,
    width: u32,
    height: u32,
    input: Input,
    params: Parameters,
) !Output

allocator is automatically provided by Zigar. We get width and height from the source canvas. params contains a single f32: intensity. We initialize it using our state variable of the same name, which changes when we move the slider.
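Parameters is likewise generated from the kernel's parameter declarations. Conceptually, for this filter it boils down to something like the following (a sketch; the actual struct is produced by pb2zig and carries the default value from the original Pixel Bender source):

pub const Parameters = struct {
    intensity: f32,
};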

Input is a parameterized type:

pub const Input = KernelInput(u8, kernel);

Which expands to:

pub const Input = struct {
    src: Image(u8, 4, false),
};

Then further to:

pub const Input = struct {
    src: struct {
        pub const Pixel = @Vector(4, u8);
        pub const FPixel = @Vector(4, f32);
        pub const channels = 4;

        data: []const Pixel,
        width: u32,
        height: u32,
        colorSpace: ColorSpace = .srgb,
        offset: usize = 0,
    },
};

Image was purposely defined so that it is compatible with the browser's ImageData. Its data field is []const @Vector(4, u8), a slice pointer that can accept a Uint8ClampedArray as its target without any conversion. We can therefore simply pass { src: srcImageData } to createOutput() as input.

Like Input, Output is a parameterized type. It too can potentially contain multiple images. In this case (and most cases), there's only one:

pub const Output = struct {
    dst: struct {
        pub const Pixel = @Vector(4, u8);
        pub const FPixel = @Vector(4, f32);
        pub const channels = 4;

        data: []Pixel,
        width: u32,
        height: u32,
        colorSpace: ColorSpace = .srgb,
        offset: usize = 0,
    },
};

dst.data points to memory allocated from allocator. In Zigar, array objects holding numbers have the property typedArray, which provides a matching TypedArray view of their data. When that view is a Uint8Array, the object also has the property clampedArray, which yields a Uint8ClampedArray. We use that to construct an ImageData object:

        const dstImageData = new ImageData(output.dst.data.clampedArray, width, height);
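Both properties should be views over the module's WebAssembly memory rather than copies. A small sketch (with hypothetical variable names) of the relationship:

const pixels = output.dst.data;
const u8View = pixels.typedArray;      // Uint8Array
const clamped = pixels.clampedArray;   // Uint8ClampedArray over the same bytes
console.log(u8View.length === clamped.length);  // true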

We have to wrap everything in an async IIFE because createOutput() may return a promise, a consequence of top-level await being disabled to accommodate older browsers.

Now our app does what it's supposed to:

With filter

Asynchronous processing

Modern CPUs typically have more than one core. We can take advantage of the additional computational power by performing data processing in multiple threads. Doing so also means the main thread of the browser won't get blocked, helping to keep the UI responsive.

Multithreading is not enabled by default for WebAssembly. To enable it, add the multithreaded option in vite.config.js:

export default defineConfig({
  plugins: [react(), zigar({ topLevelAwait: false, multithreaded: true })],
})

Then replace the import statement in App.jsx:

import { createOutput } from '../zig/sepia.zig';

with the following:

import { createOutputAsync, startThreadPool, stopThreadPoolAsync } from '../zig/sepia.zig';

In the useEffect hook, change the function being called:

          const output = await createOutputAsync(width, height, input, params);

Then add an additional useEffect hook:

  useEffect(() => {
    startThreadPool(navigator.hardwareConcurrency);
    return () => stopThreadPoolAsync();
  }, [ startThreadPool, stopThreadPoolAsync ]);

Again, startThreadPool and stopThreadPoolAsync are in the dependencies array given to useEffect because they can change due to HMR. Having them there ensures that threads created by a module that has become outdated get shut down.

After saving the file, you'll notice the app no longer works. In the development console you'll find the following message:

Dev console

Multithreading requires the use of shared memory, a feature the browser only makes available when the document is in a secure, cross-origin isolated context. Two HTTP headers must be set.
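You can verify whether a page meets this requirement from the dev console, using the standard crossOriginIsolated global:

// prints true once the two headers described below are in place
console.log(crossOriginIsolated);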

During development, we can tell Vite to provide them:

export default defineConfig({
  plugins: [react(), zigar({ topLevelAwait: false, multithreaded: true })],
  server: {
    headers: {
      'Cross-Origin-Opener-Policy': 'same-origin',
      'Cross-Origin-Embedder-Policy': 'require-corp',
    }
  },
})

When the app is eventually deployed, your web server must send the same headers in order for multithreading to work.
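As a sketch, a minimal Node.js server using Express (an assumption on my part; any server capable of setting response headers will do) could serve the production build like this:

import express from 'express';

const app = express();
// the same two headers Vite supplies during development
app.use((req, res, next) => {
  res.set('Cross-Origin-Opener-Policy', 'same-origin');
  res.set('Cross-Origin-Embedder-Policy', 'require-corp');
  next();
});
app.use(express.static('dist'));  // serve the build output
app.listen(8080);                 // arbitrary port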

After saving the change to vite.config.js, the app still isn't going to work. You're greeted by a different error instead:

Dev console

The problem is due to React's Strict Mode, under which a component's useEffect hooks are activated twice on mount. The call to startThreadPool() is immediately followed by a call to stopThreadPoolAsync(). Because it takes time for worker threads to shut down and we aren't allowed to block the browser's main thread, when startThreadPool() gets called again the pool is still in the middle of being deinitialized, hence the error. A second error is emitted by createOutputAsync() when it finds the pool in an unready state.

Removing the <React.StrictMode> tag in main.jsx would get the app to a somewhat working state. As soon as you use the slider though, a huge number of errors will appear in the dev console, caused by an excessive number of calls to createOutputAsync().

The underlying problem is that we have async functions running in parallel. We need something that forces the calls to run one after another instead.

At the bottom of App.jsx, add the following class:

class AsyncTaskManager {
  currentTask = null;

  async call(cb) {
    const controller = (cb?.length > 0) ? new AbortController : null;
    const promise = this.perform(cb, controller?.signal);
    const thisTask = this.currentTask = { controller, promise };
    try {
      return await thisTask.promise;
    } finally {
      if (thisTask === this.currentTask) this.currentTask = null;
    }
  }

  async perform(cb, signal) {
    if (this.currentTask) {
      this.currentTask.controller?.abort();
      await this.currentTask.promise?.catch(() => {});
      // throw error now if the task was aborted before the function is called
      if (signal?.aborted) throw new Error('Aborted');
    }
    return cb?.(signal);
  }
}
const atm = new AsyncTaskManager();

call() creates an AbortController when the callback function accepts an AbortSignal as an argument. perform() in turn uses it to abort the previous call, then waits for the promise received earlier to settle. Only after that has happened does it invoke the callback.

In the useEffect hook, change the call to createOutputAsync():

      try {
        // ...
        const output = await atm.call(signal => createOutputAsync(width, height, input, params, { signal }));
        // ...
      } catch (err) {
        if (err.message != 'Aborted') {
          console.error(err);
        }
      }

As an error gets thrown when a call is interrupted, we need to wrap everything in a try/catch.

In the thread pool useEffect hook, we make the same change:

  useEffect(() => {
    atm.call(() => startThreadPool(navigator.hardwareConcurrency));
    return () => atm.call(() => stopThreadPoolAsync());
  }, [ startThreadPool, stopThreadPoolAsync ]);

With this mechanism in place preventing overlapping async calls, our app should work correctly.

Now, let us examine our Zig code. We'll start with startThreadPool():

pub fn startThreadPool(count: u32) !void {
    try work_queue.init(.{
        .allocator = internal_allocator,
        .stack_size = 65536,
        .n_jobs = count,
    });
}

work_queue is a struct containing a thread pool and a non-blocking queue. It has the following declaration:

var work_queue: WorkQueue(thread_ns) = .{};

The queue stores requests for function invocation and runs them in separate threads. thread_ns is a namespace containing the public functions that can be invoked through the queue. For this example we only have one:

const thread_ns = struct {
    pub fn processSlice(signal: AbortSignal, width: u32, start: u32, count: u32, input: Input, output: Output, params: Parameters) !Output {
        var instance = kernel.create(input, output, params);
        if (@hasDecl(@TypeOf(instance), "evaluateDependents")) {
            instance.evaluateDependents();
        }
        const end = start + count;
        instance.outputCoord[1] = start;
        while (instance.outputCoord[1] < end) : (instance.outputCoord[1] += 1) {
            instance.outputCoord[0] = 0;
            while (instance.outputCoord[0] < width) : (instance.outputCoord[0] += 1) {
                instance.evaluatePixel();
                if (signal.on()) return error.Aborted;
            }
        }
        return output;
    }
};

The logic is pretty straightforward. We initialize an instance of the kernel, then loop through all coordinate pairs, running evaluatePixel() for each of them. After each pixel we check the abort signal to see if termination has been requested.

createOutputAsync() pushes multiple processSlice() call requests into the work queue to process an image in parallel. Let us first look at its arguments:

pub fn createOutputAsync(allocator: Allocator, promise: Promise, signal: AbortSignal, width: u32, height: u32, input: Input, params: Parameters) !void {

Allocator, Promise, and AbortSignal are special parameters that Zigar provides automatically. On the JavaScript side, the function has only four required arguments. It will also accept a fifth argument: options, which may contain an alternate allocator, a callback function, and an abort signal.
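Seen from JavaScript, a call with the options argument spelled out looks something like this (a sketch restating the call we made in the useEffect hook):

const controller = new AbortController();
const output = await createOutputAsync(width, height, input, params, {
  signal: controller.signal,  // maps to the AbortSignal parameter
});
// calling controller.abort() would cause the promise to reject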

The function starts out by allocating memory for the output struct:

    var output: Output = undefined;
    // allocate memory for output image
    const fields = std.meta.fields(Output);
    var allocated: usize = 0;
    errdefer inline for (fields, 0..) |field, i| {
        if (i < allocated) {
            allocator.free(@field(output, field.name).data);
        }
    };
    inline for (fields) |field| {
        const ImageT = @TypeOf(@field(output, field.name));
        const data = try allocator.alloc(ImageT.Pixel, width * height);
        @field(output, field.name) = .{
            .data = data,
            .width = width,
            .height = height,
        };
        allocated += 1;
    }

Then it divides the image into multiple slices, partitioning the given Promise struct as well:

    // add work units to queue
    const workers: u32 = @intCast(@max(1, work_queue.thread_count));
    const scanlines: u32 = height / workers;
    const slices: u32 = if (scanlines > 0) workers else 1;
    const multipart_promise = try promise.partition(internal_allocator, slices);

partition() creates a new promise that fulfills the original promise when its resolve() method has been called a certain number of times. It is used as the output argument for work_queue.push():

    var slice_num: u32 = 0;
    while (slice_num < slices) : (slice_num += 1) {
        const start = scanlines * slice_num;
        const count = if (slice_num < slices - 1) scanlines else height - (scanlines * slice_num);
        try work_queue.push(thread_ns.processSlice, .{ signal, width, start, count, input, output, params }, multipart_promise);
    }
}

The first argument to push() is the function to be invoked. The second is a tuple containing arguments. The third is the output argument. The return value of processSlice(), either the Output struct or error.Aborted, will be fed to this promise's resolve() method. When the last slice has been processed, the promise on the JavaScript side becomes fulfilled.

Let us look at one last function, stopThreadPoolAsync():

pub fn stopThreadPoolAsync(promise: zigar.function.Promise(void)) void {
    work_queue.deinitAsync(promise);
}

Shutdown of the work queue can only happen asynchronously, since blocking the main thread could lead to a deadlock. In any event, blocking the main thread is prohibited in the web browser.

Creating a production build

Simply run the build script:

npm run build

After which you can preview the production build of the app:

npm run preview

Without the overhead of Zig runtime safety checks, the app should be much snappier. Note that nothing stops you from adding optimize: 'ReleaseSmall' to the plugin options so that you get full performance from the WASM code during development as well.
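With that option, vite.config.js would look like this:

export default defineConfig({
  plugins: [react(), zigar({ topLevelAwait: false, multithreaded: true, optimize: 'ReleaseSmall' })],
})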

Source code

You can find the complete source code for this example here.

Conclusion

A major advantage of using Zig for a task like image processing is that the same code can be deployed both on the browser and on the server. After a user has made some changes to an image on the frontend, the backend can apply the exact same effect using the same code. Consult the Node version of this example to learn how to do it.

The image filter employed for this example is very rudimentary. Check out pb2zig's project page to see more advanced code.

That's it for now. I hope this tutorial is enough to get you started with using Zigar.


Additional examples.
