
Writing a Backend

Notify Backends are used to produce events about a given set of filesystem objects. They do this either directly by polling the filesystem, or by wrapping some platform that provides these events. For example, many modern OSes have a kernel facility for this purpose, and some specialised filesystems may also have their own.

A Notify backend only needs to implement as much as its underlying mechanism is capable of, and does not need to fill in for stuff it can’t do. That’s Notify’s job. The backend’s job is to watch stuff.

If you’re reading this, you probably want to write your own backend, for play or for serious (or for serious play). Notify mandates a fairly strict interface for backends, both through typing and through expected semantics. There are also various other things to consider when implementing a backend. This document’s job is to guide you through the process.

Don’t worry: it’s not as hard as it sounds! You don’t need to be awesome at Rust, or to know a lot about your platform, or even to understand how Notify works behind the scenes. While I wouldn’t recommend it for an absolute beginner, Notify’s approach was designed to be easy on its backend developers. And if you do get stuck, please reach out for help!

Getting started

Set up

Do the reading

  1. This guide + notify docs, especially:

    • The Notify presentation

    • This guide’s Rust streams primer

    • The "Notify Lingo" wiki page

  2. The Stream trait documentation

  3. Your chosen platform’s docs

Base code explainer

extern crate notify_backend as backend;

use backend::prelude::*; // (1)

pub struct YourBackend {
    // (2)
}

impl Backend for YourBackend { // (3)
}

impl Stream for YourBackend { // (4)
}
  1. The prelude pulls in everything this skeleton relies on, notably the Backend and Stream traits. (Buffer, covered below, is not part of it.)

  2. Your backend’s own state lives here: watch handles, file descriptors, a Buffer if you use one, and so on.

  3. The Backend trait is how Notify constructs and manages your backend; its new() appears in the Buffer example further down.

  4. The Stream trait is how events reach Notify: its poll() is called to pull events out of your backend.

Testing and testing

(Testing as in plugging it into a notify sample and playing around, and testing as in writing tests + the compliance tests.)

The details

The Event struct

Event kind quick reference

Intro to Rust streams

Backpressure, queue overflow, and the Buffer

Backpressure is a term used to describe a situation where data builds up on one side of a stream because of a clog, a slowdown in the consumer, or other issues. In push-based streaming systems, backpressure can be a serious problem, as data fills up buffers and balloons memory usage.

Tokio and Rust Futures/Streams are poll/pull/lazy systems, where producers only generate data when asked, so backpressure is generally not an issue.
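
To make the pull model concrete, here is a minimal sketch of a futures 0.1-style Stream: a toy counter, not a Notify type, that only produces values when it is polled.

extern crate futures;

use futures::{Async, Future, Poll, Stream};

// A toy stream that yields 1, 2, 3 and then ends. Nothing happens until
// poll() is called: the consumer asks, the producer answers.
struct Counter {
    n: u32,
}

impl Stream for Counter {
    type Item = u32;
    type Error = ();

    fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
        self.n += 1;
        if self.n > 3 {
            Ok(Async::Ready(None)) // the stream is finished
        } else {
            Ok(Async::Ready(Some(self.n)))
        }
    }
}

fn main() {
    // collect() drives the stream to completion by polling it repeatedly.
    let all = Counter { n: 0 }.collect().wait().unwrap();
    assert_eq!(all, vec![1, 2, 3]);
}

A real backend’s poll() returns Ok(Async::NotReady) when there is nothing to report yet, and must arrange for the task to be woken up once new events arrive.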

In our domain, most platforms behave reasonably: they either issue an overflow event (to let us know some events were dropped) and drop further events while the build-up remains, or they don’t expose this mechanism at all and manage it internally without negative consequences. In those cases, leaving events in kernel memory is correct and okay. If available, the overflow event should be translated to a Missed event.

But:

  • if the kernel queue limit is too low for typical usage, or

  • if the platform has a bad reaction to overflows, such as dropping all events (even those before the overflow) or closing down the watch,

you should use a Buffer.

More commonly, a Buffer is useful when the platform makes it impossible to retrieve just a single event at a time: it saves you from implementing a custom userspace queue to hold events yourself.

Notify’s Buffer is a FIFO queue with a fixed capacity and a handy Stream endpoint. Events received while the buffer is full are discarded and a Missed event is generated. If a Missed event is added to the buffer while it’s full, the counters are summed.

The default capacity of Buffer is 16 KiB divided by the size of Event on the platform. On x64, at the time of this writing, that’s 292 events. That should be more than enough for light use, in the common case of using it to hold events when you’re not able to read just one at a time.

However, in the overflow scenarios discussed above, a much larger limit may be chosen. You’ll need to balance memory consumption against the rate of event production and the risk of overflow. Keep in mind that the Event size does not include pathnames or attribute data, and those can add up dramatically. For example, with an average path length of 80 bytes, a full Buffer with a capacity of 10,000 would use about 1.3 MiB, instead of the roughly 560 KB one could naively expect!
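
As a quick back-of-the-envelope check, here is that arithmetic in code form, assuming the figures above: a 56-byte Event (16 KiB / 292) and an 80-byte average path. The exact sizes on your platform will differ.

fn main() {
    // Assumed figures from the text above, not exact sizes for your platform:
    let event_size = 16 * 1024 / 292; // ≈ 56 bytes per Event
    let avg_path_len = 80;            // average pathname length, in bytes
    let capacity = 10_000;            // Buffer capacity, in events

    let naive = capacity * event_size;                       // 560_000 bytes
    let realistic = capacity * (event_size + avg_path_len);  // 1_360_000 bytes ≈ 1.3 MiB

    println!("naive: {} bytes, with paths: {} bytes", naive, realistic);
}

With sizing out of the way, here is how the Buffer slots into a backend: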

// Buffer is not part of the backend prelude, so you need to import it:
use backend::Buffer;

struct YourBackend {
    buffer: Buffer,
}

impl Backend for YourBackend {
    fn new(...) -> ... {
        // do your thing

        let buffer = Buffer::default();

        // or with custom capacity in number of Events:
        let buffer = Buffer::new(768);
    }
}

impl Stream for YourBackend {
    fn poll(...) -> ... {
        // do your thing

        // add to the buffer
        self.buffer.add(event);

        // handy Stream endpoint as return!
        self.buffer.poll()
    }
}

Event driver: what, why, common patterns

Finishing up

Crate publishing

(or leaving it as a repo crate)

Advertising

  • Telling us (twitter, email)

  • Putting it up on the wiki

  • Telling the world (reddit, twitter)

Making it official

Only for really polished and general-interest backends. Criteria, process, etc.
