librespot, mDNS, networking and containers - librespot-org/librespot GitHub Wiki

Librespot, mDNS, networking, containers

TL;DR is at the bottom of this page.

About mDNS

Spotify uses mDNS to discover endpoints (hence a librespot instance will use mDNS to broadcast itself).

By default, librespot uses its own mDNS responder implementation. Alternatively, librespot can also be compiled to use an already running avahi daemon (over dbus) instead, through the dns-sd compatibility library (dnssd-dev).
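As a sketch, building with that feature might look like this (the with-dns-sd feature name is taken from this wiki's TL;DR below; check your checkout's Cargo.toml, and package names vary by distro):

```shell
# Install the avahi dns_sd compatibility headers first
# (package name on Debian/Ubuntu; other distros differ)
sudo apt-get install libavahi-compat-libdnssd-dev

# Build librespot against the avahi daemon instead of the
# built-in mDNS responder
cargo build --release --features with-dns-sd
```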

About Avahi

Avahi is a system daemon, routinely installed on glibc-based linux systems, that does two things:

  • client: provides name resolution for mDNS services through the name service switch (e.g. for getaddrinfo() calls)
  • responder: optionally, avahi will also broadcast service records on behalf of other programs (typically over dbus)

Note that non-glibc systems (e.g. Alpine, or anything based on musl) do not have support for NSS, making avahi a much less attractive proposition there (even assuming you can get it running)...

mDNS and networking

It's strongly recommended to NOT run multiple mDNS stacks on the same ip, as they will compete for UDP port 5353.
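A quick way to check whether something on the host already holds UDP port 5353 is to look at /proc/net/udp (a Linux-only sketch; 14E9 is 5353 in hex, as it appears in that file):

```shell
# Look for an existing mDNS responder bound to UDP port 5353
# (avahi-daemon typically holds it). 14E9 hex == 5353 decimal.
if grep -qi ':14E9 ' /proc/net/udp /proc/net/udp6 2>/dev/null; then
  port_5353=busy
else
  port_5353=free
fi
echo "udp 5353 is $port_5353"
```

If the port is busy, starting another mDNS stack on the same ip will conflict with whatever holds it.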

This is the very reason avahi is a system daemon, allowing many applications running on the same host / interface to concurrently leverage mDNS.

Also, if your network has multiple different vlans, and you expect to see mDNS records from one vlan while on the other, you are on your own, and your networking equipment will probably require vendor specific configuration.

The same goes if you broadcast on wifi, and expect to see these records on a wired NIC, as your networking gear may (or may not) segregate them.

Finally, be aware that certain wifi access points will throttle / impair mDNS broadcasts to prevent multicast storms (mDNS is a very chatty protocol that may just bog down your wifi).

Docker containers networking 101 and mDNS

A docker container may use different networking modes (by specifying the --net argument with docker run).

  • bridge (default), will put the container on a network bridge, and forward / expose specific, explicitly listed ports between the container and the host
  • host will run the container as if it was on the host
  • macvlan (or ipvlan) will grant the container its own interface, with a separate ip
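You can check which mode an existing container ended up with via docker inspect (the container name here is illustrative):

```shell
# Print the networking mode of an existing container,
# e.g. "bridge", "host", or the name of a user-defined network
docker inspect -f '{{.HostConfig.NetworkMode}}' my-container
```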

Bridge: do NOT use with mDNS

Anything that uses mDNS will NOT work at all with bridge networking (as the container will be unable to use multicast).

Host: probably do NOT use with mDNS

If you use host networking, things may or may not work. Since you are sharing the same ip as the host, if the host is already running avahi (or another mDNS stack), your container's own mDNS stack will compete with it and things will likely not work. Furthermore, if you have multiple containers (using mDNS) that all use host networking, you will likely face the same issue.

Take-away: unless you are sure your container is the only thing using mDNS on that specific ip, you should not use host networking.

macvlan and ipvlan

Configuring mac (or ip) vlan (with or without docker) is a little bit more involved. It is also the only practical approach to running a number of mDNS services in containers on the same host.

You should read the docker documentation on macvlan for details, or more generally this excellent paper on mac and ipvlan.

In a shell, you first need to create your macvlan network (be sure to configure your subnet and gateway according to your own lan topology, and also make sure the ip range you grant is not used by anything else):

docker network create -d macvlan \
  --subnet=192.168.0.0/24 \
  --ip-range=192.168.0.128/25 \
  --gateway=192.168.0.1 \
  -o parent=eth0 my-macvlan
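The --ip-range must sit inside --subnet; as a quick sanity check of the numbers used above, a small helper script (not part of docker, just plain shell arithmetic) can verify it:

```shell
#!/bin/sh
# Sanity-check that an --ip-range lies inside a --subnet.
# Values mirror the docker network create example above.

# Convert a dotted-quad ip to a 32-bit integer
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

subnet=192.168.0.0  subnet_bits=24
range=192.168.0.128 range_bits=25

# Network mask of the subnet as a 32-bit integer
mask=$(( (0xFFFFFFFF << (32 - subnet_bits)) & 0xFFFFFFFF ))
s=$(ip_to_int "$subnet")
r=$(ip_to_int "$range")

if [ "$range_bits" -ge "$subnet_bits" ] && [ $(( r & mask )) -eq $(( s & mask )) ]; then
  echo "ok: $range/$range_bits is inside $subnet/$subnet_bits"
else
  echo "error: $range/$range_bits is outside $subnet/$subnet_bits"
fi
```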

Then you can run any container on that network by simply attaching it to your newly created network:

docker run --net my-macvlan ...

Each container will have its own ip on the subnet (in the range defined above), which will allow for mDNS to work for all of them.

Downsides of macvlan

  1. With macvlan, each interface gets its own mac address. Many consumer grade NICs have a hard limit on the number of mac addresses they can handle - if you have a large number of containers, this will not scale nicely.
  2. IEEE 802.11 is generally not happy with multiple mac addresses on the same client. In layman's terms: macvlan will probably not work well with wifi, and your (cheap) access point will likely kick the extra addresses out.

ipvlan, and downsides

ipvlan, alternatively, also grants each container its own unique ip, but the interfaces share the same mac address, which makes it workable over wifi and also avoids the mac address scalability issue.

However, ipvlan has its own downsides:

  1. dhcpd may not be too happy about that, and will probably grant all interfaces the same ip address (since they share the same mac address)
  2. ipv6 will likely not work nicely (or at all) either

You can work around these issues by either:

  • using manually specified static ips for each container
  • or making sure dhcpd grants ip addresses based on clientid instead (and making sure your dhcp lease requests use unique, distinct clientid) <- this is involved, and out of scope here
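A sketch of the static-ip route (network name and addresses are illustrative; mirror the macvlan example above and adjust to your own lan):

```shell
# Create an ipvlan network on the parent interface
docker network create -d ipvlan \
  --subnet=192.168.0.0/24 \
  --gateway=192.168.0.1 \
  -o parent=eth0 my-ipvlan

# Pin a static ip per container so the shared mac address
# does not confuse your dhcp server
docker run --net my-ipvlan --ip 192.168.0.200 ...
```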

TL;DR

If you want to run librespot directly on a linux host where Avahi is already installed, it is recommended to compile librespot with the with-dns-sd feature so that librespot uses the existing avahi daemon to publish its records.

If avahi is NOT installed, and you are sure no other mDNS responder is running on the host, you will be able to run librespot, using the default mDNS stack.

If you want to run librespot in a container, the safest route is to use a wired ethernet interface, with macvlan. It is also strongly recommended to NOT try to use avahi inside the container, but instead just use librespot default stack. Although using avahi in a container is doable, it is overkill in that case, and harder to get right, to monitor and secure.
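Putting that together, a run might look like this (the image name is a placeholder for whatever librespot image you build or pick; --name is librespot's device-name option, shown only as an example flag):

```shell
# Attach a librespot container to the macvlan network created earlier;
# it gets its own ip, so librespot's built-in mDNS responder can broadcast.
# "my-librespot-image" is a placeholder, not a published image.
docker run -d --net my-macvlan \
  my-librespot-image \
  --name "Kitchen Speaker"
```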

There are of course other esoteric scenarios that will "work", but they are much more involved / brittle, and they are not recommended:

  • it is possible to use wifi, with ipvlan, though this likely requires static ip addressing and/or dhcp black magic, and the right wifi equipment
  • it should be possible to use bridge networking for a librespot container, though if you expect discovery to work you will need either:
      • to somehow expose the host dbus inside the container so that librespot can communicate with the host avahi daemon
      • or to use a standalone sidecar somewhere else that will take care of mDNS on behalf of librespot (either in a different container with the right networking, or again leveraging the host avahi)