# Node not starting, failing unit "ovsdb-server.service"
## ovsdb-server.service Failed on Boot
### Troubleshooting: Problem Summary
During system boot or when starting the Open vSwitch (OVS) services manually, you may encounter the following systemd error:
```
[systemd]
Failed Units: 1
  ovsdb-server.service
```
This typically occurs in K4all deployments using Open vSwitch as part of the network setup. The root cause is usually incorrect file ownership on the OVS database files, which prevents the `ovsdb-server` process from accessing them.
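To see why the unit failed on a given node, you can inspect it with the standard systemd tools (the exact log messages vary by Open vSwitch version):

```bash
# List units currently in a failed state
systemctl --failed

# Show the detailed journal entries for the failed unit
journalctl -xeu ovsdb-server.service
```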
### Root Cause
The `ovsdb-server` process runs as the `openvswitch` user. If the ownership of the database files located in `/etc/openvswitch/` is incorrect (e.g., root-owned due to system provisioning or image changes), the service fails to start properly.
Affected files include:

- `conf.db`
- `.conf.db.~lock~`
- `.conf.db.tmp.~lock~`
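You can confirm the diagnosis by listing the directory; if these files are owned by root rather than the `openvswitch` user, the fix below applies:

```bash
# Check ownership of the OVS database and its hidden lock files
ls -la /etc/openvswitch/
```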
### Solution
Fix the file ownership by assigning the files to the correct user and group (`openvswitch`):
```bash
chown openvswitch: /etc/openvswitch/conf.db
chown openvswitch: /etc/openvswitch/.conf.db.~lock~
chown openvswitch: /etc/openvswitch/.conf.db.tmp.~lock~
```
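Equivalently, all three files can be fixed in a single command:

```bash
# Same fix as above, applied in one invocation
chown openvswitch: /etc/openvswitch/conf.db \
                   /etc/openvswitch/.conf.db.~lock~ \
                   /etc/openvswitch/.conf.db.tmp.~lock~
```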
Then restart the service:
```bash
systemctl stop ovsdb-server
systemctl reset-failed ovsdb-server
systemctl start ovsdb-server
```
Then reboot the node to confirm the unit also comes up cleanly on boot.
You can verify that the service is running correctly:
```bash
systemctl status ovsdb-server
```
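If the daemon is healthy, the database should also answer queries through the OVS CLI (assuming the `openvswitch` client tools are installed on the node):

```bash
# Quick health checks: unit state and a database query
systemctl is-active ovsdb-server
ovs-vsctl show
```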
### Recommendation
If you're building custom images or provisioning K4all nodes programmatically, ensure these files are created or restored with the proper ownership during post-install configuration steps. This prevents service failure after reboots or updates.
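One way to make the ownership stick across reboots is a systemd-tmpfiles rule, shown in the sketch below. This is not an official K4all mechanism, just one option on systemd-based nodes; the drop-in file name is arbitrary and the `openvswitch` group name is an assumption you should verify for your distribution.

```bash
# Sketch: declare the expected ownership in a tmpfiles.d drop-in so it is
# re-applied on every boot. File name is arbitrary; the "openvswitch" group
# name is an assumption, adjust it to match your distribution.
cat <<'EOF' > /etc/tmpfiles.d/ovsdb-ownership.conf
z /etc/openvswitch/conf.db             - openvswitch openvswitch -
z /etc/openvswitch/.conf.db.~lock~     - openvswitch openvswitch -
z /etc/openvswitch/.conf.db.tmp.~lock~ - openvswitch openvswitch -
EOF

# Apply the rules immediately instead of waiting for the next boot
systemd-tmpfiles --create /etc/tmpfiles.d/ovsdb-ownership.conf
```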