SONiC Testbed Overview
We try to build the SONiC testbed as a standard, flexible, expandable, and easy-to-duplicate test environment, so that the same tests can be built and run across different locations and by different community members.
We use high performance servers to create virtual routers and virtual Linux machines, use Open vSwitch to connect the components into a virtual network through tap interfaces, and then use layer 2 VLANs on fanout switches to connect the virtual network topology to the SONiC DUT.
Once the SONiC DUT is connected to a virtual network topology, we can run test cases that validate basic system behavior and dataplane integration on the SONiC DUT.
Required hardware components for a SONiC testbed:
- High performance servers to host the VMs and dockers that form the virtual network topologies used to test the SONiC OS image. This is what we are using, but it is not mandatory:
  - Dell R730; 2 CPUs, 18 cores each; 192-256 GB memory; hard disk: 2x500 GB
  - NIC: Mellanox MT27700 family CX4, 100G or 40G, matching your testbed speed
- Layer 2 switches running SONiC OS images to provide dynamically configurable testbed connections:
  - root fanout
  - leaf fanouts
- SONiC Device Under Test (DUT): hardware switches running SONiC OS.
- Lab supporting infrastructure for the testbed, such as an HTTP server, DHCP server, syslog server, Docker registry server, and management switches.

Understand SONiC testbed basic principles
The figure above illustrates a SONiC testbed for ONE SONiC DUT. The following are some very basic principles; more detailed testbed configuration and setup steps appear later in this document.
- Each testbed has one root fanout switch for distributing server virtual network topologies. The root fanout switch connects the leaf fanout switches and the testbed servers. Each root fanout port is an 802.1Q trunk port.
- Every leaf fanout switch has one uplink port connected to the root fanout switch. Each DUT-facing port on a leaf fanout switch is an access port with a unique VLAN tag for every connected DUT port.
- Every SONiC DUT front panel port is expected to be connected to a leaf fanout switch. You may use as many fanout switches to connect multiple SONiC DUTs as you want, as long as the root fanout switch has enough uplink ports. If one leaf fanout switch does not have enough ports for a SONiC DUT, more than one fanout switch can be used to connect to that DUT.
- Any testbed server can access any DUT port by sending a packet with that port's VLAN tag (the root fanout switch must have this VLAN number enabled on the server trunk).
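As a hedged sketch of the principle above: reaching a DUT port from a test server amounts to creating an 802.1Q subinterface on the server's trunk NIC for that port's VLAN tag. The interface name and VLAN id below are illustrative assumptions, not values from this document; the helper just generates the standard iproute2 commands.

```python
def vlan_subif_commands(trunk_if, vlan_id, addr=None):
    """Return the iproute2 commands that create an 802.1Q tagged
    subinterface on `trunk_if` for `vlan_id` (and optionally assign
    an address). Run the output with root privileges on the server."""
    sub = f"{trunk_if}.{vlan_id}"
    cmds = [
        f"ip link add link {trunk_if} name {sub} type vlan id {vlan_id}",
        f"ip link set {sub} up",
    ]
    if addr:
        cmds.append(f"ip addr add {addr} dev {sub}")
    return cmds

# Hypothetical example: trunk NIC "ens3f0", DUT port mapped to VLAN 101.
for cmd in vlan_subif_commands("ens3f0", 101, addr="10.0.0.2/31"):
    print(cmd)
```

Traffic sent out the resulting subinterface is tagged with VLAN 101 and, provided the root fanout trunk carries that VLAN, arrives at the DUT port mapped to it.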
- The test server is a high performance server with large disk space and plenty of memory. With the testbed topology scripts/playbooks, it can create many VMs and PTF dockers to form multiple virtual networks for running SONiC test cases.
- Lab network: Depending on your lab management, you could have a regular lab management network, a SONiC testbed management network, and a SONiC test network. They can share one flat lab management network, or you may use separate network segments in your lab. The principle is that every component (VMs, PTF dockers, SONiC DUTs, test servers, supporting servers) must be reachable from every other component in the whole testbed environment.
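Since full mutual reachability is the requirement, a quick sanity check can be scripted. This is a minimal sketch under stated assumptions: the host list and probe port (SSH on 22) are placeholders for your own inventory, not anything defined in this document.

```python
import socket

def tcp_reachable(host, port=22, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def unreachable(hosts, probe=tcp_reachable):
    """Return the subset of hosts the probe cannot reach."""
    return [h for h in hosts if not probe(h)]

# Hypothetical inventory; replace with your DUTs, servers, and VMs:
# print(unreachable(["10.250.0.101", "10.250.0.102"]))
```

Running such a check from each network segment before deploying a topology catches management-network misconfiguration early, instead of mid-test.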
- The SONiC testbed and test cases are built on Ansible playbooks and some scripts. To run testbed deployment and test cases, you will need the SONiC management docker, sonic-mgmt, as a prebuilt integrated execution environment that avoids utility and library version incompatibilities.
- Some supporting servers need to be in the lab testing infrastructure for the whole testbed to function and for test cases to run properly. These services can be hosted on one server or on different servers, depending on your lab management.
  - DHCP server: At initial boot, a bare metal SONiC switch uses ONIE for image installation. The DHCP server assigns an IP address to the SONiC management interface at boot time, making the management interface reachable from the network so the image can be installed via network boot. Some SONiC installer test steps also depend on this feature to test ONIE image installation. Without a DHCP server, those tests will fail.
  - HTTP server: SONiC ONIE image installation supports image URLs over HTTP. This HTTP server serves as the image repository for ONIE image installation. Without an HTTP server, you cannot install the image over the network.
  - Docker registry server: All of our virtual network topologies are built from containerized docker images. Consider hosting a local docker registry server to serve your docker images. We also host a docker registry server in Azure for downloading the docker images built from the repo github.com/azure/sonic-buildimage.
  - Syslog server: This is not required, but having a syslog server in the lab keeps a history of every SONiC DUT's syslog for later tracing.
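To make the HTTP image repository above concrete: it can be as simple as Python's built-in `http.server` module rooted at the directory holding your ONIE installer images. This is a sketch for illustration, not a production setup; the directory, port, and image filename are assumptions.

```python
import functools
import http.server
import threading

def serve_images(directory, port=0):
    """Serve `directory` over HTTP in a background thread.

    Returns (server, actual_port); port=0 lets the OS pick a free port.
    """
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory)
    server = http.server.ThreadingHTTPServer(("0.0.0.0", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

# Hypothetical usage: ONIE would then fetch the image from a URL like
# http://<server-ip>:<port>/<your-sonic-image>.bin
# server, port = serve_images("/srv/sonic-images", port=8000)
```

A lab would more likely run nginx or Apache for this role, but the requirement stated above is only that images are fetchable over plain HTTP.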