Using NOX - noxrepo/nox-classic GitHub Wiki

Running NOX

NOX must be invoked from the command line, from within the build/src directory. In general, the command that starts the controller has the form:

    ./nox_core [OPTIONS] [APP[=ARG[,ARG]...]] [APP[=ARG[,ARG]...]]...
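The `APP[=ARG[,ARG]...]` grammar above can be illustrated with a short parser sketch. This helper is purely illustrative (it is not part of NOX): it splits each trailing command-line token into a component name and an optional comma-separated argument list.

```python
# Hypothetical sketch of NOX's APP[=ARG[,ARG]...] argument grammar;
# this helper is illustrative only, not NOX code.
def parse_app_spec(spec):
    """Split 'app=arg1,arg2' into ('app', ['arg1', 'arg2'])."""
    name, sep, args = spec.partition("=")
    return name, args.split(",") if sep else []

# Each trailing command-line token names one component to load:
print(parse_app_spec("pyswitch"))             # ('pyswitch', [])
print(parse_app_spec("discovery=timeout,5"))  # ('discovery', ['timeout', '5'])
```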

Normally, NOX listens on a TCP port (SSL is also supported, though poorly documented at the moment), waiting for switches to connect. Once switches connect, NOX dispatches network packets to the running components and sends OpenFlow messages to the switches on their behalf. For instance, the following starts NOX listening for incoming connections from OpenFlow switches on port 6633 (the OpenFlow protocol port):

    ./nox_core -v -i ptcp:6633

At this point, the core of NOX is running. However, while switches can now connect to the controller, NOX imposes no behaviour on them. Recall that NOX is intended to provide the control logic for an entire network. This can mean many things, including traffic engineering, routing, authentication, access control, virtual network creation, monitoring, and diagnostics. NOX itself does none of these things; rather, it provides a programmatic interface to network components, which implement the useful functionality. What is missing from the above command, then, is the set of components NOX should run. These are passed as arguments to the command. For example, the command:

    ./nox_core -v -i ptcp:6633 pyswitch  

will make the switches act as regular MAC-learning switches. Note that NOX can run multiple components simultaneously. For examples of components that take arguments, see netapps/discovery.py (Python) or coreapps/jsonmessenger (C/C++).
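The MAC-learning behaviour that pyswitch imposes can be sketched in plain, self-contained Python. This is a sketch of the idea only, independent of the NOX API: the controller records the ingress port of each source MAC, then forwards to the learned port when the destination is known and floods otherwise.

```python
FLOOD = -1  # sentinel meaning "send out all ports except the ingress port"

class LearningSwitch:
    """Sketch of pyswitch-style MAC learning (illustrative only;
    the real component is built on the NOX component API)."""
    def __init__(self):
        self.mac_to_port = {}

    def packet_in(self, in_port, src_mac, dst_mac):
        # Learn: the source MAC is reachable via the ingress port.
        self.mac_to_port[src_mac] = in_port
        # Forward: use the learned port if known, otherwise flood.
        return self.mac_to_port.get(dst_mac, FLOOD)

sw = LearningSwitch()
print(sw.packet_in(1, "aa:aa", "bb:bb"))  # -1: bb:bb unknown, so flood
print(sw.packet_in(2, "bb:bb", "aa:aa"))  # 1: aa:aa was learned on port 1
```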

Using NOX in an OpenFlow network

NOX is an OpenFlow controller, so running it only makes sense in a network that contains at least one OpenFlow switch. Running NOX as described above simply brings it to a state where it listens for incoming connections from switches, on which it will then impose the behaviour defined by the running components.

An OpenFlow network can be deployed either with hardware switches that support OpenFlow or with any Linux machine acting as an OpenFlow switch. In addition, researchers and developers often find it useful to set up a virtual testing environment.

Setting up a virtual NOX/OpenFlow network

Developing and testing NOX against physical switches can be cumbersome. An attractive alternative is to run NOX, the OpenFlow switches, and the end hosts together on a single machine using virtual machines. Two projects that simplify this procedure are Mininet and OpenFlowVMS.
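For example, assuming Mininet is installed, a minimal emulated topology can be pointed at a locally running NOX instance; the IP and port here are assumptions matching the commands above:

```shell
# Start NOX on this machine first: ./nox_core -v -i ptcp:6633 pyswitch
# Then launch a single-switch, two-host Mininet topology that uses it
# as a remote controller (requires root and a Mininet installation):
sudo mn --topo single,2 --controller=remote,ip=127.0.0.1,port=6633
```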

Setting up a real NOX/OpenFlow network

For small testbeds, the easiest way to obtain OpenFlow-enabled switches is probably to run the OpenFlow reference implementation on Linux machines. Detailed installation instructions can be found on the OpenFlow Wiki.
