# DANCERS variations
We see DANCERS as an architecture that is not tied to any particular simulator. To be usable, however, connectors to simulators have to be developed. We call a "variation" a particular implementation of DANCERS with two specific simulators. We provide three variations of DANCERS at this time:
- Gazebo variation interconnects the Gazebo simulator with ns-3
- Robotsim variation interconnects Viragh's Robotsim simulator with ns-3
- Virtualization variation uses Gazebo and ns-3 with a virtualized network, carrying "real" network traffic while simulating the robotic applications in an SITL fashion
Here is a quick overview of the details for each module:
- Coordinator: It handles the synchronous execution of the Gazebo and ns-3 modules. It uses two threads, one per module, to handle socket communication (sockets can be UDS or TCP), and a "shared barrier" to block the execution of the code while waiting for the other simulator.
- Gazebo connector: It is in charge of launching the Gazebo simulation and instantiating the world following the configuration file instructions. It subscribes to Gazebo's publishers to retrieve information from the Gazebo simulation. During the co-simulation, it starts and stops the advancement of the Gazebo simulation according to the synchronization scheme with the Coordinator.
- Robotsim connector: It translates Protobuf messages into C-compatible data, as Robotsim, written in C, cannot parse Protobuf messages. It uses sockets to communicate with the Coordinator on one side and with Robotsim on the other.
- ns-3 connector: It runs the network simulation of the packets. When virtualization is not used, it handles the entirety of the network simulation, from the application layer down to the physical layer. Network information (for example, neighborhood information in the case of flocking) is collected during the simulation and shared. If the virtualization module is used, this behavior changes and ns-3 only simulates the packets flowing through layers 1 and 2. TapBridges are used to listen to the traffic on a TAP network interface (that is, routed packets with a MAC destination). If the simulated packet reception succeeds, the packet is re-injected into the TAP interface of the destination. In any variation, this module receives the Protobuf messages from the Coordinator with the robots' positions and updates the nodes' positions accordingly.
- Autopilot: It is easier to control a realistic robot in Gazebo with the help of an autopilot. We use the PX4 SITL most of the time, but any other autopilot could be used. HITL versions of the autopilot could also be used in theory, but this should be confirmed by future work. In the SITL version, the sensors (IMU, barometer, etc.) of the robot are simulated in Gazebo, and their values are sent to PX4. The autopilot runs all its functions (sensor fusion, motor control chain, etc.) and outputs motor speeds. Instead of sending the motor speeds to a real ESC on a real drone, the SITL version publishes them on Gazebo publishers ("Commands" on the schema). With the virtualization variation, the autopilot SITL runs in a network namespace, which provides network isolation from the other robots and duplicates the Linux network stack, mimicking an onboard computer with its own network card.
- Robot Behavior: Depending on the variation, the behavior of the robot can either be included in the physics simulator (Robotsim) or be executed outside of it. For example, with Gazebo and the PX4 autopilot, the high-level robot commands can be developed in a ROS2 node and directly included in the co-simulation.
In the following, we dive deeper into the inner workings and code of each of these modules.
## Coordinator

The Coordinator is a ROS2 node that has no subscriber and one publisher (`/clock`). It launches two threads to handle message exchange with both simulators. The user can use either UDS or TCP sockets, and the `boost::asio` library is used for convenience:
```cpp
// Connect to the physics simulator (phy), over UDS or TCP
if (this->phy_use_uds_socket)
{
    boost::asio::local::stream_protocol::endpoint endpoint(this->phy_uds_server_address.c_str());
    this->phy_protobuf_thread = std::thread(&Coordinator::run_phy_protobuf_client_<boost::asio::local::stream_protocol>, this, endpoint);
}
else
{
    boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::address::from_string(this->phy_ip_server_address), this->phy_ip_server_port);
    this->phy_protobuf_thread = std::thread(&Coordinator::run_phy_protobuf_client_<boost::asio::ip::tcp>, this, endpoint);
}

// Connect to the network simulator (net), over UDS or TCP
if (this->net_use_uds_socket)
{
    boost::asio::local::stream_protocol::endpoint endpoint(this->net_uds_server_address.c_str());
    this->net_protobuf_thread = std::thread(&Coordinator::run_net_protobuf_client_<boost::asio::local::stream_protocol>, this, endpoint);
}
else
{
    boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::address::from_string(this->net_ip_server_address), this->net_ip_server_port);
    this->net_protobuf_thread = std::thread(&Coordinator::run_net_protobuf_client_<boost::asio::ip::tcp>, this, endpoint);
}

// Wait for both communication threads to finish
this->phy_protobuf_thread.join();
this->net_protobuf_thread.join();
```
The execution flow of each thread is managed through a shared `boost::fibers::barrier`. When one thread reaches the line

```cpp
this->rendezvous_threads.wait();
```

it waits until the other thread also reaches this line. Once both threads are in the "waiting" state, they are released and continue their execution.
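To illustrate the mechanism, here is a minimal, self-contained sketch of such a rendezvous between two threads (the thread bodies and printed messages are hypothetical, not the actual Coordinator code):

```cpp
#include <boost/fiber/barrier.hpp>
#include <iostream>
#include <thread>

int main()
{
    // Barrier initialized for 2 participants: wait() releases both
    // threads only once each of them has called it
    boost::fibers::barrier rendezvous_threads(2);

    auto worker = [&rendezvous_threads](const char *name) {
        std::cout << name << ": advancing its simulator by one step\n";
        rendezvous_threads.wait(); // block until the other thread arrives
        std::cout << name << ": both simulators are in sync, continuing\n";
    };

    std::thread phy_thread(worker, "phy");
    std::thread net_thread(worker, "net");
    phy_thread.join();
    net_thread.join();
    return 0;
}
```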
## Gazebo connector

The Gazebo connector is a ROS2 node. At initialization, it starts the Gazebo simulation server using the `gz/sim` libraries:
```cpp
// Verbosity level for Gazebo - defaults to 1 if unset
gz::common::Console::SetVerbosity(4);

// Object to pass custom configuration to the server
gz::sim::ServerConfig serverConfig;

// Populate with some configuration, for example, the SDF file to load
// modify_sdf_world(config["world_sdf_path"].as<std::string>(), step_length, update_rate);
serverConfig.SetSdfFile(config["world_sdf_path"].as<std::string>());
serverConfig.SetUpdateRate(update_rate); // in Hz

// Instantiate server
gz::sim::Server server(serverConfig);
RCLCPP_INFO(this->get_logger(), "Gazebo server started.");
```
Once this is done, we use Gazebo's transport and API to interact with the simulation. It is important to note that Gazebo uses its own pub/sub transport library, called `gz/transport`, which is different from the ROS2 middleware.
Then, it creates the buildings defined in the configuration file:
```cpp
// Call the /world/<world_name>/create service to create the buildings
gz::msgs::EntityFactory req;
gz::msgs::Boolean response;
response.Clear();
executed = false;
result = false;
service = "/world/" + world_name + "/create";
for (auto building : config["buildings"])
{
    std::string name = building["name"].as<std::string>();
    double x = building["x"].as<double>();
    double y = building["y"].as<double>();
    double z = building["height"].as<double>() / 2;
    double size_x = building["size_x"].as<double>();
    double size_y = building["size_y"].as<double>();
    double height = building["height"].as<double>();

    // ... here, create a long string containing the SDF definition of the building
    std::string buildingSdf = "...";
    req.set_sdf(buildingSdf);

    if (this->get_parameter("verbose").get_parameter_value().get<bool>())
    {
        RCLCPP_DEBUG(this->get_logger(), "Request creation of entity: \n%s", req.DebugString().c_str());
    }

    executed = node.Request(service, req, timeout, response, result);
    check_service_results(service, executed, result);
    if (response.data())
    {
        std::cout << "Created building " << name << std::endl;
    }
}
```
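For illustration only, the elided SDF string could be assembled as a plain static box model, reusing the variables extracted from the configuration file above (this is a hypothetical sketch, not necessarily the exact string DANCERS generates):

```cpp
// Hypothetical SDF for a box-shaped building; the pose is the building
// center, with z = height / 2 so that the box sits on the ground
std::string buildingSdf =
    "<sdf version='1.8'><model name='" + name + "'>"
    "<static>true</static>"
    "<pose>" + std::to_string(x) + " " + std::to_string(y) + " " +
        std::to_string(z) + " 0 0 0</pose>"
    "<link name='link'>"
    "<collision name='collision'><geometry><box><size>" +
        std::to_string(size_x) + " " + std::to_string(size_y) + " " +
        std::to_string(height) + "</size></box></geometry></collision>"
    "<visual name='visual'><geometry><box><size>" +
        std::to_string(size_x) + " " + std::to_string(size_y) + " " +
        std::to_string(height) + "</size></box></geometry></visual>"
    "</link></model></sdf>";
```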
The Gazebo connector gets the positions of every entity in the world from the gz-transport topic `/world/<world_name>/pose/info`. The positions of the robots are filtered out of this list using the `gazebo_models` list, a list of strings that must match the names of the robot models in Gazebo. In DANCERS, we try to stick to the following naming convention for robots: `<robot_model>_<robot_id>`, where `<robot_model>` is the name of the PX4 model (`robots_model` in the configuration file, and also the value of the `PX4_SIM_MODEL` environment variable), and `<robot_id>` is the instance ID of the robot, in the range [1-N].
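As a minimal sketch, subscribing to this topic with `gz/transport` could look like the following (the world name `default` and the printing logic are assumptions for illustration):

```cpp
#include <gz/msgs/pose_v.pb.h>
#include <gz/transport/Node.hh>
#include <iostream>

// Print the pose of every entity published on the world pose topic;
// a real connector would filter the names against the gazebo_models list
void poseInfoCallback(const gz::msgs::Pose_V &msg)
{
    for (const auto &pose : msg.pose())
    {
        std::cout << pose.name() << ": "
                  << pose.position().x() << " "
                  << pose.position().y() << " "
                  << pose.position().z() << std::endl;
    }
}

int main()
{
    gz::transport::Node node;
    node.Subscribe("/world/default/pose/info", poseInfoCallback);
    gz::transport::waitForShutdown();
    return 0;
}
```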
## ns-3 connector

The ns-3 connector holds the configuration of the simulated network and the execution loop of ns-3. In particular:
- Buildings are configured at initialization
- Node positions are updated following the robot poses received from the Coordinator (see the sketch after this list)
- A propagation module is defined for radio wave simulation
- The WiFi module can be configured for the simulation of the PHY layer
- Tap-Bridges are associated with existing Linux virtual interfaces at run-time, if the virtualization variation is used
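For the position update step, a minimal sketch could look like this (the helper `UpdateNodePosition` and the setup in `main` are illustrative assumptions; in DANCERS the coordinates come from the Protobuf messages decoded by the connector):

```cpp
#include "ns3/core-module.h"
#include "ns3/mobility-module.h"
#include "ns3/network-module.h"

using namespace ns3;

// Move the mobility model of one ns-3 node to the pose received
// from the Coordinator
void UpdateNodePosition(NodeContainer &nodes, uint32_t robot_id,
                        double x, double y, double z)
{
    Ptr<MobilityModel> mobility = nodes.Get(robot_id)->GetObject<MobilityModel>();
    mobility->SetPosition(Vector(x, y, z));
}

int main(int argc, char *argv[])
{
    NodeContainer nodes;
    nodes.Create(3);

    // Positions only change when we set them explicitly
    MobilityHelper mobility;
    mobility.SetMobilityModel("ns3::ConstantPositionMobilityModel");
    mobility.Install(nodes);

    // e.g. robot 0 moved to (10, 5, 1.5)
    UpdateNodePosition(nodes, 0, 10.0, 5.0, 1.5);
    return 0;
}
```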
We leverage the tap-bridge module from ns-3 to create a link between the "real" and "simulated" worlds. First, virtual TAP (layer 2) interfaces are created in the host system. When the ns-3 module is executed, it connects the tap interfaces with simulated nodes in ns-3. The details of this procedure can be found in the ns-3 documentation.
```cpp
TapBridgeHelper tapBridge;
tapBridge.SetAttribute("Mode", StringValue("UseLocal"));
char buffer[16]; // large enough for "wifi_tap" plus the instance number
for (uint32_t i = 0; i < numNodes; i++)
{
    // Attach node i to the host interface "wifi_tap<i+1>"
    snprintf(buffer, sizeof(buffer), "wifi_tap%u", i + 1);
    tapBridge.SetAttribute("DeviceName", StringValue(buffer));
    tapBridge.Install(nodes.Get(i), devices.Get(i));
}
```
Buildings are created at the initialization of the ns-3 simulation, from the configuration file. We define buildings by their center, their sizes in x and y, and their height. In ns-3, buildings are defined by their minimum and maximum x, y and z, so conversions are needed (see the sketch below). Several other properties have to be defined for a building in ns-3:
- `BuildingType` defaults to `Building::Office`
- `ExtWallsType` defaults to `Building::ConcreteWithoutWindows`
- `NFloors` defaults to `1`
- `NRoomsX` and `NRoomsY` default to `1`
In other words, the buildings are defined with no internal walls and strong, highly blocking external walls.
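Here is a minimal sketch of this conversion, assuming a hypothetical helper `CreateBuildingFromConfig` whose arguments mirror the fields of the configuration file:

```cpp
#include "ns3/buildings-module.h"
#include "ns3/core-module.h"

using namespace ns3;

// Convert a building described by its center, footprint and height into
// the min/max bounds expected by ns-3, and apply the defaults listed above
Ptr<Building> CreateBuildingFromConfig(double x, double y, double size_x,
                                       double size_y, double height)
{
    Ptr<Building> building = CreateObject<Building>();
    // Box(xMin, xMax, yMin, yMax, zMin, zMax)
    building->SetBoundaries(Box(x - size_x / 2, x + size_x / 2,
                                y - size_y / 2, y + size_y / 2,
                                0.0, height));
    building->SetBuildingType(Building::Office);
    building->SetExtWallsType(Building::ConcreteWithoutWindows);
    building->SetNFloors(1);
    building->SetNRoomsX(1);
    building->SetNRoomsY(1);
    return building;
}
```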