Technical Specification

Introduction

This document describes the technical specifications of the ev3dev2 simulator. The simulator allows code written for Lego Mindstorms EV3 robots to be executed in a virtual environment without any modifications to the code. Once the simulator is running, a user can run multiple Python applications as if they were running on a Lego Mindstorms EV3 brick[1]. The simulator is designed specifically to simulate bricks running the operating system ev3dev[2] and its corresponding ev3dev Python library[3]. The second version of this library is called ’ev3dev2’.

As the figure below shows, the simulator can be configured to simulate multiple robots that each consist of multiple bricks. Multiple bricks on a robot make it possible to connect additional sensors and actuators. The software created by the user, which would normally run on a brick, does not connect to the regular ev3dev API/library installed on an actual brick; instead, it uses a fake ev3dev API that is provided with the simulator. This fake API automatically connects to the simulator.

A single configuration file determines the ’world’ setup: it describes the robots and additional obstacles in the virtual world, and which sensors and actuators are connected to the different bricks.

The rest of this document describes the inner workings of the simulator, including how it handles physics, how the ’world’ is visualised, how sensors and actuators are processed, and how calls to the fake API are forwarded to the simulator.

Tools and versions

The simulator and the ’fake’ ev3dev2 API are written in Python 3.7. Because the ’fake’ library simulates the Python ev3dev2 library/API, it must be written in Python. The simulator itself could be written in any language, since sockets are used between the fake API and the simulator. Still, Python was chosen as the programming language of the simulator to keep the entire project in one language, to make use of Python’s wide variety of libraries, and for its portability between operating systems and its high-level syntax. The current version of the simulator is compatible with version 2.0.0b5 of the ev3dev2 library.

The simulator uses several Python libraries. They are used to handle physics (Pymunk), to visualise the virtual world (Arcade and Pyglet), to output sound (Simpleaudio and pyttsx3), to do generic math (NumPy) and to load configuration files (strictyaml). The following output gives an overview of the Python libraries used within the simulator:

Using the pipdeptree package, you can list the installed dependencies:

$ python -mpipdeptree -p ev3dev2simulator -e pyobjc
ev3dev2simulator==2.0.6
  - arcade [required: ==2.6.16, installed: 2.6.16]
    - pillow [required: ~=9.1.1, installed: 9.1.1]
    - pyglet [required: ==2.0.dev23, installed: 2.0.dev23]
    - pymunk [required: ~=6.2.1, installed: 6.2.1]
      - cffi [required: >=1.15.0, installed: 1.15.1]
        - pycparser [required: Any, installed: 2.21]
    - pytiled-parser [required: ==2.2.0, installed: 2.2.0]
      - attrs [required: >=18.2.0, installed: 22.1.0]
      - typing-extensions [required: Any, installed: 4.4.0]
  - ev3devlogging [required: Any, installed: 1.0.1]
  - numpy [required: Any, installed: 1.23.5]
  - pyttsx3 [required: ==2.7, installed: 2.7]
  - simpleaudio [required: ==1.0.4, installed: 1.0.4]
  - strictyaml [required: Any, installed: 1.6.2]
    - python-dateutil [required: >=2.6.0, installed: 2.8.2]
      - six [required: >=1.5, installed: 1.16.0]

Using text-to-speech

The above dependencies are installed when a user installs the simulator using pip. Windows and macOS users can directly make use of text-to-speech. On Windows, the pip package also installs ’pypiwin32’, a Python package that allows the application to access the Windows text-to-speech functionality. Linux users need to install ’espeak’ by hand to hear actual sound. On Ubuntu, running ’sudo apt install espeak’ should install espeak. For other distributions, please check the espeak documentation[4].
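
As a minimal sketch of what this enables, the snippet below shows a brick program using the standard ev3dev2 Sound API; when run against the simulator, these calls are handled by the fake library rather than by the sound tools on a real brick.

```python
#!/usr/bin/env python3
# Minimal sketch: a brick program using the ev3dev2 Sound API.
# Against the simulator these calls are routed to pyttsx3/simpleaudio
# (or simply displayed); on a real brick they would use the OS sound tools.
from ev3dev2.sound import Sound

sound = Sound()
sound.speak('Hello from the simulator')  # text-to-speech
sound.beep()                             # short beep
```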

Simulation

The visualisation part of the simulator and the GUI are built using a library called Arcade[5], which is built on top of the popular Python multimedia library Pyglet. Arcade is a Python library for creating 2D video games. The simulation runs at 30 FPS: the Arcade library provides an update function and a draw function, each called 30 times per second, which act as hooks to draw sprites and handle the physics. The physics are handled by Pymunk[6]. Based on the location, shape, velocity and mass of objects, Pymunk simulates collisions between objects.
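
The snippet below is a minimal sketch of this update/draw pattern (illustrative only; the class and attribute names are not the simulator’s actual ones):

```python
# Minimal sketch of an Arcade window with update/draw hooks at 30 FPS
# (illustrative only; not the simulator's actual Visualiser/WorldSimulator).
import arcade
import pymunk

FRAME_RATE = 30  # the simulator runs at 30 frames per second

class SimWindow(arcade.Window):
    def __init__(self):
        super().__init__(800, 600, "sketch", update_rate=1 / FRAME_RATE)
        self.space = pymunk.Space()          # physics world handled by Pymunk
        self.sprites = arcade.SpriteList()   # everything that gets drawn

    def on_update(self, delta_time):
        # advance the physics; sprite positions would be synced here
        self.space.step(delta_time)

    def on_draw(self):
        # render the whole batch of sprites in one call
        self.clear()
        self.sprites.draw()

if __name__ == "__main__":
    SimWindow()
    arcade.run()
```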

Simulation types and configuration

The simulator supports various configurations. Configurations are described in a YAML file. A configuration describes a virtual world that may contain a selection of obstacles and robots. An example of a configuration file can be found at the end of this page. Four configurations are shipped with the simulator: one with a simple robot with one brick, one with a ‘Mars Rover’ (a robot with two bricks), one with two simple robots, and one with two ’Mars Rovers’. Other configurations can be created by adding a YAML file.

Besides these configurations, a separate file describes the settings of the simulator. These are settings that a user will probably never change. It contains the following data about the simulation:

  • display settings

  • executable settings

  • image paths

  • sizes of Lego parts

  • wheel physics settings

The configuration is handled by a Config class. Its task is to load the YAML files and store the settings while the simulation is running. The simulator also provides the ability to configure some settings via command-line arguments. By default, the simulator starts with the single small robot setup. Using ’-t name_of_the_configuration’, you can select a different configuration. The world configuration files can be found in the directory ’config/world_configurations’. Other command-line arguments are screen related, such as whether the simulator should start ’fullscreen’, ’maximized’ or on your second screen (if applicable).
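
As a rough sketch of this responsibility, a Config-style class could load a world configuration as follows (the file name and keys used here are hypothetical, and the real class is more elaborate):

```python
# Minimal sketch of a Config-style class loading a world configuration
# (illustrative only; the file name and the keys are hypothetical).
import strictyaml

class Config:
    def __init__(self, world_config_path: str):
        with open(world_config_path) as f:
            # Without a schema, strictyaml returns all scalar values as strings.
            self.world = strictyaml.load(f.read()).data

    def get(self, key, default=None):
        return self.world.get(key, default)

# e.g. Config('config/world_configurations/my_world.yaml')  # hypothetical path
```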

Measurements and scaling

For the simulator to be effective, the measurements in the simulation need to match those of the real world. Throughout the simulation, the world ’model’ is handled in millimeters. Because of the limitations of Arcade, the model eventually needs to be scaled to pixels rather than some other, display-independent measurement unit. Following Pymunk’s advice, the physics are also scaled to match the visualisation. As a consequence, sensor measurements that are based on the physics, such as those of the ultrasonic sensor, must be converted back using this scaling. The scaling is based on the size of the ’world’ in the configuration file and the screen size found in the settings file. The exact measurements (in millimeters) of the robot can be found at the end of this page.
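
In essence the scaling is a single factor, as the sketch below illustrates (the concrete numbers are examples, not the simulator’s defaults):

```python
# Minimal sketch of the millimeter-to-pixel scaling (example numbers only;
# the real scale is derived from the world size in the configuration file
# and the screen size in the settings file).
WORLD_WIDTH_MM = 1000     # width of the 'world' from the configuration
SCREEN_WIDTH_PX = 800     # screen width from the settings

SCALE = SCREEN_WIDTH_PX / WORLD_WIDTH_MM   # pixels per millimeter

def mm_to_px(mm: float) -> float:
    return mm * SCALE

def px_to_mm(px: float) -> float:
    # e.g. converting an ultrasonic ray length back to millimeters
    return px / SCALE
```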

Robots

The robots in the simulation consist of several parts, such as the body, the touch sensor and the color sensor, which represent the robot’s sensors and actuators.

Color sensor

The color sensor interacts with lakes and borders via collision detection. When a collision occurs between the sensor and one of these objects, the sensor ‘senses’ the color. The collision is detected by checking whether the center point of the color sensor is within the border polygon of a lake, or inside one of the rectangles of the border.
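
A minimal sketch of such a point-inside-shape check, using Pymunk’s point query (the simulator’s own collision helpers may be implemented differently):

```python
# Minimal sketch of a point-inside-shape check with Pymunk
# (illustrative; the simulator's own collision helpers may differ).
import pymunk

def point_inside(shape: pymunk.Shape, point) -> bool:
    # point_query() returns a negative distance when the point lies inside
    return shape.point_query(point).distance <= 0

space = pymunk.Space()
lake = pymunk.Poly(space.static_body, [(0, 0), (100, 0), (100, 100), (0, 100)])
space.add(lake)

print(point_inside(lake, (50, 50)))   # True: sensor center is over the lake
print(point_inside(lake, (150, 50)))  # False
```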

Touch sensor

The touch sensor interacts with rocks and bottles via collision detection. This is done by checking whether the shape of the sensor intersects with the shape of the rock or bottle. Note that the touch sensors have a simplified quadrilateral shape to improve the simulation performance.

Ultrasonic sensor

The ultrasonic sensor, which points forward, measures distance using Pymunk’s segment queries. These can be seen as ray-casts: something is shot from a point in a given direction (drawing an invisible straight line) until it hits an object. See the image above, where these lines are drawn. The measured distance is the length of the line. The ultrasonic sensor draws two lines, one for each eye, to better simulate the real robot. If both ’eyes’ see an object, the measurement of the left eye is used.
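
A minimal sketch of one such ray-cast with Pymunk’s segment_query_first (the maximum range and query radius are example values; in the simulator the resulting length is converted from pixels back to millimeters):

```python
# Minimal sketch of one ultrasonic ray-cast using Pymunk's segment query
# (illustrative; the max range and query radius are example values).
import math
import pymunk

def measure_distance(space: pymunk.Space, origin, angle_rad, max_range=2550.0):
    end = (origin[0] + max_range * math.cos(angle_rad),
           origin[1] + max_range * math.sin(angle_rad))
    hit = space.segment_query_first(origin, end, 1.0, pymunk.ShapeFilter())
    if hit is None:
        return max_range                 # nothing within range
    return hit.alpha * max_range         # alpha = fraction of the ray travelled
```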

Robot Data

Movement

The movement of the robot is performed using movement jobs. These are floating-point numbers representing the number of millimeters a wheel of the robot has to move. One job represents the movement of one wheel during one frame. A movement therefore consists of many jobs, which are stored in a thread-safe queue in the RobotSimulator (see the section ’Communication and internal structure’ for more information about the RobotSimulator). Jobs are added in bulk and retrieved one by one every frame. Because the wheels can move independently of each other, each wheel has its own queue. The movement of the robot as a whole is calculated from the movements of the two wheels using the differential steering principle.
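
A minimal sketch of that per-frame calculation (the 120 mm wheel base follows from the wheel offsets in the measurements table below; variable names do not match the simulator’s internals):

```python
# Minimal sketch of the differential-steering update applied each frame,
# given the two wheel jobs (millimeters moved during this frame).
import math

WHEEL_BASE_MM = 120.0   # distance between the wheels (from the measurements table)

def apply_wheel_jobs(x, y, heading, left_mm, right_mm, wheel_base=WHEEL_BASE_MM):
    distance = (left_mm + right_mm) / 2.0          # travel of the robot's center
    heading += (right_mm - left_mm) / wheel_base   # change of heading (radians)
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return x, y, heading
```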

The third motor, to which the probe arm is connected, works similarly to the wheel motors, but it uses degrees-per-frame jobs instead of millimeters-per-frame jobs, because in the simulation the arm only rotates around a fixed anchor point in 2D space.

Sound

The sound output of the simulation is handled in the fake API. The sound connector intercepts the calls that would normally be made to the operating system of the robot and converts them into calls to a text-to-speech function, a sound-file player function or a beep function. The library ’pyttsx3’ accepts text and can output it as sound on any operating system. The library ’simpleaudio’ can play audio files and can produce beeps of a specified frequency for a given duration.
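
A minimal sketch of these three output paths using the two libraries (illustrative; the real sound connector also passes sound jobs to the simulator, as described below):

```python
# Minimal sketch of the three sound paths using pyttsx3 and simpleaudio
# (illustrative; not the fake API's actual sound connector).
import numpy as np
import pyttsx3
import simpleaudio as sa

def speak(text):
    engine = pyttsx3.init()   # uses the platform's TTS backend (espeak on Linux)
    engine.say(text)
    engine.runAndWait()

def play_file(path):
    sa.WaveObject.from_wave_file(path).play().wait_done()

def beep(frequency=440.0, duration_s=0.2, sample_rate=44100):
    t = np.linspace(0, duration_s, int(sample_rate * duration_s), False)
    tone = np.sin(2 * np.pi * frequency * t)
    audio = (tone * 32767).astype(np.int16)          # 16-bit PCM samples
    sa.play_buffer(audio, 1, 2, sample_rate).wait_done()
```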

Just like robot movement, sound is also passed to the simulation using jobs. For each frame, a job tells the simulator what to display on screen: for beeps, the frequency of the beep; for text-to-speech, ’saying <text provided by the user>’; for sound files, the name of the file. Displaying the sound is useful when actual sound output is not desired.

Sensors

Every frame the simulation checks the robot’s sensors. The sensor data is stored in the RobotState using the address of the sensor as the key. This way data can be requested by using the correct sensor port address.

To prevent threading issues, each sensor has its own lock. The lock is released whenever a sensor value is stored, and locked again after a client has successfully requested the value. This prevents clients from spamming the simulator for a new sensor value hundreds of times per frame, when the value can only change once per frame. To prevent a client thread from blocking, at most one request per frame is made to the simulator: if the time since the last request has not yet exceeded the duration of one frame, a cached value is returned.
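
The sketch below illustrates this locking and caching scheme (illustrative; names and structure differ from the simulator’s RobotState):

```python
# Minimal sketch of the per-sensor lock and frame-time cache
# (illustrative; not the simulator's actual RobotState code).
import threading
import time

FRAME_TIME = 1 / 30

class SensorValueStore:
    def __init__(self):
        self._values = {}         # sensor address -> latest value
        self._locks = {}          # sensor address -> threading.Lock
        self._cached = {}         # sensor address -> last value handed to a client
        self._last_request = {}   # sensor address -> time of last real request

    def store(self, address, value):
        """Called by the simulator when a new sensor value is available."""
        self._values[address] = value
        lock = self._locks.setdefault(address, threading.Lock())
        if lock.locked():
            lock.release()        # a fresh value is available: unblock the client

    def request(self, address):
        """Called from a client thread via a DataRequest."""
        if time.time() - self._last_request.get(address, 0) < FRAME_TIME:
            return self._cached.get(address)    # value cannot have changed yet
        lock = self._locks.setdefault(address, threading.Lock())
        lock.acquire()            # stays locked until store() releases it again
        self._last_request[address] = time.time()
        self._cached[address] = self._values.get(address)
        return self._cached[address]
```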

Communication and internal structure

For the simulator to work, each application that would normally run on a brick (as part of an entire robot) now needs to be able to connect to the simulator. These brick Python applications, which use the fake API, are regarded as ’clients’ of the simulator. To facilitate the connections between a client and the simulator, TCP/IP sockets are used. Commands are sent over these connections. The connection and messages of one brick ’client’ are shown in the figure below.

Types of commands:

  • DataRequest: Request to retrieve the value of a given sensor.

  • RotateCommand: Command to move a motor by a given distance at a given speed.

  • LedCommand: Command to activate an LED with a given color.

  • SoundCommand: Command to ‘display’ a given sound.

To establish these connections, an object of the ServerSockets class is initialized in a separate thread when the simulator starts. This server listens for incoming connections. Based on the configuration, it shows for which brick ’client’ an application should be started. When a client application starts, it creates a connection to the simulator; the ServerSockets object then creates a ClientSocketHandler in a new thread, specifically for the connected client. From then on, this handler is responsible for the connection with that client. When one connection drops, all other connections are aborted by the ServerSockets object, the simulation resets itself and the connection process starts over.

When the simulator is running, the ClientSocketHandler knows to which brick on which robot it belongs and therefore knows how to process its messages correctly. Clients themselves do not ‘know’ who they are; they simply send their commands over the socket to the attached ClientSocketHandler.

The ClientSocketHandler forwards the received message to the MessageProcessor, which converts the ev3dev2-based values into the millimeter/degree-based values that the simulator uses. This data is then forwarded to the correct RobotSimulator object, based on which RobotSimulator object was given on startup.

How a single message is processed within the simulator is shown in the figure below. The message comes in as bytes and is forwarded to the message processor, which translates it into a request or into a series of jobs that are added to the robot simulator. The call then returns either the total job time of the actuator action (in the case of motor jobs) or the sensor value.
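
A minimal client-side sketch of this exchange (the JSON wire format and the helper below are hypothetical stand-ins; the fake API’s actual serialisation may differ):

```python
# Minimal sketch of sending a command to the simulator over TCP
# (illustrative; the JSON wire format is a hypothetical stand-in).
import json
import socket

def send_rotate_command(host, port, motor_address, distance, speed):
    command = {
        'type': 'RotateCommand',
        'address': motor_address,   # e.g. 'ev3-ports:outA'
        'distance': distance,       # already converted to millimeters/degrees
        'speed': speed,
    }
    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps(command).encode())
        return sock.recv(1024)      # e.g. the total job time of the movement
```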

Internal structure

Now that it is clear how messages from the clients are processed and reach a RobotSimulator, it is time to give an overview of how the simulation works internally.

Handling the state of a virtual world

The state of the virtual world is stored in a WorldState object. On startup of the simulator, all objects described in the configuration file, such as rocks, bottles and lakes, are added. During this startup procedure, RobotState objects are also created. Each RobotState creates all sensors and actuators that are described in the configuration file.

Together, the WorldState and RobotState objects describe the world at a given timestamp. They contain the locations and shapes of all objects and robots. The WorldState is changed by the WorldSimulator; the RobotStates are changed by the RobotSimulators.

Handling the simulation of a virtual world

The WorldSimulator is responsible for simulating the behaviour of the virtual world. It contains a RobotSimulator for each robot. Each update (30 times per second), the GUI thread calls the update function of the WorldSimulator. The WorldSimulator then calls Pymunk to handle the physics, updates the location of the sprites based on the WorldState object, and calls the update method of each RobotSimulator.

The RobotSimulator is responsible for simulating the behaviour of a single robot. It stores upcoming actuator actions, such as motor and sound jobs, that are added by the MessageProcessor, and it updates the RobotState every frame based on the actuator actions in its queues.
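
The per-frame sequence could be sketched roughly as follows (illustrative; the sync_sprites() helper and the exact method names are assumptions, not the simulator’s real API):

```python
# Rough per-frame update sequence (illustrative; sync_sprites() and the
# exact method names are assumptions, not the simulator's real API).
class WorldSimulator:
    def __init__(self, space, world_state, robot_simulators):
        self.space = space                     # Pymunk space
        self.world_state = world_state
        self.robot_simulators = robot_simulators

    def update(self, delta_time):
        self.space.step(delta_time)            # 1. let Pymunk handle the physics
        self.world_state.sync_sprites()        # 2. move sprites to match the state
        for robot_simulator in self.robot_simulators:
            robot_simulator.update()           # 3. process queued actuator jobs
```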

Visualisation

After every update (30 times per second), the Visualiser class, which runs on the GUI thread, draws the current state based on the WorldState and the RobotStates. It retrieves the sprites and draws them. The sprites are drawn per list to improve performance, since the underlying software can efficiently render batches of sprites. A summary of the workings of the simulator is shown in the figure below.

Bluetooth

Some robots support two or more bricks/clients. In the real world, these bricks can communicate via Bluetooth. This communication is most often done using the Python library PyBluez, so the BluetoothSocket of PyBluez is used as the starting point for simulating Bluetooth. Because identical code has to run on the simulator and on the real robot, the interface of the simulated Bluetooth socket must be identical to that of the PyBluez BluetoothSocket. Not every computer has Bluetooth capabilities and, even if it does, Bluetooth cannot be used to communicate over localhost. Because of this, a TCP/IP socket is used to handle the Bluetooth communication. This socket is wrapped in a BluetoothSocket wrapper.

This wrapper translates any specific BluetoothSocket functions to regular Python socket functions and vice versa.
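
A minimal sketch of this wrapper idea: a class that exposes a few PyBluez-style methods while delegating to a plain TCP socket over localhost (the port mapping below is hypothetical, and the real wrapper covers more of the interface):

```python
# Minimal sketch of a BluetoothSocket wrapper around a plain TCP socket
# (illustrative; the port mapping is hypothetical).
import socket

BASE_PORT = 6840   # hypothetical localhost port offset

class BluetoothSocket:
    def __init__(self, proto=None):
        self._sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def connect(self, address):
        mac, channel = address                 # PyBluez uses (mac, channel)
        self._sock.connect(('localhost', BASE_PORT + channel))

    def bind(self, address):
        _, channel = address
        self._sock.bind(('localhost', BASE_PORT + channel))

    def listen(self, backlog=1):
        self._sock.listen(backlog)

    def send(self, data):
        return self._sock.send(data)

    def recv(self, bufsize):
        return self._sock.recv(bufsize)

    def close(self):
        self._sock.close()
```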

’Faking’ the ev3dev2 library with connectors

To gather data about the behaviour of the executing robot application, it is necessary to tap into the internal ev3dev2 Python library classes. Instead of executing movements on a real robot, a brick application should send messages to the simulator. This is done by creating connector classes. These classes provide a way of extracting data from the ev3dev2 library and thereby create the fake library we desire.

Take, for example, the motor.py file. This file contains classes like LargeMotor and MoveTank, which provide a way to send instructions to the robot’s motors. The base class is the Motor class, see the figure above. On a real brick, this class interacts with the robot’s hardware by writing variables to files. Much of the movement logic is in the MoveTank and MotorSet classes; the Motor class mostly holds setters for hardware-specific variables, which are written to those files. These setters are an ideal place to intercept the variables and send them to the simulator instead of to the hardware.

The variables are set on the Motor class whenever a function in, for example, the MotorSet class processes a movement. That function then calls a function in the Motor class to execute the movement using the variables that were just set. The interception of these calls is done using a connector. A connector needs to keep track of the variables that are set and also intercept the call that executes the movement. In the Motor class, a MotorConnector is initialized. This connector intercepts the previously mentioned setters (figure below, left image) and the functions that execute the movement (figure below, right image).

The run_to_rel_pos() function of the MotorConnector (called in the last line) uses the intercepted movement variables to create a RotateCommand, which is sent to the simulator; see the figure below.
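
The following sketch captures the idea in simplified form (illustrative only: send_to_simulator() is a placeholder for the socket call described earlier, and the real Motor and MotorConnector classes hold many more attributes):

```python
# Simplified sketch of the connector pattern (illustrative only).
def send_to_simulator(command):
    """Placeholder for serialising the command and sending it over the socket."""
    print('sending', command)
    return 0.0   # the simulator would reply with the total job time

class MotorConnector:
    def __init__(self, address):
        self.address = address
        self.position_sp = None   # intercepted target position (degrees)
        self.speed_sp = None      # intercepted speed setpoint

    def run_to_rel_pos(self):
        # Build a RotateCommand from the intercepted variables.
        command = ('RotateCommand', self.address, self.position_sp, self.speed_sp)
        return send_to_simulator(command)

class Motor:
    def __init__(self, address):
        self.connector = MotorConnector(address)

    @property
    def position_sp(self):
        return self.connector.position_sp

    @position_sp.setter
    def position_sp(self, value):
        # Instead of writing to a device file, hand the value to the connector.
        self.connector.position_sp = value

    def run_to_rel_pos(self, **kwargs):
        return self.connector.run_to_rel_pos()
```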


The process of intercepting data using connectors is repeated for other components like Leds and Sound. The sensor classes do not send data to the simulator; they request it. This is done by a SensorConnector, which sends a DataRequest to the simulator and blocks until a response is received.

Movement state

The Motor class also keeps track of the current state of the motor, which can be RUNNING or HOLDING. This state is used in a variety of functions of the ev3dev2 library. To keep track of it, a variable running_until is maintained. When a movement function is called, the simulator responds with the amount of time the movement is going to take. Using this information, it can be determined whether the motor is still moving without sending extra calls to the simulator.
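
A minimal sketch of this bookkeeping (illustrative; the state names mirror the ev3dev2 RUNNING and HOLDING states, but this is not the library’s actual code):

```python
# Minimal sketch of the running_until bookkeeping (illustrative only).
import time

class MotorState:
    STATE_RUNNING = 'running'
    STATE_HOLDING = 'holding'

    def __init__(self):
        self.running_until = time.time()

    def movement_started(self, job_time_s):
        # job_time_s is the total job time returned by the simulator
        self.running_until = time.time() + job_time_s

    @property
    def state(self):
        if time.time() < self.running_until:
            return self.STATE_RUNNING
        return self.STATE_HOLDING
```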

Changes made to the code of the existing ev3dev2 library have been kept to a minimum. These changes are:

  • connectors in the Motor, Leds, Sound and sensor classes.

  • running_until in the Motor class.

Robot Measurements

Below are the dimensions and positions of the robot and its parts. The positions and dimensions of the obstacles on the course can be found in the config files. All values are in millimeters.

Measurements

| Part | Width (mm) | Height (mm) |
|------|------------|-------------|
| Body | 72 | 110 |
| Color sensor | 22 | 22 |
| Touch sensor left/right | 72 | 41 |
| Touch sensor rear | 129 | 10 |
| Ultrasonic sensor top | 55 | 45 |
| Ultrasonic sensor bottom | 57 | 22 |
| Wheel | 28 | 55 |

Touch sensor vertical side: 19mm, horizontal side: 50mm.

In the simulation, the center of the axle between the two wheels is used as the center point of the robot. The measurements below indicate the offset of the center point of a given part relative to the robot’s center point.

Small robot parts offset

| Part | X | Y |
|------|---|---|
| Body | 0 | -22.5 |
| Color sensor | 0 | +81 |
| Front ultrasonic sensor | 0 | -91.5 |
| Touch sensor left | -75 | +102 |
| Touch sensor right | +75 | +102 |
| Wheel left | -60 | 0 |
| Wheel right | +60 | 0 |

Large robot parts offset

| Part | X | Y |
|------|---|---|
| Arm | +15 | +102 |
| Body left | +39 | -22.5 |
| Body right | -39 | -22.5 |
| Color sensor center | 0 | +84 |
| Color sensor left | -69 | +102 |
| Color sensor right | +69 | +102 |
| Touch sensor left | -65 | +102 |
| Touch sensor right | +65 | +102 |
| Touch sensor rear | 0 | -165 |
| Ultrasonic sensor front | -22 | +56 |
| Ultrasonic sensor rear | 0 | -145 |
| Wheel left | -60 | 0 |
| Wheel right | +60 | 0 |

  1. https://www.lego.com/en-us/product/ev3-intelligent-brick-45500

  2. https://www.ev3dev.org/

  3. https://github.com/ev3dev/ev3dev-lang-python

  4. http://espeak.sourceforge.net/

  5. https://arcade.academy/

  6. http://www.pymunk.org/en/latest
