Methodology - tech-igloo/Semi-autonomous-UV-sterilization-bot Wiki
The CoppeliaSim simulation platform is used for modeling the robot, adding sensors, and testing the path-planning algorithm. The robot is modeled with proportionate dimensions, and the sensor models are matched to the sensors selected: each has a conical FOV of 50 deg with an appropriate threshold and range. Dead spaces are regions outside the sensors' FOV, where the sensors cannot detect any object, so the sensors are placed to minimize dead space and maximize the sensing area. Once the robot and sensor models are complete, the path-planning algorithm is written and simulated in CoppeliaSim. The algorithm is adapted to the present application, with feedback ultimately provided by encoders. For testing purposes, the algorithm is simulated on a time basis, assuming the robot moves at constant linear and angular velocities so that its position can be tracked from elapsed time. Once the code is tested, the whole logic can be switched from the time-based approach to an encoder-based approach, which is much more precise. After the path-planning algorithm is tested, we implement the obstacle-avoidance algorithm to prevent the robot from colliding with obstacles along the way. We planned to avoid them using ultrasonic sensors, which are simulated so the algorithm can be adjusted to avoid obstacles and navigate autonomously in a static environment. The algorithm is tested under different obstacle shapes (concave and convex) and with different waypoints to verify that the robot can reach each point successfully.
FreeRTOS is a real-time operating system designed for multitasking at the microcontroller level and is programmed in embedded C. A dual-core microcontroller running FreeRTOS powers the whole project. Because FreeRTOS targets low-level microcontrollers that may not have multiple cores, it lets us run multiple threads even on a single core. Tasks are assigned priorities, and FreeRTOS schedules each task by priority level so that multiple functions run in parallel (multiple concurrent threads). Among the IDEs available, such as Eclipse, Arduino, and VS Code, we use VS Code with ESP-IDF. The code is written in embedded C, and the ESP-IDF extension compiles and uploads it to the development board. The extension also provides a serial debugger, which is very helpful for debugging errors.
Fig.5 Software Stack
The software stack consists of all the submodules included in the platform; Fig. 5 shows an overview of the whole project. The ESP32 is the brain of the project and is equipped with a dual-core microprocessor. Its storage space holds programs, Wi-Fi credentials, and the paths recorded by the user in manual mode, which are retrieved in auto mode for autonomous navigation. Running different functions in parallel, rather than sequentially, is a necessity in this project. FreeRTOS lets us run multiple functions in parallel even on a single-core processor, while the dual-core ESP32 can run two tasks truly in parallel even without an RTOS. We have three important tasks: handling the web interface between the user and the hardware; controlling the actuators and gathering sensor information; and an interrupt task for the encoders that tracks the current position and calculates the robot's linear and angular velocities. The ESP32 has a built-in Wi-Fi module that can operate in two modes, STA and SoftAP (SAP). STA mode lets the ESP32 connect to an existing network; in SAP mode, the ESP32 acts as an access point. To access the web interface, the user must be on the same network as the microcontroller. Once connected, the web interface provides all the options needed to control the ESP32 wirelessly. Core 0 of the ESP32 handles the entire web interface and runs completely independently of the other core.
Core 1 of the ESP32 controls the actuators in manual and auto mode and runs the encoder task that updates the current position and velocity. This core also runs the auto-mode algorithm, using a velocity-control PID loop to obtain straight-line motion. As a result, the web interface stays up and running even while Core 1 is busy handling the actuators and the auto-mode algorithm. In manual mode, a path is recorded while the robot's position is estimated from encoder feedback. Once recording is complete, the path is converted to a grid-based representation and saved in the ESP32's memory. In auto mode, the current position is monitored using encoder feedback. Before auto mode starts, the user selects a path, which is retrieved from the file stored in memory. The auto-mode algorithm runs while keeping track of the current and goal positions, and in parallel it checks for any obstacle in the path. If an obstacle is found, the robot performs obstacle avoidance and then resumes auto-mode navigation.
The user can access the web interface either by typing the IP address of the ESP32 or by entering esp32.local/ in the browser. The home page shows which mode the ESP32 is currently in, SAP (AP) or STA, and provides options such as Manual and Auto for driving the robot, an option to modify the Wi-Fi credentials, and a reset button that resets the whole microcontroller. The home page is shown in Fig 6 and Fig 7. It also displays the battery percentage and refreshes every 5 seconds, which keeps the displayed percentage up to date. Selecting manual mode takes the user to the page shown in Fig 8. This page controls the robot; the path is recorded once the user presses Start and operates the robot, as shown in Fig 9. When the path is finished, the user presses Stop and then Save if satisfied with the path, or New if not, as shown in Fig 10, then names the path and submits it as shown in Fig 11. The path is then saved in the microcontroller's memory.
Fig.6. Default Home page (method 1)
Fig.7. Default Home Page (method 2)
Fig.8. Manual Mode
Fig.9. Manual mode path recording
Fig.11. Naming path and saving
To use the robot in auto mode, the user presses the Auto button on the home page, which leads to a page listing the stored path names, as shown in Fig 12. The user navigates to the required path and selects an option as shown in Fig 13: Execute to start the robot in auto mode, or Delete to remove the path, after which the user is redirected to the home page. Pressing Execute redirects to the page shown in Fig 14, where the user can pause the robot at any time with the Pause button or end auto mode completely with the Stop button.
Fig.13. Auto mode path options
Fig.14. Auto mode path execution
After pressing Execute, the bot is in autonomous mode. If the user presses Home, the Stop button can still be reached by navigating back to auto mode and the specific path, or directly from the home page. If the user stops the robot in auto mode, a Docking button is enabled on the home page, which sends the robot back to the home position (starting point). To change the ESP32's mode of operation, the user presses SAP/STA (shown in Fig 7), which redirects to the page shown in Fig 15; from there, the ESP32 can be started in SAP mode, or STA mode can be used after saving the Wi-Fi credentials as shown in Fig 16.
Fig.15. Modes of ESP32
Fig.16. SAP mode Wi-Fi credentials
To determine the bot's position, we track the motor velocities to calculate how far the bot has moved and in which direction. For this positional feedback, odometry is performed with optical encoders, and hardware interrupts are used to capture the encoder signals. The selected encoders have a resolution of 25 pulses per rotation, which is doubled with the help of dual-edge-detection interrupts on the ESP32. A FreeRTOS queue API is used here to buffer interrupts that the ISR cannot process fast enough at higher speeds, minimizing the risk of missed interrupts that would cause localization errors. An encoder_task runs in parallel on Core 1; whenever the queue has an entry, the task updates the corresponding global variables depending on the GPIO the interrupt came from. The interrupt service is also disabled when actuation is not required, to discard unwanted ticks.
The motor driver acts as an intermediary between the motors and the microcontroller. Motor direction is reversed by switching the polarity at the motor driver's direction pins, and a PWM signal is used for speed control. The microcontroller pins connected to the driver's direction pins are configured as outputs and drive a digital signal (HIGH/LOW). To generate the PWM signal, a timer is first configured by selecting the bit resolution, frequency, mode, and timer number; these parameters determine the duty cycle of the PWM signal used for speed control. The timer module is then connected to the GPIO pin wired to the motor driver's speed-control pin (the enable pin on the L298N, or the VR pin on the BLDC motor driver). Once the timer is connected and the pin is set to operate as PWM, the pin runs at the timer frequency, and the duty cycle can be varied from 0 to 100 to control the motor speed.
The web server that acts as the user interface is written in HTML, while the underlying programming is in embedded C. The HTML is developed around the features to be provided to the user, and testing is done in parallel to see how the microcontroller reacts to different test cases. Components such as the motors, motor driver, encoders, and batteries are selected based on the end goal of the robot implementation. A PID-based algorithm is used for autonomous navigation over a predefined path. Once the HTML web server was up and running, we integrated the GPIO handling and the algorithm implementation in embedded C with the real hardware (motor driver and encoders) to check the robot's performance in a real environment, with PID tuning done accordingly to maximize performance.
The circuit diagram is designed using the EasyEDA platform. Fig.17. shows the connections between the ESP32 and the motor driver, encoders, and voltage-monitoring system. This version is only a test model based on the components selected for functionality testing; the final design will use a different motor driver and layout. All connections are made as shown.
Fig.17. The circuit diagram(testing bot)
Fig.18. The circuit diagram(final bot)
The URM37 ultrasonic sensor has three modes of operation: PWM trigger mode, auto measure mode, and serial passive mode; we have chosen auto measure mode. In this mode, the sensor must be programmed initially. The data communicated to the sensor has the following format:
- Written data can only be in the range of 0-255.
- Address 0x00-0x02 is used to configure the mode.
- 0x00 – threshold distance (Low) 0x01 – threshold distance (High)
- 0x02 – Operation Mode (0xaa for autonomous mode) (0xbb for PWM passive control mode)
- The return data format will be: 0x44＋Add＋Data＋SUM
The data is written to the ultrasonic sensor over UART; the pin connections with an Arduino UNO are shown in Fig 19, and similar connections are made with the ESP32 for programming the sensor into auto mode and continuously reading the sensor data to check whether an obstacle is present.
Fig. 19. UART pin connection with Arduino
Auto mode algorithm
The path is stored in the microcontroller's memory, retrieved in auto mode, and passed to the algorithm. Documentation of the path-following algorithm:
- This works on the assumption that the path has been converted to a coordinate-based representation.
- The stop point is updated with the initial position of the bot.
- First, it rotates using a PID controller to align itself with the target point (angle varies from -180 to 180, +ve in the right direction).
- Next, it starts moving forward to cover the required distance.
- After it has reached the target point, the distance is reset to 0. The angle is not reset, it is used to keep a track of the bot’s orientation.
- The stop point is updated with the current position of the bot.
- Steps 2-6 are repeated for the next point. Example: the bot is at (0,0), oriented at 0 deg, and needs to go to (1,1):
  1) The stop point is initialized to (0,0).
  2) The angle that needs to be rotated = 45 deg.
  3) The distance that needs to be covered = sqrt(1+1) = 1.414 m.
  4) So, first, it rotates by +45 deg at (0,0).
  5) Next, it moves forward by 1.414 m.
  6) The distance is reset to 0 and the stop point is updated to (1,1).
  7) The steps are repeated for the next point.
The above algorithm is purely a path-following algorithm; it does not include obstacle avoidance. The flow chart of the path-following algorithm along with obstacle avoidance is shown in Fig. 20.
The normal-motion function keeps track of the rotation error and the distance error from the goal point and updates flags that control whether the robot moves forward or rotates. Initially, when a goal point is set, the rotation flag is set to 1, meaning the robot must first orient itself toward the goal. Once the robot is within the threshold limit, the done flag is set to 1 and the rotation flag is set to 0; when the done flag is set, the PID loop parameters are reset to their default values. With the rotation flag at 0, meaning the robot is oriented toward the goal, the robot moves toward the goal point while tracking the error. Once the robot is close enough, the flags are updated, the next goal point is loaded, and the robot starts navigating toward it. In this way, waypoint navigation is implemented. The flow diagram of the function is shown in Fig 21.
Fig.20. Path following and obstacle avoidance Flowchart
Fig.21. Normal motion flow diagram
A major difference in this version is that docking is available only when the user stops an executing path midway, or when the executed path is an open path whose final coordinates are not close to home. Only in these cases does the Docking button appear on the home webpage. When it is pressed, the docking algorithm executes the last path from the final executed coordinates back to the home coordinates, and the interface redirects to another webpage with Pause, Stop, and Home buttons, similar to auto mode. The current version of the algorithm only reaches the home/docking point; it does not perform the actual docking connection for charging. That is left for future work, although research has been done on an automated charging connection while docking.
Fig. 22. Docking Path reversal flow diagram[method 1]
The docking logic is mainly needed when the robot must return to the home position or to the charging station where the battery is charged. It is triggered when the user presses the Stop button while the bot is executing auto mode (assuming the path is always a closed path, so the robot's initial and final coordinates are the same) or when the battery is critically low. The robot is first stopped by setting auto_flag to 0. The logic then walks through the currently executing path, reverses it up to the waypoints already achieved, stores the reversed path in the currentpath.txt file, and puts the robot back into auto mode with the modified path. In this way, the robot navigates back to the home position autonomously.
Fig.23. Docking Path reversal flow diagram[method 2]