Minitask 5
Yay, you did it! 🎉 Congratulations on making it to the final assignment stage! 🎉
In Mini-Task 5, your goal is to develop an object search behaviour that enables a robot to locate predefined objects visible to its camera. This task involves both group work (for the implementation and demo) and individual work (for the demo and the report). It is mandatory for all group members to attend the demo in order to receive a grade for this coursework.
Details on submission and marking are available in Moodle.
There are three separate components to this assignment, each with its own deadline. Please refer to Moodle for the specific deadlines.
1. Team's ROS Package - Submitted to GitLab - Deadline: Monday, 9 December 2024, 17:00
- Submit your team's ROS package on GitLab, tagged with `minitask5`.
- Make sure that you have already submitted your solutions to Minitasks 1-4 on GitLab, with the appropriate tags. Also ensure that you have shown these solutions to the lab helpers and had them ticked off (if not, you cannot proceed to Minitask 5).
- Each team member should use their own GitLab credentials to push contributions, as individual contributions will be assessed.
2. Individual Report - Submitted to Moodle - Deadline: Friday, 20 December 2024
Each team member must submit a maximum 4-page individual report (excluding peer review and references) through the "Assignment report" in Moodle. The report should follow the IEEE conference template. It should be formatted as a scientific paper and written individually — this means you should not collaborate with your teammates on writing the report or generating figures/tables, nor should you use large language models (such as ChatGPT) to generate content. The report should include the following sections:
- Abstract (max 150 words): Summarize your approach, key results, and conclusions.
- Introduction: Provide a description of the solution you developed and an overview of your personal contributions to the assignment.
- Methodology: Outline your system design, including steps like image processing, sensor integration, control algorithms, motor control, and planning. Include any relevant diagrams, formulas, and pseudocode as needed. This section should explicitly focus on your individual contributions.
- Evaluation: Critically assess your working system. Present your evaluation methodology and data from your testing, including quantitative results.
- Personal Reflections: Discuss the strengths and limitations of your approach, and potential improvements. Reflect on whether your approach met the original objectives and evaluate the overall performance of your controller.
- Peer Evaluation: List the names of your group members (including yourself) and assign a performance mark between 1 and 5 for each member. Optionally, provide comments explaining your ratings. If no comments are provided, an even distribution of workload will be assumed.
- References: Use IEEE citation style for your references, as shown in the example references in the template.
3. Team Demo - In person during the assigned demo session
- Both (or all) team members must attend the demo session. Failure to attend will result in the loss of all coursework marks for the absent member.
- Demos will take place during lab hours in the last week of teaching delivery. There will be a dedicated session for robot demos (see below).
- Each demo will last 25 minutes: 5 minutes for setup, 5 minutes for demonstration and 15 minutes for questions. Your demo will be stopped after the allocated duration and you will be marked accordingly, so please prepare well.
For extra marks, you may demonstrate your code on the real robot in addition to simulation. You must tell us in advance so that we can arrange a slot; see Moodle for requesting a slot to demo on the robots. This slot may be on a different day than your scheduled demo slot. Prior to your scheduled demo, you will have the opportunity to test your solution on the real robot. The real-world arena will share many similarities with the Gazebo world (e.g., a green box, a red "fire hydrant", and blue tiles to avoid), so your code should be largely reusable across both platforms.
You are provided with the launch and world files. We have also created the map for your convenience (these files should be put under a `maps` folder unless you want to edit your launch file manually to provide a different path).
The task will be completed in the following Gazebo scene:
This scene is a training arena for you to implement your functionality. The training arena will resemble the test arena in terms of structure and complexity (same floor plan of the environment), but the positions of the objects will vary to assess the generality of your approach. Note: if you don't see some of the objects, go to the FAQ and follow the steps there.
Your task is to program your robot to find some objects in the scene:
A green box:
and a red fire hydrant:
There may be multiples of each object in the scene. In this case, you should count the objects as you travel and announce the count, making clear whether the object has been visited before (e.g. "I found the second green box" or "Green box #2 is found"). You need to utilise the robot's sensory input and its actuators to guide the robot to each item.
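To make the uniqueness requirement concrete, one option is to remember where each announced object was and suppress new announcements near those positions. The sketch below is only an illustration of that idea, not required code; the names (`report_detection`, `found_objects`) and the 1 m duplicate radius are assumptions you would adapt to your own design.

```python
# Hypothetical sketch: announce an object only if it is a new instance.
import math
import rospy

DUPLICATE_RADIUS = 1.0  # metres; assumed radius within which two detections
                        # count as the same object instance
found_objects = {"green box": [], "fire hydrant": []}

def report_detection(label, x, y):
    """Announce a detection only if it is a new instance of this label."""
    for (px, py) in found_objects[label]:
        if math.hypot(x - px, y - py) < DUPLICATE_RADIUS:
            return  # previously visited instance: say nothing
    found_objects[label].append((x, y))
    rospy.loginfo("I found %s #%d at (%.2f, %.2f)!",
                  label, len(found_objects[label]), x, y)
    # ...also publish an RViz marker for the new instance here
```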
There will be blue floor tiles in the environment. The robot shouldn't step on the blue tiles while searching for objects.
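One plausible way to detect the tiles, assuming you use the simulated Kinect's RGB stream, is to threshold the bottom strip of the image (the floor just ahead) for blue in HSV space and trigger an avoidance turn when too much of it is blue. The topic name, strip height, HSV bounds and 10% threshold below are assumptions to be tuned, not prescribed values.

```python
# Hypothetical sketch: check whether the floor directly ahead is blue.
import cv2
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def image_callback(msg):
    img = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    strip = img[-60:, :]                         # bottom ~60 rows: floor just ahead
    hsv = cv2.cvtColor(strip, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (100, 100, 50), (130, 255, 255))  # rough "blue" range
    blue_fraction = np.count_nonzero(mask) / mask.size
    if blue_fraction > 0.1:
        rospy.loginfo("Blue tile ahead - turn away before continuing")

rospy.init_node("blue_tile_checker")
rospy.Subscriber("/camera/rgb/image_raw", Image, image_callback)
rospy.spin()
```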
There will also be some additional objects in the scene, which will be randomly placed in the test arena to make the task more challenging:
Your robot should not hit these or any other obstacles in the scene; you will be penalised for collisions.
Success in locating an item is defined as:
- Stopping at an appropriate distance to the item with no obstacles in between the robot and the object AND
- Announcing that the robot has found an object: command line output AND a visible marker on RViz.
The robot should also:
- Avoid driving over blue floor tiles
- Not crash into the walls or any other obstacles in the scene
- Mark the location of found objects on the map in RViz (see resources at the end of the page)
You may choose any sensors available on the robot to drive your search behaviour. You may utilise any of the standard ROS packages or TurtleBot-specific packages. However, your system design should include the following elements:
- Perception of the robot's environment using the Lidar and Kinect sensors (in RGB or Depth space, or using a combination of both), in order to find the objects;
- An implementation of an appropriate control law implementing a search behaviour on the robot. You may choose to realise this as a simple reactive behaviour or a more complex one, e.g. utilising a previously acquired map of the environment;
- Motor control of the (simulated) TurtleBot robot using the implemented control law (a minimal sketch combining these elements follows this list).
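Putting these three elements together, a minimal reactive "beaconing" node might look like the sketch below: it segments green in the RGB image, steers proportionally towards the blob's centroid, and stops when the forward laser range becomes small. All topic names, HSV bounds, gains and the 0.8 m stopping distance are assumptions for the standard simulated TurtleBot, not a prescribed solution.

```python
# Hypothetical minimal reactive searcher: perception + control law + motor control.
import cv2
import rospy
from cv_bridge import CvBridge
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image, LaserScan


class ObjectSearcher:
    def __init__(self):
        self.bridge = CvBridge()
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        self.front_range = float("inf")
        rospy.Subscriber("/scan", LaserScan, self.scan_cb)
        rospy.Subscriber("/camera/rgb/image_raw", Image, self.image_cb)

    def scan_cb(self, msg):
        # Range straight ahead (index 0 on the TurtleBot's 360-degree scan)
        self.front_range = msg.ranges[0]

    def image_cb(self, msg):
        img = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (40, 100, 50), (80, 255, 255))  # rough "green"
        m = cv2.moments(mask)
        twist = Twist()
        if m["m00"] > 0:                       # green blob visible: beacon towards it
            cx = m["m10"] / m["m00"]
            error = cx - img.shape[1] / 2.0
            twist.angular.z = -0.002 * error   # proportional steering towards centroid
            if self.front_range > 0.8:         # stop roughly 0.8 m from the object
                twist.linear.x = 0.2
        else:                                  # nothing seen: rotate to keep searching
            twist.angular.z = 0.3
        self.cmd_pub.publish(twist)


if __name__ == "__main__":
    rospy.init_node("object_searcher")
    ObjectSearcher()
    rospy.spin()
```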
Only the behaviour demonstrated within the 5 minutes of your allocated demo time will be assessed. In other words, you should find as many objects as possible within 5 minutes.
This task is designed for you to experiment and be creative. You should be able to achieve most of the functionality for this task by combining the previous mini-tasks with a few tweaks. The minimum required functionality is a simple reactive behaviour that, in principle, allows the robot to find at least one object. For an average mark, the behaviour should be able to successfully find some objects at unknown locations. Further marks can be obtained through additional functionality, including (but not limited to):
- Implementing a clever exploration or path-finding algorithm to improve the efficiency of your search (see the sketch after this list).
- Exploiting maps and other structural features in the environment for clever search strategies.
- Implementing an enhanced perception system that combines multiple sensor modalities – e.g. using computer vision to leverage additional visual cues (edges, connected components), combining LaserScan, RGB and Depth features, etc.
- Attempting SLAM while performing the object search
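As an example of the first two ideas, if you have a saved map and run the navigation stack, a waypoint-based search could simply send a series of `move_base` goals covering the arena and scan for objects at each stop. The sketch below assumes a running `move_base` action server and uses placeholder waypoint coordinates that you would pick from your own map.

```python
# Hypothetical sketch: map-based search by visiting fixed waypoints via move_base.
import actionlib
import rospy
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

WAYPOINTS = [(1.0, 0.5), (2.5, -1.0), (-1.5, 2.0)]   # assumed map coordinates

def visit_waypoints():
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    for x, y in WAYPOINTS:
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = "map"
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0    # keep a valid quaternion
        client.send_goal(goal)
        client.wait_for_result()
        rospy.loginfo("Reached waypoint (%.1f, %.1f); scanning for objects", x, y)

if __name__ == "__main__":
    rospy.init_node("waypoint_explorer")
    visit_waypoints()
```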
This is not an exhaustive list, but should give you some ideas of how to achieve the best marks. If you have any questions about what might count for bonus marks, please ask someone in the lab.
The software component must be implemented in Python and use ROS Noetic on Ubuntu 20.04 LTS to communicate with the robot. The code should be well-commented and clearly structured into functional blocks. To obtain credit for this assignment you will need to demonstrate the various components of your software to the delivery team and be ready to answer questions related to the development of the solution – please follow carefully the instructions given in the lectures on the requirements for the demonstration.
My program:
- Finds the target objects within 5 minutes.
- Stops directly in front of the object, with no obstacle between the robot and the object
- Makes an announcement in the command line that a unique object instance is found (e.g. "I found green box #1!")
- Puts a marker on RViz map for each found object (See resources at the end of page)
- Avoids stepping on blue tiles
- Avoids all collisions
- Group implementation (30 points): You will be graded for:
- number of objects found (2 points each)
- whether you can avoid blue tiles (-5 points if not) and collisions (-5 points if not, unless you creatively use them for improving functionality 😄)
- code quality
- general functionality (including finding, stopping in front of, and announcing objects)
- Demo (50 points): You will be graded for:
- presentation quality (i.e. how coherently you can explain the general approach and your own contribution)
- code understanding (also includes how well you understand the general logic of the whole program)
- quality of your evaluation approach
- Report (20 points): Please read the first section in this page to understand what is required for the report. Only the first four pages of the text in the report will be graded and overflow pages will be ignored. Please do not change font sizes or the template to fit more material.
- Methodology description
- Description of your evaluation approach, coherence of results
- Reflections
- You may lose up to 10 points due to peer review. GitLab commit history will also be evaluated in case of problem situations.
Resource 1: Markers: Sending Basic Shapes (C++)
This tutorial is for C++, but gives the basic idea about markers.
Resource 2: Add a marker display to RViz
How to use rviz to visualize published messages.
Resource 3: The Construct tutorial on Visualising the real time trajectory path using markers
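Since Resource 1 is written in C++, here is a rough Python counterpart for the same idea: publishing one `visualization_msgs/Marker` per found object in the `map` frame, on an assumed `found_objects` topic. Add a Marker display in RViz subscribed to that topic (Resource 2) to see the spheres.

```python
# Hypothetical sketch: publish a sphere marker in RViz for each found object.
import rospy
from visualization_msgs.msg import Marker

marker_pub = rospy.Publisher("found_objects", Marker, queue_size=10)

def publish_object_marker(marker_id, x, y):
    marker = Marker()
    marker.header.frame_id = "map"
    marker.header.stamp = rospy.Time.now()
    marker.ns = "found_objects"
    marker.id = marker_id                 # unique id per object instance
    marker.type = Marker.SPHERE
    marker.action = Marker.ADD
    marker.pose.position.x = x
    marker.pose.position.y = y
    marker.pose.orientation.w = 1.0
    marker.scale.x = marker.scale.y = marker.scale.z = 0.3
    marker.color.g = 1.0                  # green sphere
    marker.color.a = 1.0                  # alpha must be non-zero to be visible
    marker_pub.publish(marker)
```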