Coursework

COURSEWORK TASK

Your robot is going on a treasure hunt: it should explore a known environment, detect predefined objects using its camera, and report their locations. Your robot will have a maximum running time of 5 minutes, and its performance will be assessed based on its behaviour during this time. As part of this task, you must submit an implementation (as a group) and an individual report, and attend a demo where you will be asked individual questions. See details after the submission information.

Please download the coursework world and map files from this GitHub. Your robot will need to find fire hydrants and FIRST 2015 trash cans:

  • Red fire hydrants
  • Green trash cans

The final demonstration arena will have the same structure and complexity as the training arena (the same floor plan), but the positions of the objects will vary to assess the generality of your approach. We will also place some obstacles that are not on the static map to test your collision-avoidance features.

There will be up to 6 objects (hydrants and trash cans) for your robot to find. There may be multiple instances of each object type; in that case, you should count the objects as you travel and mark their locations on the map. You should utilise the robot’s sensory input and its actuators to guide the robot to each item.
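As a minimal illustration (not part of the specification), the sketch below shows an rclpy node that reads the laser scanner, subscribes to the camera, and publishes velocity commands. The topic names /scan, /camera/image_raw, and /cmd_vel are the usual TurtleBot3 defaults in simulation but may differ on your setup, and the stopping distance is exposed as a ROS parameter so it can be changed live if asked at the demo.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan, Image
from geometry_msgs.msg import Twist


class SearchNode(Node):
    """Minimal sense-act loop: read the laser, creep forward until blocked."""

    def __init__(self):
        super().__init__('search_node')
        # Expose the stopping distance as a parameter so it can be changed
        # live during the demo, e.g.:
        #   ros2 param set /search_node stop_distance 0.6
        self.declare_parameter('stop_distance', 0.5)
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)
        self.create_subscription(Image, '/camera/image_raw', self.on_image, 10)
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.min_range = float('inf')

    def on_scan(self, msg: LaserScan) -> None:
        # Closest valid return anywhere around the robot.
        self.min_range = min((r for r in msg.ranges if r > 0.0),
                             default=float('inf'))

    def on_image(self, msg: Image) -> None:
        # Real detection logic would go here; this stub only drives forward
        # while the nearest obstacle is beyond the stopping distance.
        stop = self.get_parameter('stop_distance').value
        cmd = Twist()
        if self.min_range > stop:
            cmd.linear.x = 0.15
        self.cmd_pub.publish(cmd)  # an all-zero Twist stops the robot


def main() -> None:
    rclpy.init()
    rclpy.spin(SearchNode())


if __name__ == '__main__':
    main()
```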

Success in locating an item is defined as:

  1. Stopping at a safe distance from the object (you should be able to change this distance during the demo if asked), with a clear line of sight between the robot and the object.
  2. Announcing that the robot has found an object via a Visualization Marker in RViz. The marker should be placed roughly at the estimated object location and should clearly indicate the object type (for example, a green or a red marker); a minimal publishing sketch follows this list. If you cannot publish a Marker, fall back to command-line output (object type and coordinates) for this element.
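One way to do this (a sketch, not a required design: the helper name, topic name, and colour mapping are illustrative) is a small function that builds a visualization_msgs/Marker in the map frame:

```python
from visualization_msgs.msg import Marker


def make_object_marker(node, x, y, object_type, marker_id):
    """Build a sphere Marker at the estimated object position in the map frame."""
    m = Marker()
    m.header.frame_id = 'map'
    m.header.stamp = node.get_clock().now().to_msg()
    m.ns = 'found_objects'
    m.id = marker_id              # must be unique, or markers overwrite each other
    m.type = Marker.SPHERE        # a TEXT_VIEW_FACING marker could label the type
    m.action = Marker.ADD
    m.pose.position.x = float(x)
    m.pose.position.y = float(y)
    m.pose.orientation.w = 1.0
    m.scale.x = m.scale.y = m.scale.z = 0.3
    m.color.a = 1.0               # alpha must be non-zero or the marker is invisible
    if object_type == 'fire_hydrant':
        m.color.r = 1.0           # red marker for hydrants
    else:
        m.color.g = 1.0           # green marker for trash cans
    return m


# Publish from any node, e.g.:
#   marker_pub = self.create_publisher(Marker, 'found_objects', 10)
#   marker_pub.publish(make_object_marker(self, 1.2, -0.4, 'fire_hydrant', 0))
```

In RViz, add a Marker display subscribed to the same topic to see the published markers.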

The robot should also:

  • Localise correctly within the map (you may provide an initial pose using RViz, but all other navigation, such as waypoint selection, must come from your code; see the navigation sketch after this list)
  • Explore the map efficiently to maximise the area searched in minimal time
  • Avoid crashing into the walls or any other obstacles in the scene
  • Mark the locations of found objects on the map
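If you use Nav2 for navigation, one possible way to send waypoints from code is the Nav2 simple commander. The sketch below assumes Nav2 is running and already localised (e.g. after you set the initial pose in RViz); the waypoint coordinates are hypothetical placeholders that your own exploration logic should generate.

```python
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator


def make_pose(nav: BasicNavigator, x: float, y: float) -> PoseStamped:
    """Waypoint in the map frame; orientation left at identity."""
    pose = PoseStamped()
    pose.header.frame_id = 'map'
    pose.header.stamp = nav.get_clock().now().to_msg()
    pose.pose.position.x = x
    pose.pose.position.y = y
    pose.pose.orientation.w = 1.0
    return pose


def main() -> None:
    rclpy.init()
    nav = BasicNavigator()
    nav.waitUntilNav2Active()  # blocks until localisation and Nav2 are ready
    # Hypothetical waypoints chosen by hand from the static map -- your code
    # should generate these (e.g. from a coverage or frontier strategy).
    for x, y in [(1.0, 0.5), (2.5, -1.0), (0.0, 2.0)]:
        nav.goToPose(make_pose(nav, x, y))
        while not nav.isTaskComplete():
            pass  # run detection/obstacle checks here instead of spinning idly
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```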

We will place some beer cans in the scene to make the task more challenging. Your robot should not hit these or any other obstacle in the scene, and you will be penalised for collisions.

You may choose any sensors available on the robot to drive your search behaviour. You may utilise any of the standard ROS packages or TurtleBot-specific third-party packages; however, you will need to understand what your code is doing. Failure to explain the behaviour will be reflected as a penalty in your final mark.

Map files for simulation

Download from this GitHub:

Put these into the appropriate folders: .world files into turtlebot3_gazebo/worlds, .launch.py files into turtlebot3_gazebo/launch, and the .pgm and .yaml map files wherever you are storing your maps.
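If you need to wire a world file into a launch file yourself, a minimal .launch.py for Gazebo Classic (via the gazebo_ros package) might look like the sketch below. The world file name here is a placeholder; substitute the file you downloaded.

```python
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource


def generate_launch_description():
    # Placeholder file name -- substitute the coursework world you downloaded.
    world = os.path.join(
        get_package_share_directory('turtlebot3_gazebo'),
        'worlds', 'coursework.world')
    gazebo_ros = get_package_share_directory('gazebo_ros')
    return LaunchDescription([
        # Start the Gazebo server with the coursework world...
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(
                os.path.join(gazebo_ros, 'launch', 'gzserver.launch.py')),
            launch_arguments={'world': world}.items()),
        # ...and the Gazebo client (GUI).
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(
                os.path.join(gazebo_ros, 'launch', 'gzclient.launch.py'))),
    ])
```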

IMPORTANT SUBMISSION INFORMATION

There are four assessed elements plus an optional robot demo, each with a different deadline. Please see the bottom of this page and Moodle for deadlines.

  1. Your team's ROS2 package. You will need to submit a buildable ROS2 package (or packages) to your team’s GitLab repository.
  2. A video of your robot running in simulation, added to your GitLab along with your README. This will be used for moderation purposes.
  3. Your team's simulator demo. You must both attend the demo; otherwise you risk failing the demonstration or the entire coursework component. Demonstration slots are 30 minutes: up to 10 minutes for your team to set up, and the rest for the demonstration and questions. Only the behaviour demonstrated within the 5 minutes of your allocated demo time will be assessed; in other words, you should find as many objects as possible within 5 minutes. If you cannot make your demonstration for any reason (e.g., sickness), you must let the module convenors (and your team-mate!) know as soon as possible and submit an EC.
  4. An individual report. Each team member must submit an individual technical report (max 5 pages) that covers:
    • The overall logic and structure of the program.
    • Your individual responsibilities and contributions.
    • How you implemented your components and integrated them into the overall solution.
    • How you tested and evaluated your components and the system as a whole.
    • Your team management strategies.
    • A short personal reflection on your work.
    • Peer review.
  5. An optional robot demo. You will be given the opportunity to compete for up to 10 extra marks by demonstrating your solution in the real world. We will only accept a bonus-mark submission if your robot is ready to demonstrate on the day, which means you will need to show a working solution before the robot demo. The real-world arena will share similarities with the Gazebo world (a green rubbish bin, a red fire hydrant), so your code should mostly be reusable.

Key deadlines

  1. Code -- Wednesday 10th December 2025 (17:00 GMT) -- CS GitLab
  2. Video -- Wednesday 10th December 2025 (17:00 GMT) -- CS GitLab
  3. Demos -- to be scheduled on the 11th and 12th of December (you will be given a slot) -- in person
  4. Report -- Tuesday 23rd December 2025 (23:59 GMT) -- Moodle
  5. Optional robot demo -- to be scheduled on the 12th of December (you will be given a slot) -- in person

Extra information

The coursework is marked out of 100 and is worth 97% of the module. We will then add the marks you earned for the minitasks (worth up to 3%) to obtain your final grade.

There are 10 bonus marks available for the performance of your solution on the real robot; however, the module mark remains capped at 100%.

Marking scheme

  • Practical implementation (30%): You will be marked on code quality, functionality, the number of objects found, and the number of collisions. This is marked as a team, but you will receive individual marks for your contribution.
  • Demonstration (50%): You will be marked on presentation clarity, understanding of the overall solution and your individual parts, and the quality of your evaluation approach. This is individually marked.
  • Individual report (20%): You will be marked on your explanation of the methodology; communicating an overview of the whole system and detailing the parts you worked on (including testing, tuning, and integration); your evaluation approach and results (we expect systematic evaluation of some sort, to be explained in the last week of lectures); an explanation of the team management strategy and team-working plan; and your reflections. Up to 10 points can be deducted based on peer review scores.
  • Robot demo (10%): Groups will compete for the best scores. We will record the time your robot takes to find as many objects as possible in the scene. Marks will be allocated based on performance: 10 bonus marks are awarded to the fastest 10% of robots, 9 bonus marks to the next fastest 10%, and so on down to the slowest 10% receiving 1 bonus mark. For example, with 20 groups, the 2 fastest would receive 10 bonus marks each.

Marking Rubric

The marking scheme allows us to grade students based on their contribution and creativity, and their willingness to implement and explore functionality beyond the minitasks.

The minimum required functionality is a simple reactive behaviour that allows your robot to find at least one object in sight. You can earn an average mark by successfully locating multiple objects at unknown positions in the environment, even if done inefficiently. Higher marks can be earned by extending your robot’s capabilities beyond the minitasks. Examples include, but are not limited to:

  • Enhancing perception: combining visual cues, LaserScan, RGB-D data, or applying computer vision techniques for more reliable object detection (a colour-detection sketch follows this list).
  • Improving exploration: developing smarter search strategies or exploiting maps and structural features of the environment.
  • Fine-tuning existing behaviours: adjusting costmaps, colour detection, or exploration strategies to optimise overall performance.
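As one concrete example of a perception building block, the sketch below thresholds a camera image in HSV space with OpenCV to find the largest green blob (e.g. a trash can). The hue bounds and minimum blob area are placeholder values you would need to tune against the simulated models; the function name and structure are illustrative, not a required design.

```python
import cv2
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()


def detect_green(image_msg: Image):
    """Return the pixel centroid (u, v) of the largest green blob, or None."""
    bgr = bridge.imgmsg_to_cv2(image_msg, desired_encoding='bgr8')
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Placeholder hue/saturation/value bounds for "green" -- tune these
    # against the trash can model under your lighting conditions.
    mask = cv2.inRange(hsv, (40, 80, 40), (80, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(largest) < 200:   # ignore specks of noise
        return None
    m = cv2.moments(largest)
    return (m['m10'] / m['m00'], m['m01'] / m['m00'])
```

A detected centroid could then be combined with the corresponding laser range or depth reading to estimate the object's position in the map frame.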

Practical Implementation (30%)

| Mark | Description |
| ---- | ----------- |
| >= 70 | An excellent implementation featuring original functionality and elements beyond the original specification. The program code is efficient, well-structured, and commented. The solution is demonstrated very well, highlighting the additional functionalities, accomplishing the task with excellent performance. |
| 60-69 | A good implementation with some extra functionality or originality. The program code is well-structured and commented. Good demonstration of basic and additional features, accomplishing the task with very good performance. |
| 50-59 | A working software component with good functionality. Clear program structure and appropriate comments. The implementation is demonstrated successfully, accomplishing most of the task with good performance. |
| 40-49 | A working software component with basic functionality. Fair program structure and some code comments. The working implementation is demonstrated, accomplishing the search task partially. |
| 0-39 | Failure to successfully implement an object search behaviour. The robot fails to locate and announce any objects. |

Individual Viva (50%)

| Mark | Description |
| ---- | ----------- |
| >= 70 | An excellent presentation of the system design and reflections on its performance, including evidence of thorough testing and evaluation of the important system features. |
| 60-69 | A very good presentation of the system design and reflections on its performance, including evidence of testing and evaluation of the important system features. |
| 50-59 | A good presentation of the system design and reflections on its performance. Good explanation of individual contributions, but limited evaluation. |
| 40-49 | A basic presentation of the system design and its performance. The nature of the individual contribution is mentioned but lacks detail. |
| 0-39 | An inadequate presentation failing to demonstrate basic understanding of the overall program structure, without clear explanation of individual contribution. |

Written Report (20%)

| Score | Description |
| ----- | ----------- |
| >= 70 | An excellent, well-written report demonstrating extensive understanding, excellent evaluation, and good insight. |
| 60-69 | A comprehensive, well-written report demonstrating thorough understanding, good evaluation, and some insight. |
| 50-59 | A competent report demonstrating good understanding of the implementation and basic evaluation, which is not systematic. |
| 40-49 | An adequate report covering all specified topics at a basic level of understanding. No satisfactory evaluation is evidenced. |
| 0-39 | An inadequate report failing to cover the specified topics. |

Generative AI policy

You should not copy and paste large chunks of code from any AI tool, as this constitutes false authorship and is an academic offence. You must be able to explain all code you submit; failure to do so may result in a zero mark. Using AI to write your report is not allowed: your report must be entirely in your own words. Minor spelling or grammar mistakes will not be penalised.