Final Report
Robot soccer is an international phenomenon, with elite teams from all over the world gathering to pit technique, strategy, and brawn against each other to earn victory on the soccer field. There are currently two international organizations that regulate robot soccer competitions: the Federation of International Robot-soccer Association (FIRA) and RoboCup. Both organizations were founded in 1997 and have continued to gain international popularity.
The nature of robot soccer requires teams to integrate wireless communication, computer vision, feedback control, real-time programming, artificial intelligence, and mechanical design. Consequently, Brigham Young University has decided to include it as a culminating senior project for electrical and computer engineering undergraduates. This year, BYU hosted its own 1v1 robot soccer competition utilizing new hardware components (e.g., a new IP camera, the ODROID-U3, etc.).
Our purpose was to design, build, and refine a competitive, autonomous soccer-playing robot. Our goals were to successfully engineer an ecosystem and architecture for robot soccer development, and win as many competitions as possible.
The first step in achieving our goal was to understand the resources and constraints we were given. As we came to understand the hardware we were given and the software that was recommended (e.g., ROS, OpenCV), we were able to utilize a "thin-thread" model to begin developing our system.
Team Vektor Krum placed 1st in 5 of 7 competitions thus far. We found that the most challenging aspect of the project was translating vision data into smooth, controlled movements--a challenge many teams still struggled with at the close of the semester. However, we were able to successfully create a foundation that we could expand upon.
Ultimately, we met 13 of our 14 functional specifications in a timely manner, while spearheading development challenges for the class. Our robot proved to be one of the most controlled and competitive players on the field.
Robot soccer is an international phenomenon, with elite teams from all over the world gathering to pit technique, strategy, and brawn against each other to earn victory on the soccer field. There are currently two international organizations that regulate robot soccer competitions: the Federation of International Robot-soccer Association (FIRA) and RoboCup. Both organizations were founded in 1997 and have continued to gain international popularity.
Brigham Young University also hosts a robot soccer competition. Our robot soccer project is designed to be a dominant and competitive team in the BYU competition. This project is the second iteration of the senior project since it was put on hiatus and will involve completely new hardware components. Note that this semester, the final competition will be 1v1 rather than the original 2v2 to allow us time to properly develop and explore the new hardware components.
Robot soccer requires true system-level design. First, our team had to integrate a wireless communication system, computer vision, feedback control, real-time programming, artificial intelligence, and mechanical design. Second, the overall project was much too ambitious for a single student, so teamwork was essential.
The overall layout of the project is shown in the figure below.
The robots will be competing 1 vs. 1 on a small, walled soccer field. The field will be approximately 5 feet wide by 10 feet long. Goals will be 2 feet wide. An overhead IP camera will provide a video stream to each team's base computer to allow vision algorithms to compute the location and orientation of the two robots and the ball. This information will then be streamed via WiFi to the individual robots, which possess the artificial intelligence to analyze the data and make a competitive play.
Our project was to design the robot system, as well as the computer vision software. These robots will be completely autonomous and will require no human interaction after being activated and placed on the playing field. The scope of our project will involve mechanical design, computer vision, motion control systems, artificial intelligence, and system architecture. Most importantly, all of these systems will work seamlessly together while following the pre-programmed game rules.
Our robot consists of several subsystems. Processing was done on an ODROID-U3 running Ubuntu Linux from a microSD card. We used Robot Operating System (ROS) to allow inter-process communication. Our robot was equipped with a USB WiFi adapter to allow communication. Our robot is powered by two 7.2 V NiMH batteries connected in series. Mobility is provided by 3 omni-directional wheels driven by motors. The RoboClaw 2x5A board is used to control the motors. Finally, the robot includes a solenoid-powered kicker to allow it to shoot the ball at a distance.
The purpose of this document is to detail the results of the work that Team Vektor Krum did over the course of Winter Semester 2015. It will include detailed descriptions of the results of the project, review some of the challenges encountered and lessons learned, and discuss what we would do differently if we were approaching this project again.
One set of requirements comes directly from the rules of the BYU robot soccer competition:
- Robots must fit within a cylinder of 8-inch diameter and 10-inch height.
- The ball is a standard golf ball; the color will be determined by majority vote.
- The playing field is 5 feet wide by 10 feet long. The sides of the field are angled so that a ball cannot get stuck against the sides or corners.
- Goals will be 2 feet wide.
- Robots must be designed in a way that will not damage other robots, the playing field, or human spectators.
- Kickers are not allowed to shoot the ball hard enough to damage other players.
- Robots must avoid collisions.
    - Any contact with a defender while in the defense area will be a violation.
    - Outside the defense area, contact that noticeably changes a player's orientation, position, or motion will be a violation.
- We will not observe the traditional off-sides rule.
- Robots cannot drop parts on the field.
- Robots are not allowed to fix the ball to their frame or encompass the ball in any way that prevents access by other players.
- Timing
    - A game will consist of two 90-second halves.
    - Half time will be 60 seconds.
    - There will be a 120-second break between games, with at most a 15-second grace period.
    - To signal that a team is ready, one team member raises his/her hand.
- Initial/Reset Positioning: At the beginning of play,
    - The ball will be placed inside the center circle.
    - Teams will place their robots outside of the center circle, but on their half of the field.
    - Teams place their robots first; the ball is placed second.
- Time-outs
    - Each team can take (at most) one 15-second time-out during each half.
    - Teams request a time-out from the referee. After a request has been made, referees will declare the time-out the first time that:
        - The ball is no longer moving.
        - The opposing team is not advancing the ball towards its goal.
    - At their sole discretion, referees can call a 15-second technical time-out when:
        - The ball is no longer moving.
        - The ball is not inside one of the two goal boxes.
        - Both teams seem to be non-functioning.
    - After each time-out, play will resume from the initial/reset position.
- Interacting with Robots/Ball on the Field
    - Teams are not allowed to bump, nudge, kick, or touch the ball or the robots during play.
    - The only physical interaction with the robots and the ball will be during time-outs or between plays.
    - When the clock is running, team members are not allowed to touch their computers.
- Scoring
    - A goal is scored when the ball breaks the plane of the back line of the goal box.
    - If a game ends in a tie, there will be a sudden death period lasting at most 90 seconds.
        - Play stops when the first goal is scored.
        - Teams will have (at most) one 15-second time-out during the sudden death period.
        - There will be a 60-second break between normal play and the sudden death period.
    - If neither team scores during the sudden death period, the match will be determined by a coin toss.
        - The home team calls the coin toss.
- Robot Appearance/Construction
    - The uniform must be securely fastened to the top of the robot.
    - All cables, wires, batteries, etc. must remain inside the robot and out of view of the overhead camera.
    - Side panels must be sufficiently sturdy to keep the ball from being lodged inside the robot.
- Excessively Aggressive Play
    - Robots are to avoid collisions with other robots.
    - A robot that repeatedly causes substantive collisions may be penalized for excessively aggressive play.
        - The first offense will receive a verbal warning, with no stoppage of play.
        - The second offense will cause play to stop, and players will reset in the initial/reset position, except that the penalized robot will be placed near the side of the field (still near midfield).
    - Penalties and warnings will be called at the sole discretion of the referee.
The overall system architecture is outlined by the block diagram below.
The overall goal of the project was to build an intelligent, autonomous robot. To accomplish this goal, we broke the design of our robot down into four primary subsections: mechanical system, computer vision, motion control, and artificial intelligence. Each team member had a subsection for which they were primarily responsible. However, we also worked together to tackle challenges within each subsection.
The subsections below highlight the major design areas associated with our robot. Each area is described and major design challenges and our solutions are outlined.
This subsection encapsulates the design and fabrication of our physical robot and was spearheaded by Andrew Keller. We discussed various features and materials as a team, then relied on Andrew to produce the refined, custom-made components we ended up using for our robot.
After discussing a variety of features and materials as shown in our Concept Generation Document (see Appendix), we decided to construct our robot using aluminum platforms and a 3D-printed shell. This allowed us to pack the internals very compactly:
We could then put the 3D-printed shell on for protection during competitions:
- How to implement the kicking system
- Challenge: One of the most important features we wanted our robot to have was a powerful kicking system. We knew that a kicker would give our robot a significant competitive edge. However, we needed to determine how to power it, and where it could fit into our chassis.
- Solution: We determined that a solenoid-based kicker would be best (see Appendix). By using the bottom layer of our robot for the RoboClaws and a solenoid-based kicker, we were able to create an accurate and consistent kicking system. We hooked the solenoid to a lever that then pushed a metal plate, as shown below. When we triggered the solenoid using GPIOs, it would contract, pushing the metal plate forward (see the GPIO sketch following this list).
- Converting between different logic levels and voltages
- Challenge: Our entire robot was powered from two 7.2 V NiMH batteries connected in series (a total of 14.4 V). However, several components of our robot required different voltage levels. For example, the ODROID-U3 needs a 5V/2A input.
- Solution: To accomplish this task, we created a power conversion board that stepped the 14.4V down to 5V for the encoders and Odroid. We also purchased a SparkFun Bi-directional Logic Level Converter which allowed us to convert between 3.3V and 5V signals.
- Manufacturing the robot and 3D printed shell
- Challenge: We had a limited budget, and needed to be able to prototype robot pieces quickly.
- Solution: To simplify the fabrication of our robot, we decided to use simple hexagonal plates. Because of their regular shape, it was easy to simply use the materials and tools available in our ECEn shop. To create the shell, we used a 3D printer to print two halves of the shell, which we then glued together. We needed to do it in two pieces because the 3D printer can only print small objects.
- Fitting and mounting the components in an optimal way
- Challenge: We were given the specification that the robot must fit within an 8"x10" cylinder. In order to accommodate the large batteries, Odroid, WiFi module, etc., we needed to pack things extremely compactly.
- Solution: We started by modeling potential placements by creating paper cutouts that matched the dimensions of each component we needed to fit. This allowed us to put together the 'puzzle' in several different ways to see which would be the most optimal. The final organization of the internal components is shown in the image above.
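As a rough illustration of the kicker trigger described above, the sketch below pulses a GPIO line through the Linux sysfs interface, a common way to drive GPIOs on boards like the ODROID-U3. The pin number and pulse width are hypothetical placeholders, not the values from our actual robot.

```python
import time

KICK_PIN = 199   # hypothetical GPIO number; the pin on our robot differed
PULSE_S = 0.1    # hypothetical pulse width in seconds

def setup(pin):
    """Export the pin via sysfs and configure it as an output."""
    try:
        with open("/sys/class/gpio/export", "w") as f:
            f.write(str(pin))
    except IOError:
        pass  # pin was already exported
    with open("/sys/class/gpio/gpio%d/direction" % pin, "w") as f:
        f.write("out")

def kick(pin=KICK_PIN, pulse_s=PULSE_S):
    """Energize the solenoid briefly, then release it."""
    with open("/sys/class/gpio/gpio%d/value" % pin, "w") as f:
        f.write("1")        # contract the solenoid, pushing the plate
        f.flush()
        time.sleep(pulse_s)
        f.write("0")        # release
```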
Luke Hsiao led development of our computer vision algorithms. This subsection was responsible for all of the processing, from connecting to and getting video from the overhead IP camera to sending relevant location data to our robot.
The task was accomplished using a multi-threaded C++ program and the OpenCV libraries. C++ was chosen because it was familiar to all of our team members, allowed for easy object-oriented design, and examples of implementing OpenCV functions were readily available. Our vision processing algorithm is shown in the block diagram below:
When run, our program allowed the user to select the area of the field and tune the color calibration of the robots and ball. Team orientation (home/away) could be manipulated in real-time using a keyboard button. A screenshot of color calibration is shown below.
Ultimately, our vision program outputs relevant data overlaid on a cropped image of the field, as shown in the figure below:
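To make the segmentation step concrete, here is a minimal sketch of the kind of HSV thresholding and centroid computation our algorithm performed. Our actual implementation was multi-threaded C++; this sketch uses OpenCV's Python bindings for brevity, and the HSV bounds are hypothetical stand-ins for our calibrated values.

```python
import cv2
import numpy as np

# Hypothetical HSV bounds for an orange golf ball; real values came
# from the calibration step described below.
BALL_LO = np.array([5, 120, 120])
BALL_HI = np.array([15, 255, 255])

def find_ball(frame_bgr):
    """Return the (x, y) pixel centroid of the ball, or None if absent."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, BALL_LO, BALL_HI)
    # Remove speckle noise before computing the blob centroid.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```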
- Latency between getting an image and sending data
- Challenge: When the semester started, the BYU robot soccer competition did not have a well-tested solution for an overhead camera. Initially, an obsolete IP camera was provided that could only produce a single image every few seconds. USB webcams were tested and found to be too restrictive because of the inability to share a camera between several users. It wasn't until about halfway through the semester that a quality IP camera was purchased that allowed for image calibration and sharing among several users. Even with the new camera, however, many teams were still seeing a couple of seconds of latency.
- Solution: We determined that the latency was caused by the processing being done on images, rather than by the IP camera or network itself. By measuring the CPU usage of the major stages of our vision algorithm (e.g., decoding the JPG, color segmentation, undistortion, etc.), we discovered that decoding consumed the most CPU time, and that we were only utilizing one core of our machine. To get better performance, we broke the major components of our algorithm into separate threads, which utilized both cores and reduced our latency to about 100 ms (see the threading sketch after this list).
- Easily recognizing all of the different robots
- Challenge: The BYU competition did not have a defined standard for how robots were supposed to be marked to allow each team's vision algorithms to identify them. Consequently, each team had a unique style of vision jersey which worked for their algorithms, but didn't enable an opposing team to recognize their robot.
- Solution: After gaining experience implementing OpenCV color segmentation, all of the teams gathered and we proposed a standardized vision jersey. After some refinement from other teams, we were able to decide on a standard that was easy to process and allowed each team to recognize all of the robots on the field. The final vision jersey the class decided on is shown below, where the thinner strip indicates the front of the robot. This circle pattern could then be easily shaped to fit on all of the robots in our competition.
- Calibrating vision settings between runs
- Challenge: When we initially created our vision algorithm, it required running a separate executable to tune the HSV settings for a specific color, then manually inputting those values into the code of our vision algorithm. Naturally, as development progressed, we discovered that it was crucial to have robust computer vision that allowed us to calibrate colors, teams, and field sizes at the beginning of each run.
- Solution: We created a file-format for saving and restoring calibration settings. At the beginning of each run, the user is allowed to tune the field dimensions and location, tweak the HSV values for each object on the field, and select a team. These settings are saved to a file between runs, allowing them to be easily changed, and allowing a user to skip through the calibration quickly if calibration was performed earlier. Furthermore, we included the ability to change teams (home vs away) in real-time by pressing specified keyboard keys during a competition.
- Fish-eye effect of the IP camera
- Challenge: In order to capture the entire playing field from a reasonable height, the overhead IP camera needed a fish-eye lens. This resulted in a distorted image as shown below:
- Solution: OpenCV has the capability to undistort an image when provided with the proper calibration matrices. We ran a calibration program using a large checkerboard pattern to find the optimal calibration matrices. Note that the calibration program is included in our repository. The results of the undistortion are shown below:
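The undistortion step itself is a single OpenCV call once the calibration matrices are known. A minimal sketch, where the camera matrix and distortion coefficients are hypothetical placeholders for the values produced by the checkerboard calibration:

```python
import cv2
import numpy as np

# Hypothetical intrinsics; the real ones come from checkerboard calibration.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.35, 0.12, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def undistort(frame):
    """Remove fish-eye distortion so field geometry is measured correctly."""
    return cv2.undistort(frame, K, dist)
```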
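Returning to the latency challenge above: the essential idea of our fix was a producer-consumer split so that JPEG decoding and color processing run on different cores. Below is a minimal two-thread sketch with a hypothetical camera URL; our actual implementation was in C++, but the same pattern works in Python because OpenCV releases the interpreter lock during its heavy calls.

```python
import threading
import queue

import cv2

FRAME_URL = "http://192.168.1.10/video"   # hypothetical IP camera stream
frames = queue.Queue(maxsize=2)           # small queue: drop stale frames

def capture_and_decode():
    """Producer: grab and decode frames on one core."""
    cap = cv2.VideoCapture(FRAME_URL)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frames.full():
            frames.get_nowait()           # discard the oldest frame
        frames.put(frame)

def process():
    """Consumer: run segmentation on the most recent frame."""
    while True:
        frame = frames.get()
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # ... color segmentation and position output go here ...

threading.Thread(target=capture_and_decode, daemon=True).start()
process()
```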
Luna Zhang led motion control development. Motion control was responsible for creating low-level movement libraries that communicated with the RoboClaws. These libraries would provide very basic skills that the higher-level AI code could then utilize.
We used Python as our programming language. Compared to C/C++, it had a more comprehensive matrix library that made the motion control of our three-wheel robot much simpler to implement. In addition, the RoboClaws also provide low-level libraries in Python, which simplified the development process.
In order to utilize the quad-core processor on the Odroid, we split our motion control algorithms into four Python processes as illustrated in the figure below.
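As a rough stand-in for that figure, the sketch below shows the general pattern with two of the processes communicating over a queue; the process names and the data passed between them are simplified assumptions, not our actual node design.

```python
from multiprocessing import Process, Queue

def vision_listener(poses):
    """Stand-in for the process that receives (x, y, theta) data over ROS."""
    for i in range(5):
        poses.put((float(i), 0.0, 0.0))
    poses.put(None)                       # sentinel: stream finished

def velocity_controller(poses):
    """Stand-in for the process that turns poses into wheel commands."""
    while True:
        pose = poses.get()
        if pose is None:
            break
        print("commanding wheels for pose", pose)

if __name__ == "__main__":
    q = Queue()
    workers = [Process(target=vision_listener, args=(q,)),
               Process(target=velocity_controller, args=(q,))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```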
- PID Control of the RoboClaws
- Challenge: When we would command our robot to move in a straight line, its path would be inconsistent and erratic.
- Solution: We discovered that the wheels have different drive strengths despite being sent identical commands. Since the RoboClaws have built-in PID controllers, we were able to use the feedback from the encoders with a simple calibration script to find the optimal PID values every time we made modifications to the robot. With these calibrated PID values, the robot moved in the directions we expected.
- Kalman Filtering
- Challenge: Kalman Filters are complex and rely on knowing an accurate base time.
- Solution: We created a function to pull timestamps from the IP camera video feed and timestamped the location data output by our vision algorithms. By comparing timestamps, we had a good estimate of where we were in time. However, the complexity of implementing Kalman Filters in Python remains a challenge we are working on smoothing out.
- Velocity Control for Three Wheel Mobile Robot
- Challenge: We spent a considerable amount of time trying to implement velocity and motion control in C++, but found it difficult to implement the necessary matrix operations.
- Solution: Even though we all had more experience in C++, we eventually decided to switch to Python because it had more robust matrix libraries. The readability and simple syntax of Python also shortened our development time, giving us more time to iterate on and refine our performance (see the kinematics sketch below).
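For the three-wheel velocity control, the core of the computation is one small matrix that maps a desired body-frame velocity to individual wheel speeds. A minimal sketch, assuming (hypothetically) wheels mounted 120 degrees apart at radius R from the robot's center; our actual mounting angles and radius may have differed:

```python
import numpy as np

# Assumed wheel mounting angles (120 degrees apart) and a hypothetical
# distance from the robot center to each wheel, in meters.
THETA = np.radians([60, 180, 300])
R = 0.08

# Each row maps body velocity (vx, vy, omega) to one wheel's rim speed:
# the wheel drives tangentially, so its direction is (-sin t, cos t).
M = np.array([[-np.sin(t), np.cos(t), R] for t in THETA])

def wheel_speeds(vx, vy, omega):
    """Convert a desired body-frame velocity into three wheel speeds."""
    return M @ np.array([vx, vy, omega])

# Example: translate forward at 0.5 m/s with no rotation.
print(wheel_speeds(0.0, 0.5, 0.0))
```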
Ammon Gruwell led the development of our AI. AI was responsible for developing higher-level gameplay tactics and decision-making utilizing the lower-level utilities provided by motion control. In addition, Ammon created our ROS architecture to ensure that all of our systems could communicate properly.
If we had been able to build our robot more quickly and create robust motion libraries earlier on, we would have been able to spend more time developing soccer tactics and skilled artificial intelligence. However, because of time constraints, our artificial intelligence is still just a simple foundation, and many challenges related to advanced AI (e.g., passing, path planning, advanced object avoidance, etc.) were not yet implemented.
Within the system architecture, software played an important role in the function of our robot. To maximize our ability to develop robust artificial intelligence algorithms, we utilized the software architecture below.
As with motion control, we used Python to implement our artificial intelligence. Python had significant advantages over C++ in terms of integrating easily with our motor control and performing matrix calculations.
- Using ROS with Changing IPs
- Challenge: Early on in our development, we discovered that the IP addresses of our robot and workstation computer would change frequently, creating puzzling errors in our ROS nodes.
- Solution: We simply created network macros that mapped to a specific IP address (e.g., base = 10.4.35.121) and then set the IPs on our robot and workstation as static. This allowed us to rely on our IP addresses, and made it easy to change an IP address in a single location if it did need to be changed.
- Ambiguous Standards for Communication
- Challenge: Our first competition was a simulation in which our AI algorithm was told ball and robot locations, robot orientations, and the game score. We modeled our AI system on these constraints. However, no rigorously specified standard has been set. This could allow teams to send velocity data or movement commands directly from their workstation computer.
- Solution: We would propose that a standard for what information can be communicated from the workstation computer be created. This may not happen this semester, but would aid in development for future semesters.
- Compensating for Camera Angles
- Challenge: Initially, when we would command our robot to line up behind the ball, it always seemed to line up slightly off-centered. We realized that this problem was caused by the fact that the top of the robot is 4" higher than the ball. Consequently, because of the angle of the camera, the location reported by the vision algorithm was slightly inaccurate.
- Solution: We used some simple trigonometric calculations within our AI to compensate for the height difference between the robot and the ball. This compensated for the camera angle and allowed our robot to line up behind the ball exactly (see the sketch below).
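The correction amounts to similar triangles: a point at height h above the floor appears displaced from the camera's nadir by a factor H/(H-h), so scaling the reported position back by (H-h)/H recovers the true floor position. A minimal sketch with hypothetical camera geometry (the heights and nadir pixel are assumptions, not our measured values):

```python
import numpy as np

# Hypothetical geometry: camera height and robot-top height in meters,
# and the camera's nadir point in pixel coordinates.
CAMERA_HEIGHT = 3.0
ROBOT_HEIGHT = 0.25
CAMERA_CENTER = np.array([320.0, 240.0])

def correct_for_height(reported_xy):
    """Project a reported robot-top position down to the floor plane.

    A point at height h appears pushed away from the nadir by H / (H - h),
    so multiplying by (H - h) / H undoes the displacement.
    """
    reported = np.asarray(reported_xy, dtype=float)
    scale = (CAMERA_HEIGHT - ROBOT_HEIGHT) / CAMERA_HEIGHT
    return CAMERA_CENTER + (reported - CAMERA_CENTER) * scale
```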
After the development of our robot, we followed our test plan to quantitatively determine how well our product met the functional specifications and metrics we determined in the early stages of development. The table below shows our actual values side-by-side our ideal and marginal values.
| Metric # | Metric | Ideal | Marginal | Actual | Units |
|---|---|---|---|---|---|
| 1 | Goals scored minus goals conceded | >2 | >0 | 1.1 | goals |
| 2 | Percentage of shots blocked given random speed and angle (max 1 m/s) | >99 | >80 | 95 | percent |
| 3 | Maximum difference between calculated positions and actual positions | <1 | <5 | <1 | cm |
| 4 | Maximum total cost of additional components of robot | <25 | <50 | 30.48 | USD |
| 5 | Maximum number of physical components in the robot | <10 | <20 | 16 | components |
| 6 | Maximum speed of robot in one direction | >200 | >100 | 102 | cm/s |
| 7 | Number of lines of code (total) | <4000 | <6000 | 3000 | lines |
| 8 | Maximum weight of a single robot | <5 | <8 | 2.04 | kg |
| 9 | Minimum battery life while robot spins in place | >60 | >30 | >120 | minutes |
| 10 | Minimum weight that robot can withstand uninjured | >8 | >5 | >8 | kg |
| 11 | Percentage of goals when robot moves to randomly placed ball and kicks | >99 | >80 | 95 | percent |
| 12 | Compliance to the rules | 100 | 100 | 93 | percent |
| 13 | Beta customer ratings on product aesthetics | >8 | >6 | 6.5 | scale from 1 to 10 |
| 14 | Compliance with 802.11b/g/n standards; compatible with Linux 3.12 | 100 | 100 | 100 | percent |
As a team, we were able to meet 13 of our 14 metrics and specifications. Furthermore, we were able to take a dominant role in competitions, placing 1st in 5 of 7 competitions thus far. Although we were successful in the BYU robot soccer competition rounds, we were not able to achieve the ultimate level of play we had initially hoped for. For example, we had initially hoped to have time to create a competitive 2-robot team with refined and controlled tactics. Due to the progress of the class, however, we all agreed to focus on a 1v1 competition to allow teams to work through the challenges posed by just a single robot.
Over the course of the semester, we estimate that our team invested over 1000 hours of time in the development of our robot. During this time, we found that there were several items we wished we had approached differently both as a team and as a class.
Although we wouldn't be able to apply test-driven development and continuous integration techniques to all aspects of our robot, it would have been very valuable to have a rigorous, robust suite of unit tests that would run automatically when changes were made. One such system is Travis CI (www.travis-ci.org), which integrates with GitHub to provide continuous integration. Being able to verify that all the code still functioned properly after making changes would have sped up our software development and increased our confidence in our code.
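As an example of the kind of test such a suite would contain, here is a hypothetical unit test for a wheel-speed helper like the one sketched in the motion control section. Both the function under test and its expected properties are illustrative assumptions, not code from our repository.

```python
import unittest

import numpy as np

def wheel_speeds(vx, vy, omega, radius=0.08):
    """Hypothetical helper: body velocity -> three omni-wheel speeds."""
    theta = np.radians([60, 180, 300])
    m = np.array([[-np.sin(t), np.cos(t), radius] for t in theta])
    return m @ np.array([vx, vy, omega])

class TestWheelSpeeds(unittest.TestCase):
    def test_pure_rotation_drives_all_wheels_equally(self):
        # Spinning in place should command the same speed on every wheel.
        speeds = wheel_speeds(0.0, 0.0, 1.0)
        self.assertTrue(np.allclose(speeds, speeds[0]))

    def test_stationary_robot_has_zero_wheel_speeds(self):
        self.assertTrue(np.allclose(wheel_speeds(0.0, 0.0, 0.0), 0.0))

if __name__ == "__main__":
    unittest.main()
```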
One of the challenges we faced as a class was the lack of clear constraints early on in the semester. Many of the rules were carried over from the previous iteration of robot soccer and had to be modified. Although we learned valuable lessons as a class and as individuals by tackling these issues and proposing potential regulations, it also hindered the speed at which teams could develop their robot.
Robot soccer required us not only to utilize the skills and experience we've gained in our education thus far, but also to jump into areas we hadn't yet explored. For example, computer vision and control systems were completely new subjects to many students in our class. Consequently, the technical lectures provided by Dr. Archibald and Dr. Beard were extremely valuable. Although we may have been able to reach the same level of progress on our own, it would have taken significantly more time. For example, the technical lectures on ROS and OpenCV provided both a high-level understanding of concepts and examples of how the concepts would be implemented. In contrast, the lectures on Kalman Filters provided high-level concepts but then left many students confused by the rigorous derivations and models. As a result, Kalman Filtering seems to be the most difficult challenge for the majority of the class. It may benefit future semesters of students if more time is spent on how the implementation of a Kalman Filter could work, and how it could potentially be done in our robot soccer systems. This type of starting point would allow students who didn't take ECEn 483 to catch up on concepts by viewing examples, and allow teams to progress more quickly to higher levels of competition.
Overall, this was a deeply challenging and rewarding senior project. It truly was a culminating experience that combined skills we've developed in a broad range of undergraduate classes and added to them. While we would consider ourselves successful relative to the competition this semester, we believe that future semesters will be able to achieve much greater levels of competition and gameplay. We've created documentation, issue tracking, and a code repository specifically to help future generations of engineers get a head start.
The full text of our Functional Specification Document is available here:
Functional Specification
The full text of our Concept Generation & Selection Document is available here:
Concept Generation & Selection Document
Click here to view the timeline full-resolution.
Because we knew that the robot soccer project has kinks and unexpected challenges that are still being ironed out, we decided to go with a GitHub Wiki for our project/team documents from the get-go. This way, it is not only easy to find and update documentation (rather than trying to track down the correct Word file), it will also be open-source and available for future generations of students in this senior project to learn from and expand on.
One of the biggest challenges in engineering is keeping track of bugs and their solutions. GitHub has this functionality built in, with great features like image uploading, tracking, code highlighting, etc. By using GitHub Issues, we have a permanent, searchable archive of the issues we encountered and what we needed to do to resolve them. This will also be a valuable resource for future engineers.
When working as an engineering team, it's vital that everyone has the correct code and that bug fixes can be distributed quickly. In addition, it's also crucial to be able to revert changes in case implementing a new idea breaks the old code. Git, hosted on GitHub, is a de facto industry standard for version control, with detailed commits, branching, version comparisons, and the ability to revert to previous checkpoints.
In the root of our repository, users can find the code for each of the subsections we outlined in our report.
A selection of videos demonstrating our robot in action are listed here:
Video Demonstrations