Portfolio Control Award - wccarobotics/ftc-decode GitHub Wiki
The Control Award celebrates a team that uses sensors and software to increase the ROBOT’S functionality during gameplay. This award is given to the team that demonstrates innovative thinking and solutions to solve game challenges such as autonomous operation, improving mechanical systems with intelligent control, or using sensors to achieve better results. The solution(s) should work consistently during MATCHES but does not have to work all the time. Solutions considered for this award are not solely limited to the AUTO period of the MATCH and may also be used during TELEOP. The team’s PORTFOLIO must contain a summary of the software, sensors, and mechanical control but would not include copies of the code itself.
- The portfolio MUST include:
  - Hardware and/or software control components on the robot
  - Which challenges each component/system is intended to solve
  - How each component/system works
- The team must use one or more hardware or software solutions to improve ROBOT functionality by using external feedback and control.
Encouraged:
- The control solution(s) should work consistently during most MATCHES.
- The team could discuss, describe, display, or document how the solution addresses reliability, either through demonstrated effectiveness or by identifying how the solution could be improved.
- Use of the engineering process to develop the control solutions (sensors, hardware, and/or algorithms) used on the ROBOT, including lessons learned.
We have many control systems on our robot; many of them are interlinked and can be grouped in different ways.


Challenge: A tank drive is not as maneuverable as a mecanum drive. With a tank drive it is much harder to correct small errors, and you are more susceptible to defense. A tank drive is also harder to position for parking or hitting the gate. Knowing where the robot is helps many systems, and a mecanum drive lets us use field-relative control, which is easier on the driver.
Solution: We used a mecanum drive with Pedro Pathing, a path-following library that drives the robot accurately using PID controllers.
How it Works: Pedro Pathing lets you tell the robot where you want it to go, and it drives there quickly and accurately. The odometry combines data from an IMU and two unpowered tracking wheels to estimate where the robot is. Field-relative drive uses the IMU heading and a polar-coordinate transform to rotate the control inputs into the right direction.
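The field-relative transform can be sketched like this (a minimal example with illustrative names, not our actual drive code):

```java
// Minimal sketch of field-relative mecanum control. Class and method
// names are illustrative, not our actual code.
class FieldRelative {
    // Rotate a field-frame joystick vector into the robot frame.
    // headingRad is the robot heading from the IMU, counterclockwise-positive.
    static double[] toRobotFrame(double xField, double yField, double headingRad) {
        double cos = Math.cos(headingRad);
        double sin = Math.sin(headingRad);
        return new double[] {
            xField * cos + yField * sin,   // robot-frame x
            -xField * sin + yField * cos   // robot-frame y
        };
    }

    // Standard mecanum mixing: combine robot-frame translation with a turn input.
    static double[] mecanumPowers(double x, double y, double turn) {
        return new double[] {
            y + x + turn,  // front-left
            y - x - turn,  // front-right
            y - x + turn,  // back-left
            y + x - turn   // back-right
        };
    }
}
```

With the heading at 90 degrees, a request to drive along the field's x-axis becomes a robot-frame strafe, which is what makes the controls feel the same to the driver no matter which way the robot points.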

Challenge: The Starter Bot couldn't intake artifacts, which led to slow cycle times.
Solution: The RI3D has an intake, which gives us faster cycle times. It also lets us keep scoring while playing defense.
How it Works: Our intake is a two-stage system. The first stage initially grabs the artifacts and gets them inside the robot, then the second stage uses boot wheels to carry the artifacts to the feeder's larger boot wheels. Both stages are powered by a single motor. There is also a beam on a servo, called the diverter, that sends the balls to either the left or right launcher.

Challenge: The Starter Bot has no way to sort the artifacts by color.
Solution: The RI3D solves this with two flywheels, so the artifacts can be sorted.
How it Works: The previously mentioned diverter chooses which launcher an artifact goes into. When we want to shoot, the feeder servos on each side power boot wheels that feed the artifact into the flywheel. A state machine keeps track of which stage of shooting we are in: first it spins up the flywheel, then it starts the feeders once the flywheel is up to speed, and once the color sensors detect that the ball is gone (or a 3-second timer runs out) it stops the feeders.
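That shooting sequence can be sketched as a small state machine (state and method names here are illustrative, not our exact code):

```java
// Sketch of the launcher state machine: spin up, feed until the ball is
// gone or a 3-second timer expires, then stop. Names are illustrative.
class ShooterStateMachine {
    enum State { IDLE, SPIN_UP, FEEDING, DONE }

    private State state = State.IDLE;
    private long feedStartMs;
    private static final long FEED_TIMEOUT_MS = 3000;

    void startShot() { state = State.SPIN_UP; }

    // Called once per robot loop with the current sensor readings.
    State update(boolean flywheelAtSpeed, boolean ballPresent, long nowMs) {
        switch (state) {
            case SPIN_UP:
                // Only start the feeders once the flywheel is up to speed.
                if (flywheelAtSpeed) { state = State.FEEDING; feedStartMs = nowMs; }
                break;
            case FEEDING:
                // Stop when the color sensor says the ball is gone, or on timeout.
                if (!ballPresent || nowMs - feedStartMs > FEED_TIMEOUT_MS) state = State.DONE;
                break;
            default:
                break;
        }
        return state;
    }
}
```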


Challenge: We want to know where the artifacts are, since that is useful for many things. It helps solve lots of other challenges; for example, it can make the launcher more reliable by detecting when an artifact has been fed into it.
Solution: We installed four REV Color Sensor V3s on our robot, two in the back and two by the intake, so we can detect every possible state. We also realized that if the color sensors are too close to an artifact, they can't tell its color, so we 3D printed mounts to space the sensors properly.
How it Works: The color sensors also measure distance, so we can tell whether an artifact is present. The code uses this to detect when an artifact has been successfully fed, and the data feeds many other systems, such as the lights and several driver-assistance functions.
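A sketch of how a single sensor reading could be turned into an artifact state (the distance threshold and channel comparison are assumptions, not our tuned values):

```java
// Illustrative: a proximity threshold decides presence, then the dominant
// color channel decides artifact color. Threshold values are made up.
class ArtifactSensor {
    enum Reading { NONE, GREEN, PURPLE }

    private static final double PRESENT_MM = 30.0; // assumed presence threshold

    static Reading classify(double distanceMm, int red, int green, int blue) {
        if (distanceMm > PRESENT_MM) return Reading.NONE; // nothing in front of the sensor
        // Green artifacts read strongest on the green channel; purple on blue.
        return green > blue ? Reading.GREEN : Reading.PURPLE;
    }
}
```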

Challenge: The robot is not transparent: it is helpful for the driver to know where artifacts are on the robot and how many there are, so they know whether to shoot or intake more.
Solution: We have three lights on the back of the robot that use the color sensor outputs to show which artifacts are where, increasing the driver's situational awareness.
How it Works: When there is no artifact, a light displays the alliance color; when the color sensors detect an artifact, it shows the artifact's color.

Challenge: Having odometry on the robot is great, but it drifts and is sensitive to small changes in the starting angle.
Solution: Using a camera to track AprilTags is more accurate, because the AprilTags don't move.
How it Works: The Limelight processes its video feed and outputs the AprilTag's relative angle, which our code reads.

Challenge: It is hard to write autos quickly and easily.
Solution: We use a command system that makes programming autos easier.
How it Works: We have different command types based on a base command. Each command does some work every loop and has a condition that tells when it is done. We have commands for shooting all the balls, for running any function we want, and more. We also have a CommandCommand base class: a command that runs other commands. We use CommandCommand to avoid repeating a lot of code in the command system.
With this system, an entire auto takes only a little code to write.

We also have a CommandCommand for intaking artifacts, which avoids repetition in the code.
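A minimal sketch of such a command system (class names like RunFunctionCommand are illustrative, not our exact API):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the command system: each command does work every loop and
// reports when it is done. Names are illustrative, not our exact API.
abstract class Command {
    void start() {}                 // called once when the command begins
    abstract void loop();           // called once per robot loop
    abstract boolean isDone();      // condition for the command to finish
}

// A command that simply runs a given function once.
class RunFunctionCommand extends Command {
    private final Runnable fn;
    private boolean ran = false;
    RunFunctionCommand(Runnable fn) { this.fn = fn; }
    @Override void loop() { fn.run(); ran = true; }
    @Override boolean isDone() { return ran; }
}

// A CommandCommand is itself a command that runs a list of commands in
// order, which keeps autos short and avoids repeated sequences.
class CommandCommand extends Command {
    private final List<Command> commands;
    private int index = 0;

    CommandCommand(Command... commands) { this.commands = Arrays.asList(commands); }

    @Override void start() { if (!commands.isEmpty()) commands.get(0).start(); }

    @Override void loop() {
        if (isDone()) return;
        Command current = commands.get(index);
        current.loop();
        if (current.isDone()) {
            index++;
            if (index < commands.size()) commands.get(index).start();
        }
    }

    @Override boolean isDone() { return index >= commands.size(); }
}
```

An intake CommandCommand, for example, would bundle the diverter switch and intake steps into one reusable command that any auto can drop in.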

Challenge: Our robot had slow loop times, which hurt PedroPathing by making it overshoot and generally made the robot drive worse. We used bulk reads to help, but bulk reads do not cover I2C sensors.
Solution: We have a queue for I2C reads so we only do one real read per loop.
How it Works: Every time a value is requested, we return the last known value. Each loop, the queue cycles through its list of sensors and refreshes one of them. The first time a sensor's value is requested it is read immediately instead of being queued, so we never return null values.
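The read queue can be sketched like this (a minimal version with illustrative names; the real I2C sensor reads are stood in for by suppliers):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of an I2C read queue: one real sensor read per loop, cached
// values otherwise. Names are illustrative, not our exact code.
class I2cReadQueue {
    private final List<String> order = new ArrayList<>();            // round-robin order
    private final Map<String, Supplier<Double>> readers = new HashMap<>();
    private final Map<String, Double> cache = new HashMap<>();
    private int next = 0;

    void register(String name, Supplier<Double> reader) {
        order.add(name);
        readers.put(name, reader);
    }

    // Returns the last known value; a sensor's very first request is read
    // immediately so we never hand back a null value.
    double get(String name) {
        if (!cache.containsKey(name)) cache.put(name, readers.get(name).get());
        return cache.get(name);
    }

    // Called once per robot loop: refresh exactly one sensor.
    void refreshOne() {
        if (order.isEmpty()) return;
        String name = order.get(next);
        cache.put(name, readers.get(name).get());
        next = (next + 1) % order.size();
    }
}
```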
We automated many driver tasks during teleop. Automating these tasks lets the driver focus on shooting and driving instead of minor chores, reduces cycle time, reduces stress on the driver, and cuts down on human error and fouls. The computer also reacts faster than a human can.
- Auto Aim: It is hard to align the robot to the goal manually. Using the odometry data, the atan2 function gives the direction to point the robot. We tested many small changes to where it aims during the season.
- Auto Diverter: We automatically switch the diverter using the distance sensors, because when intaking you need to take in one artifact and then switch the diverter if you want to carry three.
- Location-Based Flywheels: To conserve battery and save time, we automatically spin up the flywheels when odometry says we are in a launch zone and turn them off when we leave.
- Automated Intake: The rules only allow carrying three artifacts, but our robot sometimes took in a fourth, which drew fouls; extra artifacts could also launch outside the launch zone for more fouls. When we detect three artifacts, the intake turns off automatically, then turns back on once the robot can carry more.
- Shoot All: To shoot reliably, our robot needs either another artifact or the diverter behind the artifact being shot. Instead of tracking where the balls and diverter are, the driver presses one shoot-all button; using the color/distance sensor data, it shoots the artifacts and switches the diverter as needed, saving a lot of thinking and button presses.
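The auto-aim math above can be sketched in a few lines (coordinates and names are assumptions, not our actual field constants):

```java
// Illustrative auto-aim: given the robot pose from odometry and the goal's
// field position, atan2 gives the heading that points the robot at the goal.
class AutoAim {
    static double headingToGoal(double robotX, double robotY, double goalX, double goalY) {
        return Math.atan2(goalY - robotY, goalX - robotX);
    }
}
```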
