# Competition Preparation
The preparation work needed to use SRComp for a competition is fairly common between competitions, so is detailed here as a guide. Eventually some of this may be automated, though it has not yet proved to be worth it.
## Create a compstate
In general the template compstate is the best starting point for a new compstate. While it is possible to create one from scratch by hand or to start from another competition's state, the template compstate is likely the easiest route. It has documented placeholder values for all required files as well as configuration for useful tooling when working on a compstate.
Regardless of the route taken, a new compstate should always have a git history separate from any other. The template compstate is configured as a template repository on GitHub, which makes this especially easy.
### Creating a compstate from a previous one
- Find a "good" previous compstate: `git checkout` the earliest commit in that repo which looks fairly complete but doesn't have too much competition-specific information added. While it is possible to start with a more complete compstate and strip it back, the author prefers to keep the first commit of a new compstate as a generic starting point, meaning that copying one of those is likely much easier.
- Copy the files (but not the `.git` repo directory) from that compstate into a freshly created repo
- Update the dates (`schedule.yaml`), arena info (`arena.yaml`), venue layout (`layout.yaml`) & shepherding details (`shepherding.yaml`) and list of teams (`teams.yaml`)
- Configure CI validation of the compstate. If using GitHub for hosting, the SRComp Validate Action provides a simple generic implementation of both compstate validation and running any tests for the scorer/converter implementations you may add.
- Commit the result
At this point you'll likely have a compstate which can't be loaded. This is expected -- the state as it stands isn't useful to run a competition!
Note: Running the validator (`srcomp validate path/to/compstate`) can help you identify what is needed to prepare the compstate for use. However, it is primarily intended to validate a complete compstate during a competition, so it may emit warnings which can be ignored at this stage and may also miss other things. (For example it doesn't care about the presence of a `deployments.yaml`, but you probably do!)
## Configure the repository
Having set up the content of the repository, it is often useful to have a shared host for the compstate outside of any deployments. This allows the scorer & converter scripts to be developed, gives CI somewhere to run against them, and provides a place for the compstate to be archived for posterity.
Typically this hosting setup would be configured:
- To prevent history loss:
  - disable force-pushes to the default (probably `main`) branch; on GitHub this is handled via a "Branch protection" rule
  - disallow rebase & squash merges of published commits/pull requests
- With suitable access controls. For Student Robotics' compstates hosted on GitHub there are three levels configured:
  - Volunteers: Read
  - Competition Team: Triage
  - Compstate Admins: Admin
- If on GitHub:
  - disable unnecessary repo-adjacent features: wikis, issues & projects
  - hide unnecessary repo-homepage features: releases, packages & deployments
  - tag the repo with useful Topics, typically `compstate` and `srcomp`
- If on GitHub, the SRComp Validate Action provides a simple generic implementation of both compstate validation and running any tests for the scorer/converter implementations you may add
## Configure game scoring
Since SRComp is designed to be used at a variety of competitions with minimal changes, it is (mostly) agnostic to the rules of the game being played and to the scoring logic needed.
There are four things which need to be done to provide this:
- design the score input and recording formats
- implement `scoring/score.py` in the compstate repo; this is used to interpret the recorded game scores
- implement `scoring/update.html` in the compstate repo; this provides the UI for SRComp Scorer
- implement `scoring/converter.py` in the compstate repo; this is used by SRComp Scorer to transform the score data between the formats it needs
Typically an outline of the score input design is the starting point as that defines the data which will be captured and roughly what shape that data will be, though it is recommended that the processing of that data be considered at the same time.
The score sheet design itself is then used both:
- in the arena by the match officials to record the game state (as printed sheets)
- as the UI design for the scorer
Game designers may also wish to consider the design of the score sheets when designing the rules for a game.
### Implement `score.py`

SRComp expects that each compstate will contain a Python module at `scoring/score.py` which contains a `Scorer` class that is compatible with `libproton`.

Note: SRComp does not actually use `libproton` and `scoring/score.py` should not assume that `libproton` will be available at import-time.

That limitation being noted, you may find it useful to have `scoring/score.py` be executable and import `libproton` when it is run directly:
```python
#!/usr/bin/env python

class Scorer:
    ...

if __name__ == '__main__':
    import libproton
    libproton.main(Scorer)
```
This allows running the score script directly to develop it or to check individual results.
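To make that concrete, here is a rough sketch of what a complete `score.py` might look like for a hypothetical game in which teams score one point per token they hold at the end of the match. The constructor arguments, the `calculate_scores` method and the optional `validate` hook follow the shape used by the scorers in existing compstates, while the `tokens` field, the 12-token total and the `InvalidScoresheetException` class are invented for illustration; check `libproton` and a previous compstate for the exact interface before copying this.

```python
#!/usr/bin/env python


class InvalidScoresheetException(Exception):
    """Raised when the recorded data for a match cannot be valid."""


class Scorer:
    def __init__(self, teams_data, arena_data):
        # Per-team data captured on the score sheet, keyed by TLA
        # (e.g. zone, presence, disqualification and game-specific fields).
        self._teams_data = teams_data
        # Data about the arena itself, e.g. tokens left in neutral areas.
        self._arena_data = arena_data

    def calculate_scores(self):
        # Hypothetical scoring: one point per token a team holds.
        return {
            tla: info.get('tokens', 0)
            for tla, info in self._teams_data.items()
        }

    def validate(self, other_data):
        # Whole-game consistency check: assuming 12 tokens exist in the
        # arena, more than 12 recorded tokens means the sheet is wrong.
        total_tokens = sum(
            info.get('tokens', 0)
            for info in self._teams_data.values()
        )
        if total_tokens > 12:
            raise InvalidScoresheetException(
                f"Too many tokens recorded: {total_tokens} > 12",
            )


if __name__ == '__main__':
    import libproton
    libproton.main(Scorer)
```

The `validate` hook is a natural home for the whole-result consistency checks discussed in the score sheet guidance below.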
### Create a score sheet
The match officials in the arena will need a way to record the result of each game. Whether done directly or via paper score sheets, the following guidance ensures that the right data is easily captured.
Note: it is highly recommended (and here assumed) that each game has two or more match officials, each of whom independently records the whole of the result of that game. This helps catch errors early and reduces disputes from competitors.
The score sheet should:
- capture the match id (number and arena)
- capture the name of the official; this simplifies the resolution of discrepancies as you can easily ask the relevant people
- capture the identities of the teams which are actually present in each starting zone of the arena (rather than merely those which were expected to be present)
- capture whether a team was present in each starting zone
- capture the whole of the game result; i.e.: in a game where tokens are moved around, always record the position of all of the tokens, rather than just those in scoring zones. This makes it easy to do consistency checking on the result -- if there are too many tokens recorded, then you know something is amiss!
- visually resemble the arena, including obstacles and other markings. A diagram of the arena from the rulebook for the game is usually a good starting point for this aspect. It may be useful to include external reference points too, however care should be taken that these will always be correct (particularly if an event has multiple arenas).
- contain instructions on its use; however, if these require more than about 15 seconds to read they are either too detailed or the score sheet is too complicated
- contain space to record any irregularities observed during the match -- for example this space can be used to record the reason for a disqualification or the justification for a close call. Capturing this in the heat of the moment both encourages clear thinking and aids in resolving any disputes. It can also be used to record the outcome of any disputes.
It should be assumed that the score sheets will be used by match officials who:
- aren't familiar with the rules of the game being played
- are in a rush
- haven't seen one before
- are in an environment with poor lighting
Care must therefore be taken to ensure that the score sheets are both simple to understand and easy to use.
### Implement `update.html`

SRComp Scorer expects that each compstate will contain an HTML Jinja 2 template file at `scoring/update.html` which extends `_update.html`. This makes use of Jinja's support for Template Inheritance, such that parts of the template expressed as blocks can easily be overridden.
For example you may wish to override the inputs for the scoring zones:
```jinja
{% block zone_0 %}
  {{ input_tla(110, 70, 0) }}
  {{ input_tokens(60, 120, 0) }}
  {{ input_present(80, 170, 0) }}
  {{ input_disqualified(35, 220, 0) }}
{% endblock %}
```
See the base template for details of the blocks which can be tuned by the compstate.
### Implement `converter.py`

SRComp Scorer expects that each compstate will contain a Python module at `scoring/converter.py` which contains a `Converter` class that will be used to transform the score data between the various internal formats needed. See `sr.comp.scorer.Converter` for details of the API and formats needed.

Implementations may assume that `sr.comp.scorer` is available at runtime and therefore a common pattern is to extend the base `Converter` class:
```python
from sr.comp.scorer import Converter as BaseConverter

class Converter(BaseConverter):
    ...
```
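As an illustration of the kind of thing a compstate-specific converter adds on top of that skeleton, the sketch below carries a hypothetical `tokens` count through from the scorer UI's form data. The `form_team_to_score` hook name and the form field naming follow the pattern seen in some existing compstates but are assumptions here; confirm the actual method names and signatures against `sr.comp.scorer.Converter` before relying on them.

```python
from sr.comp.scorer import Converter as BaseConverter


class Converter(BaseConverter):
    def form_team_to_score(self, form, zone_id):
        # Assumed hook: build the stored per-team score entry from the
        # submitted form data, adding this game's hypothetical "tokens"
        # count on top of the base fields (team identity, presence, etc.).
        return {
            **super().form_team_to_score(form, zone_id),
            'tokens': int(form.get(f'tokens_{zone_id}', 0)),
        }
```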
Note: the converter script should not attempt to verify the content of the scoring data as that is the role of the `score.py` script. Internally the Scorer uses the SRComp library and its validation mechanisms, so any errors from the compstate's `score.py` (or from SRComp's validation checks) will be propagated to the user anyway.