Submitting models
To get your score recorded, you need to submit your model for evaluation.
Prerequisites
You should already have built your Docker image, following the instructions in Building Docker images.
We recommend that you first test your model locally to catch errors earlier.
Additionally, you will have to configure Docker to use gcloud for authentication with Artifact Registry. This setup should only need to be done once per machine, and can be done by running the following command on your instance:
gcloud auth configure-docker asia-southeast1-docker.pkg.dev
Also make sure that you are authenticated as your team's service account. You can check which account you're authenticated as by running gcloud auth list. If you're authenticated as the wrong account, you can fix it by running gcloud config set account SERVICE_ACCOUNT_EMAIL, substituting your team's service account email.
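Putting those steps together, a one-time setup on a fresh machine might look like this sketch (SERVICE_ACCOUNT_EMAIL is a placeholder for your team's service account email):

# Register gcloud as Docker's credential helper for Artifact Registry (once per machine)
gcloud auth configure-docker asia-southeast1-docker.pkg.dev

# Check which account is currently active (the active account is marked with an asterisk)
gcloud auth list

# Switch to your team's service account if the wrong account is active
gcloud config set account SERVICE_ACCOUNT_EMAIL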
Submitting your model
Overview: Submit your model for evaluation by pushing your Docker image using the til command.
Submitting your model is easy: just use the til command in your Vertex AI Workbench. You'll need to pass in the ref of your Docker image (name:tag); you can always look this up again by running docker image ls. Remember to check that the model name follows the format TEAM_ID-CHALLENGE; otherwise your evaluation will fail.
til submit TEAM_ID-CHALLENGE:TAG
til submit TEAM_ID-CHALLENGE # If you didn't tag it.
[!NOTE] The til command is only available in your Vertex AI Workbench. It's not built in or installable via tools like apt or brew. We strongly recommend that you submit your models from Vertex AI, but if you must, we explain how to do it manually below.
This will push your model to Google Artifact Registry and submit it for evaluation. You'll receive notifications on your team's private Discord channel when your model has been submitted, queued, and evaluated. If you achieve a new high score, the Leaderboard will be updated accordingly.
For power users
Using the til CLI is the best option for the vast majority of participants. Though you can avoid til submit by running its constituent steps manually, there is almost never a need to do so.
The one exception is if you prefer to train your models on your local machine, in which case your images will be built entirely locally, and you may want to submit your model directly from there. We do not formally support submitting from your own local machine, and thus have not fully tested the following submission process. We recommend doing your model training and submission on your Vertex AI Workbench Instance. If you're only using your local machine to write code, while your training still takes place on Vertex AI, you won't need to deal with any of this. Since this section is for participants who are already familiar with AI development, we'll gloss over some beginner-level details.
til submit runs three commands to submit your model:
- docker tag, to tag your model image with the URI of your team's repository on Google Artifact Registry.
- docker push, to push your image.
- gcloud ai models upload, to upload your model to Google AI Platform for evaluation.
You need to tag your Docker image with your team's Google Artifact Registry repository URI (as well as the image ref name:tag). For legacy reasons, the repository URI follows different formats depending on your team ID:
- If your team ID begins with a numeral, your repo URI is asia-southeast1-docker.pkg.dev/til-ai-2025/repo-til-25-TEAM_ID;
- otherwise, your repo URI is asia-southeast1-docker.pkg.dev/til-ai-2025/TEAM_ID-repo-til-25.
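As a sketch, assuming a hypothetical team ID of teamname (which doesn't begin with a numeral) and an image named teamname-asr:latest, the tagging step might look like this; the target ref is simply your repo URI followed by the image ref:

# Tag the local image with the Artifact Registry repository URI (hypothetical team ID and image)
docker tag teamname-asr:latest asia-southeast1-docker.pkg.dev/til-ai-2025/teamname-repo-til-25/teamname-asr:latest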
First, you'll have to make sure Docker is able to push to your team's Artifact Registry repository. Run the gcloud auth print-access-token command from your Vertex AI instance, and use that token to run docker login -u oauth2accesstoken -p YOUR_ACCESS_TOKEN_HERE https://asia-southeast1-docker.pkg.dev from your local machine. Refer to the docs on configuring Docker authentication to Artifact Registry for details.
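Continuing the hypothetical teamname example, the login and push steps might look like the following sketch; piping the token via --password-stdin is an alternative to -p that keeps the token out of your shell history:

# On your Vertex AI instance: print an access token and copy it
gcloud auth print-access-token

# On your local machine: log in to Artifact Registry with the copied token
echo "PASTE_ACCESS_TOKEN_HERE" | docker login -u oauth2accesstoken --password-stdin https://asia-southeast1-docker.pkg.dev

# Push the tagged image to your team's repository
docker push asia-southeast1-docker.pkg.dev/til-ai-2025/teamname-repo-til-25/teamname-asr:latest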
Here are the flags you need to pass to gcloud ai models upload:
- --region: asia-southeast1
- --display-name: The name of your image.
- --container-image-uri: The full URI of your container image on Artifact Registry, including the image ref.
- --container-health-route: /health
- --container-predict-route: The prediction endpoint route in your model server. This is different for each challenge; check the Challenge specifications.
- --container-ports: The port on which your server is listening. This is also challenge-specific; check the Challenge specifications.
- --version-aliases: default
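Putting the flags together, a hypothetical upload for the teamname example might look like the sketch below; the predict route and port shown are placeholders, so substitute the values from your challenge's specifications:

# Upload the pushed image for evaluation (hypothetical names; replace the route and port per the Challenge specifications)
gcloud ai models upload \
  --region asia-southeast1 \
  --display-name teamname-asr \
  --container-image-uri asia-southeast1-docker.pkg.dev/til-ai-2025/teamname-repo-til-25/teamname-asr:latest \
  --container-health-route /health \
  --container-predict-route /YOUR_PREDICT_ROUTE \
  --container-ports YOUR_PORT \
  --version-aliases default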
To submit locally, you will likely need to use the same access token you generated earlier to authenticate requests from your local gcloud client, by writing it to a file and passing --access-token-file your/access/token/file to your gcloud ai models upload command.
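A minimal sketch of that flow, assuming you copy the token into a local file named token.txt:

# Save the access token you generated on your Vertex AI instance into a local file
echo "PASTE_ACCESS_TOKEN_HERE" > token.txt

# Point your local gcloud client at the token file when uploading
gcloud ai models upload --access-token-file token.txt --region asia-southeast1 --display-name teamname-asr  # ...plus the remaining flags listed above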
Further reading
- About Docker tags: https://docs.docker.com/docker-hub/repos/manage/hub-images/tags
- Build and push your first image: https://docs.docker.com/get-started/introduction/build-and-push-first-image/
- Docs for gcloud ai models upload: https://cloud.google.com/sdk/gcloud/reference/ai/models/upload
- Docs for gcloud auth print-access-token: https://cloud.google.com/sdk/gcloud/reference/auth/print-access-token
- Configure authentication to Artifact Registry for Docker: https://cloud.google.com/artifact-registry/docs/docker/authentication