CI FAQ

[TOC]

For troubleshooting help, see the Troubleshooting page!


What does the pipeline do?

After you have set everything up, the pipeline is ready to process your models. As soon as you push changes to one of your triggering repos, the pipeline will start running and process all models defined in your model definition repo. In detail, the pipeline does the following:

  1. model_processing-Stage: The pipeline internally updates all repos and then parses the pipeline.yml of your model definition repo. Each model definition referenced there is parsed and then executed.
  2. test-Stage: Following the model_tests test procedure definition in the pipeline.yml, each model is tested according to its model type.
  3. deploy-Stage: If the tests succeeded, the new model is pushed to the master branch; if not, it is pushed to the CI_UPDATE_TARGET_BRANCH and a merge request is opened to merge the new model into master. As the new model probably has to be tested further, it is recommended to add a person's tag to CI_MR_DESCRIPTION, so that the responsible person is notified by mail whenever a new model is pushed to the CI_UPDATE_TARGET_BRANCH. Note: To make sure that only required files are in the result repo, the pipeline will clear the repo and then add the newly created model. Therefore only the following files are persistent in a repository: manifest.xml, README.md
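
Illustratively, the per-model deploy decision behaves roughly like this (a minimal sketch; push_branch and open_merge_request are hypothetical stand-ins, not actual phobos-CI functions):

  # Minimal sketch of the deploy-stage decision; function names are hypothetical stand-ins.
  def push_branch(model, branch):
      print(f"pushing {model} to {branch}")

  def open_merge_request(source, target, description):
      print(f"opening merge request {source} -> {target}: {description}")

  def deploy(model, tests_passed, update_target_branch="develop", mr_description="@maintainer please review"):
      if tests_passed:
          push_branch(model, "master")  # consistent with the compare model: straight to master
      else:
          push_branch(model, update_target_branch)  # unverified changes go to CI_UPDATE_TARGET_BRANCH
          open_merge_request(update_target_branch, "master", mr_description)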

How to set up this CI for my models?

Prerequisites:

  1. A separate GitLab-Subgroup is highly recommended to keep everything clear.
  2. A URDF model that contains all the data of your system
  3. Repositories for your derived models, named like the model (repo: full-urdf; modelname: full-urdf; robotname: Baymax). Note: Make sure to initialize them with a README.md like the Example-README-for-model-repo.md.
  4. A repository that contains the definition for the pipeline which contains (see examples/definition_repo/):
    1. the model types and their test procedures,
    2. the mesh types and the corresponding mesh paths,
    3. a list of the definitions of the models to derive,
    4. a manifest.xml that lists all these repositories as dependencies
  5. A package set which lists all these repositories. If you want the Git Large File Storage (LFS) feature for your mesh repos, make sure to include your model repositories not as import_package but as lfs_package. To do this, add the following definition at the beginning of your package_set's autobuild file (a usage example follows the definition):
  def lfs_package(name, workspace: Autoproj.workspace)
      import_package name do |pkg|
          pkg.post_import do
              # Pull the LFS-tracked meshes for every mesh directory the model may ship
              ['lfs-obj', 'lfs-bobj', 'lfs-mars_obj', 'lfs-stl'].each do |mesh_dir|
                  begin
                      Autobuild::Subprocess.run(pkg, 'build', 'git-lfs', 'pull',
                                                :working_directory => File.join(pkg.srcdir, 'meshes', mesh_dir))
                  rescue
                      # the directory does not exist for this model; nothing to pull
                  end
              end
          end
      end
  end
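
With this in place, a model repository can be declared in the autobuild file via lfs_package instead of import_package, for example (the package path is a placeholder):

  lfs_package "models/robots/$YOUR_MODELS_GROUP/full-urdf"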

What to do after you have everything prepared:

  1. (recommended) Make sure all the pipeline-related repos have an icon corresponding to the Configuration and a proper description.
  2. Edit phobos/ci-docker-buildconf/manifest:
    • Add your package set to the package_sets
    • Add your definition repo to the layout
  3. Add your .gitlab-ci.yml file (default is here) to the repositories that shall trigger the model processing; a sketch of such a file follows this list. Consider linking it directly in the CI/CD configuration of your repo to get the most recent updates. Typically only the definition repo should trigger the pipeline. (You can also make the input repositories trigger it, but at some point the pipeline storage will not be sufficient and has to be adapted on project level by the ISG.)
  4. Configure the repositories so that your CI user can push:
    1. Generate an SSH key pair with ssh-keygen (an example command is given after this list)
    2. Configure the result repositories:
      • In the project settings, navigate to Settings -> Repository -> Deploy Keys
      • Add your public key there with a descriptive name, or, if you have already done this for another repository, enable that key for this repository by selecting it from the "Privately accessible deploy keys" at the bottom
    3. Configure the repositories which trigger a CI run:
      • In the project settings, navigate to Settings -> CI / CD -> Variables
      • Add the following variables there:
        • CI_PRIVATE_KEY: The private key of the keypair you generated in step 4.1.
        • SSH_KNOWN_HOSTS: Add git.hb.dfki.de as known host
        • CI_USER_EMAIL: The e-mail address that shall appear in the commits made by the CI. (Should be '$GITLAB_USER_EMAIL')
        • CI_ROOT_PATH: The autoproj path to the root directory of the CI (see Directory structure below) (Probably something like: 'models/robots/$YOUR_MODELS_GROUP')
        • CI_MODEL_DEFINITIONS_PATH: The directory name of the Model Definitions repo. (E.g. 'model_definitions')
        • CI_UPDATE_TARGET_BRANCH: The name of the branch where the new models shall be pushed to (probably "develop")
        • CI_MESH_UPDATE_TARGET_BRANCH: The name of the branch where the new meshes shall be pushed to (probably "master")
      • Note: It might also be an option to add all these variables to your model subgroup, so that you only have to set them once. But security-wise, every repo in the subgroup would then be able to push to your result repositories. Hence, it is highly recommended to set these variables, especially CI_PRIVATE_KEY, only on the repositories that trigger the CI and are allowed to push.
  5. Last but not least, go to the buildserver (only accessible from the DFKI network), trigger a run (Build with parameters -> Build) and check whether it runs through successfully with your changes. Fix your errors until it is successful again.
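
For step 4.1, a key pair can be generated with, e.g., ssh-keygen -t ed25519 -f ci_key (the public key is then ci_key.pub). For step 3, a trigger repository's .gitlab-ci.yml might look roughly like the following sketch; prefer the linked default file, as the image name and script path here are assumptions:

  # Hypothetical sketch of a trigger .gitlab-ci.yml; use the linked default file as the real starting point.
  stages:
    - model_processing
    - test
    - deploy

  model_processing:
    stage: model_processing
    image: $PHOBOS_CI_IMAGE        # assumption: the phobos CI docker image
    script:
      # assumption: the buildconf checkout provides phobos/ci-run
      - python phobos/ci-run/run_pipeline.py --process $CI_MODEL_DEFINITIONS_PATH/pipeline.yml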

I have a new URDF exported from CAD. What to do?

1. You have to preprocess it.

Before you can push your model, some preprocessing steps are necessary:

  • Remove the line with the XML version from the URDF
  • (Reduce unnecessary vertices in the meshes for smaller mesh files and faster simulation)
  • Fix the mesh paths by replacing the package-style paths with the correct relative paths
  • Check the URDF for basic correctness (the easiest way to do so is by generating a PDF)

All these steps are done by this script: https://git.hb.dfki.de/phobos/ci-run/-/blob/master/preprocess_cad_export.py

It is explained in the corresponding README.md file of the ci-run repo.
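
For illustration, the XML-declaration removal and the mesh-path fix could be done by hand like this (a minimal Python sketch, not the actual script; the relative meshes/ layout is an assumption):

  # sketch: strip the XML declaration and rewrite package:// mesh paths in a URDF
  import re
  import sys

  urdf_file = sys.argv[1]  # path to the exported URDF
  with open(urdf_file) as f:
      text = f.read()

  text = re.sub(r'^<\?xml[^>]*\?>\s*', '', text)  # drop the XML version line
  # replace package-style mesh paths with relative ones (assumed layout: meshes/ next to the urdf/ dir)
  text = re.sub(r'package://[^/]+/meshes/', '../meshes/', text)

  with open(urdf_file, 'w') as f:
      f.write(text)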

2. Push it

After everything is done you can push your update to the corresponding input model.

3. Trigger the pipeline run

Trigger the pipeline-run in the definition repo manually.


I want to add a new derived model. What to do?

  1. Create a repository for your new model in the subgroup related to the model. Note: Make sure to initialize it with a README.md like the Example-README-for-model-repo.md.
  2. Add the result repository to the subgroup's package set, then commit & push.
  3. Configure your result repository so that the CI can push to it (see step 4.2 above). You can get the deploy key from the list of "Privately accessible deploy keys".
  4. Clone the definition repository from the subgroup:
    1. Write a $MODEL_NAME.yml definition file for the new derivative (for details see here)
    2. Add the $MODEL_NAME.yml file in the pipeline.yml file
    3. Add the result repository to the manifest.xml file of the definition repo (see the sketch after this list)
    4. double-check your $MODEL_NAME.yml
    5. triple-check it, and the rest of your changes
    6. Commit and push
  5. The pipeline will automatically start to run and update/create all models.
  6. Watch the pipeline to see whether it runs through or whether you have to investigate why it fails.
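
For step 4.3, the dependency entry in the definition repo's manifest.xml typically looks like this (the package path is a placeholder):

  <!-- hypothetical dependency entry for the new result repository -->
  <depend package="models/robots/$YOUR_MODELS_GROUP/$MODEL_NAME" />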

Note: you can also run the pipeline locally to test your model definitions before pushing them.


How to run the pipeline locally?

Either check out this buildconf or adapt your own so that it also checks out the phobos/ci_package_set. In the latter case you have to make sure that Python is enabled in your autoproj setup. (If problems occur, consider bootstrapping the buildconf. If this doesn't help, you can create a virtual environment with python3 and install all the phobos modules there by hand; see the sketch below.) Please see the Configuration page on how the directory structure has to look.
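
If you go the virtual-environment route, it could look like this (the phobos checkout path is a placeholder):

  python3 -m venv phobos-env
  source phobos-env/bin/activate
  pip install -e $PATH_TO_YOUR_PHOBOS_CHECKOUT   # hypothetical: install your phobos checkout into the venv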

Then you can go to your model definition directory (e.g. cd models/robots/$MODEL_SUBGROUP/model_definitions) and do:

python $PATH_TO_PHOBOS_CI_RUN/run_pipeline.py --process --test pipeline.yml

In a typical setup this is:

python ../../../phobos/ci-run/run_pipeline.py --process --test pipeline.yml

In the pipeline.yml you can comment out all other definitions to check only yours.
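
For example (a sketch only; the actual keys are described in Configuration/The pipeline.yml-file):

  # sketch: comment out the definitions you don't want to process (the key name is an assumption)
  model_definitions:
    # - full-urdf.yml
    # - reduced-urdf.yml
    - my_new_model.yml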


I have a new model or changes I want to apply, but there are failing tests. What now?

If there are changes in your new model that are not consistent with the old model, or there are other issues according to the tests defined for the respective model type, the model will be pushed to develop and a merge request is opened. From there you can check out the model and test it manually.

If you are sure that your newly exported model is correct and you are aware of the breaking changes, you can merge develop into master. If not, fix your errors in the model definitions.

For help on how to find the errors in your model, read the following section.


The test fails. What can I do?

Have a look into the failed pipeline (CI/CD -> Pipelines). There you can see the jobs of this pipeline. In the test job you'll find a link to the job artifacts on the right-hand side of the page. In the public folder you'll find the log files of the pipeline. There you can scan all test reports and see why the pipeline has reported your model as failing some tests.

The same info is included in the console output, too.


The pipeline fails. What can I do?

All pipelines provide log files, which are stored in the artifacts of the jobs. You can also view the console output of each job. If your pipeline has failed, simply click the "pipeline: failed" badge or go to CI/CD -> Pipelines. There you can see which job has failed; click that job to see what the issue was. You might also want to have a look at the Troubleshooting page for the most common errors.


What does the pipeline.yml have to look like?

See Configuration/The pipeline.yml-file.


What does a model definition have to look like?

See Configuration/The model definition file.


How can I inherit a model that is part of another model group?

If the model is part of a pipeline process and you want to use its master branch, you can simply give it as a basefile relative to the current model_definition file. Otherwise, you can inherit any model that is available in a git repository to which you have write access or which is public. If the repo is not public, you have to give the pipeline read access to the respective repository by adding your pipeline's deploy key to the repo. See also the repo entry in Configuration/The model definition file.
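
Sketched, the basefile variant could look like this (the path is a placeholder; see Configuration/The model definition file for the exact syntax, including the repo entry for models in other git repositories):

  # sketch: inherit a base model from another group's pipeline via its master branch
  basefile: ../../other_model_group/model_definitions/base_model.yml   # placeholder relative path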


Can I use phobos-CI to test my manually processed models in Gitlab-CI?

Yes, you can! Set up your pipeline, use the test_model.py script, provide a configuration like this in the repository, and give it as an argument to the script. You'll find a gitlab-ci-cfg.yml for this purpose here.

See also this Configuration section.

Note: This is a beta feature. Check the latest version of the script on the phobos-develop image.


Why does the pipeline push some models to develop and some to master?

During the pipeline run where the models are created, the models are tested for basic consistency and with the tests you required in the pipeline.yml. If all these tests succeed, the model is consistent with the compare model (in most cases the previous version on master) and is directly pushed to master, as it does not contain breaking changes. When the model fails any test, it is pushed to develop instead. There you can test the model and decide whether the changes are wanted and correct; once that is ensured, you can simply merge the new model manually into master.

Therefore, make sure to maintain the merge requests of the models you use on a regular basis, so that you always have the latest working model on master.

Note: The models are always in sync with the corresponding model_definitions commit. The history in the model repositories preceding the current commit is not related to the current state and is only used for version selection.