Home - acntech/RoboSchool GitHub Wiki

# Welcome to the RoboSchool wiki!

## Resources

### Pommerman

An implementation of "playing with fire" (a Bomberman-style game) for training AI agents:

https://www.pommerman.com/

### Learning material

Article on different types of DQN

https://roberttlange.github.io/posts/2019/08/blog-post-5/
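
As a taste of what the article covers, the difference between vanilla DQN and Double DQN bootstrap targets can be sketched in a few lines of NumPy. This is an illustrative sketch with made-up Q-values; the function names are mine, not from the article.

```python
import numpy as np

def dqn_target(reward, q_next_target, gamma=0.99):
    # Vanilla DQN: bootstrap from the max of the target network's Q-values.
    # The same network both selects and evaluates the action, which tends
    # to overestimate values.
    return reward + gamma * np.max(q_next_target)

def double_dqn_target(reward, q_next_online, q_next_target, gamma=0.99):
    # Double DQN: the online network selects the action, the target
    # network evaluates it, which reduces the overestimation bias.
    best_action = np.argmax(q_next_online)
    return reward + gamma * q_next_target[best_action]

# Hypothetical Q-values for the next state under each network
q_online = np.array([1.0, 3.0, 2.0])
q_target = np.array([2.5, 1.0, 2.0])

print(dqn_target(0.0, q_target))                    # bootstraps from max(q_target)
print(double_dqn_target(0.0, q_online, q_target))   # evaluates the online argmax
```

Note how the two targets differ even for the same transition: vanilla DQN takes the maximum of the target network's values, while Double DQN evaluates the action the online network prefers.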

### Docker

This tutorial covers all the Docker material used in this repo: https://docker-curriculum.com/

### Omniboard and MongoDB

https://github.com/vivekratnavel/omniboard/blob/master/docs/usage.md

Omniboard and MongoDB must both be running and able to talk to each other. This can be achieved by starting them on the same Docker network. First, create a new Docker network (or use an existing one):

```shell
docker network create omniboard-network
```

We now have a network on which to run the containers. The MongoDB and Omniboard containers should use the same Docker network. Start MongoDB:

```shell
docker run --rm --name mongo-container --net omniboard-network -d mongo
```

Then run the Omniboard container, pointing it at the MongoDB container and the `sacred` database:

```shell
docker run --rm -d -p 9000:9000 --name omniboard --net=omniboard-network vivekratnavel/omniboard -m mongo-container:27017:sacred
```
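
Alternatively, the two containers can be declared together in a compose file. This is a sketch, not a file from the repo; it assumes the same images, container name, and `sacred` database as the commands above:

```yaml
# docker-compose.yml sketch: Compose puts both services on a shared
# default network, so the container name resolves as a hostname.
version: "3"
services:
  mongo:
    image: mongo
    container_name: mongo-container
  omniboard:
    image: vivekratnavel/omniboard
    command: ["-m", "mongo-container:27017:sacred"]
    ports:
      - "9000:9000"
    depends_on:
      - mongo
```

With this in place, `docker-compose up -d` replaces the three manual commands above.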

### Docker RL development environment

To start an RL environment with a Jupyter notebook running, run:

```shell
docker run --rm -it -v $(pwd):/notebooks -p 8888:8888 justheuristic/practical_rl
```

Go to localhost:8888 and enter the token from the console to log in. An RL environment image has also been built for this project and can be run with:

```shell
docker run --rm -it -p 8888:8888 fabiansd/rl-env bash
```

This starts a Linux container with all the necessary libraries installed. Inside it you can run Python scripts and Linux commands, and start Jupyter by typing:

```shell
sh /RoboSchool/src/run_jyputer.sh
```

### Parallelization with TensorFlow

https://www.tensorflow.org/guide/performance/overview

### RL school

GitHub repo for the RL school, with learning material and a Docker environment for development: https://github.com/yandexdataschool/Practical_RL

https://github.com/beaupletga/Curve-Fever

## Project Organization


```
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── results            <- Final results for display and show-casing
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
```

## Rules for development in this project

The development folder structure is based on the Cookiecutter Data Science structure: https://drivendata.github.io/cookiecutter-data-science/

### Data is immutable

Never manipulate or manually change the raw data. The raw data should be listed in the `.gitignore` file, since it does not belong under source control.
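
For example, the data folders from the project structure above can be excluded like this (a sketch; adjust the paths to what the repo actually tracks):

```
# .gitignore sketch: keep raw and derived data out of source control
data/raw/
data/interim/
data/processed/
data/external/
```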

### Notebooks are for exploration and communication

Jupyter notebooks should be used for quick experimentation and exploration, for show-cases, or for communicating experience, tutorials, and similar. This is because notebooks are challenging to keep under source control.

### Develop in the environment

To reproduce results across several developers, the computational environment has to be consistent. Docker is set up for this purpose.
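
A minimal Dockerfile along these lines is one way to pin such an environment. This is a sketch, not the actual definition of the project's `fabiansd/rl-env` image; it assumes the `requirements.txt` from the structure above and a hypothetical base image choice:

```dockerfile
# Sketch of a reproducible RL development image (assumed, not from the repo)
FROM python:3.7-slim

WORKDIR /RoboSchool

# Install pinned dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8888
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8888", "--allow-root"]
```

Because every developer builds from the same pinned base image and `requirements.txt`, results are produced in the same environment regardless of the host machine.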

## More info