Demo Project

Overview

A comprehensive demo project is available to showcase InteractML and provide solid examples to learn from. The demos are as follows:

  • Classification - Basic use of Classification
  • Classification Labels - Use of labels
  • Classification VR Semaphore - VR virtual semaphore trainer
  • Regression - Basic use of Regression
  • Regression Lightshow - Controlling coloured spotlights
  • Dynamic Timewarp - Basic use of Dynamic Timewarp
  • Dynamic Timewarp Gestures - Recognition of mouse/pen drawn gestures

A video showing some of these demos in action is available on YouTube.

Download

The project can be downloaded from the 🔗 InteractML UE Demo repository on GitHub.

Purpose

These demos provide a useful reference for building your own Machine Learning systems. Whilst the basic premises are fairly straightforward, and a minimal test-case Blueprint graph can fit on a single screen, more realistic use-cases are going to be considerably more involved. Some areas where these demos will assist in developing applications using InteractML are:

  • Handling user input for triggering recording and training
  • Tracking operational state of the recording and training systems
  • Collecting parameters as ML node inputs
  • Driving systems using ML node outputs
  • In-game display of training status
  • VR input and display systems
  • Management of InteractML assets in a project
  • Best practice for building node graphs around InteractML nodes

Relevant documentation: Machine Learning | Model Types | Collecting Parameters | Training Models | Running Models | Utility Blueprints

Common Features

All demos have a few things in common:

  • Three panels are used to provide feedback about the three phases of ML operation: Record, Train, Run (details below)
  • Each panel contains the Blueprint scripts needed to run it, and only serves to visualise state held within other objects in the demo.
  • Each panel shows specific control help at the bottom; most demos can be used with Keyboard/Mouse or Gamepad.
  • Each scene also has three solid shapes; these are the physical hosts for the Blueprint graphs handling ML operations. Select one and open its Blueprint graph to see how it works.
    • Yellow Cylinder - Recording
    • Blue Cube - Training
    • Green Cone - Running
  • A spinning FPS indicator is used to illustrate smooth running of each demo.
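
As a rough idea of what that indicator involves, here is a minimal C++ sketch (not the demo's actual Blueprint; the spin speed and smoothing factor are assumptions): the mesh rotates at a constant angular speed scaled by delta time, so any frame hitch shows up as a visible stutter in the spin.

```cpp
#include <cstdio>

// Hypothetical per-frame update for a spinning FPS indicator: constant-rate
// spin scaled by delta time, plus an exponentially smoothed FPS readout.
struct FpsIndicator {
    float angleDegrees = 0.0f;   // current rotation of the spinning mesh
    float smoothedFps  = 60.0f;  // smoothed frames-per-second value

    void Tick(float deltaSeconds) {
        const float spinSpeed = 90.0f;                 // degrees per second (assumed)
        angleDegrees += spinSpeed * deltaSeconds;      // uneven frames = visible stutter
        if (angleDegrees >= 360.0f) angleDegrees -= 360.0f;

        const float instantFps = 1.0f / deltaSeconds;  // raw FPS this frame
        const float alpha = 0.1f;                      // smoothing factor (assumed)
        smoothedFps += (instantFps - smoothedFps) * alpha;
    }
};

int main() {
    FpsIndicator indicator;
    indicator.Tick(1.0f / 60.0f);  // one 60 Hz frame
    std::printf("angle=%.1f fps=%.1f\n", indicator.angleDegrees, indicator.smoothedFps);
}
```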

The RECORD Panel

  • Visualises the example recording state
  • Shows how many examples have been recorded in the form "X of Y"
    • X is the count for the current Expected Output
    • Y is the overall total
  • Shows the current Expected Output value (can be selected by the user)
  • Shows the Live Parameter inputs used for recording and running
  • Displays the 'hold to activate' indicator used by the Delete functions (see the sketch after this list)
  • Shows keyboard/Gamepad/VR control binding help
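
A minimal sketch of how such a 'hold to activate' control can work, assuming a hypothetical hold duration; the demo implements the equivalent logic in Blueprint, so this is just the idea, not the actual implementation.

```cpp
#include <algorithm>
#include <cstdio>

// While the button is held, time accumulates towards a threshold; the 0..1
// ratio drives the on-screen progress indicator and the destructive action
// (Delete/Reset) only fires once the threshold is reached.
struct HoldToActivate {
    float holdSeconds     = 0.0f;
    float requiredSeconds = 1.5f;   // assumed hold duration

    // Returns true on the frame the action should trigger.
    bool Tick(bool buttonHeld, float deltaSeconds) {
        if (!buttonHeld) { holdSeconds = 0.0f; return false; }
        const bool wasBelow = holdSeconds < requiredSeconds;
        holdSeconds += deltaSeconds;
        return wasBelow && holdSeconds >= requiredSeconds;
    }

    float Progress() const {                      // 0..1 for the UI indicator
        return std::min(holdSeconds / requiredSeconds, 1.0f);
    }
};

int main() {
    HoldToActivate hold;
    for (int frame = 0; frame < 120; ++frame) {   // simulate two seconds of holding
        if (hold.Tick(true, 1.0f / 60.0f))
            std::printf("activated at frame %d (progress %.2f)\n", frame, hold.Progress());
    }
}
```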

The TRAIN Panel

  • Visualises the model training state
  • Is the model trained?
  • How long the training took (when background operation is used)
  • The Training Set asset path & name
  • The Model asset path & name
  • Displays the 'hold to activate' indicator used by the Reset functions
  • Shows keyboard/Gamepad/VR control binding help

The RUN Panel

  • Visualises the model run state
  • Shows the actual Output value (when the model is running)
  • Shows keyboard/Gamepad/VR control binding help

Classification Demos

Various demonstrations of the Classification Machine Learning algorithm.

Basic Classification

Shows off the basics of using the Classification model to learn to associate player position (2D) with an output value (integer).

images/screens/Demo_Classification_Basic.png

Instructions

  • Move around the play area using standard 'First Person' game controls
  • Note the RECORD panel Values field shows your 2D position
  • Press the record button to record new examples (associate player position with expected output)
  • Press the train button to train the model
  • Press the run button to toggle run mode.
  • The RUN panel continuously shows the predicted Output values as you move around

Notes

  • 2D position is used as the Live Parameters for the recording and running nodes
  • Every frame the model is re-run with the current player position constantly providing new outputs
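
The continuous-run pattern described in these notes can be sketched as follows; `RunClassifier` is a hypothetical stand-in for the InteractML Run node and is not the plugin's actual API.

```cpp
#include <array>
#include <cstdio>

using Params2D = std::array<float, 2>;

// Hypothetical stand-in for the InteractML Run node: takes the live 2D
// parameters and returns the classified output value.
int RunClassifier(const Params2D& liveParams) {
    // ... model evaluation happens inside the plugin ...
    return liveParams[0] > 0.0f ? 1 : 2;   // placeholder result
}

// Every frame the current player position becomes the live parameters and the
// model is re-run, so the RUN panel keeps showing fresh outputs as you move.
void TickRunGraph(bool runMode, const Params2D& playerPosition2D) {
    if (!runMode) return;
    const int output = RunClassifier(playerPosition2D);
    std::printf("predicted output: %d\n", output);
}

int main() {
    TickRunGraph(true, {3.5f, -1.0f});
}
```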

Using Labels

Shows off using the Classification model to learn to associate player position (2D) with a composite multi-value output.

images/screens/Demo_Classification_Labels.png

Instructions

  • Move around the play area using standard 'First Person' game controls
  • Note the RECORD panel Values field shows your 2D position
  • Note that the RECORD panel shows multiple values as the Expected Output, each of a different type (integer, 2D vector, string)
  • Press the record button to record new examples (associate player position with expected output)
  • Press the train button to train the model
  • Press the run button to toggle run mode.
  • The RUN panel continuously shows the predicted Output values as you move around

Notes

  • The Output is composed of the following types:
    • Integer - whole numbers, in this example 10, 20, 30, etc.
    • 2D Vector - a pair of floating point numbers
    • String - shows the ability to use discrete, distinct values as outputs
  • The Output is selected as before, but this time the value is used as an index into a table of Output values, each with the three variables set to specific values (see the sketch after these notes).
  • 2D position is used as the Live Parameters for the recording and running nodes
  • Every frame the model is re-run with the current player position constantly providing new outputs
  • The RUN panel displays the Output value as the same three variable values, not the simple numerical value
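
As a rough illustration of that table lookup, the classified value simply selects a row of three typed variables. The field names below are assumptions; the real demo defines these values through its Label setup in Blueprint, not C++.

```cpp
#include <array>
#include <cstdio>
#include <string>
#include <vector>

// Sketch of a composite label row holding the three typed variables from the demo.
struct LabelRow {
    int                  number;   // integer: 10, 20, 30, ...
    std::array<float, 2> vec2;     // 2D vector: a pair of floats
    std::string          name;     // string: a discrete, distinct value
};

int main() {
    // The value recorded against each example acts as an index into this table,
    // so a single classification result yields all three variables at once.
    const std::vector<LabelRow> labelTable = {
        {10, {0.0f, 0.0f}, "north"},
        {20, {1.0f, 0.0f}, "east"},
        {30, {0.0f, 1.0f}, "south"},
    };

    const int classifiedIndex = 1;                    // e.g. what the model returned
    const LabelRow& out = labelTable[classifiedIndex];
    std::printf("%d (%.1f, %.1f) %s\n", out.number, out.vec2[0], out.vec2[1], out.name.c_str());
}
```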

Relevant documentation: Labels

Virtual Reality Semaphore Trainer

Demonstrates a more advanced input source and a larger output value set.

images/screens/Demo_Classification_SemaphoreVR.png

Instructions

💡 This demo requires the use of a Virtual Reality headset and controllers.

  • Run using the "VR Preview" play mode
  • Familiarise yourself with the scene by looking around
  • Use the stick controls (click them) to move around the play area
  • Find the two flags, pick them up by moving your hand onto the handle and clicking the 'grip' button
    • NOTE: make sure you get them the right way round (as you look from the centre of the area towards them)
  • Position yourself in front of the mirror so you can see all of it and the tile in the upper left
  • Press the run button to RUN the model
  • Imitate the silhouettes to perform the semaphore signalling for each letter and the tile should show you what the model recognises
  • You can retrain the model if you like by resetting the examples and re-recording them all.

Notes

  • You can toggle the in-headset panels using the menu button
  • This demo has more complicated graphs to handle arm/flag position detection and map it to suitable input parameter values
  • Normalised direction is used instead of arm angle because angular values are discontinuous at the point they wrap around. This is a problem for machine learning as it means some values are not close to each other even though the angles actually are. Mapping to a vector avoids this by effectively mapping values onto a circle in 2D space.
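
The wrap-around issue can be made concrete with a short sketch: as raw angles, 359° and 1° look far apart, but mapped onto unit direction vectors they become nearly identical inputs for the model.

```cpp
#include <cmath>
#include <cstdio>

// Map an angle onto a unit direction vector so that angles either side of the
// wrap-around point (e.g. 359 deg and 1 deg) become nearly identical inputs.
void AngleToDirection(float degrees, float& x, float& y) {
    const float radians = degrees * 3.14159265f / 180.0f;
    x = std::cos(radians);
    y = std::sin(radians);
}

int main() {
    float ax, ay, bx, by;
    AngleToDirection(359.0f, ax, ay);
    AngleToDirection(1.0f, bx, by);

    // As raw angles the two values differ by 358; as direction vectors the
    // distance between them is tiny, which is what the model needs to see.
    const float distance = std::sqrt((ax - bx) * (ax - bx) + (ay - by) * (ay - by));
    std::printf("vector distance: %f\n", distance);   // ~0.035
}
```

The same idea applies to any cyclic quantity fed into a model as a live parameter.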

Regression Demos

Various demonstrations of the Regression Machine Learning algorithm.

Basic Regression

Shows off the basics of using the Regression model to learn to associate player position (2D) with an output value (float).

images/screens/Demo_Regression_Basic.png

Instructions

  • Move around the play area using standard 'First Person' game controls
  • Note the RECORD panel Values field shows your 2D position
  • Press the record button to record new examples (associate player position with expected output)
  • Press the train button to train the model
  • Press the run button to toggle run mode.
  • The RUN panel continuously shows the predicted Output values as you move around

Notes

  • 2D position is used as the Live Parameters for the recording and running nodes
  • Even though the Expected Output is selected as an integer for this demo, it is treated as a floating point value
  • Every frame the model is re-run with the current player position constantly providing new outputs
  • The output values are interpolated to give a 'fuzzy' mapping between player position and output value
  • The output value is being used to drive a material parameter on the large sphere above the panels

Coloured Light Show

Shows off more advanced use of the Regression model to learn to associate player position (2D) with a complex set of values.

images/screens/Demo_Regression_Lightshow.png

Instructions

  • Move around the play area using standard 'First Person' game controls
  • Note the RECORD panel Values field shows your 2D position
  • The six spotlights are driven by the Expected Output value when recording and the recognised Output when running.
  • Adjust the Expected Output to see the various preset lighting positions and colours.
  • Press the record button to record new examples (associate player position with expected output)
  • Press the train button to train the model
  • Press the run button to toggle run mode.
  • The RUN panel doesn't show an output value; the complex set of values is visualised using the spotlights

Notes

  • 2D position is used as the Live Parameters for the recording and running nodes
  • Selecting the Expected Output indexes a Label Table of values for controlling all the lights
  • The training process is run in the background because it takes several seconds to run
  • Every frame the model is re-run with the current player position constantly providing new outputs
  • The output values are interpolated to give a 'fuzzy' mapping between player position and output value
  • Each light has two rotations (pitch/yaw), a colour value (RGB), and an intensity value (see the sketch after these notes)
  • The light control values are stored in a Blueprint Structure
  • The Label definition has 6 variables in it, each is a light control structure
  • The running model interpolates all these values according to where the player is
  • The spotlights perform some smoothing of values to prevent sudden jumps
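
A sketch of how the per-light values and smoothing might be organised; the field names, units, and smoothing rate are assumptions, since the real demo keeps this in a Blueprint Structure referenced six times by the Label definition.

```cpp
#include <algorithm>
#include <cstdio>

// Assumed layout of one light's control values (illustrative only; the demo
// stores the equivalent in a Blueprint Structure).
struct LightControl {
    float pitch = 0.0f, yaw = 0.0f;        // spotlight rotation
    float r = 1.0f, g = 1.0f, b = 1.0f;    // colour
    float intensity = 0.0f;
};

// Per-frame smoothing towards the values the model produced, so the
// interpolated outputs never make a spotlight jump suddenly.
void SmoothTowards(LightControl& current, const LightControl& target, float deltaSeconds) {
    const float rate  = 4.0f;                                   // assumed smoothing rate (per second)
    const float alpha = std::min(rate * deltaSeconds, 1.0f);    // fraction to move this frame
    current.pitch     += (target.pitch     - current.pitch)     * alpha;
    current.yaw       += (target.yaw       - current.yaw)       * alpha;
    current.r         += (target.r         - current.r)         * alpha;
    current.g         += (target.g         - current.g)         * alpha;
    current.b         += (target.b         - current.b)         * alpha;
    current.intensity += (target.intensity - current.intensity) * alpha;
}

int main() {
    // The lightshow Label holds one set of control values per spotlight (six in the demo).
    LightControl lights[6];
    LightControl modelOutput;          // what the running model produced for one light
    modelOutput.intensity = 5000.0f;
    modelOutput.yaw       = 45.0f;

    SmoothTowards(lights[0], modelOutput, 1.0f / 60.0f);   // one frame of smoothing
    std::printf("yaw=%.2f intensity=%.1f\n", lights[0].yaw, lights[0].intensity);
}
```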

Relevant documentation: Labels

Dynamic Timewarp Demos

Various demonstrations of the Dynamic Timewarp Machine Learning algorithm.

Basic Dynamic Timewarp

Shows off the basics of using the Dynamic Timewarp model to learn to associate player movement (2D) with an output value (integer).

images/screens/Demo_DTW_Basic.png

Instructions

  • Move around the play area using standard 'First Person' game controls
  • Note the RECORD panel Values field shows your 2D position
  • Hold the record button to record new examples (associate player movement with expected output)
  • Press the train button to train the model
  • Hold the run button to record an input movement for the model to try and recognise.
  • The RUN panel shows the predicted Output values once you have completed recording and the model has run

Notes

  • 2D position is used as the Live Parameters for the recording and running nodes
  • This works on a series of positions to describe a movement or gesture
  • The model only runs once recording of a new input series has finished (see the sketch below)
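
The series-based pattern can be sketched like this; `RecognizeSeries` is a hypothetical stand-in for the Dynamic Timewarp Run node, not the plugin's actual API.

```cpp
#include <array>
#include <cstdio>
#include <vector>

using Sample2D = std::array<float, 2>;

// Hypothetical stand-in for the Dynamic Timewarp Run node: takes a whole
// series of samples and returns the recognised output value.
int RecognizeSeries(const std::vector<Sample2D>& series) {
    return series.size() > 10 ? 1 : 0;   // placeholder result
}

// Samples accumulate while the button is held; the model only runs once the
// recording of the series has finished.
struct SeriesRecorder {
    std::vector<Sample2D> series;
    bool wasHeld = false;

    void Tick(bool buttonHeld, const Sample2D& playerPosition2D) {
        if (buttonHeld) {
            series.push_back(playerPosition2D);              // build up the movement
        } else if (wasHeld) {
            const int output = RecognizeSeries(series);      // run once, at the end
            std::printf("recognised output: %d\n", output);
            series.clear();
        }
        wasHeld = buttonHeld;
    }
};

int main() {
    SeriesRecorder recorder;
    for (int i = 0; i < 20; ++i) recorder.Tick(true, {float(i), 0.0f});
    recorder.Tick(false, {20.0f, 0.0f});   // release triggers recognition
}
```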

Gesture Recognition

Shows off more advanced use of the Dynamic Timewarp model to learn to associate written gestures (2D) with an output value (integer).

images/screens/Demo_DTW_Gestures.png

Instructions

  • Write on the white board to record new examples
  • Note the RECORD panel Values field shows your 2D cursor position on the board
  • Hold the mouse button to write and record new examples (associate gesture with expected output)
  • Press the train button to train the model
  • Press the run button to toggle run mode on and off.
  • Hold the mouse button to write and record an input gesture for the model to try and recognise.
  • The RUN panel shows the predicted Output value once you have completed the gesture and the model has run

Notes

  • Pen position on the board is normalised to -1 to 1 in both X and Y (see the sketch after these notes)
  • This pen position is used as the Live Parameters for the recording and running nodes
  • This works on a series of positions to describe a gesture
  • The model only runs once recording of a new input series has finished
  • The recording graph waits a short time before using the recorded gesture to allow for multi-stroke gestures
  • The demo is trained with numbers from 0 to 9, but only because that matches the Output value (integer); the gestures can take any form you like
  • The gesture recognition is run in the background because it takes several hundred milliseconds to run
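
The normalisation mentioned in the first note amounts to a simple remap of the board-local position into the -1 to 1 range; the board extents below are assumptions for illustration only.

```cpp
#include <cstdio>

// Remap a point on the whiteboard into the -1..1 range used as live parameters.
// Local coordinates are assumed to run 0..width and 0..height across the board;
// the demo derives the local position from the cursor hit on the board surface.
void NormalisePenPosition(float localX, float localY,
                          float boardWidth, float boardHeight,
                          float& outX, float& outY) {
    outX = (localX / boardWidth)  * 2.0f - 1.0f;
    outY = (localY / boardHeight) * 2.0f - 1.0f;
}

int main() {
    float x, y;
    NormalisePenPosition(150.0f, 50.0f, 200.0f, 100.0f, x, y);
    std::printf("normalised pen position: %.2f, %.2f\n", x, y);   // 0.50, 0.00
}
```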

Helper Blueprints

Several helper blueprints were created to make implementing the demonstration levels easier. These manage the user inputs, the different states the interactions can be in, various descriptive text fields and audio-visual feedback methods, and the display of this information in the UI.

The main aim is to keep all the boilerplate scripting needed to run the demos away from the actual machine learning graphs that are the heart of each demo. Hopefully this makes things clearer and means you can re-use them in your own projects or simply use them as reference.

Player Character

📁 Asset location: /Demos/Assets/Blueprints/DemoCharacter.uasset

A fairly simple implementation of an FPS movement control, largely responsible for taking controller input and applying movement and trigger events to the player and the interaction controller.

  • Player Look - Camera direction
  • Player Movement - Wandering around the play area
  • HUD toggle - Show/hide the VR in-headset UI panels
  • Record/Train/Run - Forwards the bound input events to run the demo

Demo Interaction Handler

📁 Asset location: /Demos/Assets/Blueprints/DemoInteractionHandler.uasset

A drop-in component designed to be added to the Player Controller that performs various functions:

  • Provides functions to be triggered by input events - e.g. Begin Record, End Delete, Change Expected Output
  • Handles Continuous vs Single recording mode.
  • Triggers various interaction sound cues
  • Handles the delete and reset hold delay, and provides a progress value
  • Measures training and running durations (see the timing sketch after this list)
  • Updates interaction state (see below)
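
The timing mentioned above boils down to stamping a start time and taking the difference when the operation completes; a minimal sketch using std::chrono rather than the handler's actual Blueprint logic.

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Minimal sketch of measuring how long an operation took; the handler does the
// Blueprint equivalent for the training and running durations shown on the panels.
int main() {
    const auto start = std::chrono::steady_clock::now();

    std::this_thread::sleep_for(std::chrono::milliseconds(250));  // stand-in for training work

    const auto elapsed = std::chrono::duration<float>(std::chrono::steady_clock::now() - start);
    std::printf("operation took %.3f seconds\n", elapsed.count());
}
```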

Demo Interaction State

📁 Asset location: /Demos/Assets/Blueprints/DemoInteractionState.uasset

The ongoing state of the interaction system is stored in the DemoInteractionState component which is designed to be dropped into the Game World object to serve as a central location for coordination of recording/training/running operations. This is driven by the DemoInteractionHandler, queried by the various Record, Train, and Run scripts used in the demo levels, and monitored by the UI panels present in the scenes.

It largely consists of variables holding the on/off state used to drive the nodes, as well as progress, timing, textual, and counter information (sketched below). There are also a few functions for updates and processing that are used in a couple of places.
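
As a purely illustrative sketch of the kind of state involved (every field name below is an assumption; the real component is a Blueprint, not C++):

```cpp
#include <cstdio>
#include <string>

// Illustrative only: the sort of central state DemoInteractionState holds for
// the Record/Train/Run scripts and the UI panels to share.
struct InteractionState {
    // on/off flags that drive the ML nodes
    bool  recording      = false;
    bool  training       = false;
    bool  running        = false;

    // progress and timing information for the panels
    float deleteProgress = 0.0f;   // 0..1 for the 'hold to activate' indicator
    float trainSeconds   = 0.0f;   // how long the last training took

    // textual and counter information
    std::string statusText;        // e.g. shown on the TRAIN panel
    int   expectedOutput = 0;      // currently selected Expected Output
    int   exampleCount   = 0;      // total examples recorded
};

int main() {
    InteractionState state;
    state.recording    = true;
    state.exampleCount = 4;
    std::printf("recording=%d examples=%d\n", state.recording, state.exampleCount);
}
```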


👈 Your Data | 🏠 Home | Utility Blueprints 👉