System UnrealPlugin InferenceInterfaces - kcccr123/ue-reinforcement-learning GitHub Wiki

Inference Interfaces

UInferenceInterface defines the abstract base interface for embedding trained policies in the framework. It standardizes how trained models are loaded, how observation data is passed in, and how inference is run at runtime.

This class is not meant to be used directly. Instead, inherit from it to implement a backend-specific interface (e.g., ONNX, Torch).


Overview

The UInferenceInterface serves as a unified contract for ML model inference. It allows bridge classes or gameplay logic to:

  • Load models from disk
  • Run inference with formatted observations
  • Get model outputs as serialized action strings

This interface is Blueprint-accessible, allowing users to call inference functions from both C++ and Blueprint systems.


Usage

You must subclass UInferenceInterface and implement the virtual methods LoadModel() and RunInference(). These implementations are then plugged into bridge classes or components via SetInferenceInterface().

This structure allows models to be cleanly swapped without modifying logic code.
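The swap-in pattern described above can be sketched in standalone C++ (std types stand in for FString/TArray, and `DummyInference`/`Bridge` are illustrative names, not the plugin's actual classes):

```cpp
#include <memory>
#include <string>
#include <vector>

// Simplified standalone analogue of UInferenceInterface (no UE types).
class InferenceInterface {
public:
    virtual ~InferenceInterface() = default;
    virtual bool LoadModel(const std::string& ModelPath) { return false; }  // base: always false
    virtual std::string RunInference(const std::vector<float>& Observation) = 0;
};

// Hypothetical backend standing in for an ONNX or Torch implementation.
class DummyInference : public InferenceInterface {
public:
    bool LoadModel(const std::string& ModelPath) override {
        bLoaded = !ModelPath.empty();  // real backend: create a session here
        return bLoaded;
    }
    std::string RunInference(const std::vector<float>& Observation) override {
        return bLoaded ? "0.0,1.0" : "";  // real backend: run the model
    }
private:
    bool bLoaded = false;
};

// A bridge holds the interface by pointer, so backends can be swapped
// without modifying gameplay logic.
class Bridge {
public:
    void SetInferenceInterface(std::unique_ptr<InferenceInterface> Iface) {
        Inference = std::move(Iface);
    }
    std::string Step(const std::vector<float>& Obs) {
        return Inference ? Inference->RunInference(Obs) : "";
    }
private:
    std::unique_ptr<InferenceInterface> Inference;
};
```

Because the bridge only sees the abstract base, switching from one backend to another is a single `SetInferenceInterface()` call.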


Key Methods

bool LoadModel(const FString& ModelPath)

Loads a model from the specified file path. Should return true on success.

  • Called once before inference begins
  • Base class always returns false
  • Implement this to initialize your backend runtime (e.g., ONNX session)
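A minimal standalone sketch of a LoadModel() implementation, assuming nothing beyond the standard library (a real ONNX subclass would create its inference session here instead of just probing the file):

```cpp
#include <fstream>
#include <string>

// Illustrative LoadModel body: verify the model file is readable before
// handing it to the backend runtime.
bool LoadModelSketch(const std::string& ModelPath) {
    std::ifstream File(ModelPath, std::ios::binary);
    if (!File.good()) {
        return false;  // mirror the base class: false on failure
    }
    // ... initialize the backend runtime/session from ModelPath here ...
    return true;
}
```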

FString RunInference(const TArray&lt;float&gt;& Observation)

Runs inference on the given observation vector. Should return a comma-separated string of output values.

  • Called each time the agent/environment needs a new action
  • Outputs should be consistent with what your bridge expects (e.g., "0.0,1.0")
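The serialization step can be sketched with a hypothetical helper (std types in place of FString/TArray) that joins model outputs into the comma-separated format the bridge expects:

```cpp
#include <string>
#include <vector>

// Join raw model outputs into a comma-separated action string,
// e.g. {0.f, 1.f} -> "0.000000,1.000000".
std::string SerializeActions(const std::vector<float>& Outputs) {
    std::string Result;
    for (size_t i = 0; i < Outputs.size(); ++i) {
        if (i > 0) Result += ",";
        Result += std::to_string(Outputs[i]);
    }
    return Result;
}
```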

Extension Guidelines

To create your own model backend:

  1. Subclass UInferenceInterface
  2. Implement LoadModel() to prepare your runtime
  3. Implement RunInference() to transform observations into actions
  4. Optionally expose custom configuration or reset methods
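The four steps above can be sketched end to end in standalone C++ (the real subclass would derive from UInferenceInterface and use UE types; `TanhPolicy` is a made-up backend for illustration):

```cpp
#include <cmath>
#include <string>
#include <vector>

// Step 1: subclass the (here: simplified, non-UE) base interface.
class InferenceBase {
public:
    virtual ~InferenceBase() = default;
    virtual bool LoadModel(const std::string& Path) { return false; }
    virtual std::string RunInference(const std::vector<float>& Obs) = 0;
    virtual void Reset() {}  // step 4: optional custom reset hook
};

class TanhPolicy : public InferenceBase {
public:
    // Step 2: prepare the runtime (here we only record readiness).
    bool LoadModel(const std::string& Path) override {
        bReady = !Path.empty();
        return bReady;
    }
    // Step 3: transform observations into a serialized action string.
    std::string RunInference(const std::vector<float>& Obs) override {
        float Sum = 0.f;
        for (float V : Obs) Sum += V;
        return std::to_string(std::tanh(Sum));  // single bounded action
    }
private:
    bool bReady = false;
};
```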