System UnrealPlugin InferenceInterfaces - kcccr123/ue-reinforcement-learning GitHub Wiki
`UInferenceInterface` defines the abstract base interface for embedding trained policies in our framework. It standardizes how trained models are loaded, how observation data is passed in, and how inference is run at runtime.

This class is not meant to be used directly; instead, you should inherit from it to implement a backend-specific interface (e.g., ONNX, Torch).
The `UInferenceInterface` serves as a unified contract for ML model inference. It allows bridge classes or gameplay logic to:
- Load models from disk
- Run inference with formatted observations
- Get model outputs as serialized action strings
This interface is Blueprint-accessible, allowing users to call inference functions from both C++ and Blueprint systems.
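As a rough sketch of that contract, the interface boils down to two virtual methods. The snippet below is a simplified stand-in, not the plugin's actual declaration: it omits Unreal's `UCLASS`/`UFUNCTION` reflection macros and substitutes `std::string`/`std::vector<float>` for `FString`/`TArray<float>`.

```cpp
#include <string>
#include <vector>

// Simplified, non-UObject stand-in for UInferenceInterface.
// The real class is a UCLASS with Blueprint-callable functions.
class InferenceInterface
{
public:
    virtual ~InferenceInterface() = default;

    // Load a trained model from disk; return true on success.
    // The base class has no backend, so it always fails.
    virtual bool LoadModel(const std::string& FilePath) { return false; }

    // Map an observation vector to a serialized action string,
    // e.g. "0.0,1.0". The base class returns an empty string.
    virtual std::string RunInference(const std::vector<float>& Observation)
    {
        return "";
    }
};
```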
You must subclass `UInferenceInterface` and implement the virtual methods `LoadModel()` and `RunInference()`. These implementations are then plugged into bridge classes or components via `SetInferenceInterface()`.
This structure allows models to be cleanly swapped without modifying logic code.
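The swap works because the bridge only holds a pointer to the interface and delegates to it. Below is a hedged illustration of that pattern in plain C++; the `Bridge`, `FixedActionBackend`, and `Step` names and signatures here are ours, chosen to mirror the plugin's `SetInferenceInterface()` idea, not copied from its API.

```cpp
#include <memory>
#include <string>
#include <vector>

// Simplified, non-UObject stand-in for the interface.
struct InferenceInterface
{
    virtual ~InferenceInterface() = default;
    virtual bool LoadModel(const std::string& Path) = 0;
    virtual std::string RunInference(const std::vector<float>& Obs) = 0;
};

// Toy backend that always emits the same action string.
class FixedActionBackend : public InferenceInterface
{
public:
    explicit FixedActionBackend(std::string InAction) : Action(std::move(InAction)) {}
    bool LoadModel(const std::string&) override { return true; }
    std::string RunInference(const std::vector<float>&) override { return Action; }

private:
    std::string Action;
};

// Hypothetical bridge: gameplay logic calls the bridge, which
// forwards to whichever backend was injected. Swapping models
// means swapping this pointer, not editing the bridge.
class Bridge
{
public:
    void SetInferenceInterface(std::shared_ptr<InferenceInterface> InInterface)
    {
        Interface = std::move(InInterface);
    }

    std::string Step(const std::vector<float>& Obs)
    {
        return Interface ? Interface->RunInference(Obs) : "";
    }

private:
    std::shared_ptr<InferenceInterface> Interface;
};
```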
`LoadModel()` loads a model from the specified file path and should return `true` on success.
- Called once before inference begins
- The base class implementation always returns `false`
- Implement this to initialize your backend runtime (e.g., an ONNX session)
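A minimal override might validate the path and then stand up the backend session, returning `false` early if the model file is missing. The sketch below (our own simplified `FileBackedModel` class, not plugin code) only checks file existence and sets a flag; a real implementation would construct its runtime session where the comment indicates.

```cpp
#include <filesystem>
#include <string>

// Hedged sketch of a LoadModel() override (simplified, non-UObject).
class FileBackedModel
{
public:
    virtual ~FileBackedModel() = default;

    virtual bool LoadModel(const std::string& FilePath)
    {
        if (!std::filesystem::exists(FilePath))
        {
            return false;  // model file missing: report failure
        }
        // ... create the backend session (e.g., an ONNX Runtime
        // session) from FilePath here ...
        bLoaded = true;
        return true;
    }

    bool IsLoaded() const { return bLoaded; }

private:
    bool bLoaded = false;
};
```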
`RunInference()` runs inference on the given observation vector and should return a comma-separated string of output values.
- Called each time the agent/environment needs a new action
- Outputs should be consistent with what your bridge expects (e.g., `"0.0,1.0"`)
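Whatever the backend produces internally, the return value has to end up in that comma-separated form. One straightforward way to serialize a vector of model outputs is to join the floats with commas; the `JoinActions` helper below is our own illustration, not part of the plugin.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Join model outputs into the comma-separated action string
// the bridge expects (helper name is illustrative).
std::string JoinActions(const std::vector<float>& Outputs)
{
    std::ostringstream Stream;
    for (size_t i = 0; i < Outputs.size(); ++i)
    {
        if (i > 0) Stream << ',';
        Stream << Outputs[i];
    }
    return Stream.str();
}
```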
To create your own model backend:
- Subclass `UInferenceInterface`
- Implement `LoadModel()` to prepare your runtime
- Implement `RunInference()` to transform observations into actions
- Optionally expose custom configuration or reset methods
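Putting those steps together, a minimal backend might look like the following. This is again a simplified C++ sketch without Unreal's reflection macros, and the `MeanPolicyBackend` (which just returns the observation mean as its single action) is a dummy policy standing in for a real model evaluation.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Simplified, non-UObject stand-in for the interface.
struct InferenceInterface
{
    virtual ~InferenceInterface() = default;
    virtual bool LoadModel(const std::string& Path) { return false; }
    virtual std::string RunInference(const std::vector<float>& Obs) { return ""; }
};

// Subclass implementing both virtuals, as the steps above describe.
class MeanPolicyBackend : public InferenceInterface
{
public:
    bool LoadModel(const std::string& Path) override
    {
        // A real backend would open Path and build a session here;
        // this dummy just rejects empty paths.
        bLoaded = !Path.empty();
        return bLoaded;
    }

    std::string RunInference(const std::vector<float>& Obs) override
    {
        if (!bLoaded || Obs.empty()) return "";
        float Sum = 0.f;
        for (float V : Obs) Sum += V;
        std::ostringstream Out;
        Out << (Sum / Obs.size());  // single-action "policy"
        return Out.str();
    }

private:
    bool bLoaded = false;
};
```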