Inference - R-D-BioTech-Alaska/Qelm GitHub Wiki

Performing Inference

Run predictions from either the GUI or the CLI.

CLI Inference Example:

python Qelm2.py --inference --input_id 5 --load_path quantum_llm_model_enhanced.json
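Before running inference, it can help to confirm that the checkpoint passed to `--load_path` is readable. A minimal sketch, assuming only that the file is plain JSON (as the `.json` extension suggests); `inspect_checkpoint` is a hypothetical helper, not part of Qelm:

```python
import json

def inspect_checkpoint(path):
    """Load a saved model file and report its top-level keys.

    Assumes the checkpoint is plain JSON; the key names are whatever
    Qelm wrote when the model was saved.
    """
    with open(path) as f:
        model = json.load(f)
    if isinstance(model, dict):
        return sorted(model)  # top-level key names, alphabetized
    return type(model).__name__
```

For example, `inspect_checkpoint("quantum_llm_model_enhanced.json")` would list the sections stored in the saved model.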

GUI Inference:

1. Enter an Input Token: type the token ID you wish to use as input.
2. Set Parameters:
    - Max Length: the maximum number of tokens to generate.
    - Temperature: adjusts the randomness of the output generation.
3. Click Run Inference: initiates the inference process.
4. View Output: the generated sequence is displayed in the GUI's output section.
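The Max Length and Temperature parameters above correspond to a standard sampling loop. A minimal sketch of temperature sampling (not Qelm's actual implementation; `logits_fn` is a hypothetical stand-in for the model's next-token scores):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick a token index from raw scores.

    temperature <= 0 falls back to greedy argmax; higher values
    flatten the distribution, making output more random.
    """
    if temperature <= 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = random.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(exps) - 1

def generate(logits_fn, start_token, max_length, temperature=1.0):
    """Autoregressive loop: append sampled tokens up to max_length."""
    tokens = [start_token]
    for _ in range(max_length):
        tokens.append(sample_next_token(logits_fn(tokens), temperature))
    return tokens
```

With temperature near zero this reproduces greedy decoding; raising it toward 1.0 and beyond trades determinism for diversity, which matches the Temperature slider's described effect.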