LORA inference - techconative/llm-finetune GitHub Wiki
The command below runs inference using the fine-tuned LoRA weights.
```shell
MODEL="openlm-research/open_llama_3b" # Or whichever model you fine-tuned
python generate/lora_ui_gen.py --checkpoint_dir checkpoints/$MODEL --lora_path=<the latest checkpoint .pth file from the output dir of finetuning> --prompt "Apply a top navbar."
```
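Conceptually, loading the weights passed via `--lora_path` amounts to adding a scaled low-rank update on top of the frozen base weights. The sketch below illustrates this with NumPy; the names, dimensions, and scaling convention (`alpha / r`) are illustrative assumptions, not the repo's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 8, 8, 2, 4   # toy sizes; r = LoRA rank, alpha = scaling

W = rng.normal(size=(d_out, d_in))   # frozen base weight from the checkpoint
A = rng.normal(size=(r, d_in))       # LoRA down-projection (trained)
B = rng.normal(size=(d_out, r))      # LoRA up-projection (trained)

# Merging the adapter into the base weight for inference:
W_merged = W + (alpha / r) * (B @ A)

x = rng.normal(size=(d_in,))
# The merged weight gives the same output as running the
# base layer plus a separate low-rank adapter path.
y_merged = W_merged @ x
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))
assert np.allclose(y_merged, y_adapter)
```

Merging like this means inference costs the same as the base model, since no extra adapter matmuls remain at runtime.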