CLI - ufal/factgenie GitHub Wiki

The factgenie CLI provides commands for managing datasets, model outputs, and campaigns, and for running LLM evaluation and generation campaigns. Below is an overview of the available commands.

📋 Listing Resources

List Available Resources

# List all available datasets
factgenie list datasets

# List all available model outputs
factgenie list outputs

# List all available campaigns
factgenie list campaigns

View Detailed Information

# Show information about a specific dataset
factgenie info --dataset <dataset_id>

# Show information about a specific campaign
factgenie info --campaign <campaign_id>
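
For example, assuming the quintd1-ice_hockey dataset used in the campaign examples below is installed locally:

# Show details of a locally installed dataset
factgenie info --dataset quintd1-ice_hockey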

🚀 Creating and Running LLM Campaigns

Create a New LLM Campaign

To create a new campaign, use the create_llm_campaign command. This command supports both evaluation (llm_eval) and generation (llm_gen) modes.

factgenie create_llm_campaign <campaign_id> \
  --mode <llm_eval|llm_gen> \
  --dataset_ids <dataset1,dataset2,...> \
  --splits <split1,split2,...> \
  --setup_ids <setup1,setup2,...> \
  --config_file <path_or_name> \
  [--overwrite]

Parameters:

  • campaign_id: Unique identifier for the campaign
  • --mode: Either llm_eval or llm_gen
  • --dataset_ids: Comma-separated list of dataset identifiers
  • --splits: Comma-separated list of splits (e.g., "train,test,valid")
  • --setup_ids: Comma-separated list of model output setup IDs (required for llm_eval mode)
  • --config_file: Either a path to a YAML configuration file or the name of an existing config (without file suffix)
  • --overwrite: Optional flag to overwrite an existing campaign

Example for Evaluation Mode:

factgenie create_llm_campaign llm-eval-test \
  --mode llm_eval \
  --dataset_ids quintd1-ice_hockey \
  --splits test \
  --setup_ids llama2 \
  --config_file openai-gpt3.5

Example for Generation Mode:

factgenie create_llm_campaign llm-gen-test \
  --mode llm_gen \
  --dataset_ids quintd1-ice_hockey \
  --splits test \
  --config_file gpt4-config
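
The --dataset_ids, --splits, and --setup_ids arguments also accept multiple comma-separated values. A sketch with placeholder names (dataset_a, dataset_b, and my_model are illustrative only) that evaluates two datasets on two splits and overwrites a previously created campaign with the same ID:

factgenie create_llm_campaign multi-eval-test \
  --mode llm_eval \
  --dataset_ids dataset_a,dataset_b \
  --splits dev,test \
  --setup_ids my_model \
  --config_file openai-gpt3.5 \
  --overwrite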

Run an LLM Campaign

Once a campaign is created, you can run it using the run_llm_campaign command:

factgenie run_llm_campaign <campaign_id>
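
For example, to run the evaluation campaign created in the example above:

factgenie run_llm_campaign llm-eval-test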

Important Notes:

  • The campaign must exist and not be in a "FINISHED" or "RUNNING" state
  • For evaluation campaigns (llm_eval), ensure that model outputs exist for the specified dataset/split/setup combinations
  • The configuration file should be placed in:
    • factgenie/config/llm-eval/ for evaluation campaigns
    • factgenie/config/llm-gen/ for generation campaigns
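
Since --config_file accepts either a config name or a path, the following two invocations should be interchangeable, assuming openai-gpt3.5.yaml exists in the evaluation config directory (re-creating an existing campaign additionally requires --overwrite):

# Refer to the config by name (resolved in factgenie/config/llm-eval/)...
factgenie create_llm_campaign llm-eval-test --mode llm_eval \
  --dataset_ids quintd1-ice_hockey --splits test --setup_ids llama2 \
  --config_file openai-gpt3.5 --overwrite

# ...or pass the path to the YAML file directly
factgenie create_llm_campaign llm-eval-test --mode llm_eval \
  --dataset_ids quintd1-ice_hockey --splits test --setup_ids llama2 \
  --config_file factgenie/config/llm-eval/openai-gpt3.5.yaml --overwrite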

🧮 Computing Statistics

Factgenie also provides a set of CLI commands for computing statistics and inter-annotator agreement on collected annotations.

See the page Analyzing Annotations for more details.
