CLI Usage - nshaibu/volnux GitHub Wiki

1. Create project

volnux startproject ml_platform

Created the ml_platform folder containing:

  • __init__.py
  • conf.py
  • workflows/ (with its own __init__.py)

2. Navigate to the project

cd ml_platform

3. Create workflow

volnux startworkflow training_pipeline

Created workflows/training_pipeline/ containing:

  • pipeline.py
  • workflow.py (WorkflowConfig)
  • events.py (Local events)
  • training_pipeline.pty (Pointy script)

4. Edit configuration

Configure registries, triggers, and connections

nano workflows/training_pipeline/workflow.py
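The workflow.py module holds the WorkflowConfig for the pipeline. The real base class and registration hooks come from volnux itself; the plain-Python stand-in below only mirrors the shape suggested by the interactive session in step 12 (every class name, attribute, and the ready() behaviour here is an assumption for illustration, not the actual volnux API):

```python
class TrainingPipelineConfig:
    """Illustrative stand-in for the WorkflowConfig defined in workflow.py."""

    name = "training_pipeline"
    version = "1.0.0"

    # Event registries the Pointy script can pull events from (assumed names)
    registries = ["local", "company", "pypi", "github"]

    # Triggers that can start the workflow (assumed, mirroring step 10)
    triggers = [
        "daily_schedule",
        "training_requests",
        "model_drift_detected",
        "manual_retrain",
    ]

    def ready(self) -> bool:
        # In volnux this step registers the registries and triggers;
        # this sketch only checks that both lists are populated.
        return bool(self.registries and self.triggers)
```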

5. Define workflow logic

Write a Pointy script to define the structure of your workflow

nano workflows/training_pipeline/training_pipeline.pty

6. Validate workflow (future)

volnux validate training_pipeline

✅ Workflow valid: training_pipeline

  • 15 events found
  • 4 registries configured
  • 4 triggers registered
  • DAG mode enabled

7. Run workflow (development)

ENVIRONMENT=development volnux run training_pipeline \
  --params '{"data_path": "data.csv", "model_type": "rf"}'
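The `--params` payload is plain JSON. A small sketch of how such a payload can be parsed and sanity-checked on the Python side (the function name and required keys are illustrative, not part of the volnux API):

```python
import json

# Keys assumed from the example invocation above
REQUIRED_KEYS = {"data_path", "model_type"}

def parse_params(raw: str) -> dict:
    """Parse a --params JSON string and check the keys this pipeline expects."""
    params = json.loads(raw)
    missing = REQUIRED_KEYS - params.keys()
    if missing:
        raise ValueError(f"missing parameters: {sorted(missing)}")
    return params

params = parse_params('{"data_path": "data.csv", "model_type": "rf"}')
```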

8. Run workflow (production)

ENVIRONMENT=production volnux run training_pipeline

9. Monitor execution (future)

volnux logs training_pipeline --follow
[2024-01-15 14:30:00] INFO Starting workflow: training_pipeline
[2024-01-15 14:30:05] INFO LoadData completed successfully
[2024-01-15 14:30:10] INFO MAP<PreprocessChunk> processing 10 chunks...

10. Manage triggers (future)

volnux triggers list training_pipeline
Name                  Type         Status
daily_schedule        Schedule     ✅ enabled
training_requests     Kafka        ✅ enabled
model_drift_detected  Conditional  ✅ enabled
manual_retrain        Webhook      ⚠️ disabled

11. Enable/disable triggers (future)

volnux triggers enable training_pipeline manual_retrain
✅ Trigger enabled: manual_retrain

12. Interactive development

volnux shell
>>> from workflows.training_pipeline.workflow import TrainingPipelineConfig
>>> config = TrainingPipelineConfig()
>>> config.ready()
🚀 Initializing training_pipeline v1.0.0
  📦 Registering event registries...
    ✅ Registered 4 event registries
  🎯 Registering workflow triggers...
    ✅ Registered 4 workflow triggers
✅ training_pipeline ready!

13. Dry run with graph visualisation (future)

volnux run training_pipeline --dry-run --show-graph

[Generated ASCII graph showing workflow structure]

Example Workflow

# Pointy Script for Training Pipeline
# Events can be:
# - Defined locally in events.py
# - Pulled from registries (company::, pypi::, github::, local::)

# Access configuration variables
@batch_size = $batch_size
@num_folds = $num_folds
@model_types = $model_types
@min_accuracy = $min_accuracy

# Workflow definition with events

LoadData |->
company::ValidateDataQuality (
    0 |-> pypi::NotifyFailure |-> GenerateReport,
    1 |-> MAP<PreprocessChunk>[batch_size=$batch_size] |->
          REDUCE<company::MergeChunks> |->
          FeatureEngineering |->
          MAP<TrainModelOnFold>[models=$model_types, folds=$num_folds] |->
          REDUCE<SelectBestModel>[min_accuracy=$min_accuracy] (
              0 |-> pypi::NotifyFailure |-> GenerateReport,
              1 |-> FILTER<ValidateAccuracy> |->
                    pypi::SaveModel || company::DeployToProduction || GenerateReport |->
                    FOREACH<company::NotifyStakeholders>
          )
)
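The MAP, REDUCE, and FILTER constructs in the script behave like their functional-programming counterparts: MAP fans an event out over independent chunks, REDUCE folds the partial results back into one dataset, and FILTER gates results with a predicate. A rough, volnux-independent Python analogy (the data and transformations are purely illustrative):

```python
from functools import reduce

# Stand-in for data split into chunks by a LoadData-style event
chunks = [[1, 2], [3, 4], [5, 6]]

# MAP<PreprocessChunk>: run one event per chunk, independently
preprocessed = [[x * 10 for x in chunk] for chunk in chunks]

# REDUCE<MergeChunks>: fold the per-chunk results into one dataset
merged = reduce(lambda acc, chunk: acc + chunk, preprocessed, [])

# FILTER<ValidateAccuracy>: keep only results that pass a predicate
validated = [x for x in merged if x >= 30]
```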