# Providers
This page explains how Loom connects to model providers and how application developers should configure environment variables.
For application code, the recommended path is:
- select a provider with `Model`
- provide keys and base URLs through environment variables or model constructor fields
- pass the model into `Agent(...)`

Avoid constructing `AnthropicProvider`, `OpenAIProvider`, or other provider classes directly unless you are extending Loom internals.
```python
from loom import Agent, Model, Runtime

agent = Agent(
    model=Model.openai("gpt-5.1"),
    instructions="You are a technical assistant.",
    runtime=Runtime.sdk(),
)
```

Loom resolves the provider lazily on first execution based on `Model`.
| Provider | Constructor | Default API Key Env Var | Base URL |
|---|---|---|---|
| Anthropic | `Model.anthropic(name)` | `ANTHROPIC_API_KEY` | `api_base` |
| OpenAI | `Model.openai(name)` | `OPENAI_API_KEY` | `api_base` or `OPENAI_BASE_URL` |
| Gemini | `Model.gemini(name)` | `GEMINI_API_KEY` or `GOOGLE_API_KEY` | no public base URL field today |
| Qwen | `Model.qwen(name)` | `DASHSCOPE_API_KEY` | provider default endpoint |
| Ollama | `Model.ollama(name)` | not required | `api_base` or `OLLAMA_BASE_URL`, default `http://localhost:11434` |
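For quick reference, here is a short consolidated sketch of each constructor from the table, using the same example model names that appear in the sections below:

```python
from loom import Model

# Each constructor returns a Model; credentials come from the default
# environment variable listed in the table above.
anthropic_model = Model.anthropic("claude-sonnet-4")  # ANTHROPIC_API_KEY
openai_model = Model.openai("gpt-5.1")                # OPENAI_API_KEY
gemini_model = Model.gemini("gemini-2.5-flash")       # GEMINI_API_KEY or GOOGLE_API_KEY
qwen_model = Model.qwen("qwen-max")                   # DASHSCOPE_API_KEY
ollama_model = Model.ollama("llama3")                 # no key required
```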
`ModelRef` remains as a compatibility alias for `Model`.
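Older code that uses `ModelRef` should behave the same way. A minimal sketch, assuming `ModelRef` is importable from the top-level `loom` package alongside `Model` (check your version's exports):

```python
from loom import ModelRef  # compatibility alias for Model (import path assumed)

# Behaves the same as Model.openai("gpt-5.1"), since ModelRef aliases Model.
model = ModelRef.openai("gpt-5.1")
```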
If you are using an OpenAI-compatible gateway:
```bash
export OPENAI_API_KEY="your-key"
export OPENAI_BASE_URL="https://your-openai-compatible-endpoint/v1"
```

```python
from loom import Agent, Model

agent = Agent(
    model=Model.openai("gpt-5.1"),
    instructions="You are a platform assistant.",
)
```

If you do not want to use the default environment variable names:
```python
import os

from loom import Model

model = Model.openai(
    "my-model",
    api_base=os.getenv("MY_MODEL_BASE_URL"),
    api_key_env="MY_MODEL_API_KEY",
)
```

For Anthropic:

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```

```python
model = Model.anthropic("claude-sonnet-4")
```

If you are routing through a private proxy:
```python
model = Model.anthropic(
    "claude-sonnet-4",
    api_base="https://your-proxy.example.com",
)
```

For Gemini:

```bash
export GEMINI_API_KEY="..."
```

or:

```bash
export GOOGLE_API_KEY="..."
```

```python
model = Model.gemini("gemini-2.5-flash")
```

For Qwen:

```bash
export DASHSCOPE_API_KEY="..."
```

```python
model = Model.qwen("qwen-max")
```

For Ollama:

```bash
export OLLAMA_BASE_URL="http://localhost:11434"
```

```python
model = Model.ollama("llama3")
```

Or set it explicitly:
```python
model = Model.ollama(
    "llama3",
    api_base="http://localhost:11434",
)
```

If provider initialization fails, Loom does not necessarily raise immediately. It falls back according to runtime configuration.
Default fallback: `RuntimeFallbackMode.LOCAL_SUMMARY`

That means:

- the environment variable may be missing
- provider construction may fail
- the agent may still return a local fallback summary result
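If you keep the default fallback but want to notice a missing credential early, one option is to validate the environment yourself before constructing the agent. A minimal sketch, assuming your deployment relies on `OPENAI_API_KEY`:

```python
import os

# Fail fast at startup rather than letting the LOCAL_SUMMARY fallback
# silently mask a missing API key at execution time.
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; refusing to start")
```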
If you want strict failure:
```python
from loom import Agent, Model
from loom.config import RuntimeConfig, RuntimeFallback, RuntimeFallbackMode, RuntimeFeatures

runtime = RuntimeConfig(
    features=RuntimeFeatures(
        fallback=RuntimeFallback(mode=RuntimeFallbackMode.ERROR),
    )
)

agent = Agent(
    model=Model.openai("gpt-5.1"),
    runtime=runtime,
)
```

To select the provider at runtime through application-level environment variables:

```python
import os

from loom import Agent, Model

def build_model() -> Model:
    provider = os.getenv("LOOM_PROVIDER", "openai").lower()
    model_name = os.getenv("LOOM_MODEL_NAME", "gpt-5.1")
    if provider == "anthropic":
        return Model.anthropic(model_name)
    if provider == "gemini":
        return Model.gemini(model_name)
    if provider == "qwen":
        return Model.qwen(model_name)
    if provider == "ollama":
        return Model.ollama(model_name)
    return Model.openai(model_name)

agent = Agent(
    model=build_model(),
    instructions="You are a multi-provider assistant.",
)
```

Notes:
- `LOOM_PROVIDER` and `LOOM_MODEL_NAME` are application-level conventions
- Loom still reads provider credentials from the provider-specific standard environment variables
If you are extending Loom internals, look at:
- `loom.providers.base.LLMProvider`
- `loom.providers.openai.OpenAIProvider`
- `loom.providers.anthropic.AnthropicProvider`
- `loom.providers.gemini.GeminiProvider`
Most application developers do not need to depend on these classes directly.
The public best practice is `Model` + `Agent`.
Related example: