# agent api - kongusen/loom-agent GitHub Wiki
This page describes Loom's core application-facing runtime objects.
The public 0.8.x path is:
```
Agent(...)
  -> Run / Session
```

`AgentConfig`, `ModelRef`, `GenerationConfig`, and `create_agent()` remain available for compatibility and advanced configuration, but new application code should start from `Agent(...)`.
## Agent

`Agent` is the top-level execution object.
```python
from loom import Agent, Capability, Model, Runtime

agent = Agent(
    model=Model.openai("gpt-5.1"),
    instructions="You are a code assistant.",
    capabilities=[
        Capability.files(read_only=True),
        Capability.web(),
    ],
    runtime=Runtime.sdk(),
)
```

Common constructor fields:
| Field | Description |
|---|---|
| `model` | Provider-backed model reference, usually `Model.openai(...)`, `Model.anthropic(...)`, etc. |
| `instructions` | Stable behavior instructions |
| `tools` | Explicit Python tools declared with `@tool` |
| `capabilities` | Files, web, shell, MCP, skill, or custom capability sources |
| `generation` | Model generation controls |
| `runtime` | Runtime profile or custom policy composition |
| `session_store` | Optional durable session persistence |
The main methods:

```python
await agent.run(prompt_or_task, context=None)
agent.stream(prompt_or_task, context=None)
await agent.receive(event_or_signal, adapter=None, session_id=None)
agent.session(config=None)
agent.resolve_knowledge(query)
```

```python
result = await agent.run("Summarize this design document.")
print(result.output)
print(result.state)
```

Use it for one-off requests, stateless flows, extraction, classification, and simple chat endpoints.
```python
async for event in agent.stream("Analyze the current requirement."):
    print(event.type, event.payload)
```

This streams run events for event-driven UIs, status displays, and debugging.
```python
from loom import SignalAdapter

adapter = SignalAdapter(
    source="gateway:slack",
    type="message",
    summary=lambda event: event["text"],
)

await agent.receive(
    {"text": "Customer asks for deployment status"},
    adapter=adapter,
    session_id="support",
)
```

`receive()` accepts either an existing `RuntimeSignal` or a raw event plus a `SignalAdapter`. The signal is stored in the target session's runtime dashboard; `AttentionPolicy` decides whether it should trigger execution.
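Conceptually, an adapter just normalizes a raw event into signal fields. The following is a stand-alone sketch of that mapping in plain Python, not loom's actual `SignalAdapter` implementation:

```python
# Stand-in sketch of what an adapter does: map a raw event to signal fields.
# Illustrative only; loom's SignalAdapter is not shown here.

def adapt(event: dict, source: str, type: str, summary) -> dict:
    """Normalize a raw event into a signal-like dict."""
    return {
        "source": source,
        "type": type,
        "summary": summary(event),  # the adapter's summary callable extracts the gist
        "raw": event,
    }

signal = adapt(
    {"text": "Customer asks for deployment status"},
    source="gateway:slack",
    type="message",
    summary=lambda event: event["text"],
)
print(signal["summary"])  # Customer asks for deployment status
```

The `summary` callable is the interesting part: it decides what the runtime dashboard sees without the runtime needing to understand the raw event shape.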
```python
from loom import SessionConfig

session = agent.session(SessionConfig(id="assistant-1"))
```

Behavior rules:

- `agent.session()` with no config returns a new `Session`
- `agent.session(SessionConfig(id="same"))` reuses the same session object when available
- reused sessions merge metadata
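The id-based reuse rule can be sketched with a simple cache keyed by session id. This is an illustrative stand-in, not loom's `Session` management:

```python
# Illustrative sketch of id-keyed session reuse; not loom's implementation.

class FakeSession:
    def __init__(self, id):
        self.id = id
        self.metadata = {}

_sessions = {}

def get_session(id=None):
    if id is None:
        # No config: always a fresh session.
        return FakeSession(id=None)
    # Known id: hand back the same object, so state accumulates across calls.
    if id not in _sessions:
        _sessions[id] = FakeSession(id)
    return _sessions[id]

a = get_session("same")
b = get_session("same")
print(a is b)  # True
```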
```python
from loom import KnowledgeQuery, RunContext

bundle = agent.resolve_knowledge(
    KnowledgeQuery(
        text="How does Loom manage sessions?",
        top_k=3,
    )
)

result = await agent.run(
    "Explain Loom's session model.",
    context=RunContext(knowledge=bundle),
)
```

## RuntimeTask

Use `RuntimeTask` when a run needs structured input or acceptance criteria.
```python
from loom import RuntimeTask

task = RuntimeTask(
    goal="Refactor the provider tool-call path",
    input={"providers": ["openai", "anthropic", "gemini"]},
    criteria=["tool call round-trip works", "provider details stay out of engine"],
)

result = await agent.run(task)
```

## SessionConfig

`SessionConfig` is the input object for session-level configuration.
```python
from loom import SessionConfig

config = SessionConfig(
    id="demo",
    metadata={"tenant": "acme"},
    extensions={"trace_id": "req-123"},
)
```

| Field | Description |
|---|---|
| `id` | Explicit session identifier; the same id reuses the same session |
| `metadata` | Business metadata |
| `extensions` | Future-compatible extension fields |
## Session

`Session` represents one stateful interaction scope.

```python
session.start(prompt_or_task, context=None)
await session.run(prompt_or_task, context=None)
session.stream(prompt_or_task, context=None)
await session.receive(event_or_signal, adapter=None)
session.get_run(run_id)
session.list_runs()
await session.close()
```

```python
run = session.start("Inspect the current repository layout.")
```

This creates a `Run` but does not wait for completion.
```python
result = await session.run("Generate a requirement summary.")
```

Equivalent to:

```python
run = session.start("Generate a requirement summary.")
result = await run.wait()
```

```python
from loom import RuntimeSignal

await session.receive(
    RuntimeSignal.create(
        "Nightly job completed",
        source="cron",
        type="job",
        urgency="normal",
    )
)
```

## RunContext

`RunContext` is the run-scoped structured context object.
```python
from loom import RunContext

context = RunContext(
    inputs={
        "repo": "loom-agent",
        "tenant": "acme",
    },
    extensions={"trace_id": "req-123"},
)
```

| Field | Description |
|---|---|
| `inputs` | Structured inputs for the current run |
| `knowledge` | Optional grounded knowledge evidence |
| `extensions` | Future-compatible extension fields |

Prefer putting business context in `inputs` rather than hiding it inside the prompt.
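For example, the same request can be phrased with context baked into the prompt string or, preferably, with a stable prompt plus structured inputs. This plain-Python sketch only builds the two request shapes; it does not call loom:

```python
# Two ways to carry the same business context; the structured form is preferred
# because the prompt stays stable and the inputs stay inspectable.

tenant, repo = "acme", "loom-agent"

# Discouraged: context hidden inside the prompt string.
prompt_only = f"Summarize the API design for tenant {tenant} in repo {repo}."

# Preferred: stable prompt plus structured inputs
# (the shape that RunContext(inputs=...) carries).
structured = {
    "prompt": "Summarize the API design.",
    "inputs": {"tenant": tenant, "repo": repo},
}
print(structured["inputs"]["repo"])  # loom-agent
```

With the structured form, downstream tooling (audit logs, replays, dashboards) can read the business context without parsing prose.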
## Run

`Run` represents one concrete execution.

```python
await run.wait()
run.events()
await run.artifacts()
await run.transcript()
```

```python
async for event in run.events():
    print(event.type, event.payload)
```

Typical event types include `run.started`, `run.completed`, `run.failed`, `artifact.created`, and provider/tool-loop events.
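Consuming the stream usually means filtering on `event.type`. This self-contained sketch uses a stand-in async generator in place of `run.events()` (the event names mirror the list above):

```python
import asyncio
from types import SimpleNamespace

# Stand-in for run.events(); a real Run would yield runtime events instead.
async def fake_events():
    for type, payload in [
        ("run.started", {}),
        ("artifact.created", {"name": "summary.md"}),
        ("run.completed", {}),
    ]:
        yield SimpleNamespace(type=type, payload=payload)

async def collect_artifacts(events):
    """Gather artifact names from an async event stream."""
    names = []
    async for event in events:
        if event.type == "artifact.created":
            names.append(event.payload["name"])
    return names

print(asyncio.run(collect_artifacts(fake_events())))  # ['summary.md']
```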
`await run.transcript()` returns a serializable dictionary with run id, session id, state, prompt/task, context, output, events, and artifacts. This is useful for persistence, auditing, and debugging.
## RunResult

`RunResult` is returned by `run.wait()`, `session.run()`, and `agent.run()`.
| Field | Description |
|---|---|
| `run_id` | Run identifier |
| `state` | Final run state |
| `output` | Final output |
| `artifacts` | Output artifacts |
| `events` | Execution events |
| `error` | Optional error payload |
| `duration_ms` | Execution duration in milliseconds |
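A common consumer pattern is to branch on `state` and surface `error` on failure. This sketch uses a local dataclass mirroring the fields above as a stand-in for `RunResult`, and the state value `"completed"` is an assumption, not a documented constant:

```python
from dataclasses import dataclass, field
from typing import Any, Optional

# Local stand-in mirroring the RunResult fields above; not loom's class.
@dataclass
class FakeRunResult:
    run_id: str
    state: str
    output: Any = None
    artifacts: list = field(default_factory=list)
    events: list = field(default_factory=list)
    error: Optional[dict] = None
    duration_ms: int = 0

def unwrap(result: FakeRunResult) -> Any:
    """Return the output, or raise with the error payload on failure."""
    if result.state != "completed":  # assumed success-state name
        raise RuntimeError(f"run {result.run_id} failed: {result.error}")
    return result.output

ok = FakeRunResult(run_id="r1", state="completed", output="done", duration_ms=42)
print(unwrap(ok))  # done
```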
## tool()

`tool()` turns a Python function into a Loom tool declaration.

```python
from loom import Agent, Model, tool

@tool(description="Get the weather for a city", read_only=True)
def get_weather(city: str) -> str:
    return f"{city}: sunny"

agent = Agent(
    model=Model.openai("gpt-5.1"),
    tools=[get_weather],
)
```

Common parameters:
| Parameter | Description |
|---|---|
| `name` | Custom tool name |
| `description` | Tool description |
| `read_only` | Whether the tool is read-only |
| `destructive` | Whether the tool is destructive |
| `concurrency_safe` | Whether the tool is concurrency-safe |
| `requires_permission` | Whether the tool requires permission |
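Conceptually, `@tool` attaches a declaration to the function without changing how it is called. This minimal stand-in decorator (not loom's implementation) shows the pattern:

```python
import functools

# Illustrative stand-in for @tool: attach declaration metadata to a function.
def fake_tool(description="", read_only=False, **flags):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)  # behavior is unchanged
        wrapper.declaration = {
            "name": fn.__name__,
            "description": description,
            "read_only": read_only,
            **flags,  # e.g. destructive, concurrency_safe, requires_permission
        }
        return wrapper
    return decorator

@fake_tool(description="Get the weather for a city", read_only=True)
def get_weather(city: str) -> str:
    return f"{city}: sunny"

print(get_weather.declaration["read_only"])  # True
print(get_weather("Paris"))  # Paris: sunny
```

The runtime can then read the declaration to build a provider-facing tool schema while the plain function stays testable on its own.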
## Full example

```python
import asyncio

from loom import Agent, Capability, Generation, Model, RunContext, Runtime, SessionConfig, tool

@tool(description="Search repository docs", read_only=True)
def search_docs(query: str) -> str:
    return f"results for: {query}"

async def main():
    agent = Agent(
        model=Model.openai("gpt-5.1"),
        instructions="You are a repository assistant.",
        generation=Generation(temperature=0.2, max_output_tokens=512),
        tools=[search_docs],
        capabilities=[Capability.files(read_only=True)],
        runtime=Runtime.long_running(criteria=["answers cite repo evidence"]),
    )
    session = agent.session(SessionConfig(id="repo-assistant"))
    result = await session.run(
        "Summarize the API design.",
        context=RunContext(inputs={"repo": "loom-agent"}),
    )
    print(result.output)

asyncio.run(main())
```