
spawn

Spawn a new agent.
async def spawn(
    premise: str | None = None,
    scope: dict[str, Any] | None = None,
    *,
    system: str | None = None,
    mcp: str | None = None,
    model: ModelId = DEFAULT_AGENT_MODEL,
    listener: Callable[[], AgentListener] | DefaultAgentListener | None = DEFAULT_AGENT_LISTENER,
    max_tokens: int | MaxTokens = MaxTokens.default(),
    reasoning_effort: ReasoningEffort | None = None,
    cache_ttl: CacheTTL | None = None,
) -> Agent:
    ...
Parameters
premise
str or None
An optional premise for the agent. This will be attached to the system prompt of all invocations of this agent. This argument cannot be provided along with the system argument.
scope
dict[str, Any]
An optional default set of resources which the agent will have access to indefinitely. Resources in scope may be arbitrary Python functions, methods, objects, iterators, types or any other Python value. These resources may additionally be specified per invocation later on.
system
str or None
An optional system prompt for the agent. This will be the system prompt of all invocations of this agent. This argument cannot be provided along with the premise argument.
mcp
str or None
The string of a path to a .json file representing an MCP configuration. Any servers and/or tools of servers outlined in the config can be used during the execution of the agent.
model
ModelId (str)
The model which backs your agent. Any OpenRouter model slug is supported.
listener
Callable[[], AgentListener] | None
Optional listener constructor for logging the agent’s activity and chat history. If None, no listener will be used.
max_tokens
int | MaxTokens
When an integer is supplied, this is the maximum number of tokens for an invocation. For more fine-grained control, a MaxTokens object may be passed.
reasoning_effort
'minimal' | 'low' | 'medium' | 'high' | 'xhigh' | None
Constrains the thinking budget on reasoning models that support it (e.g. gpt 5.2, sonnet 4.5, gemini 3). Higher values use more reasoning tokens but may produce better results. If None, uses the model's default reasoning effort.
cache_ttl
'5m' | '1h' | None
Controls how long Anthropic prompt caching entries persist. Only used for Anthropic models; ignored for other providers. ‘5m’ is the default Anthropic cache duration. ‘1h’ costs more but caches longer. If None, uses the default ephemeral cache (5 minutes).
The system and premise arguments are mutually exclusive. premise will not be formatted when using a template.
MCP configuration fields
command
string (required)
The executable command to run the MCP server. This should be an absolute path or a command available in the system PATH. Example: "python"
args
array of string
An array of command-line arguments passed to the server executable. Arguments are passed in order. Example: ["server.py", "--verbose", "--port", "8080"]
env
object
An object containing environment variables to set when launching the server. All values must be strings. Example:
{
    "API_KEY": "secret-key",
    "PORT": "8080"
}
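Assembled into a .json file, a single server entry might look like the sketch below. The top-level mcpServers key and the server name "my-server" are assumptions based on common MCP configuration layouts, not details confirmed by this reference; only the command, args, and env fields are documented above.

```json
{
    "mcpServers": {
        "my-server": {
            "command": "python",
            "args": ["server.py", "--verbose", "--port", "8080"],
            "env": {
                "API_KEY": "secret-key",
                "PORT": "8080"
            }
        }
    }
}
```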
The default model is openai/gpt-4.1.
The default agent listener is the StandardListener, but it can be changed for all agents and agentic functions in the current scope with set_default_agent_listener. If a context-specific logger is used in the current scope, the logger will be added to the listener. If the listener is None, it will be set to:
  • the default agent listener, if it is not None, or
  • the StandardListener, if the default agent listener is None.
For more information on the StandardListener and the listener hierarchy, see here.
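The fallback rules above can be sketched as a small resolution function. This is an illustrative sketch of the documented behavior, not the library's implementation; StandardListener here is a stand-in class and resolve_listener is a hypothetical helper name.

```python
class StandardListener:
    """Stand-in for the library's StandardListener."""


def resolve_listener(listener, default_agent_listener):
    # An explicitly supplied listener always wins.
    if listener is not None:
        return listener
    # Otherwise fall back to the default agent listener, if one is set...
    if default_agent_listener is not None:
        return default_agent_listener
    # ...and to the StandardListener when no default exists either.
    return StandardListener
```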
Returns
Agent
An agent object.

Agent.__init__

Directly instantiate an agent.
class Agent:

    def __init__(
        self,
        premise: str | None = None,
        scope: dict[str, Any] | bytes | None = None,
        *,
        system: str | None = None,
        mcp: str | None = None,
        model: ModelId = DEFAULT_AGENT_MODEL,
        listener: Callable[[], AgentListener] | DefaultAgentListener | None = DEFAULT_AGENT_LISTENER,
        max_tokens: int | MaxTokens = MaxTokens.default(),
        reasoning_effort: ReasoningEffort | None = None,
        cache_ttl: CacheTTL | None = None,
    ):
        ...
Parameters
See here for a description of Agent.__init__ arguments.

Agent.call

Invokes an agent with arbitrary return type.
class Agent:

    @overload
    async def call(self, task: str, /, mcp: str | None = None, **scope: Any) -> str:
        ...

    @overload
    async def call(
        self, return_type: None, task: str, /, mcp: str | None = None, **scope: Any
    ) -> None:
        ...

    @overload
    async def call[T](
        self, return_type: type[T], task: str, /, mcp: str | None = None, **scope: Any
    ) -> T:
        ...
Parameters
return_type
type[T]
Provide a return type for the agent to have it return an instance of a specific type T.
task
str
The agent’s task (or objective) for this invocation of the agent.
mcp
str or None
The string of a path to a .json file representing an MCP configuration. Any servers and/or tools of servers outlined in the config can be used during the agent’s run.
scope
dict[str, Any]
Any additional resources added to the agent’s scope for this invocation.
  • Providing a return type is optional; if you do not provide a return_type, it will default to str.
  • You may specify a return type of None if you do not care about the result, only the side effects.
If the system argument is provided when spawning the agent, task will be provided as a raw user prompt.
Returns
Awaitable[T]
An awaitable result of type T which the agent returns.
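The three overloads resolve as follows: one positional argument returns a str, an explicit None return type discards the result, and a concrete type returns an instance of that type. The stub below sketches only this dispatch behavior under those documented semantics; StubAgent is a hypothetical stand-in, not the real Agent.

```python
import asyncio


class StubAgent:
    """Illustrative stand-in sketching Agent.call's overload dispatch."""

    async def call(self, return_type_or_task, task=None, /, mcp=None, **scope):
        # One positional argument: it is the task, and the result is a str.
        if task is None:
            return "raw model output"
        # Two positional arguments: (return_type, task).
        return_type = return_type_or_task
        if return_type is None:
            return None          # caller only cares about side effects
        return return_type()     # an instance of the requested type T


agent = StubAgent()
summary = asyncio.run(agent.call("summarize the report"))
count = asyncio.run(agent.call(int, "how many files are there?"))
```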

Agent.total_usage

Get the total usage across all invocations.
class Agent:

    def total_usage(self) -> ResponseUsage:
        ...

Agent.last_usage

Get the usage for the last invocation.
class Agent:

    def last_usage(self) -> ResponseUsage:
        ...
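The relationship between the two accessors can be sketched as follows: total_usage() accumulates across all invocations, while last_usage() reflects only the most recent one. UsageTracker and the ResponseUsage fields shown (input_tokens, output_tokens) are assumptions for illustration; the real ResponseUsage may carry different fields.

```python
from dataclasses import dataclass


@dataclass
class ResponseUsage:
    input_tokens: int = 0
    output_tokens: int = 0


class UsageTracker:
    """Illustrative stand-in for the Agent's usage bookkeeping."""

    def __init__(self):
        self._total = ResponseUsage()
        self._last = ResponseUsage()

    def record(self, usage: ResponseUsage) -> None:
        # Each invocation overwrites the last usage and adds to the total.
        self._last = usage
        self._total.input_tokens += usage.input_tokens
        self._total.output_tokens += usage.output_tokens

    def total_usage(self) -> ResponseUsage:
        return self._total

    def last_usage(self) -> ResponseUsage:
        return self._last
```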