
@agentic

Decorator for creating LLM-implemented functions. This decorator is used on functions that will be implemented by an LLM. The decorated function should have a descriptive docstring but an empty body (containing only ...).
def agentic[**I, O](
    *scope_defined: Any,
    scope: dict[str, Any] | None = None,
    system: str | None = None,
    premise: str | None = None,
    mcp: str | None = None,
    persist: bool = False,
    model: ModelId = DEFAULT_AGENT_MODEL,
    listener: Callable[[], AgentListener] | DefaultAgentListener | None = DEFAULT_AGENT_LISTENER,
    max_tokens: int | MaxTokens = MaxTokens.default(),
    reasoning_effort: ReasoningEffort | None = None,
    cache_ttl: CacheTTL | None = None,
) -> Callable[[Callable[I, O]], Callable[I, O]]:
    ...
Parameters
*scope_defined
Any
Runtime resources placed in scope, like those given via scope. The names of these resources are not specified explicitly; they are derived automatically from the resources themselves. scope and scope_defined can be used together to specify resources with both explicit and implicit names, but a name may not be repeated between the two. Example:
@agentic(my_func, db_connection, cache) # the same as @agentic(scope={"my_func": my_func, "db_connection": db_connection, "cache": cache})
async def my_function():
    ...
scope
dict[str, Any]
A dictionary of names mapped to runtime resources that are in scope and which may be used during the execution of the agentic function. Resources in scope may be arbitrary Python functions, methods, objects, iterators, types or any other Python value.
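The merge of scope_defined and scope can be pictured with the following sketch. Note that build_scope is not part of the library, and the exact way implicit names are derived is an assumption (here, a function's __name__); it only illustrates the naming and no-duplicates rule described above.

```python
def build_scope(*scope_defined, scope=None):
    """Illustrative only: derive implicit names, then merge explicit ones."""
    merged = {}
    for resource in scope_defined:
        # Assumption: the implicit name comes from the resource itself,
        # e.g. a function's __name__ (the real derivation may differ).
        name = getattr(resource, "__name__", type(resource).__name__)
        if name in merged:
            raise ValueError(f"duplicate resource name: {name!r}")
        merged[name] = resource
    for name, resource in (scope or {}).items():
        if name in merged:
            # Names may not be repeated between scope_defined and scope.
            raise ValueError(f"duplicate resource name: {name!r}")
        merged[name] = resource
    return merged

def my_func(): ...

resources = build_scope(my_func, scope={"cache": {}})
print(sorted(resources))  # ['cache', 'my_func']
```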
system
str | None
An optional system prompt for the agentic function. This will be the system prompt of all invocations of this agentic function. This argument cannot be provided along with the premise argument.
premise
str | None
An optional premise for the function. This will be attached to the system prompt of all invocations of this agentic function. This argument cannot be provided along with the system argument.
mcp
str | None
The string of a path to a .json file representing an MCP configuration. Any servers and/or tools of servers outlined in the config can be used during the execution of the agentic function.
persist
bool
Whether to persist the function state/history between calls.
model
ModelId
The model used to execute the agentic function. Any OpenRouter model slug is supported.
listener
Callable[[], AgentListener] | DefaultAgentListener | None
Optional listener constructor for logging the agentic function’s activity and chat history. If None, no listener will be used.
max_tokens
int | MaxTokens
When an integer is supplied, this is the maximum number of tokens for an invocation. For more fine-grained control, a MaxTokens object may be passed.
reasoning_effort
'minimal' | 'low' | 'medium' | 'high' | 'xhigh' | None
Constrains the thinking budget on reasoning models that support it (e.g. gpt 5.2, sonnet 4.5, gemini 3). Higher values use more reasoning tokens but may produce better results. If None, uses the model's default reasoning effort.
cache_ttl
'5m' | '1h' | None
Controls how long Anthropic prompt caching entries persist. Only used for Anthropic models; ignored for other providers.
An MCP configuration file describes how to launch each server. Each server entry supports the following fields:
command
string
required
The executable command to run the MCP server. This should be an absolute path or a command available in the system PATH. Example:
"python"
args
array of string
An array of command-line arguments passed to the server executable. Arguments are passed in order. Example:
["server.py", "--verbose", "--port", "8080"]
env
object
An object containing environment variables to set when launching the server. All values must be strings. Example:
{
    "API_KEY": "secret-key",
    "PORT": "8080"
}
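Putting these fields together, a configuration file might look like the following. This assumes the mcpServers top-level layout used by many MCP clients; the exact wrapper key expected by this library is not specified here.

```json
{
    "mcpServers": {
        "my-server": {
            "command": "python",
            "args": ["server.py", "--verbose", "--port", "8080"],
            "env": {
                "API_KEY": "secret-key",
                "PORT": "8080"
            }
        }
    }
}
```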
The default model is openai/gpt-4.1.
The default agent listener is the StandardListener; it can be changed for all agents and agentic functions in the current scope with set_default_agent_listener. If a context-specific logger is used in the current scope, the logger is added to the listener; if the listener is None, the listener is instead set to:
  • the default agent listener, if it is not None, or
  • the StandardListener, if the default agent listener is None
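The fallback order above can be restated in code. Note that resolve_listener is not a library function; it is only a sketch of the described behavior, with the listeners represented as plain values.

```python
def resolve_listener(listener, default_agent_listener, standard_listener):
    """Illustrative only: the listener fallback order described above."""
    if listener is not None:
        return listener
    if default_agent_listener is not None:
        return default_agent_listener
    return standard_listener

# With no explicit listener and no default, the StandardListener is used:
print(resolve_listener(None, None, "StandardListener"))  # StandardListener
```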
For more information on the StandardListener and the listener hierarchy, see the listener documentation.
Returns
Callable[[Callable[I, O]], Callable[I, O]]
A decorator that, applied to the stub function, returns a function with the same signature whose calls are implemented by the LLM.