@agentic
Decorator for creating LLM-implemented functions.
This decorator is used on functions that will be implemented by an LLM.
The decorated function should have a descriptive docstring but an empty
body (containing only ...).
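The contract can be sketched with a hypothetical stand-in for the decorator (the real @agentic hands the docstring and options to an LLM; the stub below only checks the shape described above, and the `_agentic_options` attribute is an illustrative assumption):

```python
# Hypothetical stand-in for the real @agentic decorator. The actual
# implementation sends the docstring and options to an LLM; this stub
# only enforces the documented contract: descriptive docstring, empty body.
def agentic(fn=None, **options):
    def wrap(f):
        assert f.__doc__, "agentic functions need a descriptive docstring"
        f._agentic_options = options  # assumed attribute, for illustration only
        return f
    return wrap(fn) if fn is not None else wrap

@agentic
def summarize(text: str) -> str:
    """Summarize the given text in one sentence."""
    ...
```

Calling the stubbed function simply evaluates the `...` body; in the real library, the call would be implemented by the LLM.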
scope: A list of runtime resources that are in scope. The names of the resources are not specified explicitly and are instead derived automatically from the resources themselves.
scope and scope_defined can be used together to specify resources with both explicit and implicit names. The names can’t be repeated between the two.
scope_defined: A dictionary of names mapped to runtime resources that are in scope and which may be used during the execution of the agentic function.
Resources in scope may be arbitrary Python functions, methods, objects, iterators, types, or any other Python value.
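The naming rule might be sketched as follows (deriving names from `__name__` is an assumption for illustration; the library's actual derivation mechanism is not specified here):

```python
# Hypothetical sketch: scope names are derived automatically, while
# scope_defined names are given explicitly. The two name sets must not overlap.
def fetch_weather(city: str) -> str:
    """Toy resource: return a canned weather report."""
    return f"Sunny in {city}"

scope = [fetch_weather]                    # name derived, e.g. "fetch_weather"
scope_defined = {"lookup": fetch_weather}  # name given explicitly

derived_names = {resource.__name__ for resource in scope}
assert derived_names.isdisjoint(scope_defined), "names can't repeat between the two"
```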
system: An optional system prompt for the agentic function. This will be the system prompt of all invocations of this agentic function. This argument cannot be provided along with the premise argument.

premise: An optional premise for the function.
This will be attached to the system prompt of all invocations of this agentic function. This argument cannot be provided along with the system argument.

The string path to a .json file representing an MCP configuration.
Any servers outlined in the config, and any of their tools, can be used during the execution of the agentic function.
Whether to persist the function state/history between calls.
The model used to execute the agentic function.
Any OpenRouter model slug is supported.
Optional listener constructor for logging the agentic function’s activity and chat history.
If None, no listener will be used.
When an integer is supplied, this is the maximum number of tokens for an invocation.
For more fine-grained control, a MaxTokens object may be passed.

Constrains the thinking budget on reasoning models that support it (gpt 5.2, sonnet 4.5, gemini 3, etc.).
Higher values use more reasoning tokens but may produce better results.
If None, uses the model’s default reasoning effort.
Controls how long Anthropic prompt caching entries persist.
Only used for Anthropic models; ignored for other providers.
MCP Configuration Fields
command: The executable command to run the MCP server. This should be an absolute path or a command available in the system PATH.

args: An array of command-line arguments passed to the server executable. Arguments are passed in order.
env: An object containing environment variables to set when launching the server. All values must be strings.
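A minimal .json configuration combining these fields might look like the following, assuming the common mcpServers top-level layout used by MCP clients (the server name, command, and arguments are illustrative placeholders):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
      "env": {
        "LOG_LEVEL": "info"
      }
    }
  }
}
```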
The default model is openai/gpt-4.1.

The default agent listener is the StandardListener, but it can be changed for all agents and agentic functions in the current scope with set_default_agent_listener.
If a context-specific logger is used in the current scope, the logger will be added to the listener. If the listener is None, then the listener will be set to:
- the default agent listener, if it is not None, or
- the StandardListener, if the default agent listener is None.
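The fallback order above can be sketched as plain Python (the function and sentinel names here are illustrative; the actual resolution code is internal to the library):

```python
# Hypothetical sketch of the listener-resolution rule described above.
STANDARD_LISTENER = object()  # stands in for the StandardListener

def resolve_listener(listener, default_agent_listener):
    """Pick the effective listener for an agentic function."""
    if listener is not None:
        return listener                 # an explicit listener always wins
    if default_agent_listener is not None:
        return default_agent_listener   # next, the scope-wide default
    return STANDARD_LISTENER            # finally, the StandardListener
```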
For more on the StandardListener and the listener hierarchy, see here.

Returns: The decorated function that will be implemented by the LLM.