This covers advanced features. See Guides for task-oriented tutorials and production guidance.

System Prompt Templating

When overriding system prompts, we export a number of templatable “explainer” variables that you can use to include information about the environment and the agent’s capabilities. These are different per model and per AI primitive (magic function or agent).
from agentica.magic import template

SYSTEM = template("""
You are a special agent.
You have access to a REPL.

{{OBJECTIVES}}
""")

agent = await spawn(system=SYSTEM)
result = await agent.call(
  float,
  template("Give me the square root of 16. Return {{RETURN_TYPE}}."),
)
print(result)

System Prompt Variables

The standard system prompt for magic functions using OpenAI models is as follows.
{{STARTER}}

{{OBJECTIVES}}

{{WORKFLOW}}

{{INTERACTIONS}}

{{OUTPUT}}

{{NOTES}}
  • Interactions — Defines the two types of messages the agent receives during execution
  • Notes — Key behavioral guidelines for code inspection, output methods, and error handling
  • Objectives — Core constraints defining available resources, code execution rules, and output requirements
  • Output — Formatting rules for agent responses and code blocks
  • Starter — Initial context defining the agent’s role and the function being executed
  • Workflow — Step-by-step process for analyzing inputs, executing code, and producing results

Additional Prompt Variables

  • Return Type — Formatted return type of the magic function.
  • Stubs — Formatted stubs for all defined resources passed to the magic function scope.

System Prompt Variables

The standard system prompt for magic functions using Anthropic models is as follows.
{{STARTER}}

{{FUNCTION_SPEC}}

{{EXECUTION}}

{{COMMUNICATION}}

{{DEV}}

{{FINAL}}
  • Communication — Rules for combining text and code during implementation and error handling
  • Dev — Example showing incremental development with step-by-step code testing
  • Execution — Three-step process: comprehensive analysis, incremental development with testing, and final execution
  • Final — Summary emphasizing analysis-first approach and incremental execution with waiting for results
  • Function Spec — Complete specification including function name, arguments, description, return type, and input values
  • Starter — Initial context defining the agent’s role and listing available modules and tools

Additional Prompt Variables

  • Return Type — Formatted return type of the magic function.
  • Stubs — Formatted stubs for all defined resources passed to the magic function scope.

System Prompt Variables

The standard system prompt for agents using OpenAI models is as follows.
{{STARTER}}

{{OBJECTIVES}}

{{WORKFLOW}}

{{INTERACTIONS}}

{{OUTPUT}}

{{NOTES}}
  • Interactions — Defines the two types of messages the agent receives during execution
  • Notes — Key behavioral guidelines for handling tasks, return types, and error conditions
  • Objectives — Core constraints defining available resources, task fulfillment rules, and output requirements
  • Output — Formatting rules for agent responses and handling string return types
  • Starter — Initial context defining the agent’s role in the REPL session and message structure
  • Workflow — Step-by-step process for analyzing tasks, executing code, and producing results

Task Prompt Variables

  • Return Type — Formatted expected return type for the agent’s task
  • Stubs — Formatted stubs for all defined resources available to the agent
  • User Prompt — Formatted task description, expected return type, and additional tools

System Prompt Variables

The standard system prompt for agents using Anthropic models is as follows.
{{STARTER}}

{{RESOURCES}}

{{INPUTS}}

{{PROCESS}}

{{DEV}}

{{FINAL}}
  • Dev — Error handling and example showing incremental development pattern
  • Final — Summary emphasizing analysis-first approach and incremental execution with waiting for results
  • Inputs — Describes the two input formats agents receive: execution output and instruction tasks
  • Process — Three-step process: comprehensive analysis, incremental development with testing, and final output
  • Resources — Lists available pre-imported modules, pre-defined tools, and special functions
  • Starter — Initial context defining the agent’s role and operating premise

Task Prompt Variables

  • Return Type — Formatted expected return type for the agent’s task
  • Stubs — Formatted stubs for all defined resources available to the agent
  • User Prompt — Formatted task description, expected return type, and additional tools

Custom Exceptions

You can define your own exception classes and have the agent raise them when specific error conditions occur. The agent can raise these exceptions from within its execution environment, and they are automatically bubbled back up to your code.

Defining Custom Exceptions

Custom exceptions are useful for domain-specific error handling. To use them:
  1. Define your custom exception classes
  2. Pass them into the @magic() decorator
  3. Document when each exception should be raised so the agent knows when to use them
from dataclasses import dataclass, field
from enum import Enum
from time import time

from agentica import magic

class TaskCategory(Enum):
    BUSINESS = "business"
    PERSONAL = "personal"
    FREELANCE = "freelance"

@dataclass
class Task:
    user: str
    category: TaskCategory
    description: str
    time_created: float = field(default_factory=time)

class TaskTooComplicatedError(Exception):
    """Raised when a task is too complex to complete automatically."""
    pass

class InsufficientPermissionsError(Exception):
    """Raised when the user lacks permissions for the requested task."""
    pass

@magic(TaskTooComplicatedError, InsufficientPermissionsError)
def perform_task(task: Task) -> str:
    """
    Perform the task and return the result.

    Raises:
        TaskTooComplicatedError: If the task requires human intervention
        InsufficientPermissionsError: If the user lacks necessary permissions
        ValueError: If the task description is empty or invalid

    Returns:
        A description of the completed task.
    """
    ...

# Usage with error handling
try:
    result = perform_task(task)
    print(f"Task completed: {result}")
except TaskTooComplicatedError as e:
    print(f"Manual intervention required: {e}")
    # Escalate to human
    assign_to_human(task)
except InsufficientPermissionsError as e:
    print(f"Permission denied: {e}")
    # Request additional permissions
    request_permissions(task.user, task.category)
except ValueError as e:
    print(f"Invalid task: {e}")
The agent can see your docstrings! Be specific about when each exception should be raised; the agent relies on this documentation to decide which exception to raise.
For comprehensive error handling patterns and best practices, see the Error Handling Guide.

Agent Listeners

This section is Python only for now. To see how TypeScript implements listeners, see the TypeScript references.
So that you can see what your agent is up to, even when not streaming, agents are wired up to an AgentListener by default. The default listener will print to your console:
  • your agent’s local ID and which parameters the agent was invoked with; and
  • the result your agent returned when it finished.
In addition, it will create a logs/ folder in your working directory and populate it with agent-{id}.log files whose names match the printed IDs. These files contain the full logs of the interactions your agent engaged in. You may disable the default listener:
from agentica.magic.logging import set_default_agent_listener

set_default_agent_listener(None)
or pass in listeners explicitly:
from agentica.magic.logging import PrintOnlyListener, FileOnlyListener

agent = await spawn(
    premise="Agent's task",
    listener=PrintOnlyListener,
)
# or
@magic(listener=FileOnlyListener)
def my_func(a: int) -> str:
    ...
Listeners are attached per agent or magic function, so all invocations of either will trigger its listener.
You may implement your own listeners for full oversight.
See the references for more detail: Python | TypeScript.

MCP

We provide backwards compatibility with MCP by turning tool schemas back into regular functions that fit Agentica’s execution model. We call this process “unMCP”.
unMCP support is currently only available in Python. See the Python references for more details on using MCP servers.
Some tools are still only exposed via MCP, so Agentica remains compatible with things like VSCode, Cursor, and Claude Code MCP configurations! The standard JSON format is outlined below. Both remote and local MCP servers are connected to from your local machine, meaning all sensitive information (e.g. API keys) stays local.
command (string, required)
The executable command to run the MCP server. This should be an absolute path or a command available in the system PATH.
Example: "python"

args (array of string)
An array of command-line arguments passed to the server executable. Arguments are passed in order.
Example: ["server.py", "--verbose", "--port", "8080"]

env (object)
An object containing environment variables to set when launching the server. All values must be strings.
Example:
{
    "API_KEY": "secret-key",
    "PORT": "8080"
}
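Putting the three fields together, a minimal config entry might look like the following. The server name and values are illustrative; the mcpServers wrapper follows the convention used by VSCode, Cursor, and Claude Code configs.

```json
{
    "mcpServers": {
        "my-server": {
            "command": "python",
            "args": ["server.py", "--verbose", "--port", "8080"],
            "env": {
                "API_KEY": "secret-key",
                "PORT": "8080"
            }
        }
    }
}
```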

Caching

Inference-level caching is managed internally and is specific to the model provider.
Previous AI invocations may be cached client-side. In Python, just use the @functools.cache decorator!
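As a sketch of what client-side caching buys you, the snippet below uses a plain function as a stand-in for a magic function; @functools.cache works the same way on either, provided the arguments are hashable.

```python
import functools

call_count = 0

@functools.cache
def square(x: int) -> int:
    # Stand-in for an AI invocation; in real code this would be a @magic function.
    global call_count
    call_count += 1
    return x * x

print(square(4))  # computed: 16
print(square(4))  # served from the cache; the underlying call runs only once
```

Note that functools.cache requires hashable arguments, so this works for simple argument types but not, say, for unhashable dataclasses or lists.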

Rate limiting

When a provider imposes rate limits, exponential backoff with sensible defaults is employed in a blocking fashion. On every retry, the delay is multiplied by a factor of exponential_base * (1 + jitter * random_float), where 0.0 <= random_float < 1.0, until max_retries is reached.
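The scheme above can be sketched as follows. The parameter names mirror the prose; the helper itself, and the exception it catches, are illustrative stand-ins for Agentica's internal retry logic, not its actual API.

```python
import random
import time

def with_backoff(fn, max_retries=5, initial_delay=1.0,
                 exponential_base=2.0, jitter=1.0):
    """Call fn(), blocking and retrying with exponential backoff on failure."""
    delay = initial_delay
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for a provider rate-limit error
            if attempt == max_retries - 1:
                raise  # max_retries reached; give up
            # delay grows by exponential_base * (1 + jitter * random_float)
            delay *= exponential_base * (1 + jitter * random.random())
            time.sleep(delay)
```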