When using OpenAI models (openai:gpt-4.1, openai:gpt-5, etc.) with Agentica, apply these proven strategies:
Writing Agentic Function Docstrings
# Bad - Vague docstring
@agentic()
async def analyze(text: str) -> dict:
    """Analyze text"""
    ...

# Good - Clear, specific docstring
@agentic()
async def analyze(text: str) -> dict[str, Any]:
    """
    Analyze the sentiment, key entities, and main topics in the text.
    Return a dict with 'sentiment' (positive/negative/neutral),
    'entities' (list of names/places/orgs), and 'topics' (list of main themes).
    """
    ...
Crafting Agent Premises
# Bad - Generic premise
agent = await spawn(premise="You are helpful.")

# Good - Specific premise with clear role and constraints
agent = await spawn(
    premise="""
    You are a data analyst specializing in customer feedback.
    Always provide numerical confidence scores (0-1) with your conclusions.
    When uncertain, explicitly state your assumptions.
    """
)
Request Step-by-Step Reasoning
For complex tasks, explicitly ask for reasoning in your docstrings or premises:
@agentic()
async def solve_problem(problem: str) -> dict[str, Any]:
    """
    Solve the math problem step by step.
    First, identify what's being asked.
    Then, break down the solution into steps.
    Finally, provide the answer with your reasoning.
    """
    ...
Leverage Scope Effectively
Provide focused, relevant tools rather than entire SDKs:
from slack_sdk import WebClient

slack = WebClient(token=TOKEN)

# Good - Extract only what you need
@agentic(slack.users_list, slack.chat_postMessage)
async def notify_team(message: str) -> None:
    """Send message to all active team members."""
    ...
Type Hints are Instructions
OpenAI models excel at following type hints; use them to guide output:
from typing import Literal

@agentic()
async def classify(text: str) -> Literal['urgent', 'normal', 'low']:
    """Classify the priority of this support ticket."""
    ...
When using Anthropic models (anthropic:claude-sonnet-4.5, anthropic:claude-opus-4.1, etc.) with Agentica, leverage these Claude-specific strengths:
XML Tags in Docstrings
Claude excels at parsing XML structure; use it in complex agentic functions:
@agentic()
async def extract_and_validate(document: str, schema: dict) -> dict:
    """
    Extract structured data from the document and validate against schema.

    <instructions>
    1. Parse the document and extract fields matching the schema
    2. Validate each field against schema constraints
    3. Return extracted data with validation status
    </instructions>

    <output_format>
    Return dict with 'data' (extracted fields) and 'valid' (bool)
    </output_format>
    """
    ...
Rich Agent Premises
Claude responds well to detailed role definitions:
agent = await spawn(
    premise="""
    You are a senior software architect with expertise in distributed systems.

    <role>
    - Analyze system designs for scalability issues
    - Suggest concrete improvements with trade-offs
    - Consider cost, latency, and reliability
    </role>

    <style>
    - Be direct and technical
    - Provide specific code/config examples when helpful
    - Acknowledge uncertainties explicitly
    </style>
    """,
    model="anthropic:claude-sonnet-4.5"
)
Chain of Thought Prompts
Claude’s reasoning improves dramatically with explicit thinking requests:
@agentic()
async def debug_issue(code: str, error: str) -> dict[str, str]:
    """
    Debug the code issue by thinking through it step by step.

    Before providing a solution:
    1. Analyze what the code is trying to do
    2. Identify why the error occurs
    3. Consider multiple potential fixes
    4. Choose the best fix with explanation

    Return dict with 'analysis' and 'fix' keys.
    """
    ...
Long Context Utilization
Claude handles large contexts exceptionally well; structure them clearly:
@agentic()
async def analyze_codebase(files: dict[str, str]) -> dict:
    """
    Analyze the entire codebase for security issues.

    <critical_instructions>
    Focus on: SQL injection, XSS, auth bypasses, secrets in code
    </critical_instructions>

    Process each file systematically. Look for patterns across files.

    <critical_instructions>
    Return findings with file paths and severity (critical/high/medium/low)
    </critical_instructions>
    """
    ...
Prompting Style Differences
OpenAI Models (openai:gpt-4.1, openai:gpt-5):
  • Concise instructions. Works well with shorter, direct docstrings and premises.
  • Delimiters. Use ### or """ to separate sections in complex prompts.
  • Step-by-step explicit. Benefits from phrases like “First… Then… Finally…”
  • Function-oriented. Natural fit for task decomposition and tool use.
Anthropic Models (anthropic:claude-sonnet-4.5, anthropic:claude-opus-4.1):
  • XML tags preferred. Use <instructions>, <context>, <examples> for structure.
  • Detailed premises. Responds well to longer, more elaborate role definitions.
  • Chain of thought. Explicitly request thinking with “Before answering, think step by step…”
  • Long context friendly. Can handle very large prompts and scope effectively.
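The contrast can be made concrete by writing the same instructions in each style. The strings below are purely illustrative plain text, not output of any Agentica API:

```python
# OpenAI-leaning: concise, delimiter-separated sections, explicit steps.
openai_style = """
Classify the support ticket's priority.

### Steps
First, scan for outage or data-loss keywords.
Then, check how many users are affected.
Finally, return one of: urgent, normal, low.
"""

# Anthropic-leaning: XML tags for structure, richer framing.
anthropic_style = """
Classify the support ticket's priority.

<instructions>
Scan for outage or data-loss keywords and check how many
users are affected before deciding.
</instructions>

<output_format>
Return one of: urgent, normal, low.
</output_format>
"""
```

Either version works on either model family; these styles simply play to each family's documented strengths.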
Universal Agentica Best Practices
Regardless of model choice:
1. Write Clear Docstrings/Descriptions. Be specific about what, how, and what format to return.
# Agentic functions: detailed docstrings
@agentic()
async def process(data: str) -> Result:
    """What, how, and what format to return"""
    ...

# Agents: specific premises
agent = await spawn(premise="Clear role + constraints")
2. Use Strong Type Hints. Types guide agents and ensure type-safe returns.
from typing import Literal
from pydantic import BaseModel

class Analysis(BaseModel):
    sentiment: Literal['positive', 'negative', 'neutral']
    confidence: float

@agentic()
async def analyze(text: str) -> Analysis:
    """Return sentiment analysis"""
    ...
3. Provide Focused Scope. Only include tools/data agents need for the specific task, not entire objects or SDKs.
4. Request Reasoning for Complex Tasks. Add “step by step” or “think through” to prompts for better accuracy on hard problems.
5. Test with Real Examples. Validate agentic functions and agents with actual use cases before production.
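The “focused scope” point can be sketched with a hypothetical CrmClient (not part of any real SDK): pass only the bound methods the task needs, never the whole client object.

```python
class CrmClient:
    """Hypothetical CRM client with broad (and partly dangerous) capabilities."""

    def list_contacts(self) -> list[str]:
        return ["alice@example.com", "bob@example.com"]

    def send_email(self, to: str, body: str) -> None:
        print(f"sent to {to}")

    def delete_account(self, account_id: str) -> None:
        raise RuntimeError("destructive operation!")


crm = CrmClient()

# Focused scope: only the two methods a notification task needs.
# The destructive delete_account is deliberately left out.
notify_tools = [crm.list_contacts, crm.send_email]
```

Listing these directly in the decorator, as in the Slack example above, means the agent never even sees delete_account.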

Choosing Models in Agentica

Consider OpenAI when:
  • Your prompts are concise and task-focused
  • You’re using structured delimiters for prompt sections
  • Example: model="openai:gpt-5"
Consider Anthropic when:
  • Your prompts benefit from XML structure
  • You’re working with very large context in scope
  • You want detailed, persona-driven agents
  • Example: model="anthropic:claude-sonnet-4.5"
Prompting Approach:
Prompt Style                   Better Match
Short, focused instructions    OpenAI models
XML-structured prompts         Anthropic models
Large context (50K+ tokens)    Anthropic models
Detailed role definitions      Anthropic models
Task decomposition patterns    OpenAI models

Be Specific

Vague prompts lead to inconsistent results. Agents need clear instructions about what you want, how you want it, and what format to return. Think of your docstring as a specification, not just a description.
Bad approach: Generic verbs without details.
@agentic()
async def summarize(text: str) -> str:
    """Summarize the text"""
    ...
Good approach: Specify length, focus, and style.
@agentic()
async def summarize(text: str) -> str:
    """
    Create a 2-3 sentence summary of the text.
    Focus on the main argument and key supporting points.
    Use objective language without opinion.
    """
    ...
When behavior or style matters beyond just the output structure, specify it clearly. Types tell agents what to return, but your prompt tells them how to get there.
@agentic()
async def extract_sql(user_request: str, schema: dict) -> str:
    """
    Generate a SQL query from the user's natural language request.

    Requirements:
    - Use only SELECT statements (no INSERT, UPDATE, DELETE)
    - Always include LIMIT clauses to prevent large result sets
    - Use table aliases for readability
    - When joining tables, prefer INNER JOIN over implicit joins
    - Add comments explaining complex WHERE clauses

    Return valid PostgreSQL syntax.
    """
    ...

Include Examples

When you need specific formatting or a particular style, showing examples is more effective than describing the desired output in words. Models learn patterns quickly from concrete examples. Use examples for tasks where the output has specific structure, like generating changelog entries:
@agentic()
async def write_changelog_entry(commit_messages: list[str]) -> str:
    """
    Write a changelog entry from commit messages.

    Example input: ["fix: resolve login timeout", "feat: add dark mode"]
    Example output:
    ### Features
    - Added dark mode support

    ### Bug Fixes
    - Fixed login timeout issue

    Follow this format exactly.
    """
    ...
Multiple examples help establish patterns, especially for formatting that varies by input:
@agentic()
async def format_currency(amount: float, currency: str) -> str:
    """
    Format a currency amount for display.

    Examples:
    - format_currency(1234.50, "USD") → "$1,234.50"
    - format_currency(999.99, "EUR") → "€999.99"
    - format_currency(50, "GBP") → "£50.00"

    Always include the currency symbol and exactly 2 decimal places.
    """
    ...

Define Constraints

Agents need explicit rules for handling edge cases and ambiguous inputs. Without clear constraints, you’ll get inconsistent behavior when inputs don’t match the happy path. When your task involves categorization or decision-making, spell out the criteria:
from typing import Literal

@agentic()
async def categorize_severity(error_message: str, stack_trace: str) -> Literal['critical', 'high', 'medium', 'low']:
    """
    Categorize error severity.

    Constraints:
    - Return 'critical' if: data loss, security breach, system crash
    - Return 'high' if: feature unusable, affects multiple users
    - Return 'medium' if: degraded performance, workaround available
    - Return 'low' if: cosmetic issue, minimal impact

    Edge cases:
    - If stack_trace is empty, base decision on error_message alone
    - Database errors are at minimum 'high' severity
    - Authentication errors are at minimum 'high' severity
    """
    ...
For parsing or extraction tasks, define what constitutes valid input and what to return when inputs are missing or malformed:
@agentic()
async def parse_date_range(text: str) -> tuple[str, str] | None:
    """
    Extract start and end dates from natural language.

    Return format: (start_date, end_date) as YYYY-MM-DD strings

    Constraints:
    - Start date must be before or equal to end date
    - Return None if no date range found
    - Return None if only one date found (need both)

    Edge cases:
    - "last week" → Monday to Sunday of previous week
    - "Q1 2024" → 2024-01-01 to 2024-03-31
    - Relative dates use today's date as reference
    """
    ...
For multi-step validation workflows, use an agent that progressively checks and adapts based on what it discovers. The agent remembers previous findings when deciding next steps:
from agentica import spawn
from typing import Literal

agent = await spawn(premise="You are a security validator for user queries")

# Step 1: Agent analyzes input for safety
safety = await agent.call(
    Literal['safe', 'sql_injection', 'invalid_chars'],
    "Classify this user input for security issues",
    user_input=untrusted_input
)

# Step 2: Based on what agent found, take different actions
if safety == 'safe':
    # Agent remembers the input it just analyzed
    normalized = await agent.call(
        str,
        "Normalize the input you just validated (lowercase, trim, remove extra spaces)"
    )
    return normalized
elif safety == 'sql_injection':
    # Agent remembers what SQL patterns it detected
    details = await agent.call(
        str,
        "Explain which SQL patterns you detected and why they're dangerous"
    )
    log_security_violation(details)
    raise SecurityError(details)
else:
    # Agent remembers the invalid characters it found
    suggestion = await agent.call(
        str,
        "Suggest what the user should change about their input"
    )
    return f"Invalid input. {suggestion}"

Agents

For agents, Agentica formats an initial system role message using the provided premise on instantiation. Any subsequent invocation or call of an agent formats a user role message using the provided task. The premise should provide additional context on the agent’s overall purpose or goal over its lifetime, while the task should describe its immediate goal, linked to the specified return type.
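Conceptually, the premise/task split maps onto chat roles roughly like this. This is a simplified sketch; the real formatting is internal to Agentica and includes more than shown here:

```python
premise = "You are a data analyst specializing in customer feedback."
task = "Summarize the main complaints. Return a list of strings."

# On spawn(premise=...): one system role message is created.
conversation = [{"role": "system", "content": premise}]

# On each invocation/call: the task becomes a new user role message,
# appended to the same conversation so the agent keeps its memory.
conversation.append({"role": "user", "content": task})
```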

Custom prompts

Both the Python and TypeScript SDKs offer the ability to provide an entirely custom system role message for agents via the system argument in spawn. If system is provided instead of premise, then system will be the exact system role message on instantiation and task will be the exact user role message on invocation of the agent, without any formatting.
agent = await spawn(
    system="""
    You are a data extraction API. You receive documents and schemas.
    You MUST return valid JSON matching the schema. Never include explanations,
    preambles, or markdown formatting. Just the JSON object.
    """, # This will be the prompt provided to the model under the role system
    model="anthropic:claude-sonnet-4.5"
)

Templating variables

To include information about the environment and the agent’s capabilities in your custom system prompts and task, a number of templatable “explainer” variables are exported in Agentica. These are formatted using the template function, and the variables differ slightly per model.
For more information on template see here for Python and here for TypeScript. For precise values of these variables see here.
Note that template is not supported for an agent premise for both the Python and TypeScript SDKs.
from agentica import template

SYSTEM = template("""
You are a special agent.
You have access to a REPL.

{{OBJECTIVES}} 
""") # The OBJECTIVES variable outlines the objectives and constraints of the agent w.r.t the REPL

agent = await spawn(system=SYSTEM)
result = await agent.call(
  float,
  template("Give me the square root of 16. Return {{RETURN_TYPE}}."), # This variable will correspond to `float`
)
print(result)

Agentic functions

Templating variables

To include information about the environment and the agent’s capabilities when writing an agentic function, a number of templatable “explainer” variables are exported in Agentica. These are formatted using the template function, and the variables differ slightly per model.
For more information on template see here for Python and here for TypeScript. For precise values of these variables see here.
Note that template is not supported for doc-strings of agentic functions for the Python SDK.
import { agentic } from '@symbolica/agentica';
import { template } from '@symbolica/agentica/template';

async function mathematician(problem: string): Promise<number> {
  return await agentic(
    template`
You are a mathematician.
You have access to a REPL.

{{OBJECTIVES}}
    ` // The OBJECTIVES variable outlines the objectives and constraints of the agent w.r.t the REPL
  );
}

const result = await mathematician(`Give me the square root of 16.`);
console.log(result);