Strong type hints do two things: they guide agents toward the correct output structure, and they give you type-safe returns in your code. The more specific your types, the more constrained an agent’s output will be.

Use literal types to restrict outputs to specific values:
```python
from typing import Literal
from agentica import agentic

@agentic()
async def classify(text: str) -> Literal['positive', 'negative', 'neutral']:
    """Classify sentiment"""
    ...

# The agent can only return one of these three exact strings
result = await classify("Great product!")  # Type is Literal['positive', 'negative', 'neutral']
```
Use structured types for complex outputs. The agent will match your type structure exactly:
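Here is a minimal sketch of that idea, assuming agentica can target a dataclass return annotation the same way it targets built-in types; the `ReviewAnalysis` type and `analyze_review` function are illustrative names, not part of the library:

```python
from dataclasses import dataclass
from agentica import agentic

@dataclass
class ReviewAnalysis:
    sentiment: str         # e.g. "positive", "negative", "neutral"
    rating: int            # 1-5 score inferred from the text
    key_points: list[str]  # short, bullet-style takeaways

@agentic()
async def analyze_review(text: str) -> ReviewAnalysis:
    """Analyze a product review and extract structured findings"""
    ...

# The agent's output is shaped to match every field of ReviewAnalysis
analysis = await analyze_review("Great fit, but shipping took two weeks.")
```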
Never hardcode API keys or secrets. Use environment variables. This keeps credentials out of your codebase and allows different values per environment.
```python
import os

# Good - use environment variables
api_key = os.environ["API_KEY"]
database_url = os.environ.get("DATABASE_URL")

# Bad - hardcoded secrets
api_key = "sk-proj-abc123..."  # Never commit this
```
Never pass raw API keys to agents. Instead, pass pre-authenticated SDK clients or specific methods. The agent uses the functionality without ever seeing the credentials:
```python
import os
from agentica import spawn
from github import Github

# Good - pass authenticated client methods
gh = Github(os.environ["GITHUB_TOKEN"])
agent = await spawn(premise="You are a GitHub analyst")
result = await agent.call(
    Report,  # structured output type, assumed to be defined elsewhere
    "Analyze the repository's recent activity",
    get_repo=gh.get_repo,
    search_issues=gh.search_issues
)
# Agent can use GitHub API without accessing the token

# Bad - passing raw credentials
result = await agent.call(
    Report,
    "Analyze repository",
    github_token=os.environ["GITHUB_TOKEN"]  # Never do this
)
```
Validate user input before passing it to agentic functions. This prevents injection attacks and ensures your agentic functions receive clean data.
```python
from agentica import agentic

@agentic()
async def query_database(user_input: str, schema: dict) -> list[dict]:
    """
    Generate and execute a database query based on user input.
    Only generate SELECT queries.
    Use the schema to validate table/column names.
    """
    ...

async def safe_query(user_input: str) -> list[dict]:
    # Validate input length
    if len(user_input) > 500:
        raise ValueError("Input too long")

    # Check for suspicious patterns
    dangerous_keywords = ['drop', 'delete', 'truncate', 'insert', 'update']
    if any(keyword in user_input.lower() for keyword in dangerous_keywords):
        raise ValueError("Invalid query keywords")

    # Now safe to pass to an agent
    return await query_database(user_input, schema)
```
An agent that can open arbitrary paths can easily escape its intended sandbox (for example, by traversing ../) and read, modify, or delete files across your system. Avoid passing Path objects or unrestricted file paths directly to agents or agentic functions. Instead, pre-open only the specific files you want the agent to access and pass those file handles in scope.
```python
from typing import TextIO
from agentica import agentic

@agentic()
async def summarize_report(report_file: TextIO) -> str:
    """
    Read the already-open report_file and summarize its contents.
    """
    ...

with open("/var/reports/weekly.csv", "r", encoding="utf-8") as f:
    # The agent only sees this specific handle, not your whole filesystem
    summary = await summarize_report(f)
```
Log agentic operations with structured data. Include the operation name, input size, model used, and timing. This helps debug issues and identify patterns.
Never log sensitive data. User inputs, API keys, or PII should not appear in logs. See Error Handling › Sensitive Data Handling for examples of safe logging practices.
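A minimal sketch of what structured logging around an agentic call might look like, using the standard logging module and a hypothetical wrapper around the classify function from earlier (the wrapper name and log fields are illustrative, not an agentica API):

```python
import logging
import time

logger = logging.getLogger("agentic")

async def classify_logged(text: str) -> str:
    start = time.perf_counter()
    result = await classify(text)  # agentic function from the type-hints example
    logger.info(
        "agentic operation completed",
        extra={
            "operation": "classify",
            "input_chars": len(text),   # log the size, never the raw input
            "model": "openai:gpt-4.1",  # whichever model the function is configured with
            "duration_ms": round((time.perf_counter() - start) * 1000),
        },
    )
    return result
```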
Cache agent responses when the same inputs produce the same outputs. This reduces latency and costs for repeated operations.

Use caching for:
- Reference data that changes infrequently (product descriptions, documentation)
- Expensive operations called repeatedly with the same inputs
- Read-heavy workflows where consistency is acceptable
```python
from functools import lru_cache
from agentica import agentic

# Decorate the agentic function directly
@lru_cache(maxsize=1000)
@agentic()
async def categorize_product(description: str) -> str:
    """Categorize product into a department"""
    ...

# Same description returns cached result
category1 = await categorize_product("Red cotton t-shirt")  # Calls agent
category2 = await categorize_product("Red cotton t-shirt")  # Returns cached
```
Advanced: Best-of-N caching with retries. Like JIT compilation that eventually compiles hot code paths, you can combine caching with retry strategies to create a “best-of-N” pattern: retry failed operations until you get a high-quality result, then cache that successful response. Future calls skip the retry logic entirely and use the cached “compiled” result. This is particularly useful for expensive operations where you want to pay the retry cost once, then reuse the validated output.
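A rough sketch of that pattern, assuming a quality check you define yourself (is_acceptable below is hypothetical) and reusing the categorize_product function from the caching example:

```python
_validated_cache: dict[str, str] = {}

async def categorize_best_of_n(description: str, max_attempts: int = 3) -> str:
    # Cache hit: reuse the previously validated ("compiled") result, skipping retries
    if description in _validated_cache:
        return _validated_cache[description]

    for _ in range(max_attempts):
        result = await categorize_product(description)
        if is_acceptable(result):  # hypothetical quality check you define
            _validated_cache[description] = result  # pay the retry cost only once
            return result

    raise RuntimeError(f"No acceptable result after {max_attempts} attempts")
```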
Process multiple items in parallel when they’re independent. This is faster than sequential processing.
```python
import asyncio
from agentica import agentic

@agentic()
async def analyze(text: str) -> dict:
    """Analyze the text"""
    ...

# Process all texts in parallel
texts = ["text 1", "text 2", "text 3"]
results = await asyncio.gather(*[analyze(text) for text in texts])
```
Use agents for multi-step workflows where later steps depend on earlier results. Agents maintain context across invocations, allowing them to make decisions based on what they’ve already done.

Here’s an agent that debugs code by analyzing, then deciding whether to fix or explain based on what it finds:
```python
from agentica import spawn

agent = await spawn(
    premise="""
    You are a code debugger. When given code with an error:
    1. First analyze the error to understand the root cause
    2. If it's a simple fix (syntax, typo), fix it and return the corrected code
    3. If it's a logic error requiring design changes, explain the issue instead
    """,
    model="openai:gpt-4.1"
)

# First invocation: analyze
await agent.call(None, "Analyze this error", code=broken_code, error=error_msg)

# Second invocation: agent decides to fix or explain based on analysis
result = await agent.call(
    str,
    "Based on your analysis, either fix the code or explain what needs to change"
)
# The agent remembers its analysis and chooses the appropriate action
```
For truly independent operations, use agentic functions and process in parallel. For dependent workflows where context matters, use a single agent across multiple calls.
Inference costs money: optimize by choosing the right model, caching responses, and using agents only when needed.

Choose the right model for the task. Use cheaper models for simple operations and more expensive models for complex reasoning. See Model Selection for guidance.

Cache aggressively. Every cache hit is a cost you don’t pay. See Caching above.

Keep prompts concise. Longer prompts cost more. Remove unnecessary context or examples once you’ve validated that your agentic function works.

Use agents strategically. Agents maintain conversation history, which grows with each call and costs more. For stateless operations, use agentic functions instead.

Bad: Using an agent for independent operations
```python
# Inefficient - agent maintains unnecessary history
agent = await spawn(premise="You are a data processor")

for item in items:
    result = await agent.call(dict, f"Process this item: {item}")
    # Each call adds to history, increasing cost
```
Good: Using agentic function for independent operations
```python
@agentic()
async def process_item(item: str) -> dict:
    """Process the item"""
    ...

# Each call is independent, no growing history
for item in items:
    result = await process_item(item)
```
Good: Using agent when context matters
```python
# Agent remembers context across steps
agent = await spawn(premise="You are a research assistant")

# Step 1: Find relevant papers
papers = await agent.call(list[str], "Search for papers on quantum computing", web_search=search)

# Step 2: Agent remembers which papers it found
summary = await agent.call(str, "Summarize the key findings from these papers")

# Step 3: Agent has full context to compare
comparison = await agent.call(str, "Which paper has the most practical applications?")
```