Agentica
Agentica is a type-safe agent framework that lets LLM agents integrate with your code — functions, classes, live objects, even entire SDKs. Instead of building MCP wrappers or brittle schemas, you pass references directly; the framework enforces your types at runtime, constrains return types, and manages agent lifecycle. Compose single agentic functions into multi-agent systems that feel like ordinary code and ship anywhere your stack runs.
Why use Agentica?
Direct SDK Integration
Pass real SDK clients or extract just the methods you need — no MCP servers, no schema work. Inject live objects into an agent’s scope with bindings and runtime state intact.
Type-Safe Agents
Function signatures become runtime contracts. Python annotations and TypeScript generics drive structured validation, reducing hallucinations and turning agentic workloads into predictable, testable operations.
Multi-Agent Orchestration
Spin up agents on demand, keep state across calls, stream output, and compose leader/worker patterns like regular functions. Standard patterns like initialization, closures, and automatic cleanup make complex systems simple.
Multi-Language Support
Models interact with your codebase in a consistent, unified way — regardless of language or runtime. Python and TypeScript SDKs available now, with more languages coming soon.
Quick Example
Here’s how simple it is to create an agent-backed function:
from agentica import agentic
from typing import Literal
@agentic()
async def analyze(text: str) -> Literal["positive", "neutral", "negative"]:
"""Analyze sentiment"""
...
result = await analyze("Agentica is an awesome framework!")
Getting Started
Ready to build your first Agentica agent? Follow our quickstart guide to get up and running in minutes.
Quickstart Guide
Install Agentica and build a complete note-taking assistant with agentic functions and agents. Supports Python and TypeScript.
TypeScript Framework Support
Supported Frameworks
- Plain TS/JS (Node, Bun, Browser) via ts-patch
- Next.js
- React
- Vite
- Svelte
- Webpack
- Rollup
- ESBuild
Quick Setup for Next.js & Vite
Vite+React
After installing Agentica, create an .env file with your API key:
echo 'VITE_AGENTICA_API_KEY=<YOUR_KEY>' >> .env
Client-side usage: Vite requires the VITE_ prefix to expose variables to client code. Note that this exposes the key to the browser - only do this if your API key is meant for client-side use.
Next.js
After installing Agentica, create an .env.local file with your API key:
echo 'NEXT_PUBLIC_AGENTICA_API_KEY=<YOUR_KEY>' >> .env.local
Plain TS/JS
See the quickstart guide for full details of setting up a plain TypeScript project with Bun (bun) or Node.js (pnpm, npm). Without a bundler (such as ESBuild, Rollup, Next.js, etc.) you will need to use the tspc command (provided by ts-patch) to compile and transform your code.
Create an .env file with your API key:
echo 'AGENTICA_API_KEY=<YOUR_KEY>' >> .env
or export it in your shell:
export AGENTICA_API_KEY=<YOUR_KEY>
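For example, with ts-patch installed as a dev dependency, a build-and-run step might look like the following (a hedged sketch; the tsconfig path and output directory are assumptions and may differ in your project):
npx tspc -p tsconfig.json
node dist/index.js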
Explore the Documentation
Core Concepts
Understand the fundamentals of how Agentica works, scope, and when to use agentic functions vs agents.
Guides
Task-oriented tutorials for multi-agent systems, error handling, and best practices.
Examples
Complete working applications including Slack bots, data analysis, and research systems.
API Reference
Complete API documentation for Python and TypeScript with all parameters and types.
Summary for Coding Assistants
Copy the following markdown as context for Cursor, Claude Code, or Copilot. See our integration guides for more details.
- Python
- TypeScript
Python
# Agentica Documentation
## Overview
Agentica is a library for integrating agentic features and agents into Python applications.
## Basics
The primary method of interaction is through the `@agentic()` decorator, which works like this:
```python
from agentica import agentic
# Defines an agent-backed function
@agentic()
async def add(a: int, b: int) -> int:
"""Returns the sum of a and b"""
...
# Calls the agent-backed function
result = await add(1, 2) # This addition is done by an agent via the agentica framework
assert result == 3
```
This allows you to use agents to implement functions which are not possible to implement in pure Python. Functions decorated with `@agentic` MUST be `async`.
The alternative syntax is to `spawn` an agent.
```python
from agentica import spawn
# Defines an agent
agent = await spawn(premise="You are a truth-teller.")
# Calls agent
result: bool = await agent.call(bool, "The Earth is flat")
assert result == False
```
Both the creation of an agent with `spawn` and its calls are awaitable. When calling an agent, you **must** always pass the return type as the first argument, followed by a prompt of type `str`.
### Return Types
Return types are optional and flexible:
```python
# Return type defaults to str if not specified
result = await agent.call("What is 2+2?") # Returns str
# Specify exact types for structured output
result: int = await agent.call(int, "What is 2+2?") # Returns int
result: dict[str, int] = await agent.call(dict[str, int], "Count items by category")
# Use None for side-effects only
await agent.call(None, "Send a message to John") # No return value needed
```
## Agent Instantiation
There are two ways to create agents:
### Using `spawn` (async)
Use `spawn` for most cases - it's awaitable and async-friendly:
```python
agent = await spawn(premise="You are a helpful assistant.")
```
### Using `Agent()` directly (sync)
Use direct instantiation when you need synchronous creation, such as in `__init__` methods:
```python
from agentica import Agent
class CustomAgent:
def __init__(self, directory: str):
# Must be synchronous - use Agent() not spawn()
self._brain = Agent(
premise="You are a specialized assistant.",
scope={"tool": some_tool}
)
async def run(self, task: str) -> str:
return await self._brain(str, task)
```
**Tip**: Direct `Agent` instantiation is particularly useful when building custom agent classes or in contexts that cannot be async.
## Premise vs System Prompt
You can control the agent's instructions in two ways:
```python
# Use 'premise' to add context to the default system prompt
agent = await spawn(premise="You are a math expert.")
# Use 'system' for full control of the system prompt
agent = await spawn(system="You are a helpful assistant. Always respond with JSON.")
```
**Note**: You cannot use both `premise` and `system` together.
## Passing in objects
If you want an agentic function or agent to use a function, class, object, etc., simply pass it via the `scope` argument of the `@agentic` decorator or of the call to `spawn`.
```python
from agentica import agentic, spawn
# User-defined function
from tools import web_search
# Defines agent
agent = await spawn(premise="You are a truth-teller.", scope={"web_search": web_search})
# Defines the agent-backed function
@agentic(scope={'web_search': web_search})
async def truth_teller(statement: str) -> bool:
"""Returns whether or not a statement is True or False."""
...
```
### SDK Integration Pattern
Extract specific methods from SDK clients for focused scope:
```python
from slack_sdk import WebClient
# Extract only the methods you need
slack_conn = WebClient(token=SLACK_BOT_TOKEN)
list_users = slack_conn.users_list
send_message = slack_conn.chat_postMessage
@agentic(scope={'list_users': list_users, 'send_message': send_message}, model="openai:gpt-5")
async def send_team_update(message: str) -> None:
"""Send a message to all team members."""
...
```
### Per-Call Scope
You can also add scope per invocation:
```python
agent = await spawn(premise="Data analyzer")
# Add resources for this specific call
result = await agent.call(
dict[str, int],
"Analyze the dataset",
dataset=pd.read_csv("data.csv").to_dict(),
analyzer_tool=custom_analyzer
)
```
## Model Selection
Agentica supports any text-to-text model provided on OpenRouter. Specify with the `model` parameter:
```python
# For agents
agent = await spawn(
premise="Fast responses needed",
model="openai:gpt-5" # Default is 'openai:gpt-4.1'
)
# For agentic functions
@agentic(model="anthropic:claude-sonnet-4.5")
async def analyze(text: str) -> dict:
"""Analyze the text."""
...
```
**Supported models**:
- `openai:gpt-3.5-turbo`
- `openai:gpt-4o`
- `openai:gpt-4.1` (default)
- `openai:gpt-5`
- `anthropic:claude-sonnet-4`
- `anthropic:claude-opus-4.1`
- `anthropic:claude-sonnet-4.5`
- `anthropic:claude-opus-4.5`
or just use any OpenRouter model slug (e.g. `google/gemini-2.5-flash`).
## Token Limits
Control the maximum number of tokens generated with `max_tokens`:
```python
from agentica import spawn, agentic, MaxTokens
# For agents
agent = await spawn(
premise="Brief responses only",
max_tokens=500 # Limit total output tokens per invocation
)
# For agentic functions
@agentic(max_tokens=1000)
async def summarize(text: str) -> str:
"""Create a concise summary."""
...
# For finer control, use MaxTokens:
# - per_invocation: total tokens across all rounds
# - per_round: tokens per inference round
# - rounds: maximum number of inference rounds
agent = await spawn(
premise="Brief responses only",
max_tokens=MaxTokens(per_invocation=5000, per_round=1000, rounds=5)
)
```
**Use cases**:
- Ensure brief responses for cost control
- Prevent overly long outputs
- Match specific output length requirements
If the response would exceed `max_tokens`, a `MaxTokensError` will be raised. See [Error Handling](#error-handling) for how to handle this.
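For instance, a minimal sketch of handling this (assuming `MaxTokensError` is importable from `agentica.errors`, as in the error hierarchy below):
```python
from agentica import spawn
from agentica.errors import MaxTokensError

agent = await spawn(premise="Brief responses only", max_tokens=200)
try:
    summary = await agent.call(str, "Summarize the attached report")
except MaxTokensError:
    # Fall back (or retry with a higher limit) when the output would exceed the cap
    summary = "Summary unavailable: output exceeded the token limit."
```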
## Tracking Token Usage
Track token consumption with `last_usage` and `total_usage`:
```python
from agentica import spawn, agentic, last_usage, total_usage
# Agents have methods
agent = await spawn(premise="You are helpful.")
await agent.call(str, "Hello!")
await agent.call(str, "How are you?")
usage = agent.last_usage() # Usage from last invocation
print(f"Last: {usage.input_tokens} in, {usage.output_tokens} out")
usage = agent.total_usage() # Cumulative usage across all invocations
print(f"Total: {usage.input_tokens} in, {usage.output_tokens} out, {usage.total_tokens} processed")
# Agentic functions use standalone functions
@agentic()
async def analyze(text: str) -> str:
"""Analyze the text."""
...
await analyze("Some text")
print(last_usage(analyze)) # Usage(input_tokens=..., output_tokens=..., total_tokens=...)
print(total_usage(analyze)) # Cumulative usage
```
The `Usage` object contains:
- `input_tokens`: tokens consumed as input
- `output_tokens`: tokens generated as output
- `total_tokens`: total tokens processed (not double-counting re-consumed tokens)
## Persistence
Agentic functions can maintain state between calls:
```python
# Stateful agentic function
@agentic(persist=True, model="openai:gpt-4.1")
async def chatbot(message: str) -> str:
"""A chatbot that remembers conversation history."""
...
# First call
response1 = await chatbot("My name is Alice")
# Second call - remembers previous context
response2 = await chatbot("What's my name?") # Will know it's Alice
```
**Tip**: Use `persist=True` when you need conversation history or stateful behavior in agentic functions. For agents, state is maintained automatically across calls to the same agent instance.
## Streaming
Streaming is supported for both agents and agentic functions, most straightforwardly by using a `StreamLogger`.
```python
import asyncio
from agentica import spawn
from agentica.logging.loggers import StreamLogger
agent = await spawn(premise="You are a truth-teller.")
stream = StreamLogger()
with stream:
result = asyncio.create_task(
agent.call(bool, "Is Paris the capital of France?")
)
```
This creates an async generator `stream` that produces the streamed text chunks of any invocation started within the `with` block.
The stream can be consumed like this:
```python
# Consume stream FIRST for live output
async for chunk in stream:
print(chunk.content, end="", flush=True)
# Then await result
final_result = await result
```
Each `Chunk` object contains:
- `content`: the text content of the chunk
- `role`: one of `'user'`, `'agent'`, or `'system'`
**Important**: The stream should be consumed **before** awaiting the result, otherwise you won't see the live text generation.
## Logging and Debugging
Agentica provides built-in logging to help debug agents and agentic functions.
### Default Logging
By default, all agents and agentic functions use `StandardListener` which:
- Prints lifecycle events to stdout with colors
- Writes chat histories to `./logs/agent-<id>.log`
```shell
Spawned Agent 25 (./logs/agent-25.log)
► Agent 25: Calculate the 32nd power of 3
◄ Agent 25: 1853020188851841
```
### Contextual Logging
Temporarily change logging for specific code sections:
```python
from agentica.logging.loggers import FileLogger, PrintLogger
# Only log to file in this block
with FileLogger():
agent = await spawn(premise="Debug this agent")
await agent.call(int, "Calculate something")
# Multiple loggers can be nested
with PrintLogger():
with FileLogger():
# Both print AND file logging active
agent = await spawn(premise="Dual logging")
await agent.call(str, "Hello")
```
### Disable Logging
```python
from agentica.logging.agent_logger import NoLogging
with NoLogging():
agent = await spawn(premise="Silent agent")
await agent.call(int, "Secret calculation")
```
### Per-Agent/Function Logging
```python
from agentica.logging import PrintOnlyListener, FileOnlyListener
# Attach listener to specific agent
agent = await spawn(
premise="Custom logging",
listener=PrintOnlyListener # Only print, no files
)
# Attach listener to specific agentic function
@agentic(listener=FileOnlyListener)
async def my_func(a: int) -> str:
"""Only logs to file."""
...
```
### Global Logging Configuration
```python
from agentica.logging import set_default_agent_listener, PrintOnlyListener
# Change default for all agents/functions
set_default_agent_listener(PrintOnlyListener)
# Disable all logging by default
set_default_agent_listener(None)
```
**Logging Priority** (highest to lowest):
1. Contextual loggers (via `with` statement)
2. Per-agent/function listener
3. Default listener
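As an illustrative sketch of these rules (assuming a contextual logger overrides the per-agent listener for calls made inside the `with` block):
```python
from agentica import spawn
from agentica.logging import PrintOnlyListener
from agentica.logging.loggers import FileLogger

# Per-agent listener: print only, no files
agent = await spawn(premise="Priority demo", listener=PrintOnlyListener)
with FileLogger():
    # Contextual logger wins inside this block, so this call is logged to file
    await agent.call(str, "Hello from inside the context")
# Back outside, the per-agent listener applies again, so this call is printed
await agent.call(str, "Hello from outside the context")
```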
## Using MCP
If you want the agentic function or agent to use an MCP server, simply put it in the `@agentic` decorator or the call to `spawn`.
```python
from agentica import agentic, spawn
# Defines the agent
agent = await spawn(premise="You are a truth-teller.", mcp="path/to/config.json")
# Defines the agent-backed function
@agentic(mcp="path/to/config.json")
async def truth_teller(statement: str) -> bool:
"""Returns whether or not a statement is True or False."""
...
```
where "path/to/config.json" is a standard JSON config file such as:
```json
{
"mcpServers": {
"tavily-remote-mcp": {
"command": "npx -y mcp-remote https://mcp.tavily.com/mcp/?tavilyApiKey=<your-api-key>",
"env": {}
}
}
}
```
## Error Handling
Agentica provides comprehensive error handling through the `agentica.errors` module.
### SDK Errors
All Agentica errors inherit from `AgenticaError`, making it easy to catch all SDK-related errors:
```python
from agentica import agentic
from agentica.errors import AgenticaError, RateLimitError, InferenceError
@agentic()
async def process_data(data: str) -> dict:
"""Process the data."""
...
try:
result = await process_data(raw_data)
except RateLimitError as e:
# Handle rate limiting
await asyncio.sleep(60)
result = await process_data(raw_data)
except InferenceError as e:
# Handle all inference service errors
logger.error(f"Inference failed: {e}")
result = {}
except AgenticaError as e:
# Catch any other SDK errors
logger.error(f"Agentica error: {e}")
raise
```
**Error hierarchy:**
- `AgenticaError` - Base for all SDK errors
- `ServerError` - Base for remote operation errors
- `GenerationError` - Base for agent generation errors
- `InferenceError` - HTTP errors from inference service
- `MaxTokensError`, `ContentFilteringError`, etc.
- `ConnectionError` - WebSocket and connection errors
- `InvocationError` - Agent invocation errors
### Custom Exceptions
You can define custom exceptions and pass them into the `@agentic()` decorator so the agent can raise them:
```python
class DataValidationError(Exception):
"""Raised when input data fails validation."""
pass
@agentic(DataValidationError)
async def analyze_data(data: str) -> dict:
"""
Analyze the dataset.
Raises:
DataValidationError: If data is empty or malformed
ValueError: If data format is not supported
Returns a dictionary with analysis results.
"""
...
try:
result = await analyze_data(raw_data)
except DataValidationError as e:
logger.warning(f"Invalid data: {e}")
result = {"status": "validation_failed"}
```
**Tip**: The agent can see your docstrings! Document exception conditions clearly in the `Raises:` section, and the agent will raise them appropriately.
## Common Patterns
### Stateful Data Analysis
Agents maintain context across calls and can manipulate variables by reference:
```python
from agentica import spawn
import pandas as pd
agent = await spawn()
# First analysis
result = await agent.call(
dict[str, int],
"Count movies by genre",
dataset=pd.read_csv("movies.csv").to_dict()
)
# Agent remembers previous result
filtered = await agent.call(
dict[str, int],
"Keep only genres with more than 1000 movies"
)
```
### Custom Agent Classes
Wrap `Agent` for domain-specific functionality:
```python
from agentica.agent import Agent
class ResearchAgent:
def __init__(self, web_search_fn):
self._brain = Agent(
premise="You are a research assistant.",
scope={"web_search": web_search_fn}
)
async def research(self, topic: str) -> str:
return await self._brain(str, f"Research: {topic}")
async def summarize(self, text: str) -> str:
return await self._brain(str, f"Summarize: {text}")
# Use it
researcher = ResearchAgent(web_search)
findings = await researcher.research("AI agents in 2025")
summary = await researcher.summarize(findings)
```
### Multi-Agent Orchestration
Coordinate multiple agents for complex tasks:
```python
from agentica.agent import Agent
class LeadResearcher:
def __init__(self):
self._brain = Agent(
premise="Coordinate research tasks across subagents.",
scope={"SubAgent": ResearchAgent}
)
async def __call__(self, query: str) -> str:
return await self._brain(str, query)
# The lead researcher can spawn and coordinate subagents
lead = LeadResearcher()
report = await lead("Research companies building AI agents")
```
Happy programming!
## Content quality standards
- Always include complete, runnable examples that users can copy and execute
- Show proper error handling and edge case management
- Add explanatory comments for complex logic
TypeScript
# Agentica Documentation
## Overview
Agentica is a library for integrating agentic features into TypeScript applications.
## Basics
The primary method of interaction is through the `agentic()` function, which works like this:
```typescript
import { agentic } from '@symbolica/agentica';
// Defines the agent-backed function
async function add(a: number, b: number): Promise<number> {
return await agentic("Returns the sum of a and b", { a, b });
}
// Calls the agent-backed function
const result = await add(1, 2); // This addition is done by an LLM via the agentica framework
console.assert(result === 3);
```
This allows you to use agents to implement functions which are not possible to implement in pure TypeScript.
The alternative syntax is to `spawn` an agent.
```typescript
import { spawn } from '@symbolica/agentica';
// Spawns the agent
const agent = await spawn({ premise: "You are a truth-teller." });
// Calls the agent
const result: boolean = await agent.call<boolean>("The Earth is flat");
console.assert(result === false);
```
Both the creation of an agent with `spawn` and its calls are awaitable.
### Return Types
Specify return types using TypeScript generics:
```typescript
// Simple call type inferred from explicit annotation
const result: number = await agent.call("What is 2+2?");
// Structured output
interface UserData {
name: string;
email: string;
role: string;
}
// Explicitly passed into generic argument
const user = await agent.call<UserData>("Get user information");
// user is typed as UserData
```
## Model Selection
Agentica supports any text-to-text model available on OpenRouter. Specify with the `model` parameter:
```typescript
// For agents (default is 'openai:gpt-4.1')
const agent = await spawn({ premise: "You are a helpful assistant.", model: "openai:gpt-5" });
// For agentic functions
async function analyze(text: string): Promise<AnalysisResult> {
return await agentic("Analyze the text", { text }, { model: "anthropic:claude-sonnet-4.5" });
}
```
**Supported models**:
- `openai:gpt-3.5-turbo`
- `openai:gpt-4o`
- `openai:gpt-4.1` (default)
- `openai:gpt-5`
- `anthropic:claude-sonnet-4`
- `anthropic:claude-opus-4.1`
- `anthropic:claude-sonnet-4.5`
- `anthropic:claude-opus-4.5`
or just use any OpenRouter model slug (e.g. `google/gemini-2.5-flash`).
## Token Limits
Control the maximum number of tokens generated with `maxTokens`:
```typescript
import { spawn, agentic, MaxTokens } from '@symbolica/agentica';
// For agents
const agent = await spawn({
premise: "Brief responses only",
maxTokens: 500 // Limit total output tokens per invocation
});
// For agentic functions
async function summarize(text: string): Promise<string> {
return await agentic("Create a concise summary", { text }, { maxTokens: 1000 });
}
// For finer control, use MaxTokens:
// - perInvocation: total tokens across all rounds
// - perRound: tokens per inference round
// - rounds: maximum number of inference rounds
const agent2 = await spawn({
premise: "Brief responses only",
maxTokens: MaxTokens.from({ perInvocation: 5000, perRound: 1000, rounds: 5 })
});
```
**Use cases**:
- Ensure brief responses for cost control
- Prevent overly long outputs
- Match specific output length requirements
If the response would exceed `maxTokens`, a `MaxTokensError` will be raised. See [Error Handling](#error-handling) for how to handle this.
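For instance, a minimal sketch of handling this (assuming `MaxTokensError` is exported from `@symbolica/agentica/errors`, as in the error hierarchy below):
```typescript
import { spawn } from '@symbolica/agentica';
import { MaxTokensError } from '@symbolica/agentica/errors';

const agent = await spawn({ premise: "Brief responses only", maxTokens: 200 });
let summary: string;
try {
  summary = await agent.call<string>("Summarize the attached report");
} catch (e) {
  if (e instanceof MaxTokensError) {
    // Fall back (or retry with a higher limit) when the output would exceed the cap
    summary = "Summary unavailable: output exceeded the token limit.";
  } else {
    throw e;
  }
}
```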
## Tracking Token Usage
Track token consumption with `lastUsage()` and `totalUsage()` methods on agents:
```typescript
import { spawn } from '@symbolica/agentica';
const agent = await spawn({ premise: "You are helpful." });
await agent.call<string>("Hello!");
await agent.call<string>("How are you?");
// Get usage from the last invocation
const usage = agent.lastUsage();
console.log(`Last: ${usage.inputTokens} in, ${usage.outputTokens} out`);
// Get cumulative usage across all invocations
const total = agent.totalUsage();
console.log(`Total: ${total.inputTokens} in, ${total.outputTokens} out, ${total.totalTokens} processed`);
// For agentic functions, use the onUsage callback
async function summarize(text: string): Promise<string> {
return await agentic("Summarize the text", { text }, {
onUsage: (usage) => console.log(`Used ${usage.totalTokens} tokens`)
});
}
```
The `Usage` object contains:
- `inputTokens`: tokens consumed as input
- `outputTokens`: tokens generated as output
- `totalTokens`: total tokens processed (not double-counting re-consumed tokens)
## Initial Prompt vs System Prompt
You can control the agent's instructions in two ways:
```typescript
// Use `premise` (added to default system prompt with environment explainer)
const agent = await spawn({ premise: "You are a math expert." });
// Use `system` for full control of the system prompt (no environment explainer)
const agent2 = await spawn({ system: "You are a helpful assistant. Always respond with JSON." });
```
**Note**: You cannot use both `premise` and `system` together.
## Passing in objects
If you want the agentic function or agent to use functions, classes, objects, etc., simply pass them in the scope argument to `spawn` or `call`, or include them alongside the arguments passed to `agentic`.
```typescript
import { agentic, spawn } from '@symbolica/agentica';
// User-defined function
import { webSearch } from './tools';
// Defines the agent with scope
const agent = await spawn({ premise: "You are a truth-teller." }, { webSearch });
// Defines the agent-backed function
async function truthTeller(statement: string): Promise<boolean> {
return await agentic(
"Returns whether or not this statement is True or False.",
{ statement, webSearch }
);
}
```
### SDK Integration Pattern
Extract specific methods from SDK clients for focused scope:
```typescript
import { agentic } from '@symbolica/agentica';
import { WebClient } from '@slack/web-api';
// Extract only the methods you need
const slackClient = new WebClient(process.env.SLACK_TOKEN);
const listUsers = slackClient.users.list.bind(slackClient.users);
const postMessage = slackClient.chat.postMessage.bind(slackClient.chat);
async function sendTeamUpdate(message: string): Promise<void> {
return await agentic(
"Send `message` to all team members",
{ message, listUsers, postMessage },
{ model: "openai:gpt-5" }
);
}
```
### Per-Call Scope
You can also add scope for specific invocations:
```typescript
const agent = await spawn({ premise: "Data analyzer" });
// Add resources for this specific call
const result: { [key: string]: number } = await agent.call(
"Analyze the dataset",
{ dataset, analyzerTool }, // Additional scope
);
```
## Agent Call Signatures
The `agent.call()` method has two signatures:
```typescript
// Simple call with just a prompt
const result = await agent.call<number>("What is 2+2?");
// Call with scope and configuration
const result = await agent.call<object>(
"Fetch and analyze user data",
{ databaseQuery, apiClient } // Additional scope
);
```
## Streaming
Streaming model generation is supported for both agents and agentic functions.
### Agent Streaming
```typescript
import { spawn } from '@symbolica/agentica';
// Enable streaming globally for the agent (default for all invocations)
const agent = await spawn({
premise: "You are a truth-teller.",
listener: (iid, chunk) => console.log(chunk.content)
});
// Override listener for this invocation specifically
const result = await agent.call<boolean>(
"Is Paris the capital of France?", {},
{ listener: (iid, chunk) => process.stdout.write(chunk.content) }
);
```
Each `Chunk` object contains:
- `content`: the text content of the chunk
- `role`: one of `'user'`, `'agent'`, or `'system'`
### Agentic Function Streaming
Streaming is also supported for agentic functions:
```typescript
async function generateStory(topic: string): Promise<string> {
return await agentic("Write a story about the topic", { topic }, {
listener: (iid, chunk) => process.stdout.write(chunk.content)
});
}
```
## Resource Management
Agents should be properly cleaned up when done.
### Manual Cleanup
```typescript
const agent = await spawn({ premise: "Helper" });
try {
await agent.call<string>("Do something");
} finally {
await agent.close(); // Clean up resources
}
```
### Automatic Cleanup with `await using`
Modern TypeScript (5.2+) supports automatic resource disposal:
```typescript
await using agent = await spawn({ premise: "Helper agent" });
// Agent automatically cleaned up when out of scope
const result = await agent.call<string>("Process task");
// No need to call close(), automatically cleaned up at end of scope
```
## Error Handling
Agentica provides comprehensive error handling through the `@symbolica/agentica/errors` module.
### SDK Errors
All Agentica errors inherit from `AgenticaError`, making it easy to catch all SDK-related errors:
```typescript
import { agentic } from '@symbolica/agentica';
import { AgenticaError, RateLimitError, InferenceError } from '@symbolica/agentica/errors';
async function processData(data: string): Promise<object> {
return agentic<object>("Process the data.", { data });
}
let result: object;
try {
  result = await processData(rawData);
} catch (e) {
if (e instanceof RateLimitError) {
// Handle rate limiting
await sleep(60000);
result = await processData(rawData);
} else if (e instanceof InferenceError) {
// Handle all inference service errors
logger.error(`Inference failed: ${e}`);
result = {};
} else if (e instanceof AgenticaError) {
// Catch any other SDK errors
logger.error(`Agentica error: ${e}`);
throw e;
}
}
```
**Error hierarchy:**
- `AgenticaError` - Base for all SDK errors
- `ServerError` - Base for remote operation errors
- `GenerationError` - Base for agent generation errors
- `InferenceError` - HTTP errors from inference service
- `MaxTokensError`, `ContentFilteringError`, etc.
- `ConnectionError` - WebSocket and connection errors
- `InvocationError` - Agent invocation errors
### Custom Exceptions
You can define custom exceptions and pass them into the agentic function's scope so the agent can raise them:
```typescript
/**
* Raised when input data fails validation.
*/
class DataValidationError extends Error {
constructor(message: string) {
super(message);
this.name = 'DataValidationError';
}
}
/**
* Analyze the dataset.
*
* @throws {DataValidationError} If data is empty or malformed
* @throws {Error} If data format is not supported
*/
async function analyzeData(data: string): Promise<object> {
return agentic<object>(
`Analyze the dataset.
Throw DataValidationError if data is empty or malformed.
Throw Error if data format is not supported.
Return analysis results.`,
{ data, DataValidationError }
);
}
let result: object;
try {
  result = await analyzeData(rawData);
} catch (e) {
if (e instanceof DataValidationError) {
logger.warn(`Invalid data: ${e}`);
result = { status: "validation_failed" };
}
}
```
**Tip**: The agent can see your JSDoc comments (`/** ... */`)! Document exception conditions clearly using `@throws` tags, and also include them in your prompt. The agent will raise them appropriately.
## Common Patterns
### Stateful Conversations
Agents maintain context across calls:
```typescript
const assistant = await spawn({ premise: "You are a helpful coding assistant." });
// First interaction
await assistant.call<void>("My name is Alice and I'm learning TypeScript");
// Later interaction - agent remembers context
const response = await assistant.call<string>("What's my name?");
console.log(response); // "Your name is Alice"
```
### Custom Agent Classes
Wrap agents for domain-specific functionality:
```typescript
import { spawn, Agent } from '@symbolica/agentica';
class ResearchAssistant {
private agent: Agent | null = null;
constructor(private webSearch: (query: string) => Promise<any>) { }
async initialize(): Promise<void> {
this.agent = await spawn(
{ premise: "You are a research assistant." },
);
}
async research(topic: string): Promise<string> {
return await this.agent!.call<string>(
"Research the topic",
{ topic, webSearch: this.webSearch }
);
}
async summarize(text: string): Promise<string> {
return await this.agent!.call<string>(
"Summarize this text",
{ text }
);
}
async close(): Promise<void> {
await this.agent?.close();
}
}
// Usage
const researcher = new ResearchAssistant(myWebSearchFn);
await researcher.initialize();
const findings = await researcher.research("AI agents in 2025");
const summary = await researcher.summarize(findings);
await researcher.close();
```
### Resource Management Pattern
Modern pattern for automatic cleanup:
```typescript
async function processTask(task: string): Promise<string> {
await using agent = await spawn({
premise: "Task processor",
model: "openai:gpt-5"
});
const result = await agent.call<string>(task);
return result;
// Agent automatically cleaned up here
}
```
## Multi-Agent Orchestration
Coordinate multiple agents for complex tasks by wrapping agents in custom classes.
### Pattern: Lazy Agent Initialization
Defer agent creation until first use:
```typescript
import { spawn, Agent } from '@symbolica/agentica';
class ResearchAgent {
private brain: Agent | null = null;
private async ensureBrain(): Promise<Agent> {
if (this.brain === null) {
this.brain = await spawn({
premise: "You are a research assistant.",
model: "openai:gpt-4o"
});
}
return this.brain;
}
async research(topic: string): Promise<string> {
const brain = await this.ensureBrain();
// Bind instance methods for agent scope
const webSearch = this.webSearch.bind(this);
return await brain.call<string>(
"Research the topic",
{ topic, webSearch } // Pass bound methods
);
}
private async webSearch(query: string): Promise<any> {
// [implementation omitted]
return { results: [] };
}
async close(): Promise<void> {
if (this.brain !== null) {
await this.brain.close();
}
}
}
```
### Pattern: Multi-Agent Coordination
Coordinate multiple specialized agents:
```typescript
class CitationAgent { ... }
class DeepResearchSession {
private leadResearcher: Agent | null = null;
private citationAgent: CitationAgent;
constructor(private directory: string) {
this.citationAgent = new CitationAgent(directory);
}
private async ensureLeadResearcher(): Promise<Agent> {
if (this.leadResearcher === null) {
this.leadResearcher = await spawn({
premise: "You are a lead researcher. Coordinate subagents to research the query.",
model: "openai:gpt-4o"
});
}
return this.leadResearcher;
}
async run(query: string): Promise<string> {
const leader = await this.ensureLeadResearcher();
// Lead researcher can spawn and coordinate SubAgent instances
const report: string = await leader.call(
query,
{
SubAgent, // Pass the class itself
savePlan: (plan: string) => this.savePlan(plan),
loadPlan: () => this.loadPlan()
},
{ listener: (iid, chunk) => process.stdout.write(chunk.content) }
);
// Post-process with citation agent
await this.citationAgent.run(report);
return `Research complete! Check ${this.directory}/report.md`;
}
private async savePlan(plan: string): Promise<void> {
// Save to directory
}
private async loadPlan(): Promise<string> {
// Load from directory
return "";
}
async close(): Promise<void> {
if (this.leadResearcher !== null) {
await this.leadResearcher.close();
}
await this.citationAgent.close();
}
}
// Usage
const session = new DeepResearchSession('research_output');
try {
const result = await session.run("Research AI agents in 2025");
console.log(result);
} finally {
await session.close();
}
```
### Pattern: SubAgent with Instance Counter
Create multiple agent instances with unique IDs:
```typescript
let idGen = 0;
class SubAgent {
id: number;
private brain: Agent | null = null;
constructor(private directory: string) {
this.id = idGen++; // Unique ID per instance
}
private async ensureBrain(): Promise<Agent> {
if (this.brain === null) {
this.brain = await spawn({
premise: "You are a specialized research agent.",
model: "openai:gpt-4o"
});
}
return this.brain;
}
async run(task: string): Promise<string> {
const brain = await this.ensureBrain();
// Bind methods for scope
const saveResults = this.saveResults.bind(this);
const result: string = await brain.call(
task,
{ saveResults },
{ listener: (iid, chunk) => process.stdout.write(`[SubAgent ${this.id}] ${chunk.content}`) }
);
return result;
}
private async saveResults(data: any): Promise<void> {
// Save with unique path using this.id
const path = `${this.directory}/subagent_${this.id}/results.json`;
// ... save logic
}
async close(): Promise<void> {
if (this.brain !== null) {
await this.brain.close();
}
}
}
```
### Best Practices for Multi-Agent Systems
1. **Lazy Initialization**: Create agents only when needed using `ensureBrain()` pattern
2. **Resource Cleanup**: Always implement `close()` and call it in `finally` blocks
3. **Method Binding**: Use `.bind(this)` when passing instance methods to agent scope
4. **Unique IDs**: Use counters or UUIDs for agent identification
5. **Streaming Coordination**: Stream nested agent outputs for visibility
6. **Error Handling**: Wrap agent calls in try/finally for proper cleanup
7. **Type Safety**: Use TypeScript generics for type-safe agent responses
```typescript
// Complete example with best practices
class RobustAgentWrapper {
private agent: Agent | null = null;
private async ensureAgent(): Promise<Agent> {
if (this.agent === null) {
this.agent = await spawn({
premise: "Task executor",
model: "openai:gpt-5"
});
}
return this.agent;
}
async execute<T>(task: string, tools: object = {}): Promise<T> {
const agent = await this.ensureAgent();
try {
const result: T = await agent.call<T>(
task,
tools,
{ listener: (iid, chunk) => console.log(chunk.content) }
);
return result;
} catch (error) {
console.error('Agent execution failed:', error);
throw error;
}
}
async close(): Promise<void> {
if (this.agent) {
await this.agent.close();
this.agent = null;
}
}
}
// Usage with automatic cleanup
async function processTask(): Promise<string> {
const wrapper = new RobustAgentWrapper();
try {
const result = await wrapper.execute<string>("Analyze data");
return result;
} finally {
await wrapper.close();
}
}
```
## Logging and Debugging
<Note>
**More advanced listener support on par with Python is coming soon to TypeScript.**
Logging and listener functionality is currently restricted to callback-style listeners in TypeScript.
</Note>
### Current Debugging Options
For now, you can observe agent behavior through:
```typescript
// 1. Streaming responses for real-time observation
const result = await agent.call<string>("Complex task", {}, {
listener: (iid, chunk) => console.log('[Agent]:', chunk.content)
});
// 2. Standard console logging
console.log('Agent returned:', result);
```
Happy programming!
## Content quality standards
- Always include complete, runnable examples that users can copy and execute
- Show proper error handling and edge case management
- Add explanatory comments for complex logic