This is detailed usage documentation. New to Agentica? Start with the Quickstart or learn when to use agents vs magic functions.

Using agents

Agents in Agentica accomplish specific tasks using native libraries, code, APIs, and SDKs available in your programming language’s runtime. A single agent represents an evolving history of invocations, each of which can be given its own task and set of resources.

When to use agents

Agents work best for longer-running, multi-step tasks where each action depends on prior outcomes, state is preserved, and task-appropriate sets of resources need to be delegated. For single, well-bounded tasks without cross-step context, see Magic functions.

The basics

Agents can be created in Agentica using spawn and later called to perform tasks. An agent’s history evolves across its invocations, so you can follow up with tasks in the context of previous results. In Python, provide a return type to receive a result of that runtime type (defaulting to str). In TypeScript, the return type is inferred from the prompt and response.
```python
from agentica import spawn

agent = await spawn(premise="You are a helpful assistant.")

c: float = await agent.call(float, "What is the lattice constant of silicon in Ångströms?")
print("Lattice constant of silicon:", c)

derivation: str = await agent.call("And how is this constant derived?")
print("Derivation of lattice constant:", derivation)
```
See the API references: Python | TypeScript.

Use your tools and types

Any function, object, method, or other runtime value can be exposed directly as a resource your agent can interact with; there is no need to set up MCP servers. This puts the full programmatic power of an SDK or API at your agent’s disposal. Make resources available when spawning the agent and/or pass them per invocation. You can also expose existing remote or local MCP tools by passing an MCP configuration path; see Advanced › MCP.
```python
from agentica import spawn

agent = await spawn(premise="You are a helpful researcher.")
gdp: float = await agent.call(
    float,
    "What percentage of US GDP is from California?",
    web_search=web_search,  # any callable from your runtime, e.g. your search client
)
print(f"Percentage: {gdp:.1f}%")
```
  • Objects passed to scope or as arguments are presented without their private methods and fields (names with a leading _).
  • Async functions in scope are exposed to the REPL as synchronous functions returning Future[T]. The REPL includes a top-level event loop, so the AI can await these futures directly and use standard patterns like asyncio.gather().
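Independent of Agentica itself, the concurrency pattern this enables is ordinary asyncio code. A minimal sketch, with a hypothetical async tool standing in for a function you might expose in scope:

```python
import asyncio

# Hypothetical async tool, standing in for an async function you expose in scope.
async def fetch_population(country: str) -> int:
    await asyncio.sleep(0)  # stand-in for real I/O
    return {"US": 331_000_000, "DE": 83_000_000}[country]

async def total_population(*countries: str) -> int:
    # Inside the REPL such calls surface as synchronous functions returning
    # Future[T]; the agent can await several of them concurrently with the
    # same standard pattern used here:
    counts = await asyncio.gather(*(fetch_population(c) for c in countries))
    return sum(counts)

print(asyncio.run(total_population("US", "DE")))  # 414000000
```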

Multi-agent orchestration

Multi-agent orchestration becomes straightforward. Agents can trigger sub-agents by passing spawn in scope, enabling completely dynamic agent delegation.
```python
agent = await spawn(premise="You are an agent orchestrator.", model="openai:gpt-4o")
result = await agent.call(
    tuple[int, int],
    "Use one sub-agent to compute 3**32 and another to compute 3**34, then return both results.",
    spawn=spawn,
)
assert result == (3**32, 3**34)
print(result)
```

Streaming

Stream responses as they are being generated.
```python
import asyncio
from agentica import spawn
from agentica.magic.logging.loggers import StreamLogger

agent = await spawn(premise='You are a mathematician.', model='openai:gpt-4.1')

stream = StreamLogger()
with stream:
    root = asyncio.create_task(
        agent.call(float, 'Define a Newton–Raphson solver, and use it to solve for a root of a polynomial of your choice.')
    )

role = None
async for chunk in stream:
    if role is None and chunk.role == 'user':
        continue  # Skip first user message
    if role != chunk.role:
        print(f"\n\n--- {chunk.role} ---")
        role = chunk.role
    print(chunk, end='', flush=True)
print('\n')

print('root =', await root)
```
Example output:

````
--- agent ---
```python
def newton_raphson(f, df, x0, tol=1e-8, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        dfx = df(x)
        if abs(dfx) < 1e-12:
            break  # Avoid division by zero
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:
            return float(x_new)
        x = x_new
    return float(x)

# Polynomial: x^3 - x - 2 = 0
def f(x):
    return x**3 - x - 2

def df(x):
    return 3*x**2 - 1

return newton_raphson(f, df, 1.5)
```

--- user<execution> ---
1.5213797068045676

root = 1.5213797068045676
````

Chat with your agents

Create a simple chat loop using streaming. Consume the stream before awaiting the final result to see live generation.
```python
import asyncio
from agentica import spawn
from agentica.magic.logging import set_default_agent_listener
from agentica.magic.logging.loggers import StreamLogger

RED = "\033[91m"
GREEN = "\033[92m"
PURPLE = "\033[95m"
RESET = "\033[0m"
GREY = "\033[90m"

set_default_agent_listener(None)

async def chat():
    agent = await spawn(premise='You are a helpful assistant.', model='openai:gpt-4.1')

    while user_input := input(f"\n{PURPLE}User{RESET}: "):
        try:
            # Invoke agent against user prompt
            stream = StreamLogger()
            with stream:
                result = asyncio.create_task(
                    agent.call(str, user_input)
                )

            # Stream intermediate "thinking" to console
            print(GREY)
            async for chunk in stream:
                if chunk.role == 'agent':
                    print(chunk, end="", flush=True)
            print(RESET)

            # Print final result
            print(f"\n{GREEN}Agent{RESET}: {await result}")

        except Exception as agent_error:
            print(f"\n{RED}Error: {agent_error}{RESET}")


if __name__ == '__main__':
    asyncio.run(chat())
```
That’s all it takes!

Advanced

You can expose custom exceptions in scope so they can be raised from within execution. See Advanced for more, including logging, retries, rate limiting, and prefix caching. For more examples, see Examples.
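Stripped of the Agentica calls themselves, the exception pattern is plain Python: hand the exception class to the execution scope, and catch it at the call site. A minimal stand-in sketch, where a hypothetical `run_in_scope` function plays the role of the agent's execution environment:

```python
# Hypothetical custom exception an agent could raise from within execution.
class InsufficientDataError(Exception):
    """Raised when the requested figure cannot be determined."""

def run_in_scope(code_scope: dict):
    # Stand-in for the agent's execution environment: generated code can
    # reference anything placed in scope, including exception classes.
    raise code_scope["InsufficientDataError"]("no Q3 figures available")

try:
    run_in_scope({"InsufficientDataError": InsufficientDataError})
except InsufficientDataError as exc:
    print("Agent could not answer:", exc)  # Agent could not answer: no Q3 figures available
```

With Agentica, the `try`/`except` would wrap an `agent.call(...)` that receives `InsufficientDataError=InsufficientDataError` as a scope argument.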