
Overview

Agentica allows AI to operate within your runtime by combining Remote Procedure Call (RPC) with transparent proxying in a Python REPL. These two mechanisms combine to allow AI to:
  • transparently call all of your functions — no schemas or tools required
  • operate on any data by reference — non-serializable data is allowed, and large data never has to be expanded into the context
  • write code in terms of your code, state, APIs, etc — no need to host your workflow or write integrations

Anatomy of an Invocation

Let’s walk through exactly what happens when you call a magic function. Consider the simplified example below, where portions of the code have been elided for clarity.

```python
from agentica import magic

# Your existing types and functions
class OrderResult: ...
class CustomerTier: ...
def get_customer_tier(name: str) -> CustomerTier: ...
def calculate_discount(tier: CustomerTier) -> float: ...

@magic(get_customer_tier, calculate_discount)
def process_order(customer_name: str, base_price: float) -> OrderResult:
    """Look up customer tier, calculate discount, and create order"""
    ...

# Call the function
result = process_order("Alice", 100.0)
```

What Happens Behind the Scenes

When process_order("Alice", 100.0) is called, it triggers the following interaction.

1. Agentica sends a request to the AI with:
  • The instruction: "Look up customer tier, calculate discount, and create order"
  • The input parameters: customer_name = "Alice", base_price = 100.0
  • The signatures and docstrings of functions in scope: get_customer_tier(), calculate_discount()
  • The details of the types in scope: OrderResult, CustomerTier
  • The details of the expected return type, OrderResult
2. The AI interactively writes and evaluates code in a sandboxed REPL environment. The AI always writes Python code in the sandbox, regardless of whether you’re using the Python or TypeScript SDK. From the AI’s perspective, the Python REPL in the Agentica sandbox has everything in scope: get_customer_tier, calculate_discount, OrderResult, CustomerTier, etc. The AI can write and evaluate code like normal. Here’s what a sample REPL session could look like.
I should begin by obtaining the customer tier.

```python
tier = get_customer_tier(customer_name)
tier
```

```
CustomerTier(level='gold', benefits=['free_shipping', 'priority_support'])
```

This must be an important customer. I need to calculate the discount factor.

```python
discount = calculate_discount(tier)
discount
```

```
0.15
```

That looks reasonable, a 15% discount. I should compute the price using the discount.

```python
final_price = base_price * (1 - discount)
final_price
```

```
85.0
```

Good, let me assemble the order.

```python
return OrderResult(
    customer=customer_name,
    original_price=base_price,
    discount=discount,
    final_price=final_price
)
```

No output was produced.

<Note>
**Async Functions in the REPL:** The REPL includes a top-level event loop, so async functions work naturally. When you pass async functions from your runtime, they appear in the REPL as functions returning `Future[T]` (Python `async def foo(...) -> T` becomes `def foo(...) -> Future[T]`, TypeScript `async function foo(...): Promise<T>` similarly translates). The AI can use top-level `await`, and standard patterns like `asyncio.gather()` work as expected.
</Note>
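To make the note concrete, here is a sketch of the concurrency pattern the AI can use in the REPL. The functions are hypothetical stand-ins, not part of the example above; in a real session they would be stubs forwarding to your runtime.

```python
import asyncio

# Hypothetical async function standing in for one passed from your runtime;
# in the sandbox it would appear to return a Future.
async def fetch_tier(name: str) -> str:
    await asyncio.sleep(0)  # placeholder for real I/O
    return "gold" if name == "Alice" else "standard"

async def main() -> list[str]:
    # Standard pattern: await several calls concurrently with gather.
    return list(await asyncio.gather(fetch_tier("Alice"), fetch_tier("Bob")))

print(asyncio.run(main()))  # ['gold', 'standard']
```

Inside the sandbox the AI can use top-level `await` directly; `asyncio.run()` appears here only to make the sketch self-contained.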

**What's actually happening behind the scenes:**

Let's break down the key lines from the transcript above:

```python
tier = get_customer_tier(customer_name)
```

This calls a stub function in the sandbox — get_customer_tier was never defined in the sandbox, but our RPC and transparent proxying make it appear present to the AI. The stub intercepts the call and triggers an RPC to your runtime, where your actual get_customer_tier() executes with access to your database, environment, etc.

```python
tier
```

```
CustomerTier(level='gold', benefits=['free_shipping', 'priority_support'])
```

The return value is sent back as a transparent proxy. The AI sees what looks like a CustomerTier object, but it’s actually a lightweight reference to the real object in your runtime.

```python
discount = calculate_discount(tier)
```

When the AI passes tier to another function, the proxy is sent over RPC. Your actual calculate_discount() executes in your runtime with the real CustomerTier object.

```python
final_price = base_price * (1 - discount)
```

Simple calculations execute directly in the sandbox — no RPC needed for basic operations.

```python
result = OrderResult(...)
```

Instantiating OrderResult calls a stub that triggers your actual class constructor in your runtime, returning another proxy.

3. The result type is validated and returned to your code:

```python
result = process_order("Alice", 100.0)
# Returns: OrderResult(customer="Alice", original_price=100.0, discount=0.15, final_price=85.0)
```

Observe that no schema was generated or needed to return a value to your code. Instead, an object was instantiated in your runtime, and Agentica verifies that the type of result matches the required return type (OrderResult), which ensures type safety.

4. Why this architecture matters:
Functions and classes you pass in scope execute in your environment with access to all their normal dependencies: database connections, API clients, environment variables, the file system, and so on. The AI doesn’t need to know implementation details. From its perspective, it’s just calling functions and interacting with data. But those functions and data interactions run where your state and resources live.
Because Agentica operates on objects by reference rather than by serialization, you never need to define schemas or conversion logic:
  • Context size is controlled: Large strings, arrays, etc. may be manipulated without ever being fully expanded into the context
  • Rich return types: Complex classes and types work — not just JSON-compatible primitives
  • No serialization overhead: Objects stay in your runtime; only lightweight references cross the boundary
  • Natural type safety: Your return type annotations are enforced directly and may contain arbitrary instantiation and validation logic, just like the rest of your code
  • Stateful object interaction: Pass around database connections, file handles, API clients, or any object with state
Traditional frameworks require you to serialize everything to JSON-compatible formats and define schemas to validate structure. With Agentica, the AI instantiates your actual classes in your runtime and returns real objects — no schemas needed.
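For instance, a return type can carry its own correctness checks in ordinary Python code. This is a sketch assuming a dataclass-based OrderResult with invented validation rules, not the real class from the example:

```python
from dataclasses import dataclass

# Hypothetical version of OrderResult: validation lives in your own code,
# not in a separately maintained JSON schema.
@dataclass
class OrderResult:
    customer: str
    original_price: float
    discount: float
    final_price: float

    def __post_init__(self) -> None:
        if not 0.0 <= self.discount <= 1.0:
            raise ValueError("discount must be a fraction between 0 and 1")
        if abs(self.final_price - self.original_price * (1 - self.discount)) > 1e-9:
            raise ValueError("final_price is inconsistent with the discount")

order = OrderResult("Alice", 100.0, 0.15, 85.0)  # passes validation
```

When the AI instantiates this class in your runtime, a bad value raises immediately, so invalid objects never reach your code.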
Complex objects are represented as lightweight proxies in the sandbox, not fully serialized. This means:
  • Large data stays compact: A 1GB DataFrame, a 10MB image, or a million-row dataset can be manipulated by the AI without ever being expanded into its context or copied into its responses. The AI calls methods on the proxy; operations happen in your runtime.
  • Non-serializable objects (database connections, file handles) can be passed around seamlessly
Method calls on proxied objects trigger lightweight RPC calls back to your runtime, so operations stay fast and memory-efficient.
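To build intuition, here is a toy single-process version of the proxy idea — not Agentica's actual implementation — a wrapper that forwards attribute access to the real object, where in Agentica the forwarding hop is an RPC and the real object lives in your runtime:

```python
class Proxy:
    """Toy transparent proxy: forwards every attribute access to the
    wrapped object. In Agentica the forwarding hop is an RPC, and the
    real object stays in your runtime, not in the sandbox."""

    def __init__(self, target):
        object.__setattr__(self, "_target", target)

    def __getattr__(self, name):
        # Any attribute not found on the proxy is fetched from the target.
        return getattr(object.__getattribute__(self, "_target"), name)

# A large object: the sandbox-side code only ever holds the proxy.
data = list(range(1_000_000))
p = Proxy(data)
print(p.count(42))   # method call forwarded to the real list -> 1
print(p.index(999))  # -> 999
```

The million-element list is never copied through the proxy; only the method name, arguments, and result cross the boundary.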
The AI has access to a full Python REPL, not just predefined function calls. This fundamentally changes what’s possible:
  • Actual programming, not just tool calling: The AI writes code with loops, conditionals, error handling—real algorithmic logic. With tool calling, you’re limited to sequences of predefined operations. Want to iterate over items until a condition is met? With REPL, write a while loop. With tools, you’d need to define a specific tool for that exact iteration pattern.
  • Unrestricted data manipulation: Slice lists, filter arrays, transform objects, compute statistics—operations you’d never think to expose as tools are just Python code. Tool calling means either defining tools for every possible operation or limiting what the AI can do.
  • Interactive problem solving: The AI doesn’t need to zero-shot a complete programmatic solution to the problem; in fact, there may not even be one. The AI can inspect data to understand its structure, then write code based on what it finds. Adaptive problem-solving requires seeing data and writing new code in response—not just calling predetermined functions.
The key difference: the AI thinks in terms of code, not tool sequences. This unlocks algorithmic reasoning, data-driven adaptation, and the full expressiveness of Python—not just what you anticipated and wrapped in tools.
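As a small illustration of the difference (the data is invented, not from the example above), logic like this is a few lines in a REPL but would need a bespoke tool in a tool-calling setup:

```python
# Greedily pick items until a budget would be exceeded: an iteration
# pattern no fixed tool schema anticipates, but trivial as code.
items = [3, 7, 2, 9, 4]
budget, total, picked = 15, 0, []
for x in sorted(items, reverse=True):
    if total + x <= budget:
        total += x
        picked.append(x)
print(picked, total)  # [9, 4, 2] 15
```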

The Mental Model

Think of Agentica as giving the AI a coding session with a direct line to your runtime:
  • Sandboxed execution: The AI writes Python code in a safe, isolated environment (even when you’re using TypeScript)
  • RPC bridge: Functions you pass in scope appear in the sandbox as stubs, but execute in your runtime
  • Your code stays local: All actual computation happens in your process with full access to your dependencies
  • Objects are proxied: Return values from your functions are represented as lightweight proxies in the sandbox, not fully serialized
  • Full language power: The AI can write multi-step logic, inspect values, use conditionals, loops—anything Python supports
  • Type safety enforced: Return values are validated against your type annotations
This architecture enables patterns impossible with traditional tool calling: chaining function calls with intermediate processing, dynamically deciding which functions to call based on results, working with complex stateful objects, and more.
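A sketch of that dynamic decision-making, with invented function bodies standing in for real runtime code:

```python
# Hypothetical runtime functions (in a real session these would be
# stubs that RPC back to your process).
def get_customer_tier(name: str) -> str:
    return "gold" if name == "Alice" else "standard"

def calculate_discount(tier: str) -> float:
    return {"gold": 0.15, "standard": 0.05}[tier]

# The AI inspects an intermediate result and chooses the next call
# accordingly; a fixed tool sequence cannot branch like this.
tier = get_customer_tier("Alice")
discount = calculate_discount(tier) if tier == "gold" else 0.0
print(discount)  # 0.15
```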

Next Steps