Overview
Agentica allows AI to operate within your runtime by combining Remote Procedure Call (RPC) with transparent proxying in a Python REPL. These two mechanisms combine to allow AI to:
- transparently call all of your functions — no schemas or tools required
- operate on any data by reference — non-serializable data is allowed and large data does not have to be expanded into the context
- write code in terms of your code, state, APIs, etc. — no need to host your workflow or write integrations
Anatomy of an Invocation
Let’s walk through exactly what happens when you call a magic function. Consider the simplified example below, where we have elided portions of the code for clarity.
What Happens Behind the Scenes
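The example’s listing is elided above; as a rough, hypothetical reconstruction of its shape (every implementation here is an invented placeholder, not Agentica’s actual API — in particular, the body of `process_order` is the part Agentica delegates to the AI):

```python
from dataclasses import dataclass
from enum import Enum

class CustomerTier(Enum):
    STANDARD = "standard"
    GOLD = "gold"

@dataclass
class OrderResult:
    customer_name: str
    final_price: float

def get_customer_tier(name: str) -> CustomerTier:
    """Look up a customer's tier (in reality this might query your database)."""
    return CustomerTier.GOLD if name == "Alice" else CustomerTier.STANDARD

def calculate_discount(tier: CustomerTier, base_price: float) -> float:
    """Apply the tier's discount to a base price."""
    return base_price * (0.9 if tier is CustomerTier.GOLD else 1.0)

def process_order(customer_name: str, base_price: float) -> OrderResult:
    """Look up customer tier, calculate discount, and create order"""
    # In the real example this body is delegated to the AI; shown here as a
    # plain implementation only so the sketch is self-contained.
    tier = get_customer_tier(customer_name)
    return OrderResult(customer_name, calculate_discount(tier, base_price))
```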
When process_order("Alice", 100.0) is called, it triggers the following interaction.
1. Agentica sends a request to the AI with:
- The instruction: "Look up customer tier, calculate discount, and create order"
- The input parameters: customer_name = "Alice", base_price = 100.0
- The signatures and docstrings of functions in scope: get_customer_tier(), calculate_discount()
- The details of the types in scope: OrderResult, CustomerTier
- The details of the expected return type, OrderResult
2. The AI writes code in a sandboxed Python REPL. The sandbox contains stubs and proxies for get_customer_tier, calculate_discount, OrderResult, CustomerTier, etc. The AI can write and evaluate code like normal. Here’s what a sample REPL session could look like.
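The sample session itself is not reproduced here; a hypothetical sketch of what the AI’s code could look like follows. The stand-in definitions exist only so the sketch runs on its own — in the real sandbox these names are injected as RPC-backed stubs, not defined locally:

```python
from dataclasses import dataclass
from enum import Enum

# Local stand-ins so this sketch is self-contained; in the real sandbox
# these names are injected as RPC-backed stubs and proxies.
class CustomerTier(Enum):
    STANDARD = "standard"
    GOLD = "gold"

@dataclass
class OrderResult:
    customer_name: str
    final_price: float

def get_customer_tier(name: str) -> CustomerTier:
    return CustomerTier.GOLD if name == "Alice" else CustomerTier.STANDARD

def calculate_discount(tier: CustomerTier, base_price: float) -> float:
    return base_price * (0.9 if tier is CustomerTier.GOLD else 1.0)

# --- what the AI might type in the REPL ---
tier = get_customer_tier("Alice")        # RPC to your runtime; returns a proxy
price = calculate_discount(tier, 100.0)  # the proxy rides back over RPC
result = OrderResult(customer_name="Alice", final_price=price)
```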
The name get_customer_tier was never defined in the sandbox, but RPC and transparent proxying make it appear present to the AI. The stub intercepts the call and triggers an RPC to your runtime, where your actual get_customer_tier() executes with access to your database, environment, etc.
The call appears to return a CustomerTier object, but it’s actually a lightweight reference to the real object in your runtime.
When tier is passed to another function, the proxy is sent over RPC. Your actual calculate_discount() executes in your runtime with the real CustomerTier object.
Instantiating OrderResult calls a stub that triggers your actual class constructor in your runtime, returning another proxy.
3. The result type is validated and the result returned to your code:
Agentica checks that result matches the required return type (OrderResult), which ensures type safety.
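A simplified sketch of what such a return-type check amounts to (the helper name `validate_return` is hypothetical, and Agentica’s real validation is presumably richer than a plain `isinstance` test):

```python
def validate_return(value, expected_type):
    """Hypothetical helper: reject results that don't match the annotation."""
    if not isinstance(value, expected_type):
        raise TypeError(
            f"expected {expected_type.__name__}, got {type(value).__name__}"
        )
    return value

class OrderResult:
    """Stand-in for the example's return type."""
    def __init__(self, customer_name, final_price):
        self.customer_name = customer_name
        self.final_price = final_price

# A matching value passes through unchanged; a mismatch raises TypeError.
result = validate_return(OrderResult("Alice", 90.0), OrderResult)
```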
4. Why this architecture matters:
RPC: Your Code Stays in Your Runtime
Functions and classes you pass in scope execute in your environment with access to all their normal dependencies—database connections, API clients, environment variables, file system, etc. The AI doesn’t need to know implementation details. From its perspective, it’s just calling functions and interacting with data. But those functions and data interactions run where your state and resources live.
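The RPC idea can be pictured with a minimal sketch: the sandbox only holds a stub that ships the call across the boundary, while the real function (with its real dependencies) lives on the host side. The registry and helper names here are invented for illustration, not Agentica internals:

```python
# Stands in for your runtime's function registry on the host side.
HOST_FUNCTIONS = {}

def host_register(fn):
    """Register a host-side function so stubs can dispatch to it."""
    HOST_FUNCTIONS[fn.__name__] = fn
    return fn

def make_stub(name):
    """Build a sandbox-side stub: it looks like a normal function, but only
    ships the call (name + arguments) across the boundary."""
    def stub(*args, **kwargs):
        return HOST_FUNCTIONS[name](*args, **kwargs)  # the "RPC" dispatch
    stub.__name__ = name
    return stub

@host_register
def get_customer_tier(name):
    # In your real runtime this could query a database via a live connection.
    return "gold" if name == "Alice" else "standard"

# Inside the sandbox, the AI sees only the stub:
get_customer_tier_stub = make_stub("get_customer_tier")
```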
Operating by Reference: No Schemas Required
Because Agentica operates on objects by reference rather than by serialization, you never need to define schemas or conversion logic:
- Context size is controlled: Large strings, arrays, etc. may be manipulated without ever having to be expanded fully into the context
- Rich return types: Complex classes and types work — not just JSON-compatible primitives
- No serialization overhead: Objects stay in your runtime; only lightweight references cross the boundary
- Natural type safety: Your return type annotations are enforced directly and may contain arbitrary instantiation and validation logic, like the rest of your code
- Stateful object interaction: Pass around database connections, file handles, API clients, or any object with state
Proxies: Efficient Data Handling
Complex objects are represented as lightweight proxies in the sandbox, not fully serialized. This means:
- Large data stays compact: A 1GB DataFrame, a 10MB image, or a million-row dataset can be manipulated by the AI without ever being expanded into its context or copied into the sandbox. The AI calls methods on the proxy; operations happen in your runtime.
- Non-serializable objects (database connections, file handles) can be passed around seamlessly
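The proxy mechanism can be sketched in a few lines: the sandbox-side object holds only an identifier, and attribute access is forwarded to the real object on the host. The `Proxy` class and registry here are a toy illustration under assumed names, not Agentica’s implementation:

```python
class Proxy:
    """Sketch of a by-reference proxy: it holds only an object id and
    forwards attribute access to the real object living on the host side."""

    def __init__(self, obj_id, registry):
        self._obj_id = obj_id
        self._registry = registry

    def __getattr__(self, name):
        # Only called for names not found on the proxy itself — i.e. the
        # real object's methods and attributes. This hop is the "RPC".
        target = self._registry[self._obj_id]
        return getattr(target, name)

# Host side: the large object never leaves your runtime...
registry = {"obj-1": list(range(1_000_000))}

# ...while the sandbox holds only the lightweight reference.
p = Proxy("obj-1", registry)
hits = p.count(42)  # list.count runs against the real million-item list
```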
REPL: True Code Execution
The AI has access to a full Python REPL, not just predefined function calls. This fundamentally changes what’s possible:
- Actual programming, not just tool calling: The AI writes code with loops, conditionals, error handling—real algorithmic logic. With tool calling, you’re limited to sequences of predefined operations. Want to iterate over items until a condition is met? With a REPL, write a while loop. With tools, you’d need to define a specific tool for that exact iteration pattern.
- Unrestricted data manipulation: Slice lists, filter arrays, transform objects, compute statistics—operations you’d never think to expose as tools are just Python code. Tool calling means either defining tools for every possible operation or limiting what the AI can do.
- Interactive problem solving: The AI doesn’t need to zero-shot a complete programmatic solution to the problem; in fact, there may not even be one. The AI can inspect data to understand its structure, then write code based on what it finds. Adaptive problem solving requires seeing data and writing new code in response—not just calling predetermined functions.
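The iterate-until-a-condition point above can be made concrete with a toy example. The paginated source `next_batch` is hypothetical (standing in for any real API the AI might be given); the while loop is exactly the kind of logic a fixed tool-calling sequence cannot express:

```python
def next_batch(cursor, data, size=3):
    """Hypothetical paginated data source (stands in for a real API)."""
    return data[cursor:cursor + size], cursor + size

data = [5, 12, 7, 30, 2, 9]
cursor, found = 0, None
# Keep fetching batches until we find a value over the threshold — the AI
# inspects each batch and reacts to what it sees, rather than following a
# predefined call sequence.
while found is None and cursor < len(data):
    batch, cursor = next_batch(cursor, data)
    for item in batch:
        if item > 20:
            found = item
            break
```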
The Mental Model
Think of Agentica as giving the AI a coding session with a direct line to your runtime:
- Sandboxed execution: The AI writes Python code in a safe, isolated environment (even when you’re using TypeScript)
- RPC bridge: Functions you pass in scope appear in the sandbox as stubs, but execute in your runtime
- Your code stays local: All actual computation happens in your process with full access to your dependencies
- Objects are proxied: Return values from your functions are represented as lightweight proxies in the sandbox, not fully serialized
- Full language power: The AI can write multi-step logic, inspect values, use conditionals, loops—anything Python supports
- Type safety enforced: Return values are validated against your type annotations