When working with AI agents, you can design them to raise exceptions back to you when they encounter specific conditions. This allows you to handle domain-specific error cases gracefully.

Agent errors occur when the agent intentionally raises an exception. This can happen if:
You told the agent to raise an exception in certain situations
The agent believes your task to be impossible or contradictory
The tools you provided to the agent are not working as expected
Agent errors are different from operational errors, which are platform-level failures like network issues, API timeouts, or sandbox errors.
The simplest approach is to allow the agent to return None/null when it cannot complete the task:
```python
from agentica import magic

@magic()
def extract_date(text: str) -> tuple[int, int, int] | None:
    """
    Extract date in YYYY-MM-DD format.
    Return None if no date found.
    """
    ...

try:
    date = extract_date(document)
    if date is None:
        # Handle missing date with a sentinel value
        date = (1970, 1, 1)
except Exception as e:
    logger.error(f"Failed to extract date: {e}")
    # Fallback logic
```
You can define your own exception classes and have the agent raise them when specific error conditions occur. This is useful for domain-specific error handling that goes beyond built-in exceptions.

Best practices for custom exceptions:
Pass custom exceptions into the function or agent scope so they are available to raise
Clearly document in your docstring when each exception should be raised
Use descriptive exception names that indicate the error condition
Provide clear error messages that help diagnose the issue
The agent can see and understand your documentation to know when to raise each exception
```python
from agentica import magic

# Define custom exceptions
class InsufficientDataError(Exception):
    """Raised when the input data is incomplete or insufficient for analysis."""
    pass

class DataQualityError(Exception):
    """Raised when data quality is too poor for reliable results."""
    pass

class UnsupportedFormatError(Exception):
    """Raised when the data format is not supported."""
    pass

@magic(InsufficientDataError, DataQualityError, UnsupportedFormatError)
def analyze_dataset(data: str) -> dict:
    """
    Analyze the dataset and return insights.

    Raises:
        InsufficientDataError: If the dataset has fewer than 10 rows
        DataQualityError: If more than 50% of values are missing or invalid
        UnsupportedFormatError: If the data format is not CSV or JSON
        ValueError: If the data cannot be parsed

    Returns a dictionary with analysis results.
    """
    ...

# Use with try/except
try:
    results = analyze_dataset(raw_data)
    print(f"Analysis complete: {results}")
except InsufficientDataError as e:
    logger.warning(f"Not enough data: {e}")
    results = {"status": "insufficient_data", "message": str(e)}
except DataQualityError as e:
    logger.warning(f"Poor data quality: {e}")
    results = perform_basic_analysis(raw_data)  # Fallback
except UnsupportedFormatError as e:
    logger.error(f"Format not supported: {e}")
    results = {"status": "error", "message": "Please provide CSV or JSON"}
except ValueError as e:
    logger.error(f"Parsing failed: {e}")
    raise
```
The agent can see your docstrings! Be specific about the conditions that should trigger each exception. The more precise your documentation, the more reliably the agent will raise the appropriate exception.
Type annotations help guide the AI and constrain the types the agent is capable of returning, but sometimes you need additional validation logic that is not expressible in the type system.
Validation logic may live in the type itself, for example during initialization of a custom class.
In these cases, the agent can see validation errors and self-correct before returning to you.
```python
from dataclasses import dataclass
from agentica import magic

@dataclass
class Price:
    amount: float
    currency: str

    def __post_init__(self):
        if self.amount < 0:
            raise ValueError("Price must be non-negative")
        if self.currency not in ['USD', 'EUR', 'GBP']:
            raise ValueError(f"Unsupported currency: {self.currency}")

@magic()
def extract_price(text: str) -> Price:
    """Extract price from text."""
    ...
```
Here, Price cannot be instantiated without satisfying the validation logic, and therefore cannot be returned by the agent until the validation passes.
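As a quick standalone sanity check (plain Python, no agent involved, repeating the Price definition from above), construction itself enforces the rules:

```python
from dataclasses import dataclass

@dataclass
class Price:
    amount: float
    currency: str

    def __post_init__(self):
        # Validation runs on every construction attempt
        if self.amount < 0:
            raise ValueError("Price must be non-negative")
        if self.currency not in ['USD', 'EUR', 'GBP']:
            raise ValueError(f"Unsupported currency: {self.currency}")

# Valid input constructs normally
ok = Price(amount=19.99, currency="USD")

# Invalid input raises at construction time, so an invalid Price
# can never exist, let alone be returned
try:
    Price(amount=-5.0, currency="USD")
except ValueError as e:
    print(e)  # Price must be non-negative
```

Because the error surfaces at construction, the agent receives it as feedback and can retry rather than handing you a malformed value.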
Off-the-shelf validation libraries such as Pydantic (Python) or Zod (TypeScript) may be used to integrate with existing validation logic or describe more complex validation requirements.
Pydantic provides powerful declarative validation through field constraints and custom validators.

Field-level constraints can specify numeric ranges, string lengths, and other basic requirements:
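A minimal sketch of such a model, with field names chosen to match the validator examples that follow (the specific fields rating, sentiment, summary, and categories are illustrative assumptions, not a fixed schema):

```python
from typing import Literal
from pydantic import BaseModel, Field

class ProductReview(BaseModel):
    # Numeric range constraint: rating must be between 1 and 5 inclusive
    rating: int = Field(ge=1, le=5, description="Star rating from 1 (worst) to 5 (best)")
    # Literal restricts the field to an enumerated set of values
    sentiment: Literal['positive', 'neutral', 'negative']
    # String length constraints
    summary: str = Field(min_length=10, max_length=300, description="One-sentence summary")
    # Collection size constraint: at least one category required
    categories: list[str] = Field(min_length=1, description="Aspects the review discusses")
```

Constructing a `ProductReview` with `rating=6` or an empty `categories` list raises a `ValidationError` describing exactly which constraint failed.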
Not only do these fields provide basic validation, but they also serve as excellent documentation for the agent.

Custom field validators handle complex logic on individual fields using @field_validator:
```python
from pydantic import BaseModel, field_validator

class ProductReview(BaseModel):
    # ... fields as above ...

    @field_validator('categories')
    @classmethod
    def validate_categories(cls, v: list[str]) -> list[str]:
        allowed = {'quality', 'price', 'service', 'delivery', 'packaging'}
        for category in v:
            if category not in allowed:
                raise ValueError(f"Invalid category: {category}")
        return v
```
Cross-field validation uses @model_validator to validate relationships between fields:
```python
from pydantic import BaseModel, model_validator

class ProductReview(BaseModel):
    # ... fields and field_validator as above ...

    @model_validator(mode='after')
    def validate_sentiment_matches_rating(self) -> 'ProductReview':
        if self.rating >= 4 and self.sentiment == 'negative':
            raise ValueError("High rating inconsistent with negative sentiment")
        if self.rating <= 2 and self.sentiment == 'positive':
            raise ValueError("Low rating inconsistent with positive sentiment")
        return self
```
The agent sees Pydantic validation errors and adjusts its output to satisfy all constraints:
```python
from agentica import magic

@magic()
def analyze_review(review_text: str) -> ProductReview:
    """Analyze this product review and extract structured information."""
    ...

review = analyze_review("Great product! Fast shipping and excellent quality. 5 stars!")
# All constraints are guaranteed to be satisfied
```
You may encounter edge cases where a task is genuinely impossible (missing required data, contradictory constraints, etc.). In these cases, you can design your application to degrade gracefully, maintaining basic functionality even when the full AI operation cannot complete.
Frequent fallbacks indicate an opportunity to refine your approach—adjusting prompts, choosing a different model, or providing more context. Use fallback patterns to handle genuine edge cases.
If a complex AI operation fails, fall back to simpler approaches. This example shows AI generating database migrations, with fallbacks to safer manual approaches.

First, define your AI-powered function that attempts the complex task:
```python
from agentica import magic

@magic()
def generate_migration(schema_old: dict, schema_new: dict) -> str:
    """
    Generate a SQL migration script to transform the old schema to the new one.
    Handle complex cases like:
    - Column renames (detect via similarity, not just adds/drops)
    - Data type changes with appropriate conversions
    - Foreign key updates
    - Index optimizations
    Return valid SQL that preserves data.
    """
    ...
```
Then create a simpler, safer fallback that generates a basic migration:
```python
def generate_basic_migration(schema_old: dict, schema_new: dict) -> str:
    """Generate simple ADD/DROP column migration without smart renames."""
    old_cols = set(schema_old.get('columns', []))
    new_cols = set(schema_new.get('columns', []))
    added = new_cols - old_cols
    dropped = old_cols - new_cols

    sql_lines = []
    table = schema_new.get('table_name', 'table')
    for col in dropped:
        sql_lines.append(f"ALTER TABLE {table} DROP COLUMN {col};")
    for col in added:
        sql_lines.append(f"ALTER TABLE {table} ADD COLUMN {col} VARCHAR(255);")
    return "\n".join(sql_lines) if sql_lines else "-- No changes detected"
```
Attempt the smart migration first, falling back to basic if it fails:
```python
def create_migration(schema_old: dict, schema_new: dict) -> str:
    """Generate migration with AI, fallback to basic diff."""
    try:
        migration = generate_migration(schema_old, schema_new)
        logger.info("Generated smart migration with AI")
        return migration
    except Exception as e:
        logger.warning(f"AI migration generation failed: {e}, using basic diff")
        return generate_basic_migration(schema_old, schema_new)
```
Sometimes an AI operation can partially succeed. Instead of treating this as a complete failure, design your workflow to continue with whatever succeeded. This example shows AI refactoring code across multiple files.

Define a workflow where the AI processes multiple items, tracking successes and failures:
```python
from dataclasses import dataclass
from agentica import magic

@dataclass
class RefactorResult:
    file_path: str
    success: bool
    updated_code: str | None
    error: str | None

@magic()
def refactor_file(code: str, instruction: str) -> str:
    """
    Refactor the given code according to the instruction.
    Preserve functionality while improving code quality.
    """
    ...

def refactor_codebase(files: dict[str, str], instruction: str) -> list[RefactorResult]:
    """Refactor multiple files, continuing even if some fail."""
    results = []
    for file_path, code in files.items():
        try:
            updated = refactor_file(code, instruction)
            results.append(RefactorResult(
                file_path=file_path,
                success=True,
                updated_code=updated,
                error=None
            ))
            logger.info(f"Successfully refactored {file_path}")
        except Exception as e:
            results.append(RefactorResult(
                file_path=file_path,
                success=False,
                updated_code=None,
                error=str(e)
            ))
            logger.warning(f"Failed to refactor {file_path}: {e}")
    return results
```
Then act on partial results, applying successful changes while reporting failures:
```python
def apply_refactoring(files: dict[str, str], instruction: str) -> dict:
    """Apply refactoring and report on partial success."""
    results = refactor_codebase(files, instruction)
    successful = [r for r in results if r.success]
    failed = [r for r in results if not r.success]

    # Write successful refactorings
    for result in successful:
        with open(result.file_path, 'w') as f:
            f.write(result.updated_code)

    # Log summary
    if len(successful) == len(results):
        logger.info(f"All {len(results)} files refactored successfully")
    elif len(successful) > 0:
        logger.warning(
            f"Partial success: {len(successful)}/{len(results)} files refactored. "
            f"Failed: {[r.file_path for r in failed]}"
        )
    else:
        logger.error("All refactoring attempts failed")

    return {
        "total": len(results),
        "successful": len(successful),
        "failed": len(failed),
        "failed_files": [r.file_path for r in failed]
    }
```
For critical operations, implement progressively simpler AI tasks as fallbacks. When a task requires data that isn't available or constraints that can't be met, the AI may raise an error. Simpler fallback tasks with relaxed requirements are more likely to succeed.

Define multiple AI approaches with decreasing strictness:
```python
from dataclasses import dataclass
from agentica import magic

@dataclass
class ShippingAddress:
    name: str
    street: str
    city: str
    state: str
    zip_code: str
    country: str

@magic()
def extract_validated_address(text: str) -> ShippingAddress:
    """
    Extract complete shipping address with ALL required fields.
    Fields: name, street, city, state, zip_code, country
    Raise an error if ANY field is missing from the text.
    """
    ...

@magic()
def extract_partial_address(text: str) -> ShippingAddress | None:
    """
    Extract shipping address. Return None if no address is found.
    Fill in 'unknown' for any missing fields.
    """
    ...

@magic()
def extract_location_mentions(text: str) -> str:
    """
    Extract any location information mentioned (city, state, country, etc).
    Return as a simple string description of what was found.
    """
    ...
```
Attempt each approach, falling back when required data is missing:
```python
def process_shipping_info(text: str) -> dict:
    """Extract shipping information with fallback levels."""
    # Try complete validated extraction
    try:
        address = extract_validated_address(text)
        logger.info("Complete shipping address extracted")
        return {"address": address, "completeness": "complete"}
    except Exception as e:
        logger.warning(f"Complete address extraction failed: {e}")

    # Try partial extraction
    try:
        address = extract_partial_address(text)
        if address:
            logger.warning("Partial address extracted, manual review needed")
            return {"address": address, "completeness": "partial"}
        else:
            logger.warning("No structured address found")
    except Exception as e:
        logger.error(f"Partial address extraction failed: {e}")

    # Final fallback - just get location mentions
    location_text = extract_location_mentions(text)
    logger.error("Could not extract structured address, only location mentions")
    return {"address": None, "location_text": location_text, "completeness": "minimal"}
```
When the text only mentions “Send it to John in Seattle”, the validated extraction fails (missing street, state, zip, country), but the minimal extraction can still return “Seattle” as the location.