Every tool gives you pieces. We close the loop.
The only AI agent platform with a closed-loop operational system. Develop → Trace → Feedback → Eval → Optimize → Deploy — and back again. Open-source SDK for developers, full platform for teams. From failing trace to fixed agent in 10 minutes, not 4 hours.
Failing trace → manually write test case → manually rewrite prompt → manually test → manually deploy. Every production failure is a multi-hour context-switching fire drill across 3–4 different tools.
Failing trace → thumbs down → eval case auto-created → Prompt Optimizer rewrites → Compare Mode A/B tests → deploy → SDK pulls new version. All within one system. Zero context switching.
Build agents in Python. Run locally. Connect to the platform for observability, prompt management, and eval — when you're ready. fa.connect() for platform services.
Build agents in the UI. Visual chain editor. Configure connectors. Test in the Playground. Monitor in the dashboard. No SDK required.
fastaiagent SDK is open-source (Apache 2.0) and runs anywhere Python runs. Start standalone — add the platform later with a single fa.connect() call.
response_format support for structured outputs.
RunContext[T] passes DB connections, API clients, and runtime state to tools.
Multi-agent orchestration with context passthrough, streaming delegation, and callable dynamic prompts.
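A minimal code-first sketch of that workflow. Only fa.connect(), RunContext[T], and response_format appear in the copy above; Agent, @agent.tool, run_sync, and the model name are illustrative assumptions, not confirmed SDK API.

```python
# Illustrative sketch only -- the real fastaiagent API may differ.
from dataclasses import dataclass
from pydantic import BaseModel
import fastaiagent as fa  # hypothetical import name


class TicketSummary(BaseModel):
    """Structured output enforced via response_format."""
    priority: str
    summary: str


@dataclass
class Deps:
    db: object  # e.g. a live database connection handed to tools at runtime


agent = fa.Agent(                       # Agent class: assumed name
    model="gpt-4o",
    system_prompt="You are a support triage agent.",
    response_format=TicketSummary,
)


@agent.tool
def lookup_order(ctx: fa.RunContext[Deps], order_id: str) -> str:
    """Tools receive DB connections and runtime state via RunContext[T]."""
    return ctx.deps.db.get_order(order_id)


result = agent.run_sync("Summarise ticket #4821", deps=Deps(db=...))
print(result.output)

# When ready, a single call bridges the standalone SDK and the platform:
fa.connect(api_key="...")
```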
Cyclic graphs, checkpointing, HITL gates, parallel execution, and conditional routing — all in code.
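A sketch of what such an orchestration graph could look like in code. Graph, add_node, add_edge, human_gate, and END are assumed names; only the capabilities themselves (cycles, checkpointing, HITL gates, conditional routing, parallel execution) come from the line above.

```python
# Hypothetical graph-orchestration API shape; node agents are placeholders.
import fastaiagent as fa

graph = fa.Graph(checkpointing=True)            # persist state, resume on failure
graph.add_node("research", research_agent)      # research_agent defined elsewhere
graph.add_node("draft", draft_agent)
graph.add_node("review", fa.human_gate())       # HITL gate: pause for approval

graph.add_edge("research", "draft")
graph.add_edge("draft", "review")
# Conditional routing makes the graph cyclic: rejected drafts loop back.
graph.add_edge("review", "draft", condition=lambda s: s["verdict"] == "revise")
graph.add_edge("review", fa.END, condition=lambda s: s["verdict"] == "approve")

graph.run({"topic": "Q3 churn analysis"})
```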
7+ scorers, LLM-as-Judge, trajectory eval, session eval. Publish results to platform with one call.
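An eval sketch under the same caveat: Dataset, LLMJudge, evaluate, and publish are assumed names; the line above only promises 7+ scorers, LLM-as-Judge, and one-call publishing.

```python
# Illustrative eval harness; module paths and class names are assumptions.
import fastaiagent as fa
from fastaiagent.evals import Dataset, LLMJudge

# agent: the agent under test, defined as in the quickstart sketch above.
dataset = Dataset.from_jsonl("eval_cases.jsonl")   # includes auto-created cases
judge = LLMJudge(criteria="The answer is grounded in the retrieved context.")

report = fa.evaluate(agent, dataset, scorers=[judge])
report.publish()   # one call pushes results to the platform
```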
Document ingestion, multi-strategy chunking, embedding, and vector search — fully local, no platform needed.
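A local RAG sketch; Pipeline, ingest, and search are assumed names for the fully local flow described above.

```python
# Hypothetical local RAG pipeline -- no platform connection required.
from fastaiagent.rag import Pipeline   # assumed import path

rag = Pipeline(
    chunking="recursive",                         # one of the multi-strategy chunkers
    embedding_model="text-embedding-3-small",     # any supported embedding model
)
rag.ingest(["./docs/handbook.pdf", "./docs/faq.md"])     # document ingestion
hits = rag.search("What is our refund policy?", top_k=5)
for hit in hits:
    print(round(hit.score, 3), hit.text[:80])
```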
Fork any execution at any step and re-run with modified inputs. The debugging feature no other framework has.
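Roughly what fork-and-re-run could look like; load_run, fork, and resume are illustrative names for the capability, not documented API.

```python
# Hypothetical execution forking: branch a traced run at a chosen step.
import fastaiagent as fa

run = fa.load_run("run_7f3a")                  # a previously traced execution
fork = run.fork(step=3)                        # branch just before the failing step
fork.inputs["query"] = "refund policy 2024"    # re-run with modified inputs
outcome = fork.resume()

print("original:", run.output)
print("fork:    ", outcome.output)             # compare outcomes side by side
```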
Version, compose with fragments, pull from platform. PromptRegistry(source="platform") for team-managed prompts.
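PromptRegistry(source="platform") is quoted from the line above; get() and its arguments are assumptions.

```python
# Pull a team-managed prompt version instead of hard-coding it.
import fastaiagent as fa
from fastaiagent import PromptRegistry

registry = PromptRegistry(source="platform")     # or a local registry while offline
prompt = registry.get("support-triage", version="latest")
agent = fa.Agent(model="gpt-4o", system_prompt=prompt)
```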
OpenTelemetry spans with BatchSpanProcessor. Local SQLite storage or export to platform via fa.connect().
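A configuration sketch; enable_tracing and its arguments are assumed names for the local-SQLite-or-platform choice described above.

```python
# Hypothetical tracing setup: keep spans local, or stream them to the platform.
import fastaiagent as fa

fa.enable_tracing(storage="sqlite", path="./traces.db")   # local-only by default
# fa.connect(api_key="...")                               # uncomment to export to the platform
```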
Input, output, tool-call, tool-result, and content guardrails. 5 built-in guards plus custom definitions.
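A guardrail sketch; PIIFilter and input_guard are illustrative stand-ins for the built-in and custom guards mentioned above.

```python
# Hypothetical guardrail wiring: one built-in guard plus one custom input guard.
import fastaiagent as fa
from fastaiagent.guardrails import PIIFilter, input_guard   # assumed names

@input_guard
def block_internal_codenames(text: str) -> bool:
    """Custom guard: reject inputs that mention unreleased project names."""
    return "project-orion" not in text.lower()

agent = fa.Agent(
    model="gpt-4o",
    guardrails=[PIIFilter(), block_internal_codenames],
)
```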
Develop → Trace → Feedback → Eval Case → Optimize → Compare → Deploy — in one system. Negative feedback auto-creates eval cases. Prompt Optimizer rewrites. Compare Mode A/B tests. Ship without switching tools.
Step through any execution span-by-span. Fork from any point and re-run with modified inputs. Compare original vs. forked outcomes side by side. Time-travel debugging for AI agents.
Open-source SDK and full platform share the same trace dashboard, prompt registry, and eval framework. fa.connect() bridges them with a single call.
Drag-and-drop canvas with cyclic graphs, conditional routing, parallel execution, HITL nodes, and checkpointing — no orchestration code needed.
Hybrid search, reranking, query rewriting, adaptive routing, self-grading, contextual enrichment — 20+ configurable retrieval feature flags.
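As a rough illustration of what those flags cover (the flag names and how they are set, in the UI or in config, are assumptions):

```python
# Hypothetical retrieval flag set mirroring the features listed above.
retrieval_flags = {
    "hybrid_search": True,           # blend keyword and vector retrieval
    "reranking": True,               # re-score candidates with a reranker
    "query_rewriting": True,         # rewrite the user query before retrieval
    "adaptive_routing": False,       # route queries to different indexes
    "self_grading": True,            # retriever grades its own chunks
    "contextual_enrichment": True,   # prepend document context to chunks
}
```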
Versioning, fragments, 3-mode auto-optimization, Compare Mode A/B testing, version analytics, and approval workflows.
Full hierarchical traces: LLM calls, tool invocations, tokens, latency, cost — linked to the exact prompt version used.
LLM-as-Judge scoring, RAG eval (Ragas), annotation queues, online eval policies, A/B comparisons, CI/CD eval API, and MLflow integration.
Long-term memory across conversations with automatic extraction. Provider-agnostic — works with any LLM. Configurable thresholds and vector-based recall.
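A memory configuration sketch; Memory and its parameters are assumptions drawn from "configurable thresholds and vector-based recall".

```python
# Hypothetical long-term memory setup, provider-agnostic by design.
import fastaiagent as fa

memory = fa.Memory(
    extraction="automatic",       # distil durable facts from each conversation
    recall="vector",              # embed memories, retrieve by similarity
    relevance_threshold=0.75,     # only surface sufficiently similar memories
)
agent = fa.Agent(model="gpt-4o", memory=memory)
```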
LLM-powered test data from schemas. Preview with any connector, export to eval datasets. Bootstrap testing without manual data collection.
Streaming playground with tool call visualization. Multi-turn conversation simulator with persona-driven users, session scorers, and trajectory analysis.
Databases, storage, messaging, CRM, email, and data processing. First-class MCP support with hosted servers.
Approval policies on tool calls. Chain pause for human input. Signed webhooks. Context-aware intelligent rejection.
RESTful API at /public/v1/. Scoped keys, rate limiting, SSE streaming, feedback, and webhooks.
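A consumer-side sketch: only the /public/v1/ prefix, scoped keys, and SSE streaming come from the line above; the exact path, header, and event format are assumptions.

```python
# Illustrative public-API client using the standard requests library.
import requests

resp = requests.post(
    "https://your-instance.example.com/public/v1/agents/support-triage/runs",  # hypothetical path
    headers={"Authorization": "Bearer <scoped-api-key>"},
    json={"input": "Where is my order #4821?"},
    stream=True,                                   # results arrive as SSE events
)
for line in resp.iter_lines():
    if line.startswith(b"data: "):
        print(line[len(b"data: "):].decode())
```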
SSO + 4-role RBAC, air-gapped deployment, audit trails, encrypted secrets, and EU AI Act compliance via FastAIShield.
Enterprise single sign-on with 4-role permission hierarchy: App Admin, Domain Admin, Developer, Viewer.
Standalone deployment bundles with pre-built images. Zero internet dependency for classified and regulated networks.
Complete audit trail: who did what, when. Action tracking, resource IDs, actor identification, and IP capture.
EU AI Act compliance platform. Risk assessment, accountability framework (19 roles), audit trails, and regulatory reporting.
OpenAI, Anthropic, Azure OpenAI, Ollama, AWS Bedrock, Google Vertex, and custom endpoints via enterprise gateway auth.
Databases, cloud storage, messaging, CRM, email, data processing, and REST/GraphQL. Each connector is reusable across all agents.
First-class support for Anthropic's Model Context Protocol. Connect external MCP servers, host your own, and discover tools dynamically.
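A wiring sketch; MCPServer and the mcp_servers argument are illustrative names for the MCP support described above.

```python
# Hypothetical MCP wiring: attach an external MCP server, discover tools at runtime.
import fastaiagent as fa

github_tools = fa.MCPServer(url="https://mcp.example.com/github")   # external server
agent = fa.Agent(
    model="claude-sonnet-4",        # assumed model identifier
    mcp_servers=[github_tools],     # tools are discovered dynamically
)
```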
Built-in vector store with plans for Qdrant, ChromaDB, Pinecone, Weaviate, and pgvector adapters.
Export traces and eval runs to MLflow. Bridge AI agent quality with your existing ML experiment tracking infrastructure.
OTel-native tracing with BatchSpanProcessor. Export to any OTel-compatible backend — Datadog, Grafana, or your own collector.
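Standard OpenTelemetry wiring for that export path. The OTel calls below are real API; the assumption is only that an OTel-native SDK picks up whatever tracer provider and BatchSpanProcessor you register.

```python
# Route agent spans to any OTel-compatible backend via an OTLP collector.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
trace.set_tracer_provider(provider)   # Datadog, Grafana, or your own collector
```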
Build, test, deploy, and run intelligent agents with visual workflows, advanced RAG, prompt optimization, and full lifecycle management.
Thin-client consumption portal for business users. Run agents, view structured results, and manage work — no technical skills required.
The Business Context layer. Knowledge graphs, entity resolution, and semantic understanding that make agents organisation-aware.
AI Governance and Compliance. Risk assessment, audit trails, and regulatory readiness for EU AI Act and beyond.