Ready to build Auditable AI Agents?
Lár (Irish for "core" or "center") is the open source standard for Deterministic, Auditable, and Air-Gap Capable AI agents, powered by LiteLLM.
It is a "define-by-run" framework that acts as a Flight Recorder for your agent, creating a complete audit trail for every single step.
Not a Wrapper
Lár is NOT a wrapper. It is a standalone, ground-up engine designed for reliability. It does not wrap LangChain, OpenAI Swarm, or any other library. It is pure, dependency-lite Python code optimized for "Code-as-Graph" execution.
The "Black Box" Problem
You are a developer launching a mission-critical AI agent. It works on your machine, but in production, it fails. You don't know why, where, or how much it cost. You just get a 100-line stack trace from a "magic" framework.
The "Glass Box" Solution
Lár removes the magic.
It is a simple engine that runs one node at a time, logging every single step to a forensic Flight Recorder.
This means you get:
1. Instant Debugging: See the exact node and error that caused the crash.
2. Free Auditing: A complete history of every decision and token cost, built-in by default.
3. Total Control: Build deterministic "assembly lines," not chaotic chat rooms.
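As a hedged illustration of what that buys you in practice, assume each flight-recorder entry is a plain dict carrying the node name, any error, and the token cost (the field names here are assumptions for the example, not Lár's documented schema). The entire audit is then just data you can query:

```python
# Hypothetical audit-trail entries; the field names (node, error, tokens,
# cost_usd) are assumptions for this example, not Lár's documented schema.
trace = [
    {"node": "plan",     "error": None,                  "tokens": 412, "cost_usd": 0.0031},
    {"node": "retrieve", "error": None,                  "tokens": 0,   "cost_usd": 0.0},
    {"node": "answer",   "error": "RateLimitError: 429", "tokens": 0,   "cost_usd": 0.0},
]

# 1. Instant debugging: find the exact node that failed.
failed = next((e for e in trace if e["error"]), None)
if failed:
    print(f"Crashed in node '{failed['node']}': {failed['error']}")

# 2. Free auditing: total spend and tokens are one sum away.
print("Total cost (USD):", sum(e["cost_usd"] for e in trace))
print("Total tokens:", sum(e["tokens"] for e in trace))
```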
"This demonstrates that for a graph without randomness or external model variability, Lár executes deterministically and produces identical state traces."
Stop guessing. Start building agents you can trust.
What's New in v1.4.1 (Feb 2026)?
Reasoning Models (System 2) are now first-class citizens. Lár supports DeepSeek R1, OpenAI o1, and Liquid Thinking out of the box.
- Audit Logic: Captures "hidden" reasoning traces into metadata.
- Clean State: Delivers only the final answer to your downstream nodes.
- Robustness: Handles malformed tags from local models.
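Conceptually, the handling looks like the sketch below: reasoning inside `<think>...</think>` tags is captured into metadata, only the final answer flows downstream, and an unclosed tag does not break the parse. The helper name and tag format are assumptions for illustration; Lár's actual implementation may differ.

```python
import re

# Conceptual sketch only: split_reasoning is a hypothetical helper illustrating
# how reasoning traces can be captured into metadata while downstream nodes
# receive just the final answer. Lár's real handling may differ.
THINK_TAG = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(raw: str) -> dict:
    reasoning = [m.strip() for m in THINK_TAG.findall(raw)]
    answer = THINK_TAG.sub("", raw)
    if "<think>" in answer:                 # malformed: opening tag never closed
        answer, _, dangling = answer.partition("<think>")
        reasoning.append(dangling.strip())
    return {"answer": answer.strip(), "metadata": {"reasoning": reasoning}}

result = split_reasoning("<think>2 + 2 = 4, double-check... yes.</think>The answer is 4.")
print(result["answer"])                      # -> "The answer is 4."
print(result["metadata"]["reasoning"])       # -> ["2 + 2 = 4, double-check... yes."]
```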
Read the Reasoning Models Guide
Demos & Examples
Learn by building with our ready-made demos:
- DMN: The Showcase: A Cognitive Architecture that sleeps, dreams, and remembers.
- RAG Agent Demo: A self-correcting RAG agent with local vector search.
- Customer Support Swarm: A multi-agent orchestration pattern.
Power Your IDE (Cursor / Windsurf)
Make your IDE an expert Lár Architect with this 2-Step Workflow:
- Reference The Rules: In your chat, type @lar/IDE_MASTER_PROMPT.md. This loads the strict typing rules.
- Use The Template: Fill out @lar/IDE_PROMPT_TEMPLATE.md with your agent requirements.
Ready for Production?
Lár is designed to be deployed as a standard Python library. Read our Deployment Guide to learn how to wrap your graph in FastAPI and deploy to AWS/Heroku.
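As a rough sketch of that deployment shape, the example below exposes a graph behind a single FastAPI endpoint. The run_graph function is a trivial stand-in for your real Lár graph; only the FastAPI/Pydantic usage is standard.

```python
# Sketch of exposing an agent graph over HTTP with FastAPI (run with:
# uvicorn main:app). run_graph below is a placeholder for your real Lár graph;
# only the FastAPI/Pydantic usage here is standard.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

def run_graph(state: dict) -> dict:
    # Placeholder for your graph's run() call; returns final state + audit trail.
    return {**state, "answer": f"Echo: {state['question']}", "trace": []}

class RunRequest(BaseModel):
    question: str

@app.post("/run")
def run_agent(req: RunRequest) -> dict:
    final_state = run_graph({"question": req.question})
    # Return both the answer and the audit trail so every call is traceable.
    return {"answer": final_state["answer"], "trace": final_state["trace"]}
```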
Get Started in 3 Minutes: https://docs.snath.ai/getting-started/
Author
Lár was created by Aadithya Vishnu Sajeev.