Butter
AI Infrastructure & MLOps
Hi HN! I'm Erik. We built Butter, an LLM proxy that makes agent systems deterministic by caching and replaying responses, so automations behave consistently across runs.
- It's a chat completions-compatible endpoint, making it easy to drop into existing agents with a custom base_url
What is Butter?
Butter is an LLM proxy designed to improve consistency in agent-based automations. By caching and replaying language model responses, Butter enables automation systems to behave deterministically across multiple runs, reducing the unpredictable behavior common in AI agents. The product provides a chat completions-compatible endpoint that integrates into existing agent frameworks through a custom base_url, requiring minimal code changes. This approach lets developers keep their current infrastructure while adding reliability features.

Butter also includes bVisor, a lightweight embedded sandbox runtime for AI agents that executes bash commands locally without requiring virtual machines or remote infrastructure. bVisor isolates command execution by intercepting Linux syscalls in userspace, inspired by gVisor but designed to run directly within applications.

The tool is built for developers and teams implementing AI agent systems where consistent behavior and safe command execution are priorities. By eliminating the overhead of traditional sandboxing approaches, Butter enables faster local execution while maintaining security boundaries. Available as open source, Butter allows teams to self-host the solution and integrate deterministic caching into their agent workflows. The project targets the AI infrastructure and MLOps category, addressing a specific pain point in making autonomous systems more predictable and safer to deploy.
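Since the integration described above is just a base_url swap, it can be sketched without any SDK. A minimal illustration, assuming a hypothetical proxy endpoint at https://proxy.butter.dev/v1 (the actual endpoint is not given here, so treat the URL as a placeholder):

```python
# Sketch: routing an existing agent's chat-completions traffic through a
# proxy by changing only the base URL. The request payload itself is the
# standard chat-completions shape and is untouched.
import json

UPSTREAM_URL = "https://api.openai.com/v1/chat/completions"
PROXY_URL = "https://proxy.butter.dev/v1/chat/completions"  # hypothetical placeholder

def build_request(base_url: str, messages: list[dict]) -> tuple[str, bytes]:
    """Return (url, body) for a chat-completions call; only the host differs."""
    payload = {"model": "gpt-4o-mini", "messages": messages}
    return base_url, json.dumps(payload).encode()

# Same agent code, different destination: point at the proxy instead.
url, body = build_request(PROXY_URL, [{"role": "user", "content": "hi"}])
```

In practice, SDKs that accept a base_url override (as the OpenAI-style clients do) would achieve the same redirection with a one-line configuration change.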
Key Features
- LLM proxy that caches and replays responses for deterministic agent behavior
- Chat completions compatible endpoint for easy integration with existing agents
- Lightweight embedded sandbox runtime for safe bash command execution
- Linux syscall interception and virtualization for process-level isolation
- No remote infrastructure or VM overhead required
- Ensures consistent automation behavior across multiple runs
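The cache-and-replay idea behind the first two features can be shown with a toy example. This is not Butter's implementation, just a sketch of the technique: hash the request into a key, call the model on a miss, and replay the stored response on every identical request thereafter.

```python
# Toy cache-and-replay proxy (illustrative only, not Butter's code):
# identical requests hash to the same key, so repeated runs replay the
# first response instead of re-sampling a nondeterministic model.
import hashlib
import json
import random

class ReplayCache:
    def __init__(self, model):
        self.model = model  # callable: request dict -> response str
        self.cache = {}

    def _key(self, request: dict) -> str:
        # Canonical JSON so semantically identical requests share a key.
        blob = json.dumps(request, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def complete(self, request: dict) -> str:
        key = self._key(request)
        if key not in self.cache:          # miss: call the model once
            self.cache[key] = self.model(request)
        return self.cache[key]             # hit: deterministic replay

# A deliberately nondeterministic stand-in for an LLM backend.
noisy_model = lambda req: f"reply-{random.random()}"
proxy = ReplayCache(noisy_model)
req = {"model": "m", "messages": [{"role": "user", "content": "hi"}]}
assert proxy.complete(req) == proxy.complete(req)  # same request, same reply
```

A real proxy would also have to decide what belongs in the key (model, messages, sampling parameters) and how to persist the cache across runs; those details are outside this sketch.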
Butter Pricing
Open source. Visit butter.dev for full pricing details.
Similar Apps
UltraContext
Hey HN! I'm Fabio and I built UltraContext, a simple context API for AI agents with automatic versioning. After two years building AI agents in production, I experienced firsthand how frustrating it is to manage context at scale.
Replicate
Run and deploy AI models with a cloud API
Baseten
Serve and scale open-source and custom AI models on the fastest, most reliable inference platform.
Hugging Face
The AI community platform for models and datasets