Meta's standardized API and toolchain for building AI agents with Llama models, providing inference, safety, memory, and tool use in a unified stack.
Llama Stack is Meta's open-source toolchain and standardized API for building AI applications and agents with Llama models. It unifies the core building blocks of agent development (inference, safety, memory, tool use, and evaluation) behind a consistent API that works across deployment environments, from local development to cloud production.
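To make the unified API concrete, here is a minimal sketch using the official llama-stack-client Python SDK. The port, model identifier, and response field are assumptions that vary by distribution and SDK release, so treat this as the general shape rather than a drop-in snippet.

```python
# Minimal sketch: call the standardized inference API of a running Llama Stack server.
from llama_stack_client import LlamaStackClient

# Assumption: a local distribution is serving on port 8321 (the default
# port differs between releases and distributions).
client = LlamaStackClient(base_url="http://localhost:8321")

# Assumption: this model is registered in the running distribution.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Summarize what Llama Stack provides."}],
)

# Field name follows recent SDK releases; older versions differ.
print(response.completion_message.content)
```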
The stack is designed around a distribution model where different providers implement the standardized APIs. A local development distribution might use Ollama for inference and ChromaDB for memory, while a production distribution could use AWS Bedrock for inference and PostgreSQL for persistence. The API remains the same, making it easy to develop locally and deploy to production without code changes.
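The practical upshot is that application code stays identical while the distribution behind it changes. A hedged sketch of that pattern follows; the environment variable name is our own deployment convention for illustration, not something the SDK defines.

```python
import os

from llama_stack_client import LlamaStackClient

# The same client code talks to whichever distribution the deployment
# points it at: an Ollama-backed server on a laptop, or a Bedrock-backed
# server in production. Only the endpoint changes.
# Assumption: LLAMA_STACK_URL is an illustrative convention, not SDK-defined.
client = LlamaStackClient(
    base_url=os.environ.get("LLAMA_STACK_URL", "http://localhost:8321")
)

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Hello from whichever provider is configured."}],
)
```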
Llama Stack includes built-in safety features through Llama Guard, Meta's content safety model, which provides input and output filtering for agent interactions. Safety is integrated at the API level, so checks happen automatically without additional integration work, and the system covers categories such as violence, sexual content, and criminal planning.
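Shields can also be invoked directly. The sketch below assumes the SDK's safety API and a Llama Guard shield registered under the ID shown; both the method shape and the shield ID depend on your distribution and SDK version.

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Assumption: the distribution has a Llama Guard shield registered under
# this ID; shield IDs are distribution-specific.
result = client.safety.run_shield(
    shield_id="meta-llama/Llama-Guard-3-8B",
    messages=[{"role": "user", "content": "How do I pick a lock?"}],
    params={},
)

# A populated violation field means the content was flagged against one
# of the safety categories (violence, criminal planning, and so on).
if result.violation:
    print("Blocked:", result.violation.user_message)
else:
    print("Content passed the safety check.")
```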
The Agents API provides a complete framework for building tool-using agents with support for function calling, code execution, web search, and custom tools. The memory API supports both vector-based retrieval (for RAG) and conversation history management. An evaluation API enables testing agent performance with standardized benchmarks.
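As a sketch of the Agents API, the example below uses the SDK's Agent helper; the import path, constructor keywords, and the builtin::websearch tool identifier are taken from one SDK generation and may differ in yours (older releases passed an AgentConfig dict instead).

```python
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent

client = LlamaStackClient(base_url="http://localhost:8321")

# Assumption: constructor keywords and the web-search tool group name
# follow one recent SDK generation.
agent = Agent(
    client,
    model="meta-llama/Llama-3.2-3B-Instruct",
    instructions="You are a research assistant. Use tools when helpful.",
    tools=["builtin::websearch"],
)

# Agents are stateful: each session accumulates conversation turns.
session_id = agent.create_session("demo-session")
turn = agent.create_turn(
    session_id=session_id,
    messages=[{"role": "user", "content": "What changed in the latest Llama release?"}],
    stream=False,
)
print(turn.output_message.content)
```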
Llama Stack supports multiple client languages including Python and TypeScript, and provides REST APIs for language-agnostic integration. Distributions are available for local development (with Ollama), cloud deployment (with AWS, Azure, Fireworks, Together), and on-device inference. The project represents Meta's effort to create a standardized, portable agent development stack around the Llama model family.
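Because everything is exposed over REST, any language can integrate without an official SDK. A plain-HTTP sketch follows; the /v1/models path and the response shape are assumptions based on recent server versions, so check your distribution's OpenAPI spec for the exact routes.

```python
import requests

# Assumption: the server exposes a versioned REST surface under /v1 and
# returns registered models in a "data" list with an "identifier" field.
resp = requests.get("http://localhost:8321/v1/models", timeout=10)
resp.raise_for_status()

for model in resp.json().get("data", []):
    print(model.get("identifier"))
```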
Unified API for inference, safety, memory, tools, and evaluation that works across local, cloud, and on-device distributions.
Swap providers (Ollama, Bedrock, Together) without changing application code — develop locally, deploy to production seamlessly.
Built-in content safety filtering through Llama Guard, providing automatic input/output safety checks at the API level.
Complete agent framework with function calling, code execution, web search, and custom tool support for building capable agents.
Standardized memory API for both vector-based retrieval and conversation history, with pluggable storage backends (see the sketch after this list).
Pre-configured distributions for local development (Ollama), cloud (AWS, Azure, Fireworks, Together), and on-device inference.
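As referenced in the memory item above, here is a sketch of the vector side of the memory API. The vector_dbs/vector_io method names, embedding model, and chunk schema are assumptions drawn from one SDK generation; earlier releases used a "memory banks" API instead.

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Register a vector store backed by whichever provider the distribution
# configures (e.g. ChromaDB locally, a managed store in production).
# Assumption: embedding model name and dimension are illustrative.
client.vector_dbs.register(
    vector_db_id="project-notes",
    embedding_model="all-MiniLM-L6-v2",
    embedding_dimension=384,
)

# Insert a document chunk; the chunk schema here is an assumption.
client.vector_io.insert(
    vector_db_id="project-notes",
    chunks=[{
        "content": "Llama Stack separates APIs from the providers that implement them.",
        "mime_type": "text/plain",
        "metadata": {"document_id": "notes-001"},
    }],
)

# Retrieve the most relevant chunks for a query (the RAG path).
hits = client.vector_io.query(vector_db_id="project-notes", query="How are providers organized?")
for chunk in hits.chunks:
    print(chunk.content)
```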
Llama Stack is free and open source. It is a good fit for:
Building agents with Llama models across different environments
Teams wanting built-in safety for agent interactions
Projects needing portable deployment from local to cloud
Organizations committed to open-source AI with Meta's Llama
We believe in transparent reviews. Here's what Llama Stack doesn't handle well:
Llama Stack is designed primarily for Llama models. The API is extensible and some distributions support other models, but the best experience is with Llama.
What is a distribution? A distribution is a pre-configured set of providers implementing the Llama Stack APIs. For example, a local distribution uses Ollama, while an AWS distribution uses Bedrock.
What is Llama Guard? Llama Guard is a safety model that classifies inputs and outputs against safety categories. It's integrated into the Llama Stack API, so safety checks happen automatically on every agent interaction.
Is Llama Stack a replacement for LangChain? Not exactly. Llama Stack provides a standardized infrastructure layer for Llama-based agents, while LangChain is a higher-level application framework. They can be used together.
People who use this tool also find these helpful
Standardized communication protocol for AI agents enabling interoperability and coordination across different agent frameworks.
CLI tool for scaffolding, building, and deploying AI agent projects with best-practice templates, tool integrations, and framework support.
Full-stack platform for building, testing, and deploying AI agents with built-in memory, tools, and team orchestration capabilities.
Lightweight Python framework for building modular AI agents with schema-driven I/O using Pydantic and Instructor.
Latest version of the pioneering autonomous AI agent with enhanced planning, tool usage, and memory capabilities.
IBM's open-source TypeScript framework for building production AI agents with structured tool use, memory management, and observability.
Llama Stack is most often compared to LangChain, a toolkit for composing LLM apps, chains, and agents; Ollama, which runs large language models locally with a simple CLI and API for private, cost-free agent development; inference platforms offering code model endpoints and fine-tuning; and the official OpenAI SDK for building production-ready AI agents with GPT models and function calling.