Projects • Roadmap • Report Bug • Sign up for ZenML Pro • Blog
For the latest release, see the release notes.
ZenML is built for ML and AI engineers in company settings who work on traditional ML use cases, LLM workflows, or agents.
At its core, ZenML lets you write workflows (pipelines) that run on any infrastructure backend (stacks). You can embed any Pythonic logic in these pipelines, such as training a model or running an agentic loop; a minimal sketch follows the list below. ZenML then operationalizes your application by:
- Automatically containerizing and tracking your code.
- Tracking individual runs with metrics, logs, and metadata.
- Abstracting away infrastructure complexity.
- Integrating with your existing tools and infrastructure, e.g., MLflow, LangGraph, Langfuse, SageMaker, GCP Vertex AI, etc.
- Letting you iterate quickly on experiments, with an observability layer in both development and production.
...amongst many other features.
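To make the pipeline concept concrete, here is a minimal sketch using ZenML's public `@step` and `@pipeline` decorators. The step names and toy logic are illustrative placeholders, not code from ZenML's documentation:

```python
from zenml import pipeline, step

@step
def load_data() -> dict:
    """Any Pythonic logic can live in a step; here, a toy dataset."""
    return {"features": [[1.0], [2.0], [3.0]], "labels": [0, 1, 1]}

@step
def train_model(data: dict) -> float:
    """Placeholder 'training' that returns a score; swap in real code."""
    return sum(data["labels"]) / len(data["labels"])

@pipeline
def my_pipeline():
    data = load_data()  # step outputs are tracked as artifacts
    train_model(data)

if __name__ == "__main__":
    my_pipeline()  # runs on whatever stack is currently active
```

Running the same pipeline on a remote stack requires no code changes: you switch the active stack, and ZenML handles containerization and execution.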
ZenML is used by thousands of companies to run their AI workflows. Here are some featured ones:
(please email [email protected] if you want to be featured)
ZenML uses a client-server architecture with an integrated web dashboard (zenml-io/zenml-dashboard):
- Local Development: `pip install "zenml[server]"` runs both client and server locally
- Production: Deploy the server separately, then connect the client with `pip install zenml` and `zenml login <server-url>`
```bash
# Install ZenML with server capabilities
pip install "zenml[server]"

# Install required dependencies
pip install scikit-learn openai numpy

# Initialize your ZenML repository
zenml init

# Start a local server or connect to a remote one
zenml login
```
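With those dependencies installed, a first run might look like the following sketch, which trains a small scikit-learn model inside a step. The function name and inline data are hypothetical, meant only to show the shape of a tracked run:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from zenml import pipeline, step

@step
def train_classifier() -> float:
    """Train a tiny logistic regression and return its training accuracy."""
    X = np.array([[0.0], [1.0], [2.0], [3.0]])
    y = np.array([0, 0, 1, 1])
    model = LogisticRegression().fit(X, y)
    return float(model.score(X, y))

@pipeline
def first_pipeline():
    train_classifier()

if __name__ == "__main__":
    first_pipeline()  # the run, its logs, and its outputs appear in the dashboard
```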
Here is a brief demo:
[Demo video: zenml_demo_comp.mp4]
Stop clicking through dashboards to understand your ML workflows. The ZenML MCP Server lets you query your pipelines, analyze runs, and trigger deployments using natural language through Claude Desktop, Cursor, or any MCP-compatible client.
💬 "Which pipeline runs failed this week and why?"
📊 "Show me accuracy metrics for all my customer churn models"
🚀 "Trigger the latest fraud detection pipeline with production data"
Quick Setup:
- Download the `.dxt` file from zenml-io/mcp-zenml
- Drag it into Claude Desktop settings
- Add your ZenML server URL and API key
- Start chatting with your ML infrastructure
The MCP (Model Context Protocol) integration transforms your ZenML metadata into conversational insights, making pipeline debugging and analysis as easy as asking a question. Perfect for teams who want to democratize access to ML operations without requiring dashboard expertise.
The best way to learn about ZenML is through our comprehensive documentation and tutorials:
- Your First AI Pipeline - Build and evaluate an AI service in minutes
- Starter Guide - From zero to production in 30 minutes
- LLMOps Guide - Specific patterns for LLM applications
- SDK Reference - Complete SDK reference
- Agent Architecture Comparison - Compare AI agents with LangGraph workflows, LiteLLM integration, and automatic visualizations via custom materializers
- Minimal Agent Production - Document analysis service with pipelines, evaluation, and web UI
- E2E Batch Inference - Complete MLOps pipeline with feature engineering
- LLM RAG Pipeline - Production RAG with evaluation loops
- Agentic Workflow (Deep Research) - Orchestrate your agents with ZenML
- Fine-tuning Pipeline - Fine-tune and deploy LLMs
ZenML is featured in these comprehensive guides to production AI systems.
Contribute:
- 🌟 Star us on GitHub - Help others discover ZenML
- 🤝 Contributing Guide - Start with `good-first-issue`
- 💻 Write Integrations - Add your favorite tools
Stay Updated:
- 🗺 Public Roadmap - See what's coming next
- 📰 Blog - Best practices and case studies
- 💬 Slack - Talk with AI practitioners
Q: "Do I need to rewrite my agents or models to use ZenML?"
A: No. Wrap your existing code in a `@step`. Keep using scikit-learn, PyTorch, LangGraph, LlamaIndex, or raw API calls. ZenML orchestrates your tools; it doesn't replace them.
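As an illustration of that answer, the sketch below wraps an existing function with ZenML's `step` decorator; `summarize` and its body are hypothetical stand-ins for code you already have:

```python
from zenml import pipeline, step

def summarize(text: str) -> str:
    """Pre-existing logic; could just as well call PyTorch or LangGraph."""
    return text[:50]

# Wrap the existing function as a step -- no rewrite required.
summarize_step = step(summarize)

@pipeline
def docs_pipeline():
    summarize_step("ZenML orchestrates your tools rather than replacing them.")

if __name__ == "__main__":
    docs_pipeline()
```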
Q: "How is this different from LangSmith/Langfuse?"
A: They provide excellent observability for LLM applications. We orchestrate the full MLOps lifecycle for your entire AI stack. With ZenML, you manage both your classical ML models and your AI agents in one unified framework, from development and evaluation all the way to production deployment.
Q: "Can I use my existing MLflow/W&B setup?"
A: Yes! ZenML integrates with both MLflow and Weights & Biases. Your experiments, our pipelines.
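As a sketch of that integration, assuming an MLflow experiment tracker is already registered in your active stack under the hypothetical name `mlflow_tracker`, a step opts in like this:

```python
import mlflow
from zenml import step

@step(experiment_tracker="mlflow_tracker")  # hypothetical tracker name in your stack
def train() -> float:
    """Metrics logged here land in your existing MLflow backend."""
    accuracy = 0.92  # placeholder for a real training result
    mlflow.log_metric("accuracy", accuracy)
    return accuracy
```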
Q: "Is this just MLflow with extra steps?"
A: No. MLflow tracks experiments. We orchestrate the entire development process, from training and evaluation to deployment and monitoring, for both models and agents.
Q: "How do I configure ZenML with Kubernetes?"
A: ZenML integrates with Kubernetes through the native Kubernetes orchestrator, Kubeflow, and other K8s-based orchestrators. See our Kubernetes orchestrator guide and Kubeflow guide, plus deployment documentation.
Q: "What about cost? I can't afford another platform."
A: ZenML's open-source version is free forever. You likely already have the required infrastructure (like a Kubernetes cluster and object storage). We just help you make better use of it for MLOps.
Manage pipelines directly from your editor:
Install from VS Code Marketplace.
ZenML is distributed under the terms of the Apache License Version 2.0. See LICENSE for details.