Welcome to the MCP documentation. Learn how to structure and manage context for your LLM applications.
Model Context Protocol (MCP) is a standardized way to define, share, and manage the context that LLMs use during inference. It provides a structured approach to organizing all the information that guides an LLM's behavior.
MCP helps solve common challenges in LLM application development by providing a consistent way to handle system instructions, user goals, memory, tools, and retrieved information.
MCP's core features:

- Structured Context Schema: define your context structure once and use it consistently (a sketch of the idea follows this list).
- Provider Agnostic: works with any LLM provider (OpenAI, Anthropic, etc.).
- Memory Management: a structured approach to short-term and long-term memory.
- Tool Integration: a standardized way to define and use tools with LLMs.
- Observability: debug and trace what influenced model outputs.
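To make the schema idea concrete, here is a rough sketch of a structured context being rendered into provider-agnostic chat messages. The `MCPContext` class, its field names, and `to_messages()` are illustrative assumptions made for this sketch, not MCP's actual API; the context schema guide documents the real interface.

```python
from dataclasses import dataclass, field

# Illustrative only: the class and field names below are assumptions made for
# this sketch, not the package's actual API.

@dataclass
class ToolSpec:
    name: str
    description: str
    parameters: dict  # JSON Schema describing the tool's arguments

@dataclass
class MCPContext:
    system_instructions: str
    user_goal: str
    short_term_memory: list[dict] = field(default_factory=list)  # recent chat turns
    long_term_memory: list[str] = field(default_factory=list)    # persisted facts
    retrieved: list[str] = field(default_factory=list)           # retrieved snippets
    tools: list[ToolSpec] = field(default_factory=list)

    def to_messages(self) -> list[dict]:
        """Render the structured context into provider-agnostic chat messages."""
        system = "\n\n".join(
            [self.system_instructions]
            + [f"Known fact: {m}" for m in self.long_term_memory]
            + [f"Retrieved: {r}" for r in self.retrieved]
        )
        messages = [{"role": "system", "content": system}]
        messages += self.short_term_memory
        messages.append({"role": "user", "content": self.user_goal})
        return messages

ctx = MCPContext(
    system_instructions="You are a helpful assistant.",
    user_goal="Summarize my meeting notes.",
    retrieved=["Notes: ship the v2 schema by Friday."],
)
print(ctx.to_messages())
```

Because `to_messages()` yields plain role/content dictionaries, the same context object can be handed to OpenAI's or Anthropic's chat APIs with only the provider-specific client code changing.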
Ready to start using MCP? Follow our getting started guide to install the package and create your first MCP context.
From there, the guides cover:

- The fundamental concepts behind MCP and how it structures context.
- How to define and validate your context schema for different use cases.
- Techniques for managing short-term and long-term memory in your LLM applications.
- How to define and use tools with LLMs to extend their capabilities (a sketch of the idea follows this list).
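Tool integration generally means giving the model a machine-readable description of each tool and routing the model's tool calls back to ordinary code. The registry and `dispatch_tool_call` helper below are assumptions made for this sketch, not MCP's actual API; see the tool integration guide for the real interface.

```python
import json

# Illustrative only: the registry and dispatch helper below are assumptions
# made for this sketch, not the package's actual API.

# A tool is described to the model with a name, a description, and a JSON
# Schema for its arguments, and is backed by an ordinary Python callable.
def get_weather(city: str) -> str:
    return f"It is 18 degrees and cloudy in {city}."  # stub implementation

TOOLS = {
    "get_weather": {
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        "handler": get_weather,
    },
}

def dispatch_tool_call(name: str, arguments_json: str) -> str:
    """Run the tool the model asked for and return its result as a string."""
    spec = TOOLS[name]
    args = json.loads(arguments_json)
    return spec["handler"](**args)

# e.g. the model replies with a tool call like:
#   {"name": "get_weather", "arguments": "{\"city\": \"Oslo\"}"}
print(dispatch_tool_call("get_weather", '{"city": "Oslo"}'))
```

The JSON Schema under `parameters` is what gets sent to the provider so the model knows how to call the tool; the `handler` stays local and only its result is returned to the model.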
Explore our examples to see MCP in action with different use cases and LLM providers.
Detailed API documentation for all MCP classes, methods, and configuration options.