Core Concepts

Understanding the fundamental concepts behind the Model Context Protocol will help you build more effective AI applications.

What is Context?

In the world of Large Language Models (LLMs), context refers to all the information that guides the model's behavior and responses. This includes:

  • System instructions that define the model's role and behavior
  • User goals and queries
  • Conversation history
  • Retrieved information from external sources
  • Available tools and their capabilities
  • User preferences and history

Traditionally, all of this information is combined into a single prompt string. As applications grow more complex, this approach becomes unwieldy and difficult to maintain.
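
For example, a traditional setup might look something like this (a purely illustrative sketch; no particular framework's prompt format is implied):

// Everything folded into one opaque string: instructions, preferences, tools, conversation history, and the query.
const prompt =
  "You are a helpful shopping assistant that recommends products based on user preferences. " +
  "The user prefers a minimalist style and a budget of €100-150. " +
  "Available tool: searchProducts(query, filters), which searches the product catalog. " +
  "Conversation so far: [earlier messages omitted]. " +
  "User: Find waterproof sneakers under €150 in a minimalist style.";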

The MCP Approach

Model Context Protocol (MCP) takes a structured approach to context management. Instead of treating context as a monolithic prompt string, MCP treats it as structured data with defined components.

The key principles of MCP are:

  1. Context as Data: Treat context as structured data, not prose
  2. Modularity: Break context into logical components that can be updated independently
  3. Provider Agnostic: Define context once, use with any LLM provider
  4. Observability: Make it easy to inspect and debug what influenced model outputs (see the sketch below)
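
Under this model, the same information becomes ordinary, named data (a sketch; the field names follow the component examples in the next section, but the exact schema is up to your application):

const context = {
  systemInstruction: "You are a helpful shopping assistant that recommends products based on user preferences.",
  userGoal: "Find waterproof sneakers under €150 in a minimalist style.",
  memory: { longTerm: { preferences: { style: "minimalist", priceRange: "100-150" } } }
};

// Modularity: change one component without touching the rest.
context.userGoal = "Compare the two cheapest waterproof options.";

// Observability: inspect exactly what will be sent to the model.
console.log(JSON.stringify(context, null, 2));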

Core Components

An MCP context consists of several core components:

System Instruction

The high-level instruction that defines the model's role, capabilities, and constraints. This is similar to the "system message" in many LLM APIs.

systemInstruction: "You are a helpful shopping assistant that recommends products based on user preferences."

User Goal

The current objective or query from the user. This helps focus the model on the specific task at hand.

userGoal: "Find waterproof sneakers under €150 in a minimalist style."

Memory

Information that persists across interactions, divided into short-term (conversation context) and long-term (user preferences, history) memory.

memory: {
  shortTerm: [
    { type: "interaction", content: "Previous message exchange" }
  ],
  longTerm: {
    preferences: { style: "minimalist", priceRange: "100-150" }
  }
}

Tools

External capabilities available to the model, such as API calls, database queries, or specialized functions.

tools: [
  {
    name: "searchProducts",
    description: "Search the product catalog",
    parameters: {
      query: "string",
      filters: "object"
    }
  }
]
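
A tool declaration like this only describes the capability; at runtime it is typically paired with an implementation that performs the actual call. A minimal sketch, assuming the tool is backed by your own catalog API (the endpoint and handler wiring are illustrative, not part of MCP itself):

// Hypothetical implementation backing the searchProducts declaration above.
async function searchProducts(args: { query: string; filters?: Record<string, string> }) {
  const params = new URLSearchParams({ q: args.query, ...args.filters });
  // Replace with your real catalog endpoint; this URL is a placeholder.
  const response = await fetch(`https://example.com/api/products?${params}`);
  return response.json();
}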

Retrieved Documents

Information retrieved from external sources, such as knowledge bases, product catalogs, or documentation.

retrievedDocuments: [
  {
    source: "ProductCatalog",
    query: "waterproof sneakers minimalist",
    results: [
      { name: "Product A", price: "€135", features: [...] },
      { name: "Product B", price: "€145", features: [...] }
    ]
  }
]

The MCP Lifecycle

Using MCP in your application involves several key steps:

  1. Define Context Schema: Specify the structure of your context
  2. Create Context Instance: Populate the context with initial values
  3. Compile for Provider: Transform the structured context into a format suitable for your LLM provider
  4. Generate Response: Use the compiled context with your LLM to generate a response
  5. Update Context: Modify the context based on the interaction

This lifecycle allows for a clean separation of concerns and makes it easier to maintain complex AI applications. The sketch below walks through the five steps in code.
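
The sketch is illustrative only: the ShoppingContext shape follows the component examples above, and compileForProvider and llmClient are hypothetical stand-ins for whatever schema definitions, provider adapter, and LLM client your application actually uses.

// 1. Define the context schema (here as a plain TypeScript interface; your library may use JSON Schema instead).
interface ShoppingContext {
  systemInstruction: string;
  userGoal: string;
  memory: { shortTerm: { type: string; content: string }[]; longTerm: Record<string, unknown> };
  tools: { name: string; description: string; parameters: Record<string, string> }[];
  retrievedDocuments: unknown[];
}

// Hypothetical helpers, declared here only so the sketch type-checks.
declare function compileForProvider(ctx: ShoppingContext, provider: string): { role: string; content: string }[];
declare const llmClient: { generate(messages: { role: string; content: string }[]): Promise<string> };

async function handleTurn() {
  // 2. Create a context instance with initial values.
  const context: ShoppingContext = {
    systemInstruction: "You are a helpful shopping assistant that recommends products based on user preferences.",
    userGoal: "Find waterproof sneakers under €150 in a minimalist style.",
    memory: { shortTerm: [], longTerm: { preferences: { style: "minimalist", priceRange: "100-150" } } },
    tools: [{ name: "searchProducts", description: "Search the product catalog", parameters: { query: "string", filters: "object" } }],
    retrievedDocuments: []
  };

  // 3. Compile the structured context into the format your provider expects (e.g. a messages array).
  const messages = compileForProvider(context, "openai");

  // 4. Generate a response with the compiled context.
  const reply = await llmClient.generate(messages);

  // 5. Update the context based on the interaction, e.g. append the exchange to short-term memory.
  context.memory.shortTerm.push({ type: "interaction", content: reply });
}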

Next Steps

Now that you understand the core concepts of MCP, you can: