The Model Context Protocol (MCP) is the standard your AI assistant has been waiting for. By 2026 every major AI coding tool - Claude, Cursor, VS Code, Windsurf, Codex, ChatGPT, Zed - speaks it. If you've heard the term floating around but haven't yet figured out what to do with it, this is your five-minute guide.
We'll cover: what MCP is, the problem it actually solves, the three pieces you need to keep straight, which tools support it today, and how to install your first MCP server in roughly the time it takes to make tea.
The 30-second version
MCP is to AI assistants what USB is to peripherals - a standard way to plug capability into a host without rewriting the host.
The Model Context Protocol is an open spec released by Anthropic in late 2024. It lets AI assistants connect to external tools, APIs, and data sources through a uniform interface. An MCP server is a small process that exposes tools - for example, a GitHub MCP server exposes tools for reading repos and creating PRs. An MCP client is the AI tool that calls those tools - Claude Desktop, Cursor, VS Code with Copilot, and so on.
By the end of 2025 every major AI coding tool had shipped MCP support. The protocol won, the way USB won - not because it was the most elegant spec on paper, but because it was good enough and everyone agreed to ship it.
The problem MCP solves
Before MCP, every AI tool had its own incompatible plugin system. If you wanted Slack inside Claude, somebody had to write a Slack integration specifically for Claude. If you wanted Slack inside Cursor, somebody had to write it again. Tools were locked to specific assistants, and assistants had to hand-roll every integration.
The result was predictable: a Cambrian explosion of half-finished plugins, fragmented across vendors, with no path for a small team to ship a tool once and have it work everywhere.
MCP fixes that. One server (e.g. slack-mcp-server) works in every MCP-compatible client. Build once, run everywhere.
For developers, the practical impact is two-fold:
- You stop writing client-specific glue code. If you're an AI tool builder, you implement the protocol once and inherit the entire MCP server ecosystem.
- You stop being locked into one assistant. If you're an end user, you can switch between Claude and Cursor without losing access to your Slack or Postgres or filesystem tools.
Both sides win, which is why adoption has been fast.
Server. Client. 1Server. (Three pieces to know)
You'll hear three terms thrown around. Get them straight and the rest is easy.
Server
A small process that exposes tools - read a file, query a database, hit an API. Examples: the GitHub MCP server, the Postgres MCP server, the Slack MCP server, the filesystem MCP server. Each runs as its own process and speaks the MCP protocol.
A server has a small surface: a list of tools (functions the AI can call), optionally a list of resources (read-only data), and optionally prompts (templates). When the AI client wants to call a tool, it sends a JSON-RPC request to the server, the server runs the function, and the result comes back.
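On the wire, a tool call is plain JSON-RPC 2.0. A sketch of what a request might look like (the tool name `read_file` and its argument are illustrative, not from any particular server):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "README.md" }
  }
}
```

The server runs the function and replies with a result whose content the client feeds back to the model:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "content": [{ "type": "text", "text": "contents of README.md here" }]
  }
}
```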
Client
The AI tool that calls those tools. Claude Desktop, Cursor, VS Code with Copilot's agent mode, Windsurf, Codex, Gemini CLI, ChatGPT, Zed. The client speaks MCP and routes tool calls to the right servers.
Most clients let you configure which servers to connect to via a JSON config file. Claude Desktop has claude_desktop_config.json. Cursor has ~/.cursor/mcp.json. VS Code uses mcp.json. The shapes differ slightly per client, which is its own minor headache (more on that later).
1Server (the third piece, optional but useful)
A single MCP server that hosts every other server you install. The client connects to one instance of 1Server; 1Server connects to everything else. Other people in the ecosystem call this pattern an "aggregator" or "gateway". Same idea.
The aggregator pattern matters because the per-client config files get unwieldy fast. With five servers configured manually you have ~30 lines of JSON, scattered tokens in plaintext, and no hot reload. With 1Server you have one entry, secrets in a vault, and you can add or remove servers without restarting the client.
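For a sense of scale, a manual config with just two servers already looks something like this (a sketch; the package names follow the reference-server convention, and the token is a placeholder you'd paste in plaintext):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here" }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    }
  }
}
```

Multiply by five servers and you get the ~30 lines of JSON and scattered tokens described above.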
You don't need an aggregator to use MCP. But the moment you have more than two or three servers, you'll want one.
Which tools speak MCP?
Every major AI coding tool by 2026:
- Claude Desktop - Anthropic's desktop app. First-class MCP support since launch.
- Claude Code - Anthropic's CLI coding agent. Register servers globally with claude mcp add.
- Cursor - AI-first code editor by Anysphere. Native MCP support; supports deep-link install.
- VS Code (with Copilot) - Microsoft's editor. Speaks MCP via Copilot's agent mode. Note: VS Code's config block is servers, not mcpServers.
- Windsurf - Cognition's "agentic IDE" (formerly Codeium's flagship).
- Codex - OpenAI's coding agent.
- Gemini CLI - Google's open-source command-line agent.
- ChatGPT - OpenAI's flagship chat product; supports MCP via custom connectors.
- Zed - Zed Industries' fast, Rust-based collaborative editor.
If you're using any of these, you can install MCP servers today. 1Server has per-client setup guides for each.
How to install your first MCP server in five minutes
This walk-through uses 1Server because it's the fastest path. If you'd rather not run an aggregator, replace step 4 with a manual JSON config entry per server.
1. Sign up for 1Server
Free, no credit card. Email or Google. Beta accounts stay on the Free tier permanently.
2. Pick a server from the marketplace
Browse the curated catalogue and click Install on whatever you want - GitHub, filesystem, Postgres, Slack, web search. If the server needs an API token (most do), paste it once; it goes into an AES-256 encrypted vault, never into a config file.
→ https://1server.ai/marketplace
3. Create your 1Server API key
Visit https://1server.ai/dashboard/api-keys and create a key. You'll see it once - copy it.
4. Add 1Server to your AI client
For Claude Code, run:
claude mcp add 1server --transport stdio -e ONESERVER_API_KEY=YOUR_KEY -- npx -y 1server-mcp-engine
For Cursor, Claude Desktop, or any MCP client that uses an mcpServers config block, paste this into the config file:
{
  "mcpServers": {
    "1server": {
      "command": "npx",
      "args": ["-y", "1server-mcp-engine"],
      "env": { "ONESERVER_API_KEY": "your-api-key" }
    }
  }
}
VS Code uses a slightly different shape - servers instead of mcpServers, with an explicit type: "stdio". See the /clients/vs-code page.
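For comparison, the same entry in VS Code's mcp.json would look roughly like this (note the top-level servers key and the explicit type field):

```json
{
  "servers": {
    "1server": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "1server-mcp-engine"],
      "env": { "ONESERVER_API_KEY": "your-api-key" }
    }
  }
}
```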
5. Restart your client once
Restart Claude or Cursor or whatever you're using. From now on, every server you install in the 1Server marketplace is automatically available in your AI session - no more restarts, no more config edits.
That's it. The "five minutes" thing isn't aspirational; it's measured.
What happens next
Once you have MCP working, three things change:
You stop bouncing between tabs. Need to look up a Postgres schema? Ask Claude. Need to comment on a PR? Ask Cursor. The AI calls the server and you get the answer in chat. No copy-paste, no context-switching, no tab to check.
You start writing tools instead of scripts. When you find yourself wishing the AI could do X, the answer is now "spin up a small MCP server" instead of "rewrite my prompt fifteen times". The server pattern is small enough to be a one-evening project.
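To make "one-evening project" concrete, here is a minimal, stdlib-only Python sketch of that pattern: a stdio server that exposes a single made-up tool (word_count) over newline-delimited JSON-RPC. A real server would use an official SDK and implement the full MCP handshake; this just shows how small the core dispatch loop is.

```python
import json
import sys

# One made-up tool for illustration. A real server exposes whatever
# capability you wish your assistant had.
TOOLS = [{
    "name": "word_count",
    "description": "Count the words in a piece of text.",
    "inputSchema": {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
}]

def handle(req: dict) -> dict:
    """Dispatch one JSON-RPC request to a response dict."""
    method, rid = req.get("method"), req.get("id")
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call" and req["params"]["name"] == "word_count":
        text = req["params"]["arguments"]["text"]
        result = {"content": [{"type": "text", "text": str(len(text.split()))}]}
    else:
        # Real servers also handle initialize and the rest of the spec.
        return {"jsonrpc": "2.0", "id": rid,
                "error": {"code": -32601, "message": f"unknown method {method}"}}
    return {"jsonrpc": "2.0", "id": rid, "result": result}

if __name__ == "__main__":
    # MCP's stdio transport is newline-delimited JSON-RPC on stdin/stdout.
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)
```

Point a client's config at this script as a stdio command and the tool shows up alongside the rest.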
You'll want an aggregator. As soon as you have three or four servers configured, the manual JSON config gets old. Either roll your own (1–2 weeks of engineering) or use one - 1Server is what we ship, but the architectural pattern is the point even if you build your own.
Further reading
- 1Server Setup Guide - five-minute migration from manual mcpServers config to one connection
- Best MCP servers for Cursor in 2026 - what to install first
- 1Server vs MCP.Directory - registry vs aggregator
- Anthropic's official MCP spec - the standard itself
- Awesome MCP Servers - community-curated server list
If you're new to MCP, install one server in your AI client today. The five minutes you spend will save you ten hours over the next month.