Model Context Protocol (MCP) has quietly become one of the most significant infrastructure shifts in AI tooling since the transformer itself. Introduced by Anthropic as an open standard, MCP gives AI assistants like Claude a standardized way to connect to external systems—databases, file systems, APIs, browsers—without bespoke glue code for each integration.
Think of it as the USB-C of AI connectivity. Before MCP, every AI integration required a custom connector. Now, any MCP-compatible client (Claude Desktop, Cursor, Zed, and a growing list of others) can plug into any MCP server and immediately gain new capabilities. The official MCP registry already lists hundreds of servers, and the reference repository on GitHub has accumulated tens of thousands of stars as of early 2026.
This guide covers the most useful MCP servers for developers, organized by use case.
What Is an MCP Server?
An MCP server is a lightweight process that exposes tools, resources, and/or prompts over the Model Context Protocol. When you attach a server to a compatible AI client, the model gains the ability to call those tools directly during a conversation—reading files, querying databases, running searches, and more.
Key things to understand before diving in:
- Transport: Most servers run locally via stdio (standard input/output), which keeps data on your machine. Remote servers using HTTP+SSE are also supported.
- Security: Each server defines its own access boundaries. The official Filesystem server, for example, restricts reads and writes to directories you explicitly allowlist.
- SDKs: You can build your own MCP server in TypeScript, Python, Go, Rust, Java, Kotlin, C#, Ruby, Swift, and PHP using official SDKs (modelcontextprotocol.io).
🗂️ File Operations
1. MCP Filesystem Server
Source: github.com/modelcontextprotocol/servers
Maintained by: MCP steering group (official reference implementation)
License: MIT
The Filesystem server is the entry point for most developers exploring MCP. It exposes a set of file-system tools—read file, write file, list directory, move file, search, and get metadata—over a secure, allowlist-based access model.
Core features:
- Read and write arbitrary text files within approved directories
- Recursive directory listing and glob-based search
- File metadata (size, timestamps, permissions)
- Configurable path allowlist: you decide exactly which directories the model can access
Best for: Local development workflows where you want an AI assistant to read your codebase, create files, or modify configuration without copy-pasting content into the chat window.
I use this daily when refactoring projects—pointing Claude at a source tree and asking it to propose changes directly, rather than shuttling code back and forth manually.
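A typical client config entry looks like the one in the Getting Started section later in this guide; the package name below is the official npm release already shown there, and you can pass more than one directory to widen the allowlist. The paths here are placeholders:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects/my-app",
        "/Users/you/notes"
      ]
    }
  }
}
```

Anything outside the listed directories is invisible to the model, which is what makes this server reasonable to leave enabled all the time.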
🗄️ Databases
2. MCP SQLite Server
Source: github.com/modelcontextprotocol/servers-archived
License: MIT
The SQLite server exposes a local SQLite database to your AI assistant. It supports both read queries and data modifications, making it suitable for prototyping, data exploration, and lightweight business intelligence workflows.
Core features:
- Execute arbitrary SQL queries against a local `.db` file
- Schema introspection (tables, columns, types)
- Business intelligence prompts built into the server for common analytical patterns
Best for: Developers working with embedded databases, analysts who want to ask questions about local data exports, or anyone prototyping database-backed features without standing up a full RDBMS.
Note: This is an archived reference implementation, meaning it has been moved out of the main repo and is no longer actively maintained by the MCP team. Community forks and alternatives have picked up the torch; search the MCP registry for actively maintained SQLite options.
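If you do want to try the archived implementation, it is a Python package typically launched with `uvx`. A minimal config sketch, assuming the historical package name `mcp-server-sqlite` and its `--db-path` flag (verify both against whichever fork you adopt):

```json
{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "/path/to/local.db"]
    }
  }
}
```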
3. MCP PostgreSQL Server
Source: github.com/modelcontextprotocol/servers-archived
License: MIT
The PostgreSQL server connects an AI assistant to a live Postgres database in read-only mode, making it safe to use against production replicas or staging environments.
Core features:
- Read-only SQL query execution (no accidental writes)
- Schema inspection across tables, views, and indexes
- Useful for natural-language data exploration (“Show me the top customers by revenue last quarter”)
Best for: Backend developers who want to explore unfamiliar database schemas, data engineers building pipelines, or teams doing ad-hoc reporting without exposing a query UI to non-technical stakeholders.
Like the SQLite server, the reference implementation is archived; actively maintained community versions exist for both plain Postgres and platforms like Supabase and Aiven.
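For reference, the archived server was a Node package that took the connection string as its only argument; a config along these lines should carry over to most maintained forks. The package name, credentials, and URL below are illustrative, and pointing it at a read replica with a read-only role is the safest setup:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://readonly_user:password@localhost:5432/mydb"
      ]
    }
  }
}
```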
🌐 Web & Search
4. Brave Search MCP Server
Source: github.com/brave/brave-search-mcp-server
Maintained by: Brave Software (official)
Requires: Brave Search API key (free tier available at brave.com/search/api)
This is the canonical web-search MCP server, now maintained directly by Brave after Anthropic archived the original reference implementation. It gives an AI assistant the ability to search the web and return structured results without leaving the chat interface.
Core features:
- Web search with ranked results, snippets, and URLs
- Local search for businesses and points of interest
- SafeSearch and freshness controls via server configuration
Best for: Research-heavy workflows where you want an AI assistant to pull in current information—market data, documentation, news—without you manually supplying URLs. Particularly useful combined with the Fetch server below.
The Brave Search API has a generous free tier, so getting started costs nothing.
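Configuration follows the same pattern as the other npm-published servers, with the API key supplied as an environment variable. The package name below reflects my understanding of Brave's npm release; double-check the repository README, since naming changed when Brave took over maintenance:

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@brave/brave-search-mcp-server"],
      "env": {
        "BRAVE_API_KEY": "your-api-key-here"
      }
    }
  }
}
```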
5. MCP Fetch Server
Source: github.com/modelcontextprotocol/servers
Maintained by: MCP steering group (official reference implementation)
License: MIT
While Brave Search finds pages, the Fetch server retrieves and converts their content. It fetches arbitrary URLs and returns clean, LLM-optimized text by stripping HTML boilerplate, converting tables to markdown, and chunking long pages into digestible segments.
Core features:
- Fetch any public URL and convert to markdown
- Configurable chunk size for long documents
- Robots.txt compliance by default (can be overridden where appropriate)
- Works with static pages; does not execute JavaScript by default
Best for: Pairing with search to create a lightweight research pipeline—search with Brave, then fetch and read the top results. Also excellent for pulling in documentation, RFCs, or API reference pages on demand.
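The Fetch server ships as a Python package, so it is usually launched with `uvx`. A minimal config sketch, assuming the reference package name `mcp-server-fetch`:

```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```

With both Brave Search and Fetch configured, the assistant can chain them in one turn: search for a topic, then fetch and summarize the top results.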
🔧 Developer Tools & Version Control
6. MCP Git Server
Source: github.com/modelcontextprotocol/servers
Maintained by: MCP steering group (official reference implementation)
License: MIT
The Git server exposes repository operations to an AI assistant—reading history, diffing commits, searching content across branches, and more. It’s built on top of gitpython and works with any local Git repository.
Core features:
- `git log`, `git diff`, `git status`, and `git show` as model-callable tools
- File content retrieval at specific commits or branches
- Commit creation (with message, author, and file staging)
- Search across commit messages and file content
Best for: Code archaeology (understanding why a change was made), automated changelog generation, or agentic workflows that need to read, modify, and commit code in a single session.
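Like Fetch, the Git server is Python-based. A config sketch that points it at a single repository, assuming the reference package name `mcp-server-git` and its `--repository` flag (check the server README if your version differs):

```json
{
  "mcpServers": {
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "/path/to/your/repo"]
    }
  }
}
```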
7. Puppeteer / Browser Automation MCP Server
Source: github.com/modelcontextprotocol/servers-archived
License: MIT
The Puppeteer server wraps a headless Chrome instance and exposes browser automation tools to an AI assistant—navigation, clicking, form filling, screenshot capture, and JavaScript evaluation.
Core features:
- Navigate to URLs in a real browser (JavaScript-rendered pages)
- Click elements, fill forms, extract text from the live DOM
- Take screenshots for visual verification
- Execute arbitrary JavaScript in the page context
Best for: Web scraping JavaScript-heavy SPAs that the Fetch server can’t handle, automated UI testing, or multi-step workflows that require filling out forms or interacting with web applications.
Note: As with other archived reference servers, the community has built more actively maintained alternatives. Playwright MCP (maintained by Microsoft) is a well-supported modern alternative for browser automation.
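If you go the Playwright MCP route, the config is a one-line `npx` invocation. The package name below is Microsoft's published npm package as I understand it; consult the Playwright MCP README for current options such as headless mode:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```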
🧠 Memory & Reasoning
8. MCP Memory Server
Source: github.com/modelcontextprotocol/servers
Maintained by: MCP steering group (official reference implementation)
License: MIT
Language models are stateless by nature—each conversation starts fresh. The Memory server solves this by providing a knowledge graph that persists between sessions. The model can store entities, relationships, and observations, then retrieve them in future conversations.
Core features:
- Create and update named entities with associated observations
- Store relationships between entities (e.g., “Project X depends on Library Y”)
- Retrieve entities by name or search
- Persistent storage in a local JSON file
Best for: Long-running agent workflows, personal assistants that need to remember user preferences, or development environments where you want the AI to maintain context about your codebase architecture across sessions.
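Config for the Memory server follows the same pattern as the other official npm-published servers. Some setups also set an environment variable to control where the knowledge-graph JSON file lives; treat the variable name as something to confirm in the server README rather than a given:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```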
Quick Comparison
| Server | Use Case | Source | Notes |
|---|---|---|---|
| Filesystem | File read/write | Official | Active, allowlist-based |
| SQLite | Local DB queries | Archived | Seek maintained forks |
| PostgreSQL | Prod DB exploration | Archived | Read-only; safe for replicas |
| Brave Search | Web search | Official (Brave) | Free API tier |
| Fetch | URL-to-markdown | Official | JS-free pages only |
| Git | Repo operations | Official | Full commit support |
| Puppeteer | Browser automation | Archived | See Playwright MCP |
| Memory | Cross-session context | Official | Knowledge graph |
How to Get Started
Install a compatible client: Claude Desktop and Cursor both support MCP out of the box. Configuration lives in `claude_desktop_config.json` (macOS: `~/Library/Application Support/Claude/`).

Add a server: Most servers are installed via `npx` (Node) or `uvx` (Python/uv). Example config for the Filesystem server:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/your/allowed/path"]
    }
  }
}
```

Browse the registry: The official MCP Registry and the community-maintained awesome-mcp-servers list hundreds of third-party servers for Slack, Notion, Linear, Stripe, AWS, and more.
FAQ
Q: Is MCP only for Claude?
No. MCP is an open standard. While Anthropic developed and published the specification, any AI client can implement support. Cursor, Zed, Cline (VS Code extension), Continue, and other tools have added MCP support. The protocol is designed to be model-agnostic.
Q: Are MCP servers safe to use with production databases?
It depends on the server. The PostgreSQL reference server is deliberately read-only, making it reasonable to point at a read replica. The SQLite server allows writes. Always review what tools a server exposes before connecting it to sensitive data, and apply the principle of least privilege—connect only to databases or directories the AI actually needs.
Q: What’s the difference between “active” and “archived” reference servers?
Anthropic moved several reference servers to a separate servers-archived repository. This doesn’t mean they’re broken—it means the MCP team is no longer adding features or fixing bugs in those implementations. For production use, prefer community-maintained alternatives or official vendor-published servers (like Brave’s search server).
Q: Can I build my own MCP server?
Absolutely. Official SDKs exist for TypeScript, Python, Go, Rust, Java, Kotlin, C#, Ruby, Swift, and PHP. The official "building a server" guide walks through creating a server from scratch. Most simple tool servers can be written in under 100 lines.
Q: Do MCP servers require internet access?
No. Servers like Filesystem, SQLite, Git, and Memory run entirely locally. They communicate with the AI client via stdio on your machine. Only servers like Brave Search or Fetch make outbound network requests—and only when explicitly invoked.
Q: How do I find new MCP servers?
The official MCP Registry is the canonical directory. The punkpeye/awesome-mcp-servers GitHub list and mcpservers.org are community-curated alternatives with broader coverage.
Final Thoughts
MCP is one of those infrastructure layers that feels minor until you use it—and then you can’t imagine going back. The combination of Filesystem + Git + Brave Search + Fetch covers the majority of developer research and coding workflows. Adding Memory on top gives you a surprisingly capable long-running agent setup.
For teams evaluating AI assistant tooling today, MCP compatibility is quickly becoming a table-stakes requirement. If you’re building internal tooling, wrapping your company’s APIs in an MCP server is one of the highest-leverage things you can do to unlock AI-powered workflows for your team.
If you want to go deeper on building production-grade AI agent systems, the book AI Engineering by Chip Huyen covers the architecture patterns underlying modern LLM applications—including tool use, retrieval, and agent design—and pairs well with hands-on MCP experimentation.
All server links point to official or community GitHub repositories. Pricing information (where applicable) reflects free/open-source availability. Always verify the latest status of any server before adopting it in production.