MCP has a team problem
Everyone's hooking up MCP servers right now. GitHub, Slack, Linear, Notion, internal stuff — a new one shows up every week and everybody just drops it into their config. It's great until you have five developers and eight servers, and then things start to get weird.
I've spent the last year building MCP tooling, and I keep watching teams hit the same wall. Everything's fine at first, and then one Thursday morning you wake up to an operational mess that snuck up on you while you were busy connecting things.
Configs diverge almost immediately
MCP clients keep server configs locally. Cursor has mcp.json, Claude Desktop has claude_desktop_config.json. Every developer has their own copy sitting on their own machine.
So developer A sets up the GitHub MCP server in February. Developer B starts in March and gets a Slack link to "the config." April rolls around, the config's been changed twice, nobody updated the Slack message, and the new hire burns 45 minutes trying to figure out why Linear won't connect. Turns out the team switched endpoints three weeks ago.
If you've ever dealt with .env drift before secrets managers came along — same energy. Except .env only bites you at deploy time. MCP configs mess with people's day-to-day workflow, which somehow makes it more annoying.
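In practice, drift looks like two copies of the same config that have quietly stopped agreeing. The URLs below are made up for illustration; the shape is what you'd find on two laptops on the same team:

```
# developer A's mcp.json, set up in February
"linear": { "url": "https://linear.old-proxy.internal/mcp", ... }

# developer B's, pasted from the stale Slack message in March
"linear": { "url": "https://mcp.linear.app/sse", ... }
```

Neither developer is wrong. There's just no single place where "the config" lives.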
Tokens everywhere, revocation nowhere
Every MCP server wants credentials. GitHub wants a PAT. Slack wants an OAuth token. Your internal API wants a bearer token. All of these end up in plaintext in local config files. On every developer's laptop.
Now someone leaves.
Quick — revoke their access. Except you can't, really. You don't know which tokens they had. You don't know if they copied the team's shared API key or rolled their own. So you do the only thing you can: rotate everything, across every service, then ping every remaining developer to update their config. There goes your afternoon.
Contractors and interns make this worse, obviously. Short-lived access to long-lived credentials. Classic recipe for trouble.
You're flying blind
An AI agent hammers your project management API 400 times in two minutes. You find out because the rate limiter fires and dumps a notification in your ops channel.
Which developer? Which conversation? What was the agent even trying to accomplish? No idea. MCP doesn't have a built-in audit trail. Client logs live on local machines. The upstream server just sees a valid token — it doesn't know or care who's behind it.
We solved this for HTTP APIs a decade ago. Request logging, rate limiting, usage attribution — that's what API gateways do. MCP doesn't have any of it, because MCP was designed for a single developer talking to a single server. The protocol itself is fine. The operational layer around it? Doesn't exist yet.
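To make that concrete, here's a minimal sketch of what the missing layer does. None of this exists in MCP or any SDK; the names are mine. The two jobs: attribute every tool call to a token, and throttle per (token, tool) with a sliding window.

```python
import time
from collections import defaultdict, deque

class GatewayLog:
    """Attribution: record who called what, when, and how long it took."""
    def __init__(self):
        self.entries = []

    def record(self, token_id, tool, duration_ms):
        self.entries.append({
            "token": token_id,          # which developer's token
            "tool": tool,               # which tool was called
            "ts": time.time(),          # when
            "duration_ms": duration_ms, # how long
        })

class RateLimiter:
    """Sliding-window limit per (token, tool) pair."""
    def __init__(self, max_calls, window_s):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls = defaultdict(deque)

    def allow(self, token_id, tool):
        now = time.monotonic()
        q = self.calls[(token_id, tool)]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        if len(q) >= self.max_calls:
            return False
        q.append(now)
        return True
```

Forty lines, and it's more observability than the protocol gives you today. The point isn't that this is hard to build; it's that every team currently has to build it themselves, or go without.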
Everyone gets every tool
Connect to a server, get all its tools. The intern who started two weeks ago sees the same delete_repository tool as the staff engineer who's been there four years. There's no scoping mechanism.
The current access control model is, essentially, "just don't call it." Which — look, that works right up until it doesn't. And when an AI agent is the one deciding what to call, you're relying on the LLM to understand org-level permission boundaries from a system prompt. That's not access control. That's a prayer.
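Real scoping means the filter runs before the model ever sees the tool list: enforced at the gateway, not suggested in a prompt. A minimal sketch, with made-up role names and tools (MCP has no equivalent of this today):

```python
# Hypothetical scope table. Nothing in the MCP spec defines roles;
# this is what a gateway would have to bolt on.
TOOL_SCOPES = {
    "intern": {"list_issues", "read_repository"},
    "staff":  {"list_issues", "read_repository", "delete_repository"},
}

def visible_tools(role: str, server_tools: list[str]) -> list[str]:
    """Filter a server's tool list down to what this role may see.
    For an intern, the model never learns delete_repository exists."""
    allowed = TOOL_SCOPES.get(role, set())
    return [t for t in server_tools if t in allowed]
```

The difference matters: a tool the model can't see is a tool it can't be prompt-injected into calling.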
It compounds
Any one of these problems is fine at 2 servers and 3 developers. Annoying, maybe. Manageable. But at 8 servers and 15 developers, they don't just add up — they multiply:
- More servers → more scattered credentials → uglier offboarding
- More developers → faster config drift → longer onboarding
- More tool calls → higher rate-limit risk → no way to throttle them
- More tools → bigger permission surface → more ways things go sideways
Each server on its own is simple. The system of servers is not.
The fix isn't new
It's the same pattern API teams settled on years ago. Put a gateway between clients and servers.
Instead of every developer connecting to 8 MCP servers individually, everyone points at one gateway endpoint. The gateway handles the ugly parts:
- One config — a single URL and token, not 8 server blocks
- Credentials in one place — upstream API keys live on the gateway, not on laptops
- Audit trail — every tool call logged: who, what, when, how long
- Rate limiting — per tool, per token, per team
- Scoped access — different tokens see different servers and tools
- One-click revocation — kill one token instead of rotating eight
```json
{
  "mcpServers": {
    "github": { "url": "...", "headers": { "Authorization": "Bearer ghp_..." } },
    "slack":  { "url": "...", "headers": { "Authorization": "Bearer xoxb-..." } },
    "linear": { "url": "...", "headers": { "Authorization": "Bearer lin_..." } },
    "sentry": { "url": "...", "headers": { "Authorization": "Bearer sntrys_..." } },
    "notion": { "url": "...", "headers": { "Authorization": "Bearer ntn_..." } }
  }
}
```
5 servers, 5 plaintext tokens, on every developer's machine.
```json
{
  "mcpServers": {
    "warpgate": {
      "url": "https://app.usewarpgate.com/mcp/bold-river",
      "headers": { "Authorization": "Bearer wg_tok_..." }
    }
  }
}
```
One endpoint. One token. Revoke it in one click.
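And scoped access falls out naturally: issue different tokens with different permissions. The format below is purely hypothetical, not Warpgate's (or anyone's) actual config syntax, but it shows the idea:

```json
{
  "tokens": {
    "wg_tok_staff":  { "servers": ["github", "slack", "linear"], "rateLimit": "120/min" },
    "wg_tok_intern": { "servers": ["linear"], "denyTools": ["delete_*"], "rateLimit": "30/min" }
  }
}
```

The intern's token never sees GitHub at all, and the intern's laptop never holds a GitHub credential.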
That's what I've been building with Warpgate. There are others in this space too — Vendia and MintMCP come at it from different angles — but the pattern matters more than the vendor.
Running MCP across a team needs the same infrastructure that running HTTP APIs across a team needed. Right now, adoption has outrun the tooling. If you're at 2–3 servers, you probably don't feel any of this yet. Bookmark this for when you do.
Aron Rotteveel, creator of Warpgate