Taming the Wild West of MCP Servers

Despite its documented shortcomings, MCP has become the de facto standard for connecting your AI to the outside world of apps, with every major developer IDE now supporting it. About two months ago, I was playing around with the Cline VS Code extension and decided to try its “1-click” MCP server installations. I searched for “postgres” and clicked on the first result, confident that I’d be up and running within a few seconds.
A minute later, I was greeted with a $0.13 LLM usage bill, a plethora of messages and plans from the installer script, and I was no closer to actually installing the MCP server!

It seems like the installer:
Decided to build a Docker image.
This failed less than halfway through, apparently because I didn’t have BuildKit installed on my Mac.
The above error in the terminal was not detected by the installer script, which blithely continued on and reported that “the Docker image… has been successfully built”.

The whole experience makes the Apple App Store look like Valhalla. But OK, that’s just one example; maybe it was an exception? Let’s look at another server, one maintained by an established company. Algolia has an official MCP server, and the installation process is to… unzip a file, remove the quarantine flag, and run the binary (which is only supported on macOS)?

As an individual developer, dealing with issues like this, even for the half-dozen servers you may want to use regularly, is a bit of an inconvenience, but nothing insurmountable; within a half hour or so you should be able to sort out your environment and all the dependencies, and have your servers set up. As more SaaS companies shift to hosting remote MCP servers, this installation headache becomes less of an issue, but for businesses with hundreds or thousands of engineers, all the enterprise complexities remain, especially for security and IT teams, who struggle with questions like:
How do we set up a centralized, hosted MCP server for all our engineers to connect to, to make onboarding easy, and use single sign-on for auth? Oh, and it has to be deployed on-premise so we can connect to our databases, warehouses, data pipeline orchestrator, etc., which are all behind the VPN/firewall.
How do we control which apps we allow our developers’ AIs to connect to? And restrict high-risk tools and resources of specific MCP servers?
How do we keep an audit log of every interaction between the AI and an MCP server?
We need an integration with (random legacy tool) that doesn’t have an MCP server, how do we do that?
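The audit-log question above is more tractable than it sounds, because every MCP interaction is a JSON-RPC message that can be intercepted in transit. Here is a minimal sketch of that idea in Python; the `tools/call` method name comes from the MCP spec, but the log format, function names, and stand-in transport are all illustrative inventions, not any real gateway's API.

```python
import json
import time

# Illustrative audit shim: record every JSON-RPC message flowing between an
# AI client and an MCP server before forwarding it. The log schema here is
# made up for the sketch.
AUDIT_LOG = []

def audit_and_forward(raw_message: str, forward):
    """Log a JSON-RPC message, then hand it to `forward` (the real transport)."""
    msg = json.loads(raw_message)
    AUDIT_LOG.append({
        "ts": time.time(),
        "method": msg.get("method"),                    # e.g. "tools/call"
        "tool": (msg.get("params") or {}).get("name"),  # tool being invoked, if any
    })
    return forward(raw_message)

# Usage with a stand-in transport in place of a real server connection:
reply = audit_and_forward(
    json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                "params": {"name": "query_database",
                           "arguments": {"sql": "SELECT 1"}}}),
    forward=lambda raw: '{"jsonrpc": "2.0", "id": 1, "result": {}}',
)
```

A centralized gateway that owns the transport can apply the same interception point to enforce tool allow-lists, which is the preceding question on the list.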
Integration challenges are nothing new, and there are some standard ways of approaching things like this.
Let’s consider them one at a time:
Approach 1: Write a single MCP server with connections to every API you need

We know of Fortune 500 companies that have tried this. It starts out well enough, with a small team of 1-2 engineers. They read the MCP spec, add a layer of abstraction over different APIs, and start off with wrappers for a few commonly used APIs. It works great. But soon the requests start coming in from the engineers: can you support Slack? What about the wiki? And the CI/CD tool? The team eventually gives up trying to support everything and just asks the end users to write their own connections following the example code, which of course no one does, because who has time for that?
But, you say, there’s no need to reinvent the wheel. Every company is going to need this, so there should just be a few providers that set up all the MCP servers, and the rest of us can use their platforms. This would be analogous to how ETL apps like Fivetran and Airbyte have connectors for the most common apps.
There are a few problems with this approach:
Good MCP servers shouldn’t just be API wrappers, but opinionated functions that make it easier for the LLMs to reliably do their job. For example, the CircleCI API requires multiple API calls to be strung together to do something like get the logs for a recent build failure in a CI/CD pipeline. A simple API wrapper is not going to make that easy for an LLM to figure out in a reliable manner.
A third-party MCP server is unlikely to ever be as good, or as up to date, as the official MCP servers that some companies will offer themselves. Sure, you can convert the GitHub OpenAPI spec into an MCP server and filter it down to just the APIs you need. But is your company going to ship updates to it almost every day, like GitHub does? Are you going to fix its bugs and security vulnerabilities faster than GitHub itself? Highly unlikely.
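The CircleCI example above can be made concrete. The sketch below contrasts a raw wrapper with an opinionated tool that does the whole multi-step lookup in one call; the endpoint paths loosely mimic CircleCI's v2 API shape, but `FakeClient` and all the response payloads are invented so the example runs offline.

```python
# Illustrative only: an "opinionated" MCP tool versus a raw API wrapper.
# With a plain wrapper, the LLM must plan this chain of calls itself;
# here the chain is baked into one deterministic function.

class FakeClient:
    """Stand-in for an HTTP client, returning canned responses by path."""
    def __init__(self, responses):
        self.responses = responses

    def get(self, path):
        return self.responses[path]

def get_latest_failure_logs(client, project: str):
    """One tool call: latest pipeline -> workflows -> failed job -> logs."""
    pipeline = client.get(f"/project/{project}/pipeline")["items"][0]
    for wf in client.get(f"/pipeline/{pipeline['id']}/workflow")["items"]:
        for job in client.get(f"/workflow/{wf['id']}/job")["items"]:
            if job["status"] == "failed":
                return client.get(f"/project/{project}/job/{job['number']}/logs")
    return "No failed jobs in the latest pipeline."

# Usage with canned data:
client = FakeClient({
    "/project/acme/pipeline": {"items": [{"id": "p1"}]},
    "/pipeline/p1/workflow": {"items": [{"id": "w1"}]},
    "/workflow/w1/job": {"items": [{"number": 7, "status": "failed"}]},
    "/project/acme/job/7/logs": "Error: tests failed in step 'unit'",
})
logs = get_latest_failure_logs(client, "acme")
```

Exposing `get_latest_failure_logs` as a single tool gives the model one reliable action instead of four chances to mis-sequence API calls.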
In summary, even if there were vendors who dedicated their existence to building and maintaining MCP servers for the most commonly used developer tools, you’d have to trade off the quality and reliability of those integrations for scalability.
Approach 2: Just set up a gateway and run a Docker container for each existing MCP server and employee that needs it

If you compiled a registry of available MCP servers and their Docker images, you could set up something akin to a gateway that could spin up, on the fly, a container for each employee that needs it, for each app they’d like to connect to.
This scales as O(m × n), where m is the number of apps per employee and n is the number of employees, which is not great. Furthermore, managing potentially hundreds of unique Docker images and thousands of containers is a logistical nightmare. I’ve worked at large companies where it took weeks just to deploy a single Docker image (Kubernetes, firewall rules, Terraform configs for load balancers, deployment tickets and approvals). Not to mention ongoing security tickets and upgrade requests.
The downsides to this approach are that it quickly becomes very expensive to scale, and doesn’t offer a complete solution, as not every tool will offer an MCP server.
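The O(m × n) growth is easy to make tangible with back-of-the-envelope numbers (the head counts below are made up for illustration, not from the source):

```python
# Per-employee, per-app containers: the count is the product of the two.
engineers = 500          # n: engineers in the org (illustrative)
apps_per_engineer = 8    # m: MCP-connected apps each one uses (illustrative)

containers = engineers * apps_per_engineer
print(containers)  # -> 4000 containers to schedule, patch, and monitor
```

Every new app adopted org-wide adds another 500 containers, while a shared single-image deployment stays flat regardless of either number.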
Is there an alternative to the options above, that avoids each one’s disadvantages, but maintains scalability, quality, and ease of use?
We’ve developed a federated MCP server that allows you to run and proxy any supported MCP server in a single Docker container. We support servers written in Python, Node, and Go, which accounts for the vast majority of all the servers on GitHub, along with any remote servers that are hosted by SaaS vendors themselves. You just specify which servers you want to enable, and we launch them on the fly when the container starts. It can run locally on a developer’s laptop, in the cloud, or on-premise for an entire organization. For enterprise customers, we also bake the requirements of each server into the Docker image itself for greater security and speed.
For apps that don’t have their own MCP servers, we have an open source library that lets us connect to every major database/data warehouse, and to any API with an OpenAPI spec, which can be customized to your needs. We’ll be adding over a thousand apps within the next two months for customers who are able to leverage the public cloud.
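To give a feel for the OpenAPI route, here is a toy sketch of deriving tool definitions from a spec. The spec fragment is invented, and a real converter also has to handle parameter schemas, auth, and pagination, which this version deliberately ignores.

```python
# Toy sketch: turn each operation in a (minimal) OpenAPI spec into a tool
# stub that an LLM could be offered. Real specs carry much more detail.

def tools_from_openapi(spec):
    """Collect one tool definition per path+method in the spec."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            tools.append({
                # operationId makes a natural tool name when present
                "name": op.get("operationId") or f"{method}_{path.strip('/')}",
                "description": op.get("summary", ""),
            })
    return tools

# Usage with an invented spec fragment:
spec = {
    "openapi": "3.0.0",
    "paths": {
        "/indexes/{index}/query": {
            "post": {"operationId": "search_index",
                     "summary": "Search records in an index"}
        }
    },
}
tools = tools_from_openapi(spec)
```

The point is that the long tail of apps only needs a spec, not a hand-written server, though the quality caveats from Approach 1 still apply to auto-generated tools.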
This offers you the best of both worlds:
the low cost and ease of deployment of a single Docker image for all your employees
support for all official MCP servers, so you’ve got the best quality, reliability, and security straight from the source
a scalable framework for the long tail of apps that you want to connect to
