I am just glad that we now have a simple path to authorized MCP servers. Massive shout-out to the MCP community and folks at Anthropic for corralling all the changes here.
The main alternative for a plug-and-play (just configure a single URL) non-MCP REST API would be to use OpenAPI definitions and ingest them accordingly.
However, as someone who has tried to use OpenAPI for this in the past (both via OpenAI's "Custom GPTs" and by auto-converting OpenAPI specifications to a list of tools), in my experience almost every existing OpenAPI spec out there is insufficient as a basis for tool calling in one way or another:
- Largely insufficient documentation on the endpoints themselves
- REST is too open to interpretation, and without operationIds (which almost nobody in the wild defines), there is usually context missing on what "action" is being triggered by POST/PUT/DELETE endpoints (e.g. many APIs do a delete of a resource via a POST or PUT, and some APIs use DELETE to archive resources)
- baseUrls are often wrong/broken and assumed to be replaced by the API client
- underdocumented AuthZ/AuthN mechanisms (usually only present in the general description comment on the API, and missing on the individual endpoints)
In practice you often have to remedy that by patching the officially distributed OpenAPI specs to make them good enough as a basis for tool calling, which makes the whole thing not very plug-and-play.
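For illustration, here's roughly what that patching looks like; the file names, endpoint, and the archive-vs-delete fix below are all hypothetical, but they mirror the problems listed above:

```python
# Hypothetical sketch: patching a vendor's published OpenAPI spec so it can
# actually serve as a basis for tool calling. File names and paths are made up.
import json

with open("vendor_openapi.json") as f:
    spec = json.load(f)

# Fix the broken/placeholder server URL the vendor shipped.
spec["servers"] = [{"url": "https://api.example.com/v2"}]

# Give an ambiguous endpoint an operationId and an honest description --
# "POST /items/{id}" alone tells the model very little about the action taken.
op = spec["paths"]["/items/{id}"]["post"]
op["operationId"] = "archive_item"
op["description"] = "Archives (does NOT delete) the item with the given id."

# Spell out the auth scheme on the operation, not just in the global blurb.
op["security"] = [{"bearerAuth": []}]

with open("patched_openapi.json", "w") as f:
    json.dump(spec, f, indent=2)
```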
I think the biggest upside that MCP brings, all "content"/"functionality" being equal, is that using it instead of just plain REST acts as a badge that says "we had AI usage in mind when building this".
On top of that, MCP also standardizes mechanisms like elicitation that with traditional REST APIs are completely up to the client to implement.
I can’t help but notice that so many of the things mentioned are not at all due to flaws in the protocol, but developers specifying protocol endpoints incorrectly. We’re one step away from developers putting everything as a tool call, which would put us in the same situation with MCP that we’re in with OpenAPI. You can get that badge with a literal badge; for a protocol, I’d hope for something at least novel over HATEOAS.
Agreed. Without standards, we wouldn’t have the rich web-based ecosystem we have now.
As an example, anyone who's coded email templates will tell you: it's hard. While the major browsers adopted the W3C specs, email clients (i.e. email renderers) never adopted them, or such a W3C email HTML spec never existed. So something that renders correctly in Gmail looks broken in Yahoo Mail, in Safari on iOS, etc.
Standards are very important, especially extensible ones where proposals are adopted when they make sense - this means companies can still innovate but users get the benefit of everything just working.
But browsers/the web ecosystem are still a bad example, as we had decades of browsers supporting their own particular features/extensions. This has converged somewhat, pretty much because everything now uses Chrome underneath (bar Safari and Firefox).
But even so...if I write an extension while using Firefox, why can't I install that extension in Chrome? And vice-versa? Even bookmarks are stored in slightly different formats.
It is a massive pain to align technology like this but the benefits are huge. Like boxing developers in with a good library (to stop them from doing arbitrary custom per-project BS), I think all software needs to be boxed into standards with provisions for extension/innovation, rather than this pick & choose BS because muh lock-in.
You have to write code for the MCP server, and code to consume it too. It's just that the LLM vendors decided to have the consuming side built in, which people question, since they could just as well have done the same for OpenAPI, gRPC, and what not, instead of a completely new thing.
The analogy that was used a lot is that it's essentially USB-C for your data being connected to LLMs. You don't need to fight 4,532,529 standards - there is one (yes, I am familiar with the XKCD comic). As long as your client is MCP-compatible, it can work with _any_ MCP server.
The whole USB C comparison they make is eyeroll inducing, imo. All they needed to state was that it was a specification for function calling.
My gripe is that they had the opportunity to spec out tool use in models and they did not. The client->LLM implementation is up to the implementor, and models differ, with different tags like <|python_call|> etc.
Clearly they need to try explaining it in easy terms. The number of people that cannot or will not understand the protocol is mind-boggling.
I'm with you on the need for an Agent -> LLM industry-standard spec. The APIs are all over the place and it's frustrating. If there were a spec for that, then agent development would become simply focused on the business logic, and the LLM and the Tools/Resources would just be standardized components you plug together like Lego. I've basically done that for our internal agent development. I have a Universal LLM API that everything uses. It's helped a lot.
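As a very rough sketch of what that kind of universal layer can look like (all names here are hypothetical, not the actual internal API): one internal tool-call shape, with per-provider normalizers absorbing the wire-format differences:

```python
# Hypothetical sketch of a "universal LLM API": agent code only ever sees
# ToolCall; per-provider normalizers hide each vendor's response format.
import json
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    arguments: dict

def normalize_openai(resp: dict) -> list[ToolCall]:
    # OpenAI-style responses carry tool calls with JSON-encoded argument strings.
    message = resp["choices"][0]["message"]
    return [
        ToolCall(name=c["function"]["name"],
                 arguments=json.loads(c["function"]["arguments"]))
        for c in message.get("tool_calls") or []
    ]

def normalize_anthropic(resp: dict) -> list[ToolCall]:
    # Anthropic-style responses carry tool calls as "tool_use" content blocks.
    return [
        ToolCall(name=block["name"], arguments=block["input"])
        for block in resp["content"] if block.get("type") == "tool_use"
    ]
```

Swapping models then just means wiring in another normalizer; the business logic never touches a vendor-specific format.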
The comparison to USB C is valid, given the variety of unmarked support from cable to cable and port to port.
It has the physical plug, but what can it actually do?
It would be nice to see a standard aiming for better UX than USB C. (Imho they should have used colored micro dots on device and cable connector to physically declare capabilities)
Certainly valid in that just like various usb c cables supporting slightly different data rates or power capacities, MCP doesn't deal with my aforementioned issue of the glue between MCP client and model you've chosen; that exercise is left up to us still.
My gripe with USB C isn't really on the nature, but on the UX and modality of capability discovery.
If I am looking at a device/cable, with my eyes, in the physical world, and ask the question "What does this support?", there's no way to tell.
I have to consult documentation and specifications, which may not exist anymore.
So in the case of standards like MCP, I think it's important to come up with answers to discovery questions, lest we all just accept that nothing can be done and that the clusterfuck 10+ years from now was inevitable.
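To be fair, MCP does ship a programmatic answer to "what does this support?": capability discovery is part of the protocol. A minimal sketch of the two JSON-RPC messages involved (the method names are from the spec; the protocol version string is an assumption, and transport plumbing plus the follow-up initialized notification are omitted):

```python
# The two discovery messages an MCP client sends: the server's "initialize"
# response advertises capability groups (tools/resources/prompts), and
# "tools/list" enumerates the concrete tools on offer.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # assumed current; check the spec
        "capabilities": {},
        "clientInfo": {"name": "capability-probe", "version": "0.1.0"},
    },
}

list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
```

That answers discovery for software, of course, not for a human squinting at a cable.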
A good analogy might be imagining how the web would have evolved if we'd had TCP but no HTTP.
100% agree but with private enterprise this is a problem that can never be solved; everyone wants their lock-in and slice of the cake.
I would say for all the technology we have in 2025, this has certainly been one of the core issues for decades & decades. Nothing talks to each other properly, nothing works with another thing properly. Immense effort has to be expended for each thing to talk to or work with the other thing.
I got a MacBook Air as a personal laptop for light dev. It can't access the Android filesystem with a phone plugged in. Windows can do it. I know Apple's corporate reasoning, but it's just an example of purposeful incompatibility.
As you say, all these companies use standards like TCP/HTTP/Wifi/Bluetooth/USB/etc and they would be nowhere without them - but literally every chance they get they try to shaft us on it. Perhaps AI will assist in the future - tell it you want x to work with y and the model will hack on it until the fucking thing works.
Just to add one piece of clarification - the comment around authorization is a bit out-of-date. We've worked closely with Anthropic and the broader security community to update that part of MCP and implement a proper separation between resource server (RS) and authorization server (AS) when it comes to roles. You can see this spec in draft[1] (it will be there until a new protocol version is ratified).
Can only speak for the authorization spec, where I am actively participating - zero. The entire spec was written, reviewed, re-written, and edited by real people, with real security backgrounds, without leaning into LLM-based generation.
Idk, I'm kind of agnostic and ended up throwing it in there.
Regurgitating the OAuth draft doesn't seem that useful imho, and why am I forced into it if I'm using HTTP? Seems like there are plenty of use cases where an unattended thing would like to interact over HTTP, where we usually use other things besides OAuth.
It all probably could have been replaced by
- The Client shall implement OAuth2
- The Server may implement OAuth2
For local servers this doesn't matter as much. For remote servers - you won't really have any serious MCP servers without auth, and you want to have some level setting done between client and servers. OAuth 2.1 is a good middle ground.
That's also where, with the new spec, you don't actually need to implement anything from scratch. Server issues a 401 with WWW-Authenticate, pointing to metadata for authorization server locations. Client takes that and does discovery, followed by OAuth flow (clients can use many libraries for that). You don't need to implement your own OAuth server.
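A rough sketch of what that client-side flow looks like, assuming the draft spec's RFC 9728/RFC 8414-style metadata (the server URL is made up, and the header parsing is deliberately naive):

```python
# Hypothetical client-side discovery flow against a remote MCP server.
import httpx

MCP_URL = "https://mcp.example.com/mcp"  # made-up server

resp = httpx.post(MCP_URL, json={"jsonrpc": "2.0", "id": 1,
                                 "method": "initialize", "params": {}})
if resp.status_code == 401:
    # WWW-Authenticate carries a pointer to the protected resource metadata, e.g.
    # Bearer resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource"
    challenge = resp.headers["WWW-Authenticate"]
    meta_url = challenge.split('resource_metadata="')[1].split('"')[0]
    resource_meta = httpx.get(meta_url).json()

    # The resource metadata names its authorization server(s); fetch the AS
    # metadata (RFC 8414) to discover the endpoints.
    as_url = resource_meta["authorization_servers"][0].rstrip("/")
    as_meta = httpx.get(f"{as_url}/.well-known/oauth-authorization-server").json()

    # From here it's a standard OAuth 2.1 authorization-code + PKCE flow against
    # as_meta["authorization_endpoint"] and as_meta["token_endpoint"]; the
    # resulting bearer token goes in the retried request's Authorization header.
```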
But where would you get bearer tokens? How would you manage consent and scopes? What about revocation? OAuth is essentially the "engine" that gives you the bearer tokens you need for authorization.
I know it's not auth-related, but the main MCP "spec" says that it was inspired by LSP (language server protocol). Wouldn't something like HATEOAS be more apt?
Author of the article here (thank you mpweiher for the submission). Pi-Hole has been, hands-down, the best infrastructure investment in our household. At this point I have 2MM+ domains blocked and the performance has been great.
Coordinator of the authorization RFC linked in this post[1].
The protocol is in very, very early stages and there are a lot of things that still need to be figured out. That being said, I can commend Anthropic on being very open to listening to the community and acting on the feedback. The authorization spec RFC, for example, is a coordinated effort between security experts at Microsoft (my employer), Arcade, Hellō, Auth0/Okta, Stytch, Descope, and quite a few others. The folks at Anthropic set the foundation and welcomed others to help build on it. It will mature and get better.
Impressive to see this level of cross-org coordination on something that appears to be maturing at pace (compared to other consortium-style specs/protocols I've seen attempted).
This reminds me of something Adam Smith said in The Wealth of Nations:
"People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices."
Ymmv, but I cannot imagine that this "innovation" will result in a better outcome for the general public.
It's a hobby project, but one that I love working on because it unlocks some _really_ great hardware to be open to do anything I want it to be rather than be constrained by out-of-the-box client software that asks me to sign in with an account to get an extension installed.
Shockingly, they do! Quite a few folks that I've talked to recently expressed that they are subscribed to more than one email newsletter and read them fairly consistently.
Fair feedback. In my eyes, I treat any personal site as a digital garden - I am not really sticking to the "pure" definition. If you are putting an effort to curate and grow your own site, is that not a digital garden? I think it is, but that's my own take on it.
I intentionally try to avoid sites that are in any shape _not_ personal or otherwise representative of an individual trying to stake their little corner of the internet.
I dunno, sounds more like some gatekeeping, or maybe I'm biased because I hate the term for no discernible reason.
> has content of different levels of development, is imperfect and often a playground for experimentation, learning, revising, iteration, and growth for diverse content
So because I polish my "posts" to a certain degree before publishing, it's not a digital garden? Because 50% of my blog posts are "timeless" in the sense that they're about... stuff that exists... not current developments, it's not a digital garden? Because I never delete stuff (as if people with wikis would delete stuff :P)... and so on.
I mean, I couldn't care less, but I feel (and this seems to be a common theme) that the indieweb people are mostly dogmatic about their definitions and not very encouraging (which is also why I tried to take part in the IRC channel years ago and left, frustrated).
But yeah, if someone wants to own the definitions of blog and digital garden and won't accept a certain overlap with "personal website", sure.
> I dunno, sounds more like some gatekeeping, or maybe I'm biased because I hate the term for no discernible reason.
I'm not too fond of the term either, but I think that's because it got gentrified and somewhat misunderstood. Everything personal on the web was suddenly a garden.
> But yeah, if someone wants to own the definitions of blog and digital garden and won't accept a certain overlap with "personal website", sure.
Of course there's an overlap. My point was not that the listed sites had to fulfill the criteria for classification, but if there's not a single hint of gardening going on, then we might as well call en.wikipedia.org a blog?
Also, is a website "personal" if it's just there to market your services? Sure, it's personal as in your website, but content wise it's not particularly personal. Gardens or personal wikis are usually not there to market services, but to build some kind of personal knowledge base (that other nerds, not employers, might find interesting).
Point taken, maybe I got a little sidetracked with the direction of the discussion and not the original website and its content. FWIW, I think there are wikis that are non-personal (Wikipedia, tech, etc.), and there can be websites by individuals that are just "marketing" in the wider sense, but I would not call them personal websites (i.e. just a portfolio or "here are my socials").
I don't care much for labels. I have one rule for websites that everyone should follow (lol): whatever you are doing, keep doing it, stick to the theme/topic, stick to the update frequency. It is only wrong if you change either.
If you keep calling your dog a "cat" it won't start meowing.
The list is really disappointing; not only is it hard to find an actual digital garden on the list, most aren't even blogs, but simple resumes or portfolios.