
My bad. I shouldn’t have mentioned LangChain here, because it’s a little beside my point. What I mean is: MCP seems designed for a world where users talk to an LLM, and the LLM calls software tools.

For the foreseeable future, especially in a business context, isn’t it more likely that users will still interact with structured software applications, and the applications will call the LLM? In that case, where does MCP fit into that flow?





It separates FE and BE for agent teams, just like we did with web apps. The team building your agent framework might not know the business domain of every piece of your data/API space that your agent will need to interact with. In that case, it makes sense for your different backend teams to also own the MCP server that your company's agent team will utilize.

Yeah, I don’t know. Let’s say an org wants to do discovery of what functions are available for an app across the org. Okay, that’s interesting. But each team could just as easily import a big file called all_functions.txt.

A Swagger API is already kind of like an MCP server, as is really any existing REST API (even better, because you don’t have to implement the interface). If I wanted to give my LLM brand-new functionality, all I’d have to do is define tool use for <random_api>, with zero implementation. I could also just point it to a local file and say: here are the functions available locally.
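To make the "zero implementation" point concrete, here is a minimal sketch of what defining tool use for an existing API can look like: a tool schema plus a pass-through to a REST endpoint. Every name here (`get_forecast`, the example.com URL, the `city` parameter) is invented for illustration, not a real API.

```python
import urllib.parse
import urllib.request

# A tool definition in the style most chat APIs accept. The name,
# description, and schema below are made up for illustration.
forecast_tool = {
    "name": "get_forecast",
    "description": "Fetch the weather forecast for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def call_tool(name: str, args: dict) -> str:
    # "Zero implementation": the tool body is just a pass-through to
    # an existing REST endpoint (example.com is a placeholder).
    if name == "get_forecast":
        url = "https://example.com/forecast?city=" + urllib.parse.quote(args["city"])
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode()
    raise ValueError(f"unknown tool: {name}")
```

The schema is what the model sees; the function is what your code runs when the model asks for the tool.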

Remember, the big hairy secret is that all of these things just plop out a blob of text that you paste back into the LLM prompt (populating context history). That’s all these things do.
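That loop can be sketched in a dozen lines. `ask_model` below is a stand-in for a real chat-completion call (it just pretends the model requests a tool once, then answers); the point is only that a tool result is nothing more than text appended to the context for the next call.

```python
def ask_model(messages):
    # Placeholder for a real chat API call. Pretends the model asks
    # for a tool on the first turn and answers on the second.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_time", "args": {}}
    return {"text": "It is noon."}

def run_tool(name, args):
    # A "tool" is just a function whose return value becomes text.
    return "12:00" if name == "get_time" else "unknown"

def agent_loop(prompt):
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = ask_model(messages)
        if "tool" in reply:
            result = run_tool(reply["tool"], reply["args"])
            # The "big hairy secret": the tool output is pasted back
            # into the context, and the model is called again.
            messages.append({"role": "tool", "content": result})
        else:
            return reply["text"]
```

Everything else — MCP, Swagger wrappers, agent frameworks — is plumbing around this loop.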

Someone is going to have to unconfuse me.


> it separates FE and BE for agent teams just like we did with web apps. the team building your agent framework might not know the business domain of every piece of your data/api space that your agent will need to interact with. in that case, it makes sense for your different backend teams to also own the MCP server that your company's agent team will utilize.

Why don’t they just own a REST or RPC server? This is the part of the MCP motivation I’m not totally getting. In fact, you can prove to yourself that your LLM can hook into almost any existing REST API in a few minutes, which gives it more existing options and functionality than just about anything else available right now.

Things like Swagger or GraphQL already provide discovery.


> This is the part of the MCP motivation I’m not totally getting

Would it help you to know that the original use case of MCP was describing, and facilitating communication with, servers that the LLM frontend runs locally and talks to over stdio? That remains an important use case.
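For concreteness: a local MCP server is typically a subprocess speaking JSON-RPC 2.0 over stdin/stdout, with one JSON message per line. Here is a sketch of the request framing a client would write; `tools/list` and `tools/call` are method names from the MCP spec, while the tool name and arguments are hypothetical.

```python
import json

def jsonrpc_request(req_id, method, params=None):
    # MCP's stdio transport is newline-delimited JSON-RPC 2.0:
    # the client writes requests to the server subprocess's stdin.
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

# Discover what tools the server offers:
list_tools = jsonrpc_request(1, "tools/list")

# Invoke one of the discovered tools (hypothetical name/arguments):
call_tool = jsonrpc_request(
    2, "tools/call", {"name": "get_forecast", "arguments": {"city": "NYC"}}
)
```

The server replies with JSON-RPC responses on its stdout; the frontend turns the `tools/list` result into the tool list it shows the model.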


Total beginner question: if the “structured software application” gives the LLM the prompt “plan out what I need to do for my upcoming vacation to NYC”, will an LLM with a weather tool know “I need to ask for the weather so I can make a better packing list”? Meanwhile, an LLM without a weather tool would either make the list without actual weather info, or your application would need to support the LLM asking “tell me what the weather is”, parse that, and feed the answer back in a chained response. If so, it seems like tools are helpful in letting the LLM drive a bit more, right?

If you have a weather tool available, it will appear in the list of available tools, and the LLM may or may not ask to use it. It is not certain that it will, but if it is a 'reasoning' model it probably will.

You need to be careful about creating a ton of tools and showing the model the full list, since it can overwhelm the model, and it can go down rabbit holes of using a bunch of tools to do things that aren't particularly helpful.

Hopefully you would have specific prompts and tools that handle certain types of tasks instead of winging it and hoping for the best.



