If you haven't tried any of these yet, the first place to start is Claude Desktop. If you'd like to write your own agents, consider https://github.com/evalstate/fast-agent
EDIT: I may have misunderstood your question. If you're asking "how can I make an API call to OpenAI, and have OpenAI call an MCP server I'm running as part of generating its response to me", the answer is "you can't". You'll want a proxy API that you call which is actually an MCP client, responsible for coordinating between the MCP servers and the OpenAI API upstream agent.
You can run remote MCP servers and configure whatever client you like to use them. This should work even via OpenAI's API (perhaps not yet, but it's just another 'tool' to call).
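To make the proxy pattern from the comment above concrete, here's a rough sketch of the coordination loop such a proxy would run. Everything here is a hypothetical stand-in: `list_mcp_tools`, `call_mcp_tool`, and `call_openai` are placeholders for real MCP and OpenAI API calls, with canned responses so the control flow is visible.

```python
import json

# Hypothetical stand-ins for real MCP / OpenAI calls -- this sketches only the
# coordination loop a proxy would run, not a working client.
def list_mcp_tools():
    # A real proxy would get this from the MCP server's tools/list response.
    return [{"name": "get_weather", "description": "Look up current weather",
             "parameters": {"type": "object",
                            "properties": {"city": {"type": "string"}}}}]

def call_mcp_tool(name, arguments):
    # A real proxy would forward this to the MCP server via tools/call.
    return json.dumps({"city": arguments["city"], "temp_c": 18})

def call_openai(messages, tools):
    # A real proxy would POST to the upstream chat endpoint here. Simulated:
    # the model requests the weather tool once, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": None,
                "tool_calls": [{"id": "call_1",
                                "function": {"name": "get_weather",
                                             "arguments": '{"city": "Oslo"}'}}]}
    return {"role": "assistant", "content": "It's 18 degrees C in Oslo."}

def handle_user_message(text):
    """The proxy loop: advertise MCP tools upstream, execute tool calls locally,
    and feed the results back until the model produces a final answer."""
    tools = list_mcp_tools()
    messages = [{"role": "user", "content": text}]
    while True:
        reply = call_openai(messages, tools)
        messages.append(reply)
        if not reply.get("tool_calls"):
            return reply["content"]
        for tc in reply["tool_calls"]:
            result = call_mcp_tool(tc["function"]["name"],
                                   json.loads(tc["function"]["arguments"]))
            messages.append({"role": "tool",
                             "tool_call_id": tc["id"],
                             "content": result})

print(handle_user_message("What's the weather in Oslo?"))
```

The point is that the proxy, not OpenAI, is the MCP client: OpenAI only ever sees tool schemas and tool results; the proxy owns the connection to your local servers.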
Zed already has built-in task running… you can use the same mechanism to run anything you want, like a command to build and run your project. You can even add custom keybindings to tasks.
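For reference, a minimal task definition goes in `.zed/tasks.json`, and a keybinding that spawns it by name goes in your keymap. This is a sketch from memory (the label, command, and key chord are just examples), so check Zed's tasks documentation for the exact fields:

```json
[
  {
    "label": "Run project",
    "command": "cargo run"
  }
]
```

And in `keymap.json`:

```json
[
  {
    "context": "Workspace",
    "bindings": {
      "alt-r": ["task::Spawn", { "task_name": "Run project" }]
    }
  }
]
```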
I’m trying to find a way to have it as a button, so every time I want to run / debug the code I can click it instead of using a shortcut, but I haven’t found one.
The pricing is completely prohibitive. It’s a shame, looks cool but I’m not gonna bother with a pricing model that’s clearly telling me to go look elsewhere as a solo dev
I would try this but I want to be able to use it on mobile too which requires hosting. I don’t see how valuable an LLM interface can be if I can only use it on desktop
How can this company possibly support maintaining 2 browsers… I don’t hate the idea they are going for but I swapped off of Arc the moment I heard they were trying to do yet another browser
The thing about local-first syncing options like this is that they mostly do not work on mobile. For example, an iPhone can't have Dropbox sync arbitrary text files in the background and expose them to an app as a regular filesystem.
Not saying that's not iPhone's fault, but I doubt any of this works on that platform
I've been a happy user of a PWA doing local sync. That said, the data it needs to sync can fit in localStorage.
Not affiliated in any way, but the app is http://projectionlab.com/ and it lets you choose between JSON import/export, localStorage sync, and server-based sync as desired. Since it has an easy-to-use import/export, syncing with some other cloud provider on iOS is basically just a matter of "saving the file," since iOS lets you do background sync of authorized Files providers.
Even though it's a web app, being able to download the page and then run it entirely offline in a fresh browser window each time built a lot of trust with me, to the point where I now mostly run it with localStorage enabled and only occasionally check its offline behavior.
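The import/export approach described above is the simplest of the three sync options: the whole app state serializes to a single JSON file that any cloud Files provider can sync. A generic illustration of that round trip (not ProjectionLab's actual code; the state shape here is invented):

```python
import json
from pathlib import Path

def export_state(state: dict, path: Path) -> None:
    # One self-contained file: easy to back up, diff, or drop into
    # iCloud Drive / Dropbox for the OS to sync.
    path.write_text(json.dumps(state, indent=2))

def import_state(path: Path) -> dict:
    return json.loads(path.read_text())

backup = Path("plans-backup.json")
export_state({"plans": [{"name": "retire-2040", "savings_rate": 0.35}]}, backup)
restored = import_state(backup)
print(restored["plans"][0]["name"])  # retire-2040
```

Because the file is plain JSON, the app never needs its own sync backend; the user's existing cloud storage does the transport.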
There are two options. "Not interested" nudges the algorithm away from the topic while "Don't recommend <type> from this channel" immediately hides the channel from recommendations.
If your nudges don't feel effective enough, you can also manage your view history. Removing any Minecraft videos you may have accidentally watched from your history will help hide them from future recommendations.
File feedback in the app. My understanding is that people in charge of the recommendation system actually look at it and try to use it to make the system better.
Yes, it is currently dynamic, and that means you won't get much in the way of type information from the language itself; it's not currently a statically typed language.
I thought they ran locally only, so how would the OpenAI API connect to them when handing a request?