Hacker News | fcap's comments


In my opinion, to really lift off here you need to make sure these agents can be used in production. That means the complete supply chain has to be considered. Deployment is the heavy part; most people can already run it locally. If you close that gap, people will be able to adopt it en masse. I'm totally fine if you monetize it as a cloud service, but provide full docs covering everything from code and testing to monitoring and deployment. And one more thing: show what the framework is capable of. What can I do with it? Lots of videos and use cases here. Every single one needs to be pushed out.


OpenAI will pay most of the deal with an inflated valuation. It could be that Jony wakes up one morning and finds OpenAI back on the ground. Sam is a master of hype and inflation. He has cracked the code to generate free money and pays companies with OpenAI equity.


Why fork and use open-codex when the original OpenAI Codex CLI was opened up for multiple models? Just trying to understand.


Hey, that's a very good question, and I've answered it before. I hope you don't mind if I simply copy-paste my previous answer:

Technically you can use the original Codex CLI with a local LLM, provided your inference provider implements the OpenAI Chat Completions API, including function calling.
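To make "implements the OpenAI Chat Completions API" concrete, here's a minimal sketch of the request shape such a provider has to accept. The base URL assumes an Ollama server exposing its OpenAI-compatible endpoint; adjust it for lmstudio, vllm, etc. The model name and dummy API key are illustrative assumptions.

```python
import json
import urllib.request

# Assumption: a local Ollama server; lmstudio/vllm expose the same
# /v1/chat/completions route at their own ports.
BASE_URL = "http://localhost:11434/v1"

def chat_completion_request(model: str, messages: list) -> urllib.request.Request:
    """Build (but don't send) an OpenAI-style Chat Completions request."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            # Most local servers accept any token here; it just has to be present.
            "Authorization": "Bearer local-dummy-key",
        },
    )

req = chat_completion_request("llama3", [{"role": "user", "content": "hi"}])
print(req.full_url)  # http://localhost:11434/v1/chat/completions
```

If the tool you point at this endpoint also relies on function calling, the provider must additionally support the `tools`/`tool_calls` fields of the same API, which is exactly where many local setups fall short.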

But based on what I had in mind - the idea that small models can be really useful if optimized for very specific use cases - I figured the current architecture of Codex CLI wasn't the best fit for that. So instead of forking it, I started from scratch.

Here's the rough thinking behind it:

   1. You still have to manually set up and run your own inference server (e.g., with ollama, lmstudio, vllm, etc.).
   2. You need to ensure that the model you choose works well with Codex's pre-defined prompt setup and configuration.
   3. Prompting patterns for small open-source models (like phi-4-mini) often need to be very different - they don't generalize as well.
   4. The function calling format (or structured output) might not even be supported by your local inference provider.

Codex CLI's implementation and prompts seem tailored for a specific class of hosted, large-scale models (e.g. GPT, Gemini, Grok). But if you want to get good results with small, local models, everything - prompting, reasoning chains, output structure - often needs to be different. So I built this with a few assumptions in mind:
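On point 4 above: when a local provider lacks native function calling, a common workaround is to ask the model to emit a JSON object in its reply and recover it yourself. A minimal sketch (the `read_file` tool and its schema are made up for illustration):

```python
import json

def extract_tool_call(model_output: str):
    """Parse the first JSON object found in free-form model text.

    Small local models often wrap structured output in chatter,
    so we scan for a '{' and try to decode from there.
    """
    decoder = json.JSONDecoder()
    for i, ch in enumerate(model_output):
        if ch == "{":
            try:
                obj, _end = decoder.raw_decode(model_output, i)
                return obj
            except json.JSONDecodeError:
                continue
    return None

# Simulated reply from a small model that wraps JSON in prose:
reply = ('Sure! Here is the call: '
         '{"tool": "read_file", "args": {"path": "setup.py"}} Hope that helps.')
call = extract_tool_call(reply)
print(call["tool"], call["args"]["path"])  # read_file setup.py
```

This is far less reliable than a provider that enforces structured output, which is part of why the prompt and parsing logic end up being model-specific.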

   - Write the tool specifically to run _locally_ out of the box, no inference API server required.
   - Use the model directly (currently phi-4-mini via llama-cpp-python).
   - Optimize the prompt and execution logic _per model_ to get the best performance.

Instead of forcing small models into a system meant for large, general-purpose APIs, I wanted to explore a local-first, model-specific alternative that's easy to install and extend, and free to run.
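The "optimize per model" point boils down to something like a template registry: each supported model gets its own prompt format instead of one generic one. A rough sketch; the Phi-style tags below follow the Phi-3 chat template and the second model name is just an example, so verify both against the actual model cards before relying on them.

```python
# Assumption: Phi-3-style tags for phi-4-mini; check the model card.
def phi_style(messages):
    parts = [f"<|{m['role']}|>\n{m['content']}<|end|>\n" for m in messages]
    return "".join(parts) + "<|assistant|>\n"

# ChatML-style template, used by many other small open models.
def chatml_style(messages):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
             for m in messages]
    return "".join(parts) + "<|im_start|>assistant\n"

# One entry per supported model: prompt logic is chosen by model name.
TEMPLATES = {
    "phi-4-mini": phi_style,
    "some-chatml-model": chatml_style,  # hypothetical second entry
}

prompt = TEMPLATES["phi-4-mini"]([{"role": "user", "content": "List files"}])
print(prompt)
```

With llama-cpp-python you can often delegate this to the GGUF's embedded chat template instead, but keeping it explicit per model makes it possible to tune the wording, not just the tags.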

