
To some degree, sure, but I'm suggesting a productized client.

I envision an IDE with separate windows for collaborative discussions. One window is the dialog, and another holds any resulting code as generated files. The files get versioned, with remarks on how and why they've changed in relation to the conversation. The dev can bring any of those files into their project, and the IDE keeps a link to which version of the LLM-generated file is in place, or notes if the dev has made changes. If the dev makes changes, the file is automatically pushed back to the LLM files window, and the LLM sees those changes and adds them to the "conversation".

The collaboration continually has "why" answers for every change.

And all of this without anyone digging into models, python, or RAG integrations.




You could build this now, using GPT-4 (or any of the top few models) and a whole bunch of surrounding systems. It would be difficult to pull off, but no new technology is required.

If you build it, you'll need to dig into models, python, and various pieces of "RAG" stuff, but your users wouldn't need to know anything about it.
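That separation can be sketched in a few lines: the product surface is just "one turn of dialog", and everything the user never sees (model choice, prompting, retrieval) hides behind a single callable. The `llm` parameter here is a placeholder for whatever GPT-4-plus-RAG pipeline you actually wire up, not a real API:

```python
# Minimal sketch of hiding the plumbing behind a product surface.
# `llm` stands in for the model call plus any retrieval you build around it;
# it's just a callable here so the sketch stays self-contained.
from typing import Callable

def discuss(history: list[dict], user_message: str,
            llm: Callable[[list[dict]], str]) -> tuple[list[dict], str]:
    """One turn of the collaborative dialog: the user never touches models or RAG."""
    history = history + [{"role": "user", "content": user_message}]
    reply = llm(history)  # model selection, prompts, retrieval all live inside here
    history = history + [{"role": "assistant", "content": reply}]
    return history, reply
```

Everything hard lives inside `llm`; the IDE front end only ever calls `discuss`.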



