Hacker News

A lot of hay has been made of this, but everyone working with ChatGPT directly (a) already knows it and (b) is champing at the bit for plugins to be released so we can get on with building verifiable knowledge-based systems. The turnaround will be incredibly short: people are already hacking tools into the existing API through all kinds of prompt-based interfaces, the plugin API will make this dead simple, and we'll see a flood of premade systems land practically overnight. So the huge spike you're predicting is never going to materialize.
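For context, the "prompt-based interfaces" mentioned above usually amount to instructing the model to emit a structured tool request in its text, parsing that request, running the tool, and feeding the result back. A minimal sketch of that loop, with the model stubbed out (in practice it would be a chat-completion API call; the `TOOL[...]` syntax and tool names here are illustrative assumptions, not any real plugin protocol):

```python
import re

# Toy tool registry. The calculator uses eval() for brevity; a real system
# would use a safe expression parser instead.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

# Pattern the model is instructed (via its prompt) to use when it wants a tool.
TOOL_RE = re.compile(r"TOOL\[(\w+)\]:\s*(.+)")

def run_with_tools(model, prompt, max_turns=3):
    """Ask the model, intercept TOOL[...] requests, append results, repeat."""
    transcript = prompt
    reply = ""
    for _ in range(max_turns):
        reply = model(transcript)
        match = TOOL_RE.search(reply)
        if match is None:
            return reply  # no tool requested; treat this as the final answer
        name, arg = match.groups()
        result = TOOLS[name](arg)
        # Feed the tool result back so the next call can use it.
        transcript += f"\n{reply}\nRESULT: {result}\n"
    return reply
```

The plugin API replaces the fragile parts of this (the regex protocol and the prompt that teaches it) with a first-class interface, which is why the turnaround is expected to be so short.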



I’m not sure if it will happen quite as fast as you suggest, but I also expect that plugins and similar techniques will improve the reliability of LLMs pretty quickly.

To the extent that the frequent reports on HN and elsewhere of unreliable GPT output are motivated by a desire to warn people not to believe all the output now, I agree with those warnings. Some of those reports seem to imply, however, that we will never be able to trust LLM output. Seeing how quickly the tools are advancing, I am very doubtful about that.

Ever since ChatGPT was released at the end of November, many people have tried to use it as a search engine for finding facts and have been disappointed when it failed. Its real strengths, I think, come from the ability to interact with it—ask questions, request feedback, challenge mistakes. That process of trial and error can be enormously useful for many purposes already, and it will become even more powerful as it becomes automated.
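The trial-and-error process described above (draft, request feedback, challenge mistakes, revise) can be sketched as a simple loop. Both roles are stubbed here; in a real system each would be a separate LLM call, and the function names and the list-of-issues feedback format are illustrative assumptions:

```python
def refine(generate, critique, task, max_rounds=3):
    """Draft an answer, ask a critic for issues, and revise until none remain.

    generate(task, feedback) -> draft answer (feedback is None on first pass)
    critique(task, draft)    -> list of issues (empty list means "looks good")
    """
    draft = generate(task, feedback=None)
    for _ in range(max_rounds):
        issues = critique(task, draft)
        if not issues:
            return draft  # the critic is satisfied
        draft = generate(task, feedback=issues)
    return draft  # give up after max_rounds and return the best attempt
```

Automating the loop this way is what turns an interactive strength into something systems can rely on without a human in the chair.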


It'll happen pretty quickly: it takes less than a weekend to build an MVP, and I've done it. I'm pretty sure this is the new todo-list app, given how fundamental and easy it is.


A well-integrated LLM is obviously going to be much more useful than ChatGPT is today, but it's not going to be a silver bullet for all of its problems.



