Great question. I'm very confident in my answer, even though it's in the minority here: we're not even close to exhausting the potential.

Imagine that our current capabilities are like the Model T. There remain many improvements to be made on this passenger-transportation product, with RAG (sketched below) being a common theme among them. People will use chatbots with much more permissive interfaces instead of clicking through menus.
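
For the curious, RAG is less exotic than it sounds: embed your documents, retrieve the nearest ones to a query, and stuff them into the prompt. A minimal sketch, assuming sentence-transformers for the embeddings and any prompt-to-completion callable for the last step; the toy corpus and the answer() helper are purely illustrative:

  import numpy as np
  from sentence_transformers import SentenceTransformer

  # Toy corpus standing in for a real document store.
  docs = [
      "The Model T was produced by Ford from 1908 to 1927.",
      "RAG retrieves relevant text and feeds it to the model as context.",
  ]

  embedder = SentenceTransformer("all-MiniLM-L6-v2")
  doc_vecs = embedder.encode(docs, normalize_embeddings=True)

  def retrieve(query, k=1):
      # Cosine similarity is a plain dot product on normalized vectors.
      q = embedder.encode([query], normalize_embeddings=True)[0]
      top = np.argsort(doc_vecs @ q)[::-1][:k]
      return [docs[i] for i in top]

  def answer(query, llm):
      # llm is any callable mapping a prompt string to a completion;
      # the prompt shape here is illustrative, not any particular API.
      context = "\n".join(retrieve(query))
      return llm(f"Context:\n{context}\n\nQuestion: {query}")

Swap the toy corpus for a real vector store and the callable for your model of choice; the shape stays the same.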

But all of that's just the start, the short term, the maturation of this consumer product; the really scary/exciting part comes when the technology reaches saturation and opens up new possibilities for itself. In the Model T metaphor, this is analogous to how highways have (arguably) transformed America beyond anyone's wildest dreams, changing the course of various historical events (e.g., WWII industrialization, 60s and 70s white flight, the early-2000s housing crisis) so much that it's hard to imagine what the country would look like without them. Now, automobiles are not simply passenger transportation but the bedrock of our commerce, our military, and probably more: through ubiquity alone they unlocked new forms of themselves.

For those doubting my utopian/apocalyptic rhetoric, I implore you to ask yourself one simple question: why are so many experts so worried about AGI? They've been leaving OpenAI in droves, and that's ultimately what the governance kerfuffle there was about. Hinton, a Turing Award winner, gave up $$$ to doom-say full time. Why?

My hint is that if your answer involves fewer than 1,000 specialized LLMs per unified system, then you're not thinking big enough.
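
To make that claim slightly less hand-wavy: the routing layer of such a unified system is conceptually tiny, and the scale lives in the registry. A toy sketch under heavy assumptions; each lambda stands in for a separately fine-tuned model, and a real router would itself be an LLM rather than keyword matching:

  # A toy "unified system": route each prompt to one of N specialists.
  specialists = {
      "legal": lambda p: f"[legal model] {p}",
      "medical": lambda p: f"[medical model] {p}",
      "general": lambda p: f"[general model] {p}",
  }

  def route(prompt):
      # Stand-in classifier: keyword match instead of a learned router.
      for domain in specialists:
          if domain in prompt.lower():
              return domain
      return "general"

  def unified_system(prompt):
      return specialists[route(prompt)](prompt)

  print(unified_system("Is this medical claim accurate?"))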




> Hinton, a Turing Award winner, gave up $$$ to doom-say full time

This is a hint of something but a weak argument. Smart people are wrong all the time.


> why are so many experts so worried about AGI?

FYI, I find this line of reasoning to be unconvincing both logically and by counter-example ("why are so many experts so worried about the Y2K bug?")

Personally, I don't find AI foom or AI doom predictions to be probable but I do think there are more convincing arguments for your position than you're making here.


Fair enough; well put, both of you! I'm certainly biased, and I can see how the events that truly scare me (after already assessing the technology on my own and finding it to be More Important Than Fire Or Electricity) don't make very convincing arguments on their own.

For us optimistic doomers, the AI conversation seems similar to the early-2000s climate change debate; we see a wave of dire warnings coming from scientific experts that are all too often dismissed, either out of hand due to their scale, or on the word of an expert in an adjacent-ish field. Of course, there's more dissent among AI researchers than there was among climate scientists, but I hope you see where I'm coming from nonetheless; it's a dynamic that makes it hard to see things from the other side, so to speak.

At this point I've pretty much given up on convincing people on Hacker News; it's just cathartic to say my piece and let people take it or leave it. If anyone wants to bring the convo down from industry trends into technical details, I'd love to engage tho :)


I've written (and am still writing) extensively about why I think AGI can't be as bad as everyone thinks, from a first-principles (i.e., physics and math) standpoint:

https://chrisfrewin.medium.com/why-llms-will-never-be-agi-70...

Still have like 2-3 big posts to publish.

Long story short, it's easy to get enamored with an agent spitting out tokens, but reality and engineering are far, far more complex than that (orders of magnitude).



