
This is exactly the problem with ML right now. Hinton and other billionaires are making sensational headlines predicting all sorts of science fiction. The media loves a good story and fear is catchy. But it obscures the real danger: humans.

LLMs are merely tools.

Those with the need, will, and desire to use them for their own ends pose the real threat. State actors who want better weapons, billionaires who want an infallible police force to protect their estates, scammers who want to pull off bigger frauds without detection, etc.

It is already causing undue harm to people around the world. As always it’s those less fortunate that are disproportionately affected.


> It is already causing undue harm to people around the world.

Between nefarious uses (scams, weaponized tech, propaganda) and the sheer magnitude of noise generated by meaningless content, we are about to have a burden of undesirable effects to deal with from technology that is both powerful and easily available to all.

There is so much focus on the problems of a future AGI, but little on the AI we have now, which works as designed and yet is still deeply problematic in its impact on the societal order.

I've elaborated on the societal and social implications in the reference below. I expect that, in due time, AI will lead all concerns in the area of unexpected consequences.

https://dakara.substack.com/p/ai-and-the-end-to-all-things


That post came off as a bit hyperbolic to me, but I fundamentally agree with the premise that this will have an impact much like social media, with all its unforeseen consequences. It’s not about AGI taking over the world in some mechanistic fashion; it’s about all the trouble that we as humans get into interacting with these systems.


> That post came off as a bit hyperbolic to me

I would say it is as hyperbolic as the promised capabilities of AI. Meaning that if AI truly has the capabilities claimed, then the potential damage is of equivalent scale. Nonetheless, I expect we will see a significant hype-bubble implosion at some point.

> it’s about all the trouble that we as humans get into interacting with these systems

Yes, it will always be us. Even if AGI "takes over the world", it would have been us that foolishly built the machine that does so.


> Hinton and other billionaires are making sensational headlines predicting all sorts of science fiction.

Geoff Hinton is not a billionaire! And the field of AI is much wider than LLMs, despite what news headlines may suggest. E.g., the sub-field of reinforcement learning focuses on building agents, which are capable of acting autonomously.
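
For concreteness, here’s a minimal, purely illustrative sketch of what "agent" means in RL: a tabular Q-learning loop (Python) in which the agent improves its own behavior by trial and error, with no human in the loop. The toy environment and every parameter value are my own assumptions, not any particular system:

    import random

    # Toy 1-D world: states 0..4, start at 0; reaching state 4 yields reward 1.
    # The agent picks actions (move left/right), observes the outcome, and
    # updates its value estimates -- no human supervision inside the loop.
    N_STATES = 5
    ACTIONS = [-1, +1]
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

    for episode in range(200):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy: usually exploit learned values, sometimes explore.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate toward reward plus
            # discounted best future value.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2

    # After training, greedy action choices from state 0 walk straight to the goal.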


Certainly LLMs are not AGI, and AGI has not yet been built. But what's your knock-down argument for AGI being "science fiction"?


Intelligence isn’t going to be defined, understood, and created by statisticians and computer scientists.

There’s a great deal of science on figuring out what intelligence is, what sentience is, and how learning works. A good deal of it is inconclusive and not fully understood.

AI is a misnomer, and AGI is based on a false premise. These are algorithms and systems in the family of Machine Learning. Impressive stuff, but they’re still programs that run on fancy calculators, and no amount of reductive analogies is going to change that.


> Intelligence isn’t going to be defined, understood, and created by statisticians and computer scientists.

What is an example of a task that demonstrates either intelligence, sentience, or learning, and which you don't think computer scientists will be able to get a computer to do within, say... the next 10 years?


"Fancy calculators" is kind of a reductive analogy, isn't it?

I assert that machine learning is learning and machine intelligence is intelligence. We don't say that airplanes don't really fly because they don't have feathers or flap their wings. We don't say that mRNA vaccines aren't real vaccines because we synthesized them in a lab instead of isolating dead or weakened viruses.

What matters, I believe, is what LLMs can do, and they're scarily close to being able to do as much as, or more than, any human can in terms of reasoning, despite the limitations of not having much working memory and being based on a very simple architecture designed only to predict tokens.

Imagine what other models might be capable of if we stumbled onto a more efficient architecture, one that doesn't spend most of its parameter weights memorizing the internet and instead puts them to use representing concepts. A model that forgets more easily, but generalizes better.


Right, so Bing is going to decide it doesn’t want to answer your queries anymore because you once left a rude comment to someone it’s friends with on Reddit?

That’s what makes my day: AGI folks have to rely on myth-making and speculation.

The here and now incarnation of GPT-4 is what it is.

I’m not saying it isn’t useful, powerful, or interesting. I’m saying that needless speculation isn’t helping to inform people of the real dangers, and that such proselytizing causes harm of its own.


AGI folks have to rely on speculation because AGI does not exist. But Goddard and Tsiolkovsky had to rely on speculation because, in their day, rockets didn't exist either. They were ridiculed[1] for suggesting that a vehicle could be propelled through a vacuum to the moon.

But there's speculation of the fantasy sort, and then there's speculation of the well-grounded "this conclusion follows from that one" sort, and the AGI folks seem to be mostly in the second camp.

And yeah, unlike GPT-4, AGI isn't here-and-now. But GPT-2 was an amusing toy in 2019, and GPT-3 was an intriguing curiosity in 2020. In 2014, "computers [didn't] stand a chance against humans" at Go[2], but two years later, it was humans who no longer stood a chance against computers. The here-and-now is changing fast these days. Don't you think it's worth looking even a little bit up the road ahead?

Isn't there some role here for speculation?

[1] https://www.vice.com/en/article/kbzd3a/the-new-york-times-19...

[2] https://www.wired.com/2014/05/the-world-of-computer-go/


> That’s what makes my day: AGI folks have to rely on myth-making and speculation.

> The here and now incarnation of GPT-4 is what it is.

I like how Robert Miles puts it (he's speaking about Safety in the sense of AI not taking over and/or killing everyone):

"I guess my question for people who don't think AI Safety research should be prioritised is: What observation would convince you that this is a major problem, which wouldn't also be too late if it in fact was a major problem?"


And what about the possibility of lines of research like Dr. Michael Levin's "Xenobots" converging with RL at some point in the future after further advancements have been made?

If autopoiesis turns out to be necessary for AGI and we embody these systems and embed them in the real world, are they still going to be fancy calculators?


So you're in the "human-level intelligence is magic" camp?

You don't have to understand how something works to build it.


What makes you think that? I don’t believe in magic. Science has brought us many answers and more questions.

An algorithm may perform a task such as building models that allow it to solve complex problems without supervision in training. That doesn’t mean it’s intelligent.


> LLMs are merely tools.

LLMs are tools.

An AGI would be a "Purposeful System".

There is a MASSIVE, MASSIVE difference.


> It is already causing undue harm to people around the world. As always it’s those less fortunate that are disproportionately affected.

Source?


Google it yourself if you’re curious. But here’s a link to get you started: https://time.com/6247678/openai-chatgpt-kenya-workers/



