
I'm beginning to think these warnings are coming from people with grandiosity issues.

Do they WANT to be working on something capable of 'human extinction'? Maybe I'm cynical, but I strongly disagree that anything OpenAI builds on the same trajectory as GPTx is going to end the world.




> Do they WANT to be working on something capable of 'human extinction'

The crazy thing is, if you take these claims seriously, then every AI company in America should be subject to ITAR [1].

[1] https://en.wikipedia.org/wiki/International_Traffic_in_Arms_...


There are a couple really important aspects to clarify there.

What does "end the world" mean here? Is the concern truly ending the physical world on earth, killing us all? Or is it ending the world as we know it, like ending our current civilizations and maybe even ending the model of humans being the leading species on earth.

With regards to OpenAI and GPTx, the question there is whether we believe we publicly know everything OpenAI software is capable of today and what capabilities they are actively trying to develop. We may think GPT isn't a serious risk, but that could be wildly wrong if OpenAI has much more powerful systems either under lock and key or behind the black box of GPT's public API.


I too have been developing a doomsday weapon in secret. Send me a million dollars or face the consequences.


Their intent really isn't what matters.

Would you argue that it's impossible for OpenAI or a similar company to accidentally develop something dangerous while they believe they are developing a safe, controlled but potentially much more powerful tool?


Nah, they're warning about boring dangers, like "this is the fentanyl if Facebook was heroin".

The dangers aren't AI Terminators; they're "Her" and emotional manipulation and long term psychological problems for _everyone_ because most humans simply aren't complicated and are easily manipulated.

That's how capitalism works so well, and now capitalism has AI fentanyl.


One thing I've noticed is that everyone finishes the sentence "the real danger of AI is…" differently.

I model AI as a 10x speed-up of the kind of changes we saw in the industrial revolution. Before then, we might have been concerned about coffee or alcohol; today we are concerned about designer drugs. Before, cavalry and spies; today, tanks and laser microphones and satellites. Before, it was evil kings; now they have competition from evil megacorporations.

So sure, Terminator robots… made by the next Jim Jones. Westworld, brought to you by a half-assed startup branding itself as "Uber for Disneyworld" (or, given what happened with their self-driving cars, actual Uber). Colossus: The Forbin Project, brought to you by the radical outsiders who left Greenpeace. And so on.

But also every other thing, including all the harms already demonstrated by corporations pursuing profits above people, by dictatorships in general, and by every simple mistake made by a well-meaning corporation or democracy or software development team.


Sure, but some are informed by current sociological research and others are movie scenarios.


At this point, I think it's fairly likely that an AI based on an LLM (being trained on movie scenarios) will attempt to play out the plot of one of them when some bored teenager or misanthrope asks it to.

If we're lucky, it will be incompetent because the script writers have no idea how real life works.

If we are moderately lucky, such an AI will do this in a way that allows one random underdog to push a surprisingly oversized "stop" button with seconds to spare, despite it having previously demonstrated overwhelming force against a major nation's entire armed forces.

If we're extremely unlucky, it will be acting out a horror film with great competence.

If I had to bet on one of those three, I'd pick the first option — most scripts are not written by domain experts — but my main expectation outside this is "more of the same stuff we already see with corporations, but faster, and government regulation will continue to be 20 years behind the tech just like it is with the internet in general".


It is 100% grandiosity. And it’s encouraged within the industry because what better way to make people think what you’re working on is effective than to make them think it’s dangerous? 99% of AI safety discourse even in the tech scene is becoming insufferable.


Even though LeCun is pretty much a broken clock nowadays, he's still right twice a day.

The right approach to take with AI is to develop it as fast and as openly as possible; the safety issues will be polished as that happens, and the more eyes on it the better.


How many eyes can understand an inscrutable pile of matrix weights? We're not even all on the same page about the capabilities and usefulness of these models, or even which tests usefully measure those things.


AI safety is huge grift, worse than NFTs. Just pull the plug if it gets out of control lmao


2000-2020: "How could an AI escape from its server, just say no."

2023: "Why isn't this AI research company that says the models are potentially dangerous in the wrong hands, not letting me download the models?"

OpenAI can switch it off… in theory. In theory, the Soviet Union could have "just switched off" Chernobyl. In practice, the emergency off button made it explode, and other reactors on the same site had incidents both 5 years before and 5 years after the famous one.

None of the downloadable models can be "switched off" remotely, any more than we can force people to stop using buggy software.

And you would need to have that capability — was it called EvilGPT or ChaosGPT, the one explicitly tasked with trying to destroy the world? Because people absolutely will keep pushing that particular button until it actually works: some because, like you, they don't believe it ever will and wish to mock the idea; others because they are actively misanthropic.


Also, given that robotics is also advancing fast, anything that relies on our being in complete control of the physical world (including physical access to on/off buttons and to things like power plants and energy grids) potentially goes out the window once both sides of the equation - robotics for execution and AI for control - have progressed enough that, combined, they can form an army capable of overpowering well-armed humans (unless the humans have a huge financial advantage / can be well armed enough to beat relatively weaker robots).

I don't think it's something that's bound to happen, but I do think it's a real concern - and I'm not personally worried about an I, Robot or Matrix scenario where AI decides to take control of humans for whatever reason, but about the idea of it becoming easier for a small number of immoral people to control vast power without having to persuade armed people to do what they want.

For example if the AI / robotics arms race sees one major military power take a significant enough lead, it could see them become temporarily invincible (if it includes AI-based missile defence which allows them to secure the entire country's skies without needing millions of people to do so) and they might decide to attack others to try to keep it that way.

If, before any other country had caught up enough, the leading country were to have either a single dictator or a political party in power who wanted to act like the empire builders of several centuries ago did... maybe what hasn't been possible for many years might become possible again, and we could see a country like the US or China decide it would be better if the whole world were controlled by them, with any country that didn't volunteer to become a state under their power forced into occupation by swarms of AI-controlled fighter drones.

Without needing to accuse any countries of currently being run by people who would choose to colonise the world if they had the choice, it's clearly not unprecedented for people to make that choice even while knowing it would cause many deaths on both sides. This time the deaths might be almost entirely limited to the other side, and potentially there might not even be many of those, if everyone learns quickly that there's no point resisting an invincible robot army.


It is convenient for the industry, but that doesn't make it not a real concern. Why are the most visible people warning about the x-risk those who have quit big AI, such as Hinton?

If it's becoming insufferable it's because nobody is taking it seriously.


The problem with these conversations is that there are many AI experts with different opinions. Hinton was Ilya Sutskever's PhD advisor, so I can't say I'm too surprised he's concerned about AI safety. I'll be honest, I really don't think it's a conversation worth having. AI will progress regardless of attempts to slow it down. I'm not even necessarily one of those people that thinks all technological growth is good. I just think it's inevitable.


I think the proliferation of disinformation via generated video, audio, images, and text presents a broad existential risk to human society as far as internet use is concerned — if the future involves humankind engaging with both a public internet and LLMs, I feel like those circumstances might also lead to:

- increasingly censored, controlled, curated, gate-kept providers
- an inherent lack of trust in public-facing information
- outright bans on information access

So ushering in LLMs could undo societal and economic progress, create conditions for widespread political and social unrest, and create vacuums for authoritarian leaders and information brokers to consolidate control and power.


It’s a bit like the Segway hype.


Yawn. We need an AI bust to get these folks to see sense.

Until then, they have a free pass to get away with such scare-mongering BS.


What would you need to see, short of the actual end of the world, to take the risk seriously?

I don't have to be in a car crash or get shot in the head to know this is a bad thing, and nobody sane is going to bother causing the end of the world just to convince you it's possible.


Maybe a small-scale demonstration then? Seems to have worked so far for nuclear weapons proliferation.


> Maybe a small-scale demonstration then?

What, precisely, does that even mean?

Critics (I initially said "you", but rereading this is ambiguous) clearly don't accept anything that currently exists as such a demonstration: not the models which are superhuman at strategy games; not the automation actually used by real militaries despite dangerous flaws (whose bugs have resulted in NATO early warning systems being triggered by the moon and Soviet ones by the sun, or planes nose-diving because of numerical underflow); not the use of LLMs to automate propaganda; not Cambridge Analytica; not the lack of controls that resulted in the UN determining that Facebook bore some responsibility for the (ongoing) genocide in Myanmar; not the examples given in the safety report on GPT-4 prior to release showing how it was totally willing to explain how to make chemical weapons; not the report the other year where a drug safety system was turned into a chemical weapon discovery tool by deliberately flipping the sign of the reward function; and not the OpenAI report on maximal misalignment in their own models caused by flipping the sign of a reward function by accident.

What is the smallest "small scale" demonstration that people who currently laugh at the idea of the possibility of a problem, won't ignore?
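(To make the "sign flip" examples above concrete: here's a minimal Python sketch of how a single flipped sign turns a safety objective into its opposite. The function and compound names are hypothetical, purely for illustration, and not taken from the actual drug-discovery report.)

    # Toy objective for scoring candidate molecules.
    # The intended objective rewards efficacy and penalizes predicted toxicity.
    def intended_reward(efficacy: float, toxicity: float) -> float:
        return efficacy - toxicity

    # Flip one sign and the same search procedure now actively seeks toxicity.
    def flipped_reward(efficacy: float, toxicity: float) -> float:
        return efficacy + toxicity

    candidates = [
        {"name": "compound_a", "efficacy": 0.6, "toxicity": 0.1},
        {"name": "compound_b", "efficacy": 0.4, "toxicity": 0.9},
    ]

    # The intended objective picks the safe, effective compound...
    print(max(candidates, key=lambda c: intended_reward(c["efficacy"], c["toxicity"]))["name"])
    # ...while the flipped objective picks the most toxic one.
    print(max(candidates, key=lambda c: flipped_reward(c["efficacy"], c["toxicity"]))["name"])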


Flipping the sign on a reward function to protect us from the next pandemic should fit the bill. If we ignore that I guess we deserve it?


> Flipping the sign on a reward function to protect us from the next pandemic should fit the bill

I don't understand, are you suggesting flipping the reward function of reproductive fitness itself, in vivo, of DNA/RNA?

And how is "protect" supposed to demonstrate danger? That's like saying "ACAB protestors are dumb, I'll only believe the police are evil when they catch a gunman"?


The 'end of the world' scenario perhaps isn't something that we expect. We always think the end of the world will be disease or natural cataclysm. Also, humanity's curiosity can't help itself.

* Imagine AI giving a terrorist network a recipe for the most toxic nerve gas ever discovered. (This has already happened to AI researchers)

* Imagine AI being used for deepfake propaganda inciting a war between superpowers. (This is arguably already in progress)

* or it could be the usual sci-fi classic of an AI that becomes superior to humans and just takes over (using the above methods and more).

AI has the ability to be completely undetectable and incredibly insidious. We could be destroyed by a force we don't even notice.



