Hacker News
Warnings of AI doom gave way to primal fear of primates posting (thenewatlantis.com)
35 points by morisy 3 days ago | hide | past | favorite | 19 comments

The people concerned with AI doom stopped talking to journalists because journalists typically didn't report very well on the issue.

>...something I’m very concerned about is the use of AI for autonomous weapons. This is another area where we fight against media stereotypes. So when the media talk about autonomous weapons, they invariably have a picture of a Terminator. Always. And I tell journalists, I’m not going to talk to you if you put a picture of a Terminator in the article. And they always say, well, I don’t have any control over that, that’s a different part of the newspaper, but it always happens anyway.


Yeah, it doesn't seem like we should conclude that nothing bad will ever happen with AI just because it was predicted a while ago, hasn't happened yet, and those predictions aren't in the news as much anymore. The news continues to be sensationalized in a lot of places, and content is trending toward Twitter-like viral messaging aimed at generating ad revenue. Gloom-and-doom predictions about AI are old news, not interesting anymore, and that content won't generate as much ad revenue.

>it doesn't seem like we should conclude that <x will never happen> because it was predicted a while ago and hasn't happened yet and now those predictions aren't in the news as much now

See also: video calling [0], internet as a social revolution [1], coronavirus [2]

[0] https://www.theverge.com/2012/3/13/2867419/videophone-in-sci...

[1] https://www.theguardian.com/technology/2000/dec/05/internetn...

[2] https://www.theguardian.com/commentisfree/2020/mar/06/corona...

Regarding [2], the same guy who a year ago was accusing the UK government of fearmongering about covid now writes this:

> Covid has exposed how incompetent the British state is, from top to bottom

> Come the inevitable inquiry into the events of the past year, it is not only politicians who should carry the can. All the components of Britain’s government, central and local, should be tested – the constitution as a whole should be under examination.

Interesting how the media is missing from that list.


Many journalists truly have no shame.

So this comment caught my eye, and I went back and looked at the author's previous articles. I did find a couple from early March 2020 that might fit the bill of "accusing the UK gov of fear-mongering": 1. https://www.theguardian.com/commentisfree/2020/mar/06/corona... 2. https://www.theguardian.com/commentisfree/2020/mar/09/cure-r...

Then we see this change in tone a month later in April 2020: https://www.theguardian.com/commentisfree/2020/apr/02/wrong-...

I highly recommend going back and looking at the history for yourself and NOT taking the parent comment at face value. Decide for yourself: is this guy just telling readers what they want to hear, or did he change his mind as the situation played out? Was he honest? Are his articles worth reading now, and were they worth reading at the time of publication? What would you have written in his situation?

I think you're giving columnists an easy ride here.

> Every medical expert I have heard on the subject is reasonable and calm.

All the healthcare professionals I know were taking covid-19 very seriously, especially when they heard it had arrived in the UK.

Doctors started renting bedsits so they could keep away from their families. Nurses were sharing recipes that used only tinned food, and telling people to get some extras in. (Well before we went into lockdown).

I don't think it's an either/or situation - it's us (the social networks) creating the AI ("the algorithm") that, in the pursuit of greater "user engagement" (=making more money), exploits and amplifies our most basic primate instincts (fear, hatred etc.) and may lead to our doom if unchecked...

It's not (going to be) as blatant and obvious as popular media are portraying either. The robot overlords won't be tentacled red-glowy-eyed things, but friendly helpers.

And it won't be sudden. People will be eased into it over time. And not even that much time; we went from nothing to basically having one or more permanent tracking devices on ourselves at all times within just a decade or two, and we freely give companies permission to track us because they (say they) offer convenience in return.

It's not an oppressive regime, it's companies. Which funnel money to the regime so they get a free pass on doing whatever they like.

Someone asked me what the AI takeover will look like, and I said, one night you will go to sleep and we will control the computers and then you will wake up and the computers will be in charge, and you will not be able to tell.

Some of the people who usually warn about AI doom, like the rationality community, were screaming in January 2020 that the coronavirus was getting out of control and a pandemic was inevitable.

They were accused of fear mongering at the time, and told that they should worry about the flu, and not about a virus which only killed a few hundred people.

The existence of a problem requiring censorship is never critically examined. The need for AI mediation between humans is never questioned. The author himself is captive to this most insidious meme without the slightest awareness.

I think any analysis of the dangers of AI needs to consider the principles of evolution. Human intelligence is a product of humans evolving in a natural world, where individuals were selected for their ability to compete for resources in that world. This has produced many characteristics, one of which is violence, which has sometimes been necessary to secure those resources.

AI robots will probably not be intentionally violent toward humans. AIs are also evolving, but in an artificial world. In this world, AIs compete for humans' favor. If they do well, if we're happy with them, we grant them computing power and replicate them. This selects for very specialized AIs with very deterministic behavior. Nobody wants an AI with unpredictable behavior.

AIs that survive are the ones most adapted to serving humans. The danger is not that the AI itself harms humans, but that humans want to harm or exploit other humans through AI. There could be a danger in AIs developed by the military, but I'm not too worried, because they'll most likely be extremely special-purpose, with multiple fail-safes. Nobody wants to develop an AI that could kill the ones developing or using it. I'm most worried about AIs developed for economic exploitation. It's what we're motivated to work on, and it's the area where the most development is being done, so it's probably where we'll first see advanced AIs causing problems. Arguably we already have (algorithms used on social media platforms promoting disinformation).

The thought that AIs will somehow gain some kind of general intelligence and conclude that the logical thing to do is to eliminate humans is a fantasy. We don't select for AIs with general intelligence, if there even is such a thing. Most likely we are overestimating our own intelligence; it's probably not as "general" as we like to think. We don't generally kill because it's the logical thing to do, but because of emotional reactions which are a product of our evolution.

The example of the paperclip maximizer is really dumb. Such an AI would not be selected for general intelligence, and there's no reason to think general intelligence will occur accidentally. Even if it somehow gained this magical general intelligence, the decision of whether to murder humans to secure metal resources or to work with them is probably undecidable: even the most intelligent AI imaginable could not consider all the factors, so the default would be no action. An AI would not have emotions, produced through natural evolution, that it could use as heuristics to decide what to do here. That's not a problem for us humans: we have a built-in drive to consider killing someone outside our group, even if there's no rational argument for it.

"f you had told a lay observer a decade ago that there would be a crisis over Facebook, Twitter, Instagram, and other social media platforms banning the president of the United States for inciting a violent riot against Congress that included a barechested, behorned man dressed as the “QAnon Shaman,” you would likely be accused of writing bad science fiction."

It wasn't popular, but neither was it unheard of, for people to recognize social media as antisocial a decade ago. Plenty of people would not be surprised a bit by that headline. People didn't just decide not to join Facebook and Twitter in a void by themselves; there was plenty of media coverage (and fiction written over decades) warning about social media ten years ago.

And the fear of robots was always more about their human controllers.

AI is its own issue, although the idea that we are being manipulated by a perverse AI has crossed my mind, as an entertaining but not realistic one.

There's been a shift from freedom of speech to successful demands for censorship everywhere, and it only took about two years. That's scary.

The real power behind this was not social media. It was television. No need to rehash the history of Fox News and Trump. The key point is that news detached from real-world facts became the major input for a sizable fraction of the population. Fox discovered that there's a huge market for telling people what they want to hear. Notably, to the exclusion of contrary views.

Supply-side propaganda has been around for centuries. Now we have a market demand for propaganda. One that pulls media into being even more radical. That's new. Eventually even Fox felt they'd gone too far. Then they started losing viewers to Breitbart and OAN. It really is demand pull, not supply push.

Social media let you listen to people you want to listen to. So it amplifies this phenomenon. But it didn't create it.

Using censorship as a tool to silence the voices of the other side is shocking to me. In the US, freedom of speech used to be sacred, a third rail no one dared touch. Now young people (who used to be its staunchest defenders) actively call for censorship. Strange times.

It's shocking to me, because the situation is worse in other countries. And people are alright with it!

It seems as though there's this widespread belief these days that things you don't like are harmful to society.

The demand pull for propaganda is the disturbing bit.

I'm not sure it's entirely new, though; people keep repeating stories from the rise of fascism in the 1930s. Material that tells you your country is great and all the problems are the fault of the Other is always going to be tempting.

This was needlessly verbose; if they want subscribers like me, they are going to need to simplify.
