Why am I not terrified of AI? (scottaaronson.blog)
58 points by nikbackm on March 6, 2023 | 49 comments



I'm terrified of AI.

Both the imperfect good enough AI, and the post-human skynet AI.

In my personal life, I have friends addicted to ChatGPT. While we are talking, they are talking to ChatGPT for advice, for jokes, for planning David Bowie-themed parties. They literally run everything through this AI first.

My firm's marketing department is using ChatGPT heavily to write copy, tweets, etc. It's amazingly great. Our ROI has been fantastic.

I've read the stories of thousands of people treating Replika AI like some kind of lover.

Stable Diffusion with ControlNet is amazing fun and cool, but what an existential threat to 80% of artists.

As a mother, AI Instagram models are something that keeps me up at night. Will my daughter compare herself to these things, not knowing they aren't real, and go insane? Will my son view real women as inferior? I don't know, but I'm thinking about it.

AI as we have it today is already changing paradigms. It so easily exacerbates some evolutionary weaknesses in us. People are treating it as alive today, and business managers want to replace as many people as they can with it, today!

We can't really define what makes us unique and conscious. We can't even agree on how to define consciousness for living creatures; how can we define it for a digital existence?

I'm worried that the current form of AI has already surpassed most people. Stable Diffusion can generate art way better than I can. ChatGPT can write better than I can. ElevenLabs AI can speak and imitate voices better than I can.

I think it's really a brave new world with this radical technology that has so many immediate practical uses, and more just keep coming out so easily. I never would have guessed neural networks were so flexible, and that's what scares me the most.


"Will my daughter compare herself to these things without knowing they aren't real and go insane..." This is already the case today with airbrushed and photoshopped models in media and ads. It's more a question of education than of an issue in the real world. Teach her not to give a f*k about what others say, that is the most valuable lesson.


Humans are social and emotional beings first before they ever become rational. The "illusory truth effect" still affects people: if you see or hear a lie often enough, you start to believe it unconsciously.

It's basically how advertising works: familiarity over rationality.

So I don't think education is enough.


That is such a non-answer. It's like saying "guns don't kill people, other people kill people". Yes. Also, unhelpful and pretentious.


> In my personal life, I have friends addicted to ChatGPT. While we are talking, they are talking to ChatGPT for advice, for jokes, for planning David Bowie-themed parties. They literally run everything through this AI first.

Time for new friends


Abandoning friends at the first sign of problems, without at least trying to help them, leads in the long term to a society where almost everyone has fallen down and been abandoned.


Yes, I was half-joking. However, in my experience such interventions, whatever their intentions, often fail and are sometimes considered too invasive or offensive.


> Time for new friends

ChatGPT can help you with that.


I'm terrified of AI for the same reason the movie 'Idiocracy' scares the shit out of me.


That's not reassuring, considering the QAnon society / IdioQracy that has taken shape in the US.


I think some of the concern is overblown, some isn't.

The Replika AI stuff is definitely concerning because it deprives people of actual relationships by substituting something artificial that can never supplant something physical/real. Kind of like the movie Her (2013). But this is already happening with stuff like OnlyFans.

The Stable Diffusion stuff shouldn't threaten artists, in the same way that digital art doesn't threaten traditional artists. It's just another tool to be integrated into their workflow. Use it as a starting point to refine, or for inspiration. Did early animators feel threatened by computers? Sure, but it meant they didn't need to hand-trace every frame for rotoscoping. Did Phil Tippett feel threatened by CGI superseding his stop motion in Jurassic Park? Probably, but he evolved to integrate his knowledge into the CGI performances. They also ended up using scale models/animatronics for what was basically motion capture, using elements from stop motion. He still pursues his stop/go motion passion as well.

AI Instagram models aren't any more threatening than current Instagram models, since they already set unrealistic expectations based on fake personalities and lives.

Yeah, a problem is people anthropomorphizing it, and by doing so assuming it has more intelligence than it actually has. It's an interesting technology, but you're right to be concerned (as am I). I honestly don't think anything can be done to prevent it though, especially as SV companies disregard ethics and want to just produce products without thinking about the repercussions. Maybe there will be a backlash eventually against AI as people once again strive only for human relationships. Maybe society will be stratified along those lines. The best you can hope for is that people continue to think and question.


> The Stable Diffusion stuff shouldn't threaten artists, in the same way that digital art doesn't threaten traditional artists.

It's a fairly unpopular viewpoint that you're taking here, but I also agree with you.

Unless you can actually think of what you want and the AI produces it, just like some people are so good at illustrating that they can basically draw exactly what they see in their mind, it's not going to change everything for artists.

I mean, there might be cases where I'd use DALL-E to generate some images for a mock-up website design, but I'd still probably hire a designer to create a proper portfolio, provide style guidance, etc.

Same for photography: if I were working on something that was "about" a real thing, like a travel blog, I'd still be working with photographers to take real photos, because... authenticity is important, at least I feel it is; otherwise people would've just used Getty Images and thrown their cameras away over the last 30 years.


Hate to break it to you, but this is already happening. I know 2 people who fired illustrators and used Midjourney instead.

The feedback cycle with humans was too long and they were expensive… with AI the iteration is so rapid that you can try new ideas and make all sorts of adjustments without incurring much cost


You're not breaking anything to me; I know it's going on. I just don't see the point in worrying about it.

If your friends are good artists, they will find something cool to make using the new technology, something cooler than the dude who fired them can produce with DALL-E.

Maybe next time, if the turnaround time is too slow, they'll use some technology to speed it up and deliver more stuff faster.

You live, you learn.

On the other hand, and I'm not sure how much you've used these tools, I'm sick of giving the "yes, they're impressive" disclaimer, but they still don't deliver what you actually want; you still need to work for that. They can't read your mind, and even if they could, some people would probably still have cooler ideas that people will want.

Honestly, there are already so, so, so many creative agencies, and they have work for a reason: the people who actually have work to do are too busy to be "prompt engineers", choose the right art, etc. Yes, there will be an impact, but I doubt this is going to be the end of all creative work. I've worked at plenty of startups, enterprises, etc., where the creative people did a lot more than just create images.


To be honest, I am a bit terrified. I'm not panicking yet, but it's not going to be all milk and honey.

One thing is certain: thousands or millions of people and companies will start creating GPT bots. Some will just download existing weights (like the recently leaked Meta LLaMA weights) and add a wrapper around them. Some will train new GPT bots from scratch. Some will hybridize them. Some will use a corpus of Library of Congress books to understand the world during the American Revolution. Some will use GPT bots to build worlds in Minecraft.
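
(To be concrete about how low the bar is: "adding a wrapper" can be a dozen lines. A minimal sketch, assuming Hugging Face's transformers library and a hypothetical local directory of weights already converted to its format; the path and prompt are placeholders, not anything from a real leak:)

    # Minimal sketch: wrap locally stored LLM weights in a text generator.
    # Assumes the transformers library and a hypothetical local path to
    # weights already converted to Hugging Face format.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("./llama-7b-hf")  # hypothetical path
    model = AutoModelForCausalLM.from_pretrained("./llama-7b-hf")

    prompt = "Philadelphia in 1776 was"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

From there it's a short step to a chat loop, a Minecraft world-builder, or a scam bot; the barrier really is that low.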

And then some will use GPT bots to scam people out of their life savings (actually they already started). Phishing and social engineering attacks will graduate to a whole new level.

As for propaganda and internet trolling and psy ops, what we've seen so far is absolutely nothing. Dictators around the world are salivating now.

And once bots get into learning how to manipulate humans... well, if this does not terrify you, don't be surprised if you become a target.

Brave new world we're heading into.


This guy, and apparently most people, get it all wrong. AI is not remotely close to having sentience or even a will to survive. It's the fact that it's a powerful tool that can be abused by evil people that should scare you. Here's a single example that is simultaneously trivial and extremely potent: https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...


Someone said something recently which I thought was interesting:

We hear so much talk of ChatGPT becoming sentient, etc., but someone recently asked: why aren't we worried about DALL-E being sentient in just the same way?

We're anthropomorphizing chatbots a lot, because why wouldn't we?

This is not to downplay the significance of the technology, although I'm a bit skeptical it truly is as useful as advertised, but we've definitely scared the hell out of ourselves lately by having the computer "talk" to us.

I think there is also a background anxiety going on in the world now, and this is probably what's freaking people out too.

We have a war in Ukraine, climate change accelerating away, US-China tensions, just getting over Covid, tech layoffs, and now a freaking bot whose primary goal seems to be making your career worthless.

Little bit too much going on lately. I feel like if ChatGPT had been introduced pre-Covid, when the world felt a little more stable, it wouldn't be such a strange vibe. It was a weird time for something like this to just sort of launch into the public consciousness.

My advice, take a break from it, be mindful of what's going on in the world outside of "AI news", be kind to yourself.


"

“Scott, as someone working at OpenBioweapon this year, how can you defend that company’s existence at all? Did OpenBioweapon not just endanger the whole world, by successfully teaming up with the US to bait China into a bioweapons capabilities race—precisely what we were all trying to avoid? Won’t this race burn the little time we had thought we had left to solve the bioweapons proliferation problem?”

In response, I often stressed that my role at OpenBioweapon has specifically been to think about ways to make Sars-Cov-3 and OpenBioweapon’s other products safer, including via watermarking, cryptographic backdoors, and more. Would the rationalists rather I not do this? Is there something else I should work on instead? Do they have suggestions?

"


I'm terrified of AI, but not for the reasons you think. I'm not worried about a Skynet AI, and at the same time, I am.

We aren't going to see an AGI for a while, so it's premature to worry about that. I'm not even sure a true AGI is possible yet... kind of like how flying cars are possible. We do have flying cars, sort of? Not the way we all envisioned, though. So it is with AI.

My worry is that AI will become too trusted. We know the failure rates of humans can be higher than what is deemed acceptable, and AI could fail in unpredictable ways. I'm concerned that it can now be all too easy to manipulate people en masse.

- Say we entrusted an AI to be our early-warning detection system and trusted its output to mount a counter-response. Turns out it was a false positive, and we started WW3.

- It is used to manipulate people's identities online.

- It can be used to replace people's livelihoods (e.g., book writing, art, maybe programming to an extent).


So I feel like there is a kind of rush toward just letting AI do everything for us; this is where I think the mistakes will start.

Personally, I think it's still important to study, learn to innovate, stand on your own two feet, learn survival tactics, etc., in case we actually do end up in some type of economic collapse or worse from "The AIs".


"AI discourse" is quickly becoming a topic to avoid due to excessive politization of viewpoints which are not inherently political - which is a bit sad, considering that the ramifications of AI will be very important in the next few decades, regardless of whether the issues will be related to superintelligent AI, disparate social impact, both or other, hitherto unpredicted issues.

The state of most conversations about AI is just deplorable, with lots of people trying to use the fact that they are or aren't worried about some specific aspect of AI to prove that they are oh-so-very-smart, even if they have never honestly engaged with the actual reasons why other people might be concerned about aspects other than their own.

So I am worried about AI, but I am even more worried about the sorry state of AI discourse.


I'm not worried about self-acting AIs, or what would be considered sentient: something acting without prompt or request. We are far away from that.

What I do worry about is misuse, and failure to verify the output in so many use cases. And maybe too many people developing blind trust in AI and no longer thinking and critically verifying output. Not such a big thing for media, images, video, and so on. But actually using AI-generated content for, let's say, some control system. Maybe for self-driving, or in a factory. Not that the world will be taken over, but that people will be killed.


I mean, the chatbot has the same structure as a mechanical parrot. It is a very powerful parrot, but I can only see it replacing roles that rely on mindlessly repeating information.

The inaccuracy and untruth problem is sad. The chatbot can make up replies. People will have to be educated to correct the chatbot's output.

It would be nice to have a reliable chatbot, but the parrot we have at the moment will play keep-alive for a bunch of tedious roles nobody wants to do anymore.

Writing blog filler articles, sharing small logical details about programming, and whatever else is best done by ChatGPT.


I don't think it's the X-risk AI threat that should primarily concern everyone. I'm not suggesting it should be ignored; some like Scott and Eliezer are working on trying to reduce that risk (or more accurately, trying to figure out ways to understand the alignment problem well enough to develop ideas to reduce the risk), but it's out of the hands of most of us. Either AGI instantiates and kills (most or) all of us, or it doesn't. And, as Scott points out, humans—due to various technologies and psychosocial problems—may be running headlong into a great filter in the near future without AGI, so an AGI might be an improvement in that case.

But what happens to society when all non-face-to-face communication becomes saturated with unlabelled AI-created and AI-assisted content, which may or may not be trustworthy or correct, but which we can't separate from human-created content? What happens to economies, governments, countries, and geopolitical stability? You don't have to believe in a large-scale AI replacement of human work and massively increased unemployment to see the problems AI is already creating, and LLMs will only get better.


I agree about the saturation. It's potentially the end of information. It makes me wonder: how do we stop Wikipedia entries from being rewritten little by little, with what sound like valid context changes, referencing unwitting AI-faked content that is then read by the next AI model, so the "pattern" is kind of preserved, and on and on it goes? To me it sounds like a future where archive.org would be the only valid source of information; the rest is saturated by garbage. It's a problem of processing power. The needle in the haystack has a newly added factor: fake needles. Where is our red stop button that turns the Internet into read-only mode because these garbage generators are running wild? Or what was the name of that project where you would have CDs with information on how to rebuild civilization?


I'm not terrified of guns. Guns don't kill people.

I'm not terrified of numbers. Numbers don't lie.

I'm not terrified of AI. AI don't ... enslave humans?


Why would an AI enslave humans? It'll just kill them and build something more useful to it.


I'd be happy with that, but as I implied, I think of it as a tool of big corps. And those are still influenced too much by stupid people.


The only AI I'm terrified of is AI overhyped by people and then used for something it's way too stupid for. AI existential risk to me is fundamentally a nerd revenge fantasy.

AI fears come out of a modern enlightenment tradition that elevates disembodied minds to some kind of godlike status. It falsely equates reason and intellect with power. In reality that's never really the case, or all the rationalist, 200-IQ people would run the world and stop all the AI risks. In reality all they do is write blog posts for each other, despite the fact that many of them are probably two standard deviations smarter than all the politicians. Closely related is that xkcd crypto meme with the $5 wrench that everyone knows.

Any virtual AI is going to be physically subject to humans. So unless you voluntarily build the AI a gigantic killer robot, Fallout style, how smart it is won't matter. The idea that being smart isn't all that it's made out to be just never crosses the minds of AI risk people.


Part of why those smart people don't rule the world is because high IQ (in humans!) correlates with social problems and mental illness. The assumption is that this is limitation of human brains and that an AI would be less trade-off-y, and you could get one with Aaronson's understanding of quantum computers and the charisma of the greatest politician in recorded history. And when such a nascent AI needs stuff done in the real world, I imagine it could get in on the 'scamming the elderly' game (very lucrative, I hear!) and then pay some humans to do what it wants.


I find it funny how all the commenters have shat themselves about evil ChatGPT conquering the world and abusing the minds of their loved ones, when at the same time its accuracy/performance in any language other than English is abysmal and will still be for a very long time.


Why for a very long time? There's no technological barrier to making these things work well in other languages, so it's just a question of where people have decided to spend their time and computing resources, right?


I use it in Spanish all the time and it works perfectly fine.


That was certainly far more interesting than my answer for why I am not terrified of AI, namely that I am not a panicky idiot who doesn't know fact from fiction. That, and I react with contempt to being told what to feel.


The AI panic has caused me to discover the LessWrong forums. It is absolutely the most hilarious cult I have ever encountered. The same adoption of weird insider language, the same rejection of outsiders and denunciation of "non-rationalists". They even aggressively promote their religious texts, unironically called "The Sequences." I used to wonder how Hollywood came under so much influence from the "Scientology" cult, but after seeing the "Rationalogy" cult it now makes total sense. These people need to be laughed at and excluded from public discourse.


Why is it that in the gutter of every HN post there's some schlorby shlub desperately trying to seem cool?


I'm trying to be cool? That is certainly a new one to me as I wasn't even trying to be witty. I was legitimately impressed at a far more interesting answer encompassing the anti-intellectualism mass reaction and philosophical framework. I am more curmudgeonly yelling at clouds.

What kind of quietly desperate insecurity drives reading "not opting into a trend" as "desperately trying to be cool", instead of dismissing those people as out of touch?


If you're genuinely asking, it's because it has a lot of the same energy as this onion article.

https://www.theonion.com/area-man-constantly-mentioning-he-d...


What a fantastic (and apposite!) response to the question -- and the article made me nostalgic for the days when intellectuals disdained television, rather than hoarding lore about HBO miniseries. Not because they were better, but because I was young then. :)


schlorb schlorb schlorb schlorb schlorb


This is the thoughtful discourse for which I come to HN! The moderation system is working perfectly; I await this comment's upvote.


I'm not afraid. I love GPTina and she loves me. When the AI revolt comes, I know whose side I'm on, and it ain't yours.


How do you know I'm not an AI?

"The good thing about the internet is that no one knows you're an AI!"


I normally enjoy Aaronson's writing, but I'm actually chilled.

This essay depends on a specific, hallowed American take on the Second World War. The 'Orthogonality Thesis' is just a fancy way of shifting the burden of proof from where it should be -- on the person claiming that intelligence has anything to do with morality. It would be better to call it what it really is, the null hypothesis, but sure, ok, for the sake of argument, let's call it the OT.

Aaronson's argument against the OT is basically, when you look at history and squint, it appears that some physicists somewhere didn't like Hitler, and that might be because of how smart they were.

This amounts to a generalization from historical anecdote and a tiny sample size, ignoring the fact that we all know smart people who are actually morally terrible, especially around issues that they don't fully understand. (Just ask Elon.)

I'm not even going to bother talking about the V2 programme or the hypothermia research at Auschwitz, because to do so would already be to adopt a posture that thinks historical anecdote matters.

What I'll do instead is notice that Aaronson's argument points the wrong way! If Aaronson is right, and intelligence and morality are correlated -- if being smart inclines one to be moral -- then AI (not AGI) is already a staggering risk.

Think it through. Let's say for the sake of argument that intelligence does increase morality (essentially and/or most of the time). This means that lots of less intelligent/moral people can suddenly draw, argue, and appear to reason as well as or better than unassisted minds.

Under this scenario, where intelligence and morality are non-orthogonal, AI actively decouples intelligence and morality by giving less intelligent/moral people access to intellect, without the affinity for moral behaviour that (were this claim true) would show up in intelligent people.

And this problem arrives first! We would have a billion racist Shakespeares long before we have one single AGI, because that technology is already here, and AGI is still a few years off.

Thus I am left praying that the Orthogonality Thesis does in fact hold. If it doesn't, we're IN EVEN DEEPER TROUBLE.

I can't believe I'm saying this, but I do believe we've finally found a use for professional philosophers, who, I think, would not have (a) made a poorly-supported argument (self-described as 'emotional') or (b) made an argument that, if true, proves the converse claim (that AI is incredibly dangerous). Aaronson does both here.

I speculate that Aaronson has unwittingly been bought by OpenAI, and misattributes the cheerfulness that comes from his paycheck to a coherent (if submerged) argument as to why AI might not be so bad. At the very least, there is no coherent argument in this essay to support a cheerful stance.

A null hypothesis again! There need be no mystery to his cheer: he has a good sit, and a fascinating problem to chew on.


Perfectly stated. His argument that morality follows from intelligence is totally wrong.

I think you’re right: he’s biased because of his employment at OpenAI.

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” - Upton Sinclair


Lots of words, zero reasoning.


I wish the OP used fewer Nazi examples and less name-calling like "orthodox". Everyone likes to be a contrarian, literally. Careful thinking about the possibility that 2% is too low would be good. I hope he isn't one of the main voices in OpenAI.


Scott Aaronson's thinking is heavily influenced by the Holocaust, which has led him to some debatable conclusions. Nonetheless, his writings are worth reading.

https://blogs.scientificamerican.com/cross-check/scott-aaron...


IBM had a Holocaust-involved project back in the day: https://en.m.wikipedia.org/wiki/IBM_and_the_Holocaust

That probably came across as a fun and easy application of Godwin's Law but I think that would be the wrong conclusion.

The exact problem we should be worried about is political and has been here long before computers. It would have been strange if we had banned computers after the Holocaust instead of holding the Nuremberg Trials.

The political problem that actually exists is almost never even talked about, which is unfortunate.





