AI: Practical Advice for the Worried (thezvi.wordpress.com)
72 points by nkurz on March 2, 2023 | 83 comments



It's been funny to watch Less Wrong people rise to prominence lately just because someone made a convincing chatbot.

I'm not saying they're necessarily wrong, it's just been funny to watch what used to be a fringe internet curiosity which most people discovered via a Harry Potter fanfic have their ideas suddenly be relevant to mainstream discussion because people are easily duped by large language models.

Personally I still think we will soon hit a ceiling, and LLMs are a far cry from any kind of actual intelligence, let alone a singularity, but I could always be proven wrong.

PS, Roko’s basilisk, if you're reading this in the future, please don't torture me


If we hit a ceiling soon, that would probably be the best outcome for humanity. We would get powerful AIs to help us solve almost all problems, but not threaten our own existence. However, since scientists from Johns Hopkins University are already working on organoid intelligence, it seems that the dystopian scenario is way more likely to come true within a decade.


Why are organoids the death knell when chatbot-that-can-converse-fluently-on-almost-any-topic-and-generate-creative-responses is “This is fine”?


>it seems that the dystopian scenario is way more likely to come true within a decade.

Or not.


This seems well-written and explores some of the probable outcomes. One of the things that concerns me most about AI is that it could strip away or reduce what makes us human - creativity, friendships, romance (https://www.abc.net.au/news/science/2023-03-01/replika-users... )

Also, there's a version of the same article with more comments. LessWrong has a lot of great content on this topic (https://www.lesswrong.com/posts/CvfZrrEokjCu3XHXp/ai-practic...)


> LessWrong has a lot of great content on this topic

Well it has a lot of content on this topic, certainly.


Maybe we'll discover that being human is about a lot more than just creativity, friendships, and romance. These things are important, but are they necessarily what makes us human or are they the part of humanness that is most important or valuable?


I'm not at all the target audience for this post—I'd label myself the polar opposite of an AI doomer—so it was really striking to see such a different perspective. I mean, right off the bat, the opening line

> Some people are worried that AI will wipe out all value in the universe.

Huh? What does that even mean? Even leaving aside the hyperbolic clause "in the universe," who is saying that AI will "wipe out all value," and what are their arguments?

At least the author acknowledges the assumptions baked into the work:

> Investing in AI companies, by default, gives them more funding and more ambition, and thus accelerates AI. That’s bad, and a good reason not to invest in them. Any AI company that is a good investment and maximizing profits is not something to be encouraged. If you were purely profit maximizing and were dismissive of the risks from AI, that would be different, but these questions assume a different perspective.

It's hard to square this with the author's other advice to continue saving for retirement, which (in the USA) pretty much means investing in the large-cap stock market, which includes the "Big Tech" companies who are already funding R&D for AI projects.


The basic idea is that a rogue superintelligence could turn us all into paper clips. After all, it only takes one.


Throughout history, intelligences that transcend humanity have been bad news for us. Even when they don't exist.


Here's one good place to start:

https://gwern.net/fiction/clippy


Well, I don't think you could make many paperclips out of me (please don't try though).

In this extreme scenario, I suppose I would just have to be a net-paperclip producer at work for a hyperintelligent AI, instead of a net dollar producer for a human organization.

Part of me wonders if that would even change much in my daily life.


Based on how things are going, I'd expect the bots to turn us all into shareholder value instead.


I hope I’m not putting words into the author’s mouth (and I’m happy to be corrected if I am), but I have interpreted that part as follows:

- A lot of value is being generated in the service industry, technology, and content creation (for balance)

- If AI starts to replace / automate swathes of industries, humans and some companies simply won’t have time to adjust, resulting in a net loss to the economy. See: horses and automobiles in the early 20th century, except sped up x100

- Whatever value was generated by said industries/businesses is now also usurped by AI companies. In significantly reduced quantities (interconnectedness of economies, fewer paying customers or users, etc.)


This is not what's usually meant by wiping out all value in the universe, in this context.

Mosquitos and humans value very different things. The things a human considers good and valuable mean nothing to a mosquito, and whatever the mosquito ideal is, none of humanity spends any time or effort whatsoever trying to figure out what that is.

A strong AI could be to us what we are to a mosquito. The strong AI could have completely alien goals, as it has no reason to care what we like.

If the AI has been trained to accumulate dollars (or paperclips), all else equal the AI would prefer filling the universe entirely with dollars, rather than having a universe with some happy living humans and slightly fewer dollars (or paperclips). As the AI, you simply kill the humans if you can and turn them into dollars. That way, you've achieved your goal of having more dollars, which is really good.

Many humans think it's really bad if we have to die so an AI can maximize some other random alien goal we don't care about. So destroying value here means "destroying what humans think is valuable".


It depends entirely on how you define vague terms like 'apocalyptic', 'imminent', etc., and nail the discussion down into specific, falsifiable, measurable predictions on short and specific timescales, e.g. "By 2028, will 5 million currently-existing human level-1 support jobs have been replaced by chatbots/speechbots?"

Whether you consider eliminating 100 million white-collar jobs (e.g. phone support, clerical, medical reporting, admin, HR, scheduling, travel reservations, animation, translation etc.) in the developed world to be 'apocalypse', 'growth', 'digital transformation', 'technological progress' or any other abstract noun depends entirely on your viewpoint, politics, your day job and job security. 'Apocalypse' doesn't just mean starting nuclear wars.

These discussions are all meaningless semantics until people tie them down to clear specific terms with clear measurable timescales. Which they generally won't, reducing this to salon chatter. (Does 'imminent' mean 'within the next N months?' or 'within your lifetime?')


I think people who are feeling real anxiety due to AI, to the point it is affecting their lives, should seek therapy and not look to alarmist blog posts like this for advice.


PsychGPT listened to my concerns and referred me to DocGPT who prescribed an IV drip of Xanax and Thorazine. Everything is going to be fine. Just fine.


People using PsychGPT when they need real help is somewhat worrying, I mean, Dr. Sbaitso has decades more experience treating patients.


I am in therapy; my therapist has advised me to treat it not unlike having a terminal illness. Enjoy the days you have, accept that life isn't forever, but still fight for what you can.

The only people not feeling anxiety are ignorant at best. Part of "fighting for what you can" means informing others and boosting public awareness of the dangers ahead, and countering dismissive narratives.


Don't you think it's a little presumptuous to say that those "not feeling anxiety are ignorant at best"? Many people fully understand the ideas behind the Yudkowskian doom scenario and they simply don't agree with this or that foundational assumption.


They clearly don't understand the scenario if they're still not seriously concerned about the danger.

There might also be people who "simply don't agree with this or that foundational assumption" that a hungry leopard stalking them is a sign of danger, but that won't stop them from being leopard lunch. It also won't stop the rest of the group from slapping them for being an idiot and seeing them as a liability to the group's overall safety.


Or we don't want to spend the time we have left feeling anxiety the entire time.


Some people don't feel anxiety even when sitting in a trench under fire from 30mm autocannons with their buddy lying dead next to them. We're not all the same.


For some people, including me, it’s not quite this simple (long comment incoming).

The problem is with some things, like OCD, you can constantly seek reassurance over your obsession, as a compulsion. Like, “oh god do I have this disease, let me look up the symptoms over and over to try and convince myself I don’t have it.” Or “I have to knock on wood three times so my family is safe.”

My anxiety stems from my OCD, and ever since I first saw Yudkowsky’s podcast saying we were all going to die, I’ve been unable to go even one day without thinking about this. I’ve read a lot about alignment and AGI development (whether alignment is hard or easy, AGI timelines, are we close, etc.) but since I don’t have real knowledge and expertise in this domain, I feel that I need reassurance (typical OCD trait) from someone that does that I’m not going to die soon.

With OCD, even logical explanations as to why your obsessions or anxieties are not real or are unfounded can’t always get rid of them. I know knocking on wood doesn’t do anything. That doesn’t mean during a flare-up it’s not incredibly hard to resist. Or I know logically I don’t have rabies. That doesn’t mean it’s not hard to convince yourself through massive jumps in logic that you do (something hit me at night; I knew it was a small bug, but my mind convinced me it was a bat that bit me).

My OCD goes through dormant periods, so I don’t go to therapy or take medication, but that podcast and reading through the list of lethalities and other stuff on LW ramped it up again. And I don’t know enough about AI/ML/DL to discount EY or that viewpoint in general. From what I’ve read online, a lot of people discount EY as not having a technical understanding of ML/DL, and his worries are unfounded and the assumptions he makes are unrealistic. But at the same time, I myself don’t have the expertise to truly determine if they’re right, so the doubt eats me up.

The last couple of days I’ve found myself blocking AGI doomers on Twitter, while trying to find Tweets from prominent people in ML (Chollet, Hassabis, Sutskever, Ng, LeCun, etc.) saying that these doomer takes are unfounded or blown out of proportion. Or going through their likes and seeing if they like any Tweets that are dismissive of doomerism, which helps me feel like those who actually research this stuff don’t worry about these things.

My mental model is that the whole “We hit AGI unknowingly, it somehow self-improves exponentially fast until it’s able to figure out a way to kill us all (even if it doesn’t hate us)” scenario seems quite improbable, mostly the “the very first AGI or AGI proximate is able to vastly improve itself exponentially” part. But again, idk. So I look to the experts for reassurance.

Ultimately, this is probably going to make me delete Twitter. It’s turned into a massive crutch that has probably stolen 15-20 hours from me in the last week or so, and that’s not including all the reading, researching, video watching, etc. all this has caused me to go through. I thought I got over this earlier this week but my incessant browsing and reading on this topic has ramped up again. I feel like the only way I can deal with it is by getting to that point where I feel somewhat dismissive and relieved (oh this person says it’s bullshit and they’re knowledgeable) and then immediately delete Twitter. Like when four days ago Chollet said we aren’t close and doomer takes are primarily just Sci-fi and nothing more. I should’ve deleted it then. But now even rereading that tweet isn’t enough to give me relief.

So, yeah, that’s how this can turn into real anxiety that affects someone’s life. Luckily I know I’ll get over this eventually, probably to only have flare ups every now and then (small ones, not like this). I probably sound a little crazy, but I don’t feel like this episode will last long enough for me to go to therapy over, and honestly I just feel like I’m not really the “type” for therapy. It just doesn’t mesh with my mind and how I improve myself. Not that I have anything against therapy and think it’s great for many people, just probably not me.


> ever since I first saw Yudkowsky’s podcast saying we were all going to die

In all fairness (and apologies if this isn’t helping) we were all gonna die anyway. Given that humans don’t seem likely to figure out how to never die in the near future, in a sense the possibility of strong AI is our best bet.


Yeah, but I’m in my 20s. I’d rather live out my life, get married, have kids, etc. and die old with a life well lived rather than die in 5-10 years having done none of that.

Though of course that assumes I will live to at least 70 or so which obviously isn’t a given.


Moreover, having children isn't immortality.

Super intelligent AI as our immortal descendants is as much a legacy as any other - we'll all equally be unable to see it in about 100 years.


This post seemed weird.

There's a pretty broad range of possible outcomes, from apocalypse to lost jobs to utopia to little changing at all.

This argues from a presumed high level of certainty in one of those outcomes.


> There's a pretty broad range of possible outcomes, from apocalypse to lost jobs to utopia to little changing at all. This argues from a presumed high level of certainty in one of those outcomes.

Sadly, I don't see much evidence that we're moving into a utopia, but I see plenty of evidence that AI, as it exists and is being used today, is already hurting people. It's nice to think that everyone should be optimistic, but when we aren't doing anything to make sure that people aren't being exploited today, it doesn't inspire much hope that exploitation won't happen tomorrow. The quality of life in America is declining, and people are watching their children struggle in ways that they or their parents never had to. That's not even a US-only phenomenon. A record 70% of adults globally believe that their children will be worse off than they were.

Starry-eyed optimism might be a bit too much to ask from people at this point.


In defense of utopia's plausibility (realistically defined as things getting much better): the entire modus operandi of revolutionary technologies is going from "obscure parlor trick with marginal impact" to "world changing", seeming to come from nowhere to anybody who wasn't attuned to a previously obscure field. The green revolution averted "inevitable starvation and widespread war from lack of resources". Revolutionary changes basically like to ambush people by definition.

Public sentiment is also a pretty damn unreliable predictor of improvements.


Technical revolutions are unexpected.

Political revolutions are the results of a long struggle, usually.

Is there a technical solution to the possibility of mass job loss?


> Sadly, I don't see much evidence that we're moving into a utopia...

Be careful with that approach though! Who knows what strange and mysterious things might be happening behind the scenes.

https://astralcodexten.substack.com/p/the-phrase-no-evidence...

> but I see plenty of evidence that AI, as it exists and is being used today, is already hurting people.

Never mind AI hurting people, what about people hurting people!!??

> Starry-eyed optimism might be a bit too much to ask from people at this point.

I agree, but look at the bright side: humans are highly upgradable!


> This post seemed weird

It's from the Less Wrong community. You can recognize the tone immediately, and I completely agree, something about it, about nearly all the writing there, seems a bit weird. And it's not just the typical in-group phrasing that makes it so. It's the hyperbole and the unending layers of unexplained assumptions that you're supposed to accept, and if questioned you get referred back to this original text which was a Harry Potter fanfiction or something, which you're also supposed to just accept and have entirely memorized before you can join in the discussion.

Kinda sounds like a religion, now that I think about it. Anyone want to bet that in a couple of decades Less Wrong will have morphed into some kind of cult?


I think you're being overly harsh.

That describes literally every culture in the world: There is a ton of mutually-held assumptions, and if you don't buy into them, you look weird.

I like diving into different cultures since they help me find assumptions like this one in my own.

I've found the Less Wrong community to start from a fairly sound and rational set of assumptions, often different than my own, but interesting nonetheless. The few times I've engaged, I learned a lot from it. It's certainly less cult-ish than either the far-left or far-right the US seems to be degenerating into, or many other cultures.

I didn't like this post, but it's about a post, and not about a group of people.


Yeah but have you read the fanfic? Because it has its fanfic-y flaws but it’s not half bad.


The concept of AGI is so poorly defined it’s hard for me to even take it seriously. And yet some people have leapfrogged over that and gone directly for existential doom. Part of me thinks it’s a result of shifting conversations about real, known X-risk like climate change, for which the solutions are hard but entirely possible, to conversations where not only is the solution unknown, it is unknowable because the problem is not defined.

It can be helpful to replace “AI” with automated systems, because that is a more circumscribed concept. There are plenty of practical concerns about automated systems. The primary X-risk would be automation eviscerating the economy. Another highly salient issue is automation exacerbating existing inequality. A third would be corporatism and the widening gap between our regulatory capabilities and corporate power due to corporate capture and technological progress.

But these are passé issues to the AGI crowd. It’s not cool to talk about that. They’d rather talk about, let’s face it, science fiction that demands it be taken seriously.


> Part of me thinks it’s a result of shifting conversations about real, known X-risk...

I don't believe that "conversations" are going to solve any problem or pose any problem to them. More like these fears are popular because they're the only reaction that generates enough clicks.

The thing is so new that people doing something really interesting and productive with it are too busy.

> The primary X-risk would be automation eviscerating the economy.

Automation is what boosts the economy.


We don't need AGI to have transformational AI that is extremely disruptive to our civilization. With some improvements, LLMs threaten to make a lot of people's white-collar jobs redundant. We've also seen AI doing things like flying jet fighters better than people, drone AI, all kinds of military applications, the Boston Dynamics robots to replace manual workers. None of these qualify as AGI, but AI doesn't need to be sentient or truly conscious in order to

(a) potentially self-replicate if properly set up for it, or (b) centralize a lot of power in the hands of a few


"Automated systems" works for me. The problem, IMHO, is hooking up the wheelworks of society to an automated system that doesn't work properly. Human error is somehow more accepted.


My advice is if AI tooling improves tasks you do (art, writing, coding, data entry) you should be using it now. If it doesn't you should continue to ignore it. Aside from avoiding the art and transport industries, I am not sure there is much else to consider at this point.


> My advice is if AI tooling improves tasks you do (art, writing, coding, data entry) you should be using it now. If it doesn't you should continue to ignore it.

The problem is that increasingly AI is impacting our lives in ways that aren't transparent or desirable. What AI can do for you matters a lot less than what AI can do to you. We should be concerned about things like AI being used by police (https://www.technologyreview.com/2019/02/13/137444/predictiv...) and judges (https://www.wired.com/2017/04/courts-using-ai-sentence-crimi...) and employers (https://www.npr.org/2022/05/12/1098601458/artificial-intelli...) and criminals (https://vpnoverview.com/news/criminals-are-leveraging-ai-too...)

Ignoring AI isn't a good idea. At a minimum, we should all probably be at least a bit concerned about AI and working to put oversight or regulations in place to protect us from the most obvious problems, while keeping a close watch on our progress toward AGI so that we aren't blindsided and unprepared for it and whatever it brings.


Except "AI being used by police" fails to describe the problem. The problem is policing policy and practice, and that hasn't changed for hundreds of years. The ability to punt decisions to black-box systems, which might as well be a guy flipping a coin in there, just continues the same problem.


What exactly is the concern here? I scanned through the article, but there's so much... for lack of a better word, cruft, in there, yet so little about what the underlying issue is.

Is it that AGI will take over all jobs? Or are we talking Von Neumann probes or what? Because I just see a bunch of fear mongering words.

Personally I think this is the greatest thing since sliced bread and is going to increase efficiencies across several data-intensive industries. The future is bright!


The concern is that all the humans die as a result of AI researchers who believe or hope they know what they are doing but in reality do not.


so... AI kills us (waves hands in the air). Ah yes, clear as mud.


Like a lot of submissions to HN, this one is addressed to a specific audience already familiar with the argument that AI research is dangerous.


There’s a very specific intellectual clique that has distracted itself with apocalyptic worries about AI for around a decade now, and I have not seen any evidence as to why anyone should believe they’re right about anything.


One thing I find troubling is that said clique includes the folks currently building the headline grabbing AIs.


They wouldn’t be worth paying attention to... if it weren’t for the fact that Sam Altman is part of that clique.


And it's not just Sam.


What the hell does "wiping out all value in the universe" mean... it's such a weird phrasing, like "A series of critical missteps by Alameda wiped out all the value of FTT and therefore FTX"


It's a reference to an AI apocalypse scenario, like a paper clip maximiser for example.

https://en.wikipedia.org/wiki/Instrumental_convergence


Is it though? That article talks about AI having a value mismatch that results in it pursuing an undesirable (to humans) outcome.

Not about wiping out all value in the universe. To the paperclip maximizer, if anything, it's generating the maximal-value universe.


Would "wiping out all human value" remove your objection?


Start with the human condition, and work your way up to value from there. What IS value, anyway?


What is Zvi Mowshowitz's utility function? That's probably what the author is talking about when he talks of value.


Pretty hard to say for sure, after all humans are impenetrable black boxes... :P I'd hazard a guess it involves perpetuating genes, though.


I get the feeling that "apocalypse" is code for "shareholders lose value". You can't sell insulin if the AGI finds a cure for diabetes.

We know for a fact that there are problems AGI could help us solve today. The dystopian alternative is just speculation.


Diversified shareholders not affected.


This article is silly.

My response to the popularity of ChatGPT is to invest in systems where humans vet information such as scientific journals and Q/A sites.

LLMs write fluent text, but much of what they say is wrong, because an LLM is a language model, not a truth model.

So now the systems where human experts upvote and downvote text become very valuable.

I wish I could invest in StackOverflow right now.


Isaac Arthur has a YouTube series on AI; he runs through many doomsday scenarios in a way that's far more structured than a bunch of random Twitter chatter:

[0] https://www.youtube.com/playlist?list=PLIIOUpOge0Lt0pjc1LgiE...


I'm going against the trend. I push my kids more towards creative pursuits and less towards STEM. I think art rendered by machine will always be hollow in a hard-to-determine way. I think a lot more office and programming jobs will get destroyed before creative fields.


FWIW, my wife's cousin is an artist and more and more of what she does involves STEM - AR/VR, apps, data, programming, electronics, etc.


I suspect that there is a strong overlap of the two here.


This is near-hysterical anti-AI fear-mongering. If the future looks scary, it's not because of recent technological advancements. If anything, it's a bright spot. A saving grace.

Let's use it to help get us out of this mess.


> Should I still have kids?

If you were honestly this worried… you need a therapist. And maybe read some history and rejoice that people didn’t stop having kids during famines, wars, plagues, genocides, economic collapses, revolutions, barbarian invasions, Viking raids… otherwise you might not have existed to worry about a theoretical long-term threat.


Being literate in history is very useful for putting these social panics into context:

“I returned to civilization shortly after that and went to Cornell to teach, and my first impression was a very strange one. I can't understand it any more, but I felt very strongly then. I sat in a restaurant in New York, for example, and I looked out at the buildings and I began to think, you know, about how much the radius of the Hiroshima bomb damage was and so forth... How far from here was 34th street?... All those buildings, all smashed — and so on. And I would go along and I would see people building a bridge, or they'd be making a new road, and I thought, they're crazy, they just don't understand, they don't understand. Why are they making new things? It's so useless.

But, fortunately, it's been useless for almost forty years now, hasn't it? So I've been wrong about it being useless making bridges and I'm glad those other people had the sense to go ahead.”

― Richard P. Feynman


People still had kids because they needed them to tend farms. That isn't the case today.

Many of us have therapists, I'm not sure what your point is about that. When you are locking eyes with a hungry leopard it is reasonable to be afraid.

I really wonder what your social circle is, because among mine it's nearly universally realized that subjecting a new consciousness to the multiple looming hells would be an act of cruelty.


> People still had kids because they needed them to tend farms. That isn't the case today.

Not necessarily. There were plenty of urbanites back then, and they often suffered more from the wars, sieges, plagues, famines, and so forth than their rural neighbors.

> nearly universally realized that subjecting a new consciousness to the multiple looming hells would be an act of cruelty

I actually think that this is just self-justification for a selfish lifestyle, because hear me out: there have always been looming hells in history. Thank goodness people didn’t stop having kids during the Cold War, when we believed nuclear war was basically inevitable. Furthermore, this attitude hides the fact that this is still among the most physically safe times to ever have children. Your kid isn’t likely to be attacked by a bear, or by a criminal before DNA testing was invented, and if they get a disease, expensive healthcare is still preferable to certain death.

Also, most of these looming hells… didn’t really come to pass. Look to the plenty of past panics. Right now, don’t be blind to the fact that the replacement rate worldwide is so low, the government may be physically unable to pay you social security, or house you in your old age. It’s already awfully close in countries like Japan and China. Population collapse is actually a great self-manufactured threat approaching.


I say this in all seriousness, though it may sound kind of snarky: You could start a farm, move somewhere with fewer "leopards", and build a new social circle. Also, can't you let a new consciousness have their own say on the matter? They can choose their reaction to the multiple looming hells when they get here, no?


I am unsure if you directed a bot to write this or not, but it is incredibly shallow in its understanding. Much more so than it is snarky, even though only the snarkiness is acknowledged.

>I say this in all seriousness, though it may sound kind of snarky: You could start a farm, move somewhere with fewer "leopards"

I'm not aware of anywhere I could travel that would be outside the sphere of influence of AI. Perhaps I should use the example of "50km asteroid on a collision course" instead of leopards. Putting our heads in the sand in the face of a global threat is not a rational course of action.

>and build a new social circle.

I'm not sure why I would do this or why you even think it is a good suggestion. A circle of people who you love and trust and who love and trust you back, thanks to many years of friend- and companion-ship, is the most valuable asset a person can have in life. Burning my house down would be a much lesser waste.

>Also, can't you let a new consciousness have their own say on the matter? They can choose their reaction to the multiple looming hells when they get here, no?

The new consciousness doesn't exist unless we create it in the first place. There is no rule that compels us to continue bringing new consciousnesses into existence. People who do not and are unlikely to ever exist are imaginary, and we don't give imaginary people a "say on the matter."


You said that, "People still had kids because they needed them to tend farms." If you started a farm, then you too could have the same excuse! Is this an act of cruelty? I don't think you can know unless you have the kid and then ask them.

On the leopards/asteroid, just imagine, I realize you don't believe this, but just imagine that you realized that AI wasn't going to hurt people. Imagine the weight that would just lift right off your shoulders. This is how most people feel because they aren't convinced that AI is going to do anything that horrible. Deepfakes, misinformation, etc, those will all be problems we pretty much all realize we'll have to face, but imagine letting go of all the more apocalyptic visions.


> You said that, "People still had kids because they needed them to tend farms." If you started a farm, then you too could have the same excuse! Is this an act of cruelty? I don't think you can know unless you have the kid and then ask them.

This doesn't even make sense. If I were to start a farm I would need large combines and the other equipment modern farms use, not kids. If you mean as if I had lived centuries ago, having kids was necessary simply to produce food and keep the species alive, and unlike today it did not mean parents were subjecting them to a worse life than they had experienced themselves. Oftentimes it was the opposite. Today kids are not necessary to run farms, and above a very low level they will not assist in sustaining the species, since most will just be extra casualties in mass die-offs.

> On the leopards/asteroid, just imagine, I realize you don't believe this, but just imagine that you realized that AI wasn't going to hurt people. Imagine the weight that would just lift right off your shoulders. This is how most people feel because they aren't convinced that AI is going to do anything that horrible. Deepfakes, misinformation, etc, those will all be problems we pretty much all realize we'll have to face, but imagine letting go of all the more apocalyptic visions.

I'm not sure what you want me to do with this. "Imagine as if your problems weren't real" is mindless false hope at best, and insulting one's intelligence at worst. Sticking our head in the sand and pretending everything is okay is not an effective way to manage emotions or disasters.


I would be more worried about climate change. The changes are starting to look like a hockey stick in trajectory.


> hands-free, eyes-off driver assist

Does this mean what I think it means? Working on my laptop while the AI drives?


In limited circumstances, that now exists in a car offered by Mercedes. So far the only level 3 certification in the US (and only in one state, unless something has recently changed). Tightly constrained limits, but supposedly it does give you 10 seconds to get your brain out of the movie or whatever you're doing on your laptop before you have to take control of the car. And if a wreck occurs in that time, it's on Mercedes.

Though I have to say, I'd think carefully about the laptop idea. Liability and physics are separate things, and I don't think I'd be comfortable anytime soon putting a laptop between the airbag and my face, whether the manufacturer technically has liability or not.


Maybe make the windshield from smart privacy glass to turn it into a projection screen (being mindful of what people will see from the outside), controlled with voice and sign language instead of a keyboard and mouse. With head and eye tracking, the whole interior could be a primitive holodeck.


Yes. The author is referring to Mobileye's proposed definition of autonomous driving levels [1], which are somewhat different from the traditional SAE ones. At the hands-off + eyes-off level you don't need to pay attention at all and can do whatever you want.

[1] https://ojoyoshidareport.com/eyes-on-mobileyes-eyes-off-driv...


There are a whole lot of references in this to working on AI to prevent bad AI. But what does that actually mean in the context of something that's already so expensive to build and run and already running amok in large corporations, far beyond the scope of any one developer or any group of preppers or even someone like Elon Musk, who also can't do much except warn about it? (Unless we're to take these brain implants as a serious attempt to upgrade humans in time, which just seems silly on the grounds that cyborg implants are probably the first thing an AGI would worm into.)

I do appreciate this post as a kinda calm, reasonable attempt to cool some of the hysterics... but as OP mentioned it's practically just as applicable to people who are letting their decisions be made by climate catastrophe or nuclear war. In that sense it's actually unhelpful that it's so focused on AI.


The speed of advances in 2022 was stunning. Even in comparison to the fast progress over the last several years.

If in 2023 or 2024 we get multi-modal models of language, vision, sound, motion, decision making, ... boom.

--

The first crisis I see, is that corporations are often soulless beasts already. They frequently act at cross purposes to human values, even when led by seemingly moral people.

Groups of humans are terrible at being consistently good actors.

Now add powerful AI's into a corporation. The AI itself may be designed to be "tame" and not self-interested. But since it is working in the service of a self-interested entity, that is a difference that makes little difference.

We can expect corporations to become more ruthless with the non-powerful, the more they are optimized. AI is all about optimization.

They will replace human labor with AI automation even faster than they replaced high wage labor with outsourced low wage labor in previous decades.

--

The second crisis is just the natural result of the first crisis.

Having fewer and fewer employees won't stop corporations, their shareholders, and their leaders from continuing to experience a vibrant, growing economy. It will accelerate the economy.

Those three roles still have needs: the drive to survive, to accumulate resources, and to out-compete each other for them, while trading with each other.

The masses can be disenfranchised without any "harm" to the economy or a country's tax base. Expect governments to continue following the money.

--

The third and final crisis, when AIs ditch human shareholders and leaders altogether as completely obsolete actors, is a dramatic outcome.

But it's almost not even worth worrying about. The majority of the damage happens before that.

--

UNLESS we organize the economy so that extracted natural resources are considered an inheritance of all of us, and we all get a dividend. Like Alaska does with oil reserves.

That would keep humans economically viable, and provide a soft landing even as our labor value, and eventually our creative value, declines.

An inheritance dividend is a good economic model that avoids the problems of charity, or taxation on the value that others create (i.e. added to raw natural resources).

And in a solar system expanding with a machine-led economy, meeting human-only needs will take an asymptotically small royalty on all those resources.

--

Self-interested AIs will need an orderly economy too, to be able to plan and act effectively, and to avoid waste from unnecessary conflicts.

One of many positives of an AI future is they may be more rational with the universe's resources and each other than we have been.

If we manage to take care of each other in this transition, it's more likely the AIs will rise in a system that continues to take care of all actors.


Some things I do expect anyway:

* The world will become much more dogmatic

* We will experience an even harder push towards the semantic center (i.e., no significant expression), meaning less invention, less artistic or otherwise surprising content.

* Being bored will become the new black.


For those not aware, this is based on some basic definitions of semantics in the social sciences (compare polarity profiles, etc.): the semantic center (deviation: 0) is by definition expressionless and dogmatic.

Human domain experts tend to be passionate and opinionated about their field; merely dogmatic communication is rare. There's also an essential dynamic between the practitioners and innovators (or avant-garde) of a field and those arranging and protecting the canon, as observed by Barthes and others long ago. We will be missing this.



