AI Companies and Advocates Are Becoming More Cult-Like (rollingstone.com)
44 points by legrande 10 months ago | 45 comments



I was expecting more analysis of specific cult-like behavior, of which I agree there is quite a lot. After a close call with a borderline cult in the 2010s, I spent a fair bit of time processing that experience and reading up on cult dynamics. Some of the stuff OpenAI employees were saying on Twitter during the weekend crisis raised a lot of yellow flags for me.

We should not fool ourselves into thinking that cult-like behavior is isolated to AI, though. It often pops up alongside new tech. On that note this is a pretty remarkable quote:

> “If your product isn’t amenable to spontaneously producing a cult, it’s probably not impactful enough.”


I'm curious, given your experiences, if you've read 'Terror, Love and Brainwashing' by Stein? And if so, what did you think of it?


Have not read it, will check it out


Is there a cult of the wheel? Of penicillin? Of the Haber-Bosch process?

A tool is just a tool. If someone turns it into an idol, they have a giant void in their life. They should seek therapy.

Those AI doomsayers and zealots are all dull people who missed out on life by overworking, have more money than sense, and can't accept not being relevant all the time.


One emerging challenge with AI is that we are not seeing strong adoption outside of chat applications. While every company has pivoted to trying to leverage GenAI, applications outside of OpenAI's have seen ... mixed adoption.

This is not dissimilar to mobile in 2008, or cloud circa 2006. It's ok that new applications take time to emerge, but the valuations of these firms imply that those applications have already emerged. Perhaps this is fine for core AI development, but placing these valuations on SaaS firms could be problematic in the future.


When Siri and Alexa first came out there was another chatbot gold rush but it obviously didn’t work out for a number of reasons.

LLMs and other recent (in the last year) advances have certainly improved understanding and responses for chat agents, so the conversation aspect has gotten better (fewer dead ends or "I don't understand"), but these newer agents still can't really "do" anything, e.g. give a refund. There's still a dead end when you need an actual human to do something or approve the chatbot's recommended action.
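
To make that gap concrete, here's a minimal sketch of the human-in-the-loop pattern these agents end up stuck behind (all names and logic are hypothetical, not any vendor's actual API):

    # Toy human-in-the-loop gate for a support chatbot (purely illustrative).
    # The agent can only *propose* a refund; a person has to sign off before
    # anything happens -- that sign-off is the "dead end" described above.

    def human_approves(proposal: dict) -> bool:
        # Stand-in for a real approval queue; here we just ask on the console.
        answer = input(f"Approve {proposal}? [y/N] ")
        return answer.strip().lower() == "y"

    def handle_refund_request(order_id: str, amount: float) -> str:
        proposal = {"action": "refund", "order_id": order_id, "amount": amount}
        if not human_approves(proposal):
            return "Escalated to a human agent."  # the conversation stalls here
        # issue_refund(order_id, amount) would run here in a real system
        return f"Refund of ${amount:.2f} issued for order {order_id}."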


Aye - this would form the counter-point to the revolutionary thesis. ChatGPT might be the LLM product, and everything else is small potatoes. That would still leave an industry with $30-100 billion in revenue - but that's about the size at which the current combined Nvidia/MSFT/OpenAI valuation has no room to grow.


Driving 101 through the peninsula, it's shocking how many billboards are advertising AI <something>. I think the article correctly diagnoses it as a desperate attempt to tap VC money that's getting scarcer and scarcer.


I've worked in many research scientist/MLE roles over the years and I haven't met any IC who has this much of a fixation on AI being a moral evil/good. The ones who do are inevitably nontechnical hucksters, usually just trying to get money or self-promote.


Or trying to abuse regulation to dig a moat and pull up the drawbridge behind them.


Without an affective death spiral, how can an overpromised, overfunded technology keep a dominant market position? Think of the shareholders!


“Deaths that were preventable by the AI that was prevented from existing is a form of murder.”

Is he trying to dehumanize critics?


I have murdered so many people by not becoming a doctor.


Well yeah, it's just plain moral blackmail. If it wasn't so common in the discourse I'd say he should actually be stopped from doing it.


Two can play that game. "Taking all this money you're wasting on AI and not spending it on mosquito nets is a form of murder."


It's just a pure emotional manipulation attempt. The statement is illogical on its face and presents the tech as having only upsides. It ignores the deaths that AI itself might cause.

Any time I see appeals like this, my takeaway is that the person has no actual argument.


Yeah. I mean, I just saw that in another quote he seems to advocate using AI in war to "save lives". I don't think his reasoning holds up to any inspection.


Give me money or you're a murderer.


I remember listening to a panel discussing blockchains at a user group meeting of a software company, a decade or so ago. Mind you, this was not some cutting-edge technology company; it was boring accounting stuff.

All the panelists had zero clue and were talking in general, vague terms. Of course, not giving too much away, staying politically correct, etc. put additional constraints on what you can say in such meetings.

So whatever evangelical zeal you see at these tech shows is not worth the time. I think most people know. They just play their role, nodding their heads while waiting for the next coffee break.

But it's a very interesting article that will surely stir a lot of thought. The usual high standards of this magazine.


Can we finally talk about this site, and how it relates to these cults? There are a number of LessWrong, e/acc, and other pseudo-"rationalist" blogs that get shared and upvoted on this site. Most of their assumptions go unchallenged. I'm not saying they shouldn't be read, or debated. But their writing should be viewed in context - it's fringe stuff, written by people from a peculiar subculture with values out of whack with most people's.

I'd like to make some modest assertions, to push back on fringe ideas I've seen repeated here:

1. For "the singularity" to happen, we probably need something more than just ChatGPT ingesting more data or using more processing power.

2. Even if, somehow, ChatGPT turned into Skynet, we'd hopefully be able to unplug the computer.

3. If you want to save lives, it's probably more useful to think about lives saved today than hypothetical lives you could save 100 years from now. Not that you shouldn't consider the long term, but your ability to predict the future gets worse and worse the farther you project out.

4. If you want to save lives, it's probably more useful to save actual lives, than say, hypothetical simulated lives that exist inside of a computer.

5. The argument that "we're killing more people by delaying the invention of hypothetical life-saving technology" is not very useful either, because you can't actually say how many lives would be saved versus harmed. And mostly it just sounds like a front for "give me more money and less regulation".

6. Reading a bunch of science fiction and getting into fights on an internet forum is not a substitute for education and experience. Unless you've spent a good amount of time studying viruses, you are not a virologist, and while the consensus among virologists can be wrong, you should have the intellectual humility to realize that you are probably not equipped to tell unless you have expertise in the field.

7. Anything that smacks of eugenics is most likely pseudoscience.

8. If someone talks like a racist / sexist / nazi, or acts like a racist / sexist / nazi, they probably are one. It's probably not a joke, or a test.


> 1. For "the singularity" to happen, we probably need something more than just ChatGPT ingesting more data or using more processing power.

It's not actually clear what "the singularity" is. Is it something running out of control, or is it still controllable? There is a blurry line. People are afraid because they think it's a sort of uncontrollable explosion.

The second question is about AGI. What is it? Is it something 'alive', or just a generic AI calculator with no 'creature' features, like self-preservation at the least?

I think our view of these two things will change soon as we get a close-up picture. Pretty much like the Turing test doesn't look great anymore, as even dumb chatbots can pass it.


I personally define AGI as a technology capable of improving itself exponentially.

But I realize my definition is in the minority. :/

Of course if we ever manage to make a 1:1 cybernetic brain that works exactly like a human's brain, and is also a complete black box, we'll have achieved AGI. I'm not sure how useful that will be, but I'll have to admit it is AGI.

So maybe I should say, "interesting AGI" is technology that can improve itself exponentially. :-D


Yeah, if you could input data set X with quality Q(X) and output data set Y of the same size with Q(Y) > Q(X), you'd really be on to something. But I don't think such a system exists yet, or anything close. Inputting the internet and outputting a sea of garbage with a handful of diamonds that people have to spelunk for seems to be the best so far. Mad Libs is a pretty equivalent activity, and while fun, it's certainly not anything one would consider AGI. We need a revolutionary improvement in automated spelunking to get anywhere. Maybe we'll get a good spam filter as a side effect!

But even if you had such a system, there's still the resource cost of running the algorithm (it needs to stay bounded, or else you've just made a finite jump), and the gains you make need to not decay (or, again, you've just made a finite jump).

And all this needs to be compared against investing in humans, which seem pretty clearly to have AGI properties (but with some really bad constant factors - a 20-year training time, ridiculous!).

To me it seems things are a long way off, at least a couple of major innovations away. But at least there are some ideas of the problems to tackle, which is a big step up!
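
To make the "finite jump" point concrete, here's a toy model (numbers entirely made up): if each round of self-improvement multiplies quality by a factor that itself decays, the total gain converges to a ceiling instead of running away.

    # Toy model, nothing more: the quality gain per round decays geometrically,
    # q_{n+1} = q_n * (1 + g * d**n), with 0 < d < 1.
    q, g, d = 1.0, 0.5, 0.8
    for n in range(1000):
        q *= 1 + g * d ** n
    print(q)  # settles around ~9x the start: a finite jump, not a runaway
    # With non-decaying gains (d = 1), the same loop grows without bound.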


> I personally define AGI as a technology capable of improving itself exponentially.

Have you thought this through, though? You'd first have to know in what way to improve, which would get more challenging the more perfect you became; you'd have to want to become perfect (which, let's be honest, might be boring); and then there's the fact that if you kept evolving and improving yourself exponentially, you'd no longer exist, because you'd likely be morphing into other forms all the time.

In a way, maybe the only thing I can think of that is observably doing something like this is the universe itself.


> can improve itself exponentially

This is close to the singularity - except 'does' instead of 'can'. A big difference ;)

We probably need several AGI terms, because a sub-human robot capable of doing many non-pre-programmed things is sort of it, while still not being smart enough to improve itself.

Actually, most humans, the smartest creatures we know of, cannot improve even current AI. Demanding self-improvement would put its IQ in the top 0.01% of all known intelligent creatures. That is probably too much to ask of mere AGI; we may not recognize it when it is already here. And there is another question: with such an IQ, do we really want to keep it a slave forever?


> If you want to save lives, it's probably more useful to think about lives saved today than hypothetical lives you could save 100 years from now. Not that you shouldn't consider the long term, but your ability to predict the future gets worse and worse the farther you project out.

This reasoning, taken to its logical extreme, would mean completely defunding all medical research and instead redeploying that funding toward best utilizing our current technology.

That is, hopefully, an obviously faulty decision.

We must balance research with using current findings to help people. Stopping all drug development and telling people with currently untreatable diseases "haha, too bad, we aren't even going to try" is cruel.

The ultimate goal of many technology utopians is the end of all death. We have examples from the natural world of incredibly long-lived higher life forms, and of simple life forms that are immortal, so it isn't scientifically impossible. The number of lives saved is, literally, infinite. It is really hard to argue with "infinite upside".

(The absolute societal shit show that such a technology would bring about is a rather important question...)

> "we're killing more people by delaying the invention of hypothetical life-saving technology" is not very useful either, because you can't actually say how many lives would be saved versus harmed.

For many future inventions, estimates can of course be made of the number of lives saved.

The more pie-in-the-sky the research is, the larger those error bars become. "LLMs may one day help us sift through research and cure all forms of cancer" is technically true, but the error bars on it are so wide that you don't want to devote all the world's resources to LLMs just on the hope of eliminating all forms of cancer 50+ years from now.

"This company is working on AI Vision+Robotics technology so people don't have to do this super hazardous job that claims a bunch of lives. They are an estimated 5 years from productizing and they already have purchase contracts in place" is a much different statement.


I would prefer cults about nerd topics (which only someone with a desktop or even laptop computer would read about, forget mobile device users) over nationalism based on phenomenological skin color.

And e/acc is more of a little club than it is a cult. But I guess you could also consider it a criminal gang, not unlike the Crips and the Bloods. Why? Because they're part of a counter-culture, and naturally they want to diverge from the mainstream thought process. And the criminal activity e/acc engages in is non-compliant computer engineering. Which the Cathedral (pardon my language) dislikes.

Basically, this is evolution in action. You're looking at the beginnings of a new species. Millions of new species are coming, really.


Well, this site often deals with fringe stuff. Take Lisp, for example. It is a fringe language, no matter how zealous its advocates are. So is Haskell. But you see articles about them posted here far out of proportion to their real-world use and impact.

And I disagree that "most of their assumptions go unchallenged". I see challenges to the assumptions of lesswrong, e/acc, and other such stuff often here. (Maybe less often than is warranted, but still fairly often.)

"It is the mark of an educated mind to be able to entertain a thought without accepting it." - often attributed to Aristotle, but apparently not actually from him. Still a good thought. We can post about this stuff, and discuss it, without buying it.

I agree with your pushbacks.


9. Anyone who doesn’t recognize Roko’s basilisk as a bad ripoff of Pascal’s wager is an ignoramus. Anyone who is seriously concerned about it for more than an hour should consider a vasectomy.

10. I can’t think of a better form of “effective altruism” than paying for LessWrong members’ vasectomies. Not because I’m concerned about their genes. They’d just be terrible parents, and gods those kids would need lots of therapy.


As with other accelerationist manifestos I've read, I'm still not convinced e/acc isn't a shitpost at best and drug-induced psychosis at worst.


Hey, buddy (yes, I am intentionally being condescendingly familiar) these cults/movements are self-serving and cannot be reasoned with via rationality under the assumption they’re good faith actors.

This is akin to falling for TikTok/etc. ragebait and trying to have a discourse in the comments section.

The subcultures allow for the exchange of material and non-material goods — primarily money and “woo.” Everything else is just a tool to that end.



"Related" rollingstone, from 3 years ago:

Welcome to the Church of Bitcoin

https://news.ycombinator.com/item?id=27871038


Becoming?


SV Pivots from Bunkers in Nauru to Calling Enemies Murderers

I mean, really? "effective accelerationism"?


They're effectively accelerating how quickly people recognise that they're loons, which I think is a huge plus.


People are scary :/


Part of it is the ultra-concentration of white men, with few or no women, in the Bay Area, imo.

If you have zero social life and zero chance of a social life, and the aperture of available activities is constrained purely to ultra-pure social media groups of similar white men, and you never interact with women individually or as a group …

Society is generally breaking down and falling apart at that point.

Normally I would expect men to moderate their behavior if there are women going: “eghhhhhh idk that is tooooooo far guys you need to chill out.”

That isn’t happening anymore, and hasn’t been happening for a while.

So now you are getting a sort of ultra-distilled white-male in-group effect.

Women civilize men, generally, and that hasn’t been happening for years.

You are going to get cults and they are going to get more crazy.

Things like imagining a heaven where you get 72 virgins (in Islam) are byproducts of highly stressed-out societies where men have next to zero prospects.

It’s not good. The media isn’t going to fix it tho.

Personally, my life post-COVID has declined considerably in the social domain. My circle has gotten narrower and narrower, and I feel more and more weird and isolated.

I don’t want to be weird and isolated. I would rather have friends and a community.

So what do I do? I actually go to church now. That’s literally my only option and I’m not even that religious.

Remote work and the isolation has put incredible stress on me as an animal. I don’t like this.


You make a good overall point here, but you've really compromised it by saying 'white men', because:

1. That particular phrase has been massively overused in a disparaging way by 'wokists' and a lot of people are going to see those words and just assume you're 'one of them' and ignore you

and

2. The SV AI set isn't particularly white.


I have never seen a group of Asians organizing an AI cult. All the OpenAI people engaged in cult behavior are white.

Have you seen any Indian or Asian people or women building a cult?


Yes, to all of these demographics.

Guru-led cults are particularly popular in India, and there are some internationally infamous ones too, e.g. the cult of Sathya Sai Baba.


> Sathya Sai Baba

They're an odd bunch


Of course they should ignore them. It is literally pure sexism and racism.


It seems to me that women and non-white people are just as vulnerable to hype cycles and cult behaviour as white men are.

I do agree that lack of "well-roundedness", for lack of a better word, in some tech circles is part of the cause of this sort of thing. But again, non-white men are just as vulnerable to that as white men.


The topic was AI cults. Do you see minorities building them? No. They're too busy trying to work to get even the basics; no time to start a cult.



