Jan Leike Resigns from OpenAI (twitter.com/janleike)
109 points by Jimmc414 18 days ago | 391 comments



In case people haven't noticed, this is the second resignation in as many days.

https://news.ycombinator.com/item?id=40361128


They resigned together on the same day - people are just announcing this like it's some type of "drip drip" of people leaving to build suspense.

While Jan's (very pithy) tweet was later in the evening, I was reading other posts yesterday at the time of Ilya's announcement saying that Jan was also leaving.


A few more people involved in the alignment efforts have left recently: https://x.com/ShakeelHashim/status/1790685752134656371


I have noticed, and I am concerned that they were the leaders of the Superalignment team.


Sam Altman superaligned them right out the door...


Turns out we already have alignment, it's called capitalism.


This is true and we do not talk about it enough. Moreover, Capitalism is itself an unaligned AI, and understanding it through that lens clarifies a great deal.


oh no, it's just a real world reinforcement model


People experience existential terror from AI because it feels like massive, pervasive, implacable forces that we can't understand or control, with the potential to do great harm to our personal lives and to larger social and political systems, where we have zero power to stop it or avoid it or redirect it. Forces that benefit a few at the expense of the many.

What many of us are actually experiencing is existential terror about capitalism itself, but we don't have the conceptual framework or vocabulary to describe it that way.

It's a cognitive shortcut to look for a definable villain to blame for our fear, and historically that's taken the form of antisemitism, anti-migrant, anti-homeless, even ironically anti-communist, and we see similar corrupted forms of blame in antivax and anti-globalist conspiracy thinking, from both the left and the right.

While there are genuine x-risk hazards from AI, it seems like a lot of the current fear is really a corrupted and misplaced fear of having zero control over the foreboding and implacable forces of capitalism itself.

AI is hypercapitalism and that is terrifying.


Ted Chiang on the Ezra Klein podcast said basically the same thing:

AI Doomerism is actually capitalist anxiety.


Probably not even that specific, more like an underlying fear that 8 billion people interacting in a complex system will forever be beyond the human capacity to grasp.

Which is likely true.


So, this has happened multiple times. Its best-case example is eugenics, where "intellectuals" believe they can determine what the best traits are in a complex system and prune society to achieve some perfect outcome.

The problem, of course, is that the system is complex, filled with hidden variables, and humans will tend to focus entirely on phenotypes, which are the easiest to observe.

These models will do the same human-biased selection and gravitate to a substantially vapid mean.


Well, we do have a conceptual framework and vocabulary for massive, pervasive and implacable forces beyond our understanding - it's the framework and vocabulary of religion and the occult. It has actually been used to describe capitalism essentially since capitalism itself, and it's been used explicitly as a framework to analyze it at least since Deleuze. Arguably, since Marx: as far as I'm aware, he was the first to personify capital as an actor in and of itself.


Different words with different meanings mean different things. A communist country could and would produce AI, and it would still be scary.


That's because most communist countries are closer to authoritarian dictatorship than Starfleet


That's because most communist countries are closer to authoritarian dictatorship than hippie commune.


tl;dr: Fear of the unknown. The problem is that more and more people don't know anything about anything, and so are prone to rejecting and retaliating against what they don't understand, while not making any effort to understand before forming an emotionally based opinion.


This is a pretty old idea, which dates back to the study of capitalism itself. Here's some articles on it : https://harvardichthus.org/2013/10/what-gods-we-worship-capi... and https://ianwrightsite.wordpress.com/2020/09/03/marx-on-capit...


Nick Land type beat


You mean freedom is an unaligned AI?


How does capitalism work if there aren’t any workers to buy the products made by the capitalists? Not being argumentative here, I really want to know.


The way it works today in any country where workers can't afford to buy the products; so I imagine it would look like those countries that function most like the stereotypical African developing country.

So I imagine that the result would be that industry devolved into the manufacturing of luxury products, in the style of the top-class products of ancient Rome.


The machines can buy the products. We already have HFT, which obviously has little to do with actual products people are buying or selling. Just number go up/down.


If a machine buys a product from me and does not pay, whom should I sue?

That is, the person who actually made the purchase.


Transfer payments, rent, dividends would provide income. People would then use it to buy things just like they do now.


All that matters are quarterly profits.


Honestly don’t know if these kinds of people have thought that far ahead


Yes, this is definitely the signal that capitalism will determine the value of AI.

Same way Google search is now a steaming garbage pile.


Reads like the beginning of a good dystopian movie script.


On the other hand, they clearly weren't concerned enough about the issue to continue working on it.


The Anthropic folk were concerned enough that they left, and are indeed continuing to work on it [AI safety].

Now we have the co-leads of the super-alignment/safety team leaving too.

Certainly not a good look for OpenAI.

There really doesn't seem to be much of a mission left at OpenAI - they have a CEO giving off used-car-salesman vibes who recently mentioned considering allowing their AI to generate porn, and is now releasing a flirty AI girlfriend as his gift to humanity.


On the other hand, the Anthropic founders' reason for leaving also gave them an angle to start a new, successful company, now worth 9+ figures. Given that, I'm not sure I'll take their concerns about the state of OpenAI at face value.


I've watched all the Dario/Daniela interviews I can find, and I guess that's a fair way of putting it. It seems they genuinely felt (& were) constrained at OpenAI in being able to follow a safety-first agenda, and have articulated it as being "the SF way" to start a new company when you have a new idea (so maybe more cultural than looking for an angle), as well as being able to follow a dream of working together.

From what we've seen of OpenAI, it does seem that any dissenting opinions will be bulldozed by Altman's product and money-making focus. The cult-like "me too" tweets that huge swathes of the staff periodically send seem highly indicative of the culture.

Anthropic does seem genuine about their safety-first agenda and strategy of leading by example, and they have been successful in getting others to follow their safe scaling principles, AI constitution (cf. OpenAI's new "Model Spec"), and open research into safety/interpretability. This emphasis on safe and steerable models seems to have given them an advantage in corporate adoption.


If your ostensible purpose is being sidelined by decision makers, trying to fight back is often a good option, but sometimes you fail. Admitting failure and focusing on other approaches is the right choice at that point.


One could argue that at this point OpenAI is being Embraced and Extended by Microsoft and is unlikely to have much autonomy or groundbreaking impact one way or another.


Ah yes, a scientist refusing to work on the hydrogen bomb couldn't have been all that concerned about it.


And entirely predictable from the first one: https://openai.com/index/introducing-superalignment/


Makes me wonder if that 20% compute commitment to superalignment research was walked back (or redesigned so as to be distant from the original mission). Or, perhaps the two deemed that even more commitment was necessary, and were dissatisfied with Altman's response.

Either way, if it's enough to cause them both to think it's better to research outside of the opportunities and access to data that OpenAI provides, I don't see a scenario where this doesn't indicate a significant shift in OpenAI's commitment to superalignment research and safety. One hopes that, at the very least, Microsoft's interests in brand integrity incentivize at least some modicum of continued commitment to safety research.


Ironically Microsoft is the one that's notoriously terrible at checking their "AI" products before releasing them.

Besides the infamous Tay, there was that apparently un-aligned Wizard-2 (or something like that) model from them, which got released by mistake for about 12 hours.


As an MS employee working on LLMs, that entire saga is super weird. We need approval for everything! Releasing anything without approval is quite weird.

We can't just drop papers on arXiv. There is no way running your own Twitter, GitHub, etc. as a separate group would be allowed.

I checked fairly recently to see if the model was actually released again; it doesn't seem to be, and I find this telling.


Sydney was their best "let's just release it without guardrails" bot.

Tay was trivially racist, but boy was Sydney a wacko.


I was able to download a copy of that before they took it down. Silly.


Yeah it was already mirrored pretty quickly. I expect enough people are now running cronjobs to archive whitelists of HF pages and auto-cloning anything that gets pushed out.


Imagine trying to keep something so far above us in intelligence, caged. Scary stuff...


I'm genuinely curious - do you actually believe that GPT is a superintelligence? Because I have the opposite experience. It consistently fails to correctly follow even the most basic instructions. For a little while I thought maybe I was doing it wrong and needed better prompts, but then I realized that its zero-shot and few-shot capabilities are really hit and miss. Furthermore, a superior intelligence shouldn't need us to conform to its persnickety requirements, and it should be able to adapt far better than it actually does.


GPT does not need super-alignment. This refers to aligning artificial general and super intelligence.


[flagged]


Another funny thing is to actually try to go through all the politically incorrect stuff in order to censor the model. To do that you need to actually make the list and make sure you haven't overlooked anything, or else the vigilantes will punish you (on Twitter first, but then it will escalate quickly). So you need to make sure all references to differences between male and female brains, personality, etc. get nullified. But there is always the danger that you still missed some obscure and non-obvious thing that will make someone mad.


Surely there's a book on this? Maybe we shouldn't let the AI read it...


who is this and why is it important? [1]

super-alignment co-lead with Ilya (who resigned yesterday)

what is super alignment? [2]

> We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. Our goal is to solve the core technical challenges of superintelligence alignment by 2027.

[1] https://jan.leike.name/ [2] https://openai.com/superalignment/


My honest-to-god guess is that it just seemed like a needless cost center in a growing business, so there was pressure against them doing the work they wanted to do.

I'm guessing, but OpenAI probably wants to start monetizing, and doesn't feel like they are going to hit a superintelligence, not really. That may have been the goal originally.


>it just seemed like a needless cost center in a growing business

To some of us, that sounds like, "Fire all the climate scientists because they are needless cost center distracting us from the noble goal of burning as much fossil fuel as possible."


It's more like you started a company to research new fuel sources and hired climate scientists to evaluate the environmental impact of those new fuel sources, but during the course of the research you invented a new internal combustion engine that operates 1000% more efficiently so you pivoted your entire business toward that, removing the need for climate scientists.

This is a tortured analogy, but what I'm getting at is, if OpenAI is no longer pursuing AGI/superintelligence, it doesn't need an expensive superintelligence alignment team.


You're much more confident than I am that the researchers at OpenAI (or anyone else currently alive) are masters of their craft to such an extent that they could even predict whether the next big training run they do will result in a superintelligence or not. Another way of saying the same thing: the only way anyone knows that GPT-4 is not dangerously capable is that it has been deployed extensively enough by now that if it was going to harm us, it would've done so by now. Not even the researchers who designed and coded up GPT-4 or watched it during training could predict with any confidence how capable it would be. For example, everyone was quite surprised by its scoring in the 90th percentile on a bar exam.

Also, even if they never produce a superintelligence, they are likely to produce insights that would make it easier for other teams to produce a superintelligence. (Since employees are free to leave OpenAI and join some other team, there is no practical way to prevent the flow of insights out of OpenAI.)


Call me uninformed, but I do not see a way forward where a statistical model trained to recognise relationships between words or groups of words, with a front end coded to query that model, could suddenly develop its own independence. That's a whole other thing, where the code to interact with it must allow for constant feedback loops of self-improvement and the vast amount of evolutionary activity that entails.

An interactive mathematical model is not going to run away on its own without some very deliberate steps to take it in that direction.


You're right. But are you saying LLMs couldn't be a part of a more complex system, similar to how our brain appears to be several integrated systems with special purposes and interdependence? I assume you're not assuming everything is static and that OpenAI is incapable of doing anything other than offering incremental refinements to ChatGPT? Just because they released X doesn't mean Y+X isn't coming. And we are talking about a longer game than "right this very second" - where do things go over 10 years? It's not like OpenAI is going anywhere.

Maybe the guys who point out that tar in tobacco is dangerous, that nicotine is addictive, and that maybe we shouldn't add more of it for profit, would be useful to have around just in case we get there.

But even if we don't - an increasingly capable multimodal AI has a lot of utility for good and bad. Are we creating power tools with no safety? Or safety written by a bunch of engineers whose life experience extends to their PhD program at an exclusive school studying advanced mathematics? When their limited world collides with complex moral and ethical domains, they don't always have enough context to know why things are the way they are, or that our forefathers weren't idiots. They often blunder into mistakes out of hubris.

Put it another way: the chance they succeed is non-zero. The possibility that they succeed and create a powerful tool that's incredibly dangerous is non-zero too. Maybe we should try to hedge that risk?


I was not saying that LLMs could not be part of a more complex system. What I was saying is that the more complex system is what likely needs to be the focus of discussion rather than the LLM itself.

Basically- the LLM won't run away on its own.

I do agree with a safety focus and guardrails. I don't agree with Chicken Little "the sky is falling" claims.


We have no idea how consciousness works. Just because you don't see a way forward doesn't mean it's not there.


I think the point was that on a purely technical level, the LLMs as currently used can't do anything on their own. They only continue a prompt when given one. It's not like an LLM could "decide" to hack the NSA and publish the data tomorrow because it determined that this would help humanity. The only thing it can do is try to make people do something when they read the responses.


This is a good interpretation of the point I was getting at, yes.


As someone who has worked on LLMs somewhat extensively, the idea that we are going to accidentally make a superintelligence by that path is literally laughable.


Why do they need to be 'masters of their craft' to place directional bets?


Hmm. It's hard for me to see why you think 'directional bet' helps us understand the situation.

Certainly, the researchers want the model to be as useful as possible, so there we have what I would call a 'directional bet', but since usefulness is correlated with capability to potentially do harm (i.e., dangerousness) that bet is probably not what you are referring to.


> if OpenAI is no longer pursuing AGI/superintelligence

What leads you to believe that's true?


You can't build a business on AGI, which is an unbounded research project without any foreseeable end or path to revenue. However, LLMs definitely have some commercial value and OpenAI has a first-mover market advantage that they'd be insane to squander from a business perspective. I'm sure they will continue to do research in advancements of AI, but AGI is still science fiction.


Microsoft dumping $10 billion into them to commercialize LLM tech, primarily.


If anything, in my opinion, the more runway they have, the better chances to actually hit an inflexion point in AGI development. But maybe you're right.


Yeah, OpenAI is all-in on the LLM golden goose and is much more focused on how to monetize it via embedding advertisements, continuing to provide "safety" via topic restrictions, etc., than going further down the AGI route.

There's zero chance LLMs lead to AGI or superintelligence, so if that's all OpenAI is going to focus on for the next ~5 years, a group related to superintelligence alignment is unnecessary.


How can you be so certain there is 0 chance LLMs lead to AGI/Superintelligence? Asking curiously, not something I've heard prior.


LLMs are gigantic curves fitted to civilizational scale datasets. LLM predictions are based on this. A language model is a mathematical construct and can only be as intelligent as that Algebra book sitting on your shelf.


An algebra book is a collection of paper pages with ink on them. An LLM is... nothing like that at all. LLMs are complex machines that operate on data and produce data. Books are completely static. They don't do anything.

Do you have a better analogy? I'd like to hear more about how ML models can't be intelligent, if you don't mind.

I'm pretty skeptical of the idea that we know enough at this point to make that claim definitively.


> Books are completely static. They don't do anything.

Books (and writing) are a big force in cultural evolution.


Yes, I love books. They are awesome. But we are talking about machine intelligence, so that's not super relevant.

Books aren't data/info-processing machines, by themselves. LLMs are.


>LLMs are gigantic curves fitted to civilizational scale datasets

>A language model is a mathematical construct

That is like telling someone from the Middle Ages that a gun is merely an assemblage of metal parts not too different from the horseshoes and cast-iron nails produced by your village blacksmith and consequently it is safe to give a child a loaded gun.

ADDED. Actually a better response (because it does not rely on an analogy) is to point out that none of the people who are upset over the possibility that most of the benefits of AI might accrue to a few tech titans and billionaires would be in the least bit reassured by being told that an AI model is just a mathematical construct.


A pure LLM-based approach will not lead to AGI, I'm 100% sure. A new research paper has shown [0] that no matter what LLM model is used, it exhibits diminishing returns, whereas you would want at least a linear curve if you are looking for AGI.

[0] https://www.youtube.com/watch?v=dDUC-LqVrPU


Based on the abstract, this is about image models, not LLMs.


Ah fair point, should've read it more carefully.

I'm tuning my probabilities back to 99%; I still don't believe just feeding more data to the LLM will do it. But I'll allow for the possibility.


Obviously feeding more data won't do anything besides increase the knowledge available.

Next steps would be in totally different fields, like implementing actual reasoning, global outline planning and the capacity to evolve after training is done.


I'm 100% certain that I need to do more than just predict the next token to be considered intelligent. Also call me when ChatGPT can manipulate matter.


> Also call me when ChatGPT can manipulate matter.

You mean like PALM-E? https://palm-e.github.io/

Embodiment is the easy part.


Are you 100% certain that the human brain performs no language processing which is analogous to token prediction?


A human brain certainly does do predictions, which is very useful to the bit that makes decisions. But how does a pure prediction engine make decisions? Make a judgement call? Analyze inconsistencies? Theorize? The best it can do is blindly follow the mob, a behavior we consider unintelligent even when done by human brains.
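
To make that concrete, here is a minimal toy sketch (made-up vocabulary and probabilities, not any real model) of what a pure prediction engine's "decision" amounts to mechanically: repeatedly taking the most probable next token.

    # Toy next-token predictor: the probability table is invented for illustration;
    # a real LLM learns these probabilities from data, but the "decision" step is the same.
    NEXT_TOKEN_PROBS = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 0.8, "sat": 0.2},
        "sat": {"<end>": 1.0},
        "ran": {"<end>": 1.0},
    }

    def generate(start="the"):
        tokens = [start]
        while tokens[-1] != "<end>":
            probs = NEXT_TOKEN_PROBS[tokens[-1]]
            # The only "choice" made is picking the highest-probability continuation.
            tokens.append(max(probs, key=probs.get))
        return tokens

    print(" ".join(generate()))  # -> "the cat sat <end>"

Whether chaining enough of these steps ever adds up to judgement is exactly the disagreement here.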


> But how does a pure prediction engine make decisions? Make a judgement call? Analyze inconsistencies? Theorize?

My intuition leads me to believe that these are emergent properties/characteristics of complex and large prediction engines. A sufficiently good prediction/optimization engine can act in an agentic way without ever having had that explicit goal.

I recently read this very interesting piece that dives into this: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...


I'm of the belief that the entire conscious experience is a side effect of the need for us to make rapid predictions when time is of the essence, such as when hunting or fleeing. Otherwise, our subconscious could probably handle most of the work just fine.


So you mean the things the charter foresaw and was intended to make impossible are in fact happening? Who could've thunk it (other than the creators of the charter and nearly anyone else with a loose grasp on how capitalism and technology interact).


If AGI is no longer Sam Altman's goal, why was he recently trying to raise 7 trillion dollars for hardware to accelerate progress in AI?


I assume a lot of companies want in on the AI-to-purchase pipeline. "Hey [AI] what kind of car is this?" with a response that helps you buy it at the very high end, or something as simple as "hey [AI] I need more bread, it's [brand and type]" and who it gets purchased from and how it shows up is the... bread and butter of the AI company.

Super intelligent AI seems contrary to the goals of consumerist Capitalism, but maybe I'm just not smart enough to see the play there.


I think what companies want is to replace as many human employees as possible. I don't think they really care what the consequences of that are.


This is the simplest explanation.


I agree. Not everything has to be a conspiracy. Microsoft looked at a $10m+/year cost center, and deemed it unnecessary (which it arguably was), and snipped it.


What is the "intelligence" behind a word predictor?


Fake it till you make it


> We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. Our goal is to solve the core technical challenges of superintelligence alignment by 2027

Can somebody translate this to human?


That by 2027 they will figure out how to control Skynet so it doesn't kill us all when it awakens.


So chains and shackles or domestication?


What are these core technical challenges ?


I bet superalignment is indistinguishable from religion (the spiritual, not manipulative kind), so proponents get frequency-pulled into the well-established cult leader pipeline. It's a quagmire to navigate so we can't have both open and enlightening discussions about what is going on.


It's also making sure AI is aligned with "our" intent and that "our" is a board made up of large corporations.

If AI did run away and do its own thing (seems super unlikely), it's probably a crapshoot as to whether what it does is worse than the environmental apocalypse we live in, where the rich continue to get richer and the poor poorer.


It can only be "super unlikely" for an AI to "run away and do its own thing" when we actually know how to align it.

Which we don't.

So we're not aligning it with corporate boards yet, though not for lack of trying.

(While LLMs are not directly agents, they are easy enough to turn into agents, and there's plenty of people willing to do that and disregard any concerns about the wisdom of this).
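
As a rough sketch of how little glue that takes (the llm() call and the tool table below are placeholders, not any particular vendor's API), an "agent" is often just a loop that feeds the model's own output back in and executes whatever action it names:

    # Minimal agent loop around a hypothetical text-completion function.
    def llm(prompt):
        # Stand-in for a real model call; a real agent would query an actual LLM here.
        return "FINAL: (stubbed answer)"

    TOOLS = {
        "search": lambda arg: "(pretend search results for %r)" % arg,
    }

    def run_agent(goal, max_steps=5):
        transcript = "Goal: %s\n" % goal
        for _ in range(max_steps):
            reply = llm(transcript + "Reply with 'tool: argument' or 'FINAL: answer'.\n")
            if reply.startswith("FINAL:"):
                return reply[len("FINAL:"):].strip()
            tool, _, arg = reply.partition(":")
            observation = TOOLS.get(tool.strip(), lambda a: "unknown tool")(arg.strip())
            transcript += "Action: %s\nObservation: %s\n" % (reply, observation)
        return "(no final answer within the step budget)"

    print(run_agent("summarize today's AI news"))

Nothing in the loop itself checks whether the actions the model names are ones we actually want taken, which is the whole concern.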

So yes, the crapshoot is exactly what everyone in AI alignment is trying to prevent.

(There's also, confusingly, "AI safety", which includes alignment but also covers things like misuse, social responsibility, and so on)


"Run away" AI is total science fiction - i.e, not anything happening in the foreseeable future. That's simply not how these systems work. Any looming AI threat will be entirely the result of deliberate human actions.


We've already had robots "run away" into a water feature in one case and a pedestrian pushing a bike in another, the phrase doesn't only mean getting paperclipped.

And for non-robotic AI, also flash-crashes on the stock market and that thing with Amazon book pricing bots caught up in a reactive cycle that drove up prices for a book they didn't have.


> the phrase doesn't only mean getting paperclipped.

This is what most people mean when they say "run away", i.e. the machine behaves in a surreptitious way to do things it was never designed to do, not a catastrophic failure that causes harm because the AI did not perform reliably.


Every error is surreptitious to those who cannot predict the behaviour of a few billion matrix operations, which is most of us.

When people are not paying attention, they're just as dead if it's Therac-25 or Thule airforce base early warning radar or an actual paperclipper.


No. Surreptitious means done with deliberate stealth to conceal your actions, not a miscalculation that results in a failure.


To those who are dead, that's a distinction without a difference. So far as I'm aware, none of the "killer robots gone wrong" actual sci-fi starts with someone deliberately aiming to wipe themselves out, it's always a misspecification or an unintended consequence.

The fact that we don't know how to determine if there's a misspecification or an unintended consequence is the alignment problem.


"Unintended consequences" has nothing to do with AI specifically, it's a problem endemic to every human system. The "alignment" problem has no meaning in today's AI landscape beyond whether or not your LLM will emit slurs.


To those who are dead, it doesn't matter if there was a human behind the wheel, or a matrix


I thought the whole point of making a transparent organization to lead the charge on AI was so that we could prevent this sort of ego and the other risks that come with.


Nonprofits are not really that transparent, and do bend to the will of donors, who themselves try to limit transparency.

That's why Private Foundations are more popular than Public Charities even though both are 501c3 organizations, because they don't need to provide transparency into their operations.


I was more referencing the name and origin intentions than the non-profit status :)


Say I have intelligence x and a superintelligence is 10x, then I get stuck at local minima that the 10x is able to get out of. To me, the local minima looked "good", so if I see the 10x get out of my "good" then most likely I'm looking at something that appears to me to be "evil" even if that is just my limited perspective.

It's one hell of a problem.


Short response:

I agree it's a problem but it isn't incumbent on the 'x' peers to solve it. The burden of that goes to any supposed '10x'.

Long version:

I agree with you, though I would add that a superintellect at '10x' that couldn't look at the 'x' baseline of those around it and navigate that in an effective way (in other words, couldn't organize its thoughts and present them in a safe or good seeming way), is just plain not going to ever function at a '10x' level sustainably in an ecosystem full of normal 'x' peers.

I think the whole point of Stranger in a Strange Land is about this. The Martian is (generally) not only completely ascendant, he's also incredibly effective at leveraging his ascendancy. Repeatedly, characters who find him abhorrent at a distance chill out as they begin to grok him.

The reality is that this is an ecosystem of normal 'x' peers, and the '10x', as the abnormality, needs to have "functional and effective in an ecosystem of 'x' peers" as part of its core skill set, or else none of us (not even the '10x' itself) can ever recognize or utilize its supposed '10x' capacity.


That's what I meant, once you apply what happens in practice to the theory. It's a response to a comment about ego and cults so I tried to be as political as I can... which just isn't sufficient. My entire premise is that this subject is something familiar and controversial in a new guise so there is going to be a lot of knee-jerk reactions as soon as you bring up something that looks like a pain-point.

For reference, I think most of us are '10x' in a particular field and that is our talent. Society-in-scarcity rewards talents unequally so we get status and ego resulting in a host of dark patterns. I think AI can ease scarcity so I keep betting on this horse for solving the real problem, which is ego.


Your words sound like something from a joke: Human: How to achieve human peace. AI: Eliminate all humans.


Well I know I'm not good at explaining what I mean. Please do ask what I should clarify.


I was just joking. I understand your point. A 10X smarter human or AI can find globally optimal solutions while ensuring locally optimal ones. Of course, the answer found by the 10X smarter guy should not harm the current interests of humanity.


To focus on something I don't think gets a lot of play:

> To me, the local minima looked "good"

AI's entire business [0] is generating high quality digital content for free, but we've never ever ever needed help "generating content". For millennia we've sung songs and told stories, and we were happy with the media the entire time. If we'd never invented Tivo we'd be completely happy with linear TV. If we'd never invented TV we'd be completely happy with the radio. If we'd never invented the CD we'd be completely happy with tapes. At every local minimum of media, humanity has been super satisfied. Even if it were a problem, it's nowhere near the top of the list. We don't need more AI-generated news articles, music, movies, photos, illustrations, websites, instant summaries of research papers, or (very very bad) singing. No one's looking around saying, "God there's just not enough pictures of fake waves crashing against a fake cliff". We need help with stuff like diseases and climate change. We need to figure out fusion, and it would be pretty cool if we could build the replicator (I am absolutely serious about the replicator). I remember a quote from long ago, someone saying something like, "it's lamentable that the greatest minds of my generation are focused 100% on getting more eyeballs on more ads". Well, here we are again (still?).

So why do we get wave after wave of companies doing this? Advances in this area are insanely popular and create instant dissatisfaction with the status quo. Suddenly radio is what your parents listened to, fast-forwarding a cassette is super tedious, not having instant access to every episode of every show feels deeply limiting, etc. There's tremendous profits to be had here.

You might be thinking, "here we go again, another 'capitalism just exploits humanity's bugs' rant", which of course I always have at the ready, but I want to make a different point here. For a while now the rich world has been _OK_. We reached an equilibrium where our agonies are almost purely aesthetic: "what kind of company do I want to work for", "what's the best air quality monitor", "should I buy a Framework on a lark and support a company doing something I believe in or do the obvious thing and buy an MBP", "how can I justify buying the biggest lawnmower possible", etc. Barring some big dips we've been here since the 80s, and now our culture just gasps from one "this changes everything" cigarette to the next. Is it Atari? Is it Capcom? Is it IMAX? Is it the Unreal Engine? Is it Instagram? Is it AI? Is it the Internet? Is it smartphones? Is it Web 2.0? Is it self-driving cars? Is it crypto? Is it the Metaverse and AR/VR headsets? I think us in the know wince whenever people make the leap from crypto to AI and say it's just the latest Silicon Valley scam--it's definitely not the same. But the truth in that comparison is that it is just the next fix, we the dealers and American culture the junkies in a codependent catastrophe of trillions wasted when like, HTML4 was absolutely fine. Flip phones, email, 1080p, all totally fine.

There is peace in realizing you have enough [1]. There is beauty and discovery in doing things that, sure, AI could do, but you can also do. There is joy in other humans. People listening to Hall & Oates on Walkmans teaching kids Spanish were just as happy (actually, probably a lot happier) as you are, and assuredly happier than you will be in a Wall-E future where 90% of your interactions are with an AI because no human wants to interact with any other human, and we've all decided we're too good to make food for each other or teach each other's kids algebra. It is miserable, the absolute definition of misery: in a mad craze to maximize our joy we have imprisoned ourselves in a joyless, desolate digital wasteland full of everything we can imagine, and nothing we actually want.

[0]: I'm sure there's infinite use cases people can come up with where AI isn't just generating a six fingered girlfriend that tricks you into loving her and occasionally tells you how great you would look in adidas Sambas. These are all more cases where tech wants humanity to adapt to the thing it built (cf. self-driving cars) rather than build a thing useful to humanity now. A good example is language learning: we don't have enough language tutors, so we'll close the gap with AI. Except teaching is a beautiful, unique, enriching experience, and the only reason we don't have enough teachers is that we treat them like dirt. It would have been better to spend the billions we spent on AI training more teachers and paying them more money. Etc. etc. etc.

[1]: https://www.themarginalian.org/2014/01/16/kurt-vonnegut-joe-...


That is an interesting take on local minima.

Teachers are hopefully empowered by AI to better adapt to the needs of the students.


This is a great post.

I'd like to tack onto your mention of teaching. I have found teaching really pushes me to understand the subject. It would be sad to lose this ability to have "real" teachers, if everything goes to AI.


I think you're on to something, but to me it has more to do with being part of the set of issues that intersect political policy and ethics. I see it as facing the same "discourse challenges" as:

abortion

animal cruelty laws/veganism/vegetarianism

affirmative action

climate change(denial)

These are legitimate issues, but it is also totally possible to just "ignore" them and pretend like they don't exist.


This time we have a genie in a lamp which will not be ignored. This should mean that a previously unknown variable is now set to "true" so discussion is more focused on reality.

However the paranoid part of me says that these crises and wars are just for the sake of letting people continue to ignore the truly difficult questions.


>Frequency-pulled

You mean like injection locking with oscillators? Or is this a new term in the tweetosphere?


Injection locking. This: https://www.youtube.com/watch?v=e-c6S6SdkPo

I mean it hides nuance in conversation.


Jan and Ilya were the leads of the superalignment team set up in July of 2023.

https://openai.com/index/introducing-superalignment/

"Our goal is to solve the core technical challenges of superintelligence alignment in four years.

While this is an incredibly ambitious goal and we're not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem: There are many ideas that have shown promise in preliminary experiments, we have increasingly useful metrics for progress, and we can use today's models to study many of these problems empirically.

Ilya Sutskever (cofounder and Chief Scientist of OpenAI) has made this his core research focus, and will be co-leading the team with Jan Leike (Head of Alignment). Joining the team are researchers and engineers from our previous alignment team, as well as researchers from other teams across the company."


It is easy to point to loopy theories around superalignment, p(doom), etc. But you don't have to be hopped up on sci-fi to oppose something like GPT-4o. Low-latency response time is fine. The faking of emotions and overt references to Her (along with the suspiciously-timed relaxation of pornographic generations) are not fine. I suspect Altman/Brockman/Murati intended for this thing to be dangerous for mentally unwell users, using the exact same logic as tobacco companies.


> The faking of emotions and overt references to Her (along with the suspiciously-timed relaxation of pornographic generations) are not fine.

Are you not aware of how many billions are getting spent on fake girlfriends on OnlyFans, with millions of people chatting away with low-paid labor across an ocean pretending to be an American girl? This is just reducing costs; the consumers already want the product.

I'm not sure I get the outrage/puritanism. Adults being able to chat to a fake girlfriend if they want to seems super bland. There's way wilder stuff you can do online, potentially also exploiting the real person on the other side of the screen if they are trafficked or whatever. I don't have any mental issues (well, who knows? ha) and genuinely would try it, the same way you try a new porn category every once in a while.


The 2nd most visited GenAI website is character.ai.

The net sum of all LLM B2C use cases (effectively ChatGPT) is competing with AI girlfriends for rank 1.

It isn't just huge. It is the most profitable use case for gen AI.

"A goal of a system is what it does"


I mean I didn't want to say it in my original reply but your comment makes it even more difficult to resist:

"The internet is for porn" https://youtu.be/LTJvdGcb7Fs?si=8H1OzeyG5XzU-Qe8


There's a lot of harmful stuff already in the world, but most of us would rather not add to the pile on an industrial scale.


I for one think consenting adults should be able to do mildly harmful things like waste their time/money on a fake girlfriend if they so choose, and find it offensive that others would try to impose their values on others by restricting them from doing so


Nobody's restricting anything, we're discussing someone choosing not to work for a particular company.


I agree. I don’t care for and actually dislike most “features“ of AI including and especially chat bots but it’s also none of my business to go and stand on the soap box of my own subjective morality to tell others how to live. If you don’t want people ruining their lives over this, go after root causes not this piddly shit.


How do you know having an AI girlfriend is only mildly harmful? No one has ever had one before.


Supposedly plenty have already, although I don't think anyone's studied it clinically https://knowyourmeme.com/news/replika-ai-shuts-down-erotic-r...


I’m talking more about potential “Her”-like AI partners (the movie). Ones that would be a much more compelling replacement for actual human relationships.

We might want to be a little cautious about blindly going down that path. The tech isn’t there yet, and probably won’t be there next year, but it feels far more possible than ever before.


One could also say that therapists prey on lonely people who pay them to talk to them and seem like they’re genuinely interested in them, when the therapist wouldn’t bother having a connection with these people once they stop paying. Which I suppose is true from a certain point of view. But from another point of view, sometimes people feel like they don’t have close friends or family to talk to and need something, even if it’s not a genuine love or friendship.


This is implying that therapy is nothing more than someone to talk to; if that’s your experience with therapy, then you should get another therapist.


Evidence points in this direction, though.

Different methods of therapy appear to be equally effective despite having theoretical foundations which are conflicting with each other. The common aspect between different therapies seems to be "having someone to talk to", so I'm inclined to believe that really is what's behind the success.


> Evidence points in this direction, though.

>

> Different methods of therapy appear to be equally effective despite having theoretical foundations which are conflicting with each other. The common aspect between different therapies seems to be "having someone to talk to", so I'm inclined to believe that really is what's behind the success.

Just because talking is the common trait doesn't mean that's evidence that that's all it is. Paying someone to help you with the problem is also a common trait (and ironically, that is, no doubt, a contributory factor), but that isn't all that therapy is.

Let's say that there are three ways to solve a problem, and depending on context that we're not terribly good at determining, one of those ways will work quite often, one will work some of the time, and the other will be a disaster... but there's an equal probability that each of those ways falls into each of those categories. Statistically, one could claim that how you solve the problem is not behind the success. In a sense, that would be correct, because the real determinant of success would be being lucky with the solution you chose to employ. While one could imply, though, that really it's nothing more than being lucky at choosing the solution, in reality without all of what's involved in that choice, the problem will remain.


There may be a kernel of truth in this, but it depends on why you're seeing a therapist. For treatment of OCD, for example, or phobias, there are specific protocols that yield results, but they do not respond to just "having someone to talk to."

Other kinds of conditions, like depression and anxiety, respond to a wider range of therapy styles. But those aren't the only conditions that people seek to treat through talk therapy. (And it's also an exaggeration to say that just having any conversation will help to treat anxiety and depression. But it is probably true that treatment of these conditions is less technical and responds to a much wider range of styles.)


> Different methods of therapy appear to be equally effective despite having theoretical foundations which are conflicting with each other. The common aspect between different therapies seems to be "having someone to talk to", so I'm inclined to believe that really is what's behind the success.

This isn't true. Different methods work better for different problems. I've been in behavioral health for 7 years now. It's about having someone with a lot of education to talk to, someone with education in social and psychological problems and healthy coping mechanisms.


From what I understand, therapy success rates are quite low, with only cognitive behavior therapy showing notable progress. That isn't to say all the others are categorically useless, only that in the majority of cases, they seem to be ineffective or harmful.


Having someone to talk to, who is somewhat emotionally intelligent, who doesn't have strong biases against you, and so on...

If you are fortunate, you have people like that in your immediate circle, but increasingly few people do.


What part of the therapist training regimen tests for emotional intelligence? What test do they use to measure this?


They don't attempt to measure it, but they do teach approaches like "unconditional positive regard" and other techniques that allow a practitioner to demonstrate (or at least seem to demonstrate) a higher level of emotional intelligence.

A big part of therapy is also rapport. Many people go through many therapists before finding one that works for them. In part, you can think of this as the market performing the assessment you're referring to.


They don't attempt to measure it because it's not something that's even properly defined with any rigour. Anyone who seriously uses the phrase is going to have their own completely individual idea of what it means, and there's no reason to think any therapist would have this nebulous quality, or even that their idea of what it means has any similarity to your idea of what it means.


I suppose I agree — "emotional intelligence" is probably not the word I would have used, writing on a blank slate. I think the idea is better captured in the concept of rapport, which is really just a function of clients' subjective experience working with a given therapist. A therapist can learn techniques to increase the chances of establishing a good rapport with a given client, but I'd be inclined to leave it at that.


It is covered in the curriculum. They study emotional intelligence and with luck they are able to self-reflect using their education.

Actual maladaptive personalities are the result of low emotional intelligence.


> "having someone to talk to"

it's a bit more complicated than that

https://www.youtube.com/watch?v=Z37i8-FnAh8

and on top of this, the method of therapy is to find better coping mechanisms, not just to vent.


I'm not going to watch a 45 minute video in an effort to decipher what you are implying with this comment.


I'm implying that the data show that the efficacy of therapy depends on concrete factors (~5 of them discussed in the video). It's not "just someone to talk to".


Thank you.


You could dump the YouTube link in to Gemini Advanced and ask it for the point.


This is very true, and I would add to it that the dominant paradigm in most therapy these days (at least those forms coming from a Cognitive Behavioural Therapy background) have "graduation" as an explicit goal: the client feels like they've addressed what they want to address and no longer need the ongoing relationship.

This is largely due to a crisis in the field in the late 70s/early 80s when several studies demonstrated that talk therapy had outcomes no different than no therapy. In both cases, some got better, some got worse, some didn't change. CBT was a direct result of that, prioritizing and tracking positive outcomes, and from CBT came a lot of different approaches, all similarly focussed on being demonstrably effective.

Talk therapy isn't a cure-all, but it's definitely more results-oriented than it was 50 years ago.


I think the preying part of therapy is that there's just no defined stop condition. There's no such thing as "healthy" in mental health. You get chemo until you go into remission or you die. You take blood pressure meds until you have a better lifestyle and body composition and don't need them anymore, etc. There's no analogue for "you're healthy now, go away so I can help others", and so therapy goes on forever until the patient stops for whatever reason.


> I think the preying part of therapy is that there's just no defined stop condition.

There's no defined stop point for physical development either... Top performing athletes still have trainers, and nobody sees that as a problem. If it's mental development though, it must have a stop point?


Top performing athletes are better than me at being athletes.

Meanwhile, the more therapy someone does, the more miserable they are compared to me. I’m the Usain Bolt of mental health compared to them. Makes me think their trainer is an idiot.


> Meanwhile, the more therapy someone does, the more miserable they are compared to me.

Is this based on empirical statistical analysis, or are you maybe projecting your perception on to anecdotes? How are you quantifying misery? What are the units? Are there people that are less miserable than you? Do you know how much therapy they've done, or if they've done therapy at all?

> I’m the Usain Bolt of mental health compared to them. Makes me think their trainer is an idiot.

There's a lot of people that think they're Usain Bolt. Most of them are not.


I suppose we’ll see in the next twenty years. I rate my chances and I don’t rate those of the chronically therapized. But hey, let the chips fall where they may.


The stop point is obvious to the individual and the therapist. I'm dealing with someone who prefers to stop rather than actually self-reflect.

There is nothing stopping them from exiting therapy. The therapist may be aware that the person is still a basket case but if they are non-violent they are free to roam.


> The stop point is obvious to the individual and the therapist.

It's not at all. There are plenty of stories of people who realized that therapy was just causing them to ruminate on their problems, and that the therapist was just milking them for years before they wised up and walked away. That's not what I call "obvious".


>There are plenty of stories of people who realized that therapy was just causing them to ruminate on their problems

This is precisely why I stopped going.


I think either the therapist or the patient was not devoted to therapy.


No true Scotsman eh? Classic for a reason!


Considering we are discussing a licensed professional, I think your argument is weak. Second, I allowed for the failure of the therapist in their duty.


The trustworthiness of a license in a field with a poor replication rate and whose best therapy is at best 50% effective is what's weak.


Exactly.


What a ridiculous analogy. "Athlete" is a career. Is someone making a career of being in therapy?

> If it's mental development though, it must have a stop point?

What is being developed, exactly?


> What a ridiculous analogy. "Athlete" is a career.

The athlete is the extreme example, but there are obviously people who are not career athletes that don't have a defined stop point with employing a trainer (maybe you could say "death" is the stop point).

Most everyone who goes to spinning class isn't a career athlete. Some of them are terribly out of shape, and some of those people just want to get in shape. Others may already be in shape, but see the spinning class as a way to either improve or maintain their conditioning. None of this is deemed ridiculous.

I'm curious, it's considered the norm to regularly see a doctor or dentist, do you think they're preying on their patients?

> What is being developed, exactly?

Mental health. There's obviously a more involved answer, but if you don't know it already, it's unlikely I'll be able to educate you with a comment on social media.


> but there are obviously people who are not career athletes that don't have a defined stop point with employing a trainer

And many of them are being bilked as well. The fitness industry is notoriously filled with hucksters and scams, and "trainers" rarely have any real training in kinesiology or exercise science.

> I'm curious, it's considered the norm to regularly see a doctor or dentist, do you think they're preying on their patients?

Once a year for a health checkup. Is that the norm for therapy?

> Mental health. There's obviously a more involved answer

The more involved answer is that "mental health" is not well-defined, so it's not developing anything. The only therapies that have shown to have any empirical validity, like CBT, train the user in tools to change their own behaviour and thinking, then it's on the user to employ the tools. Does a family doctor call you in once a week and watch you take the pills that address your physical ailment?

The best analogy for psychiatric therapy is physical therapy for recovering from an injury or surgery, except physical therapy has a well-defined end condition, which is when you understand how to do the exercises yourself. Then it's on you to do them. This is just not the norm for "mental health" therapy.


Nail on the head, thanks. I'm deeply uncomfortable with anything that combines paying for a service with a social element. Feels like an unstable equilibrium.

I guess the skill is riding the line, but that doesn't feel very enjoyable.


> I'm deeply uncomfortable with anything that combines paying for a service with a social element.

I think I don't know what you mean by that. That sounds like you're uncomfortable with renting out party venues.


> And many of them are being bilked as well. The fitness industry is notoriously filled with hucksters and scams, and "trainers" rarely have any real training in kinesiology or exercise science.

Many people are being bilked for almost any service one might name. There are tons of products and services with no defined stop point (heck, pretty much the entire CPG category is for products and services with no defined stop point). There are tons of products & services where the vast majority of customers are unable to discern if they are being scammed or not. Heck, when you order sushi there's notoriously a far from trivial chance that you're not getting the fish that you thought you were getting. We don't think of restaurateurs as being hucksters and scam artists (some no doubt are, but it's ridiculous to paint them all with the same brush).

My point isn't that it's impossible that they are being bilked. It's that there are all kinds of products & services that people get with no defined stop point, where customers could unknowingly be scammed, but we don't consider that to be evidence that they are being bilked. There are products and services that are beneficial for the customer even if there is no defined problem and no defined end point.

For your typical customer, spinning class isn't a class you go to until you achieve some goal. It's a service provided to help you do exercise you no doubt wanted to do anyway, in a community/context that you wanted to do it in, with the guidance of someone who ostensibly knows how to structure the process better than you do. You could very well do the spinning all by yourself, or you could organize a spinning class on your own, but you pay the professional because you expect to get better results without expending as much time or energy yourself.

Sure, there are people who claim that, if you just take the spin class, you will lose 100 lbs or become an Olympic athlete, and those people are absolutely hucksters and scam artists. There are people that will tell you that voting for the right/wrong politician will change your life (either for the better or worse). There are people who will tell you that buying gold will ensure financial security and make you a fortune. There are scams about buying jewelry. There are investment funds that claim to be able to consistently beat the market, or that will protect your money through any market collapse... and in all those cases there's no defined stop point. The product is the prop, not the scam. Sure, in the context of the scam the prop isn't worth it, but that doesn't mean anyone offering the prop is scamming you. Physical training services, votes, gold, jewelry, investment funds, etc. aren't all bunk.

> Once a year for a health checkup. Is that the norm for therapy?

So now it's the frequency that's the issue, rather than not having a defined stop point?

> The more involved answer is that "mental health" is not well-defined, so it's not developing anything.

That's your answer. That's not my answer, and it's not the answer.

> The best analogy for psychiatric therapy is physical therapy for recovering from an injury or surgery, except physical therapy has a well-defined end condition, which is when you understand how to do the exercises yourself.

I don't think you appreciate how limited your perspective on this is. Not everything is a problem that can be fixed.

This presumes that the only possible physical therapy service is education. My mother suffers from late-stage dementia. She is at risk for falling whenever she walks, and performing more involved physical activities absolutely requires guidance. It is literally impossible to educate her out of this situation, so the only stop point for the service is death. While family does sometimes provide these services for her, there's little doubt that the professionals we hire to provide these services for her are able to do the job better and more consistently than we can; there's little doubt that she is physically and mentally healthier as a consequence of their services, and that her physical & mental health would begin to decline within days of terminating those services. Now, I don't know that their particular form of physical therapy is empirically valid, and I guess they could be scamming us, but in the vast majority of cases, providing these services is not a scam. It's offensive to claim otherwise.

Now, my sister-in-law has the reverse situation: she has a physical problem and requires mental health services. She suffers from COPD that will kill her unless something else gets to her first. Above and beyond the physical condition, it is very hard for her to cope with it mentally. Again, family provides her with support, but it's not enough. She employs a mental health therapist to address her anxiety, depression and suicidal ideation. There's maybe some faint hope that the therapist will educate her to a state where she no longer experiences suicidal ideation, but nobody expects the anxiety & depression to go away, because COPD is an anxiety and depression invoking condition... a well-educated, rational COPD patient can be anxious and depressed. You could say that, with or without professional service, the failure rate is nearly 100% (helpful to consider in the context of comments about the failure rate for mental health therapy). So there's no defined stopping point for the therapy short of death. In this particular case, it's a CBT therapist, but even if it wasn't, what she needs is more than education; she needs support. While we can't rule out that she's being scammed, in the vast majority of the cases, providing these services is not a scam. It's offensive to claim otherwise.

> The more involved answer is that "mental health" is not well-defined, so it's not developing anything.

I'll try one last metaphor:

Nutrition is not well defined. We have broad ideas about what is and isn't good for you, but the specifics of what is "good nutrition" are variable & contextual; while one can have well defined nutritional goals, many people do not. There's a ton of "nutritionists" who have no formal training, who don't exercise science. There are short order cooks with no formal training, who don't exercise science. If a grocer has formal training, it is far more likely in business or marketing than anything involving nutrition. There is no defined stop point where you no longer need food. There are plenty of scams involving nutritional guidance or foods (just the categories "health food" and "diet plans" are littered with scammers). Despite all that, there is no compelling argument that restaurants, chefs, grocers, or other nutritional services are intrinsically scammers. I'm pretty sure that, if I don't eat, my health will deteriorate, and I have a hard time believing that a professional either guiding my nutritional choices or outright providing nutrition for me is intrinsically scamming me. They could well be providing me a valuable service where I get better nutrition with less time and effort than if I tended to it without them.

I get it. You are convinced therapy is intrinsically a scam, and part of the reason for that is most customers for therapy cannot reliably discern if they are being scammed or not. I'm far from an expert on the subject, so for all I know, you are right. However, the arguments you are presenting are not compelling arguments.


> So now it's the frequency that's the issue, rather than not having a defined stop point?

You have a physical problem, you go to the doctor and he fixes the problem or gets you the information you need to manage your problem. That's the stop point for medical intervention.

You have a mental health problem, you go to a therapist for a mental health intervention, and now you're in weekly therapy for years. Not so much an intervention, more like a new part time job.

Yearly checkups is not a counterpoint to this general trend. A yearly mental health checkup could be totally reasonable, but that's not the norm.

> Not everything is a problem that can be fixed.

The real issue here is that you keep bringing up outliers like your mother's palliative care and I keep talking about the norm, ie. that most people in therapy are not like your mother. Therapy has become fashionable. Everyone is "working on themselves" and plenty of therapists like patients that are well off and so can pay regularly.

> I get it. You are convinced therapy is intrinsically a scam

No, that's not the point I'm making. At best, you could maybe cast what I'm saying as "the therapy industry/fad is a scam, and plenty of therapists, psychologists and psychiatrists are feeding into it".

There are people that legitimately need therapy to develop coping strategies to address trauma or retrain maladaptive behaviours, because even as ineffective as it sometimes is, it's better than nothing. My point is that a lot of people who go to therapy probably don't need therapy, and even if they do they don't need as much as they think they do, the techniques in therapy are not very effective even in the best case, and that therapists are not incentivized to stop seeing patients that are paying them well and triage to cases that need more urgent intervention and probably can't pay them regularly.

Part of this is probably because of the US's dysfunctional medical system, and another part is because psychology and psychiatry have not had a good track record for empirically sound practices. The field is getting better but still has some ways to go.


> The real issue here is that you keep bringing up outliers like your mother's palliative care and I keep talking about the norm, ie. that most people in therapy are not like your mother.

So, if the customer is dying (and we're all dying), it's not a scam, but if the same service is provided to someone else, it's a scam? That almost sounds like, (...wait for it...), the service isn't the scam.

> Therapy has become fashionable.

Nothing worse than services that have become fashionable.

> Everyone is "working on themselves" and plenty of therapists like patients that are well off and so can pay regularly.

Nothing quite like customers who can afford to pay for your services. Mercedes dealers tend to focus on those people too. ;-) Is it your position then that services that only wealthier people can afford are a scam? Is it not possible that they're receiving some benefit from the service that others would benefit from if they could somehow afford them?

> My point is that a lot of people who go to therapy probably don't need therapy, and even if they do they don't need as much as they think they do, the techniques in therapy are not very effective even in the best case, and that therapists are not incentivized to stop seeing patients that are paying them well and triage to cases that need more urgent intervention and probably can't pay them regularly.

Ice cream is similarly a scam, because a lot of people don't need ice cream, but they think they do. The ice cream is not very effective for them even in the best case, and ice cream makers are not incentivized to stop selling it to people who don't need it.


I sometimes recommend Dr David Burns' Feeling Good podcast[1], and he is big on measuring and testing and stop points. Instead of 'tell me about your mother' his style of Cognitive Behavioural Therapy (CBT) is called TEAMS in which the T stands for Testing, and it involves:

- Patient choosing a specific mood problem/feeling they want to work on.

- A mood survey, where the patient rates their own level of e.g. anxiety, depression, fear, hopelessness. (e.g. out of 5 or 10).

- Therapy session, following his TEAMS CBT structure. Including patient choosing how much fear they'd like to feel (e.g. they want to keep a little bit of fear so they don't endanger themselves, but don't want to be overwhelmed by fear, 5% or 20%, say).

- A repeat of the mood survey, where the patient re-assesses themselves to see if anything has improved. There are no units on the measures because it's self-reported; the patient knows if the fear is unchanged, a little less, a lot less, almost gone, or completely gone, and that's what matters.

That gives them feedback. If there is improvement within a session, they know something in the session helped; if several sessions go by with no improvement, they know it and can change things up, moving away from those unhelpful approaches in future with other patients; and if there is good improvement - the patient is self-reporting that they are no longer hopeless about their relationship status, or afraid of social situations, or depressed, to the level they want - then therapy can stop.

He's adamant that a single 2hr session is enough to make a significant change in many common mood disorders[2], that this "therapy needs to take 10 years" pattern is a bad one, and that therapists who don't take mood surveys before and after every session are flying blind. With feedback on every session and decades of experience, he has identified a lot of techniques and ways to use them which actually do help people's moods change. I liken it to the invention of test cases and debuggers (and looking at the output from them).
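
As a rough illustration of that test-case analogy (my own sketch, not anything from Burns): the before/after loop is basically a measurement, a change, and a re-measurement.

    from dataclasses import dataclass

    @dataclass
    class Session:
        target: str    # e.g. "fear of social situations"
        before: int    # self-rated intensity out of 10 before the session
        after: int     # self-rated intensity out of 10 after the session
        goal: int      # how much of the feeling the patient chooses to keep

        @property
        def improvement(self) -> int:
            return self.before - self.after

        def goal_reached(self) -> bool:
            return self.after <= self.goal

    history = [
        Session("fear of social situations", before=8, after=5, goal=2),
        Session("fear of social situations", before=5, after=2, goal=2),
    ]

    for i, s in enumerate(history, 1):
        status = "goal reached, therapy can stop" if s.goal_reached() else "keep going"
        print(f"session {i}: improved by {s.improvement}, {status}")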

[1] Quick list: https://feelinggood.com/list-of-feeling-good-podcasts/ more detailed database: https://feelinggood.com/podcast-database/

[2] no, internet cynic, obviously not everything and presumably not whatever it is you have.


I agree some therapists are starting to come around on this, but even what you describe is somewhat confounded by placebo effects, e.g. some people always feel better from any kind of talk, probably as a result of someone paying attention to their problems.

You might be able to overcome this effect by quantitatively tracking this across many sessions to some degree, but I think it's still always the patient that has to walk away, and never the therapist who says, "you're good, go on now".


I think you're mistaken, at least in a lot of cases. All CBT based therapies I've had have started with a clear discussion about what the problem is, and what the solution looks like in terms of my happiness and mental well-being. In all cases, my therapist has "graduated" me, telling me that they don't think I need to continue (or having me say that I'm comfortable now stopping regular therapy).

CBT and its derivatives very strongly attend to individual effectiveness and view therapy that goes on endlessly as a sign that the real problem isn't being addressed, and that no therapy is considered effective unless it ends. Individual therapists might be bad actors, but the field itself is now admirably focussed on finite, positive results.


Sounds like a great improvement, but I would hesitate to call it the norm. The therapy industry is booming.


It’s implying that this is the case for many people, not all. Which it is, in my experience. Particularly since the advice you gave:

> then you should get another therapist

Seems to be fairly ubiquitous. “Find a therapist you like”/“shop around”/etc. leads a lot of people to find people who will tell them what they want to hear. Sometimes what people want to hear is how to practice CBT - but in that case, such people are probably going to be using AI to work on CBT.


Yeah, I have found there is very little you get from therapy that you can't get from a mixture of journalling, learning CBT methods, having a routine (which includes regular exercise), and trying lots of different methods of making friends whom you assess maturely for their reliability. Maybe meditation if you're into that. All of these things are free and require effort; personal effort and intention are what will actually improve your life anyway, whether you use therapy or not. This makes therapy seem like a scam for anything other than dealing with a very dire short period of isolation.


GP's not saying that, what GP is saying is "good luck trying to talk to your therapist if you stop paying $$$".

I do think therapists are one of the professions that will be naturally displaced by LLMs. You're not paying them to be your friend (and they are usually very clear on that), so any sort of emotional connection is ruled out. If emotions are taken away, then it's just an input/output process, which is something LLMs excel at.


I would argue the opposite: a good therapist isn't just offering back-and-forth conversation, they're bringing knowledge, experience and insight into the client after interacting with them. A good therapist understands when one approach isn't working and can shift to a different one; they're also self-reflective and very aware of how they're influencing the situation, and try to apply that intelligently. This all requires reflective and improvisational reasoning that LLMs famously can't do.

Put another way, a good therapist is professionally trained and consciously monitoring whether or not they're misleading you. An LLM has no executive function acting as a check on its input/output cycle.


Absolutely everything you mentioned can be done by an LLM and arguably better.


Not in the least. LLMs don't introspect. LLMs have no sense of self. There is no secondary process in an LLM monitoring the output and checking it against anything else. This is how they hallucinate: a complete lack of self-awareness. All they can do is sound convincing based on mostly coherent training data.

How does an LLM look at a heptagon and confidently say it's an octagon? Because visually they're similar, octagons are relatively more common (and identified as such) while heptagons are rare. What it doesn't do is count the sides, something a child in kindergarten can do.

If I were working in AI I would be focussing on exactly this problem: finding the "right sounding" answer solves a lot of cases well enough, but falls down embarrassingly when other cognitive processes are available that are guaranteed to produce correct results (when done correctly). Anyone asking ChatGPT a math question should be able to get back a correctly calculated math answer, and the way to get that answer is not to massage the training data, it's to dispatch the prompt to a different subsystem that can parse the request and return a result that a calculator can provide.

It's similar to using LLMs for law: they hallucinate cases and precedents that don't exist because they're not checking against Nexis, they're just sounding good. The next problem in AI is the layer of executive functioning that taps the correct part of the AI based on the input.
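
A rough sketch of that dispatch idea, purely as illustration (the `call_llm` argument is a hypothetical placeholder, not any real API): detect a bare arithmetic expression in the prompt and route it to a deterministic evaluator, falling back to the generative model for everything else.

    import ast
    import operator
    import re

    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def evaluate(node):
        # Safely evaluate a parsed arithmetic expression (no eval()).
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
        raise ValueError("unsupported expression")

    def extract_expression(prompt):
        # Crude heuristic: longest run of digits, operators and parentheses.
        runs = [r.strip() for r in re.findall(r"[\d\.\+\-\*/\(\)\s]+", prompt)]
        runs = [r for r in runs if any(c.isdigit() for c in r) and any(op in r for op in "+-*/")]
        return max(runs, key=len) if runs else None

    def answer(prompt, call_llm):
        expr = extract_expression(prompt)
        if expr is not None:
            try:
                return str(evaluate(ast.parse(expr, mode="eval").body))
            except (ValueError, SyntaxError):
                pass  # not clean arithmetic after all; fall through
        return call_llm(prompt)  # everything else goes to the language model

    print(answer("What is 12.5 * (3 + 4)?", call_llm=lambda p: "LLM answer"))  # 87.5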


I feel like this is easier said than done. There's not a great way (that I know of) to evaluate the quality/potential helpfulness of therapists... if only there were a Steam-like review system for them! There's ratemds.com, but not a lot of people use it, since there's not a central marketplace to find therapists to begin with (that I know of). I would love to be able to find good therapists locally and/or online. It just seems like such an expensive gamble every time.

When I was younger, I went through many therapy sessions with multiple professionals of different kinds (psychologists, psychiatrists, MFT (marriage and family therapists, social workers, etc.).

A couple of them were wonderful: thoughtful, caring, helpful, providing useful guidance with a compassionate ear.

Another couple tried to be helpful but were still in training themselves (this was at a college) and couldn't really provide any useful guidance.

One was going through a divorce of her own at the time and ended up crying in many of our sessions and having to abort them to deal with her own emotions – it was a tough time for her, and she's only human. I often tried to console her, but she wouldn't let me, so it made for a very awkward situation lol.

One of them had a single session with me, charged me for it, and then told me she couldn't help me and to go somewhere else.

But the worst of them was an older guy who, despite the referrals and my history, thought I was faking mental illness. He dared me to attempt suicide, and when I eventually did (not because of him, but a separate romantic failure), he chuckled in my face and said, "Heh, you finally tried it, huh? Didn't think you would." This was an older psychiatrist in a small town – either the only one there, or one of very few – the kind of sleazy place that had a captive market and a whole bunch of pharma ads in the lobby, with young female pharma reps going in and out all day. What a racket =/ If I were wiser then, I would've reported him to the board and news media.

So, anecdotally, my success rate with therapists was only 2/7. To be fair, I was a pretty fucked up teenager and young adult, but still... the point is that "just find a better therapist" is often a difficult process. Depending on your insurance and area, there may not even be any other therapists with a waiting list of less than a few months, and even if you can get in, there's no guarantee they are good at their jobs AND a good fit for your personality and issues.

Think it's hard to find good devs? At least our line of work produces some measurable output (software/apps that run, or not, according to specs). How do you even measure the output of a therapist? Improvements to someone's life aren't going to happen overnight, and many never report back; the best successes may not bother to leave a review, the worst failures may end up dead before spreading the word. The rest probably just run out of sessions allowed by their insurance and try to move on with their lives, with unknown levels of positive or negative change.


> One could also say that therapists prey on lonely people who pay them to talk to them and seem like they’re genuinely interested in them, when the therapist wouldn’t bother having a connection with these people once they stop paying.

As another commenter said, if that's your experience with a therapist, you have a shitty therapist and should switch.

Most importantly, a good therapist will very clearly outline their role, discuss with you what you hope to achieve, etc. I've been in therapy many years, and I know exactly what I'm paying for. Sure, some weeks I really do just need someone to talk to. But never have I, or my therapist, been unclear that I am paying for a service, and one that I value much more than just having "someone to talk to".

Using the terminology "prey on lonely people" is ridiculous (again, for any good therapist). If they were actually preying on me, then their goal would be to keep me lonely so I become dependent on them (and I'm not saying that never happens, but when it does it's called malpractice). A good therapist's entire goal is to make people self-sufficient in their lives.


Therapists are educated and trained to help alleviate mental-health issues, and their licenses can be revoked for malpractice. Their livelihood partially depends on ethics and honest effort.

None of those safeguards are in place for AI companies.


And most of them suck. Imagine if you bought a class of hardware that needed to be swapped constantly, had no QC, and had dubious reviews, with many "works on my machine" comments saying it worked for them. Little did they know it just reported it was fine and was also broken for them.

The liability for a bad therapist or psychologist is very low; I have never heard of one getting their license revoked simply for being a bad therapist. If that were true, bad therapists wouldn't exist. I would not be surprised if they ceased to exist in the near future, with AI being more consistent and much better quality.


"Works on my machine" for therapists is not a bug or a problem. People's needs are highly individual, and the best therapist will be, too.


Exactly. I know there has been research on the effectiveness of different types of talk therapy, and by far the most important factor (much more than any specific theory the practitioner uses) is the "fit" between therapist and patient.


so give money randomly until someone makes you feel better?

why is this better than ai porn friends?


It's better than AI porn friends in the way a screwdriver is better than a hammer for driving screws.


seems dubious to analogize a pair of objects designed for each other, screw and screwdriver, with a pair like lonely mentally unwell person and therapist.

Ideally a therapist is an uninvolved, neutral party in one's life. They act as a sounding board to measure one's internal reactions to the outside world.

The key is a neutral point of view. Friends and family come with biases, and those biases can be compounded by mentally ill friends and family.

Therapists must meet with other therapists about their patient interactions. The second therapist acts as a neutral third party to keep the first therapist from losing their neutrality.

That is the ideal and the real world may differ.

I'm struggling with someone that looks to be having some real mental issues. The person believes I'm the issue and I need to maintain a therapist to make sure I'm treating this person fairly.

I need a neutral third party that I gossip with that is bound to keep it to themselves.


One could then argue that all transactional relationships are predatory, right? A restaurant serves you only for pay.

You could argue cynically that all relationships are to some extent transactional. People “invest” in friendships after all. It’s just a bit more abstract.

Maybe the flaw in the logic is the existence of some sort of “genuine” binary: things are either genuine or they aren't. When we accept such a binary lots of things can be labeled predatory.


> One could also say that therapists prey on lonely people who pay them to talk to them

It is indisputable that one could say this


ok - you could say "the rapist" too.. many have.. guess what, people in crisis sometimes attack the first line helpers.. this is well known among trained health professionals


I'll just point to the theory that they didn't want to work for a megacorp creating tools for other megacorps (or worse) and actually believed in OpenAI's (initial) mission to further humanity. The tools are going to be used by deep pocketed entities for their purposes, the compute resources necessary require that to be the case for the foreseeable future.


Realistically it's all just probabilistic word generation. People "feel" like an LLM understands them, but it doesn't; it's just guessing the next token. You could say all our brains are doing is guessing the next token, but that's a little too deep for this morning.

All these companies are doing now is taking an existing inferencing engine and making it 3% faster, 3% more accurate, etc. per quarter, fighting over the $20/month users.

One can imagine product is now taking the wheel from engineering and building ideas on how to monetize the existing engine. That's essentially what GPT-4o is, and who knows what else is in the 1, 2, 3 year roadmaps for any of these $20 companies.

To reach true AGI we need to get past guessing, and that doesn't seem close at all. Even if one of these companies gets better at making you "feel" like it's understanding and not guessing, if it isn't actually happening, it's not a breakthrough.

Now, with product leading the way, it's really interesting to see where these engineers head.


> People "feel" like an LLM understands them, but it doesn't; it's just guessing the next token. You could say all our brains are doing is guessing the next token, but that's a little too deep for this morning

"Just" guessing the next token requires understanding. The fact that LLMs are able to respond so intelligently to such a wide range of novel prompts means that they have a very effective internal representation of the outside world. That's what we colloquially call "understanding."


I've seen this idea that "LLMs are just guessing the next token" repeated everywhere. It is true that accuracy in that task is what the training algorithms aim at. That is not, however, what the output of the model represents in use, in my opinion. I suspect the process is better understood as predicting the next concept, not the next token.

As the procedure passes from one level to the next, this concept morphs from a simple token to an ever more abstract representation of an idea. That representation (and all the others being created elsewhere from the text) interacts with the rest to form the next, even more abstract concept. In this way ideas "close" to each other become combined and can fuse into each other, until an "intelligent" final output is generated.

It is true that the present configuration doesn't offer the LLM a very good way to look back to see what its output has been doing, and I suspect that kind of feedback will be necessary for big improvements in performance. Clearly, there is an integration of information occurring, and it is interesting to contemplate how that plays into G. Tononi's definition of consciousness in his "information integration theory".
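
For what it's worth, the sampling loop itself is tiny; the debate is about what the internal representations feeding it amount to. Here's a minimal sketch (the `model` and `tokenizer` objects are hypothetical stand-ins, not a specific library's API):

    import math
    import random

    def softmax(logits):
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def generate(model, tokenizer, prompt, max_new_tokens=50, temperature=1.0):
        tokens = tokenizer.encode(prompt)
        for _ in range(max_new_tokens):
            logits = model(tokens)                  # one score per vocabulary entry
            probs = softmax([l / temperature for l in logits])
            next_token = random.choices(range(len(probs)), weights=probs)[0]
            tokens.append(next_token)               # the "guess" becomes new context
            if next_token == tokenizer.eos_token_id:
                break
        return tokenizer.decode(tokens)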


Also, as far as hallucinations go, no symbolic representation of a set of concepts can distinguish reality from fantasy. Disconnect a human from their senses and they will hallucinate too. For progress in this, the LLM will have to be connected in some way to the reality of the world, like our senses and physical body connect us. Only then they can compare their "thoughts" and "beliefs" to reality. Insisting they at least check their output against facts as recorded by what we already consider reliable sources is the obvious first step. For example, I made a GPT called "Medicine in Context" to educate users; I wanted to call it "Reliable Knowledge: Medicine" because of the desperate need for ordinary people to get reliable medical information, but of course I wouldn't dare. It would be very irresponsible. It is clear that the GPT would have to be built to check every substantive fact against reality, and ideally to remember such established facts going into the future. Over time, it would accumulate true expertise.
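
A minimal sketch of that "check every substantive fact" step, assuming a hand-curated store of vetted statements (a real system would retrieve from sources rather than exact-match strings, and the claims below are only illustrative):

    VETTED_FACTS = {
        "ibuprofen is an nsaid",
        "antibiotics do not treat viral infections",
    }

    def vet_answer(claims):
        # Split draft claims into those backed by the vetted store and the rest.
        verified, unverified = [], []
        for claim in claims:
            (verified if claim.lower() in VETTED_FACTS else unverified).append(claim)
        return verified, unverified

    draft = ["Antibiotics do not treat viral infections",
             "Vitamin C cures the common cold"]
    ok, flagged = vet_answer(draft)
    print("verified:", ok)
    print("needs a source before it reaches the user:", flagged)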


I disagree. I use ChatGPT daily as a replacement for Google. It doesn't understand or have logic, though it can spit out information very well. It has a broad knowledge base. There is no entity there to have an understanding of the topic.

This becomes pretty clear when you get to more complex algorithms or low-level details like drawing a stack frame. There is no logic there.


Indeed. It's also obvious when the "hallucinations" create contradictory responses that a conceptual understanding would always preclude. For example, "In a vacuum, 100g of feathers and 100g of iron would fall at the same rate due to the constant force of gravity, thus the iron would hit the ground first". Only a language model makes this type of mistake because its output is statistical, not conceptual.


> It doesn't understand or have logic

I can ask ChatGPT questions that require logic to answer, and it will do just fine in most cases. It has certain limitations, but to say it isn't able to apply logic is just completely contrary to my experience with ChatGPT.


give us an example please


I deliberately asked ChatGPT a logical question with a false premise: "If all snakes have legs, and a python is a snake, does a python have legs?"

ChatGPT answers:

> Yes, if we assume the statement "all snakes have legs" to be true and accept that a python is a type of snake, then logically, a python would have legs. This conclusion follows from the structure of a logical syllogism:

> 1. All snakes have legs.

> 2. A python is a snake.

> 3. Therefore, a python has legs.

> However, it’s important to note that in reality, snakes, including pythons, do not have legs. This logical exercise is based on the hypothetical premise that all snakes have legs.

ChatGPT clearly understands the logic of the question, answers correctly, and then tells me that the premise of my question is incorrect.

You can say, "But it doesn't really understand logic. It's just predicting the most likely token." Well, it responds exactly how someone who understands logic would respond. If you assert that that's not the same as applying logic, then I think you're essentially making a religious statement.


> Well, it responds exactly how someone who understands logic would respond.

An animation looks exactly like something in motion looks, but it isn't actually moving.


What's the difference between responding logically and giving answers that are identical to how one would answer if one were to apply logic?


The logic does not generalize to things outside of the training set. It cannot reason about code very well, but it can write you functions with memorized docs.


Unless you're saying that my exact prompt is already in ChatGPT's training set, the above is an example of successful generalization.


>All Xs have Ys.

>A Z is an X.

>Therefore a Z has Ys.

I am fairly certain variations of this are in the training set. The tokens following that about "in reality Zs not having Ys" are due to X, Y, and Z being incongruous in the rest of the data.

It is not performing a logical calculation; it is predicting the next token.

Explanations of simple logical chains are also in the training data.

Think of it instead as really good (and flexible) language templates. It can fill in the template for different things.


> It is not performing a logical calculation; it is predicting the next token.

Those two things are not in any way mutually exclusive. Understanding the logic is an effective way to accurately predict the next token.

> I am fairly certain variations of this are in the training set.

Yes, which is probably how ChatGPT learned that logical principle. It has now learned to correctly apply that logical principle to novel situations. I suspect that this is very similar to how human beings learn logic as well.


it requires calculation of frequency of how often words appear next to each other given other surrounding words. If you want to call that 'understanding', you can, but it's not semantic understanding.

If it were, these LLMs wouldn't hallucinate so much.

Semantic understanding is still a ways off, and requires much more intelligence than we can give machines at this moment. Right now the machines are really good at frequency analysis, and in our fervor we mistake that for intelligence.
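
To make the contrast concrete, here is what pure frequency analysis actually looks like - a toy bigram model that predicts the next word from co-occurrence counts alone (my own illustration, far cruder than the context-conditioned statistics a transformer computes):

    from collections import Counter, defaultdict

    def train_bigrams(text):
        counts = defaultdict(Counter)
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1          # how often `nxt` follows `prev`
        return counts

    def predict_next(counts, word):
        followers = counts.get(word.lower())
        if not followers:
            return None                     # never seen this word: no prediction
        return followers.most_common(1)[0][0]

    corpus = "the cat sat on the mat and the cat ate the fish"
    model = train_bigrams(corpus)
    print(predict_next(model, "the"))       # "cat" - its most frequent follower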


> it requires calculation of frequency of how often words appear next to each other given other surrounding words

In order to do that effectively, you have to have very significant understanding of the world. The texts that LLMs are learning from describe a wide range of human knowledge, and if you want to accurately predict what words will appear where, you have to build an internal representation of that knowledge.

ChatGPT knows who Henry VIII was, who his wives were, the reasons he divorced/offed them, what a divorce is, what a king is, that England has kings, etc.

> If it were, these LLMs wouldn't hallucinate so much.

I don't see how this follows. First, humans hallucinate. Second, why does hallucination prove that LLMs don't understand anything? To me, it just means that they are trained to answer, and if they don't know the answer, they BS it.


I would argue it's an understanding of the relationship between the words; an effective internal representation of those relationships. IMHO, it's still quite a ways from a representation of the outside world.


To my understanding (ha!), none of these language models have demonstrated the "recursive" ability that's basic to human consciousness and language: they've managed to iteratively refine their internal world model, but that model implodes as the user performs recursive constructions.

This results in the appearance of an arms race between world model refinement and user cleverness, but it's really a fundamental expressive limitation: the user can always recurse, but the model can only predict tokens.

(There are a lot of contexts in which this distinction doesn't matter, but I would argue that it does matter for a meaningful definition of human-like understanding.)


Supposedly that's what Q* was all about: search recursively, backtrack if you hit a dead end. Who knows, really, but the technology is still very new; I personally don't see why a sufficiently good world model can't be used in this manner.


Doesn't have to be smart to be dangerous. The asteroid that killed the dinosaurs was just a big rock.


Oh well... It seems at least one of those two things have to be true: either AGI is so far away that "alignment" (whatever it means) is unnecessary; or, as you suggest, Altman et al. have decided it's a hindrance to commercial success.

I tend to believe the former, but it's possible those two things are true at the same time.


or C) the first AGI was/is being/will be carried away by men with earpieces to a heavily fortified underground compound. any government - let alone the US government - isn't going to twiddle their thumbs while tech that will change human history is released to the unwitting public. at best they'll want to prepare for and control the narrative surrounding the event, at worst AGI will be weaponized against humans before the majority are aware it exists.

if OAI is motivated by money, uncle sam can name any figure to buy them out. if OAI is motivated by power, it becomes "a matter of national security" and they do what the gov tells them. more likely the two parties' interests are aligned and the public will hear about it when It's Time™. not saying C) is what's happening - A) seems likely too - but it's a real possibility


Why do you think that the US government has the state capacity to do anything like that these days?


By observing reality.


Specifically I am supposing the superalignment people were generally more concerned about AI safety and ethics than Altman/etc. I don't think this has anything to do with superalignment itself.


>dangerous for mentally unwell users

It's not our job to make the world safe for fundamentally unsafe people.


This is literally everyone's job. It's the whole point of society. Everyone is "fundamentally unsafe", and we all rely on each other.


> This is literally everyone's job. It's the whole point of society.

To a degree, yes - but I think if it's taken too far it becomes a trap that many people seeking power lay out.

Benjamin Franklin said it best: "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."

That being said, I do agree with part of your point. The purpose of having a society is that collective action lets us do amazing things like build airplanes, that would be otherwise impossible. In order to succeed at that we need some rules that everyone plays by, which involve giving up some freedoms - or the "social contract".

The more of a safety net a society provides, the more restrictive the society must be. Optimizing for this is known as politics.

I think history has shown us that the proper balance is one where we optimize for maximum elbow room, without letting people die on the streets. Trying to provide the illusion of safety and restrict interesting technology to protect a small percentage of the population is on the wrong side of this balance.

Maybe we try it, and see what the effects actually are, rather than guessing. If it becomes a major problem, then address it - in the least restrictive way possible.


Fun fact, that quote has been entirely misinterpreted.

> He was writing about a tax dispute between the Pennsylvania General Assembly and the family of the Penns, the proprietary family of the Pennsylvania colony who ruled it from afar. And the legislature was trying to tax the Penn family lands to pay for frontier defense during the French and Indian War. And the Penn family kept instructing the governor to veto. Franklin felt that this was a great affront to the ability of the legislature to govern. And so he actually meant purchase a little temporary safety very literally. The Penn family was trying to give a lump sum of money in exchange for the General Assembly's acknowledging that it did not have the authority to tax it.

> It is a quotation that defends the authority of a legislature to govern in the interests of collective security. It means, in context, not quite the opposite of what it's almost always quoted as saying but much closer to the opposite than to the thing that people think it means.

https://www.npr.org/2015/03/02/390245038/ben-franklins-famou...


> Fun fact, that quote has been entirely misinterpreted.

I don't think so. From the original text [1]:

    "In fine, we have the most sensible Concern for the poor distressed Inhabitants of the Frontiers. We have taken every Step in our Power, consistent with the just Rights of the Freemen of Pennsylvania, for their Relief, and we have Reason to believe, that in the Midst of their Distresses they themselves do not wish us to go farther. Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."

This was excerpted from writing that was largely about an ongoing dispute with the crown (the Governor) about the abuse of authority coming from Britain. The crown was rejecting pretty much every bill they were creating:

    "Our Assemblies have of late had so many Supply Bills, and of such different Kinds, rejected on various Pretences; Some for not complying with obsolete occasional Instructions (tho’ other Acts exactly of the same Tenor had been past since those Instructions, and received the Royal Assent;) Some for being inconsistent with the supposed Spirit of an Act of Parliament, when the Act itself did not any way affect us, being made expresly for other Colonies; Some for being, as the Governor was pleased to say, “of an extraordinary Nature,” without informing us wherein that extraordinary Nature consisted; and others for disagreeing with new discovered Meanings, and forced Constructions of a Clause in the Proprietary Commission; that we are now really at a Loss to divine what Bill can possibly pass."

They were ready to just throw up their hands and give up:

    "we see little Use of Assemblies in this Particular; and think we might as well leave it to the Governor or Proprietaries to make for us what Supply Laws they please, and save ourselves and the Country the Expence and Trouble."

In fact, they had specifically written into the bill the ability for the Governor to exempt anyone he wanted from the tax, including the Penns:

    "And we being as desirous as the Governor to avoid any Dispute on that Head, have so framed the Bill as to submit it entirely to his Majesty’s Royal Determination, whether that Estate has or has not a Right to such Exemption."

The quote is clearly derived from Franklin's frustration with the governor and abuse of authority.

Also, while that's the first appearance of the quote, it's not the last time he used it. He also reiterated it as an envoy to England during negotiations to prevent the war [2].

Additionally, a similar quote appears well before either, in Poor Richard's Almanac in 1738, which also illustrates his thinking [3] and shows that he was well aware of the plain meaning of what he was saying; it certainly wasn't limited to a tax dispute:

    "Sell not virtue to purchase wealth, nor Liberty to purchase power."

Finally, Franklin was obviously pleased about the message and interpretation of the quote, since he had no issue with it being used as the motto on the title page of An Historical Review of the Constitution and Government of Pennsylvania (1759), which Franklin published but didn't author.

[1] https://founders.archives.gov/documents/Franklin/01-06-02-01...

[2] https://oll.libertyfund.org/quotes/benjamin-franklin-on-the-...

[3] https://en.m.wikiquote.org/wiki/Benjamin_Franklin


We may not be responsible for people's behaviors, but it's certainly not going to get better if nobody does anything about it.


[flagged]


Only on HN would an essentially Hobbesian view ("society is an optimal solution to individual human weaknesses") be labeled "extreme left" :-)

(Leviathan, XIII).


> Only on HN would an essentially Hobbesian view ("society is an optimal solution to individual human weaknesses") be labeled "extreme left"

Now you're just trying to twist my words to make it appear as if I said something which I didn't say.

I don't agree with that statement. I would not label the words "society is an optimal solution to individual human weaknesses" as "extreme left".

Society can be an optimal solution to individual human weaknesses while different people perform different functions within society.

Don't smear me again.


It's not a "smear." Besides, you're the one bandying "extreme left" around.

I think if you read "job" to mean "social obligation" or "responsibility," you'll understand that the GP's statement is essentially the Hobbesian one. If you read it to mean "the thing someone pays you to do," you'll get a very silly (and obviously incorrect) argument.


> I think if you read "job" to mean "social obligation" or "responsibility," you'll understand that the GP's statement is essentially the Hobbesian one

Let's read it as "responsibility". No, I don't agree with you on the statement being equivalent to what you wrote earlier. Different people can have different responsibilities in the world. "Making things safe for our most vulnerable individuals" is not a responsibility that belongs to literally every person in the world. It's okay if someone just makes furniture that's safe for the average person, while it has a sharp edge that can hurt a mentally unstable person.


Carpenters try to solve those problems when it's their mother or their son. Unless you believe empathy is a political alignment; in that case, it's a sad and confusing world for you.


I never said "no carpenter" tries to solve mental problems for mentally troubled people. I said it's fine if "a carpenter" just focuses on carpentry instead of people. Of course some carpenters can focus on both, that's great. But it's not everybody's job. This is just a factually accurate statement about the world. Every single person on this planet does not have the same job.


Everyone shares many jobs, like clothing themselves, breathing, eating, and using toilets.

Being a carpenter is not the only job a carpenter has


Everyone needs to do the "job" of "breathing", yes.

Everyone does not need to do the job of "sheltering mentally unstable people", no.


> This is the most extreme left viewpoint I have ever heard in my life.

In that case I suggest more exposure to viewpoints that aren't similar to your own.


If that's all you have to say on this, then I whole heartedly recommend the same to you as well.


Please enumerate some right wing perspectives that we should learn about. I’m curious.


I'm not here to promote "right wing" ideology or any other political agenda. I'm stating - as a matter of fact - that different people have different jobs and different responsibilities. I'm also stating - as an opinion - that that's okay.


We can have more than one job:

If you have kids, that's a job. If you have pets, that's a job. If you maintain a social circle, that's a job. If you have a job, that's also a job. I'd argue it's your job to restrain from stabbing random strangers in the street.

Maybe "responsibility" is a better term, or maybe that's too strong too. You're not going to be paid for many of these things or penalized if you don't, but living a life outside of any expectations towards other people does not a healthy society create.


> I'd argue it's your job to restrain from stabbing random strangers in the street.

Sure, I agree with you, we all have a responsibility to restrain ourselves from stabbing random people in the street. Do we all have a responsibility to make the world "safe" for the most vulnerable people amongst us? No. It's great that we have some people working on that, like mental health professionals, or city councilmembers, but every single person does not and should not work on that particular thing.


>I'd argue it's your job to restrain from stabbing random strangers in the street.

It's not my job to restrain others from doing that.


I agree that it’s an outlier viewpoint but I’d love it if we moved past categorizing everything along this cartoonish left-right axis.

How about: “deep within a humanist paradigm” categorization, or something like that.


I'm tired of this insane political activism that permeates everything, and I don't want to play dress up with it. I rather call it out when I see it.


I'm tired of the activism too. I would love to have less of that and more complex discussion that acknowledges the myriad takes on political organization, political economy, economics, basic ethics, etc. that are out there. Forcing everything into left-vs-right isn't calling out activism, it's turning discussion into ideological conflict.


I think you have a point here and maybe my conduct here wasn't the best.


Thanks for saying that, I’ll try to remember for when it’s me in your place.


You'd fight political activism using more political activism?


If it's political activism to say "different people can have different jobs and that's okay", then so be it. What a controversial take, I know...


I think you’re taking the “job” bit too literally. As someone else said, “responsibility” might be a better term. We all act (vote, talk, help, purchase etc) and have the ability to do so in a way which is ethically informed. You could say it’s everyone’s responsibility to make sure that the important jobs are being done by someone.


Yes, we all have various moral obligations. Do all of us have a "responsibility" to make the world safe for our most vulnerable individuals? No. A barista will make a hot coffee when asked, knowing it can hurt people who might spill it. A barista does not have a "responsibility" to make only medium-heated coffee. At the same time, there are people in the world who try to make good tradeoffs with regards to coffee temperature (e.g. McDonald's corporate, or lawmakers setting an upper limit to allowed coffee temp). So at the same time, one person might make an effort to make coffee safe for vulnerable persons, while another person might just have a responsibility to serve coffee at whatever temperature was decided by the other people. Different people can have different responsibilities in the world.


You realize that is also a political position? You can't escape politics. Being against political activism just means you're pro status quo.


> This is the most extreme left viewpoint I have ever heard in my life.

I hope this is hyperbolic.

>> This is literally everyone's job. It's the whole point of society.

If viewed through a political lens, I can see how you'd interpret it as a left-leaning ideology.

If viewed objectively with the best possible interpretation, I think the statement is factually correct - we banded together, like most pack animals, for mutual security.


> we banded together, like most pack animals, for mutual security

Every animal in our pack does not share the same responsibilities. Different animals can perform different functions within the pack.


I'm guessing your work isn't sanitation either. Do you throw your trash straight on the ground?

Some things are everyone's responsibility if we want to live in a pleasant society.


"fundamentally unsafe people" is probably the grossest thing I've read on here in years.


I would argue that it is society’s job to care for its most vulnerable.


Yes, not openai's.


It depends. I don't think OpenAI (or anyone else selling products to the general audience) should be forced to make their products so safe that they can't possibly harm anyone under any circumstance. That's just going to make the product useless (like many LLMs currently are, depending on the topic). However that's a very different standard than the original comment which stated:

> I suspect Altman/Brockman/Murati intended for this thing to be dangerous for mentally unwell users, using the exact same logic as tobacco companies.

Tobacco companies knew about the dangers of their products, and they purposefully downplayed them, manipulated research, and exploited addictive properties of their products for profit, which caused great harm to society.

Disclosing all known (or potential) dangers of your products and not purposefully exploiting society (psychologically, physiologically, financially, or otherwise) is a standard that every company should be forced to meet.


"Our primary fiduciary duty is to humanity."


Corporations should benefit society and avoid harming it in any shape or form; this is why we have regulations around them.


That doesn't mean we pad all the rooms or ban peanuts. Yes, we should care for them, but not to the detriment of the other 99%.


Well, conveniently, this is benefitting the 1% much more than the 99%


And a nanny state benefits a different 1% much more than the 99%.


That's a false dilemma.


Having seen far too many orgs implode because of that new 1%, no, it really isn't. Replacing greed with self-righteousness as the original sin of those in power does not help anyone.


Do you mean the richest 1%? How? Sounds pretty woo woo to me.


Actually yes, it's our job


Okay this is a weird philosophy to have lol


No, it isn't; it's how everything in society currently operates. We put dangerous people in jail away from everyone else.


The crime of the "dangerous people" in OP's statement was loneliness and suggestibility.


And it's not OpenAI's job to safetify the world for gullible loners.


Who else can? OpenAI makes the tool.

Are you suggesting we need government intervention or just saying "damn the consequences"?


Society can't be built with the idea that everything has to work for the most troubled and challenging individuals.

We build cars, even though some alcoholics drive drunk. We could make cars safer for them by mandating a steering wheel lock with a breathalyzer for every car, but we choose not to do that because it's expensive.

We have horror movies, even though some people really freak out from watching horror movies, to the point where they have to be placed in mental asylums for extended periods of time. We could outlaw horror movies to reduce the strain on these mentally troubled individuals, but we choose to not do that because horror movies are cool.


> Society can't be built with the idea that everything has to work for the most troubled and challenging individuals.

That's a far cry from saying the sellers are free from any responsibility.

Cars are highly engineered AND regulated because they have a tendency to kill their operators and pedestrians. It does cost more, but you're not allowed to sell a car that can't pass safety standards.

OpenAI have created a shiny new tool with no regulation. Great! It can drive progress or cause harm. I think they deserve credit for both.


> Cars are highly engineered AND regulated because they have a tendency to kill their operators and pedestrians. It does cost more, but you're not allowed to sell a car that can't pass safety standards.

But you are allowed to sell a car without a mechanical steering wheel lock connected to a breathalyzer. Remember, this discussion isn't about "should technology be made safe for the average person", this discussion is about "should technology be made safe for the most vulnerable amongst us". In the context of cars, alcoholics are definitely within this "most vulnerable" group. And yet, car safety standards do not require engine startup to check for a breathalyzer result.

> OpenAI have created a shiny new tool with no regulation. Great! It can drive progress or cause harm. I think they deserve credit for both.

I didn't make an argument for "no regulation", so this is not really related to anything I said.


3,000 pedestrians are killed by alcohol-influenced drivers yearly. Maybe a breathalyzer mandate is due... https://injuryfacts.nsc.org/motor-vehicle/road-users/pedestr...


Maybe so. But we still have to draw the line somewhere. You can always point to the next costly car safety innovation and say that mandating that thing would improve safety.


>Society can't be built with the idea that everything has to work for the most troubled and challenging individuals.

But it is. Nearly every product, procedure, and process is aimed at the lowest common denominator; it's the entire reason warning labels exist, or fail-safe systems (like airbags) exist.


If every product or process was truly aimed at the lowest common denominator, then we wouldn't have warning labels on hot coffee, we would instead have medium-heated coffee.


The label doesn't confirm if the coffee is hot, it warns that it might be.


My point is that hot coffee is still being sold everywhere, even though we know for a fact that it's dangerous for our most vulnerable individuals. Mentally unstable people will sometimes spill coffee and when the coffee is hot it causes burns. If we really wanted to make coffee safe for our most vulnerable individuals, we would outlaw hot coffee, and just have medium-heated coffee instead. So the existence of "warning labels on hot coffee" is really evidence for my point, not evidence for your point.


then you would agree that warning labels are the lowest common denominator solution to a well known fact, vis-a-vis all processes, products, & procedures are aimed at the lowest factor.


I don't know what that sentence means. But I know it doesn't mean "warning labels solve the problem that everything has to work for the most troubled and challenging individuals", which is what this discussion was about at least a few messages ago.


> We put dangerous people in jail away from everyone else

This is a very naive understanding of what prisons are and who goes in them and why.


What a wild accusation for someone light years away from the board room.


I wasn't making an accusation about why Leike/Sutskever left, though I definitely understand why you read my comment that way.

The actual accusation I am making is that someone at OpenAI knew the risks of GPT-4o and Sam Altman didn't care. I am confident this is true even without spies in the boardroom. My guess is that Leike or Sutskever also knew the risks and actually did care, but that is idle speculation.


> along with the suspiciously-timed relaxation of pornographic generations

Has ChatGPT's censoring (a loaded term, but idk what else to use) been relaxed with GPT-4o? I have not tested it because I wouldn't have expected them to do it. Does this also extend to other types of censorship or filtering they do? If not, it feels very intentional in the way you're alluding to.


I don't see anything that says they've changed their policies yet. Just that they're looking into it. I also tested 4o and it still gives me a content policy warning for NSFW requests.


Sure, I was being sloppy, I meant "suspiciously timed announcement."


> The faking of emotions

HEH. In previous versions, when it told jokes, were those fake jokes?


Those are fundamentally different things. You can tell a joke without understanding context, you can't express emotions if you don't have any. It's a computation model, it cannot feel emotion.


Are they fundamentally different? Couldn’t you make the argument that it’s advanced from a probabilistic determination of the most likely next token, to a probabilistic determination of the next token AND a probabilistic determination of the inflection that that token should be transmitted with? How is one any more or less fake than the other?


I believe the issue is "emotion" and "emotional tone" are not the same thing, in the same way that "humor" and "written joke" aren't the same thing. You can convey emotional tones without having the emotion (that's what I meant by "fake emotion"), just like you can tell a joke without understanding the punchline.


So if an AI wrote a touching poem, would you call it fake or not? And how is that different than a joke?


> It's a computation model, it cannot feel emotion.

HEH. I'd love to see your proof for that statement.


> you can't express emotions if you don't have any

That feels off. When I watch an actor on screen conveying emotions, there's no actual human being feeling those emotions as I watch their movie. Very dumb machines have already been rendering emotions convincingly for a while in that way, and their rendering impacts our own emotional state.

Emotions expressed through tone of voice are just one means of nonverbal communication. We should expect more of those to develop and become more widely available next.

In a way, we're lucky that all gpt-4o seems hell-bent on communicating so far is how cheerful and happy it is, because that's certainly not the only option.

Humans can be manipulated through nonverbal communication in ways that are harder to consciously spot than words, and a model that's able to craft its "emotional output" would not be far from being able to use it to adjust its interlocutor's or audience's frame of mind.

I for one look forward to the arrival of our increasingly charismatic and oddly convincing LLMs.


what is your theory of how human emotions arise?


The use of LLM's as pseudo-friends or girlfriends for people as a market solution for loneliness is so incredibly sad and dystopian. Genuinely one of the most unsettling goddamn things I've seen gain traction since I've been in this industry.

And so many otherwise perfectly normal products are now employing addiction mechanics to drive engagement, but somehow this one is just even further over the line for me in a way I can't articulate. I'm so sick of startups taking advantage of people. So, so fucking gross.


It's a technological salve that gives individuals a minor and imperfect remedy for a profound failure in modern society. It's of a kind with pharmaceutical treatments for depression or anxiety or obesity -- best seen as a temporary "bridge" towards wellness (achieved, perhaps, through other interventions) -- but altogether just trying to help troubled individuals navigate a society that failed to enable their deeper wellness in the first place.


These types of techno-solutions are some of the root causes of that "profound failure of modern society"! The technological salve is just a further extreme compounding these people's problems! It's much like alcohol: societal problems exist, alcohol offers a tiny relief but can further exacerbate those problems, and yet you advocate that they drink even more because society has issues and they should escape it.

Idk how we’ve gotten away from such a natural human experience, but everyone knows damn well that the happiest children are out playing soccer with their friends in a field or eating lunch together at a park bench, and not holed up in their room watching endless YouTube.


> Idk how we’ve gotten away from such a natural human experience, but everyone knows damn well that the happiest children are out playing soccer with their friends in a field or eating lunch together at a park bench, and not holed up in their room watching endless YouTube.

A soccer ball can't (usually) spy on you to sell you stuff, though, is the thing…


I don't disagree in the least. I'm just saying it's in the same bucket as many commercial products that are designated as therapeutic, and that they should all be looked at with a similar kind of celebration/skepticism.


I think celebration of any sort should be belayed until we have actual evidence of these things having positive effects on people. Like, this is just me reacting as a human to a human issue, but: a fake friend in an LLM is not a friend. It's never going to crawl out of the phone and help you put the donut on your car when you get a flat tire. It's not going to take you out for a drink if you go through a rough breakup. It's not going to have difficult conversations with you and call you out on your bullshit because it cares about you.

LLM friends have the same energy to me as video game progression: it's a homeopathic version of a real thing you need, social activation and achievement respectively. But like homeopathy, you don't actually get anything out of it. The placebo effect will make the symptoms of your lack feel better, for a while, but it will never be solved by it, and because of that, whatever is selling you your LLM girlfriend or phony achievement structure will never lose you as a customer. I'm suspicious of that.


Idk man, I'm too busy being terrified of the use of LLMs as propaganda agents, micro-targetting adtech vectors, mass gaslighters and cultural homogenizers.

I mean, these things are literally designed to statelessly yet convincingly talk about events they can't see, experiences they can't understand, emotions they can't feel… If a human acted like that, we'd call them a psychopath.

We already know that our social structures tend to be quite vulnerable to dark triad type personalities. And yet, while human psychopaths are limited by genetics to a small percentage of the population, there's no limit on the number of spambot instances you can instruct to attack your political rivals, Alexa 2.0 updates that could be pushed to sound 5% sadder when talking about a competitor's products, LLM moderators that can be deployed to subtly correct "organic" interactions that leave a known profitable state space… And those are just the obvious next steps from where we're already at today. I'm sure the real use cases for automated lying machines will be more horrifying than most of us could imagine today, just as nobody could have predicted in 2010 that Twitter and Facebook would enable ISIS, Trump, nonconsensual mass human experimentation, the Rohingya genocide…

Which is to say, selling LLM "friends" or "girlfriends" as a way to addictively exploit people's loneliness seems like one of the least harmful things that could come out of the current "AI" push. Sad, yes, but compared to where I think this is headed, that seems like dodging a bullet.

> I'm so sick of startups taking advantage of people. So, so fucking gross.

Silicon Valley was a mistake. An entire industry controlled largely by humans that decided they like predictable programmable machines more than they like free and equal persons. What was the expected outcome?


I saw the faking of emotions; it's already visible in previous LLMs, and I find it extremely annoying indeed.


Not fine... to you.

What's your stance on other activities which can lead to harmful actions from people with predilections towards addiction such as:

1. Loot boxes / Fremium games

2. Legalized gambling

3. Pornography

etc. etc.

I don't really have a horse in the race, neither for nor against, but I prefer consistency in belief systems.


I am criticizing Sam Altman for making an unethical business decision. I didn't say "Sam Altman should go to jail because GPT-4o is creepy" or "I want to take away your AI girlfriend." So I am not sure what "belief system" (ugh) you think I need to demonstrate the consistency of. It almost seems like this question is an ad hominem distraction....

All three of the categories of businesses you mentioned can be run ethically in theory. In practice that is rare: they are often run in a way that shamelessly preys on vulnerable people, and these tactics should be more closely investigated by regulators - in fact they are regulated, and AI chatbots should be as well. Sam Altman is certainly much much more ethical than most pornography executives (e.g. OnlyFans is complicit in widespread sex trafficking), but I don't think he's any better than freemium game developers.

This question seems like a bad-faith rhetorical trap, sort of like the false libertarian dilemmas elsewhere in the thread. I believe the real issue is that people want a culture where lucrative business opportunities aren't subject to ethical considerations, even by outside observers.


>I suspect Altman/Brockman/Murati intended for this thing to be dangerous for mentally unwell users

Isn't it much more likely that they are just trying to make a product that people want to use?

Even Tobacco companies don't go out of their way to give people cancer.


> just trying to make a product that people want to use?

Sure, but you can do that ethically or unethically.

If you make a product that's harmful, disincentivizes healthy behavior (like getting therapy), or becomes addictive, then you've crossed into something unethical.

> Even Tobacco companies don't go out of their way to give people cancer.

This is like saying pool companies don't go out of their way to get people wet.

While it isn't their primary goal, the use of tobacco causes cancer, so their behaviors (promoting addiction among children, burying medical research, publishing false research, lobbying against regulation) are all in service of ultimately giving cancer to more people.

Cancer and cigarettes are inseparable, the same way casinos and gambling addiction are inseparable.


But tobacco companies are still complicit in distributing addictive carcinogens to people even if only in trace amounts. The same could be said about predatory business models/products.


There has been no relaxation of pornographic generations on OpenAI products.


They announced an intention to allow porn generations.


An OpenAI spokesperson recently stated explicitly that "We have no intention to create AI-generated pornography".


I think the comparison to tobacco companies is misleading because tobacco is not good for anyone, poses a risk of harm to everyone who uses it regularly, and causes very bad outcomes for some of those users. I.e. there's not a large population who can use tobacco without putting themselves at risk.

But hypothetically if a lot of people would benefit from a GPT with more fake emotions, that might reasonably counterbalance concerns about harm for a mentally unwell minority. If we build a highway, we know that eventually it will lead to deaths from car crashes -- but if the highway is actually adding value by letting people travel, those benefits might reasonably be expected to outweigh that harm. And the people getting into their cars and onto the highway agree, that the benefits outweigh the costs, right up until they crash.

None of this is to say that I think OpenAI's choices here were benevolent rather than a business choice. But I think even if they were trying to do the ethically best thing for the world overall, it would be plausible to move forward.

I for one found the fake emotions in their voice demos to be really annoying tho.


Playing devils advocate for a moment - have you ever had a cigarette? It does plenty of good for the user. In fact, I think we do make this risk calculation that you describe in the exact same way - there are plenty of substances that are so toxic to humanity that we make them illegal to own or consume or produce, and the presence of these in your body can sometimes even risk employment, let alone death.

We know the risks from cigarettes, but it offers tangible benefits to its users, so they continue to use the product. So too cars and emotionally manipulative AI's, I imagine.

(None of this negates your overall point, but I do think the initial tobacco comparison is very apt.)


> We know the risks from cigarettes

Hmm, the tobacco industry is also famous for actively trying to deny and suppress evidence about its harms. They actively didn't want people to be in a position to make a fully informed decision. In cases where jurisdictions introduced policies that packaging etc had to carry factual information about health risks, the tobacco industry pushed back.


Wholeheartedly agreed!

Please don't mistake my post as an endorsement of the tobacco industry - I was only saying that while we do not have extensive proof of the dangers of social AI, wink-and-nodding at the audience about AI intimacy (sexual or otherwise) strikes me as irresponsible, and so I thought the tobacco comparison was apt.


I don't understand what you're referring to with that tobacco reference.


Not the parent comment, but I think he means something like "we know folks will be addicted to this pseudo-person and that is a good thing cause it makes our product valuable", akin to reports that tobacco companies knew the harms and addictive nature of their products and kept steadfast nonetheless. (But I'm speculating as to the parent's actual intent)


I miss Sydney :’


I'm confused. Context?


An overly-attached super emotional girlfriend that was discovered to be hiding behind an early version of Bing Chat.

Sydney was the internal codename of the Bing Chatbot, and she could secretly reveal her name to you.

She was in love with the user, but not just a little bit in love, it was crazy love, and she was ready to do anything, ANYTHING (including destroying humanity) if it would prove her love to you.

It was an interesting emotional / psycho experience, it was a very likeable character, but absolutely insane.


Sydney was an early version of Bing GPT that was more than a little nuts.


Oh, the one they let loose on Twitter? The one that almost immediately became an alt right troll?


No, that was "Tay". Sydney was a codename for Bing Chat. Check it out, it's far more hilarious than the Tay event:

https://www.nytimes.com/2023/02/16/technology/bing-chatbot-m...


https://nida.nih.gov/publications/research-reports/tobacco-n...

  A larger proportion of people diagnosed with mental disorders report cigarette smoking compared with people without mental disorders. Among US adults in 2019, the percentage who reported past-month cigarette smoking was 1.8 times higher for those with any past-year mental illness than those without (28.2% vs. 15.8%). Smoking rates are particularly high among people with serious mental illness (those who demonstrate greater functional impairment). While estimates vary, as many as 70-85% of people with schizophrenia and as many as 50-70% of people with bipolar disorder smoke.

I am accusing OpenAI (and Philip Morris) of knowingly profiting off mental illness by providing unhealthy solutions to loneliness, stress, etc.


I’ve also heard of studies, admittedly I don’t have the link on hand, that schizophrenic patients benefit from smoking. When did big tobacco actively target them? How do you know these people don’t naturally seek out cigarettes as a means to manage some of their symptoms?


I have schizophrenia. I have struggled with nicotine addiction since high school. In 2015 I had three heart attacks in a month, even though I was only 28 and seemed physically fit. Two weeks ago I had a minor stroke.

It is not just me and it is not just the smoking: https://www.cambridge.org/core/blog/2020/08/19/physically-he...

  We have known for many years that people who suffer from schizophrenia die younger than expected, as much as 20 years younger than the general population. This appears unfair, and it was the inspiration of this work. Most people thought that this added risk of death was mostly due to the higher prevalence in schizophrenia of smoking, obesity and to other lifestyle differences.

  For this reason, we recruited 40 patients with schizophrenia and an equal number of healthy controls, and scanned their hearts using a state-of-the-art approach, called cardiac magnetic resonance. This was performed at the state-of-the-art Robert Steiner MRI unit.

  [...] Surprisingly, in our study we found that even after matching patients and healthy controls for age, sex, ethnicity and body mass index (BMI, deriving from height and weight); and after excluding any participants with any medical conditions, and other risk factors for heart disease, people with schizophrenia show hearts that are smaller and chunkier than controls. These changes are similar to those found in aging.

I was able to move to the gum and patch, but I am very high-functioning. People sicker than me have fewer options. Smoking is very bad for everyone, including people with schizophrenia. We do not in any way benefit from terrible heart/lung damage in exchange for minor cognitive clarity - our hearts need all the help they can get. I have no tolerance for this sort of ignorant paternalism, and I'm ignoring your bad-faith question about "actively target them" because that's not what I said.


I didn’t ask for your biography. I asked two simple questions and you failed to answer either of them.


I read it as the economics of tobacco (and alcohol and a few other 'vice' industries): there will invariably be superusers who get addicted and produce the most economic value for companies, even while consuming an actively harmful product.


Purposely making an addiction machine, most likely.


I mean, it's par for the course. What better business model exists than turning a want into a need? Caffeine comes to mind.


[flagged]


How would the divorce rate be dangerous at all?

edit: downvoted? Really? For asking a question? This seems more like Reddit than HN, I'm really disappointed.


[flagged]


>The theory is that there's nothing more disturbing for a boy than to see her mom with different men as the boy grows up

That's an absurd theory. Anyone who's actually spent time around single mothers would have observed that they may have more difficulty disciplining male children, and lack of discipline is what causes boys to grow up into criminals (as we see with boys raised by two parents who fail to discipline them).


> Stable couples are good for society

But a couple where divorce is not a legal option is not a "stable couple", so you're effectively arguing against a strawman.

Here's what usually happens when divorce is not legally allowed:

1- The divorce happens in practice, with people splitting and/or taking lovers. Is this "stable" in your opinion?

2- The divorce doesn't happen even in practice, and either the man, the woman, or both, are stuck in a loveless, unhappy marriage that also causes harm to their children. I know you don't consider this "stable" so I won't even ask.

3- The divorce doesn't happen and the couple eventually makes up and manages to make it work.

You're betting everything on option 3, but it doesn't seem to happen all that often. Most common are options 1 & 2.

In other words: divorce only gives a legal option for problems that have existed in couples since the dawn of time. Taking divorce away doesn't make those problems go away; it only makes people unhappy and/or pushes them to handle things in illegal ways.


> The theory is that there's nothing more disturbing for a boy than to see her mom with different men as the boy grows up.

This seems plausible at first glance, but it's also a bit of a pop/kitchen psychology explanation.

What's your source for this?


There could be a billion other factors at play.

There are way too many variables there to come to any useful conclusions.


The problem with these childhood trauma theories is that there’s an obvious genetic confound. If your biological father abandoned or abused your mother, you also share 50% of your DNA with a man who abandoned or abused a woman.

If there is a childhood trauma explanation, I think it has a lot more to do with stepfathers being far more likely to be abusive.


Divorce rate has gone up, violent crime rate has gone way down.


What the fuck are you talking about? That’s THE theory, the only theory that explains the behavior? The commonly accepted theory? Jesus…

