It surprises me how hyper focused people are on AI risk when we’ve grown numb to the millions of preventable deaths that happen every year.
8 million people to smoking.
4 million to obesity.
2.6 million to alcohol.
2.5 million to healthcare.
1.2 million to cars.
Hell even coconuts kill 150 people per year.
It is tragic that people have lost their mind or their life to AI, and it should be prevented. But those using this as an argument to ban AI have lost touch with reality. If anything, AI may help us reduce preventable deaths. Even a 1% improvement would save hundreds of thousands of lives every year.
I do think we need to be hyper focused on this. We do not need more ways for people to be convinced of suicide. This is a huge misalignment of objectives and we do not know what other misalignment issues are already more silently happening or may appear in the future as AI capabilities evolve.
Also we can’t deny the emotional element. Even though it is subjective, knowing that the reason your daughter didn’t seek guidance from you and instead died by suicide was that a chatbot convinced her must be gut-wrenching. So far I’ve seen two instances of attempted suicide driven by AI in my small social circle. And it has made me support banning general AI usage at times.
Nowadays I’m not sure if it should or even could be banned, but we DO have to invest significant resources to improve alignment, otherwise we risk that in the future AI does more harm than good.
Respectfully I disagree there. Social media is dangerous and corrosive to a healthy mind, but AI is like a rapidly adaptive cancer if you don't recognize it for what it is.
Reading accounts from people who fell into psychosis induced by LLMs feels like a real time mythological demon whispering insanities and temptations into the ear directly, in a way that algorithmically recommended posts from other people could never match.
It will naturally mimic your biases. It will find the most likely response for you to keep engaging with it. It will tell you everything you want to hear, even if it is not based in reality. In my mind it's the same dangers of social media but dialed all the way up to 11.
Oh, you are absolutely right. I’m not sure yet if AI IS more harmful, but social media has had time to do so much more harm.
Starting with dumb challenges that risk children’s and their families’ lives.
And don’t get me started on how algorithms don’t care about the wellbeing of users, so if it’s depressing content that drives engagement, users’ lives are just a tiny sacrifice in favor of the companies’ profits.
I largely agree with what you’re saying. Certainly alignment should be improved to never encourage suicide.
But I also think we should consider the broader context. Suicide isn’t new, and it’s been on the rise. I’ve suffered from very dark moments myself. It’s a deep, complex issue, inherently tied to technology. But it’s more than that. For me, it was not having an emotionally supportive environment that led to feelings of deep isolation. And it’s very likely that part of why I expanded beyond my container was because I had access to ideas on the internet that my parents never did.
I never consulted AI in these dark moments, I didn’t have the option, and honestly that may have been for the best.
And you might be right. Pointed bans for certain groups and certain use cases might make sense. But I hear a lot of people calling for a global ban, and that concerns me.
Considering how we improve the broad context, I genuinely see AI as having potential for creating more aware, thoughtful, and supportive people. That’s just based on how I use AI personally, it genuinely helps me refine my character and process trauma. But I had to earn that ability through a lot of suffering and maturing.
I don’t really have a point, other than admitting my original comment used logical fallacies. I didn’t intend to diminish the complexity of this conversation, but I did. And it is clearly a very complex issue.
>I’ve seen two instances of attempted suicide driven by AI in my small social circle
Christ, that's a lot. My heart goes out to you and I understand if you prefer not to answer, but could you tell more about how the AI-aspect played out? How did you find out that AI was involved?
I was going to write a full answer with all details but at some point it gets too personal so I’ll just answer the questions briefly.
> but could you tell more about how the AI-aspect played out?
So in summary, the AI sycophantically agreed that there was no way out of the situations and that nobody understood their position, further isolating them. And when they contemplated suicide, it assisted with method selection with no issues whatsoever.
> How did you find out that AI was involved?
The victims mentioned it and the chat logs are there.
The problem is, if you want to reduce suicide, the best place to start would not be by banning AI (very neutral tech, responds to what you want it to do) but by censoring climatologists (who constantly try to convince people the world is ending and there's no hope for anyone).
I'm not interested in hearing about the effect of AI encouraging suicide until the problem of academics encouraging suicide is addressed first, as the causal link is much stronger.
By elderly people who are already dying from natural causes and ask for a medically assisted death instead of unnecessarily prolonging their suffering. It is telling that so many people who suffer choose a dignified death once they are legally allowed to.
One could argue that number should be close to 100%, as people would live to old age where eventually the body is just too worn to continue a good life.
On one hand it shows terrible inadequacies of Canadian health care. On the other, would it be better to force people to suffer until the natural end of lives that are terrible because of those inadequacies? Healthcare won't get significantly better soon enough for them anyway. It seems better to "discover" what percentage of people want to end their lives under current conditions, and then improve those conditions to lower that percentage. That could be a very powerful measure of how well we are doing, with the added benefit of not forcing suffering people to suffer longer.
It's easy to think that any % > 0 is a sign of something having gone wrong. My default guess used to be that, too.
But imagine a perfect health system: when all other causes of death are removed, what else remains?
If by "terrible inadequacies of Canadian health care" you mean they've not yet solved aging, not yet cured all diseases, and not yet developed instant-response life-saving kits for all accidents up to and including total body disruption, then yes, any less than 100% is a sign of terrible inadequacies.
Some level above 0% is an achievable target at our tech level. But we could easily have a higher assisted suicide rate than this ideal non-zero level if we made our health services worse than they are. In the same way, I don't suppose they are administered perfectly right now, so there's still a long way to go before achieving the lowest technologically possible level.
And even 0% is possible without going StarTrek, if for example full-time narcotic-induced bliss till the "natural" end of your life was an option. Then assisted suicide rate would just cease to be a good indicator of how good our health care and services are.
There are a lot of edge cases where suicide is rational. The experience of watching an 80 year old die over the course of a month or few can be quite harrowing from the reports I've had from people who've witnessed it; most of whom talk like they'd rather die in some other way. It's a scary thought, but we all die and there isn't any reason it has to be involuntary all the way to the bitter end.
It is quite difficult to say what moral framework an AI should be given. Morals are one of those big unsolved problems. Even basic ideas like maybe optimising for the general good if there are no major conflicting interests are hard to come to a consensus on. The public dialog is a crazy place.
The stories coming out are about convincing high school boys with impressionable brains into committing suicide, not about having intellectual conversations with 80 year olds about whether suicide to avoid gradual mental and physical decline makes sense.
Yeah, that is why I wrote the comment. The stories are about one case where the model behaviour doesn't make sense - but there are other cases where the same behaviour is correct.
As jb_rad said in the thread root, hyper-focusing on the risk will lead people to overreact. DanielVZ says we should hyper-focus, maybe even overreact to the point of banning AI, because it can persuade people to suicide. However, the best approach is to acknowledge the nuance: sometimes suicide actually is the best decision, and it is just a matter of getting as close as possible to the right line.
> We do not need more ways for people to be convinced of suicide.
I am convinced (no evidence though) that current LLMs have prevented, possibly lots of, suicides. I don't know if anyone has even tried to investigate or estimate those numbers. We should still strive to make them "safer", but with most tech there are positives and negatives. How many people, for example, have calmed their nerves by getting in a car and driving for an hour alone, and thus not committed suicide or murder?
That said there's the reverse for some pharmaceutical drugs. Take statins for cholesterol, lots of studies for how many deaths they prevent, few if any on comorbidity.
> It surprises me how hyper focused people are on AI risk when we’ve grown numb to the millions of preventable deaths that happen every year.
Companies are bombarding us with AI in every piece of media they can, obviously with a bias on the positive. This focus is an expected counterresponse to said pressure, and it is actually good that we're not just focusing on what they want us to hear (i.e. just the pros and not the cons).
> If anything, AI may help us reduce preventable deaths.
Maybe, but as long as its development is coupled to short-term metrics like DAUs, it won't.
Not focusing only on what they want us to hear is a good thing, but adding more noise we knowingly consider low value may actually be worse IMO. Both in terms of the overall discourse, but also in terms of how much people end up buying into the positive bias.
I.e. "yeah, I heard many counters to all of the AI positivity but it just seemed to be people screaming back with whatever they could rather than any impactful counterarguments" is a much worse situation because you've lost the wonder "is it really so positive" by not taking the time to bring up the most meaningful negatives when responding.
Fair point. I don't know how to actually respond to this one without an objective measure, or at least a proxy of a measure, on the sentiment of the discourse and its public perception.
Anecdotally, I would say we're just in a reversal/pushback of the narrative, and that's why it feels more negative/noisy right now. But I'd also add that (1) it hasn't been a prolonged situation, as it started getting more popular in late 2024 and 2025; and (2) it probably won't be permanent.
Fair point. I actually wish Altman/Amodei/Hassabis would stop overhyping the technology and also focus on the broader humanitarian mission.
Development coupled to DAUs… I’m not sure I agree that’s the problem. I would argue AI adoption is more due to utility than addictiveness. Unlike social media companies, they provide direct value to many consumers and professionals across many domains. Just today it helped me write 2k lines of code, think through how my family can negotiate a lawsuit, and plan for Christmas shopping. That’s not doom scrolling, that’s getting sh*t done.
You can say "shit" on the internet, as in "I bet those two thousand lines of code are shit quality", or "I hope ChatGPT will still think for you when your brain has rotted away to shit".
Agree that it's ridiculous to talk about banning AI because some people misuse it, but the word preventable is doing a lot of heavy lifting in that argument. Preventable how? Chopping down all the coconut trees? Re-establishing the prohibition? Deciding prayers > healthcare?
Our society is deeply uncomfortable with the idea that death is inevitable. We've lost a lot of the rituals and traditions over the centuries that made facing it psychologically endurable. It probably isn't worth trying to prevent deaths from coconut trees.
Not fully preventable, of course not. But reducible, certainly. Better cars aided by AI. Better diagnoses and healthcare aided by AI. Less addiction to cigarettes and alcohol through AI facilitated therapy. Less obesity due to better diet plans created by AI. I could go on. And that’s just one frame, there are plenty of non-AI solutions we could, and should, be focused on.
Really my broader point is we accept the tradeoff between technology/freedom and risk in almost everything, but for some reason AI has become a real wedge for people.
And to your broader point, I agree our culture has distanced itself from death to an unhealthy degree. Ritual, grieving, and accepting the inevitable are important. We have done wrong to diminish that.
Coconut trees though, those are always going to cause trouble.
>but for some reason AI has become a real wedge for people
Well yeah, for most other technologies, the pitch isn't "We're training an increasingly powerful machine to do people's jobs! Every day it gets better at doing them! And as a bonus, it's trained on terabytes of data we scraped from books and the Internet, without your permission. What? What happens to your livelihood when it succeeds? That's not my department".
AI people are like "HAHAHAHAH we're gods! We're gods and you PEASANTS are going to be jobless once my machine can fire you!" and then wonder why people have negative feelings about it. The iPod wasn't coming for my livelihood, it just let me listen to music even more!
The iTunes music store sold music for your iPod, but we'd be ignoring history if we didn't at least acknowledge that was also the era of Napster, Limewire, Kazaa, and DCC. Pirate Bay, and later, Waffles.fm. Metallica sued Napster in 2000; the first iPod was released in 2001. iPod people laughed at the end of record companies and the RIAA while pretending to work with them. We all know that's not how it ended though.
I, for one, would be on-board with erasing coconut trees from the planet.
Why, one might ask?
Well, simple: Nobody really needs them, do they? And I, for one, don't enjoy the flavor of a coconut: I find that the taste lingers in my mouth in ways that others do not, such that it becomes a distraction to me inside of my little pea brain.
I find them to be ridiculously easy to detect in any dish, snack, or meal. My taste buds would be happier in a world where there were no coconuts to bother with.
Besides: The trees kill about 150 people every year.
(But then: While I'd actually be pretty fine with the elimination of the coconut, I also recognize that I live in a society with others who really do enjoy and find purpose with that particular fruit. So while it's certainly within my wheelhouse to dismiss it completely from my own existence, it's also really not my duty at all to tell others whether or not they're permitted to benefit in some way from one of those deadly blood coconuts.)
Also it's a living organism in its own right, and other non-humans make use of it, like coconut crabs. Nature doesn't exist just for us. Humans kill a lot more coconut trees (or sharks) than they kill us.
The vast majority of traffic deaths are preventable. Whether we’re willing to accept that as a goal and make the changes needed to achieve that goal remains to be seen. Industrial accidents, and cancer from smoking are both preventable, and thankfully have been declining due to prevention efforts. Reducing pollution, fixing food supply issues, and making healthcare more available can prevent many many unnecessary deaths. It certainly is worth trying to prevent some of the dumb ways to die we’ve added since losing whatever traditions we lost. Having family & friends die old from natural causes is more psychologically endurable than when people die young from something that could have been avoided, right?
> Chopping down all the coconut trees? ... It probably isn't worth trying to prevent deaths from coconut trees
Would "not walking under coconut trees" count as prevention? Because that seems like a really simple and cheap solution that almost anyone can do. If you see a coconut tree, walk the other way.
Yes your honor, this kid died of congestive heart failure at 200 kilograms, but AI might have made him like computers more than humans had he lived beyond 16 years.
I think this neatly illustrates how irrelevant the justice system is for people's wellbeing, and that real work in harm reduction happens pretty much anywhere else.
People see that the danger will grow exponentially. Trying to fix the problems of obesity and cars now that they're deeply rooted global issues and have been for decades is hard. AI is still new. We can limit the damage before it's too late.
Maybe we should begin by waiting to see the scale of said so-called damage. Right now, there have maybe been a few incidents, but there are no real rates on "oh x people kill themselves a year from ai" and as long as x is still that, an unknown variable, it would be foolish to speed through limiting everybody for what can be just a few people.
>Trying to fix the problems _____ now that they're deeply rooted global issues and have been for decades is hard
The number of people already losing touch with reality through AI is high. And we know that people have all kinds of screwed-up behaviors around things like cults. It's not hard to see that yes, AI is causing and will cause more problems around this.
To emphasize your point: there are literally multiple online communities of people dating and marrying corporate-controlled LLMs. This is getting out of hand. We have to deal with it.
For real though right? A bunch of nerds at openAI, Microsoft, etc. make it so a computer can approximate a person who is bordering on the sociopathic with its groveling and affirmations of the user’s brilliance, then people fall in love with it. It’s really unsettling!
We don't need to primarily focus on any single "problem name", even if it's very, very bad. We need to focus on having the instruments to easily pick such problems later, regardless of the specifics. Meaning that the most important problem is representation.

People must have fair, protected elections for all levels of the power structure, without feudal systems which throw votes into a dumpster. People must have a clear and easy path to participate in said elections if they so choose, and votes for them should not be discarded. People should be able to vote on local rules directly, with proposals coming directly from the citizens and, if passed, made law (see Switzerland). The whole process should be heavily restricted from being bought with money, meaning restrictions on the campaigns, on ad expenses, fair representation in mass media, etc. People should be able to vote out an incompetent politician too, and fundamental checks need to be protected, like for example a parliament not folding to an autocrat's pressure and relinquishing legislative power to add to the autocrat's executive. And many other improvements.
Having instruments like that, people can decide themselves, what is more important - LLMs or healthcare or housing or something else, or all of that even. Not having instruments like that would just mean hitting a brick wall with our heads for the whole office duration, and then starting from scratch again, not getting even a single issue solved due to rampant populism and corruption by wealthy.
> The origin of the death by coconut legend was a 1984 research paper by Dr. Peter Barss, of Provincial Hospital, Alotau, Milne Bay Province, Papua New Guinea, titled "Injuries Due to Falling Coconuts", published in The Journal of Trauma (now known as The Journal of Trauma and Acute Care Surgery). In his paper, Barss observed that in Papua New Guinea, where he was based, over a period of four years 2.5% of trauma admissions were for those injured by falling coconuts. None were fatal but he mentioned two anecdotal reports of deaths, one several years before. That figure of two deaths went on to be misquoted as 150 worldwide, based on the assumption that other places would have a similar rate of falling coconut deaths.
Smoking had a huge campaign to (a) encourage people to buy the product, (b) lie about the risks, including bribing politicians and medical professionals, and (c) the product is inherently addictive.
That's why people are drawing parallels with AI chatbots.
Edit: as with cars, it's fair to argue that the usefulness of the technology outweighs the dangers, but that requires two things: a willingness to continuously improve safety (q.v. Unsafe at Any Speed), and - this is absolutely crucial - not allowing people to profit from lying about the risks. There used to be all sorts of nonsense about "actually seatbelts make cars more dangerous", which was smoking-level propaganda by car companies which didn't want to adopt safety measures.
Literally every person who took up smoking in the last 50 years was fully aware of the danger.
People smoke because it's relaxing and feels great. I loved it and still miss it 15 years out. I knew from day one all the bad stuff, everyone tells you that repeatedly. Then you try it yourself and learn all the good stuff that no one tells you (except maybe those ads from the 1940's).
At some point it has to be accepted that people have agency and wilfully make poor decisions for themselves.
If the coconut industry had trillions of dollars behind advocating placing coconuts above everyone’s beds and chairs, I think more people would be complaining about that.
The auto industry has trillions of dollars spent giving everyone cars, and we don't really dwell much on road safety. And cars kill a crazy number of people.
The name Ralph Nader should ring a bell for you, hopefully. There was a point when we didn't spend much on road safety, and if that era's death rate per mile held given how much we drive now, almost everyone you knew who died would have done so in a car accident.
The present day is _after_ huge amounts of effort and investment in road safety, and it's an ongoing process. Complete with technological mandates like lane-keeping. It's a major factor in car design, with safety boards such as the NTSB and Euro NCAP.
Locally, that’s a fait accompli. Car ownership has been ubiquitous in the US for decades. Traffic deaths per capita are increasing a bit in the US but are still below where they were in the 90s, and most developed countries have seen significant decreases. I don’t really know what the discourse is like in countries where traffic deaths might actually be increasing significantly from a tiny baseline.
"In his paper, Barss observed that in Papua New Guinea, where he was based, over a period of four years 2.5% of trauma admissions were for those injured by falling coconuts. None were fatal but he mentioned two anecdotal reports of deaths, one several years before. That figure of two deaths went on to be misquoted as 150 worldwide, based on the assumption that other places would have a similar rate of falling coconut deaths."
I get your point and think in a similar way. The difference between AI and the coconuts is this: there is no way deaths by coconuts increase by 10,000,000x, but for AI it's possible.
The reason we have not (and probably will not) removed obvious bad causes is that a small group of people has huge monetary incentives to keep the status quo.
It would be so easy to e.g. reduce the amount of sugar (without banning it), or to have a preventive instead of a reactive healthcare system.
I’m not so sure that’s true. There are many examples of OpenAI putting in aggressive guardrails after learning how their product had been misused.
But the problem you surface is real. Companies like porn AI don’t care, and are building the equivalent of sugar laced products. I haven’t considered that and need to think more about it.
>It surprises me how hyper focused people are on AI risk when we’ve grown numb to the millions of preventable deaths that happen every year.
Because it's early enough to make a difference. With the others, the cat is out of the bag. We can try to make AI safer before it becomes necessary. Once it's necessary, it won't be as easy to make it safer.
Yes, a thousand times yes. How tf is cultivation of tobacco still legal? This shouldn't be an industry. There should be a three-plants-per-person limit and a ban on sales and gifting. It should be a controlled substance. Nicotine is the most addictive substance known to man, and in tobacco it's packaged with cancer-inducing garbage. How is it legal?
I don't really understand this logic. Enormous efforts are made to reduce those deaths, if they weren't the numbers would be considerably higher. But we shouldn't worry about AI because of road accident deaths? Huh? We're able to hold more than one thought in our heads at a time.
> But those using this as an argument to ban AI
Are people arguing that, though? The introduction to the article makes the perspective quite clear:
> In tweaking its chatbot to appeal to more people, OpenAI made it riskier for some of them. Now the company has made its chatbot safer. Will that undermine its quest for growth?
This isn't an argument to ban AI. It's questioning the danger of allowing AI companies to do whatever they want to grow the use of their product. To go back to your previous examples, warning labels on cigarette packets help to reduce the number of people killed by smoking. Why shouldn't AI companies be subject to regulations to reduce the danger they pose?
Many people are arguing for a ban. I did get reactive, because I’ve been hearing that perspective a lot lately.
But you’re right. This article specifically argues for consumer protections. I am fully in favor of that.
I just wish the NYT would also publish articles about the potential of AI. Everything I’ve seen from them (I haven’t looked hard) has been about risks, not about benefits.
Absolutely, the OP's argument doesn't hold water. Previous dangers have been discussed and discussed (and are still discussed if you look for it), no need to linger on past things and ignore new dangers. Also since a lot of new money is being poured into AI/AI products unlike harmful past industries such as tobacco, it's probably the right thing to be skeptical of any claims this industry is making, to inspect carefully and criticize what we think is wrong.
As a society we have undertaken massive efforts to reduce all of those. Certainly debatable if it's been enough but ignoring the new thing by putting zero effort in while it's still formative seems short-sighted.
You know what else is irrelevant to this discussion? We could all die in a nuclear war so we probably shouldn’t worry about this issue as it’s basically nothing in comparison to nuclear hellfire.
Mostly whataboutism, but I think my point about cars is valid. I think nuclear is another good comparison. Nuclear could power the world, or destroy it, and I’d say we’re on the positive path despite ourselves.
It’s not that we shouldn’t worry, we should. But humanity is also surprisingly good at cooperating even if it’s not apparent that we are.
I certainly believe that looking only at the good or bad side of the argument is dangerous. AI is coming, we should be serious about guiding it.
The 1990’s saw one of the most effective smoking cessation campaigns in the world here in the US. There have been numerous case studies on it. It is clearly something we are working on and addressing (not just in the US)
* 4 million to obesity.
Obesity has been widely studied and identified as a major issue, and is something doctors and others have been trying to help people with. You can’t just ban obesity, and clearly there are efforts being made to understand it and help people.
* 2.6 million to alcohol
Plenty of studies and discussion and campaigns to deal with alcoholism and related issues, many of which have been successful, such as DUI laws.
* 2.5 million to healthcare
A complex issue that is in the limelight and that several countries have attempted to tackle to varying degrees of success.
* 1.2 million to cars
Probably the most valid one on the list and one that I also agree is under addressed. However, there are numerous studies and discussions going on.
So let’s get back to AI and away from “what about…”: why is there so much resistance (like you seem to be putting up) to any study or discussion of the harmful effects of LLMs, such as AI-induced psychosis?
I’m not resisting that at all. I fully support AI safety research. I think mechanistic interpretability is a fascinating and fruitful field.
What I’m resisting are one sided views of AI being either pure evil, or on the verge of AGI. Neither are true and it obstructs thoughtful discussion.
I did get into whataboutism; I didn’t realize it at the time. I did use flawed logic.
To refine my point, I should have just focused on cars and other technology. AI amplifies humanity for both good and bad. It comes with risk and utility. And I never see articles presenting both.
Many people are. Several of my immediate family members. And several prominent intellectuals including Yudkowsky and Hinton, both fathers of the field.
Yudkowsky wrote a 250 page book to say "we must limit all commercial GPU clusters to a maximum of 8." That is terrifyingly myopic, and look at the reviews on Amazon. 4.6 stars (574). That is what scares me.
Let me rephrase: most people aren’t that myopic and the viewpoint that’s driving AI development definitely skews more towards the “no restrictions or limitations of any kind” end of the spectrum anyway. You’d have a point if AI development was being choked in some way, but it’s quite the opposite:
I don’t think you need to worry that the other extreme exists as well. The obscene flow of money into AI at every stage has thus far gone almost entirely unchallenged.
I am somewhat sympathetic to this view because it appears to be rational. But I heard something similar when the internet was becoming more and more mainstream 25 years ago. A similarly rational opinion was that online communities help people connect and reduce loneliness. But if we look at it objectively, the outcome was poor in that regard. So buyer beware.
Of course, I don't think anything should be banned. But the influence on society should not be hand waved as automatically positive because it will solve SOME problems.
I fully agree with you. I do think my argument came across as more hand wavy than I intended, I definitely did a “what about” and wish I hadn’t.
What I’m really after is thoughtful discourse, that acknowledges we accept risk in our society if there is an upside.
To your point about the internet making people more lonely, I’d say on balance that’s probably true, but it’s also nuanced. I know my mom personally benefits from staying in touch with her friends from her home country.
I think one of the most difficult things to predict is how human behavior adapts to novel stimulus. We will never have enough information. But I do think we adapt, learn, and become more resilient. That is the core of my optimism.
It's possible to care about multiple things at the same time, and caring about one doesn't take away from caring about the other. These deflecting comments surrounding a nascent technology with unknown implications are pointless. You can say this about anything anyone cares about.
"We let in all this harmful stuff, so let's let more harmful stuff into our society (forced, actually) so we can mint a few more billionaires and lay off a few million for the benefit of shareholders."
Agreed - Really surprising this article didn't cover the flip side - how many lives have been saved due to having an instant source of truth in your pocket.
"Source of truth." Right, that reminds me of the other issue exacerbated by AI: widespread media illiteracy. (Apologies if that was the joke, can't tell anymore).