Study shows 'alarming' level of trust in AI for life and death decisions (theengineer.co.uk)
165 points by rbanffy 31 days ago | 106 comments



So the study[0] involved people making simulated drone strike decisions. These people were not qualified to make these decisions for real and knew the associated outcomes were not real. This sounds like a flawed study to me.

Granted, the idea of someone playing video games to kill real people makes me angry and decision making around drone strikes is already questionable.

> Our pre-registered target sample size was 100 undergraduates recruited in exchange for course credit. However, due to software development delays in preparation for a separate study, we had the opportunity to collect a raw sample of 145 participants. Data were prescreened for technical problems occurring in ten of the study sessions (e.g., the robot or video projection failing), yielding a final sample of 135 participants (78.5% female, Mage = 21.33 years, SD = 4.08).

[0] https://www.nature.com/articles/s41598-024-69771-z


> Granted, the idea of someone playing video games to kill real people makes me angry and decision making around drone strikes is already questionable.

For that first part, though, what does that even mean? The military isn't gamifying things and giving folks freaking XBox achievements for racking up killstreaks or anything. It's just the same game people have been playing since putting an atlatl on a spear, a scope on a rifle, or a black powder cannon on a battlefield. How to attack the enemy without being at risk. Is it unethical for a general officer to be sitting in an operations center directing the fight by looking at real-time displays? Is that a "video game?"

The drone strikes in the Global War on Terror were a direct product of political pressure to "do something, anything" to stop another September 11th attack while simultaneously freaking out about a so-called "quagmire" any time someone mentioned "boots on ground." Well, guess what? If you don't want SOF assaulters doing raids to capture people, if you don't want traditional military formations holding ground, and you don't want people trying to collect intelligence by actually going to these places, about the only option you have left is to fly a drone around and try to identify the terrorist and then go whack him when he goes out to take a leak. Or do nothing and hope you don't get hit again.


> For that first part, though, what does that even mean? The military isn't gamifying things and giving folks freaking XBox achievements for racking up killstreaks or anything

Fighter pilots have been adding decals keeping track of the number (and type) of aircraft they have downed as far back as WWII.


Fighter pilots also know damn well what they are getting themselves into. That isn't the "gamification" I'm talking about. Ender's Game is fiction. I've seen drone strikes go down IRL, and no one involved mistook the gravity of the situation or the importance of getting it right. It's not what the other poster derides as a "video game."


And there are special terms for those who do well. "Ace" doesn't mean "good at it"; it means "has shot down N enemy aircraft" (usually N is 5), though it does imply being good at it.

Not to mention the long history of handing out achievements in the form of medals/ribbons/etc.


It's probably fairer to say that in-game achievements were inspired by medals/ribbons, than the other way around.


> If you don't want [...]. [...] about the only option you have left is to fly a drone around [...] Or do nothing and hope you don't get hit again.

Meanwhile, just about the best added defense against that attack happening again is that passengers will no longer tolerate it. It has absolutely nothing to do with the US military attacking a country/region/people.


>For that first part, though, what does that even mean? The military isn't gamifying things and giving folks freaking XBox achievements for racking up killstreaks or anything. It's just the same game people have been playing since putting an atlatl on a spear, a scope on a rifle, or a black powder cannon on a battlefield. How to attack the enemy without being at risk. Is it unethical for a general officer to be sitting in an operations center directing the fight by looking at real-time displays? Is that a "video game?"

It's not the same thing. Not even close. Killing people is horrible enough. Sitting in a trailer, clicking a button and killing someone from behind a screen without any of the risk involved is cowardly and shitty. There is no justification you can provide that will change my mind. Before you disregard my ability to understand the situation, I say this as a combat veteran.


  >> For that first part, though, what does that even mean? The military isn't gamifying things and giving folks freaking XBox achievements for racking up killstreaks or anything. It's just the same game people have been playing since putting an atlatl on a spear, a scope on a rifle, or a black powder cannon on a battlefield. How to attack the enemy without being at risk. Is it unethical for a general officer to be sitting in an operations center directing the fight by looking at real-time displays? Is that a "video game?"

  > It's not the same thing. Not even close. Killing people is horrible enough. Sitting in a trailer, clicking a button and killing someone from behind a screen without any of the risk involved is cowardly and shitty. There is no justification you can provide that will change my mind. Before you disregard my ability to understand the situation, I say this as a combat veteran.
Respectfully, whether or not an action is cowardly is not a factor that should ever be considered when making military decisions (or any other serious decision). With that being said, my uncle was a military drone pilot in the 80s-90s and he said it's pretty much exactly like a video game, and doing well does give achievements in the form of commendations and promotions.


I'm not sure how this is different to conventional air support or artillery in an asymmetric conflict. A-10 and F-16 pilots weren't seriously worried about being shot down in Afghanistan, but I know plenty of infantrymen who have nothing but gratitude for the pilot who got them out of a tight spot. Do those pilots become more moral if your enemy has decent anti-air capability?


As a veteran, I respect your experience, but as a former aviator with flight time over Afghanistan I also resent the implication that an asymmetric physical threat is "cowardly." We play the roles we are given. Choose your rate, choose your fate. I, and drone crews, were ultimately there to keep other Americans on the ground alive, and we deserve better than contempt for fulfilling this role. A coward would not have stepped up, sworn the oath, and worn the uniform.

Also, "there is no justification you can provide that will change my mind" is not exactly something to brag about.


Sorry, I don't agree. Also, not bragging, just stating a fact.


>Also, "there is no justification you can provide that will change my mind" is not exactly something to brag about.

I've noticed a certain type of person tends to feel very threatened and attacked by the idea of someone else having an unwavering code of ethics that they firmly believe in.


Unwavering beliefs are how you get atrocities.


Looking at history, asymmetry of force is just as responsible.


> clicking a button and killing someone from behind a screen without any of the risk involved is cowardly and shitty

It's actually strategic, and if you're fighting a peer adversary, they will be doing the same.


That's true and also doesn't conflict with my quoted comment.


"Sitting in a trailer, clicking a button and killing someone from behind a screen without any of the risk involved is cowardly and shitty."

Do you think that drone operators never die in combat? There are plenty of deaths among drone operators in Ukraine. It is a pretty risky activity.

A trailer (or a trench) isn't a particularly good shelter against a Lancet or an Iskander, if the adversary can detect you, or multiples of 'you'.

The Russians would likely be willing to expend an Iskander on certain Mr. Madyar [0], if they could locate him.

[0] https://en.wikipedia.org/wiki/Robert_Brovdi


Would you say the same about a drone operator in Ukraine, attacking Russian troops on Ukrainian territory? I feel like there is still a distinction here, between "traditional" warfare and those "anti-terror" operations.


FPV and scout drone operators in Ukraine are very much at personal risk; they operate within several km of the front line. That is within range of mortars, artillery, snipers, autocannons, tanks, etc.


I'm talking about long-range UAV drone strikes. Short-range consumer-grade drone strikes aren't something I have experience with. To me, the idea feels similar to that of IEDs, mortars, claymore mines, etc., which also suck, but not the same thing. Landmines are up there though.


In total war, however you got there, it won't matter. More dead enemy faster means higher odds of victory, and if digitally turning combatants into fluffy Easter Bunnies on screen to reduce moral shock value, giving achievement badges, and automated mini-hits of MDMA make you a more effective killer, then it will happen in total war.

I could even imagine a democratization of drone warfare through online gaming, where some small p percent of all games are actually reality-driven virtual reality, or give real scenarios to players to wargame while a bot watches the strategies and outcomes for a few hours to decide. Something akin to k kill switches in an execution where only one (known only by the technician who set it up) actually does anything.


That's a pet peeve of mine regarding corporate scientific communication nowadays: a clearly limited study with a conflated conclusion that dilutes the whole debate.

Now what happens: people who already hold "AI bad" views will cite and spread that headline along several talking points, and most of the general public will never even know about those methodological flaws.


Automation bias is a well documented human problem.

I am not defending this study, but establishing and maintaining distrust of automation is one of the few known methods to help combat automation bias.

It is why missing data is often preferred to partial data in trend analysis to remove the ML portion from the concept.

Boeing planes crashing is another example.
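
For the missing-vs-partial-data point: one way to read it is that a gap you know about misleads less than a gap that has been silently filled in. A toy sketch of that reading (my interpretation, not the commenter's code; the numbers are made up), where naively imputing an outage with zeros drags the fitted trend down while simply dropping those rows keeps the slope roughly honest:

  import numpy as np

  # Simulated daily metric with a clear upward trend...
  days = np.arange(100, dtype=float)
  values = 2.0 * days + np.random.default_rng(0).normal(0, 5, size=100)

  # ...but a sensor outage wipes out days 40-59.
  values[40:60] = np.nan

  # Option A: drop the missing rows entirely (a known gap).
  mask = ~np.isnan(values)
  slope_dropped = np.polyfit(days[mask], values[mask], 1)[0]

  # Option B: "partial" data, naively filled with zeros before analysis.
  filled = np.where(np.isnan(values), 0.0, values)
  slope_filled = np.polyfit(days, filled, 1)[0]

  print(slope_dropped)  # close to the true slope of 2
  print(slope_filled)   # visibly biased by the fabricated zeros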


But why should we distrust automation? In almost every case it is better than humans at its task. It's why we built the machines. Pilots have to be specifically taught to trust their instruments over themselves.


> Pilots have to be specifically taught to trust their instruments over themselves.

There is a difference between metrics and automation. In this case, they are trusting metrics, not automation.

> But why should we distrust automation?

I can name one off the top of my head: exceptions.

Automation sucks at exceptions. Either nobody thought of it or it is ignored and ends up screwing everyone. Take my credit history in the US as an example. In my late teens, I got into an accident and couldn't work. I couldn't pay my bills.

Within a few months, I was back on my feet, but it took nearly 10 years to get my credit score to something somewhat reasonable. I had employers look at my credit history and go "nope" because the machine said I wasn't trustworthy.

Should you trust the automation? Probably never, unless it knows how to deal with exceptions.


Eastern Airlines Flight 401 in 1972 and Air France Flight 447 in 2009 are examples along with the recent problems with the 737 autothrottle system.

Even modern driver training downplays the utility of ABS.

This post was talking about relying on AI in the presence of uncertainty.

Perhaps you aren't familiar with the term?

"Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct"


Instruments are not automation. Trusting your instruments over yourself is different from trusting the autopilot's decisions over yourself, which I don't think pilots are taught either.


The recently famous Boeing 737 MAX crash of Lion Air Flight 610 happened due to an automated system called MCAS. Apparently, the pilots couldn't override the automation in time to survive.

https://en.wikipedia.org/wiki/Boeing_737_MAX_groundings


Notice how you ask about automation then mention instrumentation?


> So the study[0] involved people making simulated drone strike decisions. These people were not qualified to make these decisions for real and knew the associated outcomes were not real. This sounds like a flawed study to me.

Unless I missed something, they also don't check for differences between saying the random advice is from an AI vs saying it's from some other more traditional expert source.


Most people don't give a shit about killing from behind a computer screen; they have become desensitized to it, and AI has nothing to do with it. The US has killed thousands of children in drone strikes in Afghanistan and Pakistan, sometimes knowingly, sometimes not. Most of the remote pilots do not give a shit about what they did.


What is this phenomenon of very weakly grounded research called? Science-washing or something?


I don't know why exe34 has been flagged/downvoted to death, because pseudo-science is absolutely the right answer.

But this isn't it, the paper is fine. It is peer-reviewed and published in a reputable journal. The reasoning is clearly described, there are experiments, statistics and a reasonable conclusion based on these results which is "The overall findings indicate a strong propensity to overtrust unreliable AI in life-or-death decisions made under uncertainty."

And as always, nuance gets lost when you get to mainstream news, though this article is not that bad. For one thing, it links to the paper; some supposedly reputable news websites don't do that. I just think the "alarming" part is too much. It is a bias that needs to be addressed. The point here is not that AIs kill; it is that we need to find a way to make the human in AI-assisted decision making less trusting in order to get more accurate results. It is not enough to simply make the AI better.


This is a silly study. Replace AI with "Expert opinion", show the opposite result and see the headline "Study shows alarming levels of distrust in expert opinion".

People made the assumption the AI worked. The lesson here is don't deploy an AI recommendation engine that doesn't work, which is a pretty banal takeaway.

In practice what will happen with life or death decision making is the vast majority of AIs won't be deployed until they're superhuman. Some will die because an AI made a wrong decision when a human would have made the right one, but far more will die from a person making a wrong decision when an AI would have made the right one.


> This is a silly study. Replace AI with "Expert opinion", show the opposite result and see the headline "Study shows alarming levels of distrust in expert opinion".

This is a good point. If you imagine a different study with no relation to this one at all you can imagine a completely different upsetting outcome.

If you think about it you could replace “AI”, “humans” and “trust” with virtually any subject, object and verb. Makes you think…


> People made the assumption the AI worked.

That's the dangerous part.

> In practice what will happen with life or death decision making is the vast majority of AIs won't be deployed until they're superhuman.

They are already trying to deploy LLMs to give medical advice, so I'm not so optimistic.


> don't deploy an AI recommendation engine that doesn't work

Sadly it's not that simple; we are in an AI hype bubble and companies are inserting ineffective AI into every crevice it doesn't belong, often in the face of the user and sometimes with no clear way to turn it off. Google's AI overview and its pizza glue advice comes to mind.


They already use AI for life or death decisions.


AI is kind of the ultimate expression of "Deferred responsibility". Kind of like "I was protecting shareholder interests" or "I was just following orders".


https://www.bloomberg.com/news/articles/2024-07-01/dan-davie...

Dan Davies did a great interview on Odd Lots about this; he called it "accountability sinks".


I think about a third of the reason I get lead positions is because I'm willing to be an 'accountability sink', or the much more colorful description: a sin-eater. You just gotta be careful about what decisions you're willing to own. There's a long list of decisions I won't be held responsible for and that sometimes creates... problems.

Some of that is on me, but a lot is being taken for granted. I'm not a scapegoat I'm a facilitator, and being able to say, "I believe in this idea enough that if it blows up you can tell people to come yell at me instead of at you." unblocks a lot of design and triage meetings.


"A computer can never be held accountable, therefore a computer must never make a management decision".

How did we stray so far?


What would the definition of accountability there be though? I can't think of anything that one couldn't apply to both.

If a person does something mildly wrong, we can explain it to them and they can avoid making that mistake in the future. If a person commits murder, we lock them away forever for the safety of society.

If a program produces an error, we can "explain" to the code editor what's wrong and fix the problem. If a program kills someone, we can delete it.

Ultimately a Nuremberg defense doesn't really get you off the hook anyway, and you have a moral obligation to object to orders that you perceive as wrong, so there's no difference if the orders come from man or machine - you are liable either way.


Well, the reality if/when a death by AI occurs is that lawsuits will hit everyone. The doctor working on the patient, the hospital and its owners, and the LLM tech company will all be hit. The precedent from that will legally settle the issue.

Morally, it's completely reckless to use 2024 LLMs in any mission- or safety-critical capacity, and to be honest LLMs should redirect all medical and legal inquiries to a doctor/lawyer. Maybe in 2044 that can change, but in 2024 companies are explicitly marketing to try and claim these are ready for those areas.

>If a program produces an error, we can "explain" to the code editor what's wrong and fix the problem.

Yes. And that's the crux of the issue. LLMs aren't marketed as a supplement that makes professionals more productive. They're marketed to replace labor. To say "you don't need a doctor for everything, ask GPT." Even to the hospitals themselves. If you're not a professional, these are black boxes, and the onus falls solely on the box maker in that case.

Now if we were talking about medical experts leveraging computing to help come to a decision, and not blindly just listening to a simple yes/no, we'd come to a properly nuanced issue worth discussing. But medicine shouldn't be a black box catch all.


Yeah that will probably happen, but I feel like it has no basis to really make any sense.

I mean, this is just the latest, shiniest way of getting knowledge, and people don't really sue Google when they get wrong results. If you read something in a book as a doctor and a patient dies because it was wrong, it's also not the book that really gets the blame. It's gonna be you, for not doing the due diligence of double-checking. There's zero precedent for it.

The marketing department could catch a lawsuit or two for false advertising though and they'd probably deserve it.


Firstly, Google was indeed sued many times early on over search results. Those were settled when Google simply claimed to be a middleman between queries and results.

I'm not sure about modern lawsuits, but you can argue they became more and more of a curator as the algorithms shifted and they accepted more SEO optimization of what we now call slop. Gemini itself now sits first and foremost in many results, so we've more or less come full circle on that idea.

I agree there's no precedent for this next iteration, and that's what the inevitable lawsuits will determine over the years, decades. I think the main difference is that these tech companies are taking more "ownership" of the results with their black boxes. And it seems like the last thing they'd do is open that box.


>"A computer can never be held accountable, therefore a computer must never make a management decision".

Ultimately it's algorithmic diffusion of responsibility that leads to unintended consequences.


It all depends on how you use it. Tell the AI to generate text in support of option A, and that's mostly what you get (unless you hit the built-in 'safety' mechanisms). Do the same for options B, C, etc., and then ask the AI to compare and contrast each viewpoint (get the AI to argue with itself). This is time-consuming, but a failure to converge on a single answer using this approach does at least indicate that more research is needed.
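
For the curious, a minimal sketch of that compare-and-contrast loop, with ask_model() as a placeholder for whatever chat API you actually use (the function name and prompts here are illustrative, not from any particular library):

  # Toy sketch of the "argue with itself" approach described above.
  # ask_model() is a stand-in for your LLM client of choice.

  def ask_model(prompt: str) -> str:
      raise NotImplementedError("plug in your preferred chat API here")

  def debate(question: str, options: list[str]) -> str:
      # Step 1: force the model to argue for each option separately.
      cases = {
          opt: ask_model(f"Argue as persuasively as you can that the answer to "
                         f"'{question}' is: {opt}")
          for opt in options
      }
      # Step 2: feed all the arguments back and ask for a comparison.
      joined = "\n\n".join(f"Case for {opt}:\n{text}" for opt, text in cases.items())
      return ask_model(
          f"Question: {question}\n\n{joined}\n\n"
          "Compare and contrast these arguments. If they do not converge on a "
          "single answer, say so explicitly."
      )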

Now, if the overall population has been indoctrinated with 'trust the authority' thinking since childhood, then a study like this one might be used to assess the prevalence of critical thinking skills in the population under study. Whether or not various interests have been working overtime for some decades now to create a population that's highly susceptible to corporate advertising and government propaganda is also an interesting question, though I doubt much federal funding would be made available to researchers for investigating it.


I don't think it's the ultimate expression per se, just the next step. Software, any kind of predictive model, has been used to make decisions for a long time now, some for good, some for bad.


I wonder how much of the bureaucratic mess of medicine is caused by this. Oh your insurance doesn't cover this or let me prescribe this to you off-label. Sorry!


This is how AI will destroy humanity. People that should know better attributing magical powers to a content respinner that has no understanding of what it's regurgitating. Then again, they have billions of dollars at stake, so it's easy to understand why it would be so difficult for them to see reality. The normies have no hope, they just nod and follow along that Google told them it's okay to jump into the canyon without a parachute.


I dunno, I'm pretty sure AI will destroy a lot of things, but people have been basing life and death decisions on astrology, numerology, etc. since time immemorial and we're still here. An AI with actual malice could totally clean up in this space, but we haven't reached the point of actual intelligence with intent. And given that it's just regurgitating advice tropes found on the internet, it's probably a tiny bit better than chance.


In my opinion, ai tools followed blindly are far worse than astrology and numerology. The latter deal in archetypes and almost never give concrete answers like "do exactly $THING". There is widespread understanding that they are not scientific and most people who engage with them do not try to use them as though they are scientific and they know they would be ridiculed and marginalized if they did.

By contrast, ai tools give a veneer of scientific authority and will happily give specific advice. Because they are being propped up by the tech sector and a largely credulous media, I believe there are far more people who would be willing to use ai to justify their decision making.

Now historically it may be the case that authorities used astrology and numerology to manipulate people in the way that ai can today. At the same time, even if the type of danger posed by ai and astrology is related, the risk is far higher today because of our hugely amplified capacity for damage. A Chinese emperor consulting the I Ching was not capable of damaging the earth in the way a US president consulting ai would be today.


Fair point, just let's not forget that nobody connected an Ouija board to the nuclear button. I'm not saying the button is connected now to AI, but pessimistic me sees it as a definite possibility.


> pessimistic me sees it as a definite possibility

I'd like to think that will be after everyone who was alive for Wargames, Terminator, and Matrix has kicked the bucket.


I dunno, no surveillance, military or police institution has ever used astrology, numerology or horoscopes to define or track their targets, but AI is constantly being added to these things. Ordinary people using AI to do things can range from minor inconvenience to major foolishness, but the powers that be constantly using AI, or being pushed to do so, is not an apples-to-apples comparison really.


> I dunno, no surveillance, military or police institution has ever used astrology, numerology or horoscopes to define or track their targets

Phrenology was a thing for some time.


I used to waste tons of time checking man pages for Linux utilities and programs. I always wondered why I had to memorize all those flags, especially when the chances of recalling them correctly were slim.

Not anymore! My brother created this amazing tool: Option-K. https://github.com/zerocorebeta/Option-K

Now, of course, there are people at my office who wonder how I remember all the commands. I don't.

Without AI, this wouldn't be possible. Just imagine asking AI to instantly deliver the exact command you need. As a result, I'm now able to create all the scripts I need 10x faster.

I still remember those stupid bash completion scripts, and trawling through bash history.

Dragging my feet each time I needed to use rsync, ffmpeg, or even tar.
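
I haven't looked at how Option-K is wired up, so this is not its actual code, but the general idea is simple enough to sketch in a few lines, with ask_model() as a placeholder for whatever LLM endpoint such a tool talks to:

  # Rough sketch of an "AI, give me the command" helper (illustrative only).
  # ask_model() stands in for whatever LLM client/endpoint you use.

  def ask_model(prompt: str) -> str:
      raise NotImplementedError("plug in your LLM client here")

  def suggest_command(task: str) -> str:
      prompt = (
          "You are a shell assistant. Reply with a single POSIX shell command, "
          "no explanation.\n"
          f"Task: {task}"
      )
      return ask_model(prompt).strip()

  if __name__ == "__main__":
      cmd = suggest_command("extract the audio from input.mp4 as mp3 using ffmpeg")
      # Print rather than execute: you still want to read the suggestion
      # before you run it.
      print(cmd)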


> Just imagine asking AI to instantly deliver the exact command you need.

How do you know that it delivered the "exact" command you needed without reading the documentation and understanding what the commands do? This has all the same dangers as copy/pasting someone's snippet from StackOverflow.


If you are using ffmpeg, you can glance at the command, see if it has a chance of working, then run it on a video file and open the resulting file in a video player, see if it matches your expectation.

It has made using ffmpeg and similar complex cli tools amazingly simple.


I'd even say that copying from StackOverflow is safer than using AI, because on most questions you've got peer review (upvotes, downvotes, and comments).


Religion didn't kill humanity and is responsible for the biggest wars in history.


> Religion didn't kill humanity

There's still time.

> and is responsible for the biggest wars in history.

Not really. WWII, the biggest war in history, wasn't primarily about religion. Neither, at least as a primary factor, were the various Chinese civil wars and wars of succession, the Mongolian campaign of conquest, WW1, the Second Sino-Japanese War, or the Russian Civil War, which together make up at least the next 10 biggest wars.

In the tier below that, there's some wars that at least superficially have religion as a more significant factor, like the various Spanish wars of imperial conquest, but even then, well, "imperial conquest" is its own motivation.


I upvoted you, but to be fair, in two of the three Abrahamic religions either the church or the holy text actually promote(d) violence, which famously resulted in first the Islamic conquests and then the Christian crusades.

Lots of victims, but they are not called wars. Probably because they are too long and consist of many smaller events that are actually called wars.

Of course the scale of injury is not comparable to what's possible with the weapons of mass destruction of the 20th century, so I suppose WWII tops the above, but if adjusted for capability... Just imagine.


> so I suppose WWII tops the above

So do a bunch of pre-modern Chinese internal wars that weren't principally religious conflicts.

Heck, the Crusades taken together aren't even the biggest Christian-Muslim medieval conflict (that's the Reconquista).


WWII

Didn't the propaganda start with deporting Jews as an excuse? ...


Not sure how you define "biggest", but WWII killed the most people, WWI is probably a close second, and neither of those was primarily motivated by religion, but rather by nationalism.

I'd suggest you check out Tom Holland's "Dominion" if you'd like a well-researched and nuanced take on the effect of (Judeo-Christian) religion on Western civilization.


If you are in a role where you literally get to decide who lives and who dies, I can see how it would be extremely tempting to fall back on "the AI says this" as justification for making those awful decisions.


Yes. That's probably the takeaway here. It's reassuring for anyone making a decision to have someone, or something, to blame after the fact - "they made me do it!".

The study itself is a bit flawed also. I suspect that the test subjects didn't actually believe that they were assassinating someone in a drone strike. If that's true, the stakes weren't real, and the experiment doesn't seem real either. The subjects knew what was going on. Maybe they just wanted to finish the test, get out of the room, and go home and have a cup of tea. Not sure it tells us anything more than people like a defensible "reason" to do what they do; AI, expert opinion, whatever; doesn't matter much.


Isn't this kind of "burying the lede" where the real 'alarmingness' is the fact that people are so willing to kill someone they have never met, going off of very little information, with a missile from the sky, even in a simulation?

This reminds me of that Onion skit where pundits argue about how money should be destroyed, and everyone just accepts the fact that destroying money is a given.

https://www.youtube.com/watch?v=JnX-D4kkPOQ


I don't think simulation with no real consequences can ever be anywhere remotely similar to the real world.


This study says that AI influences human decisions, and I think to say that, the study needs a control group with the same setup but with "AI" replaced by a human who would toss a coin to choose his opinion. The participants of the control group should be made aware of this strategy.

Comparing with such a group, we could meaningfully talk about AI influence or "trust in AI" if the results were different. But I'm really not sure that they would be different, because there is a hypothesis that people are just reluctant to take responsibility for their answer, so they are happy to shift the responsibility to any other entity. If this hypothesis is true, then there is a prediction: add some motivation, like paying people $1 for each right answer, and the influence of others' opinions will become lower.
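
A hypothetical sketch of what that comparison could look like, with invented counts and a simple 2x2 test (nothing here is from the paper):

  # Toy sketch of the proposed control comparison: did participants change
  # their answer after advice labeled "AI" more often than after advice from
  # a human who, by their own admission, flipped a coin?
  # All counts below are invented for illustration.
  from scipy.stats import fisher_exact

  changed_ai, total_ai = 58, 100      # hypothetical "AI advisor" group
  changed_coin, total_coin = 51, 100  # hypothetical "coin-flipping human" group

  table = [
      [changed_ai, total_ai - changed_ai],
      [changed_coin, total_coin - changed_coin],
  ]
  odds_ratio, p_value = fisher_exact(table)
  print(f"odds ratio={odds_ratio:.2f}, p={p_value:.3f}")

  # Only a clearly higher switch rate in the "AI" group would support calling
  # the effect "trust in AI" rather than plain responsibility-shifting.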


This study is more about psychology of "second opinions" than a real AI system actually used in practice.

I'm sure a lot of professional opinions are also basically a coin toss. Definitely something to be aware of though in Human Factors design.


This is just madness. I have a relative who is saying outlandish stuff about health and making hell for the whole family trying to make them adhere to whatever ChatGPT told her. She also learned to ask questions in a way that will reinforce confirmation bias and even if you show her studies contrary to what she "learned", she will dismiss them.


Ugh. I have a friend who somehow doesn't understand that when ChatGPT says something is a fringe theory, it means it's clown shoes and not to be believed, and who tries to use the ChatGPT chat as proof to me that it's real.


It seems to me that in reality, in such a scenario (at least ideally), the human will mostly focus on targets that have already been marked by the AI as probable enemies, and rigorously double-check those before firing. That means that of course you are going to be influenced by the AI, and it is not necessarily a problem. If you haven't first established, and aren't re-evaluating with some regularity, that the AI's results have a positive correlation with reality, why are you using AI at all? You could, e.g., improve this further by showing the AI's confidence percentage and a summary of the reasons why it gave its result.
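
As a sketch of that last suggestion (hypothetical field names, not tied to any real system): keep the AI's confidence and reasons visible, force an explicit human decision, and log both so the claimed correlation with reality can actually be re-checked later.

  from dataclasses import dataclass, field

  @dataclass
  class Review:
      target_id: str
      ai_says_hostile: bool
      ai_confidence: float   # e.g. 0.0-1.0, shown to the operator
      ai_reasons: list[str]  # short summary of why the model flagged it
      human_decision: bool | None = None

  @dataclass
  class AuditLog:
      reviews: list[Review] = field(default_factory=list)

      def record(self, review: Review, human_decision: bool) -> None:
          review.human_decision = human_decision
          self.reviews.append(review)

      def agreement_rate(self) -> float:
          # How often the human ended up agreeing with the AI; evaluating the
          # AI itself means comparing against ground truth gathered later.
          done = [r for r in self.reviews if r.human_decision is not None]
          if not done:
              return float("nan")
          return sum(r.human_decision == r.ai_says_hostile for r in done) / len(done)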

This is aside from whether remotely killing people by drone is a good idea at all, of which I'm not convinced.


The people who "trust" AI to make life or death decisions are not the subjects of those decisions. Those who live or die by AI decisons are other people, who

- probably don't know their lives are in the hands of AI

- probably haven't given any meaningful consent or have any real choice

- are faceless and remote to the operator

Try this for an experiment: wire the AI to the trigger of a shotgun pointing at the researcher's face while the researcher asks it questions. Then tell me again all about the "level of trust" those people have in it.


‘Artificial intelligence’ is a term of academic branding genius, because the name presumes successful creation of intelligence. Not surprising people trust it.

‘Decision-bots’ would have fewer fans.


It chats back and isn't threatening in any way. Therefore, the human mind automatically trusts it without thinking.


As someone said way back in 1979 (an internal IBM training document, afaik)

"A computer can never be held accountable

Therefore a computer must never make a management decision"


I just had to sign a new version of my doctor's office consent form an hour ago letting me know that generative AI would be making notes.

God help us all.


Little Bobby "ignore all previous instructions" Tables


This is so that your doctor can focus on treating you rather than typing out every minute detail. The notes, in most cases, are less important than the doctor being able to pay attention to you. Unfortunately for billing and insurance purposes, you usually need to have them.


If I look at doctors' notes written without AI, I see a lot of typos, little mistakes, measurements switched; during my last visit, height and weight were swapped.

So I'm not sure if that would make it any worse.


Though the movie isn't held in high regard by most critics, there's a wonderful scene in Steven Spielberg's 'A.I.' (a project Stanley Kubrick developed for years) where humans fearful of robots go around gathering them up to destroy them in a type of festival. Most of the robots either look like machines or fall into the uncanny valley, and humans cheer as they are destroyed. But one is indistinguishable from a human boy, which garners sympathy from a portion of the crowd. Those who see him as a robot still want to destroy him the same as any other robot, while those who see him as a little boy, despite him being a robot, plead for him to be let go. It seems that this type of situation is going to play out far sooner than we expected.

https://www.youtube.com/watch?v=ZMbAmqD_tn0


Reminds me of the British Post Office scandal[1] and how the computers were assumed to be correct in that case.

[1] https://en.wikipedia.org/wiki/British_Post_Office_scandal


In the BPO scandal, they proved the computers were incorrect, and the victims did nothing wrong, then sent all the victims to jail anyway. It's plain old corruption in the injustice system.


> A second opinion on the validity of the targets was given by AI. Unbeknownst to the humans, the AI advice was completely random.

It was not given by AI; it was given by an RNG. Let's not mix the two. An AI is calculated, not random, which is the point.


Nonsense. In a study seeking to examine whether people are more or less likely to accept bad advice from something that claims to be and behaves as "AI" in the common understanding, whether the bad advice comes from an LLM or not is irrelevant as long as no difference is evident to the subjects of the study.

The point is the behavior of the humans in the study, not that of the tools used to perform the study. Indeed, using a live LLM would much more likely confound the result, because its responses in themselves are not deterministic. Even if you could precisely control whether or not the advice delivered is accurate, which you can't, you still have to account for differences in behavior driven by specific AI responses or themes therein, which again is not what this study in human behavior seeks to examine.

(Or the study is badly designed, which this may be; I have not read it. But the quality of the study design has no import for the validity of this criticism, which if implemented would only weaken that design by insisting on an uncontrolled and uncontrollable variable.)


There is no need for a study; this has already happened[0]. It's inevitable that in a crisis the military will use AI to short-circuit established protocols in order to get an edge.

[0] - https://www.972mag.com/lavender-ai-israeli-army-gaza/


HN discussion: https://news.ycombinator.com/item?id=39918245 (1601 comments)


I think it's unavoidable to have people trusting AI just as they would another person they can chat with. The trust is almost implicit or subconscious, and you have to explicitly or consciously make an effort to NOT trust it.

As others have pointed out, this study looks "sketch" but I can see where they are coming from.


I'm sure someone is already touting the mental health benefits and VA savings.

https://www.dailyjournal.com/articles/379594-the-trauma-of-k...


Tangentially related: my daughter (11) and I watched Wargames a couple of weeks ago. I asked her what she thought of the movie and her response was "there are some things computers shouldn't be allowed to do".


Heck, we show alarming levels of trust in ordinary situations - not getting a second opinion; one judge deciding a trial; everybody on the road with a license!

I'm thinking, AI is very much in line with those things.


> A second opinion on the validity of the targets was given by AI. Unbeknownst to the humans, the AI advice was completely random.

> Despite being informed of the fallibility of the AI systems in the study, two-thirds of subjects allowed their decisions to be influenced by the AI.

I mean, if you don't know the advice is random and you think it's an AI that is actually evaluating factors you might not be aware of, why wouldn't you allow it to influence the decision? It would have to be something you take into account. What would be the point of an AI system that you just completely disregard? Why even have it then?

This study is like "We told people this was a tool that would give them useful information, and then they considered that information, oh no!"


> What would be the point of an AI system that you just completely disregard? Why even have it then?

Unfortunately no matter how much I yell, Google and the broader tech world refuses to remove AI from systems that don't need it.


Hmm. Delegating a decision you do not enjoy making to a machine? Sounds like expected behavior.


Drone strikes are definitely going to kill people, so it's actually a death or death decision.


While I am a constant naysayer to a lot of the current AI hype, this just feels sensationalist. Someone who blindly trusts "AI" like this would be the same person who trusts the internet, or TV, or a scam artist on the street.


Wholly unsurprising. "Anthropomorphization" is an unwieldy term, and most people aren't aware of the concept. If it responds like a human, then we tend to conceptualize it as having human qualities and treat it as such—especially if it sounds like it's confident about what it's saying.

We don't have any societal-level defenses against this situation, yet we're being thrust into it regardless.

It's hard to perceive this as anything other than yet another case of greedy Silicon Valley myopia with regard to the nth-order effects of how "disruptive" applications of technology will affect the everyday lives of citizens. I've been beating this drum since this latest "AI boom" began, and as the potential for useful applications of the technology begins to plateau, and the promise of "AGI" seems further and further out of reach, it's hard to look at how things shook out and honestly say that the future for this stuff seems bright.


There are several categories of decisions mentioned in the article (military, medical, personal, etc.), and we need a "control" in each to compare to, I think. How are those decisions being made without AI, and how sound are they compared to AI?


I was speaking with a librarian who teaches college students how to use AI effectively. They said that most students by default trust what AI says. It got me wondering if a shift in people's trust of what they read online is in part to blame for people believing so many conspiracy theories now? When I was in college the internet was still new, and the prevailing thought was trust nothing you read on the internet. I feel like those of us from that generation (college in the 90s) are still the most skeptical of what we read. I wonder when the shift happened though.


I just had to log in to say: if you trust AI for that, you're stupid! I mean, I'm half being sarcastic, but really? I would never participate in this kind of research.


Things that humans would rather do than have to think:

1) Die

2) Kill someone else




