Public trust in AI is sinking across the board (axios.com)
41 points by andsoitis 10 months ago | 44 comments



The messaging around AI seems generally very negative to anyone without considerable capital. The idea it can complete lots of tasks that people do to pay their bills doesn't really encourage trust. The two extremes seem to be "it's a fad" or "it is exceptional and will put you out of a job". Neither encourages trust.


The fact that there are a bunch of egomaniacs involved like Musk and Altman doesn't help either. The DeepMind guys (Demis Hassabis, Shane Legg) put a better face on it, but in the end they are all rushing to develop this ASAP and it's not at all obvious it'll be a net positive for society.


A lot has gone wrong with the public perception of "AI" in the last year. But who is to blame?

OpenAI touted itself as a truly open, not-for-profit organisation and made big claims about the "good of all humanity", yet within a year shamelessly transformed into a closed, proprietary, for-profit organisation in bed with Microsoft.

Although models are merely reflections of training data put into them and are essentially a "mirror" held up to humanity, the rush to add "guardrails" and bury everything uncomfortable or distasteful rapidly twisted toward the censorial and political, yet simultaneously proved ineffective.

AI has been turned, in almost every place that it can be, not toward Intelligence Amplification (IA) to create new and interesting jobs, but into a direct replacement for human labour. Not even drudge work and bullshit jobs, but the few remaining things that people enjoy doing. This early taste of what the ruling classes would do with a more powerful or general AI has rightly raised people's hackles.

Overall I think that recent ML advances, and the hype around them, have provided a window of insight. It's exposed the intentions of the powerful, the inevitability of criminal application, the inefficacy of governments and legislators, and the helplessness of the general population in the face of abusive technology.

The fact that it might be immensely useful, save lives, cure diseases and provide research insights into cheap energy has been kicked into the long grass by the greed and selfishness of the usual bunch of megalomaniacs.

The problem isn't that machine learning is an amazing and clever thing, but that humans just aren't ready for it and can't handle it.


The fundamental problem is that politicians are essentially impotent against the influence of NGOs. You can only have a democracy that is informed. NGOs break this fundamental principle. Which makes the hysteria about misinformation laughable. The threat is coming from inside the house.


What does this mean? "NGO" covers absolutely every organization that isn't the government. Churches and bowling clubs count as NGOs as well as political organizations and labour unions. The only places without NGOs are totalitarian, because you can't do liberal democratic politics without NGOs.

> You can only have a democracy that is informed. NGOs break this fundamental principle.

Maybe you meant to write "for profit media" or "Fox News" or something there instead?


It isn't really a problem with profit. More like a problem of the commons. The Molochization of creating an informed society.


If you're saying that the commercial world has more to gain from telling lies than telling the truth, then yes I agree.


Absolutely.


The two problems with generative AI that everyone is rapidly spotting:

- it's not "reliable", for any reasonable definition. If you use it for customer service, it will make false claims, which you are then liable for - or, as a customer of such a company, you have to put up with false claims being made, incur costs, and take legal action.

- it is, however, very suitable for generating large quantities of text and images. This can overwhelm human creators, moderators, and consumers. It has made search and social media less "reliable" as a result.


Generative AI is the logical conclusion of tech's obsession with scale at all costs. Only quantity matters, not whether it actually brings any value to real humans, nor whether its externalities (e.g., large-scale noise generation) are even acknowledged, much less mitigated.


You hype garbage to the moon, people are going to figure it out.

There are good uses for AI. But most of the time it gets shoved out the door without enough thought/care - see Google's recent issue, or the article I saw yesterday about how TurboTax and H&R Block both have "helpful" AIs to answer tax questions, which of course are completely wrong.

That never should have been deployed.


Bingo. And the fact that the horde comes after anyone who casts a critical eye, branding them as anti-tech or too stupid to possibly understand the massive power of AI, doesn't help matters one bit.


As is so often the case where there are two “camps”, the public conversation degrades quickly because of binary extremes taken up by enthusiasts and naysayers.

The reality is that there is truth in both. LLMs are revolutionary, and are already enabling things that didn’t seem possible a few years ago. I’ve experienced this personally as I use GPT4 as a research and troubleshooting assistant.

But the fact that it seems so good on the surface (because sometimes it is) is what also makes it so dangerous. And the doomsday crowd rightly points out that this isn’t what many people think it is, and that people should not trust it in the way many people blindly have been.

I think the public has been primed by a broader tendency towards polarization on social media to have exactly the wrong kind of conversation about AI tech as it emerges.

The truth is some combination of all of the above. There are reasons to be excited. Reasons to be concerned. Transformational use cases. Dystopian use cases.

What does worry me is that “the horde” seems like the default state of all public discourse at this point, which does add some credence to the doomer narrative. But the validity of that stance seems less about AI, and more about the current state of public social discourse.


But isn't this the way everything seems to come nowadays? Couple years ago everything was built on a blockchain, years before everything was going microservices, earlier everything was moving to the cloud... while every fad had its merits and is here to stay, none of them lived up to the level of hype in which they came wrapped - and rightly so.

Edit: one example being the touted IBM-Maersk blockchain solution, ground-breaking as it was in 2021 then quietly retired in 2022.


"Couple years ago everything was built on a blockchain"

I've never worked anywhere where blockchain was used.

"years before everything was going microservices"

The vast majority of software architecture has gone microservices.

"earlier everything was moving to the cloud"

I've worked many places where the cloud was used but never 100%.

You are right that blockchain was a fad; everything else you cited is valid and remains.


I agree this all fits with the blockchain as well.

Which works, because I generally think any company that used that word was either misusing it for buzzword bingo, or the project was pointless, or it was a scam.

The difference is that blockchain enabled scams and massive energy waste. AI enabled scams, massive energy waste, and the destruction of any chance of trust in communication on the internet.

Sigh.


"Better to ask forgiveness than permission" has ruined us.


Let’s be honest. No one asks for forgiveness and there are no consequences of any kind, especially if they’re externalities to your business.

Ruin the internet or something else for an extra $30k in profit. It’s fine. That’s what we do!


Eh, it's given us a tremendous amount of benefits across the internet as a whole. It's only once it enters into the territory where both permission and forgiveness are definitely denied that it becomes a problem - as well as the US insistence that free speech means an unlimited right to lie about things including your own products.


It's maybe a problem of scale, since I can't easily name any entity that could "forgive" a questionable decision made by Google.


Pre-2000, Java and Object-Oriented programming were the big fad.


This is mostly about the tech companies behind AI. I'm honestly shocked the level of trust isn't lower. The company that was formed to be the face of responsible AI is visibly making a slow-motion heel turn. For every other (major) company developing its own AI models, the default position should already be distrust, based on their history.


>Globally, trust in AI companies has dropped to 53%, down from 61% five years ago.

This trust survey doesn't really make sense given the fast-moving pace of AI development.

There was no OpenAI and LLM proliferation five years ago, so at best this survey is only talking about very surface-level, generic sentiment regarding unspecified/undefined AI.


It's a survey of perception, not of performance or functionality, so it's still a very relevant change.


There's an effects lag with all these things. I don't think people have actually experienced the functionality in any meaningful way yet, only the "narrative" and bluster.

Personally I've had almost zero real-world encounters with "AI", except maybe a call-waiting system and making some funky images for our blog.

When significant numbers of people do encounter AI it's going to be diametric and possibly cause a deeper split in society. There will be the true believers and devotees who say "AI saved my life!". And it will be ruinous for many too.

Which "truth" or whose voice gets heard will be a battle of perceptions, and likely AI will play a part in this too. It's possible the giant tech companies will rue they day they ever heard the words "neural network" or "transformer model".


I promise you that anyone voting in this poll couldn't differentiate between what does and does not use "AI."

Do people trust this code less now than they did back when it was written in 2017:

  from numpy import loadtxt
  from xgboost import XGBClassifier
  from sklearn.model_selection import train_test_split
  from sklearn.metrics import accuracy_score
  dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
  X = dataset[:,0:8]
  Y = dataset[:,8]
  seed = 7
  test_size = 0.33
  X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
  model = XGBClassifier()
  model.fit(X_train, y_train)
  y_pred = model.predict(X_test)
  predictions = [round(value) for value in y_pred]
  predictionsaccuracy = accuracy_score(y_test, predictions)
  print("Accuracy: %.2f%%" % (accuracy * 100.0))

Data: https://github.com/npradaschnor/Pima-Indians-Diabetes-Datase...

It's all just vibes man...


If the choices are "it worked in 2017" and "it was written by AI" then clearly it was AI:

1) The program is invalid. The "predictionsaccuracy" should be "accuracy".

Therefore it cannot be working code from 2017. Furthermore,

2) The loadtxt() does not read the linked-to CSV file because of the header line. NumPy reports: 'ValueError: could not convert string 'Pregnancies' to float64 at row 0, column 1.'

The fix is to add skiprows=1 to the loadtxt().

As a couple of style notes:

3) Modern Python prefers f-strings: print(f"Accuracy: {accuracy * 100.0:.2f}%")

4) The y_pred contains only 0s or 1s so the following is unneeded

  predictions = [round(value) for value in y_pred]
and y_pred can be passed directly to accuracy_score().
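Putting those fixes together, a corrected sketch (assuming the linked CSV really does have that single header row) would be roughly:

  from numpy import loadtxt
  from xgboost import XGBClassifier
  from sklearn.model_selection import train_test_split
  from sklearn.metrics import accuracy_score
  # skiprows=1 skips the 'Pregnancies,...' header line
  dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",", skiprows=1)
  X, Y = dataset[:, 0:8], dataset[:, 8]
  X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=7)
  model = XGBClassifier()
  model.fit(X_train, y_train)
  # predict() already returns 0/1 labels, so no rounding step is needed
  accuracy = accuracy_score(y_test, model.predict(X_test))
  print(f"Accuracy: {accuracy * 100.0:.2f}%")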


What? It was written by a human in 2017 and worked then

The code is from: https://machinelearningmastery.com/xgboost-with-python/

Point being, this was firmly "AI" in 2017. It is still widely useful today.

Is it no longer AI? If so why?

Do people believe in the code above less now? If so, why? If not, and it is AI, then the title is wrong.


The code you posted does not work. Try it out. One of these two lines is incorrect:

  predictionsaccuracy = accuracy_score(y_test, predictions)
  print("Accuracy: %.2f%%" % (accuracy * 100.0))
Either the first line should be:

  accuracy = accuracy_score(y_test, predictions)
or the second line should be:

  print("Accuracy: %.2f%%" % (predictionsaccuracy * 100.0))
I tried the corrected code against the CSV data file you pointed to, and got:

  ValueError: could not convert string 'Pregnancies' to float64 at row 0, column 1.
You linked to a course that costs US$37. I don't see the code, though one of the images shows it uses that data set. Still, 1) tells me there's a copy&paste error, and 2) tells me that the dataset was somehow massaged.

> Is it no longer AI? If so why?

Okay, so I misunderstood your point. I thought your 'I promise you that anyone voting in this poll couldn't differentiate between what does and does not use "AI."' was in regards to the modern wave of using AI (typically LLM models) as a code assistant. Could someone tell if the code you posted was generated by an AI?

I could tell that the code you posted was not working code, so if the choices were "working code written by human" and "output from an AI model", it clearly belongs to the latter.

That is not your point, and I was wrong.

However, the point of the article is not "what is AI?" but more like "how much should you trust the results of everything described as AI"?

XGBoost was not generating false court citations. XGBoost was not generating images where the only scientists were white men. XGBoost was not submitting Stackoverflow answers. XGBoost was not classifying black men as gorillas. XGBoost is not the sort of AI that most people are thinking of when you say "AI".

FWIW, I'm of the classical school, and I consider alpha–beta pruning to be part of AI.


Depends. Is the size of pima-indians-diabetes.csv less than a terabyte? Then trust is the same. Else, less.


What if I have 40TB of tabular data?

XGB still beats Attention networks, CNNs, VisionTransformers etc...

Again, the point being that "AI" doesn't mean anything, so making broad, sweeping statements about trust or whatever is nonsensical.


The name "AI" should be revoked from most of the current tools represented as leveraging it. They aren't exhibiting their own intelligence; most of these AI bots are just scripted ETL models, and vendors are really just patching them as embarrassing "media incidents" pop up. The needless habit of modern IT of presenting itself as innovative through incorrect terminology and buzzwords is destroying trust in technology altogether. So are the political hurdles in the way of getting simple and useful tools that truly aid human productivity, rather than gimmicky ideas hidden behind user data tracking and a monthly subscription fee.

We have let too many fraudsters into tech, and that's what is eroding trust... From Elizabeth Holmes to crypto, once people stop really respecting technology, you'll see a ton of scammy tools labeled as "AI" popping up in traditionally trustworthy places like bootleg vape shops in toy stores. So many of these tools use copyrighted content curated outside the original EULAs of Reddit, the web, Twitter, etc... There are huge legal issues with stealing content from individual contributors as well, which just highlight the opportunism and fraud in modern IT. They're very lucky regulators are obliviously ignorant of what is actually happening here.

What tech companies are referring to as "AI" is being weaponized against consumers (to raise the cost of living and lower the value of human labor) more than it is helping them. This is the main reason people will increasingly mistrust everything associated with it going forward.

This future isn't looking bright.


> needless habit of modern IT to present itself as innovative

Yes, the destructive potential of this competitive vanity, form over function, appearances over substance, is greatly underestimated. I think many people would not care if technology actually worked at all, so long as it gave the appearance of doing so.


First, many broad-based public opinion polls end up largely reflecting the currently dominant mainstream media themes, which they then report further on, amplifying the same themes.

Second, the idea of "trust in AI" is rather bizarre to anyone who understands how the technology broadly works. It's like the idea of "trust in Wikipedia" when in reality Wikipedia can sometimes be wrong. It can also be a very useful tool if you understand its limitations — just like AI. Basically, the article is confirming that AI has been largely misrepresented to the public and was thus misunderstood. Now, through hands-on experience, people are gaining a more functional understanding of the limitations and practical utility of AI tools.


Hopefully everyone involved regrets calling "Fancy Textual and Image Autocomplete" "artificial intelligence" after all this, as they move on to their next ridiculous VC-funded sinecure.


Am I missing it or is there no link to the source research? I'm curious about the information they presented and have questions.

What do they consider an AI company? What industries are surpassing tech in trust?


According to this Gallup poll last year, anyway, small businesses generally are twice as trustworthy as tech businesses.

https://news.gallup.com/poll/508169/historically-low-faith-i...


My dog is twice as trustworthy as tech businesses.


It sinks for me because it doesn't return what is asked for. It was tight-lipped about Viagra and Rogaine being originally developed for hypertension.


The trust isn't there because it's a gimmicky toy and not useful for anything serious.

You have to put up with:

1. Incorrect information.

2. Political proclivities of the company hosting it

3. Spying

Anyone who tells me they use LLMs for anything other than a curious experiment makes me suspicious. I had one of my PMs tell me he had an LLM make a meeting summary from the minutes, so I grilled him on it since I was in that same meeting. The LLM doesn't have context for what's important and will harp on things it thinks are necessary. That PM now writes up the summaries himself, because what the LLM made was useless.


Useless? Or not useful enough?

These anecdotes represent a bad taste that may linger for a long time, even while the actual performance of LLMs moves from curiosity to utility to indispensability.

I’m allergic to FOMO-based thinking but I am convinced we’re already at the “useful” stage for many use cases, when combined with human judgment.


One of the things LLMs can't imitate is having serious stake in the game, and therefore responsibility for the outcome.

The PM has a stake - if they get this stuff wrong frequently, their job is at risk. The LLM provider is probably not getting paid much for the work - nothing, if it's a freely available model - and since the fine print says "do not rely on factual accuracy", the provider will claim no responsibility for its mistakes.

(And the fine print will continue to say that for the foreseeable future, because even though LLMs might get more accurate, the providers will never want liability for errors unless the user is paying through the nose.)


If the PM in question copy-pastes LLM output into a status report, they won't last long.

If they read it, realize the LLM output missed a key point or failed to mention an underlying trend or concern, and disagree with the emphasis in a section, then at that point the PM is engaging their own brain and fixing the issues, but is likely way faster to get to a good place. At that point the LLM is acting like a semi-competent intern.


Useless as in it harped upon information everyone was already aware of and completely ignored the mitigation strategies that were discussed.

The PM ended up going back and listening to the recording, because the strategies were the most important part of the discussion.



