
The media are unwittingly selling us an AI fantasy - chobeat
https://www.theguardian.com/commentisfree/2019/jan/13/dont-believe-the-hype-media-are-selling-us-an-ai-fantasy
======
NarcolepticFrog
I agree with the sentiment that the discussion about AI and Machine Learning
should not be entirely driven by industry.

At the same time, this article seems to be a bit down on AI itself, and part
of their message is that AI _doesn't_ provide "relevant and competent"
solutions to problems. It also sounds like they are writing off real work
(both in industry and academia) focusing on real ethical concerns with AI and
what can be done to address them (e.g., the FAT* conference, a growing number
of sessions on fairness, privacy, and other related topics at NeurIPS and
ICML, etc).

I think the most important issue is educating the general public about AI and
giving them familiarity with what types of things are automatable, what things
can be learned from their data, where and how AI is being used in the real
world currently, etc. A big part of this is for the mainstream media to be a
bit more self-guided in their coverage.

A final thought: one of the suggestions from the Reuters article is that we
should hear more from scientists and activists in the media. This seems a bit
troubling to me, since in ML and AI research there are very strong ties
between academia and industry (and often people move fairly freely between the
two). I'm not sure we would hear a significantly different narrative if we
talked to researchers in academia...

~~~
CuriousSkeptic
There is this humor show (The Fix, on Netflix) where they were supposed to
“fix” AI.

I think it’s quite telling that the entire show instead ended up talking about
robotics. I don’t think any of the participants reflected at all on how AI is
used today in the systems they interact with.

So that’s probably the first step in educating the public. Find a basic mental
framework for thinking about AI that doesn’t involve robots or skynet.

~~~
NarcolepticFrog
That's a really interesting point - it does seem like most people immediately
jump to skynet or our robot overlords whenever the topic of AI comes up. I
don't have any really solid suggestions for what a good mental framework for
thinking about AI would be, but I think giving an alternative to these
unrealistic versions would be super helpful.

~~~
vlaak
I always explain it using recommendation engines. You buy a vacuum and you get
suggestions for 10 vacuums you might want to get next. Not so good. You buy a
film with Tom Cruise, and you see 10 other Tom Cruise movies suggested. That's
not so bad.

------
chobeat
I suggest this reading to anybody interested in this topic:
[https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3078224](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3078224)

It tracks the history of this narrative and explains who profits from it.

This phenomenon has a clear political and ideological root, and journalists,
politicians and engineers should fight against it together.

If you want to explore more related topics there's plenty of related content
in a reading list I've been developing in the last few months:
[https://github.com/chobeat/awesome-critical-tech-reading-list](https://github.com/chobeat/awesome-critical-tech-reading-list)

~~~
klibertp
From the abstract:

> In practice, the confusion around AI’s capacities serves as a pretext for
> imposing more metrics upon human endeavors and advancing traditional
> neoliberal policies. The revived AI, like its predecessors, seeks
> intelligence with a “view from nowhere” (disregarding race, gender, and
> class) — which can also be used to mask institutional power in visions of
> AI-based governance.

Later:

> The manufactured AI revolution has created the false impression that current
> systems have surpassed human abilities to the point where many areas of
> life, from scientific inquiry to the court system, might be best run by
> machines. However, these claims are predicated on a narrow and radically
> empiricist view of human intelligence. It’s a view that lends itself to
> solving profitable data analysis tasks but leaves no place for the politics
> of race, gender, or class. Meanwhile, the confusion over AI’s capabilities
> serves to dilute critiques of institutional power. If AI runs society, then
> grievances with society’s institutions can get reframed as questions of
> “algorithmic accountability.” This move paves the way for AI experts and
> entrepreneurs to present themselves as the architects of society.

This is implausible, to say the least. Isn't this basically a conspiracy
theory? Not to mention, huge tech companies were already huge before the AI
hype. I'm 4 pages in and I still don't know what the author's point is, other
than promoting conspiracy theories and spreading FUD about AI.

~~~
chobeat
They are not conspiracy theories because they don't assume a concerted effort
from these entities, just a common interest, and their actions converge to
reinforce the same phenomenon.

The author is a systems biologist writing about sociology, so the paper should
be read with the vocabulary of sociology, not of colloquial language.

~~~
klibertp
> They are not conspiracy theories

Agreed, I've read a bit more:

> The pragmatic interest on the part of industry is natural, since the
> behaviorist approach that has appealed to many AI researchers aligns with
> the profit motives of surveillance capitalism.

Still, the language is loaded, the examples of claims (of AI proponents) are
cherry-picked, the limitations of technology are misrepresented and the
results are dismissed because they don't take "historical context" or
"emotions visible clearly on the faces" into account.

It doesn't read like a scientific paper at all - or is this what papers in
non-STEM fields look like in general?

~~~
chobeat
The language is loaded because it's part of an ongoing discourse that, as your

> is this what papers in non-STEM fields look like in general?

suggests, you're not familiar with. And yes, this is quite a good paper by
sociology standards. It comes from a STEM guy, so I think that's why I like
his style of writing.

> the examples of claims (of AI proponents) are cherry-picked,

This is not hard science, where you have to find hard rules and a single
counter-example breaks your argument. He's commenting on a trend that we can
all relate to. Is it a vocal minority or is it actually the vast majority of
the industry/media? For that you can go back to the data, but that's not the
goal of the paper, and it doesn't invalidate his thesis anyway, as long as the
narrative dominates the public discourse.

~~~
klibertp
> you're not familiar with.

Agreed - unfortunately, I'm not familiar with the field at all :(

> it doesn't invalidate his thesis anyway

But, but, his thesis is that we're inevitably heading in the direction of a
dystopia with "robo judges" and scientific pursuit being judged based on
"metrics"... And that "surveillance capitalism" companies and the proponents
of AI are power-hungry demons who plan to use AI to force some unspecified
"psychology model" on the society as a whole!

Well, maybe that's how it is - I suspect it's not like that, but I can't know
for sure. My objection is that, whatever the plans of evil corporations and
traitors-to-humanity scientists, our current technology is nowhere near
enabling any of the "changes to society" the author fears. The "superhuman
intelligence" is not going to surface for a long time, the "bots" which "write
quality editorials and replace journalists" will probably be realized
something like 5 years before the "superhuman intelligence" mentioned, so also
in the (very) far future. As for funding science, it's not AI (nor AI
proponents) who come up with "metrics", but people, and they did so for the
past 2 centuries at the very least. Yes, it's dangerous, but it has nothing to
do with AI. The paper makes it sound like the hype around the AI is an
imminent danger to the society-as-we-know-it, but isn't it infinitely more
probable that this hype will follow thousands of others and simply die out?

I guess what I want to say is that the difference in complexity between
current ML-based solutions - also impressive in their own right - and any kind
of understanding is so vast that worrying about what will happen when AI
becomes capable of the latter in no way justifies the sensationalist tone of
the article.

Well, I could be misinterpreting the author due to my unfamiliarity, so maybe
it's not particularly sensationalist for the field and I just misinterpreted
it.

~~~
chobeat
> But, but, his thesis is that we're inevitably heading in the direction of a
> dystopia with "robo judges" and scientific pursuit being judged based on
> "metrics"... And that "surveillance capitalism" companies and the proponents
> of AI are power-hungry demons who plan to use AI to force some unspecified
> "psychology model" on the society as a whole!

I would argue that this is the present, not the future.

> Well, maybe that's how it is - I suspect it's not like that, but I can't
> know for sure. My objection is that, whatever the plans of evil corporations
> and traitors-to-humanity scientists, our current technology is nowhere near
> enabling any of the "changes to society" the author fears.

You don't need advanced technology for that. The existing technology is more
than enough, and we're already seeing the devastating effects on society. I
don't think the actual progress of technology is relevant to the author's
thesis: if it looks intelligent, they will apply the narrative and profit from
it.

The author is talking about how the promise of a yet-to-come AGI helps to
build a narrative today that is used to exploit people. This is one thing. The
dystopia is a critique of the narrative itself, which would lead to even
further deterioration of the social fabric if it keeps being pursued. This is
completely independent of the fulfillment of the promise of AGI or similar.
As long as the narrative is believable, it will be used.

------
chobeat
The problem is that researchers and engineers rarely engage in the public
debate except when they need to advance corporate interests.

The debate sees corporations and the tech elite on one side and artists,
activists and philosophers on the other, with journalists split between the
two. Engineers are used by either side, but they don't have their own voice,
mostly because they don't have informed opinions or the cultural means and
interest to join a non-technical debate as a cohesive force.

The result is that the whole debate is conducted mostly by people who have no
idea how this stuff really works and cannot separate marketing mumbo jumbo
from the actual practice of building "AI" systems.

It's therefore very refreshing to see the work of artists like Hito Steyerl or
ssbkyh, who actually learn how to work with deep learning and other techniques
to create critiques of the existing narrative.

~~~
NotAnEconomist
> mostly because they don't have informed opinions or the cultural means and
> interest to join a non-technical debate

This is somewhere between slander and trying to explain away other social
groups bullying engineers.

The truth is much simpler: engineers have responded to being bullied out of
the debate by simply ignoring it, and implementing AI without concern for the
results of that debate. Engineers have that power -- they can unilaterally
change society by implementing a new kind of mind without approval or buy-in
from other parties.

So I think my position is the reverse of yours: if business leaders,
activists, etc want to meaningfully impact the AI debate, they should engage
with the engineers actually building it -- rather than having a debate among
themselves.

~~~
chobeat
> The truth is much simpler: engineers have responded to being bullied out of
> the debate by simply ignoring it, and implementing AI without concern for
> the results of that debate. Engineers have that power -- they can
> unilaterally change society by implementing a new kind of mind without
> approval or buy-in from other parties.

Truth is never simple. Engineers have been "bullied" out of the debate (this
is not really how I see it, but ok) because they often hold beliefs that
render the debate impossible. The narratives that "engineering is a pure
discipline" or that "tools have no political color" are still strong despite
countless counter-examples. When software engineers are left alone in the
hands of managers and their interests, the social devastation that follows is
evident just by looking at the news.

> So I think my position is the reverse of yours: if business leaders,
> activists, etc want to meaningfully impact the AI debate, they should engage
> with the engineers actually building it -- rather than having a debate among
> themselves.

This is a real and pressing concern for many of them, but it's not easy. I'm
an engineer and I'm trying to do just that, or at least bring the existing
discourse to the engineers if I can't bring the engineers to the discourse.
But believe me, the cultural and personal resistance is extremely strong,
first of all because it forces them to re-evaluate their belief system and
take responsibility for what they are doing and what they did in the past.
Staying in an ethical comfort zone where you can ignore the consequences of
your actions is much easier.

~~~
sanxiyn
Well, but tools do have no political color. That's why The Open Source
Definition has "No Discrimination Against Fields of Endeavor", period.

~~~
TheOtherHobbes
I can't be the only person who has seen FOSS people complaining bitterly that
their work has been used in projects they object to.

"No Discrimination Against Fields of Endeavour" - like FOSS itself - is an
absolutely huge win for the corporates.

Related: I used to know someone who designed missile guidance systems. His
work was a purely theoretical problem solving exercise for him until he saw a
missile he had worked on being used in news footage of a war.

That was when he realised that even though he wasn't discriminating against
some fields of endeavour, the technology he was building most certainly did.

~~~
jessaustin
I have sympathy for anyone who regrets his mistakes, as I certainly regret my
own. However, it can't really have been a total surprise that a _missile
guidance system_ was used in a war?

~~~
mlthoughts2018
Depending on the circumstances it could absolutely be.

For example, many defense research labs have “red team” projects where new
capabilities are developed strictly to understand adversary capabilities,
feasibility / cost to extend a legacy system with modern tech, and many
similar things. Kind of like Myth Busters but applied to questions about an
adversary’s capabilities.

Some of these research labs are even joined with academic institutions that
carry with them a strict ethics mandate that any and all such work can only be
theoretical or defensive in nature, to assess and defend against threats,
anticipate new threats or debunk claimed capabilities from existing
adversaries, but absolutely _never_ to carry out an offensive agenda, enhance
existing attack capability or anything similar.

In a situation like that, you absolutely could be greatly surprised & upset to
learn your defensive “myth busters” research is turned around and repurposed
by another team or something to enhance attack capabilities.

It could be similar with computer security as well. Imagine putting your best
effort into developing an attack, exploit or malware because you think it’s
purely to determine if something could be done by an adversary, or to
highlight a weakness purely for defensive purposes... only to find that it’s
used to directly attack someone else after the research leaves your hands.

~~~
jessaustin
Again, this isn't meant as harsh criticism, since I know how easy it is to
fool oneself. However, we're talking about another level of cognitive
dissonance entirely with defense research lab staff discovering only too late
the purpose of "defense research". Maybe there was a sign on the door that
said "for peaceful purposes only", but the main thing to remember about the
war industry is that they lie.

~~~
mlthoughts2018
I think you are imagining that it’s some small, throw-away comment, but it’s
not necessarily.

MIT’s Lincoln Laboratory for example was originally opposed by the university
president at the time with huge community backlash against the university
becoming connected to a military research lab.

Part of the original charter of the lab was that its scope of operation was
very, very strictly restricted to US air defense, and very strictly _not_ the
development of offensive capabilities. There was even a huge report
commissioned by the university to detail exactly the defense needs and set
boundaries around them in terms of what projects could possibly be approved
for funding at the lab. The “no attack” component of this was a giant, first-
order constraint of the whole multi-million dollar endeavor to even create the
facility at all.

On the other hand, I do agree many other cases could be like what you
suggested. Just saying not all - and some of them would be directly, loudly
predicated on "defense only" mandates where it would be a huge surprise if the
research was subverted later.

~~~
jessaustin
The thread hypothesis is that regardless of the marketing, work in weapons-
oriented research will create weapons. Does Lincoln undertake such research?
Could it do so? You introduced LL into the conversation, presumably because
you wouldn't be shocked to see weapons come from there too. I don't know much
about LL, but I wouldn't be shocked either. I do believe you when you tell me
that some people working there _would_ be shocked.

~~~
mlthoughts2018
Within the first 5-15 years of LL’s existence though, I would have been
dramatically shocked if it engaged in projects that created offensive
capabilities.

------
dwiel
Replace AI with Science and the article's points are all pretty much the same.
The media report on cancer cures, new batteries, new energy tech, etc. all the
time. It's the same thing. Grand visions, money to be made, stories to sell.

------
throw2016
Everyone reading here knows there is no 'AI', but they are happy to peddle, or
sit back and watch, as pattern matching used to identify nudity is wildly
hyped up as 'AI'. This is the same kind of fraud as Theranos was guilty of.

There is nothing in the current programming paradigm to create an 'AI' as the
'world understands the term' but individuals and groups blinded by profit and
greed are 'growth hacking' redefining and blurring the meaning of words to
suit their commercial agendas.

The technical community at large would come out against this kind of
widespread misinformation and abuse but they don't as everyone is looking for
a gig, job, contract or more ominously some 'illusory power'. But that doesn't
make it any less abusive of discourse.

How can we make an AI when we don't even understand human intelligence
properly? What in current programming languages allows you to create anything
that can 'think' and make decisions? How is processing data and matching
patterns 'thinking' or AI in any way or form? A culture of hubris sees
programmers constantly engaged in hype, overestimating their tools while
underestimating the basic human intelligence required for even mundane tasks
like driving.

Once you redefine and twist words, they lose all meaning for communication and
in the end you lose all credibility. But this seems to be a gold rush, and as
long as some have made money along the way, who cares? If AI happens, it will
be from decades of hard research in the scientific community, like all
breakthroughs happen, not in the trade.

------
YeGoblynQueenne
>> Why do people believe so much nonsense about AI?

That is the central question. Why is it that people are prepared to believe in
all the wild fantasies that the marketing arms of the tech industry come up
with? Shouldn't the expectation be that the majority would instead be more
cautious and avoid believing all the overblown promises, especially when they
repeatedly fail to be fulfilled?

~~~
pfortuny
Today we believe in SCIENCE (whatever that means). Hence, we believe what any
of its priests says unless countered by another (more famous) priest.

~~~
mwfunk
I wish I could say that people believed in science. Priests (literal ones, not
metaphorical ones like your strawmen) seem to have more influence over public
policy in much of the world though, including America, and more often than not
to the detriment of all.

~~~
noobiemcfoob
The benefit of being more honest about their role in society: they get to
exploit it more thoroughly.

------
agent008t
Rule of thumb: if someone is talking about "AI" and not "Machine Learning",
"Pattern Matching and Recognition", "Search heuristics" or "Statistics", it is
most likely going to be bullshit.

~~~
sanxiyn
On the other hand, OpenAI is not bullshit, so...

------
thanatropism
I'm just going to leave this picture of the elephant in the living-room:

[https://en.wikipedia.org/wiki/Hubert_Dreyfus%27s_views_on_ar...](https://en.wikipedia.org/wiki/Hubert_Dreyfus%27s_views_on_artificial_intelligence)

~~~
B1FF_PSUVM
Eh, this week Scott Adams' Dilbert has been going that way:

[https://dilbert.com/strip/2019-01-10](https://dilbert.com/strip/2019-01-10)

[https://dilbert.com/strip/2019-01-12](https://dilbert.com/strip/2019-01-12)

------
jayd16
Is this a real issue though? The article skips past what the problem is and
focuses on why it exists by accusing the tech industry of writing articles
about tech.

Besides the philosophical argument of whether pattern matching counts as AI,
is there really a problem here? I have not heard of anyone affected by AI
becoming a buzzword. Products are still evaluated on what they can actually
do, so who cares?

Siri and Google can now take voice commands. I don't know of anyone who
actually expected an intellectual conversation. They were never marketed as
such.

~~~
skywhopper
Actually it is somewhat dangerous in that once politicians buy in, they start
pursuing policies built around the assumption that this sort of thing is real.
Billions will be wasted on military projects pursuing AI-enabled features that
are nowhere close to being ready; police are blowing money on facial
recognition systems based on misleading stats; and we already see local and
national governments salivating over self-driving cars as a solution to public
transportation woes. Tax money is and will continue to be given away to
corporations who are selling the snake oil of self driving cars rather than
that money being invested in real solutions for public transit, or even basic
maintenance.

------
tim333
>...it goes like this...on balance AI will be good for humanity. Oh – and by
the way – its progress is unstoppable ... The truly extraordinary thing,
therefore, is how many apparently sane people seem to take the narrative as a
credible version of humanity’s future.

I don't see what's extraordinary at all. AI's coming and will probably be net
good. There may be some inaccuracies in how it's reported and corporations are
not all angels but for what field is that not true?

Also he says Theresa May has drunk the Kool-Aid by agreeing to fund AI, but
what she's funding is machine learning for cancer detection, which is getting
results like this:

>In tests, it achieved an area under the receiver operating characteristic
(AUC) — a measure of detection accuracy — of 99 percent. That’s superior to
human pathologists, who according to one recent assessment miss small
metastases on individual slides as much as 62 percent of the time when under
time constraints. [https://venturebeat.com/2018/10/12/google-ai-claims-99-accuracy-in-metastatic-breast-cancer-detection/](https://venturebeat.com/2018/10/12/google-ai-claims-99-accuracy-in-metastatic-breast-cancer-detection/)

That's for metastatic breast cancer, which kills 500,000 people a year.
Overall Naughton's arguments seem a bit silly.
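For anyone unfamiliar with the AUC figure quoted above: it is the probability
that a randomly chosen positive case is scored higher than a randomly chosen
negative one, so 0.5 is random guessing and 1.0 is a perfect ranking. A
minimal sketch with made-up classifier scores (not the study's data):

```python
def auc(labels, scores):
    """Area under the ROC curve via its rank interpretation: the fraction of
    (positive, negative) pairs where the positive scores higher (ties count
    half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores: 1 = metastasis present, 0 = absent.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
print(auc(labels, scores))  # 11/12: one positive is outranked by one negative
```

Note that a high AUC says the model ranks cases well, not that it is clinically
useful at any particular decision threshold, which is roughly xg15's point
below.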

~~~
xg15
To be fair though, those numbers are the results of a single evaluation of the
algorithm against a single group of human experts:

> _In the setting of a challenge competition, some deep learning algorithms
> achieved better diagnostic performance than a panel of 11 pathologists
> participating in a simulation exercise designed to mimic routine pathology
> workflow; algorithm performance was comparable with an expert pathologist
> interpreting whole-slide images without time constraints. Whether this
> approach has clinical utility will require evaluation in a clinical
> setting._

Certainly an impressive feat of image recognition, but far from
revolutionising cancer diagnosis. It's also not clear to me that this would
actually diagnose cancer more accurately if you factor in the ability of
experts to consider other things than just the scans.

~~~
tim333
I guess so though it seems promising. Not quite sure how it'd work in real
life. I've got two friends who went to the doctor with a headache and sore
throat and were told it was nothing to worry about and then a year or two
later found it was cancer - one died rapidly, one presently having a bunch of
surgery. It would be good if there were a better way of screening that sort of
stuff.

------
matthewfelgate
What a stupid article. It attacks Theresa May for saying AI will be good for
healthcare; it will be.

------
doose_droppa
A database parser is not AI; it is a parser. When a script compares real-world
cause-and-effect relationships to database parses, then rewrites the database
to accurately resemble real life, that is the beginning of AI. Learning is an
aspect of intelligence, and the two feed back into each other in a less than
simple manner. I think machine learning is a glimmer of the holy grail being
glimpsed.

------
matt4077
I guess such "counter-hype" feels like an attractive position to many, just as
general cynicism seems to be the mindset of our times. Maybe because people
think being contrarian and/or negative makes them appear smart?

I don’t necessarily buy into any predictions of what machine learning will
accomplish in the future. But just the examples already available today are
quite stunning, especially in images and language.

In any case, I don’t see how "the media" are at fault here. I see far more
"hype" of AI among the tech community than in the larger media outlets. The
possibility that AI could transform our economies certainly exists, and it
would seem prudent to nurture the debate about the future of work even in the
absence of certainty.

~~~
chobeat
What is the tech community? The CEOs or the engineers/researchers/data
scientists? Because they are not by any means the same entity.

------
peterdavenport
This sort of nonsense bothers me. So the Guardian, a member of the media, is
telling us that the media are unwittingly selling an AI fantasy? And the
fantasy is that it's positive? Most of the news I see about AI is
hand-wringingly pessimistic, treating AI as a disaster that will wipe out jobs
and maybe the human race, which is the opposite of the article's thesis. This
sort of article is just trying to grab attention; it has no substance and took
no research or deep insight to write.

And the idea that somehow we can get law before we work out the ethics of AI?
You can't just have fiat law pulled out of a hat that works. How can you
possibly expect the law to proactively regulate AI before we know what it is?
Because at this point we don't know what it is; it's an evolving thing, just
like the internet was when it first came out.

Frankly it's disappointing that drivel like this can make it onto the front
page of Hacker News.

~~~
rchaud
> So the guardian, a member of the media, is telling us that the media are
> unwittingly selling an AI fantasy?

No, it is an opinion piece written by an academic that summarizes the findings
of a research study of how media outlets covered AI in their articles.

