
‘The discourse is unhinged’: how the media gets AI wrong - jonbaer
https://www.theguardian.com/technology/2018/jul/25/ai-artificial-intelligence-social-media-bots-wrong
======
dalbasal
This sort of sensationalism, in general, is so completely ingrained into
reporting that it's hard to separate from journalism itself. It's part of the
journalistic style, an inevitability of the medium.

We see an article, click a link, buy a paper, but we are never committing to
more than a second or two of attention at a time. The headline makes you read
the byline. The byline gets you reading the second sentence. At every step, a
journalist loses most of her readers. The winning strategy is to increase
"story," play to existing opinions, be sensational, use bait... There are
flavours and degrees, but by and large writing like this is an inevitability
of the medium, of journalism.

If you are reporting on programs that "developed a type of machine-English
patois to communicate between themselves..", you are almost certainly
reporting on whether or not the Skynet singularity is coming.

If we want something different, I think we'll need a medium change.

Personally, I was hoping e-readers would catalyze a new medium subtype: 30-100
page mini books. A lot of "news cycle" journalism, the drip, drip, sensation
of the day, just isn't a good way of understanding anything.

What happened between North Korea and the US this year? What's been happening
with the Syrian war? Brexit as of July 2018...? I would be very happy to
exchange the daily/weekly news bulletins for monthly/quarterly mini books.

~~~
DanielBMarkham
But most people don't want that, sadly.

Instant mass media has shown us that people want to consume media about other
real people -- that they admire in some fashion -- going through life. Not
reporting, explanation, facts, or science.

So reporters have responded with tweets and instant feedback on anything
that's happening. Many times even when nothing is happening, they know they
have to put out some kind of personal emotional response to keep the audience.

Newspaper and in-depth reporting worked for the most part because nobody
really cared about the author. They cared about the material. That's flipped
upside down now. The material doesn't matter as much as the author. Even the
really good occasional long-form material you might find is all geared to get
you into a personal, minute-by-minute "relationship" with the author. To
become part of the tribe.

~~~
dalbasal
Oh I don't know... there are wants and there are wants.

I wouldn't put it down to some moral failing of humanity. The problem is more
of a systemic one than a fundamental one.. I think.

The way we use our phones/computers, our time and such... it adds up to a
system where decisions are instant and reflexive. It adds up to a medium, with
all its trappings.

------
shawndumas
‘The Gell-Mann amnesia effect is a theoretical psychological phenomenon, the
term itself being coined by author, film producer and academic Michael
Crichton after discussions with Nobel-Prize winning physicist Murray Gell-
Mann.

Originally described in Crichton's "Why Speculate?" speech, the Gell-Mann
amnesia effect labels a commonly observed problem in modern media, where one
will believe everything they read from a journalist even after they come
across an article about something they know well that is completely incorrect.

The conclusions found and perspectives portrayed by the author are entirely
erroneous, often times flipping the cause and the effect. Crichton notes these
as "wet streets cause rain" stories.

In short, most eloquently put by Thomas L. McDonald, the Gell-Mann amnesia
effect defines the idea that "I believe everything the media tells me except
for anything for which I have direct personal knowledge, which they always get
wrong."’

— [https://en.m.wikipedia.org/wiki/Gell-
Mann_amnesia_effect](https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect)

~~~
sievebrain
I'm not sure it's true though. Look at any polls about trust in the media.
It's been falling for a long time. People are becoming aware that journalists
are systematically unreliable and it's having an impact.

~~~
m12k
And unfortunately the alternative people have found is random hearsay on the
web. The dieticians lost our trust and now we're just eating junk food
instead.

~~~
CM30
They've been finding random hearsay for a while now. Tabloids had a huge
audience, and they're the kind of newspapers many were reading for years. Is
the Daily Mail or Sun or Mirror or Express more accurate than some clickbait
farm? Probably not.

Most people weren't exactly reading [insert name of reputable publication] for
their daily news; it was low-quality tabloids with a helping of TV and radio
pundits.

~~~
dragonwriter
> Tabloids had a huge audience, and they're the kind of newspapers many were
> reading for years.

Outside of NYC, the US has basically no daily tabloids.

> Most people weren't exactly reading [insert name of reputable publication]
> for their daily news

Most people in the US that were reading _any_ daily news were probably reading
the dominant local newspaper or one of the larger reputable metropolitan or
national papers, because that covers pretty much all the daily written news
outlets they would have had access to.

~~~
jrumbut
I'm not sure how often the National Enquirer and its supermarket checkout
line cousins publish, but they are certainly available and fairly popular
(very popular historically) outside of NYC.

Our disagreement could be a dialect thing though: when I say tabloid, I mean a
printed paper that consists entirely of stories about UFOs and lurid celebrity
gossip. I believe in other parts of the world "tabloid" has something to do
with the format of the paper, and the content is considered somewhat but not
totally disreputable.

I'm not sure which meaning the other posters are using.

~~~
dragonwriter
> I'm not sure how often the National Enquirer and its supermarket checkout
> line cousins publish

Both common local tabloids (which often have basic journalistic standards,
though they frequently have...interesting...editorial viewpoints) and the
supermarket tabloids like the Enquirer are weeklies.

------
jokoon
Success in developing working AI is not so much of a big step in progress for
AI in general.

It would be more interesting if we could understand what happens in those
black boxes, and synthesize it. All we see is showcase projects and services,
but never something you can run on a client computer.

It really looks like machine learning is just brute forcing problems until you
have a partial solution to a problem, without understanding how it works
internally. Granted, it's progress, but why isn't it possible to use the data
of a trained ML network for further analysis? I see many models of learning,
but not a lot of analysis or simplification of the resulting data.
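
One direction that does exist for this kind of analysis is model
distillation: fitting a simple, interpretable surrogate to a trained black
box. A minimal sketch of the idea, assuming scikit-learn and a toy dataset
(the data and models below are invented for illustration):

    # Toy sketch: distill an opaque network into an interpretable surrogate.
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Train the "black box".
    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
    black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                              random_state=0).fit(X, y)

    # Fit a shallow tree to the black box's predictions, not the raw labels.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # A crude, human-readable approximation of what the network learned.
    print(export_text(surrogate))
    print("fidelity:", surrogate.score(X, black_box.predict(X)))

The tree is nowhere near an explanation of the network, but it does turn the
trained model into something a person can actually read and argue about.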

I would have thought AI would help science understand what intelligence is,
but obviously it's always money first, science later. You often see a lot of
tools and models, but not a lot of good insights.

~~~
ThomPete
AI doesn't do lateral thinking that well yet. It's a meta problem; thinking
about thinking requires self-awareness.

But saying it's just brute force is missing the larger point: much of
evolution is brute force.

Of course it's money or power first, that's how you pay for these things to
begin with. With utility we will get more and more conscious AI. It's
obviously going to take time, but a blip compared to the time evolution has
used.

~~~
TaupeRanger
There's this continual refrain from AI people: "well evolution is just brute
force too", or "the brain is just doing complex statistics too" as if that
explains anything at all. There are different ways of implementing "brute
force" and "complex statistics" \- what matters is the implementation, not the
terminology. Clearly AI is not even remotely close to "thinking" or
"awareness" in the human sense. We don't even understand the implementation of
those things in humans. It is ignorant conceit to think we will just magically
stumble upon the answer with zeitgeist machine learning techniques.

~~~
mindcrime
_We don't even understand the implementation of those things in humans._

And we don't have to, unless you're insisting on something that might be
referred to more aptly as "Artificial Human Intelligence". Generally speaking
though, in the AI field, there's no particular belief that AI _must_ work the
same way human intelligence works. If AI research leads to discoveries that
help us better understand human intelligence, that's a nice perk, but nobody
treats that as the ultimate goal.

That said, of course it makes sense to try and model after human intelligence
to the extent we can, since we are currently the best example of intelligence
we have available to use as a template.

 _It is ignorant conceit to think we will just magically stumble upon the
answer with zeitgeist machine learning techniques._

Who is out there suggesting that we will "just magically stumble upon the
answer with zeitgeist machine learning techniques"? From where I sit, it seems
that most contemporary researchers who are focused on Deep Learning / Deep
Reinforcement Learning / etc. are not talking much at all about "Artificial
Intelligence" in the general sense. And the people out there talking
specifically about "Artificial General Intelligence" (Ben Goertzel, Marcus
Hutter, Pei Wang, etc.) certainly aren't claiming that all we need is the
currently faddish ML techniques. See, for reference:

[http://agi-conf.org/2017/?page_id=20](http://agi-conf.org/2017/?page_id=20)

[http://agi-conf.org/2016/schedule/](http://agi-conf.org/2016/schedule/)

[http://agi-conf.org/2015/schedule/](http://agi-conf.org/2015/schedule/)

etc..

~~~
jokoon
> but nobody treats that as the ultimate goal.

I remember watching Andrew Ng's course on ML and he was often talking about
the "AI dream".

I think that the goal of AI is to build machines that are progressively more
intelligent. To build those machines you have to build an artificial form of
intelligence, to further analyze and research what intelligence really means.

I don't think humans are really able to visualize a form of intelligence other
than human or mammal/earthly life forms. Our intelligence comes from an
evolutionary need to figure out things in order to survive, but it's an
earthly version of how we evolved. With that said, I think we can already say
that our definition of intelligence will always be biased because we put our
human intelligence on a pedestal, and worse than that, we won't be able to
detect other forms of intelligence for those reasons.

------
MrQuincle
This quote is nice: “We’ve told stories about inanimate things coming to life
for thousands of years, and these narratives influence how we interpret what
is going on now,” Bell says. “Experts can be really quick to dismiss how their
research makes people feel, but these utopian hopes and dystopian fears have
to be part of the conversations. Hype is ultimately a cultural expression that
has its own important place in the discourse.”

Hypes are not necessarily wrong. I also think in the end you won't get AI
winters anymore. If it is advanced enough, AI will be used and you won't get
the genie back in the bottle.

------
tim333
>The result is dangerous

The complaints in the article seem to apply to crappy sensationalist
journalism in general, and there is much more danger from that applying to
immigrants, politics, war and the like. If the Sun prints some nonsense about
Facebook's bots, does it really matter? Nonsense about the enemy's WMDs, on
the other hand, can cost thousands of lives and billions of dollars.

~~~
sievebrain
It does matter, yes. Journalists have inordinate influence on politicians: if
an outlet (much more likely the Guardian in this case) is making nonsense
claims about bots, it can easily lead to legislation, especially if it lets
politicians feel they're doing something.

------
devoply
Media does not get AI wrong... media wants to create sensationalism that
drives clicks, so it publishes stories to that effect. It happens with every
topic people fear, which these days includes terror and AI...

~~~
notahacker
Also, for the most part, media is merely parroting the sensationalism from
within the AI industry. It's not the media that is actually allocating
millions of dollars to the possibility that AI might become self-aware and
start attacking people.

------
niklasd
The AI hype also seems to lead to sub-hypes in certain fields. E.g. in the
legal profession there is a new buzzword called "Legal Tech". The hope is that
AI-driven programs will eventually transform the field. While I think this is
certainly possible and will eventually happen, it is astonishing how few
programs there are amid the huge hype and the countless workshops and
conferences about the topic. And while it is possible to compile a list of
programs and companies on the market, from my experience most of them aren't
actually used in business.

~~~
kilon
I am a lawyer and a coder, and I have to disappoint you: that hype is very
real indeed.

The problem is twofold: a) lawyers completely underestimate the
decision-making abilities of software, even without AI; b) people who are not
lawyers completely overestimate the complexity of legal resolution.

(a) happens because the software we lawyers use is basically... well... crap,
and most lawyers are clueless when it comes to technology, even ones who
specialize in it. (b) happens because TV and movies have created this fantasy
legal world where lawyers, especially expensive ones, can prove that they are
elephants by virtue of their ability to win arguments.

Surprisingly (and this was also a surprise to me when I started to study law),
law and coding are much closer than people think, because there is a lot more
logic than TV drama in a real courtroom.

Not only should AI have no problem resolving legal issues, judging from its
current achievements, but it can help with the most valuable skill for a
lawyer, which is pattern recognition. Our profession bombards us daily with
tons of data that are extremely hard to organize and keep track of. This
applies more to civil than criminal law, but most of law is civil law anyway
(economic issues), and this data consists of documents used as evidence, case
law (court cases that have set a precedent) and of course legislation.
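
To make that pattern-recognition point concrete, here is a toy sketch of
case-law retrieval; the documents and the scikit-learn TF-IDF approach are
illustrative assumptions, not anything from an actual legal product:

    # Toy sketch: rank past cases by similarity to a new filing.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    case_law = [
        "breach of contract over late delivery of goods",
        "negligence claim following a traffic accident",
        "dispute over unpaid invoices between two companies",
    ]
    new_filing = ["supplier failed to deliver the goods on the agreed date"]

    vectorizer = TfidfVectorizer()
    case_vectors = vectorizer.fit_transform(case_law)
    query_vector = vectorizer.transform(new_filing)

    # Higher cosine similarity = more relevant precedent to read first.
    scores = cosine_similarity(query_vector, case_vectors)[0]
    for score, text in sorted(zip(scores, case_law), reverse=True):
        print(f"{score:.2f}  {text}")

Nothing here is specific to law; the point is just that "pattern recognition
over piles of documents" is a well-trodden, fairly mundane ML task.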

Also, there is not much of an option really: AI is pretty much unavoidable,
because the ever-evolving, immense complexity of modern society has made legal
resolution so complex that court cases take up to decades to be fully
resolved, which of course is not viable.

An example is IT law, where it has been a huge struggle for courts and
legislators to keep up with the field's rapid evolution, in a profession where
court cases and legislation take decades to move forward. In IT, decades are,
in legal terms, centuries.

AI will replace lawyers, of that I have no doubt, because law is a dying
profession anyway for the reasons I explained above. Obviously lawyers will
still be around for a long, long time, but yes, AI will fundamentally change
the profession. The profession is in desperate need of modernization, as it
has barely evolved over the last few thousand years.

The problem was never what AI can do (it can do amazing things); the problem
is supply and demand. AI is a field of huge demand and minimal supply, but
then this is a problem that has run rampant through the coding profession,
which is why freelance coders make more money than lawyers.

~~~
panic
_> Also, there is not much of an option really: AI is pretty much unavoidable,
because the ever-evolving, immense complexity of modern society has made legal
resolution so complex that court cases take up to decades to be fully
resolved, which of course is not viable._

Instead of turning law into a computer game where the company with the most
TPUs wins, why not simplify or reform the system so that human beings can
understand it? Isn't the point of the legal system to resolve conflicts
between people, not computers?

I'm worried that we're driving off a cliff of incomprehensibility, where
things happen but nobody can understand why. Or even if they do, they don't
have the authority to override the system which is making the decision.
Reforming the system outright is always too risky -- it's been working OK so
far, right? But what happens when it stops working? How can you fix a system
you don't understand?

~~~
kilon
Simplification is an illusion. Simplification works great for understanding
and learning, I completely agree, but it is terrible for problem resolution.
Mainly because problems don't get simple just because you want them to;
secondly, because the nature of knowledge and the world we live in is of
immense complexity.

You are absolutely correct, though, that we are indeed driving off a cliff of
incomprehensibility. I cannot count the times I have caught lawyers and
coders, myself included, not understanding even basic concepts like OOP or
legal responsibility under the influence of drugs and alcohol. It's not that
the concepts are hard to understand, but they are so numerous that it becomes
easy to lose track of where you are, where you were and, most importantly,
where you are going.

When I started coding back at the end of the 80s, coding in assembly was not
that hard. After 30 years of coding I decided to go back to assembly and was
just blown away by how much more complex it has become, though obviously not
surprised, and of course I discovered that even assembly coders mostly use C
libraries because, well, otherwise it gets insane really fast.

My solution to this may sound insane, but in life I have learned that when I
have a crazy idea, I usually end up being correct. I do believe that AI won't
replace us but rather augment us. I am not talking about cyborgs, the
singularity and that nonsense; I am talking about software that helps you
navigate through the chaos of information. And when I say AI I mean it in the
vaguest way possible; obviously the technology will change in the future in
so many ways.

The only viable solution for the human being is to either find new ways to
take advantage of the potential of his own intelligence or augment himself in
some way.

After all, it's not a secret that AI is already used to construct AI, and this
opens the door to a ton of potential. After all, isn't coding all about
automated decision making? It's not as if we have not been trusting automated
machines for thousands of years. But nonetheless humans are terrified of
technology. The marvel of the human condition.

~~~
sievebrain
I guess the issue is the definition of problem. If the law reaches a point
where you need AI augmentation to understand it, that's a good sign it's being
applied to problems it can't actually address or which may not even exist at
all.

Look at GDPR. It's impossible to know what it really means. Huge efforts are
put into action with no idea of whether they will be considered good enough or
not. That's not a problem you can fix with AI. You need better law (in this
case, no law would be better).

~~~
nerdponx
I'm OK with a computer-assisted legal code so long as our policy makers
recognize it for the public good that it is. If there is a standardized "legal
robot" then everyone who is eligible to vote should have free access to it.

------
benl
It's rather disingenuous of AI researchers to complain of overhype when they
are the ones claiming that their tech should be used to drive cars and hence,
as we've seen, kill people.

AI winter will be caused, once again, by the failure of the technology to do
what the researchers and practitioners claim it can do. This time, tragically,
with fatalities.

~~~
majos
It seems reasonable enough to argue both that 1. AI should take over certain
human roles like driving, which causes millions of fatalities due to human
error, and 2. it's silly to frame every new step in AI as part of a grand road
to SkyNet. The first is proposing AI for a discrete task, the second is
extending this way way out to consciousness or something.

~~~
benl
Yes, but it's my argument that claim 1 is incorrect and overhyped. AI cannot
drive better than humans, and that was a hubristic claim.

------
glup
What probabilistic generative model of language generates "Balls have zero to
me to me to me to me to me to me to me to"? Is this a bigram model taking the
highest probability continuation? Certainly nothing modern, right?
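
For what it's worth, a greedy bigram model really can lock into exactly that
kind of loop. A toy sketch (the corpus and counts below are invented; the
actual Facebook bots were neural models, not bigram models):

    # Toy bigram model with greedy decoding: always pick the single most
    # likely next word. On a skewed corpus this degenerates into a loop.
    from collections import Counter, defaultdict

    corpus = "pass the balls to me to me to me please".split()

    # Count bigram transitions word -> next word.
    transitions = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev][nxt] += 1

    def greedy_continue(word, steps=8):
        out = [word]
        for _ in range(steps):
            if not transitions[out[-1]]:
                break
            # Highest-probability continuation, no sampling or lookahead.
            out.append(transitions[out[-1]].most_common(1)[0][0])
        return " ".join(out)

    print(greedy_continue("to"))  # -> "to me to me to me to me to"

The repetition in the learned bots presumably came from a similar failure
mode in whatever objective they were optimizing, not from literal bigram
counts.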

~~~
Don_Patrick
Neural networks as usual. What I make of it is that the programs noticed that
some words had more effect than others, and just started spamming those for
maximum value. Source: [https://code.fb.com/ml-applications/deal-or-no-deal-
training...](https://code.fb.com/ml-applications/deal-or-no-deal-training-ai-
bots-to-negotiate/)

------
baxtr
It is fascinating how the whole thing is following the classical hype cycle.
This and blockchain will tank and then come back at a later stage, hopefully
matured.

I don't think that is necessarily a bad thing. I did my master's thesis in
2004 using evolutionary algorithms. No ordinary person would talk to me about
that; even worse, I was easily labeled an outsider and a nerd. Today, people
with AI skills are the cool kids on the block. That's the good side of the
hype: it's bringing all these nerd topics into the mainstream.

~~~
gaius
_This and blockchain will tank, and, then come back at a later stage,
hopefully, matured_

I think it's a mistake to conflate the two, they are unrelated and orthogonal
to each other. AI has a whole raft of problems waiting to be solved as soon as
it develops enough but blockchain is very much a solution in search of a
problem _that only it can solve_. Every proposed application of it eventually
requires a "trusted third party" and then the whole house of cards comes
tumbling down. In a former role I sat in on many fintech pitches where
painfully earnest people from outside the industry proposed solutions to
problems that they only imagined existed... I mean at the level of claiming
that only blockchain can do something that actually the Medicis were doing in
the 15th century...

------
davidgerard
The Musk AI hype is quoted extensively - but these articles never get into
Musk's line largely being reheated Yudkowsky. I had one of these discussions
just a couple of days ago, where I had to point out that Musk was an
accomplished businessman and engineering manager, but not actually a working
engineer at any point ...

~~~
sievebrain
That may well be partly true, but when setting up SpaceX he immersed himself
in rocket physics and basically became a self-taught rocket scientist. He
isn't just a manager.

------
canihavelogin
Instead of writing documentation for the press on how to cover scientific
topics, just don't patronize these rags. What exactly is the point of
correcting the press?

------
mar77i
Good old capitalism cannot stop mixing reality and sales-speak; if you start
taking the "magic sauce" pitch for technical fact, it's no wonder you end up
scaring yourself. On that same note, I kind of understand that AI researchers
have this need to sell their craft as magic sauce. That's how selling things
works, after all. Here we go in another cycle of debunking the same magic
sauce as common snake oil.

~~~
ForHackernews
Real AI researchers shouldn't need to be "selling" their craft. You're
confusing "data scientists" at places like Facebook and Google with genuine
scientists.

~~~
Erlich_Bachman
"Real AI researchers" also need to, if they want to get funded for their
research.

~~~
ForHackernews
There's a certain amount of puffery that goes into a grant proposal, but it's
nothing like marketing-speak. For one thing, "magic secret sauce" will never
get you funded.

------
bartq
Optimization and statistical algorithms known as "AI" are far, far away from
surpassing humans. Philosophically, it will never happen; we can create
intelligence at best equal to a human being's, but creating that kind of "AI"
would be equivalent to the creation of life, which is beyond the scope of
technology.

AI is as dangerous as we decide it can be; for example, we could create a gun
with a camera and shoot at people based on their looks. Law and common sense
should not allow that; that's criminal activity.

I think AI should grow and take over boring and repetitive tasks. This will
free many people from dull jobs, and new jobs will be created on top of that,
i.e. jobs to tune and organize AI units of computation and to talk to other
people about the results.

~~~
Joeri
The mistake is viewing AI as somehow on the same curve as human intelligence.
As if AI gets better and better and edges closer on the curve to human
intelligence.

It’s more a different kind of intelligence which happens to be able to do some
of the same tasks. It already far exceeds human ability at the tasks that
particular flavor of intelligence is good at. We don’t need to be looking at
things people do and asking how deep neural nets can do those as well, we need
to be looking at deep neural nets and asking which tasks they’re uniquely
capable of doing.

~~~
taneq
> We don’t need to be looking at things people do and asking how deep neural
> nets can do those as well, we need to be looking at deep neural nets and
> asking which tasks they’re uniquely capable of doing.

I remember seeing a post here a while ago, talking about essentially this
process. The poster was disappointed that AI researchers tend to get to the
point where a new approach starts bearing fruit, and then get sidetracked
finding applications for the new approach and forget about the search for
'real' AI. IIRC this was also suggested as an explanation for the "once we can
do it it doesn't count as AI" phenomenon.

