
AI is not coming for you - ntang
http://blairreeves.me/2019/04/18/ai-is-not-coming-for-you/
======
jaabe
I think ML has potential that Blockchain doesn’t, but not in all the areas
that are being hyped.

I work in the public sector of Denmark, and we’re targeted by a lot of the
hype. Which is worrisome, because it might actually lead to stupid projects if
it becomes a political focus. So far it hasn’t though, and blockchain never
did, so who knows.

The thing about ML is that all the BI it produces is worse than what we are already doing. Because it's hyped we've naturally done proofs of concept, with universities and with big tech, and no one has been capable of providing ML-based analytics or BI that is even remotely close to what we already do. The simple truth is that we have been working with data for four decades. We have full-time analysts who do nothing else, and they are simply light-years ahead of anything ML we've seen, and they can actually explain their results to our politicians and decision makers.

Where ML does work, and the fact that it does separates it from blockchain, is for recognition. We have a lot of data, often of poor quality, and ML can trawl through it faster and more accurately than our human workers. We had to go through every casefile in a specific area and identify which ones were missing a specific form. A casefile can be 500 A4 pages long, sometimes scanned in really terrible quality, and we had 500,000 of them. It took 12 people 6 months to do so; simultaneously, we ran an ML proof of concept. It took 3 months to train the algorithm, but only one employee, and once it was done being trained, it took around five hours with a lot of Azure iron to trawl through the data. ML had better results than our human effort, and it was obviously way cheaper.
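The kind of classifier described above can be sketched in a few lines. The actual project's tooling and data are not specified in the comment, so the TF-IDF text features, scikit-learn pipeline, and toy examples below are all assumptions for illustration:

```python
# Hypothetical sketch of a "missing form" document classifier like the one
# described above. Assumptions (not from the comment): casefiles arrive as
# OCR'd text, features are TF-IDF, and the model is a logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: OCR snippets labeled 1 if the required form is
# present in the casefile, 0 if it is missing.
docs = [
    "application form A7 signature date approved",
    "form A7 attached applicant details enclosed",
    "correspondence about meeting schedule",
    "invoice payment reminder no attachments",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

# Once trained, scoring new casefiles is a fast batch operation, which is
# why inference over 500,000 files can run in hours.
print(model.predict(["signed form A7 with applicant details"]))
```

In practice the hard parts are the OCR quality and assembling the labeled training set, which is presumably where the three months of training effort went.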

So ML and AI may be a lot of hype, but it’s also more than that.

~~~
ThomPete
The first thing to realize is that AI is not coming for you or me or anyone else; it's coming for a subset of us: the subset of skills which can be rented out to an employer.

AI and ML are just two of the things coming for that subset. Digitalization is also, if not a bigger, then at least a huge factor.

The entire ecosystem of companies and people who were involved in supporting the music industry was more or less wiped out over the last 20 years, leaving a thin layer of really successful people and then a huge group of people who make no money.

And the more things can be digitalized, the more they are subject to pattern analysis, which means the subsets of you that make you valuable are also easier to replace.

Ironically, a cleaning lady is probably the last person to lose her job, because her work is really hard to replace, but that just means the supply of such labor will increase too.

So sure, humans are better at BI, but that's based on the assumption that how we do BI is the only way to do it. I am not so sure it is. We will see.

~~~
jaabe
We’ve seen a couple of examples with BI and analytics, I think you can put
them into two categories. Prediction and automation.

Prediction simply isn't good enough. It may work well for Google and Facebook, but that's because failure is relatively harmless in advertising. No one dies just because you see a commercial for something you just bought. The failure rate is simply too high for us in the public sector. Maybe that'll change, but probably not. I say that because we're severely limiting access to data in recent years over privacy concerns. You could probably do some interesting things with medical data, but to get there, you'd need to look at medical data, and that's just not happening in the current political climate.

Then there is automation. IBM wanted to sell us Watson analytics on the premise that it could recognise patterns and build the BI models our dedicated team does. So we let them try, and none of the models they came up with were even remotely useful. I can see this changing, but when? And what will our analytics department look like by then? It's hard to say.

~~~
ThomPete
Yes, but you also have to think about predictions with a little more nuance.

You can make a prediction, and it might be true, but that doesn't mean you will be successful with it. I wouldn't be surprised if 90% of all BI predictions don't actually lead to more successful outcomes, including in the public sector, including in Denmark (I'm Danish too :) )

------
charlysl
Maybe it's just me, but it seems to me that the author doesn't have a clue about what he is writing. For instance, when he mentions the Amazon hiring-AI embarrassment, it seems to me that he mixes the very different concept of bias in machine learning with the everyday concept of bias as prejudice.

If the sample data is representative of the population, then it is unbiased in
machine learning terms, even if the population as a whole is prejudiced, or
biased, which will be duly reflected in the learned model, which shouldn't
surprise anyone with at least a rudimentary knowledge of machine learning.
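This distinction can be demonstrated in a few lines: fit a model to a perfectly representative sample of prejudiced historical decisions, and it faithfully learns the prejudice. The data below is synthetic and the scikit-learn setup is purely illustrative:

```python
# Synthetic illustration: the sample is representative (statistically
# unbiased), yet the learned model reproduces the population's prejudice.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # legitimate signal
group = rng.integers(0, 2, size=n)    # protected attribute (0 or 1)

# Historical decisions penalized group 1 regardless of skill.
p_hire = 1 / (1 + np.exp(-(skill - 1.5 * group)))
hired = rng.random(n) < p_hire

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# The fitted coefficient on `group` comes out strongly negative: the model
# has faithfully learned the historical penalty from an unbiased sample.
print(model.coef_[0])
```

Nothing in the fitting procedure went wrong here; the model is an accurate summary of a prejudiced process, which is exactly the point.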

~~~
piokoch
I have the same impression. If an AI system figured out that the risk of a car being stolen is higher when the owner lives in a certain place, and adjusted the fee upward accordingly, then the AI worked correctly.

One might claim (I presume the author does, and this is perfectly OK) that it is a bad thing that there are "bad neighborhoods" and that we should fight it, but what does that have to do with the proper calculation of an insurance fee? Should we expect AI to apply some "social justice" concepts while calculating insurance prices? I am not sure what the author proposes.

~~~
skybrian
I'm not impressed with the article, but the idea here is that some forms of
bias are okay, others are not, and it's up to the people working on the system
to build it properly. There are laws about these things and you can't expect
some algorithm to know the law automatically and avoid prohibited
discrimination without going through the effort to build it in.

It's kind of odd to think this would happen automatically. There is some kind
of wishful thinking going on that technology can't have bad effects if you
don't have bad intentions.

------
bunderbunder
> No one has any idea what “artificial intelligence” even means.

I'll tell you exactly what it means:

It means that the software's own vendor is so uncertain of the actual value of
_what_ the product does that they've decided to promote _how_ it does what it
does - in other words, implementation details - instead.

~~~
commandlinefan
> promote how it does

And nobody who works there actually knows that, either.

------
legitster
Do we all remember "algorithms"? Like, when every company had an "algorithm"
as part of the pitch?

Some of them even had the ability to self learn and adjust to new data.

At the end of the day, "AI" and "ML" are just new labels for slightly more advanced "algorithms". I think everyone would be a bit better off if we tempered our expectations around the technology to "better algorithms".

~~~
ska
Do we all remember when it was called “AI” in the 70’s? To some degree this
stuff is just cyclical.

------
ackbar03
I think the term AI has been completely abused. In fact, I try to avoid using the word AI if I can help it and use deep learning instead, because that's really the whole crux of this new tech.

I think most people, definitely the author of the piece, miss the whole core impact of AI. AI is basically a new way to process and calculate data, a new type of algorithm.

We can now identify objects in pictures using computer vision with deep learning. This was NOT POSSIBLE with the tech before. A computer can now beat a team of human players in Dota. This was definitely NOT POSSIBLE before.

At the end of the day this is the impact, and it's not a small impact. It opens a lot of possibilities for what can now be done. To me it's far more impactful than blockchain will ever be. To some degree I'm actually quite happy with all the misleading hype, because the people who understand and are capable of using the technology are quietly making real differences behind all the smoke.

~~~
ska
While there have been some good incremental improvements, I think you are overstating the place of deep learning vis-à-vis everything in machine learning that came before it. The same thing happened with SVMs in the early 2000s, but with much less industry noise. NB: I'm not trying to compare the impact of the two, just noting that it was also overstated.

After all, deep learning is fundamentally a continuation of much older techniques. And we absolutely could identify objects in pictures; we've been doing that for 40 years now. Deep learning techniques have given a very nice jump in accuracy for some tasks, but they didn't come out of nowhere.

I agree the whole "AI" labelling is problematic. That is also a many-decades-old problem though...

~~~
kthejoker2
AI developments are more like battery storage improvements. Just 10%
improvements year over year, in hardware, in architecture, in algorithms, in
data capturing and labeling ... they've just really started to add up.

But 10% growth YOY compounds very quickly.
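As a quick sanity check on the compounding claim (the 10% figure is the comment's, not a measurement):

```python
# 10% year-over-year improvement means a cumulative factor of 1.1 ** n
# after n years.
for years in (5, 10, 25):
    print(years, round(1.1 ** years, 2))  # 5 -> 1.61, 10 -> 2.59, 25 -> 10.83
```

So a steady 10% rate, if it were real, would mean roughly 2.6x in a decade and nearly 11x in a generation.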

~~~
ska
10 percent would compound quickly if we were seeing that - I suspect most of
the recent acceleration has far more to do with data availability than any
technical improvements though.

------
tyingq
It mentions the blockchain hype mania as well. I happen to be the clearinghouse for lots of people's silly blockchain ideas right now. I mostly have no issue with the ones that are basic "chain of hashed blocks" git-style ideas, intended to show some simple sequence of integrity while still requiring trust. I don't get many of those.

But there are so many that try to shoehorn the full trustless model onto an
idea that obviously still has a central authority. Argh.

Is there a good resource I can hand to non-technical people that explains why
"full blockchain" only makes sense for a handful of use cases?

~~~
rorykoehler
From where I am standing, it seems that any resource to explain anything to non-technical people will fall on deaf ears. The reason for this is money. People aren't looking for the best technical solution for their problem; they don't even have a problem. They are looking for the best way to get their hands on money quickly. If people will hand over cash, they will take it. This goes for AI too. Many investors are stupid (naively optimistic if I'm being kind), and people are out to get their money.

~~~
gk1
Repeating my other comment in this thread:

> This is a tired trope, that “business people” are brainless and gullible.
> Almost as tired as the “VCs are so dumb they’ll throw money at anything-AI”
> idea repeated in the article.

> Maybe, just maybe, the people running multi-billion-dollar companies, and
> multi-billion-dollar investment funds, are not stupid?

> It would be much more interesting and productive to discuss what they see in
> AI and why they feel so much urgency, rather than dismissing them as fools
> falling for magic.

VCs only need one big winner out of every ten investments. You are looking at
their nine failures and calling them stupid and naive.

~~~
barbecue_sauce
Experienced VCs are not the only business people who give out money.

~~~
gk1
My problem is with the generalization. The article and some comments in this
thread paint all VCs and business people as dummies falling for shiny objects.

I don’t think anybody would argue with an article that said _some_ percentage
of any population makes stupid decisions for stupid reasons. But then again,
nobody would read that article, either.

------
RosanaAnaDana
I want to strongly disagree with the conclusion, based on one-off personal anecdata.

Last year my working group within our company secured a major contract, the largest for our company that year. We expected we would have to hire 20+ entry-level positions to execute on the contract. While the process was fundamentally based on ML, we knew there would be a large quantity of human labor as well to execute on time.

My colleagues and I, instead of going on a hiring spree, asked the overlords for 3 months of overhead time to develop 'intelligent automation' to reduce the number of new/temporary hires. We made substantial enough gains to complete the work ahead of schedule and way under budget with our existing crew, without bringing on any new hires.

It was adjusting our thinking about how we do our work and incorporating ML at
multiple levels in our system to intelligently guide our process that allowed
us to eliminate 60% of the new positions that previously would have been
generated by this work.

100% ML is coming for you.

~~~
linuxftw
They must not have been 20 entry-level tech workers, just 20 unskilled workers. 20 entry-level tech workers would be worth 2 senior workers; no management team would ever allow this.

------
carnagii
I see a lot of similarities between "AI", as understood by business people,
and the pursuit of alchemy and the philosophers stone. It seems like a hustle
to separate rich dupes from their money by promising them the keys to infinite
wealth, immortality, mars colonys, etc. In that respect it is mostly harmless
but it can be quite dangerous to the people who take it seriously.

~~~
gk1
This is a tired trope, that “business people” are brainless and gullible.
Almost as tired as the “VCs are so dumb they’ll throw money at anything-AI”
idea repeated in the article.

Maybe, just maybe, the people running multi-billion-dollar companies, and
multi-billion-dollar investment funds, are not stupid?

It would be much more interesting and productive to discuss what they see in
AI and why they feel so much urgency, rather than dismissing them as fools
falling for magic.

~~~
alkonaut
I can only speak for the managers I have met: what they "see" is a promise of cost cutting and an opportunity to tell higher managers that they are ahead of the hype. But never, ever, have I seen one of these people say anything that suggests they have even a vague idea of what AI or ML _is_ or what it can realistically achieve. And I'm not sure they are interested. Because, as you say, they aren't _stupid_ - I just think they are part of a game of BS I don't understand, involving higher managers, investors, etc. I don't think it's so much about producing anything using AI; it's AI for the sake of saying you are using it.

So perhaps not all fools, but somewhere between con artist, willfully ignorant and fool.

(Note: this is all from "traditional" industry, i.e. the manager at the hammer factory proudly launching initiatives to "use more AI" in the factory. Not the tech industry. Not plausible or concrete use of AI.)

~~~
gk1
You are repeating the trope, just substituting “stupid” with other deriding
terms: BS, con artist, willfully ignorant, fool...

Does it really have to be those things just because you don’t understand it?
Is it possible those people running multi-billion-dollar organizations just
know something you don’t, or have a perspective that you don’t?

~~~
alkonaut
Updated my answer: these are middle managers, such as division or site managers, not the top managers of the billion-dollar corporations (who might well have great visions, but it's not on them to implement them). As I clarified, these projects are always vague initiatives such as "use more AI in our process" or downright marketing stupidity such as "Joe, I need you to work _AI_ into the description of our latest hammer model". Again, this is old, traditional industry.

~~~
gk1
Sounds like a great opportunity to provide value and charge accordingly.

Baking off-the-shelf anomaly detection into the hammer QA process might seem
easy and “not true AI” to us, but it solves their problem. Maybe you can even
educate them in the process, and explain the differences between AI, ML, DL,
different use cases and methods, libraries, etc. I suspect, though, they won’t
care because they just want to hit their business goals.

------
cromwellian
The claim in this article that corporations don't do pure research and that every project is targeted at revenue generation is false.

I’ve been in R&D at both Google and IBM TJ Watson on projects that were never
even tangentially revenue generating. A company with 80,000 employees can
easily afford to let hundreds or even thousands of employees operate on non-
product focused research.

In fact on orientation day at IBM we were explicitly told that nothing we do
will ever be a product, that there’s a firewall between TJ Watson and the rest
of IBM and that any products will be rewritten by product teams.

It seems like the author is speculating about how things work, not writing from experience.

------
leesec
I don't know what this author is talking about and I'm not sure he does
either.

I think his premise is that AI is bad and vague, and that it's bad because it's biased. But he just picks a few examples with no counterargument and spitballs from there.

AI in my opinion is technology that outperforms humans, often done with
learning algorithms instead of explicit rule based algorithms.

Except the AI I have been following is wildly successful in a variety of
topics and probably is coming for you. The author is right it could be biased
but people are working on these problems and the author is acting as if no
progress will ever be made in this relatively young field.

------
hprotagonist
>Translation software was once considered by serious people to be “AI” – until
it became easy.

Excuse me? Translation is possibly impossible for interesting work, but
certainly nothing we're doing now is even trying[0]. What we're doing right
now is the equivalent of "translating" road signs and dinner menus.

[0] [https://www.theatlantic.com/technology/archive/2018/01/the-s...](https://www.theatlantic.com/technology/archive/2018/01/the-shallowness-of-google-translate/551570/)

------
bsenftner
Current "AI" is a marketing term for trained algorithms performing analysis using machine-derived statistical models (that's the "learning" part). The marketing term was created to help companies raise funding, and was then taken over by journalists who are tired of eating dirt and will write about any possible fear for the click revenue. That's where this really took off: in the public media, where any idiot can write anything and somewhere an entire sub-culture believes them.

------
w8rbt
So is Machine Learning. It can be dangerous too:

[https://www.usenix.org/conference/usenixsecurity18/presentat...](https://www.usenix.org/conference/usenixsecurity18/presentation/mickens)

James Mickens is awesome BTW.

------
mindgam3
Good piece. In some ways the hype is even worse than OP suggests.

It’s not even that the current trend of “machine learning” (ie data
processing) is at best a tiny subset of what anyone would consider to be
actual intelligence (ability to act rationally in a wide range of situations,
including novel ones, ie common sense).

It’s that the very idea of “artificial intelligence” is impossible. Terry Winograd, one of the earliest AI researchers in the 60s/70s, wrote a great book about this (with Fernando Flores) called Understanding Computers and Cognition, in which he lays out a coherent theory, grounded in philosophy and biology, for why the pursuit of machines that think like humans is doomed to fail. The crux of the argument is that the essence of intelligence/common sense has to do with something other than symbolic representation, i.e. the kind of thinking one does while playing chess or using language. It has to do with “being-in-the-world”, which sounds weird but is a really useful concept from Heidegger.

Winograd gives the example of a man using a hammer to illustrate this. A man
using a hammer has no mental model of a hammer while he is pounding in a nail.
The hammer simply becomes an extension of his arm. The only time a
representation or mental construct of “hammer” becomes relevant is if there is
a breakdown, like if the hammer slips. But otherwise, the intelligent act of
hammering occurs without any “hammer” objects in the mind of the hammerer.

I’m vastly oversimplifying here, but the book (first published 1985) is quite
well argued and frankly persuasive. Highly recommended to all those who seek
to separate AI fact from fiction in these buzzy times.

------
olivermarks
'“AI” is not something anyone needs to be worried about. A world mediated by
unaccountable corporate software platforms is.'

Perfect

------
carlmcqueen
A lot of the discussion of how AI is coming for your jobs follows the same reasoning as the discussion of automation, even before it was called machine learning.

The data science team at a major bank I used to work for historically had a team that sourced and cleaned the data, then a team that would explore and build the models, and then a team that would learn and run the models going forward. In my opinion the final team was the most important, and the most tragic to see condensed: they noticed when the models were missing the mark and needed to return to the model builders to be adjusted.

The "lost jobs" in this case is that when I was on the team we had to learn
the entire bank database structure and source our own data, build the models
and then automate them in such ways to "catch" when they were missing the
mark.

The team will be further shrunk as the software tools provided get better auto-SQL for sourcing the data, automated model-building functions, and automated visualizations, thus removing even that as a special skill.

~~~
moneil971
Like automation, the ideal is that workers who are no longer doing the jobs that "AI" (aka automated systems) can now do will move on to higher-level or more interesting work. But I agree that there's often far too much trust that a system can now run itself, with little to no checks or humans to tune the system.

------
douglaswlance
So long as any form of automation has existed, people have been claiming that
the sky is falling and that automation will put people out of work. And in
some respects it does, but in the grand scheme of things, quality of life
improves and people find work.

In the most simple terms it works like this:

1. Workers create automated systems.

2. Automated systems create wealth.

3. Wealth creates workers.

And the cycle repeats, ad infinitum.

So long as there are people with money, there will be jobs to be done. Perhaps
in the future, AI employers will hire human workers that are trained by AI
educators to do human tasks. There may come a time where AI/Robotics are
better than humans at every task, and when that day comes, humans will compete
on price, and when they're priced out of the market, they'll either merge with
the machines or simply be left behind. But that time is many hundreds of years
away, and I'm not convinced that machines would even want to stay on Earth.
Space is much more conducive to a machine society.

~~~
noego
Well said. Job creation is not the hard part. Eliminating jobs is. Every time
a job becomes automated, the workers eventually find other
industries/professions to work in, boost aggregate productivity in those other
sectors, and thus end up creating more wealth for society as a whole.

[https://outlookzen.com/2019/01/05/prosperity-comes-from-elim...](https://outlookzen.com/2019/01/05/prosperity-comes-from-eliminating-jobs-not-saving-them/)

~~~
canjobear
“Eventually” is the key word.

Lives can be ruined during that eventually, and some don’t make it.

~~~
douglaswlance
The same AI that puts people out of work, will be used to make complex tasks
simple enough for unskilled workers, as well as training those unskilled
workers to be more skilled.

------
jatsign
Got a recruiting email about a company working in Augmented Reality, remote.
Normally I'd ignore recruiting emails, but the idea of sitting at home wearing
a hololens sounded too cool to not at least LISTEN to the pitch.

Yah, they're not doing AR. The guy explained it was a marketing gimmick for
VCs (though he made it sound nicer than that).

------
iandanforth
The author is uninformed and wrong. The hype wave propagates faster than the
utility wave, but if you believe there was no lightning just because you
haven't heard the thunder yet you will be sorely mistaken.

Function approximation for classification tasks is slowly but surely "eating
the world."

Planning and sequential decision making based on RL are being hybridized with
classical control methods for demonstrated utility.

Reasoning, rapid learning, and useful adaptation are being attacked in
hundreds of research papers a month.

Remember that technologies don't arrive; they go through a phase transition from "that's not real X" to "that's obvious and boring", with almost nothing in the middle.

Whether you want to call all the above "AI" is irrelevant. Tasks which were
recently assumed to require humans have been shown to be tractable with other
methods.

------
biophysboy
I don't know if I agree with the author's argument that AI is bad only because
people are bad. I'm aware that garbage in yields garbage out. But I also find
myself squirming at the idea of accomplishing company goals by identifying
nonlinear patterns in data with networks. Especially if those goals are
modifying user behavior, a la watchtime.

Am I being a paranoid biophysicist? This isn't my field. If I use AI, am I not, by definition, abdicating responsibility? At least with a regular computer program, I understand every step of the algorithm and can judge if it's doing the right thing. But with neural networks I'm just feeding data into a black box and hoping the ends justify the means. I don't actually know what patterns it is finding and using.

------
currymj
this article is of course reasonable, but what’s depressing is we genuinely
have made really incredible strides in machine learning. like enough that it
shouldn’t be possible to overhype it. of course people manage though.

------
musicale
"No one has any idea what “artificial intelligence” even means."

Usually AI refers to solving real-world problems that humans (or smart
animals) are good at that aren't easily solved using traditional algorithms
such as sorting/searching, graph algorithms, numerical methods, combinatorial
optimization, etc..

Sometimes it also refers to a computer opponent in a game, or the algorithms
and strategies used by such an opponent.

There is something to the statement, though, in that AI seems to be a bit of a moving target: once we have a good algorithm (and input data) to solve a problem (e.g. winning at checkers) conclusively, it may no longer qualify as "AI", even if it did for years up to that point!

------
jammygit
Companies try to replace people with machines every time they think it's even half possible. A recent example with Suncor:

"As Suncor Energy prepares to shed 400 jobs to prepare for the implementation
of driverless ore-hauling trucks, the union representing workers at the
company is publicly condemning the decision."

[https://globalnews.ca/news/4000125/suncor-union-outcry-autom...](https://globalnews.ca/news/4000125/suncor-union-outcry-automation-oilsands-jobs/)

------
vnorilo
I liked the article. It mentions the fallacy of Objectivity by Indirection: "I
didn't say that - the system (I designed) said it!"

~~~
djsumdog
I mean, but we've been doing that with mathematical models for years. I mean,
just look at the controversy over books like The Bell Curve or Freakconomics.

------
natch
>But we don’t need to “regulate AI.” As we’ve seen, “artificial intelligence”
is mostly a constructed catch-all term for lots of different types of
technology

The author forgets here that AI is not static. It’s a moving target.

The examples of AI that exist today may be easy to brush off, but let’s not
assume that extends to examples of AI in the future.

In the meantime it is super annoying that simple techniques get marketed as
“AI”.

------
jerkstate
> And it turns out that racially discriminatory lending and coverage can be
> quite profitable – particularly when enabled with technological precision.

this is a troubling statement, is this just the author speaking or is there
data to back it up?

~~~
currymj
there’s decades of history about the practice of redlining in the US (good
term to Google), as well as more recent court cases related to subprime
lending.

~~~
securingsincity
More recently, Facebook was charged by HUD for allowing people to buy ads and filter by race, sex, and other statuses protected under the Fair Housing Act: [https://cdn.theatlantic.com/assets/media/files/hud_v_faceboo...](https://cdn.theatlantic.com/assets/media/files/hud_v_facebook.pdf)

------
sonnyblarney
"Global, monopolistic platforms like Google, Facebook and Amazon do not pursue
“AI” as a science project. Nor do hospitals, insurance companies, banks,
airlines or governments. They do so for specific, strategic purposes, which in
corporate settings are aimed at generating new revenue. "

Well, many companies will throw some money willy-nilly at AI "because" HBR said to, or because consultants are pushing it, or because they "know it's coming" and need to start somewhere.

It might be rational to do some experimenting, but many companies don't really have a clue; they're just throwing money at it. FB and G obviously have a clue, but even they have so much money and talent that they can afford to experiment even when the ROI might be way, way off.

~~~
kthejoker2
I sell AI and ML service-based solutions to large enterprises, and if anything they're highly skeptical of AI and ML adding value; they are always comparing its ROI to the alternative of basic data management, data mastering, and out-of-the-box analytics tools.

I'd say they're a lot savvier than anyone in this thread is giving them credit
for.

