
Machine Learning Meets Economics, Part 2 - nicolaskruchten
http://blog.mldb.ai/blog/posts/2016/04/ml-meets-economics2/
======
gradi3nt
So the Sell/Check/Recycle model only requires 33% of the labor compared to the
Check-only model. The author suggests that this means tripling production
would be possible, but that depends on QA being the factory's bottleneck. If
QA isn't the bottleneck, then you might as well fire 2/3 of Quinn's QA
workers. Hooray, the computer didn't take my job, but it took the jobs of the
guy to my right and the gal to my left.
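A toy back-of-the-envelope sketch of where that 33% comes from (the unit
counts and minutes-per-check below are invented for illustration, not from
the article):

```python
# Toy model: a classifier auto-sells/auto-recycles the gadgets it is
# confident about, and humans only check the uncertain remainder.
# All numbers here are made up for illustration.

def qa_labor(units, auto_handled_fraction, minutes_per_check=2.0):
    """Human QA minutes needed when a model handles the easy cases
    and humans check only the rest."""
    return units * (1.0 - auto_handled_fraction) * minutes_per_check

check_only = qa_labor(units=10_000, auto_handled_fraction=0.0)
triage = qa_labor(units=10_000, auto_handled_fraction=2 / 3)

# If the model confidently disposes of 2/3 of gadgets, human labor
# drops to roughly 1/3 of the Check-only baseline.
print(triage / check_only)
```

Whether that freed-up third becomes tripled output or layoffs is exactly the
bottleneck question above; the arithmetic is silent on it.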

~~~
xbmcuser
I came to the same conclusion: AI will increase efficiency, so a company will
need fewer workers. AI-augmented humans might be better than AI or humans
alone, but the result is the same: loss of human jobs. I have to say, though,
that most automation these days increases efficiency for companies but passes
the work on to the end user. With true AI that will probably change.

~~~
flatfilefan
Hopefully these people get another job. After all, they have proven they can
reliably show up for work every single day and cope with the corporate
chores. It would be a waste not to employ them for some other work. That's
what entrepreneurship is about.

------
Falcon9
Just a point of discussion:

> “Exactly,” agreed Danielle, “and just like with the previous project, the
> model will get some wrong, but this will be outweighed by the cost savings
> of not having to check every single gadget.”

And that's how we consumers go from being able to receive a 100% reliable
product every time (i.e. "Quinn's team works very hard to avoid penalties") to
needing to go through the hassle/delay/cost of returns. Sure, the QA
department "has to pay" for the return, but does that economic model
accurately include the cost of lost goodwill?

~~~
rlucas
Well, I'd object to the idea that even a careful human team in production can
hit 100%. In domains where the human "gold standard" is somehow falsifiable
(that is, not tautologically correct as in some judgment call situations), it
always ends up being a numbers game until the humans can no longer provide
100%.

It's kind of frustrating when you're trying to sell ML-based solutions to a
skeptic. I've found that executives will often try to poke holes in the
predictions, especially if the ML solution is risky or potentially threatening
to them.

It helps a lot to frame things with a known human error rate and cost, as the
Data Scientist in the story does, because then the conversation becomes win-
win (how do we optimize for best outcomes) rather than unwinnable (why isn't
your fancy ML algorithm right about this example X which I can plainly see
myself is wrong??).
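One way to make that win-win framing concrete is to put the human baseline
and the model on the same expected-cost footing. A minimal sketch, with
entirely hypothetical costs and error rates:

```python
# Frame the conversation as total expected cost per gadget, not
# "is the model ever wrong". Both humans and models have a miss
# rate; the numbers below are invented for illustration.

def expected_cost(check_cost, miss_rate, defect_rate, penalty):
    """Cost of inspecting one unit plus the expected penalty for
    defects that slip through anyway."""
    return check_cost + defect_rate * miss_rate * penalty

human = expected_cost(check_cost=0.50, miss_rate=0.02,
                      defect_rate=0.05, penalty=20.0)
model = expected_cost(check_cost=0.10, miss_rate=0.05,
                      defect_rate=0.05, penalty=20.0)

# The model misses more often, yet can still cost less per unit.
print(human, model)
```

With numbers like these on the table, "the model got example X wrong" stops
being a trump card, because the human baseline gets things wrong too, at a
known rate and price.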

~~~
crdb
> I've found that executives will often try to poke holes in the predictions,
> especially if the ML solution is risky or potentially threatening to them

In particular, automation that reduces headcount reduces their justifiable
budget and therefore their power within the firm, salary and benefits, and
external status. For an example of the latter, Harvard Business School asks
how many people you are currently managing when you apply for an MBA.

This creates a strong incentive to block any attempt at automation or
increased efficiency, especially when the inefficiency is not reflected in
the KPIs used to gauge the executive's performance. Customer satisfaction and
error rates are rarely measured well, nor refreshed often enough, to serve as
such a KPI. Blocking is easiest to do by seeding mistrust in the person
attempting to build the automation, and in the automation itself.

Part of being an effective data scientist/big data engineer/whatever the
buzzword du jour is consists of figuring out what KPIs the executive wants to
maximise and sell him on that instead. The good old "work on making your boss
look good".

------
rlucas
The main gist of the article is that you can use ML to catch "slam dunk"
categories and give people the harder tasks.

Ten or more years ago, lumber mills started using computer techniques to
suggest optimal cuts (which dimensions you can get out of a given log,
understanding that larger-dimension lumber has a higher sale value per unit
mass), flashing the suggested cuts to an operator for review/approval, which
only rarely is overridden.
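The 1D version of the mill's problem is the classic rod-cutting dynamic
program. A minimal sketch, with a made-up price table standing in for the
per-dimension sale values:

```python
# Sketch of the 1D cut-optimization the mill software solves: given a
# log of integer length and a sale price per piece length, find the
# cut plan with maximum total value. Prices here are illustrative.

def best_cuts(length, price):
    """price[i] = sale value of a piece of length i (price[0] unused).
    Returns (best_value, list_of_piece_lengths)."""
    best = [0.0] * (length + 1)     # best[n] = max value for length n
    choice = [0] * (length + 1)     # first piece in an optimal plan
    for n in range(1, length + 1):
        for piece in range(1, min(n, len(price) - 1) + 1):
            value = price[piece] + best[n - piece]
            if value > best[n]:
                best[n], choice[n] = value, piece
    # Walk the choices back to recover the actual cut plan.
    pieces, n = [], length
    while n > 0:
        pieces.append(choice[n])
        n -= choice[n]
    return best[length], pieces

value, plan = best_cuts(10, price=[0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30])
print(value, plan)
```

The real mill problem is 3D, with kerf widths and defect maps, which is why
it gets flashed to an operator at all; but the economics (maximize value of
the cut set per log) are the same shape.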

~~~
dmix
> which only rarely is overridden

So what's holding them back from replacing the cutter with a machine as well?

~~~
wsinks
Maybe they like the people cutting. Maybe the people do other jobs as well,
like moving the wood in or out?

------
joe_the_user
Sure, it's true that a given technology by itself doesn't lead to job losses.
But this really doesn't scratch the surface of the way the labor market has
been transforming.

A human being is remarkably flexible: they can do lots of things robots and
computers can't do as well, and can program themselves quickly to do things
that take a long time to program into a computer.

If a human being is also really cheap, and they are in many places, then a
human being is a really good deal. The effect of automation and (even more)
the globalization of work is that it reduces the marginal price of labor:
there's still demand for people, but at a lower price, and being flexible,
people have accommodated.

But when life is cheap and work is constant, it degrades all the things that
make human society pleasant, even, to some extent, for the few with money to
burn.

------
StanislavPetrov
Unfortunately the parameters of the algorithms, which are set by humans, end
up being just as flawed. In the earlier example regarding the discarding of
widgets, no cost was attributed to discarding these widgets. Where do they
go? Why is waste disposal free (or virtually free)? The environment is a
finite place with tremendous monetary value. The fact that businesses are
able to profit off the destruction of the environment (in this example,
throwing away widgets that might not work) is a fundamental flaw in the
economic formula. How can one discuss efficiency while failing to factor in
the destruction of a valuable, non-renewable, finite resource?
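That objection can be folded straight into the same expected-cost formula:
give each discarded widget an explicit disposal/externality cost and the
recycle-vs-sell break-even point moves. A sketch with invented numbers:

```python
# Sketch: once discarding has an explicit cost (lost materials plus
# disposal/environmental externality), "just recycle anything
# uncertain" gets more expensive. All numbers are illustrative.

def recycle_cost(p_defective, salvage_value, disposal_cost):
    """Expected cost of discarding one widget: you forfeit the value
    of the (1 - p_defective) good ones and pay to dispose of it."""
    return (1 - p_defective) * salvage_value + disposal_cost

def sell_cost(p_defective, penalty):
    """Expected cost of shipping it: pay the penalty if defective."""
    return p_defective * penalty

p = 0.25  # model's estimated defect probability for a borderline widget

# With free disposal, recycling this widget looks cheaper:
print(recycle_cost(p, salvage_value=10.0, disposal_cost=0.0)
      < sell_cost(p, penalty=50.0))   # True

# Pricing in disposal flips the decision for the same widget:
print(recycle_cost(p, salvage_value=10.0, disposal_cost=6.0)
      < sell_cost(p, penalty=50.0))   # False
```

The model itself is indifferent; the flaw, as you say, is in which costs the
humans chose to put into the formula.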

------
hobb0001
Not that I'm a futurist, but I predict that the 2020s will be the decade
where AI becomes unsettling (beyond image recognition, which is already
becoming unsettling), and that the 2030s will be the decade where AI has
_serious_ economic consequences.

~~~
alanwatts
Curious word choice: "economic _consequences_"

What do you imagine they will be? What about economic _benefits_?

~~~
hobb0001
Think of DeepDream x 1000. First, start with self-driving cars and semis
cutting out taxi and freight services. Then, move on to musicians and
composers no longer being hired to do mundane (90%) work, or video game levels
being procedurally generated with intelligent design. From there, go on to
everything that could be automated today, but requires a little bit of human
ingenuity.

Economic transactions will start to look very different than what's normally
accepted today.

