

The next evolution of machine learning: Machine teaching - cscurmudgeon
http://blogs.microsoft.com/next/2015/07/10/the-next-evolution-of-machine-learning-machine-teaching/

======
TheEarlOfPuddin
When trying to solve a Machine Learning problem, it's often not hard to set up
an algorithm that does better than pure chance. As they imply in the video,
and from my own (admittedly rather limited) experience, the hard part is
selecting the correct algorithm for the job and tuning the parameters just
right. Although this is mitigated by flexible models like Neural Networks
(so-called "universal approximators"), achieving the highest accuracy requires
both a Machine Learning background and a decent knowledge of the problem space
(i.e. domain knowledge). Is over-fitting likely to be an issue? Is there a class imbalance?
Are the classes undergoing concept drift? How costly are false positives and
false negatives?
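
That last question can be made concrete. As a minimal sketch (with made-up
scores and costs, not any particular library's API): instead of thresholding a
classifier's score at 0.5, pick the cutoff that minimizes the expected cost of
your errors on a validation set.

```python
# Illustrative sketch with toy data: choosing a decision threshold when
# false positives and false negatives carry different costs.

def total_cost(scored_examples, threshold, fp_cost, fn_cost):
    """Cost of classifying every score >= threshold as positive."""
    cost = 0.0
    for score, is_positive in scored_examples:
        predicted_positive = score >= threshold
        if predicted_positive and not is_positive:
            cost += fp_cost   # false positive
        elif not predicted_positive and is_positive:
            cost += fn_cost   # false negative
    return cost

def best_threshold(scored_examples, fp_cost, fn_cost):
    # Only the observed scores (plus the endpoints) can change the outcome.
    candidates = sorted({s for s, _ in scored_examples} | {0.0, 1.0})
    return min(candidates,
               key=lambda t: total_cost(scored_examples, t, fp_cost, fn_cost))

# Hypothetical validation set: (classifier score, true label).
data = [(0.1, False), (0.3, False), (0.4, True),
        (0.6, False), (0.7, True), (0.9, True)]

# When a false negative is 10x as costly as a false positive,
# the optimal cutoff drops well below the naive 0.5.
print(best_threshold(data, fp_cost=1.0, fn_cost=10.0))  # -> 0.4
```

The point is that even with a fixed model, the domain knowledge (here, the
cost ratio) changes what the "right" answer is.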

This is simply another example of a trend that has been driving computers, and
technology in general, since at least the industrial revolution:
_abstraction_. Although abstracting away the complex to present an easy-to-use
and universally accessible interface has been going on since before computers
were even invented, this specific instance strikes me as radically different.
Microsoft could be enabling billions of people around the world to deploy
their own personal pseudo-AIs. Trying to decide what stocks to invest in?
Deploy an AI to crunch the numbers. Not sure whether that girl in your class
likes you? Ask an AI to assess the situation. Want to confirm if a country has
nuclear weapons? Analyse the media with an AI.

On the negative side I worry that too much abstraction could be wholly
misleading. After all, there are still many people who take everything said in
the media or on the internet at face value. What if the oracle is wrong in a
life-or-death situation? What if it's right in a malicious application? In
this age of big data and non-privacy, just how scary are the potential abuse
cases for a project like this? Moreover, will these pseudo-AIs just become
another source of lying statistics due to flawed assumptions and
implementation?

At the same time I look forward to increased adoption of computer decision
making. Implemented correctly, a computer algorithm comes closer to true
impartiality than most humans ever could: it does not get tired or distracted.
For example, I cannot wait for the day when cars are driven by computers.

I can speculate about the ethical and physical ramifications of personal AIs
until the sun blows up, but I think what's more important to consider is what
comes a few steps after this in the "evolution of Machine Learning".

What do we do when we reach the _singularity_?

We're getting closer and closer every year. I think it's time we move past
having theoretical discussions about it and actually start implementing some
preventive measures. Let's not be as unprepared and passive as we have been
towards climate change. EMP bombs, anyone?

