
‘Soft’ Artificial Intelligence Is Suddenly Everywhere - jonbaer
http://blogs.wsj.com/cio/2015/01/16/soft-artificial-intelligence-is-suddenly-everywhere/
======
hackuser
Regarding weak vs. strong AI:

"Alan M. Turing thought about criteria to settle the question of whether
Machines Can Think, a question of which we now know that it is about as
relevant as the question of whether Submarines Can Swim." -E.W. Dijkstra [0]

[0]
[http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD898.html](http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD898.html)

------
saosebastiao
Where has the WSJ been for the last 30 years? "Soft" AI has been broadly
commercially successful since the 1980s with the advent of Prolog and rule
engines, and on the machine learning side, there have been major commercial
successes since the late '90s with Bayesian methods, Feed Forward Neural Nets,
Random Forests, SVMs, and most recently with multi-layered Neural Nets.

AI research may not have reached the goal of Hard AI or AGI, but it has most
definitely paid for itself several times over by now.

~~~
bhouston
Amazon's (and later Netflix's) recommender systems, which most people run into
all the time, can then be considered Soft AI. I guess you can even view
Google's PageRank system as a form of Soft AI, as it is just a useful result
from the analysis of big data.

Once we get into Soft AI, there are tons and tons of examples of useful
results. But what is the definition of Soft AI, and where does it stop being
AI and become just number crunching? I guess that is a problem with defining
AI in general.

~~~
saosebastiao
The recurring problem with the definition of AI is referred to as the AI
effect [1]. Basically, it refers to the constantly shifting definition of AI as
knowledge of a technique that originated in AI research becomes more pervasive
and well known. People stop thinking of the technique as AI because now it is
just math or statistics or computer science or whatever. The AI effect
basically states that nothing will ever be AI, because as soon as we understand
it well enough to use it, nobody associates it with AI anymore.

AI researchers have been extremely successful at finding and proving out
specific areas of intelligence. But what they haven't found is the "silver
bullet" that governs all forms of intelligence. For example, intelligence
isn't just symbolic logic, as AI researchers from the 1950s reasoned. It isn't
just relational logic or optimization, as those from the 1970s and 1980s
reasoned. It isn't just statistical reasoning, as those from the '90s and '00s
have reasoned. And it isn't just the ability to process massive amounts of
knowledge (data), as those today currently reason. Significant advances in any
specific area of AI will cause people to think that they have found a victory
of Neats over Scruffies [2] (for example, the currently pervasive view that
Deep Learning is the silver bullet that can actually get us all the way to
AGI), but then an AI Winter forms [3], and all of a sudden the Scruffies were
right all along.

My opinion is that the Scruffies have always been right. The only valid Neat
view is that true intelligence is really just mathematics, but mathematics is
the superset of all algorithms ever. If you want some real philosophy on the
subject from one of the most intelligent men who have ever lived, read up on
Marvin Minsky.

[1]
[http://en.wikipedia.org/wiki/AI_effect](http://en.wikipedia.org/wiki/AI_effect)

[2]
[http://en.wikipedia.org/wiki/Neats_vs._scruffies](http://en.wikipedia.org/wiki/Neats_vs._scruffies)

[3]
[http://en.wikipedia.org/wiki/AI_winter](http://en.wikipedia.org/wiki/AI_winter)

~~~
OvidNaso
Recent trends seem to be that the AI effect is happening in reverse: processes
that were considered normal computational, mathematical, and statistical
functions are being described as AI.

~~~
frik
Exactly, and various dumb devices are branded as "smart" devices, e.g. smart
meters.

------
rbrogan
I also have a dumb question: Where is "AI" in our software development
toolchains? Intellisense comes to mind. Compiler optimizations also come to
mind. What more is coming or is already out there? Lately, I find myself
thinking more and more about Prolog (or something similar) to help encode some
thinking. We have seen functional programming make a comeback. Will we see
more logic programming as well?

~~~
ScottBurson
This is actually an excellent question.

Prolog is not AI, as the Japanese discovered [0].

The question of how to make good use of AI in software development has been
open for almost forty years at least -- ever since Rich and Waters'
Programmer's Apprentice project [1] produced rather unexciting results. (I
don't recall in detail what they did, but it didn't set the world on fire.)
The failure of Prolog -- not that it failed to be useful at all, but it failed
to live up to the hopes -- is instructive. Prolog never delivered on the
promise of declarative programming, because its rigid, unintelligent depth-
first search strategy makes it necessary to understand the execution model and
keep it in mind when writing programs. So it's only _mostly_ declarative.
Prolog achieved some popularity because there are times when brute-force
depth-first search is all you need, particularly if it can be done very fast;
but once you bump against the limitations of that paradigm, Prolog is of
little help.
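
To make the execution-model point concrete, here is a minimal sketch (a toy
depth-first solver in Python with made-up edge facts, not real Prolog) of two
logically equivalent formulations of path/2. Only the clause ordering differs,
yet one terminates and one does not:

    # Facts: edge(a, b). edge(b, c).
    EDGES = {("a", "b"), ("b", "c")}
    NODES = {"a", "b", "c"}

    def path_good(x, y):
        # Base case first, right-recursive rule second; terminates.
        if (x, y) in EDGES:                                  # path(X,Y) :- edge(X,Y).
            return True
        return any((x, z) in EDGES and path_good(z, y)       # path(X,Y) :- edge(X,Z), path(Z,Y).
                   for z in NODES)

    def path_bad(x, y, depth=0):
        # Same logical meaning, but the left-recursive clause is tried first:
        # depth-first search keeps re-entering path/2 and never reaches a fact.
        if depth > 100:
            raise RecursionError("left recursion loops under depth-first search")
        if any(path_bad(x, z, depth + 1) and (z, y) in EDGES  # path(X,Y) :- path(X,Z), edge(Z,Y).
               for z in NODES):
            return True
        return (x, y) in EDGES                                # path(X,Y) :- edge(X,Y).

    print(path_good("a", "c"))   # True
    # path_bad("a", "c")         # raises RecursionError; the declarative reading alone doesn't save you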

The key problem, I believe, is
controlling search. Search is necessary for reasoning about programs; this is
a consequence of Turing's famous Halting Problem proof. But searching --
particularly, searching the massive spaces of possible programs and of
possible _invariants_ of programs -- without getting lost is very difficult.

You can't understand a program without understanding its invariants. All
interesting programs contain recursion (iteration being a special case of
recursion), and understanding a recursive program requires knowing the
postcondition of the recursion. That in turn requires coming up with the
correct _induction hypothesis_. A simple example to clarify the meaning of the
terms:

    
    
      int a[5];
      for (int i = 0; i < 5; i++) a[i] = 42;
    

The postcondition of this loop is:

    
    
      ∀i 0 ≤ i < 5 ⇒ a[i] = 42
    

which says that all 5 elements of the array are equal to 42. The induction
hypothesis is:

    
    
      ∀i (∀j 0 ≤ j < i ⇒ a[j] = 42) ⇒ (∀j 0 ≤ j ≤ i ⇒ a'[j] = 42)
    

where a' is the value of a at the bottom of the loop; this says that if all
the elements of a below i are 42 at the top of the loop, then all the elements
through i (note the "j ≤ i") are 42 at the bottom. Once we have this induction
hypothesis, it's trivial to prove it, and once we've proven it, it's trivial
to prove the postcondition. In simple cases like this, human programmers do
all this reasoning subconsciously; it's second nature to us. Not so easy for
machines, though.
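
As a hedged illustration (a Python stand-in for the C loop above, not a real
verifier), here is a sketch that checks the stated induction hypothesis
empirically at each iteration. The point is that once the hypothesis is in
hand, checking it is mechanical; discovering it is the part that needs search:

    N = 5

    def run_with_invariant_check():
        a = [None] * N
        for i in range(N):
            # "Top of the loop" premise: all elements below i are already 42.
            assert all(a[j] == 42 for j in range(i)), "premise failed"
            a[i] = 42
            # "Bottom of the loop" conclusion: all elements through i are 42.
            assert all(a[j] == 42 for j in range(i + 1)), "conclusion failed"
        # Postcondition: every element of the array is 42.
        assert all(a[j] == 42 for j in range(N))
        return a

    print(run_with_invariant_check())   # [42, 42, 42, 42, 42]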

The problem in a nutshell is that search is required to find the induction
hypothesis; there's no algorithm that will produce it. (This is a clear
consequence of the Halting Problem.) While it seems straightforward in this
case, we all know that most programs aren't anywhere near this simple. And the
search required for real-world cases is still beyond existing AI technology.

[0]
[http://en.wikipedia.org/wiki/Fifth_generation_computer](http://en.wikipedia.org/wiki/Fifth_generation_computer)
[1]
[http://dspace.mit.edu/handle/1721.1/6054](http://dspace.mit.edu/handle/1721.1/6054)

~~~
SCHiM
Seeing your example does make me wonder whether an AI of that level is truly
out of reach. It's well known that human reasoning is flawed by cheap
heuristics thrown in by nature and/or selection.

Looking at your examples, you'd think that just throwing enough
memory/speed/power at a simple brute-force/search approach would solve trivial
examples.

Not to mention that when a programmer analyses code, he's actually executing it
inside his head. Perhaps not all the loops, but looking at a loop he will
follow the steps in the code. So in a sense he is executing a piece of code
inside his head, thus not violating and/or disproving the halting problem(?).

~~~
thret
"when a programmer analyses code he's actually executing it inside his head"

I have never thought about it that way before.

------
Punoxysm
In my mind, the next big step will be machine perception, specifically
full scene understanding in vision.

Computer vision has advanced very rapidly recently in sub-tasks like object
recognition, scene segmentation, 3d-modeling from videos, and others.

Now people are trying to put these elements together, along with text-based
metadata and logic for physical interpretation of images (e.g. the coffee cup
is on the table which is on the ground and abuts the wall; physical
interpretations of spatial information).

Soon enough we'll be to the point where a drone can identify and track most
objects in its line of sight and know their physical relationships to each
other. This opens up tremendous possibilities in robotics.

~~~
jacquesm
> This opens up tremendous possibilities in robotics.

'Shoot the terrorist'.

~~~
technologia
Don't be so pessimistic. Drones have been used for search & rescue, storm
chasing, and even precision farming (precision in the application of
pesticides, etc.)

------
orasis
The two algorithms I have been getting a ton of mileage out of lately are
Bayesian Bandits and variations of the TrueSkill ranking algorithm.

Bayesian Bandits Explained:

[https://www.chrisstucchio.com/blog/2013/bayesian_bandit.html](https://www.chrisstucchio.com/blog/2013/bayesian_bandit.html)

TrueSkill Explained:

[http://www.moserware.com/2010/03/computing-your-skill.html](http://www.moserware.com/2010/03/computing-your-skill.html)
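
For anyone curious, here is a minimal Thompson-sampling ("Bayesian bandit")
sketch along the lines of the first post. The arm names and conversion rates
are made up for illustration; this is not the linked article's code:

    import random

    TRUE_RATES = {"A": 0.04, "B": 0.05}          # hidden ground truth (assumed)
    successes = {"A": 1, "B": 1}                  # Beta(1, 1) uniform priors
    failures  = {"A": 1, "B": 1}

    def choose_arm():
        # Sample a plausible conversion rate from each arm's Beta posterior
        # and play the arm whose sample is highest.
        samples = {arm: random.betavariate(successes[arm], failures[arm])
                   for arm in TRUE_RATES}
        return max(samples, key=samples.get)

    for _ in range(10_000):
        arm = choose_arm()
        if random.random() < TRUE_RATES[arm]:     # simulate one visitor
            successes[arm] += 1
        else:
            failures[arm] += 1

    print(successes, failures)                    # traffic shifts toward "B" over time

The appeal is that exploration falls out of the posterior sampling itself; no
separate explore/exploit schedule is needed.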

~~~
eveningcoffee
[https://www.chrisstucchio.com/blog/2013/bayesian_bandit.html](https://www.chrisstucchio.com/blog/2013/bayesian_bandit.html)

------
Animats
Yes, AI is popping up all over. The amazing thing is that "deep learning"
actually works. We've had neural nets since the 1960s, and by the 1980s, most
people in AI were convinced that was a dead end. Yet now neural nets are doing
useful work. They haven't even changed much. They're just bigger, and with
slightly different weighting and initialization functions.

Now we have the problem that we don't really understand what they're doing
when they do work.
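
A minimal sketch of that point (illustrative numpy only, with invented layer
sizes): the core machinery is still a matrix multiply plus a nonlinearity, and
what changed is mostly the activation (ReLU instead of sigmoid) and
variance-scaled initialization:

    import numpy as np

    rng = np.random.default_rng(0)

    def layer_1980s(x, n_out):
        # Small net, sigmoid activation, naive unit-variance random weights.
        w = rng.normal(0.0, 1.0, size=(x.shape[-1], n_out))
        return 1.0 / (1.0 + np.exp(-(x @ w)))

    def layer_modern(x, n_out):
        # Same machinery; ReLU activation, variance-scaled ("He"-style) init.
        w = rng.normal(0.0, np.sqrt(2.0 / x.shape[-1]), size=(x.shape[-1], n_out))
        return np.maximum(0.0, x @ w)

    x = rng.normal(size=(4, 8))               # a batch of 4 made-up inputs
    print(layer_1980s(x, 3).shape)            # (4, 3)
    print(layer_modern(x, 3).shape)           # (4, 3)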

------
acadien
It feels like they're just reclassifying heuristics as AI and then saying AI
is everywhere. Also there is not a single solid example of what the author
refers to as soft-AI. The author says things like

"... developing highly complex socio-technical systems in areas like health
care, education, government and cities."

Like what? Give us a solid example. What code? Who is writing it? What does it
do?

~~~
jsnathan
I agree with you. There seems to be a new marketing trend right now, where any
product can be tagged with the term "artificial intelligence" if it uses any
of the algorithmic techniques developed in a field of AI research.

Some people are saying that "artificial intelligence" is the new "big data".

One example of this is thegrid.io, which as far as I can gather uses a
constraint solver to help in positioning items on a page, and they call it "AI
websites that design themselves".
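
As a hedged guess at what that might look like (an assumption for illustration,
not how thegrid.io actually works), a constraint-style layout search can be as
simple as brute-forcing column assignments until height constraints are met:

    from itertools import product

    BLOCKS = {"hero": 4, "article": 3, "sidebar": 2, "footer": 1}   # heights (made up)
    COLUMNS = 2
    MAX_HEIGHT = 6

    def solve():
        names = list(BLOCKS)
        # Try every assignment of blocks to columns until one satisfies the constraint.
        for assignment in product(range(COLUMNS), repeat=len(names)):
            heights = [0] * COLUMNS
            for name, col in zip(names, assignment):
                heights[col] += BLOCKS[name]
            if all(h <= MAX_HEIGHT for h in heights):   # constraint: no column too tall
                return dict(zip(names, assignment))
        return None

    print(solve())   # e.g. {'hero': 0, 'article': 1, 'sidebar': 0, 'footer': 1}

A real system would use a proper solver and far richer constraints, but the
flavor is the same: declare constraints, then search for a satisfying layout.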

Whether this is a good thing or a bad thing I don't know, but I think it is
here to stay, because it sounds good...

~~~
Fiahil
From experience, people used to translate "working on AI" as "building nasty
robots". Now, they're just putting an "AI" label on everything they can.

So, there is some progress.

------
zirkonit
And yet no serious research is being put into hard AI. Just like five years
ago. Or ten. Or twenty.

I spent my childhood reading computer books from the '60s, '70s, and '80s, in
which full, general AI seemed to be just around the corner. Obviously this
problem is far harder than it seemed to be.

But the (almost universal) lack of trying is extremely disheartening.

~~~
Epenthesis
Hypothesis: Because the solution to "hard AI" isn't actually "general".

I.e., what humans view as "intelligence" isn't actually an emergent property of
the right set of rules. Rather, it's a massive and hopelessly self-
intersecting set of ad-hoc solutions and special casing, all jumbled together
over millions of years of evolution, producing something that _looks_ general,
but only because we're looking at it from within itself. An easy example:
having a computer identify all images of a thing called a "cup". That's an
arbitrary category with no definition reducible to actual physical
properties. What makes something a "cup"? A human saying it is one.

For something to look like "full, general AI" to humans, we'd need to either
build something that replicates all that specialized hyper-meta-spaghetti-code
mental processing of ours, to a level of fidelity beyond both our current
understanding of the brain's structure and the level of complexity tenable by
human computer science, or leave out those evolutionarily-discovered
"optimization circuits" and require far, far more processing power than we'll
have access to for a good, long time.

~~~
jimmaswell
"What makes something a "cup"? A human saying it is one"

And the human got their definition of "cup" by seeing a lot of things other
people called a cup and extrapolating rules, something that's already
attainable for some problems with approaches like neural networks.
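
As a toy illustration of learning a category purely from labeled examples,
here is a nearest-centroid stand-in with made-up feature vectors (explicitly
not a neural network or a real vision model):

    import numpy as np

    # Features (all invented): [height/width ratio, has_handle, holds_liquid]
    cups     = np.array([[1.2, 1, 1], [1.0, 1, 1], [1.4, 0, 1]], dtype=float)
    not_cups = np.array([[3.0, 0, 0], [0.2, 0, 0], [2.5, 1, 0]], dtype=float)

    # "Learn" each category as the average of the things people labeled that way.
    centroids = {"cup": cups.mean(axis=0), "not cup": not_cups.mean(axis=0)}

    def classify(x):
        # Label a new object by whichever learned centroid it is closest to.
        return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

    print(classify(np.array([1.1, 1, 1])))   # "cup"
    print(classify(np.array([2.8, 0, 0])))   # "not cup"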

~~~
fit2rule
So you're saying we can't have proper AI until we've had some semi-decent AI?
That explains everything: what constitutes decent AI, versus good AI, versus
superlative AI?

The problem is: nobody knows. It's a bit like two cavemen banging rocks
together until one says "hey, let's use these rocks to kill something" and the
other one goes "eh? How?" and the first one says "I don't know, keep bashing
until something dies..."

------
usrusr
How are "cheap parallel computing" and "big data" two things and not one? On
top of that, those are hardly breakthroughs, they are the natural thing to
happen when the CPU speed hike ends. If we can't build faster, we'll build
more. Statistics (aka "soft AI") is the only application that can still grow
because it is so much easier to scale than anything else.

------
atrilla
I like seeing (soft) AI as the buttress that allows us to see farther, like
Newton standing on the shoulders of giants. I guess the sudden presence of AI
is due to the sudden amount of available data (and thus, a potential source of
useful information).

------
thesagan
I may have some dumb questions here, but I'll take a stab at one or two of
them anyhow, at the risk of looking even more dumb in this thread.

Is there a possibility, or likelihood, that hard AI could diverge into at
least two major dichotomies depending on how those systems form and interact?
In how they attain, analyze and process information? In how it is shared? In
deciding what should be shared? In how information is used and grown? In how
to manage ethical questions that are often very two-sided, and difficult to
model with tools like mathematics and logic?

Things like that. And what kind of outcomes could that bring about? AI
debates? AI "wars?" AI manipulating other AI?

I guess I'm simply confused and ignorant as to how the playing field is
shaped, generally and specifically.

I'm probably not even qualified to put these questions out there, because I
have so little knowledge of AI, but the pattern in human thought that strikes
me over and over in life is that a "yin/yang" of opposing perception (and the
resulting oppositional thinking) almost always occurs in naturally intelligent
beings. Further, those inevitable disagreements often end up generating new
knowledge, which, in turn, often splits into two (or more) "camps" yet
again... ad infinitum.

First, is this a valid question and perception of intelligence? And secondly,
is it fair to assume this might apply to other forms of intelligent systems?
Or maybe I am missing a big part of the discussion in AI, which may already be
addressing this (or disregarding it as simply academic or even dead wrong).

To me, at least, it seems counter-intuitive that AI would push in one general
direction (we can debate what we mean by "direction", too). My sensibilities
hazily point to a more dichotomic outcome, perhaps.

Or maybe I'm the one with the intelligence issue! But I'm very interested in
these concepts, and even more interested in what we may be blind to as this
technology continues to evolve and take on new meanings in both our biological
minds and non-biological minds alike.

Maybe someone can help me out? I think I'm missing something, here. Machines
may identify with a certain idea of "certitude", but I have trouble with that,
myself.

And if AI scientists have trouble with that themselves -- and I would hope
they recognize and practice humility in their thinking and interpretation of
meaning -- what does that mean for the scientists working to build such a
powerful and mysterious type of existence?

Sometimes finding the questions to ask, and learning how to ask them, is
harder than teasing out the "solutions", so to speak.

Time for a stiff drink ;)

Edit: Looks like I need to read this, among other texts:
[http://www.theatlantic.com/technology/archive/2014/05/the-military-wants-to-teach-robots-right-from-wrong/370855/](http://www.theatlantic.com/technology/archive/2014/05/the-military-wants-to-teach-robots-right-from-wrong/370855/)

