

Microsoft Challenges Google’s Artificial Brain With ‘Project Adam’ - bra-ket
http://wired.com/2014/07/microsoft-adam/

======
pradn
The Hogwild approach mentioned is extremely simple: when you do incremental
updates, don't hold locks. You'll rarely conflict, because the number of
parameters being updated is usually much higher than the number of threads
doing the updates. And even when you do conflict, it doesn't hurt accuracy in
any noticeable way. (I found this empirically.)

It looks like Microsoft is applying this technique not just to stochastic
gradient descent, but to training neural networks in this case.
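
Roughly, the idea looks like this (a toy Python sketch of my own, not anything
from the Hogwild! paper or from Microsoft; a real implementation would run
across cores in C++ or with multiprocessing, since CPython's GIL limits true
parallelism for updates this small):

    import threading
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(8000, 20))        # toy data for a linear model
    true_w = rng.normal(size=20)
    y = X @ true_w + 0.01 * rng.normal(size=8000)

    w = np.zeros(20)                       # shared parameters; no lock anywhere
    lr = 0.01

    def worker(w, indices):
        for i in indices:
            grad = (X[i] @ w - y[i]) * X[i]   # SGD gradient for one example
            w -= lr * grad                    # in-place, unsynchronized update

    parts = np.array_split(rng.permutation(len(X)), 4)
    threads = [threading.Thread(target=worker, args=(w, p)) for p in parts]
    for t in threads: t.start()
    for t in threads: t.join()

    print("distance from true weights:", np.linalg.norm(w - true_w))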

~~~
liuliu
Actually, I don't quite understand their method. At the end of the day, it is
not one gigantic shared memory, as the Hogwild! paper conveniently assumes. You
need to synchronize each slave's model with the master model, and in the
model-parallelism case you need to send responses to another machine for
further processing. This actually sounds extremely complicated to implement in
a modern HPC environment (assuming MPI as the main implementation technique).

Sorry if this sounds more defensive than it is meant to be. I just think that
there is much more to the implementation than the simple Hogwild! approach
(probably some clever pipelining to hide the communication cost behind
computation, much as the Hogwild! approach relaxed the consistency of the
model).
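
If I had to guess at the shape of it, it would be something like the following
(a hand-wavy Python sketch of my own, with stand-in functions for the real
MPI/RPC calls; the only point is that the gradient push and weight pull happen
on a separate thread while the worker keeps computing):

    import queue
    import threading

    def push_gradient_to_master(grad):    # stand-in for the real MPI/RPC send
        pass

    def pull_weights_from_master():       # stand-in for fetching fresher weights
        return None

    def compute_gradient(step):           # stand-in for the local forward/backward pass
        return [0.0] * 10

    outbox = queue.Queue()

    def communicator():
        while True:
            grad = outbox.get()
            if grad is None:              # sentinel: training finished
                return
            push_gradient_to_master(grad)   # the network round trip happens here...
            pull_weights_from_master()      # ...while the main loop keeps computing

    comm = threading.Thread(target=communicator)
    comm.start()

    for step in range(100):
        grad = compute_gradient(step)     # compute the next update locally
        outbox.put(grad)                  # hand it off; don't block on the round trip

    outbox.put(None)
    comm.join()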

It seems that this year's ImageNet classification competition results will be
more interesting than I expected.

------
nwatson
I was surprised the article didn't mention IBM's Watson project (the
Jeopardy-winning "AI" bot); I had assumed IBM would be farther along in actual
commercialization in "outside" domains. Google and others appear to focus on
using AI techniques for their core business (though Google's "core" is very
diffuse).

This article (January 2014) implies IBM is having trouble turning Watson into
a profitable business: [http://www.zdnet.com/ibm-forms-watson-business-group-will-
co...](http://www.zdnet.com/ibm-forms-watson-business-group-will-
commercialization-follow-7000024929/)

~~~
m_ke
That's because IBM Watson is a gimmick that doesn't do much besides search.

~~~
mkautzm
I thought the end goal with Watson was to develop something that can
effectively diagnose medical issues or some such?

~~~
m_ke
That's how they're selling it, but at the end of the day it's just a search
engine with some NLP up front to parse the query.

The reason I called it a gimmick is that it still partly relies on a rule-based
system and hand-engineered features.

------
Locke1689
_This is because, when each machine updates the master server, the update
tends to be additive._

Hmm, reminds me of GFS (append-only file system). It seems I managed to really
shortchange my machine learning education during my degrees. Anyone have
recommendations for good introductory resources on practical machine learning
design and implementation?

~~~
dekhn
You're conflating the append operation with (in their example) addition, which
is completely different. Append makes a list longer. Addition combines two
elements in a reduction. Addition is commutative. And the other issue here is
that learning is tolerant of a small amount of noise. You don't want your
filesystem of record to be.
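
To make the contrast concrete (a toy example of my own, not from the article):

    w = 10.0
    delta_a, delta_b = 0.5, -1.25
    # Additive updates commute: whichever worker's delta reaches the master
    # first, the final parameters are the same.
    assert (w + delta_a) + delta_b == (w + delta_b) + delta_a

    # Appends don't: the order of arrival *is* the content of the file.
    assert ["record from A", "record from B"] != ["record from B", "record from A"]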

~~~
Locke1689
I'm not, actually. They both share the commonality of non-destructive updates,
which seems to be a recently pervasive technique for performing unsynchronized
parallel work. I do grant that the techniques are not _identical_, though.
Certainly GFS has at least minor locking so that it can grab the current
high-water mark.

------
dghf
"Project Adam"? Second Impact, anyone?

~~~
astrobe_
Also: Buffy the Vampire Slayer (season 4). It's a bit strange that people make
biblical references for such projects when the "don't play God" theme (itself
a variation on the myth of the golem) is, like it or not, part of popular
culture. Or maybe the team did it deliberately, as a kind of reminder?

~~~
TeMPOraL
I sometimes wonder how people can reconcile the "don't play God" meme with the
concept of humans being "created in the image of God".

------
chriskanan
This is a duplicate of this earlier post:
[https://news.ycombinator.com/item?id=8031546](https://news.ycombinator.com/item?id=8031546)

------
icantthinkofone
Despite the hoopla, isn't this project still in Microsoft's lab while Google's
is already in use online?

------
rbanffy
I like seeing someone take a contrarian approach and explore a part of the
implementation space no one else has.

------
thefreeman
I'm only about... 50% serious when I say this. But all of these huge tech
companies getting into serious AI and machine learning really makes me worry
that the Singularity is coming way sooner than I would have hoped. Like in the
next 10 years.

~~~
shock-value
Most of the recent advances don't really constitute what you are probably
thinking of as "serious" AI. What's happening now is mostly just scientists
putting newly available massively parallel processing units (GPUs etc.) to
work at sort of "brute forcing" the problem of AI.

You could technically say it's a step above brute-forcing in that, after
randomly assigning neural weights, gradient descent and other tricks are used
to settle into good solutions more quickly. But the science is simply nowhere
near the level of the human brain, and the current state of the art is still
only "proficient" at relatively simple pattern matching, rather than making
decisions that actually influence the state of the world it is perceiving.
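
The "step above brute-forcing" part is essentially this kind of loop (a
deliberately tiny illustration of my own, nothing like a real deep net):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)

    w = 0.01 * rng.normal(size=5)           # start from small random weights
    lr = 0.5
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))  # a single logistic "neuron"
        grad = X.T @ (p - y) / len(y)       # gradient of the log loss
        w -= lr * grad                      # step toward a better solution

    print("training accuracy:", np.mean((p > 0.5) == (y > 0.5)))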

The deep architectural insights that will likely be needed before computers
can approach the level of human intelligence still haven't been made, it seems
to me, based on what I know of the current research.

~~~
nl
Yeah, this is almost completely wrong - or at least outdated thinking.

While it's entirely likely that better AI algorithms will become available,
the progress being made by "brute forcing" problems with lots of data and lots
of computing power is far greater than was expected 10 years ago.

For example, Facebook's facial recognition software now correctly recognizes
faces 97.25% of the time. Humans do 97.53%[1].

I'd encourage you to read the Norvig et al. paper, "The Unreasonable
Effectiveness of Data"[2].

[1] [http://www.technologyreview.com/news/525586/facebook-
creates...](http://www.technologyreview.com/news/525586/facebook-creates-
software-that-matches-faces-almost-as-well-as-you-do/)

[2]
[https://static.googleusercontent.com/media/research.google.c...](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf)

~~~
paganel
> For example, Facebook's facial recognition software now correctly recognizes
> faces 97.25% of the time. Humans do 97.53%[1].

One could say that is only data; there's nothing "intelligent" about
pattern-matching it.

But I'm biased: I'm a non-believer when it comes to reaching the Singularity,
as in I don't think we will ever be able to build something that will
understand jokes or sexual innuendo, or that will experience the Madeleine
effect
([http://en.wikipedia.org/wiki/Madeleine_%28cake%29#Literary_r...](http://en.wikipedia.org/wiki/Madeleine_%28cake%29#Literary_reference))

~~~
skj
The difference between "artificial intelligence" and "automated problem
solving" is how much the algorithm designers understand what's going on.

Computers will certainly be able to understand jokes or innuendo. At least as
well as people do, anyway.

As far as the Madeleine effect goes, I'm not sure why you consider that
intelligence?

~~~
paganel
> Computers will certainly be able to understand jokes or innuendo. At least
> as well as people do, anyway.

Serious question: are we anywhere close on that? I last looked at AI-related
stuff around 10 years ago, when "sentiment analysis", as it was then called,
was only capable of approximating what "good press" and "bad press" meant.

> As far as the Madeleine effect goes, I'm not sure why you consider that
> intelligence?

I'm making a generalization in thinking that "intelligence" ultimately means
"human intelligence"; I'm also assuming (probably wrongly, since there's no
exact science for it) that things like the "Madeleine effect" sit at the core
of us being human; and my third (and last) generalization is that if you don't
experience things like the "Madeleine effect" you're not really human-like, so
you cannot really be "intelligent". Writers like Asimov and Arthur C. Clarke
have written about this problem (what it means to be human when "robots" come
about) far better than I could ever dream of doing, but my favorite
presentation of the potential issues when the Singularity comes around remains
Stanislaw Lem's "The Cyberiad". Or we could continue focusing on
technicalities only and call it a day.

