
Brain-Like AI and Machine Learning - edfernandez
http://www.naiss.io/blog/2016/5/29/m76bvhww07z0algjbjmnl8gobjuakz
======
lawless123
>turning the traditional machine learning ‘black box’ into a ‘clear box’
neural network where new learnings can happen on the fly, in real time and at
a fraction of today’s computational cost (no retraining over the whole dataset
required).

I thought a black box meant that we aren't clear on why it makes the decisions
it makes?

~~~
michael_h
That's correct. You set up the shape of the neural net and you decide what
aggregation function the neurons will use, but the process is largely opaque.
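
To make that concrete, here's a toy sketch (my own illustration, nothing from
the article): you choose the layer shapes and the activation function, but the
trained weights are just arrays of numbers that don't explain the decisions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Architecture you chose: 4 inputs -> 8 hidden units -> 2 outputs.
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

    def relu(z):  # the aggregation/activation function you chose
        return np.maximum(0.0, z)

    def forward(x):
        return relu(x @ W1 + b1) @ W2 + b2

    x = rng.normal(size=4)
    print(forward(x))   # a decision, but W1 and W2 don't explain *why*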

------
aaronsnoswell
This article is terrible.

~~~
strebler
Yep, I checked out at: "...rise of deep learning since 2013, more or less when
Google’s X Lab developed a machine learning algorithm able to autonomously
browse YouTube to identify the videos that contained cats".

First, this started in 2012. Second, it wasn't Google - it was when Krizhevsky
et al published their seminal work. Realistically, Google was slow to adopt
GPUs at the time, which I understand even contributed to Prof Ng's departure.
It was Baidu who launched the first large scale deep-learning based image
search, well ahead of Google.

Google has certainly caught up, but nobody can say they started it (and be
taken seriously).

~~~
daveguy
Technically deep learning started before 2008. Here is a trends paper from
back then:

[http://www.cs.toronto.edu/~fritz/absps/tics.pdf](http://www.cs.toronto.edu/~fritz/absps/tics.pdf)

Here is a google tech talk from 2007:

[https://www.youtube.com/watch?v=AyzOUbkUf3M](https://www.youtube.com/watch?v=AyzOUbkUf3M)

Companies didn't pick it up until more recently. GPU-ification happened in
2009 with Ng's group:

[http://robotics.stanford.edu/~ang/papers/icml09-LargeScaleUn...](http://robotics.stanford.edu/~ang/papers/icml09-LargeScaleUnsupervisedDeepLearningGPU.pdf)

And yes, Krizhevsky et al (Hinton's lab) applied GPU deep learning to ImageNet
in 2010:

[https://papers.nips.cc/paper/4824-imagenet-classification-wi...](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)

~~~
strebler
Those are great links! But the 2008 Hinton paper would not be considered deep
learning; it is classic neural nets. It makes no mention of CNNs or GPUs,
which are what really got this all going back in 2012 with ImageNet /
Krizhevsky.

The ImageNet paper is from 2012, not 2010. That's when the computer vision
community really went "wow". IIRC, almost every entry in ImageNet 2013 was
using CNNs.

~~~
mattkrause
> it is classic neural nets. It makes no mention of CNNs or GPUs

Is using a GPU "essential" for something to be deep learning? I'd always
thought that the important part was some sort of hierarchical representation
learning.

GPUs certainly help, in that you don't want to wait all day for training to
finish, but they're not necessary.
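
As the smallest possible illustration, here's a two-layer net learning XOR in
plain numpy on a CPU. No single linear layer can represent XOR, so the
hierarchy, not the hardware, is doing the work. (A toy sketch of mine, not
from any of the papers above.)

    import numpy as np

    rng = np.random.default_rng(3)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        h = sig(X @ W1 + b1)        # layer 1: learned features
        out = sig(h @ W2 + b2)      # layer 2: combines those features
        # Plain backprop / gradient descent on squared error.
        d_out = (out - y) * out * (1 - out)
        d_h = d_out @ W2.T * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

    print(out.round(2).ravel())     # should approach [0, 1, 1, 0]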

~~~
edfernandez
I think Tsvi Achler's video here will be useful for understanding what the
article is about:
[https://www.youtube.com/watch?v=9gTJorBeLi8](https://www.youtube.com/watch?v=9gTJorBeLi8)

------
JoeDaDude
The machine which turns itself off (the ultimate or sometimes useless machine)
is an old gag by Marvin Minsky and Claude Shannon. [1]
[https://en.wikipedia.org/wiki/Useless_machine](https://en.wikipedia.org/wiki/Useless_machine)

~~~
zodPod
Yeah I was wondering why they put it at the top of the article. Not really
related at all.

~~~
edfernandez
not related, just for fun, all articles require a pic these days

------
mholt
Is there a paper on this? Did I miss the link?

~~~
shepardrtc
It seems to be mostly based on what Achler was talking about. I think you can
find his work here:
[https://scholar.google.com/citations?view_op=view_citation&c...](https://scholar.google.com/citations?view_op=view_citation&citation_for_view=XPjcqoAAAAAJ:ULOm3_A8WrAC)

~~~
mholt
Thank you!

------
AstralStorm
Why would you call what is just machine learning "AI"? Marketing? Self-aggrandizement?

AI is getting machines to solve problems they haven't been explicitly
programmed to solve. As it is, we do not have AI. We have some bits and pieces
of it. The best ML algorithms so far only solve problems they have been explicitly
trained and tweaked to solve.

Online learning has been attempted before, with very limited success. Making
an online learning network stable is an open problem. Such networks tend to
overfit quickly and get stuck.
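
Here's a toy sketch of one classic failure mode, catastrophic forgetting:
naive per-sample SGD tracks whatever the stream shows right now and destroys
what it learned earlier. (My own illustration; the setup and numbers are
invented.)

    import numpy as np

    rng = np.random.default_rng(1)
    w, lr = np.zeros(2), 0.1

    def sgd_step(w, x, y):
        return w - lr * (x @ w - y) * x   # one-sample gradient step

    # Phase A and phase B of the stream come from different targets.
    xs_a = rng.normal(size=(200, 2)); ys_a = xs_a @ np.array([1.0, -1.0])
    xs_b = rng.normal(size=(200, 2)); ys_b = xs_b @ np.array([-3.0, 2.0])

    for x, y in zip(xs_a, ys_a): w = sgd_step(w, x, y)
    loss_a_before = np.mean((xs_a @ w - ys_a) ** 2)
    for x, y in zip(xs_b, ys_b): w = sgd_step(w, x, y)
    loss_a_after = np.mean((xs_a @ w - ys_a) ** 2)

    print(loss_a_before, loss_a_after)  # error on phase A explodes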

~~~
matt4077
> AI is getting machines to solve problems they haven't been explicitly
> programmed to solve.

That's one possible definition of AI, and not a terribly good one – just this
morning, gmail solved my problem "I don't have John's phone#" without ever
being explicitly programmed to "find John's phone#".

It seems people will always redefine AI to exclude whatever advances are made.
Even passing the Turing test will just mean we've built an exceptionally good
chatbot.

So here's my definition: AI is an algorithm that gets distracted from its
original purpose to argue about the definition of AI on the Internet.

...and now back to categorizing these pictures. If I see one more Ostrich I'm
going to segfault so hard.

------
username6000
This is where it starts to get interesting.
[https://www.youtube.com/watch?v=9gTJorBeLi8](https://www.youtube.com/watch?v=9gTJorBeLi8)

~~~
edfernandez
yes, that's right, thanks for the pointer

------
teabee89
Sounds like Numenta's HTM algorithm. What are the differences?

~~~
AstralStorm
Numenta's algorithm is not online learning. It does process and learn from
streaming data, but internally it batches the data into phases, a process not
shown to happen in the brain.

~~~
oxtopus
Full disclosure: I work at Numenta.

In the HTM model (presumably the Numenta algorithm you're referring to),
synaptic weights are updated with every new data point in discrete time steps
(as opposed to continuous). In that sense, HTM is an online learning model.
There was an experimental implementation of Temporal Memory (one component in
HTM) that batched up some of those operations into phases, but that still
happened in a single time step and that implementation has since been phased
out (pardon the pun).
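
For the flavor of "synaptic weights updated with every new data point in
discrete time steps", here is a loose toy sketch of per-sample online updates.
To be clear, this is not Numenta's actual HTM code (see NuPIC for that); the
names and constants are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    n_inputs, n_cols = 32, 16
    permanence = rng.uniform(0.0, 1.0, size=(n_cols, n_inputs))
    connected = 0.5        # a synapse counts as connected above this
    inc, dec = 0.05, 0.02  # per-step learning increments

    def step(x):
        """One discrete time step: activate, then learn from x."""
        overlap = (permanence >= connected).astype(float) @ x
        active = np.argsort(overlap)[-4:]   # crude sparse activation
        # Online update: reinforce synapses that matched this input,
        # weaken the rest -- immediately, with no batching.
        permanence[active] += np.where(x > 0, inc, -dec)
        np.clip(permanence, 0.0, 1.0, out=permanence)
        return active

    for _ in range(10):  # a stream of sparse binary inputs
        x = (rng.random(n_inputs) < 0.2).astype(float)
        step(x)          # weights changed this very time step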

For some additional literature on the topic, see:

- "Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in
Neocortex",
[http://journal.frontiersin.org/article/10.3389/fncir.2016.00...](http://journal.frontiersin.org/article/10.3389/fncir.2016.00023/full)

- "Continuous Online Sequence Learning with an Unsupervised Neural Network
Model",
[http://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00893...](http://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00893#.WCCsMmQrJBz)

- "The HTM Spatial Pooler: a neocortical algorithm for online sparse
distributed coding",
[http://www.biorxiv.org/content/early/2016/11/02/085035.abstr...](http://www.biorxiv.org/content/early/2016/11/02/085035.abstract?%3Fcollection=)

------
johanneskanybal
random article about AI.

~~~
eveningcoffee
No, it is not random, and it is not an article. It is an advertisement.

------
random_gangster
Why do morons come into this field yet don't know how to multiply multi-digit
numbers?

~~~
sctb
Please stop posting like this. We ask that users comment civilly and
substantively on HN or not at all.

