
The Grok prediction engine from Numenta announced - habiteer
http://www.numenta.com/grok_info.html
======
X4
It's fascinating stuff going on there at Numenta. Thanks for submitting this!

People have had huge expectations for AI research for so many decades that it
gave the field a bad image. But something about that is wrong, because every
step towards understanding the brain has brought all of us a step forward.

We know Nature is brilliant, so it's absolutely logical that it can't be
copied by some funky scientists over a weekend. It could have been the great
AI depression, but we'll get over it. I think the whole disappointment spiral
led to the AI community getting treated like "wadda wadda wadda".

To me it's like a prehistoric human finding laser technology in a crashed UFO
and not knowing what to do with it, which leaves him disappointed in laser
technology. So before whining about how slowly we're figuring out the human
brain and the mind, we should invest more in education. The more people who
can research this topic, the faster we get amazing results.

When research on a topic doesn't yield a promising result, it doesn't mean the
researchers are incompetent; it means we just don't have enough education, or
enough researchers, to complete the puzzle.

Before consuming popular information:
<http://en.wikipedia.org/wiki/Harold_Lasswell>

At the time the Numenta stuff got popular, the critics also got a voice:
<http://www.acceleratingfuture.com/michael/blog/2010/04/ben-goertzel-on-numenta/> and
<http://scienceblogs.com/pharyngula/2010/08/ray_kurzweil_does_not_understa.php>

PS: enjoy the earworm: <http://www.musick8.com/mclips/34wadda.mp3>

~~~
felipemnoa
Inventing/discovering strong AI will be one of the greatest achievements of
all time, in my opinion. It will change our world in unimaginable ways.
Standards of living will go up for everybody, even in the poorest nations. I
just hope we don't manage to kill ourselves with that technology.

How could we kill ourselves with such technology? Simple: the government will
probably weaponize AI and build robots with it. An agent somewhere will issue
a command to "eliminate such and such", and the robot will interpret it as
"eliminate humanity" because of a bug in its system. Through bad luck, the
safeguards meant to prevent this also fail, like the cascading failures that
bring down airplanes.

Upon receiving its orders, the first thing the robot will do is go into
hiding, replicating itself as much as possible while remaining undetected.
Several decades later, all of its descendants detonate the world's entire
nuclear arsenal, along with the new ones they've built. Mission accomplished.
(I think this is the plot for my first sci-fi book!)

------
reader5000
I'll remain skeptical until it is shown how their "cortical algorithms"
benchmark against, say, Naive Bayes. Good marketing, though.

~~~
Element_
Last year Dr. Davide Maltoni published a paper (Pattern Recognition by
Hierarchical Temporal Memory) [1] evaluating the algorithms laid out in Dileep
George's original PhD thesis. The results were pretty impressive. There have
been many improvements to the algorithm since then, too.

[1] <http://bias.csr.unibo.it/maltoni/HTM_TR_v1.0.pdf>

~~~
srconstantin
Read the paper; HTMs don't seem to do better than other object-recognition
algorithms at recognizing shapes, especially because there are visual
properties they ignore (curvature, global topological properties, etc.).
Accuracy on the picture datasets is only 60-70%. What's interesting about HTM
is its generality. I can't judge whether it would be good for the Grok
prediction engine, but I know more about image recognition, and you definitely
don't want to use it for that.

------
Caligula
Jeff Hawkins, who also founded Palm, is a cofounder of Numenta. He started the
company after writing 'On Intelligence'. He has some great lectures on
YouTube.

This seems really cool, I signed up for the beta. I hope I am selected.

------
sown
Some ML/AI researchers I know consider Numenta to be something of an "outsider
artist" effort, and hold its research in fairly low regard.

Now, you don't have to upvote me for this, but you don't have to downvote me
either, because I happen to know experts in ML/AI who disagree with Jeff
Hawkins. Got that? Downvoting me doesn't change those people's opinions. I'm
just relating what I've been told by others more qualified and knowledgeable
in this area than I am. I have to say all this because people seem to give
Numenta very dogmatic approval, and anyone who questions it on discussion
boards gets flamed in a religious manner.

Edit: Hey, look, a downvote.

So, is Numenta genuinely really novel and new or is it snake oil? Or is it
somewhere in between? Does it actually work better than what we have now?

~~~
joe_the_user
I'd love to read a substantial critique of Hawkins.

But I'm downvoting you for a post with _nothing_ but hearsay, appeals to
authority and arguing with your downvotes...

And no, I don't care what your important friends think of Hawkins. I'd care if
they wrote something substantial I could read but otherwise, hey, get off my
lawn...

~~~
sown
All I did was ask -- ASK -- whether Numenta actually works.

So does it work? Better than what we have? Is it really novel?

ps: You're failing to make a key distinction between argument from authority
and fallacious argument from authority. You can safely argue from authority if
the person is a genuine expert and a consensus of experts are all saying the
same thing. If most of what someone says on a topic holds up virtually all the
time, you can reasonably treat that person as an expert. A Southern Baptist
can argue from authority about biblical interpretation, given that they should
know Greek/Hebrew/Latin, ancient Mediterranean history, etc. They can't argue
from authority about evolutionary biology if they don't know anything about
it.

The reason I posted here is that I don't talk to my ML/AI friends often, I
don't have many of them, and they haven't explained why they think what they
do.

I don't understand why it's taking so much effort to get a simple bit of
proof. Surely there's a corpus of data with performance metrics that can
conclusively demonstrate it one way or the other?

------
wiradikusuma
How is it different from Google Prediction API?
<https://developers.google.com/prediction/>

~~~
msellout
It uses a different algorithm. Google Predict likely runs a battery of several
types of supervised learning algorithms and lets them "vote". Grok uses
Numenta's Hierarchical Temporal Memory algorithm.
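That voting idea can be shown with a toy majority-vote ensemble. This is purely illustrative: the learners, features, and labels below are made up, since nobody outside Google knows what Predict actually runs internally.

```python
from collections import Counter

def majority_vote(learners, x):
    """Return the label chosen by the most learners for input x."""
    votes = Counter(learner(x) for learner in learners)
    return votes.most_common(1)[0][0]

# Three toy "learners", each just a different hand-written decision rule
# standing in for a trained model (e.g. for spam classification).
learners = [
    lambda x: "spam" if x["exclamations"] > 3 else "ham",
    lambda x: "spam" if x["links"] > 2 else "ham",
    lambda x: "spam" if x["caps_ratio"] > 0.5 else "ham",
]

sample = {"exclamations": 5, "links": 3, "caps_ratio": 0.2}
print(majority_vote(learners, sample))  # → spam (two of three vote "spam")
```

The appeal of the ensemble is that the combined vote can beat any single learner when their errors are uncorrelated.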

~~~
X4
So you're saying Google is using a battery of hidden Markov models alongside
classical or slightly novel algorithms?

I could believe that, given the quality of the Google Predict results. Plus,
Google buying <http://recordedfuture.com> suggests their own prediction
algorithms aren't top notch (to put it mildly).

------
msellout
It seems like HTMs would be prone to the same over-fitting problems that other
types of neural networks have. I tried to find some comments in the Numenta
material about this, but didn't see anything about Grok's strategy to avoid
over-fitting. Can anyone help me figure this out?

~~~
marmaduke
I breezed through the PDF on the HTM Cortical Learning Algorithms, and there's
nothing new since the '80s: over-fitting is still a problem. Perhaps there's
an implicit assumption that because learning is online, over-fitting doesn't
happen or doesn't matter.

------
puredanger
If you're interested in learning more, Jeff Hawkins will be doing a keynote
this year at Strange Loop talking about it in more depth.
<http://thestrangeloop.com/sessions>

------
tocomment
On a related note what are some good algorithms for anomaly detection?

I want to run my credit card transactions through it to flag things for
review.

~~~
msellout
Look for anything that is more than 2 standard deviations from the mean.
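A minimal sketch of that rule (the amounts are toy data, and a single global mean/standard deviation is a simplification; a real fraud system would compute these per merchant, category, or time window):

```python
import statistics

def flag_anomalies(amounts, k=2.0):
    """Flag amounts more than k standard deviations from the mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) > k * stdev]

# Six ordinary charges and one outlier.
transactions = [12.50, 9.99, 14.20, 11.75, 13.40, 10.80, 250.00]
print(flag_anomalies(transactions))  # → [250.0]
```

Note the outlier itself inflates the mean and standard deviation, so for small samples a robust variant (median and median absolute deviation) is often preferred.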

