
Maryland research could improve the AI task of sensorimotor representation - headalgorithm
https://eng.umd.edu/release/helping-robots-remember-hyperdimensional-computing-theory-could-change-the-way-ai-works
======
giardini
Related older, easy-to-read article, paper, and seminar lecture:

1\. "Binary holographic reduced representations for SWI-Prolog" by Jocelyn
Ireson-Paine:

[http://www.j-paine.org/hrr.html](http://www.j-paine.org/hrr.html)

2\. "Dual Role of Analogy in the Design of a Cognitive Computer" by Pentti
Kanerva, in Advances in Analogy Research: Integration of Theory and Data from
the Cognitive, Computational, and Neural Sciences. Workshop. Sofia, Bulgaria,
July 17-20, 1998:

[http://faculty.cs.tamu.edu/choe/mirror/kanerva.ANALOGY98-kan...](http://faculty.cs.tamu.edu/choe/mirror/kanerva.ANALOGY98-kanerva.pdf)

3\. Pentti Kanerva lectures at Stanford on "Computing with High-Dimensional
Vectors":

[https://www.youtube.com/watch?v=zUCoxhExe0o](https://www.youtube.com/watch?v=zUCoxhExe0o)

4\. Additional information for Kanerva's lecture, including the slides, is
available at:

[http://web.stanford.edu/class/ee380/Abstracts/171025.html](http://web.stanford.edu/class/ee380/Abstracts/171025.html)

------
evrydayhustling
This is probably a "publishing in the press" translation issue, but the notion
that neural networks have no memory is dead wrong. DNNs encode incredibly
specific histories of their training data in their weights, which is both a
privacy/security risk [1] and a phenomenon that is actively being exploited in
many DNN architectures. Overall, this has a "brand the framework before
proving exceptional capabilities" feel.

[1] [https://arxiv.org/abs/1802.08232](https://arxiv.org/abs/1802.08232)

~~~
gnode
I don't think that's what they meant. I interpreted it to mean: neural
networks are not trained in a continuous, online manner.

That said, there are neural network architectures which have state, such as
recurrent neural networks, time-delay neural networks, and long short-term
memory (LSTM) networks. However, that state is used to cover problem domains
with a temporal nature, rather than for reflective learning; it's typically
reset between different inputs.
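
To make that concrete, here's a minimal PyTorch sketch of the usual pattern
(toy shapes and names, nothing from the article): the recurrent state evolves
across the timesteps of one sequence and is simply discarded before the next.

```python
import torch
import torch.nn as nn

# Toy LSTM: state persists across timesteps *within* one sequence only.
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

for seq in [torch.randn(1, 5, 8), torch.randn(1, 7, 8)]:
    state = None  # reset: nothing remembered from the previous input
    out, state = lstm(seq, state)  # state accumulates only over this sequence
```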

~~~
sgillen
There are lots of examples of networks being trained online; I see it all the
time in robotic control.

------
taneq
These HDVs (hyperdimensional binary vectors) sound a lot like the SDRs (sparse
distributed representations) that Numenta talks about with their HTMs
(hierarchical temporal memory).

Wow, what a lot of TLAs.

~~~
andbberger
That is probably not a coincidence. Pentti Kanerva, who does a lot of work on
hyperdimensional computing, is at the Redwood Center for Theoretical
Neuroscience, which Hawkins (Numenta) helped found.

~~~
evgen
Kanerva is also the author of the (vastly underrated, IMHO) book _Sparse
Distributed Memory_ from the late 80s, which set the stage for a lot of this
and predates Numenta by a couple of decades...

~~~
taneq
Huh, thanks for the reference. I have heard elsewhere that the one issue with
Numenta's work is that they aren't particularly proactive about citing prior
art, so it's hard to know (as a newcomer) what parts are their innovations and
what parts are incremental improvements on existing techniques.

------
plutonorm
Hyperdimensional binary vectors??

What like:

010101010111010101010111011001

So they are encoding perceptions and actions into the same vector space. Cool.
But the marketing is just gibberish.

~~~
jbotz
Not really. "hyperdimensional binary vector" may sound rather buzzwordish, but
it has a fairly precise definition in this context, and it may indeed be
highly significant for the future of neural computing. You can learn more
about it with the code (and paper) here:
[https://github.com/alexanderganderson/bnn_vis](https://github.com/alexanderganderson/bnn_vis)

~~~
plutonorm
Err. Just read the abstract. It's a binary vector, nothing more. That these
high-dimensional binary vectors behave similarly enough to real vectors for
the purposes of training binary neural networks in no way justifies the name
"hyperdimensional binary vector". It's a vector of Booleans.

~~~
sgillen
You are technically correct; it's just that when a vector of Booleans gets
long enough, it starts to exhibit certain properties that "normal" binary
vectors don't have. Because of this, the authors use the term
"hyperdimensional vector" to let other researchers quickly know what they are
talking about. Here's a decent blog post about some of those properties:
[http://gigasquidsoftware.com/blog/2016/02/06/why-hyperdimensional-socks-never-match/](http://gigasquidsoftware.com/blog/2016/02/06/why-hyperdimensional-socks-never-match/)

I kind of agree that "hyperdimensional" sounds like a dumb buzzword, and maybe
it is. But I do think we should have a word for vectors that are big enough to
fall in this regime.
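
If you want to see the effect for yourself, here's a quick numpy sketch (just
an illustration of the concentration-of-measure property, not anything from
the paper): draw a bunch of random 10,000-bit vectors and every pair ends up
almost exactly 50% apart, i.e. nearly orthogonal.

```python
import numpy as np

# Random 10,000-bit vectors concentrate tightly around 50% pairwise
# Hamming distance -- the property "hyperdimensional" is meant to flag.
rng = np.random.default_rng(0)
D = 10_000
vecs = rng.integers(0, 2, size=(100, D), dtype=np.uint8)

dists = [(vecs[i] != vecs[j]).mean()          # normalized Hamming distance
         for i in range(100) for j in range(i + 1, 100)]
print(min(dists), max(dists))  # both land very close to 0.5 (~0.48 to ~0.52)
```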

~~~
plutonorm
I'm slightly more convinced, but it does seem a pretty silly name. Also, I
have read about these counterintuitive properties of large vectors in other
places without this specific name coming up. But, yeah, interesting link,
thanks :)

------
gre
Not posting your machine learning paper on arxiv.org is a red flag.

~~~
hsdfhsdal
What about vixra.org?

~~~
p1esk
For papers so bad that even arXiv refuses to publish them? Sure, why not...

------
billconan
I do not understand the point of this paper. Deep reinforcement learning, say,
also encodes perception/state into a large vector. Combining perceptions into
a large vector doesn’t seem to contribute much?

~~~
gnode
The article explains so little that it's hard to know.

Neural networks generally don't learn by themselves; they require some kind of
optimisation process outside of their normal operation. It seems to me that
they're saying this is more of a cognitive architecture, one which learns
continuously by memorisation.
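
As a toy illustration of what "learning by memorisation" can look like in the
hyperdimensional setting (my reading of the general technique, not the paper's
actual code): bind percept and action vectors together with XOR, superimpose
the bound pairs with a bitwise majority vote, and recall an action by XOR-ing
the memory with a percept and taking the nearest known vector.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # high dimensionality makes random vectors quasi-orthogonal

def rand_vec():
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    return a ^ b  # XOR binds two vectors into one pair record

def bundle(vs):
    return (np.sum(vs, axis=0) > len(vs) / 2).astype(np.uint8)  # majority vote

# "Memorise" three percept -> action associations in a single vector
percepts = {name: rand_vec() for name in ("wall", "door", "floor")}
actions  = {name: rand_vec() for name in ("turn", "open", "drive")}
memory = bundle([bind(percepts["wall"],  actions["turn"]),
                 bind(percepts["door"],  actions["open"]),
                 bind(percepts["floor"], actions["drive"])])

# Recall: unbinding with a percept yields a noisy copy of its action,
# which a nearest-neighbour lookup over known actions cleans up.
noisy = bind(memory, percepts["door"])
best = min(actions, key=lambda n: (actions[n] != noisy).mean())
print(best)  # "open", with overwhelming probability at this D
```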

------
tentboy
Unfortunately I have very little experience with AI or ML. I did, however,
have the lead contact on this paper (Yiannis Aloimonos) as a professor for
computer vision in my senior year at Maryland.

He spent most of the lectures rambling about his research, including this,
rather than about the course topic. I learned the course material on my own
but still enjoyed listening to him.

~~~
mklyons
Yiannis is a fantastic professor, probably my favorite during undergrad at
Maryland.

------
vcavallo
This seems like a pretty significant step towards a sort of “holistic” agent
whose past informs its present and future in a more human-relatable way.

------
cs702
This is a very click-baity headline. I'm tempted to flag it!

Has anyone here seen the actual paper?

~~~
nabla9
[https://robotics.sciencemag.org/content/4/30/eaaw6736](https://robotics.sciencemag.org/content/4/30/eaaw6736)

[http://sci-hub.tw/10.1126/scirobotics.aaw6736](http://sci-hub.tw/10.1126/scirobotics.aaw6736)

~~~
p1esk
A closed-access journal for machine learning in 2019? No thanks.

