
Brain inspired technologies for strong AI - 0x7cfe
http://truebraincomputing.com
======
0x7cfe
We are researching the information-processing mechanisms of the real human
brain and have created a fundamentally new model explaining how it works. Our
goal is to develop a _strong_ artificial intelligence using neuromorphic
computing technology that is not based on conventional neural networks or deep
learning.

On our site you can find articles describing our vision and key concepts,
where we share our take on the origins and nature of thinking, the
neurophysiological mechanisms of brain functioning, and the physical nature of
consciousness.

If you have any questions, feel free to ask them below.

~~~
p1esk
Code and results on standard benchmarks?

~~~
0x7cfe
We are working on them. Aside from the material mentioned at
[http://truebraincomputing.com/en/proof-of-concept-2/](http://truebraincomputing.com/en/proof-of-concept-2/),
we've done MNIST and HASY tests as well as custom audio-recognition tests.

However, to make the results scientifically accurate, we need to spend a fair
amount of time polishing the code and writing up the explanation, so I would
not declare any results for now.

Of course, without proper papers and a published proof-of-concept
implementation, all of the above is just buzzwords. The site mentioned was the
first step to popularize our research and draw the attention of the community.

P.S.: You may also be interested in Reddit discussion at
[https://www.reddit.com/r/artificial/comments/amogl2/brain_in...](https://www.reddit.com/r/artificial/comments/amogl2/brain_inspired_technologies_for_strong_ai/)

~~~
p1esk
How are you different from Numenta?

~~~
0x7cfe
Short answer: Numenta may seem very similar to some extent, but there are
fundamental differences too.

I'm not very familiar with their theory and technology, so please take my
opinion with a grain of salt.

Both projects are inspired by the real brain. The differences are in
perspective.

Here's a quote from their description:

"Every cortical column learns models of complete objects. They achieve this by
combining input with a grid cell-derived location, and then integrating over
movements (see Hawkins et al., 2017; Lewis et al., 2018 for details). This
suggests a modified interpretation of the cortical hierarchy, where complete
models of objects are learned at every hierarchical level, and every region
contains multiple models of objects".

If I understood correctly, they state that each minicolumn remembers how an
object is represented in that particular location, meaning that each
minicolumn has its own memory.

"For example, there is no single model of a coffee cup that includes what a
cup feels like and looks like. Instead there are 100s of models of a cup. Each
model is based on a unique subset of sensory input within different sensory
modalities."

This is very different from our approach. In our case, minicolumns are
independent context processors: each maps the input stimuli to produce an
interpretation and then uses its local memory to estimate the validity of that
interpretation. The key difference is that the whole cortex area holds a
single model of the object, which is recognized in many contexts. The idea of
a context as a set of interpretation rules is essential to our theory.

So, to put it simply, instead of remembering the cup in every possible
scenario, we remember only one concept of a cup and then use the context space
to find the right interpretation of the input. This allows us to train models
on limited input and, more importantly, to transfer knowledge between
different contexts, i.e. to think by analogy.
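The idea above can be sketched in a few lines. This is a purely hypothetical toy, not the project's actual implementation: the concept vector, the context names, and the transforms are all invented for illustration. The point is only the shape of the computation: one stored concept, and a space of contexts (interpretation rules) that the input is mapped through before matching.

```python
# Toy sketch (hypothetical, not the project's real code): a single stored
# concept is recognized by interpreting the input under many contexts,
# where each context is a rule for re-reading the raw stimulus.

def identity(stimulus):
    return tuple(stimulus)

def rotate(stimulus):
    """Toy 'context': interpret the input as seen reversed/rotated."""
    return tuple(reversed(stimulus))

def shift(stimulus):
    """Toy 'context': interpret the input as shifted by one position."""
    return tuple(stimulus[1:]) + (stimulus[0],)

CONTEXTS = {"identity": identity, "rotated": rotate, "shifted": shift}

# One concept of a 'cup', stored exactly once -- not once per context.
CUP = (1, 0, 1, 1, 0)

def recognize(stimulus, concept=CUP, contexts=CONTEXTS):
    """Map the stimulus through each context and score how well the
    resulting interpretation matches the single stored concept."""
    scores = {}
    for name, interpret in contexts.items():
        interpretation = interpret(stimulus)
        match = sum(a == b for a, b in zip(interpretation, concept))
        scores[name] = match / len(concept)
    best = max(scores, key=scores.get)  # best-fitting context wins
    return best, scores[best]

# A stimulus that is the 'cup' seen in the 'rotated' context:
print(recognize((0, 1, 1, 0, 1)))  # -> ('rotated', 1.0)
```

Adding a new context here does not require re-storing the concept, which is the toy analogue of transferring knowledge between contexts.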

Hopefully, this helps you feel the difference.

P.S.: You may also be interested in the discussion that takes place on Reddit:
[https://www.reddit.com/r/artificial/comments/amogl2/brain_in...](https://www.reddit.com/r/artificial/comments/amogl2/brain_inspired_technologies_for_strong_ai/)

~~~
p1esk
_I'm not very familiar with their theory and technology_

It's strange that you're not familiar with an actively developed, widely
known, 15-year-old open source project with goals similar to your initiative's.
It's as if you wanted to create an OS similar to Linux without bothering to
learn much about Linux first. As an example of what I mean, you mention things
like "brain is digital" and "combinatorial space", which seem to be a
rediscovery of Numenta's SDRs. Also, I don't think your way of processing an
object in different contexts is fundamentally different from Numenta's, based
on what you described, though it's hard to say without looking at your code.
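For readers unfamiliar with the term: an SDR (Sparse Distributed Representation) in Numenta's work is a wide, mostly-zero binary vector, and similarity is measured by the overlap of active bits. The snippet below is a toy illustration of that notion only, not NuPIC's actual API; the bit indices are made up.

```python
# Toy illustration of Numenta-style SDRs: represent each SDR by the set
# of its active bit indices (e.g. a few active bits out of a notional
# 2048-bit vector) and compare SDRs by active-bit overlap.

def overlap(sdr_a, sdr_b):
    """Similarity of two SDRs = number of shared active bit positions."""
    return len(sdr_a & sdr_b)

# Invented example indices: two views of a cup share bits; a dog does not.
cup_seen = {12, 407, 1033}
cup_felt = {12, 407, 1950}
dog_seen = {5, 88, 1700}

print(overlap(cup_seen, cup_felt))  # -> 2
print(overlap(cup_seen, dog_seen))  # -> 0
```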

To avoid reinventing the wheel, I suggest you engage in some discussion on
Numenta's forums, or at least read their papers. It's quite possible that you
might discover something they missed. But if you don't know what they did, you
will end up rediscovering a lot of what they _haven't_ missed.

~~~
0x7cfe
When I said that I'm not familiar with Numenta, I meant myself only. My
colleagues, who are much more experienced than me, do indeed know that
project.

