
Artificial Intelligence Hits the Barrier of Meaning - mathgenius
https://www.nytimes.com/2018/11/05/opinion/artificial-intelligence-machine-learning.html
======
rademacher
This latest craze of "AI" research seems to be fueled by a sudden glut of
computational power (GPUs) that wasn't available previously. I think most
technical people would agree that predictions of human-level AI by the
mid-2020s are extremely ambitious. I'd also argue that we're actually more
likely to experience another AI winter.

The frightening part of current deep learning research is how susceptible
these models are to adversarial attacks. Adding small amounts of noise causes
misclassification in images, and some papers even explore the inevitability of
adversarial examples [1]. This is especially frightening given the amount of
autonomous vehicle work being done. I could imagine a situation in which the
sensor noise varies just enough to cause such an error. Obviously, the systems
will have redundancies built in, but I'm convinced self-driving cars are
still a ways off as well.
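
For concreteness, here is a minimal sketch of the fast gradient sign method
(FGSM), one standard way such perturbations are constructed; it assumes a
trained TensorFlow/Keras classifier `model`, an input batch `x`, and integer
labels `y`:

    import tensorflow as tf

    def fgsm_perturb(model, x, y, eps=0.01):
        # Differentiate the loss with respect to the *input*, not the weights
        x = tf.convert_to_tensor(x)
        with tf.GradientTape() as tape:
            tape.watch(x)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x))
        grad = tape.gradient(loss, x)
        # Nudge each pixel a tiny step in the direction that increases the loss
        return x + eps * tf.sign(grad)

Even an `eps` small enough to be invisible to a human is often enough to flip
the predicted class.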

EDIT: As others have stated, just adding noise is not enough; in fact, noise
injection is often used to help a model generalize. The paper does discuss
that the perturbations needed to cause this deviation can be incredibly small,
and that the set of such perturbations may be larger than expected, especially
for complex images.

Regarding the AI winter, I suppose I should have defined it as a reduction in
the amount of research and the extent of progress being made in the area,
rather than a reduction in the utility of such research.

[1] [https://arxiv.org/abs/1809.02104](https://arxiv.org/abs/1809.02104)

~~~
dontreact
I don't think adversarial examples give any evidence of relevant problems with
these models because they occur on a very specific subset of images that can
only be discovered using detailed knowledge of how these networks process
images.

For all we know, humans have similar problems on some obscure subset of
images, but we can't find humans' adversarial examples because we don't have
detailed knowledge of how the brain processes images.

~~~
mmirate
The important difference here is that most adversarial examples for human
perception:

(a) do not occur frequently in nature,

(b) are not frequently, if at all, produced in man-made architecture or
transit infrastructure,

(c) often contain repetitive, regular geometric and chromatic patterns
which further make them stand out from everything else, and

(d) practically _cannot_ be produced by digital (ergo noisy/less-than-perfect)
images of any common real-world scenario.

In short: optical illusions don't accidentally occur in places where they can
be seen by meatbag drivers.

~~~
dontreact
I don’t see how you can make any claim about “most human adversarial
examples”. There is a huge space of images and we have explored a negligible
part of it.

Also, (a) and (b) empirically seem to hold for the test sets of natural-world
images that people have collected for these models thus far.

In short, we have no evidence that adversarial examples of the type being
studied occur commonly in images collected by self driving cars.

~~~
mannykannot
The issue with regard to self-driving cars is that these cases demonstrate a
disturbing level of fragility: we don't have a good handle on where the
boundary between acceptable and chaotic responses lies.

You hypothesize that there are comparable examples for humans somewhere out
there in the domain of all possible images. But for all the countless cases of
people looking at things over humanity's existence, no one has found a good
example, which suggests that, from the pragmatic point of view you propose,
image-recognition software has some catching up to do.

Maybe a system that seeks consensus among several differently-trained models
would be more robust.
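
A minimal sketch of that consensus idea, as a majority vote over independently
trained classifiers (the `models` list and `predict` interface here are
placeholders, not any particular framework's API):

    import numpy as np

    def consensus_predict(models, x):
        # Per-model hard labels, shape (n_models, n_samples)
        votes = np.stack([m.predict(x).argmax(axis=-1) for m in models])
        # Most common label per sample; ties break toward the lowest index
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

One caveat: adversarial examples often transfer between independently trained
models, so in practice this buys less robustness than one might hope.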

~~~
de_watcher
The difference is that you can calculate an adversarial example for our
classifiers, but it's too slow to calculate one for a human.

Even if you could, the result would be specific to that particular person, so
it wouldn't work as well on others. And these bastards learn while you're
constructing the example (which isn't fair at all to a helpless classifier
that's just sitting there and doesn't change).

~~~
mannykannot
Fairness doesn't come into it - machine vision has to be up to the task it is
given, period. If humans depend on their more general intelligence to deal
with problem cases, machine vision either has to do something similar, or
compensate adequately in some other way.

~~~
de_watcher
That was a joke.

------
mark_l_watson
Great article by Melanie Mitchell (she gave me her code for the Copycat
program from her book with Hofstadter almost 30 years ago; she has been
looking at creativity and common sense in AI for a long while).

My day job is all about deep learning, but personally I think we need to stop,
take a deep breath, and really solve problems like biased data sets and easily
spoofed models.

I have worked through a few AI winters and we may be hitting another one. I
would like to see care given to using deep learning models only where it is
safe to do so.

~~~
kopo
Yup. It's the dotcom era all over again, when everyone and their grandmother
needed to be building a website.

~~~
rjtavares
Dotcom era gave us Amazon and Google. If AI craze gives us something like that
again, I think that's fine.

~~~
kulahan
I know I'm late to the party, but...

Amazon abuses many of their workers, they've been dangerous to small
businesses in any area they move into, and they're contributing significantly
to the wealth inequality problem.

Google knows a _scary_ amount of information about you and everyone you know,
they're supporting a horrific change in China allowing even tighter control of
their citizens, and they were even looking at building AI for military drones
at one point.

In exchange, we get to buy cheap stuff with quick shipping and we can find
resources on the internet slightly quicker.

An AGI could have much more significant implications than either of those
companies ever have. We need to figure out a solution to the AI control
problem before we have a Dotcom-like burst of development.

------
evilturnip
I don't know much about AI, but I wonder what AI people think of this theory:

[https://en.wikipedia.org/wiki/Embodied_cognition](https://en.wikipedia.org/wiki/Embodied_cognition)

The idea being that if we want to replicate strong AI, it needs to be
embodied, because a lot of our cognition is built on metaphors that are
instantiated in our physical actions and perceptions.

~~~
AndrewOMartin
I've been studying Embodied Cognition AI and going to conventions since around
2009. It started seriously gaining popularity around 2012-2015, but then you
get a division between the new followers and the old about how extreme and
radical you need to be.

The new people accuse the old of being too hand-wavy and airy-fairy; the old
people accuse the new of not taking the new ideas seriously enough, and of not
accepting the criticism of their entrenched views. From this comes progress.

For my money, the best philosophy comes from Dan Hutto, best book being
Radicalizing Enactivism (Hutto and Myin, 2012). The best neuroanatomy with
regards to consciousness and intelligence came from Walter J. Freeman III,
best book being How Brains Make Up Their Minds (Freeman, 1999) and the best
up-to the minute AI research is from Tom Froese. See "Referential
communication as a collective property of a brain-body-environment-body-brain
system: A minimal cognitive model" (Campos and Froese, 2017), and his
(personally very interesting) work on the possibility of self-organising
governance in Teotihuacan.

If you just want an introduction to the distinction between the two
approaches to AI, then you can do no better than to read the snappily named
paper "Why Heideggerian AI Failed and How Fixing it Would Require Making it
More Heideggerian" (Dreyfus, 2007). It's true this paper appears to skip
straight from Symbolic GOFAI to radically embodied dynamical systems, skipping
Connectionism, but the issues raised in the paper can easily be used to see
that neural networks will fail to reach anything like intelligent behaviour
unless they begin to draw strongly on the embodiment literature.

I can see from the dates of my recommended publications that I've not been
keeping up particularly well, but I've been writing up my thesis on a slightly
different subject.

~~~
mehh
Pretty sure embodied cognition was very much a thing in the mid '90s.

It seems we keep moving the date forward with AI techniques, claiming they are
newer than they actually are. Are we in denial?

~~~
AndrewOMartin
I think it's not too controversial a statement to say that Embodied Cognition
was kicked off by the publication of The Embodied Mind (Varela, Rosch,
Thompson, 1991), but I noticed a significant increase in its popularity in the
2010s.

~~~
mehh
fair enough :)

------
symplee
As if humans haven't also hit the barrier of meaning? And yet we've still made
unimaginable discoveries. I almost added the word "progress" but I guess that
depends on what you find meaningful.

The article states: "But ultimately, the goal of developing trustworthy A.I.
will require a deeper investigation into our own remarkable abilities and new
insights into the cognitive mechanisms we ourselves use to reliably and
robustly understand the world."

Why limit the field to the capacity of humans? What the author calls
"remarkable abilities" and "robustly understand[ing] the world" can also be
seen as just reproducing our own innate and learned collective human biases.
Our theories from observation, and their unprecedented ability to predict
future events, are more about describing the world than understanding it. Is
there any topic in the world that doesn't have contrary interpretations?

What quantifiable metric would we even use to gauge artificial intelligence's
grasp of "meaning"? We don't even have one for our own.

~~~
freeone3000
This is not philosophy. This is "apples are smaller than watermelons",
"objects must be contiguous to be the same object", "the thing that makes an
elephant an elephant is the presence of several large key features (ears,
nose, trunk, legs) and not its color". If machines are going to meaningfully
process our data, they're going to have to agree with us on certain base facts
of reality.

------
Barrin92
Good article. I think issues like the mistranslation "The bear headed man
needed a hat" really show that what we need is not an "AI algorithm" but AI
agents. The lack of common sense that we see in machines is very much
intended; it's what makes them machines in the first place.

Humans don't make these mistakes because humans are able to create stories and
place even a mundane translation in the deep context of a 'world' that must be
coherent. Heads simply don't have bears on them, so we wouldn't trust that
translation even if we had heard it clearly, and we'd sooner think we were
hallucinating than drive on a road that goes nowhere in an impossible
direction.

Algorithms that are just glorified feature-extraction machines, in my opinion,
can never create a coherent story, so I'm very skeptical about the claim that
we will have somehow solved intelligence in the next ten years. It seems to me
we are hardly any closer to it than we were decades ago.

~~~
inasring
Isn't it also true that what we define as coherent is based on observed data
that could be fed into these algorithms?

~~~
sifoobar
Not an expert, but as long as there is no way of knowing WHAT it is the
algorithm is learning, it seems to me that it could never work reliably. It
might look perfectly reasonable until you hit one of the triggers the
algorithm used to segment the data.

Someone somewhere shared a story about using machine learning to spot the
difference between US and Russian tanks, which apparently worked fine until
field testing, where it failed miserably. What the algorithm had actually
learned was the difference between high-quality photos of US tanks and
poor-quality photos of Russian ones. True or not, this is exactly the kind of
issue that will keep popping up.

Plenty of people are spending plenty of time figuring out how to mess with
facial recognition as we speak by taking advantage of the same fundamental
weakness.

~~~
freeone3000
Oh, yep! It's super simple to induce systemic errors like that. Take
[https://github.com/kevin28520/My-TensorFlow-tutorials/tree/master/01%20cats%20vs%20dogs](https://github.com/kevin28520/My-TensorFlow-tutorials/tree/master/01%20cats%20vs%20dogs).
Lighten every dog by 20%. Darken every cat by 20%. Train. Take an image of a
cat, lighten it 20%, and watch as it's transformed into a dog!
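
A minimal sketch of that brightness-bias experiment with Pillow (the file
paths are illustrative, not from the linked repo):

    from PIL import Image, ImageEnhance

    def adjust(path, factor):
        # factor > 1 brightens, factor < 1 darkens
        return ImageEnhance.Brightness(Image.open(path)).enhance(factor)

    # Poison the training data so brightness correlates with the label
    bright_dog = adjust("train/dog.001.jpg", 1.2)  # every dog +20%
    dark_cat = adjust("train/cat.001.jpg", 0.8)    # every cat -20%

    # After training on these, a cat brightened 20% at test time
    # will tend to be classified as a dog.
    trap = adjust("test/cat.042.jpg", 1.2)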

For large corpora, it's impossible to know which features got selected. They
probably aren't features a human would consider.

------
not_a_moth
The wild Zuckerberg quote aside, I don't believe many people expect this
current wave of AI to approach general intelligence. I would be surprised if
we weren't all mostly in agreement with the author.

~~~
hprotagonist
In my experience, domain practitioners who spend our days elbow-deep in
tensors are well aware that what we call AI right now has nothing to do with
AGI.

That knowledge does not seem to generalize well, though. Even technically
literate people who aren't in the trenches seem to have frankly insane ideas
about AGI. This phenomenon is not helped by armchair philosophy, the frothy
chatter of the tech-adjacent world, or the naive idea that "well, transistors
are easy, so brains should be too!" that is all too common in people who
haven't stuffed electrodes into brains and then spent weeks figuring out what
the hell could be going on in there.

~~~
thwy12321
The thing is that progress is so fast in some areas of ML, you don't really
know when something could emerge. Who knows, maybe with enough training data
and a deep enough network with the right supervision, something extraordinary
pops out.

Search for "Google BERT NLP": Google recently released an NLP model that
basically beats all predecessors, but the model itself is extremely
simple. Just lots of data and compute; there's no telling what will happen in
ten years when the amount of data has exploded and GPUs are cheap.
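
If you want to poke at it yourself, here's a quick sketch using the
third-party Hugging Face `transformers` package (a community wrapper, not
Google's original release):

    from transformers import pipeline

    # Downloads the pretrained weights on first run
    unmasker = pipeline("fill-mask", model="bert-base-uncased")
    for guess in unmasker("The pig is kept in a [MASK]."):
        print(guess["token_str"], guess["score"])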

~~~
glup
BERT provides incremental improvements on tasks that don't really stress-test
contextualized world knowledge in linguistic tasks — contrast with Winograd
schemas, e.g., resolving what "it" refers to in "The trophy doesn't fit in the
suitcase because it is too big" (SOTA:
[https://arxiv.org/pdf/1806.02847.pdf](https://arxiv.org/pdf/1806.02847.pdf)).
Corpus data and compute don't magically solve everything.

~~~
p1esk
The paper you linked to shows how a simple DL brute force approach (large
model trained on lots of data) "successfully discovers important features of
the context that decide the correct answer, indicating a good grasp of
commonsense knowledge."

Which kinda goes contrary to your last sentence. Go figure.

~~~
glup
My point is that while the DL + large corpus approach is the best available,
it's still way, way lower than human performance (Tables 4 and 5).

~~~
p1esk
Sure, but we are talking about how fast it's progressing. Given that the field
of NNs for NLP pretty much started with Bengio's paper in 2003, I'd say it's
accelerating at an amazing rate.

~~~
glup
But NNs for language processing have been around much longer than that, e.g.,
Elman, 1990:
[https://crl.ucsd.edu/~elman/Papers/fsit.pdf](https://crl.ucsd.edu/~elman/Papers/fsit.pdf).

------
blueadept111
Everything OP says is correct.

And let's not forget that even in the absence of adversarial attacks, even
with flawless associative mapping and incredible generalization, even
hypothetically solving the problem of higher-level ontological 'meaning',
there's STILL absolutely no known way for AI to even begin to address the
matter of qualia: how to build something that actually has conscious
perception (of pain, for example). We're groping in the dark without matches,
and there will be many more seasons/winters to come.

~~~
dragonwriter
> there's STILL absolutely no known way for AI to even begin to address the
> matter of qualia

Who, other than metaphysically, cares? I can't observe, test, know or be in
any way materially affected by whether or not _you_ experience qualia, it
certainly makes no material difference whatsoever if an AI actually
experiences qualia.

Discussing qualia has about as much relevance to anything as discussing how
many angels can dance on the head of a pin.

~~~
blueadept111
Pain and pleasure, which are qualitative experiences, are the basis of operant
conditioning (reinforcement learning à la behavioral psychology). This covers
the massive domain of behavior that's informed/dictated by
reinforcement/punishment. So they are pretty fundamental if you care about
building something that's analogous to biological life, depending of course on
how deeply you want that analogy to extend.

~~~
dragonwriter
> Pain and pleasure, which are qualitative experiences, are the basis of
> operant conditioning

The behaviors of attractive and aversive reinforcement are the basis of
operant conditioning. The whole _issue_ with qualia is that exhibiting
behavior to which subjective internal experience is usually attributed does
not demonstrate the subjective internal experience. The people arguing for the
importance of qualia and its unattainability for AI aren't arguing that it is
particularly challenging for AI to exhibit attractive and aversive
reinforcement, or any other external behavior; they are building a castle they
can retreat to in the face of any observable behavior.

~~~
blueadept111
You haven't actually said anything I disagree with, even though I'm in the
castle. "Retreat" sounds a little harsh. I would say "Comfortably ensconced".
Even if an AI nails down the behavioral/functional aspects of a biological
brain, that doesn't necessarily mean that the qualia have come along for the
ride.

Does it matter if they have? That seems to be the real question.

Allow me to propose a thought experiment: I'm a traveller from the future,
where the notion of 'qualia' is understood and engineered (maybe using
techniques we would somewhat recognize today, maybe not). I present you with a
machine that has a single red button. When the button is pressed, there is no
observable behavior, except that 10,000 AI's are instantly created and
subjected to extreme agony until the button is pressed again. What are your
thoughts on pressing this button?

~~~
dragonwriter
> Allow me to propose a thought experiment: I'm a traveller from the future,
> where the notion of 'qualia' is understood and engineered

This is incoherent: qualia by definition cannot be engineered, or even
validated by external observers. This isn't a technical limitation that can be
overcome with progress, it's inherent in the definition.

~~~
blueadept111
Maybe the future involves direct brain interfacing between humans via
biological interconnects, and a new kind of engineering will evolve, defined
not by behavior/functionality in the objective world, but in a realm of shared
consciousness that is only subjectively accessible to biological entities that
have a similar level of brain evolution.

In fact our own brains probably already work this way, with different parts of
the brain acting as loci of lower-level consciousness, but also somehow
becoming more than the sum of their semi/sub-conscious parts when acting in
concert.

And maybe computers just can't participate in this shared consciousness, even
in principle, because they're made out of the wrong 'stuff' along a dimension
of reality we don't yet understand (or may never understand).

------
hacknat
It all comes down to the fact that we don’t really understand what
intelligence is. We know it is an emergent property of several mechanisms,
working together, and we are even getting a decent grip on what those
mechanisms are and how they work, but we are still struggling to see the big
picture of how intelligence actually emerges.

~~~
qubax
We don't know if it is an emergent property. That's what we think is likely,
but we won't know until we find out what consciousness is. And we won't find
that out until we crack the secrets of the brain.

There once was a time when we thought our bodies moved because there was an
"emergent property" called the soul that instructed our bodies to move. It
took a very long time for humans to discover how our bodies actually move,
through the science of physiology.

~~~
mcguire
"The soul" is (was?) explicitly not an emergent property.

------
voidhorse
Aside from the technical obstacles currently barring us from realizing AIs
capable of human competency, we should be seriously considering the social
question of whether or not such a technology is something we really need or
want.

The world is already saturated with technical systems and artifacts that have
effectively flattened social relations, eradicated traditions, and generally
reduced the field of humanitarian meaning and theory to nil. All our problems
have become technical problems. We’ve even reached a point where we try to
solve social problems with technical solutions, and when that doesn’t work we
try to fix the boo-boo the naive and rampant application of technology has
caused with, guess what, more technology.

The philosophy of technology is perhaps the most important discipline of our
time—there’s little left in life that technics hasn’t in some way enveloped,
either insidiously or openly.

~~~
nradov
Those seem mostly like arbitrary value judgments based on what you personally
happen to be comfortable with. Ask more fundamental questions. Are we
necessarily better off with traditions than without?

~~~
voidhorse
Sure. Imagine a world in which all tradition and culture was wiped out.
Imagine a world in which all technology was wiped out. They're both absolutely
horrible. And certainly, some traditions are better discarded, just as some
technologies (the A-bomb being the prime example) are better left unexplored.
I'm only trying to point out the tendency of our era to forget humanitarian
values entirely in the face of 'objective', 'practical', 'utilitarian' and
technical progress.

I am making a value judgement, and your dismissal of my points simply because
they entail value judgements is exactly the sort of attitude that has led to
the dissolution of culture and the near-reemergence of fascist and populist
thought in arguably the most technologically advanced country in the world.
There's a reason technology couldn't stem the reemergence of such thinking;
in fact, the social conditions that rampant, unquestioned (or at least
under-questioned) technological development has created have contributed to
its revival. Such an approach annihilates dialogue before it can begin.
Living social subjects no longer endeavor to understand each other because
they fall back on an ideology that tells them it's "above" values.

I am championing the philosophy of technology partly because I am somewhat
familiar with it, but also because I genuinely believe it's important, and
yes, that is a value, and one that I stand by. The sort of world I hope will
emerge in the future is dictated by those values. I just hope there are still
enough folks in tech considering the future their current values might create.

------
js8
I recently realized that meaning cannot be captured using pattern matching
alone, which is what (current) machine learning is about.

The key insight came from article on bullshit:
[https://news.ycombinator.com/item?id=17764348](https://news.ycombinator.com/item?id=17764348)

I believe understanding (things having meaning) comes down to whether you can
produce an example from the given description of the situation. If you can
produce an example, then you can say you understand the description.

The problem is, you can have situation descriptions that are contradictory
(have no examples, thus no meaning) yet are arbitrarily close to situation
descriptions that do have examples (and so have meaning).

A good example is those Escher-like impossible objects, which look very much
like real objects, but humans can easily see they are meaningless (they cannot
be interpreted, and thus imagined, in 3D). Other good examples are the
sentences from the article linked above; the bullshit sentences are the ones
for which you cannot construct a mental example in your head.

I suspect this happens for the famous flaws in deep learning as well, the deep
learning network cannot learn that some inputs are contradictory.

I believe this is ultimately related to the Boolean satisfiability problem.
In theory, one could determine whether a learning agent is merely pattern
matching or actually has understanding (is able to recognize logical
inconsistency in its input) by teaching it different SAT instances; an agent
that could learn the difference between arbitrarily similar SAT instances
with and without solutions could be considered to have understanding.
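
A toy illustration of that test, assuming clauses are encoded as tuples of
signed variable indices (brute force, so only for tiny instances):

    from itertools import product

    def satisfiable(clauses, n_vars):
        # Try every assignment; a clause is satisfied if any literal holds
        for bits in product([False, True], repeat=n_vars):
            if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
                return True
        return False

    sat_instance = [(1, 2), (-1, 2), (1, -2)]    # satisfiable (x1 = x2 = True)
    unsat_instance = sat_instance + [(-1, -2)]   # one extra clause makes it UNSAT

    print(satisfiable(sat_instance, 2))    # True
    print(satisfiable(unsat_instance, 2))  # False

The two instances differ by a single clause, yet one has a model (a meaning)
and the other does not; a learner that merely pattern-matches on surface
similarity would treat them alike.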

------
imh
I've started thinking about the whole "do computers grok meaning" thing in
terms of proof. If a computer tells me a true theorem, I don't know if I can
ascribe "understanding" to it. If it tells me a true theorem and comes up with
a proof, then I'm comfortable _defining_ that as understanding within the
context of that particular proof language. It passes the buck up a step. You
define a new notion of understanding-in-a-given-language and then get to argue
about whether that language/deduction system constitutes true "understanding."

~~~
freeone3000
Okay, but what if it just made stuff up by trying everything until something
came out true? We have theorem provers on this model today. Does such a system
"understand" mathematics? It's probably more explainable than anything else,
but the strength of the current generation of learning is its application to
cases where ambiguity is present, and our current approaches there are
completely unexplainable.

~~~
imh
I'm saying that we can define what constitutes understanding by defining what
constitutes reasoning within a given inference system. That's a simpler task,
but doesn't solve the original problem. It just passes the buck. Now instead
of asking whether a given learner performs reasoning and understands, we could
ask whether the language/logic constitutes understanding.

I'd definitely say that the current theorem provers have some form of true
understanding, even though it isn't the same form as we have. For the typical
ML learners, I think it's more interesting to ask and taxonomize "in what ways
is this reasoning/understanding" rather than just ask "is this
reasoning/understanding?"

------
peter303
'Understanding' is a matter of degree. Humans do not know very well how their
own brains generate ideas, thoughts, or language. Yet they appear to be
capable of understanding.

------
bigbadgoose
Turns out the primary use case of AI at the moment is duping low-info voters
into electing fascists. AI has an IQ of 87 in narrow situations, while the
targets are working with 85 and below.

One foot in front of the other, but the chasers got a little bit ahead of the
horse here.

------
wiz21c
I didn't check, but considering there are probably thousands of "small tasks"
that are being solved by AI, could one build an AI that chooses the right
"small task" to apply to a never-before-seen problem?

------
aabajian
I agree with the author overall, but I think she picks a poor example for
dictation.

“The bareheaded man needed a hat” falls squarely in the domain of
contextually-aware models and _would_ likely be transcribed correctly using
deep learning.

------
kazinator
Optimistic framing: "Machine learning algorithms, _and domesticated animals
such as dogs,_ don’t yet understand things the way humans do — with sometimes
disastrous consequences."

------
thallukrish
This is the reason chatbots did not live up to the hype. It is hard to build
an agent that offers solutions in a domain while also being generally
conversant.

------
goldenkey
Here's what I've realized about both the universe AND intelligence.

Silicon cannot touch either of them. I didn't always think this, though. I
thought intelligent beings, but first smaller structures, could evolve in a
large digital universe. I made a blog post:
[http://scrollto.com/blog/2017/04/11/life-a-universe-simulation/](http://scrollto.com/blog/2017/04/11/life-a-universe-simulation/)

It took a long time, but I eventually built what I set out to build.

Here is ScatterLife:
[https://github.com/churchofthought/ScatterLife/](https://github.com/churchofthought/ScatterLife/)

But the result was far from what I wanted. Even with a Titan V, a world of
only 4096×4096 could be handled at a reasonable update rate. I basically had
spacetime foam under the ideas of doubly special relativity, i.e. the Feynman
checkerboard universe.

[https://en.m.wikipedia.org/wiki/Doubly_special_relativity](https://en.m.wikipedia.org/wiki/Doubly_special_relativity)

[https://en.m.wikipedia.org/wiki/Feynman_checkerboard](https://en.m.wikipedia.org/wiki/Feynman_checkerboard)

If our own universe is digital, then it updates at c/Planck_length times per
second, over 10^40 Hz. Not only that, but it consists of over 10^185 Planck
volumes.
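
A quick back-of-envelope check of those figures:

    c = 2.998e8        # speed of light, m/s
    l_p = 1.616e-35    # Planck length, m
    print(c / l_p)     # ~1.9e43 "updates"/s, i.e. over 10^40 Hz

    d = 8.8e26                     # diameter of the observable universe, m
    volume = 3.14159 / 6 * d**3    # sphere volume, ~3.6e80 m^3
    print(volume / l_p**3)         # ~8e184, on the order of 10^185 Planck volumes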

Nothing interesting is really viewable or even extrapolatable from the
smallest of truths or fundamentals. That's why many predictions of string
theory need higher energies to be tested: the smaller you delve, the higher
the energy needed.

In any case, I realized that evolution itself has been fueled by orders of
magnitude. Symmetries of matter and energy; planets, stars and solar
systems... the same magnitudes are required.

Even the best silicon isn't going to have 10^20 transistors on it. We would
have to somehow go analog, using chemicals or matter in a way that didn't
require slow refining and construction. Chemical-based computing...

Now with ML and AI, the same issue is present. Brains have 100 billion
neurons. The best GPUs have maybe 5000 cores.

The only way forward is to maximize what each core does, as much as possible.

I learned this with the cellular automata. Black-and-white two-state automata
are neat but are a huge waste of processing power. They hold little
information. Better is integers. Even better is floats. Why stop there,
though? Let's use complex numbers.

[https://github.com/churchofthought/HexagonalComplexAutomata](https://github.com/churchofthought/HexagonalComplexAutomata)
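
A toy sketch of a complex-valued CA step with NumPy (an illustrative rule I
made up here, not the update rule from the linked repo):

    import numpy as np

    def step(grid):
        # Sum of the four von Neumann neighbours, with toroidal wrap-around
        nb = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
              np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
        # Each cell rotates in the complex plane by the phase of its neighbourhood
        new = grid * np.exp(1j * np.angle(nb))
        return new / np.maximum(np.abs(new), 1e-9)  # keep magnitudes bounded

    grid = np.exp(2j * np.pi * np.random.rand(64, 64))  # random unit-phase start
    for _ in range(100):
        grid = step(grid)

Each cell now carries a continuous two-dimensional state instead of a single
bit, which is the point: more information per unit of silicon.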

Magnitude works against carefully crafted silicon. If we want to achieve with
silicon what magnitude can, we need to make sure our fundamental neuron units
aren't unnecessarily sparse. Magnitudes can afford to use simple units;
silicon can't. 5,000, maybe 100,000 cores when advances in fab accelerate, but
still not 100 billion. We still need to use the advanced abilities of the
cores to their full extent. A neuron cannot compute any universal function...
but a GPU core can.

Anyway, I've almost failed to mention that I don't believe we have anything
to worry about. Unless more research is devoted to special-purpose hardware
(consider that an i7 has 731 million transistors), the software side will have
a really hard time compensating for low magnitude.

Let's see what happens. It's going to be exciting nonetheless. I am doing ML
work myself on Boltzmann machines and RNNs and Hopfield nets. This is a
promising field and it's emerging at a breakneck pace. Cheers!

~~~
resu_nimda
This is interesting. I too am interested in simulations and fields like
Artificial Life where we start with some basic building blocks and hope to
allow something capable of evolution to form.

My question is always "where do we start?" because, as you found out, starting
with the most fundamental physics simulation we can conceive of, we are
unlikely to generate much of interest for some time, if ever.

But I do have a feeling that in order to get something truly novel and
interesting it's going to have to "evolve" in an "environment" and the
challenge will be in identifying whether a particular instance of primordial
soup is on its way to developing more complex structures.

I strongly believe that the importance of the multi-billion year process of
evolution is seriously underestimated by the AI community and that it's pure
hubris to think we can short-circuit that entirely and simply reverse engineer
the brain with fancy algorithms.

~~~
goldenkey
> But I do have a feeling that in order to get something truly novel and
> interesting it's going to have to "evolve" in an "environment" and the
> challenge will be in identifying whether a particular instance of primordial
> soup is on its way to developing more complex structures.

I thought that way for a long time too, which is why I chased the cellular
automata ideas and eventually implemented many varieties. But nowadays, I
think it's all mostly torched by magnitudes.

The closest chance we have is not relying on the magnitudes. Instead of trying
to evolve a universe, or a variety of entities, we need to focus on a single
entity.

I thought AI had a flawed premise, like you mention: attempting to develop a
single individual is directly antithetical to how life normally develops.
Nonetheless, my mind has changed since my automata dabblings.
Single-individual engineering is the only method that has the slimmest chance
in hell.

------
scottyelich
Bingo.

------
hyperpallium
Surprise! Meaning requires strong AI ( _and_ cultural understanding, much of
which is only learnable from interaction).

~~~
Retra
Meaning doesn't 'require' strong AI -- the kind of meaning that strong AIs
like us prefer to interact with does. There's plenty of simple meaning in
simple systems, and they are quite useful to us that way. They just don't
encapsulate the complexities of human experience very well.

------
guilamu
The author is trying to make the point that AI is still struggling with basic
understanding, based on examples.

 _Google Translate renders “I put the pig in the pen” into French as “Je mets
le cochon dans le stylo” (mistranslating “pen” in the sense of a writing
instrument)._

Well, Google is far from the best at translation right now. If I try the
author's example in DeepL, for example, "I put the pig in the pen" translates
into French as "J'ai mis le cochon dans l'enclos.", which is the perfect
translation given the context.

So, I still have to read the rest of the article, but right off the bat you're
wrong, mate (or at least you provided an example of something "impossible for
AI" which is already done perfectly by a properly trained/programmed AI).

