Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It (frontiersin.org)
104 points by trenchgun on Dec 27, 2022 | 196 comments



>In other words, unless we are content to “bite the bullet” of panpsychism, then no machine, however complex, can ever realize phenomenal consciousness purely in virtue of the execution of a particular computer program.

Ok, but why are biological neurons exempted from this argument? I mean, aside from the obvious conflict of interest that the person who wrote this article is made out of them, and they would much rather not be ontologically dissolved into a soulless zombie.

I'm not sure you actually have to bite the panpsychism bullet either (or I'm misunderstanding what that word means). Adding a Putnam mapping to an FSA or giving you a book that maps Chinese characters to other Chinese characters doesn't necessarily make the FSA conscious or you a literate Chinese reader. It just makes the sum total of those things conscious. Likewise, each individual neuron in your head does not need to be totally conscious. Consciousness is an emergent property of your whole body, not just something attributable to parts of it.


> Ok, but why are biological neurons exempted from this argument?

Right. That's the "humans are magic" argument. It's a rehash of the "first cause" argument from theology for the existence of a creator. ("But who created God?" "Shut up, kid.")

It's pretty clear now that AI via correlation can do a lot, but it has limits. Deep neural nets seem to have a generic problem of getting the right answer most of the time, getting a really wrong answer some of the time, and not being able to detect the difference. We're still missing something.

What we're missing, I point out occasionally, is "common sense", defined as getting through the next 30 seconds of life without making a big mistake. We need to get to at least the mouse/squirrel level in that area, and we're not close. There was an article on HN a few days ago about someone having discovered the neural basis of panic flight escape route planning in mice. That's an important result. It's an indication that there's common sense hardware in mammal brains, and the beginning of understanding how it works.

This is a problem which is both very hard and not too rewarding, either financially or academically. Some years ago, I met Rod Brooks, back when he'd done his insect robots and was proposing Cog, a jump to human level AI which involved a human-like robot head. I asked him, why try to jump to human-level AI in one step? A robot mouse might be within reach. He said, "Because I don't want to go down in history as the man who created the world's best robot mouse." Cog was a failure, and he went down in history as the man who created the robot vacuum cleaner.

It's one of those thankless enterprises like OpenWorm, which is trying to get a simulation of the simplest known organism with a nervous system, a nematode with 959 cells. After over 20 years, it still doesn't work very well. Yet someone has to do that kind of ground work.


> Right. That's the "humans are magic" argument. It's a rehash of the "first cause" argument from theology for the existence of a creator. ("But who created God?" "Shut up, kid.")

I don't think that's a fair summary of the argument. The conclusion of the argument is a dilemma: either (i) humans are not conscious or (ii) humans are conscious but so are lots of other things (like rocks) that we don't usually take to be conscious. If you aren't willing to accept (i) or (ii), then you can either look for flaws in the argument that led to the unwelcome conclusion, or consider the possibility that there is more to consciousness than just executing a particular program. We know virtually nothing about consciousness, so we can't confidently eliminate the possibility that certain physical properties of neurons are a necessary prerequisite for it. Being sensibly open-minded about this possibility is not akin to a belief in magic. I would rather say that the magical thinkers are those who have an unwavering faith in the ability of computer programs to conjure up conscious experience in the absence of any known causal mechanisms.


Worrying about "consciousness" is a dead end for now. We lack basic mechanisms for getting stuff done in an uncontrolled environment. That's why artificial intelligence is still too stupid to trust with anything important.


"Humans are conscious but rocks aren't" shouldn't be any more surprising than "humans are conscious but dead humans aren't". Sometimes stuff's just dead. In fact, most of the time it is.


Computers are also dead.


Not if you plug them in.


Not sure what you mean. There's a useful summary of the relevant argument here: https://consc.net/papers/rock.html


>> It's one of those thankless enterprises like OpenWorm, which is trying to get a simulation of the simplest known organism with a nervous system, a nematode with 959 cells. After over 20 years, it still doesn't work very well. Yet someone has to do that kind of ground work.

I was considering a research project like that, on insect intelligence (I like bugs) (and bugs like me) but I was discouraged from it, the discouraging agent noting the extreme inter-disciplinary nature of such research. That's another roadblock I guess.

Another form of this is that modern AI researchers tend to be completely clueless about cognitive science, psychology, and anything that has remotely to do with animal intelligence, other than computational neuroscience of the connectionist kind. People are trying to figure out how to make computers exhibit intelligent behaviour like that of animals, without having any idea what animal intelligence even looks like. Of course that's going to stall.

Then again I was in an AI workshop at the Royal Society in London recently, with many illustrious folks from all the big unis in the US and EU, and after a big discussion about the capabilities of large language models, the subject of trying to simulate animal intelligence came up and was discussed in some seriousness, I think.


>> Ok, but why are biological neurons exempted from this argument?

Because biological neurons are not computers? I think the argument is that our current theory of computation is not enough to explain the human mind, not that human minds are magickal and special compared to other computational devices. We just happen to understand one category of computational device that does not include human minds.

I think if we look hard enough we can find examples of computational systems that exist in nature and that are not Turing machines. For example, a weighted chain forming a catenary can instantly calculate the shape of a load-bearing arch, without going through any states: it suffices to set up the computational system (the weighted chain) and the result of the computation (the shape of the arch) is immediately known.

Meaning, "there are more things in heaven and earth that are dreamt of" in our computer science. Our mind is probably the biggest fucking mystery in all of existence and it is a little ridiculous to think we can understand it, and simulate it, with our primitive science and technology.


Is the catenary example just a case of really fast computation? Using the substrate computer directly is faster than using an emulated computer atop it. A real scene renders faster than any raytracer.

I'm sympathetic to the idea that we treat the mind as an analogy with the high tech of each epoch: steam, electricity, computation. But Turing equivalence exists. I haven't found Roger Penrose's arguments about quantum computation in neuronal microtubules very compelling.


>> Is the catenary example just a case of really fast computation?

It is, but I think its speed comes from the fact that it doesn't have to go through any discrete states during the computation. There's a start state where the weights are hung off the chain, then the end state where it settles into a new shape, but what is in between? An infinite number of states? You can even perturb it while it's settling, and it will still not change its end state, which is not something you can do to any kind of machine. The computation is also "free", as in gravity. I don't know how we could build a machine to do that, and I don't think we can describe it as a Turing machine.

Btw, what makes it a computational device is that the shape of the catenary matches the shape of a load-bearing arch. It's one physical system calculating another. So it's not just using the physics of one system to calculate itself.
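
To make that correspondence concrete, here's a minimal Python sketch (not from the article; the parameter a and the sampling grid are arbitrary illustrations): the hanging chain settles into y = a*cosh(x/a), and flipping that curve gives the arch.

    import math

    # A hanging chain settles into a catenary, y = a * cosh(x / a).
    # Flipping that curve upside down gives the shape of an ideal
    # load-bearing arch. 'a' and the grid below are arbitrary choices.
    a = 2.0
    xs = [i / 10.0 for i in range(-20, 21)]
    chain = [a * math.cosh(x / a) for x in xs]   # the hanging chain
    arch = [max(chain) - y for y in chain]       # inverted: the arch

    for x, y in list(zip(xs, arch))[::10]:
        print(f"x = {x:+.1f}   arch height = {y:.3f}")

The physical chain "finds" this curve just by settling under gravity; a digital simulation has to evaluate it point by point.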

But you are right. The universe calculates its own state faster than any computer running in-universe could. Well, why? That's just one thing we don't fully understand. Maybe thinking of the universe as a computer is applying the wrong analogy, and maybe we do the same thing with minds, as you say about electricity etc.

I can't say I understand Penrose's arguments very well. I'm not a physicist. Turing equivalence exists, of course. But there is something to the fact that I don't get stuck in a loop if I think that "this sentence is false".


This is what struck me as well. The explanation given for the rejection of panpsychism is that it's a non-scientific (unfalsifiable) theory. I think sooner or later you have to confront the fundamental mystery of how it feels to exist, and that we might not have the tools to access that mystery being trapped inside it and all.

Still, even if we allow panpsychism, I think there's a lot of merit in the author's points here... It seems unlikely that any system we build will be able to develop a robust enough model of reality if it can't learn by affecting reality.


> develop a robust enough model of reality if it can't learn by affecting reality

This seems like comparing AlphaGo with AlphaZero, the former learning from professional games and the latter from self-play. AlphaZero is better, but both understand the game well enough to beat pros. Notably, for quite a while most people believed something like AlphaGo would not be possible, along lines of reasoning similar to these arguments about AGI.

Or consider the thought experiment if the machine was given the ability to perform experiments and measurements in the real world of its own choosing. Would this make AGI more possible or still only similarly possible?


The article just uses an uncommon and trivially verifiable definition of "panpsychism" that can be proven by the existence of computers. I don't think the author reflects on what it means for himself.

(People who do this usually have the "but magic (oh no, it's just not-yet-understood physics, trust me), souls or whatever" explanation exempting them, but this one doesn't need exceptions.)


What is this unusual definition of panpsychism? The article doesn't explicitly give a definition, and appears to be using the term in the normal sense, as far as I can tell.


>Ok, but why are biological neurons exempted from this argument?

They're not. The neurons don't realize phenomenal consciousness purely in virtue of executing a particular program, but in virtue of certain (unknown) other of their many non-computational properties, such as their physical properties. This is the claim.


No reason to say “your whole body”. I think it’s well established that if you amputate the lower half of your entire body along with your arms, shoulders, ears, eyes, and nose, you’re still just as conscious.


You may still be considered just as conscious, but I'm not convinced you would be the same consciousness as before.


Does trimming your nails have a similar effect?


It may. I think every sensory input, or lack thereof, influences our consciousness in some way.

Specifically, it's not the cutting of the nails that influences the consciousness, but existing with the now shorter nails.


That may be, but what if you are stripped of access to the Internet?


> Consciousness is an emergent property of your whole body, not just something attributable to parts of it.

What an arrogant statement to make!

All the wisdom traditions of all the ages and of all the places have always taught that consciousness is the Epiphenomenon. The first cause that has no cause.

This is even a scientific statement, because there is an experiment that can be reproduced (and has been reproduced for thousands of years). If you meditate very deeply for hours upon hours (20,000 to 30,000 hours is a rough estimate given by the tradition of kriya yoga), you will have the experience yourself.

I highly recommend looking at Orch OR, a theory of consciousness by Nobel laureate Sir Roger Penrose and Prof. Stuart Hameroff: that substructures (microtubules) inside biological neurons are tapping into the state of the universe by going into superposition and collapsing... It is too much to elaborate further, but worth checking out!

Better yet, increase your meditations ;))


Why arrogant?

The statement seems sensical to me until proven otherwise.

What are the proofs that someone unable to perceive anything from the physical realm can be conscious?

[edit] Where I would disagree perhaps is the claim that a single neuron is not conscious.

It depends on the definition of consciousness. It's probably more like a spectrum. For me, philosophically, consciousness stems from physical interaction. So a rock, an atom, etc. would also be conscious; it just doesn't process physical-domain interactions the way a human does.

Just another form of consciousness.


The idea is that Consciousness is in fact a manifestation of 'God' and we're little sparkly transducers of the divine, & that no amount of 'structure' can give rise to our godlike -- we are 'sentient creators with free will' -- consciousness. Various spiritual schools differ on precise formulation. The one I am familiar with, Qu'ran, explicitly asserts that whatever can be considered 'Light' in 'the Universe' is God. "God is the Light of the heavens and the earth".

So the proposed arrogance is the 'metaphysical arrogance' that attributes such a profound capability to a few kilograms of organic matter. Interestingly, in the Qur'an, this sin of 'arrogance', of associating 'sentient worth' with material composition (the famous statement of Satan that it is "made of fiery matter while Adam is made of clay"), is a declaration of Satan. This then circles back to other comments in this thread questioning the organic-matter-mind (clay-centric) chauvinism.

> What are the proofs that someone unable to perceive anything from the physical realm can be conscious?

John Lilly famously did his isolation tank thing to get to the bottom of that, in a diminished (but not totally occluded) sensory environment. He wrote books about the experience. I personally find the idea that mere structure gives rise to mind to be 'wishful' (if not arrogant) thinking.


Isolation tank would not work because living organisms have memories (store data).

You would have to be born with no data and no way to acquire any (no perception).

Would you then be conscious?

Matter is just data of a particular kind as in anything.

The only thing that one is really sure of is existence. Note that non-existence doesn't even exist by definition.

If we want to get into theology, I would even claim that existence is the true unique thing that encompasses everything and all possibilities (some will never manifest physically). Call that God, the One (Allah) etc...

That could make sense. Most of everything else is folklore imho.


Please don't put the burden of proof on others. You must go so deep to realize that whatever you _TRULY_ know is what you have experienced yourself. All the rest is belief with different degrees of certainty. (That is why we have the whole scientific apparatus of degrees, titles, peer review and journals: to increase the certainty of a belief without having to reproduce all the experiments all the time, so that we do not have to "make the experiment to have the experience that led to that insight and knowledge".)

So if you want to know, and you are unwilling to settle for hearsay and logical deductions, you will have to dive deep!!

Mind you, I am not advocating "my" tradition, or telling you to buy my book or whatever nonsense. I beg you on my knees to sit and close your eyes and go inside!! It is horrible in the beginning because of ALL THE TRASH that we accumulate in our minds. You have no peace, no tranquility, no joy and no bliss. But for this much of a leap of faith I beg your pardon: IT IS THERE! There is absolute tranquility that leads to bliss, to be attained as a _side effect_ of your quest for truth! Truth is not what I or anyone else tells you! Truth is what you experience. The only question is how deep that experience is.

All the mystic traditions have taught the same. NO REASON to believe, but hopefully enough reason to risk trying it!!

(And besides that, you will get all the benefits that come with meditation and cleaning up the mess of the mind, which have already been proven by science.)


Lol... So if you claim that 2 + 2 = 5, I can't ask for a proof? That's new...


Lol.... I am talking on a whole other level of philosophy. The proof has been given.

Please give me a chance to explain. I am completely with you that you are allowed to ask for the proof! BUT: you will have to go through that proof (perform the experiment) to "prove" the correctness of the claim _for yourself_!! Only repeating the experiment will lead to experience. (In mathematics, that experience is what I call "the experience of a logical conclusion".)

As long as you never did the experiment, you will only believe (although your confidence may be high, because so many profs and PhDs and papers are claiming it is correct and you haven't heard otherwise to diminish your confidence).

Please allow that point of view to sink in, it is not that easily grasped.

All the very best to you!


The point is that I can't convince someone who doesn't see violet that there are 7 colors in a rainbow just by telling him there are.

I've experienced some pretty eerie stuff myself but the onus is on me to prove it happened. Fortunately for me there are physical manifestations of the phenomenon but the medical corps either doesn't have the tooling or the research direction to properly study it.

They are just happy giving people some medications. Fortunately for me, I don't need any and can function normally.

That's why appeal to authority is bad (PhDs and MDs are far from knowing much, one should realize, but that goes for your guru or religious figure as well). However, I understand what you mean. No one is discarding experience. One might discard the exactitude of the conclusions derived from the experience, however. That's the part that needs more proof.


Not the person you are replying to, but the assertion in question may not be arrogant if we construe it as having been made with the intent of saying "I think that consciousness is an emergent property having to do with the whole body," instead of flat out making the assertion as if it were the general consensus.

If an assertion sounds like it makes sense, that doesn't mean it's not arrogant to make it. Another point is that you are supposing that since it "makes sense", that's enough justification to pass the burden of proving it isn't plausible onto someone who disagrees. In effect it comes off like taking one person at their word while asking for documentation and proof from another. Consciousness is an elusive thing to define, much less to start making super strong claims about at this point, lest you reveal some internal prejudices.


Well people have to read between the lines and be charitable.

If both positions are not proven absolute truths, it's just as arrogant to claim another sensible opinion's arrogant.

By the way, if one corrects a statement, I expect to know why. Don't tell someone they're just wrong because they think 2 + 2 = 5 and you think 2 + 2 = 4.

That's why we have proof in mathematics so that we deal in truths and not opinions and avoid conflicts.


The opinion isn't arrogant, because opinions can't really be arrogant; it's the context they're presented in that gives off the impression. I probably took what you wrote more seriously than my reply merited, but I do get the impression reading comments on this site that some people overwhelmingly lean in certain directions regarding such topics; they are essentially prejudiced. What you say next I think shows you have a nice sense of humility :)

I definitely agree personally that you shouldn't "tell someone they're just wrong because they think 2 + 2 = 5 and you think 2 + 2 = 4." I think that's the fair way to go about things. But I'd also like to add to your point about math that those sorts of truths seem to be separate from the world of experience in some way, while the "truth" about consciousness, if you subscribe to strict physicalism, must be derived from and exclusively reside within the world of experience. By that I mean that even if there exists some point at which people arrive at a near-perfect scientific description of consciousness, before we converge to that point we dwell heavily in a discourse rooted in the inherent distortions of experience: faulty methods, faulty equipment, cultural trends, fads and biases in the field, etc...

Appeals, then, to anything but clear explications verified against that mountain of results people for some reason are expecting mostly resemble opinion, I think. But that's alright; opinion is good, and I wouldn't discount it.


Where you have a point is that, if we were to reason metaphorically and consider that everyone was colorblind and couldn't see indigo except for me... I would probably claim that a rainbow has 7 colors and not 6.

The onus would be on me to prove it though. Just like Galileo had to prove himself.

In our case here though, I don't think that anyone is claiming that there aren't possibly conscious beings that are immaterial, or non-physical even for our definition of physical (What dreams are made of for example)

The issue is that human consciousness as we know it depends on perceptions for awareness. Self can't exist without Other(not self) hence if one is unable to gain awareness of the existence of that which is not self, one cannot gain awareness of self.

It seems to be a capability derived from the human physical structure. And we have yet to encounter a human who never had a body. Nor do we remember what was before birth. Some people claim they remember past lives. But that still means they'd remember being a physical entity.

Last point is that interactions require a common language. Something that is purely non-physical would probably be completely orthogonal and imperceptible to us. It wouldn't even perceive us. If I define consciousness as a spectrum of interactions between physical data, it means that consciousness require a physical support. Remains to be determined though.

(not negating anyone's experience, and I fully agree that not everyone is able to experience existence the same way; usually, there would be measurable things if it is physical. I agree that oftentimes the scientific instruments might be lagging or inadequate, as too crude, so I guess I understand and agree with your point as well)


Your chain of reasoning is difficult to refute if we're speaking strictly in terms of "common sense" and what I think may be some commonly accepted guideposts we currently have culturally. For instance, "It seems to be a capability derived from the human physical structure," referring to the idea that we come to know what consciousness is by observing that the notion of self requires the notion of the other and vice versa, which constitutes part of, if not the whole of, our perception of awareness. Correct me if I'm wrong, but this is what I am getting.

Alright, next: to support the assertion that recognition of this dichotomy indicates a strictly physical basis for consciousness, we assert that we've never encountered a person without a body, implying, I think, that a body is necessary for drawing the boundary between "in" and "out" which characterizes consciousness.

What I would find a bit tenuous about this approach is that it is difficult to define what the physical, the material, really even is. Next, we are working off the presumption that consciousness comes from a division you point out, but this definition comes from a subjective personal observation, which in your last sentence you concede may be the case since people's experiences are different. For instance, in Advaita Vedanta we have one historical example of your assertion with its head turned over, essentially that there is no division, that it is illusory. This idea I would argue has about as much a basis as the one put forth, and this is due to our limitations with such subject matter. If we go down the non dualist route, well I would think we'd be down a pretty different road.

It also doesn't seem immediately apparent how, for lack of a better term, beginning with a dualist assumption, that we can then jump into the idea that this seems to be derived from human physical structure. Sure it seems like it has definitely something to do with it, but the place we begin is so fundamental it feels circular to look through the same tool in order to measure the tool so to speak.


Interesting. In my understanding, I'm not sure that division and oneness are actually in opposition.

The rationale being that the one overall encompassing thing is existence. Existence exists by definition and therefore non-existence cannot exist by definition.

It's within that one thing called Existence that I believe divisions are necessary for everything else to emerge. If everything is differentiated parts of existence, the physical world as we know may be just one of many kinds of worlds. (each world would probably called physical, composed of non orthogonal bits of existence so that interactions remain possible)

But division or differentiation is necessary for interaction. Total sameness, homogeneity cannot create space or time or anything else. Even in the physical world, gradients create displacements.

I am just not sure about the drivers of such differentiation.

I agree that, the concept of my own being exist outside of the physical world for an external observer of our world.

But my hunch is that this concept is quite plausibly the concept of a given physical structure of physical bits of existence just like any anything we perceive.

The one constant is existence.

Where you are absolutely right is that I could also be discarding the fact that we may be interacting unbeknownst to ourselves as part non-physical being. Basically, our physical existence would just be a projection in the physical world of higher dimensional selves. (thinking in the algebraic, tensorial sense).

Our physical world could also have been entirely created by a single entity but that entity would still be one of many parts of existence if not merely existence itself so I am really interested in the highest levels.

As far as I can personally tell, we don't know which take is true. One seems to be quite plausible with respect to our collective experience, especially since we don't seem to be what is in our imagination. Even the idea of the self that we can envision via our imagination might be something else entirely, residing in an orthogonal world. :)


It's not a sensical statement; it is akin to saying that the fundamental properties of the Universe are Fire, Water, Earth, and Air. It's clearly wrong to say that "Consciousness is an emergent property of your whole body". A person who has their legs and arms removed suffers no difference in their actual consciousness. They suffer a lack of motility.

A person who has had their heart replaced with a pig's heart does not suddenly lack consciousness or have the consciousness of a pig. Extremely short people, such as Afshin Ghaderzadeh 2'1", don't have less consciousness than extremely tall people, such as Robert Pershing Wadlow 8'11".

Stating that consciousness is not an emergent property of the whole body should be a starting point, after which further research and discussion can occur.


You don't get what he claimed. Your body allows your perceptions which are basically stored data that initiate the construction of your consciousness.

No data = no consciousness.

Do you think someone who was blind, deaf, and without a sense of touch from birth can be conscious of (have an internal representation of) colors? Dream in colors, even?

The science says that they wouldn't even dream in images.

As was said in the parent comment, the issue is that people have overloaded understandings of what is meant by consciousness.

First it would be wiser to expound on what is meant by consciousness by each...


You are misunderstanding what was written. Because of that, you are claiming something different from the earlier claim. The earlier claim was that consciousness arises from the whole body. You are claiming that consciousness arises from the senses.

Instead of guessing, your assertion can be tested by closing your own senses. Cover your eyes, ears, etc. - which has been done in sensory deprivation tanks. Closing the senses does not remove consciousness, therefore consciousness is not dependent upon the senses.

> First it would be wiser to expound on what is meant by consciousness by each...

Then, please don't make claims as to how consciousness arises unless you first define what you mean by the word consciousness.


The claim is that a body is necessary for consciousness to emerge. Why? Because of perceptions. I'm sorry, but your counter-example of shutting down your senses doesn't work, because you have already perceived. Don't forget that living organisms have memory (data storage). Besides, it's probably not shutting down all perceptions anyway...

The better question is would you think you would be conscious if you had never perceived the physical world?


I actually think it's questionable whether a newborn infant is conscious. I think it takes years of sensory input and possibly even language development to establish real consciousness. Of course, it's entirely impossible to determine whether any organism other than yourself is conscious. I might be acting like I am, but truly you have no idea whether I'm actually having a subjective experience.

Another interesting thought experiment: if you developed as a baby with the complete absence of senses (no sight, touch, smell, etc.), would you ever develop consciousness? I suspect not. I believe our inner subjectivity is inexorably tied to our senses.


This is just a no true scotsman mixed with p-zombies. Not really compelling. Babies are obviously conscious.


This is just deflection followed by a statement without evidence.


checkmate


How is that obvious? There is literally no known objective method to ascertain if another organism is conscious.


Sure, you can claim without evidence that babies are not aware of their surroundings. And I can point at all the babies in the world who interact with their environment meaningfully. This is what is meant by obvious. You will say this is not "objective", a demand I don't find meaningful or persuasive.

You posit again without evidence that sensory input and language development create "real consciousness", which you haven't bothered to differentiate from pre-"real consciousness". This self-refuting argument demands that an unaware newborn become aware by the mechanism you like, interacting with the world, whereas I believe they are conscious by construction.

I would refute this by saying the baby was always conscious and simply learned some new behavior. This devolves into a ship of Theseus argument about whether a fetus is conscious, or whether an embryo or zygote is conscious. You have created a fake boundary you can't identify and put newborns on one side of it.

I've already said what I think about p-zombies. They are an incoherent impossibility. I could use your same argument and say that even adults aren't really conscious: they are simply lying zombies reporting subjective states that don't exist. What is gained by claiming babies are zombies until the day they aren't?


You would need to explain why consciousness is a discrete thing, though: you don't feel what a rock feels just because you are close to it. You just experience the data from a small subset of neurons in your brain; why not the other neurons? So not even computational connectivity explains it, otherwise your consciousness wouldn't be tricked by optical illusions, since it would be able to feel the data flowing through the optic nerves before the computations that create those illusions happen.


I'm not sure I understand.

I can be conscious of the existence of the rock. Doesn't mean that I am the rock.

Each interaction has different participants that are conscious of each other in their own ways.


> I highly recommend to look at ORCH-OR (...) That substructures (microtubeli) inside biological neurons are taping into the state of the universe by going into superposition and collapsing.

Orch OR has been criticized both by physicists and neuroscientists who consider it to be a poor model of brain physiology. Orch OR has also been criticized for lacking explanatory power; the philosopher Patricia Churchland wrote, "Pixie dust in the synapses is about as explanatorily powerful as quantum coherence in the microtubules." [1]

[1] https://en.wikipedia.org/wiki/Orchestrated_objective_reducti...


It has gained a lot more traction in the last couple of years, when it was figured out that quantum effects DO HAPPEN in "noisy and wet" environments. Please update your research!

Of course there is still criticism; this is how scientific research works. Criticism does not mean disproven, mind you! It is a "new" theory with a lot of promise that needs a lot more work.


Appeals to tradition and authority are not arguments. Look in the mirror for arrogance.


I've only skimmed but it seems like the general idea is sound, that neural networks are good at pattern matching and that shallow pattern matching is only one aspect of "deep thought", logical reasoning being the other one.

I'm confused though, they give examples of Go and Chess but, as far as I understand, this is exactly what the author is saying doesn't exist. AlphaGo, Deep Blue, etc, all use neural networks to weight how to traverse the minimax tree [0]. The minimax algorithm/tree is precisely the "logical reasoning". The neural network is "just" prioritizing the path to take.
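
As a toy sketch of that division of labour (with a hand-written evaluate() standing in for the trained network; none of this is AlphaGo's or Deep Blue's actual code, and as a reply below notes, AlphaGo really uses Monte Carlo Tree Search rather than plain minimax):

    # The search (minimax) supplies the "logical reasoning"; the
    # evaluation function, a stand-in for a learned value network,
    # supplies the pattern matching that scores positions.

    def moves(state):
        # Trivial game: players alternately add 1 or 2 to a counter;
        # the game ends once the counter reaches 10.
        return [] if state >= 10 else [1, 2]

    def evaluate(state):
        # Stand-in for the learned evaluation: how good does this
        # position look for the maximizing player?
        return 1.0 if state >= 10 else state / 10.0

    def minimax(state, depth, maximizing):
        if depth == 0 or not moves(state):
            return evaluate(state)   # leaf: ask the "network"
        scores = [minimax(state + m, depth - 1, not maximizing)
                  for m in moves(state)]
        return max(scores) if maximizing else min(scores)

    print(minimax(0, depth=4, maximizing=True))

Swap evaluate() for a trained network and the overall shape is the same: the search does the logic, the network prioritizes.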

Anyway, like I said, the reasoning is sound. I just read a paper specifically about this breakdown, called "On the Paradox of Learning to Reason from Data" by Zhang et al. [1], where they construct a synthetic data set based on some simple logical rules and then try to train a neural network to recapture the underlying logic (spoiler alert: it fails). From their paper:

""" Our study provides an explanation for this paradox: instead of learning to emulate the correct reasoning function, BERT has, in fact, learned statistical features that inherently exist in logical reasoning problems. We also show that it is infeasible to jointly remove statistical features from data, illustrating the difficulty of learning to reason in general. """

[0] https://en.wikipedia.org/wiki/Minimax

[1] https://arxiv.org/abs/2205.11502


Nope, I take it back. The article is not coming from a place of (in my opinion) reason.

This is, in fancy language and snowed under by enumeration of historical research, math and science, an argument that "computers can never have a soul".

From the article:

""" No matter how sophisticated the computation is ... there remains an unbridgeable gap (a “humanity gap”) between the ... ability of machine and the general problem solving ability of man. """

This is just standard dualist nonsense under the guise of logical objectivity.

Note that there is a deeper issue about the shortcomings of statistical pattern matching to do logic, which is what the paper I reference above is trying to get at, but the parent article's fundamental assumption/conclusion is that computers will never be able to reason.


It really isn't about having a soul, but more like about either:

1) stating human brains are actually hypercomputers, which cannot be implemented in real life at the moment

2) stating just what you said before: pattern matching, no matter how many layers or parameters used, is not all there is to intelligence/consciousness

At the moment, whether you like it or not, there's no proof at all that the human brain works in any way like a Turing machine. That idea, which has nothing to back it up, is the usual POV here on HN, and it is the actual faith-based belief: again, no scientific papers to back it up.


Do you really think there's more to the human brain than a more-complex version of a computer? I think we've all established that the brain is a network of neurons with different structures that do different processing and that the neurons work in a specific way to turn sensory inputs and memories into thoughts and actions. We don't know everything about the metaphysical aspects of how the mechanics turn into "consciousness" but we pretty much know that we could, with a powerful machine, simulate most of the mechanics ourselves. Maybe an AI isn't as complex as our biological computer, but that's because our AIs run on computers that are only several dozen generations in, versus our brains which are several million generations in. Perhaps in several million generations of our AI hardware and software, we'll come up with something as computationally powerful as our brains.


I don't think we know any of that. We know many of the mechanisms through which neurons act but not all of them, nor do we know that we can simulate a human brain on a computer.


> stating human brains are actually hypercomputers, which cannot be implemented in real life at the moment

Extraordinary claims require extraordinary evidence, and absence of evidence is not evidence of absence.


It's pretty easy to prove that a brain can simulate a Turing machine, and it's pretty hard to disbelieve that a computer can simulate a brain (by running the physics).


If a brain is a hypercomputer (like a quantum computer), then you could still simulate it on a classical computer; you'd just never be able to do it with only a constant slowdown, so it could never be fast enough to catch up with reality.

Also, you wouldn't be able to simulate a specific existing brain due to the quantum no-cloning theorem.


The claim I find objectionable is that brains do 'hypercomputation' and cannot be run on a Turing machine.


Brains can solve the halting problem, Turing machines cannot.


no they can't.


To your point 1, arguments of this sort are not that we can't make a computer, someday, that will encapsulate consciousness, it's an argument that we will never be able to. People will try to use fancy contortions to get around it, but the basic argument is that consciousness is fundamentally not "of this reality", so no, the article is squarely taking a stance that thinking machines are not a "maybe, someday" event, they're a "never" event.

To your point 2, this directly contradicts your point 1, and is, what I would call, a "dualist" philosophy. This is a basic statement that thinking, the way we do it, is fundamentally not mechanical and therefore not describable by our laws of physics. That is, however we think is literally not of this universe. The "dualism" comes in by naming the other "plane of existence" (or whatever you want to call it) as the companion universe or reality to ours where our intelligence effectively resides.

I think Turing is a little maligned here. His test was meant to strip away all the red herrings that one might use and focus on what (I'm imagining) he thought was the most fundamental of all human traits, the ability to reason and to use language. The Turing test is a functional one. If something talks like a human, reasons like a human, acts like a human, then how would you tell the difference between the "real" human and the "simulated" human? By creating a functional test, it gets at the core of the issue. If you can't come up with a test to differentiate one from the other, the question of what's "really" human becomes academic.

So, too, the focus on Turing machines. The specifics are a little byzantine nowadays, but the point is that there's an "equivalence" of machines, where one can be used to simulate another. The point about being Turing machine equivalent is the mechanization part, not so much the specifics of tape reading and what the state machine looks like.

In terms of proof, I would say this: if someone had set out to predict the progress of machine learning and artificial intelligence as it moves toward feature parity with the human brain, they might well have predicted all of the milestones that have been achieved. For example: beating the best human at checkers/chess/go, being able to do speech-to-text, being able to identify people by their faces, being able to replicate someone's voice/gait/face, being able to draw novel art, being able to hold a conversation, etc.

In my opinion, the human brain is not monolithic and is composed of many subsystems interacting with each other. That being the case, I would expect some type of "general intelligence" to come about once all the subsystems have become modular and easy to use, with some kind of "operating system" bringing them together, able to have one part communicate with another and to make broader-scoped decisions based on input from each of the subsystems.

This is a long way of saying I believe there is quite a bit of evidence that we're getting closer to "thinking machines". They may not work exactly how our brains work but that's kind of beside the point, they're functionally doing what we would expect them to do on the path towards human brain feature parity.


I said either, not both points.

Point 2 does not require what you call a "dualist view"; it's simply stating that deep neural networks are not all there is to intelligence. Another layer of fact checking, maybe even imperative programming, could be an answer. It certainly has nothing to do with a soul, but with the belief that the human brain is a bunch of lookup tables.


Do you have any recommended reading/viewing on the subject?


I've been thinking about this a bit since I saw your reply. I'm not sure how to respond to this and I'm not sure references I would give would be satisfactory to you.

What kind of reading and references are you looking for? Can you give some examples of reading or other resources you've found enlightening on other subjects?


Thanks a lot for summarizing it. I was curious to see whether it had a new idea, apparently not.


We are at an interesting point where now we're saying that the boolean logic of transistors can't do logic. Only a while ago we were saying that the rigid boolean logic of transistors couldn't do what large models are now doing.

I predict that it is only a matter of time (not even all that long) before we have models that hypothesize logical invariants and disprove them or possibly formulate proofs.

The distinction about having a soul or not--what are the purported pragmatic advantages of having a soul that contribute to intelligence? I haven't thought about it much, but I can think of a few things: (a) my memories/sentiments surviving physical demise and somehow thinking/feeling again, (b) an existence beyond our physical world, which doesn't contribute to intelligence in and of itself, (c) some sort of hive mind/collective, which, unless it isn't bound by the speed of light, isn't that different from interconnecting processes.

When we say "general intelligence" do we really mean "be human" which is not interesting. Why is having a soul assuming humans do, an impasse for AGIs? As far as I can tell it's only about preserving some (pragmatic) aspect of human > machine. This has fallen in area after area and will continue to happen until we quietly drop the preconceived notion.


>This is, in fancy language and snowed under by enumeration of historical research, math and science, an argument that "computers can never have a soul".

Yep. It is amazing to me how often this argument is presented by people who seem, and probably imagine themselves to be, rational. It's always disguised under several layers of definitions so that the key argument -- that humans are magic -- is obscured.


The paper you shared is WAY more interesting than the original post.

Maybe this is a naive criticism, but if the goal of the model is to "emulate the correct reasoning function", by which is implied that it learns propositional logic rules, then shouldn't the model have some architecture that tries to model generic logic rules and then use that to score the input data and predict the result?

Currently it seems like the model is simply scoring each predicate, since each of its reasoning layers performs one step of forward chaining, adding some predicates to the proved facts.

My naive (and way undercooked) way of conducting this experiment would have been to try to use a GAN, where the generator model comes up with new predicates and the discriminator model tries to classify them as real/fake. I would then try to train an MLP on top of the generator to classify the result, and swap out the MLP depending on the sampling method; basically, the generator model becomes the pre-trained thing we care about, to see if it can generalize.

Another nitpick is that the authors claim that BERT has enough "capacity" to solve SimpleLogic, BUT this isn't actually what they want to achieve, since solving != learning to reason. So it feels like a bait and switch, since IMO if a model has the capacity for something then it has some architectural aspect that can be used to show it can achieve a smaller version of what you want, and they didn't prove BERT can learn ANY propositional logic rule.


Slight correction: Deep Blue beat Kasparov in the 90s with minimax but did not use neural networks, afaik. AlphaGo does use neural networks to traverse a game tree, but its algorithm is Monte Carlo Tree Search.


(From the paper you mentioned above.) I added some extra lines so it would print properly on this website; it ran without change in Python.

Here's what I got out of ChatGPT after I gave up trying to get it to answer it directly:

write A Python script that parses the following and uses forward chaining inference to answer the following:

Facts: Alice is fast. Alice is normal. Rules: If Alice is fast and smart, then Alice is bad. If Alice is normal, then Alice is smart. If Alice is normal and happy, then Alice is sad. Query 1: Alice is bad.

Answer: Here is a python script that uses forward chaining inference to answer the given query:

    # Define the facts
    facts = {
        "fast": ["Alice"],
        "normal": ["Alice"],
    }

    # Define the rules
    rules = {
        ("fast", "smart"): ["bad"],
        ("normal",): ["smart"],
        ("normal", "happy"): ["sad"],
    }

    # Define the query
    query = "bad"

    # Initialize the list of known conclusions
    conclusions = []

    # Iterate over the rules
    for premises, conclusions_ in rules.items():
        # Check if all the premises are known to be true
        if all(premise in facts for premise in premises):
            # If all the premises are true, add the conclusions to the list of known conclusions
            conclusions.extend(conclusions_)

    # Check if the query is in the list of known conclusions
    if query in conclusions:
        print("The query can be inferred to be true")
    else:
        print("The query cannot be inferred to be true")

For the given facts and rules, the script will output "The query can be inferred to be true" because Alice is fast and normal, which means that Alice is smart according to the rules, and Alice is also fast, which means that Alice is bad according to the rules.

now chain the rules using the facts from the previous run to see if any new facts are generated and if not then quit

To chain the rules using the facts from the previous run, we can simply put the forward chaining inference logic inside a loop and continue iterating until no new facts are generated. Here is an updated version of the script that does this:

    # Define the facts
    facts = {
        "fast": ["Alice"],
        "normal": ["Alice"],
    }

    # Define the rules
    rules = {
        ("fast", "smart"): ["bad"],
        ("normal",): ["smart"],
        ("normal", "happy"): ["sad"],
    }

    # Define the query
    query = "bad"

    # Initialize the list of known conclusions
    conclusions = []

    # Keep iterating until no new facts are generated
    while True:
        # Iterate over the rules
        for premises, conclusions_ in rules.items():
            # Check if all the premises are known to be true
            if all(premise in facts for premise in premises):
                # If all the premises are true, add the conclusions to the list of known conclusions
                conclusions.extend(conclusions_)

        # Initialize a flag to track whether any new facts were generated
        new_facts_generated = False

        # Iterate over the conclusions
        for conclusion in conclusions:
            # Check if the conclusion is already a known fact
            if conclusion not in facts:
                # If the conclusion is not already a known fact, add it to the list of facts
                facts[conclusion] = []

                # Set the flag to indicate that a new fact was generated
                new_facts_generated = True

        # If no new facts were generated, break out of the loop
        if not new_facts_generated:
            break

    # Check if the query is in the list of known conclusions
    if query in conclusions:
        print("The query can be inferred to be true")
    else:
        print("The query cannot be inferred to be true")

This script will iterate over the rules and apply them to the known facts until no new facts are generated. It will then check if the query is in the list of known conclusions and output the result.


This seems pretty cool and like a step in an interesting direction, but I'm having trouble running the code. It may just be the HN formatting.

If you indent by 4, this will do "block code" type formatting. So, to copy code, try indenting everything by 4 spaces before you paste to HN. That way we can try running the code ourselves.


It seems to me that the Chinese Room Argument does not address the possibility that, in carrying out the algorithm, the person executing it is instantiating a computational mind whose contents are inaccessible to the substrate hosting the mind. In other words, this argument fails because it does not resolve the hard problem of consciousness.

The panpsychism argument is an interesting parallel to the Free Will Theorem. I am a bit wary of assuming which side of the possibilities the outcome must fall on.


It's an article in a psychology journal; those folks tend to use different ontologies than computer scientists. That said, it's a very substantial review and I don't necessarily disagree with it.

No, causal reasoning won't fix it. But it is one more ingredient. I can't tell what all the others are, but the kind of thing that LLMs do is already very close to the mindless repetition I did in kindergarten to get positive feedback. Scientists have also noted that animal intelligence, and by extension ours, tends to arise in response to a challenging environment. Then there is the illusion of self-consciousness, the clearing-house of environmental and internal information that engenders our thoughts.

It's like we have a stack of papers and we are searching for a way to bind them together to make a book. I think we will eventually find a way, and that will be AGI. What we will get won't be like us, but it will pass all the tests. Now, is it just me, or is creating AGI stupidly irresponsible?


The minute we created agriculture, this day was inevitable. Is it stupid that we created agriculture? That is a matter of opinion, but I would say no.


It's not inevitable. We know how to create nuclear bombs, and generally how to kill each other. We are not doing either of those wantonly.

And forgive me, but what does agriculture have to do with AGI?


Not who you're asking, but I do think there's something interesting going on with human beings and organizational structures, such that agriculture is a kind of Pandora's box that leads to increasing complexity of organization so long as the process isn't stopped (that is, so long as humans don't go extinct prematurely). I think the same applies to fire and energy: once fire is harnessed, if we go on indefinitely, the desire to capture and use more energy leads to advancement over time and better energy utilization. I think this process is tied in some way to cognition and computation as it occurs in the universe. I know it might sound hokey or whatever, but I really do think something interesting is going on that we don't grasp because it's too big for our heads.


Yes, it is fractals all the way down and up but we are too stupid to realize it.


> the illusion of self-consciousness

I've read and heard that a lot, and it is appealing in many ways.

However, I always ask myself: who is having that illusion?


This question always comes up for me when people talk about the simulation theory. What exactly is being simulated? All of my perceptions but my brain is real, or am I also being simulated?


It's not about simulation theory as in "we are living in a simulation," but about consciousness. My reasoning goes like this:

- 'A' is a computing entity and an organism: it takes information in, it reacts. Its genes use those two functions to ensure survival. Note that A doesn't need to be conscious.

- As environmental complexity raises, A must collaborate with other members of its own species (most likely, the same genes control the hardware) or of other species. Let's say B.

- Evolution gives A the capability of partially simulating B ("other"), so that they can negotiate. Evolution also gives them the capability to communicate. Communication happens between members of the same species because they share the same set of genes. If B survives, the genes of A survive, even if A itself doesn't (abnegation).

- The capabilities are somewhat loose. For example, there is no restriction that A must be different from B. If A can simulate B/other, it can also simulate with B/other=A. The trick is very useful to strategize interaction ("plan"), and to cope with bigger groups.

That process where our brains diverge and simulate an interior dialogue is what we perceive as consciousness. For example, as I write and edit this post, I'm actively imagining ('simulating') how peers from my species will interpret it, and my sentences are changing so that they are decoded by the path of least resistance in the reader, thus advancing my goals (actually my genes').

So, consciousness is an illusion that we can't escape in speech[^1], discourse or logic. It is what we are. And it is tremendously useful, because it allows us to "edit our future" to a great extent, with the implication that by doing so, we increase the chances of own survival or, if that's not possible, of our genes' survival.

Now, that component is missing from our current AI iteration. Put simply, our artilects don't run continuously and don't have genes commanding them. They don't have an endocrine and nervous system constantly producing overriding signals ("hunger", "pain", "lust", "do so that the group survives") that must be rationalized a posteriori to the point of forming entire cultures. The thing is, we could engineer entirely artificial replacements of those things, and with a bit of fussing, we would replicate the illusion of consciousness. But the idea of doing that and also binding our creations to do useful work is simply repulsive to me (edit: my genes are providing an overriding signal saying that creating a being with the potential to inflict damage on me and my group (read: them, the genes) is something that only deleterious members of the species would do, and that at the very least, dissuasive action should be taken).

[^1]: We can shut it down however, it's not inherent. We routinely do when we sleep, and sometimes when we do something that requires more of our lower brain, for example doing intense physical exercise or having sex.


Everyone.


Unless you are being sarcastic, I think the person was asking about the "real" substance of the entity that is having a subjective experience.


Particularly, having an illusion of consciousness would appear to require someone to actually have that illusion. And that someone would appear to be having a subjective experience, i.e. consciousness.

So to me the whole idea of "consciousness is an illusion", while cute, solves nothing.


> or is creating AGI stupidly irresponsibly

Stupidly irresponsible and yet inevitable. It's just a question of whether it's a democratically accessible model of AGI or a nation state that wields it like the atomic bomb shortly after WWII. That might be our best analogy for an utterly game-changing technology with profound leverage.


Nothing new about creating general intelligence. WWI and II were fought using general intelligences called soldiers which were mass-produced by defense technologies called mothers.

(The difference between this and AGI is that you're imagining AGI has superpowers, but since it's your imagination just don't do that and now it's safe. Well, as safe as an army is, so not very.)


All those arguments reduce to "humans have a soul, computers don't, checkmate atheists". Ironically, soon AI will be able to generate articles arguing that AI can't ever think, automating yet another job.

Any model capable of executing a Turing machine (given infinite resources, so in practice bounded, of course) can emulate another program, even if inefficiently. GPT empirically can execute a simple VM. At best the 'wrong model' argument would have to point out that a specific design collapses once the program to execute becomes too big, no matter how much compute/memory is available - but then, it's only a technical claim about one specific design.
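(To make "a simple VM" concrete: below is a toy register machine in Python, with made-up opcodes and a multiply-by-repeated-addition program, purely for illustration of the kind of step-by-step execution people have prompted GPT to trace - not anything from the article or OpenAI's docs.)

    def run(program, regs=None):
        regs, pc = dict(regs or {"a": 0, "b": 0, "c": 0}), 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "set":
                regs[args[0]] = args[1]
            elif op == "add":
                regs[args[0]] += regs[args[1]]
            elif op == "dec":
                regs[args[0]] -= 1
            elif op == "jnz" and regs[args[0]] != 0:
                pc += args[1] - 1   # relative jump; -1 compensates for the pc += 1 below
            pc += 1
        return regs

    # multiply 3 * 4 by repeated addition
    prog = [("set", "a", 3), ("set", "b", 4),
            ("add", "c", "b"), ("dec", "a"), ("jnz", "a", -2)]
    print(run(prog))   # {'a': 0, 'b': 4, 'c': 12}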

This specific article has another problem - it handwaves the definition of consciousness. It has to do it, because any attempt to actually define it necessarily either defeats the whole argument (it's some property of a computable system) or reduces it to a magical claim about an external soul.


I can highly recommend Nobel laureate Sir Roger Penrose's take on AI. He argues that AI does not in any way 'understand' anything. He goes even further and uses Gödel's theorem to argue that 'understanding' itself is non-computable.

I am not sure about this, but after playing with ChatGPT I can clearly see what he means by lack of 'understanding'. There is absolutely ZERO real understanding... only correlations and probabilities of words...


Why would you equate ChatGPT with AI? It's just a language model. I know it's fashionable to call anything machine-learning related "AI", but it makes for nonsensical conversations such as this one, where Penrose is using "AI" to refer to any potential future AI (something that at least appears to have human-level intelligence), but you're using "AI" to refer to something completely different - a dumb language model.

Penrose is one of those people who is "religiously" against AI. He will clutch at any straw possible to argue that AI will never be conscious, or understand anything, or encroach on our human uniqueness and/or spirituality in any way. His favorite argument seems to be that our brain is utilizing (inherently non-computable - at least by non-quantum computer) quantum effects in the brain's microtubules, but I'm sure if it wasn't that it'd be something else. He doesn't WANT to believe, therefore he finds an excuse not to believe.


Probably the main reason for this confusion is the lack of a good definition of “intelligence”. Basically no consensus if you browse the associated Wikipedia page. Of course the definition is also bogged down in human-centered history, assuming that humans are at the pinnacle of some contrived intelligence scale and placing other animals below us. I’m sure the genius philosophers who came up with this scale would’ve placed themselves at the pinnacle within humans as well.

Tacking “artificial” on top of that doesn’t make it any clearer.

My understanding of the taxonomy of these things within CS is that AI is a broad class of techniques for problem solving. Machine learning is a subset of those techniques which uses data and statistical methods. Non-ML AI is sometimes called GOFAI (“good old-fashioned AI”).


AI and ML are two different things. AI can be considered either as a research area or simply as an end goal/result. AI doesn't itself refer to any set of techniques - it's the goal, not the means to the goal. You can consider GOFAI as non-ML, although perhaps it's more correct to regard it as older symbolic approaches (rule-based systems, etc.) vs modern connectionist/ANN ones.

Machine Learning (ML) is a catch-all term (not in and of itself a technology) for any approach where systems are designed to learn from data. This includes techniques such as random forests and SVMs, as well as neural nets.

There's really no fundamental relationship between AI and ML. Neither one is a subset of the other, as this is an apples and oranges comparison - one is a goal, and the other is an approach. That said, all recent progress towards AI has been achieved by using ML, although not all uses of ML can really be regarded as AI.

Hope that helps define terms.


You may not see one as a subset of the other but there is no shortage of nested diagrams like [1], showing machine learning as a sub-field of artificial intelligence.

Further, the canonical Russell and Norvig AI textbook [2] only mentions machine learning briefly, as one of several skills that a computer would need in order to pass the Turing Test:

> machine learning to adapt to new circumstances and to draw new conclusions

And the Wikipedia page (which also contains a similar nested diagram) describes ML as a “part of” artificial intelligence.

So it is pretty clear to me that ML is treated as a sub-field of AI. However I still believe the AI field dances around the definition of Intelligence, preferring a practical task-oriented definition instead.

[1] https://www.edureka.co/blog/ai-vs-machine-learning-vs-deep-l...

[2] https://www.amazon.com/Artificial-Intelligence-Approach-Stua...

[3] https://en.wikipedia.org/wiki/Machine_learning


Actually the Wikipedia page you linked to there shows two diagrams, one with ML as a subfield of AI, and another with them as separate overlapping fields (I guess the overlap is meant to represent where ML is being used to pursue AI).

> As of 2020, many sources continue to assert that ML remains a subfield of AI.[29][30][27] Others have the view that not all ML is part of AI, but only an 'intelligent subset' of ML should be considered AI.[5][31][32]

I'm well aware that there are some people who regard anything ML-related to be AI, which is the sensationalist view taken by the popular press who want to write stories about AI, but that doesn't mean it's true. What does something like a SVM classifier have to do with AI - it's more like a line fitting technique. Or what about a neural net that classifies photos as cat vs dog - seems better described as image recognition than AI (and the whole post-AlexNet modern neural net revolution was born out of the 2012 ImageNet image recognition competition). Or how about an echo-cancelling circuit in a telephone - is that AI to you? Why not - it's based on machine learning ...

As for ChatGPT, technically it's a language model, so really part of NLP - linguistics research. All it does is try to predict the next word in a word sequence. Of course it therefore ends up regurgitating lots of intelligent stuff it was trained on (as well as dumb and false stuff it was trained on), but that doesn't make it intelligent. Similarly, we don't say Google is AI just because it responds to search queries by finding web pages with intelligent content.
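(For anyone who hasn't seen "predict the next word" spelled out, here is what it looks like at toy scale - a bigram counter in Python. A transformer is vastly more capable and works nothing like this internally, but the sampling loop is the same idea; the tiny corpus is invented for the example.)

    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # count which word follows which -- a "language model" in miniature
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        counts = follows[prev]
        return random.choices(list(counts), weights=list(counts.values()))[0]

    # generate by repeatedly predicting the next word
    word, out = "the", ["the"]
    for _ in range(8):
        word = next_word(word)
        out.append(word)
    print(" ".join(out))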


Just wanted to point out that ChatGPT is more than just a language model - from OpenAI's (very brief) description, it was also trained with reinforcement learning to select/rank the "best" answer [0].

I think the distinction is important because I suspect it explains why ChatGPT succeeds at certain tasks when previous LM-only models failed miserably.

[0] https://openai.com/blog/chatgpt/


Yes, that's the difference between a plain language model like GPT-3 and a "task aligned" one like ChatGPT (which is based on GPT 3.5).

I'd still describe it as a language model, just one with "filtered" output.

I'm not sure if ChatGPT has been documented/described, but it's very similar to OpenAI's InstructGPT which they have described, and which they still refer to as a language model.

> We’ve trained language models that are much better at following user intentions than GPT-3 while also making them more truthful and less toxic, using techniques developed through our alignment research. These InstructGPT models, which are trained with humans in the loop, are now deployed as the default language models on our API.

https://openai.com/blog/instruction-following/


What is understanding if not correlations of words and ideas and concepts? I totally respect Penrose, but unless he has a noncomputable alternative for the brain, that's just magic (and completely unnecessary). Now I know Penrose used to have ideas about how the brain is actually a quantum computer which uses entanglement. But that is still computable.


The bar should not be “does the machine understand” but rather, “can the machine be distinguished from a typical human by a test”

And further, for any test we choose we must ask ourselves, “do we expect our ability to discriminate between these two classes, man and machine, to go to zero as machine performance continues to improve”

If we believe that only continued performance improvements are needed to make AI indistinguishable then that means today we’re only talking about differences in quality rather than differences in kind.


User idiotsecant posted this excerpt from ChatGPT a couple of weeks ago. It kind of blows my mind (the last paragraph). It feels a bit beyond just basic probabilities of words.

https://news.ycombinator.com/item?id=33833178

We're just at the beginning.


Didn't blow mine, even though it involved a howitzer! It's obviously operating on a higher level than "what was the previous word", but it doesn't exactly feel like high-level reasoning. When prompted for similarities it elides "inflated" with "blew up" to seed a paragraph (rather than to crack a corny joke, which is the only context in which I can imagine a human who knows what a howitzer and a beach ball are doing likewise!), adds some sentences consistent with sentences in its corpus with "blowing up" and "loading" as the respective verbs, and appends a sentence stating that the comparison is tenuous, because howitzers, beach balls, and the verbs "inflating" and "loading" aren't often found in close proximity in its corpus (and caveating clearly was well rewarded in ChatGPT's training; it's almost comically quick to do so in every single non-trivial answer).

Impressive that it does so in coherent English, but I'm not convinced it's in any danger of understanding firearms or fun beyond realising which sentence forms those words would be a good or a poor fit for.

I posted an excerpt from ChatGPT a couple of days ago where it stated - in an identical format to its previous two correct answers - that 355 was a prime number because it wasn't divisible by 5, and that's the sort of problem that Turing machines are known to be able to completely and accurately model.
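(For contrast, the deterministic check is a couple of lines - a rough Python sketch, not anything ChatGPT runs internally:)

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    print(is_prime(355), 355 % 5)   # False 0 -- 355 = 5 * 71, exactly the divisibility check it fumbled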


My opinion is that as long as they don't solve hard problems, they are just probability.

For example, one of those top ten mathematical problems. Humans have solved a few of those; let's challenge AI.


By this metric the vast majority of humans are “just probability” too.

I think it’s not a very good metric.


Hmm, I see it this way: humans are a network of information sharing that could solve these problems. Computers are the same and should also solve hard problems.

And yes, sadly, many humans are just probability in my eyes.


Link?

> He argues that AI does in no way 'undestand' anything. He goes even further and uses Goedels Theorem to show that 'understanding' itself is non-computeable.

So how do brains understand things? Is he asserting that something is happening in the brain that can't be described/modelled computationally?


Penrose's argument is dualism (humans have souls) with a lot of mathematical handwaving around it involving claims there are quantum computers in your brain. I do not see why it should be respected or why you should think he's different from Deepak Chopra.


Yes. Penrose hypothesizes a non-computational (which would be interesting, because it opens a whole can of worms about how to model it) constituent of the universe. A little while ago he, along with Stuart Hameroff, proposed the Orch-OR model, which posited that this is the same constituent as what causes the "collapse" of an unmeasured superposition of states into one concrete, measured state. With that said, I have not seen any of his work on this for some time (I last looked at it around 2016), though a cursory look shows some new papers of his on this and related subjects.


So in other words, humans also have zero real understanding.


Humans add structure to unstructured data; the AI models we have just make flawed replications of structure we feed them - that is the difference. ChatGPT didn't even figure out the structure of basic math, which is the most salient logical structure humans have. Even idiot humans can learn to count and compare quantities without being hardcoded to do so; they learn it from just words and pictures. A language model that failed to learn this when it was trained on the entire internet therefore can't have any understanding the way humans understand things, and feeding it more compute or data won't get it there either.

Example of a question a typical human idiot can solve without ever getting it wrong, but ChatGPT can't reliably: is 7 dollars enough to buy a thing costing 7 dollars? ChatGPT can usually solve this, but it sometimes gets it wrong, and getting that sort of thing wrong ever means that it doesn't understand; it just uses dumb statistics.

Edit: I'm not saying it is impossible to make AI models that understand these things, just that the ones we have today don't.


Dumb statistics you say?

https://i.imgur.com/im9EquK.png

Perhaps it's a magic trick that a language model can do math, but it's quite impressive that it gets very close to the desired answer.


It probably had problems with the part "two one dollar bills". A lot of text that contains "two" and "one" results in a "three", but many texts with "two" and "one" also result in a "two", and the model randomly chooses between those two interpretations. The worst part is that it doesn't even do it consistently for the same piece of text, creating that nonsense.


ChatGPT does not understand, I don't think there is any such thing as machine understanding nor will there ever be. At least not within the current paradigm of computing. My guess is that our kind of understanding requires a biological substrate and / or something quantum. Obviously I'm hand waving here as I'm out of my depth with this subject. My sense though is that there are things about the brain about its evolution that are eluding us, and which are significant.

Here's the thing though. Even on the current trajectory, we will produce programs that sure as hell APPEAR like they understand. This is really what Alan Turing was getting at when he said that the question of whether a machine can think is meaningless. It's like asking whether a submarine can swim. The fact is, dead machine or not, the outputs will increasingly exhibit, what look like, emergent properties of understanding, and even apparent sentience.

This to me is the truly stunning thing about ChatGPT. It might have limitations, but through mere statistics and 'auto complete on steroids' it can do way more than you'd think was possible with such an approach.


True, except for this:

> Obviously I'm hand waving here as I'm out of my depth with this subject. My sense though is that there are things about the brain about its evolution that are eluding us, and which are significant.

Please go and deepen your meditations. First:

Get to the point, where you have stripped your awareness away from your sensory input (pratyahara)

(which everyone may know, it can be done, when you are deeply concentrated, someone may have said something to you, but you didn't "hear" it)

If you are at that point, your entire "experience" will consist of your "inner world" (thoughts and emotions). So next is to "concentrate" your mind (Dharana) by bringing it back to the same object (let's say a rose, for example). This takes A LOT of practice, but over the years your mind will become clear and focused, so that the time when only the object (the rose) is in your mind becomes longer and longer (a few seconds up to minutes).

As you practice further, episodes of an undisturbed, completely one-pointed mind become longer and longer and you enter into Dhyana (meditation). You are completely focused on the object, and the object itself is all that you experience. Imagine what that means: your entire experience consists of only the rose and nothing but the rose!!!

As this goes on and on for longer and longer periods of time, the yogis and mystics say that the boundaries between "you" the subject and "the rose" the object start to dissolve more and more, until they vanish. You are the rose and the rose is. Nothing but the rose that you are!! (They call it samadhi, the dissolution of individual existence and merging of object and subject).

This is just with an object, but when the object is your awareness itself and this process happens, they call it many different names depending on the tradition ("Nirvikalpa Samadhi" in the yogic tradition, "Unio Mystica" in the Christian traditions, "fana" or "baqaa billah" in the Islamic mystical tradition). It is the realization of the unity of consciousness that is, and that only is, and that "I" merges into...


"Quantum" isn't magic - if you're a quantum computer, why can't you factor prime numbers in your head?

You're smarter than ChatGPT for a simple reason - you're not trapped in a computer and can decide what to spend time thinking about and where to go in real life to look up answers for your questions.


Forgive me if this is covered in his argument since I haven’t read it yet, but does he discuss why this same argument wouldn’t apply to biological brains?


He thinks our biological brains get around it by using quantum computation.

Roger Penrose is a historical hero of the physics community and a current embarrassment from time to time. His ideas on things like computation and neurobiology are not worth taking seriously.

It’s unfortunate to see people like the one you’re responding to using his Nobel Laureate credentials to justify his nonsense.


Could still be a problem of scale. A single neuron is quite stupid. But put enough of them together and they become … still stupid, but it’s more difficult to tell.


> they become … still stupid, but it’s more difficult to tell.

I feel like this describes my entire formal education path.


AI will consume half the electricity on earth and it will still be pretty dumb at this point lol. We need to find a better solution than scaling up.


Is that based on anything, or just a gut feeling?


It's based on the power consumption of existing leading edge models such as ChatGPT, Stable Diffusion, etc... These are very impressive models, but come at great cost. If these are just a starting point in scaling up our models, we are going to have a large problem (I mean, we already do, but that is beside the point).


Geoffrey Hinton has recently been talking about how analog and "imperfect" computing with specialized hardware/circuitry may yield much cheaper neural nets, that could easily be as large as human brains, but would only cost a few dollars and would be extremely cheap to run. Not a new idea, but it is a fairly promising outlook, I think.

https://www.zdnet.com/article/we-will-see-a-completely-new-t...


I'm not so sure how inefficient this is. Compare this to how humans have to eat and learn one by one. We don't have a way to take what one human has learned and run it on other instances; instead, the person who knows has to teach it to other individuals. Writing or media can help, but that is very slow compared to loading a model.


I mean, 30 years ago you needed a supercomputer and to burn MWh of power to render anything remotely realistic, and now a phone can do it, using a fraction of a kWh. I'm not sure why we don't expect a similar leap here again?


>“There are times when it seems autonomy is around the corner and the vehicle can go for a day without a human driver intervening … other days reality sets in because the edge cases are endless …”

Self-driving cars (and AGI in general) might not be as close as we thought, but edge cases trip up humans too.


I also don't expect AGI to appear.

The current state of AI is still interesting - we can leave edge cases to people and let computation deal with everything within its functional range.

I might still have to drive through the city because it is too much for it to deal with - but I would be glad if we had highways restricted to autonomous cars, where I could travel between cities reading a book as if I were on a train, but with all the convenience of traveling on my own schedule instead of adjusting to the train schedule.


The difference is that a human can usually tell they are dealing with an edge case and can try to work around it or solve it. "AI" just outputs garbage.


These parts raise issues for me:

> the underlying reason for such mistakes is that “... all the impressive achievements of deep learning amount to just curve fitting.” The key, as Pearl suggests, is to replace “reasoning by association” with “causal reasoning” —the ability to infer causes from observed phenomena.

This seems hypocritical in that reasoning by association is what humans did for tens of thousands of years, and for a large part of the population it is still what they do. Only a subset generally apply true logical reasoning. It's also funny that we're now saying logical circuits can't do logic.

> “we need to stop building computer systems that merely get better and better at detecting statistical patterns [...]—and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space, and causality.

This seems like a mistake: it assumes things we ourselves haven't determined to be fundamental concepts. I would prefer it if the data produced the concepts and the potential proofs or disproofs.
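(A toy illustration of Pearl's association-vs-causation point, with numbers invented for the example: X and Y are strongly correlated through a hidden confounder, yet intervening on X does nothing to Y. Curve fitting alone sees only the first half of this picture.)

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # hidden confounder Z drives both X and Y; X itself has no effect on Y
    z = rng.normal(size=n)
    x = z + 0.1 * rng.normal(size=n)
    y = 2 * z + 0.1 * rng.normal(size=n)
    print("observational corr(X, Y):", np.corrcoef(x, y)[0, 1])   # ~0.99 -- pure association

    # intervention do(X := 3): cut the Z -> X edge and set X by hand
    y_do = 2 * z + 0.1 * rng.normal(size=n)   # Y's mechanism doesn't involve X, so it is unchanged
    print("mean Y after do(X=3):", y_do.mean())                   # ~0, same as before the intervention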


> In the following, I briefly review three a priori arguments that purport to demonstrate that “computation” alone can never realize human-like understanding, and, a fortiori, no computational AI system will ever fully “grasp” human meaning.

So on the one side we have science and engineering, backed by lots and lots of working applications, saying „let’s try and find out“, „let’s see how far we come“.

And on the other side, we have philosophy, with a priori arguments which say this and that must or must not be so.

Clearly I'm on the side of a priori arguments; they have a history of working well. Remember when Kant proved that empirical space must be Euclidean [1]? Or Hegel that there can only be seven planets [2]?

/s

[1] http://jur.byu.edu/?p=9200

[2] https://scienceblogs.com/stoat/2009/01/17/hegel-does-maths


Trying to prove a priori that AI can neither think nor understand is an endlessly alluring fool's errand. But in the end, the arguments all seem to me to assume their conclusion: AI is entirely computational, and human understanding clearly requires something that can't be computed, therefore AI can't achieve human-like understanding or thought. Yet none of the efforts to describe what happens in human cognition that is not computable are remotely convincing. In this particular article, after reciting a litany of such unconvincing arguments, Bishop ends with nothing more profound than "current AI doesn't demonstrate understanding, which is because understanding is non-computable (why, not answered), hence no AI can understand."

Just show me the magic that can't be computed and is going on in the wet lab of our brain, but nowhere else, and I might buy their conclusion.


I thoroughly believe we have overestimated human intelligence to the point that we are perpetually moving the goalposts for AGI.

Read this:

https://3cents.substack.com/p/culture-is-smarter-than-you-ar...

We have memories, processes, and learned responses and aversions; that's all stuff existing AI can handle, if we fill in just a bit more that doesn't exist in the training data yet.


In other words, "probabilistic associations” are a poor substitute for logic and reason. They are not inherently reliable nor consistent.

Bottom line: Real, reliable, "full self driving" automobiles are unlikely to be achieved using current approaches. About the best that can be expected is a marketing gimmick that is sure to cause lots of accidents.


I don't see why self driving cars need true reasoning. If you have high resolution sensors, you can use deterministic algorithms to avoid hitting anything or being hit by anything. Especially if you start putting some kind of easily detected nonlinear reflector sticker on all cars and backpacks.

You might get a few false trips if a trash bag blows in the road and the rear sensor says nothing will hit you if you stop, but who cares? If we really prioritize safety, we'll just accept that, and find other ways to save time.
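(Something like this is what I mean by a deterministic check - a rough Python sketch with made-up reaction-time and deceleration numbers, not a real AV stack:)

    def must_brake(gap_m, speed_mps, reaction_s=0.2, decel_mps2=6.0):
        # deterministic check: is the detected object inside our stopping distance?
        stopping_m = speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)
        return gap_m <= stopping_m

    print(must_brake(gap_m=40.0, speed_mps=30.0))   # True: ~81 m needed at 30 m/s
    print(must_brake(gap_m=90.0, speed_mps=30.0))   # False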

A self driving car would essentially give a person extra hours in the day, and allow people with disabilities to go outside of major cities unassisted.


> If you have high resolution sensors ...

*Something* has to interpret the data from the sensors. This is where logic and reasoning comes into play.

If a bird flies across the road in front of you as you're traveling at high speed on the interstate, slamming on the brakes at that particular moment might not be the most appropriate response.

> A self driving car would essentially give a person extra hours in the day

No one is disputing the utility --- only the difficulty of actually achieving it.


It's not that they necessarily need true reasoning. It would just be extremely beneficial, because that's what every human driver already has built-in (although exercised to varying degrees of success).

What about an axe that falls off the landscaping truck in front of you? Or a mattress? Or a harmless styrofoam cooler? The self-driving car does not know what those things are, but suddenly there is <something> in the air that will likely collide with you. The computer is going to be unable to predict how the mattress or the axe or the styrofoam will fly through the air, it can only decide to take abrupt, evasive action. It is also unaware that the person behind the car has been periodically looking down at their phone instead of watching the road, so an abrupt swerve or stop may still cause an accident. A sensible human driver would realize they are taking on additional risk by following such a truck and/or remaining in front of the distracted driver, and maybe decide to change lanes safely. The car's pretend AI has no idea of any of these things until something falls off and it has to react. We'll praise it when it gets it right--ooooh how ingenious! And the apologists will claim "there's no way it could have made a perfect decision--look how many other times it gets it right!" when it fails. And the realists will conclude "ha, stupid computer, told ya so".


Humans have reasoning, AI has zero aggressive instinct and instant reaction time. It could be a pretty even battle.

They might make really bad decisions in edge cases (which will get less and less common as more cars get smart), but they might make up for it with perfect behavior in ordinary circumstances.

They will never prioritize convenience or speed or avoiding angering other people over safety. They'll do the safe thing even if no human driver could maintain that level of paranoia at all times.


Self driving cars don’t need assistance from the environment.

We already have self driving cars on public streets that work. Arguing about the possibilities of self driving is like arguing about how good chess computers might someday become - it's already been done.

Hitting the right price point, actually mapping the environment at scale, working out reliability and bugs, etc. is real engineering. We are in that stage right between "the prototype works" and "it's ready for production", which always takes longer than people expect, but whether it can be done is no longer in question.


We have "self driving" cars that "work" in a very narrow set of circumstances and road conditions, but to say "it's already been done" feels....silly at best. It hasn't been done, it isn't even remotely close.

"but can it be done is no longer in question."

You are right - it doesn't seem like it can be done at all, not in the way that most people imagine "self driving" - where you hop in your car no matter the weather, tell it where to go and it just takes you there while you sleep, and it works in every country in the world, on every road, like any human can drive. That doesn't seem possible this century, if ever.


There is no reason to assume bad weather is more difficult for them to deal with than sunny conditions, relative to human drivers. Car sensors have larger advantages over human drivers in bad weather, and they've already been demonstrated to work in rain, fog, and snow.

These companies are avoiding bad weather for the same reason human drivers do, they simply have zero reason to take risks.


See, this I completely agree with. But OP said that we already have this and now we're basically facing a price/marketing problem and we have self driving cars otherwise - and we definitely don't.


All we really need is a sentient automobile --- seems easy enough /s.


If you assume we need a sentient automobile to drive then we must already have them because cars are driving without drivers.


I don't know about "sentient", but yes, cars "have been driving without drivers" for decades. Those are limted demos, not actual, useful self-driving cars. Big difference!

Extract from [1]:

The basic navigation problem has been "solved" for decades. Back in Carnegie Mellon University in 1990, I watched a NavLab van rolling down a trail in Schenley Park behind my graduate student office. Since then, every few years we've been hearing announcements about how self-driving cars were just around the corner.

But between demos and actual products lies a huge chasm. Today, not only do self-driving cars still not exist after billions of dollars and decades of investment, but the experts admit that they are at least a decade away.

And no, you don't get to nit-pick what "self-driving car" means. It still means that I can get in, tell it to go to a destination that I would normally drive to, and then I can go to sleep. That's what's known in the trade as "Level 5" autonomy. Anything less than that is not the life-changing promise that attracts all this investment and press.

If your car can operate only on a few roads, then it's no better than one of those people-mover trains at airports. Those are driverless, too.

[1] https://www.robotsinplainenglish.com/e/2021-08-29-humanoid.h...


That article's entire premise falls apart based on a single line you skipped over: "If your car can operate only on a few roads, then it's no better than one of those people-mover trains at airports."

Coverage area doesn't somehow require intelligence, and these systems operate on far more than a few roads. The same hardware and software could operate on every road in the US, at which point it would 'suddenly be intelligent' even though nothing changed.

Look people always say whatever AI systems can do isn’t intelligent as soon as they can do it. Chess used to be considered a bastion of human supremacy over machines, then go, now the last gasps of self driving.

In 20 years people will say self driving cars aren't intelligent because they don't work in North Korea, even as young people slowly stop learning how to drive, the same way humanity largely forgot how to drive a horse and buggy.


> Coverage area doesn't somehow require intelligence

Coverage area is required for practical self-driving solutions, and today's ML based efforts are not close to producing it.

If you measure intelligence in this context by the ability to drive on any roads anywhere, then humans are beating the crap out of "AI".


Waymo cars could drive on any US road, they don’t need detailed mapping efforts. Their limitation is legal not software.

So, I guess by your metric they have human-level intelligence. Or, more likely, people will soon think self driving requires minimal intelligence and we will move on to the next bastion of human supremacy.


> Waymo cars could drive on any US road, they don’t need detailed mapping efforts. Their limitation is legal not software.

Again, you’re mixing up “could drive” with actual ability to reliably take me where I want to go. Yeah, NavLab drove from Pittsburgh to San Diego in 1995. That doesn’t mean we had self-driving cars.

And no, Waymo’s limitation is not just legal. Where are you getting that impression from? Sure, they don’t have permits to do other routes, but there’s a good reason for that: they cannot stand behind their product except within their geofence.

I am not saying intelligence equals self driving. I am saying that even if you define it that way, we are far, far away still.


> reliably take me where I want to go.

That's got nothing to do with whether the technology exists. Supersonic aircraft, for example, exist, but you can't book a supersonic flight from NYC to LA. What you're arguing is that affordable mass-market self driving cars don't exist, which is a strawman that has little to do with whether self driving cars exist.

> Waymo’s limitation is not just legal.

Waymo is currently testing driverless cars in every single state (CA, NV, FL) it's legal for them to test in. Clearly that does suggest the limitation on adding any more states is legal.


What do you mean by "the technology exists"? Sure, computers exist, neural networks and machine learning exist, sensors exist. Even geofenced, unprofitable operations like Waymo's exist. But that doesn't mean self-driving cars exist. And yes, of course it has to be mass market. The self-driving story would be pointless if it weren't; that's the whole reason VCs are betting billions on it.

The problem is the insane amount of training needed. The current approach of running repeated learning iterations to iron out corner cases soon runs up against the long tail of thousands and thousands of tiny, unlikely-but-possible scenarios requiring more and more data. Every time you expand the scope of the self-driving car (different roadside conditions, behaviors of pedestrians and other drivers, new roads, weather, etc.), you discover more conditions for the model to train on.

You can't just say, "my model is intelligent. Now it's just an exercise for the reader to train the model on all possible scenarios, and voila! You have a self-driving system!". You have to actually train the model to produce a self-driving car.

This is a never-ending battle. Let me point out how utterly different it is from how human drivers deal with new conditions.

No, self-driving cars don't exist. Not yet. And it's not for legal reasons.

(One technical solution to the self-driving problem would be to also modify all the roads and surroundings to reduce the number of unlikely scenarios that our models have to learn. But this kind of solution is challenging for non-software reasons, plus it would not really be "intelligent" in terms of AGI. It would be more like a tramway. It wouldn't be sexy.)


Computers clearly existed before the 1970’s even though nobody was buying personal computers back then. We are at the earliest version of self driving where it’s not affordable for individuals and the use cases are very limited, but that’s not nothing.

The self driving car you want to buy doesn't exist, but they are already providing a valuable service. Saying that VCs want to replace all taxi drivers and all long-haul truckers and all … just means that development will continue, but that's true of most products. We hardly say the CPU doesn't exist because Intel and AMD are still spending billions on R&D. All existing self driving car investments are really just peanuts compared to what companies will spend as the market matures and profits roll in.


You have pilot test cars moving along carefully mapped paths in some specific urban locations. You also have test trucks moving along some specific interstates (which I would expect to be a somewhat easier task) but with backup human drivers on board.

Neither of these is capable of being placed into a random environment and actually "driving" from environmental feedback with minimal directions (turn left, go right, stop) --- which most any human driver can easily do.


The difference between pre mapped and random environment disappears once you map all roads in a finite roadway network. It only takes ~70 drivers to cross every public road in the US once a year, that’s hardly a major investment compared to the utility of self driving cars.

Really it’s little different than many modern drivers being dependent on the GPS for long trips.


> The difference between pre mapped and random environment disappears once you map all roads in a finite roadway network.

Once you map all the roads and account for every imaginable situation and scenario --- the unexpected and unanticipated will still occur.


In unexpected situations they avoid obstacles just like people do.

Mapping isn’t about unexpected situations beyond the loss of GPS.


No they don't; that's nonsense. A Tesla FSD just gives up control, deciding it has no idea what to do, often with literally 3 seconds' notice. I've driven it and it's the most dangerous system out there - because it seems so good, right up to the point where it completely relinquishes control, leaving you in the middle of a busy intersection, or in traffic behind a cyclist which it literally doesn't know what to do with.

"Just like people do" is only true if you see 7 year olds driving around, I imagine they would have similar capacity to make decisions on the road as Tesla FSD does. Except I imagine even a 7 year old wouldn't just let go of the wheel completely.


Tesla doesn’t do self driving. Other companies have cars without humans inside on public streets, that’s self driving.

The use case of self driving taxi already exist as in you tell an app where you want to go and an empty car with AI driver shows up. The app doesn’t always summon an AI driver, but when humans and machines are doing the same job don’t expect the humans to win indefinitely.


They aren't though. I'm not aware of any example from any company anywhere. There are some really impressive examples of cruise control that works pretty well most of the time, but I wouldn't say any car "drives without drivers".


There are several examples on public streets without human drivers, random video: https://www.youtube.com/watch?v=z5eaOo-2eJM

Waymo, Cruise, and a few other companies are sending out empty cars to pick people up. They have off site people to remotely take over if the cars get confused, but such handoffs are in no way fast enough to deal with a safety issue and can easily scale to have thousands of cars per backup driver based on disengagement statistics.

This is the same way automation has always occurred. You get the machine to deal with 90%, 99%, then 99.9% etc of cases until people move from doing job X, to being backup, to troubleshooting problems.


Waymo and Cruise are nice demos that work in a limited set of circumstances. They are "self driving" in the same way my oven is "self cleaning".

>>This is the same way automation has always occurred. You get the machine to deal with 90%, 99%, then 99.9% etc of cases until people move from doing job X, to being backup, to troubleshooting problems.

Of course. We are now at maybe 20-30% coverage though. Once we are at 90%, maybe in 30-40 years, then we can start saying things like "we have self driving cars".


By what metric are you suggesting 20-30% coverage?

The point of these trials is to demonstrate safety not simply test limits. Waymo cars work fine in conditions and areas outside of their current tests.


>>Waymo cars work fine in conditions and areas outside of their current tests.

So if you took an existing Waymo car and dropped it in a middle of a previously unmapped city, it would be "fine"?


Any US city. Waymo depends on GPS style maps not high resolution LiDAR maps, so they already have maps of every city on the planet. The software however isn’t setup for non US road signs and laws for every country on the planet.

PS: I am ignoring sanity checks that would need to be removed. They are set up to be geofenced, so it's going to reject any destination outside of those until someone updates a config file. That's just good engineering, not some abstract limitation.


> but can it be done is no longer in question.

The question that remains is whether it *can be done* in an economical manner that is at least as reliable as a human driver. Anything less is a failure in terms of money and lives impacted or lost.


> avoid hitting anything or being hit by anything

Two options for the former are to brake or swerve, while for the latter a third option, accelerating, can also be in play.

Which do you pick when? When you brake or swerve or accelerate, what other environmental factors (e.g. wet road, gravel road, construction workers present), should be taken into account?


thanks. let's refashion our cityscape and our clothing to accommodate self driving cars. don't care about pedestrians. </sarcasm>


The argument as I take it is that AI image analysis could be avoided. We "just" use sensors to figure that it's on the road, it's not a car, so it's a potential obstacle regardless of what it might actually be.

But I've got no idea how practically usable such an approach would be.


We don't need to refashion clothing, radar sees us just fine. But I'm sure people would want radar reflectors just like they want high visibility clothing at night now, for extra assurance in edge cases, to be sure they always got priority over sensor artifact false positives.


Not hitting pedestrians is already solved. Waymo couldn’t do driverless taxis rides if it wasn’t.

I also have tesla’s fsd beta on my car, and it does a marvelous job not hitting people. If anything it’s way too cautious around pedestrians.


So if you're traveling at interstate speed and someone steps out too close to the roadway is it going to swerve into the next lane or slam on the brakes to avoid them?

I recently had a situation where a large truck off to the right in front of me had a blowout and a large chunk of tire flew over into my lane. There were other cars traveling fast on both sides and behind me.

What to do? I chose to maintain my speed and lane and just hit the tire. It caused some damage to the front of my car but in my split second judgment, this was the least bad option.

What would your AI do?


It would have rear sensors. It knows something is behind at a fixed distance, so it must be matching your speed. If that distance is less than a safe reaction time, stopping is bad.

It knows stuff is happening to the sides and swerving is bad.

Unlike a human, it can slow down to hit the object at minimum speed, then speed up again just in time to not get hit even if the driver behind does nothing, at an optimized profile that gives the person behind time to not overreact and cause a pileup.
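(Roughly this kind of rule, sketched in Python with invented reaction-time and deceleration numbers - a toy decision rule, not any actual vendor's logic:)

    def response_to_debris(speed_mps, rear_gap_m, left_clear, right_clear,
                           follower_reaction_s=1.5, decel_mps2=6.0):
        # distance the car behind needs if we brake hard
        follower_needs_m = speed_mps * follower_reaction_s + speed_mps ** 2 / (2 * decel_mps2)
        if left_clear or right_clear:
            return "swerve"
        if rear_gap_m >= follower_needs_m:
            return "brake"
        return "slow to minimum impact speed, then resume"

    print(response_to_debris(speed_mps=30, rear_gap_m=20, left_clear=False, right_clear=False))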

Self driving also has the advantage that people might stay away. Unlike regular tailgating you have machinephobia that might actually make people not tail you.


One of the most common accidents with "self driving" thus far has been the auto getting rear ended.

Yes, this is undoubtedly the human driver's fault for following too closely, but ... is it being precipitated by unexpected AI braking? In any case, the AI has demonstrated very limited ability to avoid these sorts of collisions.


I wonder why there aren't more devices that give you an insurance discount for not following too close. Or that just make an unsilenceable beep if you try.

In any case, some of those cases were probably due to lack of rear sensors, or lack of any sensors at all besides cameras, and lack of any deterministic code able to override the AI.


Tesla has two branches of their self driving software, the fsd branch which is under active development and the highway branch which has not had major changes in 2 years. The highway software is much worse around pedestrians and obstacles than FSD. It’s hard to say how fsd would handle a situation like that on the highway, but based on my experience with fsd off highway it would never choose to hit a car, so very likely it would run over the debris.


Alas, the world is neither reliable nor consistent, so any approach which is not probabilistic is also doomed to failure.


Yes, the advantage that human logic and reasoning offer over probabilistic AI is the ability to evaluate and make nearly instant judgments in unique circumstances without prior training.

Defensive driving is all about making these sorts of judgments --- and choosing the least bad option when there are no good ones.


"ability to evaluate and make nearly instant judgments in unique circumstances without prior training"

It could be argued that there are already ML systems that do what you describe (i.e. make instant effective judgements/decisions about something it hasn't been exposed to during training), at least for some contexts. For more information: https://en.wikipedia.org/wiki/Zero-shot_learning
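(A toy sketch of the classic attribute-based zero-shot setup - the attribute vectors and the stand-in attribute predictor are invented for illustration; real systems learn the attribute scores from classes seen in training:)

    import numpy as np

    # classes described by attribute vectors: [has_stripes, has_mane, is_large]
    attributes = {
        "zebra": np.array([1, 0, 1]),   # never seen during "training"
        "horse": np.array([0, 1, 1]),
        "cat":   np.array([0, 0, 0]),
    }

    def predict_attributes(image_features):
        # stand-in for a model trained only on seen classes to score attributes
        return image_features

    def zero_shot_classify(image_features):
        scores = predict_attributes(image_features)
        return max(attributes, key=lambda c: float(scores @ attributes[c]))

    print(zero_shot_classify(np.array([0.9, 0.1, 0.8])))   # "zebra" -- an unseen class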


You should tell Elon Musk so he can put this into the next sentient Tesla model.


My view is that the Gary Marcus argument is deeply wrong. Here's why.

Current methods are far from perfect, but not for the reasons Marcus believes. Consider that we /should/ be able to derive hypotheses for better system behavior from a given critique, test those proposed system improvements, and then (if the hypothesis is correct) see some improvement. Marcus' crew hasn't produced any useful improvements; this is either because they are not technically competent or their hypotheses are wrong. Given that people have been hammering at these systems for ~10 years now, I doubt the failure to build better logic-based systems (or hybrid systems) is for lack of trying.

Meanwhile, the non-Marcus groups have been improving systems by leaps and bounds, through exactly the kind of iterative, hypothesis-driven improvement I described above. Certainly nothing is perfect, but we can see real progress happening.

(Finally; Tesla FSD is a strawman. It's Waymo you need to argue with, and they're doing great, AFAICT.)


> It's Waymo you need to argue with, and they're doing great, AFAICT.

Let's disperse a few thousand of them at random locations across the country for everyday use and see what happens.

AFAICT, no one has actually done this yet.


It's a bad argument... The limited, expanding footprint is allowing them to get high reliability, exactly what you're saying you want from the systems, and which Tesla is failing to provide. Their systems are still adapting to stochastic road conditions and events, even if they aren't requiring the systems to handle them everywhere all at once. Success /anywhere/ goes a long way towards proving that the fundamental task of driving in the real world is machine learnable.

There's a more interesting conversation to be had here than the one you're offering, in any case. And you haven't addressed the total failure of the Marcus crew to actually produce working solutions...


Could you explain why you think that "reason" is different from "probabilistic associations"? I thought the human mind is not deterministic and is just good at pruning the improbable patterns retrieved from memory (when looking for a solution). But I am new to these things.


Please read the article for an explanation.


I'm really enjoying the discussion being had about these things. I think that while these developments will be able to do things, in the end we won't create AGI, but they will help us better understand what consciousness or cognition actually are.

I personally believe, for reasons I haven't fully fleshed out enough to clearly articulate, that an intelligence cannot be created by emulation, that the computation or process or whatever it is has to occur on the "bare metal" of the universe. This is alluded to in this article, that a machine cannot experience raw sensation. I also think it is possible for some reason that the process has to emerge naturally the way we did, although whether "emerge naturally" includes being created by something that emerged naturally I am not sure of, and at that point it becomes a meaningless distinction.


This article made more sense pre-GPT. Now I’m not so sure. I think if a transformer based architecture was able to live update its weights and was trained on the proper abstraction of data, then it could possibly pass the Turing test.

Edit: actually I take it back. This article would not have made any sense pre-GPT either.


The Turing test is a meaningless benchmark. So what if you can fool people? It's like saying if a magician can fool people then they really have supernatural powers. If you believe the magician really has powers to make stuff materialize out of thin air, then you'd be disappointed they couldn't just materialize you a million bucks, and if you believe some chatbot that passed the Turing test really is intelligent, then you'd be disappointed it couldn't help you make a million bucks in the stock market, or do anything that actually required real intelligence vs the faked appearance of intelligence.

Anyway, there's not even anything today that would come remotely close to passing the Turing test. I guess ChatGPT might fool some people into thinking it's an idiot who was locked in a box with a printout of Wikipedia last year and has no access to the outside world, but not that it's a normal person with access to, and able to interact with, the outside world.


You say: “The Turing test is a meaningless benchmark” and then in the next sentence you say: “there’s not anything today that would come remotely close to passing the Turing test”

So which is it? I think you’re confused because there’s a difference between fooling a random person and passing a Turing test as described by Kurzweil [0]

The magician analogy is kind of like James Randi’s Paranormal test [1]. Again a proper test should determine whether the magic is real or a trick.

[0] https://www.kurzweilai.net/a-wager-on-the-turing-test-the-ru...

[1] https://en.m.wikipedia.org/wiki/One_Million_Dollar_Paranorma...


> So which is it?

Given any well-defined (Kurzweil/whatever) variant of the test, we can obviously test any entity and see if it passes. However, my assertion is that passing the test is "meaningless" in sense of saying anything profound about the entity that passed.

So, there is no either/or consideration here - we can discuss an entity passing the test (benchmark) regardless of what importance we attach to the result.

My point was simply that fooling people into believing something isn't the same as that something actually being true. Additionally, as far as any "intelligence" (and/or "consciousness") test goes, IMO it's even more meaningless if we attempt to test for these before defining precisely what we mean by these terms, since otherwise passing is a subjective measure ("intelligent in the opinion of the test supervisor").

I'm OK with the notion of using a black box test to gauge intelligence (since I regard it as a behavioral quality), but we have to be careful when comparing to a human (as the Turing test does), since we could allow the bar to be arbitrarily low by not demanding the performance of a fully intelligent and unhampered human (can ChatGPT convince someone it's an idiot with OCD stuck in a box looking up answers in a book it doesn't understand ... maybe. Can it convince me it's an averagely intelligent student able to perform experiments and report on the outcome - not even remotely close).


Yeah I think you make good points.

> IMO it's even more meaningless if we attempt to test for these before defining precisely what we mean by these terms, since otherwise passing is a subjective measure.

This is what I think is the beauty of the Turing test, because trying to agree on a definition of consciousness, sentience, understanding, etc. is futile. The Turing test says: who cares about those definitions; instead, look at whether a computer in general conversation is indistinguishable from a human. Because that's really what matters.


Yes, I suppose that's what the Turing test is meant to do - test for intelligence without having to define it, and the implicit definition ("if it acts intelligent, it is intelligent") is reasonable enough, but I'm really not sure what use it has! It was an interesting thought experiment in 1950 at the beginning of the computer age, but now that we're getting close to actually being able to build intelligent systems, what's the point?!

Artificial intelligence will exist on a spectrum from well-below to well-beyond human, and it'd be useful to have some way to quantify that rather than just be able to say "this one's right around human level!". We really need an "ImageNet for AI" competition that can measure intelligence.

I think we can certainly define intelligence (and consciousness too for that matter), even if not everyone is going to agree. Given human nature there'll still be people denying machine intelligence well after they're being managed by machines !

My rough take definition of intelligence is "ability to predict", IOW degree to which an entity can use prior experience to anticipate and plan for the future, with plans unfolding as predicted. Given that the human cortex appears to be literally a prediction machine (learning by prediction failure), this definition seems well matched to the notion of "if it acts human, its intelligent", but it's specific enough that I think one could devise tests requiring different levels of learning (from experience) and prediction to quantify it.


> now that we're getting close to actually being able to build intelligent systems, what's the point ?!

What’s the point of going to the moon? It would be a great human achievement and life will never be the same.

It also doesn’t need to be binary while machines are attempting to pass it. You still would be able to judge how close computers are getting as the questions in a Turing test get more and more nuanced and sophisticated.

That being said, we should have more objective tests like you described. No reason we won't do both.


> What’s the point of going to the moon? It would be a great human achievement and life will never be the same.

Well, building a human-level AI will be a tremendous achievement... But choosing to gauge whether we've achieved that goal by using a poorly defined subjective test conceived in the 1950s would seem to diminish that by indicating how dumb we actually are! ;-)


We simply can't mechanically derive the underlying model strictly from facts. No amount of computation that transforms facts into other facts can reach the deeper layer of the truth. There's simply no logical or mathematical connection between the phenomena on the surface and the models that can describe them concisely.

Modern AI models can successfully fit various phenomena, including even some chaotic ones, but they can never describe the phenomena in truthful and concise ways. The modern AI strategy needs billions of parameters to unreliably predict a chaotic system that can be cleanly simulated with thousands of variables. Reducing the complexity from billions to thousands is a huge leap that can't be derived logically or mathematically from the observed facts alone.
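(To make the contrast concrete: the Lorenz system is fully chaotic, yet a simulator for it needs only three state variables and a few lines of arithmetic - a rough Euler-integration sketch in Python, versus the billions of fitted parameters mentioned above; the three-variable case is just the smallest illustration of the same gap.)

    def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # one Euler step of the Lorenz system: chaotic, yet only three state variables
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return x + dx * dt, y + dy * dt, z + dz * dt

    state = (1.0, 1.0, 1.0)
    for _ in range(10_000):
        state = lorenz_step(*state)
    print(state)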


I mean one could argue that most humans are stupid so yeah the bar might be lower than we think.


The point of the article is that even stupid humans can do lots of tricks that AI can't.


Sure, and even GPT-2 can do tricks that Siri can't. It's not a binary thing. It's still going to take quite some time, but imho AI is slowly getting into a range where it starts being actually useful and it doesn't look like improvements are slowing down.


"AI"... which AI? each AI solution is different.


Great article; however, it focuses on the limitations of using Euclidean space and neglects the application of quaternions to neural networks.



