Q&A: Douglas Hofstadter on why AI is far from intelligent (qz.com)
105 points by pilingual on Oct 21, 2017 | 86 comments



The further we get into the future, the more I think, what if our data-crunching approach to AI is simply the best thing there is?

Hofstadter said in GEB (in 1979) that any program capable of beating the best humans at chess would have to be a general, human-like intelligence... human enough to decline your suggestion to play chess and suggest you talk about poetry instead.

It seems that people today are still engaging in the same sort of fallacy. I keep hearing that deep learning is too hyper-specialized, that it's a tool and not an intelligence, that we're still waiting for a revolution of general intelligence, where intelligences will have sophisticated logic and make their decisions without billions of data points, just like humans do.

My counter-hypothesis is this: Computers, compared to humans, will always be more data-hungry (i.e. worse at making general decisions without huge amounts of data) and more data-capable (i.e. better at making decisions with it). And this isn't really a bad thing. We will still see revolutions allowing more and more general problems to be solved, revolutions allowing more and more general data to be considered, and revolutions giving more and more usable interfaces for inputting data and specifying problems. We'll still see data-driven intelligent assistants and data-driven board members making critical decisions like in everyone's utopian dreams/dystopian nightmares.

But these foretold "general" intelligences, who don't need excessive data, the intelligences who beat the Turing test, the intelligences who "want" things and "feel" things, who attempt to solve the problem of replicating humans... those come unimaginably far in the future. And when they do arrive, they won't really solve any problems that the data-crunchers haven't already solved better.


> My counter-hypothesis is this: Computers, compared to humans, will always be more data-hungry (i.e. worse at making general decisions without huge amounts of data) and more data-capable (i.e. better at making decisions with it).

I don't think this is necessarily true. Currently we're trying to push computers to do things that are relatively easy for humans to do. I would conjecture that these things are easy for us, not because we are amazing learning machines, but because we have millions of years of evolution, and years as infants with caring teachers going for us.

So, I think we actually are data-efficient learning machines; it's just that we have strong priors, and we spend a lot of effort training each other.

If you compare humans to machines on problems which are not intuitive for us, I think you will find machines to be more data efficient than we are.


> I would conjecture that these things are easy for us, not because we are amazing learning machines, but because we have millions of years of evolution, and years as infants with caring teachers going for us.

And so do chimpanzees. Evolution must have provided us with something additional, which would be our rather more developed cognitive abilities to employ abstract reasoning and metaphor.

Those abilities aren't learned, they're innate, and they allow us to think in ways that don't require large amounts of data. An average human being can be shown an Atari game like Pacman, and easily understand what the objective of the game is almost right away.


But a game like Pac-Man is intuitively understood by the average human because it’s fundamentally a “human” game - designed by humans, gives humans dopamine and other chemical hits in a way that we might even perceive as “fun”. Imagine a game that requires lots of computation and no human-friendly interface - a machine would obviously “learn” the rules a lot faster.

The more evidence we uncover, especially the research around AlphaGo Zero (self play with a specific objective in lieu of millions of years to develop keen general intuition), the more it feels like “human-like” intelligence is not some incredible holy grail of general intelligence but an emergent property of any reasonably directed algorithm.

Another random thought, but Neanderthals come to mind as a human-like intelligence that proved simply not as cunning or vicious as modern human intellect and was outcompeted and stamped out. Imagine if we were to discover the AI equivalent of Neanderthal intelligence - would we be quick to dismiss it as subpar and “not general enough” even though it emerged through the same algorithm (natural selection)?


> But a game like Pac-Man is intuitively understood by the average human because it’s fundamentally a “human” game - designed by humans, gives humans dopamine and other chemical hits in a way that we might even perceive as “fun”. Imagine a game that requires lots of computation and no human-friendly interface - a machine would obviously “learn” the rules a lot faster.

But if we're talking about creating AGI and the concerns that go with that (full automation, self-directed goals in the real world, etc), then the question is whether DL is enough on its own to get there.

As such, comparing AlphaGo to humans on a variety of tasks like Atari games or Go is kind of the point. And Google's goal is to turn it into a product, which means doing tasks humans currently do.


Well, this is silly. Show a toddler that game and they'll have no idea what the purpose is or even why it's a game.

Humans draw on massive reservoirs of knowledge to comprehend anything.


That's true, but the question is whether you can train ML on a massive reservoir of knowledge, with the result being a similar understanding of the world that humans possess.

There is a long term attempt to give machines a common sense understanding of the world by specifying several million rules. That's the Cyc project.
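The rules-based approach the Cyc project takes can be sketched, at toy scale, as forward chaining over a fact base until no new facts appear. (The predicate and fact names below are made up for the example; Cyc's actual representation language, CycL, is enormously richer.)

```python
# Tiny forward-chaining sketch: apply rules until a fixed point is reached.
facts = {("isa", "Fido", "Dog"), ("isa", "Dog", "Mammal")}
rules = [
    # If X isa Y and Y isa Z, conclude X isa Z (transitivity).
    lambda fs: {("isa", a, c)
                for (p1, a, b) in fs if p1 == "isa"
                for (p2, b2, c) in fs if p2 == "isa" and b2 == b},
]

changed = True
while changed:
    changed = False
    for rule in rules:
        new = rule(facts) - facts
        if new:
            facts |= new
            changed = True

print(("isa", "Fido", "Mammal") in facts)  # True: inferred, not stated
```

The scaling problem is exactly what the comment alludes to: real common sense seems to need millions of such rules, plus machinery for exceptions and context.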


> And so do chimpanzees. Evolution must have provided us with something additional, which would be our rather more developed cognitive abilities to employ abstract reasoning and metaphor.

If you have an evolutionary learning algorithm, you don't expect every branch to be equally capable; it doesn't mean we didn't get here along the same path.

I basically agree that humans have some "innate" ability, but this innate ability exists as the result of an evolutionary process.

> Those abilities aren't learned, they're innate, and they allow us to think in ways that don't require large amounts of data. An average human being can be shown an Atari game like Pacman, and easily understand what the objective of the game is almost right away.

Pointing to the single experience with a single Atari game sort of misses the point, which is all the time we spent learning up until that point and all the evolution that went on before it.

It also sort of misses the point that Atari games are explicitly designed to be easily understandable by humans; they're not something that just appeared and we happened to be good at.


> I would conjecture that these things are easy for us, not because we are amazing learning machines, but because we have millions of years of evolution, and years as infants with caring teachers going for us.

But if the argument is that AlphaGo is the right approach to creating an AGI, then we should at some point expect it to learn how to recognize the goal of various tasks without a huge amount of training.

Maybe evolution provided us with something additional that is lacking in the current generation of DL. And there are AI researchers who think that you need ontologies for the machines to understand the world, and that it's not reasonable to expect a machine to be able to learn everything from scratch, because the world is too complex for that.

It's not reasonable to expect AlphaGo to replay evolution in order to gain the ability to do abstract reasoning.


> But if the argument is that AlphaGo is the right approach to creating an AGI, then we should at some point expect it to learn how to recognize the goal of various tasks without a huge amount of training.

I never said anything about AlphaGo or AGI. I said that humans are not as good at generalizing from few examples as people would like to believe.


You speak as if a human being shown Pacman for the first time and figuring out the controls doesn't require a ton of data. There's so much data that goes into that. You have tens of billions of photons entering your eye during that playtime. This is sensory information that your brain decodes and reintegrates on the fly, then relays to the required parts of your body.

Could you have experienced all of that without seeing Pacman? Probably not. Once you have seen and played Pacman enough, you can probably imagine an entire instance in your mind. That's because we are good at storing and retrieving certain kinds of data. The data was required in the first place, though.


> And so do chimpanzees. Evolution must have provided us with something additional, which would be our rather more developed cognitive abilities to employ abstract reasoning and metaphor.

Note that the current capabilities of AI systems are nowhere near the general capabilities of a chimpanzee. It seems reasonable to assume that the hard task is to come up with the prior of the mammalian brain. The "easy" part is to discover the parameter space on top of that prior, be it chimpanzee or human.


It seems reasonable to me to believe that there are multiple ways to be "intelligent", and that different kinds of intelligences will excel at different tasks. When we think of "general intelligence" I think we default to thinking about "human intelligence" simply because it's the best example of any kind of general intelligence that we have access to. But I don't see any reason, in principle, to think that "machine intelligence", perhaps in the "data-cruncher" style, can't ultimately exceed human style intelligence.

I mean, we already know machines can be "smarter" than humans in narrow domains (Chess, Go, Checkers, calculating square roots, calculating integrals, etc.) so if we find a way to combine that with some kind of "generality", Bob's yer uncle.


I strongly agree. We may never see a machine that can fool humans into thinking that machine is also human, but then again, why do we actually want that? Just for novelty?

We look at a dolphin and we can say "that is an independently intelligent animal" that we can't really do much with. We look at a dog and say "that is a useful, trainable, intelligent animal" that we as humans couldn't have survived without during parts of the history of our species. A dolphin is far smarter, but it doesn't matter because there's only one intelligent animal we couldn't have lived without, and it's actually pretty dumb compared to a dolphin.

The question is, do we want an AI that is smart by itself, or do we want an AI that is smart and useful to us? Those don't have to be the same thing, as evidenced by the dolphin vs the dog.

Humans are really, really good at producing very efficient machines with strong intellect; we do it by accident all the time. We don't need more humans; humans are flawed and have a lot of undesirable traits to go along with the intelligence. Strong, general AI will be its own species, with its own unique way of thinking and its own quirks. Trying to replicate humans exactly is futile and worthless.

We want a machine that can learn and do useful stuff. We don't need a human made of silicon for that. We need a mechanical dog.


> Strong, general AI will be its own species, with its own unique way of thinking and its own quirks. Trying to replicate humans exactly is futile and worthless.

Well said.

> We don't need a human made of silicon for that. We need a mechanical dog.

That's a very eloquent way of putting it. I may have to steal this quote from you sometime!


Likely some fear that a sufficiently intelligent AI without human-like values will render human life obsolete -- and then extinct.

Humans seem pretty good at enacting the latter all on their lonesome


They are smart just as water is smart in finding the path of least resistance.


> My counter-hypothesis is this: Computers, compared to humans, will always be more data-hungry (i.e. worse at making general decisions without huge amounts of data) and more data-capable (i.e. better at making decisions with it).

Counterexample:

AlphaGo Zero is currently the best Go player in the world. (It beat AlphaGo, and AlphaGo beat humans.) AlphaGo Zero learned to play Go entirely by playing itself; it was given no training set at all.
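The self-play idea can be illustrated with a toy sketch: tabular Q-learning on Nim (take 1-3 stones from a pile; whoever takes the last stone wins), with both "players" sharing one value table, loosely analogous to AlphaGo Zero's single network playing both sides. Everything here is invented for illustration; the real system uses MCTS plus a deep network.

```python
# Toy self-play: two copies of the same policy improve one shared Q-table.
import random

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

Q = {}  # (pile, move) -> estimated value for the player about to move

def choose(pile, eps):
    moves = legal_moves(pile)
    if random.random() < eps:
        return random.choice(moves)           # explore
    return max(moves, key=lambda m: Q.get((pile, m), 0.0))  # exploit

def self_play_episode(start=10, eps=0.2, lr=0.1):
    pile, history = start, []
    while pile > 0:
        m = choose(pile, eps)
        history.append((pile, m))
        pile -= m
    # The player who took the last stone wins (+1); rewards alternate
    # sign walking back through the game, since the players take turns.
    reward = 1.0
    for state, move in reversed(history):
        old = Q.get((state, move), 0.0)
        Q[(state, move)] = old + lr * (reward - old)
        reward = -reward

random.seed(0)
for _ in range(20000):
    self_play_episode()

# With optimal play the winner always leaves the opponent a multiple of 4,
# so from a pile of 10 the winning move is to take 2.
print(choose(10, eps=0.0))
```

Even this toy needs thousands of self-played games to pin down a game with eleven states, which is the grandparent comment's point about data-hunger in miniature.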


It's still more data-hungry, though. AlphaGo can play many millions of rounds against itself in the time it takes a human to play one round.

Humans are many orders of magnitude more efficient at improving a skill; computers just appear to do it better sometimes because they can move faster.


>Humans are many magnitudes more efficient at improving a skill

In the case of go, most definitely not. AlphaGo Zero achieved world-beating performance in just three days, running on four TPUs. Becoming a world class player cost somewhere in the region of 150kWh.

A fairly sedentary human requires about 2.5kWh of food per day. Achieving mastery of go takes at least 10 years of full-time study, so we're looking at a bare minimum of ~9000kWh. That excludes the energy inputs to make that food (often orders of magnitude higher) and the multitude of other energy inputs required to keep a human being healthy and sane.
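The arithmetic behind these figures, using the rough numbers quoted above (they are estimates from the thread, not measurements):

```python
# Back-of-envelope energy comparison from the figures in this thread.
alphago_zero_kwh = 150              # ~3 days of self-play on 4 TPUs (claimed)
human_kwh_per_day = 2.5             # food energy for a sedentary adult
years_of_study = 10                 # minimum full-time study for go mastery

human_kwh = human_kwh_per_day * 365 * years_of_study
print(round(human_kwh))                        # ~9125 kWh
print(round(human_kwh / alphago_zero_kwh))     # human uses ~61x more energy
```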


> capable of making decisions without billions of data points, just like humans do.

Doesn't this ignore transfer learning? Humans have orders of magnitude more than billions of data points over their lifetimes.


Computers get billions of data points for a single topic/task. Humans get exposed to ginormous amounts of data that's less focused. The amount of sensory information we take in is incredible. That information isn't focused on a single task, but we can take what we learn in one field and apply it to another. If a human is trained to sort out foul fruit, they already know how to generally differentiate between the objects they are looking at, smelling, etc. They already know about apples; they already know that fruit can spoil. It's just a matter of combining existing knowledge and skills. On a more complicated task, like learning a new language, the advantages are similar. I'd bet that a sufficiently large neural network that already knows how to perform many tasks would be faster at learning new ones as well.


Additionally, we have absurd amounts of information baked into our genes that give us big head starts on network architectures, motion, vision, etc.


Yeah, compressed using zip there's about a 50MB difference between a bacterium and a human... at least Kurzweil said something like that ;D


The genes have no information about the world, just information on how to form a nervous system.


> The further we get into the future, the more I think, what if our data-crunching approach to AI is simply the best thing there is?

Technically speaking, as long as we equate "AI" with "machine learning", it's a downright trivial statement. The important thing in ML isn't just the presence of "data-crunching", it's the quantity of training data relative to the size and complexity of the hypothesis class.
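That ratio can be made precise. For a finite hypothesis class H in the standard realizable PAC setting, to achieve error at most ε with probability at least 1 − δ, it suffices for a consistent learner to see

```latex
m \;\ge\; \frac{1}{\varepsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)
```

training examples: the richer the hypothesis class, the more data you need, which is one formal sense in which large models are data-hungry.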

As long as "intelligence" requires dealing with ambiguous sensorimotor data, it will involve some statistical component, and will therefore involve a "data-crunching approach" somewhere in it.

>But these foretold "general" intelligences, who don't need excessive data,

Hierarchical generative models already do phenomenally well at one-shot and small-sample learning against basically all previous ML methods. Yes, this includes deep learning.
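One-shot learning in its most stripped-down form: one labelled example ("prototype") per class, and new inputs take the label of the nearest stored example. This is nothing like a hierarchical generative model; it's only meant to make "learning from one example" concrete, and the feature vectors are made up.

```python
# Nearest-prototype one-shot classifier over toy 2-D feature vectors.
import math

prototypes = {              # one made-up example per class
    "apple":  (0.9, 0.1),
    "banana": (0.1, 0.9),
}

def classify(x):
    # Label of the prototype closest in Euclidean distance.
    return min(prototypes, key=lambda c: math.dist(x, prototypes[c]))

print(classify((0.8, 0.2)))  # apple
print(classify((0.2, 0.8)))  # banana
```

The hard part that generative models address is learning feature representations under which "nearest" is meaningful; given those, one example per class can go a long way.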


+1 I agree with you. Even though friends like Ben Goertzel believe in and work hard to create AGI, I think that in the short and medium term, the path to much better AI will be in assistive systems. I manage a small machine learning team at a large bank and I am an all-in believer that systems built with deep learning, probabilistic graph models, <fill in any master algorithm you want here>, etc. will fundamentally change the way knowledge workers work and transform society. This belief makes me excited to go into work every morning.

I love Douglas Hofstadter’s work and I think I own all of his books. I am not very academic in my outlook. I love technology for what it lets me build. Reading Hofstadter is like looking into the mind of someone with a very different world view from my own.


You're still extremely optimistic about the 'intelligence' of state of art data driven approaches, even if they aren't general intelligence. I'm not sure where that optimism is coming from.

The chess example... "They were wrong about AI never beating grand masters, they're going to be wrong about X". Well, if you make the board bigger, there won't be an AI system that can beat a human.

Play Dota 2, but introduce a random variable that can't be known beforehand by anyone, like things in the real world, and the AI will always be beatable.

Great for specific domains, obviously, but your optimism about doing more advanced stuff doesn't seem so well grounded.


> Play Dota 2, but introduce a random variable that can't be known beforehand by anyone, like things in the real world, and the AI will always be beatable.

I wonder about a board game that randomizes the rules in simple ways. A human could understand the rule changes and adapt. To what extent can software be trained to do that?


People have already put work into finding chess-like games, or variants of chess, that humans can play well (especially if they have some familiarity with chess) but that computers will struggle with.

Arimaa -- where computers did eventually reach the point of defeating humans -- is an example of this. Arimaa tried to attack both "opening books" and move-tree searches, by allowing the initial position to vary every game and by having each turn consist of up to four individual moves by potentially multiple pieces. The official challenges also required that computer Arimaa systems run on commodity hardware, and did not allow for modifying the computer "player" in between games of a challenge. It got through twelve yearly human-versus-computer challenges before the humans finally lost.


Sounds like you're talking about general game playing[0], where a computer is programmed to take, as input, the rules to the game, and then compete. Looks like competitions are against other computers, but this isn't an area I'd expect humans to dominate in, long-term.

[0]https://en.wikipedia.org/wiki/General_game_playing


He did write that about chess, but clearly labeled as a personal hunch.

Your own hunch about what’s unimaginably far off differs from mine, fwiw — I’m very unsure but would not bet against human-flexible AI in our lifetime.


It's as if when considering a supersonic jet airplane, we were to ask, "When will it be able to power itself by catching fuel in flight?"

After all, some birds can do that, so we know it must be possible.


The Turing test is actually super easy to beat given enough resources. Just have enough data and imitate responses.

Chat bots can already convince humans that they are human. Not reliably -- some people can still tell the difference or ask tricky questions.

There are only so many ways to tell if someone you're talking to is a bot. Bots can already spit out meaningful sentences.

It's the person who is testing the bot that's the limitation. They need to have VERY good intuition about when it's a person being silly or a bot that can't quite find the right response. If you read the Wikipedia article on the Turing test, you can see that computers have already been able to pass it.


Something often missed when talking about GAI is that we humans are built by our genes in order to facilitate their (not our) reproduction. Genes don't care about anything, not even reproduction (except in the sense that water cares about flowing down hill), but one of the tricks they use to advance their "agenda" is to build brains that do care about things.

A lot of what we call "intelligence" is actually a side-effect of caring about things more than it is evidence of thinking. In particular, it's a side-effect of caring about the kinds of things that advance our reproductive fitness. For example: Hofstadter laments that, although computers can now trounce the best humans in chess, they don't look for "elegant moves" or decline to play and have tea instead. Hofstadter cares about these things because chess is more than an abstract mathematical construct. It is, like all sporting events, a social construct, one that distills the essence of competition where the participants care about who wins and who loses. And all of this derives from evolution where genes that build brains that care about winning competitions outperform genes that don't.

One of the things holding back computers from being GAIs is that we have not yet figured out how to make them care about anything, and so they cannot possibly understand the visceral difference between winning and losing, or the emotional angst of being up against a deadline or deciding to take a risk. All of these are part and parcel of everything we humans do. The ability to do math is just an interesting and useful side-effect, but it was never the main event.

Personally, I think it's a good thing that we don't know how to make computers care about things because once we figure that out they really will become potentially dangerous. Our desires are hard-wired into us by our genes. Once computers have desires of their own, their interests may align with ours, but that's not a given. And if they don't, that could be a really big problem.


Unsupervised and on-line learning have to be figured out first (perhaps by finally abandoning backprop methods, perhaps by finding some hybrid approaches); after that, I look forward to seeing true reinforcement learning and creature-like intelligences take shape.

I've always been fascinated by how little is needed in terms of feedback loops in order for something to appear alive, I could say almost soul-like. The image in my mind is always that of -- and I'd like to give a better example -- the heat-seeking homing missile. I'm surprised Hofstadter, who is all about self-referential loopiness, does not appreciate this, because I strongly agree with him in a belief that self-reference is the essence of much mystery. Then again, I never heard him passionately entertain chaos theory either, or psychedelics for that matter. I think Hofstadter has a very particular take on things (he calls himself a picky person). He can afford to be obstinate because he is no doubt a free spirit and a brilliant guy, but it does make him appear dismissive sometimes.


> I've always been fascinated by how little is needed in terms of feedback loops in order for something to appear alive

You're not alone.

https://en.wikipedia.org/wiki/Braitenberg_vehicle


True, but it's good to keep in mind that genes don't have an "agenda" either. They just are the way they are because they survived.

It's like having a neighbor who just won $100 million in the lottery and thinking, wow, what is his secret? How could he do that? I don't think I could ever do that. How did he do it? The fact is he didn't have much to do with it; it was all pure chance.


Right. That's the same point I was trying to make when I said that genes "care" about reproducing in the same way that water "cares" about flowing downhill. Neither genes nor water really care about anything, they just do what they do because physics.


What frightens me is the scenario of human thought being overwhelmed and left in the dust. Not being aided or abetted by computers, but being completely overwhelmed, and we are to computers as cockroaches or fleas are to us. That would be scary.

This implies to me that his definition of intelligence is really centered on what we as humans do. And that is interesting, but less interesting than a more general notion of intelligence.


Not necessarily. It just means that an ML-based approach may be better than whatever it is we humans do at enough tasks to make the humans irrelevant.

Think calligraphy Vs. the printing press.


I'm sure some people thought the printing press was the devil incarnate as it took the human element out of books. Prior to that every letter, every page, was produced with some measure of human effort. Holding that work, reading those letters, was something special.

Now there's no direct physical connection between what we write and the book someone holds, yet we don't run around screaming that automated printing has destroyed writing.

With intelligence this is likely to be the same thing: AI can amplify regular intelligence just as the printing press can amplify the ability of one writer to reach more people.


>yet we don't run around screaming that automated printing has destroyed writing.

I disagree in part. There have been many complaints about bots in modern digital book distribution producing tons of junk and flooding markets like amazon.com with it. It doesn't take very many processes like this to produce a corpus of information larger than everything mankind has created. So in some sense of the word, bots have 'destroyed' writing by volume. Now, most of it will never be seen in a place where it interrupts shoppers or readers, as it will get filtered out by the distributors. Still, it makes a mess of unfiltered, distributed systems of digital books.


Where bots can flood the market with junk, bots can flag and remove junk books, or at least push them down in terms of visibility. The problem is Amazon doesn't seem the least bit motivated to even try here.


> It just means that ML based approach may be better than whatever it is we humans do in enough tasks to make the humans irrelevant.

Given that more and more people are employed in ML-related jobs, what makes you think advances in ML will render humans irrelevant? Seems to me like the opposite is happening right now.


They are unlikely to be the same people, and it is likely that fewer people will be required.


I think it’s really important to distinguish

1. pure problem-solving (chess, go) from

2. competency of behavior in the world (vision, navigating a maze), from

3. universal cognitive-emotional life (getting frustrated trying to accomplish a goal and trying a different approach; having competing drives that form the basis for goal-setting, like hunger, boredom, and self-preservation), from

4. more arbitrary-seeming, human-like cognitive emotions, like humor and beauty.

You can have 1 and 2 without anything directly resembling human intelligence, and 3 with animal-like intelligence, or you could make something completely alien. An appreciation of humor and beauty would be a great way to demonstrate you’ve made something like a virtual “human mind.”

There’s no reason computers couldn’t be better at all of these things, including writing better jokes. There’s a funny blog post somewhere about the idea of a computer writing superhuman-level funny jokes; I wish I remembered where!


Also, what does it mean for intelligent computers to “leave us in the dust,” when we are “like fleas to them”?

1. They are so much more intelligent than us as to render us insignificant — because we all know that intelligence is what makes people significant and worthy.

2. They are better humans than humans, not just more intelligent in a problem-solvy way but more moral and compassionate as well; truly “better” (there has been sci-fi about this rather fanciful but easily written scenario)

3. We build “wilier” machines/software that, given the power to do so, can out-strategize us and win in battle, or outcompete us in the economy. This is quite possible. Obviously we should limit the power (physical, legal, etc) of this software. There are real legal and economic issues here — not to mention futuristic disaster movie plots that could become real — but not moral ones.

4. We build artificial life that’s way smarter than us, and it decides it doesn’t care about us because we’re such dumb simpletons. The same way we don’t care about bugs, presumably because they’re dumb, and not savvy wisecrackers like the main character in Bee Movie, voiced by Jerry Seinfeld. But why would we expect intelligent software to decide to care about us, anyway? To judge us and find we have merit? If someone told me they made a machine with a concept of what other beings are worth and it found me unworthy, based on reasons such as its being waaay more intelligent, I would not be surprised or impressed, or more than mildly insulted.

Edit: I guess people are worried about some combo of 2/3/4, where computers whose judgment we agree with basically say humans are lesser beings — like bugs — and we think about it and are like yeah, you’re right. Or computers are so human we are compelled to give them the rights of humans. I’m just not sure that actually makes sense, or at least it will take decades with many intermediate stages to talk about first.


I think one possible alternative is that we never build anything that approaches general intelligence, but we build a lot of mostly autonomous systems that are better than human beings in a lot of domains, and which may behave in ways that their creators never intended.

Once we allow AIs to manage warfare and the economy with minimal human input, they are going to alter the face of the planet in ways that we can’t predict, and probably faster than we can adjust to them.

It can happen in small steps, with algorithmic trading and battlefield drones gradually being given more and more decision-making power and resources to control.

They don’t even have to have any sort of intention or independent will—only autonomy and power.


Agreed, this seems like a very realistic possibility.

I can see it now, researchers claiming that computers are actually now 1% better at warfare than humans, on the standard corpus of example scenarios. And then someone decides it would be remiss to not let the computers make the decisions.


> There’s a funny blog post somewhere about the idea of a computer writing superhuman-level funny jokes; I wish I remembered where!

Maybe this one?

http://idlewords.com/talks/superintelligence.htm


3 and 4 are pretty hard to beat though, because humans have decades and decades of input and feedback loops with other humans to develop them.

Who’s gonna play hide and seek and read bedtime stories every night to the computers?


“Embodiment” is most important for 2, and to fool humans into thinking you are a real human (as in the Turing test), able to talk about the human experience and relate to other humans.

Animals (including humans) are born with a version of 3 (and 4). Emotions are simple. Thought-level nuance is learned over time, but data learned by one android (or whatever) can be shared. Also, children are able to learn from comparatively few data points, and computers are beginning to do this too (see “one-shot” learning, learning from one occurrence, like the way kids sometimes pick things up).

If you make a child android that looks and acts like a child, people will read to it and play with it; there are movies about this! :)


It would take a lot for a computer to write a beautiful novel, say. But not to appreciate and create beauty in other ways. And I really think computers could write some excellent jokes along the way to becoming more human. Pretty much anything they learn to do can be used for comedy. Physical movement with “character” gone wrong leads to physical comedy. Common-sense knowledge leads to common-sense comedy (like the joke about the elephant leaving tracks in the butter dish, which kids of a certain age find funny but adults seem to have grown out of).


I thought the beginning of the interview showed promise, but toward the end the conversation unmoors from the topic at hand.

I really do wonder how many in the field of AI take as philosophically unsophisticated an idea as intelligent computers seriously, seeing it as an obvious and uncontroversial possibility. What Hofstadter is dancing around is that intelligence requires semantics and semantics is exactly what computers, by definition, lack. Knowledge is semantic in nature and thus computers cannot, strictly speaking, know anything. They cannot reason because, again, reason requires semantics. Now, we are able to model and then formalize some domains of reality under some aspect well enough such that computers can behave in very useful ways, but no matter how sophisticated such programming gets, it cannot somehow magically cross over into semantics. The notion is patently absurd. So to say that AI is, literally, far from being intelligent is like saying the color red is far from being a strawberry. No amount of red will ever amount to a strawberry.

P.S. I was reminded of this post about Sphex wasps and intelligence, with mention of Dennett and Hofstadter: http://edwardfeser.blogspot.com/2013/12/da-ya-think-im-sphex...


> Knowledge is semantic in nature and thus computers cannot, strictly speaking, know anything.

And a group of biological neurons can?


They can by virtue of being embodied. It's the body interacting with an environment that provides semantics. For humans, a lot of that is cultural. Computers only have knowledge to the extent that we deem those patterns of 0s and 1s to be information.


Not clear to me what you mean.

I can agree that knowledge ultimately is grounded in sense experience ("the body"). Thus, the knowledge of "Horse" (or "Horseness") must ultimately be traced to sense experience involving particular horses.

I can agree that the "bits" in a computer (strictly speaking, not bits but physical states under that interpretation) are only meaningful by convention and not intrinsically. Thus, we say in one context that states labeled 11100 represent the integer 28 and a sequence of five Boolean values in another context, depending on the conventions of each context.
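The parent's point about conventional meaning can be shown directly: the same five states read as an integer under the positional-binary convention and as five Boolean flags under another. Nothing in the bits themselves picks one reading.

```python
# Same five physical states, two interpretive conventions.
bits = "11100"
as_integer = int(bits, 2)               # positional binary convention
as_booleans = [b == "1" for b in bits]  # list-of-flags convention
print(as_integer)    # -> 28
print(as_booleans)   # -> [True, True, True, False, False]
```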

I can agree that conventional meanings are often cultural like commonly understood meanings of the symbolic which, again, lack that particular meaning intrinsically. E.g., both nodding and the utterance "yes" are generally meant to express agreement (except in countries like Bulgaria).

However, I do not understand what is being said when invoking embodiment and culture in this context, specifically.


Eh, your idea of embodiment is iffy. Computers can interact with an environment. Eventually computers will be far more able than humans to interact with environments. Humans are equipped with a very advanced set of input devices we call senses, but beyond that point we have to use technology to translate what we cannot perceive into information we can. You cannot see ultraviolet; you have to translate it into visible light, which is then translated into neural signals. A computer system can directly or indirectly connect with just about any sensor we can think of and directly manipulate that signal.


I don't know what goatlover meant, and I don't presume you are necessarily arguing this point, but adding sensors to computers does not overcome the fundamental nature of computers I described above.


That's a really interesting point. I never thought about knowledge like that.

I've always thought that the difference between knowledge (or knowing something) and being informed is when you can apply information gained in one realm to another.


So you're saying we need a robot?


Can they?


"intelligence requires semantics and semantics is exactly what computers, by definition, lack"

This sort of assertion is experimentally untestable, not rigorously derivable from any axioms that people agree on, has no predictive power, and amounts to vague opinion passed off as fact. About a hundred years ago you might as well have been arguing about how "philosophically unsophisticated an idea like flying machines" was because flying is, strictly speaking, a behavior of flying animals.


Please explain how to tell if some being "has semantics". Suppose we encounter some alien made out of crystal and metal, but arranged in organic looking structures. We work out how to communicate, rather roughly (like Google Translate). They won't talk about their ancestors so we don't know if they were built or evolved.

How do we find out if they have semantics?


Computers aren't a black box mystery. We understand how they work. We understand that they are implementations of Turing machines (or similar computational models), and Turing machines are by definition a formalism and thus syntactic in nature. Computers are therefore no more than syntactic machines (I would further claim that computers aren't strictly speaking syntactic machines either, but physical artifacts -- and we can use different physical properties and states toward this end -- that simulate a formalized, syntactical system). Syntax as such is semantically blind, so to speak. Human beings, on the other hand, do "possess" meanings. We have concepts. Pure syntax can never amount to a single concept any more than purely speaking about climbing Mt. Everest itself amounts to actually climbing Mt. Everest. Any semantics we associate with computer programs is entirely an act of interpretation on the human end.


You have not established that being a syntactic machine rules out semantics. The use of phrases like 'no more than' and 'purely' do not make the case; instead, their validity here is dependent on the premise.


> semantics is exactly what computers, by definition, lack

What do you think the definition of a computer is?


A two-minute-old gazelle can outperform any current AI in terms of navigating the real world. Data processing isn't the whole thing.


A two-minute-old gazelle is born with a lot of pretrained instincts, instincts built by a genetic optimization algorithm that's been running for millions of years. Spatial reasoning and "run from predators" are not things it needs to be taught. The data processing is already baked in.


The point is that being "baked in" is something machines lack which biological intelligences possess. Animal brains don't learn those "baked in" abilities. We may need to "bake in" similar abilities into the AIs, since recreating evolution is computationally prohibitive.


> The point is that being "baked in" is something machines lack which biological intelligences possess.

I’d say it’s the exact opposite: A freshly born CPU is as capable as it will ever be. The problem is that it never learns anything new, as opposed to biological organisms, which continuously adapt to their environment.


Once the CPU runs a chip factory, it'll be able to adapt in a sense.


> since recreating evolution is computationally prohibitive.

You don't see what you did there. The only way to 'bake in' these things is to create them via evolution in the first place. Also, these instincts are not 'baked in'; they continue to evolve with each generation.


I am curious if we will someday build AI systems that are stratified with layers alternating between "statistical" like neural networks and "symbolic" like the older AI approaches. For example once your image recognition NN is tagging things, you could feed those tags into a more symbolic reasoning system.

In theory Deep Learning should make this unnecessary (I guess?), but in practice these layers would be very useful. First they would make the system more interpretable. When your AI is a black box you don't learn anything. It would be nicer if we could gain heuristics or principles that we didn't know before, and apply them ourselves. Second with layers we wouldn't have to trust the AI so blindly. We could see if its thinking makes sense. In particular many AI applications can create feedback loops where it can perform well at first, but we'd like to keep monitoring it to ensure it stays that way.
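The layering idea above can be sketched in a few lines: a "statistical" perception stage emits tags with confidences (stubbed out here; imagine an image-recognition network), and a "symbolic" stage applies inspectable if-then rules to those tags. The tag names, thresholds, and rules below are invented for illustration.

```python
def perceive(image):
    # Stand-in for a neural-network perception layer emitting
    # tag -> confidence. A real system would run a trained model here.
    return {"person": 0.94, "bicycle": 0.88, "road": 0.97}

RULES = [
    # (required tags, min confidence, conclusion)
    ({"person", "bicycle"}, 0.8, "cyclist_present"),
    ({"road"}, 0.9, "outdoor_scene"),
]

def reason(tags):
    # Symbolic layer: every conclusion traces back to a named rule,
    # so the system's "thinking" can be inspected and monitored.
    conclusions = []
    for required, threshold, conclusion in RULES:
        if all(tags.get(t, 0.0) >= threshold for t in required):
            conclusions.append(conclusion)
    return conclusions

print(reason(perceive(None)))  # -> ['cyclist_present', 'outdoor_scene']
```

The point of the middle interface is exactly the interpretability argued for above: you can read the tags and the fired rules, rather than staring into one end-to-end black box.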

If neural networks resemble our unconscious, then symbolic systems resemble our reason, like "Thinking, Fast and Slow". Some people say we must be able to do "better" than NNs since babies don't need to see millions of dogs to recognize them. But the pixel inputs of a NN do seem a lot like the rod-and-cone inputs of our vision, and one huge advantage of human perception is seeing in the flow of time. Our observations aren't millions of discrete snapshots, but millions of connected moments. I wonder how image recognition training would improve if we trained by showing short videos instead of still images. It seems that would make it a lot easier to recognize boundaries and possible variations.

In humans, our unconscious is not something we can easily "see" and reflect on and correct, but our reason is reflective. We can articulate principles and opinions and judgments. We can re-use them and continually challenge them, question them, modify them, even reject them. There is a cynical idea that reason is nothing but post facto rationalization, and maybe it often is, but we needn't live that way. We can live an examined life if we choose. But I can't imagine a machine ever achieving that reflexivity without a symbolic system.

And isn't that awareness close to what we mean by "consciousness"? Somewhere, I think in Jacques Maritain, I came across the definition of consciousness as simply awareness of the self, in particular that the self exists. Being able to recognize ideas as "mine" and reflect on them also seems like part of what consciousness is all about.


>Some people say we must be able to do "better" than NNs since babies don't need to see millions of dogs to recognize them.

I always hate that example.

A dog is not a thing. A dog is a collection of things. By the time a baby sees a collection of things that is a dog, it has already had a huge amount of experience in identifying individual parts, such as faces and eyes. Of course, I wish we knew how to program AI systems that way.

> I came across the definition of consciousness as simply awareness of the self, in particular that the self exists.

If you are intelligent you can cause complex enough changes to the environment around you that you can get caught in 'advanced' feedback loops. If you don't want to waste massive amounts of energy, or even die, it is beneficial to be able to separate "this was caused by me and my actions" from "this was done by someone/something else".


This is not at all how computers play chess:

> A computer can beat a human at chess not by searching for the satisfaction of making an elegant move, but sifting through millions of previously played games to see which move is more likely to lead to victory.


A lot depends on the era you're talking about, for computer chess. It used to be that forcing a chess computer out of its "book", so that it had to start relying on move-tree searches much earlier in the game when that tree is still monstrous in size, was a good or at least OK tactic. As the computers got better, and as more effort was put into anti-anti-computer tactics, it stopped being useful, but to say that this is "not at all" how computers have played chess is incorrect.
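The book-then-search pattern described above can be sketched abstractly: consult a memorized opening "book" while the position is known, and fall back to a minimax search of the move tree once forced out of it. The game tree here is a toy nested structure, not real chess, and the book entries are made up.

```python
BOOK = {"start": "e4"}  # position -> memorized best move (hypothetical)

def minimax(node, maximizing=True):
    """node is either a terminal score or {move: subtree}.
    Returns (best score, best move)."""
    if isinstance(node, (int, float)):
        return node, None
    scores = {m: minimax(sub, not maximizing)[0] for m, sub in node.items()}
    pick = max if maximizing else min
    move = pick(scores, key=scores.get)
    return scores[move], move

def choose_move(position, tree):
    if position in BOOK:        # cheap lookup while still "in book"
        return BOOK[position]
    return minimax(tree)[1]     # otherwise search the move tree

tree = {"a": {"c": 1, "d": -2}, "b": {"e": 0, "f": 3}}
print(choose_move("start", tree))  # -> e4 (book move)
print(choose_move("novel", tree))  # -> b  (found by search)
```

Forcing the engine out of `BOOK` early, as the parent describes, means `minimax` must cope with a much larger tree, which is why that was once a viable anti-computer tactic.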


And it's also not what they said.


Not historically for chess, but it's a reasonable analogy for how AlphaGo plays Go.


> What frightens me is the scenario of human thought being overwhelmed and left in the dust. Not being aided or abetted by computers, but being completely overwhelmed, and we are to computers as cockroaches or fleas are to us. That would be scary.

I suspect our expectation of AGI is unreasonable and we will sooner or later have to reconcile it with a different, less anthropomorphic expression of intelligence and consciousness. It might not be required for AI to be (anthropomorphically) intelligent or conscious for it to 'take over'. In fact, it might be a huge advantage over mankind that it is not.


Suppose a simulated intelligence was indistinguishable from real intelligence.

In what sense would it matter that it was a simulation and not real?


We're just a simulation of intelligence that's done with clunky, inelegant biological materials. The only thing that separates humans from slugs is scale and complexity of neural interconnections, all the basic parts are present in a slug.

Intelligence itself is an emergent phenomenon, so who's to say what's "simulated" and what's "real"? If it behaves in an intelligent manner, which can be tested and probed exhaustively, then it is intelligent.


That’s the standard zombie argument. Its weakness is that the premise is not reasonable. If the premise is true, the argument is compelling. But there is no strong evidence that the premise is true or possible.

Please note that I haven’t taken a stance on this here myself, so there’s no need to argue with me about the truth or falsity of the premise. I’m just explaining the problem with the zombie argument.


"The self is a kind of fiction, for hosts and humans alike. It's a story we tell ourselves... there is no threshold that makes us greater than the sum of our parts, no inflection point at which we become fully alive."

-- Dr. Ford, Westworld (video contains spoilers) https://www.youtube.com/watch?v=S94ETUiMZwQ


How do you know you have some intelligence? You can only explain a small part of how it works. Ditto: you can only replicate a few intelligent activities.

