Artificial Intelligence Is Already Weirdly Inhuman (nautil.us)
86 points by rl3 on Aug 6, 2015 | 77 comments



We saw this in chess engines about 20 years back (when they started getting seriously strong): they play chess incredibly well, but it looks very little like what a human player does.

It was fascinating to watch the changes in chess theory through that period, as the machines validated or invalidated concepts and ideas that GMs had posited but been unable to prove one way or the other.

John Speelman wrote an intro to The Mammoth Book of Chess (that thing was awesome, I think I wore out three copies) in which he said (paraphrasing horribly) that chess is a realm of pure thought and that computers play it the way an alien might.

The best engines on commodity hardware now require offering pawn odds for a top human to even have a chance, and their strength is still accelerating[1] (Stockfish now has a distributed testing framework to check for regressions, which is incredibly cool[2]).

[1] http://www.chess.com/article/view/how-rybka-and-i-tried-to-b... (Rybka was/is one of the strongest engines and in concert with a human it still couldn't beat Stockfish).

[2] https://stockfishchess.org/get-involved/


This is particularly evident in endgames. Forced mates have been discovered that require hundreds of apparently random and pointless moves, until suddenly the king is cornered and mated. There's a nice discussion of some of these on Krabbé's chess blog: http://timkr.home.xs4all.nl/chess/perfect.htm


Wow... do the games change when it's computer vs. human as opposed to computer vs. computer?


Once you are inside the horizon of an endgame tablebase (currently seven pieces on the board or fewer), the outcome is known; from that point the computer plays the game the way a hypothetical god would.
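For the curious, here is roughly what that lookup looks like in code. A minimal sketch assuming the python-chess library and a local directory of downloaded Syzygy tablebase files; the directory path and the position are placeholders for illustration:

    # Sketch: probing a Syzygy endgame tablebase with python-chess.
    # Assumes the python-chess package and downloaded Syzygy files; "./syzygy" is a placeholder path.
    import chess
    import chess.syzygy

    # A three-piece position (white king and rook vs. black king), chosen only for illustration.
    board = chess.Board("8/8/8/4k3/8/8/4K3/4R3 w - - 0 1")

    with chess.syzygy.open_tablebase("./syzygy") as tablebase:
        # Win/draw/loss from the side to move's perspective: 2 = win, 0 = draw, -2 = loss.
        wdl = tablebase.probe_wdl(board)
        # Distance to zeroing move: how far until a capture/pawn move that preserves the result.
        dtz = tablebase.probe_dtz(board)
        print("WDL:", wdl, "DTZ:", dtz)

Once every position can be looked up like this, "playing perfectly" is just picking the move whose resulting position has the best stored result; there is no search or judgement left, which is part of why the play can look so alien.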


Plus, computer chess, with its alien moves, opened up human thinking on chess, which ties in with the article's point about neural nets causing humans to see our own world differently.


Indeed (was just editing to clarify on that :) ).

It led to something of a renaissance in chess theory.

What I'd love to see (and have been tempted to take a crack at) is an engine built purely to play like a strong human, with variable strength (things like not always taking the strongest move, but not playing like a god and then blundering either; not always avoiding a mate in 27 plies, and such).

Shredder does it pretty well but it still feels like a machine, ChessMaster with The King got the closest I've seen.


That would be interesting. It would have to simulate human distraction and emotional ebbs and flows. I imagine it taking into account feelings like impatience, boredom, and overeagerness.


Keep in mind that chess engines are heuristic searchers (essentially they enumerate all their options, score them with a hand-tuned evaluation, and prune what they can; see the sketch below). That these don't respond the way a human mind does is quite understandable.

Deep-learning networks should already be a lot closer to how a human mind would react.
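To make the contrast concrete, here is a toy sketch of the search loop a classical engine runs: depth-limited minimax with alpha-beta pruning. The `game` object and its methods are hypothetical placeholders, not any real engine's API:

    # Toy alpha-beta search: the core loop of a classical chess engine, stripped to the bone.
    # `game` is a hypothetical object with legal_moves(), make(), undo(), evaluate(), is_over().
    def alphabeta(game, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
        if depth == 0 or game.is_over():
            return game.evaluate()              # heuristic score of the position
        if maximizing:
            best = float("-inf")
            for move in game.legal_moves():
                game.make(move)
                best = max(best, alphabeta(game, depth - 1, alpha, beta, False))
                game.undo()
                alpha = max(alpha, best)
                if alpha >= beta:               # prune: the opponent would never allow this line
                    break
            return best
        else:
            best = float("inf")
            for move in game.legal_moves():
                game.make(move)
                best = min(best, alphabeta(game, depth - 1, alpha, beta, True))
                game.undo()
                beta = min(beta, best)
                if alpha >= beta:
                    break
            return best

Real engines add move ordering, transposition tables, quiescence search and a carefully tuned evaluation on top, but the skeleton is brute enumeration plus a numeric heuristic, which is why the resulting moves owe nothing to human pattern recognition.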


I disagree. Just because the architecture is more reminiscent of a brain doesn't mean that it "thinks" even remotely like a brain, let alone a human brain.


I disagree with you. The computer brain might still think the same way we do, but what do we do with our "thinking" process? We evaluate our thoughts based on emotions, and there is currently no way to have "real" emotions on a computer. Humans will always make their decisions based on emotions rather than logic; we think far too fuzzily for any program to (yet) do it like that.


I don't find this surprising. In fact, I would find the opposite surprising. It would be very surprising if AI was human-like. Human intelligence is designed to power a human body, with fingers of a certain length and eyes of a certain stereo field of view. I don't think we will ever develop a human like AI until we give it a human like body to live in.


> I don't think we will ever develop a human like AI until we give it a human like body to live in.

This is not a new thought; there's quite a bit that's been studied and written on the importance of embodiment to cognition, perception, etc. Cognitive Scientists and a few other disciplines have dug into this a fair bit.

And I agree about the article. This quote is just ridiculous: "That suggests humanity shouldn’t assume our machines think as we do." It presupposes that anyone with actual knowledge of AI, neural nets, etc. believes that we're hot on the path of human-style artificial cognitive capabilities with our current tools. At best, we may have started building small components that in a future form, might eventually be assembled, Society of Mind[1] style, into something that eventually begins to resemble general AI.

[1] https://en.wikipedia.org/wiki/Society_of_Mind


Exactly! Our brain may be our center of computation, but the models it develops of the outside world, and hence of itself, are based on the signals it receives from the senses.


I definitely agree. People treat human-like intelligence as if it were the only possible kind, as if human-ness were a natural consequence of intelligence.


> I don't think we will ever develop a human like AI until we give it a human like body to live in.

Perhaps, but also it doesn't automatically follow that if they look like us they must think like us, whatever Hollywood says.


Just clicked on comments to post exactly the same point, although I'm not sure the body would be sufficient; I think a lot of other factors must be in place too.


I do have my doubts that plain neural networks will ever be able to achieve conceptual understanding.

I have an affinity for classical, rational AI in that you can correct it and it will take that correction and instantly apply it to its knowledge base. It can also explain why it came to a conclusion. (Though obviously this style has its very real limitations.)

NNs and other current statistical/connectionist approaches don't really have this capability, which I see as a necessary part of human-level intelligence. They are trained to "get a feel for" particular inputs to indicate particular outputs. If you were to personify the NN analyzing the dog/"ostrich" picture, and ask it why it thinks it's an ostrich, it could only reply with "I dunno, it feels like it's an ostrich". The only way to correct it is to retrain it with careful checks so that it behaviorally "gains a sense" of what looks like a dog vs ostrich more reliably.

Many language/word based features operate similarly. Watson, Siri, Google search, all sort of map strengths of relations with your input words & patterns to some associated results that it just sort of statistically was reinforced with. These can yield information that is of real use for a human to further evaluate and act on, but I wouldn't trust such a system to directly act on those associations; they're wrong too often.

But there is no possibility of actual conceptual discourse with NNs as we know them, to correct them, to inform them, or to ask them to explain their results. This is a fundamental barrier to achieving human-style intelligence.

This is not to say that there aren't NN-based possibilities that might work, like tacking together multiple interacting NNs, each which have the possibility to specialize on concepts and influence other NNs. But too much has focused on single NNs and direct input-to-output "monkey-see, monkey-do" training. It's a manifestation of the Chinese room problem.


At the risk of nitpicking, I'm going to specifically address your statement of "It feels like an ostrich".

To my mind, that's EXACTLY how humans do it. A baby is not instructed, "it has 2 legs, feathers, and is tall, therefore it is an ostrich", there's quite a bit more "LOOK OSTRICH /present input of ostrich/" prior to the point of being able to generate any justification.

As sister posts have pointed out, I tend to believe that the ability to justify the classification is another learned skillset that comes later developmentally; the fact that higher primates (and babies) can perform classification without (as far as I know) the ability to do the higher-order reasoning lends itself to this theory.


I find how my two-year old classifies birds to be amusing:

1) He called all birds ducks for the longest time, presumably based on being exposed to a bath duck, and this weird Youtube video about ducks swimming in the water.

2) Then he called some of them bird, based on being exposed to a bath toy bird.

3) Then he split them up a bit more... parrot, chicken, rooster, etc. He started seeing them in books and real life.

4) At no point could he explain the difference. Still can't.

5) And he definitely thinks the only difference between a rooster and a chicken is colorful feathers.

6) And he's not quite sure about ducks and geese, but usually gets it right.


I guess he still needs to develop the advanced bullshitting (sorry, I meant "rationalisation") skills required to say, "Well, I think it's a duck, because it looks like my internal picture of what ducks are supposed to look like, dear sir!"


I think one of the major differences is that humans use multiple independent algorithms when it comes to deciding if there's an ostrich in the picture. And most of these are far more sophisticated than the NNs. So in practice the way we solve classification problems is fundamentally different, even though it may seem/feel somewhat similar.

Most prominently there's object recognition (telling things apart from the background, recognizing parts of objects, etc.), size/distance determination (which uses over a dozen separate algorithms: https://en.wikipedia.org/wiki/Depth_perception). We can also reconstruct a 3d model from a 2d image and compare that to our idea of what ostriches look like in 3 dimensions.

So even if it is done unconsciously, we recognize an ostrich because (among other things) it's the size of an ostrich. The NN has no idea what size is, let alone the size of an ostrich relative to other things.

I think the next big breakthrough in machine learning is in ensembles containing NNs trained for completely different things, but acting in a complementary fashion.
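As a sketch of that ensemble idea (with made-up stand-in models rather than real networks), differently trained classifiers over the same label set can already be combined by averaging their class probabilities:

    import numpy as np

    # Hypothetical ensemble: each "model" returns class probabilities over the same labels,
    # but is imagined to be trained on a different signal (texture, shape, size/depth, ...).
    def ensemble_predict(models, x, weights=None):
        probs = np.stack([m(x) for m in models])            # shape: (n_models, n_classes)
        weights = np.ones(len(models)) if weights is None else np.asarray(weights, dtype=float)
        combined = (weights[:, None] * probs).sum(axis=0) / weights.sum()
        return combined.argmax(), combined

    # Toy stand-ins for trained networks; a real version would wrap actual classifiers.
    texture_net = lambda x: np.array([0.7, 0.2, 0.1])
    shape_net = lambda x: np.array([0.2, 0.6, 0.2])
    size_net = lambda x: np.array([0.1, 0.8, 0.1])

    label, dist = ensemble_predict([texture_net, shape_net, size_net], x=None)
    print(label, dist)

The hard part, of course, is getting the component models to capture genuinely complementary notions like size and 3D structure rather than three variations on the same texture statistics.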


I agree that the initial human classifier is the "general feel". My distinguishing point is in the retraining. We can form conceptual constructions of what we see to resolve training correction, and once that is bootstrapped, carry the formed concepts and the meta-cognitive capability of concept creation over into new contexts.

If we've only seen automobiles, then all of a sudden are introduced to trains, 18-wheelers, motorcycles, etc, we are challenged to conceptually organize and discover the differences between these classifications. There's still a fair amount of "feel" in weird overlap cases, but the conceptual distinction is created after the fact. There is no mechanism for that in NNs.


Using the word "feels" (or "thinks") is too much of a personification of a neural network. You cannot forget that under the hood, it is cranking through a bunch of math functions. You always get the same output from the same input. That function is not going to change on its own, and it is not going to suddenly gain new abilities to express itself in words. It is not a human, so there is no point in assuming it will behave or grow like a human.


> I do have my doubts that plain neural networks will ever be able to achieve conceptual understanding.

> I have an affinity for classical, rational AI in that you can correct it and it will take that correction and instantly apply it to its knowledge base.

I don't claim this to be any amazing insight, but I strongly suspect that the human brain works on some combination of both probabilistic methods and something like the symbolic logic of GOFAI. How many "systems" there are, and how they interact, is an open question, but I really do think there's "some there, there".

Which reminds me, I need to get back to reading "Thinking, Fast and Slow", which I started a while back and got distracted away from.


I'm a firm believer that it's a mistake to ask machines to do X in the hope that they'll do Y. We can't train an ANN to classify images then ask it to explain its reasoning; that's not the task we trained it for!

How might we train an ANN to explain its reasoning? One approach would be to learn programs: have the ANN write programs which classify images. Then we have a classifier (run the program) and an explanation of how it works (read the program). We don't have an explanation of how the ANN chose the program, but again, we didn't train it to tell us. In principle, we could keep adding meta-layers; in practice, the search space and evaluation time would explode :)


> that's not the task we trained it for!

Yes, exactly. We're not training consciousness networks. That's not even a goal of all this research. We're training dumb high-speed classifiers.

> How might we train an ANN to explain its reasoning?

There are networks that exist today that'll happily explain an entire scene to you (the whole "this picture contains a pizza sitting on an oven in a kitchen and there is a dog in the corner"), which is closer, but still just pairwise training. The "thought" process isn't recurrent or Turing complete, so it can't make progress on its own.


> There are networks that exist today that'll happily explain an entire scene to you (the whole "this picture contains a pizza sitting on an oven in a kitchen and there is a dog in the corner")

I was thinking more along the lines of: "This picture contains pizza. This is because there is an ellipse which appears to be covered with cheese and pepperoni. I say that because the dominant colour is yellow and there are elliptical patches of a more reddish colour. ..." and so on, down to the pixel level.

We can kind of get this by running the networks backwards, but it's not really deductive reasoning; for example, we can't correct the network by saying "that yellow ellipse covered with red ellipses is actually a pile of vomit", and watch that knowledge propagate through the weights. Instead, we have to re-train with more examples of pizzas and vomit.
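For readers unfamiliar with "running the network backwards": it usually means doing gradient ascent on the input rather than the weights, i.e. asking "what input would maximize the pizza score?". Here is a toy NumPy sketch on a made-up linear model; a real version would backpropagate through a deep net:

    import numpy as np

    rng = np.random.default_rng(0)

    # A made-up "network": a single linear layer plus softmax over three classes.
    W = rng.normal(size=(3, 64))

    def class_prob(x, target):
        logits = W @ x
        p = np.exp(logits - logits.max())
        return (p / p.sum())[target]

    # Gradient ascent on the *input* to maximize the score of class 0 ("pizza", say).
    x = rng.normal(size=64) * 0.01
    target, lr, eps = 0, 0.1, 1e-4
    for _ in range(200):
        grad = np.zeros_like(x)
        for i in range(len(x)):                 # numerical gradient; fine for a toy example
            dx = np.zeros_like(x); dx[i] = eps
            grad[i] = (class_prob(x + dx, target) - class_prob(x - dx, target)) / (2 * eps)
        x += lr * grad

    print("confidence in the target class:", class_prob(x, target))

The optimized input shows what the model "wants to see" for that class, but there is still no mechanism for pushing a verbal correction like "that's vomit, not pizza" back into the weights; you have to retrain on new labelled examples.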


> There are networks that exist today that'll happily explain an entire scene to you (the whole "this picture contains a pizza sitting on an oven in a kitchen and there is a dog in the corner"), which is closer, but still just pairwise training. The "thought" process isn't recurrent or Turing complete, so it can't make progress on its own.

Aren't those generated by RNNs, which are Turing complete?


Even humans can’t explain their own reasoning. We can only rationalize our intuitive decisions after the fact, but it’s pure hokum: all decisions are made intuitively using no rational thought whatsoever; only afterward can we make up rationalizations for them.

It follows that it should be perfectly possible for an A.I. to do this, too.

(Edit: I now realize that this rationalization process was theorized by Douglas Adams in Dirk Gently’s Holistic Detective Agency, as described by Wikipedia:

The story also mocks the corporate world with [the software] package called Reason, which inverts the idea of a decision-making program. Instead of proceeding from ideas and logic to a decision, it takes a decision that has already been made and creates a reasoned justification for it.

­— https://en.wikipedia.org/wiki/Dirk_Gently%27s_Holistic_Detec... )


Let's say you want as much money as you can get. This is probably not rational. Now let's say you have a choice of being given $5 or $10, which do you choose? Why was that not a rational choice? Because it's predicated on an irrational desire?


It's trivial to brute-force an answer to your question: if I take $5 I'll get $5, if I take $10 I'll get $10. I prefer $10 to $5, so I take the $10.

Intelligence is needed when there are too many possibilities to brute-force. We need to make guesses about which actions may give better results; we have to spot patterns, generalisations and simplifications to allow re-use of previous experience; we need to identify the crucial aspects, in order to narrow down the possibilities; we need to allocate resources efficiently, and know when to give up; and so on.

These are the things we would like machines to do well, but we have no satisfactory theory to explain, measure or compare such things. Introspecting our own thought patterns is not an effective way to reverse-engineer these processes in humans, as the parent says.


No, that is not what I meant – the choice itself is not made rationally, even though a theoretical rational actor might have made the same choice.

The most we humans can do is make an intuitive choice, allow the post-hoc rationalization to occur, and then check its rationality by analyzing its logic. If the logic is bad, we go back and try another intuitive choice.


By your own logic, you just made an intuitive decision about whether I have a point, and now you're just rationalizing it. Not much point in trying to have a rational conversation.


> By your own logic, you just made an intuitive decision about whether I have a point, and now you're just rationalizing it.

Yes, this is true. At least, it is what I believe to be happening.

> Not much point in trying to have a rational conversation.

What? Why not? We seem to be able to converse quite easily.


Is your belief falsifiable? Wouldn't the experimenter just be rationalizing their intuitive decision about the meaning of their observations regardless?


The biggest problem I see with training a network to program is that it would require a lot of example programs to serve as training data. Which means we need a lot of really good, bug-free programs that already accomplish similar tasks. And in my opinion, we do not have enough reliable seed programs to make anything useful, at least currently.

It is very easy to take a picture of a car and tell a computer what it is; it's not so easy to write a program and explain to a computer what it does.


IANANR (I Am Not A Neuro-Researcher), but it seems to me the "only" step you're requiring is that of self-consciousness. We can explain why we think it's an ostrich because we can point our neural networks at themselves, and say, "what sort of thinking process does this look like? Oh, it looks like visual pattern recognition of certain features like a long hairless neck, bird features, etc, so that's why I think it's an ostrich."

We are capable of conceptual thinking because we can think about our own thoughts, and further think about our own thoughts about our own thoughts, etc, in an endless cascade of meta-thinking (which is in fact what thinking is). The ability to experience a gestalt, and the ability to think about it, are two major steps. We seem to have achieved the first, with current NNs. That's pretty amazing already!

I think the ability to be pointed at itself is a necessary step for those NNs to develop self-consciousness and, thereby, conceptual understanding. We're not there yet, for sure, but how much further are we? A few years? A decade? A few decades? The latter is no doubt a very pessimistic estimate (conservative AI researchers agree on this).

Human-style intelligence is certainly not here yet. But it is almost certainly just around the corner.


To clarify, I'm not saying human-like AI is impossible or unreasonably far off.

The author seems to be implying that what current NNs show is a viable form of intelligence that just happens to "think" in a different way that we need to figure out.

I'm saying this particular manifestation of trained behavior is not viable for expanding into human-level intelligence, as there's no hope of meta-cognition (I don't use the term "self-consciousness" purely because it's overloaded with unreasonable fields). NNs would have to fundamentally change or be replaced in order to gain such. When we do achieve machine conceptual cognition & meta-cognition, it will be a system that will not "think" like feed-forward trained NNs, even if it shares some history or composition with that architecture.


>>> Neural net ... "I dunno, it feels like it's an ostrich"

I don't think this applies to deep learning, because with deep learning the lower layers will say "this has a long neck", "this has legs", etc., and that will help the higher layers to understand "it's an ostrich".


> I do have my doubts that plain neural networks will ever be able to achieve conceptual understanding.

What do you think you're made of?


There are many specialized areas of the human brain.

It's not a single, unidirectional, backpropagated, simple-coefficient, neural net. It is not a recurrent neural network, nor is it a single instance of any particular NN-derived or NN-ish network.


(My take on this problem, feel free to correct any assumptions I make that are wrong)

I think this is because neural nets are not actually "intelligent" in terms of the commonly accepted definition of intelligence. They are dumb. They try, usually with simple probabilistic techniques and input element-wise transforms, to mimic some function that produces approximations for a given set of inputs and outputs. The training data and the test data will always have underlying differences which will create gaps in the data generating distribution, assuming a reasonably large-sized set of data. This is contrary to the assumptions that (every?) machine learning algorithm makes, usually referred to as the i.i.d. assumptions. The test and training data are assumed to be independent and identically distributed. Since practically no real world data sets are perfectly identically distributed, there will always be gaps between the learned model and the real model.

Beyond that, the set of training data can never fully encompass the entire domain of possible inputs/outputs, else what need would we have for a machine to predict new ones? The oddities that researchers find are actually problems in the relationship of their training data to their testing data, because the i.i.d. assumptions are never actually true. We can only try to get as close as possible.

A full solution to this problem is nigh impossible, so we try to reduce the gap as much as possible.
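A tiny, made-up illustration of that train/test gap using scikit-learn: fit a classifier on one distribution, then evaluate it on progressively shifted versions and watch the accuracy fall. Nothing here is a real benchmark, just the i.i.d. point in miniature:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n, shift=0.0):
        # Two Gaussian classes; `shift` moves the whole test distribution away from training.
        x0 = rng.normal(loc=-1 + shift, scale=1.0, size=(n, 2))
        x1 = rng.normal(loc=+1 + shift, scale=1.0, size=(n, 2))
        return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

    X_train, y_train = make_data(1000, shift=0.0)
    clf = LogisticRegression().fit(X_train, y_train)

    for shift in [0.0, 1.0, 2.0]:
        X_test, y_test = make_data(1000, shift=shift)
        print(f"shift={shift}: accuracy={clf.score(X_test, y_test):.2f}")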


That's pretty good. I wouldn't say things are impossible, but there're a lot of conversations happening about the problems with our current approach. Leon Bottou gave a great talk at ICML that covers stuff like this. https://www.google.com/url?sa=t&source=web&rct=j&url=http://...


Thanks for that slidedeck! It was really informative and enlightening to have some of these vague, hard-to-explain ideas that I've been wondering about cemented by Bottou.


> They try, usually with simple probabilistic techniques and input element-wise transforms, to mimic some function that produces approximations for a given set of inputs and outputs

It's my understanding that this is basically how the brain works. My personal theory is that enough of these "dumb" inputs, wired correctly together, leads to emergent behavior that is consciousness.


I imagine the brain more like hundreds (thousands, millions, I'm not sure the magnitude) of different specialized neural networks. So you have a specific neural network for picking out colors and that feeds (along with a bunch of other inputs) into the neural network for picking out object boundaries and that feeds into the neural network for object recognition and so on. In comparison, most neural networks that are used in computer vision are generally trying to do the entire process in a single network (although they also use feedforward, so the difference is more complex than just composing the various layers). I think there is something to the idea that we need the neural network to have points where it can spit out a partial piece of the eventual goal model, things like object boundaries before recognizing the object, recognizing eyes before the entire face, etc. The key is being able to get those logical partial model results at various layers of the network.


I'm outside my depth here, but isn't that what hierarchical learning is? (I think it's popularly called "deep learning", which I assume means the neural nets have depth?)

From what I've read, we aren't going more than a few dozens of levels deep. But it also sounds like this technique is very successful in image recognition.

Am I incorrect in my understanding?


I think the "static is a cheetah" example just highlights that the neural net is not identifying the best features with which to identify a cheetah. Or alternatively, if, during training, the neural net was only fed pictures of nature with or without cheetahs in them, then what it's really telling you is not the probability that a picture contains a cheetah, but rather the conditional probability that a picture contains a cheetah given that it is a picture of nature. In other words, that picture of static is most likely well outside the domain of the training set, so classifying it involves a large extrapolation, with all the attendant amplification of errors.

Perhaps what we need is a classifier that can tell when a picture is significantly outside of its training experience and say "I've never seen anything like that before" instead of giving an arbitrary classification.
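The crudest version of that, assuming any model that outputs class probabilities, is simply to abstain when the top probability is low. It's only a sketch of the idea: a softmax can still be absurdly confident on adversarial or out-of-distribution inputs (which is exactly the problem here), so real novelty detection needs more than a threshold:

    import numpy as np

    def classify_or_abstain(probs, labels, threshold=0.9):
        # Return a label only when the (hypothetical) model is confident; otherwise abstain.
        probs = np.asarray(probs, dtype=float)
        i = probs.argmax()
        if probs[i] < threshold:
            return "I've never seen anything like that before"
        return labels[i]

    labels = ["cheetah", "ostrich", "school bus"]
    print(classify_or_abstain([0.97, 0.02, 0.01], labels))   # confident: returns "cheetah"
    print(classify_or_abstain([0.40, 0.35, 0.25], labels))   # unsure: abstains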


There is one important point that seems to be lost every time an article uses adversarial examples to argue that neural nets are deficient: not even a human is perfect at understanding an image in a blink. When we see an image, we decode a stream of impulses. Saccades follow, examining in detail a number of areas of the image at different orientations, so our interpretation of the image comes from numerous samples, not a single image.

IMHO, it's actually quite amazing that such primitive software neural networks can understand an image in a blink, in one 'sample'. Conversely, it's not inhuman to see pictures in clouds, Rorschach tests, or even static.


Another reason that neural nets do crazy things: they have little knowledge. We see a dog by looking at its shape, etc.; the neural net by looking at features.

A solution to this, I think, would be to split up the task into parts like "finding eyes", "is this fur", etc., freeze those networks, and use them as inputs. That would prevent most of the errors given in these examples.

A lot of animals know what eyes look like regardless of the species. I'm not saying nature always has the best solution, but it probably had a good reason to evolve a specialized, built-in "head/eyes" detector.


> We see a dog by looking at its shape, etc.; the neural net by looking at features.

Have you ever seen a kid learn language? At first, every four legged animal is either a cat or a dog. Every car is a truck (or every truck is a car). The specific categories get learned over time, but they aren't in any sense "natural."


You remember that bit in Hitchhiker's Guide where Arthur Dent is explaining to the ship's onboard Nutri-matic thing (I think it was) all about what a cup of tea is. After getting back many different versions of a drink that was nearly but not quite unlike tea, Arthur explained about tea leaves and climates and tea trade and tea ceremonies and on and on. I'm sure my recollection and retelling do not do it justice.

My point is, what we've got now is crude visual-feature pattern matching and recognition. What you're asking for is conceptualisation. Wiring up concepts to sense-data is our trick. We are reverse engineering ourselves bit by bit. You make the next step sound so reasonable, and maybe perhaps it is.


How vs why

I started learning genetic algorithms (random permutations selected and promoted to pick better combinations of Hamiltonian paths, obtaining near-optimal solutions to NP-complete problems quickly) and neural networks (mainly backpropagation, using different topologies) circa 1995 (college). It was like magic: you easily learn how to use those techniques to solve problems. The why, however, was not so evident. In the case of genetic algorithms you can understand that a space of solutions with many solutions of similar cost lets you pick a "good one" easily (a toy sketch of such a GA follows below); in the case of neural networks, the idea I got was of a "magic box" that was supposed to do some interpolation/extrapolation, effectively generating a convex surface that gives a "solution"/"location" for a given input. It was as if you had alien technology: you could make it work for simple things without being able to really understand why it was working (except for trivial/small networks).

Do people really understand what complex neural networks do? I.e., is it still only trial and error, or are they built with an understanding of why they work?
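For anyone who hasn't met them, here is a minimal genetic algorithm of the kind described above, evolving a short Hamiltonian cycle through random cities. Everything about it (city count, population size, mutation rate) is a toy choice, not a serious solver:

    import random

    random.seed(0)
    CITIES = [(random.random(), random.random()) for _ in range(20)]

    def tour_length(tour):
        return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                    (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
                   for a, b in zip(tour, tour[1:] + tour[:1]))

    def crossover(p1, p2):
        # Order crossover: copy a slice from one parent, fill the rest in the other parent's order.
        i, j = sorted(random.sample(range(len(p1)), 2))
        child = p1[i:j]
        return child + [c for c in p2 if c not in child]

    def mutate(tour, rate=0.2):
        tour = tour[:]
        if random.random() < rate:
            i, j = random.sample(range(len(tour)), 2)
            tour[i], tour[j] = tour[j], tour[i]   # swap two cities
        return tour

    population = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(100)]
    for generation in range(300):
        population.sort(key=tour_length)          # selection pressure: shorter tours first
        survivors = population[:20]
        population = survivors + [mutate(crossover(*random.sample(survivors, 2)))
                                  for _ in range(80)]

    print("best tour length:", tour_length(min(population, key=tour_length)))

The interesting part is that it tends to find near-optimal tours while the population just keeps many partial solutions alive at once, with none of the explicit reasoning a human route-planner would use.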


>Do people really understand what complex neural networks do

From a biological sense, the only neural net we have a decent understanding of is that of the C. elegans nematode worm, with a few hundred neurons. Large neural networks like our brain's are beyond our understanding, and I suspect the same is the case for ANNs.


Sigh. The phrase "neural network" is getting tossed around these days with some type of sensationalist flair, an almost romanticized notion of this impending explosion of super-human phenomena. As Alex Smola says with much frustration in one of his classes, "it's only math!" It's a fancy term for straightforward mathematics. Bloggers are so often making them out to be much more than they are.


> Sigh. The phrase "neural network" is getting tossed around these days with some type of sensationalist flair, an almost romanticized notion of this impending explosion of super-human phenomena.

It's all cyclical. This is at least the second, if not the third, wave of hype for Neural Networks. I remember a period back in the mid to late 90's when this stuff was quite the rage.


There's something to be said for both sides of this. I think you're right about the way NN's have been blown out of proportion. I find as I work across fields they're the most abused and misunderstood ML technique, since everyone and their dog has heard of NN's at this point. But on the other hand, there's a lot that can be said for simple building blocks combining to form something very complex. The biological "inspiration" behind NN's make them an attractive starting point for investigating this line of thought.


If you don't know why a neural network suddenly reports that video static is a cheetah, the main reason for that is that you also don't know, in the first place, why it reports that a picture of a cheetah is a cheetah!

To emphasize our lack of understanding of the false classifications is misleading.


This article relies on carefully constructed images that maximize one particular outcome by summing up lots of small errors into it.

For it to work, the pixels have to be very accurately tweaked. If the tweaks were off by one pixel, the whole thing would fall apart.

The assumption is that this cannot be done to a person. But there is no way to put in a pixel-level "exploit of sorts" into a person to test that theory.

The real answer is probably that a little bit of noise on the input disrupts the exploit. It could never happen to a person because eyes have noise. At the same time, it could never happen to a robot either, because cameras have noise.
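One way to poke at that intuition on a toy model (a made-up linear "cheetah" classifier, nothing like the deep nets in the paper): craft a small sign-of-the-gradient tweak that flips the prediction, then count how often random noise of the same size undoes it.

    import numpy as np

    rng = np.random.default_rng(1)

    # A made-up linear "classifier" over 100 pixels: score > 0 means "cheetah".
    w = rng.normal(size=100)
    x = rng.normal(size=100)
    x -= (w @ x + 1.0) * w / (w @ w)        # nudge x so the model confidently says "not cheetah"

    eps = 0.05
    x_adv = x + eps * np.sign(w)            # adversarial tweak: tiny per-pixel step toward "cheetah"

    def label(v):
        return "cheetah" if w @ v > 0 else "not cheetah"

    print("original:   ", label(x))
    print("adversarial:", label(x_adv))

    # Does sensor noise of comparable size undo the exploit? Count how often it does.
    flips = sum(label(x_adv + rng.normal(scale=eps, size=100)) != label(x_adv)
                for _ in range(1000))
    print("noise undid the exploit in", flips, "of 1000 trials")

In this toy setup the targeted tweak pushes every pixel in a coordinated direction while random noise mostly cancels itself out, so the noise rarely reverses the flip; the article excerpt quoted below suggests the same robustness holds for real networks.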


From the article:

Such screwy results can’t be explained away as hiccups in individual computer systems, because examples that send one system off its rails will do the same to another. After he read “Deep Neural Networks Are Easily Fooled,” Dileep George, cofounder of the AI research firm Vicarious, was curious to see how a different neural net would respond. On his iPhone, he happened to have a now-discontinued app called Spotter, a neural net that identifies objects. He pointed it at the wavy lines that Clune’s network had called a starfish. “The phone says it’s a starfish,” George says.

Spotter was examining a photo that differed from the original in many ways: George’s picture was taken under different lighting conditions and at a different angle, and included some pixels in the surrounding paper that weren’t part of the example itself. Yet the neural net produced the same extraterrestrial-sounding interpretation. “That was pretty interesting,” George says. “It means this finding is pretty robust.”

In fact, the researchers involved in the “starfish” and “ostrich” papers made sure their fooling images succeeded with more than one system. “An example generated for one model is often misclassified by other models, even when they have different architectures,” or were using different data sets, wrote Christian Szegedy, of Google, and his colleagues [4]. “It means that these neural networks all kind of agree what a school bus looks like,” Clune says. “And what they think a school bus looks like includes many things that no person would say is a school bus. That really surprised a lot of people.”


We are trying to emulate some human capabilities. In doing so we have created a complex system which can be almost as hard to predict as a human. That's not surprising, that's just physics.

Having a few odd classifications is not surprising either. It hasn't been trained like a human and is being asked to select a class where 'random squiggle' isn't an option.

To get human-like intelligence we need to develop them more as virtually embodied agents. Sort of like kids.


Neural Networks do not think. They perform a series of computations, and arrive at a result. It is not intelligence. It is a very useful tool for solving problems that can be quantified and that we can generate a lot of data from, but ultimately, it is still human intelligence that is interpreting the problem and result.


Human brains do not think. They perform a series of computations, and arrive at a result. It is not intelligence. It is a very useful tool for solving problems that can be translated into neural inputs.


Sufficiently advanced robotics is indistinguishable from life. - Adapted from Arthur C. Clarke.


And what evidence do you have that human neurons do any kind of computation at all?

Generally, they are stimulated by some sensation until they reach a certain threshold that causes them to fire. That is the basic kind of functionality that nodes in a neural network try to simulate.
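That threshold-and-fire behaviour is essentially all a textbook artificial "neuron" amounts to; a minimal sketch (one that, as the rest of this comment notes, leaves out nearly everything biologically interesting):

    import numpy as np

    def artificial_neuron(inputs, weights, bias, threshold=0.0):
        # Weighted sum of stimulation; "fire" (output 1) only past the threshold.
        activation = np.dot(inputs, weights) + bias
        return 1 if activation > threshold else 0

    # Toy example: fires only when both inputs are strongly stimulated (an AND-like unit).
    print(artificial_neuron([1.0, 1.0], weights=[0.6, 0.6], bias=-1.0))  # 1: fires
    print(artificial_neuron([1.0, 0.0], weights=[0.6, 0.6], bias=-1.0))  # 0: stays silent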

But human neurons are not dependent on numbers and change in much more complex ways than a few parameters. The brain requires a lot less data than these networks to learn new concepts. And the concepts that these networks learn are all ideas that humans came up with.

A network does not hold an opinion, it takes in inputs and gives outputs. I do not mean to say that we cannot make a network to simulate a brain, but that is not what we have right now.


And what evidence do you have that silicon transistors do any kind of computation at all?

Generally, they are stimulated by some voltage until they reach a certain threshold that causes them to change state.


A transistor is not doing computation. It is flipping a bit. Those bits are flipped in binary patterns with logic gates to do the computation.

And a transistor is not a neural network node.


That just answered why I think that human brains are doing computation. The neurons fill the same role as transistors, and the patterns of neural connections fill the same role as how the transistors are wired together.

I'm not saying that these are simple computations, or ones that are easy to understand, or ones that can be done in reasonable timeframes on silicon.

For more useful discussion, I'd like to hear what you think the definition of "computation" is - I suspect we're using slightly different meanings for the word.


I am using computation in the strictly mathematical sense. As in dealing with numbers. I do not think that our minds operate through a constant stream of numbers that become thoughts.

In that way, a computer and a human are fundamentally different. You cannot simulate human thoughts as pure numbers. I think we need some extra layer of yet-to-be-invented abstraction to achieve that goal.

Of course, we could go the route of trying to create a new model of thought based around numbers, but that is proving to be difficult to understand. It would not be a good idea to try to build an intelligent system that we cannot completely understand because then all we could do is hope it works as we intended.


The map is not the territory. "Computation" is fundamentally an abstraction for talking about that which various algorithms have in common. Algorithms themselves are a high-level description of a series of well-defined tasks. Computers aren't literally doing computation in the sense you are describing. What they're doing is simple physics with lots of voltage levels. The "computation" is a useful high-level description of what the computer is doing.

I agree that there's a missing abstraction for talking about human thought; it's a terribly complicated subject that isn't well understood. That doesn't mean that the human brain is doing anything that's different on a fundamental level from what computers can do. We don't have a high-level description of how human thought works like we do with a computer, but that doesn't mean that human thought has some kind of magic.


I see a cheetah in the static, too...


It's right there stalking through the lush pixelated grass. Quite unmistakable, I agree.


Me too. I'm glad I'm not the only one.


Fellow robots...


What if you showed the algorithm/network/whatever itself? You could 'take a picture of it' and then train it to know that's itself, which would then change what it looks like. Keep doing that until it doesn't change or you fail trying.


You could get into a loop. Though, what are you trying to accomplish here?


Consciousness?





