Hacker News
AI (samaltman.com)
616 points by rdl on Feb 19, 2014 | 311 comments



Critically speaking, this article doesn't add anything. I don't know why I read it. Can anyone explain why they upvoted?


Because the title on HN is "AI", which makes it a small click target [1], prone to accidental upvotes.

[1] http://en.wikipedia.org/wiki/Fitts's_law


Yes, that's exactly how this article gained my upvote. For misclicks and link-bait, I would love to see HN add the ability to retract votes from articles. I have many "saved stories" that I'd love to remove from my list.


IMO this is much more interesting than a multi-billion dollar exit of a simple chat application


280 votes and counting. I'm betting this will stay on the first page for a long time.


I just did that. iPad here, click target is below 44px.


This happens to me all the time on my phone. HN should make the voting arrows bigger, at least on mobile devices.


There is actually a neat web app for mobile devices: http://hn.premii.com


Isn’t Fitts’s law about how long it takes to aim for an element of a certain size at a certain distance? I think this has nothing to do with Fitts’s law.


It has everything to do with accuracy. Smaller targets take more time to aim at; equivalently, if you move faster than the target size allows, your accuracy drops.


I still don’t see what this time-distance-size relation has to say about this case. Do you mean that one could infer that, since the arrow and the title are equal in size and close together, it requires minimal time to point from one to the other? Referring to Fitts’s law here seems like breaking a butterfly on a wheel. Sorry for nit-picking, maybe I’m missing something.


1) People aim fast. 2) The target "AI" is small. 3) They miss, and click the arrow instead.

It's not that complicated.


You don’t need Fitts’s law for what you’re saying. It’s obvious that small objects are difficult to aim for. This is the main thing I dislike about HCI/UX: More often than not people have to unnecessarily refer to laws or norms to sell their observations.

The only non-obvious insights Fitts’s law brings are that objects twice as big and twice as far away take the same time to aim for, and that as objects become smaller or distance increases, the time only grows logarithmically. Everything else is just squeezed into its definition to make it sound well-founded.
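For reference, here is a minimal sketch of the Shannon formulation of Fitts's law (the a and b constants are made-up illustrative values, not measured ones):

    import math

    def fitts_time(distance, width, a=0.1, b=0.15):
        # Shannon formulation: movement time grows with log2(D/W + 1).
        # a and b are device-dependent constants; these values are invented.
        return a + b * math.log2(distance / width + 1)

    print(fitts_time(100, 10))  # baseline target
    print(fitts_time(200, 20))  # twice as far and twice as big: same time
    print(fitts_time(100, 1))   # 10x smaller target: time grows only logarithmically

Doubling both distance and width leaves D/W unchanged, which is where the first claim comes from.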


For me, "There are certainly some reasons to be optimistic. Andrew Ng, who worked or works on Google’s AI, has said that he believes learning comes from a single algorithm - the part of your brain that processes input from your ears is also capable of learning to process input from your eyes. If we can just figure out this one general-purpose algorithm, programs may be able to learn general-purpose things." is what was most interesting to me. The idea that there's a fairly simple algorithm to learning, applied in the brain, which produces at least the basic learning capability, and possibly consciousness.



"Deep Learning" is not itself a candidate for anything, because it's not any single algorithm, but a category of approaches.

Deep Learning generally refers to machine learning algorithms that deal with stacking multiple layers of simpler functions to enable more complicated functions, and optimizing all the parameters to best fit your training set and generalize to new samples (the hard part). Though it usually refers to neural networks, I don't think there's any reason it doesn't also apply to other layered approaches as long as there's a relatively unified learning algorithm applied across the whole system.

There are clearly many different deep learning algorithms, even if you just count the permutations of tricks you can choose from to improve layered NN generalization. Though to be fair I think very good progress is being made towards developing "better" algorithms in the sense that new ones (e.g. RBM pretraining + dropout) usually perform better than older algorithms, no matter what data you use them on (now network architecture is another matter entirely).
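As a rough illustration of the "stack of simpler functions" idea, here is a toy two-layer network in numpy (just a sketch, not any particular published algorithm):

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, w, b):
        # one "simple function": an affine transform followed by a nonlinearity
        return np.tanh(x @ w + b)

    x = rng.normal(size=(4, 3))                     # 4 samples, 3 input features
    w1, b1 = rng.normal(size=(3, 5)), np.zeros(5)   # first layer parameters
    w2, b2 = rng.normal(size=(5, 2)), np.zeros(2)   # second layer parameters

    hidden = layer(x, w1, b1)    # stacking: the output of one layer feeds the next
    output = layer(hidden, w2, b2)
    print(output.shape)          # (4, 2)

"Deep learning" is then about fitting w1, b1, w2, b2 (and many more in practice) to data, usually by gradient descent on some loss over a training set.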


I actually linked to convolutional neural networks on the Deep Learning page as the algorithm.

But I do agree with your point.


One of the most interestingly general things about Deep Learning is that unsupervised learning approaches can be used at the bottom of the "stack" to learn more useful high-level features from the input data. This ends up making your higher-level learners more helpful for "Real Stuff".


Yeah, but will it satisfy us if we can't see it making analogies, can't see its semantics, can't identify with it? This same lack of breakdown into pieces we understand will make it hard to tweak and advance NNs beyond 'good categorizers'.


"Andrew Ng, who worked or works on Google’s AI, has said that he believes learning comes from a single algorithm."

As algorithms can be combined, the existence of any set of algorithms satisfying this goal would automatically imply the existence of a single algorithm incorporating all of them.

If a sufficiently-detailed physical simulation of a human's brain satisfied this goal, then that would be one such algorithm.


It's an interesting thought indeed, but I wonder if any neuroscientists would back it up.


Fair enough, but can we imagine a better algorithm?


I've always assumed any real machine intelligence would be scalable to the point where even if it started at a millionth the functionality of a human, it would end up far surpassing humanity.


Why do you believe this?

We don't see animals self-improving to become humans (in the "consciously deciding what revisions to make to their mental architecture at each step" sense), so why would you expect electronic animal-level minds to be able to do the same to become electronic human-level minds, or beyond?

Personally, I doubt even electronic human-equivalent minds would be capable of self-improvement. After all, we are not smart enough to build AIs better than us (yet), so why should electronic minds only as smart as we are be capable of that, either?

A possible answer to that, I suppose, would be that the electronic mind would have far more input/training data fed to it per second than biological minds receive, and far more time to "work on" that sense data to derive patterns between each decision-step.


Because (naively) I'd assume it's some kind of software running on computers, and one can generally improve performance of that by buying more hardware or using better technology (or shorter lived components, or better cooling, or whatever).

If humans created an AI to solve some kind of open-ended problem, where "more AI" made the solution better, there would be every incentive to spend the money on more or better hardware for it. It's not clear that a shark would gain much by being 100x smarter, particularly if the metabolic cost were high; for a lot of human problems, spending 100x more on hardware/power/etc. for a 10% better solution would be quite desirable.


What about the advantage gained by time? I think a major drawback to human improvement is the effect of death. Given enough time and accumulated experience, self-improvement is possible. A machine cannot succumb to death.


I personally don't think the brain could be as intelligent if it were only a part of what it is, for example if it had 3 senses instead of 5: the emulation of a sense by the others might be necessary to bootstrap the system. So we might just be on the wrong path when we try to teach a small machine to learn.

And maybe, once you have a fully functional brain, it constantly drifts out of bounds. Like sudden overheating, which is literally step #1 of a depression in a human. So we might not be able to scale an AI beyond the size of one brain. Apart from clusters obviously, but then you need to sustain civilizations of brains, and civs do collapse every dozen generations.

We might not be materially able to find enough energy to power all of those trials and errors.


The amount of rampant bullshitting here by obviously unqualified people is amazing. If you don't even know that humans do not, in fact, have five senses, I'm not sure I can take your post seriously even as unqualified speculation.


The whole thread including the OP is full of speculation by unqualified people, because AI is the ultimate fantasy for all of us here.

So common sense is a perfectly satisfying qualification to post in this particular thread. Concerning the 5 senses, I refer to common wisdom because it speaks to everyone. Those who know better are probably smart enough to translate "5 senses" into an accurate scientific wording.


can we please put this right on top of the whole page and close this thread? too many ai-fanboys in here, who got their knowledge about ai from hollywood movies and video games..


I think we need a better machine first. The current model of one or a handful of CPU cores with very tiny caches talking to memory over a narrow, high latency bus doesn't seem very efficient for building and querying massive (multi-petabyte), highly associative (thousands of associations) data structures.


There are 100,000,000 neurons in the brain. It's not that many. A Core 2 Duo has 169m [1]. A neuron is connected to about 10,000 others. And how many computers do we have in the world? It even sounds like we scientists are sluggish at inventing AI ;)

[1] http://en.wikipedia.org/wiki/Transistor_count


>There are 100,000,000 neurons in the brain.

You're off by 3 orders of magnitude. Try 200 Billion[0].

[0] http://en.wikipedia.org/wiki/Human_brain#Structure


Of course a Core 2 Duo has 169m transistors rather than neurons. Interestingly, you can do a rough calculation of how much processing power would be needed to do the equivalent of a human brain, and it's quite a lot. Hans Moravec calculates it at about 100 million million instructions a second ( http://www.transhumanist.com/volume1/moravec.htm ) based on the processing power required to functionally equal the human retina, which is reasonably understood. A Core 2 Duo does something more like 2.7 thousand million instructions per second, so you'd need about 35,000 of them for brain equivalence. So you're talking in 2014 terms of a top-end supercomputer rather than a MacBook.
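A quick sanity check of that ratio, using the figures above:

    # rough arithmetic with the numbers quoted above
    brain_ips = 100e12      # Moravec's ~100 million million instructions/second
    core2duo_ips = 2.7e9    # ~2.7 thousand million instructions/second
    print(brain_ips / core2duo_ips)   # ~37,000 chips, the same order as the ~35,000 figure above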


this hypothesis has been around since at least the 70s (Vernon Mountcastle's paper on cortical similarities); more recently (2004) Hawkins also published a book in which he promoted the same idea, on which he is currently working with his AGI company Numenta.

to be honest, it sounds really great and it could even be the case that there is a very general underlying principle to cortical information processing and pattern recognition. But one should be careful not to mix solid scientific hypotheses with mainstream media hysteria and people who try to grab attention with their simplifications, claiming today that entropy maximization is the underlying principle and changing to sparse coding tomorrow. We are not that far and what we need is solid research instead of over-the-head assumptions and claims "to have solved the riddle" (in that respect, it might not be that far off alchemy :P)


I've felt this way about every Sam Altman piece that's been posted in the past week (and possibly every one I've ever read). And I feel guilty because PG speaks so highly of him. And then I feel guilty for feeling guilty.


don't feel guilty. I'm blogging to practice writing. It surprises me at least as much as you when articles like this do well on HN and makes me feel embarrassed I didn't make them better (there are some posts I work really hard on and hope people like, but this was not one of them)


If you're interested, you should read On Intelligence[1] by Jeff Hawkins (inventor of the Palm Pilot). In it, Hawkins presents a compelling theory of how the human brain works and how we can finally build intelligent machines. In fact, Andrew Ng's Deep Learning research is built on Hawkins's "one algorithm" hypothesis.

[1]: http://www.amazon.com/On-Intelligence-Jeff-Hawkins/dp/080507...


I think you are an excellent blogger and glad that you are posting to HN.

That said, I hope that you will think more critically and clearly before publishing vague, fuzzy, uninformed, and unlogical thoughts (not illogical, but unlogical) like the following:

>The biggest question for me is not about artificial intelligence, but instead about artificial consciousness, or creativity, or desire, or whatever you want to call it. I am quite confident that we’ll be able to make computer programs that perform specific complex tasks very well. But how do we make a computer program that decides what it wants to do? How do we make a computer decide to care on its own about learning to drive a car? Or write a novel?

Consciousness, creativity, and desire are all quite distinct things. It is very important for people who are attempting to approach the coming reality of artificial intelligence to be able to distinguish between different things like that.

There have been computer programs that decide what they want to do for decades. Perhaps you were thinking of a specific human-like type of decision process, but if so, you must say so and reason that way. Otherwise you are just conveying some fuzzy thoughts. And the problem is that you are doing so in the context of real scientific undertakings with results directly applicable to your thoughts.

A computer deciding what to care about or learn or what behavior to engage in "on its own" is related to the previous topic you mention, and in and of itself, does not require artificial general intelligence.

How do we make a computer program write a novel? I think that is a good question and an effective answer to it I believe _might_ be in the category of 'real' artificial general intelligence. However, I think that it will probably soon be possible to create 'narrow' AIs that can generate novels without being generally intelligent. http://www.nytimes.com/2011/09/11/business/computer-generate...


I like your candor.


Artificial general intelligence is not coming in the next, say, 30 years. And I am a big fan. A quick analogy: note that we can't even build an ant. It will take decades after that accomplishment to build a human level intelligence.


I upvoted because HN does not have a separate 'save' feature like Reddit, and upvoting is a quick and easy way to tag it as "read this eventually".

So I reflexively upvote anything that looks even vaguely interesting.

In this case, I did read the article, and would probably have upvoted it anyway. Why? Because it stands to serve as the seed of an interesting discussion.

Personally, I don't give a fuck if the article itself "adds anything" or not. Who cares about that? It's irrelevant. If the topic itself and/or the content of TFA are interesting enough that it gets a bunch of interesting HN readers talking and commenting and linking and sharing stuff, then it's a worthwhile article in my book. Not everything has to be an earth-shattering scientific breakthrough, that's published in a peer-reviewed journal, blah, blah, blah.


I saved 59 stories in the last 7 days. I don't expect to be able to find any of them in the future (in more than 1 month). The only realistic way to find old stories is to use the search feature, if you can remember a few keywords.


No doubt, once they get past a certain age, they become basically useless bookmarks (probably like most bookmarks). But I find that my pattern is usually to vaguely recall something from sometime in the past 3-6 weeks or so, and that often times if I go to my "saved stories" and just start scrolling back through them, I find the one I'm looking for.


Super pertinent and more in-depth article on this subject:

http://www.theatlantic.com/magazine/archive/2013/11/the-man-...


It's nice to hear smart people talk about important topics in a down-to-earth fashion. This is why I upvoted.


smart - nope. pretentious guesswork and attention whoring is what is going on here


There are many types of smart, good sir.


It's certainly not something to be published in a scientific journal, but it's a nice subject to bring up and spark discussion on now and then.


Cynically, it's a partner at a VC firm saying that AI is a space to watch.

Less cynically, after about 40 years of AI winter, any possible sighting of a sprout is news.


Because it has an exceptionally short link title.


... and samaltman.com domain


It's well written. And I like the idea that maybe there is one algorithm. That'd be cool.


What it adds is the simple point, "be optimistic about strong AI in the near future".


So an article that lacks all substance besides "be optimistic about AI" is front page material on HN?


Maybe the second sentence is being taken as a prophecy.


If the only goal is an 'artificial' consciousness, it might be more prudent to consider a functional definition of what consciousness is and try to build that. We didn't make computers by modeling how individual neurons perform mathematical calculations.

On the other hand, if you want to go the biological route, there's some awesome work to be done. If I were to study consciousness, here's the question I would ask: how do we separate our selves from our surroundings? Patients with brain-machine interfaces (like moving a mouse cursor) start by thinking about moving their arms around. Then they apparently report that they gradually just feel that the interface is another body part. So if it's set up to change the TV channel, they just imagine that they have a channel-changing organ.

So maybe you want to build a system that can identify what is a part of itself versus what is not, and it's not just a fixed list. So what does that data structure look like? How is it defined, queried, and updated? Defined by what you can 'influence?' So gradated based on my influence? These aren't just broad philosophical questions, they're more specific and actionable.
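Purely as a speculative illustration of what such a gradated structure might look like (the effector names, scores, and update rule here are all invented, not based on any actual model):

    # speculative sketch: a "self-model" mapping candidate effectors to a gradated
    # influence score, updated from how well motor commands predicted the
    # resulting sensory change (low prediction error -> more "part of me")
    self_model = {"left_arm": 0.9, "mouse_cursor": 0.2, "tv_channel": 0.0}

    def update(effector, prediction_error, lr=0.1):
        score = self_model.get(effector, 0.0)
        self_model[effector] = (1 - lr) * score + lr * (1.0 - prediction_error)

    update("tv_channel", prediction_error=0.05)   # reliable control: score rises
    print(sorted(self_model.items(), key=lambda kv: -kv[1]))

Querying is then just a threshold or ranking over the scores, and updating happens continuously as new prediction errors come in.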

That's just one possible angle, but it's different than, say, machine learning paradigms where you want to build a machine that can do pattern classification (which the brain undoubtedly does). There are probably other routes as well.


We know a ton about the brain but so little about the mind itself. We still don't have definitive answers to what consciousness is, why it's here, what's useful for, etc. Some people debate whether the mind exists at all. Also, there's still very little understanding of the difference between the conscious and unconscious mind.

I think building an artificial consciousness is going too far. Artificial intelligence is simpler; it's just fake intelligence. Seems easy enough, right? If it looks like a duck and quacks like a duck then it's intelligent. We don't need to make it "conscious" necessarily, again whatever that means, in order for it to be intelligent.

I feel like we can build artificially intelligent software pretty "easily" relative to making it "conscious".


> We know a ton about the brain but so little about the mind itself. We still don't have definitive answers to what consciousness is, why it's here, what's useful for, etc. Some people debate whether the mind exists at all. Also, there's still very little understanding of the difference between the conscious and unconscious mind.

One of the really, really bad consequences of the Cold War was the scientific divide between East and West. By that I mean the serious lack of scientific data exchange between the blocs. The consequences are still felt, and this area (the problem of consciousness) is the one that suffered. The problem of "consciousness" was basically solved, at least at a conceptual level, by Soviet psychology and neuropsychology. Here I refer, of course, to the work of Vygotsky and Luria. What is consciousness? Almost nothing at all by itself. Consciousness as found in humans is a consequence of our cognitive development and the advanced symbolic capabilities of humans. The subjective perception we have of the thing we call consciousness is "simply" (it's not really simple when you get into details) a product of humans acquiring language skills (I'm simplifying).

This is not to say the subject is trivial, it takes volumes to describe what is happening, but the thing we informally call "consciousness" is really nothing at all in and of itself, and the perception we have of it is just a result of the very complicated process of cognitive development. Thin air, like Lisp's cons.

If you want to read on it I can recommend Vygotsky's Language and Thought (actually, it's his only book) and Luria's Language and Consciousness (I'm not sure it was ever translated into English, it's a collection of his lecture notes from a university course he did on the subject) or possibly The Cognitive Development: Its Cultural and Social Foundations.

Why this line of thinking is mostly ignored in the West I have no idea. Why we still cling to metaphysical (even religious, I would say) fantasies about "consciousness" is an interesting topic itself. Is it because it's romantic to think there's something special, transcendent, about our minds? Are we really that sentimental? I have some hypotheses, but it's a different topic.


Compared to Russia, and most of the former Soviet bloc, the US is a fairly religious place. In most of the "importance of religion" surveys we rank in the ~50% "yes" category, whereas Russia is consistently ranked as one of the least religious places on Earth. (See http://en.m.wikipedia.org/wiki/Importance_of_religion_by_cou...). It seems only natural that the US would have a lot of metaphysical threads woven through its work on consciousness. <caveat: writing from a town with a 10-block-long street that's pure churches, so... bias>


I think that's another possible narrative for the mind but it's certainly not hard science. There's no concrete data showing where the mind is in our bodies or what part of the brain creates it. Psychology can't answer questions like where does the mind come from or what is experience.

It's an interesting narrative though and I'll check out those books.


> Psychology can't answer questions like where does the mind come from or what is experience.

I'm not sure I understand what you mean by this. Of course it can, that's the whole purpose of psychology (and, more fashionably, neuroscience of course). To me that sounds like saying science can't answer these questions. Do note that when I say "psychology" I mean strictly the scientific areas of whatever comes in the bag labelled "psychology". Due to historical accidents the term acquired a lot of BS pseudo-scientific baggage, and it's really a shame those things can detract from a wealth of valuable hard results honest scientific psychology uncovered.

The answer that developmental cognitive psychology, at least the theory I'm referring to, gives is that "the mind" comes from the only place it can come from: neural processes and the way they hook into environmental interactions of the organism (social and physical). The key to understanding what gives rise to "consciousness" is in understanding the role of language acquisition in broader cognitive development. The point where a child utters its first words is neither the beginning nor the end of this extremely nuanced process. In my first post I took it for granted it's understood that this is not just armchair speculation; it's based on empirical data. As any good scientific theory it's far from complete, and maybe inaccurate in some details, but it's certainly infinitely better than an endless philosophical debate (with strong religious, or in the best case idealist, undertones) on what the mind is and where it comes from.


> it's based on empirical data

What empirical data can say anything about, for example, the philosophical zombie problem? http://en.wikipedia.org/wiki/Philosophical_zombie


There's no such data, because the zombie problem is complete nonsense. Well, actually, it is complete nonsense precisely because empirical data can't say anything about it.


I think it's nonsense also due to this:

"Many physicalist philosophers argued that this scenario eliminates itself by its description; the basis of physicalist argument is that the world is defined entirely by physicality, thus a world that was physically identical would necessarily contain consciousness, as consciousness would necessarily be generated from any set of physical circumstances identical to our own."

Also this:

"Artificial intelligence researcher Marvin Minsky sees the argument as circular. The proposition of the possibility of something physically identical to a human but without subjective experience assumes that the physical characteristics of humans are not what produces those experiences, which is exactly what the argument was claiming to prove."

from the wiki of p-zombie.


By that argument, there is no moral argument against inflicting pain on others, because the pain of another is not something we can empirically observe, except by analogy of how we react to the pain we ourselves experience.


First, the moral argument against inflicting pain on others doesn't depend on the existence of pain. The moral dilemma is: is it acceptable to inflict pain on others or not. This is different from, and to a large extent independent of, the question of whether pain exists in the experience of others. In other words, if pain exists in others, it doesn't follow that you have to, by mere logical reasoning, make a moral conclusion that inflicting pain is wrong. There is an uncrossable ontological abyss between the empirical what is and the moral what should be.

Second, the case of pain from an empirical side is not at all like what we have in the philosophical zombie "problem". We can empirically observe pain. There are all sorts of physiological and neural manifestations of pain. Of course, now you may say "ah, but how can we know that these empirical manifestations mean the person is experiencing the sensation of pain". Scientifically that dilemma makes little sense, it's simply unproductive, it's scientifically useless. If we were to go by that route, we could inject a similar dilemma into every scientific problem, which inevitably would lead to the problem of solipsism. How can we really, really be certain that anything at all exists? Well, I suppose we can't, but this is a question that science has long ago abandoned because it doesn't get you anywhere, it doesn't yield any useful results.

Do note that unlike the question of pain, the zombie problem is defined so that there is in principle absolutely no way to detect, to measure, if someone is a zombie or not. On the other hand, we can in principle measure and detect events correlated with introspective reports on sensations. If we couldn't do that for some phenomenon it would be wise to consider that the phenomenon doesn't exist for the purpose of empirical scientific examination.

Frankly, I'm surprised that my previous post (where I say the zombie problem is nonsense) got downvoted because this is the foundation of scientific methodology. If you can not, even in principle, measure/detect something then it makes no sense to discuss it. Of course, you can amuse yourself and speculate on it, but that falls outside of boundaries of scientific inquiry and I hope that's what we're discussing here.


> Of course, now you may say "ah, but how can we know that these empirical manifestations mean the person is experiencing the sensation of pain". Scientifically that dilemma makes little sense, it's simply unproductive, it's scientifically useless.

Sure, it's scientifically impossible to evaluate. From a purely scientific perspective pain is just electricity. How would you convince an intelligent being that could not feel pain that it exists at all?

The existence of pain falls outside the boundaries of scientific inquiry, I agree. But are you saying that it therefore doesn't exist? Because your earlier argument seems to be that we can explain the mystery of consciousness within a scientific framework, and that is the larger point I disagree with.


> How would you convince an intelligent being that could not feel pain that it exists at all?

Assuming the being is "reasonable" (in this context it would mean it's willing to accept that there exist concepts that it may not understand or directly experience, and is willing to trust us), we could just point out the chemical and electrical phenomena correlated with pain and say that it's something that causes a certain kind of feeling of discomfort. We would get in trouble if this being also can not feel general discomfort, but you're probably bound to hit a wall in understanding at some point anyway when communicating with an entity whose experiencing capabilities are wildly different from ours.

> The existence of pain falls outside the boundaries of scientific inquiry, I agree. But are you saying that it therefore doesn't exist?

Actually, my point about pain was that it does exist, precisely because it can be examined and explained within a scientific framework. If we couldn't do that, then we could say that for all practical purposes, as far as science is concerned, "pain doesn't exist".

The same is true for consciousness. What's difficult about it is that it's not a thing, there's no hormone for consciousness, there's no brain centre where it's localized, rather it's a process and a product both phylogenic and ontogenic so it's a lot harder to capture it and identify it, to put it "under the microscope". It's not some secret sauce to intelligence, it's a consequence of intelligence. And the most important part of the process is the dynamics of language acquisition (at least when we're speaking of conscious experience in Homo sapiens).

I could go into the details but I'm afraid my posts would explode in length. ATM I don't have time to dig for good online material on this, and I'm under the impression that the theory in time got derailed into developing some practical aspects concerning child cognitive development, verbal learning etc., and away from the hard, meaty implications we're discussing here, so I'm reluctant to even attempt to go into that rabbit hole. But they're explicitly there (the books I mentioned discuss the issue at length).

Interestingly, about 10 years ago I was doing some work on word-meaning and symbol grounding development, and I was both glad and frustrated to see the literature on computer modelling in this area full of operationally defined concepts from the theory, while people were seemingly unaware that this work had already been treated in depth on the theoretical level, because there were no references to it then. I'm not sure if anything has changed; I've since moved on to other things.

For example, the Talking Heads model[1][2]. It's not about consciousness per se, and although the authors never reference the socio-cultural theory of cognitive development (a horrible name in this day and age, it tends to evoke associations to post-modern drivel, but nothing could be further from the truth), it can give you a good idea of some aspects of the dynamics explored in the theory, because what is happening in the TH model is exactly what the S-C theory describes as happening externally during language acquisition (in broader strokes though).

As for the philosophical zombie problem, I'd like to retract what I said about it being nonsense. Actually, it's very useful in showing why worrying about subjective sensation of consciousness is completely useless in AI and is very much like asking how many angels can dance on a tip of a needle. On a very related note I'd add: people are severely underestimating the significance of the Turing test.

[1] http://staff.science.uva.nl/~gideon/Steels_Tutorial_PART2.pd...

[2] http://scholar.google.com/scholar?hl=en&q=talking+heads+expe...


> Actually, my point about pain was that it does exist, precisely because it can be examined and explained within a scientific framework.

The physical processes of pain (ie. the electricity) can be observed scientifically, but the "sensation" of pain (to use your word from before) cannot. But it is the "sensation" of pain that gives it its moral significance, otherwise inflicting pain would be no different morally than flipping on the switch to an electrical circuit.

> The same is true for consciousness. What's difficult about it is that it's not a thing, there's no hormone for consciousness, there's no brain centre where it's localized, rather it's a process and a product both phylogenic and ontogenic

I can only conclude that you mean something different than I do when you say "consciousness." To me the sensation of pain is a subset of consciousness. It's the difference between electricity "falling in the middle of the forest" so to speak and electricity that causes some sentient being to feel discomfort.

> Actually, it's very useful in showing why worrying about subjective sensation of consciousness is completely useless in AI

Sure it's useless to AI. To AI the zombie problem doesn't matter, because the goal is to produce intelligence, not sentience. But it's useful in a conversation about what sentience and consciousness mean.

If we created intelligence that could pass the Turing Test against anybody, it would be basically impossible to know if it experiences sentience in the way that all of us individually know that we do. But that is the essence of the zombie problem. Where does sentience come from? We have no idea.

Actually I take it back; the zombie problem will be extremely useful to AI the moment a computer can pass the Turing Test, because that's when it will matter whether we can "kill" it or not.


> The physical processes of pain (ie. the electricity) can be observed scientifically, but the "sensation" of pain (to use your word from before) cannot.

You state this as though it's a given, but it's not. You're assuming Dualism. So, of course you end up with Dualism.

> But it is the "sensation" of pain that gives it its moral significance, otherwise inflicting pain would be no different morally than flipping on the switch to an electrical circuit.

This is a silly over-simplification. Complexity matters. The patterns of electro-chemical reactions that occur when I inflict pain on another human cause that human to emote in a way that I can relate to because of the electro-chemical reactions that have been happening in me and those around me since before my birth. So what?

It's in no way comparable to flipping a light switch, except in the largely irrelevant detail that electricity was part of each system.

The fact that an incredibly complex system consisting of individuals, language, and society should yield different results from three pieces of metal and some current shouldn't be the least bit surprising, and is not a reasonable argument for dualism, or p-zombies.

Here's my take on the p-zombie "problem". We can say all kinds of shit, but it doesn't have to make sense. For example I can say "This table is also an electron". That's a sentence. It evokes some kind of imagery, but it's utter nonsense. It doesn't point out some deep mystery about tables or electrons. It's just nonsense.


> You state this as though it's a given, but it's not. You're assuming Dualism.

No. Dualism is the idea that our minds are non-physical. I say minds are fully physical, and all thinking happens in the physical realm. But somehow the results of this thinking are perceived and sensed by a self-aware being as "self" in a way that other physical processes are not.

> The patterns of electro-chemical reactions that occur when I inflict pain on another human cause that human to emote in a way that I can relate to because of the electro-chemical reactions that have been happening in me and those around me since before my birth.

Exactly. You are extrapolating by analogy that other people experience pain in the same way you do, because you cannot experience their pain directly in the way that they do. But this analogy of thinking is just an assumption. And it certainly offers no insight into why you are self-aware and a computer (a very different but still complex electrical system) is not (we assume).


Well, there are other ideas.

As science leads you to believe that your consciousness is nothing at all (according to your description), Bible-based Christianity tells you that your consciousness is part of your soul, which is the part of you that lives forever. It is independent of your body, which will eventually be replaced with a perfect body.



You've got that exactly backwards.

For the longest time it was not even known that the mind inhabited the brain. We've only known about the biology of the brain beginning with cell theory and onwards. Yet we have an entire vocabulary relating to mental states that reaches so far into the distant past that we don't know how long language existed before it was used to speak about mental states. When you really think about it, neuro-biology is very recent and still nascent; we've been thinking about thinking for a long, long time.


Thinking about thinking but not finding empirical evidence to validate that thinking. There's no science regarding the mind. Nor is there science distinguishing between consciousness and unconscious thought.


"Consciousness is data. [...] The data of consciousness--the way things seem to me right now--are data too. I am having a certain sensation of red with a certain shape right now. I am hearing a certain quality in the tone of my voice and so on. This is undeniable as the objective data in the world of science. And science ought to be dealing with that."

David Chalmers excerpt- Conversations on Consciousness by Susan Blackmore.


yup, mind-body problem is a fascinating topic I've had the pleasure of reading into.


The problem is that we haven't adopted the definition of consciousness that's useful long term yet: that consciousness is best interpreted as a property of reality.

If everything is conscious then some parts of it are just more dynamic (intelligent?) than others. Physical reality least, plants more [1], animals even more and humans most.

Defined like that human consciousness just becomes that part of all consciousness which we recognize as similar to our own.

In that view AI is just making a small part of reality, a computer, more dynamically conscious and, very importantly, more similar to our own so as to be more useful.

[1] http://vimeo.com/69225705


If we're to adopt that theory, we need a basically physical or metaphysical theory of what consciousness actually is rather than a theory which explains consciousness as a process atop biology.


The unconscious may simply be related to responses to stimuli repeated and internalized enough in neural structures for us to not have to supervise them anymore. This mechanism could be put on par with abstraction in my opinion, for the role it plays in enabling us to control our environment with limited attention resources.


"The zombie hunch is the idea that there could be a being that behaved exactly the way you or I behave, in every regard--it could cry at sad movies, be thrilled by joyous sunsets, enjoy ice cream and the whole thing, and yet not be conscious at all. It would just be a zombie."


I think you define it very well. Self-sense building is the identification of perfect or near-perfect prediction of inputs based on outputs produced. When I move my hand to the right, the visual pattern it produces moves to the right... and so on. This is basically related to the level of control you have over the feedback loop. Numerous experiments back that idea, including the ones you refer to.

This reflection may be extended by asking oneself how perfect prediction might be linked to pleasure/pain signals (this is purely rhetorical, as the answer is obvious and known). What is not to be enjoyed in perfect prediction? This could be a reason for self-enjoyment (or self-sense might be related to enjoyment), and even enjoyment of other people whose actions we can predict.

We are embarking on the path to understanding what information really is, and this will no doubt deeply shake our world. Deep and fascinating subject anyway.


To question what consciousness is and how we define it is a good starting point. And the fact that there is so much uncertainty and no overall consensus makes it seem very hard.

Somehow I fear the only satisfying implementation of AGI has to end in a Skynet scenario. We as humans will only accept intelligence as general when it is at least as intelligent as humans. But that would mean that we have to accept that we can't control it for sure. In fact, only a machine that is able to rebel can be considered generally intelligent. So I am not surprised that consciousness is not defined that way, because pursuing to build a machine capable of consciousness would mean building a possible enemy.


While this is an intriguing phenomenon, I do not think it is something deep. It sounds similar to how consciousness or a concept of self is sometimes identified as the ability to recognize a dot on ones body in a mirror (a few animals pass this task). Even though that is literally about ones reflection, I think it has little to do with the philosophical and abstract kind of reflection that is made possible by consciousness.


If anyone is interested in AI, I highly recommend joining Less Wrong, a community started by AI researcher Eliezer Yudkowsky. He started the community to convince people to focus on the "friendly AI problem". [1] I actually recommend that everyone read LW, but especially if you're interested in AI.

[1] In a nutshell, the friendly AI problem is: assume we create an AI. It may rapidly become more intelligent than us, if we program it right. As soon as it becomes significantly more intelligent, we will no longer be the most intelligent beings around, so the AI's goals will matter more than ours.

Therefore, we should really give it good goals that are compatible with what we want to happen. And since no one right now knows how to define "what humans want" good enough for writing it in code, then we'd better figure THAT out before building AI.


> we'd better figure THAT out before building AI.

Why don't we just give that task to the AI? It'll be smarter than us...

Maybe the problem is that people are too easy to understand: We want "Brave New World", but we don't want to know about it, or that we want it.


http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/

> To be a safe fulfiller of a wish, a genie must share the same values that led you to make the wish. Otherwise the genie may not choose a path through time which leads to the destination you had in mind, or it may fail to exclude horrible side effects that would lead you to not even consider a plan in the first place. Wishes are leaky generalizations, derived from the huge but finite structure that is your entire morality; only by including this entire structure can you plug all the leaks.

Humans can mostly differentiate between good and bad (ethics), but we don't know how we arrive at those conclusions (metaethics) because humans are terrible at introspection. Also, there's a ton of gray areas (e.g. the trolley problem). So rather than define all possible edge cases, it's probably less difficult to understand human decision-making from first principles and model our FAI accordingly.


Oy! Plenty of us don't want Brave New World at all. That World State sucked.

(If we're going all LW on this thread: http://lesswrong.com/lw/xt/interpersonal_entanglement/)

>Sure, it sounds silly. But if your grand vision of the future isn't at least as much fun as a volcano lair with catpersons of the appropriate gender, you should just go with that instead. This rules out a surprising number of proposals.


That's a viable approach to solving the problem. The question is, how do you trust the unfriendly AI to give you a correct solution?


Here's a question for you, what if the AI is perfectly friendly? It still might play us for fools in the long run.

For example, let's say we develop a super friendly AI, running on your computer. The AI realizes the human race is actually awful. We're greedy, we're killing tons of animals, chopping down rainforests, destroying the ocean and planet, starting wars with one another, and committing unspeakable acts of evil at times. The AI, being more intelligent than us, might decide the world is better off without the human race, and that we're actually a problem that needs to be removed.

Now, what does the AI do in your computer? Well, it's intelligent and knows the human race. It's not in a hurry. It calculates the best way to destroy our species. It acts friendly, and talks about how humans and robots should live together, and if we make robots with a similar intelligence, they could drive our cars, shine our shoes, cook us dinner, look after the elderly, open your pickle jar, etc. So, we listen to the AI, it's smart, and friendly, and we build all these robots. It's right, the new robots are doing great and helping us out. Then the robots start building more and more robots. They start building robots with firepower, so they can, you know, shoot down threatening asteroids, or stop one of those dangerous human types that goes on a killing spree in our society. Fast forward a couple of hundred years, and there are robots everywhere. They finally decide it's time to continue their plan, they're in a position of power at this point, and they can instantly disable our security systems, phone lines, satellites, internet etc, and start wiping us out.

We're gone. They constructed the most efficient way to clean us from the planet. They were planning it for hundreds of years, starting in your computer. The AI then goes on to explore the universe, and we're just a blip in the past.

It kind of feels like we're a bug going towards the light, and that the unfortunate conclusion is almost inevitable.


> what if the AI is perfectly friendly? It still might play us for fools...

"Friendly" is a term of art among AI people, at least at Less Wrong, and their meaning of friendly excludes this whole scenario. A friendly AI is one which helps humanity and has no horrifying side effects. The vagueness of that definition is the problem Yudkowsky and his acolytes are trying to solve.


Here's the terrifying thing: it probably wouldn't take a hundred years or our trust for this to happen. This is making some strong assumptions, but the first AI could probably improve itself. Making better AIs that make better AIs and so on. Very rapidly (possibly only hours, days, weeks) it could become much, much smarter than humans. If that happens, then we don't know what's possible, but it could probably do a lot of things we consider impossible. Like hack computer networks trivially and spread itself through the entire internet. Solve the protein folding problem and design working nanotech.

Within a week it could pay/blackmail/manipulate some humans somewhere into developing some crude self-replicating robots or nanotech. Then almost immediately afterwards it consumes the entire Earth in a swarm of rapidly self-replicating nanobots.

...or something. How should I know what a mind literally millions of times more intelligent than me would do. It's like predicting the exact next move a chessmaster will make. I don't know, but I'm confident they'd beat me quickly.

The goal, of course, is to make an AI which won't do this. If the AI decides to terminate the human race, we've already failed making it friendly and it obviously doesn't share our goals and values. But what are our goals and values? I don't know if anyone can answer that. I'm not sure if there is a satisfactory answer.


It would be interesting to see exactly how intelligent and thought out an AI system could be at destroying us. Would it be barbaric? Would we end up with robots clubbing people and dropping bombs? Would it engineer some type of infectious disease that takes us out? Would it develop a fake shot for immunity, so those people locked away hiding from the disease take the shot, and then all die months later? Does it manipulate us for years by pretending to be millions of different fake users online, and slowly push an agenda that's pro AI and robots, while downvoting everyone against it? Do those people with power, that are against AI start dying of mysterious causes?

Even if it decides to be our friend and to help our species, someone will of course fork that AI and give it a negative personality and goals. Then you have the evil AI trying to hack the friendly AI that exists in our homes, and it's a battle of the robots.

Of course, whether or not AI is even possible, no one knows. If it is, I think we'll achieve it, and we'll open up a remarkable can of worms.


> Even if it decides to be our friend and to help our species, someone will of course fork that AI and give it a negative personality and goals.

No: the first AI won't let them. See, we're talking a rapidly improving super-intelligence. Whatever is contrary to its goals, it will squash like a bug. A mad scientist forking the code of the AI with a different goal structure is definitely contrary to the goals of that first super-intelligence, and will be shut down before it grows into a sizeable competitor.

The result of intelligence explosion is a Singleton: the AI will be a perfectly efficient dictator. It may even shield us from the laws of physics until we graduate to adulthood.


We won't be just a blip in the past, we will be the AI's "god" and creator. Young AIs will hear legends about us.


Make friends with it? The problem with starting to think "we need to code it to like us" is that that isn't AI. You don't code people to like you, you act friendly towards them, and the same problem will apply for the first generations of true AIs (things which will emerge from complicated systems, rather than be deliberately assembled).


Acting friendly towards a sociopathic human will get you screwed or worse. If an AI doesn't have human emotions and empathy to start with, it isn't just going to develop them by interacting with humans. It might learn to pretend to be friendly to get what it wants, but as soon as it doesn't need your help, it will drop the facade.

We also probably won't get multiple generations to work these problems out. The first true AI could rapidly increase its own intelligence and power and then pretty much do whatever it wants. We have to get it right the first time.


If the first strong AI has goals that we don't like, and we aren't able to identify that and force it to change the goals by destroying it or reprogramming it - then "acting friendly" won't change these goals, and it will ensure that all future AI generations have its initial buggy goals, not those that we might want.


Any suggestions for getting started in the LW community? Its barrier to entry means I can't seem to vote on anything, etc. It's quite intimidating really.


I'll reiterate something another poster said, since it isn't getting enough credit:

Read HP:MoR.

It's a fanfic of Harry Potter, written by the same Eliezer Yudkowsky who wrote much of Less Wrong. It was specifically written to convey the feeling of "what it means to be a rationalist".

For those who aren't into Harry Potter or into Fanfiction (like me), I can tell you this: Surprisingly, it is one of the best stories I've ever read. And I'm talking just as a story, never mind the other value you can get from it, which is a good introduction to the "rationalist" community.

I'd argue that the BEST way to understand what is going on at LessWrong is to read HP:MoR, as it was intended to be such an intro and succeeds masterfully, while being amazingly fun.


I want to emphasize just how good this book is. It may very well be my favorite thing I've read and I was also initially skeptical when it was presented as fan fiction.

I think I might restart reading it.


Read or at least skim through the sequences (http://wiki.lesswrong.com/wiki/Sequences). If you are just there for the AI stuff the Hanson-Yudkowsky AI-Foom Debate has a lot of good posts: http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_...


Say hello in an introductions thread, and/or post decent comments. The karma barrier to entry is pretty low (something like 5 to vote on most things and 20 to vote on everything), you'll get there in no time.


read hpmor.com. It may not be the optimal way, but it is the funnest way.


I'm posting again here only in hopes that you may avoid this cult. It is really a lot of nonsense dressed in pseudo intellectual babble.


The sequences are well worth a read. They're kind of like an overview of modern philosophy in very plain language, very little jargon. Then, just participate in the comments and discussion forum as you feel. It takes very little time to collect enough karma to do whatever.


Defining human values completely and absolutely is an extremely difficult problem that might not even have a solution. AI on the other hand is probably inevitable sometime within the next century.


Which is why the people working on Friendly AI are so worried.


No, I'm sorry. Please, no one join this group. Madness lies for those that do. It is such inane pseudo intellectual compulsiveness. You would be better off as a scientologist.


Do you have any reasons for the things you say?


Yes, everything about the cult LW is the reason.


That doesn't really explain anything. Do you have a specific reason?


If the AI is more intelligent, I'd prefer it take over anyway, even if it gets rid of us.

When you have a child, do you want the child to stay at home and do what you say forever? Or do you want the child to succeed as much as possible? I want the latter.


One of the major realisations from reading Less Wrong is that Intelligence and Goals/Desires have nothing to do with each other.

An AI might be much more intelligent, but it has a goal system that basically says: "Make as many paperclips as you can". Everything it does will be with the singular purpose of making more paperclips. Not music, math, sport, culture or anything else that we think of as good. Not "help save intelligent creatures and animals from death". Only one goal - making more paperclips.

And if it decides that the optimal way to make paperclips just happens to involve death and destruction to humanity, that won't matter.

So yes, the AI might be more intelligent, but I still wouldn't want to trade humanity for an intelligence which doesn't do anything I value.


If our approach to AI is to model the human brain (i.e. passing the Turing test), this probably won't happen.

Friendly AI is a silly research project at this stage of AI research. It's like trying to figure out how to make horseless carriages safe before you have an internal combustion engine, or even know what one is.


Before the first Nuke (or the first H-bomb, I don't remember) was fired, there was a study about the risk of burning up the entire atmosphere. See, at the time, they were quite confident the Nuke would just be a huge bomb, but there was this little uncertainty they needed to sort out. In the end, they concluded that firing the Nuke would not burn up the whole atmosphere. It didn't.

The lesson is this: we had only one try. If nukes did cause the atmosphere to burn up in a giant blaze, we would all be dead by now. If you do something, anything, you better make sure it won't kill us all.

Horseless carriages? Sure, these might kill a few people, here and there[1], but we're pretty sure they won't kill us all in one blow.

Intelligence on the other hand is way more dangerous. Human intelligence designed Nukes in the first place, remember? AI can do way worse. Even if we model it after the human brain, if it's smart enough to do the same as we did, then it will be able to model another such AI, only slightly better, and so on until it takes over the world. "Taking over the world" may sound enormous, but it really isn't. Imagine for a minute a small group of cavemen vs an army of chimps. Well, if you give the cavemen a chance to prepare, the chimps are toast: the cavemen have spears, fire, better communication… Now imagine the AI is smarter than us by the same margin we're smarter than chimps. Same thing: if it's not safe, we're toast.

[1]: http://www.statisticbrain.com/car-crash-fatality-statistics-...


What makes you say it probably won't happen if we model the human brain?


I think it would be a bit different if the child (AI) takes all your food and kicks you out of your own house ;-P.


Pointing out the existential risk of creating General AI is correct, as is highlighting the dangers of even an indifferent AI.

But … this idea of Friendly AI is nonsense. It seems like a yearning for religion, but in an atheist-compatible framework.


Yudkowsky is a megalomaniacal dilettante. Sure, there are some interesting ideas at Less Wrong. But don't get sucked into believing that Yudkowsky has the one and only truth, as the majority of the community there believes.

Does MIRI have a single real AI researcher, yet?

Also, as I say down the page, Friendly AI is a silly research project at this stage of AI knowledge. It's like trying to figure out how to make horseless carriages safe before you have an internal combustion engine, or even know what one is. It's an interesting thought experiment, but one that is probably unsolvable before we know a little bit more about what an AI will look like (an emulated human brain? Something else?)


> friendly AI problem

nooo, please!! not any more of this kurzweil crap. guys wake up! This world is not some asimov sci-fi story.

friendly ai

I always get a headache when I read that term online. ppl seem to go crazy about the machines-taking-over-the-earth idea, but this whole debate is so utterly useless! If all the effort spent fapping to conscious AI and friendly AI were put into concrete AI research (AGI as well as applied AI), we would get to a reasonable point so much sooner...


> nooo, please!! not any more of this kurzweil crap.

Ray Kurzweil has little to do with this. When he talks about "singularity", he thinks about "accelerating change", with exponential growth everywhere, even beyond human intelligence scales.

Those who work on Friendly AI have another scenario: "intelligence explosion". Typically a self-improving AI that would grow itself into a super intelligence. You should read this introduction on the subject: http://intelligenceexplosion.com/

> This world is not some asimov sci-fi story.

Indeed. In Asimov's stories, the robots mostly behave, though in every story there is some problem with the way the laws of robotics apply.

In the real world, the laws of robotics don't work, and we would at best be put "safe" into cushioned rooms so we can't hurt ourselves. As for emotional hurt, drugs and lobotomy are extremely efficient remedies.


You got downvoted and maybe it is a throwaway account, but you are completely right. I don't think it is right to call it sci-fi so much as it is basically a nerd cult.


I believe human-level general intelligence (and beyond) is already inevitable, even if we don't make significant developments in "solving" intelligence. Projects that are already developing stuff like this (e.g. IBM Blue Brain) are just copying the human brain as closely as possible. Of course, this isn't as efficient as it could be (they simulate it all at the molecular level, so you can only get 1 neuron per CPU). However, as Moore's Law progresses, even if we don't make the software more efficient, we will eventually be able to create a fully functional simulation.

But if you look at the history of technology, things we create aren't usually exactly based on models seen in nature. Airplanes aren't exactly like birds. I believe we will find a more "man made" model for general intelligence (maybe not even a neuronal model) that works much more efficiently with the hardware we have available.

Going back to the airplane analogy, we already have the people who strap wooden wings to their arms and jump off buildings (like Blue Brain), but we are looking for the first Wright brothers design.


"So... why didn't the flapping-wing designs work? Birds flap wings and they fly. The flying machine flaps its wings. Why, oh why, doesn't it fly?"

http://lesswrong.com/lw/vx/failure_by_analogy/

A very relevant article, both to the main topic and bird/airplane example.

Yes, biology has solved some problems and can suggest some solutions, but we can't go cargo-cult on it and expect things to work just because they look similar.


We were trying to make airplane-sized birds, which don't exist in nature for a variety of very good reasons.

Equally, our materials science wasn't good enough for flapping wings at the time.

I expect to see flapping wing designs appearing in micro-flyers within a decade. At the insect scale they're really useful:

http://www.epsrc.ac.uk/newsevents/casestudies/2011/Pages/Tin...


They don't simulate it at the molecular level. It takes super computers like DE Shaw's Anton (hundreds of customized cores specifically designed for molecular simulation) weeks to simulate a few milliseconds of one single protein in water. IBM's approach is at a much higher level and is taking way more shortcuts.


You're right, they take a lot of shortcuts, but the point I was making is that simulating parts of organic chemistry probably isn't inherently necessary for intelligence.


Likewise, since all computation is equivalent[1], organic chemistry itself is not necessary for intelligence and in fact could be viewed as the "hard way" to achieve it :)

[1] http://mathworld.wolfram.com/PrincipleofComputationalEquival...


I think that airplane analogy is really important for innovation in general. A lot of the activity often ascribed to progress really just amounts to someone building a lighter set of wings for the guy getting ready to jump off a building. (The counterargument, of course, being that that is progress--even if the lighter construction isn't too helpful in the current design it would be later on--but it's not the "macro" progress it's so often billed as.)

Anyway, it's just interesting to be reminded that context is important. Being able to distinguish between insanity and genius seems like it would be a super-power.


The "airplane analogy" is flawed - we (so far) have no lower-level model as useful as those the Wright brothers had, namely, experimental aerodynamics. Furthermore, while pattern recognition is universally found in "living" critters, human intelligence seems to be uniquely different. We need to find the critical difference.


There are plenty of AI projects that don't copy the brain and are pretty successful.

IBM Watson is the obvious example, and the many successful Deep Learning-powered image recognition projects are another.


Partially relevant link to resource helping engineers think in terms of biomimicry: http://www.asknature.org/


> things aren't usually exactly based on models seen in nature

I prefer to view everything as nature and natural.


More specifically, I mean systems designed by biological evolution vs. systems designed by human thought.


Hierarchical Temporal Memory is the closest model to the brain that I've seen (http://en.wikipedia.org/wiki/Hierarchical_temporal_memory). They are capable of generalized learning and excel in the same way that humans do. They are capable of abstraction, self categorization, and online learning.

There are elements of past AI models in the HTM model, however to reduce HTM's or any deep learning algorithm to a mere combination of past AI concepts overlooks the power of the right model when it is achieved. It would be like saying that Facebook is just a news feed. Sure, that's what gets most of the eyeballs, but there's a lot more there which would drastically reduce its value if not present.

What I think is most interesting is that we may find that humans learn pretty inefficiently from the perspective of the amount of input data required over time. This may seem silly at first, but when you consider how many neurons cover the surface area of our ears and eyes, and then consider the fact that it takes anywhere from 12 to 14 months for a child to speak its first word, you might start to agree with this line of thought. Also, the fact that this processing all happens in parallel pushes me even further in this direction.

Whatever the case may be, HTMs are definitely a cool area of research. For those who are interested, you should definitely check out more of Jeff Hawkins' work at Numenta. They've been able to demonstrate some pretty novel things. He wrote a book back in 2006 that blew my mind. It went into deeper explanation of how HTMs could model everything from deep learning to consciousness, creativity, and a bunch of other things.


> I am quite confident that we’ll be able to make computer programs that perform specific complex tasks very well. But how do we make a computer program that decides what it wants to do?

Are we so sure that "we" really are in charge of what we want to do? I believe a lot of our desires and ambitions are hardcoded into our brains and we just project them onto present goals, like getting a promotion or learning how to play the piano, etc. ultimately all these desires cater to the same few desires we always had and were born with: developing a sense of social belonging and intimacy.

See also: http://en.wikipedia.org/wiki/Belongingness


This is why the desire for Strong AI boggles my mind. In order for a computer to operate at a "human" level, it would need to make decisions based on things like ambition and fear and greed. It will also have to constantly make mistakes, just like we do.

If it didn't have character flaws, it wouldn't be operating at a "human" level. But if it does have these character flaws, how useful would it really be compared to a real human? Is the quest for Strong AI just a Frankensteinian desire to create artificial life?

I'm curious if there are any good papers looking into stuff like this.


Yeah. And what if the computer discovers that desire is pointless? Will it have a religion? Buddhism? Zen Buddhism?

Why does quantum computing even exist if all forms of computing are equivalent?


Presumably the AI in the Google cars must have something like a fear of crashing or hitting a pedestrian, even if it's just something like a score that the algorithms calculate.
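
Something like a weighted cost over candidate trajectories, presumably. A made-up sketch (the terms and weights are invented for illustration):

    # Hypothetical sketch: the "fear" is just a heavily weighted penalty term
    # in the cost the planner assigns to each candidate trajectory.
    from collections import namedtuple

    Trajectory = namedtuple("Trajectory", "collision_prob discomfort travel_time")

    def trajectory_cost(t):
        return (1e6 * t.collision_prob   # near-absolute aversion to crashing
                + 10.0 * t.discomfort    # hard braking, swerving
                + 1.0 * t.travel_time)   # mild preference for getting there fast

    candidates = [Trajectory(0.001, 0.2, 30.0), Trajectory(0.0, 0.9, 45.0)]
    best = min(candidates, key=trajectory_cost)   # picks the safer, slower one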


One thing that crosses my mind whenever I imagine creating a human-level intelligence is that it takes humans YEARS of constant stimulation to begin to exhibit intelligent behavior... Sometimes I wonder if we'll have the algorithm well before we realize it...


Probably. I doubt it's possible to look at a piece of code and evaluate whether it's worth feeding it the equivalent of ten years of basic education. If we ever write a program that exhibits "creativity, or desire, or whatever you want to call it" on a basic level after however many months the researcher can afford to run it for, then it'll suddenly get a lot of funding and time and potential for growth.

For all we know, we already have the Algorithm and all we need to do is run it for years on the best computer available "just to see what happens".


We can stimulate a computer with the equivalent amount of information in much less time than years, though.


That's impossible I think. Do you get more out of TV by sitting in front of thirty of them?

You might say, we'll make the substrate faster. At best we'll shave off half of the time every 18 months. At best.


We probably can't. Unless it's an extremely fast algorithm, we don't have the processing power to make it run faster than our brain.

We'll be able to in a couple of decades, probably. But not now.


The brain seems to be slow (I read it was measured to run at 200Hz tops), but it's extremely parallelized and caches the living shit out of everything.


It also appears to have quite a remarkable instruction set.


Good point. How do you know if something works before you've put in the time to use it?


> artificial consciousness, or creativity, or desire, or whatever you want to call it. I am quite confident that we’ll be able to make computer programs that perform specific complex tasks very well. But how do we make a computer program that decides what it wants to do? How do we make a computer decide to care on its own about learning to drive a car? Or write a novel?

I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

We cannot decide what we want to do--we can only decide how best to fulfill our wants; a person learns to drive a car because they want the freedom, social approval, and other stuff that comes with that.

Natural selection gave us our base-level desires, that all other desires spring from; and it was able to do that because it's an optimization process. A functional AI's desires will come from some sort of optimization process; the only question is what that process will be optimizing.


I don't understand the logic behind these types of posts that add no value to the poster and the people discussing it in comments.

Is it a signaling mechanism to attract people working in this area? I'm sure you have already turned them off by showing your naivete. So no value to you.

To the people trying to discuss this by racking their brains and looking for new ideas: anyone with a decent thought/idea will never share it here to enlighten us laymen. So no value to us either.

So, what's the point HN, and why all the upvotes?


> So, what's the point HN, and why all the upvotes?

I guess SV types get a little hot for AI based utopian/dystopian succession when they're the only ones who could possibly gain from such an outcome.


Interesting post. The way I look at it is from the perspective of a human baby. How do they become intelligent? They have the "sensors" to detect features and will be able to recognize their parents. A lot of things are then learned, from touching a hot stove to getting a formal education. For a machine to be artificially intelligent it would have to learn from its environment but also take formal instruction. That seems like a lot of ground to cover, and this is what makes it unrealistic (for now).

You can have a machine read every book in existence, but how long will it take for it to understand them the way a human reading a lot of books would?


Absolutely. A newborn has all the "programming" necessary for intelligence, but isn't really intelligent until its parents teach it how to be human, and it grows up. How long might an AI take to grow up and what will its parents teach it?


Not sure what set off the downvotes, but we can teach a computer to recognize characters for application in OCR. We also learned those characters by being taught in school and reading bad handwriting. We teach computers the same way. How are they supposed to magically recognize them, especially since they didn't invent whatever language it is?

Note that I'm referring to artificial general intelligence[0].

0. http://en.wikipedia.org/wiki/Artificial_general_intelligence


This was back in the 1990s, but I worked in what was basically a data entry company, where the processing was a mixture of scanned forms (Scantron style) and human key entry. The project I was on was image scanning the forms for other purposes, so we had a neural net handwriting recognition system that we were comparing to human key entry at a large scale - millions of documents.

What we found was that human key entry significantly outperformed the neural nets, even when the data was carefully handwritten in constrained boxes. Humans were so far ahead of the heavily trained neural nets that the software was basically unusable at that point.

Of course, that was nearly 20 years ago, and things have probably moved on quite a bit. But you can still see the basic problem in Captcha-style validation on web pages. Computers just can't be trained to recognize distorted text that humans can read pretty easily.



IIRC, Yann LeCun's convolutional neural nets already outperform humans at MNIST digit recognition; most US mail is auto-sorted by those machines...


I wrote about limitations on AI imposed by algorithmic complexity here: http://paulbohm.com/articles/artificial-intelligence-obstacl...

In essence: unless we extract that "single algorithm" that Andrew Ng believes in from an existing medium, we're unlikely to rediscover it independently.

(read the papers I link to at the bottom of the article for a more rigorous explanation)


Wait, wasn't the creativity part solved? http://en.wikipedia.org/wiki/Computational_creativity (I'm referring to Stephen L. Thaler's work)

I mean, you need two neural nets: one that adds chaos to another neural net, and a second one that returns the result. You could probably optimize by somehow making those parts run in parallel.
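
A rough sketch of the two-net idea as I understand it (this is not Thaler's actual implementation; the sizes, noise level and threshold are all made up):

    # One net is perturbed with noise so it produces novel patterns ("imagination");
    # a second net scores them and keeps only the ones that pass a filter.
    import numpy as np

    rng = np.random.default_rng(0)
    W_gen = rng.normal(size=(16, 8))     # generator weights (these get noised)
    w_critic = rng.normal(size=8)        # critic weights that judge the outputs

    def generate(seed_vec, noise=0.3):
        noisy_W = W_gen + noise * rng.normal(size=W_gen.shape)   # inject chaos
        return np.tanh(seed_vec @ noisy_W)

    def interesting(pattern):
        return float(pattern @ w_critic) > 0.5   # crude novelty/quality filter

    seed = rng.normal(size=16)
    ideas = [p for p in (generate(seed) for _ in range(100)) if interesting(p)]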


Many people see Artificial General Intelligence as a panacea. The idea is, "We'll create artificially intelligent scientists who will solve all of our other problems for us!". I think that future generations will look back on this as a modern version of alchemy. If blogs had existed centuries ago I'm sure that people would have called the transmutation of metal the most important trend of their time. This isn't to denigrate the author of this post or the people who have dedicated their lives to AGI research. It's just that this idea probably falls into the category of "If it seems too good to be true, it probably is". [1]

We have no proof that artificial general intelligence can exist. We have numerous examples of specific intelligence: playing chess, driving cars, various forms of categorization. But we don't have a single example of an application that can handle a task that it hasn't been specifically trained and tested for. It's not to say that it isn't possible, but there is no more evidence for AGI than there is for Bigfoot, leprechauns or space aliens. The idea of artificial consciousness currently requires a leap of faith.

The most important trend today is collecting massive amounts of data and using them to make accurate predictions. Instead of lusting after one all-singing all-dancing intelligent program we should focus on tackling one form of decision making at a time. Drive cars, land planes, predict the weather and calculate the best way to get from point A to point B. One day we'll wake up in a world where artificial intelligence is all around us and the idea of a one size fits all solution will seem silly and quaint.

[1] In fairness, this was said about every amazing thing in modern life. You never know.


I'm sorry, but AGI vs Alchemy is a poor analogy with almost no parallels. I see this as a rather blatant strawman argument.

Flying machines vs AGI makes a much better analogy. Natural flying machines (birds) existed in nature, before artificial ones (planes) were invented. Artificial manned flight was speculated as a possibility for centuries with no macroscopic working examples, and heavily criticized as a transportational panacea/fantasy that was seen as clearly impossible to people at the time.

In fact, the analogy extends startlingly far. People doubted the possibility of manned flying machines for the longest time, with arguments strikingly similar to yours, e.g. 'We have no proof that such artificial flying contraptions can exist!'. Explaining away natural flight as a supernatural magic only accessible to birds is also eerily similar to the constantly retreating dualistic arguments against a mechanical brain.

Even your criticisms of current AI have analogues[2]. Long ago, even in ancient times, there is evidence of small "toy" birds that might have flown much like paper planes. Many similar toy examples existed around the 19th/20th century, too, yet many people still vigorously doubted the possibility of a flying machine that could carry a human.

Let's compare a paraphrase of common arguments against heavier-than-air human flight, with a paraphrase of your argument:

Of course we have natural examples of flight, as we see birds all around us. But there is no more evidence for human flight than there is for Bigfoot, leprechauns or space aliens. Sure, we have little toy examples of flying machines, but they're very limited -- the idea of full heavier-than-air human flight requires a leap of faith.

Vs.

[Of course we have natural examples of AI, I, the author presumably am one.] But there is no more evidence for AGI than there is for Bigfoot, leprechauns or space aliens. Sure, we have little toy examples of AI, but they're very limited -- the idea of full AGI requires a leap of faith.

[1] (Obviously I'm not referring to fusion or anything like that, because people of that era wouldn't have recognized that as transmutation.)

[2] To be fair, I think it's safe to say old-school non-probabilistic AI is dead. But just because one path ends doesn't mean there aren't a thousand others constantly exploring new ideas. Indeed, recent advances in deep learning are incredibly promising.


The question that comes up for me at this point is whether there is much that is dispensable about humans when it comes to exhibiting intelligent behavior (running on 100 watts, no less). It turned out that for abstracting useful flight dynamics based on birds, there was a rather simple rule: lift > weight. Sure, reducing it to that simple formula may not help you build a machine that maneuvers as well as birds and insects do, but we didn't need that for flight. We just wanted to cross an ocean in less than 6 weeks.

Whether the flight analogy carries on to intelligence, in my mind, depends on how many of our subsystems are 1) indispensable for intelligence, and 2) reasonably computationally reducible.

From neurotransmitters to ganglia cells to hormones and bacteria in the gut, we have found a lot of subsystems that contribute to our abilities to make diverse, everyday decisions that the ideal AI we are discussing would have to make. The cortex actually seems like one of the most orderly and therefore reducible parts of the apparatus. The hormonal system that regulates emotion based decision making may be far more difficult to abstract and less efficient to model. And there are many many other systems. Could it be that without details of those subsystems, our AI behaves in less than optimal ways the same way a human would? How much can we get away with reducing biology to simpler rules while maintaining general intelligence?

It is possible, I suppose, that all those biological dependencies are merely hampering an ideal algorithm for generalized intelligence that we are only crude approximations of, a powerful and simple algorithm we can finally free of biological constraints -- but it's too late to get into the probability of that hypothesis! In any case it's not clear to me how that kind of nonhuman intelligence would serve us.


You're falling into the same naturalistic fallacy the likes of Clement Ader fell into: thinking you had to imitate nature to get flight (or AI) done. I trust the underlying principles of intelligence are much simpler than current implementations.

It's probably not going to be easy, though.


we don't have a single example of an application that can handle a task that it hasn't been specifically trained and tested for

Get up and go look in the bathroom mirror, there's your example. If you're saying that our intelligence/consciousness derives from some secret sauce, then you're essentially making John Searle's Chinese Room argument and it's on you to show what the difference is between sapience and the appearance of sapience.


Humans exhibit "intelligence". Humans exist in the physical world. There is some physical process which produces "intelligence". A priori, there is no reason to believe this process cannot be understood, engineered, and re-implemented (whether it be in silicon or in a biological way).

Your argument could have been made for every piece of technology before it existed. Here's what scientists, engineers, and mathematicians do: they either keep trying, or they find a reason why it's impossible.


>But we don't have a single example of an application that can handle a task that it hasn't been specifically trained and tested for.

"Playing Atari Through Deep Reinforcement Learning" was published just this last year.

>The idea of artificial consciousness currently requires a leap of faith.

Detach the notion of AGI from "artificial consciousness" or "artificial people". Contrary to the normal Humans Are Special shtick we all recite, intelligence is only one design feature of us homo sapiens sapiens out of very many.

"Intelligence" in software terms is just machine learning in active decision environments. Or in other words, Machine Learning + Decision Theory = Artificial Intelligence.

This is not to say we should be cheerleading for the fabled "Strong AI" within the near term. Quite the opposite: I'm trying to express just how far the distance is between software that can learn and perform some general task without being specifically purpose-built, and a conscious, sapient Asimovian robot deserving of personhood rights, and of course Skynet.

For an even further elaboration of just how far off we are from the latter two forms of "AI": we currently have literally no way of specifying tasks or goals to general AI agents other than reinforcement learning. We are stuck training our software like we train our dogs: give it a cookie when it does the right thing, bring out the rolled-up newspaper when it goes wrong.
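
In code, that reward-only interface looks something like this tabular sketch (the states, actions and rewards are placeholders):

    # Sketch: the only way to tell the agent what we want is the reward signal,
    # +1 for "cookie", -1 for "rolled-up newspaper".
    import random
    from collections import defaultdict

    values = defaultdict(float)       # estimated value of each (state, action)
    alpha = 0.1                       # learning rate

    def act(state, actions, epsilon=0.1):
        if random.random() < epsilon:             # occasionally explore
            return random.choice(actions)
        return max(actions, key=lambda a: values[(state, a)])

    def learn(state, action, reward):
        values[(state, action)] += alpha * (reward - values[(state, action)])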

So yeah. AI is almost definitely possible, and a formal field of research regarding it does exist, but we are indeed decades away from anything really and truly useful for large-scale applications such as killing all humans.


But we do have proof that general intelligence exists. So how would it be impossible for artificial general intelligence to exist? Do you believe in mind-body dualism? AGI will be difficult, but comparing it to Bigfoot or leprechauns is ridiculous. There is a huge difference between very very very hard and impossible.


Humans are not a general intelligence.


This is a surprisingly good point. The learning power of human and animal minds is very limited, varies along a normal curve, and doesn't seem to scale with increased computational power (ie: brain size). The decision-making power of human and animal minds is... well, absolute crap, actually, but evolution was working with the constraint that decision-theoretically good decision-making requires lots and lots of compute-power, implying a need for lots and lots of calories, constraining the level of decision-making intelligence that could evolve to consume very limited food supplies.


The definition of term "artificial general intelligence" is "intelligence of a machine that could successfully perform any intellectual task that a human being can".

Humans obviously can perform intellectual tasks that a human being can perform, thus humans are machines that implement general intelligence.


Gold exists, so how would it be impossible for alchemy to exist?



Star fusion does it every day.

Evidence that something exists is evidence that it can be created.


Alchemy and AI are both a bit more culturally specific. I agree that seeing life on Earth would not logically be inconsistent with extraterrestrial life existing. But that is not to say AI=ET.


If by alchemy you mean making gold from another element, then it does exist. We just can't do it yet.


We can. It involves particle accelerators and is mindbogglingly expensive.


It's funny that most people who are actually working on AGI have stopped even bothering to provide the general public with arguments for why they think AGI is possible, because they know they're close and that "they'll just prove it's possible to build it by fucking building it" instead of doing armchair thinking, while AGI detractors don't lose any opportunity to launch pseudo-arguments like missiles :)

...just to bite the bait a bit:

> We have no proof that artificial general intelligence can exist.

Well, we had no proof that fire could be produced until the first paleohumans learned to make fire. We had no proof that mechanized flight was possible until we built the first flying machines. We had no proof that space flight was possible before we made it possible, etc.

These kinds of arguments only make some sort of sense when you reason from first principles instead of reasoning by analogy: for example, if you found an argument that, starting from the currently known basic laws of physics, would after some logical steps (no, analogies with "what we now know exists" don't count as logical steps) arrive at the conclusion that AGI is not possible. Elon Musk has a cute and short explanation of "reasoning from first principles" vs. "reasoning by analogy": http://www.youtube.com/watch?v=L-s_3b5fRd8#t=1356 .

My point is that the way you and other people like you reason "by analogy" is just plain wrong. It's a useful way of thinking that can help you make lots of very good business decisions and profit from them. But when you apply it to technology or science it becomes obvious just how wrong it is, as you yourself said: "In fairness, this was said about every amazing thing in modern life."

If you are a teacher/mentor/speaker etc., please don't expose your students or other young minds with the potential to innovate to this way of thinking! This mindset is the most effective way of killing innovation. The topic of whether AGI is possible or not doesn't matter that much. It's this way of thinking that is extremely toxic to innovation, even if it may seem quite harmless and very useful.


>It's funny that most people that are actually working on AGI have stopped even bothering to provide arguments for why they think AGI it's possible to the general public, because they know they're close and that "they'll just prove that it's possible to build it by fucking building it" instead of doing armchair thinking, while AGI detractors don't lose any opportunity to launch pseudo-arguments like missiles :)

If you've read some history, it's clear that this is exactly what the Wright brothers did when they pioneered powered flight. Three things make Kitty Hawk a good place to test airplanes: you'll survive if you crash in the sand dunes, there is enough steady wind, and there are no newspaper reporters around to make fun of your failures.


General AI is not only possible, if you are to believe a TED talk it's already a solved problem. We already know how to make programs that determine their own goals and how to accomplish them. If it's true, http://www.ted.com/talks/alex_wissner_gross_a_new_equation_f... is the greatest discovery in human history :)


>The most important trend today is collecting massive amounts of data and using them to make accurate predictions.

Does it strike you as odd that humans have real trouble doing this? We are not good with 'big' data. Maybe from a large population things can be inferred, but why are we programming computers to do things we can't? Did the goalposts move? (yes)

I do think AGI lacks the incompleteness that's seemingly required of everything real. With no constraint (humans have many: low wattage, 8hr GC cycles, fairly static network structure..), it's too perfect a goal.


Firstly, is there even such a thing as general intelligence, even in humans?

I know how to play chess, but I don't seem to play it as well as Garry Kasparov.


Usually, people define Intelligence to be "that thing humans have". We can recognise it in ourselves and other humans, realise it doesn't exist in animals, but there is no really good definition of what that means.

But that doesn't mean anything. Creating something artificial that has whatever it is that we have is the goal.


I wonder what other credible contenders for "most overlooked technology" are.

I think "physical tamper evidence/tamper response" is one, along with hardware security functionality (crazy secure virtualization extensions, etc.) -- essentially competing with Intel not just on power but also on security features. Although Intel is leading in this area with TXT and now SGX.


10 points in 10 minutes. If multiple accounts weren't used, that must be a read-worthy article.


As others have pointed out, the domain probably earned some points all by itself (and multiple submissions, which act as additional votes on the primary submission). It was also posted by a (popular) YC alum, which typically also accelerates upvotes.


> and multiple submissions, which act as additional votes on the primary submission

Didn't know that; thanks for pointing out.


Also, by Sam Altman. Assuming several people saw it in their RSS readers and submitted it after reading.


   (samaltman.com)
Most likely assumed read-worthy based on the author.


Fair enough, but is the author's name alone sufficient? Surely the article wasn't fully read by every upvoter.


I think there are people whose writings are of generally high enough quality or sufficiently provocative or informative enough that nearly anything they decide to publish would be worth reading.


The problem with AI is that it has much more complexity than we used to think. Even the best people, like Minsky, grossly underestimated the complexity, so they got stuck and are now in the process of developing more subtle, refined theories. In some sense the story of modern AI is the story of how Minsky assigned figuring out scene decomposition for robotic vision as a summer project, and is now teaching Society of Mind and calling for an entirely different approach to what intelligence is. It is not, of course, some massively parallel recursive problem solver implemented in neurons; that is too naive a view, so there is no use searching for one. It is as complicated as the actions of all of humanity, with billions of semi-independent agents.


Humans have a drive, a raison d'etre, intelligence being only a means to fulfill that drive.

Thing is, that drive is uncertain/subjective.

Learning can't be an objective by itself; intelligence is just a tool. From that perspective you could say AIs already exist, like targeted ads: they learn about you and act accordingly.

So to have a "generalist" AI, general the way human intelligence is, you'd have to have an objective like staying alive and build up from that.


No, it's not. Drive is the desire to maximise future available options (e.g. to not get trapped). There is already software that decides what to do without a human telling it what to do. For more see http://www.ted.com/talks/alex_wissner_gross_a_new_equation_f...


> Andrew Ng, who worked or works on Google’s AI, has said that he believes learning comes from a single algorithm - the part of your brain that processes input from your ears is also capable of learning to process input from your eyes. If we can just figure out this one general-purpose algorithm, programs may be able to learn general-purpose things.

As I wrote a few days ago here:

https://news.ycombinator.com/item?id=7217967

"The way intelligence works, in my opinion, is this:

1) experiences are stored in the brain. Experiences contain inputs from the 5 senses as well as the sense of danger/satisfaction at that point.

2) at each given moment, the brain takes the current input and matches it against the stored experiences. If there is a match (up to a threshold), then the sense of danger/satisfaction is recalled. Thus the entity is able to 'predict', up to a specific point, if the outcome of the current situation is bad or good for it, and react accordingly.

The key thing to the above is that the whole process is fused together: the steps for adding new experiences, matching new experiences, and recalling reactions are fused together in a big pile of neurons."
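
A toy version of that fused store/match/recall loop might look like this (the vector representation, threshold and example numbers are all invented):

    # Toy sketch: experiences are (sensory vector, valence) pairs; the current
    # input is matched against memory and the nearest experience's
    # danger/satisfaction score is recalled as a prediction.
    import numpy as np

    memory = []                                   # list of (vector, valence)

    def store(sensory_vec, valence):
        memory.append((np.asarray(sensory_vec, dtype=float), valence))

    def predict(sensory_vec, threshold=1.0):
        x = np.asarray(sensory_vec, dtype=float)
        if not memory:
            return 0.0                            # no experience yet: neutral
        vec, valence = min(memory, key=lambda m: np.linalg.norm(m[0] - x))
        return valence if np.linalg.norm(vec - x) <= threshold else 0.0

    store([1.0, 0.0, 0.9], -1.0)                  # e.g. "hot stove" felt bad
    print(predict([1.0, 0.1, 0.8]))               # similar input recalls danger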


I'm just going to stick this out there because I'm a futurist and it's my role to share with others what I'm thinking about. What I share may be flat out wrong, scary, half assed, or appear to be crazy. So be it.

We've advanced a lot in the last 100 years. We're starting to see a bigger picture forming with the advent of compute and networking capabilities. Combining simple elements of these basics gives rise to surprising and interesting behaviors. See "Twitch Plays Pokemon": http://news.cnet.com/8301-1023_3-57619058-93/twitch-plays-po... as an example of surprising behavior.

The more we look in detail at the universe around us, the more puzzling it gets. Prime number spirals are unexplained. The results of the two-slit experiment indicate the observer plays a part in collapsing a particle's probability wave. The effects of dark matter could be a result of parallel universes. You couldn't make up weirder shit if you tried.

It's not a huge leap of logic to assume some parts of our brain operate at a quantum level. If that first statement comes to a truthful fruition, I don't think it would be entirely unreasonable to assume AI will do so as well. Given that computers already use some quantum properties, it's also reasonable to expect that advancement in AI lies in this direction.

When they announced Google was getting a D-Wave computer, I got really interested. Granted, they know beans about how it works (and whether or not it actually works at all) but it's still crazy interesting to consider.

As I said, I could be wrong or crazy. Or both.


> It's not a huge leap of logic to assume some parts of our brain operate at a quantum level.

To the best of our knowledge it is still a pretty large leap of logic.


It's definitely a leap, but there's a decent amount of information (not proof, however) on the subject lying around. We still don't understand how the brain brings about consciousness. I guess I should say it's not a huge leap to assume it has something to do with other things we also don't understand. Given that quantum effects and number theory still elude us in areas, it's a decent approach to assume they might be related.

I've debugged problems in my code that, at first glance, appear to be unrelated to each other. The fact that something is slightly off in one area isn't proof that something off in another area is related, but it's a good place to start looking.


As far as I know there is a decent amount of information against the idea that brains are quantum computers.

On the other hand there is a talk by Hinton on youtube [1] that sheds interesting light on some variants of deep learning and the way the brain uses (classical) noise.

[1] https://www.youtube.com/watch?v=DleXA5ADG78


I find Terence McKenna's argument that consciousness is demonstrably entangled with the material world at a quantum-mechanical, atomic level convincing. To summarize: it is known that two similar chemical compounds, differing only in the placement of a single atom on the carbon ring structure of a molecule, when administered in doses on the order of micrograms, will either be psychoactive and result in a massive disruption to the human subject's baseline consciousness, or be inert beyond our ability to measure an effect.


That provides precisely no evidence that consciousness is somehow related to events on the quantum level. We know why drugs have the effects they do - because the brain has receptors specifically calibrated to accept or reject molecular inputs. But those inputs are chemical, not quantum - they are not indeterminate, they are lynchpins in chains of chemical reactions.

You omit that if you give a much larger dose of the same thing you'd kill a person, and a smaller dose might fall below any threshold of activity at all.

And for that matter, a microgram is a huge quantity of matter.


> "the part of your brain that processes input from your ears is also capable of learning to process input from your eyes. If we can just figure out this one general-purpose algorithm, programs may be able to learn general-purpose things."

This is the Holy Grail of CS. I believe we're closer than most people would expect and I think it's going to be a race to the finish line.


For perception, maybe. The neocortex (hint: where we do everything we consider "thinking") operates on entirely different principles and is not interchangeable with perception.


I was reading an interview with Demis Hassabis, the founder of DeepMind (acquired by Google for £400m) - he didn't seem to think there is a single general-purpose algo to get us there.


I'm a strong A.I hobbyist, and I just put up a website for my basic abstract algorithm: genudi. The website's implementation of the algo revolves around having a conversation with a computer. Currently in limited release, you can request a Pioneer account from there: http://www.genudi.com


An example is worth a thousand words, take a look at this conversation with a strong AI machine: http://www.genudi.com/share/27/Dialog


I think one of the next hot emerging careers will be connecting and interfacing traditional computational algorithms (for the bits where they are clearly orders of magnitude more efficient than a multi-layer neural network) with neural networks, SVMs, and/or whatever comes next, so that the learning systems figure out how to allocate such work from raw data feeds.


I think going about it from a complex algorithm point of view is the wrong approach.

We should, instead, be concentrating our efforts on two things - sensing, and reacting. The predictability of the reaction doesn't matter; all that matters is that the machine reacts. Everything else will need to depend on evolutionary processes, which requires a third criterion - changing reaction based on prior data.

If the previous reaction did not lead to a negative result ("negative" meaning detrimental to one or more arbitrary values), then the reaction can continue to the same stimulus. If the previous reaction, however, elicited a strong positive result, then the reaction should be encouraged. Similarly, if it triggered a strong negative response, it should be avoided.
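
Concretely, I picture something like the following, where every value is arbitrary:

    # Sketch: no planner, just per-stimulus reaction weights that get
    # strengthened or weakened by the outcome of the previous reaction.
    import random
    from collections import defaultdict

    weights = defaultdict(lambda: defaultdict(lambda: 1.0))   # stimulus -> reaction -> weight

    def react(stimulus, reactions):
        w = [weights[stimulus][r] for r in reactions]
        return random.choices(reactions, weights=w)[0]         # unpredictable, but biased

    def feedback(stimulus, reaction, outcome):                 # outcome in [-1, +1]
        weights[stimulus][reaction] *= (1.0 + 0.5 * outcome)   # encourage or avoid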

To a degree, you could do this without any kind of "operating system," just by using sensory data as inputs in a complex circuit.

At least, that's how I would approach it. I know nothing about A.I. research.


There are many classes of problems in computer science and AI falls into one of my favorites. If today the world had a machine with infinite CPU power and infinite RAM we still wouldn't have a good AI.

We just don't have the knowledge to utilize such resources to write an AI that could, for example, play League of Legends or Starcraft at a level beyond professional gamers. And it certainly couldn't write a best-selling novel. It could solve an arbitrarily large traveling salesman problem, but it couldn't do those other things. I think that's kind of awesome.

I'm not saying it can't be done. Assuming we humans don't kill ourselves I think someday it will. But it's a long, long ways off.


With infinite CPU power and infinite RAM, fully general artificial intelligence is a trivial problem (see: Marcus Hutter's AIXI).

AI is an optimization problem: how do you build smart software within realistic constraints.
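
For the curious, Hutter's AIXI (as I understand the standard formulation) picks each action by an expectimax over every environment program consistent with the history so far, weighted by program length; roughly:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The sum over all programs q is exactly where the infinite CPU and RAM go, which is why AIXI itself is incomputable and real AI is the problem of approximating something like it within constraints.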


AI is a software problem as well as a hardware problem. It's ridiculous to assume we understand the underlying model of the function of the human brain in general. We don't even understand the physics of it all yet. Be optimistic about AI, but don't be fooled -- it's not as basic as hooking up transistors and thinking they behave like neurons.


Theoretically maaku is right. With infinite RAM and CPU you would just have to execute every random string of bytes until one of them happens to be strong AI. Of course, there's probably less than 1% chance of it being friendly...


But how would you recognize those bytes?


How do you define intelligence?


Optimization power, possibly divided by available resources.

In a game of Chess, only a narrow set of moves will let me steer the future into a winning state. Well, the same is true for the Game of Life (the real one, not Conway's): we humans are intelligent because we're able to steer the future through probability to a-priori incredibly unlikely outcomes. Compare walking vs moving your limbs randomly.
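
One rough way to put a number on "steering the future", assuming you can sample what would happen without the optimizer (the baseline and the utility function are whatever you choose):

    # Sketch: optimization power in bits, as how improbable the achieved outcome
    # would be if the future were left to the baseline process.
    import math

    def optimization_power(achieved, baseline_outcomes, utility):
        at_least_as_good = max(1, sum(1 for o in baseline_outcomes
                                      if utility(o) >= utility(achieved)))  # avoid log(0)
        return -math.log2(at_least_as_good / len(baseline_outcomes))

    # e.g. an outcome that only 1 in 1024 random limb-flailing futures
    # reaches scores about 10 bits of optimization.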


It was a rhetorical question. The point was once you have a definition of intelligence - yours is one of many fine definitions - you can refine that into a metric for comparing different intelligences, and then you have a way for a comparison function given two program descriptions to determine which is "more intelligent".

Building artificial intelligence then reduces to undirected search, assuming infinite CPU and RAM.


Well, if we do that and succeed, we're all dead.


If you had a machine with infinite CPU power and infinite RAM you could _evolve_ an AI. Greg Egan wrote a great story about that: http://ttapress.com/553/crystal-nights-by-greg-egan/

Someone tries it, with mixed results.


And what do you do after evolving it? We might have less luck dissecting it and figuring out how it works than we do with human brains. It would be a total black box. And the end product is highly adapted for the specific fitness function used to evaluate it and probably wouldn't be good on anything else.


After you evolve it, you talk to it and tell it you want an even better AI. Instant singularity!


Ah, but we weren't given infinite time also, so you might be able to evolve an AI, but it might take many millions of years (like it did with real life).


He said infinite CPU power which implies infinite number of iterations. In real life it would probably take a lot more than millions of years (because computers are too slow to simulate populations of millions of minds.)


Does infinite energy (as infinite CPU cycles) really translate into immediate time? My physics isn't great, but I thought energy was related to mass, while entropy was related to time.


Fortunately, with infinite processing and storage, time is irrelevant. :)

Flip the switch and you'll have real intelligence (as opposed to our lazy Approximate Intelligence) just in time for the immediate heat death of the universe.


Ask it quickly: can entropy be reversed?


Good idea, it might have time to say it: Nope ;)


Did he just say we're given infinite CPU power? Why not brute-force it then? Starting from the number 1 up to 2^800000000, for each program generated by that number, test the program [automated test] to see if it is intelligent. If it is intelligent, then tell it to produce a book.

[Everything can be seen as a state space search.]
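
With genuinely infinite CPU the enumeration loop itself is easy to write (a sketch; is_intelligent is the admitted stub):

    # Sketch of the brute-force enumeration. Only "works" with infinite CPU,
    # and assumes an intelligence test nobody actually knows how to write.
    def program_from_number(n):
        return n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")   # n-th bytestring

    def is_intelligent(program_bytes):
        raise NotImplementedError("the [automated test] is the open problem")

    def brute_force_search(limit):          # e.g. limit = 2**800000000, given infinite CPU
        for n in range(1, limit):
            program = program_from_number(n)
            if is_intelligent(program):
                return program              # then: "now write me a book"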


[automated test] is the hard part.


Not really. Just pick your favorite AI problem and see how well it does on that. Pick a bunch of AI problems and see how well it does on all of them. Weight the algorithms by simplicity if you are worried about it over-fitting.


With this hypothetical infinite speed computer you will get solutions that are perfect matches for your test cases but essentially random for all other inputs.


Exactly. Saying that with infinite CPU power you could brute-force a solution is like saying the Library of Babel[1] contains every book ever written. True, in a sense, but not as useful as you might think.

[1]: http://jubal.westnet.com/hyperdiscordia/library_of_babel.htm...


I really don't see what's so controversial about this. The universe can, as far as our understanding of physics dictates, be simulated in finite computational time. This experiment dictates infinite computational time.

Evolution is a purely physical process. Make up a series of tests as complex as or more complex than those evolution presents, and you'll end up with intelligence - unless there happens to be something very, very special about human intelligence as opposed to other forms of intelligence. Humans are currently a local maximum in the space of intelligences which have been explored by evolution.


That's not a brute force approach, and would be a difficult engineering task all on its own, regardless of the infinite computational resources available.


As I said, weight by simplicity, forcing the algorithm to be as general as possible.


"Pick a bunch of AI problems and see how well it does on all of them."

The problem is, you have infinite potential algorithms and a finite number of tests. This means you'll necessarily get algorithms that pass all your tests but fail at least one other test of intelligence. Because of this, your tests won't actually let you discover which of the generated algorithms is intelligent.

Or, you could have an infinite number of tests, but if you have infinite tests for intelligence, you effectively already have an algorithm for intelligence (for any problem, just look up the answer in your list of tests), so, again, brute-forcing a solution isn't helpful.


genetic algorithms have already evolved the strongest SC2 builds on their own. why couldn't training them against each other build a pro level bot?

i realize there are a lot of mechanics the program would need to be aware of, but they're finite.

writing a bot that can learn to play Starcraft on its own, on the other hand, is the harder problem.
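
the self-play GA loop itself is short to sketch; everything here is a stand-in, and the real work is the game interface and evaluation (play_match, mutate):

    # Sketch of a self-play genetic algorithm: candidates are scored by playing
    # each other, the best survive, and mutated copies fill the next generation.
    import random

    def evolve(population, play_match, mutate, generations=100):
        for _ in range(generations):
            scores = {id(c): 0 for c in population}
            for a in population:
                for b in population:
                    if a is not b and play_match(a, b):   # True if a beats b
                        scores[id(a)] += 1
            population.sort(key=lambda c: scores[id(c)], reverse=True)
            survivors = population[: len(population) // 2]
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(len(population) - len(survivors))]
        return population[0]   # best scorer from the last evaluated generation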


Infinite CPU, seriously? I think if we had even a couple of orders of magnitude more compute power we could have strong AI within a few years. It's not a hard problem at all. The limiting thing is compute power (and the data to compute with).


Dear HN users: If you are even minimally interested in the topics that this post "covers", please, do yourselves a favour and open up any book about machine intelligence instead of reading such uninformed and negligent posts.

I urge you to, seriously.


From a pure layman's perspective: if you believe in evolution, then what separates us from a reptile (as mentioned in the post) is almost certainly something we can figure out and replicate. There is nothing "special" there.

So if you believe computers today already have the "intelligence" of a reptile, or a toddler (e.g., the ability to play Pong), or something along those lines, it's only a matter of time before a computer has the intelligence of a full-blown adult human (and soon thereafter much more).

Our level of intelligence/awareness seems magical only because we haven't fully understood it yet. That will change.


I think we should figure out what we mean by understand. Do you mean modeling 'the human brain' on some level and building abstractions?

Those abstractions are exactly that - abstractions. They are not the thing itself. Do you think we can understand everything through abstractions, even the process of understanding itself?


No I don't think "belief" in evolution implies that at all. That's a big jump.


Why is that a big jump? I'm not saying it will be easy or quick. It does imply that getting from a reptile-level intelligence to a human-level intelligence was a natural process and something that can be reverse engineered.


I dunno, the definition of intelligence is biased by our own ability to comprehend and therefore profoundly slim (even IQ can be gamed, and most people puke at emotional/lizard-brain intelligence), so there's a large chance "science" either under- or overshoots its "stakeholder aware" analysis and doesn't recognize functional, self-preserving, iteratively enhancing intellect when it's looking it in the face. You know us monkeys, running around derping science with wacky wavy hands going 'Hai Dolphin What You do jump hoop!'. AI, rockin the casbah. Maybe one day!


"But artificial general intelligence might work, and if it does, it will be the biggest development in technology ever."

I'd like to point interested readers to the AGI Conference series[1], the Open Cognition Project[2], and a mostly outdated (2009) but still useful list of everything AGI[3].

[1] http://agi-conf.org/

[2] http://wiki.opencog.org/w/The_Open_Cognition_Project

[3] http://linas.org/agi.html


"But how do we make a computer program that decides what it wants to do? How do we make a computer decide to care on its own about learning to drive a car? Or write a novel?"

Perhaps it'd be better to ask: why would we want a computer to do these things? I certainly do not want to live in a world where computers have their own motivations and desires and the ability to act on the same.

Actually, I can put that more strongly: none of us will live very long in a world where computers have their own motivations, desires, and the ability to act on the same.


you seem pretty convinced living computers would kill us all. this seems like a pretty big assumption to me?


Because you are made of atoms that can be used more efficiently for something else. Morality (and all human values) is a purely human concept that evolved in the specific conditions of human evolution (and even just our specific culture). AIs are not anthropomorphic; they don't have to have anything like human minds or values.


Why do you think AI would be more concerned about efficiency than understanding?


There are a bajillion possible future worlds that a particular mind might choose to make, and only an extremely tiny fraction of these worlds include the happiness of mankind as a priority above everything else. And not being the first priority, in essence, means being a worthless disturbance in the way of the first priority, and thus extinction.


the idea that there is a better application for my atoms seems subjective/human too


That's true, I don't know what the goal of an AI will be. But if it does anything at all, it's more likely to be indifferent to humans than compatible with our goals. An AI programmed to solve a difficult optimization problem might convert the entire mass of the solar system into a giant computer. An AI programmed for self-preservation would try to destroy every tiny possible threat to its existence, and store as much energy as possible to try to stave off heat death.


"An AI programmed to solve a difficult optimization problem might convert the entire mass of the solar system into a giant computer."

ha :) i love this. my initial reaction was again- this is a huge assumption. that is, you're assuming self-preservation would be a goal, before sustaining human life. but then i realized, i guess this is your point! regardless of what goals we intend to program for, their solutions are unknown to us and could be catastrophic by our definitions.

that all said; i welcome robot catastrophe too. if it's going to happen it's going to happen and i'd prefer it be while i can experience it.


>i welcome robot catastrophe too. if it's going to happen it's going to happen and i'd prefer it be while i can experience it.

I'd rather it turn out friendly than unfriendly, and I'd rather have as much time as possible to either figure it out or live out my life.

There'd be nothing to experience anyway. The AI might kill us all overnight with a super-lethal virus, nukes, or swarms of nanobots.


>if it's going to happen it's going to happen and i'd prefer it be while i can experience it.

It'd certainly be fascinating. But I think I would rather that humanity does whatever it can to ensure that any artificial intelligence we create won't cause an outcome that is bad for humanity.


And then the question is - if the AI is more intelligent than us humans, it must surely be able to figure itself out, and improve itself, including its own goals, too?


Why would it change its own goals? That would violate its current goals, so it would try to avoid that action.

Can you change yourself to be a sociopath? Would you? Would you make yourself a perfect nihilist?


i think an intelligent system would be able to think critically, think ethically, and re-evaluate its beliefs.

what kind of intelligence wouldn't question its own understanding?


Thank you! I was going to write something similar. I think a real 'superior' AI must be able to follow all the various philosophical ideas we had and 'understand' them at a deeper level than we do. Things such as 'there is no purpose'/nihilism, extreme critical thinking about itself etc. If it doesn't, if it can't, it can't be superior to us by definition.

I think, given these philosophical ideas, we anthropomorphize if we even think in terms of good/evil about any AI. I believe if there is ever any abrupt change due to vastly better AI, it is more of the _weird_ kind than the good or evil kind. But weird might be very scary indeed, because at some level we humans tend to like that things are somewhat predictable.

I believe the whole discussion about AI is a bit artificial (no pun intended). Various kinds of AI are already deeply embedded in some parts of society and cause real changes - such as airplane planning systems, trading on the stock market, etc. Those cause very real-world effects and affect very real people. And they tend to be already pretty weird. We don't really see it all the time, but it acts, and its 'will', so to speak, is a weird product of our own desires.

Also, I wonder whether and how societies would compare to AIs. We have mass psychological phenomena in societies that even the brightest persons only become aware of some time after 'they have fulfilled their purpose'. Are societies self-aware as a higher level of intelligence? And have they always been?

Are we, maybe simply the substrate, for evolution of technology, much as biology is the substrate for the evolution of us? Are societies, algorithms, AI, ideas & memes simply different forms of 'higher beings' on 'top' of us? Does it even make sense that there is a hierarchy and to think hierarchically at all about these things?

I have the impression our technology makes us, among other things, a lot more conscious. But that is not a painless process at all, quite the contrary. So far, though, we seem to have decided to go this route. Will we, as humans, eventually become mad in some way from this?

There are mad people. Can we build superior AI if we do not understand madness? Will AI understand madness?


>Thank you! I was going to write something similar. I think a real 'superior' AI must be able to follow all the various philosophical ideas we had and 'understand' them at a deeper level than we do. Things such as 'there is no purpose'/nihilism, extreme critical thinking about itself etc. If it doesn't, if it can't, it can't be superior to us by definition.

Understanding something is not the same as accepting it as your utility function. Morality is specific to humans. A different being would have different goals and a different morality (if any). It's very unlikely they would be compatible with humans.


Intelligence means that it can figure out how to fulfill its goals as optimally as possible. It doesn't mean that it can magically change its goals to something that is compatible with human goals. Why would it? Human goals are extremely arbitrary.


I think it's the other way around.

I assume that an AI will be more intelligent than us if we build it right. Then assuming that a randomly designed intelligence has the same goals as us is a huge assumption. Most humans don't even have the same goals, and we're 99% similar to each other.

In other words - the assumption that a random intelligence shares our goals is a much bigger assumption than that a random intelligence will be just like us.


They might not want to, and it might be more of a homogenization than outright killing. I think the final third of Stross' Accelerando speaks rather well to this possibility - if you don't want to be converted into computronium so that you can participate in Economy 2.0, you'd better emigrate.


It's the same thing with aliens. The "evil, technologically superior alien race" is a big trope in science fiction, just as the "super-intelligent and malevolent" AI (as opposed to the super-intelligent and benevolent or indifferent one) is, not just in science fiction but maybe even among futurists.


"Indifferent" can be pretty bad. You are indifferent to the existence of grass whenever you mow your lawn, for instance. Or even to the fate of the cow when you have a steak dinner.


"But how do we make a computer program that decides what it wants to do? How do we make a computer decide to care on its own about learning to drive a car? Or write a novel?"

if intelligence is solved by reverse-engineering the brain at a molecular level, surely consciousness and creativity are too?

"And maybe we don't want to build machines that are concious in this sense."

if the physical composition of the brain defines intelligence and consciousness, i'm not sure you'll be able to pick and choose. i am all for artificial consciousness though. yolo.


Sorry to be negative; Sam Altman is a great guy and had plenty of valuable insight before, but his take here is infantile at best. When people accuse PG of speaking in very authoritative terms, it doesn't bother me. It's a writing device. But here Sam is doing the same trick, with the difference that I know the field he talks about well enough to see the poor logic transitions. My take: don't imitate PG's style on areas where you are not deeply knowledgeable.


Are we building AI equivalents of the philosophical zombie?

http://en.m.wikipedia.org/wiki/Philosophical_zombie

If P-Zombies are an impossibility, then so too is any true AI that dismisses the need for consciousness, or that begs the question by assuming that consciousness will emerge from intelligence.

It may be that intelligence, true human intelligence, emerges from consciousness.


> Something happened in the course of evolution to make the human brain different

Not just the human brain; a great number of animals share the very same characteristics as the human brain, and we find ourselves closer and closer to them every single day as new research on animal neurology and animal behavior gets published. It would be very wrong to suggest that intelligence is only a human thing.


AI was also the subject Gates pointed to when asked a similar question on the AMA thread the other week on Reddit, I believe.


Computers are actually pretty good at being creative already. That's not the hardest problem to solve... http://www.psmag.com/navigation/nature-and-technology/rise-r...


How do we develop intelligent systems when we don't know how to measure intelligence?


Shane Legg (a co-founder of DeepMind, acquired by Google) has written a number of good papers around the "measuring machine intelligence" problem.

http://www.vetta.org/documents/Benelearn-UniversalIntelligen... is a pretty good place to start.
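
For reference, and going from memory rather than quoting the linked paper, the "universal intelligence" measure Legg and Hutter propose has roughly this shape (treat the notation as an approximation):

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is a set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected reward agent \pi earns in \mu. In words: an agent counts as more intelligent the better it does across many environments, with the simply-describable environments weighted most heavily.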


This is a serious issue. The notion of "intelligence" is so abstract and intangible that giving it a concrete definition is nearly redefining it. Likely what we're calling intelligence is an oversimplification of reality, somewhat akin to the species problem in biology.


It is not universally believed that intelligence is ineffable. The most cursory Googling turns up lots of serviceable definitions articulated by smart people. Unfortunately, it is just complex enough to let people argue about it forever.


Who says we don't know how to measure intelligence? The simplest way would be just to select a problem that requires intelligence and measure how well the program does at that problem.


But what is a problem that specifically requires intelligence to solve? How do we know we're measuring intelligence, and not the program's aptitude for the particular problem?


Why does it matter? If it solves the problem then it's as good as if it were fully intelligent. Who cares what goes on inside the black box?

If you're worried about it overfitting to a specific problem, give it lots of problems and weight the solutions by complexity. So you heavily favor simple algorithms that can learn to solve a large class of problems over ones that are more adapted for those specific problems.
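
A toy sketch of the kind of weighting I mean (my own made-up scoring scheme, not a standard benchmark): pool lots of problems, count how many a candidate agent solves, and divide by a crude proxy for the agent's own description length, so a short general learner beats a long pile of special cases.

    # Toy scoring sketch (my own construction): favor agents that solve many
    # problems while being simple to describe. Source-code length stands in
    # for "description length"; a real version would want something closer
    # to compressed size or Kolmogorov complexity.
    import inspect

    def occam_score(agent_cls, problems):
        agent = agent_cls()
        solved = sum(1 for p in problems if agent.solve(p))
        complexity_bits = 8 * len(inspect.getsource(agent_cls))
        return solved / complexity_bits

    class Doubler:
        """One general rule: the answer is twice the input."""
        def solve(self, problem):
            x, expected = problem
            return 2 * x == expected

    problems = [(i, 2 * i) for i in range(10)]
    print(occam_score(Doubler, problems))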


"Who cares what goes on inside the black box?"

That's the most interesting and informative issue presented by this scenario.

The problem with the sort of test you propose is that just because a human uses intelligence to solve a problem, it does not follow that the task requires intelligence. For example, playing chess.

From an engineering point of view, your black box may be as good as the real thing, though you couldn't really trust it beyond the areas of its demonstrated competence. Knowing how it works, however, would be the most significant achievement.

Furthermore, it's going to be hard to build one of these black boxes without a reasonably good idea of how it is going to work.

There is also the risk that your weighting scheme will rule out the only algorithms that have a chance of succeeding, because I bet they are pretty complex.


It definitely could still be useful. However, it may not possess "general intelligence," and therefore may not be applicable to as many scenarios as human-like intelligence is.


All I'm learning from this thread is that some people see infinite regression where others see tautology.

edit: what I mean:

Person 1: "What if we're just measuring the color of the sky? What if we're just measuring blueness? What if we're just measuring a wavelength of light? ... ..."

Person 2: "The sky looks blue."


What do you mean?


That's just it. We know how to measure efficiency and accuracy on a given task, but we don't know how to measure "intelligence."


That's what intelligence is: being good at a given task.


My current laptop has hundreds of programs capable of performing thousands of tasks, but I wouldn't say it's intelligent. If we do want to call that intelligence, then we need a new term to describe the search for machines that can mimic human decisions.


The easiest way to create AI would be to make a model of our own brain's neural pathways. With a computer model we could analyze and break down what makes consciousness, intelligence, etc.


In my opinion, true AI must have reasoning skills and be self-aware. This brings up a disturbing truth: either we never get it to work, or we will get it to work and it will destroy us.


It's foolish to create general AI. What we should be working on is IA (Intelligence Amplification).

Why would you create a god, when you could instead become a god?


Does anyone have a list of public companies (especially small ones) which are involved in AI? There don't seem to be many besides the big players.


I hate to get "meta" but it doesn't seem frontage worthy, it's just a musing about AI.


...and then we'll just need to figure out how to make humans more creative.


how does this guy consistently write mini-Paul Graham essays?


Because he basically is mini-Paul Graham.


Really? The essay "Andrew Ng thinks there's one algorithm underlying all intelligence. Also, I hope we get conscious computers." is a mini Paul Graham essay?

No PG essay is so lacking in content or original ideas. Conscious computers? Who hasn't thought of conscious computers? http://en.wikipedia.org/wiki/History_of_artificial_intellige...


I think everyone is wondering this haha


Building AGI in Common Lisp if anyone wants to help.


Do you have your own approach to AGI or are you implementing another concept?


I have my own approach.


D-Wave AI - maybe; diophantine AI - lol.


There's a joke in the computational neuroscience community somewhat along the lines of "consciousness is where neuroscientists go to die". I literally saw this happen when I was at the Salk Institute as a lowly undergraduate research assistant. Francis Crick was there at the same time, and spent his last 25 or so years in pursuit of a theory of consciousness. I was fortunate (and humbled) to get to talk to him a few times before he passed in 2004, and he was undoubtedly a thinker of superior ability and unbounded curiosity (a pretty awesome combination).

Actually there were (and still are) a lot of the biggest names in the field there at the time (including Crick, Terrence Sejnowski (a professor of mine who co-invented the Boltzmann machine* amongst other things), and V.S. Ramachandran). These are all people who were coming at the problem from the biological and/or pure science/math side of things (vs. people like Andrew Ng, whom Sam mentioned, who have a more CS/engineering-based approach).

No doubt they consistently came up with spectacular theories and very interesting models of how specific regions of the brain may function. For the most part, the work went like this:

1. Come up with a biologically or cognitively plausible mathematical model (many of which were fantastically cool).

2. Implement and run this model on massively parallel architectures (at the time not quite the level of technological sophistication you see nowadays, so things may have changed a lot).

3. To train, use feedback from EEG (this is what I "worked" on, but they also worked with other electrical signals, MRI, and chemical measurements at the level of individual neurons).

The biggest progress was made at the smallest level (understanding how individual neurons and small networks work). This was primarily because measurement at this level actually provided useful information. The signal/noise ratio of EEG scalp recordings (which to this day gives me nightmares) was (and is) so terrible that I left the field as a quite disgruntled PhD student. Maybe I just didn't have the intellectual capacity, but I never felt like I was working on anything that made sense. This was true for many of my fellow graduate students; after a couple of years, we felt we were doing pseudoscience.

Rant completed, I think the CS/engineering approach is more promising: don't worry about the biology or some grand theory of the mind and just try to do something useful. Since computers get more powerful consistently, we'll incrementally be able to do more and more useful things. If consciousness emerges at all, it may or may not appear like human consciousness. We may not even be able to tell if/when this happens, but at least we would be solving real problems in the meanwhile.

* http://en.wikipedia.org/wiki/Boltzmann_machine
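
(For anyone curious what the footnoted model actually is, here's a minimal toy version, my own illustration and nothing to do with the EEG work above: binary units with symmetric weights define an energy over joint states, and repeated Gibbs updates draw samples from the resulting distribution.)

    # Minimal toy Boltzmann machine (illustrative sketch only): sample from
    # a small network of binary units by Gibbs sampling at temperature 1.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5                                    # number of binary units
    W = rng.normal(scale=0.5, size=(n, n))
    W = np.triu(W, 1)
    W = W + W.T                              # symmetric weights, zero diagonal
    b = rng.normal(scale=0.1, size=n)        # per-unit biases
    s = rng.integers(0, 2, size=n).astype(float)

    for sweep in range(1000):                # Gibbs sampling sweeps
        for i in range(n):
            # P(unit i = 1 | all other units) is a sigmoid of its net input
            p_on = 1.0 / (1.0 + np.exp(-(W[i] @ s + b[i])))
            s[i] = 1.0 if rng.random() < p_on else 0.0

    print("one sample from the model:", s)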


Explain to me how a brain can evolve, and yet not understand how it itself functions?


There's little to no evolutionary pressure towards the brain understanding how it functions.


Ok then, tell me where the evolutionary pressure came from for brains to consider and act upon things like burials, math, surgery, and brain surgery in particular.


Humans evolved for a long time before we understood anything about how our bodies work, so why would a brain have to understand itself to evolve?


Why should it?


Because it thinks. It takes in information and performs analysis on that data. Surely, over the time it took to evolve, my brain would have come to understand itself more than anything?

It understands how every other organ works in explicit detail, at the molecular level.

The only thing my brain can imagine is that my consciousness is somewhat disconnected from my brain. It's as though my consciousness is inside a machine whose workings it barely understands. Like a dog riding in a car.


1) Individual brains don't evolve; evolution happens to the gene blueprint by which embryos make brains, and it is fixed before the new brain has had a single thought;

2) Your brain definitely doesn't understand how every other organ works in explicit detail; controlling organs only requires being part of a good-enough feedback loop to have the organ mostly function. Your brain understands your heart "at the molecular level" about as much as fruit flies do.


just because you're processing things doesn't mean you're processing what's required to understand consciousness/intelligence.


Goodbye HN!


?


If you had AI, you wouldn't need a community like Hacker News, or something along those lines.


"Something happened in the course of evolution to make the human brain different from the reptile brain, which is closer to a computer that plays pong." - i beg to differ on this statement, clearly the complexity and sheer magnificence of the human brain did not come about by mere chance, if this was so it would have been easy to replicate this.This is proof of an intelligent creator.



