>But we don't have a single example of an application that can handle a task that it hasn't been specifically trained and tested for.
"Playing Atari Through Deep Reinforcement Learning" was published just this last year.
>The idea of artificial consciousness currently requires a leap of faith.
Detach the notion of AGI from "artificial consciousness" or "artificial people". Contrary to the usual Humans Are Special shtick we all recite, intelligence is only one of very many design features of us Homo sapiens sapiens.
"Intelligence" in software terms is just machine learning in active decision environments. Or in other words, Machine Learning + Decision Theory = Artificial Intelligence.
This is not to say we should be cheerleading for the fabled "Strong AI" in the near term. Quite the opposite: I'm trying to convey just how vast the gap is between software that can learn to perform some general task without being purpose-built for it, and a conscious, sapient Asimovian robot deserving of personhood rights (to say nothing of Skynet).
To elaborate on just how far off we are from those latter two forms of "AI": we currently have literally no way of specifying tasks or goals to general AI agents other than reinforcement learning. We are stuck training our software the way we train our dogs: give it a cookie when it does the right thing, bring out the rolled-up newspaper when it does the wrong thing.
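To make the dog-training analogy concrete: the reward signal is the entire task specification. A toy sketch (again, every name invented for illustration) where the identical learning loop chases whatever goal the reward function happens to reward:

```python
import random

# The reward function IS the task specification: swap it out and the
# identical learning code pursues a different goal. Toy sketch only.

def train(desired, actions=("left", "right"), steps=200):
    q = {a: 0.0 for a in actions}            # value estimate per action
    for _ in range(steps):
        if random.random() < 0.1:            # explore occasionally
            a = random.choice(actions)
        else:                                # otherwise exploit
            a = max(q, key=q.get)
        # cookie (+1) when it does the right thing, newspaper (-1) otherwise
        reward = 1.0 if a == desired else -1.0
        q[a] += 0.5 * (reward - q[a])        # nudge estimate toward reward
    return max(q, key=q.get)

print(train(desired="right"))  # -> 'right'
print(train(desired="left"))   # -> 'left' (same code, different reward)
```

The point: nowhere in there do we "tell" the agent its goal in any richer language than a number handed back after each action.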
So yeah. AI is almost certainly possible, and a formal field of research on it does exist, but we are indeed decades away from anything truly useful for large-scale applications such as killing all humans.
"Playing Atari Through Deep Reinforcement Learning" was published just this last year.
>The idea of artificial consciousness currently requires a leap of faith.
Detach the notion of AGI from "artificial consciousness" or "artificial people". Contrary to the normal Humans Are Special shtick we all recite, intelligence is only one design feature of us homo sapiens sapiens out of very many.
"Intelligence" in software terms is just machine learning in active decision environments. Or in other words, Machine Learning + Decision Theory = Artificial Intelligence.
This is not to say we should be cheerleading for the fabled "Strong AI" within the near term. Quite the opposite: I'm trying to express just how far the distance is between software that can learn and perform some general task without being specifically purpose-built, and a conscious, sapient Asimovian robot deserving of personhood rights, and of course Skynet.
For an even further elaboration of just how far off we are from the latter two forms of "AI": we currently have literally no way of specifying tasks or goals to general AI agents other than reinforcement learning. We are stuck training our software like we train our dogs: give it a cookie when it does the right thing, bring out the rolled-up newspaper when it goes wrong.
So yeah. AI is almost definitely possible, and a formal field of research regarding it does exist, but we are indeed decades away from anything really and truly useful for large-scale applications such as killing all humans.