What I'm saying is that the server farms at Google, Amazon, and elsewhere are at the stage of the "combustion engine" -- not just 'powerful enough', but more powerful than necessary. I base this on a count of the number of neurons in the human brain, their rough topology and connectedness, how quickly and often they fire, and so on.
In that sense, I think you will find it almost impossible to conclude that our server farms today lack the power to simulate 86 billion neurons with ~7,000 connections each, firing at a lowly few hundred hertz. Nor is the brain more interconnected than we are able to simulate, precisely because we can saturate our links over a million round trips (or at the very least, thousands) before we fall behind the brain's real-time behavior.
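For what it's worth, the back-of-envelope arithmetic behind that claim is easy to run. All three figures here are rough assumptions taken from the text (neuron count, connections per neuron, a generous firing rate), not measurements:

```python
# Back-of-envelope estimate of the brain's raw synaptic throughput.
# Every figure below is a rough assumption, not a measurement.
NEURONS = 86e9               # ~86 billion neurons
SYNAPSES_PER_NEURON = 7e3    # ~7,000 connections each
FIRING_RATE_HZ = 200         # "a lowly few hundred hertz"

events_per_sec = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ
print(f"{events_per_sec:.2e} synaptic events/sec")  # → 1.20e+17
```

At roughly one floating-point operation per synaptic event -- itself a big assumption -- that's on the order of 0.1 exaFLOP/s, which is in the neighborhood of what today's largest clusters can sustain.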
So perhaps "we are just imitating birds by gluing feathers to cardboard wings, flapping them, and hoping flight just sort of happens" -- but while having an engine attached that's at least an order of magnitude more powerful than is required to achieve flight.
In that context, the story we're replying to is like a video of someone achieving a two-second flight, with a lot of feathers flying everywhere. It might not seem like much, but given what we know an 86-billion-neuron neural net is capable of, it is exciting.
The results of AI based on neural nets that I see being posted every single day are absolutely astounding.
This is happening right in front of your eyes. You're witnessing the birth of flight.
No, the engines won't fly the way birds do -- but they're more than powerful enough to start flying, and you are seeing this every single day.
By the way, you know there isn't any particular push toward making neural nets that are self-conscious, feel pain, and so on? That just isn't a primary goal of researchers at this very second.
Just as flight engineers don't really try to make ornithopters. We have better and more powerful things to be doing.
This is an absolutely miraculous time. And you're not seeing it -- which is sad.
This isn't some kind of pipe dream. These are results coming out every single day.
Here's a neural net designed to make horror art out of normal photos:
And that's just one thing out of many. Nuance's Dragon systems have practically eliminated the entire profession of medical transcriptionist. (Just google 'medical transcription dead', without the quotes, to find people from that industry reporting on its demise.) Becoming a medical transcriptionist today is like going to typewriter repair school, because speech recognition techniques based on machine learning have gotten that good.
Go, an incredibly nuanced game with a staggering possibility space, has fallen to machine learning competing with a lifetime of competitive human mastery.
It doesn't matter in what order or how the next breakthroughs come that lead toward sentient, or at least very intelligent, interaction. We know what the limits are. And we know, based on the topology and computation involved, that we have massively more than enough horsepower.
So while you might point to the flying feathers and the crashing airplane and deride it, I think about the jet engine behind those flying feathers, and my heart skips a beat when I see it sustain flight for 2.3 seconds before producing "A hundred and half hour ago" like a human toddler babbling incoherently, without even understanding those words.
Because I know what else AI has been accomplishing, and I know the horsepower behind it. You need to expand your thinking and realize that our algorithms and machine learning techniques are playing catch-up with hardware that has been sitting around being dumb.
That's right: computers have just been sittin' around, bein' dumb, while they have all the computational power necessary to surpass humans in every realm of neural computation. Mark my words, ythn. No pipe dream involved.
What I do doubt is that machine learning will become generic anytime soon. I predict machine learning will always need some degree of specialization - we aren't reaching general intelligence/learning within our lifetimes. A machine that is awesome at playing Go will suck at translating languages.
I also predict machine learning will never be able to surpass humans in terms of creative ability. A top notch machine-written book/poem will always be inferior to a top notch human-written book/poem, for example. Humans can invent new things, machines seem only capable of rehashing existing things. For example, at some point a human writer invented the concept of an unreliable narrator. If you "teach" a machine how to write by feeding it thousands of books, but you exclude books that have unreliable narrators, will the machine ever write a book whose narrator is unreliable? I think not.
I'll happily admit you were right all along if AGI does come about within even the next 20 years, but I think you are grossly oversimplifying things in order to embrace the sci-fi fantasy you wish were real.
Almost certainly false.
> I also predict machine learning will never be able to surpass humans in terms of creative ability
Algorithms are already churning out papers that are accepted to journals, and they can compose crude music. This is a mere 10-15 years after study in this area first began. I give it maybe 20 years before a computer-generated song appears on one of the top charts. These will likely still be domain-specific algorithms.
> Humans can invent new things, machines seem only capable of rehashing existing things
So you think human brains run on magical pixie dust? "Things" that humans invent can all be described by finite bit strings, which means generating "new things" is a fiction.
We discover these compositions just like a computer would. The secret sauce we have, but don't yet know how to describe algorithmically, is discerning which bit strings have more value to us than others -- the way a clever turn of phrase is valued more than a dry, factual delivery.
> If you "teach" a machine how to write by feeding it thousands of books, but you exclude books that have unreliable narrators, will the machine ever write a book whose narrator is unreliable? I think not.
I don't see why not, even if we stick to domain-specific novel generation; it depends on how you train the system on its inputs. Random variation and selection are hardly new concepts in this domain.
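As a toy illustration of that point -- and everything here is made up for the example: the three-string "corpus", the fitness-free mutation loop, and the `bb` pattern standing in for the unreliable narrator -- random variation can produce a pattern that was entirely absent from the training data:

```python
import random

random.seed(0)

def mutate(s, alphabet="ab"):
    """Replace one random character -- a toy stand-in for generative variation."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(alphabet) + s[i + 1:]

# Seed "corpus": deliberately contains no "bb" (our stand-in for a feature,
# like the unreliable narrator, that was excluded from the training data).
population = ["abab", "aaba", "abaa"]
assert not any("bb" in s for s in population)

# Evolve: repeatedly mutate random members and keep every variant.
for _ in range(500):
    population.append(mutate(random.choice(population)))

print(any("bb" in s for s in population))  # the excluded pattern emerges
```

A real generative system is vastly more sophisticated than a character flipper, of course; the point is only that "never seen in training" does not imply "can never be produced".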
If you grant that, computationally, there seems to be enough horsepower there, where does your skepticism that anybody will figure it out come from?
Alternatively, did you happen to completely ignore the argument about how much computation the human brain does? (Which isn't that much compared with server farms.) I mean at the neural level, using the same neural-network topology, or an approximation of it -- actual neural networks.
I guess I'm perplexed at your skepticism.
You are reasoning top-down, from the results the science of artificial intelligence has shown to date.
It's a fair source of skepticism. There are thousands of species of mammals alone, all of which have neural nets, and exactly one of which has higher abstract reasoning communicated in a language with very strong innate grammatical rules -- and that is humans.
However, we have 7 billion working specimens, a huge digital corpus of their cultural output, and their complete digitized source code, which can be explored or modified biologically.
For me, bottom-up wins. We can just try things until something works -- which may happen suddenly, even overnight.
At the moment I see a jet engine, feathers flying everywhere, and no flight. But looking at that jet engine, I just can't imagine it will take long.