Marcus cites politics and economics as fields that pose fierce challenges to deep learning. I'd add marriage counseling, stand-up comedy, full-length fiction, and most sustained interactions between multiple people.
Any algorithm under which decisions are not random can be deceived — see also Google's search results, which are full of garbage for any topic outside popular keywords. The history of law is effectively the history of humans trying to game the rules of whatever systems are in place, and I fail to see how AI would function any differently.
That comment in particular suggests to me that you haven't spent a lot of time playing Go. The pieces' powers mutate quite spectacularly as the situation around them changes.
> Most interesting problems in the world don't mirror Go's orderly rules
This is true, but it isn't obvious that this is going to slow deep learning down. For example, you cite players taking turns, which is a significant handicap for a computer. If it comes down to reflexes, robots can win even with worse decision-making algorithms than a human.
In the context of deep learning, if the situation mutates in ways that the training regime didn't anticipate, then an AI will have trouble. However, people love to overestimate both how often exceptional circumstances come up (the correct answer is: rarely) and how good humans are at responding to them (the correct answer is: badly).
My favorite part of learning how artificial neural networks are trained was that it incidentally explains a lot of human failure modes really well. It isn't at all obvious that humans have a sustainable advantage here.
Deep Learning is great for pure-data problems. I doubt it will be sufficient when interacting with the real world's inherent real-time randomness on large scales.
For almost four years, my work has been almost 100% development of sometimes novel, sometimes mundane deep learning models, so I am a fan. Still, I wait eagerly for breakthroughs in hybrid AI systems, new Bayesian techniques that can work with less data, handle counterfactual conjectures, and so on.
Curiously, I had totally forgotten SHRDLU but had a vague recollection of colored blocks in an AI paper from the 70s. Google pulled up the paper as the first hit for the terms ('red block, AI, 1970s').
In 40 years it seems we are still doing the same thing. Moore's law has enabled us to build much fancier searching, sorting, and filtering algorithms. But there has been no progress in being able to ask a machine whether something is good or bad. Fortunately.
There is no reason to suspect we can solve the halting problem, or even any restricted variant of it. In fact, there is no reason to suspect we can solve any problem of even modest computational complexity for anything but the tiniest domains.
It is 'not taken seriously' because the only evidence seems to be handwaving and philosophical argument (perhaps with a dollop of quantum weirdness, a la Penrose).
There are a number of models of hypercomputation. Finding any with a physical basis that might credibly match neurobiology is highly dubious. It is not even clear that any of them have any physical basis.
Hypercomputation is a cool theoretical device, but again, the models are far too primitive for now, and any finding in nature would likely have a very complicated mapping to the current theory. But hey, there might be some graduate student with an idea already...
However, the converse is not necessarily true; if it turns out to be false, that would imply the human mind is beyond a Turing machine.
In an ideal setting a human can do anything a Turing machine can, because a human can do all the elementary operations. The converse is not necessarily true, and seems most likely to be false from the evidence we have.
> machines not taken seriously?
Because it doesn't appear to be true. Every day that goes by we are able to build larger and more complex neural networks (NN, DNN, FNN, RNN, CNN, BNN, etc, etc).
> Seems to make the most sense from the data we do have.
We haven't done it yet, but we haven't come across some massive barrier that stops us from doing it. Although no attempt has been successful, no attempt has shown any reason to stop trying.
> E.g. how could we possibly write functioning code so
> consistently if we are limited by the halting problem for
> Turing machines?
Better testing tools are developed all the time. You hear about the failures, but never the successes. When was the last time HN crashed? When was the last time you couldn't connect to the internet? We're getting much, much better at building complex software that is reliable (TM).
It will happen, we just don't know when or how.
That said, I think there is still a lot of research to be done; they are not perfect, but I think we are barking up the right tree.
That's not how it works. Yes, there's no algorithm for solving the halting problem: given an arbitrary program, decide whether it halts.
But we don't write programs by taking random source code and trying to prove that it will halt. We create programs, and creating a provably halting program is easy: just add an executed-instruction counter and halt the program when it reaches some limit.
In other words: yes, there is at least one program for which you cannot prove whether it halts, but that's irrelevant. You can always construct a program that halts.
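For concreteness, here is a minimal sketch in Python of the instruction-counter idea (the names `run_with_fuel`, `step`, and `max_steps` are my own, illustrative ones, not from any library): wrap any step function in a fixed fuel budget, and the wrapped computation halts no matter what the step function does.

```python
# Minimal sketch of the "executed instruction counter" idea: wrap a possibly
# non-terminating step function in a fuel budget so the whole run provably
# halts after at most `max_steps` iterations. Names are illustrative.

def run_with_fuel(step, state, max_steps):
    """Apply `step` repeatedly until it returns None (done) or the budget runs out.
    Termination is guaranteed because the loop executes at most `max_steps` times."""
    for _ in range(max_steps):
        state = step(state)
        if state is None:            # the program finished on its own
            return "halted"
    return "fuel exhausted"          # forced halt: the counter hit its limit

# A step function that would otherwise loop forever is still cut off:
print(run_with_fuel(lambda s: s + 1, 0, max_steps=1000))  # -> fuel exhausted
```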
It is, of course, trivially possible to construct a program that halts. Or even to modify an arbitrary program to make it halt (a good debugging tactic when things seem to be going awry). But the programs we write on a daily basis are routinely much more complicated than the threshold where their halting status is computable.
I agree that we often intend to write programs that are guaranteed to halt, and generally code with some informal, heuristic logic as to why our code probably will. But you don't have to do much futzing around with formal methods and algorithm verification to know how loose that reasoning can be.
It's not a threshold in a strict sense. If a program is constructed from a proof (the Curry–Howard correspondence), then it will provably halt regardless of its length or complexity.
Most programs aren't constructed from formal proofs, of course, but a programmer always has some informal idea of why the program should work. We never take an arbitrary program and try to prove that it solves our problem. That is impossible anyway, if we aren't super-Turing (Rice's theorem).
And if we want to prove that our program does what we need and fail to do so, we don't give up; we modify the program so that it is provable.
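As a small illustration of that informal-but-provable termination reasoning (a sketch of my own, not anything from the thread), here is a function whose halting argument is simply "the argument strictly decreases toward a base case", which is exactly the shape of reasoning a proof assistant could check:

```python
# Illustrative sketch: termination by a strictly decreasing measure.
# The recursion is driven by a natural number that shrinks by one each call,
# so halting follows by induction on n (the informal reason made precise).

def sum_to(n: int) -> int:
    """Sum of 0..n. Terminates for every n >= 0 because each recursive call
    is made on n - 1, a strictly smaller natural number."""
    if n <= 0:                 # base case: nothing left to add
        return 0
    return n + sum_to(n - 1)   # recursive case: the measure decreases

print(sum_to(10))  # -> 55
```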
Gary Marcus has brought up Moravec's paradox and I think that is pretty strong evidence against it. Maybe not in a precise mathematical sense, but there is no argument on the other side that's precise either. Again, the burden of proof is on the other side.
In other words, it may be that to achieve general human-like AI, we would need to invent a fundamentally different kind of machine. I don't see that idea being taken very seriously.
It could still be that lots of specialized AIs have a huge impact on the world. I do think that deep learning has bad engineering properties for things like self-driving cars, as Marcus points out, but it could still have a lot of impact for less critical tasks.
Humans have finite brain size and thus limited state (tape).
A Turing machine assumes an infinite tape. Strictly speaking, humans are not even computationally equivalent to Turing machines. Humans have limited memory; a single human can't solve the same class of problems that a Turing machine can.
Any problem that any human in history has ever solved could have been solved by a finite state machine of sufficient size.
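To spell out the counting argument behind that claim (a rough sketch; $N$ is just a placeholder for whatever the brain's storage capacity actually is):

```latex
% Rough sketch: a memory of at most N bits has at most 2^N distinct
% configurations, so its step-to-step behavior can be realized by a
% finite state machine with at most 2^N states.
\[
  \#\{\text{reachable configurations}\} \;\le\; 2^{N}
  \quad\Longrightarrow\quad
  \text{the behavior is that of an FSM with at most } 2^{N} \text{ states.}
\]
```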
Theoretically, you are right. But does strict Turing completeness matter in practice? I don't have the required knowledge to counter your last point.
This is the context that was set up by user yters:
> E.g. how could we possibly write functioning code so consistently if we are limited by the halting problem for Turing machines?
You can't use the known limits of Turing machines to argue that humans can do better, because we can't.