Maybe I'm being too contrarian, but I still think this new AI thing (with Google et al. going down the rabbit hole) is just the same as before: a dead end.
A good example is how we invented planes. Or, more precisely, how we failed at making flying machines for a very long time. We tried very hard to mimic birds and their flapping wings. Studying birds certainly helped with problems like weight, but the main blocker was the flapping wings themselves. Planes do not resemble birds, not at all. For a long time we tried to solve the wrong problem (how to flap wings) and lost sight of the real problem (a flying machine).
I feel (but can't prove) that it will be the same with robots, and therefore AI. Trying to mimic humans may help incidentally, but it is going to fail repeatedly. What do we want? Make some scifi become real? No, we just want helpful machines. To be helpful, they need to react fast to orders, but do they need to understand our language? They need to see and hear and smell what we can't easily see, hear or smell, so a blind robot plugged into a network of cameras would be more helpful than a two-eyed, face-shaped piece of plastic. They need to store and crunch huge datasets that our brains are not comfortable with, but do they need to understand our emotions or "feel" them?
To me, the robot and AI of the near future will look like a flying phone, or an implant in the ear, or a ring on a finger; it will not look like the poor Google-bullied guybot that was in the memes recently.
> What do we want? Make some scifi become real? No, we just want helpful machines.
It depends. It's common to split AI research into 4 types, by asking 2 questions:
1) Do you care about the end result or the mechanism/approach being taken?
2) Do you care about mimicking humans or "doing the right thing"?
Hence the 4 categories are:
A) Systems that think like humans (mimicking the approach taken by humans)
B) Systems that think rationally (using the "right" approach)
C) Systems that act like humans (mimicking human results)
D) Systems that act rationally (achieving the "right" result)
For example, this breakdown appears in section 1.1 "What is AI?" of "AI: A Modern Approach", which I happen to have on my desk.
Whilst not perfect, identifying which of these categories some approach/research fits into can avoid a lot of well-trodden philosophical debates which can easily derail any discussion about said approach/research.
In this case, you seem to be in the "act rationally" camp. That's fine, but it's unfair to dismiss the goals of other researchers who fit into other categories. For example, those trying to find out how the human mind is structured; what assistance and interventions could help disabled people; etc. In those cases, a data-crunching ear implant might be a useful commodity to have, in the same way that AI researchers probably have mobile phones, but wouldn't offer any insight into the problem.
I think the friction is because of the different views each camp has about what is important for members of the other camps to do! Like, I would put myself in C & D, but I think that D people often ignore important things like noise and robustness to starting conditions, and frankly run their bloody deep network 100000 times and choose the best outcome for their paper. It's really striking when someone like Lake and Tenenbaum produces a paper where they stick big sharp pins in any mithering critique that anyone would have. Hats off and all that! But it's striking and it shouldn't be; it should be the standard.
"Like human" and "rationally" and "think" and even "act" are too human to describe robots. That's anthropomorphic. Robots will be like robots, act robotically, they will compute, operate, etc. Anthropomorphism is a childish mistake when studying animals, i'd say it's even more so when studying robots.
> "Like human" and "rationally" and "think" and even "act" are too human to describe robots.
I wasn't describing robots; I was describing some of the goals, approaches and ideas of AI researchers, who are very much human.
Analogously, it makes sense to say that a pure mathematician tries to find pleasing, intriguing, beautiful theorems; yet the theorems themselves don't experience pleasure, intrigue or beauty.
While I'm reluctant to say these paths are a "dead end", this reminded me of an interview with Yann LeCun[0] from a while ago. I think you might want to read it; he sort of shares your view. However, deep neural networks are an extremely valuable and versatile tool when applied to the set of problems they actually solve.
Basically, what I think is happening with the NY Times (and the press in general) regarding AI comes up in one of the first questions he is asked. "Your editor is not gonna like that" -- the press goes for flashy and shiny announcements that are really far from what is actually happening. And because of marketing and funding, some researchers back that kind of announcement and say "Yeah! We've built a robot that smiles!". I'm not saying androids and robotics are not valuable research; I'm saying that people misinterpret and mistake many things for AI, and expect a poet robot that can talk like Shakespeare and mean and feel the emotional component of its words. Not gonna happen soon.
> What do we want? Make some scifi become real? No, we just want helpful machines.
I think you underestimate mankind's desire to create something that resembles itself. It's a very old myth, and IIRC some traces of it can be found as far back as Greek mythology. And it can be found here and there up to modern times, from Frankenstein to Metropolis or Terminator. There's a reason why science fiction is such a significant portion of modern literature and why this theme is a major part of it. It has a powerful appeal.
You say we want helpful machines, but how many of the machines we create are more toys than tools? Even a machine that was only as smart as a dog would probably be extremely lucrative, as it would be sold as a pet. Slightly more intelligent, and it would be used as a domestic helper or a companion.
I would even go as far as saying that for some people, the ultimate goal of AI is to replace the human body, including the brain, by artificial counterparts that would free them from the pitfalls of a biological existence.
That's a fair point: robots as toys. But then it's just an advanced puppet; all the fuss about AI can't just be aiming so low.
And humankind's propensity to create life is real, but it's more fruitful in biology, with clones and so on. Robots are machines. To me there's a gap; it is another realm. The most advanced robot will never compare to the stupidest human, or even a dog.
The Golem certainly had an impact on culture. However, R.U.R. is especially interesting because it coined the word "robot". From the Wikipedia entry:
> In Czech, robota means forced labour of the kind that serfs had to perform on their masters' lands, and is derived from rab, meaning "slave."
We aren't just creating in our own image; the creations are "slaves", intended to do work on our orders. This is included in the Golem legend, which I suspect could be interpreted as an ancient desire for automation.
I'm very much with you on most of this, but to reply to this:
> but do they need to understand our language?
Natural language is the highest-bandwidth means of communication available to us. It's the easiest and most powerful method of conveying our intentions. That's important for any kind of technology that we want to be able to serve us.
I agree that AI doesn't need to be embodied in humanoid robots to be useful, but I think a robust understanding of our language is the ultimate user interface.
Written natural language is best for human-to-human communication, but not for human-to-robot. The best we have now is computer languages.
For giving orders, spoken natural language is best for human to human, but for human to machine I'd say the best is the plane cockpit, or your car's interface.
I guess I should walk back what I said a little bit. Natural language is the ultimate interface for many domains. For flying a plane or doing anything else that requires continuous high dimensional input, physical interfaces are probably best. For anything that requires a level of detail approaching a computer program, a technical written or visual language is probably best.
Many of the tasks we want AIs to perform are things that humans do now. We're used to communicating about them in natural language.
I'm not aware of attempts to make flying machines with flapping wings being the main blocker to powered flight. Gliders had been in development for hundreds of years and were functioning decades before powered flight.
The main blockers, as I understand it, were the weight of the engine and the implementation of control surfaces. In fact, the first powered flight was made with a plane that mimicked the way a bird's control surfaces work.
> Maybe I'm being too contrarian, but I still think this new AI thing (with Google et al. going down the rabbit hole) is just the same as before: a dead end.
I think a good intermediate solution is a robot that is a perfect '50s housewife (minus any sexual connotations): the robot cooks, cleans, gardens, etc.
I agree with you that the robot probably won't look like Rosie from the Jetsons, but probably more like the Fetch and Freight robot duo (http://fetchrobotics.com/fetchandfreight/).
Hasn't statistical learning already surpassed human expertise in a number of areas, particularly in Computer Vision and loads of classification problems?
Assuming you are making a case for the success of modern AI, then this is just a difference in the scale of the failure to live up to its promises. The wing-flapping flying machine never got off the ground, while computer vision algorithms are useful in only a small subset of applications under specific conditions (not AI, in my opinion).
I don't think the usefulness of such examples is a strong case that those ideas will lead us to what we really mean when we say "AI". I personally don't think we can get electrons to move fast enough across our current computer architecture to achieve human-level AI, which, to my knowledge, is what Google is mostly building their stuff on.
What? I don't know what you're referring to about Google, but I'm under the impression most of their stuff is based on optimization problems like power iteration, which have very little to do with human-level AI. In fact, I'd argue human-level AI is almost irrelevant except in journalism, media, and science fiction.
I'm Jeremy Howard, one of the people quoted in the article (and the one in the crazy photo!). AMA about applications of deep learning. My views from 18 months ago are in this talk on TED.com: http://www.ted.com/talks/jeremy_howard_the_wonderful_and_ter... .
In the time since then much more of the potential of deep learning has been realized, but we're still in the very earliest days - I'd say something like the internet, prior to the development of the first commercial ISPs or the web.
Just like today we don't really talk about "internet companies" (since the internet is just a tool that we use, not an end in itself), you also should not expect to see "deep learning companies" be successful as an end in itself. Rather, some companies will use deep learning as a key tool to fix key problems in current processes. For example, http://www.enlitic.com , my company, is first and foremost a medical company, and uses deep learning as a tool to fix some very specific problems in patient care.
PS: I greatly dislike the term "artificial intelligence". It's as meaningless and distracting as that other horrible phrase, "big data". I have no idea whether deep learning algorithms could lead to "intelligence", or what that even means. What matters is what a technology can do, and in this case, deep learning can do a great many interesting things.
What do you think of alternative, non-neural-network approaches such as Bayesian networks[1] or Support Vector Machines (which as I understand it are largely dead, but the theory behind them is interesting and could lead to more developments)?
Is the future of deep learning just massive neural networks on huge GPU clusters? Is anyone big doing research on new approaches, or just focusing on fixing the problems with neural networks? (Which, given that the whole idea of deep NNs is rather new anyway, I suppose it might be a little early to look at other algorithms.)
"Support Vector Machines (which as I understand it are largely dead, but the theory behind them is interesting and could lead to more developments)."
Wow, that happened fast. SVMs seemed all the rage a few years ago.
I think it shows the importance of understanding the formalisms underlying machine learning approaches, and their corresponding strengths and weaknesses. Neural networks had been on the outs for a long time, before the realization that having much more data to train against could massively increase their usefulness.
Wonder what the next ML technique to see a resurgence will be.
The difference with deep neural networks is that they can do so much of their own feature generation. It's very different from Bayesian networks or SVMs. Because they can create their own features (to a large extent) they can handle unstructured data more simply and more accurately.
Neural nets are dramatically improving every month. Recently, the major developments have been around using a wider variety of differentiable functions as layers in the network - see for example Spatial Transformer Networks, and also the more general architectures such as Neural Turing Machines.
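To make the feature-generation point concrete, here's a tiny toy sketch (illustrative only - nothing to do with Enlitic's actual code, and the layer sizes and learning rate are arbitrary): a two-layer net trained on XOR, where the hidden layer ends up acting as a feature extractor that nobody designed by hand.

    import numpy as np

    # Toy example: a 2-layer net learns XOR, something a model relying only on
    # the raw inputs as features cannot do.
    rng = np.random.default_rng(0)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # layer 1: learns the features
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # layer 2: combines them

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        h = sigmoid(X @ W1 + b1)              # hidden activations = learned features
        out = sigmoid(h @ W2 + b2)
        d_out = out - y                       # cross-entropy gradient at the output
        d_h = (d_out @ W2.T) * h * (1 - h)    # chain rule back through the hidden layer
        W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(0)

    print(out.round(2))   # close to [[0], [1], [1], [0]]: XOR solved with no hand-built features

The same composition-of-differentiable-functions idea is what makes things like Spatial Transformer Networks possible: any differentiable module can be dropped in as a layer and trained end to end.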
Sure. Deep learning is a tool that can impact a large percentage of processes that we use to create and provide products and services. Many of these processes involve human steps because humans are uniquely able to provide the perception, language interpretation, or other complex pattern recognition tasks. Replacing or augmenting people in these situations with deep learning driven systems would allow many of these processes to become orders of magnitude cheaper, faster, and more accurate.
I believe that the internet was a similar technology - a tool that could make many processes much cheaper, faster, and more accurate.
So it's not so much about deep learning itself improving exponentially (which I don't have an informed opinion of, because I don't think it's possible to know), but about the drastic changes that will occur as it is built into the products and services that can use it.
> I greatly dislike the term "artificial intelligence". It's as meaningless and distracting as that other horrible phrase, "big data". I have no idea whether deep learning algorithms could lead to "intelligence", or what that even means. What matters is what a technology can do, and in this case, deep learning can do a great many interesting things.
The term "artificial intelligence" refers to more than what technology can do. It also refers to what technology could or will do. That why this term matters for the public, as it points at potential future issues described in Science-Fiction.
As for what it means, I could agree that it's difficult to coin it precisely, but it's not meaningless either. Some concepts have a meaning regardless of our ability to express this meaning accurately with words. It's a case of "I know it when I see it."
As I understand it, there is a big difference between "strong" (e.g. the terminator) and "weak" (e.g. automatic image tagging) AI.
Are these basically philosophical terms by now? The knowledge engines of Watson and Wolfram Alpha are still one-step and data-driven. Is there any hope for strong AI, or is it mostly a pipe dream?
I don't think we have any reason to believe that strong AI is impossible - in fact, I can't see how it could be impossible (since we have an existence proof, in ourselves, that it's possible!)
But I don't see any way to know whether we are making progress towards strong AI, or whether deep learning could be a foundation for it. The biological connections are certainly becoming more clear, but it still seems like very early days for this kind of conclusion.
I was particularly amazed when you made a few tweaks to your learning setup which drastically reduced the error rate (i.e. introduced adagrad, automatic learning rate annealing, momentum...). I had never heard of these techniques before watching this - what do you find is the best way to stay up to date with the latest machine learning algorithms/improvements? Thanks
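(For anyone else curious, here is my rough, generic sketch of what those tweaks look like as update rules - a toy example with made-up hyperparameters on a throwaway quadratic loss, definitely not the code from the talk.)

    import numpy as np

    # Toy loss ||w||^2, so the gradient is simply 2w and the minimum is at zero.
    def toy_grad(w):
        return 2.0 * w

    w = np.ones(5)                     # parameters to optimise
    velocity = np.zeros_like(w)        # momentum state
    grad_sq = np.zeros_like(w)         # adagrad state: running sum of squared gradients
    lr, mu, decay, eps = 0.5, 0.5, 0.99, 1e-8

    for _ in range(200):
        g = toy_grad(w)
        grad_sq += g ** 2                        # adagrad: remember past gradient sizes
        scaled = g / (np.sqrt(grad_sq) + eps)    # per-parameter adaptive step size
        velocity = mu * velocity - lr * scaled   # momentum: smooth successive updates
        w += velocity
        lr *= decay                              # annealing: shrink the learning rate over time

    print(w)   # parameters should have moved toward the minimum at zero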
Hi Jeremy (Simone Brunozzi here), good to see you here on Hacker News.
I very much agree with your dislike of "Big Data" - a term that I've seen misused almost 100% of the time.
On "Artificial Intelligence", I think it serves the purpose of defining a category for the layman, and doesn't necessarily confuse people as much as "Big Data" does.
"Enlitic has tested its software against a database of 6,000 lung cancer diagnoses, both positive and negative, made by professional radiologists. In research soon to be published, its algorithm was 50 percent more accurate than human radiologists, Enlitic said."
If I'm the patient, I want the AI to make the initial diagnosis, then highlight parts of the image most relevant to the diagnosis, and present links to guidelines or research articles backing up the conclusion (or whatever kinds of citations are most relevant to a radiologist). Then the radiologist either confirms (which will be the majority case, if the A.I. is as good as advertised), or disagrees and replaces with his or her own diagnosis. Note this requires a more sophisticated design and interface than a simple "cancer/not-cancer" output.
I want a human in the loop. Sometimes A.I. systems can outperform humans overall, but when they make mistakes, the mistakes are things human "common sense" would have quickly ruled out. When it comes to my health or other topics I consider important, I want both the strength of an automated system and the intuition of an expert, applied in a cost-effective and efficient manner.
A.I. should be a productivity enhancement tool for my doctor, but I'm definitely not ready for an A.I. to be my doctor.
There are problems even with that loop - let's say the machine says "yup, cancer" but the clinician disagrees. Does the diagnosis get changed? Maybe - but often the process would err on the side of caution (because lawyers), and this could end up leading to lots of extra, unnecessary and damaging surgeries or other treatments.
I think that's orthogonal to using A.I., and tied more to how hospitals and doctors are remunerated for their services. In the U.S., they get paid for the surgeries and treatments, not for the ultimate health of the patient.
Analysis of the human mind will, in time, reveal what the different areas of the mind do.
We already know a little about computer memory: RAM and static RAM. It is all randomly addressable as a set of coordinates - input an address, get a result. Human memory has at least two other major aspects, the first being 'content addressable memory'. Imagine an army of a million men and you want John Albert Smith: you must inspect each man to see if he is Smith. In practice we call out "John Albert Smith" and he shouts "here". That's a roll call.
The second is associative memory: roll call "John Albert Smith and his squad", and they all identify themselves.
Then there are visual memories, odor memories, tactile memories, pain memories and more - all content addressable.
We also have computational or mathematical memories to recall or calculate: add, subtract, etc.
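A toy way to picture the difference in code (just an illustration of the idea, not a claim about how the brain implements it; the names are made up):

    soldiers = ["Jane Doe", "John Albert Smith", "Mary Major"]

    # RAM-style, address-based: you must already know *where* the record lives.
    print(soldiers[1])                        # -> John Albert Smith

    # Content-addressable ("roll call"): every entry is checked against the content itself.
    print([s for s in soldiers if s == "John Albert Smith"])

    # Associative ("Smith and his squad"): one piece of content recalls related content.
    squads = {"John Albert Smith": ["Pvt. Jones", "Cpl. Lee"]}
    print(squads["John Albert Smith"])        # -> ['Pvt. Jones', 'Cpl. Lee']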
Once we know them all, we have to know how they interconnect and the hierarchy of them.
All these little brain areas (hippocampus, etc.) need to be understood.
Then we start hooking them together in the best way to make an AI, which may or may not be the same way a human mind assembles them.
At some point we get a primitive AI. We are almost there now. We have huge memories and huge calculating ability in these early AIs. We add more and more of them together until it works like a human mind. Then we up the clock speed.
We are hard at work on this now, and some say 5-10 years and we will succeed.
One thing is true: if we use this AI to compute the next AI, the time spent on each stage will get smaller and smaller, and the AI will rise in capability at an ever faster rate in a positive feedback loop.
This is where the risks lie. An AI that can kill you as easily as a power saw can cut off a finger is not the AI I want to share space with on this planet - but that is where we seem to be going.
Yup, small steps: https://m.youtube.com/watch?v=iNLcGqbhGcc (2015 robocup soccer championship).
I think the issue is still efficiency. We are still far from the efficiency of the human brain in terms of computation per calorie.