If it were me choosing whom to fund, my main criterion would be who has actually made some progress and has solid models that show promise, not who is most willing to make tons of unfounded predictions.
=================MY PREVIOUS POSTS=================
I don't think that's a very fair assessment of Kurzweil's role in technology.
He was on the ground, getting his hands dirty with the first commercial applications of AI. He made quite a bit of money selling his various companies and technologies, and was awarded the National Medal of Technology by Clinton.
As I was growing up, there was a series of "Oh wow!" moments I had, associated with computers and the seemingly sci-fi things they were now capable of.
"Oh wow, computers can read printed documents and recognize the characters!"
"Oh wow, computers can read written text aloud!"
"Oh wow, computers can recognize speech!"
"Oh wow, computer synthesizers can sound just like pianos now!"
I didn't realize until much later that Kurzweil was heavily involved with all of those breakthroughs.
In addition, I'd rank Minsky, Larry Page, Bill Gates, Dean Kamen, Peter Norvig, Rafael Reif, Tomaso Poggio, Dileep George, and Kurzweil's other supporters as much more qualified to judge the merits of his ideas than his detractors like Hofstadter, Kevin Kelly, Mitch Kapor, and Gary Marcus. Of that latter group, Hofstadter seems to be the only one really qualified to render a verdict.
Here's Peter Norvig himself on Kurzweil:
“Ray’s contributions to science and technology, through research in character and speech recognition and machine learning, have led to technological achievements that have had an enormous impact on society – such as the Kurzweil Reading Machine, used by Stevie Wonder and others to have print read aloud. We appreciate his ambitious, long-term thinking, and we think his approach to problem-solving will be incredibly valuable to projects we’re working on at Google.”
and he effusively praised Ray at a Google Talks event.
People always list his accomplishments in this fashion. They state some things that he was "heavily involved in", which sound impressive because they're phrased as if he alone created them in a vacuum.
The reality is a lot of different people were working on a lot of these same things, and it's not as if everyone agrees that Kurzweil was the first to accomplish most of these, or even that he indeed did accomplish them in a satisfactory manner.
Yes, the Kurzweil keyboard was very popular and groundbreaking, but "the first to sound just like pianos"? No. They did not sound "just like pianos." They sounded more like pianos than the keyboards that came before, and less like pianos than the ones that came after.
This is the problem with Kurzweil (and others like him): he paints himself as an iconoclastic, groundbreaking figure standing alone amid a sea of might-have-beens. But the reality is that he was just one of many people working in a field which has had a lot of impact on our lives, and which he likes to wax ecstatic about.
The reality is that his main job for the last 20 years has been making a nearly non-stop litany of ridiculous predictions; then, when some of them land anywhere close to the mark, he and his supporters claim he was visionary.
It's absurd and it annoys me to no end.
Kurzweil's theories deserve scrutiny and thoughtful criticism, just not the emotional middlebrow dismissal you're engaging in - to borrow PG's favorite expression.
I guess it's possible that Kurzweil is The Talented Mr. Ripley of the technology world, and that your "horseshit pattern recognizers" are just that much more astute than those of Gates, Page, Brin, Minsky, Norvig, Wolfram, Kamen, Reif, etc., but I hope you'll understand if I'm leaning a bit more towards their take on this one.
Also, I realize endtime is about to jump in here any minute now with his Argument from Authority wikipedia link, so I'll freely admit that given imperfect information, yes, I'm allowing people I respect, who are extremely knowledgeable about the relevant subject matter, and in many cases have worked closely with Kurzweil (Page, Wolfram, Gates, Norvig, Reif, George, etc.) - to help confirm my own fairly well researched opinion.
Honestly, I think you're barking up the wrong tree here. Even his strongest critics generally concede that he's a brilliant man who has contributed significantly to technology.
Please understand, I'm not a Kurzweil fan boy. I do think a lot of his conclusions are inevitable in the long term - say 100 years out - but I'd guess it's 50/50 whether or not his timelines are too optimistic to benefit any of us here personally.
What I'm reacting strongly to here is the knee-jerk dismissal of his credentials and accomplishments by some on HN due to their incredulity towards his later ideas. It doesn't encourage an informed debate, and quite frankly smacks of a religious or political argument - which admittedly many of his supporters are just as guilty of.
You could tell me that literally every other person on the planet (never mind people who are wealthy and successful in tech) thinks Kurzweil is the second coming, and it would not change my opinion of him.
Now why is that? Is it just because I'm obstinate?
No. It's because I've been reading his asinine screeds for decades and reading the fawning baloney that comes from talking heads like those you've listed, and adoring groupies like yourself as well. And it's all a bunch of horseapples.
If anyone were doing what he's been doing for the last 20+ years and getting the same response from the press and public, I would be annoyed. It doesn't matter who it is.
It's the act of spewing out a bunch of half-baked theories as if they were revolutionary and thought-provoking when they're really trite garbage that wouldn't even pass for a low-grade sci-fi novel. The fact that a number of people treat it seriously, not just the poorly informed general media (which you expect to be gullible) but, most disappointingly, some tech folks, is beyond obnoxious.
Anyway, clearly I'm not going to change your mind. You're very convinced by the sheer fact that other rich and famous people think he's great, so I'm not going to waste my time any further. You're not going to convince me either (since the only thing that would convince me is Kurzweil actually putting out something thoughtful and well considered), so let's let it go.
Political awards don't mean anything, just that you have good PR. Hillary Clinton has a Grammy, Barack Obama has a Nobel Peace Prize, and Al Gore has an Oscar. Nate Silver is doing things publicly in statistics, but that doesn't mean he's the end-all be-all.
Some of us don't buy into the Kurzweil hype. Frankly, I think he's delusional. But my mother always said that there's a fine line between an idiot and a savant.
It's definitely not a Cracker Jack prize.
He's also racked up a pretty impressive list of Inventor of the Year and Lifetime Achievement awards from more technically astute sources like MIT, Carnegie Mellon, etc.
http://en.wikipedia.org/wiki/Ray_kurzweil - under Recognition and awards
I guess the main point I was trying to make is that he's definitely not this dilettante interloper that some of the more off-base commenters are alleging here.
If Google wanted to build AGI, we'd hire Eliezer Yudkowsky and Luke Muehlhauser. Kurzweil is an evangelist and a public figure, but do you honestly think he is actually going to design an AGI?
(disclaimer: I work for Google, but this is all my own unofficial opinion.)
That said, I nearly included Ben Goertzel in my comment. But I am familiar with Eliezer and Luke's work (and know them personally); I'm much less familiar with Ben, so I didn't feel I could make such a strong statement about him.
However, Google is not dumb. They have Peter Norvig heading Google X and a team of AI engineers recruited by Norvig. Perhaps Kurzweil is just the hand guiding the ship, but not the one pulling the oars?
Kurzweil has recognized the value of Hawkins' ideas and based his latest book on them, though I and others (http://www.newyorker.com/online/blogs/books/2012/11/ray-kurz...) don't think he did a great job on it.
Full Disclosure: I work at Numenta (Hawkins company) but these opinions are my own.
Even more disappointing is the fact that Kurzweil never bothers to do what any scientist, especially one trained in computer science, would immediately want to do, which is to build a computer model that instantiated his theory, and then compare the predictions of the model with real human behavior.
This is quite damning, if true, and reconciles the apparent discrepancy in this discussion between "Kurzweil, brilliant inventor" and "Kurzweil, futurist hack." Is it possible that Kurzweil, at some point, stopped being an implementer of ideas and became a pure pontificator?
Thanks for bringing Numenta to my attention. It appears to be a very interesting company.
They did buy Motorola Mobility.
For example, the Kurzweil K2000 synthesizer illustrates exactly how he approaches problem solving - brute force - 64 MB of memory in 1991, utterly wretched price/performance ratio but awesome performance if considered on that dimension alone.
And the above is 100% consistent, IMO, with Google's approach to solving problems: throw hundreds of thousands of computers at it. He should make an excellent cultural fit, because today's ridiculous brute force is tomorrow's low-end cell phone.
If Google were very serious about A.I., I think they would have just acquired Jeff Hawkins' Numenta and his neuroscience crew, who have real, practical "brain-building" experience.
I think the focus away from training these systems for your own voice is probably a bad direction but I'm not an expert in this stuff.
Sounds like investors talking about Columbus in 1491.
Ray was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition.
What's more, none of these is particularly impressive from a technology point of view; what's interesting are just the product ideas.
Your statement is like saying calculus isn't an impressive mathematical achievement because any moderately competent math major in college could recreate it.
Also, yes, those things existed, but there was no solution available that was so affordable and reasonable for personal use.
It's the "first" part that matters somewhat more, and then it becomes a question of whether he's more Edison or Tesla (crazy marketing or crazy smart).
Also he's been put into a Director role, so his technical merits come second to his ability to find, understand, and cultivate the smart people needed for his department's goals.
Basic search term aggregation was illuminating. If Ray K. can go another step beyond statistical analysis, the results could be fascinating... or it could just be a "People aren't wearing enough hats" level of letdown.
edit: identities, not impressions
Well, not now.
For all the hullabaloo, Director isn't that high at Google. He's not being hired as a VP (there are hundreds of those) or Google Fellow. It's not a bad gig, and I'm sure he'll get to work on interesting stuff, but I would have imagined a guy of his caliber getting more.
Having Kurzweil near the top isn't exactly comforting, either.
That's just multiplying through by -1 the typical nerd wet dream about the Singularity.
I can't even keep Rails, Apache, or git from occasionally shitting the bed; I am not worried about AI just yet.
You can look at it that way, but I think it is currently far more likely than the "typical nerd wet dream about the Singularity".
Your last sentence is a complete non sequitur to me. You are comparing the output of a large, AI focused, for profit organization to an individual of unknown skill using some very specific open source tools.
I am not losing sleep over it, but I definitely think about it from time to time.
Also, not being able to keep git etc. running is some evidence about skill level.
I don't think it unreasonable to brace for the case where some Skynet wannabe launches all nukes not because of a carefully considered decision motivated by hatred for humanity, but simply because it shat the bed.
Well said. What is it that AI is supposed to do that's so scary, anyway?
You would think an intelligent person would know this.
The site I linked to has some good introductory material to the related concepts. (http://yudkowsky.net/singularity/intro is short and to the point.) http://singularity.org/research/ lists some important texts to read, and following up on the authors will lead to more. (An author I will also recommend following for his practical pursuits is Ben Goertzel. Lastly, I think this is a good criticism of the Singularity Institute's general mode of operating: http://kruel.co/2012/11/03/what-i-would-like-the-singularity... But you should probably read that one last, if you intend to read anything at all beyond the simple introduction.)
That doesn't mean that they aren't taking it seriously, but it is at least a strong signal to me.
SPENCER MICHELS: Sergey Brin thinks the ultimate search engine would be something like the computer named Hal in the movie 2001: A Space Odyssey.
SERGEY BRIN: Hal could... had a lot of information, could piece it together, could rationalize it. Now, hopefully, it would never... it would never have a bug like Hal did where he killed the occupants of the space ship. But that's what we're striving for, and I think we've made it a part of the way there.
Which is kinda the point about AI safety. We don't really know what we want, we sure as hell don't know how to formalize it, and we're lucky that we don't know how to create a real AI because otherwise it could surprise us in a very ugly way, just lawfully following the plan we gave it.
Oh. Not rules then? Well, what rules. How about giving it a rule "children must not die". Apparently we're big on that one. Except, wait, we mean "US children must not die", so we can still bomb foreign suburbs where children are known to exist. But then we'll need "children must not die on US soil" because it is still possible that the union of (isa US minor citizen ?x) and (isa-resident baghdad ?x) is non empty and the AI would be unable to act. Of course then we need, "US children must not die except in a car", because otherwise the AI would have to destroy cars.
OK, so let's abandon rule-based AI altogether and model our AI on biological processes, like humans. Well, humans would never cause mass suffering of other humans. We have laws against it. Oh.
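The pattern sketched above, where each literal rule needs ever more exceptions until it forbids nothing, can be shown with a toy constraint checker. Every rule, field name, and action here is invented purely for illustration:

```python
# Toy illustration of hard-rule patching (all rules and actions invented).

def violates(rule, action):
    """A rule forbids an action when every condition in the rule holds for it."""
    return all(action.get(key) == value for key, value in rule.items())

def allowed(rules, action):
    return not any(violates(rule, action) for rule in rules)

# Start with the simple rule: "children must not die".
rules = [{"kills_children": True}]
strike = {"name": "bomb foreign suburb", "kills_children": True,
          "on_us_soil": False, "in_car": False}
print(allowed(rules, strike))  # the simple rule forbids the strike

# Patch the rule with exceptions until it's vacuous:
# "US children must not die on US soil, except in a car".
rules = [{"kills_children": True, "on_us_soil": True, "in_car": False}]
print(allowed(rules, strike))  # now permitted: each added condition is a loophole
```

Every condition added to narrow the rule is simultaneously a loophole, which is the parent's point: literal rules don't capture what we actually mean.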
"Let's play global thermonuclear war!"
Here's an interesting discussion about AI, rules, morality, etc. http://www.reddit.com/r/Futurology/comments/y9lm0/i_am_luke_...
All it would take is for the AI to just be smarter than you. Human psychology is hackable too. Cf. AI box experiment.
And I was, in fact, considering "AI in a box"-style scenarios. My point remains: it would be a very impressive AI that could break out. Presumably it at least needs some significant basic resources, memory at least, and CPU to do it in reasonable time, to do deep human-hacking. How can it convince us to give them to it if it doesn't have them to begin with?
If we're assuming we can control its goals at all (if we can't we're already doomed by virtue of trying), will it even occur to the thing to act beyond giving direct instructions to humans? Will it have enough concept of self to conceive of escaping, to say "I think therefore I am"? Can it even realize what kinds of computational resources it needs and how to ask for them?
There may be good answers to these questions, but I don't believe they're trivial.
Almost by definition, more people currently die from every other cause than from x-risks.
Google is in the best position of anyone/any company to cause an AI disaster. That they don't _seem_ to care is what bothers me.
I imagine the CIA and/or defense contractors are in a better position- they have a ton of money, they work in AI... oh, and they also tend to equip their projects with a ton of destructive weaponry. Which gets deployed in foreign countries, sometimes without permission.
The key quote is at the very top:
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." --Eliezer Yudkowsky
I wonder how EY feels when he reads threads like this. Maybe he just ignores them these days?
Are you serious? Let's not overstate Google's position in this world.
They have a search engine, mobile platform, advertising platform and a bunch of popular websites. That's it. All of which have been done before for a decade now. Nothing they do directly influences whether people live or die.
I would be far more concerned about something like IBM Watson used in health care situations.
Also: the best and brightest engineers and AI researchers, a shitload of money, a culture of ambition, and a clear intent to go after AI tech. I haven't heard of any other company like that.
> I would be far more concerned about something like IBM Watson used in health care situations.
I'd actually welcome it with open arms. It's about time for some automated diagnosis.
EDIT: I'm sorry, but I do have to address this point.
> Nothing they do directly influences whether people live or die.
Shut down Google and see what happens. Huge sectors of the economy depend directly on the search ability they provide, not to mention how many people are now using GMail. Moreover: do you know the addresses of the nearest hospitals? The phone numbers of medical specialists? How do you navigate around places? Without Google we'd all be back to the Yellow Pages.
If there's one company humanity really grew dependent on, it's Google. Yes, you could probably replace most of the services if needed, given enough time, but the fact is they're the best out there right now, we're all using them and they definitely influence our lives and deaths.
IBM? They have also been doing it for decades and are far ahead of Google by all indications.
>I'd actually welcome it with open hands. It's about time for some automated diagnosis.
My point was that IBM Watson, if it made wrong decisions, could directly influence health care outcomes, i.e., life or death. Nothing Google does is comparable.
> Shut down Google and see what happens. Huge sectors of economy depend directly on search ability they provide, not to mention how many people are now using GMail.
Google is not the first or last search or email company. We would simply switch to Bing and Yahoo and the world would move on. Or have you never heard of Altavista, Excite, Lycos?
> Without Google we'd be all back to Yellow Pages.
Hilarious you mention that since that's where Google gets its worldwide Local/Places search content from. So in fact we are already using Yellow Pages.
>If there's one company humanity really grew dependent on, it's Google.
Humanity doesn't depend on Google. Get a grip, would you?
It's all about Risk Management 101: Risk = Impact x Probability.
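To make that formula concrete, here's a throwaway sketch in Python; the scenarios and every number in it are invented, purely to show how expected-loss ranking works:

```python
# Risk = Impact x Probability, ranked (all figures invented for illustration).
scenarios = {
    "rogue AGI":             {"impact": 1e9, "probability": 1e-6},
    "bad medical diagnosis": {"impact": 1e3, "probability": 1e-2},
    "search outage":         {"impact": 1e5, "probability": 1e-3},
}

def risk(s):
    """Expected loss of a scenario."""
    return s["impact"] * s["probability"]

for name, s in sorted(scenarios.items(), key=lambda kv: risk(kv[1]), reverse=True):
    print(f"{name}: expected loss {risk(s):g}")
```

Note that a tiny-probability, huge-impact event can still dominate the ranking, which is the x-risk argument in a nutshell; the fight is over whether the probability term is really non-negligible.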
We're not perfect as humans, but I don't think we're stupid enough to build something we can't control. We don't even know what consciousness is, much less whether artificial consciousness is possible.
I do know that it would save lives to liberate humanity from drudgery and stupidity.
As for human stupidity, maybe you just have more faith in humans than I do (but I am not entirely pessimistic). Let me ask a related question, to try and gauge this more deeply: do you think anyone will ever release an engineered pathogen into the wild?
But it's still a machine. Albeit with very sophisticated and very useful capabilities.
And as far as I know, has not expressed self-awareness.
Yet given the fundamental nature of how Watson works it would seem logical to think that Google might one day have to compete with IBM in a number of areas where Watson excels -- medical diagnosis for example. Or worldwide financial analysis.
To do so, Google will need strong voice recognition and text-to-speech capabilities which Ray Kurzweil can help with immediately.
The kind of desperate, burning passion that Wozniak had when inventing the Apple computer doesn't seem to be something that any established company has ever been able to intentionally recreate.
There was a fairly lively discussion (either here or on slashdot) a few years ago about whether or not Google, with its 20% rule, could be the next Bell Labs. Driverless cars and Google Glass are cool, but it wasn't quite the landslide of cool everyone was hoping for back then.
It might just come down to the difference between "create a world changing technology for us, cost is no object" and "go work on whatever you think is cool, and don't worry about the bills".
I do not think that I am alone when I say "please, for the love of god, don't let this happen."
I remember bringing a bumper sticker reading "AI, it's for Real" home from the 1982 AAAI conference and putting it on my car, so I may be a little biased.
To say it's an AI researcher's paradise is putting it mildly.
A petabyte of memory? Two hundred thousand cores? Done and done.
Google has billions in the bank and can afford to be silly for quite some time, but probably not unlimited silly.
Sidenote: weird how 'surgence' isn't a word in the dictionary
(Don't believe me? Google it.)
What is noteworthy about this hiring is that for the first time, a company with serious technical ability and resources is going to be tackling strong AI. No, there was nothing in the announcement that directly mentions strong AI but I contend that they will have to tackle at least certain aspects of strong AI in order to make significant progress in speech recognition. You can only get so far using the standard tricks that are used with traditional natural language understanding (NLU). At some point, the system is going to have to have an abstract model of the way the world works in order to mimic the assumptions that human intelligence requires in order to understand language.
This shouldn't be all that surprising given what they are trying to do with Google Now. Google Now is different from other products using speech recognition because it is active; it behaves more like an independent AI agent. Think about what Google will have to do to improve its performance; it will need to build a model of the user's behavior in order to tune the probability distributions that underlie the best interpretations of what someone is searching for. You see, language understanding in humans sits upon a sizable foundation of innate assumptions about the way the world, including other humans, works. My claim is that Google will have to duplicate much of this in order to get better technical performance from Google Now as well as search. It will need prior knowledge about people in general and will have to modify that knowledge over time based on what it learns about a particular user. What I have just described is the missing component in efforts to develop strong AI: the mechanism by which a distributed AI can learn to mimic aspects of human thinking via evolution. (I'm a functionalist, as you've probably gleaned already, so I would argue that the only way to improve NLU is to ground the processing in "real" understanding on some level.)
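The "prior knowledge about people in general, modified per user" mechanism described above is, at bottom, Bayesian updating. A minimal sketch, with the query, interpretations, and all probabilities invented for illustration:

```python
# Minimal Bayesian sketch of query interpretation (all numbers invented).
# Population-level prior over what a user typing "jaguar" means:
prior = {"animal": 0.2, "car": 0.5, "operating_system": 0.3}

# Per-interpretation likelihood of the observed evidence, e.g. this user
# just clicked a result on an automotive site:
likelihood = {"animal": 0.05, "car": 0.8, "operating_system": 0.1}

def update(prior, likelihood):
    """One Bayes step: posterior is proportional to prior * likelihood."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())  # normalizing constant
    return {h: p / z for h, p in unnorm.items()}

posterior = update(prior, likelihood)
best = max(posterior, key=posterior.get)  # "car" dominates after one click
```

Repeating the update as more behavior arrives is exactly the "modify that knowledge over time" loop described above; the hard part is where good likelihood models come from, not the arithmetic.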
Now, Canonical could build a Google Now type system for their Ubuntu phones and seed the learning algorithms with an open source, wiki-AI type project -- you don't need Google to create such a system. But Google has the monetary incentive, resources, and now the technical vision (Kurzweil) to justify pursuing this on a large scale, just as they did with their search engine.
What was the primary thing that Apple did to reach their current level of success? They tried things that other people poo-pooed. It wasn't that the iPhone and the iPad were technically that amazing -- that's why people are unimpressed with the quality of the patents Apple has asserted -- it's that they were the first to seriously try a lot of the features that are now standard. As is often said around here, 90% of success is showing up. The company I work for directly competes with Google in areas related to Google Now and I tell you I got a little nervous when I heard they had hired Kurzweil. Again, not because of his technical abilities but because now we have to worry that Google is going to come up with something totally crazy out of left field. The fact that Kurzweil is crazy is precisely why I worry.
Love the phone, totally worth it.