I see this phrase thrown around a lot by Kurzweil and fans. What does it even mean? How do you measure intelligence? Smarter than whom?
Intelligence (in a domain) is measured by how well you solve problems in that domain. If the problems have binary solutions and require no external input, a good measure of quality is average time to solution. Sometimes you can gain an advantage by batching problems, so let's permit that. In other cases quality is best measured by the probability of success within a given amount of time (think winning a timed chess or go game). Sometimes, instead of a binary outcome, we want to minimize error within a given time (like computing pi).
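As a rough sketch of the first two measures, here is what they might look like in code. The function names and the toy primality "domain" are my own, purely illustrative stand-ins, not a claim about how real systems are benchmarked:

```python
import time

def avg_time_to_solution(solve, questions):
    # First measure: average wall-clock time per problem, suitable
    # when every problem has a binary, checkable answer.
    start = time.perf_counter()
    for q in questions:
        solve(q)
    return (time.perf_counter() - start) / len(questions)

def success_rate(solve, problems, budget_s):
    # Second measure: fraction of problems answered correctly within
    # a fixed per-problem time budget (cf. a timed chess game).
    wins = 0
    for question, expected in problems:
        t0 = time.perf_counter()
        answer = solve(question)
        if answer == expected and time.perf_counter() - t0 <= budget_s:
            wins += 1
    return wins / len(problems)

# Toy domain: primality of small integers.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

problems = [(n, is_prime(n)) for n in range(2, 50)]
questions = [q for q, _ in problems]
mean_t = avg_time_to_solution(is_prime, questions)
rate = success_rate(is_prime, problems, budget_s=1.0)
```

Note that both numbers depend on the hardware the program runs on, not just the algorithm.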
Pick a measure appropriate to the problem. These measures require thinking of the system as a whole, so an AI is not just a program but a physical device running a program.
The domain for the unrestricted claim of intelligence is "reasonable problems". Having an AI tell you the mass of Jupiter or find Earth-like planets is reasonable. Having it move its arms (when it doesn't have any) is not. Having it move _your_ arms is reasonable, though.
The comparison is to the human who is or was most qualified to solve the problem, with the exception of people uniquely qualified to solve the problem (I'm not claiming that the AI is better than you are at moving your own arms).
Besides, an AI might be really good at solving problems in one specific domain. That does not make it anything more than a large calculator designed to solve that kind of problem. Such a calculator does not need to, and will not, become "self-aware". It does not need, and will not have, a "personality". It might solve that narrow class of problems faster than humans, but it will be useless when faced with most other kinds of problems. Is it more intelligent than humans?
It's not at all clear how to develop an AI that can solve any "reasonable" problem, and I don't think that's what most companies and researchers are even trying to achieve. Arguably the best way to approach the problem is to reverse engineer our own intelligence, but even if that succeeds, it will not necessarily lead to anything smarter than what is being reverse engineered.
A computer that is thousands of times more intelligent than humans could do things we might think are impossible: come up with solutions to problems we would never think of in our lifetimes, and manage levels of complexity no human could deal with.
> "A computer that is thousands of times more intelligent than humans, means it can do things we might think are impossible. Come up with solutions to problems we would never think of in our lifetimes. Manage levels of complexity no human could deal with."
Or did you just redefine intelligence as: "the ability to tell what color the sky is?"