Indeed. This is a common fallacy among believers in bootstrapping, superintelligent AI: they overlook that in order to understand and master the real world (and humans, for that matter), you first have to interact with the real world – i.e., you have to run experiments.
Thinking really, really hard about a physical problem will not solve that problem. Ancient philosophers tried it, without much success.[1]
Except ancient philosophers weren't even working with scientific methodology. The real issue is how much empirical data you need in order to form hypotheses about the world and falsify them – and humans are almost certainly not doing a near-optimal job of that. Otherwise we might as well say scientific revolutions are independent of intelligence and the edifice of pre-existing knowledge, and depend only on observed data about the world, and there's substantial evidence against this[1]. For a counterargument from one of the believers in bootstrapping, superintelligent AI: http://lesswrong.com/lw/qk/that_alien_message/
[1] http://infoproc.blogspot.com/2008/07/annals-of-psychometry-i... ... in addition to evidence such as the long periods of time that passed before simple ideas like natural selection, the heliocentric model, and the constant acceleration of gravity were developed.
No matter how many hypotheses you create, you need to be able to test them. AI won't be able to run experiments by itself, at least not in the beginning. And even when it can, some of them will require years to complete, simply because so much science depends on real physical phenomena that are often very slow.
Computer-based simulations are often very slow as well, and AI won't be able to speed them up just because it's AI.
> AI could run experiments by itself if it is provided with enough information to do so. Classical physics (already possible) and even quantum physics, once fully understood, can be programmed. There are already scores of algorithms and computer programs that can run physics experiments, and it doesn't take years. You seem to be misinformed about even present technological capability.
> Computer simulations aren't slow. Distributed computing and processing power allow for a considerable number of experiments to be simulated.
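Even a toy parameter sweep gives the flavour of what I mean. A minimal sketch – the drag model, the "observed" value, and every name in it are made up purely for illustration:

    # Toy version of "many simulated experiments in parallel": fit a drag
    # coefficient to one pretend measurement by sweeping candidates across cores.
    from multiprocessing import Pool

    OBSERVED_HEIGHT = 98.3  # metres; pretend this came from a real experiment

    def simulate(drag, v0=50.0, dt=0.001):
        """Toy 'experiment': how high a thrown object rises under quadratic drag."""
        x, v = 0.0, v0
        while v > 0.0:
            v -= (9.81 + drag * v * v) * dt   # gravity plus drag deceleration
            x += v * dt
        return drag, abs(x - OBSERVED_HEIGHT)  # how badly this hypothesis fits

    if __name__ == "__main__":
        candidates = [c / 1000.0 for c in range(1, 201)]  # 200 candidate drag values
        with Pool() as pool:                              # spread the runs over all cores
            results = pool.map(simulate, candidates)
        best_drag, error = min(results, key=lambda r: r[1])
        print(f"best-fitting drag coefficient: {best_drag:.3f} (error {error:.2f} m)")

Of course that's trivially parallel and trivially cheap; the point is only that "run lots of simulated experiments" is routine once the model itself exists.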
If you work in this area, I feel you get a totally different view of things. You can either say "'x' isn't possible" and end it there, or say "'x' is possible, and I'm going to discover how."
Let's just consider a basic example - AI tries to develop better algorithms to simulate fluid dynamics.
1. How do you expect AI to create these simulations? Will it program them itself? I assume the answer is yes; if so:
a. it needs to know how to program.
b. it needs to know how to efficiently use the hardware that it can access.
c. it may need to know all the quirks of the software, hardware and operating system that it works with.
d. it will also need to have pretty good math skills.
Someone will need to teach the AI how to do all of that, or it will need to learn by itself. This will take time. And experience.
Such an AI will need to run the simulation it created, analyze it, compare results, come up with a hypothesis, and do it all over again to refine the results. What if there is a wrong assumption about fluid dynamics that is caused by wrong assumptions at a lower level, e.g. the theory of matter? How will that change the results, and will the AI need to go back a step lower to "fix" our understanding of matter? What if there are issues with the math that first need to be "fixed", or entirely new mathematical theories that must first be created?
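To make that loop concrete, here's a minimal sketch of a hypothesise/simulate/compare/refine cycle on a toy 1-D diffusion problem. The "measured" profile, the grid sizes, and all the names are my own illustrative assumptions, not a real workflow:

    # Toy refine-the-hypothesis loop: recover a diffusion coefficient by
    # repeatedly simulating, comparing to "measured" data, and narrowing the search.
    import numpy as np

    def diffuse(D, steps=200, n=50, dt=1e-4, dx=0.02):
        """Explicit finite-difference solve of u_t = D * u_xx on a small 1-D rod."""
        u = np.zeros(n)
        u[n // 2] = 1.0                               # initial heat spike in the middle
        for _ in range(steps):
            lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
            u = u + D * dt / dx**2 * lap
        return u

    # Pretend this profile came from a physical experiment with true D = 0.35.
    measured = diffuse(0.35)

    best_D, best_err = None, float("inf")
    for refinement in range(4):                       # iterate: refine the hypothesis space
        centre = best_D if best_D is not None else 0.5
        width = 0.5 / (4 ** refinement)
        for D in np.linspace(max(centre - width, 0.01), centre + width, 11):
            err = np.abs(diffuse(D) - measured).sum() # compare simulation to the data
            if err < best_err:
                best_D, best_err = D, err
        print(f"pass {refinement}: best D so far = {best_D:.4f} (error {best_err:.2e})")

Even this toy has to rerun the simulation dozens of times per pass; a real fluid-dynamics model would be vastly more expensive, and the "measured" data would have to come from actual experiments.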
Of course the AI will not need to have everything perfect right from the beginning, and better theories will suffice, but to say that long-term experiments such as these are not necessary simply because AI will "solve physics" is just wrong.
Whoa, Yudkowsky takes a really long time to get to the point, doesn't he?
So his point seems to be that humans aren't very efficient at processing empirical evidence, and an AI might be better at this because it would be really, really smart?
Yeah, I've always had that same complaint about his writing. Unfortunately, even just the relevant portions I might have quoted wouldn't have fit concisely in a comment. Primarily the parts about how models used to describe the world often predate the observed phenomena they're used to describe (Riemannian geometry for general relativity, in his example), and how human ability to eliminate hypotheses based on observed data isn't even vaguely close to the limits set by information theory.
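To make the information-theory point concrete: each yes/no observation can at best cut the surviving hypotheses in half, so k observations distinguish at most 2^k hypotheses. A toy sketch of that bound – the numbers and the deliberately "sloppy" strategy are my own illustration, not anything from the linked essay:

    # Toy bound: an ideal reasoner halves the hypothesis space with every
    # yes/no observation; a sloppy one needs far more observations.
    import math
    import random

    HYPOTHESES = list(range(100_000))        # 100k candidate "laws"
    truth = random.choice(HYPOTHESES)

    def observations_needed(split):
        """Yes/no tests until one hypothesis survives, when each test separates a
        fraction `split` of the remaining candidates from the rest."""
        alive, count = list(HYPOTHESES), 0
        while len(alive) > 1:
            cut = max(1, int(len(alive) * split))
            group = set(alive[:cut])
            answer = truth in group          # the "observation": is the truth in this group?
            alive = [h for h in alive if (h in group) == answer]
            count += 1
        return count

    print("information-theoretic minimum:", math.ceil(math.log2(len(HYPOTHESES))), "observations")
    print("optimal 50/50 splits:", observations_needed(0.5), "observations")
    print("sloppy 1% splits:", observations_needed(0.01), "observations")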
Not that all this is to say advances in AI will automatically guarantee wonderful new physics, much less a final theory explaining it all, but it seems really unlikely that massively improved cognition (through AI or other means) wouldn't lead to much faster progress on all sorts of problems. We're certainly not making optimal use of our observed data about the world, and even if that were the bottleneck, AI should be better at collecting and correlating more data at once anyway. The LHC already relies on machine learning to sort through the massive number of collisions and detections being produced, so the recent experimental advances are basically reliant on AI as it is.
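For flavour, here's a minimal sketch of that kind of ML event filtering on synthetic data. The "collision" features, the probability cut, and the classifier choice are all invented for illustration; this is obviously not actual trigger code:

    # Toy "keep the interesting collisions" filter: train a classifier on
    # synthetic signal/background events, then keep only high-scoring events.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)

    def synth_events(n, signal):
        """Fake detector read-outs: [total energy, missing energy, jet multiplicity]."""
        if signal:
            return np.column_stack([rng.normal(120, 15, n),   # resonance-like energy peak
                                    rng.normal(40, 10, n),
                                    rng.poisson(4, n)])
        return np.column_stack([rng.normal(80, 30, n),         # smooth background
                                rng.normal(20, 15, n),
                                rng.poisson(2, n)])

    X = np.vstack([synth_events(2_000, True), synth_events(20_000, False)])
    y = np.array([1] * 2_000 + [0] * 20_000)
    clf = GradientBoostingClassifier().fit(X, y)

    # "Online" filtering: keep only events the model scores as likely signal.
    stream = np.vstack([synth_events(100, True), synth_events(10_000, False)])
    kept = stream[clf.predict_proba(stream)[:, 1] > 0.9]
    print(f"kept {len(kept)} of {len(stream)} events for offline analysis")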
You and the commenter above you are missing the point. First, watch:
https://www.youtube.com/watch?v=rbsqaJwpu6A (AI and physics are quite related and workable together)
^I reference the above video not because I watched it and thought "hey, that makes sense." I reference it because I already felt the same in my work on AI and thought "wow, someone else gets it."
Also:
https://www.youtube.com/watch?v=NM-zWTU7X-k
It's good to keep all sorts of working ideas/knowledge in your head when you're working towards solving a problem. All the physics experiments in the world aren't going to give you insight into problems unless you have workable ideas for how to integrate the findings into understanding/knowledge.
Where people often err is in thinking that you can just keep hacking away and brute-force your way to knowledge/understanding ...
“A theory can be proved by experiment; but no path leads from experiment to the birth of a theory.” — Albert Einstein
Ancient philosophers solved a lot of problems, more than you seem to be giving them credit for. It's from their ideas that people knew where to look to 'prove' things. Watch the Feynman video and truly grasp what he is saying.
"All the physics experiments in the world aren't going to give you insight into problems unless you have workable ideas for how to integrate the findings into understanding/knowledge."
That's why you need both. You can't solve physics just by thinking about it, and you can't solve it just by doing experiments. And "solving physics" may also involve solving math, chemistry, philosophy, and even linguistics (after all, "the new physics" may not be describable in our own language, and the AI will need to come up with a creative way to explain it to us).
EDIT> Why couldn't a GAI talk to every living physicist at the same time? Read all the literature we've ever written, and find links that we didn't notice?
It will be able to run experiments, but that will take a lot of time and many iterations of hypothesis/experiment/evaluation. My point was: many super-AI thought experiments assume that the AI is able to come up with new ways to manipulate the physical world (or humans) just by thinking about it, without running experiments. This is a flawed idea.
> Why couldn't a GAI talk to every living physicist at the same time? Read all the literature we've ever written, and find links that we didn't notice?
It would still have to run experiments in an effort to falsify its hypotheses; otherwise there's no way to be sure they're correct.
[1] e.g., https://en.wikipedia.org/wiki/Aristotelian_physics