
Interview with Scott Aaronson - robinhouston
http://blogs.scientificamerican.com/cross-check/scott-aaronson-answers-every-ridiculously-big-question-i-throw-at-him/
======
ced
_In social sciences, there’s an absolutely massive bias in favor of publishing
results that confirm current educated opinion, or that deviate from the
consensus in ways that will be seen as quirky or interesting rather than cold
or cruel or politically tone-deaf._

Isn't that a massive problem for meta-analyses? If there are 15 published
studies supporting the consensus/politically correct position and 5 against
it, that might actually be evidence against the consensus, if fewer than a
third of researchers with against-consensus results have dared to publish
them.
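The arithmetic behind this worry can be sketched in a few lines of Python (the publication rates here are purely illustrative, not drawn from any real meta-analysis):

```python
# Back-of-envelope version of the point above: if confirming results
# are always published but only a fraction of against-consensus
# results are, the published tally understates the true split.

def estimated_true_counts(pub_for, pub_against, p_against, p_for=1.0):
    """Estimate how many studies were actually run on each side,
    given published counts and per-side publication rates."""
    return pub_for / p_for, pub_against / p_against

# 15 published for the consensus, 5 against, and only a third of
# against-consensus results making it into print:
ran_for, ran_against = estimated_true_counts(15, 5, p_against=1 / 3)
print(ran_for, ran_against)  # roughly 15 studies on each side
```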

~~~
pron
With all due respect to Aaronson (with whom I find myself agreeing on many
things), I wouldn't take his evaluation of the state of social science and his
analysis of its core problems as correct. He's not a social scientist, and I
don't think he'd appreciate an anthropologist evaluating the state of affairs
in quantum computing research.

~~~
privong
> He's not a social scientist, and I don't think he'd appreciate an
> anthropologist evaluating the state of affairs in quantum computing
> research.

But, from things I have read elsewhere, there does seem to be a reasonably
convincing body of evidence that there are systematic biases in social
sciences. Some of this perhaps stems from discrimination (e.g., in hiring
practices) based on political allegiance. Jonathan Haidt (a social scientist)
has spoken out on this topic[0], including specifically on sociology[1]. While
that issue likely affects academics in general, political leaning presumably
has a much smaller influence on one's research in physics and chemistry than
on one's research in sociology. It's true that Aaronson is not a social
scientist, but I don't think that means he is wrong.

[0]
[http://people.stern.nyu.edu/jhaidt/postpartisan.html](http://people.stern.nyu.edu/jhaidt/postpartisan.html)

[1] [http://www.mindingthecampus.org/2016/02/a-conversation-with-jonathan-haidt/](http://www.mindingthecampus.org/2016/02/a-conversation-with-jonathan-haidt/)

~~~
dilemma
There are systematic biases in the social sciences, and there are systematic
biases in physics. Read Lee Smolin's _The Trouble with Physics_. As they are
social systems, they will always be biased. And the most dangerous bias is the
one that claims itself unbiased, fully objective.

~~~
Rylinks
Physics can collect so much data that the generally accepted standard for
results is five sigma. There are biases in physics, but it's much harder to
delude yourself into a five sigma result than a two sigma one--in the end the
right answer becomes obvious.
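For a sense of scale, here is a quick sketch of the two thresholds, assuming Gaussian statistics and a two-sided test (conventions vary; particle physics often quotes one-sided values):

```python
# Tail probability that pure chance produces a fluctuation of at
# least the given significance, assuming Gaussian statistics.
import math

def two_sided_p(sigma):
    """Two-sided p-value for a |z| >= sigma fluctuation."""
    return math.erfc(sigma / math.sqrt(2))

print(f"2 sigma: p ~ {two_sided_p(2):.2g}")  # ~0.046, about 1 in 20
print(f"5 sigma: p ~ {two_sided_p(5):.2g}")  # ~5.7e-07, about 1 in 1.7 million
```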

~~~
privong
> Physics can collect so much data that the generally accepted standard for
> results is five sigma. There are biases in physics, but it's much harder to
> delude yourself into a five sigma result than a two sigma one--in the end
> the right answer becomes obvious.

That's true in the case of clear-cut predictions from theory or empirical
relations. But bias generally arises in a more subtle fashion than statistics
on measurements. Even if you have a "5-sigma result", it still needs to be
interpreted, and that's where biases can rear their heads. One possible
manifestation (though not the only one): if one subscribes to a particular
theory, one is generally biased toward interpreting results within the
framework of that theory. And if one isn't careful, that can lead to
diminishing the weight of evidence that conflicts with the theory (e.g., "it's
5-sigma, but those data aren't actually relevant to this issue").

As an example, one area I'm actively researching is the emission from
molecules in galaxies. We see that some galaxies show enhancements in emission
from a specific molecule (HCN). Everyone agrees that the enhancements exist,
but there's wide disagreement on what causes them. And much of the discussion
around it parallels people's opinions/biases about which is more important for
a galaxy: how many new stars it is forming versus how active the supermassive
black hole is at the galaxy's center. The preferred interpretations of the
extra HCN emission seem to correlate very well with people's prior notions*
(which is not surprising). Of course, more data will eventually provide a
concrete solution to this question, though what new data are collected is
often chosen within an existing framework.

* - I'm sure it's true for me, even though I try to mitigate it as much as possible. But we often have trouble recognizing our own biases.

~~~
Rylinks
>That's in the case of clear-cut predictions from theory or empirical
relations. But bias generally arises in a more subtle fashion than statistics
on measurements, though. Even if you have a "5-sigma result", it still needs
to be interpreted, and that's where biases can rear their heads.

I agree, interpretation is the tricky thing. The difference is that in social
sciences, both the result and the interpretation are suspect.

------
fitzwatermellow
News to me that he is moving to UT Austin to build a center for quantum
computing and theory. Terribly exciting! The impetus is a state-funded
allocation of $4B+ for basic research. It really raises the question: are our
governments spending way too little on moon shots? And what could have been
accomplished in the last two generations if the focus had been on pure science
rather than military technologies?

~~~
cowsandmilk
> the focus had been on pure science rather than military technologies?

I find your question a little funny in that Aaronson has said the DOD is where
much of the funding for his research will come from for the next 5 years[1].

So, the $4 billion in Texas likely funded his recruitment package, but it is
the military that is funding his research.

[1]
[http://www.scottaaronson.com/blog/?p=2687](http://www.scottaaronson.com/blog/?p=2687)

~~~
fitzwatermellow
What constitutes a "good return" on research dollars spent? Is it mere value
created? Or a breakthrough in fundamental human understanding? I'm actually
quite intrigued now to mine some real data!

~~~
drumdance
I don't know that that's really measurable except over very long periods of
time. A discovery may be "interesting but useless" for many decades until
engineering catches up, whether through improvements in engineering technology
or subsequent discoveries that make the first discovery more actionable.

------
plesiv
I want to congratulate the person who invited Scott Aaronson to speak (at New
York University, about integrated information theory) and didn't record it and
publish it on YouTube. /sarcasm

Seriously guys, we only have that many people like Scott. Please don't take
their smarts, wisdom and time for granted.

~~~
T-A
[http://www.scottaaronson.com/blog/?p=1799](http://www.scottaaronson.com/blog/?p=1799)

[http://www.scottaaronson.com/blog/?p=1823](http://www.scottaaronson.com/blog/?p=1823)

[http://www.scottaaronson.com/blog/?p=1893](http://www.scottaaronson.com/blog/?p=1893)

------
foldr
The discussion of whether or not we have free will strikes me as confused.
Aaronson suggests that we would clearly not have free will in a scenario where
“Everything [we] did could be fully traced to causal antecedents external to
[us], plus pure randomness—not in some philosophical imagination, but for
real, and on a routine basis”. The caveat “external to [us]” is doing a lot of
work here, but it really has no discernible meaning. (Is a cause “external to
us” if it’s not within our brains? Why would that matter? What is the status
of chains of causation that lead from the outside world to goings on in our
brains?) Without that essentially meaningless caveat, the statement is simply
a trivially true disjunction of the logical possibilities given physicalism.
_Of course_ , if physicalism is true, then everything that we do is either
determined by the laws of physics or not determined by anything. If that
entails that we don’t have free will, then Aaronson should just say that he
thinks physicalism is true and that he thinks that physicalism entails the
absence of free will. On the other hand, if we don’t simply assume
physicalism, then the fact that our behavior is determined by the laws of
physics really does nothing to argue against free will. Everything would hang
on how minds interact with physical reality, and we simply know nothing about
how that might work. For example, it might be that God ensures that the
physical world operates so as to respect the free decisions made by our minds.
Or it might be that phenomena which are random from the point of view of
physics are (sometimes) linked to free decisions made by minds. Nothing is
known, so nothing can be concluded.

I'd also have to say that Aaronson lacks imagination in a crucial respect. He
claims that in any "imaginable" universe it will hold that "if you knew the
complete state of the universe, you could use it to calculate [either
deterministically or probabilistically] everything I’d do in the future." But
of course it's very easy to imagine a universe where this is not the case.
E.g., any universe where minds make free decisions that lack physical causes.
We know that people have in fact imagined this sort of universe.

~~~
davmre
I think you're missing the point somewhat. Yes, Aaronson assumes that
something like physicalism is true, which I think is fairly defensible as a
21st-century scientist. But the argument isn't as simple as "physicalism
entails the absence of free will". His point is that even if our actions are
(probabilistically) _determined_ by a set of underlying physical laws, it may
still be that the character of those laws is such that it is fundamentally
impossible to _predict_ future actions, in which case we would still have
meaningful free will.

The main form of prediction-impossibility he mentions is that our minds might
depend meaningfully on quantum states, which are governed by the no-cloning
theorem and therefore impossible even in principle to extract. Whether or not
this is true is an open question, but the characterization of free will in
terms of the concrete question "is it possible to build a prediction machine?"
seems like a useful and novel contribution that doesn't just reduce to the old
question of "is physicalism true".

~~~
davmre
To follow up: I personally see some extra subtlety here, in that even if a
prediction machine did exist, we could _still_ have a meaningful form of free
will, because that machine would be governed by Rice's theorem
([https://en.wikipedia.org/wiki/Rice's_theorem](https://en.wikipedia.org/wiki/Rice's_theorem)),
which says that in general the only way to predict the outcome of a
computation is to run the computation. (This is closely related to the halting
problem.)

That is, even if I could capture the relevant aspects of your mind state on a
computer, i.e., the no-cloning theorem is not a barrier, it's still the case
that the only way for me to predict what you will do in two minutes' time is
to actually run your mind inside the computer for two minutes of subjective
time. This would constitute a prediction machine in Aaronson's sense, assuming
the computer could run faster than realtime. But since the computation that
occurs is literally following your thought process step by step (Rice's
theorem says there are no shortcuts), there's a real sense in which the
decision was not determined until you -- the version inside the computer --
actually made it.

Of course this account depends on a functionalist/computational theory of
mind, which can be quibbled with in all sorts of ways. But it does seem
important for the philosophical discussion on free will to at least take into
account the things we now know about the nature of computation, the character
of physical law, and other scientific/mathematical questions, that were not
known decades or centuries ago when the philosophical battle lines were being
drawn. Yes, scientists can be arrogant and are often just wrong or deeply
misinformed on philosophical questions. But that doesn't remove the
responsibility of philosophers to engage with the frontiers of knowledge on
these issues, which means engaging with people like Scott rather than just
dismissing them as "confused". (as if anyone thinking about free will has ever
not been confused!)

~~~
foldr
I don't exactly disagree with you (although let me point out that I said the
discussion was confused, not Scott), but I'm not sure where people get the
impression that philosophers aren't engaging with physicists on these
questions. The ramifications of quantum physics for free will have been
discussed endlessly by philosophers, and often by philosophers who have a
significant amount of training in physics.

------
Luc
> [I]f you want me to rush to the Singularity community’s defense, the way to
> do it is to tell me that they’re a weirdo nerd cult that worships a high-
> school dropout and his Harry Potter fanfiction, so how could anyone possibly
> take their ideas seriously?

Ha, good one! I really love his long rambling answers, very interesting.

------
lucb1e
For those reading comments before the article: the title is rather silly, but
the interview contains lots of interesting stuff. For example, the answers to
questions 14 and 15 sound very sane to me, compared to most people's views on
those topics.

~~~
johngossman
I agree:

"I think that, if civilization lasts long enough, then sure: eventually we
might need to worry about the creation of an AI that is to us as we are to
garden slugs, and about how to increase the chance that such an AI will be
“friendly” to human values (rather than, say, converting the entire observable
universe into paperclips, because that’s what it was mistakenly programmed to
want)."

------
devilsavocado
I'd recommend Scott Aaronson's book Quantum Computing Since Democritus to
anyone interested in quantum computing, mathematics, and computer science in
general. It covers a vast array of material, from the basics of set theory to
Gödel and Turing, cryptography, quantum computing, free will, and time travel.
It doesn't go very in depth into any specific topic, but for an engineer who's
been out of college for a few years, it was at a perfect level to rekindle a
love of learning and interest in a variety of topics.

------
Xcelerate
> I’m friendly with many of the people who spend their lives that way
> [thinking about how to transfer consciousness into a computer]; I enjoy
> talking to them when they pass through town (or when I pass through the Bay
> Area, where they congregate).

^ I think that's my favorite quote from his interview. The Bay Area certainly
has a reputation.

------
tim333
> 6. What hype about quantum computers really drives you nuts?

That one was good for me. Articles about quantum computing have usually not
quite made sense to me, and his reply does a good job of explaining why that
so often is.

------
eachro
Does anyone know why he's leaving MIT for UT Austin?

~~~
zodiac
He has a blog post about it -
[http://www.scottaaronson.com/blog/?p=2620](http://www.scottaaronson.com/blog/?p=2620)

~~~
eachro
Maybe I should've just asked the question I really wanted to ask: did MIT deny
him tenure? If so, why? From what I've heard, Scott Aaronson is one of the
rising stars of theoretical CS, so him moving on from MIT was very surprising
to hear.

~~~
qms
No: [http://www.csail.mit.edu/node/2041](http://www.csail.mit.edu/node/2041)

------
graycat
Nice interview: Nice clarity on quantum computing and P versus NP.

~~~
enriquto
But be careful, his interpretation of P versus NP is rather light-hearted (but
OK, given that it's not his speciality). He says:

 _For example, breaking almost any cryptographic code can be phrased as an NP
problem. So if P=NP—and if, moreover, the algorithm that proved it was
“practical” (meaning, not n^1000 time or anything silly like that)—then all
cryptographic codes that depend on the adversary having limited computing
power would be broken._

But the parenthetical remark is not a negligible detail. The class P contains
those O(n^1000) algorithms, and also much, much slower ones. Being in P does
not at all mean "efficient". This is one of the reasons why many people (e.g.
Knuth) say that P=NP is not crazy: the class P is so huge that it must surely
contain extremely clever algorithms that reduce NP to P. Yet there may be no
practical implications of this equality.
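To make the parenthetical concrete, a toy calculation: an O(n^1000) algorithm is formally in P, yet already hopeless on a two-element input.

```python
# An O(n^1000) algorithm is polynomial, but even at n = 2 its step
# count is a 302-digit number -- dwarfing the ~10^80 atoms in the
# observable universe.
steps_at_n2 = 2 ** 1000
print(len(str(steps_at_n2)))  # 302 digits
```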

~~~
graycat
Yes. So far nearly all our famous algorithms run in log(n), n, n log(n), or
n^2 time. So, we don't have much experience with algorithms that run in n^1000
time or insight into what such algorithms might do.

On the other hand, too commonly when we can't find an algorithm that runs in,
say, n^2 or faster, we do have algorithms that run in 2^n. So, we suspect that
there is something fundamental about exponential and fundamentally weak about
polynomial. But as you point out, that is just something we detect sniffing
with our nose.

The point where I critique the P versus NP issue is the claim, implicit or
explicit, that if we have a practical instance of a problem in NP-complete,
then that instance has to be too hard to solve in practice. Not necessarily
so: Many particular instances of an NP-complete problem, even with what appear
for practice to be quite large values of n, might be fairly easy to solve.
E.g., for an instance of the knapsack problem (IIRC NP-complete), attack with
a cute version of dynamic programming. For a practical instance of 0-1 integer
linear programming (NP-complete), attack with the simplex algorithm, maybe
branch and bound, maybe Lagrangian relaxation, etc.
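The "cute version of dynamic programming" for knapsack is presumably something like the textbook pseudo-polynomial table, sketched here; it runs in O(n * capacity) time, so NP-completeness is no obstacle when the integer capacity is modest:

```python
# Textbook 0-1 knapsack dynamic program: pseudo-polynomial
# O(len(items) * capacity) time, despite the problem being
# NP-complete in general.

def knapsack(items, capacity):
    """items: list of (weight, value) pairs; returns the best total value."""
    best = [0] * (capacity + 1)  # best[w] = max value at weight budget w
    for weight, value in items:
        # Sweep weights downward so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

print(knapsack([(3, 4), (4, 5), (2, 3)], capacity=6))  # -> 8
```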

And, a related issue for practice: in some optimization problems, the goal is
to save money. In practice it can be fairly easy to save the first 15% of
everything being spent, and to save all but the last $0.01 that can be saved
even in theory (that is, to come within a penny of optimality), but a total
pain to save that last penny and know that we have done so.

The OP said that if we have an algorithm that shows P = NP, then for each of
the other Clay Math problems, say problem X, and for each positive integer n,
we can ask: is there a string of n or fewer characters that, in the sense of
Whitehead and Russell, is a proof of problem X? -- and let our P = NP
algorithm answer that. Well, we'd be super happy with anything that would give
us such an answer for just one case of problem X and just one realistic value
of n, and to heck with the general case.

More generally, the theory of NP-completeness concentrates on worst-case
instances of problems, and not all real instances are worst case or nearly so.

So, for the practical instances of problems in NP-complete that we can solve,
exactly or close enough to save nearly all the money, just do so, take the
money to the bank, and be happy. Then, sadly, we have to notice that currently
in practice there is surprisingly little interest in such problems and
solutions. Heck, someone could take a collection of the current, routine
software for solving instances of problems in NP-complete, advertise (falsely)
that they have an algorithm that shows that P = NP and also have corresponding
software, run their software in the cloud, invite people to submit problems,
and charge big bucks for solutions. Then, get paid for the practical instances
their old, routine algorithms do solve and make some excuse for the rest --
money back guarantee. Problem is, I doubt that very many people much care.

Why do I suspect that people don't care? Because I've seen several important
practical cases, and there people didn't much care.

But, I confess, there is a lot of misunderstanding out there. E.g., once I was
talking with some people who needed to solve exactly (though approximately
would have been good, too) a lot of instances of 0-1 integer linear
programming problems (definitely NP-complete). So I explained that I had
recently solved an instance of 0-1 integer linear programming that had 40,000
constraints and 600,000 variables. My solution took 905 seconds on a 90 MHz
PC, and the feasible solution found was within 0.025% of optimality and might
have been optimal.

The people I was talking to were happy? Nope: They had been told about the
difficulty of the NP-complete problems, heard the big numbers 40,000 and
600,000, and concluded that I had to be lying. I wasn't lying. It's just that,
while 0-1 integer linear programming is in NP-complete, not all instances of
large 0-1 integer linear programming problems are difficult to solve; instead,
in practice, a lot of instances are quite reasonable to solve exactly or
plenty close enough for essentially everything of interest in practice. Right,
0-1 with 600,000 variables, so for total enumeration we're looking at
2^600,000. Gotta be impossible, right? Nope. Instead it was fairly easy.

So, it boils down to this: the question of P versus NP is, except for the $1
million Clay prize, a _pure math_ question, heavily of long-term philosophical
interest rather than an engineering problem of current practical interest. For
current practical interest, we should be attacking practical instances of
problems, which, with current algorithms, software, and computing, we can
often handle quite well.

------
pbw
Wow he seems very smart but not very succinct.

~~~
johngossman
It is an incredibly long interview, but I assume it was e-mail. And full of
short gems like:

"QM isn’t even “physics” in the usual sense: it’s more like an operating
system that the rest of physics runs on as application software"

~~~
disconcision
That's a line straight out of his book _Quantum Computing Since Democritus_.
Highly recommended if you want to get a bird's-eye view of the foundations of
quantum logic without all that pesky physics (and physical history) getting in
the way.

~~~
johngossman
I've actually read it, I just didn't remember this line. Thanks. And I agree,
great read.

