Will string theory finally be put to the experimental test? (scientificamerican.com)
37 points by pseudolus 12 days ago | 47 comments





What this article means by "experimental test" is "whether or not string theory can incorporate inflation". Which is not an experimental test at all--if it's any kind of "test", it's a theoretical test.

This post by Peter Woit, a string theory skeptic, may be relevant:

https://www.math.columbia.edu/~woit/wordpress/?p=11675


The most important bit:

"It seems to be very hard to get some people to understand that the number of “tests of string theory” is not “very few” but zero, for the simple reason that there are no predictions of string theory, generic or otherwise."

What's the counterargument to that, then? I'm curious!


I think the other comment puts it well. String theory is a model builder, or a model factory. Models are testable; factories should be judged from the point of view of their users. Physics theorists are the users here: some think the testable models it puts forth are rubbish, others think it puts forth good ones. Would you say Math itself is rubbish because it allows for such models to exist? I don't think so. I think string theory (as a model factory) might be the best tool for finding minimal, elegant, testable theories that satisfy known constraints. After all, isn't that what science, and especially physics, is really doing?

If the string theory framework has yet to come up with a verified model after nearly 40 years, is it really that good of a framework?

The problem is that it's produced _many_ verified models that could work with all of known physics. The hard part is coming up with a test that distinguishes them somehow.

It hasn't produced any model that reproduces the confirmed predictions of the Standard Model, so no, I don't think your claim there is true.


What that says is that String Theory can predict anything, therefore it can predict the Standard Model. This is not a good look for a "theory".

That's what I'm trying to get across: it's not a good "theory", it's a model factory. Forget the "theory" in string theory; it's not predictive yet, but it will be once we have established the parameters of our low energy world well enough. What it does do is define what a theory of everything probably looks like, and that helps theorists agree on what proper measurements even look like.

If it allows one to predict anything, then it is totally useless. It doesn't help theorists do anything but extract $$$ from gullible funding agencies.

The notion that "it will be once we have established the parameters of our low energy world well enough" is not supported by any evidence, unless you are saying "once we have given it the correct theory to imitate, it will imitate it correctly". A sheet of paper could do that, though.


Creating models that work with all known physics, without predicting new phenomena that can be tested, seems very similar to what we did with epicycles.

i.e. it is just creating a complicated system to "curve fit" known data.
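To make the curve-fitting point concrete: a sum of epicycles is mathematically a Fourier series, and with enough terms it can reproduce any sampled closed path exactly. A rough Python sketch (the "observed" orbit below is made up purely for illustration):

  # Any sampled closed path can be rebuilt from "epicycles": the terms of its
  # discrete Fourier transform, each a circle rotating at a fixed rate.
  import numpy as np

  rng = np.random.default_rng(0)
  t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
  # an arbitrary wobbly closed path standing in for observed planetary data
  orbit = np.exp(1j * t) + 0.3 * np.exp(3j * t) + 0.05 * rng.standard_normal(64)

  coeffs = np.fft.fft(orbit)  # one coefficient per epicycle

  def epicycle_fit(n):
      """Keep only the n largest epicycles and rebuild the path from them."""
      kept = np.zeros_like(coeffs)
      biggest = np.argsort(np.abs(coeffs))[::-1][:n]
      kept[biggest] = coeffs[biggest]
      return np.fft.ifft(kept)

  for n in (1, 2, 8, 64):
      err = np.max(np.abs(epicycle_fit(n) - orbit))
      print(f"{n:>2} epicycles: max error {err:.3f}")

With all 64 terms the reconstruction matches to machine precision, which is exactly the "it fits anything" worry.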


Epicycles created a consistent framework in which to produce repeated measurements and observe anomalies. It was an agreement that this was the basis on which observations would be compared, and it turned out to encompass the minimal correct theory. I think that is how to view string theory.

The difference is Pareto efficiency on the (time, quality) axes. Circles, and then epicycles, were the best models known until they were superseded by ellipses. String Theory is not the "latest and greatest"; it's the latest and less good than its predecessor, making it useless as physics.

How many testable models has the model factory produced?

zero.

> Would you say Math itself is rubbish because it allows for such models to exist?

If Math had only ever produced such models? Yes, yes I would. And I think so would you, and most other people.


It's like saying that calculus makes no testable predictions. If you don't choose your parameters, it's not a physical theory.

Calculus is math. Math is about logic and relationships. Math isn't based on empirical observations; it doesn't predict real phenomena and isn't necessarily related to them. Math isn't an empirical science.

Are you saying string theory is pure math?

If not, then the comparison doesn't make sense.


Calculus makes all kinds of testable predictions. They're called "theorems".

A more recent post of his that specifically mentions the SciAm article:

https://www.math.columbia.edu/~woit/wordpress/?p=11690


I don't understand why string theory gets singled out for this. It is a theory of ultra high energy physics. No theory of ultra high energy physics is testable with current technology, or any foreseeable technology. At best we make inferences from natural experiments, and that is equally true of any alternative to string theory.

String theory is the theory that gets popular exposure, and (by and large) string theory has heavily dominated the high energy physics community. While the criticism may be more general than just string theory, it's the one that's going to be targeted the most.

People seem to take the lesson that string theory is specifically flawed, and we should study some other theory instead. It's really an argument that we shouldn't study any ultra high energy domain at all. I disagree but it would be a more productive discussion than the flawed idea that string theory is uniquely untestable.

We criticize bad things people do before bad things people aren't doing.

Woit responded to this article today:

https://www.math.columbia.edu/~woit/wordpress/?p=11690


String theory is a framework (to build models) and not a model. It cannot be falsified anytime in the near future — if certain models happen to be falsified by observational/experimental constraints, then the framework provides ways to construct other models circumventing the constraints. Likewise, even if this particular experiment provides support for the models under consideration, clever theorists will probably construct other non-stringy models explaining the same result. So take everything with a pinch of salt.

Now, “falsification” itself is a silly/naive cartoon-ish framework for how to do science. It has its uses, but we mustn’t cling to it too much (which would be cargo culting). We certainly have many other useful approaches as well.

In fundamental physics, unlike models of other limited domains, experimental results/constraints over history keep composing on top of each other. So, it is very hard to build a theory that satisfies every one of the past constraints. In a sense, string theory is the only example we’ve managed, on that front. Other “alternatives” consider a much more limited domain (quantum gravity), for better or for worse.

Whether we must continue investing effort/resources on that front is a political question, not a scientific one. And that is a complicated question, with many aspects to consider. Let’s please not get hung up on the “falsification” bugbear. If we were really nitpicky about falsification, we would completely cease doing/studying/researching psychology, economics and a whole host of complex topics.


>Now, “falsification” itself is a silly/naive cartoon-ish framework for how to do science. It has its uses, but we mustn’t cling to it too much (which would be cargo culting). We certainly have many other useful approaches as well.

I don't understand your point. You're saying something like "even though GR supersedes Newton it doesn't falsify Newton, and Newton is still useful". Yes, incorrect theories can still be useful, but that doesn't make them correct. As soon as I use Newton for very fast things (or very small things) and get poor results, that is falsification of the claim that Newton is a GUT. Maybe that isn't an interesting claim (though certainly there are a lot of theorists who are interested in that claim), but that's subjective, not formal.


That's because broken theories can still be used as frameworks to predict outcomes in coarser-grained slicings of systems. Falsification comes into play when modelling, but as a test of scientific rigor it's still important to pursue, because it finds the limits of your theories.

Newton's theory isn't incorrect, it's only applicable within certain limits. It's no more incorrect than statistical mechanics is because it doesn't work with single particles.

I like your high standards, but then all current theories are incorrect. (You cannot do QM in GR and vice versa.)

> If we were really nitpicky about falsification, we would completely cease doing/studying/researching psychology, economics and a whole host of complex topics.

That's totally false. Don't lump in other sciences with whatever physics is doing with string theory just because you don't have experience with them.

Psychology and economics are ruled by falsifiable theories! You might not be able to run experiments that are as clean as one might want, but our theories of how, for example, children learn language are absolutely falsifiable and they make concrete predictions about the kinds of errors children should make. Or what kind of behavior you should see from your dog when you train them, theories built up with hard work over a century of modeling and experiments.


> Now, “falsification” itself is a silly/naive cartoon-ish framework for how to do science. It has its uses, but we mustn’t cling to it too much (which would be cargo culting). We certainly have many other useful approaches as well.

It seems to me scientific realism only worked as long as we kept falsifying things. The main example people point to is the moon landings, which were pretty long ago now. What other useful approaches do we have? I've studied philosophy of science--once the philosophers told scientists that falsification was cartoonish, it seems like they moved on to more social means of demarcating science, but ultimately a scientific statement has an epistemic value on some spectrum. I'd say it's much more valuable when we are able to falsify--how else can we determine how true something is?

The main progress in science has all come from engineering recently--we have fast computers so we can just throw the problem at the computer. That is ending now, or at least it will end at some point. Ultimately without a better scientific theory we can't give computers harder problems.

I was taught logical positivism had failed, however I don't believe it really has failed.

What successes has this new modern method of science really had? I don't see quantum mechanics really progressing that much beyond filling in the paradigm. Finding a new paradigm is where science really shines, and I think that tends to happen with methods of Popper and Ockham.

Finding simpler, more predictive theories has a much larger effect on society than finding theories that are extremely complex and explain only 0.0001% of the phenomena.

If you think our current theories are necessarily complex, then you're a Platonist, eeek!


Almost nothing can be solved by current computers. Try to simulate 12-20 atoms in QM, or 1 mol of gas in CM, or QFT at sizes larger than a few angstroms: it's all too complex.
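For a rough sense of the scaling, here is a back-of-the-envelope Python sketch (assumed, simplified numbers: each degree of freedom is treated as only two levels, which real atoms are not):

  # Brute-force quantum simulation stores one complex amplitude per basis state,
  # and the basis grows as 2**n for n two-level degrees of freedom.
  BYTES_PER_AMPLITUDE = 16  # one complex128

  for n in (12, 20, 50, 100):
      amplitudes = 2 ** n
      bytes_needed = amplitudes * BYTES_PER_AMPLITUDE
      print(f"n={n:>3}: 2**{n} amplitudes, ~{bytes_needed:.1e} bytes of state vector")

Around n = 50 the state vector alone is tens of petabytes, before you do a single time step.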

> Now, “falsification” itself is a silly/naive cartoon-ish framework for how to do science. It has its uses, but we mustn’t cling to it too much. We certainly have many other useful approaches as well.

Like what? If a model generates testable predictions, surely it's still important to go ahead and test them?


It does generate testable predictions, the problem is that they're all in energy ranges that we can't hope to reach in the foreseeable future.
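For scale, a rough back-of-the-envelope comparison (well-known order-of-magnitude figures; the string scale itself is model-dependent and could in principle sit lower than the Planck scale):

  from math import log10

  PLANCK_ENERGY_GEV = 1.2e19  # characteristic quantum-gravity scale, roughly
  LHC_ENERGY_GEV = 1.3e4      # ~13 TeV collision energy

  gap = PLANCK_ENERGY_GEV / LHC_ENERGY_GEV
  print(f"the LHC is about 10**{log10(gap):.0f} times short of the Planck scale")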

So... not actually testable...

> If we were really nitpicky about falsification, we would completely cease doing/studying/researching psychology, economics and a whole host of complex topics.

Each scientific discipline has its own standards. We're talking about physics, so I don't think what psychologists are or aren't doing with their field is relevant.

In physics you used to produce predictions and test those predictions experimentally. It's not too sophisticated, but it has worked pretty much miraculously so far. Now, if we're into building models that can't produce predictions, that might be deep and useful at some point, but it doesn't mean everybody else has to accept an inversion of epistemology to make room for it, simply because there are people asking to do so.


Falsifiability is not a political cult bugbear. There are very good reasons to pursue it that you can't just handwave away like that, IMO.

The psychology and economics examples are very ironic to me, as I see the main issue with the effectiveness of those fields as being precisely the lack of falsifiable results in their research. That leaves them prone to a large body of bogus theories being used, to the detriment of the field.


Good explanation. Another way to say it is that String Theory (if it stands for anything) represents something like 2^500 possible theories (I forget the actual exponent), and if the experiment rules out half of those theories there are still 2^499 alternatives to go, so not much of a test really.
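The arithmetic is worth spelling out: if each decisive yes/no measurement can at best halve the candidate set, you need on the order of the base-2 logarithm of the count to single one out. A quick Python sketch using that often-quoted (and rough) 2^500 figure:

  from math import log2

  candidates = 2 ** 500             # rough, often-quoted size of the landscape
  after_one_test = candidates // 2  # one binary result eliminates at most half

  print(f"after one test: 2**{log2(after_one_test):.0f} candidates remain")
  print(f"binary tests needed to isolate one vacuum: {log2(candidates):.0f}")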

>Now, “falsification” itself is a silly/naive cartoon-ish framework for how to do science.

Unfortunately, Popper's "theory" has become the dominant go-to for the epistemology of science today. Many scientists now think science = falsifiability, which is wrong. Popper over-generalized a narrow method and elevated it to a global and distinguishing characteristic of the scientific method. Popper's error was based on the assumption that logic = deduction (i.e. that identifying a contradiction is our only means to "truth") and it completely ignored induction, which is the real problem in epistemology. You can't prove something false without first knowing what is true. Knowledge or truth is not what is left over after you prove everything else false, which is an impossible task anyway.


Sorry, can you define deduction for me? The way you're using it seems different to the ways I've seen it used elsewhere.

It is hard to answer your question without knowing your context. I'll assume you have had a formal logic course, which usually teaches the deductive syllogism. For example:

P1: All men are mortal. P2: Socrates is a man. Ergo: Socrates is mortal.

So this is one of the standard forms of a deductive syllogism (there are many others). The basic idea is that if you accept that premises 1 & 2 are true, then the inference must follow; otherwise your thinking includes a contradiction. E.g. all men are mortal and Socrates is a man, but if you think that Socrates is immortal, that implies a contradiction, so one or both of your premises are wrong, or your belief that Socrates is immortal is wrong.
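For what it's worth, here is that syllogism written out in a proof assistant (a minimal Lean 4 sketch; the names Entity, Man, Mortal and socrates are purely illustrative). Once the two premises are accepted, the conclusion is forced:

  -- Hypothetical encoding of the classic syllogism.
  variable (Entity : Type) (Man Mortal : Entity → Prop) (socrates : Entity)

  theorem socrates_mortal
      (p1 : ∀ x, Man x → Mortal x)  -- P1: all men are mortal
      (p2 : Man socrates)           -- P2: Socrates is a man
      : Mortal socrates :=          -- Ergo: Socrates is mortal
    p1 socrates p2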

In formal logic courses in college they swamp you with all the variants of the deductive syllogism, and what gets lost is that they are all based on Aristotle's Law of Identity. The Law of Identity basically says "A is A": a thing that exists is what it is, and contradictions cannot exist. So if you find a contradiction in your thinking, then you have gone off the rails somewhere in that process.

So Popper's falsifiability principle is implicitly based on deductive logic, which in turn is based on Aristotle's Law of Identity. Popper's mistake was ignoring induction, or generalization, which is evident even in the syllogism above. The premise "All men are mortal" is a generalization, not a deductive conclusion; where do these premises come from? So deduction (and falsifiability) is dependent on induction, which his theory completely ignores, so it is DOA.


> if certain models happen to be falsified by observational/experimental constraints, then the framework provides ways to construct other models circumventing the constraints

But the string theory framework has not produced any testable models.

> string theory is the only example we’ve managed, on that front

Huh? String theory hasn't produced any testable models, so it has not done anything towards satisfying all of the constraints of known experimental results.


> Whether we must continue investing effort/resources on that front is a political question, not a scientific one

Whether, sure. But if politicians fund $X towards high-energy physics, how to allocate that funding is at least partly a scientific question.

And pouring dollars into an unfalsifiable theory (or framework) sounds like a money pit. There's no point at which its adherents will say "never mind, this is a bad approach, let's try something else."


If "String theory" is a framework, it shouldn't be called "String theory" but something like "String Computation Model" or "String Mathematics". The current name make it looks like quantum mechanics or relativity theory.

Does this mean, in particular, that the inflaton field has not yet been successfully modeled in string theory?

Will Betteridge's law of headlines finally be wrong?

no.

[1] - https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...




