Lost in Math (columbia.edu)
240 points by well_i_never 8 months ago | 82 comments



It seems like there may be a more general takeaway from what Hossenfelder is saying, which is that fields that should be driven principally by empiricism can be led astray by the development of a set of shared aesthetic preferences that is used as a barometer of a theory or technique's attractiveness.

Such ideas of "beauty" may begin as a useful shorthand, in that they initially incorporate experienced practitioners' intuition of what theories are more reasonable, based on the history of their field and their collective knowledge. But they are only useful for so long as they remain subordinate to actual empirical evidence in the ultimate judgement of a theory or technique's truthfulness.

But when the actual application of strong empiricism is difficult or expensive (or becomes so, as has been the case in theoretical physics over the past half-century), or just impossible, the aesthetic barometer can gradually supplant evidence entirely. This may be the true measure of a science's softness or hardness.

In the field of software engineering, there are concepts of "clean" and "beautiful" code and languages. But how rigorously are such terms defined? How closely does our subjective impression of a codebase's aesthetics mirror its actual quality on those metrics that should matter most, such as its performance, maintainability, quantity of defects, etc.?


>In the field of software engineering, there are concepts of "clean" and "beautiful" code and languages.

Anyone who has sat in a code review with two "true believer" "software engineers" will know how subjective and poorly justified, yet fanatically defended, these positions are. In fact, anyone recruiting will know how damaging having a true believer on your team can be (two is a genuine disaster).

Software isn't engineering because we can't measure much about it apart from compression, speed and detected defects so far. Speed and compression don't matter for 99% of modern software; we have laptops with 32 GB of memory, servers with half a TB and scores of GHz cores - the need to write in C has disappeared for almost everyone. Detected defects are problematic because as soon as you start counting them, people stop submitting work to the code repository until they have removed everything they can find. That's a problem because they hold onto it for weeks, and meanwhile, if it had been in the repo, everyone would have found twice as much. The team also stops reporting the defects they have found and just talks about refactoring, and gets less keen on testing.

So what do we measure? We can't put tolerances on it; we can't deduce structural properties; we can't deduce lifecycle (recycling, onward development) properties. We don't know why the core engineering decisions were taken, because often the core decisions happen at 3:17 on a Tuesday and no one even notices at the time, or remembers later. It's not engineering, not close.


> we have laptops with 32 GB of memory, servers with half a TB and scores of GHz cores - the need to write in C has disappeared for almost everyone.

I don't know which weird world you live in, but most of the people I ship software to run 10-year-old laptops or 10.6/10.7 Macs. And even top-of-the-line 2018 MacBooks don't have options for more than 16 GB of RAM.


Not only that, much of the software developed today is targeting the cloud, where resources are still very expensive.


And those resources are usually shared between systems!


Increasing memory capacity is not a sound argument that most people no longer need to write C. Most people don't write C because it's easier to abstract away the knobs C provides for faster productivity, not because we have so much hardware we don't need to care. But it sounds like you're saying most people don't need to write in C because they don't need to be efficient, which is silly.

I despise running non-native desktop applications on a workstation with 128GB RAM because the developers often think it doesn't matter how much memory they use if there's a lot of it. That's frustrating and lazy, and it means I'm paying for their time savings with my hardware resources. "Unused memory is wasted memory!" they say, except I would happily use as much memory as possible, but the nth Electron app prevents me from running something else.

C and its derivatives have disappeared from web applications (particularly the frontend), but it's painful to forgo them for desktop and systems applications.


>That's frustrating and lazy

I think it's frustrating (for you) and cheap. For them, writing in Python instead of C is probably (see my previous comment for my acknowledgement of the lameness of "probably") significantly cheaper - possibly several orders of magnitude cheaper. For you, 128 GB of RAM is pricey, but nothing like the cost of software written "the old way". You are paying for their time savings with your hardware; the alternative is to pay for their time, which would be far more expensive.

You definitely have a point w.r.t. systems apps; C++ is needed if you want to get close to the metal. I think Julia is pushing it out of a lot of scientific apps, though.


I'm happy to pay for productivity. I'm not happy to pay for inefficiency.


I don't agree with the argument that measurement is the soul of engineering. Even if I did, the argument that we can "only" measure 3 things is itself silly: there is a boundless sea of measurements one can take about everything and anything.

Engineering is the selection of technological solutions to social and business problems, made after some analytical process rooted in mathematical and physical laws, from a set of defensible alternatives, under economic and ethical constraints.

Regardless of what our gatekeeping cousins might wish to believe, software engineering fits this definition.

That many people working on software do not behave like engineers does not change the fact that it exists. That there is rarely any legally-enforced guild structure to regulate the membership of the profession does not change the fact that it exists. Civil engineering existed thousands of years before the word engineering was even invented.


Measurement isn't the soul of engineering, but modelling is - the ability to describe and refine a system mathematically without having to build it physically.

There isn't much of that in application software engineering. Some domains have it, like ML and DSP, but most of the code written for startups is ad hoc carpentry, plumbing, and masonry, not applied physics.

You can do a lot with carpentry, plumbing, and masonry without really knowing what you're doing - up to a certain scale. But when you start building giant things without rigorous non-tribal reasons for your design choices, you're only ever going to build things that fall down and break a lot.

Too harsh? I think it's hard to find anyone today who thinks software is acceptably robust and reliable by default. Some products/apps/pages work just fine if not stressed too hard. But generally security, bugs, and efficiency are all poor compared to comparable challenges in hard engineering disciplines.


Civil engineering is much older than formal modeling. Mechanical engineering is younger than civil engineering, but still older than modeling. In fact, by the time the modeling fad came from the study of the sky into everyday stuff, there was even process engineering happening in the wild.

The only reasonable definition of "engineering" that I know of is on the lines of "the art of building stuff". Writing software fits that definition perfectly well, it's just what people call "software engineering" that doesn't.


Yep this entire thread is a series of arguments over the definition of "engineering." The underlying point seems to be "engineering is serious business and software is not," and the "definition" of engineering is used as a bludgeon to make that point.


This is a topic that really interests me; I'd love to squeeze more info out of you!

Can you elaborate on the sea of measurements that you can extract from software? What are the things that can usefully be measured in the current state of the art?

>made after some analytical process rooted in mathematical and physical laws, from a set of defensible alternatives, under economic and ethical constraints

OK - how can we do analysis of things that are not measured? In the end, all the software design I've ever seen comes down to "this is honestly the best we can do": there could be 100 better ways, or this way could be horribly flawed; there's no way to figure this out a priori.

When I went to college I roomed with a bunch of Aerospace engineers. They spent days and weeks doing structural calculations which were underpinned by physical knowledge of the materials that were in the structures. If the actuator lever for the flap was too heavy or not strong enough when it was made of aluminium then it could be made of titanium instead; but they knew. The sums showed them.

I did CS; at the time we thought that formal modelling would do the same for us. That is beginning to come back with things like Lamport's work on TLA+, but I think the last 30 years have shown us how little we know.

Now yes, people built cathedrals in the middle ages, and bridges. Some of those are around as monuments to their craft. But a lot of them aren't, a lot of them fell on people's heads.


I once watched the software architect at a company I worked for get called out on why he was choosing a certain design decision that was going to impact a critical component of the product, and he justified himself by saying he had a good feeling about it. When pressed further he could only vaguely and emphatically respond “I don’t know, I just think this is the best way”.

So this important decision was based on this dude (excuse me, software architect) shrugging and saying it feels good.


It could mean he's developed a sense in the "right side" of his brain which isn't logic-based. Unfortunately, it's impossible to tell whether someone's taste is bullshit or finely tuned. The best-in-class typically do not follow axiomatic rules either. In this case it's probably the former.


And it's great when someone like Johanna Weber has a brilliant insight about a delta-shaped wing and it turns into a beautiful outcome like Concorde - if all the calculations are around to un-bullshit it to bits!


Sometimes that’s the best someone with better intuition than communication can produce. (I prefer to err on the side of clear testable communications, but there are weak communicators who are still usually correct)


At least the architect in question was honest. It would have been so easy to spout some generic BS about "industry standard", "principle of parsimony", etc. etc.


True believers are a problem indeed - people who seem to think software development is about applying firmly held beliefs out of context can really screw up things royally.

That being said, by now software engineering has a large set of techniques, with reasonably well understood properties, and we can absolutely reason through whether a technique is useful to solve a particular problem.

For example, polymorphism is a useful technique to invert the locality of implementation details, making it easier to extend software in a distributed way. Its advantage is also its drawback: it makes control flow harder to follow, and generally makes the software more difficult to understand, precisely because of the distributed extensibility.
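
A minimal sketch of what I mean (Python, with hypothetical names, not from any real codebase): the reporting function below never changes when someone adds a new exporter in another module - that's the distributed extensibility - but reading it on its own no longer tells you which implementation actually runs.

    from abc import ABC, abstractmethod
    import json

    class Exporter(ABC):
        @abstractmethod
        def export(self, rows: list) -> str: ...

    class CsvExporter(Exporter):
        def export(self, rows):
            header = ",".join(rows[0].keys())
            lines = [",".join(str(v) for v in row.values()) for row in rows]
            return "\n".join([header, *lines])

    class JsonExporter(Exporter):
        def export(self, rows):
            return json.dumps(rows)

    def write_report(rows, exporter: Exporter) -> str:
        # Closed for modification: any Exporter subclass, defined anywhere,
        # plugs in here without touching this function.
        return exporter.export(rows)

    rows = [{"name": "beam", "load": 12}, {"name": "column", "load": 30}]
    print(write_report(rows, CsvExporter()))
    print(write_report(rows, JsonExporter()))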

Similarly, mocking is a technique to reduce the testing "surface" by removing the dependency on specific components from a test. It's a great tool to make tests easier to write, improve test performance, and avoid ambient test dependencies and their associated complexity. On the other hand, it reduces test depth by ignoring potentially important interactions, and often leads to overly brittle, hard-to-maintain tests that add little value for the software under test.
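
And a minimal sketch of that trade-off (Python's unittest.mock, hypothetical names): the mock keeps the test fast and hermetic, but the test now only verifies the interaction we told it to expect, not the real gateway, and it breaks on harmless refactors of the call.

    from unittest.mock import Mock

    def checkout(cart_total, gateway):
        """Charge the total and return an order status string."""
        result = gateway.charge(amount=cart_total)
        return "paid" if result.ok else "failed"

    def test_checkout_charges_the_gateway():
        gateway = Mock()
        gateway.charge.return_value = Mock(ok=True)  # stub out the real network call

        assert checkout(42, gateway) == "paid"
        # Brittle: renaming the keyword argument breaks this test even though
        # the observable behaviour of checkout() is unchanged.
        gateway.charge.assert_called_once_with(amount=42)

    test_checkout_charges_the_gateway()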

It's easy to go on. None of these techniques are silver bullets or axioms to be followed religiously; they are specific solutions to specific problems. They aren't necessarily measurable in an analog fashion, like the load-bearing capacity of a steel beam; they are often more binary (quite fitting, given the domain). But you can judge whether they are useful for solving a specific problem.

That is, I don't follow the "it's all just opinion" kind of software engineering defeatism. Yes, there's a ton of extremely questionable evangelism, often spread with huge pomp and circumstance, by people with limited insight. But that's really just that, limited insight.

Others have replied to your point on performance being irrelevant - that's another area of software engineering that's hugely important, and not just in the Google kind of setting.


Dr. Hamilton (who coined the term "software engineering") developed a system of software construction she called "Higher Order Software" distilled from her experience writing the code for the Apollo 11 mission.[0]

Unfortunately, her insights have been largely ignored. I first heard about it twenty years ago in a book called "System Design from Provably Correct Constructs" (which, oddly enough, doesn't mention Hamilton outside of a few references in the back.)

In modern terms, programs in HOS are developed by the direct elaboration of the Abstract Syntax Tree (like Lisp code) using only a limited set of operations which each preserve the semantic correctness of the tree. In this way, you are prevented from building incorrect software.[1]

(You might build a barn instead of a pool, but it will be a fine barn.[2])
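
To give a flavour of the idea (a toy Python analogy of my own, not Hamilton's actual formalism): if the tree can only be grown through constructors that each enforce a correctness property - here just a trivial type check - then an ill-formed program can never be represented in the first place.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Node:
        op: str
        type: str
        children: tuple = ()

    def lit(value):
        # booleans are checked before ints because bool is a subclass of int in Python
        kind = "bool" if isinstance(value, bool) else "int"
        return Node("lit", kind, (value,))

    def add(a: Node, b: Node) -> Node:
        if a.type != "int" or b.type != "int":
            raise TypeError("add requires two int-typed subtrees")
        return Node("add", "int", (a, b))

    def if_(cond: Node, then: Node, else_: Node) -> Node:
        if cond.type != "bool" or then.type != else_.type:
            raise TypeError("if requires a bool condition and matching branches")
        return Node("if", then.type, (cond, then, else_))

    program = add(lit(1), lit(2))   # fine: a well-typed tree
    # add(lit(1), lit(True))        # rejected at construction time, not by a later checking pass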

In addition to the availability of a defect-free development methodology, Category Theory provides a solid mathematical foundation for determining the most-highly-factored form(s) of a given code-base. Meaning, if you have a "categorical" programming language you can more-or-less automatically extract the most general form of your program, which can then be used "over" different categories to get different concrete programs from one piece of code, with confidence (mathematical confidence) that they are valid, correct programs. There's now a great explanation by Conal Elliott about this called "Compiling to Categories"[3].

In any event, I would say that Category Theory gives us a language-agnostic way to rigorously define the "cleanliness" or "beauty" of software.

The tools are there. We have to notice them and pick them up.

[0] https://en.wikipedia.org/wiki/Margaret_Hamilton_%28scientist...

[1] You're not editing text files and then using other tools to (try to) ensure they're not broken too badly. No linters, no formatters, no syntax errors, no type conflicts, no off-by-one errors, etc... Hamilton identified all the sources of error in program construction and designed a system that eliminated them.

[2] "'Tis a fine barn but 'tis no pool, English."

[3] "Compiling to Categories" Conal Elliott February 2017 http://conal.net/papers/compiling-to-categories/


I'm sorry I've spent half an hour with [3] and it's pretty challenging. Is there something with training wheels?


I don't know. It's kind of newfangled. It's "in the air" though, I think. Here's what I got (it helps to know a little Category Theory; try Bartosz Milewski's "Category Theory for Programmers"[0] if you don't already know any).

In "Compiling to Categories" what he’s doing is translating lambda forms into a kind of “point-free” style and then showing how to instantiate that code over different categories to get several different kinds of program out of the same code.

I've been working with a stack-based "concatenative" language called Joy, which I think makes this stuff a lot clearer. Joy code is already in "point-free" style (no vars, no lambdas), and it includes functions that do the same job as the triangle operators and others in the "squiggol" tradition, but more elegantly. ( https://wiki.haskell.org/Pointfree )

There's a little bit in "Mathematical foundations of Joy"[1] and "The Algebra of Joy"[2] by Manfred von Thun.

Here's a piece of Joy code (it's part of an ordered binary tree library):

    pop swap roll< rest rest cons cons
Here's a trace of its evaluation (with a suitable value already on the stack) in the category of values resulting in a computation of an answer. The dot is the "interpreter head", the current point of evaluation:

    [4 5 ...] 3 2 1 . pop swap roll< rest rest cons cons
    [4 5 ...] 3 2 . swap roll< rest rest cons cons
    [4 5 ...] 2 3 . roll< rest rest cons cons
    2 3 [4 5 ...] . rest rest cons cons
    2 3 [5 ...] . rest cons cons
    2 3 [...] . cons cons
    2 [3 ...] . cons
    [2 3 ...] . 

And here's the same code evaluated in a category of composition of stack effects resulting in a description of the stack effect of the expression:

    (--) ∘ pop swap roll< rest rest cons cons
    (a1 --) ∘ swap roll< rest rest cons cons
    (a3 a2 a1 -- a2 a3) ∘ roll< rest rest cons cons
    (a4 a3 a2 a1 -- a2 a3 a4) ∘ rest rest cons cons
    ([a4 ...1] a3 a2 a1 -- a2 a3 [...1]) ∘ rest cons cons
    ([a4 a5 ...1] a3 a2 a1 -- a2 a3 [...1]) ∘ cons cons
    ([a4 a5 ...1] a3 a2 a1 -- a2 [a3 ...1]) ∘ cons
    ([a4 a5 ...1] a3 a2 a1 -- [a2 a3 ...1]) ∘
If you compare the input and output of the first one you'll see that it matches the input and output stacks in the Forth-style stack effect comment computed by the second one.

To repeat, that's the same code evaluated in two different domains, er, Categories, to get two different correct computations.

If I had implementations for them I could evaluate the expression in/over a category and get as output: a dataflow diagram of the program, or a hardware description of a circuit for the program, which are examples from Elliott's talk+paper.
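
For a rough flavour of "one program, two categories" in a mainstream language (my own Python sketch, not the Joy system or Elliott's construction): the same list of operation names is handed to an interpreter over concrete stacks and to an interpreter over abstract stack effects.

    PROGRAM = ["pop", "swap"]

    # consumes/produces counts for each operation
    ARITY = {"pop": (1, 0), "swap": (2, 2), "dup": (1, 2)}

    def run_values(program, stack):
        """Category of values: transform an actual list-based stack (top = end)."""
        ops = {
            "pop":  lambda s: s[:-1],
            "swap": lambda s: s[:-2] + [s[-1], s[-2]],
            "dup":  lambda s: s + [s[-1]],
        }
        for name in program:
            stack = ops[name](stack)
        return stack

    def run_effect(program):
        """Category of stack effects: compose (items required, net height change)."""
        depth, lowest = 0, 0
        for name in program:
            consumes, produces = ARITY[name]
            depth -= consumes
            lowest = min(lowest, depth)
            depth += produces
        return -lowest, depth

    print(run_values(PROGRAM, [1, 2, 3]))  # -> [2, 1]: 3 popped, then 1 and 2 swapped
    print(run_effect(PROGRAM))             # -> (3, -1): needs 3 items, shrinks the stack by 1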

[0] https://bartoszmilewski.com/2014/10/28/category-theory-for-p...

[1] http://www.kevinalbrecht.com/code/joy-mirror/j02maf.html

[2] http://www.kevinalbrecht.com/code/joy-mirror/j04alg.html


I was with you until the 32 GB memory part. In what part of the world do users have devices with 32 GB of RAM (barring gamers)?


I did stray into polemic, but to be fair, any serious user, including gamers, should upgun their memory. It's cheap and effective. Also, one of the complaints about MacBook design has been that there is no option for 32 GB or more!


This article [2015] is relevant in this context. It refers to the work of Richard Dawid, who is apparently laying out an expanded view of science beyond the commonly accepted Popperian perspective centered on falsifiability. Drawing on his 2013 book "String Theory and the Scientific Method", the article identifies "three kinds of "non-empirical" evidence that Dawid says can help build trust in scientific theories absent empirical data." Of course, this is not an accepted view yet, but it can lend some balance to the conundrum resurfacing today.

[2015] https://www.quantamagazine.org/physicists-and-philosophers-d...


For me, "beauty" is an inverse function of entropy or disorder. The fewer the arbitrary constructs in a system (or theory), the more "beautiful" (or, more accurately, elegant) it is. After all, the laws of physics are about finding recurring patterns in a mass of random observations. So I think it's reasonable to prefer theories that can explain a large number of random observations with a smaller number of laws. However, I suppose the argument of the book is that when we try to make our theories simpler than reality actually is, we have a problem?

Edit: to try to answer your question, the goal of elegance/beauty in code is entirely about comprehensibility and not utility, is it not?


"As simple as possible, but not simpler." (often credited to Einstein)

See also https://en.wikipedia.org/wiki/Occam%27s_razor

That said, the whole field of science journalism and popular science book writing lives off violating these rules, making things much simpler than they actually are. But then again, the average person is a) not in charge of the physics experiments, and b) anyone who tries to use these as a reference in a scientific paper is in for a surprise.


Experimental physics is not “a mass of random observations.”


I wasn't referring to experimental physics. I was referring to how we perceive the world in the absence of physical laws.


Beauty is in the eyes of the developer.


What does this mean for the future of science and technology?

I find it fascinating that so much interesting physics has been discovered in the last century -- for example, I only recently found out the neutron is essentially a post-WWI-era discovery [1]. And yet so little fundamental physics has come along in the last 30 years.

Now that there is no more increase in our knowledge of the microstructure of the universe, will the other sciences, and then technology, follow in this stoppage?

Arguably the Internet followed from computers which followed from transistors which followed from fundamental physical theory of atoms and electrons from less than 150 years ago.

Will the lack of new physics cascade down, or is there enough "stuff" between physics and technology that tech can keep on going?

[1] https://en.wikipedia.org/wiki/Discovery_of_the_neutron


Nothing. We are centuries away from using anything in HEP in tech applications. (If someone thinks otherwise, I'd love to hear counterpoints!)

The thing is, subatomic stuff is extremely energy dense and packed very stably into particles and atoms. Even to just crack open that package we require massive machines like the LHC. Even if we could control things there it's not clear that it's even possible to rearrange the parts into other configurations. All such configurations would be extremely energy dense and intrinsically unstable, with QM making sure they decay quickly. So figuring out what stuff is made of doesn't give us more ways to interact with it.

The good news is that we're far from done understanding how atoms can combine into interesting structures. Like _really_ far from done. From topological states of matter to metamaterials, there are lots of surprises.


> Even to just crack open that package we require massive machines like the LHC.

As a small correction, this is not true. What you probably really mean is massive amounts of energy, but large machines are not the requirement, because RF-based acceleration is not the end solution. See laser-based plasma accelerators.

https://en.m.wikipedia.org/wiki/Plasma_acceleration


I don't understand this opinion of "little fundamental physics". First, what do people actually know about all of modern physics to make such a statement?

There's despair in HEP, yes, but it's because we can't discover a Higgs boson every year, and many physicists thought that maybe there were lots of new particles coming up. There was a solid theoretical possibility that would, in a way, have allowed something similar. But they're not there, or at least we don't have the means to produce them now, so you have to try other things. That's not bad, that's simply how science works.

You can't make up new particles if they don't exist. That's not a failure, that's the sign that you have done your job. The better you get at something, the harder it gets to improve your results. No one tells biologists that they have to get something better than Evolution every decade.

That disappointment of many can't spoil what has been a triumph: the Standard Model, warts and all, is how Nature works. There really are Higgs bosons, which we didn't have the technology to even look for until now. And there are still precision measurements to be made that will give us information about what we can't reach directly, because it's hiding in some numbers there. Such things don't make it to headlines, though.

About fundamental physics... Well, we've discovered that neutrinos have mass - that's new physics right there. We've measured gravitational waves, which many thought was impossible. We've discovered that the expansion of the Universe is accelerating - more new physics. Also, dark matter is pretty much no longer a mere hypothesis, because of weak lensing and a number of other independent observations, so there's exotic matter out there. There's a lot of work to be done, yes, but we've learned a lot already.


> Will the lack of new physics cascade down

It's not that there's a "lack" of anything, except understanding. "New" is in the eyes of the observer; the way I see it, there are very new and fascinating discoveries every year. It's more that the expectations aren't properly communicated. As the article author writes:

"I fear a complex set of issues is likely to get over-simplified, and this over-simplified version of the book’s argument is all that much of the public is ever going to hear about it. "

It's all about what the physicists can get the funding to explore. The biggest worry is the perception of the "simple" public (some of whom actually decide what is going to be financed) that there have to be "successes", however "successes" are defined.

In reality, in physics, investing a lot of energy to find something and not finding it can still be the starting point of something new, and we have had that a lot in history. Without the Michelson-Morley experiments that didn't find what was "expected", there would be no Special or General Relativity - these experiments were necessary. And Michelson's invention still gives fascinating results: the first measurement of merging black holes, only a few years ago, more than 100 years after Michelson's first experiment, was based on the concepts he first invented. We simply have to fund the experiments, and every properly done experiment is a success, even if the results aren't what a lot of people expected.

There were times when new particles "appearing" wasn't expected. More recently, the expectation was that even relatively small efforts must bring new particles. These expectations will most likely be adjusted.

Regarding financing, for example: the last time I checked, the total cost of everything the LHC did, including building it and operating it for years, was around 9 billion dollars. That's effectively nothing compared to the 1,500 billion dollars (!) still planned to be spent on a plane that will surely not advance science, the famous F-35 (and much more has already been spent on it than on the LHC!). That puts some views in real perspective. One military plane programme could finance the building and operation of a totally new LHC every year for at least 150 years.


> One military plane programme

To be fair, that budget is for 2500 planes and their maintenance over 50 years.


And it's compared to the budget of all costs for the development, building, maintenance and running of a single, unique LHC, whereas the costs of producing 2,500 planes are spread across all of them. Nevertheless, the total LHC budget equals the cost of a mere 15 units of the said series of 2,500. The cost per unit of the whole series is 600 million dollars! It's quite obvious something is wrong in these investments.
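
Spelling out the arithmetic behind the figures in this subthread (using the roughly $1,500B programme total and roughly $9B LHC total quoted above):

    $1,500B / 2,500 planes ≈ $600M per plane
    $9B (total LHC cost) / $600M ≈ 15 planes
    $1,500B / $9B ≈ 166, i.e. roughly one new LHC per year for ~150 years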

But real science (especially observing the Earth, the climate and the effects of global warming) struggles for breadcrumbs, or even for pure survival. Because of ideology, not rational thinking.


Still, the difference in potential outcomes for humanity is substantial. For sure, one will only ever achieve blowing someone up with better precision than before... how exciting!


The F-35 isn't for blowing things up, it's for deterring war. Fat lot of good your particle accelerators do if they constantly get blown up in war.

Deterring war is the greatest way to unlock human prosperity.

And killing with precision means saving lives, the greatest good in the world.


Do you believe that? All that kind of investment does is keep the war machine spinning, and the U.S. will continue to keep the world a violent place.

Investing in healthcare, education, and infrastructure on a global scale would a) be possible with that kind of budget and b) do a lot more to prevent war, since there would be a whole lot less to fight about.


I think there's undue emphasis on "fundamental". Modelling complex systems is important for actually making real-world predictions. Condensed matter physics, as one example, is an area that is growing and has a plethora of practical implications.


Yes, why should physics just be "fundamental"? I think advances in tech and AI will help physicists explore complex systems.


I guess we humans are just impatient due to our limited lifespan. If you look at the really grand scale of things, even the speed of light, which for us humans is indistinguishable from instantaneous, becomes incredibly slow.

So we have now picked all the 'low-hanging fruit' in physics. The really hard stuff may take a few millennia, which is still unimaginably short compared to the age of the universe. And the universe itself is just out of its infancy and can go on for trillions of years... But we may find another interesting thing or two before we fade away.


Probably that high-energy physics isn't going to produce much new for a while. Low-energy physics, down near absolute zero or involving single particles, seems to be doing quite well. Slowing down light, quantum entanglement - those are all recent developments which look useful.

Microbiology has lots of work ahead. We can read DNA, but progress in understanding it is slow. Lots of potential there.


> Probably that high-energy physics isn't going to produce much new for a while

The discoveries of particle physics are not where the bulk of technology spin off comes from. Most of the particles that are generated don't last long, and are hard to generate. Most of the particles that have been discovered have had limited utility. If the particles aren't naturally generated (like in muon imaging detection) there isn't too much use for them.

So, you get things like muon tomography of the pyramids

https://www.nature.com/news/cosmic-ray-particles-reveal-secr...

But you don't generate the muons yourself. Similarly, understanding cosmic rays in the atmosphere requires you to have some understanding of the various decay products, but unless you're trying to do something really sensitive, you may not care.

University College London's put out a document talking about the technological benefits of high energy physics research

http://www.hep.ucl.ac.uk/~markl/pp2020/KnowledgeExchangeDocu...

And they basically break it down to "Accelerator Science and Technology", "Detector Research and Development", "Impact of Electronics and Readout Developed for Particle Physics", "Computing, Software and Analysis Techniques", and "Special Skills and Competencies". In short, nothing really to do with the actual discoveries themselves; it's the process of building tools to explore the high energy regimes that is useful to technology. Arguably, the same can be said about the theoretical side.

An (overly pragmatic) argument is: "If you can't detect any deviations from the Standard Model, then the Standard Model is sufficient for all foreseeable technological applications."

And there are plenty of things to measure and discover within the Standard Model. It's not all about discovering new fundamental particles. There's also the matter of actually producing hypothetical particles that are predicted by the Standard Model but have never been made, like glueballs, for example:

https://en.wikipedia.org/wiki/Glueball

(Which, as the wikipedia article points out, has no actual applications :) )


There seems to be an implied assumption that research should find practical application. This is not a self-evident position.


Modern advancements in physics over the past 100 years have opened a can of worms. We still haven't even scratched the surface of quantum systems in applications.

It wouldn't surprise me if the majority of physicists are now leaning towards applications and less "fundamental" disciplines. Newton and Einstein were two centuries apart! I think the past 100 years of physics will still run us another century before we make any more groundbreaking advancements in theory. It's not even as bleak as it sounds, because we're really at the tip of an iceberg here already.


Well, today we have tools (and math) that are infinitely more powerful than what we had a hundred years ago, while, in comparison, progress in fundamental physics is now infinitesimally slow. That should tell you something.


There were plenty of groundbreaking physics results between Newton and Einstein.


I think that's a fair question to raise. Transistor scaling seems to have run its course, sitting right at the atomic scale (but not beyond -- hence it would probably be undisturbed by subatomic discoveries).

At least there are plenty of biological examples that we haven't gotten close to replicating technologically -- while we have substitutes for almost every biological functionality, there are areas in the parameter space where the bio versions comfortably lead. The brain of course is an example, leading in efficiency and capabilities; but there are many others like compact self-replication, self-repair of structures, micro and nanomachinery (exhibited in cells), and so much more. None of that of course requires advancing physics.

Even for long term technologies, I don't think a deeper understanding of physics would have much to contribute, unless we found something wild like a compound made from new particles. That isn't to say it's not worth pursuing; it just takes a different relation to society, similar to math and philosophy.


The issue is, as I understand it, that physics has taken us to the limits of what is controllable by humans. The results of almost everything that we can control can be derived from the fundamental physics that we do have. We may not have the compute/math to actually carry out the analysis, so there is still room for progress, but it is just not fundamental physics. Also, we know what to do to test different new fundamental theories, but it requires ridiculous experiments.

From this, one could reason that new fundamental physics would resolve issues that have no practical significance. For example, I don't know of any impact of discovering the Higgs boson on engineering. But there is of course the option of discovering something that we don't even imagine at this point, and this could have practical implications.

TL;DR: If the gathered resources of the whole world are not enough to test it, it may not have any practical engineering significance.


> no practical significance

Imagine how many people would have said something similar about quantum physics before the first A-bomb was developed.

Historically, the practical significance of major discoveries often came much later than the discovery itself. We do need the discoveries, especially those "of no practical significance" as long as they really improve our understanding of the physical world.


I did mention that: "But there is of course the option of discovering something that we don't even imagine at this point, and this could have practical implications."

Also, I wrote "From this, one could reason". I, myself, actually hope that we will get some breakthroughs, because established theories make ridiculous predictions or require equally ridiculous fine tuning. One example is the vacuum catastrophe: "the worst theoretical prediction in the history of physics". We have had similar predictions in the past, e.g. the "ultraviolet catastrophe" that led to the whole idea of the "quantum".

But IMO the issue here may be more fundamental. Testing quantum gravity theories requires energies way beyond stupidly high energies like those at the LHC. As I understand it, the requirement follows from the theory. It's like the light-speed limit: it's not something you can engineer around to see at lower energies. You can engineer your way up to those energies, but there doesn't seem to be a shortcut like the positive feedback loops in the case of A-bombs.

Chicago Pile-1, the first nuclear reactor, cost $50,000 at the time, so something below $1 million today. Most historically important experiments back then were even cheaper. We stumbled upon a lot of things because we were doing a sort of cheap random search around things we did not understand. Most importantly, the cheapness of these experiments coincided with the potential for practical applicability. I fear that this is mostly over, because as far as I know we don't have fundamentally inexplicable phenomena - I stress "fundamentally" - that we can play around with cheaply. The LHC budget is somewhere around $10 billion, JWST another $10 billion, LIGO somewhere around $1 billion. And as I understand it, this is nothing in comparison to what would be required to reject, e.g., string theory.

If you can't experiment with it cheaply I doubt that you can apply it cheaply.


I believe it’s still worthy to increase the energies as much as it is technically possible: e.g. I think China still plans CEPC.


Yes, I agree on that. I think the mere possibility of a deeper understanding of the world is worth it. I hope that gravitational astronomy in particular will flourish.


> Will the lack of new physics cascade down, or is there enough "stuff" between physics and technology that tech can keep on going?

Definitely enough. We're at the dawn of bio- and nanotech and haven't even scratched the surface of the consequences of 20th century science. It'll keep technologists busy for at least 50 years, if not more.

Pure physicists on the other hand are in a tough spot. By pure I mean those not involved in applications, e.g., HEP, string theorists, etc. Though I'm not a pure physicist and it's just my opinion.


Sabine Hossenfelder (the author of Lost in Math) recently appeared on the Rationally Speaking podcast to talk about the book: http://rationallyspeakingpodcast.org/show/rs-211-sabine-hoss...


I am not a physicist, but I think while trying to complete our understanding of matter and forces is certainly an important topic, it would be much more important to finally understand quantum mechanics. Not that those are independent topics but I don't think progress on quantum mechanics is dependent on progress in high energy particle physics. Maybe energy scales at which gravity becomes strong might be really interesting to understand how space and time and particles fit together, but those are out of reach for the foreseeable future.


I'm not a physicist either. From what I understand, the parts of quantum mechanics that aren't still fully understood are where it overlaps with relativity. Reconciling the two is still one of the big open problems, and high energy particle physics does help in testing theories.


When people talk of understanding quantum mechanics, they usually mean the measurement problem, not quantum gravity. One prevailing attitude toward the measurement problem is that it is a non-problem, but I and many others disagree.


I would disagree; I think we do not understand quantum mechanics at all. We have the equations and get the correct results, but we still don't know what's really behind measurements, or how entanglement can be non-local, and we don't even agree on whether time evolution is unitary or not.


>We have the equations and get the correct results, but we still don't know what's really behind measurements

That's pretty much true for classical physics as well.


In classical physics there is no limit to how gentle a measurement can be, you can essentially measure everything without perturbation and to arbitrary precision, so measurements are not really a concern. Or am I misunderstanding what you wanted to hint at?


I'm referring to the notion that we understand classical mechanics beyond merely "having the equations and getting correct results" - we really do not. Because classical mechanics prevails on the scale that our brains operate on, we do not find it unintuitive. However, at the end of the day, it is merely that we have a bunch of equations that give us correct results - just as quantum mechanics does.

This is true of all of physics - reducing observations to the level of predictability. Beyond that we are wading into philosophy.

(BTW, classical physics has warts as well - what is the force between a charge of 1 Coulomb and -1 Coulomb when they are touching each other?)
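
(For reference, the wart presumably being pointed at is the classical point-charge divergence: Coulomb's law for two point charges blows up as the separation goes to zero, so "touching" idealised point charges have no well-defined force.)

    F = \frac{1}{4\pi\varepsilon_0} \frac{q_1 q_2}{r^2} \;\to\; \infty \quad \text{as } r \to 0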


I see where you are going and you probably have a point but I also will have to think about that for some time. On the other hand my ad hoc reaction is that the problem is much more severe for quantum mechanics than it is for classical mechanics.

In classical mechanics we use a slightly wrong model of the world, but then everything is consistent and works as long as you are not stepping into regions where the approximations are no longer good. Quantum mechanics yields correct predictions for a wider range of phenomena because we no longer pretend that Planck's constant is zero, and it explains some strange features of classical physics, but it is - at our current level of understanding - also no longer self-consistent.

And while quantum mechanics is certainly less intuitive I think that is not really the issue. Once you leave Newtonian mechanics behind and step into the Lagrangian or Hamiltonian formulation, even classical mechanics becomes a quite abstract theory. I am certainly still better at picturing phase space flows than superpositions in momentum space but I have no fundamental problem with abstract representations.

But that time evolution is supposed to be unitary while I am not looking, and then suddenly that goes out the window - that is something you cannot simply ignore. The problem isn't that it's unintuitive; it's that it's not consistent. And it's not limited to a region of the theory where we might expect the theory to break down; it's right at the center of it.


Measurements aren't a concern, but what's "behind" them is unknown.


What do you think is missing in classical physics? There are physical objects with real properties, and we are determining their values by carefully interacting with the physical object using a measurement device. If we, for example, want to know the position of an electron - and in classical physics the electron is just a classical particle with a definite position - we carefully throw something, for example photons, in the rough direction of the electron and look at the scattering result to find out where the electron is. I don't see what this picture is missing.


Physicist here.

> From what I understand, the parts of quantum mechanics that aren't still fully understood are where it overlaps with relativity.

Yes and no. You're right, one big question is the high-energy completion of the quantum field theories that make up the standard model, which we expect to break down at very high energies. On the one hand, this expectation is due to the fact that general relativity is likely going to play a role at these scales, so we'll probably have to solve the puzzle of quantum gravity along the way. On the other hand, though, there are many more reasons why the quantum field theories we have right now are expected not to be the full story:

1) For reasons of, well, beauty. Right now, the standard model has many free parameters but even if you ignore the parameters for a second, it's also a somewhat random collection of specific gauge theories (the latter is basically another word for quantum field theory), i.e. specific gauge groups. Therefore, people hope that there is an underlying "grand unifying theory" (GUT) that should become visible at high energies. Note that the beauty that comes with a GUT is only in parts what Hossenfelder means when she discusses "beauty".

2) More importantly than 1), the usual interpretation of the renormalization procedure in QFT these days is that it is a way to exclude physics at the very high energy scales from our calculations/predictions. Put differently, without introducing a renormalization scale (basically a "maximum energy") the equations we have blow up and people take this as a sign that our theories are incorrect at extremely high energies. So in this sense, current quantum field theories are nowadays seen as effective theories (which are only valid at low-energy scales) of an underlying "complete" theory which will hopefully cover all energy (and thus length) scales.

But apart from the lacking high-energy completion, there are more reasons to believe that we don't fully understand nature yet and that we might need to come up with a new theory:

3) Even after decades of research we're still lacking a rigorous mathematical underpinning for practically all relevant QFTs. Considering that nature so far has always been governed by laws that we could express in precise mathematical terms, this might suggest that we're doing something wrong at a very fundamental level. A lot of people (or at least those I have spoken to) hope that a theory of quantum gravity will solve this (and thereby reestablish mathematical sanity in high-energy physics).

4) Dark matter: As you might know, dark matter is a general name for the apparent stuff that causes our observations to differ slightly from the laws of gravity that we know (i.e. from general relativity). The existence of dark matter doesn't necessarily mean that we'll need an entirely new theory but if we're right in our assumption that dark matter consists of particle excitations of another quantum field (usually called WIMPs), we'll need to at least extend the current standard model.

5) The foundations of quantum mechanics: More specifically, the measurement problem together with the issue of having different interpretations of QM and, possibly, the issue of time. These are basically the issues danbruc mentioned above. For some background info see for instance Steven Weinberg's lecture on "What's the matter with quantum mechanics?" (discussed by Sabine Hossenfelder here: http://backreaction.blogspot.com/2016/11/steven-weinberg-doe...) and Carlo Rovelli's recent book (https://news.ycombinator.com/item?id=17376437). Again, some people hope that these issues, too, will eventually be solved by a theory of quantum gravity but this is anything but clear. (By the way, a friend of mine working on Bohmian Mechanics recently told me that there might be experiments to distinguish Bohm's interpretation from the Copenhagen one, so we might not even need to wait for quantum gravity.)

Anyway, let's assume for a second that we really need a full quantum theory of gravity to solve all these issues. Then, what's our best shot at coming up with a theory of quantum gravity? Well, studying the high-energy behavior of particles and hoping to discover "new physics". So in this sense, you were absolutely right of course. I just wanted to point out that it's not reconciling relativity and quantum mechanics alone that drives us.


(Non-physicist, please bear with me)

Of these 5 criticisms of contemporary theory, all of them except dark matter seem to stem from a suspicion about the eventual result through the lens of beauty, unification, etc. However, the dark matter problem stands out as one where measurements do not line up with theory, and seems more of a classic example of how we've typically "turned the crank" on making progress in the past.

How big is the concerted effort to understand dark matter vs trying to attack these other concerns in theoretical physics? It certainly seems like an aggressive, field-wide effort is warranted given it's one of the few universally acknowledged and reproducible blatant predictive errors in current models. The discovery of these errors seems like a huge gift to physicists.


> Of these 5 criticisms of contemporary theory, all of them except dark matter seem to stem from a suspicion about the eventual result through the lens of beauty, unification, etc.

I don't think it is really justified to label it this way. We know that quantum field theory is not the correct final theory because it breaks down at short lengths and high energies. It is in some sense most likely even the completely wrong way of looking at the problem. Quantum field theories are mathematical tools to deal with many-particle systems, and they have some nice properties, like making locality manifest; however, they also force things onto us that are non-physical or hard to work with, for example gauge symmetries and virtual particles. So even if there were no four different forces that could maybe be unified into one, there would still be a lot of issues with quantum field theory.

> However, the dark matter problem stands out as one where measurements do not line up with theory, and seems more of a classic example of how we've typically "turned the crank" on making progress in the past.

It is of course true that dark matter is in a certain sense a more tangible problem, but on the other hand there is the problem that there are so many different suggested resolutions. Astronomical and cosmological observations may be the way to go for modified gravity or small black holes, particle detector experiments may be the way for weakly interacting massive particles, and every other resolution probably requires its own set of experiments. So the problem seems to be not so much that nobody is investing the effort, but that we are not really sure where to look and for what. And in the case of things like weakly interacting massive particles, we somewhat come back to extending the standard model, because such extensions may better inform us where to look.


The standard stuff of the Copenhagen interpretation is derivable from Bohmian mechanics, to the extent that the collapse in Copenhagen is sufficiently well-specified. This obviously prevents finding experimental disagreements. One thing that is somewhat problematic from an orthodox perspective is that of arrival times since, in some sense, nothing is arriving. Time is problematic, in general, in standard QM.

One recent paper that explores Bohmian mechanical predictions concerning arrival times is https://arxiv.org/abs/1802.07141 It explains how it is not clear in the standard formulation what the answer should be and then provides the Bohmian answer since arrival times do exist as there are particles with trajectories that go somewhere. I suspect that if the predictions were proven right, someone would look at standard QM and deduce that that should have been considered the right answer all along.

---

As for quantum field theory and its divergences, while the work has only been done on some of the simpler models, there is a process called "Interior Boundary Conditions" which removes the need for ultraviolet cutoffs: https://arxiv.org/abs/1703.04476

The basic idea is that the free Hamiltonian's natural domain is not appropriate when adding in the particle creation/annihilation operators and thus a boundary condition representing those interactions needs to be chosen. More or less, the appropriate set of wave functions are those whose probabilities adjust themselves appropriately for the particle creation and annihilation. This approach was inspired by a Bohmian perspective, but it is a mathematical methodology that does not require accepting that perspective.

Thus, there is some hope that the divergences are solvable already with what we know.

---

As far as I know, the most fundamental issue between relativity and quantum mechanics is that of time. Namely, there is a strong notion of "now" seemingly from QM and there is a strong notion of "not now" from relativity. Neither statement is absolute, but the tendencies are there. Bell's work suggests that nature is either nonlocal (nowish) or there is something not quite defined about experiments relative to what we think there is (many universes, experimental results not finalized until actually compared, somehow). In Bohmian mechanics, the issue of "now" is quite acute, as the trajectory of each particle depends not only on the wave function, but also on the positions of all the other particles "now". This is problematic as there is no definite "now" in relativity.

One solution is to define a "now", namely, define slices of space-time which define the instants of now. This can be done in a Lorentz covariant way in flat space-time in such a manner that we cannot detect the foliation, by defining the foliation from the dynamic structures already present: https://arxiv.org/abs/1307.1714 The authors are not necessarily advocating for this as it seems philosophically wrong, but, again, Bell's work strongly suggests something of this kind needs to be true, no matter how distasteful it is.

This would necessarily imply that space-time can have a foliation, something which is not always true. Perhaps the dynamics defining the space-time metric in the quantum framework ensures that this does indeed stay the case.


It's exactly how "science is supposed to work" when the experiments don't confirm the expectations of some theoretical physicists:

"killing off theories is simply how science is supposed to work"

"This is what we do all the time, put forward a working hypothesis and test it," said Enrico Barausse of the Astrophysics Institute of Paris, who has worked on MOND-like theories. "99.9 percent of the time you rule out the hypothesis; the remaining 0.1 percent of the time you win the Nobel Prize."

(from https://www.quantamagazine.org/troubled-times-for-alternativ... )

Of course everybody hopes to be on the team that "predicted" some discovery, and the "most obvious" predictions have always been more expected. But nature simply is, and doesn't have to be "kind" to this or that theoretical physicist.


I don't like the words "laws of nature". It is another example of anthropomorphism. Law is something that only makes sense in a human social context. All those so-called laws are just generalisations extracted from the observations we have made of nature, and most importantly, those observations are a very tiny portion of the space of all possible observations of the universe.


IMHO the real action in physics is the appearance of simple, clear-cut phenomena arising in complex systems.


Condensed matter physics is so often overlooked by science communicators, it's tragic. Though to be fair, it's an incredibly hard thing to explain to laypeople.


The Science review quoted has since been edited "to remove an unattributed quote." It appears to me to be quite reasonable.


I've got a theory that the next breakthrough, figuring out quantum gravity and the like, won't come from physical experiments but from AI. Advanced physics is really hard on the human brain (physics dropout here), and in the way AlphaGo was able to see new strategies that eluded humans for centuries, I'm guessing some future AI will be able to figure out interesting new physics theories that we've missed.


To me, the criticism of the Science review that this book offers no alternative paths forward is very strong. This is because when blundering around in the darkness to find truth, there are no good algorithms. If you could tell a high energy theorist a better algorithm to find good problems to work on, they would listen. But if all you can do is point out that their algorithm is suboptimal: well, sure. Of course it's suboptimal: we know, we just have no better search algorithms in physics-theory-space.

Woit and company seem really invested in smearing high energy theory in front of popular audiences. A book-long `look at the fact that this algorithm is really slow!!!' is a sigh-worthy addition.

It's well acknowledged in the field that SUSY, string theory, etc. are very incomplete ideas. No one is saying they have the full story, and I don't think anyone expects to have the full story anytime soon.

So what have people been doing?

1) People have been expositing our `best guess' theory, which /is/ string theory. We have really good tests of quantum field theory, and really good reasons to think that `the most natural' generalisation is string theory. We're not cocky enough to claim that string theory /is/ the generalisation, just that it's a really good candidate, and isn't it worth spending a vanishing fraction of GDP to explore it and see how good a candidate it really is? Vastly more money is spent on innovating ways to get people to look at advertisements. It doesn't seem like there is a high bar to pass to justify the existence of studying this stuff.

Of course, a lot of effort goes into finding better guesses. Supersymmetry has been under the gun since the LHC turned on, and tons of effort has been and is spent thinking about the alternatives. Supersymmetry just remains a strong enough idea in comparison to the alternatives people have proposed that people think it's the best idea to explore. And as time goes on and supersymmetry looks weaker and weaker, more people do spend time looking for good alternatives.

2) People have been using tools from string theory to tell us about ordinary quantum field theories. Dualities like AdS/CFT are huge right now. Lots of really good ideas have come from high energy theory in recent years. AdS/CFT is a string-theoretic duality which teaches us a lot about statistical mechanical systems, things that definitely are testable. So string theory has been testably productive, as applied to the study of quantum field theories and statistical mechanics.

3) Also, the idea of topological quantum field theory is a recent innovation of high energy theory, hardly fully explored, and has been hugely important for modern mathematics.

I think these activities are pretty reasonable.


I disagree that that is a fair criticism - the idea that a person shouldn't criticize the state of "physics" as a professional practice if they don't have better solutions.

I read Lee Smolin’s “The Trouble With Physics,” covering similar terrain, and his book was not presented as a work of science: it was rather a book about the sociology of science, and how the structures in place controlling the resources for research were going astray, by continuing to support, professionally, work in areas that were not proving fruitful, and limiting resources that might go towards finding new solutions.

Lost In Math sounds very interesting, as the author has decided to speak with leading researchers about their work, at a time when the validity of that work is being questioned.

That’s a worthy topic.


Particle physics is the most glamorous branch of physics, but I agree with the book author that little progress is being made. In what branches of theoretical physics could a brilliant young person have a good chance of making important contributions?


Condensed matter theory. Consider: there is only one vacuum, the background upon which all of HEP/cosmology is set, and one collection of particles (the Standard Model). On the other hand, there are vast numbers of different crystal structures and non-crystalline phases of matter, each of which defines a different background with different properties and different particles. Each substance defines a new universe, which ultimately emerges from the Standard Model particles in the vacuum, but can be much more fruitfully studied on its own terms. https://en.wikipedia.org/wiki/Condensed_matter_physics https://en.wikipedia.org/wiki/Quasiparticle


Reminds me of "Plasma Crystals": dust in plasma can form crystals.

> Dusty plasmas are interesting because the presence of particles significantly alters the charged particle equilibrium leading to different phenomena. It is a field of current research. Electrostatic coupling between the grains can vary over a wide range so that the states of the dusty plasma can change from weakly coupled (gaseous) to crystalline. Such plasmas are of interest as a non-Hamiltonian system of interacting particles and as a means to study generic fundamental physics of self-organization, pattern formation, phase transitions, and scaling.

https://en.wikipedia.org/wiki/Dusty_plasma



