Such ideas of "beauty" may begin as a useful shorthand, in that they initially incorporate experienced practitioners' intuition of what theories are more reasonable, based on the history of their field and their collective knowledge. But they are only useful for so long as they remain subordinate to actual empirical evidence in the ultimate judgement of a theory or technique's truthfulness.
But when the actual application of strong empiricism is difficult or expensive (or becomes so, as has been the case in theoretical physics over the past half-century), or just impossible, the aesthetic barometer can gradually supplant evidence entirely. This may be the true measure of a science's softness or hardness.
In the field of software engineering, there are concepts of "clean" and "beautiful" code and languages. But how rigorously are such terms defined? How closely does our subjective impression of a codebase's aesthetics mirror its actual quality on those metrics that should matter most, such as its performance, maintainability, quantity of defects, etc.?
Anyone who has sat in a code review with two "true believer" "software engineers" will know how subjective and poorly justified, yet fanatically defended, these positions are. In fact, anyone who has done recruiting will know how damaging having a true believer on your team can be (two is a genuine disaster).
Software isn't engineering because we can't measure much about it so far, apart from compression, speed, and detected defects. Speed and compression don't matter for 99% of modern software; we have laptops with 32 GB of memory, servers with half a terabyte and scores of GHz cores - the need to write in C has disappeared for almost everyone. Counting detected defects is problematic because as soon as you start counting them, people stop submitting to the code repository until they have removed everything they can find. That's a problem because they hold onto their work for weeks, and meanwhile, had it been in the repo, everyone else would have found twice as much. The team also stops reporting what they have found and just talks about refactoring, and gets less keen on testing.
So what do we measure? We can't specify tolerances; we can't deduce structural properties; we can't deduce lifecycle (recycling, onward development) properties. We don't know why the core engineering decisions were taken, because often the core decisions happen at 3:17 on a Tuesday and no one even notices at the time, or remembers later. It's not engineering, not close.
I don't know what weird world you live in, but most of the people I ship software to run ten-year-old laptops or Macs on 10.6/10.7. And even top-of-the-line 2018 MacBooks don't have options for more than 16 GB of RAM.
I despise running non-native desktop applications on a workstation with 128GB RAM because the developers often think it doesn't matter how much memory they use if there's a lot of it. That's frustrating and lazy, and it means I'm paying for their time savings with my hardware resources. "Unused memory is wasted memory!" they say, except I would happily use as much memory as possible, but the nth Electron app prevents me from running something else.
C and its derivatives have disappeared from web applications (particularly the frontend), but it's painful to forgo them for desktop and systems applications.
I think it's frustrating (for you) and cheap. For them, writing in Python rather than C is probably (see my previous comment for my acknowledgement of the lameness of "probably") significantly cheaper - possibly several orders of magnitude cheaper. For you, 128GB of RAM is pricey, but nothing like the cost of software written "the old way". You are paying for their time savings with your hardware; the alternative is to pay for their time, which would be far more expensive.
You definitely have a point w.r.t. systems apps; C++ is needed if you want to get close to the metal. I think Julia is pushing it out of a lot of scientific apps, though.
Engineering is the selection of technological solutions to social and business problems, made after some analytical process rooted in mathematical and physical laws, from a set of defensible alternatives, under economic and ethical constraints.
Regardless of what our gatekeeping cousins might wish to believe, software engineering fits this definition.
That many people working on software do not behave like engineers does not change the fact that it exists. That there is rarely any legally-enforced guild structure to regulate the membership of the profession does not change the fact that it exists. Civil engineering existed thousands of years before the word engineering was even invented.
There isn't much of that in application software engineering. Some domains have it, like ML and DSP, but most of the code written for startups is ad hoc carpentry, plumbing, and masonry, not applied physics.
You can do a lot with carpentry, plumbing, and masonry without really knowing what you're doing - up to a certain scale. But when you start building giant things without rigorous non-tribal reasons for your design choices, you're only ever going to build things that fall down and break a lot.
Too harsh? I think it's hard to find anyone today who thinks software is acceptably robust and reliable by default. Some products/apps/pages work just fine if not stressed too hard. But generally security, bugs, and efficiency are all poor compared to comparable challenges in hard engineering disciplines.
The only reasonable definition of "engineering" that I know of is along the lines of "the art of building stuff". Writing software fits that definition perfectly well; it's just what people call "software engineering" that doesn't.
Can you elaborate on the sea of measurements that can be extracted from software? What are the things that can usefully be measured in the current state of the art?
>made after some analytical process rooted in mathematical and physical laws, from a set of defensible alternatives, under economic and ethical constraints
OK - how can we do analysis of things that are not measured? In the end, all the software design I've ever seen comes down to "this is honestly the best we can do". There could be 100 better ways, or this way could be horribly flawed; there's no way to figure this out a priori.
When I went to college I roomed with a bunch of Aerospace engineers. They spent days and weeks doing structural calculations which were underpinned by physical knowledge of the materials that were in the structures. If the actuator lever for the flap was too heavy or not strong enough when it was made of aluminium then it could be made of titanium instead; but they knew. The sums showed them.
I did CS; at the time we thought that formal modelling would do the same for us. That is beginning to come back with things like Lamport's work on TLA+, but I think the last 30 years have shown us how little we know.
Now yes, people built cathedrals in the middle ages, and bridges. Some of those are around as monuments to their craft. But a lot of them aren't, a lot of them fell on people's heads.
So this important decision was based on this dude (excuse me, software architect) shrugging and saying it feels good.
That being said, by now software engineering has a large set of techniques, with reasonably well understood properties, and we can absolutely reason through whether a technique is useful to solve a particular problem.
For example, polymorphism is a useful technique to invert the locality of implementation details, making it easier to extend software in a distributed way. Its advantage is its drawback, it makes control flow harder to follow and generally the software more difficult to understand precisely due to the distributed extensibility.
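A minimal Python sketch of this trade-off (the class and method names are made up for illustration): each output format lives in its own class, so new formats can be added without touching the caller, but the call site no longer tells you which implementation runs.

```python
import json
from abc import ABC, abstractmethod

class Exporter(ABC):
    """Abstract interface: each output format lives in its own class."""
    @abstractmethod
    def export(self, data: dict) -> str: ...

class JsonExporter(Exporter):
    def export(self, data: dict) -> str:
        return json.dumps(data, sort_keys=True)

class CsvExporter(Exporter):
    def export(self, data: dict) -> str:
        return "\n".join(f"{k},{v}" for k, v in sorted(data.items()))

def report(exporter: Exporter, data: dict) -> str:
    # Reading this line alone, you cannot tell which format runs --
    # that is both the distributed extensibility and the readability cost.
    return exporter.export(data)

print(report(JsonExporter(), {"a": 1, "b": 2}))
print(report(CsvExporter(), {"a": 1, "b": 2}))
```

A third format can be added in a new file without editing `report` at all, which is exactly the inversion of locality the comment above describes.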
Similarly, mocking is a technique to reduce testing "surface" by removing the dependency on specific components from a test. It's a great tool for making tests easier to write, improving test performance, and avoiding ambient test dependencies and their associated complexity. On the other hand, it reduces test depth by ignoring potentially important interactions, and often leads to overly brittle, hard-to-maintain tests that verify little of real value.
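A small sketch with Python's `unittest.mock` (the `client` object and its `get_quote` method are hypothetical): the mock removes the network dependency and makes the test fast, but the real client's behaviour is never exercised.

```python
from unittest.mock import Mock

def fetch_price(client, symbol):
    """Logic under test; in production, 'client' talks to a network service."""
    quote = client.get_quote(symbol)
    return round(quote["price"] * 1.01, 2)  # add a 1% fee

def test_fetch_price_adds_fee():
    client = Mock()  # stands in for the real gateway
    client.get_quote.return_value = {"price": 100.0}
    assert fetch_price(client, "ACME") == 101.0
    client.get_quote.assert_called_once_with("ACME")
    # Brittleness in action: renaming get_quote breaks this test even if
    # the observable behaviour of the system is unchanged.

test_fetch_price_adds_fee()
```

The test runs in microseconds with no network, which is the upside; the downside is that a broken real `get_quote` would still pass it.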
It's easy to go on. All these techniques aren't silver bullets or axioms to be followed religiously, they are specific solutions to specific problems. They aren't necessarily measurable in an analog fashion, such as the load bearing capability of a steel beam; they are often more binary (quite fitting, given the domain). But you can judge whether they are useful to solve a specific problem.
That is, I don't follow the "it's all just opinion" kind of software engineering defeatism. Yes, there's a ton of extremely questionable evangelism, often spread with huge pomp and circumstance, by people with limited insight. But that's really just that, limited insight.
Others have replied to your point on performance being irrelevant - that's another area of software engineering that's hugely important, and not just in the Google kind of setting.
Unfortunately, her insights have been largely ignored. I first heard about it twenty years ago in a book called "System Design from Provably Correct Constructs" (which, oddly enough, doesn't mention Hamilton outside of a few references in the back.)
In modern terms, programs in HOS are developed by the direct elaboration of the Abstract Syntax Tree (like Lisp code) using only a limited set of operations which each preserve the semantic correctness of the tree. In this way, you are prevented from building incorrect software.
(You might build a barn instead of a pool, but it will be a fine barn.)
In addition to the availability of a defect-free development methodology, Category Theory provides a solid mathematical foundation for determining the most-highly-factored form(s) of a given code-base. Meaning, if you have a "categorical" programming language you can more-or-less automatically extract the most general form of your program, which can then be used "over" different categories to get different concrete programs from one piece of code, with confidence (mathematical confidence) that they are valid, correct programs. There's now a great explanation by Conal Elliott about this called "Compiling to Categories".
In any event, I would say that Category Theory gives us a language-agnostic way to rigorously define the "cleanliness" or "beauty" of software.
The tools are there. We have to notice them and pick them up.
 You're not editing text files and then using other tools to (try to) ensure they're not broken too badly. No linters, no formatters, no syntax errors, no type conflicts, no off-by-one errors, etc... Hamilton identified all the sources of error in program construction and designed a system that eliminated them.
 "'Tis a fine barn but 'tis no pool, English."
 "Compiling to Categories" Conal Elliott February 2017 http://conal.net/papers/compiling-to-categories/
In "Compiling to Categories" what he’s doing is translating lambda forms into a kind of “point-free” style and then showing how to instantiate that code over different categories to get several different kinds of program out of the same code.
I've been working with a stack-based "concatenative" language called Joy which I think makes this stuff a lot clearer. Joy code is already in "point-free" style (no vars, no lambdas) and it includes functions that do the same job as, but are more elegant than, the triangle operators and others in the "squiggol" tradition. ( https://wiki.haskell.org/Pointfree )
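For comparison, here is what point-free style looks like sketched in Python (a rough analogue only; in Joy, simply concatenating words is the composition operator):

```python
from functools import reduce

def compose(*fns):
    """Right-to-left composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

double = lambda x: x * 2
inc = lambda x: x + 1

# Point-free: the pipeline is defined without ever naming its argument.
inc_then_double = compose(double, inc)
print(inc_then_double(5))  # double(inc(5)) = 12
```

The definition of `inc_then_double` mentions no variables at all, which is the essence of the style.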
There's a little bit in "Mathematical foundations of Joy" and "The Algebra of Joy" by Manfred von Thun.
Here's a piece of Joy code (it's part of an ordered binary tree library):
pop swap roll< rest rest cons cons
[4 5 ...] 3 2 1 . pop swap roll< rest rest cons cons
[4 5 ...] 3 2 . swap roll< rest rest cons cons
[4 5 ...] 2 3 . roll< rest rest cons cons
2 3 [4 5 ...] . rest rest cons cons
2 3 [5 ...] . rest cons cons
2 3 [...] . cons cons
2 [3 ...] . cons
[2 3 ...] .
(--) ∘ pop swap roll< rest rest cons cons
(a1 --) ∘ swap roll< rest rest cons cons
(a3 a2 a1 -- a2 a3) ∘ roll< rest rest cons cons
(a4 a3 a2 a1 -- a2 a3 a4) ∘ rest rest cons cons
([a4 ...1] a3 a2 a1 -- a2 a3 [...1]) ∘ rest cons cons
([a4 a5 ...1] a3 a2 a1 -- a2 a3 [...1]) ∘ cons cons
([a4 a5 ...1] a3 a2 a1 -- a2 [a3 ...1]) ∘ cons
([a4 a5 ...1] a3 a2 a1 -- [a2 a3 ...1]) ∘
To repeat, that's the same code evaluated in two different domains, er, Categories, to get two different correct computations.
If I had implementations for them I could evaluate the expression in/over a category and get as output: a dataflow diagram of the program, or a hardware description of a circuit for the program, which are examples from Elliott's talk+paper.
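The flavour of that idea can be sketched in a few lines of Python (a toy illustration, not Elliott's actual Haskell machinery): one abstract "program" is handed two different interpreters, one that evaluates over numbers and one that builds a dataflow-style description instead.

```python
def program(ops):
    """One abstract program -- square, then increment -- expressed only
    through the operations the chosen 'category' provides."""
    return ops["comp"](ops["inc"], ops["square"])

# Interpretation 1: ordinary evaluation over numbers.
evaluate = {
    "square": lambda x: x * x,
    "inc":    lambda x: x + 1,
    "comp":   lambda f, g: lambda x: f(g(x)),
}

# Interpretation 2: build a textual dataflow description instead of running.
describe = {
    "square": "square",
    "inc":    "inc",
    "comp":   lambda f, g: f"{g} -> {f}",
}

print(program(evaluate)(3))  # square then inc: 3*3 + 1 = 10
print(program(describe))     # a description: square -> inc
```

Same `program`, two "categories", two valid outputs - one a number, one a diagram of sorts.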
Edit: to try to answer your question, the goal of elegance/beauty in code is entirely about comprehensibility and not utility, is it not?
See also https://en.wikipedia.org/wiki/Occam%27s_razor
That said, the whole field of science journalism and popular science book writing lives by violating these rules, making things much simpler than they actually are. But then again, the average person is a) not in charge of the physics experiments, and b) anyone who tries to use these books as a reference in a scientific paper is in for a surprise.
I find it fascinating that so much interesting physics was discovered in the last century - for example, I only recently found out that the neutron is essentially a post-WWI discovery. And yet so little fundamental physics has come along in the last 30 years.
Now that our knowledge of the universe's microstructure has stopped growing, will the other sciences, and then technology, follow into this stoppage?
Arguably the Internet followed from computers which followed from transistors which followed from fundamental physical theory of atoms and electrons from less than 150 years ago.
Will the lack of new physics cascade down, or is there enough "stuff" between physics and technology that tech can keep going?
The thing is, subatomic stuff is extremely energy dense and packed very stably into particles and atoms. Even to just crack open that package we require massive machines like the LHC. Even if we could control things there it's not clear that it's even possible to rearrange the parts into other configurations. All such configurations would be extremely energy dense and intrinsically unstable, with QM making sure they decay quickly. So figuring out what stuff is made of doesn't give us more ways to interact with it.
The good news is that we're far from done understanding how atoms can combine into interesting structures. Like _really_ far from done. From topological states of matter to meta materials there are lots of surprises.
As a small correction, this is not true. What you probably really mean is massive amounts of energy, but large machines are not the requirement, because RF-based acceleration is not the final solution. See laser-based plasma accelerators.
There's despair at HEP, yes, but it's because we can't discover a Higgs boson every year and many physicists thought that maybe there were lots of new particles coming up. There was a solid theoretical possibility that would have allowed in a way to get something similar. But they're not there, or at least we don't have the means to produce them now, so you have to try other things. That's not bad, that's simply how science works.
You can't make up new particles if they don't exist. That's not a failure, that's the sign that you have done your job. The better you get at something, the harder it gets to improve your results. No one tells biologists that they have to get something better than Evolution every decade.
That disappointment of many can't spoil what has been a triumph, the Standard Model with warts and all is how Nature works: there really are Higgs bosons which we didn't have the technology to even look for until now. And there are still precision measurements to be made that will give us information about what we can't reach directly, because it's hiding in some numbers there. Such things don't make it to headlines, though.
About fundamental physics... Well, we've discovered that neutrinos have mass - that's new physics right there. We've measured gravitational waves, which many thought was impossible. We've discovered that the Universe's expansion is accelerating - more new physics. Also, dark matter is pretty much no longer a mere hypothesis, because of weak lensing and a number of other independent observations, so there's exotic matter out there. There's a lot of work to be done, yes, but we've learned a lot already.
It's not that there's a "lack" of anything, except understanding. "New" is in the eye of the observer; the way I see it, there are very new and fascinating discoveries every year. It's more that the expectations aren't properly communicated. As the article author writes:
"I fear a complex set of issues is likely to get over-simplified, and this over-simplified version of the book’s argument is all that much of the public is ever going to hear about it. "
It's all about what physicists can get funding to explore. The biggest worry is the perception among the "simple" public (some of whom actually decide what gets financed) that there have to be "successes", however "successes" are defined.
In reality, in physics, investing a lot of energy to find something and not finding it can still be the point of a new beginning, and history gives us plenty of examples. Without the Michelson-Morley experiments that didn't find what was "expected", there would be no Special and General Relativity - those experiments were necessary. And Michelson's invention still gives fascinating results: the first measurement of merging black holes, only a few years ago, more than 100 years after Michelson's first experiment, was based on the concepts Michelson first invented. We simply have to fund the experiments, and every properly done experiment is a success, even if the results aren't what a lot of people expected.
There were times when new particles "appearing" wasn't expected. Now the expectation is that even relatively small efforts must bring new particles. These expectations will most likely be adjusted.
Regarding financing: for example, the last time I checked, the total cost of everything the LHC did, including building it and operating it for years, was around $9 billion. That's effectively nothing compared to the $1,500 billion (!) still planned to be spent on a plane that will surely not advance science, the famous F-35 (on which more has already been spent than on the LHC). That puts some views in real perspective. One military plane program could finance building and operating a totally new LHC every year for at least 150 years.
To be fair, that budget is for 2500 planes and their maintenance over 50 years.
But the real science (especially to observe the Earth, the climate and the effects of global warming) struggles for breadcrumbs or even for pure survival. Because ideology, not rational thinking.
Deterring war is the greatest way to unlock human prosperity.
And killing with precision means saving lives, the greatest good in the world.
Investing in healthcare, education, and infrastructure on a global scale would a) be possible with that kind of budget and b) do a lot more to prevent war, since there would be a whole lot less to fight about.
So we have now picked all the low-hanging fruit in physics. The really hard stuff may take a few millennia, which is still unimaginably short compared to the age of the universe. And the universe itself is just out of its infancy and can go on for trillions of years... But we may find another interesting thing or two before we fade away.
Microbiology has lots of work ahead. We can read DNA, but progress in understanding it is slow. Lots of potential there.
The discoveries of particle physics are not where the bulk of technology spin off comes from. Most of the particles that are generated don't last long, and are hard to generate. Most of the particles that have been discovered have had limited utility. If the particles aren't naturally generated (like in muon imaging detection) there isn't too much use for them.
So you get things like muon tomography of the pyramids.
But you don't generate the muons yourself. Similarly, understanding cosmic rays in the atmosphere requires you to have some understanding of the various decay products, but unless you're trying to do something really sensitive, you may not care.
University College London's put out a document talking about the technological benefits of high energy physics research
And they basically break it down to "Accelerator Science and Technology", "Detector Research and Development", "Impact of Electronics and Readout Developed for Particle Physics", "Computing, Software and Analysis Techniques", and "Special Skills and Competencies". In short, nothing really to do with the actual discoveries themselves; it's the process of building tools to explore the high energy regimes that is useful to technology. Arguably, the same can be said about the theoretical side.
An (overly pragmatic) argument is: "If you can't detect any deviations from the Standard Model, then the Standard Model is sufficient for all foreseeable technological applications."
And there are plenty of things to measure and discover within the Standard Model. It's not all about discovering new fundamental particles. There's also the production of hypothetical particles that are predicted by the Standard Model but have never been made, like glueballs, for example:
(Which, as the wikipedia article points out, has no actual applications :) )
It wouldn't surprise me if the majority of physicists are now leaning towards applications and less "fundamental" disciplines. Newton and Einstein were two centuries apart! I think the past 100 years of physics will still run us another century before we make any more groundbreaking advancements in theory. It's not even as bleak as it sounds, because we're really at the tip of an iceberg here already.
At least there are plenty of biological examples that we haven't gotten close to replicating technologically -- while we have substitutes for almost every biological functionality, there are areas in the parameter space where the bio versions comfortably lead. The brain of course is an example, leading in efficiency and capabilities; but there are many others like compact self-replication, self-repair of structures, micro and nanomachinery (exhibited in cells), and so much more. None of that of course requires advancing physics.
Even for long term technologies, I don't think a deeper understanding of physics would have much to contribute, unless we found something wild like a compound made from new particles. That isn't to say it's not worth pursuing; it just takes a different relation to society, similar to math and philosophy.
From this, one could reason that new fundamental physics would resolve issues that have no practical significance. For example, I don't know of any impact of the discovery of the Higgs boson on engineering. But there is, of course, the option of discovering something we don't even imagine at this point, and that could have practical implications.
TL;DR: if the gathered resources of the whole world are not enough to test it, it may not have any practical, engineering significance.
Imagine how many people would have said something similar about quantum physics before the first A-bomb was developed.
Historically, the practical significance of major discoveries often came much later than the discovery itself. We do need the discoveries, especially those "of no practical significance" as long as they really improve our understanding of the physical world.
Also, I wrote "From this one could reason". I myself actually hope that we will get some breakthroughs, because established theories make ridiculous predictions or require equally ridiculous fine-tuning. One example is the vacuum catastrophe: "the worst theoretical prediction in the history of physics". We have had similar predictions in the past, e.g. the "ultraviolet catastrophe" that led to the whole idea of the "quantum".
But IMO the issue here may be more fundamental. Testing quantum gravity theories requires energies way beyond even the stupidly high energies of the LHC. As I understand it, the requirement follows from the theory. It's like the light-speed limit: it's not something you can engineer around to see at lower energies. You can engineer your way up to those energies, but there doesn't seem to be a shortcut like the positive feedback loops in the case of the A-bomb.
Chicago Pile-1, the first nuclear reactor, cost $50,000 at the time, so something below $1 million today. Most historically important experiments back then were even cheaper. We stumbled upon a lot of things because we were doing a sort of cheap random search around things we did not understand. Most importantly, the cheapness of these experiments coincided with the potential for practical applicability. I fear that this is mostly over, because as far as I know we don't have fundamentally inexplicable phenomena - I stress "fundamentally" - that we can play around with cheaply. The LHC budget is somewhere around $10 billion, JWST another $10 billion, LIGO somewhere around $1 billion. And as I understand it, this is nothing compared to what would be required to reject, e.g., string theory.
If you can't experiment with it cheaply I doubt that you can apply it cheaply.
Definitely enough. We're at the dawn of bio- and nanotech and haven't even scratched the surface of the consequences of 20th-century science. It'll keep technologists busy for at least 50 years, if not more.
Pure physicists on the other hand are in a tough spot. By pure I mean those not involved in applications, e.g., HEP, string theorists, etc. Though I'm not a pure physicist and it's just my opinion.
That's pretty much true for classical physics as well.
This is true of all of physics - reducing observations to the level of predictability. Beyond that we are wading into philosophy.
(BTW, classical physics has warts as well - what is the force between a charge of 1 Coulomb and -1 Coulomb when they are touching each other?)
In classical mechanics we use a slightly wrong model of the world but then everything is consistent and works as long as you are not stepping into regions where the approximations are no longer good. Quantum mechanics yields correct predictions for a wider range of phenomena because we no longer pretend that Planck's constant is zero, it explains some strange features of classical physics, but it is - at our current level of understanding - also no longer self-consistent.
And while quantum mechanics is certainly less intuitive I think that is not really the issue. Once you leave Newtonian mechanics behind and step into the Lagrangian or Hamiltonian formulation, even classical mechanics becomes a quite abstract theory. I am certainly still better at picturing phase space flows than superpositions in momentum space but I have no fundamental problem with abstract representations.
But that time evolution is supposed to be unitary while I am not looking and then suddenly goes out the window - that is something you cannot simply ignore. It's not that it's unintuitive; it's that it's not consistent. And it's not limited to a region of the theory where we might expect it to break down; it's right at its center.
> From what I understand, the parts of quantum mechanics that aren't still fully understood are where it overlaps with relativity.
Yes and no. You're right, one big question is the high-energy completions of the quantum field theories that make up the standard model, which we expect to break down at very high energies. On the one hand, this expectation is due to the fact that general relativity is likely going to play a role at these scales, so we'll probably have to solve the puzzle of quantum gravity along the way. On the other hand, though, there are many more reasons why the quantum field theories we have right now are expected not to be the full story:
1) For reasons of, well, beauty. Right now, the standard model has many free parameters but even if you ignore the parameters for a second, it's also a somewhat random collection of specific gauge theories (the latter is basically another word for quantum field theory), i.e. specific gauge groups. Therefore, people hope that there is an underlying "grand unifying theory" (GUT) that should become visible at high energies. Note that the beauty that comes with a GUT is only in parts what Hossenfelder means when she discusses "beauty".
2) More importantly than 1), the usual interpretation of the renormalization procedure in QFT these days is that it is a way to exclude physics at the very high energy scales from our calculations/predictions. Put differently, without introducing a renormalization scale (basically a "maximum energy") the equations we have blow up and people take this as a sign that our theories are incorrect at extremely high energies. So in this sense, current quantum field theories are nowadays seen as effective theories (which are only valid at low-energy scales) of an underlying "complete" theory which will hopefully cover all energy (and thus length) scales.
But apart from the lacking high-energy completion, there are more reasons to believe that we don't fully understand nature yet and that we might need to come up with a new theory:
3) Even after decades of research we're still lacking a rigorous mathematical underpinning for practically all relevant QFTs. Considering that nature so far has always been governed by laws that we could express in precise mathematical terms, this might suggest that we're doing something wrong at a very fundamental level. A lot of people (or at least those I have spoken to) hope that a theory of quantum gravity will solve this (and thereby reestablish mathematical sanity in high-energy physics).
4) Dark matter: As you might know, dark matter is a general name for the apparent stuff that causes our observations to differ slightly from the laws of gravity that we know (i.e. from general relativity). The existence of dark matter doesn't necessarily mean that we'll need an entirely new theory, but if we're right in our assumption that dark matter consists of particle excitations of another quantum field (usually called WIMPs), we'll need to at least extend the current standard model.
5) The foundations of quantum mechanics: More specifically, the measurement problem together with the issue of having different interpretations of QM and, possibly, the issue of time. These are basically the issues danbruc mentioned above. For some background info see for instance Steven Weinberg's lecture on "What's the matter with quantum mechanics?" (discussed by Sabine Hossenfelder here: http://backreaction.blogspot.com/2016/11/steven-weinberg-doe...) and Carlo Rovelli's recent book (https://news.ycombinator.com/item?id=17376437). Again, some people hope that these issues, too, will eventually be solved by a theory of quantum gravity, but this is anything but clear. (By the way, a friend of mine working on Bohmian Mechanics recently told me that there might be experiments to distinguish Bohm's interpretation from the Copenhagen one, so we might not even need to wait for quantum gravity.)
Anyway, let's assume for a second that we really need a full quantum theory of gravity to solve all these issues. Then, what's our best shot at coming up with a theory of quantum gravity? Well, studying the high-energy behavior of particles and hoping to discover "new physics". So in this sense, you were absolutely right of course. I just wanted to point out that it's not reconciling relativity and quantum mechanics alone that drives us.
Of these 5 criticisms of contemporary theory, all of them except dark matter seem to stem from a suspicion about the eventual result through the lens of beauty, unification, etc. However, the dark matter problem stands out as one where measurements do not line up with theory, and seems more of a classic example of how we've typically "turned the crank" on making progress in the past.
How big is the concerted effort to understand dark matter vs trying to attack these other concerns in theoretical physics? It certainly seems like an aggressive, field-wide effort is warranted, given that it's one of the few universally acknowledged, reproducible, and blatant predictive errors in current models. The discovery of these errors seems like a huge gift to physicists.
I don't think it is really justified to label it this way. We know that quantum field theory is not the correct final theory because it breaks down at short lengths and high energies. It is in some sense most likely even the completely wrong way of looking at the problem. Quantum field theories are mathematical tools for dealing with many-particle systems, and they have some nice properties, like making locality manifest; however, they also force on us things that are non-physical or hard to work with, for example gauge symmetries and virtual particles. So even if there were no four different forces that could maybe be unified into one, there would still be a lot of issues with quantum field theory.
> However, the dark matter problem stands out as one where measurements do not line up with theory, and seems more of a classic example of how we've typically "turned the crank" on making progress in the past.
It is of course true that dark matter is in a certain sense a more tangible problem, but on the other hand there are so many different suggested resolutions. Astronomical and cosmological observations may be the way to go for modified gravity or small black holes, particle detector experiments may be the way for weakly interacting massive particles, and each other proposed resolution probably requires its own set of experiments. So the problem seems to be not so much that nobody is investing the effort, but that we are not really sure where to look and for what. And in the case of things like weakly interacting massive particles, we somewhat come back to extending the standard model, because such extensions may better inform us where to look.
One recent paper that explores Bohmian mechanical predictions concerning arrival times is https://arxiv.org/abs/1802.07141. It explains how it is not clear in the standard formulation what the answer should be, and then provides the Bohmian answer, since arrival times do exist when there are particles with trajectories that go somewhere. I suspect that if the predictions were proven right, someone would look at standard QM and deduce that it should have been considered the right answer all along.
As for quantum field theory and its divergences, while the work has only been done on some of the simpler models, there is a process called "Interior Boundary Conditions" which removes the need for ultraviolet cutoffs: https://arxiv.org/abs/1703.04476
The basic idea is that the free Hamiltonian's natural domain is not appropriate when adding in the particle creation/annihilation operators and thus a boundary condition representing those interactions needs to be chosen. More or less, the appropriate set of wave functions are those whose probabilities adjust themselves appropriately for the particle creation and annihilation. This approach was inspired by a Bohmian perspective, but it is a mathematical methodology that does not require accepting that perspective.
Thus, there is some hope that the divergences are solvable already with what we know.
As far as I know, the most fundamental issue between relativity and quantum mechanics is that of time. Namely, there is a strong notion of "now" seemingly from QM and a strong notion of "not now" from relativity. Neither statement is absolute, but the tendencies are there. Bell's work suggests that either nature is nonlocal (now-ish) or there is something less well-defined about experiments than we think (many universes, experimental results not finalized until actually compared, somehow). In Bohmian mechanics, the issue of "now" is quite acute, as the trajectory of each particle depends not only on the wave function but also on the positions of all the other particles "now". This is problematic because there is no definite "now" in relativity.
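For concreteness, that "now" dependence is visible directly in the guidance equation of Bohmian mechanics (this is the standard textbook form for N non-relativistic particles, not taken from any paper cited here):

```latex
% Bohmian guidance equation: the velocity of particle k at
% time t is determined by the universal wave function \psi
% evaluated at the positions Q_1(t), ..., Q_N(t) of ALL
% particles at that same instant t.
\frac{dQ_k}{dt}
  = \frac{\hbar}{m_k}\,
    \operatorname{Im}\!\left[
      \frac{\nabla_k \psi}{\psi}
    \right]\!\bigl(Q_1(t),\dots,Q_N(t)\bigr)
```

The simultaneous evaluation at all N positions is exactly the preferred "now" that has no invariant meaning in relativity.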
One solution is to define a "now", namely, to define slices of space-time that serve as the instants of now. This can be done in a Lorentz-covariant way in flat space-time, in such a manner that we cannot detect the foliation, by constructing the foliation from the dynamical structures already present: https://arxiv.org/abs/1307.1714. The authors are not necessarily advocating for this, as it seems philosophically wrong, but, again, Bell's work strongly suggests that something of this kind needs to be true, no matter how distasteful it is.
This would necessarily imply that space-time can have a foliation, something which is not always true. Perhaps the dynamics defining the space-time metric in the quantum framework ensures that this does indeed stay the case.
> killing off theories is simply how science is supposed to work

"This is what we do all the time, put forward a working hypothesis and test it," said Enrico Barausse of the Astrophysics Institute of Paris, who has worked on MOND-like theories. "99.9 percent of the time you rule out the hypothesis; the remaining 0.1 percent of the time you win the Nobel Prize."
(from https://www.quantamagazine.org/troubled-times-for-alternativ... )
Of course everybody hopes to be on the team that "predicted" some discovery, and the "most obvious" predictions have always been the more expected ones. But nature simply is, and doesn't have to be "kind" to this or that theoretical physicist.
Woit and company seem really invested in smearing high energy theory in front of popular audiences. A book-length "look at the fact that this algorithm is really slow!" is a sigh-worthy addition.
It's well acknowledged in the field that SUSY, string theory, etc. are very incomplete ideas. No one is saying they have the full story, and I don't think anyone expects to have the full story anytime soon.
So what have people been doing?
1) People have been expositing our "best guess" theory, which /is/ string theory. We have really good tests of quantum field theory, and really good reasons to think that the "most natural" generalisation is string theory. We're not cocky enough to claim that string theory /is/ the generalisation, just that it's a really good candidate. Isn't it worth spending a vanishing fraction of GDP to explore it and see how good a candidate it really is?
Like, a vastly larger amount of money is spent on innovating ways to get people to look at advertisements. The bar to justify studying this stuff doesn't seem very high.
Of course, a lot of effort goes into finding better guesses. Supersymmetry has been under the gun since the LHC turned on, and tons of effort has been and is spent thinking about the alternatives.
Supersymmetry just remains a strong enough idea in comparison to the alternatives people have proposed that people think it's the best idea to explore. And as time goes on and supersymmetry looks weaker and weaker, more people do spend time looking for good alternatives.
2) People have been using tools from string theory to tell us about ordinary quantum field theories. Dualities like AdS/CFT are huge right now.
Lots of really good ideas have come from high energy theory in recent years. AdS/CFT is a string-theoretic duality that teaches us a lot about statistical mechanical systems, things that definitely are testable. So string theory has been testably productive, as applied to the study of quantum field theories and statistical mechanics.
3) Also, the idea of topological quantum field theory is a recent innovation of high energy theory, hardly fully explored, and has been hugely important for modern mathematics.
I think these activities are pretty reasonable.
I read Lee Smolin's "The Trouble With Physics," which covers similar terrain. His book was not presented as a work of science; it was rather a book about the sociology of science, and about how the structures controlling the resources for research were going astray: continuing to professionally support work in areas that were not proving fruitful, while limiting resources that might go towards finding new solutions.
Lost In Math sounds very interesting, as the author has decided to speak with leading researchers about their work, at a time when the validity of that work is being questioned.
That’s a worthy topic.
> Dusty plasmas are interesting because the presence of particles significantly alters the charged particle equilibrium leading to different phenomena. It is a field of current research. Electrostatic coupling between the grains can vary over a wide range so that the states of the dusty plasma can change from weakly coupled (gaseous) to crystalline. Such plasmas are of interest as a non-Hamiltonian system of interacting particles and as a means to study generic fundamental physics of self-organization, pattern formation, phase transitions, and scaling.