Ask HN: Is physics moving forward?
85 points by Murkin on July 17, 2015 | 71 comments
Looking back over the last 20 years, technology (especially related to computers) has made extraordinary leaps forward, and the pace is accelerating.

As a total layman in the physics world, it appears (looking from the benches) that things are crawling at the same speed they did 20 years ago.

Is that true? Is the speed of physics advances accelerating as well? What has been happening that we might have been missing?

EDIT: Especially in the applied & practical worlds

As someone who's been in physics for much of the last twenty years, I'd guess that things are going at about the same rate. I don't think that the rate is "crawling", though.

If you look at log plots of parametrized experimental progress, progress remains linear, so it's a Moore's-law-like improvement on many fronts.

The emergence of precision cosmology has really transformed astrophysics in the last twenty years. The solar neutrino problem is now solved entirely (and even \theta_{13} has been measured!). The lynchpin of the Standard Model (the Higgs) has been found. LIGO is likely to make first detection in the next couple of years. Graphene and topological insulators have the solid-state community buzzing. Fluorescence microscopy and nanopore techniques are making waves in the biophysics community. And more, of course. Heck, this week, the most compelling evidence yet for the long-sought pentaquark appeared.

For the probes of the dark sector and of gravity, though we haven't found anything, huge swaths of parameter space (i.e. possible theories) have been ruled out. EDM searches are relentless in their hunt for new physics. Someday, someone will find a reliable anomalous signal, but we can't predict when.

And, if you're looking for fun hints of new physics, check out "muon g-2". The next 5-10 years will be exciting there, too, to see if the existing discrepancy between measurement and the Standard Model will survive closer experimental scrutiny.

We go slow because we will go far.

In terms of progress on fundamental physics (i.e., ignoring things like material science), this is a very slow crawl on any sensible absolute scale, and certainly vastly slower than progress was made up until the 1970s. With respect, I think you're just renormalizing to the glacial pace we've come to expect.

The Standard Model (including the then-yet-to-be-discovered Higgs and top quark) was completed in the mid '70s, around when the bottom and charm quarks were found. The pace of discovery of new fundamental particles has clearly slowed dramatically: the top quark came in 1995 and the Higgs just a couple of years ago, with serious pessimism that anything else will come out of the LHC (even with the luminosity upgrades). The existence of an LHC successor is uncertain, and even if it is built it is unlikely to find anything new.

The solar neutrino problem, compared to just about any other fundamental physics question of the 20th century, is boring. It was solved by adding a few additional free parameters to the SM, with little conceptual insight and few implications for further post-SM physics. As you and I commented elsewhere, pentaquarks are a neat window into low-energy QCD, but they aren't new fundamental physics.

Precision cosmology's biggest result was to confirm the ad hoc and simplest inflationary models of the 1980s, with little hope in the near-term future of doing anything besides ruling out poorly motivated variations designed explicitly to be testable. What is the inflaton like, and what does it do besides adjust a couple of parameters in the CMB? No one knows, and no one has a serious expectation of finding out anytime soon.

None of this, obviously, means that these meager accomplishments weren't hard or that the people who made them aren't extremely clever. But they are undeniably less exciting than what was seen before I was born in 1985, and pale in comparison to the absolute revolutions of the first half of the 20th century (quantum mechanics, relativity, GR, field theory).

I really want to call this out:

> huge swaths of parameter space (i.e. possible theories) have been ruled out.

This is the sort of comment that only we physicists can make with a straight face. Parameter space is bounded only by one's cleverness and stamina. Ruling it out says very little about reality, because most proposals have very little a priori probability. The entire industry of constructing complicated models and then ruling them out is the lamest kind of progress I can imagine. Perhaps the open questions have really just gotten so hard that this is the best that can be hoped for. But we should be honest that this is essentially like hitting a brick wall and digging into it with spoons.

I agree that there's some renormalizing taking place.

From the particle physics end, the revolution stopped because accelerator technology stalled out. If the laser-wakefield (or similar) accelerators can pan out, there may be a resurgence of the 1950's era of progress.

Just because the Standard Model happens to have panned out doesn't mean that its experimental verification wasn't a triumph. The Standard Model didn't have to be true!

The solar neutrino thing is phenomenal, at least to me. We certainly see CKM mixing in the quark sector, but to see leptons change flavor, and do so on such a grand scale, is astounding. When the problem first came up, flavor violation wasn't even on the table.

Precision cosmology has really laid out that there's something really, really big about Physics that we're missing. Really big. And it says it with such statistical and systematic vehemence that even the most curmudgeonly skeptic (myself included) is forced to confront it. In combination with the Bullet Cluster, there's almost no escape from the conclusion that there's something really important to be found.

Totally agreed that parameter space, without qualifiers, is a vague term.

Some parameter spaces are more important than others. I'd argue that the absence of new signals at LHC, the absence of bright new lines in Fermi/GLAST data, the absence of new lines in ultra-high energy cosmic rays, the absence of any signal in any EDM experiment, the absence of any equivalence principle violation or anything else in the gravitational sector, the absence of any experimental signal for any dark matter candidate, the continued expected behavior of the Hulse-Taylor pulsar, etc. are all important, especially in light of the problems of CP-violation, the complete incompatibility of the Standard Model with General Relativity, and the extremely good agreement between observed cosmology/BBN with the \LambdaCDM model.

A lot of really good people have pushed really hard experimentally on entirely reasonable theories, and those ideas have been severely constrained. In my own field, precision tests of gravity, theory is forced to contort itself to avoid experimental constraint. LHC is putting the screws to supersymmetry; there's still plenty of room there, but there's a lot less than there was a few years ago.

As experimentalists, we've got little patience for contrived models of parameter space. As each experiment takes 5-10 years, we only get a few in our lifetime, so we try to make our work as meaningfully-impactful as we can.

What are the assumptions behind "log plots of parametrized experimental progress"? What do these mean in the real world?

To give an example of how "pouring money in and getting slower and slower progress" could still show up as linear progress on a log plot, imagine you have to pour in x² amount of (money, people, time) to get to level x of progress.

In that case "we go slow" means, "we'll gobble up larger and larger amounts of money, but every time people get impatient we'll have some bone to throw at them".
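The toy model above can be sketched in a few lines (the cost function and numbers are hypothetical, just illustrating the commenter's point): progress that doubles each period looks linear on a log plot, while the resources required to reach it grow quadratically.

```python
import math

# Toy cost model from the comment above (numbers are hypothetical):
# reaching progress level x costs x**2 resources, so steady doubling
# of progress requires a quadrupling budget each period.
def cost_to_reach(x: int) -> int:
    """Resources needed to reach progress level x in this toy model."""
    return x ** 2

for k in range(1, 6):
    x = 2 ** k  # progress doubles each period
    print(f"period {k}: progress={x:3d}  "
          f"log2(progress)={math.log2(x):.0f}  cost={cost_to_reach(x):5d}")
```

The log of progress climbs by a constant step each period, which is exactly what "linear on a log plot" means, even as the cost column explodes.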

At roughly constant inflation-adjusted budget, many experiments are getting factors of two in performance improvement today roughly as quickly as they were getting factors of two in the past.

The job market for physics has not changed substantially in the last twenty years.

By 'we go slow because we will go far', I meant only to emphasize that regular incremental improvements generally trump blitzkrieg efforts.

Add to that the fact that there is no global dependency tree of "what we need to discover to make this practical technology possible"; how can we be sure that existing/future research funds are focused on the areas needed to make the most valuable technologies available?

Is that why we're doing physics research? Practical technology?

Isn't it?

Understanding things is certainly great, but you can't even claim you understand something new if you can not use that new understanding to achieve something you couldn't before it.

Or, more clearly, pure theory is good for mathematicians, but scientists can not even claim a theory is new if it has no application.

Isn't it?

It's certainly not what I'm expecting. Even if it hadn't gone on to be central to our engineering, I would have liked to have known about general relativity or QCD.

> Understanding things is certainly great, but you can't even claim you understand something new if you can not use that new understanding to achieve something you couldn't before it.

What do you mean by understand something new? This week we started to understand something new - understand many things new - about Pluto. Are you trying to claim that this isn't true unless we can use it somehow? What is it you're trying to say?

> Or, more clearly, pure theory is good for mathematicians, but scientists can not even claim a theory is new if it has no application.

Is astronomy not a science? Or palaeontology? Or archaeology? Those are all observation-oriented sciences that don't conduct experiments but do gather information and develop knowledge.

I have no idea what it is you're trying to say about claiming a theory is new - do you want to rephrase that? What would you say about the recent discovery of pentaquarks? Pentaquarks are useless, but our knowledge of them is quite new.

What's EDM?

electric dipole moment, or maybe electronic dance music


what is this \theta_{13} ?

The neutrino mixing angle Θ13, probably, written in LaTeX :-)

Given your perspective, what do you make of stuff like Umbral Moonshine http://arxiv.org/abs/1204.2779 and ER=EPR http://arxiv.org/abs/1306.0533 ?

In both cases, I'd be out of my professional depth to make any definitive statements about either paper. My expertise is much closer to experiment.

Maldacena and Susskind are both well regarded and accomplished.

As a PhD in physics - progress in pure physics is slowing down (or at least progress per scientist).

The delay between a discovery in physics and a Nobel prize is getting bigger and bigger [1]. It's true for all fields, but the effect is particularly strong for physics.

"It is safe to say that late 1920s and early 1930s were the “Golden Age” of 20th century physics, when the progress was lightning-fast and new discoveries lay like low-hanging fruits. In the 1940s Dirac commented bitterly, in view of problems quantum field theory was having at the time: “Then, a second-rate physicist could do first-rate work – now, it takes a first-rate physicist to do second-rate work”. Every physicist would love to live in such “interesting times”, when a new unexplored scientific territory opens up." [2]

So, it started slowing down quite some time ago. Also, as there are more and more people involved, the brainmass gets diluted. As of now there is no chance of getting a photo with a concentration of luminaries like those from the famous Solvay Conferences [3].

So, if you (like me) have read the Feynman Lectures on Physics and know the physics of the 1920s-1950s, you are likely to be disappointed by the current pace.

...and just compare that to recent progress in machine learning, where we can now play on our computers with things which, a few years ago, were thought to be out of our reach.

(But it shouldn't be surprising; technologies and sciences have their growth periods, and usually those periods are finite.)

[1] http://priceonomics.com/why-nobel-winning-scientists-are-get...

[2] https://woodtickquarterly.wordpress.com/2011/11/17/graham-fa... (BTW: I recommend this book a lot)

[3] https://en.wikipedia.org/wiki/Solvay_Conference

Do you think there's a limit on how much we can understand in physics/any field?

My reasoning on this is that since everyone has a finite lifespan, understanding/work will be lost as researchers retire, move to a different field, or die, unless this information is transmitted to someone else during their lifetime. However, as we understand more, we have to teach people more. This brings in two problems that I see:

- It takes longer for people to learn up to the frontier of science, which means they might be in their 30s or 40s, and that number may keep getting larger.
- We might be able to teach these things in the same amount of time, but the younger generation's overall understanding of these topics might be shallower, which makes it difficult for them to make certain connections, as they might not have a deep enough understanding to see them.

My worry is that eventually we will reach a stage where the amount of knowledge we have is so great that no one can feasibly master a field. We can subdivide the fields, but no one will know enough general knowledge to make the insightful connections that simplify our models, or something to that effect.

As an example: my fluid dynamics professor said that the fluid mechanics I course we take in second-year undergrad is what people worked on as their master's/PhD theses 100 years ago, whereas we don't even derive the formulas anymore now, simply because we do not have the time to do so.

The point of your comment is disproved by your last paragraph! The more science is discovered, the more it gets compacted into its essentials. If 100-year-old PhD-level fluid dynamics can be taught to undergrads, that means it got digested into the most useful part of the work.

Similarly, it's considered basic undergrad work to understand Fourier decomposition; but the notation and clarity from which we now get the insight are the result of two centuries of refinement since Fourier's 1807 paper. It was certainly PhD level at the time.

Certainly science is compacted and that's how it gets transmitted to younger generations. And remember that our brains get bigger, so maybe we'll never hit any "limit" to understanding? :)

My worry is that as we split knowledge out, no one will understand the big picture as they're always focused on their specialty of their specialty.

Since we are teaching people less in-depth material as they need to go very far into science, it seems like building knowledge on an unstable foundation. The lack of branching out may also inhibit discovering certain connections between fields that may prove to be enlightening.

Not to mention that lifespans get longer!

> It takes longer for people to learn up to the frontier of science

(1) It takes much less time to teach something, than was spent to figure things out the first time.

(2) When teaching improves, the essential points of a topic can be taught in less time than was spent when the topic was new. When a topic is new, it is usually first taught following the lines and chronology of how it was discovered. Later, improved teaching can break free of the chronology of the original story and just teach the plain facts.

While Darwin's finches are still sometimes mentioned when teaching evolution, the whole story of measuring all kinds of properties of those birds on different islands, presenting the data, and working out what we can deduce from it is rarely presented. We just jump to the main point: species adapt and evolve.

Arguably, teaching physics almost everywhere still follows the historical chronology: mechanics, electromagnetism, quantum mechanics, which usually pushes quantum mechanics to the third year of a university program. The mathematical machinery of quantum mechanics, however, shouldn't take two full years to master, so maybe there could be a faster route to quantum mechanics that doesn't try to cover all of classical physics first.

(3) I assume the success of future researchers is partly built on their having had access to good teachers or good textbooks, which made it possible for them to quickly absorb the present state of knowledge. But there are voices complaining that producing good teaching material is not incentivized in the present academic system.

I agree with your observations.

One of the biggest problems is fragmentation. People used to be "philosophers of natural science"; now they are experts in "implementations of quantum algorithms with cold ions". So it's unlikely to get so many good people in one place (as the number of fields is astonishingly high) or to get progress between/outside of fields (many fields are a bit historical/arbitrary), as many scientists (even eminent ones) lack even basic knowledge of other sciences (or even subfields). So things requiring a bigger picture may already be out of scope.

When it comes to switching fields or dying - did you read: http://blog.computationalcomplexity.org/2015/07/will-our-und... ?

I'm not a physics expert, but I think there may be a limit to what we can understand about the physical world, especially when we get down to such small scales. However, I don't think this is because of limited lifespans.

I've heard it said that string theory could never be scientifically tested in a traditional sense. People sometimes compare it to a religious philosophy because of this quality of untestability.

Maybe the logic of the universe at the lowest levels is hidden from us. If the universe is a computer simulation, then we almost certainly are not exposed to the underlying mechanisms of how it works at the lowest level.

The fluid in a fluid simulation doesn't have access to the knowledge of how the computer on which it is running functions. The fluid is just information. That information is manipulated by the running program, but the code of the simulation is not built into the information representing the fluid.

Maybe the matter in our universe exists in the same way, as information. Maybe the mechanics that drive the motion of the smallest particles are not exposed.

I'm sure I'm not the only person to speculate about this. People have probably written very scientific papers on this very topic. I honestly don't know, but it's what my programmer brain leads me to believe.

Is digital physics what you're getting at here? https://en.wikipedia.org/wiki/Digital_physics

I'm also reminded of Nick Bostrom's writings on the topic of simulated existence.

I do not disagree that things feel slower to some degree, but I'd also bet absolutely anything that 100 years from now people will look at our time and say "Can you believe they didn't understand X? There was so much low hanging fruit in 2015."

The issue is that what appears simple 100 years from now takes a significant shift in how we're currently thinking. The problems are complex and the prior knowledge needed to reach a breakthrough is always growing, but I really do believe it's self-defeating to look at 100 years ago and think they had it easy, as people will undoubtedly say the same thing about us 100 years from now.

I agree with you, but not with respect to physics. I believe it is very likely that in 100 years we will think "Can you believe they didn't understand X?" about things like psychology and all of the new cross-disciplinary fields that have emerged from it.

Economics is a good example as well, I believe. Up until the second half of the 20th century, many economists were not basing their theories on data but rather on the assumption that we are all "rational" actors. I will concede that the data was not really available until the last 50 years anyway, but the 'behavioral' movement has hopefully set economics as a whole on a 'straight' path, and no doubt the next 100 years will reveal the low-hanging fruit which now dangles above us.

How poetic.

EDIT - The economics example I gave was very sweeping. Just an example.

Your belief about economics indicates that you are part of a minority paradigm that has produced interesting data but not managed to transform economics yet. If it does succeed, then your hope for it will be perfectly justified. But most such movements do not succeed.

The problem is that success has to be so compelling that people working within different paradigms (economics has many) generally figure out that they are thinking about it wrong and need to restructure everything in terms of behavioral economics, rather than looking at behavioral economics as a low-order correction term to their own way of understanding things.

I think that my original comment wasn't specific enough and that we are in agreement. I do see it as a 'low order correction' but one that will (and has already) added a lot to the field.

Richard Thaler puts it much better than me when he says, roughly, that the best economics work of the last decade is by classical economists doing good empirical work. Which, at least in my understanding, is the real 'gift' of the 'behavioral' approach.

I think it's very unlikely that we will see in psychology or economics any kind of progress like what we saw in physics. In physics, anything new that we learn doesn't change the experiments at all; that is, we can't ignore physical laws just because we are aware of them.

But in psychology or economics, as soon as you learn something new, that new knowledge becomes a new variable to account for. For example, as soon as you are made aware of the Bystander effect [1], it is trivial to avoid it, but you can't really predict what will come of that. You can likewise find ways to game any economic prediction by being aware of it; I guess you can find stable models that sort of work until something like the 2008 financial crisis happens and they don't anymore.

So either the investigators stay out of the economy/society or they will mess with their own results by making them public.

[1] https://en.wikipedia.org/wiki/Bystander_effect

If you have time, Lee Smolin's book The Trouble with Physics is a captivating read.


Smolin makes a sociological-historical argument: from the '20s to the '70s, development in theoretical particle physics was driven by trusting "mathematical intuition". If the math was beautiful and predicted the existence of some particles, a bit later the experimental physicists usually found those particles. And the rewards and Nobel prizes went to those who did the math the fastest.

So the whole of theoretical physics adopted this style of work: when someone proposes something new that looks interesting, everyone tries to do the math as quickly as possible, to be the first to get the results.

But then, from the '70s to the '00s, this flocking attitude was applied to string theory. People developed string math furiously, and it went unnoticed that the theory was totally unrelated to any experiments.

So, Smolin suggests, for 30 years the best of theoretical physics went in a direction that may be totally separated from experiment. If this turns out to be true, theoretical physics pretty much lost 30 years.

It seems to me that ground-breaking discoveries in physics are generally getting more expensive.

Up till about the early 20th century, ground breaking physics was largely "two guys in a garage" territory -- single individuals such as Newton or Cavendish tinkering in their own private laboratories using fairly modest equipment. Teenagers replicate their experiments in school physics lessons with equipment costing no more than a few hundred pounds today.

Throughout the mid-twentieth century, ground-breaking discoveries were increasingly made by teams of researchers, which seem to have grown larger over time, with equipment that has become increasingly large and expensive, and sponsored by universities, companies and governments.

Nowadays it seems that most ground-breaking discoveries are made by large, national or multinational teams working with equipment costing billions of dollars and processing petabytes of data. I couldn't see two guys in a garage producing their own space telescope or particle accelerator any time soon.

Most physics isn't done with space telescopes or particle accelerators, and very few groups require billions of dollars of equipment (considering the NSF budget is less than 10 billion dollars, there wouldn't be much to go around). Theoretical physics in experimentally relevant fields is about as cheap as it's ever been; you really only need professors, a building, whiteboards, and travel funds.

And believe me, there are a lot of experimental physicists doing excellent work with two guys in a garage levels of equipment.

It's hard to read your question without thinking about how String Theory has dominated physics for that entire timeframe (and more)... and how ineffective that family of models has been at advancing humanity's understanding of the broader world.

If you believe Lee Smolin, then advocates of String Theory have also had an oppressive effect on opposing ideas, making it hard to get tenure if you're not working on it and effectively choking off competing theories (e.g. Doubly Special Relativity, Loop Quantum Gravity, etc.).

I have no data or first-hand evidence, as I was only a mere physics undergrad, but I found his arguments persuasive. String Theory (and its oscillating pals) has certainly been dominant in the scientific press, and doesn't seem to have come up with much. Could be a false negative, though. We'll only know when someone opens the next big door.

I am surprised no one has mentioned quantum information yet. Quantum information is the subfield of physics whose goal is to understand which information-processing tasks are possible or efficient in the quantum world that are not possible or efficient classically. Two such tasks are quantum computing and quantum key distribution.

This field was born in the early '80s and has seen steady progress since then. Various architectures for quantum computing have been proposed, and experimental control over each of them has steadily increased over the last few years. While we are not there yet, the sequence of results shows that we are rapidly approaching the fault-tolerance thresholds, after which it will be possible to build quantum computers.

Quantum key distribution is a simpler task and has already been achieved commercially. Now we are trying to increase the rates of transfer. We are also slowly relaxing experimental requirements: for instance, there are protocols where you don't have to trust that your devices were not tampered with by an adversary.

These are exciting times in quantum information. While important theoretical results were found in the '80s and '90s, the experimental momentum today far outpaces them.

Quantum information is indeed a new field. Yet I wouldn't compare it with the '20s or '70s. In the last 20 years there was hardly any paradigm shift in this area (don't be fooled by poor science reporting/popularizing, or by scientists overadvertising their results in search of fame and grants).

And for example, when I was attending conferences in quantum information (my PhD field), people were constantly lowering their expectations and making their predictions more humble; more and more it was "10 years ago we said it would take 10 years; now we can say the same".

I don't claim that there is no progress, just that it is depressingly slow compared to the frontier fields of science and engineering. (If it were fast, believe me, I would have stayed in physics.) Of course, the future may be different, but who knows...

Compare it with e.g. DNA sequencing, where costs went down quite a few orders of magnitude in the last decade. Or image recognition, where in the last decade things thought to be extremely hard (because of the problems of the '70s), like face recognition and image recognition, have become standard techniques.

I myself am getting a PhD in quantum information. I completely agree that the timeline for quantum computing was extremely optimistic 20 years ago. However, I think current estimates are much closer to the truth. We now have a much better understanding of the systems on which we are building QCs. Estimates of fault-tolerance thresholds have become much better. We have 20 years of experimental progress from which to extrapolate when we expect to hit these thresholds.

I agree that progress in the '20s and '70s was much faster, but they were grabbing low-hanging fruit. Building even a 'bad' quantum computer is a task several orders of magnitude more difficult than building, say, a 486 processor. The degree of experimental control required is far greater. What I see looking back is steady progress towards greater experimental control in multiple systems, and very frequent, steady achievement of milestones.

What you forget when you compare quantum computing with DNA sequencing or image recognition is that quantum computing sits at the bottom of the current theoretical paradigm. When you do DNA sequencing, the science of your instruments (e.g. centrifuges) is not suspect. In QC everything is suspect: your system, your detection mechanisms, your control systems. If you normalize every field by its fundamental difficulty, you will find that quantum information is keeping up with other fields.

I would argue that this difficulty is no more fundamental than the difficulty Edison faced in producing light bulbs.

The thing is, I do not want to normalize a field by its difficulty; then progress would just be a function of the effective number of smart people working on it.

I was raised in a scientific culture (Central/Eastern Europe) where difficulty was a virtue, not impact. But that way produces a small number (because they are difficult!) of difficult results, with little impact.

Likewise, the most progress happens where there is low-hanging fruit.

Don't forget the new field of superconducting quantum annealers! I have high hopes.

It is the best of times and the worst of times.

In fundamental physics, the LHC is online, neutrino physics is hot, and lots is going on. There are now a huge number of quantum theories of black holes, but no way in sight to prove anything about them.

The dark matter problem is a huge "anomaly" left to solve so there are still mountains to climb.

In terms of practical stuff there is lots of physics in how you build a 7nm microchip. Physicists collaborate a lot with "nanotechnology" people and biologists. For instance my thesis advisor worked with experimentalists who were stretching DNA with tweezers and figured out how the AIDS virus self-assembles.

Even the "dead" area of chaos theory is looking much better now that people at NASA have made a map of the earth-moon phase space which can give a km/sec or so free propulsion.

   Even the "dead" area of chaos theory is looking much 
   better now that people at NASA have made a map of the   
   earth-moon phase space which can give a km/sec or so    
   free propulsion.
This is fascinating - how can I learn more?

This all depends on your frame of reference.

> Looking back over last 20 years, technology (esp. related to computers) has made extraordinary leaps forward and the pace is accelerating.

Largely hardware advances that are heavily interrelated with physics. Even so, these have been incremental (though still impressive) production advances, not radical architectural redesigns. Intel only got bounds checking in hardware (MPX extensions) a couple of years ago, even though it was first done over half a century ago.

Software hasn't.

Yeah, as per some of the discussion of another recent post, one of the big things that has happened over the past 20-30 years in computing is that we discovered a technology for use in digital logic circuits that has been amenable to incredible process shrinks. Certainly there's been extraordinary engineering that's gone into that shrinkage coupled with no small amount of semiconductor physics. But, for whatever other advances have been made in computing since 1980 or so, an awful lot comes back to CMOS.

This depends on your definition of "advances in physics".

"New physics", physics that we don't already have an explanation for, is becoming increasingly rare. Our definition of "new physics" is also expanding to include things that aren't fundamentally new, just small holes in our understanding of the equations.

The flip side of this is that while we already know the broad strokes, there is still a lot of work to be done in filling in the details. This work is just less glamorous and doesn't make the headlines.

> physics that we don't already have an explanation for, is becoming increasingly rare

We don't understand why galaxies rotate the way they do, as the visible mass does not correspond to the rotation according to known laws of gravity. The solution is to postulate dark matter and dark energy, until things match again.
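To see the discrepancy concretely, here is a rough sketch (illustrative numbers, not real astronomical data) of what Newtonian gravity predicts for orbital speeds from the visible mass alone:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_visible = 1e41       # illustrative visible mass of a galaxy's core, kg
kpc = 3.086e19         # metres per kiloparsec

def v_keplerian(r):
    """Circular orbital speed if essentially all mass sits inside radius r."""
    return math.sqrt(G * M_visible / r)

# Far from the luminous core, Newtonian gravity predicts v ~ 1/sqrt(r):
v_near = v_keplerian(5 * kpc)
v_far = v_keplerian(20 * kpc)
print(v_near / v_far)  # ≈ 2.0: speed should halve when radius quadruples

# Measured rotation curves instead stay roughly flat out to large radii.
# Dark matter (extra unseen mass in an extended halo) is the postulate
# that makes the prediction match the observation again.
```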

(Young) experimental particle physicist here. While progress in particle physics has slowed since the '70s, we've still learned quite a bit in the last 20 years. For example, we've observed the top quark, the tau neutrino and the Higgs boson. We have also learned a lot about neutrinos, such as that they have mass and oscillate. There has also been amazing progress in detectors, although that's mostly behind the scenes.

Other fields of physics have had much more interesting discoveries though (e.g. graphene).

If you look at quantum computing [1], which is very much a part of physics, I would claim that progress there is accelerating. It's a field that is asking a lot of deep questions about reality and also spurring a lot of technological innovation.

[1] https://en.wikipedia.org/wiki/Timeline_of_quantum_computing

Well, to be fair, lots of computer technology advances were associated with advances in applied physics. Die sizes, MEMS, battery technology, etc.

Maybe you mean more theoretical physics? I don't really have enough knowledge to be useful there.

As a physicist for the last 20 years, I suppose I am partly to blame... ;)

I would say that it depends on what you call physics. The areas of physics that were symbolic of advancement in the previous 20 years will not necessarily be the same areas where advances will be in the next 20 years. During the 1800s, advances in physics in the first half of the century led to technological advances in steam power and electricity in the second half. If you thought of physics as meaning electromagnetism and thermodynamics, you might think that there were few advances in physics in the first half of the 20th century. And there were people that felt that way! But I think that nowadays we would think that Einstein's heyday was an era of major physics advances. So, maybe you shouldn't look at those areas that are reaching the plateau of their sigmoid curve, but at newer areas.

Classic particle physics, exemplified by the likes of Chadwick and Lawrence way back in the 1930s, leading to the explosion of the particle zoo in the 1950s, and then the diminishing returns of the LHC era, would be a good example of sigmoid curve development in Physics. If physics only means this stuff to you, then yes, it is going slower than it was.

Areas of physics that have been closer to the high slope region of their development in the last 20 years:

Quantum Information Theory (as noted by @abdullahkhalids)

Dark Matter/Energy/whatevertheheckitis

Medical Physics (from lead block linacs and xrays to IMRT/VMAT & modern imaging)

Materials Science/'Condensed Matter' physics

Black Hole science, esp. thermodynamics

Gravitational waves, or the lack thereof (interesting either way)

Techniques for signal/data processing and analysis (such as superresolution or single-detector imaging)

I'm sure there's more that I'm not aware of. Anything that is really very new is too small to get much press right now. Pretty much by definition, the new small stuff won't have the big press budget of CERN or NASA.

Physics has scaled out. There's a bunch of people working on a bunch of projects with, for the most part, narrowly defined goals. It's really hard to understand the implications of many of these projects when they succeed, let alone report on them.

Government spending in the sciences has been extremely erratic over the last 5 years. The NSF is a little more stable, but the NSF doesn't fund much of anything over 100M, which would be a relatively small experiment when spread out over 4-5 years.

I'm not sure there's been any acceleration of discovery in the hard sciences.

The last 30-40 years have seen tech advancements roughly in line with Moore's law, which is fantastic. But that's the result of engineering advancements. There's no precedent I'm aware of to expect the same results out of the hard sciences.

I'd disagree. New tools for biologists in the last 20 years have strongly accelerated the field.

The trouble in Physics is that the new tools look like the LHC.

String theory often gets ragged on for not having any direct practical applications, which is certainly true. However, some of the mathematics developed by string theory is key in theoretical work in fields where theory and experiment have a much tighter bond. For example, topological quantum field theory[1] has found widespread important applications in quantum information, and it was pioneered by none other than Witten himself. This isn't just on the theory side; experimentalists are looking at topological states of matter for a variety of applications, including quantum computing.

[1]: https://en.wikipedia.org/wiki/Topological_quantum_field_theo...

I really like these diagrams, as an answer to your question: http://matt.might.net/articles/phd-school-in-pictures/

You can't predict breakthroughs. A scientist might spend their entire career following their research to a dead end. That's not a failure, in my opinion. It just shows how much we already know, and I think we should honour those scientists just as much as the lucky ones.

The trouble with that diagram is that it implies that everything after that initial boundary traversal is a completely new and original contribution to human knowledge. Sadly this is not the case.

While progress may seem slower, we have a lot more people working on these unsolved problems. Also there is the fusion of abstract math and physics, which creates thousands of 'physicists' out of unwitting mathematicians. Due to the physical limitations of experiments, most forthcoming progress in physics will be purely abstract. But just because we can't test some of these theories doesn't mean we should disregard them, provided there are as few logical inconsistencies as possible.

Has software technology advanced much in the last 20 years? It has become a lot bigger, yes, and now we have GC or type inference if you're lucky. But I'd be just as afraid of writing a mission-critical piece of software now as I would have been in the '80s. Also, things like Heartbleed happened. In terms of quality and correctness, the progress of the software industry has been disappointing to me.

I've got to say I'm rather amused by the notion that type inference and GC are innovations of the last 20 years.

Regarding now vs. the 80s, we have much better tools for making assurance cases for critical software. So much progress has been made in both specification correctness and implementation correctness that I think the only way you could compare the two and say there hasn't been improvement is if you haven't tried to see what we can actually do now vs. then...
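As one concrete example of those newer implementation-correctness tools, here is a minimal pure-Python sketch of property-based testing (the idea behind tools like QuickCheck and Hypothesis), which barely existed in the '80s: instead of hand-picking test cases, you state a property and throw randomized inputs at it.

```python
import random

def property_holds(sort_fn, trials=1000, seed=0):
    """Randomized check of two properties any correct sort must satisfy:
    the output is ordered, and it is a permutation of the input."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        ys = sort_fn(list(xs))
        if any(a > b for a, b in zip(ys, ys[1:])):
            return False, xs          # counterexample: not ordered
        if sorted(xs) != sorted(ys):
            return False, xs          # counterexample: not a permutation
    return True, None

ok, _ = property_holds(sorted)
print(ok)  # True

# A buggy "sort" that silently drops duplicates is caught automatically:
ok, counterexample = property_holds(lambda xs: sorted(set(xs)))
print(ok)  # False
```

Real tools add shrinking (minimizing the counterexample) and smarter input generation, but the core idea fits in a dozen lines.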

I did a BS in physics in the early 1990s, but I haven't kept up with current research.

Can someone recommend some good reading to catch up on what's been happening since then? I'd love a decent book about qm or cosmology that's not quite a textbook, but also takes more than a simplified approach aimed at people with no background in the field.

A good way to look at this objectively is to go to scholar.google.com, type in "physics" in the search bar, and limit your results to the past 10-20 years. You will have to filter through the books and survey articles (get a couple pages in), but you should get a picture of the more important articles in the past 20 years.

Not a physicist or a scientist, but I do follow the news, and I think there's a lot happening 'under the surface' with people working on mathematical foundations (category theory, complexity theory, homotopy type theory) that are going to surface into physics in unexpected ways in the fairly near future.

I just finished a PhD in Transformation Optics and Metamaterials. Metamaterials are a real novel breakthrough in physics which have wide ranging applications for antennas and electromagnetic materials. Not to mention the recent confirmation of the Higgs Boson. Physics is moving along at a similar pace.

As someone dipping into physics very late after a long journey on mathematics, this thread is both depressing and terrifying yet at the same time strangely motivating.

I think there are some big questions without answers still. I want to have a bash at them.

Lex III: Actioni contrariam semper et æqualem esse reactionem: sive corporum duorum actiones in se mutuo semper esse æquales et in partes contrarias dirigi. ("To every action there is always opposed an equal reaction: the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.")

Only if something is moving backward.

That depends: how do you quantify progress, or the rate of progress, in physics? Comparing directly to technology, Moore's law seems to indicate an accelerating pace. On the other hand, looking back at progress in computer games in the '90s, when you could expect a never-before-seen breakthrough each year, it seems that computers have hit a point of diminishing returns, and the more meaningful metric is probably something like the log of computing power. So quantifying the rate of progress is not well defined, and one can argue that the rate of progress has slowed down even for computing power. The counterexample would be digital video, where there was very little progress for the average user until DivX; since the early 2000s we have gone from cut scenes at 320x240 to YouTube and 4K video.
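The "log of computing power" point can be made concrete: exponential growth is a straight line on a log scale, so the very same history reads as explosive or as perfectly steady depending on the metric. A small sketch, assuming an illustrative doubling every two years:

```python
import math

# Moore's-law-style growth: transistor count doubling every ~2 years,
# sampled every 4 years over two decades.
years = range(0, 21, 4)
counts = [1e6 * 2 ** (y / 2) for y in years]

# Measured linearly, the gains look ever more dramatic:
deltas = [b - a for a, b in zip(counts, counts[1:])]
print(all(b > a for a, b in zip(deltas, deltas[1:])))  # True: each step dwarfs the last

# Measured logarithmically, the "rate of progress" is perfectly constant,
# which is one defensible sense in which even computing hasn't accelerated:
log_deltas = [math.log2(b) - math.log2(a) for a, b in zip(counts, counts[1:])]
print(all(abs(d - 2.0) < 1e-9 for d in log_deltas))  # True: 2 doublings per step
```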

Putting the measurement problems aside, progress in physics seems to be a lot less smooth, and the big jump occurred in the first three decades of the last century with the discovery of special relativity and quantum mechanics, plus the ongoing project of formalizing physics. That was a complete paradigm shift towards mathematical models and towards an entirely different picture of reality. Since then, the development of quantum field theories has basically been the same trick as for quantum mechanics. (Not trying to belittle the development of QFT, which is one of the monumental achievements of the human mind; it just pales in comparison to the development of QM.) So, in this view, the next big jump may be just around the corner, or may not be possible for a human brain, but we will only know the answer after progress has happened. (I should cite one of the famous philosophers of science here; unfortunately I forget which one.)

As an example, string theory is currently "not even wrong," because we cannot build the experiments that would enable us to test it. However, a lot of brain power and ink has been expended on its development over the last thirty years, and we simply do not have a good idea whether it was worthwhile. If someone suggests an experiment that can distinguish string theory from other models of quantum gravity, and string theory passes this test, then it was probably worth all that effort.

In conclusion, I would argue that the question is ill defined and furthermore runs into epistemological problems: even if we found a good definition, we could not really know the answer. However, I am actually quite optimistic that a breakthrough is just around the corner. For example, I think that the connection between information theory and physics is not really understood, but concepts like entropy and information seem to crop up everywhere one looks.
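To illustrate how directly the formulas line up, here is a minimal sketch of Shannon entropy, which is identical in form (up to constants, and with probabilities over microstates) to the Gibbs entropy of statistical mechanics:

```python
import math

def shannon_entropy(probs):
    """H = -sum p*log2(p), in bits.  Compare Gibbs entropy in
    statistical mechanics: S = -k_B * sum p*ln(p) over microstates."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))              # 1.0: a fair coin carries one bit
print(shannon_entropy([0.25] * 4))              # 2.0: four equal outcomes, two bits
print(shannon_entropy([1.0]) == 0)              # True: certainty carries no information
```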

There are a number of unexplained phenomena, easy to forget about but nevertheless essentially waiting on "new physics" to model.

Maybe physics is in a (relative) plateau now, just waiting for the next big breakthrough, e.g. the ability to manipulate dark matter.

Nobody here is going to like this comment but I recommend you look into Electric Universe theory and Plasma Cosmology.

It all sounds rather pretty, but I'm not aware of a single testable hypothesis associated with it, or at least a testable hypothesis that anyone is willing to put forward and have immediately demolished. The Rosetta mission should have turned up some intense magnetic field readings if the Electric Universe theory were true... but it didn't. Then again, the only people I know who espouse the theory can't begin to explain what electromagnetic fields are anyway.

Care to explain why exactly you think this stuff is worth considering? I have to admit, I'd never heard of either before now, but a quick reading of the Wikipedia page on Plasma Cosmology does not leave me with much enthusiasm for the topic.

I looked and Wikipedia has "As of 2015, the vast majority of researchers openly reject plasma cosmology because it does not match modern observations".
