The biggest caveat is mentioned in the introduction of the ApJL paper [0]:
> Because SMBH growth via accretion is expected to be insignificant in red-sequence ellipticals, and because galaxy–galaxy mergers should not on average increase SMBH mass relative to stellar mass, this preferential increase in SMBH mass is challenging to explain via standard galaxy assembly pathways (Farrah et al. 2023, Section 5).
I think there are several observational effects that may obfuscate the interpretation -- but I also haven't read these papers in great detail.
First, there is a known age-metallicity-dust degeneracy that can make dusty star forming galaxies look more like red elliptical galaxies. This can bias estimates of the star formation and mass accretion history -- e.g. perhaps supermassive black holes have had more recent growth. Second, galaxies in more overdense regions may harbor faster growing supermassive black holes, and also be more predisposed to later merging and forming an elliptical galaxy. This seems likely true around cosmic noon (z~1-2), when star formation and supermassive black hole accretion activity were at their highest throughout cosmic history.
I also think the estimated growth was too large to be explained by such systematic errors.
On the other hand, we do not have models of galaxy evolution based on solving the equations of General Relativity. The usual assumption is that Newtonian gravity with minimal relativistic corrections should be enough, but there is no proof that this is so.
Moreover, relatively recent papers have argued that better accounting for General Relativity could be enough to explain galaxy rotation curves without any notion of Dark Matter, and that the need for Dark Matter was simply an artifact of the assumption that Newtonian gravity can be used at the galactic scale.
Then there are speculations that electromagnetic forces play a role at the galactic scale, affecting the rate at which galaxies evolve.
So it may be that the observed discrepancy in the growth of black holes is caused by holes in galaxy evolution models, not by the proposed new effect.
So the increasing mass of black holes ... drives galaxies apart?
Since SMBHs clearly act as massive gravity sinks at close range, does this mean that black holes "take" vacuum energy, increasing their own mass and then stretching out spacetime? Does dark energy get distributed evenly to black holes per unit mass, or do heavier black holes get exponentially larger servings? I have so many uninformed questions!
It seems more and more like some kind of scaling patch as the simulation gets larger and more out of hand. Here's hoping that the next patch is far away, and that if it does happen, it doesn't break production.
How the patch notes might have looked:
* Oops singularity bug happened, wrapping a black hole around it. Clever fix, they'll never suspect anything. If they did, they can't see through it anyways.
* This black hole feature is working very pleasantly; putting black hole seeds in the centers of galaxies so they can grow faster.
* Looks like the universe will collapse on itself soon with the added mass; quick fix to add a space time stretch around the black holes so it won't. Will have to think of a long-term solution #TODO
... 3893 commits later
* who did the black hole space time stretch? the galaxies are flying apart and heat death is upon the simulation! Reopening #TODO.
In the 17th century people thought the universe was a kind of clockwork because they were so impressed with the precision of those early machines.
Today people think the universe is a kind of simulation because they’re so impressed with how algorithms tinged with randomness can tickle their human sensibilities and create illusions of meaning.
That's an unfair take. Yes, people have always used the most interesting or complex phenomena they knew as metaphors to explain the world, starting with animals and ending with computers. But it's not about being impressed; it's about finding similarities. And most importantly, bodily fluids != clockworks != systems of pipes != computers.
There's a qualitative jump we've made in the last ~150 years. The tight feedback loop between math, natural sciences and engineering, which was earnestly established some centuries earlier, finally picked up speed. We're no longer "impressed" by a deer or a clockwork and saying the world must work like it. We're not imagining the universe to be like something we know - we're mapping alike concepts using precise, formal, well-tested reasoning. We're applying models to the world, and we know exactly how much fidelity they have. We chose those models to be useful, not evocative.
In short, back then we were doing artistic impressions of a landscape. Nowadays, we're drawing proper maps[0].
There's a popular meme that ~150 years ago, physicists thought they had it all neatly figured out, and all that's left to do is to make numbers accurate in far decimal places - and then they stumbled on relativity, nuclear physics and quantum physics, turning everything upside down. The meme is inaccurate, and its implication - that we still don't know shit - isn't particularly convincing to me. Those new fields didn't replace our understanding of the world - they enriched it, solidified it, filled in holes. We have a more complete picture now, especially of the fundamentals. We may not have solved quantum gravity, we may not know if and what dark matter is, etc. - but we know enough to put bounds on the possible consequences and possible surprises[1].
The point I'm making is that, when we now say that e.g. the brain is a computer, it's not the same thing as people 500 years ago saying the brain is like a clockwork. We're not vaguely hinting at similarities - we're applying a specific, precise model. A model that makes concrete testable predictions. A model that can be studied to yield more understanding. A model that's tied to what we now recognize as fundamental - computation isn't some gears and belts trick, it's one of the most basic and impactful ideas in mathematics.
It's similar with the "simulation argument". Do we live in a simulation? Who knows? We're not sure if we could tell (unless it's a really hacky one) or if it would matter much. Is it possible for our universe to be a computer simulation? Probably. We know enough about physics, biology and information theory to have a justified belief it can be done, especially if it's designed around players. Decades of videogame development experience tells us how to do it; mathematics and natural sciences give it the green light and say it's an engineering problem.
And as a final point to this little rant: when I say that "we know" something today, it's not the same kind of knowing as we had 500 years ago, or even 200 years ago. Mathematics and natural sciences are thoroughly interwoven. The parts that we are sure of are all mutually reinforcing - if we're wrong about any one of them, it would mean we're wrong about most of the rest, across many fields and disciplines. 200 years ago, that might have been possible. Today? We've built so many technologies and processes based on our scientific models that everything we do today, every second of every single person's life on this planet, is its own experiment confirming that our models are a good fit.
--
[0] - Comparing maps from 500 years ago to maps today is a good exercise. The difference isn't just in accuracy - it's qualitative, as we now have a deep theoretical understanding of what maps are and how to make them, capacity to make them arbitrarily good for desired purposes, and experience in putting them to actual use.
[1] - This is, sadly, what makes me very pessimistic about faster-than-light travel and certain other sci-fi dreams. We may not know enough to rule those out directly just yet, but what we do know surrounds the problem space tightly and lets us rule them out via indirect proof.
The problem with the simulation idea is one of compute. Accurately simulating the quantum interactions between 30 particles is beyond our best supercomputers [1]. Imagine what it would take to compute a cell in all its quantum glory. Even with quantum computers, I certainly can't conceive of a computer with enough qubits to approximate how many qubits the universe itself contains. I just don't think there is anything in our experience that would lead us to believe the universe we live in is able to be simulated. Maybe it's conceivable if you took a bajillion shortcuts with the physics, but then why the hell is quantum mechanics even a thing?
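The scaling behind that is easy to make concrete. A back-of-envelope sketch (the dense complex128 state-vector representation and the byte counts are my assumptions, not anything from [1]):

```python
# An n-qubit pure state is a vector of 2**n complex amplitudes. At 16 bytes
# per complex128 amplitude, the memory needed doubles with every qubit added.

def state_vector_bytes(n_qubits: int) -> float:
    """Bytes needed to store a dense 2**n complex128 state vector."""
    return float(2 ** n_qubits) * 16

for n in (30, 50, 300):
    print(f"{n:4d} qubits -> {state_vector_bytes(n):.2e} bytes")

# 30 qubits  -> ~1.7e+10 bytes (~17 GB, feasible today)
# 50 qubits  -> ~1.8e+16 bytes (~18 PB, beyond any single machine)
# 300 qubits -> ~3.3e+91 bytes (far more than atoms in the observable universe)
```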
Saying "the brain is a computer" yields, among other things, these predictions:
- That everything that makes you you and that makes you tick is located in your body, with the higher-order things like personality and memories being located in the brain;
- In particular, it's not located somewhere else outside the body, especially not in some metaphysical realm;
- That there is structure to the brain - patterns in arrangement of physical components and their transient states (including electrical and chemical signals) - that contains the entirety of you; that structure can be studied, and over time understood to a degree;
- That once we know enough about this structure, we can reach into the brain from outside to poke it with various implements, chemicals, beams and fields, and achieve changes we predicted will happen;
- That we can use insights from information theory, and even other areas of computer science and computer engineering, to better understand and predict the structure of the brain;
- That we can replicate various substructures of the brain in a simulation, and that such replicas will be useful to study the real brains, and also could be incorporated as components into computer systems we build ourselves;
- That artificial brains are possible to build;
- That artificial brains are possible to simulate on computers we build;
Some of those have already panned out. Some are used with great success in medicine. Others may take a while to verify.
Now, I recognize that some of the points I listed invite questions along the line of, "is that a prediction specific to 'brain is a computer', or would it also be a prediction of 'brain is an X' for many other X-es?". I believe the answer to that is "both", because anything you could substitute for 'X' that would yield similar predictions can itself be viewed as a computer! Computing systems aren't just next iteration of clockworks - computation is a new framework for interpreting the reality around us and making novel, precise predictions about it.
6000 years ago they had dudes (whole tribes, actually!) who practiced ritual trepanation for reasons unknown to us.
This "mapping the human brain" is no different and no more scientific.
The human brain is an information processing machine, and the science of information and its structure was discovered less than 100 years ago.
We don't even have a name yet for this science!
And our baby steps in this scientific field are nowhere close to beginning to study the brain. To use an analogy, we discovered how to make paper airplanes, but that gets us no closer to building a supersonic jetliner.
> 6000 years ago they had dudes (whole tribes, actually!) who practiced ritual trepanation for reasons unknown to us.
I'd thought it was established it was done for medical reasons - people realized that it sometimes helps with conditions nothing else helps with. Arguably, most of the things humanity did until the last few hundred years were based on empirical correlations and stories to help remember them (but lacking predictive power). It's only recently that we've developed proper theoretical understanding of most things we do, in the form of specific, tested theories with lots of predictive power.
> This "mapping the human brain" is no different and no more scientific.
It is different because nowadays we do have proper science - and more importantly, we know what proper science looks like. So even if we still know very little, we at least know what can and what cannot be done with that knowledge.
> (...) the science of information and its structure was discovered less than 100 years ago.
> We don't even have a name yet for this science!
Isn't that just "information theory"?
> To use an analogy, we discovered how to make paper airplanes, but that gets us no closer to building a supersonic jetliner.
IMO that would be a good analogy for the clockwork age. Today, we not only know how to make paper airplanes but, more importantly, we can imagine a supersonic jetliner being a thing, we have justified confidence that there's a path from here to being able to build one, and we know that studying the phenomena behind the flight of a paper airplane is an important step towards building a supersonic jetliner.
We do know the components of the brain: "hippocampus", "prefrontal cortex", etc. The AIBS is studying how these regions map down to the individual elements (neurons, peptides, chemical transporters etc).
You're discounting a significant amount of experimental science that is really going on right now. I don't understand why.
That's like saying we know the components of an airplane - there is the "hard shiny bit", "the bottom rubbery bits" and the "rotator thingys".
Decomposing the brain into groups of biological clumps of cells tells us literally nothing about how the brain as an information system works.
When we want to study the brain, we do so because we want to understand its information structure and build information processing models. The biochemistry of the wetware is irrelevant, except insofar as it might help us formulate an "information science".
Here we are no closer to the goal than 6000 years ago.
But we do. Check out the research into how our visual system works, or how spatial orientation works in mice. There's been quite a lot of puzzle pieces identified over the last decades. For some of those, we have good theoretical models that can make testable predictions.
We're far from having all the puzzle pieces, and even further from fitting them together. But what we already have and the progress that's happening are both reassuring.
I don't agree.
We have some very interesting ad-hoc empirical observations, but no theory.
We're not even in "alchemy" territory here yet, we're still in the "randomly mix colored rocks and see what comes about" phase, to use an analogy from how chemistry developed.
But we don't know which kind of closer that is: closer the way Worcester is closer to Los Angeles than Boston is, or closer the way Worcester is closer to Springfield than Boston is.
I think simulation is a hamfisted word for the modern take on something that people have been considering for a long time. The idea that the universe could be "virtual" and that conscious experience is more "real" than physical phenomena has been actively discussed in philosophical literature since the 16th century, and it was certainly a topic of conversation long before that.
I think there's a fundamental difference, and two different subjects, here.
On a universe level, it's impossible to tell if the entire thing is just a simulation.
However, I don't think our conscious experience is any different from a physics event happening elsewhere in our universe. If the universe is indeed simulated, we just happened to evolve naturally within that simulation, and there is no special need to simulate our consciousness specifically - that's a little narcissistic to think about.
Totally agree; the people who think we're living in the matrix, all atomic minds plugged into a made-up reality, haven't thought about the problem enough.
That's exactly why we're tired of simulationism. Short of the simulation runners talking to us, it's impossible in principle to take the least step toward figuring out if it's true or false.
I mean, not exactly... It depends on the type of simulation we could be in. If we're in the type that is monitored, where any attempt to mess with the simulation just gets rolled back to a previous state, then yes, it's impossible. If instead it's watched about as well as some of us monitor our own VMs, well, maybe it's a different story. Of course, we're still a long way from many things in physics, like a theory of everything, so no need to worry about being in a simulation at this time.
> People just can't let go of wanting some kind of God
The reverse is also true. While our inability to determine why the laws of physics are what they are is not evidence that there must be some reason for them, one can conjecture a causal relation with some kind of entity that affected them -- whether that be God, or the playful intern who happened to create UniverseSandbox#9971.
Have you considered that fun is not always without side effects? The same machine that came up with physics, calculus and evolution came up with God and the MCU. God and the MCU are fun ideas too but just look at how much damage they've done to the world.
I'm surely missing something in the paper, but I'm having a hard time seeing how the fact that black holes gain mass in a way that is seemingly coupled with the rate of expansion of the universe means that it's driving the expansion - couldn't the universe just be expanding, giving the black holes the energy they would need to get this large?
Unfortunately the article seems to gloss over that point with just one sentence:
> The conclusion is profound: Croker and Weiner had already shown that if k is 3, then all black holes in the universe collectively contribute a nearly constant dark energy density, just like measurements of dark energy suggest.
That sentence contains a link to a paper[1] that's highly technical and beyond my understanding of physics, but concludes:
> A population of such stellar-collapse remnants can shift in energy $\propto a^3$ while diluting in number density $\propto 1/a^3$. The population-averaged energy density is then effectively constant and readily produces the observed $\Omega_\Lambda$.
which matches the claim in the article.
But to understand why, I guess we'd have to learn general relativity to understand the math in that paper.
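For what it's worth, the arithmetic of why $k=3$ is special can be sketched without the full GR machinery (this is my own paraphrase of the quoted conclusion, not the paper's derivation). If each black hole's mass grows with the scale factor $a$ as a power $k$, while the population's number density dilutes with volume, then

$$
m(a) = m_i \left(\frac{a}{a_i}\right)^{k}, \qquad
n(a) = n_i \left(\frac{a_i}{a}\right)^{3}
\quad\Rightarrow\quad
\rho(a) = n(a)\,m(a) = n_i\, m_i \left(\frac{a}{a_i}\right)^{k-3},
$$

which is constant exactly when $k = 3$: the growth in mass per object cancels the dilution in number, letting the population mimic a cosmological constant. Why the stress-energy inside the holes couples that way is the part that needs the GR.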
From a quick scan, it seems the theoretical basis for the work is revisiting how one stitches together relativistic descriptions of compact objects with cosmological solutions. This cannot be done exactly; some approximation is required. The authors revisit the perturbation theory that was used in classic papers and argue for an alternative approach that effectively brings in additional "physics". In their first paper [0] they suggest that binary neutron stars, being relativistic compact objects, should also exhibit "cosmological coupling". Not clear where this stands.
At first glance this looks like it would suggest that black hole formation is directly related to the cosmological constant, which suggests that as the universe expands and black hole formation (eventually) slows down because the density of regular matter decreases, the whole thing should reach an equilibrium? Any cosmology-literate physicists here want to comment?
Ordinarily I think of a black hole as being a pretty ordinary gravity source when you are far away from it. If our sun got turned into a black hole by some non-violent process, the planets would keep orbiting it the same way. How it contributes to dark energy is beyond me (like... nothing is supposed to leave a black hole!)
It reminds me of a 1970s sci-fi book where the bad guys were trying to prevent the universe from collapsing (people thought the universe was closed, not flat, back then) and managed to momentarily reduce the mass of the universe by manipulating a black hole, so their evil computer could live forever in an eternally expanding universe. The good guys countered this plan with a ship that traveled close to the speed of light, increasing its mass and causing the universe to collapse. On top of that, they used a black hole to create a time loop causing the events that made the ship leave.
Needless to say, none of the above is expected to work.
I kind of dislike cosmological / astronomical science. It's strictly non-interacting: you're predicting a function with only outputs and no inputs. Sure, the number of outputs is large. But without any inputs, it's just a funny compression competition.
So long as we can't interact with it, I don't think there's any way to distinguish between ideas. Note that the only reason Newton managed to surpass his predecessors was his insight that whatever it is that's pulling the stars is the same thing we have locally. Without having something to play with locally, we'd never get anywhere. Sure, you might realize that somehow the epicycles are actually ellipses, or find this or that correction. Still, you'll get nowhere without something interactive and local.
I read the article, but not the paper. The article's quite popsci, I think.
No doubt I misunderstood, but it sounds like circular reasoning:
* Cosmic expansion of spacetime causes expansion of black holes
* An expanded black hole must have greater mass
* Increasing mass of black holes results in cosmic expansion of spacetime
I had also understood that cosmic expansion affects the empty space between galaxies, and doesn't affect concentrations of mass like galaxies and black holes. IOW, expansion causes galaxies to move further apart, but not stars in galaxies. If that's right (I assume it isn't), then cosmic expansion shouldn't be able to cause a black hole to expand.
> Einstein’s equations, however, give no prescription for converting the actual, position-dependent, distribution of stress-energy observed at late times into a position-independent source. Croker & Weiner (2019) resolved this averaging ambiguity, showing how the Einstein–Hilbert action gives the necessary relation between the actual distribution of stress-energy and the source for the RW model.
> A consequence of this result is that relativistic material, located anywhere, can become cosmologically coupled to the expansion rate. This has implications for singularity-free BH models, such as those with vacuum energy interiors. The stress-energy within BHs like these, and therefore their gravitating mass, can vary in time with the expansion rate. The effect is analogous to the cosmological photon redshift, but generalized to timelike trajectories.
So, yes, two different things are presupposed through refined cosmological models:
1) singularity-free black holes limited by vacuum energy and
2) relativistic mass becoming cosmologically coupled to the expansion rate of the universe.
In that case BHs can, so to speak, be "red-shifted" by expanding (empty) space via vacuum energy and gain mass (=> stellar remnant k=3 BHs are the astrophysical origin of the late-time accelerating expansion of the universe).
One thing to keep in mind, GR as geometrical modeling (field equations) of the universe is at its heart reciprocal. Wheeler[1] captured this succinctly in the now-famous statement: Space-time tells matter how to move; matter tells space-time how to curve.
This seems like a huge deal if it pans out? Any astrophysicists around who can tell how solid this evidence looks? Is it the sort of thing that could easily go away with more data?
> What that means, though, is not that other people haven’t proposed sources for dark energy, but this is the first observational paper where we’re not adding anything new to the universe as a source for dark energy: Black holes in Einstein’s theory of gravity are the dark energy.”
I beg to differ. This man has developed a theory of black holes without singularities, called plugstars, with a maximum gravitational redshift factor of 3, just like in the OP article.
I've never heard of Petit or plugstars, but a quick look at the linked document shows that he expects light emitted from them to be redshifted by a factor 3.
This is utterly unrelated to the k factor in Eq. 1 of
Wait so does energy have mass? Assuming the total amount of mass in the universe is the same, and black holes absorb mass and then slowly evaporate into energy with Hawking radiation, does the total mass in the universe keep changing?
E=mc^2, so they are interchangeable. Nuclear bombs work because the reaction products have a slightly smaller mass than the matter before the reaction, and even a tiny amount of mass multiplied by the speed of light squared becomes a really big amount of energy.
Having said that, I too am wondering about conservation of mass/energy and how that pans out with this new conjecture.
Still wrong. It's a constant here that plays a role in the relation between mass and energy; that's all. The speed of light is always the same. The mass was not accelerated to it.
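Right - c just sits in the formula as a conversion constant. To make the magnitudes concrete, a tiny sketch of the arithmetic (the one-gram example and TNT conversion are mine, not from the thread):

```python
# E = m * c**2 for one gram of mass converted entirely to energy.

C = 299_792_458.0        # speed of light in m/s (exact by definition)
TON_TNT_J = 4.184e9      # energy of one ton of TNT, in joules

energy_j = 0.001 * C ** 2                 # one gram, in joules
print(f"1 g -> {energy_j:.3e} J "
      f"= {energy_j / TON_TNT_J / 1000:.1f} kilotons of TNT")
# -> 1 g -> 8.988e+13 J = 21.5 kilotons of TNT (roughly Hiroshima-scale)
```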
Probably what's going on is that spacetime contains information that scientists haven't properly accounted for yet. That information could also be a lot more massive than expected. It's a little like how the Casimir effect is so powerful on microscopic (quantum) scales that it leads to large estimates of the zero-point field energy, and the Higgs field's vacuum expectation value being 246 GeV.
So for problems like galaxies spinning more rapidly than expected, the traditional way of handling that is to say that "dark matter" adds missing mass that we can't see, which keeps the stars from flying apart.
But there are other ways of thinking about it. Space itself may be moving as it spirals down the drain of the black hole at the center of the galaxy, like with frame dragging, so stars perceive their local velocity as smaller than it is. So we don't perceive it within our solar system, but we can detect that we're moving too fast through our galaxy. We'd probably be able to detect the redshift from that though, so I'm probably making this one up.
Or space may be flowing into our reality from a higher dimension, lengthening the distance between matter over time, which we perceive as dark energy. Which could be due to the multiverse collapsing into our local reality. In other words, superposition may contain more information than the collapsed state, so the evolution of our reality gradually releases that bound up potential as dark energy. Admittedly, this is such a fringe idea that I'm probably also making it up.
My favorite interpretation is that the universe is inside the event horizon of a giant black hole. As a thought experiment, imagine being a 5th dimensional being who could observe what happens within a star as it collapses into a black hole. At first, the star is propped up by fusion energy keeping its gas hot enough to stay spread out. As the nuclear fuel runs out, the gas gets cooler and dense enough to begin forming a black hole at the center. At first, the black hole is small enough that Hawking radiation causes the black hole to explode faster than matter can fall in. So microscopic, short lived black holes form and die at the center constantly. We may even perceive matter falling in and coming out changed as fusion, but I digress. Hawking radiation rate is inversely proportional to mass, so eventually enough holes form that all of the matter falls in before it has time to radiate as energy.
So far, so good (I haven't said anything outside of current physics). But think about what the matter sees. As it falls into microscopic black holes and pops out again as energy, it's as if there is no floor beneath it. It's like it fell into a little bottomless pit, started accelerating faster and faster into the outward z axis of the hole, then saw the event horizon around it begin to shrink as its micro-universe evaporated, right before it popped out again as Hawking radiation. In other words, the matter perceives space around it growing for a time, before shrinking again. We can think of the interior of the hole as adding space as the floor falls at near the speed of light, even though we see a fixed radius event horizon from outside.
This is exactly what we perceive in our universe with expanding space and the Hubble constant, with distant galaxies leaving our reality as their recession exceeds the speed of light at the radius of the universe, some 14 billion light years away.
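That figure is roughly the Hubble radius c/H0, the distance at which the naive recession speed v = H0*d reaches c. A quick sanity check (the H0 value is my assumed round number; measurements range roughly 67-73 km/s/Mpc):

```python
# Distance at which naive Hubble-law recession speed v = H0 * d equals c.

C = 299_792_458.0       # speed of light, m/s
H0 = 70.0               # Hubble constant, km/s/Mpc (assumed round value)
M_PER_MPC = 3.0857e22   # metres per megaparsec
M_PER_LY = 9.4607e15    # metres per light year

h0_per_s = H0 * 1000.0 / M_PER_MPC       # H0 converted to 1/s
radius_ly = C / h0_per_s / M_PER_LY
print(f"Hubble radius ~ {radius_ly / 1e9:.1f} billion light years")
# -> ~ 14.0 billion light years
```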
I think this is probably why the James Webb telescope is seeing more galaxies at great distances (which shouldn't have had time to form): the universe is actually continuous and looks like a big block of Swiss cheese with enormous black holes bumping up against each other. There might never have been a Big Bang; instead, inflation was space growing inside of the newly created black hole that our universe is inside of. Time may even reverse within the horizon, which has implications for things like antimatter and the weak interaction breaking parity symmetry (of CPT).
So we would see smaller black holes in our universe gaining mass at the same rate as space is expanding, as if dark energy is pouring into them, which sounds like the best description so far of how non-primordial black holes could reach a billion solar masses. Which has all kinds of ramifications for quantum gravity and the manipulation of mass.
I'm not saying that any of these models exactly describe reality, just that these are breadcrumbs you can use to think outside of current dogma within the physics community. Also I'm not a physicist, just a programmer searching for abstractions that might be more accessible than the gobbledygook which creates a kind of passive gatekeeping around these concepts.
I can't wait for the inevitable Sabine Hossenfelder video that either calls out an obvious flaw or goes into the consequences/implications of what it may change if it turns out to be true.
I also can't wait for the Space Time video on this once it has had enough peer review to report on.
I'm awaiting Jean Pierre Petit's analysis. He's developed a twin universe theory involving negative mass/energy using bimetric relativity (without the runaway effect). At one point in the development of his theory in the mid 2010s, he stumbled upon: Hossenfelder, S.: A bimetric theory with exchange symmetry.
He contacted her but she refused to collaborate on a common paper, calling him a plagiarist and a crank. She did the same to Tim Palmer (a Royal Society meteorologist) when he approached her with ideas on superdeterminism:
> But Tim Palmer turned out to not only be a climate physicist with an interest in the foundations of quantum mechanics, he also turned out to be remarkably persistent. He wasn’t remotely deterred by my evident lack of interest. Indeed, I later noticed he had sent me an email already two years earlier. Just that I dumped it unceremoniously in my crackpot folder.
> In a universe filled by chaos and disorder, one physicist makes the radical argument that the growth of order drives the passage of time -- and shapes the destiny of the universe. Time is among the universe's greatest mysteries. Why, when most laws of physics allow for it to flow forward and backward, does it only go forward? Physicists have long appealed to the second law of thermodynamics, held to predict the increase of disorder in the universe, to explain this. In The Janus Point, physicist Julian Barbour argues that the second law has been misapplied and that the growth of order determines how we experience time. In his view, the big bang becomes the "Janus point," a moment of minimal order from which time could flow, and order increase, in two directions
There are more players in the field with very similar ideas (the refinement of twin universes as a two-sided universe); Petit contacted them all but was never given a reply.
And now we have that paper that talks about black holes without a singularity and their coupling with the universe's expansion limited to a maximum k-factor of 3.
This sounds a lot like the conclusion from this paper by Petit:
> Supermassive objects, whose formation will be explained in a future article, are also subcritical objects, Plugstars. The theory gives all plugstars a gravitational redshift of 3. This is exactly what is shown by the measurements of the images of two hypermassive objects located at the center of the galaxies M87 and the Milky Way.
> We predict that this redshift of 3 will accompany future images of hypermassive objects that will appear in the future.
In addition to the two you mentioned, I've enjoyed 'Cool Worlds' by a prof at Columbia University. He's a cosmologist (?) by trade, so lots of stuff about new planet discoveries and new detection techniques/tools. And he has a very soothing voice similar to Space Time - good for winding down before bed :-)
Edit: Ahh - its "David Kipping", Assistant Prof., Astronomy
The UMich article contains a quote that I think argues that there is nothing “on the other side” of black holes that could be the birth of a new universe: “If cosmological coupling is confirmed, it would mean that black holes never entirely disconnect from our universe, that they continue to exert a major influence on the evolution of the universe into the distant future” Tarlé said.
That suggests to me that popular notion that black holes are an entry point to a wormhole leading to a new place would be unfounded.
Is there some consolidated astrophysical theory about reality being black holes all the way down? I.e. our universe being a region of spacetime isolated inside a black hole, each black hole inside our universe also hosting one, the expansion of the universe possibly being the black hole growing, etc.?
Am I right in thinking that if this connection is confirmed, then it'll bring us a step closer to reconciling Quantum Mechanics and General Relativity?
After all, vacuum energy/zero point energy/ZPE is a quantum phenomenon, albeit not that well understood.
From a physics POV, perhaps. Unfortunately, and I wish I could find the great illustration of this I stumbled across a few years ago, the two theories, QM and GR, are also separated by a mathematical gulf: the dominant maths used in each are so far apart on math’s “evolutionary tree”, if you will, that it will take considerable work to bring them together, even with alignment of the underlying physics.
The mental image I have, based on that lost but amazing illustration, is the difference between mammals and birds: sure, they’re both warm blooded, largely social bilaterally symmetrical vertebrates, but they’re awfully far apart.
I understand your point and your analogy of mammals and birds and the mathematical gulf, it's a good one. Damn shame you've lost reference to that illustration, I'd love to see it.
Embedded in my comment was the thought that if the empirical evidence for this observation was overwhelming then this tight and specific coupling between QM and GR would be so embarrassing that mathematicians would have no option but to significantly up the ante (and also this new evidence may bring a fresh approach).
We've seen this leapfrogging between math and physics many times before, Newton/calculus, Hamilton/quaternions, Galois/group theory, etc.
Wishful thinking perhaps, but the standoff/gulf has to collapse eventually. It'd be nice if this observation was the impetus.
The containment of vacuum energy instead of a singularity aligns well with my ideas on black holes. But this correlation to dark energy seems in contradiction -- dark energy is essentially the cosmological constant. How can increasing mass over time cause acceleration of the expansion rather than deceleration? Something doesn't jibe here. (Hopefully the paper will shine some sense on this.)
They are entirely different things, only related by the word “dark”. Dark energy is “negative” energy in the sense that it drives expansion rather than slowing it down. Its energy density is also constant, meaning it is not diluted by expansion unlike matter (dark or not) and normal energy. It seems to be a fundamental property of spacetime.
The reason they even came up with the idea is because they're distinguishable.
Unfortunately, my mental model is merely one step up from PopScience articles, so the following probably has more holes than a doughnut carved out of Swiss cheese:
Dark matter was first noticed in the unexpected relationship between the orbital speed of stars in galaxies and their distance from the centre. This has since been improved by the direct observation of gravitational lensing, which also shows it's not "simply" gravity falling off at a rate other than 1/r^2 at these scales, as the lensing isn't always inside the galaxies, e.g. when two collide.
Dark matter behaves like it doesn't interact with anything much, not even itself.
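A toy illustration of that rotation-curve observation (the enclosed mass and radii are made-up round numbers, purely for scale): if essentially all of a galaxy's visible mass sits inside an orbit's radius, Newtonian gravity predicts speeds falling as 1/sqrt(r), whereas measured curves stay roughly flat at a couple hundred km/s:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
M_PER_KPC = 3.0857e19  # metres per kiloparsec

def keplerian_speed_kms(r_kpc: float, enclosed_msun: float = 1e11) -> float:
    """Circular orbital speed if all mass is enclosed within radius r."""
    r_m = r_kpc * M_PER_KPC
    return math.sqrt(G * enclosed_msun * M_SUN / r_m) / 1000.0

for r in (5, 10, 20, 40):
    print(f"r = {r:2d} kpc: Newtonian prediction ~{keplerian_speed_kms(r):3.0f} km/s")

# Predicted speeds drop from ~290 to ~100 km/s over this range, while real
# galaxies' rotation curves stay nearly flat -- the missing-mass puzzle.
```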
On the other hand, dark energy was originally suggested by Einstein as a fudge factor to make the universe static on large scales, something he later dropped in embarrassment when Hubble expansion was found, only for it to come back when people noticed the expansion seemed to be accelerating.
Dark energy, for maths reasons I don't really understand[0], acts like negative pressure even though it's positive energy, which occupies all space evenly and therefore has an effect directly proportional to distance, not inverse squared like gravity.
[0] 16 simultaneous partial differential equations whose contents can vary throughout a 3+1 spacetime is something I have yet to even attempt to play with
> Dark energy, for maths reasons I don't really understand[0], acts like negative pressure even though it's positive energy
First, let's understand a bit about the cosmological frame: it lets us consider the universe as 3 dimensions of space ordered by a "scale factor" dimension of time. The coordinates of each 3d space are Euclidean, but the coordinates don't line up exactly between different scale factors. Colloquially, the coordinates expand with the expansion of space, or equivalently the space between coordinates grows over time. You could think of it this way: if at some early point we label every point in space with an integer, in the future of that point we have to add new labels between the integers.
If we add in a pair of low-mass freely-falling test masses at the time when everything is labelled with an integer, e.g. at points (1,1,1) and (2,1,1), then they stay at those coordinates even as more and more (non-integer-labelled) space appears between them.
To this we add a set of gasses, dusts, or fluids representing radiation, ordinary nonrelativistic matter, and maybe others (e.g. relativistic dark matter (e.g. neutrinos), non-relativistic dark matter ("cold dark matter"/"particle dark matter")). Again, any "mote" of the ordinary matter dust stays at the same coordinate forever. Rather than coping with the "motes" of radiation not staying put, we average every point in space and see there is some quantity of matter (a mote, or a fraction thereof), some quantity of radiation (a mote, or a fraction thereof), some quantity of dark matter, and so forth.
The dusts dilute away because more and more space appears between each original coordinate. In our averaging picture, we get a smaller and smaller fraction of a mote at each point on average as the universe expands.
This is the essence of the Friedmann-Lemaître-Robertson-Walker model that is the standard cosmology.
The view here is that of boring old observers freely floating in deep inter-galaxy-cluster space. That leads to (from that view point) concrete calculations of the various contributions to the averaged energy-density at each point in space at a particular scale factor ("at a given age of the universe"). In general, that figure is higher in the past and lower in the future, with different contributions to the total (average) energy density at a point dropping at different rates (this is the "equation of state" for each of radiation, baryons, neutrinos, dark matter, ...), but they all drop away towards nothing in the far future.
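A minimal numeric sketch of that dilution (the normalisation of all densities to 1 at a = 1 is mine, chosen only to show the trend):

```python
# How each component's average energy density scales with the scale factor a:
# matter dilutes with volume (a^-3), radiation also redshifts (a^-4), and a
# cosmological-constant dark energy does not dilute at all.

def densities(a: float):
    matter = a ** -3
    radiation = a ** -4
    dark_energy = 1.0      # constant by definition
    return matter, radiation, dark_energy

for a in (0.1, 1.0, 10.0, 100.0):
    m, r, de = densities(a)
    print(f"a = {a:6.1f}: matter {m:10.3g}  radiation {r:10.3g}  dark energy {de:g}")

# As a grows, matter and radiation drop away towards nothing, leaving the
# constant dark-energy term to dominate the total.
```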
We can then think of where stress-energy goes at an average point. For freely falling baryons, almost nothing interferes with the whole of the stress-energy flowing from (microsecond-before,0,0,0) to (now,0,0,0) to (microsecond-after,0,0,0). There are some cosmic microwaves and neutrinos that have a tiny ghost of a chance of transferring in some momentum via scattering at each point, but that falls away when we consider the average across each of these three 3d spaces.
Flipping things back around, while a "mote" of the baryon gas stays "at the integers" as we add more and more digits after the decimal point as space expands, each time we add digits we get more dark energy fluid "motes". If we change coordinates, the coordinate-distance between "motes" of the fluid grows with the expansion, e.g. as the distance between these proxies for galaxy clusters goes from 1 to 10 to 100 to ... the 9, 99, ... has "new" motes of dark energy.
Given this it is straightforward to treat some aspects of the expansion as another fluid with an energy density that is the same at every point in every space. It does not dilute away like the others. It imposes a tension ("negative pressure") on the other sources of energy-density.
If we think of a single point in one of these 3d spaces as being imprisoned within a tiny six-sided cubical cell, we can track the flow of momentum through each (pair of) face(s) of the cell. At very early times radiation flowing through the cell dominates. The inflow is tracked as the normal stress <https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2F...> on each face. When the normal stress is identical on all six faces (or spherically symmetrical if we switch from a cube to a sphere), we call that pressure.
Negative pressure is just flipping the arrows around. We can call that "tension".
Colloquially we're interested in how much the pressure changes the energy level of the imprisoned matter. In general, large positive pressure leads to the imprisoned matter becoming more energetic. Large negative pressure would lead to the imprisoned matter becoming less energetic. "Large" here is relative to the energy-density within the cell, and varies by component ("equation of state" again).
In the cosmological frame, freely-falling imprisoned matter (baryons, dark matter) in a cell at (t,1,1,1) will thus cool with the expansion across (t',1,1,1), (t'',1,1,1) and so forth, with the energy sucked out by the constant outward tension.
> 16 simultaneous partial differential equations
You can start with understanding the stress-energy tensor.
This is it laid out in a 4x4 matrix form <https://en.wikipedia.org/wiki/Stress%E2%80%93energy_tensor#/...>. The indices running 0,1,2,3 correspond to the timelike dimension and the three spacelike ones. Each element of the matrix has two indices i and j (e.g. for T^00 in the top left, we have i=0, j=0) indicating the "comesinfrom" and "goesoutto" directions.
Energy pouring in and staying in is the orange column. Energy already there and staying put is the top left.
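Since that image link may not survive, here's the standard layout sketched out (my summary of the usual conventions, with indices 0 = time and 1,2,3 = space):

$$
T^{\mu\nu} =
\begin{pmatrix}
T^{00} & T^{01} & T^{02} & T^{03} \\
T^{10} & T^{11} & T^{12} & T^{13} \\
T^{20} & T^{21} & T^{22} & T^{23} \\
T^{30} & T^{31} & T^{32} & T^{33}
\end{pmatrix}
$$

$T^{00}$ is energy density; the rest of the top row and left column are energy flux / momentum density; the spatial diagonal $T^{11}, T^{22}, T^{33}$ holds the normal stresses (pressure, when they're equal and isotropic); and the off-diagonal spatial entries are shear stresses.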
Let's use spherical coordinates and call the 1 direction "in/out", i.e., described by the radial coordinate. We'll place our cell of interest microscopically displaced radially from the spherical coordinate origin.
If we were in a dense object like the core of a planet or star, T^11 would be dominated by the inwards flow of inwards-momentum and the reaction-pressure of outwards-momentum flowing outwards. In high-mass stars' cores, this number will dominate all the others in the stress-energy tensor. Also in general T^ij, i=j, i!=0 (the green bar) will not be completely identical.
However, if we're in a cosmological setting (deep inter-galaxy-cluster space), T^11 is [a] small, [b] it's the same as T^22 and T^33, and [c] they are interpretable as the flow of energy-momentum out of the cell. In the very very far future, T^00 drops to zero, and T^ii, i!=0 (despite being small) dominates.
Another difference here is that in the stellar core positive pressure case, T^11 is not a constant. However, if we got rid of all possible radiation pressure and the like, in the resulting cosmological vacuum T^11 would be a constant (and identical to T^22 and T^33).
This is an interpretation that depends on our choice of viewpoint (that of a freely floating low-mass observer who feels only the expansion and not attractive influences from dense matter, because all matter is completely evenly smeared out). The interpretation does not hold up well as we change our point of view (e.g. to a relativistic observer, to a different set of coordinates, to an observer who is close to or part of a self-gravitating mass overdensity).
Consequently it is probably better to start with the idea that dark energy is the Cosmological Constant (until this possibility is disproven, which has not happened yet). It is simply a constant of nature, like the speed of light or like the charge of an electron. Where does the constant come from? Who knows!
Sometimes it is convenient to think of this constant as if it were a substance with appropriate properties, or as if it were "the cost of empty space" (vacuum energy). However it's also possible to be misled by this. It's a scalar quantity, and it has an exact relationship to a tensor quantity (the metric). Scalars and tensors are generally covariant, so we can always make that pairing work for any possible observer, even e.g. an ultrarelativistic cosmic ray or a photon or a relativistic compact object like a black hole.
Finally, the reason for "sometimes" is: while the total tensor value of T is the same for everyone, the value of the individual components of the tensor depends on the frame of reference.
One of the sometimeses is the very early universe when radiation pressure is extremely important for smoothing out temperature and density differences; it is natural to want to do accounting of the expansion as an offset against that radiation pressure instead of the "truer picture" of the radiation pressure falling because the radiation is diluted and de-energized (redshifted) by the early expansion.
They are distinguishable in cosmological models by the parameter in their equation of state relating the pressure of the mass/energy to its energy density. Specifically, dark energy is modeled as a cosmological constant, so its pressure is equal to the negative of its energy density. The individual contributions due to neutrinos, dust, stars etc. end up with equation-of-state parameters differing from -1.
Long story short: the stress energy tensor is a different structure for different types of matter/energy.
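To spell out the bookkeeping (my paraphrase of the standard FLRW treatment, not the parent's words): each component obeys an equation of state $p = w\rho$, and its energy density then scales as

$$
\rho \propto a^{-3(1+w)},
$$

so dust/matter ($w=0$) dilutes as $a^{-3}$, radiation ($w=1/3$) as $a^{-4}$, and a cosmological constant ($w=-1$, i.e. $p=-\rho$ as above) doesn't dilute at all. That differing exponent is what makes the components distinguishable from the expansion history.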
I hereby coin the linkage between Dark Energy and Dark Stars ... "Dark Synergy"!
Unfortunately, the Dark is scary, as far as I can tell. If black hole masses keep increasing, and the universe keeps accelerating in size, we end either by stretch or by crush.
Or does one of these effects ever win? Is there any hope for a middle case where we get through?
Non sequitur, but I grew up watching Red Dwarf with BBC on PBS. I never had a cat as a kid, so that character was totally bizarre to me. Now after being a cat "caretaker", I think owner is not quite right, I totally understand that character. This made rewatching that series so much more enjoyable. The story of his audition is still one of my favorites: "On television, John-Jules is best known for his portrayal of Cat and Cat's geeky alter ego Dwayne Dibbley in the British comedy series Red Dwarf. He obtained the part of Cat by turning up half an hour late for his audition, dressed in his father's old zoot suit. He was unaware that he was late and hence did not appear at all concerned about it. The producers immediately decided he was cool enough to be "the Cat"." --wikipedia
You can. But thinking it's a "meme" that should "die" is not being skeptical. It's the best current explanation, and an uninformed dislike is not to be mistaken for an understanding of its limitations.
[0] https://iopscience.iop.org/article/10.3847/2041-8213/acb704