Elixir Livebook is my tool of choice for Advent of Code. The language is well-suited to the puzzles; I can write some Markdown if I need to record some algebra or my thought process; the notebook format serves as a REPL for instant code testing; and if the solution doesn't fit neatly into an executable form, I can write up my manual steps as well.
This is a very sensible confusion. The forms of macroscopic averaging functions which are useful and valid cannot be made up arbitrarily, but are determined by the microscopic physical laws of the system. There is a reason that the law of increase of entropy is the second law of classical thermodynamics, with conservation of energy being the first law.

To state it explicitly: energy is a globally conserved quantity, which can be freely exchanged among the interacting microscopic parts of systems. So we can bring a test system (called a thermometer) into interaction with our system under study, (indirectly) observe the average energy per degree of freedom of the thermometer, and call that observation the temperature of the system under study. Similarly, it is a known physical phenomenon that a gas confined to a container will exert a steady average outward force per normal unit area on the walls of the container; we have ways to measure this force, and we call it pressure.

And so on, and on: every useful macroscopic averaging function is a relatively stable, measurable quantity which is determined by the physics of the systems under study. If we discovered some new measurement technique tomorrow which enabled us to measure the "quintessence" of physical systems, and this measurement was stable and reproducible, and could be meaningfully aggregated from the microscopic parts of the system and measured on the macroscopic scale, our definition of entropy would change, to account for "quintessence".
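To make the "average energy per degree of freedom" and "force per unit area" examples concrete, here is the standard textbook sketch for an ideal monatomic gas (the formulas are my illustration, not part of the original point):

```latex
% Equipartition: each quadratic degree of freedom carries, on average,
% (1/2) k_B T, so observing a thermometer's average energy per degree of
% freedom is what "measuring temperature" means microscopically.
\langle E_{\text{per d.o.f.}} \rangle = \tfrac{1}{2} k_B T

% Pressure as the average outward momentum flux on the container walls,
% for N molecules of mass m in volume V with mean-square speed <v^2>:
P = \frac{1}{3}\,\frac{N m \langle v^2 \rangle}{V} = \frac{N k_B T}{V}
```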
> Quantum fields make non-negligible contributions to the gravitational source in these models. That means that, for a fully consistent model, you need a theory of quantum gravity, which we don't have.
Importantly, I don't think this is true. The gravitational field at the event horizon of pretty much any black hole people care to model is actually quite "weak" when compared to the Planck scale, where quantum gravitational effects are expected to become important. While quantum gravity would be needed to model phenomena deep inside a black hole (near what is referred to as the singularity), phenomena at the event horizon, such as Hawking radiation (and presumably the phenomena these researchers claim to predict?), can be modeled quite adequately just using QFT and classical gravity.
> The gravitational field at the event horizon of pretty much any black hole people care to model is actually quite "weak"
More precisely, the spacetime curvature at the horizon of any astronomically significant black hole (i.e., one of stellar mass or larger) is quite small--many, many orders of magnitude smaller than the Planck scale.
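To put rough numbers on "quite small" (my back-of-the-envelope figures, not anything from the article): for a Schwarzschild black hole the curvature at the horizon can be characterized by the Kretschmann scalar, and the corresponding curvature length scale for a stellar-mass hole is measured in kilometers, nowhere near the Planck length.

```latex
% Kretschmann scalar for Schwarzschild spacetime (standard result):
K = R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} = \frac{48\,G^2 M^2}{c^4\,r^6}

% Evaluated at the horizon r_s = 2GM/c^2, the characteristic curvature length is
\ell_{\text{curv}} \sim K^{-1/4} \sim \frac{GM}{c^2} \approx 1.5\ \text{km}
\quad (M = M_\odot),

% versus the Planck length
\ell_P = \sqrt{\hbar G/c^3} \approx 1.6\times 10^{-35}\ \text{m}.
% The horizon curvature is therefore dozens of orders of magnitude below the
% Planck scale, and even further below it for supermassive black holes,
% since the curvature length grows with M.
```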
While this is true, it's not what I was talking about. Black holes get formed by gravitational collapse of massive objects. The models of that collapse process that were current in the 1970s, when Hawking published his original paper on black hole evaporation, were purely classical. Since then, particularly in the last decade or two, there has been a lot of theoretical work on non-negligible quantum corrections to the collapse process. Not having to do with quantum gravity, but just ordinary quantum fields (like those in the Standard Model) providing corrections that were not known when Hawking's original paper was published, or for another two decades or so afterwards.
Also, even in vacuum, quantum fields (again, not quantum gravity, just ordinary quantum fields like those in the Standard Model) can provide non-negligible corrections. The most obvious one is a nonzero cosmological constant, aka a nonzero vacuum expectation value for the energy density of the "ground state" of the quantum fields. The accelerated expansion of the universe indicates that the cosmological constant is indeed nonzero, but the value implied by those observations is about 120 orders of magnitude smaller than what our best current understanding of quantum field theory gives us. So obviously there is something important missing in our understanding of vacuum quantum fields.
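As a rough illustration of the size of that mismatch (my numbers; this is the standard back-of-the-envelope estimate, not a precise calculation):

```latex
% Observed dark-energy density (roughly 0.7 of the critical density):
\rho_{\Lambda,\text{obs}} \sim 6\times 10^{-10}\ \text{J/m}^3

% Naive QFT estimate with vacuum modes cut off at the Planck scale
% (one Planck energy per Planck volume):
\rho_{\text{QFT}} \sim \frac{E_P}{\ell_P^{\,3}} = \frac{c^7}{\hbar G^2}
\approx 5\times 10^{113}\ \text{J/m}^3

% Ratio (the famous "~120 orders of magnitude" discrepancy):
\frac{\rho_{\text{QFT}}}{\rho_{\Lambda,\text{obs}}} \sim 10^{122}
```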
Finally, to get to the main issue I was referring to in my earlier post: the problem with having quantum fields as a source of gravity has nothing to do with the magnitude of the spacetime curvature, it has to do with having superpositions of different quantum field configurations, which means superpositions of different stress-energy tensors. You can't handle that with a fixed background spacetime; there would need to be a superposition of different spacetime geometries. Which requires a theory of quantum gravity.
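A compact way to state the difficulty (standard semiclassical-gravity notation, added here just for illustration): the usual workaround couples classical geometry to the expectation value of the stress-energy operator, which discards exactly the superposition structure at issue.

```latex
% Semiclassical Einstein equation: classical geometry sourced by the
% quantum expectation value of the stress-energy operator.
G_{\mu\nu} = \frac{8\pi G}{c^4}\,\langle\psi|\hat{T}_{\mu\nu}|\psi\rangle

% For a superposition of two field configurations,
%   |psi> = (|1> + |2>)/sqrt(2),
% the right-hand side is a single averaged source, not a superposition of the
% two geometries that |1> and |2> would each produce on their own; describing
% that would require quantizing the geometry itself.
```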
The price sounds good for the spec. But what do I do with 8 GB RAM and 200 GB disk to run an IRC client, a minimal web server, and maybe an SSH jumpbox? They happily run with 1-2 GB RAM and 10 GB disk.
On the non-quantitative side, I really prefer to run ARM over ugly Intel.
I have only one complaint about this Base32 encoding choice, and it stems from the fact that I prefer to encode Base32 using lower case letters, instead of the choice made here to make upper case canonical. When using lower case, the main source of possible confusion is that it can be difficult to tell l and 1 apart, as in l1l1l1l... and this scheme uses both l (canonically "L") and 1.
Hmm, other Base32 systems avoid that by not including I and L (and O), and some other references I've read (ULID comes to mind) say to produce uppercase output but accept input in either case.
And, like this spec, the values are aliases, so 0/o are the same, 1/I/l are the same, etc.
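As a sketch of what that aliasing looks like in practice, here is a hypothetical normalizer in the Crockford/ULID style described above (the exact alphabet and alias table are my assumptions, not taken from the spec under discussion):

```python
# Hypothetical normalizer for a Crockford-style Base32 alphabet:
# decoding is case-insensitive and folds easily-confused characters
# (O/o -> 0, I/i/L/l -> 1) onto their canonical digits.
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"  # excludes I, L, O, U
ALIASES = {"O": "0", "I": "1", "L": "1"}

def normalize(text: str) -> str:
    """Uppercase the input and fold aliased characters to canonical ones."""
    out = []
    for ch in text.upper():
        ch = ALIASES.get(ch, ch)
        if ch not in ALPHABET:
            raise ValueError(f"invalid Base32 character: {ch!r}")
        out.append(ch)
    return "".join(out)

# All of these normalize to the same canonical string:
assert normalize("1l1l") == normalize("ILIL") == "1111"
assert normalize("o0Oo") == "0000"
```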
Yes. I'm surprised the author would be more concerned about confusion between U/V (or u/v) than between 1/l ... the former has always seemed relatively far-fetched to me, whereas depending on the font, the latter can be a real problem. Again, I attribute the issue to the choice of upper case as canonical, because L is not easily confused with any other letter or number.
This is an excellent, and puzzling, question! Let me try to provide some insight into how physicists think about such paradoxes, by addressing the specific example you mention, of the presumed uncountable infinity of different possible photon energies in a finite range of frequencies.

In the case of blackbody radiation, when physicists analyze the set of possible photon energies more carefully, they find that there is really only an infinite number of different possible photon energies (for a finite range of frequencies) if the volume of space containing the photons is infinite. In any finite volume of space, if we allow ourselves to place boundary conditions on the electromagnetic fields at the edges of that volume (for example, suppose we think of our volume as a cube with mirrored walls), we find that there are only a finite number of oscillating modes of the electromagnetic field in any finite range of frequency. In the case of a cube, there is a lowest frequency of radiation whose wavelength allows it to form a standing wave in the box, and the other allowed modes form a discrete set above it. So the density of possible photon states per unit of frequency is actually proportional to the volume of space we allow to hold the photons.

(By the way, this is also precisely related to the quantum uncertainty relation between momentum and position. To confine a photon to a volume of space, the uncertainty in its momentum must be at least of the same order as the momentum carried by the lowest-frequency standing wave compatible with the container.)
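For reference, here is the standard mode-counting sketch this argument relies on, for a cubical cavity of side L and volume V = L^3 (my formulas, included just to make the "proportional to the volume" claim explicit):

```latex
% Standing-wave boundary conditions in a cube of side L quantize the wavevector:
k_i = \frac{n_i \pi}{L}, \qquad n_i = 1, 2, 3, \ldots \quad (i = x, y, z),

% so the allowed frequencies form a discrete set,
\nu = \frac{c}{2L}\sqrt{n_x^2 + n_y^2 + n_z^2},

% and counting lattice points in a thin spherical shell (times 2 polarizations)
% gives the familiar density of modes per unit frequency:
g(\nu)\,d\nu = \frac{8\pi V}{c^3}\,\nu^2\,d\nu .
% The number of modes in any finite frequency range is finite and
% proportional to the volume V.
```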
I think part of my mistake was not thinking of the thermal energy packets (phonons?) as waves in boxes, with the box being the boundary of whatever thing has the energy. Which is still weird for a gas expanding into a vacuum I guess, but works for a hot solid object or a confined gas.
This is clearly not true. If it were an effective means of getting accurate information, then there would be moral arguments for using it in some situations, to prevent even worse events (An imminent nuclear attack? Biological weapons attack? Whatever horrific scenarios you care to imagine) from coming to pass.
I don't believe in these mathematical greater evil arguments. I assume you would not kill a baby even if it was guaranteed to prevent a plane crash killing a hundred people.
One should not make relativistic moral calculations with fundamentally unacceptable acts.
That's exactly right. In this light, what one should and shouldn't do according to a single person (what your last post was about) doesn't have any meaning when discussing what many individuals could value: that's on a higher meta level.
It becomes at best a statement of opinion, and at worst it can be taken as dismissive of the values of others.
> Yes, I think most of us are dismissive of some values of some others and that's OK.
I don't believe this is a very useful property of a conversation that is about trying to understand how the world squares with the opinions people can hold.