
As Clocks Get More Precise, Time Gets More Fuzzy - moh_maya
http://www.sciencealert.com/physicists-find-as-clocks-get-more-precise-time-gets-more-fuzzy
======
M_Grey
I understand some of the subject, but I didn't understand the point this
article is trying to make. There's a lot of the usual popsci about QM, and
then this...

 _" In this case, the physicists hypothesised the act of measuring time in
greater detail requires the possibility of increasing amounts of energy, in
turn making measurements in the immediate neighbourhood of any time-keeping
devices less precise._

 _" Our findings suggest that we need to re-examine our ideas about the nature
of time when both quantum mechanics and general relativity are taken into
account", says researcher Esteban Castro."_

joe_the_user's link to the original paper is infinitely more useful than the
article itself. The paper seems to imply something fundamental about how
anything that we can use as a clock must function, then lays the usual
assumptions from QM on top; that makes sense, although who knows if it's true?

I don't get the sense from the paper that it's something we could reasonably
probe with current or near-future tech. Worse, their conclusions about whether
or not spacetime intervals can be absolutely defined may rely on an arbitrary
level of precision.

~~~
mirimir
So is this basically saying that time is quantized?

Or did we know (whatever that means) that already?

~~~
jacquesm
Planck time?

~~~
M_Grey
This is one of my favorite discussions of the Planck scale; I'd strongly
recommend it. [https://www.physicsforums.com/insights/hand-wavy-
discussion-...](https://www.physicsforums.com/insights/hand-wavy-discussion-
planck-length/)

~~~
mirimir
Indeed, very cool. Plus, John Baez commented:

> Another way to think about the Planck length is that if you try to measure
> the position of an object to within an accuracy of the Planck length, it
> takes approximately enough energy to create a black hole whose Schwarzschild
> radius is… the Planck length!

That sounds familiar, here.

------
joe_the_user
I wonder if this could be used to test various quantum gravity models.
Apparently this could involve a lot of energy - another giant machine to
build, oh boy.

Also, direct link to PNAS article.

[http://www.pnas.org/content/early/2017/03/06/1616427114](http://www.pnas.org/content/early/2017/03/06/1616427114)

~~~
dghughes
That is true, or at least it seems that the smaller a natural phenomenon is,
the bigger and more powerful the machine needed to find it.

------
CarolineW
There are a lot of comments here, some by people more knowledgeable than I.
Let me just say that, by my understanding, just as there's a quantum
uncertainty principle linking position and momentum, so there is a similar (or
equivalent) quantum uncertainty principle linking energy with time[0]. The
more accurately you know when something happens, the less certain you can be
about how much energy is involved, and _vice versa._

This is why the quantum vacuum is filled with particles, and why the Casimir
effect[1] works.

[0]
[http://math.ucr.edu/home/baez/uncertainty.html](http://math.ucr.edu/home/baez/uncertainty.html)

[1]
[https://en.wikipedia.org/wiki/Casimir_effect](https://en.wikipedia.org/wiki/Casimir_effect)
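
For a sense of scale, here's a rough numerical sketch of that trade-off (my own
illustration, using one common form of the relation, ΔE·Δt ≥ ħ/2; the numbers
are not from [0]):

    hbar = 1.054571817e-34  # J*s, reduced Planck constant

    def min_energy_uncertainty(delta_t):
        """Smallest energy spread (J) compatible with resolving an interval delta_t (s)."""
        return hbar / (2.0 * delta_t)

    # nanosecond, femtosecond, zeptosecond, and roughly the Planck time
    for dt in (1e-9, 1e-15, 1e-21, 5.39e-44):
        dE = min_energy_uncertainty(dt)
        print(f"dt = {dt:.2e} s  ->  dE >= {dE:.2e} J ({dE / 1.602e-19:.2e} eV)")

Resolving a femtosecond already implies roughly 0.3 eV of energy spread; pushing
toward the Planck time implies energy spreads of order the Planck energy.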

~~~
raattgift
> This is why the quantum vacuum is filled with particles

The quantum vacuum is defined as the no-particle state.

The particle number operator works by matching annihilation and creation
operators. Those operators are fixed on particular coordinates.

When annihilation and creation operators match up there is no particle.

For inertial observers in flat spacetime, a Lorentz transform relates the
annihilation and creation operators for any pair of observers no matter what
their relative velocities or which way they face relative to one another.

A Bogoliubov transformation can relate annihilation and creation operators for
a wider range of observers in flat spacetime, including those who are
accelerated. However, a natural choice of coordinates for extremely
accelerated observers relates poorly with a natural choice of coordinates for
inertial observers (for example, in each case using polar coordinates with the
origin always at the respective observer), with the result that there is a
disagreement about where in a set of shared coordinates one finds frequency
modes which relate to annihilation and creation operators.

As a result, in flat spacetime, the no-particle vacuum of a(n inertial)
Minkowski observer looks to an accelerated observer as if there are particle
creations not matched by particle annihilations, and thus the particle number
goes up. However, conversely, the no-particle vacuum of a(n accelerated)
Rindler observer looks to all Minkowski observers as having particles.

Since Special Relativity does not privilege frames (even though its focus is
on relating inertial observers' observations), neither type of observer is
really "more correct".

However, since the accelerations required to have a detectable number of
particles are extreme (you need to maintain 10^20 gees of acceleration to see a
thermal bath of 1 kelvin) and since accelerations do not last indefinitely, it
is fair to pseudo-privilege the inertial observers, and conclude that (a) the
Rindler observer is counting annihilation operators sourced by whatever is
locally powering the uniform acceleration and (b) the Minkowski view that the
accelerating observer is therefore emitting a thermal bath into the Minkowski
vacuum is more correct.
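
For scale, here's a quick back-of-the-envelope check of that figure, assuming
the standard Unruh temperature formula T = ħa/(2πck_B) (my own sketch, not part
of the argument above):

    import math

    hbar = 1.054571817e-34  # J*s
    c = 2.99792458e8        # m/s
    k_B = 1.380649e-23      # J/K
    g = 9.80665             # m/s^2

    def unruh_temperature(a):
        """Thermal bath temperature (K) seen at proper acceleration a (m/s^2)."""
        return hbar * a / (2 * math.pi * c * k_B)

    print(unruh_temperature(1e20 * g))   # ~4 K: same order of magnitude as quoted
    print(1.0 / unruh_temperature(1.0))  # ~2.5e20 m/s^2 needed to see 1 K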

Additionally, in SR one can tell an extreme acceleration from mere inertial
(even ultrarelativistic) movement, using an accelerometer.

In general curved spacetime one expects observers not moving on a geodesic
(one hovering at a fixed height above the surface of a planet, for instance)
to disagree about the particle numbers in the local area compared to an
observer freely falling on a geodesic as that observer passes infinitesimally
close to the non-geodesic (accelerated) observer, when the particle numbers
are defined using annihilation and creation operators set against "natural"
coordinate choices for these observers and relating those choices through a
Bogoliubov transformation.

Two mutually accelerating observers in flat spacetime or two observers in
general curved spacetime may not be able to find a shared coordinate system,
and when that is the case, no one can really say which of the two is "more
correct" about her or his no-particle vacuum.

Consequently, the quantum vacuum may have particles in it, but only for a
peculiar choice of quantum vacuum (i.e. you look at someone else's claimed no-
particle state and see particles in it).

When looking at one's own no-particle state, one cannot determine the vacuum
energy. One can conclude that at each coordinate point there is the same
energy, but not the exact energy -- this ground state / minimum energy can be
arbitrarily high.

However, the uncertainty inequality ΔE Δt ≥ h/(4π) applies and one can treat
this as momentarily separating annihilation and
creation operators. One can treat this formally (e.g. in Feynman diagrams of
this process) as the production of virtual particle/anti-particle pairs, or
alternatively as a realization of the non-commutation of the field's energy
operator with the particle number operator, and thus the no-particle state is
a superposition of particle number eigenstates.

Trying to turn these superposed states into real particles is taken more or
less seriously depending on the writer. I'm fairly confident that many
physical cosmologists are fine with the idea when considering the possibility
that the early universe is a fluctuation of a high-energy no-particle state
into a still-high-energy many-particle state. The problem is in suppressing
these fluctuations in later regions of space that are free of particles (from
our perspective in our solar system now). Physicists in other areas like to
poke many holes in these ideas.

However, in both cases, "filled with particles" invites horror-show ideas of
larger scale structures fluctuating into existence and causing problems. This
doesn't happen as far as we can tell, so we should preserve the no-particles
idea of the vacuum conceptually.

> the Casimir effect

does not really rely on the particle count operator. One can cast it as the
(idealized) plates screening out longer positive- and negative-frequency modes
which, when combined linearly and treated as creation and annihilation
operators, simply produce more of these on the outsides of the plates. The "more
particles" outside the plates push them together through the "no particles"
vacuum between them.

Mathematically this is OK, but conceptually it has problems.
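
For reference, the idealized parallel-plate result that either picture has to
reproduce is P = π²ħc/(240 d⁴); a small numerical sketch (mine, purely
illustrative):

    import math

    hbar = 1.054571817e-34  # J*s
    c = 2.99792458e8        # m/s

    def casimir_pressure(d):
        """Attractive pressure (Pa) between ideal plates separated by d (m)."""
        return math.pi**2 * hbar * c / (240 * d**4)

    print(casimir_pressure(1e-6))   # ~1.3e-3 Pa at 1 micrometre
    print(casimir_pressure(1e-8))   # ~1.3e5 Pa at 10 nm, about an atmosphere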

Decades ago a quantum electrodynamics approach was taken to the Casimir effect
showing that it arises due to the relativistic retarded intermolecular forces
within the plates themselves. This view fully reproduces the "vacuum
fluctuation" observables for idealized plates and generalizes to non-idealized
plates in a way that the fluctuation view does not (yet, as far as I know).
There have been updates on this view of the Casimir force since, and it holds
up well, and has the advantage of not needing any particular value for "zero
point energy" (which is just the vacuum energy as I described a few paragraphs
above, and which ought to be _unprobe-able_ by observers similar to us, which
is in fairly serious conflict with the vacuum fluctuation idea taken to
extremes).

This is elucidated with citations under "Relativistic van der Waals force" and
under "Generalities" in your [1].

Taking virtual particles too seriously can be misleading, even if it works out
mathematically.

------
goldenkey
All this is saying is that using extremely high-energy lasers or some other
high-energy mechanism to keep coherence is necessarily going to cause a local
time dilation due to the high density of energy.

Highly dense energy/matter warps the space around it.

It's not like we can't take this into account, though, in our measuring
devices. If they are measuring accurately and extremely often, any error will
accumulate until even a normal clock is more accurate. Therefore it's not hard
to tune within reason. Error needs to approach 0 as the number of measurements
per second approaches infinity.
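
To put a number on the warping part, a crude weak-field estimate (my own
sketch, not from the article): the fractional rate shift that a concentrated
energy E induces on a clock a distance r away is roughly GE/(rc⁴), i.e.
GM/(rc²) with M = E/c².

    G = 6.674e-11       # m^3 kg^-1 s^-2
    c = 2.99792458e8    # m/s

    def fractional_dilation(energy_joules, r_meters):
        """Weak-field fractional clock slowdown from energy E at distance r."""
        return G * energy_joules / (r_meters * c**4)

    # e.g. a 1 MJ pulse concentrated 1 mm from the clock: ~8e-36, negligible
    print(fractional_dilation(1e6, 1e-3))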

~~~
M_Grey
They seem to be saying that any measurement regime, in achieving the necessary
accuracy, will cause arbitrarily large measurement uncertainties. The authors,
at least, seem to think that this is fundamental, but as you say, it might
just be a limitation of our current notion of a highly accurate clock.

It might not matter though, if this is another in a long line of essentially
non-falsifiable ideas.

~~~
goldenkey
If time is not well ordered on the quantum scale then we be damned!!

~~~
M_Grey
Right?! It's not like the TDSE is so easy to manage under the current regime.
Plus I _like_ causality...

------
inetknght
Let's pretend that I've never taken a single physics class at all, ever.

Who's to say that the inaccuracies are merely caused by fluctuations in
spacetime? If you turn on the radio to a frequency on which there is no known
transmitter, you hear background radiation. Is it not the same for spacetime?
As you measure time (space) more precisely (more instantaneously) and more
often, your measurements will occur more often on various boundaries of the
background radiation of gravity (which we all know fluctuates spacetime). At
that point, is the error rate you see not also the rate of background
radiation? If you could measure or predict the velocity of multiple background
spacetime radiation sources, could you reduce the error? Thus, the error rate
of measuring time is the rate of change of gravity.

Of course we built gravitational wave detectors, so that nullifies that whole
thought. Right? Or does it? The collision and merging of celestial bodies is
arguably one of the biggest events in the universe. But what if spacetime or
gravity (are they the same thing? I don't know) 'bounces' back, like a wave
crashing against a wall; then surely there are fluctuations in spacetime due to
past events. Wouldn't that intrinsically cause minute "errors" in
measurements?

What keywords should I search to find (hopefully free) online resources to
answer my questions?

~~~
killjoywashere
My physics is pretty old, but I'll try: skimming the actual article (1), it
looks like they're isolating time in the Heisenberg uncertainty principle
(putting all the Δ on the other side of the equality) and then following those
Δ's in the other variables, which is a common use of the uncertainty
principle). They imagine some displacement, then insert the result back into
the uncertainty principle to see how that Δ would propagate back onto time.

I was sort of excited about this because it seemed like they might be
suggesting there was actually a window here to explore a quantum theory of
gravity. Alas, they say "Although the methods presented here suffice to
describe the entanglement of clocks arising from gravitational interaction, a
full description of the physics with no background space–time would require a
fundamental quantum theory of gravity".

To your specific question "Who's to say that the inaccuracies are merely
caused by fluctuations in spacetime?" I don't think they're claiming they
found a theoretical cause to explain some observed fluctuations. In fact, they
put in some notional values to see what comes out and observe "this effect is
not large enough to be measured with the current experimental capabilities".

Now, I think what you're getting at is the idea that you would expect the
ticking of clocks to vary over space time. What they're saying is, "Of course,
but we believe the problem is even more fiendish: the tick rate not only
varies but it accumulates uncertainty to the point that a system which starts
out as a clock can no longer be considered a clock at all. Further, [what I
believe they are saying] the clock's coherence decays in a way that
potentially varies with path and frame of reference." At which point, one
starts wondering: why haven't we all flown apart already?

And they're doing this, again, all from a theoretical standpoint. It's the same
sort of thought experiment you might get if you modify Euclid's fifth postulate:
all sorts of weird stuff is possible, and we don't really need experiments to
show the math, although there are experiments that can be done.
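
To make that concrete, here's a hedged back-of-the-envelope sketch of the
flavour of estimate involved (my own simplification, not the authors' actual
derivation): a clock resolving intervals Δt needs an energy spread
ΔE ≥ ħ/(2Δt), and that spread gravitationally blurs the rate of a second clock
a distance d away by roughly GΔE/(dc⁴).

    G = 6.674e-11           # m^3 kg^-1 s^-2
    c = 2.99792458e8        # m/s
    hbar = 1.054571817e-34  # J*s

    def rate_blur(dt, d):
        dE = hbar / (2 * dt)        # minimum energy spread of a clock resolving dt
        return G * dE / (d * c**4)  # fractional rate uncertainty it induces at d

    # attosecond resolution, a millimetre away: ~4e-58, far below anything
    # measurable, consistent with the paper's remark about current capabilities
    print(rate_blur(1e-18, 1e-3))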

(1)
[http://www.pnas.org/content/early/2017/03/06/1616427114.full...](http://www.pnas.org/content/early/2017/03/06/1616427114.full.pdf)

------
wayn3
There is a lesser-known uncertainty relationship, between energy and time, that
works similarly to the one seen in position/momentum.

This is especially treacherous since atomic clocks seek to measure time by
counting oscillations tied to an energy-state transition in cesium atoms.

I'm not sure whether it applies at the scales we currently work with.

------
UhUhUhUh
It seems that this could be generalized to "as measurement gets more precise,
what is being measured appears more fuzzy". This begins to define a sort of
law and a sort of epistemological "wall" we have been bumping against for the
past century or so.

~~~
mirimir
Well, it's about trade-offs in measuring complementary variables. But the
concept of time becoming fuzzy is new to me.

------
nvader
This effect was a key plot point in Terry Pratchett's _Thief of Time_.

