Where does my computer get the time from? (dotat.at)
851 points by fanf2 12 months ago | 250 comments



Related to timekeeping is the NIST Randomness Beacon: https://csrc.nist.gov/projects/interoperable-randomness-beac...

"This prototype implementation generates full-entropy bit-strings and posts them in blocks of 512 bits every 60 seconds. Each such value is sequence-numbered, time-stamped and signed, and includes the hash of the previous value to chain the sequence of values together and prevent even the source to retroactively change an output package without being detected."

People here were joking about putting time on the blockchain, and, well, NIST is already doing it.


> People here were joking about putting time on the blockchain, and, well, NIST is already doing it.

It's not a blockchain, but a single writer Merkle DAG. No consensus necessary. Much like a git repository with a single author.


If each block contains the hash of the previous block, then I think that it is a blockchain (regardless of whether there are multiple authors or only a single author). A git repository is a blockchain, too.


> If each block contains the hash of the previous block, then I think that it is a blockchain […]

Or simply a 'hash chain':

> A hash chain is similar to a blockchain, as they both utilize a cryptographic hash function for creating a link between two nodes. However, a blockchain (as used by Bitcoin and related systems) is generally intended to support distributed agreement around a public ledger (data), and incorporates a set of rules for encapsulation of data and associated data permissions.

* https://en.wikipedia.org/wiki/Hash_chain

Or perhaps:

> Linked timestamping creates time-stamp tokens which are dependent on each other, entangled in some authenticated data structure. Later modification of the issued time-stamps would invalidate this structure. The temporal order of issued time-stamps is also protected by this data structure, making backdating of the issued time-stamps impossible, even by the issuing server itself.

* https://en.wikipedia.org/wiki/Linked_timestamping

An(other) example of the latter:

    This document describes a mechanism, called syslog-sign in this
    document, that adds origin authentication, message integrity, replay
    resistance, message sequencing, and detection of missing messages to
    syslog.  Essentially, this is accomplished by sending a special
    syslog message.  The content of this syslog message is called a
    Signature Block.  Each Signature Block contains, in effect, a
    detached signature on some number of previously sent messages.  It is
    cryptographically signed and contains the hashes of previously sent
    syslog messages.  The originator of syslog-sign messages is simply
    referred to as a "signer".  The signer can be the same originator as
    the originator whose messages it signs, or it can be a separate
    originator.
* https://datatracker.ietf.org/doc/html/rfc5848


I think you’re basically saying that there are still no good known use cases for blockchain (/s but only a little)


I know of at least one: making electric heaters that actually contain obsolete mining hardware instead of heating elements. Obsolete to keep costs down and to have an excuse when someone complains "hey, at least we're recycling hardware!" (/s also only a little)


NIST has a good blockchain explainer:

* https://csrc.nist.gov/publications/detail/nistir/8202/final

Figure 6 is a good flowchart on helping a person decide whether it's a good solution for particular use cases. See "Distributed ledger need: blockchain, block matrix, or none?" at the bottom of:

* https://csrc.nist.gov/Projects/enhanced-distributed-ledger-t...


Given the relative clunkiness of commercial timestamping services, https://opentimestamps.org/ seems fairly useful to me.


That would imply many encryption schemes are automatically blockchains.

That's a flawed understanding all the way around.


Would you know! So Linus is the real father of blockchain?


According to a news article, the first blockchain application is an application released in 1992 called AbsoluteProof by the company Surety [1].

[1] https://www.vice.com/en/article/j5nzx4/what-was-the-first-bl...


"As Ethereum's cofounder Vitalik Buterin joked on Twitter, if someone wanted to compromise Surety's blockchain they could "make fake newspapers with a different chain of hashes and circulate them more widely." Given that the New York Times has an average daily print circulation of about 570,000 copies, this would probably be the stunt of the century."

What if the hash is published in multiple newspapers?


Circulating that many fake newspapers is not possible. If you printed up that many newspapers, who would you give them to? Anyone who wants to read the NYT likely has a source, or at least knows one; same for sellers. The NYT wishes that there were twice as many people who wanted to read their paper.


Yay, thank you, I was racking my brains trying to remember Surety as an example in response to https://news.ycombinator.com/item?id=37782446


Wikipedia suggests that David Chaum first proposed what was basically a blockchain in 1982. He even had a crypto startup way before they were cool, with "eCash" in 1995.


Blind signatures are totally different from hash chains.


Fancy Linked List


People keep saying Merkle DAGs when someone calls a linear chain of recursively hashed data blocks a blockchain.

I don’t understand.

My understanding of a Merkle tree is that it's a recursive hash: the leaf nodes are the data, and each layer up the tree is the hash of its child nodes.

In a merkle tree, only the leaf nodes store (or reference) data, everything else is just a hash.

Is there another merkle structure I don’t know about?

https://en.wikipedia.org/wiki/Merkle_tree

If the nodes with hashes contain data, it’s not a merkle tree.


Since posting this, I've discovered that IPFS has something it calls Merkle-DAGs.

A Block-Chain is a chain of blocks where there is one valid previous block and one valid next block.

A Block-Tree is a chain of blocks where there is one single valid previous block, and multiple valid next blocks.

A Block-DAG is a chain of blocks where there are multiple valid next blocks and multiple valid previous blocks, with the constraint that you can not form cycles.

They are analogues to linked-lists, trees, and directed-acyclic-graphs but with chained hashes.
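
To make the "chained hashes" part concrete, here's a minimal sketch (in Python; names and field choices invented) of the plain block-chain case: each block carries a payload plus the hash of the previous block, so tampering with any payload breaks a link somewhere down the chain.

    import hashlib, json

    def make_block(payload, prev_hash):
        # The hash covers the payload and the previous block's hash
        block = {"payload": payload, "prev": prev_hash}
        block["hash"] = hashlib.sha256(
            json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True).encode()
        ).hexdigest()
        return block

    def valid(chain):
        # Recompute every block's hash and check each link points at its predecessor
        prev_hash = "0" * 64
        for block in chain:
            recomputed = hashlib.sha256(
                json.dumps({"payload": block["payload"], "prev": block["prev"]}, sort_keys=True).encode()
            ).hexdigest()
            if block["prev"] != prev_hash or block["hash"] != recomputed:
                return False
            prev_hash = block["hash"]
        return True

    genesis = make_block("block 0", "0" * 64)
    second = make_block("block 1", genesis["hash"])
    print(valid([genesis, second]))  # True; tamper with any payload and it becomes False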

From the Merkle-DAG article on the IPFS page:

> Merkle DAGs are similar to Merkle trees, but there are no balance requirements, and every node can carry a payload. In DAGs, several branches can re-converge or, in other words, a node can have several parents.

What's interesting here is that a Merkle Tree is a valid Merkle DAG, since a node can _optionally_ include a data payload. So a blockchain, a blocktree, and a blockdag are all also Merkle-DAGs. Merkle-DAG is a kind of unifying structure that can be used to model all of them.

It's really quite clever.

https://docs.ipfs.tech/concepts/merkle-dag/

This appears to have been coined in 2014: https://github.com/jbenet/random-ideas/issues/20

However the term blockchain dates back to at least 2008.

A blockchain might be a Merkle-DAG but a Merkle-DAG is not a blockchain.


I think this is isomorphic to an unbalanced tree where every node has one non leaf child and one leaf child.


Seems like claiming that a linked list isn't actually a linked list, it's an unbalanced tree where every node has one child node.

I mean, you’re not wrong but it’s still a linked list.

I’d be careful muddying up your mental models this way though - they’re distinct data structures for distinct purposes.

You would likely not want to use a merkle tree for an append-only log, and likely would not want to use a blockchain for verifying file integrity.

For example, BitTorrent, IPFS, and Storj use merkle trees to verify and discover blocks on the DHT; you would not want to use a blockchain for this.

And Scuttlebutt uses a blockchain as an append-only log that is gossip friendly; you would not want to use a merkle tree for this.


> No consensus necessary. Much like a git repository with a single author.

But shouldn't we want decentralized consensus for this?

What if NIST's key(s) were to get compromised, or the org were to disband or become corrupt/dysfunctional?


>It's not a blockchain, but a single writer Merkle DAG.

Hmm. Just because something's a Merkle DAG doesn't make it useable on the Internet. A single-writer blockchain, perhaps?


Oh… so you are calling a database a “block chain”.


A blockchain is a chain of blocks.

Do you have another definition?

Colloquially, it often refers to a consensus algorithm paired with a chain of blocks.

Bitcoin’s innovation wasn’t a blockchain, it was a proof-of-work backed consensus algorithm that allowed a group of adversarial peers to agree on the state of a shared blockchain datastructure.


According to the dictionary [1], a blockchain is "a digital database containing information (such as records of financial transactions) that can be simultaneously used and shared within a large decentralized, publicly accessible network"

The distinction here might be with a decentralized network.

[1] https://www.merriam-webster.com/dictionary/blockchain


Merriam is incorrect


Every word in that definition seems to fit, no?


"decentralized" isnt necessary to a block chain. However when people say "block chain" in everyday use, they're usually talking about that type. It's a case where the everyday use of a word is different to the actual technical meaning.


> It's a case where the everyday use of a word is different to the actual technical meaning.

Which can change.

There are also centralized proof-of-work blockchains. Cloud providers were offering them a while back, and IBM had some offering.


A blockchain used to be a chain of blocks. It's more now. You kinda defined it - consensus algorithm, shared datastructure.

What is "colloquially" even supposed to mean here? That the common usage doesn't match the definition? Maybe definitions change over time....


So a linked list is a blockchain?


If you have a chained hash of the data in the linked list, yes!


OK, but then anyone in control can change the entire tree, so why do we need this Merkle tree?


Can someone give an example use case of this? I'm not sure I understand why a very public long string of random characters on a blockchain is useful, except as a way to prove an event didn't happen prior to a certain time.


The draft of the version upgrade explains the possible uses of this: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8213-draft...

Mostly, it's so the public can verify events that were supposed to be random really were random. The executive summary gives plenty of examples, but think of a pro sports draft lottery. Fans always think those are rigged. They could simply use these outputs and a hashing function that maps a 512-bit block to some set with cardinality equal to the number of slots and pre-assign slots to participating teams based on their draft weight. Then fans could verify using this public API that the draw the league claims came up randomly really did come up randomly.

People always think polls are rigged. This could be used to publicly produce random population samples for polling.

This was also used to prove a Bell inequality experiment worked with no loopholes.
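
As a toy illustration of that draft-lottery idea (team names and field choices invented; the real beacon API and a bias-free draw would need more care), something like this could turn a published 512-bit beacon value into an ordering anyone can recompute:

    import hashlib

    def draft_order(beacon_hex, teams):
        # Everyone who knows the published beacon value can recompute this order
        seed = bytes.fromhex(beacon_hex)
        remaining = list(teams)
        order = []
        counter = 0
        while remaining:
            digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
            # Modulo reduction is slightly biased; fine for a sketch, not for production
            pick = int.from_bytes(digest, "big") % len(remaining)
            order.append(remaining.pop(pick))
            counter += 1
        return order

    # Hypothetical beacon output (in reality: the 512-bit value from the NIST API)
    beacon = "ab" * 64
    print(draft_order(beacon, ["Team A", "Team B", "Team C", "Team D"]))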


If they want to believe the polls are rigged, won't they just assume that the NIST random data is "rigged" as well?


The mob "numbers game" Which my understanding was a sort of lottery used low digits of closing share prices to find the winning number. which solved a few of the same problems. It was an unaffiliated third party generating the numbers with another completely different unaffiliated third party (the newspaper) distributing them. so theoretically every one trusted them as fair numbers.



I always wondered why nobody is using that as the root of a P2P randomness system.

It would be very useful to have a trusted source of time, with a few keys that are meant to never change, that anyone can rebroadcast.

We could have zero configuration clocks that get the time from the nearest phone or computer without any manual setup!


What's amazing is that if your computer is not set to automatically sync its time, you can see how fast it's drifting.

My main desktop is 1.7 seconds ahead at the moment. Probably haven't updated the clock in a few weeks: which isn't that much. Other systems will drift much more.

As to "why" it's not setting the time using NTP automatically: maybe I like to see how quickly it drifts, maybe I want as little services running as possible, maybe I've got an ethernet switch right in front of me which better not blink too much, maybe I like to be reminded of what "breaks" once the clocks drifts too much, maybe I want to actually reflect at the marvel of atomic drift when I "manually" update it, etc. Basically the "why" is answered by: "because I want it that way".

Anyway: many computers' internal clocks/crystals/whatever-thingamajigs are not precise at all.


Crystal errors tend to be around 20 ppm (parts per million)

After a week, 20 ppm would drift 12 * 10^-6 * 7 * 24 * 60 *60 = 12 seconds.

Your motherboard probably has a cr2032 keeping it powered when unplugged.

Crystals: https://www.digikey.com/en/products/filter/crystals/171?s=N4...
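
Spelling the arithmetic out (the coefficient is the 20 ppm figure; it comes to roughly 12 seconds):

    ppm = 20e-6                      # 20 parts per million
    week = 7 * 24 * 60 * 60          # 604800 seconds
    print(ppm * week)                # ~12.1 seconds of drift per week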


There’s a fun thing about quartz wristwatches: one of the biggest contributions to frequency fluctuations in a quartz oscillator is temperature. But if it is strapped to your wrist, it is coupled to your body’s temperature homeostasis. So a quartz watch can easily be more accurate than a quartz clock!

Really good watches allow you to adjust their rate, so if it runs slightly fast or slow at your wrist temperature, you can correct it.

One of the key insights of John Harrison, who won the Longitude prize, was that it doesn’t matter so much if a clock runs slightly fast or slightly slow, so long as it ticks at a very steady rate. Then you can characterise its frequency offset, and use that as a correction factor to get the correct GMT after weeks at sea.
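
A toy version of that rate-correction idea (all numbers invented): if you know the chronometer gains a steady amount per day, you subtract the accumulated gain to recover GMT.

    rate_gain = 2.5                   # measured before sailing: seconds gained per day
    days_at_sea = 42
    chronometer_reading = 17 * 3600   # what the clock shows, in seconds past noon
    gmt_estimate = chronometer_reading - rate_gain * days_at_sea
    print(gmt_estimate)               # 61095.0 seconds past noon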


That would require tuning it to the average body temperature though, right?

Or are you saying that what makes quartz crystals drift is the change in temperature?


Both are true :-)


Oh, I missed your comment about being able to tune some wristwatches' quartz! I wasn't aware that was a thing.

Still, wouldn't the temperature of a watch while being worn vary at least as much as when sitting in a drawer (unless you live in a region blessed with t-shirt weather year-round)?

One of my favorite wristwatches I used to wear as a teenager had a thermometer, but I don't remember how exactly that varied over the year, just that it always showed neither quite my body temperature, nor quite the ambient one :)


The crystals used in watches are usually cut and selected so that a local minimum or local maximum of the tempco is near the temperature of your wrist.

Thus, the tempco is near zero, so human-to-human differences don’t matter much.

One thing to notice is that quartz watches almost always have a metal backplate touching your wrist so that the crystal can have good thermal contact. Presumably, the thermometer in your watch was decoupled from that plate.


It kinda makes you wonder why desktop computers don't use the AC frequency as a stable-ish time source. Short-term accuracy is pretty poor, but it can definitely do better than 12 seconds over a week!


I suppose it's because no AC ever gets to the motherboard in your typical ATX setup? It's all just DC 12/5/3 volts and could be coming from a battery for all it knows. There would need to be an optional standard way of getting time from the PSU, with the AC timekeeping done there.


Of course, but there's no reason why a 50/60Hz signal couldn't have been included in the ATX power connector back when it was established a few decades ago.

In an alternate universe it would've been put in there, together with all the weird -12V / -5V rails nobody uses these days. Getting it these days would indeed be pretty much impossible.


Sure, but it costs extra and nobody (to a first approximation) cares. So why bother?


That's a very optimistic assumption: the target is 50Hz, but if it is below or over for a long period of time (e.g. high load in winter making it hard to sustain the nominal frequency) there is no provision to make it run faster or slower unless the time has drifted by more than 30s (that's possibly only valid for Europe).

More at https://wwwhome.ewi.utwente.nl/~ptdeboer/misc/mains.html


The standards are tighter in the US, with corrections being triggered by 10/3/2s differences and stopping at 6/0.5/0.5s (Eastern/Texas/Western interconnections). Src: https://www.naesb.org/pdf2/weq_bklet_011505_tec_mc.pdf


> After a week, 20 ppm would drift 12 * 10^-6 * 7 * 24 * 60 *60 = 12 seconds.

Where are you getting that 12 from?


It should read 20*, not 12.

The end result is 12 seconds.


Can't say too much but I saw an IoT product where, if NTP failed, they would all slowly fall behind. I really appreciated this because fixing NTP would jump forward, leaving a gap in perceived time instead of living the same moment twice.

So I assumed that, like how speedometers purposely read a little high, the crystals must purposely read a little slow so that computers don't slip into the future.


I worked on an embedded video system once where no one took into account the slight difference between the encoder and decoder clock frequencies. If the encoder was slower, no problem, you dropped a frame every once in a while. If the encoder was faster, you would buffer frames up to the memory limit of the system. Since these systems were typically left on for weeks at a time, the pilots would eventually discover the video they were using to fly a vehicle was several seconds in the past. Luckily this was discovered before initial shipment and the code was changed to drop everything in the buffer except the latest video frame.


That's a neat way of ensuring that time never jumps backwards on your system!

Reminds me of the same idea but applied in the opposite way in some train station clocks: Their second hands take slightly less than a minute to complete one rotation, after which they stop and wait for a signal sent from a central clock to be released simultaneously.

Making a clock run slightly slow or fast is much easier than making it run just about correctly :)


When setting up a mini PC as a home server about 40 days ago, I did not realize Fedora Server does not configure NTP synchronization by default. In only two weeks I managed to accumulate 30 seconds worth of drift. Prometheus was complaining about it but I had erroneously guessed that the drift alert was due to having everything on a single node. Then when querying metrics and seeing the drift cause errors, I compared the output of date +'%s' on the server and my own laptop. The difference was well over 30 seconds.


From wikipedia

> Typical crystal RTC accuracy specifications are from ±100 to ±20 parts per million (8.6 to 1.7 seconds per day), but temperature-compensated RTC ICs are available accurate to less than 5 parts per million.[12][13] In practical terms, this is good enough to perform celestial navigation, the classic task of a chronometer. In 2011, chip-scale atomic clocks became available. Although vastly more expensive and power-hungry (120 mW vs. <1 μW), they keep time within 50 parts per trillion.


Interesting breakdown. But this format is horrible for conveying information. An improvement would be removing the slides, crafting some coherent paragraphs and then reinserting some of the more crucial images for support.


After someone gives a talk in person, some of the things they can do online:

1. Mention that they gave a talk

2. Post a video recording online

3. Post the slides online, as-is (PDF or whatever) with no explanation

4. Lay out the slides on a HTML page, with accompanying text (what would have been said by the speaker), so that it's easier to read — while still being clear you're “reading” a talk.

5. Redo/rewrite the whole thing into text form, paragraphs and all.

The author here has done 1 to 4, and you're complaining they've not also done 5, but that's a lot of work and I don't begrudge someone not doing that. I'll be grateful someone presented their talk in a readable form in the first place.

[I do agree this page was hard to read, at least on mobile and at least in its initial version—it's much better now—but I've seen many others post these "annotated talks" online and the format itself is not necessarily bad: for instance see https://idlewords.com/talks/ (example: https://idlewords.com/talks/superintelligence.htm) or https://noidea.dog/talks (example: https://noidea.dog/impostor) or https://simonwillison.net/tags/annotatedtalks/ (example: https://simonwillison.net/2022/Nov/26/productivity/) — maybe just some minor tweaks to CSS like putting the text to the right of the images would make it easier to read.]


I was mostly confused about the images being above the line of text you're supposed to read before looking at the image.

"Here's a picture of an NTP packet"

picture of a man sitting at a desk


When I gave the talk, I showed the slide before I talked about it. It’s normal to show the speaker notes below the slides in software like Keynote or Powerpoint.


That might be clearer if the header was just 'slides and notes from my talk'; instead you actually claimed the opposite, that it's a 'blogified version', but it's not really. I tripped up on the same thing, and then got through several 'duplicate images', 'oh no very slightly different images', before it finally dawned on me that they were slides.


I’ve clarified the introductory paragraph and added lines between each slide. Should be a bit easier to read now.


I can tell the talk would have been really enjoyable but I agree this format is just lazy for conveying that information.


And when I shift a work to a new medium, I also shift over the conventions used to match that new medium -- and audience. It's an empathy thing.


You should feel lucky someone produced free content for you to consume.

The sense of entitlement to accuse someone of lacking empathy because they didn't present it in your preferred format is literally crazy to me.


I don't feel entitled to anything, it was just a suggestion for how they can communicate better, which they ostensibly want. I constantly get lauded for good presentations, and see others do them with unforced errors, so I thought I'd do my part to level the playing field.

Ironically enough, aren't you doing the same thing now, berating me for giving free information the wrong way? How about just learning from the advice and moving on?


Sorry about that. I was more harsh than I intended. I might have misread your original reply as well. It appeared more of a complaint than advice, and I do agree it's good advice.


It's simply not intuitive in the way it was presented that the line of text was a footer for the picture. The text and pictures are mistakenly read as belonging to the same "layer", sequentially, which is not what the author intended. It's obvious what that intent was, but it's not structured correctly to be properly interpreted.


I was really bothered that on the website version, the NTP packet diagram is largely illegible. I hope that when they gave this talk on slides, you could read it.


TBH you aren’t supposed to read it, you either say to yourself, oh yes I recognise the NTP packet diagram; or, oh yes, that looks like a packet diagram; or, oh interesting maybe I should look at the NTP RFC. The slide was only up for a couple of seconds :-)


The thing about a good, simple network protocol is you can look at the packet diagram and start to understand how the protocol works. I think NTP fits the bill here.


I mean, put a little gnome hat on him and I’d believe it…


Watching the actual talk is much better: https://ripe86.ripe.net/archives/video/1126/


The linked PDF has clear page delineations, unlike the HTML page: https://ripe86.ripe.net/presentations/134-2023-04-whence-tim...


I have never seen this format before but it does mirror what going down a rabbit hole of a particular topic looks like for the average curious person.

I liked it.


I thought it was a very fun, stream-of-consciousness kind of read.


I simply assume any "slides" format comes from porting over a live talk. Lazy, yes. Efficient, yes.


Especially because half of the text just repeats what's on the slides and ultimately I didn't see an easy way to make the slides bigger. Like the NTP packet format slide was mostly unreadable.


Shout out also to the NTP Pool, a volunteer group of NTP servers that is the common choice for a lot of devices. Particularly open source stuff. Microsoft, Apple, and Google all run their own time servers but the NTP Pool is a great resource for almost everything else. https://www.ntppool.org/en/


Reminds me of that time when the NTP pool was basically ddos'ed by a buggy Snapchat release to iOS devices. https://community.ntppool.org/t/recent-ntp-pool-traffic-incr...


Or when Netgear flooded the University of Wisconsin by hardcoding their NTP server's IP address into some of their home routers:

https://pages.cs.wisc.edu/~plonka/netgear-sntp/


I was in the pool for a while using a RIPE NCC GPS-synced PCI card. It was fun, but machine-room dynamics made keeping a dome antenna attached hard: they hate special cables, and roof access is a security and water nightmare.

A rubidium clock is pretty cheap these days anyway.

Now, I'm on Bert Hubert's GPS drift and availability thing, with a Raspberry Pi measuring visibility and availability out my home office window. Much more fun.


Do you have a link for what you are doing now? Seems interesting!



It's an interesting situation when instruments or measurements become more precise, stable, or reliable than the reference material.

And when someone (usually an individual) finally discovers that it has happened, or in some cases makes it so.

>the ephemeris second is based on an astronomical ephemeris, which is a mathematical model of the solar system

>the standard ephemeris was produced by Simon Newcomb in the late 1800s >he collected a vast amount of historical astronomical data to create his mathematical model >it remained the standard until the mid 1980s

>in 1952 the international astronomical union changed the definition of time so that instead of being based on the rotation of the earth about its axis, it was based on the orbit of the earth around the sun >in the 1930s they had discovered that the earth’s rotation is not perfectly even: it slows down and speeds up slightly >clocks were now more precise than the rotation of the earth, so the ephemeris second was a new more precise standard of time


>in 1952 the international astronomical union changed the definition of time so that instead of being based on the rotation of the earth about its axis, it was based on the orbit of the earth around the sun >in the 1930s they had discovered that the earth’s rotation is not perfectly even: it slows down and speeds up slightly

Yeah, I remember studying that back in high school, but I wonder... what actual duration of a second did they use before? And also, being based on the rotation of the Earth, what kind of data was the "vast amount of historical astronomical data" Newcomb collected? How can you reliably capture and store the length of time if you can only base it on the Earth's rotation speed, which varies over time? I would guess the data compared it to other natural phenomena?


When time was based on earth rotation, astronomers used “transit instruments” to observe when certain “clock stars” passed directly overhead. The clock stars had accurately known positions, so if you routinely record the time they pass overhead according to your observatory’s clock, then you can work out how accurate your clock is.

Newcomb’s data would have been accurately timed observations, as many as he could get hold of, going back about two and a half centuries.


I think we need a community-maintained and democratized time-tracking standard so we're not so beholden to Big Time


That's pretty much what we already have, isn't it?

True Time™ is determined by essentially averaging hundreds of atomic clocks from laboratories all over the world. It doesn't really get any more "community-maintained" and "democratized" than that!


Put it on the clockchain


Please tell me you just coined this.


That’s hilarious.


The article, and this comment, makes me wonder what impact a coordinated attack on the root time-keeping mechanisms might have. It seems like there's a fair bit of redundancy / consensus, but what systems would fail? On what timeline? How would they recover?


It's probably possible to calibrate your clock using a clear night sky and a modern cell phone camera. I bet second accuracy isn't an absurd expectation. Now it'd probably take an unreasonable amount of time to calibrate...


Bring out the water jug with a hole in it


we're not, it's run by the government


DARPA are funding the Robust Optical Clock Network (ROCkN) program, which aims to create optical atomic clocks with low size, weight, and power (SWaP) that yield timing accuracy and holdover better than GPS atomic clocks and can be used outside a laboratory.

Most of the big cloud providers have deployed the equivalent of the Open Compute time card, which gets its time from GPS but can maintain accurate time in cases of GPS unavailability.

https://www.darpa.mil/news-events/2022-01-20


If you have a Raspberry Pi laying around and want to run your own Stratum 1 NTP server - https://austinsnerdythings.com/2021/04/19/microsecond-accura...


Note that for NTP it’s better to use a Raspberry Pi 4 than older boards. The old ones have their ethernet port on the wrong side of a USB hub, so their network suffers from millisecond-level packet timing jitter. You will not be able to get microsecond-level NTP accuracy.

For added fun, you can turn the Raspberry Pi into an oven-controlled crystal oscillator (OCXO) by putting it in an insulated box and running a CPU burner to keep it toasty. https://blog.ntpsec.org/2017/03/21/More_Heat.html (infohazard warning: ntpsec contains traces of ESR)


That's a Stratum 0 server since it gets its time from GPS. Stratum 1 server is one that gets its time from Stratum 0 servers.


Stratum 0 is the reference clock itself; stratum 1 is an NTP server attached to one or more reference clocks. https://www.ntp.org/ntpfaq/ntp-s-algo/#5111-what-is-a-refere...


Most of those slides are about the physics part of time measurement (GPS and atomic clocks, etc.). While this is interesting in its own right, in order to understand how MY computer obtains the current time, a more relevant question is "how does a home computer measure the latency of a packet sent from a remote time server?" Does it measure the durations of several roundtrips and take the average duration as latency? What if congestion suddenly occurs during some roundtrip? I always think that these questions are more mysterious than the physical ones.



Just be careful which time source you use. One of our servers was configured to use tick.usno.navy.mil and tock.usno.navy.mil back 10-15 years ago or so. The Navy had an "issue" with the time they were sending out. The overnight result was that several licensing servers wouldn't authenticate and we were locked out of those systems (SSH needs accurate time, within minutes I believe). We discovered the discrepancy by logging in locally (we were in the same building but a different office) and changed the time servers and then the sync method to resolve the issue.


> SSH needs accurate time, within minutes I believe

You may be mis-remembering a few details; SSH does not care about the time at all unless you are using _very_ short-lived SSH certificates.


You are correct, and I believe it was Kerberos that we were locked out of on this system, since it was running an SMB share.


Kerberos is very particular about time.


This system operated an SMB share so Kerberos is probably what locked us out.


Time based OTP is pretty sensitive though. Probably that is what broke?


The irony is that the TOTP spec explicitly takes this into account.

By default tokens are valid for 30 seconds, with a token from the previous 30-second window also being accepted. Being off by more than that is pretty rare for NTP-connected systems.

The specs also provide ways to deal with a dedicated hardware token slowly going out of sync by keeping track of the last-known clock drift, but that's pretty useless these days and can even do more harm than good.
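
A minimal sketch of that windowed check (RFC 6238 style with the usual HMAC-SHA1 / 6-digit / 30-second defaults; this isn't any particular library's API):

    import hashlib, hmac, struct, time

    def hotp(secret, counter, digits=6):
        # RFC 4226: HMAC over the big-endian counter, then dynamic truncation
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    def verify_totp(secret, token, step=30, window=1):
        # Accept the current 30-second window plus `window` steps either side
        counter = int(time.time()) // step
        return any(hmac.compare_digest(hotp(secret, counter + skew), token)
                   for skew in range(-window, window + 1))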


The poster was referring to minutes, which has also been my experience. Something goes wrong, and suddenly you’re an hour off. Blam, now you can’t login. :s


They might have been using kerberos authentication?


Where does my car get the time from? It drifts and changes every time I start it up. Every 3 months I have to change it manually by 10ish minutes or more, but it’s inconsistent


Probably just a local quartz oscillator, like a cheap wristwatch but embedded into the car. That'll drift with temperature, vibration, humidity, and some other factors, but it's cheap and just relies on the user to occasionally set it. Fancier systems can use radio time or GNSS (more likely if the car has built in navigation), but that's probably not happening if you regularly set the time!


Correct. Oscillators are subject to drift through a number of means, and they all have ridiculous effects too. https://news.ycombinator.com/item?id=37613523


Temperature is the largest factor. Things like a DS3231[1] do really well compared to a basic non-compensated oscillator. I have been running some long-term experiments on a few that I have around and with some tuning got them to less than a second loss per year. But, they are super expensive compared to the basic ones (almost $5 each in quantity), so they aren't going to end up in your car where a 3 cent chip is possible to use instead. (I don't know what 5G / LTE chips cost these days, but if they're putting one in your car anyway, then they can probably get the time from that. But choose not to.)

[1] https://www.analog.com/media/en/technical-documentation/data...

Most interesting to me in all of my time experiments is looking at my clock frequency over time vs. the temperature. (NTP daemons aim to calculate your actual clock frequency; then they know how far off your internal time is from actual time.) You don't even need a temperature sensor, the clock rate is a perfect analogue.


Voltage issues can also be a big problem, and cars have notoriously dirty electrical.


Ahh, I bet that's true!


> A company i worked for wanted systems to have no more than 2ns of time drift between each other, in a network of +10 devices.

At that point it's surprising they didn't just deploy a local "time network", with a single master clock distributing time via length-calibrated coax. Approaches like that are really common in television studios.


It wasn't really the right environment for it, and they didn't even actually need that high a resolution; they could've gotten away with 100ms drift and never noticed.


Yeah, temperature is a big one. As is voltage supply stability.

If you want really good short & medium term stability, it's hard (expensive) to do better than an Oven-Controlled Crystal Oscillator (OCXO). An OCXO has a crystal in an insulated chamber with a heater and a thermocouple. A control circuit uses the heater to keep the chamber at a consistent temperature. Over long periods these still drift.

A cheaper alternative is a Temperature Compensated Crystal Oscillator (TCXO), that combines a control circuit with a crystal, a temperature sensor, and a ROM. The ROM contains a table of frequency errors that crystal had across temperature. The control circuit senses the temperature, reads the error value from the ROM, and tries to correct the crystal's oscillation frequency to compensate for that amount of error. Less accurate and less stable than an OCXO, but much smaller, much lower power, and much cheaper.
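
A toy sketch of that TCXO correction step (calibration values invented): a table of measured frequency error versus temperature, interpolated and applied in reverse.

    # temperature (C) -> measured crystal error (ppm), from factory calibration
    error_table = [(-20, -15.0), (0, -5.0), (25, 0.0), (40, 3.0), (60, 9.0)]

    def correction_ppm(temp_c):
        # Interpolate the error at this temperature and apply the opposite sign
        if temp_c <= error_table[0][0]:
            return -error_table[0][1]
        for (t0, e0), (t1, e1) in zip(error_table, error_table[1:]):
            if temp_c <= t1:
                frac = (temp_c - t0) / (t1 - t0)
                return -(e0 + frac * (e1 - e0))
        return -error_table[-1][1]

    print(correction_ppm(32.5))   # -1.5: pull the oscillator 1.5 ppm back toward nominal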

For longer-term stability you want an atomic clock, probably a Cesium clock or Hydrogen Maser. GNSS (GPS, Galileo, GLONASS, Baidu, etc) satellites have atomic clocks on board, and broadcast that time. GNSS is based around very precisely measuring the differences in received times from various clocks to calculate location, so it can also be used as a way to get a local time reference with good long-term stability. Unfortunately GNSS does have rather poor short-term stability compared to an OCXO, so GNSS alone isn't perfect. But it's common and reasonably inexpensive to create a "GPS-Disciplined OCXO" where a GPS unit corrects for the long-term drift of an OCXO, but the OCXO provides the actual output signal (thus gaining the good short & medium-term stability).

The NIST SP 1065 Handbook of Frequency Stability Analysis[1] is a go-to text on measuring clock sources.

[1] https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpubli...


It sounds like it gets the time from you.


Laurens Hammond invented the synchronous electric motor once A/C domestic voltage had proliferated enough as an alternative to the original D/C electrification first established by Edison.

This made it possible for the first time to build clocks based on the stable frequency of the incoming A/C supply voltage, much more reliable than clocks whose speed depended on the line voltage itself, which varies quite a bit whether it is A/C or D/C.

This put him on the map as a manufacturer when he went forward to build Hammond clocks commercially.

Years later his engineers encouraged him to consider developing an electric church organ, which could stay in tune regardless of variations in line voltage.

Hammond was not musically inclined but he did it anyway.

Right up there with the Great Men in the most legendary way.

http://thehammondorganstory.com/

By the time the 1960's came around, almost all new American vehicles were recognized as modern Space Age conveniences, and a factory clock (mechanical analog, naturally) had become almost a universal standard accessory beyond the most budget price points.

There were a couple of drawbacks to the factory clocks: they had to be connected to the car battery at all times to keep running, and while they didn't drain the battery very much at all, they would still eventually deaden it if the car sat undriven, which was way worse than no clock. And they depended on the incoming voltage, which determined the internal clock motor speed to begin with. Different automotive electrical systems and batteries themselves do vary perhaps 10 percent about a nominal design voltage of 12 VDC. There is no stable A/C in the car that a synchronous motor would need to run on[0].

These now-vintage clocks were self-correcting. You correct them yourself. Actually the same twisting of the knob to move the hands of the clock, which was familiar from earlier non-correcting clocks simply did the job. So they were somewhat backward-compatible. Only the Space Age units had smart enough mechanical ability to take into account how much and in which direction you moved the hands, and adjusted the previous running speed accordingly. If the clock was not very close to correct time when you adjusted it, it would take repeated adjustments over a number of days or weeks to get it to very realistic speed. All it really did was successive approximation. You had to supply your own natural intelligence.

Even at the time lots of drivers never knew this, and there was widespread disappointment over the wildly inaccurate clocks "which were OK when new but went downhill 'through time'". They only added maybe a dollar to your car payment but that was very expensive compared to a highly reliable cheap household clock at the time.

When you think about it, today lots of drivers are not quite up to par when it comes to engaging the amount of natural intelligence that would be needed in many other ways besides timekeeping.

[0] The electrical "vibrator" which provided switch-mode 12VAC which could be stepped up by a transformer to supply much higher voltage to power vacuum tube radios still produced a variable A/C voltage & frequency, dependent on the underlying D/C supply voltage.


The clock in my 1967 Mercury has an interesting mechanism. It's a fully mechanical wound spring clock with a self-winding mechanism. When the spring unwinds it closes a circuit on an electromagnet that quickly rewinds the clock spring.

Every couple of hours or so you'll hear the click from it rewinding on its own. Unfortunately there's nothing to prevent it from running down the battery and they often need to be replaced due to burn out when the voltage gets low. Essentially the rewinder doesn't have enough voltage to actually wind the clock and the circuit stays closed.


Nothing like a '60's Mercury when they were still building them more carefully than the corresponding mainstream Ford-badged models.

>Well if I had money

>Tell you what I'd do

>I'd go downtown and buy a Mercury or two

Mercury Blues:

https://www.youtube.com/watch?v=QsTfCITzISM

I could really use a Mercury or two about now myself.


What really gets me is when the gauge cluster clock and the radio clock differ. Just a wonderful metaphor for the modern car.


Depends on the car model. Some can use GPS or radio time signals: https://en.wikipedia.org/wiki/Time_signal



I recently rented a $65000 luxury car and it didn't even have built-in daylight savings adjustment. Owners have to dig into settings and fix it themselves twice a year. Cars are so far behind on basic software it is crazy.


DST policies vary by country and (US) state. The US DST schedule can be changed, and has been — twice in the last 50 years. There are proposals to do so again, both nationally and within states. Implementing automatic DST adjustment puts you on the hook for software updates forever. It’s much easier to just let people change the time manually when they need to, like they do with other appliances.

IMHO, the real failure is when devices make it hard to figure out how to change the time.


yeah, DST schedules change frequently enough that within the lifetime of most automobiles, at least some of them will be in use in some location where the up-to-date timezone database doesn't match the timezone database in the car's software.

so the manufacturer gets to choose between making people apply DST changes manually twice a year, which most people understand and are used to doing for various things, or changing over for DST automatically but being wrong sometimes, which most people won't understand and will complain about.


Cars also move. Sometimes they move across time zones or DST boundaries. Automatic local time requires GNSS and a DST database.


Oh, it gets better. I used to get reminders in the mail to take my luxury car into the dealership for "service" to adjust the clock twice a year. Or I could ... you know, just press a few buttons for free.

The trouble with all the modern cars that have synchronized clocks is that, well, you've already put in an LTE SIM card, so why not send up some telemetry at the same time? And here we are, with cars that are surveillance devices with four wheels.


A simple GPS receiver could also provide the time. But I agree with your rant overall.


I have the same problem! It takes months, but eventually the clock in my car is minutes behind. I think currently it's about 4 minutes behind.


Do I remember the dotat.at domain for having some funny naming thing way back in the day? Was it the email dot@dotat.at? Yeah that’ll be it.

It might have inspired me to register `signaslongasitendswith.com` so that I could tell people that my email address was “put whatever you like before the ‘at’ sign as long as it ends with ‘dot.com’”.

Felt clever in 2003. I never got any mail.


That was very clever.


Just want to take a moment to appreciate the URL of "dot at, dot at, slash at"


Definitely reminds me of H T T P colon slash slash slashdot dot org


Oooooh decades later I finally get the name Slashdot! Thank you!


Ha, same here.


you should see the email address of the author :)


IIRC there was an ISP or web host in Australia way back in the day called DotNet (obviously before the MSFT days)...

Their website was http://www.dotnet.net.au (www dot dotnet dot net dot au).


/Meta: There are three different posts on the front page on the theme of "what is time, anyway", and I'm curious if there's some reason for that? Did I miss some news event? Did some leap-second bug crash something?


I’ll often see articles on the front page related to a popular thread from a day or two ago. I always assume that someone either went down a rabbit hole based on the original thread and wanted to share their findings, or already knew about that topic and felt inspired by the original thread to share something useful about it.


Hypothesis:

People gaming clicks using the popularity of subjects from past years would want to drift (heh!) the time forward slightly; these topics probably normally arise around the time clocks change for winter (29 October this year is the end of British Summer Time). So, I speculate that this is a drifted "clocks go back, but will your computer adjust itself?" topic area.


This was a real talk? I would have lost my mind attending this. I am adding the Naval Observatory to my travel destination wish list.


It’s hard to tell if you “losing your mind” in this context means you would have enjoyed the talk or the opposite.


I would have enjoyed it tremendously.



Great overview, thanks for sharing. Maybe this was unintentional, but I got a good laugh out of, "In 1952, the International Astronomical Union changed the definition of time"!


If you're interested in precise timekeeping, the Time-Nuts mailing list is a great place to start (http://www.leapsecond.com/time-nuts.htm).


See also: the Metrology forum at eevblog.com. Lots of time-nuts (and volt-nuts, etc) hang out there.


It used to irritate me that my old dumb mobile must have known exactly the correct time in order to operate on the cell phone network. Yet it kept it secret from me. I had to manually set the clock by guesstimate


In the early days of mobile networks, it was my experience that the network time was not very good. Sometimes off by a minute or two, but most often filled with DST bugs.


I think the poster is getting at the fact they had a CDMA phone - those require extremely tight clock synchronization in order to work. Like a phone needs a sub-chip level of synchronization to a base station (order of a few hundred ns) otherwise it won't be locked to the relevant codes.

All this is down at the modem layer and was probably not externalized to the crappy CPU that ran the OS for the phone though.

You can still use the CDMA networks to get a very precise time reference, suitable for a stratum-1 timeserver, but I think not for very much longer.


"the BIPM collects time measurements from national timing laboratories around the world"

I'm really interested in how this is done with multiple clocks over a distance. Can anyone explain? It feels like it would be very difficult, since asking "what time is it there?" at the timescale of atomic clocks is a bit meaningless. And that's before considering the strictly local nature of time and the impossibility of a general universal time per relativity.


The term of art you want for searchengineering is “time transfer”.

There are a variety of mechanisms:

* fibre links when the labs are close enough

* two-way satellite time transfer, when they are further apart

* in the past, literally carrying an atomic clock from A to B (they had to ask the pilot for precise details of the flight so that they could integrate relativistic effects of the speed and height)

* there’s an example in the talk, of how Essen and Markowitz compared their measurements by using a shared reference, the WWV time signal.


I believe an important aspect is that the actual time offset between the clocks doesn't matter all that much - it is the drift between them you care about.

True UTC is essentially an arbitrary value. Syncing up with multiple clocks is done to account for a single clock being a bit slow or fast. It doesn't matter if the clock you are syncing with is 1.34ms behind, as long as it is always 1.34ms behind. If it's suddenly 1.35ms behind, there's 0.01ms of drift between them and you have to correct for that. And if that 1.34ms-going-to-1.35ms is actually 1.47ms-going-to-1.48ms, the outcome will be exactly the same.

This means you could sync up using a simple long-range radio signal. As long as the time between transmission and reception for each clock stays constant, it is pretty trivial to determine clock drift. Something like the DCF77 and WWVB transmitters seems like a reasonable choice - provided you are able to deal with occasional bounces off the ionosphere.

Of course these days you'd probably just have all the individual clocks somehow reference GPS. It's globally available, after all.
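
A rough sketch of measuring that drift (values invented): two (local clock, reference) timestamp pairs are enough to estimate a rate, regardless of the constant offset.

    def drift_ppm(local1, ref1, local2, ref2):
        # Change in apparent offset divided by elapsed reference time
        d_offset = (local2 - ref2) - (local1 - ref1)
        return d_offset / (ref2 - ref1) * 1e6

    # Offset grew from 1.34 ms to 1.35 ms over 1000 s of reference time -> 0.01 ppm
    print(drift_ppm(1000.00134, 1000.0, 2000.00135, 2000.0))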


It isn’t just the difference in rate. The main content of Circular T https://www.bipm.org/en/time-ftp/circular-t is the time offset of the various national realisations of UTC. Another important aspect is characterizing the stability of each clock, which determines the weighting of its contribution to UTC.

The algorithm behind Circular T is called ALGOS.


A hydrogen atom being looked at by the Navy right?


Cesium, NIST.


The USA has two main time labs: the USNO, which provides time and navigation for the DoD, including the GPS; and NIST which provides time for civilian purposes, including WWV. NIST tends to do more research into new kinds of atomic clock (eg optical clocks, chip-scale clocks) whereas the USNO does more work on earth orientation.

The USNO atomic clock ensemble includes caesium beam clocks, hydrogen masers, and rubidium fountains. NIST uses mostly hydrogen masers, and fewer caesium beam clocks, though their primary frequency standards are caesium fountains.


I get the confusion for the US Navy though, as the clock is at the US Naval Observatory.

If you ever need the time, just call (719) 567-6742

"US Naval Observatory, Master Clock, at the tone, Mountain daylight time, nine hours, sixteen minutes, fifteen seconds...beep!"


"At the tone?" ...what kind of ship are you running here? Is it at the start or the end of the tone?


Just called the number, and holy cow, it's real. I love obscure infrastructure stuff like that.


Another good one is Bell Atlantic's support number which is strangely still connected even though they merged to form Verizon in 2000. 570-387-0000.

"Thank you for calling Bell Atlantic. Due to an emergency condition, we are operating with a reduced staff, and you may experience delay."

Waiting on hold is like the forever traffic jam of Doctor Who.


Speaking clocks are pretty common. Here we dial *133.


When I was a kid, you could dial the operator and ask them for the time. I still don't know why anyone would do that, but I remember it was a thing you could do.

Also, dialing 0 to get a human operator. I swear I'm not that old.


Yeah according to Wikipedia anyway the USNO is operated by the US Navy.


And not just one, millions of them.


What a waste of taxpayers’ money! They should just pick one and stare at it. Why should we be paying for millions of them???


If you don't use the budget you won't get the budget, sailor.


I see no downside.


Sort of. More like from a DNS-style service for time, to which the Navy both contributes information and from which it receives it. I found that part to be the most interesting.


You are sort of correct. NTP is pretty decentralized. DNS has a few specific servers (root servers) that all DNS eventually hits to find where to get a result, but the 'tree' of DNS resolution is much different from that of NTP, which doesn't have such a tree, except as defined by any DNS entries, if they are used (e.g. pool.ntp.org has many A records for many IPs, or CNAMEs to other domains, e.g. 0.pool.ntp.org).

There are many contributors to the official timekeeping. Most facilities who do science will have their own actual atomic clock, which they then share out the data from, in the form of an NTP server; however, they will not typically use data from the rest of the world, except for correlation events. The rest of the world relies on a handful of clocks which are either from NIST (ntp.org I think is owned by them), or from major providers like Cloudflare (not sure they have an NTP server available that the public can use; I'm almost certain that they would use their own atomic clock internally for security reasons). Microsoft also has one, I think; afaik they would need to because they provide their own NTP pool, but they may just aggregate from multiple NIST servers.

You can set up your own NTP server as well, and set up systems you own to start using it instead of whatever is configured. And, if one were so inclined, you could even find and run your own atomic clock, and register it with the NTP pool. I'm actually not sure the atomic clock is required; I'd hope it would be, but idk.


every NTP story needs a link to the Netgear/UW-Madison fiasco: https://pages.cs.wisc.edu/~plonka/netgear-sntp/



One of my forays into maybe-fiction, "The Time Rift of 2100: How We Lost the Future --- and Gained the Past", is possibly the only sci-fi story ever written about the NTP protocol. ( https://tech.slashdot.org/comments.pl?sid=7132077&cid=493082... ) For antique computer aficionados it also contains a shout out to Windows NT.



This reminds me of a talk I gave several years ago to my local linux users group (CIALUG) about time... I don't have the recording anymore but still have the slides https://www.slideshare.net/denner1/all-about-time-or-how-to-...


This is a great write up. Easy to follow for those who are non-technical. I sent this to my kid who is pre-med in college and the feedback was "wow, I wish my Chemistry professors wrote up concepts like this."


TL;DR;

The flow of how modern day time is sourced & relayed to your computer:

1. Based on an atomic transition (the caesium hyperfine frequency) -> units -> time

2. Atomic clock based on #1

3. Time from #2, relayed to US Naval Observatory Alternate Master Clock

4. Time from #3, relayed to Space Force Base

5. Time from #4, relayed to GPS

6. Time from #5, relayed to NTP

7. Time from #6, relayed to your home computer


Great question and great article. This is an "old Internet" vibe for me


Great talk! I'd like to see a separate branch of the tree explaining how WWV, WWVB, and WWVH, the shortwave stations that continuously broadcast time, get their time from (I presume) the USNO.


They are civilian not military, so they get their time from NIST https://news.ycombinator.com/item?id=37779909


Does anyone have a good explainer for how the NTP protocol works? I can't quite wrap my head around how you could possibly synchronize two machines in time over a network with unknown and unpredictable latency.


It's not a quick read, but see "Computer Network Time Synchronization" by Mills.


NTP uses the "intersection algorithm":

https://en.wikipedia.org/wiki/Intersection_algorithm


Specifically on the latency question, have a look at https://stackoverflow.com/a/18779822 for a basic explanation. tldr, once you allow for two-way communication you can start to factor out the network delay.


Is there a way to get time to be 99% or 100% accurate? time.gov shows that my Win11 and Android Pixel are off by almost a second. It'd be cool if it could grab it from the atomic clock.


99% accurate is pretty vague, but in terms of timekeeping 1% of 24 hours is still almost 15 minutes, so being off by a second is a couple of orders of magnitude better. Just to give some perspective.

NTP definitely should be able to keep the clock correct to the sub-second level, but for a more accurate local clock something like the Open Time Card would do the trick: it has a local atomic clock together with a GPS receiver to get pretty much reference-quality time.


I think this is a quirk of Windows and Android machines, which do not aim for perfect precision.

macOS is generally accurate to less than a tenth of a second (assuming desktops - laptops maybe less so, as they sleep a lot), and Linux will be just as accurate as long as it is running ntpd and not systemd-timesyncd.


I think chronyd is the one you want running on your Linux computer. IIRC it uses NTP measurements to gradually discipline (slew) your system clock for better accuracy, rather than just setting the time outright. It also has more options for more time-nutty things.
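
For illustration, a client-side chrony.conf sketch showing the knobs behind that behavior (the path and thresholds are just examples, not a recommended config):

    pool pool.ntp.org iburst          # upstream NTP sources
    driftfile /var/lib/chrony/drift   # remember the clock's measured frequency error across reboots
    makestep 1.0 3                    # only step the clock for errors over 1 s, and only during
                                      #   the first 3 updates; after that it is slewed gradually
    rtcsync                           # periodically copy the corrected time to the hardware clock

`chronyc tracking` will then show the current offset and the frequency correction being applied.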


Install a GPS module in your computer.

Your Android phone is already capable of receiving GPS, so that's probably the most readily-available accurate time source. Getting your Android phone to sync to GPS time instead of just displaying it in an app might be a bit tricky, though...
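
For the GPS-module route on a computer, a common recipe (sketched here on the assumption that gpsd is running, sharing fixes over its shared-memory interface, and that the module exposes a PPS line on /dev/pps0) is to feed chrony a reference clock:

    # /etc/chrony.conf (sketch of a GPS-disciplined setup)
    refclock SHM 0 refid NMEA offset 0.1        # coarse time-of-day decoded from NMEA sentences
    refclock PPS /dev/pps0 refid PPS lock NMEA  # precise pulse-per-second edge, paired with NMEA
    pool pool.ntp.org iburst                    # network sources as a sanity check / fallback

The NMEA sentences alone are only good to tens of milliseconds; the PPS signal is what gets you into microsecond territory.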


Soon, hopefully, from my Raspberry Pi stratum 1 time server project :)

https://github.com/hcfman/sbts-aru


A very enjoyable read! If I wanted to be super nerdy about time and have the most precise time source possible, at home, what are my options?



Are smartphones using GPS for time, or NTP?


Cell tower time. Modern cellular communications use time-slot multiplexing, with transmission slots on the scale of a millisecond, and the beginning and end of each transmission need microsecond-scale accuracy so that you don't miss anything and don't step on the toes of the previous or next transmission; they even have to adjust for speed-of-light propagation, etc.

So as a byproduct of needing to continuously sync time with the cell tower in order to function properly, your phone has quite accurate time; your computer might easily be half a second off of 'true' time, but your phone (at least at the baseband chip level; the main OS may not care) can't be even a millisecond off.


I'm pretty sure the cell network itself can provide time. Not sure if smartphones use it.

I think older cell phones that didn't have GPS or a data plan (voice only) did use it. ~15 years ago, I had an old flip phone that had an option to set the time manually or automatically, and T-Mobile "helpfully" provided a time source that was like 5 minutes slow.


The time reference inside a cell tower is usually PTP


GPS seems like a better choice for a cell tower.


Yes :)


Having just finished watching Idiocracy I have no choice but to point out that it comes from the time masheen.


This has to be the worst way of explaining something I've seen in here.


All of this beautiful discussion here tells me: time is a man made thing!


Great post! Also, Rotterdam is definitely worth a visit.


And Caesium gets the time from spacetime dilation.


We just create labels, rooted in the Earth's orbit around the Sun, at regular intervals measured by radiation, and call it time.


more importantly, where does my computer let the time go?


My cat is crazy accurate with time, down to the minute. I can be sitting reading, browsing the web, or watching a movie, all of which happen at random times and aren't repeated on any schedule. Yet at exactly 9pm Atlantic time, any time of the year, she sits by the stool and complains if I'm not there to give her a treat.

Note she does get thrown off by the seasonal time changes in the fall and spring, but she only needs about a week to reset.


Same with my dogs! One of them comes and puts her paw on me at exactly 20:00 every day, down to the minute as well, to remind me that it's foodie time.

Maybe I could use my dog instead of NTP and have her press a button that syncs my computers to exactly 20:00? Would work offline at least.


It gives me the idea of training my dog to hit a button to get food, and eventually plotting the data on a graph. It would be funny to find some patterns in it.


That’s so interesting. My dog runs on a solar clock. He starts begging for his dentastick when it gets dark out, and stays in bed until the sun comes up in winter.


We started using an automated feeder with our dog. It broke one day and we were surprised to see that he was prompting us to feed him almost exactly at the programmed times. Like down to the minute.

Not sure if he’s relying on other sensory information like certain smells or sounds. I don’t believe that’s the case; we didn’t replace the broken feeder for 3-4 months and he was able to keep time within a few minutes during that period. Our behavior is erratic and changes often; we work jobs with very inconsistent schedules (thus the automatic feeder) so it’s likely not that our behavior is prompting him as well. We can even observe him consistently going to his feeding area on the security camera at the correct time when no one is home. Interesting stuff!


Circadian cycles are pretty reliable in terms of timekeeping. I end up upstairs every day for lunch at about the same time, and I always find myself in the kitchen grabbing a Diet Coke at about 1:30 because I used to grab one after a 1pm meeting for the longest time.


Does that hold true for animals though? Modern humans sleep on a pretty consistent schedule but my dog sleeps randomly throughout the day. And unfortunately for him my sleep schedule is utter chaos so he is often up very late

And to further make it weird: our vet told us to feed him multiple small feedings throughout the day so the feeder was programmed for 6 feedings with 2 hour intervals from 9am to 9pm. He hit the mark for all feeding times!

I still think there is potentially some sort of external prompt(s) though. Circadian rhythm is an excellent idea. Maybe that combined with something hard to detect, like lighting levels (which would explain why the timing shifted a few minutes over a few months). Who knows!


Think about the number of pets doing this at, say, 20:07, and owners not realizing the time accuracy because it's not a round number of minutes after the hour.


There are circadian rhythm genes in C. elegans that take effect even under artificial light. Also, the skill for this is trainable.

At school we used to have a bell mark the end of class, and without a clock or a watch I could reliably tell when the bell would ring. One time I demonstrated this to a friend (both of us had been kicked out of class) by counting down from 10, on the second, to when the bell rang, while looking at a blank wall.

Strange. But nonetheless true.


I suspect in cases like this the dog is hearing something you don't in the environment and has associated it with treat time, creating the expectation. If you reconfigure NTP to use her intuition, you risk biasing whatever the source is, creating a feedback loop that will create drift.


maybe add an NTP reference clock to biff¹

...or add it to systemd (it will get there eventually anyway)

[1] https://en.wikipedia.org/wiki/Biff_(Unix)#Origin_and_name


My dog knows the days of the week too. She knows that Thursday is brewery night, and Sunday is a visit to grandma's. She gets confusedly persistent if either event is cancelled.


My dog is the same. I have a friend who spends the day at my house every Thursday. The dog sits by the door waiting, but only on Thursdays!


I found out my cat could count to four once every fourth day was salmon day.


It's written, and seems plausible, that cat territory is bounded by time as well as space; for example one cat might own a place in the morning while another cat owns the same place in the evening, etc.


There was a BBC documentary where they tracked cats with GPS, called The Secret Life of Cats, where they found this behavior. The cats would also visit each other's houses at different times and eat from each other's food.


cat law sounds hard. cat lawyers must make a fortune litigating in cat court


It's even weirder with people: blood sugar levels change with how you perceive time to be passing, not the actual amount of time: https://www.pnas.org/doi/10.1073/pnas.1603444113


My guinea pig will get really, really loud and persistent if she doesn't get her vitamin-C-laced hay biscuit at 7AM EST. I have no idea how she knows what time it is, but she's super accurate about it as well.


Only one? It's recommended to keep at least two, as they are very social animals.

My four live in the garden, well protected. I'm too chaotic to keep any sort of regular feeding schedule, but they are fine with that; it must be exciting for them when an unexpected feed of carrots or cucumbers drops.


Just the one, as her sister died, and then she tried to kill every other one I tried to pair her with. My dog gets along with her well though, and she gets a lot of human attention.


Ah, yeah, the old problem of death. Happened with my first pair of siblings, but I added three more piggies from the animal shelter, and though it took a while for them to settle in, it worked. Just like with us humans, it becomes easier to fit in with a larger group.


Their guinea pig may cohabit or socialise with non-guinea-pigs, e.g. rabbits.


Interestingly, keeping piggies with rabbits is not allowed in every country. For example, here in Austria it's not allowed (according to the 2. Tierhaltungsverordnung, Anlage 1, 3.6), while in my native Germany it is.


I’m curious, do you know why? (I don’t understand German)


Our dogs meanwhile get fed at 5:00 and every day they think it must be 5:00 at 4:15-4:25, so it seems my dogs may be Martians.


Exactly the same here with my golden doodle. We feed her dinner at 4pm and she’s pretty much always off-by-one and comes to check on the status at 3.


Maybe retrievers are bad at time. The ring leader is a lab.


Girlfriend's minpin-chihuahua mix is like this. Thinks it's breakfast time well before it is, indeed, time for breakfast.


It's not just cats; I think humans are capable of much of the same but we actively suppress it for $reasons.

Any time I have an alarm in the middle of the night for any random hh:mm, after just a few days of the same pattern I will naturally wake up exactly 1 or 2 minutes before the alarm as my internal clock knows what to expect. If I ignore it out of laziness and go back to sleep until the alarm rings (literally a minute later) I can break the habit but if I embrace it, it is really accurate and reliable (though thrown off if I went to bed absolutely exhausted, so there are limits as one would naturally expect).


Reminds me of the Feynman book "What Do You Care What Other People Think?"

There's a chapter:

“It’s as Simple as One, Two, Three…”

where he talks about and experiments with mental counting and mental time judgement

I decided to investigate. I started by counting seconds—without looking at a clock, of course—up to 60 in a slow, steady rhythm: 1, 2, 3, 4, 5…. When I got to 60, only 48 seconds had gone by, but that didn’t bother me: the problem was not to count for exactly one minute, but to count at a standard rate. The next time I counted to 60, 49 seconds had passed. The next time, 48. Then 47, 48, 49, 48, 48…. So I found I could count at a pretty standard rate.

Now, if I just sat there, without counting, and waited until I thought a minute had gone by, it was very irregular—complete variations. So I found it’s very poor to estimate a minute by sheer guessing. But by counting, I could get very accurate.

he goes on to do all kinds of other experiments like counting while running up and down stairs and more... :)


Pavlov would have something to say about that


I better check the oscillator inside my cats, because they want dinner at 4pm plus or minus a half hour.


Can my computer get time from your cat? (:


They are creatures of habit



