
Galileo's Proposed Authentication Algorithm: Part 1 - throw0101a
https://berthub.eu/articles/posts/galileos-authentication-algorithm-part-1/
======
ris
The article gives a good description of the problem and explains how the
solution works, but doesn't really explain _why_ the solution solves the
problem. How can it protect against a delayed identical signal being
re-transmitted at higher power by a relay station? All its cryptography will
check out.

How does the key being transmitted afterwards _actually_ help it?

If we need a chain of message keys to be able to decode a block of messages,
and messages frequently go missing, doesn't this mean missing a message in the
chain makes authenticating any later (er... previous?) messages of the block
impossible?

~~~
petertodd
That's an excellent point. But IIUC it can be solved with a local clock of
sufficient accuracy, by making sure you got the message for the right time at
the right moment.

This is now even feasible for handheld equipment, as "chip scale" atomic
clocks are now commercially available, with dimensions on the order of a few
cm on each side. This one is based on Rubidium:
[https://www.microsemi.com/product-directory/embedded-clocks-frequency-references/5207-space-csac](https://www.microsemi.com/product-directory/embedded-clocks-frequency-references/5207-space-csac),
and here's another based on cesium:
[http://www.jackson-labs.com/index.php/products/csac](http://www.jackson-labs.com/index.php/products/csac)

At one point Digikey was even stocking them as an off-the-shelf item!
Relatively cheap too, "just" thousands of dollars per unit.

~~~
cryptonector
Right, to protect against delayed broadcasts this system requires that the
receiver have a reasonably accurate clock. If there's only one signed-key
distribution schedule, it has to be very frequent, and the clocks can't suck
too much. This can be ameliorated by a second, lower-frequency signed key
distribution schedule (basically: every N) that enlarges the bad-clock
tolerance.

Starting from scratch with an invalid time makes the device susceptible to
attack, but if that's a rare event it hardly matters, because attacks should
be detected by other devices ("herd immunity"?). And if there's a sufficiently
low-frequency key distribution schedule, a user can validate that the time the
device got from Galileo seems right.

~~~
petertodd
For a lot of applications you start off in a location where you have a high
assurance of being able to receive GPS/Galileo signals without interference,
allowing you to set your clock accurately at the start of your mission (eg
military or aviation).

------
eximius
Very interesting, but what stops an adversary from spoofing a key 6' that was
derived from 5 in the same way the network might? The same verification would
derive 5 from 6', then 4, down to 0. What secret does the network have that
allows for the next keygen to be done only by them?

EDIT: Does the network generate all X keys in advance in reverse or something?

~~~
vii
Yes, the network generates the whole chain in advance, in reverse: it picks a
secret final key and repeatedly applies a one-way pseudo-random hash function;
the key at the end of that derivation is the one that gets signed and
published first. See this paper:
[https://www.esat.kuleuven.be/cosic/publications/article-2749.pdf](https://www.esat.kuleuven.be/cosic/publications/article-2749.pdf)
The network's secret is the not-yet-disclosed end of the chain: knowing a
disclosed key only lets you derive _earlier_ keys, never later ones.

That means you can't easily fork a new chain with a K_6' as the receiver can
easily check f(K_6') = K_5 where f is a hash function that is hard to reverse.
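A minimal sketch of that chain and check, assuming SHA-256 as the one-way
function (the real system uses its own functions and truncation; everything
here is illustrative):

```python
import hashlib
import os

def f(key: bytes) -> bytes:
    """One-way function linking the chain (SHA-256 here, purely illustrative)."""
    return hashlib.sha256(key).digest()

# The sender picks a secret final key K_N and derives the chain backwards:
# K_{N-1} = f(K_N), ..., K_0 = f(K_1). K_0 is signed and distributed first.
N = 6
keys = [b""] * (N + 1)
keys[N] = os.urandom(32)              # K_N, kept secret until disclosure
for i in range(N - 1, -1, -1):
    keys[i] = f(keys[i + 1])          # K_i = f(K_{i+1})
root = keys[0]                        # the signed anchor K_0

def verify(disclosed: bytes, index: int, anchor: bytes) -> bool:
    """Hash the claimed K_index forward until it should equal the anchor K_0."""
    k = disclosed
    for _ in range(index):
        k = f(k)
    return k == anchor

print(verify(keys[6], 6, root))         # genuine K_6 -> True
print(verify(os.urandom(32), 6, root))  # forged K_6' -> False (overwhelmingly)
```

Forging a K_6' that passes would require inverting f, which is exactly what
the one-way property rules out.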

Once a key has been published, an adversary can use it to generate messages.
TESLA therefore requires close time-synchronisation between the sender and
receiver. This is enforced by the safe-packet test defined in
[https://tools.ietf.org/html/rfc4082](https://tools.ietf.org/html/rfc4082),
which checks that the packet was received before its key could have been
disclosed.

As the satellite positioning systems depend heavily on highly synchronised
clocks this is pretty reasonable.
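The safe-packet test can be sketched like this (parameter names loosely follow
RFC 4082; all concrete values below are illustrative, not Galileo's):

```python
def packet_is_safe(i: int, arrival_time: float,
                   T0: float, T_int: float, d: int, D_t: float) -> bool:
    """RFC 4082-style safe-packet test (sketch).

    i:            interval whose key authenticates the packet
    arrival_time: receiver's local clock when the packet arrived
    T0, T_int:    start time of the chain, and interval duration
    d:            key disclosure delay, in intervals
    D_t:          upper bound on how far the receiver's clock lags the sender's
    """
    # Upper bound on the sender's clock at the moment of arrival.
    sender_time_upper = arrival_time + D_t
    # Highest interval the sender could possibly be in right now.
    x = int((sender_time_upper - T0) // T_int)
    # Safe only if the key for interval i cannot have been disclosed yet.
    return x < i + d

# A packet from interval 10 arriving early in interval 10 is safe...
print(packet_is_safe(10, arrival_time=10.5, T0=0.0, T_int=1.0, d=2, D_t=0.1))  # True
# ...but the same packet replayed after the key's disclosure time is not.
print(packet_is_safe(10, arrival_time=12.0, T0=0.0, T_int=1.0, d=2, D_t=0.1))  # False
```

This is the check that defeats the delayed-retransmission attack asked about
above: a relayed copy arrives too late to be "safe", even though its
cryptography checks out.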

~~~
espadrine
You might notice that one of the authors of this paper is, incidentally, the
creator of the most widely used cipher in the world.

~~~
RcouF1uZ4gsC
That would be Vincent Rijmen, co-designer of AES (Rijndael).

------
petertodd
Note that this is an example of the broader concept of a timestamped HMAC
signature scheme. My Google-fu isn't up to the task today, but I'm sure I've
seen it proposed before, e.g. with something like Bitcoin as the timestamp
provider.

My OpenTimestamps project is a curious example where I actually made an effort
to _avoid_ implementing this by accident.

tl;dr: It's a service that does Bitcoin timestamping. In the process,
centralized servers generate "calendars", the databases of digests they've
timestamped. For disaster recovery, the calendar servers use an HMAC with a
secret key to authenticate the digests they've promised to timestamp; the idea
is that the HMAC lets us get timestamps back from the community to re-add to
the calendar in case the data somehow gets lost.

For technical reasons, that HMAC "signs" the current time, and the output of
that HMAC gets timestamped securely with Bitcoin. But since OTS servers are
_not_ supposed to be trusted, the HMAC digest is deliberately truncated to 64
bits, making it feasible (with significant effort) to brute force and thus not
useful as an actual timestamp:
[https://github.com/opentimestamps/opentimestamps-server/blob/1b191439f66d603d3d5d32a60b691ce8c92746ad/otsserver/calendar.py#L30](https://github.com/opentimestamps/opentimestamps-server/blob/1b191439f66d603d3d5d32a60b691ce8c92746ad/otsserver/calendar.py#L30)

This is ok for the intended purpose: if you're willing to brute force 64 bits,
I'm ok letting you add some junk to the OTS calendars in the event of a
disaster.
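The truncation idea can be sketched like this (the real code is in the linked
calendar.py; the function name and details here are hypothetical):

```python
import hashlib
import hmac

def calendar_commitment(secret_key: bytes, unix_time: int) -> bytes:
    """HMAC over the current time, deliberately truncated to 64 bits (8 bytes),
    so a determined attacker can brute-force it and it therefore can't be
    misused as a trusted timestamp on its own."""
    mac = hmac.new(secret_key, unix_time.to_bytes(8, "big"), hashlib.sha256)
    return mac.digest()[:8]

tag = calendar_commitment(b"server secret", 1_600_000_000)
print(len(tag) * 8)  # 64 bits
```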

I'm actually considering using that scheme as a _trusted_ timestamping
mechanism for OTS, by simply revealing the HMAC keys after they've been
timestamped by Bitcoin. They're already derived with a merkle tree from a
single seed - each second gets a different HMAC key. To turn that into a
verifiable signature you just need to derive all the keys in advance,
construct a tweaked version of that merkle tree, and then publish the root. A
full signature is then the HMAC output, and a path up the tree to the public
root.
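A sketch of that scheme (the structure here is assumed for illustration, not
OTS's actual layout): derive one HMAC key per time slot from a single seed,
publish the merkle root over commitments to those keys, and later reveal a
key plus its path up to the root.

```python
import hashlib
import hmac

H = lambda data: hashlib.sha256(data).digest()

def derive_keys(seed: bytes, n: int) -> list:
    # One HMAC key per time slot, all derived from a single seed
    # (illustrative KDF, not OTS's actual derivation).
    return [H(seed + i.to_bytes(8, "big")) for i in range(n)]

def merkle(leaf_keys):
    """Return (root, paths) for a tree over commitments to the keys.
    Assumes len(leaf_keys) is a power of two, for brevity."""
    level = [H(b"leaf" + k) for k in leaf_keys]
    paths = [[] for _ in leaf_keys]
    pos = list(range(len(leaf_keys)))
    while len(level) > 1:
        for i in range(len(leaf_keys)):
            paths[i].append(level[pos[i] ^ 1])   # record the sibling hash
            pos[i] //= 2
        level = [H(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
    return level[0], paths

def verify_sig(message: bytes, slot: int, tag: bytes,
               key: bytes, path, root: bytes) -> bool:
    # 1) The HMAC must check out under the revealed key.
    if hmac.new(key, message, hashlib.sha256).digest() != tag:
        return False
    # 2) The revealed key must sit in the published tree at that slot.
    node, p = H(b"leaf" + key), slot
    for sibling in path:
        node = H(node + sibling) if p % 2 == 0 else H(sibling + node)
        p //= 2
    return node == root

keys = derive_keys(b"single seed", 8)
root, paths = merkle(keys)                       # root is published in advance
tag = hmac.new(keys[3], b"digest to timestamp", hashlib.sha256).digest()
print(verify_sig(b"digest to timestamp", 3, tag, keys[3], paths[3], root))  # True
```

The full signature is just (tag, key, path), which is what keeps the amortized
size small.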

The advantage for OTS is that the amortized size of the signatures would be
smaller than an ECC signature; plus, being hash-based, it's a
quantum-resistant scheme.

~~~
londons_explore
Your comment here would make a lot more sense to someone outside the project
if you included a one-sentence summary of what OpenTimestamps tries to offer.

~~~
petertodd
Sorry! I wanted to focus on the technical thing, rather than promoting it.

OpenTimestamps is a free, open-source standard for creating timestamps using
(mainly) Bitcoin as the root of trust. The calendar server system makes this
efficient by using merkle trees to aggregate an unlimited number of timestamp
requests into a single Bitcoin transaction, which is what allows it to be free
to use, funded by donations.

[https://opentimestamps.org/](https://opentimestamps.org/)

~~~
londons_explore
If I'm understanding correctly, it's the same as posting the SHA hash of some
secret data to twitter to prove you have some data.

At a later time, you can reveal the data, and others can see your earlier
tweet and be sure you have had the data since the tweet.

Except opentimestamps doesn't need to rely on the honesty of twitter.

~~~
petertodd
> If I'm understanding correctly, it's the same as posting the SHA hash of
> some secret data to twitter to prove you have some data.

Yup!

However, it's important to understand that the only thing OpenTimestamps can
do is prove data existed prior to some point in time. It can't prove data is
unique; Twitter, however, can.

A simple example: say I want to show off how good I am at predicting which
team is going to win some sports championship. With a Twitter hash commitment
I can only pick one team; with OpenTimestamps I could just timestamp every
single possible prediction in advance.

Which in most years would have worked! Though depending on the sports
championship, this year that trick wouldn't have been easy: I'd have had to
brute-force-timestamp a _lot_ of predictions to have a "Cancelled due to
COVID-19" timestamp handy. :)

