Tillitis Key – Mullvad spin-off inspired by measured boot and DICE (tillitis.se)
305 points by km on Sept 19, 2022 | 118 comments



Are you aware of Trussed, an initiative by SoloKeys and Nitrokey? https://solokeys.com/blogs/news/trussed-announcement / https://trussed.dev/

From what I understand, this is an API to write applications against a common interface, which can run on different hardware devices. An abstraction layer for security key apps. Similar to Java Card, but in a more modern way. Is this something that would or could be compatible with Tillitis?


I've been dreaming of an FPGA-based key since I read about Precursor. Not sure if it's yet possible to power it via NFC. But with that said, sharing at least the FIDO implementation would be outstanding.


Yes, I'm aware of it. I'm not sure if it's small enough for the Tillitis Key to be able to use it.


A bit off-topic: Can anyone recommend a platform that is production ready today, if I want to (develop and) deploy a custom Smartcard / HSM application in small scale? JavaCard seems to fit the bill, but I've not yet found an approachable tutorial.


JavaCard is the answer for smartcards. You can find example card software all over github, and you're looking for the JavaCard SDK from Oracle and GlobalPlatformPro to program them: https://github.com/martinpaljak/GlobalPlatformPro. There's even an ant task around somewhere that allows you to use ant tooling. Blank cards with "developer"/default keys can be picked up pretty much anywhere.

Buy blank cards, write your applet, test in an emulator if you want, push to card, test for real with your software that talks to the card, profit. Be aware that if your goal is to write custom cryptography implementations in Java on the Javacard, these will be prohibitively slow. No need to take my word for it, Niels Duif did exactly this: https://research.tue.nl/en/studentTheses/smart-card-implemen...

> Java Card proves to be a worthless platform for high-speed cryptography. Despite the speedups, generating a signature takes more than 28 minutes for a private key of 254 bits.

How is crypto done then? JavaCard provides APIs that do it, but these call implementations that either use coprocessors, or contain optimised implementations in the mask ROM. You can't program a mask ROM without doing a production run of smartcards in the hundreds of thousands. Small scale, this isn't possible.

HSM vendors will often sell SDKs for custom code, which you can add to certain models. The barrier to entry is simply that you need to buy an HSM, which isn't cheap. It can be done, however, and on the plus side, in my experience with Thales HSMs this means actual C code, so performant implementations are possible.


https://github.com/OpenSC/OpenSC

Note that "production ready" does not equate to "follow a YouTube video and write 17 lines of TypeScript." You need to know Java, you need to know crypto, and you need a few bucks to throw at the appropriate hardware. That said, the entire US DoD is built on JavaCard so it is as production grade as you can get.


Am I right in thinking that this is basically like a YubiKey, except with openness as the key differentiator?

Or is its function something else?


The Tillitis Key is a new kind of USB security key inspired by measured boot and DICE.

Tillitis Key’s design encourages developers to experiment with new security key applications and models in a way that makes adoption easier and less risky for end-users.

It offers both security and flexibility by being end-user programmable while also preventing applications loaded onto the device from knowing each other’s secrets. During use firmware on Tillitis Key derives a unique key for each application it runs by measuring it before execution. This is done by combining an application’s hash value with a unique per device secret. Applications are loaded onto the device from the host computer during use, and are not stored persistently on the device.

A user- or host-supplied secret can also be mixed into the key derivation function, providing further protection. A sophisticated physical attacker should be assumed to have knowledge of the target application’s hash, and will likely eventually succeed in extracting the UDS from the hardware. By adding a host-supplied secret, knowledge of the application used as well as the security key’s UDS is not sufficient to produce the application secret. This makes the security impact of a lost or stolen Tillitis Key less than for conventional security keys.

Device applications can be chain-loaded where the first application stage hands off its secret to the second stage. This improves user experience as it makes it possible for the application secret (and its public key) to remain the same even if a device application is updated. It also enables developers to define their own software update trust policies. A simple first-stage application might do code signing verification of the second stage, whereas a more advanced one will require m-of-n code signatures, or a Sigsum inclusion proof. Sigsum was designed with embedded use cases in mind.
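
As a rough illustration of what such a first-stage trust policy could look like (this is my own sketch, not the actual Tillitis loader; the choice of SHA-256, Ed25519 via PyNaCl, and all names are assumptions for the example):

    # Hypothetical first-stage loader policy: only hand the derived secret to a
    # second stage whose hash carries a valid Ed25519 code signature.
    import hashlib
    from nacl.signing import VerifyKey
    from nacl.exceptions import BadSignatureError

    TRUSTED_PUBKEY = b"\x00" * 32   # placeholder; a real loader embeds the vendor's key

    def hand_off_if_trusted(stage2: bytes, sig: bytes, stage1_secret: bytes) -> bytes:
        digest = hashlib.sha256(stage2).digest()
        try:
            VerifyKey(TRUSTED_PUBKEY).verify(digest, sig)   # raises if the signature is invalid
        except BadSignatureError:
            raise SystemExit("second stage fails the trust policy; keeping the secret")
        # Policy satisfied: the updated second stage inherits the same secret,
        # so keys derived from it stay stable across updates.
        return stage1_secret

An m-of-n policy, or a Sigsum inclusion-proof check, would slot in at the same point with a stricter verification step.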

Tillitis Key is and always will be open source hardware and software. Schematics, PCB design and FPGA design source as well as all software source code can be found on GitHub.

https://www.tillitis.se

https://github.com/tillitis/

(Full disclosure: I'm Fredrik Stromberg, cofounder of Mullvad VPN and co-designer of Tillitis Key)


What is DICE? I’m searching but the only results are the company and not whatever you’re referring to


https://trustedcomputinggroup.org/work-groups/dice-architect...

> The DICE Architectures Work Group is exploring new security and privacy technologies applicable to systems and components with or without a TPM. The goal is to develop new approaches to enhancing security and privacy with minimal silicon requirements. Even simple silicon capabilities combined with software techniques can establish a cryptographically strong device identity, attest software and security policy, and assist in safely deploying and verifying software updates.



So basically a pluggable HSM? Curious if this should be considered more similar to the YubiHSM 2 [0] (think dedicated HSM, not very useful for anything else but very secure) or the USB Armory [1], a very adaptable small ARM SBC with a USB port? (For this thread, option 3: more similar to a bone-stock YubiKey?)

Also curious if there are plans to support BLS signatures natively?

[0] https://www.yubico.com/product/yubihsm-2/

[1] https://inversepath.com/usbarmory


One question, with the two-stage approach, isn’t that a security risk? App 1 gets measured and can’t change, but it needs to identify app 2 in a way that still allows it to be updatable, invalidating the whole security chain.

I’m probably not understanding something, so I’d love an explanation (preferably one that non-cryptographers understand)


> One question, with the two-stage approach, isn’t that a security risk? App 1 gets measured and can’t change, but it needs to identify app 2 in a way that still allows it to be updatable, invalidating the whole security chain.

It doesn't invalidate it if it works as the application developer intended. The essential idea is that the first mutable boot stage contains a trust policy which somehow verifies the second stage. Let's say that's a valid signature over the hash value of the second mutable stage. The trusted public key is contained in the first stage.

What we've done there is first use measured boot, and then verified boot.

Measured boot is basically load-hash-measure-execute, where "measure" means "store the hash value of whatever I'm about to execute somewhere safe, where the soon-to-be-executed thing won't be able to undo my storing of its hash".

Verified boot, on the other hand, is about verifying the next stage and only executing it if the trust policy deems it valid.
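
In rough pseudocode (my own stubs, not the actual firmware), the difference is just what happens before execution:

    import hashlib

    measurement_register = []              # stands in for a write-once hardware register

    def measured_boot(next_stage: bytes) -> None:
        # Load-hash-measure-execute: record what is about to run somewhere the
        # executed code cannot later erase, then run it unconditionally.
        measurement_register.append(hashlib.sha256(next_stage).hexdigest())
        execute(next_stage)

    def verified_boot(next_stage: bytes, evidence, trust_policy) -> None:
        # Only execute the next stage if the trust policy says it is valid.
        if not trust_policy(next_stage, evidence):
            raise SystemExit("refusing to run unverified stage")
        execute(next_stage)

    def execute(stage: bytes) -> None:
        print(f"running a {len(stage)}-byte stage")    # stand-in for jumping to the code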

> I’m probably not understanding something, so I’d love an explanation (preferably one that non-cryptographers understand)

Honestly I'm beginning to realize it's not all that simple to explain.


Ah, so for example challenge-response/public key? The first stage knows the second stage’s public key, and can verify that it knows the private key.

That would make it less secure ("only" as secure as the private key), which is the trade-off the developer made in exchange for an upgradable app, correct?


IIUC, popular security key devices like the YubiKey securely store a private key, but only allow it to be used for specific authentication applications (e.g. OTP or U2F). Would the Tillitis Key be able to securely store a private key, then with appropriate authentication from the host, use that key for encryption and decryption?


> Would the Tillitis Key be able to securely store a private key, then with appropriate authentication from the host, use that key for encryption and decryption?

Yes. I don't think that would be very hard to do.


Devices like the YubiKey can also be used for encryption (e.g. with LUKS or OpenPGP), digital signatures and SSH authentication.


@kfreds thank you for the response, and for the commitment you have to this project.

>> ... this is basically like a yubikey ...

> ... new kind of USB security key ...

The things you have listed are indeed very nice, but they are not a new kind, as they are available elsewhere.

Can you give a bit more of a compare and contrast to the original question?

Again, thank you.


> The things you have listed are indeed very nice, but they are not a new kind, as they are available elsewhere.

Really? I wasn't aware that there is another USB security key with measured boot-based key derivation. Please provide a link!

> Can you give a bit more of a compare and contrast to the original question?

Except for Tillitis Key, all USB security keys I'm aware of either boot any software, or only boot software that has been signed with a key pair. Tillitis Key is different in that it measures the application, and uses that measurement to derive a secret specific to the application as well as the stick it's running on.


No, the integrity is within the device. You load the small (64k) apps onto the key, and the content of the app combined with the unique key of the device can be used by the app to perform cryptography, and the app's integrity can be audited. This is similar to JavaCard with cryptographic integrity of the applets. Read more at: https://github.com/tillitis/tillitis-key1/blob/main/doc/syst...


> A user- or host-supplied secret can also be mixed into the key derivation function

To clarify, this secret does not affect the program's hash, right? (e.g. to prove liveness, the parameter is a nonce to be signed with a deterministic private key)


No, the USS would be used (mixed in, as kfreds stated) during the hash operation. So the hash result would be based on (1) the hash of the application, (2) the unique device secret, and (3) the user-supplied secret. The result is called the Compound Device Identity (CDI) in DICE parlance, and is basically:

CDI = Hash(UDS, Hash(application) + USS)

If the application uses the resulting CDI to derive a pair of keys, the keys are thus based on the hardware (the specific device you have), the integrity of the application, and what you know.
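
A minimal sketch of that derivation in Python (BLAKE2s is the hash mentioned elsewhere in the thread; the exact input encoding in the real firmware may differ, and the key-pair step is just an example):

    from hashlib import blake2s
    from nacl.signing import SigningKey

    def derive_cdi(uds: bytes, application: bytes, uss: bytes = b"") -> bytes:
        app_hash = blake2s(application).digest()
        # Keyed BLAKE2s with the 32-byte UDS as key over app hash + user secret.
        return blake2s(app_hash + uss, key=uds).digest()

    # Change any one of the device (UDS), the application code, or the user-supplied
    # secret (USS) and the CDI -- and any keys derived from it -- changes too.
    cdi = derive_cdi(uds=b"\x01" * 32, application=b"app binary", uss=b"my passphrase")
    signing_key = SigningKey(cdi)            # e.g. use the CDI as an Ed25519 seed
    print(signing_key.verify_key.encode().hex())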


Thanks for the detailed response!


According to [1]

> It offers both security and flexibility by being end-user programmable while also preventing applications loaded onto the device from knowing each other’s secrets. During use firmware on Tillitis Key derives a unique key for each application it runs by measuring it before execution. This is done by combining an application’s hash value with a unique per device secret. Applications are loaded onto the device from the host computer during use, and are not stored persistently on the device.

So the idea here is:

* General purpose, reprogrammable security coprocessor

* If you save secrets with application A, then install evil application B, it can't access the secrets from A.

* And if you revert back to A, those saved secrets will still be there.

* Therefore, it's more practical to run two different applications - and safer to experiment with your own applications, because you won't lose all your website logins.

[1] https://www.tillitis.se/


> * If you save secrets with application A, then install evil application B, it can't access the secrets from A.

> * And if you revert back to A, those saved secrets will still be there.

What stops app B from pretending it's app A?


1. The hardware contains a UDS (unique per device secret) which can only be read once per boot cycle.

2. Firmware in ROM does unconditional measurement of the first mutable boot stage, which is loaded from the host, over USB.

The KDF used for measurement is Blake2s(UDS, Blake2s(application), USS).

Note that when I say hardware I mean FPGA hardware design.


Again, what stops malicious app B from just taking A's ID and presenting it to the device? The token doesn't know who sent the USB packet.


I'm not sure I understand your question.

If you're asking about applications running on the device (Tillitis Key) the answer is measured boot. You can read more on tillitis.se.


I think the app is installed on the stick itself, kind of like how you install coin apps on a Ledger.


It would have to hash to the same value as app A


The device is not checking the app's hash though; it has no way to verify that the USB frame containing that hash is really from app A.


The app runs on the USB device. The code is loaded from the host, and if it hashes to the correct value, it will be able to access the secrets on the Tillitis.


Something like 'Secure Boot' / 'Measured Boot' on modern PCs, I imagine.

A bootloader will checksum the current application before running it, checking its digital signatures and version and whatnot, and deriving an encryption key based on that.


It is an FPGA, fully open at both the software and hardware level. So quite a bit more future-proof, inspectable and upgradable than a YubiKey.


(For full disclosure I am the primary FPGA designer of TillitisKey.)

It also performs a measurement of the application being loaded. The measurement, together with the Unique Device Secret (UDS), is used to generate the primary secret that an application can use to derive whatever keys it needs. This means that you can verify the application's integrity.

This is very close to, and inspired by, DICE: https://www.microsoft.com/en-us/research/project/dice-device...


Did you design the board? It looks sick, such high density of components on the top layer.


No, the board design was done by the wizard Matt Mets at https://blinkinlabs.com/


OMG. Just saw the Thinking Machines CM-2 replica on their homepage. What an awesome idea.


Yeah, super cool. An old-school Lisp machine would be cool as well.


Does this mean that a software upgrade will change the keys?


For now, yes. But as Fredrik (kfreds) has written in another comment, what is possible is a two-stage approach, with an application (which gets measured) loading other applications.


What exactly is the “measurement”? A hash of the application code?


Yes. The hash of the application code and the 256-bit Unique Device Secret are hashed together to generate a primary secret, which the application can then use to derive the secrets it needs.

You can additionally supply a secret from the host (the User Supplied Secret). This means that the keys generated are tied to the specific device (the UDS), to the integrity of the application, and to you as a user.


I was under the impression that not being upgradeable was a security feature of Yubikeys?


It is. And therein lies the innovation here: upgradable with verification.


Not exactly sure, but the OSFC conference page has some extra info on what it can do: https://www.osfc.io/2022/talks/tillitis-key-a-usb-security-k...

Maybe it will be ~YubiKey plus extras?


So Solokey V2. Not that I'm complaining by the way. Any open competition to yubikey is a win in my book.


I really dislike how "Yubikey" is being used in many places as a name for U2F and FIDO2.


It's the same as "Google Authenticator" being used instead of TOTP. I think it's reasonable for apps' documentation to meet users on their turf to aid understanding.


This is because the FIDO2 libraries follow a lot of de facto standardisation: they either match YubiKey or Windows Hello, the only real implementations in the wild. FIDO2 CTAP extensions would support all the use cases of the new device, except that you won't be able to use them e.g. in Windows, because vendors ignore all the openness in order to push their own agenda.


Probably for use with something like secure enclave.



Good VPN company (one of the best) and good idea (sounds like the USB Armory). But the best they can do is assure us that their VMs are not logging anything and that they keep their other promises. Will they also be able to share details of their hosting setup in a way you can independently verify (because they can always have more transparent traffic-logging middleware VMs)? I doubt it, and the same goes for whomever they use for hosting.

My point is, while I don't subscribe to the extremes of pro- or anti-VPN sentiment, having a good understanding of what services like this can and cannot do, and performing rudimentary security and privacy risk assessment, is essential before trusting them with all your traffic.


> Good VPN company (one of the best) and good idea (sounds like the USB Armory). But the best they can do is assure us that their VMs are not logging anything and that they keep their other promises. Will they also be able to share details of their hosting setup in a way you can independently verify (because they can always have more transparent traffic-logging middleware VMs)? I doubt it, and the same goes for whomever they use for hosting.

We are working on this as part of the System Transparency project.

https://system-transparency.org/

Disclaimer: I work on this.

Beyond this, penetration testing reports on the Mullvad infrastructure are public.


I’ve always wondered what is feasible through a state-issued mandate along with a gag order to circumvent the technology for something like this.


That's what I mean about risk assessment. You should not expect Mullvad or any other legally liable organization to resist lawful orders or unlawful coercion; these are not reasonable expectations, and your security posture should account for that.


Couldn't this be solved by something like remote attestation?


Thanks for the response and your transparency, it looks like you folks really believe in your mission.

The most revolutionary thing you are doing, in my opinion, is registration- and email-free account management, plus accepting various forms of payment. You are way ahead of your time! Other apps and sites outside of VPN services would do well to follow your example.


Let's not frame this as trust them vs. don't; it's trust them vs. trust your ISP. On one hand, you have a company that seems to be doing as much as possible to commit to privacy, and on the other, a company that straight up tells you they're monitoring you and sending the data all over the place. Does that scale really tilt differently if you point out there's a non-zero chance the first company is secretly just as bad as the second?


Not just your ISP, but the ISP's ISP, local police, government, etc. One thing people seem to forget in such situations is your ISP router's security: is it better or worse than Mullvad's security? A compromised router can and has (see VPNFilter) rerouted traffic through attacker-compromised infrastructure for MITM, and most people have to accept whatever crappy gateway is given to them. If you are on arbitrary Wi-Fi networks, the chances of joining a compromised network rise with every network you join, whereas having one constant potential point of failure, which you can reasonably take measures to account for, is better.


Unimportant trivia from a Swedish speaker:

Mullvad means mole (for the tunnelling more than planted spy connotations, hopefully), and the "Tillit" part of the name means trust.

They're working on some IKEA style naming, which I enjoy.


Thanks! The other news of today is that we've started a second sister company - Glasklar Teknik AB - which will focus on maintenance and development of System Transparency and Sigsum.

System Transparency: Mullvad's security architecture we'll use to eventually make our running VPN systems transparent.

Sigsum: A transparency log design with distributed trust assumptions (witness cosigning).


Glad to hear it! Both valiant efforts, and good naming here too.

For non-speakers: "Glasklar" literally means "glass clear", but it makes more sense to explain it as the Swedish equivalent of "clear as day".


You can say "crystal clear" in English; it's a bit closer to the original.


Cool! Always nice to see extra competition in this space.

One thing I've wanted for a while is a way to properly back up a WebAuthn token. An approach I discussed a couple of weeks ago [1] was:

1: Generate on-hardware webauthn master key on device A.

2: Generate on-hardware key-pair on device B

3: Export B’s public key, import to A

4: On Device A: Encrypt master key with B’s public key

5: Export encrypted master key to B

6: Decrypt on B

I guess this would probably be possible with this device? Perhaps there's some even more clever way to do it.

[1]: https://news.ycombinator.com/item?id=32621426
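
A rough sketch of steps 2-6, using PyNaCl's SealedBox as a stand-in for whatever asymmetric encryption the devices would actually implement on-hardware (the steps themselves would happen inside the devices; this just shows the data flow):

    from nacl.public import PrivateKey, SealedBox
    from nacl.utils import random

    master_key = random(32)                    # 1: master key, generated on device A

    device_b_key = PrivateKey.generate()       # 2: key pair generated on device B
    device_b_pub = device_b_key.public_key     # 3: public key exported from B, imported to A

    wrapped = SealedBox(device_b_pub).encrypt(master_key)   # 4: A encrypts to B's key
                                                             # 5: export ciphertext to B
    restored = SealedBox(device_b_key).decrypt(wrapped)     # 6: B decrypts
    assert restored == master_key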


Hi! Interesting. Which company do you work for?

Yes, that'd be possible. I don't know how webauthn works, but if it relies on ECC you could probably do ECDH between all security keys you wanted to carry your master key, and then use the combined ECDH values as the master key.
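
For two keys that could look roughly like this (my own sketch; X25519 via the cryptography package and BLAKE2s are assumptions for the example, and with more keys you'd combine the pairwise shared values the same way):

    from hashlib import blake2s
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    key_a = X25519PrivateKey.generate()        # stays inside security key A
    key_b = X25519PrivateKey.generate()        # stays inside security key B

    # Each device computes the same shared value from its own private key and the
    # other's public key, so either one can reconstruct the master key.
    shared_a = key_a.exchange(key_b.public_key())
    shared_b = key_b.exchange(key_a.public_key())
    assert shared_a == shared_b

    master_key = blake2s(shared_a).digest()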


I work at a consulting firm in Stockholm, my remark about competition was solely from a user/hacker standpoint. My day job is more about security in the higher levels of the stack :-)

Cool! It's been a while since I looked at the standards, but I think there is definitely support for using ECC-based algorithms.

Looking forward to following your journey!


Ah, OK! :)


It strikes me that open source hardware should be more common. It's surely much easier to monetise than open source software: you just sell the hardware, because no one wants to build one themselves. Why isn't it?


Because hardware is, well, hard. There is a huge upfront investment that isn't even remotely comparable to the amount of money you can spend on software development, and equally huge incentives for third parties to undercut you by taking your designs, manufacturing them for cheap and offloading support onto you (as already pointed out Arduino is a great example of this happening in real life). Even if everything is open source you have to build an entire business and marketing department around selling the hardware, while with pure software you can just put it up on GitHub and call it a day.

Not to mention that in this day and age every piece of hardware has software at its core, so open source hardware does not save you from also writing the code that runs on it. If anything developing open source firmware is actually harder, because most chip vendors expect your product to be closed source and want you to sign countless NDAs before you can access a datasheet or even just buy the chips. You are restricted to using older and/or more expensive parts whose documentation is freely available; it's the exact opposite of the software world, where the latest and greatest is one npm install away.


No one wants to build it themselves until people actually want it. If your device is popular, then your device is $2 on AliExpress/eBay and you have no part in that; look at Arduino for a good example.


This... if it's successful, a knockoff of your device will end up on eBay, AliExpress, Amazon, etc., often with sketchy parts, at a price you'll never match. There are some exceptions, where you're able to make the product depend on a part that's hard to get cheap, etc.


Just from reading comments and articles, I’d guess that’s because you very often rely on 3rd-party parts that are not open. So you either need to limit what you use, or design the whole thing.


It's risky. You still need the software, otherwise only some devs and tinkerers will buy it - and even for them, there had better be toolchains, etc.


From the photo, that looks like a stock iCE40 FPGA, which does not support hardware attestation of the loaded bitstream. How does the user verify that the FPGA loaded the expected bitstream instead of something with a backdoor? A DICE chain that is not rooted in physical, immutable hardware isn't very useful.


> From the photo, that looks like a stock iCE40 FPGA, which does not support hardware attestation of the loaded bitstream.

Which FPGA models support _attestation_ of the loaded bitstream? Do any?

> How does the user verify that the FPGA loaded the expected bitstream instead of something with a backdoor?

It's a Lattice ice40up5k, which contains a programmable and lockable NVCM memory in-package. The engineering samples we handed out today at OSFC store the FPGA configuration bitstream on a SPI flash memory though.

> A DICE chain that is not rooted in physical, immutable hardware isn't very useful.

When we start selling them we'll likely sell both security keys with pre-provisioned bitstreams in NVCM as well as unprovisioned security keys so you can provision your own.


An interesting approach to vendor-independent attestation was outlined in [1]. Basically the bitstream is fed into a physical unclonable function (PUF) which is used to derive a key to decrypt the rest of the bitstream. For attestation, one could simply store the secret part of an asymmetric key in the encrypted bitstream (for challenge-response).

[1]: An Autonomous, Self-Authenticating, and Self-Contained Secure Boot Process for Field-Programmable Gate Arrays, https://www.mdpi.com/2410-387X/2/3/15


> Which FPGA models support _attestation_ of the loaded bitstream? Do any?

I haven't seen this feature yet, but I desperately want it on every FPGA I use. NVCM eliminates most of the benefits of using an FPGA...


So basically a pluggable HSM? Curious if this should be considered more similar to the YubiHSM 2 [0] (think dedicated HSM, not very useful for anything else but very secure) or the USB Armory [1], a very adaptable small ARM SBC with a USB port?

[0] https://www.yubico.com/product/yubihsm-2/

[1] https://inversepath.com/usbarmory


Yes, except YubiHSM boots using verified boot. In the case of Tillitis Key each program gets its own secret key, which it can use to e.g. derive a key pair.


Or CrypTech: https://cryptech.is/

There are things developed by the CrypTech project I would like to try and reuse for TillitisKey.


Very cool, thanks for the link! This context helps a ton :)



I'm not sure what problem this solves. I see per-application keys based on the hash of the application, but wouldn't this prevent updates of those applications without key loss? It's clear to me that this device can be used for _some_ kind of cryptographic operation/verification mechanism, but I'm at a loss for what problem this is actually designed to solve.

What's the practical application of this key?


Tillitis Key’s design encourages developers to experiment with new security key applications and models in a way that makes adoption easier and less risky for end-users.

You can read more on tillitis.se or in the comment I made below.

Tillitis Key will allow you to chain-load applications. This means that you could have a thin loader which does code signing verification of the next application stage, and hands off the secret to it. Basically it's a trust policy that defines under what circumstances the next application stage gets the secret.

Another trust policy the loader could have is requiring m-of-n code signatures, or perhaps that as well as transparency log inclusion. Check out sigsum.org.



The app key would need to stay the same, but I can't think of a mechanism that would deny one app trying to pretend it's another.

Also, the fact that it doesn't emulate a smartcard means every single piece of software supporting it would have to implement a special client, so yeah, that's a problem.

"Just" a smartcard allows for, at the very least, GPG signing and an SSH agent without much fuss, and also HTTPS client cert auth.


I believe the only thing needed is someone writing a PKCS #11 driver for it, then it should be interoperable.


We used to use the same version of applications for years.

It's OK to say this has a serious limitation in that it can't easily support updating applications, but that hardly rules out it being useful at all.


Quick question about such devices: can I use stuff like Yubikey or similar to luksOpen a crypt device during boot or operation?

Thanks in advance.


Yes, there are multiple ways. Systemd offers systemd-cryptenroll, which works with FIDO2 and X.509 certificates on the hardware key to unlock a drive.

The key is embedded in the LUKS header of the partition.

The information about the key and the device is passed to the initrd through /etc/crypttab for unlocking during boot.

I wrote a couple of posts describing how this can be sort-of-handrolled with a Nitrokey and a GPG key for the X.509 cert:

https://vtimofeenko.com/posts/unlocking-luks2-with-x509-nitr...



This doesn't solve cookie theft, but this does massively raise the bar for hackers getting into service provider environments. Couldn't think of a better time to launch.


Being FPGA-based is, from what I can tell, a brilliant idea. It makes it possible to fix hardware-level security issues. I was shocked by the recent exploit found in the Google Pixel's security chip, since I rely heavily on it (using GrapheneOS). An unfixable hardware-level bug in it would turn my phone into e-waste for me. This solves that quite elegantly.


Wonder how this compares to NitroKey, which also is open, but not FPGA-based.

It's almost certainly a good time to update https://lwn.net/Articles/736231/


The most important part of this project is in the very last sentence: it's all implemented on an FPGA (one which doesn't have any backdoorable-for-surveillance hard cores).

Without that, none of the other stuff would be trustable.


Note that we specifically chose the Lattice ice40 UltraPlus 5K because:

- It is supported by an open-source FPGA toolchain

- It has an in-package non-volatile configuration memory (NVCM) that is lockable. This is where we'll eventually keep the FPGA configuration bitstream, including the unique per-device secret.

After some reverse-engineering work we're also able to program and lock NVCM with open tooling, as opposed to having to use Lattice's proprietary one.


RYF certification?


From a security perspective, people should glue their USB ports shut. Do the people endorsing this crap even consider that anyone with a bit of know-how can buy a malicious replica?


The name sounds like a disease.


It’s a play on the Swedish word ”tillit”, which means trust. So tillitiskey = trust is key.


Probably a play on words in Swedish, "tillit" means trust / confidence.


Move over, IKEA.


The letters themselves form the side profile of a restaurant dining room in disarray.


It’s tillitating.


> Something that makes the key unique is the fact that both its software and hardware are open source

Aren't SoloKeys [1] also open hardware and software? Or is the Tillitis key more general purpose and thus not in the same category?

[1] https://solokeys.com/


My understanding is that it's both a more general platform (targeting more than 2FA) and also uses an FPGA running open-source code, so that the "secure enclave" functionality can be inspected and found to be secure, rather than just trusting NXP/ARM's chip as SoloKeys have done.


Correct. I think the difference is just NFC.

If you want to power your key via NFC (tap to phone to authenticate), you need a micro which consumes very little, powers up quickly and can do a signature before the FIDO protocol times out. I'm not sure this is currently possible with an FPGA, but maybe it is.


FTR SoloKeys targets FIDO2, not just U2F


The TillitisKey should be able to be used for FIDO2, as a TOTP generator, etc. Right now there is an SSH agent application, which allows you to sign in by touching the device.

Personally I'm very excited to see what applications will be developed at the hackathon at the OSFC conference, and onwards. We have had people at the conference showing interest in trying to write applications in Rust. I will try and implement an application of my own tomorrow.


SSH supports FIDO2, so I'm not really sure of the purpose of having an agent.


I think what they mean is that this can be reprogrammed for more use cases than FIDO2 and U2F; it can, say, be programmed to support my own homegrown thing that I've made up just now, or even a more general concept than just getting into things, perhaps.


Yes. And your application will get a per-device unique primary secret when loaded, which the application can then use for whatever it needs. (Including not using it at all.)

TOTP, FIDO2, PIV, simple touch-triggered challenge/response... or something completely different. If it can fit in around 100 kByte of RAM when compiled for RV32IMC and not be too computationally expensive, it could be a Tillitis app.

Just to give you some indication, the Ed25519 signing operation in the SSH authentication we showed on stage today takes ~1 second. And we already know of several ways to improve that.


Good explanation. Thank you.



To be fair, this thread was posted first.


OK, we'll merge that one hither.



