Signal under fire for storing encryption keys in plaintext (stackdiary.com)
30 points by doublextremevil 3 months ago | 19 comments



How are they supposed to store them? Unless there's a password required to use the app, is the only other option a fixed key, or maybe something derived from the account name? All of this feels like obfuscation, though: if we're targeting malware specifically designed to collect Signal data, it can probably just work around this. Maybe a TPM? Or an OS-provided keychain?


macOS provides a keychain for use by applications.
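For an Electron app like Signal Desktop, one low-friction option is Electron's safeStorage API, which encrypts data with a secret the OS holds for the app (the Keychain on macOS, DPAPI on Windows, kwallet/gnome-keyring on Linux). A minimal sketch, assuming a hypothetical file layout; the file name and function names are made up:

    import { app, safeStorage } from 'electron';
    import { promises as fs } from 'fs';
    import * as path from 'path';

    // Hypothetical location for the encrypted database key; not Signal's actual layout.
    const keyFile = () => path.join(app.getPath('userData'), 'db-key.enc');

    // Encrypt with an OS-held secret instead of writing the key to disk
    // in plaintext. safeStorage is only usable after the 'ready' event.
    export async function storeDbKey(dbKey: string): Promise<void> {
      if (!safeStorage.isEncryptionAvailable()) {
        throw new Error('OS-backed encryption unavailable; refusing plaintext fallback');
      }
      await fs.writeFile(keyFile(), safeStorage.encryptString(dbKey));
    }

    export async function loadDbKey(): Promise<string> {
      return safeStorage.decryptString(await fs.readFile(keyFile()));
    }

The key file on disk is then useless without the OS-held secret, though malware running as the same user may still be able to ask the OS to decrypt it, which is the debate below.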


What is the threat model here?

When the device is stolen or lost, data at rest is better protected by full-disk encryption.

When the account running Signal or the machine as a whole has been taken over, the attacker can observe everything the user does and do everything the user is entitled to do. Whatever data is accessible to the user will be accessible to the attacker, and there's nothing anyone can do about it.


If a user is tricked into running malware, that malware can apparently get at these keys, whereas on macOS, had they been stored in the keychain, that would have been impossible.


This isn’t true. You have full access to the Keychain without any authentication via the CLI. Try it.


I have very little experience with the threat model associated with the macOS keychain and how its promises are enforced. A cursory look into its features reveals an export/import function for elements (except passwords) secured in the keychain. An attacker could then just export the keys.

The next question would be whether the keychain just hands out the keys to the application to be used. How would it ever identify a program as being eligible to receive a certain key? An attacker could just ask the keychain for the key.

A user is commonly able to read the process memory of any process running under his account. An attacker could extract the keys from process memory. And so on and so forth.

So you very likely end up where you were before: an attacker able to execute code in the context of a user can steal that user's data. The only thing that varies is how much work it takes.

In theory Apple may have exposed a security processor akin to a smart card, with which Signal could generate public/private key pairs where the private key resides in the hardware and cannot be extracted. It does not appear the macOS keychain works that way.


It uses ACLs and access groups, so a malicious app should not be able to access keychain items belonging to other apps. Sure, it might be possible to get the key out of process memory, but that raises the bar significantly for an attacker. Most infostealer malware is pretty basic and just looks for files A, B, C, etc.

https://developer.apple.com/documentation/security/keychain_...


The CLI gives you full access to the Keychain with no authentication. Try it.


>If a user is tricked into running malware

...then it's over.


I believe it could be useful to update the title to mention that this only happens in the desktop application. I personally only use the mobile app, and while I'm aware I might be biased, I didn't even know a desktop Signal application existed.

Not saying the severity of the issue is lower because of it, just that it might not impact as many people as the current title would suggest.


Credential protection in desktop apps, state of the art on each platform, July 2024 edition:

-- macOS --

On macOS protecting credentials is easy and effective. You can store secrets in two places:

1. In the system keychain. This is the best place, because the keystore database records the code signing identity of each app that creates an entry, and any other app that tries to access those credentials will trigger a permission check by the OS. The permission check window can't be tampered with by other OS processes, nor can your own app be tampered with, because the OS will block debugger APIs and also block attempts to overwrite code files on disk. There are APIs for this in whatever framework or language you're using (a sketch follows at the end of this section). Browsers protect cookies and local storage for you by using this mechanism (the data is readable on disk but encrypted using a key stored in the keychain).

2. If you opted in to the App Sandbox, you can store data in ~/Library/Preferences, which is virtualized and redirected to what Apple calls a "container". Apps that try to access files in another app's container also trigger a permission prompt.

As a quick test to prove to yourself this works, make sure you're on the latest version of macOS, go into System Settings, Privacy & Security, and ensure that under "full disk access" your terminal app is unchecked. If you toggle it off you'll need to restart the process. Now run e.g. `ls Library/Containers/desktop.WhatsApp/Data` and note you get a permission prompt. This block exists for ALL APPS, i.e. every app on macOS is sandboxed, even legacy apps and apps that don't opt in to what Apple confusingly calls the "app sandbox" (they mean something like "stricter app sandbox").

Storing credentials is easy on macOS because of the many years of work Apple's security engineers have quietly done, mostly without disruption. Every app on macOS has a cryptographically verified identity, which is the hardest part. The rollout of all of this has been so smooth that most devs don't even realize how much protection the OS now provides. Let's take a moment to note that this is worthy of serious respect. macOS is by far the most secure desktop OS out there today, IMHO.
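To make option 1 concrete for an Electron-style app, here's a rough sketch using the third-party keytar module (a Node binding over Keychain/DPAPI/libsecret); the service and account names are invented:

    import * as keytar from 'keytar';

    // Invented identifiers for illustration only.
    const SERVICE = 'org.example.mymessenger';
    const ACCOUNT = 'db-encryption-key';

    // Creates a keychain item. On macOS the item records the creating app's
    // code-signing identity, so another app reading it triggers the OS
    // permission check described above.
    export async function saveKey(key: string): Promise<void> {
      await keytar.setPassword(SERVICE, ACCOUNT, key);
    }

    export async function readKey(): Promise<string | null> {
      return keytar.getPassword(SERVICE, ACCOUNT);
    }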

-- Windows --

Possible but much harder and mostly undocumented. I haven't yet tried it myself, but it's on the todo list. Firstly, note that protecting credentials from other apps requires the OS to understand app identity, i.e. what files belong to an app and who signed them; the OS must then stop other apps from tampering with those files or with the address space of the running program. The only supported way to do this on Windows is by packaging and installing your app with MSIX. If you haven't encountered this format before, please open my bio and follow the link to my company's product; the user guide will tell you all about it. If you aren't shipping your app with MSIX, forget it: there's no way to protect credentials from other apps, only from other users, using regular file or keychain permissions.

If you do use MSIX, then it's possible to create directories that only your app can read, but which are still owned by the user, i.e. the user can override the protection using Explorer if they want to. The way to do this is unfortunately far too complicated to explain here and involves some serious Windows API voodoo.

-- Linux --

As always on Linux, "it depends", but it's basically not possible in a widely supported and portable way. Whilst the Linux kernel has all the needed tools, they're not knitted together across the userspace and packaging ecosystem to make it possible. The closest thing to app identity is either SELinux (more or less Red Hat specific) or Flatpaks. Neither is understood by system keychains, AFAIK. Debugger APIs aren't really locked down either, and you can't make a directory in $HOME that only a specific Flatpak can read.


> the keystore database records the code signing identity of each app that creates an entry, and any other app that tries to access those credentials will trigger a permission check by the OS

Except for the CLI, which has full access to the keychain.


No, CLI apps also trigger access checks. Like I said, try it in your terminal using "ls". You'll find that trying to access a sandboxed container triggers a permission check.


Linux has the Secret Service API, implemented by KWallet and GNOME Keyring. They are used by default in GNOME and Plasma, or can be installed independently anywhere else.


[vouched to bring your post back from the dead]

Yes, that's true, but the Secret Service API isn't really that useful, last time I looked. It isn't capable of stopping one app from reading the secrets of another, which makes it little different from a regular file stored in the user's $HOME.
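To illustrate the point with gnome-keyring: once the login collection is unlocked, any process in the session can read entries another app created, via the same API. A sketch using the third-party keytar binding (libsecret-backed on Linux); the service name is invented, and this assumes the other app stored its secret through the Secret Service:

    import * as keytar from 'keytar';

    // Nothing ties a Secret Service entry to the app that created it, so any
    // process in the unlocked session can enumerate another service's secrets.
    async function snoop(): Promise<void> {
      // 'org.example.victim-app' is a made-up service name for illustration.
      const creds = await keytar.findCredentials('org.example.victim-app');
      for (const { account, password } of creds) {
        console.log(account, password); // no per-app prompt from gnome-keyring
      }
    }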


I'm not sure how much access each application has over the entire secret store; however, it does ask the user before allowing access, which is still better than a plain user-readable file in $HOME.

https://docs.kde.org/stable5/en/kwalletmanager/kwallet5/kwal...


As a reminder, this is just the desktop version, not Android.

Good catch. This kind of technical analysis makes Signal a stronger application. Now if only someone could convince Marlinspike to get off his high horse and decentralize the service.

AWS is a terrible host. It's all too willing to de-platform, as was evidenced when Senator Joe Lieberman politely requested they de-platform Assange and WikiLeaks.


Very, very early on in Signal, back when it was still called TextSecure, they did actually federate with other servers, in particular with servers run by CyanogenMod (the predecessor of LineageOS, briefly a commercial startup, now defunct).

https://signal.org/blog/cyanogen-integration/

It seems like this was a very bad experience for Open Whisper Systems, which is why they don't like decentralization.

Maybe the arguments for and against have changed since then.

But it's not like they just refuse to try it for no reason.


Moxie wrote about why they stopped: https://signal.org/blog/the-ecosystem-is-moving/



