Am I the only one a bit worried about them using a fixed string as a salt? A salt is intended to make it hard to create a rainbow table. I don't know how much entropy is in their 'folderID' variable, but given that it's a 'short string', it seems low-entropy and not random. If so, the current implementation makes it trivial to build a rainbow table: if you can get the passwordtoken and know a folderID, you can precompute a table that maps all possible passwordtokens back to valid passwords.
BTW: 'worried' as in 'code smell', not 'worried' as in 'the encryption can be easily broken'.
The question is how trivial it is to get the folder ID. If it's just an incrementing integer, is it really providing a good salt? I'm not actually asking that question or asserting that it isn't, just explaining what you missed.
That would mean each salt is about 36 bits. If you created 2^18 ≈ 262k folders in your lifetime using the same algorithm and the same password, the birthday bound gives roughly a 40% chance that two of the salts collide.
Maybe we can wave this off as good enough, but cryptography usually holds itself to higher standards.
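For illustration, here's a minimal Python sketch of the difference (this is not Syncthing's actual code; the scrypt parameters, the fixed-string value, and the folder ID format are all made-up assumptions):

```python
import hashlib
import os

password = b"correct horse battery staple"

# Low-entropy salt: a fixed string plus a short, guessable folder ID.
# An attacker who learns the folder ID can precompute a rainbow table for it.
folder_id = b"abcde-fghij"                       # made-up example ID
weak_key = hashlib.scrypt(password, salt=b"fixed-string" + folder_id,
                          n=2**14, r=8, p=1, dklen=32)

# Random per-folder salt: 16 bytes from the OS CSPRNG, stored next to the
# ciphertext. Precomputation is useless because every folder salts differently.
salt = os.urandom(16)
strong_key = hashlib.scrypt(password, salt=salt,
                            n=2**14, r=8, p=1, dklen=32)
```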
> In addition the untrusted device must not be able to modify, remove or introduce data by itself without detection.
It sounds like that property is satisfied on a file-by-file level. But what about between files? Can the untrusted device delete some files without it being detected? Or un-delete deleted files? Or selectively revert files to older versions? The documentation describes how each file gets a signed blob containing the file’s metadata and hashes of its contents, but there’s no mention of any signed structure containing the state of the directory tree as a whole. So judging (only) by the documentation, it sounds like the answer to all of those questions is yes.
YMMV on how big of an issue that is. It helps that the attacker theoretically shouldn’t know which file is which. But imagine, say, a Git checkout of an open-source codebase being stored in a Syncthing folder. The attacker can guess which file is which based on file sizes and modification patterns. At that point, selectively reverting files might be able to mess up the code being stored, or associated configuration, in a way that creates some sort of vulnerability.
Apologies if I’m misunderstanding the security guarantees; again, I’m only going off of what the linked page says.
You might be interested in Peergos[0], where the server is treated as an adversary and can't see file sizes, number of files, etc. Everything is stored in Merkle trees whose roots are signed, so it's not possible for a mirror to selectively remove a file or reinsert an old version of one. The server doesn't even know what constitutes a file.
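For anyone curious what that buys you, here's a rough Python sketch of the general idea (not Peergos' actual format; paths and contents are placeholders): hash each file entry, build a Merkle tree over the hashes, and sign only the root.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves) -> bytes:
    """Combine leaf hashes pairwise until a single root remains."""
    if not leaves:
        return h(b"")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                     # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Each leaf binds an (encrypted) path to the hash of its (encrypted) content.
entries = {"enc/aa3f": b"ciphertext-1", "enc/b71c": b"ciphertext-2"}
leaves = [h(path.encode() + b"\x00" + h(content))
          for path, content in sorted(entries.items())]

root = merkle_root(leaves)
# A trusted device signs `root` (e.g. with Ed25519). A mirror that drops a
# file, reinserts an old version, or reorders entries can no longer produce
# a tree matching the signed root, so the tampering is detectable.
```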
You are expecting too much. You want something like the Noise protocol (which Signal uses for messages) but for files, plus the guarantees of a blockchain. Ensuring deleted files are not kept is impossible. If you want that, use an encryption container such as VeraCrypt, where all the files are just one big binary blob.
Defining covered threat models would be useful, though.
It's also a poor choice for an "always on" setup, since it depends on your home Internet/VPN/power working, which is probably true for your other clients too.
If I'm running Syncthing on a desktop computer at home and a laptop with me on the go, adding a Raspberry Pi to the same home network doesn't really improve resiliency of the sync service. But if I have a server in a completely different location, it actually will.
If someone manages to access the running Raspberry Pi, though, FDE doesn't protect against that, while Syncthing's untrusted device encryption does.
The threat model described by the post above you is actually not about physical access. It's about the Pi getting hacked remotely.
If you use Syncthing's encryption then at no point is the decrypted content available to the Pi. It gets decrypted locally by other Syncthing peers after they have downloaded it.
Besides, there's still a difference between kinds of physical access: plain, non-targeted burglaries (however profitable the burglar expects them to be) are way more common than violent, targeted attacks meant to extract a secret from an individual.
Maybe a better example might be wanting to use a smartphone as the always on sync endpoint. The phone can easily be stolen but with this feature it won't contain the valuable data.
You send it a WoL packet[0], use key-based SSH to log in to the initramfs environment[1], and type in your password. Or if you have a TPM you can just stick encryption keys there. Do note that if the device lacks secure boot or such, this is vulnerable to an attack where the initramfs is modified to steal your password; how bad this is depends on your threat model.
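The wake-up part is the easy bit; here's a minimal Python sketch of the magic packet (the MAC address is a placeholder):

```python
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

wake("aa:bb:cc:dd:ee:ff")
```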
It depends on the setup; disk encryption has disadvantages, such as needing to unlock it each time you reboot (and if that's the root partition, you can't really boot unattended). It can be advantageous not to have to trust the server and to have an unencrypted ZFS dataset for this.
I've used dropbear-initramfs on both Debian and Ubuntu to remote-unlock hosts with encrypted root filesystems successfully. It'd be nice if it were better supported though.
I understood that point. The raspberry pi would still serve as an always on endpoint regardless of the untrusted device encryption feature.
In my case I don't use this feature on the Pi, but I do use it on my phone, which has 256 GB of storage. I have configured Syncthing to only run when the phone is charging, so it is not always on.
Yes you do. Any device that doesn't support good enough FDE should use Syncthing's encryption, because you usually want the device to boot and start operating unattended, which leaves it quite vulnerable.
Not necessarily. Let's say I have a node at my in-laws' house. I would use untrusted device encryption in case anyone decided to take the drive and have a look.
My biggest problem with how this feature is implemented is that the encryption passphrase is stored in plain text in the syncthing config file.
I wish there was a way to lock the passphrase in the session keyring, with syncthing only sending files to untrusted devices if the keyring is unlocked.
Yes, because more than one process can access the file.
A "password manager" provides a defined api and schields the password away from everything. It can also ask the user if process x can access the key y.
If a user has access to your machine to steal the password, why not just steal the data that's protected by it? Or add another device to syncthing? Install a keylogger. Rootkit.
With a rootkit you don't trust the OS anymore, so a safe location inside OS space isn't an option either. But often you are not a root user (e.g. Android, or Windows in a corporate environment).
If you have OS backups, there is a risk they are readable by others (e.g. a cloud provider or a different IT department). There is also a risk that a user uploads the config somewhere.
If you want to rotate keys, you would have to hunt down every copy, compared to rotating them in one centralized location.
I'm not familiar with `Clevis + Tang`, but the way I would solve it is by implementing an IPC mechanism through which an external process can provide the encryption passphrase.
This would allow Syncthing to start at boot, with untrusted devices starting in a paused state. Once an external process connects and provides the passphrase (a libpam module for login integration?), Syncthing would start syncing the devices that require it.
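Something like this on the client side, for example (purely hypothetical: Syncthing doesn't expose such a socket today, and the path and one-line protocol are invented just to illustrate the flow, e.g. triggered from a PAM session hook at login):

```python
import getpass
import socket

SOCKET_PATH = "/run/syncthing/passphrase.sock"   # hypothetical socket

def provide_passphrase() -> None:
    passphrase = getpass.getpass("Untrusted-device passphrase: ")
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCKET_PATH)
        # A patched Syncthing would read this, derive the folder keys in
        # memory only, and then unpause the untrusted devices.
        s.sendall(passphrase.encode() + b"\n")

if __name__ == "__main__":
    provide_passphrase()
```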
> The untrusted device will be able to observe:
> File sizes
> Which parts of files are changed by the other devices and when
I know that cryfs[1] is resilient to at least the first of these, and possibly the second as well. I don't know whether cryfs allows the base directory to be modified while the filesystem is online; if it does, it might already be a better solution for Syncthing, if you only care about Linux.
On the flip side, Syncthing could adopt cryfs's base directory format instead of its home-grown one.
"Warning! Never access the file system from two devices at the same time. This can corrupt your file system. When switching devices, always make sure to stop CryFS on the first device, let Dropbox finish synchronization, and then start CryFS on the second device. There are some ideas on how future versions of CryFS could allow for concurrent access, but in the current version this is not safe."
I'm looking to improve my document syncing setup. Currently I'm using ownCloud, but that seems overkill for just syncing files and it requires maintenance, so I gave Syncthing a look. The "Untrusted device encryption" was not appealing to me because I'm not convinced by the security aspects yet, and also because it is in beta for now. I used gocryptfs [1] in the past and was quite happy with it, so I'm planning to use it on top of Syncthing so that files are synced in encrypted form. As far as I have read, this setup (Syncthing + gocryptfs) seems to be used by several people and has already been discussed by gocryptfs' author, who recommended the `-sharedstorage` flag for such a use case [2]. Reading [3], I think gocryptfs is better suited to file syncing than cryfs. I'm aware that the metadata of my files (size, structure, …) is not encrypted, but that's a compromise I'm ready to make.
I would be happy to hear opinions on this approach.
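In case it helps anyone evaluating the same setup, the mount step I have in mind looks roughly like this (paths are examples; the cipher directory must already have been created with `gocryptfs -init`, and `-sharedstorage` is the flag recommended in [2]):

```python
import subprocess

CIPHER_DIR = "/home/me/Sync/vault.cipher"   # what Syncthing syncs (ciphertext)
PLAIN_DIR = "/home/me/vault"                # local plaintext view

# Mount the encrypted directory; gocryptfs prompts for the password.
subprocess.run(["gocryptfs", "-sharedstorage", CIPHER_DIR, PLAIN_DIR],
               check=True)
```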
This feature is really awesome; I can't wait for it to stabilize, because it makes it possible to offer a sort of Syncthing-based backup of backups by making untrusted storage servers available.
Who knows if I'll ever build it, but if someone else wants to, I'd love to see that happen.
I'd try borg, but I need something more archival: a number of large files that I don't want to gather in one place every time I run the backup command. I'm trying out git-annex for now.
Apparently, even Syncthing doesn't think oblivious RAM (ORAM) is worth it. For those who haven't heard of it before, it's entirely possible to abstract over a block storage device such that an attacker can't even determine things like which logical blocks are written to most frequently, or correlations between writes to logical blocks.
Unfortunately, even though there are algorithms that make each access cost only O(log(n)^2), ORAM by definition makes server-side caches useless: if the server could effectively service requests from a cache, it could predict which memory cells are most likely to be used.
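To make the cache point concrete, here's a toy sketch of the trivial O(n) construction (not the O(log(n)^2) schemes above): every logical access touches every physical block, so the server learns nothing from the access pattern, and a cache can't help.

```python
import os

BLOCK_SIZE = 16

def recrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in "encryption" (XOR) just for the sketch; a real scheme would use
    # a randomized cipher so rewritten blocks look fresh on every pass.
    return bytes(a ^ b for a, b in zip(block, key))

class TrivialORAM:
    def __init__(self, n_blocks: int):
        self.key = os.urandom(BLOCK_SIZE)
        self.store = [recrypt(self.key, bytes(BLOCK_SIZE)) for _ in range(n_blocks)]

    def access(self, index: int, new_value=None) -> bytes:
        """Read (and optionally overwrite) one block while touching all of them."""
        result = b""
        for i, enc in enumerate(self.store):          # always scan every block
            plain = recrypt(self.key, enc)
            if i == index:
                result = plain
                if new_value is not None:
                    plain = new_value.ljust(BLOCK_SIZE, b"\x00")
            self.store[i] = recrypt(self.key, plain)  # rewrite every block
        return result
```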
How does this play with versioning? If I edit a file gradually, there is going to be an ever-growing number of frequent versions. There is no deduplication on the encrypted node (I'm not sure whether there is deduplication between versions in plaintext either), so storage use could increase rapidly.
I expect that an encrypted patch is kept instead of the full version for exactly this reason; see also the list of what "the untrusted device will be able to observe" in the linked doc.