Having said that, we do need these kinds of solutions without the back doors. Why aren't any software developers from countries where mandatory backdoors aren't a thing building stuff like BTSync? And does anyone know which countries those are? For example, I live in Belgium. Were I to build such a thing, would someone show up on my doorstep and force me to backdoor it?
(*) Nothing that is reasonable; only paranoia and extreme lengths will suffice.
Yeah, but you don't have to make it _easy_ for them, like using closed-source software that's capitalizing on the name recognition of "BitTorrent" to pretend to be open.
(I expect that even if you lived in McLean, Virginia and never locked your doors, if you were writing open-source software, nobody would make you put a backdoor in. The intelligence agencies have shown a great bias towards doing things that people won't notice.)
Also, even assuming you can't keep the NSA and FBI out if they really care (which I somewhat agree with), it's definitely worth it to keep everyone else out. Like BitTorrent the company, or anyone MITMing any of those HTTP URLs.
You do have that. Further, you've had it since 2006.
Point duplicity to rsync.net.
> The change of sharing paradigm that introduced this vulnerability happened after the first releases. This may be the result of an NSL (National Security Letter: a demand from the US government pressuring a business into handing over keys or introducing vulnerabilities that compromise a previously secure system) received by BitTorrent Inc and/or the developers.
If that's true, it's extremely alarming. I wouldn't use their software to share sensitive files.
At least, that's how any competent developer would code it.
Bottom line: do not use for sensitive data.
> [MEDIUM] Potential attack vector: silent automatic update of the client over HTTP at http://update.utorrent.com
If they check the digital signature on an update package, this is not an issue. If they don't, it is.
There are ways to mitigate this, but not well given the design constraints of closed-source software.
If you meant that a man in the middle can do that: if things are implemented correctly on the app's end, and the attacker doesn't have the vendor's private key, then no, of course they cannot inject anything that way.
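The "implemented correctly" part boils down to: verify the downloaded bytes against out-of-band trust *before* installing. A real client would check an asymmetric signature (PGP, Ed25519, ...) against the vendor's public key, which a MITM on the HTTP download cannot forge; the Python standard library has no asymmetric crypto, so a pinned SHA-256 digest stands in for the idea in this sketch (all names and byte strings here are illustrative, not uTorrent's actual update mechanism):

```python
import hashlib
import hmac

def verify_update(package: bytes, expected_digest: str) -> bool:
    """Check downloaded bytes against a digest obtained over a trusted channel.

    In a real updater this would be signature verification with the vendor's
    public key; a pinned SHA-256 is a stdlib-only stand-in for the same
    verify-before-install pattern.
    """
    actual = hashlib.sha256(package).hexdigest()
    # Constant-time comparison, out of habit more than necessity here.
    return hmac.compare_digest(actual, expected_digest)

def apply_update(package: bytes, expected_digest: str) -> None:
    if not verify_update(package, expected_digest):
        raise ValueError("update rejected: digest mismatch (possible MITM)")
    # ...only now write the package to disk and swap it in...

# A tampered download fails verification:
good = b"official update bytes"
pinned = hashlib.sha256(good).hexdigest()
print(verify_update(good, pinned))             # True
print(verify_update(b"evil payload", pinned))  # False
```

If the client skips this check and just runs whatever comes back over plain HTTP, anyone on the path can substitute their own package, which is exactly the finding quoted above.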
Why does this have to be the case? Here are two specific alternative deployment models that protect you from this attack:
1. The update is signed at rest (e.g., PGP), instead of the update traffic being signed in motion (e.g., SSL), and the update and signature are widely mirrored or distributed through a third party. The vendor can still inject malicious code, but everyone has the same code: they can't easily introduce targeted attacks against a specific person.
This is the security model of Chrome or Firefox extensions, and it's one of the reasons there are vague security advantages in distributing an app that way instead of directly via a website. It's also the security model of the Apple App Store, the Play Store, almost all Linux distros (kinda), etc., sometimes with the addition that there's a human reviewer at the third party.
2. Same as 1, but the code is open source and reproducibly built, such that anyone can audit (possibly even automatically!) that the sources match the binaries. Or the code is open source and not in a compiled language. At that point, any injected crap is visible to the world.
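The reproducible-build property in point 2 is what makes the audit mechanical: anyone who rebuilds the published sources should get a bit-identical binary, so checking a release reduces to comparing hashes. A minimal sketch of that audit step (the inputs are illustrative stand-ins, not BTSync's actual artifacts):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def audit_release(shipped_binary: bytes, rebuilt_binary: bytes) -> bool:
    # With a reproducible build, rebuilding the tagged sources yields a
    # bit-identical artifact. Any mismatch means the shipped binary does not
    # correspond to the source code everyone audited.
    return sha256_hex(shipped_binary) == sha256_hex(rebuilt_binary)

print(audit_release(b"\x7fELF same build", b"\x7fELF same build"))   # True
print(audit_release(b"\x7fELF backdoored", b"\x7fELF same build"))   # False
```

Because anyone can run this check, a vendor (or a government compelling the vendor) can't slip a divergent binary into the distribution channel without it being visible to the world.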
(This design has the secondary advantage that it disincentivizes the government from compelling the software author to add a hidden backdoor, but that's for the same reasons it makes it hard for the software author to add a hidden backdoor of their own volition.)
My point is that we shouldn't settle for a bad threat model just because a for-profit vendor tells us it's the threat model.
> [LOW] confirmed: When registering, http traffic for creating new user on loopback http://127.0.0.1:8888
And I wonder the same. I didn't really get that one, but you may be right that they mean it should be HTTPS. I guess the risk is an application monitoring loopback traffic, but that requires special permissions or root. The former case is exceptional; in the latter you're pwned anyway and your keystrokes aren't safe either. So I don't know; HTTPS might add a little extra, but I'm not sure.
If I understood the article correctly, an MITM attacker (or federal agency) could in theory know that you have a copy of a file only after they have a copy of the same file themselves, by comparing hashes.
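That "confirm only what you already have" property falls out of content-derived identifiers. A toy illustration (SHA-256 is a stand-in here; the actual hashing BTSync uses is not specified in this thread):

```python
import hashlib

def content_id(data: bytes) -> str:
    # An identifier derived purely from file content: anyone holding the same
    # bytes derives the same ID, but the ID alone reveals nothing about files
    # the observer does not already possess.
    return hashlib.sha256(data).hexdigest()

# An observer who already has a copy of some document...
observer_id = content_id(b"leaked report contents")
# ...can match it against an identifier seen on the wire:
seen_on_wire = content_id(b"leaked report contents")
print(observer_id == seen_on_wire)                    # True: possession confirmed
print(content_id(b"unknown file") == seen_on_wire)    # False: stays opaque
```

So a passive MITM can't learn what a file contains from its hash, but can confirm that you hold a specific file they already have a copy of.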
I thought everything happened over a DHT?