Hi all. I am a Tor Project Developer and work at Mozilla on this project. We appreciate everyone's enthusiasm and feedback. Our ultimate goal is a long way away because of the amount of work to do and the necessity to match the safety of Tor Browser in Firefox when providing a Tor mode. There's no guarantee this will happen, but I hope it will and we will keep working towards it.
If anyone is interested in assisting development-wise, Firefox bugs tagged 'fingerprinting' in the whiteboard are a good place to start. You can also run Tor relays and help us improve the health of the network by working with Tor's new Relay Advocate (https://blog.torproject.org/get-help-running-your-relay-our-...). More people being involved in spec work (especially at the W3C) and focusing on fingerprinting and privacy concerns is also very useful - it's very hard to keep eyes on all the things happening everywhere.
We also appreciate users of Firefox Beta and Nightly (Nightly especially). The flags Tor features are developed behind (privacy.resistFingerprinting and privacy.firstparty.isolate) are experimental. I appreciate bug reports from users running these flags but you should expect them to break things on the web (resistFingerprinting especially; first party isolate is generally more stable and usually only has breakage on particular login forms).
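For anyone who wants to try them, both prefs can be flipped in about:config or pinned in a user.js; a minimal sketch (the pref names are the real ones mentioned above):

```js
// user.js sketch — the two experimental prefs named above.
// Expect site breakage, especially from resistFingerprinting.
user_pref("privacy.resistFingerprinting", true);
user_pref("privacy.firstparty.isolate", true);
```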
>You can also run Tor relays and help us improve the health of the network by working with Tor's new Relay Advocate
Since I've seen this come up in many previous discussions of Tor, I think it's worth emphasizing/clarifying up front: Tor relays are not the same as Tor exit nodes. Relays do not talk to the public internet; they serve only the fully encrypted internal Tor virtual network. So they won't ever send traffic out from an IP under your control to some website or general Internet system (and in turn tie that IP in any way to spam/abuse/whatever, at least not for that reason). It's not necessarily hidden that a machine is acting as a relay, but the relay itself has no knowledge of the traffic it's carrying.
Plenty of people have reasonable concerns about the risks/inconveniences that might come with acting as an exit node, but on both a legal and practical level there are many more jurisdictions where merely relaying encrypted traffic between other relays isn't a problem. And it's still quite helpful, both for network speed and because purely internal Tor Hidden Services do not need any exit nodes at all.
One way to help that avoids this is to operate a bridge node. Bridge nodes are used as entry points into the Tor network for people in regions where Tor is blocked, so efforts are made to keep the addresses of bridges confidential, which makes it less likely that people who don't know what they're doing will wrongfully put them on a block list.
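Configuration-wise, a bridge is an ordinary relay plus a couple of torrc lines; a hedged sketch (the ports, plugin path, nickname, and contact info are placeholders, not recommendations):

```
# torrc sketch for an obfs4 bridge. Ports, path, nickname, and
# contact info below are placeholders.
BridgeRelay 1
ORPort 9001
ServerTransportPlugin obfs4 exec /usr/bin/obfs4proxy
ServerTransportListenAddr obfs4 0.0.0.0:9002
ExtORPort auto
ContactInfo bridge-op@example.com
Nickname ExampleBridge
```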
I had nothing but pain when trying to run an exit node. Every site behind Cloudflare would CAPTCHA me on what seemed like every page. Cox shut off my internet every other week due to "computers on my network being infected with viruses", and I'd have to call their support and tell them I can't be infected, I only run Linux at home.
I could do some shenanigans on my modem and end up with a new dynamic IP from Cox, but generally within hours that new IP would be on whatever list people use to track exit node IPs and the pain would start all over again.
And plenty of people insist that Tor relays are totally safe to run. They are not. I NEVER ran an exit node from my home IP, only relays, and my IP was still blacklisted from various sites due to this behavior.
I still contribute to Tor via VPS rentals and such, but relays are not no-risk alternatives to exit nodes. Period.
Given the low level of technical knowledge in a great deal of US law enforcement, increasing militarization, no-knock warrants, etc., please think twice before running an exit node from your house. Do it in a colo somewhere with a small, plucky ISP owned by a First and Fourth Amendment absolutist.
Thanks for your effort! If I can ask, how much overlap exists between your team and the team overseeing the implementation of security protocols within Firefox e.g. HSTS, CSP, etc.? It'd be neat to see Firefox drive innovation here alongside the effort to weave Tor into the browser; although I wouldn't necessarily treat Tor integration the same as I might the implementation of other security specifications, I can see how the teams working on such might overlap, hence my question.
By passing this on to Mozilla and discontinuing Tor Browser, you're going to inherit the innumerable issues in their code base. Wouldn't it be easier to hard-fork and create a simple browser with minimal overhead? It doesn't have to be loaded with features. Just minimalist and private.
Some anti-features that come to mind off the top of my head:
* Biometric login (as of FF60)
* Dumb PR Stunts like Mr. Robot
* Telemetry
* Balrog (analytics and browser fingerprinting on Amazon S3)
* Social API
* VR sensors
* DRM
* Google Chrome (large contract Mozilla has with them as they backport this into IPC)
* CloudFlare DNS (Department of Homeland Security partner and Tor arch-enemy)
Tor Browser will exist as long as Tor feels it needs to. If the features or anti-features in Firefox cause them to believe it does not fit their needs, then they're not going to discontinue it.
> By passing this on to Mozilla and discontinuing Tor Browser, you're going to inherit the innumerable issues in their code base.
What issues exactly? Tor Browser = Firefox ESR + some patches + some other stuff and tweaks. Before the release of the next ESR, TB devs rebase and submit these patches to mainline Firefox; that's why you have prefs like privacy.resistFingerprinting and privacy.firstparty.isolate in mainline Firefox, see: https://wiki.mozilla.org/Security/Tor_Uplift
> Our ultimate goal is a long way away because of the amount of work to do and the necessity to match the safety of Tor Browser in Firefox when providing a Tor mode.
If that doesn't pan out, do you expect the ongoing work on this project to reduce the size of the patches that the Tor Browser project needs to carry on top of the Firefox trunk?
Ultimatest super-goal: make anonymity the default stance and socially accepted norm. Do with anonymous browsing what WhatsApp did with E2E encryption. Force big data suckers to invent new business models for exploiting our data without breaching our privacy.
I can't read this article because I'm at work, but unless they managed to solve the problem of Tor being very, very, very slow, this will never happen. End users will definitely notice a difference and likely won't care about their privacy. They'll just see Firefox being way slower than Chrome and switch.
This would be a privacy option for Firefox, not the default. But yes, Tor introduces latency and reduces bandwidth. Traffic to the open Internet uses circuits through three relays: entry guard, middle and exit. So there are four hops between users and websites, instead of one. The Earth's circumference is about 40 thousand km, so the longest path is arguably ~20 thousand km, and rtt for that would be about 300-500 msec, according to my measurements.[0] It's only ~130 msec at lightspeed, but there are some copper links, plus switching time and caching.
So with four hops, rtt would at most be 1200-2000 msec, if every hop were the maximum length. In practice, rtt for Tor is at most half that, and often even less. But latency is actually good if your goal is anonymity. Because it reduces the accuracy of traffic analysis.
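A back-of-envelope version of that arithmetic (the fiber speed and the flat 10 msec per-hop overhead are rough assumptions of mine, not measurements; the 300-500 msec/hop figures above suggest real overhead is higher):

```python
# Rough model: light in fiber covers ~200 km per millisecond
# (about 2/3 of c), and each hop adds a fixed switching overhead.
C_FIBER_KM_PER_MS = 200.0

def circuit_rtt_ms(hop_distances_km, per_hop_overhead_ms=10.0):
    """Round-trip time over a chain of hops, out and back."""
    one_way = sum(d / C_FIBER_KM_PER_MS + per_hop_overhead_ms
                  for d in hop_distances_km)
    return 2 * one_way

# Pathological case: four antipodal (~20,000 km) hops.
print(circuit_rtt_ms([20_000] * 4))  # 880.0 msec under these assumptions
```

Plugging in a larger per-hop overhead recovers the 1200-2000 msec ceiling quoted above; the point is just that propagation delay alone doesn't get you there.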
With traditional onion sites, there are two three-relay circuits, one for the user and one for the site, plus a rendezvous relay. So rtt is much greater. However, sites can opt for one-relay circuits, sacrificing anonymity, so overall rtt isn't that bad.
Bandwidth is also reduced with Tor. Increased latency is part of that. But also, many relays have low-bandwidth uplinks, especially ones that people run at home. The Tor client does pick faster relays, but there's a tradeoff, in that doing so reduces anonymity. Increased investment in high-bandwidth relays would help a lot.
Also, with more relays, it would be workable to implement multipath circuits. Especially for onion sites, where precious exit relays aren't needed. Using MPTCP, I managed ~50 Mbps throughput for bbcp transfers between onion sites (with gigabit uplinks).[1] I was getting ~36 subflows per tcp connection.
Is it even worth talking about speed without at least considering client network conditions? A lot of people have poor internet connections, many places world-over are basically mobile-internet only already, shared internet spaces with variable QoS (campuses), etc etc. Most people aren't using engineer-priced laptops/workstations or backed by enterprise-grade routing equipment, after all.
Yes it is. I can't speak for everyone, but in Brazil it's virtually impossible to use Tor even for HTML-only websites. And I can say most people have slower bandwidth than I do.
So long as Tor depends on volunteers to maintain exit nodes, and with that the risk of being arrested for all sorts of criminal activity by non-tech-savvy law enforcement, this is how it'll be.
Tor is slow because every packet has to be routed through several random servers distributed all over the world with multiple layers of crypto at every pass. Even with plenty of top-of-the-line inner and exit nodes you will still see substantially more latency than just sending packets directly.
The "distributed all over the world" part would still be just as much of an issue; the speed of light puts a substantial lower bound on the total latency.
True, but pinging from one hub to another is way faster than someone's cable modem in Mumbai to someone's in Australia to someone's in Peru and back again. Those last miles add up in a huge way.
No, IPSec tunnels to remote headquarters have indistinguishable latency impacts for normal users browsing (<150ms). The bad latency is because of congestion, not crypto and multiple hops.
Mirimir typically uses a three-VPN nested chain. Just now, rtt to google.com was ~260 msec. That's four hops. Just not with servers on the far side of the planet.
Also get on the tor-relays@lists.torproject.org mail list.
But the sad truth is that there aren't that many hosting providers that allow Tor relays. Especially exit relays, because of abuse complaints.
Also, as you might expect, Tor relays can use lots of bandwidth. It's more common to get flat-rate bandwidth for 100 Mbps uplinks, and metered bandwidth for 1 Gbps uplinks. Digital Ocean, for example, just switched to metered bandwidth, and that has killed some relays.
However, all this could arguably change, if Tor became mainstream, as part of Firefox.
That has no relation to the cases above, which were of people running Tor exit nodes from their homes. If one wants to hide their Tor usage, then that's something else, and there are pluggable transports already built into the Tor Browser that obfuscate Tor traffic to look like something else--no need for a VPN.
If you're worried that law enforcement will knock on your door because somebody used your exit node for illegal internet activity, a VPN service (that does not log traffic) will give you additional protection by exposing their IP address, not yours.
They sometimes are. Not always. And they walk out free, except for that Jewish guy who lived in Austria (I can’t remember his name but he was the only one to get in real trouble for running an exit node).
Why take that trouble when they can do it directly using Tor without running any exit at home? Also for instance Bogatov had an alibi when that happened.
The forum I run only bans IP addresses caught posting link-spam. Which, admittedly, asymptotically approached 100% of Tor exit nodes before I instituted more rapid ban-expiration. I added faster ban-expiration after hearing from some of my privacy-conscious users that Tor had become unusable for my forum.
If you use Tor and only visit onion sites, the sites don't know who visited them.
If you use Tor and visit regular web sites (like, say, HN), the last computer that makes the actual request to the website is an exit node; as far as that site is concerned, the exit node made the HTTP request. If you run an exit node, your computer is going to be making tons of requests to all kinds of websites, which may include sites that deal in illegal stuff like drugs, child prostitution, human trafficking, terrorism, etc.
edit: Forgot to say, you must explicitly be running an exit node. Not every Tor node is an exit node.
I think Tor will get faster now that new protocols like TLS 1.3, HTTP/2 and QUIC are being developed.
Currently Tor traffic looks like HTTPS done with TLS 1.2 over TCP (like regular HTTPS). As these newer protocols get more and more deployed, Tor can start using them too, which will help make Tor faster.
Not immediately, but I feel that as those protocols become more ubiquitous, _maybe_ the base Tor transport protocol (for nodes which aren't bridges) might be able to benefit from some of the same upgrades by using them?
I don't know how much (if at all) it might help—but other, similar overlay networks have previously noticed that (intuitively) inefficiency in the transport protocol is likely to be (broadly speaking) multiplied by the number of hops; so any improvements in that might be useful in improving the user experience by using the same available resources more efficiently.
What that might mean for Tor's perceived speed is a somewhat murky issue, as that's a function of the complex interaction of latency and bandwidth and crypto and routing overhead of all the involved nodes in a tunnel put together; which of course is also shared with other tunnels; not to mention it will _also_ be particularly affected by exit node outproxy bandwidth; _and_ any possible packet loss and delay caused by both incidental _and_ deliberate adverse network conditions…
There are in fact some vague ideas floating around about using QUIC as a transport protocol for Tor. However, there is so much work to do and so few people that have the necessary skills (solid cryptography -- not at a "build the next AES" level, but "implement AES with no side channels" is already incredibly difficult -- plus low-level networking, C, and so on...) that in my view it is a minimum of 2-3 years from being mainstream available (look at how long HSv3 took).
Tor circuits tend to be rather high latency, so anything that reduces the number of round trips needed for page loads is likely to have a significant impact on Tor's effective speed.
WhatsApp is not perfect, and certainly its code not being available for inspection is one of its flaws. However, it did bring security to the masses. I mean, I am pretty sure the security it offers is enough for 95% of people. I would not use it for sending documents stolen from the NSA, but for the rest of the cases it has you covered.
Security very often needs to be balanced with convenience - with WhatsApp you get an immense boost in protection without sacrificing much convenience. One could argue that you could get better security with Signal - true, but first you'd need to convince all your family and friends to install it.
They need to solve the issue of speed, although maybe for sensitive queries (assuming that's enough, which is a very big assumption) people may be willing to use a slow "super private browsing mode". Another option is to make people pay for faster speeds?
And if I recall correctly, a "global passive attacker" listening to internet traffic can de-anonymize Tor using ML. Seems like something that would be possible and profitable for Google and internet infra companies.
Google isn't a GPA. Also having a low-latency anonymity system that isn't affected by a GPA is an open problem. The important thing here is that using Tor is better than not.
Likely actual result: Firefox will become increasingly irrelevant.
If Tor is going to be a built-in feature of Firefox, most employers are going to flag it as malware. This is a ridiculously dumb thing on so many levels -- promote privacy by directing your network traffic to "volunteer" proxy services?
You already don't know what proxies your traffic is going through. Using Tor might increase the odds of a bad actor a bit but end-to-end security is something the web is getting better at right now.
The risk now is that some bad actor is replacing TLS certificates, which is an uncommon and tamper-evident event. Tor is handing your traffic to an unknown 3rd party.
Plus, users do not understand what Tor is or how to use it.
Fighting political battles with software is dumb — the end result is going to be a permanent loss of freedom, as governments force the use of platforms with trusted app stores.
The risk now is BGP hijacking. Or really just normal operation of BGP. You data could go anywhere on the planet on its way to the destination and you're not going to know ahead of time what path any particular packet will take.
If you're using TLS, it doesn't matter so much if the exit node is malicious because they still won't be able to read it.
It's been my understanding that Firefox has been soft splitting its consumer and business versions of the product for a while. This would presumably just be another step down that road.
Firefox recently whitelisted a bunch of p2p protocols so that they can be used by browser extensions. One of them is the Dat protocol [0], which is similar to BitTorrent but has better support for mutable data and random access [1]. It's far from being "baked in", but it's a step in the right direction.
We always could do that with extensions, since JS is Turing complete and has access to the network. WebTorrent is a thing, after all.
The issue is not technical. It's just a chicken and egg problem. Most won't use BitTorrent unless it's stupidly easy to do. Remember that the average user doesn't know what a URL is and doesn't open new tabs willingly. Since they are the majority, they drive costs and benefits, so we must include them.
You couldn't, until Firefox 59. Before that, protocol handlers were not allowed to handle links to Dat/IPFS resources [0].
And while I agree with your comment regarding the chicken and egg problem, there are still some technical issues. As the shadowbanned sibling comment says, extensions don't have access to UDP/TCP sockets, meaning that you will need to run a gateway on your machine. See e.g. what dat-fox [1] does.
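For reference, the Firefox 59 mechanism is the protocol_handlers manifest key; a hedged sketch of how an extension might hand dat: URLs to a local gateway (the gateway address and port are hypothetical, in the spirit of what dat-fox does):

```json
{
  "manifest_version": 2,
  "name": "dat-handler-sketch",
  "version": "0.1",
  "protocol_handlers": [
    {
      "protocol": "dat",
      "name": "Dat via local gateway",
      "uriTemplate": "http://localhost:3000/%s"
    }
  ]
}
```

The %s is replaced with the full dat: URL; a separately running gateway process would then actually fetch the content, which is exactly the "no raw sockets for extensions" limitation mentioned above.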
> You couldn't, until Firefox 59. Before that, protocol handlers were not allowed to handle links to Dat/IPFS resources [0].
You could, kind of, before Firefox 57 (or at least, at some point). Implementing nsIProtocolHandler/nsIChannel/etc. correctly was difficult (and probably not from JavaScript), and distribution problems meant nobody did it.
Chrome extensions [1], and Firefox WebExtensions [2], can have background scripts that can continue running even when the browser window is closed. So in theory, with a p2p filesharing extension, you may not have to keep any browser window open or even minimized.
I think IPFS needs a little more field testing before being set in stone. Indeed, if you bake something into the browsers, then those implementations will be the boundary of what is practical to do. So any innovation will then be constrained by the browsers' releases and goodwill.
IPFS is a young tech; it needs time to evolve yet.
Last month I downloaded two Linux ISOs with it for work. This month, all the seasons of The Pretender for fun.
Facebook used to deploy their code using BitTorrent. I doubt it has changed.
A lot of Blizzard video games update using BitTorrent as well. If you play StarCraft 2, you use BitTorrent.
Streaming services like Stremio are basically BitTorrent. After Netflix, it's my main source of video content.
If you want to download the Internet Archive, that's the saner option. Same if you are a pentester, as a lot of heavy leaks or hash DBs are so huge only BitTorrent makes it practical. Too expensive to host for one small actor. It's also more resistant to takedown notices.
We talked a lot about RSS lately, and how to revive it, while in comments people said it actually never died. BitTorrent is a lot like that. Great tech, great standard; it works flawlessly and fills its use case perfectly.
The only reason it's not more adopted is because it's not in the browser by default. Otherwise the hosting benefit and the download speed are such that it would be an instant hit.
Blizzard games no longer use BitTorrent but a proprietary HTTP-based protocol called ngdp. BitTorrent was causing a lot of issues with firewalls, so users were disabling it, and HTTP mirrors had been added as a fallback... And then CDNs became a thing; the rest is history.
I'll be happy to give more details on ngdp if you are curious.
They basically created their own git protocol + virtual filesystem, optimized for asset patches inside large compressed binary files. I wish they'd open source it.
It definitely still is. And with the concept of private trackers, it's not really a simple thing to turn off. Companies like IP Echelon tend to automatically bully US/cloud users, but in general they're IMHO far from killing the network. The only problem is that there's fewer and fewer torrent search engines...
It is. For both legitimate and nefarious uses. Bigger software/game devs still use BitTorrent to distribute patches and updates; World of Warcraft is an example.
Torrenting through Tor overloads relays, increasing latency and throttling bandwidth for other users. Also, it's unworkably slow. Just use nested VPN chains.
Ultimate goal of the Tor project, that is. Lots of people in this comment thread seem to have not clicked the link and are presuming that this is a Firefox roadmap. The Mozilla analogue of this page only mentions bringing Tor features into Firefox, without stating a goal of obviating the Tor Browser entirely.
Well, Mozilla engineers wouldn't be the ones in charge of, or benefiting from, shutting down the Tor Browser fork, would they? So why would they need to include that kind of (incredibly distant, virtually pie in the sky) goal on their wiki page?
Is it a fork? I thought it was a heavily customized version of Firefox ESR with specific settings and defaults using the channel's OEM configs. Looking at the binaries, it's definitely not the standard build anymore.
OEM configs have never been enough to implement everything they need in Tor Browser. They eventually started their uplift effort [1], to upstream all the patches and features they've added to it, so that they can possibly just use OEM configs.
Looking it up, I think the more accurate term is actually "partner repacks." [1] They're versions of Firefox shipped with different default settings and/or an extension included by default at install time, IIUC. Though reading this rather opaque page [2] it doesn't seem to explicitly preclude patches to the compiled code, though it seems like they would forbid that.
I haven't looked into it in a while, but last I knew you had to have MoFo sign-off on a bunch of stuff, including code changes (i.e., you're only allowed to use the Firefox branding if you get MoFo sign-off on those changes; you can always redistribute it without the trademarks).
What will Mozilla do about the Tor network's usability problems? Advanced users can work around them and, because they understand the benefits and engineering, accept the frustrations as a cost for a worthwhile (and free) technology. But what will non-technical users do?
Many public Internet websites filter connections from the Tor network, many other websites are very slow, yet others impose extra obstacles such as multiple rounds of CAPTCHAs (even 5 or more) or degraded service (including high suspicion of payments), and of course you often will receive webpages in the wrong locale or language - which can trigger regional filters. Currently, workarounds require resetting the circuit (few non-technical users will even understand what the circuit is), lots of patience and reloads, and often just giving up. [EDIT: And non-technical users won't understand what is happening and therefore won't know when to use which workaround.]
If that's the experience of typical Firefox users, they won't use it and they will have bad associations with Tor and Firefox.
They aren't Tor network usability problems. They're clear web network usability problems. It'd be great if more people used Tor, NoScript (yes, I know this will not be baked in), and other privacy protecting mechanisms so that clear web sites would care about the users they're intentionally making the experience worse for.
A huge number of people use adblockers, and websites haven't changed to make the experience better. All they've done is changed to block anyone using adblockers, which is what many sites/networks already do to Tor.
There are many things in the Tor Browser that benefit privacy that are separate from the use of the Tor network, and it's these things that Firefox is most likely to adopt.
Which sites in 2018 still present multiple CAPTCHAs to users with cookies and JavaScript enabled?
I think the theory behind this project is that those problems are primarily caused by Tor's popular image as a 'fringe network for pedophiles and drug dealers' and that by making it more mainstream they can fix those issues.
(please more replies saying "that sounds really hard" and less replies saying "tor is not a fringe network for pedophiles and drug dealers", thanks)
Google has been showing CAPTCHAs when you use their search engine for quite a long time now (even for VPN users). In fact, they were even outright blocking users from using that very same CAPTCHA: https://bugs.torproject.org/23840
I meant more that Google offers a CAPTCHA service for others that is quite good, then implemented a really terrible one for their own internal services. It truly is awful to deal with. Whoever made it must have been smoking crack or taking large quantities of LSD.
Source? Cloudflare did post a moderately hostile response to Tor a few years back, but their technical implementation is sound and does not present multiple CAPTCHAs to users who have cookies enabled (the CAPTCHA might be broken with JS off, but that's a Google problem).
Even I have seen CF captchas on multiple sites when using Tor. Though might be because of my usage pattern where I use Tor for a selective list of sites giving CF less chance to maintain my identity.
With wider adoption of ipv6 and all the good things that come with it (don't mistake me, they are great!) also comes the risk that each computer will get a uniquely identifiable IP address that will be used for fingerprinting. I've never really used Tor in the past, but this got me thinking about it.
An option could be to provide a webRTC-based node, but I am not sure how feasible that would be, after reading some comments here. Maybe for entry nodes and guard nodes instead of exit nodes? The transient nature of browser sessions could greatly enhance privacy. Of course, you would need some algorithms to deal with this very nature... But I can imagine some.
This surely lowers the barrier to entry for greatly enhanced privacy. Quite a lot of people seem to be aware of the private browsing mode, and I can imagine this being turned into a simple toggle on the private browsing home page, along with a short explanation (and a link to additional privacy tips).
A low-hanging fruit that could enhance privacy a bit would be to use the trusted recursive resolver (DNS over HTTPS) in private browsing by default, since it is already part of Firefox. It just needs a default trusted resolver.
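The knobs for this already exist as prefs; a sketch of what enabling TRR looks like (the resolver URL is just one example of a provider, not an endorsement):

```js
// user.js / about:config sketch. Mode 2 = DoH with fallback to
// system DNS; mode 3 = DoH only. The URI is one example provider.
user_pref("network.trr.mode", 2);
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");
```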
Cool, now let me start an ephemeral v3 onion service from JS and have it reachable via WebRTC by a peer who has their own. It's the perfect tech marriage, removes signalling servers and NAT busters, but may be a bit taxing on directory servers and too slow to use for media streams (but I'll take data channels only).
Hah, kinda reminds me of Opera Unite. (That one wasn't from JS, it offered some fixed applications like file hosting, notes, etc., but it was hosting stuff from the browser)
sounds really useful, but to be fair, it doesn't really remove either. you're effectively just using the Tor network as freely available (but slow) signalling and TURN servers.
There is a gap between the safer and safest security level: sometimes I want to display icons and symbols but don't want js to run.
Installing additional extensions is discouraged; but in my experience Decentraleyes makes latency somewhat less disturbing, CAPTCHAs appear less often; and uBlock Origin is essential [-].
[-] shipping with every available filter list enabled and cached may be a good enough default
This would be amazing. The main reason I've never used Tor is the fear that it would make me look like I had something to hide (instead of just a general desire for privacy). If it were built into Firefox, I'd probably switch over from Chrome.
> The main reason I've never used Tor is the fear that it would make me look like I had something to hide (instead of just a general desire for privacy).
1. You can hide the fact that you're using Tor by using pluggable transports, which are already built into the Tor Browser (such as meek-azure, obfs4, snowflake, ...).
2. That's the biggest reason why one must use Tor as much as possible, even if they don't care about privacy. More people using Tor = the less interesting it is to be a Tor user.
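For a standalone tor client, using a pluggable transport looks roughly like this in torrc (the bridge address, fingerprint, and cert below are made up; real bridge lines come from bridges.torproject.org):

```
# torrc sketch for a client using an obfs4 bridge.
# The Bridge line below is invented; get a real one from BridgeDB.
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
Bridge obfs4 192.0.2.10:443 0123456789ABCDEF0123456789ABCDEF01234567 cert=EXAMPLE iat-mode=0
```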
> it would make me look like I had something to hide
That's exactly why I use it on a regular basis!
Perhaps when looking up political or medical things (also for friends), it could be good to run that through Tor. That is not something to hide per se, but it is definitely not something that is anyone else's business, and you don't want to be bubbled.
If I understand it correctly, you shouldn't be using any website accounts using Tor browser that you also use outside of it. I really wonder if/how they can make the user properly aware of that in a kind-of super private browsing mode.
you can, and it is still a valuable contribution to the Tor network, since it adds cover traffic. the catch is just that the website you're using will know who you are. I hope that the vast majority of users will understand that if they log in, there is no privacy mode that will save them from the site they log into.
Right, I guess that's good if you want to contribute to the Tor ecosystem, but if you want to browse anonymously, that's no good. Unfortunately, even as a somewhat tech savvy (though in hindsight obviously naive) person, the repercussions of logging in somewhere in Tor Browser weren't immediately clear to me.
I'm not quite sure what you mean by this comment, but if you mean that their approach would be similar to MAC, in the sense that "super private browsing" would have to be enabled in about:config or through an extension: that would make sense.
I know the extension, but what does that have to do with informing users about not using their account when browsing with Tor? Heck, informing users about being able to use different accounts with Multi-Account Containers was already such a hard problem that they opted into shipping it as a separate extension, so it's only used by power users.
How I'd do it is have a warning when using Tor outside of the designated Tor container. That is, encourage the installation of the extension when enabling a "Tor" tab (permanently dismissable of course). Alternatively they could co-opt the private-browsing window (a.k.a. incognito) feature to have a tor-browsing window.
The latter might even be preferable, since the former works only at the tab level, making it quite easy to forget whether you are in Tor before entering an address. It appears to be what is suggested in the notes (now that the link is up).
Have you considered also implementing I2P[0] in parallel with Tor? It's supposed to be harder to analyze traffic at nodes with I2P, though it isn't as battle-tested as Tor.
Okay, but what about MaidSafe, Dat, IPFS, etc.? The correct solution is to have Firefox expose a 'protocol API' or something along those lines, so that any 'alternate internet' project can create a backend extension to make Firefox compatible with that protocol.
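For what it's worth, Firefox's WebExtension manifest already has a small piece of this: the protocol_handlers key, which (since Firefox 59, if I recall correctly) whitelists decentralized schemes like ipfs and dat and forwards matching URLs to a template. It redirects rather than rendering natively, and the gateway URL below is hypothetical, but it shows the shape of the idea:

```json
{
  "name": "IPFS handler (sketch)",
  "manifest_version": 2,
  "version": "0.1",
  "protocol_handlers": [
    {
      "protocol": "ipfs",
      "name": "Hypothetical IPFS gateway",
      "uriTemplate": "https://gateway.example/ipfs/%s"
    }
  ]
}
```

A real "protocol API" would need to go further (streaming content into the rendering pipeline rather than bouncing through an HTTPS gateway), but the registration surface already exists.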
I think Mozilla should look at using Servo instead of Gecko in this mode along with a new JavaScript interpreter written in Rust, at least optionally, since perfect security is essential when using a Tor browser without a dedicated VM.
Servo components are being uplifted into Gecko gradually. There's less benefit to rewriting the JIT in Rust because static type systems can only do so much when the whole goal of a program is to generate code dynamically.
My suggestion was to have Firefox support both Gecko (with uplifted component) and a pure Rust renderer, with the latter to be used in the Tor mode where security (to preserve anonymity against resourceful adversaries) matters over compatibility.
For security, instead of a JIT, a simple JavaScript bytecode interpreter written in Rust to be used exclusively in the Tor mode would be ideal, for maximum security at the cost of worse performance.
Another option is a JIT that generates code that is easily proven to be safe (e.g. because it does a bounds check on all memory accesses and only does indirect jumps using a jump table, or because it's the only thing running in a process and jumps are still constrained with a read only jump table and read only code).
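The bounds-checked interpreter idea above can be sketched in miniature. This is a toy, not any real engine's design; the opcodes and memory model are invented for illustration:

```typescript
// A toy bytecode interpreter illustrating the "no JIT" approach:
// every memory access is bounds-checked, and no machine code is
// ever generated at runtime.

enum Op { Push, Add, Load, Store, Halt }

function run(code: number[], memSize = 16): number {
  const mem = new Array<number>(memSize).fill(0);
  const stack: number[] = [];
  let pc = 0;

  // Every address the program computes passes through this check.
  const check = (addr: number): number => {
    if (!Number.isInteger(addr) || addr < 0 || addr >= memSize) {
      throw new RangeError(`out-of-bounds access at ${addr}`);
    }
    return addr;
  };

  while (true) {
    switch (code[pc++]) {
      case Op.Push:  stack.push(code[pc++]); break;
      case Op.Add:   stack.push(stack.pop()! + stack.pop()!); break;
      case Op.Load:  stack.push(mem[check(stack.pop()!)]); break;
      case Op.Store: {
        // Stack layout: address pushed first, then value.
        const v = stack.pop()!;
        mem[check(stack.pop()!)] = v;
        break;
      }
      case Op.Halt:  return stack.pop() ?? 0;
      default: throw new Error(`bad opcode at ${pc - 1}`);
    }
  }
}
```

A hostile program can at worst corrupt its own sandboxed array; there is no runtime-generated code to subvert, which is exactly the property the comment above is after (at the cost of interpreter-level performance).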
So we lose another valuable project of the Web. I wonder why I feel so lost - that when I was young, the Internet was barely born... and now I'm watching it die.
I've been waiting for this for years. Good job convincing Mozilla to do this! Good idea to standardize the spec, too.
I hope they give a good name to this new super-private mode (which actually isn't too bad of a name, either).
I also hope they don't just implement a "more private" mode in Firefox, but also a more hardened mode for Tor. The Tor mode in Firefox should use the strictest possible sandboxing technologies available to them from the operating system (file system virtualization, etc).
I'm even talking about those new fancy hypervisor-based micro-VMs in Windows 10 (which I believe are called Krypton containers) that Edge uses within the Application Guard context. If users have to enable Hyper-V/micro-VMs first in Windows, then maybe this hardening mechanism should be optional but encouraged; otherwise, it should probably be the default.
Oh, and this hardened mode should use a different process for every tab/extension, too, by default, just like Chrome does. I still don't think Mozilla's "hybrid" approach makes it as secure as Chrome (which is why it's a hybrid/compromise for lower memory usage).
That quote doesn't say there are currently hundreds of millions of daily Tor users. It says that in the future, when Firefox includes Tor by default, there might be hundreds of millions of daily Tor users.
Tor Browser is already a modified version of Firefox, if I'm not mistaken. What would be the advantage of integrating it into Firefox proper, as opposed to, say, a VPN integrated into the browser via a plugin? It just seems a little redundant, and Tor is beginning to seem dated as new solutions pop up and make its pitfalls more apparent.
I think an important part is to lessen the maintenance burden for the Tor project: if the code is in Firefox proper, Firefox developers will encounter it when working on other features and make sure they work together, whereas currently the Tor project needs to rebase their modifications onto "regular" Firefox.
"Removing fingerprintability" amounts to the browser just NOT sending all the http request headers that it sends by default. How hard can it be to "comment out" these lines?
I just checked and there are over 100 checks for "should I have different behavior here if I am resisting fingerprinting?" just in the C++ code in Firefox today. There are some more in the JS code but they're harder to search for.
Some simple examples:
* Various navigator APIs (oscpu, platform, etc) need to be disabled.
* Gamepad API needs to be disabled.
* Have to prevent reading canvas pixel data
* Have to block information about available OpenGL extensions from WebGL
* Modifier keys on keyboard events need to be spoofed (because they can be used to guess at keyboard layout)
* Errors from the media stack (for <video> and <audio>) need to be blanked out.
* Something to do with voice synthesis APIs; I didn't look into details.
* Connection API needs to be neutered
* Various timing APIs hanging off "performance" need to be neutered.
* Presentation API needs to be neutered.
* Number of CPUs reported by the navigator API needs to be spoofed.
* Window sizing for window.open needs to be spoofed.
* Ability to measure the difference between the window.inner* and window.outer* APIs needs to be disabled.
* Mouse positions in mouse events need to be spoofed to make it look like the window is fullscreened.
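For the timing APIs in the list above, the usual defense is to clamp timestamps to a coarse grid, so that fine-grained timing differences (useful for cache and hardware side channels) become unobservable. A minimal sketch; the 100 ms granularity is an illustrative choice, not necessarily what Firefox actually ships:

```typescript
// Sketch of timestamp clamping as a fingerprinting defense:
// instead of exposing high-resolution time, round every reading
// down to a coarse grid. The default granularity here is an
// illustrative assumption.

function reduceTimerPrecision(ms: number, granularityMs = 100): number {
  return Math.floor(ms / granularityMs) * granularityMs;
}
```

The same clamping would be applied uniformly to performance.now(), event timestamps, and so on, so that no single API leaks finer resolution than the others.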
Need to do something about fonts and the CSSOM (Element.getBoundingClientRect() for example), too.
Just shipping a standard bundled set of fonts and only allowing use of that doesn't suffice because anti-aliasing width differences could give away the used font renderer.
I am starting to feel Firefox in a serious way. But I'm just so concerned that their software still sucks. I mean, we ARE talking about a company that thought it was smart to spend time trying to build a JavaScript OS for mobile.
So let me ask again. When are you guys going to start building firefox from the ground up and make the perfect browser we all deserve?
And if you disagree, please. Present your arguments. I am the person you need to sell right now.
But web standards are evolving faster than modern browsers are, so building a browser from the ground up would require quite a bit more money and know-how than they have for Firefox right now - and know-how isn't something you can just buy in unlimited amounts. They can't exactly stop developing Firefox in the meantime, either.
Also, their three big competitors have most of their browser market share thanks to building an operating system underneath it. No matter how slim Mozilla's chances at success were, it would've been foolish to not try to get into the operating system market. And they built it based on web technologies, because that's where they have know-how.
Yes, the guy running the exit node can read the bytes that come in and out there. Tor anonymizes the origin of your traffic, and it makes sure to encrypt everything inside the Tor network, but it does not magically encrypt all traffic throughout the Internet.
This is why you should always use end-to-end encryption such as SSL for sensitive Internet connections.
I've seen HSTS applied to things like Windows 10 updates, to prevent users from seeing what exactly the OS is sending to the mothership.
Ideally, we should be able to see exactly the content being exfiltrated, and choose to allow/disallow. But the moment we use tools like ettercap or mitmssl, it kills the session and we can't see the data.
HSTS seems more "self cutting" than useful at this juncture.
You're thinking of HPKP and other certificate and key-pinning implementations. HSTS only enforces HTTPS, it does not prevent the usage of things like mitmproxy with custom roots added to your trust store.
The use of HSTS is a way to suggest to your clients that they use SSL for that domain - for all requests within a time span (which could be "forever").
If anything, the vendors you mention should be applauded for taking that step towards a more secure distribution of updates. Not enforcing SSL makes malware injection through updates way way easier.
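To make the HSTS-vs-pinning distinction concrete, here is a simplified model of what an HSTS-aware client remembers: after seeing a Strict-Transport-Security header, it rewrites http:// URLs for that host to https:// until the max-age expires. Real implementations also handle includeSubDomains and the preload list, which are omitted here:

```typescript
// Simplified model of HSTS state. Note there is nothing about
// certificates or keys here - that's HPKP's territory. HSTS only
// remembers "always use HTTPS for this host, for this long".

interface HstsEntry { host: string; expiresAt: number }

function recordHsts(store: Map<string, HstsEntry>, host: string,
                    header: string, now: number): void {
  const m = /max-age=(\d+)/i.exec(header);
  if (!m) return;
  store.set(host, { host, expiresAt: now + Number(m[1]) * 1000 });
}

function upgradeUrl(store: Map<string, HstsEntry>, url: string,
                    now: number): string {
  const u = new URL(url);
  const entry = store.get(u.hostname);
  if (u.protocol === "http:" && entry && entry.expiresAt > now) {
    u.protocol = "https:";
  }
  return u.toString();
}
```

Since the client still accepts any certificate its trust store vouches for, adding a custom root (as mitmproxy does) works fine against HSTS alone - which is the point the parent comment is making.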
Debian introduced HTTPS repos a while back as an option, but not by default. Other distros already offer it.
Debian, like most other distributions, has verified its updates cryptographically for ages and doesn't need SSL to do that. There are some arguments for using HTTPS anyway, but preventing malware injection isn't one of them (unless your package manager is exploitable).
Otherwise go ahead and disable DEP, ASLR, and other modern defense in depth and mitigations mechanisms used by Debian, of course, unless your OS is exploitable.
If you're connecting to a clearnet site, you probably need HTTPS _more_ than on a standard connection - you _explicitly_ have a node between you and the site that you don't necessarily trust (see, e.g., https://nakedsecurity.sophos.com/2015/06/25/can-you-trust-to...).
This is the biggest misconception about Tor. Tor provides anonymity, but any node (EDIT: any exit node) can read what you're sending if it's not encrypted. You need both.
That's definitely not true if the endpoint you're talking to is a *.onion. A connection to an onion service is encrypted all the way to the destination. That also means that if you were running, say, Node-RED with authentication and sending credentials "in the clear" (no SSL cert, because stupidity), they're not actually sent in the clear - they're encrypted to the public key corresponding to your onion address.
Now, if you're using Public Internet->Tor->Public Internet, then absolutely yes the last node CAN read the contents of your packets. In that case, you absolutely need appropriate encryption to hide the contents (sigh, not the metadata) of your packets.
I would still prefer HTTPS on a .onion. If Tor itself is popped, traffic can be rerouted or mirrored to another host. This has happened in a PoC and was fixed by a security update in one of the alpha releases. Additional fixes for hidden services are coming.
If the target is using https, you can see if the signature changes (there are addons for this).
Digicert will sign .onion domains, though the hidden site must be willing to share their identity with Digicert. I would love to see LetsEncrypt sign .onion domains, assuming they are willing to connect back to a .onion to validate the server.
The CA/B Forum rules only allow EV certificates for .onion, so even if Let's Encrypt wanted they couldn't give out .onion certificates without getting that changed first.
Oh right, that is a good point. I suppose Digicert is the only option for now then.
Perhaps the Tor team or an affiliate could set up a simplified CA with a public CA cert restricted to .onion that folks could install, as a workaround to browsers not trusting it by default.
Since then Fotis Loukos and I have drafted a ballot, which I believe he plans to introduce soon after asking a few other organizations to look it over.
You can subscribe to the cabfpub mailing list without becoming an Interested Party or Member. Only Interested Parties or Members can post to the list, while only Members can introduce or vote on ballots.
(Edit: Strangely, the reason for this is seemingly not that they're worried that the general public will make crazy suggestions, but rather that the general public will make patented suggestions, without being willing to license them according to the Forum's patent policy, and thereby sneak patented technology into the standards.)
Indeed, I too would love to have SSL certs for .onions and not have to bend over with an Extended Payment... Err, Verification check.
I thought about setting up boulder on tor, and start rolling it myself. But then again who'd trust me? This should be part of the Tor organization. I can't see my own system getting inertia, or put into TBB, or Firefox for that matter. It was hard enough for LE to be put in trusted CAs on machines.
Those are interesting preferences and perspectives. I will set up a site to counter them from my perspective and experiences.
To summarize, you are a dissident. I am a news reporter. You are giving me information about your government. Your life and the lives of your family members now depend 100% on the security of the Tor Proxy transport. Tor is a proxy transport and nothing more.
As a dissident, you have been trained by me to install addons that validate the signature of my HTTPS certs will not change. I also showed you how to do this using openssl s_client. When Tor is popped and routing you to your government hosts, you will see the SSL signature change. Per my instructions, you will cease all communication with me.
Without HTTPS, you are relying entirely on the transport for assurance of who you are talking to. This is neither appropriate nor acceptable for this type of communication.
PGP is not a mitigating control, because the handshake has completed and you are now downloading your state sponsored rootkit. It's too late by that point. The only thing we have to validate ID and allow or block application traffic is a certificate.
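The pin-and-compare check described above boils down to hashing the certificate the server presented and refusing to continue if the fingerprint differs from one recorded earlier (whether via an addon or openssl s_client). A sketch; the cert bytes and pin here are stand-ins, and a real check would hash the DER encoding of the actual presented certificate:

```typescript
import { createHash } from "node:crypto";

// Certificate pinning in miniature: record a fingerprint once over
// a channel you trust, then compare on every later connection.
// A changed fingerprint means the presented cert changed - the
// dissident's cue to cease communication.

function certFingerprint(der: Buffer): string {
  return createHash("sha256").update(der).digest("hex");
}

function connectionLooksSafe(presentedDer: Buffer,
                             pinnedFingerprint: string): boolean {
  return certFingerprint(presentedDer) === pinnedFingerprint;
}
```

The point is that this check is independent of the transport: even if the circuit is rerouted to a government-run host, that host cannot present a certificate matching the pin without the original private key.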
Exactly, they have to pop my machine too. They can't just take control of the traffic in the middle, which absolutely can be done in Tor with enough gov controlled guard nodes, as the arms race of patching has proven.
I can set up multiple canaries that they will have to pop and the fingerprint of one of those canaries is going to change or drop off the net.
> They can't just take control of the traffic in the middle, which absolutely can be done in Tor with enough gov controlled guard nodes, as the arms race of patching has proven.
I think you're misunderstanding something here, with onion services traffic is e2e encrypted and self-authenticated, as Matt explains:
> When you connect to an onion service, how do you know no one is MitM'ing you? Easy. It's impossible. The bad guy would have to be in your browser (more accurately: between the browser part of Tor Browser and the Tor process it runs in the background) or between the Tor process the onion service operator is running and the webserver it's pointing at. If you assume your Tor Browser hasn't been compromised, and you assume the onion service is being run intelligently, then a MitM attack is impossible. (And if the onion service isn't being run intelligently, can you really trust its operator to do HTTPS intelligently?)
>
> https://matt.traudt.xyz/posts/dont-https-your-o44SnkW2.html
I think where we are having a disconnect is that what Matt posted works when Tor is working as expected. Software has bugs. Tor has not been an exception to this. I watch their change logs for alpha and there are often bugs that affect this overall concept. They are patched quickly, but Tor nodes are not forced to update, nor is there a safe way for them to do so.
My point is that that is a single point of success. Any other web service I would cut some slack, but Tor is marketed as a means by which dissidents may communicate safely. Putting people's lives on a single point of success is not appropriate, especially when there are technical means to mitigate the risk.
If you connect to a clearnet website, the clearnet website is fetched by the exit node. If it's not HTTPS, the exit node can change the website however it wants.
> you can stay anonymous while making sure anyone can read what you are sending.
That depends a lot on what you're sending. Tor stops people from identifying you based on your IP address, but you can still identify yourself by logging in on http://not-encrypted.com.