Update: Can confirm Firefox Quantum with SOCKS proxy leaks the address. Oh dear!
Update 2: I didn't realize this is how WebRTC actually works. FF even has an entire page for tweaking this stuff: https://wiki.mozilla.org/Media/WebRTC/Privacy. I hate it when features like these, which at least in my case go mostly unused, have such critical weaknesses by design, and it's not announced anywhere with a big red danger sign.
But no, WebRTC added data channels. There is no good reason for them to be silent, and especially not to override the SOCKS proxy. In fact, some key people in the WebRTC group, when I pressed them, could not provide a single real use case for silent data channels.
Firefox is absolutely in the wrong to ignore your proxy settings, especially without getting consent first to start a call. It's a complete mess. Regardless of what the "spec" says, Firefox is responsible for implementing broken software that harms users.
Then again, so is STUN/ICE and basically every single thing that has to do with SIP/VoIP. It's like they go out of their way to be obtuse, come up with shitty standards, and then take glee in how bad it gets. As an example, look up the SIP Torture Tests: there's an RFC (RFC 4475) just to illustrate the moronic edge cases in SIP parsing, which at one point implies your software needs to be conscious to infer the intention of malformed messages.
t. been working in telecom for far too long
However, using Whonix for Tor, even if you install a random browser with WebRTC enabled, there is no WebRTC leak, because the workstation VM has no Internet access except through Tor. The gateway VM is not a router: there is no forwarding, and it's firewalled. It just exposes Tor's ports on a private internal network for the workstation VM.
And one can do the same for VPNs, using pfSense VMs as VPN gateways. Apps in workspace VMs have no Internet access except through the VPN client running in the gateway VM.
They are totally detached from actually implementing elegant or high-performance software. Actual engineers achieve this in spite of SIP's terrible decisions. Granted, many of those are inherited from HTTP. Tell me how many HTTP stacks handle comments and line folding properly...
It opens up security holes, too. One proxy interprets \r\r\n as one line break, another as two line breaks and the start of the body. Oops, now you can end up sending fun headers to your target, because they try to be flexible in accepting input.
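A toy sketch of that mismatch (both "parsers" here are hypothetical, and the header names are made up): one side splits strictly on \r\n, the other also treats a bare \r as a line break, so the very same bytes yield an extra header for one parser and an early end-of-headers for the other.

```python
# Two hypothetical "flexible" parsers disagreeing about \r\r\n.
msg = b"Host: example\r\r\nX-Injected: 1\r\n\r\n"

# Parser A: splits only on \r\n, so \r\r\n is one line break
# (with a stray \r left dangling on the first header line).
a_headers = msg.split(b"\r\n")
print(a_headers[1])  # b'X-Injected: 1' -- treated as a header

# Parser B: treats a lone \r as a line break too; the resulting empty
# line right after "Host" ends the headers, so X-Injected is body.
b_lines = msg.splitlines()
print(b_lines[1])    # b'' -- headers are already over here
```

If A is the front proxy and B the origin (or vice versa), whichever side sees X-Injected as a header can be driven by an attacker, which is the classic smuggling setup.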
IIRC there is essentially no way to implement SIP strictly to the standard because of all the broken implementations. You have to make judgment calls on certain parsing cases, and any way you decide will break one peer or another.
Text-based protocols invite abuse by designers and implementors alike. IETF compounds these by living in a fantasy world.
FWIW, I consider the canonical implementation of SIP at this point to be Asterisk.
I work on a SIP-derived protocol, P25 CSSI - it has all the issues of SIP, and more!
Actually we've seen people manage to fuck that up royally, too. Examples: Routers confusing MAC addresses starting with "6" with IPv6 packets. Various TLS implementations in proxies falling completely apart when they see an unknown version. Various TLS proxies falling apart when seeing an unknown extension. Far too many instances of "assert version==1" to list.
They dismiss all privacy concerns with "you can't have privacy in a browser" and "fingerprinting will work anyway, so we can't make it worse". It's a head-in-the-sand approach to privacy, and it's bad.
Even then, Firefox is simply wrong to tell people it'll use your proxy, then throw that out.
And none of this excuses overriding the user's expressed network connectivity.
When this was brought up, they immediately jumped to the excuses that fingerprinting can't be helped. And many jumped to saying hey you should use Tor Browser anyway. Total capitulation.
Maybe someone cared, but it's not the official stance from webrtc groups or browsers.
And I say that as someone who thinks WebRTC has some pretty cool use cases.
Can't wait to see how WebAssembly will be used against us.
This is already being worked on.
Computer vision adblockers are also being worked on.
Have you heard of Coinhive? They are using WebAssembly to mine cryptocurrency in the browser.
Yep. STUN is in the spec, and always has been. This has been a thing for years.
Most of its actions seem inexplicable and counterproductive when strong privacy and user protection should be their reason to exist and keep a dedicated user base.
The bigger problem is that exploding complexity in many areas is a huge concern for open source: it's becoming increasingly impossible for small teams to implement alternatives or come up with their own. This is going to reduce meaningful choice and leave users at the mercy of a mix of corporate and vested interests.
Did you file a bug report?
Using something as complex as firefox for anything important is just stupid.
> Your browser supports WebRTC! Your real IP address is visible to every website you visit.
> Web Real-Time Communication (WebRTC) is enabled by default in Firefox, Opera and Google Chrome, and enables video chat, voice calling and P2P sharing from within your browser.
> A neat trick, but it allows any website to instantly see your true IP address. The only way to avoid sharing your IP address this way is to disable WebRTC completely.
Nope, that's not my "real" IP address.
Reminds me a bit of this old story: http://sirkan.iit.bme.hu/~kapolnai/fun/bitchecker.html
Actual location is Kuala Lumpur, which was caught by the time zone... so I need to look into fixing that.
For those wondering, my setup is:
ProtonVPN via the ProtonVPN Mac Beta (before, I used Tunnelblick, which actually was more reliable; the ProtonVPN Mac Beta disconnects often)
AdGuard Pro, with DNSCrypt using the AdGuard Family servers
Safari, with Camera, Microphone, Location, and Notifications all set to deny by default.
Looks like there is a word missing.
To disable WebRTC in Firefox, set the about:config prefs "media.peerconnection.enabled" and "media.navigator.enabled" to false.
media.peerconnection.turn.disable = true
media.peerconnection.use_document_iceservers = false
media.peerconnection.video.enabled = false
media.peerconnection.video.vp9_enabled = false
media.peerconnection.video.h264_enabled = false
media.peerconnection.identity.enabled = false
media.peerconnection.identity.timeout = 1
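If you want these to persist, the same prefs can be collected in a user.js file in the profile directory (a sketch: the pref names and values are the ones listed in this thread, and user.js is re-applied on every start, overriding about:config changes):

```javascript
// user.js -- placed in the Firefox profile directory
user_pref("media.peerconnection.enabled", false);
user_pref("media.navigator.enabled", false);
user_pref("media.peerconnection.turn.disable", true);
user_pref("media.peerconnection.use_document_iceservers", false);
user_pref("media.peerconnection.video.enabled", false);
user_pref("media.peerconnection.identity.enabled", false);
```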
There is an extension that'll at least disable the IP address gathering (it doesn't appear to change all of the settings above, but it may have a similar effect, if browser.privacy.network.peerConnectionEnabled disables everything):
If you want the preferences locked at the application level, so they can't be overridden or changed at the profile level. Mainly important if you are managing a lot of systems.
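Firefox's AutoConfig mechanism does this; a sketch (the file locations are the conventional ones, and the prefs are from this thread):

```javascript
// defaults/pref/autoconfig.js (inside the Firefox install directory)
pref("general.config.filename", "mozilla.cfg");
pref("general.config.obscure_value", 0);

// mozilla.cfg (in the install directory root; the first line must be a comment)
lockPref("media.peerconnection.enabled", false);
lockPref("media.navigator.enabled", false);
```

lockPref() greys the pref out in about:config, so users can't flip it back per profile.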
That's pretty clever. Alternatively, they could have a unique file of garbage and run some seeders for it; then when someone connects to a seeder, it would likewise identify the same person. But the tracker solution is less work and probably almost as good.
One's IP address should be detectable to at least some degree with data from the packets making the request for the webpage. Some of this is remedied with what appears to be duplicative information further down the page.
DNS Address detection is done better by https://dnsleaktest.com/.
Torrent detection is also needlessly JS-driven, and done better at http://dev.cbcdn.com/ipmagnet/.
There are also some grammar errors confusing singular and plural in the text at the bottom of the page.
Written from Opera tho. So not saying it sucks :)
OpenVPN leaks DNS on every default Ubuntu installation I have tried. But I think it's actually Ubuntu NetworkManager's fault.
The WebRTC leaks discussed in this article are not prevented by OpenVPN either (last time I checked, which was a while ago). You have to disable WebRTC in the browser.
Incorrect. An easy and foolproof way of using VPNs is with network namespaces. You start the VPN in your init network namespace and then move the created device into a dedicated VPN namespace. OpenVPN has support for this because it allows you to execute a shell script after the VPN device has been created. Then you simply start your browser, torrent client, whatever in this namespace and you are completely safe:
1. If the VPN fails, then the only network device inside the network namespace disappears (modulo the lo device) and the programs in this namespace cannot use the internet.
2. Since the browser can only see the devices within the network namespace, the only IP it can see is the one assigned to you by your VPN provider (usually 10.x.y.z or similar).
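A sketch of that setup (needs root; the namespace name, netmask, and argument positions follow openvpn(8)'s up-script convention, but treat all the specifics as assumptions and adapt them to your config):

```shell
#!/bin/sh
# Create the namespace once beforehand:   ip netns add vpn
# Then run openvpn with:  --ifconfig-noexec --route-noexec --up <this script>

# OpenVPN calls the up script with $1 = tun device and $4 = assigned local IP.
vpn_up() {
    dev="$1"; addr="$4"
    ip link set dev "$dev" netns vpn                 # move the tun device over
    ip netns exec vpn ip link set lo up
    ip netns exec vpn ip addr add "$addr/24" dev "$dev"   # /24 is a placeholder;
                                                          # openvpn exports the real
                                                          # mask in $ifconfig_netmask
    ip netns exec vpn ip link set "$dev" up
    ip netns exec vpn ip route add default dev "$dev"
}

# Afterwards, anything started like this can only reach the tunnel:
#   ip netns exec vpn sudo -u "$USER" firefox
```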
DNS leaks can be prevented by using a generic DNS provider such as 8.8.8.8.
You mean leaking to Google doesn't count as leaking?
Your namespaces suggestion is interesting, but easy and foolproof?
"DNS leaks can be prevented by using a generic DNS provider such as 8.8.8.8."
... and you replied:
"You mean leaking to Google doesn't count as leaking?"
But I don't understand where the DNS leaks would be coming from if you are using an actual VPN for your entire network stack - wouldn't that tunnel all traffic (TCP and UDP) to your endpoint ?
How are you leaking DNS in that scenario ?
1) All network traffic should go through the VPN tunnel.
2) All DNS requests should be sent to the VPN provider's DNS server and not to the one configured in the OS.
If either of these two things isn't happening, then it's a DNS leak.
If I understood correctly, then mahkoh was saying that (2) doesn't matter if the host DNS is configured to use Google's public DNS server 8.8.8.8. That's what I called "leaking to Google".
¹ Or run your own recursive resolver
Yeah, that's known behaviour. I think it's working as intended from Ubuntu/NM's standpoint since that bug has been open for a while with no fixes. The one line fix for that is to comment out dns=dnsmasq in NM's config. This is the bug for reference: https://bugs.launchpad.net/ubuntu/+source/network-manager/+b...
There is no such line on either of the two leaking systems I just checked.
Obviously, self-marketing was the motivation for this test. However, it is still a helpful reference. If your provider doesn't even release a properly configured VPN client, you might want to reflect on whether to rely on the rest of their infrastructure.
>Also if anonymity is of "real" concern
You're digressing. Whatever a user wishes to anonymize their information for can be a valid concern but that does not necessarily require absolute anonymity.
The solution to that is to use memes and bad grammar. I am not kidding. Keep your messages really brief and have a community of people that talk in a very similar fashion to one-another.
The only reason I can imagine they're writing this way is to foil stylometry.
Another option is to use translation services, en->fr->ja->de->en. Reread message -- does it say what you mean? If yes, go for it! If not, modify as needed.
1. Run VPN software on host.
2. Download a widely used, generic VM image.
3. Route VM's entire network connection through host's VPN.
4. Do whatever you need to do, in the VM only.
5. Reset VM to initial settings after each use.
Am I missing anything?
B) Configure your firewall to block everything except the connection to the VPN entry point.
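Sketched with iptables (the server address, port, protocol, and tun interface below are all placeholders you'd swap for your provider's):

```shell
#!/bin/sh
# Kill-switch sketch: drop all outbound traffic except loopback, traffic
# inside the tunnel, and the handshake to the VPN server itself.
vpn_killswitch() {
    VPN_SERVER="203.0.113.10"   # your provider's entry point (example address)
    iptables -P OUTPUT DROP
    iptables -A OUTPUT -o lo -j ACCEPT
    iptables -A OUTPUT -o tun0 -j ACCEPT                               # inside the tunnel
    iptables -A OUTPUT -d "$VPN_SERVER" -p udp --dport 1194 -j ACCEPT  # the tunnel itself
}
```

With rules like these, if the VPN drops, everything else simply stops reaching the network instead of falling back to your real interface.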
So, more appropriate title would have been "23% of VPN providers leak user IP" :)
Assuming we're using IPv4, the default gateway is a VPN and the machine is behind a NAT: Any outside service (e.g. STUN server) would see the VPN's IP address. How would the browser even technically be able to know the public (i.e. the NAT's) IP address?
However, the WebExtensions API allows tweaking this via the webRTCIPHandlingPolicy to only reveal the public "interface" IP address.
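In a WebExtension with the "privacy" permission, that looks roughly like this (the policy string shown is one of the documented values; this snippet only runs inside an extension's background context):

```javascript
// background script; needs "privacy" in manifest.json permissions
browser.privacy.network.webRTCIPHandlingPolicy.set({
  value: "default_public_interface_only" // hide local/private candidate addresses
});
```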
FWIW, I'm always connected to a VPN and I have configured my macOS [0] and Android [1] firewalls to drop any connection other than the VPN's.
0: Wrote it down here: https://jomo.tv/security/pf-prevent-traffic-bypassing-vpn
1: Quite self-explanatory: https://f-droid.org/packages/dev.ukanth.ufirewall/
Really appreciate your work on this!
I have VPN on my home router with Tomato firmware. All of my devices pass this flawlessly.
1) If I wanted to do that, I'd use OpenVPN rather than strongSwan. They're both written in C, but I get extra flexibility by using OpenVPN. Their "TLS is suspect" stance doesn't hold water in my view.
2) When I don't want to set up my own server, OpenVPN allows me to use or even chain lots of third party servers and create my own nested VPN topologies. Installing an OpenVPN client on my phone or tablet takes a few minutes.
So, to summarize, Algo would be interesting if it didn't introduce a dependency on memory-unsafe code, or at least minimized such dependency. But it doesn't. On the client, I do not see why I should trust Apple's IPsec implementation (racoon?) more than the OpenVPN client, which is another point they tried to make. As it currently stands, it does not compare favorably to OpenVPN in any way.
But then again I don't use popular browsers that cram in fancy new features every week to expose new leaks and attack surfaces.
Uploading videos isn't "consuming". Writing blogs/articles isn't "consuming". Contributing to open source projects isn't "consuming". None of those activities requires forwarded ports.
And that's bad because it leads to centralization. And centralization leads to perverse incentives to spy and censor.
I'm really not sure what point you're trying to make, or how you're defining "participating" in this context.
But why? You probably have a connection of tens to hundreds of megabits that is always on. You have powerful computers that wouldn't even notice a webserver running. Buying a domain costs $8, and pointing it at home is as simple as changing the DNS entry a couple of times a year or using a DynDNS service.
And what you don't have is a need for all the complexity and requirements that most people automatically assume they need, just because they're drowning in them at their day job.
Hosting from home is more than enough for a personal website. It cuts the Gordian knot of deciding what types of speech and content will be allowed on any given service. It prevents the perverse incentives of spying on and selling users. It allows you to add things to your site on a whim just by copying a file to your web directory or opening a text editor. All the tools of your operating system, this refined and extremely usable software, are now just there. Now you don't need a database. No need for a CMS. No need for scaling or containers or 99.9999% uptime.
Hosting from home allows you to participate in the 'net in a way that is just natural. When there's not 5 layers of abstraction between you and the web you really can participate and build whatever you want.
And since you don't need all that abstraction, dynamic content, and CMS (your OS is the CMS!) the security problems everyone loves to jump on simply vanish.
Say you want to monitor your logs: you don't need to go install some dynamic language parser and prettifier full of attack surfaces. You just tail the log and grep. You can open it in OpenOffice if you really have to have a GUI, and you can set alerts as easily as tailing a log.
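E.g., a rough one-liner for the "what's being hit most?" question, wrapped in a function here; it assumes a common/combined-format access log where field 7 is the request path, and the log location is just an example:

```shell
#!/bin/sh
# Count hits per URL from a common/combined-format access log,
# where awk field $7 is the request path.
top_urls() {
    awk '{ print $7 }' "$1" | sort | uniq -c | sort -rn | head
}
# Usage: top_urls /var/log/nginx/access.log
```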
You see day to day the types of bots, people, and referrers, and how they come to your site, all without Google Analytics. You can respond to people using your site in real time; I love adding personal messages for people as they browse my site(s).
This is what I mean by participating in the net. Getting down into it. It's a beautiful thing, and it solves so many problems that can't even be approached when you're using someone else's computer and someone else's connection.
And if you're in the USA you completely bypass the third-party doctrine and actually have an expectation of real privacy.
I just don't get the hostility to the concept I see on HN.
Cool, and then when I post something to my self-hosted blog that pisses someone off, my home internet gets DDoS'd and I lose my Internet access. It has happened before on IRC. I banned a user because they were spamming racial slurs, and they responded with a DDoS. I was offline for an hour while struggling to get someone on my ISP's support line that understood what it meant to force my IP address to change. Now I use an IRC bouncer in AWS to hide my home IP address.
> It allows you to add things to your site on a whim just by copying a file to your web directory or opening a text editor. All the tools of your operating system, this refined and extremely usable software, are now just there. Now you don't need a database. No need for a CMS. No need for scaling or containers or 99.9999% uptime.
I think you misunderstand the reasons people use a CMS. It makes it so I can just fill out a single text box, click "Post", and have all the indexes and links on the entire site update automatically to include that post. I can allow people to write comments. I can give users of my site the ability to search it.
And maximum uptime is still important. My home internet died shortly after I got to work a couple days ago, and I wasn't able to fix it until I got home. 10 hours of straight unplanned downtime is unacceptable for any server, even a personal website, IMO.
> And since you don't need all that abstraction, dynamic content, and CMS (your OS is the CMS!) the security problems everyone loves to jump on simply vanish.
The truth is quite literally the opposite. If I'm hosting it myself and my server gets hacked, my entire home network is at risk.
> Say you want to monitor your logs, well, you don't need to go install some dynamic language parser and prettifier full of attack surfaces. You just tail the log and grep.
Uh... people can tail and grep logs on any server. You misunderstand: there's a reason people use dynamic language parsers and prettifiers. Looking at raw logs is awful. It's far easier to fire up a log analyzer and see "Oh, there are a lot of people making requests to X resource and it's creating a bottleneck."
> It's a beautiful thing and it solves so many problems that can't even be approached when you're using someone else's computer and someone else's connection.
It creates more problems than it solves. It puts my home network at risk. It makes me in charge of dealing with hardware failures. And if whatever I'm hosting gets popular, I can't scale.
> I just don't get the hostility to the concept I see on HN.
Because what you're proposing shows extreme naivete.
I've done it for 20 years without any of the problems you describe. I've never been DDoS'd at home, but I suppose if you run in some circles it happens once or twice in a lifetime. On the other hand, the servers and upstreams of the paid VPS providers I run other websites on have been DDoS'd, usually once or twice a year. AWS isn't immune from other types of outages either. It's someone else's computer.
>Now I use an IRC bouncer in AWS to hide my home IP address.
If you can do that you can use simple ssh port forwarding of 80 to AWS (or whatever) too.
>10 hours of straight unplanned downtime is unacceptable for any server, even a personal website, IMO.
I've been offline some for tens of hours too but it didn't matter at all because I'm not running an ecommerce site or some business. Is your personal site really that important that it can't ever go offline for half a day? I'd argue that it isn't a personal site if you're using it as a reputation device for work or portfolio or the like. The inability to separate work from life complicates things.
> and see "Oh, there are a lot of people making requests to X resource and it's creating a bottleneck."
And if you don't bring the work mindset home and run all those pretty tools on your server with the $cms turnkey of the month, you don't ever run into bottlenecks, because you're not running excess crap with 5 more layers of abstraction creating things dynamically when there's no reason to.
> I can allow people to write comments. I can create the ability for users of my site to search it.
A comment system is a bit of a challenge with my mindset. You could always just embed something like Disqus, but I know that's not a strong argument. I personally implemented it with a Perl script parsing the logs and editing text files and iframes, plus 1 line of JS. While parsing, the Perl script only accepts characters from a list of something like 30 that are harmless. I admit this is definitely not for everyone.
As for search, you and I both know that everyone only uses Google anyway, and it'll work better than whatever you implement.
> If I'm hosting it myself and my server gets hacked, my entire home network is at risk.
The biggest security hole for everyone is using their browser for EVERYTHING, running JS apps instead of self-hosting and just using a native application on their OS. Some 0-day for nginx or $serversoftware is far less likely than the constant stream of browser exploits, and far more likely to be patched quickly.
You keep saying it creates a security risk at home that doesn't exist otherwise. But that's only if you make it that way, and even then it's orders of magnitude less of a risk than simply running a modern browser.
I see commercials everywhere all the time, and it's #1 or #2 on most VPN review websites.
I was very tempted to switch, especially since their firmware is available for the newest/coolest routers out there; but I kind of got used to ExpressVPN over the years, so I went with them, and their firmware for the NETGEAR Nighthawk R7000 is very easy to use. Glad to see ExpressVPN is not leaking, and I continue not to find any bad news about them (versus HideMyAss, for example, LOL).
That's because of their marketing budget.
> ... and its #1 or #2 on most VPN reviews websites.
That's because of their affiliate programs.
For testing WebRTC and other leaks, including fingerprinting, use https://browserleaks.com/ instead.
(and it is unable to capture any exposing data for my browsers on any of my devices)
Not good that it leaked anything but at least the public IP is hidden by their software.
The only possible solution is a piece of software that guarantees end-to-end privacy by literally standing guard at each end (from the moment you connect to your network with your hardware MAC address exposed, to the final moment when a web page is retrieved for you from your destination website).
Shameless plug: my project proposes to do exactly this. https://qwaitwhat.github.io/
It can be very useful, but can also cause a lot of problems.
Both the founder (Andrew Lee) and the CEO (Ted Kim) are known in the industry, and they have made their views on encryption and authoritarianism pretty clear in interviews, articles, and even full-page ads in the NYT and WaPo arguing for broadband privacy and encryption.
PIA also seems pretty profitable; they certainly have enough to contribute to various open source projects, join pro-net-neutrality lobbying efforts like Fight for the Future, save Linux Journal from death (considered an "extremist forum" by the NSA's XKeyScore), and pay to keep freenode ticking over (they also do loads of glitzy events for the Korean-American community).
You could argue that all of this is an elaborate hoax of course, but you could do that with anything.
Yasha Levine has been on a tear ‘exposing’ this for the last few years, but it’s not personally shocking to me.
Did you know that the US government funded Signal, too?
Other parts of the US Government appear to have a desire to know everything about everyone all the time.
The "US Government" is not one thing, it's an enormous collection of agencies, bureaus, and humans, who sometimes have desires that are at odds with one another.
It's also not even mentioned in the article linked, not sure why you brought it up at all tbh.
So yeah, if a service can't be profitable, it can't be trusted.
It requires a vast number of hospitals to ensure there is one close enough to wherever you get ill or injured, and they all have to be staffed by lots of different highly qualified specialists who are in as regular practice as possible.
If you were going to require that they be profitable, there simply are not enough rich people for the doctors to work on in order to stay in good practice, or to pay for enough suitably equipped hospitals to ensure a short travel time in an emergency.
The majority of hospitals with ERs in the US are non-profits that receive federal subsidies to help them exist.
So yeah, charities can be ok.
It isn't always as simple as looking for an obvious motive, though I would agree to always keep an eye out for where the money comes from and if there is a game and if you are a mark. However that should apply whether or not something looks profitable on the surface, otherwise you drop all cynicism the moment someone tells you the right story.
So perhaps I'm biased (even though I'm not in nonprofit anymore), but I would put the chance of an NSA honeypot at close to 0%...
PIA put up £20,000 to keep them going after a tax mix-up dropped a large bill on the foundation.
Of course, it could conceivably all be government funded philanthropy to make them look like the good guys, but there’s no evidence to suggest that.
We already have Emacs for that. ;-)
What is being described is actually a proxy.