On the one hand, ubiquitous encryption is simply required for security on the internet. Things like Let's Encrypt and browser warnings on plain HTTP are great improvements.
On the other hand, the owner of a network has some right to look into the packets on that network. Especially if the owner of the network also owns the end-points of that traffic.
My main use-case here isn't corporate networks; snooping there makes me uncomfortable.
Really, my issue is stuff on my own network. I want to see what my TV sends home. Same with an Amazon Echo, or really any IoT thing. Yet if they all use SSL and don't allow me to add a root CA, I can't look at what they're doing.
A user has no control over an amazon echo. You can't modify the software because the bootloader is locked down. You can't inspect the traffic because it is SSL cert-pinned. Amazon can push updates to it at any time.
All a user gets to do is decide whether it is turned on, and whether it gets a network connection.
Really, what I would want to see is the option to install a CA cert on any device I own.
At the same time, that is a terrible idea. Every 14-year-old with Google is going to find some Stack Overflow answer that'll tell them to MitM their TV to do some simple thing.
The devices we own should be acting in our own best interest; we shouldn't need to treat them as adversaries.
A large part of trust is auditability. And a large part of auditing a device is looking at its communication.
What we are kind of running into is the security implications of a debug-interface.
Your point about dissidents needing secure access is a strong one. Any way to MitM a connection for audit purposes can be repurposed into a MitM for surveillance, given enough coercion.
However, I don't think this applies to a network connection between my TV and Samsung.
Once a surveillor coerces me to give them access, I can disconnect my TV from the internet.
The same goes for an Amazon Echo, or a juice press with WiFi. It is different with WhatsApp, Facebook, and internet banking. These are much more essential, so it is more important that it is hard to MitM them even if the user gives consent.
Sure, but you can also look at the device's communication if you have administrative access to it yourself, rather than trying to MITM it. I suppose this could be a vulnerability itself if done poorly, but users have administrative access to their PCs; why should other devices they own be any different?
>These are much more essential, so it is more important that it is hard to MitM them even if the user gives consent.
If the user gives consent, they should be allowed to inspect the connection. The device's UI can make it _very clear_ that they're about to do something dangerous, but it's their device and should be their choice.
This reminds me of Programming Satan's Computer (PDF link), the first time I understood that while programming is hard, security programming is absolutely insane.
I don't agree. If you let a guest use your WiFi network for instance, there's no inherent moral right for you to intercept their emails.
However I generally agree with the principle that if you own a device, you have the right to learn what it's doing, and the trend towards black boxes is concerning.
You might have purchased a particular device, but you generally don't have the rights to the software or firmware that runs it.
If you're truly concerned, you should support manufacturers that leave their system open by design, or support open source devices, and contact manufacturers describing your concerns.
But the tradeoff wars of freedom vs. security were lost long ago. The idea that one should shoulder any blame for one's choices is lost on a generation that needs YouTube videos to cut an onion. Likewise, a significant number of individuals can't be bothered to change their router password from 123456, and an even greater number of folks think the government needs to regulate the internet to keep it fair and safe for everyone. Forced encryption with no choice by design is a feature, not a bug.
Use open source or
True, I don't have a right to see their source code. But decent laws shouldn't prevent me from reverse-engineering the code (DMCA is not a decent law).
Moreover, access to code is orthogonal to control over devices one owns. If I own a device, I should be the one who has ultimate control over it.
Legally this is correct in certain jurisdictions. But I'd say this is similar to arguing that you have no right to know the ingredients in food you buy.
Which is part of why the more paranoid of us steadfastly refuse to own such devices.
My smart TV (I am ashamed to admit I have one) is really useful. Very little about the idea of a TV with built-in Plex support requires it to be totally locked down. Hence it is a business decision that could be competed or regulated away.
(I was going to make the same argument about a TV with built-in Netflix support, but I believe the DRM requirements of Netflix require quite a bit of lock-down.)
Possible items for its agenda:
- Right to Repair
- Right to Tinker
- Right to "Pwn Ur Own"
- Hardware sellers required to deliver firmware source to buyers
--- including build scripts
--- including device-specific signing keys
--- even for cars and tractors
- Public APIs required for public-facing services
- Right to non-backdoored strong encryption
- Formal requirements for custodianship of PII
- Formal requirements for security in IoT devices
- Ban on local monopolies on wired telecoms
- Definitions for terms used in advertising (e.g. "5G")
- Establish a separate USPTO regime specifically for computer programs
For a while I've been wondering if people would like me to be their spokesperson for such issues. I have a lot of experience, and I think I have a mind for politics as well. I think it's really critical we get this right at this point in history, and I would love to work with others for such a cause.
If you have any advice on getting started, I would humbly accept it.
That, or you can buy a commercial display instead of a TV.
Might need to invest in a large Faraday cage
What's more important is the possibility of having a camera-less and microphone-less smart TV, or of performing a microphonectomy and a camerectomy on your smart TV.
The last carrier I worked for was running trials of a "virtual CPE" that replaced your home gateway with a much dumber and cheaper device that effectively extended your home LAN to the local exchange (at least), where the actual isolation, filtering and NAT were performed.
I switched home providers when I left there, and if my current provider ever goes the same route I'll drop my own firewall in front of my LAN.
(Theoretically there is already relatively little isolation between your LAN and parts of the carrier network if you have VoIP or IPTV, but in this case I happen to know who tests that equipment and have a very good idea of what it really does, because I used to work in that team a long time ago...)
I'm with you 100%. The reality is, though, your only choice is to not run those devices with access to the internet. A TV should not require internet access to be usable. I won't use an Echo, and IoT devices are isolated to their own internal network without WAN access.
Without those we might as well go back to storing content locally, which, let's face it, is mostly retrieved quasi-legally through BitTorrent.
But at least it's always available and will not disappear between the moment you first watch it and the moment you want to show it to your spouse.
I really expect (and hope) to see a resurgence of torrenting in the wake of ongoing balkanization of streaming platforms.
Agreed. A lot of people think it's still the torrent dark ages, where you need to search for individual torrents like a cave-person. If so, google Sonarr and Radarr, and get a VPN!
You still have to trust the manufacturer for that to work, but at least you're not just left wide open. (I mean, you still could be, but it seems less likely with this model.) And if it doesn't stay up to date, you can toss the box and plug in a new one.
Honestly, who cares? I mean, it's a TV. It should not need Internet to work, and it definitely shouldn't need software updates. Displaying video on a screen is a solved problem.
The very fact you can use "software update" and "TV" in the same sentence signals a pretty big problem - a problem of companies selling you TV as a service, and customers accepting half-done pseudo-products.
Don't make the mistake of thinking you are a representative of all customers. YOU may not care about connectivity, but others, many others, do.
As long as the TV has input ports, I can use devices other than the TV itself to get content, so I'm not really concerned about that.
I bought the TV because it was good at being a TV - that is, it is OLED, high res, good upscaling, etc. The "smart" stuff I could take or leave, but I'm enjoying the convenience of it while it works.
Security isn't about being perfect, so stop pushing that false narrative. It's about being good enough, and plenty of companies making smart TVs can certainly become "good enough".
I prefer to just have a screen and a box though. I cut the wifi out of my TV and only plug it in to update firmware a few times/year.
My whole point in my post is that I don't want devices that I cannot look into and control. Unfortunately that's become incredibly difficult if not impossible. Personally, the line I've drawn does not include my TV. It's not that important to me to have an integrated TV system that I can't control.
Are there enterprise versions of all these devices that we can buy instead?
You may find evidence of a malicious act after it has happened, but often it's too late.
If Amazon is directly responsible for the Amazon Echo's malicious act, you can blame Amazon. But if the attacker was a government spy agency, you're out of luck.
Just don't use these devices, which cannot be examined and thus cannot be trusted.
What worries me is that the full chain (Client-software -- Network -- Server-software) is totally opaque and immutable at the discretion of a vendor. That means we are left at the mercy of the vendors.
I'm hoping for laws like 'Right to Repair' to help with this. The alternative would be to go full Richard Stallman, and I still think that is too radical.
Years ago sticking these types of devices on a trusted home LAN would have been unthinkable.
The best solution to the privacy problem is to do research and exercise consumer choice about what sorts of devices you purchase.
Schools, financial institutions, and more will pay big bucks to web gateway vendors who will help them deploy man in the middle attacks on their own machines, employ blacklists or whitelists (even on Google search terms not just at the DNS level), scan traffic for SSNs, and so on. It's not a dead market (quite the opposite, startups like Zscaler are fetching unicorn valuation).
It also encourages terrifying but legal behavior for employers like monitoring which subreddits you read or what kind of YouTube videos you watch or how much time you spend slacking off at work.
The arms race between security and exploitation isn't likely to stop, and I have no confidence that corporations with sensitive data will willingly take a privacy-granting approach when vendors promise them unmatched security by decrypting traffic.
I think the two viable approaches are educating the public that your work machine is not private, or looking to lawmakers to step in (but let's be real, that option is unlikely).
During my time working for one of these web gateway vendors, I became highly sensitive to what browsing happened on my primary operating system (which had company certificates installed), and what went on my development VM (which I set up myself without corporate certificates).
However, the huge problem is that employees are completely left in the dark about this privacy invasion... only the tech-savvy ones notice and understand it.
At the time, I thought it seemed a bit heavy-handed: just use DPI and you'll get the same results. This article is making me think he was very prescient on the matter.
> We conclude that malware's usage of TLS is distinct from benign usage in an enterprise setting, and that these differences can be effectively used in rules and machine learning classifiers.
Disclaimer: I work for Cisco
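A toy rule in the spirit of that quoted conclusion might look like the sketch below. It flags TLS flows whose observable metadata (no decryption needed) looks unlike typical enterprise browser traffic; the feature names and the particular rules are illustrative, not from the paper.

```python
# Illustrative rule-based classifier over TLS metadata. Real systems use
# many more features (cipher ordering, extension lists, cert chains) and
# often machine learning; these three checks are just a sketch.

LEGACY_CIPHERS = {"TLS_RSA_WITH_RC4_128_SHA", "TLS_RSA_WITH_3DES_EDE_CBC_SHA"}

def looks_suspicious(flow):
    """Flag a TLS flow whose unencrypted metadata is atypical for browsers."""
    return (flow["cipher"] in LEGACY_CIPHERS   # outdated cipher suite offered
            or flow["self_signed"]             # self-signed server certificate
            or flow["sni"] is None)            # no SNI hostname at all

benign = {"cipher": "TLS_AES_128_GCM_SHA256", "self_signed": False, "sni": "example.com"}
shady = {"cipher": "TLS_RSA_WITH_RC4_128_SHA", "self_signed": True, "sni": None}
assert not looks_suspicious(benign)
assert looks_suspicious(shady)
```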
This is a fundamental evolutionary cat & mouse game that's impossible to win; antibiotics & bacteria, toxins in prey & toxicity resistance in predators, etc.
That being said, this is at the OS level. An app such as Firefox could still override those settings or provide their own implementation.
One reason organizations use packet inspection is to protect their staff, customers, and vendors from malicious actors who could cause data breaches leading to huge privacy issues.
Privacy over security? The right balance must be found.
It can't be both at once. Either you have multiple layers because both the appliance and the endpoints are independently secure and the attacker has to compromise both, or you don't monitor/secure the individual endpoints and the appliances become a single layer / single point of compromise.
And if the appliances can see all the plaintext of everything then they're a single point of compromise even if the endpoints are otherwise secure, because the attacker can still read all the secrets through the man-in-the-middlebox.
What works is to leave each thing to what it's good at. The endpoints are good at inspecting the plaintext, because they inherently have to have it anyway and they have the context to understand what it's supposed to look like. So you don't end up interfering with a newer, more secure protocol because the middlebox doesn't understand it. And plaintext is sensitive data so the fewer things that have access to it the fewer things you can compromise to get access to it.
What middleboxes are really good at is certain types of access control, e.g. blacklisting malicious IP addresses for outgoing connections, or whitelisting source and destination addresses and ports for incoming connections. They keep your local IP cameras off the internet even if the cameras "should" be secure on their own.
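That division of labor is simple enough to sketch. A minimal model of middlebox-style access control, with illustrative addresses from the RFC 5737 documentation ranges:

```python
# Minimal sketch of middlebox access control: egress uses a blacklist of
# known-bad destinations; ingress uses a whitelist of (address, port)
# pairs. All addresses here are illustrative documentation-range values.

BLOCKED_EGRESS = {"203.0.113.7"}           # known-malicious destination IPs
ALLOWED_INGRESS = {("192.0.2.10", 443)}    # only the public web server

def allow_egress(dst_ip):
    return dst_ip not in BLOCKED_EGRESS

def allow_ingress(dst_ip, dst_port):
    return (dst_ip, dst_port) in ALLOWED_INGRESS

# The IP camera at 192.0.2.55 never appears in ALLOWED_INGRESS, so it
# stays unreachable from the internet even if its own firmware is insecure.
assert allow_ingress("192.0.2.10", 443)
assert not allow_ingress("192.0.2.55", 80)
assert not allow_egress("203.0.113.7")
```

Note that neither check needs the plaintext: addresses and ports are visible even on fully encrypted connections, which is exactly why this job suits a middlebox.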
As in the good old post "What colour are your bits" regarding the subject of copyright, the computer is colorblind when it comes to privacy vs. security tradeoffs. You seem to see color, believing compromise for security to be acceptable, and hoping you can allow your lawful and good security inspections to occur while disallowing nasty privacy invasion.
The computer doesn't see color. It is impossible to build a security protocol that will distinguish between good third parties and malicious third parties. "Good security controls" come down to trusting people to do the right thing, and when there's big money coercing companies to do the wrong thing, the right thing too often loses.
This way, an employee could see whether their employer is MitM-ing their connection to FB / reddit.com / pornhub / their bank. Based on this, they could complain to their employer for unreasonable MitMing, and serve as a weak detection point for compromise of the company root CA.
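That detection can be as simple as comparing the certificate the network hands you against a fingerprint recorded out of band (say, from your phone on a network you trust). A sketch; in practice the DER bytes could come from the stdlib `ssl` module via `getpeercert(binary_form=True)`:

```python
import hashlib

def fingerprint(der_cert):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def is_intercepted(der_cert, pinned_fingerprint):
    """True if the certificate presented on this network differs from the
    pin recorded on a trusted network -- the signature of a TLS-intercepting
    middlebox. (A routine certificate rotation also trips this, so treat a
    mismatch as a hint to investigate, not proof.)"""
    return fingerprint(der_cert) != pinned_fingerprint

pin = fingerprint(b"real-cert-der-bytes")        # recorded out of band
assert not is_intercepted(b"real-cert-der-bytes", pin)
assert is_intercepted(b"forged-cert-der-bytes", pin)
```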
Maybe you'll download your own Chrome, but that silently gives you their hacked version. The SHA256sum on their website has also been tampered with. Fine, you say, you'll download the source code and compile it yourself. But the compiler has been tampered with to detect when it's compiling Chromium, and adds the IT department's hacks.
You cannot trust a client you do not fully control.
Sometimes you either have your own device or you trust your employer to not directly lie to you.
My ISP uses the User-Agent header in outgoing requests to guess how many computing devices I have at home, and tries to charge money if it's more than an undisclosed limit. This of course only works for plain HTTP, but there are still enough unencrypted sites out there that my ISP has an opportunity to intercept a request at least a couple of times a day.
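The counting trick only works because plain HTTP leaks its headers. A rough sketch of what the ISP might be doing (the field names and the heuristic itself are guesses on my part):

```python
def estimate_devices(http_requests):
    """Crude device estimate: count distinct User-Agent strings seen in a
    subscriber's plaintext HTTP requests. Over-counts when one device runs
    several clients, and sees nothing once traffic moves to HTTPS."""
    return len({req.get("User-Agent", "") for req in http_requests})

traffic = [
    {"User-Agent": "Mozilla/5.0 (Windows NT 10.0)"},
    {"User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 14_0)"},
    {"User-Agent": "Mozilla/5.0 (Windows NT 10.0)"},   # same PC again
]
assert estimate_devices(traffic) == 2
```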
Meanwhile, my country is just beginning to roll out a system that detects the SNI hostname in encrypted connections, in order to block illegal sites that hide behind Cloudflare. Fortunately they can't spoof certificates on the public internet, so users just get a connection error. Too bad Cloudflare supports ESNI now ;)
Nobody is under any threat of prosecution for talking about our ridiculous censorship regime, and the surveillance side of the program is probably no worse than in any other developed country.
Which isn't much of a compliment, but at least we're not China-level evil -- just incompetent. DPI for blocking SNI hostnames is a particularly annoying way to waste taxpayers' money. It's almost as if they timed it to coincide with wide availability of DoH and ESNI!
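For the curious: the SNI those DPI boxes match on travels in cleartext in the TLS ClientHello, so no decryption is involved. A minimal sketch that builds a toy ClientHello and pulls the hostname back out the way a DPI box would (the wire format follows the TLS RFCs, but this is heavily simplified; a real parser must handle fragmentation, other extensions, and malformed input):

```python
import struct

def build_client_hello(hostname):
    """Build a toy TLS ClientHello record carrying an SNI extension --
    just enough structure for the parser below, not a valid handshake."""
    name = hostname.encode()
    sni_entry = b"\x00" + struct.pack("!H", len(name)) + name       # name_type 0 = host_name
    sni_list = struct.pack("!H", len(sni_entry)) + sni_entry
    sni_ext = struct.pack("!HH", 0x0000, len(sni_list)) + sni_list  # ext type 0 = server_name
    extensions = struct.pack("!H", len(sni_ext)) + sni_ext
    body = (b"\x03\x03"                              # client_version
            + bytes(32)                              # random
            + b"\x00"                                # empty session_id
            + struct.pack("!H", 2) + b"\x13\x01"     # one cipher suite
            + b"\x01\x00"                            # null compression
            + extensions)
    handshake = b"\x01" + len(body).to_bytes(3, "big") + body       # type 1 = client_hello
    return b"\x16\x03\x01" + struct.pack("!H", len(handshake)) + handshake

def extract_sni(record):
    """Pull the SNI hostname out of a ClientHello, DPI-style: the name is
    in the clear, so nothing needs decrypting."""
    if record[0] != 0x16 or record[5] != 0x01:           # handshake record, client_hello
        return None
    pos = 9 + 2 + 32                                     # skip headers, version, random
    pos += 1 + record[pos]                               # session_id
    pos += 2 + struct.unpack_from("!H", record, pos)[0]  # cipher suites
    pos += 1 + record[pos]                               # compression methods
    end = pos + 2 + struct.unpack_from("!H", record, pos)[0]
    pos += 2
    while pos + 4 <= end:
        ext_type, ext_len = struct.unpack_from("!HH", record, pos)
        pos += 4
        if ext_type == 0x0000:                           # server_name extension
            name_len = struct.unpack_from("!H", record, pos + 3)[0]
            return record[pos + 5 : pos + 5 + name_len].decode()
        pos += ext_len
    return None

assert extract_sni(build_client_hello("forbidden.example")) == "forbidden.example"
```

ESNI/ECH closes exactly this gap by encrypting the extension, which is why its arrival is such bad timing for an SNI-matching censor.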
The IP protocols have some expectation of end-to-end packet delivery. Over time we found ways in which networks could be kept "working" with this requirement relaxed, except that what could be known to "work" was just whatever was tested by the manufacturers of various middle-boxes, making change and development of new ways of solving problems harder than it should be.
The less visibility middle-boxes have into what the traffic is, the less they are able to selectively screw things up, and the internet will be more reliable for it.
Assumption 1: Machines on your network are already compromised and fully owned by a sophisticated and extremely difficult to detect rootkit. This is true of every large business. There is always that guy who will click on any link or open the document from what appears to be their co-worker.
Assumption 2: APT tries to disguise their traffic as ordinary web traffic, because anything else is suspicious.
Assumption 3: You have massive legal liabilities if your data is exfiltrated.
Being able to do DPI and pattern matching on all TLS traffic (and firewall off anything you can't DPI) is pretty much mandatory.
Which is another reason why DPI is ineffective. The smart malware will notice that its connection is presenting a custom root certificate rather than the expected one and not proceed with its suspicious activities (if not deploy some kind of steganography). Then the same "that guy" will plug his personal phone into his computer, and now the malware has an unmonitored cellular data connection to the outside on a machine that's also connected to the internal network. Or a compromised laptop will hook up to the WiFi of the company on the adjacent floor, or to the coffee shop's WiFi next door.
In theory you can build a Faraday cage around your space and then strip-search employees for digital devices at the door, but if your data is that important then you probably ought to just not be connected to the internet at all.
And if the malware doesn't work because it has certificate pinning, well, that's a win too. It's not a 100% solution, but you can significantly raise the bar for your attackers.
The theory behind TLS MITM is that it's an extraordinary and dangerous method that could be justified if sufficiently effective. If there are a dozen common ways to route around it, the risk is more than the benefit.
A VPN can't fix it, because a compromised endpoint would be able to choose which traffic it sends over the VPN. Whereas if the endpoint isn't fully compromised, then you could be doing whatever scanning the middlebox does on the endpoint itself, without centralizing a single point of compromise for the entire network.
> And if the malware doesn't work because it has certificate pinning, well, that's a win too.
It may not make an outside connection, but that doesn't mean it doesn't work. It could still infect every machine on your internal network. What's the chance that none of them are ever in range of a public WiFi?
This is before even getting to the issue of steganography. Information theory says that if your legitimate communications contain zero entropy then they can be encoded into zero bits, i.e. you don't need a network connection at all, but if they contain nonzero bits of entropy then an attacker can encode that much arbitrary data into the stream and still be indistinguishable from legitimate data.
So the whole thing is inherently a cat and mouse game. A lazy attacker may use a data pattern that isn't found in the legitimate data and then a middlebox vendor may find it and use that to distinguish their traffic, but as soon as they do the attacker can stop using it. The longer the game is played, the better the attackers get at making their data indistinguishable and the fewer remaining undiscovered ways to distinguish it that it's possible to find. In the limit there are none left, the data is completely identical to legitimate data with the same amount of entropy, the attacker is only assigning different meaning to it at the endpoints.
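The entropy argument above can be made concrete with a quick Shannon estimate over a byte stream (an i.i.d. byte model; a sketch of the idea, not a real detector):

```python
import math
from collections import Counter

def entropy_per_byte(data):
    """Shannon entropy in bits per byte, treating bytes as i.i.d. samples."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

# A constant stream has zero entropy: there is no slack to hide data in.
assert entropy_per_byte(b"\x00" * 64) == 0.0
# A uniform stream is at the 8-bit maximum: an attacker can replace it
# with arbitrary data of the same distribution and remain indistinguishable.
assert entropy_per_byte(bytes(range(256))) == 8.0
```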
There's also the opposite approach: instead of the egress DPI that was the technique's initial boon, doing DPI on ingress traffic in front of critical applications, using things like SSL bump. This seems worse, but is better in a way: the DPI becomes part of the secure system rather than carte-blanche decryption of streams (the traffic the internal secure system receives is in fact traffic it is a party to, rather than playing Wile E. Coyote to the whole universe). It's very hard to detect targeted attacks on third-party enterprise webapps otherwise.
That led to an arms race among core networking vendors to push out all sorts of traffic sniffing and policing with insane degrees of intrusion that made me quite uneasy (I worked in core network planning), and it's been a relief to finally see Let's Encrypt take hold and TLS become de rigueur.
I do have some qualms about the way lawful interception can be abused (in general), and occasionally ponder how far those vendors may have progressed in MITM. Carriers and exchange points are not as secure as they should be (in sometimes surprising ways), and back then, finding bugs in carrier equipment was relatively frequent.
I wonder what it's like now that most of it is actually Linux VMs running someplace in their ancient datacenters.
Slowing torrents or streaming video directly helps maintain their "golden age of double dipping by running data over lines paid for by audio/video infrastructure."
By default, nobody, and I mean, nobody needs to know ones home IP address, period.
And nobody needs to know what sites a person visits, or when.
So not only should DPI go away, but also IP-address-based blacklisting/whitelisting, tracking/advertising, and so on.
Your workplace can install themselves as a trusted certificate authority on client machines in order to break that model and allow themselves to issue certificates on the fly for any website.
If you're using your own hardware, they can't do this to you. If you're using someone else's, you're at their mercy.
How Squid enables this SSL failure is obscure to me, because the failure of SSL is complicated and this slide is overwhelming.
Squid is a proxy for the website you want to access, so when your browser negotiates the TLS handshake it's really talking to Squid, which then talks to the website if necessary. So how does your browser get directed to an illegitimate proxy? The only way I can imagine is if the DNS is bad, but I don't see how that can happen.
So for example, you want to go to google.com. Your browser goes out to your DNS (the IP you plugged into the network configuration settings) and gets the IP for Google. How can someone interpose Squid between you and Google?
And then, even if Squid talks to the server on your behalf, it cannot just forward that stuff to you in clear text, because you are expecting it to be encrypted with a private key that only the server or the certificate authority has. Since Squid doesn't have the private key it needs, I don't understand what it does.
"Squid-in-the-middle decryption and encryption of straight CONNECT and transparently redirected SSL traffic, using configurable CA certificates." I interpret that to mean that Squid is configured with its own CA certificate. So Squid can make it appear like you are talking directly to Google?
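Roughly, yes: Squid mints a fresh certificate for each requested hostname on the fly, signed by its own CA, and the forgery validates only on clients that have that CA installed in their trust store. A toy model of the browser's trust decision (the names are illustrative, and real certificate validation involves chains, expiry, and signatures, none of which is modeled here):

```python
# Toy model: a browser accepts a certificate if its issuer is trusted and
# its subject matches the hostname the user asked for. Once IT pushes the
# proxy's CA into the trust store, Squid's on-the-fly forgeries validate.

TRUST_STORE = {"DigiCert Root", "Corp-Proxy-CA"}   # Corp-Proxy-CA pushed by IT

def browser_accepts(cert, requested_host):
    return cert["issuer"] in TRUST_STORE and cert["subject"] == requested_host

real_cert = {"subject": "google.com", "issuer": "DigiCert Root"}
squid_cert = {"subject": "google.com", "issuer": "Corp-Proxy-CA"}  # minted on the fly

assert browser_accepts(real_cert, "google.com")
assert browser_accepts(squid_cert, "google.com")       # the MITM succeeds
TRUST_STORE.discard("Corp-Proxy-CA")
assert not browser_accepts(squid_cert, "google.com")   # without the CA, it fails
```

So Squid never needs Google's private key: it terminates your TLS session with its own key and certificate, and opens a second, separate TLS session to Google.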
I think a VPN may not be safe if the local machine has to negotiate encryption with the VPN server.
It seems like Squid couldn't intercept that.