Deep packet inspection is dead, and here's why (2017) (ias.edu)
159 points by ogig 3 months ago | 121 comments



I'm worried about this development.

On the one hand, ubiquitous encryption is simply required for security on the internet. Things like Let's Encrypt and warnings on plain HTTP are great improvements.

On the other hand, the owner of a network has some right to look into the packets on that network. Especially if the owner of the network also owns the endpoints of that traffic. My main use case here isn't corporate networks; snooping there makes me uncomfortable.

Really, my issue is stuff on my own network. I want to see what my TV sends home. Same with an Amazon Echo, or really any IoT thing. Yet if they all use SSL and don't allow me to add a root CA, I can't look at what they send.

A user has no control over an Amazon Echo. You can't modify the software because the bootloader is locked down. You can't inspect the traffic because the SSL certificate is pinned. Amazon can push updates to it at any time. All a user gets to do is decide whether it is turned on, and whether it gets a network connection.

Really, what I would want to see is the option to install a CA cert on any device I own. At the same time, that is a terrible idea. Every 14-year-old with Google is going to find some Stack Overflow answer that'll tell them to MitM their TV to do some simple thing.
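For anyone wondering why adding your own root CA wouldn't help with a pinned device anyway: pinning boils down to a check like this (toy Python sketch; the cert bytes and fingerprint are made up, not Amazon's actual scheme):

```python
import hashlib

# Hypothetical pinned fingerprint baked into the device's firmware.
PINNED_SHA256 = hashlib.sha256(b"vendor-cert-der-bytes").hexdigest()

def chain_matches_pin(chain_der_certs, pinned_hex=PINNED_SHA256):
    """Accept the TLS connection only if some certificate in the presented
    chain hashes to the pinned fingerprint. A MitM proxy re-signing with a
    locally installed root CA fails this check, whatever the OS trusts."""
    return any(
        hashlib.sha256(der).hexdigest() == pinned_hex
        for der in chain_der_certs
    )
```

So even with a CA-install option, the client would just refuse the substituted chain.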


The implication here is that you can't trust the devices on your network. IMO that's itself a problem; rather than weakening encryption to enable network owners to analyze traffic on their network (which also harms dissidents who need secure network access), I would prefer a push for more trustworthy devices.

The devices we own should be acting in our own best interest; we shouldn't need to treat them as adversaries.


> The implication here is that you can't trust the devices on your network. IMO that's itself a problem;

A large part of trust is auditability. And a large part of auditing a device is looking at its communication. What we are kind of running into is the security implications of a debug-interface.

Your point about dissidents needing secure access is a strong one. Any way to MitM a connection for audit purposes can be repurposed into an MitM for surveillance given enough coercion. However, I don't think this applies to a network connection between my TV and Samsung.

Once a surveillor coerces me to give them access, I can disconnect my TV from the internet. The same goes for an amazon echo, or a juice press with WiFi. It is different with whatsapp, facebook and internet banking. These are much more essential, so it is more important that it is hard to MitM them even if the user gives consent.


>A large part of trust is auditability. And a large part of auditing a device is looking at its communication. What we are kind of running into is the security implications of a debug-interface.

Sure, but you can also look at the device's communication if you have administrative access to it yourself, rather than trying to MITM it. I suppose this could be a vulnerability itself if done poorly, but users have administrative access to their PCs; why should other devices they own be any different?

>These are much more essential, so it is more important that it is hard to MitM them even if the user gives consent.

If the user gives consent, they should be allowed to inspect the connection. The device's UI can make it _very clear_ that they're about to do something dangerous, but it's their device and should be their choice.


If you actually were to carry out an audit of a device, you'd request the source code, not the SSL key. If device manufacturers want you to trust their stuff, they need to stop locking bootloaders. Wanting stuff to be SSL-free for auditability is like asking for web pages to be fully renderable in Lynx so you can scrape them.


> The devices we own should be acting in our own best interest; we shouldn't need to treat them as adversaries.

This reminds me of Programming Satan's Computer [1] (PDF link), which was the first time I understood that while programming is hard, security programming is absolutely insane.

[1] https://www.cl.cam.ac.uk/~rja14/Papers/satan.pdf


Another problem is that nowadays, all devices need internet to do anything. It's ridiculous. Even offline gaming is impossible for most platforms/games.


> On the other hand, the owner of a network has some right to look into the packets on that network.

I don't agree. If you let a guest use your WiFi network for instance, there's no inherent moral right for you to intercept their emails.

However I generally agree with the principle that if you own a device, you have the right to learn what it's doing, and the trend towards black boxes is concerning.


You have no more right to learn what a particular device is doing than you have to intercept the emails of guests on your Wi-Fi.

You might have purchased a particular device, but you generally don't have the rights to the software or firmware that runs it.

If you're truly concerned, you should support manufacturers that leave their system open by design, or support open source devices, and contact manufacturers describing your concerns.

But the tradeoff wars of freedom v. security were lost long ago. The idea that one should shoulder any blame for one's choices is lost on a generation that needs YouTube videos to cut an onion. Likewise, a significant number of individuals can't be bothered to change their router password from 123456, and an even greater number of folks think the government needs to regulate the internet to keep it fair and safe for everyone. Forced encryption with no choice by design is a feature, not a bug.

Use open source or


> You might have purchased a particular device, but you generally don't have the rights to the software or firmware that runs it.

True, I don't have a right to see their source code. But decent laws shouldn't prevent me from reverse-engineering the code (DMCA is not a decent law).

Moreover, access to code is orthogonal to control over devices one owns. If I own a device, I should be the one who has ultimate control over it.


> but you generally don't have the rights to the software or firmware that runs it

Legally this is correct in certain jurisdictions. But I'd say this is similar to arguing that you have no right to know the ingredients in food you buy.


> Really, my issue is stuff on my own network. I want to see what my TV sends home. Same with an Amazon Echo, or really any IoT thing. Yet if they all use SSL and don't allow me to add a root CA, I can't look at what they send.

Which is part of why the more paranoid of us steadfastly refuse to own such devices.


I'm hoping for something like 'Right to Repair' or 'Right to Tinker' that'll let us verify more devices are trustworthy.

My smart TV (I am ashamed to admit I have one) is really useful. Very little about the idea of a TV with built-in Plex support requires it to be totally locked down. Hence it is a business decision that could be competed or regulated away.

(I was going to make the same argument about a TV with built-in Netflix support, but I believe the DRM requirements of Netflix require quite a bit of lock-down.)


Some of us (for whom Charisma is not their dump stat) need to get elected to national legislative assemblies and form technology and engineering caucuses.

Possible items for its agenda:

  - Right to Repair
  - Right to Tinker
  - Right to "Pwn Ur Own"
  - Hardware sellers required to deliver firmware source to buyers
  --- including build scripts
  --- including device-specific signing keys
  --- even for cars and tractors
  - Public APIs required for public-facing services
  - Right to non-backdoored strong encryption
  - Formal requirements for custodianship of PII
  - Formal requirements for security in IoT devices
  - Ban on local monopolies on wired telecoms
  - Definitions for terms used in advertising (e.g. "5G")
  - Establish a separate USPTO regime specifically for computer programs

The lawyers of the Boomer generation who seem to be running the show now are not very well-suited to understanding the near-future problems in the technology field. A lot of these problems are better solved by strongarming companies into behaving more eusocially than by attempting a technical solution or starting disruptive competitor businesses.


I'm interested in doing it.

For a while I've been wondering whether people would like me to be their spokesperson for such issues. I have a lot of experience, and I think I have a mind for politics as well. I think it's really critical we get it right at this point in history. I would love to work with others for such a cause.

If you have any advice on getting started, I would humbly accept it.


Ok, I’ll draw the logo!


Is it really paranoid anymore to refuse to own them? At this point, it just seems to be common sense.


And sadly it's near-impossible to find a non-Smart TV these days :(


Smart TVs can be left unconfigured and not plugged in via Ethernet.


I would also smash any 2.4 GHz antennas inside the TV for good measure. They might not have access to your WiFi network, but XfinityWiFi is available to any TV maker for a modest sum of money.

That, or you can buy a commercial display instead of a TV.


4G radios are also pretty cheap now, and antennas can be embedded on-circuit.

Might need to invest in a large Faraday cage


Don’t forget that tin foil hat!


if you have enough networking skills and a more-than-decent switch, you might let your smart TV live in a different, dedicated VLAN where it can't see any other systems on your network.

What's more important is the possibility of having a camera-less and microphone-less smart TV. Or the possibility of performing a microphonectomy and a camerectomy on your smart TV.


It's not just the Alexa-like and IoT devices, it's your entire home LAN that might be indirectly exposed.

The last carrier I worked for was running trials of a "virtual CPE" that replaced your home gateway with a much dumber and cheaper device that effectively extended your home LAN to the local exchange (at least), where the actual isolation, filtering and NAT were performed.

I switched home providers when I left there, and if my current provider ever goes the same route I'll drop my own firewall in front of my LAN.

(Theoretically there is already relatively little isolation between your LAN and parts of the carrier network if you have VoIP or IPTV, but in this case I happen to know who tests that equipment and have a very good idea of what it really does, because I used to work in that team a long time ago...)


>Really, my issue is stuff on my own network. I want to see what my TV sends home. Same with an Amazon Echo, or really any IoT thing.

I'm with you 100%. The reality is, though, your only choice is to not run those devices with access to the internet. A TV should not require internet access to be usable. I won't use an Echo, and IoT devices are isolated to their own internal network without WAN access.


Your definition of a usable TV is outdated.


Agreed. It's not even the TV that needs access to Internet. What if you have one of the Roku, Apple TV, Chromecast, etc devices that stream content from the Internet?

Without those we might as well go back to storing content locally which let's face it - is mostly retrieved quasi-legally through BitTorrent.


> Without those we might as well go back to storing content locally which let's face it - is mostly retrieved quasi-legally through BitTorrent.

But at least it's always available and will not disappear between the moment you first watch it and the moment you want to show it to your spouse.

I really expect (and hope) to see a resurgence of torrenting in the wake of ongoing balkanization of streaming platforms.


>I really expect (and hope) to see a resurgence of torrenting in the wake of ongoing balkanization of streaming platforms.

Agree. A lot of people think it's still the torrent dark ages where you need to search for individual torrents like a cave-person. If so, google Sonarr and Radarr, and get a VPN!


yeah, while that's true, a manufacturer like Amazon or Apple is much more likely to update your device since they are still hoping to sell you content. You think your offbrand Smart TV is going to update past a couple bugs? They already made all the money they are going to make off that TV and they are done.

You still have to trust the manufacturer for that to work, but at least you're not just left wide open. (I mean, you still could be, but it seems less likely with this model.) And if it doesn't stay up to date, you can toss the box and plug in a new one.


> You think your offbrand Smart TV is going to update past a couple bugs? They already made all the money they are going to make off that TV and they are done.

Honestly, who cares? I mean, it's a TV. It should not need Internet to work, and it definitely shouldn't need software updates. Displaying video on a screen is a solved problem.

The very fact you can use "software update" and "TV" in the same sentence signals a pretty big problem - a problem of companies selling you TV as a service, and customers accepting half-done pseudo-products.


I don't have my TV connected to the internet. But I do stream, so I have to have something connected to the internet. And I prefer a box that is likely to be updated.


Agreed. But I prefer that something to be a separate box, so that I can upgrade and replace it separately from the TV.


TVs have had software for decades. The fact that we're finally using "software update" and "TV" together is a long time coming, and finally a security model we can work with.


yes, but not connected for the most part.


Yes, with fewer features.

Don't make the mistake of thinking you are a representative of all customers. YOU may not care about connectivity, but others, many others, do.


Yes, but the right way to do this is to get a separate device for the connectivity part, in order to protect yourself against services having a much shorter lifetime than the hardware. Techies know this, and customers who got burned on smart TVs know this too. Regular people don't always realize this, but companies pushing smart TVs know it perfectly well. This is pretty obvious planned obsolescence. In my eyes, it's scamming people.


I recently got a smart TV, and I use the Netflix app built into it, as well as the general video player for playing files from my computer's media server... But I used to just use my PlayStation for that before I got a TV with the apps.

As long as the TV has input ports, I can use devices other than the TV itself to get content, so I'm not really concerned about that.

I bought the TV because it was good at being a TV - that is, it is OLED, high res, good upscaling, etc. The "smart" stuff I could take or leave, but I'm enjoying the convenience of it while it works.


Techies don't "know" this. I want to be abundantly clear that having a technical background doesn't make you more savvy about security concerns; if it did, security problems throughout the industry wouldn't be nearly as pervasive.

Security isn't about being perfect, so stop pushing that false narrative. It's about being good enough, and plenty of companies making smart TVs can certainly become "good enough".


I'm not talking security. I'm talking user-hostile practices and bad engineering.


Then you're even more offbase, in addition to being off topic.


That's where RokuTV has an edge. They deploy firmware updates and have a consistent track record of supporting old hardware better than any of the consumer electronic giants with proprietary systems.


might be, I'd take a smart TV that was backed by Roku, Apple, Amazon, or even Microsoft over some low-budget company that doesn't really care once they've made the sale.

I prefer to just have a screen and a box, though. I cut the wifi out of my TV and only plug it in to update firmware a few times a year.


My TV is not connected to the internet, but I connect devices to my TV that are connected to the internet. The devices that I connect are computers that I have full access to and control over.

My whole point in my post is that I don't want devices that I cannot look into and control. Unfortunately, that's become incredibly difficult if not impossible. Personally, the line I've drawn does not include my TV. It's not that important to me to have an integrated TV system that I can't control.


I share your worry about not being able to inspect communications for various apps and devices purely from a privacy advocacy perspective.

If you own a device, and said device is transmitting data from your environment, you should be able to know what information this device is communicating. It is not enough to trust a company's privacy policy.


What about an enterprise situation? Would a company allow devices that don't accept their root CA on the network?

Are there enterprise versions of all these devices that we can buy instead?


Most Internet connected consumer devices don't have separate enterprise versions. The usual approach for enterprises that care about security is to ban them (DHCP server won't issue an IP address and switch blocks all traffic), or segregate them onto a very limited virtual network.


That can't prevent a malicious device which acts harmless most of the time but does evil things only when certain conditions are met.

You may find evidence of the malicious act after it has happened, but often it's too late.

If Amazon is directly responsible for an Amazon Echo's malicious act, you can blame Amazon. But if the attacker was a government spy agency, you're out of luck.

Just don't use these devices, which cannot be examined and thus cannot be trusted.


While I understand your point of view, I think the plausible deniability network owners get frees them from much risk and bureaucracy.


To be honest, I'm probably happy with the breaking of corporate blanket MitM-ing. It was never very effective because data exfiltration can be made very hard to detect.

What worries me is that the full chain (Client-software -- Network -- Server-software) is totally opaque and immutable at the discretion of a vendor. That means we are left at the mercy of the vendors.

I'm hoping for laws like 'Right to Repair' to help with this. The alternative would be to go full Richard Stallman, and I still think that is too radical.


It's a major bugbear I have with Android. Now most apps don't by default respect CAs you've added, even via MDM. They get marked differently and can only be used by VPN, WiFi, ActiveSync, and apps that opt in to your custom certs.


Just wait for devices with 5G modems. They will skip your network controls entirely.


You have a choice. Boycott especially egregious offenders and packet filter and deny service to any suspicious device on _your_ home network.

Years ago sticking these types of devices on a trusted home LAN would have been unthinkable.


So it's the perfect time to start a community-powered smart device. NLP and voice commands should work offline, without license issues, trained for private use, with no constant-connection bullshit.


It is a bit of a double-edged sword, but overall I think crypto is better.

The best solution to the privacy problem is to do research and exercise consumer choice about what sorts of devices you purchase.


You ostensibly have access to any private key being used to decrypt this traffic, assuming you have access to the device, which I believe is the correct boundary.


With public key encryption? Very unlikely.


"big companies" and big hardware producers in general would bake crypto into their stuff anyway.


I think a more correct title would be "Deep packet inspection should be dead, and here's why"

Schools, financial institutions, and more will pay big bucks to web gateway vendors who will help them deploy man in the middle attacks on their own machines, employ blacklists or whitelists (even on Google search terms not just at the DNS level), scan traffic for SSNs, and so on. It's not a dead market (quite the opposite, startups like Zscaler are fetching unicorn valuation).

It also encourages terrifying but legal behavior for employers like monitoring which subreddits you read or what kind of YouTube videos you watch or how much time you spend slacking off at work.

The arms race between security and exploitation isn't likely to stop, and I have no confidence that corporations with sensitive data will willingly take a privacy-granting approach when vendors promise them unmatched security by decrypting traffic.

I think the two viable approaches are educating the public that your work machine is not private, or looking to lawmakers to step in (but let's be real, that option is unlikely).

During my time working for one of these web gateway vendors, I became highly sensitive to what browsing happened on my primary operating system (which had company certificates installed), and what went on my development VM (which I set up myself without corporate certificates).


My workplace has such a MitM gateway: every host has a company root CA installed, and every SSL certificate we receive in the browser is a substituted one. Fair enough.

However, the huge problem is that employees are completely left in the dark about this privacy invasion... only the tech-savvy ones notice and understand it.


A few years ago, one of the best managers I ever worked for left to become the CTO of a company doing pattern analysis of network traffic, rather than Deep Packet Inspection. The premise was that most of the internet traffic on your network follows the same typical patterns, but nefarious traffic doesn't. Drop their system into the network and voila, you can start to find the weird things going on that seem out of the ordinary.

At the time, I thought it seemed a bit heavy-handed: just use DPI and you'll get the same results. This article is making me think he was very prescient in the matter.


This is exactly what has been researched at multiple security companies and productized by Cisco under "Encrypted Traffic Analytics". This is based on research from 2016 that can be found on arXiv: https://arxiv.org/abs/1607.01639

> We conclude that malware's usage of TLS is distinct from benign usage in an enterprise setting, and that these differences can be effectively used in rules and machine learning classifiers.

Disclaimer: I work for Cisco
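For anyone curious what "classifying encrypted traffic without decrypting it" even means, here's a deliberately crude illustration (toy thresholds I made up, nothing like the actual classifiers in the paper): flow metadata such as packet sizes and inter-packet timing can separate machine-like beaconing from bursty human browsing.

```python
from statistics import mean, pstdev

def flow_features(packets):
    """packets: list of (timestamp, size) tuples for one flow."""
    sizes = [s for _, s in packets]
    times = [t for t, _ in packets]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return {
        "size_mean": mean(sizes),
        "size_stdev": pstdev(sizes),
        "gap_stdev": pstdev(gaps) if len(gaps) > 1 else 0.0,
    }

def looks_like_beacon(packets, size_cv=0.1, gap_jitter=0.5):
    """Flag flows with near-constant packet sizes AND near-constant
    timing, a pattern typical of malware check-ins, not of browsing."""
    f = flow_features(packets)
    uniform_sizes = f["size_stdev"] <= size_cv * f["size_mean"]
    regular_timing = f["gap_stdev"] <= gap_jitter
    return uniform_sizes and regular_timing
```

Real systems feed dozens of such features (TLS extension lists, cipher suites offered, byte distributions) into trained classifiers, but the principle is the same: no plaintext needed.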


Neat paper, but as soon as this becomes more widespread, malware authors are going to adapt to hide as regular traffic, so the analysis is going to get more and more complex until it's not useful, as malware traffic will look indistinguishable from real traffic.

This is a fundamental evolutionary cat & mouse game that's impossible to win; antibiotics & bacteria, toxins in prey & toxicity resistance in predators, etc.


& autoimmune diseases...


The Chinese have been doing this for an age now, using pattern analysis to detect firewall-avoidance methods. Anti-pattern-analysis methods have of course been developed in response, and the fight goes on.


What was the name of the company? My management responsibility at work includes networks (by default, we are small), and I always say I don't have to know/care that you're using BitTorrent (encrypted/port shifting), so much as whether there's an anomaly on the network impacting others. I'd rather have something flagging "hmm, this is atypical" based on size/src/dst/ports than try to make rules ahead of time that might miss new trends.


> Drop their system into the network and voila, you can start to find the weird things going on that seem out of the ordinary.

hmmm[1]

[1]: https://everything2.com/user/The+Custodian/writeups/Seek+And...


I find this interesting; however, I cannot see businesses with important data to protect depending on this when decryption is a far safer option. There is no guessing what the traffic is when, on a corporate network, you can simply decrypt it and see.


The author suggests towards the end to analyze DNS queries, but that's well on its way [1] to being encrypted as well (finally).

[1] https://wiki.mozilla.org/Trusted_Recursive_Resolver


DNS queries are monetized by some carriers, who sell the aggregate data to brokers. I was actually approached by one such company a few years back.


In a corporate environment, managed devices can be configured to force the use of specific DNS settings. The same type of implementation (MITM) could be used to analyse the requests.

That being said, this is at the OS level. An app such as Firefox could still override those settings or provide their own implementation.
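Worth remembering just how readable classic UDP DNS is at the gateway, which is exactly what DoH takes away. A minimal sketch of what's on the wire (hostname is illustrative; real resolvers also handle compression pointers, EDNS, etc.):

```python
import struct

def build_dns_query(hostname: str, qtype: int = 1, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet (qtype 1 = A record)."""
    # 12-byte header: id, flags (RD set), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in hostname.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # qclass 1 = IN

def extract_qname(packet: bytes) -> str:
    """Read the queried hostname straight out of the packet, which is
    effectively what a gateway inspecting plain DNS does."""
    i, labels = 12, []  # question section starts right after the header
    while packet[i] != 0:
        n = packet[i]
        labels.append(packet[i + 1:i + 1 + n].decode())
        i += 1 + n
    return ".".join(labels)
```

With DoH, that same question rides inside an HTTPS body, so only the endpoint of the tunnel sees it.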


If your IT department is your adversary, you should get a new job. Or at least use a personal device for personal matters :)


I don't think NOT performing packet inspection due to privacy concerns is a good idea. (Good security controls should exist over its administration.)

One reason organizations use packet inspection is to protect their staff, customers and vendors from malicious actors who could cause data breaches leading to huge privacy issues.

Privacy over security? The right balance must be found.


Of course, this means the packet inspection host and the organisation's internal CA are now great targets to attack. This approach puts all the eggs in a single central basket.


IMO relying 100% on the end devices to protect themselves is too risky. Layered security seems to work best. Also I prefer to heavily monitor/secure two appliances/systems than heavily monitor thousands of end devices


> Layered security seems to work best. Also I prefer to heavily monitor/secure two appliances/systems than heavily monitor thousands of end devices

It can't be both at once. Either you have multiple layers because both the appliance and the endpoints are independently secure and the attacker has to compromise both, or you don't monitor/secure the individual endpoints and the appliances become a single layer / single point of compromise.

And if the appliances can see all the plaintext of everything then they're a single point of compromise even if the endpoints are otherwise secure, because the attacker can still read all the secrets through the man-in-the-middlebox.

What works is to leave each thing to what it's good at. The endpoints are good at inspecting the plaintext, because they inherently have to have it anyway and they have the context to understand what it's supposed to look like. So you don't end up interfering with a newer, more secure protocol because the middlebox doesn't understand it. And plaintext is sensitive data so the fewer things that have access to it the fewer things you can compromise to get access to it.

What middleboxes are really good at is certain types of access control, e.g. blacklisting malicious IP addresses for outgoing connections, or whitelisting source and destination addresses and ports for incoming connections. They keep your local IP cameras off the internet even if the cameras "should" be secure on their own.
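That kind of access control needs no plaintext at all; conceptually it's just a CIDR match on the outer IP headers. A toy sketch (the blocklist entries are illustrative TEST-NET addresses, and a real deployment would pull a threat-intel feed):

```python
import ipaddress

# Illustrative blocklist; real middleboxes consume curated feeds.
BLOCKLIST = [
    ipaddress.ip_network(n)
    for n in ("203.0.113.0/24", "198.51.100.7/32")
]

def outbound_allowed(dst: str) -> bool:
    """Pass/drop decision using only the destination IP. No payload is
    inspected, so this works identically on encrypted traffic."""
    ip = ipaddress.ip_address(dst)
    return not any(ip in net for net in BLOCKLIST)
```

The point being: the middlebox stays useful even when everything inside the connection is opaque to it.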


A user has no way of knowing whether a packet inspection will be performed by benevolent actors seeking to protect their security or by malicious actors seeking to invade their privacy.

As in the good old post "What colour are your bits" [1] regarding the subject of copyright, the computer is colorblind when it comes to privacy vs. security tradeoffs. You seem to see color, believing compromise for security to be acceptable, and hoping you can allow your lawful and good security inspections to occur while disallowing nasty privacy invasion.

The computer doesn't see color. It is impossible to build a security protocol that will distinguish between good third parties and malicious third parties. "Good security controls" come down to trusting people to do the right thing, and when there's big money coercing companies to do the wrong thing, the right thing too often loses.

[1]: https://ansuz.sooke.bc.ca/entry/23/


It could be shown client-side whether an SSL connection uses a locally installed root CA or a globally trusted CA.

This way, an employee could see whether their employer is MitM-ing their connection to Facebook / reddit.com / Pornhub / their bank. Based on this, they could complain to their employer about unreasonable MitM-ing, and it would serve as a weak detection point for compromise of the company root CA.
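The browser already knows which trust anchor validated the chain, so the check itself is simple. A sketch with made-up fingerprints (real code would query the platform trust store rather than hard-coded sets):

```python
import hashlib

# Made-up stand-ins for the OS's shipped and user-added root stores.
BUILTIN_ROOTS = {hashlib.sha256(b"public-root-ca").hexdigest()}
USER_ADDED_ROOTS = {hashlib.sha256(b"corp-mitm-root-ca").hexdigest()}

def classify_trust_anchor(root_der: bytes) -> str:
    """Report whether the validated chain ends at a shipped public root
    or at a locally installed one (the case worth warning about)."""
    fp = hashlib.sha256(root_der).hexdigest()
    if fp in BUILTIN_ROOTS:
        return "public CA"
    if fp in USER_ADDED_ROOTS:
        return "locally installed CA (connection may be inspected)"
    return "untrusted"
```

The hard part isn't the code; it's the next comment's point that the client doing the reporting may itself be under the employer's control.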


You can't trust your client. The IT department will just push a browser that says "you're using the root CA for this connection" while actually using the MITM CA.

Maybe you'll download your own Chrome, but that silently gives you their hacked version. The SHA256sum on their website has also been tampered with. Fine, you say, you'll download the source code and compile it yourself. But the compiler has been tampered with to detect when it's compiling Chromium, and adds the IT department's hacks.

You cannot trust a client you do not fully control.


That's a separate issue, because a completely custom browser can intercept even without a MitM on the connection.

Sometimes you either have your own device or you trust your employer to not directly lie to you.


Deep packet inspection seems to be alive and well, even outside of corporate networks.

My ISP uses the User-Agent header in outgoing requests to guess how many computing devices I have at home, and tries to charge money if it's more than an undisclosed limit. This of course only works for plain HTTP, but there are still enough unencrypted sites out there that my ISP has an opportunity to intercept a request at least a couple of times a day.

Meanwhile, my country is just beginning to roll out a system that detects the SNI hostname in encrypted connections, in order to block illegal sites that hide behind Cloudflare. Fortunately they can't spoof certificates on the public internet, so users just get a connection error. Too bad Cloudflare supports ESNI now ;)
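SNI blocking works because the hostname travels in the clear in the very first TLS packet. Here's a toy sketch of what the censoring box effectively does (the ClientHello builder is deliberately minimal and wouldn't pass a real handshake; it just exercises the parser):

```python
import struct

def client_hello_with_sni(hostname: str) -> bytes:
    """Build a minimal, not-actually-valid TLS ClientHello record that
    carries an SNI extension, just enough for the parser below."""
    name = hostname.encode()
    sni_entry = b"\x00" + struct.pack(">H", len(name)) + name   # type 0 = host_name
    sni_list = struct.pack(">H", len(sni_entry)) + sni_entry
    ext = struct.pack(">HH", 0, len(sni_list)) + sni_list       # extension 0 = server_name
    exts = struct.pack(">H", len(ext)) + ext
    body = (b"\x03\x03"              # client_version
            + b"\x00" * 32           # random
            + b"\x00"                # empty session id
            + b"\x00\x02\x13\x01"    # one cipher suite
            + b"\x01\x00"            # null compression
            + exts)
    handshake = b"\x01" + len(body).to_bytes(3, "big") + body   # handshake type 1
    return b"\x16\x03\x01" + struct.pack(">H", len(handshake)) + handshake

def extract_sni(record: bytes):
    """Pull the plaintext hostname out of a ClientHello, as a censoring
    middlebox would, then decide whether to reset the connection."""
    i = 5 + 4 + 2 + 32                                   # record hdr, hs hdr, version, random
    i += 1 + record[i]                                   # session id
    i += 2 + int.from_bytes(record[i:i + 2], "big")      # cipher suites
    i += 1 + record[i]                                   # compression methods
    ext_end = i + 2 + int.from_bytes(record[i:i + 2], "big")
    i += 2
    while i < ext_end:
        etype = int.from_bytes(record[i:i + 2], "big")
        elen = int.from_bytes(record[i + 2:i + 4], "big")
        if etype == 0:                                   # server_name extension
            nlen = int.from_bytes(record[i + 7:i + 9], "big")
            return record[i + 9:i + 9 + nlen].decode()
        i += 4 + elen
    return None
```

ESNI (now ECH) encrypts exactly this field, which is why it defeats this kind of blocking.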


Where do you live (if you can share the country name, of course)?


South Korea.

Nobody is under any threat of prosecution for talking about our ridiculous censorship regime, and the surveillance side of the program is probably no worse than in any other developed country.

Which isn't much of a compliment, but at least we're not China-level evil -- just incompetent. DPI for blocking SNI hostnames is a particularly annoying way to waste taxpayers' money. It's almost as if they timed it to coincide with wide availability of DoH and ESNI!


This sort of development seems good, not exactly from a moral point of view, but from the point of view of the long-term reliability of the internet.

The IP protocols have some expectation of end-to-end packet delivery. Over time we found ways in which networks could be kept "working" with this requirement relaxed. Except what could be known to "work" was just whatever was tested by the manufacturers of various middle-boxes, making change and development of new ways of solving problems harder than it should be.

The less visibility middle-boxes have into what the traffic is, the less they are able to selectively screw things up, and the internet will be more reliable for it.


It's not dead. Encryption has (unjustifiably) pushed the enterprise to install fake catchall certificates on proxies so they can snoop plain-text traffic. (Why anyone would ever think this is a good idea is beyond me.)


How else are you going to catch APT (Advanced Persistent Threat) data exfiltration/control channel traffic?

Assumption 1: Machines on your network are already compromised and fully owned by a sophisticated and extremely difficult to detect rootkit. This is true of every large business. There is always that guy who will click on any link or open the document from what appears to be their co-worker.

Assumption 2: APT tries to disguise their traffic as ordinary web traffic, because anything else is suspicious.

Assumption 3: You have massive legal liabilities if your data is exfiltrated.

Being able to do DPI and pattern matching on all TLS traffic (and firewall off anything you can't DPI) is pretty much mandatory.


> There is always that guy who will click on any link or open the document from what appears to be their co-worker.

Which is another reason why DPI is ineffective. Smart malware will notice when its connection is presented with a custom root certificate rather than the expected one and not proceed with its suspicious activities (if not deploy some kind of steganography). Then the same "that guy" will plug his personal phone into his computer, and now the malware has an unmonitored cellular data connection to the outside on a machine that's also connected to the internal network. Or a compromised laptop will hook up to the WiFi of the company on the adjacent floor, or the user will simply connect it to the coffee shop's WiFi next door.

In theory you can build a Faraday cage around your space and then strip-search employees for digital devices at the door, but if your data is that important then you probably ought to just not be connected to the internet at all.


I'd argue that those examples are a higher bar to hurdle than failing to recognize a spear phishing attack, and can be mitigated by solutions like always-on VPN.

And if the malware doesn't work because it has certificate pinning, well, that's a win too. It's not a 100% solution, but you can significantly raise the bar on your attackers.


> I'd argue that those examples are a higher bar to hurdle than failing to recognize a spear phishing attack, and can be mitigated by solutions like always-on VPN.

The theory behind TLS MITM is that it's an extraordinary and dangerous method that could be justified if sufficiently effective. If there are a dozen common ways to route around it, the risk is more than the benefit.

A VPN can't fix it because a compromised endpoint would be able to choose which traffic it sends over the VPN. Whereas if the endpoint isn't fully compromised, then you could be doing whatever scanning the middlebox does on the endpoint itself, without centralizing on a single point of compromise for the entire network.

> And if the malware doesn't work because it has certificate pinning, well, that's a win too.

It may not make an outside connection, but that doesn't mean it doesn't work. It could still infect every machine on your internal network. What's the chance that none of them are ever in range of a public WiFi?

This is before even getting to the issue of steganography. Information theory says that if your legitimate communications contain zero entropy then they can be encoded into zero bits, i.e. you don't need a network connection at all, but if they contain nonzero bits of entropy then an attacker can encode that much arbitrary data into the stream and still be indistinguishable from legitimate data.

So the whole thing is inherently a cat and mouse game. A lazy attacker may use a data pattern that isn't found in the legitimate data and then a middlebox vendor may find it and use that to distinguish their traffic, but as soon as they do the attacker can stop using it. The longer the game is played, the better the attackers get at making their data indistinguishable and the fewer remaining undiscovered ways to distinguish it that it's possible to find. In the limit there are none left, the data is completely identical to legitimate data with the same amount of entropy, the attacker is only assigning different meaning to it at the endpoints.
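As a toy illustration of that last point, here's a whitespace-steganography sketch (an entirely hypothetical scheme: one payload bit per inter-word gap). To a middlebox the carrier still looks like ordinary text; only the endpoints assign meaning to the gaps:

```python
def embed(cover_words, bits):
    """Hide one payload bit per inter-word gap:
    '0' -> single space, '1' -> double space."""
    assert len(bits) <= len(cover_words) - 1, "cover text too short"
    pieces = [cover_words[0]]
    for i, word in enumerate(cover_words[1:]):
        gap = "  " if i < len(bits) and bits[i] == "1" else " "
        pieces.append(gap + word)
    return "".join(pieces)

def extract(stego_text, nbits):
    """Recover the payload by measuring the gaps again."""
    gaps, run = [], 0
    for ch in stego_text + "\n":  # sentinel flushes the final run
        if ch == " ":
            run += 1
        elif run:
            gaps.append("1" if run == 2 else "0")
            run = 0
    return "".join(gaps[:nbits])

# Round trip: the stego text still reads as an ordinary sentence.
msg = embed("the quick brown fox jumps over".split(), "1011")
assert extract(msg, 4) == "1011"
```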


There's plenty of this stuff at the US government level, both for data exfiltration and because ordinary websites can have XSS or other funny business going on. For instance, there's EINSTEIN: https://www.dhs.gov/einstein.

There's also the opposite of the initial boon of DPI on egress traffic: doing DPI on ingress traffic in front of critical applications, using things like SSL Bump. This seems worse, but is better in a way: the DPI is part of the secure system, instead of a carte-blanche decrypting of streams (the traffic the internal secure system receives is in fact traffic it is party to, instead of being Wile E. Coyote to the universe). It's very hard to detect targeted attacks on third-party enterprise webapps otherwise.


Corporate MITM devices/proxies are surely in a new business boom. Now we went from lack of encryption to encryption with MITM certificates on questionable appliances running questionable code.


There was a pre-2010 burst of interest in DPI in the carrier world, back when they thought it would be feasible to bill different kinds of traffic separately (i.e., beyond zero-rating traffic to their walled gardens).

That led to an arms race among core networking vendors to push out all sorts of traffic sniffing and policing with insane degrees of intrusion that made me quite uneasy (I worked in core network planning), and it's been a relief to finally see LetsEncrypt take hold and TLS become de rigueur.

I do have some qualms about the way legal interception can be abused (in general) and occasionally ponder how far those vendors may have progressed in MITM, though - carriers and exchange points are not as secure as they should be (in sometimes surprising ways), and back then finding bugs in carrier equipment was fairly common.

I wonder what it's like now that most of it is actually Linux VMs running someplace in their ancient datacenters.


The other interest from carriers is protecting their media interests.

Slowing torrents or streaming video directly helps maintain their "golden age of double dipping": running data over lines paid for by audio/video infrastructure.


The "policing" bit was actually about doing that. Strategies varied, from smooth shaping to randomly dropping packets to force TCP window resets and drastically lower throughput.


Not related to the core of the article, but it taught me I can pipe random gibberish (such as tcpdump) to the audio output and I am finding it amazing.


Luca Deri, the author of nDPI did an excellent talk on this topic at the DPDK summit in December. The techniques they have to use now to apply heuristics on https is really cool:

https://www.youtube.com/watch?v=4Vp8-UONhmM&t=0s&index=17&li...
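The broad idea behind many such heuristics is that plenty of metadata survives encryption. A well-known example of the genre is JA3-style fingerprinting: hash the fields of the unencrypted ClientHello to identify the client implementation without decrypting anything. A minimal sketch (the field values below are made up, not from a real capture):

```python
import hashlib

def ja3_fingerprint(version, ciphers, extensions, curves, point_formats):
    """JA3-style fingerprint: join the ClientHello fields with '-'
    within a category and ',' between categories, then MD5 the string."""
    categories = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(categories).encode()).hexdigest()

# Hypothetical ClientHello field values; the same client stack always
# produces the same hash, so it can be matched without decryption.
fp = ja3_fingerprint(771, [4865, 4866, 49195], [0, 10, 11], [29, 23], [0])
print(fp)
```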


I think (and hope) that the next big thing (after https) will be VPNs by default (and independent from the internet provider's service).

By default, nobody, and I mean nobody, needs to know one's home IP address, period. And nobody needs to know what sites a person visits or when.

So not only should DPI go away, but also IP-address-based blacklisting/whitelisting, tracking/advertising, and so on.


This sent me on a spiral of checking for MITM connections on my machine. You can compare the fingerprints of known sites against the list on this site: https://www.grc.com/fingerprints.htm Though I think the Facebook one is wrong (the one I see starts with BD 25 8C for SHA-1).


Author is dead wrong. Products exist today that perform DPI on SSL streams: https://www.a10networks.com/resources/articles/ssl-inspectio...


The author does mention that's doable if you break the SSL tunnel. They also mention some ethical issues with doing that.


Cool so how would I use these to circumvent the great chinese firewall with my SOCKS tunnel?


DPI is just one tool in the toolbox. It's never gonna die.


TL;DR = Because encryption.


TL;FFS = Except for SSL inspection software.


Not SSL, encrypted content. Deep packet inspection won't help you if there's only encrypted data inside the packet.


Breaking TLS so you can do deep packet inspection is like a lifeguard throwing people in the water during winter so he can save them.


Or a lifeguard blowing their whistle at folks who specifically used the "no lifeguard on duty" beach so they could swim out far.


I did not realize that Squid could provide false certificates on the fly. The whole business of invalid certificates made people nervous about some sites. Now someone can sit in a Starbucks with a Squid proxy in the middle and harvest everything, regardless of SSL encryption. Looking at the little lock in the URL bar means nothing with a MITM running Squid. Will a VPN protect me by encrypting everything from my machine, so that a Squid in the middle is thwarted?


SSL certificates have to be signed by a vendor (authority) your browser trusts, or the certificates are considered invalid (hence the hoo-ha when Firefox or Chrome occasionally stop trusting some vendor for bad practices).

Your workplace can install themselves as a trusted certificate authority on client machines in order to break that model and allow themselves to issue certificates on the fly for any website.

If you're using your own hardware, they can't do this to you. If you're using someone else's, you're at their mercy.
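The mechanics are easy to reproduce with openssl: mint a private CA, then issue a certificate for any hostname you like; a client that has been told to trust `ca.crt` will accept the result. (All file names and subject names below are made up.)

```shell
# 1. Create a private "corporate" CA
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout ca.key -out ca.crt -subj "/CN=Example Corp Proxy CA"

# 2. Create a key and CSR for any hostname we want to impersonate
openssl req -newkey rsa:2048 -nodes \
    -keyout site.key -out site.csr -subj "/CN=www.example.com"

# 3. Sign it with the private CA
openssl x509 -req -in site.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out site.crt -days 1

# 4. Any client that trusts ca.crt now accepts the forged certificate
openssl verify -CAfile ca.crt site.crt
```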


Then I totally misunderstood how Squid bypasses SSL. My understanding is that: after building a TCP connection, the SSL handshake is started by the client, which sends the versions of TLS it supports. The server then picks a compatible cipher and compression method, and sends its certificate. This certificate must be trusted by either the client itself or a party that the client trusts (a certificate authority). In this context the word "trust" means that the client has the public key that can verify the certificate's signature. For example, if the client trusts GeoTrust, then the client can trust the certificate from Google.com, because GeoTrust used its private key to cryptographically sign Google's certificate and my browser has the public key for GeoTrust.

How Squid defeats SSL is obscure to me, because SSL's failure modes are complicated and this slide is overwhelming: https://blog.ivanristic.com/SSL_Threat_Model.png

SQUID is a proxy for the website you want to access - so when your browser negotiates the TLS handshake it's really talking to SQUID who then talks to the website if necessary. So how does your browser get directed to an illegitimate proxy? The only way I can imagine is if the DNS is bad but I don't see how that can happen. So for example you want to go to google.com Your browser goes out to your DNS (the ip you plugged in to the network configuration settings) and gets the ip for google. How can someone interpose squid between you and google ?

And then even if squid talks to the server on your behalf, it cannot just forward that stuff to you in clear text because you are expecting that stuff to be encrypted with a private key that only the server or the certificate authority has. Since squid doesn't have the private key it needs, I don't understand what it does. This wiki https://wiki.squid-cache.org/Features/SslBump says "Squid-in-the-middle decryption and encryption of straight CONNECT and transparently redirected SSL traffic, using configurable CA certificates." I interpret that to mean that squid configures a CA certificate. So squid can make it appear like you are talking directly to google?
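Roughly, yes. With SslBump, Squid terminates the client's TLS session itself, opens its own TLS connection to the real site, and signs an on-the-fly certificate for the requested hostname with a CA you give it; clients only accept that certificate if the CA has been installed in their trust store. A sketch of the relevant squid.conf directives (paths are examples, and exact option names vary between Squid versions):

```
# Intercepting port; generate per-host certificates signed by our CA
http_port 3128 ssl-bump \
    tls-cert=/etc/squid/myCA.pem \
    generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB

# Helper process that mints the fake certificates on the fly
sslcrtd_program /usr/lib/squid/security_file_certgen \
    -s /var/lib/squid/ssl_db -M 4MB

# Peek at the ClientHello (to learn the SNI), then bump (decrypt) everything
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```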


The whole point of certificates is for the browser to check with a cert authority. How does squid circumvent the certificate authority?

I think a VPN may not be safe if the local machine has to negotiate encryption with the VPN server. Squid seems like it couldn't intercept that.


It doesn't. In enterprise environments that use something like this, the sysadmins install their own CA certificate on all machines. You can't just MITM random machines at a coffeeshop.


Your machine (browser) will only accept the false certificates without complaining a lot if you have previously added the certification authority of the attacker to your browser's list of valid CAs.


Even with certificate pinning?


For legacy compatibility reasons, browsers ignore certificate pinning when the certificate is signed by a certificate authority which was added by the user.


Isn't that a giant security hole? When some shitty anti-virus suite installs a generated root certificate to use for man-in-the-middle traffic inspection, doesn't that open up a big hole that an exploited browser could take advantage of?


Not a very informative article. All it manages to say is that deep packet inspection does not work with encrypted traffic. I think the author is not aware of transparent deep packet inspection of SSL traffic. Here is one such product doing it:

https://www.sonicwall.com/en-us/products/firewalls/security-...


Actually, that's kind of what the whole of the second half of the article is about.


That’s just a run-of-the-mill MITM privacy violator


Yes, but I don't think the article explains why it is, or will be, "dead". Companies that have them don't want to give them up. What compelling reason would make them? "Employee privacy at work" is not one. Not with the level of perceived threat of malware downloads and trojaned NPM packages.


The article describes such products, so the author clearly is aware of their existence.



