EU Tries to Slip in New Powers to Intercept Encrypted Web Traffic (techdirt.com)
172 points by rntn 7 months ago | 26 comments



Will Google geoblock Chromium source code downloads in the EU to prevent people from going #undef BREAK_CRYPTO_FOR_EU? I guess all Linux distros that package their own browser builds will be illegal?

Or could you be arrested in the EU for setting the wrong compile flags and producing a binary with fully functional security?


I don't think they care, tbh. Feels like too much work for too little return. If this passes (let's hope not), I'd just expect them to ensure the default browsers on the most popular OSes do what they want. That would give them full insight into 99% of web traffic. They already can't see people who really want to hide, and they won't later either.


If it succeeds and enough time passes, having those flags set will be looked at as grounds for suspicion and justification for further investigation, just like tinted glass on a car.


This will likely be handled via the Digital Markets Act’s “gatekeepers” mechanism.



Mozilla should sue this "European signature dialog" group for defamation


NOYB might.


I love that the EU gets caught. In the US, I feel like it's illegal wiretaps all the way down.



(Pssst: Then it's not encrypted.)


Does this mean the Squid is back in business?


The open letter from Mozilla (https://blog.mozilla.org/netpolicy/files/2023/11/eIDAS-Indus...) has a very different tone:

"Articles 45 and 45a of the proposed eIDAS provisions are likely to weaken the security of the Internet as a whole. These articles mandate that all Web browsers recognize a new form of certificate for the purposes of authenticating websites. *The current language is imprecise, and this risks being interpreted as requiring that browsers recognize the certificate authorities that each EU member state appoints for the purposes of authenticating the domain name of websites.*"

If I look at the public draft from 2021 (available on the EC website, or the annotated version from an awesome Dutch man: https://timspeelman.nl/eidas/#A45(2)) I see the problem:

" 2.Qualified certificates for website authentication referred to in paragraph 1 shall be recognised by web-browsers. For those purposes web-browsers shall ensure that the identity data provided using any of the methods is displayed in a user friendly manner. Web-browsers shall ensure support and interoperability with qualified certificates for website authentication referred to in paragraph 1, with the exception of enterprises, considered to be microenterprises and small enterprises in accordance with Commission Recommendation 2003/361/EC in the first 5 years of operating as providers of web-browsing services. "

For those unfamiliar with the concepts behind eIDAS 2.0: the system consists of:

* A wallet, which is owned by a person and contains their potentially sensitive personal information in the form of signed attributes.

* Relying Parties: organizations which are allowed to request attributes from a user's wallet and who are able to verify the attributes. Often known as "verifiers".

* Attribute Providers: organizations which provide the signed attributes. Think the driving license authority.

One important aspect is that Relying Parties are regulated; they have to have permission to request certain attributes from a user's wallet. For this to work technically, they must be able to authenticate themselves to the user's wallet. Now, the user's wallet can be browser based, in which case the website needs to authenticate itself to the browser (the wallet!!) to prove that it is allowed to request the attributes.
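To make the wallet-side check concrete, here is a minimal sketch of the permission model described above: before a Relying Party may request attributes, the wallet verifies that its certificate chains to a trusted authority and that every requested attribute is within that party's registered permissions. All names here (`VerifierCert`, `EU_WALLET_TRUST_STORE`, the attribute names) are illustrative assumptions, not taken from the eIDAS text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VerifierCert:
    issuer: str          # CA that signed the verifier's certificate
    subject: str         # the Relying Party's identity
    allowed: frozenset   # attributes this party is registered to request

# Hypothetical trust store: CAs appointed for wallet purposes only,
# deliberately separate from the browser's TLS root store.
EU_WALLET_TRUST_STORE = {"example-member-state-ca"}

def may_request(cert: VerifierCert, requested: set) -> bool:
    """Return True only if the verifier is trusted AND every requested
    attribute is within its registered permissions."""
    if cert.issuer not in EU_WALLET_TRUST_STORE:
        return False                      # unknown CA: refuse outright
    return requested <= cert.allowed      # subset check on permissions

cert = VerifierCert("example-member-state-ca", "car-rental.example",
                    frozenset({"age_over_18", "driving_licence"}))
print(may_request(cert, {"age_over_18"}))    # True
print(may_request(cert, {"home_address"}))   # False: not registered for it
```

The point of the sketch is that this trust store only gates attribute release inside the wallet protocol; nothing about it requires touching the TLS certificate validation the browser already does.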

So here we are: the text of the regulation exists because an "EU Wallet Trust Store" is required in all internet browsers, otherwise the system won't work. The problem being that the actual text does not make this explicit - it is open for an interpretation whereby the TLS layer also falls under the regulation. It sounds to me like a huge stretch, however there are enough instances where a law is poorly interpreted to make this something which should be avoided.

Now for the bear in the woods: Mozilla claim in other places that THE TEXT HAS CHANGED. This is both a massive and a very worrying claim. It's also extremely strange that Mozilla do not mention this in their letter. So what's up?

Mozilla need to leak the current draft text ASAP so that we can see what the exact problem is and so that we can help legislators fix the issue.


Reading the text, it seems to me a huge stretch to claim that it somehow does not apply to TLS...


I came at the text after working on the mobile wallet for two years (EU-DCC). In that context this is purely application level with the verifier authenticating to the wallet as part of the negotiation.

The problem being that it could be implemented at the TLS level; if I was working on browsers (or didn't have experience with said apps) then that would be my conclusion.

Whatever the situation: the language is too fluffy and should be updated such that it clearly makes a distinction between TLS/DNSSEC and the needs of EUDI.


The text is very clearly about browsers and browsing


The writing has been on the wall for some time that encryption is no longer sufficient and steganography is required to protect ourselves.


I think you're missing an important point - this is a legal issue, not a technical one. Private communication is becoming illegal, however you do it. You can call it "steganography," and the government will call it "20-to-life."

The press will go along with the government and talk about you as a "hacker who is using sophisticated message hiding techniques to disguise their secret communications with conspirators." Even if you were just talking about dinner plans.


They're not missing the point, they're taking your premise and accepting it, and then moving past it. They're saying: let's just forget about the possibility of encryption being something you can do legally — let's just instead concentrate on encryption that works whether it's legal or not, because it's impossible to determine that it's you doing it. (After all, much of what we've spent the last 40 years of cryptography research doing, is inventing new and creative ways to communicate or broadcast or participate in systems anonymously.)


It does not matter if it is you doing it or not, the oppressive governments of the world will just as happily jail you for facilitating it, even if you were unaware. These laws will be used as a cudgel against anyone whom they don't like.

Technical solutions to lawfare are pointless.


If you're in an oppressive country, why would you facilitate an illegal activity any way other than anonymously?

I do understand what you're trying to refer to — to things like "you can't tell it was me who connected to that darknet market" / "but we can tell that your IP address originated a Tor connection, and we now consider that illegal in-and-of-itself, even if it wasn't you doing it but you were just acting as an exit node for others."

But again, we've spent a long time building various different cryptosystems that make different sets of guarantees that are useful in different situations. Tor is useful under one particular set of constraints, but that set of constraints is not "staying long-term under an oppressive regime that will arrest you for emitting Tor fingerprintable traffic."

For a cryptosystem that is useful under this use-case, see: Freenet. Or NNTP numbers groups. Or MixMaster.

(Though also, even Tor does work here, with modification. This is why Tor added bridges — precisely so that people in oppressed regimes could forward their traffic, in an innocuous-looking way, through bridge nodes hosted by people in non-oppressed regimes, where the IPs of these nodes are only handed out as needed to Tor users, one or two at a time, and so can't be blacklisted by country-level firewalls in advance.)


I wonder if LLMs and text-to-speech AIs would be useful for generating unique plausible conversations so you could hide encrypted data in the background noise of the audio stream. You could even train it on your own voice so there would be even less chance of tipping off the Thought Police AIs listening in.

(I've read enough cyberpunk to be sufficiently prepared for this dark future.)


I've read enough news to be sufficiently prepared for this dark future.


I'm super curious why you think of steganography specifically?

What's the benefit of steganography over the current encryption tools? I'll admit I don't know a lot about steganography, but I've always seen it as the receiver still needing the handshake.


Tl;dr: Steganography just helps with obfuscation as long as no one is looking for it. The second someone looks it'll fail, and if it becomes common practice, everyone will look for it.

The theoretical idea is that the information could be hidden in a random location, e.g. by extracting every Nth bit, such that the information would be indistinguishable from random noise. In practice, both parties need to agree on the location, so it has the same flaw as 'perfect encryption' methods like a one-time pad: you need to transfer information IRL.
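A toy illustration of the "every Nth bit" idea above: hide a payload in the low bits of a carrier byte stream at an agreed stride. The stride plays the role of a shared secret, which is exactly the key-distribution flaw the comment points out. This is a sketch, not a robust stego scheme (a real carrier would be image or audio sample data).

```python
def embed(carrier: bytearray, payload: bytes, stride: int) -> bytearray:
    """Write the payload's bits (LSB-first) into the least-significant
    bit of every stride-th carrier byte."""
    bits = [(b >> i) & 1 for b in payload for i in range(8)]
    assert len(bits) * stride <= len(carrier), "carrier too small"
    out = bytearray(carrier)
    for k, bit in enumerate(bits):
        pos = k * stride
        out[pos] = (out[pos] & 0xFE) | bit   # overwrite the low bit only
    return out

def extract(carrier, n_bytes: int, stride: int) -> bytes:
    """Read the low bit of every stride-th byte and reassemble the payload."""
    bits = [carrier[k * stride] & 1 for k in range(n_bytes * 8)]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )

carrier = bytearray(range(256)) * 2          # stand-in for image/audio data
stego = embed(carrier, b"hi", stride=7)
print(extract(stego, 2, stride=7))           # b'hi'
```

Anyone who suspects the scheme and tries small strides will recover the payload, which is the "fails the second someone looks" problem in miniature.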

One could make a case for a group/app using a custom scheme to add a layer of security until someone infiltrates the group or reverse engineers the app, but you'd get the same security by just encrypting everything with the same password.


> Tl;dr: Steganography just helps with obfuscation as long as no one is looking for it. The second someone looks it'll fail, and if it becomes common practice, everyone will look for it.

Not if the obfuscated payload is encrypted with end-to-end methods. That way the obfuscated layer just looks like random noise.

To decrypt such a message you need to seize control of the receiving computer along with its user, and no encryption protects against that.
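A sketch of that "encrypt, then hide" layering: if the payload is encrypted first, the embedded bits are indistinguishable from noise even if the embedding is found. A toy keystream built from SHA-256 stands in for a real cipher here; an actual design would use an AEAD such as AES-GCM or ChaCha20-Poly1305, and the key names are made up for illustration.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from the key (counter-mode hashing)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_encrypt(key: bytes, msg: bytes) -> bytes:
    """XOR the message with the keystream; the same call also decrypts."""
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

ct = xor_encrypt(b"shared-secret", b"dinner at 8")
print(ct != b"dinner at 8")                  # ciphertext looks unrelated
print(xor_encrypt(b"shared-secret", ct))     # b'dinner at 8' (XOR is symmetric)
```

The ciphertext bytes have no detectable structure, so whatever steganographic layer carries them reveals nothing beyond "random-looking bits" without the key.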


> Tl;dr: Steganography just helps with obfuscation as long as no one is looking for it. The second someone looks it'll fail,

That was my 'playground' conclusion too. It has to be clandestine to work.

> In practice, both parties need to agree on the location

That is still a key by another name.

> but you'd get the same security by just encrypting everything with the same password.

Right, it feels like just a level of obfuscation at that point. Ironically, what I was interested in was encrypted payloads that could be `called` when loaded. It involves a ridiculous number of 0-days, but the idea of opening an image and having your browser/OS touched... it's interesting given today's lifestyle.



