> Modern implementations must support a range of TLS protocol versions (from legacy TLS 1.0 to current TLS 1.3)
So this statement is strange, considering "modern" security standards either nudge you toward (or outright demand) deprecating anything that isn't v1.2 or v1.3.
If the implementation is "modern", why would I allow 1.0?
This seems like a HAProxy problem. They ought to maintain support for geriatric TLS versions on a dedicated release branch, tied to a support model that nudges their clients into updating by increasing the fees for maintaining that system. Not doing so makes the vendor part of the reason 1.3 adoption is slower than it could be.
> If the implementation is "modern", why would I allow 1.0?
Because there's a distinction between "using" (especially by default) and "implementing".
The real world has millions (billions?) of devices that don't receive updates and yet still need to be talked to, in most cases luckily by a small set of communication partners. Would you rather have even the "modern" side of that conversation be forced to use some ancient SSL library? I'd rather have modern software, even if I'm forced to use an older protocol version by the other endpoint. Just disable it by default.
And it's not like TLS 1.0 and 1.1 are somehow worse than cleartext communication. They're still encrypted transport protocols that take significant effort to break. The fact that you shouldn't use them when anything better is possible doesn't mean you can't use them when nothing else is.
Exposing TLS 1.0 leaves your connections vulnerable to BEAST. Requiring TLS 1.2 deprecates clients older than what, Android 4.4.2 and Safari 9? Maybe for exceptional cases like IoT crapware and fifteen-year-old smartphones you might still need 1.1? I don't see why you'd want to take on the additional work and risk otherwise. In practice, TLS 1.2 has been available for long enough that it should be the bare minimum at this point.
If I were to implement a TLS server today, I'd start at 1.2, and not bother with anything older. All of the edge cases, ciphers, protocols, config files, and regression tests are wasted time and effort.
BEAST is AFAIR mitigated by RC4. RC4 is vulnerable too, but an attack on it requires a volume of traffic that many clients never send. Everything is a tradeoff, and denying service to old clients is sometimes worse than introducing a small risk of TLS failing to stop a MitM.
> Exposing TLS 1.0 leaves your connections vulnerable to BEAST.
So?
> Requiring TLS 1.2 deprecates clients older than what, Android 4.4.2 and Safari 9? Maybe for exceptional cases like IoT crapware and fifteen-year-old smartphones you might still need 1.1?
You're underestimating the amount of "IoT crapware" out there. And industrial control systems. And other early internet-ified infrastructure.
Even bringing up Android and Safari hints that you're not thinking in the same direction I am. I'm concerned about RTEMS, FreeRTOS, Zephyr, and oooooold versions of mbedTLS or wolfSSL.
These systems were built using "stable" versions. What do you think was stable 10-15 years ago? That's 20 year old software. I'm happy if it's TLS and not SSL, my dear friend.
> And it's not like TLS 1.0 and 1.1 are somehow worse than cleartext communication.
In reality humans can't actually do this nuance you're imagining, and so what happens is you're asked "is this secure?" and you say "Yes" meaning "Well it's not cleartext" and then it gets ripped wide open.
HTTP in particular is like a dream protocol for cryptanalysis. If in 1990 you had told researchers to imagine a protocol where clients execute arbitrary numbers of requests to any server under somebody else's control (JavaScript), and where values you control are concatenated with secrets you want to steal (cookies), they'd have said that's nice for writing examples, but nobody would deploy such an amateur design in the real world. They would have been dead wrong.
But eh, we wrote an RFC telling you not to use these long obsolete protocol versions, and you're going to do it anyway, so, whatever.
> In reality humans can't actually do this nuance […]
Luckily the cases that need this aren't normally about a wide user base, rather they only concern a bunch of developers and admins. Which is why I pointed out the default-off nature of this.
> But eh, we wrote an RFC telling you not to use these long obsolete protocol versions, and you're going to do it anyway, so, whatever.
You're losing your audience with unnecessary hostility. Your post would have been much more effective if you had simply omitted that last paragraph.
Just to be clear, we don't care at all about the performance of 1.0. The tests behind the pretty telling graphs were done with 1.3 only, as that's what users care about.
> This seems like a HAProxy problem. They ought to maintain support for geriatric TLS versions on a dedicated release branch, tied to a support model that nudges their clients into updating by increasing the fees for maintaining that system. Not doing so makes the vendor part of the reason 1.3 adoption is slower than it could be.
If I understand what you're suggesting, it's that HAProxy should have their current public releases support only TLS 1.2 and 1.3, plus a paid release that supports TLS 1.0-1.3, and that this would encourage adoption of 1.3?
I would expect those users who have a requirement for TLS 1.0 to stay on an old free public release that supports TLS 1.0-1.2 in that case. If upgrading to support 1.3 means dropping a requirement or paying money, who would do it? How does that increase adoption versus making 1.3 available alongside all the other versions in the free release? Some people might reevaluate their requirements given those choices, but if anything that pushes abandonment of TLS 1.0 more than adoption of TLS 1.3.
I no longer have to support this kind of thing, but when you require dropping the old thing at the same time as supporting the new thing, you're forcing people to choose, and unless the choice is very clear, a large group will pick the old thing. IMHO, the differences between TLS 1.0, 1.1, and 1.2 aren't so big that you can claim it's too hard to support them all, and dropping server support for 1.0 and 1.1 doesn't gain much security. 1.2 to 1.3 is a bigger change; if you wanted to support only 1.3, that's an argument to have, but I don't think that's a realistic position for a general-purpose proxy at this point in time (it would certainly be a realistic configuration option, though).
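For what it's worth, the "configuration option" framing already exists in HAProxy's own config language via the per-bind `ssl-min-ver` option (available since 1.8, if I remember right; the frontend name, cert path, and backend are placeholders):

```
frontend https-in
    # Floor the handshake at TLS 1.2 on this listener. A separate
    # bind line could keep ssl-min-ver TLSv1.0 just for the legacy
    # clients that genuinely need it.
    bind :443 ssl crt /etc/ssl/site.pem ssl-min-ver TLSv1.2
    default_backend app
```

So the policy choice can live per listener in the operator's config rather than in which release branch they run.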
Actually this is a problem for anyone who:
- needs to rely on LTS versions
- runs multi-threaded software (like HAProxy)
- has real performance and scalability needs.
Note on #2 above that there are other LB/RP projects that don't have this problem because they chose to be single-threaded, so their performance isn't greatly impacted.
HAProxy is incredibly performant because the project chooses to prioritize performance. Also, as an open source project, we should applaud the team's efforts to provide the best product possible rather than pushing everything of value into the commercial offering.
HAProxy used to support multiprocess and multithread. When performance needs exceed a single cpu, you do need to do one or both of those. But a lot of HAProxy users need to share state between workers and that's a lot easier in the threaded environment.
When I had a need for HAProxy at high throughput, I ran multiprocess, and it was tricky to make sure the processes didn't try to use the same outbound addresses, among some other challenges I had to address to get to the connection numbers I thought were reasonable. I can understand why they would have chosen to go with threads only in 2.5 though.
If I were doing the same thing today, hopefully enough has changed in my OS of choice that threads would work for me, but if not, it shouldn't be too hard to spawn the right number of haproxy processes with individual configs to get what I need.
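A rough sketch of that fallback, assuming an invented `worker-*.cfg` per-process naming convention (the `-f`/`-D`/`-p` flags are haproxy's real config-file/daemonize/pidfile options; the `echo` makes it a dry run):

```shell
#!/bin/sh
# One haproxy process per worker config. CFG_DIR and the
# worker-*.cfg layout are made up for this sketch; each config
# would carry its own bind addresses / cpu pinning.
CFG_DIR="${CFG_DIR:-/etc/haproxy}"

launch_all() {
    for cfg in "$CFG_DIR"/worker-*.cfg; do
        [ -e "$cfg" ] || continue   # no matches: sh keeps the literal glob
        name=$(basename "$cfg" .cfg)
        # Drop the echo to actually launch the daemons.
        echo haproxy -f "$cfg" -D -p "/run/haproxy-$name.pid"
    done
}

launch_all
```

Each process then needs its own outbound addresses and pid file, which is exactly the "tricky" coordination mentioned above.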
Do we have any records of how society perceived that time? It would be interesting to compare that to the perceived injustices that modern society complains about.
While it's impossible to directly compare recent events like the pandemic to the plague, it would be interesting to examine the claim of "the worst year to be alive" by contrasting a hyper-distracted, always-online society today with one that walked among the ruins of a collapsing Roman Empire ~1500 years ago.
That said, both scenarios seem to ignore non-Western history.
Actually it's "his". Also, Redditors at the time rated him merely as "one among many talented playwrights and poets". It wasn't until the 17th century that he came to be considered _the_ supreme playwright.
... is this^^ the type of content you want on Itter? Because that's what you get from this crowd.
could someone with legal/data-privacy expertise comment if this would be something they have to disclose under data breach disclosure laws?
Technically it might not be a "data leak", but it very well could result in one if arbitrary content (including js?) can be uploaded to these webpages?
They were contacted through the "proper channels" over 18 months ago by several (more than one) security researchers.
After some people started publicly naming and shaming on LinkedIn and tagging ENISA, the issue got some exposure, but still was not fixed. It only made it more evident that several people independently reported these issues, and they became aware of peers stumbling over the issue. Still nothing happened.
ENISA is supposed to act as a CNA and expects to be notified of data breaches from EU-based orgs for PSIRT/CSIRT purposes as part of the Cyber Resilience Act and other laws.
Would I trust that vulnerability data reported as a CVE, or a breach notification, is safe with ENISA?
... feck no!
Would I trust that documents that europa.eu hosts on its infra are authentic? (such as security-compliance documents telling orgs how to properly implement security, but literally any public communication under one of the domains)
... hecking heck no!
... At this stage I think everyone except ENISA has control over their infrastructure.
oh no. Pooh, you ate all the propaganda instead of the honey.
> The systemic media control in authoritarian regimes is often inspired by China’s propaganda model. China (178th) remains the world’s largest jail for journalists and reentered the bottom trio of the Index, coming just ahead of North Korea (179th). -- https://rsf.org/en/rsf-world-press-freedom-index-2025-econom...
Is it surprising that places with more conflict have poorer press-freedom rankings (according to this model)?
Media control does not require the government to act openly. Mind-numbing patriotism can be just as effective. Look at US reporting in the aftermath of 9/11. How many papers argued against the Iraq or Afghanistan wars? How many papers are talking about Gaza now, or even covering the hands-off rallies?
Afghanistan is more complicated than that, due to the collective shock of a successful attack on US soil. Bush had to do something or the American people would have lynched him.
Iraq, though? There was tons of opposition to the Iraq War. It was only approved because people were lied to about the "weapons of mass destruction." Once the truth was out, a lot of us felt betrayed.
Iraq is a large part of why that particular segment of the Republican party (the neocons) lost its power. Which is a shame, really, given who replaced them...
Would have loved to see some non-native-English-speaking authors on the list (instead of listing some authors twice, as great as they are). There were two Russians that stood out, but no Camus, Feuchtwanger, Remarque, Musil, Borges, ...
Yes, it's kind of a strange slice - we get Faulkner three times and we get Joseph Conrad no fewer than four times(!), but not a single book from Dostoevsky or Tolstoy? No Bulgakov, no Turgenev? No Flaubert?
Lermontov's 'Hero of Our Time' is probably my favorite Russian novel, and I say that as someone who absolutely adores Dostoevsky. It still feels relevant and modern.
It would have been cool to see AWS's s2n-tls (or s2n-quic https://github.com/aws/s2n-quic) included in their benchmark.
One of my all-time favorite episodes of the SCW podcast goes into the design decisions of s2n:
The feeling's mutual: mTLS with Colm MacCárthaigh https://securitycryptographywhatever.com/2021/12/29/the-feel...
From AWS: https://aws.amazon.com/security/opensource/cryptography/
> "In 2015, AWS introduced s2n-tls, a fast open source implementation of the TLS protocol. The name "s2n", or "signal to noise," refers to the way encryption masks meaningful signals behind a facade of seemingly random noise. Since then, AWS has launched several other open source cryptographic libraries, including Amazon Corretto Crypto Provider (ACCP) and AWS Libcrypto (AWS-LC). AWS believes that open source benefits everyone, and we are committed to expanding our cryptographic and transport libraries to meet the evolving security needs of our customers."
Here is a pdf that provides some performance results for s2n (sadly not s2n-quic):
"Performance Analysis of SSL/TLS Crypto Libraries: Based on Operating Platform" https://bhu.ac.in/research_pub/jsr/Volumes/JSR_66_02_2022/12...