We consider our ten year anniversary to be in 2025 but I appreciate the kind words here!
It's actually serendipitous that it happened in December 2015. That's when I had only enough money for a domain, but not for an SSL certificate, and my site needed one. Thanks to Let's Encrypt's free certificates, the project took off.
OCSP systems at scale are complex. At the core there is an on-demand or pre-signed OCSP response generation system, then there is usually an internal caching layer (redis or similar), then there is an external CDN layer. Just because the outer layer appears not to know about a certificate (a bug, to be sure), doesn't necessarily mean that other CA systems don't know about it.
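That layering can be sketched in a few lines. This is purely illustrative Python with made-up layer names, not Let's Encrypt's actual architecture: a lookup falls through the CDN layer, then the internal cache, then the signing backend, so a miss at the outer edge says nothing about what the inner systems know.

```python
# Illustrative sketch of a layered OCSP response lookup (hypothetical names).
# A miss in an outer cache does not mean the CA core is unaware of the cert.

cdn_cache = {}                          # outermost layer, e.g. a CDN edge
internal_cache = {"cert-123": "good"}   # internal cache, e.g. redis

def sign_response(serial):
    """On-demand/pre-signed response generation at the CA core."""
    return "good" if serial == "cert-123" else None

def ocsp_lookup(serial):
    # Fall through the layers from outermost to innermost.
    for layer in (cdn_cache.get, internal_cache.get, sign_response):
        response = layer(serial)
        if response is not None:
            return response
    return "unknown"

# The CDN layer knows nothing about cert-123, but the CA still does:
assert "cert-123" not in cdn_cache
assert ocsp_lookup("cert-123") == "good"
```

The point of the sketch is only that "the outer layer doesn't know about it" and "the CA doesn't know about it" are different claims.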
Requiring CAs to run OCSP makes running a CA more complex and expensive, which has considerable downsides, as it does for any system, particularly because it's a zero-sum game. Every dollar or hour of engineering time that Let's Encrypt spends on OCSP (and it is a considerable amount) is a dollar or hour not being spent on other parts of the CA. From the outside that may not be very visible, because it's not obvious what isn't getting done instead, but there is a real and considerable cost that is not worth it IMO.
CAs are money printing machines. If they cannot even track the state of their delegated trust, then why should they be trusted themselves?! Trust is their core value proposition, the only reason they exist!
The person you're replying to (Josh Aas) is the head of the largest CA in the world, which has never charged anyone for a certificate and has never made a profit. That CA, at least, isn't a money-printing machine!
Good point. Though being a non-profit doesn't remove the fact that the value proposition of CAs is trust.
Perhaps there is some compromise, like they just have to submit all issued certs to a 3rd party and maintain the last few months' worth of issuances.
Downloading the CT log is an enormous amount of data, so it addresses a rather different case from the "is this certificate valid right now (because I'm thinking of accepting it)?" question.
For what it's worth, the person you're replying to is the founder and executive director of Let's Encrypt, the non-profit free and open CA which decidedly isn't a money-printing machine.
Do you disagree that OCSP would be significantly less costly and complex if the responder URL were not included in certificates, freeing the responders from having to serve hundreds of millions of clients?
Yes, I disagree. Best case scenario I think it would just allow us to get rid of the CDN layer. We'd still have to build and manage the rest with utmost care.
Even that really depends on what the "SLA" expectation is. How many QPS would we be required to support before we're allowed to start dropping queries? If there's any possibility of a burst beyond what we can handle locally we'd keep the CDN, so maybe the CDN bill would be significantly smaller on account of lower volume but all the complexity and care of operation would still be there.
NTP is worth moving to a memory safe language but of course it's not the single most critical thing in our entire stack to make memory safe. I don't think anyone is claiming that. It's simply the first component that got to production status, a good place to start.
NTP is a component worth moving to a memory safe language because it's a widely used critical service on a network boundary. A quick Google for NTP vulnerabilities will show you that there are plenty of memory safety vulnerabilities lurking in C NTP implementations:
Some of these are severe, some aren't. It's only a matter of time though until another severe one pops up.
I don't think any critical service on a network boundary should be written in C/C++, we know too much at this point to think that's a good idea. It will take a while to change that across the board though.
If I had to pick the most important thing in the context of Let's Encrypt to move to a memory safe language it would be DNS. We have been investing heavily in Hickory DNS but it's not ready for production at Let's Encrypt yet (our usage of DNS is a bit more complex than the average use case).
It continues to astonish me how little people care (i.e., it triggers the $%&@ out of me). I really appreciate the professionalism and cool rationality when faced with absolute ignorance of how shaky a foundation our "modern" software stack is built upon. This is a huge service to the community; kudos to you and the many others slowly grinding out progress!
That's because there is no such thing as "truly secure", there can only be "secure under an assumed threat model, where the attacker has these specific capabilities: ...". I agree that software engineering is getting away with chaos and insanity compared to civil or other engineering practices, which have to obey the laws of physics.
Reminds me of the One World Trade Center rebuild, and "if you want a 747-proof building, you're building a bunker".
Translate the internet to the real world, and basically every building (IP address) is getting shot at, hit by planes, nuked, bioweapons are stuffed into the mail slot, and lock-picked all day, every day.
People bring up postfix all the time in this context because supposedly nobody has ever found a memory safety vulnerability in it. Presumably this is supposed to make the point that it is possible to write complex programs in C safely.
The reason people know this about postfix and keep bringing this one specific example up is because it's so unusual! It's an example that almost stands alone, it's extraordinary.
I don't think this is making the point about the safety of C that you think it is.
There is a mountain of evidence suggesting that C is dangerous, particularly for network services, and one possible example to the contrary doesn't change that.
That's a bummer honestly, I would encourage you not to let Internet flamewars color your decisions about what languages you might learn. The well is seriously poisoned with regards to Rust in this community. That's more of a reflection on HN than the value of Rust as a technology or even the Rust community.
Don't learn Rust if you aren't interested, it's not a one true language, but don't cheat yourself on engaging with an interest because HN struggles to discuss it amicably.
HN is capable of discussing it just fine. Low-effort, bad faith posts getting downvoted into oblivion is the system working as intended.
GP is obstinately failing to grasp the difference between “can” and “should”. Networked services can be securely written in C. Networked services should not be written in C.
There are people who can operate motor vehicles safely at speeds regularly in excess of 100mph. People should not do so on public roads.
“What about this one C project that hasn’t been a mess of exploitable vulnerabilities” is thoroughly unconvincing in a world where networked C programs are an unending source of severe vulnerabilities.
When even a very small number of downvotes here can result in a comment's text being coloured in a way that makes it harder to read, or even nearly impossible to read in extreme cases, I think it's reasonable to equate downvoting with censorship.
Censorship doesn't require content to be completely hidden or blocked; even just partially obscuring the content in some way is still censorship.
I disagree that this is equivalent to censorship, but it sure sounds like a really good incentive to make more convincing comments than “Then how do you explain this single counterexample? Checkmate, Rustaceans.”
Plenty of reasonable disagreement happens here without getting downvoted into obscurity. Substantive points are generally upvoted even when they’re controversial opinions. So I don’t feel too upset when borderline-trolling or bad-faith arguments get hidden by consensus.
Ironic counterexample: It looks like you may have gotten downvoted despite having a fundamentally reasonable perspective.
I wouldn't call it "censorship", but sure, let's say it is.
There's a reason why "this is the exception that proves the rule" is a common saying. Picking out one exceptional example of something, and using that to argue against a general point, is at best lazy, and at worst actively dishonest. I'm fine with those kinds of comments being "censored". They -- and their replies that call people out for doing this -- are boring and don't further discussion.
It's not censorship to be downvoted for low quality posts. Please reference "intellectual honesty" and research how to have legitimate conversations. Reddit/Twitter/4chan esque communication is not appropriate for serious spaces.
> Intellectual honesty would be to point out the memory safety issue CVE of Postfix in the past decade.
This thread isn't about Postfix; you brought up Postfix as a counterpoint to a wider topic, where such an anecdote doesn't hold up nearly as well. Don't purposefully narrow the topic and move goalposts just to win internet fights.
What an interesting coincidence that all three of these accounts were created within 15 minutes of each other. I'm sure all "three" users are being "intellectually honest" here.
One of the major drivers (if not the driver) for the creation of Rust was the fact that C is not a memory-safe language.
This has been known for decades, but it wasn't until 2010 that a serious attempt at writing a new memory-safe systems language got traction: Rust.
I would say that serious attempts have been made since 1961; however, all of them failed in the presence of a free-beer OS with free-beer unsafe compilers.
I take a slightly different view of this, though I do not deny that those things are important. #1 in particular was important to my early involvement in Rust, though I had also tinkered with several other "C replacement" languages in the past.
A lot of them simply assumed that some amount of "overhead," vaguely described, was acceptable to the target audience: either tracing GC or reference counting. "But we're kinda close to C performance" was a thing that was said, but wasn't actually true. Or rather, it was true in a different sense: as computers got faster, more application domains could deal with some overhead, and so the true domain of "low level programming languages" shrank. And so finally, Rust came along, able to tackle the true "last mile" of memory unsafety.
Rust even almost missed the boat on this up until as late as one year before Rust 1.0! We would have gotten closer than most if RFC 230 hadn't landed, but in retrospect this decision was absolutely pivotal for Rust to rise to the degree that it has.
That wasn't the case of Modula-2, Ada or Object Pascal.
Their problem was a mix of not being hieroglyph languages (too much text to type!), being strongly typed (dismissed, wrongly, as straitjacket programming in a famous rant post), commercial offerings being expensive (more so against a free-in-the-box alternative), and not coming with an OS to make their use unavoidable.
Note that for all the hype, Zig is basically Modula-2 features with C syntax, to a certain extent.
No shared mutable state in an imperative language is common, as are memory-safe languages with performance close to C's? I didn't see the latter in the language shootout benchmarks; in fact, Rust is much closer to C than the next closest thing.
> It’s like asking why SQL query forming via string concatenation of user inputs is unsafe.
To be honest, that's a surprisingly deep question, and the answer is something I have yet to see any developer I've worked with understand. For example, did you know that SQL injection and XSS are really the same problem? And so is using template systems[0] like Mustache?
In my experience, very few people appreciate that the issue with "SQL query forming via string concatenation" isn't in the "SQL" part, but in the "string concatenation" part.
The underlying problem, in both cases, is that a flat string representation mixes code and data. That's why escaping is needed (such as not allowing quotes in the user input to terminate quotes in your query, not allowing HTML tags to be opened or closed by user input, not allowing HTML attributes to be opened or closed by user input, etc.).
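The code/data mixing is easy to show on the XSS side. A minimal Python sketch (illustrative input string, not from the thread): the same untrusted string becomes markup when interpolated raw, and stays inert text once it's escaped for the HTML context.

```python
import html

user_input = "<script>alert(1)</script>"  # hypothetical untrusted input

# Interpolating raw input into HTML lets data become markup (XSS)...
unsafe_html = "<p>Hello, " + user_input + "</p>"

# ...while context-aware escaping keeps it inert text.
safe_html = "<p>Hello, " + html.escape(user_input) + "</p>"

assert "<script>" in unsafe_html
assert "<script>" not in safe_html
```

Swap the HTML context for an SQL string literal and the quote characters play the same role the angle brackets play here.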
> In my experience, very few people appreciate that the issue with "SQL query forming via string concatenation" isn't in the "SQL" part, but in the "string concatenation" part.
Really? To me it's pretty obvious that not escaping properly is the issue, and therefore the same issue applies wherever you need escaping. I don't think I've ever heard anyone say that SQL itself was the problem with SQL injection. (Although you certainly could argue that - SQL could be designed in such a way that prepared statements are mandatory and there's simply no syntax for inline values.)
Really, the problem is in combining tainted input strings with string concatenation. If you have certain guarantees on the input strings, concatenation can be safe. That said, I still wouldn’t use it since there are few guarantees that future code wouldn’t introduce tainted strings.
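To make the contrast concrete, here's a small stdlib `sqlite3` sketch (illustrative table and values, invented for this example): the concatenated query lets the input rewrite the query itself, while the parameterized version keeps the input as data no matter what it contains.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

evil = "nobody' OR '1'='1"  # hypothetical tainted input

# String concatenation: the input terminates the quoted literal and
# becomes part of the query's code, matching every row.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + evil + "'"
).fetchall()

# Parameterized query: the input stays data; no user named that exists.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (evil,)
).fetchall()

print(len(leaked), len(safe))  # concatenation leaks 2 rows, placeholder finds 0
```

The placeholder version is also what makes the "future code introduces tainted strings" worry moot: there is no concatenation step for a later change to get wrong.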
I think you have a misunderstanding of what memory safety is.
Memory safety means that your program is free of memory corruption bugs (such as buffer overflow or underflow bugs) that could be used to retrieve data the user isn't supposed to have, or to inject code/commands that then get run in the program's process. These don't get shut down by the OS because the program is interacting with its own memory.
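Using Python purely as a stand-in for any memory-safe language: the defining behavior is that an out-of-bounds write is caught deterministically instead of silently scribbling over whatever sits next to the buffer, which is exactly the class of bug described above.

```python
buf = [0] * 8  # an 8-element "buffer"

def write(index, value):
    # In C, buf[8] would silently overwrite whatever lies past the
    # buffer (a stack canary, a return address, another variable).
    # A memory-safe runtime bounds-checks and raises instead.
    buf[index] = value

write(7, 42)       # in-bounds: fine
try:
    write(8, 42)   # one past the end: a caught error, not corruption
except IndexError as e:
    print("caught:", e)
```

The OS can't help with the C version because, from its point of view, the process is just writing to its own memory.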
You'd probably want an alternative libc implementation rather than a compiler flag.
However, calloc everywhere won't save you from smashing the stack or pointer type confusion (a common source of JavaScript exploits). Very rarely is leftover data from freed memory the source of an exploit.
They have been unsafe from their very early days. Multics got a higher security score than UNIX thanks to PL/I, C.A.R. Hoare addressed C's existence in his 1980 Turing Award lecture, and Fran Allen made similar remarks back in the day.
The Rustls TLS implementation and certificate verification are all safe Rust.
The underlying cryptography is still a mix of C and asm, that's the best option we have now particularly if we want support for things that make it deployable, like FIPS. We are looking for ways to improve the safety of the underlying crypto in the future.
Has anyone in this space considered adding type annotations to assembly?
It’s totally possible and it’s a thing compilers for memory safe languages sometimes have to do internally.
It wouldn’t take a lot of language engineering to make it nice. You’d end up being able to take that asm code more or less as is and annotate it with just a type proof so that Rust/Go/Fil-C can call into that shit without worrying about it blowing up your rules.
Somehow I doubt even a percent of owners would choose to unpair. And if they do for repair purposes, there's no reason they couldn't re-pair/register. What am I missing?
> Comparing data in the six months before and after Apple released its anti-theft feature, police said iPhone robberies in San Francisco dropped 38 percent. In London, they fell 24 percent.
> In New York City, robberies (which typically involve a threat of violence) of Apple products dropped 19 percent and grand larcenies of Apple products dropped 29 percent in the first five months of 2014, compared with the same time period from 2013, according to a report from the New York attorney general’s office, which included data from the New York City Police Department. By comparison, thefts of Samsung products increased 51 percent in the first five months of 2014, compared with the same period a year ago, the report said.
2014 would be prior to most of the features mentioned in this article (parts pairing) and not the subject of your article (kill switches). The article does tell us that:
* iPhones are very commonly stolen
* Android devices introduced a similar kill switch in 2014
> Samsung introduced a kill switch for its Galaxy S5 in April, so it will be some time until its effects can be evaluated
There have been reports of thieves threatening victims with weapons and demanding they disable iCloud. Not saying that negates the benefit necessarily, but it's a relevant data point.
Those numbers are misleading. IdenTrust has always been a small CA in terms of volume. The large percentage you see for them there is actually Let's Encrypt volume counted as IdenTrust because of the cross-sign. The Let's Encrypt percentage there is from sites not using the cross-sign. Add them together and that is basically Let's Encrypt's total volume, as IdenTrust itself is likely < 1%.
As the person who negotiated the agreements between Let's Encrypt and Identrust I can tell you that they have provided valuable services, including but not limited to cross-signs. I would not describe it as rent seeking.
We are sincerely glad to have them as partners, and grateful for their contributions to helping get Let's Encrypt going. We could not have done what we did without them. Running a publicly trusted CA is not easy, and cross-signing others involves work and liability, particularly if the entity asking for a cross-sign is an upstart with a strange plan and little to no experience running a CA.
Today is roughly the ten year anniversary of when we publicly announced our intention to launch Let's Encrypt, but next year is the ten year anniversary of when Let's Encrypt actually issued its first certificate:
https://letsencrypt.org/2015/09/14/our-first-cert/
In December of 2015 (~9 years ago today) it was made available to everyone, no invitation needed:
https://letsencrypt.org/2015/12/03/entering-public-beta/