
We consider our ten year anniversary to be in 2025, but I appreciate the kind words here!

Today is roughly the ten year anniversary of when we publicly announced our intention to launch Let's Encrypt, but next year is the ten year anniversary of when Let's Encrypt actually issued its first certificate:

https://letsencrypt.org/2015/09/14/our-first-cert/

In December 2015 (~9 years ago today) it was made available to everyone, no invitation needed:

https://letsencrypt.org/2015/12/03/entering-public-beta/


It's actually serendipitous that it happened in December 2015. That's when I had only enough money for a domain, but not for an SSL certificate, and my site needed one. Thanks to Let's Encrypt's free SSL, the project took off.


No team is going to prevent issues like this 100% of the time. That's a wildly unrealistic expectation.

Wherever the bar is, it won't be 100%. That's why good leadership invests in the ability to respond well to mistakes that will inevitably be made.


Still. These incidents should be so rare that, when they occur, a leadership failure is more likely than a series of unfortunate events.

Thus, replacing the leadership is always the right response.


OCSP systems at scale are complex. At the core there is an on-demand or pre-signed OCSP response generation system, then there is usually an internal caching layer (redis or similar), then there is an external CDN layer. Just because the outer layer appears not to know about a certificate (a bug, to be sure), doesn't necessarily mean that other CA systems don't know about it.
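
To make the layering concrete, here's a rough sketch of that lookup order. The types and helpers (cdn_get, redis_get, sign_response) are hypothetical stand-ins, not any real CA's API:

    /* Illustrative stand-ins only: each cache layer returns a
       response or NULL on a miss. */
    typedef struct ocsp_response ocsp_response;
    typedef struct cert_id cert_id;
    ocsp_response *cdn_get(const cert_id *id);       /* outer: CDN edge cache  */
    ocsp_response *redis_get(const cert_id *id);     /* middle: internal cache */
    ocsp_response *sign_response(const cert_id *id); /* core: on-demand signer */

    /* The point: the outer layer missing a certificate says nothing
       about whether the inner layers know about it. */
    ocsp_response *ocsp_lookup(const cert_id *id) {
        ocsp_response *r;
        if ((r = cdn_get(id)) != NULL)   return r;
        if ((r = redis_get(id)) != NULL) return r;
        return sign_response(id);
    }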

Requiring CAs to run OCSP makes running a CA more complex and expensive, which has considerable downsides, as it does for any system, particularly because it is a zero-sum game. Every dollar or hour of engineering time that Let's Encrypt spends on OCSP (and it is a considerable amount) is a dollar or hour not being spent on other parts of the CA. From the outside that may not be very visible, because it's not obvious what isn't getting done instead, but there is a real and considerable cost that, IMO, is not worth it.


CAs are money printing machines. If they cannot even track the state of their delegated trust, then why should they be trusted themselves?! Trust is their core value proposition, the only reason they exist!


The person you're replying to (Josh Aas) is the head of the largest CA in the world, which has never charged anyone for a certificate and has never made a profit. That CA, at least, isn't a money-printing machine!


Good point. Though being a non-profit doesn't remove the fact that the value proposition of CAs is trust.

Perhaps there is some compromise like they just have to submit all issuances certs to a 3rd party, and maintain the last few months worth of issuances.


Part of that compromise is already in effect:

https://certificate.transparency.dev/


Why don’t CT logs include revocations? Seems like that could provide transparency into this without the need for another system (OCSP)


Downloading the CT log is an enormous amount of data, so it addresses a rather different case from the "is this certificate valid right now (because I'm thinking of accepting it)?" question.


For what it's worth, the person you're replying to is the founder and executive director of Let's Encrypt, the non-profit free and open CA which decidedly isn't a money-printing machine.


Do you disagree that OCSP would be significantly less costly and complex if the responder URL were not included in certificates, freeing the responders from having to serve hundreds of millions of clients?


Yes, I disagree. Best case scenario I think it would just allow us to get rid of the CDN layer. We'd still have to build and manage the rest with utmost care.

Even that really depends on what the "SLA" expectation is. How many QPS would we be required to support before we're allowed to start dropping queries? If there's any possibility of a burst beyond what we can handle locally we'd keep the CDN, so maybe the CDN bill would be significantly smaller on account of lower volume but all the complexity and care of operation would still be there.


They are so complex, and in practice unreliable, that my employer runs a caching proxy for our (non-browser) users (they mostly don’t want to fail open).

IMO it is unfixable and should go


I don’t understand what relevance your caching proxy has. If you have time, are you able to explain this a bit further?


The CA OCSP endpoints are so unreliable that we have to run a cache (the SDK we provide will use it).


Relying on OCSP alone client-side is simply too unreliable; either the responses have to be cached client-side or the server has to do stapling.
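
For illustration, a minimal sketch of the stapling side using OpenSSL's client API (error handling and full response verification omitted; the callback wiring is one assumption about how you'd integrate it):

    #include <openssl/ssl.h>
    #include <openssl/ocsp.h>

    /* Client callback: inspect the OCSP response the server stapled
       into the handshake. Returning 0 aborts the connection. */
    static int ocsp_client_cb(SSL *s, void *arg) {
        const unsigned char *p;
        long len = SSL_get_tlsext_status_ocsp_resp(s, &p);
        (void)arg;
        if (len <= 0)
            return 1; /* nothing stapled; fall back to other checks */
        OCSP_RESPONSE *resp = d2i_OCSP_RESPONSE(NULL, &p, len);
        int ok = resp != NULL &&
                 OCSP_response_status(resp) == OCSP_RESPONSE_STATUS_SUCCESSFUL;
        OCSP_RESPONSE_free(resp);
        return ok;
    }

    void enable_stapling(SSL_CTX *ctx, SSL *ssl) {
        /* Ask the server to staple, and register the callback. */
        SSL_set_tlsext_status_type(ssl, TLSEXT_STATUSTYPE_ocsp);
        SSL_CTX_set_tlsext_status_cb(ctx, ocsp_client_cb);
    }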


ntpd-rs supports NTS. I agree it would be great if more people used it!


I'm the person driving this.

NTP is worth moving to a memory safe language but of course it's not the single most critical thing in our entire stack to make memory safe. I don't think anyone is claiming that. It's simply the first component that got to production status, a good place to start.

NTP is a component worth moving to a memory safe language because it's a widely used critical service on a network boundary. A quick Google for NTP vulnerabilities will show you that there are plenty of memory safety vulnerabilities lurking in C NTP implementations:

https://www.cvedetails.com/vulnerability-list/vendor_id-2153...

Some of these are severe, some aren't. It's only a matter of time though until another severe one pops up.

I don't think any critical service on a network boundary should be written in C/C++; we know too much at this point to think that's a good idea. It will take a while to change that across the board, though.

If I had to pick the most important thing in the context of Let's Encrypt to move to a memory safe language it would be DNS. We have been investing heavily in Hickory DNS but it's not ready for production at Let's Encrypt yet (our usage of DNS is a bit more complex than the average use case).

https://github.com/hickory-dns/hickory-dns

Work is proceeding at a rapid pace and I expect Hickory DNS to be deployed at Let's Encrypt in 2025.


It continues to astonish me how little people care (i.e., it triggers the $%&@ out of me). I really appreciate the professionalism and cool-headed rationality when faced with absolute ignorance of how shaky a foundation our "modern" software stack is built upon. This is a huge service to the community; kudos to you and the many others slowly grinding out progress!


Lol, shaky indeed. A business person once said, "Can you imagine if mechanical engineers (like auto makers) behaved like software engineers?"

It seems no digital system is truly secure. Moving foundational code to memory-safe languages seems like a good first step.


That's because there is no such thing as "truly secure"; there can only be "secure under an assumed threat model, where the attacker has these specific capabilities: ...". I agree that software engineering is getting away with chaos and insanity compared to civil or other engineering practices, which have to obey the laws of physics.


Reminds me of the One World Trade Center rebuild, and "if you want a 747-proof building, you're building a bunker".

Translate the internet to the real world, and basically every building (IP address) is getting shot at, hit by planes, nuked, having bioweapons stuffed into the mail slot, and getting lock-picked all day, every day.


[flagged]


People bring up postfix all the time in this context because supposedly nobody has ever found a memory safety vulnerability in it. Presumably this is supposed to make the point that it is possible to write complex programs in C safely.

The reason people know this about postfix and keep bringing this one specific example up is because it's so unusual! It's an example that almost stands alone; it's extraordinary.

I don't think this is making the point about the safety of C that you think it is.

There is a mountain of evidence suggesting that C is dangerous, particularly for network services, and one possible example to the contrary doesn't change that.


[dead]


That's a bummer, honestly. I would encourage you not to let Internet flamewars color your decisions about what languages you might learn. The well is seriously poisoned with regards to Rust in this community. That's more of a reflection on HN than the value of Rust as a technology or even the Rust community.

Don't learn Rust if you aren't interested, it's not a one true language, but don't cheat yourself on engaging with an interest because HN struggles to discuss it amicably.


HN is capable of discussing it just fine. Low-effort, bad faith posts getting downvoted into oblivion is the system working as intended.

GP is obstinately failing to grasp the difference between “can” and “should”. Networked services can be securely written in C. Networked services should not be written in C.

There are people who can operate motor vehicles safely at speeds regularly in excess of 100mph. People should not do so on public roads.


[flagged]


Downvoting isn’t censorship. It’s disagreement.

“What about this one C project that hasn’t been a mess of exploitable vulnerabilities” is thoroughly unconvincing in a world where networked C programs are an unending source of severe vulnerabilities.


When even a very small number of downvotes here can result in a comment's text being coloured in a way that makes it harder to read, or even nearly impossible to read in extreme cases, I think it's reasonable to equate downvoting with censorship.

Censorship doesn't require content to be completely hidden or blocked; even just partially obscuring the content in some way is still censorship.


I disagree that this is equivalent to censorship, but it sure sounds like a really good incentive to make more convincing comments than “Then how do you explain this single counterexample? Checkmate, Rustaceans.”

Plenty of reasonable disagreement happens here without getting downvoted into obscurity. Substantive points are generally upvoted even when they’re controversial opinions. So I don’t feel too upset when borderline-trolling or bad-faith arguments get hidden by consensus.

Ironic counterexample: It looks like you may have gotten downvoted despite having a fundamentally reasonable perspective.


I wouldn't call it "censorship", but sure, let's say it is.

There's a reason why "this is the exception that proves the rule" is a common saying. Picking out one exceptional example of something, and using that to argue against a general point, is at best lazy, and at worst actively dishonest. I'm fine with those kinds of comments being "censored". They -- and their replies that call people out for doing this -- are boring and don't further discussion.


It's not censorship to be downvoted for low-quality posts. Please reference "intellectual honesty" and research how to have legitimate conversations. Reddit/Twitter/4chan-esque communication is not appropriate for serious spaces.


[flagged]


> Intellectual honesty would be to point out the memory safety issue CVE of Postfix in the past decade.

This thread isn't about Postfix; you brought up Postfix as a counterpoint to a wider topic where such an anecdote doesn't hold up nearly as well. Don't purposefully narrow the topic and move goalposts just to win internet fights.


What an interesting coincidence that all three of these accounts were created within 15 minutes of each other. I'm sure all "three" users are being "intellectually honest" here.


Why are C and C++ all of a sudden unsafe? Did I miss something?

What is safe now? JavaScript? PyTorch?


One of the major drivers (if not the driver) for the creation of Rust was the fact that C is not a memory-safe language.

This has been known for decades, but it wasn't until 2010 that a serious attempt at writing a new, memory-safe systems language got traction - Rust.

https://kruschecompany.com/rust-language-concise-overview/#:....


I would consider that serious attempts have been made since 1961; however, all of them failed in the presence of a free beer OS with free beer unsafe compilers.

The getting-traction part is the relevant part.


Yeah, all previous attempts at making such a language lacked two things:

1. They didn't have a big, well-known, company name with good enough reputation to attract contributors.

2. They didn't have brackets.

Success came from traction, traction came from appeal, and appeal was mostly due to those two things. Nothing else was new AFAIK.


I take a slightly different view of this, though I do not deny that those things are important. #1 in particular was important to my early involvement in Rust, though I had also tinkered with several other "C replacement" languages in the past.

A lot of them simply assumed that some amount of "overhead," vaguely described, was acceptable to the target audience. Either tracing GC or reference counting. "but we're kinda close to C performance" was a thing that was said, but wasn't really actually true. Or rather, it was true in a different sense: as computers got faster, more application domains could deal with some overhead, and so the true domain of "low level programming languages" shrunk. And so finally, Rust came along, being able to tackle the true "last mile" for memory unsafety.

Rust even almost missed the boat on this up until as late as one year before Rust 1.0! We would have gotten closer than most if RFC 230 hadn't landed, but in retrospect this decision was absolutely pivotal for Rust to rise to the degree that it has.


That wasn't the case for Modula-2, Ada, or Object Pascal.

Their problem was a mix of not being hieroglyph languages (too much text to type!), being much more strongly typed (straitjacket programming, as one famous rant post wrongly put it), commercial offerings being expensive (more so against a free-in-the-box alternative), and not coming with an OS to make their use unavoidable.

Note that for all the hype, Zig is basically Modula-2 features with C syntax, to a certain extent.


> not comming with an OS to make their use unavoidable.

Early MacOS did use Pascal, though IIRC not Object Pascal.


You can still find documentation for the Mac OS toolbox with Pascal bindings.

This is a bit of a nostalgia trip for me, as I owned a copy of this book way back in the '90s:

https://developer.apple.com/library/archive/documentation/ma...


No shared mutable state in an imperative language is common, as are memory-safe languages with performance close to C's? I didn't see the latter in the language shootout benchmarks; in fact, Rust is much closer to C than the next closest thing.


How is C not memory safe? If I access memory I didn't allocate, the OS shuts the program down. Is that not memory safety?

(Unless you're running it on bare metal ...)


This is not a question for HN at this point. It’s like asking why SQL query forming via string concatenation of user inputs is unsafe.

Google it; C memory boundary issues have been a problem for security forever.


> It’s like asking why SQL query forming via string concatenation of user inputs is unsafe.

To be honest, that's a surprisingly deep question, and the answer is something I have yet to see any developer I've worked with understand. For example, did you know that SQL injection and XSS are really the same problem? And so is using template systems[0] like Mustache?

In my experience, very few people appreciate that the issue with "SQL query forming via string concatenation" isn't in the "SQL" part, but in the "string concatenation" part.

--

[0] - https://en.wikipedia.org/wiki/Template_processor
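
To make the "string concatenation" point concrete, here's a sketch with SQLite's C API (table and column names invented): in the first version the user's string is spliced into the code channel, in the second it travels purely as data.

    #include <sqlite3.h>
    #include <stdio.h>

    /* Vulnerable: input like  ' OR '1'='1  rewrites the query itself. */
    void lookup_unsafe(sqlite3 *db, const char *user) {
        char sql[256];
        snprintf(sql, sizeof sql,
                 "SELECT id FROM users WHERE name = '%s'", user);
        sqlite3_exec(db, sql, NULL, NULL, NULL);
    }

    /* Safe: the query's code is fixed at prepare time; the bound
       parameter can never change its structure. */
    void lookup_safe(sqlite3 *db, const char *user) {
        sqlite3_stmt *stmt;
        sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?1",
                           -1, &stmt, NULL);
        sqlite3_bind_text(stmt, 1, user, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%d\n", sqlite3_column_int(stmt, 0));
        sqlite3_finalize(stmt);
    }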


The problem that causes it in both cases is that a flat string representation mixes code and data. That is why escaping is needed (such as not allowing quotes in the user input to terminate quotes in your query, or not allowing HTML tags or attributes to be opened or closed by user input, etc.).


> In my experience, very few people appreciate that the issue with "SQL query forming via string concatenation" isn't in the "SQL" part, but in the "string concatenation" part.

Really? To me it's pretty obvious that not escaping properly is the issue, and therefore the same issue applies wherever you need escaping. I don't think I've ever heard anyone say that SQL itself was the problem with SQL injection. (Although you certainly could argue that - SQL could be designed in such a way that prepared statements are mandatory and there's simply no syntax for inline values.)


Really, the problem is in combining tainted input strings with string concatenation. If you have certain guarantees on the input strings, concatenation can be safe. That said, I still wouldn’t use it since there are few guarantees that future code wouldn’t introduce tainted strings.


I think you have a misunderstanding of what memory safety is.

Memory safety means that your program is free of memory corruption bugs (such as buffer overflow or underflow bugs) that could be used to retrieve data the user isn't supposed to have, or to inject code/commands that then get run in the program's process. These don't get shut down by the OS because the program is interacting with its own memory.
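
A tiny illustration of why the OS doesn't help (buffer names and the "secret" are invented; the adjacent stack layout is typical but not guaranteed):

    #include <stdio.h>

    int main(void) {
        char name[8]   = "user";      /* attacker-visible buffer */
        char secret[8] = "hunter2";   /* adjacent "private" data */

        /* Bug: the attacker-controlled length exceeds the buffer.
           The read stays inside this process's own mapped memory,
           so there is no page fault and the OS never intervenes;
           the secret just leaks. */
        size_t attacker_len = 16;
        fwrite(name, 1, attacker_len, stdout);
        putchar('\n');
        return 0;
    }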


Memory errors can be exploited by a clever adversary to control your process in a variety of unpleasant ways, see: https://en.wikipedia.org/wiki/Memory_safety#Types_of_memory_...


> If I access memory I didn't allocate the OS shuts the program down.

The real problem is when you access memory that you did allocate.


So we need a new flag for gcc that writes zeros to any block of allocated memory before malloc returns, not a new language.


You'd probably want an alternative libc implementation rather than a compiler flag.

However, calloc everywhere won't save you from smashing the stack or pointer type confusion (a common source of JavaScript exploits). Very rarely is leftover data from freed memory the source of an exploit.
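
For example, even with every byte zeroed at allocation time, this classic overflow still clobbers whatever follows the buffer (a contrived sketch):

    #include <string.h>

    /* Zero-initialization doesn't help: the overflow overwrites
       adjacent locals and the saved return address, which has
       nothing to do with stale data in the buffer. */
    void greet(const char *attacker_controlled) {
        char buf[16] = {0};               /* fully zeroed           */
        strcpy(buf, attacker_controlled); /* no bounds check: smash */
    }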


If only the very competent people that decided to create Rust had thought of asking you for the solution instead...

Have a little humility.


That wouldn't make it safe. It would just make it crash in a different way, and still be vulnerable to exploitation by an attacker.


We have that already. There are still other problems that exist.


Highly recommend Alex Gaynor's intro to memory unsafety https://alexgaynor.net/2019/aug/12/introduction-to-memory-un...


That's not what memory safety refers to.


No, that's a security vulnerability waiting for someone to exploit.


They have been unsafe from their very early days. Multics got a higher security score than UNIX thanks to PL/I, C.A.R. Hoare addressed C's existence in his 1980 Turing Award lecture, and Fran Allen made similar remarks back in the day.


All of a sudden? They've been unsafe for decades, it's just that you had less of a choice then.


Because you can cast a pointer to a number and back again. Then you can stuff that value into index [-4] of your array. More or less.
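
Roughly, in C terms (a contrived sketch; where the stray write lands depends on the platform):

    #include <stdint.h>

    int main(void) {
        int arr[8] = {0};
        uintptr_t addr = (uintptr_t)arr; /* pointer -> number       */
        int *p = (int *)addr;            /* number -> pointer again */

        /* Negative index: C compiles the write without complaint,
           landing 16 bytes before the array. Undefined behavior,
           silently corrupting whatever happens to live there. */
        p[-4] = 42;
        return arr[0];
    }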


Tcl


The Rustls TLS implementation and certificate verification are all safe Rust.

The underlying cryptography is still a mix of C and asm; that's the best option we have now, particularly if we want support for things that make it deployable, like FIPS. We are looking for ways to improve the safety of the underlying crypto in the future.


Is it just a perf issue, or something else?


I assume you're asking why the underlying crypto still needs to be written in asm. There are two primary reasons:

1. Performance

2. Defense against side channel attacks (e.g. constant time operations)
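
As an illustration of point 2, here's the standard constant-time comparison idiom (a generic sketch, not any particular library's code):

    #include <stddef.h>

    /* Accumulate differences instead of returning at the first
       mismatch, so the running time doesn't reveal where two
       MACs/tags diverge the way an early-exit memcmp() can. */
    int ct_equal(const unsigned char *a, const unsigned char *b, size_t n) {
        unsigned char diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= (unsigned char)(a[i] ^ b[i]);
        return diff == 0;
    }

Even an idiom like this can be quietly undone by an optimizing compiler, which is part of why the hot cores stay in asm.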


And one political reason: the existing implementation is FIPS validated, and FIPS validation is a gigantic pain in the rear :-)


Has anyone in this space considered adding type annotations to assembly?

It’s totally possible and it’s a thing compilers for memory safe languages sometimes have to do internally.

It wouldn’t take a lot of language engineering to make it nice. You’d end up being able to take that asm code more or less as is and annotate it with just a type proof so that Rust/Go/Fil-C can call into that shit without worrying about it blowing up your rules.


Ah, thanks, makes sense


Because thieves know that all iPhones are locked like this and thus don’t steal them.

If they knew that some significant percentage were not locked they would be worth stealing again, and if it’s locked they just throw it in a dumpster.

The fact that your phone is locked protects everyone else as well.


Somehow I doubt even a percent of owners would choose to unpair. And if they do for repair purposes, there's no reason they couldn't re-pair/register. What am I missing?


Nothing, the argument for pairing the parts is so weak that I’m genuinely surprised to see it here on HN so often.


> thieves know that all iPhones are locked like this and thus don’t steal them

Do you even believe this? I haven't seen any examples of thieves saying "stick em up! Oh, iPhone? Have a nice day"

Easy counterexample:

> an entire wall of iPhones (approximately 436) were gone

https://www.usatoday.com/story/news/nation/2023/04/07/apple-...


> Comparing data in the six months before and after Apple released its anti-theft feature, police said iPhone robberies in San Francisco dropped 38 percent. In London, they fell 24 percent.

> In New York City, robberies (which typically involve a threat of violence) of Apple products dropped 19 percent and grand larcenies of Apple products dropped 29 percent in the first five months of 2014, compared with the same time period from 2013, according to a report from the New York attorney general’s office, which included data from the New York City Police Department. By comparison, thefts of Samsung products increased 51 percent in the first five months of 2014, compared with the same period a year ago, the report said.

https://archive.nytimes.com/bits.blogs.nytimes.com/2014/06/1...


2014 would be prior to most of the features mentioned in this article (parts pairing) and not the subject of your article (kill switches). The article does tell us that:

* iPhones are very commonly stolen

* Android devices introduced a similar kill switch in 2014

> Samsung introduced the kill switch for its Galaxy S5 in April, so it will be some time until its effects can be evaluated


But those were brand new iPhones, not activated, and therefore not locked...


...and Apple presumably have the IMEIs.

Another easy example:

> In the UK in 2016, there were almost half a million mobile phones stolen [...] _most of them iPhones_

https://www.trustonic.com/opinion/smartphone-crime-turning-t...


There have been reports of thieves threatening victims with weapons and demanding they disable iCloud. Not saying that necessarily negates the benefit, but it's a relevant data point.


This did not cost us $225k. About half that. Nobody pays the website price; you pay a lot less via a VAR.

- ED of ISRG / Let's Encrypt


Those numbers are misleading. IdenTrust has always been a small CA in terms of volume. The large percentage you see for them there is actually Let's Encrypt volume counted as IdenTrust because of the cross-sign. The Let's Encrypt percentage there is from sites not using the cross-sign. Add them together and that is basically Let's Encrypt's total volume, as IdenTrust itself is likely < 1%.


As the person who negotiated the agreements between Let's Encrypt and IdenTrust, I can tell you that they have provided valuable services, including but not limited to cross-signs. I would not describe it as rent-seeking.

We are sincerely glad to have them as partners, and grateful for their contributions to helping get Let's Encrypt going. We could not have done what we did without them. Running a publicly trusted CA is not easy, and cross-signing others involves work and liability, particularly if the entity asking for a cross-sign is an upstart with a strange plan and little to no experience running a CA.

