
I love the MaxInboundMessageSize example. I've run into that many times.
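
For anyone who hasn't hit it: the defensive move is to stop inheriting the library default and pin the limit you actually need. A minimal sketch, assuming the example is gRPC's inbound message size cap (grpc-java's maxInboundMessageSize; grpc-go spells the equivalent knobs MaxCallRecvMsgSize / MaxRecvMsgSize):

    package main

    import (
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        const maxMsgBytes = 16 * 1024 * 1024 // sized from our own traffic, not the library default

        // Client: pin the receive limit instead of inheriting whatever the
        // current release happens to ship with.
        conn, err := grpc.Dial("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxMsgBytes)),
        )
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        // Server side gets the same explicit cap.
        _ = grpc.NewServer(grpc.MaxRecvMsgSize(maxMsgBytes))
    }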

Often there will be a note in the release notes about it, and I know I should read the release notes in detail when I upgrade dependencies, but like many people I don't always. Sometimes it's just laziness or complacency -- especially for "utility" libraries for things like compression or encoding -- but other times it's a challenge with the release notes themselves:

* Each version's release notes are published independently (or worse: only on the Releases tab in GitHub, where you have to click to expand each one)

* The release notes are really long or dense, and breaking changes are easily missed

There are also worse problems:

* The release notes don't actually call out the breaking change (you have to read each ticket in detail)

* The release notes just say "Bug fixes" or there are no release notes

I think that, along with the suggestions in this article, library authors should also put effort into writing good release notes. That includes remembering that some people are upgrading from a version that is a couple of major versions and/or years old.



While it is a good example, you could also use the same example and conclude that the problem was inadequate tests. SemVer is great, but you can't count on dependencies that you do not control actually adhering to it, either intentionally or unintentionally.

The only thing that could have prevented something like this for sure was mentioned:

> And while nothing in our early testing sent messages larger than 256k, there were plenty of production instances that did.

To me, this was the clear failure; not the fact that some dependency broke semver. Their production system relied on being able to send messages larger than 256k, and their tests did not.
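
A boundary test over the sizes production actually sends would have turned the silent limit change into a CI failure. A rough sketch of the idea (Go; sendMessage is a hypothetical stand-in for the real client call):

    package client

    import (
        "bytes"
        "testing"
    )

    // sendMessage stands in for the real client call (hypothetical name).
    func sendMessage(payload []byte) error { return nil }

    func TestPayloadSizes(t *testing.T) {
        // Cover both sides of the old 256 KiB default plus real production sizes,
        // so a quietly lowered limit fails here instead of in production.
        for _, size := range []int{64 * 1024, 256 * 1024, 256*1024 + 1, 4 * 1024 * 1024} {
            payload := bytes.Repeat([]byte{0xAB}, size)
            if err := sendMessage(payload); err != nil {
                t.Fatalf("payload of %d bytes failed: %v", size, err)
            }
        }
    }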


While that's easy to say, how far do you go? Do you test every bit of every upstream library you use? The ideal is probably yes, but the reality is that this rarely happens.

Even with a test, you may not find this. In the IOException example, the author calls out why:

> When we upgraded, all our tests passed (because our test fixtures emulated the old behavior and our network was not unstable enough to trigger bad conditions)

The only way to catch this type of thing is to emulate the entire network side of things, and that's still only as good as your simulation of the real world. Again, the reality is that even if you test your upstream to this extent, you're probably mocking a bunch of things, and that mocking may mask something you won't see until production.
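
One middle ground that doesn't require simulating the whole network is fault injection at the connection layer. A rough sketch of the idea in Go (the names here are made up, not a library API):

    package faultconn

    import (
        "errors"
        "math/rand"
        "net"
    )

    // flakyConn wraps a real net.Conn and fails a fraction of reads, to surface
    // code paths that only trigger on an unstable network.
    type flakyConn struct {
        net.Conn
        failRate float64
    }

    func (c *flakyConn) Read(p []byte) (int, error) {
        if rand.Float64() < c.failRate {
            return 0, errors.New("injected transient read failure")
        }
        return c.Conn.Read(p)
    }

    // Wrap returns a connection that fails roughly 10% of reads.
    func Wrap(conn net.Conn) net.Conn {
        return &flakyConn{Conn: conn, failRate: 0.1}
    }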


> While that's easy to say, how far do you go? Do you test every bit of every upstream library you use? The ideal is probably yes, but the reality is that this rarely happens.

It depends. I have a friend who used to work on banking systems. They had full test coverage of every dependency. Even standard lib functions and language features.

One time they found a bug in the md5 implementation in a minor version of a popular database.
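
For a hash like md5 that kind of coverage is cheap: run the published test vectors through whatever implementation you depend on, whether that's the standard library or the database's md5() function. Sketch in Go against the RFC 1321 vectors:

    package md5check

    import (
        "crypto/md5"
        "encoding/hex"
        "testing"
    )

    // Known-answer vectors from RFC 1321; if the implementation you depend on
    // ever drifts, this fails immediately on upgrade.
    func TestMD5Vectors(t *testing.T) {
        cases := map[string]string{
            "":               "d41d8cd98f00b204e9800998ecf8427e",
            "abc":            "900150983cd24fb0d6963f7d28e17f72",
            "message digest": "f96b697d7cb7938d525a2f31aaf161d0",
        }
        for in, want := range cases {
            sum := md5.Sum([]byte(in))
            if got := hex.EncodeToString(sum[:]); got != want {
                t.Errorf("md5(%q) = %s, want %s", in, got, want)
            }
        }
    }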


> It depends. I have a friend who used to work on banking systems. They had full test coverage of every dependency. Even standard lib functions and language features.

These are not dependencies anymore then; they are part of your source code and should be vendored with it. I don't know what language your friend was using, but I'm pretty sure most standard libraries and languages already have tests with very good coverage.

> One time they found a bug in the md5 implementation in a minor version of a popular database.

Every piece of code can have bugs. 100% code coverage doesn't eliminate bugs; it just means every code path is exercised. An algorithm can still return the wrong result for some inputs even with 100% path coverage.


The lesson learned isn’t about code coverage or paths tested. It’s to not blindly trust third-party anything, even “languages [that] already have tests with very good coverage”, when the stakes are high.

If billions of dollars are riding on your code, you better be damn sure you trust everything it relies on.

Fun side note: every piece of internal code was always developed in parallel to the same spec by 3+ teams so they could cross validate. If all 3 functions don’t return the same value for the same input, every team gets to build it again until all implementations behave the exact same.
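
In code, the cross-validation harness can be as dumb as: run every implementation on the same input and refuse the result unless they all agree. Rough sketch (Go, hypothetical signatures):

    package nversion

    import "fmt"

    // Impl is one team's implementation of the shared spec.
    type Impl func(input []byte) ([]byte, error)

    // CrossValidate runs every implementation on the same input and rejects the
    // result unless all of them agree, mirroring the "3+ teams, same spec" setup.
    func CrossValidate(input []byte, impls ...Impl) ([]byte, error) {
        if len(impls) < 3 {
            return nil, fmt.Errorf("need at least 3 implementations, got %d", len(impls))
        }
        first, err := impls[0](input)
        if err != nil {
            return nil, err
        }
        for i, impl := range impls[1:] {
            out, err := impl(input)
            if err != nil {
                return nil, err
            }
            if string(out) != string(first) {
                return nil, fmt.Errorf("implementation %d disagrees with implementation 0", i+1)
            }
        }
        return first, nil
    }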

High reliability engineering sounds “fun”


Certainly it's fun from the pure engineering perspective but I guess also somewhat tedious.

On the other hand, if a billion dollars depend on your code working or not, or in other cases human lives like in space rockets, you don't get a second chance. If you fuck up, lots of important things get flushed down the toilet, usually including your job.

So you have 3 teams work in parallel to be 99.999999999% certain that it'll work as advertised. It's also sorta why banks are slow to adopt new changes: they want to be sure that whatever is going on, it'll work and not flush Grandma's rent down the drain.


Was the spec also written by the 3 teams in parallel to make sure the spec is not broken?


Tests Georg is an outlier, and should not have been counted.


But isn't it impractical to test every feature of every library you are using? In an ideal world you would have everything tested both in isolation and in integration. But in practice there will always be a corner case that remains untested, because you don't know all the internals of the libraries you use.


If you want to be able to randomly upgrade those dependencies and not have to worry about a breaking change, then yes. SemVer is not going to help you there. It is only going to help you when someone knows they are releasing a breaking change. And even then, only if they are nice enough to actually follow the spec.

You don't have to test every bit of every dependency you use, but upgrading them without either carefully reviewing the changes or having tests in place for at least critical functionality is asking for something like this to happen eventually.
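
One cheap guardrail, since SemVer at least encodes intent: gate dependency bumps so anything that crosses a major version forces a changelog review and the critical-path tests. Hypothetical sketch, not a real tool:

    package depcheck

    import (
        "fmt"
        "strings"
    )

    // majorOf extracts the major component from an "X.Y.Z" or "vX.Y.Z" string.
    func majorOf(v string) string {
        return strings.SplitN(strings.TrimPrefix(v, "v"), ".", 2)[0]
    }

    // FlagRiskyUpgrade errors when an upgrade crosses a major version, i.e. when
    // even a SemVer-honest dependency is allowed to break you.
    func FlagRiskyUpgrade(dep, oldV, newV string) error {
        if majorOf(oldV) != majorOf(newV) {
            return fmt.Errorf("%s: %s -> %s is a major bump; review the changelog and run the critical-path tests", dep, oldV, newV)
        }
        return nil
    }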


God yes! For the apps that I maintain (and which have users outside my team), I enforce high-quality release notes like you describe. Representative example: https://github.com/sapcc/swift-http-import/blob/master/CHANG... (note that this also takes SemVer seriously)


> I enforce high-quality release notes like you describe. Representative example: https://github.com/sapcc/swift-http-import/blob/master/CHANG.... (note that this also takes SemVer seriously)

Nitpick: you are not using SemVer.

> A normal version number MUST take the form X.Y.Z where X, Y, and Z are non-negative integers, and MUST NOT contain leading zeroes. X is the major version, Y is the minor version, and Z is the patch version. Each element MUST increase numerically. For instance: 1.9.0 -> 1.10.0 -> 1.11.0. [1]

[1] https://semver.org/#spec-item-2

Some of your version numbers lack the patch version.
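
A minimal check for that core X.Y.Z shape, leaving out the pre-release and build-metadata rules from the later spec items (sketch in Go):

    package semvercheck

    import "regexp"

    // Core version shape from spec item 2: three numeric fields, no leading
    // zeroes, no missing patch component.
    var coreVersion = regexp.MustCompile(`^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)$`)

    func IsCoreSemVer(v string) bool { return coreVersion.MatchString(v) }

    // IsCoreSemVer("1.10.0") == true
    // IsCoreSemVer("1.10")   == false (missing patch version)
    // IsCoreSemVer("01.2.3") == false (leading zero)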


Thanks for the heads-up. Will fix that in future releases.



