
I wouldn't call mocking "testing against reality." Like all testing, it has its limitations:

- Mocking doesn't validate schemas. Vendors can change a schema on a dime, and the schema you mock is the schema as you knew it when you wrote the test. This is partially mitigated by aggressive dependency management and by languages with type systems and vendor-provided interfaces. (There's a small sketch of this drift just after the list.)

- Mocking doesn't validate behavior. APIs can be weird and have bugs, and vendor bugs still become your bugs; a mock won't surface them.

- Mocking doesn't simulate network calls. You can certainly implement mocked latency, but again, this is a smooth facade over a complex problem.
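
A minimal sketch of that schema drift, assuming a hypothetical vendor client with a get_user call (Python; the names are mine, not a real vendor SDK):

  from unittest.mock import Mock

  def greeting(client, user_id):
      user = client.get_user(user_id)  # real vendor call in production
      return f"Hello, {user['name']}"

  def test_greeting():
      client = Mock()
      # Frozen snapshot of the schema as we knew it when writing the test.
      client.get_user.return_value = {"id": 42, "name": "Ada"}
      assert greeting(client, 42) == "Hello, Ada"

If the vendor renames "name" to "full_name" tomorrow, this test keeps passing while production breaks.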

It's also worth noting that using mocks, especially on vendor-interfacing code, will shape how that code reads, and not everyone is going to grok that right away. It's definitely different.

No testing is perfect or catches all of the above, but if you're cognizant of these limitations you can get close by combining good unit tests with higher-level tests. That said, calling it reality is a misnomer imo.




> Vendors can change a schema on a dime, and the schema you mock is the schema as you knew it when you wrote the test. This is partially mitigated by aggressive dependency management and by languages with type systems and vendor-provided interfaces.

This is why testing in production is so important. Building monitoring and alerting tools, making code observable enough for those to be feasible, and building an incident response culture can all fall under the "testing in production" umbrella to me.

Shifting testing both left and right has been a big thing of mine for the last few years, and I think I'm onto something: planning your architecture and projects early with an eye toward making things easy to test and observe in production, plus monitoring and alerting once it's live.
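
For a concrete flavor of "making code observable", here's a minimal sketch using prometheus_client (my tooling choice; the thread names no specific stack). Counting vendor-call outcomes is the kind of instrumentation that makes alerting on drift in production feasible:

  from prometheus_client import Counter

  VENDOR_CALLS = Counter(
      "vendor_api_calls_total",
      "Vendor API calls by outcome",
      ["endpoint", "outcome"],
  )

  def fetch_user(client, user_id):
      user = client.get_user(user_id)
      # A missing field suggests an upstream schema change.
      outcome = "ok" if "name" in user else "schema_drift"
      VENDOR_CALLS.labels(endpoint="get_user", outcome=outcome).inc()
      return user

An alert on the schema_drift rate catches in production exactly the kind of vendor change a mock would have hidden.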

Cindy Sridharan has an amazing treatise on this topic.

https://copyconstruct.medium.com/testing-in-production-the-h...


The part I didn't cover, which probably ties your ideas and mine together, is that no testing is foolproof, and the ultimate goal of testing in modern software is to minimize the time you spend in an incident - not to absolve yourself of incidents entirely.

The downside of synthetic testing is that it's relatively expensive. Doing it in staging is one thing, and personally, as an SRE, that's where I'd put it. Putting it in production requires cordoning off these requests (and their data) in some way, which is additional overhead on a kind of testing that is already relatively expensive in terms of operational complexity and the point at which errors are discovered (late in the SDLC).
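
To make that cordoning overhead concrete, a minimal sketch (Python; the X-Synthetic-Test header is an assumed in-house convention, not a standard). The probe exercises the real code path, but every layer that records data has to check the flag:

  SYNTHETIC_HEADER = "X-Synthetic-Test"  # assumed convention

  def handle_request(headers, payload, analytics):
      result = {"status": "ok", "echo": payload}   # stand-in for real work
      if headers.get(SYNTHETIC_HEADER) != "true":  # the cordon
          analytics.append({"payload": payload, "result": result})
      return result

  analytics = []
  handle_request({SYNTHETIC_HEADER: "true"}, {"probe": 1}, analytics)
  handle_request({}, {"user": "real"}, analytics)
  assert len(analytics) == 1  # only the real request was recorded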


> The downside of synthetic testing is that it's relatively expensive. Doing it in staging is one thing, and personally, as an SRE, that's where I'd put it

I actually have the exact opposite opinion. It is _because_ synthetic testing is so expensive that you want it running in production, where its value is maximized. We abolished our staging envs completely because we realized all the effort put into maintaining them, and duplicating every production change to staging, was much better spent making testing in production safe. Far too often an issue exists only in staging and not in production, or staging doesn't catch something because of the plethora of differences that will always exist between environments.

When I was in QA, I found myself caring less and less whether an issue was present while testing locally, or in some sandbox or staging environment. I only cared about production, so that is where I invested my testing time.

If a synthetic test breaks production in some unanticipated way, that is incredibly valuable: one user shouldn't be able to break production for everyone, and you just found one heck of a bug to address.


To your first point, I agree that's probably a smart allocation to make given the expense. Where we differ is on the value driven by lower-order tests; synthetic test breakages are also more expensive to fix, since the defect has traveled the full SDLC. Overweighting synthetic tests relative to unit and integration tests can lead to a lack of confidence in proposed changes. That is to say, I think it's good to allocate the larger volume of your testing to unit and integration tests, while synthetic tests validate major contracts.

> If a synthetic test breaks production in some unanticipated way, that is incredibly valuable: one user shouldn't be able to break production for everyone, and you just found one heck of a bug to address.

This is a valid point; however, I wasn't referring to breakages. I was referring to usage statistics, data, etc. Conflating synthetic usage and data with real usage and data can be problematic. There are ways to mitigate that, but they add overhead, which was my original point: synthetic testing and monitoring add a good deal of operational and code complexity.
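
For example, one common mitigation is threading a synthetic flag through the data model and filtering on it in every report; a tiny sketch, with field names that are assumptions on my part:

  events = [
      {"user": "u1", "synthetic": False},
      {"user": "probe-bot", "synthetic": True},  # synthetic monitor traffic
      {"user": "u2", "synthetic": False},
  ]

  # Every query that counts "real" usage must remember this filter.
  real_actives = {e["user"] for e in events if not e["synthetic"]}
  assert real_actives == {"u1", "u2"}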



