Early on in my engineering career, there were a handful of times I was assigned to a project where I was set up to fail. I didn't understand the situation until far too late. There are a few tell-tale symptoms: I didn't quite understand what I was supposed to do; I didn't really know how to do it; and I didn't understand how the assigned work actually solved a higher-level problem. No one else did, either (these situations usually represent a failure of management).
As those projects dragged on and I was unable to make "progress", whatever that meant, I felt shame and a mounting dread of returning to the console each day. Eventually, fortunately, I was able to roll off them (not having accomplished much in the preceding month or two) and got back to doing useful things.
These days I can usually recognize such projects in advance, but it's still not always possible to avoid them.
As a senior person, these are some of my favorite types of projects, because I feel like it's where I'm the most useful, and where I can have the most impact: understand the need, then implement the need rather than the request.
When I see people struggling with these, it's usually from a lack of information seeking and gathering: they sit down and code first rather than spending the first few weeks talking, reviewing understanding, and, most importantly, finding those few A-team people who have meaningful input.
Definitely not something someone early in their career should be given, but these types of problems usually benefit from devaluing manager input, since managers often have a disconnected or warped perception of reality. I take these projects only if the understanding is that I'll be solving a problem, taking input from all involved, rather than implementing a specific solution.
That's assuming you have somewhat cooperative people, in which case I don't think it's a set-up-to-fail situation.
My reading of it was more like some past experience where I tried to do what you described, only to be reprimanded for "overstepping boundaries" and that not being my job function, especially when the manager/PO/etc insist on being an exclusive gateway to information despite repeatedly failing to do that job correctly.
I think we all agree on the importance of your last part, where you need to suss out whether they want an obedient code monkey or someone to solve the problem. Probably through an iterative dialog where the parties involved get to explore and update their understanding of the problem, the technical and non-technical constraints, and the solution space.
> especially when the manager/PO/etc insist on being an exclusive gateway to information
A strong signal to leave a sinking ship. I suspect that even if you somehow pull a rabbit out of the hat you wouldn't be rewarded for it. I stuck with one project like that when I shouldn't have: I finished the work, but the useless salesperson was screwing up the communications with the client and didn't get the project across the line to them.
One of the most demotivating environments to suffer.
> I take these projects only if the understanding is that I'll be solving a problem, taking input from all involved, rather than implementing a specific solution.
If this is the case, I would say you're not being "set up to fail", which was the explicit description the GP gave. "Set up to fail", to me, implies that whoever is tasking you is not tasking you to actually solve the problem, either because they're too clueless to know that what they're specifically tasking you to do won't work, or because there is some other hidden agenda in play.
Perfect wording! I tell people that the hardest problem in my work is getting people to see there is a problem. The crazy workarounds people will come up with to avoid the root issue are really incredible.
I agree with you but there are two prerequisites for this to work:
- as you said, this can't be too soon in your career: gathering requirements and knowledge is something that you can't do without experience
- you need to be 100% confident that you are working in an environment where you will be rewarded accordingly. Working hard on those projects just to see your manager be promoted will be an absolute emotional disaster.
If one of those requirements isn't met, you're headed for burnout.
I agree, and I also find that it's mostly developers with experience that are able to do it... but it's not because of seniority: it's because people with 10+ years of experience simply had to learn it in the past.
In the past there was no choice: developers would talk to users and stakeholders and collect information.
Today there are few opportunities for a junior developer to do this.
That would be really cool to do. One of the reasons I became a dev is that I used to work in customer service at a games company, and I was frustrated that there were ongoing bugs I could've fixed if I'd been in the dev position. In reality, as a dev I am now on the other end, where we never really hear feedback from users and there always seems to be some agile race to the bottom where we try to fix as little as possible as simply as possible.
I had several instances of product people asking stakeholders, users and support people not to talk with developers. Prioritisation had to go exclusively through them.
There was constant complaining from both sides: from product that "tickets opened to us are horrible, support/customers are ignorant" and from customers that "nothing ever gets done here".
In the end, none of that was true. Nobody was ignorant and a lot was getting done.
> always seems to be some agile race to the bottom where we try to fix as little as possible as simply as possible.
Yeah, that's not agile. The entire point of agile is that you close the loop and the reason you break your work into smaller chunks is so you can deliver them faster in order to gauge the feedback of that chunk and better understand what should be delivered in the next chunk.
If you're not closing the loop, you're not doing agile, you're just doing waterfall with more bureaucracy.
Yeah, these types of projects can turn out to be a real gift. They are essentially a "folks here have an initiative they want to fund but don't know how" and if you direct the conversations well, you can create something incredible out of it.
Things being up in the air and vague is often an opportunity to step in and tame a wild forest of ideas into a real application.
It definitely takes a certain type of mindset to harness that energy and herd the cats though.
> Early on in my engineering career, there were a handful of times I was assigned to a project where I was set up to fail. I didn't understand the situation until far too late. There are a few tell-tale symptoms: I didn't quite understand what I was supposed to do; I didn't really know how to do it; and I didn't understand how the assigned work actually solved a higher-level problem. No one else did, either (these situations usually represent a failure of management).
Yes! I have been in such a situation once (I didn't quite understand what I was supposed to do and no one else did, either) and, to this day, I remember it as a cautionary tale when I think of moving to a new position.
The people who think that they can shine in this situation by being proactive underestimate the lack of understanding: it is not a blurry task, it is a task where you are told to do X and no one knows what X actually means. You can do great things, but they still won't deliver on the requirements.
Reminds me of when I was hired as an SRE, then told you can't log on to any of the customer systems until you've committed code to the main infrastructure codebase and you're "trusted". This was the only thing the other members of the SRE team actually did though - the job itself was basically "figure out why things go weird sometimes in the customer systems and then fix underlying automation issues".
But there was an actual developer team as well, including all the original founders - who knew the system perfectly. So things would get posted and you'd be like "oh, that's a good starter task..." and instead a "core" developer would pick it, do the one line fix, and then just push it straight to the master branch (or PR it but it would get approved within minutes in their timezone) while I was still trying to get my bearings in the code and tests.
Within a month or two the manager who hired me had "resigned", and then I was let go near the end of my probationary period. The whole time I never had any solid work assigned beyond "oh figure out where you can be useful".
I've rarely had a task assigned to me that was well-defined enough to even know if I completed it. Usually there are good intentions, but the people above you just don't have time to dig into the details of what it is they think they are assigning to you. This also means that they can't really check up on you too well, either. The trick is to just dig in, do some research, do stuff (ideally stuff that genuinely needs doing, that you can accomplish, and is related to the "task" you have been given), and report on it confidently. Eventually people will consider you an expert on that issue/area and will defer to you about what needs to be done. Once you hit this point, you can make a list of "nice to haves" in this area, throw them on the backlog, and declare yourself done. If you're recognized as the main expert on that subject, it will be tough for people to argue with you.
I think the difference here is that a 'bad' project will often be very specific. You will have a very particular outcome that you must achieve to finish, and you know what it is. ("Migrate our database from NoSQL to a relational database!" etc.) But you don't actually have the skills to perform the task, or the support to even get started in the right direction. You don't even know what questions to ask, or who to ask them to. You're just ... lost.
It's also possible the project is simply too hard -- maybe doing it right would take an experienced engineer two years; but since that approach seems like obviously too much work, you flail around assuming there must be some alternative.
This is, sadly, pretty common for junior programmers and people who are new to a team.
I've been in a situation where I was set to fail despite the fact that I was actually performing a vital task for the company.
Every damned time I set out to implement changes necessary to ensure the maintainability of the project, the boss would bring in an architect (who otherwise never even looked at the project) and he would pull the rug out from under my feet.
Every damned time. By the time they realised I was actually useful and regretted the situation I was checked out and ready to walk out the door.
Hmm, was your architect also allergic to any kind of risk? E.g. any change was by default undesirable because any change carried more risk than not changing anything.
This brings back memories of one of my projects. After I had a couple of years of professional development under my belt, I started contract work on the side. I did a project with a client and it was a success. They wanted me to tackle a bigger problem: porting their app server from Unix to Windows. I thought, how hard could it be, and took on the job. It turned out their server was written in Perl scripts, and Perl was not running on Windows back then. After a couple of days of looking at their code and build system, I told them their server either had to be rewritten for Microsoft's IIS web server or Perl had to be ported to Windows to run their Perl scripts. They said port Perl to Windows. The naive me didn't comprehend the gravity of such a task and took it on. I struggled for a couple of weekends and realized it wasn't just the Perl interpreter that needed to be ported but the whole Perl ecosystem. It's not a task a weekend contractor could take on. I gave up and told the client the bad news. They just shrugged and said they just wanted to see how far I could get.
Sort of similar: I was handed a business-critical system to maintain on my own. I was new to the company, and new to that industry too. I was expected to handle business requests, establish good relations with users and run all meetings with them, sort out the old backlog (which was in a total mess), provide constant support, progress stories, and create new stories. My manager threw me under the bus early on. We had a pre-meeting catch-up to decide what we'd discuss with business stakeholders; in the meeting I brought up what he had mentioned, genuinely thinking he'd forgotten to ask. His response: "why would you think that..." He then proceeded to act shocked that I'd brought it up and to contradict me.
If I asked my manager for anything he'd throw it back in my face.
I had 18 months of pure hell. Only after I left did I realise I was in a sort of abusive relationship. I heard later he said that my problem was I refused his help. It was a miserable experience, made worse because I pretty much always blame myself if something is wrong.
Yeah, I had one of these at my first job. I was given a massive task to create the testing strategy for an app that I hardly understood. Nobody seemed to understand the task I was doing or what the limitations were when I asked for help. It just made me dread going to work. Two years into my career now, I feel like maybe I could work something out. It still feels a bit heavy to lay on someone as their first ticket something that requires deep understanding of the workings of every single part of the application.
Test-a-single-function style, what most people default to when they hear "unit test", is probably the thing that most leads into that situation.
I've been occasionally pushing coworkers to restructure code and tests into a higher level style, defining an API boundary and calling it from the rest of the code (as if you're writing a library except it's inside your codebase instead of installed), then writing tests to that API. That does make for some easily refactorable code where the tests don't need to change at all.
That pseudo-library style is kind of what "unit test" originally meant: a business unit, not a code unit. Examples simplified it too much and people copied the style instead of the substance, and the original meaning was lost.
For another example, some of our recent projects have been on updating daemons; those have also been really good candidates for this since there are clear entry and exit points.
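A minimal sketch of that pseudo-library style, assuming Python with pytest (the module and function names here are invented for illustration): the rest of the codebase only calls the small public function, and the tests target that same boundary, so the internals can be refactored without touching a single test.

    # pricing.py -- internal helpers plus a small public boundary (hypothetical names)
    def _lookup_rate(customer_id: str) -> int:
        # internal detail; free to change without touching the tests below
        return 250 if customer_id == "acme" else 300

    def _round_cents(amount: float) -> int:
        return int(round(amount))

    def quote(customer_id: str, units: int) -> int:
        """Public API: price in cents for the given usage."""
        return _round_cents(_lookup_rate(customer_id) * units)

    # test_pricing.py -- tests talk only to the public API, never to the helpers
    from pricing import quote

    def test_quote_scales_with_units():
        assert quote("acme", 3) == 3 * quote("acme", 1)

    def test_quote_zero_units_is_free():
        assert quote("acme", 0) == 0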
> That pseudo-library style is kind of what "unit test" originally meant: a business unit, not a code unit. Examples simplified it too much and people copied the style instead of the substance, and the original meaning was lost.
Very interesting. It sounds a bit like what happened with the term "Hungarian notation".
But you probably won't get 99% coverage doing that. Some conditions are just too difficult to reach from a higher level, and you still need to take them into consideration: hardware-related problems, race conditions, etc.
I don't consider it a problem, but it is to those who want to see big numbers.
I feel like if you're unable to get 99% coverage with that, then there's probably some dead code that needs pruning.
As for hardware-related problems and race conditions, testing at a higher level of abstraction seems like it'd help more than it'd hurt - in the former case ensuring graceful handling as part of the tests, and in the latter case making the race conditions more likely to hit (and hopefully fix).
I see the OP as talking about “integration”-style tests (say, testing against an API) and not being able to capture all of the OS and similar edge-case errors in the test. I do think this is a hard problem, particularly if you don’t design for it from the outset.
It’s a bit of the old classic “reading from the file never returns an ‘access denied’ error, until it does”. There’s ways and means, but let’s not pretend this is simple.
I bet the reason they aim for 99% coverage is because they're using Python or JavaScript for some complex software, which means they've roped themselves into being their own compiler. Without all that test coverage, all the bugs are surprise runtime bugs, and the tests take the "surprise" part out.
I believe that the original definition, way back before most of us were born, was a test of the smallest possible unit, i.e. a single function, etc. At least according to Wikipedia, in 1969 testing was divided into unit tests, component tests and integration tests.
In today's development world, the unit test is primarily a developer tool to help speed up development. One should not be afraid to delete unit tests when doing refactorings. But the long-term value is always in the integration tests.
Not the smallest unit, just an internal unit or component smaller than the whole of the system.
The important thing is not the size of the code affected by the test, the important thing is that a test should verify a single requirement isolated from other requirements.
I believe the original distinction between unit test and integration test was that the integration test was for when parts developed in isolation by separate teams were finally integrated. But this tended to be a disaster, which is why continuous integration was adopted.
Just going off the Wikipedia page on unit testing: the 1956 paper on programming for the SAGE project refers to testing as parameter testing, assembly testing, and shakedown testing. In that context, parameter testing is testing of individual subroutines (functions/methods). From there the term unit test was used in the Mercury System as unit testing of "individual programs outside and inside the system". I suppose here, the ethos is the same.
Obviously, when reading the rest of the papers, they are clear on the fact that it is the specifications the programmer has developed against that should be tested. At the time that was synonymous with individual subroutines, and as such it was both the smallest unit and a logical testable unit at the same time. Since then, we've come a long way with programming smaller units with better composability.
I'm not sure I agree that the original meaning of a unit test was lost. Perhaps only one part of the definition was carried over to modern development practices. In any case, I always stand by the fact that the long-term value is in tests of the "API". Everything else is implementation details.
Just to reiterate another commenter's point, I had a positive experience with this before.
When I was at BigCompanyA, we had a 95% coverage requirement as a management fiat. The company generally was very management-decree-driven. People would unit test individual methods and helper functions and each “unit” of code. If you wanted to change an API, you had to dig through a sea of broken tests because each little bit of code was individually tested. We literally had unit tests that validated one-line methods for string concatenation of a prefix (in Java). Management said to add tests with each commit, so everyone added tests without thinking about what was valuable.
At BigCompanyB, our testing was engineering-driven, not some management metric being tracked. The goal was to test “public interfaces” and ensure that these tests captured all the helper methods along the way. This helped catch dead or extraneous code if you couldn’t write a test to exercise a particular path. Changing an API didn’t require changing a bunch of silly tests. We still had equally high (>95%) coverage, and it was very useful.
Basically you need to actually think about what makes sense to test and write logical tests.
You could quickly make the changes you want without worrying about what else breaks, then run the tests. The tests that you expected to fail would fail, and you'd fix them. Then you'd find other tests that you didn't expect had also failed; you'd review the impact of your changes on those parts of the system and make appropriate changes until the tests were happy.
Because the testing was so thorough (and quality), you had high confidence that nothing unexpected was broken.
It was "move fast & break [tests], then fix them and deploy" :)
You need mostly integration tests, not unit tests.
Unit tests which dependency-inject mocks of other parts of your codebase are 99% worthless.
Source: Spent many years writing the latter and they caught close to 0 bugs. Moved to Elixir ecosystem where integration tests are the norm, and they catch bugs on a regular basis.
I never understood the fascination with Unit Tests. For Testing to be useful the code being tested should have a certain degree of Complexity (algorithmic/behavioural/state-transition/etc.). But what I see from most unit tests is mere "busy work", as if mock-testing a trivial class/api/etc. would somehow make your code better. A similar criticism is also applicable to TDD-based programming.
> I never understood the fascination with Unit Tests.
Me neither. The legacy forms of testing that "unit" was trying to differentiate from have died out. All tests are unit tests today, which is why most developers just say "tests". "Unit" is already implied and adds no additional information not already understood by "test" alone.
Same goes for integration tests. Integration tests, as they were historically defined, died out. It seems anyone still trying to use the term today is using it to also just mean "testing", or what was historically known as unit testing. Like unit, a pointless qualification.
The "busy work" of low level unit tests is perfectly valid, but TDD still works extremely well with high level integration tests so I don't think a similar criticism necessarily applies.
In my experience, unit tests are interesting when they are inlined with the function they test, i.e. in the same file (like Python doctests). Of course it's harder to set up the testing environment, but that may in fact encourage you to write stateless code, so there is that.
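For instance, a Python doctest lives right next to the function it exercises and runs with the standard doctest module (the slugify function below is just a made-up example):

    def slugify(title):
        """Turn a post title into a URL slug.

        >>> slugify("Fear Makes You a Worse Programmer")
        'fear-makes-you-a-worse-programmer'
        >>> slugify("  spaces   everywhere  ")
        'spaces-everywhere'
        """
        return "-".join(title.lower().split())

    if __name__ == "__main__":
        # run the examples embedded in the docstrings above
        import doctest
        doctest.testmod()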
I've written a number of apps where I'm fearless with making changes because I trust the tests I wrote. One of the apps has been running for 7 years and it's had a lot of big updates, refactors, etc.. I'm breaking every rule there is on jinxing things but there hasn't been a single bug introduced due to those updates and there's ~85% coverage. It's only a ~3k line Flask app though, but it does get deployed straight to production with no staging environment and gets hundreds of requests per day. It writes to a DB, Redis, interacts with multiple 3rd party APIs and sends out webhooks so it does quite a few "external" things.
I've never been a fan of TDD and personally I think tests that really hit your DB, Redis, etc. help a lot more than mocked out unit tests or a billion unit tests and nothing else. I tend to write more tests that really test things together. Not full blown Selenium style tests, but I do really write to the DB and other data stores in tests and often use a framework's built-in test client for making HTTP calls. Everything can still be really fast too (~100 tests in 2 seconds is my usual rough guide for having an assortment of "real" tests with Flask).
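A rough illustration of that style with Flask's built-in test client (the app, endpoint, and in-memory store here are invented for the example; in the suite described above the handler would write to the real database): the test goes through the real HTTP layer rather than calling functions with mocks.

    import pytest
    from flask import Flask, jsonify, request

    def create_app():
        app = Flask(__name__)
        items = []  # stand-in for a real data store, to keep the sketch self-contained

        @app.post("/items")
        def add_item():
            items.append(request.get_json()["name"])
            return jsonify(count=len(items)), 201

        return app

    @pytest.fixture
    def client():
        return create_app().test_client()

    def test_add_item_roundtrip(client):
        resp = client.post("/items", json={"name": "widget"})
        assert resp.status_code == 201
        assert resp.get_json()["count"] == 1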
> I've never been a fan of TDD and personally I think tests that really hit your DB, Redis, etc. help a lot more than mocked out unit tests or a billion unit tests and nothing else... ...but I do really write to the DB and other data stores in tests
I've come to the conclusion that the idea that "unit tests" should test functions/objects in isolation with completely mocked dependencies is based more on the slow speed of those dependencies in the past than what actually makes for good tests. Now that we have faster computers and storage devices, and easy/fast store creation, we should move past this.
Obviously, this is very dependent (no pun intended) on the dependency in question, but as a minimum, anything with SQL should have a test that hits a real SQL DB (PG in Docker, for example) at some point.
It's not even about faster computers and storage, but just about better test techniques, and better written tests. You don't actually need very fast hardware for fast tests; you just need to spend some time on it. And roughly know what you're doing.
External dependencies (such as PostgreSQL, Redis, what-have-you) are a pain though. I feel pretty strongly that just a single command ("go test", "cargo test", "npm test", etc.) should run all the tests on all platforms, reliably, with a minimum of fuss and messing about. Things like docker-compose or whatnot qualify as "a fuss and messing about".
They can be a bit of a pain, but if you have a lot of logic tied up in your external dependencies (SQL usually being the main offender here) it's not possible to properly test without using said dependency.
That said, in the current job, we have a dedicated Postgres container that comes up with a single justfile command. The setup, test, and teardown of the schema all happen within the standard test system of the tested platform (pytest or go test specifically).
Take a look at the hexagonal architecture pattern from Spotify. Once you start testing the user contract of your services against real databases (Testcontainers is a good option), you can change all the internals and be sure that the externals will work.
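A small sketch of that idea with the Python Testcontainers library and SQLAlchemy, assuming Docker and a Postgres driver are available (the table and query are invented for illustration): the test spins up a throwaway Postgres and exercises real SQL instead of a mock.

    import sqlalchemy
    from testcontainers.postgres import PostgresContainer

    def test_insert_and_read_against_real_postgres():
        # starts a disposable Postgres in Docker just for this test
        with PostgresContainer("postgres:16") as pg:
            engine = sqlalchemy.create_engine(pg.get_connection_url())
            with engine.begin() as conn:
                conn.execute(sqlalchemy.text(
                    "CREATE TABLE users (id int PRIMARY KEY, name text)"))
                conn.execute(sqlalchemy.text(
                    "INSERT INTO users VALUES (1, 'ada')"))
                name = conn.execute(sqlalchemy.text(
                    "SELECT name FROM users WHERE id = 1")).scalar_one()
            assert name == "ada"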
> > Even with good design, w/o tests you will fear change and so the code will rot. With good tests, there’s no fear, so you’ll clean the code
> Has anyone ever actually found this to be true?
Yes, but you have to have the right kind of test coverage, and that's the tricky part.
Yesterday, I refactored a bunch of functions that changed a bunch of unit tests. However, since we also have integration/system tests on that code, I'm confident that I haven't broken the code as a whole. Without those system tests, I would not have confidence that the change would be successful, and probably would not have refactored.
In another codebase that hadn't been touched for a year, as part of a feature change I refactored an SQL statement to what I thought was a more optimal design and immediately broke a bunch of tests. Based on that, I was able to understand the original intent of the SQL, and updated it in line with the feature change. I added test scenarios for the new feature, but left the existing scenarios as is.
Without those tests, I would have broken the system in a subtle way.
We have a 300kloc monster, and I find that going up from 0 to 60% test coverage has given me appreciably more confidence that the system still works after any change.
Sure, the test code is almost twice that size, and breaks almost as often as the code, but my confidence in the system itself is definitely higher.
To prevent people from writing tests that need too many mocks, we now have explicit dependency injection. So much easier to reason about stuff, and it prevents people from not testing the important bits.
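A sketch of what that explicit injection can look like in Python (the names are hypothetical): the collaborator is passed in as a constructor argument, so a test can hand in a simple in-memory stand-in instead of patching module internals.

    class ReportService:
        def __init__(self, storage):
            # the dependency is explicit; no hidden globals to patch in tests
            self.storage = storage

        def total(self, user_id):
            return sum(self.storage.amounts_for(user_id))

    class InMemoryStorage:
        def __init__(self, data):
            self.data = data

        def amounts_for(self, user_id):
            return self.data.get(user_id, [])

    def test_total_sums_user_amounts():
        service = ReportService(InMemoryStorage({"u1": [10, 20, 5]}))
        assert service.total("u1") == 35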
99% coverage does not mean you have good tests. Yes, I have experienced that good tests, combined with a good understanding which parts are critical and which are not, create a great comfort about introducing changes, whether it's Friday afternoon or Monday morning.
Only for limited areas of a server codebase on a small team who understood what tests were meant to accomplish and wrote excellent tests. Basically, about a year and a half out of a twenty five year career. I’ve never seen them useful on front end code except in extremely limited circumstances.
In most other cases, we had tests that only covered the happy path and did little more than slow down builds and make refactoring more difficult. In other words, they made things worse. E2E tests, in particular, are the Afghanistan of web development. I’ve seen more time wasted getting useless tests green than I’ve seen wasted on any other programming exercise.
At some point you start to fear changing too much, because of all the tests you will have to rewrite, which can be tedious, especially when the code mutates things all over the place and you have to mock 5 things per unit test.
> because of all the tests you will have to rewrite
If you have to rewrite tests that means you've changed the user experience in ways that are not backwards compatible.
Which is sometimes valid, but not exactly what is being talked about here. The discussion here is more about changing the code in ways that makes the code better, but still delivers the same user experience – possibly with new features added, but not where anything is taken away.
> If you have to rewrite tests that means you've changed the user experience in ways that are not backwards compatible.
That is only true for integration tests. You can rewrite a set of local functions without changing user behavior, and then you need to change tests; such refactors become a pain when you have too many unit tests but are really easy when you have many integration tests.
I don't get it. Beck was quite explicit when coining the term "unit tests" that the unit refers to the set of functionality found at the integration point – which seems to be what you refer to as "integration tests", and what everyone else these days call "tests". It's all the same.
If changes to "local functions" calls for tests to be rewritten, that means you've exposed "local functionality", even if by accident, to the outside user. Which means it is not actually local functionality, but something you have exported and are committed to maintaining. Rewriting the tests is not the correct course of action. You need to fix the code that you just broke as the functional contract was violated. With any luck that hard lesson will teach you to be more careful next time.
Unless, of course, someone before you has written lots of tests for all the functions separately, regardless of whether they represent functionality at the integration point, and there is a culture of not wanting to delete tests for coverage reasons. Then what you have are suddenly broken tests, even if you change only procedures at a lower level.
Deleting the tests will not impact coverage as the tests at the integration points will necessarily already cover any local functions. That is unless said local functions are unused, but in that case you would remove the unused local functions anyway, still not impacting coverage.
But you are right that deleting the tests isn't an option as it will break the contract that was entered into with the users of the code. Of course, you can't modify the tests for the same reason, so...
I work in a place with 99% test coverage requirements and it's honestly still a super brittle system that everyone is afraid to make big changes to.
Obviously tests are going to break when you change the code's APIs and functionality. That's expected, and does nothing to boost your confidence in the part of the code you're working in. The point of tests is to improve your confidence that you're not doing things that have an unexpected impact somewhere else (hence the Bob Martin quote in the article about tests being useful even if you have good design).
Tests aren't an alternative to thinking. You still have to consider what changes you're making, and what tests should break as a result. That's just part of your code change. When tests that shouldn't have broken start failing that's when they show their value.
I think if you evolve your testing system to feed random inputs into your program and ensure that various correctness properties hold for all of them, you are likely to get a pretty robust program. You can make this more efficient by using a fuzzer (which essentially uses feedback from running previous inputs to guide the choice of new inputs). See for example https://propertesting.com/ or https://en.wikipedia.org/wiki/American_Fuzzy_Lop_(software)
Of course if you need a higher standard of guarantee you can try model checking or other formal verification techniques.
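On the property-based side, a tiny example with the Hypothesis library for Python (the function under test is made up): the framework generates random lists and checks that the stated properties hold for every one.

    from hypothesis import given, strategies as st

    def dedupe_keep_order(items):
        """Remove duplicates while preserving first-seen order."""
        seen, out = set(), []
        for item in items:
            if item not in seen:
                seen.add(item)
                out.append(item)
        return out

    @given(st.lists(st.integers()))
    def test_dedupe_properties(xs):
        result = dedupe_keep_order(xs)
        assert len(result) == len(set(xs))          # no duplicates remain
        assert set(result) == set(xs)               # nothing is lost or invented
        assert dedupe_keep_order(result) == result  # idempotent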
This is why I'm neutral on the value of unit tests. Sometimes they are helpful, other times less so.
I am always in favor of tests around the boundaries between system components (regardless of what you call them: integration tests, API tests, etc.). The more robust the better.
If I'm working on a system with robust (ideally generative) integration tests covering the interface of a component, I feel I have near-complete freedom to rewrite any aspect of the component I want with high confidence.
Unit tests are good when there are clear and durable units, but what many people seem to mean by a unit test is one that exercises the internal functions of a package, or the API of an internal package that evolves as fast as your product.
Stable interfaces that the user cares about (e.g. CLI, public API, RPCs) make for the highest-value tests.
> I am always in favor of tests around the boundaries between system components (regardless of what you call them; integration tests, API tests, etc.)
Funnily enough, "unit tests" is what this was called originally. The boundary is what "unit" referred to. In my experience, these days most people just call them "tests". That they happen at the boundary is implied as by this point it is generally agreed upon that testing anything else is a waste of time at best and sometimes even detrimental.
Which is why nobody in fake internet argument land can settle on what "unit", "integration", etc. mean in the modern age. There is nothing related to testing in need of additional qualification to communicate to other developers what isn't already communicated with "tests" alone.
How have you seen this done well? In my experience, this usually ends up being some hello-world type simplicity that doesn't really represent the real world use corner cases.
I have, on multiple occasions. Unfortunately mostly on contracting gigs so I can't cite specifics but I've worked on a major ecommerce platform's api, and two separate databases using this approach.
For public examples, I'd point you to (e.g.) Jepsen.
I'm not going to deny it requires a pretty high level of time/money to implement. But done well it's super powerful.
You're both right. TDD is no panacea, and having no tests is flying blind. There's a black art to estimating what angles to test and how deep to go to avoid calcification.
> There's a black art to estimating what angles to test and how deep to go to avoid calcification.
There may be a black art to recognizing what you overlooked, but otherwise it is pretty straightforward. Tests are your documentation. The angles needed to be covered are exactly what the user needs to know to use your software – how to use it and, when using it, what happens both in the expected case and when failure occurs (especially what happens when failure occurs!).
Calcification is of little concern as the inverse is breaking changes, and the user does not want to deal with your breaking changes. You can put your mind at ease knowing that once you commit to a documented feature, it should remain there until the end of time.
I think you missed a word. "Good" tests is a crucial part of the statement. It's totally possible to write bad tests, the same way it's possible to write bad code.
1. Your input will be incorrect/corrupt/malicious. Sanitize the crap out of your input. Think of every possible way your parser could go off the rails and fail.
2. Your code will hit pathologically slow cases. Thread the needle between the overly complicated but linear thing and the simplistic O(n log n) thing that is straightforward to implement, debug, and test.
3. Your code will fail in production in a way that is hard to debug. Put in logging, monitoring, and dashboards. Check them. Alert on them.
4. You'll have to debug your code. Put in tracing modes, use good names, divide things up to separately debug them.
5. You'll have to explain your code. Make it easier to understand for future, drunk, or stupid you.
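For point 1, a small sketch of what "sanitize the crap out of your input" can look like in Python (the payload and field names are hypothetical): parse defensively, reject anything surprising, and fail with a clear error instead of letting bad data wander deeper into the system.

    import json

    class BadInput(ValueError):
        pass

    def parse_order(raw: bytes) -> dict:
        # reject oversized payloads before even trying to parse them
        if len(raw) > 64 * 1024:
            raise BadInput("payload too large")
        try:
            data = json.loads(raw)
        except (UnicodeDecodeError, json.JSONDecodeError) as exc:
            raise BadInput(f"not valid JSON: {exc}") from exc
        if not isinstance(data, dict):
            raise BadInput("expected a JSON object")
        try:
            quantity = int(data["quantity"])
        except (KeyError, TypeError, ValueError) as exc:
            raise BadInput("quantity missing or not an integer") from exc
        if not 0 < quantity <= 1000:
            raise BadInput("quantity out of range")
        return {"quantity": quantity}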
Fear also makes you a better programmer: people these days are way too willing to take on cavalier amounts of risk and then offload deciding whether it worked or not onto others in code review, onto a stifling number of unit tests, and even--most frustratingly--onto the ability to scope or roll back changes before they affect "too many" users. The reality is that the number of people whose photo library or bank account--or even merely whose birthday party--it is "acceptable" to ruin is 0, not a supposedly-negligible 0.0001% of your billion users :(... that's a thousand people whose lives you have affected, and they deserve at least some kind of personal apology from the engineers who failed them! My entire job, and most of the money I have made in my life, has come from following behind "fearless" programmers and picking up after them: tearing open holes, protecting user funds, and even winning a (giant) bug bounty from a company that trusted their tests too much and leaned heavily into "move fast and break things" :/. You should have courage to stand up to fear, but you should never discard it or think you have somehow outsmarted it: that fear is your best defense against hubris, and it also shows you have empathy for those who rely on the correctness of your work.
No, fear (within bounds) is very much a necessity for a good programmer. It makes him/her think hard before doing anything. As the saying goes, "there is a fine line between bravery and stupidity". There is nothing worse than some smart-aleck programmer being all gung-ho and willy-nilly refactoring/rewriting code as they see fit when they don't understand the complete system yet. Only if the entire system design can be held in your head (possible if you were part of the original design team and stayed with the product through its evolution) can you afford to be truly "fearless".
Especially true if you are going to be touching prod. Part of being a junior I think is hearing a few horror stories, hopefully enough to make you stop and think before pressing enter on that rm command.
I'm surprised nobody mentioned types or type systems as one of the best solutions for eliminating fear. Out of all the systems I worked on, the ones that were changed fearlessly were the ones written in Haskell and PureScript. That was before Rust became so popular. These systems were changed fearlessly and we had very minimal tests. The code reviews were much easier to perform, as you don't tend to check whether the code might throw exceptions; you just need to think about the business logic.
These days I'd argue that Rust is at the same level as Haskell and PureScript in fearlessness.
Some languages that remove certain fears:
- Kotlin: removes the fear of null pointer exceptions
- Go: removes the fear of forgetting about an error that a function returns
- Languages with ADTs (Haskell, Rust, etc.): remove the fear of missing a case
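The "missing a case" fear can be approximated even in Python when a type checker is in the loop; here is a sketch using a tagged union and typing.assert_never (Python 3.11+, or typing_extensions on older versions). If a new payment variant is added to the union, mypy or pyright will flag the unhandled branch; the example types are invented.

    from dataclasses import dataclass
    from typing import Union, assert_never

    @dataclass
    class Card:
        number: str

    @dataclass
    class BankTransfer:
        iban: str

    Payment = Union[Card, BankTransfer]

    def describe(payment: Payment) -> str:
        if isinstance(payment, Card):
            return f"card ending {payment.number[-4:]}"
        if isinstance(payment, BankTransfer):
            return f"transfer from {payment.iban}"
        # a type checker accepts this line only if every variant is handled above
        assert_never(payment)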
At an org I worked at, we (the whole org) were notorious (internally) for just not being able to execute big projects. In late 2022, two of the top-level big-priority initiatives got "pivoted" (eyeroll) because they just weren't "going". And allegedly a big part of the problem was that nobody wanted to admit that things weren't going just swell on these big projects, so they gussied up their status reports, and then, when this was done systemically across every person in the entire project, you got a project that looked real good until it mysteriously missed deadlines, and after enough of that, finally the curtain pulled back.
Now, management and leadership were most affronted by the state of affairs, I assure you, and many words were spent extolling virtues and lambasting vices and sin. But let me assure you, it's not that they love eloquent speech, they were simply not allowed to say the real reason for any of it because the real reason was a trait of the org that the founder thought was a key to their success.
They run a meritocracy, you see, and a pay for performance comp philosophy. They run performance reviews very tight, and getting a passing mark is publicly said to be something to be proud of. The natural result being that everyone lives in constant fear, and being put on a big risky project is a great way to not get to vest your whole equity grant. So everyone plays defensively, the smartest people dodge the hard/big projects, and so on and so on.
And they'll never fix it because they are not allowed to. How tragic.
In my experience, fear serves as a double-edged sword in software development. While it can inhibit us from making necessary changes, it also acts as a powerful motivator. The fear of introducing new bugs drives us to conduct thorough testing and think critically about our code. Moreover, the fear of burdening others or causing system outages reinforces the importance of careful consideration and caution.
The article reminded me of the military's emphasis on maintaining a healthy level of fear to prevent complacency.
When working on new projects, there's room for bold moves and experimentation.
However, in existing large-scale systems, the impact of our actions extends beyond ourselves to the entire team and company. Let's avoid cowboy coding and prioritize stability and reliability.
A fear not mentioned directly in this post that definitely makes you a worse programmer is fear of (perceived) individual failure. I've dealt with (usually junior, but not always) programmers who shied away from difficult tasks or complex parts of a codebase. It seemed like they were afraid of failing and being seen failing, and preferred to select tasks they knew they could handle. A natural outcome of this is that you end up with big important parts of your codebase that are only understood by a handful of people, because they're scary. The fact that we had tons of automated tests didn't fix this.
You can partly address this by trying to make sure every part of your codebase is easy to understand, but sometimes your code is just going to be complex. That becomes an education problem, you have to work with these developers and coax them into confronting difficult chunks of code and help them develop the skills they need for understanding.
The same applies for development/debugging techniques. I walked one developer of equal seniority through using WinDbg once, because javascript he wrote was causing IE6 to crash. It was my first time doing it, so the role I had to play was the "let's just try things and see if we can get anywhere, there's no harm in failing" facilitator. Better to try something new than to give up. We didn't come away from the exercise with clear answers, but we had learned some useful information in the process by exploring.
While some languages are safe or safer now, the concerns of correctness and robustness are not solvable by unit testing or code coverage alone. Other holistic layers of smoke testing, property testing/fuzzing, integration testing, and user acceptance testing (UAT) must be included. Robustness means making the code defensible and easy to understand, test, refactor, and modify.
Where there is the most deafening silence is in the area of proving correctness of the codebase and of the resulting binaries. seL4, KLEE, fp, and Coq ran in the right direction with this in some aspects, but formal verification still hasn't become a standard practice because the level of effort is still costly and the tools to accomplish it aren't readily available.
I loved Julia's blogs as a young dev, they were like watching Star Trek TNG episodes. All hope and naive solutions. I remember reading this one especially back then.
Then you learn the tests are a monster stack themselves, often taking more effort than the actual features they are guarding.
Then you are part of one of these 'blameless postmortems' and realise everyone has an opinion and there are now 80 outcome actions, half of them complete horse shit but they are high priority now and jammed into your sprints for the next 6 weeks.
I wouldn't say I have no fear reflex at all, but I've definitely noticed I have far less of it than is average in this industry. My reasoning is that, even if what I try fails utterly, I myself probably learned a lot trying to pull it off, and since my aim has always been to maximize mechanical sympathy, that's hard to write off as an unmitigated loss.
As with all tools the goal is to be able to use the tool without fear.
Remember the first weeks of coding C? It took me 4 years to stop breaking into a cold sweat just thinking about using it: the compiler didn't inform you about a typo, you had to roll back the code all the time to fix problems, and you had to learn assembly (in x86, ARM and now RISC-V).
And that was without deadlines or any delivery pressure.
Today I realize C is for mad people, but I learned to respect the thing while no longer being afraid to use it in a real scenario.
Fear makes you worse at everything but running away from something. Also, being cautious is not the same as being fearful, the same way that braveness is not impulsivity (or stupidity).
It’s a matter of degree. An uneasiness about writing code you’re not certain is correct, and being apprehensive of code possibly causing faulty behavior in certain cases, is a good thing.
Incidentally, AI not having that kind of feeling could be a drawback when it comes to creating code.
> If you’re scared of making changes, you can’t make something dramatically better, or do that big code cleanup.
Funny, reading the headline I thought the article was going to be about the polar opposite. Over time, I've found the headline to be absolutely true, but I have also found that many invest in tooling and overabundant testing out of fear.
eg, "What if we need to recreate our entire stack from absolute scratch?"
I mean yeah, from that perspective terraform is totally useful, but what's next, automate the entire creation of the company, including customer acquisition and hiring? How often do you really need to recreate everything from scratch?
As a more junior engineer I used to have the confidence that things just wouldn't break, or at least if they broke it wouldn't be such a big deal. More often than not I was right. As a senior engineer, I find myself just pushing my junior engineer tenets. Usually everything will be fine, and even if it does break, we can fix it.
Write tests and invest in tooling if they're helpful. If they're not, don't be afraid to just write code by ssh'ing into a server.