If you're struggling to judge the engineering culture of a company that you're considering joining, consider this indicative of a poor one. It isn't definitive, but it's something you should ask about and probe further. Ask to see their CI dashboard and PR comments over the last few days. When they talk about Agile, ask what _engineering_ techniques (not process!) they leverage. These things will tell you if you're joining a GM or a Toyota; a company that sees quality and efficiency as opposing forces, or one that sees them as inseparable.
When it comes to tests, there are two types of people: those who know how to write tests, and those who think they're inefficient. If I had to guess what happened here, I'd say: the company had a lack of people who knew how to write effective tests combined with a lack of mentoring.
That's why you ask to see recent PR comments and find out if they do pair programming. Because these two things are decisive factors in a good engineering culture.
I'm convinced that unit tests don't usually find bugs. IMO, most bugs are edge cases that were an oversight in the design. If the dev didn't handle the case in code they're not going to know to test for it. Fuzzing is a much better approach.
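A dedicated fuzzer (AFL, libFuzzer) or a property-based tool does this better, but the core idea fits in a few lines. This is a minimal sketch where `parse_query` is a hypothetical function under test: instead of hand-picked cases, throw thousands of random inputs at it and check invariants that must hold for any input.

```python
import random
import string

def parse_query(qs: str) -> dict:
    """Hypothetical function under test: parse 'a=1&b=2' into a dict."""
    pairs = {}
    for part in qs.split("&"):
        if "=" in part:
            key, _, value = part.partition("=")
            pairs[key] = value
    return pairs

# Fuzz loop: random strings over a hostile alphabet, checking invariants
# rather than specific outputs the author already had in mind.
random.seed(0)
alphabet = string.ascii_letters + "&=%"
for _ in range(10_000):
    n = random.randrange(0, 30)
    s = "".join(random.choice(alphabet) for _ in range(n))
    result = parse_query(s)                    # invariant 1: never raises
    assert all("=" not in k for k in result)   # invariant 2: keys hold no '='
```

The payoff is exactly the edge-case coverage described above: the fuzzer explores inputs the developer never thought to handle, so it can surface oversights a hand-written unit test cannot.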
At my current position I have the opportunity to work with two large codebases, built by different teams in different offices. One of the projects has ~70% code coverage; the other doesn't have a single test. Exposure to both of these systems really bent my opinion on unit tests, and it has not recovered.
The project with high code coverage is basically shit and has so many bugs that we regularly cull anything marked less than "medium" severity as "not worth fixing". This project was written by a team that loves "patterns", so you can find all sorts of gems like Aspect-Oriented Programming, CQRS, N-tier, etc. well mixed into a featureless grey Java EE goo. We get so many bug reports that it's someone's job to go through them.
The other project with no tests is a dream to work on. Not a single file over a few hundred lines, everything linted and well documented. Almost no methods that don't fit on the screen, no recursion. No bullshit "layering" or "patterns". I can't remember the last time we had a bug report, since our monitoring picks up any exception client- and server-side. Every bug I've worked on was identified by our monitoring and fixed before anyone noticed.
What's the difference between the teams that developed these vastly different applications? I've worked with both teams for a while, and honestly, the engineers that wrote no tests are of far higher caliber. They use Linux at home, have been programming for as long as they can remember, hack assembler on the weekends and 3D print random useless objects they could easily buy. The other team went to school to program, and they do it because it pays the bills. Most of the bad programmers know what they're doing is wrong, but they do it anyway so they can pad their resume with more crap and tell the boss how great they are for adding machine learning to the help screen that nobody has ever opened.
If your developers are great then tests would hardly fail and be fairly useless, and if they're terrible tests don't save you. Maybe there's some middle ground if you have a mixed team or a bunch of mediocre devs?
Anecdotal: I once helped out a team that was writing a Maven plugin for running static checks on some JS code during the build. There was already a test suite with a bunch of test code. As my stuff was fairly complicated, and I have a habit of writing unit tests for such code, I added a bunch. Fast forward a year and a half: I was greeted with a mail saying there was a bug in it. I had to fight the better part of a day to nail it down: first because I was no longer familiar with the code, and second because a bunch of stuff had been added in the meantime. I fixed it and thought it would be a good idea to add a test, as it was a nasty corner case. I headed for the natural place where the test would go and found -- exactly the test I was going to write, nicely commented out. A quick check with Git revealed that I had added this test initially, and that it was commented out when the new feature causing the bug was added. Firing up git blame was next... This is why I am fond of having tests: you get stomped on if you break something, at least if your test suite is worth its salt.
I like your story, but I find it amusing that `git blame` has such an appropriate name.
Interestingly, "svn annotate" had 2 aliases: "svn blame" and "svn praise". But git didn't add a "praise" alias, just "blame". I actually almost submitted a PR to add "git praise" one time.
> I'm convinced that unit tests don't usually find bugs.
They don't; they test whether the API contract the developer had in mind is still valid.
> IMO, most bugs are edge cases that were an oversight in the design. If the dev didn't handle the case in code they're not going to know to test for it.
You don't write tests to find bugs (in 98% of cases), but you can write tests for bugs found.
> Fuzzing is a much better approach.
If you're writing an I/O-intensive thing, such as a JSON parser, then yes. For the 80% which is CRUD, probably not.
> The project with high code coverage is basically shit and has so many bugs that we regularly cull anything marked less than "medium" severity as "not worth fixing". This project was written by a team that loves "patterns", so you can find all sorts of gems like Aspect-Oriented Programming, CQRS, N-tier, etc. well mixed into a featureless grey Java EE goo. We get so many bug reports that it's someone's job to go through them.
You are blaming tests for bad design choices. With the patterns you mentioned, unit tests only get you so far; integration tests are what help you prevent bad deployments.
> The other project with no tests is a dream to work on. Not a single file over a few hundred lines, everything linted and well documented. Almost no methods that don't fit on the screen, no recursion. No bullshit "layering" or "patterns". I can't remember the last time we had a bug report, since our monitoring picks up any exception client- and server-side. Every bug I've worked on was identified by our monitoring and fixed before anyone noticed.
So how many exceptions were raised due to bad deploys? Code review only gets you so far.
> If your developers are great then tests would hardly fail and be fairly useless, and if they're terrible tests don't save you.
Failing tests don't have anything to do with devs being "great" or not. Developers must be able to quickly test the system without manual work, in order to be more effective and ship new features faster. If the tests are one-sided (only unit tests, or only integration tests), then this will get you only so far, but it still gets you that far.
Don't abandon good development practices only because you saw a terrible Java EE application.
Tests are a pattern. And patterns are the bread and butter of the mediocre. That's not to say that patterns or tests are bad, but high-calibre guys know when to use which tool. As a tool, unit testing is almost useless.
Low calibre guys don't have any feel for what they're doing. They just use the tools and patterns they were taught to use. All the time. This goes from engineers to managers to other disciplines.
I've seen people on a factory floor treating my test instructions for a device I built as some kind of gospel. I had a new girl who had no idea I had designed said gadget telling me off for not doing the testing exactly the way the instruction manual I wrote says.
The same thing happened with patterns and unit tests. You have hordes of stupid people following the mantra to the letter because they don't actually understand the intent. Any workplace where testing is part of their 'culture' signals to me that it's full of mediocre devs who were whipped into some kind of productivity by overbearing use of patterns. It's a good way to get work done with mediocre devs, but good devs are just stifled by it and avoid places that force it.
For more complex items, I'm much more interested in higher level black-box integration tests.
Having an expected/input output set when writing something like a parser is standard practice. Turning that set into unit tests is worthless for a few reasons.
1: You will design your code to make them all pass. A unit test is useless if it always passes. When your test framework comes back saying x/x (100%) of tests have passed, you are receiving ZERO information as to the validity of your system.
2: You wrote the unit tests with the same assumptions, biases, and limitations as the code they're testing. If you have a fundamental misunderstanding of what the system should do, it will manifest in both the code and the test. This is true of most unit tests - they are tautological.
3: While doing all of the above and achieving almost zero additional utility, you had to fragment your logic in a way that easily yields itself to unit testing. More than likely, that's not the most readable or understandable way said code could have been written. You sacrificed clarity for unit testability. Metrics like test code coverage unintentionally steer developers to writing unreadable tangled code.
The only use case for unit testing here would be if this parser was a component that gets frequently updated by many people or a component that gets implemented anew for different configurations. But at this point I'm just talking about regression testing, and there are many ways to do that other than unit testing.
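Point 2 above (tests sharing the code's biases) can be made concrete with a toy example. `apply_discount` is a hypothetical function; the point is that the test's expected value is computed with the same formula being tested, so the test can never disagree with the implementation:

```python
def apply_discount(price: float, rate: float) -> float:
    """Hypothetical function under test."""
    return price * (1 - rate)

def test_apply_discount():
    price, rate = 100.0, 0.2
    # Tautological: the expected value restates the implementation,
    # so even a wrong formula would pass this test.
    assert apply_discount(price, rate) == price * (1 - rate)

test_apply_discount()
```

A non-tautological version would assert an independently derived value (e.g. `assert apply_discount(100.0, 0.2) == 80.0`), which at least requires the author to compute the answer by a second route.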
> While doing all of the above and achieving almost zero additional utility, you had to fragment your logic in a way that easily yields itself to unit testing
Another way to consider it is that unit testing forces you to structure your code to be more composable, which is a win. The amount of intrusive access/changes you need to make for test code is language-dependent.
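A small sketch of that composability point, with hypothetical names: the logic is extracted into a pure function that a test can call directly, while the I/O shell takes its collaborator as a parameter so tests can substitute a fake.

```python
def summarize(rows):
    """Pure logic, trivially unit-testable in isolation."""
    total = sum(r["amount"] for r in rows)
    return {"count": len(rows), "total": total}

def send_report(rows, mailer):
    """Thin I/O shell; the mailer is injected, so tests never send mail."""
    summary = summarize(rows)
    mailer.send(f"{summary['count']} rows, total {summary['total']}")

def test_summarize():
    assert summarize([{"amount": 3}, {"amount": 4}]) == {"count": 2, "total": 7}

test_summarize()
```

Whether this split is "fragmenting the logic" or "better structure" is exactly the disagreement in this thread; the shape itself is the same either way.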
> The only use case for unit testing here would be if this parser was a component that gets frequently updated by many people or a component that gets implemented anew for different configurations. But at this point I'm just talking about regression testing, and there are many ways to do that other than unit testing.
And yet successful large-scale projects like LLVM use unit-testing. Not exclusively but it's a part of their overall strategy to ensure code quality. Sure, for very small-scale projects with a few constant team members it can be overkill. Those aren't particularly interesting scenarios because you're facing fewer organizational challenges. The fact of the matter is that despite all the hand-wringing about how it's not useful, unit tests are the inevitable addition to any codebase that has a non-trivial number of contributors, changes and/or lines of code.
For URL parsing, some runtimes/frameworks have that thing already implemented. E.g. in .NET the Uri class allows getting scheme/host/port/path/segments, and there’s a separate ParseQueryString utility method to parse query part of the URI.
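For comparison, Python's standard library offers the same facility, so there is rarely a reason to hand-roll (or hand-test) a URL parser there either:

```python
from urllib.parse import urlparse, parse_qs

# urlparse splits the URL into components; parse_qs handles the query string.
u = urlparse("https://example.com:8080/search?q=tests&page=2")
assert u.scheme == "https"
assert u.hostname == "example.com"
assert u.port == 8080
assert u.path == "/search"
assert parse_qs(u.query) == {"q": ["tests"], "page": ["2"]}
```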
To ensure a class conforms to an interface, the majority of strongly-typed OO languages have interfaces in their type systems. If you use that but fail to implement an interface or some part of it, your code just won't compile.
Tests are not a replacement for good developers, they are just a tool for contract validation and a regression safety net.
Running unit tests is hardly quick, especially if you have to compile them. End-to-end tests are even worse in this regard.
> They don't; they test whether the API contract the developer had in mind is still valid.
If you're always breaking the API, then that's a sign that the API is too complex and poorly designed. The API should be the closest thing you have to being set in stone. Linus Torvalds has many rants on breaking the Linux kernel's API (which, also, has no real unit tests).
It's also really easy to tell if you're breaking the API. Are you touching that API code path at this time? Then yes, you're probably breaking the API. Unless there was a preexisting bug that you are fixing (in which case, the unit test failed), you are, by definition, breaking the API, assuming your API truly does one logical, self-contained thing at a time, as any good API should.
edit: As an aside, I'd like to point out that POSIX C/C11/jQuery/etc. are littered with deprecated or discouraged API calls, such as gets() (removed outright in C11) and sprintf() (superseded by snprintf()). This is almost always the correct thing to do. Deprecate broken interfaces and create new interfaces that fix the issues. Attempting to fix broken APIs by introducing optional "modes" or parameters to an interface, or altering the response, is certain to cause bugs in the consumer of the interface.
> Don't abandon good development practices
Unit tests are a tool. There are cases where they make sense, where they are trivial to implement and benefit you greatly at the same time. Then there are cases where implementing a unit test will take an entire day with marginal benefit and the code will be entirely rewritten next year anyway (literally all web development everywhere). It doesn't make sense to spend man-months and man-years writing and maintaining unit tests when the app will get tossed out and rewritten in LatestFad Framework almost as soon as you write the test.
The benefit should be realizing that if you need an entire day to implement a unit test, you're doing something very, very wrong.
Especially if the codebase evolves as new end-user requirements are discovered over the lifetime of the project, unit tests on various corner cases can be a lifesaver.
I'm not a bad dev, honestly. The complexity of the code I have to maintain just overwhelms my working memory. And yes, without any silly patterning. Sometimes domain requirements alone are sufficiently complex to confound a person without additional safeguards.
Another level of complexity comes from functionality that is only rational to include from third-party sources. The third-party sources must be updated frequently (because the domain is complex and later versions are usually of subjectively higher quality). The unit tests are about the only thing that can tell me in a timely manner if there was a breaking change somewhere.
Yes, there is smoke testing later on, but I much prefer dealing with a few unit tests telling me they don't work rather than dealing with all the ruckus from bugs caught closer to the end user.
Dogmatic unit testing is silly. Testing should focus on pinning down the critical end-user constraints and covering as much of the functionality visible to the end user as possible. So I would not necessarily focus on testing individual methods unless they are obviously tricky.
In an organization where everybody can code anywhere, I would enforce per-method testing, though. Sometimes a succinct and lucid unit test is the best possible documentation.
Two quick points.
1 - I've added fuzz testing to my arsenal and find it a good value, especially if you're blasting through a new project.
2 - Good monitoring (logging, metrics) trumps tests when it comes to quality, both for being _much_ easier to do and in terms of absolute worth.
That said, testing is primarily a design tool (aka, identifying tight coupling). The more you do it, the more you learn from it, the less value you get from it, because you inherently start to program better (especially true if you're working with statically typed languages, where coupling tends to be more problematic). There are secondary benefits though: regression, documentation, onboarding.
I think one key difference between what you describe and my own situation is that my small team manages a lot of different projects. Maybe their total size is the same as your two, but our ability to confidently alter a project we haven't looked at in months is largely due to our test coverage. I agree that then and there, I get less benefits from tests for the project that I'm currently focused on.
The nut I haven't cracked yet is real integration testing between systems. This seems like something everyone is struggling with, and it's both increasingly critical (as we move to more and more services) and increasingly difficult. My "solution" to this hasn't been testing, but rather: use typed messaging (protocol buffers) and use Elixir (the process isolation lets you have a lot of the same wins as SOA without the drawbacks, but it isn't always a solution, obviously).
If you're interested, I've written more about this: http://openmymind.net/A-Decade-Of-Unit-Testing/
Unit tests "identify" tight coupling because they are themselves a form of tight coupling.
I'm not denying that the pain of having one form of tight coupling interact with another can be used to "guide" the design of the code. It can be. I've done it.
I'm simply pointing out that you're using tight couplings (between unit test and code) to highlight other tight couplings (between code and code).
I use my eyes to detect tight couplings in code that deserve a refactor, because that's cheaper than writing 10,000 lines of unit test code to "guide design". Each to their own, though. I know my view isn't a popular one.
They are also completely orthogonal to patterns and layers and aspects and J2EE and what not. All that has nothing to do with tests at all.
I routinely call myself a proponent of BDT (Bug Driven Tests) over TDD for much the same reason. That said, tests are HUGELY beneficial for guarding against regressions and ensuring effective refactors. Anecdotal, but on my current project tests helped us:
* Catch (new) bugs/changes in behavior when upgrading libraries.
* Rewrite core components from the ground up with a high degree of confidence in backwards compatibility.
* Refactor our Object model completely for better async programming patterns.
I don't think tests are particularly good at guarding against future bugs in new features, as your comment about fuzzing squarely hits on.
But I DO think tests are good at catching regressions and improving confidence in the effectiveness of fundamental changes in design or underlying utilities version to version.
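The BDT idea above reduces to a simple habit: when a bug is found, pin it with a failing test before fixing it. A sketch, where `slugify` and the ticket number are hypothetical:

```python
import re

def slugify(title: str) -> str:
    """Hypothetical function that had a bug: punctuation-only gaps
    used to leave stray leading/trailing hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")  # the fix

def test_slugify_punctuation_regression():
    # Regression test for hypothetical ticket BUG-1432; the link back to
    # the tracker carries the context a future reader will need.
    assert slugify("...Hello, World!") == "hello-world"

test_slugify_punctuation_regression()
```

Such a test rarely finds the bug, but it guarantees the bug stays fixed through later refactors and library upgrades, which is the regression-guarding benefit listed above.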
Personally, I say write tests when it makes development quicker or serves as a good example / spec.
These days I follow three "good practice" rules, all of which are violated when you follow common unit testing practice:
* Only put tests on the edge of a project. If you feel like you need lower level test than that then either a) you don't or b) architecturally, you need to break that code off into a different project.
* Test as realistically as possible. That means if your app uses a database, test against the real database - same version as in prod, data that's as close to prod as possible. Where speed conflicts with realism, bias toward realism.
* Isolate and mock everything in your tests that has caused test indeterminism - e.g. date/time, calls to external servers that go down, etc.
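The third rule can be sketched in a few lines. `is_expired` is a hypothetical example: the clock is a parameter with a real default, so production callers are unchanged but tests can pass a fixed time and stay deterministic.

```python
from datetime import datetime, timezone

def is_expired(deadline, now=None):
    """Real callers omit `now`; tests inject a fixed time."""
    now = now or datetime.now(timezone.utc)
    return now > deadline

def test_is_expired():
    deadline = datetime(2024, 1, 1, tzinfo=timezone.utc)
    assert is_expired(deadline, now=datetime(2024, 1, 2, tzinfo=timezone.utc))
    assert not is_expired(deadline, now=datetime(2023, 12, 31, tzinfo=timezone.utc))

test_is_expired()
```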
I mostly agree with your point, but I think this is too much. Projects should be made up of decoupled modules (units?) with well-defined interfaces. These should be stable and tested, mostly without mocking required.
The larger your project the more important this is.
That goes without saying.
Nonetheless, if it's a very decoupled module with a very well defined, relatively unchanging interface which you could surround with tests with the bare minimum of mocking - to me that says that it probably makes sense to put it in a project by itself.
>The larger your project the more important this is.
The larger a project gets the more I want to break it up.
I think the downvotes are largely dogma driven - people are quite attached to unit testing, especially for the situations where it worked for them and largely see them in opposition to "no testing" not a "different kind of test".
At work, I've rejected many merge requests with the comment "this unit test is equivalent to verifying a checksum of the method source". It's so frustrating that people still think it's necessary to write tests like this.
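A hypothetical reconstruction of the kind of test being rejected: every collaborator is mocked and the assertions just restate the method body line by line, so the test only "verifies a checksum of the source". All names here are invented for illustration.

```python
from unittest.mock import MagicMock

def charge_customer(customer_id, gateway, ledger):
    gateway.charge(customer_id)
    ledger.record(customer_id)

def test_charge_customer():
    gateway, ledger = MagicMock(), MagicMock()
    charge_customer(42, gateway, ledger)
    # These assertions mirror the implementation exactly: any refactor
    # breaks them, and no actual behavior is checked.
    gateway.charge.assert_called_once_with(42)
    ledger.record.assert_called_once_with(42)

test_charge_customer()
```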
To understand how unit tests are useful, look at how code is developed. Typically there's a write/compile/run cycle that you iterate on as you write code (or you do it in that order if you're a coding god). Then you test it with some sample inputs to validate that it works correctly. The "test it with some sample inputs" step is simply what a unit test is. A test is frequently a simpler environment in which to do this, as you can control the inputs in a more fine-grained manner than you might otherwise.

If you submit the test, then at the very least reviewers can have more confidence in the quality of your code, or perhaps spot corner cases that were missed in your testing; devs in my experience are horrible at communicating exactly what was tested in a change (moreover, they tend to give high-level descriptions that omit implicit information, whereas unit tests do not). Once you get it in, pre-submit validation enforces that someone else can't break the assumptions you've made. This is a double-edged sword, because sometimes you have to rewrite sections of code in ways that invalidate a lot of unit tests.

However, the true value-add of unit tests is much longer-term. When you fix a bug, you write a regression test so that the bug won't resurface as you keep developing. Importantly, you provide a comment that links to the entry in the bug system you're using, which can supply further contextual information about the bug.
Unit tests aren't free as they can be over-engineered to the point of basically being another parallel code base to maintain or they can be over-specified and duplicated so that minor alterations causes a cascading sequence of failures. However, for complex projects with lots of moving parts it can be used to obtain the super useful result of always being able to guarantee a minimum level of quality before you hand off to a manual QA process. Moreover, the unit tests can serve a very useful role of on-boarding less experienced engineers more quickly (even quality coders take time to ramp up) or handing off the software to less motivated/inexperienced/lower quality contractors if the SW has transitioned into maintenance mode. Additionally, code reviews can be hit or miss with respect to catching issues so automated tests ensure that developers can focus on other higher-level discussions rather than figuring out if the code works.
Sure, unit tests can go insane, with mocks/stubs everywhere to the point of absurdity. I prefer to keep test code minimal and only use mocks/stubs when absolutely necessary because the test environment has different needs (e.g. not sending e-mails, "shutdown" meaning something else, etc.). There's no free lunch, but I have yet to see a decent combination of well-thought-out automation and unit tests fail to maintain quality over time (the pre-submit automation part is a 100% prerequisite for unit tests to be useful).
One of the things that really sold me on unit tests for Django development was realising that it was quicker to write a test than to open a shell, import what I was working on and run the code manually.
There are several things you as a software engineer are expected to do as a part of your job: write code, write tests, participate in code reviews, ensure successful deployment, work effectively with various groups, etc.
It's really simple: state the job requirements up front in the position description and during the hiring process. Make testing part of the code review process, and use it as an opportunity to educate about what makes a good test. Make it part of the performance review and tie raises to it (and, if it goes on long enough, continued employment).
Need to write tests for existing untested areas of the code? Have the team create a developer rotation so they dedicate someone to it for part of each sprint.
Even a few engineers on the team who don't write tests can make the product as unreliable, from the customer's point of view, as it would be if none of the engineers wrote tests.
At my current company, test coverage is taken seriously as a job requirement, and it is considered during performance reviews. Consequently, the test coverage is pretty darn good.
I'm also unsure whether sitting one developer down in a corner for a segment of each sprint and dedicating them exclusively to testing legacy code is valuable. You should be testing legacy code as you come across it: make sure you harness it properly, make your modifications, and continue to the next task. If you are spending time on something that doesn't complete a bug or a feature, you're spending valuable time testing something that may be completely removed in the future.
I've only ever had to do a test rotation once or twice, and it was like pulling the rip cord on a lawnmower. Requires effort at first and then it becomes self-sustaining over time. It establishes or affirms a culture of testing. The rotation doesn't even need to last long.
You should know which portions of the code are here to stay and which are nearing their end of life. Naturally, you want to spend your time where it will have maximum payoff.
As for the latter point, I guess that also depends on context. If you work for a consulting company you may not have full knowledge of what the code base is, or even have the direction to be touching some things. If you are developing software for your own company, I do agree you need to figure these things out, and maybe having a developer dedicated to it each sprint isn't a bad idea. I overstepped my bounds with that comment; I have never worked for a company that sells software it makes itself, I've only ever done consulting, and I sometimes forget about alternative perspectives, so sorry about that.
Of course you combine this with managerial support and coaching around task planning and messaging to other groups.
I've been a consultant, too, and I agree that it can sometimes (for some clients) be difficult to make the case for testing in that environment.
I just wanted to say that this was beautifully stated. I've been looking for better words to explain this concept to the people around me.
Definition of "cost of quality": It's a term that's widely used – and widely misunderstood. The "cost of quality" isn't the price of creating a quality product or service. It's the cost of NOT creating a quality product or service. Every time work is redone, the cost of quality increases. Obvious examples include: ...
In my GitHuby life, I write tests obsessively. In my enterprisey-softwarey life I don't, because there is no sensible way to do it.
I mean, we develop database-heavy code. Should we never test the code running against the database? That would be a poor choice, since we would lose a lot of coverage.
What we did instead was transactional tests. In PostgreSQL terms, that means we use SAVEPOINTs to wrap each test inside a savepoint and then roll back to a sane state, never committing anything to the database.
With DI this is fairly easy, since we can just replace the database pool with a single connection using the pg JDBC driver, which can insert these savepoints.
The test suite runs in ~4 minutes (full Scala compile + a big test suite, ~65%+ coverage -- we started late) in the best case, and can be slow if we have cache misses (dependencies need to be resolved, which is awkwardly slow with Scala/sbt).
We use Elixir, so we get nice features like the Ecto sandbox with async tests out of the box.
But strategically applied? I think it's a pretty big win. Specifically, I'm talking about onboarding people (onto the company or a project) and working with interns and juniors. Doesn't even have to be a senior and a junior, two juniors working together is significant. And it isn't just about information flowing from SR -> JR, the benefits are tangible the other way.
I'd say at a minimum, having everyone spend 4 hours a week pair programming should be a goal to try.
Horses for courses - pairing works really well for some teams and is painful for others. The presence (or lack) of pairing in a company wouldn't be a signal to me, rather I'd take it as a good sign if the team is fine with pairing whenever it makes sense but doesn't have any specific rules about it.
Pair programming is like nails on a chalkboard to me and at least a plurality of developers generally, based on what I’ve experienced personally and read online. An expectation that I’d do 4 hours a week of it would have me hunting for a new job immediately.
It’s different in kind to other practices like mandatory code reviews or minimum test coverage. Organizations are free to select for compatible employees, of course, but it’s totally unrelated to the health of the engineering org in any dimension.
I'm old, I learned about sql injection and hash salts and coupling and testing by being awful at it for decades. How do I transfer that knowledge so that a 26 year old can be where I was at 32 if not working closely with them, using their code as the foundation for that knowledge transfer?
What I’m not happy to do, and what pair programming is, as far as I have seen, is to sit down with another engineer and figure out a problem together. In addition to being simply incompatible with my mind’s problem-solving faculties, it in my experience produces lowest-common-denominator software that strips any sense of self-satisfaction or -actualization from my work. No thank you.
You pick up a goddamn book, man!
> How do I transfer that knowledge so that a 26 year old can be where I was at 32
You tell them to pick up a goddamn book, man!
Sorry for being curt. But it's the professional responsibility of developers to educate themselves. Some people think they can cram on binary trees in CS, use that limited knowledge to BS the interview, and then coast into working on the transaction system at a bank or whatever.
If a company wants to pay you to mentor a junior, that's one thing. And should be explicitly stated as such. I'm willing to help just about anyone that asks. But if I find myself showing a developer how the compiler works (or a compiler works), or the syntax of our programming language, or basic things that Google knows, I'm going to walk away from that flaming wreck of a company. I've worked with developers that hunt-and-peck typed before. You ever have to explain syntax to a guy that can barely work a keyboard? Let's just say, my threshold for putting up with BS is dramatically lower now.
They don't avoid sql injection because they think it's bad, they avoid it because they're adapting your code. When they're asked to make a page that does X, they just copy a page that almost does X somewhere else in the system. Maybe one day they read a list of the top 10 vulnerabilities and realize why you did it that way.
It's why loads of developers can add new functionality just fine, but ask them to build a whole new app from scratch and you will get an incomprehensible mess.
Of course, this doesn't work too well when your code base is a mess of competing styles, etc.
Not that I'm saying some additional help wouldn't be good, but a significant amount can be learnt alone, with no guidance, from the code base.
Essential for onboarding and cross-skilling. But mandatory for everything? An awful idea.
Not everyone learns or benefits by watching someone else type.
This is fantastic advice.
Ends justify the means. This resonates with me; multiple teams I've left suffered from irrational exuberance about technologies like CoffeeScript / MongoDB / etc. Anyone who has played with a functional language / "NoSQL" / etc. on a weekend can experience this euphoria without the toxicity of churn to their company. It's patronizing to people who understand where things are actually headed. This is one of the signals that I look for.
>> [article] I’ve found it a real struggle to get our team to adopt writing tests.
> If you're struggling to judge the engineering culture of a company that you're considering joining, consider this indicative of a poor one. It isn't definitive, but it's something you should ask about and probe further.
After reading the article, parent's comment is spot on in multiple dimensions... this article is full of red flags to look for when joining a team. The depressing thing is: if your manager's manager doesn't care... and your manager doesn't care... and you care, well.. then.. nobody cares.
"In particular, I wanted to explain the quite different management experiences encountered in System/360 hardware development and OS/360 software development. This book is a belated answer to Tom Watson's probing questions as to why programming is hard to manage."
TDD helps focus and structure some developers, but rarely does it save time. In situations where everyone is being pushed too hard for too long, saving time is more important. I would bet documentation is also a low priority.
That is a sign of bad culture, both in engineering and product. Whether it's starting a green-field project in a scrappy startup or building yet-another-feature for an established product, if the estimates are constantly redlining everyone's available time and never giving thought to maintenance, QA, code review, and testing, then of course it will always feel like that. When estimates include that stuff and you show product you can ship features more reliably, more often, in the long run, they buy into that. If they don't buy into that, they are either very delusional, have only worked with absolutely perfect people, or they utterly do not care how many extra hours / how much stress the lack of quality causes you / the team / the company.
This waste exhausts morale.
Not true 100% of the time, but it's the right "default" mindset, because it's true the majority of the time.
If you know exactly what to write because you have done this 100s of times before, TDD will slow you down.
If you are unsure of the outcome of what you write, TDD will give you training wheels and help guide you. That may make some people quicker for a little while.
Nobody who ‘needs training wheels’ is going to get them from TDD, or do TDD in the first place.
I’m more concerned about people who think they can fly without learning to walk.
Most of us can type 60-80 wpm. Have you ever gotten close to that while coding? Typing is incidental. The very easiest part of your day. You’re right, they type less, because they’re so into the smell of their own farts that they refuse to believe their bugs are bugs, and they make other people clean up after them.
Humans are fallible. We all have bad days. We all get distracted. We all misunderstand. We all change our minds. Don’t be so sure you got it right the first time. Even if the evidence supports you. You’ll be looking at a broken piece of code soon enough that you can’t figure out how it ever worked. Sooner or later it’ll be yours.
I once had this attitude. Then, I worked with other people. It makes all the difference. My perspective shifted when I was bitten by something small when making a small patch to a foreign system because someone else didn't leave good test cases behind.
In my experience, if you are unsure of how you are going to solve a problem, writing tests only makes you slower. When you are coding exploratively, you will likely have to delete and completely rework what you did several times before you find what works. If you write tests for all but the highest level, those will just be scrapped along with the rest of the code.
But the reason we do it is because it increases quality.
Making that math work, though, seems to depend on the idea of some sort of future crisis state, where normal development is slowed way down. (You'd need to avoid a big slow down in the future, in order to balance out the continuous extra time given to testing.)
Does such a crisis lurk in the future of every development effort? Hard to know. It's certainly not the only way technology projects fail. Plenty of products have passing tests but fail to find customers.
A friend of mine just had his startup acquired, so his startup was an above-average success, and he told me 80% of what they built ended up getting scrapped.
This kinda depends on whether it is a public or private (or `__private(self):`) method. If it's private, no need to test it. But suppose that rather than using something in an existing library, you are bothering to write your own `add` function and expose it to the rest of your codebase. Wouldn't that indicate that your function is special enough that it should be tested?
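To make that concrete, here is a minimal sketch (the `add` helper and its cases are hypothetical) of what testing even a trivial public helper can buy you:

```python
# Hypothetical public helper exposed to the rest of the codebase.
def add(a, b):
    return a + b

# A handful of edge-case tests for it are cheap insurance.
assert add(2, 3) == 5
assert add(-1, 1) == 0                   # sign handling
assert add(0.1, 0.2) != 0.3              # floating-point gotcha worth pinning down
assert abs(add(0.1, 0.2) - 0.3) < 1e-9   # ...and the tolerance callers should use
```

Even a one-liner that's exposed publicly documents its contract this way, which is the point of bothering to write it yourself.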
I find this hard to believe. Do others CTOs / team leads find this to be the case?
I've been a CTO of two small startups with 3-7 developers. We've had resistance to tests at some points (myself included). We've solved it fairly simply. All pull requests (PRs) require tests. PRs are rejected immediately without tests. If a PR doesn't have tests and it is critical to get in, we open a new ticket to track adding tests. It isn't foolproof, but it does result in a high degree of test coverage.
And once developers understand how and where to write tests, they usually see the benefit quickly and want to write more tests.
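That kind of "no tests, no merge" gate can even be partially automated. A hedged sketch (the `.py` filter, `test_` naming convention, and docs exemption are all assumptions) of a check a CI job could run over a PR's changed-file list:

```python
def pr_has_tests(changed_files):
    """Return True if the PR touches no code at all, or touches at least one test file."""
    code = [f for f in changed_files if f.endswith(".py")]
    tests = [f for f in code if f.rsplit("/", 1)[-1].startswith("test_")]
    return not code or bool(tests)

# A CI job would fail the build when this returns False.
assert pr_has_tests(["src/billing.py", "tests/test_billing.py"])
assert not pr_has_tests(["src/billing.py"])   # code change, no tests: reject
assert pr_has_tests(["README.md"])            # docs-only PR: exempt
```

It's a blunt instrument, so a human override (like the "open a ticket to track adding tests" escape hatch above) is still needed for genuinely critical merges.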
That said, the biggest resistance I have found is "this feature is due in three days, I need two and a half to finish, and then we have another half to review and find bugs." In the end, the biggest issue is that we have time to test on the spot or write tests, but not both. You can scrape by with just manual testing, but I don't think anyone would ever rely on automated tests 100%.
Our larger projects are test-backed, and our largest even reaches 90% coverage, but the only reason we wrote tests for those was that we knew we would be working on them for 2-3 years and it was worth the time in that case. I wish this weren't the case, but I've found it's always the argument against automated tests in my corner of the market.
It made hiring devs fun: trying to explain to people why it was that way, over their insistence that software development doesn't work that way.
That is exactly it for 90% of agency projects. Underquoted to get the deal, a rapid development cycle that leaves the devs feeling dead, and then once that first release is out, you have maybe 1 or 2 small updates and the project is never touched again, or at least not for a year or two.
There is no world where it makes sense to write tests for these projects.
Agency for what?
Every developer/engineer should work in an agency for a while: the sheer amount of work is enormous, the lifeline of said work is short, and projects are primarily promotions and one-and-dones in many cases.
What we did at the agency I worked at was try to harvest systems from common work. Landing pages with testable base code common across projects became a content management system that supports agency specifics. Promotions/instant-win systems with common code, which could live longer than the 3-week promotion, became a prize/promotions system that ran all future promotions and improved AFTER most promotions, due to time constraints. The same happened with game systems for promotional games / advergaming, once new games and types became common or re-usable, etc.
Many times, you have to take an after-the-ship approach and harvest systems that make sense from the sheer amount of work you are doing across hundreds of projects. That is where good engineering really comes along: on subsequent systems, where the initial promotions, projects, or games/apps proved a need or served as a prototype for how to do future projects quicker and with more robust systems.
Testing and code written specifically for one campaign may be re-usable or not, but later you can harvest the good ideas and try to formulate a time-saving system for the next, including better testing and backbone/baseline libs/tools/tests, etc.
I have worked in agencies 5+ years and game studios 5+ years, and both are extremely fast paced; usually the harvesting approach is the one that is workable in very pressurized project environments like that. Initial projects/games/apps are harvested for good ideas, and the first one might even be more prototype-like, where testing/continuous integration might not fit in the schedule the first time around, or it might not even be clear what to standardize and test until multiple projects of that type play out. Starting out with verbose development on new systems/prototypes/promotions/campaigns/games might not be possible within the budget or time allowed on the first versions, as they might be throwaway after just a few weeks or months. There is a delicate balance in agencies/game studios like that, where the product and shipping on time are more important on the first go-around, as the project timeline and lifeline may be short. Subsequent projects that fit that form are where improvements can be made.
Nowadays I work on a single long-running legacy project where tests make sense. Back then, I read a lot about how testing was the "right thing to do." But I also realized that most of the time (a) the client wasn't going to be willing to pay for the tests, and (b) odds were that once we launched the product, that would be the last time I ever looked at it.
Maintenance will occur in five years when sales talks the client into scrapping the entire thing and rewriting it -- the client won't be willing to pay for maintenance or automated tests, but somehow sales could always sell them on a total rewrite.
I wonder if each such project is built completely from scratch. If not, the reusable parts can be improved over time, and covered by tests.
For us, it's a mixed bag. We have a CMS we use for most projects that we did not develop, but we have developed our own packages/blocks for it that are included in every project that bootstrap and whitelabel the hell out of the CMS to provide the functionality we need in every project. From a data standpoint, one of our packages replaces several dozen classes and hundreds, if not thousands, of lines of custom code in every project.
When it comes to more custom projects, specifically ones that never see public use (like a custom CRM, admin dashboard, CRUD-based system, API backend, etc.), we build using the Laravel framework which bootstraps away all of the authentication, permissions, routing, middleware, etc. and gives us a very good blank slate to work with. For these, everything is mostly from scratch, minus what we can use third-party packages for (such as the awesome Bouncer ACL). We have a front-end library that I wrote to abstract away common tasks into single line instantiations, but it's our experience that these projects are being built on a blank slate for a reason. These are the projects that may actually see tests written for as well, although not all will.
You take an existing CMS or shop software and customize it, or take a web framework and build a very customer-specific service on top of it. Most everything you can share between CMS projects is already part of that CMS.
Need to test interfacing with an SDK correctly?
Sure, patch the SDK methods and ensure they are called with the proper parameters
Also, for extra coverage, add a long running test that makes actual calls using the SDK. Run these only when lines that directly call the SDK change (and ideally there should only be a few of those).
Need to mock a system class?
Sure - Here's the saved snippet on how to do that
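Both of those patterns can be sketched with Python's `unittest.mock` (the `charge` SDK call and all names here are hypothetical stand-ins, not a real SDK):

```python
import time
from unittest.mock import MagicMock, patch

# Hypothetical wrapper around a third-party SDK call.
def bill_customer(sdk, customer_id, cents):
    return sdk.charge(customer_id, amount=cents, currency="USD")

# Pattern 1: patch the SDK and assert it was called with the proper parameters.
def test_bill_customer():
    sdk = MagicMock()
    bill_customer(sdk, "cust_42", 1999)
    sdk.charge.assert_called_once_with("cust_42", amount=1999, currency="USD")

# Pattern 2: mock a "system" facility, here the clock.
def is_expired(deadline):
    return time.time() > deadline

def test_is_expired():
    with patch("time.time", return_value=1000.0):
        assert is_expired(999.0)
        assert not is_expired(1001.0)

test_bill_customer()
test_is_expired()
```

Once snippets like these live in a shared place, writing the next test is mostly copy, rename, adjust, which is exactly the point being made about reuse across projects on the same stack.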
This of course applies only if you repeatedly access projects that use the same stack. If you don't then I understand that it can be pretty hard. But basically over time, writing tests must become easier else that's a sign that something in the process is not working correctly. Knowledge isn't being transferred. Or things aren't being done uniformly.
Ideally once you get past a certain point, testing should be just a selection of patterns from which you can choose your desired solution to implement against a given scenario.
I accept that I could be missing something here so please take what I say within the context that my thinking applies to work that can be described as technologically similar.
The only reason I brought it up was to show that we don't skip test writing entirely and the projects where we do write them, it isn't like we just wrote a test to check that "Project Name" is returned on the homepage and called it a day.
They wrote stupid test after stupid test after stupid test. Hundreds of them. Oh Em Gee. It was like that story of Mr. T, where the army sergeant punished him by telling him to go chop down trees, only to come back and find Mr. T had cut down the whole forest.
With good TDD (or at least my definition of it :-) ), the programmer is constantly thinking about branch complexity and defining "units" that have very low branch complexity. In that way you minimise the number of tests that you have to write (every branch multiplies the number of tests you need by 2). The common idea that a "unit" is a class and "unit tests" are tests that test a class in isolation is pretty poor in practice, IMHO. Rather it's the other way around (hence test driven design, not design driven tests). Classes fall out of the units that you discover. I wish I could explain it better, but after a few years of thinking about it I'm still falling short. Maybe in a few more years :-)
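A hypothetical sketch of the branch-multiplication point: two independent branches inside one "unit" force 2 x 2 test cases, while factoring them into two one-branch units needs only 2 + 2 (the shipping example and its rates are invented for illustration):

```python
# One unit with two independent branches: 4 combinations to test.
def shipping_cost(weight_kg, express):
    base = 5 if weight_kg <= 1 else 9        # branch 1: weight tier
    return base * 2 if express else base     # branch 2: delivery speed

assert shipping_cost(0.5, False) == 5
assert shipping_cost(0.5, True) == 10
assert shipping_cost(2, False) == 9
assert shipping_cost(2, True) == 18

# The same logic as two low-branch-complexity units: 2 + 2 tests cover it.
def base_rate(weight_kg):
    return 5 if weight_kg <= 1 else 9

def apply_speed(base, express):
    return base * 2 if express else base

assert base_rate(0.5) == 5 and base_rate(2) == 9
assert apply_speed(9, True) == 18 and apply_speed(9, False) == 9
```

With two branches the saving is trivial, but with n independent decisions the combined unit needs up to 2^n cases versus 2n for the factored units, which is where keeping each unit's branch complexity low pays off.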
In any case, my experience is that good TDD practitioners can write code faster than non-TDD practitioners. That's because they can use their tests to reason about the system. It's very similar to the way that Haskell programmers can use the type system to reason about their code. There is an upfront cost, but the ability to reduce overall complexity by logically deducing how it goes together more than pays off the up front cost.
But that leads us to our second caveat. If you already have code in place that wasn't TDDed, the return can be much lower. Good test systems will run in seconds because you are relying on the tests to remove a large part of the reasoning that you would otherwise have to do. You need to have it check your assumptions regularly -- normally I like to run the tests every 1-2 minutes. Clearly if it takes even 1 minute to run the tests, then I'm in big trouble. IMHO good TDD practitioners are absolutely anal about their build systems and how efficient they are. If you don't have that all set up, it's going to be a problem. On a new project, it's not a big deal for an experienced team. On legacy projects -- it will almost certainly be a big deal. Whether or not you can jury rig something to get you most of the way there will depend a lot on the situation.
So, if I were doing agency work on a legacy system... Probably I wouldn't be doing TDD either. I might still write some tests in areas where I think there is a payoff, but I would be pretty careful about picking and choosing my targets. On a greenfield project of non-toy size, though, I would definitely be doing TDD (if my teammates were also on board).
If you know exactly what you are writing, it is quicker to add your changes, jump to the next file, and add your changes there. If you are constantly checking the browser to see if what you wrote works, TDD can help.
I think you overestimate the agency project life cycle. Most of our projects are built and ready for client review in 2-3 weeks total. Once the client makes a few days worth of changes, the project is shipped and we likely do not look at it again for another year or three.
That said, there are always long-running projects and those are the ones you try to include tests in.
IME, there are far too many "senior" devs (who absolutely should know better) who never worked on any testing-heavy teams that just don't see the point. After all, there's QA, and it's not like THIS code should break THAT code in a seemingly-unrelated part of the codebase...
Sure, you can use your authority to force people, but should you?
Smart people are hard to come by, but once you have them you should let them work, and when you tell them how to do their job you are implicitly assuming that you know better. Besides, if you force them you achieve nothing but some brain-dead tests that are going to haunt you later, and getting a budget to "rewrite tests" is a fairytale.
The art here is to build a culture that embraces tests as a powerful tool, so newcomers quickly see the benefits and start to write tests in the right places, not for the sake of an artificial metric.
Besides, there are plenty of places where having high coverage is going to be a waste of time:
- throwaway prototypes,
- heavy UI code full of animations - they need to look right, which is hard to test,
- infrastructure code if you have just a few servers of a particular type,
- customer projects with unreasonable deadlines that are not going to be touched again.
So having your team that writes tests is a hard job and using PR policy won't help much.
The things that worked for me were:
- write the tests that make sense yourself in the early stages of the project,
- pair with your employees and write tests with them,
- do peer reviews and suggest what could be tested and why it makes sense.
People are resistant to change when they don't know how it benefits them directly and immediately.
My suggestions have been:
- Giving developers slight nudges every time they get frustrated with developing when tests aren't present is a good way to help them see the benefit. "Imagine how much easier it would be to write this piece of code if you had tests in places where this function calls other things."
- Enforcing it during commits (as you suggest, using PRs)
- Reminding your whole organization that while you migrate to implementing more testing that velocity of development will be impacted. This is really important, because it means people outside of the dev team also need to see the benefit.
- Eliminating "deadline cultures" first and then implementing unit testing
One of the MOP checkoff boxes: test results.
So many times you could tank someone by asking "Where are the test results" and they would have to reschedule their maintenance window. If you pissed some ops engineer off, expect the question "Where are the test results" every MOP meeting.
I like to see good coverage (say, 85%) because the act of trying to cover that much has led us to discovering some bugs that would otherwise have gone unnoticed until someone ran into them in production. But 100% line coverage is still a tiny, tiny fraction of covering all permutations of how that code is used, so I feel like trying to hit some kind of holy grail perfect coverage target over-emphasizes the value of tests. While tests can absolutely be very useful, it's the actual running code that needs to be high quality, the tests are just helpers.
Tests, like any process, should be serving you and your goals. You shouldn't be serving your processes or testing practices. This sort of un-nuanced thinking isn't indicative of a high-performing startup or CTO, IMHO. Perhaps your policies are not directly indicative of your real thoughts on the matter?
As others have said, line coverage is a misleading metric. Ideally, your tests would fully cover all _program states_, and even 100% line coverage doesn't guarantee full state coverage. If you have untested states, then the following facts are true:
- You don't have a formalized way of modeling or understanding what will happen to your program when it enters an untested state.
- You have no way to detect when a different change causes that state to behave in an undesired way.
So the answer to how many tests a PR needs: as many as needed to reduce your software's risk of failure to a minimal level... And this means failure right now and in the future, because you will likely be stuck with this code for a while. Since it's difficult to know how much a future failure will cost your company, IMO I always try to err on the side of testing as much as possible. Plus, good comprehensive tests have other benefits, such as making other changes / cleanups safer by reducing the risk that they unintentionally side-effect other code.
If a function has been statically proven to return an int, I know it will either return an int or not return at all. It can't suddenly return a hashmap at runtime, no matter what untested state it enters.
Unless you're actually writing complex tools - no, you're probably not getting a "formalized way of modeling" what happens to your program.
If somebody tells me "hey, I have to keep manually testing this and that, I'm losing a lot of time, how about I spend 2 days writing my test thing?" - I'll say Sure!
But if someone tries to convince me in the abstract - I'll be skeptical. Developer busy-work is real.
Enough tests for each of your specs. Adding new functionality to your product? Your tests should cover the cases you put in your specs. Correcting a bug? Your test should trigger it with the old code.
You can have 100% code coverage with unit testing, it will do jack-shit for your users when they enter a negative number and there was no code meant to manage this case so it never was tested.
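A tiny hypothetical Python illustration of that point: one test executes every line (100% line coverage), yet the unhandled negative-input state is silently wrong, because the missing guard is code that doesn't exist for a coverage tool to flag:

```python
def withdraw(balance, amount):
    # Bug: no guard for negative `amount`; there is no unexecuted
    # line here for a coverage report to point at.
    return balance - amount

# This single test yields 100% line coverage of `withdraw`...
assert withdraw(100, 30) == 70

# ...while the untested state still misbehaves: a negative "withdrawal"
# deposits money.
assert withdraw(100, -50) == 150
```

Coverage measures which code ran, not which inputs and states were considered, so it can't see the case the developer never thought of.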
Enough so the overall coverage doesn't go down.
What you outline seems reasonable, at least in an environment where you sometimes have hard deadlines (eg. Ticket sales for this festival go live next week). Outside of that, I'm curious what cases there are where you can have a PR which is both critical to merge and doesn't need tests. When I review a PR, I look at the tests as one way of thinking through "what edge cases have already been accounted for here?"
What I find more common is for the business to be unprepared to make lateral changes to a product. Even rational unit tests are a medium term investment. You need to spend time developing features customers don't see, and apply those tools for some time, to see quality differences. That can be difficult to justify in a number fairly normal business scenarios (low cashflow/reserves, high tech debt/regret, etc.).
To help offset the cost (and delayed benefits), I've always suggested phasing in unit test strategically. Pick a module or cross-section of the product that is suffering from bugs that customers see (i.e., affecting revenue) and add the minimum viable tests to that. Repeat as needed, and within months/years, you'll have coverage that fits the business needs well.
Consumer might not need that as much as enterprise.
- Educate your CTO.
- Just start writing tests. Consider whether you can pull this off.
- Wait for something to go wrong where tests would have caught the problem earlier. That would be a good time to bring testing up again.
- Find another job.
TDD is a tool for specific needs; just doing TDD because you heard it's great will just kill the team's productivity and whatever product they are working on.
Sure, it will pay off, but not right away. Something needs to cover the interim.
* only hire when desperate
Strong talent is so hard to get you should probably always be hiring. If you're hiring too many people your bar is probably too low.
* only hire to keep up with growth
You need to be at least a little preemptive. The hiring process itself can take months, plus the time to train even good new hires is at least a few months, AND you need your most sr. engineers to help interview so that is time they aren't writing features when you're trying to hit that critical milestone.
* Don’t hire someone to do something you’ve not yet figured out
This is probably also a mistake, as software engineering has become pretty specialized. Specialized frontend, devops, or data engineers can bang out solutions that even a strong generalist would take ten times longer to even approximate (and most likely anything they build will be throw-away). There is such huge low-hanging fruit in engineering productivity / business value in getting at least a decent 80% solution for most of these areas that it's worth hiring at least one strong specialist to help shepherd development.
I think this is not an indictment of hiring for something you do not know how to do, so much as it is of hiring someone before you have a defined job for them to do.
When you’re hiring an engineer, presumably you’ll be placing them onto a team that is responsible for some well-defined part of the stack. So you should know what skills you’re looking for when you’re interviewing. This should make interviewing easier; if you know what capabilities you need a new hire to have, then you know exactly what to test for in new candidates.
(This is yet another reason why generic whiteboard interviews make no sense. They’re optimizing for solving problems that could be wholly unrelated to the problems your company faces on a daily basis. I’m surprised more companies do not give interviews that focus more specifically on their relevant problem domains.)
If you don’t know what the new hire is going to do when he or she starts work, then you have no idea what skills to measure in the interview, and end up settling for the “least common denominator” of whiteboard coding ability.
1) Give them a take-home project in an area relating to the position, to weed out the unqualified.
2) Bring them on site and speak with them in person along with other members of the team they will be joining.
3) It's fairly easy to tell who's an impostor if you are knowledgeable yourself, and a group of engineers can identify a faker fairly quickly.
4) Always consult your team about the new hire and don't make the decision unilaterally, or their failures will reflect on you. Even their success won't make up for it if they turn out to be a nutjob and you vouched for them.
Compare: the way meditation is usually taught. There is something "there" to communicate, but meditation teachers mostly fail to communicate it. To use an old phrase, they are "pointing at the moon"—but, to stretch the analogy a bit, they're doing this pointing indoors, where the sight-picture you get by following the tangent of their finger does not, in fact, contain a visible moon. You have to imagine taking the thing they're doing (pointing), and reframe it in a context where there is a hypothetical moon to see. Whether that helps you find the moon is more about what you know about the sky and fingers and angles, than it is about how well the meditation teacher can point. And this is why the teachers end up failing to communicate: they did not, themselves, figure out how to "reach enlightenment" by absorbing a verbalized lesson, but rather by pondering a gestalt mess of ideas that have little in the way of words associated—so they can't just turn that gestalt mess back into words.
So: are HBR writers pointing at a visible moon, or are their words Markov-chain-speak because they're trying to backwards-chain the gestalt mess of their own mostly wordless understanding into a verbal lesson?
The problem of meditation teaching is false positives: people experience enlightenment while pondering some koan, so they think that that koan actually helped, and pass it on. It's superstition. Anything could have helped. Something that truly helps, should help more people than average, more often than chance—and if you've got that, you've got words.
> Anything could have helped.
If something helped a person, and they want to pass it along, even if it's difficult to communicate in a tangible fashion, I'm not going to stand in their way.
People don't yet know what they don't know, until they know it—so it can't be the learner's task to preemptively avoid vacuous lessons. That responsibility has to fall to the teacher.
However, a good writer should be able to convey even the most advanced topics in accessible ways. Often when I see someone relying on jargon and insider language too much, they strike me as a poor writer, regardless of their grasp of the source material.
I have also found increased risk of bikeshedding. The higher you go, the more likely you're working cross-disciplinary with ego-intellects. That also leads to suppressing dissent (hierarchy relationships more than experience-based), leading to worse decisions.
Please don't listen to the HBR articles, they're generally very terrible and often can be summed up by survivorship bias.
They would hire almost anyone, and then not take active action in maintaining a healthy staff. Needless to say, it's not going very well over there, regardless of the CTO being quite technically proficient.
The old fire hose saying is true, but it’s not just that you’re drinking from a fire hose, it’s that you often don’t know what’s coming out of the hose next. One minute deep technical decisions, the next minute helping to establish hiring philosophy, and cashflow and growth always on a background thread.
After a few years of this I think my experience is not uncommon. If you exit and through whatever circumstance (success or failure), come back inside an F500 company, you realize that trial by fire has force fed you a vast amount of new skills without even realizing it.
On one hand, the realization is really empowering, the realization you feel comfortable taking on various high impact tasks without much thought that you could have never jumped right into before. On the other hand, it can feel limiting, because F500 companies tend not to encourage even the most talented technical people to cross roles and help define company wide hiring practices.
It’s an invaluable education, but I don’t know if MBA is quite the correct analogy; not sure what a better comparison is.
CTO positions are much more about technology vision (e.g. choosing frameworks/technologies that can last + serve your needs today and tomorrow) and hiring/retaining talent. Everything else is gravy.
Many of the Fortune 500 companies have both a CIO and a CTO, and they are not necessarily peers of each other. In recent years, a bunch of new titles like big data chief, digital media / innovation chief, process and technology chief, etc. have appeared, which make the political scheme more confusing and toxic. Many of them end up reporting to the CEO directly. There's also the EVP rank, so go figure...
Again, it depends.
CTO of, say, US Foods? No, of course not.
This has always confused me in startup land. There will be a full C-suite in a company of 10 people, despite the fact that those C-suite folks' day-to-day would look nothing like the corresponding corporate position.
Just call it what it is instead of inflating titles.
I agree! How many times I got job offers that read basically: "CTO/sole developer". That's inflated and meaningless, like being one of 100 Senior VPs in a bank.
The CTO/lead gets a better title to inflate their ego and (possibly) future earnings.
The company gives the employee something the employee values but costs the employer nothing.
Specifically this could mean as minuscule as "We're using RoR for our website" to more broad like "We need to have sensors in every food package we ship to manage our supply chain and we use IBM IoT platform to do this". The point is have a defined vision with subsequent technology choice behind it. Whether or not you have a Lt that helps drive those decisions is moot.
Hmm. I would say the reverse: bring in people who are smarter and know more than you.
I was bitten before by hiring specialists when I did not know what the task at hand was, what metrics I should expect, what kind of timeline was reasonable, or what the potential gotchas were.
And for a person with the CTO title, I think it's a must-have.
Don't hire a role you think you need until you're sure you need it. Sometimes startups think "we need an HR person" or "we need a marketing person" before those jobs are actually at the point where they require a full-time person.
But after your first few engineering hires, you will probably know well whether you need, say, a backend engineer. You will have people doing some of that work, and be able to look at your roadmap and estimate correctly.
But for first-of-their-type roles (like my marketing or HR examples), that's harder - often part of it is startup leadership thinking "we could be doing so much XYZ I don't know about", instead of "we're doing 10 hours of XYZ a week and I know we need 40".
Once you've decided you need the hire, you want to get a person as smart as possible.
I see this all the time in hiring and in acquiring vendors. Management just wants to fill a talent gap, but then can't tell they're getting mediocre work.
I've seen people who don't know how to market their product go out and try to hire a marketing guy. You might luck out and get someone perfect for you, but I've never seen it.
Usually they just end up wasting a lot of money and learning some hard lessons.
Do 5 of these, and you'll have a good idea of what someone good looks like. (And those 5 may give you some candidates)
This is very difficult, though, because things like "organizes the team to hit quota every quarter" can come in many different forms.
It's entirely possible that this was the primary source of his problems with hiring, firing, testing, and a lot more.
The technology you choose determines which technologists you attract. And it's not a superficial thing, it actually says everything about the CTO's own technical skill, judgement, and experience.
Early on, like OP discovered, you pretty much have to do it all, but you slowly remove yourself from a lot of those tasks as you find better people to replace you in those areas.
Very well; now, I can go back to work with my head up high. :)
This fact alone makes me so glad that I stuck with older tech that has withstood the test of time for our own SaaS. I know that we have users from bleeding-edge tech companies sign up for our service, then run away when they discover the 'ancient' tech it runs on. Then again, I think we have outlasted many of the new frameworks and languages that rocketed up and fizzled out into obscurity in that same time.
Is this a thing? How can my company get $100,000 of AWS on credit?
I'm curious what this "founder magic" bit means. Is this advice largely because of the difficulty of trying to find a qualified expert to bring new capabilities to your company when you personally aren't familiar with that area? E.g., it's hard to not get the wool pulled over your eyes by someone who talks well but can't deliver?
There are many many ways to fail in a position and only a few key parts that matter. The "founder magic" is taking your unique perspective as a domain expert in your business and finding out what the role really needs. You do it by executing in that role for real. After you do that for a while (weeks/months) then you know what will make a candidate successful (and now you have a 50/50 chance of hiring the right person rather than a 10/90).
So you have to try it all out yourself first and figure out what makes someone in this role successful, what makes them not successful, and how to create a process or blueprint that your new hire can follow to success.
Experience and failure are important guideposts to help you look for the right person to fill that role. Where are they better than you? Then you have to mentor them so they become better still, so they can make your next hire(s).
Scaling enough to have people dedicated to building and maintaining data lakes is a late-stage problem. Who's going to build and maintain that reshaping of data?
That said, I'm helping an early-stage company, and an AWS read replica plus Metabase is meeting most of our needs fine for today. We'll probably start pushing events to BigQuery soon so we can build some metrics that would otherwise take crazy joins and subqueries.
Over-engineer much? I've worked at trading and advertising analytics firms that had less engineering.
In my past experience, the two methods above can produce wildly different impacts on database performance.
Maybe they should try TiDB (https://github.com/pingcap/tidb). It is a MySQL drop-in replacement that scales.
I fixed this in a new project by starting with Jest (https://facebook.github.io/jest/) and failing the CI if the test coverage wasn't at 100%.
This is horrible advice and should never be followed.
That being said, we do something similar where we require 80% coverage.
100% goes into "change-detecting test" territory. There's also the time aspect: going from 0% to 70% is not hard, but 70% to 100% is extremely time-consuming and often not worth the effort.
Monitoring is a way more efficient tool at catching issues.
We've found that with Jest, just doing snapshots can get you to 70% without actually testing any of your other methods; hence the 80% coverage requirement.
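For reference, a coverage gate like the one described can be enforced directly in Jest's configuration via its `coverageThreshold` option; CI then fails whenever coverage drops below the bar. A sketch (the 80% figures mirror the comment above and are a team choice, not a Jest default):

```javascript
// jest.config.js — illustrative coverage gate; thresholds are a team choice.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      // `jest --coverage` exits non-zero (failing CI) if any metric is below 80%.
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};
```

Thresholds can also be set per directory or per file, which lets a team ratchet coverage up gradually instead of demanding 100% everywhere at once.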
How long have they been using Perl 5 over at Craigslist?
Also, try defining (maybe in collaboration with the team) the tests you want people to write, rather than leaving it up to them or (hopefully not) expecting 100% coverage. I wrote up my thoughts on this a while back: https://getcorrello.com/blog/2015/11/20/how-much-automated-t...
We had some success increasing testing using that, plus code review so others could check that tests were being written. Still not total buy-in, to be honest, but a big move in the right direction :)
One surprising thing: after years of thinking I was encouraging my team to write tests, the main feedback on why they didn't was that they didn't have time. Making tests an explicit part of the process, and crucially defining which tests didn't need to be maintained forever, really helped.
In terms of business, you are trying to prove your business model. If your business model is bad, it doesn't matter how well your software is written. You need to prove your business model before you run out of funds.
It's a give and take. You really need to understand both the technical aspects and the business aspects to understand why entities might do certain things.
Also, people have been writing software without unit tests for decades.
Disagree - if your engineers don't write tests, you need to clearly state to them that tests are table stakes, and create an environment conducive to the outcome you want (set up CI, make it fast, set aside time for test-writing hackathons).
If your engineers don't want to _follow_ that leadership after it's given, then yeah, you hired the wrong people - but don't demonize employees for not doing something they weren't told they need to do.
Just telling engineers, "write tests" and then promoting the ones that don't is bad leadership: you need to create an environment where the behaviors you desire are the ones that are promoted.
How do you make sure that the refactored class does the same thing as the old one? Rewriting old code that you don't have test coverage for is way riskier than whatever small change you were going to make to it.
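One common way to answer this is a characterization ("golden master") test: before refactoring, pin down the old implementation's observable behavior over a spread of inputs, then assert the new implementation matches. A minimal sketch in plain Node, where `legacySlug` and `newSlug` are hypothetical stand-ins for the old and refactored code:

```javascript
// Characterization test: capture current behavior before refactoring.
// Both implementations are hypothetical examples, not code from the thread.
function legacySlug(title) {
  return title.toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-|-$/g, "");
}

// The refactored version must reproduce the legacy output exactly.
function newSlug(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// A spread of inputs, including edge cases (whitespace, punctuation, empty).
const samples = ["Hello, World!", "  CTO lessons  ", "perl5 @ craigslist", ""];

for (const input of samples) {
  const expected = legacySlug(input); // golden master: the old behavior
  const actual = newSlug(input);
  if (actual !== expected) {
    throw new Error(
      `Mismatch for ${JSON.stringify(input)}: ${actual} !== ${expected}`
    );
  }
}
console.log("All characterization samples match");
```

The point is that the test asserts against the legacy code's actual output, not a spec; once the new version matches everywhere it matters, the old one can be deleted and the captured outputs become the regression suite.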
I write a lot of code without tests because a lot of legacy codebases aren't set up to be testable, but they work, and it's important to the business that we're able to deliver small bug fixes and incremental improvements on the existing code while we write whatever replacement system we want to write. As I work on them they'll slowly get more testable, but if you're abandoning working code because it has no tests, you're usually making the wrong decision. (Which the author recognizes.)
Solid integration tests may work, but it's hard to get really good coverage in any reasonable running time from integration tests.
These days I try to cover the happy path with a fairly integrated flavour of test, the edge cases around the tricky bits of code, and fairly exhaustive coverage for authentication / authorization code paths, and not a whole lot more.
If I wanted to be a consultant or contractor again, it would be walking into these situations and essentially building test systems for legacy code.
(And if anyone wants to pay me 8,000 a week for a few months...)
This. This is the problem. The answer, with tests.
Tests are simply the implementation of knowing what the code is expected to do. If you don't have any basis for that expectation, writing tests is meaningless - either you test the current behavior of the code, which doesn't help you change anything, or you test your imagined behavior of the code, which doesn't help you validate anything.
That's great if you have the luxury of time.
Good test coverage will definitely save you time in the long run, no doubt about it. But it will cost you dearly in the short term.
And if your company's life or death hangs on getting a feature out a couple days sooner, then skipping tests is a perfectly valid thing to do.
> Stepping aside from pure technical decisions, the life-blood of being a CTO is people management
Not really, no. That's the job of a CTO at a startup, not at a larger company. I'm not sure the author of the article has actually learned the right lessons from his experience.
At the end of the day, CTO of a startup is not really a CTO role, in my opinion. It's a technical co-founder role. You just happened to be the most senior person on the team at a point in time, and you inherited a few leadership responsibilities in the process.
I've seen a lot of startups fail because they failed to recognize that fact and didn't realize that, after a few years, they needed a different CTO than the co-founder: someone who understands that role at scale and the many tasks it implies that are not necessarily relevant in the early years of the company.