
What I wish I knew when I became CTO - edmack
https://medium.com/sketchdeck-developer-blog/what-i-wish-i-knew-when-i-became-cto-fdc934b790e3
======
latch
> I’ve found it a real struggle to get our team to adopt writing tests.

If you're struggling to judge the engineering culture of a company that you're
considering joining, consider this indicative of a poor one. It isn't
definitive, but it's something you should ask about and probe further. Ask to
see their CI dashboard and PR comments over the last few days. When they talk
about Agile, ask what _engineering_ techniques (not process!) they leverage.
These things will tell you if you're joining a GM or a Toyota; a company that
sees quality and efficiency as opposing forces, or one that sees them as
inseparable.

When it comes to tests, there are two types of people: those who know how to
write tests, and those who think they're inefficient. If I had to guess what
happened here, I'd say the company lacked people who knew how to write
effective tests, combined with a lack of mentoring.

That's why you ask to see recent PR comments and find out if they do pair
programming. Because these two things are decisive factors in a good
engineering culture.

~~~
forgotpw1123
PR comments I agree with, but after believing in unit tests for years I'm
drifting slowly into the "waste of time" camp.

I'm convinced that unit tests don't usually find bugs. IMO, most bugs are edge
cases that were an oversight in the design. If the dev didn't handle the case
in code they're not going to know to test for it. Fuzzing is a much better
approach.

At my current position I have the opportunity to work with two large code-
bases, built by different teams in different offices. One of the projects has
~70% code coverage, the other doesn't have a single test. Exposure to both of
these systems really bent my opinion on unit tests and it has not recovered.

The project with high code coverage is basically shit and has so many bugs
that we regularly cull anything marked less than "medium" severity as "not
worth fixing". This project was written by a team that loves "patterns", so
you can find all sorts of gems like aspect-oriented programming, CQRS, N-tier,
etc. well mixed into a featureless grey Java EE goo. We get so many bug
reports that it's someone's job to go through them.

The other project with no tests is a dream to work on. Not a single file over
a few hundred lines, everything linted and well documented. Almost no methods
that don't fit on the screen, no recursion. No bullshit "layering" or
"patterns". I can't remember the last time we had a bug report, since our
monitoring picks up any exception, client- and server-side. Every bug I've
worked on was identified by our monitoring and fixed before anyone noticed.
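The server-side half of that monitoring claim can be sketched roughly like this (a hypothetical `REPORTED` sink stands in for a real service such as Sentry; nothing here is from the commenter's actual setup):

```python
# A minimal sketch of uncaught-exception monitoring: install a
# process-wide hook so every uncaught exception reaches a reporting
# sink before the process dies. REPORTED is a stand-in for a real
# monitoring backend.
import sys
import traceback

REPORTED = []

def report_exception(exc_type, exc, tb):
    # Capture enough context for an alert, then defer to the default
    # handler so the traceback still prints as usual.
    REPORTED.append({
        "type": exc_type.__name__,
        "message": str(exc),
        "trace": "".join(traceback.format_exception(exc_type, exc, tb)),
    })
    sys.__excepthook__(exc_type, exc, tb)

def install_monitoring():
    """Route every uncaught exception through the reporter."""
    sys.excepthook = report_exception
```

Client-side monitoring works the same way in spirit (e.g. a `window.onerror` handler in the browser reporting to the same backend).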

What's the difference between teams that developed these vastly different
applications? I've worked with both teams for a while, and honestly, the
engineers that wrote no tests are of far higher caliber. Use Linux at home,
programming since they can remember, hacking assembler on the weekends and 3D
printing random useless objects they could easily buy. The other team went to
school to program, and they do it because it pays the bills. Most of the bad
programmers know what they're doing is wrong, but they do it anyways so they
can pad their resume with more crap and tell the boss how great they are for
adding machine learning to the help screen that nobody has ever opened.

If your developers are great then tests would hardly fail and be fairly
useless, and if they're terrible tests don't save you. Maybe there's some
middle ground if you have a mixed team or a bunch of mediocre devs?

~~~
arekkas
Let's break this down.

> I'm convinced that unit tests don't usually find bugs.

They don't; they test whether the API contract the developer had in mind is
still valid.

> IMO, most bugs are edge cases that were an oversight in the design. If the
> dev didn't handle the case in code they're not going to know to test for it.

You don't write tests to find bugs (in 98% of cases), but you can write tests
for bugs found.

> Fuzzing is a much better approach.

If you're writing an I/O-intensive thing, such as a JSON parser, then yes. For
the 80% of software that is CRUD, probably not.
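For a JSON parser, a fuzz run can be as small as this stdlib-only sketch (a hypothetical harness, not anyone's actual setup): generate random JSON values and check that an encode/decode round trip returns the input unchanged.

```python
# A tiny stdlib-only fuzzing sketch: throw randomly generated
# JSON-compatible values at the encoder/decoder and check that a
# round trip preserves them.
import json
import random
import string

def random_value(depth=0):
    """Build a random JSON-compatible value, a few levels deep at most."""
    kinds = ["null", "bool", "int", "str"]
    if depth < 3:
        kinds += ["list", "dict"]
    kind = random.choice(kinds)
    if kind == "null":
        return None
    if kind == "bool":
        return random.choice([True, False])
    if kind == "int":
        return random.randint(-10**6, 10**6)
    if kind == "str":
        return "".join(random.choices(string.printable, k=random.randint(0, 8)))
    if kind == "list":
        return [random_value(depth + 1) for _ in range(random.randint(0, 4))]
    return {
        "".join(random.choices(string.ascii_letters, k=4)): random_value(depth + 1)
        for _ in range(random.randint(0, 4))
    }

def fuzz_round_trip(trials=200):
    """Any input the generator can produce must survive encode/decode."""
    for _ in range(trials):
        value = random_value()
        assert json.loads(json.dumps(value)) == value

fuzz_round_trip()
```

Real fuzzers (AFL, libFuzzer, Hypothesis) add coverage guidance and input shrinking, but the shape of the check is the same.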

> The project with high code coverage is basically shit and has so many bugs
> that we regularly cull anything marked less than "medium" severity as "not
> worth fixing". This project was written by a team that loves "patterns", so
> you can find all sorts of gems like aspect-oriented programming, CQRS,
> N-tier, etc. well mixed into a featureless grey Java EE goo. We get so many
> bug reports that it's someone's job to go through them.

You are blaming tests for bad design choices. With the patterns you list, unit
tests only get you so far; integration tests are what help you prevent bad
deployments.

> The other project with no tests is a dream to work on. Not a single file
> over a few hundred lines, everything linted and well documented. Almost no
> methods that don't fit on the screen, no recursion. No bullshit "layering"
> or "patterns". I can't remember the last time we had a bug report, since our
> monitoring picks up any exception, client- and server-side. Every bug I've
> worked on was identified by our monitoring and fixed before anyone noticed.

So how many exceptions were raised due to bad deploys? Code review only gets
you so far.

> If your developers are great then tests would hardly fail and be fairly
> useless, and if they're terrible tests don't save you.

Failing tests don't have to do with devs being "great" or not. Developers must
have the capability of quickly testing the system without manual work, in
order to be more effective and ship new features faster. If the tests are
one-sided (only unit tests, or only integration tests), then this will get
you only so far, but it still gets you that far.

Don't abandon good development practices only because you saw a terrible Java
EE application.

~~~
deckard1
> Developers must have the capability of quickly testing the system without
> manual work

Running unit tests is hardly quick, especially if you have to compile them.
End-to-end tests are even worse in this regard.

> They don't; they test whether the API contract the developer had in mind is
> still valid.

If you're always breaking the API, then that's a sign that the API is too
complex and poorly designed. The API _should_ be the closest thing you have to
being set in stone. Linus Torvalds has many rants on breaking the Linux
kernel's API (which, also, has no real unit tests).

It's also really easy to tell if you're breaking the API. Are you touching
that API code path at this time? Then yes, you're probably breaking the API.
Unless there was a preexisting bug that you are fixing (in which case, the
unit test failed to catch it), you are, by _definition_, breaking the API,
assuming your API truly is doing one logical, self-contained thing at a time
as any good API should.

edit: As an aside, I'd like to point out that POSIX C/C11/jQuery/etc. are
littered with deprecated API calls, such as sprintf(). This is almost always
the correct thing to do. Deprecate broken interfaces and create new interfaces
that fix the issues. Attempting to fix broken APIs by introducing optional
"modes" or parameters to an interface, or altering the response is certain to
cause bugs in the consumer of the interface.
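The deprecate-and-replace move reads something like this sketch (all names hypothetical; Python's `warnings` module plays the role of a compiler deprecation attribute):

```python
# Sketch of deprecate-and-replace: the old interface keeps its exact
# (broken) behavior but warns; the fix lives in a new interface, with
# no behavior-changing "mode" flag bolted onto the old one.
import warnings

def render_v2(template: str, values: dict) -> str:
    """New interface: a missing key is an error, loudly."""
    return template.format(**values)

def render(template: str, values: dict) -> str:
    """Old interface, kept for existing consumers but deprecated."""
    warnings.warn(
        "render() is deprecated; use render_v2(), which raises on missing keys",
        DeprecationWarning,
        stacklevel=2,
    )
    try:
        return template.format(**values)
    except KeyError:
        # The old, surprising fallback that consumers may depend on.
        return template
```

Consumers migrate on their own schedule, and the old behavior never silently changes underneath them.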

> Don't abandon good development practices

Unit tests are a tool. There are cases where they make sense, where they are
trivial to implement and benefit you greatly at the same time. Then there are
cases where implementing a unit test will take an entire day with marginal
benefit and the code will be entirely rewritten next year anyway (literally
_all_ web development everywhere). It doesn't make sense to spend man-months
and man-years writing and _maintaining_ unit tests when the app will get
tossed out and rewritten in LatestFad Framework almost as soon as you write
the test.

~~~
spdionis
> implementing a unit test will take an entire day with marginal benefit

The benefit should be realizing that if you need an entire day to implement a
unit test, you're doing something very, very wrong.

------
mkarazin
> I’ve found it a real struggle to get our team to adopt writing tests.

I find this hard to believe. Do other CTOs / team leads find this to be the
case?

I've been a CTO of two small startups with 3-7 developers. We've had
resistance to tests at some points (myself included). We've solved it fairly
simply. All pull requests (PRs) require tests. PRs are rejected immediately
without tests. If a PR doesn't have tests and it is critical to get in, we
open a new ticket to track adding tests. It isn't foolproof, but it does
result in a high degree of test coverage.

And once developers understand how and where to write tests, they usually see
the benefit quickly and want to write more tests.

~~~
apocalyptic0n3
I'm not a CTO but I do lead the dev team at our agency (was previously 16
devs, but we've slimmed down to 7 currently). I want to preface this by saying
that at an agency, your biggest enemy is always time; sales teams have to sell
projects for the absolute minimum in order to get a contract, so you can't
waste time on non-essentials for most projects.

That said, the biggest resistance I have found is "this feature is due in
three days, I need two and a half to finish, and then we have another half-day
to review and find bugs." In the end, the biggest issue is that we have time
to test on the spot or to write tests, but not both. You can scrape by with
just manual testing, but I don't think anyone would ever rely on automated
tests 100%.

Our larger projects are test-backed, and our largest even reaches 90%
coverage, but the only reason we wrote tests for those was because we knew we
would be working on them for 2-3 years and it was worth the time in that case.
I wish this weren't the case, but I've found it's always the argument against
automated tests in my corner of the market.

~~~
yread
I always find code coverage such a useless metric: if you have two independent
ifs next to each other, and one test goes into one if and another test into
the other, you have 100% coverage. Congratulations. But you've never tested
what happens when you go into both.
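That blind spot fits in a few lines (a made-up example, not from the thread):

```python
# Two tests, 100% line coverage, and the buggy both-branches path
# never runs.
def apply_discounts(price, is_member, has_coupon):
    if is_member:
        price -= 10
    if has_coupon:
        price //= 2  # when combined, the member discount gets halved too
    return price

def test_member():      # covers the first if
    assert apply_discounts(100, True, False) == 90

def test_coupon():      # covers the second if
    assert apply_discounts(100, False, True) == 50

# Both tests pass and every line is covered, yet nothing exercises
# apply_discounts(100, True, True), whose result (45) is exactly the
# kind of interaction a spec might define differently.
test_member()
test_coupon()
```

Branch-combination (path) coverage is what the metric silently ignores.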

~~~
apocalyptic0n3
I agree that it is a useless statistic, especially when comparing unit vs
integration vs functional vs smoke testing. There are different types of tests
and just because you are reaching 90% of your code does not mean you are
thoroughly testing it.

The only reason I brought it up was to show that we don't skip test writing
entirely, and for the projects where we do write them, it isn't like we just
wrote a test to check that "Project Name" is returned on the homepage and
called it a day.

------
siliconc0w
So hiring is pretty hard, but I kinda disagree with most of the points there.

* only hire when desperate

Strong talent is so hard to get you should probably always be hiring. If
you're hiring too many people your bar is probably too low.

* only hire to keep up with growth

You need to be at least a little preemptive. The hiring process itself can
take months, plus the time to train even good new hires is at least a few
months, AND you need your most senior engineers to help interview, so that is
time they aren't writing features when you're trying to hit that critical
milestone.

* Don’t hire someone to do something you’ve not yet figured out

This is probably also a mistake, as software engineering has become pretty
specialized. Specialized frontend, devops, or data engineers can bang out
solutions that a strong generalist would take ten times longer to even
approximate (and most likely anything they build will be throw-away). There is
such low-hanging fruit in engineering productivity/business value from getting
at least a decent 80% solution in most of these areas that it's worth hiring
at least one strong specialist to help shepherd development.

~~~
chatmasta
> Don’t hire someone to do something you’ve not yet figured out

I think this is not an indictment of hiring for something you do not know how
to do, so much as it is of hiring someone before you have a defined job for
them to do.

When you’re hiring an engineer, presumably you’ll be placing them onto a team
that is responsible for some well-defined part of the stack. So you should
know what skills you’re looking for when you’re interviewing. This should make
interviewing easier; if you know what capabilities you need a new hire to
have, then you know exactly what to test for in new candidates.

(This is yet another reason why generic whiteboard interviews make no sense.
They’re optimizing for solving problems that could be wholly unrelated to the
problems your company faces on a daily basis. I’m surprised more companies do
not give interviews that focus more specifically on their relevant problem
domains.)

If you don’t know what the new hire is going to do when he or she starts work,
then you have no idea what skills to measure in the interview, and end up
settling for the “least common denominator” of whiteboard coding ability.

~~~
blackflame7000
The whiteboard is nothing more than a hazing ritual, testing marathon runners
on their 100-yard dash. As CTO I opted for a different approach:

1) Give them a take-home project in an area relating to the position they
want, to weed out the unqualified.

2) Bring them on site and speak with them in person, along with other members
of the team they will be joining. It's fairly easy to tell who's an impostor
if you are knowledgeable yourself, and a group of engineers can identify a
faker fairly quickly.

3) Always consult your team about the new hire and don't make the decision
unilaterally, or their failures will reflect on you. Even their successes
won't make up for it if they turn out to be a nutjob and you vouched for them.

------
ripberge
Enjoyed reading this article; all valid points. However, the one thing that
stood out to me was how light I was on effective principles of management and
leadership. As a CTO of an organization of more than a handful of people, you
eventually "get things done" largely via other people rather than being hands
on yourself. I had to read a lot of Harvard Business Review to gain the
skills and confidence for that. Just like programming, there are indeed
tangible skills to learn. It's not just common sense, and you're not just
born with it.

~~~
redler
It’s funny how, as you progress through a career and gain responsibility,
those HBR articles go from seeming like a bunch of Markov chain corporate-
speak to being on-target for that exact problem you had last month with the
leadership team.

~~~
derefr
Are you sure they really have any more meaning, or are you just ascribing them
meaning that exists more in your "evaluation context" than in the text itself?

Compare: the way meditation is usually taught. There is something "there" to
communicate, but meditation teachers mostly fail to communicate it. To use an
old phrase, they are "pointing at the moon"—but, to stretch the analogy a bit,
they're doing this pointing _indoors_, where the sight-picture you get by
following the tangent of their finger does not, in fact, contain a visible
moon. You have to _imagine_ taking the thing they're doing (pointing), and
reframe it in a context where there _is_ a hypothetical moon to see. Whether
that helps you find the moon is more about what you know about the sky and
fingers and angles, than it is about how well the meditation teacher can
point. And this is _why_ the teachers end up failing to communicate: they did
not, themselves, figure out how to "reach enlightenment" by absorbing a
verbalized lesson, but rather by pondering a gestalt mess of ideas that have
little in the way of words associated—so they can't just turn that gestalt
mess back into words.

So: are HBR writers pointing at a visible moon, or are their words Markov-
chain-speak because they're trying to backwards-chain the gestalt mess of
their own _mostly wordless_ understanding into a verbal lesson?

~~~
darkerside
What is up with the disrespect I constantly hear for wordless understanding?
Not everything is best communicated verbally. There's a reason traditional
education is often described as a series of falsehoods.

~~~
derefr
There's nothing wrong with wordless understanding per se; the thing that's
"wrong" is thinking that you _have_ words (i.e. a teaching) that can
effectively, _repeatably_ communicate a concept, when you actually just have a
wordless understanding.

The problem of meditation teaching is false positives: people experience
enlightenment while pondering some koan, so they think that that koan actually
_helped_ , and pass it on. It's superstition. Anything could have helped.
Something that truly helps, should help _more people_ than average, _more
often_ than chance—and if you've got that, you've got words.

~~~
darkerside
False dichotomy. Understandings aren't completely wordless or wordable. They
fall along a scale.

> Anything could have helped.

If something helped a person, and they want to pass it along, even if it's
difficult to communicate in a tangible fashion, I'm not going to stand in
their way.

~~~
derefr
Sure, but If I want to _learn_ a difficult-to-communicate lesson, I would hope
that the people who have a wordless understanding would keep their
communicating to themselves—unless-and-until they come up with some coherent
words to match their thoughts, that they can be sure can be used to
reconstruct those thoughts without their brain there to help.

People don't yet know what they don't know, until they know it—so it can't be
the _learner's_ task to preemptively avoid vacuous lessons. That
responsibility has to fall to the teacher.

~~~
darkerside
Sometimes what sounds like nonsense hints at a higher truth.

[http://m.nautil.us/issue/40/learning/teaching-me-softly-
rp](http://m.nautil.us/issue/40/learning/teaching-me-softly-rp)

------
mberning
I have quit jobs because we kept bad hires too long and then didn’t fight to
keep good hires from walking away. I think grooming and retaining talent is
just as important as providing technical leadership. You need to be strong in
both areas.

~~~
joallard
I've seen exactly this in a local well-regarded startup. Incompetent hires
with problematic behaviors thriving and being protected, and competent hires
being unprotected, not cared about, and almost pushed out.

They would hire almost anyone, and then take no active steps to maintain a
healthy staff. Needless to say, it's not going very well over there,
regardless of the CTO being quite technically proficient.

~~~
ihsw2
Incompetent PMs can be an issue too, between inaccurate/incomplete feature
planning and shoving their responsibilities onto unwitting developers. I'd
argue that a great PM is worth as much as the much-vaunted 10x developer, if
not much more.

------
WhitneyLand
What always stands out about startup reflections like these is how utterly
undefined, freeform, and rapidly evolving the roles can be.

The old fire hose saying is true, but it’s not just that you’re drinking from
a fire hose, it’s that you often don’t know what’s coming out of the hose
next. One minute deep technical decisions, the next minute helping to
establish hiring philosophy, and cashflow and growth always on a background
thread.

After a few years of this, I think my experience is not uncommon. If you exit
and, through whatever circumstance (success or failure), come back inside an
F500 company, you realize that trial by fire has force-fed you a vast amount
of new skills without your even realizing it.

On one hand, the realization is really empowering: you feel comfortable
taking on, without much thought, various high-impact tasks that you could
never have jumped right into before. On the other hand, it can feel limiting,
because F500 companies tend not to encourage even the most talented technical
people to cross roles and help define company-wide hiring practices.

It’s an invaluable education, but I don’t know if an MBA is quite the right
analogy; I’m not sure what a better comparison is.

------
mbesto
Much of this is summed up to be:

CTO positions are much more about technology vision (e.g. choosing
frameworks/technologies that can last + serve your needs today and tomorrow)
and hiring/retaining talent. Everything else is gravy.

~~~
dpeck
Is choosing frameworks and technologies really a thing that CTOs do? That
seems like more of a tech lead/architect job, choosing the right tool for the
job. I could see the CTO pushing back on those choices from time to time if
something is being drastically over-engineered, but declaring what technology
is being used seems like a job far below a CTO.

~~~
speby
In a word, no. But in startup-land, where the total number of people on the
engineering team is, say, less than 10, chances are good that the CTO will
also play a lead engineer and/or architect sort of role, in which case they
will play a part in designing the architecture, selecting frameworks, and so
forth.

CTO of, say, US Foods? No, of course not.

~~~
ecshafer
Why call the role a CTO then? If the role is closer to a tech lead or
architect, just call them an architect.

This has always confused me in start-up land. There will be a full C-suite in
a company of 10 people, even though those C-suite folks' day-to-day would
look nothing like a corporate position.

Just call it what it is instead of inflating titles.

~~~
gaadd33
What do you call the person running the company of 10 people? I guess "team
lead" would be a non-inflated title, or just "manager"? It would be pretty
strange to have to explain that title to anyone outside of the company,
though.
~~~
dpeck
"founder"?

------
hartator
> Don’t hire someone to do something you’ve not yet figured out

Hmm. I would say the reverse: bring in people who are smarter and know more
than you.

~~~
Negitivefrags
If you don't know how to do something yourself, you won't even be able to
identify someone who is better than you in that field.

I've seen people who don't know how to market their product go out and try to
hire a marketing guy. You might luck out and get someone perfect for you, but
I've never seen it.

Usually they just end up wasting a lot of money and learning some hard
lessons.

~~~
eleusive
So how do you hire effectively as a CEO? There are too many areas for you to
be knowledgeable in all of them, yet you need to be able to hire top talent
across a variety of areas.

~~~
mathattack
The method I've found is to ask people I respect a lot, "Who is the best X
that you know?" Then call/email them, saying, "I'm the CEO of Y, and I'm
trying to find out what a good X looks like. So and so said you're the best
she knows. Can I have 30 minutes of your time?"

Do 5 of these, and you'll have a good idea of what someone good looks like.
(And those 5 may give you some candidates)

This is very difficult, though, because things like "organizes the team to
hit quota every quarter" can come in many different forms.

------
dumbfounder
"Only hire when you feel you’re completely desperate for the role". Maybe for
a tiny, extremely lean startup. But for anyone else if you wait until you are
desperate you will end up hiring the first person that you think might do the
job. That doesn't sound like the right way to go to me. But maybe "desperate"
is relative.

~~~
paladin314159
Although I'm not a founder, the rule I espoused in the early days (<20 people)
was to not hire for a role unless 50% of a person's time was collectively
being spent on that role across the company. This was a great way to be
disciplined in determining what was actually a bottleneck for our growth.

------
staunch
> _...it’s a blessing that my predilection for hipster technologies has not
> caused any serious problems._

It's entirely possible that this was the primary source of his problems with
hiring, firing, testing, and a lot more.

The technology you choose determines which technologists you attract. And it's
not a superficial thing, it actually says everything about the CTO's own
technical skill, judgement, and experience.

~~~
sidlls
I was thinking that most of the problems he notes are the result of that
litany of tech. How much of that was really necessary or appropriate?

------
akurilin
As time goes on, the CTO becomes a pretty flexible position, somewhat
analogous to that of a COO. This article was useful for me to figure out the
kind of options I had as a CTO, in terms of specializing, as the company got
progressively bigger: [https://www.linkedin.com/pulse/five-flavors-being-cto-
matt-t...](https://www.linkedin.com/pulse/five-flavors-being-cto-matt-tucker/)

Early on, like OP discovered, you pretty much have to do it all, but you
slowly remove yourself from a lot of those tasks as you find better people to
replace you in those areas.

------
zeeshanm
I like reading posts like this one. Maybe they serve as a form of therapy for
me: I'm not in this alone. There are others in a similar boat, fighting the
good fight, making similar mistakes, and having the same realizations.

Very well; now, I can go back to work with my head up high. :)

------
cyberferret
> "I appreciate now that technologies have a surprisingly short lifespan"

This fact alone makes me so glad that I stuck with older tech that has
withstood the test of time for our own SaaS. I _know_ that we have users from
bleeding-edge tech companies sign up for our service, then run away when they
discover the 'ancient' tech that it runs on - but then again, I think we have
outlasted many of the new tech frameworks/languages that have rocketed on
high, then fizzled out into obscurity in that same time.

~~~
wu-ikkyu
What was the stack?

~~~
cyberferret
Front end is basically Bootstrap + jQuery (o_O). Back end is Ruby, but built
using Padrino, which is based on the Sinatra framework instead of Rails. Not
exactly 'old' tech there, but not nearly as cool or fast-moving as Rails, Go,
Rust, etc.

------
fergie
> hence why cloud providers can offer $100,000 initial credit

Is this a thing? How can my company get $100,000 of AWS on credit?

~~~
GordonS
That kind of offer is generally only available to startups that are in an
accelerator program

------
majormajor
> Don’t hire someone to do something you’ve not yet figured out (some
> exceptional candidates can bring new capabilities to companies, but often
> the most reliable route is for some “founder magic” to re-assemble the
> company until it can perform the new thing)

I'm curious what this "founder magic" bit means. Is this advice largely
because of the difficulty of trying to find a qualified expert to bring new
capabilities to your company when you personally aren't familiar with that
area? E.g., it's hard to not get the wool pulled over your eyes by someone who
talks well but can't deliver?

~~~
cardine
Unless you are hiring someone for a VP-type role, your new hire likely isn't
going to step in and know exactly what they should be doing every day to
achieve the goals you have laid out for them.

So you have to try it all out yourself first and figure out what makes someone
in this role successful, what makes them not successful, and how to create a
process or blueprint that your new hire can follow to success.

~~~
ramses0
Specifically: in a company of five, don't hire a customer service person if
you haven't done customer support yourself at least a bit. Don't hire a
database person if you yourself (or somebody internal) haven't already tried
[and presumably failed].

Experience and failure are important guideposts to help you look for the
right person to fill that role. Where are they better than you? Then you have
to mentor them so they keep getting better, so they can make your next
hire(s).

------
perfmode
Yo, why are you doing BI queries on MySQL?

~~~
benhoyt
I'm no fan of MySQL either, but probably because they were already using MySQL
and they had some Business questions they wanted answered Intelligently
without setting up a bunch of new infrastructure. Sometimes at a startup you
just need to get things done, and fix it later when (if) it becomes a pain
point.

~~~
bpicolo
Definitely the kind of thing you use a dedicated replica for

~~~
reilly3000
Really, not a replica. That is a fine shim for the early days, but it means
you are severely limited in what you can do with reporting, since you're tied
to the data structures prod has. A better pattern is prod DB -> Kafka/Kinesis
streams -> a reporting DB like Redshift/Snowflake/BigQuery. That way you can
shape the data however you need, and it lets data teams avoid bogging down
engineering.

~~~
bpicolo
That’s putting the cart miles ahead of the horse at small-to-medium
organizations. SQL scales a long way. It’s not a shim - it’s the best way to
do business until the technology is a limit for you. Modern RDBMSes can go a
long way.

Having the scale to dedicate people to building and maintaining data lakes is
a late-stage problem. Who’s going to build and maintain that reshaping of
data?

~~~
reilly3000
I guess I really value data, especially for early-stage companies trying to
understand users and find fit. I don’t think the DB needs to be a dedicated
analytics DB; MySQL and especially Postgres work great for analytics. My
issue is with read replicas. In most cases it doesn’t make sense to force the
prod DB to have an analytics-friendly schema for the replica to use. Making
all those views, and iterating on them as important business questions come
up, shouldn’t require a production DB migration.

That said, I’m helping an early-stage company, and an AWS read replica plus
Metabase is meeting most of our needs fine for today. We’ll probably start
pushing events to BigQuery soon so we can make some metrics that would
otherwise take crazy joins and subqueries.

~~~
tomnipotent
Most early stage companies will be writing queries directly against OLTP
tables - which is why a read-only replica of your master DB is the
safest/fastest option.

------
shenli3514
>Of the list, AngularJS and MySQL have been the only ones to give us scaling
problems. Our monolithic AngularJS code-bundle has got too big and the initial
download takes quite a while and the application is a bit too slow. MySQL (in
RDS) crashes and restarts due to growing BI query complexity and it’s been
hard to fix this.

Maybe they should try TiDB
([https://github.com/pingcap/tidb](https://github.com/pingcap/tidb)). It is a
MySQL drop-in replacement that scales.

------
senoroink
It's funny you mention that you had difficulty having your team write tests.
At my company, the CTO has difficulty writing tests and the team has
consistently written adequate test coverage.

I fixed this in a new project by starting with jest [1] and failing the CI if
the test coverage wasn't at 100%.

[1] : [https://facebook.github.io/jest/](https://facebook.github.io/jest/)
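For reference, Jest exposes this through its `coverageThreshold` option; a sketch of the relevant `package.json` fragment (check the Jest docs for current option names):

```json
{
  "jest": {
    "coverageThreshold": {
      "global": {
        "branches": 100,
        "functions": 100,
        "lines": 100,
        "statements": 100
      }
    }
  }
}
```

With this in place, `jest --coverage` exits non-zero when any global threshold is missed, which is what fails the CI run.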

~~~
tomnipotent
> failing the CI if the test coverage wasn't at 100%.

This is horrible advice and should never be followed.

~~~
senoroink
Why? It's not hard to do if you start a fresh project.

~~~
baconomatic
Just because you have coverage doesn't necessarily mean that you have written
good tests.

That being said, we do something similar where we require 80% coverage.

~~~
tehlike
The difference between 80% coverage and 100% coverage is overrated. 80% is
more than sufficient; I'd even go ahead and say 70% is better.

100% gets into "change-detecting test" territory. There's also the time
aspect: going from 0 to 70 is not hard; going from 70 to 100 is extremely
time-consuming, and often not worth the effort.

Monitoring is a way more efficient tool at catching issues.

~~~
baconomatic
While I agree that 70% is about the sweet spot, it really depends on the
tools you're using.

We've found that with Jest, just doing snapshots can get you to 70% without
actually testing any of your other methods, hence the 80% coverage
requirement.

------
tribby
> You accept long-term “technical debt” with the adoption of any technology.

How long have they been using Perl 5 over at Craigslist?

------
robinwarren
Re: getting your team more interested in testing. This is not an easy thing
to get momentum on if people aren't used to it. Yes to getting the test time
down (and keeping it down).

Also, try defining (maybe in collaboration with the team) the tests you want
people to write rather than leaving it up to them or (hopefully not)
expecting 100% coverage. I wrote up my thoughts on this a while back:
[https://getcorrello.com/blog/2015/11/20/how-much-
automated-t...](https://getcorrello.com/blog/2015/11/20/how-much-automated-
testing-is-enough-automated-testing/) We had some success increasing testing
with that approach plus code review, so others could check that tests were
being written. Still not total buy-in, to be honest, but a big move in the
right direction :)

One surprising thing was that after years of thinking I was encouraging my
team to write tests, the main feedback on why they didn't was that they
didn't have time. Making testing an explicit part of the process, and
importantly defining which tests didn't need to be maintained forever,
really helped.

------
macca321
I just write the Gherkin in comments, interleaved with the test code. No
messing with regexes.
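For illustration, the idea might look like this (the cart function is hypothetical):

```javascript
// Hypothetical function under test.
function applyDiscount(total, code) {
  return code === "SAVE10" ? total * 0.9 : total;
}

// The Gherkin lives in comments right next to the code it describes,
// so there are no step-definition regexes to keep in sync:
// Given a cart totalling 100
const total = 100;
// When the customer applies the SAVE10 code
const discounted = applyDiscount(total, "SAVE10");
// Then 10% is taken off
console.assert(discounted === 90);
```

The trade-off versus real Cucumber-style tooling is that the steps aren't executable specs for non-developers; they're just readable structure for whoever maintains the test.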

------
mychael
Why are so many self-appointed startup CTOs so anxious to share startup
advice?

------
orginal__idear
I recently interviewed with these guys. Was not impressed.

------
dustedrob
Great article!

------
thrrr
If your engineers don't write tests, you hired the wrong people. Testing is
vital. Make a rule: every change needs to be tested (you can even set up a
pre-commit hook for this). If a class has no test, one has to be written. If
tests cannot be written easily for a class, it has to be refactored.
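The core of such a pre-commit check is small. A sketch (file names hypothetical; a real hook would read the staged file list from `git diff --cached --name-only` and exit non-zero on violations):

```javascript
// Pre-commit rule: every source file must have a sibling *.test.js.
// Returns the files that would block the commit.
function untestedFiles(sourceFiles, testFiles) {
  const tests = new Set(testFiles);
  return sourceFiles.filter(
    (f) => !tests.has(f.replace(/\.js$/, ".test.js"))
  );
}

const missing = untestedFiles(
  ["src/cart.js", "src/user.js"],
  ["src/cart.test.js"]
);
console.log(missing); // files lacking a test, to be reported to the committer
```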

~~~
geofft
> _If tests can not be written easily for a class, it has to be refactored._

How do you make sure that the refactored class does the same thing as the old
one? Rewriting old code that you don't have test coverage for is way riskier
than whatever small change you were going to make to it.

I write a lot of code without tests because a lot of legacy codebases aren't
set up to be testable, _but they work_, and it's important to the business
that we're able to deliver small bug fixes and incremental improvements on the
existing code while we write whatever replacement system we want to write. As
I work on them they'll slowly get more testable, but if you're abandoning
working code because it has no tests, you're usually making the wrong
decision. (Which the author recognizes.)

~~~
brett40324
> How do you make sure that the refactored class does the same thing as the
> old one?

This. This is the problem. The answer, with tests.

~~~
geofft
What properties do you write tests for, though? Presumably you're touching the
code because there's something wrong with it. How do you know _how much_ of it
is wrong? How do you know that all callers are actually thinking the current
behavior is wrong, instead of one caller misbehaving and another caller
expecting it (possibly because someone noticed and worked around it, and now
that workaround is going to break)?

Tests are simply the implementation of knowing what the code is expected to
do. If you don't have any basis for that expectation, writing tests is
meaningless - either you test the current behavior of the code, which doesn't
help you change anything, or you test your imagined behavior of the code,
which doesn't help you validate anything.

~~~
brett40324
I agree, and commend how well you've noted the problems when code is written
without tests. Such a codebase becomes mentally exhausting and expensive to
probe into; much more expensive than the original time saved by skipping
testing practices altogether. Sure, a huge amount of legacy software may not
have tests, let alone comments. Then yes, it's hard to know where to even
begin, or to have any confidence in what you're testing for. But ignoring
proper testing practices in new, modern codebases, especially in a business
whose single product or service offering is software, is extremely risky and
irresponsible. This is a little ranty, but why are devs justifying not
writing tests in 2018?

------
hota_mazi
> I appreciate now that technologies have a surprisingly short lifespan

That's pretty much only true in the JavaScript ecosystem. Every other area
of the technology stack usually sees lifetimes measured in decades.

> Stepping aside from pure technical decisions, the life-blood of being a CTO
> is people management

Not really, no. That's the job of a CTO at a startup, not at a larger
company. I'm not sure the author of the article has actually learned the
right lessons from his experience.

At the end of the day, CTO of a startup is not really a CTO role in my
opinion. It's a technical co-founder role. You just happened to be the most
senior person on the team at a point in time and inherited a few leadership
responsibilities in the process.

I've seen a lot of startups fail because they didn't recognize that fact and
didn't realize that, after a few years, they needed a different CTO than the
co-founder: someone who understands the role at scale and the many tasks it
implies that are not necessarily relevant in the early years of the company.

