Hacker News
Ask HN: Are my expectations on code quality and professionalism too high?
271 points by veggiepulse 40 days ago | 291 comments
I have lost perspective on what is reasonable to expect, and need a reality check from HN.

Some time ago, I left a large-ish company with what I perceived as overall quite good engineering to join a smaller company. When I say "quite good", I don't mean perfect, but what I consider the basics were there:

- Code review, where we would consider architectural concerns, failure cases, etc., ensuring maintainability. Shortcuts were taken intentionally, with a plan to address them

- Test coverage was good enough that you could generally rely on the CI to release to production

- Normal development workflow would be to have tests running while developing, adding tests as you introduce functionality. For some projects that didn't have adequate test coverage, developing might involve running the service locally and connecting to staging instances of dependencies

- Deployments were automated and infrastructure was managed in code

Those are what I consider the basics. Other things I don't expect from every company and am fine setting up myself as needed.

At $current_company, I was surprised that none of the basics were there. Everyone agrees to do these things, but with the slightest bit of pressure those principles are gone and people go back to pushing directly to prod, connecting to prod DBs during development, breaking tests, and writing spaghetti code with no review, leaving us worse off than we were before. This is frustrating because I see how slow dev is, and I know how fast it can be when people write good code with discipline. Most devs in the company don't have experience with other kinds of environments (even "senior" ones); I think they just can't imagine another way. My disappointment isn't with the current state, but that people at all levels are making it worse instead of better.

These setbacks are demoralizing, but I'm wondering if my standards are unreasonable, and whether this is just what mid-sized companies are like and I have to endure and keep pushing.

Me as a manager at a startup: "Look, we have 5 months of runway. Does that make sense?"

Young dev from large corp: "Yes. But if you don't use Terraform we won't be able to see our infrastructure changes over time. We don't even have a proper code review process."

Me as a manager at a startup: "We have two micro instances. Do not install Terraform. Finish the import prototype... now."

Young dev from large corp: "Sigh, ok, just saying in 2 years from now we won't be in a good spot". [Then proceeds to blow 2 hours complaining on Hacker News.]

This is why it's important that early team members are experienced engineers. They know which corners should be cut today to get the prototype out, and which shouldn't because they'll take 30 mins to do "properly" and save a day of work before the runway is even up.

I work in a small company that has all the things the author mentioned and that can work quickly as a result. Almost none of it comes from "we'll do it this way", almost all has evolved over time as we've been able to gradually introduce more rigour to our development process. Much is down to some great engineers at the beginning setting us on that path.

I strongly agree. Do not staff a very new startup with junior engineers and expect a good outcome.

Furthermore, you absolutely do need to cut corners _somewhere_, but a seasoned team understands that taking on such tech debt will require a reduction in velocity at some point in the near future. It is then incumbent upon engineers to ensure management understands that the team is borrowing from future velocity to deliver sooner, and it's incumbent on management to facilitate remedying that tech debt at the earliest opportunity. If management does not allow for such refactoring/structural work, it is then incumbent on engineering to stop offering the shortcuts that temporarily increase velocity, because there is already too much tech debt, and taking on more would reduce velocity unacceptably.

A good startup has experienced engineers and experienced managers who understand this dynamic.

I really wish we had more precise terms for tech debt.

For instance, I think type/schema debt (where your domain model has drifted from your original types/schemas) is very painful and has a very high interest rate.

I think duplicate domain logic has a medium to high interest rate.

I think code organization (what methods go where) has a low to medium interest rate.

I think white spacing, duplicate non-domain logic and code style has a low interest rate.

My take is that the most expensive tech debt is never duplication but always abstraction. Duplication makes the job tedious and error-prone, but pick a wrong abstraction at the start and suddenly you face an insurmountable rewrite of everything when the unexpected change or feature comes in that wasn't envisioned at the beginning. And it will come, at least until you have product-market fit and are no longer a startup.

Agree, and in theory duplication is easy to refactor and clean up, but the nasty thing about duplication is that the instances can diverge in little ways, and then the person cleaning it up may not be able to tell if the differences are meaningful and need to be preserved, or accidents of maintenance.

My current job is cleaning up both kinds of tech debt for a startup. Duplication has created several bugs, but most of them were fixable in a few hours at most, and we finished cleaning them all up within 4 months. Bad abstraction has taken hours to months to clear up and we're still not done.

The difficulty in our current culture with abstraction mostly derives from taking "don't repeat yourself" too far and using that to mean "apply any compression strategy you can find to this code" which could range from enterprise generalization and configuration to APL-style code-golfing.

A gentler, more helpful admonition would be "find the theme, reuse the theme". If you only set out to compress the code you obscure the theme of the code, but if you set out to elevate the theme as you find it, you are likely to let more code rest in a form that is duplicated but a reasonable reflection of the themes currently present.
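
To make the contrast concrete, here's a toy sketch (names and data are invented, not from any codebase discussed here): the first version compresses two behaviors into one flag-driven helper, obscuring both themes; the second leaves a little duplication but lets each theme rest in its own readable form.

```python
# Over-compressed: one helper driven by flags, hiding two distinct themes.
def format_user(user, for_invoice=False, short=False, upper=False):
    if for_invoice:
        name = f"{user['last']}, {user['first']}"
    else:
        name = f"{user['first']} {user['last']}"
    if short:
        name = name.split()[0]
    return name.upper() if upper else name

# Theme-oriented: some duplication remains, but each function states its theme.
def invoice_name(user):
    """Legal-style name for invoices: 'Last, First' in caps."""
    return f"{user['last']}, {user['first']}".upper()

def greeting_name(user):
    """Friendly name for the UI greeting: just the first name."""
    return user["first"]

user = {"first": "Ada", "last": "Lovelace"}
print(invoice_name(user))   # LOVELACE, ADA
print(greeting_name(user))  # Ada
```

Callers of `format_user` have to reverse-engineer which flag combination means "invoice" or "greeting"; the duplicated pair reads instantly.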

Was going to say the same... I've been on projects where the abstractions made what should be a trivial 30-60 minute change, with tests, blow up to almost a week just because of all the layers that need to be worked through.

To GP: schema debt usually comes from adding stuff before you need it... it's almost always premature, and one should have a good migration strategy in place early on in order to support destructive (or at least non-trivial) restructuring over time.

Even better if those could be demonstrated by studies/concrete data instead of "I think/feel..."

Really, really agree. Especially on the last point. Having a small, experienced team is, I think, the least risky way to build products. Taking the right shortcuts is the big advantage here: not only around tech debt, but also in not planning things down to the last detail and not burning time researching best practices for an unproven stack.

Can you clarify your last sentence? Do you mean not spending too much time planning things out to too much detail?

cies is correct, but I'd like to expand the point. A junior engineer (like me) might have encountered tens of systems and worked in fewer than ten. Senior engineers have encountered and worked in many more, so they know which things to implement. For example, say we're implementing an authentication system for a SaaS. The junior engineer will ask for specs down to the last detail to avoid missing anything: How do we handle forgotten passwords? What are the password requirements? Social logins? The experienced engineer can read the context, pick a suitable approach, and implement it quickly. They will be more concerned with whether the auth needs to support OAuth to smooth out onboarding, or whether to put that off because the target demographic is more familiar with email/password. These kinds of shortcuts compound to reduce anxiety, increase velocity, and let you iterate faster when the product hits the market.

I think what is meant is that a senior can answer these questions more or less from the hip, hence needing less research time.

I suggest it's frequently less about knowing which corners to cut, and more about knowing hard from easy, to avoid future impedance mismatches between eng & sales/marketing (or even internal business leadership).

One of the most frustrating things for an exec is downtime. The next worst is inaccurate time estimates for IT work. If you don't have at least some experienced architects/developers, it's normal to find yourself in a position where nobody knows what they don't know, and that's a bad look for everyone involved. It's also why a lot of bigcos have been outsourcing internal appdev to MSPs over the last 10+ years. It de-risks projects, eliminates skilled-labor shortages, and ensures reasonable QoS... in theory. It doesn't always work out that way, but that's the objective.

Experienced engineers are hard to get though, unless they are founders themselves. Early-stage startups tend to rely on the cheapest engineers they can find, and only hire more experienced devs when they've accumulated a pile of technical debt and poor decisions.

That's why your choice of technical partner is so important. Pick someone smart whom you wouldn't otherwise be able to afford, or pay for it now and later.

For some years I was the developer you talk about, who always knew everything better, now I am a manager at a startup. So I was able to see both sides of the discussion.

I feel like it always comes down to a communication issue. Team members need to feel heard, and to feel acknowledged that you fully understood what they are suggesting. On the other hand, they need to understand that there are constraints that force a decision that might not technically be the best, but is Pareto optimal.

When this discussion occurs I sometimes bring up the analogy of those space movies where they put everything the astronauts have up there on a table and try to make something out of it that mostly consists of duct tape. Sure, there are better tools, but the stuff on the table is all we have.

Another thing that took me very long to understand as a developer is that innovation is a risk that might not always make sense economically. A thing that took me long to understand as a manager is that the happiness of your devs is one of the most precious resources you have to manage.

A coworker introduced me to the concept of a 'novelty budget', and I find it really useful. If my team is taking on a new project, there are many ways to improve on how things have been done in the past. But doing anything new carries some risk. If every part of the project is new, it will be very risky.

So I use the idea of a novelty budget to negotiate with my engineers. "What things do you really want to change? Let's focus on that, and use tried and true, albeit suboptimal, methods for the other stuff, to stay within our novelty budget." Down the road, once you can handle more risk, you can introduce one of those changes that were originally punted.

I like that!

"Novelty Budget"

Your scenario can be a lose-lose situation as it applies to the question. Speaking from experience, you can do the "agile" approach to fail fast and release faster, but with no buy-in from leadership to address the breakage in the future, you're compounding problems on problems.

So, short-term, you have happy leaders who see releases as they want, and unhappy devs who are in a shitshow of code.

Long-term, if your startup is either acquired or starts to grow a strong revenue stream, leaders need to appease their new owners or their influx of new customers, and you can't slow down, because either a) the new board needs to be impressed to justify their large acqui-investment, or b) your new customer base is too big to risk losing. So you keep building on bad code, maybe squeezing in a few skunkworks refactors that only the peer-reviewing dev and QA know about, and accept any delay on release as the result.

The approach should be somewhere in between, ideally with the engineering team being empowered to tell leadership “yo, we have a bomb of bugs just waiting to burst if we add one more feature with duct tape, let’s prioritize time to address it”.

If leadership doesn’t trust your team, then you have bad leadership.

So much this. There is such a thing as YAGNI in the early and tight days of a startup but tech debt is something that needs to be constantly addressed, and you as a manager need to trust your team to put in the proper engineering as a solution grows. Is quick and dirty fine for a prototype or MVP? In theory yes, but understand that a lot of devs have seen that turn into the final product (with customers on it within days of the "prototype" being finished in some cases). I've seen it with some of the worst codebases I've ever encountered, and those "prototypes" and "MVPs" were never overhauled with a scalable architecture, and the startups end up running out of steam because they did not invest in their product codebase, or better, have a sustainable pace of development that allows the solution to constantly remain high quality.

Don't have the runway for that? Are you managing the startup as well as you could? Why did you tell your investors that you could do it in five months of dev time?

Is there any situation under which leadership shouldn't buy in to exactly what tech wants to do?

Does a captain of a ship do exactly what his crew wants all the time? No, because they're steering a ship and sometimes they have to make moves that not everyone will agree with.

Erm... that's not how sailing works. Yes, the captain needs buy-in all the time, since they are ultimately responsible for the ship and crew, both in the eyes of the law and survival-wise. Also, most of the time the crew is at the helm.

Source: I'm a trained sailor.

Scotty wasn't on the bridge, he didn't know what was out the window.

Point of order: Scotty was frequently on the bridge commanding the ship, and there's a strong argument to be made that he was the second-best at the job, after Kirk. Certainly 3rd best.

I upvoted you; it was a risky analogy given my limited knowledge of Star Trek compared to the majority here :)

This is a little embarrassing. I asked what I thought was a well-designed question to frame the problem in a way that made my own conclusion obvious. Clearly, I missed the mark, so I'll be more straightforward now.

Engineers below the upper senior levels typically operate with an "infinite" mindset in making these choices. We all laugh at the relevant XKCD[0] because there's a grain of truth in there. This isn't a knock on engineering, it's just the nature of operating at the lowest levels. You become involved in the problem and you solve for local maxima.

Leadership done well means offering transparency to the people at ground level, communicating timelines and strategy in a way that is relevant to the problem at hand, so people know, for example, how long they can afford to spend solving a particular problem.

Where this sometimes goes wrong is that the higher you go in leadership, the less certainty there is. There are no "right" answers, there isn't perfect information, and yet decisions must be made. If engineering is operating on different base assumptions, those decisions will look wrong (and they might be). And yet, someone needs to get people aligned and moving forward, because moving fast in the wrong direction is typically better than moving too slowly, even in the right direction.

[0] https://xkcd.com/974/

Or this one, which provides good info on whether to take the time to automate something: https://xkcd.com/1205/

Hopefully there are situations where the leadership has more information available than the engineers.

> Young dev from large corp: "Sigh, ok, just saying in 2 years from now we won't be in a good spot".

Except in my experience it’s not two years, it’s two months. And I can set up two micro instances with terraform in less than an hour.

So soon you’re burning your three months of runway with development at 20-50% efficiency, and have no ability to pivot, if needed, because your infrastructure is rigid and hardcoded.

Neither scenario is “correct”, there is only “judgement”, and it can be devastatingly incorrect either way.

A lot of people with strong or extreme opinions (too little or too much tech debt) are inadvertently being fooled by “personal survivor bias“.

> And I can set up two micro instances with terraform in less than an hour.

The post was more of a metaphor than a concrete example.

However, as a manager I can’t count how many times my team has declared that they can do something in under a day only to have it become a part-time job for someone to maintain indefinitely, or a multi-week rabbit hole as they debug some unexpected behavior, or technical debt as they neglect to document their solution, or a productivity bottleneck as everyone else must now either learn Terraform or wait for the Terraform guy to be available to help and so on.

When I transitioned from IC engineer to engineering manager, I quickly learned that engineers frequently assume the best-case scenario for their time estimates, while managers end up planning for the worst-case scenario. The catch is that it’s difficult to tell an engineer that their time estimate could be an order of magnitude or more short of a realistic estimate without implicitly insulting their abilities in the process. ICs also quickly forget that many of us managers were once over-optimistic IC engineers as well.

> "I quickly learned that engineers frequently assume the best-case scenario for their time estimates, while managers end up planning for the worst-case scenario."

To be fair, bad managers (and there are a lot of them out there) often get really angry if you give a reasonable worst case scenario time estimate for a task, even if you can back it up.

Indeed. You learn in all roles that the only way to get your projects pushed through is to give overly ambitious timelines. If you were conservative or realistic, nothing would ever progress.

I've constantly been on projects where someone said, "This will only be a month of work for maybe 3 engineers." That project quickly becomes 5 engineers over a year because of empty promises.

That said, some of these projects take a very long time because there's dishonesty in the estimates. If people were honest with estimates, you wouldn't have to cut corners. They cut corners to push a project through quickly, but it backfires constantly... which leads to more work, rather than the more effective and reliable approach of doing lots of non-visible work before seeing visible results.

When I was managing a large, distributed team of mostly mediocre engineers (at a poor bigco), it was common to take the best-case estimate (provided by a senior engineer or architect), add 50-100% due to the lack of similarly experienced engineers responsible for day-to-day coding, add 25-30% for SQA, and add 50% for "business" delays (usually LOB heads who couldn't agree on features, or who asked for utterly stupid things that we'd have to explain away before we even got started... like a web UI with a 52-column table meant to emulate the previous Excel-based "system"). The result was frustration all around: the senior engineers knew our estimates were crazy high compared to what they'd be able to do independently, our business stakeholders were mad because of the cost, and our CIO couldn't understand why it seemed literally nothing was doable in less than several thousand person-hours. Good times.
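
For what it's worth, those pads compound multiplicatively rather than adding up, which is part of why the totals looked so crazy. A quick sketch of the arithmetic (the percentages are the ones quoted above, taken at rough midpoints; the 400-hour base is invented):

```python
def padded_estimate(best_case_hours,
                    inexperience=1.75,  # +50-100%, midpoint ~+75%
                    sqa=1.275,          # +25-30%, midpoint ~+27.5%
                    business=1.5):      # +50% for "business" delays
    """Compound the padding factors instead of adding the percentages."""
    return best_case_hours * inexperience * sqa * business

best = 400  # hypothetical best-case estimate from a senior engineer
print(round(padded_estimate(best)))  # 1339 hours, well over 3x the base
```

Adding the percentages naively would give only about 2.5x, but multiplying gives roughly 3.35x, so a modest-looking best case balloons into "several thousand person-hours" territory very quickly.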

But when you 1) don't have a constructive relationship between the CIO/CTO and business leaders, and 2) offshore everything not to follow the sun but purely to save money (and then pay bottom of the barrel even in lower-cost regions; that company still only pays engineers 10-12 lakh in south India), you're going to get an inefficient org creating crap-quality software.

It sounds like you had distributed decision making authority but concentrated accountability for the outcome. Worst-case scenario.

> the senior engineers knew our estimates were crazy high compared to what they'd be able to do independently

Interestingly, I worked at a company where teams of engineers were given wide latitude to design, implement, and deliver their features in isolation. The theory was that engineers could be more productive if they were left alone to make the decisions.

I'd say it worked about 1 in 4 times. The other 3 out of 4 times, engineers created their own labyrinth of over-engineering, spent too much time building things as reusable frameworks instead of simply getting the job done, and over-complicated the system with unnecessary "Wouldn't it be cool if..." type features.

I find my sense of laziness keeps me from going too far down a rabbit hole most of the time.

When I was in a large organization, the upper-management roadmap was based on quarters and years, like Q1 2020, Q2 2020. That's a very large granularity of 3 months.

This had the benefit of absorbing the usual variations of software planning. Any development task could take 1 week or 4 weeks (depending on whether it goes well). This stops being an issue when the planning is quarterly, because a few weeks of jitter doesn't matter.

The question then becomes: do we have time to do this task next quarter, along with the other 3 tasks we scheduled there? Otherwise we'll plan it for the quarter after.
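
That scheduling rule is simple enough to sketch. A minimal illustration (quarter length and task sizes are made up; real planning has more inputs): plan each task at its worst case, and a few weeks of per-task jitter can never break the quarter.

```python
QUARTER_WEEKS = 13  # one quarter is roughly 13 weeks

def fits_in_quarter(worst_case_weeks, capacity=QUARTER_WEEKS):
    """Accept a quarter's plan only if even the worst cases fit.
    Per-task jitter (1 week vs 4 weeks) then never forces a re-plan."""
    return sum(worst_case_weeks) <= capacity

# Four tasks that each take 1-4 weeks, planned at their worst case:
print(fits_in_quarter([4, 4, 4, 1]))  # True: fits even if everything slips
print(fits_in_quarter([4, 4, 4, 4]))  # False: defer one to the next quarter
```

The coarse granularity does the work: slack is built into the unit of planning instead of negotiated per task.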

> engineers frequently assume the best-case scenario for their time estimates, while managers end up planning for the worst-case scenario

I (engineer) have experienced exactly the opposite. On Thursday I say "It will most likely take me a week; I cannot possibly have it done in less than three days; if X is true -- which I can't know yet -- it will be more like two weeks". And that becomes a commitment to have the code on master by next Wednesday.

Perhaps I've just been unfortunate, but most of the time when I have given qualified or worst-case scenarios, I have been pressed to assume the best, and that's what gets written down.

I want to chime in that I, also a developer, mostly experienced the opposite. I hated giving time estimates, they were demanded, and sometimes when I overshot the optimistic time, managers were incredulous.

Very few people involved understand how difficult it is to estimate work for some/most projects. If you haven't explored the problem space pretty well, you can and will run into problems that are unique and very troublesome. And you can't know that ahead of time, except in a very hand-wavy way.

> Except in my experience it’s not two years, it’s two months. And I can set up two micro instances with terraform in less than an hour.

Fucking that. Only skip the basics entirely if you don't plan to hire anyone in those 5 months (no or inadequate tests, no scripted way to create a dev instance? You just cut their first-two-months productivity in half), are 1000% sure no one's gonna horrifically break prod, and don't mind dev velocity dropping ~10% (and/or the bug rate and "oh shit, it's on fire" calls increasing alarmingly) every month over the next 6 months.

There's a lot of room between "we need a k8s cluster with autoscaling, three kinds of databases, lambdas, blah blah" and "LOL develop on prod YOLO!" IME this kind of thing never gets added in month 4 because there's already too much tech debt. Add it in month 1. A couple days and 10s of dollars/month setting up or buying tools & services to keep your team confidently moving fast over the next half-year is some of the most valuable work you'll do.

[EDIT] oh and in the specific case of Infra as Code or fully-scripted deployments, around month 4 when your stuff's a bit more complicated it makes your answer "yeah, give me these couple pieces of info and put me in touch with one of their tech guys and I'll have it done by lunch tomorrow" instead of "uhhhh... fuck... uh.... two weeks? Hopefully?" when your sales guy says "we've got a huge deal almost closed but they need our thing hosted on govcloud/on-prem/European servers, can we do that?"

At the same time, it also means collectively we don't know what we're doing to get where we're going and are just throwing stuff at a wall to see what sticks.

It also made me think of the Apollo 13 movie.

The engineering/process people got the thing into space, the adaptable/dynamic solve it with 'gaffer-tape and whatever is laying around' people got the thing home.

We still need both approaches for different reasons.

“Personal survivor bias“ accidentally chooses a side for you.

Experience eventually makes one realise there aren't diametrically opposing sides, just different capabilities needed in different circumstances.

Except the people who did the "tape it together"? Those were the same people who made the long process before. That's why they could just tape it together, they were deep in the knowledge of the system they were fixing and thus could tape a kludge together to get them back.

But that's because in those days the people that built the thing have to do production support once it goes live. That doesn't happen so much in larger organisations because they have separate teams divided along "add/build something new" and "keep it going" boundaries without a lot of cross-pollination.

There's also the difference between building bottom up with full knowledge of what you built, like the Apollo program team had (even if they bought something externally, the people who made it were available to the team, and/or the team studied the items in depth).

And then there's modern world of slapping together libraries that are often not known, sometimes with closed source, sometimes without real documentation. I've had to use disassembler in the past to access framework we were supposed to use...

Twitter had the fail whale for years. Same with Reddit. Facebook moved fast and broke things. What engineers often don't understand is that engineering is not important to the business; only the perception of it is. Even these huge companies only backfill engineering on their core critical components, while new experimental projects are sloppy messes. The only reason half the stuff is solid is that it gets to free-ride on the plumbing built for the critical core.

I agree. While I'm a big proponent of "as simple as possible", often when people say "x is overkill", they mean "I don't know x" (where x != k8s of course).

I think this is 100% true. I've never used the term overkill personally.

If I don't know something, I simply say I don't know. And if I do, I always position the trade off of the choice relatively and weigh it.

Thing is, it really changes the equation of whether it's worth it or not.

If you or some other senior engineer knows X, then perhaps they can implement X in an hour and things remain simple.

If your team does not know X, then it will take a week to learn X, understand it, try it out on a toy project, and implement it in production (which alone might make it overkill, because it's not worth a week), and it becomes quite plausible that we'll implement it wrong in a way that will bite us later.

So unless the benefit is huge, then it's quite reasonable to say "x is overkill" for your team simply because you don't know X.

The GP of my comment says:

> Me as a manager at a startup: "Do not install Terraform. Finish the import prototype... now."

The team knows Terraform. The manager probably does not. I was responding to that.

Well, introducing some new technology that people will have to learn is often overkill if there is already a slightly less optimal way to do it that the whole team is familiar with.

Do you know k8s?

Haha perhaps not well enough, thus proving my earlier point :-)

If you can set something up in an hour, don't discuss it; show it. Spend the hour, show it works. If it does, fantastic. If it doesn't, drop it.

The point I think you're making is that "it depends" and I agree with it.

I've seen companies that have no funding issues building products that they know they will need to maintain in coming years and still follow a few or none of the best practices.

But I've also seen people building SW that no one can guarantee that will ever be used (and gets scrapped after a few months) spending days and weeks setting up the perfect agile CICD setup and arguing in endless pedantic discussions in code reviews.

All things with balance, and I agree analysis paralysis and overengineering are real problems to avoid...

but I'm sorry, you can't deploy without CI/CD unless it's a desktop or mobile app or something. You don't skip that for servers. Does it have to be perfect? Hell no, but it needs to be in place before you can seriously call it shipping.
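
And "in place" doesn't have to mean much at first. A deliberately minimal sketch of the idea in Python (the step commands are placeholders, not a real project's scripts): run the steps in order and refuse to continue past a failure.

```python
import subprocess

# Placeholder steps: test, build, deploy. Swap in whatever your project uses.
STEPS = [
    ["pytest", "-q"],
    ["./build.sh"],
    ["./deploy.sh", "staging"],
]

def run_pipeline(steps, runner=subprocess.run):
    """Run each step in order; stop at the first nonzero exit code."""
    for step in steps:
        if runner(step).returncode != 0:
            return False  # gate closed: nothing gets deployed
    return True
```

Twenty lines wired into a git hook already beats pushing to prod by hand; you can grow it into a hosted CI system later.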

I'm guessing that you work on server-side stuff, because most people who do mobile/desktop work would say exactly the opposite if they desperately had to come up with a general rule: do whatever you want for servers, but you have to do things right for mobile/desktop. Those are your servers, and you can fix them whenever you want. You push software to end users, and it's gone forever.

I do, mostly. Also anticipated there'd be at least one mobile/desktop app dev who would swoop in to say it's important there too and I agree! It's just so common to have two servers to deploy to, and as soon as you do, you aren't going to repeat manual steps to deploy your software on them. I guess in that sense app stores are sort of like CI for apps :-P

But indeed, especially on mobile where you need to have releases go through external validation, CI/CD can save you a lot of time.

Plenty of Unicorn IPOs disprove this thesis.

When that dev gets older, he'll have the self-confidence to be honest about his reasons and say, "No. Then we won't be able to see our infrastructure changes right now, and in 2 weeks' time we will be confused enough that our forward progress will slow significantly."

> Then proceeds to blow 2 hours complaining on hacker news

You are absolutely right, that was wrong. They should have instead ignored you and installed Terraform.

Why not both?!

<next week> where's the import prototype? check it out, I moved our metrics stack to InfluxDB and Grafana! Now we can scale to 10,000 instances no problem!

The time when you have 2 micro instances is the best time to implement terraform. You can do it in 5min, existing infra is obvious, and you likely can explain it to 2 other people and be done.

Once you're established, growing, and accumulating features it will take weeks/months. (And if you need a duplicate environment for staging, how many clicks will it take?)

I thought the same thing, but after reading what the comment said again it wasn't a "use terraform or don't" story, it was terraform vs "import prototype".

Generally for most resources, I feel like I can write terraform just as fast as I could create the resources in the console.

Probably not a good example, but I agree with the sentiment that you have to prioritize the business.

And it depends on existing skillsets too. I did a recent project where I almost certainly would have used Terraform if I already knew how. But the choice was between "learn Terraform" and "do this in a Makefile," and the "learn Terraform" was clearly crazy overhead at the time.

I am not convinced that you are competent enough to be an engineering manager at a big company with oversight from an experienced manager, never mind at a startup. You've revealed a cascading set of failures here that are 100% your responsibility, and then have the audacity to caricature your well-intentioned junior engineer:

1) Having 5 months of runway is an existential problem that your CEO's fundraising should have addressed 7-12 months ago

2) Lacking a proper code review process ensures that you'll pay pounds of cure long term instead of ounces of prevention, which is not a frugal use of your already scarce engineering resources

3) Your senior engineers should be having the conversation with your junior engineers about the tradeoffs of heavier vs lighter forms of code review and CI process. Unless...

4) You hired a junior engineer as your early or first employee without a senior engineer and are unwilling to do the necessary work mentoring them yourself

You are responsible for each of these failure modes. Who made the decisions to get the company to a place where it only has 5 months left of cash? Who made the decision as a startup manager to let other managers take it there? Who made the choice to hire the junior engineer? Who made the decision to forgo proper code review standards?

It sounds like your junior engineer has a sense that they are on a sinking ship. Maybe they're right. You're one of the captains. Turn it around, or cut your losses and learn from your mistakes. Saying this as someone who has been a startup manager, I think it is a profound breach of the duty of leadership to not do so, which is what it sounds like you are doing right now.

EDIT: In case you need some pointers or some food for thought, I think that another HN thread that's also on the front page could be a great place to start:

Remember that management is not the same as IC work at a higher seniority. It is a fundamentally different vocation. You really need to internalize that to be effective and it is tricky.



I am not convinced that you have ever worked at the average startup. Your points are incredibly divorced from the reality most face.

Years worth of runway is not the norm. Having your choice of senior engineers to hire right away is not the norm. Most startups exist as sinking ships.

It sounds like you lucked into a company that raised a shitload early on and so had the luxury to make these decisions.

> Your points are incredibly divorced from the reality most face.

> It sounds like you lucked in to a company that raised a shitload early on and so had the luxury to make these decisions.

I worked for a string of them early on in my career. I was miserable and worried that I'd never get anywhere better because the scarlet letter of working at a failed company (or several of them) would be my first impression. Rather than blaming it on luck, I resolved to figure out where I went wrong and how I could avoid it in the future.

While you cannot remove the element of timing and large scale market events outside of your control, startups are about making (and learning to make) calculated risks. Mediocre startup operators love this kind of argument (so-and-so can do things because they raised more money than us, because they're lucky, not because they executed better) because it lets them escape the consequences of their own unforced errors and ineffective execution. You won't hear this argument from effective operators because it would get laughed out of the room. They know that they have to operate proactively, not reactively, from (and even before) their first fundraising event.

The most well run companies have a tendency to create exponentially more, not less, expansion and subsequent job opportunities. My advice is don't work for a risky early stage company until you learn what success looks like somewhere bigger. At an early stage startup that's on the right track, you'll see a lot of the same things just in a smaller org. But it's hard to know what you should require unless you've at least seen what it looks like to work at a well oiled machine. Folks that tell you otherwise are either ignorant or willfully trying to pull the wool over your eyes.

Counterpoint: Twitter's codebase early on was a well-known clusterf%%% that was impossible to scale reliably and was consistently causing the fail whale. They found product market fit, got funding, and improved their code.

Counterpoint #2: Amazon's code base was a monolith and highly dependent on Oracle. The rest is history.

The order of priority is to have a product that customers want, convince investors to give you more money to continue your growth, improve your infrastructure, and hopefully become profitable or at least have an exit strategy.

Twitter was founded in 2005. Amazon in 1995.

Git didn't exist, and SVN didn't exist before 2000. Common languages in use today didn't exist or were in their first versions (Go, Python, Java, etc...). Most of the unit testing frameworks and IDEs that are taken for granted weren't invented yet.

There were different expectations back then. No software company should be running in 2020 with no version control, no unit tests, and no CI.

Source control very much did exist in 1995. Twitter was initially built on top of Ruby.

Java was very big in the Enterprise by 2005 and yes there were plenty of IDEs back in 1995. Amazon I believe was originally built on a C code base.

There are plenty of companies that have no automated unit tests and do quite well with manual testing.

But we aren’t just talking about source control and unit testing we are talking about code quality.

Double checking the dates. CVS is 1990 (source control before SVN before git). Visual Studio first release is 1997. Eclipse and IntelliJ are both 2001 one month apart. PyCharm is 2010. Jenkins CI is 2005 (initially named Hudson). Teamcity is 2006.

Code quality is limited by the tools available at the time. It was an uphill battle to preach for (automated) scripted builds or any form of (automated) testing without having the later tools/frameworks/infrastructure at your disposal.

Fun fact: the Java compiler was still fixing "bugs" in 2016 to be able to produce an identical JAR between builds.

Nowadays, if an intern opens PyCharm, he can get a lint report on the current file and the entire codebase right away, finding potential bugs and questioning why this thing isn't running automatically on commit. This pushes quality up organically quite a bit. By comparison, doing embedded C++ development in the 2000s, I can only recall one company having a static code analyzer, which cost no less than $50k for a handful of small projects (they charged by line).

I can't really blame them for nondeterministic builds. Not only was Java slower but computers in general were slower when javac was written, so recompiling unchanged source files was an obvious waste that everyone tried to avoid (by checking mtimes, because hashing every file was also pretty expensive).

I doubt very seriously that Amazon was running on Windows servers so I fail to see the relevance of Visual Studio. Even so, the first version of Visual C was released in 1993 (https://winworldpc.com/product/visual-c/1x).

C is not a new language. People have been writing large well structured C code for decades. The first C linter was written in 1978. Even PC-Lint was written for C in 1985.

I think this is generally a great comment, but just to note: you seem to characterize the "5 months of cash" thing as a decision. But most startups fail, and all of those startups at some point (or multiple points) in time only had 5 months of runway left, which is not always (I would say not usually) because they decided it was ok to have that little runway, but because they could not avoid it despite their best efforts.

But you're totally right that this is not the junior dev's problem!

> they could not avoid it despite their best efforts

I think we're making the same point, and I don't disagree. I think this is what I'm getting at. A CEO who cannot raise and is at 5 months of cash left has a) effectively gotten a vote of no confidence from their existing investor base, or b) is not asking, because then they are delaying the inevitable and eventually impossible-to-ignore conclusion a). Not being able to avoid low cashflow despite your best efforts is a failing in the CEO's role. The CEO is the last line of defense in terms of responsibility, and fundraising is supposed to be one of their core competencies and responsibilities. It's why the job is not for everyone.

This. Though, I think the exact example may be debatable.

The (appropriate) question was “We have 5 months of runway, does this make sense?”

Assuming the manager is halfway competent in listening to their team, then this should be a useful focusing activity for both the dev and the manager.

In business, and in life, it isn’t about what you “can” do; it is more about focusing on the handful of things that give you the biggest “bang” for the buck. Think of the Pareto law, and how you can get 80% of a result for 20% of the effort.

So, in the context of the scenario, the developer should be asking themselves:

1. What problem does <insert tool/process> solve now, or will it solve in the next 5 months?

2. How painful is the problem, and what is its frequency? (High impact but low probability; moderate impact but high probability.)

3. What is the cost of the new process/tool to the team? (Time, cognitive load, $$$ that could be spent elsewhere.)

As long as the ratio of value to cost is large (10X or more), in the timeframe that matters (in this example 5 months; for most companies time is measured in years), and you can articulate it, selling the idea should be a no-brainer.

However unless you can truly get that 10X(or more) return, then it might be better to focus your improvement activities elsewhere.

All that said, I do think the “Command voice” at the end is a bit of an issue... at least for me. You are “pulling rank”. This is “okay” if you have built up the “rep”/karma/trust to do so, but unless you have, will likely make the dev (long term) run for the hills. It might take a bit more time/coaching to explain something like the above to them, but that is an investment in people that lasts a lifetime, whether or not the startup lasts past 5 months.

You’re assuming the manager hired senior engineers who competently know the best strategies in each area. By contrast, an inexperienced engineer might take the worst path.

Rapid prototyping does not mean abandoning established programming practices that help people feel confident about their work.

In my org, I have created a process to clearly define what makes a prototype different from a MVP. It appears to be somewhat successful for what my goals were, which was to prevent prototypes from turning into long-term production code.

Prototypes have no restrictions that we would normally place on code (coverage %, automated pipelines, etc) and can only be created in a sandbox AWS account that has no access to our internal systems. It allows developers to prove out concepts or work with product to quickly iterate on PoC.

On the other side, PoC's have a limited lifespan and if successful, are expected to be re-implemented as an MVP, following our internal standards and guidelines.

This is excellent. I'm always a bit wary of treating something as a "prototype", because of the number of times I've seen things evolve into the actual product.

I like the separation of AWS accounts.

Yes it does. The clue is in the name - prototyping. A prototype is only supposed to prove the concept works; it doesn't need to be good.

The problem is that most companies can't do prototyping because they don't have the resources. They build a prototype, and then they don't throw it away and build the proper version; they use it as production.

I was hoping to see the word 'protoduction' in your comment.

This is such a good word, great find!

There are so many things in production that are prototypes (even when people/companies cannot allow themselves to recognize them as such...), having a word for this pattern is great.

Yes. There's a reason why best practices are called "best".

My experience recently went the other way around.

"It's just simple project, don't bring out the big guns".

Somehow, a simple requirement of "make sure we can back it all up and load it elsewhere, manually, nothing fancy" made it so that I lost 5 days purely due to acceptance of manual work at the start.

I'm gravitating towards trying to pick automation tools that I can replace easily, but I don't skip automating things.

Often, creating things using proper automation tools ends up being significantly faster than doing things manually, and the result is better.

If I'm setting up a proof-of-concept, I'll at the very least try to write a shell script that contains whatever it is I did to get a host set up.

If doing stuff for production, I might have one instance where I set up things manually while simultaneously writing the appropriate automation to get another instance up and running. That allows me to iterate the set-up quickly while allowing me to verify the result by recreating it from scratch using the automation I made.

I would argue the mid-level devs are more dangerous in this regard than junior devs. Mids tend to follow rules and treat them as gospel. Seniors know when and how to break those rules. Juniors are a bit all over the place in terms of outcomes and expectations.

Folks are focusing on "five months of runway" or other minor details and missing the big picture here.

Big companies typically have fewer constraints on time, money, and developers. They can have very mature development processes because of that.

Small companies typically have tighter constraints. It's not as if this hypothetical manager doesn't want better process, but if the things to spend time on are "better deployment methods" or "ship this feature in time to land a critical contract," there's no choice there.

OP has, potentially, a unique position here. If they can find a way to use their experience to help implement small process changes without grinding development to a halt, they can seriously improve their team and maybe emerge as a leader as well.

This is different from "we need to stop and do all-team code reviews before every PR merge!" The time may not be there, and the management buy-in sure isn't.

But encouraging teammates to add new tests with each new feature, and helping folks who may not know how to do that, can have a huge impact. Taking an hour or so on Friday afternoon to gather the team and casually review merged PRs, or doing a post-sprint meeting over drinks, can be a good unwinding activity.

OP's got an opportunity here.

Exactly this. In a similar situation I am aware of, a senior dev came in, found some measurable bottlenecks, convinced non-technical management that they could be fixed, and then fixed them. He is now the CTO.

The key here is to find problems with the current environment that can be tracked: downtime, performance, bugs in prod, etc. If you can't find any measurable issues that management can relate to, then quite frankly, there isn't really any reason for management to invest in solutions to problems that don't exist.

This, unfortunately and hilariously, rings quite true.

Not to open the test-driven development can of worms, either, which seems to be part of the OP's dispiritedness... but I've yet to see any scenario where TDD was not a massive waste of time.

Thank you, it feels like heresy whenever I say that... like it's one of those grim realities you're not allowed to point out. I'm all for automated testing in a sane manner, but TDD is so rigid and over-engineered.

"Can't fail your TDD tests..."

taps forehead

"If there's no code to test"

Depends. Is it the crazy religious TDD where you are somehow supposed to write tests before writing any code, despite the fact that you're writing exploratory code? TDD works best when you have figured out what to test for; unfortunately, more than once I encountered people who wanted to write tests for exploratory code in a rigid TDD-like environment.

(sidenote: It's possible to do kind-of TDD with exploratory code in certain environments - essentially, you start writing code from top to bottom and fill in missing bits using debugger all the time)

I have only toyed with it on personal projects. But that is kind of a misrepresentation of what a lot of its proponents propose. You typically will not be writing more than a part of a test, just enough to get it to fail, before writing the code.

It can work well as a built-in driver to exercise the code you are trying to write as you go. It also gives you a chance to test the interfaces of the code you are making for usability as you go, because you are coding against them.

There are actually some really interesting videos demonstrating the RED/GREEN/REFACTOR cycle they propose.
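As a concrete sketch of the red/green/refactor cycle described above (the `slugify` function here is purely illustrative, not something from the thread):

```python
# RED: write just enough of a test to watch it fail
# (running this call before slugify exists raises NameError).
def test_slugify_replaces_spaces():
    assert slugify("hello world") == "hello-world"

# GREEN: the minimal implementation that makes the test pass.
def slugify(text):
    return text.replace(" ", "-")

# REFACTOR: improve the code while the test stays green,
# e.g. normalize case and surrounding whitespace.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

test_slugify_replaces_spaces()  # still green after the refactor
```

The point of the cycle is that each step is tiny: the failing test defines the next increment of behavior, and the passing test licenses the cleanup.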

That's why I mentioned "crazy religious". I've seen it go that far in the real world, which warps TDD to the point where a good technique is thrown away by people disgusted with the pushed purity message.

It doesn't help that often, even otherwise good material about TDD didn't really talk about cases like exploratory coding. Working from messing around in a REPL toward formalizing the code into TDD skeletons is a pretty powerful method, but it isn't (wasn't?) mentioned often when I learnt the most about TDD (granted, that was 2005-2015).

TDD excels at messing around with the design of an API.

As you're basically exercising the API through the tests, you're seeing how nice it is to use.

If you're only running two micro instances, I'm assuming the rest of your infra is small.

Likely would have been a single day project to have everything inside of Terraform.

I agree.

I work for a company that took all sorts of shortcuts that we are digging our way out of now. But guess what? If they hadn't taken those shortcuts, they would have run out of money before they found product market fit, would not have referenceable customers (B2B), would not have been able to secure funding and the company wouldn't be around today and well enough capitalized for them to pay me to help improve processes.

I agree completely with this post.

If you know the runway you have before things start to fall apart economically, I don't think there's much to argue regarding code quality (and general maintenance of the platform).

In large companies, projects usually have a budget and some desired launch date. If that launch date is not met, your _worst_ case scenario is that you're fired, while your best case is acceptance that it just got delayed by whatever factor affected the outcome.

With small companies, your worst case is you're fired and the company could go bankrupt (or something as bad).

Now, OP said he was at a mid-size company, and I'd like to believe he could be a beacon of change as the company grows. Maybe all the other seniors were used to working in startups and just can't fathom doing development any other way.

Bottom line is, your expectations might be off depending on the scale of the company and their current objective.

It's because we hire "software engineers" but we actually want them to be "business hackers", while lying to customers and regulators about the quality and integrity of the product. Then we mock them for trying to do the job they were hired for. The fact that programmers bias toward rule following (the computer doesn't react well to people bluffing) is part of the conflict.

That's not what OP is talking about though.

Not using prod DBs for development is common sense and best practice.

Not pushing to prod is common sense and best practice.

Same thing about breaking tests, spaghetti code without review.

It's not some pie in the sky, engineer-wants-to-tinker-with-shiny-toys like using Terraform or Kubernetes would be if you only have 10 paying users and two microservices.

Worth noting that the OP said "mid-sized company", which probably means one that doesn't have a runway measured in months. That makes a big difference.

Terraform is a bit of a straw man here.

OP was talking about code reviews and testing. He did mention infra as code but not TF specifically. He didn’t say it was a startup with 5 months runway (aka a soon to be dead company?)

TF is like Typescript: if you already know it why not use it? If you have to learn it on the job at a high pressure startup then maybe not. Not clear he is working at a startup or just a regular old company

This story is so eerily close to what happened at my company, even as regarding specifics that I was tempted to ask if you are a colleague. :)

That seems like very myopic management, since a) motivated employees will work harder, and b) technical debt imposes a continual productivity hit even in the short term. If spending 20 hours on Terraform saves the team 100 hours of debugging over the next 6 months, then that may be a net win. Keep in mind that, due to context switching and flow, even a 10-minute "fix it now" bug will use up 2-4 hours' worth of productivity.
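The arithmetic in that comment can be made explicit. All numbers below are the comment's hypotheticals (plus an assumed interruption count), not real measurements:

```python
# Back-of-envelope ROI for a hypothetical automation investment.
invest_hours = 20            # upfront time spent on Terraform
debugging_saved_hours = 100  # debugging avoided over the next 6 months

# Each "10 minute fix it now" bug also burns flow/context-switch time.
interruptions_avoided = 20          # assumed count, for illustration
flow_cost_per_interruption = 2      # hours of lost focus each (low end of 2-4)

total_saved = debugging_saved_hours + interruptions_avoided * flow_cost_per_interruption
roi = total_saved / invest_hours
print(f"saved {total_saved}h for {invest_hours}h invested: {roi:.0f}x return")
# -> saved 140h for 20h invested: 7x return
```

The interesting part is the second term: counting only the 10-minute fixes dramatically understates the cost, because the lost-flow multiplier dominates.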

Newly hired experienced eng lead a year after this conversation: "Ok we won the product-market fit lottery with our prototype - awesome! - now we should begin identifying the highest value pieces to reimplement with better practices, so that we can scale the team and product effectively".

You as a manager at a slightly more mature startup: "What, you want to rewrite? No, everyone knows rewrites don't work! Just hire up and keep moving fast."

Experienced eng lead after a few months of burning out on a miserable project: "I have better options for this, here is my resignation."

Board member after another few years and lots of turnover: "It seems like we're no longer able to execute because of all our technical debt. You have been a great manager for this early stage of the company, but now we need to bring in the mature company professionals. Thank you for your service."

end scene

I do not even claim that this is a bad outcome. It really is important to execute quickly early if you are going to have any chance of success. I have made the mistake of being too slow too early. But path dependency is also a big deal, and there is rarely (never?) an obvious bright line point at which it is clear that the time is right to up the professionalism game. It is not hard to get into a situation where the team is spending nearly all their time bailing water out of a sinking ship that you're still trying to sail while competitors speed past you in newer, sleeker, dryer ships. I also think the hiring / turnover risk is a real problem. There are companies with good engineering practices, which make work much more pleasurable, and you are competing against them for talent.

But I think this is a false dichotomy. I think there are a set of practices that don't introduce so much friction that they are an existential threat early on, but that are small steps in the right direction down the path to practices that scale well.

I think it is telling that you went straight to Terraform in your example, although nothing like that was alluded to in the OP. I would agree with you that Terraform does not belong in this low-friction-step-in-the-right-direction category of practices. But I think all or most of the practices the OP actually listed do fall into that category. Code review, in conjunction with automatic style fixing (which most languages have tools for now), is low friction and has immediate value. Same thing for unit tests. I do think a whole end-to-end testing setup is beginning to be more friction than can be afforded, but unit tests are easy (especially if you choose languages with type systems that allow you to dispense with tests that verify the type contracts are being satisfied). Automated deployment is actually both a step in the right direction and something it would slow down your early execution to not have.
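To make the "unit tests are easy" claim concrete, a low-friction unit test can be a single file with plain asserts; the `discount` function below is a made-up stand-in for real business logic:

```python
def discount(price, percent):
    """Apply a percentage discount, clamping so the result never goes negative."""
    return max(0.0, price * (100 - percent) / 100)

# The entire test harness: a function of plain asserts, runnable directly
# with `python thisfile.py`, and also picked up automatically by pytest.
def test_discount():
    assert discount(100.0, 20) == 80.0
    assert discount(50.0, 0) == 50.0
    assert discount(10.0, 150) == 0.0  # over-discounting clamps to zero

if __name__ == "__main__":
    test_discount()
    print("ok")
```

No framework setup, no CI requirement, no mocking: this is the kind of step-in-the-right-direction practice that costs minutes now and scales into a real suite later.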

There is a huge gray area, and I think it is really difficult to strike the right balance (to wit: I've never seen or personally accomplished this balance satisfactorily), but I think this is the right way of framing it. I think the right mindset for you as a manager at an early startup is not "we don't have time for that!" but "will the ROI of that be worth the time?", and you need to do the hard work to explain that thinking to, and get buy-in from, your developers.

I think you have some things to learn, as a manager, and as an engineer.

I'm no SWE, so it might not be 100% applicable here.

As a manager, one of your tasks is to make sure that your team knows why they are doing what they do. That way, when there is no time for lengthy discussions, they don't feel demoralized when you give them an order they disagree with.

Another task is to make sure you get the best out of your team. That means using the experience and knowledge of people as much as possible. Not doing that only drives down motivation.

What happened to me in the past, in a different domain, was somewhat similar to OP. I was explicitly hired because of my FANG background to improve operations of a new business unit for a German unicorn. So that's what I started to do: map processes, talk to the team, shadow them, identify issues and problems, work out KPIs and SOPs... the drill. Only to be shut down by my manager at every turn "because we have 1,000s of problems right now and none are addressed by what you are doing". That sucked, it really did. My impression was that they wanted FANG level without putting the effort in. Just having someone with the right experience doesn't magically solve your problems.

In the above example, when time is critical, implement it. But then have an open discussion with your team about best practices, and maybe use whatever solution your "big-corp" people have in mind.

People usually can't articulate what they want, and they always want to skip the hard bits. Look to what they do rather than what they say, and find a way to slowly pull them toward what they ought to want. It takes a long time, but you can make things better under conditions like that.

If you have a team of 5, and one wants to implement terraform or some other package like that, what is the current cost to the rest of the team that is unfamiliar with the new software? Will that one person be hobbled for the next few months trying to get their work done and support the other engineers who get blocked by tech they haven’t learned as well? Was the system viable using shell scripts or however it was originally set up? Is it going to tick off another developer who put effort into the current orchestration?

Would you even have cross-cutting concerns in a team of 5? Given previous experience, it's quite common that if you have a team of 5, it will be reasonable to assume everyone is sharing a bus factor of 1~1.5 and one person will be the infrastructure master. Letting them use whatever tools to leverage until you can have more runway is good.

I agree, unless the introduction of a tool for everyone could slow down 4/5 of the team. Startups usually have inexperienced people but the successful ones have talented people. Sometimes hiring an inexperienced big company employee causes these issues because they “know how it should be done” instead of having the experience to exercise judgement. It’s not a management failure to say, “let’s think about that in 5 months after we have a working product.”

The tradeoffs are not obvious. It could be that terraform is as important to this startup as version control would be, they just don’t know it. But if it ain’t broke....

Btw, I’ve worked at a startup (25 years ago) that had no version control and I was the one who identified the problem and introduced it. I also was unaware we needed it or what it could do for us before we had our first pucker-up moment with the source code. After version control we moved faster and slept better. Or we would have if we weren’t pulling all nighters.

The startup failed anyway.

Don't we all?

What exactly do you mean?

Two micro instances don't seem to warrant using Terraform

Once you're experienced writing Infrastructure as Code, you always use it, no matter how small the project is. Right now I run a single micro instance for a personal project and I've got the whole thing Terraformed, which took me less than a workday - VPC, subnet, security groups, S3 buckets, and a basic provisioning script to setup the instance's software and cron jobs.

It is (eventually, once you learn it) faster and more intuitive than bothering with the AWS console, and I often reference Terraform's own AWS resource documentation over the AWS docs since it usually has better information density.

Anyway, the issue is all beside the point, I think we realize that Terraform was used as a placeholder example. Another example might be setting up single sign on, Active Directory, and other things that are truly not necessary for a company of that scale.

The other part of this that if you've got a startup with 5 month runway and 10 employees, hiring an Infrastructure as Code SME is probably not the best use of your dollars. Sure, I was able to put this side project in Terraform in under a day, but I also can't write a front-end application to get myself out of a wet paper bag.

> Once you're experienced writing Infrastructure as Code, you always use it, no matter how small the project is.

It's a good idea to start early with IaC but one of the marks of a good engineer is knowing what tooling to use for each circumstance.

Your app might be a perfect fit for Terraform or it might not, I couldn't possibly comment. Given the info you've provided it seems excessive but it's hard to say for sure

I can't think of a project of any size that terraform would be excessive for.

Agree, if you're introducing it to a new team and don't have time to teach the tooling that could be problematic.

But being somewhat familiar with Terraform I would use it to launch a single EC2 instance. Because it's about as fast to do that as it would be for me to use the AWS Console.

It's not that I think Terraform is needed for that small of a project, it's that it adds no cost even for tiny projects.

The original comment suggested that Terraform was being introduced at the wrong time.

You are right that size is not the best measure of suitability.

Not sure about the runway thing. I'm also partially a six sigma / lean guy, and doing things right in the first place is a key lesson I learned, sometimes the hard way (aerospace definitely drove this home as well).

I would make the case that, even with 5 months of runway, you should invest in a solid foundation. Because if the company goes down, it goes down, only a little bit earlier. If it doesn't, which I assume is the reasoning behind the company in the first place, getting the basics right now has a huge impact on runway and scalability later on.

The problem is that sometimes, the overhead isn't the right decision from a cognitive load perspective and a fiscal perspective (cost of infrastructure + developer time).

If the infrastructure is simple, the Terraform is simple too.

> Two micro instances don't seem to warrant using Terraform

That doesn't really make sense, it's not like there's some huge fixed cost of using Terraform, the whole point of infrastructure as code is that all the infrastructure is code, i.e. it's all variable cost.

Small infrastructure as small code.

And then it's all there when you expand. Change `num_micro_instances = 2` to `4`. `10`.

They probably warrant CI and tests though.

If that team can't even do code review and they're pushing to production as OP said, maybe the startup should proactively close shop and everyone should go back to something they're competent at, like growing potatoes.

How long has your startup been in existence? If it's more than a few years and you haven't found market fit, then that explains why you have "5 months of runway".

Your standards are reasonable. However, what's missing in a lot of comments is a good process for assessing the human and business factors that led to where the company is now, and how to approach changing it. This is natural, as most people here are techies first, but it could set you up for a bad experience.

There are very limited conditions under which making this a blame exercise would be the right answer.

There are also business conditions under which this situation is the best the business could have achieved by now, e.g. if other things needed to be higher priority for the viability of the business. Having said that, once you are in such a situation it is difficult to get out of, and such situations tend to linger far longer than any justification.

What you probably need to do now is find out how much appetite there is to change, and what the blockers are.

Strongly agree - particularly this part:

"what the human and business factors that led to where the company is now, and how to approach changing it"

This is a people problem, not a technology problem (or an opportunity if you look at things that way).

Near the start of my career I was at a small/medium sized company on a tiny engineering team (3.5 people). We also had a lot of responsibility. Naturally we did the only thing we could and formed silos and worked in our individual silos without too much regard for testing or pull requests. We had a lot of communication and I personally owned the "back-end" and was able to ship stuff fast. I knew how every piece of code worked.

We eventually (over our protests) got a few new developers. They didn't fit in our silo system. Our lack of tests/PRs didn't scale. We tried to reform and encourage people to use pull requests, but everyone requested review from one developer who hardly looked at the code (scroll-scroll-scroll-lgtm reviewing). Code added by people outside a silo didn't fit in at all with the rest of the code base. Testing was never made a priority. Eventually I left and took with me almost all of the institutional knowledge about some of our services.

We made horrible practices work until they didn't. No one would have been able to join that team and fix it. The company I left for had mandatory testing, multi-person code review and lots of discussion. It was like stepping into an entirely new world of software engineering. In my view you need to instill a good culture from the beginning or... good luck.

It's all about what you value as an engineer. You and I have similar values in this regard, and good practices are worth a lot to me. I don't think your ask is unreasonable at all; I think you are just working with people who have different values.

100% this.

For instance I'm an engineer who cares far more for business/technical alignment than software practices.

If I had to choose between company A, where I work closely with domain experts but the software practices are awful, and company B, where we understand the problem domain through 4 layers of management and business analysts but the software practices are top notch, I'd choose company A every time.

Also, poorly applied software practices upset me far more than no software practices. For instance, most places that implement code review don't use structured code reviews, and spend most of their time enforcing their tech lead's specific nitpicks (which are never written down) instead of actually looking through the code to reduce bugs.

(Did you mean you'd choose company B?)

No, "company A" makes more sense from the context.

It's a matter of personal mental space: some SW engineers like to build things that work and solve problems in the "outside world". For which you invariably need deep domain knowledge. Technology is a tool, means to an end. "Company A".

And others like to tinker with technology, big picture be damned: it's all about isolated performance benchmarks, tech stack, purity. "Company B".

I don’t even work in software and find this all too familiar. Too many small companies are like this.

Align it with business needs and observable outcomes.

Does the company suffer from outages? (Not just have them—but actually suffer from them.) When there's a production incident, do a root cause analysis. If some of the root causes are related to coding and deployment practices, make notes of that. Start aggregating that information. Soon you'll have evidence that you can reduce outages by changing practices.

Does Product feel frustrated by development speed? Start taking surveys. How much time is spent on fixing past mistakes/dealing with bad code/discovering broken tests? That can be used as evidence too, when you want to start a conversation with management about pausing in the race to tie your shoelaces.

Also, do one thing at a time. Do not try to do them all at once; that is a fast track to nothing getting done. Put them in order from highest to lowest priority and pick an 'easy' win. Make it easy for people to want to do the right thing. Once it's in place, retro it: is it working? Why not? CI/CD takes time to build. These devs probably have not done it before and have no interest in it, because they do not see any benefit. Also be sure you want this, because in a small group you will own it for a long time.

Agreed. I've found that part of the problem is how we (engineers) communicate value to non-engineers. It's not right to get upset with laymen/business people without putting some effort into explaining. I've always found this type of communication effective:

- This database change will improve our page load time from 3s to 1s at the cost of 25% of a sprint's capacity.

- This library version change takes X amount of time because there are 45 sites in the code we need to change, and that impacts 120 test cases which we need to rerun.

- Not adopting this <security tech> puts us at risk of an incident like <example incident from the news>.

I've noticed that (low- to mid-level, at least) business, marketing, and other folks are generally a whole lot looser with their numbers than developers are. That is, they'll toss around measures that aren't meaningful—and not just in some quibbling sense of "well technically maybe these could possibly be wrong" but truly are very likely to be way off the mark—because they didn't properly control for confounders or are clearly measuring the wrong thing or whatever. They'll assign meaning to measures that could easily have nothing to do with what they're saying they mean, or mean the opposite.

... but this usually goes over just fine. Rarely questioned. I think if they didn't do this they'd be just as bad off as developers, if not worse due to (again, generalizing) having a worse understanding of how to measure things well.

This seems below average. No tests or code reviews is common, but pushing directly to prod or connecting to prod databases is unusual. It seems you've gone from one extreme to the other. From what I've seen, even at companies where people work in silos producing untested spaghetti code, production is protected. In any case, among small companies, you'll have a hard time finding the same level of quality you were used to. I'm not saying they don't exist, but I'd say it might be 10% of them.

This is how it struck me, as well. There is a big gradient, and this is the extreme opposite end.

Just be thankful the devs can “push” to production and aren’t individually rsyncing their code into the app servers. :)

The standards you've described are totally reasonable! This sounds like an amazing opportunity for you to have a really wide impact and a great chance for the company to level up their game using your experience. I'm sure you've thought about this already but it doesn't have to be a "fight" - in many cases what people need is some appealing reasons, e.g. describing how their world would be much more amazing if they implemented X, Y and Z. Depending on the size and stage of your company it might actually make sense to do it in a 'quick & dirty' way (e.g. dominating the market might be more important at an early stage than doing it in a sustainable way) but even then they (tech decision makers) should be aware of the tradeoffs and if they are, they should be communicating the reasons to the whole company.

I recently joined a company (for less than ideal reasons) knowing they don't follow basic software practices. I felt frustrated in the initial months and then took on the challenge of evangelizing and implementing X.

But I'm getting too much resistance from "senior" developers who've been there for 15-16 years. Everyone agreed to do X and circumvented it the very next day.

This consumed so much of my time (outside office hours) and energy that now I wonder why I bother making a change when I'm not getting paid or even recognized for it.

The Joel Test:

https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...

The Joel Test

    Do you use source control?
    Can you make a build in one step?
    Do you make daily builds?
    Do you have a bug database?
    Do you fix bugs before writing new code?
    Do you have an up-to-date schedule?
    Do you have a spec?
    Do programmers have quiet working conditions?
    Do you use the best tools money can buy?
    Do you have testers?
    Do new candidates write code during their interview?
    Do you do hallway usability testing?

I think this is super helpful. I definitely don't think all scores on the Joel test are equal: I would steer very clear of a company that scored an 11 because they didn't use source control, but would happily work at a company with a score of 8 because they allowed some bugs, had no dedicated testers, did no hallway usability testing, and took a few steps to build.

Joel included this in the SO jobs site. I wonder how many employers lie to themselves about the answers to these questions.

In any case, questions like these are great ones to ask during an interview.

Companies with poor process will be pretty frustrating to work at.

Joel's test is a good baseline. Ok, some things might or might not apply in some cases (are you really not going to put out a feature before fixing a minor low priority bug?), and I've seen companies follow these items but still be atrocious places, anyway.

My point is: a lot of software wasn't built using these principles. Software that's used in a lot of places and/or got very popular (also note there's no "unit tests" in those items)

So yeah, these are all good, but what I would set as priority (in my mind)

- Source control

- Quiet/comfortable working conditions (the "best tools" might be merged here - though the meaning of best tools is debatable)

- Bug database

- One step build

- Candidates writing code on interviews


Do you use source control? YES

Can you make a build in one step? NO

Do you make daily builds? NO

Do you have a bug database? NO

Do you fix bugs before writing new code? NO

Do you have an up-to-date schedule? NO

Do you have a spec? NO, NEVER.

Do programmers have quiet working conditions? YES

Do you use the best tools money can buy? NO

Do you have testers? NO

Do new candidates write code during their interview? NO

Do you do hallway usability testing? NO

Unfortunately this test lets absurdly low quality code pass. While code quality can be subjective on the margins, general poor quality code is obvious to most.

Oh my. Run away.

If that is their current culture, there will be a lot of pushback if you want to initiate changes. Especially among the senior (responsible for the mess) ones who get defensive real fast.

Maybe ask more about their engineering culture next time around during interviews.

Not in my experience.

The devs on teams I've worked with (including me) are most of the time quite aware of the places where their code and setup are lacking, and as long as they don't feel personally attacked are very happy with people who come with implementable solutions.

The not feeling attacked and implementable parts are key.

This. I've never had a negative response when I joined a project and discovered obvious problems when I explained why I think that they are or are going to be a problem and offered to help fix them. Nobody likes the guy that shows up, says "that's all shit, you need to fix it", and vanishes again. But I've never seen anyone hate the guy that shows up, says "I've noticed there's a lot of friction around X. I think we could make it easier to work with and more stable by doing Y".

And I've always liked it personally when somebody does that to me. When somebody with a lot more experience and/or skill helps you, it's like some super high level player carrying you through boss fights: you level up much quicker than if you did it on your own.

I'm of the same opinion. I love getting feedback, better if it's constructive, but all kind of honest feedback is useful and welcome. And it doesn't always come from people with more experience, sometimes a greenhorn will see things that a greybeard like me doesn't.

But I've seen people giving negative responses to the best feedback. Hell, I've had people complaining for hours because someone else replaced their manual process that took an hour to do with a script (and copy&paste of the result) that took literally seconds to run.

Yeah, I've seen that as well. I believe it's mostly fear-driven, in a way of "if this gets automated, why would they need to keep me around". It's a terrible mindset for everybody involved. It really slows the team down, and it's a major issue for the person themselves. It's like impostor syndrome on steroids. Not only do they worry that they might be found out, they worry they might be replaced by a script.

I've never personally seen somebody getting fired because a script can do their job. I've seen them being freed from doing the same repetitive bullshit day after day though, and finally being able to actually tackle new challenges.

But unfortunately, that kind of mindset is not something you can change through rational arguments, at least in my experience.

Yes I think the not feeling attacked part is a responsibility of both parties. And therefore sometimes unavoidable.

Unfortunately I have to second this advice. There's very little chance you'll be able to make things change so better to move on now than when you're depressed in a few months or years.

If the author is not going to accept the challenge, then yes, running away is the best choice. On the other hand, the situation is unique, and the author can try to prove himself with these challenges.

But I agree with the last sentence. This is a huge mistake, and it seems the author did not ask the proper questions during his interviews. Perhaps this is the most valuable lesson from the story: you have to know which company you are going to join and the state of its engineering.

Yes, run.

I've been trying to improve our standards for years, it never gets traction.

My perspective is that quality is a process and a project rather than just a goal or a fixed property of a team. This new team is like a project that's barely begun and you're coming off a project that was in maintenance mode.

So like any project it works best if you do one thing at a time and build up some momentum. You can't fix everything at once.

See if you can get people to buy in to just getting the indentation consistent (my favorite place to start), or preventing the development software from touching the production database.

Get one win, give it a little time, and then go for another pass. Meanwhile set an example in your own contributions. It is very possible to improve things but it takes patience, planning how to deal with setbacks, flexibility, and recognizing what the real priorities of the company are and what quality means in context.

I have been in both kinds of companies. 20 years ago, having none of these basics was almost the norm. It can work, and sometimes I even miss those days, because it can be fun and very fast paced. However, it also comes at a price. There's a constant, high stress level because things break all the time, even on production, and you have to be quick to fix them, often directly on production. I wouldn't want to go back, especially not with a bigger team or today's average developer and project complexity. And it sounds like you already made your decision. Unless there are compelling reasons to stay (and maybe try to introduce newer practices), you have learned a lesson which questions to ask in your next interview (also google for Joel Test) and try to find something new.

I think it's fine until your product and team reach a certain size. Our product is currently beyond that point, but our tech lead still carries on adding more stuff and not adding any tests.....

> 20 years ago, having none of these basics was almost the norm

I guess I worked at one of the best then. It still went out of business.

I just read the almost 20 year old Joel Test page for the first time in a long while and I was surprised by what it didn't contain. For example, it doesn't even mention unit tests. Which is no real surprise, because JUnit itself is only 23 years old. Things like code coverage tools didn't exist back then. And while there were some early tools for automated builds, like Mozilla Tinderbox, setting them up was a real challenge.

Unit tests are definitely plenty old.

I think that maybe folks aren't quite clear on what "Unit Test" means.

Nowadays, "unit test" means a test written into an existing test harness system, like JUnit or XCTest (what I use).

In fact, I was writing unit tests for PHP-based Web apps and C++-based Mac software more than twenty years ago. I just had to write my own framework.

Even nowadays, I sometimes write my own framework. For example on my [hopefully] last PHP project, I designed a SaaS-type system, and eschewed PHPUnit; mostly because it would need some special installation that might be beyond the ken of the target users. Instead, I wrote my own test harness system, and it worked great. It wasn't a crazy amount of extra work. I always create massive tests, anyway, so it wasn't that big a deal.

For my Apple/Swift work, I rely on XCTest, which is a perfectly acceptable test harness.

I'm pretty sure that unit tests were mentioned in Writing Solid Code, which is almost 30 years old.

Code coverage and automated testing tools existed and were mature when I started at my now defunct employer 20 years ago. In The Soul of a New Machine [0] (published 40 years ago) such tools were used and the author wrote about them without any indication that they were new or novel. Every new generation of developers thinks that the "old days" were primitive, but that's just not true.

[0] https://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine

So I've been working with a lot of startup-type companies recently, doing agency work.

You have to sacrifice a lot of code quality and professionalism in fast-paced environments; it's almost always better to under-engineer than to over-engineer.

Here are my opinions:

- code review - there are never any resources for QA. I just check that a teammate's PR builds and passes a few manual tests, then merge. I don't spend more than 15 minutes on this. Coding standards depend highly on whether I've worked with the teammate before. Sometimes I disagree with something but still OK it anyhow if it's just me being nitpicky

- Test coverage - not in early stages. If you use TypeScript it eliminates a huge need for test coverage, but you'll still eventually want end-to-end testing. Just not until you're near a production deployment

- Normal development workflow - no tests. It's a high opportunity cost, but again it depends on what software you're building. I'd say 90% of apps don't need testing early on

- CI/CD - yes, it's good practice to still use protected branches and to specify workflows for PRs and naming things

At the end of the day, you realize you have to cut corners and need to weigh the value proposition each tool set offers, as well as how your team composition works.

Sometimes you have to defer tasks to revisit at a later date, because it's over-engineering and things aren't fleshed out enough yet.

The most important thing early on is eliminating tech debt too. Each decision you make cascades: the code patterns you set today are going to be used several months from now, and ideally you want to minimize the need for refactoring early on.

Write clean code with good variable names and sane folder structures, likewise with Sass styling etc. KISS (keep it simple, stupid): just follow those rules.

No, I would say that your standards are quite reasonable. I would also say that there are many companies that do not meet those standards. Usually they have no excuse for it; it's just that no one there knows, or more likely no one cares.

There are also many companies that go overboard with their code review and tests, get none of the intended gains, produce more slowly than they can, because of an attempt to cater to the lowest denominator. But I suppose if you're going to pick one of two extremes, that would still tend to be the lesser of two evils.

> There are also many companies that go overboard with their code review and tests, get none of the intended gains, produce more slowly than they can, because of an attempt to cater to the lowest denominator. But I suppose if you're going to pick one of two extremes, that would still tend to be the lesser of two evils.

I'm not sure that's necessarily the case. IME, not much will kill both the pace of development and team morale faster than poorly conducted, heavyweight code reviews, but one of the few things that will is a dogmatic requirement to have some unnecessarily high level of code coverage in a test suite where everyone is just writing make-work tests to get the number up.

I think if we're considering the morale of the engineers, then they would prefer to work in a wild environment rather than an overly rigid one. In a sense, it's fun to write bad code, commit it without consequences, and just watch a report of your LOC go up (the key metric for engineers in this strawman company).

I think from the business perspective, they would rather have slow to no changes rather than breaking changes. Presumably there is already a product that makes money, and would continue to do so if kept in a maintenance mode. Layer on poorly applied practices, and you've effectively turned it into that.

Testing is a skill to learn like anything else. Teams that dogmatically apply any new practice or process without making a genuine effort to learn that thing will never extract value from it. If your team can try, fail, and learn from mistakes, you are in a mature organization. If your team gives up quickly because of fear of dogma or has no strategy for dealing with the challenges of learning new skills, it's unlikely your team will ever extract value from testing until the team culture itself changes.

IME, the problem more usually happens because someone in management read a book and thought "I know, we can change our entire development process to do this thing I just heard of and overnight my team will all become 10x developers!"

If you've got a team that is genuinely interested in improving and willing to work on new ideas to make the best use of them, both reviews and testing can obviously be very helpful. That's not really the situation I'm talking about above, though.

Eeek. Doesn't sound like you're being unreasonable in your expectations at all!

I've been a small business owner, and a software contractor, so I've seen a fair few medium/large python codebase of varying age and quality.

To address your points from the ~10 codebase I've worked on in recent memory:

- Code review: Always in place, though often the CTO has a tendency to merge without review on evenings/weekends

- Test Coverage: Often a challenge - or rather tests are there for coverage, but are of poor quality and don't really assert anything that useful

- Your suggested normal workflow _is_ the normal. Sometimes there's been a dedicated QA team that verifies the changes as seen by the end user too.

- Deployment: Yep. Heroku or k8s have been common, but at least some form of 1-click deploy/promote

Perhaps I've just been super lucky! There are often still major problems in the codebase and refactoring/rebuilding needed to address the challenges, as well as resistance of the existing team to truly change their mindsets to address the problem v.s. the band-aid (or instant gratification) of a quick fix.

Don't give up the fight!

Adequate test coverage is hard without rebuilding from scratch, and hard to expect in a big, old company. Except when a huge codebase has to be rebuilt anyway, e.g. because Python 2 has to be replaced (one of our software providers is in this case).

Out of all the items on the list (code reviews, CI, etc), it seems like test coverage would be the simplest to improve? Assuming it has a test suite at all, use one of the various test coverage scanners (simplecov, coverage.py, etc) and check the output for a file with a low % of lines covered. Then write one or more tests to cover the edge cases not yet tested.

It's a purely mechanical process unlike most of the other problems which depend on changing the behaviour of people in different departments and/or with more influence than you.
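For example, the coverage report might flag an error-handling branch that no test ever executes, and the fix is one small test. (The function and test below are hypothetical, not from any particular codebase.)

```python
def safe_divide(a, b):
    """Divide a by b, returning None instead of raising on b == 0."""
    if b == 0:  # suppose the coverage report shows this branch was never run
        return None
    return a / b

# The test you'd add to cover the previously unexecuted branch:
def test_safe_divide_by_zero():
    assert safe_divide(10, 2) == 5
    assert safe_divide(1, 0) is None
```

With coverage.py the loop would be `coverage run -m pytest` followed by `coverage report -m` to find the untested lines, then repeat.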

Depends, code that was not written with testing in mind can be very difficult to cover without some major refactoring.

Adding the first tests can be a slow and painful process.

Now, once you have those first few tests? Yep! Add a few tests alongside the ones for new functionality every time you touch a piece of code, or pick the low-hanging fruit every now and again, and you can get to a useful test suite in no time.
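The "major refactoring" is often just introducing a seam: pulling a hard-wired dependency out as a parameter so the logic becomes testable without touching production systems. A hypothetical sketch (all names invented for illustration):

```python
# Before: untestable without a real production database connection.
# def total_owed(customer_id):
#     rows = prod_db.query("SELECT amount FROM invoices WHERE ...")
#     return sum(r["amount"] for r in rows)

# After: the data source is injected, so a test can pass a stub.
def total_owed(customer_id, fetch_invoices):
    rows = fetch_invoices(customer_id)
    return sum(r["amount"] for r in rows)

# In a test, a plain function stands in for the database:
def fake_invoices(customer_id):
    return [{"amount": 40}, {"amount": 2}]

assert total_owed(123, fake_invoices) == 42
```

Production code passes the real query function; tests pass a stub. Each seam like this makes the next test cheaper to add.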

"It's a purely mechanical process" - I thought about this for a while. I think if your testing is purely mechanical then you're probably not getting much value from it. Good tests need to be "smarter" than the code they are testing, in a way.

Unit testing for coverage (as a vanity metric) can be mechanical, maybe, but integration tests are the far more valuable ones (for the most part) and far more complex.

The book "Working Effectively with Legacy Code" by Michael C. Feathers is excellent; in it he talks about, amongst other things, a process for selling the concept of proper unit testing to teams, and how he's managed it in the past.

> I know how fast it is to develop when people write good code with discipline.

Interestingly, IMO writing good code with discipline is also slow (CI breaks, code review takes forever, updating unit tests takes time, integration tests take a long time to run etc.). The main advantage of disciplined process over the cowboy approach is that, in cowboy approach, the code can become unworkable after a couple of years, whereas with "mature" software engineering there's good hope that the codebase will still be maintainable 5-10 years down the road.

From my observations the payback is far quicker than a couple of years. Unstructured, with below-average coders, we are talking about a mess within weeks that becomes intractable without a lot of work.

Those seem like reasonable policies to aim for. What isn't immediately clear is whether they are missing because of a lack of understanding or a lack of willingness to change.

If it's the former and you've come into the small company with more experience than most people there, you might get a long way and become quite popular with your colleagues if you share the benefit of that experience intelligently. For example, with the agreement of the senior people, you could show everyone how to set up a good CI system or automated deployments, if that's something you know how to do and they don't. These kinds of things tend to have obvious benefits once they're up and running, but setting them up in the first place can be a barrier depending on what else you're using, how everything you've already got fits together, and of course the level of knowledge and awareness of the people working on it.

On the other hand, if everyone is well aware that they should be doing something and they know how to but just can't be bothered, that's a cultural problem that you're unlikely to be able to change as the newbie. Probably no-one else is either, unless they're doing it from the top down and have hiring and firing privileges to force the issue by bringing in more people who expect it and, if necessary, letting go anyone who stubbornly refuses to engage. If this is the environment you're in, unfortunately putting up with it or getting out are probably your only certain options.

This has been my experience as well.

If it's a skills problem, by all means set everything up and be the hero.

If it's a cultural problem, run.

In my engineering days there was a saying: you can have it quick and cheap and high quality, but you can only have 2 of those 3.

Sounds like you work somewhere quick and cheap. That's not wrong per se. If lives depend on the outcome, you need quality. But I worked in plenty of financial places that were quick and cheap, especially back office.

Ask yourself: would the organisation be better off if we were higher quality, but it meant we were slower or more expensive? If speed is more important, welcome to spaghetti-code land.

Skipping tooling & processes below a certain (not very low) point isn't doing it "cheap and fast". It's saying "I want you to build my house but I want it to be cheap so don't buy any more hammers, it's too much money and we can't afford the time for you to run to the hardware store, just all share the one hammer you already have".

It's doing it cheap (but actually expensive) and slow and low-quality.

You really want to build that house and I respect that, but the client just wants a tent and the hammer budget is zero. And since a storm is 10 minutes away they're not wrong...

And you can build a tent quick, cheap and good. Getting customers to tell you what they really need is the silver bullet.

It's not quick though.

All of these things they're missing allow you to iterate fast.

It depends on the cost to the end user and the cost of failure.

At one end of the spectrum you have free-to-play games. You'd have to be insane to implement a high test coverage on that. Nobody cares if Battletoads Royale crashes.

On the other end of the spectrum you have safety critical software, financial software, etc, where the costs of failure are very high.

Ask yourself "what's the worst that can happen if this fails?". Asking this can also inform where exactly the tests need to be added first.

You mention f2p games, but those definitely need testing, because they can bring in money that most startups and other companies can only dream of. If their servers go down or the game becomes unplayable, they stand to lose millions.

But I guess there's plenty of games that don't have that kind of criticality. Just mentioning a counter-example. (Personally I'm still salty that GTA Online was released for free but the servers couldn't handle it, I mean come on, it's a decade old, you'd think they'd have sorted out scalability and stability by now. That game still earns its publisher hundreds of millions a year).

Agreeing with your sentiment, but I want to point out that there are free-to-play games out there where I'd be seriously surprised if they didn't do some very serious testing; they make too much money and put out too many new features too fast to forgo automated testing. Just to name a few I know: Dota 2, Path of Exile, League of Legends.

Granted, they're probably not the cash-grab-f2p-mobile-game you might have had in mind, but still technically free-to-play.

Yep. I'm reading a book right now called "Click Here to Kill Everybody" [1] and it discusses exactly this. It also considers the financial cost involved. What is the cost of establishing and maintaining the CI/CD system that OP is proposing? And what is the cost of a bug / deployment error? In the last decade (assuming the company has been around that long), how often has a deployment error occurred? How much staff time would automating deployments save?

Unless the errors are substantial, or the time savings for staff are significant, it's unlikely that organizations will invest significantly in non-business / non-consumer facing technology. If software is their business, though, having mature CI/CD systems helps the company attract strong candidates, like OP.

[1]: https://www.schneier.com/books/click_here/

Sounds like you’ve got alignment on these things at your startup but no one knows how to make time for it. What have you done to step up and lead your team out of the jungle?

I am not being flip when I ask this. I’ve worked at eight startups because situations like this can turn into opportunities if you put your back into it. Most of the work at a startup is about turning chicken shit into chicken salad. If your team had it all figured out, it probably wouldn’t be much of a startup.

> turning chicken shit into chicken salad

and being sure to get paid for doing so (in the form of equity rewards). Don't forget that, and don't let someone else claim your hard work.

Consensus that these processes are really the best way to do things is relatively recent, and I wouldn’t be surprised if there are a lot of shops that haven’t yet retooled to them.

Continuous integration was only proposed as a concept 30 years ago, and was a relatively fringe theory for another decade at least. Automated deployment followed a similar path, but about a decade later. Manual acceptance tests, in particular, remained quite common a decade ago even at the big tech giants.

Is this a startup/young company or a more established one? “Enterprise”, “consumer”, “b2c”? Some comment threads are related to a “publish or perish” situation at under-resourced startups. If this is “b2b” enterprise, I’d despair and look for greener pastures.

I am working now with a code base that was created during a time of such a mentality, but it wasn’t true, just fashionable 8 years ago. So much of it sucks, no functioning architect to ensure basic design quality or consistency. Thankfully, for the first time in my career, I have management committed to getting the worst bits in order, they seem to understand how costly all those earlier “savings” were. Thing is, it wouldn’t have cost more to do it right the first time. Engineers just would have needed to design correctly and implement correctly. Which may have cost more eng $$ in the beginning, but we’d also have had $MM more in referenceable customers as a result.

FWIW I have the responsibility of raising the quality via design review and code review, as well as designing and coding. I’m one of those architects now. Seems to be working.

I'm surprised at the comments here acting as if change is not possible. Do people really believe that there are 2 completely separate group of coders in this world... those who "get it" and meet the best practices, and those who reject those practices and fight to never improve?

I'd say that there is a huge middle ground of coders who want to improve, but have business drivers that have stopped it from happening. And if approached correctly, improvement is possible.

Overnight change is an unreasonable expectation. But incremental improvements over the status quo absolutely can happen. You do need to approach it expecting it to take time - storming into a dev team and declaring all things must change won't go well. But picking one improvement, talking to the team about it, talking to your leadership about the cost/benefit of making the change, and then getting a consensus to make a change.... that should work. Repeat that process, and not only will you improve the team over time, but you will show yourself to be a leader and improve your career.

It's not the coders, it's the management. And sometimes it's not the management, it's the business, which is the market, which is the customer.

Do you have customers who want fast but unstable products? I don't. On the contrary, when we talk to them about our product roadmap, they are more than happy to wait for new features in order for us to improve the stability.

What if your customer is the government and your users have no choice in the matter?

My customers are 100% government entities, so I'm not sure what you mean?

There is an "overkill tactic": do all the improvements yourself at your own expense (your personal productivity and heat from the stakeholders). In this case you're doing it all yourself, plus you take on the overhead of building the interface to keep the workflow of others unaffected.

You'll get fired or you'll gain the support of other developers who won't see you as a threat (because of your low productivity and because of the handy things you do).

The reality in most startups is that everyone works on whatever will increase revenue the most. There's no point in testing, code reviews, CI, etc if you are not yet sure that you'll still be selling the same product 2 months from now.

That kind of makes it a moot point if people get it or not. They just do what needs to be done to help the company survive. Once the company is secure financially and has settled on a product, then it's kind of not a startup anymore.

Right, but OP said mid-size companies, not startups.

My suggestion: as a candidate, remember that you are also in charge of interviewing the company you are applying to. Joel Spolsky had "The Joel Test", written decades ago. This author revises it: https://dev.to/checkgit/the-joel-test-20-years-later-1kjk

The interview is often seen as a one-sided interaction: you are in "I hope they like me" mode and the company is asking "can they do the job?", when in my experience you should treat it more like a date: do we have similar tastes, what do we have in common, where do we differ, can we make this relationship work?

I don't think your expectations are too high; it's just that you as a professional can work at, what, 35 companies at most during your entire career, if you stay at most 1 year at each. That sample size is very small and not indicative of the industry.

Improve how you select your next job.

> The interview is often seen as a one-sided interaction: you are in "I hope they like me" mode and the company is asking "can they do the job?", when in my experience you should treat it more like a date: do we have similar tastes, what do we have in common, where do we differ, can we make this relationship work?

The more you manage to flip the interview, the better you're likely to come across, too. 1) people like talking about themselves and what they do and will like you more as a result, and 2) it puts them in a mindset of trying to impress/sell-to you which does a bunch of stuff psychologically that works in your favor.

> ... this is what mid-sized companies are and I just have to endure and keep pushing.

For your own well-being and career happiness, I think you'll need to be strategic about exactly how you "keep pushing" for change.

Changing habits in an organization is _extremely_ difficult. You can't just provide a list of practices, make a terse case for each one, and expect people to just say "Oh, I see, you're right, let me retrain myself on multiple complex topics and then change everything about how I work, while at the same time figure out how to explain all this to project managers."

The time scale for implementing what you call "the basics" in a workplace with a dozen or so people is going to be more than a year at best, and only if you do everything right and are a people person with enormous leadership leverage.

Read about "Diffusion of Innovations" (https://en.wikipedia.org/wiki/Diffusion_of_innovations). It's an arguably successful theory of how organizations implement technological changes. It was originally conceived by social scientists who studied how farmers (that's right, the agriculture industry) adapted to change. The most fundamental aspect of putting this theory to work is that you need to recognize and find "early adopters". Early adopters have two characteristics which are important for spreading change: 1. they're willing to try new things just for the novelty of it, 2. they're influential, others look up to them as examples. If you focus your initial efforts on a very small number (even 1 or 2) of "early adopters", that's the first step to success. There's a lot more after that, like training and dealing with laggards (people who need to be forced to change), but I don't want to type an article.

You can try and softly push towards better practices but if it's an organisational issue you will likely fail. Your standards are not unreasonable so it might pay to find a better company

I think you just don't want to work with people who accept such a state of affairs, so there's no point trying.

These setbacks are demoralizing

It is, and after enough time you'll find yourself repeating the same bad practices, because there are likely strong incentives in place for that.

In this case my question would be: are the products you deliver meant for consumers, businesses or developers?

Because from my experience these groups have different expectations regarding quality - here shown in ascending order - and require an appropriate approach in each instance.

Cultural problems are hard to change. If you like your team, one option might be to simply begin testing your own code and ask people to review your work. Don't be overly optimistic, but from time to time the right person can create enormous change simply by doing the right thing and patiently explaining why it's helpful. It's likely your experience of mature engineering practices in a bigger company is largely foreign to your colleagues, and they would enjoy learning more from you, because you care about them and care about your work.

If that doesn't help, move on. Life is short and bad practices will be a drain on your energy and your long term career trajectory.

> Cultural problems are hard to change.

Agreed, but the culture reflects the leadership. If leaders put up a backstop and say "tests passing gate new commits and new commits should include new tests," then the culture will bend to this. If leaders think that simple process elements are negotiable, then the culture will reflect that.

Your standards aren't too high but I think you must realise that this is a cultural problem with little hope of changing. Even if the push comes from the CTO, it will take years for change to happen, it will require new hires and bringing new blood into the engineering leadership.

If you do want to take the challenge (which I strongly discourage you from) you'd need to collect data to build your case, quantify the time and human cost from issue/jira to code landing in prod to number of incidents/bugs. The instrumentation to do this will be a fairly chunky piece of devops work. Frame the data in light of your competitor's ability to iterate their products and so on. When it's collected and presented it can be quite compelling and people will listen.

It's only at this point you'll be able to present the problem to management in a way that they understand. You know and I know that this is a cultural problem first, then a process problem and lastly a technology problem. The amount of work to effect this kind of organisational change, even in a small engineering company, is immense. I don't know what your motivations are for staying, whether it's the domain or the money, but if this is something that bothers you then this is the best piece of advice I can give you:

Run, head for the hills, and don't look back.

> this is a cultural problem with little hope of changing

I disagree. Most existing teams with good processes started out decades ago without them, and improved incrementally from where they started. Team culture can absolutely be changed, but it takes a lot of time (and requires cooperation from seniors and managers).

I've worked in places where people cannot understand the necessity of most of the things you list.

If I don't do those steps, they are fine with it as long as it works. If I want to set up the whole minimal stack, as long as it doesn't triple the planning, they are fine with it too.

I think it is extremely discouraging, because even if you do all those good steps that would surely improve the product quality overall, it is not something that people can see or will consider. This is not a thing customers can see. Things will just work most of the time, and in case of a problem, you will be able to fix them in no time and it will go back to normal fast because you built all the toolkit before. Customers won't even have the time to complain.

Your standards are highly reasonable, but if you ask some sales person if you should spend 5 days on a feature or 7 days with better quality and full test coverage (ok it would be more like 10 days), they will always say no, don't do it or it's fine, we'll take care of this later.

Documentation? Oh, you'll do this next summer.

Explaining why you should do all this to a team is long-term work: it can take one to two years before they start to understand what the hell you are doing (I mean, maybe you'll have better results than the rest of the team or your infrastructure will be a little bit more stable).

But it is extremely discouraging; keep yourself warm and don't get too tired. Just do things the way you are used to doing them, and try to set up the things that are missing, so hopefully one day other people will want to use them too.

Your standards seem to make sense. But I suspect there is immense variability between different companies.

You should, I believe, push for changes with examples of good development practice. Start with easy things that can have impact, and hope the improvement in quality of life due to better practices will stir more interest in improving their process.

But if you have to be the jerk about everything and people just end up resenting you for it, well I think you know what to do.

good luck.

Usually these are trade-offs that logically follow from different priorities. With a large company, it's important not to break things. With a small company, the speed of iterating and moving things forward can be much more vital from a business POV.

However, if you say you're confident that

> I see how slow dev is

it's just crappy quality and not a trade-off, as I would assume in a usual case.

I think that you should bind your standards to the business. What you describe are absolutely best practices, recognized in the Accelerate book. But they also require time and effort (well, it really depends; you can have everything by deploying an app on Firebase with GitHub CI, for example, and running the tests). Do they actually change the cost of delivering in your case? Are you actually able to demonstrate the gain? Also, do you have market fit? None of this really matters if people are not using the product. In short, I think engineering standards are important, but they always need to be put in the context you are working in. If I'm opening a startup I need to test my product with customers. I will probably need to change the UI and the business logic many times. I could not care less about proper automated testing in this phase. Once I have the product, then I can worry about engineering best practice.

I read your post and: wow that's EXACTLY my situation!

It's also quite disappointing to me that I can set up a very basic CI on my GitHub side projects in half an hour at most, while a company with some great devs in it can't.
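For reference, that kind of half-hour CI setup can be as small as a single GitHub Actions workflow file. This is only an illustrative sketch assuming a Node project; the file path is GitHub's convention, but the npm scripts are placeholders for whatever your project uses:

```yaml
# .github/workflows/ci.yml
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test       # a non-zero exit code fails the build
      - run: npm run lint   # same for lint errors
```

Any push or PR with a failing test or lint error then shows up red, with no extra infrastructure to maintain.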

I'm not talking about "use this new technology/framework here, blabla". I'm talking about me having to fix regression bugs because no one did (or could) write small unit tests on the critical parts of our infrastructure.

As you also said, everyone I spoke to, including the CEO, agreed that we are less productive due to this and a company culture shift is needed. Six months in, I have yet to see any change.

Plus: I'm not in a small startup that needs to ship something fast. I'm in a smallish company with almost 20 years of experience. They did somehow fine until now, but they're struggling to scale up.

This is an interesting conversation and one I think about a lot myself. I currently work for a large-ish company with "quite good" practices, and I often wonder what sorts of policies other companies implement. As you grow in your career and continue to join teams that were better than the last, it's sometimes hard to know if your standards are a product of your career trajectory, or simply a product of the times.

However, I do think there is some mid-ground between what you are used to and what the company currently does. In my opinion, these are the minimum requirements of what a mid-sized company should be doing:

- Version control. No excuses here.

- Code Review. This needs to be done.

- Test frameworks are setup and make it easy for a developer to write and run tests. I wouldn't necessarily get hung up on coverage, but large features should have tests in the PRs, and there should be a good culture of testing within the company.

- A CI pipeline that fails the build if tests or linting fails.

- I wouldn't be too concerned about automated deployment from the CI. This also depends a lot on the type of product. It's much easier to automatically deploy a b2c application than a b2b one that might require proper release correspondence with clients beforehand. I think the minimum here should be a system to handle provisioning and deploying to boxes (whether that be something like Ansible or in-house scripts).
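As a sketch of that last point, the "system to handle deploying to boxes" can start as a single Ansible playbook. Everything here (host group, paths, service name) is an illustrative assumption, not a prescription:

```yaml
# deploy.yml — run with: ansible-playbook -i inventory deploy.yml
- hosts: app_servers
  become: true
  tasks:
    - name: Copy the release artifact to the server
      ansible.builtin.copy:
        src: build/myapp.tar.gz          # hypothetical local artifact
        dest: /opt/myapp/release.tar.gz

    - name: Unpack the release
      ansible.builtin.unarchive:
        src: /opt/myapp/release.tar.gz
        dest: /opt/myapp
        remote_src: true

    - name: Restart the service
      ansible.builtin.systemd:
        name: myapp
        state: restarted
```

Even a playbook this small beats SSH-and-pray: it's repeatable, reviewable, and lives in version control next to the code it deploys.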

I'd be interested to know what others think the benchmark should be for mid-sized companies. One last point: You are in an excellent position to make changes within your company. If the truth of the matter is that your colleagues "just can't imagine another way", then you can work with them to try to get some of these policies implemented. If you can convince them that there are actual benefits to doing their job a different way, then I think they should be receptive. You can come out of this job knowing that you helped level-up the developers and left the company in a better place than when you started.

It depends on what you derive your satisfaction from. I've been at a company that had perfect engineering practice and was completely failing because engineers were almighty and oblivious of the business. That was demoralizing. A useful product makes me more happy. Code quality is not an absolute end goal. Sure, I would prefer to have both, but in reality it's usually a tradeoff, and over-engineering is worse than under-engineering when in doubt. Also, I've seen shitty over-engineered (edit: from a user experience perspective) monstrosities with very good "code quality", and hand-crafted, delightful-to-use apps that would not check much on your list.

Your expectations are neither unreasonable, nor reasonable, without the context and environment of the business and market it operates in.

For some products, companies and their markets, the expectation from the users/customers could be so low that the mere existence of the product can be quite disruptive.

In fact, (most) startups exist to pursue an idea of disruption.

As the product, company, market and customers evolve, the expectations begin shifting.

This leads to an inevitable choice across a spectrum from leaving things as they are to rewriting entire stacks.

If the leadership can do a good job hiring the right people to both navigate the chaos and know how to get to a better place, things would then evolve, in whatever pace that can be allowed by the business, given calculated risks on engineering practices or lack thereof.

The key here is the "right people", rather than just "more people".

Based on the information shared, you sound like you are the right person to lead an engineering quality and reliability initiative.

Although, I do assume that you may not be the right person because you are failing to cope with the current chaos.

That being said, the mere fact that you're seeking counsel here tells me you are willing to manage this. I think that's a great start.

At this point, I belive there are two (and possibly more) things you can do:

- Talk to the key people in leadership to understand how they perceive the current situation and whether they are willing, if not even intent on, improving the status quo.

- Ask yourself for how long more you can navigate the chaos and whether that takes away from your mental and even physical well being.

Perhaps shifting your perspective and being intentional about it can help. This could even be the best thing that happened to you. Fast forward a few years, you'll perhaps have become an engineering leader who has had the opportunity to have experienced a huge paradigm shift and worked on solving its challenges.

I work for a small company that's been around for 20+ years. All of those things you outlined as basic, are things that I have been fighting for the last eight years. I've won a few battles, compromised on a few others, but lost most.

In my opinion your expectations are too high for a small company. Most of the people I have worked with at small companies don't have experience with most of those basic tools. Some are actively afraid of code reviews, and others see tests as writing the same code twice, hence a waste of time.

Management and coworkers pay lip service to wanting these things, but I've found that they see them as nice to have extras, and not as time saving efficiency increasing tools.

Don't get me wrong things can change. One way I found is to overwork myself and implement it outside of my normal projects. Then I just have to keep pushing and educating people to follow along. That's how I got the majority of our code under revision control.

The other way is that management finally has enough of a problem and someone convinces them one of those basics will make things better. In our case management got tired of making new features live, and finding out afterwards that something else had regressed. That's how we got end-to-end tests.

Company culture can change, but it'll be slow. Try and frame things in a cost-benefit light for management. For developers try and show them how it will make their life easier, and show them the cool factor. Good Luck!

I'm on sabbatical right now because I get how frustrating that can be. That being said, maybe me sharing some points about my sabbatical can help put things into perspective:

- It's taken four months since my project (a personal-use hybrid OLTP/OLAP streaming BI framework) has started, and I'm only wrapping up the development now. I'm fine with this because I know what I want and I don't want to deal with users while things are in flux, but if this was a business, I'd be out on the street by now.

- Engineers are paid well because there's not that many of us. It's not because we're important. Sales is what brings home the bacon, marketing kicks off the sales funnel, customer support closes the sales loop, product management translates sales' requirements to engineers...and engineers just build the thing according to requirements. Many one-person startups don't have engineers, or even anybody technical, they just use no-code tools and pay contractors and they do just fine.

I'd consider broadening your options in order to grant yourself more agency, especially paid solutions that you can form a demonstrable ROI-based argument for. Terraform may seem a bit overkill, especially if you don't have serious cloud charges. Maybe managed APIs as a service make sense, like Netlify, which have CI/CD and can spin up various apps for money. Then you can say "for $25/mo. we save X developer hours, which at market rate saves us $XXX per mo., which increases our runway by this much."
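That pitch is just back-of-the-envelope arithmetic. A sketch with entirely made-up numbers, to show the shape of the argument:

```python
# ROI pitch for a hypothetical $25/mo managed CI/CD service.
# Every number here is an illustrative assumption, not real data.
service_cost_per_month = 25.0    # e.g. a Netlify-style paid plan
hours_saved_per_month = 6.0      # manual deploys/fixes avoided
loaded_dev_rate_per_hour = 75.0  # fully loaded hourly cost of a dev

gross_savings = hours_saved_per_month * loaded_dev_rate_per_hour
net_savings = gross_savings - service_cost_per_month

print(f"Gross savings: ${gross_savings:.2f}/mo")  # $450.00/mo
print(f"Net savings:   ${net_savings:.2f}/mo")    # $425.00/mo
```

The point isn't precision; it's framing tooling spend in the same units (runway, dollars) that the people holding the budget already think in.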

In Denmark we have an expression "fagidiot" loosely translated "idiot of your field".

A soldier who can't enjoy a war movie because the way the machine gun is used, or the sound of it, is wrong.

The designer who can't stand the product because the typography turns them off.

This tendency to think that your field always needs to be applied to extreme perfection.

Anyone who wants to know where the line is should try to launch a company following the "fagidiot" principle and see how likely they are to succeed.

I've experienced very early stage startups and I've experienced a medium-size (400 dev) company. To be completely honest, I don't think these practices are necessarily functions of company or engineering team size.

I'm convinced that there are startups that have good practices and there are big companies that don't (and vice versa), so I'd say it's more about this particular company rather than all companies around that size/location/industry or whatever attribute.

On a practical note, I think you have a few choices. 1. Leave and join a company with better practices. It's a safe choice. 2. Take it as a challenge and be a change agent at the current company. Good practices have value. Teach them. Bring them in. Negotiate with managers and people above, try to convince them to adopt to better practices. This is hard, really hard, but you'll learn a lot from the process, and you'll be a hero at your company if you succeed. 3. Do nothing. Get frustrated and complain. Sink into the same level of mediocre/bad practices just like everyone else. I don't recommend this one.

As with anything in Software, there's a tradeoff. "Code Quality" and processes have tradeoffs around risk, i.e. is this going to break something in prod?

If it's a real small startup (< 10), and you're pre-revenue or there's a bunch of emergency situations, then yes, a lot of what you're describing is normal. Things are crazy, and you have to cut corners to even make money.

If it's bigger than that, then you've got a problem and you should start looking elsewhere or attempt to change the company from within (hard to do unless you're in leadership).

Otherwise, here's my personal rules for "running scrappy":

- Pushing to master & prod w/o review is fine if it's truly an emergency or something super tiny, i.e. a whitespace change. Otherwise, it's good to get in the habit of putting up a PR, even for posterity.

- Connecting to prod dbs - never do this unless it's something you have to fix right now. Otherwise, just grab a copy, a backup, or connect to an idle follower instance if you need prod data (v ez to setup).

- Bad code is fine. You have to pick & choose your battles. Ideally care about stuff that's around building a mental model of your application rather than quibbling around refactoring some small function

- Similar note about testing... you can get away without it in a bunch of cases, especially if the app is easy to QA locally

- Deployments.... just use Heroku :-)

- It's good to spend at least a little time architecting/designing, but seriously time-box it and run with what you've got

- Staging environment.... you won't have a good one for a long time :-)

I agree that your demands are unreasonable for a small company. It's like a person getting kitted out in full tour de france gear to ride their bicycle to the local shops.

However, there is a time and a place. Smaller companies typically haven't reached the point where they have both the time and the driving need to improve processes. But they will eventually. And when they do, they need a person who is able to implement it, like yourself.

I think it's great that you're concerned about it, that you're eager to improve the processes. Don't be disheartened by the fact that it's hard to make people change behavior; that's normal. You can do it though.

It's important for now that instead of seeing it as a general team-quality issue, you look deeper and tie your improvements to things that are critical locally. E.g. if pushing to production is causing issues, quantify it, explain it to folks and get their buy-in to implement the process.

This kind of direct "immediate problem solving" approach will likely fly better than filing initiatives under the boring banner of good practice.

Your standards seem fine. But for a lot of companies the initial techies set the approach, and a lot of people find the thought of these practices stifling. Not necessarily in practice, but it can sound very heavy. I'm self-taught and I have a fairly sizeable mismatch with enterprise OOP-focused engineers when it comes to testing, overhead and process.

I think it's hard for people to appreciate the other side of these arguments without having lived through some of the consequences. I firmly believe in automation and testing. I believe tests must be pragmatic to not be overbearing. I mostly believe in code review for teaching. The research covered in the book Accelerate (subtitle something something devops) made a strong case against code review and change control for most software development.

So yeah, your standards sound reasonable, if maybe a bit much for my taste in some aspects but I know lots of shops that wouldn't be anywhere near. And this is a skillset and know-how you can provide. Hope that's a useful perspective.

As others are saying: it depends.

Without knowing more about your situation and its context within the larger org, it's difficult to make any judgments.

All I'll say is that I've internalized all these best practices and yet here I am running various products on various VPS's with a git post-receive hook and then logging in to the servers and recompiling or restarting. I have no tests, the code is not properly commented, and I've only written typespecs (Elixir) for the crucial bits of some of these apps, and I truly believe I've made the right decisions.

In recent weeks I've pushed for refactoring, and I managed to get some hours to work on a better deployment strategy. When more money becomes available I'll push for writing tests and working on documentation. The goal then would be to remove the 'bus factor' of 1.

I imagine you're nowhere near this particular scenario, but I imagine you're on a scale somewhere and I hope you figure out what the right standards are for you.

This is an engineering leadership problem. Your new company does not seem to have an engineering leader who understands the cost to the business of having the engineers act the way they currently are. I don't think you have to run away, and you don't have to become such a leader yourself, but you should talk with your leadership and show them why what the team is doing now is not good. Make sure to first understand what your leadership values, then present to them better ways of doing engineering where you show the impact it has on the things they value.

Take automated tests, for example: maybe the leadership currently thinks that test code isn't actually doing anything for the customer, so why should it be written? Then you have to show how much money/time/whatever has been spent by engineering fixing the same bug over and over because you lack regression tests, and compare that to how much it would have cost to write a test case for that regression.

Do one thing at a time, well, and track outcomes.

Be prepared to take responsibility for it.

Code review is a practice that has studies which demonstrate that it's effective at reducing errors in software [0].

Work on that at first. Be prepared to spend a good amount of your time doing most of the code reviews. Write a review guideline with your team and share it around. Once people get used to the habit start encouraging others to take some of the responsibility.

I encountered a similar situation. Nobody practiced code review. Test coverage was spartan and brittle. The company was focused on shipping the first thing that worked, fast. Maybe it was to keep the lights on and land those precious contracts... but it had the external effect of tightly coupled systems, increased error rates and support requests, and in some cases errors resulted in customers losing data.

When changing culture and habits you have to be patient and resilient.

It took about a year before code review became widely adopted on the team. About six months after that we started tracking our code review metrics as we found that pull requests with few comments and large amounts of change were a big risk. We started getting feedback from reviewers that changes over a certain size were slowing down our velocity so we started seeing people submit smaller, leaner changes.
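A toy sketch of the kind of review metric described above, flagging pull requests that combine a large diff with little discussion. The thresholds, field names, and PR records are all made up for illustration:

```python
# Hypothetical sketch: flag PRs whose size/comment ratio suggests a
# rubber-stamp review. Thresholds are illustrative, not recommendations.

def review_risk(prs, max_lines=400, min_comments=2):
    """Return PRs that are large but received almost no review comments."""
    return [
        pr for pr in prs
        if pr["lines_changed"] > max_lines and pr["comments"] < min_comments
    ]

prs = [
    {"id": 101, "lines_changed": 1200, "comments": 0},  # big, no discussion: risky
    {"id": 102, "lines_changed": 80,   "comments": 5},  # small, well discussed: fine
    {"id": 103, "lines_changed": 600,  "comments": 1},  # big, one drive-by comment
]

risky = review_risk(prs)
print([pr["id"] for pr in risky])  # → [101, 103]
```

In practice you would pull `lines_changed` and `comments` from your code host's API and track the trend over time, rather than judging individual PRs.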

I did the same thing with testing. We now trust our CI pipeline to manage our code quality and releases. We're working on CD. It started small. And it grew.

You're not unreasonable. And neither is the team you're working with. Manage expectations and be the change you want to see and be patient.

[0] https://sail.cs.queensu.ca/Downloads/EMSE_AnEmpiricalStudyOf...

I've worked for both types of companies. It's surprising that both cultures work in their own context.

At the startup I worked where the dev process was zero, we were able to flesh out pivots and products at a pace that was able to bring continuous investments to the company. To be honest I didn't enjoy the craft though during that time. There was no depth, but a lot of breadth. It was just about churning out code like a machine.

At the bigger company where I work now, money stopped being a factor. I could afford the time to ask fundamental questions about my code. My code was reviewed, and shipping was governed by a process that scrutinized devs. Slow and sometimes frustrating, but still enjoyable given the freedom of time I was now given.

Any day, I'd pick working for a large-ish company over an early-stage startup, unless it's a startup that makes things I care about.

Yes, your expectations are too high.

I'm trying to hire a devops engineer. Applicants have the shiniest CVs with even 10 years of devopsy stuff on it.

Then I ask the following question: "Explain the relationship between a VM, a Linux container, a Docker container, and a Kubernetes pod. Whatever you find important."

And they have no clue.

I've faced situations like yours. Your expectations are totally reasonable IMO, but in order to promote significant change at the current company, you may have to go through a long process with many layers of human factors involved.

I don't feel I'm in a proper position to give you advice, since my experience is limited. But some things I've seen work:

- identify whether there is a "dominant group" that enjoys some sort of political influence and is trusted by the direct manager. This is important. They are usually the oldest members of the team, and as they gained trust and respect from management, they also fell out of step with the good practices applied elsewhere. Be careful with this group, if it exists, because they won't let you proceed with such "disruptive" ideas and risk making it evident that there has always been a better, more productive way that they never applied out of pure inertia.

- if you have the opportunity, and think it's worth the effort, try to deploy or build some small tools/processes/etc. and show them off. They will be very eloquent examples;

- don't be concerned about taking credit. Everybody is interested in being more productive, but human emotions will always prevail over any rational need. I'm proud of having built some productivity tools still in use 15 years later at a company I worked for, and almost nobody knew I was the author. The "architecture" team wrapped them up in an ASP.NET web app and took all the credit. That's fine, considering I achieved the more important goal. If I had attempted to bypass the architecture team and promoted my tool independently, it probably would never have been put to use.

And always bear in mind that this is usually a long-term effort, and you have to continuously assess whether it's worth it. Maybe other, non-technical aspects counterbalance the issues, maybe not. Some of my happiest years were spent working on legacy, terrible code. But this is a very personal and multifaceted decision.

Your $current_company needs the reality check.

I run a 4 person dev team at my tiny 8 person startup. We kinda do all the things you mention. CI, reviews, auto deployment. Our test coverage could be better but is increasing by the day.

What you expect is totally normal.

I bet the current co has weak technical leadership

These don't sound unreasonable. I've been in a similar situation before. In my experience, if the culture in the org is generally positive, it is worth taking the effort to champion good practices. You can become the go-to guy for setting these standards. Future developers will thank you for having done so.

However, if you think the culture is not conducive to positive, large scale change, it might end up being a waste of your time and energy. Having to constantly say no, and ask for everyone else to fix stuff in PRs can be exhausting. There is also a real risk of others perceiving you as someone who always finds faults with everyone else. That is a hard tag to shake off. Balancing that is the real challenge.

Your larger company built up those things over time, likely with management support and pushing, and with enough of it built into the culture that everyone realized the value.

In your smaller company these things need to be done, and you need to be a champion of them. Pick your battles, but don't assume you can win every one at the same time. Pick the one most important to you or would have the most impact, champion it, and lead. You've seen nirvana, now you might just have to lead people to the promised land.

Just don't be one of those people who are like, "When I worked at ($current_company - 1) everything was much better than here. No one here gets it." Be the solution.

Counter these examples with small companies paralyzed by process, introduced by well-meaning hires from large companies with plenty of capacity.

A large company is a marathon run. Discipline, strategy, training matter.

A small company is running from a bear with your hair on fire.

Except that lacking some of these basics makes it extremely hard for small companies to pivot and modify their products for a larger customer base - pretty much cementing their fate as failures.

Even a small team of 3 people can benefit from basic code reviews and a CI pipeline, which make them significantly more capable of responding to new business requirements than startups bogged down in 5 months of technical debt.

Sure, it's dangerous. But the question was: why is it so chaotic in some small startups?

And you can't have everything. You can insist on process all the way to bankruptcy. Whatever the risks, sometimes the one that succeeds is the one going full-on to the finish line.

Man, that's so clear I should get it as a tattoo.

> but with the slightest bit of pressure those principles are gone

If this is pressure from above, there is little you can do. Employees have one overriding requirement and that is to keep management happy.

Management clearly didn't care about any of this before and they evidently do care about speed, so what are employees going to prioritize?

Also, testing and code review are better at producing bug-free features. Most managers I know don't count the bugs as part of the feature, but as additional work. Past bosses would be happier with me if I delivered a buggy feature in a week and spent the next week fixing it than if I delivered the feature in 10 days.

Nah, entirely reasonable. We do all of that and more at the current project and the client has been super happy with the result all the way. We've been able to keep the tech debt low enough that we're just chugging along at the same velocity two years since the start (to the point where we've got the attention of the CEOs on both sides). Just don't expect to change company culture within a reasonable time frame. If you can't take it (I know I'd be out of there if they wouldn't even give it a try) just look elsewhere - plenty of places have reasonable processes in place.

Make your peace with working in a slightly crap environment or get a different job. It will be hard for you to produce anything good, and if you do, it'll probably go to waste. These issues are signs of deeper problems.

Good luck bud. For both of us.

I’ve found myself thinking similar thoughts (as OP) at a startup I joined. It seemed like no one valued code quality, yada yada.

If the things you value are valuable in your new environment, you’ll find the tact, time, and patience to improve the codebase.

One thing that helped me was helping others. Writing documentation, documenting processes, playing dumb sometimes...

Your standards are _not_ unreasonable, but they are _your standards_. If you want others to adopt them, you must lower the cost for them to do so—and must do so in a genuine way.

Lately, I’ve found the quickest way to be “right” is to allow others to be “wrong.”

I don't think your standards are unreasonable.

Good practices pay off in overall being able to produce more _correctly working_ code. Even if in the short term it feels like for example writing unit tests "slows you down".

Sounds to me like your standards are fine. Pushing straight to production without code review or tests is only acceptable if you're new and nobody is using your product yet.

When you're still in the prototyping stage, you want features and proofs of concepts, not waste your time on infrastructure that's going to change in a week, or tests for code that's going to be completely rewritten two weeks from now.

But once you're in production and you're starting to have a user base, code quality matters. It's worth investing in.

Your standards are very reasonable. You should keep pushing for higher standards. Preferably these should also be enforced technically, i.e., it should simply not be possible to push something to development without code review. At first, you can also just make sure you have good quality in the parts of the application you work on. At some point it will become quite noticeable that things are going better there, e.g., the tests will catch something that would otherwise have failed in production. This can work as an eye-opener.
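For instance, a server-side git hook can enforce "no direct pushes" at the repository level. A minimal sketch, shown as a shell function so it can be exercised inline; in a real bare repo this logic would live in `hooks/pre-receive`, and hosted platforms offer the equivalent as branch protection:

```shell
# Hypothetical pre-receive check: reject pushes straight to master so
# changes have to arrive through a reviewed merge/PR flow.
deny_direct_master() {
    # stdin: lines of "<oldrev> <newrev> <refname>", as git passes them
    while read -r oldrev newrev refname; do
        if [ "$refname" = "refs/heads/master" ]; then
            echo "Direct pushes to master are blocked; open a pull request." >&2
            return 1
        fi
    done
    return 0
}

# A push to a feature branch passes; a push to master is rejected.
echo "abc def refs/heads/feature-x" | deny_direct_master && echo "feature push: ok"
echo "abc def refs/heads/master" | deny_direct_master || echo "master push: rejected"
```

A real hook would usually carve out an exception for a CI or release user; the point is that the rule lives in tooling, not in people's memory.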

I’ve been there before, and you will be lucky to change even a couple of those deficits within a span of 2-3 years. It’s still worth pursuing, but make sure the organization is willing to take it on as an actual project and not as an extra hobby you're working on by yourself. The first place to start is buy-in from the other devs, and that will most likely be the most time-consuming and frustrating part of the whole endeavor. Work on it together from the perspective of reducing tech debt.

One word: _Prioritization_.

Your standards and expectations are reasonable, but sometimes push comes to shove. What you're describing requires a lot of intentionality from the seniors and management. Ask. Communicate. Share your concerns. Accepting the vast distance between the ideal and the reality is part of engineering. Bridging the gap is the other part.

I sympathize with your frustration, but no ideal place exists. My old team tried their hardest to get to the state you're describing for two years and got there. It was beautiful.

> In $current_company, I was surprised that none of the basics were there. All agree to do these things, but with the slightest bit of pressure those principles are gone

This is the norm. I don't think I have ever been on a project that had the "proper" stuff from the start.

I have applied what I can, without even asking (because the answer would be "no"), so the fact that:

> All agree to do these things..

You are set! That is the key! Now, how do you do this successfully?

Pick only one thing to solve (as I say, almost daily, to the project stakeholders: "One problem (or miracle!) at a time").

The first thing is either "use source control" OR "use a task manager/bug tracker".

For a project with more non-developers, the second will be MORE useful.

The key is to use the simplest thing you can get. I use Pivotal, ignore all the SCRUM stuff, and only use it to list the tasks to be done, in progress, and done, and train everyone with this ONE rule: "Put the most important things at the top, in order."

That is literally the only process I use with others. With non-developers, I don't even try to expect them to label bugs vs. features correctly, and most of the time I rewrite the tasks myself (for clarity).

But man, people, especially non-devs, love to see the progress of the project! It's insane how much pressure this relieves.

So the key:

* Pick only 1 thing to improve at a time

* Use the simplest tool/process possible

For example, use git, only 2 branches (master, dev)

* Accept some imperfect input from people who are not skilled in the trade

"My users will never report a bug correctly, so I rewrite the reports after talking to them"

And for others: "the devs never write a good commit message" - then:

* Pick only 1 thing to improve at a time

If you are still in the "at least we're using git!" phase, don't sweat it. Maybe gently remind them.

And probably:

* Keep a mega-list of good things to do in the task manager, and cross off milestones with the team. Celebrate when they're done!
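As a concrete example of the "simplest possible" git setup mentioned above, here is a throwaway sketch of the two-branch (master/dev) flow. The repo and file names are made up, and `git init -b` needs git 2.28+:

```shell
# Minimal two-branch flow: day-to-day work on dev, master only receives
# tested merges. Throwaway demo repo; all names are illustrative.
set -e
git init -q -b master demo && cd demo
git config user.name "Demo" && git config user.email "demo@example.com"
git commit -q --allow-empty -m "initial commit"

git checkout -q -b dev                 # everyone works here
echo "feature" > feature.txt
git add feature.txt
git commit -q -m "add feature"

git checkout -q master                 # master only receives merges
git merge -q --no-ff dev -m "release: merge dev into master"
git log --oneline -n 2                 # merge commit on top of the feature
```

That's the whole process: one long-lived integration branch, one release branch, and a merge when things are known to work.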

All of this sounds like something you should expect, except perhaps point 4; I don't think deployments need to be automated with infrastructure as code, although that's a nice thing to introduce. I'd push for the first three points to be mandatory, though - there's no good reason (except maybe in a 1-3 dev startup) to skip them.

However, don't see it as demoralizing; see it as a chance to teach other developers good practice and have a huge impact on the business's engineering practices!

If the business doesn't see the value of those things, you're more likely to convince them if you can prove how those things actually save/make money.

Some pointed out things like retrospectives on outages, but there are other angles, like how testing reduces the number of bugs, and with it the context switching and debugging time.

You could even tie it directly to money by measuring what a bug costs and running an experiment in one team showing that testing reduces that cost over time.
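A back-of-the-envelope sketch of that cost experiment; every number here is an assumption you'd replace with your own team's data:

```python
# Illustrative regression-cost arithmetic. All figures are made-up
# placeholders, not benchmarks.

hourly_cost = 75             # fully loaded cost of a developer hour ($)
hours_per_bug = 6            # triage + context switch + fix + redeploy
regressions_per_quarter = 4  # times the same class of bug resurfaced

cost_without_tests = hourly_cost * hours_per_bug * regressions_per_quarter

hours_to_write_test = 2      # one-off cost of the regression test
cost_with_tests = hourly_cost * hours_to_write_test

print(f"quarterly cost without tests: ${cost_without_tests}")    # $1800
print(f"one-off cost of a regression test: ${cost_with_tests}")  # $150
```

Even with deliberately conservative numbers, the comparison tends to make the case in terms management cares about: money, not craft.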
