Ask HN: In your company, how important is code quality vs. getting things done
94 points by maephisto on Dec 4, 2017 | 91 comments
How much value does your employer put on code quality, design patterns, tested code, and test coverage vs. shipping features fast and fixing/refactoring it "later"?



Define: code quality

We're focused on getting things done, and I think that's a good focus; it has always been more satisfying for engineers to see things ship.

But... there are extremes at both ends of that scale that make for very bad places to work, and really bad product and code.

Getting things done whilst not applying any focus to debt avoidance, maintainability, observability, supportability... is a recipe for disaster that leads to constant fire-fighting and the burning out of engineers and others.

Yet building ivory towers of as-close-to-perfect-as-possible code is equally a bad signal, one that can kill a company (not bringing in revenue whilst having costs). Not all code is equal, and some code requires a higher quality than other code.

What 20 years of experience has given me is pragmatism... is it constructive to add this nice-to-have thing, or would shipping sooner bring more $$$ in at an acceptable TCO?

The best engineers I know all strike a balance; keeping things simpler than a lot less experienced engineers would expect, and shipping something sooner.


> The best engineers I know all strike a balance; keeping things simpler than a lot less experienced engineers would expect, and shipping something sooner.

Yes.

Because experienced engineers know that there is an exponentially-growing cost associated with complexity. So what may look like "clean quality code" at first glance may in fact require an entanglement with frameworks that may or may not age well. And when the business needs start diverging from the vision of the framework makers, there's a world of painful molasses as the team struggles to either hack the requirements into the current codebase or perform a costly rewrite.


Well said.


Get things done unfortunately.

Everything is Minimum Viable Product; trouble is, those MVPs end up in production and never actually get rewritten, so all the code is horrendous when it comes to quality. Result: even what should be insignificant changes take forever, and you never know if you are introducing bugs due to zero unit tests. Work that should take a day or so takes weeks.

I've seen programs that are just hundreds of lines of code in Main(), variable names like `var varvar`, no DI, code that is very susceptible to SQL injection, things like

`var sql = "select * from " + tblName + " where Foo = " + searchString;`

And other horrendous things. No one seems to care though. If it causes an issue in the future (and it will!) that's a future developer's problem.
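For reference, the injection-safe shape is roughly the following. This is a minimal sketch using Python's sqlite3, purely as an illustration (in C# the same idea is a parameterized command); the table and column names here are made up. Note that a table name can't be a bound parameter, so it has to be checked against a known list instead:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (Foo TEXT)")
    conn.execute("INSERT INTO orders VALUES ('bar')")

    # A table name cannot be passed as a query parameter, so whitelist it.
    tbl_name = "orders"
    if tbl_name not in {"orders", "customers"}:
        raise ValueError(f"unexpected table name: {tbl_name!r}")

    # The search value is bound as a parameter instead of concatenated,
    # so something like "x'; DROP TABLE orders; --" stays inert data.
    search_string = "bar"
    rows = conn.execute(
        f"SELECT * FROM {tbl_name} WHERE Foo = ?", (search_string,)
    ).fetchall()
    print(rows)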

The other issue I find with this is that you end up going home each day as a developer feeling completely unsatisfied; you can't take any pride in your work, as it's all just a hack job to get it working ASAP. It's like if I trained to be a chef and then got stuck in a kitchen spending all day just heating stuff up in a microwave.


We have similar issues. The MVP concept is taken to an extreme; proofs of concept are created and used, but never iterated on.

I can relate to the last bit too; it also feels bad when you know people are looking down at your "lack of productivity" when you're just trying to slow down a bit to build maintainable software.


Why? I can understand not doing unit tests or taking shortcuts to decrease time to market, but it doesn't take any more time to write "clean code" than unclean code. The code syntax looks like C#; it's a strongly typed language, and with a tool like Resharper it's simple. My usual MO is just to write a quick method to get it done, then go back and use automated refactorings like Extract Method, Pull Members Up, Rename Variables, Extract Class, Create Object from Parameters, etc.


I find most devs won't buy a tool like Resharper, so if the company won't pay for it, they will never use it.

I love resharper personally and have my own licence.

I mean, I've seen WPF apps where everything was in App.xaml.cs because it was just easier. Need a function? Stick it in App.xaml.cs. Need a class? Stick it in App.xaml.cs. At this point I think App.xaml.cs in one of our solutions is around 130k LOC, because one of the devs thought just having a whole bunch of global functions was easier than worrying about nonsense like classes, encapsulation, testability etc. You can do this when there are no code reviews. (Who has time to do code reviews? That's going to slow everyone down!)

I've seen console apps written entirely in Main because the attitude was: start writing it, debug as you go, almost like writing C# interactively, moving the debug breakpoint up as you make mistakes and want to rerun stuff. When you can get to the end without it blowing up, you're done. Deploy it.

Or web projects where they say: we want you to gather data from some DB and display it in a nice way. It's just a PoC, so knock it up in half a day. It ends up pretty complicated, though, but you get it done in a day or so. Still, the code is a mess, but it's a PoC. But then the client likes it, so this mess gets deployed. Then they want a feature added, but you don't have time to rewrite it, so you just have to hack around the initial mess to get it working. Then more features and more features until you have an unmanageable mess that you hope you never have to touch again.

When you're working to half a day / a day for what's a non-trivial project sometimes even naming variables takes a back seat. So you end up with stuff like `var varvar` or `string s1`, `string s2`, `string s3` ending up in your source code.

It's not great. Sometimes it makes me laugh, but mostly it makes me think: why do I bother? That's the reality for some places, though.


That sounds like a nightmare tbh. I hope you can get out of there soon enough.


In my case I work at Google. Company-wide, we aggressively push code quality. You cannot submit code without someone reviewing it. You cannot submit code in a language in which you aren’t certified as proficient without someone with proficiency (called readability) reviewing it. Our review tools run automated code quality heuristics and present negative results to the reviewer. All affected unit tests are run when you request review and all negative results are highlighted. You can’t modify another team’s code without their approval at review time. We have an entire team dedicated to performing modernizing refactors across our entire codebase to remove deprecated methods, migrating to new language features, etc. Unit testing isn’t strictly enforced, but we have automated dashboards that will show you your coverage and guilt you if it’s poor. This is just the stuff I can name off the top of my head.

The result of all this is that even if a team wants to just get things done, they’re going to be nudged in the direction of better quality simply by using our tools.

As for my team, I work in Ads. We're a group of just over half a dozen people operating a handful of revenue-critical systems, and our code quality standards are raised accordingly. From a management perspective, it doesn't make sense to hire a group almost entirely composed of PhDs and then have them spend their time putting out fires caused by sloppy implementations. We put a lot of effort into getting the code right at the outset and then running it for years without need for massive intervention.

In my experience this involves a few things. First, we tolerate a fair bit of deadline slip. Google isn’t exactly gonna go broke without our newfangled product, so if we need an extra half a quarter to make it happen right, then we take it. Secondly, we write up thorough design documents before breaking ground. This means few surprises at implementation time, letting us focus on getting the code right. Finally, we spend a lot of time refactoring and cleaning things up. Systems balloon over time, and sometimes it’s more cost effective to clean house and make otherwise complex changes simple. Doubly so if subsequent changes are also made easier.

I can’t speak for the rest of the company, but I think our team’s standards are pretty high.


The great part about this process is that, in my opinion, it highlights the importance of pragmatism. Google, as a multi-billion dollar ad company, is understandably (and I would say, correctly) risk averse when it comes to changes to its bread and butter.

But if a new startup tries to replicate a process whose aim is to produce code that "[runs for] years without need for massive intervention" before they've established product/market fit, they will run out of cash.

Google has very specific needs. Code quality requirements for banking software are different from those for a CRUD app.

IMHO, the best path is to be honest about where you are as a company. Sometimes you need to just see what works, and multiple cheap iterations that you have no plan to support in the future can be critical to success. Other times you're making billions of dollars and the most important thing is to not screw up. With every variation in between.

So at my company: our subscription and payment handling is designed to run for years; not screwing up is the most important thing there. Certain product features are pushed to market as quickly as we can, with an eye towards future stability, "good enough" being good enough.


sounds like no fun at all


Processes that ensure high value code (longevity, readability, maintainability) rarely are, but that's what pays the bills.


If processes and algorithms hold such a tight grip on any new code created, just employ the same processes and algorithms to create the code itself :) Humans just seem obsolete there.


Your humans still end up writing those processes and algorithms :) it's just a fancier IDE!


I bet there are flow diagrams created by dragging and dropping things. Any corporate process-related software evolves in its final form into a UI with diagrams created by dragging and dropping things.


It sounds like it, but in practice it's not as onerous as you'd think. If someone's idea of fun is to play the cowboy, then no, it's not fun. On the other hand, the problems we deal with in designing and implementing systems like this are so challenging and unique that they more than make up for the added process.


Creating robust software is both challenging and satisfying.

It doesn't need to be "fun" to be enjoyable.


"Secondly, we write up thorough design documents before breaking ground."

Design documents? Sounds scarily like waterfall to me.


When did planning and design stop being the first stage of a sprint? Agile doesn't mean start writing code before planning and requirements gathering.


What's wrong with waterfall? I've got a buddy who works at Lockheed on the F-22 and a brother who's a pilot in the Air Force. I prefer certain systems be completely spec'd out before a single line of code is written.

Moving fast and breaking things isn't the only way to write code.


Waterfall sucks. I say this as a guy with experience in the kind of systems you mention (though not the F-22). Waterfall itself leaves little room for early feedback which means schedules are always delayed and problems are pushed to the next block cycle (a 12-24 month activity). Integration of changes is delayed until the last minute (versus the lean software concept of the last responsible moment). Testing turns into a massive clusterfuck at PFQT and FQT which is the last stage of a waterfall cycle.

Waterfall assumes you can know too many things at the start and have everything go right. It fails when it encounters the real world but is still pushed by program offices across the DOD despite extreme cost and schedule overruns and quality issues.

I’m out now and on mobile. But I can write in more detail later if you want about the problems of the pervasive waterfall method in DOD projects. Even the DOD systems acquisition documents have endorsed iterative/incremental (a slow motion agile/lean when you dig into it) development models since 1985 but still Waterfall lives on.


Waterfall, Or why I hate government projects

There's a lot of options between "moving fast and breaking things" and waterfall. And almost every one of them is better than waterfall.

Let's be clear on terms. True Waterfall is a single sequential pass: Analysis, design, development, testing.

There is no feedback, and there is no room for error. Now, you'd think no room for error sounds good, until your project is years late and billions over budget. Because no time was allowed for feedback and correcting errors, they're discovered late (in testing) and one of three things happens:

1) you loop back then and fix it [0]

2) you "descope" requirements and whittle it down to a functioning, but incomplete, system

3) you push through and finish it on time (because in most Waterfall shops the estimated schedule is a commitment) and deliver a buggy product (that in the USAF and others can, and has, literally meant death).

Waterfall as actually implemented allows for a greater degree of feedback. Maybe they even do unit tests on software [1] so they have some degree of effective testing during development. More often, they keep the Waterfall idea of big testing at the end. By this I mean a 1+ month event where a comprehensive test suite is executed. This is preceded by a preliminary execution which also takes 1+ months. So let's say you have a 15 month project: this means they literally schedule 2+ months for executing these tests, and this is the only time they will execute all of them. Between the PFQT and FQT they will deal with any rework. They still treat estimates as commitments and still rush crappy code out to the customer. So every defect is discovered as late as possible, rather than as early as possible. Any error in logic, any error in design (which is the most critical cause of lack of safety and reliability in systems) will not be found until month 12 or 13 of a 15 month project, if it's found in this cycle at all.

There is no place in this world for Waterfall in any variation, except for small projects (less than 30k or so lines of C) or projects that have been done a hundred times before by the same team who is going to do it now (they are literally the only ones who can get this done). But most companies will still try to do waterfall with teams of novices (to the domain, and often to the profession too, as they prefer cheap new grads to experienced developers and experts).

The DoD has, since 1985, endorsed a model based on iterative and incremental development. This means that you scope your initial necessary features, and over a series of projects you add in the extra ones (including ones you may not have thought of to start with, either on oversight or something that changed). Within each project cycle the iterative model has you doing a rough equivalent of the "sprints" in Agile (big-A) shops. Though often longer (still 1-3 months rather than aiming for days or weeks). However, since there's always a lag this wasn't really picked up by leadership until the 1990s so most of the present leadership still started their careers in the Waterfall world of the 1980s (civil service tends to stick around for 40 years) and still insist on peddling that bullshit rather than adopting more sensible development models.

There is no feasible way that anyone working on the F-35, originally conceived in 1992!!!, could have possibly laid out the work for that project properly in a Waterfall approach. Which is also probably why it was years late, billions over budget, and failed to meet some operational requirements when it was finally delivered.

[0] Too late to be effective, because everyone who did the development has been laid off or reassigned, or has simply forgotten what they did a year earlier.

[1] The initial developers may, but they will strip this out before handing it off to the maintenance programmers who are in a different company. Can't help the competition, customer be damned.


My company exists for the sole reason that another company focused too much on "getting things done". We're a newly founded company that is completely in charge of replacing the engineering departments of our sister companies. Our sister companies have millions of lines of code written over the years, most of it in the most horrible state you can imagine, full of security holes. Our company was brought into existence to clean up the mess and allow the whole organisation to move quicker.

In our own company, getting things done is important, and sometimes we do prefer that over writing high quality code. But that decision is not made lightly and it's usually still good code compared to what it's replacing.

We've been working on replacing a large real-estate website (millions of visitors) with a new version that has been rewritten from the ground up. We did this in 6 months and are going to be releasing on time and within budget. The code is in great shape and we got things done.


Sounds eerily similar to the company I work for which was founded because the company tried to build something internally and failed. Our engineers ship a ton of code, most of it good, but unfortunately we've had quite a few problems since we effectively don't do any real testing of the software before releasing to production.


Maybe you should hire some QA people, that helped us greatly :-)


that seriously sounds like fun


It is. I doubt you're in the same geographic area, but in case you are, check the link to our company's website in my HN profile.


It takes a lot of experience to be able to write high quality code but I think it also takes a decent amount of experience to know when you should just step away and ship it when code is good enough.

If you're writing a minimum viable product, something that you know isn't likely to be expanded on, or code you know is only going to be live for a few months, obsessing over code quality can actually be a really bad thing. I've been on short-timescale projects before (you won't always have control of this), for example, where continuous integration, code coverage, code reviews and TDD were insisted upon, and the extra time invested in these just didn't make any sense.

It's a different story for projects where mistakes can be very expensive or when you know code is going to require maintenance for a long time but I think it's important to learn when there are other priorities. Recognising when you should just "get things done" is actually an important skill that shouldn't be looked down upon because projects only have a finite amount of resources.


Business exists to turn a profit, and the business side (even in many software companies) doesn't see more or faster profit from well structured, quality code with high coverage/thorough tests and a well designed deployment infrastructure. Markets and tastes change quickly, so you can't really launch 1.0 with all that in place, but if you have good engineering management they are able to triage the important pieces and build the infrastructure around the product as it grows and changes.

It is definitely a balancing act and a lot of people here are going to disagree with me but launching and getting a viable product out into the world has to take precedence over having a pretty release pipeline or 100% test coverage. On the flip side, it's the engineering managers' responsibility to make sure the tech side gets fleshed out once you have product market fit, otherwise like @LandR alluded to, you'll end up building a house of cards inside a wind tunnel.


I'm a freelancer. Customers usually want to meet deadlines no matter what. Even the ones that agree to delay deliveries usually do it only when they understand that something new happened. A recent example: an integration with a banking API which is still under development is delaying a project. Force majeure.

Almost nobody understands or cares about good engineering, tests and the like. That's not where they are making money from, at least in the near future. However I did have some pleasant experiences.


I've been with my current company for 6 months now and what I found is the following situation:

- After many years of a getting-shit-done mindset, maintenance costs are now significant, mainly because we can never reject bug reports from customers, since we don't know whether stuff is really broken. So most of the time, one of the maintenance guys spends hours/days and finds it's a problem with the customer's network, weird proxies, unsupported hardware, whatever.

- As a result, the company frantically started building extensive test suites. However, with most things done in a frantic manner, this doesn't work so well because it was not engineered properly. We have/had hours-long test suites, hard to reproduce failures, devops problems for the test machines, etc. etc. It is, however, much, much better than the previous approach (i.e. release and pray to God).

- Now, after the dust has settled somewhat, people are starting to realize that the only way to improve the situation is by focusing on quality from the beginning. We spend more time designing architecture with testing in mind, try to be somewhat TDD-ish, and look at coverage early on. Now, when a manager is asked to provide estimates for a feature, they are asked if they can fulfill the requirements on time, within budget and at the required quality.


In the better places I've worked "getting things done" was the priority. The places that valued code quality always overshot deadlines by enormous degrees (over a year late on an 8 month project, for instance).

Typically I find that code quality isn't too vital, as the majority of big-co systems tend to get re-written from the ground up every few years, rather than modifying and evolving the existing system.


Seems that way at most large orgs. A friend of mine was talking about how some random support desk person will just log in, kill a specific logging process, and manually type the path to run it at a root shell, which inevitably borks things; and because it happens multiple times a day, he has written a unit file so systemd will actually kill the process, fix the now-wrecked permissions, and restart the logging daemon properly.

But, the real question is why does a support person have global root access to every server in a large enterprise?


Well, because it "used to be like that", and nothing is as persistent as a stopgap solution. Fun story: I used to work at an 8-year-old, fast-growing ad network in customer support and hadn't even officially progressed to engineering yet, but people heard stories about my PHP hacking skills.

So one of the old-timers came by (really fun and totally random guy) and was like... "ah, there's this one thing I think you could help me with, I'm trying to find a solution for these tracking error reports. Wait, I'm giving you access."

A few minutes later my email address and a password arrived over MS Lync, and nothing else. And I waited for him to come by with some requirements.

Which of course didn't happen. But what happened instead was that the newly minted Head of IT came by, who was tasked with cleaning up the mess after the acquisition and introducing a proper process.

He looked at me very earnestly and just when I tried to think hard what I did wrong he blurted out "WTF ARE YOU DOING WITH YOUR LIVE DB PRODUCTION CREDENTIALS!?"

The moment I realized what that guy had given me, it was already gone, unfortunately.

So long story short, as time goes by and companies grow, there are very different approaches at "getting things done" ;-)


What one person may think is fine might look crazy to someone else. Getting things done seems to be the MO of most orgs though.


In my team, if something needs redoing, we usually make that decision immediately after shipping the "get it done" version, so everything's still fresh in our mind. Other times we review things once a problem comes up, like code is becoming too difficult to maintain or is performing poorly.

To us, code quality matters insofar as it affects us, whether that's performance (rarely a problem), maintenance (often a problem in our older code), or something else. I think this is a good compromise. Don't burn mental cycles optimizing (whether that's for readability, CPU, memory, whatever) until you have to, but really take time to do it when you have to. Of course, write the best code you can in the mean time to minimize the need to rework old code.


Games Industry perspective: Usually getting things done is more important. Code gets rewritten many times and features change very fast, so not much use or time to try to come up with a solution that lasts that long. Also, once a game is out a lot of the code is thrown away. Engine code usually more durable and stable but also pushed to get it out of the door as fast as possible.


> Also, once a game is out

Curious, what kind of games you work on that still "get out"? I used to work on mobile, now switched to Steam "indie", and in both cases the public release is only the beginning of ongoing tweaking and support.


You're right, my work has been mostly standard (old-school?) AAA games, but lately, with games being developed more as a service, things have changed. Nonetheless, what I meant was there is a lot of ad-hoc code, especially in gameplay, that only really fits that particular game, so its life is fairly limited. In my experience, that code tends to just get the work done, which is good in my opinion.


> In my experience, that code tends to just get the work done, which is good in my opinion.

Not if you have to support it and build on top of the features implemented with such an ad-hoc approach.


Not the OP or in the games industry, but when Blizzard rereleased StarCraft 1, they made it a point to reproduce all the known "bugs" of the original. By this time, those are treated as part of the game.

I've heard this is true of other games as well: Street Fighter (cancels, I believe?) and Smash (wave dashing). Sequels reproduced bugs from earlier versions as they came to be seen as features of the gameplay.


Some aspects of 'code quality', such as DRY and sane code structure (e.g. early return in case of errors), cost nothing and are not at odds with but rather accelerate getting things done; immediately (avoiding bugs, shortening code review), in the short-term (understanding code which someone else, or you, wrote) and in the long-term (making the code easier to change if so needed).
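For instance, the early-return shape costs nothing to write. A minimal sketch (the order/process names are made up for illustration):

    def process(order):
        # hypothetical downstream step
        return f"processed {len(order['items'])} item(s)"

    def handle(order):
        # Early returns/raises: reject bad input immediately, with a message
        # saying what was wrong, so the rest of the function only ever deals
        # with valid data instead of nesting ever deeper.
        if order is None:
            raise ValueError("order is missing")
        if not order.get("items"):
            raise ValueError("order has no items")
        return process(order)

    print(handle({"items": ["widget"]}))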

It's only the aspects of 'code quality' which do have a cost (immediate and/or upkeep, e.g., tests) that involve any dilemma at all.


> Some aspects of 'code quality', such as DRY and sane code structure (e.g. early return in case of errors), cost nothing and are not at odds with getting things done, but rather accelerate getting things done

There are many people who don't write that one line of code that throws an immediate exception with a proper error message (and instead just return null or something).

I also see every few months objects and classes with code copied (as in ctrl-c ctrl-v) instead of generalized.

To them, this is getting things done (vs spending time writing descriptive exception strings or isolating the repetitive part in a separate function).

So, your point of view is of course correct, but your definition of "getting things done" already takes the next few weeks into account, which, frankly, I don't see that often.
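To make the contrast concrete, a tiny sketch with made-up names (Python used purely as an illustration):

    def find_user(users, user_id):
        # The "getting things done" version:
        #     return users.get(user_id)   # caller gets None and blows up later
        # The one extra line with a proper error message:
        if user_id not in users:
            raise KeyError(f"no user with id {user_id!r}; known ids: {sorted(users)}")
        return users[user_id]

    print(find_user({"a1": "Alice"}, "a1"))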


> I also see every few months objects and classes with code copied (as in ctrl-c ctrl-v) instead of generalized.

What is easier to generalise -

   1. the point where you realise you need to do basically the same thing in another location with a few small changes
   2. the point where you have a few cases of basically the same thing with their own small differences in context?
Counterintuitively, I think it's 2. It's the "small differences in context" which cause the issue. Once you see a few examples of how the code will be used, it's often much simpler to generalise. You don't need to guess any of the future use cases; you have a bunch of examples to work with, and you can be far more confident in your improvement.

The kicker is that 2 is much cheaper at all steps.

I think the biggest problem with this idea is setting a time limit to allow these duplications to grow - if you allow for it indefinitely, you'll end up with duplicated code building on top of duplicated code. In that scenario, the refactoring becomes harder not easier.
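A toy illustration of option 2, once a couple of duplicates exist (all names are hypothetical):

    # Two copy-pasted variants, differing only in the field and the threshold:
    def recently_active_customers(customers):
        return [c for c in customers if c["days_since_order"] < 30]

    def recently_active_suppliers(suppliers):
        return [s for s in suppliers if s["days_since_delivery"] < 60]

    # With concrete cases in hand, the right generalisation is obvious,
    # and no future use case had to be guessed:
    def recently_active(items, field, limit):
        return [i for i in items if i[field] < limit]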


Postdoc in HPC/numerical physics here. Somewhat unusually for my field, my supervisor gave me a lot of freedom during my PhD to 'do things properly' (instead of pushing me to get things done yesterday). While I don't have 100% test coverage in my (by now) 40k lines of C++, the overall design is sound, documentation exists and covers everything necessary, some tests exist, and it is clear that the entire thing is very flexible and new things can quickly be implemented in it.

So far, this approach has paid off tremendously, both for me and for my supervisor, as the reuse value is quite high and the performance very competitive, resulting in many projects both within the former chair and also with external collaborations.


40k lines sounds huge... what does your baby do?


Tensor networks, as in the condensed matter theory sense - i.e. matrix product states, DMRG, time evolution, many different physical systems, some different tensor network topologies etc.

Unfortunately closed-source (though open for collaborations), otherwise there’d be a link here.


As a contractor I get to see different approaches across companies. Some go for quality over delivery, some for delivery over quality. What I rarely see is balance; there should be a sweet spot somewhere in there. Great developers, though, will produce quality code right off the bat, but these developers are costly and you'd need most of your team to consist of them. It's possible to get things done and deliver quality code at the same time. You just need to know which corners can be rounded and when.

Edit: Just to be clear, nobody sets out intentionally and says "let's favour delivery over quality" or vice versa. It's an approach manoeuvred by the personalities in charge and the team as a whole.


My employer puts no value on code quality, design patterns, tested code and test coverage. I would not work for them if they did.

I put value on those things, because that's my job. Their job is business. They want features that will enable their job. I expect nothing less. I want them to tell me exactly what they need to get to the next level. How to achieve that is what I do.

If they ever tell me how to do my job, that's when I will hand in my notice.


How to deal with management breathing down mandates wasn't the question. The 'company' in this context represents you and your team.


That sounds nice in theory but it looks to me more like an inmates running the asylum situation.

The company or "business guys" should at least hire a CTO that ensures a baseline level of quality instead of letting the developers roam wild and free :)

Unless you're the CTO, then it makes perfect sense.


Getting the right type of testing is generally speaking more important than code quality. I have more confidence putting 'bad' code into production if it has good comprehensive tests, vs 'good' code into production with no tests.

Re type of tests, I think the most useful type tend to be end-to-end, that is testing the end usage.

Also, anecdotally, engineers who focus too much on code quality tend to miss the big picture.


Getting things done should be the correct answer most of the time. All the code quality, design patterns, testing, and automation is done with the objective of keeping the ability to still get things done throughout the life of that product.

I've seen horrible overengineered sins committed due to this "focus on code quality" for code quality's sake.


I would say both, but if I had to choose one I'd choose shipping. Anyone who has ever code spelunked through a multi-year code base notices something. Most code is written and never touched again, good or bad. This means most investments to ensure maintainability yield no return. I think a better approach is to exercise good judgement. Critical, high churn/high risk code is easily identifiable and should be given proper attention. Code that is not critical path, trivial in complexity, or unlikely to change can survive with more lax standards.


We have two classifications: prototypes and tracer bullets.

Prototypes are meant for info. They are not simply disposable, they are supposed to be destroyed, like toxic waste. We crank up speed all the way. No documentation, etc.

Tracer bullets are meant to hit, but not overly planned. They are completely maintainable, fully documented, refactored whenever possible. The name comes from the idea that instead of making calculations and plans, you just point and shoot. After shooting, you adjust the machine to hit the target better.


I'm not a software engineer, but I have worked for many years with software engineers. In my former company I experienced both.

With the first manager, everything was overplanned, but the estimations were completely out of control. In particular, one software engineer said that he needed one month for a simple reporting page, one week for adding a button to a webpage, or a whole day just to add a label to a dropdown. Every user was unhappy and the company was completely stuck because no new tools were released. We were neither allowed to do queries by hand nor to just write our own CLI tools. Combine that with a narcissist manipulator boss and you have a toxic environment.

After some time, the management changed: a new manager, focused on "getting things done quick", was put in charge and the situation was a little better (at least in the short term). But after some months it was impossible to survive: there was no plan, no quality review, no architectural decisions. Projects were done as fast as possible, without thinking about scalability or side effects. Refactoring and adding new features was impossible or took weeks.

I left, and now the company is bankrupt.

The lesson that I learnt is "In medio stat virtus" (virtue stands in the middle).


We care about code quality up to a baseline, but only because that contributes to the getting shit done part.

The baseline may vary depending on how experimental the feature is (proofs of concept are written to be rewritten; upgrades of existing major and heavily used components are written with the long term in mind) and how business critical it is (we put a lot more time and effort into e-commerce related code than into "share this listing on twitter" code).

As our codebase and feature set stabilize, and as the drag of tech debt has become more obvious, things like automated testing have transitioned from being discouraged to optional to encouraged and now required for significant new development. We don't enforce code coverage standards, but our team is now fully on board with the idea that tests help us move faster with higher quality.

In short, be pragmatic. Quality and speed are always a balance, and the right balance depends on company and development stage as well as the particular piece of code you're working on.


The guy I work with added pep8 checks to our unit testing, so now I can't submit code unless it meets stupid criteria like line length and function descriptions phrased very specifically. He also complains to me about using mathematically named variables in my functions. I do not believe this is code quality, and it just hampers my ability to work.


I'll take these comments to heart and think about it. However, when implementing an algorithm I prefer to write out the algorithm in mathematical notation in the comments, which obviously explains the names of the variables. If someone reading the code can't understand it that way, they couldn't anyway. I just find PEP8 very inflexible, and my workmate has slowed a one-month project down to 4 months and counting with his attitude towards the work, and we're massively behind schedule.


I'm not a python developer by trade, but there isn't anything wrong with pep8 standards. Standards are good; they remove ambiguity and make the code uniform. You shouldn't have to write a thesis of comments to describe what good variable names can convey on their own. The project is taking longer because you are fighting a _standard_ as opposed to working with it - swallow your pride and work as a team.


Please listen to that guy. Code is read much more often than it is written, and mathematically named variables are really a bad pattern that will haunt you and your team in just months; instead of just `v`, use `velocity_of_ball`, and stick to the pep8 standards. I come from another engineering background and have known many smart colleagues who write code to run many kinds of simulations, but after some months not even they can understand the code.


I'm honestly mostly fine with sticking to the PEP8 standards, but not being able to submit code simply because something wasn't written in the imperative mood is absurd to me.


So... You think that not following language code style guidelines and introducing cryptic ("mathematical") variable names helps you to do your job..? Are you even a programmer? :-)


I'm not super familiar with python, but I've tried using a pep8 auto formatter and it seemed to reformat the code to comply with the standards: https://github.com/hhatto/autopep8

Perhaps that could be useful when you have to write pep8-compliant code but don't want to manually format everything.
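If it helps, a minimal sketch of using it from Python (assuming autopep8 is installed; the command-line form `autopep8 --in-place your_file.py` rewrites the file on disk instead):

    import autopep8

    messy = "def f( x,y ):\n    return  x+y\n"
    # fix_code() returns a PEP 8-formatted copy of the source string.
    print(autopep8.fix_code(messy))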


I just tried this out. It works beautifully. Thank you so much.


I've usually been in a position to choose that trade-off, due to having a career where I'm usually reporting to someone non-technical.

I think of code quality as practices that reduce the risk of common or damaging errors. As a toy example, if you have a lot of off-by-one errors from loops, using list comprehensions instead reduces the risk of those errors, and would therefore be a code quality improvement. A larger example is writing code with well-defined interfaces that's easy to delete, in order to make it easy to replace bad decisions as our understanding of the problem improves (because there will always be some bad code.)
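The off-by-one point, concretely (a toy sketch):

    words = ["code", "quality", "vs", "shipping"]

    # Index-based loop: the bounds are easy to get wrong, e.g. writing
    # range(len(words) - 1) silently drops the last element.
    lengths = []
    for i in range(len(words)):
        lengths.append(len(words[i]))

    # Comprehension: there is no index to get wrong.
    lengths = [len(w) for w in words]
    print(lengths)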

I tend to place a very high value on code quality that affects errors I'm reasonably sure are common, and less value on practices where I can't see how they result in fewer errors.


The question is tricky, because it suggests that getting things done and having quality code somewhat differ. For me they don't. In my company, code of poor quality is just not "done". To get code into the master branch you must have made a design document to explain your assumptions and decisions, it must pass automatic checks and code review, it must be tested, and it must pass regression checks on CI. When this is at the core of how the team works, it doesn't decrease time to market, and actually helps you accelerate, as you can add new features with confidence that you won't have to stop because of breaking something.


A bit over 6 months into our startup, I find myself caring more about my git commits. I find `git add --edit` for single-change commits a great thing as a QA pass and review of my own code, and find myself usually more disciplined than others.

Plain, "assembly"-grade discipline - because you don't get anywhere without it when feeding the CPU bare operations - is not widespread enough among developers, but I think that at least some of that discipline will be vital in the long run. Code review on other people's code is so much easier if the patches are mostly unobstructed by out-of-context style changes.


In my experience code quality only matters to employers/managers when things break. The only other thing that matters to them is getting stuff out the door so money can come in, new stuff can be built, and the cycle can be repeated.

I've had situations where I talked to managers about something that would escalate in the coming months, only to have those same managers mad at me when it finally did, asking me why I didn't warn them.

I always wonder who these developers are that get time to write tests and develop new necessary skills. But I imagine these developers work on in-house projects, not for clients.


How come this question managed to appear on the front page? Asking this question and expecting a correct answer is bananas. There are so many factors to take into consideration that it is virtually impossible to make a right generalization. The question you should ask, though, is: "What should I do to achieve the goal that I've set?" If your product is unstable and bugs are piling up in the backlog, _maybe_ holding the factory line for a refactoring is right - but only just maybe. Remember that users don't care about how cool your stack is.


It's not necessarily one at the expense of the other; shipping fast can help keep things minimalistic and robust, and focusing too much on the code can lead to fragile, over-engineered products (BTDT).


My team is currently developing a legacy product that was bought from another company, and the code quality is what you'd expect when the team has changed several times already. On one hand, we have immovable deadlines like Steam sales; on the other, bugs in the build immediately impact sales through negative reviews and refunds. So, after seeing so many bugs with our multiplayer, our studio head urged us to spend some time to refactor the damn thing from scratch - while I, the lead developer, have been more cautious about it.


It all depends on the team size and where the company is at. We're a 2-member team and for all intents and purposes code quality takes a backseat. It's not outright ignored, but there are a few shortcuts that I take that will hopefully not bite us, all in the interest of shipping things faster. I end up refactoring code at night after we've pushed to the app store.

In a previous startup that I worked in code quality was outright ignored and I saw first hand the kind of technical debt that builds up because of this.


As I understand, code quality is not a goal in itself, it matters so far as it affects getting things done. So if you compare the two, of course getting things done is more important.


There is often a trade-off between getting things done in the short term and getting things done in the long term. All other things being equal (of course they never are...) you'll likely have a much harder time getting things done in the longer term if you pay less attention to code quality.

This is of course exactly the notion of technical debt, and how the debt can compound (that is, you will struggle to get things done more and more the longer you leave your debt unpaid).


Anything which handles business or personal data (so virtually everything making money) has security concerns, and IMHO the first step to good security is a well structured and clean codebase. (Note: I'm a dev, not a security specialist.)


Non-tech company in Seattle: Code quality be damned, we need new features made quickly by outsourced teams with no code-review. Then hide them with a feature flag for three months.


A whole range of attitudes, actually. I've worked in places where all stakeholders were interested exclusively in a working deployed feature, and where "featureless" architecture-refactoring periods were considered bumming around. I've also worked in places where the test setup was choking development to the extent that it was difficult to complete a feature, and in such cases the person was disciplined by an overzealous Scrum Master/QA and snitched on to the manager.


Getting things done + the expected outcome of code quality/testing (rather than that as an end in itself). Of the two metrics we've recently started looking at, #2 tries to capture this. If we can reduce it without resorting to some prescribed form of test coverage, it's perfectly fine. I think.

1. Deviation from original estimate as a percentage

2. Time spent on bug fixing as a percentage of time spent on new development

Still new, so this could go either way...


Quality and testing are less important for us before we have customers. Occasionally we will rewrite a whole application if people actually use it.


A foolish consistency is the hobgoblin of little minds. :)

These kinds of priorities will vary over time, as they have inside my own little company. I generally like to invest more in quality aspects earlier in the project so that the amount of entropy is limited. As the deadlines loom, corners are cut and compromises made to get the thing out the door.


I'd say there's a swing depending on priority. We care about code quality and having high test coverage, but sometimes the business needs something done yesterday and we sacrifice quality to get it out the door. I don't agree with this sacrifice, as I feel it hurts us in the long run by slowing down future work.


It's just my opinion, but as I've seen over years of work, if you only focus on "getting things done" you have an illusion that you ship faster. This way you will have more bugs, regressions, technical debt etc., and fixing them will take much more time than doing things the right way.


Why does getting things done preclude quality code if you know in advance that you took shortcuts? For instance, right now, to "get things done", I might keep all of my passwords for a system I'm trying to get through QA and UAT in a plain text configuration file. Knowing that's a "bad thing", I'm going to abstract how I get credentials into a method/class that everyone uses.

Then once I decide how we want to store credentials securely, we can rip it out, change one method and everyone is updated through the shared library.

Another real world example for us is logging. I knew in advance that we ultimately wanted a better logging framework for searching than just a straight text log like you get with log4net/log4j. We wanted structured, searchable logging and to eventually use Elasticsearch and Kibana. I didn't have the bandwidth to do that at the time, but I still insisted on Serilog, which would give us structured logging, and just used a text file sink. Later I added a Mongo sink for searchability just by making a configuration change, and when I get around to it, I'll add an Elasticsearch sink. We then get easily searchable logs, with application-specific properties to search on and a nice front end, without any code changes.

Even if I skip unit testing, I'm still going to insist code be structured in a way that separates business code from persistence so when we get a chance to write unit tests (if ever) it makes it a lot easier.

Yes, it takes years of experience to know when you are taking shortcuts (getting money on credit - technical debt), and to know how to get the loan at the lowest interest rate so that paying back the technical debt is less onerous.


It is important to pick exactly the complexity level needed to ship your product or service as soon as possible while allowing space/flexibility to pivot or scale. In my admittedly limited experience, purists are often at a big disadvantage with such an approach and need time to adapt.


Perhaps I work for too huge a company, but I'd say it varies greatly from team to team.


"Damn the torpedeo's"

It's so bad that, even though I've been here going on 6 years, it will never appear on my resume. I don't want my name sullied by the decisions of others.


Equally important. In my opinion we ship fast because of it, not in spite of it. Hardly any bugs in production.


The focus is 50-50, although it becomes very difficult to explain that to non-tech stakeholders.


Almost entirely on getting things done.


I work on a research CubeSat project in an academic setting, so not quite a company, but we're at an interesting crossroads/transition point regarding our philosophy about our codebase so I thought I'd share. Furthermore I should mention that this is almost entirely an undergraduate-led project (at least with regard to the spacecraft bus) of about 40 or so team members, hence we're not exactly working at professional aerospace industry standards of engineering, but we try.

For some context, almost exclusively I work on ground software, e.g. integration and test (I&T) and mission operations, mostly the former recently. We're nearing our delivery deadline for our completed flight model hardware, so since the summer we've been in the I&T phase, integrating various spacecraft subsystems and running test campaigns on the lab bench, out in the field, or in our vacuum chamber.

I lead the development of our (Python 3 and PyQt4 based) general "Ground Support Equipment" (GSE) software, which the engineers use to communicate with the spacecraft over both a physical/umbilical serial UART interface and through our radio. Getting back to the point of the question though, during this time the software side has been focused solely to get the job done, sometimes in regretfully ad hoc ways.

In some sense, working closely with the engineers building the spacecraft almost necessitates this, since as capabilities are added to the spacecraft, the GSE must be extended to accommodate those in the way of new command and telemetry abilities. Some examples of more significant additions made within the last 8 months or so include adding radio interface capabilities (in our case, sending data through a TNC for uplink and connecting to a chain of SDR software for downlink), allowing for more complex command abilities (and the requisite handling of telemetry data), and creating sequences of commands for use in hardware tests. The more mundane day-to-day work mostly involves adding new commands for the spacecraft and making telemetry easily visible/accessible to engineers.

Throughout all of this though, there's been a great deal of seat-of-the-pants/build-it-as-you-need-it style development, even during test campaigns, because things break, issues (both in the GSE software or on the spacecraft) arise and have to be tracked down, etc. And though from the start I desperately wanted to have a (software) test architecture (set up a Jenkins server and everything!), the time and the will to write tests were often not available. From the outset I tried to make the architecture of the application amenable to easy extension, but many of the engineers, at least to begin with, weren't familiar enough with Python to do so, though this has gotten better with time. When time permits, I refactor and redesign (or sometimes actually design something that started as a kludge) the back end, internals, and GUI portions. In some instances I was lucky enough to have previously designed something with enough foresight to make it easy. Sometimes not.

With all that said, the end of the hardware build and test phase is near, so we're at a transition point regarding our coding practices and standards. There's a lot of "dead time" between delivering the satellite to the launch provider and the actual launch, so we've waited on building much of our in-flight mission operations software until now. Because this is the actual mission-critical ground software, though, we're taking the design/construction/devops a lot more seriously. Just this weekend I set up a new server to host a Jenkins instance and future documentation. We'll be running unit tests, coverage checks, and a linter, and also have regular code reviews.

Most of the mission operations software design was finished long ago, but since we're using the GSE as a base, and assumptions/plans may have changed since then, there will definitely be further refactoring and design. I basically look at this software as our actual "production" code as compared to our internal design and test tooling.

I'm quite glad that we'll be taking a more systematized approach now, not only because of the importance of the in-flight ground software, but also because we've recently expanded our ops development team and will have many more people contributing simultaneously. Our external reviewers will of course be happy as well.

I didn't intend to write a novel here, but wanted to share some insight from a less-common place, even if we're not in industry. In the three years I've worked on this project, I've learned a huge amount, and have gotten to experiment in small ways with different practices, but I know I still have a lot more to learn. I'd love to hear perspectives from within academia (where code quality is notoriously neglected) and in the aerospace industry, where the stakes require a more rigorous approach.




