Some of this is just completely wrong. Like the entire row titled code performance.
Level 4 isn't going around your codebase shaving milliseconds from execution time. Level 4 is knowing that not everything needs to be optimized. In fact, most code doesn't need to be optimized at all. The only parts that actually need optimization are those that have been deemed too slow for some external reason (e.g. the effect on users) or that are on a hot code path.
I'll go one step further... all new code should be written for clarity only. Optimized only if necessary.
No one cares if your function that's called once a month takes an extra few seconds to run.
Also, no one cares about your loop optimization if you're wasting 20 seconds in the network because you aren't using your ORM right. No one cares about your memory efficiency if you're sending megabytes of JSON to the client (include the network time in that as well). And no one cares about your optimization if the GC goes postal.
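To make the ORM point concrete, here's a minimal sketch of the classic N+1 query pattern (SQLAlchemy assumed; the models and data are invented purely for illustration):

```python
# N+1 illustration; models and data here are made up.
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import (declarative_base, relationship, sessionmaker,
                            selectinload)

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    posts = relationship("Post", back_populates="user")

class Post(Base):
    __tablename__ = "posts"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"))
    title = Column(String)
    user = relationship("User", back_populates="posts")

engine = create_engine("sqlite://")  # in-memory DB, just for the sketch
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(User(name="ada", posts=[Post(title="hello")]))
session.commit()

# Slow: 1 query for the users, then 1 more per user (N+1 round trips),
# because each .posts access lazy-loads from the database.
for user in session.query(User).all():
    print(user.name, len(user.posts))

# Fast: the same data in 2 queries total, regardless of the number of users.
for user in session.query(User).options(selectinload(User.posts)).all():
    print(user.name, len(user.posts))
```

No amount of loop micro-optimization buys back what the second version saves on the network.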
I maintain 2 positions, though we can discuss the details:
- There is unoptimized code, and there's just stupid code. I'm fine with raw code without optimizations. But I hate code that repeats every mistake from the company's last 2 years, and from this industry's last 10 years of experience, again and again and again.
- All other optimization should be done based on production monitoring, or production-like load tests if you have that luxury. For a new feature, good monitoring will allow a good developer to increase performance significantly with very small changes. For a legacy clusterfuck, good monitoring will allow devs and ops to build a strong case for a bigger rework, which will result in much larger benefits for both teams.
> I'll go one step further... all new code should be written for clarity only. Optimized only if necessary.
I'm not saying you do, but many people I run across who hold this point of view do a poor job of measuring the "if necessary" part. You aren't really prepared to detect it without some form of production performance monitoring, which on the chart means a level of around 2.5. I would say they should probably change this section of the spreadsheet to emphasize knowledge of performance.
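And the entry cost for that kind of measurement is low. A minimal sketch of the sort of lightweight instrumentation I mean (the decorator and names are invented for illustration, not taken from any particular monitoring product):

```python
# Minimal latency instrumentation sketch; names are invented.
import functools
import logging
import time

logger = logging.getLogger("perf")

def timed(threshold_ms=100):
    """Log a warning whenever the wrapped function exceeds threshold_ms."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                if elapsed_ms > threshold_ms:
                    logger.warning("%s took %.1f ms", fn.__qualname__, elapsed_ms)
        return wrapper
    return decorator

@timed(threshold_ms=50)
def handle_request(payload):  # hypothetical hot endpoint
    ...
```

In production you'd ship those numbers to whatever metrics pipeline you already have, but even a log line is enough to make "if necessary" a measured decision instead of a guess.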
To add to that point, in my experience the laissez-faire optimize-later attitude is often pursued under the notion that you'll just optimize that hot code path or that single function and everything will be glorious. The best of both worlds. In reality such implementations often die a death by a thousand cuts, where endemic poor performance makes them impossible to fix without enormous re-engineering. And users are delighted when a competitor's product is just a little bit faster.
This applies at virtually every level. From your web page being just a little bit faster than a competitor, to the sense of fluidity of an app, to being able to host a profitable service on a reasonable set of hardware (we've watched countless Ruby services fold when a trivial system serving a small number of users needs to be scaled across dozens of machines). Performance is one aspect that seldom goes without payoff.
It's not that simple. Optimization is almost always additional complexity.
So an argument in favor of optimizing whenever and wherever possible, is an argument in favor of introducing unnecessary complexity.
Optimization isn't free. It has a cost to implement (slowing development) and another cost whenever the code is read/refactored/extended/etc (also slowing development time). That second part is incurred by every developer working on that piece of code now and in the future.
So the danger in unnecessary optimization is both wasted time and slowed development, delaying product-market fit or making it difficult to respond to a competitor's introduction of new features (for example).
Which is the more common story: a startup died because a competitor was slightly faster, or a startup died because it never found product-market fit?
I don't disagree with what you wrote, or the spirit of its meaning.
But optimization in the modern sense seldom means implementing a section in assembly. In most cases it means a skillful, well-considered use of appropriate technologies, appropriate algorithms (e.g. a hash table instead of a simple linked list for a lookup-heavy section, appropriate database designs, etc.) and a coherent design.
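A throwaway illustration of the data-structure point (plain Python; absolute numbers will vary by machine):

```python
# Membership tests: a list scans linearly, a set hashes. Same data, same
# query, wildly different cost on a lookup-heavy path.
import timeit

items = list(range(100_000))
as_list = items
as_set = set(items)

print(timeit.timeit(lambda: 99_999 in as_list, number=1_000))  # linear scan
print(timeit.timeit(lambda: 99_999 in as_set, number=1_000))   # hash lookup
```

Choosing the set up front adds no complexity over the list; it's simply the appropriate structure.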
When you start from day 1 thinking "performance matters", it doesn't and shouldn't demand any added complexity. But it does demand constant consideration as an implementation requirement.
The counterpoint, of course, is that obviously we should use appropriate technologies, algorithms, designs, etc. Who could argue otherwise? But whenever I've seen the premature-optimization boogeyman appear in a modern context, it is usually in the context of just such a discussion. A sort of "performance is a concern for another day".
This is the cycle of every "premature optimization" discussion, ever. Someone discounts optimization, but when countered with optimizations states that they aren't actually optimizations.
Your root post states "In fact, most code doesn't need to be optimized at all". That is de facto meaningless if we go under the assumption that optimizing itself -- ergo implementing optimally -- doesn't count as optimizing.
You've thrown a lot of stuff under the label of optimization.
Take coherent design. I started off saying everything should be written for clarity, and having a coherent design is part of that.
A coherent design can mean the code has lower performance than an incoherent design.
How could this possibly be considered an optimization? An optimization now includes things that reduce performance?!
Edit: Also, "implementing optimally" is not the definition of optimization. Optimal: "best or most favorable." Optimize: "rearrange or rewrite (data, software, etc.) to improve efficiency of retrieval or processing."
I think you should revisit the line in the linked page that you disagreed with so strongly. It doesn't say "go back and rewrite in assembly", but simply asks that you develop with performance in mind (with an awareness of the costs of the choices you are making). Your comment was that performance effectively doesn't matter, deal with that later.
Optimization in the context of "premature-optimization" doesn't refer to going back and rewriting code early. It refers to a mental concern about performance, where there is a very wrong, but persistent and common, attitude that performance is something you can add later. But in most cases that simply isn't true, and it's one of the biggest lies in this industry, trotted out like it's grizzled experience and wisdom when it's the foundation of countless project failures.
If we're just throwing around anecdotes, I've seen significantly more projects fall behind due to premature optimization than fail to scale due to an inability to optimize the app.
Most products are wrong out of the gate and will see on the order of 0 users. Iterate quickly until you get the features that users actually want, and then, in the rare case that the result is unoptimizable, just rewrite from the ground up.
The term "premature optimization" is a peeve of mine, as it's effectively a meaningless, prejudicial term.
If we're really, farcically, talking about 0-user versus 1+-user products, however, in actuality the user will do a quick test of your wrapped web app with the glacial web services and the slow responsiveness, and they'll dismiss it out of hand. A competitor will come along with a sprightly alternative that has a fraction of the features and will eat your lunch.
This is the demonstrative history of our industry. Over and over again performance (and this is a relative thing -- a CRM with a 1.5s page render time is fine when everything was slow, but suddenly feels archaic and junky when a competitor is effectively instantaneous) has been the difference between winners and losers. But we still have these cheap conversations as if it's a feature that you just bake in later.
> You aren't really prepared to detect it without some form of production performance monitoring
Which is secondary to actually releasing the code. That is, all code is subject to this measurement, and can be used as a signal for refactoring. Ere twas.
Oh, this old chestnut. It keeps coming around again and again and again.
Instead of complaining about this thing point by point I'll just ask a question. Has anyone taken this self-serious pseudo-quantified thing and tried to actually put it into practice? Have you found any quantifiable results?
TBH, this seems like an arbitrary yardstick for insecure people to measure themselves by.
I don't agree with all the levels and their descriptions, but I do think this is an excellent way for a senior engineering leader to ask themselves "What can I do to make the engineering team better?"
You'll notice that a lot of the different items are directly under the control of a VP Eng/Dir Eng role. "Do you insist on code reviews? Yes we do/ No we don't" etc.
So if you find yourself in such a role, whether you've inherited a "good" organization or one that needs work, it's a good methodology for stepping through and figuring out what to improve.
So I use custom-written scripts to deploy instead of Capistrano. Automatically I'm a lower-level developer according to this chart. And somehow using Docker is objectively better than Capistrano? I disagree. There are pros and cons. Docker adds complexity and has "setup costs". Just because someone uses the simpler tool doesn't mean they are any less of a developer. If anything, it shows pragmatism and humility.
The matrix feels really rather cargo-culty to me. If deployment is pushing one file then use scp. If it's coordinating a world wide fleet of servers, use something more sophisticated.
I find it funny that we've seen a "You're not Google" article today, and then this gets posted.
At the end of the day, as long as you've cut out as many manual steps as you can, without being stupid about it (don't spend two weeks creating an all singing all dancing deployment pipeline for a microsite that's going away in three weeks), you should be happy with how you're doing things, regarding deployment. If that's running scripts, so be it.
> The matrix feels really rather cargo-culty to me. If deployment is pushing one file then use scp. If it's coordinating a world wide fleet of servers, use something more sophisticated.
Agreed. And critically, if you can stay in a world where scp deployment (or something comparably simple) is working well, that's a good thing.
I don't know if "competency" is the right word to use.
As the OP points out in the "Assumptions", if you have a company with 5 engineers working on completely different codebases, you may make a conscious decision not to be at "level 4" code review status. That doesn't mean you are incompetent; it just means you are practical.
A lot of small businesses operate efficiently by electing not to overly complexify their development process. So maybe instead of "competency" a word like "sophistication" would be better.
This blog post might be something in that direction. I usually do a similar evaluation when I decide whether to recruit for a company. (Content marketing: I am a programmer and now I source, assess and hire engineers for tech firms and startups in Zurich, Switzerland - see https://www.coderfit.com and https://medium.com/@iwaninzurich/eight-reasons-why-i-moved-t...)
It is rather challenging, as different things carry different weight with different people, and what is good for a big company or a high-growth startup might not make sense for a web agency that will stay below 20 people forever.
Nevertheless, I'd be super happy to brainstorm with like-minded people about what makes a company good from an engineering perspective.
I just got hired on as developer #1 at a 150-year-old company that has, until me, depended entirely on outsourced developers (at great expense, and you can imagine what the codebase looks like).
I'm the first developer but not the last, so one of the things I'll be doing will be setting the engineering standards going forward. I might drop you an email.
Slight tangent, but I'll just comment that the business world runs on generally bad software – if they even have "software" and don't just use insanely complex spreadsheets. It's one of those surprising things I've learned doing consulting for the last decade+. At first, I thought it was just the clients I happened to have, but given enough data points, a pattern emerged.
Most software is bad, especially at places that don't consider themselves software companies, e.g., they don't sell a software service/product, they just use software for efficiency.
I don't mean this as a judgement of the developers who wrote it. I've written plenty of software that looks bad in retrospect, from the outside. When you have the context of how decisions were made in the past, more often than not, you find a lot of small decisions that were reasonable in isolation but added together equal a big ball of mess where technical debt was rarely/never paid down, refactoring rarely/never took place, etc.
It's not that hard to convince non-technical business folks of the value of paying down technical debt, but I've found it is hard to convince them to prioritize it. It always gets planned for the future, after whatever super-urgent CEO-driven initiative is currently happening, which is quickly followed by another and another.
So yeah, I can imagine what the codebase looks like but not because of outsourced developers. You could just as easily say, "150 year old company depended entirely on overworked internal developers (you can imagine what the codebase looks like)."
Yep, there are definitely exceptions to my "generally reasonable in isolation" idea, where the software is just bad, in any context. I've seen 'em. Hell, I've probably written 'em.
Anyway, good luck. I've been in similar situations. It can be overwhelming, but if you have executive buy-in, you have a big opportunity to establish a new direction and effect significant change.
It's hilarious how some developers reflexively criticize things like the CMM in the abstract, and yet are unable to provide any hard data to show that specific elements of the CMM produce bad results.
One company I worked for decided to offshore and opened an office in a faraway land. Six months later that office, stuffed to the gills with fresh grads and having never shipped a single line of code to a customer or into production, achieved CMM level 5 certification. It's a complete farce.
There is a distinction, though, between feeling that certification can be gamed and thinking that the goals outlined are poor.
And I am sympathetic on both ends. Nobody likes admitting that you will basically always start at level 1. More amusingly, folks that have progressed to later stages forget some of the advantages you have in earlier stages. If this was a completely solved problem, we would just set the counter at max and be done with it.
Taleb had a quote about this that I have misplaced, so I'm game if someone can find it. The basic gist is that even if you know what the end result should be, that does not mean you get to skip the steps that brought it about.
> There is a distinction, though, between feeling that certification can be gamed and thinking that the goals outlined are poor
In that debacle they laid off 6000 people in the West, including me. Shortly afterwards they realised that they were unable to ship or even maintain the product. Shortly after that, they were taken over by a rival. The CEO who drove all this pocketed an 8-figure sum and walked away... It's clear what the "goals" were.
I think this ultimately runs into the problem that as soon as you define the grading criteria, there arises a serious risk of gaming the system. Especially when the grading criteria are a proxy for the actual value that the company is creating.
That is, at the end of the day, the only thing that matters is value delivered to the customer. Any other proxy measure ultimately doesn't matter; it's good for prediction only up to the point that it gets gamed for those same prediction capabilities.
The amusement to me is that I understand and agree with criticism in the abstract. Even criticism in the concrete for a lot of these.
What I don't get, is criticism in the abstract, while building something that is basically the same. And yes, I realize there are a ton of examples of punchlines on this.
Correct me if I'm wrong, but for VCS, isn't it generally bad practice to unnecessarily branch off and introduce complex structures?
In my experience, branches generally lead to people feeling like they have free rein and introducing a slew of issues (reduced code quality, difficulties in merging, broken builds, etc.)
What do you mean by "unnecessarily branch"? At least for git the convention used to be "new branch for each discrete task". Atlassian tools even support this: you have a "Create branch" option for a JIRA task. We use it and are very happy with it. Take a task, create a branch (which automatically moves the related JIRA task to "In progress"); when done, create a pull request (the task status is once again updated automatically); when merged, kick off an automatic build and deployment.
It all would be a lot messier without branching.
I think it really depends on the atomicity of the said "tasks" and how often branches get merged back into the master branch. It's very easy for branches to get abused, and merging a long-standing branch into the master branch is always an expensive task that's bound to have scary issues.
In an organized enough team with a large enough project, tasks should be able to be carried through concurrently on a single branch for the most part.
> It's very easy for branches to get abused, and merging a long-standing branch into the master branch is always an expensive task that's bound to have scary issues.
You only get scary merge issues if you do not frequently pull in changes from your upstream branch! Daily works for me, and not once have I been let down when squash-merging back, even for large, multi-week changes. I'm always surprised to hear this is not common practice on HN.
Yes, this is a good point. Branching goes against the "continuous integration" principle. It reminds me of when, back in the day, every developer had his/her own copy and everything was put together at the end.
Branching is very useful, but you need to continuously merge, so you find conflicts as soon as they arise.
If branches live too long that's usually a symptom of a deeper team dysfunction. They haven't done a good job of breaking down large user stories into small, discrete user stories.
There is no one-size-fits-all approach for measuring competence. Different companies and different teams have different dynamics.
What makes sense for your team and the things you work on can be very different from mine.
The tools and process my team settles on can achieve better results despite looking more primitive on your measuring stick. Complexity is not the end goal.
There are many valid points raised here, though there is clearly room for argument with every metric. What strikes me about this article is the site and content itself. Reading this article loads layers upon layers of javascript which collectively download megabytes of content, the vast majority of which is utterly useless crap that contributes absolutely nothing to the experience of reading the article.
The intent of this article is to propose some metrics for the maturity and capacity of technology development, but careful measurement of the download shows that this site is an abomination beyond reason, one that shuts out users lacking broadband and fast machines with plenty of memory.
If you really want to learn about technology development competence then compare how this page is served compared to a static plain text version of the same content.
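If you want to put a rough number on it yourself, a crude sketch (the URL is a placeholder; a real audit would also sum every JS/CSS/image subresource, which browser devtools' network tab shows directly):

```python
# Crude page-weight check: bytes on the wire for the base HTML alone.
import urllib.request

url = "https://example.com/"  # placeholder, not the actual site
req = urllib.request.Request(url, headers={"Accept-Encoding": "identity"})
with urllib.request.urlopen(req) as resp:
    body = resp.read()
print(f"HTML alone: {len(body) / 1024:.0f} KiB")
```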
JIRA? It is sad that even in 2017 people still promote JIRA. There are better alternatives to that ugly, clunky, stupid piece of junk. Clubhouse is one example.
Got a job at a level 1 once, early in my career (2 years' experience). Before I started, the COO told me that his team was the best and, given I was just starting out in my career, that I had a lot to learn from them. Massive warning sign in hindsight.
First day on the job...so you guys really don't have source control?
There's a story somewhere of Paul Graham making live changes to the Viaweb production Lisp code while in the middle of a customer tech support call. OK, the bug should be fixed, try it now...
And that's precisely the problem. A handful of people can work this way and do great. But their visibility will make others try to emulate them, usually with disastrous consequences.
Nice. I don't agree with lots of things but it is a great way to think about how software organizations work.
It focuses a lot on whether or not tools are used. IMHO, the fact that a tool is used doesn't tell you much. Often it's the wrong tool for the problem, or the tool is not being used properly.
Another instance of Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure." When companies introduce certain tools because they heard that successful companies use that tool, rather than because they want to use it to improve their processes, it won't help much.
Ruined in the very first line by suggesting that JIRA is somehow a project management tool.
JIRA manages issues.
Advanced use of JIRA may mean that you can use it as a risk log, or a milestone tracker, or an epic planner... but let's not mistake "advanced use of JIRA" with "advanced project management".
Just try and use JIRA to determine a critical path, or to track the impact on one project when a deliverable expected from another project slips, or to alert when the threshold of a slipped due date is exceeded. JIRA cannot even auto-promote a risk (something that may happen) to an issue (something that has happened) based on a change in circumstances (e.g. something becoming overdue, or some threshold being exceeded).
Just try and use JIRA to go beyond a single project, and to manage a program of projects delivering multiple things as part of one complex product. If that sounds like jargon, imagine trying to use JIRA to project-manage the construction of a new vehicle, with multiple teams in different facilities providing the chassis, drivetrain, etc.
This is all basic stuff for good project management software, and only those not versed in project management make the mistake of thinking that JIRA is project management software.
On a project management tooling scale, JIRA itself would never pass the first level of maturity.
Few tech companies manage projects well. Few identify risks, few track inter-project dependencies, and few can determine whether there will be resource issues (headcount availability) 6 months out because multiple projects need delivering at the same time and compete for the same internal resources.
JIRA is not a project management tool. It is a glorified issue tracker that allows the unskilled to imagine they are managing projects.
I guess that's a strong statement, but it does need to be made. JIRA can work for you, but it is only a simple tool.
"Few tech companies manage projects well." - That's because your comparing it to traditional style project management. It rarely works well in software.
If this style of project management worked, then companies that used it would be at the top. But they aren't.
The only companies that seem to use it are government projects, or corporations where software isn't their main concern. In my experience the software output by these organisations is basically awful.
They don't get that software development is more of a discovery and learning process, where you become increasingly better at serving your customers as you learn more. It's not a gather-requirements, implement, done kind of thing.
Not sure why you're getting downvoted. That is a good book. I wouldn't necessarily follow everything in it, but every software manager could learn something from it.
> Ruined in the very first line by suggesting that JIRA is somehow a project management tool.
This level of pedantry is unwarranted, especially given that JIRA was only mentioned in the comments. An issue tracker is a project management tool.
> imagine trying to use JIRA to project-manage the construction of a new vehicle,
Which is precisely why people don't use JIRA to manage construction. I don't see how this adds any value to whether JIRA helps manage software development projects.
> Just try and use JIRA to determine a critical path, or to track the impact on one project when a deliverable expected from another project slips, or to alert when the threshold of a slipped due date is exceeded.
I thought the idea of Agile is that you focus on the mechanics of delivery and not the expectations of delivery? This comment sounds so "enterprise IT" that I don't even know where to start with dissecting it.
You seem to have a gripe with JIRA specifically and I can't pinpoint why one would get so worked up about it.
Issue tracking is a very small subset of project management. It's true that an issue tracker is a "project management tool" in the sense that it is a tool for some aspect of a project. It's not a "project management tool" in the more comprehensive sense in which that term is often, but not always, used.
What tools do you recommend for project and requirements management, preferably ones that have an on-premise option to protect sensitive data like risk and talent dependencies? Some options:
Then there is JIRA Portfolio for multiple projects.
Software development projects often use vastly different tools for project management than bridge-building does.
I find critical paths at a project level fairly useless when using agile-style methods, since user story priorities can change fairly quickly based on feedback.
Yes the specific comment about Jira seems odd. It works, but there's nothing particularly great about it. When we did a competitive evaluation CA Agile Central (Rally) scored much higher and I find it works pretty well, at least for large complex programs.
If you meet their assumptions it might be useful, but it's a very opinionated list which is likely not applicable to many companies. I am not sure you can make such a list that is generally applicable to enough companies to be useful.