Level 4 isn't going around your codebase shaving milliseconds off execution time. Level 4 is knowing that not everything needs to be optimized. In fact, most code doesn't need to be optimized at all. The only parts that actually need optimization are those that have been deemed too slow for some external reason (e.g. the effect on users) or that are on a hot code path.
I'll go one step further... all new code should be written for clarity only. Optimized only if necessary.
No one cares if your function that's called once a month takes an extra few seconds to run.
I maintain 2 positions, though we can discuss the details:
- There is unoptimized code, and there's just stupid code. I'm fine with raw code without optimizations. But I hate code that repeats every mistake from the company's last 2 years and the industry's last 10 years, again and again and again.
- All other optimization should be done based on production monitoring. Or production-like load tests, if you have that luxury. For a new feature, good monitoring will allow a good developer to increase performance significantly with very small changes. For a legacy clusterfuck, good monitoring will allow devs and ops to build a strong case for a bigger rework. Which will result in much larger benefits for both teams.
I'm not saying you do, but many people I run across who have this point of view do a poor job of measuring the "if necessary" part. You aren't really prepared to detect it without some form of production performance monitoring, meaning a level of around 2.5 on the chart. I would say they should probably change this section of the spreadsheet to emphasize knowledge of performance.
This applies at virtually every level. From your web page being just a little bit faster than a competitor, to the sense of fluidity of an app, to being able to host a profitable service on a reasonable set of hardware (we've watched countless Ruby services fold when a trivial system serving a small number of users needs to be scaled across dozens of machines). Performance is one aspect that seldom goes without payoff.
So an argument in favor of optimizing whenever and wherever possible, is an argument in favor of introducing unnecessary complexity.
Optimization isn't free. It has a cost to implement (slowing development) and another cost whenever the code is read/refactored/extended/etc (also slowing development time). That second part is incurred by every developer working on that piece of code now and in the future.
So the danger in unnecessary optimization is both wasted time and slowing of development, delaying product market fit or making it difficult to respond to competitors introduction of new features (for example).
What is the more common story: the startup that died because a competitor was slightly faster, or the startup that died because it never found product-market fit?
But optimization in a modern sense seldom means implementing a section in assembly. In most cases it means a skillful, well-considered use of appropriate technologies, appropriate algorithms (e.g. a hash table instead of a simple linked list for a lookup-heavy section), appropriate database designs, and a coherent design.
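To make the hash-table point concrete, here's a toy benchmark (a sketch; the key count and numbers are purely illustrative) showing why a lookup-heavy section wants a hash-based structure rather than a linear scan:

```python
import timeit

# 100k keys stored two ways: a plain list (linear scan per lookup)
# and a set, i.e. a hash table (near-constant-time lookup).
keys = list(range(100_000))
key_set = set(keys)
target = 99_999  # worst case for the linear scan

list_time = timeit.timeit(lambda: target in keys, number=100)
set_time = timeit.timeit(lambda: target in key_set, number=100)

print(f"list scan: {list_time:.4f}s  hash lookup: {set_time:.6f}s")
```

Nothing exotic is happening: the same membership test goes from O(n) to O(1), which is exactly the kind of "optimization" that is really just appropriate design.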
When you start from day 1 thinking "performance matters", it doesn't and shouldn't demand any added complexity. But it does demand constant consideration as an implementation requirement.
Of course the counterpoint is that of course we should use appropriate technologies, algos, designs, etc. Who could argue otherwise? But whenever I've seen the premature optimization boogeyman appear in a modern context, it is usually in the context of just such a discussion. A sort of "performance is a concern for another day".
I'm not sure how to respond to this. You're using a definition I've never heard before; this sounds like basic competency being called optimization.
Nothing I've written here should be misconstrued as an argument in favor of sloppy code.
Your root post states "In fact, most code doesn't need to be optimized at all". That is de facto meaningless if we go under the assumption that optimizing itself -- ergo implementing optimally -- doesn't count as optimizing.
Take coherent design. I started off saying everything should be written for clarity, and having a coherent design is part of that.
A coherent design can mean the code has lower performance than an incoherent design.
How could this possibly be considered an optimization? An optimization now includes things that reduce performance?!
Edit: Also, "implementing optimally" is not the definition of optimization. Optimal: "best or most favorable." Optimize: "rearrange or rewrite (data, software, etc.) to improve efficiency of retrieval or processing."
Optimization in the context of "premature-optimization" doesn't refer to going back and rewriting code early. It refers to a mental concern about performance, where there is a very wrong, but persistent and common, attitude that performance is something you can add later. But in most cases that simply isn't true, and it's one of the biggest lies in this industry, trotted out like it's grizzled experience and wisdom when it's the foundation of countless project failures.
Most products are wrong out of the gate and will see on the order of 0 users. Iterate quickly until you get the features that users actually want and then in the rare case that is unoptimizable just rewrite from the ground up.
If we're really farcically talking about 0 versus 1+ user products, however, in actuality the user will do a quick test of your wrapped web app with the glacial web services and the slow responsiveness and they'll dismiss it out of hand. A competitor will come along with a sprightly alternative that has a fraction of the features and will eat your lunch.
This is the demonstrative history of our industry. Over and over again performance (and this is a relative thing -- a CRM with a 1.5s page render time is fine when everything was slow, but suddenly feels archaic and junky when a competitor is effectively instantaneous) has been the difference between winners and losers. But we still have these cheap conversations as if it's a feature that you just bake in later.
Which is secondary to actually releasing the code. That is, all code is subject to this measurement, and can be used as a signal for refactoring. Ere twas.
Instead of complaining about this thing point by point I'll just ask a question. Has anyone taken this self-serious pseudo-quantified thing and tried to actually put it into practice? Have you found any quantifiable results?
This, TBH, seems like an arbitrary yardstick for insecure people to measure themselves by.
You'll notice that a lot of the different items are directly under the control of a VP Eng/Dir Eng role. "Do you insist on code reviews? Yes we do/ No we don't" etc.
So if you find yourself in such a role, whether you've inherited a "good" organization or one that needs work, it's a good methodology for stepping through and figuring out what to improve.
The matrix feels really rather cargo-culty to me. If deployment is pushing one file then use scp. If it's coordinating a world wide fleet of servers, use something more sophisticated.
I find it funny that we've seen a "You're not Google" article today, and then this gets posted.
At the end of the day, as long as you've cut out as many manual steps as you can, without being stupid about it (don't spend two weeks creating an all singing all dancing deployment pipeline for a microsite that's going away in three weeks), you should be happy with how you're doing things, regarding deployment. If that's running scripts, so be it.
Agreed. And critically, if you can stay in a world where scp deployment (or something comparably simple) is working well, that's a good thing.
As the OP points out in the "Assumptions", if you have a company with 5 engineers working on completely different codebases, you may make a conscious decision to not be at "level 4" code review status. That doesn't mean you are incompetent; it just means you are practical.
A lot of small businesses operate efficiently by electing not to overcomplicate their development process. So maybe instead of "competency", a word like "sophistication" would be better.
Stack Overflow includes the famous Joel Spolsky 12-steps-to-better-code list in its job ads (https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...). I think it needs an update after 17 years.
This blog post might be something in that direction. I usually do a similar evaluation when I decide whether to recruit for a company. (Content marketing: I am a programmer and now I source, assess and hire engineers for tech firms and startups in Zurich, Switzerland - see https://www.coderfit.com, and https://medium.com/@iwaninzurich/eight-reasons-why-i-moved-t...)
It is rather challenging as different things carry different weight to different people and there is also the thing that what is good for a big company, or high-growth startup might not make sense for a web agency that will stay below 20 people forever.
Nevertheless, I'd be super happy to brainstorm with like-minded people about what makes a company good from an engineering perspective.
I'm the first developer but not the last so one of the things I'll be doing will be setting the engineering standards going forwards, I might drop you an email.
Most software is bad, especially at places that don't consider themselves software companies, e.g., they don't sell a software service/product, they just use software for efficiency.
I don't mean this as a judgement of the developers who wrote it. I've written plenty of software that looks bad in retrospect, from the outside. When you have the context of how decisions were made in the past, more often than not, you find a lot of small decisions that were reasonable in isolation but added together equal a big ball of mess where technical debt was rarely/never paid down, refactoring rarely/never took place, etc.
It's not that hard to convince non-technical business folks of the value of paying down technical debt, but I've found it is hard to convince them to prioritize it. It always gets planned for the future, after whatever super-urgent CEO-driven initiative is currently happening, which is quickly followed by another and another.
So yeah, I can imagine what the codebase looks like but not because of outsourced developers. You could just as easily say, "150 year old company depended entirely on overworked internal developers (you can imagine what the codebase looks like)."
I found a function yesterday that was 15 lines and reduced it to one; it was a boolean check, but they hadn't just returned the condition.
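For anyone who hasn't met this anti-pattern, it looks roughly like this (a Python sketch of the shape, not the actual PHP; function and parameter names are made up):

```python
# Before: branching just to assign True/False, then returning it.
def is_active_verbose(status, expires_at, now):
    result = None
    if status == "active":
        if expires_at > now:
            result = True
        else:
            result = False
    else:
        result = False
    return result

# After: the whole function is the condition.
def is_active(status, expires_at, now):
    return status == "active" and expires_at > now
```

Both versions return the same booleans for every input; the second just says so directly.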
It's mostly PHP, and they declare all variables and then immediately overwrite them. I'm not convinced the main programmer had a good grasp of PHP, tbh.
In any case, it's mine now. :)
Anyway, good luck. I've been in similar situations. It can be overwhelming, but if you have executive buy-in, you have a big opportunity to establish a new direction and effect significant change.
And I am sympathetic on both ends. Nobody likes admitting that you will basically always start at level 1. More amusingly, folks that have progressed to later stages forget some of the advantages you have in earlier stages. If this was a completely solved problem, we would just set the counter at max and be done with it.
Taleb had a quote about this that I have misplaced, so I'm game if someone can find it. The basic gist is that even if you know what the end result should be, that does not mean you get to skip the steps that brought it about.
In that debacle they laid off 6000 people in the West, including me. Shortly afterwards they realised that they were unable to ship or even maintain the product. Shortly after that, they were taken over by a rival. The CEO who drove all this pocketed an 8-figure sum and walked away... It's clear what the "goals" were.
I think this ultimately runs into the field that as soon as you define what the grading criteria is, then there arises the serious risk of gaming the system. Especially when that grading criteria is a proxy of the actual value that the company is creating.
That is, at the end of the day, the only thing that matters is value delivered to the customer. Any other proxy measure ultimately doesn't matter. Good for prediction capabilities up to the point that they are gamed for the same prediction capabilities.
What I don't get, is criticism in the abstract, while building something that is basically the same. And yes, I realize there are a ton of examples of punchlines on this.
In my experience, branches generally lead to people feeling like they have free rein and introducing a slew of issues (reduced code quality, difficulties in merging, broken builds, etc.)
In an organized enough team with a large enough project, tasks should be able to be carried through concurrently on a single branch for the most part.
You only get scary merge issues if you do not frequently pull in changes from your upstream branch! Daily works for me, and not once have I been let down when squash-merging back, even for large, multi-week changes. I'm always surprised to hear this is not common practice on HN.
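For anyone who hasn't tried it, the mechanics are simple. Here's a self-contained sketch of the workflow in a throwaway repo (branch names, file names, and commit messages are all made up):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q && git checkout -q -b main
git config user.email demo@example.com && git config user.name demo

echo "v1" > app.txt
git add app.txt && git commit -qm "initial"

git checkout -q -b feature/refactor       # long-lived feature branch
echo "feature work" > feature.txt
git add feature.txt && git commit -qm "feature work"

git checkout -q main                      # meanwhile, main moves on
echo "hotfix" > hotfix.txt
git add hotfix.txt && git commit -qm "hotfix"

git checkout -q feature/refactor          # the daily ritual: pull upstream
git merge -q main -m "merge upstream"     # in, so conflicts stay small

git checkout -q main                      # when the feature is done,
git merge --squash -q feature/refactor    # squash-merge it back as
git commit -qm "feature: refactor"        # one clean commit
```

The point is that each daily merge only has one day's worth of upstream changes to reconcile, instead of weeks of drift at the end.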
Branching is very useful, but you need to continuously merge, so you find conflicts as soon as they arise.
What makes sense for your team and the things you work on can be very different from mine.
The tools and process my team settles on can achieve better results despite looking more primitive on your measuring stick. Complexity is not the end goal.
The intent of this article is to propose some metrics for maturity and capacity of technology development, but careful measurement of its download shows that this site is an abomination beyond reason that shuts out users lacking broadband and fast machines with plenty of memory.
If you really want to learn about technology development competence then compare how this page is served compared to a static plain text version of the same content.
First day on the job...so you guys really don't have source control?
It focuses a lot on whether or not tools are used. IMHO, the fact that a tool is used doesn't tell you much. Often it's the wrong tool for the problem, or the tool is not being used properly.
JIRA manages issues.
Advanced use of JIRA may mean that you can use it as a risk log, or a milestone tracker, or an epic planner... but let's not mistake "advanced use of JIRA" with "advanced project management".
Just try and use JIRA to determine a critical path, or to track the impact on one project when a deliverable expected from another project slips, or to alert when the threshold of a slipped due date is exceeded. JIRA cannot even auto-promote a risk (something that may happen) to an issue (something that has happened) based on a change in circumstances (i.e. time-based overdue, or some threshold being exceeded).
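For contrast, determining a critical path is not deep magic; dedicated project management tools just do this bookkeeping for you. A toy sketch (task names, durations, and dependencies are invented):

```python
# Each task: (duration in days, list of prerequisite tasks).
tasks = {
    "chassis":    (10, []),
    "drivetrain": (15, []),
    "assembly":   (5,  ["chassis", "drivetrain"]),
    "testing":    (3,  ["assembly"]),
}

def earliest_finish(task, memo=None):
    """Length in days of the longest dependency chain ending at `task`."""
    if memo is None:
        memo = {}
    if task not in memo:
        duration, deps = tasks[task]
        memo[task] = duration + max(
            (earliest_finish(d, memo) for d in deps), default=0
        )
    return memo[task]

# The critical path is the chain that determines the project's end date;
# any slip on it slips the whole project.
project_end = max(earliest_finish(t) for t in tasks)
print(f"project finishes on day {project_end}")  # day 23 here
```

Here the drivetrain → assembly → testing chain (15 + 5 + 3 = 23 days) is critical: the chassis can slip by up to 5 days without moving the end date, but the drivetrain cannot slip at all.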
Just try and use JIRA to go beyond a single project, and to manage a program of projects delivering multiple things as part of one complex product. If that sounds like jargon, imagine trying to use JIRA to project manage the construction of a new vehicle, with multiple teams in different facilities providing the chassis, drivetrain, etc.
This is all basic stuff for good project management software, and only those not versed in project management make the mistake of thinking that JIRA is one.
On a project management tooling scale, JIRA itself would never pass the first level of maturity.
Few tech companies manage projects well. Few identify risks, few track inter-project dependencies, and few can determine whether there are resource issues (headcount availability) 6 months out because multiple projects need delivering at the same time and compete for the same internal resources.
JIRA is not a project management tool. It is a glorified issue tracker that allows the unskilled to imagine they are managing projects.
I guess that's a strong statement, but it does need to be made. JIRA can work for you, but it is only a simple tool.
If this style of project management worked, then companies that used it would be at the top. But they aren't.
The only companies that seem to use it are government projects, or corporations where software isn't their main concern. In my experience, software output by these organisations is basically awful.
They don't get that software development is more of a discovery and learning process, where you become increasingly better at serving your customers as you learn more. It's not a gather-requirements, implement, then finished thing.
This level of pedantry is unwarranted, especially given that JIRA was only mentioned in the comments. An issue tracker is a project management tool.
> imagine trying to use JIRA to project manage the construction of a new vehicle,
Which is precisely why people don't use JIRA to manage construction. I don't see how this adds any value to whether JIRA helps manage software development projects.
> Just try and use JIRA to determine a critical path, or to track the impact on one project when a deliverable expected from another project slips, or to alert when the threshold of a slipped due date is exceeded.
I thought the idea of Agile is that you focus on the mechanics of delivery and not the expectations of delivery? This comment sounds so "enterprise IT" that I don't even know where to start with dissecting it.
You seem to have a gripe with JIRA specifically and I can't pinpoint why one would get so worked up about it.
Issue tracking is a very small subset of project management. It's true that an issue tracker is a “project management tool” in the sense that it is a tool for some aspect of a project. It's not a “project management tool” in the more comprehensive sense in which that term is often, but not always, used.
Just not prince2/waterfall/pmo style project management, which I avoid at all costs.
Microsoft Project (non-cloud version)
MicroFocus (formerly Borland) Caliber, https://www.microfocus.com/products/requirements-management/...
JIRA Agile is Agile Project Management
Then there is JIRA Portfolio for multiple projects.
Software development projects often use vastly different tools for project management than bridge building.
I find critical paths at a project level fairly useless when using agile-style methods, since user story priorities can change fairly quickly based on feedback.
But can be useful at the portfolio level.