Considering that the author writes on his resume that he led several teams, it startles me a bit to see how easily he puts all the blame on the developer and none on the other team members (including him) and the management.
The way I see it, as a senior developer or team lead it is your job to make sure that your junior level programmers are doing good work, and if they don't, you either help them to improve or (if that's not possible) let them go. Whenever I hear a story like "this person did bad work for six months so we had to delete all his code changes and let him go" I immediately know that something is very wrong with the management of the software project in question, as in a good team structure it's just not possible that a single person does "negative work" for several months, let alone a year without being noticed.
Also, whether a developer will do good or bad work does not only depend on his/her ability, but also on the circumstances under which he/she works. Some people need more oversight and a tighter feedback loop to be productive, while others can work in a more independent way. In any case, with a good team process it is really hard for a single person to do negative work, and if it happens it's usually the fault of at least several people (due to lack of clear standards, missing feedback cycles or lack of ownership).
So if you're a senior developer or team lead, don't put the blame on other programmers, but instead ask yourself how you can build a process that makes it really hard for individual programmers to do bad work.
Yes, I was reading that blog and trying to hear the voice of the frustrated junior dev who is blamed for everybody else's poor communication, who is not being supported, and who is used as a scapegoat for the company's wider problems (structure, feedback, planning). Someone who spends their days wondering where it went wrong, why no one is willing to work with them, and thinking about how fast they can get out of there.
This article reeks of bullying, passing the buck and misplaced egos, imo. The solution provided focuses on how to avoid hiring 'problem' employees; unfortunately there's nothing about how to cope with the situation once it has arisen. People can change after the hiring process due to personal reasons - death, divorce, all sorts of things. The author's company does not sound like a good place to work, or one staffed with nice people.
I think calling a developer 'awful' is rarely accurate.
If that developer has only ever made two contributions to the project, then obviously that developer isn't in the right mindset to make useful contributions. I find that when I switch to part-time on a project, my performance degrades. Producing good code requires that you understand the existing code REALLY well (and that implies that you have to spend a lot of time just reading it) - If you don't understand the existing code really well, then you cannot produce good work.
One of the worst things I keep witnessing is managers rushing new employees into finishing tasks quickly - This is really bad because new employees don't understand enough about the code to be productive; when you rush them, you essentially force them to implement dirty hacks which will end up costing way more in the medium and long term.
Most managers don't like to pay engineers to learn/understand stuff - they want to pay for code that works. Engineers should be treated more like managers and less like office cattle.
I've seen so many horrible managers in this industry and they always get away with it by blaming the engineers.
As someone with a fantastic manager: it's great. He handles the planning and resource allocation for projects and makes sure that I have everything I need to be able to develop, this includes making sure that any external dependencies (like people not on our team) do their stuff and that the team as a whole is maximizing our time.
Whenever for some reason something is blocking me that isn't directly code related, he's the best point of contact to get stuff moving again.
> We both agree that there is something to be done.
Right, and if there is enough of that to be done it makes more sense to hire someone to do it specifically, so that the senior/lead developer's time can be spent more effectively.
> I personally noticed that it doesn't have to be assigned to a full-time manager position.
In my experience it's much more about the quality of the manager. Unless you're in a company of only 10 people or so there are almost always more things a good manager can find to help with.
On the other hand, even having every member of the dev team being swamped with management/coordination work is better than putting a bad manager over them.
It really depends on the corporate structure. The unfortunate fact of the matter is that, at least in some dosage, middle management is a practical necessity. Many organizations implement far more than they need for the sake of empire building and ego stroking.
Often, the team lead or senior dev is only aware of their own project, but not of the status of other projects and how they fit into the whole scheme for the company.
Management has a higher level view and often more experience to make the right decisions for the company, that a dev might not be able to make.
Those are all part of a developer's job. I don't think they should be strictly limited to "that guy who's got a special job title" (architect, manager, lead, whatever).
The way I interpreted larrik's comment, he meant things like interfacing with other layers in the company, interfacing with other managers about the team's capacity for support, etc. Most of the things you mention I would include in "development". What you seem to call development I would specifically call implementation.
Although, it can be beneficial to have different people for a lot of the things you mention, especially as a project grows in size and scope. And you need at least two people if you include review; reviewing one's own work is far less effective. Same with testing. Plenty of great devs are not great or don't like presenting, etc.
If you do go the route of splitting responsibilities, you're going to be more efficient if you have someone coordinating their efforts, reducing the communication overhead for the whole team by being a central point of synchronization. That's one of the responsibilities of a good manager.
The manager isn't a good politician if you don't believe his bullshit.
If the manager himself is a founder, former programmer, or otherwise both intimately familiar with and committed to the process that actually produces the outcome, he may understand the importance of the work and really be working to ensure his team has what it needs so they can do a good job for the company. He's still working for the company, but he appreciates and understands engineering, and since you're working together for common goals from a common understanding, your interests are aligned.
You can almost always expect people to consider their field of expertise the nexus of the company's lifeblood, the critical component that everything will die without. Everything else will be considered tertiary. In the case of most managers, their expertise is schmoozing, and that's going to be their primary focus. To the extent that good engineering jeopardizes schmoozing, you can and should expect the manager to strongly prefer schmoozes.
I've worked with a lot of really good managers who understand the role their group plays without having to delude themselves into thinking their team or function is primary. I think that's something that's really important regardless of the industry.
The most enjoyable jobs I've had have been those where there was good, self-aware management. While bad management is obviously a horrible time all-around, I've also had very bad experiences in the so called flat, self-managed models. For those to work, the greater majority of the team need to actually be self-aware, good (at least self) managers.
I'm not talking about the group or the team. I'm talking about the contribution that each person considers their primary value-add to the company. Every worker will say of their own class of workers "this company would fall apart immediately without us", exposing at least an implied, though it's often explicit, belief that the class of contributions they are personally making are the most valuable.
The issue is not "My engineering group is most important". The issue, rather, is "My engineering group, and consequently the company, would cease to exist if I stopped performing my core competency".
In the case of managers, this core competency is schmoozing. Thus, when the engineering management must choose between engineers who advocate a better/safer/more effective engineering path and their bosses who advocate not that, they will choose to schmooze and give their bosses good news that generates tingly feelings rather than bad news that generates angry feelings.
This is where the manager puts his craft into play. The goal is to manipulate the engineering team into a set of compromises that will allow the manager to generate good feelings for the bosses. Because the manager is not an engineer himself, he doesn't understand the ramifications of these compromises and/or whether it's actually the correct and wise way to proceed. Yet, he will still feel that he has saved the company (and possibly his team's paycheck) for exercising his core competency: manipulating subordinates into taking uncomfortable shortcuts while keeping the bosses satisfied. Middle management is a massive distorter.
Erm, that isn't how corporations work. First, team lead doesn't mean manager, and even then, in some companies first- and second-line managers have very little say. HR needs to be involved, etc., which creates more work. Then there are the people who simply don't care - contractors on a gravy train, or outsourced people. I've been there; there is nothing you can do in this case.
I think it's fair to say from your resume/LinkedIn that you have been shielded from such effects for most of your career. Consider that a blessing, but don't assume you always have so much control over your environment. A lot of us are more like mercenaries, making good money out of bad situations while we can.
That's a valid point of course, thanks for pointing this out! I assumed that the organization in which you work is at least partially functional and that management has an interest in ensuring good working conditions, which as you say is not always the case.
But even if you're not in a position to do hiring/firing decisions there is still a lot you can do to make it harder for other people to do bad work. One of the easiest things is to agree on a standard for your codebase with the other developers, and create a process through which you monitor this standard (e.g. through code reviews).
If that's not possible due to resistance from management or a dysfunctional organization, you should consider leaving that position as soon as possible as it is not a good environment to work in (and as the author says, luckily there are enough opportunities for good programmers these days). But again, the problem here would not be the single bad programmer, but the setup of the whole organization, which is unfortunately much harder to fix.
I agree. Funny story with the code review, we had that. But several devs kept submitting broken/bad code, so they could tell their manager "oh, I'm waiting on a code review". The loss of productivity due to other devs trying to do constructive code reviews was staggering.
One guy was asked to leave a project after he submitted 17 updates to the same commit, and all of them failed basic (and luckily automated) linting. Another contractor titled all his commit messages a single word: "update". I've even worked on a project where the build system was "maintained" by contractors - talk about a conflict of interest. You can imagine how transparent it was.
Of course there are developers who bring problems to projects.
However, many of the problems are organizational: I have seen some projects place sizeable incentives on any bug fix (e.g., highly valuing the number of such submissions per week), which encourages people to do quick fixes for simple things rather than important ones, and does not penalize poorly performing software and convoluted code. If one finds oneself in this environment, IMO it is best to run away.
I wonder why such organizational behavior is so difficult to change -- I have seen, twice, smart people brought in to clean up a project stuck in this state. They were given the freedom to run it the way they saw fit (upper management knew the project was failing), and both times they failed to change it.
> I wonder why such organizational behavior is so difficult to change -- I have seen, twice, smart people brought in to clean up a project stuck in this state. They were given the freedom to run it the way they saw fit (upper management knew the project was failing), and both times they failed to change it.
It's a good question. I think it's often a hard sell that the change, while more work in the short run, is less effort in the long run. Why should I care? I'm still getting paid the same. Of course, that's a facile argument and easy to refute: professional development.
In my experience organisational behaviour has a root cause. You have to find that and understand it to change it, you can't just change the organisational behaviour but leave the root cause. To an outsider, the root cause probably isn't obvious. Heck, sometimes by the time you understand it yourself, you've been there so long you might as well move on. But often, it's also the devs who refuse to get political. Who understands the system better, and why should somebody else fix it? But then the obvious excuses happen. "Too busy", "Not my problem", "Nothing I can do". Devs start leaving the project or the company. Now, it's never going to get changed. Apathy is the worst symptom.
With the bad/broken code, it again sounds like poor management - instituting a 'bums in seats' checklist item of "was present for a code review", rather than the qualitative check of "had done work that was worth reviewing".
Sure, but any metric or process put in place is vulnerable to being gamed. In some way, I respect individuals who do this. Just don't bother my team.
But managers aren't intrinsically bad people, they just have different motivations. And they have their own problems caused by politics. Ideally devs would have tools to combat this problem directly. But life isn't always fair.
>I assumed that the organization in which you work is at least partially functional and that management has an interest in ensuring good working conditions, which as you say is not always the case.
I would say this is quite rarely the case. It's amazing how many companies are able to slog through with completely braindead personnel practices. It seems most places are either far too reluctant or far too quick to terminate employees.
I know of a company that refused to fire someone who not only destroyed a production database but, on a separate occasion, accidentally facilitated an ongoing leak of the source code of all web applications (which was only discovered when it was revealed to the CEO by a script kiddie with a guilty conscience). Because this guy is well-connected, he lives to imperil the company another day (usually in slightly less dramatic ways).
I also know of a company that fires otherwise good programmers for not being at their desks by 9:05 a few times per year. Those people are terrible.
It seems that a company that is actually reasonable about its personnel practices is very rare indeed.
>If that's not possible due to resistance from management or a dysfunctional organization, you should consider leaving that position as soon as possible as it is not a good environment to work in (and as the author says, luckily there are enough opportunities for good programmers these days). But again, the problem here would not be the single bad programmer, but the setup of the whole organization, which is unfortunately much harder to fix.
You're right that often you can't fire someone for legal or political reasons, which is unfortunately all too common. However, I must oppose your proposal to just leave the org.
As discussed above, there are not many sanely-run companies out there. In my career, I used to take the "just leave" advice, but I've found this really stunts your ability to grow, personally and professionally, along multiple axes.
Unreasonable people are, unfortunately, a fact of life. I used to think a smart company could mostly avoid having them in the ranks. I still believe a smart one could more or less do this, and that damage from the stragglers can be mitigated, but I now just believe smart companies are unicorns and you should never expect to be working at one. This is a safe/important assumption because "smart companies" can turn dumb real quick, especially when/as visionary leadership is replaced by conventional business school cogs.
One must become a politician, which will enable one to flourish in the chaos, especially if passable political craft is combined with actual technical skill and good judgment.
If you're stuck with bad talent that can't be fired, you should redirect them into a side channel where the amount of damage they do is mitigated. There's actually a lot of bad employees who are perfectly OK with this. Most of them are primarily interested in security, since they know it's difficult to find new employers with their limited skillsets. Giving them their own little irrelevant kingdom will help with that and also keep the high-level bosses happy because they don't have to fire their cousin and make Thanksgiving awkward for the rest of eternity, or deal with a lawsuit from an employee who has already threatened to take the company to the cleaners.
These are all long term solutions, and good ones. But life isn't always that simple. Why let one bad dev ruin it for everybody else? It's a bit like one person being rude. Would you leave for that? I meant it more in a "accept it and move on" way. To be honest, if you're in a similar situation at MegaCorp Inc, you already have to put up with many things out of your control.
But should we as professionals have an answer to this problem? Or is it a management problem? (Although that's a cop-out, because that's still part of our profession.) Is it human nature? Do other professions suffer from similar situations? These are interesting questions to which I have no answer yet.
I have dealt with the "negative work" conundrum myself and just wanted to give some insight from my point of view on your comment.
1) It isn't always junior-level programmers causing the negative work. When it is a junior-level programmer, you are 100% correct that part of the blame (or MOST of the blame) falls on the lead/senior programmer. The lead/senior failed to give the junior the appropriate structure, code review, etc. to make their code maintainable and "healthy". It happened to me just this year: I let down one of the junior engineers on the team in a massive way, and it showed.
2) What happens in the scenario that a senior/lead person is the one creating the negative work? Adding more oversight/tighter feedback loops constrains this person and will make them worse off than not. Anyone can make the argument that this person should not be a lead or a senior, but due to any number of business alterations (organizational changes, leadership changes, culture changes, etc) it is 100% a plausible scenario. What do you do then? How do you manage this piece of the puzzle?
Scenario 2 is where I have seen the most "negative work" created without finding optimal paths for fixing the issues at hand.
Bonus points if the senior/lead in scenario 2 really, really likes coding, and adds technical debt, bugs and ugly code at a rate faster than the other members can catch up with.
You are right, but I can also understand his 'influential developer' negativity effect. In my place there's one influential developer (the team lead) who can veto every decision even when the other juniors/seniors under him have already reached consensus. He doesn't want the existing process to be altered. Now he's alone, as everyone is leaving.
Had that happen, too. I had two tech leads over me who didn't want to learn anything new. One guy was in love with Apache Camel; the other was just a J2EE guy from the early 2000s. Frustrating.
Have you ever had to deal with a bad worker, though? In one company where I was a tech lead we had a guy who did nothing. When I assigned him work, he turned in garbage that someone else had to re-do (often me, working at home). He would go out to his car after lunch and get stoned and then come back in and fall asleep at his desk!
On paper, the team had four developers; in reality, there were only three. It was a huge headache trying to explain to PMs and other managers why our team didn't really have the bandwidth it appeared to have.
I therefore complained to our manager about this guy. The response was long in coming and when it finally arrived, we were to place the non-worker on an "improvement plan" which was mandated by HR. That means I had to work even harder now because I had to put together said plan and basically try to micro-manage this guy while also re-doing all his code. The improvement plan didn't work. His output was just as bad as always. This went on and on for months. Mercifully, the guy was finally let go after almost a year of dicking around.
All of this somehow reflected badly on me, however! My manager didn't like that I "made waves" in the company by requiring this guy to do work. He also didn't like having to smooth over all the ruffled feathers among the other teams who couldn't understand why we were always behind schedule.
An even more egregious form of negative work is a developer who is stuck using out of date programming practices AND has a large amount of influence at a company.
At the other extreme is the developer who is so entranced by "newer is better" mentality that they rewrite everything in an attempt to conform to "latest best practices", increasing complexity massively while introducing a bunch of bugs and huge dependencies no one ever actually needed. I've experienced that (and had to undo the mess) a few times.
Relatedly, just as there are "10x" developers, there are "-10x" as well --- it takes the average developer 10 times as long to fix as one of these takes to break.
At a previous company, the tech influencers believed in the archaic "do everything in the database." While we were technically using the .Net stack, we weren't allowed to do any actual business logic in C#. Instead it had to all be done in MS-SQL procedures (or at least at much as possible with very little CLR glue).
Similarly, at my current company we had a product where the initial devs wanted to jump on the RxJS and Socket.io bandwagons. The only problem was that the rest of the company was using standard REST endpoints and promises to do the same thing, so any new devs who joined that team suddenly had massive cognitive overhead to overcome. Any changes to the codebase were done by people who only half understood what they were doing, and so the complexity compounded. Thankfully, I was given the chance to rewrite the whole codebase to match what our other products looked like, so now the code is much saner to work with.
Can you explain why you feel that "do everything in the database" is archaic? A lot of logic (especially authentication logic) can be put in the database only. Not to mention that I won't trust anything that only has application level security, and nothing at database level to check/limit it.
2. Deployments, rollbacks, replication, synchronization - they don't work very well with DB procedures.
3. Unless you connect directly to the DB, you must have some logic on the server side, and usually you end up replicating logic from the DB on the server side.
4. Database languages (even advanced ones like PL/SQL) are not expressive enough.
5. It's much easier to scale out the server than the database (and if you are using Oracle/SQL Server etc., also cheaper), and you don't want your database's CPU to be clogged with logic code execution.
6. Unit testing (or any testing) is extremely difficult.
7. Debugging is hard and convoluted (and it doesn't usually work inside your IDE).
And a whole lot more.
Nothing is absolute or completely obsolete, but it has been considered a bad practice for a long time by most industry professionals.
As far as I know the most popular article about it is:
I occasionally have to work with a big application which is essentially written 100% in SQL. You simply can't easily change parts without testing the whole thing from start to finish, because automated testing at a granular level is horrible. And SQL does not lend itself to encapsulation; it does everything to make it hard to break stuff down into manageable pieces.
And in SQL everything you do is just so complicated - tons of boilerplate you would not have in a real programming language. It's called a "query language" after all, not a "programming language", so don't use it as one.
It's not that the logic going into SQL is the problem (in what I usually see); it's usually a poor database design, plus a load of code at the application level to compensate for the poor database design.
>Care to elaborate a bit? Especially, why do you think unit testing and debugging of SQL is not hard?
A SQL query returns its results directly as tables, which you can check in all kinds of ways. You can create any number of temporary tables with the same schema as your business tables and check all kinds of invariants.
There's absolutely no reason why unit testing SQL should be harder than anything else, considering a single query as the "unit" of testing.
In fact, thanks to the built-in checks, constraints and types an RDBMS has, you are freed from having to unit test all kinds of crap too (similar to having less to unit test in Haskell vs Ruby).
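To make that concrete, here is a minimal sketch using Python's sqlite3 module, treating a single query as the unit under test. The `orders` table and its data are made up purely for illustration:

```python
import sqlite3

# In-memory database stands in for the real schema (table name and
# columns are hypothetical, for illustration only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer TEXT,
        total REAL CHECK (total >= 0)  -- the RDBMS enforces this for free
    );
    INSERT INTO orders VALUES
        (1, 'alice', 10.0), (2, 'bob', 5.0), (3, 'alice', 7.5);
""")

# The "unit" under test: a single aggregate query.
QUERY = """
    SELECT customer, SUM(total) FROM orders
    GROUP BY customer ORDER BY customer
"""
rows = conn.execute(QUERY).fetchall()

# Check the result set like any other return value...
assert rows == [("alice", 17.5), ("bob", 5.0)]

# ...and check an invariant: per-customer sums add up to the grand total.
grand_total = conn.execute("SELECT SUM(total) FROM orders").fetchone()[0]
assert sum(t for _, t in rows) == grand_total
```

The same pattern works against a scratch schema on any RDBMS; the query is just a function from input tables to an output table.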
Well, you'll have to store your mock data somewhere, which in this case means more databases, often on other servers; so you'll have your SPs connecting to another DB in order to access their data.
It's not very hard, it's just impractical. Mainly because a database is a giant bag of statefulness (to put it scientifically).
You need to prepare test data, you need to update and maintain the test data. That is already a big barrier to entry.
The actual testing involves three simple steps: setting the initial state of the database, running your queries/procs, and verifying the results. This will be unbearably slow even for a small test set. So you start to make things complicated by trying to be smart, like only reverting the state you modified, or using SQLite for tests and Postgres for production, or by running the database server on a RAM filesystem, etc.
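One common compromise for "only revert the state you modified" is to wrap each test in a transaction and roll it back afterwards, so the fixture never has to be rebuilt. A hedged sketch with Python's sqlite3 (the `accounts` fixture and the transfer logic are hypothetical):

```python
import sqlite3
from contextlib import contextmanager

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions manually
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 100.0), (2, 50.0)")

@contextmanager
def db_test():
    """Run one test inside a transaction and roll it back afterwards,
    so every test starts from the same fixture data."""
    conn.execute("BEGIN")
    try:
        yield conn
    finally:
        conn.execute("ROLLBACK")

# Test 1: a transfer (plain SQL here, a stored proc in real life).
with db_test() as db:
    db.execute("UPDATE accounts SET balance = balance - 25 WHERE id = 1")
    db.execute("UPDATE accounts SET balance = balance + 25 WHERE id = 2")
    assert db.execute(
        "SELECT balance FROM accounts WHERE id = 2").fetchone()[0] == 75.0

# Test 2 sees the pristine fixture again: the rollback undid test 1.
with db_test() as db:
    assert db.execute(
        "SELECT balance FROM accounts WHERE id = 1").fetchone()[0] == 100.0
```

It stays fast because nothing is re-created between tests, but it breaks down as soon as the code under test issues its own COMMITs, which is part of why people end up down the rabbit hole.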
I've seen a few people go down the rabbit hole, and no one came up with a solution I could be happy with.
I don't see that writing code to test SPROCs is that hard: just have a set of inputs that match all of the use cases (including all the edge ones), run that through, and check that the results in the DB are as expected.
Debugging is, mm, possibly slightly harder, in that you might have to have a third monitor for Toad or work Manager - but if you code your sprocs properly in the first place, you shouldn't have that many problems that jump between code and SQL.
Just saying it's hard doesn't help; by that logic we ought to still be coding in GW-BASIC.
What's the problem with source controlled database logic?
Have your statements (including those that create stored procedures on setup, migrations, etc.) in text files, and just check those into Git or whatever.
The problem is ensuring deployment matches up. It's very easy to end up with subtle differences between a newly deployed database instance and a database instance that was deployed with an old version and then updated. For code you would think very hard before deploying each version as a patch to the previous version - it's easy and effective to just deploy the code afresh each time, with a complete artifact built from a specific VCS tag. It's much harder to do that with databases; the tooling just isn't there and it's hard to ensure that you wipe out previous logic but retain previous data, because data and logic are commingled. You could possibly build a system for this kind of thing, but the standardized tooling just isn't there.
I don't understand what's hard about mass-overwriting your previous stored procedures with new ones.
You're right about non-SProc code; just deploy all of it. Do the same thing with SProc code!
What's tough about keeping all your code in files that start with "CREATE OR REPLACE FUNCTION <funcname>", and just firing them all at the DB (within a transaction, if you like)?
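As a rough illustration of that idea (the file names and SQL bodies below are hypothetical), a small script can concatenate the source-controlled function files into one transactional deploy script:

```python
# Sketch of the "fire them all at the DB in one transaction" approach:
# collect every CREATE OR REPLACE FUNCTION file from version control
# and emit a single script wrapped in BEGIN/COMMIT.

def build_deploy_script(sources):
    """sources: list of (filename, sql_text) pairs from version control."""
    parts = ["BEGIN;"]
    for name, sql in sorted(sources):
        parts.append(f"-- from {name}")
        # Normalize trailing semicolons so the output is uniform.
        parts.append(sql.strip().rstrip(";") + ";")
    parts.append("COMMIT;")
    return "\n".join(parts)

# Hypothetical repo contents; real files would hold full function bodies.
script = build_deploy_script([
    ("funcs/get_user.sql",
     "CREATE OR REPLACE FUNCTION get_user(uid int) RETURNS text AS $$ "
     "SELECT name FROM users WHERE id = uid $$ LANGUAGE sql"),
])
# The result can then be piped to something like `psql --single-transaction`.
```

This only handles additions and replacements; as the sibling comment notes, dropped or renamed functions still need explicit cleanup.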
I don't actively advocate putting all the code in sprocs, but I can see advantages. I also don't advocate using PHP, and yet people demonstrably build some great websites with it.
Your approach is a bit naive. You will accumulate a lot of crud if you don't drop any function you deleted or renamed. This crud could even set people up for making mistakes, like using a function that shouldn't exist and does stuff that harms the integrity of the data.
Back when I was writing Python and using Django, I found the migrations system provided by django-south was really good for exactly this.
A migration was a way to roll a database forwards or backwards; there were tools to create simple ones, and one was able to write whatever Python & SQL one wished in order to handle more complex cases. One might even archive off a column somewhere when deleting it, and load it back up when restoring it, if one wished.
Since the migrations were all just source code, they were perfectly well-suited to source control.
It was a really powerful system; I'm surprised that it hasn't seen wider acceptance.
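The core idea - an ordered list of migrations through which the schema can be rolled forwards or backwards - is small enough to sketch without Django. This toy version (Python with sqlite3; the schema is made up) uses plain SQL for each step, though south let you write arbitrary Python too:

```python
import sqlite3

# Each migration is a (forwards, backwards) pair of SQL scripts.
MIGRATIONS = [
    ("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
     "DROP TABLE users"),
    ("ALTER TABLE users ADD COLUMN email TEXT",
     # Older SQLite can't drop columns, so recreate the table instead.
     "CREATE TABLE u2 AS SELECT id, name FROM users;"
     "DROP TABLE users;"
     "ALTER TABLE u2 RENAME TO users"),
]

def migrate(conn, current, target):
    """Roll the schema forwards or backwards between migration indices."""
    while current < target:
        conn.executescript(MIGRATIONS[current][0])
        current += 1
    while current > target:
        current -= 1
        conn.executescript(MIGRATIONS[current][1])
    return current

conn = sqlite3.connect(":memory:")
version = migrate(conn, 0, 2)        # forwards to the latest schema
version = migrate(conn, version, 1)  # roll back one step
cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
assert cols == ["id", "name"]        # the email column is gone again
```

Since each step is just source code, the whole history lives happily in version control, which is exactly the property the parent comment liked.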
You have an export file in your repo containing all your stored procedures, and part of the deployment process is to export the procedures to the DB, updating as needed.
Absolutely. SQL is great, and having the DB just give you the correct data from the get go is convenient. But in the case where complex logic is necessary, SQL is much more difficult for correct implementation than something more expressive.
You _could_ delve into cursors or long merge statements or what have you, but in business logic specifically, the code will be read and altered numerous times by several different people. In that case, a language made specifically for expressive statements is significantly easier to deal with. That's not to say that I believe the opposite is true and that _everything_ should be done in programming space. I just think there is a better balance that can be achieved, and to default having everything in either category probably means you're not balancing correctly.
Write your stored procedures in any language you like. Postgres supports pgSQL, Tcl, Perl, and Python out of the box, with third-party bindings for some other languages.
When procedural coding is necessary, use a procedural language, in your stored procedures. What's the problem?
Valid point. I should have been more clear in my original post. The issue was more that it was institutionalized to write all stored procedures in MS-SQL, rather than utilizing available CLR options (at least for the vast majority of the time). It should be noted that I'm complaining about a specific company's development method and not the practice altogether.
And your point is? Yes, sometimes you have complex business logic, but throwing away all the benefits of using an RDBMS is a sign that your developers can't hack SQL properly.
I would be the first to admit my SQL chops are probably lacking, so perhaps this is just my own bias revealing itself. I'm not saying to _not_ use stored procedures. I'm just saying that throwing literally _all_ of the business logic into them feels a lot like a silver bullet. I've always felt that any of the more "standard" backend languages would be a better choice for that complexity, since their expressiveness helps describe it in a way that is easier to grok for a larger number of developers.
> Can you explain why you feel that "do everything in the database" is archaic
I think it is because I want to be able to compile and test from end to end without having a certain database on hand. I consider the (specific choice of) DB to be an implementation detail, just like whatever file system the application might reside on once deployed.
I know this is an idealistic point of view, and sometimes you end up with terribly slow ORM multi-join code where a stored procedure update might have been very simple. But that's an optimization I'd like to defer until it's actually needed, because of the flimsy guarantees and poor integration of SP tooling. (If I misspell something in a stored procedure, can I be sure it's caught on my dev machine without having to run through integration tests? Etc.)
I've been in this job long enough to know that there is no silver bullet in tech stack, methodology, technique, etc. Yet it took me a while to get there, and even so it means nothing, because wherever I look I see software houses governed by adherents of this or that 'true religion'. I think most of it is just cargo cults: whatever happens to work for this company (and which people there therefore believe in) is taken to be the way to go for any other company/product/case.
Are SPs bad/evil/nice/safe/etc? I don't know and I cannot tell unless we are talking about something concrete. Senior devs were saying a few years ago that they are the holy grail. Senior devs are saying now that they are the devil. Go figure.
Is TDD the holy grail? Dunno. These days TDD seems to be synonymous with progress and modernity. Any opposing view seems to belong to cavemen, but wasn't it the same with OOP just a few months before functional became the way to go?
Anyway - you catch my drift. I'm doubly cautious when I hear people speak with the greatest conviction about this and that these days unless they are speaking off a concrete example.
Well, for one thing, auth logic can only live in the database when your application uses no third-party authentication providers, and while "archaic" is maybe a strong word, that's a less and less common situation these days. Even in the "enterprise" world, single sign-on via LDAP is generally the order of the day, and good luck doing that in a stored procedure...
1. Everybody recognizes that "The Database" has now become its own unique product, which just happens to provide remote service-calls to other products over an SQL channel
2. I'm not the one responsible for managing the multi-tenant clusterfuck it will become
No separation between the application, model and database layers? That makes it a huge problem to change just one of them, if you want to support a different database, for example, or move one of the layers somewhere else.
From what I gather, one of the biggest problems with stored procedures is that they do not play well at all with source control and associated tooling such as code review.
Ok. I did not know that. As I said this is what I have gathered from what I have read around as a common complaint. Out of curiosity, what is the testing-deployment cycle of stored procedures in source control? How do you assure yourself that the database uses a procedure coming from a specific commit?
How do you assure yourself that your non-database code uses code coming from a specific commit?
In general, you have a deployment procedure that overwrites all the old code with all-new code, and from there on out you trust, right? Why not use the same approach?
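One concrete way to do that is migration-style deployment: each stored procedure (or any other schema object) lives in a versioned, source-controlled migration file, and the database records which version it has applied. A minimal sketch of the pattern, using Python's built-in sqlite3 as a stand-in (SQLite has no stored procedures, so plain DDL stands in for them; the table and function names here are invented for illustration):

```python
import sqlite3

# Each migration is (version, sql). In a real project these would live in
# files under source control, e.g. migrations/0002_create_user_names_view.sql,
# so "which commit's procedures are deployed" maps to "which version is recorded".
MIGRATIONS = [
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "CREATE VIEW user_names AS SELECT name FROM users"),
]

def migrate(conn):
    """Apply any migrations the database has not recorded yet."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, sql in MIGRATIONS:
        if version > current:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

def deployed_version(conn):
    """The schema version the database itself claims to be running."""
    return conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0]
```

Because the applied version is recorded in the database itself, "which commit's procedures are live" becomes a query rather than a guess.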
>At a previous company, the tech influencers believed in the archaic "do everything in the database." While we were technically using the .Net stack, we weren't allowed to do any actual business logic in C#.
There's nothing archaic about that, actually.
It ensures that your domain logic remains DRY and is not repeated for every new application or service that you connect to your data.
I guess it depends on your application. To me having multiple applications accessing the same database logic (or domain) feels very archaic, or at least hard to maintain.
You'll end up with highly coupled applications and a schema that is very hard to change, where it's nearly impossible to deploy small changes in a continuous deployment scenario without maintenance windows. In my experience it's hard to maintain high performance and zero-downtime deployments when multiple applications share tables/views in the same SQL database.
Then there are always additional platforms for specific areas that are just much harder to do in a relational database, such as journaling (append-only data), queuing, (distributed) caching and searching, which will scatter your logic outside the database anyway.
I've found it really hard to maintain DRY principles and to strike a balance between readability and performance in large SQL-based applications without an extremely disciplined team.
Valid point. Sadly, the system I was working on at said company was monolithic, so I don't believe that advantage was being utilized. It does open up the potential for the future, though.
I feel that this could be solved in other ways as well. Such as with microservices to wrap your database and provide endpoints for other services to consume. This is more like how we do it at my current company.
I still can't quite come to terms with it, but I've worked with someone who managed to be at both ends of the spectrum at once. Insisted on using some newfangled thing for a major part of the product, and in doing it their way – and then, nobody was allowed to change it.
Mercifully, their work has generated so many bugs that they've been maintaining that product for the last year and haven't had time to touch anything else.
I used to work with a guy like that and it created a ton of tech debt. He would write new services using a new technology for each one, not documenting or maintaining any of them.
What's sad is that it looks great on his or her resume to do that. That developer might leave a trail of carnage but they don't care--on to the new shiny job for them. It never catches up.
I can't really blame developers for this -- they're just responding to incentives. So long as hiring managers penalize candidates who don't have experience with trendy stacks, you're actively doing your employees a disservice by prohibiting them from using those stacks. The only thing I can personally do to combat this problem is be conscious of it, not engage in that kind of hiring behavior in my office, and hope that the culture changes.
Giving employees opportunities for side projects helps somewhat. Allowing for gradual migrations to new technologies helps as well.
While I agree with you, who is in control there? Who lets the developer pick the tech and leave a trail of carnage?
Neomaniacs gonna succumb to neomania. The bigger question is why the system permits that, rather than steer those urges to try something new into useful experiments that might advance the status quo.
I think rewriting code for style purposes is extremely naive. It introduces risk and sabotages `git blame`. I still see a lot of time wasted doing exactly that, as enthusiastic developers try to enlighten the rest of the team or make things tidy. I recommend against switching to something like "standard.js" in a live project.
Agreed. There should be a standard way that everybody on the team formats their code on check-in, and that format should not change. That way, it can be enforced by a single git pre-commit hook that runs clang-format. If somebody wants to work on the code formatted in a different way, well then that's just more calls to clang-format, but it never touches the repository that way.
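As a rough sketch of that check-in-format idea: the hook's core is just "canonicalize, and reject or rewrite anything that isn't canonical". A real pre-commit hook would shell out to clang-format over the staged files; here a trivial trailing-whitespace normalizer stands in for the formatter so the example is self-contained, and the function names are made up:

```python
# Stand-in "formatter": a real hook would run `clang-format` on each staged
# file. Stripping trailing whitespace plays the same role in this sketch.
def format_source(text):
    """Return the canonical form of a source file's contents."""
    return "\n".join(line.rstrip() for line in text.split("\n"))

def is_canonical(text):
    """True if the file is already formatted. A pre-commit hook would
    reject (or rewrite in place) any staged file for which this is False."""
    return format_source(text) == text
```

The point of the pattern is that the repository only ever contains `format_source`'s output, so `git blame` and diffs stay meaningful.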
I would fault this article for treating each developer as being intrinsically good or bad. I've seen people I know are very talented (based on past work) become negative developers. Depression is a common cause. If they are underperforming, they may react to their own self-disappointment by acting defensive and resisting change or the inclusion of anyone they think may judge them. I think there are many destructive feedback cycles that can turn a good developer into a negative developer.
Also, devs can have different skill levels in different areas. One could be a rock star at the server stuff and do a poor job at the UI/web stuff (or any other possible version of this).
So that's another layer of variation to take into account.
I have one exception to the "convoluted code" developer. I worked with a guy whose code was pure spaghetti. Mostly write-only code.
BUT: if a customer had a crisis he was the person to send. Amazingly quickly he would suss out the problem and get things running -- making the customer happy and rescuing the SLA. And he could explain what the problem was so someone else could implement it again, properly, perhaps in 10X the time, and ship the patch to all the customers.
In other words: if the river was rising and the dam was leaking, this guy would stride in confidently and jam his fingers into all the holes, saving the city. Not a long term or scalable fix, but preventing disaster.
(I'm pretty sure 50% of developers on HN have used this guy's code BTW).
> Amazingly quickly he would suss out the problem and get things running -- making the customer happy and rescuing the SLA
Probably skills they learned reading their own code. I've noticed it with other people too: the ones who can quickly debug spaghetti are the ones who will create more of it. It's why they stick around; management likes them because they can solve problems, they just don't see the creation of yet more problems.
On the surface, I think you're describing fire-fighters (who are loved for solving high-visibility problems), but in retrospect, I think these are also developers with very high reading comprehension and reasoning abilities. That may not make them great designers and architects (though some are), but they can see code for what it is and fix it, as in "The Big Lebowski", "in the parlance of our times" (in other words, in a way that fits the code as it was originally written).
No, the reasoning is: why build it to handle 600% of specified load (at higher cost) if 150% of specified load is sufficient?
(Not that there aren't good reasons to do this. Especially for public infrastructure which might be around for a shockingly large time. But predicting future usage is very difficult. For a small start-up, it's fair to err on the side of leaner/fix-it later IMO)
I might be one of these people. I've worked with enough underdocumented code/systems that I don't expect, look for, or provide documentation. (Exception: I provide documentation where there's a clear and stable boundary/interface another team will use.)
I had this with the OpenStack API. It's so hilariously underdocumented that I've reverse-engineered it by poking at a Devstack and checking the HTTP requests in mitmproxy. After some time, even when I had to work with a new part of the API, I didn't even bother to check for documentation because just poking around was faster (and more reliable: one never knows how close the documentation is to the real deal).
It was my job for a while to work with scientists and their spaghetti Fortran codebases. One esteemed fellow still stuck to the FORTRAN IV of his youth (the original pasta language). It was amazing how much they could do with such old technology. The overriding impression ultimately was: "Never keep intellectual property just in code"
I feel your pain. Scientists & data scientists are notorious for spaghetti code.
Refactoring scientists code was how I got into software engineering. I used to be a scientist, and I found restructuring/modularizing the analysis code more fulfilling than the actual analysis.
As someone who is currently in that exact situation, and is considering switching away from science, how did the actual switch from science to software go?
- get really proficient in a language; for me it was Python: solid, flexible, minimal boilerplate, and you can use it for science applications already (amazing libraries like pandas, scipy, etc.)
- understand databases, and how to interact with them
- start testing (unit, integration and acceptance); this is super important, and a key difference from scientist programming. When doing a take-home test for a job interview, always include tests; even if you get the solution wrong, they will give you massive points for having tests.
- start applying! You'll fail a lot, but you gain experience with each test and interview you do.
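To make the testing point concrete, here is a minimal sketch of what a unit test in a take-home might look like, using only Python's standard library (the `moving_average` helper is a made-up example of typical analysis code, not from any real exercise):

```python
import unittest

# A made-up analysis helper of the kind scientific code is full of,
# plus the unit tests a reviewer would look for in a take-home.
def moving_average(values, window):
    """Trailing moving average; shorter prefix windows are averaged as-is."""
    if window <= 0:
        raise ValueError("window must be positive")
    return [
        sum(values[max(0, i - window + 1):i + 1]) / min(i + 1, window)
        for i in range(len(values))
    ]

class TestMovingAverage(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(moving_average([1, 2, 3, 4], 2), [1.0, 1.5, 2.5, 3.5])

    def test_rejects_bad_window(self):
        with self.assertRaises(ValueError):
            moving_average([1, 2], 0)
```

Even two tests like these show a reviewer that you thought about both the happy path and the error case, which is exactly the habit throw-away research code rarely builds.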
Thank you. I've been primarily working with C++ and Python, and consider myself proficient in both, so it is good to hear that Python is usable. I've been meaning to work more with databases, as I haven't interacted with them much. Maybe writing a mini library catalog, as my bookshelves are starting to outgrow the haphazard Google spreadsheet where their contents are currently recorded. And yeah, testing is something that I try to do in my home projects, but 95% of the coding for research is throw-away code, and so I don't get much of a chance there.
C++ and Python are a superb combo; they complement each other really well. If you master both, and by that I mean not just the syntax but the different patterns and styles you need to employ in statically vs dynamically typed languages, and understand the tradeoffs of each, you've pretty much mastered programming.
Python also glues very nicely with C. If part of your code is very CPU-heavy, you can write it in Cython, then call the compiled code from Python as you would a normal Python function, and it will run nearly as fast as native C. This is how those Python data libraries work under the hood (pandas, sklearn, etc.).
A library catalogue is a great idea for DB learning; you'll learn how to model data and the relations within it using tables, foreign keys, primary keys, etc. I'd recommend Postgres for the DB.
Again, Python would be an excellent language for this: you can use SQLAlchemy to define and query your tables, and then use Flask to provide a web interface for the catalogue.
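For a flavour of the data modelling involved, here is a sketch using Python's built-in sqlite3 in place of Postgres/SQLAlchemy (the two-table schema and helper functions are invented for illustration; the relational shape, foreign key and all, is what carries over):

```python
import sqlite3

# A hypothetical two-table schema for the catalogue: authors and books,
# linked by a foreign key. With SQLAlchemy + Postgres the shape is the same.
def make_catalogue():
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE authors (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL
        );
        CREATE TABLE books (
            id        INTEGER PRIMARY KEY,
            title     TEXT NOT NULL,
            author_id INTEGER REFERENCES authors(id)
        );
    """)
    return conn

def add_book(conn, author, title):
    """Insert a book, creating the author row on first sight."""
    row = conn.execute("SELECT id FROM authors WHERE name = ?", (author,)).fetchone()
    author_id = row[0] if row else conn.execute(
        "INSERT INTO authors (name) VALUES (?)", (author,)
    ).lastrowid
    conn.execute(
        "INSERT INTO books (title, author_id) VALUES (?, ?)", (title, author_id)
    )

def books_by(conn, author):
    """All titles by an author, via a join across the foreign key."""
    return [title for (title,) in conn.execute(
        "SELECT b.title FROM books b JOIN authors a ON b.author_id = a.id "
        "WHERE a.name = ?", (author,)
    )]
```

The spreadsheet equivalent would repeat the author's name on every row; the foreign key is what makes "rename this author" or "list everything by this author" a one-place change or a one-line query.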
Thank you, and it is very good that the path I'm looking towards is possible. I've been brushing up on algorithms, mostly as I run into them in things that I have been writing on nights and weekends. Primarily C++ and Python, with a smattering of C, bash, javascript, and php. I have probably about 6-9 months before graduation, so I've been starting to look at fields in more detail, rather than the nebulous concept of "industry" that you typically hear about in academia.
> management likes them because they can solve problems, they just don't see the creation of yet more problems.
In fire-fighting situations, at least in my experience, it is usually about fixing things fast and under stress. Not many devs are interested in doing that. These hotfixers probably know their code sucks, but I don't think they need to feel bad for solving urgent problems in their own way; in my experience it is not a job that many are willing to take. The majority prefer to solve problems properly, at a slower, less stressful pace.
It's a balance you need to strike. But as a PM, spaghetti fixers are indeed very nice to have around. Keeping an eye on technical debt is important, though.
> So if the cost of a developer who does negative work is so high, how do they get hired? Part of it can be explained by an interview process that needs improvement, but a less talked about part is the temptation to lower hiring standards.
I've seen plenty of people who can reverse binary trees or do FizzBuzz etc. who are terrible additions to teams and do "negative work". That's because having a grasp of CS fundamentals and knowing how to contribute to a team are totally different skills.
The way to overcome this is not smarter and/or more challenging interview questions, but hiring engineers who come recommended by other engineers. This is by far the greatest indicator of a successful engineer I've ever seen. Nothing comes close to it. As far as I'm concerned, if someone I work with and respect recommends someone else, that's enough for me to give them a try without even needing to whiteboard.
Even though CS fundamentals and contributing in teams are somewhat different skills, in my experience the terrible developers that contribute negatively to the project tend to be those that do not have a good grasp of CS fundamentals.
I mean, some problems we are faced with are very, very challenging, and if somebody can't figure out how to reverse a binary tree or do FizzBuzz, then what can they do?
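For reference, both of the staples mentioned really are only a few lines each; a sketch in Python, representing the tree as nested `(value, left, right)` tuples:

```python
# The two interview staples from the thread, sketched minimally.

def fizzbuzz(n):
    """Classic FizzBuzz for 1..n, returned as a list of strings."""
    out = []
    for i in range(1, n + 1):
        s = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
        out.append(s or str(i))
    return out

def invert(tree):
    """Mirror a binary tree given as nested (value, left, right) tuples."""
    if tree is None:
        return None
    value, left, right = tree
    return (value, invert(right), invert(left))
```

Which is exactly the point: these filter out people who cannot program at all, not people who cannot work in a team.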
Also, I'm hearing this advice to hire engineers recommended by other engineers quite often; however, speaking as somebody constantly engaged in hiring decisions at our company, I must say that recommendations don't scale.
You see, outside the SF bubble of course, our friends are mostly not into software development, and we engineers are kind of introverts (goes with the territory), so we don't have that many friends or acquaintances anyway, except for people we meet on the Internet, who usually live in another city or country. If an employee can produce one good recommendation, that's way, way above the average. It's actually such a rare event that such employees deserve a reward.
This advice also misses the point. The problem is that our industry is constantly growing and there aren't enough good people to go around. Which means we have to take the plunge and hire beginners as well. Investing in the education of beginners is the only way a company can scale in talent. But even then you need a filter. You can't hire anybody that walks through that door, as firing people is a serious drag on morale and a net loss for the company.
So what filter can you apply to beginners? CS education of course, or in other words what people should have learned in school.
It's hilarious, because our best hires are people who had no CS education. Meanwhile, some of the worst hires were CS grads.
Things that are hard to teach: Can you work in a team, especially if some of them are remote workers? Can you leave your ego behind? Can you deal with the corporate BS that gets in the way of programming? Can you think of the user, use-cases, business value, etc of the feature/product you're developing? Can you digest technical information?
Things that are not hard to teach: CS fundamentals, programming (given a basic understanding of logic).
I won't go as far as to say a CS education is harmful, but it isn't a good indicator either. Personally, I find that CS, in trying to validate itself as a "science" (which it is not: no scientific method, just like maths isn't science, but both are still valuable), makes things over-complicated and relies on terminology far too much.
And by focussing on it, you lose a huge pool of potential applicants.
That CS fundamentals and programming are easy to teach is a myth that keeps being perpetuated and I don't understand why.
I'm also involved in an education program for children and as a matter of fact computer science is among the most difficult subjects to teach, right up there with math. Yes, you can increase interest and engagement by better teaching methods (e.g. playing and building games), but that CS is hard is indisputable.
> Can you work in a team, especially if some of them are remote workers? Can you leave your ego behind?
In our line of work, working as an individual on your own piece is actually what happens in 90% of cases. The other 10% you have to cooperate with your team to make decisions and to reach compromises on points of intersection, on protocols, on best practices, etc.
But if we are honest with ourselves, collaboration in our industry means contention/concurrency in the development process, and that doesn't scale; minimizing contention leads to better productivity. This is why we hate open spaces, why we ended up with micro-services, and why we split big teams into smaller ones. It's all about achieving parallelism by reducing the needed collaboration.
Don't take offense at me saying this, but teamwork is a soft skill that isn't hard to teach or learn at all. So let's be blunt: you're talking about assholes, and having assholes really depends on your company's culture and what gets tolerated. If you don't tolerate assholes, some people will change their attitude, some will leave, and the rare few will have to be fired, but such cases are rare.
> I'm also involved in an education program for children and as a matter of fact computer science is among the most difficult subjects to teach, right up there with math. Yes, you can increase interest and engagement by better teaching methods (e.g. playing and building games), but that CS is hard is indisputable.
I'll dispute it. I did a year of a split maths/computer science degree, found the computer science easy, and so shifted into the maths degree (on the grounds that I was confident I could teach myself the computer science degree if need be, whereas I wouldn't be able to learn the maths without teaching). Now I work as a programmer.
Most of the best programmers I've worked with have had non-CS degrees - mostly mathematics/physics, but a few from chemistry/biology/engineering, even one historian. I would sooner trust a straight IQ test than CS exercises like reversing a binary tree - the first selects for general intelligence, the second selects for general intelligence plus having sat in a CS class and/or read some CS books, and is no more relevant to day-to-day programming work.
As I happen to know which course you're talking about, I would point out that the module of CS you took was the far easier half of the first year material, which was open to people from a variety of other subjects, and designed with that in mind. (And indeed, the Maths dept recognises this and doesn't reduce the workload to compensate for Maths students who choose it.)
While I wouldn't say the rest of the course pushed me exceptionally hard, and I do wonder if Mathematics might have been more useful overall, I don't think that module is particularly representative of the difficulty of the CS course.
> As I happen to know which course you're talking about, I would point out that the module of CS you took was the far easier half of the first year material, which was open to people from a variety of other subjects, and designed with that in mind. (And indeed, the Maths dept recognises this and doesn't reduce the workload to compensate for Maths students who choose it.)
I did everything other than the hardware module (digital electronics) AIUI? (Which was maybe the hardest part for some people, but doesn't really correspond to the kind of thing that's asked in interviews).
My understanding is that your course sits one of the two CS Papers? They tend to move modules around a lot, but based on a quick look through past years' material, I'm fairly sure Paper 2 would have covered significantly more than Digital Electronics in your year.
I don't remember which paper was which. I think my year ended up doing more courses than they'd originally planned because of a miscommunication between the departments about what was going to be covered where? I did an ML course, a Java course, Operating Systems, and part of Data Structures & Algorithms before I looked through the notes and decided I knew it all (and I think I also glanced over notes for a discrete maths course earlier in the year?). Certainly when comparing notes with full ("50%") compsci friends I remember Digital Electronics was the only obvious thing I was missing.
Come to think about it I do remember talking about a friend's question on sorting algorithms in an exam that I hadn't done myself. So possibly Data Structures & Algorithms wasn't on the exam for me? Afraid it was too long ago to remember exactly.
* Big-O. It's basically enough to know when an algo is going to perform horribly - e.g. O(n^3) - and even then you might choose it because it's simple or your n is currently small. Anyway, senior dev or architect can make this call way better than any grad could, CS or not.
* Data structures. Many high-level languages now have better basic data-structures than many devs could write, even if they know the theory.
I could continue, but most people do fine without these CS 101 basics. Besides, good/interested people will ask questions about the code base and will read up on these concepts when mentioned. Doesn't take a genius.
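A small illustration of the Big-O and data-structure points above: the same duplicate check written the horrible way and the obvious way, with the standard-library `set` doing the heavy lifting in the second (function names invented for the example):

```python
# The practical face of Big-O: one question, answered in O(n^2) and O(n).

def has_duplicate_quadratic(items):
    """Compare every pair. Fine for small n, horrible as n grows."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    """One pass with a set; the built-in data structure does the work."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Knowing *when* the first version will hurt is the useful part of the theory; writing the hash table behind the second is exactly what the language already did for you.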
> but team work is a soft skill that isn't hard to teach or learn at all
Let me get this straight. You're saying that behaviours you might have established in your teens, and so held unconsciously for almost 10 years by the end of your degree, those are easier to change than something you just learned? Something that people can teach themselves as a hobby, and often do?
But, if soft skills are really that easy to teach, I have several lucrative opportunities for you. Meanwhile, CS courses online are a dime a dozen or free. Honestly, if you could write up some stuff about this, I'm sure several people on HN would appreciate truly actionable advice on how not to accidentally be an asshole. Even better, if you can somehow make great devs into great managers, that's also worth gold.
I will admit that good teachers really help with both maths and CS, and good teachers are rare. Luckily, I had a good mentor by accident when I was 14. This year, our team has been to twelve local schools to help teachers make it more fun. And yeah, some kids struggle, just like any subject, but most actually have no issue at all picking this stuff up. Not worried in the slightest about the next generation's programming skills.
> But, if soft skills are really that easy to teach, I have several lucrative opportunities for you. Meanwhile, CS courses online are a dime a dozen or free. Honestly, if you could write up some stuff about this, I'm sure several people on HN would appreciate truly actionable advice on how not to accidentally be an asshole. Even better, if you can somehow make great devs into great managers, that's also worth gold.
To be fair, a lot of people seem to have their personal identity invested in poor communication abilities, or see it as somehow "fake" to work on them. E.g. I see How to Win Friends and Influence People recommended here, but you'll see a lot of people who refuse to read it on principle.
"How to Win Friend and Influence People" is a bit dated, but has some good, actionable advice. I find when people hear about negotiations/promotions/job offers they've missed because of poor communication abilities, and change their tune quickly.
But once you're into adulthood, it's rare to get constructive feedback about social skills (even from friends). So it can be a catch-22, especially if you aren't very attuned to the reactions of others. It isn't that they don't want to change, but if you aren't charismatic or confident, you're going to have different outcomes in the same situations, so it's self-reinforcing. And infuriatingly, the intrinsically high-charisma people don't know what's needed.
A book I can recommend is Keith Ferrazzi's Never Eat Alone. I find the name-dropping cringe-worthy, but it made business relationships click for me, which got me to do more social stuff, etc.
>Data structures. Many high-level languages now have better basic data-structures than many devs could write, even if they know the theory.
I would say that ALL high-level languages have basic data structures, and that those data structures are better than 99% of the data structures that users would/could write, simply because the bar for code to be included in the language is very, very high.
> Personally, I find CS, in trying to validate itself as a "science" (which it is not, no scientific method, just like maths isn't science, but still valuable)
Science can mean different things to different people. While in the English-speaking world it is often equated with "natural science", this is not the case for all languages.
Take the famous Gauss quote for example: "Mathematica regina scientiarum est et theoria numerorum regina mathematicae est."
In german, CS (Informatik) is widely regarded as a "structural science".
I'm wary of people throwing out Latin quotes, or trying to apply Latin or Greek to English. It's difficult, as words change meaning (my favourite example being "techne"). Gauss's quote translates to "mathematics is the queen of the sciences", but without context it's obviously worthless. He could have been joking; I for one haven't read his whole letter.
It's a bit disingenuous to bring German into this; luckily I speak it too. I would dispute that "science" == "Wissenschaft". Literally translated, "Wissenschaft" is "creating/establishing/managing/developing knowledge". In the strictest sense, "science" must be translated as "Naturwissenschaft". There simply isn't a direct mapping. Interestingly, "structural science" in English means something different from "Strukturwissenschaft", so even the example you gave undermines the claim that "science" == "Wissenschaft". Wissenschaft encompasses more than science.
I find a useful test of whether something might be a science is to check that it doesn't have "science" in its name :)
But let's be clear: just because a field isn't a "science" doesn't diminish its importance or value. I am sick of seeing this argument. But it is extremely important to the way you approach teaching that field.
I actually think it gives you far more flexibility and interesting ways to approach a field. Scientific method is very rigid, for good reason. But fields that don't employ the scientific method need not adopt this rigidity, especially if they have solid formalism behind them. Just don't try and dress it up like a science. E.g. I hate Big-O, or the almost religious importance CS puts in it. It's good formalism that can't be directly applied to reality. It isn't a scientific result. But that's a rant for another day.
I find a lot of CS education is not applicable to most SE roles, as they operate at a higher level of abstraction.
I've never had to implement a binary tree, if I did, I would be reinventing the wheel.
A lot of the necessary skills are hard, if not impossible, to teach in higher education, mainly because there you tend to be given an exact, unambiguous specification, whereas in the commercial world I've never seen that to be the case; the developer's role is to collaborate with the business to nail down the specifics of a high-level business goal. Implementing the code is the easy bit :)
For web forms that replace an Excel spreadsheet, the kind that usually gets passed around over email or in a Dropbox, yes, the business requirements are usually the ones that are problematic, as you are expected to grok a lot of domain specific details in only a couple of days.
But then again IMO, your experience is biased by the problems you've been working on.
> I've never had to implement a binary tree, if I did, I would be reinventing the wheel.
If you actually had problems involving trees, you'd know that besides the usual data structures we all know and love, problems involving trees and graphs rarely have prepackaged solutions, and the available solutions are leaky abstractions: you have to know how they work in order to fine-tune them or to choose between them. Consider that many graph problems are NP-hard, so exact solutions are unacceptably slow, but they often admit approximation algorithms whose results may or may not be acceptable for your problem.
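As a concrete instance of that trade-off: minimum vertex cover is NP-hard, yet a simple greedy matching-based algorithm guarantees a cover at most twice the optimal size. A sketch (with edges given as plain pairs, since this is exactly the kind of thing no prepackaged library hands you ready-tuned):

```python
# Classic 2-approximation for minimum vertex cover: repeatedly take both
# endpoints of any edge not yet covered. The result covers every edge and
# is provably at most twice the size of an optimal cover.
def approx_vertex_cover(edges):
    """Return a vertex cover of the graph given as (u, v) edge pairs."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover
```

Whether "at most twice optimal" is acceptable is precisely the judgment call that requires knowing how the abstraction works.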
And sure, maybe the problems that you're solving don't involve trees or graphs. That's fine, I think it's been 2 years since I saw a graph problem myself, but then again, medics learn a lot more than they need to treat the common cold, in order for them to be prepared. I wouldn't want to be treated by a doctor that doesn't have the proper education, since my life is on the line.
And yet we are the industry that accepts people without a college degree, but given that we also have a lot of responsibility on our shoulders, how accepting can we get? Consider that we build software that controls buildings, that drive cars, that process the confidential data of people.
And we as software developers, when we meet, what are we going to talk about anyway? Sure, you might have some personal projects we can talk about, but let's circle back to beginners. What are you going to talk about with somebody fresh out of college or high school? And even with seniors, not everybody has an interesting project to talk about, so how are you going to assess their worth? Sure, you can always do a trial, but the reality is that firing people is hard and expensive; that's just the way things are, and most companies prefer false negatives.
> If you actually had problems involving trees, you'd know that besides the usual data-structures we all know and love, problems involving trees and graphs rarely have prepackaged solutions...
You seem to argue that having a deep understanding of these types of issues makes you a better employee in the long term, because eventually you will come across a hard problem that requires these skills. I would argue that an engineer who doesn't have a great grasp on the internals of fundamental CS data structures, but knows how to prioritize time, will work on somebody else's mess, and will deal with corporate BS without complaining, is more valuable in both the short and long term.
These types of engineers are just simply better and more productive to work with.
I think the best (though not perfect) method to test new devs is a take-home exercise, with no time limit, that is representative of your domain. The key things I look for are:
- did they understand the requirements? If not, did they ask for clarification?
- did they choose appropriate abstractions?
- is the code easy to read? Is it overengineered? Are they trying to show off by overcomplicating things, or optimising prematurely?
- the simpler the solution, the better
- are the tests good, readable, and not too brittle?
I then ask them how they would extend the code to fit new/changed requirements; they don't necessarily need to write any extra code in my presence, whatever they're most comfortable with.
Junior people who are very good at CS often struggle with "real-world software".
I had to explain to such a developer once why it was a bad idea to store multiple values into a single database column.
It was a simple task of adding a few integer columns to a table, but he couldn't let go of the "optimizations" in his mind about how he could use binary bits to represent what we wanted, and all that...
A person with good CS fundamentals needs to go through some tough times for a few years at dev shops where the rubber meets the road, and then they'll develop technical taste.
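As a sketch of why the packed-column "optimization" hurts in practice (the table and column names here are invented for illustration), compare querying a bit-packed flags column against plain integer columns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Packed design: bit 0 = active, bit 1 = admin, both in one integer column.
con.execute("CREATE TABLE users (id INTEGER, flags INTEGER)")
con.execute("INSERT INTO users VALUES (1, 3), (2, 1)")  # 3 = active+admin, 1 = active only

# Every query against the packed column needs bitwise arithmetic:
admins_packed = con.execute(
    "SELECT id FROM users WHERE (flags & 2) != 0").fetchall()

# Separate columns keep the same query self-documenting:
con.execute("CREATE TABLE users2 (id INTEGER, active INTEGER, admin INTEGER)")
con.execute("INSERT INTO users2 VALUES (1, 1, 1), (2, 1, 0)")
admins_plain = con.execute(
    "SELECT id FROM users2 WHERE admin = 1").fetchall()

print(admins_packed, admins_plain)  # both find only user 1
```

The packed version saves a few bytes per row, but it hides meaning from everyone who reads the schema, breaks indexing on individual flags, and forces bit arithmetic into every query and report downstream.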
The problem is he/she might be right in a vacuum. Denormalizing the data might be "100x faster!", but they're naive to every other technical and non-technical burden that comes along with that decision. Part of our interview process is selecting for people who can come up with the most efficient answer, but efficiency is a squishy thing when you're a member of a team.
Most CS education has only limited applicability to real-world software product development. It's like hiring a BS Physics graduate to do mechanical engineering work. Sure a smart person can eventually learn to do the practical engineering stuff but it will take a lot of training and mistakes along the way.
Also if you restrict hiring to recommendations it's going to be even worse for your diversity. We can at least try to have a recruitment system that doesn't place too much weight on what people's faces look like or nebulous "cultural fit".
Other than warm fuzzies, what are the benefits of a diverse workforce for a company? If you could eliminate the problem of bad hires by only hiring Jewish black transwomen to the exclusion of anybody else, shouldn't you do it?
The benefit is that you satisfy unspoken but strongly enforced diversity requirements. I'm not being cheeky, this is very important when someone starts to take a critical eye towards your company.
Danger with this is people recommend based on friendship & personal gain as much as competence.
You can end up with mini guilds of ~5 people, who know each other well, and help each other with their career by recommending each other as they migrate from company to company.
Seen this happen twice now. At my current company, someone joined the devops/infrastructure team, then brought 4 over from his network at his previous company. Most are competent, but one seriously isn't, and he's protected by his cabal, who now dominate the team. He's a 'senior' engineer to boot!
I've seen it happen as well. If you look at each person individually you may have issues, but does the team function? Do they get their work done and stay out of your way? For all I care they can have a dog on their payroll as long as their team does what it's supposed to do and stays out of my way.
The team sort of functions. But they're stuck in a form of groupthink: they prefer building their own tools/scripts, our server monitoring is poor to non-existent (only application logs), they are very reluctant to move to AWS, they're security-obsessed to the point of setting routes and firewalls manually, automation is very poor overall, etc.
But because they dominate our devops/infra, it's very hard to penetrate the groupthink, our CTO is thankfully getting involved.
> I've seen plenty of people who can reverse binary trees or fizzbang
That's the test for entry-level programmers who are 16-22: it's the basics of writing code (which could be learnt alone with a computer); it doesn't require having had a programming job or having maintained software long-term in production.
The "convoluted code" gauge is a double-edged sword. You could also be working at a company with developers who have no experience with the benefits of functional programming. In this scenario it's those who write nested loops, branching if-statements, and mutating side-effects that are in charge and you're the bad developer for using fold and map. You could be seen as an elitist who likes to write clever, obfuscated code that nobody else can comprehend.
Not all positive contributions are worthwhile. One could simply blend in and add one more level of nesting, one more conditional, and mutate a few things here and there. After all, everyone knows what a for-loop is, right? Staying productive is important!
Well... until your most productive hours are spent chasing down errors you and your team designed.
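As a toy illustration of the divide (the data and names here are invented), the same computation in both styles; which one reads as "convoluted" depends entirely on what the team is used to:

```python
from functools import reduce

# (item, quantity, unit price) line items for a hypothetical order.
orders = [("widget", 3, 2.50), ("gadget", 1, 9.99), ("widget", 2, 2.50)]

# Imperative style: a loop with a mutated accumulator.
total = 0.0
for _name, qty, price in orders:
    total += qty * price

# Functional style: fold the line items' subtotals into one value.
total_fp = reduce(lambda acc, line: acc + line[1] * line[2], orders, 0.0)

assert total == total_fp  # same result, different idiom
```

Neither version is inherently clearer; the cost appears when one developer's idiom is foreign to everyone else maintaining the file.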
There is a lot of value in conforming to however the existing code base is written. Some of the worst (in terms of time/money spent maintaining) code I've seen is code that sticks out for being different, even if it's arguably faster / more modern / uses better practices. Some C++ examples because that's in my wheelhouse:
1. Entire code base is "C with classes" or OO, but That One Guy on the team insists on sprinkling functional programming constructs all over the place. Now there's a huge overhead for the rest of the team who need to switch gears every time they encounter that code.
2. That One Guy who insists on doing everything with macros in a code base that otherwise makes very light use of the preprocessor.
3. Company standardized on using the C++ constructor to initialize classes, but That One Guy writes a separate init() method on each of his classes and leaves the constructors empty.
4. (iOS/Objective-C example) Entire code base uses old-school delegate callbacks but That One Guy needs to write all his stuff using the arguably better block style.
It could be even as silly as:
5. Entire code base uses camelCase for variables and function names, but That One Guy insists on using_underbar_style. Now every time you have to call something or pass a variable around you have that tiny bit of overhead double checking which style needs to be used.
Functional programming does have some useful syntax and principles that can make code easier to reason about. But a much bigger red flag is if a developer is so fixated on a certain programming philosophy or methodology that they don't pay attention to writing simple, good code.
> that they don't pay attention to writing simple, good code.
Amen. If you have a complex problem to solve, and the choice of style is between obfuscated FP and complex OO, it's already a bad choice. If the code can't be written very clearly, the problem is probably posed wrong, or shouldn't be solved at all because the maintenance costs will be too high anyway.
The negative-contribution developers I find are those that don't challenge the formulation of the problem. It takes some confidence to challenge a manager with a better design, rather than actually write a complex solution to the problem as given to get a pat on the head.
So the most dangerous developers in my book are those that like the pats on the head they get from solving complex problems by writing complex solutions.
> The negative contribution developers I find are those that don't challenge the formulation of the problem.
Couldn't agree more. Furthermore, sometimes the best code is no code at all. If the code has 0 lines, it will contain no bugs, and will take no time for bug fixing and maintenance.
Functional programming is nice, though I tend to find procedural code easier to read than its functional counterpart. There is a place and time for mutation of state, and there are other times when simply duplicating data for clarity is much more straightforward.
To be fair, many debugging tool chains are crap with folds and maps instead of loops. And code is rarely written in anger, but often debugged that way.
Debugging can bring out the worst in the best of us.
Luckily I don't often have to debug code from functional programmers. The errors are usually mitigated by design and often easier to spot or reason about than in a function with a dozen branches mutating the object behind a pointer.
The reason I call it a double-edged sword is because the majority will determine what is normal or acceptable. If you come to their office expecting to reveal the shadows on the wall you may very well find yourself looking for another office elsewhere.
The harrowing difficulty is in bridging the gaps between each other and our differing approaches to developing software.
You can write functional code with dozens of branches too. In my opinion functional code only looks better because functional programmers today are usually enthusiasts, and therefore good programmers. If all programmers started writing functional code, you would hate it too.
Are you suggesting that all functional code is readable/more readable? It has its merits for sure, but I have seen many functional monstrosities... some that have even caused plenty of negative work themselves!
What I am suggesting is that the majority of people at a company will decide what styles of programming are "convoluted." According to the article the programmers who write convoluted code are the ones having a negative impact. The double edged sword is the allegory of Plato's Cave: you might actually be bringing the team great ideas about how to improve their code but it will be useless if they can't see the shadows on the wall for what they are.
While I may be biased and find that functional programming styles are better I realize it takes all kinds to make a working system together with other engineers. The difficulty is in communicating these ideas and pulling the team together to see the merit rather than the shadows.
Good points. Regarding how such people come to influence, you have to remember a lot of people, especially in startups, are not hired through a process at all.
I worked with a guy for many years who was as described in this article.
He can't code and he can't do any math, despite holding a PhD. He can talk about math and he can talk about code, but we're talking Excel-level skills when what you need is someone with a modern ML-level skillset.
He made all the decisions about which trading strategies were worth pursuing and which ones weren't, despite the presence of plenty of more qualified people.
How could this be? First of all, the boss of the fund did not have the skills to judge who could write a trading strategy, and who couldn't. So he was stuck with recommendations from other people who also didn't have this skill, leading to this hire. He also relied on his bias towards people of his own ethnic group, which benefitted this chap we're talking about greatly.
Essentially, broken feedback. Someone who can't judge relying on the judgement of someone who isn't competent but has his ear.
I'm sure this has happened to a lot of folks. Probably you need a somewhat larger organisation for there to be enough informed people to point fingers, and you need some luck for the culture to be such that a complaint would actually come through rather than be suppressed.
> First of all, the boss of the fund did not have the skills to judge who could write a trading strategy, and who couldn't. So he was stuck with recommendations from other people who also didn't have this skill, leading to this hire.
Which is why it's important for managers to understand the technology that they're managing.
I ran into this 25 years ago. I knew someone who went to an MBA school. He spoke up at a party and said "I now have great management skills and can manage anyone!". My response was "No, because when you have a technical disagreement in the team, as manager, you have to make the final decision. And if you don't understand the issues, you're left deciding based on what, popularity of the engineers involved?"
> "No, because when you have a technical disagreement in the team, as manager, you have to make the final decision. And if you don't understand the issues, you're left deciding based on what, popularity of the engineers involved?"
And you've just made a mistake by forcing a decision.
You've gotten involved in something you didn't know and didn't research. Right now, you're the least qualified person in the room to make that decision. You should have no say in it.
Maybe the guy was really dumb or "awful" as the author puts it, but I think it is wrong to blame it all on the hired developer.
It's like in sports, there can be great players underperforming because of the wrong team / wrong coach.
When a team hires a person, there is a decent amount of time during which the hired person's code is not coherent with the hiring team's. During that period, the hired person must be trained so that the code can match the quality standards / spirit of the team. During that time, other devs have to mentor the new dev, which in itself is "negative work".
What the author is really saying here is "beware of the developer that needs training". Of course it'd be better if all new hires didn't need any training, but it is unrealistic.
Training is necessary for all new hires, not because people don't code properly, but because their coding style doesn't necessarily match the hiring team's. You can hire a very experienced developer who you feel spends too much time on testing, while your team has more of a "move fast and break things" approach to dev. Or people who are used to using design patterns, because that worked in their previous workplace. There is always some time required to adapt.
Hiring a developer and having him check in 2 things in a period of 6 months is a failure of the entire team. If the guy was so awful, that should have been spotted in the first 2 weeks, and his "awful" code would certainly not have gone through to production.
As an entrepreneur and former manager and engineer I find this analysis too simplistic.
In particular, he talks about outdated methods making work negative, and while I agree, most of the time I find the opposite problem: most programmers want to use the new language of the week.
The new language of the week was so great and saved so much time in some area, but because it is not production-ready you are forced to fill the gaps and do a ton of work in other areas that would never be necessary with a mature language: installing libraries and dependencies, resolving conflicts that nobody has solved before because so few people use the new language... or just debugging the new language itself.
Also, as a manager it is your job and responsibility to make things work. If you can't see problems before they happen and manage them, it is your fault, not the developer's.
People have a lot of psychological delusions and faults, but as a manager you study them, and if you are good you can handle them easily. If a soccer team is not well organized, that is not the responsibility of the players.
If a person has a tendency to use outdated software, like a player wanting to dribble too much, you correct it and basically everybody is happy. When things go well and you have success, everybody is happy.
While the concept of negative work is certainly instructive, my misgiving is that it's probably difficult or impossible to measure the overall value added (or subtracted) by any worker in a complex organization.
My guess is that most of us have our positive and negative moments, and hopefully the positives outweigh the negatives.
Great point. While articles would have you believe that programmer skill is bimodally distributed (good vs. bad), it actually follows a normal curve like most other things. We all make some good decisions and some bad decisions every day.
I'm sure we can all find a piece of code we wrote a year or two ago that makes us cringe a little bit.
I've known people who created negative work. Back when the economy was tanking in 2008 into 2009, we fired 3 developers at my job, and the rest of us ended up getting more work done than we did when we also had to deal with the work of those 3.
One of the 3 you could argue about, but the other two 100% caused extra work from having to clean up after them regularly. At least one was somehow a senior developer, and the other was either a dev or a senior, so neither had the excuse of being a new/junior dev; both had been around for a few years by that point, so they had time to learn.
This is pretty much how I feel about it as well. Sometimes you'll deal with a more complex system where bugs might creep in even for an experienced developer, which then end up needing bug fixes for weeks to come.
Yet you might also write code that does its job great, (mostly) without bugs a week later.
The blog post reiterates what everybody probably already knows; similar content gets posted on HN semi-regularly.
On the other hand, I've also heard that most of interns' / junior devs' first projects end up shelved (i.e. net contribution = 0).
From a viewpoint of a mediocre developer, a far more interesting question is how to get into the feedback loop where you actually learn from your mistakes and the quality of your contributions improves.
The solution to this problem is code review. Good developers do not want bad code in the codebase. If you give them authority to stop bad code getting in, it won't.
Unfortunately, 'negative work' developers are often perceived by the rest of the organisation to be doing good work. They can make quick changes and deliver results fast. It is almost impossible to measure the real impact a developer's changes has, but easy to measure how much they are doing week-by-week. Therefore, the only practical way to make the issue apparent is by stopping the bad code getting in.
So when the developer says their work is "done", it sits in code review for another 2-3 weeks until it is "done done".
> An even more egregious form of negative work is a developer who is stuck using out of date programming practices AND has a large amount of influence at a company.
Well, out of date may also be seen as "battle proven". Just look at the JS scene and Angular 1 vs 2 (not to mention the boatload of now dead frameworks, tools etc)... often enough sticking with proven tech instead of hipster dung is the more long-term viable solution. But of course to do this, managers and developers have to adopt a long-term view (one or two years) instead of a 2 week sprint...
There's this magical process called continuous integration that internalizes the impact of bad (or, more likely, misguided) developers. Don't let them merge their branch until all tests pass. If their commit breaks something while all tests still pass, then direct them to write the missing tests.
Even if it doesn't "prevent" people (in the strictest sense of the term) from checking in code that breaks lots of other code, it certainly does some combination of: slowing them way down, notifying other people much sooner, or forcing them to be a great deal more skilled in how they check in their breaking code (in the "if you make something idiot-proof, someone will build a better idiot" sense of skilled).
It's not perfect, but it's better than most of the alternatives, and perfect isn't actually on the table anyhow.
I've found that code reviews are one of the best ways to increase code quality. Even just talking with other developers about why they've done things is helpful in thinking about simpler options and keeping things closer to the architectural vision.
Who cares about poor unit tests in this case? It's all about the INTEGRATION tests that run on CI to make sure your bad developer can only fuck their shit up and not anyone else's. And if you have bad developers building your integration testing framework, you have bigger problems.
If someone checks in code that breaks a bunch of tests, then you know right away. And the underlying assumption is that tests are written by multiple people with varying degrees of quality. Can bad code sneak through the tests? Of course! But to say it does no good at all is wrong.
That is quite sad. In this case you are not firing for the wrong reason.
I get that HR might need to verify that this person is being fired for the right reason and that no discrimination is in play (which might happen unwittingly), but just keeping someone employed because they are a minority is not a good solution.
On a related note: if I were a minority, I would think I was only being hired or kept on the team because I am a minority, and not for my technical ability. This behaviour from HR would further encourage that thinking.
Please no. That might get them out of your hair (maybe), but then you've just put someone incompetent in a position where they can exercise it even further.
I think people doing this is the direct cause of some head-scratchingly awful middle managers I've had to deal with.
If HR can't get rid of an underperforming employee because they're too lazy to do their homework, the company is fucked anyway. At least the dude is out of your hair.
One of our (now ex) developers would write tests that setup a mock, and test the properties of the mock, so if you deleted the source code, the test would still pass!
His pull requests sucked up hours of other devs' time, but bad stuff still leaked through, simply because the other devs didn't have time to rewrite his PRs. He was fired eventually, but the ongoing cleanup still continues...
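A minimal sketch of that anti-pattern (the function and test names here are hypothetical): the test configures a mock and then asserts on the mock's own behaviour, so it never touches the production code at all and would keep passing if the source were deleted.

```python
from unittest import mock

# Hypothetical production function. It could be deleted entirely and
# the anti-pattern test below would still pass, since it never calls it.
def get_discount(price):
    return price * 0.9

# The anti-pattern: build a mock, then assert on the mock itself.
def test_discount_antipattern():
    pricing = mock.Mock()
    pricing.get_discount.return_value = 90
    # This only verifies the mock's own configuration.
    assert pricing.get_discount(100) == 90

# A real test exercises the actual production code:
def test_discount():
    assert get_discount(100) == 90

test_discount_antipattern()
test_discount()
```

The anti-pattern test is worse than no test: it adds maintenance cost and a false sense of coverage at the same time.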
> one influential developer didn’t like any kind of change and they were allowed to veto any forward progress
I'm a little skeptical, it feels like I'm only hearing one side of a story. How does the author justify the characterization "didn’t like any kind of change".
You are making the calculation at a point in time. What if the 'bad' developer is on a fast growth curve and in only a few months time will be a net gain for the company?
A developer on a fast growth curve generally responds well to corrective action: hey, let's work together to make you a winner in our environment, instead of the person whose bugs are caught by your coworkers.
I've had successes and failures trying this strategy.
This guy is so negative. The word 'bad' itself fills you with so many negative connotations. What were the reviewers doing when so called bad developer was pushing code?
Without sounding like a curmudgeon, "An even more egregious form of negative work is a developer who is stuck using out of date programming practices AND has a large amount of influence at a company. " can happen when new developers come up with ways of re-writing applications without understanding the full problem statement or spending the time to understand the existing codebase.
Usually, they might have worked on small to medium-sized projects, then show up to work on a project with millions of lines of code and teams in different geographies, and expect to change things across the board. When the same explanation has to be given for the 15th time, you can turn into a toad and start saying "NO" first. :)
It's true that not all developers make positive contributions; however, I think that blaming "lowering hiring standards", as the author does, is a complete red herring.
There is such a thing as hiring without doing even the most basic test for technical competency: last year, at a different job, I worked with a guy who thought the best way to implement a CRUD service was an nginx plugin, and who, when faced with a real programming language, managed about 4 lines of code a week, and not good ones. But that's an extreme case of not even checking.
In practice, we have to face that our quest for ever more stringent hiring standards is not really selecting the best, but just selecting fewer people, in ways that might or might not have anything to do with being good at the job. Let's go through a few examples from my career:
A guy who was the most prolific developer I have ever seen: he'd rewrite entire subsystems over a weekend. The problem is that said subsystems were not necessarily better than when he started, trading bugs for bugs, and anyone who wanted to work on them would have to relearn that programmer's idiosyncrasies of the week. He easily cost his project 12 man-months of work in 4 months, the length of time it took for management to realize he had to be let go.
A company's big UI framework was quite broken, and a new developer came in and fixed it. Great, right? Well, he was handed code-review veto over changes to the framework, and his standards and his demeanor made people stop contributing after two or three attempts. In practice, the framework died as people found it antiquated, and they decided to build a new one. Well, the same developer was tasked with building the new framework, which was made mandatory for 200+ developers to use. The total contribution was clearly negative.
A developer who was very fast and wrote working code had been managing a rather large 500K-line codebase, and received some developers as help. He didn't believe in internal documentation, in keeping interfaces stable, in writing code that wasn't brittle, or in unit tests. Code changes from the new developers often broke things; the veteran would come in, fix everything in the middle of the emergency, and look absolutely great, while all the other developers looked to management as if they were incompetent. They were not, however: they were quite successful when moved to other teams. It just happens that the original developer made sure nobody else could touch anything. Eventually, the experiment was retried after the original developer was sent to do other things. It took a few months, but the new replacement team managed to modularize the code, and new people could actually modify the codebase productively.
All of those negative-value developers could probably be very valuable in very specific conditions, and they'd look just fine in a tough job interview. They were still terrible hires. In my experience, if anything, a harder process that demands people appear smarter or work faster in an interview has the opposite effect of what I'd want: it ends up selecting for people who think less and do more quickly, building debt faster.
My favorite developers ever all do badly in your typical stringent Silicon Valley interview. They work slower, do more thinking, and consider every line of code they write technical debt. They won't have a million algorithms memorized: they'll go look at sources more often than not, and will spend a lot of time on tests that might as well be documentation. Very few of those traits are positive in an interview, but I think they are vital to creating good teams, yet few select for them at all.
So I think it's better to be a bit less stringent early, make take-homes part of the interview, and just learn that it's OK to fire people if they aren't working out.
> I think that blaming "lowering hiring standards", as the author said, is a complete red herring.
I've seen it done, far too often. Management-only hiring interviews because devs were being "too stringent" and it was "time-sensitive". A guy who had "contributed to the Linux kernel", but his FizzBuzz implementation didn't work. Of course, management didn't notice, only by luck did a dev look at the whiteboard after the interview.
Or, even if they haven't lowered, someone slips through the cracks. They then usually bounce from team to team, happily collecting paychecks. Then, after they've been around for years, having worked on so many projects, management considers them senior somehow. Everybody thinks "can't be that bad if nobody has fired him", and thus firing never occurs.
After reading the article, I feel like a code test and discussion and or whiteboard would have filtered the type of developer they mentioned who is a net negative on the code base.
I've seen devs that passed a code test and discussion that had this problem. For example, you can interview someone on a good day, when most of their days are bad days.
I don't think there's a magic interview style that prevents this problem. Corrective action and fast firing if necessary have worked for me. And whatever you do, don't impose onerous procedures on all of your devs because one of them screws up on a regular basis!
Seconded. Sometimes the answer to a mistake being made is not to take on a boatload of process in an attempt to make sure it can't happen again, it's to slap the hands of the offending person and say don't do that.
I've never done the interviewing, so I'm not sure. But if someone fails their code test or discussion, doesn't that suggest they are a higher risk for being a "net negative"?
I agree with the code test; the discussion and whiteboard, I would say, depend on the type of developer. The former member of my team who produced negative work did well in discussions and with popular little whiteboard problems. Our normal policy has become to have applicants do a mini-project that represents actual work we are doing. So far it has worked flawlessly. This guy didn't have to do that because he was so experienced. He ended up creating over-complicated, very buggy code that took more time for us to review than it would have taken for someone else to write. And when we needed to make even minor changes to his code, the fastest way to do it was to delete it all and start from scratch; everything was so complicated and interdependent.
He had an appearance of technical skill, was very well spoken, knew all the interview questions and answers, but had really bad judgment and somewhat poor logic when he had to come up with novel solutions. On the contrary, I taught myself and didn't have the CS background to do well in purely technical interviews. But I was given the practical challenge and crushed it. I knew what I needed to know for my job. Of course I had a lot to learn (and still do), but I was productive from the start, or so I have been told.
I guess there is nothing wrong with whiteboard tests, but I think a practical coding challenge where the applicant makes part of an application is much more valuable. You can see that they have basic coding ability, but also how they organize code and break down larger problems. Can they get the big picture right while making clean, maintainable code?
I was good at code tests straight out of university (when I practiced that sort of thing). 13 years later I am a far better software developer, keeping things simple but functional. I am a lot worse at code tests now, as I hardly ever need to write that type of code. I am able to design things in a way that doesn't usually need complex code to achieve the required functionality.
How does a whiteboard test speak to documentation and code quality? By definition pseudo code is not production code and writing production style documented and compiling code on the whiteboard sounds like a terrible idea.
I feel like whiteboarding is really essential when talking about how code should be written, but most whiteboard tests are about writing code on the whiteboard. There's a big difference between the two activities.
Only if you assume it actually works, which is the most blatantly circular logic in this thread.
This guy could have said "I feel like a daily prayer in the server room would filter bad devs" and it would be just as proven as his feeling about whiteboarding.
There are both upstream problems (hiring the wrong people) and downstream problems (having process to catch garbage before it gets into mainline). You need both as you will inevitably make a bad hire somewhere along the way. For downstream prevention you need good processes to catch poor quality code. We've found that automated (static analysis [we like sonarqube]) plus consistent human code reviews goes a long way to ensuring a high quality code base.
The way I've seen it happen is this: you have some developers; some are good, some are not that good. The good developers leave, the bad ones stay. In time that bad developer will be the only one on the team with 10 years of experience. He will be the only person who knows all the little details of your project, and so he will even get a management position or a senior title. Of course, all those details should be documented somewhere, but they are not.
The funny thing is I read the first few paragraphs and thought two things:
1 - proper onboarding
2 - standardized code
My first gig as a developer was at a place that hammered out some 2,000 websites a year. When you start to think about that number, you get a headache. That roughly meant that our devs were required to cut up a psd, code and integrate 15-18 sites a month.
In order to do this, you need to have really strict standards. You need to have a stable, repeatable process in place to be able to handle that many sites in a year.
The company did two things to ensure this. First, there was a two-week training. Then you did pair programming for another two months. By the third month, you were far enough along that you knew the standard templates, the naming conventions, and the JS conventions, and you stuck in that lane and didn't do anything outside of it without a senior dev's approval.
This led to standardized code for every site that was built. You could pull out a site that was developed two years ago and easily change or update the code, because everybody coded the same way, so it was easy to dig into the code and find or change something. The advantages were obvious: minimal cross-browser issues, standardized coding across developers, faster dev times, fewer errors and weird coding issues like the ones the author points out. It literally came down to how fast a dev could code and how productive he could be.
Sure, it was a little repetitive and boring at times, but the efficiencies were undeniable. That company was the last place I worked at where they went to such great lengths to train and standardize their processes. I've since run into many of the issues the author points out because of the lack of coding standards and training new devs to those standards.
Sounds like you have no process:
Before task is assigned, there should be a technical design and acceptance criteria.
All code should be peer reviewed.
All code should have an attached unit test.
All code should be tested in a dev and cert environment.
Interesting; I just recently read about a similar concept in Chris Hadfield's book "An Astronaut's Guide to Life on Earth". He puts all team newcomers into three categories: minus ones, zeros, and plus ones. Basically, minus ones think they always know better and don't spend effort familiarizing themselves with the existing system; they also ignore the epistemological category of "unknown unknowns". Chris recommends always striving to be a zero at first.
TL;DR: Some devs introduce toxic code into your codebase; you shouldn't hire them.
However, it seems to me proper code reviews and mentoring would have prevented that in the first place? That's an important responsibility for the 'good' devs, and you can't simply complain after the fact if you aren't proactive in that sense.
If A. Random Developer can't tell that/if her changes work before checking them in, then ... how are they to justify checking them in at all?
A first alternate to actually testing things might be "conformance to a model of Best Practices for changes that we think might - maybe - do minimum damage."
For certain definitions of what is considered a positive contribution.
If the resident developer A has used the wrong approach to solving a problem and developer B comes along and uses a better approach, dev B could be seen as introducing something overly complicated and less understandable.
I call them destructive, or previously also "negative bus factor". Without the few bad apples in my community the product would actually get better, but so far it has continued to decline over the last 15 years.
There is another kind of pernicious developer: the one who writes large amounts of seemingly effective, but ultimately bloated code.
What we often fail to realize is that 'code = cost'. Once written, code has to be maintained, and every line adds to the inherent complexity of the system.
I think we are all familiar with that old IBM (OS/2?) story of the dev who mostly spent time removing code from the system, and who had to justify his salary because they were measuring 'lines produced' as a metric. If you can 'remove a line of code' from software and it still 'does the same thing' ... well, that's definitely worth more than adding code :).
Anyhow, it's worth considering that 'number of lines of code' is really quite a bad measure of anything.
Second - some people are really bright, but they struggle with clarity etc. Perhaps it would be appropriate to put someone like this on a bug-clearing team? They can 'solve problems' by tracking down issues and, hopefully, fixing them. 'Fixing' code is often much safer than writing new code, as the patterns, standards, and practices are already 'in place'.
It's a rather a paradoxical and intriguing business, writing code!
I think cripytx might be suggesting that the negativity of one's work might be subjective.
In software development, this idea has proven to be a very dangerous one. And it's false.
Take a perfectly well-written module: it's always possible to transform it into objective crap.
Just for fun, here are some recipes for C++:
- add to the module a dependency on a framework (e.g. use QString instead of string just because the dev is more familiar with QString)
- replace implicit memory management with explicit management (e.g. just because the dev doesn't like unique_ptr)
- reformat some parts of the source files into one's own coding style (e.g. because the official one isn't good anyway)
- inline every function that's only called once (e.g. why factor it out if it's only needed once?)
- up-front convert functions into function templates (e.g. one day someone might need the generic version!)
- "optimize" the code (unroll loops, inline calls) without profiling it (e.g. this part can't be profiled anyway because it doesn't take enough time)
Take one module (module A) and apply these recipes: you get module B.
At first, A and B both have the exact same number of features, and the exact same number of bugs.
However, as time passes, the stability of both will quickly diverge, the cost of new features will also quickly diverge.
It doesn't require more work to directly create module A, because it's actually about not doing some things; however, it certainly requires more knowledge.
Developers directly creating module B are implicitly relying on the ability of their team to transform it into module A. And _this_ will require work.
This is negative work, i.e. work that should have been done but hasn't (also known as "technical debt").
Not all developers make positive contributions, but no single developer in a team can make a non-positive contribution on their own. They can't, because as a team you've decided to allow this person to make contributions alone; you need to own that contribution. If you don't want that responsibility, there is a solution: code reviews.
Anything that gets submitted is literally something you've agreed to support and are okay with being in the code base.
I have seen the type always conform to a style or structure. The designers like to talk to design-bent leads on development and manage only a subset of the issues that could materialise.
A dev team that has been together for three or more years brings a better structure to code and development. Just the fact of leisure time well spent in code and banter brings enough structure to the team.
I guess you have to know your team better than your desk-jockeys.
Producing a lot of code which has to be reviewed without ever actually landing any of it is itself a non-positive contribution. Code reviews are wonderful and I'd never want to go back to working on a team that didn't do them, but they do take time from the reviewers.