I've made similar switches between management and occasional development, and there are a lot of "interesting" and frustrating challenges in returning to coding after a break of several years. Hearing other people's experiences helps in dealing with the frustrations. There is a lot of tooling, frameworks, build processes, etc. that weren't widespread a decade ago, which makes starting a side project, or jumping into code only occasionally, an obstacle course for those who don't do it regularly. I'm ever grateful to those who write good how-to guides for these things that I can refer to when I miss a crucial step in my setup.
I switched from medical, with fierce QA, to more general SW dev over the last 3 years. Developers waste time writing braindead unit tests and don't stress their software like QA teams. Major bugs run undetected for weeks and cause chaos. Managers grin believing they're net-ahead.
There is an unwavering belief that knowing the corner cases and weak points of your own code makes you somehow better at writing the tests. We also seem to be obsessed with testing that is half-way between unit level testing and system level testing. We test APIs and features more or less in isolation.
Tests should be written by antagonistic developers who are not at all familiar with the code. They should stress the capabilities promised in the interface documentation. Hiring more people who are dedicated to testing costs a lot of money though. I don't see things changing anytime soon.
> There is an unwavering belief that knowing the corner cases and weak points of your own code makes you somehow better at writing the tests. We also seem to be obsessed with testing that is half-way between unit level testing and system level testing. We test APIs and features more or less in isolation.
You're both right. Devs should write tests for their own code, but they shouldn't be the only ones writing them.
One of the problems (at least at my employer) is QA is considered a place for second-rate developers. Some of them are so bad that they rely on the devs to basically write their tests for them, by requiring extremely detailed and specific acceptance criteria. Only once have I worked with a tester that was engaged enough to really understand the requirements on their own and call out weird behavior or corner cases that weren't spelled out in advance. It was great.
So why is your company hiring bad QAs?
> QA is considered a place for second-rate developers
Obviously, if you look for second-rate developers to be your testers, you will end up with second-rate testers or worse!
Yeah, and to make my thoughts a little clearer: if your organization does that, it's no wonder some people come to the (wrong) conclusion that only devs should test their own code. To come to the right conclusion in such an org, you either need an outlier experience (e.g. a talented tester who chose the role against organizational incentives) or the ability to see past your own direct experiences (which is really hard).
I'll die on this hill. The developer that wrote the code by definition cannot write an appropriate test suite for it. It is entirely possible/probable that details missed in implementation will be missed entirely in test, as the developer missed them.
<controversial take> Manual QA promotes bad habits and is not a great thing to introduce to an org. SDETs/QA Engineers/Developers whose role is explicitly to create tests for an org are worth their salary 10x. </controversial take>
Unit tests written by devs are not testing that the system does what it is supposed to do. They're testing that the code does what the programmer intended it to do. They have to be written by the programmer because they're coupled with the code.
There's a separate testing process needed (as you say) that tests if the system does the thing it's supposed to do. The programmer cannot do this, precisely because it's basically testing their understanding of the requirements.
I have my co-founders QA everything before rolling out to production. It's a pain, but it's saved us a few headaches where I was just following the golden path and missed some obvious problems.
My original post was intended to focus on system and api level testing but I wasn't clear on that.
Developers also risk the "I thought of that already so I don't need to test it thoroughly" pitfall, eventually leading to the usual "we didn't think it was possible to fail like that".
However, I have to say, depending on your product, some of the best damn QA I've worked with aren't writing integration tests. At most they're using SQL to get the DB into an appropriate state, and then they're doing the rest manually. But it's really about their mindset; manual is just the easiest approach for them, "manual" encompassing "click on the thing" as well as "hit the endpoint with Postman".
It's their mindset I value most. And some of the best pairing I've ever done is with a QA to write integration tests.
Interactive software, and very interactive software like games, absolutely requires good manual QA. There's a lot of interactive software out there.
Are you saying it's a bad role to get into because it's underpaid? ;)
You can't finish a project without developers and you can finish it without testers.
I've seen a tester changing requirements on the go, delaying features for things that aren't even the case. Those change requests then cause bugs, because... Dev.
The most important thing for a dev is domain knowledge and letting them care about the product.
Usually, when the original dev leaves (monolith), that's enough to call the project abandoned but still maintained, leaving the devs wanting to spend a minimal amount of work on it.
And that's going to trigger sloppiness.
> without testers.
A "tester" is far far far away from a legitimate QA person.
> I've seen a tester changing requirements on the go, delaying features for things that aren't even the case.
Hiring bad employees doesn't invalidate any role. It invalidates (a) your hiring process for that role, and (b) that employee.
...for a given value of finished. My value of finished involves "no significant bugs that cost thousands of dollars of revenue per day". And in the past, QA has repeatedly caught such bugs, despite ample testing by the devs.
It really comes down to the mindset, and the approach - a QA approaches testing code differently to a dev who wrote the code.
In a well-functioning team, requirements should be explicit, understandable, achievable, realistic, and _agreed upon_ from the get-go. Once agreed, devs and testers work to those requirements. If, during the development cycle, it's realised that a requirement is lacking, or indeed, entirely absent, then it should definitely be discussed between devs and QA, at the very least, and _agreed upon_ - noting that sometimes, it might be a reasonably significant change in requirements that the business needs to be involved also in reaching that agreement. It may change delivery time, or delivered capacity etc.
The ideal, from my POV, is having testers fully involved in the planning discussion / backlog grooming / whatever you call it, where your team ensures that the requirements from business are specific, realistic, achievable, and useful. And then, hopefully, given the entire team's domain knowledge, that is, domain knowledge from developers AND QA, because they will have a metric shit ton also, and I find it curious you only ascribed that to devs... then, hopefully, you can determine which requirements are missing, get the business to agree, and then make those part of the agreed-upon work also.
You'll note that I'm not speaking on dev vs. QA, rather on process and team structure. The issues you faced are not issues inherent to QA. The issues you faced are due to process and team structure.
I could understand "Manual QA does not scale", but why bad habits? What if the person doing manual QA of an application then also writes tests based on what they have determined are the likely problems in the application? That's what I would do if I were going to write (UI) tests: test it by hand first.
I wouldn't really call this manual QA then. I'm describing the thousands of orgs whose QA processes are limited to "These people will run these five thousand test cases by hand for every release", and/or "These people will be handed a ticket and manually click all the buttons before marking it done"
Having the developer write the tests suits scenario 2), but it's totally inappropriate for scenario 1).
The same is true of pretty much everybody - PMs and QA included. Everybody misses details and edge cases.
It's better to write the test suite in a readable form and get everyone to take a look at it.
Unit tests are almost entirely unsuitable for this purpose most of the time.
No, the same isn't true of everybody. They would all have to miss the exact same details and edge cases, was the point. It's the same reason that you have sensor redundancy, you don't have the same sensor measure twice and trust it.
That's exactly what I meant. With inputs from a diversity of people you can pick up the edge cases more easily. Separately everybody will miss some.
> The developer that wrote the code by definition cannot write an appropriate test suite for it
You said that issue could be applicable to everyone:
> The same is true of pretty much everybody - PMs and QA included. Everybody misses details and edge cases.
I pointed out that the issue can't be true of everybody, since everyone other than the original developer would generally avoid it. You then said I repeated what you said. Which is it?
The problem is it's difficult to define what that is (and particularly to teach a junior Dev what to do).
This is a very common, but inevitable, misunderstanding.
The real reason the same person should write tests and code is this: if you find a person who is competent enough in both the technical domain and the problem domain to write the tests, you definitely want that person to write the code as well.
See how it's not "developers should write tests"? It's "domain experts should write the tests... and domain experts should write the implementation."
The two tasks happen to need overlapping skills.
By definition, the author of a unit cannot write a black box test for it. Any test they write is going to be a white box (clear box?) test, because they know how the unit works.
If you interpreted the spec incorrectly when writing the unit, you're going to interpret it incorrectly in exactly the same way when writing the test. White box testing has its place, but you need a second person if you want to do black box testing.
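To make the white box/black box distinction concrete, here's a minimal sketch in Python (the `parse_version` function and its spec are hypothetical, not from this thread): the test is written purely from the documented contract, so a second person could produce it without ever reading the implementation.

```python
# Hypothetical spec: parse_version("1.2.3") returns a tuple of three
# ints, and raises ValueError on malformed input.
def parse_version(s: str) -> tuple[int, int, int]:
    parts = s.split(".")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        raise ValueError(f"bad version string: {s!r}")
    major, minor, patch = (int(p) for p in parts)
    return (major, minor, patch)

# Black-box test: derived only from the documented contract above,
# never from the body of the function.
def test_parse_version_contract():
    assert parse_version("1.2.3") == (1, 2, 3)
    for bad in ("", "1.2", "1.2.x", "1..3"):
        try:
            parse_version(bad)
        except ValueError:
            pass
        else:
            raise AssertionError(f"expected ValueError for {bad!r}")

test_parse_version_contract()
```

A white-box test, by contrast, would poke at the `split(".")` internals the author happens to know about, and would share the author's reading of the spec.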
1. Write (some) tests as you develop to help you
Writing down your program's test cases should not add much time (assuming you have test infrastructure setup like you should), since you are already having to consider these things to simply figure out what the right logic is. With a little practice, you find a good rhythm for how far to take it for yourself. In my experience, I find this actually leads to faster and more correct solutions.
2. Collect tests while in review to cover the requirements
Once you've written it, though, it's always important to get others to look over your code and try to find flaws. Ideally, they can even write tests for you to add to the feature. This is just a standard part of any good peer review.
3. REGRESSION TEST
Finally, though I'm not sure it matters very much who writes it, regression tests are something I never see stressed enough. I feel like if you are serious about testing, every bug should be required to add at least one test to the suite before it can be closed.
Just my 2-cents.
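As an illustration of point 3, a minimal regression-test sketch (the bug, function name, and numbers here are hypothetical): the reproduction case from the ticket becomes a permanent test before the bug is closed.

```python
# Hypothetical bug from a ticket: totals went negative when a refund
# exceeded the running balance. The fix clamps the balance at zero.
def apply_refund(balance_cents: int, refund_cents: int) -> int:
    return max(0, balance_cents - refund_cents)

def test_regression_refund_over_balance():
    # The literal reproduction case from the bug report, pinned forever.
    assert apply_refund(500, 700) == 0
    # The ordinary path still works.
    assert apply_refund(500, 200) == 300

test_regression_refund_over_balance()
```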
Getting constantly dragged back to fix last week's bugs sounds like a chapter from Dante's Inferno. I'd rather discover them while the code is fresh in my mind.
Without tests, how do I know when I'm done?
Initially, we had separate QA teams that would write tests, a subset of whom might do manual testing in the UI.
The feedback loop for this was brutal, as you suggest, and caused a lot of problems, with delayed breakage that became hard to fix and debug. It's also often hard for QA to know what the code is doing, or to interact closely with the dev teams. You can try to get them as close as possible, but it doesn't work super well.
Now devs are responsible for writing tests and teams think about potential failures when designing things/discussing the plan with their team. They were supposed to consider these things before too, but the incentives weren't aligned - now they are. We have infrastructure teams to make automated testing as easy as possible and spin up a test environment where you can see your change, etc.
I think this model is better; testing well is hard, and reasoning about failures and how things can go wrong is an important skill to develop. Offloading this to a different team feels like a variant of the dev who "just writes features" and doesn't consider actual deployment.
This is often made doubly worse by companies that have QA teams treating them as second-class citizens, both in status and in pay. I think it's better to make writing the tests part of the job of the person writing the feature too; it doesn't help that writing tests is something few people know how to do coming out of school.
I haven't done this in probably 7 years, though. Mostly because I haven't cultivated this kind of working relationship with my coworkers.
Write the tests.
Write the code.
I'm talking about API and system level tests. Unit tests vary in scope between "test a function" and "test a unit interface". The former should be done by the dev during development; the latter should be done by someone else. It's a grey area between those extremes.
I'm coming from a systems programming and embedded background so I have no idea if a web dev or someone else should follow my advice, but I suspect so.
It's the principle of TDD but separated into different roles. It works well and produces good results, but you need exceptionally good planning upfront to actually do it.
That surely implies a specification which, in my experience, is a rare thing in these Agile days.
> the QA engineer
In the last 10 years of contract and perm jobs, only once have the QA people actually written tests (and that was largely translation of an existing manual suite to a node-based automatic runner.) It's pretty much always been "these are the things the devs/product manager/pm wants to test for these changes, please test those and approve".
Usually it's done as a collaboration between the systems architect and the QA team. The process is essentially "design the system architecture -> document the APIs -> write the tests -> implement the code -> QA". It takes a level of rigour and planning that most teams don't want to do, and it relegates developers to a pretty minor role in the whole development process. They end up just filling in the blank bits in someone else's design. Most changes in team structure over the past couple of decades have been about developers taking on more of the responsibilities rather than less.
It's actually one of the aspects of TDD that makes it so robust. If you properly design your code and really think about its APIs, you get a better app at the end. It does take longer to get something a user can see, but often the process reaches a finished iteration of an app quicker because fewer cycles of debugging and fixing are needed.
That's a very charitable way to put it; in my case I would call this kind of process very convoluted and similar to early-2000s-style development. I obviously would not want to work in such a bureaucratic environment where every small tech decision has to go through multiple boards.
Everyone who works on the app is making those decisions no matter what you do. The difference is that in the "old style" the decisions were made before the code is written, and in the agile "new style" the decisions are made after the code is written. Often that's fine, because developers are generally good at their work and they make decent decisions, but sometimes they get it wrong and that's when code design issues arise. It's also what leads to automated tests failing to test for a lot of cases that a good QA engineer would have written tests for. Those things have an impact on the user.
A huge amount of the code in the apps we use every day was never designed. No one thought it through. No one considered the edge cases. Features are thrown together in a week and "sort of" work. Every time you see a shitty broken website, or a bug in production, or some crappy slow thing that should be fast, the reason behind the problem is that the developers who made it didn't take the time (or weren't even given the time, in really bad companies) to think about what they were building. A lot of developers like that working environment because they can hack on things and move on to the next challenge quickly. That's fun. I argue it's also bad for users, and I care more about users than I care about developers (and I say that as a developer who's been making web stuff for almost 25 years).
I'm not arguing that we go back to Prince2 and waterfall. There's a limit to my tolerance of bureaucracy too. I'm saying that things have gone a bit too far the other way, and many developers need to spend more time planning what code is written before leaping in and coding up the first solution they think will pass the acceptance criteria someone from product wrote.
Earlier you proposed: "design the system architecture -> document the APIs -> write the tests -> implement the code -> QA". This is precisely waterfall. Detaching the term from the emotions and bad PR, there are decades of real experience of people who did that and discussed their results.
The bureaucracy is not the cause. I argue the causality is: the "design first" assumption -> the "throw it over the fence" practice -> everyone blames everyone -> Prince2 comes to the rescue.
> I say that as a developer who's been making web stuff for almost 25 years.
> I'm saying that things have gone a bit too far the other way, and many developers need to spend more time planning...
I'm not sure I fully agree, unless we want to define waterfall as "anything where a bit of time is spent up-front to decide how a part of the system should work" :)
For me, waterfall is where every single aspect of the project is pre-defined, and cannot be changed during development without serious pain and lots of awful bureaucracy
But in the above workflow, there isn't anything stopping us from making a loop for example on a sprint by sprint basis, and using feedback from both the tests and changing requirements to improve the design, update the APIs, change the tests, etc.
I suppose we could argue that this is "mini-waterfall" but it works in my experience :)
The 2000s style you are referring to was centered around "features" or "business cases". One unit of work = one feature. The bureaucracy is orthogonal.
Modern "agile" style is centered around "sprints". One unit of work = one sprint duration.
The bureaucracy inevitably involves synchronization points and enforces more or less linear process, which usually implies longer "real world" feedback loops. Lack of bureaucracy allows for concurrency and possibly shortens the feedback loop, which may be nice at first, but then you slowly informally incorporate the bureaucracy back - code owners, design sessions, etc..
The bureaucratic process is not inherently bad, the agile process is not inherently good. In some cases quick turnaround of basic features is more profitable, in other cases correctness may be most important. I can agree that quick feedback is more fun to work in, but it is not necessarily the best way.
A Change Request (or whatever you call it in your flavor of agile) is the single informal source description of behavior, while the Specification, Code, and Tests are three distinct formal descriptions of the same thing. If any two of those three are written down by the same person, assumptions and thinking errors translate from one to the other, largely defeating the very purpose of writing down another formal description in the first place.
Unit tests became mainstream with the proliferation of weakly typed dynamic languages (JS, Python), and concepts like TDD largely became mainstream due to the lack of formal interface-specification capabilities in the languages themselves. Stricter languages (Java, C++) cover huge portions of the testing in the type system. Tests written by the developer are okay if they are used as a substitute for a formal type system. However, tests intended to catch logic errors will contain the very same logic errors found in the code, and such tests will only give you false confidence.
It does not mean that tests must be written by a dedicated QA, but rather that tests should be written by a different person for them to serve the purpose they are intended to serve.
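To sketch the "type system covers testing" point in a single language (Python with type hints, standing in for the stricter Java/C++ examples; the order-state code is hypothetical): replacing a stringly typed state with an Enum makes the whole family of "rejects an invalid state string" unit tests unnecessary, because a checker like mypy flags the bad value before anything runs.

```python
from enum import Enum

# A stringly typed version, advance(state: str), would need unit tests
# for every typo'd state ("shiped", "payed", ...). Those are exactly the
# tests a type system makes redundant.

class OrderState(Enum):
    NEW = "new"
    PAID = "paid"
    SHIPPED = "shipped"

def advance(state: OrderState) -> OrderState:
    # Invalid states are unrepresentable: a static checker rejects
    # advance("shiped") at analysis time, before any test runs.
    transitions = {
        OrderState.NEW: OrderState.PAID,
        OrderState.PAID: OrderState.SHIPPED,
        OrderState.SHIPPED: OrderState.SHIPPED,  # terminal state
    }
    return transitions[state]

assert advance(OrderState.NEW) is OrderState.PAID
```

The remaining tests then only have to cover the transition logic itself, which is the part a second person should review or write.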
I 100% side with the grandparent comment. When you've worked with a great QA, it's night and day versus a dev writing their own tests; but a dev not writing their own tests at all is a nightmare.
Ideally there's a place for both, and they work very closely with one another to make the software stable at all levels. It's arguable that the SRE is the new QA engineer.
You need both and ideally the original author writes the test and somebody else reviews the test thoroughly. That reviewer should have an intimate understanding of the spec you are implementing.
The more formal that spec, the less need for that external reviewer, since there is less ambiguity leaving room for misunderstanding. In a perfect world, the spec is completely formalized, and instead of a test that shoots in the dark, you'd write a formal machine-verified correctness proof, automatically covering all cases. Then you would not need that external reviewer and could just have the programmer do it by themselves.
I think that's where most testing should live. Testing single modules means you don't test how they plug together, and it makes it hard to refactor responsibilities between modules, to the point that it often doesn't happen. Full system-level testing is painful to write, as you often have to mock out the world, and painful to run, as it is usually slow.
The middle ground, where you test a block of modules together through the top interface, is a great compromise. It does not replace the others: you need some full system-level testing to check that your system integrates properly, and most modules will need some unit testing (and some will need a lot), but it's the best bang for your buck in terms of invested effort.
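A toy sketch of that middle layer (both modules are hypothetical): two private units exercised together through the one interface callers actually use, which leaves responsibilities free to move between them without rewriting the tests.

```python
# Hypothetical "block of modules": a tokenizer and an evaluator that are
# only ever used together behind calculate(). Testing through the top
# interface means we can shuffle work between the two without touching tests.
def _tokenize(expr: str) -> list[str]:
    return expr.replace("+", " + ").replace("-", " - ").split()

def _evaluate(tokens: list[str]) -> int:
    result = int(tokens[0])
    for op, value in zip(tokens[1::2], tokens[2::2]):
        result = result + int(value) if op == "+" else result - int(value)
    return result

def calculate(expr: str) -> int:
    return _evaluate(_tokenize(expr))

# Subsystem test: exercises both modules, names neither.
assert calculate("1 + 2 + 3") == 6
assert calculate("10-4+1") == 7
```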
QA teams and devs write totally different tests. Devs go about it to help future refactoring, e.g. codifying what the feature is supposed to do, so that if we change it in the future, we will not inadvertently break something. QAs write tests that look at it from the point of view of the user.
QA tests are arguably more valuable, but writing testable software itself can only be done if the devs too are invested in the task.
Maybe we can form DevQA teams, the way we did for DevOps and DevSecOps?
But alas, I still think having everyone throughout the whole stack invested in writing tests makes the final product better.
Are folks downvoting this because they don't like testing? Is this idea that prevalent?
Is it not common knowledge already that code review is the single most valuable tool in getting rid of bugs? I remember reading it in Code Complete, or maybe The Pragmatic Programmer; I can't remember.
I think there's still value in writing your own tests, at least with regard to refactoring (and sleeping better at night after deploying to production).
No, it isn't, because there isn't that much evidence that it is. It's a good tool, but "most valuable tool"? That's a very bold claim. I would still place code-analysis tools, linters, and fast feedback cycles (= fast compilation and the ability to check changes) over code reviews, if only because many code reviewers tend to drift into "I like this way better" arguments.
> Glenford Myers points out that human processes (inspections and walk-throughs, for instance) tend to be better than computer-based testing at finding certain kinds of errors and that the opposite is true for other kinds of errors (1979).
> This result was confirmed in a later study, which found that code reading detected more interface defects and functional testing detected more control defects (Basili, Selby, and Hutchens 1986). Test guru Boris Beizer reports that informal test approaches typically achieve only 50–60 percent test coverage unless you're using a coverage analyzer (Johnson 1994).
So I would conclude that the best approach is to use both code reviews/inspections and automated tests/linters/analyzers.
The part about QA is perhaps the most infuriating. Most business logic is so simple it can be presented as test/result matrices, which generally translate cleanly into requirements and automated tests, are easy to reason about, implicitly involve three parties with different perspectives, and should be blind to technical details, giving devs more freedom. Yet instead, this piece gets relegated to "software requirement engineers" writing walls of text, who tend to forget even the simplest cases (null, empty list/string, etc.), and any automated tests are then designed by the same developers doing the implementation, with problems only caught after the fact by manual QA a few months later.
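A sketch of the matrix style being described (the discount rule is hypothetical): the requirement is literally a table, and the automated test is a direct transliteration of it, null and empty rows included.

```python
from typing import Optional

# Hypothetical rule: discount percentage by customer tier and cart size.
def discount(tier: Optional[str], items: int) -> int:
    if not tier or items <= 0:
        return 0
    if tier == "gold":
        return 20 if items >= 10 else 10
    return 5 if items >= 10 else 0

# The requirement as a test/result matrix: (tier, items) -> expected %.
# Each row is readable by dev, QA, and the business alike, and the
# null/empty rows are written down instead of forgotten.
MATRIX = [
    (None,      5,  0),  # null tier
    ("",        5,  0),  # empty tier
    ("gold",    0,  0),  # empty cart
    ("gold",    5, 10),
    ("gold",   10, 20),
    ("silver",  5,  0),
    ("silver", 10,  5),
]

for tier, items, expected in MATRIX:
    assert discount(tier, items) == expected, (tier, items, expected)
```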
Doesn't Google have a Software Developer in Test role? I work in a bank, and every project has these people; their only job is to write automated tests (usually using the Python-based Robot Framework).
Software engineers benefit from having tests that encode invariants they cannot otherwise encode in the type system, so that they have a machine-assisted refactoring aid. It's just an extension of writing the code; if you think you know what you want your code to do, you just add more of that.
But it's very hard for software engineers to think about what happens outside of that realm, whether because the software encounters edge cases naturally or due to "creative" ways the end users find to (ab)use the software.
Some of those cases can be explored with a fuzzer, some of them are best found with actual human QA testers.
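The fuzzer half can be sketched with nothing but the standard library (the `truncate` function and its invariant are hypothetical): generate inputs mechanically and assert the one promise from the interface documentation, rather than only the cases the author already thought of.

```python
import random

# Code under test: naive text truncation for a UI label.
def truncate(text: str, limit: int) -> str:
    return text if len(text) <= limit else text[: limit - 1] + "…"

# Invariant from the interface promise: output never exceeds the limit.
random.seed(0)
alphabet = "ab …"
for _ in range(10_000):
    text = "".join(random.choice(alphabet) for _ in range(random.randrange(30)))
    limit = random.randrange(1, 20)
    out = truncate(text, limit)
    assert len(out) <= limit, (text, limit, out)
```

A dedicated tool (or a property-based library) does the same thing with smarter input generation and shrinking of failing cases; the human QA tester covers the "creative abuse" cases no generator will think of.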
As always, things stop working when you just go through the motions and become cargo cults.
I mean by all means, if a tester finds an issue, write a test to replicate it before fixing it so that it can't recur.
But also, tests are code and code rots. Changing that implementation entirely voids the test. And at some point your test codebase size far exceeds that of your implementation.
Try telling them that such a test is useless and even harmful to long-term value, and you get "we're all people and make mistakes; we want to prevent that". If someone is trying to prevent a wrong assignment in a strongly typed language, well, I will call them "braindead".
But it's not just software development, online news articles also tend to get worse with more journalists publishing without a proof-reader involved.
The frustrating thing for me is that GUI development has apparently gone backwards a decade or so, with the rise of web frameworks and all the extra layers and connections between the user and the actual data.
It used to be that you could grab some components in VB6 or Delphi, and have your database prototype up and running in an hour, and make a stream of modifications as required, with the main delay being pkZIP and copying files prior to making any big changes, just in case. Compile cycles even then were in the 0.5-5 second range, like now.
I've wimped out and use a GUI client for Git and GitHub, but it is amazing to start a project and, once it's running, push commits to the internet with a single click.
The main limits I see right now are:
My age - I'm 57
My skillset doesn't include Rust, Clojure, etc.
The programming languages are less impedance matched to the human mind than they were in the past. Pascal made you type a lot of things, but that also made it far less ambiguous, and much, much easier to read and write.
Large firms like Google/Amazon/Apple etc. tend to have an unquenchable need for skilled Java/C++ devs. Beyond the top firms, large startups like Square/Uber/Salesforce similarly use Java stacks.
The answer is simple: different layers, different requirements. I'm not particularly fond of the myriad Electron apps out there, but we know what the alternative looks like: a world where most programs only work on Windows.
Sometimes "a fancy dialog box with fancy animations" is precisely what a business needs. The fact that it can be created and deployed worldwide in 20 minutes by a run-of-the-mill worker is a plus in this context. The world doesn't care about good software; it cares about software that fulfills an otherwise unfulfilled purpose.
Every platform has regressed significantly compared to just 10 years ago. Modern UIs are slow, laggy, visually poorly designed (both in layout and in control design - they're just "flashy"), they're completely inconsistent with each other.
They seem to have completely lost 20 years of refinement in user interaction.
In the Amiga days you would be yelled at for placing the "Cancel" button of a dialog in an unexpected position. And I would say: rightly so. Following system guidelines for consistency gave you great productivity as a side effect.
Today you're lucky if you can guess that the dialog is modal at all, let alone figure out which one is the ack button and which one is a link that redirects to generic online help page.
The system design guidelines are ignored by system and app developers alike. It's a free-for-all.
I get a good chuckle when I see flip-switch-style checkboxes with a description text that also flips in meaning when you toggle them. Flip switches are already frequently more ambiguous than checkboxes, but the changing text brings the ambiguity to a whole new level. Checkboxes were perfectly easy and took less space, but flip switches took over most modern UIs on mobile and desktop because they look cool somehow?
Drop-downs pop up a borderless dialog showing you some choices, with a modern scroll bar that's hidden by default. Depending on vertical size, you might not realize there's more to see if you scroll, since there's no indication of further elements unless the next element happens to be visibly truncated, making it obvious.
Actions (which were traditionally buttons) interspersed into the text as links, making the distinction a crapshoot.
Responsive "desktop" UIs that can't fit 10 toolbar elements in view and switch to a hamburger menu if the window is smaller than 2/3 of the screen. Make it better by actually shrinking the available workspace when that happens, thanks to an extra-large sidebar that pops up in its place.
Broken scrolling (as in non-standard behavior, travel length, and so on) is something I had never seen in desktop apps until recently. Custom widgets that do not correctly replicate the system behavior. People complained for decades about Gtk/Qt not behaving exactly like Win32 (or the same with Cocoa/Qt on Mac, or any combination), and yet those perform like a dream compared to Electron, Flutter, or even QML.
Of course you had bad UIs back then and now. But I've never been so frustrated by so many commercial programs switching over to multiplatform development and custom UIs.
I find it to be the exact opposite. Okay, maybe I could break out Delphi or Visual Basic and make a crap UI that only ran on Windows and required users to install the software.
Today, in a few minutes, I can make an app in HTML/JS that runs in a webpage, and several billion people can access and use it immediately on their PC, tablet, or phone. That to me is way better than it was 30 years ago.
And, if I want it to be an app I can use some HTML/JS wrapper and be done in a similar amount of time. I can have a new electron app up and running and on 3 platforms in under an hour. If I had more mobile experience I'm sure I could do the same there.
So now you have made a crap UI that is running on 3 platforms. It was quick and easy for you, but now every user has to suffer with a slow and bloated electron "app".
GUI development definitely has gone backwards at least a decade. Of course it can still be done right, but with so many low skill web developers everywhere, electron "apps" are what we get.
We're accustomed to great mobile/desktop UIs from big companies with dedicated people for each platform. And these UIs take months or years to polish.
An individual could maybe set up a project skeleton in an hour, but not much more than that.
I do the same with Lazarus, except that my app will have a richer interface with better widgets.
The modern trend towards tiny screens that have to be scrolled to fit any usable amount of data, combined with the various different scroll/UI paradigms, means that you can technically have something most of humanity can access, but most users aren't likely to be happy with it.
Maybe users today have zero expectations because they don't know better? I'm a dev too but my expectation as a user is higher. I never saw a modern UI framework not being a complete piece of garbage. This includes relatively "performant" stuff like QML.
I'm pretty sure regular young users would notice too, but it seems that most people just accept and adapt to crappier trends every year.
And the web browser/CSS is pretty good at supporting different devices/platforms.
These new realities the author talks about only hold if you're building sole-source standalone services whose only external dependencies use stable, open APIs or standard protocols.
For example, whatever handles OAuth in a local domain/test environment. When that goes down, the entire app/test harness doesn't work... at least there are other domains to use in our case, but sometimes those are down too.
Set some alerts please.
I absolutely welcome automated testing wherever possible but for any software with a user interface (including websites) it can never fully replace manual testing.
Right now, I am doing a rewrite of my online 3d engine for consumer products, after being active for 3 years as an enterprise consultant for mid and large sized organizations, mostly initiating first roadmaps and identifying how to onboard new clients, reorganizing my client's product and solution portfolios, figuring out/maturing pre-sales tracks etc...
The reason I switch from development to other tracks and back, is that after a few years I feel like I am stuck in an echo chamber, where I tend to overestimate the value of the thing I am doing, and underestimate the importance of what others do.
The biggest disadvantage is that I could have probably retired from a financial POV about a decade ago if I had stayed on the same track, while this might now take another decade, but on the other hand, for me the chase matters more than the catch.
The biggest advantage is that - due to my broad experience - my kind of profile is extremely unique, and I can find a well-paid freelance gig in about a week without too much effort if I need some cash to bridge one of my many mini-sabbaticals.
This might not be for everyone, but if you like to spend time out of your comfort zone, I would strongly advise anyone who tries to understand the bigger picture to switch every n years; it is super valuable to understand the drivers from both sides and makes the job so much more interesting.
But specialization is still fairly important. I am specialized in Swift (with a minor in PHP). This has been beyond invaluable, in the work I do. To be fair, I am involved in multiple platforms and tech within the Apple ecosystem (like writing stuff to work on all Apple hardware platforms), but that is still a fairly constrained environment.
I have found that it’s quite possible to write acceptable software, quickly; especially if we are willing to loop in a great deal of “mystery meat” code, but it’s another matter, entirely, to create awesome code.
I once had someone boast to me that they only hire people that can switch languages and environments in a matter of a couple of weeks.
Wide and shallow worked 20 years ago. Not so sure it is as effective, these days, as the river is a lot deeper, and runs faster.
The stakes for screwups, nowadays, are also pretty terrifying.
I had this idea for a killer app that required a lot of deep thinking on the design, decisions that would either make it a huge success or sink it on release. I knew whatever I decided on those aspects would also be hard to implement. Every day I would just contemplate the list of pending items in my amazing project for hours; I entered a spiral of indecision and procrastination that burned me out. Lost all interest in programming. Backed up all my code of years and cleaned my hard disk. Changed work, changed habits and hobbies. No more programming for me.
After a year I still hadn't recovered my interest in programming (it must be my age), but I still believed my idea was worth completing. Two weeks ago I restored my backup and was able to make all the critical decisions, implement, test and fix bugs. That would have taken months with all the additional procrastination.
This almost happened to me.
I switched to using Emacs org mode to track my personal projects. It lets me do the contemplating in a written form, and I gradually refine high-level targets into sub-tasks until one of them is small enough to implement (at which point I implement it).
It also means that if I am blocked on a project (can't think of a good solution), I just switch to the next one and edit that org file instead.
It's worked out really well for me. My projects are still mostly unfinished, but at least now I have time-tracked against what has been done and have clear goals written down so I know where to pick up again. This means that I have actual visual feedback of progress!
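As a rough sketch, one of those org files might look like the following (the project and task names are made up, purely for illustration); high-level targets get refined into nested sub-tasks, with CLOCK lines from org's built-in clocking providing the time tracking:

```org
* Side project: photo tagger
** TODO Design storage layout
*** DONE Decide SQLite vs. flat files
    CLOCK: [2024-03-02 Sat 10:00]--[2024-03-02 Sat 10:45] =>  0:45
*** TODO Sketch schema for the tags table
** TODO Import pipeline                                      :blocked:
   Can't think of a clean dedupe approach yet; parked here,
   switching to the next project's file in the meantime.
```

Clocking in on a task (C-c C-x C-i) and out (C-c C-x C-o) accumulates those CLOCK entries, so the file itself becomes the visual record of progress.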
Sick of repeating the same status update every day for no benefit.
One of the more difficult aspects of a team lead role is balancing all the different personalities and work preferences in that team. Consider that "toughing it out" might be a desired work practice for some people, but for others it's the opposite: it leaves them struggling for longer than they need or want to be. Stand-ups offer a chance to identify those situations, as not every developer will ask for help when they need it. The post-stand-up follow-up should then consider the individual developer's wishes (e.g. you should be able to ask to tough it out for longer, probably as long as it's not a critical blocker for other work), and that seems to be one of the places your leadership is going wrong.
I could write a paragraph refuting, point by point, every single one of your sentences in a healthy organization.
All I can really say is you should consider switching jobs. You'll have to take my word for it today but it's way better out there, and you can join a company and a team that will make you feel like NONE of those things (yet still have daily standups)
Me too, then, because that accurately describes every Agile workplace I've been in the last 10+ years.
(I will concede that a lot of those were contracts and almost by definition, if they're hiring contractors to come in as emergency fixers, things are not going well anyway.)
Response is always the same: during lockdown/WFH, it is one of the few times -- sometimes the only one -- we get to talk and hear other people so they'd like to keep it.
So instead I'm trying to move away from it being a traditional scrum round-table and try to move towards chit-chat, show and tell, general updates, group quizzes, etc. All more useful probably.
In the first case you're depriving yourself of the opportunity to be helped by your team.
In the second you're depriving your teammates of the opportunity to learn from your process, insights, and potentially helping them on their own task.
Get your head out of your ass, and think about the maximum value you can provide to your team in the 30 seconds to 2 minutes of a daily status report.
I find that some people on the team want to help too much, wasting a lot of time on things the original person would have solved in the same time alone, while burning multiple people's hours. When someone is really stuck, then sure, but more often (in my experience) it's the kind of stuck that just takes time: working through a particular problem and trying out solutions. I find that people often "want to help" to show they deserve a promotion or more money because "they are so good" and "such great team players" (i.e. signalling instead of actually contributing to the bottom line, and in fact doing the opposite by "helping out" every day instead of finishing their own tasks faster). Helping out is good, but I think daily is too often to assess, from a two-minute blurb, whether help is needed, especially when the victim isn't asking for it and gets "help" forced upon them.
Repeat the next day for C, then the next day for D.
How is this useful information to anyone that it warrants a recurring meeting?
The problem is standup is useful like 5% of the time when someone is actually blocked yet for some reason we decided it needs to be every day rather than more efficient ad-hoc meetings to unblock people.
Being out of the industry for 10 years is basically starting from scratch (almost).
1. He moved to management 10 years ago. But most things he describes were already a thing at that time. Git existed, unit tests were a thing, we already had a CI workflow at my company at that time. Sounds like he is looking 30-40 years back, not 10. Or he worked at a place that kept innovation outside their doorstep and didn't look at what others did either.
2. Moved to management and got that detached from dev life? All sw managers in my company are up to date with dev practices. How else could they make reasonable management decisions? Everybody up to the CTO is regularly committing code. Not as much as they'd like to, but they are familiar with workflows and have contributed key pieces of our codebase. Meritocracy at its best. Not sure what kind of manager he was, but I doubt I'd have liked to work under him.