I guess you could attribute this to cargo-culting / resume building. It sounds like the problem is with inexperienced people wanting to (and not being stopped from) using patterns/technologies for their own sake instead of from a business-value perspective.
Part of this is a fault in business, for rewarding this type of behavior (better to have Kafka on my resume [even if the business justification was nonexistent] to get myself past the know-nothing recruiter), not to mention the "internal resume" factor of rebuilding something.
One other way to look at this problem is to see that in most other disciplines, impactful resumes are results-oriented rather than methodology-oriented. A sales resume says how much revenue they brought on. An operations resume might state ways they created new customer value or efficiencies. A management resume talks concretely about growing a team. A Java developer’s resume says they did stuff in Java.
It shouldn’t be enough. Software engineers have a duty to identify and inventory the value they create for an employer, rather than just listing all of the tools they use. And if those engineers can’t talk about the value they create, they should take a big step back and ask if they’re actually adding value at all.
By the way, employers should share the burden of identifying software engineering value as well, and have a similar responsibility to demand an accounting of effectiveness when screening candidates. Most of the time that screening takes the form of a code test or trivia questions about some language or technology, but passing those still doesn't mean you're effective and valuable.
" Software engineers have a duty to identify and inventory the value they create for an employer, rather than just listing all of the tools they use. And if those engineers can’t talk about the value they create, they should take a big step back and ask if they’re actually adding value at all."
I don't think that's realistic. If you work on some backend or infrastructure how do you measure your value? Maybe your department can do it but not the individual.
My company has a reward system for this kind of stuff. When I look at the awards they make some sense in production because they often can show direct cost reduction. But how do you measure the impact of using Jenkins? Most likely you will have to make up some BS numbers.
- Reduced the average time from pull request to deploy from 6 days to 1.5 days (efficiency gains)
- Increased the number of deploys from 1x/week to 4x/day (output gains)
- Reduced the number of production quality incidents from 36/month to 3/month (quality metrics)
- Enabled the team to ship XYZ Project 90 days earlier, which enabled a new $10 million annual revenue stream for the company.
If one has to "make up some BS numbers" then one either doesn't understand what Jenkins is good for, or doesn't understand how to identify and measure the positive benefits of Jenkins. And that's kinda my point.
edit: I should also mention that yes, it's on your employer to help you account for this as a backend/infrastructure engineer. If not, they're kinda stacking that deck against you and you should speak up!
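To make that concrete, here's a minimal sketch (with made-up timestamps, not data from any real system) of how you might actually compute the "pull request to deploy" lead time above instead of inventing a number:

```python
from datetime import datetime
from statistics import mean

# Hypothetical records: when each pull request was opened and when the change
# it contained reached production. In practice you'd pull these timestamps
# from your VCS host and your deploy tooling.
changes = [
    {"pr_opened": datetime(2018, 1, 2, 9, 0), "deployed": datetime(2018, 1, 8, 17, 0)},
    {"pr_opened": datetime(2018, 1, 3, 11, 0), "deployed": datetime(2018, 1, 9, 10, 0)},
    {"pr_opened": datetime(2018, 1, 5, 14, 0), "deployed": datetime(2018, 1, 6, 16, 0)},
]

lead_times_days = [
    (c["deployed"] - c["pr_opened"]).total_seconds() / 86400 for c in changes
]

print(f"average PR-to-deploy lead time: {mean(lead_times_days):.1f} days")
```

Run the same calculation before and after the CI change, and the efficiency-gains bullet point more or less writes itself.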
That's a great list in theory, but what would it really mean?
- Pull-request-to-deploy is simply a latency, not a throughput factor, and may or may not correspond to business value.
- Number of deploys (without looking at features/deploy) doesn't necessarily measure business value
- 36/month -> 3/month is a meaningful statistic, but these are made-up numbers
- "Ship product 90 days earlier"- You can't really objectively prove when it would have come out otherwise
Other problems include: usually the engineers are the ones who are collecting and researching these metrics so it's a conflict of interest: an engineer is never going to mention the downsides of their work in such statistics.
I'm saying if I saw those "facts" on a resume (or as a manager) I'd ignore them all; they don't sound objective at all to me but more like good sounding pseudo-truth to give a low-competence manager something to work with.
I think you'd be shocked to find that on any results-oriented resume, it's nearly impossible to objectively prove essential and sole causation of positive results. A sales rep bringing in $10mm/year in deals is a great achievement, and a hiring manager may ask about the details, but just because you needed marketing, a good product, and biz dev to warm the pipe doesn't mean that you are bullshitting when you claim those results and talk about your impact on them.
> usually the engineers are the ones who are collecting and researching these metrics so it's a conflict of interest: an engineer is never going to mention the downsides of their work in such statistics
Yeah, that's called effective self-marketing, and everyone does it.
> I'm saying if I saw those "facts" on a resume (or as a manager) I'd ignore them all; they don't sound objective at all to me but more like good sounding pseudo-truth to give a low-competence manager something to work with.
Thoroughly disagree, and I would hope as an engineer to avoid encountering hiring managers who would harbor a prejudice against engineers who can identify and speak cogently about the impact of their work.
The impact of using Jenkins should be easy to measure with metrics like "number of new versions of the application released per week", "time it takes a commit to get released" or even "number of rollbacks caused by a buggy version".
With a continuous delivery pipeline setup you'll release often and the new version will work. And that's great from the business point of view.
The kinds of things I look for in evaluating senior candidates are things like:
- Payments flow kept going down because of lack of reliability in our CI, so I rebuilt the job queueing system to get to 99% reliability
- Most of our business is outside of the US, so we invested in better CDN routing, got a 50% improvement in our web TTIs
- We weren’t able to deploy code often because of our monolith, so we moved to a service-oriented architecture in order to speed up development
If candidates can’t justify why a project happened and why it helped either the company or their team, then that candidate isn’t a senior engineer, they’re just someone taking marching orders.
> A management resume talks concretely about growing a team.
FWIW, this is also methodology-oriented, not results-oriented. Hiring people costs a business money. It only makes more money if they're effectively deployed and utilized. Hiring the right people is a critical task for any manager, but listing only hiring on a resume should be a huge red flag. A manager should be able to demonstrate how s/he was able to apply those hires to efficiently meet business goals.
This was an area of frustration for me at my last managerial gig. It was always a challenge to decouple headcount from influence in the minds of upper management. Too many teams just hired and hired and had very little accountability for delivering results commensurate with how they were resourced. We also had smaller teams that were delivering a ton of value to the company whose leadership was constantly marginalized because upper management didn't see them as representing as large a percentage of the engineering team.
You're absolutely right, and sorry for glossing it over there. Being able to hire is a competency, but deploying those hires for the success of the business and quantifying it is good performance.
I will say, however, that in highly competitive/politicized hiring environments, wresting away budget for hiring can be an indication of good performance, but that should be stated.
Maybe I'm at a disadvantage for doing this, but technologies are really only a minor detail of my resume. I focused it mostly on the things I actually did. My assumption was that this would be more important to people beyond the HR/recruiter part of the hiring phase. Also I prefer not to bullshit, and making my resume some buzzword laden list of technologies is off-putting lol
You would only be at a disadvantage for that if the recruiters looking at your resume were just looking for keywords. In many cases they are, and that's where I've seen it useful to put your concrete accomplishments in the experience portion of the resume, and put all those keywords in a gutter or in some other section (even better, put it in a word cloud for that extra street cred!)
I don't think it's just that. Part of it is just that some technologies are more fun or nice to use even if they're less practical. QoL and keeping things interesting is going to be more valuable for a typical employee than doing something boring that helps them drag JIRA cards to the 'Done' column a little quicker. And ironically, it might be better for the business too just to keep engineers engaged.
Although none of that really applies to the specific things he's mentioned in this article... CQRS/Event Sourcing is pretty much a terrible idea for most people and is just going to create unnecessary complexity and misery.
Technologies like immutable log-structured storage do have advantages from the business perspective, such as a lower defect rate and speed of development. Few people use technologies just to play with them when something serious is at stake (or so I hope).
OTOH not understanding business requirements before picking technologies is indeed a problem. You have to ask a lot of questions to extract the relevant info from the business side before you can make a choice. This is especially important for tacit, "obvious" assumptions that business people honestly forget to mention. (Like, water is wet, daytime sky is blue, and a kilobyte is 1024 bytes; these are facts that everybody surely knows, right?)
Another thing is that requirements constantly but slowly change. A key assumption of the architecture that was correct a year ago may be challenged due to business considerations, legislation, etc. You have to make your architecture flexible enough to allow for unexpected shifts, but this does come at a price of its being less simple, less elegant, and less error-resistant. Yes, it contradicts other business interests (less downtime, faster features rollout, smaller IT team headcount, etc). You have to strike a balance, and ideally be able to shift the balance without a major rewrite if need be.
So the problem is not in the architecture chosen, to my mind, but in the (wrong) process of choice.
>>> Few people use technologies just to play with them when something serious is at stake (or so I hope).
I wish that were true. For example, I bet fewer than 10% of the people who use Kafka actually need it. There's nothing Kafka can do that SQL can't; it's a highly specialized tool that drops 90% of database features for a performance gain. I suspect very few companies need Kafka. [Aside-- honestly, Kafka should exist as a storage engine within SQL and nothing else]
It's the whole "Mongodb is webscale" debate again.
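For what it's worth, the "SQL can do that" claim is easy to demonstrate at small scale: an append-only table plus per-consumer offsets behaves like a (single-node, unreplicated) Kafka topic. A toy sketch using SQLite, with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Append-only event log: producers only ever INSERT.
    CREATE TABLE event_log (
        offset_id INTEGER PRIMARY KEY AUTOINCREMENT,
        topic     TEXT NOT NULL,
        payload   TEXT NOT NULL
    );
    -- Each consumer remembers how far it has read.
    CREATE TABLE consumer_offsets (
        consumer  TEXT PRIMARY KEY,
        last_seen INTEGER NOT NULL DEFAULT 0
    );
""")

def publish(topic, payload):
    conn.execute("INSERT INTO event_log (topic, payload) VALUES (?, ?)", (topic, payload))

def consume(consumer, topic, batch=100):
    """Read the next batch of events for this consumer and advance its offset."""
    conn.execute("INSERT OR IGNORE INTO consumer_offsets (consumer) VALUES (?)", (consumer,))
    (last,) = conn.execute(
        "SELECT last_seen FROM consumer_offsets WHERE consumer = ?", (consumer,)
    ).fetchone()
    rows = conn.execute(
        "SELECT offset_id, payload FROM event_log "
        "WHERE topic = ? AND offset_id > ? ORDER BY offset_id LIMIT ?",
        (topic, last, batch),
    ).fetchall()
    if rows:
        conn.execute(
            "UPDATE consumer_offsets SET last_seen = ? WHERE consumer = ?",
            (rows[-1][0], consumer),
        )
    return rows

publish("orders", '{"id": 1}')
print(consume("billing", "orders"))
```

It obviously won't match Kafka's throughput, partitioning or replication story, which is rather the point: most workloads never get close to needing those.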
Messaging has been a tool of enterprise architecture for 10+ years and is not so much a replacement for SQL as a way to ship information between different service backends through a small and well-defined interface, rather than the enormous coupling surface area of sharing a DB. I would expect most Kafka messages to both originate and terminate in SQL databases.
The place I often see this is in localization. If I had a $CURRENCY for every time I've seen people try to build their own translation layer rather than relying on whatever's built in to the framework they happen to be in... well, I'd probably have enough for a decent dinner, but that's still far too much.
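As a concrete example of leaning on what's built in: Python ships gettext in the standard library, so a hand-rolled translation layer is rarely worth it. A minimal sketch (the "myapp" domain and locale directory are placeholders):

```python
import gettext

# Look up compiled .mo catalogs under ./locale/<lang>/LC_MESSAGES/myapp.mo;
# fall back to the untranslated string if no catalog is found.
translation = gettext.translation(
    "myapp", localedir="locale", languages=["de"], fallback=True
)
_ = translation.gettext

print(_("Add to basket"))                                    # translated if a catalog exists
print(translation.ngettext("%d item", "%d items", 3) % 3)    # plural-aware lookup
```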
I think this comment is a succinct view of the problem. Unfortunately there does not seem to be any easy solution to this problem.
Part of the problem is most interviewers out there do not check what the interviewee is good at. Instead they check whether the candidate knows what I know. If not, does (s)he know the "flavor of the year" technology which I want to be used at my company?
I agree with the gist of the article in the sense that these patterns (and software architecture patterns in general) are often misapplied.
The first two items in the list at the start of the article have come up where I work recently, in fact, with no good technical justification behind them.
This in particular is something I wish developers (I've stopped calling these folks "engineers"--they aren't, and likely never will be) would read and internalize:
> It’s important to understand why Google take decisions in the way they do. However, most of their problems don’t apply to anyone else, and therefore many of the solutions may or may not be appropriate.
Substitute Netflix, Amazon, Twitter, LinkedIn, or any other "big name" company that operates at very large scale in for "Google".
The importance of the "why" resides in the ability to compare the business needs to the reasons Google (et al) did what they did: if the business need isn't similar, it almost certainly isn't necessary to do it "the Google way".
The author is also right about the cause: it's developers always chasing the newest toys. There is a reinforcement effect at work, because these same developers conduct interviews, so there's this seemingly never-ending treadmill of "keeping up with the Joneses".
In my experience, software architectural failure is endemic across multiple organizations and not an epidemic. The subtle difference is that many companies lack the discipline or the desire to vet new technologies or to review existing technologies to ensure that they still fulfill the needs of the organization.
A hospital, for example, isn't going to just up and change all their preop procedures just because a doctor went to a conference and learned about some new technique.
The software world suffers many organizational failures in this regard, which is why I say it’s an endemic problem. I’ve been in both kinds of shops, the kind where tech was scored according to its ability to fit an organizational need, and the kind where developers recode the whole front end in the “js flavor of the year” because it seemed cool.
There are for sure engineers just as much as there are developers. Engineers are the stewards that don't implement a product because "web scale". They implement because they understand the problems of the organization they're in. We have to learn to better spot this shit and shut it down. I follow one rule right now that stops most of it in its tracks.
If a developer tells you they want X technology because it can process 1.2 jiggahertz requests per second, they have no idea what problem it solves, unless your organization is facing performance issues.
A good developer on the verge of becoming engineer grade will tell you that they want X technology because trying to perform Y process is painful and unmaintainable in the current system, and that if we implement with X, it'll reduce labor spent on this problem and deliver Z value to the company.
Architecture isn’t failing, the folks with the hammers aren’t following the blueprints correctly.
A good developer will show how implementing it in X will lead to the gains described, absolutely. The problem is much of the time "implement with X" doesn't actually lead to those gains, but there's some cargo culting or other parrot-a-blog-post justification for it.
I think sometimes developers just have trouble conceptualizing ‘performance tiers’ at the low to mid end.
One example is thinking that horizontal scalability of the persistence layer is necessary before you get to even perhaps Series C/D scale. For instance, I had a serious epiphany when I read that Braintree managed to vertically scale a two node (“HA”) Postgres setup to transaction volume in the millions and a massive valuation. Stack Overflow has had a similarly lean footprint for much of its history.
Most developers usually need to focus a lot more on the 20x more relevant task of product engineering. Unfortunately, a product focus usually means pretty boring or repetitive coding work that doesn’t widen your skillset as an engineer. These competing incentives are how complexity gets introduced, how more code than necessary gets written, and how poor/premature technology choices are made.
The right attitude to have in a small to medium size tech company (and particularly one that doesn’t have product-market fit) is that every LoC introduces marginal risk, every additional package is another dependency that needs to be grokked (so it better be delivering serious ROI), agility is paramount, and innovation should only be happening where it’s necessary to deliver on the company’s unique selling point.
But the most important thing is agility; and moving fast (contrary to what the industry generally believes imo) is less a function of rockstar talent than of simply making the right choices, having the right processes, and persistently reexamining and refining things to go even faster. It’s just a completely different mindset from the norm at a large tech company. At a really early stage, you might even be better off hiring a prodigious hackathon talent than your average Google engineer.
Some are engineers. But when you don't flit like a magpie to the new shiny you become a codger or geezer. Managers can't tell the difference, gotta just buzzword pad your resume to get a job.
There's something to be said for balancing conservatism, because there is far too much risk of just becoming an obstacle in the way of making any progress.
I've had to deal with a situation recently where they've become so conservative that they're afraid to even consider embracing tests, build tools (i.e., use ant/maven/gradle instead of Eclipse as the sanctioned build tool), dependency management (i.e., not just committing jars to source and never documenting anything), etc.
The guy in charge had been burned too many times, but part of that was his own fault for not providing sufficient oversight and advice to junior devs, and frankly I'd say a degree of short-sightedness in the general approach.
Nothing like the lead engineer fiddling with the tooling and the tech stack every day and not mentioning it to anyone, to sap productivity. I started to anticipate spending the first couple of hours every day tweaking to my development environment and dependencies to resolve today's mystical crash.
And I felt too junior to say anything. I figured that if the grown-ups want the new shiny, it must be worth something. In actual fact, our real-time, React Native, ES7-experimental-flag app would have been just as easily built in PHP.
I’ve worked in that sort of environment before. Watch out for senior staff obsessively following certain people promoting bandwagons built on lies, conferences and marketing. Next thing you know you hit a wall that’s metres thick and high.
It's not knee-jerk conservatism. Literally in his comment:
> Prove it works. Qualify it properly.
I'm the same way. I refuse to use whatever's hot tomorrow unless it's been qualified and tested. That's not knee-jerk conservatism, that's just smart product development.
I have had many bad experiences where unreliable software has fucked production even if it is the current fad and has an arena of consultants and conferences behind it. If you don’t independently test and understand the software and just chuck it in and see what happens, which is how some people think it should be done, then you’re burning your business badly.
> "The author is also right about the cause: it's developers always chasing the newest toys."
That's certainly true. But undefined and wishy-washy business needs don't help either. It's easy to look at something finished and think "that's not right, that's not what I would have done."
Sure, there's negligence or just poor decisions. But often there are good reasons for wonky architecture, and often those reasons have little to do with technology and the people who implemented it.
When throughput is a few GB (< 10) of data per day, most of it coming in a large batch that doesn't have any real time requirements for processing and someone starts blathering about Kafka and Event Sourcing that is not a wishy-washy business need, that's unvarnished resume driven development.
True. I think what I'm implying is there is a co-dependency going on. That is, in light of the lack of a clear (biz needs based) path a given team / individual is going to do what they feel is "right." That decision is going to have a natural bias. If for no other reason, people learn that ultimately the only person looking out for them is them (especially when the business isn't even looking out for itself).
"Here are three examples of people driving cars off the road into a tree" → "Transportation is failing"
This isn't software architecture, this is potential/alleged mis-application of three very specific patterns (I am not sure I'd even call them architectural styles).
Well, I think the author is alleging (and it concurs with my experience) that at a number of companies patterns are misapplied much (most?) of the time [and the more complex the pattern, the less likely it's necessary].
So if the majority of drivers hit trees then yes, transportation would be failing.
That would be just as invalid an overgeneralization. It seems more likely that three drivers are failing. It is less likely that it could be three makes of cars. Generalizing to all cars is ridiculous, generalizing to all of transportation off the charts.
Maybe these patterns are being misapplied. Most likely, that's because people misapply stuff all the time. (Paraphrasing Sturgeon: sure 90% of software is crap. That's because 90% of everything is crap). Now it is also possible that these particular architectural patterns are prone to misapplication, though there is little evidence of that. Maybe there is a general tendency to apply over complex architectures (see architecture-astronauts), but even that doesn't mean that "architecture has failed".
At best: over-complex up-front architecting is maybe not such a good idea. But any competent architect will tell you that. Minimal, evolutionary architecture is just as much (and in some senses more) "architecture".
Every software system has an architecture. There is good architecture, bad architecture, big-ball-of-mud architecture etc. Citing examples of bad or badly applied architecture and claiming "architecture has failed" is a category mistake.
I'd also argue that the cost of misapplying some design patterns is pretty low. I've seen some overengineered solutions before, but they are rarely the cause of project failure.
I do not understand the addiction to using the "framework of the month".
It's also hard to keep a straight face when someone calls themselves an engineer, but makes absurd claims like "x is more productive" without ever presenting proof or measuring such a quantifiable claim.
The problem is that resume driven development gets rewarded. If you run some old Cold Fusion site that works perfectly with low maintenance costs you will get no respect when a new project comes up or when changing jobs. On the other hand if you convert that Cold Fusion site to nodejs, Cassandra and 19 microservices you are valuable on the job market. Even though you have replaced something simple that works with a complex monstrosity.
That's the stupid nature of our industry. Everything is buzzword driven.
I think a lot of that is a gripe with the Windows platform itself and the "enterprisey" culture that surrounded Windows. It's pretty decent now but carries a really really ugly legacy in terms of performance, security, technical debt, and bad managers that insisted on $MS everything.
I agree with all the above points, and would like to add a few more.
There are incredibly good aspects to .NET, and relative to the competition it was even stronger in the past than it is now.
However it's been a victim of its own success, both in attracting a lot of mediocre talent recently because of that success, and in growing more complex over its 15-year lifespan so far. The platform complexity has increased significantly with PCLs, .NET Core, and moving from a dependable 18-month release cycle to a fragmented multi-channel release cycle.. ostensibly to keep up with the competition, but really just getting trapped into a classic prisoner's dilemma of a race to the bottom of fragmentation and dependency hell. I think they should have left that particular trick to the Javascript framework of the month ..
Another issue that holds .NET back is licensing: startups don't want to worry about the licensing in case they scale. .NET Core is improving this though, and there is a huge amount of open source .NET code available.
Despite these drawbacks it's still an awesome platform, with possibly the best general-purpose language available in C#. But it's not cool, and yes, a lot of that is cargo-cult-based misunderstanding of its abilities.
I do not understand when someone calls themselves an engineer and sets an astronomical burden of proof for any practice they don't currently follow, but are totally uncritical about the practices they already follow.
The argument for doing things the current way should be at least as good as the one you'd require to permit a change.
(I am on the backend where things are maybe not so crazy as JS framework world).
Think you meant to say Javascript front-end world, and yes I'd agree then.
Yet many of its patterns are rehashings of previous front-end patterns - it's very similar to e.g. Winforms in a lot of ways.
Conflating the Javascript front end with front ends in general is just an example of the myopic, short-sighted focus of the overall Javascript community, which caused it to miss out on the pre-existing body of knowledge in other languages for as long as it did..
You know what you can do with an immutable log? You can rewrite it. Then ditch the old version. It’s like git. You can rebase. This blog seems to confuse people being dumb or in the process of learning with bad architectural choices. Just because the person this blogger spoke to couldn’t come up with an answer, doesn’t mean ES is bad or wrong. ES is liberating.
Posts like this advocate a mentality that leads to people using Django or Rails for apps where these tools are not long-term good fits. It’s better, I think, if people spent a bit of their lives learning how to build architectures beyond tools like Django and Rails, because these tools are actually really limiting, not just in terms of theoretical things like scaling but practically in terms of what they can express. There’s a very common ethos that if people just focused on shipping they would somehow magically ship but that’s not how software works. You can’t just will shipping. You need to know what you’re doing. And we advocate every job be a rush job, for the sake of what? Shipping something that likely has no chance anyway?
This blog also talks about CQRS and ES like they're more complicated than "traditional" approaches. But that's only true if you don't know how to work them. Once you learn how to have a database that's inside out, you never want to go back. Once you use kubernetes, you never want to go back. CQRS/ES/kubernetes are the things I intuitively wanted since I was a kid learning how to build things. I couldn't have explained then what made it hard, but it was the absence of these tools and approaches which make managing complexity much easier.
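For readers who haven't worked with it, the core mechanic of CQRS/ES is small enough to sketch: commands validate and append immutable events, and read models are just folds over the event stream. A toy illustration (names invented, no particular framework implied):

```python
from dataclasses import dataclass
from typing import Dict, List

# --- Events: immutable facts, only ever appended ---
@dataclass(frozen=True)
class AccountOpened:
    account_id: str

@dataclass(frozen=True)
class MoneyDeposited:
    account_id: str
    amount: int

event_log: List[object] = []

# --- Command side: validate, then append an event ---
def deposit(account_id: str, amount: int) -> None:
    if amount <= 0:
        raise ValueError("deposit must be positive")
    event_log.append(MoneyDeposited(account_id, amount))

# --- Query side: a read model projected from the log ---
def balance_projection() -> Dict[str, int]:
    balances: Dict[str, int] = {}
    for event in event_log:
        if isinstance(event, AccountOpened):
            balances[event.account_id] = 0
        elif isinstance(event, MoneyDeposited):
            balances[event.account_id] += event.amount
    return balances

event_log.append(AccountOpened("acct-1"))
deposit("acct-1", 100)
print(balance_projection())  # {'acct-1': 100}
```

Whether that inversion is "liberating" or just extra moving parts is exactly the judgement call the article is about.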
> This blog seems to confuse people being dumb or in the process of learning with bad architectural choices.
This blog describes a number of the infrastructure teams at companies I've worked for in the past decade. These teams have Lead Engineers and Architects with lots of education and experience. They aren't dumb. They just also aren't really engineers. It's not their fault, really; it's the practice of the industry in general.
The thing is, the majority of people in the industry fall into that category (not of being dumb, but of being inexperienced, and often kind of pretending they aren't).
The reality is many/majority of developers are actually beginners! It's simple logic once you think about it..
Consider the huge range of skills required (language, libraries, os, networking, patterns & techniques, industry advancements, then general professional skills such as organisation and communication and time management, then economic understanding to apply these to the business domain etc, the list goes on and on). It basically takes most people 5-10 years to actually grasp the skillset, and then double that again to master it.
Combine this with the growth of the IT industry which means greater numbers joined recently and fewer numbers of more experienced people started decades ago, even if they are still working. Oh and chasing the latest cool thing, ageism, and NIH ..
Overall the ratios are terrible with the majority of people not having the level of mastery and perspective to deliver at a high level on all of these skills simultaneously .. so their 'architectures' are actually controlled A/B tests if they have an engineering mindset, or just fashion driven development and cargo culting if they don't..
Sophomore developers love to use every single tool and pattern as much as possible. I am not excluding myself as I clearly remember doing this and still catch myself over engineering.
My favorite is what such devs will do with C++. I once simplified a C++ boost code base using binding, functional patterns, etc. down from dozens of files and thousands of lines to one class with less than 1000 lines of straightforward code. I am not exaggerating. Java design patterns cruft is hilarious too.
I think it comes from a not entirely bad drive to explore. Problem is when it goes so haywire that it causes unmaintainable bloatware that consumes hundreds of times more resources than it needs.
Edit: three other observations.
I think Golang is deliberately engineered to limit this by offering fewer language features and discouraging towers of Babel.
Overengineering is death in dynamic languages since without strong typing your tower of Babel becomes a bug minefield.
Finally, wow has Amazon ever hit gold by monetising this. They offer composable patterns as a service and market them like Java patterns were marketed.
It's not typing but state management that causes the tower to collapse. Typing solves minor inconvenient bugs that are quickly fixed. Poor state management slows development cycles with inflexible data structures and causes really hard to trace bugs.
Color me "pernicious", but who has ever had a heartwarming experience with ORMs? (I'm not talking about the new breed of micro-ORMs that work very well with CQRS, but the big old monolithic frameworks).
CQRS is a breath of fresh air after years of dealing with ORM overreach and inflexibility.
I’ve given up on the entire concept. I’ve used several ORM frameworks, and when interfacing with real world data models (not constructing one from scratch to fit the ORM) I’ve found that writing raw queries and some marshalling code is usually faster and easier to maintain than using the ORM. Plus, the ORM-based models are usually significantly harder to test.
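For what it's worth, the raw-query-plus-marshalling approach described above can stay very small. A sketch using only the standard library, with an invented schema:

```python
import sqlite3
from dataclasses import dataclass
from typing import List

@dataclass
class Order:
    id: int
    customer: str
    total_cents: int

def fetch_open_orders(conn: sqlite3.Connection, customer: str) -> List[Order]:
    """Hand-written SQL plus a few lines of marshalling instead of an ORM mapping."""
    rows = conn.execute(
        "SELECT id, customer, total_cents FROM orders "
        "WHERE customer = ? AND status = 'open' ORDER BY id",
        (customer,),
    ).fetchall()
    return [Order(*row) for row in rows]
```

Testing is then just a matter of loading a small fixture database and asserting on plain dataclasses.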
ORMs promised to make things simpler, and were considered 'best practice' for a long time. I used them myself for several years, mainly NHibernate and Entity Framework.
But on every project I'd spend a lot of time fighting with it, trying to bend it to my will. It's invariably difficult to create mappings with any complexity, such as composite keys. And the queries they generate are usually inefficient monstrosities.
I discovered CQRS and micro-ORMs a few years ago, and with few exceptions, I haven't looked back!
Don't forget the overhead that comes with the ORM framework as well.
I happen to be of the opinion that ORMs are good for prototyping and very small data models and one-off type projects but not for big leagues, production ready code. For that I would do as you describe and build my own data connector.
Django ORM and SQLAlchemy are both good. They're easy enough to break out of when you want to handcraft more obscure SQL and at the same time they save a massive amount of boilerplate.
Some languages - particularly languages with overly rigid and static type systems - are ill suited for building ORMs, so the ORMs in those languages will probably always suck.
I think attachment to those languages is what drives some people to think that "ORMs always suck".
Well-designed ORMs like SQLAlchemy (and others) are a pleasure to use and make your life so much easier without overreaching and taking over your domain models (in contrast to more opinionated frameworks like Active Record).
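A small sketch of that "easy to break out of" point, assuming SQLAlchemy 1.4+ and an invented User model: ordinary ORM usage alongside a hand-written query via the documented text() escape hatch:

```python
from sqlalchemy import Column, Integer, String, create_engine, text
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    # Ordinary ORM usage for the boring 95%...
    session.add(User(name="ada"))
    session.commit()

    # ...and hand-crafted SQL when the query gets obscure.
    rows = session.execute(
        text("SELECT name, length(name) AS name_len FROM users ORDER BY name_len DESC")
    ).all()
    print(rows)
```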
I think you'd probably include Rails/ActiveRecord in the big old monolithic framework category.
So...yeah?
I occasionally dip a toe outside to other frameworks and setups, and I always end up having to pull more and more pieces together, to the point that I get frustrated and just use Rails.
I imagine that everyone's experience is vastly different given the scopes and problems that you're trying to address. I write a lot of little one-off business productivity apps, SAAS backends, etc. YMMV.
Well I've had quite a few heartwarming experiences with ORMs, when used in a specific way.
I've found the right ORM a joy to use on small-medium size projects, with a simple data layer using a database driven approach (ie not code-first).
The key has been to be able to blow away the ORM code and recreate it at will with no more than a few seconds work (an application of the Factory pattern..).
With this approach the ORM got out of the way and didn't cause friction. Then combine the generated classes with functional LINQ/map reduce to enable complex, expressive querying code in the first class language of choice.
As this is a controversial topic for some, I want to make a few disclaimers ..
- This has worked very well when I've kept the database simple, with logic in the application not the db. (Constraints in Db though).
- Extending the DAL domain can be done through partial classes, that are kept when the ORM code is blown away and recreated.
- As a bonus of this style, the database is done with standard sql, and could be swapped for another relational one easily.
- Upgrade scripts need to be created for database migrations, tooling is available to autogenerate these from snapshots of development databases, and manually extended where needed.
- I treated the Db as a Db, not some magic object oriented thing, and handled the (fairly minor) impedance mismatch in application code.
- You can always hand-optimise some sql if needed, or even call direct bulk update APIs if you need top performance in specific parts of the application .. I used the ORM as a unit of work pattern, so it's possible to do other units of work in other data access technologies when optimising for something other than developer ease.
On the other hand I've seen configuration heavy ORMs suck a lot of time and cause heartache ..
Of course, ymmv, but using the approach outlined above it has literally been a breeze, and a heartwarming experience for me and other devs once they get used to it, on dozens of projects done this way.
For starting up I like ORMs a lot. You just need to know when to stop using it or limiting its use. From my experience using an ORM with a few straight SQL queries when needed works pretty well.
I thought this was a really insightful portion of the article:
This is the problem being ahead of the curve – the definition of “success” (it works great, it’s reliable, it’s performant, we don’t need to think about it) looks a hell of a lot like the definition of “legacy”.
If you're able to make the app and do it well, it's boring.
I place the blame on technical leaders like myself. For those in tech who are not working at Facebook/Google/Amazon, we’re simply not talking enough about what systems at smaller enterprises look like. We’re not talking about what is successful, what works well, and what patterns others might like to copy.
A lot of technical write-ups focus on scaling, performance and large-scale systems. It’s definitely interesting to see what problems Netflix have, and how they respond to them. It’s important to understand why Google take decisions in the way they do. However, most of their problems don’t apply to anyone else, and therefore many of the solutions may or may not be appropriate.
--- end quote ---
Too many devs, and startups, and companies rush to every new thing the moment it appears on Facebook's/Google's/Netflix's blog
It's the same mentality behind cargo cults. A is successful, A does X, therefore doing X will lead to success. It's what humans do when they don't really understand why A is successful.
I think when architecture fails, it's a failure to understand the problem you're trying to solve and the value an architecture gives you.
Some problems are such that you don't need auditing, you don't need to roll back time, and you don't need multi-database coordination. Then you don't need event sourcing.
CQRS is a more generic concept, but you also often don't need it. That's because an RDBMS does CQRS for you. You can design tables in a normalized way, to maximise write, update and delete guarantees and consistency. And then you can do whatever query projection you need. All view aggregation is magically handled by the RDBMS for you. You only need to manually implement CQRS when the RDBMS limits your scale or performance.
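A toy illustration of that "the RDBMS does CQRS for you" point: normalized tables on the write side and a plain SQL view as the read-side projection (schema invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Write side: normalized tables optimised for inserts and updates.
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        total_cents INTEGER NOT NULL
    );

    -- Read side: a query projection the database maintains for you.
    CREATE VIEW customer_order_summary AS
    SELECT c.name,
           COUNT(o.id) AS order_count,
           COALESCE(SUM(o.total_cents), 0) AS revenue_cents
    FROM customers c LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id;
""")

conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders (customer_id, total_cents) VALUES (1, 2500)")
print(conn.execute("SELECT * FROM customer_order_summary").fetchall())
```

Hand-rolled CQRS only earns its keep once this stops being fast enough.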
Mediocre software pervades the industry because mediocre developers are the ones writing the blog posts that get shared around. Mediocre developers are the ones who are so loud in defending their choices (and criticizing others for not making the same choices) because when you don't completely understand something, the way to win is to be the loudest. Mediocre developers do RDD, Resume Driven Development. And businesses don't help the matter because they don't know that a talented developer can come up to speed on any tech that they're using reasonably quickly so they often hire the RDD developers to "fill a need".
I've never liked the term over-engineering. People seem to use it as an excuse for premature pessimization.
The cost of not forward thinking about architecture is that companies and teams can wallow in a land of low productivity while they try to evolve an architecture that wasn’t thought through.
Obviously you need to be aware of your market - for example, an app for local real estate doesn’t have to scale to billions of users per city - but there are common patterns that will scale up well for almost all companies.
I agree whole-heartedly. Every place I've worked, I get the side eye whenever I talk about how the patterns being used are dangerous, and will only get us so far before we have to re-write everything in a panic to handle a new use-case for a customer.
It's something that took me a long time in my career to understand: businesses care about profits, and since we are all salaried, they don't care if we have to work ridiculous hours patching a buggy, shit system. They just want to get a polished turd into the hands of customers ASAP.
On one hand, I get it. Why build the worlds greatest system, and design it so well that it will never fail, and it will scale effortlessly, and it will be easy to add new features indefinitely? That will likely take 3 or 4 times longer to bring to market, and if the value proposition doesn't add up, it's not going to happen. So I guess we have to leave the good architecture to the firms where it is critical: NASA, medical devices, avionics, etc.
Businesses do often (usually) care how many hours you spend patching "buggy, shit systems". Because programmers are expensive. Obviously.
They do not care about meeting arbitrary, irrelevant standards, like extensibility for features they will never need. Because that's expensive, obviously.
Meeting arbitrary unneeded goals even takes a toll on actually relevant goals, like being able to adapt software quickly. I don't understand how you think this would not be a worthwhile goal.
> design it so well that it will never fail
Too much black and white thinking. Most problem domains don't need "will never fail", but are ok with "fails only seldom". And meeting "will never fail" is extremely expensive, obviously.
> So I guess we have to leave the good architecture to the firms where it is critical: NASA, medical devices, avionics, etc.
A good architecture is one that helps towards actual goals. And that's not extreme extensibility or extreme correctness, in most cases.
For a concrete example, think about a game engine. It will not meet most arbitrary goals you could make up (it will fail, sometimes, and it can't make your coffee). But still I hope you can agree that there are many really well architected game engines.
Most businesses don't (and shouldn't) care about software for software's sake. But they do care about their software delivering value, and product and sales teams are very aware of the value that good software provides - and the costs of bad software - since they're the ones who have to drop feature ideas or turn away clients.
Ironically, I've found that selling good software engineering practices to engineers can be much harder than selling it to other teams. Good software also doesn't take longer to get to market - I think on anything but the shortest of terms, building high quality software pays off immensely. You're saying two contradictory things here: that we can "design it so well that it will never fail, and it will scale effortlessly, and it will be easy to add new features indefinitely" but also that it takes 3-4 times longer to bring to market. If I can effortlessly build features, why do they take longer to get to market? I think a lot of software engineers believe that there is a trade-off between quality and efficiency, but I think that's the opposite of the truth: if you trade off quality, you are trading off long-term efficiency as well. Likewise an investment in quality is an investment in efficiency.
There's lot of software written in non-tech firms to support the company's core business. The whole finance industry is like that for example. In my experience, they care a lot about quality and architecture (for example, in some of the projects I've seen there's more devops or automatic testers than developers).
I have made a ton of mistakes for sure. But at almost every place I've worked, under-engineering has been a much more pervasive problem than over-engineering. And so over-engineering is often used as an excuse not to do engineering to begin with.
Building a Rube Goldberg machine to solve a simple problem, or reinventing the wheel poorly isn't over-engineering -- it's just bad engineering.
That said, a term I do agree with is YAGNI, which I think is much more concrete and well defined.
The description of the legacy == successful application sounds like that J2EE app that every company has that isn't being patched and just may end up getting you on the front page of the WSJ once it gets breached.
How many people are updating the hundreds of packages their NPM apps depend on? General advice is to pin them so they don't break your app daily, which is another way of saying don't update them..
Those packages are also of hugely varying quality, in a language that leaks abstractions for breakfast, and with minimal integration testing between all the versions of all the packages .. Platform frameworks such as J2EE and .NET are much safer by my analysis, although .NET has recently moved to a similar package potpourri.
Security in the real world is about risk analysis. In broad form, if the risk of a breach and bad publicity times the cost of said breach is less than the cost of redesigning a working tool (or implementing any other fix), then you don't do it.
And I'm trying to point out that those issues are largely irrelevant in the context of how the world actually works.
You can never completely eliminate risk, only reduce it, and that reduction has costs associated with it (whether you choose to consider costs only as internal to an organization or in a broader context doesn't matter), and many times, the cost of reducing the risk outweighs any gains. It annoys me that people see "security" as a thing worth doing for its own sake. It isn't. Like almost everything else in life, it is a matter of balance.
Besides, the cost of risk isn't going up. Big breaches lately off the top of my head: Sony PSN (several times), Equifax, Target, and several hospitals and schools. To my knowledge, nearly all organizations breached are still in business and often not significantly less profitable than they were before. By the internal cost metric of the organization, then, the cost was minimal. But even considering the larger ramifications on a societal scale, what has really been affected? There's been a lot of talk, and I'm sure there's been some identity theft, but the economy remains largely unaffected and so do our daily lives.
But I agree the cost is often not significant to companies, yet. Legislation and company costs are going to ramp up though, the pooch is getting screwed too often, and publicly ..
Equifax CEO, CIO, CSO all gone, and may be done for insider trading ..
The insider trading is orthogonal to the issue of security policy though. Companies shed C levels all the time for a wide variety of reasons, it isn't that big a cost on the whole.
The other thing to consider is the impact it can have on someone's life: even if 99.5% of Americans aren't affected in any given year and "only" 17 million were, the impact on any one of those individuals' lives can be huge.
PS: I just used America because those were the first stats I found. The first world is going to be a more prevalent target as there is more money to steal ..
Pretty much any policy decision will have a huge impact on some individual's life, but that has nothing to do with whether or not we should enact policy does it?
If I want to end the mortgage tax deduction, that's going to negatively affect every homeowner (including me, I might add), but that doesn't mean we shouldn't do it.
I’ve been talking about the problems with tech-driven architecture for years.
I would simplify this entire discussion by saying…
“When you create distance from code to business processes, the less effective and maintainable your architecture and software development becomes.”
I have always argued that your stakeholders should be able to read your code and understand your architecture as an automated view of their world.
Any abstraction away from that perspective is self-serving, tech-driven, and bad for the business.
Agile project management isn’t about unit tests and CICD. It’s about removing the gap between the business and its software development.
I appreciate SOLID principles and use various patterns, but my number one rule is that we should never build something that doesn't directly equate to a business process.
ORMs hide business processes. Complex abstract frameworks hide business processes.
It takes discipline, and a certain approach, but it's actually achievable - I've done it, and taught many others to do it. Usually takes six months to a year of regular coaching to train it for most experienced developers.
This is in C# and Python, but it's generalisable to most high level languages. Harder with lower level languages because of the higher noise to signal ratio.
I've been able to take functions I wrote and work through them with non-technical stakeholders, successfully, for them to verify the behaviour matches their understanding of the business logic. I only did that occasionally; the major value is more in the ease of understanding for the developers.
The key is the code must read quite similar to English, and be written in an expressive style conveying intent.
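As a small, made-up illustration of that style (an invented shipping rule, not from any real codebase):

```python
from dataclasses import dataclass

@dataclass
class Order:
    total_eur: float
    has_oversized_items: bool
    destination_country: str

EU_COUNTRIES = {"DE", "FR", "NL"}  # truncated for the example

def shipping_is_free(order: Order) -> bool:
    # Business rule (invented for illustration): orders over 50 EUR ship free,
    # unless they contain oversized items or ship outside the EU.
    return (
        order.total_eur > 50
        and not order.has_oversized_items
        and order.destination_country in EU_COUNTRIES
    )

assert shipping_is_free(Order(total_eur=80, has_oversized_items=False, destination_country="DE"))
```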
I've written executable user stories that stakeholders can read, which is a good way for them to verify that the implemented business logic matches expected behavior, and is significantly easier to read than turing complete code.
I'm honestly not sure if there's value in going beyond that though. The noise-to-signal ratio in any Turing-complete language is too high even when you try to write code as clearly as possible (which I do).
Implementation details aren't all that interesting to stakeholders and understanding code even in languages that emphasize readability requires tons of implicit knowledge.
This is a technique I've used a lot, A/B testing things to understand how well they work and get detailed understanding of strengths and weaknesses in action.
But for the love of reason don't build two identical production systems with different hidden internals and ditch one .. that's ridiculous. Maybe it was exaggeration as hyperbole?
Instead, the technique I use is: when I'm building a system and a choice comes up where two approaches would work reasonably well, and I've used one and not the other, then I use the approach I hadn't used previously. Just the one approach in that system (as long as it works reasonably well, as expected). This means I end up using both approaches (in different systems, at different times) and get nearly for free the synergistic benefit of real-world A/B testing the two approaches over time.
The problem is I always end up writing a state container that's similar to redux at some point if I don't use it. For simple things, you don't need a state container, but where is the line between simple and complex? It's hard to tell, and since redux is so low overhead, why not err on the side of using it now if you think there's a chance you'll want to later?
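That "end up writing one anyway" experience is believable because the core of a Redux-style store is tiny; here's a language-agnostic sketch of the pattern (written in Python purely for illustration, not tied to any actual Redux API):

```python
from typing import Any, Callable, Dict, List

Action = Dict[str, Any]
State = Dict[str, Any]
Reducer = Callable[[State, Action], State]

class Store:
    """A minimal Redux-style state container: state only changes via dispatch."""
    def __init__(self, reducer: Reducer, initial_state: State):
        self._reducer = reducer
        self.state = initial_state
        self._subscribers: List[Callable[[State], None]] = []

    def dispatch(self, action: Action) -> None:
        self.state = self._reducer(self.state, action)
        for notify in self._subscribers:
            notify(self.state)

    def subscribe(self, callback: Callable[[State], None]) -> None:
        self._subscribers.append(callback)

def counter_reducer(state: State, action: Action) -> State:
    if action["type"] == "increment":
        return {**state, "count": state["count"] + 1}
    return state

store = Store(counter_reducer, {"count": 0})
store.subscribe(lambda s: print("state is now", s))
store.dispatch({"type": "increment"})
```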
The key phrase here is "end up": if you project too much into the future, then it is premature optimization. "I might end up becoming the next Google, let's write our back-end in C++."
Have a go with VueJS - it's even nicer than React, kind of elegant, surprising though it is to include that word in the same sentence with Javascript :)
Redux (and every design pattern) is something most ppl don't need, but they apply mindlessly. At least Redux abstracts the application logic into a central state and thus makes it easier to replace Angular in favor of React or Vue.
True, mindlessly applying patterns is silly, but not applying any design pattern means you will have a random software design (aka. spaghetti code) which is way worse.
Funny, I started writing a blog post about this very subject today.
I think the common problem with people selecting poor patterns for a given context is that they don't pay attention to the context. It's more "I've got this pattern, let's use it" rather than "I have a whole toolkit, and I can identify exactly where each one needs to be used". It's such a problem I even came up with a term to define that specific anti-pattern: 'MissingContext'.
Even as complex as my application gets ... a simple local instance variable as state still works fine most of the time. In fact, I find passing in callbacks works fine for most web apps, I dare say, unless maybe you're writing some sort of editor for the browser.
I come from an object-oriented background, therefore I find the Redux way (functional programming) hideous.
> “How are you planning to handle GDPR requirements and removal of data?” – turns out the answer is often “Er – we haven’t thought about that.” Cue a sad face when I tell them that if they don’t modify their immutable log they’re automatically out of compliance.
Can't you remove data by iterating through the log in a separate process, and remove data as you go? It will be slower, for sure, but is speed a requirement when removing data?
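In principle yes; that's essentially log rewriting: stream the old log into a new one, dropping or redacting the subject's events along the way, then switch over. A minimal sketch of the idea (the event shape is invented; real event stores have their own copy-and-replace or tombstoning mechanisms):

```python
def rewrite_log_without_subject(events, subject_id):
    """Produce a new event stream with one person's data removed or redacted."""
    for event in events:
        if event.get("subject_id") != subject_id:
            yield event                      # untouched
        elif event["type"] in ("EmailChanged", "AddressChanged"):
            continue                         # drop purely personal events
        else:
            # Keep the business fact, strip the personal data.
            yield dict(event, payload="<removed under GDPR erasure request>")

old_log = [
    {"type": "OrderPlaced", "subject_id": "user-42", "payload": "3 widgets"},
    {"type": "EmailChanged", "subject_id": "user-42", "payload": "a@example.com"},
    {"type": "OrderPlaced", "subject_id": "user-7", "payload": "1 widget"},
]
new_log = list(rewrite_log_without_subject(old_log, "user-42"))
print(new_log)
```

Crypto-shredding (encrypt per-subject fields and delete the key) is the other common answer, but either way "immutable" comes with an asterisk.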
There are some really great gains being made ATM. Functional-as-an-idiom, applied in more than just languages... Docker, Nix, stateless application servers, servers as cattle, Redux, event sourcing, micro-architectures, serverless. I think great strides are in motion, and there are upheavals and missteps, but I think the overall trajectory is genuine improvement.
Very happy to read this article, which materializes the elephant in my mind. My tl;dr for it is:
* Technical vision (and therefore core tech decisions) should match the business vision and be justifiable, manifested by making architecture choices driven by thinking about constraints and working at a high level as best one can to get around them, rather than jumping straight to eagerly architecting solutions to the non-existent problems only the likes of massive tech companies face.
Some suggestions made include:
* Try to re-use for speed, rather than roll-out-your-own,
* Design for adaptability
I think I agree. It sounds like the right thing to do. I hope the author follows up with multiple case studies of varying degrees of how this is happening in industry, and what he thinks could have been the right approach.
I also think that this might be just one side of the coin. I am really interested in hearing opinions that contrast with his.
What does framework of the month mean? As far as I know, big JS frameworks aren't popping up all the time. Ok, so a company switches to React. What do they switch to after? Vue? CycleJS? I don't think companies do this.
I'm pretty sure most of the churn in the front-end world is about switching to React or Angular, with more overseas firms using Vue.
And btw React is now on major version 16 (or is it 17 yet?) .. but not many React projects are up to date, all those dependencies you know
Framework of the month has multiple, synergistic meanings:
- Which one are you learning this month?
- Which one is new this month?
- Which version (of which one) is new this month?
- The tendency of these things towards write-only software (all those dependencies, you know), so you write a new one each month, in a new version, or in a new framework.
A lot of it has to do with the "hype" cycle going on. If NY and some CIO magazine says "blockchain is the next best thing", everyone dives head first into it.
Then there are a lot of online courses which promise to make you an "expert" with a couple of hours of training. What people actually get is a very basic level of understanding. This step plays a particularly large role.
The next step is to build something using the just-acquired knowledge. And once someone reaches this stage, it becomes a case of hammer and nail: to someone with a hammer, everything looks like a nail. Solutions which barely meet the problem criteria are taken up because "blockchain" (or AI/ML).
Anecdote: we had someone who, after completing Andrew Ng's famous Coursera course (i.e. a novice in AI/ML), proposed to build an ML library for data cleaning. The management was very happy with the revolutionary idea; "at least he is thinking" was what they said.
There are some interesting conflicts that architects must resolve. Technology-driven design is not great, but I feel things like CQRS fundamentally alter business expectations. Think about read-after-write use cases that customers are accustomed to because we used an RDBMS. If now we are forced to rearchitect because of scale and complexity, we have to change the product behavior. Try selling that to PMs and customers.
I believe systems should be built from the ground up for scalability, but that does not necessarily mean the implementation has to be complex.
Ultimately, more commoditization of building blocks will make a lot of such decisions a moot point
Also, a whole load of numpties in higher management don't see the value of proper architecture and how it will prevent bad things happening in the future. Because the architecture is invisible to them, they fail to see value.
This isn't higher management's fault in any meaningful sense. Maybe they should educate themselves so they can stop hiring fad chasers. That's about the extent of their responsibility.
Things like what this blog describes exist because the technical people hired are engaged in one long, giant act of bike shedding and resume padding. They're supposed to be educated and experienced enough to know better. They often do, but because the fad chasers are also the interviewers even the ones who know better go along with the silliness.
There is a difference between good architecture and fad chasing. The problem is higher management never buys into it, so it never gets implemented correctly and hence becomes a fad in their eyes. A perfect example of this in software is agile.
I think he might be being a bit too hard on himself and the profession. The challenge is that fundamental problems aren’t always universally solved even though it seems they should be. “Hey this is a database backed order entry system - should be a solved problem right!” Except then you dig in and see the nuance. And it’s solvable ... but it is ugly.
So, perhaps we should buy an order/demand management system rather than build it. This too has its tradeoffs. SAP is in business for this very reason. There is a massive complexity and capital trade-off when buying a system; it is never just "buy", it is buy, install, configure and integrate, and don't customize so much you can't upgrade.... I've seen many cases where that game is 10x the cost of "build with a small team using open source and/or simple cloud services".
So we eventually solve the problems somewhat messily with newer architectures and eventually hone our learnings into patterns that Martin Fowler inevitably publishes. These eventually become popular techniques with every new generation of developers and they become fads ... and may be misapplied.
I think it's the way it's always been. Today CQRS/ES or microservices are the fad. 10 years ago it was web services and ESBs, 15 years ago it was transactions and distributed objects, and 20 years ago it was CGI scripts and Perl. All of these solved lots of issues and caused lots of issues. The question is whether they solved more problems than they caused. The record varies.
Personally I have seen CQRS/ES in lots of places lately for legacy modernization. It’s been around under various guises for a long time (10 years at least) - cache your legacy data, expose it with an API (you absolutely can do CQRS with REST btw, the commands themselves are resources), force the use of messages/events to update the legacy only and use that to keep the cache coherent, and you can strangle the legacy gradually without a ton of huge changes in theory. Eric Evans indirectly talks about this as the 4th strategy in his great talk on “four ways to use domain driven design with a legacy”. One absolutely should consider the other three ways first (a bubble context / repository, anti corruption layer, etc)
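To make the "commands as resources" point concrete, here is a minimal sketch (hypothetical endpoint, payload, and publisher - none of this is from the commenter's system) using JAX-RS annotations: the client POSTs the command as its own resource, the service emits an event that updates the legacy system asynchronously, and reads are served from the cached copy elsewhere.

    // Hypothetical sketch: a command modelled as a REST resource.
    // The write side accepts the command and emits an event toward the
    // legacy system; the read side serves queries from the cached copy.
    import javax.ws.rs.Consumes;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    @Path("/commands/change-shipping-address")
    public class ChangeShippingAddressResource {

        /** Command payload as sent by the client (invented fields). */
        public static class ChangeShippingAddress {
            public String orderId;
            public String newAddress;
        }

        /** Whatever transports events to the legacy system (queue, topic, ...). */
        public interface EventPublisher {
            void publish(String type, Object payload);
        }

        private final EventPublisher publisher;

        public ChangeShippingAddressResource(EventPublisher publisher) {
            this.publisher = publisher;
        }

        @POST
        @Consumes(MediaType.APPLICATION_JSON)
        public Response submit(ChangeShippingAddress command) {
            // The legacy update happens asynchronously via the event,
            // so answer 202 Accepted instead of pretending it is done.
            publisher.publish("ShippingAddressChanged", command);
            return Response.status(Response.Status.ACCEPTED).build();
        }
    }

Returning 202 rather than 200 keeps the eventual consistency of the legacy update explicit to callers, which is half the point of strangling the legacy this way.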
The other context I saw CQRS without ES is when you have a rich data display and your simple three-tier JS to Java to ORM is starting to creak. I had a project that required a near real-time spreadsheet-like view with security role filtering of data, custom column sorting, transforms, and consistent query across 10 tables that also allowed full-text autocomplete search across the grid. Materialized views / denormalization wouldn't work too well in this case because the updates come in to the various tables from business events and other team members, and the grid needed to be up-to-the-second valid and quickly refreshed. The queries on the SQL database with Hibernate HQL wound up being massive 10-way outer joins, a bunch of nested scalar subqueries, lots of dynamic WHERE clauses and GROUP/HAVING clauses, plus full-text indexing (this all ran in under 200ms mind you, so not terrible performance-wise :). The problem was these were unwieldy to maintain as new data and indexing requirements came up, and required deep understanding of SQL voodoo and performance tuning. This was not a big project either - high value (several hundred million), high impact (multi-billion-dollar revenue stream), but a small team (8 people) and modest budget ($1m). Migrating our ORM to write commands on SQL and using Solr for query was the right move for the health of the system and long-term performance. Btw, this was a project that was going to go on SAP for the low price of $30m and shipping in 9 months, vs shipping in 3 months and evolving it after...
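Roughly what the query side looks like after a migration like that (collection and field names here are invented for illustration, not taken from the actual project): one denormalized Solr document per grid row, queried with SolrJ, replaces the 10-way join.

    // Hypothetical read-side sketch: the grid is served from a single
    // denormalized Solr document per row instead of a multi-way SQL join.
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;

    public class GridQuery {
        public static void main(String[] args) throws Exception {
            SolrClient solr = new HttpSolrClient.Builder(
                    "http://localhost:8983/solr/grid_rows").build();

            SolrQuery query = new SolrQuery("title_txt:widget*");   // full-text autocomplete
            query.addFilterQuery("region_s:EMEA");                  // security-role filtering
            query.addSort("updated_dt", SolrQuery.ORDER.desc);      // custom column sorting
            query.setRows(50);

            QueryResponse response = solr.query(query);
            for (SolrDocument doc : response.getResults()) {
                System.out.println(doc.getFieldValue("order_id_s"));
            }
            solr.close();
        }
    }

The filtering, sorting, and autocomplete concerns that drove the dynamic WHERE/GROUP clauses become query parameters against a flat document, which is what makes the read side cheap to evolve.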
My point is that...
- new architects always want to design-by-resume to some degree, and tackle the big bad hairy stuff with new techniques. This is why we see people throwing out proven components for terrible replacements on day 1 and then going back to the old ones 3 years later... looking at you, Mongo and Postgres! But sometimes the new technique is actually better (looking at you, Kafka)
- Most people only learn through failure; they don't read the warning labels... Applying patterns tastelessly is a rite of passage
- even in smaller projects these patterns have applicability
- I've rarely bought software that I loved; it's all 10x more complicated than it needs to be and didn't necessarily do a better job... that said, there's also no guarantee that you and your team won't build something that is also 10x more complicated than it needs to be... "all regrets are decisions made in the first day", as they say... it really depends on your luck and circumstances.
Perhaps one of the lessons of architecture that is missing is to teach people how to evaluate tradeoffs, or in other words, "taste". I don't think we've ever really had good taste as an industry. Buzzword bingo has always ruled, with some exceptions. One of the things I loved about Roy Fielding's REST thesis was a way to analyze capabilities, constraints, and tradeoffs on an architecture structure that consisted of components, connectors and data elements. That was the most important takeaway of that work IMO; we seem to have never learned how to critically look at these in favour of buzzword bandwagon jumps.
Re your comment on "taste".. Anders Hejlsberg is on video saying his choices in compiler and IDE design come down to taste.
I think he's got good taste, so I followed his platforms from Turbo C++, to Delphi and C++ Builder, then onto .NET and now Typescript (Yes, I skipped J++). When you actually understand how he thinks and how the tools are meant to be used, they are incredible. He balances the tradeoffs to achieve fast performance in all aspects from design to compilation to great runtime performance, with actual simplicity and ease of use. I learnt so much reading the VCL source code..
Anyway I almost agree when you say >> I don’t think we’ve ever really had good taste as an industry.
But Anders' work shines through as a guiding light, when people really get it.
Sorry, I wasn't clear. I meant that previously we did read/write using HQL. We migrated to CQRS, where the command part would generate an INSERT statement to the database (using the ORM) and also update the Solr index (using Jackson data binding to JSON).
Ah, gotcha, makes sense. Thank you for the explanation. Did you guys treat Solr as a source of truth? Curious about use cases in which client updates a record and then immediately queries it through something like Solr.
Yeah, we used the Solr real-time get handler and a few caching tricks - we updated the cached result set for the client so we didn't need to request Solr immediately. If I recall, the cache had a TTL of about 30 seconds that could be bypassed with a refresh button for super users - intended to shock-absorb the system for common queries. Generally it took a second for new data to be queryable. For the writes to Solr we listened for Hibernate events.
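A sketch of that write path, using JPA lifecycle callbacks as a simpler stand-in for the native Hibernate event listeners described above (entity and field names are invented): after each insert/update the row is flattened into a Solr document, and commitWithin plus the real-time get handler cover the roughly one-second freshness window.

    // Hypothetical sketch of the write side: a JPA entity listener pushes a
    // denormalized document into Solr after each insert/update. The parent
    // used Hibernate's native event SPI; @PostPersist/@PostUpdate is the
    // JPA-standard near-equivalent shown here.
    import javax.persistence.Entity;
    import javax.persistence.EntityListeners;
    import javax.persistence.Id;
    import javax.persistence.PostPersist;
    import javax.persistence.PostUpdate;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    @Entity
    @EntityListeners(SolrIndexListener.class)
    class OrderLine {
        @Id Long id;
        String title;
        String region;
    }

    public class SolrIndexListener {

        private static final SolrClient SOLR = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/grid_rows").build();

        @PostPersist
        @PostUpdate
        public void index(OrderLine entity) {
            try {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", entity.id);
                doc.addField("title_txt", entity.title);
                doc.addField("region_s", entity.region);
                // commitWithin (ms) lets Solr batch commits instead of
                // committing per write; fresh rows are still reachable
                // through the real-time get handler in the meantime.
                SOLR.add(doc, 1000);
            } catch (Exception e) {
                // The SQL write has already happened by this point, so real
                // code needs retry / dead-lettering rather than just failing.
                throw new RuntimeException("Failed to index order line " + entity.id, e);
            }
        }
    }
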
It’s been 5 years or so and I don’t think this has changed much with either Elastic or Solr product-wise
The numbers of developers working at an inappropriately low level is frightening.
This is very true, but unfortunately the packaged solutions available are vendor lock-in or extremely opinionated frameworks. There is too much fetishism about frameworks.
Why not just use React? Because I already had to learn JS, and the framework of the month has a short shelf life.
There's an unsolvable problem here: the trade-off between flexibility and the pre-packaged abstractions called frameworks.
I dunno, dude, most of the "masters" I know would come down off that hill and go "yeah, this is pretty obvious stuff he's saying." There are things in there I'd quibble with, but overall? Yeah, he's not wrong.
The way you're gunning for somebody who's trying to make things better, who's trying to improve the craft, is strange in its own right.
Yes I'm starting to think that good ideas can grow out of anything and can be facilitated by any technology. Blanket statements which warn against specific approaches or technologies are often incorrect.
I'm working at a startup and we're starting to use event sourcing and CQRS a lot more in the replatforming of core components of the application with kafka for an event log.
Personally, I find these patterns and approaches, coupled with sound domain modelling, to be perfectly fine. There is some technical overhead to event sourcing, so in some places we do not use a journal but instead persist current state, while still emitting events and doing interesting things with those events. CQRS is pretty safe if you're investing in event-driven architecture across your organization. You don't need to use both - one or the other is okay too.
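A minimal sketch of that "persist current state but still emit events" variant (all names hypothetical, with in-memory stand-ins for the real table and event bus): the command handler writes the new state, which remains the source of truth, and publishes the event purely for downstream consumers.

    // Hypothetical sketch: CQRS-style command handling without event
    // sourcing. Current state lives in a normal table (the source of
    // truth); events are published so other components can react, but
    // they are never replayed to rebuild state.
    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Consumer;

    public class RenameAccountHandler {

        public record RenameAccount(String accountId, String newName) {}
        public record AccountRenamed(String accountId, String newName, Instant at) {}

        // Stand-ins for the real persistence layer and event bus (e.g. Kafka).
        private final Map<String, String> accountNamesTable = new ConcurrentHashMap<>();
        private final Consumer<AccountRenamed> eventBus;

        public RenameAccountHandler(Consumer<AccountRenamed> eventBus) {
            this.eventBus = eventBus;
        }

        public void handle(RenameAccount cmd) {
            // 1. Persist current state - this is what reads come back from.
            accountNamesTable.put(cmd.accountId(), cmd.newName());
            // 2. Emit the event for downstream consumers; it is informative,
            //    not the system of record.
            eventBus.accept(new AccountRenamed(cmd.accountId(), cmd.newName(), Instant.now()));
        }
    }
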
I wouldn't have designed this system any other way - there are so many opportunities to do interesting things with the event stream that we discover almost daily.
Event sourcing is more dangerous, and you need to have some really sharp people around to fix issues that come up and make it hum. We have multi-million-line logs, and I've had to spend some time going through open source libraries, changing implementations for projections from the log that were creating contention and performance issues, and modifying how the libraries recover so they paginate through the events. All in, we can now recover from JSON in Postgres at a rate of 20k events/s (although usually we recover from a snapshot!) and we can tear through thousands of commands a second in the running system. After these hurdles, everything is constant-time or logarithmic, so we can handle orders-of-magnitude growth without any issue. One day we'll have to flip our aggregates to have a clean journal. It's just life in engineering though - software isn't an event, it's a process, so if you want the gains there are sometimes costs. As long as you are smart about your choices and have a good team, you can make anything work.
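For anyone unfamiliar with the recovery being described, a generic snapshot-plus-replay sketch (not the commenter's actual library; every name here is invented) looks roughly like this: load the latest snapshot, then page through and apply only the events recorded after it.

    // Generic sketch of snapshot + replay recovery for an event-sourced
    // aggregate; illustrative only, with a shopping cart as the example state.
    import java.util.List;
    import java.util.Optional;

    public class AggregateRecovery {

        public record Event(long sequence, String type, String payloadJson) {}
        public record Snapshot(long lastSequence, CartState state) {}

        /** Example aggregate state; apply() folds one event into the state. */
        public record CartState(int itemCount) {
            CartState apply(Event e) {
                return switch (e.type()) {
                    case "ItemAdded"   -> new CartState(itemCount + 1);
                    case "ItemRemoved" -> new CartState(itemCount - 1);
                    default            -> this;  // unknown events are ignored
                };
            }
        }

        /** Abstracts the journal (e.g. JSON rows in Postgres, read in pages). */
        public interface EventStore {
            Optional<Snapshot> latestSnapshot(String aggregateId);
            List<Event> eventsAfter(String aggregateId, long afterSequence, int pageSize);
        }

        public static CartState recover(EventStore store, String aggregateId) {
            // Start from the snapshot if one exists, otherwise from empty state.
            Snapshot snap = store.latestSnapshot(aggregateId)
                                 .orElse(new Snapshot(0L, new CartState(0)));
            CartState state = snap.state();
            long cursor = snap.lastSequence();

            // Page through the remaining events instead of loading them all at
            // once - the kind of pagination change the comment above describes.
            List<Event> page;
            while (!(page = store.eventsAfter(aggregateId, cursor, 1000)).isEmpty()) {
                for (Event e : page) {
                    state = state.apply(e);
                    cursor = e.sequence();
                }
            }
            return state;
        }
    }
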
I'll add that the existing stack uses ORMs/active record, and the performance overhead of all those damn queries is now too expensive, so we're building some screaming-fast event-sourced and command-sourced apps SPECIFICALLY TO GET AWAY FROM THOSE TRIED AND TESTED CLASSICAL PATTERNS. Mind you, we could make the existing application hum too with some careful analysis - I'm not saying one is better or worse, just that they are both equally viable with some smart minds around and good decisions being made. DDD is never a bad thing, and ES/CQRS happen to be a really incredible fit in spaces where the domain is central, as you very easily discover you can have pure and beautiful domain models when you think in commands and events.
I'll just leave a caveat that, while I think we're having good success with the decisions, we also work in a domain with enough complexity that we get a lot of benefit in how ES acts as a heuristic in our modelling. Doing the same with a shopping cart may be overkill.
Fundamentally it's about a lack of qualified professionals who are willing to think through their application in detail.
As a by-product, people slavishly follow concepts and ideologies as opposed to thinking through which concepts actually apply to their current use case. People can't be fucked to do that.
A simple "this good, that bad" construct pops into their heads - "waterfall bad, agile good" - without asking themselves what trade-offs either approach involves.
I'm currently living on the flip side of this article: a company that decided to give their clients the ability to have a completely free schema for their data, backed by a real-time CRUD architecture that follows an EAV pattern, so every column value is represented by a database row. It's inherently flexible, but it doesn't scale. What would be a single-row select in any "sane" database retrieval routine becomes a multi-row and multi-table join on ridiculously tall tables.
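To make the EAV cost concrete, here's a hypothetical illustration (invented table and column names, not the company's actual schema): fetching one logical record from a conventional schema versus pivoting it back together from an attribute-value table.

    // Hypothetical illustration of the EAV shape described above. With a
    // conventional schema, one row holds the record; with EAV every attribute
    // value is its own row, so reading one logical record means joining and
    // pivoting several tall tables.
    public final class EavExample {

        // Conventional schema: one row per contact.
        static final String FLAT_QUERY =
            "SELECT first_name, last_name, email FROM contact WHERE id = ?";

        // EAV schema: entity -> attribute -> attribute_value, pivoted back
        // into columns. One logical row becomes N value rows plus the joins.
        static final String EAV_QUERY =
            "SELECT " +
            "  MAX(CASE WHEN a.name = 'first_name' THEN v.value END) AS first_name, " +
            "  MAX(CASE WHEN a.name = 'last_name'  THEN v.value END) AS last_name, " +
            "  MAX(CASE WHEN a.name = 'email'      THEN v.value END) AS email " +
            "FROM entity e " +
            "JOIN attribute_value v ON v.entity_id = e.id " +
            "JOIN attribute a       ON a.id = v.attribute_id " +
            "WHERE e.id = ? " +
            "GROUP BY e.id";
    }
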
Nobody thought through the implications of providing the customer with that free schema, and nobody ever closed the loop on that problem set. Ideas like CQRS (who the fuck named that pattern so awfully, btw?) or even things like NoSQL would be preferable to the architecture, but they're somewhat slaves to the notion of ORMs and RDBMSes. As a product, we try to spin on a dime, and an optimisation for client A's infrastructure will result in a detriment to client B.
Staff remain convinced that it's an issue with the transport protocol chosen or the ORM used, but really it's a problem with the feature set we chose to offer and our failure to architect the solution around the problems that feature created.
> Fundamentally it's about a lack of qualified professionals who are willing to think through their application in detail.
There's also a distinct lack of companies willing to pay the premium for quality software, and in some corners there are companies that don't need anything more than a throwaway solution.
A lot of quality software is "not required" because it's sold with steaks, strippers, kickbacks and conservatism (the "nobody ever got fired for choosing IBM" effect), not because the end users don't care about quality.
Hmmm mmm mmm, that sounds like the kind of software that comes in 200% late, at 300% of the original budget - and that no one actually thinks is quality except management, and then only during the honeymoon phase...