The industry constantly mints senior engineers who have been bitten by complexity, but doesn't want to hire them, or listen to them. More often than not senior engineers pretend to be excited about complecting the tech stack, because hey, it pads their resume with the latest buzzwords and they've given up trying to fight against it anyway.
The last line of defense against a rogue engineering team is managers who have studied this stuff. How many engineering managers can spot the common situation "the engineers are bored and they're rewriting perfectly good codebases in Common Lisp and OCaml for funsies"? And how many know what to do about it when they see it?
Anyway, this is a cool website, and it'll be useful to point people to it in the future, so thanks for that.
This completely reverses the typical market dynamics. The company is more focused on catering to investors' wet dreams than on actually solving business problems, and "engineering playgrounds" with 3 different programming languages, microservices, AI & blockchain netting them a 5-figure monthly AWS bill appear to please investors more than a rock-solid, "boring" backend. Maybe the complex architecture succeeds at obfuscating the fact that there's no business model, nor plans for one?
If you instead use cheaper 3rd-party hosting companies, you may face hurdles to growth and future migration costs, since those companies do not hold many of the required certifications.
From an investor POV, paying a little extra now is often worth it to reduce risk and remove barriers to explosive growth.
I can see the upside for many things, but you can do that efficiently, not costing 5 figures.
The only risk I can see is the lack of managed services, but Postgres isn't that hard to manage yourself, and other value-add AWS services can still be used (the cost savings of your baseline load being on bare-metal would most likely still offset the bandwidth costs of moving data between AWS managed services and your bare-metal host).
For SNS, what does it offer that RabbitMQ doesn't? I've always found RMQ to be rock-solid and wouldn't use SNS anyway.
We had to migrate our servers 4 times as the hosting company kept changing things, then got purchased by IBM, then IBM was trying to figure itself out.
It didn't kill the company, mostly due to a lot of really hard work.
But it came close.
This also goes some way to explaining the leetcode, "hiring only the best from Stanford/MIT" hiring nonsense for pretty mediocre cookie-cutter products that could be perfectly well executed by a couple of mid-range developers: your developers' resumes are as much a part of any potential acquihire package as the codebase and user data.
2. It is actually increasingly hard to find an industry-wide reputable hosting provider (cloud, VPS, metal or not) that many investors could agree upon. Just like you said, which makes DD harder.
3. Amazon actually offers heavy discounts and even lots of free credits to startups backed by accredited VCs, meaning the difference in the first few years is actually tiny. And in your example, if it can be run on a $500 budget elsewhere, you can bet it will literally be free on AWS for those startups.
Reward employees with a direct and substantial cut of the profits, incentivize them to stay 5-10 years, and these behaviors should disappear.
The loss of job security and frequent job hopping have created more incentive to optimize for the next job switch rather than for adding value.
The explosion of startups also contributes to this. They often have to attract employees by offering the promise of new tech. New tech can propagate these days the same way Bitcoin prices rise. Our industry is in a financial bubble, which has created a complexity bubble. The financial bubble collapsing will pop the complexity bubble, leading to a huge surge in boring, low-overhead, stable tech.
Most startup employees know they aren’t getting rich. They go on to milk the startup for maximum resume points and move on.
The VCs unload these bloated companies into inflated stock markets and the cycle continues. Some small progress at the cost of tens of billions and lots of running in the same place.
Our industry is like some eccentric Howard Hughes drowning in so much money that all we do is come up with ever more esoteric contraptions to escape from reality.
DHH starts really small companies and pays his employees really well and doesn’t work them too hard. Employees have no real reason to leave. They see a direct link between the low overhead and their job security and work life balance. Since the team is smaller the work is less alienating / hyper specialized leading to a deeper connection with the company and its customers. Aligning incentives fixes a lot of problems.
Assume company A sells a widget for 10 silver
Then company B comes along and sells same widget for 3 silver.
Company A will either learn to be more efficient and sell at 2-3 silver, or go out of business.
By the time the market wakes up you’ve switched 3 times and after 10 times you’re retired in the suburbs with a nice BMW. Who cares then.
Capitalism is not the same as the free market; capitalism is rather a system where your economic power is mainly determined by the amount of capital you have (money, investments, means of production...) rather than by class, bloodline, titles, popularity, or legislative regulations.
Capitalism is linked to the free market in that they work well together: free markets want entrepreneurs to compete for efficient capital gains, and people with capital like how their economic power is bounded only by their capital (with the potential for exponential growth). Apart from that, either can exist without the other.
This is how supporters would like them to work together, but there can be a lot of bugs and traps on the way.
(This last one would be the characteristic of planned markets.)
You could create a successful company on any tech stack. It's really like trying to invest in a company based on the way they decorate their HQ. Does it matter at all? Maybe only if it's ridiculously extravagant compared to their revenue.
They pretty much ghosted us soon after that meeting. Though they did not specify that as the reason, it seems most likely that their advisor told them we don't know what we are doing technically.
What matters with a startup is executing, and that means using tools that let you execute well. If you're most familiar with PHP and MySQL, then that's what you should use.
As someone biased against your tech stack, I fully support your decision and think the expert and the investor have no idea what really matters, hence I stand by that they're a bad investor.
On the other side I'm evaluating CL for my next endeavor since I personally find it to be my most productive language, but realistically I'll settle for Clojure and even then I'm worried if that is a bridge too far when it comes to the whole funding/due diligence issues.
I'd love to know what their expert advisor considers the right decision.
PHP 7.2+ is great for a lot of SaaS products and is super fast now. And MySQL 8 is rock solid and battle-tested in a lot of production systems.
Starting over, I would consider PostgreSQL because it has some nifty features. But now that MySQL has added JSON support, I am less inclined, and I see a lot of complaints about Postgres performance and scaling that I don't think are as much of an issue with MySQL. There isn't a whole lot more out there for relational databases (that doesn't cost a ton). And not using a relational database for most SaaS systems is just crazy talk.
But I assume you have a relational database somewhere to manage users and do other relational-type things?
I'm not a huge fan of MySQL, but I have not used it in quite a while, so I'm assuming things have improved considerably.
No argument from me there.
I think there is a tension between this kind of actively guiding anti-complexity management and hiring "top talent".
The very best developers are capable, and avoid complexity. The next best developers are capable, and love complexity. The worst developers are not capable.
There aren't enough of the very best developers for a company to plan around hiring only those. So, if you want to hire developers who are at least capable, you have to give them some leeway to make things overcomplicated. Yes, that incurs a real cost to the business. You can think of that as just being another part of their compensation package.
Even the best developers get ignored if they try to justify pure-tech-debt fixes. So they learn to include fixing tech debt as part of fixing a problem that has some _direct_ business relevance if addressed. This gets clearly observed and taught to all tiers of developers, further obscuring the rationale for architectural changes from more senior management.
"for funsies" probably isn't that far off. Because the process is more like someone gets interested in something at some point. Then at that point +~1-6 months someone raises a problem and some senior dev gets stuck on the idea that the awesome thing they read about can solve it. Then before you know it whatever tool they want to use has more bells and whistles than the average mars lander and does everything short of curing cancer.
There's rarely any good correlation between the problem and the solution. That gap can just be bridged by buzzwords. The true correlation is usually between the solution and whatever the most senior dev on the team thinks is shiniest at the moment.
It's not always in one's control to avoid complexity. The simplest solution to a problem in a lot of cases may be 2-3x the lift (simplicity tends to require more work, complexity is easy) and thus blocked by the business. A holistically simpler solution may be blocked politically because a certain team doesn't want to own some functionality etc ...
I would say the best developers can see complexity coming and have a healthy fear of it, the medium devs don't mind complexity and the worst devs can't get anything done without increasing complexity.
This is such an important point. For whatever reason it has become ingrained in people's heads that the simplest solution must by reductionist logic be the easiest one. And therefore the easiest solution is the simplest one and it is good to be lazy and just introduce complexity everywhere.
This is where the whole "MVP" concept got out of hand. MVP didn't mean an overly simplified prototype. It meant: solve one, narrower, problem well. This also pairs well with PG's "do things that don't scale" advice. You are taking on what others might think of as additional complexity, to solve a very targeted problem more effectively than others, because using either deeper analysis or first principles or whatever, you've actually better modeled the underlying complexity. Then you try to scale given those insights.
You see this a lot with writing as well. It's very easy to ramble on, it's very hard to concisely convey your point.
That's much harder than doing less simple work on your own team
I've seen no evidence that companies are even trying to hire developers who "avoid complexity". If anything, the interview processes are designed to select for engineers who bathe in complexity. There are so many interviews which consist of "How would you rewrite from scratch this thing that already exists?"
There is such a shortage of people that know the basics of programming, that selecting for such l33t skills is out of the question.
The hiring process right now is not about selecting the best, it's about selecting those that pass some low bar.
On the other hand, great for us who can code :D.
So if that isn't part of the deal, I don't even bother.
"How high do you score your C++ skills out of 10?" "9/10" "OK, can you explain what a pointer is?" blank stare... "no idea"
I would definitely consider pointers pretty fundamental if you're a C++ dev.
It is pretty hard to believe that 50% of candidates don't know what a pointer is. I've barely touched C/C++ and still know what pointers are and how they work.
Congratulations, you've successfully triggered the pedantic interviewer. You will now face six questions on pointers in C++ that they just looked up, each more trivia-based than the last.
I'm one of those, and it really gets on my nerves when systems are overengineered, or use tech that has more drawbacks than benefits for our specific case.
This is always a challenge in general - how do people learn the lessons of complexity without creating it and then seeing the effects? I wish there was a better word for it as every person who reads "complexity" says well "duh of course I don't want that" before they then go and manufacture another bucket full of it. Complexity masquerades as simplicity - in the first instance it nearly always solves the immediate problem better than anything else. Recognising the latent complexity of choices is one of the hardest but most important skills to learn.
It was a hot time for NoSQL and document DBs. Having investigated using Mongo myself to little avail, I asked why they didn't just use Postgres. If I recall correctly, a couple years later they published a Mongo at Etsy postmortem which concluded they should have just stayed with Postgres.
Which was compiled here:
IMO this would be a case where if you're dealing with a relational domain and the engineers really don't know SQL you should either (a) rethink your hiring policy or (b) spend one of your innovation tokens in having everyone learn SQL.
(I have to add the inevitable disclaimer that I actually love JS and do not want my words to be misinterpreted as a cheap dig at it)
If you have an application that retrieves and works on a top-level entity, then NoSQL fits very nicely. When you have a dataset that is shared and aggregate information is needed, not so much, and you are likely better off considering a SQL database of some sort.
There are best practices for this. Simply create a microservice per table, and then create a microservice that acts as a client to the other services and aggregates or joins the data from those services.
No, I'm not kidding. This is literally what people do and recommend.
Furthermore, when a join is desired, the best practice is to implement a service that not only joins the data but maintains it in a table/materialized view of its own, along with a message server such as Kafka. The services responsible for the tables to be joined (customers and orders, for instance) put events on this message bus, which the joining service subscribes to in order to know when to update its view. See: https://microservices.io/patterns/data/cqrs.html
And don't get me started on how you do transactions that touch multiple tables in this setup: https://microservices.io/patterns/data/saga.html
I have seen these deployed and advocated as the modern way to write business applications in the wild.
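To make that concrete, here's a toy in-memory version of the event-driven join described above, with a plain subscriber list standing in for Kafka and all names invented for illustration; it's a sketch of the pattern, not any real service:

```typescript
// Toy event bus standing in for Kafka.
type Event =
  | { type: "CustomerCreated"; id: string; name: string }
  | { type: "OrderCreated"; id: string; customerId: string; total: number };

const subscribers: ((e: Event) => void)[] = [];
const publish = (e: Event) => subscribers.forEach((fn) => fn(e));

// The "customer service" and "order service" each own one table and publish
// events. The joining service maintains its own materialized view of the join.
const view = new Map<string, { customer: string; orders: number[] }>();

subscribers.push((e) => {
  if (e.type === "CustomerCreated") {
    view.set(e.id, { customer: e.name, orders: [] });
  } else if (e.type === "OrderCreated") {
    view.get(e.customerId)?.orders.push(e.total);
  }
});

publish({ type: "CustomerCreated", id: "c1", name: "Acme" });
publish({ type: "OrderCreated", id: "o1", customerId: "c1", total: 42 });

// The "join" is now a lookup -- at the cost of an extra service, an event
// log, and eventual consistency, versus one SQL JOIN.
console.log(view.get("c1")); // { customer: 'Acme', orders: [ 42 ] }
```

Note what the toy version hides: in production you'd also need ordering guarantees, replay on restart, and dedupe, which is exactly where the complexity bill comes due.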
Combine that with gatekeeping/I did my time you have to do yours and not much has changed there over the years.
I'm sure there are some great companies out there, but they seem to be rare. I just don't see work in this industry as sustainable, and I can't see myself working as a Software Engineer when I'm 30+ or with a family.
> The industry constantly mints senior engineers who have been bitten by complexity, but doesn't want to hire them, or listen to them.
I guess they failed to understand that choosing boring software is different than depending on a package manager to write all your software.
Except I fucking hate using Typescript, and totally wasn't expecting to see you mention you like it, given all the other stuff.
IME all the same people that overengineer everything with god awful dependencies are the same ones pushing super hard for typescript on every project I'm on. When they get their way (as always, since everything is decided democratically and everyone is dragged down to the level of the worst dev on the team), they write the worst typescript ever. On my last project, one of the people championing typescript defined some constant strings containing css breakpoints as the type 'string | int'. Rather than getting knocked back as one of the dumbest lines of code in the history of front-end, this somehow generated 3 pages of discussion in code review then got left in. I'd give the person a pass, assuming they'd never used a typed language before, except (a) they were senior and (b) they're the one that wanted types. These "seniors" lack even the most basic understanding of the shit they're using, but feel the need to impose their opinions about libraries, tooling and languages constantly.
I don't feel like I'd mind using TS at all on my personal projects, but on work projects with average devs it just adds another entire layer of complexity that they spend hours and days and weeks and months wrangling with instead of writing any code that might be remotely useful, by say, maybe implementing a business requirement, or doing anything that makes the company money instead of pissing away millions in salaries playing npm lego.
Plus, although I'm not familiar with any of it since I'm never the one pushing for TS, and so never the one setting it up, I've seen people spend absolutely ludicrous amounts of time tinkering with webpack and fussing over TS integration with 3rd party libraries and whatnot.
For some stupid reason this entire industry seems to suddenly believe that a 40+ year old debate of static vs. dynamic typing was settled because Microsoft came out with TypeScript.
Maybe one day I will have that aha moment and get it, but the only real reason I see to learn Typescript is if the industry deems it essential to learn.
But naturally every TypeScript fan has a few stories about how 'type safety' saved their bacon, and no stories about how they struggled for hours defining types for callback functions or getting everything scaffolded.
* structural instead of nominal types (i.e. if it walks like a duck and talks like a duck, it is a duck)
* mapped types let you transform object types into other object types: https://www.typescriptlang.org/docs/handbook/2/mapped-types....
* conditional types let you apply conditional type transformations, and combine with mapped types to provide the equivalent of for comprehensions https://www.typescriptlang.org/docs/handbook/2/conditional-t...
* template literal types let you generate dynamic string types on the fly, which you can use in various places including property names: https://www.typescriptlang.org/docs/handbook/2/template-lite...
and so on.
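To give a flavor of those features together, here's a small sketch; the type names are made up for illustration:

```typescript
// Structural typing: any object with at least these fields satisfies the type.
interface Point { x: number; y: number }
const p = { x: 1, y: 2, label: "origin" };
const q: Point = p; // OK: p has x and y, the extra field is fine

// Mapped type: derive a new object type where every property of T
// becomes optional and readonly.
type Frozen<T> = { readonly [K in keyof T]?: T[K] };
const fp: Frozen<Point> = { x: 1 }; // y may be omitted

// Conditional type: unwrap the element type of an array, pass through otherwise.
type Unwrap<T> = T extends (infer U)[] ? U : T;
type N = Unwrap<number[]>; // number
type S = Unwrap<string>;   // string

// Template literal type: derive handler names from event names.
type EventName = "click" | "focus";
type HandlerName = `on${Capitalize<EventName>}`; // "onClick" | "onFocus"
```

All of this is erased at compile time; none of it exists in the emitted JavaScript.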
It does have a learning curve, and the tooling is annoying to set up.
I expect Template TypeScript and Liquid TypeScript are not far off.
My enthusiasm for it has died, and I enjoy just doing a plain old <script src="myfile.mjs"></script> on my own projects.
> I'm surprised you haven't run into more of these people that just seem to use TS as a complexity multiplier for every bad engineering decision they make.
Angular claims to be TypeScript-based, but most Angular code rarely makes use of type definitions, which defeats the whole point of TypeScript and only contributes to spaghetti and tech debt.
Configuring the compiler is now as complicated as selecting the right set of GHC compiler pragmas to successfully compile the code, and in the end they are type annotations that have zero impact on code performance, just a friendlier syntax than using JSDoc.
I look forward to when browsers just adopt WebIDL and be done with it.
Oh yeah, I feel this. The problem is many "seniors" went from junior->senior by being the big fish in a small pond (small startup + golfing buddies with the CTO) so they never had to question their own assumptions. Then they just glide from job to job at the senior level. It makes for monsters who can't differentiate their own personal preferences from industry best practices, and will attack your PRs if you go against either.
When I finished uni I had a few interviews for large enterprise-y companies to get into their graduate programs. None of those panned out. Then while I was looking for more things like that, one of my classmates asked if I wanted to interview to be employee #1 at the startup he and a non-technical friend had created.
They didn't have much funding, so it was ~45k (minimum wage or close to it in Australia) to start, as opposed to the grad programs that I think would have been around 55k-60k. But the job was basically building an entire fairly large and complex web app (as well as a bit of desktop and hardware related stuff) between the two of us, who had close to zero real world experience, my friend on the back-end and me on the front-end. So we basically just had all the responsibility, with no experience, and nobody to guide us.
From there I just hopped around a few jobs, looking for small places where I was the senior or second most senior person on the team. I learnt from the first job that the easiest way to learn is to be in a position where you have as much responsibility as possible so failure isn't really an option.
Things have improved a lot since 2009 however, IMO. There was barely a concept of modularity at that time, the tools (e.g. AMD / requirejs / r.js) were way worse and managing the state of the DOM was a pain, jQuery or otherwise.
If your engineering team is the one pushing in that direction, I'd reckon the company was in a bad spot to begin with for having hired that team, because it strongly indicates that the management layer (head of tech/CTO) has no technical clue.
Hire strong Lead Developers with a proven track record of delivering value to companies they worked at and you'll be mostly fine.
Also there's not much to study, in 99% of the cases in a web based startup if your stack deviates from a monolith with one of PHP/Ruby/Python/.NET/Go + Mysql/Postgres/MSSQL you're doing it wrong.
You can make it work with more maintenance, but you did ask the question what's wrong with it!
JS shiny new culture doesn't really exist on the back end (and even front end js has calmed down in recent years). Express.js, the go-to framework 7 years ago, is still the go-to framework on Node today.
Node and Mongo are at this point "boring tech". Their limitations and trade-offs are well known, their benefits are also well-established, and their APIs and tooling have matured.
1. 10 Things I Regret About Node.js - Ryan Dahl - JSConf EU (https://www.youtube.com/watch?v=M3BM9TB-8yA)
Then there is also the fact that JS is actually a pretty great language if you know how to avoid the footguns. Granted that's not always easy, but it's a language with lexical closures and easy and familiar syntax, it's also very expressive and has a vast ecosystem supporting it. And you can even add the typescript compiler on top if you want compile-time type-checking.
It's also async out of the box, and while that doesn't solve all problems, it scales surprisingly well with no performance tuning whatsoever, it even has decent asynchronous primitives that make it easier to write correct code.
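For what it's worth, those async primitives are mostly just promises and async/await; a trivial sketch of fanning out concurrent I/O, where fetchUser is a made-up stand-in for a real DB or HTTP call:

```typescript
// Simulated async I/O; in real code this would be a DB query or HTTP request.
const fetchUser = (id: number): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(`user-${id}`), 10));

async function main() {
  // All three requests are in flight at once on the event loop,
  // rather than running one after another.
  const users = await Promise.all([1, 2, 3].map(fetchUser));
  console.log(users); // [ 'user-1', 'user-2', 'user-3' ]
}

main();
```

The scaling claim above boils down to this: while one request waits on I/O, the event loop services the others, with no threads to tune.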
Give me a break.
I was on a project for a bit using React and although it felt like an obvious way to write things, I can't help but feel you can't create something that will last for a decade with it.
Web components in plain JS are also great for not having to deal with JS class/HTML element binding and lifecycle yourself.
Why not add a bit of Flutter or React for a few features, but for most pages it's going to be expensive overkill.
The most compelling feature is the guarantee of no runtime exceptions. The language is pretty stable, with a glacial release cycle. It also has a UI library called elm-ui which lets you develop UI components without CSS.
There are lots of posts criticizing Elm for its slow release cycle and for a community that does not take feedback properly. But at least for my use case that does not matter.
I like that the language is very opinionated and just works.
A few plusses for Elm:
- Static language
- Informative compiler errors
- Awesome tooling. You just need to install the Elm compiler. No npm required.
- < 1s compilation
- The Elm Architecture (TEA) for UI event handling
- Beginner-friendly community
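For anyone unfamiliar, TEA boils down to a model, an update function, and a pure view. A rough TypeScript analogue of the shape (Elm is of course its own language, and this is only the skeleton, not a runtime):

```typescript
// The Elm Architecture in miniature: state changes only via update(msg, model).
type Model = { count: number };
type Msg = { kind: "Increment" } | { kind: "Decrement" };

const init: Model = { count: 0 };

function update(msg: Msg, model: Model): Model {
  switch (msg.kind) {
    case "Increment": return { count: model.count + 1 };
    case "Decrement": return { count: model.count - 1 };
  }
}

// The view is a pure function of the model; the runtime re-renders it.
const view = (model: Model): string => `Count: ${model.count}`;

let model = init;
model = update({ kind: "Increment" }, model);
model = update({ kind: "Increment" }, model);
console.log(view(model)); // Count: 2
```

Because update is the only place state changes, the "no runtime exceptions" guarantee mostly falls out of the compiler checking that every Msg case is handled.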
React does have a ton of problems but they all come from the next level of dependencies down. Shit like Gatsby and Nextjs won't pass the test of time. Neither will redux (it's already pointless) and all the convoluted bullshit like redux-saga. If you learn to build stuff using just react and other basic dependencies (like express on the back end), you'll be in a good position going forward. None of that stuff is going anywhere.
I can't speak for Angular or Vue, but I'm 100% sold on Svelte. It cuts out all of the crap that React and Redux introduced (lifecycles, hooks, boilerplate, etc.) and boils it all down to fundamentals. You can read the entire docs in a day and fully understand how everything fits together. I dare say it, but Svelte's docs are a breath of fresh air. It's rare that I read documentation and want to keep reading it.
To me, that's what boring tech is about. It's about finding the simplest, cleanest way to do what you need to do. I hope Svelte takes the path of long-term stability over features and complexity and innovation for the sake of it. What they have right now is a solid foundation.
> convoluted bullshit like redux-saga
Wait until you meet saga's bigger brother RxJS/redux-observable. Someone on HN once mentioned JIRA was using RxJS and I realized "ah, that explains why JIRA is the slow pile of absolute shit it is." From just knowing a company is using RxJS I can already guess at the type of internal communication and politics at play in the company, as well as what their code base looks like.
I haven't done it so much lately but a couple years ago whenever I would check an app with nice UX it was React, and if it was terrible UX it was Angular or something else.
Also interested to see where Svelte will go. For my latest project I just didn't choose it because of lack of libraries.
I'm stuck on this idea that the best UX is a native app for performance reasons (responsiveness, memory, CPU, battery), aesthetic reasons (assuming your like your native platform) and longevity. On my desktop I often run applications that are a decade or more old. How many web apps rolling out today can sit untouched for the next decade and continue to do useful work?
The reason stuff like React exists isn't because it's some big generic library for doing "frontends" that everyone has to use (even though that's how people see it, how it's marketed, and how people use it). If you want to know what a library is good for it's easiest to look at what it was originally built for, the very first problem it solved.
For libs like React, that problem is DOM manipulation.
For most of the interesting things you can build on the web these days, DOM manipulation becomes a problem at some point because the solution has an inherent complexity to it that becomes hard to manage. That complexity is in procedurally updating the DOM, specifically getting the order of insertions and deletions correct and keeping track of every possible state the DOM can be in to make sure your app doesn't get in a weird state that it can't recover from.
The way React (and vue, angular, svelte etc, all the modern libraries) fix that problem is by changing the programming paradigm from procedural to declarative. The declarative paradigm is just fundamentally much simpler for the exact problem of handling DOM manipulation in a large app.
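To illustrate the shift (this is a toy, not React's actual implementation): instead of mutating the DOM step by step and tracking what's already there, you describe the whole tree as a function of state and rebuild it:

```typescript
// Declarative style: describe the target tree; no manual bookkeeping of
// what insertions/removals are needed to get from the old DOM to the new one.
type VNode = { tag: string; children: (VNode | string)[] };

const h = (tag: string, ...children: (VNode | string)[]): VNode =>
  ({ tag, children });

// Render the whole tree from scratch; a real library would diff
// this against the previous tree and patch only what changed.
const render = (node: VNode | string): string =>
  typeof node === "string"
    ? node
    : `<${node.tag}>${node.children.map(render).join("")}</${node.tag}>`;

const view = (items: string[]) =>
  h("ul", ...items.map((item) => h("li", item)));

console.log(render(view(["a", "b"]))); // <ul><li>a</li><li>b</li></ul>
```

The point is that adding, removing, or reordering items never requires new imperative code: the view function stays the same and the library works out the DOM operations.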
If you're learning, or building something for yourself and not worried about spending time on refactors, then it's definitely worth building something in vanilla JS first, running into some sticky DOM manipulation scenarios yourself, and solving them the hard way. People make the mistake of using React when they don't need to because they don't have a good understanding of where that line is in the inherent complexity of a web page/app, where you start to get very good returns on bringing React in to simplify some of that complexity.
That's also why I really don't rate vue, angular or svelte. React is a big library in terms of code size (over 100KB still I think?), but almost all of that complexity is internal. The exact same API and functionality is exposed by Preact, which is a few kilobytes. React has a really small API, pretty much just three functions: createElement, render, and useState. I'm a big fan of libraries that do big things with only a few functions. Do one thing well and all that. There's also the JSX transform, which is a straight line for line transform, meaning the code you write is very similar to the code that runs in the browser, you can follow it line by line with no surprises.
React is a good tool to have in the toolkit, after you've gotten comfortable with vanilla JS. I wouldn't write it off based on how other people present it. You just need to avoid the insane amount of complexity and cruft that people have built around it. All that complexity will go away when people go running after the new shiny thing, but React or something very similar to it will stick around for a loooooong time because the fundamental ideas are so simple and powerful. DOM control through declarative coding, code over configuration, utilising the JS language itself as much as possible instead of relying on DSLs, and simple transforms that maintain the integrity of your code all the way to the production build.
If anything replaces React, it will either have to be quite similar, or be another entire paradigm shift (maybe the whole DOM/CSSOM thing will get replaced at some stage, who knows?)
Preact is a good alternative if you have a small app, but the reason it's so small is because it doesn't have a scheduler which might cause issues with larger apps.
I can’t speak to your own use case, but redux and saga absolutely save our bacon when working on a huge enterprise app.
I don’t even want to think about the crazy kinds of stuff we’d have to do without them.
Maybe someone will come up with a better abstraction, but I really think these are fairly good ones.
Redux has greatly improved its workflow for integrating with React. The main issue with Redux is that it pretends to be a generalized state-management engine, with all the overhead, while it's in a shotgun wedding with React.
Will the React team attempt another state-management solution, aka Flux, when Redux does 95% of the features and is slowly being absorbed into the React ecosystem anyway?
Gatsby/Next.js will likely merge into a single React static-site generator. Similar to the React Router and Reach Router merger.
React is like jQuery; it's going to be around forever. React is almost at the level of core web infrastructure, just by consensus alone.
Both the Redux core and Redux Toolkit _are_ completely UI-agnostic, and can be used with _any_ UI layer or even standalone.
Yes, most Redux usage is with React, and we do orient our docs around the assumption that you're probably using Redux and React together, but there's many people who are using Redux separately.
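To make the UI-agnostic point concrete, the core store contract is tiny and needs no React anywhere. This is a hand-rolled sketch of the same shape, not the actual Redux source:

```typescript
// Minimal Redux-style store: a reducer plus getState/dispatch/subscribe.
type Reducer<S, A> = (state: S, action: A) => S;

function createStore<S, A>(reducer: Reducer<S, A>, initial: S) {
  let state = initial;
  const listeners: (() => void)[] = [];
  return {
    getState: () => state,
    dispatch: (action: A) => {
      state = reducer(state, action);
      listeners.forEach((l) => l());
    },
    subscribe: (listener: () => void) => { listeners.push(listener); },
  };
}

// Usage in plain Node, with no UI layer in sight.
const store = createStore(
  (s: number, a: { type: "inc" | "dec" }) => (a.type === "inc" ? s + 1 : s - 1),
  0
);

store.subscribe(() => console.log("state:", store.getState()));
store.dispatch({ type: "inc" }); // state: 1
store.dispatch({ type: "inc" }); // state: 2
```

Any UI binding (React, Svelte, or none) is just a subscriber that re-renders from getState().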
The default assumption for any production React application is that it will need Redux at some point. It's much more efficient to start the React project with Redux than to try to bolt Redux on after the project has been underway for a while. Redux Toolkit does make things a bit easier.
It's like how React pretends that JSX is optional, when we all know JSX is a requirement in React projects.
Thanks for all the work on Redux and Redux Toolkit.
Yeah, as I've been redoing our docs, I've really tried to emphasize the "take some time to decide if you _really_ need Redux" aspect:
FWIW, we do take the "UI-agnostic" part seriously. We've got an upcoming new API for Redux Toolkit that we've dubbed "RTK Query", currently available as a preview release. We've got an example of it working with Svelte, and I know I saw someone else trying it out with Vue:
Java is definitely "boring technology", but hiring random Java developers will probably sink a company faster than doing the same for Go.
Is the industry biased against great engineers who have been working with Java for the past 20 years, even if they "deliver value" (which is pretty much impossible to determine externally)?
But I, personally, am biased against hiring people with only Java on their resume, because 90% of the time what I've encountered are people who haven't examined their technology choices, questioned the status quo, or tried to _improve_ things.
That's not a slight on Java, per se, but against anyone with only one language on their resume. It's just that if there is only one language on a resume in web dev land, it's almost always Java.
A large reason I avoid Java teams.
Java and .Net are more common in longer-lived, larger projects, or when performance matters.
And a GOOD Java dev will likely leave far more maintainable code than an AVERAGE Python dev.
Java programs are larger than those in other mainstream languages, just by dint of the verbosity of the language (and research backs this up: studies show that errors per LOC are roughly constant regardless of language).
Also, Java’s “verbosity” is pretty much only a constant factor, and not even necessarily in terms of LOC so much as line width. What research also shows is the benefit of static typing. Also, I am fairly sure there is some survivorship bias at work, where an ugly Java version of a complex domain survived because the language’s great observability sort of kicked it into a working spaghetti-code state, while other projects died a premature death.
That being said, modern Java is quite terse (var, streams, default methods, and now records and pattern matching, etc.).
I mean, in most cases it doesn't really matter what tech you choose, as 1. most products don't really need "massive scale" and 2. it's more important to be proficient in the tech you pick than for it to be the "best tech ever". I mean, Facebook still uses PHP, no?
Sounds like the bored engineers need to be allowed to go home early, or have some 20% projects.
Also, as John Gall teaches us with his tongue-in-cheek, yet nevertheless true, principles -- one so obvious most never give it any thought:
"New System, New Problems"
Can someone please just ask "what do we expect some of the new problems to be?" If you get blank stares and no good answers, then you know they haven't thought it through.
A name for this I’ve heard (and use) is the “pre-mortem”; you can get folks in the right headspace for what you are suggesting by asking them to imagine they are writing a post-mortem after the proposed initiative failed.
A good way of surfacing failure modes / potential quagmires.
I was thinking more along the lines of "imagine they are writing a post-mortem after the proposed initiative succeeds". Even if everything goes perfectly, what do we honestly expect to have at the end? A system without problems? Nonsense.
For example, you need a search feature. ElasticSearch is big in search, and there are lots of articles about people implementing ElasticSearch. Very infrequently do I meet people who just start out with the full-text search in their database, or maybe try something extremely simple, like Sphinx, even if it would solve their problem quicker, safer, and cheaper.
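For a sense of how far the "extremely simple" end of that spectrum can go, here is a sketch of the most naive possible search (illustrative only -- toy data, no ranking, no stemming). For a small corpus it is often good enough:

```typescript
// Naive full-text search: lowercase, tokenise, keep documents that
// contain every query term. A linear scan, no search engine required.
interface Doc { id: number; body: string; }

const tokenize = (text: string): string[] =>
  text.toLowerCase().split(/\W+/).filter(Boolean);

function search(docs: Doc[], query: string): Doc[] {
  const terms = tokenize(query);
  return docs.filter((doc) => {
    const words = new Set(tokenize(doc.body));
    return terms.every((t) => words.has(t));
  });
}

const docs: Doc[] = [
  { id: 1, body: "Choose boring technology for your stack" },
  { id: 2, body: "ElasticSearch powers large search clusters" },
];
const hits = search(docs, "boring stack");
// hits contains only the document with id 1
```

Only when this (or the database's built-in full-text search) measurably falls over is there a case for a dedicated search cluster.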
It's honestly starting to become a bigger and bigger issue. During the last few weeks I've talked to one customer who is thinking: Kubernetes. They don't have the funds, staff, or need for Kubernetes. What they do need to do here and now is to focus on smarter utilisation of their cloud provider's features and reduce their reliance on expensive VMs.
Another customer is going all in and wants to implement all the things. We're talking Kubernetes, Kafka, ElasticSearch, and more. They're currently running on just a small number of VMs. While their stack does need updating, maybe start smaller?
But they were a medium sized company. They were absolutely crushed under the weight of FAANG "best practices" and technology. They lost time rewriting perfectly fine code. They chased the microservice fad. And they lost market share.
It's in the interest of FAANG to maintain this idea of needing k8s, this massive CI pipeline, certain processes, etc. etc. Because it slows down competition. It halts startups. It slows progress. They want to throw as much overhead as possible at smaller companies.
The thing people need to realize is that FAANG are entrenched. They are as risk-averse as can be. They will happily write unit tests and maintain 100% test coverage and do all of this crap because they are more scared of losing market share than innovation. They are in full defense mode. Google is implementing all manner of protectionism to maintain their ad market, for example. Plus, they have the deep pockets to pull it off. Any company smaller than FAANG will sit there with their wheels spinning.
I'm not sure I agree that this is the underlying motivation. To get promoted at a FAANG you need to be demonstrating technical prowess. What says technical prowess like rewriting the app in a new framework that you open source to great acclaim? The business gets a feed of new talent, technical kudos from the community, and maybe even a genuine benefit that had some marginal gain at the scale a FAANG operates at.
As an example, I don't think Amazon is recommending SOA to sabotage other companies. I think they're recommending the way of work required at Amazon scale.
> The thing people need to realize is that FAANG are entrenched. They are as risk-averse as can be. They will happily write unit tests and maintain 100% test coverage and do all of this crap because they are more scared of losing market share than innovation.
I found the rate of feature delivery and innovation at Amazon to be way higher than the companies I've worked with since. 100% test coverage wasn't incentivized at all. Increasing revenue was.
Had this literally happen to me, but I was a low level manager and this was happening in another team. One thing I've taken from it, at the time the feeling around Sr Management was that we needed to allow this or we would lose the engineers. They allowed it, and after the conversion, those engineers left to start their own company. The remaining engineers had to deal with undocumented OCaml code and keep it running and were resentful.
I have seen this with React vs Vue, where an engineer, not liking React, just did his code in Vue. 'We have to let him do this or he'll leave' -- but he left of his own accord anyway.
Lesson, stick up for your codebase, and if engineers don't like it, let them leave or make them leave. The other engineers on your team will like it, and some of them will become your new Sr Engineers.
The first Sid came in and wanted to rewrite in C++.
Then the second Sid wanted to rewrite in Java.
The whole time the HTML is 25% space chars, served, sent, received, discarded, because the PHP guy likes deep indentation, and the DB is constantly burning like the sun because all the business logic is in stored procedures.
(That was the problem, not which abstraction the servers are written in, since all they do is pass data back and forth to the fiery inferno of the database.)
If there was a reliable general algorithm to make good software managers would hire lousy engineers then tell them to execute the algorithm. There isn't, they can't. The fallback is to be as picky as possible about who has influence on the software.
This is the obvious conclusion. I wonder when investors will wake up to the value of having engineering-savvy management.
To be fair, engineering-savviness and willingness to be management (especially the kind of management that are legible to investors) are substantially anticorrelated.
You can immediately spot a misaligned engineering culture when every team has its own tech stack and its own ops, as it means that none of the teams trusted each other for anything and had to resort to federation. On the flip side, you can see bad engineering cultures where decisions are made based on pure conformity with what was previously built, regardless of the problem being solved.
There's a happy middle-ground, a company like etsy with 200-400 engineers can happily afford a small team of 2 engineers trying out scala/mongo for something, it might work out, and nuking it from orbit won't cost that much in the grand scale of things.
Isn't this literally one of the more common arguments for microservice architecture?
It empowers teams to choose the right tool for the job, i.e. run their own tech stack.
At the end of the day there are not large enough differences between, say, Ruby and Python to justify different teams choosing Ruby or Python at the same (medium-sized) company. If you have this level of difference, you've effectively permanently ingrained Conway's law into your organization.
But more often than not, this feedback comes from engineers that a) have never been (as you say) bitten by complexity, or b) they aren't in the position to deal with all the negative consequences for those decisions.
There's probably some wisdom in letting your direct reports experience the kinds of failure in making these decisions, so they develop that sort of empathy, but the cost of that failure is sometimes just unacceptable for the business; especially in periods of cash runway constraints.
Because you may know how to hire the best or have the skill to know who the best are, but you don't know how to be the best, so how can you judge their work if by definition you're not as good as they are?
I’m writing a fairly large-scale app, right now.
It’s written in Swift (frontend), using IB (classic UIKit), and PHP/MySQL/Postgres (backend).
It does not use SwiftUI (shiny), or Rust (shiny, but a bit more dusty), or some form of NoSQL.
I picked standard UIKit, because I like SwiftUI, but the app has a fairly intricate and non-simple storyboard. I am not confident that SwiftUI is up to the task, and I know that IB can do it.
I’ve been writing in Swift since the day it was announced, so I already know it is up to the task, despite being a fairly new kid on the block.
I picked PHP, because I’m already quite decent with it, and, despite the hate, it is a perfectly good, performant, proven, and supported enterprise language. There’s a better than even chance the server will be swapped or rewritten in the future, so it’s a good idea to use my implementation as a PoC and architecture model, anyway. It will need to run the system during the nascent phase of the project, so it needs to be solid and secure. There’s no way I will take the risk of writing such a critical system, in a language I barely know (See this scar? I’ve done that -long story).
I picked MySQL and Postgres, because they are proven, robust databases, and can be installed on most low-cost hosting solutions (the app is for an NPO). I used PDO to interact with the databases, for security and reliability, anyway, so it’s entirely possible to add support for more databases, in the future.
Also, backend is not my specialty. What I did, was design a layered architecture that will allow future “shiny” engineers a path to replacing the engine. I wrote an abstraction layer into the server, allowing a pretty wholesale subsystem replacement. The app communicates with the server through a classic REST-like JSON API, so there’s another place for a swap. I’m not married to a system like GraphQL, with the need for dependencies; but the layered architecture allows use of GraphQL, anyway, if people really want it (it is cool and robust, but is difficult to use without some big dependencies).
Speaking of dependencies, I do everything in my power to eliminate them. I have been badly burned, in the past (not too distant, either -I had to do an emergency dependencyectomy, just a couple of weeks ago), by over reliance on dependencies. It means some extra work, on my part, but not crippling.
Speaking of boring, few things are more boring than documentation, testing and quality coding techniques. My testing code usually dwarfs my implementation code. I spend many boring hours, running tests, and examining results.
In my experience, I don’t think I’ve ever written a test that wasn’t necessary. They always expose anomalies. I just went through that, in the last week or so, as I was refactoring a legacy system for the app I’m writing. I actually encountered and fixed a couple of issues (including a really embarrassing and scary security bug) that have been in the backend for two years.
But that’s just me. WFM. YMMV.
> I’ve been writing in Swift since the day it was announced
May I ask how you consider these to be compatible?
It was a calculated risk. Since the company I was working for, at the time, was never going to use Swift, my "bread and butter" was at no risk, whatsoever. We were a C++ shop. I just started working with it on nights and weekends.
Being a C++ shop, however, we were quite familiar with Lattner and LLVM, so we were aware of his propensity for WIN. That gave me some confidence, going forward. Also, Apple didn't just announce a language. They also announced a full system API, as well as a product roadmap. The API showed they were serious about it. Those don't come in Cracker Jack boxes. They take some serious work and investment.
It was definitely a risk, but I'm a conservative, scarred veteran of many errors in judgment (can you say "OpenDoc"? I knew you could!). I wasn't about to run into a burning dumpster, half-assed, and I thought it was worth it. I knew it would take four or five years to mature, and it has. I tend to play the long game. I learned that, from all those years, working with the Japanese.
I'm rereading your previous comment multiple times but unfortunately still failing to see what you're referring to. The only explanation I can see is "we were quite familiar with Lattner and LLVM, so we were quite aware of his propensity for WIN. That gave me some confidence, going forward."
> They also announced a full system API, as well as a product roadmap.
I'm not quite sure what you mean by "a full system API", and does Apple ever announce a product roadmap? I would definitely be interested in this roadmap of which you speak. :-)
They have had a Swift roadmap forever. I think it's now kept on swift.org. I'll see if I can find it. I think it's a fairly sparse one. I really only cared about the evolution through ABI Stable. All I needed to hear, was that was a goal.
You are right. They tend to eschew roadmaps, but they did a "hard-sell" with Swift. They knew it would be difficult to build momentum with.
"Full System API" is the native frameworks; UIKit, AppKit, WatchKit, etc., as well as things like WebKit and MapKit.
When Swift was announced, they had APIs for most of that stuff. I was pleasantly surprised. I had a full app, working within a day or so (using beta Xcode, of course).
Ok, but that came later and wasn't present in 2014.
> they did a "hard-sell" with Swift
I agree with that. :-)
> "Full System API" is the native frameworks; UIKit, AppKit, WatchKit, etc., as well as things like WebKit and MapKit.
Swift did of course have bridging to Objective-C and the preexisting Objective-C API. I find it strange to equate language bridging with announcing a full system API — an API originally announced around the turn of the century (can't believe I'm using that phrase now). Cocoa-Java, which no longer exists, also had such bridging, as does PyObjC and MacRuby/RubyCocoa. Still, most of the system API to this day are written by Apple in Objective-C.
SwiftUI shows promise, but it is still quite nascent.
Pretty much every bit of code I write is "pure" Swift linkage. I like things like Swift's enums too much to give them up. They make APIs really fun.
The idea is that I am not actually aware of what the final product will look like, when I start, so I take a very careful approach. I spent 27 years, working for a "Waterfall-based" corporation, where the system had to be 100% designed up front, and the end result would "meet spec," while still sucking. I am not particularly thrilled with many agile approaches, either, as I see many of the same problems. It's really just shifting the tech debt around.
My approach actually results in my having to throw away a lot of really good, tested, code, but I still end up moving lightning fast, and coming out with good results. If you look at my portfolio, you will see a whole bunch of small, heavily-tested module projects. Many of these were things that I ripped out of other projects, but didn't want to throw away. Some of them are crazy useful, like the Persistent Prefs Utility, or the Generic Swift Toolbox utilities, which show up in most of my work. The fact that they are treated as independent projects, with heavy testing, means that I can reuse them with confidence.
The Spinner project was an example of a UI I designed to be a central aspect of an app, then decided not to use it, as it deviated too much from the user workflow I had in mind. It will be back, but not until it's the best approach. Eye candy is nice, but it still needs to be usable.
That modular approach is not new at all. I think I may have been doing it since the early nineties.
True, there is flexibility, but that flexibility is implemented as a single-point hinge, not a bendable continuum. It's very clear where the flexibility goes, and that point is well-tested. I just got done refactoring the server, where I added a more flexible way of allowing users to implement security postures, and I'm really, really glad that I did things the way that I did. It was a pretty big job, adding personal tokens (the new functionality), but a lot of the work was making sure that I stuck with the "philosophical" domains of each layer, and testing the living bejeezus out of the code.
And each point of flexibility has a very clear domain. For example, the ANDISIOL layer is where the SQL turns into functions. You can rip out everything below that, and replace it with whatever you like, as long as the same functional API is presented to BASALT. That's a fairly classic pattern.
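That "SQL turns into functions" seam can be sketched in general terms -- the names below are hypothetical illustrations, not the commenter's actual ANDISIOL/BASALT code:

```typescript
// A storage-agnostic data layer: everything above this interface
// depends only on functions, so the SQL backend behind it can be
// ripped out and replaced wholesale.
interface UserStore {
  getUser(id: string): { id: string; name: string } | undefined;
  saveUser(user: { id: string; name: string }): void;
}

// One implementation: an in-memory map standing in for the database.
// A SQL-backed class implementing the same interface would be a
// drop-in replacement.
class InMemoryUserStore implements UserStore {
  private rows = new Map<string, { id: string; name: string }>();
  getUser(id: string) { return this.rows.get(id); }
  saveUser(user: { id: string; name: string }) { this.rows.set(user.id, user); }
}

// Business logic above the seam never sees how storage works.
function greet(store: UserStore, id: string): string {
  const user = store.getUser(id);
  return user ? `Hello, ${user.name}` : "Hello, stranger";
}

const userStore = new InMemoryUserStore();
userStore.saveUser({ id: "u1", name: "Ada" });
// greet(userStore, "u1") === "Hello, Ada"
```

The testing burden then concentrates at the interface: as long as any replacement passes the same functional tests, the layers above it don't care what's underneath.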
I set it up, so that all the security stuff was sequestered into its own "silo." This allows things like using monitoring and logging, or a hardened host, without affecting the main datastore.
The deal is that I expect the tech to get swapped out, down the line, for something more modern, and it might not even use SQL. But security is quite important (especially with the target user base of the initial release). I went kind of overboard with some structural support for security. I am quite aware that I could get better performance from a single, related DB, but I wanted to start off with infrastructure-level support for security, with the anticipation that future tech will make up for any performance issues.
In my experience, security is often spackled on, after the fact, and I think that it's important to start from scratch, with security.
Also, note the ridiculous simplicity of the DB schemas. That was because I used...yecchhh object-oriented design as the Model, and the datastore actually represents a generic base class state. This allowed me to write a whole bunch of code, early on, and test it, then never have to look at it again. The implementation was done in layers, over a period of seven months. Each layer was treated as a standalone project, with its own lifecycle and testing. The idea was to develop a robust structure that I could consider reliable, then build on top of that.
It worked fairly well.
In many ways it’s an argument for MongoDB: if you’ve built a JS-based application, MongoDB minimizes any additional learning and the need to translate between the objects on the frontend and in the backend. Non-relational DBs are also easier to scale horizontally without requiring any application changes.
The OP is an argument against introducing new technology without a significant clear benefit. It’s basically saying that simply having a new tech can add significant complexity, unknown unknowns, and require much more maintenance and other costs.
So if your web application is currently running on MongoDB and it’s running well then this is an argument to stick to MongoDB instead of say migrating to postgresql going forward.
And in the end many developer-years are wasted just rewriting existing, working software in a new stack the team is not yet comfortable with, and the new product is much less stable than the old one.
Or more like it's the scarcity, by definition new tech still doesn't have many experts in it. So if you're one of the few that learns it you're all of a sudden pretty differentiated compared to your peers. Even if the new tech turns out to be pure garbage down the road it doesn't matter because in the meanwhile you can land the hippest jobs and win the admiration of your peers by being so far ahead of everyone else.
Just to correct the misinformation: Java has been steadily improving throughout the Oracle years, whether you like or hate Oracle -- the JVM is an absolute workhorse and exciting new features are underway.
It sounds like you're saying the fear is unwarranted. It very much is a real fear as long as people interviewing them actually count that against them and value new tech.
The explosion of startups also contributes to this. They often have to attract employees by offering the promise of autonomy. Most startup employees know they aren’t getting rich. So they milk the startup for maximum resume points and move on.
The VCs unload these bloated companies into inflated stock markets and the cycle continues.
DHH runs really small companies and pays his employees really well and doesn’t work them too hard. Employees have no real reason to leave. They see a direct link between the low overhead and their job security and work life balance. Aligning incentives fixes this.
I find it interesting that these technologies are considered "state of the art" (SOTA). What does SOTA mean in this context? I could see an argument for postgresql and rails/django being SOTA as I think many believe them to be fairly mature, secure, and feature complete.
That app could have been written in the mid-1990s using WebObjects in just a few months.
Technologies like MongoDB, React, GraphQL, Microservices etc exist because modern, real-world apps are generally far more advanced than just a glorified CRUD app. Consumers simply have higher expectations and more demands for what web apps should be able to do.
Far, far too often I think a significant source of complexity is enthusiastically added by engineers themselves assuming that the problem they're solving is sufficiently complex that boring technologies just aren't up to the requirements of their project.
... are they?
You need the wherewithal to stick with your stack and not get lured away. Maybe this is what boring means. Maybe boring is different to different engineers based on their background.
Sometimes you cannot select the stack cause there might be more senior engineers at a company and they have more sway. This is fine as long as the engineers picking the stack have picked a stack they have mastered and it is boring to them. As a regular engineer in their team I would hope to rely on their expertise and would hope to learn from them.
I remember at one job a rogue engineer picked a boring backend that would have been fine. But they fell behind because the other engineers knew their boring stack a lot better. Ultimately the rogue engineer had to switch to the other boring stack. The rogue engineer just was not fast enough to master it and implement the features required to keep pace with the demands of management. These demands were distributed tracing, centralized logging, security, and of course features. So while they were still learning the ropes, we were moving on to even more advanced security, logging, and feature requirements. They just couldn't keep up.
This is 100% how we did things for the last 5 years. We used the exact same basic tools & APIs, but iterated on how we integrated them together by way of our code.
We took what most people would call a "toy" stack, consisting of just C#/AspNetCore/SQLite (and recently Blazor), and turned it into something that can serve thousands of users in production across several B2B customers. We don't use any sort of containerization or cloud tech. Our software can be deployed by unzipping a file on the target environment and running a single command. You would be surprised at how much speed you can get out of SQLite when you have had 5 years to live with and experiment with its capabilities. On top of its extensive internal testing framework, I have several testaments to its stability sitting in our customers' environments right now. We've got OLTP SQLite databases that are getting close to 1TB in production without any signs of distress.
So, instead of focusing all of our energy on shiny things, we focused on building a 100% integrated vertical with (mostly) boring tech so that we can fully manage the lifecycle of our software product with the product itself. We have a management web interface built out for everything we need to do for building & operating the solution. We are very close to being able to partner with other organizations who can run the implementations on our behalf because the software is so easy to manage now. This is the type of real-world horizontal scalability the CEOs are after.
It's sad that most places - and by definition the largest places also - end up being a meat grinder and people just hop companies and teams within companies every year or two. By the time you start understanding the domain you move on. It takes years to internalize a problem and understand it deeply.
It also applies to managing your life, personally. Know what things you do, what your personal goals are, and don't let yourself get distracted by the latest and greatest social media trends or stuff your friends are doing.
Focus. Mature. Achieve.
At my current job we call those snowflakes. "no snowflakes" is the motto of one of our senior architects.
Truth is you should choose technology given consideration of its pros and cons, not on the basis of some slogan.
There are very good reasons to use mature technologies and very good reasons to use current technologies and very good reasons to use absolute cutting edge technologies.
When someone comes at your approach wielding a slogan, be skeptical.
The trap most engineers fall into is that they only think about scaling. It's obviously an interesting problem, but until a company becomes successful it's not really something you should worry about, and for the most part things that (allegedly) scale well are more expensive, slower, and harder to maintain.
But there are so many ways to innovate that aren't just about scaling: can you make your application faster? What are the things you never even considered, because you have subconsciously internalised them as physical limits of reality when actually they're not?
One of the examples I'm currently exploring is the idea of moving a large amount of data into memory. I remember decades back when Google announced that its search indexes were now fully in memory (they proudly announced that any given search query might run through a thousand computers). I cannot imagine how many possibilities it enabled for their product that were not possible before. Experimentation with new technology, in pursuit of completely new ways of exploring your problem space, should always be encouraged -- and if boring technology cannot do it, then that's when you give up on it.
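The classic data structure behind fully in-memory search is an inverted index: a map from term to the set of documents containing it, so a query is just hash-map reads plus a set intersection. A toy sketch (illustrative data, no ranking or compression -- real systems like Google's are vastly more elaborate):

```typescript
// A tiny in-memory inverted index: build it once, then each query
// costs one hash-map read per term plus a set intersection.
function buildIndex(docs: Map<number, string>): Map<string, Set<number>> {
  const index = new Map<string, Set<number>>();
  docs.forEach((body, id) => {
    body.toLowerCase().split(/\W+/).filter(Boolean).forEach((term) => {
      if (!index.has(term)) index.set(term, new Set<number>());
      index.get(term)!.add(id);
    });
  });
  return index;
}

function lookup(index: Map<string, Set<number>>, query: string): number[] {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  let result: Set<number> | undefined = undefined;
  terms.forEach((term) => {
    const postings = index.get(term) ?? new Set<number>();
    const next = new Set<number>();
    postings.forEach((id) => {
      const current = result; // intersect with the hits so far, if any
      if (current === undefined || current.has(id)) next.add(id);
    });
    result = next;
  });
  const out: number[] = [];
  (result ?? new Set<number>()).forEach((id) => out.push(id));
  return out;
}

const corpus = new Map<number, string>([
  [1, "in memory indexes are fast"],
  [2, "disk based indexes are cheaper"],
]);
const index = buildIndex(corpus);
// lookup(index, "memory indexes") narrows down to just document 1
```

Once the whole index fits in RAM, every lookup is effectively free compared to disk seeks, which is what opens up product possibilities that were previously off the table.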
I've spoken with 70+ different devs working on 70+ different projects of all sizes on my Running in Production podcast, and the "choose boring tech" phrase came up a whole bunch of times, especially the idea of using innovation tokens. If it helps folks build and ship their app in a quick and stable way, that seems like a big win to me.
I do agree. Although the point of the article is to _lean_ more on "boring technology" side of, and paying extra effort when considering adopting newest flashy things.
Having read the article 3-4 times over the last few years, I don't think they say "don't use new things", just "not too many new things at the same time".
Let's say you've given it the proper consideration, and it's clear beyond the smell of subjectivity that it needs to be flashy stuff, go for it. The point is that this is often not the case, and the argument is to go with boring then.
> The right technologies for a project are the ones you deem to be right, given appropriate consideration of many factors
It's _usually_ difficult to take into consideration all the factors of a new flashy thing. The unknown unknowns. Thus _maybe_ choosing a trendy set of technologies might indicate that the exercise of balance and consideration you were commenting and that I do agree with 100%, has not been as honest as possible.
For what it’s worth, it’s a great read that I would recommend to anyone in the industry.
They aren’t forcing technologies on you, but driving home the true cost of long term maintenance and investing in the “core stack” that you already have instead of adding N technologies to solve N business problems. This is good stuff.
Maybe read it before commenting next time.
I wish HN had a feature where it could detect that you clicked the link and disallowed commenting before that. At the very least you’d have to click the link, even if you just immediately click back without reading, and you’d know what you were doing was circumventing the spirit of the place.
That way the reader can decide for themselves. I'm less inclined to object to hasty responses to discussion points than I am to top level comments, but that's just personal choice.
It’s an order from the top. A commandment. A clear requirement. A statement of belief for the masses to follow.
It’s an unequivocal statement, a perfectly confident directive telling you precisely what to do without the slightest equivocation.
It’s not “read this powerful headline but then please read the in depth article only to find we don’t actually mean what is said in the headline, we mean something more nuanced and subtle why did you take our headline seriously?”
Amazing bit of self-justification you have there.
If you want to argue against what the author wrote, by all means. If you intend to argue against a straw man you have concocted out of a three word title, then you will seem a fool.