I've always found it funny that companies go to Herculean lengths to hire the best programmers, and are incredibly fearful and paranoid about making a "bad hire", and yet once hired they don't spend a second making sure engineers aren't completely running the software product off the rails and killing the company internally. The author mentions trying to rewrite Etsy's backend in Scala and MongoDB. That probably cost the company X million dollars. Etsy could still be recovering from that.
The industry constantly mints senior engineers who have been bitten by complexity, but doesn't want to hire them, or listen to them. More often than not senior engineers pretend to be excited about complecting the tech stack, because hey, it pads their resume with the latest buzzwords and they've given up trying to fight against it anyway.
The last line of defense against a rogue engineering team is managers who have studied this stuff. How many engineering managers can spot the common situation "the engineers are bored and they're rewriting perfectly-good codebases in Common Lisp and OCaml for funsies"? And how many know what to do about it when they see it?
Anyway, this is a cool website, and it'll be useful to point people to it in the future, so thanks for that.
In the case of a lot of tech companies, the entire market is broken and leads to weird incentives rarely seen in any other industry: companies that aren’t profitable, don’t have a real product people pay for, don’t have a clear, plausible path to profitability and yet somehow stay in business because investors are happy to burn money.
This completely reverses the typical market dynamics. The company is more focused on catering to investors' wet dreams than actually solving business problems, and "engineering playgrounds" with 3 different programming languages, microservices, AI & blockchain, netting them a 5-figure monthly AWS bill, seem to please investors more than a rock-solid, "boring" backend. Maybe the complex architecture succeeds at obfuscating the fact that there's no business model, nor plans for one?
I often wonder why investors love paying 5-figure AWS bills; even worse, why they consider lower bills, or not using "the cloud", a sign of CTO incompetence, even if the company could run on $500 hosting instead. It must be because it's easy to do DD on: AWS, check; TypeScript, check (I've heard from friends that plain JS is now a reason to fail VC due diligence); React, check; microservices, check; etc.
Not using the cloud has many hidden risks. You need multiple physical locations for reliability, those now need to be staffed.
If you instead use cheaper 3rd-party hosting companies, you may face hurdles to growth and future migration costs, since those companies do not have many of the required certifications.
From an investor POV, paying a little extra now is often worth it to reduce risk and remove barriers to explosive growth.
You are right, but that does not really explain why you would spend so much money; usually it's masking incompetence. Like not having database indexes because "the cloud scales", which I see all the time. And it does scale, running up RDS or DynamoDB bills to extreme amounts. And investor DD sees this as a positive, as if "spending as much as possible" somehow means things are going in the right direction. It can be, but I think spending it like this is a weird trend.
I can see the upside for many things, but you can get that efficiently, without it costing 5 figures.
Is that a real problem with a reputable bare-metal hosting provider? I agree that a fly-by-night VPS provider would be a major risk, but OVH for example is a major host in Europe (that successfully hosted websites long before the "cloud" was a thing). They have all the major certifications and have been in the business for two decades now and their uptime is solid.
The only risk I can see is the lack of managed services, but Postgres isn't that hard to manage yourself, and other value-add AWS services can still be used (the cost savings of your baseline load being on bare-metal would most likely still offset the bandwidth costs of moving data between AWS managed services and your bare-metal host).
Sure, but you're cherry-picking Postgres, since it has a longer track record of being run on-prem. Try replicating an S3-type service for me, or something like SNS, even without the fancy stuff.
You can still use S3 even without being on AWS, though for more reasonable (< 1TB) amounts of data, old school local storage served by Nginx does the trick.
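As a sketch of the "old school" option: serving a local directory with Nginx really is only a few lines of config. The domain, paths, and cache settings below are placeholders, not a production recommendation:

```nginx
server {
    listen 80;
    server_name files.example.com;   # placeholder domain

    location /files/ {
        alias /var/data/files/;      # local storage directory
        sendfile on;                 # let the kernel stream files
        expires 7d;                  # basic client-side caching
    }
}
```

You do lose S3's durability story this way, so you'd still want backups or replication for anything you can't regenerate.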
For SNS, what does it offer that RabbitMQ doesn't? I've always found RMQ to be rock-solid and wouldn't use SNS anyway.
I wonder if this isn't so much about the tech itself, but rather the expectation that the best realistic outcome for a startup is an acqui-hire. Following the current "industry best practice", even if it's more expensive and might not even make sense for your product requirements, is just insurance that your acquisition will pass a future tech compliance audit: a buyer will want to be able to support your product and cannibalize it if necessary, and that's harder to do if it's written in a lesser-known language/framework.
This also goes some way to explain the leetcode, "hiring only the best from Stanford/MIT" hiring nonsense for pretty mediocre cookie-cutter products that could be perfectly well executed by a couple of mid-range developers: your developers' resumes are as much a part of any potential acqui-hire package as the codebase and user data.
There's definitely a stage for startups where the AWS bill doesn't matter because sustaining user growth is more important, but there's also a stage where the company, usually B2B, is trying to improve margins because it helps a lot with valuation.
1. They are already an investor in Amazon. They have an interest, both in the actual spending and the trend setting of current and future companies to use AWS.
2. It is actually increasingly hard to find an industry-wide reputable hosting provider (cloud, VPS, metal, or not) that many investors could agree upon. Just like you said, this makes DD harder.
3. Amazon actually offers heavy discounts and even lots of free credit to startups backed by accredited VCs, meaning the difference in the first few years is actually tiny. And in your example, if it can be run on a $500 budget elsewhere, you can bet it will literally be free on AWS for those startups.
Job hopping programmers are compensated for how rare their skill is and not how much value they add to the business. It’s another flaw in capitalism.
Give employees a direct and substantial cut of the profits, incentivize them to stay 5-10 years, and these behaviors should disappear.
The loss of job security and frequent job hopping have created more incentive to optimize for the next job switch rather than for adding value.
The explosion of startups also contributes to this. They often have to attract employees by offering the promise of new tech. New tech propagates these days the same way Bitcoin prices rise. Our industry is in a financial bubble, which has created a complexity bubble. The financial bubble collapsing will pop the complexity bubble, leading to a huge surge in boring, low-overhead, stable tech.
Not to mention how ageism plays into this. People will hire someone who spent the past 5 years switching between 5 different JavaScript frameworks over someone who spent five years writing Java at some boring company.
Most startup employees know they aren’t getting rich. They go on to milk the startup for maximum resume points and move on.
The VCs unload these bloated companies into inflated stock markets and the cycle continues. Some small progress at the cost of tens of billions and lots of running in the same place.
Our industry is like some eccentric Howard Hughes drowning in so much money that all we do is come up with ever more esoteric contraptions to escape from reality.
DHH starts really small companies and pays his employees really well and doesn’t work them too hard. Employees have no real reason to leave. They see a direct link between the low overhead and their job security and work life balance. Since the team is smaller the work is less alienating / hyper specialized leading to a deeper connection with the company and its customers. Aligning incentives fixes a lot of problems.
Then company B comes along and sells the same widget for 3 silver.
Company A will either learn to be more efficient and sell at 2-3 silver, or go out of business.
Perfect competition only happens in textbooks. In the real world, monopolies, oligopolies, regulatory capture, control over social connections and capital, markets with asymmetric information (VCs vs. the public, employees vs. companies), and management consecrating itself into a class with its own interests all mean that companies can go on for 10-15 years before they are punished.
By the time the market wakes up you’ve switched 3 times and after 10 times you’re retired in the suburbs with a nice BMW. Who cares then.
This is quite a simplistic view of capitalism. Consider the Moloch effect[1] where rational choices in environments with hard coordination problems can lead to degenerating incentives.
Capitalism is not the same as the free market; capitalism is rather a system where your economic power is mainly determined by the amount of capital you have (money, investments, means of production...) rather than class, bloodline, titles, popularity, or legislative regulations[2].
Capitalism is linked to the free market in that they work well together: free markets want entrepreneurs to compete for efficient capital gains, and people with capital like how their economic power is bounded only by their capital (with the potential for exponential growth). Apart from that, either can exist without the other.
This is how supporters would like them to work together, but there can be a lot of bugs and traps on the way.
I am not sure about the downvotes (no reason for them in my opinion), but the casual jab at capitalism looks gratuitous and does not help make the argument stronger.
It's not gratuitous if it is central to the argument. The state of affairs they describe is dependent on a virtualized capitalist economy being in place.
I wouldn't say that that's a flaw in capitalism, it's just that one (skill rarity) is observable ex-ante, the other (value added) is only observable ex-post.
I don't buy this theory, because I don't think there are many investors that actually care about the tech stack. If they are, I'd say they're bad investors.
You could create a successful company on any tech stack. It's really like trying to invest in a company based on the way they decorate their HQ. Does it matter at all? Maybe only if it's ridiculously extravagant compared to their revenue.
I have had it be an issue with investors during the due diligence phase. We were very far along in the fund raising process when we got to the technical due diligence. They brought in an expert advisor to discuss our tech stack and he seemed very concerned that we were building on a php/mysql stack.
They pretty much ghosted us soon after that meeting. Though they did not specify that as the reason, it seems most likely that their advisor told them we don't know what we are doing technically.
I've worked in over twenty languages in my career, and you can't pay me to work with PHP anymore, even though there was a brief period where I thought it was the best thing ever as a new programmer. But that's just an artist being picky about his tools.
What matters with a startup is executing, and that means using tools that let you execute well. If you're most familiar with PHP and MySQL, then that's what you should use.
As someone biased against your tech stack, I fully support your decision and think the expert and the investor have no idea what really matters, hence I stand by that they're a bad investor.
That... is honestly shocking. Yes, the programmer in me cringes at the thought of going anywhere near PHP/MySQL, but the business side of me looks at this stack as pragmatic, well documented, well understood, and easy to hire for.
On the other side I'm evaluating CL for my next endeavor since I personally find it to be my most productive language, but realistically I'll settle for Clojure and even then I'm worried if that is a bridge too far when it comes to the whole funding/due diligence issues.
I'd love to know what their expert advisor considers the right decision.
He was an ex-Googler, so I'm sure only some combination of C, Go, and Java would have sufficed.
PHP 7.2+ is great for a lot of SaaS products and is super fast now. And MySQL 8 is rock solid and battle-tested in a lot of production systems.
Starting over, I would consider PostgreSQL because it has some nifty features. But now that MySQL has added JSON support, I am less inclined, and I see a lot of complaints about performance and scaling that I don't think are as much of an issue with MySQL. There isn't a whole lot more out there for relational databases (that doesn't cost a ton). And not using a relational database for most SaaS systems is just crazy talk.
> How many engineering managers can spot the common situation "the engineers are bored and they're rewriting perfectly-good codebases in Common Lisp and OCaml for funsies"? And how many know what to do about it when they see it?
I think there is a tension between this kind of actively guiding anti-complexity management and hiring "top talent".
The very best developers are capable, and avoid complexity. The next best developers are capable, and love complexity. The worst developers are not capable.
There aren't enough of the very best developers for a company to plan around hiring only those. So, if you want to hire developers who are at least capable, you have to give them some leeway to make things overcomplicated. Yes, that incurs a real cost to the business. You can think of that as just being another part of their compensation package.
Also, it is almost never for "funsies"; it is instead usually the "wrong solution" to a very real problem. The current system has bad performance, or doesn't support some new use case that is a major company initiative, or whatever. But instead of fixing/augmenting the existing system, the (by now probably multiple) accumulated pain points are used to justify an overly complex change. In fact, the change probably becomes known internally as some silver bullet that will fix all woes, further reinforcing the drive to do it.
Even the best developers get ignored if they try to justify pure-tech-debt fixes. So they learn to include fixing tech debt as part of fixing a problem that has some _direct_ business relevance if addressed. This gets observed and taught to all tiers of developers, further obscuring the rationale for architectural changes from more senior management.
Yeah the key thing here is it's just so easy to spin anything as the solution to some problem the company has, and there's always problems around.
"for funsies" probably isn't that far off. Because the process is more like someone gets interested in something at some point. Then at that point +~1-6 months someone raises a problem and some senior dev gets stuck on the idea that the awesome thing they read about can solve it. Then before you know it whatever tool they want to use has more bells and whistles than the average mars lander and does everything short of curing cancer.
There's rarely any good correlation between the problem and the solution. That gap can just be bridged by buzzwords. The true correlation is usually between the solution and whatever the most senior dev on the team thinks is shiniest at the moment.
"The very best developers are capable, and avoid complexity. The next best developers are capable, and love complexity. The worst developers are not capable."
It's not always in one's control to avoid complexity. The simplest solution to a problem in a lot of cases may be 2-3x the lift (simplicity tends to require more work, complexity is easy) and thus blocked by the business. A holistically simpler solution may be blocked politically because a certain team doesn't want to own some functionality etc ...
I would say the best developers can see complexity coming and have a healthy fear of it, the medium devs don't mind complexity and the worst devs can't get anything done without increasing complexity.
> The simplest solution to a problem in a lot of cases may be 2-3x
This is such an important point. For whatever reason it has become ingrained in people's heads that the simplest solution must by reductionist logic be the easiest one. And therefore the easiest solution is the simplest one and it is good to be lazy and just introduce complexity everywhere.
The "right" solution is usually the one that models the underlying complexity of whatever phenomenon the system is addressing. If you make it simpler, inevitably you'll grow the system to address those use cases. Sometimes this is good; you don't want to err on the side of an overly complex system. By this reasoning, I usually find evolvability and maintainability to be stronger guiding principles.
This is where the whole "MVP" concept got out of hand. MVP didn't mean an overly simplified prototype. It meant: solve one, narrower, problem well. This also pairs well with PG's "do things that don't scale" advice. You are taking on what others might think of as additional complexity, to solve a very targeted problem more effectively than others, because using either deeper analysis or first principles or whatever, you've actually better modeled the underlying complexity. Then you try to scale given those insights.
> There aren't enough of the very best developers for a company to plan around hiring only those.
I've seen no evidence that companies are even trying to hire developers who "avoid complexity". If anything, the interview processes are designed to select for engineers who bathe in complexity. There are so many interviews which consist of "How would you rewrite from scratch this thing that already exists?"
I don't know what your company is, but it may not have a reputation for paying well. I've worked both at well-paying FAANG companies and lots of BasicAverageTech companies, and the candidate flow is night and day different. There is no shortage of people that "know the basics." In fact, there is no shortage of really strong candidates. They are out there, looking around to job hop like everyone else. It's just that they are probably just not applying at your company. Not that you can personally do anything about your company's compensation, but that might explain it.
My current employers hire roughly 1 in 100 applicants. The vast majority fall off before even getting to a coding test. Of the last 20, most are not fluent with conditional logic and iteration, or cannot use a dictionary in an algorithm. In the most recent interview, an "experienced" React developer could not set properties on a JS object.
When an interview process produces the output 'doesn't know how to do basic things' on a "vast majority" of experienced people who've been doing said things daily for years, the most obvious conclusion is that the interview process is flawed.
I think it's because they have had some success with making WordPress do what they want (after a fashion), or copying and pasting some JavaScript. That makes them a programmer, without knowing what they don't know, even 10 years later.
I was genuinely curious what you classed as basic programming.
I would definitely consider pointers pretty fundamental if you're a C++ dev.
It is pretty hard to believe that 50% of candidates don't know what a pointer is. I've barely touched C/C++ and still know what pointers are and how they work.
> I've barely touched C/C++ and still know what pointers are and how they work.
Congratulations you’ve successfully triggered the pedantic interviewer. You will now face six questions on pointers in C++ they just looked up each more trivia based than the last.
There is one problem with this: It will get on the nerves of the developers that love simplicity.
I'm one of those, and it really gets on my nerves when systems are overengineered, or use tech that has more drawbacks than benefits for our specific case.
> So, if you want to hire developers who are at least capable, you have to give them some leeway to make things overcomplicated
This is always a challenge in general - how do people learn the lessons of complexity without creating it and then seeing the effects? I wish there was a better word for it as every person who reads "complexity" says well "duh of course I don't want that" before they then go and manufacture another bucket full of it. Complexity masquerades as simplicity - in the first instance it nearly always solves the immediate problem better than anything else. Recognising the latent complexity of choices is one of the hardest but most important skills to learn.
I had a small business depending on the Etsy API during the time they transitioned some storage to Mongo. The immediate effect for us was a downturn in functionality and reliability with no apparent advantages. In the midst of other serious concerns about their direction, we questioned why Etsy was doing this on the API mailing list and were told, basically, that we didn't know what we were talking about and it wasn't our business. Fair enough, sort of.
It was a hot time for NoSQL and document DBs. Having investigated using Mongo myself to little avail, I asked why they didn't just use Postgres. If I recall correctly, a couple years later they published a Mongo at Etsy postmortem which concluded they should have just stayed with Postgres.
That repo is interesting. A quick Ctrl+F seems to indicate that pretty much every instance of "MongoDB" is "moving from Mongo to Postgres or DynamoDB" (there is one single entry of moving to Mongo from MySQL). Almost as if Mongo is just not a good database (or people are too eager to use it for things it does not do well).
Making efficient use of MongoDB is very difficult, but if you build your app and set your expectations correctly, you can get something very performant. For example, pre-4.x, listing huge collections was unexpectedly, extremely slow.
Yeah Mongo is 10 years old or so at this point. This article was written in 2015 about decisions made years earlier. It's now reached maturity and stability. It's now "boring tech".
It may be more established but it suffers from a similar problem: some people continue choosing it for the wrong reasons, such as "our backend is in JS, most of our devs only know JS, and it's easy to just dump objects in there", which is an abhorrent reason for choosing it, and will end up biting you.
IMO this would be a case where if you're dealing with a relational domain and the engineers really don't know SQL you should either (a) rethink your hiring policy or (b) spend one of your innovation tokens in having everyone learn SQL.
(I have to add the inevitable disclaimer that I actually love JS and do not want my words to be misinterpreted as a cheap dig at it)
Um, even if it's mature technology, that doesn't negate the costs of operating two different databases, which is a large part of what the original presentation is about.
Lots of good reasons to use NoSQL. All pretty much hang off what sort of data access pattern you need.
If you have an application that retrieves and works on a top-level entity, then NoSQL fits very nicely. When you have a dataset that is shared and aggregate information is needed, not so much; you are likely better off considering a SQL database of some sort.
> When you have a dataset that is shared and aggregate information is needed, not so much; you are likely better off considering a SQL database of some sort.
There are best practices for this. Simply create a microservice per table, and then create a microservice that acts as a client to the other services and aggregates or joins the data from those services.
No, I'm not kidding. This is literally what people do and recommend.
Oh, pish and tosh! That's too much engineering! Instead, handwave about "eventual consistency" and save millions in infrastructure costs! Totally worth it for the benefits of being truly abstracted from your data storage layer. Because, you know, people change their database back ends more often than they change their sheets.
It's industry best practice -- not a joke. What's more, the recommendation is to have a separate database per service -- one table, one service, one database.
Furthermore, when a join is desired, the best practice is to implement a service that not only joins the data but maintains it in a table/materialized view of its own, along with a message broker such as Kafka. The services responsible for the tables to be joined (customers and orders, for instance) put events on this message bus, which the joining service subscribes to in order to know when to update its view. See: https://microservices.io/patterns/data/cqrs.html
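To make the pattern concrete, here's a minimal in-memory sketch (my own illustration, not from the linked page): a "joining" service keeps its own denormalized order view up to date by handling events from hypothetical customer and order services. A real implementation would consume these events from Kafka rather than via direct method calls.

```typescript
// Hypothetical event shapes emitted by the customer and order services.
type CustomerCreated = { kind: "customer"; id: string; name: string };
type OrderPlaced = { kind: "order"; id: string; customerId: string; total: number };
type DomainEvent = CustomerCreated | OrderPlaced;

// The denormalized "join" row this service materializes.
interface OrderView {
  orderId: string;
  customerName: string;
  total: number;
}

class OrderViewService {
  private customers = new Map<string, string>(); // customerId -> name
  private views = new Map<string, OrderView>();  // orderId -> joined row

  // Called for every event on the bus; keeps the materialized view current.
  handle(event: DomainEvent): void {
    if (event.kind === "customer") {
      this.customers.set(event.id, event.name);
    } else {
      this.views.set(event.id, {
        orderId: event.id,
        customerName: this.customers.get(event.customerId) ?? "unknown",
        total: event.total,
      });
    }
  }

  get(orderId: string): OrderView | undefined {
    return this.views.get(orderId);
  }
}

const svc = new OrderViewService();
svc.handle({ kind: "customer", id: "c1", name: "Ada" });
svc.handle({ kind: "order", id: "o1", customerId: "c1", total: 42 });
```

Note what the pattern buys and costs: reading the joined view is a single lookup, but the view is only eventually consistent with the source services, and every new query shape means another service like this one.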
How about “leaves the industry rather than have to use terrible things at dumb companies”, thus giving a survival bias that selects for shiny. I know I feel that way about a lot of stuff now.
That's a fair point, and I can totally relate to it. There's a toxic dynamic of startups always trying to one-up each other to attract more attention as a great place to work, and in general the tech stack is just another tool in this fight. As a consequence, workers are pushed into the same mindset, where fixing a problem with 10 microservices and a dozen AWS services is what's expected, and if you prove you can solve the same problem with a single machine running a cron job with no external dependencies, you're the weird one.
That's a really common pattern in gamedev too. Median career is something like ~3 years so those that stick around are okay with the crunch and other shitty parts of the industry.
Combine that with gatekeeping ("I did my time, you have to do yours") and not much has changed there over the years.
You don't even have to go that drastic. I was also tired of the "new technology treadmill" in software development, so I just changed roles in the same industry. Did a little product management for a while then settled on project management--for those same software companies. The pay is much worse but at least I'm not spending my time re-writing working software into "non-working software, but hey, it's in Scala."
Yup, I felt this way pretty quickly after starting my career. I'm now trying to get a PhD instead; I love solving problems, so this seems like a good idea.
I'm sure there are some great companies out there, but they seem to be rare. I just don't see work in this industry as sustainable, and I can't see myself working as a Software Engineer when I'm 30+ or have a family.
It's kind of sad. I am the same. I want to do something different, but I don't know much else and the pay is reasonable (reasonable, not great; I am in Spain).
My main recipe for crisis management when a company is about to go off the rails or if it has already happened: cut down on complexity. 9 out of 10 times that's enough to get things moving again. Highly frustrating that we keep making these mistakes over and over again without ever learning from them. Complexity has a price, you should only spend it if you really need it.
It’s hard enough managing code complexity in software projects; introducing tooling into the stack that exacerbates the issues is definitely a large own goal.
One of the main culprits is often virtualization. Used without a good understanding of what goes on under the hood it is super easy to create a situation that heavily overloads some data path to storage without being aware of it because it's all so nicely abstracted away. Fifty virtual machines trying to access the same storage layer is a pretty good recipe for a disaster.
You are clearly not a JavaScript developer. It really feels like everybody has just given up and thrown in the towel. There are no good developers, let some giant monster framework make all your decisions, and frequently chase shiny shit. Of course this means starting over, from scratch, in small sections of the product every couple of years.
> The industry constantly mints senior engineers who have been bitten by complexity, but doesn't want to hire them, or listen to them.
Again, in JavaScript land that is not what it sounds like. The industry has minted a bazillion expert beginners who have never moved out of their parents' basement and had to live on their own (in a technology sense). They are fanatically averse to anything invented here, fearing original code more than job termination, and now they have somehow risen to make architectural decisions about the future health of your software and business.
I guess they failed to understand that choosing boring software is different than depending on a package manager to write all your software.
There was no hate intended in my comment, but I can understand how you came to that conclusion. My experience posting on Reddit's r/javascript has taught me that some developers are insecure (extreme, conspiracy-theory-level insecure) about working without their favorite framework.
I have been writing it full time for about 13 years. I love writing in this language, and TypeScript even more. I am just frustrated by what appears to be some combination of insecurity, false expertise, and a vehement lack of passion in the workforce. If I want to be happy I should move on to a different technology stack, but I really enjoy making products in this language.
It's funny; I have about 13 years of experience, have been senior in enterprise, start-ups, and everything in between, and have basically the exact same view of front-end dev as your original post.
Except I fucking hate using Typescript, and totally wasn't expecting to see you mention you like it, given all the other stuff.
IME all the same people that overengineer everything with god awful dependencies are the same ones pushing super hard for typescript on every project I'm on. When they get their way (as always, since everything is decided democratically and everyone is dragged down to the level of the worst dev on the team), they write the worst typescript ever. On my last project, one of the people championing typescript defined some constant strings containing css breakpoints as the type 'string | int'. Rather than getting knocked back as one of the dumbest lines of code in the history of front-end, this somehow generated 3 pages of discussion in code review then got left in. I'd give the person a pass, assuming they'd never used a typed language before, except (a) they were senior and (b) they're the one that wanted types. These "seniors" lack even the most basic understanding of the shit they're using, but feel the need to impose their opinions about libraries, tooling and languages constantly.
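For reference, the idiomatic way to type constants like those breakpoints is a literal union rather than `string | int` (which isn't even a TypeScript type). The names and values below are hypothetical stand-ins for the ones in the anecdote:

```typescript
// Breakpoint constants; `as const` preserves the literal string types.
const BREAKPOINTS = {
  mobile: "480px",
  tablet: "768px",
  desktop: "1200px",
} as const;

// Breakpoint is the union type "480px" | "768px" | "1200px".
type Breakpoint = (typeof BREAKPOINTS)[keyof typeof BREAKPOINTS];

function mediaQuery(width: Breakpoint): string {
  return `@media (min-width: ${width})`;
}

const q = mediaQuery(BREAKPOINTS.tablet);
// mediaQuery("800px") would be rejected at compile time.
```

That's the whole point of bothering with TypeScript here: the compiler catches a stray breakpoint value instead of a code reviewer.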
I don't feel like I'd mind using TS at all on my personal projects, but on work projects with average devs it just adds another entire layer of complexity that they spend hours and days and weeks and months wrangling with instead of writing any code that might be remotely useful, by say, maybe implementing a business requirement, or doing anything that makes the company money instead of pissing away millions in salaries playing npm lego.
Plus, although I'm not familiar with any of it since I'm never the one pushing for TS, and so never the one setting it up, I've seen people spend absolutely ludicrous amounts of time tinkering with webpack and fussing over TS integration with 3rd party libraries and whatnot.
I'm surprised you haven't run into more of these people that just seem to use TS as a complexity multiplier for every bad engineering decision they make. Like spending 6 months setting up Gatsby or Next isn't costly enough for them, so they decide to tack on 3 months of TS integration and 'upskilling' for 3000 combo points, when they already (more or less, usually less) know how to write Javascript.
> some constant strings containing css breakpoints as the type 'string | int'
yeah this really is the fundamental problem with TypeScript advocates. They seem to come in two different breeds: A) people that would rather be doing Haskell but are forced to use JavaScript because that's where the jobs are and see nothing at all wrong with type inference and error messages that are 10 lines long and completely indecipherable by actual humans, and B) people that have never used a language that isn't JavaScript and have no clue how to structure type interfaces within an actual system (you end up with partial/omit and optional fields every-fucking-where and no rhyme or reason behind anything).
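For what it's worth, the 'string | int' example above wouldn't even compile ('int' isn't a TypeScript type; 'number' is), and 'string | number' would throw away exactly the information a type could carry. A hedged sketch of what the idiomatic version might look like, with all names invented for illustration:

```typescript
// Breakpoint constants as a const object plus a literal union type.
// The values stay checkable instead of being "any string or number".
const breakpoints = {
  mobile: "480px",
  tablet: "768px",
  desktop: "1024px",
} as const;

// Resolves to the literal union: "480px" | "768px" | "1024px"
type Breakpoint = (typeof breakpoints)[keyof typeof breakpoints];

function mediaQuery(bp: Breakpoint): string {
  return `@media (min-width: ${bp})`;
}

// mediaQuery("12px") would now fail at compile time, not in code review
console.log(mediaQuery(breakpoints.tablet)); // @media (min-width: 768px)
```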
For some stupid reason this entire industry seems to suddenly believe that a 40+ year old debate of static vs. dynamic typing was settled because Microsoft came out with TypeScript.
This really isn't true, and TypeScript is fundamentally different from most other typed languages (except perhaps for typed Scheme / typed Racket) and as such is often used in different ways. You should look into the differences.
I've used typed languages before (C#), and I've used Javascript for 8 or so years, and yep, I too can't see what the hell Typescript brings.
I also used Haxe, a similar typed language that compiles down to Javascript, and writing a typed language for an untyped environment was an exercise in frustration.
It's almost like the React trend to write CSS in javascript. It separates the end result from the code even more, and doesn't really get you anything other than the typescript tax.
Maybe one day I will have that aha moment and get it, but the only real reason I see to learn Typescript is if the industry deems it essential to learn.
One of my favourite things about Javascript is how expressive you can be in it. Some of the new ES6 stuff helps with this too, but at the point when you write Typescript, you might as well be writing in a whole different language.
But naturally every typescript fan has a few stories about how 'type-safety' saved their bacon, and no stories about how they struggled for hours defining types for callback functions or getting everything scaffolded.
TypeScript is a fundamentally different type system to most of the other mainstream languages (similarities exist only in typed racket and clojure's core.typed). The fundamental differences stem from the fact that it tries and succeeds to model almost all existing expressive dynamic JS constructs. The resulting different features include:
* structural instead of nominal types (i.e. if it walks like a duck and talks like a duck, it is a duck)
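A minimal sketch of the structural-typing point, with names invented for illustration: TypeScript checks the shape of a value, not its declared class, so nothing ever writes 'implements Duck':

```typescript
interface Duck {
  walk(): string;
  talk(): string;
}

// Never declared as a Duck anywhere
const mallard = {
  walk: () => "waddle",
  talk: () => "quack",
  feathers: 500, // extra members don't matter
};

function greet(d: Duck): string {
  return d.talk();
}

// Accepted purely because mallard has the right shape
console.log(greet(mallard)); // quack
```

In a nominal system like Java or C#, this would be a compile error without an explicit declaration relating the types.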
TypeScript is nice when you are working with data structures because everything can be defined as interfaces, including functions and methods. This allows you to identify errors as you are writing code instead of having to execute it. Of course this only works if you make use of strict type definitions. TypeScript is like steroids in that it only makes you more of what you already are, which could be quite negative. TypeScript is not a supplement for missing discipline.
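A hedged sketch of that point, with all names invented: data shapes and function signatures alike get interfaces, so mistakes surface while you type rather than at runtime:

```typescript
interface User {
  id: number;
  name: string;
}

// Even the lookup function itself has a declared shape
type UserLookup = (id: number) => User | undefined;

const users: User[] = [{ id: 1, name: "Ada" }];

const findUser: UserLookup = (id) => users.find((u) => u.id === id);

// findUser("1") or findUser(1)?.nmae would be flagged before execution
console.log(findUser(1)?.name); // Ada
```

As the comment above notes, this only pays off under strict settings; with loose definitions the compiler happily waves everything through.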
> I'm surprised you haven't run into more of these people that just seem to use TS as a complexity multiplier for every bad engineering decision they make.
Angular is built on TypeScript, but most Angular code rarely makes real use of type definitions, which defeats the whole point of TypeScript and only contributes to spaghetti and tech debt.
I love Anders' work, but TypeScript seems to have gone too far as a type-system experiment.
Configuring the compiler is now as complicated as selecting the right set of GHC compiler pragmas to successfully compile the code, and in the end these are just type annotations with zero impact on code performance, only marginally friendlier than using JSDoc.
I look forward to when browsers just adopt WebIDL and be done with it.
> These "seniors" lack even the most basic understanding of the shit they're using, but feel the need to impose their opinions about libraries, tooling and languages constantly.
Oh yeah, I feel this. The problem is many "seniors" went from junior->senior by being the big fish in a small pond (small startup + golfing buddies with the CTO) so they never had to question their own assumptions. Then they just glide from job to job at the senior level. It makes for monsters who can't differentiate their own personal preferences from industry best practices, and will attack your PRs if you go against either.
Wish I had a better answer, but it was pretty much just luck. A bit of accidental networking and good timing.
When I finished uni I had a few interviews for large enterprise-y companies to get into their graduate programs. None of those panned out. Then, while I was looking for more things like that, one of my classmates asked if I wanted to interview to be employee #1 at the startup he and a non-technical friend had created.
They didn't have much funding, so it was ~45k (minimum wage or close to it in Australia) to start, as opposed to the grad programs that I think would have been around 55k-60k. But the job was basically building an entire fairly large and complex web app (as well as a bit of desktop and hardware related stuff) between the two of us, who had close to zero real world experience, my friend on the back-end and me on the front-end. So we basically just had all the responsibility, with no experience, and nobody to guide us.
From there I just hopped around a few jobs, looking for small places where I was the senior or second most senior person on the team. I learnt from the first job that the easiest way to learn is to be in a position where you have as much responsibility as possible so failure isn't really an option.
Similar situation here. I disagree that frameworks are the problem though. Build tools, especially related to CSS and assets are IMO the biggest issue. TypeScript's support for monorepos could also be better.
Things have improved a lot since 2009 however, IMO. There was barely a concept of modularity at that time, the tools (e.g. AMD / requirejs / r.js) were way worse and managing the state of the DOM was a pain, jQuery or otherwise.
> The last line of defense against a rogue engineering team is managers who have studied this stuff.
If your engineering team is the one pushing in that direction I'd reckon the company was in a bad spot to begin with to have hired that team because it strongly indicates that the management layer (head of tech/CTO) has no technical clue.
Hire strong Lead Developers with a proven track record of delivering value to companies they worked at and you'll be mostly fine.
Also there's not much to study, in 99% of the cases in a web based startup if your stack deviates from a monolith with one of PHP/Ruby/Python/.NET/Go + Mysql/Postgres/MSSQL you're doing it wrong.
What's wrong with Node.js? It's super mainstream now. We've had it in production for years, and I've seen a LOT of companies migrating from everything else to Node over the years; it's a growing trend from what I see at my level with startups and even enterprises.
It's really not high maintenance at all. The left-pad issue was fixed within hours the same day it happened, and since then the NPM tooling for package management has improved a great deal.
JS shiny new culture doesn't really exist on the back end (and even front end js has calmed down in recent years). Express.js, the go-to framework 7 years ago, is still the go-to framework on Node today.
Node and Mongo are at this point "boring tech". Their limitations and trade-offs are well known, their benefits are also well-established, and their APIs and tooling have matured.
Sure, but these are all known issues to any senior dev. You don't have to pull shady hairballs from npm for every little feature you can think of. I've run Node in production for the last 5 years, so at least to me it counts as battle tested and “boring”.
The thing is, if you have any sort of front-end that is not entirely server-side rendered, you're going to have to work in JS at least some of the time. If your back-end is also in JS, you now get the benefit of isomorphic code for things that you may want to do on both front-end and back-end.
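A tiny sketch of that isomorphic-code benefit, with the rule itself invented for illustration: one validation function, written once, can run in the browser for instant feedback and on the Node server for enforcement, because it depends on neither environment:

```typescript
// Usable verbatim on client and server: pure logic, no DOM, no Node APIs
function isValidUsername(name: string): boolean {
  return /^[a-z0-9_]{3,20}$/.test(name);
}

console.log(isValidUsername("ada_lovelace")); // true, both sides agree
console.log(isValidUsername("x"));            // false, too short
```

With separate front-end and back-end languages, rules like this tend to be written twice and drift apart.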
Then there is also the fact that JS is actually a pretty great language if you know how to avoid the footguns. Granted that's not always easy, but it's a language with lexical closures and easy and familiar syntax, it's also very expressive and has a vast ecosystem supporting it. And you can even add the typescript compiler on top if you want compile-time type-checking.
It's also async out of the box, and while that doesn't solve all problems, it scales surprisingly well with no performance tuning whatsoever, it even has decent asynchronous primitives that make it easier to write correct code.
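A minimal sketch of the "async out of the box" point, with fetchOne standing in for a real DB or HTTP call: three simulated I/O operations overlap on the single event loop with no threads and no tuning:

```typescript
function fetchOne(id: number): Promise<string> {
  // Stand-in for real I/O; resolves after a short delay
  return new Promise((resolve) =>
    setTimeout(() => resolve(`item-${id}`), 10)
  );
}

async function main(): Promise<string[]> {
  // Awaited together, the three "requests" take ~10ms total, not ~30ms
  return Promise.all([1, 2, 3].map(fetchOne));
}

main().then((items) => console.log(items)); // [ 'item-1', 'item-2', 'item-3' ]
```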
Yeah; even without frontend code, JS is surprisingly compelling. It's easy to hire for, easy to write reasonably performant IO code in, the footguns are pretty much all well documented and generally understood (rather than requiring knowledge of the deep magic to debug), library support is top notch, and as a language it supports both OO and FP, while being pretty small in scope, terse without encouraging too much code golf.
If you want something mature, with libraries for everything, solid backwards compatibility and basically the best "boring" choice, go with Java. And if your devs want to mess around a bit, mixing some Kotlin in is basically harmless and easy to reverse if needed.
And what about the front end? What's the best, most boring choice there?
I was on a project for a bit using React and although it felt like an obvious way to write things, I can't help but feel you can't create something that will last for a decade with it.
Unless you're trying to make a really rich SPA, it could be something like unpoly, htmx.org, barbajs, knockoutjs, or turbolinks/stimulusjs: anything that lets you enjoy server-side rendering. Whatever your framework autogenerates for you (ie. forms) is better to "just have" than "have to implement", instead of having 2 projects (one API, one frontend). You could even use no frontend framework at all (ie. mvp.css and plain HTML); you can do a whole lot of relevant projects with that already.
Webcomponents in plain JS are also great to not have to deal with JS class/HTML Element binding and lifecycle yourself.
You could add a bit of Flutter or React for a few features, but for most pages it's going to be expensive overkill.
I recently started learning Elm. Elm is a statically typed functional language which compiles to JavaScript.
The most compelling feature is the guarantee of no runtime exceptions. The language is pretty stable, with a glacial release cycle. It also has a UI library called elm-ui which lets you develop UI components without CSS.
There are a lot of posts criticizing Elm for its slow release cycle and claiming the community does not take feedback well. But at least for my use-case that doesn't matter.
I like that the language is very opinionated and just works.
A few plusses for Elm:
- Statically typed language
- Informative compiler errors
- Awesome tooling. You just need to install the Elm compiler. No npm required.
- < 1s compilation
- The Elm Architecture (TEA) for UI event handling
- Beginner friendly community
You absolutely can. React is the gold standard right now. It's already been king for 6 years and it's not going anywhere. The hype for angular died down. The hype for vue has started to die down. This little bit of hype svelte has at the moment will die down.
React does have a ton of problems but they all come from the next level of dependencies down. Shit like Gatsby and Nextjs won't pass the test of time. Neither will redux (it's already pointless) and all the convoluted bullshit like redux-saga. If you learn to build stuff using just react and other basic dependencies (like express on the back end), you'll be in a good position going forward. None of that stuff is going anywhere.
I can't speak for Angular or Vue, but I'm 100% sold on Svelte. It cuts out all of the crap that React and Redux introduced (lifecycles, hooks, boilerplate, etc.) and boils it all down to fundamentals. You can read the entire docs in a day and fully understand how everything fits together. I dare say it, but Svelte's docs are a breath of fresh air. It's rare that I read documentation and want to keep reading it.
To me, that's what boring tech is about. It's about finding the simplest, cleanest way to do what you need to do. I hope Svelte takes the path of long-term stability over features and complexity and innovation for the sake of it. What they have right now is a solid foundation.
> convoluted bullshit like redux-saga
Wait until you meet saga's bigger brother RxJS/redux-observable. Someone on HN once mentioned JIRA was using RxJS and I realized "ah, that explains why JIRA is the slow pile of absolute shit it is." From just knowing a company is using RxJS I can already guess at the type of internal communication and politics at play in the company, as well as what their code base looks like.
I use a browser extension for checking out the tech stack of web apps.
I haven't done it so much lately but a couple years ago whenever I would check an app with nice UX it was React, and if it was terrible UX it was Angular or something else.
Also interested to see where Svelte will go. For my latest project I just didn't choose it because of lack of libraries.
I've been looking at this from a slightly different level. React may be the best choice for a UX on the web but I think it's still far worse than any native app. It's one of the reasons I get so disappointed after hearing about something cool and new and then finding out it's an Electron app.
I'm stuck on this idea that the best UX is a native app for performance reasons (responsiveness, memory, CPU, battery), aesthetic reasons (assuming you like your native platform) and longevity. On my desktop I often run applications that are a decade or more old. How many web apps rolling out today can sit untouched for the next decade and continue to do useful work?
Gatsby at least spends a lot of effort playing cat and mouse with Google's Pagespeed algorithms so you don't have to. That by itself has tons of value.
My instinct is to build a native front end and connect to the back end over a REST (or similar) API. To me, that feels like the boring technology route.
Yeah, that's totally legit, and what I would default to.
The reason stuff like React exists isn't because it's some big generic library for doing "frontends" that everyone has to use (even though that's how people see it, how it's marketed, and how people use it). If you want to know what a library is good for it's easiest to look at what it was originally built for, the very first problem it solved.
For libs like React, that problem is DOM manipulation.
For most of the interesting things you can build on the web these days, DOM manipulation becomes a problem at some point because the solution has an inherent complexity to it that becomes hard to manage. That complexity is in procedurally updating the DOM, specifically getting the order of insertions and deletions correct and keeping track of every possible state the DOM can be in to make sure your app doesn't get in a weird state that it can't recover from.
The way React (and vue, angular, svelte etc, all the modern libraries) fix that problem is by changing the programming paradigm from procedural to declarative. The declarative paradigm is just fundamentally much simpler for the exact problem of handling DOM manipulation in a large app.
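A toy sketch of the declarative idea (not React itself, and all names here are made up): the view is a pure function of state, and a renderer replaces the old output wholesale instead of the programmer hand-ordering DOM insertions and deletions:

```typescript
type State = { items: string[]; filter: string };

// The entire UI is derived from state in one pass. Any state, reached in
// any order, yields the same markup; there is no mutation sequence to
// get wrong and no "weird state" the view can be stuck in.
function view(state: State): string {
  const visible = state.items.filter((i) => i.includes(state.filter));
  return `<ul>${visible.map((i) => `<li>${i}</li>`).join("")}</ul>`;
}

console.log(view({ items: ["apple", "banana"], filter: "an" }));
// <ul><li>banana</li></ul>
```

The real libraries add diffing so the browser only applies the minimal changes, but the programming model is exactly this: describe the end state, never the transition.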
If you're learning, or building something for yourself and not worried about spending time on refactors, then it's definitely worth building something in vanilla JS first, running into some sticky DOM manipulation scenarios yourself, and solving them the hard way. People make the mistake of using React when they don't need to because they don't have a good understanding of where that line is in the inherent complexity of a web page/app, where you start to get a very good returns on bringing React in to simplify some of that complexity.
That's also why I really don't rate vue, angular or svelte. React is a big library in terms of code size (over 100KB still I think?), but almost all of that complexity is internal. The exact same API and functionality is exposed by Preact, which is a few kilobytes. React has a really small API, pretty much just three functions: createElement, render, and useState. I'm a big fan of libraries that do big things with only a few functions. Do one thing well and all that. There's also the JSX transform, which is a straight line for line transform, meaning the code you write is very similar to the code that runs in the browser, you can follow it line by line with no surprises.
React is a good tool to have in the toolkit, after you've gotten comfortable with vanilla JS. I wouldn't write it off based on how other people present it. You just need to avoid the insane amount of complexity and cruft that people have built around it. All that complexity will go away when people go running after the new shiny thing, but React or something very similar to it will stick around for a loooooong time because the fundamental ideas are so simple and powerful. DOM control through declarative coding, code over configuration, utilising the JS language itself as much as possible instead of relying on DSLs, and simple transforms that maintain the integrity of your code all the way to the production build.
If anything replaces React it will either have to be quite similar, or be another entire paradigm shift (maybe the whole DOM/CSSOM thing will get replaced at some stage, who knows?)
React has innovated somewhat with lifecycle methods and hooks, but the main value is not in its API. It's the ecosystem and scheduler.
Preact is a good alternative if you have a small app, but the reason it's so small is because it doesn't have a scheduler which might cause issues with larger apps.
Redux has greatly improved the workflow for integrating with React. The main issue with Redux is that it pretends to be a generalized state management engine, with all the overhead, while it’s in a shotgun wedding with React.
Will the React team attempt another state management layer, a la Flux, when Redux does 95% of the features and is slowly being absorbed into the React ecosystem anyway?
Gatsby/nextjs will likely merge into a single React static site generator. Similar to React router and Reach router merger.
React is like jquery, it’s going to be around forever. React is almost at the core web infrastructure tech level, just by consensus alone.
Not sure what you mean by the "pretends" statement.
Both the Redux core and Redux Toolkit _are_ completely UI-agnostic, and can be used with _any_ UI layer or even standalone.
Yes, most Redux usage is with React, and we do orient our docs around the assumption that you're probably using Redux and React together, but there's many people who are using Redux separately.
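To illustrate the "UI-agnostic" point, here's a hand-rolled sketch of the Redux pattern with no UI library anywhere in sight. This is not the real redux API, just the same reducer/dispatch/subscribe shape in a few lines:

```typescript
type Action = { type: "increment" } | { type: "add"; amount: number };

// A pure reducer: (state, action) -> new state, no UI involved
function reducer(state: number, action: Action): number {
  switch (action.type) {
    case "increment":
      return state + 1;
    case "add":
      return state + action.amount;
  }
}

function createStore(initial: number) {
  let state = initial;
  const listeners: Array<() => void> = [];
  return {
    getState: () => state,
    dispatch: (a: Action) => {
      state = reducer(state, a);
      listeners.forEach((l) => l()); // any subscriber works: React, Vue, a CLI...
    },
    subscribe: (l: () => void) => listeners.push(l),
  };
}

const store = createStore(0);
store.dispatch({ type: "increment" });
store.dispatch({ type: "add", amount: 41 });
console.log(store.getState()); // 42
```

Nothing here knows or cares what renders the state, which is the whole point being made above.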
The “pretends” comment is a light-hearted joke (or is it?) about keeping Redux UI-library agnostic, when we all know React is the 8000 pound gorilla pulling on Redux.
The default assumption for any production React application is that it will need Redux at some point. It’s much more efficient to start the React project with Redux than to try bolting Redux on after the project has been underway for a while. Redux Toolkit does make things a bit easier.
It’s like how React pretends that JSX is optional, when we all know JSX is a requirement in React projects.
Thanks for all the work on Redux and Redux Toolkit.
FWIW, we do take the "UI-agnostic" part seriously. We've got an upcoming new API for Redux Toolkit that we've dubbed "RTK Query", currently available as a preview release. We've got an example of it working with Svelte, and I know I saw someone else trying it out with Vue:
The Java community has some great developers, but also a lot of Serious Software Engineers who will sabotage everything with extra complexity, and then everyone who learned Java in school and never felt like looking at another language (not even Kotlin).
Java is definitely "boring technology", but hiring random Java developers will probably sink a company faster than doing the same for Go.
The GP comment mentions the need for finding engineers with "a proven track record of delivering value", yet here we have a concern about "hiring random Java developers".
Is the industry biased against great engineers who have been working with Java for the past 20 years, even if they "deliver value" (which is pretty much impossible to determine externally)?
But I, personally, am biased against hiring people with only Java on their resume. Because 90% of the time what I've encountered are people who haven't examined their technology choices, questioned the status quo, tried to -improve- things.
That's not a slight on Java, per se, but it is against anyone with only one language on their resume. It's just that if there is only one language on a resume in web dev land, it's almost always Java.
Yup - it's possible to build uncomplicated software in Java, especially in recent iterations of Java and more... restrained modern frameworks. However, there's no guarantee that you're actually going to either join a team or hire a Java expert with those tastes.
Although still new I'm wondering whether Kotlin could be admitted to the boring technology category given that it was built to dovetail with Java and has first class Spring support?
I’ve found kotlin to be wonderfully boring. There are definitely some sharp knives that get abused though. I’ve met a few people who want to throw OO in the bin and treat kotlin as pure FP to their detriment.
Kind of. I don't think arrow - however good it becomes - can really compete with a language where those primitives (like do notation) come with the compiler.
I think maintainability is mostly down to developer skill and the ability to abstract to the right level. A good Python dev will likely leave far more maintainable code than an average Java dev.
I have yet to see a maintainable Java project of any reasonably large size, anywhere.
Java programs are larger than those in other mainstream languages, just by dint of the verbosity of the language (and research backs this up; studies show errors per LOC are consistent regardless of the language).
There is definitely a certain writing style among Java developers that is overly abstract, but there are plenty of examples of properly written code bases as well.
Also, Java’s “verbosity” is pretty much only a constant factor, and not even necessarily in terms of LOC so much as line width. What research also shows is the benefit of static typing. I'm also fairly sure there is some survivorship bias working in the background, where an ugly Java version of a complex domain survived because the language's great observability kicked it into a working spaghetti-code state, while other projects died a premature death.
Can't one always say that?
Wouldn't it be more fair to compare equal level of skill?
I mean, in most cases it doesn't really matter what tech you choose as 1. Most products don't really need "massive scale" 2. It's more important to be proficient in the tech you pick rather than it being the "best tech ever". I mean Facebook still uses PHP no?
> The engineers are bored and they're rewriting perfectly-good codebases in Common Lisp and OCAML for funsies?
Sounds like the bored engineers need to be allowed to go home early, or have some 20% projects.
Also, as John Gall teaches us with his tongue-in-cheek, yet nevertheless true principles[1] -- a principle so obvious most never give it any thought:
"New System, New Problems"
Can someone please just ask "what do we expect some of the new problems to be?" If you get blank stares and no good answers, then you know they haven't thought it through.
> "what do we expect some of the new problems to be?"
A name for this I’ve heard (and use) is the “pre-mortem”; you can get folks in the right headspace for what you are suggesting by asking them to imagine they are writing a post-mortem after the proposed initiative failed.
A good way of surfacing failure modes / potential quagmires.
> imagine they are writing a post-mortem after the proposed initiative failed
I was thinking more along the lines of "imagine they are writing a post-mortem after the proposed initiative succeeds". Even if everything goes perfectly, what do we honestly expect to have at the end? A system without problems? Nonsense.
That's definitely part of it. I also think developers sometimes aren't sufficiently critical when picking technologies and solutions. They fall into the trap of looking at how bigger companies operate, without considering whether they actually have the same requirements, budget or even problem.
For example, you need a search feature. ElasticSearch is big in search, and there are lots of articles about people implementing ElasticSearch. Very infrequently do I meet people who just start out with the full-text search built into their database, or maybe try something extremely simple, like Sphinx, even if it would solve their problem quicker, safer and cheaper.
It's honestly starting to become a bigger and bigger issue. During the last few weeks I've talked to one customer who is thinking: Kubernetes. They don't have the funds, staff or need for Kubernetes. What they do need to do here and now is to focus on smarter utilisation of their cloud provider's features and reduce the reliance on expensive VMs.
Another customer is going all in and wants to implement all the things. We're talking Kubernetes, Kafka, ElasticSearch and more. They're currently running on just a small number of VMs. While their stack does need updating, maybe start smaller?
Great point. I’m working with a client right now where 90% of the operational pain and low impact dev could be resolved by admitting that the project does not need to resemble a FAANG system.
One company I worked for went to great lengths to emulate FAANG.
But they were a medium sized company. They were absolutely crushed under the weight of FAANG "best practices" and technology. They lost time rewriting perfectly fine code. They chased the microservice fad. And they lost market share.
It's in the interest of FAANG to maintain this idea of needing k8s, this massive CI pipeline, certain processes, etc. etc. Because it slows down competition. It halts startups. It slows progress. They want to throw as much overhead as possible at smaller companies.
The thing people need to realize is that FAANG are entrenched. They are as risk-averse as can be. They will happily write unit tests and maintain 100% test coverage and do all of this crap because they are more scared of losing market share than innovation. They are in full defense mode. Google is implementing all manner of protectionism to maintain their ad market, for example. Plus, they have the deep pockets to pull it off. Any company smaller than FAANG will sit there with their wheels spinning.
> It's in the interest of FAANG to maintain this idea of needing k8s, this massive CI pipeline, certain processes, etc. etc. Because it slows down competition. It halts startups. It slows progress. They want to throw as much overhead as possible at smaller companies.
I'm not sure I agree that this is the underlying motivation. To get promoted at a FAANG you need to be demonstrating technical prowess. What says technical prowess like rewriting the app in a new framework that you open source to great acclaim? The business gets a feed of new talent, technical kudos from the community and maybe even a genuine benefit that had some marginal gain at the scale that a FAANG operates at.
As an example, I don't think Amazon is recommending SOA to sabotage other companies. I think they're recommending the way of work required at Amazon scale.
> The thing people need to realize is that FAANG are entrenched. They are as risk-adverse as can be. They will happily write unit tests and maintain 100% test coverage and do all of this crap because they are more scared of losing market share than innovation.
I found the rate of feature delivery and innovation at Amazon to be way higher than the companies I've worked with since. 100% test coverage wasn't incentivized at all. Increasing revenue was.
The problem is the client doesn't want to admit it - if you ask the client how much volume they're realistically expecting and they insist on a ludicrous number; what do you do?
"the engineers are bored and they're rewriting perfectly-good codebases in Common Lisp and OCAML for funsies"
Had this literally happen to me, though I was a low-level manager and it was happening in another team. One thing I've taken from it: at the time, the feeling among Sr Management was that we needed to allow this or we would lose the engineers. They allowed it, and after the conversion, those engineers left to start their own company. The remaining engineers had to deal with undocumented OCaml code and keep it running, and were resentful.
I have seen this with React vs Vue, where an engineer who didn't like React just did his code in Vue. 'We have to let him do this or he'll leave', but he left of his own accord anyway.
Lesson, stick up for your codebase, and if engineers don't like it, let them leave or make them leave. The other engineers on your team will like it, and some of them will become your new Sr Engineers.
> trying to rewrite Etsy's backend in Scala and MongoDB.
The first Sid came in and wanted to rewrite in C++.
Then the second Sid wanted to rewrite in Java.
The whole time the HTML is 25% space chars, served, sent, received, discarded, because the PHP guy likes deep indentation, and the DB is constantly burning like the sun because all the business logic is in stored procedures.
(That was the problem, not which abstraction the servers are written in, since all they do is pass data back and forth to the fiery inferno of the database.)
They are fearful and paranoid of making a bad hire because they don't know how to assess whether the engineer is destroying the company from the inside.
If there was a reliable general algorithm to make good software managers would hire lousy engineers then tell them to execute the algorithm. There isn't, they can't. The fallback is to be as picky as possible about who has influence on the software.
> They are fearful and paranoid of making a bad hire because they don't know how to assess whether the engineer is destroying the company from the inside.
This is the obvious conclusion. I wonder when investors will wake up to the value of having engineering-savvy management.
> I wonder when investors will wake up to the value of having engineering-savvy management.
To be fair, engineering-savviness and willingness to be management (especially the kind of management that are legible to investors) are substantially anticorrelated.
This reminds me of Google’s mysterious new operating system “Fuchsia” that they’ve been developing semi-publicly, which was said to mainly be a “senior talent retention program”
It's a mixed bag. If you hire engineers who exclusively want to refine the existing toolchain, you may find that solving new product problems becomes more difficult. You may get stuck with an ancient Oracle stack drawing down your entire company's profit margin. Or the team responsible for some technology has ossified so heavily that your launch date is moved to twenty-never.
You can immediately spot a misaligned engineering culture when every team has its own tech stack and its own ops, as it means that none of the teams trusted each other for anything and had to resort to federation. On the flip side you can see bad engineering cultures where decisions are made based on pure conformity with what was previously built, regardless of the problem being solved.
There's a happy middle-ground, a company like etsy with 200-400 engineers can happily afford a small team of 2 engineers trying out scala/mongo for something, it might work out, and nuking it from orbit won't cost that much in the grand scale of things.
The successful microservice architectures I've seen have a remarkable level of uniformity. Many such environments have standards down to the level of "use this particular HTTP lib plus internal wrapper."
At the end of the day there are not large enough differences between, say, Ruby and Python to justify different teams choosing Ruby or Python at the same (medium-sized) company. If you have this level of difference, you've effectively permanently ingrained Conway's law into your organization.
This is often a struggle I've had as an engineering manager (though also as an active individual contributor). When I push back on adding new components roughshod to a stack, it's often framed as "not invented here" dogma. I certainly could do a better job of communicating my sentiments, because I can and do predictably come off as a "grumpy old man" in these conversations.
But more often than not, this feedback comes from engineers who a) have never been (as you say) bitten by complexity, or b) aren't in a position to deal with all the negative consequences of those decisions.
There's probably some wisdom in letting your direct reports experience these kinds of decision-making failures themselves, so they develop that sort of empathy, but the cost of that failure is sometimes just unacceptable for the business, especially during periods of constrained cash runway.
> I always found it funny that companies will go to extreme Herculean lengths to hire the best programmers
Because you may know how to hire the best or have the skill to know who the best are, but you don't know how to be the best, so how can you judge their work if by definition you're not as good as they are?
I've never witnessed rogue engineers. If the team has business-driven objectives, how could they possibly have enough time to chase rainbows? Any time a rewrite or refactor has been done in my org, it was pitched up the chain of command and explained in terms of business value (i.e., performance, maintenance, retention).
I don’t do “boring,” as much as I do “mature and robust.” I like shipping products, as opposed to just “writing” them, and shipping is boring. Lots of annoying intricacies and processes.
I’m writing a fairly large-scale app, right now.
It’s written in Swift (frontend), using IB (classic UIKit), and PHP/MySQL/Postgres (backend).
It does not use SwiftUI (shiny), or Rust (shiny, but a bit more dusty), or some form of NoSQL.
I picked standard UIKit, because I like SwiftUI, but the app has a fairly intricate and non-simple storyboard. I am not confident that SwiftUI is up to the task, and I know that IB can do it.
I’ve been writing in Swift since the day it was announced, so I already know it is up to the task, despite being a fairly new kid on the block.
I picked PHP, because I’m already quite decent with it, and, despite the hate, it is a perfectly good, performant, proven, and supported enterprise language. There’s a better-than-even chance the server will be swapped or rewritten in the future, so it’s a good idea to use my implementation as a PoC and architecture model, anyway. It will need to run the system during the nascent phase of the project, so it needs to be solid and secure. There’s no way I will take the risk of writing such a critical system in a language I barely know (See this scar? I’ve done that; long story).
I picked MySQL and Postgres, because they are proven, robust databases, and can be installed on most low-cost hosting solutions (the app is for an NPO). I used PDO to interact with the databases, for security and reliability, anyway, so it’s entirely possible to add support for more databases, in the future.
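The parameterized-query point about PDO generalizes to any DB-API-style binding. Here is a minimal sketch in Python's stdlib sqlite3 module (illustrative only; the commenter's actual backend is PHP/PDO, and the table is hypothetical):

```python
import sqlite3

# Stand-in for the PDO idea: driver-level prepared statements, shown with
# Python's stdlib sqlite3 module rather than PHP/PDO.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# The driver binds the value, so attacker-controlled input is treated as
# data, never as SQL.
evil = "alice' OR '1'='1"
rows = conn.execute("SELECT id FROM users WHERE name = ?", (evil,)).fetchall()
print(rows)  # [] -- the injection attempt matches nothing
```

The same placeholder style also keeps queries portable across databases, which is the other half of the reliability argument.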
Also, backend is not my specialty. What I did was design a layered architecture that will allow future “shiny” engineers a path to replacing the engine. I wrote an abstraction layer into the server, allowing a pretty wholesale subsystem replacement. The app communicates with the server through a classic REST-like JSON API, so there’s another place for a swap. I’m not married to a system like GraphQL, with the need for dependencies; but the layered architecture allows use of GraphQL, anyway, if people really want it (it is cool and robust, but is difficult to use without some big dependencies).
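As a rough sketch of that kind of layering (in Python, with hypothetical names; the actual server is PHP): the REST-like handler sees only an abstract storage interface, so everything beneath it can be replaced wholesale:

```python
from abc import ABC, abstractmethod
import json

class Datastore(ABC):
    """Abstract storage layer -- the piece that can be swapped wholesale."""
    @abstractmethod
    def get_user(self, user_id: int) -> dict: ...

class SqlDatastore(Datastore):
    """Current implementation; any other backend could replace it."""
    def __init__(self):
        self._rows = {1: {"id": 1, "name": "alice"}}  # stand-in for SQL
    def get_user(self, user_id: int) -> dict:
        return self._rows[user_id]

def handle_request(store: Datastore, path: str) -> str:
    """REST-like JSON endpoint; knows nothing about the storage engine."""
    _, resource, raw_id = path.split("/")
    if resource == "users":
        return json.dumps(store.get_user(int(raw_id)))
    raise ValueError("unknown resource")

print(handle_request(SqlDatastore(), "/users/1"))  # {"id": 1, "name": "alice"}
```

The JSON boundary is the second swap point the comment describes: replacing the transport (say, with GraphQL) touches only `handle_request`, not the datastore.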
Speaking of dependencies, I do everything in my power to eliminate them. I have been badly burned, in the past (not too distant, either; I had to do an emergency dependencyectomy just a couple of weeks ago), by overreliance on dependencies. It means some extra work, on my part, but not crippling.
Speaking of boring, few things are more boring than documentation, testing and quality coding techniques. My testing code usually dwarfs my implementation code. I spend many boring hours, running tests, and examining results.
In my experience, I don’t think I’ve ever written a test that wasn’t necessary. They always expose anomalies. I just went through that, in the last week or so, as I was refactoring a legacy system for the app I’m writing. I actually encountered and fixed a couple of issues (including a really embarrassing and scary security bug) that have been in the backend for two years.
> May I ask how you consider these to be compatible?
It was a calculated risk. Since the company I was working for, at the time, was never going to use Swift, my "bread and butter" was at no risk, whatsoever. We were a C++ shop. I just started working with it on nights and weekends.
Being a C++ shop, however, we were quite familiar with Lattner and LLVM, so we were aware of his propensity for WIN. That gave me some confidence, going forward. Also, Apple didn't just announce a language. They also announced a full system API, as well as a product roadmap. The API showed they were serious about it. Those don't come in Cracker Jack boxes. They take some serious work and investment.
It was definitely a risk, but I'm a conservative, scarred veteran of many errors in judgment (can you say "OpenDoc"? I knew you could!). I wasn't about to run into a burning dumpster, half-assed, and I thought it was worth it. I knew it would take four or five years to mature, and it has. I tend to play the long game. I learned that, from all those years, working with the Japanese.
> I actually mentioned that. Like, immediately after the quoted phrase.
I'm rereading your previous comment multiple times but unfortunately still failing to see what you're referring to. The only explanation I can see is "we were quite familiar with Lattner and LLVM, so we were quite aware of his propensity for WIN. That gave me some confidence, going forward."
> They also announced a full system API, as well as a product roadmap.
I'm not quite sure what you mean by "a full system API", and does Apple ever announce a product roadmap? I would definitely be interested in this roadmap of which you speak. :-)
You are correct. It was the next comment I made (so I removed that smartass line).
I apologize.
They have had a Swift roadmap forever. I think it's now kept on swift.org. I'll see if I can find it. I think it's a fairly sparse one. I really only cared about the evolution through ABI Stable. All I needed to hear was that it was a goal.
You are right. They tend to eschew roadmaps, but they did a "hard-sell" with Swift. They knew it would be difficult to build momentum with.
"Full System API" is the native frameworks; UIKit, AppKit, WatchKit, etc., as well as things like WebKit and MapKit.
When Swift was announced, they had APIs for most of that stuff. I was pleasantly surprised. I had a full app, working within a day or so (using beta Xcode, of course).
> I really only cared about the evolution through ABI Stable.
Ok, but that came later and wasn't present in 2014.
> they did a "hard-sell" with Swift
I agree with that. :-)
> "Full System API" is the native frameworks; UIKit, AppKit, WatchKit, etc., as well as things like WebKit and MapKit.
Swift did of course have bridging to Objective-C and the preexisting Objective-C API. I find it strange to equate language bridging with announcing a full system API — an API originally announced around the turn of the century (can't believe I'm using that phrase now). Cocoa-Java, which no longer exists, also had such bridging, as does PyObjC and MacRuby/RubyCocoa. Still, most of the system APIs to this day are written by Apple in Objective-C.
That's a good point. I never thought of that. It was actually an old bridge. I remember when they tried to make Java a "full citizen" language. Boy, that flopped...
SwiftUI shows promise, but it is still quite nascent.
Pretty much every bit of code I write is "pure" Swift linkage. I like things like Swift's enums too much to give them up. They make APIs really fun.
Here’s something I wrote up about a project that is into its second decade, and just picking up steam. It actually matured just in time for the COVID lockdowns. The best thing I ever did for it was walk away: https://littlegreenviper.com/miscellany/bmlt/
I don't look at it that way. I have a very "wishy-washy" design approach. I call it "paving the bare spots"[0]. It's definitely not a "classic" approach, and it would not be something that I would recommend to anyone that is not extremely experienced.
The idea is that I am not actually aware of what the final product will look like, when I start, so I take a very careful approach. I spent 27 years, working for a "Waterfall-based" corporation, where the system had to be 100% designed up front, and the end result would "meet spec," while still sucking. I am not particularly thrilled with many agile approaches, either, as I see many of the same problems. It's really just shifting the tech debt around.
My approach actually results in my having to throw away a lot of really good, tested, code, but I still end up moving lightning fast, and coming out with good results. If you look at my portfolio, you will see a whole bunch of small, heavily-tested module projects. Many of these were things that I ripped out of other projects, but didn't want to throw away. Some of them are crazy useful, like the Persistent Prefs Utility[1], or the Generic Swift Toolbox utilities[2], which show up in most of my work. The fact that they are treated as independent projects, with heavy testing, means that I can reuse them with confidence.
The Spinner project[3] was an example of a UI I designed to be a central aspect of an app, then decided not to use it, as it deviated too much from the user workflow I had in mind. It will be back, but not until it's the best approach. Eye candy is nice, but it still needs to be usable.
That modular approach is not new at all. I think I may have been doing it since the early nineties.
True, there is flexibility, but that flexibility is implemented as a single-point hinge, not a bendable continuum. It's very clear where the flexibility goes, and that point is well-tested. I just got done refactoring the server, where I added a more flexible way of allowing users to implement security postures, and I'm really, really glad that I did things the way that I did. It was a pretty big job, adding personal tokens (the new functionality), but a lot of the work was making sure that I stuck with the "philosophical" domains of each layer, and testing the living bejeezus out of the code.
And each point of flexibility has a very clear domain. For example, the ANDISIOL layer is where the SQL turns into functions. You can rip out everything below that, and replace it with whatever you like, as long as the same functional API is presented to BASALT. That's a fairly classic pattern.
I set it up, so that all the security stuff was sequestered into its own "silo." This allows things like using monitoring and logging, or a hardened host, without affecting the main datastore.
The deal is that I expect the tech to get swapped out, down the line, for something more modern, and it might not even use SQL. But security is quite important (especially with the target user base of the initial release). I went kind of overboard with some structural support for security. I am quite aware that I could get better performance from a single, related DB, but I wanted to start off with infrastructure-level support for security, with the anticipation of future tech making up for any performance issues.
In my experience, security is often spackled on, after the fact, and I think that it's important to start from scratch, with security.
Also, note the ridiculous simplicity of the DB schemas. That was because I used...yecchhh object-oriented design as the Model, and the datastore actually represents a generic base class state. This allowed me to write a whole bunch of code, early on, and test it, then never have to look at it again. The implementation was done in layers, over a period of seven months. Each layer was treated as a standalone project, with its own lifecycle and testing. The idea was to develop a robust structure that I could consider reliable, then build on top of that.
The blog isn’t really an argument against MongoDB, however.
In many ways it’s an argument for MongoDB: if you’ve built a JS-based application, MongoDB minimizes additional learning and the need to translate between the objects on the front end and in the backend. Non-relational DBs are also easier to scale horizontally without requiring application changes.
The OP is an argument against introducing new technology without a significant clear benefit. It’s basically saying that simply having a new tech can add significant complexity, unknown unknowns, and require much more maintenance and other costs.
So if your web application is currently running on MongoDB and it’s running well, then this is an argument to stick with MongoDB instead of, say, migrating to PostgreSQL going forward.
Anyone in a hiring market that favors the job seekers has to look out for this. Employers typically see the employee as a tool to solve their problems, but I think a lot of managers don't pay attention to employees who use the company as a tool to further their careers. Resume driven development, chasing metrics, and self promotion are way faster routes to progress than actually doing a good job. Getting the job I want is a long process. I'm going to take the most direct route to that, and it's a manager's job to make sure our interests are aligned.
The problem with the non-boring technology club is that programmers see what problems FAANG companies are solving and want to be on the cutting edge of new technology too. But they don’t have the same problems. Another problem is that they want to show what they can do. If they say in an interview that they work with Rails/Django and a PostgreSQL database, they fear they look incompetent using those old technologies. So they try to convince their companies that their products need to be rewritten in MongoDB, React with GraphQL, a microservice stack, and many more state-of-the-art technologies.
And in the end, many developer-years are wasted rewriting existing, working software in a new stack the team is not yet comfortable with, and the new product is much less stable than the old one.
I love Basecamp for their Hotwire stack and for showing that you can make great software with old, boring technologies and just a little bit of JavaScript magic pixie dust.
Or more likely it's the scarcity: by definition, new tech doesn't yet have many experts. So if you're one of the few who learns it, you're suddenly pretty differentiated compared to your peers. Even if the new tech turns out to be pure garbage down the road, it doesn't matter, because in the meantime you can land the hippest jobs and win the admiration of your peers by being so far ahead of everyone else.
They are big enough to follow all paths concurrently. The F is famously doing their own PHP, and while the G was tickling "shiny receptors" all over the world with Go (which, ironically, could be seen as an example of boring-technology enlightenment, considering it is basically a modern take on 1980s language features), they were also doing so much plain old Java that the internal tooling they published as Guava, built to maintain their own sanity, easily had more impact on the viability of Java as a language than everything Oracle has ever released (yes, including Java 8 lambdas).
Your last point is just flat-out false, even if sarcastic.
Just to correct the misinformation: Java has been steadily improving throughout the Oracle years, whether you like or hate Oracle. The JVM is an absolute workhorse, and exciting new features are underway.
Yes! I work at a FAANG, and over my time here it's been C++, Java, Python, and JS. The least "boring tech" part of my work has probably been that we're moving from JS to TypeScript. Mostly, we want to use things that we're confident can do what we need, and that we're confident don't have hidden surprises.
I was very disappointed when I joined Amazon to learn we were using plain old Java with servlets (this was 2015; I think Kotlin is more common now). Since leaving, I’ve been in awe of how sensible the technical decision-making that led to that was.
I interviewed with Stripe, and one of the engineers mentioned they’re transitioning many services to an exciting new technology called... Java. That was in 2021.
I will confess that I'd struggle to go back to Java now. Kotlin has all the sensibility with just enough power to make me happy. I get to live in the JVM ecosystem without the song and dance of something like Scala.
Not true - they use cutting-edge tech that sprawls around the SF Bay Area a few years later all the time. Maybe not on the web side, because you aren't going to add much value by doing that anyway.
All true, but I would like to add that their managers would like to say (when interviewing at FAANG) that they managed cool, edgy new technology as well. So they are not motivated to stop these things, because it sprinkles hot keywords on their resumes too.
> they fear they look incompetent using those old technologies
It sounds like you're saying the fear is unwarranted. It is very much a real fear as long as the people interviewing them actually count those technologies against them and value new tech.
It’s because job hopping programmers are compensated for how rare their skill is in the market and not how much value they add to the business. It’s another flaw in capitalism.
Reward employees with a direct and substantial cut of the profits, and incentivize them to stay 5-10 years, and these behaviors should disappear.
The loss of job security and frequent job hopping have created more incentives to optimize for the next job switch than for adding value.
The explosion of startups also contributes to this. They often have to attract employees by offering the promise of autonomy. Most startup employees know they aren’t getting rich. So they milk the startup for maximum resume points and move on.
The VCs unload these bloated companies into inflated stock markets and the cycle continues.
DHH runs really small companies and pays his employees really well and doesn’t work them too hard. Employees have no real reason to leave. They see a direct link between the low overhead and their job security and work life balance. Aligning incentives fixes this.
> So they try to convince their companies that their products need to be rewritten in MongoDB, React with GraphQL, a microservice stack, and many more state-of-the-art technologies.
I find it interesting that these technologies are considered "state of the art" (SOTA). What does SOTA mean in this context? I could see an argument for postgresql and rails/django being SOTA as I think many believe them to be fairly mature, secure, and feature complete.
Yes, Hotwire is another game-changer from that genius DHH. So much so that Django and Laravel already have their own implementations. I just love the passion that guy has for what he does and his commitment to Ruby. I think Rails is an even better choice since it became boring.
If you have a shitty job then it's really hard to get the experience you need for something better unless you invent problems. That's why making things way more complicated than they need to be is actually a good thing.
Basecamp is a simple app though. Ridiculously simple.
That app could have been written in the mid-1990s using WebObjects in just a few months.
Technologies like MongoDB, React, GraphQL, Microservices etc exist because modern, real-world apps are generally far more advanced than just a glorified CRUD app. Consumers simply have higher expectations and more demands for what web apps should be able to do.
While this is somewhat true, and you can't solve everything with Rails + Postgres, you should think very, very hard about whether what you're building is in that category (and further, whether every part of what you're building falls into that category).
Far, far too often I think a significant source of complexity is enthusiastically added by engineers themselves assuming that the problem they're solving is sufficiently complex that boring technologies just aren't up to the requirements of their project.
My current gig is writing a very traditional Rails app (we hardly even dabble in Stimulus or JavaScript all that much). Prior to that, I worked on a JavaScript-backed, fully reactive real-time app using the latest and greatest technologies. My boring old Rails app, IMHO, has a much nicer user experience, far fewer quality issues, and is well loved by customers, whereas the bleeding-edge JavaScript app was constantly derided by customers for being difficult to use, buggy, and unintuitive. You can go a long way with simple technologies if you design your experiences well.
The way I see it is that one should master their stack. If you work over and over again with the same stack, you will know it well. You will be able to move mountains with it. But it takes years to arrive at that. It takes implementing multiple projects the same way, over and over again.
You need the wherewithal to stick with your stack and not get lured away. Maybe this is what boring means. Maybe boring is different to different engineers based on their background.
Sometimes you cannot select the stack cause there might be more senior engineers at a company and they have more sway. This is fine as long as the engineers picking the stack have picked a stack they have mastered and it is boring to them. As a regular engineer in their team I would hope to rely on their expertise and would hope to learn from them.
I remember at one job a rogue engineer picked a boring backend that would have been fine. But they fell behind, because the other engineers knew their boring stack a lot better. Ultimately the rogue engineer had to switch to the other boring stack. The rogue engineer just was not fast enough to master it and implement the features required to keep pace with the demands of management. These demands were things like tracing, centralized logging, security, and of course features. So while they were still learning the ropes, we were moving on to even more advanced security, logging, and feature requirements. They just couldn't keep up.
> The way I see it is that one should master their stack. If you work over and over again with the same stack, you will know it well. You will be able to move mountains with it. But it takes years to arrive at that. It takes implementing multiple projects the same way, over and over again.
This is 100% how we did things for the last 5 years. We used the exact same basic tools & APIs, but iterated on how we integrated them together by way of our code.
We took what most people would call a "toy" stack, consisting of just C#/AspNetCore/SQLite (and recently Blazor), and turned it into something that can serve thousands of users in production across several B2B customers. We don't use any sort of containerization or cloud tech. Our software can be deployed by unzipping a file on the target environment and running a single command. You would be surprised at how much speed you can get out of SQLite when you have had 5 years to live with and experiment with its capabilities. On top of its extensive internal testing framework, I have several testaments to its stability sitting in our customers' environments right now. We've got OLTP SQLite databases that are getting close to 1TB in production without any signs of distress.
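For readers surprised that SQLite can carry that kind of production load: a common first tuning step (not necessarily what this team does; this is a Python sketch, not their C# code) is enabling WAL journal mode, so readers proceed concurrently with the single writer:

```python
import os
import sqlite3
import tempfile

# A file-backed database, as in a real deployment (":memory:" can't show WAL).
path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)

# WAL lets many readers run concurrently with one writer; it is usually the
# first knob turned for server-side SQLite.
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA synchronous=NORMAL")  # common pairing with WAL

conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO events (body) VALUES (?)",
                 [(f"event {i}",) for i in range(1000)])
conn.commit()

# A second connection reads while the first holds the write role.
reader = sqlite3.connect(path)
print(reader.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # 1000
```

The point of the comment stands either way: five years of living with one engine teaches you which of these knobs matter for your workload.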
So, instead of focusing all of our energy on shiny things, we focused on building a 100% integrated vertical with (mostly) boring tech so that we can fully manage the lifecycle of our software product with the product itself. We have a management web interface built out for everything we need to do for building & operating the solution. We are very close to being able to partner with other organizations who can run the implementations on our behalf because the software is so easy to manage now. This is the type of real-world horizontal scalability the CEOs are after.
Thanks for sharing that. I think we need more of these stories of how a team was able to use the boring tools they mastered to implement amazing things.
I agree completely. Another angle that is less common is remaining at one company you believe in and that respects your work. If you find somewhere like that, years of working in the same domain can make you more effective. Not to speak of the advantages of a team that works together for 5+ years, and the power that comes from true camaraderie with your teammates.
It's sad that most places - and, by definition, also the largest places - end up being meat grinders, and people just hop between companies, and between teams within companies, every year or two. By the time you start understanding the domain, you move on. It takes years to internalize a problem and understand it deeply.
This equally applies to a company's business process. Focus on a specific, scalable business model - don't make a special niche process for every "opportunity" that comes by.
It also applies to managing your life, personally. Know what things you do, what your personal goals are, and don't let yourself get distracted by the latest and greatest social media trends or stuff your friends are doing.
The "opportunity" is often a dangling carrot from a big enterprise customer. It's very hard for a cash-strapped startup looking to make bank and reputation to turn these opportunities down, and they don't look at the TCO and long-term costs in terms of complexity and tech debt.
This is a great comment and is spot on. Mastery has exponential returns over proficiency. The best thing is that mastery feels really good once you have it. I've been bouncing around the past several years not finding what I want, but I've realized in the past year that I want the kind of mastery of a stack that I haven't had since my C/C++ OpenGL computer-graphics years.
The aviation industry has an expression: "there are old pilots and there are bold pilots, but there are no old bold pilots". Young, inexperienced pilots often take unnecessary risks and occasionally learn important lessons - sometimes terrifying lessons. Those lessons lead to a more cautious approach to flying as they grow older. It seems to me that boring tech is the equivalent of cautious flying. Experience matters. Maybe it's time to co-opt that expression into the developer world: "there are old coders and there are bold coders, but there are no old bold coders". Ageism is a problem in the tech world. Experience will almost always consider (maybe even prefer) boring tech, generally for good reasons. Perhaps it's time to value experience a bit more than we have.
Generalizations like “choose boring technology” are just unhelpful slogans.
Truth is you should choose technology given consideration of its pros and cons, not on the basis of some slogan.
There are very good reasons to use mature technologies and very good reasons to use current technologies and very good reasons to use absolute cutting edge technologies.
When someone comes at your approach wielding a slogan, be skeptical.
One reason to use non-boring new technology is if it suddenly enables abilities that were previously not possible.
The trap most engineers fall into is thinking only about scaling. It's obviously an interesting problem, but until a company becomes successful it's not really something you should worry about, and for the most part things that (allegedly) scale well are more expensive, slower, and harder to maintain.
But there are so many ways to innovate that aren't just about scaling: can you make your application faster? What are the things you never even considered because you have subconsciously internalised them as physical limits of reality, when actually they're not?
One of the examples I'm currently exploring is the idea of moving a large amount of data into memory. I remember decades back when Google announced that its search indexes were fully in memory (they proudly announced that any given search query might run through a thousand computers). I cannot imagine how many possibilities it enabled for their product that were not possible before. Experimentation with new technology, in pursuit of completely new ways of exploring your problem space, should always be encouraged, and if boring technology cannot do it, then that's when you give up on it.
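A toy sketch of the fully-in-memory index idea (all data hypothetical): with an inverted index held in a dict, a multi-term query becomes set intersection instead of disk seeks:

```python
from collections import defaultdict

docs = {
    1: "boring technology is a choice",
    2: "new technology enables new abilities",
    3: "boring choices compound",
}

# Inverted index: term -> set of doc ids, held entirely in memory.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(*terms):
    """Conjunctive query: docs containing every term."""
    result = set(docs)
    for term in terms:
        result &= index.get(term, set())
    return sorted(result)

print(search("boring", "technology"))  # [1]
```

This is obviously orders of magnitude simpler than a real search index, but it captures why keeping the structure in RAM changes what queries are even feasible.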
I think it's helpful if it brings awareness to the situation.
I've spoken with 70+ different devs working on 70+ different projects of all sizes on my Running in Production podcast[0] and the choose boring tech phrase came up a whole bunch of times, and especially the idea of using innovation tokens. If it helps folks build and ship their app in a quick and stable way, that seems like a big win to me.
It must have been a weird coincidence, but I listened to a few episodes of your podcast and actually heard about boring technology in 100% of the episodes I listened to.
Hah yeah, coincidence for sure. I don't have hard numbers in front of me but based on reference links to the boring tech site it's been mentioned at least half a dozen times. I know innovation tokens have been spoken about a few times outside of those linked episodes too.
> When someone comes at your approach wielding a slogan, be skeptical.
I do agree, although the point of the article is to _lean_ toward the "boring technology" side, and to apply extra scrutiny when considering the adoption of the newest flashy things.
Having read the article 3-4 time in the last years, I don't think they say "don't use new things", just "not too many new things at the same time"
Perhaps the title should be “err on the side of boring technologies,” although I don’t even agree with that. The right technologies for a project are the ones you deem to be right, given appropriate consideration of many factors. Your project may really need to use all beta-release software, because maybe it just does.
Let's say you've given it the proper consideration, and it's clear beyond the smell of subjectivity that it needs to be flashy stuff, go for it. The point is that this is often not the case, and the argument is to go with boring then.
I would agree that we're talking about the same thing, really:
> The right technologies for a project are the ones you deem to be right, given appropriate consideration of many factors
It's _usually_ difficult to take into consideration all the factors of a new flashy thing. The unknown unknowns. Thus, _maybe_ choosing a trendy set of technologies indicates that the exercise of balance and consideration you were describing, and that I agree with 100%, has not been as honest as possible.
The author even calls out the slogan as clickbait in the presentation.
For what it’s worth, it’s a great read that I would recommend to anyone in the industry.
They aren’t forcing technologies on you, but driving home the true cost of long term maintenance and investing in the “core stack” that you already have instead of adding N technologies to solve N business problems. This is good stuff.
I generally agree about slogans, but that seems unfair here. "Choose Boring Technology" isn't the sum total of the content, it's essentially the title. If you read past it, there is good stuff.
I wish HN had a feature where it could detect that you clicked the link and disallowed commenting before that. At the very least you’d have to click the link, even if you just immediately click back without reading, and you’d know what you were doing was circumventing the spirit of the place.
Interesting idea! Maybe instead of preventing you from commenting entirely, it just tagged all comments you leave as “have not read article”. I think the shame approach would actually work, but too many people would vehemently reject it for it to ever work.
Perhaps just annotate each comment with how many minutes between clicking the link and making the comment (similar to the green usernames)
That way the reader can decide for themselves. I'm less inclined to object to hasty responses to discussion points than I am to top level comments, but that's just personal choice.
“Choose Boring Technology” is an absolute statement. It’s a mantra. A meme. It demands you follow its lead.
It’s an order from the top. A commandment. A clear requirement. A statement of belief for the masses to follow.
It’s an unequivocal statement, a perfectly confident directive telling you precisely what to do without the slightest equivocation.
It’s not “read this powerful headline but then please read the in depth article only to find we don’t actually mean what is said in the headline, we mean something more nuanced and subtle why did you take our headline seriously?”
So a sufficiently compelling headline goes one step beyond intriguing your curiosity into clicking, it substitutes for the author's entire argument?
Amazing bit of self-justification you have there.
If you want to argue against what the author wrote, by all means. If you intend to argue against a straw man you have concocted out of a three word title, then you will seem a fool.
Yes, especially since different people will have different ideas of what is experimental and what is not. Python is boring tech at this point for most people, but not for everyone: some may be using SAS for data analysis or Java for the backend. You will need to evaluate case by case whether it makes sense to change that stack; there is no silver bullet.
It is a general rule that you can fall back on. Like all general rules there are exceptions, but those exceptions have to be justified. If you do not use boring technology, you will pay a price, so you really need to think about whether the price is worth it.