In the 90s-00s, when I was a younger, (even) more arrogant nerd, I looked down on Microsoft and their stupid technologies: Visual Basic, Access, Word, and more. Simple, limited tools that I only ever saw ugly, half-broken systems built with.
Until I saw something by Steve Ballmer (I think), explaining that their strategy was to provide tools for those "99% developers". The ones who don't read HN, the ones who don't code for fun. Everyone for whom coding is something that gets in the way of their real objective.
No wonder they produced bad code! But that's not the point: they're solving something else. They're not spending effort making bug-free maintainable code, they've got more important things to do. They're doing the accounting for their small company, point of sale for the shops in their city, all the kinds of things that may not be glamorous but are all around us (for better or for worse).
So while I puttered around in my self-congratulatory ivory tower of unix, Microsoft understood what the real world needed, for the 99% of developers...
> Here's my point: yes, the whole Internet runs on Unix philosophy. But businesses sure don't. The big problem comes up in my description of pipelines up above: they only parse about 99% correctly, which is fine for your idiotic comments about YouTube videos, but pretty nasty when you mangle critical business data. And when the business dudes get involved, they'd rather do anything than mangle their critical business data. You hear me? Anything!
> If it keeps the data from getting mangled, they'd happily sacrifice searchability. Or developer hours. Or the ability to use off-the-shelf software. Or millions of dollars in licensing fees (because at least they'll accurately know how many millions).
> Pay attention, because that attitude is exactly why Windows is so strong, why the vast majority of developers prefer to develop on Windows, and why the vast majority of users prefer to use Windows. It may be gross, and adding new features or libraries may be a lot like stabbing yourself repeatedly with a fork (which, tragically, is not itself included in Win32), but Windows works consistently. So does SOAP. But not Unix or microformats or REST.
> It would be awesome if someone could find a way to satisfy both camps (the Internet people and the Enterprise people) at once. Then maybe one or the other set of technologies could finally die. But I'm not counting on it. Until then, one or the other technology is effectively dead, but which one it is depends who you are.
> but Windows works consistently. So does SOAP. But not Unix or microformats or REST.
I have no idea in what sense windows works consistently (?!) where unix (linux) or REST (?) doesn't.
I like the idea of what they're trying to get to, but sometimes it helps to cite examples.
imo, LARGELY the reason that "windows=business" is historical lock-in effects. I, as a linux developer, still get ".xsl" files on the regular, which have each gone through 4-5 different business people before me.
windows is largely a "suite" of software, from OS, to Office, to even Internet Explorer. Business types often didn't (and perhaps still don't?) know what separates Word from calc.exe - it's the entire computer interface, it works, and it's what others are using. Only a big "everything-in-one-package" could hope to upset them, and even then might have a hard time doing it.
I don't think I'll ever try to purposely run much windows software again, but I do give them kudos for what they've achieved and continue (for now) to achieve.
It was true then and it's true today. Certainly not on HN or at 8-person startups nobody's ever heard of, but the vast majority of developers work on boring business software in Windows.
I dunno, I kinda dislike Macs. At least Windows has the decency of not even pretending to be Unix-like. Other than the fact that it has a bash-like shell, and some token support for Posix APIs, the Mac is just as proprietary and opaque (I'd argue even more so) as Windows. At least Linux allows you to poke around its innards and explore with nothing but a terminal.
Yeah, and they have doubled down on things like the command key that inhibit adoption by wannabe former Windows users. NIH, or just plain cultural stubbornness, prevents them from allowing the system to welcome new users. The Mac differences are not superior and it feels crusty and sub-par to me. However, the Mac doesn't track me... so it wins. I bounce back and forth between Mac and Windows daily. I hate them both really.
As a counter point outside of gaming, I used to work for a Fortune 250 100+ year old banking firm, and you were given no choice. Heck it was a miracle if you could get a developer machine that was more powerful than your business users got.
To be fair if you’re using a game engine such as Unreal I’d expect most of the game-specific programming to be against that (as an SDK) and it’ll abstract the platform specifics for you. Linux compatibility work, if even required (as in the builds produced by the game engine don’t work out of the box) would be separate from the actual game development work and can be assigned to Linux-specific engineers with their dedicated Linux machines.
I haven't worked in game dev, but I'd imagine that each person would want/require a copy of the game (dev build, etc.) that they could test their aspect of development in, which is highly likely to only run on the platform they're developing for.
Developing something for, say, Sony or Nintendo might be more OS-agnostic in the developer's seat, but then it relies on the tools working on those OSes.
Windows is never really a choice, it's more "I choose X, which only works reliably on windows"
I assume someone would prefer developing on whichever machine they normally use.
I have been a windows user for a long time and recently started working on a mac to write some iphone code. It is agonizing, but presumably not because there is any problem with macs, but just because the systems are different. Using the cmd key instead of the ctrl is the most obvious difficulty, but I can't say that makes macs better or worse.
One thing that really got me while using the mac is I was adding a file to my project and I double clicked on a folder to navigate into it. Coming from a pc this is what you would do. You also do that on a mac in other places. But here it inserted the folder into my project. I won't get into what happened when I deleted the folder from my project.
Using a system you are not familiar with sucks, whether it is a mac or a pc.
Other than this, is there something I don't know that makes developing on a mac so much better?
This makes no sense to me. If business types care about their business data and that's why they use windows, why does the argument start off by pointing out that all the mission critical stuff like email and customer facing sites runs on unix? Even Microsoft runs its mission critical stuff on unix.
Windows isn't popular because it's good at anything; it's popular because it meets the bare minimum and has a huge marketing budget, which unix doesn't.
Windows is an ecosystem, linux is an OS... that's the difference. For the most part, Windows is far far easier to configure and maintain than linux based systems (and I say this as someone who runs a large enterprise system with Windows, Fedora, Debian, OpenBSD, FreeBSD, SONiC, Dell OS10, MacOS, Azure... etc, all mixed together).
I would love for you to come work at my company, where I was told I couldn't connect my Mac to the VPN "because the network can't support OSX", which is a sentence that doesn't even begin to make sense to me.
"Windows" is a monolithic thing that can be referred to singularly, even across version. Unix is not. The first thing you need to decide if you want to switch to unix is "which distro," when Windows is the distro.
What windows? Datacenter? Essentials? Standard? which one do I need? will I be missing functions I later need? The answer to all of these questions is it probably doesn't matter that much as they all do the same thing.
What distro of linux / unix should you go for? FreeBSD? Fedora? Arch? It probably doesn't matter that much as they all do the same thing.
I get the idea, but it seems tragically misguided in some of the specifics. A lot of average developers can do REST (in the generic sense of JSON over HTTP) well. A lot of smarter-than-average people got mired in SOAP and ended up using a bunch of its features wrong and having integration nightmares with smart people in other departments or other companies who used a different subset of features wrong. The idea that SOAP's rich set of features would result in more secure, correct, and robust software compared to REST was a plausible, logical idea, but it was an idea that didn't pan out, especially for the 99% programmers who were so exhausted and behind schedule when they finally got their SOAP integration working that they couldn't bear to stop and ask themselves what further work they needed to do to reap the promised benefits.
Also, the idea that enterprise Windows business programmers were more into data integrity than the UNIX "Internet" programmers runs contrary to my experience seeing those two cultures encounter each other in the 2000s. In my experience, the former took human intervention for granted to handle unexpected or even routine-but-unusual cases, and the latter understood that their job was to build systems that could operate properly without humans constantly picking data up off the floor and dusting it off. The Windows programmers I worked with were great at creating rough-and-ready GUIs to help human employees do their jobs, honestly truly great at the "make users awesome" aspect of programming, but writing software that worked well enough to run without a human operator was just something they had never had to do. They couldn't understand why you were worried about some weird case that would mangle 0.1% of records, because in their mind that was the kind of thing that an expert user fixes up in Excel every Friday before the reports run over the weekend. A customer sends a payment after we've written off their debt? Never heard of that happening, but if it does, accounts receivable can escalate to Liz and she'll figure it out. February 29th? I think they know better than to run imports on a day like that, but to be safe, we'll remind them.
Those mushy "Internet" programmers inherited a legacy that was rooted in a requirement to be robust against nuclear war[1]. The mind boggling difference in ambition between surviving armageddon and "they know better than to run the software on February 29th, lol" made for a huge difference in culture and made me look up to the "Internet" programmers as intellectual role models while respecting the business-oriented Windows programmers as scrappy, context-sensitive, tactically oriented commandos.
[1] I'm not sure to what extent this was real, but the belief was real.
So, the thing is, I'd argue a focus on outcome with good technical skills to deliver it, makes you the 1% developer.
Take the POS example. You're helping real people be more efficient, and you've literally got better things to do than be a software purist (no offense, I have the same problem). People are relying on you.
Meanwhile, you're serving the hospitality environment, where multiple devices might be run around a restaurant and end up down a beer garden, all the while being expected to have every order on every device, because a table may call a wandering staffer, whilst having spotty AF network connectivity and processing trillions of transactions a year.
Suddenly you have a real life use case where your DS&A knowledge is useful, a real reason to be efficient on the wire, the device, and the server, and improvements in architecture can remove bad states, bugs, and improve the lives and fortunes of real people.
If being a 1% dev means profiling people and serving them ads, or writing pure code that nobody uses, then I'm quite happy in the 99%.
Disclaimer: I'm a CTO of one of those POS startups. :)
>>So, the thing is, I'd argue a focus on outcome with good technical skills to deliver it, makes you the 1% developer.
I didn't read the article the same way as many here did - my take was that he was talking about the 99% of developers who don't need the newest/leading-edge technologies because they are not solving for those problems (massive scaling for example), because the business they are working with/for simply will never need to worry about that.
I didn't take it to mean that some of those same developers solving those '99%' problems are not individually top 1% developers - you can be a top 1% developer imo, and still use tried-and-tested technology that is considered 'old' or boring.
Using the right tools does not make one a top 1% developer; a top 1% developer will shine no matter what tools they need to use; they just use them better.
99% programmers need to pick up stuff, carry stuff, and throw stuff. 1% programmers are like circus jugglers - the same basic thing, but taken to a breathtaking extreme. They are skilful and impressive, but the scenarios where you need them are rare.
Those scenarios are rare if you have gotten used to what the 99% can and can't do and therefore just dismiss many solutions without thinking since you know the 99% can't do those things.
Of course your competitors will do the same; you can't scale up an organization of top 1% professionals, so you don't really need them to compete in the market, but you do really need them to solve a lot of problems. Just that today those problems aren't being solved in most places.
>can't do and therefore just dismiss many solutions without thinking since you know the 99% can't do those things
Is that assessment accurate to reality? How would we know if the bottleneck for most businesses is "my employees aren't good enough to solve hard technical problems" or "most of our problems aren't hard technical problems"?
You'll probably get a long list of stuff that includes things such as:
OSes, compilers, game engines, distributed processing frameworks, etc.
Which as you'll note, are infrastructure things. Things which need 1/100th the manpower (at most!) to maintain.
While everyone else is busy churning out user-facing apps.
You don't need to do that stuff; your career can be extremely successful without ever going near those things, especially if you are not interested in them.
That's the thing, I think a lot of those are, if not solved, at least problems where a lot of really well established, good solutions already exist.
I could, I guess, tackle them. But I've got an appreciation that what's already there is already pretty great.
I suppose that's maybe the answer to my question. You don't need to be 1% to solve a problem as well as the solutions that already exist.
You need to be the 1% to improve on the current state of the art.
Which in a commercial environment, as you can imagine, is exceptionally rare. They are more about finding unsolved problems than iterating on problems already solved by FOSS.
For various reasons (usually control, but frequently spun as "needing to be cutting edge"), some groups or companies do reinvent those wheels and in some lucky cases we get progress. For example Google has recreated several of those things and especially in the data storage field they've probably moved things forward.
But in general I agree with you, the crushing majority of developers out there won't work on those things and won't ever even need to.
It's also a source of not always deserved elitism from the people that work on those things, and I definitely dislike that (there's enough elitism in this field, as-is).
Elitism is driven by ego and self-actualization though.
I think there's a lot of people who are highly, highly motivated to work on tough problems with competent co-workers.
Those emotions are less driven by the domain itself, and more about the fact that those people have succeeded in finding meaningful work with co-workers of appreciable worth.
The tricky thing I find is finding that. Far too many regular projects are focused on mobilizing the dumbest blocks of wood towards a goal. Without actually investing in the human beings doing the work.
I think once we regain a culture of actually investing in people who aren't in the top 10% in their field, things can change. Currently though we do not have that culture.
Building a high availability system? The downtime (especially scheduled downtime) of many of these "99%" services is shocking compared to the cutting edge consumer services we're used to. The "hug of death" is an affectionate meme that basically describes the remarkably common scenario where a "99%" website goes down at the precise moment when it's peaking in popularity and profitability.
Or how about high performance? Google famously knows that every millisecond of delay strongly impacts user retention and bounce rate, yet you wouldn't know it browsing many "99%" services. Achieving consistently low end-to-end latency is really hard regardless of scale, it's so easy for performance issues to creep in from every corner of your stack and accumulate.
These are hard problems and IMO they are rarely solved well by just stapling together some useful crap.
A hug of death when your product has peak user interest...doesn't materially impact users? Your users don't care about multi-hour or even multi-day downtimes in "99%" industries like...healthcare and banking?
What a perfect demonstration of "Those scenarios are rare if you have gotten used to what the 99% can and can't do and therefore just dismiss many solutions without thinking since you know the 99% can't do those things."
Yeah I wonder why these orgs always find these XXX hours & $$$ costs cost-prohibitive even though the benefits have proven to be substantial in cutting edge tech industries. It's almost like they typically dismiss smarter and more cost-effective technical solutions superior to naive "throw hours and hardware at it" approaches, because 99% can't do those things.
> Your users don't care about multi-hour or even multi-day downtimes
Should ask Reddit.
And yes, it's a sad state of affairs, but an hour of downtime in healthcare isn't all that uncommon. Convincing administrators to ~triple their hosting costs to avoid the occasional outage is pretty hard to do.
I mean, I recently watched a documentary on the 2003 east coast power outage, and one of the primary causes was the single computer in charge of issuing alerts (in an alert-based workflow system) going down.
I'm not entirely sure you're responding to my main point. The point is that tripling hosting costs is the "99%" naive solution to avoiding outages. The point is that smart people invent cost effective solutions to outage issues that don't involve tripling hosting costs. The point is that "99%" are ignoring these smart technical solutions because they can't do it, but they are still problems nonetheless. The point is that these smart inventors are not "circus jugglers" as derisively described by a parent comment, they provide real value that the "99%" reflexively dismiss because apparently the best solution to outages they can muster is to triple hosting costs, real value that you don't get by just stapling useful shit together.
The point is that when someone asks "As one of the 99%. What are some of these problems exactly, that me and my extremely large toolkit of cool things I can curl from Github can't solve?", the answer is "so you can build a cost effective system that doesn't take down the east coast power grid because a single computer goes down".
Yeah, reddit is notorious for its downtime issues. They're notorious because it's actually rather unusual in the consumer space to be so bad, they kind of suck compared to the majority of their high tech competitors.
The latest Microsoft stack is easily one of the most productive development ecosystems that exists today.
Just look at the default contents of .NET 6 vs everything else out there and it doesn't even seem fair anymore.
We build a product for financial institutions that serves interfaces to multiple classes of devices and integrates with 15+ different 3rd party systems. It has to address concerns of multiple lines of business and regulatory regimes simultaneously.
Guess how many non-Microsoft dependencies we need to get this job done? I'll give you a hint- it's less than 5.
Any sort of ideological hang-ups are dashed to the rocks in my mind when I see how amazed the customer is with the final result. All of that principled idealism melts away after receiving their sentiment.
I think it's much more profane - Microsoft didn't understand what the real world needed, they just leveraged a monopoly (that IBM let them establish) to frame what a computer looks and feels like.
They shaped the world to their liking and now it looks like a match. duh.
I'd argue the opposite. All Microsoft did was understand what the world needed. Were they successful because of their great marketing or their flawless coding? No. Did Windows spread like wildfire because it solved real world problems? Yes. Is it one of the most, if not the most, influential pieces of software ever made? Yes, no matter how dull and incompetent people might say it is. For the accountants out there, school teachers and doctors, it was magic.
Microsoft wasn’t the clear market leader in the 70s and 80s. Look for example at the spreadsheet.
Multiplan was losing against Lotus 1-2-3. (1-2-3 being the VisiCalc killer itself)
Microsoft first launched Multiplan as its spreadsheet software; the Macintosh version (1984) was something like the 22nd or 23rd port of Multiplan! Then in 1985 came the very first version of Excel (Macintosh only).
It was an incredible step up from text-screen spreadsheet software, but only on Macintosh, so Lotus didn't care.
Then Excel 2.0 got released (1 year before Windows, using the Windows window manager underneath!) and people who saw it at the time remember thinking: "Lotus was toast".
I studied the spreadsheet applications' design history to create my class History of Tech Design [0] and got a totally new perspective on how good the Excel team was at getting things right.
They were the market leader for PC operating systems since they entered the market with MS-DOS and managed to preserve that monopoly after the transition from TUI to GUI.
And later they expanded into applications as we know today and built monopolies for email, texts, presentations, spreadsheets and browsing the information superhighway. All prior art, indeed, near zero innovation.
They didn't fall behind all that much, they were just too expensive. The PC platform used to be expensive too, but then became quite a bit cheaper in the 1990s.
Yep, that's the answer. But even in 1984 already, the year the Mac was released, there were PC-compatible machines like the Tandy 1000 for less than half: $1200. And the difference became bigger with time. The Mac was $2495 and AFAIK the lowest it got was around $1400 in the 80s. Plus, the Mac was not upgradeable.
Even nowadays, go to an electronics shop and try buying a laptop without the Microsoft tax (without the spyware which OEM Windows is). Since their beginnings they used predatory monopolistic market practices and as a result they could fart out miserable things like their entire development ecosystem.
> Everyone for whom coding is something that gets in the way of their real objective.
While tech is (allegedly) evolving as fast as ever, the problems to be solved often are not. Sure, scale is always an issue and you want to reduce maintenance. But how many projects do you have that have to scale to millions of users?
> Visual Basic, Access
Very true, but those tools have evolved as well. Sure, everyone laughs about people not using SAP or Salesforce and instead using a self-made CRM. I have seen terrible things here, but also good ones with more features than both other tools could provide. And it made working far more efficient. Of course Access is only used as a front end today, but the fact that there are no real alternatives is a certificate of failure for those that criticise these tools the most (I did so as well). And it is surprisingly adaptable to new feature requests.
Yes, that is the usual way how it is used today. It is used as a frontend to quickly generate queries against a backend SQL server and make some fancy pivot tables.
Very rarely is the data stored in the Access file itself.
As a kid, Visual Basic allowed me to write my first shareware, ship it and get sales (not a huge amount, but I think I made about 6000 USD total, which was great as a high school student). I'll agree that it was not a great tool, there were many limitations, very stupid bugs (Visual Basic had very weird behavior in non English Windows) but it got the job done and introduced me to programming.
Well, fine. They feel that way, but it isn't really true. They're going to be the ones maintaining that software for a while and even a basic Google like "Visual Basic best practices" would save them mountains of time in the long run.
And Visual FoxPro. Perhaps the most underrated tool for SMEs. And possibly one of the reasons why CRM and ERP have a hard time breaking into those markets.
And this is going to be rant-ish..
Then somewhere along the line came Google. Yes, I absolutely blame them (in terms of mentality). Instead of Microsoft, which at the time was the ultimate evil company, providing tools (not even programming or code, but "tools") to 99% of developers, we got Google, representing "real nerds" on the opposite side of the spectrum, telling you these technologies are utter pieces of crap and ours are the greatest. The proper way to do it. Embrace complexity to its maximum. And of course a lot of people were sold. Nerds could finally get rid of all the ugliness from computing. Visual Basic? Ugh. Learn a proper programming language. PHP? ROFL.
And of course there is Resume Driven Development. The actual technology in itself doesn't matter. What matters is whether MAMAAN are using it. So you have a better chance of joining them.
There is no fun in computing anymore, not for the 99% developers, and even more so for those who only want to be 99% developers.
Your argument suffers from the fact that Ballmer almost tanked the company, so perhaps this rosy look back isn't the whole story; you need to focus a bit on "get things done" and a bit on "maintain it for the future". It's not as simple as FAANG=good or FAANG=bad.
Unless you're happy that all those 99% developers have created towers of COBOL that run the world and they can't touch or upgrade.
I think there is really not much to add to this, I just wanted to say thank you for this; a lot of people need to read this as a wake-up call that markets hammer on you. The true feedback loop has no feelings, no compliments - just pure and direct feedback by either using the product, or not, regardless of whether it's Kubernetes, Solana or <script>
Partially I agree, partially I don't. I would say the long list of failed operating system versions (Vista, Windows 98, ME) that were causing a lot of issues for users hit all those small companies in an unwanted way. Not that there was anything better at that time, Linux on desktop is still very problematic, but releasing crappy software will cause issues one way or the other.
Yep, the world is full of inexperienced, arrogant, self-congratulating developers who think they know it all while having zero clue about what is really going on. They are easy to identify. The moment they say things like "all X developers are crap" or "all software developed by super successful company X is crap" or "developers using technology X are stupid" or "smart developers who get it use X" you know you have identified one. Run (don't walk) away from them. They are toxic and should be avoided at all cost.
To be fair, slack kinda messed up themselves. I am still astonished that they never built video calling.
Like MS did use their existing sales channels to stop slack growing, but if slack at least had feature parity with Teams then it would have been a harder sell.
Microsoft also tried to M&A Slack before building Teams. As far as exits go, Slack may go down in history as a particularly not-smart one: they were allegedly quite rude in refusing the M&A offer for what might have been a "golden" exit, encouraging Microsoft to build a competitor in a hustle, and then their later exit to Salesforce was a lot less "golden" and at a lesser valuation due in part to the competitor they helped create through their alleged rudeness.
(ETA: Of course, easy to spin things the other direction and read Microsoft's offer as a bully move "join us or we'll build a better you without you". The reality is somewhere in the middle of the two extreme perspectives.)
Honestly I think more than 1% of developers code for fun, understand at least basic unix, and can clearly see when a system is shoddy and dumbed down.
Maybe a majority don’t, and my experience is definitely biased. But I’ve met many programmers of diverse backgrounds, and a lot are very passionate and research the newest and best technologies on their own. They probably don’t read Hacker News, but I think “hacker” programmers are more common than you realize.
Greatly successful projects often have horrible (from the developers' perspective) codebases that just barely don't fall apart. They call it technical debt and fix it later when things stop moving fast.
I think it's happening because successful projects focus on the product delivering on the reason it exists (i.e. facilitate file transfer in the most intuitive way possible), and to do that as quickly as possible for the lowest cost possible, engineers hack together solutions derived from the existing tools without focusing on technical or ideological perfection. Also, they cannot focus on the technical stuff anyway because the needs and direction change all the time, therefore it's not possible to crystallise an optimised and well built solution anyway.
What Microsoft or other companies do is build versatile tools that engineers can bend and stick together to accomplish the tasks at hand very quickly, and if it looks like the thing is here to stay they can design and implement an optimised and elegant solution later on. At first it may look ugly but it is usually original work done by domain experts who are exploring technical solutions to the problem at hand. Once the solution is found, experts in computer programming and architecture can step in and make it elegant, but that last step is not needed for the vast majority of solutions.
An example for this is UK's covid case tracking early on. Apparently they quickly implemented a central Excel spreadsheet that would collect the CSV data sent from the test centres. Unfortunately, the solution they implemented was way too hacky and they lost data once they reached the limits of the spreadsheet format they had chosen. Had they chosen a better format, their solution would have worked up to a much higher scale, and once they had a better understanding of the nature of data collection etc. they could have implemented a clean sheet solution with "perfect" code and scalability, maybe years later for the future pandemics, and written blog posts about the enormous challenges and their ingenious solutions. They couldn't have started by first building the perfect data collection solution because they wouldn't know how things would pan out, and if they had tried to force their way (i.e. mandatory formats for the test centres) it would have been too big of a project.
> An example for this is UK's covid case tracking early on.
Your choice of the UK's Covid tracking spreadsheet debacle to illustrate your point is curious, to put it politely.
As The Register [1] (and many others) pointed out at the time, it was the wrong tech choice, executed slowly and at huge cost, by the wrong people with little knowledge of better choices, and directed by management and government leaders with track records of lying and incompetence.
> They couldn't have started by first building the perfect data collection solution because they wouldn't know how things would pan out, and if they had tried to force their way (i.e. mandatory formats for the test centres) it would have been too big of a project.
I have a different view on every part of this sentence!
I'm not saying anything different, actually. As I said, had they chosen something that's not that unreasonably limited, we would have never known about it since there wouldn't have been a cockup. When the policy (back then nothing was clear) is in place for good, then they can build something elegant and efficient. IMHO, they should have had it before the pandemic even was a thing, but that's totally another discussion.
You'd be surprised at how fast you can create something with an SQL database, a well used backend framework (eg. Rails) and a well used frontend framework.
The things big tech companies do are usually to avoid scaling problems. In particular scaling on the engineering side. The above can scale to billions on the user side if needed but 10k engineers constantly updating and messing with a traditional SQL schema doesn't work.
You'd be crazy to try to replicate something like the big tech companies database storage systems when you're just starting out. You don't have 10k engineers. You may think "but i want to scale to a billion users". Well you still don't have those problems. The scaling problems big tech companies have are scaling to 10k engineers.
It's also that, in the past, the capacity of our monoliths to scale was far lower.
There are many apps that need, say, a 100GB database in 2022. Those apps also needed a 100GB database in 2007, when horizontal scaling was the hottest thing around.
Nowadays however, Moore's law has steadily overtaken an order of magnitude of use cases from 2007.
Maybe 1/10th of the 2007 use cases still need that kind of enormous, big tech scaling.
Scale vertically and you're still a single point of failure away from a big, potentially fatal outage. The coding and architecture can often be trivial though. Definitely good for internal systems and more green programmers.
Scale horizontally and you're often constraining and/or complicating your design/architecture, but you can handle outages far better, if not seamlessly. It's much harder to get right, so it's usually best for more senior teams that have a specific need for it.
It's wrong to conflate horizontal scaling with availability. They are two different things that solve two different problems, and solving one doesn't necessarily mean you've solved the other. You can scale horizontally in a way that decreases your total availability, e.g. by sharding.
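A back-of-the-envelope sketch of that last point (the numbers are invented for illustration): if a query has to touch every shard, per-shard downtime compounds.

```java
// Hypothetical numbers: each shard is independently up 99.9% of the time,
// and a request must touch all 10 shards to succeed.
public class ShardAvailability {
    public static void main(String[] args) {
        double perShard = 0.999;   // availability of one shard
        int shards = 10;           // shards a single request depends on
        double allUp = Math.pow(perShard, shards);
        // Prints roughly 0.990: ten 99.9% shards give ~99.0% overall availability,
        // i.e. this flavour of horizontal scaling made availability worse, not better.
        System.out.printf("availability with %d shards: %.3f%n", shards, allUp);
    }
}
```

Capacity goes up, availability goes down; it takes replication (or routing that tolerates missing shards) to win it back.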
> Scale vertically and you're still a single point of failure away from a big, potentially fatal outage.
When folks talk about scaling up for stateful resources (database), this typically includes having stand-by replicas to which one can fail over within seconds.
Scaling up the DB and scaling out stateless compute nodes is more than enough for many, many use cases. Having a DB machine with 1TB of RAM isn't anything special these days.
> Scale vertically and you're still a single point of failure away from a big, potentially fatal outage. The coding and architecture can often be trivial though. Definitely good for internal systems and more green programmers.
How fatal[1]? Not all outages are fatal. Scale vertically and you'll have some downtime if any node between (and in) your user and your system goes down. For 999 out of a thousand businesses, it won't be fatal.
If your business is so fragile that the first 24 hour outage kills it, then it would have been killed anyway in the near future.
[1] I know, this is like asking "how pregnant?"...
I've seen more outages (or worse - silent data corruption, inconsistency, or deviations from the intended business rules) due to home-brewed, horizontally-scalable engineering playgrounds having a flaw than a good old Postgres on bare-metal hardware suddenly going dark.
> Scale vertically and you're still a single point of failure away from a big, potentially fatal outage. The coding and architecture can often be trivial though. Definitely good for internal systems and more green programmers.
Setting up something like a read replica on RDS (or frankly even a DB you manage yourself) is pretty trivial, especially if you're using it solely for availability and not actually reading from it.
This really depends. The majority of services out there that can get away with a single box from a scale perspective can probably afford just to take the downtime.
The best part is when a company is building a line-of-business app which they know will get maybe a hundred users and end up throwing tools and processes at it that are complete overkill.
I would not be surprised, but it easily becomes a trap once you grow.
With CRUDs / "Smart UIs" I'd always sign a contract in blood / note in an ADR that we chose this to prototype quickly, but we'll stop and rebuild it before someone asks if he can "save his 20-field grid filter he has to click through every day".
Or (another Greg Young reference) - optimize for the ability to delete your code.
> Just like influencers in any other field, developer-influencers often describe a reality that is aspirational even for their own companies. It may be true that people writing about ideal processes live in an idealized situation where it’s possible, in which case they’re the exception that provides the rule. But most of the time — even if it’s true in one part of an organization or at one moment in time — this reality does not hold across their entire company and forever more.
BINGO! This really resonated with me. It's a good reminder that developer social media is going to have some of the same things going on as regular social media, like unrealistic comparisons to idealised snapshots.
> Too many people believe that aiming for good software quality means you need to fully adopt that new technology, whether it’s microservices, GraphQL, or distributed tracing. You’re not done until you’ve switched fully over to the ideal technology.
This. We are constantly presented with new and shiny things. And then, barely a year later, there is the next ideal technology. And the next, and the next, and the next.
We have seen this with everything: Languages, coding standards, paradigms, editors, organisational methodology, interview methodology, frameworks, databases, security, design basics,...the list is endless.
But here is some truth:
C is still written. As is pure JS. Imperative/Procedural code never went away. Websites on servers we can physically touch are still deployed. People still write working implementations for almost everything under the sun from scratch. Information-Dense, slim, efficient interfaces that get jobs done are still there. As are monolithic architectures. Good 'ol RDBMS still work great. bash is still the dominant shell. grep/cut/awk are still great for analysing logs. Water still flows downhill.
"new" and "better" are 2 different words, with different meanings. Something that's new could be better. But he fact that it's new, in itself, doesn't automagically make it better. And even if its better in one scenario, its doesn't have to be better in all scenarios.
Indeed, that is why even though I regularly rant about C or its influence in C++/Objective-C, I am not naive enough to think it is going to go away until we switch to something radically different in computing models (away from UNIX, quantum, whatever), hence why at the same time there is very big value in achieving some improved security while using those languages, regardless that we still cannot make it 100%; 90% there is better than nothing.
> new, in itself, doesn't automagically make it better.
Indeed. It’s often the case that new is worse, and the authors and influencers pushing the new don’t know it yet, or sometimes they do but they downplay it, knowing it will take time to get better, and underestimate how long. The reason old defaults to better is simply because it’s been used more; it automatically solves more problems because by the time it’s old it’s been built to solve more problems and its weaknesses have been fixed or patched or shimmed or worked around. Old isn’t automagically better either, of course, and things can get too old and too shimmed, but if the new thing is solving the same problems and being compared to something well tested and used by many, it’s very unlikely to be better until it starts getting old.
Maybe we should take a minute to ask: better for who? Better for customers is different than better for programmers. New is both more fun and easier to maintain. New is better for programmers. Old is better for customers, it’s more stable and changes more slowly.
There is something pretty important to be said for being an early participant in a project: programmers who are involved in building and integrating new things automatically become more productive than programmers who get hired to maintain old things. I’ve seen this first-hand in several different ways, the most stark of which was selling a company to programmers better than me, who still took years to become productive because they didn’t know how everything worked and didn’t want to break anything.
There is also something to be said for replacing old with new. I’ve seen first hand teams tear down old engines because they felt crufty and build new ones, with the promise that it was going to be fast and easy and that they had learned their mistakes the first time. Two completely separate companies launched into a 1 year rewrite that ended up taking 5 years, costing tens of millions, and making a good deal of the same mistakes over again before they were done. Both cases were failures to evaluate how well things were working in the old system; they were blinded to the overall success by the long list of rather minor problems.
> a lot of what most “developer influencers” say is fairly aspirational. Their own companies don’t necessarily do things as smoothly as they preach to others.
There's a pride element there: developers don't want to admit that they work in less-than-stellar conditions.
Some developer talks also double as recruiting events, and you're going to turn people off when you mention that the test suite takes a day to run and requires the use of a fax machine.
One of the things I've learned the hard way is you're never going to attain aspirational software purity in a corporate, commercial software environment. I used to say naive things like "Software should compile cleanly with every warning enabled, produce no lint warnings, have zero memory leaks, and zero crashes." But the only place that is true is in my own personal hobby projects that I am under no pressure to release and therefore have years to sand and polish with love and care. I moved out of software dev (into more product/projecty roles) largely because I just couldn't bear to release software into the wild that I knew was flawed, just because of the stupid deadline. Ironically, now I help make/enforce the deadlines so there's the whole duality of man thing...
I've had a similar experience, but I keep on producing less than perfect code even for my hobby projects. What we think of as "high quality code" usually boils down to maintainability. We like code that is easy to understand because it is easy to maintain and evolve further.
But maintainability is not an end in itself. The software also has to do what it is intended to do. That is way more important than how the code looks, in my (current) view. Therefore I write some pretty sloppy code. The reason not to write "perfect code" is also that a lot of code gets thrown away at some point. Then all the effort that went into polishing it was wasted.
I've seen more sloppy code being kept and built upon, than any code being thrown away.
Once it is sloppy, it does not get easier to make code unsloppy as time goes by. Of course, "not sloppy" does not mean perfect; but it does mean understandable, testable, maintainable and extensible.
I agree on the importance of always striving to write not sloppy code. It is especially critical if you are building a system which must scale and must be maintained over a long time. Trying to avoid sloppy code is one of the things that can make programming a rewarding if challenging experience.
> understandable, testable, maintainable and extensible.
Those are all adjectives, just like "sloppy" is. So we must each make the decision as to how much effort we will invest in making code MORE understandable, MORE testable, MORE maintainable and MORE extensible.
If the value of any of those properties is 0, then the code is definitely sloppy.
In a corporate environment I insist on treating warnings as errors. They are warnings for a reason and they will lead to hard-to-debug errors in a sufficiently large codebase/team. Enabling it after the fact is a chore, but once it's working it's not much extra work.
Linting I only use for adding/correcting a file header, the order of methods, or other things that it can fix automatically.
> you're never going to attain aspirational software purity in a corporate, commercial software environment
Fun problem a few days ago: a slight design flaw allows a class of bug that a typechecker can't currently catch. Fixing the design flaw involves updating 25,000 lines across a few thousand files. Only 6 of the 25,000 lines demonstrated the bug (~1 in 4,000).
If we wait two months we can build a tool to catch the errors (and many others like it) without updating the code. The design flaw stays, but its impact is nullified.
Reminds me of a frequent problem I observe with developers and operations teams moving Java workloads to Kubernetes. They focus on scalability, but miss the whole point of how the runtime (and not exclusive to the JVM) behaves - and prefers - when given proper resources.
Suddenly, you see Kubernetes clusters of 2 to 4 vCPU VMs/Nodes and containers with limits of 1,000 millicores (or 1 vCPU), and then the team solves the performance problem with dozens, sometimes hundreds of replicas for one particular microservice. Many developers don't even understand the impact this has on the JVM when running on a single core (yep... you get Stop the World GC - aka Serial GC - by default).
And then, the dev team decides to move to a new language because of performance and cost issues. And that by itself just brings many other problems.
All they had to do was keep the same amount of CPU and memory, but with fewer VMs and replicas and more vertical resources. Depending on the workloads and the system, it is even possible to reduce the cost.
All this is caused by the push in the industry for companies to go Cloud Native.
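A minimal sanity check along those lines (the class name is made up): run it inside the container and see what the JVM's ergonomics actually picked given the cgroup CPU/memory limits.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Minimal sketch: print what the JVM ergonomics decided based on the container's
// visible CPUs and memory. On a 1-vCPU, small-memory container the collector list
// will typically show the serial collectors ("Copy", "MarkSweepCompact") instead of G1.
public class JvmErgonomics {
    public static void main(String[] args) {
        System.out.println("CPUs visible to the JVM : "
                + Runtime.getRuntime().availableProcessors());
        System.out.println("Max heap (MiB)          : "
                + Runtime.getRuntime().maxMemory() / (1024 * 1024));
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("GC in use               : " + gc.getName());
        }
    }
}
```

Roughly speaking, with fewer than two visible CPUs or a very small heap the JVM falls back to the serial collector, which is exactly the single-core behaviour described above; bigger nodes with fewer replicas (or explicit flags like -XX:+UseG1GC / -XX:ActiveProcessorCount) are the kind of fix the parent comment is pointing at.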
From seeing the difficulties containerizing java application servers at my own workplace I would say the hard part is making the application server stateless without sacrificing performance. Much of the older java (EE) ecosystem seems to assume a stateful application server that scales up, instead of a stateless one that scales out.
Not that I think stateless is inherently superior, just that containerizing and scaling out sort of assumes moving state out of the application server and into redis, postgres, mongo, elastic, kafka, rabbitmq, …
I have amateur level JVM knowledge, but what you describe here seems intuitive. It feels like a waste to split a JVM application like this. I've read a bit about new advancements in GC design; they've become highly parallel and layered. And then there are all sorts of JIT optimizations that may happen in a more comprehensive, larger service, right?
Any stateless service can be scaled horizontally. But the fixed cost of running the JVM is higher than the runtimes in other ecosystems (say a rust application as an extreme example). The JVM has had an outrageous amount of effort put into having it scale well for large heaps with high CPU counts.
As the OP points out, GC in java-land isn't worth writing home about on 1 vCPU. When you throw a 16GiB heap at it though, with 16 vCPUs, then you'll unlock some of the really interesting optimizations and garbage collectors that showcase the JVM in an advantageous light.
So it's not that the JVM can't scale horizontally - that's a matter of system architecture - it's that scaling it as 100 tiny nodes is far less impactful than having 5 large nodes.
I don't know java, but that's not the message I got from the comment.
It looks like with java, 3 VMs with 2 cores are better than 6 VMs with 1 core, which may be forgotten when configuring equivalent kubernetes services.
Their point is that you can do this, and probably even reduce cost, but you need to decide on the atomic resource unit and whether that makes sense (like having a measly single-core, low-RAM instance/container).
99% feels like an exaggeration. I've talked to many developers from non-FAANG companies, and it isn't at all uncommon for them to be using GraphQL or serverless. I guess there is some selection bias since they are usually applying to a unicorn, so they are probably more likely to come from environments that fit the "1%".
Whether this distinction is relevant to you depends on where you sit. If you are a startup selling developer tools, by all means think about the 99% developers, but also know that a lot of them are in environments that don't spend a lot on tools and are averse to exploring new technologies. If you are a developer in one of these environments, well, the standard advice on Hacker News is already "You're not Google."
I feel like some of this is a bit fatalistic. The 99% can follow a DevOps playbook if they realize its value, and it's cheaper to have DevOps than to not have DevOps, in the anything-more-than-short timeframe. The 99% can certainly have test coverage standards for new code. Somehow, we moved from a world where the 99% didn't use source control, and now they do! Some technologies and practices are so impactful, you should aspire to them no matter what your environment is (e.g. code review, CI/CD).
The 99% concept also hides a lot of important details, as it's defined by exclusion. The choices you need to make for an early stage startup, a mature WordPress shop for small businesses, and a legacy mainframe team in a F50 are as different from each other as they are from FAANG.
Oh woe is me, etc etc, but you know what? It makes the customers money, which makes the company money, which makes me money. I've gotten used to eating and living indoors, so this is a good thing.
It's not serverless, there is no GraphQL, it's not within a mile of the nearest Rust compiler and it could not be implemented in golang even in my wildest fever dreams, with or without generics. There isn't even any machine learning!
We need to keep in mind the trillions of lines of legacy code out there that is still being maintained, refactored and rewritten, every day, that is not a new product or a groundbreaking new paradigm. It's just the internet, and the hacky PHP and smooth perl5 that keeps it running.
So yeah, 99% might be an exaggeration, but if I was making a new IDE (or whatever) today, I'd sure as hell target WordPress before I started making up my own buzzwords. I can't say for sure if that's because of my perspective or because of some objective data, but I do know that it would be a product with a hell of a lot more customers.
What you seem to be implying is that there isn’t a profitable market for dev tools for the “rest” of developers (whatever the % might be). 2 observations:
* it’s possible the market isn’t lucrative enough for a VC funded SV company; perhaps other models in other locations might make it more cost effective to serve that market
* putting the profit motive aside: I think there is a lot to learn from understanding how the 99% of developers work and how they may be enabled by tooling to make them more productive. The solutions to the problems faced by that group may actually help build better technologies.
… I'm in a small company, and we "use" serverless. I've never once asked myself "Should I move to serverless?" It's just whether, for some application, it's the right tool.
We run a few Github bots & a function that updates a Route 53 record on serverless. (Security didn't want to give permission to R53; "too much, too broad"; a lambda that exposed only the necessary action to the service that required it was the compromise). But it's all extremely low-frequency stuff with no or little state, where the costs of a VM would far exceed the costs of "serverless". It was the right tool, for those jobs. (& it's usually niche stuff… I'm trying to think if I've ever worked somewhere with something core on Lambda or the like…)
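For a sense of how small that kind of single-purpose function is, here's a minimal sketch of a Lambda like the Route 53 updater described above (this is not the actual code: the handler shape, zone ID, record name and event format are all made up, and it assumes the AWS SDK for Java v2 plus aws-lambda-java-core).

```java
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.route53.Route53Client;
import software.amazon.awssdk.services.route53.model.Change;
import software.amazon.awssdk.services.route53.model.ChangeAction;
import software.amazon.awssdk.services.route53.model.ChangeBatch;
import software.amazon.awssdk.services.route53.model.ChangeResourceRecordSetsRequest;
import software.amazon.awssdk.services.route53.model.RRType;
import software.amazon.awssdk.services.route53.model.ResourceRecord;
import software.amazon.awssdk.services.route53.model.ResourceRecordSet;

// Sketch of a single-purpose Lambda that upserts one A record.
// Zone ID, record name and event shape are hypothetical.
public class UpdateDnsHandler implements RequestHandler<Map<String, String>, String> {

    private static final String HOSTED_ZONE_ID = "Z0000000EXAMPLE";              // hypothetical
    private static final String RECORD_NAME = "service.internal.example.com";    // hypothetical

    private final Route53Client route53 =
            Route53Client.builder().region(Region.AWS_GLOBAL).build();

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        String newIp = event.get("ip");  // the caller supplies only the new address

        ResourceRecordSet recordSet = ResourceRecordSet.builder()
                .name(RECORD_NAME)
                .type(RRType.A)
                .ttl(60L)
                .resourceRecords(ResourceRecord.builder().value(newIp).build())
                .build();

        ChangeResourceRecordSetsRequest request = ChangeResourceRecordSetsRequest.builder()
                .hostedZoneId(HOSTED_ZONE_ID)
                .changeBatch(ChangeBatch.builder()
                        .changes(Change.builder()
                                .action(ChangeAction.UPSERT)
                                .resourceRecordSet(recordSet)
                                .build())
                        .build())
                .build();

        route53.changeResourceRecordSets(request);
        return "updated " + RECORD_NAME + " -> " + newIp;
    }
}
```

The function's IAM role only needs route53:ChangeResourceRecordSets on that one hosted zone, which is the narrow permission the compromise hinges on.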
But also got tons of VMs. Lots of VMs. Probably too many VMs.
> You have to have a VM to run a _single_ service?
This is extremely common, in my experience. (Like it's a default tendency of nature.)
So, the VMs example was my previous employer. And yep, not really any multi-app VMs. There were some that did do a couple of things, but it wasn't great: it meant the deployment and dependencies of anything sharing a VM were all interdependent … and often under-specified. Painful.
It's why Kubernetes exists, really. (Which is funny given how much hate it & Docker seem to get on HN.) In my current employ, we do use k8s, and while much runs on it, and it's nice, we still have some single-service VMs. I'd like to move them all into k8s if at all possible, but it is not always possible. Or it's not always that time gets dedicated to it.
> Will their requirements ever scale to a point where it actually makes sense
And even that. Let’s treat scaling issues later. Create something that can be scaled horizontally dumbly, like a monolith you can run on n servers, so you can scale without losing money, and then scaling can become your problem.
And that’s probably never if you are b2b, like a majority of companies, since your usage depends on and can be predicted from your sales team’s performance.
mod_php is a serverless compute platform. The better question, will anyone’s requirements ever scale to a point where it makes sense for them to NOT be deployed as a serverless service? FTPing source code to a shared hosting provider is about as simple a deployment story as it gets; people bring all kinds of incredible complexity and thousands of hours of work on themselves messing with daemons, init scripts, systemd units, VM images, containers, schedulers, etc.
I guess you could pretend mod_php serverless until it needs to be load balanced and then it turns out the code actually relies on writing to the filesystem...
I've used graphql at a small company because it solved a specific problem we had, which was how to deduplicate all our kludgy page-specific views and let front end devs write new ones easily. I've also written runbooks and playbooks and used lambda functions to grab webhook payloads.
The given examples are really weird to me because they're some of the few things from gigantic companies that actually work properly.
As one that usually works in the 99% developer space: when we adopt stuff like GraphQL or serverless, it is mostly because we are forced into it by new products, or they were the sales pitch to get new consulting gigs.
We've been using SQLite for 100% of our data persistence needs for the last ~5-6 years now. Our largest single environment is probably getting close to 500GB total size. Hundreds of concurrent users are no problem for us, even without these enhancements (we use WAL currently).
The biggest single trick I learned was to use 1 SQLite connection instance for the entire lifetime of the application. You can add orders of magnitude more throughput with this path. SQLite serializes writers by default, so don't duplicate the effort. You can even use the RETURNING keyword to grab insert keys without needing to lock over LastInsertRowId.
We also are close to 100% of business logic being executed via SQL queries as well.
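For anyone curious what that pattern looks like, here's a minimal sketch as I understand it (Java with the xerial sqlite-jdbc driver; class name and file path are made up). The RETURNING trick mentioned above needs SQLite 3.35+, so it's noted in a comment rather than shown.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// One process-wide connection for the whole lifetime of the application.
// SQLite serializes writers anyway, so opening a connection per request/transaction
// only adds file-handle churn without buying any extra concurrency.
public final class Db {
    private static Connection conn;   // the single shared connection

    public static synchronized Connection get() throws SQLException {
        if (conn == null) {
            conn = DriverManager.getConnection("jdbc:sqlite:app.db");   // hypothetical path
            try (Statement st = conn.createStatement()) {
                st.execute("PRAGMA journal_mode=WAL");     // readers no longer block the writer
                st.execute("PRAGMA synchronous=NORMAL");   // common WAL pairing; a durability trade-off
            }
        }
        return conn;
    }

    // Serialize writes in the application too, mirroring what SQLite does internally.
    // With SQLite 3.35+ an "INSERT ... RETURNING id" statement can hand back the new
    // key in the same round trip, instead of locking around last_insert_rowid().
    public static synchronized void execWrite(String sql) throws SQLException {
        try (Statement st = get().createStatement()) {
            st.executeUpdate(sql);
        }
    }

    private Db() {}
}
```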
Nice. That's basically my ideal work environment; if you happen to have an opening for a .NET developer who loves SQLite and databases in general, my contact info is in my profile.
I would just think of it as 1 global static SQLiteConnection instance. There's really not a whole lot to it. Like 4 lines in a class file to set this up.
I think what bob1029 meant was just that opening multiple connections doesn't increase concurrency so don't do it unnecessarily (by using a connection pooling library for example). With PHP you have one "process" per request, so you'd just need to make sure you only open a single SQLite connection per request (using the singleton pattern for example).
The traditional pattern of one connection per transaction is actually a strong anti-pattern when working with SQLite in particular.
This is because SQLite is just a file, not a network connection or named pipe. Obtaining and releasing file handles is a non-trivial task that takes significantly longer than reusing the same one.
That's what I meant with "opening multiple connections doesn't increase concurrency so don't do it unnecessarily (by using a connection pooling library for example)". I was suggesting to only use a single connection for the entire request and NOT to use a connection pooling library.
Now, PHP doesn't share memory between requests so there's no way to re-use the same connection across requests. That's just a sad fact about PHP and I don't think there's a reasonable way around it other than not using PHP.
PS: I haven't used PHP in over a decade so my knowledge might be outdated.
> Now, PHP doesn't share memory between requests so there's no way to re-use the same connection across requests.
That depends on the execution model/PHP SAPI. mod_php and php-fpm use long-lived PHP processes and so can support persistent database connections depending on the configuration.
Well, it's ultimately files on a disk, as well as queryable data. So probably pretty easily.
Logical database backups (I'd point to mysqldump/mydumper & myloader if we were talking about mysql), combined with physical snapshots & copies of the binary files.
500 GiB is a lot, but not too out there these days.
It's typically backed up by way of the hypervisor taking snapshots of the VM.
Longer term, we are looking at synchronous log replication so that no business data would ever be lost on a primary outage. Having an availability problem is acceptable as long as we can bring everything back up in a reasonable amount of time.
You're not the only commenter saying this, so I gotta ask: whatever VM that SQLite DB is on, is your business cool with a business disruption, data loss, or both when that VM goes down, when that AZ goes down, or that disk fails? Or am I missing something? If you have some sort of fail-over-to-last-backup plan, is that not just a distributed database with more steps, and why not something like RDS, or CockroachDB?
(From mine, it is an explicit requirement that we weather such events. I keep trying to keep global outages of our platforms off the requirements list…; so, IDK, perhaps your requirements allow a different approach.)
Mentioning RDS is an apples to oranges comparison. RDS is a fully managed service by a 3rd party.
Nevertheless, check out https://litestream.io - given that SQLite is just a file, it’s incredibly easy to replicate. Failover is coming in the next release, which is already in beta.
I mention RDS because it mostly solves the failover/continuity problem. While failovers on RDS are typically outages, they're also typically quite short lived, no more than a few minutes. That can work for many people/business's requirements. That it is managed means you (in theory) can rely on AWS to manage the underlying infrastructure. You just worry about the SQL. It is a "boring technology" choice, as the article that HN likes to post every now and then says.
Litestream: What are the durability guarantees? (Are there any?) What would make me pick that over RDS or Cockroach? "given that SQLite is just a file, it’s incredibly easy to replicate" — sorry, that just doesn't click with me. What about a file is "inherently easy to replicate"? I would not know how to implement something that wouldn't be O(n) with the file size. Ideally you'd just sync up the changes, but how do you get those? Normal DBs with streaming WALs (RDS/PG/MySQL) or Raft-based replication (Cockroach) have good answers there.
Expensify: I skimmed the article, but AFAICT, the article talks about scaling QPS to a single SQLite file, but nothing about replication/durability.
"[Litestream] runs as a separate background process and continuously copies write-ahead log pages from disk to one or more replicas." - https://litestream.io/how-it-works
Instead of just continuously firing off questions when you acknowledge you’re not even reading the articles linked to… Nonetheless, BedrockDB has replication and failover.
Can you really still call what Expensify uses SQLite? It is very heavily based on SQLite, but they say that they've "wrapped it in a custom distributed transaction layer named Bedrock". Feels like they've gotten much closer to a traditional DB architecture, with the related management overhead that people try to avoid by using SQLite.
> is your business cool with a business disruption
Every few years in Seattle, there's a big snowstorm, and everything just shuts down. We live.
If a service goes down once in a blue moon, I could catch up my email, spend some time morale building with my team, or, god forbid, just go home because work isn't that important.
I’d wager that many of the people pushing SQLite for 95% of use cases have not been involved in running a database for a large always-on business. Just because your SQLite database can handle the traffic doesn’t mean you’ve solved robust database infrastructure.
Now if they said postgres on RDS can be used for 95% of real world use cases, that I could get behind.
The customers we have today can tolerate some degree of outage. VM snapshots are usually the path.
Our roadmap has replication being done at the application level (i.e. business logic is aware of other nodes and policy around replication). Those specific entities which absolutely must survive would be synchronously replicated to additional witnesses on the network.
Note that this is about non-repudiation, not operational uptime. If we cared about always on operations, we would take a radically different path.
Abstractly, I think the role of ensuring business data survives some incident is not something to be pushed down to the database layer. When you pull this concern into business logic, you can produce far more robust solutions.
> is your business cool with a business disruption, dataloss, or both when that VM goes down
Most businesses seem to handle downtimes okay; some money[1] is lost, maybe.
[1] And if the business feels that the expected (negative) value of the downtime is less than the cost of mitigating it, they simply (correctly) ignore it. For example, most businesses don't build a separate highway to their building in the event that a traffic jam causes fewer customers to show up. The expense is not worth the savings.
Buy something like Nimble storage, and with the correct config your system is down for less than 30 minutes if something really bad happens. And no, that is usually not a severe problem for most businesses.
There is good middleware like ActiveMQ that can manage transactions in the meantime if your DB is on holiday.
A bit of planning makes you very resilient against threats.
The amount of IO we gained in the switch from spinning rust to NVMe SSDs means that SQLite's serialised 'concurrency' scales much further than e.g. Postgres ever could before.
This article didn’t resonate with me as much as I was expecting.
While the insights on developer influencers were sharp, the article itself felt like more of a reaction than an evaluation.
I’ve worked in many legacy companies. The rationale for staying on the tech stack they have, and the approach they take to DevOps, is not particularly well reasoned. Often they are experiencing painful consequences due to their adherence to old design patterns.
I wanted each paragraph to be more diagnostic, frankly more reminiscent of the wide ranging and interesting debate we find here (especially when the old guard shows up and speaks with real authority and wisdom on how the problems they solve don’t map to Kubernetes, etc.).
I’d really welcome counter-arguments to the point I’m making, so I’ll frame it this way:
This felt like the same kind of “playing to the crowd” that dev influencers do, just a different crowd.
I think the point that should be well taken is that if you're hoping to serve the greater programming community, you can't do so by imposing your ivory-tower solutions on a community ill-equipped to utilize them. From experience, DevOps done poorly is terrible and demoralizing. Agile done poorly (often) is terrible and slow. Take any best practice and try to apply it to any organization and you'll get a lot of flaming wreckage. Ideally those companies will have some pragmatic people that can nudge their people in the right direction gradually.
>"Should you move to serverless? Is GraphQL the answer to your API woes? Should you follow the latest DevOps playbook to increase your system reliability? In the world of tech tools, there’s a lot of buzz. But it doesn’t always reflect the daily reality of programmers.Should you move to serverless? Is GraphQL the answer to your API woes? Should you follow the latest DevOps playbook to increase your system reliability? In the world of tech tools, there’s a lot of buzz. But it doesn’t always reflect the daily reality of programmers."
My approach - I could not care less about what FAANG does. Due to their scale and org structure they are solving problems which 99% of mere mortal businesses will never face. I am not a luddite and am constantly looking for new things that can make my development easier. But I consider those from ROI point of view as I am vendor with clients and I want to make money, not waste it. Coolness factor, fashion, corporate propaganda / indoctrination mean zilch to me.
I am curious to know how you evaluate a new development product in respect to the ROI? What characteristics should a product have to satisfy your criteria for considering it?
Example - I have JSON-based RPC so that systems from other vendors can talk to mine (enterprise backends written in C++). It works like a charm and has been doing so for years. Here comes this architecture astronaut telling me that I should do GraphQL, and he proceeds to explain to me how powerful and cool it is and how everybody and his cat uses it. So on the downside I will waste gobs of time and money; on the upside - zilch, because nobody gives a shit. The problem was already solved for us years ago, so buzz off. And that guy could not give a single example of how it would help me. Just spreading FUD about existing things.
There are numerous opposite examples where I see that a new tech, lib, or tool actually saves me time and money. I pay quite a few dollars for software tooling. Though if the tool does not offer a perpetual license, it is a no-go for me.
This is all that matters to me. On desktop, for example, I skipped the .NET bandwagon and stayed with Delphi for my GUI desktop products. They worked 20 years ago and they work the same now. A single 10MB self-updating exe with zero deployment issues. And free from numerous limitations imposed by UWP. A competitor's is a 1GB package with a crapload of problems, and every update turns into a nightmare for customers. In my case all the time is spent on creative stuff that brings me new customers / money instead of feeding someone else. Sure it costs me a few hundred a year, but that is peanuts.
Thank you for taking the time to answer. You sound like a good rational engineer. Things that work don't need unnecessary splash of coolness.
At my previous job, I worked for 15 years on the development of a complex business system. It included desktop apps, mobile apps, web apps, on-premises and cloud. Throughout the years, we introduced many then-cutting-edge technologies for new products within the system. Some technologies before they were cool. But the products that were already finished and working fine, we kept supporting with the original technology for the lifetime of the product.
The point is that many new tools and technologies bring a very limited value to the finished working products.
Now, I am a maker of the new development tools. So, I am eager to push them to the world, but wouldn't like to be perceived as an "architecture astronaut". Your opinion helps in understanding how and why engineers choose new tools and technologies.
"Coolness" is not in what I use inside my product but how "cool" customers think it is because of features, robustness, price etc. It feels very "cool" to me when my products work and serve customers.
>"Now, I am a maker of the new development tools."
This is a part where I spend money. Good tools are very valuable as they directly save me time / money.
>"The point is that many new tools and technologies bring a very limited value to the finished working products."
Even for new ones. For example, my servers are modern C++. In theory I should be using Rust / Go for new ones if I listened to the chorus. Guess what: modern C++ works just fine for me and produces stellar results, hence no reason for me to switch. I do some toy projects with new languages / tech to get a grip and stay aware, just in case.
I never jumped on the React wagon, just as I never took the WordPress train. You do not hire an 18-wheeler to deliver a pizza. Engineers in any field have one job: design the optimal solution, be it a bridge or an SPA to take online orders.
React makes sense for FB's bloated, complex web app. Your CRUD app will do fine with far simpler dev tools. Even vanilla JS does the job in most cases.
The 99% developer feels compelled to learn and adopt what is trendy because that is what the client/market/boss demands; but it is you, the 'engineer', who has to tell the client/boss how that bridge is to be built. When was the last time you went to a doctor and said: hey doc, I have this, give me that medicine?
> React makes sense for FB's bloated, complex web app. Your CRUD app will do fine with far simpler dev tools.
And that would be what? Exactly?
I did a personal project last year that was vanilla HTML/CSS/JS, with some templating in Go (I know its templating isn't the best, comparatively). By the end, I wished I had done it in React. I ended up re-inventing the wheel for a lot of things, and ended up with something that was decidedly messier. All while attempting to be simpler.
I wish Web Components would have taken off. When Polymer first launched, I was totally on board. I loved the idea of writing vanilla frontends, and using Web Components where I needed extended functionality (vs a full framework). But, it didn't, and unless your site is super simple, and mostly static, React et al. are the best ways to create frontends.
Tools actually intended for building non bloated, non complex CRUD apps. Django, Rails, Laravel, .NET, Spring, I can go on.
Instead Go was selected, templating HTML by hand, and presumably doing nearly everything else by hand as well. That is not the optimal solution for a simple CRUD app’s implementation, and isn’t reflective of anything but poor engineering.
In a lot of ways, large tech companies are living in the future. They have to invent things that later open source tools will imitate and then others start to use. Examples: MapReduce, various RPC standards like GraphQL and gRPC, and a whole mess of non-relational data stores. I work with Kubernetes a lot right now and it's effectively a smaller, less featureful version of what Google runs on, but it's light years ahead of what we were using before.
A huge portion of the industry will be 10, 20, even 30 years behind and that's fine. I don't think there's any myths about this.
I believe Serverless technologies are a game changer for the 99%. I work at a firm that could be considered part of the 99%. We build mission critical apps, but don’t have entire teams dedicated to building platforms or managing Kubernetes. Most of our talent is average compared to FAANG. We don’t have SREs or developer advocates, but we do have customers who rely on our software for life and death situations.
Serverless technologies help us build scalable, highly-available, and performant cloud applications that run by default in multiple availability zones, and multi-region with minimal additional effort. We leverage GitHub Actions to drive our CI/CD process and automated testing. Instead of focusing on managing servers and cloud infrastructure, we simple 99% developers, those who get passed over by FAANG and LeetCode-grinder interviews, can build solid applications that provide value to our customers and generate nice profit margins. The 99% can deliver apps that perform similarly to the 1%'s, with far less complexity.
There are thousands, if not millions, of developers now using serverless to build real-life customer-facing applications. As with any new tech, it got adopted for non-mission-critical apps first, between 2016 and 2018. Starting in 2019, more and more companies adopted the technology, and the cloud providers, especially AWS, invested heavily to improve the integrations and remove the roadblocks. See how Lego moved their whole e-commerce app onto serverless. https://medium.com/lego-engineering/accelerating-with-server...
I also strongly believe that serverless is an overloaded term and everyone understands something different by it. Some people consider it only FaaS, but I think it's any managed cloud service that can auto-scale to infinity, scales to zero, and is pay-per-use. I frankly believe serverless should be regarded as a paradigm rather than a technology advancement. Here's a very nice blog about it: https://ben11kehoe.medium.com/serverless-is-a-state-of-mind-...
100% agree. Even though I work at an F500 company, our team acts like a startup. With serverless we don’t need to hire expensive DevOps and shell out millions for expensive k8s clusters. I am pretty sure it’s the future for most of us.
Serverless as a term gets mixed up with FaaS. In 2022, serverless is more general: it can be defined as anything that autoscales to infinity, scales to zero when not used, and is pay-per-use. Amazon DynamoDB is also serverless in that sense, for example.
I understand the point of the article but I take a different approach when I decide to start a company. I'm going to hire all the weirdos that want to build things in Rust and try EdgeDB or render in WebAssembly. I think most great software is built by those "top 1%" engineers. See Figma for instance; it is built using technologies far ahead of their time. And that's why it's such a delight to use.
My personal recommendation is to do some self reflection on what things are most important to you when it comes to spinning up a new project and build your own kit. This is your tool bag as a developer.
I have seen a lot of people try this and fail because they aren’t building a personalized kit, they are putting together popular dependencies. That is radically different.
When you build your own kit it is tiny and extremely portable. You dig into the nuts and bolts to see what works and what plays well together. Since it’s all your personal preferences you maximize on productivity every time you use it. The best part is that you only have to build it once. Then after it’s a few hours of maintenance every six months as your requirements and preferences evolve.
What happens when you work at a company that uses a different stack though? Can’t force a Java shop to start using Golang just because I like it and have developed my experience around it.
99% of all developers have enough to do with their work; they don't have time to write shiny articles about the technology they use.
This should make it clear that many great articles on many great technologies are written by a few, most of whom operate at an academic level and have little to do with the practice of the 99%. But everybody thinks that using these tools that way is normal; it isn't.
I don't think this is true. As a developer my job isn't just to crank out code, it often involves doing tasks that are important to the company like interviewing people, writing documentation, occasional team meetups. One of the things that's pretty apparent after spending time on HN and other forums is that these articles when well written are absolutely excellent marketing material. It makes sense for a company to allocate development resources to writing them.
"I’ve encountered so many teams who say that migration will happen “next quarter.” The reality is that, even when they manage to finally start, migrations have become continuous, rather than discrete, processes. A 99% Developer team with legacy code and a lean team is probably never going to convert their entire code base over to microservices or GraphQL. For most organizations, tech stacks and tool chains are heterogeneous, a combination of the layers of languages, frameworks, and tools that have been picked up over the years."
I've generally observed that the less you see engineering as an ideal state or set of standards, and more as a living and breathing organism in your company, the easier your life will be. If you're ever working on a 100% refactor of something to a new framework or system, chances are you're not focused enough on the top-line problems at your company. Which is related to https://rkg.blog/desperation-induced-focus.php
I use MongoDB, Cloudflare, Snowflake, and Datadog and I love them all because they are all great and easy to use products that get the job done and make it easy for other developers to collaborate as well. People will mock me mercilessly for not using free OSS, but guess what, I don't want to deal with the headache of setup and maintenance! These products all have their parallels at big companies (such as Borgmon and Bigtable), but they're much better in practical use cases in non gigantic companies.
Great to see that big VCs are having these deep insights.
As a developer, I've essentially had to leave the industry because all the small software companies are constantly coercing all their developers into using specific tools and processes from big tech companies and unicorn startups and I was constantly pushed towards solutions which are too bureaucratic to work efficiently in a small company setting.
I was too passionate about coding to butcher it in this way so I left the professional sector and focused on open source (thankfully I have enough side income to do this). It's mental torture to come to work every day, working in an industry which is supposedly all about logic and problem solving and be constrained by what is essentially religious dogma when applied in context.
10 years ago, it was such fun to be in the software industry as a developer. The company directors would trust you to choose any stack/framework you wanted or even build your own lightweight in-house framework (or no framework at all). It's no longer the case.
Weirdly one of the reasons I moved from development to a test engineering role is you have way more technical freedom. I can basically use whatever tool I want because there’s way less oversight as long as the job is done. Want to test a CLI? Use Go/Rust. Want to load test a website? Use Scala. Data parsing? Write a lexer in Haskell. I think I’ve used almost every language in the top 30 in production thanks to this. And it’s not even just language - need reporting? Try a new web framework. Need to store data between tests? Try a new database.
I’ve written far more fun code in a testing role than I ever could in a dev role :)
Sounds like you found a good way to keep learning on the job. This approach wouldn't work for me though because I like building products and features. The way I kept learning was by doing open source on the side.
> It’s well-known among developer tools creators, for example, that integrating with GitHub and GitLab will help make your tool much more useful and appealing.
Certainly not for me personally, nor for the 10 or so companies I've worked for as a freelancer over the last 10 years. Among them are some - real - market leaders as well as small but fine companies that are extremely demanding when it comes to the technology they use themselves (including development tools, of course).
Just look at who the owners of github (most people know it) and especially gitlab (most people don't) are.
More generally, just the single arbitrarily picked difference between one company producing something tangible and another producing a pure software product should be eye-opening to anyone who has had both types of experiences. As for my career path, I've worked in instrumentation, avionics, cars, and embedded, among others. Each with a very different culture, even among themselves, not to mention compared to software companies.
I've been working on an SDK for Flutter that is simpler, easier to code, and is closer to HTML/JS. You write your code on the back-end, although you could use the same SDK on the front-end. Nim is the initial back-end SDK: https://nexusdev.tools/
On the one hand, your MVP version 1.0 should probably be built in anything that is fast and works.
On the other hand, you're gonna want to rebuild that ASAP in something that has legs, for longevity's and paying-down-tech-debt's sake. You won't have an edge over any competitors by using the same tech stack they do.
I suspect that for most companies, the tech stack they run, the custom software on top of it, isn’t meant to be a competitive advantage. It’s just the cost of doing business.
Right, but I know that to be false. In the extreme case to prove my point, if you write your stack in Brainfuck on top of Oracle, you're going to have a much harder time than if you write it in JS on top of Postgres
Most people need to get some data from a database to a web page and back again, to paraphrase dhh (I think).
This does not require a huge amount of architecture or infrastructure in most cases, even at scale. The engineering challenges should be elsewhere, not in this relatively simple and solved problem.
I absolutely agree with your comment and I’m sad that it’s being downvoted.
Thing is, I see all the time web frameworks that would be used in those cases, but that improve in directions almost irrelevant to them. Performance? The ability to do the same webapps in yet another boring scripting language? Very specific and rigid abstractions and fancy tools that provide very little flexibility? Fancy reactive approaches that manage to turn the codebase inside out, create hard and undebuggable problems out of easy ones, and sometimes provide a random subset of guarantees of varying degrees of usefulness (I’m actually a fan of FRP, just not the misuse of it)?
Somehow Django, with its clear admin system and other out-of-the-box QoL modules remains as close to the most practical and useful approach as one can imagine. (Still even people who use it manage to reimplement its parts for no reason. Every sufficiently big webapp contains a bad, buggy, incoherent implementation of 90% of Django)
There are incredible amounts of money and effort and time to be saved by creating frameworks and abstractions that help with the glorious use case of “huge, annoying, evolving-in-all-the-wrong-directions, essentially-CRUD app”. Yet we’re not even stuck with Django - I see webdev regressing into horrifying piles of JS…
Because Rails doesn't encourage modularization. It's fine at small sized codebases but eventually you want to "package by feature" (to borrow a Java term) rather than by layer.
> For instance, companies with legacy systems that can’t afford to migrate to the newest architectures need to adopt new tools differently than newer companies, or companies that can dedicate a team to a large migration.
Not to mention, this sort of company needs to do a careful evaluation if the chosen product and especially support for it is likely to be around for a while. Many got bitten hard by Angular's demise - and now imagine this for something like a database that's supposed to hold your entire corporate data.
Nowadays, there is a lot of focus on process in software development, and this often causes us to miss the target. The tools used are important, yes, but how you use them is more important.
I think the article makes a very important point about under-resourced developer teams but misses an opportunity to point out that most other teams within organizations and businesses face similar constraints.
I think this favors stacks and frameworks where the business logic of a domain is already reflected, at least partially.
This might explain why some open source frameworks persist and thrive even while their "tech" is deemed obsolete / deficient...
Having a new joiner state on day two that the whole system has to be moved to microservices on GraphQL with blue/green continuous deployment, to solve all past and future problems, is a real problem; it does exist.
Some people seem to fall into the trap described in the article of reading some blog post on some approach and fixating on it with a religion-like mindset. It makes it hard to unwind the ideas they throw into business people's minds, because they all sound superficially great. You have to break it down to first principles and go through it, and explain why things make sense for 10k+ engineering teams but don't for our dozen-person team.
They will call the current architecture legacy from the start, and that sticks with business people. The reality is often the other way around - big players would love to be able to run systems using those simple, straightforward arrangements: a single version for all services, the ability to take a few minutes of downtime to upgrade the system offline, a single monorepo, a single database – but they simply can't, because they have thousands of people working on it, running at massive scales across the globe.
When you say those things people get very defensive and it's hard to keep the dialogue going. Because those new things could actually work very well, just not as a replacement for everything - e.g. to create some satellite services, maybe for things that crystallized over the years, are unlikely to change, and can be extracted as a dedicated service; maybe GraphQL makes sense for the admin section where the f/e team wants to experiment more, etc.
I'd say:
- be rational
- analyze from first principles
- use N-order thinking
- be open minded - for new tech and ~legacy~ current tech
- judge on simplicity as one of the main criteria
- stop calling tech you're currently running without already available alternative "legacy" - it's your "current" tech
- monoliths are not "legacy", they are desirable if possible; split them only if they simply wouldn't work, or you organically grew into a maturity level where it starts to make sense
- same with single database
- same with single versioning
- same with single monorepo
- use your own business reality as base for evaluation, not somebody else's reality
It means asking "then what?" questions to get insight into implications; you can search for "second-order thinking".
An example could be - you're arguing with somebody in another team at a similar level. If you convince them - you win, as a first-order consequence, and you may think: great. But if you dig deeper you may realize that doing things your way may not be so great for you in the long run; e.g. if you have to deal with a passive-aggressive attitude or don't have 100% support of that team. So selfishly, with 2nd-order thinking - maybe you should let it go and you'll win in the long run.
Yep, great write-up. I spent so many years in companies that were perpetually behind the times and rarely catching up. I now work for a large-scale SaaS offering trying to sell to these developers. It's consistently an uphill battle trying to sell my peers on the realities outside their bubble, and it often seems pretty fruitless.
Not sure I understand what the message of the article is. Sure, not everything that companies with big scalability requirements do makes sense for smaller projects. Nevertheless, some principles still make sense. E.g. a mono-repo can make sense (again, depends on what you want to build) and it is even easier to handle at a small scale.
> what coding, testing, and shipping looks like with short-staffed teams, teams without dedicated devops experts, and teams where everyone who originally built the system has left.
This one hits really hard and I'd love to see more of that content.
I’ve been shipping (as opposed to coding), for my entire adult life, and I’ve learned (the hard way, of course) that “ship” is always at least a couple of clicks back from “bleeding edge.”
I’m in the home stretch of an app that I’ve been developing for the last year and a half, or so.
The backend is written in PHP, and the frontend is a “classic” UIKit/Storyboard/MVC app, as opposed to a SwiftUI/Combine/MVVM project.
It’s fairly ambitious, and, when I started, I was not confident that “the bleeding edge” would work (it might have, but I didn’t know that it would). There was no question that the classic patterns would work, so I picked them.
It’s coming along great; far better than I had originally envisioned the project. It is ultra-high-quality, fully native, with only one small external (meaning that I didn't write it, myself) dependency (the backend has zero dependencies), has many capabilities that have only appeared in the last couple of OS releases, is easily localized, conforms to multiple device configurations, dark mode, accessibility features, etc.
I read (here), about a well-known app that had been a highly successful classic native app (probably written in ObjC) that was supposed to be rewritten in SwiftUI, but the project failed, and was eventually shipped in Electron. That’s kind of my worst nightmare. I suspect the developers had to put clothespins on their noses, for much of the project.
I will be releasing software, using more cutting edge tech, but all in good time. It’s unlikely to be “cutting edge,” by the time I use it. I like to give the stack enough time to smooth off the rough edges, before I rely on it for ship.
I know that many fairly well-known applications for Apple platforms, are still written in Objective-C. Even Apple still uses it, for some of its internal tooling.
I will admit that one gamble I took, was jumping on the Swift bandwagon, almost immediately, but there were multiple signals that it was not another OpenDoc[1], and that I could trust it. I’m fairly conservative about bandwagons. It’s earned me more than a few sneers (especially since I’m an older chap), but -and this bears repeating-, I’ve spent my entire adult life, delivering finished software. It’s not always been great software, and it has not always been commercially successful, but it has all been “finished.”
There was a post here, some time back, where the author challenged the reader to mention three projects in their career that they had finished. My comment was that I could mention thirty, and point to the repos. This was met with incredulity, which shocked me, as I have known many developers, far more productive than I. I guess times have changed.
[1] https://en.m.wikipedia.org/wiki/OpenDoc (Full disclosure: I was very much a “bandwagon” guy back then, and even took an OpenDoc course from Apple’s DU.)
let me calm down and breathe and I'll take the bait ...
look just because you're not Lebron James or Michael Jordan doesn't mean you can't go to your local basketball court and enjoy a game of pick up.
But the same way you don't conflate what you need to do to be one of the best people to ever play the game with what you need to do to play at all...
your software engineering team doesn't need to feel completely useless because they aren't immediately using <insert new hot tool> .
Everything in software engineering from side project to unicorn startup service is about tradeoffs.
Tradeoffs happen because of constraints.
I don't expect a consulting shop to build code the same way an open source project does or a publicly traded company that has been profitable for 10 or 20 years.
I'm hoping you can read in this the need to be cognizant about manpower tradeoffs.
So yes I fully appreciate this as someone who has consulted, worked a public companies, and is now at a rapidly growing startup...
Just because your team isn't meeting some imaginary ideal for some other form of team with different constraints doesn't mean you can't ever strive for any ideal or give up...
you just have to be realistic about trade offs...
my favorite example of this is thinking about a system like Rails vs early React (loooong before Next.js, or even before the context system) or a lot of things built in the Python web ecosystem.
With rails you have scaffolds and endless convenience methods that let you be productive quickly even as a small team or shop...
with early React or Python web you were largely left to your own devices outside of the core flows they had solved (painting updates to the DOM with React, or some rudimentary CRUD stuff with Django or Flask)...
Because they were built by and more importantly FOR different types of teams.
Rails by the relatively slender (and famously against VC-type scaling) 37signals consulting crew, and React by Facebook's behemoth engineering team.
Of course they are going to build for their own trade offs...
why would you not?
(I initially opened this rant with this final piece that I thought better of starting with, but I still stand by the feeling...
This is the biggest piece of curmudgeonly nonsense I've ever read... it's like someone trained an ML model on the last 5 years of HN comments and churned out a click-bait blog post)
IME most small software companies could be run off SQLite, a $5 linux VPS, vanilla JS, and a single line deploy script. (Pieter Levels seems like the master at this)
I'd never bother suggesting it though:
- no one likes being told they're small when they think they're big
- it's bad for devs careers (Resume-Driven Development)
- as TFA points out - it's not what FAANG are doing so I'd lack legitimacy
In a past life, we were reading from the Twitter firehose, and the offshore team implemented a solution that spawned a large Spark cluster to read from Twitter, dumped it to Kinesis, then had a cloud function that wrote it to Dynamo, which was then merged into Redshift.
It worked poorly, took months to build, and lost data. I implemented a python script that read from twitter firehose and wrote directly to redshift. It was about 50 lines long and did the same thing, and I deployed it to two existing prod servers that were not network bound.
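For flavour, a rough sketch of the shape such a script takes (not the original code; iter_tweets() is a hypothetical placeholder for whatever streaming client is used, and the host/table names are made up - Redshift speaks the Postgres wire protocol, so psycopg2 works):

    import json
    import psycopg2

    def iter_tweets():
        """Hypothetical placeholder: yield one tweet (a dict) at a time from the stream."""
        raise NotImplementedError

    def main(batch_size=500):
        conn = psycopg2.connect(
            host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
            port=5439, dbname="analytics", user="loader", password="...")
        cur = conn.cursor()
        batch = []
        for tweet in iter_tweets():
            batch.append((tweet["id"], json.dumps(tweet)))
            if len(batch) >= batch_size:
                # Insert in batches; Redshift punishes per-row commits.
                cur.executemany(
                    "INSERT INTO raw_tweets (tweet_id, payload) VALUES (%s, %s)",
                    batch)
                conn.commit()
                batch.clear()

    if __name__ == "__main__":
        main()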
There were a few more cases, like the on-shore database identity reconciliation being inefficient, so the company wanted to leverage cloud auto-scaling. Well, if you have an exponential-time algorithm and you put it into the cloud, it will still be exponential time; cloud auto-scaling will be extremely expensive and will also introduce latency between distributed system components, and indeed it did. The offshore team's initial logic was only that the cloud auto-scales, so it can accommodate whatever performance the business logic needs.
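To make the auto-scaling point concrete, a toy back-of-the-envelope (all numbers made up): if the reconciliation step is O(2^n), horizontal scaling loses almost immediately, because each extra unit of input size doubles the fleet you need.

    # Toy numbers: one machine does 1e9 "steps" per second and the batch
    # must finish within an hour. An O(2^n) algorithm outruns any fleet fast.
    def machines_needed(n, steps_per_sec=1e9, deadline_sec=3600):
        return 2 ** n / (steps_per_sec * deadline_sec)

    for n in (40, 50, 60):
        print(f"n={n}: ~{machines_needed(n):,.0f} machines")
    # n=40: ~0 machines (fits on one box)
    # n=50: ~313 machines
    # n=60: ~320,256 machines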
Sometimes people just view tech/cloud as Lego blocks to glue together, based on the product-sheet marketing, and do not have enough basic understanding of computer science to understand performance, or to understand what a simple solution would look like.
50/50. It's what employers look for in Senior Engineers, but junior engineers are often hired on pure tech. Historically there was a high amount of BS where people would take credit for deliveries that they were only tangentially a part of, or get ridiculously lucky on their first big project.
It's next to impossible to hold an engineering team together to even KTLO if you don't have some project which grows their careers.
The number one requirement for Senior Engineers is their extensive experience with the company's tech stack. At least that is what is written in job ads outside of FAANG. You're probably not passing the HR stage when you have experience in Java but the company is looking for a senior .NET developer.
You seem to be misunderstanding people’s motivations. Most developers aren’t trying to be “better” we are trying to exchange labor for money like everyone else.
“Grinding LeetCode” (tm r/cscareerquestions) doesn’t make you a better developer. But it does get you into the top-paying companies. Well, it didn't for me. But that’s a different story.
On the other side of the fence in corp dev, where pattern matching is the norm, employers aren’t going to want to hear that you used SQLite.
Try getting a job as a dev with vanilla JS and SQLite skills, as opposed to React and Microservice skills. What a business needs on a technical level doesn't matter when it comes to finding jobs.
I disagree; you add complexity to a stack for a reason, usually to provide functionality that would be difficult to reproduce otherwise.
Hand-building a reactive (lowercase r) table with sorting, filtering, etc. that's performant at large data volumes to keep your stack "simple", would be a nightmare to maintain and wouldn't allow other developers to bring prior experience with them when they later have to maintain that code.
I agree, but what you are describing is a different thing: it's reinventing the wheel, not avoiding additional complexity. Avoiding additional complexity would be not having that reactive table. (And that of course would probably result in worse user experience, so it's really not a good example of what I've meant.)
Not at all - it’s way simpler to set up than anything involving kubernetes, virtualization, or containers, and incomparably easier to fix when it breaks.
Here are my rules for picking framework/languages:
- Avoid if it specifically mentions big companies as users: Facebook, Twitter, Google - these have as many devs as needed to throw at the most trivial tasks, and then some. I am in a team of 3 that needs to get shit done, not chase package-manager and compiler throw-ups. Tell me your language is used by a one-person show to serve millions of people and then I'll listen.
- Avoid if the developer base is mostly young people: they are energetic and they don't mind complicating the shit out of something, because they can figure it out now and probably down the road. I'm old, and I just want things to work, and I want to understand how it all works without spending a week wrestling with tooling. Not all out-of-college folks are the same, but most are (and I used to be one, so there).
- Avoid kitchen sinks: borrowing ideas sparingly is fine - throwing in every new feature from another language you come across makes for a soup, not a tool. A good heuristic is the rate at which features are added. Look for a logarithmic trend.
I've come to appreciate that "bleeding edge" really means you'll bleed on it. In the search for "magic" to make my job easier, it often falls apart at the edge cases. So to just get stuff done, I'll trend towards lower-level software even if it means more boilerplate.
Also, there's a great idea that seems very worthwhile. I don't recall the source or exact phrasing: you only have 1 innovation point for a project. Everything in your stack should be familiar to you (and you know the pros, cons, and issues) but you're allowed ONE new magic/helper/tool. That limits the blast radius of all this new-fangled stuff and gives you room to try out new tools.
> I'll trend towards lower-level software even if it means more boilerplate
This is exactly what 90%+ of people working on frameworks/languages don't get. Somewhere in their education/career they learnt that repetition is bad. DRY everything. And then they follow that religiously. Pragmalism (minimal pragmatism) is sorely missing.
Would you be willing to share some of the languages/frameworks that have made the cut or washed out for you based on the rubric you’ve given. I’m honestly curious.
- Does it seem like it's going to be around in the next 5 years (weaker version: do I need to worry about it being around in the next 5 years?)
- Can I debug issues with it relatively easily?
Turns out that good rules of thumb are ones that look at stuff on the merits, instead of trying to find catchy third-order effects that you see in projects that rub you the wrong way.
Google uses a ton of Java internally. Microsoft uses C# and .NET (obviously). Facebook uses PHP and C++. Are you really avoiding all those tech stacks? They're old, bulletproof, and nobody ever got fired for picking them. Just because big companies use them doesn't mean they have to be complicated or trendy.
Out of sincere curiosity: Why do you think GraphQL is so bad?
I've been using GraphQL for a little while now and I've had nothing but good experiences (although my use case might not be the most common) so I'm interested in knowing what makes you think so poorly of it.
Not the one you asked, but I'm also stuck supporting GraphQL in a relatively small project. Zoomers reinvented SOAP. It's a perfect example of adopting what FAANGs do just for the sake of it. It's probably great when you have dozens of consumers with very different needs and usage patterns. But for a single frontend it adds way too much complexity on the server side. Debugging, testing, writing tons of boilerplate code, not to mention POST requests for getting data - everything becomes more complicated with zero benefit.
In my case, it feels like GraphQL has allowed me to handle a product with very rapidly changing needs without turning our network communication into a gigantic mess with dozens of requests happening for every little thing - which is my previous experience with REST APIs.
Guess it depends on use case. When it's not very clear what your client wants to consume and your product is evolving rapidly, HTTP APIs tend to turn into a mess and lead to a gigantic amount of requests just to start the app (which, to be fair, is less of a problem now with HTTP/3).
GraphQL has allowed my company to keep our network traffic very lean while evolving a product very quickly and to have very few issues in communicating between our mobile application and backend teams. But once again, I can see how our requirements are not everyone else's.
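For anyone who hasn't seen it in practice, here's a minimal sketch of that idea using the graphene library for Python (the commenters' actual stacks aren't stated, so treat this as illustrative): the client asks for exactly the nested shape a view needs in one round trip, instead of one REST call per resource.

    import graphene

    class Author(graphene.ObjectType):
        name = graphene.String()

    class Post(graphene.ObjectType):
        title = graphene.String()
        author = graphene.Field(Author)

    class Query(graphene.ObjectType):
        posts = graphene.List(Post)

        def resolve_posts(root, info):
            # A real resolver would hit the database; hard-coded for the sketch.
            return [Post(title="Hello", author=Author(name="Ada"))]

    schema = graphene.Schema(query=Query)

    # One request fetches posts *and* their authors; the REST equivalent is
    # typically GET /posts followed by GET /authors/<id> per post.
    result = schema.execute("{ posts { title author { name } } }")
    print(result.data)  # {'posts': [{'title': 'Hello', 'author': {'name': 'Ada'}}]}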