Imaginary problems are the root of bad software (cerebralab.com)
946 points by deofoo on June 18, 2023 | 393 comments

If anything, it's the incentive system in the software industry that's at fault.

1. No designer is given a promotion for sticking to conventional designs. It's their creative and clever designs that get them attention and career incentives.

2. No engineer is paid extra for keeping the codebase from growing too much. It's rewrites and the effort they put into churning out more solutions (than there are problems) that offer them a chance to climb the ladder.

3. No product manager can put "Made the product more stable and usable" on their resume. It's all the new extra features they thought up that will earn them a reputation.

4. No manager is rewarded for how lean a team they manage or for getting things done with a tiny, flat team. Managers pride themselves on how many people work under them and how tall their hierarchy is.

Our industry thrives on producing more solutions than needed. Effort is rewarded by conventional measurements, without thinking through which direction that effort was pointed in.

Unless the incentives of everyone involved are aligned with what's actually needed, we'll continue solving imaginary problems, I guess.

> “No designer is given promotion for sticking to conventional designs. It's their creative & clever designs that get them attention and career incentives.”

This is a massive change from my first software industry job in 1997.

I was essentially a “design intern who knows HTML” on a team that built a shrinkwrap Windows application for enterprises. The core of the design team was a graphic designer, a cognitive scientist, an industrial designer, and a product manager with industry experience in the customer domain.

The application was Windows native. User research was conducted on site with scientific rigor. Adhering to platform guidelines and conventions was a priority. I think I spent a few weeks redrawing hundreds of toolbar icons so they’d be in line with the Office 97 look (the kind of boring job you give to the junior). If the app stood out on the Windows desktop, that would have been considered problematic.

Today a similar design team would have only a graphic designer and a PM, and neither would care in the slightest about platform guidelines or the customer domain. The UI is primarily an extension of the corporate brand. Hiring a cognitive scientist? Forget about it…

Everything certainly wasn’t perfect in the Windows/Mac desktop golden era. But the rise of the web wiped out a lot of good industry practices too.

Remember when you could change the theme and all of the apps followed suit?

Even on Linux there were tools to sync the GNOME theme with the Qt look, so you could have one theme applied to every app for a nice, consistent look, right down to how the common icons looked.

Nowadays? Every fucking app gotta have their own different styling. Will the settings icon be three dots, a gear, or a honey badger? WHO FUCKING KNOWS. You'd be lucky if you even get a choice between light and dark theme.

But hey, we can write the same code to work on Windows, Mac and mobile! It will work like shit in all of them and be slow but we don't care!

> Will the setting icon be three dots?

Multiple hamburger menus with a scattering of cryptic icons stuck at arbitrary places on the screen. What does the swirly icon with up arrow do? No text label for you!

Oh and let's move the next button to the top left of the screen and not highlight it. Mmmm that's some good UI design.

Might be a coincidence, but that's a weirdly accurate description of MS Teams.

Not to mention that the wording (at least in Spanish) is awful too.

I remember that theory. I also remember the reality that if you changed your background colour to anything but white, some app, somewhere, was going to become an unreadable black text on black background mess.

> Nowadays? Every fucking app gotta have their own different styling.

This has more to do with the current state of GUI frameworks than with developer mindset. Microsoft is between GUI frameworks, rumoured to have deprecated everything between Win32 and WPF, and in the meantime is pushing React Native. Apple doesn't seem to know what to do with desktop environments, stuck between its legacy Objective-C frameworks, for which it seems to purposely hide any form of documentation, and Swift-based frameworks that are broken out of the box. Linux has a couple of options that have been ugly as sin forever. There's Qt, but its licensing scares away anyone with two brain cells to rub together.

So where are we left?

Well, with webview-based frameworks, which are the worst of both worlds, but at least they don't look half bad.

Except that webview-based frameworks are a far lower-level abstraction than any native widget-based framework. Developers are forced to reinvent the wheel, and that means dropping any standard look-and-feel, because the standard look is already ugly to start with and takes even more work to get into a working state.

And all you want to do is to provide a GUI for users to click around.

And it got worse with client side decorations. Now even window interactions are app-specific and don't adhere to global settings.

> Even in Linux there were tools to sync gnome with QT look

There are still such tools, you don't have to use the past tense here.

Even better, every app is doing their own styling so they can all look like Discord

Is it just me, or do others find Discord's handling of threads terrible?! I mean, Slack is far from my favourite, but at least they treat a thread like a thread, where I can see the entire conversation in one place instead of visually parsing questions and replies while frantically scrolling up and down.

Discord and Teams both have terrible UX. Discord has the look and feel of a poorly designed game and Teams can't even get highlight and focus right.

Threads in Discord appear in the sidebar usually. Are you thinking of replies?

I was thinking more of seeing all the replies to a comment as a thread, without the explicit "create thread" step that most people skip. E.g. I right-click a comment that has replies and see the root comment along with the replies in the sidebar.

Oh man I totally forgot about that. Thank you for the reminder.

Total flashbacks to Windows 95, and every so often changing the window colors, text fonts, etc. for the entire system.

Good times

A lot of native apps on iOS still at least follow the light/dark theme and global font sizes. But I’m not sure if that works by default with Flutter or React Native etc. or if they’d have to implement it explicitly.

> Every fucking app gotta have their own different styling

Have you considered that the average user never cared and never will?

Having ultra consistent styling across all apps ruins the ability to... you know... sell the software. It gives far too much power to some group of annoying elitist nerds in denial of their opinions preaching UI/UX pseudoscience.

> But hey, we can write the same code to work on Windows, Mac and mobile! It will work like shit in all of them and be slow but we don't care!

Ain't nobody got time for reading even shittier documentation for badly written OS APIs.

From a person who started using computers in the early 2000s:


None of the current SaaS apps I use can come close to the experience of using software from that era.

Take the simple list view in a typical Windows/Mac application:

1. Command-clicking selected multiple objects.

2. Shift-clicking selected a range.

3. Right-clicking brought up selection actions.

4. Double-clicking opened an object.

This pattern was followed in almost all list views, and there was no re-learning and no surprises.
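Those four conventions are simple enough to fit in a tiny dispatcher; a sketch in TypeScript (the names are illustrative, not any real toolkit's API):

```typescript
// Pure classifier mapping the classic modifier conventions to list actions.
type SelectAction = "toggle" | "extendRange" | "contextMenu" | "open" | "select";

function classifyClick(opts: {
  ctrlOrCmd?: boolean;   // Ctrl on Windows/Linux, Cmd on Mac
  shift?: boolean;
  rightButton?: boolean;
  doubleClick?: boolean;
}): SelectAction {
  if (opts.doubleClick) return "open";        // 4. double-click opens the object
  if (opts.rightButton) return "contextMenu"; // 3. right-click shows selection actions
  if (opts.shift) return "extendRange";       // 2. shift-click selects a range
  if (opts.ctrlOrCmd) return "toggle";        // 1. ctrl/cmd-click toggles one item
  return "select";                            // plain click selects a single item
}
```

The point is how little there is to it: the entire shared vocabulary of a platform's list views is a handful of branches, which is exactly why every app used to behave the same way.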

Now can you say the same about the list views of modern web apps?

Can you apply the same list selection experience across Google Drive and Microsoft OneDrive and Apple iCloud? Nope.

That's where we failed as an industry. We let a lot of designers run too wild with their ideas, to put it bluntly.

The problem isn't really with the designers per se (though they played a role). The problem is with the web as an application delivery platform - it was never designed for this.

Windows and macOS both have published design guidelines that native apps were expected to follow. Most native apps do (the few that don't either have a good reason or were unique enough that their customers didn't care - think Photoshop).

With the advent of the web, such guidelines no longer mattered, because the controls and UI elements are all custom - HTML is not an application GUI library!

So every man and their dog builds a different UX and UI interaction for their own app, because the web encourages it. Designers are also at fault for not standardizing on a set of common UI widgets, but I cannot blame them, as that isn't the easiest path.

> design guidelines that was produced, and they expected native apps to follow.

It almost sounds old-fashioned in 2023 to talk about usability, affordance, and user-experience.

Part of being a native application was/is for the application to look and behave like the rest of the user interface. Standards are important because learning how to use a tool is important. Users are important.

Software has become a way to make users miserable. Oh, and while confusing them, throw some advertising at them too. ^_^

And everything had a keybind so if you worked in software every day you could be as fast as cli nerds.

Absolutely... closing a popup with 'Esc'? Naah, that wasn't requested, so the message stays up.

And not even be slowed down by animations.

I read "even" as "ever" - still made complete sense.

Animations may seem fine and fun the first time you encounter one, but when you realize you'll have to suffer through it every time an action is taken, it becomes a whole different story.


CTRL+A selects all

CTRL+Shift+End selects all from where you are to the end

CTRL+Shift+Home selects all from where you are to the top

One of the (many) problems of web UIs is they often ignore the keyboard completely.

Or as is increasingly the case: they actively hijack what should be system-wide keybindings - making it even worse.

>CTRL+Shift+End selects all from where you are to the end

>CTRL+Shift+Home selects all from where you are to the top

Those two don't need CTRL, just so you know.

>One of the (many) problems of web UIs is they often ignore the keyboard completely.

They are also starting to ignore the mouse.

> Those two don't need CTRL, just so you know

Well, yes they do, only it's different with and without CTRL. For example in a text editor (or a normal webpage without any fancy JS):

- Shift+End goes to the end of the line

- CTRL+Shift+End goes to the end of the document

and the same is true for Home (substitute "end" with "beginning").
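A toy model of the difference (hypothetical names, not any real editor's API): both shortcuts extend the selection from the caret, but End stops at the end of the current line while Ctrl+End jumps to the end of the document.

```typescript
// Caret position and a selection span in a line-based text buffer.
interface Caret { line: number; col: number; }

// Shift+End: extend the selection to the end of the current line.
function shiftEnd(caret: Caret, lines: string[]): { from: Caret; to: Caret } {
  return { from: caret, to: { line: caret.line, col: lines[caret.line].length } };
}

// Ctrl+Shift+End: extend the selection to the end of the last line of the document.
function ctrlShiftEnd(caret: Caret, lines: string[]): { from: Caret; to: Caret } {
  const last = lines.length - 1;
  return { from: caret, to: { line: last, col: lines[last].length } };
}
```

Substituting Home for End works the same way with the targets flipped to the start of the line and the start of the document.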

I have seen people, even "technical" people, select text with the mouse from the middle of a large Word document to the end, because they didn't know this.

They also had to do it often (several times a day) and it had become a significant part of their workload...

Emacs would like to have a word.

Emacs is a text ecosystem. And it's trivial to add these shortcuts. Evil[0] basically rewires everything to be Vim.

[0]: https://github.com/emacs-evil/evil

It is a desktop application that doesn't adhere to the user experience guidelines of the OS it runs on. I'm an emacs user and fan.

I can’t help but feel that Agile is at least partly to blame for this. Things like you are describing usually don’t come under the “minimum viable product” purview and thus get pushed out indefinitely until the product is at the “very mature” stage. At that point there’s the risk that the product will be re-written and the cycle reset again.

Nah. If anything a lot of these trends are directly anti-agile, e.g. avoiding labels and using icons so that it's easier to translate your app, even though the icons take longer initially and you're never actually going to translate your app.

I’m not really arguing that. I’m saying there is a trade-off between having the common OS feature set described above (Ctrl/Cmd-select, etc) and implementing/iterating a web/mobile product quickly. Often standard UI paradigms are never implemented or implemented inconsistently. AWS, widely known for their Agile practices, might be the epitome of this with their web console. Sometimes I can sort columns by clicking on them, sometimes I can’t. Some products allow shift or control click and some do not. Etc etc. I can only assume the product teams are doing their best within their constraints but as an end user it’s a piece of suck.

The problem with the controls is mostly because they don't want to pay for Qt (multiplatform toolkit), so instead every company (badly) implements their own controls in HTML to save money. I suspect ultimately they waste much more money than they save.

Qt isn't even on the radar of most companies currently building multi-platform applications in HTML. And it won't be soon, for two main reasons: developers able to use it are expensive, and most Qt applications in the wild still have an "uncanny valley" look and feel on every OS but Linux.

Not to mention that with SaaS being more profitable than selling unlimited-use licenses, a lot of apps also have HTTP backends and webapp versions, which can share a lot of HTML/CSS/JS code with a browser-based desktop version. Think of Slack/Discord/VSCode/etc. Sure: Qt, Flutter, etc also have web versions, but they just don't look/feel as good in the browser as an HTML app normally can.

If you want a "Premium" native look and feel, people gotta go directly to the source: native APIs. Qt won't do it without a lot of work. Lots of companies have separate Android and iOS teams. Or they go directly to HTML when there's not enough cash (or even things like Flutter, which look ok in mobile). High-quality macOS apps, like those made by Panic, Rogue Amoeba, etc, use Cocoa directly.

Standardized controls and shortcuts unfortunately end up being collateral damage in all of this.

All this sounds great from the perspective of retail shrink wrap software. Today, the major use cases for Qt on the desktop are in-house corporate software. And, no one cares about "uncanny valley". Only HN and online nerds care about that stuff. Average, non-technical, corporate users care about functions, not form. Hell, they are happy with a VBA app! Also, lots of in-car and in-seat (aeroplane) entertainment systems are built in Qt.

    If you want a "Premium" native look and feel
Who is doing this in 2023? No one outside my examples above.

There's a reason in-house corporate software started becoming web based 15-or-so years ago: web is cheaper, easier to hire for, faster to develop and deploy, doesn't require installation or updates, easier to troubleshoot, doesn't need special pipes for communicating with the server (yeah I remember the days of DCOM, CORBA, WCF and other weird protocols instead of HTTP).

It is things like Qt that only HN and online nerds care about.

Don't get me wrong: I really like well made native apps and I think they're great. But Qt apps are almost never as great as software made with native toolkits directly.

No; long term, web apps are more expensive, because the front end needs to be completely re-written every few years. A desktop app written in Qt can be kept running for a decade with very little work. (This assumes no Qt library upgrades, which for in-house apps are mostly unnecessary.)

The primary advantage of web apps is "zero install", which is a major point of friction at large corps.

> ...needs to be completely re-written every few years.

No it doesn't; the HTML spec hasn't changed, whereas Qt has. The issue is how the webapps were built in the first place. You don't need Next.js and all its friends to post a form to a backend once an hour, but somehow that is becoming the standard approach...

“because the front end needs to be completely re-written every few years”

Citation needed. I’m currently working on a jQuery+CoffeeScript app that still works like a charm.

Like the sibling post says, Qt itself had breaking changes that HTML/JS/CSS didn’t.

> Who is doing this in 2023? No one outside my examples above.

For macOS, check the apps made by Apple (Logic, Final Cut), Panic, Rogue Amoeba, or apps like Pixelmator, Affinity, TablePlus, Tower, Dash... there are more.

There's even open source ones like IINA providing a premium look+feel.

> If you want a "Premium" native look and feel, people gotta go directly to the source: native APIs.

I've seen this regurgitated a hundred times, but I don't really know any more what that "native look and feel" is. On Windows, it might be the old Windows 95 controls, but those are quite limited. What system today is made of dropdowns, checkboxes and OK buttons?

Well, yeah. Windows is... special. Even Microsoft stopped using the normal controls. But the problem is that Qt looks like an uncanny valley version of those Windows 95-style apps.

But by Premium I'm talking more about macOS, iOS and Android.

For macOS, check the apps made by Apple, Panic, Rogue Amoeba, or apps like Pixelmator, Affinity, TablePlus, Fantastical, Tower, Dash. Or even "non-flashy" apps like Pacifist.

Those all use Cocoa and have a bit more of a slick appearance, even though they're mostly using native controls, with very few special parts here and there.

According to Microsoft, native look and feel includes tiny tiny dialog boxes with even smaller panes inside that can't be resized, so yeah, not going to bother with that

Qt has the uncanny valley... do you think people using HTML care to make things look native, then?

No. Why would they? The reason HTML apps don’t suffer from the uncanny valley problem is because they don’t try to look native at all, it's just something totally different.

When they suck it's not because it looks "almost but not quite there".

But there are exceptions: when Cordova/Ionic tries to imitate the look of iOS/Android's native controls. Then we have an uncanny valley problem.

> None of the current SaaS apps I use can come close to the experience of using softwares from that era.

I'm afraid you're looking at the past with rose-tinted glasses. In general, software back then sucked hard. User experience did not exist at all. Developers threw stuff together, and users were expected to learn how to use the software. Books were sold to guide desperate users through simple user flows. Forms, forms, forms, forms everywhere, and they all sucked, no exception. Forget about localization or internationalization. Forget about accessibility. Developers tweaked fonts to be smaller to shove more controls into the same screen real estate, and you either picked up a magnifying glass to read what they said or you just risked it and clicked anyway.

Software back then sucked, and sucked hard. Atrocities like GIMP were sold as usability champions. That's how bad things were.

Those four actions worked for me on the Google Drive web interface

Great! Now try it in any other list view in any other app.

Maybe the list of docs in https://docs.google.com?

On my phone, so it's hard to check, but Gmail's ctrl- and shift-clicks work really well and are intuitive to me. I'm shocked they wouldn't use the same mechanics everywhere.

Best example: Gmail is one of the only webapps where I'll ctrl-click a few items at the top of the list, ctrl-click a few in the middle, and then shift-click to the bottom, and it works exactly how I'd expect - everything stays selected, and the shift-click selects from your previous click to the next item. I think it gets wonky if you change directions, but I can't imagine how I'd expect ctrl-clicking items 1, 2, 9, 10, then shift-clicking item 4 to work.

For myself, I'd expect the items between 4 and 10 to be added to the selection: everything but 3 would be selected.

Having just checked, this is indeed what the file explorer on my desktop does, and I'm pretty sure Windows File Explorer does the same.
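That behaviour falls out of the classic anchor-based selection model: ctrl-click toggles one item and moves the anchor, shift-click adds the whole anchor-to-target range without clearing anything. A minimal sketch (hypothetical class, not any real file manager's code):

```typescript
// Anchor-based multi-select over items identified by index.
class ListSelection {
  private selected = new Set<number>();
  private anchor = 0; // last explicitly clicked item

  plainClick(i: number): void {
    this.selected = new Set([i]); // plain click replaces the selection
    this.anchor = i;
  }

  ctrlClick(i: number): void {
    // Ctrl/Cmd-click toggles membership and moves the anchor.
    if (this.selected.has(i)) this.selected.delete(i);
    else this.selected.add(i);
    this.anchor = i;
  }

  shiftClick(i: number): void {
    // Shift-click adds the whole anchor..target range, keeping prior picks.
    const lo = Math.min(this.anchor, i);
    const hi = Math.max(this.anchor, i);
    for (let k = lo; k <= hi; k++) this.selected.add(k);
  }

  items(): number[] {
    return [...this.selected].sort((a, b) => a - b);
  }
}
```

With this model, ctrl-clicking 1, 2, 9, 10 and then shift-clicking 4 selects everything from 4 through 10 plus 1 and 2 - i.e. everything but 3, matching the file-explorer behaviour described above. (Real implementations differ on whether shift-click replaces or adds, which is exactly the inconsistency being complained about.)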

Today they might have a psychologist on the team to research which buttons may serve as dopamine triggers so you're a lot more likely to upgrade to Premium before thinking it over

Right. Making people click on things they didn’t mean to and buy things they didn’t want — those goals were not even a part of the 1990s UI paradigm.

1990s design was all about preventing people from accidentally doing actions they might not have wanted to. Even down to the “Are you sure you want to quit” dialogues that were all the rage.

It’s sad just where we are now compared to the design goals of old.

There's been a large influx of people chasing dollars and status, they could not care less about the product or who uses it. You can still find orgs that do care, but they're swiftly pushed out by the VC-funded "growth at any cost" model or are acquired into oblivion.

The indie scene is looking excellent however, a lot of programmers who have already made their money seem to be pushing excellent hobby projects out.

There is a fundamental difference between working on software products vs bespoke software development:

In the former you make money from selling the result, in the latter you make money from selling the hours spent creating the thing.

If the former is unusable it will lead to bad sales. In the latter it might even lead to additional hours sold in change requests.

The former is often bought after evaluation and comparison by the user of the software. The latter is sold as a project to an executive who will never have to use the software.

Microsoft basically used to do this too. We can see that they clearly don't anymore.

I have another theory: it's all about screen size. When you had only 320x240 to 1024x768, you simply MUST have thought about UX (in the sense of "How can I fit all this info in this little space?").

Now you don't have to. So no one does.

This is true in a sense, but we now have a different restriction, called a mobile view.

That sounds expensive AF though. Was that really necessary?

I’m all for craftsmanship, but having four top guys fiddling with design... it all depends on the domain, I guess.

Considering most of us here are ready to murder a few designers for wasting so much of our time now?

Yes, an absolutely necessary cost.

Uh ... plenty of companies have a UX team. They're mostly all graphic designers. "four guys fiddling with design" is nothing. Either you care about having a well-designed UI or you just want a pretty one with a high conversion rate, and who you hire reflects what you really value.

Uber was a massive example of this. The best engineers kept their head down and just tried to keep things afloat, fixing bugs, etc.

However, a large and insidious cadre of B-tier engineers was constantly writing docs proposing meaningless, arbitrary system changes and new designs for the sake of the changes themselves. These new projects were the only way to get promoted. The entire e4->5->6->7 track was written in such a way that it only ever encouraged “TL”/“architect” types to grow.

This led to constant churn, horrible codebases, and utter bullshit self-made problems which could only be solved by digging a deeper hole.

There are companies who handle this well. Ultimately it comes down to engineering culture.

The career ladder is among the biggest fuck-ups of the tech industry. It incentivizes bullshittery more than actual innovation. There are more rewards for BS RFCs than for keeping the ship running.

I had a completely different view of RFCs before coming into contact with some peers who followed this approach to the letter. RFCs for small issues would run to 10 pages and barely mean anything. Of course they would be praised by upper management (it didn't matter that the RFCs would be ignored most of the time).

> No engineer is paid extra for keeping the codebase from growing too much.

I am. I'm paid more than most developers to run a team doing just this. We make minimal change, have an absolutely non-negotiable focus on stability and minimalism and reject any change that isn't absolutely driven by validated majority user need. Even then, the bar is high.

I'm not saying this is a common situation, but it certainly isn't rare in my experience. Software is a vastly wide scope of software types and requirements. I'm paid to be ruthless, and to know what ruthless looks like in terms of delivering consistently without downtime/data loss/critical issues.

Confirming a hypothesis I put forth in a different comment: Would you say you and your team have ownership of the product?

That is to say, there isn't one team doing what you're doing and then another separate team trying to graft new features on all the time, is there? Maybe there is, and maybe that causes issues down the line.

We do have ownership, and I try and structure the development such that every engineer has ownership, decision making power and accountability. I aim for a flat responsibility structure as much as possible. We have lots of work to do, lots of changes in process and despite the constraints we have a steady stream of features we do add.

The trick is to ensure the culture of solid engineering goes right through the organisation and informs everything from commercial/financial through to QA.

Are you guys hiring? I pride myself on writing short, simple, and readable code.

Minimal change, or minimal code? Refactoring can make code smaller, but it depends on good testing. Applying minimal changes results in redundant and complicated code, but is less likely to break existing functionality.

In the first instance, both. However, I'd take more code that was better reasoned and easily understood over less verbose code that was smaller for smallness's sake.

In terms of minimal change, we refactor when there's a clear business case to permit taking on the risk. Otherwise, we make the most minimal, most stable, least-risk change to the existing code, even if that code isn't optimal or pretty or well-structured, or even has errors...

Like most other engineering in the world really.

IME this can be hard if built on a platform or dependencies one doesn't control, which is common at early stage companies.

Because often the dependencies require surfing latest, or close enough, versions to maintain a secure system or avoid stalls for jumping major versions. Sometimes even core languages and standard libraries may require staying at least near latest versions.

This is true, but in our case all dependencies are vendored and frozen.

We _do_ have instances where target systems for deployment become ABI/API incompatible with the libraries, which is rare and happens roughly every 5 years.

The project was structured to put stability at the core, rather than being cutting edge.

What kind of software do you work on? At what company?

I have seen low level parts that are managed well, because employees have skin in the game

But I’ve also seen a lot of what this post is talking about

I'm in the same boat as GP. I work in finance. CTO said he wanted to keep things simple and stable, so I do it.

It is only possible because I'm a technical manager myself and I have a very competent (and technical) product manager working with me.

But for every person like me, there are 10 other devs trying to cram every pattern from the GoF book into their corner of the codebase, so I have to spend my scarce time reviewing PRs.

Am GP, it is finance based, but not commercial finance.

Most of the systems handle complex calculations. The system is a monolith that has been around for 15 years or so.

It isn't cool. It isn't pretty. Lots of it would be better for a refactor, but absolute stability is the goal. Refactoring things may result in long-term cost savings, but with risk. The business has no risk appetite, so it doesn't make sense. If it works, it stays, however ugly and costly.

That isn't to say some things don't get refactored, but there's a strict business case that needs to be met. Usually if the system is under performing, error-prone or end-users want features/performance that can't be accommodated without a refactor.

It's nice. The latest framework isn't being integrated year after year, there's no microservices, nothing fancy.

It's java. It's tested. It works. It makes money. It pays.

Interesting, I believe there is something to finance that allows or forces things this way. There’s much less fidget spinning and much more business in it for some reason.

Regulations (as the sibling comment mentioned) are one aspect, but the cost of screwing up is also real-world. A status quo system that isn't screwing up sets a much higher bar for a replacement, given the risk that the replacement system will screw up.

Much the same reasoning, applied even more extremely, holds in medtech - hence why you see many medical imaging setups still obviously running some version of Windows XP.

Regulation and risk. The same applied to government too.

The cost (financial, legal, reputation) of certain classes of bugs is so high that avoiding those risks becomes top concern.

One calculation issue, one buggy float operation means millions or billions in damage and the loss of your clients ... never to return ... because your name is tarnished.

I actually think this is _wrong_, and we need better resiliency in finance: the ability to roll back transactions, rewind, and replay with adjustments.

Regulations, I’m guessing.

Regulations are a good excuse, but there is still a lot of fuckery in Finance apps, especially in the Fintech space.

Stability is something that must be culturally hammered and enforced by leadership.

If it's done by stakeholders, the app will look simple but still be a juggernaut of over-engineering underneath.

I work at a consulting firm.

Our salary is loosely linked to what percentage of our work is billable (with leniency for inexperienced staff, who aren't expected to be profitable while they're learning their craft).

If you spend three hours figuring out why things fall apart on the 31st of the month... that generally can't be billed to the client, and therefore it's bad for your salary.

On the other hand, if you spend three hundred hours writing tests and implementing an awesome multi-stage deployment process that avoids one production bug a month? Your manager can totally bill that work (with the right client).

I would argue the billing model, client relationship and everything else commercial isn't running effectively at that firm.

If I were a client, I wouldn't want these perverse incentives to exist. I would want a razor sharp focus on _my_ needs, and assurance that _my_ needs are modelled in the billing.

And for that, I would pay more.

Why is the bug fix not billable, but the test writing is?

Until you have the star developer who starts promising the product manager he can do all the extra features in a week. And then, of course, none of them actually work decently, or at all. But at that point the whole team has to maintain them anyway.

Well yes, but I have veto power for that reason, as lead engineer.

I am the lead because I know, from experience, not to allow this kind of nonsense to happen.

I’m also in a similar situation, but I get the call when the thing has been on fire for a while, so it’s a lot easier.

I can’t imagine a software engineer’s interest in defensive software engineering being very visible until after there has already been a crisis to screw people’s heads on straight.

A lot of people seem to see “Do things that don’t scale” and think that’s a phrase meant for engineering.

For that to happen, two things must match: a product guy who knows your job and you who know how to make products. It doesn’t even have to be stable/featureless, in my experience. New developers tend to worship some new paradigm that focuses on “how” instead of “what”, which is all paradigms can do. And once they’re in, it goes downhill because the how dominates the what. Add a clueless product guy into the mix and it loses all limits, including budget. In the end they proclaim “software is hard” and move on.

I'm not GP but I'm in a similar situation, and yeah, this is how I do it.

My Product Manager is competent both in technical and product/design matters and is also able to call BS on complexity for complexity sake. I ensure that the development part is focused.

New developers have to prove with technical and business arguments any new paradigm or random refactoring they want to do. If there is no immediate need, we just skip it.

I'm fortunate enough to be both the lead engineer and the person making product/design/feature decisions with the engineering team.

> I'm not saying this is a common situation, but it certainly isn't rare in my experience.

I think your experiences may be skewed by the position you find yourself in?

Bless you man, you're doing the lords work

It ain't much, but it is honest work ...

... and well paid.

I've been working in consultancy for most of my career and have been in so many projects by now that seemed to be bullshit rewrites in $tech of the month; at least two projects where microservices were pushed through. The last one I was in was funny because while they had a small army of consultants and self-employed engineers vying for influence and carving out their own slice of the pie, the existing team behind the running, working, earning .NET software was just going about its day.

It was quite telling that after 1.5 years of that project, where all staff had already been replaced once, all they had to show for it was a fancy product name and a presentation. And the manager who led the project for ~2 years left right before or right when it went live - he did that in a previous project too, where a working .NET backend for ecommerce was replaced with a Scala-microservices-on-AWS system.

I did hear about the latter; I heard they went back to .NET, but the Scala services are still up and running and a maintenance nightmare.

But the lead developer got to play with his favorite tool and landed a job at lightbend because of it. Career-driven development, and I don't even believe he did it for his own career, but for self-gratification. Ecommerce is boring; REST APIs and a front-end are boring. But Scala, distributed systems, AWS, that's all cool and new.

I'm so tired.

New is not always better, but many times it is. We see this for example in programming languages, where newer ones incorporate the best features of their predecessors.

I think there are two things to be wary of: 1) Selecting a new technology just because it's hot, and 2) Refusing to consider new technology because the old stuff "just works." A good engineer looks at the requirements and selects the best tool to solve the problem while weighing the costs and benefits. Sometimes that's microservices. Sometimes it's monoliths. Granted, I don't know anything about the developers or business problems at that company, but to say that Scala microservices are just bad without justification doesn't sit right with me. It's all situational.

If an engineer comes to me and asks to use something like Scala, he'd better know all the upsides AND downsides (e.g. effect and streaming abstractions, ease of long-term maintenance, referential transparency, vs learning curve, hire-ability, 100 different ways of doing things, etc).

If new is not always better, then you’re stuck with the really hard job of knowing when it’s worth moving to the new thing.

Worse, you’ll be blinded by survivorship bias. One easily notices the good rewrites and can easily ignore the bad ones.

Even worse, bad rewrites may be noticed in a place that a year or two ago was deemed a success story. I’ve seen many such cases due to misunderstandings or just political dynamics.

And lastly, don’t let that engineer do Scala; they’ll brush off the compilation-time regression and make all developers’ lives slightly worse (assuming the project is big enough).

Yeah, good point--when I said new wasn't always better, I was just talking about the case where the new tech solves a problem, but it's not the one you have.

Like choosing GraphQL just because it's new, even if your data doesn't have the structure for it.

Will have to disagree with you on Scala for several reasons I won't go into here--but the point was just that, in order to make these arguments in the first place, you need to do your research. Seems commonsense, but surprisingly many people don't do it (including younger me).

With developers, incentive misalignment is just insane at all levels.

- There is bias towards rewarding more lines of code or more code commits (which is often the exact opposite of what characterizes good software design).

- There is bias towards rewarding speed of initial implementation (which often goes against medium-term and long term maintainability and ability to handle requirement changes which is usually far more important). The costs of this tends to fall on other developers who come in later and act as scapegoats for the 'apparent 10x developer'.

- The industry keeps dismantling narratives which would otherwise help to reward talent. For example, many companies dismiss the idea of a '10x developer' - Probably because they are aware of the previous point about the fast developer who cuts corners and creates technical debt for others. 10x developers do exist, but they're not what most people expect because paradoxically, they may be slow coders when it comes to LOC metrics and speed of implementation for new features; their code really only shines in the medium and long run and it improves the productivity of their colleagues too so it's hard to properly allocate credit to 10x devs and they only really matter on greenfield projects.

Mega agree with this. It was really bad for my personal/career growth to get a ton of praise for doing things fast: granted, a lot of people doing the praising had precious little experience in tech themselves. I probably have 2-3 whole dead years where I could have been learning/improving a lot more but got put in “10x developer” expectation projects where I’d churn something out, get a big shiny star sticker for it, and then 2 years later it would be abandoned because there was no incentive for anyone but me to maintain it, and who would want to because it was shitty code with hacks and tech-debt, and anything that isn’t writing a fucking mountain of new garbage code gets in the way of shiny star collection.

> but got put in “10x developer” expectation projects where I’d churn something out, get a big shiny star sticker for it, and then 2 years later it would be abandoned

I feel like I've fallen into this hole at my current gig, where I just churn shit out to solve a problem as quickly as possible

I get away with it just bc general code quality was already not good to begin with

Biggest mistake was going fast the first time, now I'm getting assigned way more shit

Word of advice to readers: don't make the same mistake I made. You'll just get taken advantage of

I really think we have too many people working at most companies. It pushes people to the extremes and edges just to have something to work on. Managers need more people under them to get promotions. And managers want to manage managers to keep moving up. They fill teams of people on products that could really be run by a fraction of the engineers. But that’s not where we are; we are on large teams working on small areas of the product, inventing areas to build in and often ruining the product as a result.

We also get slower with so many people. The coordination overhead is killer and losing context as the product is sliced up into small parts that move on without you

> we have too many people working at most companies.

I half-disagree with this. My take is significantly more top-down: senior management has a deficient concept of how product development works. They believe manpower is to be spent to achieve revenue, either by directly selling the result as a product (e.g. airplanes selling wifi to passengers) or by it being a differentiating feature for the sales department. This causes every allocation decision (like hiring) to fundamentally be biased around getting a tangible return: by creating new projects, new features, and new buggy microservices.

Further, since management only has two knobs (manpower and timeline) to play with, they like to move them to feel like they're optimizing the project. It's always the same fallacies, too: "we hired more people so we can create explosive growth", "we created ambitious timelines, now we're striving to fill them" etc.

I don't have a solution for this, except to note that it can be mitigated by managing up. Construct your own narrative, and take advantage of the fact that the non-technical people above you govern almost entirely by gut feeling.

Yeah I dunno, I hear this a lot, but there has universally been way more work to do than people to do it at every company I've worked for. But that doesn't mean the right things are being prioritized.

There is a lot of work, but a lot of that work is generated by people doing the wrong thing too often.

If we had a smaller and more competent team, the initial build might have been marginally slower, but we wouldn’t have to spend a permanent 50% to just keeping down the technical debt.

You’re working from a cost-efficiency / cost optimization perspective. That’s a great perspective in some contexts, for example, mature late-stage products, fully saturated markets, etc.

Is cost efficiency an effective perspective for innovation or revenue growth? Mostly, no. As long as your risk-of-ruin is low, then you want to fail. Sometimes people misinterpret this as “doing the wrong thing”. But it takes doing a lot of wrong things to do the right thing.

The difference between right and wrong, if there ever was such a simple dichotomy, is so marginal and only understood in hindsight.

You ask the executive if he wants to get to his goal with a team of 200 after lighting on fire 20M, or with a team of 20 after paying 2M.

Ultimately you end up in the same spot, but one choice is fairly suboptimal there.

The whole point of the comment you replied to is that, if the "goal" is something innovative rather than just sustaining, then no, you don't end up in the same spot in those two scenarios.

But this is why some executives are better for some kinds of businesses and others are better for other kinds. Some executives don't understand your parent comment's point (or just don't find it comfortable), and will be very allergic to the "waste" necessary to experiment and iterate on poorly understood projects. Other executives will be uncomfortable just constantly figuring out how to optimize costs without damaging revenues.

A very tricky part of the lifecycle of many companies that get gigantic is to figure out when to flip this and start switching out the executive team to focus on a different model.

Maybe I am biased by my background in finance tech but I saw a ton of guys get rewarded for what (at the time) seemed like boring maintenance of systems.

In retrospect - it was recognized that they built or took good care of money making systems with little drama and that was well appreciated by the companies.

In FAANGs I see now more of "what will get me promoted" versus "what is gonna make the company money" ethos.

"finance tech" -- those people maintaining those systems are middle-aged, comfortable, and well-paid. If you were junior under them, you would never want to stay. Fifteen years ago, there were large fresh hire classes each year. Tons of juniors around the office. Plenty of young ones dreaming up unnecessary "upgrades". Most of that is gone as the industry has matured. If anything, the hordes of junior hires have moved from finance to Big Tech.

Promotion driven features are definitely not entirely a myth IMO, but on the other hand, anecdotally I saw quite a few people get promoted at Google for doing make-things-work-better work. The trick was figuring out which were the important things to maintain and make small improvements to, rather than just which were the things that seemed fun to tinker with, but weren't as impactful.

They're not mutually exclusive.

FOSS software has vastly different incentives than commercial software, yet suffers from many of the same problems: bugginess, poor performance, lack of documentation, feature misprioritization, bad UI.

That alone indicates that the problem is not merely "misaligned incentives".

Actually, you can reduce most problems down to "misaligned incentives" if you're overly reductive enough. That doesn't mean that it's a useful way to think about the world.

I think Free Software suffers from the misaligned incentives. Take documentation for example. Why would I write it? I already know how the system works. I designed it! If I forget in a few years, a quick glance at the code will refresh my memory. One would argue that you should write documentation so that people will use your thing. That's true! But there is almost no incentive to have users; you pay a cost, but they pay nothing in return. (Someone will send a bugfix now and again, of course, but it's very very rare.)

Some other incentives are balanced, though. Persistent low performance or bugginess affects the author and end users equally; the more the author uses their own software, the more this will hurt. Sometimes the low performance is a design trade off; Python isn't Rust, and the users seem to be okay with that. It was done on purpose. Sometimes low performance is a factor of the author's needs; you're trying to run the thing on 1 billion machines, they only have 1; something has got to give. But that's not misaligned incentives so much as it is lack of suitability for a particular purpose. A screwdriver is terrible for hammering in nails. That's not the screwdriver's fault.

> I think Free Software suffers from the misaligned incentives.

It's really hard to tell what the motivation of any given free software author is. That makes it really hard to even know what incentives matter to any give author, team or community. It's just really diverse.

> That's true! But there is almost no incentive to have users; you pay a cost, but they pay nothing in return.

It's fascinating to see free software with huge user bases getting on with a tiny number of contributors. It seems like good code + near zero support cost + near zero support expectation seems to work.

Many, likely most, FS projects seem to fall under "likes tinkering." So it's often a different spin on "add shiny tech", except without any PM at all.

Reminds me of one of my favorite papers ever: "Nobody Ever Gets Credit for Fixing Problems that Never Happened". https://web.mit.edu/nelsonr/www/Repenning%3DSterman_CMR_su01...

Unfortunately, for my money, I think the only real way you can create an incentive structure which emphasizes stability over change is by offering some kind of insurance.

My father was an electrician who often complained about how he never got paid adequately for the stellar, stable work he did, and one day I asked him whether he ever thought of raising his rates but providing a kind of service guarantee, where if a problem occurred that could be traced back to his own work, he would step in and perform the additional work at a reduced fee. Naturally he laughed out loud, because that's not how business works.

Ownership of an already-mature product is sort of like providing an insurance policy by default, of course. And sticking with conventional designs can be a solid business strategy if you use their slow-changing nature to e.g. build the thing faster than you could otherwise. That's the strategy I'm using for my consulting: Stick with what we know best (Hugo+Bootstrap for a napkin sketch UI demo as fast as possible, then SQLite+Django+React to build out the main functionality ASAP too). Emphasize solving the _business_ problem over the shiny tech.

I don't know if there is a name for it, but this is a plague for every security-related thing, or any jobs where the more skilled you are, the more people forget you (like a sound engineer for a movie production).

An ex-director of a French national security agency complained about exactly that during an interview: you get more budget after a terrorist attack, or after you stop one that was well under way, but never if you avoided the conditions for a terrorist cell to form altogether, or nipped it in the bud.

I don't know, the more I advance in my career, the more I see it as the opposite. Wide eyed developers with big designs, who are obsessed with the technical aspects of a solution, who disregard the practicalities, long term implications at the social level (who is going to maintain this, do we have people that have that skillset, is this worth the effort, does it really matter to be this elegant, or is it more important to ship quickly and economically) come off as a bit immature and the more effective engineers who understand these priorities are given more respect and authority.

As a team lead, I’ve found it really difficult to keep curious, smart, young engineers on track. Everyone wants to go off and build shiny things instead of solving real problems. I have to find enough shiny problems that actually need solving to balance out the daily grind. Interestingly, I also find it difficult to instill a sense of meticulousness, and how important it is to write code in a way that reduces bugs. Clever engineers come up with clever, complicated solutions that are written quickly and rely on coincidence to function. Life experience is the best teacher for this, but I often need to step in. I’m still not sure what the balance is there.

> I’ve found it really difficult to keep curious, smart, young engineers on track

I’ve found the opposite. The young engineers are generally willing to listen to reason. The older Enterprise Architects are the ones that want to keep making things more complicated, or want to keep using suboptimal solutions because we’ve been using them for years.

Now that I write it down it’s kind of curious how on one hand it’s complicating things with stuff they already know, and on the other hand it’s absolute rejection of stuff they don’t.

Maybe I’m the same?

> the more effective engineers who understand these priorities are given more respect and authority.

The problem with "given more authority" I see is that management plucks these engineers out to make their day job basically "sit in meetings" if you're even slightly effective at simplifying life for everyone else.

Because that is the place of most leverage to place those people, but then those people are in a constant tug-of-war with the first group of "fresh ideas".

Eventually, the people who are in charge of the "prevention of bad architecture" become the bad guys because they (or me, I'm projecting) get jaded into just finding out what's wrong with something as fast as possible to be able to keep up with that workload.

You go from a creative role to a fundamentally sieve-like role with destructive tendencies, where you are filtering the good from the bad as fast as possible.

First of all, not all new ideas are bad and "there's something bad about X" is not a "let's not do X".

Secondly, going from making things to shooting down things is intellectual suffering if you have a bit of empathy.

Some people on the "committee" with you don't have empathy & literally enjoy it - you're either trying to damage control on a case-by-case or building a "these assholes need to be fired" doc out of the meetings.

I realized what I would become if I conflated authority and respect ("respect my authoritah!").

Quitting was really the only way out of it. But it wasn't hard to explain to my spouse that i needed to leave for a job a level down and which paid half as much, because she could see me bringing my "why the hell do we have to do this" attitude home & to family decisions.

The problem for a lot of the social level issues is that it is pure pure politics.

There’s a couple of woodworking hand tool companies who among other things make replicas of old school Stanley tools, the way Stanley used to make them (materials and tolerances). They also fuse the best elements of several eras or manufacturers to make slightly better versions. Surfaces from this one, handles from that one, adjustment mechanism from a third.

I hope that I live to see a time when software applies modern algorithms to classic designs and produce “hand tools” in software.

The field of software is maturing as we’re reaching the end of Moore’s Law and time passes. Times of constant innovation are very slowly coming to an end; the curve is slowly flattening. You can already see it with general trends like type safety, DX features universal in all languages (linting etc.), and browsers finally becoming the universal OS (Wasm, WebUSB, GPU), with more and more things being standardized every day.

Proebsting's Law says compilers double code efficiency every 18 years. I wonder what the doubling interval is for algorithmic performance. I expect it would be tough to calculate, like the cost of living, because algorithmic improvements rarely affect all aspects of code performance equally. Incremental improvements in sorting efficiency likely have one of the broadest reaches, followed by concurrency improvements and object lifetime analysis. Then there's a long tail of niche improvements that only apply to certain domains. Only the Amdahl's Law parts of the code have a substantial impact on performance.
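To put numbers on how slow an 18-year doubling really is, a back-of-the-envelope calculation (my own arithmetic, not from Proebsting's note): a doubling every N years implies an annual improvement of 2^(1/N) - 1.

```python
# Annual improvement rate implied by a given doubling period.
def annual_rate(doubling_years: float) -> float:
    return 2 ** (1 / doubling_years) - 1

print(f"compilers (18-year doubling): {annual_rate(18):.1%}/year")  # ~3.9%/year
print(f"hardware  (2-year doubling):  {annual_rate(2):.1%}/year")   # ~41.4%/year
```

So compiler improvements compound at roughly 4% a year, an order of magnitude below what a Moore's-Law-style two-year doubling would give.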

> No designer is given promotion for sticking to conventional designs. It's their creative & clever designs that get them attention and career incentives.

This. I recall the case of a couple of FANGs who on one hand expect their engineers to deliver simple, maintainable and robust systems to minimize operational costs, but on the other hand demand engineers operate at the next level to be considered for a promotion, which means they are expected to design non-trivial systems with a certain degree of complexity that require significant amounts of work to pull off. Therefore, as an unintended consequence, this pressures inexperienced engineers to push needlessly complex projects, requires them to design solutions well above their level of expertise, and puts them in a position where their career is personally threatened if anything gets between them and their promotion-driven project.

Coworker of mine wrote a thing in java… it was too slow (intensive locking between threads)… so he rewrote it in C… it was crashing all the time (because he sucks), then he rewrote it in go. Got promoted for this feat.

That's kinda what Go was initially made for, IIRC. They noticed that people whose job is not programming but who had to write some code (say, some analytics or such) often did it in Python, and if it was too slow they moved to C or Java and were predictably terrible at it; that's why Go was made simple and with built-in concurrency primitives.

But his job is programming. He is very proud of his C skills. If you listen to him, it crashed because C is intrinsically bad (it is difficult yes), but I guess it was also such a bad codebase that rewriting it made sense.

He also holds a grudge against me for having dared to rewrite a sacred C library he wrote (which was a constant source of segfaults and which I rewrote in one afternoon).

Sounds more like his job is producing tech debt. I saw a few people like that; basically none of their code was left, as most of it eventually needed to be replaced because it was shit.

Yet, the people who wrote the minimalist, elegant and usually open source software we all rely on (e.g. sqlite) are highly regarded.

All of what you said is true, but there are still people who think a minimalist, rugged and reliable solution is superior. That maintainability is a value in itself (and thus one should not choose the wildest, most experimental dependencies).

I'm not sure it's just incentives. Inexperienced early-stage founders often end up solving imaginary problems, despite having a real incentive to get it right. The Y Combinator motto is "make something people want" because so many people don't.

^ this ^

Until we figure out a nice metric for "removing complexity" and then rewarding for it, it's not likely to change, IMO.


I’ll say one thing though: all of those skills you mentioned as not being valued are extremely useful for indie dev/taking ownership. Shipping simple + correct code frequently is extremely possible with sufficient practice and discipline.

More to your point, this is why I switched toward being a research engineer. There is a higher barrier of entry, projects are quite technically challenging, and constraints often force thinking of a world of computing beyond the tiny sphere of the web browser.

It’s hard work, but I love it.

If this resonates, you are in the US, and looking for a change, drop me a note (see profile).

Just want to throw out a way we could align everyone’s incentives: a robust universal economic safety net. If people were working to make a good product rather than stop their children from starving, our natural inclination to take pride in our work would be allowed to flourish.

Not gonna convince anyone but hopefully someone reads this and starts thinking about such options. They are not as impossible as those in power would have us believe.

You can still get away with these things if your only user is yourself or maybe a small handful of non-enterprise folks. This could be why the so called "scientific" programmer feels like they can be as productive as a team of 10+ software developers.

And also why the most frequent request from the users is: Please don't change anything.

The principal-agent problem looms large in software.

"And also why the most frequent request from the users is: Please don't change anything."

I think it is: "Please don't change anything I did not request"

(who does not hate updates that break your workflow?)

But they usually very much like changes that make life easier for them. The best way to find out is to watch them using your tool and talk with them.

I like that it's this way so we can easily compete with these dysfunctional companies. Don't fix them :)

There are exceptions to all the above, but only after the worst case burns from failure to do those things. If you are losing customers to competition that doesn't crash, then suddenly you can be the hero by making yours more stable. Of course only if you can do this before the company dies.

I agree with just about everything you've said, except that bit about rewrites. Rewrite projects typically go down in flames, and on the rare occasions that they don't, the business stakeholders are still mad because their precious feature factory was down for maintenance for months.

I don't think you two disagree, GP is just saying that large-scale rewrites are rewarded, regardless of the result. I've seen that happening even when stakeholders were unsatisfied.

> months


Not universally true. Just find a manager who values successful delivery of project goals and is fine with "boring" i.e. tried-and-tested technology.

I blame Career-Driven Development for a lot of "shiny toy" (i.e. failed and complicated) projects.

Whether that's wrong is a deeper question - if it leads to better salary through job-hopping and promotions, despite a catalog of failures, is it actually the wrong approach from an individual engineer perspective?

I still say yes, FWIW; I enjoy seeing my projects succeed and thrive. But I know others may disagree.

And at least at the “enterprise” level for B2B software, there is intense customer demand for more features and, simultaneously, more stability.

From my view, that and pressure from the analyst racket are the main drivers behind feature bloat with self promotion a distant third.

You can think of it in even broader terms: being/staying lean, re-using tried and tested things, adapting your requirements to widely available solutions rather than developing custom solutions to fit your requirements, making everything work more efficiently, etc -- all those resource-minimization issues are "less" problems: not good problems to be working on when your raison d'être is "more" -- and that's the only raison d'être for a lot of people and actually for all businesses. (Obviously there are also specialists for doing "less" "more": unsurprisingly, they are generally paid a cut of savings.)

and the re-writes have to be in a trendy language that other companies are using, just for the engineer to stay relevant.

everything about engineering group decisions are about creating a reason to use a newer framework in a way other people can vouch for.

"Show me the incentive, I'll show you the outcome." - Charlie Munger

Isn’t he the guy who tried to get a university to build a giant windowless dormitory cube?

I mean, he's not wrong though - the incentive was "University gets a large amount of cash" and the outcome, predictably, was "University bends over backwards to accommodate insane requests of donor".

(The correct solution, obviously, is that universities should be sustainably state-funded and not require mega-donors with their associated insanities, etc. to survive.)

Still don't understand the opposition to that. Students get affordable, safe, private housing on campus, where there's a million places to hang out beyond your windowless bedroom. Libraries, open spaces, study halls, cafes.

The alternative is often paying $1K for some barely maintained triplex basement shared bedroom off of campus, from a negligent landlord.

I don’t think it’s as easy as you’re making it seem. The issue is that there are unintended consequences to each one of those points you mentioned. I’m pretty sure a substantial amount of thinking goes into software design from all aspects, and it’s a bit reductive to say that it’s just lack of incentive. Humans doing software are not some machine learning algorithm to train with reinforcement learning techniques.

All of those are absolutely real. There is a lack of economics and a surplus of optics in the game for all players.

I don't think these "No"s are entirely right, though they are directionally right. But there are actually healthy businesses (and healthy divisions within less healthy businesses) out there that do incentive those behaviors, and it's a huge competitive advantage for them.

Just for context for others: This is an extreme description that doesn't match many of the actual jobs. My team/environment is the exact opposite of this, for example. I think parent projected their experience way too far on the whole industry.

I’m going to reference this comment in the future when I build my next software product. Thanks

This kind of problem surfaces in a large amount of systems involving humans and roles.

I call this "the tragedy of software development"

How do companies keep pushing their quarterly numbers higher and higher? By manufacturing innovations! Welcome to 21st century capitalism.

The author hits the nail on the head with his claim that imaginary problems are more fun than real ones.

As developers and smart folks in general, we like complicated problems that are big and far away. How many times have I heard in a meeting, "Yeah, but when we have 1M users..."

It's great fun to think your product will get to 1M users. It's also very unlikely. It's not nearly as fun to finish and ship and market and monetize the half-broken thing the team is working on now. Yet that's the only way out and the only way anyone gets to 1M users to begin with.

> The author hits the nail on the head with his claim that imaginary problems are more fun than real ones.

Not necessarily. It's just that most developers have never worked in a setting where they got to work on problems properly.

Solving real problems for real people is very addictive. There is a reason some people like working for startups. It's because you live very close to your users and when you make them happy you know.

The second interesting fact is that if you just plow ahead and solve many real problems fast you will eventually run into problems that are both real and interesting.

After having tried that there has been no going back for me. I am allergic to imaginary problems. It feels as pointless as watching people who are famous for being famous argue on TV.

I think we are all victims of our feedback loops. (University) Education subtly teaches us that the only important problems are those that are very difficult and preferably have never been solved before. Those same problems also make for better blogposts. In the real world the incentives are mostly opposite. Problems with no known solutions (or only really difficult solutions) are generally bad. They can be worth it, but you should stay away from them until you know they are worth it. Software engineers seem to almost pride themselves on not knowing what their users want.

It takes a while to scrub all that bad learning out and replace it with something better. Unfortunately some people are stuck.

> (University) Education subtly teaches us that the only important problems are those that are very difficult and preferably have never been solved before. Those same problems also make for better blogposts. In the real world the incentives are mostly opposite. Problems with no known solutions (or only really difficult solutions) are generally bad.

This is a great insight. One can add value to other people's lives by applying known solutions in relatively novel contexts (e.g. building a CRUD form at XYZ employer), whereas it's very hard to add value to other people's lives by trying to develop entirely novel solutions (because the probability of success is so low). Most of our training however, focuses on the methodology used to develop these novel solutions, rather than on the application of the solutions themselves.

> Solving real problems for real people is very addictive.

This was the feedback loop that worked best for me. Imaginary problems and needless complexity go hand in hand; ruthless editing at the planning stage is necessary to combat them.

Second this.

The endorphins from making someone's job less sucky are a way better high than solving some code puzzle.

Reminds me of a PM I used to work with. "Will this work for 1000 simultaneous users?" After almost 2 months, we have less than 100 users total, maybe 5 of them log in a day, and maybe 1 will actually do anything of interest.

There is no technical problem. The problem is that nobody worked on actually marketing the product. "Build it and nobody shows up" is the norm.

I was interviewing with a company that had barely any customers and they were asking scaling questions with Spark, etc. The salaries they paid could barely hire a team capable of dealing with the complexities of Spark, so they asked, "what would you do."

I told them I'd buy another stick of RAM and scale vertically until I had more customers, and save money on staff in the meantime. The interviewer went cold; I didn't get the job.

About 10 years ago, I worked on a project where I had to develop some sort of elaborate chain of map/reduce jobs because "big data" and "Hadoop." We were processing about 10 megabytes for each run. Most of the processing was consumed in job scheduling / overhead.

Sounds like a dodged bullet.

""Will this work for 1000 simultaneous users?""

Whenever someone asks me this question, I reply with a question: "How many simultaneous users do you/we have today, and what is our projection for, say, 12-18 months from now?" If the answer is not clear, I tell them not to worry yet. If the answer is very clear but the numbers are much smaller today (say 5-10), then I challenge them on where they think they/we could be in 12-18 months. A lot of the time, it helps the other side see that they are mostly asking "How long is a piece of string?"

I’ve worked somewhere like that. Our baseline requirements for concurrent users were based on the numbers required for the product launch team to maximise their bonus.

We never saw anywhere near those numbers in production, but I don’t really blame them - it was a big company and you do what you can to get ahead. A lot of money was spent on infrastructure that wasn’t needed but nobody seemed to care.

And people underestimate how well some solid, dumb solutions can scale. Boring Spring Boot with a decent data model and a bit of effort to stay stateless scales to the moon. Or: we have a grand "data export" system customers use to collect data from us for their own DWHs. It has survived 2 attempts at replacement so far. At its core, it's psql + rsync, recently migrated to psql + s3 when we decommissioned our FTP servers. And it's easy to extend, and customers are happy because it integrates well.

> And people underestimate how well some solid, dumb solutions can scale.

I'd say they underestimate how long "just throwing money" (servers) at the problem can work.

If you earn decent money now, scaling the number of app servers 10x to serve 10x the traffic will still earn decent money. Doesn't matter that "PHP is slow"; deal with it when your infrastructure costs warrant hiring more/better developers to fix it.

Especially now. A pair of fat servers with some NVMe drives and 1 TB of RAM will cost you less than a few dev-months, and that can serve plenty of users in most use cases, even before any extra caching is needed.

Even then, you don't have to throw money at the problem right away. If you feel you can save time and money by using Rust instead of PHP (just using the two languages as examples, not a specific indication of Rust or PHP's resource drain), go ahead. Making that decision early on costs nothing.

It's only after a project is off the ground that caring about these decisions winds up wasting everyone's time, that's when you wind up slowing momentum tremendously due to dangling a potential new toy in front of your team.

>> And people underestimate how well some solid, dumb solutions can scale.

I think this started with the old Apache web server. When the www got started, that server did the job, so everyone used it. The problem was it didn't scale, so all kinds of cool solutions (load balancers and such) were developed, and everyone building something bigger than a personal blog used that stuff. For most, the root problem was that Apache had terrible performance. Nginx has solved that now, and we also have faster hardware and networks, so anything less than HN can probably be hosted on an rpi on your home network. OK, I'm exaggerating, but only a little. Bottom line is that scaling is still treated like a big fundamental problem for everyone, but it doesn't need to be.

Yep. Facebook got pretty far down the road with PHP, MySQL, memcached, etc.

All they had to do was write a PHP compiler and a new storage engine for MySQL.

HipHop (later HHVM) was around 2010, so they scaled from 2004-2010 before that became needed. MyRocks was 2015. Wikipedia says FB was around 300 million users in 2009, then 400 million users in 2010.

Yes, good point. But you have to wonder what kind of engineering effort went into scaling PHP and MySQL up to the point where they decided to build a compiler and a storage engine.

When you have half a billion users that both read and write all day, you have to optimize, no matter the tech.

That is undeniably true, but I do think the starting point still matters.

It was much, much, much cheaper than a rewrite would have been, that's why they did it.

Edit: also, in 2004 when they got started, what else could they have used?

The trick was there was enough growth that the savings from the compiler were massive. (I worked there at the time.) The inefficiency of the PHP interpreter was a great problem to have, because it came from the success it enabled.

So I think the interesting question is whether the rest of us can learn anything from what happened there.

I believe Mark Zuckerberg simply used the technology he knew and took it from there. That's fine. I probably would have done the same thing.

But many people are making an ideology out of this, arguing that not giving a shit about performance is always the right choice initially because that's how you grow fast enough to be able to retool later.

I think this is based on assumptions that are no longer true.

In the early 2000s, mainstream programming languages and runtimes were either fast and low productivity or slow and high productivity (Exceptions such as Pascal/Delphi did exist but they were not mainstream). And the cost of scaling up was prohibitive compared to scaling out.

Today, you can choose any fast high productivity language/runtime and go very far mostly scaling up.

I take away two lessons from it, which are in a kind of essential tension that can only be mediated by wisdom and experience:

1) Pick technologies that you can optimize.

2) Don't over-optimize.

Also, the concept of "optimization" here has very little to do with the language itself. It's far more about the overall stack, and it definitely includes people and processes (like hiring). It's not like FB invested $0 toward performance before swapping out PHP interpreters! Its massive caching layer, for example, was already taking shape well before HPHP (the C++ transpiler which preceded HipHop), not to mention the effort and tooling behind the MySQL sharding and multi-region that still exists in some form today. Many backend FB services were already written in C++ by 2010. But they had already gone very, very far—farther than most businesses ever will—on "just" PHP. Heroics like HPHP only happened after enormous pipelines of money were already flowing into the company.

Learn what? That you should use the language that you’re more comfortable with and then scale? Or that languages have become more efficient? Php 8, for example, is many times faster than the php 4 and 5 that Facebook was using.

Part of the reason PHP 8 is now so fast (and it was 7 that had the quantum leap in perf) is precisely Hack: it was easy to accept the status quo on performance until Hack showed there really was a lot of performance left on the table.

For me the biggest win was the changes they made to how arrays are stored in memory, I saw some systems drop by half in memory usage and had to change basically nothing - those kinds of wins are rare.

Yeah, I know the performance optimizations were in part because of HHVM.

I think using what you already know remains a choice that is very hard to criticise. But we didn't have to learn that, did we?

Beyond that I think there is more to unlearn than to learn from the history of the Y2K batch of startups. The economics of essentially everything related to writing, running and distributing software have changed completely.

Did they succeed because of PHP or was it just a tech used at the time and anything else similar at the time would be fine either way?

They succeeded because of php. It was easy to use for them. So it enabled them to materialize their ideas. It was the right tool for them. Anything else would have been fine either way, if it was the language they were the most comfortable with. In their case, it happened to be php.

That just sounds like "they succeeded because they knew a programming language", not that it was the right one compared to the competition.

No, they totally succeeded because they used php. I think Zuckerberg said it himself that php allowed them to add new features easily. I think he mentioned that it was easy for new people to pick it up. I’m pretty sure Facebook wouldn’t exist today if it had been written in the more corporate/esoteric languages available at the time.

Its ease of use allowed him to launch the site from his dorm room. Iirc, YouTube was also written in php (it had .php urls), before google bought it and rewrote it using python, so you could probably thank php for that site too.

Just checked. It appears it was indeed first written in php then changed to python then to java.

Well, OK. But by that logic, if the language they had been most familiar with was Fortran, should they have used Fortran for Facebook? I tend to think that there are actually material differences between languages and technologies, and it's worth knowing more than one language and not using terrible ones.

> “But by that logic, if the language they had been most familiar with was Fortran, should they have used Fortran for Facebook”

Absolutely. Otherwise they wouldn’t have been able to release the actual product and keep adding features to it the way they did with Facebook. They’d spend half the time learning the “right” language & environment. That would have slowed them down to the point they wouldn’t have been able to work on the actual product as much as they did.

And feature-wise, Facebook evolved really quickly.

I don't think there was anything similar to PHP that wasn't proprietary (Cold Fusion etc.), and FB engineering culture was to avoid vendor lock-in.

In any case, in the 2000s a PHP programmer/designer was analogous to a JavaScript developer today. Lots of talent out there, and it only took a few weeks of orientation and familiarizing for new hires to be productive.

Your comment implies your understanding of the timeline is backwards. They had to do those things after they had gotten hundreds of millions of users.

Depends on what you consider "far down the road" and what they had to do before writing a compiler and a storage engine.

How long did it take until Facebook engineers realised that their technology stack was not the best tool for the job? It definitely wasn't the day when they decided to build a compiler and a storage engine.

I'm not sure there was really a best tool for the job in 2003-2004 that would have been high-level enough to be productive, and scalable enough to stay mostly as-is. Java, maybe.

I agree, and I'm not criticising the choices that Mark Zuckerberg made at the time. But we are no longer facing the same situation he did. We do now have high productivity, high performance language runtimes. And scaling up has become much cheaper (relative to the number of users you can serve).

That's why I think it can't hurt to remind people of the great lengths to which Facebook had to go in order to deal with the limitations of their chosen platform.

Yeah, I kinda don’t agree with the dichotomy of “you either optimize or you build features”. They’re not exclusive. If you understand the tools and their trade offs you should be able to use the right tools for the job which won’t hinder you in the future.

Of course if all you know is JavaScript, then that requires going a bit outside your comfort zone.

> And people underestimate how well some solid, dumb solutions can scale.

And overestimate how expensive it is to just add another server, or underestimate how expensive a rebuild is. But then, part of that is also that IT departments want the budget; if they don't spend their annual budgets on innovation, their budget gets cut.

Or in my neck of the woods, the EU subsidies will stop if they don't have anything they can file as "innovation".

I worked with a project manager who went too far in the opposite direction, though. Their pushback against premature optimization manifested in wanting to start with a functionalish "proof of concept" without any real design phase, so they could say we cranked out an MVP in the first sprint... and before you know it, like most non-blocking technical debt, the "migrate functionality to final codebase" kanban card moves to the "long term goals" column (aka trash) and you're stuck with a shitty, fragile production codebase. The opposite extreme, trying to get everything into a final state right off the bat, is like trying to play an entire 8-measure song in one measure.

At the beginning of a project, before I write a line of code, I try to:

a) If it's user-facing software, get some UI designer input to shape functionality/interactions, not styling. To end users, the UI is the software, not just the shell, so mediocre UI = mediocre software. It can also illuminate needs you didn't consider that affect architecture, etc.

b) Block out the broad-stroke architecture, usually on paper.

c) Intentionally choose languages/environments/tooling/etc. rather than reflexively going with whatever we've been using recently.

d) Spend some design time on a reasonably sane and extensible, but not overly detailed, data model.

There's no perfect solution, but at least in my cases, it seems like a good middle ground.

The trouble is, an 'MVP' is often far from 'minimum' in those sorts of situations.

The reality is that an MVP should be missing most planned functionality and should really just be a few core features and functions that the rest of the application builds off of: the trunk of the dependency tree, so to speak. That idea is, unfortunately, lost on the majority of PMs, and ultimately it costs more time/money to get to a finished v1 because of it.

It was actually the appropriate scope for an MVP; it was just an unreasonable time frame for building anything more solid than a demo, given the complexity of the project. That's fine for a genuine proof of concept / rapid prototype you're going to be disciplined enough to trash, but letting that slip into the role of your core codebase is like pouring a sloppy scaled-down concrete foundation as a test for a house, and then trying to just expand it into what you need.

As a solo dev on a project, I constantly re-evaluate whether the thing I am working on is beneficial to my users or if it's an "imaginary problem" as this post describes. In a large software project you always have a laundry list of things to do, from "fix the date format on this email template" to "implement a better feature flag system". While I'm tempted to always work on the "fun stuff" (the feature flag system), I make myself work on the "boring stuff" (fixing the date format on an email template) because I know that's what users need. Occasionally you get an intersection where the cool, fun, and interesting problem is also the most pressing one, but I've found those times are few and far between, and most times I have to decide between the "cool fun [imaginary] problem" and the "boring [real] problem".

I remember having a similar argument with someone saying that your C code has to compile and work on every platform that exists, including weird CPUs with known bugs.

Unless you're working on something like the Linux kernel, that's an imaginary problem.

"Newer versions of the compiler build your code with security vulnerabilities" is a very real problem in C. E.g. since x86_64 has no aligned memory access instructions, a lot of programmers assume there's nothing wrong with doing unaligned memory accesses, but actually recent gcc/clang will happily compile those into RCE vulnerabilities.

This is why I think running a small business was the best thing I ever did for my software career.

How do you manage sales?

Now there are a whole bunch of NewSQL databases that will scale past whatever number of users you can imagine. So your good old Django or Rails app can scale to 1M or 10M users without you doing anything exotic. That's not "fun" though.

I wonder if you've ever seen this classic video mocking such marketing claims which is called "MongoDB is Web Scale": https://youtu.be/b2F-DItXtZs

Have you ever seen Spanner? Things don't stay static. And yes, I have seen that video; I've been doing web dev since 1999.

I recently added a couple of constants to some project. One of my teammates said it wasn't a good idea, because we could have hundreds of similar constants eventually.

Those constants represent the markets supported by that app. By the time the app supports even a few dozen markets, every engineer involved will have exercised their stock options and left.

I think this is an attitude shift that a lot of developers need to get over. They like writing code, they like working with computers, and they pick that to do as their day job.

But their day job actually isn't writing code, it's solving a problem for an end-user; the programming language is just a tool.

Rethink your job from a coder to a problem solver and you should be able to get over the compulsion to overcomplicate things for your own gratification.

Not the first time the issue has been pointed out:

> Simplify the problem you've got or rather don't complexify it. I've done it myself, it's fun to do. You have a boring problem and hiding behind it is a much more interesting problem. So you code the more interesting problem and the one you've got is a subset of it and it falls out trivial. But of course you wrote ten times as much code as you needed to solve the problem that you actually had.

[1] http://www.ultratechnology.com/1xforth.htm

I am dealing with that today. Talking about scaling out to hundreds if not thousands of AWS accounts, and I'm like, "we've added 6? in two years?" Why are we wasting time on this?

Reddit has 1M users and is half-broken, yet monetization is still a problem.

1M+ was implied. I am sorry I forced you to go on a stats-collection side quest.

So what’s your point exactly? The type of problems you have to solve for 1M users is completely different from those for 500M users, including profitability.

I agree with this to some extent. But there’s a flip side too.

This mentality is often taken way too far. I had an old boss who wouldn’t allow me to write unit tests citing this thought process.

Even at places with decent engineering practices, I’ve seen so many examples of software where you’re limited to a one to many relationship for something that could and easily should have been implemented as many to many, rendering the product useless to many because a product person couldn’t stand to think a couple months ahead.

Some people seem to take this idea too far and basically assume that if a problem is interesting or takes away a tedious task, it must be overengineering, premature optimization, or an imaginary problem.

Perhaps a better way to phrase the issue would be “artificial constraints”, which would encompass the flip side too.

Yes. While it’s less common, I’ve seen orgs struggle because they didn’t have enough imagination.

Every feature is done quick’n’dirty and eventually you have people whose full time job is to respond to customer complaints and fix data straight in the production database.

Bad engineering but potentially good business if it’s all billed to the customer…

No, it’s bad business because it doesn’t scale. Software is lucrative because you make it once and sell it to thousands of customers. If you’re making every customer their own bespoke thing, you’ll spend all your time for little return.

“Billed to the customer” means you’re charging the customer by the hour / project. You can get plenty of return selling bespoke things this way. Accenture is a $200 billion company.

That’s called Professional Services. Professional Services assemble a solution for a customer from a variety of components and maybe build some glue or the equivalent of a dashboard. This is not the same as having a ton of “if” statements in code to handle customer X vs customer Y.

The secret, as a software vendor, is to generalise these bespoke customer requests so you can sell the solution to all your customers (and get more customers!). If you are really cheeky, you can even get that customer to help fund the development that will make your business more money (hey, it’s win-win). You need to ruthlessly follow this approach though, as the rot of bespoke code will quickly become an insurmountable quality nightmare that can sink your business.

Earlier in my career I ate up the lean startup, move fast and break things, y combinator stuff. And while there are some very good lessons there, I’ve also come to realize that when you stop working on a part of the code, that may very well be the last time someone goes in there to make serious changes for a while. So sometimes it makes sense to do it right, even if it takes a few days longer (but not if it’s going to turn into some massive overengineering project).

Yeah I agree. I think the biggest mistake people make when applying YAGNI is not considering how difficult it will be to change later. If it's just some hard-coded value that you could easily add to a config later? Fine. YAGNI.

If it's something more fundamental like language choice or system architecture... Well fine YAGNI now but if you ever do need it you're screwed.

I've seen a lot of engineers complain about YAGNI being taken too far, but none who have seen their concerns validated by reality.

I have seen it validated by reality several times… more times than the opposite. I had a boss refuse to let me do a refactor that changed these sketchy dynamic field tables into json columns because “it’s not customer facing.” They were unable to show off features in an important demo because the endpoints were timing out despite putting 2 other people on it for 2 weeks to find code-based optimizations.

3 days later I deployed my “nice to have” fix and the performance issues disappeared.

I’ve also seen a company stall out scaling for years and lose multiple million-dollar customers despite having a novel in-demand, market leading product because they refused to do anything to clean up their infrastructure.

> I had a boss refuse to let me do a refactor that changed these sketchy dynamic field tables into json columns because "it's not customer facing"

YAGNI isn't about not refactoring existing technical debt. It's about not trying to pre-empt future requirements.

If you're refactoring in anticipation of as-yet-unmaterialized requirements, then YAGNI applies - e.g. generalizing code when today there is 1 specific use case because tomorrow you think there will be 3+.

If you're cleaning up existing code while working on it and the boss stops you because "it's not customer facing", then he's just asking you to violate the boy scout rule.

All of these definitions are fuzzy... refactor versus upgrade versus feature. When the people wrote it the way they did, they were almost certainly thinking that they don't need to overthink or over-engineer, and that they should discount hypothetical future concerns.

I can give you an abundance of examples. We were creating a page that was going to use state in a certain way. I was trying to insist that we address the way state will be handled across pages ahead of time. These concerns were dismissed as premature optimization. A few months later we had 5 pages with the state being handled in 5 different ways, and being synced in different ways between each page, complete with if statements, sometimes passing state through URLs, sometimes through local storage, sometimes through session, sometimes through JWT data, generally through a combo of several of them. Then we'd end up with confusing redirect loops for certain edge cases, state getting overwritten, etc.. We spend weeks fixing these bugs, and, eventually, weeks refactoring to manage state in a simpler way. These bugs often got caught by customers, drawing us away from feature delivery that was critical for demos to large customers.

All of that could have been avoided by spending 1 day thinking a little harder and planning for the future.

It ultimately boils down to a couple of assumptions that people like to make. (1) Engineers know nothing about the domain; they can never predict what will be needed. That might be true in a large company with obscure domain-specific things, for engineers who work far away from the day-to-day, but sometimes the engineers know exactly what's going to come up. (2) You can hill-climb your way into an optimal program implementation. You can get to local maxima this way, but there are regular ways that programs grow based on how the business is growing, and you can predict certain places where you will soon hit diminishing returns for current implementations. As long as you're up front about it and double-check your assumptions about the way the business is growing (and hence the application), I think there are ample places where you actually are going to need it.

>I can give you an abundance of examples. We were creating a page that was going to use state in a certain way. I was trying to insist that we address the way state will be handled across pages ahead of time. These concerns were dismissed as premature optimization. A few months later we had 5 pages with the state being handled in 5 different ways.

The right time to address this was probably a bit at a time after the 1st, 2nd and 3rd pages. Certainly not before the 1st and definitely not after the 5th.

>All of that could have been avoided by spending 1 day thinking a little harder and planning for the future.

The reason why you try as hard as possible to avoid planning for the future is because it's really hard to predict the future. Moreover humans have an inbuilt bias towards thinking we are better than we are at it (hence the gambling industry).

Refactoring as soon as possible after the fact will always produce better designs than up front planning for this reason.

>there are regular ways that programs grow based on how the business is growing and you can predict certain places

This is the kind of phrase that makes alarm bells go off in my head that somebody SHOULD be following YAGNI and isn't.

If it's defaults in a well-worn framework that doesn't railroad you, then fine, but anything more than that: red flags all around.

Rule of 3 is often correct: the first time, just do it; the second time, consider whether it'll very likely happen a third time; and when the third time happens, it's darn well time to do it!

HOWEVER, this only works if you have the agency at an organization to allocate time for doing something. Conversely, when you're in an organization whose management doesn't understand technical debt (or is fine with it because it just means more consulting hours), it's absolutely the correct choice to stall and/or fix things "prematurely" (if you can see what product they're trying to create without being told), because otherwise you'll be left holding the shit-can of duplicated crap down the line, getting knuckled because things aren't going fast enough due to technical debt.

The problem comes from the emotional pressure to finish tickets quickly - this can be external pressure but it can also be internal.

In that case the temptation to close the ticket and skip the refactoring step can be too great.

If you're begging the PM for time to refactor, you're doing it wrong.

The good old "We have other priorities right now and lack of resources"

Preventing fires will never be a priority. Not even if you smell smoke.

I’ve had tons of times where YAGNI has bitten teams at a FAANG. It’s been responsible for re-orgs as products have to pivot to meet the goals that were dismissed but turned out to be needed.

I was creating a very important demo once; features I had said were important were classified as YAGNI. Leadership eventually saw that we couldn't deliver without said features. YAGNI bit those teams in the butt.

These things happen all the time internally to companies, but get ironed out internally as well.

It all depends on what the I is in the YAGNI. I have seen development be parallelized where the same work is being done again and again by different developers in different ways because YAGN { maybe 2-3 days of upfront architecture and design for a 6-month project }. This results in bugs and maintenance nightmares. This was before unit tests were common, though, so maybe unit tests may have saved it. But surely it was slower to develop that way.

But tautologically you can't take YAGNI too far, if the "YAGN" part is actually true :-). But that is always under debate.

It certainly feels that way. 2 or 3 days of up-front architecture and design with hindsight is always better than 2 or 3 days of up-front design in reality, but of course you don't have that hindsight when you start.

I've had to do up-front design on multiple projects and it always results in overengineering - we focused on things that didn't matter, designed things that were inappropriate, etc.

I'd always rather take those 3 days and redistribute them as extra refactoring time.

I don’t know what is going on with this article. The first half is a maybe reasonable description of a common way for certain kinds of contracts to go wrong. But obviously lots of software doesn’t get developed in this sort of arms-length way. I would say that imaginary problems (as the author defines them) cause failed projects by consultants/contractors.

I find the rest of the article to be bizarre. The discussion around retail banking software seems unacceptably incurious and a very likely incorrect diagnosis of the cause of the problems (it basically stoops to an ‘I could do that in a weekend’ level of criticism[1]). It then transitions to a screed about Goldman Sachs which is, as far as I can tell, irrelevant (Goldman do very little retail banking; their software development will be quite different to that done for retail banking), and then some description of how the author thinks (all?) large companies are (mis)run. I don’t know if Goldman was meant to be a prototype for this model of company management but it seems like a particularly strange example (in particular, they still will have some remnants from the culture of being a partnership, so they’ll be run somewhat differently from other big investment banks).

I found the second half did not ring true. I’m sure software projects fail at big companies (including retail banks, Goldman Sachs, other investment banks, tech companies, and so on), but I don’t find the reasons given in the article convincing, to the extent that I think that section could have been written by someone who had only ever worked at very small companies. But maybe it’s just me, and most companies are so obviously, terribly wrong in these ways that no one even bothers to write about them, and so I only see the subtle ways they go wrong, like projects dying off due to management acting in bad faith or rewarding people for things that aren’t actually so good for the company.

If you’re interested in better discourse around certain kinds of bureaucracy, look into moral mazes.

[1] generally ‘I could do that in a weekend’ is code for ‘I could do some minimum thing that doesn’t actually solve whatever the important problems are in a weekend’

The second part of the article makes it clear that the author has never worked in online banking (I have), and possibly any other complex domain.

> Have you ever heard about those three web engineers who figured out that secure online banking is actually quite an easy problem to solve?

> The storage and transfer of numbers is not a particularly hard problem.

These quotes are so incredibly disingenuous that make me question any advice OP has to offer.

First, banking is quite a complex domain, and its complexity increases exponentially with the kind of services that you offer.

Second, banking is a highly regulated industry, which makes everything way harder than "it should" be. In fact many "neobanks" have appeared in the last decade, and this is usually their biggest hurdle.

Third, online banking needs to deal with quite a few hard technical challenges. That's why the likes of Monzo, Starling or Revolut often give tech talks.

So no, imaginary requirements weren't the root cause of bad software when I worked in banking. A 20+ year old big ball of mud, inability to pay off any tech debt (unless you wanted to get literally yelled at in front of the entire team), flaky and severely insufficient tests, and a very toxic working environment were all causes of bad software.

Yeah I deliberately didn’t want to write that much about the retail banking stuff because I don’t know why it is the way it is (though there are a few reasons to guess). People would often give regulations/compliance as an excuse for e.g. not being able to set up an account online, but then the pandemic happened and somehow this stopped being such a problem. I feel like either those people were just not knowledgeable about the reasons, or they were rationalising business reasons for the bank not to do those things.

Yeah I agree the author got a bit dismissive about the inherent complexity of solving business problems on an ongoing basis. He even links to a Wikipedia article about Google and offhandedly claims that the problem of indexing the whole web was solved by a couple of guys. We all know Sergey and Larry created the original Pagerank algorithm, but it's farcical to believe that their original algorithm would have stood the test of time without input from hundreds of engineers who had to deal with the rapidly evolving web and all the ensuing SEO spam, ad scams, revenge porn, illegal content, international firewalls, international regulations, scaling their infrastructure to handle billions of requests, creating an ad network to support the endeavor, etc etc. That all cannot be done by two guys in a dorm room.
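For readers unfamiliar with what "the original Pagerank algorithm" actually was: a minimal, hypothetical sketch of the power-iteration idea behind it, assuming a toy link graph. This is emphatically not Google's implementation, which had to handle dangling pages, sparse web-scale matrices, and everything listed above.

```python
# Toy power-iteration sketch of the PageRank idea (illustrative only).
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution

    for _ in range(iterations):
        # every page gets a small baseline from the "random surfer" teleport
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # this toy version simply ignores dangling pages
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share  # each outlink passes an equal share
        rank = new_rank
    return rank

ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
# "c" ends up ranked highest: it is linked to by both "a" and "b"
```

The two-guys-in-a-dorm-room part really is this small; the hundreds-of-engineers part is everything this sketch leaves out.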

I'm sure Google as an org has accrued plenty of staff that are working on mild to non important tasks over the years, and I get where he's coming from, but reality is far more nuanced.

The second part might be summarized as “when technology starts to diverge from the business model, or vice versa, both become messy.”

This resonates, and one way to describe it is as an incentive problem. Someone whose incentives are tightly aligned with the business is going to solve the actual problem as simply and effectively as possible. Someone who is incentivized to build career capital and experience other than via impact (e.g. so they can get uplevelled, pass an external interview loop, etc) is much more likely to focus on unimportant hard problems and/or over engineer.

> Someone whose incentives are tightly aligned with the business is going to solve the actual problem as simply and effectively as possible.

Equity is entirely the answer for cutting through all the bullshit. At least in my head. I don't know how it plays in other people's minds, but mine sounds like: "If we ship and go live, I get x% of all profit moving forward in my personal scrooge mcduck money bin". Pretty big carrot. It's kind of like a time share in my own personal business, but I don't have much of the headache that goes along with running my own 100%.

This has some caveats, namely that equity in a 10k person org is often times not nearly as meaningful as equity in a 10 person org. Shipping your code 2 weeks early at Ford or Dell means what, exactly? If the code you are shipping is the business, then things are different. It also really helps if you care about the problem you are solving.

I'd say this - if the idea of direct equity/ownership doesn't get you excited about pushing simple & robust techniques, then you are completely in the wrong place. You should probably find a different problem space or industry to work in. Hollywood might be a better option if the notion of equity or straight ownership in the business still isn't enough to put your tech ego into a box.

>> Equity is entirely the answer for cutting through all the bullshit.

I agree for small companies which are largely founder owned. I think outside of that, equity doesn't do much because so much effort is put into obfuscating the value/share of the equity. If you can't see the cap table, and you can't see the preference overhang, the equity is as good as worthless. There is no discernible value for a fraction with no denominator.

I have a little bit of equity in the company that I work for now. It's super small and early stage, and still between me and the product decision there exists a Designer that reports to a CTO that reports to a CEO. For everything that I want to see done differently, I have to make a case that convinces all these stakeholders that it's the right way. Ultimately, equity or not, my job is to row the ship where the cap'n tells me.

Equity is the answer. I work in investment banking and we all get a share of firm profits; I'll often sideline small projects in favour of projects that I think will be more valuable to the org and increase my/our pay cheque come bonus time.

You hit the nail on the head. There are different motivations for different roles within the same company, sometimes those motivations clash internally, all the while each individual IS acting completely logically from their own unique perspectives.

RDD: Resume Driven Development

This is absolutely a thing, but I'd say there's a related option which is "Job Listing Driven Development". The more niche, dated, or specific your platform is, the harder it is to hire people onto the team who don't need months of on-the-job practice and training to be useful.

You see the most extreme versions of dangers of this in stories about governments and older companies having to pay insane salaries to bring FORTRAN or COBOL developers out of retirement to keep systems running. If you keep doing the simple solutions within the existing system, you risk creating a system so inbred that only the folks who built it can maintain it effectively.

For less extreme setups, it's still a balancing act to consider how much your unique and specific solution, the simple option for your company, starts closing you off from the larger hiring pools in more common technologies and patterns.

What's kind of funny is that MUMPS is equally as archaic and idiosyncratic as Fortran or Cobol, yet there are companies willing to put new hires through a bootcamp to make them productive. Are all the Fortran and Cobol companies too small to afford a month or three of training time on new devs?

As someone who maintains a large Fortran codebase actively maintained from the 50s, I can say with 100% confidence that syntax, compiler, and other tools aren't even 10% of getting up to speed. It's some of the worst code you will ever see. A lot of it predates "GOTO considered harmful." It also comes from an era where different common blocks and subroutines were moved into and out of memory using a custom virtual memory system.

The demand for Fortran/Cobol experience has nothing to do with training. We need to make sure you are masochistic enough to trudge through the sludge.

My guess would be that the entities short-sighted enough to still be using those languages in 2023 are also short-sighted enough not to preemptively hire juniors without the skillset and invest in the training to bring them up to speed.

In a large government IT department:

“I think we should use a Kubernetes cluster!”

“You’re joking, surely? This is a tiny web site with mostly static content!”

Next project:

“For this web app, I propose we use Kubernetes…”

I will take that!!!

Doing the sort of simple solutions to your specific job's actual problems can also be something that constrains your ability to work anywhere else. Often the best simple solution that's tightly integrated into your job's environment is something that is inconceivable as a good idea anywhere else. You're optimizing around other old decisions, good or bad. You're often correctly overfitting a solution to your specific problems.

I've often found myself having issues even updating my resume, because what I did for the last year at work is barely explainable to other people on my team, let alone to someone in HR at another company. Or the simpler explanation is something that sounds like I'm doing work barely more complex than an intern could have done. Which often isn't wrong, but the intern wouldn't know which simple work to do.

My years of experience in the company's stack and org is valuable to the company, and nontransferable elsewhere.

I've shared this problem over the last year-plus of job searching I've been doing.

And thus we will see the rise of the software solopreneur.

That's been a thing for 30 years. Entrepreneurship is HARD, and tech salaries are fat right now. I think we'll see a lot more software entrepreneurship when there's another recession.

Makes you wonder what the actual state of the industry is right now with thousands of layoffs, but then comments like this one. Probably it's a bifurcation and an uneven distribution of reality.

There were layoffs in the big tech companies, but the sector itself is strong. Still very low unemployment. They over-hired. It happens. It's been a relatively minor correction.

> Someone whose incentives are tightly aligned with the business is going to solve the actual problem as simply and effectively as possible.

On average, and depending on skill. Incentives are hugely important (probably the most important metric any manager could work on), but even they do not guarantee results. If you hire so many juniors that nobody is there to upskill them fast, you only get one lottery ticket per employee. Conversely, if you hire a bunch of geniuses and fail to give them incentives to work on realisable, useful problems together, you get two lottery tickets per employee at twice the price.

(This comment feels woefully incomplete. Does anyone know of good resources to learn more about incentive structures and how they relate to individual and company success? I feel like the problem is that incentive structures change massively when companies grow, so even for unicorns there's just a short sweet spot where we can actually learn how they are supposed to look.)

I don't think it's just an incentive problem. I know plenty of engineers doing premature optimization or scope creep in good faith.
