Why enterprise software is bloated (mailbox.my)
118 points by MailNerd on March 3, 2022 | 137 comments



An important factor is new manager syndrome.

You start with some new head honcho somewhere. A CxO, an Enterprise Architect,... These tend to swap every 3 to 5 years.

Head honcho sees horrible bloat, and decides to Act with some Master Plan. This entails buying some expensive software, deployed by a random external team, that will solve everything.

In practice, expensive software tends to barely work. Also, the deployers have no idea what the company is doing. But, anything that might be bad news is career ending, so things get deployed swiftly.

Then comes integration. External team chooses some integration point, probably somewhere in the last head honcho's expensive software, as that's the only thing where the design is not yet completely forgotten. There will be impedance mismatch, i.e. bad news, i.e. unspeakable. So people do something, anything to forcibly connect A to B.

Someone presses start, then the deployers run away in 2 weeks tops. All kinds of weird crimes are done by the new expensive software. Bad news is still not welcome, but things start to hurt more and more over the next few months. Staff, already overworked, does some quick and dirty fixes. Head honcho falls out of grace, a reorg destroys every shred of knowledge gathered in the exercise, and a new new head honcho floats to the top. The cycle starts again.


> the deployers run away in 2 weeks tops

Well, they probably didn't run away - they were probably only paid for two weeks tops. The only constant I've ever observed in 30 years of software development is that the people who make decisions think that saving a few thousand dollars in programmer salaries is worth having a business that nobody really understands, that operates at minimal efficiency, and generates unhappy customers. God forbid anybody ever treat highly educated programmers as competent professionals and equal partners in the business.


Didn't you get the memo? Profit is more important than life itself

/S


Absolutely. At root, this is the management version of xkcd: Standards (https://xkcd.com/927/). Possibly no human endeavor is immune to it without deliberate steps.


I have an idea now what happened at TSB.


In a lot of companies, feature development trumps optimizing, refactoring or removal of legacy code.

Dev: Hey Steve, I'm working on issue #4546, but it just occurred to me that if I could just refactor that one method in SuperFactory it'd make the code much cleaner and easier to reuse. Just a quick fix!

Manager: No. Work on #4546.

Dev: Sure, #4546 will be done soon, but it'd be a really easy fix; it just occurred to me yesterday that there's a better way to build things with SuperFactory.

Manager: No! We already closed that issue!

Dev: No problem. But I thought that now that I have some extra time until...

Manager: Look, Dave, it's working as intended, the solution was reviewed and accepted. I will not create another task. You'll take #7839 next!

[...]

Manager: Hey, Dave, I recall you had some ideas about SuperFactory. It's been acting up lately, they keep creating tickets.

Dev: Nope. None. All gone now.

Manager: But you had, right?

Dev: Yes, but I'd have to start digging in again and I don't have time for that.

Manager: Oh, ok, you're right.


I know this is just an anecdote, but a good developer wouldn't ask the manager to approve every small refactoring or expect them to understand the importance of "one method in SuperFactory". They would have instead made a judgement call and taken the responsibility of doing the quick fix.


> wouldn't ask the manager to approve every small refactoring

It becomes an issue if it takes more than a day. Scrum, Kanban, RUP, XP, waterfall - whatever "methodology" they say they're following, it boils down to "tell me how long this is going to take and I'll check to see how close what you said was to the time it took". If you can make a change in an hour, sure. If it takes a day, it's going to break your "commitment".


Except in XP the developer will refactor before and after implementing the feature, the customer doesn’t get to say how things get done, and there is no “manager” role.

Not to say that people don’t operate completely differently and call it XP. That’s always a problem. But it isn’t “No True Scotsman”, it’s literally just not following the recipe and expecting the cake at the end.


Or... Management could choose not to run their software development organisation with the kind of micromanagement strategy that requires everything to be allocated in units of one day or less. It's another red flag that has become disturbingly common in the industry, and it suggests managers more interested in "visibility" and "metrics" than in actually doing a good job, sustainably, by trusting their technical people to do theirs.


I love how "story points" aren't supposed to represent days, yet conveniently everyone winds up with 10-ish over their 2 week sprint, and it is totally micromanaged. Meanwhile none of the non-technical "product people" are ever asked to account for their time like this.


> Management could choose not

Sure, they could, but they never have.


No, that would be preposterous.


While I agree with you, some managers micromanage and freak out over any change that isn't directly related to doing or fixing X. They will reject the change, and it becomes painful to keep working like that because they hold it against you. Toxic workplaces exist.


And in a market where developer skills are in incredibly high demand and a new job can be lined up in a couple of weeks, such workplaces should cease to exist due to lack of developers.


To the average hackernews, lucrative tech jobs may grow on trees, but that is not the case for everyone, even those with technical skill. It usually takes me between three and six months to get a new job.


Yes, the general rule for me is if I see something completely whack on the ticket I'm working on, I'll clean it up as long as I know there won't be collateral damage. The problem comes when these systems become so complex and so old and the people working on them don't really know what changes will affect other systems down the chain.


In Enterprise software, everything is a Chesterton's Fence.

Or a Chekhov's Gun.

The problem is that you can almost never tell which is which. And sometimes they're both.


IME in SAFe® Agile™ world, developers are fully empowered to not take decisions on things which are the domain of enterprise architects, product managers, or leadership calls.


There are companies in which the process is organized in such a way that taking responsibility and doing a quick fix can put your career at risk.


Or throw it on the backlog to track it…


You could be a good developer with a manager who gets angry if it’s discovered you took that kind of initiative.


But this happens because of previous painful experiences like the following:

Dev: Hey Steve, I'm working on issue #2312, but it just occurred to me that if I could just refactor that one method in SuperFactory it'd make the code much cleaner and easier to reuse. Just a quick fix!

Manager: Huh. How much more work is it? If you can time-box it to half a day then go ahead.

Dev: Great, it should take just a couple of hours!

The change is merged and deployed, and several weeks go by...

Data Science: Hi team, we are wondering if you know if anything changed in this module in the past couple of weeks. The numbers from non-English speaking domains tanked.

Manager: Uh oh

Dev: Uh oh

Data Science: We just look at data in aggregate with significance only reached with weeks of collection. But at this point it looks like we lost millions of dollars.

Manager: oh shit

Dev: faints

... followed by weeks of post-mortems, meetings, process improvements, if not outright terminations.


Allowing technical debt to accumulate like that will, with near 100% certainty, damage your development activity sooner or later. The only exception is if you've already damaged it critically in some other way.

A contrived example, where you might lose significant money because you made a generally good change but it had a bug, and that bug was somehow missed by your entire review and testing process, and the consequences of that bug went unnoticed for a long time in production, and then the result was disastrous, isn't really a very compelling counter-argument.

If you subsequently hold weeks of post-mortems, meetings, process improvements and outright terminations, the person who made the otherwise useful change that had a bug should be among the last to get called out, somewhere after the entire management chain who utterly failed to competently organise critical development and operations activities, everyone responsible for QA who couldn't spot such a critical problem early, and everyone involved in the data science who ran such a hazardous experiment without taking better precautions around validity.


Well. It's not a counter-argument to anything; it's an illustration of how we end up with bad codebases, and why specifically in big enterprises. The incentives are set up in exactly the way that leads to it, particularly by making it expensive to clean up tech debt.


Fair enough if that was the point you wanted to make, though in that case I'd argue that the kind of disaster scenario you described isn't specific to big enterprises but to disastrously bad software development organisations. A lot of small organisations think they're operating at enterprise scale and make the same kinds of mistakes!


That has nothing to do with giving devs leeway.

If you work without tests/QA, you are shooting from the hip. The scenario above, as such, should not happen. If it ties in with million-dollar processes, even more so. What you are saying is "We don't trust our process, so we do as little as possible outside authorised tasks"; instead, you should fix the process. If this led to post-mortems and process improvements, as in QA/dev process, not simply bug-fixes, then why is the process not improving and/or better trusted now?

Also, the original task is described as a "refactor", so the numbers should not be affected - why was it not just a refactor?


> If this led to post-mortems and process improvements, as in QA/dev process, not simply bug-fixes, then why is the process not improving and/or better trusted now?

IME things do improve after taking those steps. But with large code bases there are a lot of layers of products over time and it's difficult (and expensive) to make sure everything is covered. And it's also a moving target.

> Also, the original task is described as a "refactor", so the numbers should not be affected - why was it not just a refactor?

IME the most useful tech debt interventions are where some legacy module is deleted or some unused code retired. Unfortunately those are often not provably without side effects and sometimes even with a diligent investigation side effects can be missed, especially when the components involved are old and original creators and product managers have left the company.

At the end of the day cleaning up tech debt has non-zero risk, guaranteed cost, and very often, in the eyes of management, negligible reward. So on average it grows and thus enterprise codebases are born.


Testing is a thing.


Ok, but now your "time boxed" half-day refactor is two man-weeks of testing, bug fixes, back and forth, etc.


If you planned your refactor AND testing and bugfixes to take half a day, and it is going to go way over, you (a) tell your boss your estimate was off and the refactoring needs to be a separate task, and (b) revert it.


Technical Manager: Dev, it's only a quick fix because there are no test cases for that area, and you're not adding any before refactoring, so you only *think* you're not breaking anything.
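
Which is why the sane version of the quick fix starts by pinning current behavior first (a minimal sketch; SuperFactory and its API are the thread's hypothetical example, not a real library):

    # test_superfactory.py - characterization test written BEFORE refactoring
    from superfactory import SuperFactory  # hypothetical module from the example

    def test_build_preserves_current_behavior():
        factory = SuperFactory(config={"locale": "en"})
        widget = factory.build("widget")
        # Assert whatever the method does *today*, warts and all;
        # the refactor is only safe while this stays green.
        assert widget.name == "widget"
        assert widget.locale == "en"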


Left a job in 2020 over exactly that. Our customers loved when we could fix/build features in a week, but after 18 months the bloat made it impossible to ship a new feature in the same month. I actually went behind my manager's back and did a full re-write (~25k sloc down to ~15k) at the 8 month mark, but the 2nd time he put his foot down and said absolutely not. I left within 6 months of that discussion.


What do you mean by the 2nd time? Did you rewrite the whole thing again some amount of time later?


For me the issue is rarely refactoring a single method, but trying to improve features given new tools and use cases. I'm always met with the same issue that it may cause regression and we would need to allocate more QA time. Really hampers improving old code.


Managers should never drive technical decision making. This is one of the key offenses some companies continue to commit; they let managers think that they're still engineers. A manager, imo, needs to be paired with an engineer that has the same scope and adjacent level as the manager so that things like this don't happen.


Dev: If we just rewrite everything in Ruby on Rails and Coffeescript, our productivity will go through the roof after a short time investment.

Manager: Sounds great, let me know when you are finished.


Is this how software development in corporations really works? I'm not a software developer, but I've contributed to some large open source projects and I thought it would be similar. Maybe with the difference that the issues would be raised by a Q/A team or other people in the company instead of random people across the internet.


I’m sure that some teams are like that but it is not universal


I think the issue here is that the manager is approving a method refactoring in SuperFactory. And I mean, I work on dysfunctional enterprise company software. My point here is that this is not how things get bad, because this is not how things work, at least not where I have been.


The article didn't mention what I have observed as the worst cause of bloat - what I call the "New Toys" problem.

For example, you need a process to export data every 4 hours, with some visibility of success and failures. I could have written a cron job/scheduled task in 4 hours and been done. What I found instead was Kafka with node.js and couch.db. Yes, for that one export. Not only that, they were paying monthly for the Kafka. Soooooo, it got replaced.
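
For reference, the right-sized version is roughly this (a minimal sketch; the paths, log file, and export logic are hypothetical stand-ins):

    #!/usr/bin/env python3
    # export_job.py - standalone export, run by cron every 4 hours:
    #   0 */4 * * *  /usr/bin/python3 /opt/jobs/export_job.py
    import logging
    import shutil
    import sys

    logging.basicConfig(
        filename="/var/log/export_job.log",
        format="%(asctime)s %(levelname)s %(message)s",
        level=logging.INFO,
    )

    def main():
        try:
            # stand-in for the actual export logic
            shutil.copy("/data/export/latest.csv", "/mnt/dest/latest.csv")
            logging.info("export succeeded")
        except Exception:
            logging.exception("export failed")
            sys.exit(1)  # nonzero exit so a wrapper or monitor can flag the failure

    if __name__ == "__main__":
        main()

Success/failure visibility is one log file and an exit code: nothing to patch, nothing to pay a broker vendor for.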

I've seen this a lot more in the last 10 years. I call them "stitcher programmers." They are near useless at providing solutions unless they can stitch together some byzantine Frankenstein's monster from existing tech, usually with extreme overkill. The front end is the worst, with React and thousands of dependencies for simple forms.

Right-sizing a solution is not in their vocabulary.


Look, I obviously don't know the specifics, but I think you swung too far to the other side, and you're also just stitching together existing components. A single box running cron is quick and easy, but I would be wary of hinging anything non-dev facing on that.

* You can't fearlessly patch the cron box without knowing when the jobs run; I don't want any special cases. Also, how would I even know when you have jobs scheduled looking at a fleet -- read your out-of-date docs? Ew, no. So a messaging system (guess the devs were already familiar with Kafka) is necessary to process the jobs across multiple nodes.

* Individual nodes are unreliable and you don't have any durable persistent storage. Most people don't like storing data in Kafka even though it's possible so they went with a database.

* Cron doesn't have any mechanism to give you a history of jobs that isn't built into your script or parsing logs. Ditto with failure notifications. You also can't reprocess failed jobs except manually. Guess you just wait another 4 hours?

* You now also can't duplicate the server because they're both going to try and do the export every 4 hours and step on one another. Woop, you made a system where an assumed safe operation "adding something" breaks stuff.

This kind of thing is a nightmare if you already have queues and a database (because why would you stand up another thing), but if you had none of that to begin with then yeah... makes total sense.

Like, this is the reality of ops and running something "production ready": it's a lot of big ole complex HA platform so that you can run your 5 lines of code and not have to worry about any of the hard problems like availability, retrying, resource contention, timing, data loss, locking.


You're proving my point. None of those bullets are even considerations - they're problems in search of money. The initial solution was implemented by someone who didn't bother to notice everything else on this system was using scheduled tasks (or services and Quartz) to do their work. They didn't bother to notice that there was already Serilog set up to do reporting to another system that the customer was already using to monitor processes. They didn't bother to look for the local storage that was available. Instead they purchased a Kafka instance at the company's cost and threw tech at it. Oh, and then quit and got a different job, before writing any docs.

My solution integrated with current tech, didn't have any of the problems outlined in those bullet points, and has required zero maintenance or updates for over 4 years. I don't want to even think about how many Kafka and Couch releases there have been since then.


This kind of developer is always trying to bring something into the stack with an ulterior motive of being able to put that new piece of tech into their resume as another bullet point.

It's a HUGE problem. The devs who do it also tend to be the exact opposite of K.I.S.S. They're always looking for the most complicated and obtuse way of doing anything to make themselves look smart.


> stitch together some byzantine Frankenstein's monster from existing tech,

Or, you could stitch it together from new tech.

It wasn't long ago I was surprised to discover a Hadoop installation at my current professional setting, complete with Zookeepers and everything. After some sleuthwork (which wasn't that bad, as they keep everything in git; in a future when everything is API calls to some Kubernetes clusters, we're all smoked) it turned out that all it does is move data between two systems.

Literally. It could be replaced with a periodic rsync run. Instead someone has to maintain and monitor a whole suite of software, prepare, test and run upgrades and so on.
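
The replacement really can be that small; something in the spirit of this (hypothetical paths, assuming the two systems already share SSH access):

    # crontab entry: mirror system A's outbox to system B every 15 minutes
    */15 * * * *  rsync -az --delete /srv/system-a/outbox/ syncuser@system-b:/srv/inbox/

One line to read, one line to monitor, nothing to upgrade.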


I don't think "stitching" is the problem per se; taking off-the-shelf parts can be a good thing or a bad thing depending on the parts and how they're used. I mean, is there any difference between stitching together cron and a couple shell scripts vs stitching together kafka and node, other than how many components are and how well suited they are to the job and environment? It sounds like your problem is with people who have new-shiny syndrome and try to force their latest interest into their work regardless of whether it makes sense.


His "solutions" is to work in "pockets of the industry" that don't have "bloat". (Among one of them is use more cloud services. Which makes me wonder if he's seen some of the engineering disasters people are churning out. This makes me want to just summarily dismiss the whole piece.)

The real solution is to systematically address technical debt as part of your development process. Did that feature that someone swore up and down would be your money maker three years ago not pan out? Delete. Is that abstraction leaky and not that useful? Delete. Are there code paths that never get used and are more or less untested? Delete or assert they never happen.


My suspicion is that any "cloud service" that isn't some proof-of-concept one-off is also filled to the brim with bloat, you just don't see it as a customer.

"Delete" is the solution, but that is a tool they don't want to use, because every feature somewhere is used by some paying customer, who will complain loudly - and may even move to a competitor. Often the pain of updating and the pain of migrating are similar.


> My suspicion is that any "cloud service" that isn't some proof-of-concept one-off is also filled to the brim with bloat, you just don't see it as a customer.

Hey, encapsulation is a decent mitigation to complexity issues.


uh yeah until you need to modify it, then it becomes another complexity issue. Or in the case of 'microservices', now you have to solve a distributed systems problem.


No, you're still better off having clearly defined boundaries. I will grant that multiple systems can have emergent effects that are more painful to debug, but to the degree that things can actually be made independent I think it helps.


For that real solution you need to have a manager who approves spending time on removing unused cruft. Otherwise you'll end up smuggling refactoring inside other tasks, which is terrible for quality control.


I think this is really the crux of the matter. More generally, STABLE management that drives software quality. Even if you're lucky enough to be part of a team that starts off with good management, reorgs, mergers, and turnover are just way too frequent to provide the year-over-year improvements needed for really good, efficient code.


It's easy to blame people's wishes for this (and it IS true), but the major point is this:

An enterprise NEEDS ALL.

I work in this niche (mostly for small companies), and what I have seen over the past 25+ years is that even the most "small" of all companies have a HUGE array of needs: apps, data to work with, laws to comply with, demands from suppliers AND their customers (that RECURSIVELY add bloat!), and ancient, current, modern, and next-gen tech in their stacks.

It's like a developer that, instead of being only "LAMP" + editor, is one that must:

- Support MySQL, Postgres, SQL Server, SQLite, FoxPro, Firebase, DBISAM, Redis, and on terrible days Mongo

- Handle N variations of CSV and the like, JSON, TOML, YAML, .ini, binary formats...

- Talk to COBOL, web services (SOAP, JSON, RPC, GraphQL), pipes of commands

- Deal with Python, Java, Swift, Obj-C, .NET, C, C++, C#, F#, Go, Rust, JS, TypeScript, CSS, HTML

- Test on Chrome, Safari, Firefox, IE (OLD IE)

- Windows, Linux, Mac, iOS, Android, Web

- bash, cmd, powershell

- VS Studio toolchain, LLVM toolchain, OSX toolchain, Android toolchain

- Docker, normal deploy, CI

- Sublime, VS Code, Xcode, IntelliJ, Notepad++, Notepad (as-is), nano and in very bad days, vim

- Have Hardware: M1 laptop, Lenovo Windows machine, iPhone, iPad, Android phone

WHO can be the lunatic that deals with all of this?

ME.

(if you wanna understand why I'm so grumpy about C, C++, JS, Android, the state of shells, terminals, RDBMSs, NoSQL, now you know)

I don't mean I fully deep-dive into ALL of this, but I need to at least HAVE it, or install it, or touch it here and there. It's like I say:

No matter how SMALL a "company" is, it

NEEDS ALL.


> Test on Chrome, Safari, Firefox, IE (OLD IE)

I have seen the opposite: Test only on IE6, and when it turns out that the stuff doesn't work on any other browser and it's too much work to fix everything, make IE6 the only supported browser.


Why don’t devs and management try to consciously trim that list?


For more than one enterprise I've seen, your mergers and acquisitions are bringing in new technologies (and incompatible systems that need integration) faster than you can trim the list.


I WANNA. You bet on it.

But I need to install that stuff so I can run the integration, do the tests, see how it works, add a little code for it, etc.

For example, I need to install https://www.elevatesoft.com/products?category=dbisam because just ONE of my customers uses it.

Then I need to add ODBC to OSX.

Then I need to install FreePascal and make a DLL with it so I can decode just ONE field that wholly depends on the binary representation that exists there. More fun? That field is where the "price" of the product is stored.

Why the heck those developers decided to dump an unportable binary, from a certain version of FPC, into that field, hell if I know...


Because the precise costs and benefits of each technology are invisible to them.


Because every customer needs different features. No one wants to make their workflow follow what the software dictates; they want the software to support the workflow the business uses.


True. People only miss the fact that ERP systems, especially, are to a huge extent a collection of process best practices. By sticking as much as possible to the out-of-the-box solution, not only do release changes or integrations become easier, but you also get a business process benchmark and consulting on top, automatically. Benefiting from all that does require more depth of thought than most upper managers I ever encountered are willing to give, or capable of.


A lot of variation in business processes doesn't have any real justification; it's often just "that's how we've always done it".

Edit: There can be risks when an ERP supplier fundamentally fails to understand your business model - SAP managed to do this with a former employer of mine which led them to be shown the door.


> A lot of variation in business processes doesn't have any real justification; it's often just "that's how we've always done it".

Far too often.

Sometimes we fight a bit to convince our clients to actually take advantage of what a modern system can do for them.

Sometimes we have to give in and watch the horror unfold as they use our services to hackily re-implement something that is basically a copy of an old desktop app that was a copy of an old mainframe app that was just an automated version of a paper-based system…


I always find it amusing when people talk shit about SAP and brag about giving them the boot. SAP already got your money... big deal, you cut off a few years of support fees. SAP grows and grows despite seemingly everybody saying how terrible the product is. My hat is off to them for finding (and heavily compensating) some truly talented salespeople who can consistently resell their turd to CEOs.


I should have mentioned that they got the boot during the sales process - so they hadn't won the work yet.

It wasn't a criticism of the SAP product - just that the sales team constantly got a basic thing incorrect in a rather dogmatic way.


Then it was the right way to select a provider. Realizing that during implementation is a recipe for disaster, whether it is SAP or someone else.


I mostly agree. However, "out-of-the-box solutions" often ALSO mean "one-size-fits-all".

There's flexibility in ERPs, for sure, but it's not necessarily accessible to the people that use it.

I once had to implement a "screen" to physically divert a list of devices with certain serial numbers. Basically: send a notification whenever one of these devices showed up at a loading dock.

The most logical way to do it, which I naively considered first, was to set up something in one of the ERP modules which is specifically focused on "material movement". After some tedious email exchanges and a phone call it turned out that it was, in fact, "possible". The catch was that it would take WEEKS and involve an expensive Oracle consultant requisition.

I put an end to that and instead had a junior implement the solution in a downstream application (which we actually develop, own and control), in about an hour. It was worth it even though it meant the stuff left the loading dock and ingressed into the building, requiring some additional physical "moving-around" hassles.


> ERP systems, especially, are to a huge extent a collection of process best practices

I would actually go further and say that enterprise ERPs are full-fledged development environments, often with custom languages and code editors. Quality varies wildly, though.

The fact that some of them are able to handle the processes of some companies out of the box is almost accidental.


Well, between those there is also a collection of bad practices that the entire industry shares for some reason (the existence of the ERP being the most likely culprit).

It's not all gain.


This is, I think, why simple things are so difficult to do in large corporations and everything costs about 10-100x more than you would expect. Often, it would be much easier to change the processes and workflows than the software.


A funny thing happens when you start asking real money for features... suddenly it turns out that customers can make changes in their workflows.


When a private citizen buys software, the question is "does it do enough, for the price, for me to buy it?"

When an enterprise procurement office buys software, the question is "is there anything it does NOT do that will cause someone to fire me for having purchased it?"


I don't know that's necessarily the cause, but if you've ever seen an RFP for enterprise software, it's obvious why bloat happens. There are hundreds of "does it do X, does it do Y..." questions. And in order to win the bid you have to answer yes, even if it doesn't make a lick of sense.

Like I worked on a big server side Java application and when offline became a thing one of the questions was "Does it support offline usage?". Obviously, since it's a gigantic Java server side application, the answer should be no. And there's no reason that a customer should want to run it or any application like it offline. But the question is there and if you answer "no" then you don't get the sale. So they built some half assed terrible bit of offline functionality that no one in their right mind should ever use. Now they can answer "yes" to the dumb question and get more sales.


And as someone else mentioned, it often doesn't go through procurement - there is yet another merger or acquisition, and now you support two incompatible piles of enterprise software.


In my not very long but significant experience, it often boils down to having no intention to allocate time/resources for optimization (just convince the customer their hardware is obsolete, and possibly be the one who scores a sale of new hardware) and contracts with other parties that force developers to keep old software modules/libraries in place.

Been there, done that; there was this 3rd party software module "Y" that exchanged data packets over the network between "X" and "Z", supposedly doing complex operations, and my company was developing both X and Z. I was on the Z developers team. We had all the protocol documentation, so although I didn't have the sources of that Y module, I could see that it was just passing around packets without performing any functions that couldn't be easily integrated into either X or Z; I mean really 2 hours of work in a government project that lasted years. So I asked some colleagues about the opportunity, and they confirmed that getting rid of that module was a no-no: by contract we were forced to partner with that company, therefore we had to keep their module that essentially did nothing but pass packets (and money) around. I recall my immediate thought was "software bureaucracy", which probably boosted even further my decision to run only Open Source software wherever I can.


The article strikes me as being dismissive about accessibility, merely describing it as a legal requirement and source of bloat. For a lot of software, I'd say it's a moral imperative, and that's why it's a legal requirement. I'm afraid that the treatment in this article will encourage developers to ignore accessibility in useful applications that could in principle be accessible, and these applications will then become required for jobs, education, etc., thus erecting new barriers for disabled people. But now that someone has directly linked accessibility to bloat, I guess I should make sure that my own in-development solution for cross-platform GUI accessibility [1] can never be described as bloated.

Edit to add: The article did also mention that accessibility is a must-have feature, though I can't remember now if that bit was there in the original. Sorry if it was.

[1]: https://github.com/AccessKit/accesskit


A big difference is that most enterprise software is designed for a limited set of internal users, not the general public. If you have 1000 employees and one of them is blind, then the software used by the department where that person works needs to be accessible to blind people as a reasonable accommodation, but you don't need to make e.g. a random accounting tool used by five specific users accommodate something that none of those five people have. The same applies to physical work, such as in a manufacturing environment - you may have to adapt a particular workbench/tools so that they are usable from a wheelchair (e.g. lowering certain things), but you don't have to redesign all your workplaces for that; that would be bad, as it would make them less usable for their current users.

Also, "accessibility" is not a single feature; different aspects of accessibility - for different needs - are quite different, unrelated, separate features.


> you don't need to make e.g. a random accounting tool used by five specific users accommodate something that none of those five people have

What if that team then wants to hire a sixth person, and one of the qualified candidates is blind (or has some other disability that's relevant for software)? If accessibility isn't the default, it's too easy to pass on qualified candidates in a category where many struggle to find work.


Yeah, I agree. I don't think a11y should go away even for open-source software, unless it's lightly used or just a toy project.


One only needs to look at the language he suggested, Javascript, to see an example of incredible bloat, with the 5GiB of modules it brings in to print "hello, world"... but despite that, I don't think that's the root cause.

It has everything to do with poor management of features and lack of leadership. Software _should_ be developed as features are needed. Average humans are absolute crap at predicting things like markets or what will actually be used in production. This translates to developers wasting tons of time on features that provide little value. Lack of leadership and communication of a clear business vision contributes to a panic mentality of "if we don't do it now, we'll never get the chance to do it!", and edge cases are chased down, delaying the move to production.


This is just pure hyperbole. The modular system of Javascript is designed explicitly to ensure features can be added when needed. And I'm not sure where you breathlessly pulled 5GB of modules from to write "Hello World", but I bet it's dark and smelly.


> but I bet it's dark and smelly.

I'm going to steal that from you and use it for a rainy day if you don't mind. :)


The question is: what is bloat? Is everything that depends on Electron bloated? I'd say yes, and there's a lot of non-enterprise software that uses Electron. So I'd rather say that what is special about enterprise software is UI/UX trade-offs that are unimaginable for consumer software. And these trade-offs are often (not always) reasonable.


Everything that isn't based on the binary lambda calculus is bloated https://justine.lol/lambda/


> What is Software Bloat?
>
> Bloated software uses much more resources than necessary to do its job, the most important resources being CPU time, memory, I/O, and disk space. Furthermore, software with lots of features can also be called bloated since it is harder to learn and use, and naturally requires more resources than software with only the desired features would.


Because checking off features matters more than usability when a manager is the one buying it instead of the person using it.

Next question


>There are two types of baby outfits. The first is targeted at people buying gifts. It's irresistible on the rack. It has no fewer than 18 buttons. At least 3 people are needed to get a screaming baby into it. It's worn once, so you can send a photo to the gifter, then discarded.

>Other baby outfits are meant for parents. They’re marked "Easy On, Easy Off" or some such, and they really mean it. Zippers aren't easy enough so they fasten using MAGNETS. A busy parent (i.e. a parent) can change an outfit in 5 seconds, one handed, before rushing to work.

>The point is, some products are sold directly to the end user, and are forced to prioritize usability. Other products are sold to an intermediary whose concerns are typically different from the user's needs. Such products don't HAVE to end up as unusable garbage, but usually do.

https://twitter.com/random_walker/status/1182635589604171776


I thought the example was convincing until I got a child myself. The hard part is getting the legs and arms in. The 10 buttons are easy.


> The 10 buttons are easy.

Yes, until you reach the 9th buttonhole, but you've already used up all the buttons.


Status: CLOSED

User error.

Really though, where does the last button hole go? You count buttons - 10. You count holes - 10. You put the buttons in the holes - one is no longer 10.


Usually another thread is to blame.


Perfection is the enemy of good.


As a parent who remembers all too vividly dressing an uncooperative baby, I've often wondered if designers of baby clothes have ever seen a live baby in person or are just going off random pictures on the internet.


I have a four month old and was inspired to buy one outfit with magnets because of this tweet. It works fairly well, but is no better than a single zipper outfit and is much more expensive.


Magnets are dangerous for children if they swallow them. Unless it is a magnetic strip, I would prefer buttons.


The magnets are well embedded in the clothes so there's little danger there.

The reality is it doesn't do much more or less than normal onesies but costs an insane amount more.

Also sticks to the side of the washing machine lol.


That sounds handy. Parent sticks the baby on the side of the washing machine and can now load it using two hands.


Because businesses, in order to match compliance rules for their business types, size of business, etc., often have lots of rules and processes in place about buying things like software, it becomes really beneficial for a manager if they can buy all the related functionality their business needs (especially functionality that is required by law) in one place.

Furthermore, considering that some things will often be very complicated to do because of legal burdens, it makes sense that one piece of software for doing invoicing handles all your invoicing needs across all the markets you operate in. And suddenly, when that happens, you might get a more complicated piece of software than if you had bought 10 different pieces of software, each supporting the standard needed for a particular market.


> Because checking off features matters more than usability when a manager is the one buying it instead of the person using it

As someone that buys a lot of enterprise software and runs a lot of software tenders (particularly for enterprise ERP and WMS) - I would say picking software which matches user/organisational requirements is actually the most important thing (which is checking off the right features).

Absolutely this is more important than usability, which comes secondary to meeting the requirements (what good is usability if I can't get it to do what I want?).

Most botched software tenders I have seen happened because the company purchasing the software wasn't clear on their requirements (i.e. the features they needed) and then bought a software which did not match what they required and then need to somehow just 'make it work'.

Big ticklists of generic features aren't useful though, but ultimately if you are purchasing an ERP and need to be able to put stock into bins within it, and it doesn't have this feature but it is really usable and built with really great architecture, it's still not going to work.


This plus I think legal requirements between various countries. Lots of logic to meet each country's requirements.

Then each customer wants their own specific crazy workflow in the product after they "rent" the software. An endless circle of bloat increasing. Just look at SAP (and Oracle).

Some of this is touched upon in the article.


My experience is that the manager seldom knows enough to judge usability - vs. merely not giving a crap. Especially not usability when in production - on well-loaded servers, via the lower-end workstations and network connections given to the day-to-day workers, etc.

And generally the most important feature for a manager to check off is the one never mentioned (at least on the customer-organization end) - "How does the Shiny Newness, Dog & Pony Show, and Buzzword Parade offered by this software make me feel about myself?"


It's easy to say this (and it's part of the problem) but it ignores why software built in-house is often just as bad, or worse. The article addresses this, pretty early on in fact.


> why software built in-house is often just as bad, or worse

In my experience this is largely due to PoC (or otherwise “temporary”) tools getting used long-term without refactoring, and growing further PoC features over time that compound the problem.

This also affects production services, if management lets (or demands!) PoC code get released before it is really ready.


And not just the managers; in some industries (e.g. healthcare) it's also government and insurance requirements demanding constant changes and configurability.


Plus: Parkinson's law for software. As PG wrote: "Software has bloated to consume the resources available."


Another thing a lot of people don't really consider is that enterprise software needs to answer a ton of other requirements that Joe Rando's Cloud App doesn't really care about.

Employers in the US can face consequences if they use software that doesn't have accessibility features, thus not complying with the ADA.[1] Clients of theirs can also sue.[2]

Some countries have multilingual requirements[3], not to mention the markets an enterprise loses out on by not having translations in dozens of other languages.

Enterprise software often has to be built and sold with the idea of scalability ingrained. Flexibility in scale here is where you get a lot of sales, e.g.: "WidgetSys can scale to support 1 million concurrent users". Some customers legitimately need that. This can also help deal with demand spikes, such as tax season, school registration "season", etc.

Some places have strict compliance requirements that don't make sense for most businesses. Most businesses probably don't care about FIPS 140-2, but some do because of who their customers are. Because of this, many pieces of enterprise software require incredibly fine-grained control over audit data.

Some require the ability to connect to LDAP, AD, and OAuth sources (sometimes all three at the same place).

Just this list represents features that can impact "bloat" but a "lean" app often doesn't have. This can get a business in trouble.

Now, the next question is obviously: doesn't it make sense to have a user-specific build of the software? For example, if I don't need AD/LDAP access, can't I just get an OAuth-only version? This doesn't work for a few reasons, but let's assume it could. Now you have X versions * Y features worth of SKUs for your software.

It's also worth noting that while a lot of this stuff is huge on disk (relatively speaking), it often doesn't actually need to load and run all of that code. A lot of well-designed enterprise software essentially enables/disables functionality in a modular way because of it. Vendors also tend to have many SKUs, segmented along lines that make sense, e.g.: SQL Server Express edition is a pared-down SQL Server Standard missing a bunch of these features.
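
That modular enable/disable often boils down to gating imports on configuration, so disabled features never even load (a minimal sketch; the feature names and module paths are hypothetical):

    # feature_loader.py - load only the modules this edition/SKU enables
    import importlib

    FEATURE_MODULES = {
        "oauth": "auth.oauth_backend",
        "ldap": "auth.ldap_backend",
        "ad": "auth.ad_backend",
        "audit": "compliance.audit_log",
    }

    def load_enabled(enabled_features):
        """Import only the features this deployment is licensed/configured for."""
        loaded = {}
        for name in enabled_features:
            loaded[name] = importlib.import_module(FEATURE_MODULES[name])
        return loaded

    # e.g. an OAuth-only customer: the LDAP/AD code stays on disk, never in memory
    modules = load_enabled({"oauth", "audit"})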

An easy way to look at this from the American perspective is to preface every feature tick box with "will the company be sued if we don't support ..." -- at least that's been my experience.

[1]: https://www.ada.gov/civil_penalties_2014.htm

[2]: https://www.nad.org/2016/09/06/the-nad-and-hulu-reach-agreem...

[3]: https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=26164


this.


> Sir Tony Hoare famously said: “Premature optimization is the root of all evil.”

Er, wasn't it Donald Knuth? https://wiki.c2.com/?PrematureOptimization


It was, but the author can be forgiven for mixing them up. Knuth did say it originally, but Hoare repeated it (in writing), properly attributing it to Knuth. Knuth then read Hoare's quote, missed the attribution, forgot that he was the one who said it, and repeated it again in writing, mis-attributing it to Hoare.



Not totally sure.

https://ubiquity.acm.org/article.cfm?id=1513451

> Every programmer with a few years' experience or education has heard the phrase "premature optimization is the root of all evil." This famous quote by Sir Tony Hoare (popularized by Donald Knuth) has become a best practice among software engineers.


My experience is that enterprise software is not really engineering problem-solving; it's more like organizational-relationship glue. And it's a fragile, moving target, which makes developers feel less accomplished.


Funding models are important too.

If only “capabilities” or “functionality” gets engineering budget, then improvements without a business sponsor don’t get done. It’s also easier to find funding for a project that has a great dollar benefit to one payer than an improvement with a broader and fuzzier purview. Sometimes this happens under the guise of “turning tech spending from fixed cost to variable.” (The irony is this usually increases costs)

The best orgs get around this by putting a tax on technology budgets: "No matter what you spend, we need one dollar out of 5 more to pay for the sins and debt of our predecessors." Or call it Kaizen or Continuous Improvement if you need. If they don't trust you to spend that money wisely, they shouldn't trust you to build anything new either.


I have witnessed quite a few enterprise software projects. My take on it is that quite often the decisions that should've been made at the tech level are made by company politicians instead, combined with a general tendency to hire at a mediocre level.


Easy. A marginal purchase of software in an enterprise is expensive in human terms. Adding a feature and notching the price up is trivial.

Case in point, I manage an org with a $200M budget. I have a budget line for a $90 software item for some VIP somewhere that has to be justified/validated annually by somebody. That $90 probably costs us $500.

But… We subscribe to Office. Adding Teams required zero effort, because of the bundling effect. Slack at the time was going through a PoC/vetting process, but why bother if I have an 80% product. The pitch was that Teams was "free"… although mysteriously the price of Office was revised upward.


And this is why Atlassian is buying everyone in the world. If you're already set up as a vendor with a product in a company, adding a line item is easy.


Hardware is quite cheap. Even with bloated software, companies are making a ton of money. Software developers like to optimise resources but never have enough time for it, as the business requirements themselves keep changing. Time to market and the engineering resources required are the bottlenecks here.

The only part of software optimisation which should be a focus early on is cross-cutting concerns like logging, monitoring, authentication, etc. Business logic should be separated from these and optimised only when required. Consuming more resources is better than rewriting business logic and fixing bugs.


> Hardware is quite cheap.

Enterprise has embraced a cloud-first strategy.

Suddenly, throwing hardware at a problem becomes throwing cash at the cloud.


You are missing the broader point here. If something is in production and burning cash, it can be replaced with optimised code if the systems are decoupled. But forcing optimisation earlier will make your developers less impactful and lose interest in the project or job. Most business logic code is updated frequently due to changes in requirements, and spending hundreds of hours on a feature which will be used by just a small set of users is a bad investment. This strategy can help you release some features faster, get them A/B tested, and once you have scaled enough you can start optimising the individual decoupled systems.


> Hardware is quite cheap.

Joe Armstrong (Erlang) always joked that if you wanted your program to run faster, just wait a few years and the computing power will increase. I found it to be a profound statement about shrugging about optimizing things needlessly unless it was absolutely necessary.


Yes, and now Windows 10 is slow even with an SSD boot drive.


> Usually only the minimal amount of work will be done to get an integration to work, skipping the refactoring or code changes to internal data structures or algorithms, so the “new” product will be the sum of its parts also from a resource consumption point of view.

This describes Atlassian stuff just perfectly.

> As the code base gets large, bugs will creep in and become harder to fix.

And this is why automated tests - even if it's "just" end-to-end functional tests - are so important. But most managers aren't willing to give developers the extra budget to set up proper test cases...


That's an easy one: follow the money. Enterprise developers are not rewarded for making beautiful, fast, and user friendly software. Enterprise developers are rewarded for getting functionality needed by the business done NOW. Nobody making money decisions cares about the actual users. The fact that it takes forever to do even simple things in the software is irrelevant. As long as the business gets done.


The problem in Enterprise is that the environment is designed to fracture the Engineering team as much as possible so that it isn't capable of collective bargaining. This is the point of most de facto Agile SCRUM applications: if I, a manager, can't get the estimate I want from you, I can get it from someone else, or I can coerce you into the estimate I want indirectly by shoving the burndown chart, or any other weaponized metric, into your face to train you into providing the estimates and deliverables I so desire, which have nothing to do with efficiency.

Because of this leverage, technical debt quickly stacks up as everyone is policing themselves and others to not do the unanimously agreed upon 'right thing' to deliver a more cohesive software infrastructure; my god is cohesion the least likely property of enterprise stacks, at least in my experience, hence: all the local heroes, the mounds of manual testing and lack of automation, the 'everything-at-once-per-quarter' releases instead of CI, the distinct aggregation of 'flags' over parsing data structures, etc.

It is impossible that people are working in such circumstances and are just entirely unaware that things could be better; there is an immense amount of pressure from all sides to essentially 'shut up and dribble'. But that also facilitates an environment where individuals or teams are just implementing whatever in their own little kingdom so long as it gets in before the sprint is done. My org alone has three different ways of doing the same exact thing amongst three different teams.

Engineering teams should be reviewing the product roadmap as an independent entity and deliberating on how to approach that collectively. A Director of Engineering is the tie breaker. Estimates come from the team, not individuals, or even individual teams.


Reminder that the Computer Languages Benchmark Game itself recommends against using it to draw general conclusions about performance of languages in real-world apps: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

> We are profoundly uninterested in claims that these measurements, of a few tiny programs, somehow define the relative performance of programming languages aka Which programming language is fastest?

Now, I challenge you to find a major bloated software where the main source of overhead is Python interpretation. IME it's always something else, like the surrounding UI framework.

The Office suite is written in C++ and is badly bloated, obviously not because of language execution overhead but because of technical debt, which, if that's any indication, recommends against using low-level languages.
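
One honest way to settle the challenge for a given app is to profile it before blaming the interpreter (a minimal sketch using the standard library's cProfile; your_app.main is a placeholder for the program under test):

    # profile_app.py - find out where the time actually goes
    import cProfile
    import pstats

    from your_app import main  # placeholder entry point

    cProfile.run("main()", "profile.out")
    stats = pstats.Stats("profile.out")
    # If the top cumulative entries are framework, UI, or I/O calls rather
    # than your own Python functions, interpretation isn't the bottleneck.
    stats.sort_stats("cumulative").print_stats(20)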


> I challenge you to find a major bloated software where the main source of overhead is Python interpretation

In every piece of non-trivial software I’ve written in python, the main source of overhead has been Python interpretation.

I don’t think it’d be hard at all to meet your challenge.


If it wouldn't be hard at all, we're left to wonder why you don't seem to have tried.


Performing a legitimate performance benchmark of even one piece of enterprise Python software — much less across a representative survey — is well beyond the reasonable scope of a comment board reply.


So, too hard.


Back when I was consulting, it always surprised me how similarly sized firms would have massive differences in the size of their IT departments. One company might have 100 developers; then you would go down the road to their competitor and find they only needed 10.

Needless to say more staff seemed to correlate with more bloat.


The first point tracks with my experience building in-house software for a publicly traded Enterprise healthcare company. I think in our case it was a byproduct of having one person on the business side who "owned" every application. So anything they thought was important, they wanted as a feature in "their" app. It didn't matter if that feature existed in this other app that they also used often. If it wasn't in "their" app, they would complain - usually to the VP or SVP level - and let it trickle down the multiple layers until it reached development. It got so bad that when we were talking about what we were working on, we'd say "today I'm working on the $NAME web app and tomorrow on the $OTHER_NAME mobile app."


Because features trump everything else when it comes to enterprise software. Teams are often working in parallel on features and so will duplicate work until the whole thing becomes a giant mess.


Working in enterprise software: very curious how many entrenched enterprise solutions don’t have enterprise features (SSO, etc) and look/feel 20 years old. Spend a lot of time migrating to replacement platforms that do. Entrenched players do nothing to upgrade basic enterprise feature set. Rinse, repeat.

Every customer replacing the legacy solutions is doing the work that the legacy org won’t do. The purpose of a solution is to build once the thing all your customers need. This dynamic is the exact inverse of that.


I think the better question is "why [is] enterprise software so bad?" I'm sure we can come up with a million reasons too.

For examples, just click around the admin interfaces to O365, GApps, or AWS, and I'm sure you can find many annoying issues and/or bugs.


Enterprise software is bloated? What?

Have you looked at some the VM footprints of the text editors people are using?


ItsHonestWork.jpg


I think this is quite unfair to Enterprise development. There are several business reasons not to use off-the-shelf scripts or libraries, namely licensing, governance, security, and support, all of which add perpetual costs and further 3rd party governance to any project, especially those in large Enterprises. Very rarely does any business prepare for this at the planning stage, even more so when the software is siloed within a business unit.

In terms of using "bloated platforms" like Javascript and Python, I get a whiff of superiority from OP as there simply is no reason to build for size or speed unless it is part of the deliverable feature set. Nobody in their right mind would be writing serverless functions in C++/Rust or a Windows form to enter timesheet information (UX is about design, not platform, and is always seen as a secondary cost). If you are determined to use C++/Rust before a project has started then you're under the spell/threat of rockstar employees without a care for long term support.

The problematic Enterprise Applications I've worked on all had the same things in common: a bad maintenance plan, or an expectation that the software would last decades without change. It was never "this should have been written in xyz"; it's almost always that the domain knowledge has gone, and alongside it, the source code.

If you're in a business expecting to exist decades from now, using a moving target to host your systems (like any OS), you had better look at the long game as well as the short, and factor in versioning, source control, and inevitable bit rot. It's not about how old it looks, or how fast it could be.

Ultimately, there is a massive desire for businesses to offload development entirely via no-code platforms like PowerApps and absolutely no desire to make code that requires more expensive technical hires to maintain, or add more process to manage.

Finally, I've been coming across a lot of developers pining for the "old days" where you could change things willy-nilly and release, without writing tests or having code reviews. These were the bad old days, and they're long gone. They got away with it because software was not as ubiquitous, the internet wasn't around to spread 0-day vulnerabilities, and there was very little oversight.



