It's also hilarious that right after referencing the dependency problem in the JS ecosystem the OP then goes and advocates splitting up your library into a bunch of mini libraries. That's exactly how we got into this mess in the first place.
Also, computers have definitely gotten faster. There was a time period during which this was not the case, but particularly the switch to SSDs resulted in a massive jump in computer responsiveness. (The previous jump happened when we no longer needed to wait for dial-up to establish a connection.)
I recently buddied up with someone who has a CS degree to try and make a desktop app, as a side project. I asked if he is familiar with MariaDB and whether or not he's used C++ ODBC.
Our discussion quickly turned into choosing libraries / existing code. He uses Spring and Hibernate and was bewildered by the concept of tying columns to application variables "manually". After I told him I have some CentOS servers where we could put a shared database, he bought a Windows Server because he "needs a GUI". He added, "Ideally, after it's set up, we'll never even have to remote in to it." For the actual desktop application, he wants to use Electron.
It seems there are two distinct branches of computing emerging - one where performance matters, and one where it doesn't. Performance will always matter in places like the stock market or on IBM mainframes. In the consumer-facing world, all that seems to matter is perceived performance. Slack and Discord seem fast when you watch how quickly a new message pops up on your laptop screen after you send it on your phone. They seem egregiously slow when you open up task manager and see just how much overhead the Chromium-based engine adds -- but most people won't care.
Applications made for "consumers" tend to be made like a cheap car - corners are cut, the end result isn't pretty, but things in that category make up 90% of ordinary use cases. Slack isn't meant to be open on your work machine while you're compiling code, in the same way a Prius isn't meant to chauffeur top-level executives. It doesn't mean that the Prius is bad or unimportant - it will do far more for more people than the entire lineup of many luxury car brands.
But I am damn sure that I'd rather be engineering a Bentley than a Prius.
Edit: In the metaphor, luxury and performance are sacrificed, not efficiency. I probably should've used Fiat Chrysler, but unlike Fiat Chrysler, consumer-facing software has its place. I just don't enjoy working on it myself.
And the thing is that connectivity has little shit to do with the layers of abstraction to, idk, render a button on a screen. Even developers, as your anecdote shows, lack the creativity to imagine better things! Christ, if that doesn't show something is wrong then what will?
I started my career as a software engineer writing C, then C++ working on desktop software. Things made sense back then, and that was just 10 years ago.
I’ve tried contributing features to two Electron-based apps. I gave up on both. I just can’t make sense of it.
I’ve sat down countless times over the last decade telling myself I’m going to teach myself this damn frontend web stuff. I gave up every time. It’s ridiculous.
Young folks who have only been exposed to modern garbage have no idea how good it was before the web took over.
I often wonder how this came to be. How we went from well-documented, efficient, sound APIs and libraries to the monstrosity that we have now. I have nothing to offer though.
Charity Majors said about reliability: "Nines don’t matter if users aren’t happy."
I'd propose an alternative view here: "Bloat doesn't matter if users are happy."
Today bloat doesn’t matter, tomorrow you’re a fat dinosaur.
You should not outsource your main product. And if your product is a chat application that is supposed to be fast and always running in the background, then you’re outsourcing your main area of expertise.
The unfortunate corollary being that the prevalence of developers who treat it as if it doesn't makes it darned hard to find one who treats it as if it does when you need one.
> "Applications made for "consumers" tend to be made like a cheap car"
Not really a great analogy. The Prius has a reputation for quality and reliability; it's often seen used in taxi fleets, where those qualities are a must. Luxury cars often have high maintenance requirements and poor reliability since cost is not a concern to their owners.
This is absolutely a nit and does not disqualify your overarching statement at all, but generally these attributes are due to the constraints of technology-- for example, fitting large, high performance engines necessarily increases the frequency of maintenance intervals as well as the difficulty (and therefore, cost) of maintenance. If luxury car manufacturers could make cars with the handling and performance characteristics of their flagship models but with the maintenance costs and intervals of a Toyota Corolla, they absolutely would.
Luxury models are used by the manufacturer as a proving ground for the tech they're going to filter down to the mass market models next. Luxury model customers are relatively price-insensitive but want to be able to show off shiny features, so they get the exclusive new tech that's not been tried at scale yet. Thus the unreliability. It's not the technical constraints, it's the maturity of the technology.
(Also, the point about the person "with a CS degree" - it's completely irrelevant that they had a CS degree, AFAICT. It mostly comes down to their prior experience, and they clearly didn't have a lot of it. That's mostly orthogonal to education.
I also wouldn't pick C++ for a CRUD app, not without very specific requirements. I'd choose C# if there's no existing web app, but if there's an existing web app, or there's going to be one, Electron makes a lot of sense.)
The thing is - there is no need to address performance if it does not matter. That would lead you to Knuth:
"The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming." - Knuth
So I think there is actually a dial: if performance matters, you turn it one way and optimize as necessary; if it doesn't, move on to the next project that is fighting to get out of your head.
These are also very cheap devices, but they rarely have the resources to run something as bloated as an Electron app.
Meant by whom?
Because I certainly keep Slack open while I am compiling. And it seems to me that it is meant to be kept open in the background while you do other stuff.
When I first installed Win8 on that laptop I was so impressed I even took a video of it (and yes, it seems like they kept that performance in 8, though sadly by 8.1 it became slower).
I'd rather windows uses my SSD to the fullest, than that it makes concessions to spinning disks.
Windows 10 is for newer hardware anyway, it runs slow on older hardware.
In 2018 Microsoft disbanded the Windows team and moved engineering efforts to its cloud and AI teams. Windows 10 is on life support by engineers who aren't intimately familiar with its codebase. Performance and usability will only degrade in this situation and the specifications of newer hardware don't excuse constant deterioration in performance.
Windows 10 is probably slow on HDDs because it's updating a load of UWP apps in the background, resmon.exe disk tab will show what's going on.
The point is that a dependency is a dependency, and making software needlessly complex in any way is bad.
tl;dr: the complaint is equally valid when applied to build tools, which are themselves also software and becoming bloated.
I believe all pieces of technology have similar dependency graphs.
In the same way, the "dependencies" required to build my hardware aren't counted in that number. What counts is the extra stuff that isn't already part of the running system.
Eventually every problem goes to log n time, best case. The log n factor shows up over and over, from constraints on on-chip cache to communication delays to adding numbers of arbitrary size. We make a lot of problems look like they are O(1) until the moment they don't fit into one machine word, one cache line, one cache, into memory, onto one machine, onto one switch, into one rack, into one room, into one data center, into one zip code, onto one planet.
If we can't solve the problem for all customers, we dodge, pick a smaller more lucrative problem that only works for a subset of our customers, and then pretend like we can solve any problem we want to, we just didn't want to solve that problem.
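To put a rough number on that log n factor, here's a purely illustrative Python sketch; the capacities are made-up orders of magnitude, not measurements:

    import math

    # The log-n factor: the bits needed to address n items (also the depth of a
    # balanced search over them) keeps growing with each jump in scale.
    scales = [
        ("one cache line (8-byte items)", 8),
        ("one 32 KB L1 cache", 4_000),
        ("16 GB of RAM", 2_000_000_000),
        ("a 10 TB disk", 1_250_000_000_000),
        ("a whole datacenter", 10**15),
    ]

    for name, n in scales:
        print(f"{name:>30}: ~{math.log2(n):.0f} bits / lookup steps")

The "constant" just hides until the next level of the hierarchy stops fitting.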
I'd like to watch the actual talk. I hope you can remember the name.
Probably one of my all time favorite performance talks.
Funny! Depressing :(
It's the same freaking fallacy I see again and again on here - It's simple, easy to understand, and dead fucking wrong.
The vast majority of the complexity you're dealing with in modern computing comes from three sources, in order of impact:
1. Networking. It turns out there are real and hard limits on how fast we can pass data around over copper wires. Fiber is better, but you just literally cannot move faster than light, and latency is a big deal when the items you're accessing and changing live somewhere else (and basically everything of value does live somewhere else, or thinks you are "somewhere else").
2. Security. This is a direct consequence of number 1. When everything is connected, everything is connected. You can't just lock the lab door and call it a day now.
3. Compatibility. This is a direct consequence of both 1 and 2. Value is a consequence of compatibility (this is why a good chunk of you on here still support IE, even though you don't want to). We have more devices, of more kinds than ever before.
We have more people, with more use cases than ever before. There is value in being as compatible as possible with all those devices and people.
All those devices are connected in ways designed to keep them compatible, but also secure. It turns out this is not an easy task.
If you'd like to go wank off over how fast your pre-network, unsecured, unsupported, inaccessible and manually configured systems are, be my guest (oh, and I hope you read English...). The rest of us will continue to produce items of value.
I don't see how having IDEs implemented in browsers has anything to do with security, the speed of light or compatibility. It's just the lack of constraints allowed by advances in computer hardware.
Most software is written with no performance considerations in mind at first and the performance issues are addressed only when they become visible. However, if there is abundant memory available, why bother?
This isn't a compatibility issue? We've seen about 8-16 branches of the write-once, run-everywhere tree over the past 25 years; I'm not sure how that isn't seen as a constraint on programmers. JWT, Swing, Web, Cordova, Qt, React/<web front-end> Native, Xamarin, Electron, Flutter and even quirky ones like Toga have all attempted to solve this problem. The only unifying thread has been that managers follow greedy algorithms and choose the lowest common denominator platforms whenever possible. Qt, the Java tools and Xamarin at least can't be lumped into the inefficient language bucket, though the UX is just awful. Other than hardware drivers, it's hard to think of a clearer example of compatibility constraints.
... and for the most part they have. You can write your app right now and the only thing you need to worry about is screen size. If you use bootstrap, even this is mostly solved. Your app is accessible on Windows, Linux and Mac; Chromebooks and Tablets; iPhones, Android and even the one Symbian user. Of course it's not perfect yet, there are edge cases and you cannot do everything, but let's not act like things have gotten worse.
> The only unifying thread has been that managers follow greedy algorithms and choose the lowest common denominator platforms whenever possible.
Yes, I agree. But for nearly every use case, it's good enough. Take HN as an example: Does it need anything more?
Of course, if you need access to specific hardware, you'll have to go deeper. But if you do not, it would simply be you taking the lowest common denominator. And I'd argue that the framework probably did a more thorough search.
Java is definitely not a good example of a memory-efficient language when compared to its non-GC alternatives.
It all comes down to economics: software is written as inefficiently as possible as long as it does its job and is not hindered by this, and this is actually the crux of "Wirth's law".
You realize that this is a good thing, right?
If there is latency caused by networking in an application, it's usually unneeded roundtrips caused by inefficient programmers or inefficient layers in the software stack.
It's insane how much overhead there is.
At the speed of light in a vacuum, New York to London is 19ms. That's the physical limit of what a network could ever hope to accomplish, but in reality in fiber it's apparently about 28ms.
At 60fps we need to present a new set of pixels to the screen every 16ms.
So in the time that a packet goes in one direction to Europe we could have presented almost two full frames of pixels.
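Back-of-the-envelope, in Python (the distance and the fiber slowdown factor are approximations):

    # Speed-of-light sanity check for the numbers above.
    C_KM_PER_S = 299_792          # speed of light in vacuum
    NY_LONDON_KM = 5_570          # rough great-circle distance

    vacuum_ms = NY_LONDON_KM / C_KM_PER_S * 1000      # ~18.6 ms one way
    fiber_ms = vacuum_ms * 1.5                        # light in fiber is ~2/3 c, so ~28 ms
    frame_ms = 1000 / 60                              # ~16.7 ms per frame at 60 fps

    print(f"{vacuum_ms:.1f} ms vacuum, {fiber_ms:.1f} ms fiber, "
          f"{fiber_ms / frame_ms:.1f} frames per one-way trip")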
John Carmack is much smarter than me, so I can't believe he meant this literally, or it's been taken out of context.
The network is definitely a bottleneck.
“The bad performance on the Sony is due to poor software engineering. Some TV features, like motion interpolation, require buffering at least one frame, and may benefit from more. Other features, like floating menus, format conversions, content protection, and so on, could be implemented in a streaming manner, but the easy way out is to just buffer between each subsystem, which can pile up to a half dozen frames in some systems.”
What he’s talking about is (in his opinion) unnecessary buffering that causes a delay in the pixel actually appearing on screen.
He blames the driver and the display’s internal software, so his argument could be made out to support OP, but I think the situation is a bit more complex than Wirth’s law here.
I'm well aware of this kind of bloating (guess I should have said something in my comment to avoid the downvotes...) but it still doesn't support the OP's comment.
Network latency is not only high, but there's literally nothing that can be done about it -- because of the speed of light!
(I am somewhat lucky to work on something where we can optimize away much of the crap you're talking about here, as we own the whole package.)
More context: typical network latency is good enough that video games rendered on a remote server are becoming practical, or at least salable. "Network latency is high" is a vague enough statement that it could mean anything, but if being able to render video games remotely and stream the output to the client doesn't make you reconsider, I question what you would ever consider network latency that's not too high.
The kicker with these games, that perhaps speaks to the original, crazy post by horsawlarway, is that it's normal for a TV set and set of controls to introduce a lot more latency than the network connection itself: the network is not the bottleneck. There's a good excuse for the latency involved in networking, rooted in physics, but this is not true for the hardware and the software stack.
Yes I can't control what TV manufacturers do, that is a wild card. But the quote taken out of context is more than a little inflammatory -- the network has a hard physical limit that the local device does not.
FWIW I'm just as dissatisfied with software bloat as the next person. Retro computing is one of my hobbies, and the latency measurements there are something to be envious of.
ChromeOS generally does better in latency measurements than other platforms; much effort was made there, much of it by people I know.
If you have a completely black screen and want to draw a tiny white circle in the middle of it, it will take your processor or GPU less than a microsecond to change the bytes. Less than 16 milliseconds later (an average of 8ms, actually) the updated bytes will be flowing through the HDMI wires and into the monitor. There they will be stored into a local buffer. If there is a mismatch between the image format and the LCD panel then it will probably be copied to a second buffer. Some DMA hardware will then send the bytes to the drivers for the LCD and the light going through those pixels will change. All that can easily add up to 50ms or more.
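Stacked up, with purely illustrative numbers (not measurements of any particular display):

    # Hypothetical latency budget for one pixel change reaching the glass.
    stages_ms = {
        "GPU rewrites the framebuffer":        0.001,
        "wait for next scanout (avg, 60 Hz)":  8.3,
        "scanout over HDMI":                   16.7,
        "monitor input buffer":                16.7,
        "format-conversion buffer":            16.7,
        "LCD pixel response":                  5.0,
    }
    print(f"total: ~{sum(stages_ms.values()):.0f} ms")   # ~60 ms with these guesses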
Windows being able to run Win32 applications isn't what makes Windows Calc slow, nor is anything else you mentioned. You do not need a supercomputer to download currency conversion rates (which, btw, is bloat by itself and could be handled by a dedicated program instead of being shoved into Calc). Even the ability to download a CSV or whatever from a remote server via HTTPS (all you'd really need to make such an update) is something the OS can provide to every application and keep secure for everyone. And you know what, this sort of functionality is something Windows has provided ever since Win98, but how many people use it vs bundling their own?
Android is pretty inconsistent too, the same series of steps are instant or hang the UI for 5 seconds depending on the current moon phase or something I haven't worked out yet.
A. Backwards compatibility considerations. Early mistakes in the design of an interface are often perpetuated indefinitely. Supporting them takes code. Projects that add many new features also accumulate many missteps. Over time the weight of these past adaptations starts to prevent future adaptation.
B. Churn in personnel. If a project lasts long enough early contributors eventually leave and are replaced by new ones. The new ones have holes in their knowledge of the codebase, all the different facilities provided, the reasons why design decisions were made just so. Peter Naur pointed out back in 1985 (http://akkartik.name/naur.pdf) the odd fact that no matter how much documentation we write, we can't seem to help newcomers understand our programs without talking to the original authors. In-person interactive conversations tend to be a precious resource; there are only so many of them newcomers can have before they need to start contributing to a project, and there's only so much bandwidth the old hands have to review changes for unnecessary complexity or over-engineering. Personnel churn is a lossy process; every generation of programmers on a project tends to know less about it, and to be less in control of it.
C. Vestigial features. Even after accounting for compatibility considerations, projects past a certain age often have features that can be removed. However, such simplification rarely happens because of the risk of regressions. We forget precisely why we did what we did, and that forces us to choose between reintroducing regressions or continuing to cargo-cult old solutions long after they've become unnecessary.
I can't really rebut anything you say. I'm going to keep it on my radar as I go about my project. It's currently pre-network, unsecured and inaccessible. And will always be manually configured, for reasons described in the link. But it's supported, for what that's worth.
1. Networking isn't the number one source of complexity, since local apps can be bloated, slow, buggy, and complex.
2. Security is not a consequence of networking. A non-networked system can be insecure, and a networked system can be secure. Security is more normally viewed in the context of confidentiality, integrity, and availability along with the concepts of authentication, authorization, and accountability.
3. Compatibility is not a consequence of either networking or security.
Your comment makes it seem that you believe complexity comes from having to adhere to constraints. That attitude is the issue! Not wanting to adhere to constraints is why software is bloated, complex, full of thousands of components, slow, etc.
Security is improved by reducing the number of disparate components.
I have a theory that this has something to do with incentives: competition can be good for the company that is able to grab another's territory, but on average it's usually a loss. That's especially true from the perspective of shareholders, most of whom don't just hold one stock but many. Some competition is necessary, but too much is inefficient. If you own stock in both Apple and Google, you're happy with them each having their own group of loyal phone buyers. Spending more money than necessary to try to attract the other company's customers to switch is almost a pure loss from the point of view of the shareholder. Companies might not explicitly collude to divide markets (which would be against anti-trust law), but they're still subject to the disapproval of their shareholders if they rock the boat too much. Sometimes I think it makes more sense to think of all publicly-traded companies as being a sort of loosely-structured single corporation rather than truly separate competing entities.
So we have all these different software ecosystems that all these different businesses have built their respective walled gardens around, but software developers suffer because they can't just write to one platform and have their application be portable. Instead, they need a totally different application if they want to be accessible to desktop PC users, Android, iPhone, the web, tablets, servers, enterprise customers, HPC, cloud, game consoles and so on. Sure those users all have different needs, but surely they could have quite a bit more cross-platform consistency and common tools and standards than what we have now.
This isn't it. It's literally what is creating the problem.
Small libraries mean duplication, a lack of shared abstractions and dependency hell.
The reason garbage collection was such a huge win is that before that, every C library shipped with its own completely different way of managing memory; that's a nightmare.
We need bigger libraries providing better abstractions that are reused throughout and written to allow composition and dead-code elimination ('tree shaking' for the JS crowd) to work well. So you can opt in to functionality you need and opt out of functionality you don't need.
Communities working on big libraries can actually do good release planning, backwards compatibility and timely deprecations. React is an example of a library that does a really good job of this. All the smaller libraries in the JS ecosystem are the cause of the pain in modern JS development.
I found that the standard python library has enormous functionality that makes it possible to solve many problems in a self-contained fashion.
This sort of solution will give you a big toolbox spread out in front of you. You will be less likely to re-invent the wheel. Many eyes on the standard library may lead to optimizations used by everyone.
And it makes it possible to share effectively. You can talk about functions with others using common terms. You can share your code and it will work in other environments. Education can teach in an unambiguous fashion.
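A toy example of the kind of self-contained solution I mean -- standard library only, nothing to install ("notes.txt" is a made-up input file):

    # Count word frequencies, persist them, and report -- all from the stdlib.
    import json
    import sqlite3
    from collections import Counter
    from pathlib import Path

    words = Path("notes.txt").read_text().split()          # hypothetical input file
    top = Counter(w.lower() for w in words).most_common(5)

    db = sqlite3.connect("stats.db")                        # embedded database, no server
    db.execute("CREATE TABLE IF NOT EXISTS freq (word TEXT, n INTEGER)")
    db.executemany("INSERT INTO freq VALUES (?, ?)", top)
    db.commit()

    print(json.dumps(dict(top), indent=2))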
Oh, please. There are some good points in the article but that hyperbole was unnecessary. I was programming in 1995, and nobody stopped thinking about the quality of their software.
It is also interesting to note that Knuth does not do libraries.
Well sir, you have me by ten years, and believe me I respect that extra decade. That said, I think you're falling into the trap many of us greybeards tumble into as we get older: the things we cared about and focused on become, in our minds, the right and true perspective that has been lost or disregarded by younger practitioners. It's really not very different from "When I was your age I walked to school every day, uphill, both ways!" Or maybe it's just me.
Yes, there are far fewer programmers hand optimizing assembly code, and yes every programmer now cobbles together applications by reusing code written by other programmers, some of which is very good, and some of which is not. But if programmers were still spending their time optimizing low level loops and eschewing any code that they did not personally write and verify the beauty of, the world we have today would not exist. Instead of zooming with my family in the middle of a pandemic we'd be exchanging emails, or posting on a BBS to say "Hi!" There's obviously still a need and role for people who like to work at that level, and that kind of engineering remains fascinating (one of the reasons I love to read the linux kernel mailing list), but I don't bemoan the rise of high level languages, libraries, package management ecosystems, frameworks and the like. That stuff has given us the world we have today, and a few warts notwithstanding I still like it much better than the one we had.
None of what you seem to think about my rant represents what my position is. Read my other note in my thread about the NYT article concerning Knuth and Norvig’s commentary. There is a time today for total deep dive. There is skill involved to do this and wisdom when to do this.
There are folks who write with minimal libraries, cf. qmail.
What seems to be completely missing from today’s discourse about programming is something Dijkstra said about interrupts. Paraphrasing: “If you don’t see the code on the page in front of you, you will make mistakes.”
Take a look at modern Java. Levels of abstraction in use require serious deep dive to truly know what is going on. There is a famous Node package issue where code that wasn’t even on your computer crashed a swath of applications.
Quality in the context of the article means the code is pleasing to read, doesn’t crash unexpectedly, and doesn’t have side effects that you may only discover when Brian Krebs emails you about your customer’s data ending up in some remote online flea market.
I cringe today when programmers struggle with async processes in C or other languages, or async/await. We (Michael Whinihan and I) developed a dead-simple pattern using co-routines that vastly simplified interrupt driven programming. It is as if folks these days haven't read https://www.amazon.com/Operating-Principles-Prentice-Hall-Au...
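A minimal sketch of the general co-routine idea (not our exact pattern; Python generators, and the UART example is purely hypothetical): each handler is a coroutine that yields whenever it has to wait for the next interrupt, so its state lives in local variables instead of a hand-rolled state machine.

    def uart_reader():
        # A "handler" written as straight-line code; it suspends at each yield
        # until the interrupt routine resumes it with the next byte.
        buf = []
        while True:
            byte = yield
            if byte == b"\n":
                print("line:", b"".join(buf).decode())
                buf.clear()
            else:
                buf.append(byte)

    handler = uart_reader()
    next(handler)                  # prime the coroutine

    # The interrupt service routine just does handler.send(new_byte):
    for b in (b"h", b"i", b"\n"):
        handler.send(b)            # prints "line: hi"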
I did a significant fraction of the work to build this, I wasn't the smartest guy in the outfit either. (Probably the most smart-aleck.) Check out one of the team members: https://en.wikipedia.org/wiki/Fibonacci_nim.
None of us were experts when we started this. We figured all the parts out and made it work, reliably. We had on the order of single-digit hours of outages per year. This was before Tandem Computers was born.
Now serving the medical community puts some pretty stringent requirements on what you build and how you operate it.
The wonderful NY Times article https://www.nytimes.com/2018/12/17/science/donald-knuth-comp... talks about this, with a quote from Norvig:
In those early days, he worked close to the machine, writing “in the raw,” tinkering with the zeros and ones.
“Knuth made it clear that the system could actually be understood all the way down to the machine code level,” said Dr. Norvig. Nowadays, of course, with algorithms masterminding (and undermining) our very existence, the average programmer no longer has time to manipulate the binary muck, and works instead with hierarchies of abstraction, layers upon layers of code — and often with chains of code borrowed from code libraries. But an elite class of engineers occasionally still does the deep dive.
“Here at Google, sometimes we just throw stuff together,” Dr. Norvig said, during a meeting of the Google Trips team, in Mountain View, Calif. “But other times, if you’re serving billions of users, it’s important to do that efficiently. A 10-per-cent improvement in efficiency can work out to billions of dollars, and in order to get that last level of efficiency, you have to understand what’s going on all the way down.”
There were people who didn't care about software quality in 1960 and there will be people who do in 2060, so no matter what year he chose, a lot of people would dismiss any chosen year as hyperbole.
As time moves forward, and more "healthy" ecosystems grow up around languages (like unwanted mold or fungus always does) software is going to get slower and slower and slower. And saying that it's a problem will always be an "old man is shouting nonsense again" moment to most young developers.
There's always been good software and bad software.
It really took incredible hubris to sell Win95 as a huge step forward for the industry when it was so hugely inferior in quality to an OS that had been hacked together by a small team of best-in-world hardware and software designers ten years earlier.
But it did succeed in one area - which was doing a great job of lowering consumer expectations and making the most appalling carelessness and incompetence seem like a boon from the tech gods.
People spend time making software less bloated, when it's the number one problem they have. When hardware speed is taking care of making that only the #2 or #3 problem they have, then they will work on whatever the #1 problem is, meanwhile adding more software bloat.
When Moore's Law once and for all stops, due to some law of physics reason, then software bloat will become a priority. Until then, other things are.
It is a form of Induced Demand: https://en.wikipedia.org/wiki/Induced_demand
I don't want to say that performance does not matter at all - it does - but with hardware being as cheap as it is and developer time and time-to-market being as expensive, optimizing that last 500ms and 200MB out simply is not going to be worth it.
And let's not disregard the expense of performance optimization - you'll not only need a reference to benchmark and test against, but also spend a lot of time debugging and writing very platform-specific code with tons of edge cases. It's not like saving 2GB of memory comes for free.
Everything seems to have been optimized for the enterprise market.
I'm not trying to say performance doesn't matter - I'm using lower-powered devices myself - but development time is also a big factor for b2c.
"About 25 years ago, an interactive text editor could be designed with as little as 8,000 bytes of storage"
Such a text editor likely couldn't handle lowercase in English, let alone any other Latin-script language, let alone CJKV or bidirectional text. The bloat in software of '95 and of the present day is real, but there is no real effort to make an apples-to-apples comparison of what our expectations of software are, and that massively weakens the argument.
Parallel arguments can clearly be made for compilers etc.
A lot of these are less than 8K, all of them handle lowercase just fine, and I bet the majority of them will be fine with "high CP437" bytes (so other Latin languages).
Much of the capability of a modern tiny editor comes from the environment in which it is running; we just expect more from an editor now, and the developer expects more from their operating system.
What use would one million, highly functional, stand-alone 8K programs be? What about a hundred thousand 80K programs?
I'd love wholly new categories of software to just pop up. I think about what those might be. But it seems like actually they appear quite seldom. So what else is there to do but throw code at what people use every day, for marginal improvements in existing functionality.
Some of the improvements are marginal, but not all, from a developer perspective moving between editors that support everything from syntax highlighting, intellisense, refactoring etc, to then using an editor that has none of that is quite a shock.
There was no text editor. The code for these was entered on punched cards or tape. Somebody typed those cards from sources on paper, maybe on a bunch of K26-5994 forms.
Fragmented parts could be suckless.org, old cheap thinkpads from ebay, fast Linux distros, unix command line tools, retrocomputing, raspberry pi; all things with communities and fans who like a certain quality, simplicity and the good old days.
It turns out, people are perfectly happy to put up with slightly slower software for all of the other benefits we get from modern software: rapid development, rapid deployability, ease of code comprehension, and more.
When users complain about slow software enough to buy a different product (in whatever variation of 'buy' that may be) then it becomes a high priority thing to fix. Software today is precisely as fast as it needs to be- and no more, typically.
To me, this article is just the software engineer's version of Grandpa complaining that things were better in the old days- he's ignoring the reasons why things changed.
I don't think this is true. I think this is more of a 'boiling frog' situation. Increase the temperature one degree at a time and it won't jump out of the pot.
Every individual piece of software slowly eating up more resources is something the user barely notices or doesn't even attribute to the software in question, but give it a few years and everything grinds to a halt, and people very much dislike this, hence the infamous 'Windows rot' that everyone has suffered from at one point of their lives.
For one, PHP did not start out object oriented. Nor did it come with an IDE. Sure, it was dynamically typed (unlike Java). If I recall, it was 1994 too but I can’t be sure.
It was only PHP4 that added an abomination of a class system that motivated the current object system in PHP5. In fact, the problems with PHP have more to do with that pattern: creating abominations that motivate the development of something that isn’t a total abomination (often whilst retaining said abomination for a while).
And, as others have said, small “libraries” is exactly why the todo app needs 13k NPM packages.
I say “libraries” but you can’t call them that. I mean where’s the analogy? A library of one function? Behave
For starters, there’s 64-bits. Pointers were a quarter of the size in the ‘90s.
Then, there is Unicode. All programs need ICU (http://userguide.icu-project.org/icudata). Even if you dynamically link it, many of its symbols (or entry point IDs) end up in your executable.
Unicode isn’t an exception, though. Every library choice you make adds a bit more in size and takes a bit more in performance than the solution from the ‘90s would have.
For example, the moment you decide to use json, you get its entire feature set (arbitrarily nested arrays, a multi-line string parser, ability to read fields in arbitrary order, etc), even if all you need to do is pass two integers and get one back.
A parser generator that generates code for the json subset you need would help here, but would mean extra work for the programmer and the overhead typically isn’t that large, so why bother? It all adds up, though.
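To make the two-integers case concrete, here's a Python sketch using struct for a fixed binary message instead of a generated subset parser, just to show the difference in moving parts (byte counts apply only to this toy message):

    import json
    import struct

    # General-purpose: a full JSON parser, nesting, strings, arbitrary key order...
    msg = json.dumps({"a": 7, "b": 42}).encode()       # 17 bytes here
    a, b = json.loads(msg)["a"], json.loads(msg)["b"]

    # Purpose-built: exactly two little-endian 32-bit ints, nothing else.
    packed = struct.pack("<ii", 7, 42)                 # always 8 bytes, trivial to parse
    a2, b2 = struct.unpack("<ii", packed)

    print(len(msg), len(packed))                       # 17 vs 8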
Even if you can’t remove some code, you still could optimize memory layout to move code you expect will rarely run into separate code pages so that it likely will never be mapped into memory, but that’s serious work.
And of course there’s all that argument checking/buffer overflow protection people do nowadays that ‘wasn’t necessary’ in the ‘90s.
As for pointer size, personally I agree -- most processes can get by just fine with their own 32bit address space, so I'm not sure why we need to double the working-set size of all pointer-based data-structures.
If your data structures can fit in a 32-bit address space, you can just place them in an arena w/ 32-bit indexes. You do need to use a custom allocator for every element of that data structure, but other than that it ought to be feasible. Link/pointer-based data structures should be used with caution anyway, due to their poor cache performance
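Roughly what I mean, as a toy Python sketch: a singly linked list whose "pointers" are 32-bit indexes into an arena (illustrative only):

    from array import array

    NIL = 0xFFFFFFFF               # sentinel "null" index

    values = array("i")            # node payloads, stored contiguously
    nexts  = array("I")            # 32-bit "pointers" into the same arena

    def push_front(head, value):
        values.append(value)
        nexts.append(head)
        return len(values) - 1     # index of the new node becomes the new head

    def walk(head):
        while head != NIL:
            yield values[head]
            head = nexts[head]

    head = NIL
    for v in (3, 2, 1):
        head = push_front(head, v)

    print(list(walk(head)))        # [1, 2, 3]

The links take half the space of real 64-bit pointers, and keeping the nodes in one contiguous arena is friendlier to the cache anyway.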
It’s so unknown that it’s shocking. Imagine designing an entire OS that was used by dozens of people, and no one knows about it. http://worrydream.com/refs/Wirth%20-%20Project%20Oberon.pdf
"This meant that programmers had fewer things to worry about"
Yeah, right. Those lazy programmers.
It's obviously become easier to build complex software, but software is now required to be much more complex. There are still many things to worry about (actually far more than previously, I'd say), but they're not the same things.
That would be in 1970, but my guess is that "ed" would be a hard sell today.
There is plenty of bloat to go around these days and I think we could do a lot more to address that. But we've all got too much skin in the web game to own up to the embarrassing fact that a chat program that's basically IRC with pictures feels like glue on a 2.4 GHz, multi-core CPU.
With that out of the way, we shouldn't get silly, either. Every actually useful feature added will increase complexity and resource usage. I like split-window, code-folding, auto-indented, word-completing, syntax highlighted multiple document editing more than I like saving a few fractions of a percent of my hard drive space.
For one, there is no text based system like those old machines that handles all that so I have to switch to graphics. Just the font alone for all of those languages will be multi-megabytes and I'll need multi-megabytes more for space to rasterize some portion of those fonts. Rasterizing just 4 or 5 glyphs is more work than that entire 8k word processor had to do on its 40x24 text screen.
Then for each language I'll need multi-megabytes for handling the input methods. The input methods are complex and will likely need a windowing system to display their various options and UX so add more code for that.
The point being that we need the complexity. That 8k editor wasn't useful in the same sense as our stuff today. I don't know a good analogy. It's like complaining that people use CNC machines today and at one point got by with a hammer and chisel. I'm not going back.
> Just the font alone for all of those languages will be multi-megabytes and I'll need multi-megabytes more for space to rasterize some portion of those fonts.
> Then for each language I'll need multi-megabytes for handling the input methods.
Those statements clearly show your lack of awareness of what things were really like 40 years ago. They had CJK input and output (https://en.wikipedia.org/wiki/Cangjie_input_method was invented in 1976, for example) on the systems of the time, and that certainly did not entail "megabytes of code".
What it did entail, however, was a certain amount of skill, creativity, and an appreciation for efficiency and detail that led to being able to do it with the hardware of the time, skills which are unfortunately a rarity today. Instead, we are drowning in a sea of programmers who think the simplest of tasks somehow requires orders of magnitude more resources than were available decades ago, when the reality is that there existed software at the time able to do those tasks perfectly well and at a decent speed.
> The point being that we need the complexity.
The point is precisely that we don't.
I even think comparing with something that is 20 years old today is more interesting than comparing a 1995 IDE to a line editor written for teletypes.
The web version of Outlook is a great example --- on a computer only a year old, it often lags on every keypress when writing an email.
YouTube's redesign ("Polymer") is another example, where the new site is much slower than before, despite not really increasing in features.
Hard disk space is seldom the actual issue. Instead, it's bandwidth used (too expensive to download over a mobile connection, or maybe not even feasible to download over a low-quality connection), or memory requirements (can't reliably use slack + spreadsheet + photoshop at the same time), or power consumption (laptop out of battery in 1 hour).
Do you like split-window, code-folding, etc etc so much that you can't download it while traveling in a rural area, have to close it so you can run Photoshop, and have to carry a spare battery so you can use it for the entirety of a 3-hour plane ride?
Why would I download a text editor every time I wanted to use it? Even VS Code is stored locally, ready to be used off-line. But let's not pick an Electron app for _everything_ we want to do: last time I checked, Notepad++ does all of what I listed, and more.
> memory requirements
Yes, this can be a problem and is a sure sign of bloat. Meanwhile, I think we're bad at picking our software: why do we just sit back and accept stuff like this? I've got 8 gigs of RAM in my dirt cheap home laptop and I can run Gimp, GNumeric, Firefox and watch a movie at the same time just fine, with plenty of RAM left to spare. For professional use, requirements are and have always been higher: hence the $15k workstations of yesteryear.
I think we're doing a bad job at promoting that the use of native software would likely have the outcome of higher productivity and lower hardware costs, because that would probably mean we're putting our own cushy web coding jobs on the line, too.
> battery life
There are still some hard limits we have to take into account. Even if I for some reason did have to work with both programming and Photoshop on a 3 hour plane ride (I luckily do not), I don't think the answer to my problems would be to switch to "ed".
> Why would I download a text editor every time I wanted to use it?
To give a serious answer: The web is literally that. And I suspect the success of the web is in a major way due to Windows' lack of streamlined package manager, as the installation ("loading") of a webpage is about a second, whereas the install time of an average Windows program involved potentially several minutes and clicking "next" several times.
Fast install times helps with discoverability - if you can click a link and "install" a browser in under a second, it becomes pretty trivial to try out ten browsers in under two minutes and encourages experimenting. Plus, you'll have reasonable expectation that you won't have to spend time cleaning up the cruft left by unwanted IDEs you don't plan to use.
The install-time issue also applies to updates. Web pages mostly don't have an "updating..." loading popup like e.g. Steam has every time there's an update.
Note: I am NOT advocating we all switch to web browsers. I AM advocating that we try to make our package managers a couple of orders of magnitude faster.
While multi-hundred-megabyte text editors consuming double-digit CPU to render some text are definitely a sign of inefficiencies _somewhere_, I value any marginal productivity benefits from these additional features over (possibly significant!) usability in very resource-constrained situations.
The essence of BSD is also still in use, but I'm willing to bet that once this essence - much like that of ex - has been expanded into something we'd consider a capable system today, it's also going to require a lot more resources.
The OP’s viewpoint is akin to claiming that aviation has not advanced beyond Kitty Hawk because modern aircraft are simply wasting fuel when they take to the skies. It doesn’t take into account the huge differences in the modern computing environment.
The computers of the recent past weren’t even powerful enough to encrypt a modern TLS session. Does that mean using all that power to encrypt the session today is “bloat”? Of course not. One person’s bloat is another person’s feature. Note that sometimes that feature is that the software can actually exist and be maintained on your preferred platform.
If you think about it, it would be an utterly bleak future if we were just running the exact same software stack on faster and faster hardware. That would represent stagnation, not progress.
The biggest differences are screen size - full document vs a few lines of text - and support for colours and outlining. Also, PDFs, which weren't a thing in the early 80s.
But the core features in WS2000 are a lot more than a "rounding error" compared to Word.
It is the reality of the software industry. Time-to-market is the only rule, and it dictates how we write software: Agile, Scrum, don't reinvent, deploy early, fix it later, technical debt.
There are many of us who care about the quality of our work. But in performance reviews, the only thing that matters is how many features you shipped. You cannot demonstrate the quality of your work in your performance review. Or better yet, the fact that the software you wrote is going to be bug-free for the next 10 years.
While Moore's law may be over in terms of number of transistors per chip, we are still seeing growth in the number of cores per CPU, not to mention faster memory and the death of spindle storage.
IMO software will start leaning more heavily towards parallelism (if it hasn't already, building a threaded or asynchronous application is no longer an arcane dark art). Couple that with the surge in adoption of distributed architecture (yes, microservices), and the emergence of languages which encourage pragmatism and efficiency (a la Go and Rust), and I think we'll be okay for the next while. Can't speak to Electron though (gross).
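To the point about threading and async no longer being a dark art: fanning CPU-bound work out across cores is a few lines these days (Python; crunch is just a stand-in for real work):

    from concurrent.futures import ProcessPoolExecutor

    def crunch(n):
        # Stand-in for real CPU-bound work.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        # The pool handles spawning processes, queuing work and collecting results.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(crunch, [10**6] * 8))
        print(len(results), "chunks done")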
That means the programmer (or their employer) doesn't pay for the electricity it uses (or battery it drains), the RAM it allocates, the disk space it wastes, or the hardware upgrades necessary to make it run acceptably.
Why conserve a resource that you're not paying for? Especially if you have to expend your own resources to do it.
I'm not defending crappy software (nor, apparently, offering a solution), but if a programmer's personal sense of honor is the only weapon in this fight, then it's not a great formula for winning the fight.
>"Niklaus Wirth, the designer of Pascal, wrote an article in 1995:
About 25 years ago, an interactive text editor could be designed with as little as 8,000 bytes of storage. (Modern program editors request 100 times that much!) An operating system had to manage with 8,000 bytes, and a compiler had to fit into 32 Kbytes, whereas their modern descendants require megabytes. Has all this inflated software become any faster? On the contrary. Were it not for a thousand times faster hardware, modern software would be utterly unusable.
Niklaus Wirth – A Plea for Lean Software"
>"Time pressure is probably the foremost reason behind the emergence of bulky software.
Niklaus Wirth – A Plea for Lean Software
And while that was true back in 1995, that is no longer the most important factor. We now have to deal with a much bigger problem: abstraction. Developers never built things from scratch, and that has never been a problem, but now they have also become lazy."
These numbers are insane, but this problem will only keep increasing. As new, very useful libraries keep being built, the number of dependencies per project will keep growing as well.
That means that the problem Niklaus was warning us about in 1995, only gets bigger over time."
Related: "The Law Of Leaky Abstractions" by Joel Spolsky: https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...
I’m not actually sure we really even have “slow” software. At least not relative to how much that software can do. Latency is a different story.
There are some valid points in this article, but seriously?
Another thing... computing is over. At least in its previous-era form. It's not bringing dreams anymore, and will probably turn into a ubiquitous, invisible form where intrinsic details such as resource usage won't matter.
- low-power always-on devices with long-range radio (LoRa),
- few resources (32K RAM),
- the security constraints of securing every device,
- dealing with terabytes of log data in the cloud,
- ML at the edge (Kendryte K210),
- open-source firmware including radio (DASH7 firmware),
- open-source hardware (and open-source FPGA tooling) to create custom hardware designs (hardware implementation of algorithms),
- formal verification of OS and protocols.
Even a little innovation in any of the above would be groundbreaking.
These are all very advanced, low-level technology subjects (some of which I like a lot, btw).
What Wirth said only concerns a tiny fraction of the people on earth; the rest will stop using computers just like they stopped using desktops and just stream/talk on smartphones. If you ask most users, they'll probably root for whatever Electron app they use compared to frugal but powerful programs. For the layman, computing will fade and become like roads. And I believe they never really needed nor liked computers; it was just a 20-year period where it was thought to be a technological wonder to have one in your home.
The balance is the same as ever, developer time vs. compute power. As compute power gets ever cheaper, and developer time still costs the same, we put bigger burdens on the machine to save developer time.
In other words, simple economics.
Yes, every project can get to market or to the next release with a startup company's speed, but you trade-off and generally get "startup quality" that way. That is the actual point.
This industry is so fast-moving that the costs associated with that level of quality are often paid later by other people. There are plenty of examples of this.
I'm reminded of this article comparing Bruguet and Carson numbers. You have ever cheaper hardware. The marginal utility of any given unit of new, equally serviceable computing power is going to be less than the previous one. So eventually, if something can be 2% better or offer more functionality at 10x the bloat, it is rational to accept that. When you have lots of something, you become less efficient in how you use it. If you're surprised by that, go and read some economics.
The number of books and articles about the quality of software engineering published in the past 25 years certainly seems to prove we care a great deal about code quality. Probably more than we ever did prior to 1995, when Pascal, BASIC, FORTRAN, COBOL, and self-modifying assembly code were being taught in colleges.
As for the point of the article, that hardware doesn't accelerate as fast as programs using it, same challenge: prove it.
This article is all handwaving and speculation with no data to support any of its claims.
My reply was that nowadays you can, at home, search for a book on a specific interlibrary system, find which specific libraries have it, download and check out an e-copy, find out the number due back if you still wanted the hard one - AND have someone go hold it.
It's just not apples and apples anymore. I don't care how fast you can scan a text file in 90s written software.
That complexity is Google (or your favourite search engine) running a datacenter indexing exabytes of data so that you can search it in the blink of an eye. Yes, it takes 600ms now instead of 50ms - but that's like complaining that your new eco car engine can't even properly run on aftermarket lamp petroleum.
Software will just keep getting bloatier and bloatier and hardware more and more powerful to cope with the software inefficiencies (what? you think hitting hardware limits will solve this? nah, we'll just put more cpus in there so that software can be slow in parallel and of course train users to think slow software is normal).
This isn't true for all search engines, but it is true for quite a few of them.
Gatekeeping bullshit? The scent is unmistakable.