When did software go off the rails? (jasonhanley.com)
244 points by jasonhanley on Aug 16, 2017 | 323 comments



> Hardware capability increases exponentially, but software somehow just bloats up, using the power and space, without providing much more functionality or value.

I disagree. Our baseline for software has increased dramatically. If you don't care much about the added functionality or value of the new software, use Mosaic or Netscape 4.0 to browse the web.

There are obvious improvements in browsers that you are so used to that you've forgotten about them: tabs, the ability to zoom, process-level isolation for websites, support for newer protocols, CSS, Unicode support, font rendering, things I'm probably not aware of, a built-in debugger and inspector in the browser, and so on.

Again, if you think that software hasn't advanced much, simply use the software from 10 or 20 years ago, and see if you can stand the experience.


>If you think that software hasn't advanced much, simply use the software from 10 or 20 years ago, and see if you can stand the experience.

I still use Jasc Paint Shop Pro 7 whenever I can. It's faster to start up and use than any of the versions that came afterwards based on .NET. The built-in effects are a bit outdated, but it still runs most Photoshop-compatible plugins.

And I still run Windows 2000 in an isolated offline VM for some of my work. Even emulated, it's blisteringly fast on modern-day hardware. The entire OS runs great in less RAM than a Blogger tab (assuming the article's numbers of 500MB - 750MB RAM are correct).

There is some excellent and efficient new software being made (Sublime Text and Alfred spring to mind), but please, don't give me another Electron-based 'desktop' app.


A lot of the memory usage in modern browsers is from aggressive caching - computers today have multiple gigabytes of RAM that would otherwise go unused, so browser vendors keep more assets in memory between page loads to avoid reloading them from the disk cache or, worse, the network.

Operating systems themselves take advantage of this abundance of memory by also keeping things in memory for longer - I remember my beefy 2GB-of-RAM computer from 10 years ago still paged processes and data out to disk when I had Photoshop CS and Firefox 2 side by side, but now that I have 32GB of RAM - and have done for the past 2+ years - I cannot recall having experienced any disk thrashing due to paging since then.
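
For anyone curious what that looks like in code, here's a minimal sketch of the "use the RAM while it's free, drop it under pressure" policy in TypeScript; the names and the eviction rule are illustrative, not any real browser's internals:

```typescript
// Minimal sketch of opportunistic in-memory caching: keep decoded assets
// around while there is budget, evict least-recently-used entries when the
// budget shrinks. Correctness never depends on the cache, since everything
// can be refetched from disk or the network.

class AssetCache {
  private cache = new Map<string, { bytes: Uint8Array; lastUsed: number }>();

  constructor(private maxBytes: number) {}

  get(url: string): Uint8Array | undefined {
    const entry = this.cache.get(url);
    if (entry) entry.lastUsed = Date.now(); // refresh recency on hit
    return entry?.bytes;
  }

  put(url: string, bytes: Uint8Array): void {
    this.cache.set(url, { bytes, lastUsed: Date.now() });
    this.evictIfNeeded();
  }

  // Under memory pressure, shrink the budget and drop the oldest entries.
  shrinkTo(newBudget: number): void {
    this.maxBytes = newBudget;
    this.evictIfNeeded();
  }

  private evictIfNeeded(): void {
    let total = [...this.cache.values()].reduce((n, e) => n + e.bytes.length, 0);
    const byAge = [...this.cache.entries()].sort((a, b) => a[1].lastUsed - b[1].lastUsed);
    for (const [url, entry] of byAge) {
      if (total <= this.maxBytes) break;
      this.cache.delete(url);
      total -= entry.bytes.length;
    }
  }
}
```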


The problem is the myriad of low-end devices for sale that many consumers buy. Check out all the sub-$300 notebooks and you'll be lucky to find 4GB of RAM. So while 32GB is available today, I wouldn't call it the norm.


And the probability that you can successfully read a web page out of local cache (whether memory or disk) when the internet connection is down is far less than in the '90s. And this in the brave new mobile-first world.


I do think it's a mistake to look at high memory usage and immediately assume the software causing it is bad.

There's basically no downside to using otherwise-inert memory, so plenty of programs will eat whatever memory you have but degrade nicely when you're out. Hell, several IDEs seem to cache their tab contents just because they can, and that's only a text file on disk. If my memory usage never drops below 70% of available, I have no real complaints.

Of course, there is the question of why that much memory usage is even possible. I don't blame Chrome for eagerly caching webpages - it's delightful when I'm on shaky internet and pages don't reload every time I change tabs. But I do object to whoever decided a Blogger tab should have 750 MB worth of crap to cache in the first place.

I increasingly feel like many webpages are actively hostile to their users; in service to click rates and advertisers they eat bandwidth, squander memory, and actively oppose security measures like adblocking and SSL enforcement. That, at least, seems like a step backwards.


> I still use Jasc Paint Shop Pro 7 whenever I can. It's faster to start up and use than any of the versions that came afterwards based on .NET.

Ha, I too keep Paint Shop Pro 7 around for the reasons you mentioned.


Can you clarify which "alfred" software you are referring to?



Yup! That's the one. The entire Alfred application is just 6.4MB uncompressed (the previous version was 9.4MB, so they've made it even smaller!). And it does a lot more than replace Spotlight searches: it supports plugins for various workflows that you can design with a drag & drop component UI layout, there's a calculator and a music controller (iTunes, but with plugins it can control Spotify), and text snippet support across the OS, all in just 22MB of RAM (on my machine). Runs on macOS.


Wow, that's actually remarkable. I've already been really impressed by Spotlight - where Windows search is slow, unreliable, and constantly shows me ads, Spotlight is fast and almost never misses. But some actual control over what I'm getting from it looks fantastic.


Alfred is a replacement for Spotlight (the macOS run dialog) with a bunch more features.


The web, where I can now use a browser to get a slower and clunkier version of the same sorts of simple applications I could use on the desktop 20 years ago? Applications whose interfaces used to be snappy on 1997 hardware and now crawl in my poor abused browser? Oh, but the data is now available over a network.

And don't get me started on how text-centric websites used to load faster over a 56k connection than I can now get a JavaScript-abusing news site to load and render a simple article over fiber.


I don't think web technologies are the main problem (perhaps with the exception of CSS, but that's a productivity issue).

The problem is that we're not actually running applications on the web platform. We're running them on advertising platforms.

I'm pretty sure that roughly 99% of my browser CPU usage goes towards loading and running ads. And browsers are optimized for that task, which has consequences even when not running ads.

We have a business model and payments processing crisis, not a technology problem.


I disagree: paid, ad-free Google Mail still eats hundreds of megabytes of memory. The same is true for many other paid-for web apps.


That is true to some degree for memory, but not at all for CPU usage, and only to a much lesser degree for network usage.

Also, the entire architecture of browsers is very much geared towards running tons of totally unpredictable, crappy ads. I don't think the multi-process architecture most browsers use nowadays would have come to pass without crashing Flash adverts.


> I don't think web technologies are the main problem (perhaps with the exception of CSS, but that's a productivity issue).

As an application delivery platform, web technologies certainly are a serious problem. They're literally decades behind the curve in many respects.

A big part of this is the classic problem that tools built for one purpose (presenting hypertext documents with some light interactivity) have been repurposed for something completely different (web apps) because they were what was available, not necessarily what was good. This goes for HTML and CSS of course, but also for JavaScript. Even with the improvements of recent years and the much better runtime environments provided by modern browsers to execute the JS code, it's still a relatively weak language for writing non-trivial applications, and the community is still following numerous practices that we've long known to be unwise and learning lessons that the rest of the application programming world figured out many years ago.

A textbook example is that in the JS world, "unidirectional data flow" in modern UI libraries is treated as some sort of amazing new idea. Smalltalk was using the same fundamental principle in MVC in the 1970s. Many variations have evolved in desktop applications since then, dealing with a variety of practical lessons learned from experience with the original concept.
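
To make the point concrete, here's a minimal sketch of that unidirectional flow (state -> view -> action -> new state) in TypeScript; the names are generic, not any particular library's API:

```typescript
// State only ever changes through a pure (state, action) -> state function,
// and the view is a pure function of state. That's the whole pattern.

type State = { count: number };
type Action = { type: "increment" } | { type: "reset" };

function reduce(state: State, action: Action): State {
  switch (action.type) {
    case "increment": return { count: state.count + 1 };
    case "reset": return { count: 0 };
  }
}

// The view never mutates state directly; it just renders it.
function render(state: State): string {
  return `Count: ${state.count}`;
}

// A tiny "store" that closes the loop: dispatch -> reduce -> re-render.
let state: State = { count: 0 };
function dispatch(action: Action): void {
  state = reduce(state, action);
  console.log(render(state));
}

dispatch({ type: "increment" }); // Count: 1
dispatch({ type: "increment" }); // Count: 2
dispatch({ type: "reset" });     // Count: 0
```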

In JS world, it's considered a big new feature of recent years that you can now write code in multiple files, import what you need from one file in another in some standardised way, and thus break your application down into manageable parts for development but ship a single combined "executable". Again, most major programming languages created since at least the 1970s have supported modular design, in many cases with much better support for defining clear interfaces between the modules than JS offers even today.

In JS world, having discovered the concept of modular design and libraries relatively recently, naturally we're now seeing state-of-the-art build tools that use "tree shaking" to import only the code actually needed from one module in another, instead of bringing in the whole module. Dead code elimination has been a bread and butter feature of optimising compilers since... Well, you get the idea.
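
As a concrete illustration (the file names here are hypothetical), a named ES-module import is exactly what lets a tree-shaking bundler do that old-fashioned dead code elimination:

```typescript
// math.ts -- a module exporting two functions.
export function clamp(x: number, lo: number, hi: number): number {
  return Math.min(hi, Math.max(lo, x));
}
export function lerp(a: number, b: number, t: number): number {
  return a + (b - a) * t;
}

// app.ts -- a named import. A tree-shaking bundler can statically see that
// only `clamp` is reachable and drop `lerp` from the shipped bundle, which
// is the same dead-code elimination optimising compilers have done for decades.
import { clamp } from "./math";
console.log(clamp(42, 0, 10)); // 10
```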

Next week: Stability and standards are useful things, data structures and algorithms matter, standard libraries with good implementations of data structures and algorithms are how you combine those things to make the everyday requirements easy to meet, and if much of software development is about managing complexity then routinely pulling in minor features from an ecosystem that exponentially increases the number of transitive dependencies you have to manage probably isn't a smart move.

It's not just JS, though. The layout engine in browsers was designed to support HTML documents, reasonably enough. Unfortunately, that means laggy performance in web apps that keep regenerating a large amount of page layout information because of that heritage. A great deal of effort has been spent in recent years trying to work around that fundamental weakness, with varying degrees of success.

Likewise, the CSS formatting model was designed around those documents, and again it is quite limited for even basic layout needs of an application UI. Some of the bleeding-edge changes are making that slightly less painful, but there are still a lot of awkward interactions.

It is true that much of the modern web seems to have been taken over by spyware and ads. I'm afraid that's what we get for having a culture where a lot of people want all their content and services for free, and until we fix that culture we're probably stuck with the ads (and will probably see a variety of laws in the coming years to fight back against the technical victory of ad blockers). But this definitely isn't the only reason the web of today is bloated and slow.


I don't deny that there is a lot of valid criticism of web technologies. I dislike quite a lot of it myself and I share your bewilderment about some of the software engineering practices.

But when it comes to performance, all of that just pales in comparison to the fact that almost none of the CPU cycles executed on a typical web page are actually part of providing the service, as opposed to part of the payment (i.e. ads).

The subscription-based web apps I use are not sluggish at all (with some notable exceptions).


The web apps I use are incredibly sluggish when compared to a desktop app on a 1993 Mac.

And I'm not just talking about operations that require network requests. I'm talking about simple UI responsiveness to (for instance) acknowledge a click and hold on a handle so I can start to drag-and-drop items in a list. I'm talking about drop-down menus that take over a full second to open.

I'm talking about lag when typing text into a UI, such that if you type a whole sentence and then click somewhere after you finish typing but before the second half of it has rendered out character by character (for instance, because you noticed a typo and want to correct it), the second half of your sentence ends up transposed into the middle of the first half.

Ridiculous. This was a solved problem 25 years ago.


>I'm talking about drop-down menus that take over a full second to open.

That is not a limitation of the platform though.


Indeed, it's instead a symptom of the software bloat that the OP discusses. Hardware is quite literally millions of times more powerful than it was 25 years ago, but software is slower.

Many of the most egregious examples of this happen to be on the web because a) it involves an additional layer of abstraction, relative to OS-native applications; b) designers are involved more in web apps than native apps, and they refuse to use native browser widgets, which are performant; and c) for a handful of reasons web development culture is biased towards building on top of existing libraries, which results in excessive layers of abstraction.

And then you add the networking layer (and for many web apps having your data "in the cloud" adds very little beyond what a native app provides even without background data syncing), and add seconds to every tiny interaction that used to seem instantaneous.


This is true. Except that now the "desktop" apps are even worse!

Now they've taken web apps and packaged them up to run their own web server and browser locally with even more abstraction layers that chew through your system resources.

What a strange world.


On the Web the balance is not as good.

We had decent video, CSS, page updates, notifications, Unicode support and the like 10 years ago. We haven't gained that much since then. But page load times took a 5x hit.

Yeah, some UI is nicer, and we do have better completion, real-time stuff is snappier and all.

But hell.

We lost most pagination, pages feel slow, there are so many of them, and sharing information went down the toilet in exchange for big echo chambers.

The only things that are clearly better are my hardware and bandwidth, both on mobile and computer, and professional services, whose quality went up a lot.

The web itself, the medium, feels more sluggish than it should, given the powerhouse of infrastructure I've got to access it.


Pages are now over 3 MB in size on average [1]. Just 5 years ago (already post-iPhone) they were 800 KB. This mirrors my experience, where the mobile web feels slower despite my phone and connection being much faster. It seems web developers have stopped caring about performance at all.

If you look at where the bloat went, it's mostly three things:

- Images ballooned in size by a megabyte (without adding many extra requests)

The most likely culprits are retina-class graphics and "hero graphic" designs.

- JavaScript sizes and number of requests doubled

Corresponds to a shift from jQuery + some manually added plugins to npm/webpack pipelines and using a massive number of (indirect) dependencies.

- Video

Now that Flash has disappeared, it got replaced by auto-playing video, which is even worse as far as page bloat is concerned.

[1] https://speedcurve.com/blog/web-performance-page-bloat/


> - Images ballooned in size by a megabyte (without adding much extra requests)

For some reason, around the time "digital native" design and designers got big, it suddenly became OK to include giant design-only images (not content, not a logo, not even a fancy button, just because the designer wanted a giant image) in web pages, which before was one of those things you Just Didn't Do in proper web design. It's gotten so bad that now it's done with videos. Web old-timers find this to be in shockingly poor taste, but that's where we are. Bring back the "bad" print-background designers, I say.


This didn't just happen by accident. It happened because the market demanded it. At a previous company I was at, we A/B tested this, and an image-heavy design did better on every relevant metric we could think of (and I assume many others have done the same).


Like a peacock, it signals the health of the organization that produced it because of the resources brought to bear on otherwise frivolous things.


A particular hatred of mine are "load-under" images. You know, the ones designed to look like a cutaway into the page, so you load 1000 pixels of height, but see a sliding 400 pixels of the image as you scroll.

It's like a microcosm of everything wrong with modern web design. It's wasteful on data, wasteful on memory, obscures useful information (like the actual picture), and actively distracts from the surrounding text. But it looks fancy, so it's not going anywhere.


> JavaScript sizes and number of requests doubled

> Corresponds to a shift from jQuery + some manually added plugins to npm/webpack pipelines and using a massive number of (indirect) dependencies.

You are blithely ignoring the reason for all this bloat: unwanted adtech surveillance.


No, most of it is a move away from server-side page rendering to single-page-app memes.

It's why the first meaningful page draw on most webpages takes 3x as long to happen as it did a decade ago.


Certainly adtech is evil, but even surfing with it aggressively disabled has gotten worse. Crank everything you can down to zero, and you'll still find bizarre multiple-source loads that won't draw anything useful until the entire page is loaded.


> use Mosaic or Netscape 4.0 to browse the web

All the actually useful bits of the web (text, images, etc.) would still work. I bet you could still shop on Amazon with one of those old browsers.

The power is being sucked up running the 10MB+ of JS, Flash and other crap that ads add to every page.


Working in e-commerce, I see a truly monstrous amount of JavaScript tracking snippets that enterprise-level sites use: 30+ individual snippets and tracking pixels being pulled into the page, polluting the console with JS errors and console logs.

Thanks to Google Tag Manager, we don't even get a say anymore. It's out of our hands. We can put hours into optimising animations and UX, but it is in vain once the marketing companies get their hands on the GTM account. With enterprise-level clients changing marketing providers nearly every quarter, I'm sure many of the snippets aren't even being used anymore; they sit there because the new account manager is too afraid to remove them and no one truly understands what gets delivered to the customer's browser.

We will spend immense time architecting performant backend solutions and infrastructure, then decide that tens of thousands of completely disjointed lines of JavaScript, which we've never seen unminified and have no idea who wrote, are probably fine to deliver straight to the browser. Don't worry, caching will solve the network problem, and everyone has quad-core smartphones now, so it's probably okay.

Weird, the screen keeps flickering and scrolling isn't smooth. Oh well, probably a browser bug.


...and then you deliver those tens of thousands of lines of JS to a browser running uBlock Origin and 80% of it gets ignored, restoring speed and lowering memory requirements at the tiny cost of not seeing your ads or being tracked by ad networks.


I'm always a bit surprised at how many sites blocking doesn't break. Sure, it occasionally murders some S3 load I actually needed, but for the most part it doesn't perceptibly change my experience.

It's a bit alarming that you can kill 80% of a page load without your experience degrading in any way. At that point I'm prepared to call what's happening hostile design - the user's well-being has clearly taken a serious backseat.


Unfortunately we have to test as if we were the average user of our websites, so we are not afforded that luxury!


Lack of TLS would prevent Amazon shopping.


> Again, if you think that software hasn't advanced much, simply use the software from 10 or 20 years ago, and see if you can stand the experience.

If I didn't have to trade files with others I could quite happily use Microsoft Office 97 in lieu of whatever the new thing is called.

The issue with web browsers is only slightly more complicated. I'd love to go back to a world where web pages didn't try to be computer programs, but that's obviously not going to happen for a while.


Agreed. I used Office XP (2002) until... I think about 2013 or so, when I started running into incompatibility problems that just couldn't be resolved no matter what I tried.


That was a good version as well. I can even stomach - barely - Office 2007, with all the ribbon-related goofery, but the low-contrast UI on the newest stuff is just sadistic.

(I mean, it's not enough to make me switch to Open Office or Nacho Libre...)


It pushed me off of Office completely. I used Office XP as long as I could, then 2007 when I could. I gave up on 2013 completely - it's just not worth using a proprietary program that clearly hates its users.

If I need to export to Office I use Google Docs, because Libre is still too frustrating. If I don't need a .doc output, it's LaTeX for pretty things and TextEdit for the rest.

I agree that most of the Office-analogues out there aren't very usable, but I'm surprised at how rarely I need one.


>If I didn't have to trade files with others I could quite happily use Microsoft Office 97 in lieu of whatever the new thing is called.

I could go back even further and use Microsoft Works.


You can't avoid the bloat by using an older browser, because the web itself has grown bloated over the years. Modern browsers try their best to make that bloat a little more manageable but despite being written by probably some of the best developers out there they can barely keep up.


I think for the starkest contrast, play games from 10-20 years ago. If the difference in graphics, animation, physics and world size doesn't amaze you, then the changes in user interface should. It is so much easier to play modern games, especially for the disabled.


For fancy modern games, it's pretty clear where the increased hardware power went - indeed, "graphics, animation, physics and world size".

But people complaining about the poor use that today's software makes of today's hardware are usually not talking about games.


Yeah, but the level design sucks, the writing is terrible, open worlds are bloated with useless detail and not much gameplay, gameplay is repetitive and lacks innovation, and atmosphere and personal touch have been traded for myriads of pixels. Those games are big fat short interactive movies and without souls.

The best games I played recently are all indie stuff that I could have played on a much older machine.


"The best games I played recently are all indie stuff that I could have played on a much older machine."

If you try it, you may discover that's not the case. A lot of indie stuff is taking advantage of faster computers to use things with higher levels of abstraction, and indie games often have really quite terrible performance relative to the complexity of the game compared to a AAA title. They run at 60fps because they're much simpler, and a lot of the time you may find they're barely running at 60fps on a medium-class machine.

I'm not complaining, because the alternative is often that the game simply wouldn't exist if the only choice the developers had was to drop straight into C++ and start bashing away with OpenGL directly.


I think this is a really good point. It's true that indie games are often simple, and don't use complex graphical features. But it's also true that a lot of them are startlingly memory-heavy and poorly implemented - they're comparable to AAA performance, but for a vastly simpler task. Every so often something like Dungeons of Dredmor will bog down on me despite being a very basic task on a high-end machine.

I don't object to that, either. People are knocking out games in high-level languages or using extremely rich frameworks. You can put out an Android game and barely touch a single Android feature because your environment is so extensive. We do pay a price in speed and memory usage, but the upside is that people get to release all kinds of wild projects without learning the arcana of their environment.

It's fantastic that a team of <5 people with limited software expertise can put out a genuinely brilliant game. I'm less happy with it outside of gaming, where people often have to standardize on a single product. But video games are moving steadily closer to boardgames, novels, and other art forms where the tools of the trade aren't a limiting factor.


> The best games I played recently are all indie stuff that I could have played on a much older machine.

Often you couldn't. FTL or Child of Light (as examples of games I've enjoyed fairly recently) look like they would be fine on an older machine, but actually made my modern laptop's fans run pretty hard. Modern hardware meant the developers could use frameworks to save them time, and focus on making a good game rather than micro-optimizing performance.


FTL will definitely make a high-end Macbook Pro break a sweat. It's not that it couldn't have been written a decade ago, but it would have taken far more man-hours on the implementation side of things.


> Those games are big fat short interactive movies and without souls.

Oh, I dunno, I'm pretty sure the actual full-motion, "interactive" movies from the 90s were more soulless: http://listverse.com/2011/12/24/10-notorious-full-motion-vid...


I'd be interested in hearing more about the accessibility changes?

But I think the popularity of "retro" aesthetics and mechanics signals that progress in games is not at all linear.


Watch "Halfcoordinated" speedrun NieR: Automata at the summer GamesDoneQuick event that just happened. Because the game allowed remapping of inputs (even on the controller) it meant that he could play the console game using only one hand.

I would think that some more specialist peripherals could also be used for people with even less mobility, so they could also play these games to some degree and enjoy them.


> the changes in user interface should

Yeah, but that's not a consequence of better software!


What would you call the exact same functional piece of software with a more effective user interface besides better software?


I'm not sure if I agree.

My go-to example is Blender. It has a footprint of 300MB, starts within 1-2 seconds and uses 70MB of RAM on startup.

Compare this to a program like 3ds Max and you will ask yourself where all your resources went.

I think today most software loads everything on startup, while programs like Blender use a modular approach.


Just another example of sheer brute power alone not making things fast. It's also about latency and responsiveness.


> if you think that software hasn't advanced much, simply use the software from 10 or 20 years ago, and see if you can stand the experience.

But perhaps that's only because we had more time to figure the requirements out.


Yup. I don't think an improvement in UX requires additional computer load.


Yeah, screw everyone who makes use of Lynx, NetSurf or any of the extremely lightweight browsers still maintained today.


Lynx and NetSurf are more usable because they have been maintained. I'm sure 2017 Lynx is much more bloated than 1997 Lynx. Current day Lynx users will probably find 1997 Lynx unusable. All of this despite the fact that preventing bloat is probably one of the core goals of Lynx, which it obviously wasn't for Netscape.


All 7 of them!


I love lynx.

I may be alone (or in a group of 7), however, judging by how often I get a 403 Forbidden error because a site is set up to reject anything with a user agent of Lynx. It seems to be a default of many WordPress setups, e.g. lynx slatestarcodex.com.

Maybe I should just tweak the user agent, but hey, lynx pride.

(edit: looks like it's the "libwww" part of the default lynx user agent which gets me blocked, not the "lynx" part. OK, I will edit that out. Ha!)


Functionality has increased dramatically... but has it increased orders of magnitude? It doesn't feel that way.

(For comparison, I ran Netscape 4 on a Pentium 100 with 64 megs of RAM.)


> Again, if you think that software hasn't advanced much, simply use the software from 10 or 20 years ago, and see if you can stand the experience.

Try 1987 Ventura Publisher. That's 30 years ago. You can publish a whole magazine or a 1000-page book with it on a modest (<4MB) PC with a 286 processor. It has a great GUI, excellent usability and, dare I say, isn't slow at all.

On the IDE side, the environments for Lisp on the Lisp machines of the early 80s, or the Smalltalk environment on the early-80s Xerox computers, have nothing to envy in the modern IDEs commonly used for languages like Java.


A lot of people do use software from 10 years ago because it works better and is more efficient.

Old versions of Visual Studio, Vim and Emacs all come to mind.


> If you don't care much about the added functionality or value of the new software, use Mosaic or Netscape 4.0 to browse the web.

If I can have the web from that era to go with it, I'm happy.


I think a lot of it is you tend to only optimise as much as you need to. When the average user only has 4MB of memory, you're going to spend a lot of time optimising image size for example. When you can assume 4GB of memory, you're going to put that time into other places as the effort would be very difficult to justify.

Wouldn't your users appreciate more features than optimisations most of them aren't going to notice? For the same block of time today compared to decades ago, you're going to be creating an app that uses more memory and CPU but has a lot more features. Developer time isn't free and needs to be justified.

I used to write DOS apps for 66MHz PCs and you'd spend an enormous amount of time optimising memory and CPU usage. This was similar for the initial batch of Android phones as well as you didn't get a lot of resources (e.g. loading a large image into memory would crash the app). Now I can create more features in way less time and rarely have to think about optimisations unless dealing with a lot of data.

I think expecting every software developer to release software that uses the absolute minimum amount of CPU and memory possible is completely unrealistic. The people commenting that developers don't know how to write optimised code anymore have different priorities to a typical business. I know how to make low level optimisations and have even resorted to assembly in the past but I'm only going to these lengths when it's absolutely required.

For commercial projects, it doesn't make any business sense to optimise more than is needed even if that makes software developers squirm.


There is a huge difference between not optimizing at all and delivering decent performance.

Allowing video content, using images and a JS framework is alright. Not making sure the first load is under 1MB is, however, unprofessional.

I get that some sites do need big pages: video tubes, big SPAs, etc. But most sites are not YouTube or Facebook. If your blog post takes 3MB to display text, it's doing it wrong.


> There is a huge difference between not optimizing at all and delivering decent performance.

> If your blog post takes 3MB to display text, it's doing it wrong.

I agree: if it's not much effort and it makes a big difference (e.g. to mobile users), you should make the optimisation.

I'm more talking about people saying that instead of having one Electron app to support multiple platforms, a developer should be writing several native apps. The latter is a huge amount of development time to save a few hundred MB of RAM that most users probably wouldn't notice anyway. Nice to have, but it doesn't make business sense most of the time.


It depends, for Electron. First, four-year-old computers will feel the difference. Second, if your app is started a lot, the startup time will add up and be annoying. Third, if your apps stay open a long time and you have several of them, they will end up eating their way through that 1GB of RAM. It's really a bummer on a low-end computer. I've got 32GB of RAM so I don't care, but my browser has 8GB of it. So if you have Skype, Chrome, Thunderbird and an editor open, whether the new apps are Electron or not will make a big difference.


You are right, but the result of all apps (plus adverts) taken together is a very sluggish system.

I think what operating systems should do is allow users to set per-app/website quotas and use sensible defaults.

Developers should get the message that no we can't use all the resources available on a given system just for our particular app.


> allow users to set per-app quotas

This sounds sort of like what classic Macs did: there were some settings in the application's "Get Info" dialog to adjust the "current size" of the application (along with a note as to what the "preferred size" was); if there wasn't enough free memory the program would refuse to start.

In practice, this is a terrible idea: it generally turned into "well, this program isn't working right, what if I fiddle with its memory allocation?" and did nothing to actually help the user. (To be fair, the setting was necessary as a result of technical limitations, but I don't see it working out any better if it were implemented intentionally.)


>In practice, this is a terrible idea

Perhaps it was in the 1980s, but back then applications didn't get funded by downloading ads from third parties using 100% of our CPU. We didn't use many applications at the same time either.

So the problem we have right now simply didn't exist back then. And where it did exist, like on mainframes, they did actually use quota systems rather successfully, I believe (just as we do right now with virtual machines).


Whilst this would be nice for power users (I've certainly hit out-of-memory on Linux and had random processes killed), it wouldn't have much effect in the grand scheme of things.

Commercial developers will use as much memory as they can get away with; OS vendors would disable the quota system (or set the quota to 100% of whatever RAM the device has) since they don't want apps to "break" on their laptop/tablet/phone and work on their competitors'.


There are some very basic education and industry choices we could make to get a ~4x or so efficiency gain without any substantive increase in development time/effort/maintenance cost etc.

If people graduated from college with just a little bit of understanding of how slow RAM is, how CPU caches work, and how expensive allocating memory on the heap is (either up front with something like malloc, or amortized with garbage collection), and if we stopped using languages that haven't at least built a good JIT yet and used AOT native compilation more often, we would all be in a much happier place. Systems would be much more snappy, or we could add more features; users could run more things at once before it all goes to hell; batteries would last longer.

None of this requires even changing what language you use, or massive changes in coding style. Just knowing some basic facts about how the computer works can let you choose the more efficient option among equivalently complex options: "don't need a linked list here, let's use an array-backed list instead"; "let's not use LINQ here because it allocates and this is a tight loop that runs on every request"; "let's not use Electron, let's build something with a similar external API that is more efficient".
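
To illustrate the LINQ point with the JS/TS equivalent (a chained array pipeline versus a plain loop), here's a small sketch; both functions compute the same result, the difference is purely the intermediate allocations in the hot path:

```typescript
// "Don't allocate in the hot path": the chained version creates intermediate
// arrays on every call, the plain loop creates none. Same complexity to write.

const requests = Array.from({ length: 100_000 }, (_, i) => ({ size: i % 512 }));

// Chained version: each stage allocates a new intermediate array.
function totalLargeChained(): number {
  return requests
    .filter(r => r.size > 256)  // allocates one array
    .map(r => r.size)           // allocates another
    .reduce((sum, n) => sum + n, 0);
}

// Plain loop: zero intermediate allocations, same result.
function totalLargeLoop(): number {
  let sum = 0;
  for (const r of requests) {
    if (r.size > 256) sum += r.size;
  }
  return sum;
}
```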


Why do you just assume apps using more resources now is because of coders with bad optimisation knowledge?

> lets not use electron, let's build something with a similar external API that is more efficient

People also choose Electron because it's efficient in terms of development time for releasing apps on multiple platforms, not because they don't know how to optimise.


Tools like Qt+QML are, in my experience, just as efficient in terms of development time, but much more computationally and memory efficient than a typical Electron application. I'm not saying that you can't make Electron run efficiently, just that it takes more work, and therefore, since most teams optimise for development rather than computation, the end result is that Electron is (again, in my personal experience) typically less computationally/memory efficient. It also helps that in QML, JavaScript is typically used only as non-performance-critical glue code, the rest (and the framework itself) is in C++, and the GUI rendering is in OpenGL.


I think that's part of the problem: people are optimising for development time and effort rather than end user experience.


>Commercial developers will use as much memory as they can get away with

I doubt that. Developers will try to work within their quota so they don't have to ask for permission to use more and risk annoying users.

Also, it would create resource competition between websites and the adverts they use to fund themselves. This would lead to much less annoying ads.


> I think what operating systems should do is to allow users set per app/website quotas and use sensible defaults.

I can't see how this would work. What would the sensible default be for a computer game? A text editor? An IDE? A video editor? This kind of thing is way over common users' heads to set their own quotas as well. How would you avoid "This app wants to use more resources than the default. Allow?" popups with low defaults?


The defaults would be set according to some simple statistics. If an app or website wants to use more it must ask for permission, but it can only ask a limited number of times within a given period.

Of course it's easy to imagine an implementation of this idea that is annoying and confusing. But I can imagine this working very well and provide an incentive for developers to not waste resources.


What statistics would you use though? My point was a high-end game is going to use close to 100% CPU all the time and lots of memory, a video editor is going to be idle sometimes and then try to use 100% CPU to make edits fast, text editors will be more modest etc.

All apps have different resource usage profiles. I can't see how you could make this work without user interaction or a review process for apps.


On second thought, I wouldn't actually use statistics at all. I would simply set all defaults to a low value in the single digits percentage range on average over a period of one minute or so.

The overwhelming majority of apps need next to no CPU during normal operation (on average, that is; spikes are OK), and that includes text editors.

Other applications, such as games, need all the juice they can possibly get. Users usually know why that is the case and will be happy to grant them that permission.

There needs to be a way for an app to request a temporary suspension of resource limits, e.g. for running a batch job that should finish as quickly as possible.

Each website should be treated as a separate application and needs to compete for resources with the ads it runs. So there would be an incentive for website owners to keep the resource usage of ads in check.
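
Here's a rough sketch of that policy as code, just to show it needn't be complicated; everything in it (the class, the 5% and one-minute figures, the suspension mechanism) is hypothetical, not an existing OS API:

```typescript
// Sketch of the proposed policy: a low default CPU budget averaged over a
// sliding window, with an explicit way to lift the limit temporarily
// (e.g. after a permission prompt, or for a batch job).

class CpuQuota {
  private samples: { at: number; cpuFraction: number }[] = [];
  private suspendedUntil = 0;

  constructor(
    private budget = 0.05,       // 5% of one core on average
    private windowMs = 60_000,   // averaged over one minute
  ) {}

  // Called periodically with the app's measured CPU fraction.
  record(cpuFraction: number): void {
    const now = Date.now();
    this.samples.push({ at: now, cpuFraction });
    this.samples = this.samples.filter(s => now - s.at <= this.windowMs);
  }

  overBudget(): boolean {
    if (Date.now() < this.suspendedUntil) return false; // exemption in effect
    if (this.samples.length === 0) return false;
    const avg = this.samples.reduce((n, s) => n + s.cpuFraction, 0) / this.samples.length;
    return avg > this.budget; // spikes are fine, sustained usage is not
  }

  // Lift the limit temporarily, e.g. for a batch job the user approved.
  suspendLimits(forMs: number): void {
    this.suspendedUntil = Date.now() + forMs;
  }
}
```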


"app XYZ seems to be using a lot of resources. Do you want to stop it ? Yes/No", with a "don't ask me again" checkbox.


That would be a glorious feature. I wonder if QubesOS can do that.

In any case, macOS, which I run as my personal system, probably ain't gonna get that feature anytime soon.


Organizations get what you reward.

Tech Debt is rewarded.

Doing something for the first time, almost by definition, means one does not really know what one is doing and is going to do it somewhat wrong.

Hiring less skilled labor (cheap coder camps, for example) to implement buzzword-bingo solutions gets you into a place where all the software contains large chunks of its substance coming from people doing it for the first time... and not 100% right.

As we never go back to fix the tech debt we end up building stills for the borked and less than great. When that structure topples over we start over with a new bingo sheet listing the hot new technologies that will fix our problems this time round for sure.

I'd think that a good fraction of the current language expansion is that the older languages are too complex and filled with broken. Green fields allow one to reimplement printf, feel great about it, and get rewarded as a luminary in the smaller pond.

.....

oh... and well, the cynic in me would argue planned obsolescence at the gross level. No one buys the new stuff unless there's new stuff.


Software engineering reinvents everything every decade or so. People were complaining about it 20 years ago - I suspect they have from the beginning. Not only new languages, also new hardware and new platforms in general. Moore's law seemed to be the driver, but the constant growth means constant new people... unlike scientific paradigms, we needn't wait for old practitioners to die.

BTW is "stills" a typo for something? (shims?)


> Doing something for the first time, almost by definition, means one does not really know what one is doing and is going to do it somewhat wrong.

The Mythical Man-Month has a chapter on this titled "Prepare to throw one away". Brooks argues that the program/OS/whatever you build the first time around should be considered a prototype. Then you reflect on what problems you encountered, what went well, and so on, and use those insights to guide you when you start over.

It seems like such an obvious idea, but Brooks wrote that almost 50 years ago, and it seems like only very few people listened. Primarily, I guess, because much software is already written under highly optimistic schedules - telling management and/or customers that you want to throw the thing out and start over is not going to make you popular.


That do-it-twice idea originally came from Royce's famous 1970 article, which led people to claim he invented Waterfall (due to the diagram he started with). His reflection on the practices of leading teams in the '60s was: do the project once to learn everything, then do it again and ship the second one. Worth reading.


On the other hand, we have the Second System Effect, about the dangers of throwing away the first version of Thing in favor of "Thing Done RIGHT This Time".


What's also obvious is practically no manager is going to go for that.


This is not a single thing.

I think the biggest culprits are abstraction layers. You use a framework that uses JS that runs in a VM in a browser that runs on an OS. The time when showing text was done by the application writing a few bytes to VRAM is long gone. Each layer has its internal buffers and plenty of features that you don't use but are still there because they are part of the API.

Another one is laziness: why bother with 1MB of RAM when we have 1GB available? Multiplied by the number of layers, this becomes significant.

Related to laziness is the lack of compromise. A good example is the idTech 5 game engine (Rage, Doom 2016). Each texture is unique; there is no tiling, even for large patches of dirt. As expected, it leads to huge sizes just to lift a few artistic constraints. But we can do it, so why not?

Another one is the reliance on static or packaged libraries. As I said, software now uses many layers, and effort was made so that common parts are not duplicated, for example by using system-wide dynamic libraries. Now these libraries are often packaged with the app, which alleviates compatibility issues but increases memory and storage usage.

There are other factors, such as an increase in screen density (larger images), 64-bit architectures that make pointers twice as big as their 32-bit counterparts, etc.


> The time when showing text was done by the application writing a few bytes in VRAM is long gone.

Related: I just went to a news site that had this absolutely gorgeous, readable font called "TiemposHeadline" for the headlines. That font was loaded through the @font-face interface of CSS, so that's probably somewhere in the ballpark of 1MB.

That I can so casually navigate to find the name of the font and how it's getting loaded is due to DevTools, which is tens of megs of storage space plus whatever chunk of memory it's eating.

I think examples like Doom are misleading. Those are essentially hand-compressed diamonds of software. Some of the resource-saving techniques used there are bona fide legends. It should come as no surprise that an increase in availability of RAM and storage space causes the software to accordion out to fill the available resources in return for ease-of-development.

That said, I'm all for software minimalism movements as long as the functionality and usability remains roughly the same.


Just to set the record straight, idTech 5 uses "megatextures" / sparse virtual texturing, which is actually a very clever performance enhancement: a low-resolution render of the scene is made to determine the needed mip levels for the visible textures, which are streamed from disk into an atlas texture. Then there's a redirection texture that maps from the textures needed by models to the UVs of the correctly mipped texture in the atlas. It's a great solution to disk and API latency in games. To call it bad because it's a big texture instead of 50 textures individually streamed from disk... it's not a lack of compromise. It's a great engineering solution, you dingus!
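
For readers who want the gist in code, here's a very rough sketch of that feedback-pass / page-streaming / indirection idea; the names and the streaming stub are illustrative, not idTech 5's actual implementation:

```typescript
// Sparse virtual texturing, heavily simplified: a feedback pass reports which
// (mip, page) tiles are visible, tiles are streamed into a physical atlas, and
// an indirection table maps virtual pages to atlas slots, falling back to a
// coarser mip until the data arrives.

type PageKey = string; // `${mip}:${x}:${y}`
const key = (mip: number, x: number, y: number): PageKey => `${mip}:${x}:${y}`;

class SparseVirtualTexture {
  private atlas = new Map<PageKey, number>(); // virtual page -> atlas slot
  private pending = new Set<PageKey>();
  private nextSlot = 0;

  // Called with the results of the low-resolution feedback render.
  requestVisiblePages(pages: { mip: number; x: number; y: number }[]): void {
    for (const p of pages) {
      const k = key(p.mip, p.x, p.y);
      if (!this.atlas.has(k) && !this.pending.has(k)) {
        this.pending.add(k);
        // Stand-in for async disk streaming + upload into the atlas texture.
        setTimeout(() => {
          this.atlas.set(k, this.nextSlot++);
          this.pending.delete(k);
        }, 10);
      }
    }
  }

  // Lookup used when shading: if the exact page isn't resident yet,
  // fall back to progressively coarser mips covering the same area.
  resolve(mip: number, x: number, y: number, maxMip: number): number | undefined {
    for (let m = mip; m <= maxMip; m++) {
      const shift = m - mip;
      const slot = this.atlas.get(key(m, x >> shift, y >> shift));
      if (slot !== undefined) return slot;
    }
    return undefined; // nothing resident yet; render with a placeholder
  }
}
```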


I don't think that megatextures are bad; in fact, I am kind of a fan of John Carmack and id Software.

The rationale behind megatexture is that storage capacity increases exponentially but our perception doesn't. There is a limit to what our eyes can see. In fact, for his future engines, John Carmack wanted to go even further and specify entire volumes in the same way (sparse voxel octrees).

And sure, the way megatexture is implemented is really clever, and yes, it is there for a good reason, but it doesn't change the fact that it makes for some of the biggest games on the market (Doom is 78GB).

When I said no compromise, I meant no compromise for the artists. The whole point of megatexture is to free them from having to deal with engine limitations. They don't have to be clever and find ways to hide the fact that everything is made of tiles; they just draw. And yes, this is a good thing, but a good thing that costs many gigabytes.


I was actually just thinking about this.

Yeah, it probably doesn't really help the situation when you build several layers of abstraction in an interpreted language that runs in a virtual space on a virtual machine using another abstraction layer running on virtual cores that go through complex OS layers that somehow eventually possibly map to real hardware.


To understand this, you need to read Andy Grove, especially "Only the Paranoid Survive". It's fascinating: basically everything I, as a user, see as a boon, he perceives as a threat. From his point of view, everything that allows people to buy cheap machines, run fast software etc. is negative and needs to be dealt with. Intel basically didn't change over the years, with the recent x86/ARM fuss showing just that. On the other end of the spectrum are companies exploiting the existing possibilities - for a long time it was Microsoft, making each version of their flagship products more and more resource hungry, so the users were forced to upgrade ("What Andy giveth, Bill taketh away"). What is happening now is the extension of the same story - "Memory is cheap? Let's use all of it!".

As a developer, you rarely care about memory usage; as a web developer, you have limited influence on CPU usage. And since most managers care only about getting the project done on time and within the budget, this is what most developers concentrate on.


> And since most managers care only about getting the project done on time and within the budget, this is what most developers concentrate on.

I think that is the crux of the issue succinctly put.


Commercial software is very rarely of a sufficient quality.

Software that comes out of nonprofits or the free software movement is arguably better built and treats the user better.


[citation needed]

Lots of people stopped using Firefox in favor of Chrome precisely because Firefox was incredibly greedy memory-wise.


I stopped using Chrome on Linux (except for development and testing) because it absolutely batters the CPU, enough that it powers on the fans on my laptops where FF rarely does.

I've never really understood why, either, and it's happened on two totally different laptops.


In their defence, Mozilla has done a lot of work to fix these issues, and Chrome really isn't that much better.

I'm on FF 57 at the moment (nightly) and it's stupid fast. Like, really fast.


Yeah. This is a business and management problem, not a programming problem. If the business wanted to impose memory and compute budgets for different parts of the program, they could. This is what happens in aerospace and automotive when you have more than one program sharing a processor.


It happens in game development when you're targeting resource-constrained hardware consoles.

It can happen in web development as well if you're working on a disciplined backend team that cares about designing for request latency and concurrency.

But often scrappy startups don't have the time or money to care about those things. It's more about product fit and keeping customers happy to keep the money coming in.

I personally don't use that excuse and design everything I can with a budget in mind. It's a nice constraint to have an upper bound on response time. Forces you to pick your battles.


My targets depend on context, but I like client-side stuff to be done in under 100ms, since that's the limit I learnt at college for an interface to feel 'instant' to a human.

There are a bunch of tricks you can pull to give that impression even when you have to go over that limit.

These days I'm lucky, I only work on an internal web app that is almost entirely served over gig ethernet.


Software is causing science to move backward.

Software makes the world more complex faster than we can understand it, so even though we have more knowledge we understand less about the world.

We used to know how cars work. We used to know how phones work. Now we don't, and never will again.

The implications are unsettling.


+1000 to this.

Imagine a world populated entirely by IoT devices. Imagine, for a moment, starting with a blank slate and trying to make sense of these devices using the methods of science. They are so complex, and their behavior governed by so much software, that it'd be impossible to make a local model of how a device actually worked. You simply would not be able to predict when the damn thing would even blink its lights. When the world gets to this point... one would have to understand how the software worked, in many different programming languages; kernels, scripts, databases, IO, compilers, instruction sets, microarchitecture, circuits, transistors, then firmware, storage... it'd be impossible to reverse engineer.


I guess this is where future generations can get magic from. Someone figures out some small part of the protocol and they basically become a mage by sheer virtue of their limited understanding.

I mean how many times has someone called you a Wizard when you've fixed some random piece of tech or gotten something working for them?



It would still be simpler than figuring out how biological machines work, but biologists are trying, with some success.


My point is that it is completely stupid to think of the stack of knowledge necessary to understand how an IOT device really, fundamentally works.

A toaster is not a complex thing. It has a few springs, a tray, a body, some heating elements. Some wires. There is absolutely no need to put the internet in there.

/rant


The best example of how uncomplicated things should be is the design of an ordinary kettle. I remember an article here on HN where a guy disassembled a kettle to find out how they make it so cheaply.

It was some simple bi-metal wire that would bend when it reached a certain temperature and shut off the power to the heating coil.

I guess the whole aspect of these IoT is that it violates the simplicity of these devices. Kind of like the weapons in Judge Dredd vs a simple modern mechanical handgun.


You think a toaster is simple? Try building one from scratch! This guy did: http://www.thetoasterproject.org and it was HARD!


Well, smelting your own iron and making plastic are the hard part. There is nothing particularly challenging if you have a few pieces of metal laying around.


Sure, it all depends on how you define a "blank slate". The whole world of engineering is a huge stack, and near the bottom are things like smelting iron and making plastic, up a few layers you have things like standardized screws, near the top you have things like kernels, databases, etc.

If all these things can be taken as a given, why would you not want to use them? I mean, yes, you can avoid some complexity by making a simple toaster, but the second the consumer wants things like "never burn my toast" or "personalized toast levels" you need to go up the stack.

That said, some IOT things are clearly lame ideas that should never have been made in the first place, but that doesn't mean you should avoid using existing technology.


> If all these things can be taken as a given, why would you not want to use them?

When the device breaks, what do we do with it? If it is mostly software, it is not user serviceable, whereas something with a spring and clips and wires is something that a curious person armed with a screwdriver could disassemble and fix.

I fear that software is ushering in an age where users are absolutely helpless when something breaks. Then we get big stupid bricks with chips in them.


The complexity can be worth it. Take electronic fuel injection. Uses less fuel, has a better power band, has a wider range of self correction.


Cars are actually a great example of where things have become highly complex, which means that they are now essentially impossible to fix yourself. On the other hand, for regular day-to-day use they are a lot better.


What an interesting guy...

Right now he's investigating what it is like to be a goat: http://www.thomasthwaites.com/a-holiday-from-being-human-goa...

Don't knock it until you try it I guess?


Neurologists with EE backgrounds apply neurology techniques to understanding the 6502 CPU: https://www.economist.com/news/science-and-technology/217149...

It goes poorly.

The economist article links to the original paper, which is quite readable too.


That level of success is on par with figuring out that IoT devices run on electricity and use some kind of radio waves to communicate. Maybe that they store binary data somehow.


I disagree. Biological machines at least are all coded in the same language with the same basic runtime and intended output. They're all similar machines doing similar things via similar means.


I encourage you to pick up a biochemistry textbook.


Someone said that being surrounded by bad or malicious IoT devices would be indistinguishable from being haunted.


Many times I've had to explain to electricians that you can't "figure out" what's happening inside a PLC or other programmable device without having a copy of the program. It doesn't matter whether turning on input A turns off output B the first time you try it, or the fifth, or the hundredth. Maybe it also depends on the time of day, or some data coming over the network, or maybe there's a counter that turns output B on permanently after 100 rising edges on A. You just. Don't. Know.

"Yeah nah but when you turn on input A, input B turns off so we know how B works."


> We used to know how cars work. We used to know how phones work. Now we don't, and never will again.

Philip K. Dick wrote a short story on this, some 60 years ago: https://en.wikipedia.org/wiki/Pay_for_the_Printer


I think it started when we went away from natively compiled languages. Visual Studio 6 was, in my opinion, the best version in terms of responsiveness/functionality. After that, with .NET, Visual Studio started including more .NET code and became slower and slower over time. People slowly got used to their applications being slower and slower. Then the web came and people started writing apps in JavaScript. The JavaScript apps are not too bad compared to the .NET apps, so people did not notice. However, if you compared them to Pascal/C/C++ apps, you would notice a big difference.


I don't think that's quite true. Erlang is compact and fast (for many things) and it's interpreted. And there's tons of bloated C++.


.NET was always compiled to native code.

AOT to native code via NGEN, or JITed on load.

The only .NET variant from Microsoft that wasn't compiled to native code before execution, was .NET Micro Framework.


I always know when an app is JavaScript. Also .Net apps are native.


I don't think interpreted languages are the problem, since people have been using Visual Basic since the Windows 3.0 days. There are some collections on archive.org of 16-bit Windows shareware programs and games with hundreds of entries, and something like 2/3 of them are made in some version of VB.

Now, .NET and the JVM might be a problem, since those VMs tend to be resource hogs (after all, both use GC methods that allocate tons of memory, whereas something like Python or even classic VB uses reference counting - and even then there are languages that don't use reference counting but some other method of GC and are still fast). But I don't think you should put all interpreted languages in the same box.


Reference counting is GC.

Also you should not put all GC languages in the same box, as many do allow for AOT compilation to native code and do support value types and GC-free memory allocation as well.
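
For what it's worth, here is a minimal sketch in C of that first point (hypothetical helper names, not any particular runtime): with reference counting, the "collection" work happens inline at every release instead of in a separate tracing pass over a large heap, which is exactly the trade-off being discussed above.

    #include <stdlib.h>

    /* Hypothetical refcounted object header: the collector is just
       the bookkeeping done inline on every retain/release. */
    typedef struct {
        int refs;
        void (*destroy)(void *self);  /* frees the object's own resources */
    } RcHeader;

    void *rc_retain(void *obj)
    {
        ((RcHeader *)obj)->refs++;
        return obj;
    }

    void rc_release(void *obj)
    {
        RcHeader *h = obj;
        if (--h->refs == 0) {         /* reclamation happens right here... */
            if (h->destroy)
                h->destroy(obj);      /* ...not in a later tracing pass */
            free(obj);
        }
    }

The cost shows up as bookkeeping on every pointer hand-off (and trouble with cycles) rather than as heap headroom and collection pauses.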


Heh, it finally happened on Hacker News too: people misunderstood what I wrote and downvoted me for it instead of trying to understand what I was talking about (yes, I am annoyed by that; it is one thing to be misunderstood and another to be penalised for being misunderstood - especially on HN, where messages fade out when downvoted).

So, first of all:

> Reference counting is GC.

How did you conclude that I said otherwise, when I clearly wrote "aren't using reference counting but some other method of GC" ("other method" here implying that reference counting is also GC)?

Moving on...

> Also you should not put all GC languages in the same box

I did not, as should have been obvious from the "after all both use GC methods that allocates tons of memory whereas something like Python or even classic VB use reference counting" part, where I compare two different methods of GC, one that uses a lot of memory and one that doesn't.

Now, I get that the previous misunderstanding would make this bit sound as if I were comparing "GC" (Java, C#) with "non-GC" (Python, classic VB) - and please note that the quotes here show what one could think while holding that misunderstanding, not what I really think; after all, I had already made it clear with the previous quote that I consider reference counting a method of GC. However, I do not think it is my fault here: I gave examples and tried to make myself clear about what I meant. After some point I believe it is up to the reader to actually try to understand what I am talking about.

I think the rest of your message (the "as many do allow for AOT compilation to native code and do support value types and GC-free memory allocation as well") relies on the above misunderstandings, especially considering I didn't do what you describe, so I am ignoring it.

Now don't get me wrong, I am not attacking you or anything, nor do I believe you are wrong about the factual parts of your message ("reference counting is GC", "not all GC languages are the same"); it is just that the message doesn't have much to do with what I wrote.


I still find it amazing that DOOM was 2.5MB. A Google search page is ~20MB (16MB in DDG). And a Wikipedia page is ~19MB. (FF 55). This is crazy to me. That even simple things take so much space now. I know space is cheap, but this does feel bloated. And while these sizes might not be noticeable on a computer, it definitely is on a mobile connection. I had figured the advent of mobile would make optimization more appealing, but it seems to go in the other direction.


There is no way a Google search is 20MB and a wiki page is 19MB. My tests show a Google page is around 1MB, and wiki pages obviously depend on whether or not the page has many large images. But the average page definitely isn't near 19MB, that's for sure.


Maybe he means how much the browser uses to display the page, which is much much larger than the size over the wire.


I just gave what about:memory was giving me. So yes, what the browser uses to display the page.


Which is also true for DOOM.



The bar in UX goes higher with every new competitor, and with that the amount of code under the hood.


This surprises me as minimalism is becoming extremely popular. I mean, websites shouldn't be myspace pages.


I think the explanation is that bloated software is cheaper to make.

It is cheaper to develop a .Net app than a C app. Cheaper in Development and maintenance.

It is cheaper to not care about efficient data management, or indexed data structures.

What we're losing in efficiency, we gain in code readability, maintainability, safety, time to market, etc.


> It is cheaper to develop a .Net app than a C app. Cheaper in Development and maintenance.

I think this is true but I disagree that it's inherently true. Slapping a UI together with C and GTK is pretty straightforward, for instance; here is a todo list where I did just that (https://gitlab.com/flukus/gtk-todo/blob/master/main.c). It's not a big example, but building the UI and wiring the events was only 40 lines of code; it's the most straightforward way I've ever built a UI. More complicated things like displaying data from a database are harder, but I think this comes down to the libraries/tooling - the .NET community has invested much more time in improving these things than the C community.
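
For anyone who hasn't tried it, a minimal sketch of what that kind of C + GTK setup looks like (GTK 3 here, and this is not the linked to-do example, just the same idea boiled down to one window, one button and one callback):

    #include <gtk/gtk.h>

    /* Callback run when the button is clicked. */
    static void on_clicked(GtkWidget *widget, gpointer data)
    {
        (void)widget; (void)data;
        g_print("clicked\n");
    }

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);

        /* One top-level window containing a single button. */
        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        GtkWidget *button = gtk_button_new_with_label("Add item");

        g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);
        g_signal_connect(button, "clicked", G_CALLBACK(on_clicked), NULL);

        gtk_container_add(GTK_CONTAINER(window), button);
        gtk_widget_show_all(window);

        gtk_main();
        return 0;
    }

Compile with something like "gcc main.c $(pkg-config --cflags --libs gtk+-3.0)" and you get a native window with no embedded browser runtime in sight.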

> What we're losing in efficiency, we gain in code readability, maintainability, safety, time to market, etc.

I don't think we've exhausted our options to have both. We can build things like DSL's that transform data definitions into fast and safe c code for instance. Imagine instead of something like dapper/EF doing runtime reflection we could build equivalent tools that are just as easy to use but do the work at compile time. Or we could do it via rusts kickass compile time meta programming.


Try slapping together some GTK components similar to these:

https://www.devexpress.com/Products/NET/Controls/WPF/

Also, are you sure your C app would come out clean under the clang and gcc sanitizers?


C is too easy a target (and I'd say that is a HUGE part of the problem).

However, if you pick .NET vs Delphi you will see it is easier to develop UI apps (and general apps) in Delphi than in .NET.

Even today.

The exception is web apps, and web apps are another big problem, one that "normalizes" bad architecture, bad languages and bad ways of solving everything.


Really? Why do you say Delphi is easier? I work with people that maintain both C# and Delphi UI, and none of them think there is a future in Delphi.


Being a good tool and having bad management are orthogonal things.

If you want to take a look for yourself, you can install Lazarus (http://www.lazarus-ide.org/, discussed at https://news.ycombinator.com/item?id=14973706), a close clone of Delphi (version 7).

Or download the free edition (modern):

https://www.embarcadero.com/products/delphi/starter/promotio...


The problem with Delphi was Borland kind of killed the product by pushing it to enterprises with deep pockets and taking too much time to adapt to 64 bits and newer platforms.

The language itself is still quite ok.


As a developer and development manager, I haven't personally noticed major improvements in any of those metrics over the past 20 years.

But I'd definitely be interested in any studies that have tried to measure these over long time periods.


It's a gas law, as software expands to fill the available hardware. If it can reach a minimal standard with less work, a smaller budget is allocated.


If it's stayed the same that's a huge win because software has become incredibly more complex.


I don't agree necessarily with the first three items on your list.

In fact I think there is a sort of casual indifference to the first two that frequently borders on criminal neglect. Why bother with them when the "time to market" driver of methodology selects for the most easily replaceable code?

Safety is also debatable and mostly accidental. Most of the languages that are fast and "easy" to develop in rest on a core of C or C++, and are really only as safe as that code. Safer, because there may be fewer foot guns, but not necessarily "safe."


Software complexity increases exponentially with linear growth in features and polish. Occasionally someone takes the time to step back and rethink things, but generally you’re just adding another layer on top of the ancient ruins. Code archaeologist should be a job title in many organizations.


It's less than archaeology (where artifacts are embedded in soil) and more like geology (because everything is code, i.e. code is the soil). But yeah, I've had the same feeling when refactoring a decade-old application. You could really recognize the styles of different developers and eras in the same way that geologists recognize eras by looking at the type of stone deposited during that era.


This keeps coming up every couple of years, but is just wrong.

In the last 5-10 years, there has been almost no increase in requirements. People can use low-power devices like Chromebooks because hardware has gotten better/cheaper while software requirements haven't kept up. My system from 10 years ago has 4GB of RAM - that's still acceptable in a laptop, to say nothing of a phone.

If you're going to expand the time horizon beyond that, other things need to be considered. There's some "bloat" in when people decide they want pretty interfaces and high res graphics, but that's not a fair comparison. It's a price you pay for a huge 4k monitor or a retina phone. Asset sizes are different than software.

I won't dispute that the trend is upward with the hardware that software needs, but this only makes sense. Developer time is expensive, and optimization is hard. I just think that hardware has far outpaced the needs of software.


> Developer time is expensive, and optimization is hard.

In the case of front-end development also "Developer time is paid by the company while hardware is paid by the users."

This is basically a nicer way to put the "lazy developers" point from the article, but I think that's actually important.

The problem is that this seems to create all sorts of anti-patterns where things are optimized for developer laziness at the expense of efficiency. E.g., adding a framework with layers of JavaScript abstraction to a page that shows some text - after all, the resources are there and it's not like they could be used by something else, right?


There are many kinds of efficiency, and some of them matter more than others.

We as a society have voted with our wallets that yes, we really really want a process that is efficient on creating more features within the same amount of developer-time instead of a process that creates more computationally efficient features.

The increased hardware capacity has been appropriately used - we wanted a way to develop more software features faster, and better hardware has allowed us to use techniques and abstraction layers that allow us to do that, but would be infeasible earlier because of performance problems.

It's not an anti-pattern that occurred accidentally, it accurately reflects our goals, needs and desires. We intentionally made a series of choices that yes, we'd like a 3% improvement in the speed&convenience of shipping shiny stuff at the cost of halving the speed and doubling memory usage, as long as the speed and resource usage is sufficient in the end.

And if we get an order of magnitude better hardware after a few years, that's how we'll use that hardware in most cases. If we gain more headroom for computational efficiency, then we've once again gained a resource that's worthless as-is (because speed/resource usage was already sufficiently good enough) but can be traded off for something that's actually valuable to us (e.g. faster development of shinier features).


There is a cost to the company for non-performant front-end code though. If the front end performs poorly, users are less likely to use it.


If that were the case, I think there wouldn't be that much discussion about the "website obesity crisis". E.g., see this post from another thread: https://news.ycombinator.com/item?id=15028741


Users who aren't using an up-to-date phone are probably not an audience websites are likely to make money from. If a website's performance isn't "good enough" on a modern phone, that will hurt the site.

Dark thought: maybe sites actually profit from a certain level of "bloat", if it drives away less lucrative visitors while not affecting the demographics that are most valuable to advertisers.


I don't disagree, but the counter is also true: if developers didn't care about front-end performance, there wouldn't be books on the topic.


I have a 2015 Toshiba Chromebook - 4GB RAM, 1.7GHz Celeron I think - and while it's fine for a lot of things, browsing the 'modern' web on it is so painful.

Sites cause the UI to hang for seconds at a time. Switching between tabs can take an age. Browse to some technology review site and you sit and wait for 10 seconds while the JS slowly loads a "please sign up for our newsletter" popover.


Funny how these things vary. I've got a 2013 macbook, 4GB RAM, 1.3GHz processor and everything is snappy. Good SSD and OSX I guess.


Intel's naming is confusing, but I think the Celeron is an Atom-based N2840, roughly 30-50% of the performance of that MacBook.


It happened one step at a time.

We wanted multi-tasking OSes, so that we could start one program without having to exit the previous one first. That made the OS a lot bigger.

Eventually, we got web browsers. Then Netscape added caching, and browsers got faster and less frustrating, but also bigger. And then they added multiple tabs, and that was more convenient, but it took more memory.

And they kept adding media types... and unicode support... and...

We used to write code in text editors. Well, a good IDE takes a lot more space, but it's easier to use.

In short: We kept finding things that the computer could do for us, so that we wouldn't have to do them ourselves, or do it more conveniently, or do something at all that we couldn't do before. The price of that was that we're dragging around the code to do all those things. By now, it's a rather large amount to drag around...


This is all very true, but I feel like we (as users) haven't really gained proportionally compared to the increase in computing power and storage.

For example IDEs: Visual Studio in 2017 is certainly better than Visual Studio in 1997, but do those advancements really justify the exponential growth in hardware requirements?

How'd we get so little usable functionality increase for such a massive increase in size/complexity?


Yes, they do. Computers are cheap; humans are expensive. In an average application (including IDEs), most CPU time is spent idling waiting for user input. Most memory is idle unless you have something like superfetch caching stuff ahead of time for you. If the new features make users faster, they're decidedly worth it.

Fundamentally, modern systems are much more usable - in every sense of the word. Modern IDEs are more accessible to screen readers, more localizable to foreign languages including things like right-to-left languages, do more automatic syntax highlighting, error checking, test running, and other things that save developer cycles, and on and on. Each of these makes the program considerably more efficient.


> Fundamentally, modern systems are much more usable...

I just don't buy this premise. _Some_ systems are more usable. But take a look at Microsoft Excel from circa 1995-2000, which came with a big thick textbook of documentation, explaining what every single menu item did. Every single menu item was written in natural language and it told you what it would do. Professionals used (and still use) Excel, crafting workflows around the organization of the UI. It's a tool that is used by actual people to accomplish actual tasks.

Now look at Google Sheets. It has about 1/10th the functionality of Microsoft Excel (hell--it can't even do proper scatter plots with multiple data series) and its UI is an undiscoverable piece of crap because half of it is in _iconese_--a strange language of symbols that are not standardized across applications, confusing and ironically archaic depictions of telephones, arrows, sheets of paper, floppy disks. The program is written in a pictographic language that must be deciphered before being used. Software doesn't even speak our natural languages anymore...we have to learn _their_ language first...and every application has its own and that language changes every six months. Worse, all those funky pictograms are buttons that perform irrevocable actions. They don't even explain what they did or how to undo it...it makes users less likely to explore by experimentation.

...and there is no manual, there is no documentation. It will be all different in six months, with less functionality and bigger--different!--icons...takes more memory.

We are regressing.

Hey but those animations are spiffy.

/rant


Google Sheets can be shared via a URL and anyone with a web browser can access it instantly. No need for Windows, no need for an Excel license, no need for a desktop computer. You can collaborate in real-time too.


So you can share and collaborate...but the functionality to actually _create_, that's hopelessly oversimplified and undocumented. I think you are missing my fundamental point.


My point is that for many, the creation functionality is not bad enough to offset the availability and collaboration benefits.


I see this as a disappointing race to the bottom for the quality of artifacts that we produce and the tools with which we produce them.


Personally I get grumpy when modern IDEs make me wait on them. I don't care that much about CPU/RAM usage until the computer starts wasting my time while I'm trying to work. That's sadly very common these days even on relatively beefy hardware.

So I agree with you that computer time is cheap and user time is not, but I think we could optimise better for user time.


My exact thoughts. What the heck is VS doing? No IDE should require the amount of resources it does to where it is almost unusable on any computer > 5 years old (assuming said computer wasn't overpowered to start with).


Would you get twice as much from your car if you install 2x more powerful engine? Would a 2x more powerful weapon win you 2x more wars? Would 2x better medicine technologies allow you to live 2x longer?


Exactly that : diminishing returns are everywhere.


> For example IDEs: Visual Studio in 2017 is certainly better than Visual Studio in 1997

Is it? 97 might be a bit extreme, but the other day I opened an old project which was still on VS2010 and I was struck by how much faster 2010 was while still having nearly every VS feature that I wanted. They're slowly porting VS to .NET and paying a huge performance penalty for that.


And VS2010 is itself _much_ slower than previous versions.


That's the type of example I've come across all too frequently. Software that's 5-10 years old, has all the same functionality, uses a fraction of the resources, and is often "better" in several ways.

Older versions of Android Facebook seem massively faster and use a fraction of the RAM while providing (nearly?) the same functions and features.


Excel 2000 is lightning quick compared to the modern versions (well, 2013 is as modern as I got)


Try VS 2008 for speed. And it had distinguishable icons!


I think this is the tragedy of the commons. Computers are getting better. Each team says in its heart "We can be a bit wasteful while providing feature X." When aggregated the whole thing is slower.


I think your comment is closest to reality. I believe it was incremental and deliberate, but the key piece that is missing from your assessment is that we (the royal "we") haven't given a lot of attention to correcting any poor choices. I think you are right when you say that hardware improvements helped encourage devs to give more tasks to the machine like encoding, caching, etc. However, it also became less important to revisit the underlying pieces on top of which we added these new features. It eventually became this dirty snowball rolling downhill that was built from layers of whatever was in the way as well as anything we could throw at it.

For example, the web might be filled with redundant and bloated software, but the real problem is that the browser has become the nexus of virtualization, isolation, security, etc. for almost everyone, from the casual user to hardcore admins, and for every piece of software from frameworks/utilities to full-blown SPAs. It's like we have all reached a common understanding of what comprises a "good" application, but then we lazily decided to just implement these things inside another app. I mean, WebAssembly is great and all, but is it wise?

I don't think it's about IPC or RAM or (n+1) framework layers that each include "efficient list functions". I think it about the incremental, deliberate, and fallacy-laden decisions that assign more value to "new" than to "improved".


I wonder how much of it is due just to assets. I keep pointing out to my colleagues that a single background image displayed on a high-res iPhone occupies more RAM than the total data of their app, by a factor of 100 at minimum. The same goes for app size on disk: it's most often mainly assets.

So just loading from disk, decompressing, putting it in RAM, then moving it around and applying visual effects is probably half of the reason everything is slow. Those "bloat" graphs should also mention screen resolution and color depth.


My first thought too.

I might have done the same on my 2000 era machine as I do today (browse the web, listen to music, program, maybe edit a picture or two) but I'll be damned if I had to do all this in Windows 98 with a resolution of 800x640 again!


640x480 or 800x600 and I think you mean the latter. 640x480 was more a Windows 95 resolution.

We could waste some words here on how display resolution didn't keep up, due to Windows being crap and people being bamboozled into 1366x768 being "HD" or "HD Ready". 800x600 vs 1366x768 - that's only about double the pixels and barely any more vertical resolution.


Win95 would default to 640x480 at some low color depth if there were nothing but vesa drivers to work with.

Made for a fun first hour after a fresh install.

Back then I had a habit of building up a cache of drivers and common software on a drive, later dumped to CDs at irregular intervals, just to lessen the rebuild time.

Funny thing is that I kinda miss those days.

Back then I could pull the drive from a computer, cram it into another, and more often than not 9x would boot. It might give me that VESA-only resolution etc, but it would boot.

These days it feels like it would just as well throw a hissy fit ascii screen about missing drivers, or something about the license not being verifiable.

I thought maybe Linux would rekindle this, but sadly it seems the DE devs are driving even it head first into the pavement. This so that they can use the GPU to draw all their bling bling widgets and paper over the kernel output with a HD logo...


TFA doesn't even load without javascript. I use noscript and this HTML file won't display anything unless I allow JS to load from a different domain. Perhaps the author should practice what he preaches. Then again, I don't really know what he preaches because I can't be bothered to allow the JS to load.


Didn't really expect to hit HN front page with my random rant :)

In any case, glad I was hosted on Google infrastructure but embarrassed by the bloated, default Blogger template.

Interested in suggestions for simple, lightweight alternative.

Medium, Square, WordPress, etc. all seem to suffer from similar insane bloat.


Then build it yourself. It's not that hard to create your own simple, static webpage.


It's a mega bloated site with a loading spinner that eventually opens a list of posts and then does a painfully slow animation opening the actual post in a modal window over the list. I thought for sure there'd be some nod to the site being a prime example in the article, but there wasn't.

Edit: There was kind of a nod, but more about the Blogger editor than the reader view.


Remember, things are this way because they're "better for users".


Even when I temporarily allowed the scripts, all I got was some spinning gears. Yes, software has indeed gone off the rails.


Back in the late 1960s, see

https://en.wikipedia.org/wiki/The_Mythical_Man-Month

Note NASA controlled the moon missions with an IBM 360/95 that had something like 5 MB of RAM, 1GB of disk, and about 6 million instructions per second.

Today an internet-controlled light switch runs Linux and has vastly larger specifications. Connecting to WiFi is more complex than sending astronauts to the moon!


> Note NASA controlled the moon missions with an IBM 360/95 that had something like 5 MB of RAM, 1GB of disk, and about 6 million instructions per second.

And an army of technicians available around the clock to keep it working. Whereas your IoT light 'just works' and isn't expected to require any support at all.


Actually, the 360 was a big leap forward in reliability as it was second-generation transistorized, made with automated manufacturing, etc.

As for the high complexity of IoT things, I don't think the extra complexity helps reliability, security, etc.


There are two possible conclusions to that: the first is that we suck at programming. The second is that what we think is easier to do (today's programs vs. controlling a moon mission) actually is not.

Maybe a lot of what those computers were doing was just raw computation, much like a DSP, to control trajectories and nothing more. Something like a big calculator, with some networking capabilities to receive and send a few sets of instructions.


This is what I get after starting Chromium, before opening any website. Does this mean it is using 178064 + 175208 + 92132 + 90492 + 87596 = 623492 KB = 623 MB right off the bat without having loaded any HTML?

    NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND        

     0  979612 178064 108084 S   0,0  1,1   0:01.38 chromium-browse
     0 3612044 175208 128552 S   0,0  1,1   0:01.83 chromium-browse
     0 1372444  92132  67604 S   0,0  0,6   0:00.27 chromium-browse
     0 1380328  90492  58860 S   0,0  0,6   0:00.62 chromium-browse
     0  457928  87596  75252 S   0,0  0,5   0:00.67 chromium-browse


For Firefox, I have 378.3 MiB. Better, but much more than I expected:

  USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
  user      9309 59.0  4.6 2020900 282568 pts/2  Sl   09:46   0:01 /usr/lib/firefox/firefox
  user      9369  8.0  1.7 1897024 104800 pts/2  Sl   09:46   0:00 /usr/lib/firefox/firefox -contentproc -...
However, when I look at the "available" column in free(1), it looks much better. Only a 170 MiB increase when Firefox is started:

  $ free
                total        used        free      shared  buff/cache   available
  Mem:        6115260      598900     4324616       41748     1191744     5205960
  Swap:             0           0           0
  $ free
                total        used        free      shared  buff/cache   available
  Mem:        6115260      781304     4116456       68908     1217500     5032360
  Swap:             0           0           0


Safari (Technology Preview) uses ~50MB with only the "favourites" tab open (default):

    PID    COMMAND      %CPU TIME     #TH   #WQ  #PORT MEM    PURG
    44688  Safari Techn 0.0  00:03.41 10    3    311   51M    2648K
Safari (original) uses ~300-400MB with 20-100 tabs open. I think they use a different way of storing tabs which are not on your screen at that moment.

(I didn't check Safari (original) for nothing-open-and-idle RAM usage, simply because I didn't want to have to reload all my tabs, so I booted Safari (Technology Preview) to quickly test it.)


Software went off the rails when Java decided to store all nested nonprimitive objects on the heap, and Sun thought this was an acceptable choice in a language marketed at general purpose app development.


Typically as developers we let the software bloat until it hurts us directly. Since we are always using more and more powerful machines we are always going to let the bloat continue. If we get called on it we've got the slam-dunk in our back pocket: "Premature optimization is the root of all evil."


I'll make three points.

The article is focused on machine efficiency. Human efficiency also matters. If you want to save cycles and bytes then use assembly language. Geez, we had the assembler vs fortran argument back in the 70's. Probably even in the 60's but I'm not that old. Guess what? High level languages won.

Next.

Hey, Notepad is much smaller, leaner, and faster than Word! Okay. But Word has a lot of capabilities that Notepad does not. So is it "bloat" or is it "features" ? Guess what, Windows is a lot bigger and slower than DOS.

Imagine this argument from a business person:

If I can get to market six months sooner than my competitor for only three times the amount of RAM and two times the amount of CPU horsepower -- IT IS WORTH IT! These items cost very little. You can't buy back the market advantage later. So of course I'll use exotic high-level languages with GC. I'm not trying to optimize machine cycles and bytes, I'm trying to optimize dollars.


> If you want to save cycles and bytes then use assembly language ... High level languages won.

This is a ridiculous straw man. It is completely possible to write efficient software in high level languages, and no one is suggesting people extensively use assembly language. Actually, in many cases it is very difficult to write assembly that beats what's generated by an optimizing compiler anyway.


Considering how many people are preaching "do not use electron, write true native apps", the argument is the same (use lower level tools) and reply to it also stays the same:

Dev time is the most expensive resource.


Ok, fair enough, but have a look at yourself. Your blog is static content not served statically, that requires javascript.


On that point:

> The Blogger tab I have open to write this post is currently using over 500MB of RAM in Chrome.

If that is so, why post it and have it use a similar amount of RAM on other people's machines? If they know sooo much about software, why even use Blogger, a site whose heyday was 15 years ago?


Look at any analog - real estate property, an army of people, a kitchen. Anything left un-groomed during high use becomes unusable. Grooming software needs to be either someone's full-time job (real-life custodians), or a big constantly-enforced chunk of everyone's job (military grooming standards).

At a big company you could argue that requirements writers need to be technical, but once you've done a startup you know that you're in an army and not in a fixed building that needs to be architected once. The customer and your enemies are always on the move, and you have to keep moving onto new terrain as the money and your business keep moving there. Build with a code of conduct for your developers, and allow the code base to evolve with modules, or some other approach.


I love bloated software, I wish opening facebook would require 128GB RAM.

It makes hardware cheaper for everyone, even nerds who use terminal and lightweight WMs ;-)


Exactly the same for Snapchat and Mobile Internet speeds, I love those people taking selfies constantly.


To take a serious look at this scenario: being abroad at the moment without a foreign SIM makes me really frustrated at how greedily and wastefully mobile apps use mobile data. Google is pretty bad about this with their array of apps, and I must admit I'm surprised that there isn't a function built into Android and iOS that triggers when the phone detects that it's on roaming data - the OS seems to be able to tell at a much deeper level than the apps run at, so to me it seems like an easy path to have a true low-bandwidth mode kick in, or at least offer to, when the phone reports a roaming condition.

I understand this is edge case material though so such ideas need not apply, but it seems like a fairly easy to implement idea that is one of those "oh that's really nice" features customers stumble across.


Software was never on the rails. This whole industry has, AFAICT, been flying by the seat of its pants from Day 1 with whatever hacks get today's job done. For a while, that meant hacks to save space and time. Now... it doesn't.


I suspect this is the inevitable price for the expanding scale of the software industry. Perhaps the problem isn't about depth, as in what a given piece of software can do, as much as breadth, as in how much software there is now and how many different things it can do collectively.

One cost of the increasing breadth in the industry is that if we want to have a lot more software then obviously it can't all be written by the same relatively small pool of expert programmers who wrote software in the early days. With more software being written by people with less skill and experience, many wasteful practices can creep in. That can include technical flaws, like using grossly inefficient data structures and algorithms. It can also include poor work practices, such as not habitually profiling before and after (possibly) optimising or lacking awareness of the cost of taking on technical debt.

Another cost of the increasing diversity in the software world is that to keep all that different software working together, we see ever more generalisations and layers of abstractions. We build whole networking stacks and hardware abstraction libraries, which in turn work with standardised protocols and architectures, when in days gone by we might have just invented a one-off communications protocol or coded directly to some specific hardware device's registers or memory-mapped data or ioctl interfaces.

There is surely an element of deliberately trading off run-time efficiency against ease of development, because we can afford to given more powerful hardware and because the consequences of easier development can be useful things like software being available earlier. However, just going by my own experience, I suspect this might be less of a contributory factor than the ones above, where the trade-off is more of an accident than a conscious decision.


Besides all the other things people have mentioned, there are also new requirements for UIs. People complain if their drag movement with a finger doesn't align perfectly with a nice animation on the screen on a cheap phone. Those things did not exist 10 or 20 years ago. Interfaces were simpler and easier to build, and users were happy that they worked.

But I do agree that software could be a bit faster nowadays.


A: Around the time that a simple blog post required JS from four separate domains in order to display anything other than a blank page.


It's kind of because we can.

I'm currently on a Mac and a PC. The Mac's CPU is at maybe 5% when I'm doing most things.

The PC is at 1%.

I'm using half the PC's memory and 3/4 of the Mac's.

These are not up-to-date, high-memory or high-performance machines.

Have a look at your own machine. Surely for most of us it's the same.

And that memory usage is mostly in one application - Chrome. The only bloat that hurts a bit now is web page bloat. And on a good connection this isn't an issue either.

It's also different on phones where application size and page size seems to matter more.


Chrome regularly uses about 10% of my RAM, but honestly Atom was the biggest offender, which is why I switched to Visual Studio. Also, games can be a huge CPU and RAM eater but it is almost always completely necessary.


I do often wonder wth is going on with bloat. I can understand a videogame which has huge amounts of media might be large. But business apps?! It doesn't make sense.


I think it makes absolute sense (in overall terms): hardware capability at a fixed price scaled exponentially while the cost of producing software went up mostly linearly. Or, to put it another way:

I have a 5 minute mp3 that takes more space than my first hard drive had and some icons on my desktop that take more space than my first computer had RAM.

Whether that will continue to hold I don't know, mobile has certainly pushed certain parts back towards caring about efficiency (though more because it impacts battery life).

If you remove a constraint people stop caring about that constraint.

The old school geek in me laments it sometimes but spending twice as long on developing something to save half the memory when the development time costs thousands of dollars and twice as much RAM costs a couple of hundred seems..unwise.


> Whether that will continue to hold I don't know, mobile has certainly pushed certain parts back towards caring about efficiency (though more because it impacts battery life).

Has it? Try using a low end phone with something like 8GB of internal storage, mobile apps are ridiculously slow and bloated. It's to the point where I haven't looked in the play store for years because I simply don't have enough room on my phone. That means the dev community has screwed itself over with wastefulness.


> mobile apps are ridiculously slow and bloated

When you look at the Android SDK you have to wonder if it's even possible to have a different outcome.


Given that my phone is broadly capable of doing the tasks of my laptop from 5 years ago and that it does it with half the RAM, a fraction of the processing power and a power budget of 3500mah for the entire day (something my laptop from 5 years ago would blow through in minutes if not seconds..) yes?


I think in some cases "improved" efficiency for mobile has actually harmed subjective performance, because it encourages trading latency for throughput. I care more about how fast software feels than how fast it benchmarks. Worst-case latency matters. I'd happily accept a thicker and heavier device if it meant I never had to wait for it.


Yes, I see your point. But 1GB installs?! I don't think you get as much waste on Linux.


Perhaps not as much, but still quite a lot - though I simply don't care anymore.

I copied 153GB of data onto my laptop earlier over my fiber connection, because the project I'm working on needs it and I couldn't be bothered to go find the external drive with it on in the storage closet in the 2nd bedroom.

I can buy 500GB of really fast m2 SSD for 153 quid (approximately 30p per GB) or terabytes of storage for 153 quid.

I got a new ThinkPad a few weeks ago. I specced it with 16GB in one slot because I fully intend to upgrade to 32GB fairly soon - with virtualisation I can bump up against 16GB. Let that sink in: my time is so precious (to me on my machines, and to my employer on theirs) that I'm happy to virtualise entire operating systems and allocate billions of bytes of memory and storage to save some of it.

Hardware is absurdly cheap and I can't really see that changing for a while, from a systemic point of view it's ridiculously more efficient to spend a lot of money in a few places (Intel, Samsung, IBM etc) than to spend a lot of money in every place.

Every time Intel puts out a processor that is 10% faster at the same price everyone elses software just got 10% faster for free* (*where free = the price of the new processor).

There just isn't a market incentive (financial or otherwise) to roll back bloat; if there were, it would be a competitive advantage and everyone would be doing it. That they aren't shows that it isn't.

I suspect a lot of the reason why Linux installs stayed so relatively lean was that for a long time most people had CD burners, not DVD burners. Once those were common, install ISOs blew right past 650MB - I think Fedora 26's was 1.3GB, but I didn't really pay much attention.


So, even without Moore's law, it's better to improve percentage efficiencies at a few highly leveraged points rather than at every point. A percentage improvement in the processor gives an improvement for all customers, and for all layers of their stack, all parts of their software, every line of code: OS, DB, language, stdlibs, third-party libraries, in-house libraries, yesterday's code, code you're writing right now. I unpack that so much to show how much benefit it gives, and that perhaps even a tiny percentage benefit... 1%? 0.1%? 0.001%? ... might still be a win...


Latest Intel CPUs got 1-3% faster, not 10%.

In any case, that is irrelevant, as 14nm and IPC are pretty much maxed out, and from this point on, this is it. Unless CPUs move away from silicon, this is as fast as it gets (save for throwing more cores at the problem).


Yes and no, I built a Ryzen 1700 desktop for work, that processor cost about the same as the K i5 I bought a few years ago (adjusting for the pound cratering).

On multithreaded workloads that I care about its not just a little faster, it's a lot faster.

There is still a lot of fruit to be had in that direction I think and that's before you consider the other areas left for performance improvement.

Of course for some workloads/people they are already butting up against a different cost/benefit and they do care about eking every cycle out of the processor, but for me it hardly matters.

My desktop at work runs a development version of our main system faster under vagrant than it runs in production since I've got more RAM and a machine with twice as many cores.

It's a strange market when that happens..


:) I'm in the same boat. Built a 1700x with 32gb for my home workstation.

Feels lot faster than the 4670k it replaced.

I suspect most of that is the ddr4 + another 4 cores + nvme , rather than IPC gains.

I like the direction the CPUs moved, although after 7nm in few years, they'll have to redesign a lot more than CPUs to get anything substantial out of it.

We'll see, exciting times otherwise.


Visual Studio is an IDE that takes 19 hours to install.


It used to take many hours to install if you were installing it from a CD, DVD, or spinny HDD onto a spinny HDD. The last time I had to install it, I did it from and to an SSD, and including patches and extensions it took about half an hour.

The main culprit was extracting and copying all of the small source files that come with it.


That makes sense. It still seems odd that every other tool I've used takes a small fraction of that.


Installing big c++ libraries also takes a long time sometimes, for example when decompressing Boost onto your drive. Lots of little files that need to be written.


A literal 19 hours? Or just "a really long time"?


I'm being a little bit ridiculous, but most people I know run it overnight. I did an install recently and several hours was correct for the IDE + data science pack. Put another way, I could install Haskell, Python, Ruby, Perl, Free Pascal, Dyalog APL, Nim, Elixir, Julia, and a few more with time to spare. Granted, the data science pack in VS comes with R and Anaconda Python, but still it is insanely big by almost any standard. I think at that point it's time to take a step back and evaluate modern programming practices (after I uninstall a bunch of junk ;)).

Could a .NET expert break down for me why VS takes the size and memory it does? I know why VS Code needs 150 MB of RAM (JavaScript), but VS should be written in C++, right?


VS is written in WPF since VS 2010. The last native version was 2008 which was much faster.


So big slowdown from C++ to C#?


Several hours for sure. Sometimes I wonder how I would go about building an installer that slow. I am not sure how to do that.


I guess it depends on your hardware/internet. Last time I tried it it was 4+ hours on my non-devbox (aka <16G RAM, etc) laptop.


Had a mishap last week and reinstalled VS on my workstation afterwards. Took about 15 minutes from a pre-downloaded image.


Maybe they've improved it since I last used it (was two or three years ago now). But you say reinstalled? Is it possible it only reinstalled the VS-proper components and not all of the dependencies that the first-time install pulls in?


The latest version is much better for install times as long as you have fast internet.


Been there


Look at the dependency list for Doxygen. Last I checked, it pulled in all of LaTeX whether you want it or not.


To give one example, Xilinx ISE is 18GB installed. Seemingly everything carries around its own copy of everything. The output of a basic fdupes -r on my /opt/Xilinx is 114976 lines.


Software is written better.

In the past, computational complexity was lowered by arbitrary size limits. e.g. if you had a O(n^2) algorithm you might cap n at 10 and now you have a O(1) algorithm. Job done.

Now, computational complexity is lowered by aggressive use of indexing, so you might lower your O(n^2) algorithm by putting a hash table in somewhere, and now you have an O(n) algorithm. Job also done.

The practice of putting arbitrary size limits on everything has almost died out as a result.
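
A toy illustration of that shift (hypothetical C, nobody's real code): the old approach to duplicate detection compares every pair, which was fine when n was capped at 10; the new approach drops the values into even a crude hash set and handles whatever n the user brings.

    #include <stdbool.h>
    #include <stdlib.h>

    /* Old style: O(n^2) pairwise comparison (fine when n was capped at 10). */
    bool has_duplicate_slow(const int *a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            for (size_t j = i + 1; j < n; j++)
                if (a[i] == a[j])
                    return true;
        return false;
    }

    /* New style: O(n) on average, via a crude open-addressing hash set. */
    bool has_duplicate_fast(const int *a, size_t n)
    {
        size_t cap = 2;
        while (cap < 2 * n + 1)          /* keep the load factor below ~0.5 */
            cap *= 2;
        int *slot = malloc(cap * sizeof *slot);
        bool *used = calloc(cap, sizeof *used);
        bool found = false;

        for (size_t i = 0; i < n && !found; i++) {
            size_t h = ((size_t)a[i] * 2654435761u) & (cap - 1);
            while (used[h]) {
                if (slot[h] == a[i]) { found = true; break; }
                h = (h + 1) & (cap - 1);  /* linear probing */
            }
            if (!found) { used[h] = true; slot[h] = a[i]; }
        }
        free(slot);
        free(used);
        return found;
    }

The fast version spends memory (the table) to buy time, which is pretty much the trade the rest of this thread is arguing about.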


This article is a waste of bandwidth and server storage space; the entire content is just an elaboration of the title, with not even a single consideration of what the cause is.

There are also a few graphs to make the author feel like he is a scientific researcher writing a paper instead of doing what he is actually doing, which is posting a question that quite frankly could, with a little extra thought, fit in a tweet.


At least it's promoting some decent discussion in this thread?


Thanks :) That's what I was hoping for.

It's a serious problem with no clear solution.


I've been teaching myself C, with the intention of learning how to write code for embedded applications where efficiency is key.

It seems to me that new languages prioritize quick iteration over efficient machine operation. The easier a language is to write and interpret, the faster an outfit can churn out an application. The exponential growth in computing power has been sufficient to absorb these collective "shortcuts". Thus, it is not being taken advantage of properly.

The CS/engineering fields have become more of a gold rush than thoughtful trades. Boot camp management seeks profit, and trainees seek to quickly fill high-paying jobs. It all culminates in a situation where code doesn't need to be clever and thought out - just created A.S.A.P. to handle whatever trending niche market or "low-hanging fruit" there is. The work these products do gets handled server side, where electricity for cooling is a fundamental expense.


Rust is new. It's made with performance in mind.


I want to second Rust. I've been writing C for 5+ years and finally decided to give Rust a serious try... and I love it. I never ever thought I would say this, but I don't see myself going back to C (with minor exceptions)


I recall being excited the first time I learnt C. But after I learnt other programming languages, the defects of C started to appear clearly before my eyes. C++ is even worse (in terms of things that look hastily designed).

Rust is comparably more coherent and elegantly designed.


That was pretty much my reaction too, as someone whose first programming language was C …


Thanks Jason, enjoyed your thought-provoking post. I'm reminded of Parkinson's law that "work expands so as to fill the time available for its completion". It's as if the software bloats up to fill up the available hardware capacity.

From a fundamental level though, my hunch would be how modern development takes modularization / abstraction to a type of extreme. Imagine a popular Node.js module and how many dependencies it has and how many dependencies its dependencies have.

It's not hard to imagine a lot more computing power is required to handle this. But that's ok to decision makers, computing power is cheap. Saving developers time by using modularized developments brings more cost/profit benefits, like what Dan said.

PS: the link on Visual Studio. Oh wow, what fond nostalgic memories it brings me :)


I would guess at least some of the issue is that most users don't show a preference for faster, smaller software, especially because it doesn't benefit them if a product uses less RAM and CPU. Displaying a UI is better than a text-mode interface. Icons that don't look jaggy on the screen are nicer than ones that do. Anti-aliasing, gradients and drop shadows make things look nice. Drag and drop that has a pretty animation is nicer than drag and drop without one. It is the same reason people choose a BMW when a Corolla does pretty much the same job. People pay the cost they want to live with. In the trade-off between functional-and-thrifty and pretty-and-feature-packed, the latter nearly always wins.


Looking pretty is good for marketing.

Users are showing "a preference for faster smaller software" - the author of that blog is one of them, and I'm another. But even the best software has to be passed to marketers before it can reach your hands. There are some small, efficient programs out there, but they're overlooked because they don't pay.


"The Blogger tab I have open to write this post is currently using over 500MB of RAM in Chrome."

So, why are you using Blogger instead of emacs/vi/notepad to write a static HTML page?

Apparently the author seems to think that all that bloat DOES give him something, no?


Software does so much more today as a baseline requirement. Think about all that internationalization, high DPI graphics, security, nice APIs and modularity: these aspects of software have never been at such a high level before.


I look at articles like this and the comment responses to it and I can't help but think everyone is like the old man grumbling how "things used to be built to last!"

Have people really forgotten their computing history so soon?

Let's roll back the clock. Windows 95 ran for a total of 10 hours before blue screening. Windows ME ran for -2 minutes before blue screening and deleting your dog.

Roll back further. IBM was writing software not for you. Not for your neighbor. They were writing software for wealthy businesses. Bespoke software. Software and hardware that cost more than you make in a lifetime.

Software, today, represents responses to those two historical artifacts.

1) At some point software became complex enough that we discovered something we didn't know before ... programmers are really bad at memory management. Concurrently, we also realized that memory management is really important. Without it, applications and operating systems crash.

And yes, this point was hit roughly around Windows 95. You really couldn't use Windows 95 for more than a day without something crashing.

So the programming ecosystem responded. Slowly and surely we invented solutions. Garbage collected languages and languages without manual memory management. Java, .NET, Python, etc. Frameworks, layers of abstractions, etc.

Now fast forward to today. I'm absolutely shocked when an app crashes these days. Even games have become more stable. I see on average maybe 1 or 2 crashes in any particular game, through my _entire_ playthroughs. And usually, the crashes are hardware bugs. I haven't seen a Firefox crash in ... months.

This is leaps and bounds better. Our solutions worked.

The caveat, of course, is that these new tools use more memory and more CPU. They have to. But they solved the problem they were built to solve.

2) In the "good old days" software was bespoke. It was sold strictly B2B. For a good long while after that it remained a niche profession. Does no one remember just how expensive software and hardware used to be? And people scoff at $600 phones...

But software exploded. Now everyone has a computer and software is as ubiquitous as water.

With that explosion came two things. Software got cheaper. A _lot_ cheaper. And software filled every niche imaginable.

When software was bespoke, you could get the best of the best to work on it. Picassos and Platos. But those days are long gone. Picasso isn't going to make Snapchat clones.

We needed a way to allow mere mortals to develop software. So we created solutions: Java, JavaScript, Python, .NET, Ruby, etc. They all sought to make programming easier and broaden the pool of people capable of writing software.

And just like before, these tools worked. Software is cheap and plentiful.

We can bemoan the fact that Slack isn't a work of Picasso. But who wants to pay $1 million per seat for Slack? Instead, Slack is free in exchange for the sacrifice of 4GB of RAM.

The lesson here is two fold. Software today is better than it ever was, and it will continue to get better. We've learned a lot and we've solved a lot of problems. Battery constraints are forcing the next evolution. I would never have dreamed of a world where my laptop lives for 10 hours off battery, but here we are. I can't wait to see what the next decade holds!


As a software developer and manager of software development projects, I wouldn't say that software quality has increased significantly since I started working professionally in the industry (1997).

As mentioned elsewhere, web apps frequently "crash" (ie. fatal JS error) and have strange behavior.

And those bugs are often platform/browser/version-specific so very difficult to fix.


> And yes, this point was hit roughly around Windows 95. You really couldn't use Windows 95 for more than a day without something crashing.

I still have the same experience with Win10. Hardly a week without a BSOD. On the same machine linux flies and flies.

> Even games have become more stable. ... I haven't seen a Firefox crash in ... months.

Yet Firefox and most AAA games are written in C++.


> Yet Firefox and most AAA games are written in C++.

C++ is a very different language from what it was.


From the code I've seen of it, Firefox's codebase is still pretty much in the 90s mindset. I mean, I just went to the GitHub repo, opened a random file, and look at this: https://github.com/mozilla/gecko-dev/blob/master/widget/Text... or this: https://github.com/mozilla/gecko-dev/blob/master/dom/base/At...

raw pointers everywhere, macros, etc etc

    nsresult Attr::Clone(mozilla::dom::NodeInfo *aNodeInfo, nsINode **aResult, bool aPreallocateChildren) const
    {
        nsAutoString value;
        const_cast<Attr*>(this)->GetValue(value);
        ...
makes my eyes bleed. The references to Netscape everywhere are funny, too.


Sure, but a project like Firefox has a lot of the old C++ too.


You've gone back down the wrong branch of the tree. Look at what an Amiga could do with 512k RAM and a 7 Mhz processor.


I don't know. I'm sat here creating UML diagrams in a JS-powered diagram editor built in to a wiki. It works, and it's great to be able to create diagrams straight in the editor.

But it's buggy as anything. Selection is unreliable, redraws go wrong. It's lacking lots of tools (align, size group to smallest/largest etc) that are standard in most vector tools. It can't save undo history between sessions.

I guess that's better than my Omnigraffle workflow of 5 years ago? It's certainly cheaper.


I don't remember NT4 crashing around that time. And my Win95 set-up would crash sometimes when booting an audio app or game with a faulty driver. But again I would shut down the computer every day. It must have been around the same time Linux geeks started posting their uptimes.


I remember installing NT4 on my desktop PC in ... 1997 or '98. Within 24 hours, it wouldn't boot any more. I formatted the hard drive and installed it again, and within 24 more hours, it once more refused to boot.

Of course, I did not have much of a clue what I was doing back then.


Windows NT4 was "peak stability" in my experience.

Although some have said that Windows 2000 was more stable, I found the fusion of 95/98 into the NT kernel made it less so.


"Ruby, etc." - are you saying that Ruby On Rails made software "go off the rails"?


Your comment is spot on. Much in life happens because of trade-offs on available resources over time.


Is this a projection of "the next 90%" issue?

"The first 90% took 2 weeks to finish. The second 90% also took 2 weeks to finish (and now your 99% done). The next 90% also takes two weeks to finish (99.9% done)..." Reapplied to another resource..."The first 90% consumes 1GB of RAM. To solve the next 90% of the problem, takes 1GB of additional RAM...

If you continue this trend, the problems solved in the incremental steps maybe used fractionally less often, but are probably also more complex and required a greater resource investments. Our software does a lot more, but the later developed parts are usually used left often and are more complex. Talking to the one and only ship headed to the moon when you don't particularly care who hears you is less difficult than securely purchasing things online over a WiFi connection. At the user experience level its just "thing A talks to thing B" but the later case has also had to solve n-th 90% issues of congestion and security and handshake and...
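A back-of-the-envelope version of that model (toy numbers, just to show the shape of the curve):

    #include <cstdio>

    int main() {
        // Toy model: each extra "90% of what's left" costs another GB of RAM,
        // but covers a geometrically shrinking slice of use cases.
        double covered = 0.0;
        for (int gb = 1; gb <= 5; ++gb) {
            covered += (1.0 - covered) * 0.9;   // solve 90% of what remains
            std::printf("after %d GB: %.3f%% of cases handled\n", gb, covered * 100.0);
        }
        // Prints 90%, 99%, 99.9%, 99.99%, 99.999% -- resource use grows
        // linearly while the visible benefit of each step shrinks.
    }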

That being said, we rarely go back and see which parts of the earlier iterations are now based on false assumptions. So there is probably a fair amount of accumulated cruft, with no clear detector for what is cruft and what is essential.


> There have been a few attempts at a Software Minimalism movement, but they seemingly haven't gained much traction.

For those interested in exploring a minimalist approach, it's worth checking out http://suckless.org/philosophy and https://handmade.network/manifesto


Excuses, excuses. Most modern software developers have no clue how to write tight code.


This line caught my attention:

"And somehow our software is still clunky and slow."

It is? I haven't really noticed. It seems to me that my 9 year old desktop still runs most modern software reasonably well. My new laptop runs it much more quickly than machines from 15-20 years ago.

Granted, in an abstract sense, CPU usage and memory consumption have grown a bit, but the actual user experience is better.


It's all about cost and what users can accept. If a feature costs 10 times less, takes half the time to implement and is easier to maintain but produces the same result for the end user, why would software companies bother to optimise (if the end user does not care)?

Also, it seems to me that optimisation on the web is done for speed, to the detriment of memory usage.


Older software was written in languages where you had to manage the memory yourself (C, C++, Pascal, etc.), which requires more skilled developers (I'm told); simpler languages like JavaScript require less knowledge to write applications. The cost, though, is higher resource usage.


I think a potential solution would be to encapsulate every program in its own VM, with memory & CPU limits set.

Put some control back in the user's hands and prevent runaway bad behaviour from the apps.
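On Linux you can already get a rough version of this without full VMs; here's a minimal sketch using the POSIX rlimit API (limits and error handling simplified, CPU quotas would need cgroups on top):

    #include <sys/resource.h>
    #include <cstdio>

    // Cap this process's address space before running the memory-hungry part.
    // Allocations past the cap fail instead of dragging the machine into swap.
    bool CapMemory(rlim_t bytes) {
        rlimit lim{bytes, bytes};                // soft and hard limit
        return setrlimit(RLIMIT_AS, &lim) == 0;
    }

    int main() {
        if (!CapMemory(512ull * 1024 * 1024)) {  // 512MB budget, chosen arbitrarily
            std::perror("setrlimit");
            return 1;
        }
        // ... launch or run the untrusted/bloaty code here ...
    }

The point isn't this exact API; it's that the user (or the OS) sets the budget rather than the app assuming the whole machine is fair game.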


> The Blogger tab I have open to write this post is currently using over 500MB of RAM in Chrome. How is that even possible?

Mate, you picked /blogger/ as your preferred blogging platform. Blogger can't even deliver a page title without Javascript.

There's a whole world of much higher quality software out there, it may be that you've chosen not to use it.

This dude should try switching his blog to Pelican[1], it might be something of a revelation.

[1] https://blog.getpelican.com/


While application software has become more bloated, Microsoft has done a fairly good job of keeping Windows' necessary footprint down.

After Apple stopped supporting 32 bit x86 Macs years ago, I decided to put Windows 7 on my old 2006 era Core Duo 1.66ghz Mac Mini with 1.25GB of RAM. My parents still use it occasionally. It can still run an updated version of Chrome and Office - not at the same time of course - and it isn't painful.

My Plex Server is a 2008-era Core 2 Duo 2.66GHz Dell business laptop with 4GB of RAM.


Just a couple of days ago I was unable to put a Win10 ISO on a 4GB USB drive; it was something like 4.5-5GB.


I don't know how much drive space Windows 10 needs. But the Core 2 Duo with 4GB of RAM I referenced is running Windows 10 and can transcode at least 2 streams at once. It's running the Plex Server and Plex Connect - a Python (?) web server that intercepts requests from the 3rd-gen AppleTV to render a Plex client.


I would posit that the decrease in price of computational resources drives a couple of things: better quality (more resolution, more colors, etc.) and ease/cost of development.

You might see individual applications do the same things and consume more resources due to the layers and layers of abstraction, BUT be cheaper to build. As a consequence, many, many, many more applications are being built, for cheaper, reaching more people, "eating the world".


At the risk of being downflamed for not knowing something obvious, I see it as the gap between what standard runtime libraries provide (e.g. nice generic list, stack, and queue classes) and the algorithms devs implement on top of them. A part of me wants to see all of the grotesque implementations of "visit every relationship in this tree" that get invoked when I click the "add comment" button on this.
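To make that concrete, here's a hypothetical sketch (not from any real codebase) of the contrast I'm imagining: the "works on my test data" traversal versus the boring one, both built on the same nice generic containers.

    #include <string>
    #include <vector>

    struct Node {
        std::string name;
        std::vector<Node*> children;
    };

    // The "grotesque" version: uses a std::vector as a queue, so every pop
    // shifts the whole remaining buffer -- roughly O(n^2) for a wide tree.
    std::vector<std::string> CollectNamesSlow(const Node* root) {
        std::vector<std::string> out;
        std::vector<const Node*> queue{root};
        while (!queue.empty()) {
            const Node* n = queue.front();
            queue.erase(queue.begin());          // O(n) erase on every iteration
            out.push_back(n->name);
            for (const Node* c : n->children) queue.push_back(c);
        }
        return out;
    }

    // The boring version: one recursive pass, no quadratic behaviour.
    void CollectNamesFast(const Node* n, std::vector<std::string>& out) {
        out.push_back(n->name);
        for (const Node* c : n->children) CollectNamesFast(c, out);
    }

Both lean on the standard library; only one of them uses it sensibly.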


Not sure about when, but I think that restrictive software licensing combined with Moore's law guaranteed wasted computing power. Companies that sold both hardware and software had an incentive to soak up that power to encourage perpetual upgrading. It grew wasteful "software ecosystems". That or testers should fail more software that doesn't run quickly on low end hardware.


IMHO, the starting point was the requirement of 3D-accelerated video cards. The last 3D games I played (Alone in the Dark 1, Doom) did not require any 3D card. But now you can't display a line of text without a huge pile of crappy layers on top of an insanely complex GPU. 25 years ago, direct video-memory mapping and bitblt operations were sufficient.


I think the author demonstrates the problem well by using a Blogger theme that requires JavaScript to be turned on, instead of statically rendering the blog post (just a few paragraphs long) as HTML and serving that from a simple server.

He shouldn't ask "when", though, but who made it go off rails (sic!) and why.


This article is based on the misunderstanding that software complexity has NOT grown exponentially. It has.

Even though features might be added in a linear fashion (and I think that's not true either: the teams that build large applications have grown too), the complexity of the whole system might scale as the square of the number of features, or even exponentially. That is: if Word 2017 has 10 times the number of features of Word 6.0, we should not be surprised to see CPU and RAM requirements be 100 or 1000 times higher.
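The square-law arithmetic is easy to check with a toy calculation (assuming every pair of features can potentially interact, which is of course a simplification):

    #include <cstdio>

    // If every pair of features can interact, interactions grow as n*(n-1)/2.
    long long Pairs(long long n) { return n * (n - 1) / 2; }

    int main() {
        std::printf("10 features  -> %lld pairwise interactions\n", Pairs(10));   // 45
        std::printf("100 features -> %lld pairwise interactions\n", Pairs(100));  // 4950
        // 10x the features, ~110x the interactions to design, test and keep working.
    }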

Finally, just like a memory manager in an OS makes sure all memory is actually used, software should be using the computer's resources. If an average computer now is 4x3GHz, then a foreground application such as a photo editor should have features that at least sometimes put that hardware to good use. Otherwise there was no point in having that hardware to begin with. As software developers we should aim to scale our software to use the available hardware. We should not just let twice-as-fast hardware run our software twice as fast.


I completely disagree with you.

I don't know whether the "square of the number of features" claim is true, but we have certainly made our systems huge for no reason, besides perhaps making it more convenient for programmers.

I go to http://artelabonline.com/home/ every once in a while (a site I built in 2009, which will crash if many people click on this because the server has only 128MB of RAM). With the worst PHP one can think of, no CDN, etc., it's lightning fast. Websites nowadays are over 10MB, and most of that crap is JavaScript libraries and frameworks. Most apps I use daily are Electron apps, which contain a whole copy of a browser even though I already have one installed, and routinely take up 700MB of RAM to show me an app that is actually a web page.

I believe that the problem is that programmers are doing things for themselves, and not for the users, who are eventually the ones who will use the product. Electron is a great example. That, mixed with the idea that a little control panel for a client who has to check 10-20 orders for his store should be built with the same framework used by the most visited website in the world.


> completely disagree with you.

Well I was being perhaps a bit deliberately controversial

> for no reason—besides perhaps making it more convenient for programmers

That's a massive and excellent reason to make a system consume more resources. In fact, I think it's probably the main reason programs do! If new feature X can be done in 1 man-week and consume Y resources, it's entirely possible that it can be done in such a way that it consumes just 1/4 of those resources. That might take 10 man-weeks (and/or much better devs). So you don't, because users generally aren't willing to pay for that; only a few very niche products do this (game engines, embedded, ...). Basically, the economics of adding feature X were such that if it couldn't use a huge amount of resources, the buyer couldn't afford it. So it uses a lot of resources, because the buyer wanted the feature.

> I believe that the problem is that programmers are doing things for themselves, and not users—which is eventually who will use the product. Electron is a great example.

This is partly true. Electron (and similar) is an excellent example of the economics above. I also can't believe how someone can write a chat client that uses 1GB of RAM in any universe. But the economics were such that JS developers, unfortunately, were easy to come by, and a browser engine with a DOM (of all things) was the best way to get a cross-platform UI running with these developers. So the arrival of Slack was really just like any other feature. Someone wanted a cross-platform, shiny group-chat application, and they wanted it now and not in 10 years, and they wanted it to cost reasonably little. The answer to that, unfortunately, was "OK, but it'll cost you two CPU cores and a gig of RAM".

Was it just for the developers? Well, partly. But indirectly it's for the users, who weren't going to PAY for C++ devs to write slick and lean native versions of this software.

Bottom line: every user has a limited amount of money and a limited amount of computer resources. When given a choice, my experience is that users are much more willing to pay with more resources and less money than vice versa. The important thing to remember is that the two are connected: a program that takes fewer resources is more expensive.


Don't forget what I call "technological supremacy" bias:

https://news.ycombinator.com/item?id=14902333


Of course, and having system requirements is obviously a good thing.

While it's obviously a fact that devs have more powerful machines than their users, I'd argue that the tolerance for poor performance is MUCH lower among devs. When my IDE does a GC pause for 5 seconds, I go insane.

Yet I have users who use the software I write, and somehow insist on using enormous datasets in it (much larger than we would have imagined), meaning completely ridiculous delays (minutes if not more). When I ask if it doesn't drive them crazy, they say "no, because in the past this job took a week with pen and pencil" or "the old program didn't have this feature at all, so of course a minute's wait is OK in the new program!".

A developer would never say that about the shiny new feature in their IDE: "yeah, I like the new go-to-symbol even though it takes 15 seconds..." (even when it might still SAVE us time compared to doing it the old way). We'd complain like crazy.

So our technological supremacy bites us sometimes, too.


Software went off the rails approximately when JavaScript became mandatory to display text on a webpage.


Exactly. It's ironic that his post doesn't even show up until I allow JS loaded from multiple domains, just to show 4 pictures and a couple of paragraphs of text. Sites like his are a huge part of the problem.


The irony is not lost on me.

The unfortunate thing is that Blogger started out being fairly light and clean -- at least it was back when I migrated to it.

At least static site generators are coming back into fashion. I've always considered them a better solution for media/news/blog type sites.


FYI: I think you can still select blogger themes which don't require JS to render. Some people use them, including (perhaps just as ironically) the Google Developers Blog[0].

[0]: https://developers.googleblog.com/


It's even a pain on archive, apparently.

http://archive.is/fJ7nu


Though JavaScript is of course not mandatory, the mindset and incentives under which web specs have been created by browser vendors are bound to pile up complexity, in that developing a browser from scratch is considered infeasible (the last one to try was Opera), and browser vendors are instead trying to turn existing browser code bases into universal runtimes for apps. Case in point: WASM, which I consider bloated and out of place in a web browser.


JavaScript from 3 other domains is mandatory for this site. It won't display even with its own JavaScript.


Sorry, I seem to have misunderstood what you meant re: "mandatory JavaScript".


Software itself is the sacrifice of system optimization for human optimization. Give me unlimited human power and I'll give you purpose-designed chips for each individual use case with the program encoded in the wiring of the logic gates.


Software went off the rails with the downfall of the real IDE, the mouse, and the web browser.

The browser is the big one, you're running an operating system inside of another.

The IDE not being able to target a true cross platform binary has hampered us.

The mouse has made developers lazy and let them put interfaces in clunky places.

(contradicting myself) Not using the mouse for coding anymore (see the downfall of the IDE)

And OS and browser vendors should have allowed binaries to run at a higher ring level, i.e. run something similar to DOS inside a browser. I would much rather cross-compile to the major architectures than code in HTML/CSS/JS.


The Jevons Paradox applies here: As technology improves, cost goes down, which increases demand, which causes more of a given resource to be used, not less.


Everybody here says that "software bloats", but nobody says how it bloats. Why does Chrome take 500MB to render a page? Where does that memory go?


Although it's great for campfire stories, I don't really see a problem. Resources are there to be used - the lesson from StarCraft is that you don't hoard resources to victory. I think most believe this, given how few of us are writing unikernel OSes purpose-built for our hand-tuned assembly backends.


Sticking with the StarCraft analogy: we're spending the resources constantly, and even increasing the size of our army to combat the constant onslaught of enemies. If we could basically decrease the price of our units, then wouldn't that be a benefit? You spend a boatload of time and resources on research, and gain an ongoing benefit from it.

Take this laptop: spinning platter drive and 4GB of RAM. I'd be happy if it could do more with the same hardware, and I think that's the core of what we're talking about.


But time took care of the cost of a given hardware unit.


You are all missing my point (or I did not make it clear): the resources are there for you to burn in achieving your goals. You will never be able to deliver a product in a competitive fashion compared to groups that are more willing to burn resources (within "reason") - the market simply doesn't pay for high efficiency on the desktop or in the browser.


I have 32GB of RAM on my home machine because 16GB wasn't enough. I moved to an 8-core CPU because 4 cores weren't enough.

The StarCraft analogy is far removed from real-life user and business cases, all of which pay for the hardware with real money and time.


Just like roads and budgets, utilization will always grow until it overflows.


Why did a bunch of open, unintelligent questions even get to the frontpage of HN? If this is the way one should think (ask questions we all on a low, unintelligent level agree are interesting) in order to become CTO, I will never get there...

//The Engineer


Ironic that your blog needs javascript to display a page.


Written on a blog that does not work with JavaScript off.


Yes - I often think about posting this sort of comment, but won't if it's not really relevant. But not only do you need to enable JavaScript for the domain itself, then you need to enable it for blogblog.com, and load 7 resources from that site. The page loads up images for about a dozen unrelated articles in the background behind the modal.

It would be great to walk the walk when you post about this kind of topic, by using a simple HTML web page with just the images you need to aid in presenting your argument.


IMHO, there is no Software Engineering. There is only Computer Science. There is none of the discipline that comes from engineering, and only the messiness that comes from science. The heroes in software are Computer Scientists and everyone wants to be a Computer Scientist. No one wants to be a Software Engineer. It isn't even clear to me that Software Engineering exists in a clearly defined way. Maybe a manager or director imposes their will on a group somewhere and engineering is done.


This is simply untrue. There _are_ people who want to be software engineers -- I'm one of them! The field is still young, but there are those of us who strive to apply the rigour seen in other engineering disciplines to the work in which we engage today. We seek to balance competing demands of quality, features, and schedule.

Though I get to produce programs when it makes sense, I spend a lot more time writing prose and communicating with others inside and outside the immediate engineering team in which I work. I also spend a lot of time chasing down problems in existing software; each new failure provides an opportunity to improve our overall practice. Mistakes can and will always happen; negligence must not.

To suggest that there isn't, or cannot be software engineering is maddeningly self-defeating.


Arguing semantics is usually a waste of everybody's time.


And not even Computer Science, just Programming. Maybe here at HN there's a will to understand why the algos are like that, or when to choose a given data structure. But most workers in this industry are content with writing code, seeing it compile, and passing it to the next person in the chain.

Source: half-assed personal opinion based on 10+ years of experience


An engineer is just an ingéneur - someone who applies ingenuity.


* ingénieur


Oops-a-daisy.



