I disagree. Our baseline for software has increased dramatically. If you don't care much about the added functionality or value of the new software, use Mosaic or Netscape 4.0 to browse the web.
There are obvious improvements in browsers which you are so used to that you forgot them: tabs, ability to zoom, process-level isolation for websites, support for newer protocols, CSS, Unicode support, font rendering, things I'm probably not aware of, a built-in debugger and inspector in the browser, and so on.
Again, if you think that software hasn't advanced much, simply use the software from 10 or 20 years ago, and see if you can stand the experience.
I still use Jasc Paint Shop Pro 7 whenever I can. It's faster to startup and use than any of the versions that came afterwards based on .NET. The built-in effects are a bit outdated, but it still runs most Photoshop compatible plugins.
And I still run Windows 2000 in an isolated offline VM for some of my work. Even emulated, it's blistering fast on modern day hardware. The entire OS runs great in less RAM than a Blogger tab (assuming the article's numbers about 500MB - 750MB RAM are correct).
There is some excellent and efficient new software being made (Sublime Text and Alfred spring to mind), but please, don't give me another Electron-based 'desktop' app.
Operating systems themselves take advantage of this abundance of memory by also keeping things in memory for longer. I remember my beefy 2GB-RAM computer from 10 years ago still paged processes and data out to disk when I had Photoshop CS and Firefox 2 side by side, but now that I have 32GB of RAM - and have had for the past 2+ years - I cannot recall experiencing any disk thrashing due to paging since then.
There's basically no downside to using otherwise-inert memory, so plenty of programs will eat whatever memory you have but degrade nicely when you're out. Hell, several IDEs seem to cache their tab contents just because they can, and that's only a text file on disk. If my memory usage never drops below 70% of available, I have no real complaints.
Of course, there is the question of why that much memory usage is even possible. I don't blame Chrome for eagerly caching webpages - it's delightful when I'm on shaky internet and pages don't reload every time I change tabs. But I do object to whoever decided a Blogger tab should have 750 MB worth of crap to cache in the first place.
I increasingly feel like many webpages are actively hostile to their users; in service to click rates and advertisers they eat bandwidth, squander memory, and actively oppose security measures like adblocking and SSL enforcement. That, at least, seems like a step backwards.
> I still use Jasc Paint Shop Pro 7 whenever I can. It's faster to startup and use than any of the versions that came afterwards based on .NET. The built-in effects are a bit outdated, but it still runs most Photoshop compatible plugins.
> And I still run Windows 2000 in an isolated offline VM for some of my work. Even emulated, it's blistering fast on modern day hardware. The entire OS runs great in less RAM than a Blogger tab (assuming the article's numbers about 500MB - 750MB RAM are correct).
> There is some excellent and efficient new software being made (Sublime Text and Alfred spring to mind), but please, don't give me another Electron-based 'desktop' app.
Ha, I too keep Paint Shop Pro 7 around for the reasons you mentioned.
The problem is that we're not actually running applications on the web platform. We're running them on advertising platforms.
I'm pretty sure that roughly 99% of my browser CPU usage goes towards loading and running ads. And browsers are optimized for that task, which has consequences even when not running ads.
We have a business model and payments processing crisis, not a technology problem.
Also, the entire architecture of browsers is very much geared towards running tons of totally unpredictable crappy ads. I don't think the multi-process architecture most browsers use nowadays would have come to pass without crashing Flash adverts.
As an application delivery platform, web technologies certainly are a serious problem. They're literally decades behind the curve in many respects.
A textbook example is that in JS world, "unidirectional data flow" in modern UI libraries is treated as some sort of amazing new idea. Smalltalk was using the same fundamental principle in MVC in the 1970s. Many variations have evolved in desktop applications since then, dealing with a variety of practical lessons learned from experience with the original concept.
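To make that concrete, here is a minimal sketch of the idea in TypeScript (the names are illustrative, not any particular library's API): state only ever changes through a reducer in response to actions, and the view is a pure function of that state.

    type State = { count: number };
    type Action = { type: "increment" } | { type: "reset" };

    // All state changes go through this one pure function.
    function reduce(state: State, action: Action): State {
      switch (action.type) {
        case "increment": return { count: state.count + 1 };
        case "reset":     return { count: 0 };
        default:          return state;
      }
    }

    // The view is derived from state; it never mutates state itself.
    function render(state: State): string {
      return `Count: ${state.count}`;
    }

    let state: State = { count: 0 };
    function dispatch(action: Action): void {
      state = reduce(state, action);   // data flows one way: action -> state -> view
      console.log(render(state));
    }

    dispatch({ type: "increment" });   // Count: 1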
In JS world, it's considered a big new feature of recent years that you can now write code in multiple files, import what you need from one file in another in some standardised way, and thus break your application down into manageable parts for development but ship a single combined "executable". Again, most major programming languages created since at least the 1970s have supported modular design, in many cases with much better support for defining clear interfaces between the modules than JS offers even today.
In JS world, having discovered the concept of modular design and libraries relatively recently, naturally we're now seeing state-of-the-art build tools that use "tree shaking" to import only the code actually needed from one module in another, instead of bringing in the whole module. Dead code elimination has been a bread and butter feature of optimising compilers since... Well, you get the idea.
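For readers outside JS world, a tiny sketch of what that amounts to (the file names are made up for illustration). A tree-shaking bundler keeps only the exports that are actually imported, which is exactly the dead code elimination an optimising compiler would do:

    // math.ts - a module exporting two functions
    export function clamp(x: number, lo: number, hi: number): number {
      return Math.min(Math.max(x, lo), hi);
    }
    export function lerp(a: number, b: number, t: number): number {
      return a + (b - a) * t;
    }

    // app.ts - only `clamp` is imported, so a tree-shaking bundler
    // (Rollup, webpack, etc.) can drop `lerp` from the shipped bundle.
    import { clamp } from "./math";
    console.log(clamp(12, 0, 10)); // 10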
Next week: Stability and standards are useful things, data structures and algorithms matter, standard libraries with good implementations of data structures and algorithms are how you combine those things to make the everyday requirements easy to meet, and if much of software development is about managing complexity then routinely pulling in minor features from an ecosystem that exponentially increases the number of transitive dependencies you have to manage probably isn't a smart move.
It's not just JS, though. The layout engine in browsers was designed to support HTML documents, reasonably enough. Unfortunately, that means laggy performance in web apps that keep regenerating a large amount of page layout information because of that heritage. A great deal of effort has been spent in recent years trying to work around that fundamental weakness, with varying degrees of success.
Likewise, the CSS formatting model was designed around those documents, and again it is quite limited for even basic layout needs in an application UI. Some of the bleeding-edge changes are making that slightly less painful, but there are still a lot of awkward interactions.
It is true that much of the modern web seems to have been taken over by spyware and ads. I'm afraid that's what we get for having a culture where a lot of people want all their content and services for free, and until we fix that culture we're probably stuck with the ads (and will probably see a variety of laws in the coming years to fight back against the technical victory of ad blockers). But this definitely isn't the only reason the web of today is bloated and slow.
But when it comes to performance, all of that just pales in comparison to the fact that almost none of the CPU cycles executed on a typical web page are actually part of providing the service, as opposed to part of the payment (i.e. ads).
The subscription-based web apps I use are not sluggish at all (with some notable exceptions).
And I'm not just talking about operations that require network requests. I'm talking about simple UI responsiveness to (for instance) acknowledge a click and hold on a handle so I can start to drag-and-drop items in a list. I'm talking about drop-down menus that take over a full second to open.
I'm talking about lag when typing text into a UI, such that if you type a whole sentence and then click in the wrong place after you finish typing but before the second half of it renders out character by character (for instance, because you noticed and want to correct a typo), the second half of your sentence ends up transposed into the middle of the first half.
Ridiculous. This was a solved problem 25 years ago.
That is not a limitation of the platform though.
Many of the most egregious examples of this happen to be on the web because a) it involves an additional layer of abstraction, relative to OS-native applications; b) designers are involved more in web apps than native apps, and they refuse to use native browser widgets, which are performant; and c) for a handful of reasons web development culture is biased towards building on top of existing libraries, which results in excessive layers of abstraction.
And then you add the networking layer (and for many web apps having your data "in the cloud" adds very little beyond what a native app provides even without background data syncing), and add seconds to every tiny interaction that used to seem instantaneous.
Now they've taken web apps and packaged them up to run their own web server and browser locally with even more abstraction layers that chew through your system resources.
What a strange world.
We had decent video, CSS, page updates, notifications, Unicode support and the like 10 years ago. We haven't gained that much since then, but page load times took a 5x hit.
Yeah, some UI is nicer, we do have better completion, real-time stuff is snappier, and all that.
But we lost most pagination, pages feel slow, there are so many of them, and sharing information went down the toilet in exchange for big echo chambers.
The only things that are clearly better are my hardware and bandwidth, both on mobile and desktop, and professional services, which went up in quality a lot.
The web itself, the medium, feels more sluggish than it should be given the powerhouse of infrastructure I've got to access it.
If you look at where the bloat went, it's mostly three things:
- Images ballooned in size by a megabyte (without adding many extra requests). Most likely culprits are retina-class graphics and "hero graphic" designs.
- JavaScript grew. This corresponds to a shift from jQuery + some manually added plugins to npm/webpack pipelines and a massive number of (indirect) dependencies.
- Video. Now that Flash has disappeared, it got replaced by auto-playing video, which is even worse as far as page bloat is concerned.
For some reason, around the time "digital native" design and designers got big, it suddenly became OK to include giant design-only images (not content, not a logo, not even a fancy button, just because the designer wanted a giant image) in web pages, which before was one of those things you Just Didn't Do in proper web design. It's gotten so bad that now it's done with videos. Web old-timers find this to be in shockingly poor taste, but that's where we are. Bring back the "bad" print-background designers, I say.
It's like a microcosm of everything wrong with modern web design. It's wasteful on data, wasteful on memory, obscures useful information (like the actual picture), and actively distracts from the surrounding text. But it looks fancy, so it's not going anywhere.
You are blithely ignoring the reason for all this bloat: unwanted adtech surveillance.
It's why the first meaningful page draw on most webpages takes 3x as long to happen as it did a decade ago.
All the actually useful bits of the web, text, images, etc would still work. I bet you could still shop on Amazon with one of those old browsers.
The power is being sucked to run the 10Mb+ of JS, Flash and other crap that ads add to every page.
Thanks to Google Tag Manager, we don't even get a say anymore. It's out of our hands. We can put hours into optimising animations and UX, but it is in vain once the marketing companies get their hands on the GTM account. With enterprise-level clients changing marketing providers nearly every quarter, I'm sure many of the snippets aren't even being used anymore; they sit there because the new account manager is too afraid to remove them and no one truly understands what gets delivered to the customer's browser.
Weird, the screen keeps flickering and scrolling isn't smooth. Oh well, probably a browser bug.
It's a bit alarming that you can kill 80% of a page load without your experience degrading in any way. At that point I'm prepared to call what's happening hostile design - the user's well-being has clearly taken a serious backseat.
If I didn't have to trade files with others I could quite happily use Microsoft Office 97 in lieu of whatever the new thing is called.
The issue with web browsers is only slightly more complicated. I'd love to go back to a world where web pages didn't try to be computer programs, but that's obviously not going to happen for a while.
(I mean, it's not enough to make me switch to Open Office or Nacho Libre...)
If I need to export to Office I use Google Docs, because Libre is still too frustrating. If I don't need a .doc output, it's LaTeX for pretty things and TextEdit for the rest.
I agree that most of the Office-analogues out there aren't very usable, but I'm surprised at how rarely I need one.
I could go back even further and use Microsoft Works.
But people complaining about the poor use that today's software makes of today's hardware are usually not talking about games.
The best games I played recently are all indie stuff that I could have played on a much older machine.
If you try it, you may discover that's not the case. A lot of indie stuff is taking advantage of faster computers to use things with higher levels of abstraction, and indie games often have really quite terrible levels of performance relative to the complexity of the game as compared to a AAA title. They run at 60fps because they're net simpler, and a lot of times you may find they're barely running at 60fps on a medium-class machine.
I'm not complaining, because the alternative is often that the game simply wouldn't exist if the only choice the developers had was to drop straight into C++ and start bashing away with OpenGL directly.
I don't object to that, either. People are knocking out games in high-level languages or using extremely rich frameworks. You can put out an Android game and barely touch a single Android feature because your environment is so extensive. We do pay a price in speed and memory usage, but the upside is that people get to release all kinds of wild projects without learning the arcana of their environment.
It's fantastic that a team of <5 people with limited software expertise can put out a genuinely brilliant game. I'm less happy with it outside of gaming, where people often have to standardize on a single product. But video games are moving steadily closer to boardgames, novels, and other art forms where the tools of the trade aren't a limiting factor.
Often you couldn't. FTL or Child of Light (as examples of games I've enjoyed fairly recently) look like they would be fine on an older machine, but actually made my modern laptop's fans run pretty hard. Modern hardware meant the developers could use frameworks to save them time, and focus on making a good game rather than micro-optimizing performance.
Oh, I dunno, I'm pretty sure the actual full-motion, "interactive" movies from the 90s were more soulless: http://listverse.com/2011/12/24/10-notorious-full-motion-vid...
But I think the popularity of "retro" aesthetics and mechanics signals that progress in games is not at all linear.
I would think that some more specialist peripherals could also be used for people with even less mobility, so they could also play these games to some degree and enjoy them.
Yeah, but that's not a consequence of better software!
My go-to example is Blender. It has a footprint of 300MB on disk, starts within 1-2 seconds and uses 70MB of RAM on startup.
Compare this to a program like 3ds Max and you will ask yourself where all your resources went.
I think today most software loads everything on startup, while programs like Blender use a modular approach.
But perhaps that's only because we had more time to figure the requirements out.
I may be alone (or in a group of 7), however, judging by how often I get a 403 Forbidden error because a site is set up to reject anything with a user agent of lynx. Seems to be a default of many Wordpress setups. e.g. lynx slatestarcodex.com.
Maybe I should just tweak the user agent, but hey, lynx pride.
(edit: looks like it's the "libwww" part of the default lynx user agent which gets me blocked, not the "lynx" part. OK, I will edit that out. Ha!)
(For comparison, I ran Netscape 4 on a Pentium 100 with 64 megs of RAM.)
Try 1987 Ventura Publisher. That's 30 years ago. You can publish a whole magazine or a 1000-page book on a modest (<4MB) PC with a 286 processor with it. It has a great GUI, excellent usability and, dare I say, isn't slow at all.
On the IDE side, the Lisp environments on the Lisp machines of the early 80s, or the Smalltalk environment on the early-80s Xerox computers, have nothing to envy in the modern IDEs commonly used for languages like Java.
Old versions of Visual Studio, Vim, and Emacs all come to mind.
If I can have the web from that era to go with it, I'm happy.
Wouldn't your users appreciate more features than optimisations most of them aren't going to notice? For the same block of time today compared to decades ago, you're going to be creating an app that uses more memory and CPU but has a lot more features. Developer time isn't free and needs to be justified.
I used to write DOS apps for 66MHz PCs and you'd spend an enormous amount of time optimising memory and CPU usage. This was similar for the initial batch of Android phones as well as you didn't get a lot of resources (e.g. loading a large image into memory would crash the app). Now I can create more features in way less time and rarely have to think about optimisations unless dealing with a lot of data.
I think expecting every software developer to release software that uses the absolute minimum amount of CPU and memory possible is completely unrealistic. The people commenting that developers don't know how to write optimised code anymore have different priorities to a typical business. I know how to make low level optimisations and have even resorted to assembly in the past but I'm only going to these lengths when it's absolutely required.
For commercial projects, it doesn't make any business sense to optimise more than is needed even if that makes software developers squirm.
Allowing video content, using images and a JS framework is alright. Not making sure the first load is under 1MB is, however, unprofessional.
I get that some sites do need big pages: video tubes, big SPA, etc. But most sites are not youtube or facebook. If your blog post takes 3Mb to display text, it's doing it wrong.
> If your blog post takes 3Mb to display text, it's doing it wrong.
I agree if it's not much effort and it makes a big difference (e.g. to mobile users) you should make the optimisation.
I'm more talking about people saying that instead of having one Electron app to support multiple platforms, a developer should be writing several native apps. The latter is a huge amount of development time to save a few 100MB of RAM that most users probably wouldn't notice anyway. Nice to have but it doesn't make business sense most of the time.
I think what operating systems should do is allow users to set per-app/website quotas, with sensible defaults.
Developers should get the message that no, we can't use all the resources available on a given system just for our particular app.
This sounds sort of like what classic Macs did: there were some settings in the application's "Get Info" dialog to adjust the "current size" of the application (along with a note as to what the "preferred size" was); if there wasn't enough free memory the program would refuse to start.
In practice, this is a terrible idea: it generally turned into "well, this program isn't working right, what if I fiddle with its memory allocation?" and did nothing to actually help the user. (To be fair, the setting was necessary as a result of technical limitations, but I don't see it working out any better if it was implemented intentionally.)
Perhaps it was in the 1980s, but back then applications didn't get funded by downloading ads from third parties using 100% of our CPU. We didn't use many applications at the same time either.
So the problem we have right now simply didn't exist back then. And where it did exist, like on mainframes, they did actually use quota systems rather successfully I believe (just as we are right now with virtual machines).
Commercial developers will use as much memory as they can get away with; OS vendors would disable the quota system (or set the quota to 100% of whatever RAM the device has) since they don't want apps to "break" on their laptop/tablet/phone and work on their competitors'.
If people graduated from college with just a little bit of understanding of how slow RAM is, how CPU caches work, and how expensive allocating memory on the heap is (either up front with something like malloc, or amortized with garbage collection), and if we stopped using languages that haven't at least built a good JIT yet and used AOT native compilation more often, we would all be in a much happier place. Systems would be much snappier, or we could add more features; users could run more things at once before it all goes to hell, and batteries would last longer.
None of this requires even changing what language you use, or making massive changes in coding style. Just knowing some basic facts about how the computer works lets you choose the more efficient option among equivalently complex options: "I don't need a linked list here, let's use an array-backed list instead"; "let's not use LINQ here because it allocates and this is a tight loop that runs on every request"; "let's not use Electron, let's build something with a similar external API that is more efficient".
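The LINQ point translates directly to other languages. A sketch of the same trade-off in TypeScript: the chained version allocates two intermediate arrays on every call, while the plain loop over a contiguous array does the same work with no extra allocations.

    // Allocates a new array in filter() and another in map() on every call.
    function sumOfEvenSquaresChained(xs: number[]): number {
      return xs.filter(x => x % 2 === 0)
               .map(x => x * x)
               .reduce((a, b) => a + b, 0);
    }

    // Same result, no intermediate allocations - the kind of change that
    // matters in a tight loop that runs on every request.
    function sumOfEvenSquaresLoop(xs: number[]): number {
      let total = 0;
      for (const x of xs) {
        if (x % 2 === 0) total += x * x;
      }
      return total;
    }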
> lets not use electron, let's build something with a similar external API that is more efficient
People also choose Electron because it's efficient in terms of development time for releasing apps on multiple platforms, not because they don't know how to optimise.
I doubt that. Developers will try to work within their quota so they don't have to ask for permission to use more and risk annoying users.
Also, it would create resource competition between websites and the adverts they use to fund themselves. This would lead to much less annoying ads.
I can't see how this would work. What would the sensible default be for a computer game? A text editor? An IDE? A video editor? Setting their own quotas is also way over common users' heads. How would you avoid "This app wants to use more resources than the default. Allow?" popups with low defaults?
Of course it's easy to imagine an implementation of this idea that is annoying and confusing. But I can imagine this working very well and providing an incentive for developers not to waste resources.
All apps have different resource usage profiles. I can't see how you could make this work without user interaction or a review process for apps.
The overwhelming majority of apps need next to no CPU during normal operation (on average, that is; spikes are OK), and that includes text editors.
Other applications, such as games, need all the juice they can possibly get. Users usually know why that is the case and will be happy to grant them that permission.
There needs to be a way for an app to request a temporary suspension of resource limits, e.g. for running batch job that should finish as quickly as possible.
Each website should be treated as a separate application and needs to compete for resources with the ads it runs. So there would be an incentive for website owners to keep the resource usage of ads in check.
In any case, macOS, which I run as my personal system, probably ain't gonna get that feature anytime soon.
Tech Debt is rewarded.
Doing something for the first time, almost by definition, means one does not really know what one is doing and is going to do it somewhat wrong.
Hiring less skilled labor (cheap coder camps, for example) to implement buzzword bingo solutions gets you into a place where all the software contains large chunks of its substance coming from people doing it for the first time... and not 100% right.
As we never go back to fix the tech debt we end up building stills for the borked and less than great. When that structure topples over we start over with a new bingo sheet listing the hot new technologies that will fix our problems this time round for sure.
I'd think that a good fraction of the current language expansion is that the older languages are too complex and filled with broken. Green fields allow one to reimplement printf, feel great about it, and get rewarded as a luminary in the smaller pond.
Oh... and well, the cynic in me would argue planned obsolescence at the gross level. No one buys the new stuff unless there's new stuff.
BTW is "stills" a typo for something? (shims?)
The Mythical Man-Month has a chapter on this titled "Prepare to throw one away". Brooks argues that the program/OS/whatever you build the first time around should be considered a prototype. Then you reflect on what problems you encountered, what went well, and so on, and use those insights to guide you when you start over.
It seems like such an obvious idea, but Brooks wrote that almost 50 years ago, and it seems like only very few people listened. Primarily, I guess, because much software is already written under highly optimistic schedules - telling management and/or customers that you want to throw the thing out and start over is not going to make you popular.
I think the biggest culprits are abstraction layers. You use a framework that uses JS that runs in a VM in a browser that runs on an OS. The time when showing text was done by the application writing a few bytes into VRAM is long gone. Each layer has its internal buffers and plenty of features that you don't use but are still there because they are part of the API.
Another one is laziness: why bother with 1MB of RAM when we have 1GB available? Multiplied by the number of layers, this becomes significant.
Related to laziness is the lack of compromise. A good example is the id Tech 5 game engine (Rage, Doom 2016). Each texture is unique; there is no tiling, even for large patches of dirt. As expected, it leads to huge sizes just to lift a few artistic constraints. But we can do it, so why not?
Another one is the reliance on static or packaged libraries. As I said, software now uses many layers, and effort was made so that common parts are not duplicated, for example by using system-wide dynamic libraries. Now these libraries are often packaged with the app, which alleviates compatibility issues but increases memory and storage usage.
There are other factors such as an increase in screen density (larger images), 64-bit architectures that make pointers twice as big as their 32-bit counterparts, etc.
Related: just went to a news site that had this absolutely gorgeous, readable font called "TiemposHeadline" for the headlines. That font was loaded through the @font-face interface of CSS, so that's probably somewhere in the ballpark of 1M.
That I can so casually navigate to find the name of the font and how it's getting loaded is due to devtools, which is tens of megs of storage space plus whatever chunk of memory it's eating.
I think examples like Doom are misleading. Those are essentially hand-compressed diamonds of software. Some of the resource-saving techniques used there are bona fide legends. It should come as no surprise that an increase in availability of RAM and storage space causes the software to accordion out to fill the available resources in return for ease-of-development.
That said, I'm all for software minimalism movements as long as the functionality and usability remains roughly the same.
The rationale behind megatexture is that storage capacity increases exponentially but our perception doesn't. There is a limit to what our eyes can see. In fact, for his future engines, John Carmack wanted to go even further and specify entire volumes in the same way (sparse voxel octrees).
And sure, the way megatexture is implemented is really clever, and yes it is there for a good reason, but it doesn't change the fact that it makes for some of the biggest games on the market (Doom is 78GB).
When I said no compromise, I meant no compromise for the artists. The whole point of megatexture is to free them from having to deal with an engine limitation. They don't have to be clever and find ways to hide the fact that everything is made of tiles; they just draw. And yes, this is a good thing, but a good thing that costs many gigabytes.
Yeah, it probably doesn't really help the situation when you build several layers of abstraction in an interpreted language that runs in a virtual space on a virtual machine using another abstraction layer running on virtual cores that go through complex OS layers that somehow eventually possibly map to real hardware.
As a developer, you rarely care about memory usage; as a web developer, you have limited influence on CPU usage. And since most managers care only about getting the project done on time and within the budget, this is what most developers concentrate on.
I think that is the crux of the issue succinctly put.
Software that comes out of nonprofits or the free software movement is arguably better built and treats the user better.
Lots of people stopped using Firefox in favor of Chrome precisely because Firefox was incredibly greedy memory-wise.
I've never really understood why either, and it's been on two totally different laptops.
I'm on FF 57 at the moment (nightly) and it's stupid fast. Like, really fast.
It can happen in web development as well if you're working on a disciplined backend team that cares about designing for request latency and concurrency.
But often scrappy startups don't have the time or money to care about those things. It's more about product fit and keeping customers happy to keep the money coming in.
I personally don't use that excuse and design everything I can with a budget in mind. It's a nice constraint to have an upper bound on response time. Forces you to pick your battles.
There are a bunch of tricks you can pull to give that impression even when you have to go over that limit.
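One of the simplest such tricks, sketched in TypeScript (the 150 ms budget is an arbitrary illustrative number): don't show a loading indicator at all unless the work blows past the budget, so anything under it feels instantaneous.

    async function withLatencyBudget<T>(
      work: Promise<T>,
      showSpinner: () => void,
      hideSpinner: () => void,
      budgetMs = 150,
    ): Promise<T> {
      // Only schedule the spinner; if the work finishes inside the budget,
      // the timer is cancelled and the user never sees it.
      const timer = setTimeout(showSpinner, budgetMs);
      try {
        return await work;
      } finally {
        clearTimeout(timer);
        hideSpinner();
      }
    }

    // usage (hypothetical endpoint):
    // const items = await withLatencyBudget(fetch("/api/items").then(r => r.json()), show, hide);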
These days I'm lucky, I only work on an internal web app that is almost entirely served over gig ethernet.
Software makes the world more complex faster than we can understand it, so even though we have more knowledge we understand less about the world.
We used to know how cars work. We used to know how phones work. Now we don't, and never will again.
The implications are unsettling.
Imagine a world populated entirely by IOT devices. Imagine, for a moment, starting with a blank slate and trying to make sense of these devices using the methods of Science. They are so complex and their behavior governed by so much software that it'd be impossible to make a local model of how the device actually worked. You simply would not be able to predict when the damn thing would even blink its lights. When the world gets to this point...One would have to understand how software worked, in many different programming languages; kernels, scripts, databases, IO, compilers, instruction sets, microarchitecture, circuits, transistors, then firmware, storage...it'd be impossible to reverse engineer.
I mean how many times has someone called you a Wizard when you've fixed some random piece of tech or gotten something working for them?
A toaster is not a complex thing. It has a few springs, a tray, a body, some heating elements. Some wires. There is absolutely no need to put the internet in there.
It was some simple bi-metal wire that would bend when it reached a certain temperature and shut off the power to the heating coil.
I guess the whole aspect of these IoT is that it violates the simplicity of these devices. Kind of like the weapons in Judge Dredd vs a simple modern mechanical handgun.
If all these things can be taken as a given, why would you not want to use them? I mean, yes, you can avoid some complexity by making a simple toaster, but the second the consumer wants things like "never burn my toast" or "personalized toast levels" you need to go up the stack.
That said, some IOT things are clearly lame ideas that should never have been made in the first place, but that doesn't mean you should avoid using existing technology.
When the device breaks, what do we do with it? If it is mostly software, it is not user serviceable, whereas something with a spring and clips and wires is something that a curious person armed with a screwdriver could disassemble and fix.
I fear that software is ushering in an age where users are absolutely helpless when something breaks. Then we get big stupid bricks with chips in them.
Right now he's investigating what it is like to be a goat: http://www.thomasthwaites.com/a-holiday-from-being-human-goa...
Don't knock it until you try it I guess?
It goes poorly.
The Economist article links to the original paper, which is quite readable too.
"Yeah nah but when you turn on input A, input B turns off so we know how B works."
Philip K. Dick wrote a short story on this, some 60 years ago: https://en.wikipedia.org/wiki/Pay_for_the_Printer
AOT to native code via NGEN, or JITed on load.
The only .NET variant from Microsoft that wasn't compiled to native code before execution was the .NET Micro Framework.
Now, .NET and the JVM might be a problem, since those VMs tend to be resource hogs (after all, both use GC methods that allocate tons of memory, whereas something like Python or even classic VB uses reference counting - and even then, there are languages that aren't using reference counting but some other method of GC and are still fast). But I don't think you should put all interpreted languages in the same box.
Also you should not put all GC languages in the same box, as many do allow for AOT compilation to native code and do support value types and GC-free memory allocation as well.
So, first of all:
> Reference counting is GC.
How did you think that I said otherwise, when I clearly wrote "aren't using reference counting but some other method of GC" ("other method" here implying that reference counting is also GC)?
> Also you should not put all GC languages in the same box
I did not, as should have been obvious from the "after all, both use GC methods that allocate tons of memory, whereas something like Python or even classic VB uses reference counting", where I compare two different methods of GC, one that uses a lot of memory and another that doesn't.
Now, I get that the previous misunderstanding would make this bit sound as if I were making a comparison between "GC" (Java, C#) and "non-GC" (Python, classic VB) - note that the quotes here show what one could think under that misunderstanding, not what I really think; I already made it clear above that I consider reference counting a method of GC. However, I do not think it is my fault here: I gave examples and tried to make myself clear about what I meant. At some point I believe it is up to the reader to actually try to understand what I am talking about.
I think the rest of your message (the "as many do allow for AOT compilation to native code and do support value types and GC-free memory allocation as well.") relies on the above misunderstandings, especially considering I didn't do what you describe, so I am ignoring it.
Now don't get me wrong, I am not attacking you or anything, nor do I believe you are wrong on the factual parts of your message ("reference counting is GC", "not all GC languages are the same"); it is just that the message doesn't have much to do with what I wrote.
It is cheaper to develop a .NET app than a C app. Cheaper in development and maintenance.
It is cheaper not to care about efficient data management or indexed data structures.
What we're losing in efficiency, we gain in code readability, maintainability, safety, time to market, etc.
I think this is true, but I disagree that it's inherently true. Slapping a UI together with C and GTK is pretty straightforward, for instance; here is a todo list where I did just that (https://gitlab.com/flukus/gtk-todo/blob/master/main.c). It's not a big example, but building the UI and wiring the events was only 40 lines of code, and it's the most straightforward way I've ever built a UI. More complicated things like displaying data from a database are harder, but I think this comes down to the libraries/tooling; the .NET community has invested much more time in improving these things than the C community.
> What we're losing in efficiency, we gain in code readability, maintainability, safety, time to market, etc.
I don't think we've exhausted our options for having both. We can build things like DSLs that transform data definitions into fast and safe C code, for instance. Imagine if, instead of something like Dapper/EF doing runtime reflection, we could build equivalent tools that are just as easy to use but do the work at compile time. Or we could do it via Rust's kickass compile-time metaprogramming.
Also, are you sure your C app will score well under the clang and gcc sanitizers?
However, if you compare .NET with Delphi, you will see it is easier to develop UI apps (and apps in general) in Delphi than in .NET.
The exception is web apps, and web apps are another big problem, one that "normalizes" bad architecture, bad languages and bad ways to solve everything.
If you want to take a look for yourself, you can install Lazarus (http://www.lazarus-ide.org/, discussed at https://news.ycombinator.com/item?id=14973706), a close clone of Delphi (version 7).
Or download the free edition (modern):
The language itself is still quite ok.
But I'd definitely be interested in any studies that have tried to measure these over long time periods.
In fact, I think there is a sort of casual indifference to the first two that frequently borders on criminal neglect. Why bother with them when the "time to market" driver of methodology selects for the most easily replaceable code?
Safety is also debatable and mostly accidental. Most of the languages that are fast and "easy" to develop in rest on a core of C or C++, and are really only as safe as that code. Safer, because there may be fewer foot guns, but not necessarily "safe."
In the last 5-10 years, there has been almost no increase in requirements. People can use low-power devices like Chromebooks because hardware has gotten better/cheaper but software requirements haven't kept up. My system from 10 years ago has 4GB of RAM - that's still acceptable in a laptop, to say nothing of a phone.
If you're going to expand the time horizon beyond that, other things need to be considered. There's some "bloat" when people decide they want pretty interfaces and high-res graphics, but that's not a fair comparison. It's a price you pay for a huge 4K monitor or a retina phone. Asset sizes are different from software.
I won't dispute that the trend is upward with the hardware that software needs, but this only makes sense. Developer time is expensive, and optimization is hard. I just think that hardware has far outpaced the needs of software.
In the case of front-end development also "Developer time is paid by the company while hardware is paid by the users."
This is basically a nicer way to put the "lazy developers" point from the article, but I think that's actually important.
We as a society have voted with our wallets that yes, we really really want a process that is efficient on creating more features within the same amount of developer-time instead of a process that creates more computationally efficient features.
The increased hardware capacity has been appropriately used - we wanted a way to develop more software features faster, and better hardware has allowed us to use techniques and abstraction layers that allow us to do that, but would be infeasible earlier because of performance problems.
It's not an anti-pattern that occurred accidentally, it accurately reflects our goals, needs and desires. We intentionally made a series of choices that yes, we'd like a 3% improvement in the speed&convenience of shipping shiny stuff at the cost of halving the speed and doubling memory usage, as long as the speed and resource usage is sufficient in the end.
And if we get an order of magnitude better hardware after a few years, that's how we'll use that hardware in most cases. If we gain more headroom for computational efficiency, then we've once again gained a resource that's worthless as-is (because speed/resource usage was already sufficiently good enough) but can be traded off for something that's actually valuable to us (e.g. faster development of shinier features).
Dark thought: maybe sites actually profit from a certain level of "bloat", if it drives away less lucrative visitors while not affecting the demographics that are most valuable to advertisers.
Sites cause the UI to hang for seconds at a time. Switching between tabs can take an age. Browse to some technology review site and you sit and wait for 10 seconds while the JS slowly loads a "please sign up for our newsletter" popover.
We wanted multi-tasking OSes, so that we could start one program without having to exit the previous one first. That made the OS a lot bigger.
Eventually, we got web browsers. Then Netscape added caching, and browsers got faster and less frustrating, but also bigger. And then they added multiple tabs, and that was more convenient, but it took more memory.
And they kept adding media types... and unicode support... and...
We used to write code in text editors. Well, a good IDE takes a lot more space, but it's easier to use.
In short: We kept finding things that the computer could do for us, so that we wouldn't have to do them ourselves, or do it more conveniently, or do something at all that we couldn't do before. The price of that was that we're dragging around the code to do all those things. By now, it's a rather large amount to drag around...
For example IDEs: Visual Studio in 2017 is certainly better than Visual Studio in 1997, but do those advancements really justify the exponential growth in hardware requirements?
How'd we get so little usable functionality increase for such a massive increase in size/complexity?
Fundamentally, modern systems are much more usable - in every sense of the word. Modern IDEs are more accessible to screen readers, more localizable to foreign languages including things like right-to-left languages, do more automatic syntax highlighting, error checking, test running, and other things that save developer cycles, and on and on. Each of these makes the program considerably more efficient.
I just don't buy this premise. _Some_ systems are more usable. But take a look at Microsoft Excel from circa 1995-2000, which came with a big thick textbook of documentation, explaining what every single menu item did. Every single menu item was written in natural language and it told you what it would do. Professionals used (and still use) Excel, crafting workflows around the organization of the UI. It's a tool that is used by actual people to accomplish actual tasks.
Now look at Google Sheets. It has about 1/10th the functionality of Microsoft Excel (hell--it can't even do proper scatter plots with multiple data series) and its UI is an undiscoverable piece of crap because half of it is in _iconese_--a strange language of symbols that are not standardized across applications, confusing and ironically archaic depictions of telephones, arrows, sheets of paper, floppy disks. The program is written in a pictographic language that must be deciphered before being used. Software doesn't even speak our natural languages anymore...we have to learn _their_ language first...and every application has its own and that language changes every six months. Worse, all those funky pictograms are buttons that perform irrevocable actions. They don't even explain what they did or how to undo it...it makes users less likely to explore by experimentation.
...and there is no manual, there is no documentation. It will be all different in six months, with less functionality and bigger--different!--icons...takes more memory.
We are regressing.
Hey but those animations are spiffy.
So I agree with you that computer time is cheap and user time is not, but I think we could optimise better for user time.
Is it? 97 might be a bit extreme, but the other day I opened an old project which was still on VS2010, and I was struck by how much faster 2010 was while still having nearly every VS feature that I wanted. They're slowly porting VS to .NET and paying a huge performance penalty for that.
Older versions of Android Facebook seem massively faster and use a fraction of the RAM while providing (nearly?) the same functions and features.
For example, the web might be filled with redundant and bloated software, but the real problem is that the browser has become the nexus of virtualization, isolation, security, etc. for almost everyone, from the casual user to hardcore admins, and for every piece of software from frameworks/utilities to full-blown SAPs. It's like we have all reached a common understanding of what comprises a "good" application, but then we lazily decided to just implement these things inside another app. I mean, WebAssembly is great and all, but is it wise?
I don't think it's about IPC or RAM or (n+1) framework layers that each include "efficient list functions". I think it's about the incremental, deliberate, and fallacy-laden decisions that assign more value to "new" than to "improved".
So just loading from disk, decompressing, putting into RAM, then moving things around and applying visual effects is probably half the reason everything is slow. Those "bloat" graphs should also mention screen resolution and color depth.
I might have done the same on my 2000-era machine as I do today (browse the web, listen to music, program, maybe edit a picture or two), but I'll be damned if I had to do all this in Windows 98 at a resolution of 800x600 again!
We could waste some words here on how display resolution didn't keep up, due to Windows being crap and people being bamboozled into accepting 1366x768 as "HD" or "HD Ready". 800x600 vs 1366x768 is only about double the pixels and barely more vertical.
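For a rough sense of the raw memory behind that, here is the framebuffer arithmetic (the bit depths are assumed for illustration: 8-bit colour then, 32-bit now):

    const framebufferBytes = (w: number, h: number, bitsPerPixel: number) =>
      (w * h * bitsPerPixel) / 8;

    console.log(framebufferBytes(800, 600, 8));    // 480000    (~0.46 MB)
    console.log(framebufferBytes(1366, 768, 32));  // 4196352   (~4 MB)

So even before any compositing or double buffering, just holding the screen contents costs roughly 9x what it used to.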
Made for a fun first hour after a fresh install.
Back then I had a habit of building up a cache of drivers and common software on a drive, later dumped to CDs at irregular intervals, just to lessen the rebuild time.
Funny thing is that I kinda miss those days.
Back then I could pull the drive from a computer, cram it into another, and more often than not 9x would boot. It might give me that VESA-only resolution etc., but it would boot.
These days it feels like it would just as well throw a hissy fit ascii screen about missing drivers, or something about the license not being verifiable.
I thought maybe Linux would rekindle this, but sadly it seems the DE devs are driving even it head first into the pavement. This so that they can use the GPU to draw all their bling bling widgets and paper over the kernel output with a HD logo...
In any case, glad I was hosted on Google infrastructure but embarrassed by the bloated, default Blogger template.
Interested in suggestions for a simple, lightweight alternative.
Medium, Square, WordPress, etc. all seem to suffer from similar insane bloat.
Edit: There was kind of a nod, but more about the Blogger editor than the reader view.
Note NASA controlled the moon missions with an IBM 360/95 that had something like 5 MB of RAM, 1GB of disk, and about 6 million instructions per second.
Today an internet-controlled light switch runs Linux and has vastly larger specifications. Connecting to WiFi is more complex than sending astronauts to the moon!
And an army of technicians available around the clock to keep it working. Whereas your IoT light 'just works' and isn't expected to require any support at all.
As for the high complexity of IoT things, I don't think the extra complexity helps reliability, security, etc.
Maybe a lot of what those computers were doing was just raw computation, much like a DSP, to control trajectories, and nothing more. Something like big calculators, with some networking capability to receive and send a few sets of instructions.
NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
0 979612 178064 108084 S 0,0 1,1 0:01.38 chromium-browse
0 3612044 175208 128552 S 0,0 1,1 0:01.83 chromium-browse
0 1372444 92132 67604 S 0,0 0,6 0:00.27 chromium-browse
0 1380328 90492 58860 S 0,0 0,6 0:00.62 chromium-browse
0 457928 87596 75252 S 0,0 0,5 0:00.67 chromium-browse
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
user 9309 59.0 4.6 2020900 282568 pts/2 Sl 09:46 0:01 /usr/lib/firefox/firefox
user 9369 8.0 1.7 1897024 104800 pts/2 Sl 09:46 0:00 /usr/lib/firefox/firefox -contentproc -...
total used free shared buff/cache available
Mem: 6115260 598900 4324616 41748 1191744 5205960
Swap: 0 0 0
total used free shared buff/cache available
Mem: 6115260 781304 4116456 68908 1217500 5032360
Swap: 0 0 0
PID COMMAND %CPU TIME #TH #WQ #PORT MEM PURG
44688 Safari Techn 0.0 00:03.41 10 3 311 51M 2648K
(I didn't check Safari (original) for nothing-open-and-idle RAM usage, simply because I didn't want to have to reload all my tabs, so I booted Safari (Technology Preview) to quickly test it.)
The article is focused on machine efficiency. Human efficiency also matters. If you want to save cycles and bytes then use assembly language. Geez, we had the assembler vs fortran argument back in the 70's. Probably even in the 60's but I'm not that old. Guess what? High level languages won.
Hey, Notepad is much smaller, leaner, and faster than Word! Okay. But Word has a lot of capabilities that Notepad does not. So is it "bloat" or is it "features" ? Guess what, Windows is a lot bigger and slower than DOS.
Imagine this argument from a business person:
If I can get to market six months sooner than my competitor for only three times the amount of RAM and two times the amount of CPU horsepower -- IT IS WORTH IT! These items cost very little. You can't buy back the market advantage later. So of course I'll use exotic high-level languages with GC. I'm not trying to optimize machine cycles and bytes, I'm trying to optimize dollars.
This is a ridiculous straw man. It is completely possible to write efficient software in high level languages, and no one is suggesting people extensively use assembly language. Actually, in many cases it is very difficult to write assembly that beats what's generated by an optimizing compiler anyway.
Dev time is the most expensive resource.
> The Blogger tab I have open to write this post is currently using over 500MB of RAM in Chrome.
If that is so, why post it and have it use a similar amount of RAM on other people's machines? If they know sooo much about software, why even use Blogger, a site whose heyday was 15 years ago?
At a big company you could argue that requirements writers need to be technical, but once you've done a startup you'd know that you're in an army, not in a fixed building that needs to be architected once. The customer and your enemies are always on the move, and you have to keep moving onto new terrain as the money and your business keep moving there. Build with a code of conduct for your developers, and allow the code base to evolve with modules or some other approach.
It makes hardware cheaper for everyone, even nerds who use terminal and lightweight WMs ;-)
I understand this is edge case material though so such ideas need not apply, but it seems like a fairly easy to implement idea that is one of those "oh that's really nice" features customers stumble across.
One cost of the increasing breadth in the industry is that if we want to have a lot more software then obviously it can't all be written by the same relatively small pool of expert programmers who wrote software in the early days. With more software being written by people with less skill and experience, many wasteful practices can creep in. That can include technical flaws, like using grossly inefficient data structures and algorithms. It can also include poor work practices, such as not habitually profiling before and after (possibly) optimising or lacking awareness of the cost of taking on technical debt.
Another cost of the increasing diversity in the software world is that to keep all that different software working together, we see ever more generalisations and layers of abstractions. We build whole networking stacks and hardware abstraction libraries, which in turn work with standardised protocols and architectures, when in days gone by we might have just invented a one-off communications protocol or coded directly to some specific hardware device's registers or memory-mapped data or ioctl interfaces.
There is surely an element of deliberately trading off run-time efficiency against ease of development, because we can afford to given more powerful hardware and because the consequences of easier development can be useful things like software being available earlier. However, just going by my own experience, I suspect this might be less of a contributory factor than the ones above, where the trade-off is more of an accident than a conscious decision.
But I do agree that software could be a bit faster nowadays.
I'm currently on a Mac and a PC. The Mac's CPU is at maybe 5% when I'm doing most things.
The PC is at 1%.
I'm using half the PC's memory and 3/4 of the Mac's.
These are not up-to-date, high-memory or high-performance machines.
Have a look at your own machine. Surely for most of us it's the same.
And that memory usage is mostly in one application - Chrome. The only bloat that hurts a bit now is web page bloat. And on a good connection this isn't an issue either.
It's also different on phones where application size and page size seems to matter more.
I have a 5 minute mp3 that takes more space than my first hard drive had and some icons on my desktop that take more space than my first computer had RAM.
Whether that will continue to hold I don't know, mobile has certainly pushed certain parts back towards caring about efficiency (though more because it impacts battery life).
If you remove a constraint people stop caring about that constraint.
The old school geek in me laments it sometimes but spending twice as long on developing something to save half the memory when the development time costs thousands of dollars and twice as much RAM costs a couple of hundred seems..unwise.
Has it? Try using a low end phone with something like 8GB of internal storage, mobile apps are ridiculously slow and bloated. It's to the point where I haven't looked in the play store for years because I simply don't have enough room on my phone. That means the dev community has screwed itself over with wastefulness.
When you look at the Android SDK you have to wonder if it's even possible to have a different outcome.
I copied 153GB of data onto my laptop earlier over my fiber connection because the project I'm working on needs it and I couldn't be bothered to go find the external drive with it on in the storage closet in the 2nd bedroom.
I can buy 500GB of really fast m2 SSD for 153 quid (approximately 30p per GB) or terabytes of storage for 153 quid.
I got a new ThinkPad a few weeks ago. I specced it with 16GB in one slot because I fully intend to upgrade to 32GB fairly soon; with virtualisation I can bump up against 16GB. Let that sink in: my time is so precious (to me on my machines, and to my employer on theirs) that I'm happy to virtualise entire operating systems and allocate billions of bytes of memory and storage to save some of it.
Hardware is absurdly cheap and I can't really see that changing for a while, from a systemic point of view it's ridiculously more efficient to spend a lot of money in a few places (Intel, Samsung, IBM etc) than to spend a lot of money in every place.
Every time Intel puts out a processor that is 10% faster at the same price everyone elses software just got 10% faster for free* (*where free = the price of the new processor).
There just isn't a market incentive (financial or otherwise) to roll back bloat; if there were, it would be a competitive advantage and everyone would be doing it. That they aren't shows that it isn't.
I suspect a lot of the reason why Linux installs stayed so relatively lean was because for a long time most people had CD burners, not DVD burners; once those were common, install ISOs blew right past 650MB. I think Fedora 26's was 1.3GB, but I didn't really pay any attention.
In any case, that is irrelevant as 14nm and IPC are pretty much maxed out, and from this point on, this is it. Unless CPUs move away from silicon, this is as fast as it gets (save for adding more cores to the problem).
On the multithreaded workloads that I care about, it's not just a little faster; it's a lot faster.
There is still a lot of fruit to be had in that direction I think and that's before you consider the other areas left for performance improvement.
Of course, for some workloads/people they are already butting up against a different cost/benefit and they do care about eking every cycle out of the processor, but for me it hardly matters.
My desktop at work runs a development version of our main system faster under vagrant than it runs in production since I've got more RAM and a machine with twice as many cores.
It's a strange market when that happens..
Feels lot faster than the 4670k it replaced.
I suspect most of that is the DDR4 + another 4 cores + NVMe, rather than IPC gains.
I like the direction the CPUs have moved in, although after 7nm in a few years, they'll have to redesign a lot more than CPUs to get anything substantial out of it.
We'll see, exciting times otherwise.
The main culprit was extracting and copying all of the small source files that come with it.
In the past, computational complexity was lowered by arbitrary size limits. e.g. if you had a O(n^2) algorithm you might cap n at 10 and now you have a O(1) algorithm. Job done.
Now, computational complexity is lowered by aggressive use of indexing, so you might lower your O(n^2) algorithm by putting a hash table in somewhere, and now you have an O(n) algorithm. Job also done.
The practice of putting arbitrary size limits on everything has almost died out as a result.
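As a concrete illustration of the indexing point (a sketch; "find a duplicate" stands in for whatever the real inner work is):

    // The O(n^2) shape: compare every element with every other element.
    function hasDuplicateQuadratic(xs: string[]): boolean {
      for (let i = 0; i < xs.length; i++) {
        for (let j = i + 1; j < xs.length; j++) {
          if (xs[i] === xs[j]) return true;
        }
      }
      return false;
    }

    // The indexed version: a Set acts as the hash table, roughly O(n).
    function hasDuplicateIndexed(xs: string[]): boolean {
      const seen = new Set<string>();
      for (const x of xs) {
        if (seen.has(x)) return true;
        seen.add(x);
      }
      return false;
    }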
There are also a few graphs to make the author feel like a scientific researcher writing a paper, instead of what he is actually doing, which is posting a question that, quite frankly, could with a little extra thought fit in a tweet.
It's a serious problem with no clear solution.