It describes how a web browser, a piece of software with extremely high inherent complexity, interacting with the operating system's memory allocator, another piece of software with high inherent complexity, combined with a rarely used Gmail feature, can trigger complicated interactions and cause major problems due to hidden bugs in various places. This type of apparently "simple" lockup requires "the most qualified people to diagnose".
These problematic interactions cannot be avoided by running fewer "gadgets" on the desktop; they can be triggered and cause lockups even on a system that otherwise performs well. Installing a Linux desktop doesn't solve this class of problem (even though this specific bug doesn't exist there).
The questions worth discussing are: why/how does this happen? How can we make these problems easier to diagnose? What kind of programming language design can help? What kind of operating system/browser architecture can help? How can we manage complexity and the problems that come with it, and what are its implications for software engineering and parallel programming? Etc.
From another perspective, bloated software is also an on-topic question worth talking about. But instead of the usual talking points of "useless programs wasting CPU cycles" or "install minimal Debian", we can ask questions like "does _ALL_ modern software/browser/OS have to be as complex as this?", "what road has led us to today's complexity?", "what encouraged people to make such decisions?", "can we return to simpler software design, at least sometimes?" (e.g. a vending machine near my home, trivially implementable with BusyBox or even a microcontroller, now comes with a full Windows 7 or Ubuntu desktop! Even the advertising screens use Windows 8, and sometimes BSoD, even though all they need to do is show a picture. Same thing for modern personal computers.), or even "was Web 2.0 a mistake?" (and here we are on Hacker News, one of the fastest websites in the world!). These topics are also interesting to talk about.
While these things are important, to me the critical phrase in the article is this: ...It seems highly improbable that, as one of the most qualified people to diagnose this bug, I was the first to notice it...
My system hangs when typing text all of the time. Reading this article, it seems to me that 1) it probably hangs for tens of millions of other people, and 2) nobody has either the time or the money to do anything about it.
That sucks. Additionally, it appears to be a situation that's only gotten worse over time (for whatever reason).
You can look for potential answers, as you point out. More important, however, is the fact that nobody is aware of the scope of these problems. Millions of hours lost, the situation getting worse, and there's nobody hearing the users scream and nobody (directly) responsible for fixing things. In my mind, figure those things out and then we can start talking about specific patterns of behavior that might limit such problems in the future.
tl;dr Who's responsible for fixing this and how would they ever know it needs fixing? Gotta have that in place before anything else.
But now desktop apps have the same issues. And it's not going back to where we were. So I guess we'll get used to it.
Or, if you prefer your optimism a bit more dystopian-flavored, some megacorp will come around with a walled garden whose user experience is just so good that users will flock to use it, and the rest of the industry will have to adapt to compete.
In either case, I don't think getting used to it is our only choice :)
The answer is ever-increasing complexity, no?
I don't want to over-state this, but it's a hell of a lot more important than people think, mostly because it attacks you in little bits here and there. It's never a direct assault. We are creating a consumer society in which we're becoming slaves to the tech. It entertains us, it connects us, it remembers for us, it guides us. All of that might be fine if that's your thing. But there are dark sides too. These kinds of hidden bullshit annoyances are one of the dark sides.
The root of the darkness is this: if you steal just a few minutes per day here or there with hung-up text editors and such, how many man-years of productivity are you stealing from mankind?
I really think we need to go back to the metal and start designing systems with separate fault-tolerant systems dedicated to being humane to the users by invisibly handling the kinds of things that keep wasting huge parts of our collective lives.
Or, as you said, we could just keep adding complexity. That's always the answer, right? sigh
I think that we got to where we are now exactly because of the addition "but only if you have to deal with it". Software consumes many useless cycles exactly because developers on all layers shift their responsibility to deal with complexity to other layers (either the CPU, or the downstream developer). Sometimes it's because they have to, but most of the time it's simply because there's too much distance between producing and consuming developers.
being humane to the users
"users" all too easily implies end-users only. I'd add that developers also need to be humane to downstream developers (circle of influence and all that), including better API design and better instrumentation hooks. But that latter would be adding complexity :(
That's a bad example. My BMW E90 has a gasoline direct injection system, which was state of the art when the car was made 9 years ago. It is very complex and the parts it is made from are very expensive. The BMW specialist tells me the misfire my car has will therefore cost more than £2500 to fix, and even then they'd be guessing.
It would be better if car engines were simpler, like they used to be before they had to start passing artificial emissions tests that don't measure the impact for the whole life-cycle of the vehicle.
When I travel, I get a rental that somebody else worries about.
These things are as complex as we will tolerate. I love new vehicles and have a blast driving them while traveling, but frack if I want to have to update and reboot my car. What kind of living hell is it where everything we touch is complex like this?
Modern cars are FAR more reliable than anything made in 1978, this is a simple fact proven by mountains of data. Cars last far longer than they used to; you can easily go 200k or 300k miles with basic maintenance, and I'm sorry, but despite what you might want to believe, that was just not the case in 1978.
And BMWs are terrible examples; those cars seem to be designed for expensive and necessary maintenance, so they can extract more profit from their owners. Japanese and American cars aren't like this.
I wonder if there is a chance we could take another try at that.
One of the most effective measures to combat such issues is to... reduce the system's complexity. E.g. by not having another VM running on top of the OS just to read and write e-mail.
Since this won't happen any time soon due to various reasons, the only reasonable thing left to do for most of us is to grab some popcorn and watch how the software development world struggles to contain the mess we made and fail at it.
And yes, web browsers are becoming an OS on their own. I consider that a failure of the underlying OSes we have. Tabbed browsers are awesome, but they exist because OSes and standard desktops (GUI toolkits) didn't come up with decent ways to handle that. Browsers are also trying to implement fine grained access to resources - because our OSes haven't managed to do that for them yet. Memory management? I have no idea why you'd do that in ANY application software today. Actually there is a reason - people don't trust the OS or think they can do something better, but it ends up creating extra complexity. Complexity is categorically bad and should be avoided unless it's the only way to do something. Remember how X got its own memory and font management? Same thing.
We have a GIANT address space to play with. Why not use it?
They’re not actually using 128MB per function.
Interesting that you ask how to diagnose and manage the complexity, but not how to avoid it. Do we really need a more or less complete OS+VM (aka web browser) running on top of another OS (Windows etc.) to read e-mails?
Original post updated!
That being said, I do agree with a lot of people that there is a lot of bloat. But often this bloat is caused by a lack of understanding of the complexity underneath what they are building. If things like this were more generally known and understood, problems like this memory issue in a Google application would be rarer.
my own solution to my hate for bloat is to write my own software from scratch. and before i complete that lifetime task i feel it's unfair to complain at others who spend their entire lifetime making programs you use because they made it a little too bloated for you due to whatever reasons...
I think the answer to your questions worth discussing, why/how this happens and how to make it easier to diagnose, is that more people like the writer of this blog are so kind as to share their findings with us :)
There's an argument to be made that maybe 'security' isn't worth as much as the blogosphere thinks it is. Like everything else in life, it's a trade off, and because it is in their best interest the security "experts" do their best to sensationalize and promote paranoia and over-reaction to every little potential problem without regard for the cost, namely inconvenience and slow software.
What was the solution to Meltdown and Spectre again? Oh yeah, make everything slower on the off chance someone will use a timing attack to maybe slowly exfiltrate some information from memory that might be important. If you're a cloud host that tradeoff is probably worth it, if you're a desktop user outside of an intelligence organization it probably isn't, but you'll pay the cost none the less. 1% here, 2% there, no big deal right? But it sure adds up. Do an experiment: install 2 VMs, one with Windows Server 2016 (or Windows 10), and one with Windows Server 2003 (Or Windows XP). The 2003 (XP) VM will be so much more responsive it will freak you out because you aren't used to it. How much of your life has been wasted waiting for windows to appear and start drawing their contents? What are we getting in exchange?
How many 2005 era applications, print drivers, toolbars or screensavers, and whatever else was cool in 2005, can you install before the machine is as responsive as a 300 baud connection?
XP era was probably peak crapware with people having IE with 12 added toolbars and unusable everything. Often solved by buying a replacement machine because the old one got so slow.
Computer insecurity is costly and counterproductive. It helps criminals and the occasional oppressive regime walk us backwards, mess up lives, and mess up businesses. I don't think privilege escalation and encryption key theft should be taken lightly. Abusable things get abused.
Money is not an excuse, because browsers/OSes/languages are already HUGE money losers.
Sometimes it's ok, sometimes it's not. I tend to wish that we could get a lot better at building systems, but that involves a number of difficult problems that people far smarter than I have been thinking about for far longer than I've been alive.
The future in my head doesn't have so many systems developed by accretion, but maybe that's how it has to be (for now).
- - - - - - - -
Browser (doing display things if not more)
- - - - - - - -
Haven't we figured out thread prioritization by now? Can't we make sure something draws 60 times per second while things are going on in the background? My Android Studio build should be totally isolated from my inbox.
I know this is a bit orthogonal to the article and that I'm certainly not well informed about Operating Systems these days, I'd love to get schooled in the comments.
Both OSX and Windows 10 do suffer from UI lag but, in my experience, it is far worse on Windows 10 to the point where I have come to absolutely detest Windows 10. It's particularly bad on the login/unlock screen, which often takes multiple seconds to even appear when you get back to your machine - frustrating when you're in a hurry.
With that said, some of it is certainly application specific. Office 365 Outlook, for example, is particularly egregious in this regard: switching between windows, or between mail and calendar, is awful. Microsoft Teams also regularly hangs for multiple seconds when switching between teams or between chats. Extremely aggravating.
Edit: on the other hand, computers do start up a heck of a lot faster than they used to.
At least in Windows you aren't at a TTY that echoes everything to the screen by default until PAM has hauled itself (and its umpteen libraries) off the disk (yes, people still have 5400RPM HDDs :D), initialized, and put the terminal into no-echo mode...
I don't know much about the situation on Windows but at least on linux your HDD is hardly going to cause lags if you have a well configured hardware setup.
Laptop users are many, and they care about boot up times. Especially since Linux power management is still awful.
On my current X1 Yoga 1st gen, I do regularly get almost 1.5x the uptime on battery compared to windows 10.
We got a batch of X1 Carbon 6th gen, which are "optimized for windows 10", where the S3 sleep state was replaced by S0I3, which is OS-assisted. The X1 Yoga with S3 can both suspend and resume faster than windows 10 does, despite not having S0I3 (and has longer battery life to boot).
And by fast I mean that by the time the lid is up, the OS is ready. By contrast, windows 10 seems to always shuffle for several seconds after resume, even on the carbon 6th gen.
One of the problems here is that power drain can be caused by different components. Graphics chips are a popular culprit, but other components can play a part too. For example, a friend of mine has problems with his device waking up from s2idle during transport. That is not exactly what you would expect in such a discussion, but from a user's perspective, he has less time to work with his device given his usage pattern.
For me, this isn't much of a problem as I prefer working on a desktop, but I wouldn't accuse someone of being a liar just because he had a bad experience with power consumption on Linux.
This has the advantage of not requiring any manual intervention every time your kernel is updated. The downside is that if you try to dual boot W10, it will BSOD on boot with an ACPI error. To overcome this, take the original dsdt.aml and load it in a custom (not auto-probed) menu entry for windows. You can just clone the auto-probed line, and add before the chainloader line:
Sadly, the 6th generation of the Carbon also runs waaay hotter than the 3rd, and I noticed occasional coil whine that was previously absent. In normal use the kernel regularly logs thermal warnings. It seems that power and heat are not well managed in this generation, and I see similar reports for the X1 Yoga 3rd gen.
My system is exceptionally lean. I run bspwm and the radios are always off except WiFi. Still, I feel like I could get a lot more from this laptop. I cannot (and I certainly wouldn't want to) install Windows on dual boot to make a comparison though.
What distribution do you use?
I personally use debian unstable with a tiling window manager (awesomewm) without a specific DE. However, I generally setup Mint for all my colleagues on the same laptop lines, and there's one colleague running Arch. We all have very similar battery lifetimes.
High-end laptop lines (Dell, HP, Lenovo) actually have pretty good Linux support; they almost always have Intel i5+ CPUs with integrated graphics, which are all very similar. There is next to zero setup required in almost any case. This is known. The drivers are tried and tested. I've never experienced any random "power management issue". Suspend/resume/battery consumption/ACPI all work perfectly fine. What else qualifies as "power management"??
I've been supporting a team of 20+ people with these lines in a mixed environment, and linux since more than a decade was always basically plug and play.
There have been some quirks in some models, which generally required some tweaks in the kernel boot line, to fix issues with backlight tweaking.
Sure, but Linux really ought to shine on the low-end laptops.
But this doesn't save you from crappy hardware, which is why this kind of argument is a non-starter for me. How well do current Windows editions work on low-end crappy laptops that sell at the lowest tier? Not great. These laptops have plenty of little issues on Windows as well.
Maybe better than linux, because the drivers have way more work-arounds than linux has.
Have you ever seen the internal linux quirks table to work around buggy and downright horrid hardware? That's what you get with those. If it works at all is thanks to the patience of people that put in the time to work those issues around. I frankly do not blame linux if it doesn't work well with such hardware.
Ah! So that's what's going on :D.
And you can always run strace -f on whatever handles login (getty, logind, systemd, etc), drop to a TTY and login, to see all the stuff that gets loaded.
"Enough" meaning "infinite", because you're going to have something leaking all that memory. Task manager doesn't even show memory usage by default (add the "commit size" field), only the working set (and the swapping is insanely aggressive, so memory leaks just don't show up). Nobody seems to actually check that; even built-in stuff like Windows Update leaks pretty badly.
Oh and the memory accounting doesn't even work.
I've also found macOS provides the smoothest experience. I haven't found W10 that bad, but I haven't used it that extensively. I really only boot into Windows to play games these days.
Edit: Having tried in on my laptop, installing the gjs + libgjs0g version 1.53.3 debs from http://gb.archive.ubuntu.com/ubuntu/pool/main/g/gjs/ onto a bionic install seems to work fine. Worth trying if this bug is affecting you?
that's not what mainline kernel optimizes for, you have to use liquorix or another kernel (https://liquorix.net/)
I've recently given KDE Plasma a try and was really impressed by its speed and smoothness.
I think it shows when your desktop does not rely on interpreted languages so much.
This isn’t how it used to be. Sigh.
I'm playing with a Win10 machine at home, the CPU and RAM is far in excess of our work machines but it is so slow to use. It seems to regularly become unresponsive, programs take ages to start, it seems to be continually working away in the background doing 'nothing'. It's a truly horrible experience.
It's buttery smooth and I get no noticeable lag. Exactly 7 seconds from power button to login screen on a cold boot too.
Just today on Patreon, I was going through a creator's posts. After about 100 posts or so the page was barely usable. Animations took minutes to complete, loading more posts took 30 seconds, and Firefox (Chrome quit after 80 posts when Linux's OOM killer decided that the fun was over) was struggling to repaint the viewport, leaving lots of white areas. By the end, other applications were severely lagging too, both because Patreon's webshit was pulling 80% of all cores while doing nothing on the page and because the Linux kernel was shoveling everything into SSD swap like crazy. Animations didn't work at all (if I'd waited 60 minutes it might have shown the first frames), and clicking links to open in a new tab took about 15 seconds until the JS behind the scenes completed.
It's simply shitty design that there is no obvious way to say "show all posts from January 2016" and jump back month by month. Or at least to UNLOAD posts I've long scrolled past.
I have 8 cores and 32 GB of RAM at my disposal; a website has no excuse for performance that shitty. Especially when it's a platform where lots of money flows.
But hey, it's cheaper this way.
The issue is not thread prioritization. Or processes. Your mail app and Android Studio are both competing for resources, and either Android Studio will lag because the mail app is running on all the CPUs, or the other way round. The OS doesn't have a way to tell which is more important, and trusting applications to tell it isn't really reliable (devs will just say "I'm the most important").
Part of the real issue is that a lot of modern apps are not engineered to save resources. They just take what they need, and the user had better provide enough CPU and RAM. There is no sense of self-limiting in a lot of modern apps.
Frontend devs should wake up to the reality that they aren't running on the only instance of the Chrome browser with only one tab open. There are other apps around too. There are other browsers around too. Sharing resources gives the user a better experience than just taking them all for yourself.
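As a trivial sketch of that self-limiting idea (purely illustrative, nothing from a real app): size your own worker pool to a fraction of the machine instead of grabbing every hardware thread, leaving headroom for whatever else the user is running.

    // Hypothetical example: deliberately leave half the hardware threads
    // free for the rest of the system instead of saturating all of them.
    #include <algorithm>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        unsigned hw = std::thread::hardware_concurrency();   // e.g. 24
        unsigned workers = std::max(1u, hw / 2);              // deliberately modest

        std::vector<std::thread> pool;
        for (unsigned i = 0; i < workers; ++i)
            pool.emplace_back([i] { std::printf("worker %u doing batch work\n", i); });
        for (auto& t : pool) t.join();
    }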
Edit: Apart from being able to confirm the less-than-stellar performance problems, I had no trouble with memory at all. Chromium hovered at about 2G of memory. It fairly aggressively garbage collected as I browsed different sections and different creators.
This case isn't some web app that uses infinite memory; the contended resource is a lock.
Our processors are no longer advancing nearly as quickly as people can create demands for them. Case in point: Virtual Reality, which has technically been on the market for years but remains out of reach for most users.
If processors are not improving, everything else slows down. Graphics, games, and the internet ten years from now will look pretty much the same as it does today.
My point is, by design, an operating system that has a fully async interface can dispense with locks altogether. Resource contention is then properly resolved through queuing. The OS can then strictly enforce a priority scheme on its various queues, preventing an opportunistic thread from dominating the system. Or enabling higher priority requests to be serviced first. But that decision is made intelligently, not as a result of optimizing to minimize context switching.
No locks means this can never happen.
Async style enforces a kind of minimalism that in my opinion simplifies the problem space overall. Obviously, one has to learn different approaches to problem definition and solution to embrace event-driven coding.
To me, comparing linear code that handles an exception when your call stack is 5 levels deep (or maybe 3 in a different use case), where you have to bubble the appropriate handling up through each of those levels, with async code where your closure has a 4-line if/else construct that fires one event for success and a different event for failure, async wins the simplicity competition every time.
If race conditions are an issue, events should be dispatched through a single-threaded finite state machine dispatcher for resolution. Such events can fire up multiple worker threads to do their bidding, but when state changes, that goes through the dispatcher which inherently resolves all races.
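To make that concrete, here is a minimal sketch of such a single-threaded dispatcher (the names and structure are mine, not from any real OS): other threads only post events; one thread drains the queue and runs the state machine, so transitions are serialized by queue order rather than by locking the shared state. The one mutex left here guards only the queue hand-off; a lock-free queue would remove even that.

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    struct Event { std::string name; };

    class Dispatcher {
        std::queue<Event> q_;
        std::mutex m_;                   // guards only the queue hand-off; the
        std::condition_variable cv_;     // state machine itself needs no lock
        bool stop_ = false;
    public:
        void post(Event e) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(e)); }
            cv_.notify_one();
        }
        void shutdown() {
            { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
            cv_.notify_one();
        }
        void run() {                     // the only thread that mutates state
            int state = 0;
            for (;;) {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [&] { return stop_ || !q_.empty(); });
                if (q_.empty()) break;   // stop requested and nothing left to do
                Event e = std::move(q_.front());
                q_.pop();
                lk.unlock();
                ++state;                 // transitions happen here, in queue order
                std::cout << "state " << state << " after " << e.name << "\n";
            }
        }
    };

    int main() {
        Dispatcher d;
        std::thread io([&] { for (int i = 0; i < 3; ++i) d.post({"io-done"}); });
        std::thread ui([&] { d.post({"keypress"}); });
        io.join();
        ui.join();
        d.shutdown();
        d.run();                         // drain and process everything in order
    }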
If web developers never learn to write multi-processing, multi-threaded, multi-programming, event-driven, decentralized or distributed applications the web will forever look like it looks today.
This is such a contradictory sentence. The whole point of Moore's law was that software developers could do what they wanted, freed from the limitations of hardware, because in 18 months, when their product was ready for launch, the hardware would have become capable enough to handle it.
Forcing developers to focus more and more on performance (via threading, distributed systems, and custom-die co-processors), rather than on innovating new applications, is a huge shift. It's like the hardware developers gave up and told the software developers that it's in their hands now.
Also, comparing a hello world program 10 years ago to today is meaningless because nobody uses hello world to do anything. A better example would be Excel in 1998 and Excel in 2018. Sure, today it's 500 megabytes instead of just a handful, but it does quite a bit more (for better or worse).
Also, if general-purpose processors are now hitting a wall, it's only a matter of time before GPUs do.
Locks are hard.
Yes, especially when the OS does nothing to help a backing-off contender acquire a resource that is greedily dropped and contended by another thread.
> I'll go into more details in part two
Great! In anticipation of that, I have a couple questions:
> but part of the problem would have reliably disappeared if I had been running a single-core CPU
Single-core or single-thread? Does the atomic instruction switch to the other HyperThread generally?
Do you see the same symptoms if you have a sufficient number of other unrelated CPU-intensive threads trying to run that the OS will want to deschedule the aggressive contender thread(s) from time to time?
The ways to deal with this that come to mind are having the aggressive contender thread(s) sleep from time to time, or yield, in the hope that the decision about waking up a thread recovers some of the fairness lost by the decision about which lock waiter wins. Perhaps one could just put the lock's cache line on as equal a NUMA footing as possible (a flush, for instance) so the aggressive contender has a chance to lose a race. However, having to reason about cache coherency protocols (differing from vendor to vendor, and even between same-vendor models) seems like a huge headache that anyone would want to avoid if possible. Is a fix described in the long crbug discussion?
Making the locks fair or occasionally fair would solve this problem, but perhaps cause others.
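For illustration, a rough user-space sketch of the back-off/yield idea from the comments above (not what Windows or Chrome actually do): a contender that keeps losing progressively stops hammering the lock, which also gives other waiters a better chance at the cache line.

    #include <algorithm>
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Try the lock a few times, then yield, then sleep for increasing
    // intervals, so a losing thread stops pounding the contended cache line.
    class BackoffSpinLock {
        std::atomic<bool> locked_{false};
    public:
        void lock() {
            int attempts = 0;
            while (locked_.exchange(true, std::memory_order_acquire)) {
                ++attempts;
                if (attempts < 16) {
                    // brief busy wait: cheap if the holder releases quickly
                } else if (attempts < 64) {
                    std::this_thread::yield();        // give someone else a turn
                } else {
                    // sleep longer the more often we lose, capped around 1 ms
                    int exp = std::min(attempts / 64, 10);
                    std::this_thread::sleep_for(std::chrono::microseconds(1 << exp));
                }
            }
        }
        void unlock() { locked_.store(false, std::memory_order_release); }
    };

    int main() {
        BackoffSpinLock lock;
        long counter = 0;
        std::vector<std::thread> threads;
        for (int t = 0; t < 4; ++t)
            threads.emplace_back([&] {
                for (int i = 0; i < 100000; ++i) { lock.lock(); ++counter; lock.unlock(); }
            });
        for (auto& th : threads) th.join();
        std::printf("counter = %ld\n", counter);      // 400000
    }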
I think that an appropriate CPU-saturating background task might have made my machine more responsive, which is deliciously weird.
I too wonder why UI threads are still so obnoxiously tied to... everything else. Put UI in the fast lane! (At least until some bug tries to make a bajillion calls a second)
Then I remembered how much more complicated it is to write proper multithreaded GUI code :P
(Although with 24 cores...)
Totally agree that Windows is the worst of the bunch, and it's still a huge problem on macOS.
On Fedora+i3wm, when my desktop opens from the login it is "immediately" ready for input. I've asked some people to compare macOS and my laptop and they say they don't notice a difference, so I think people are conditioned to waiting, with these little unconscious pauses they don't notice.
I wonder if there is a way to do accurate and meaningful timings of responsiveness in these situations.
This is so ugly I hesitated to post it, but here: https://github.com/jakeogh/glidestop
In the case of glide, having tabbed take care of all of that would be much easier, or even better, it could be integrated directly into a "tabbed-like" process that acts more like a window manager and does not necessarily have "tabs". I'm thinking of a more generalized version of tabbed with a --notabs switch or something.
Add locks and you add many more causes for the main thread to be blocked beyond pure computation.
And that leads us to the real problem. Chips are not advancing at the rate they should. Additional performance gains come from adding more cores to the same die, not from making the cores faster. This forces programmers to think about parallelism, which makes everything harder.
In the mid-90s, chips were improving so fast that you didn't need to consider special tricks; you could just wait 3 or 4 months and a chip would come out that handled your application.
Now single processor performance is basically at a standstill and it's up to the software developer to spread work in parallel to make things faster.
A weird interpretation of a singular event in the history of technology. No wheel maker complained because wood wasn’t getting twice as strong every 18 months.
We’ve been taught for decades that chips will double in performance every 18 months (I know Moore said transistor count, not performance). It’s now turning out to be untrue. We think about the singularity all the time, but haven’t properly considered what will happen if technology improvements stall.
But good luck with that! You're fighting against the most powerful force in the universe: economics (more specifically sunk cost into existing investments). "Soft"ware is a lot "harder" once economics are taken into account.
The field has made progress in "figuring out" the general problem in the form of other programming models, but that stuff is not close to gaining popularity on the desktop. Things like STM, lockless persistent data structures, and shared-nothing concurrency.
There are also attempts to make C++ & locks real-time friendly, but the programming model is already prohibitively hard wrt correctness, and the added complexity explosion from handling aborted locks would make it completely unmanageable.
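A minimal sketch of the shared-nothing style mentioned above (everything here is illustrative, not from any real desktop framework): the worker owns its state outright, and the only shared surface is a bounded single-producer/single-consumer ring built on two atomics, so there is no lock for anyone to camp on.

    #include <array>
    #include <atomic>
    #include <cstddef>
    #include <cstdio>
    #include <optional>
    #include <thread>

    // Bounded single-producer/single-consumer ring buffer built on two atomics.
    template <typename T, std::size_t N>
    class SpscRing {
        std::array<T, N> buf_{};
        std::atomic<std::size_t> head_{0};   // advanced only by the consumer
        std::atomic<std::size_t> tail_{0};   // advanced only by the producer
    public:
        bool push(T v) {
            std::size_t t = tail_.load(std::memory_order_relaxed);
            std::size_t next = (t + 1) % N;
            if (next == head_.load(std::memory_order_acquire)) return false;   // full
            buf_[t] = std::move(v);
            tail_.store(next, std::memory_order_release);
            return true;
        }
        std::optional<T> pop() {
            std::size_t h = head_.load(std::memory_order_relaxed);
            if (h == tail_.load(std::memory_order_acquire)) return std::nullopt;  // empty
            T v = std::move(buf_[h]);
            head_.store((h + 1) % N, std::memory_order_release);
            return v;
        }
    };

    int main() {
        SpscRing<int, 64> inbox;             // the worker's only shared surface
        std::atomic<bool> done{false};

        std::thread worker([&] {
            long total = 0;                  // private state, never shared
            for (;;) {
                if (auto msg = inbox.pop()) {
                    total += *msg;
                } else if (done.load(std::memory_order_acquire)) {
                    // drain anything pushed between the empty check and 'done'
                    while (auto late = inbox.pop()) total += *late;
                    break;
                } else {
                    std::this_thread::yield();
                }
            }
            std::printf("worker total = %ld\n", total);
        });

        for (int i = 1; i <= 100; ++i)
            while (!inbox.push(i)) std::this_thread::yield();   // back off if full
        done.store(true, std::memory_order_release);
        worker.join();
    }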
Software will always expand in resource usage as computing becomes faster. This ought to be an adage that more programmers should think about.
My machine randomly has input delay or mouse lag even after a fresh install...
And a slow hard disk is often the bottleneck, in case of windows.
Both of those are designed to let developers easily offload work to background threads and prioritise queued work for the user. No open-for-interpretation thread priorities, but named QoS classes (User Interactive, User Initiated, Utility and Background). That abstraction makes much more sense.
We just need more developers to make use of it.
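For concreteness, a tiny sketch using Apple's libdispatch C API from C++ (dispatch_async_f, dispatch_get_global_queue and QOS_CLASS_UTILITY are the real API; the mail-indexing work is a made-up placeholder): background work is submitted under a named QoS class, and the follow-up UI work hops back to the main queue.

    // Compile on macOS with: clang++ qos.cc -o qos
    #include <dispatch/dispatch.h>
    #include <cstdio>

    static void update_ui(void*) {
        // anything touching the UI goes back to the main queue
        std::puts("refresh the message list");
    }

    static void index_mail(void*) {
        std::puts("indexing mail at Utility QoS");   // work the user isn't waiting on
        dispatch_async_f(dispatch_get_main_queue(), nullptr, update_ui);
    }

    int main() {
        // A named QoS class instead of a raw, open-for-interpretation priority.
        dispatch_queue_t bg = dispatch_get_global_queue(QOS_CLASS_UTILITY, 0);
        dispatch_async_f(bg, nullptr, index_mail);
        dispatch_main();   // hands the main thread to libdispatch; in a real
                           // app the GUI run loop plays this role
    }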
It's funny how tides turn. Initially Gmail was the king of performance.
What is all this stuff even doing?
I've always loved Japanese website layouts. Content is beauty.
Actually Gmail's approach is sane. They just keep old UI working as it used to many years ago although the backend could have changed completely.
they probably have an api in which you can limit the results by category. As the old interface doesn't have any limiter implemented, they're getting all the results back.
you can also disable categories in the newer interface and the effect would be the same (everything in the inbox)
Other than fighting with the calendar occasionally (sometimes needing to force a manual sync to see coworkers' events), Gmail in Thunderbird has been pretty smooth. I have not noticed slow search, slow mail receipt, or application crashes. In fact, the search is often too good, in that it matches hundreds more emails than I usually expect. I usually stick with the quick filter, which is a bit less flexible but generally returns more useful results.
Another advantage, on Linux at least, is that if you copy your ~/.thunderbird folder to another machine, all accounts, GUI layouts, settings, search results, tabs, etc move over flawlessly. I think you just have to re-sign in for Google accounts on the new machine and you are good to go.
I have decided to just leave it be, and let it hog my computer, because it interfaces decently with Google Calendar.
That being said, I can definitely vouch for clients like Claws Mail (a bit ugly, but does its job) and Evolution (super fast, but it's written in C#/Mono)
When, and compared to what? I'm pretty sure it was never faster than Thunderbird let alone something like Mutt.
Outlook the Windows app deserves to be the most awful app ever, but Skype for Business worked harder!
Address space randomization is done because buffer overflows allow exploits. But rather than fixing the underlying problem, we now have complex schemes to spread programs over the entire 64 bit address space to make such exploits unreliable.
If the current C+Unix-era approach to programming and operating systems stays in use, there are only three solutions available...
1. Fix-and-miss all the bugs: safe programming practices.
2. Isolation: limit the damage when something is compromised.
3. Mitigation: make the remaining bugs impractical to exploit.
Isolation is useful to limit the scope of a security breach, but it cannot stop attackers from exploiting the bugs. The only approach able to stop attackers from exploiting existing programs is mitigation: you don't fix-and-miss individual bugs, because bugs are always there; what we need is to stop attackers from exploiting them. Some exploits are easy to stop, hence NX. Others can only be stopped in a probabilistic way, hence ASLR.
This is Google's. The problem manifested in Chrome. From the article:
A fast client running on a lean OS would not exhibit these problems - and AFAIK gmail still supports IMAP.
(I use Fastmail on MacOS&IOS/Safari and have no such issues either).
2. a user interface that requires you to learn two additional declarative languages (HTML and CSS), both of them equally incomplete and crappy.
3. an API between 1 and 2 that is so lacking that you need an external library (e.g. jQuery) to reduce the amount of boilerplate code to a sane level.
4. a network protocol (HTTP) that was designed for static web pages with a few pictures and that has serious performance issues for anything more complex.
5. and finally, the whole thing implemented in a language (C or C++) that is fast but offers you plenty of opportunities to shoot yourself in the foot in obscure ways, security-wise.
Things are changing fortunately (HTTP/2, QUIC, Rust).
Pretty certain Google Apps administrators can disable IMAP and POP for their organizations.
It also uncovered a "bug" (performance weakness?) in V8 that they were able to fix so that fewer CFG blocks were allocated.
So kind of a win/win in the end, bugs fixed, world a slightly better place.
They implemented a freelist; it's a common workaround for problematic allocators, but it has its own issues (https://www.tedunangst.com/flak/post/analysis-of-openssl-fre...)
Because the memory is fully freed and reallocated this also avoids security concerns.
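A minimal sketch of the freelist workaround described a couple of comments up (illustrative C++, not Chrome's actual code): freed fixed-size blocks go onto an intrusive list and get recycled on the next allocation instead of going back to the problematic allocator, with exactly the trade-offs the linked OpenSSL analysis describes (memory is never really released, and use-after-free bugs get masked).

    #include <cstddef>
    #include <cstdlib>
    #include <new>

    class FixedBlockPool {
        struct Node { Node* next; };
        Node* free_ = nullptr;              // head of the freelist
        const std::size_t size_;
    public:
        explicit FixedBlockPool(std::size_t block_size)
            : size_(block_size < sizeof(Node) ? sizeof(Node) : block_size) {}

        void* allocate() {
            if (free_) {                    // reuse a previously freed block
                Node* n = free_;
                free_ = n->next;
                return n;
            }
            void* p = std::malloc(size_);   // fall through to the real allocator
            if (!p) throw std::bad_alloc();
            return p;
        }

        void deallocate(void* p) {          // never calls free(); blocks are recycled
            Node* n = static_cast<Node*>(p);
            n->next = free_;
            free_ = n;
        }

        ~FixedBlockPool() {                 // blocks only truly return here
            while (free_) { Node* n = free_; free_ = n->next; std::free(n); }
        }
    };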
I don't understand how all these services must use 100% CPU for minutes, on a fast cpu core, and why they must run ANY logic on startup, which is when the user is most likely to use his machine. Don't even read your app config on start of process. Sleep a few hours and THEN wait until the machine is idle and THEN do logic! Using power save modes is no better. When you take it out of sleep after 12 hours, the services are very eager to check for updates or antivirus or backup again, regardless of time.
I wish windows could just let me use my machine for what I intend to - which is use ALL my cores for ONE foreground application, ONLY. I'd be happy to boot into a freaking "game mode" or "work mode" which is like safe mode and has ZERO crap running (no unnecessary services, no scheduled tasks can start and so on).
As for free software: game titles are rarely free, and most don't even run on a free OS, so using Linux (or just not switching on the gaming machine) solves the problem of CPU use at the cost of not being able to run the app I want. That's just not a very good solution.
Unfortunately it quickly gets really complicated so a bit of hand waving is necessary.
The sparse/virtual bit mask is an optimisation technique so the validation of the target address is quick.
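As a toy sketch of that idea (this is not Windows' actual CFG encoding, which spreads a sparsely committed bitmap across the whole address space with a finer-grained scheme): mark every valid indirect-call target in a bitmap indexed by address, and validating a target is just a shift, an index and a bit test.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Toy call-target bitmap: one bit per 16-byte slot of a pretend code region.
    class CallTargetMap {
        static constexpr std::uintptr_t kBase   = 0x400000;   // pretend image base
        static constexpr std::size_t    kRegion = 1 << 20;     // pretend 1 MiB of code
        std::vector<std::uint64_t> bits_ = std::vector<std::uint64_t>(kRegion / 16 / 64);

        static std::size_t slot(std::uintptr_t a) { return (a - kBase) >> 4; }
    public:
        void mark_valid(std::uintptr_t addr) {                 // done at load time
            bits_[slot(addr) / 64] |= 1ull << (slot(addr) % 64);
        }
        bool is_valid_target(std::uintptr_t addr) const {      // the fast-path check
            if (addr < kBase || addr >= kBase + kRegion) return false;
            return bits_[slot(addr) / 64] & (1ull << (slot(addr) % 64));
        }
    };

    int main() {
        CallTargetMap cfg;
        cfg.mark_valid(0x401230);                              // a pretend function entry
        std::printf("%d %d\n", (int)cfg.is_valid_target(0x401230),
                               (int)cfg.is_valid_target(0x401244));  // prints: 1 0
    }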
Boot speed from off to login is less than 5 seconds. From hibernate it's instant.
My girlfriend brought a similar desktop to work and people with Windows couldn't believe the machine was that fast. It runs circles around their new Windows laptops with tons of memory and CPU cores.
Firefox loads in a second. Kingsoft office also loads in less than a second.
I think it's time people start to understand what is out there now as options for their laptops. And in particular, deepin desktop is very fast and beautiful.
There's this concept of right tool for the right job. And even if you save 2 seconds of boot time, that is meaningless when you encounter a similar bug with your software and waste a day, or when you simply waste time because you needlessly switched to an unfamiliar OS.
Just as a note, I have a Windows 10 booting on a decade old Lenovo netbook. That's a single core 1.6GHz Atom with 1.5GB RAM. It boots in ~10s. Much of it is BIOS. And yes, I can type an email with no issues. You should really try switching to Windows 10 and this dirt cheap netbook. :)
The year of the Linux desktop was some time in the 00's, just that few people noticed.
A SoC-like, high performance, low power mobile CPU. Its performance-per-MHz ratio is high, which allows reasonable performance at a low clock frequency.
If I really want to extend battery life, forcing the CPU clock down to 800 MHz is a good idea.
This seems to be right for "most people". But there are definitely a few people who are annoyed by issues like this but aren't in a position to troubleshoot and report it.
Even I, after helping people with computers since the mid-nineties, still can't troubleshoot like that. I'll fall back to latency checking and turning off services one by one, combined with a fair amount of experience plus googling.
Because these days, unlike in the 90's, it's no help waiting for the next Pentium processor to come out, this usually results in a heavy optimisation cycle in the underlying engine. For example, Firefox has advertised a speed-up several times in the last 10 years, each coming from a focused effort to rewrite or optimise the JS engine, the rendering engine, or something else. Then things accumulate again, until they are too slow.
Obviously the cycles aren't wasted, since you can do things with a few lines of high-level script that would have taken months to implement in the "good old times". But this development inevitably creates bizarre flashbacks where, occasionally, you're doing something simple like typing text on today's monster machines and it takes a few seconds for the screen to catch up with your typing.
When I started programming, things were more or less instant. Typing was instant. There was a very short code path from handling the interrupt to updating the screen. Computers reacted more like physical apparatus. Terminals at shops, warehouses, and hospitals, with text-based programs for updating data, were pretty much instant too. Good clerks could bang their way through multiple subscreens of their program in seconds, and you were checked in, goods reserved, or patient data updated. Later, when multitasking hit the mainstream, flipping between programs was instant. (Not on Windows, though.) Things like I/O took their time, of course, but at one point there was a peak moment where a software machine had nearly as low latency as a physical machine.
From those times things have mostly gone downhill. Yes, we have immense processing power and near-endless fast memory and disk storage. You can switch between browser tabs of 300 MB each pretty fast, but there is a constant, nagging sense of slowness present all the time. Sometimes switching that tab or bringing up another program takes longer, surprisingly, or something simple just doesn't happen right away. The feeling of instant responsiveness is broken into shards: you can still see a reflection of it if you happen to look at the right angle, but mostly its crumbs point elsewhere.
Things like BeOS tried to reach back to that, with varying but notable success. But real success in terms of popularity and market penetration seems to come from piling up stuff until things get too slow. So I doubt we'll ever get back to the old world of instant response, even though processing power keeps climbing.
For the last couple of years it seems I'll get some issues on some Google properties if I dare use a different browser than Chrome.
For a while it was search results doing interval training on my CPU, while the page was supposedly idle.
The last time it got to the point where I started troubleshooting, it was Calendar that acted up.
You'd think a company like Google had the resources to verify the UI across at least the 3 or 4 biggest browsers on the three biggest desktop platforms - but it doesn't seem like it.
Should they ever want to do that, that is. I think I've read multiple discussions in the past about how Google optimizes exclusively for Chrome while hurting performance and compatibility for every other browser. Which is why Chrome is now basically called the new IE6.
Seriously. Adblock is the way you have to fight back, to change the economics of the internet. Install it, use it indiscriminately, and make sure you tell others to use it.
My company makes a decent chunk of money from ads, btw. Not because we can't make money in another, better and cleaner way, but because it makes no sense to leave money on the table when adblock rates are so low.
Says the person with +500 tabs on chrome android
I too hate side activities when I'm in the zone but you have to admit this situation can eat away at your audience. :(