24-core CPU and I can’t type an email (randomascii.wordpress.com)
629 points by ghuntley 62 days ago | 310 comments



It seems many comments missed the point. The article is not about how bloated modern software is, how many useless features and programs are wasting CPU cycles on pointless jobs, etc. (Yes, modern software is bloated; for this reason I'm using the MATE desktop on a minimal Gentoo installation, but this is not what the article is about.)

It is describing how a web browser, a piece of software with extremely high inherent complexity, interacting with the memory allocator of the operating system, another piece of software with high inherent complexity, combined with a rarely used feature of Gmail, can trigger complicated interactions and cause major problems due to hidden bugs in various places. This type of apparently "simple" lockup requires "the most qualified people to diagnose".

These problematic interactions cannot be avoided by running fewer "gadgets" on the desktop environment; they can be triggered and cause lockups even if the system in question otherwise performs well. Installing a Linux desktop doesn't solve this type of problem (though this specific bug doesn't exist there).

The questions worth discussing are: why/how does it happen? How can we make these problems easier to diagnose? What kind of programming language design can help? What kind of operating system/browser architecture can help? How can we manage complexity and the problems that come with it, and what are its implications for software engineering and parallel programming? Etc.

From another perspective, bloated software is also an on-topic question worth talking about. But instead of the talking points of "useless programs wasting CPU cycles" or "install minimal Debian", we can ask questions like "do _ALL_ modern software/browsers/OSes have to be as complex as this?", "what road has led us towards this complexity?", "what encouraged people to make such decisions?", "can we return to simpler software designs, sometimes?" (e.g. a vending machine near my home, trivially implementable with BusyBox or even a microcontroller, now comes with a full Windows 7 or Ubuntu desktop! Even the advertising screens run Windows 8, and sometimes BSoD, even though all they need to do is show a picture. Same thing for modern personal computers.), or even "is Web 2.0 a mistake?" (and yet here we are on Hacker News, one of the fastest websites in the world!). These topics are also interesting to talk about.


I get what you're saying, but it seems you already have a place you want to go and are using the article to get there -- much like the other commenters.

While these things are important, to me the critical phrase in the article is this: ...It seems highly improbable that, as one of the most qualified people to diagnose this bug, I was the first to notice it...

My system hangs when I'm typing text all of the time. Reading this article, I take that to mean 1) it probably hangs for tens of millions of other people, and 2) nobody has the time or money to do anything about it.

That sucks. Additionally, it appears to be a situation that's only gotten worse over time (for whatever reason).

You can look for potential answers as you point out. More important, however, is the fact that nobody is aware of the scope of these problems. Millions of hours lost, the situation getting worse, and there's nobody hearing the users scream and nobody (directly) responsible for fixing things. In my mind, figure those things out and then we can start talking about specific patterns of behavior that might limit such problems in the future.

tl;dr Who's responsible for fixing this and how would they ever know it needs fixing? Gotta have that in place before anything else.


We've all been trained by web apps that stuttering, jaggy rendering and hangs are normal and expected. I've railed against web apps for a decade now, mostly to deaf ears. They are so broken, unperformant and unreliable that in a previous era they would never have been released with problems like those.

But now desktop apps have the same issues. And it's not going back to where we were. So I guess we'll get used to it.


Jittery rendering, hangs, and high latency are deal-breakers in Virtual Reality. I think things will get better as a matter of necessity for VR and AR, and then "veterans" from the field will bring back "new ideas" about performance and user experience.

Or, if you prefer your optimism a bit more dystopian-flavored, some megacorp will come around with a walled garden whose user experience is just so good that users will flock to use it, and the rest of the industry will have to adapt to compete.

In either case, I don't think getting used to it is our only choice :)


Except in VR's case they just ask you to purchase a $1000 set of kit, and if you experience stuttering they just shrug and suggest upgrades.


> a situation that's only gotten worse over time (for whatever reason)

The answer is ever-increasing complexity, no?


Complexity is always the enemy, but only if you have to deal with it. My car engine is very complex and I don't ever think about it.

I don't want to over-state this, but it's a hell of a lot more important than people think, mostly because it attacks you in little bits here and there. It's never a direct assault. We are creating a consumer society in which we're becoming slaves to the tech. It entertains us, it connects us, it remembers for us, it guides us. All of that might be fine if that's your thing. But there are dark sides too. These kinds of hidden bullshit annoyances are one of the dark sides.

The root of the darkness is this: if you steal just a few minutes per day here or there with hung-up text editors and such, how many man-years of productivity are you stealing from mankind?

I really think we need to go back to the metal and start designing systems with separate fault-tolerant systems dedicated to being humane to the users by invisibly handling the kinds of things that keep wasting huge parts of our collective lives.

Or, as you said, we could just keep adding complexity. That's always the answer, right? sigh


> Complexity is always the enemy, but only if you have to deal with it

I think that we got to where we are now exactly because of the addition "but only if you have to deal with it". Software consumes many useless cycles exactly because developers on all layers shift their responsibility to deal with complexity to other layers (either the CPU, or the downstream developer). Sometimes it's because they have to, but most of the time it's simply because there's too much distance between producing and consuming developers.

> being humane to the users

"users" all too easily implies end-users only. I'd add that developers also need to be humane to downstream developers (circle of influence and all that), including better API design and better instrumentation hooks. But that latter would be adding complexity :(


> Complexity is always the enemy, but only if you have to deal with it. My car engine is very complex and I don't ever think about it.

That's a bad example. My BMW E90 has a gasoline direct injection system, which was state of the art when the car was made 9 years ago. It is very complex and the parts it is made from are very expensive. The BMW specialist tells me the misfire my car has will therefore cost more than £2500 to fix, and even then they'd be guessing.

It would be better if car engines were simpler, like they used to be before they had to start passing artificial emissions tests that don't measure the impact for the whole life-cycle of the vehicle.


It's actually a very good example. I drive a 1978 GMC pickup. Once a year I drop it off at a place and the guy does maintenance. That and adding gas are all that I do.

When I travel, I get a rental that somebody else worries about.

These things are as complex as we will tolerate. I love new vehicles and have a blast driving them while traveling, but frack if I want to have to update and reboot my car. What kind of living hell is it where everything we touch is complex like this?


This is ridiculous. I have a pretty new car (2015), with GDI, and I sure as hell don't have to "reboot" it. It's been perfectly reliable. The infotainment system does have a few issues, but it has nothing to do with the driveability of the car (the engine, and other critical systems are not tied to it).

Modern cars are FAR more reliable than anything made in 1978, this is a simple fact proven by mountains of data. Cars last far longer than they used to; you can easily go 200k or 300k miles with basic maintenance, and I'm sorry, but despite what you might want to believe, that was just not the case in 1978.

And BMWs are terrible examples; those cars seem to be designed for expensive and necessary maintenance, so they can extract more profit from their owners. Japanese and American cars aren't like this.


A modern car is far safer, far more reliable, far more efficient, far more powerful, and far better for the environment. The added complexity only makes things better, in this example.


My memory may be faulty, but wasn't this one of the design goals of BeOS?

I wonder if there is a chance we could take another try at that.


That's the 80-20 economy, which keeps growing more and more, going hand in hand with "economies of scale". If tens of millions of people are less than 20 per cent of users, so be it, nobody cares. What matters is the remaining 80 per cent.


We could ask a lot of questions, but at the end of the day system complexity will increase the likelihood of so-called "system accidents" for any type of system, not just software.

One of the most effective measures to combat such issues is to... reduce the system's complexity. E.g. by not having another VM running on top of the OS just to read and write e-mail.

Since this won't happen any time soon for various reasons, the only reasonable thing left to do for most of us is to grab some popcorn and watch how the software development world struggles to contain the mess we made and fails at it.


It was eye-opening to me when there was mention of a 2 TiB map being created and something about 128 MiB chunks. I'd just like to smack the person that thought that was a good idea. I can understand thoughts like "but the blocks won't actually be allocated" or some such, but you have to step back and say "WTF are you even doing with anything that large?" Control freak.

And yes, web browsers are becoming an OS on their own. I consider that a failure of the underlying OSes we have. Tabbed browsers are awesome, but they exist because OSes and standard desktops (GUI toolkits) didn't come up with decent ways to handle that. Browsers are also trying to implement fine grained access to resources - because our OSes haven't managed to do that for them yet. Memory management? I have no idea why you'd do that in ANY application software today. Actually there is a reason - people don't trust the OS or think they can do something better, but it ends up creating extra complexity. Complexity is categorically bad and should be avoided unless it's the only way to do something. Remember how X got its own memory and font management? Same thing.


It’s keeping track of 16k blocks of memory that way. If you changed it to 4MB you’d have to track and scan 512k.

We have a GIANT address space to play with. Why not use it?

They’re not actually using 128MB per function.
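For anyone unfamiliar with the reserve/commit distinction being discussed, here is a minimal sketch, assuming the Windows VirtualAlloc API and scaled down from 2 TiB; this is a generic illustration, not Chrome's or the allocator's actual code:

    #include <windows.h>

    int main() {
        // Reserve 1 GiB of address space: no physical pages or page-file
        // space are consumed yet, only bookkeeping for the address range.
        const SIZE_T kReserve = 1ull << 30;    // scaled down from 2 TiB for the demo
        const SIZE_T kChunk   = 128ull << 20;  // 128 MiB commit granularity
        void* base = VirtualAlloc(nullptr, kReserve, MEM_RESERVE, PAGE_NOACCESS);
        if (!base) return 1;

        // Commit one 128 MiB chunk inside the reservation; only now does the
        // memory count against commit charge and become usable.
        void* chunk = VirtualAlloc(base, kChunk, MEM_COMMIT, PAGE_READWRITE);
        if (!chunk) return 1;
        static_cast<char*>(chunk)[0] = 42;  // touching it pages in a physical page

        VirtualFree(base, 0, MEM_RELEASE);  // release the whole reservation
        return 0;
    }

Reserving only claims address space; physical memory and commit charge are consumed when a chunk is actually committed and touched.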


> The questions worth discussing are

Interesting that you ask how to diagnose and manage the complexity, but not how to avoid it. Do we really need a more or less complete OS+VM (aka web browser) running on top of another OS (Windows etc.) to read e-mails?


Likewise, one could ask if we need all this to read any sort of text - news, articles, etc.


Or even to just play music or stream videos.


Completely agree. I realized this perspective immediately after I made the post.

Original post updated!


Agree, this kind of issue exists in any highly complicated system which needs to take into account memory security etc. I don't think a lot of people realise that modern browsers are about as complicated as the operating systems they run in and have about as many countermeasures for security problems as the OS underneath them. To manage these systems' resources securely and not have them fight each other in the process is an incredibly complicated and tedious job.

That being said, I do agree with a lot of people that there is a lot of bloat. But often this bloat is caused by a lack of understanding of the complexity of what they are building software on. If things like this were more generally known and understood, problems like this memory issue in a Google application would be rarer.

My own solution to my hatred of bloat is to write my own software from scratch. And until I complete that lifetime task, I feel it's unfair to complain about others who spend their entire lifetimes making the programs you use, because they made them a little too bloated for you for whatever reasons...

As for your questions worth discussing, why/how this happens and how to make it easier to diagnose: I think the answer is that more people like the writer of this blog are so kind as to share their findings with us :)


> I don't think a lot of people realise that modern browsers are about as complicated as the operating systems they run in and have about as many countermeasures for security problems as the OS underneath them.

There's an argument to be made that maybe 'security' isn't worth as much as the blogosphere thinks it is. Like everything else in life, it's a trade off, and because it is in their best interest the security "experts" do their best to sensationalize and promote paranoia and over-reaction to every little potential problem without regard for the cost, namely inconvenience and slow software.

What was the solution to Meltdown and Spectre again? Oh yeah, make everything slower on the off chance someone will use a timing attack to maybe slowly exfiltrate some information from memory that might be important. If you're a cloud host that tradeoff is probably worth it; if you're a desktop user outside of an intelligence organization it probably isn't, but you'll pay the cost nonetheless. 1% here, 2% there, no big deal, right? But it sure adds up. Do an experiment: install 2 VMs, one with Windows Server 2016 (or Windows 10), and one with Windows Server 2003 (or Windows XP). The 2003 (XP) VM will be so much more responsive it will freak you out because you aren't used to it. How much of your life has been wasted waiting for windows to appear and start drawing their contents? What are we getting in exchange?


How many minutes would that highly responsive Windows XP install survive browsing the web before it's rendered useless by tons of malware?

How many 2005 era applications, print drivers, toolbars or screensavers, and whatever else was cool in 2005, can you install before the machine is as responsive as a 300 baud connection?

The XP era was probably peak crapware, with people running IE with 12 added toolbars and everything unusable. Often solved by buying a replacement machine because the old one had gotten so slow.


There's not too much in the way of XP malware still actively spreading, but you're right, an XP machine at one point was like a public-domain colocated computer to be abused by whoever.

Computer insecurity is costly and counterproductive. It helps criminals and maybe the occasional oppressive regime walk us backwards, mess up lives, mess up businesses. I don't think privilege escalation and encryption key theft should be taken lightly. Abusable things get abused.


There's plenty of crapware today, hell, Microsoft forces some of it on you in the default install. The same rules more or less still apply: it's risky to install crap from random untrusted sources. I still have an XP machine I use all the time at work because it has a real physical serial port for talking to some equipment with. It hasn't been a problem.


As I wrote in https://news.ycombinator.com/item?id=17775303, the industry's strong reluctance to clean up/refactor core tech is a huge cost generator. Eventually it will be necessary to accept that "move forward" is not the way.

Money is not an excuse, because browsers/OSes/languages are already HUGE money losers.


And the debugging exercise he went through was insane! I wouldn't have had a clue how to even begin tracking a performance glitch like that, one that results from an interaction between a very complex program and the OS.


I should have grown out of it by now, but I still dream of a Star Trek future, and I've developed a guideline (mostly in jest) for how I think about systems. It goes something like this: if the starship Enterprise turns out to run Linux, or windows, or the driver I work on, or Electron, or whatever I have in mind, and all the problems we see now show up on the monitors on the bridge, how would I feel about that?

Sometimes it's ok, sometimes it's not. I tend to wish that we could get a lot better at building systems, but that involves a number of difficult problems that people far smarter than I have been thinking about for far longer than I've been alive.

The future in my head doesn't have so many systems developed by accretion, but maybe that's how it has to be (for now).


We're still pretty early in computing history. It makes sense for things to be built by accretion the first time around.


I agree, and really appreciate your post. One thing I can't get over is: what happens when everything is written in JavaScript and runs in a browser? Clearly, we are not far from that now, which means the stack looks like this:

    Javascript code
    - - - - - - - - 
    Javascript interpreter
    - - - - - - - - 
    Browser (doing display things if not more)
    - - - - - - - - 
    OS
Running an app in a browser is cool, but the complexity is huge. Running a compiled app on the OS removes two layers of complexity from this situation!


I just kind of figured we'd stop having UI lag by now. I work on a 24-core workstation with 64GB of RAM and things lag all the time. Not slow to complete tasks, but jittery key entry and non-responsiveness.

Haven't we figured out thread prioritization by now? Can't we make sure something draws 60 times per second while things are going on in the background? My Android Studio build should be totally isolated from my inbox.

I know this is a bit orthogonal to the article and that I'm certainly not well informed about operating systems these days, so I'd love to get schooled in the comments.


This isn't meant to be an especially religious comment, but I regularly switch back and forth between OSX and Windows 10 and wanted to make mention of the differences. I also use Ubuntu fairly regularly but only over ssh so I'll discount it from this comparison.

Both OSX and Windows 10 do suffer from UI lag but, in my experience, it is far worse on Windows 10 to the point where I have come to absolutely detest Windows 10. It's particularly bad on the login/unlock screen, which often takes multiple seconds to even appear when you get back to your machine - frustrating when you're in a hurry.

With that said, some of it is certainly application-specific. Office 365 Outlook, for example, is particularly egregious in this regard: switching between windows, or between mail and calendar, is awful. Microsoft Teams also regularly hangs for multiple seconds when switching between teams, or between chats. Extremely aggravating.


One thing I can't wrap my head around: if I receive a big (10 MB) email through a slow connection, Outlook completely freezes until the email has finished downloading. I'm talking not even rendering correctly, and showing white rectangles instead of drawing components. How on earth can this not be a tested use case? Is there no separate thread to connect to the Internet?


Don't know for certain, but the issue might be that it's using the same thread to scan the attachment and hanging on that, rather than hanging on the download itself.


A brand new laptop I bought this year can't run Windows 10; the system is basically unusable. Not a cheap laptop, either, it was one of the more expensive ones. It runs Ubuntu no problem. All of Microsoft's products seem to have latency issues, and I still can't figure out for the life of me why it takes longer to open a file in Visual Studio or in Word on my gaming desktop than it does to load a current-year $60 video game. I wish designers focused on what video game designers call "game feel": how quickly, clearly, and satisfyingly the program reacts to user input.

Edit: on the other hand, computers do start up a heck of a lot faster than they used to.


I'm typing this on a 5 year old netbook that had cost me $300 back then with 2GB, which I later upgraded to 4GB. It runs Windows 10 very smoothly, it's my living room TV computer for browsing, torrenting (up to 1080p) etc. At this time I'm running Firefox, Chrome, VLC and uTorrent seeding and performance is decent, certainly not "unusable".


Notably not running visual studio or MS office.


It is probably all the spying /s


OSX capturing and buffering the password input on the login overlay is definitely nice, compared to windows 10 dropping inputs until it is ‘ready’ for the last 2 characters of my password.


Thanks to PAM on Linux, "<hostname> login: " is printed by an entirely different set of processes than the "Password: " that follows it.

At least in Windows you aren't at a TTY that echos everything to the screen by default until PAM has hauled itself (and its umpteen libraries) off the disk (yes people still have 5400RPM HDDs :D), initialized, and put the terminal into no-echo mode...


I wonder why so many are talking about SSD vs. HDD in this discussion. From my experience on Linux it doesn't really matter if you have enough RAM. Yes, booting from a SSD is faster, but once everything is loaded the system mostly works on top of the RAM (waiting for FS sync is rare).

I don't know much about the situation on Windows but at least on linux your HDD is hardly going to cause lags if you have a well configured hardware setup.


> Yes, booting from a SSD is faster

Laptop users are many, and they care about boot up times. Especially since Linux power management is still awful.


This is a blatant lie that needs to stop. I've been using Linux exclusively on laptops for more than 10 years. The first 5 with the HP EliteBook line, currently with the Lenovo X1 line.

On my current X1 Yoga 1st gen, I do regularly get almost 1.5x the uptime on battery compared to windows 10.

We got a batch of X1 Carbon 6th gen, which are "optimized for windows 10", where the S3 sleep state was replaced by S0I3, which is OS-assisted. The X1 Yoga with S3 can both suspend and resume faster than Windows 10, despite not having S0I3 (and has longer battery lifetime to boot).

And by fast I mean that by the time the lid is up, the OS is ready. By contrast, windows 10 seems to always shuffle for several seconds after resume, even on the carbon 6th gen.

So please...


I am not sure if I would categorize it as a lie. I think it is imprecise and probably misleading. I have very little expertise on the subject, but from my perspective it is more of a driver/support problem. If everything is supported (e.g. some laptops, but also smartphones) Linux has quite a good energy footprint, but there seem to be many devices out there which do not have good support (at least out of the box).

One of the problems here is that power drain can be caused by different components. Graphics chips are quite popular culprits, but other components can play a part too. For example, a friend of mine has problems with his device waking up from s2idle during transport. That is not exactly what you would expect in such a discussion, but from a user's perspective, he has less time to work with his device given his usage pattern.

For me, this isn't much of a problem as I prefer working on a desktop, but I wouldn't accuse someone of being a liar just because he had a bad experience with power consumption on Linux.


My wife just got the Carbon 6th gen. There is an ACPI patch to restore S3. See https://delta-xi.net/#056. Once you've got the patched dsdt.aml, I highly recommend not following their directions, and just loading the file in grub by creating /boot/grub/custom.cfg with the line:

    acpi patched_dsdt.aml

This has the advantage of not requiring any manual intervention every time your kernel is updated. The downside is that if you try to dual boot W10, it will BSOD on boot with an ACPI error. To overcome this, take the original dsdt.aml and load it in a custom (not auto-probed) menu entry for windows. You can just clone the auto-probed line, and add before the chainloader line:

    acpi old_dsdt.aml


I already did it, and I'm really glad I can do that, since I have no use for S0I3 and I'd rather have the extended battery lifetime. However, on Windows there's no way to disable this sleep mode, and the laptop consequently lasts far less time when put to sleep in Windows 10, even compared to the 3rd edition (which we also have, so I can compare). How does this make Linux the one with bad power management, huh?

Sadly, the 6th edition of the Carbon also runs waaay hotter than the 3rd, and I did notice occasional coil whine which was previously absent. During normal operation the kernel regularly logs thermal warnings. It seems that power and heat management is not handled well in this edition, and I see similar reports for the X1 Yoga 3rd gen.


I get 6-8 hours out of my Zenbook with ArchLinux. Nowhere near the touted 9 hours of the specs, or the 12+ of the Macbook.

My system is exceptionally lean. I run bspwm and the radios are always off except WiFi. Still, I feel like I could get a lot more from this laptop. I cannot (and I certainly wouldn't want to) install Windows on dual boot to make a comparison though.

What distribution do you use?


How can you compare two different laptops? I'm comparing windows and linux on the same laptop.

I personally use debian unstable with a tiling window manager (awesomewm) without a specific DE. However, I generally setup Mint for all my colleagues on the same laptop lines, and there's one colleague running Arch. We all have very similar battery lifetimes.


Have you tried looking for offending processes with powertop?


Today on Linux Evangelism 101: using a single anecdotal experience as proof that something works right always.


Sure, however I'd really like to see what the downvoters have to say. The "power management is bad" claim is pretty generic, isn't it?

High-end laptop lines (Dell, HP, Lenovo) actually have pretty good Linux support; they are almost always Intel i5+ CPUs with integrated graphics, which are all very similar. There is next to zero setup required in almost any case. This is known. The drivers are tried and tested. I've never experienced any random "power management issue". Suspend/resume/battery consumption/ACPI all work perfectly fine. What else qualifies as "power management"??

I've been supporting a team of 20+ people with these lines in a mixed environment, and for more than a decade Linux has been basically plug and play.

There have been some quirks in some models, which generally required some tweaks in the kernel boot line, to fix issues with backlight tweaking.


> High-end laptop lines

Sure, but Linux really ought to shine on the low-end laptops.


Linux shines on low-end laptops too, as long as you're still on decent hardware. Nowadays it's still generally an Intel core with integrated graphics, memory and I/O controller, so the result will be pretty much the same. Old, good laptops work even better than brand new editions of current laptop lines.

But this doesn't save you from crappy hardware, which is why this kind of argument is a non-starter for me. How well do current Windows editions work on low-end crappy laptops that sell for the lowest tier? Not great. These laptops have plenty of little issues on Windows as well.

Maybe better than linux, because the drivers have way more work-arounds than linux has.

Have you ever seen the internal Linux quirks tables that work around buggy and downright horrid hardware? That's what you get with those. If it works at all, it's thanks to the patience of people who put in the time to work around those issues. I frankly do not blame Linux if it doesn't work well with such hardware.


> Thanks to PAM on Linux, "<hostname> login: " is printed by an entirely different set of processes than the "Password: " that follows it.

Ah! So that's what's going on :D.


On FreeBSD, at least, you can tinker with sshd to switch between PAM, which gives you "Password: ", and some other sshd-internal mechanism, which prints "<whatever login you just typed>'s password: " instead. Kind of cute.

And you can always run strace -f on whatever handles login (getty, logind, systemd, etc), drop to a TTY and login, to see all the stuff that gets loaded.


Really? It works for someone like this? Damn you Apple, not for me :(.

https://news.ycombinator.com/item?id=17786400


That's only if it doesn't come from suspended sleep. If it has to actually pseudo boot, it drops.


On Win10 you need to have enough swap space on an SSD; HDD I/O has totally kneecapped latency, seemingly due to blind usage of NCQ (AHCI driver got replaced from Win7), so it chokes out entirely when combined with the even more broken swap manager.

"Enough" meaning "infinite", because you're going to have something leaking all that memory. Task manager doesn't even show memory usage by default (add the "commit size" field), only the working set (and the swapping is insanely aggressive, so memory leaks just don't show up). Nobody seems to actually check that; even built-in stuff like Windows Update leaks pretty badly.

Oh and the memory accounting doesn't even work.


Ubuntu w/Gnome is possibly the worst offender. I actually like Gnome a lot, but anytime I leave my PC on for more than a day the UI gets incredibly sluggish. Just moving windows around gets choppy and there is an annoying pause whenever I click the application launcher (happens even without the animation).

I've also found macOS provides the smoothest experience. I haven't found W10 that bad, but I haven't used it that extensively. I really only boot into Windows to play games these days.


There's a bad memory leak in gnome-shell which was found & fixed back in March, but the Ubuntu devs still haven't merged the full fix into Bionic for some reason: https://bugs.launchpad.net/gnome-shell/+bug/1672297

Edit: Having tried it on my laptop, installing the gjs + libgjs0g version 1.53.3 debs from http://gb.archive.ubuntu.com/ubuntu/pool/main/g/gjs/ onto a bionic install seems to work fine. Worth trying if this bug is affecting you?


I'm running Gnome and for the most part it has been excellent; Linux in general, though, still has a lot of trouble maintaining good UI speed at the same time as I/O throughput. I'm on Fedora, and dnf/rpm is an excellent example - even though I'm on an SSD, I can tell when something is updating in the background because stuff will lag for a little while. It feels like this has been a problem forever, and while it has definitely improved over time it's still not really possible to say, in a simple way, "I care more about interactivity than other stuff, prioritise that" (as far as I know, anyway).


> while it has definitely improved over time it's still not really possible to say, in a simple way, "I care more about interactivity than other stuff, prioritise that

that's not what the mainline kernel optimizes for; you have to use Liquorix or another kernel (https://liquorix.net/)


You would assume that big guys like RedHat would already have done this for you.


Red Hat's business is server support: it's quite logical that the desktop they provide is as similar as possible to the server configuration, because developers will test on their workstations. So no, I don't expect that Red Hat will do this.


I've found alt-f2 then r (ie. restarting gnome) puts it back in as good a state as a fresh boot. Sometimes my extensions won't load and this also restarts those.


Just tried this and it works great. Much better solution than rebooting. Thanks!


This doesn't work under Wayland. Is there a way to do something similar on Wayland?


Using Fedora 27 with Wayland, I have not found a way to do this other than logging out and back in :/


You should try cinnamon. There's no accounting for taste, but after using gnome 3 for three years I tried cinnamon (version 3.6, older versions weren't good) and I found it better across the board.


I've been very happy with Cinnamon for a while on Mint 18.x, and since I upgraded to Mint 19 last night, Cinnamon is even better. It's just a bunch of small improvements all over, that just make the overall experience that much better.


Tried it for my now very old netbooks (because 1024x600 is low-res now), kept it on my desktop machine because it's simple, out of the way and fast.


I love Cinnamon and run it on a few machines, and as soon as they sort out mixed DPI support I'll be switching back from Gnome on my main machine too. It's been a while since I've tested actually...


I just stayed with Mate with Compiz (Ubuntu Mate) and my desktop is still as snappy as it was during Gnome 2 days.

I've recently given KDE Plasma a try and was really impressed by its speed and smoothness.

I think it shows when your desktop does not rely on interpreted languages so much.


KDE Plasma is written mostly in QML, so JavaScript.


Mostly? Nope, just the UI elements. Much of the heavy lifting is done by KDE framework and Qt libraries (C++).


AFAIK it gets compiled AOT to C++ code. Technically, should be pretty fast out of the box.


I have the same problem at home. Oddly, my computer at work doesn't have this problem. Both run Ubuntu with Gnome.


This is strange. I run a 6 year old PC with win 10 on it. It has 4GB of RAM. I use an SSD and an M.2 drive. The thing is snappy as hell. I face zero lags, jitters, or switching delays at all. I get the feeling that writing things to disk is causing delays.


I had a friend telling me "check out my new computer, it's so snappy!" once, and the darn thing was so slow it made my hands shake from the stress. I can't believe anyone on the internet telling me they don't have any lag.


Every time we get snappier hardware, we just add more software bloat. I pretty much run two kinds of GUI apps (on Linux with a lightweight, tiling window manager): web browsers and terminals. My terminal apps are always very snappy, the web browsers not so much. I'm of the opinion that the computer should wait for me, not the other way around. Doesn't seem to work out that way with "modern UIs".


I run a 6 core desktop with 32G, and 1TB M2 SSD, and after an upgrade from Windows 7 to Windows 10 last Christmas, input lag in mintty terminals was dramatically worse. Even just holding down a key and observing the stream of characters at 15 per second or whatever my repeat rate is, it glitches and stutters.


I should probably make a video of me opening an i7 2017 Macbook Pro with 16 GB RAM and waiting 3-5 seconds until the system allows me to type the password (also, I sometimes switch to a Czech keyboard, and High Sierra has a bug where it randomly shows one input language as active in the top right but actually uses a different input language in the password field - this happens in about half of login attempts).

This isn’t how it used to be. Sigh.


Meanwhile my experience is opposite. I use Windows 7 at home and OSX for work. My personal machine is responsive, zippy, and most important of all, predictable

Meanwhile, my SSD filled work mac bogs down in completely unpredictable ways, with weird antivirus scans that I cannot control somehow destroying javascript performance, and needing to be rebooted more than weekly to stay functional in multiple ways. I was similarly dissatisfied with OSX in the mid 2000s when I used it at school. I just don't get it


At work we've recently moved from XP to Win7 and they seem to perform about the same for heavy office use.

I'm playing with a Win10 machine at home, the CPU and RAM is far in excess of our work machines but it is so slow to use. It seems to regularly become unresponsive, programs take ages to start, it seems to be continually working away in the background doing 'nothing'. It's a truly horrible experience.


I really like Office 365, but the most productive thing I ever did was get rid of it on my work machine and install Office 2010 again. It's just so much snappier, there is no comparison. (Although I also tried Office 2003 for a day and boy, it's blazingly fast. On a modern system you don't even see the startup screen, not to mention any input lag.)


Just curious, is that windows machine possibly not on an SSD? My windows 10 rig opens a profile faster than my MacBook


Ya, I just built a new windows 10 rig. I went for single core speed over more cores (i7 8086k) for the very reason of stamping out any UI lag.

It's buttery smooth and I get no noticeable lag. Exactly 7 seconds from power button to login screen on a cold boot too.


A fair question, but both SSDs so it's definitely not that. Since we're on hardware differences though the Macbook Pro is 3 years old and has a quad-core i7, whereas the Windows 10 laptop is 1 year old and has a dual-core i7. I'm not in love with having only dual cores but I don't think that's the issue either.


It's not about opening the profile, it's about getting to enter your password from a locked screen. I love my W10 machine and it's quite snappy when logged in, but every unlock has this half-second or so of lag until it lets me enter my password. I suspect the animation of the appearing login form, but who knows...


Windows 10 is really inconsistent. I have it running in a VM on a Fedora host and it's snappy as hell. Also on a pretty basic office-spec Celeron, but then running it on a new Ryzen 7 sometimes it lags. I don't think it's directly related to the performance of the hardware, there must be something else going on.


Windows 10 has all kinds of debuggers enabled on millions of computers; MS calls this telemetry :) Go to Management and check out the startup event trace sessions. Those are the source of the mysterious hundreds of megabytes of logs written to disk per day.


Virus and malware checks could be slowing things down


It's simple: Bad Engineering is cheaper.

Just today on Patreon, I was going through a creator's posts. After about 100 posts or so the page was barely usable. Animations took minutes to complete, loading more posts took 30 seconds, and Firefox (Chrome quit after 80 posts when Linux' OOM decided that the fun was over) was struggling to repaint the viewport, lots of white areas. At the end other applications were severely lagging too, both because Patreon's webshit was pulling 80% of all cores while doing nothing on the page and because the Linux kernel was shoveling everything into SSD swap like crazy. Animations didn't work at all (likely if I waited 60 minutes it would have shown the first frames) and clicking on links to open in a new tab took about 15 seconds until the JS behind the scenes completed.

It's simply shitty design that there is no obvious way to say "show all posts from January 2016" and jump back month by month. Or to at least UNLOAD posts I've long scrolled past.

I have 8 cores and 32 GB of RAM at my disposal; a website has no excuse for such shit performance. Especially when it's a platform where lots of money flows.

But hey, it's cheaper this way.

---

The issue is not thread prioritization. Or processes. Your mail app and Android Studio are both competing for resources, and either Android Studio will lag because the mail app is running on all the CPUs, or the other way round. The OS doesn't have a way to tell which is more important, and trusting applications to tell it is not really reliable (devs will just say "I'm the most important").

Part of the real issue is that a lot of modern apps are not engineered to save resources. They just take what they need, and the user had better provide enough CPU and RAM. There is no sense of self-limiting in a lot of modern apps.

Frontend devs should wake up to the reality that they aren't running on the only instance of the Chrome browser with only one tab open. There are other apps around too. There are other browsers around too. Sharing resources gives the user a better experience than just taking them all for yourself.


What's weird is, I have 8G (edit: it did say megs -- of course I have more than that :-) ) of memory and no swap. Never OOMed this machine in 3 years of development work. My colleagues run out of memory constantly. Sometimes I wonder if there isn't some other problem that isn't obvious. I run a very tight ship -- nothing runs on my machine that I don't know about. I'm going to go have a look at the Patreon page and see what happens...

Edit: Apart from being able to confirm the less-than-stellar performance problems, I had no trouble with memory at all. Chromium hovered at about 2G of memory. It fairly aggressively garbage collected as I browsed different sections and different creators.


In the scenario described in the article, where was the bad engineering? It seems to me that the developers of each component (Chrome, OS, other processes) did things that, while perhaps not perfect, are quite reasonable, and definitely not exceptionally bad. Only together did they interact in this undesirable way.

This case isn't some web app that uses infinite memory; the contended resource is a lock.


More cores, more RAM, more cache and more concurrency help create higher total throughput, but they usually also mean higher latency, because of increased context switching, a natural need for more resource locks, increased delay in searching/reading larger stores like cache and RAM, and higher latency penalties when there are cache misses and/or applications that require write-through policies. Things like faster clocks (which is why we now have dynamic turbo clocks, aka Intel Turbo Boost), lock-free schedulers, and O(1) (constant-time) schedulers help with latency. However, notice that we have few CPU SKUs that go above 4 GHz or 5 GHz. Why don't we have 10 GHz, 25 GHz or 50 GHz CPUs? Because silly things like heat, cross-talk, the speed of light, and physics at the nano scale get in the way. Higher clocks would improve latency.

Applications can also improve their latency if they're written to take advantage of multiple cores, their workloads are inherently parallel or mostly parallel, they manage their own thread/process scheduling, and they use lock-free work-stealing schedulers. CPU OEMs have proven that developers prefer to have all this complexity managed for them, and their ILP (instruction-level parallelism) calculated for them too. Intel's Itanium failed because it required explicitly declared ILP (from the compiler and/or developer), which can be tricky, especially when most CS literature/educational material, programming languages, and software constrain themselves to a simplistic single-thread-of-execution mental/computer model.


All the concerns in your comment are real, but they can be boiled down to one simple fact: Moore's law is dead.

Our processors are now not advancing nearly as quickly as people can create solutions for them. Case in point - Virtual Reality, which has technically been on the market for years, but out of reach for most users.

If processors are not improving, everything else slows down. Graphics, games, and the internet ten years from now will look pretty much the same as it does today.


If we had an operating system where every system call used an async calling interface, the type of spinlock latency observed in this article would be greatly reduced. It wouldn’t bring Moore’s Law back but it would make the most of what CPU resource we have.


Wasn't the lock in the article kind of async? From my understanding, the problem was not the wait time on the lock alone, but that the lock is not fair: even if you asynchronously locked, someone else (in this case the WMI scan) can get the lock every time before you get it, so you are blocked for a long time (you cannot continue with this particular operation until you get the lock, even in the async case).


In the article, they illustrate a poor design choice. Or probably more likely, a continuation of the programming practice utilized elsewhere in the kernel. That being, slap a lock on any resource contention. That practice is necessitated solely by the need to support a linear calling interface. Even if this particular issue were to be fixed through queuing instead of locking, the existence of locks elsewhere will still become an issue in a given use case.

My point is, by design, an operating system that has a fully async interface can dispense with locks altogether. Resource contention is then properly resolved through queuing. The OS can then strictly enforce a priority scheme on its various queues, preventing an opportunistic thread from dominating the system. Or enabling higher priority requests to be serviced first. But that decision is made intelligently, not as a result of optimizing to minimize context switching.
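To make the queuing idea concrete, here is a minimal sketch with hypothetical names (not how any real kernel implements it): callers never lock the resource itself, they post prioritized requests to a single owner thread, and the only mutex guards the brief queue handoff, never the long-running work on the resource.

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    // A request carries a priority and the work to run against the owned resource.
    struct Request {
        int priority;                          // higher runs first
        std::function<void()> work;
        bool operator<(const Request& o) const { return priority < o.priority; }
    };

    class ResourceOwner {
    public:
        ResourceOwner() : worker_(&ResourceOwner::Run, this) {}
        ~ResourceOwner() {
            Post({1 << 30, [this] { done_ = true; }});  // highest-priority shutdown request
            worker_.join();
        }
        // Callers never touch the resource directly; they only enqueue a request.
        // The mutex guards only the push/pop, never the long-running work.
        void Post(Request r) {
            { std::lock_guard<std::mutex> g(m_); q_.push(std::move(r)); }
            cv_.notify_one();
        }
    private:
        void Run() {
            while (!done_) {
                Request r;
                {
                    std::unique_lock<std::mutex> g(m_);
                    cv_.wait(g, [this] { return !q_.empty(); });
                    r = q_.top();
                    q_.pop();
                }
                r.work();  // the resource is touched by this one thread only, in priority order
            }
        }
        std::priority_queue<Request> q_;
        std::mutex m_;
        std::condition_variable cv_;
        bool done_ = false;
        std::thread worker_;
    };

Because requests are serviced strictly in priority order by one thread, an opportunistic caller can flood the queue but cannot hold the resource across a preemption the way a lock holder can.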


Don’t see the fundamental difference between locks and queues. In a system that uses locks, the kernel can still internally use queues to grant some lock requests before others.


The fundamental difference is this. With locking, when a thread’s time slice expires while holding a lock, the dispatcher switches to a different thread and no other thread can gain that particular lock until the dispatcher resumes the thread holding the lock.

No locks means this can never happen.


If every OS required async calls, it would require every developer to learn async programming. They probably wouldn't learn it properly, and you'd end up with more race conditions and bugs than if async was optional.


Closures/blocks really make it a lot easier to follow the flow of async coding.

Async style enforces a kind of minimalism that in my opinion simplifies the problem space overall. Obviously, one has to learn different approaches to problem definition and solution to embrace event-driven coding.

To me, comparing linear programming, where you handle an exception when your call stack is 5 levels deep (or maybe 3 in a different use case) and have to bubble the appropriate handling up through each of those levels, with async, where in your closure you have a 4-line if/else construct that fires off one event for success and a different event for failure: async wins the simplicity competition every time.

If race conditions are an issue, events should be dispatched through a single-threaded finite state machine dispatcher for resolution. Such events can fire up multiple worker threads to do their bidding, but when state changes, that goes through the dispatcher which inherently resolves all races.
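A tiny, hypothetical illustration of that last point: all state lives on the dispatcher thread and workers only post events, so state transitions cannot race.

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>

    enum class Event { JobFinished, JobFailed, Quit };
    enum class State { Idle, Done, Failed, Stopped };

    int main() {
        std::queue<Event> events;
        std::mutex m;
        std::condition_variable cv;
        auto post = [&](Event e) {
            { std::lock_guard<std::mutex> g(m); events.push(e); }
            cv.notify_one();
        };

        // A worker fires success/failure events instead of touching shared state.
        std::thread worker([&] { post(Event::JobFinished); post(Event::Quit); });

        State state = State::Idle;  // owned exclusively by the dispatcher thread
        while (state != State::Stopped) {
            Event e;
            {
                std::unique_lock<std::mutex> g(m);
                cv.wait(g, [&] { return !events.empty(); });
                e = events.front();
                events.pop();
            }
            switch (e) {  // the finite state machine: every transition happens here
                case Event::JobFinished: state = State::Done;    break;
                case Event::JobFailed:   state = State::Failed;  break;
                case Event::Quit:        state = State::Stopped; break;
            }
        }
        worker.join();
        return 0;
    }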


This has nothing to do with Moore's Law or improving processors directly. If anything, this is a software optimization problem. Half of what I mentioned was locks and schedulers, which are directly software problems. Layer on the fact that a helloworld application in most languages and frameworks is 100s of KB to dozens of MBs, and you have really, really, unoptimized software. 30 years ago, lots of software was written in Assembly and some C. But software has gotten fatter and slower at a much much faster rate than hardware has gotten faster/more parallel since the industry introduced ILP, caches, multi-stage instruction pipelines, etc. with the Pentium/5x86 class microcomputers & processors. Most people never learn to write multi-processing, multi-threaded, multi-programming, event-driven, decentralized or distributed applications. The model that browsers and most languages & frameworks provide is usually inherently sequential and single processing.

If web developers never learn to write multi-processing, multi-threaded, multi-programming, event-driven, decentralized or distributed applications the web will forever look like it looks today.


> This has nothing to do with Moore's Law or improving processors directly. If anything, this is a software optimization problem.

This is such a contradictory sentence. The whole point of Moore's law was that software developers could do what they want, freed from the limitations of hardware, because in 18 months, when their product was ready for launch, the hardware would have become capable enough to handle it.

Forcing developers to focus more and more on performance (via threading, distributed systems, and custom die co-processors), rather than innovating new applications, is a huge shift. It's like the hardware developers gave up, and tell the software developers that it's in your hands now.

Also, comparing a hello world program 10 years ago to today is a meaningless comparison because no one uses hello world to do anything. A better example would be Excel in 1998 and Excel in 2018. Sure, today it's 500 megabytes instead of just a handful, but it does quite a bit more (for better or worse).


You don't understand Moore's Law. Moore's Law says nothing about software devs doing what they want, or processor speed, or even hardware capabilities or qualities. It's about the manufacturing capability and integration achieved by semiconductor fabs. Specifically, "Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years." [0]. Hardware engineers never gave up; they're the reason your bloated app takes 60 seconds to start up instead of 3600 seconds.

[0] https://en.wikipedia.org/wiki/Moore%27s_law


The internet, sure. But GPUs will keep figuring out new ways to parallelize; that isn't dead yet.


The problem is not that we cannot parallelize hardware. The problem is that the software making use of it is lacking. 24 cores is plenty, but if most software only uses one core even more cores won't help.


GPUs can just do fast matrix multiplications in parallel, but it’s up to the developer to reframe their software into matrix multiplications. For some applications it’s great, but for most it isn’t realistic.

Also, if general-purpose processors are now hitting a wall, it's only a matter of time before GPUs do.


I'll go into more details in part two, but part of the problem would have reliably disappeared if I had been running a single-core CPU.

Locks are hard.


> Locks are hard.

Yes, especially when the OS does nothing to help a backing-off contender acquire a resource that is greedily dropped and contended by another thread.

> I'll go into more details in part two

Great! In anticipation of that, I have a couple questions:

> but part of the problem would have reliably disappeared if I had been running a single-core CPU

Single-core or single-thread? Does the atomic instruction switch to the other HyperThread generally?

Do you see the same symptoms if you have a sufficient number of other unrelated CPU-intensive threads trying to run that the OS will want to deschedule the aggressive contender thread(s) from time to time?

The ways to deal with this that come to mind are the aggressive contender thread(s) sleeping from time to time, or yielding in hopes that the decision about waking up the thread recovers some of the fairness lost by the decision about which lock waiter wins. Perhaps one could just put the lock's cache line on as equal a NUMA footing as it can (a flush, for instance) so the aggressive contender has a chance to lose a race. However, having to reason about cache coherency protocols (differing from vendor to vendor and even same-vendor model to model) seems like a huge headache that anyone would want to avoid if possible. Is a fix described in the long crbug discussion?


On a single-core/single-thread Windows system the problem would be avoided because when the waiting thread is signaled it gets a priority boost. With just a single core this boost means that it gets to run instead of the other thread. With multiple cores it gets to run as well as the other thread.

Making the locks fair or occasionally fair would solve this problem, but perhaps cause others.

I think that an appropriate CPU-saturating background task might have made my machine more responsive, which is deliciously weird.
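For readers wondering what a "fair" lock looks like in practice, here is a generic sketch of a FIFO ticket spinlock (not the Windows kernel's actual lock): waiters are served in arrival order, so a thread that keeps re-acquiring cannot jump ahead of one that was already waiting.

    #include <atomic>
    #include <thread>

    // A FIFO "ticket" spinlock: each acquirer takes a ticket and spins until
    // the now-serving counter reaches it, so the lock is granted in arrival order.
    class TicketLock {
    public:
        void lock() {
            unsigned my = next_.fetch_add(1, std::memory_order_relaxed);
            while (serving_.load(std::memory_order_acquire) != my)
                std::this_thread::yield();  // a late arrival cannot jump the queue
        }
        void unlock() {
            serving_.fetch_add(1, std::memory_order_release);
        }
    private:
        std::atomic<unsigned> next_{0};
        std::atomic<unsigned> serving_{0};
    };

The possible "other problems": strict FIFO ordering can create lock convoys, since if the next waiter in line gets descheduled, everyone behind it has to wait for it to be rescheduled.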


Didn't BeOS get this right?

I too wonder why UI threads are still so obnoxiously tied to... everything else. Put UI in the fast lane! (At least until some bug tries to make a bajillion calls a second)

Then I remembered how much more complicated it is to write proper multithreaded GUI code :P


I remember how Safari on the iPhone 3G would ALWAYS scroll, even if it meant showing a checkerboard pattern while the page rendering caught up.


Safari did that for a while too


I remember old Mac OS 9. If you started a print job, and held down the mouse holding down a click, the printer would just stop. The entire system wasn't properly multi-threaded.


That’s something that I remember people in my circle joking about at the time, but in retrospect maybe the joke was on us. User input should always be absolute top priority, and if that means something else has to stop then so be it.

(Although with 24 cores...)


To your point, I also remember working with early Windows systems: when the network went down for a moment, the mouse and screen would freeze. Given the two options, most would probably take the Mac.


I've gone from Windows -> macOS -> Fedora running i3wm.

Totally agree that Windows is the worst of the bunch and its still a huge problem on macOS.

On Fedora+i3wm, when my desktop opens from the login it is "immediately" ready for input. I've tried to ask some people to compare between macOS and my laptop and they say they don't notice a difference, so I think people are conditioned to waiting with these little unconscious pauses they don't notice.

I wonder if there is a way to do accurate and meaningful timings of responsiveness in these situations.


I got annoyed enough to auto -STOP every tab but the one in the foreground (using surf [with or without tabbed] and a bunch of X glue); coupled with JS disabled by default, it's very nice. It's got bugs, but I'll add the X plumbing to my version of surf sometime soon: https://github.com/jakeogh/glide


Thanks, this looks really nice. How are you sending SIGSTOP to background tabs? (assuming its not built in to glide; I didn't see it mentioned in the README)


I should have elaborated more on the "it's got bugs". It's been a while since I worked on it; I just looked through the bash plumbing I was using and now I remember what needs to be done. The big issue is copy and paste, since the owner of the current selection needs to be woken up when you paste.

This is so ugly I hesitated to post it, but here: https://github.com/jakeogh/glidestop

In the case of glide, having tabbed take care of all of that would be much easier, or even better it would be integrated directly into a "tabbed-like" process that acts more like a window manager and does not necessarily have "tabs". I'm thinking of a more generalized version of tabbed with a --notabs switch or something.
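(For anyone curious, the core mechanism underneath the X plumbing is just signals; a minimal sketch, assuming you have already mapped the background window to its process ID, e.g. via _NET_WM_PID, which is a detail the bash plumbing handles and not shown here:)

    #include <signal.h>     // kill(), SIGSTOP, SIGCONT
    #include <sys/types.h>  // pid_t

    // Pause or resume a tab's process once you know its pid.
    bool pause_tab(pid_t pid)  { return kill(pid, SIGSTOP) == 0; }  // frozen: uses no CPU
    bool resume_tab(pid_t pid) { return kill(pid, SIGCONT) == 0; }  // back to normal

A stopped renderer burns no CPU, but as noted above it also can't answer selection requests, hence the paste issue.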


Almost all UI programming is single threaded. If the main thread is blocked for more than 10-16ms, then frames start dropping and the UI is unresponsive for that time.

Add locks and you add many more causes for the main thread to be blocked beyond pure computation.
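The usual mitigation, as a minimal sketch with hypothetical names: push the blocking work onto a worker and have the frame loop only poll for the result, so the main thread never blocks past its ~16 ms budget.

    #include <chrono>
    #include <future>
    #include <thread>

    // Pretend this is the expensive, possibly lock-contended work.
    int slow_query() {
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
        return 42;
    }

    int main() {
        using namespace std::chrono_literals;
        // Run it off the UI thread...
        std::future<int> pending = std::async(std::launch::async, slow_query);

        bool have_result = false;
        while (!have_result) {
            // ...and in each "frame", only peek at it without blocking.
            if (pending.wait_for(0ms) == std::future_status::ready) {
                int value = pending.get();
                (void)value;             // update the UI with the result here
                have_result = true;
            }
            // draw the frame, process input, etc.; stay under the ~16 ms budget
            std::this_thread::sleep_for(16ms);
        }
        return 0;
    }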


Not just UI programming, almost all programming is single threaded. What programmer wants to deal with threads, really? They only do because they need to get more performance out of a chip.

And that leads us to the real problem. Chips are not advancing at the rate they should. Additional performance gains come from adding multiple chips to the same die, not making the chips faster. This forces programmers to think about parallelism, which makes everything harder.

In the mid-90s, chips were improving so fast that you didn't need to consider special tricks; you could just wait 3 or 4 months and a chip would come out that handled your application.

Now single processor performance is basically at a standstill and it's up to the software developer to spread work in parallel to make things faster.


> Chips are not advancing at the rate they should.

A weird interpretation of a singular event in the history of technology. No wheel maker complained because wood wasn’t getting twice as strong every 18 months.


Moore’s law and folks like Ray Kurzweil have for decades been driving the point that this isn’t a single event, but a trend that will continue indefinitely. I don’t buy it myself, but it’s now become conventional wisdom.

We’ve been taught for decades that chips will double in performance every 18 months (I know Moore said transistor count, not performance). It’s now turning out to be untrue. We think about the singularity all the time, but haven’t properly considered what will happen if technology improvements stall.


We need multi-threading to be super simple but for that we need mass adoption of new programming tools and techniques (and the corresponding mass-abandoning of old ones).

But good luck with that! You're fighting against the most powerful force in the universe: economics (more specifically sunk cost into existing investments). "Soft"ware is a lot "harder" once economics are taken into account.


That's why I opted for fewer cores but highest possible core speed. I can't stand any UI lag.


The "C++, threads and locks" programming model is inherently performance-unpredictable, since a high priority thread can't interrupt a low priority thread holding a lock to a contended resource.

The field has made progress in "figuring out" the general problem in the form of other programming models, but that stuff is not close to gaining popularity on the desktop. Things like STM, lock-free persistent data structures, and shared-nothing concurrency.

There are also attempts to make C++ & locks real-time friendly, but the programming model is already prohibitively hard wrt correctness, and the added complexity explosion from handling aborted locks would make it completely unmanageable.
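A toy illustration of that unpredictability (no real scheduler priorities are set here, which would need platform-specific calls): once the "high priority" thread touches the shared lock, it has no way to bound how long it waits.

    // The UI thread's latency becomes whatever the slowest critical section is,
    // regardless of thread priority; std::mutex gives it no way to bail out.
    #include <chrono>
    #include <mutex>
    #include <thread>

    std::mutex shared_state;

    void background_worker() {            // conceptually low priority
        std::lock_guard<std::mutex> lock(shared_state);
        std::this_thread::sleep_for(std::chrono::milliseconds(200));  // "work"
    }

    void on_keypress() {                  // conceptually high priority (UI)
        std::lock_guard<std::mutex> lock(shared_state);   // may wait ~200 ms
        // ...a dozen frames dropped before we even read the shared state...
    }

A std::timed_mutex would let the UI thread give up after a deadline, but then every caller needs a correct "what do I do without the data" path, which is the complexity explosion from aborted locks mentioned above.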


The C++ (and many other languages') model of threads and locks has clearly proven overly complex and bug-prone. It's incredibly easy to write software with race conditions and incredibly difficult to debug.

The best advance in recent years is recognizing that most latency is due to network access, and using single-threaded but asynchronous models like in JavaScript. It definitely does not remove race conditions, but it makes them much easier to understand and debug.


Yeah, the only sane way to do parallelism is message passing (preferably using zmq).
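Roughly what that looks like with nothing but the standard library (zmq's inproc sockets give the same shape with many more features): each thread owns its own data, and the only shared thing is the channel, so values are handed over rather than shared:

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    template <typename T>
    class Channel {
        std::mutex m;
        std::condition_variable cv;
        std::queue<T> q;
    public:
        void send(T msg) {
            { std::lock_guard<std::mutex> lock(m); q.push(std::move(msg)); }
            cv.notify_one();
        }
        T receive() {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [this] { return !q.empty(); });
            T msg = std::move(q.front()); q.pop();
            return msg;
        }
    };

    int main() {
        Channel<std::string> ch;
        std::thread worker([&] { ch.send("result from worker"); });
        std::cout << ch.receive() << "\n";   // the only synchronization point
        worker.join();
    }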


I don't think thread prioritization is the cause of the apparent lag. We've gone from mostly synchronous systems to highly asynchronous systems. Typing on a keyboard used to be a hardware interrupt that would also print instantly on the screen. Now it's a serial bus being polled every few milliseconds, followed by a buffer copy between kernel and user space, with one or several memory allocations and a transfer of ownership to a GUI thread, which, when scheduled, may draw the character on the screen using a DOM that needs to be recalculated, which requires accessing the memory allocator in the OS, which is currently locked by your Android build.


I have a similar workstation (though 12 core I think) and recently I've thought about just running entirely separate VMs, each with dedicated cores and RAM. I'm sick of random slowdowns like you mentioned.


I think qubes os could be a good fit for that workflow


I remember back in the Amiga days, the processes for the GUI would be totally separate from the program logic. Add the pre-emptive multitasking and you would have GUI elements that would respond instantly, always, even when the rest of your program was hung! Of course, the button callbacks wouldn't fire, but the button would appear to press, you could move windows, etc. It did a lot to reduce the perception of lag, whilst obviously introducing quite a few quirks of its own.


The Amiga didn't have multiple address spaces, so that's cheating. I've never used an Amiga, but BeOS was really much more responsive than either Linux or Windows. That was before SSDs and multiple cores were available, though, so maybe now the difference would be much less noticeable. There is Haiku (in alpha or beta), but AFAIK it doesn't have any native modern web browser.


Depends on what you mean by "modern", I guess? https://www.haiku-os.org/files/get-haiku/webkit.png


WebKit is modern, yes, but it isn't really "BeOS native".


It uses native APIs for drawing, text layout, media playback, and HTTP. I don't know how much more native you want...


WebKit wasn't designed for BeOS, so I'm not sure that using the native API to draw is enough to make it a real native BeOS browser.


You might find this study of computer latency across 40 years of hardware interesting

http://danluu.com/input-lag/


> I just kind of figured we'd stop having UI lag by now

Software will always expand in resource usage as computing becomes faster. This ought to be an adage that more programmers should think about.


Could this be because many programs are still single-threaded (e.g. JavaScript in the browser), but multi-core workstations usually have lower per-thread performance?


The article states that this problem actually came from service workers (and how MS allocates pages so V8 can spawn them). Service workers are specifically designed to circumvent issues regarding UI lag from a single thread.


We probably need to be using a real time OS like QNX or other.


I have a Core i5 with 32 GB of factory-overclocked DDR3 RAM that is faster than many DDR4 kits.

My machine randomly has keyboard or mouse input delay even after a fresh install...


Are you using GNOME?

And a slow hard disk is often the bottleneck, in the case of Windows.


Priority inversion.


Have to give it to Apple for acknowledging the difficulty in writing responsive (as in no lockups) applications and designing multiple developer framework solutions to solve it (Grand Central Dispatch and NSOperationQueue).

Both of those are designed to let developers easily offload work to background threads and prioritise queued work for the user. No open-for-interpretation thread priorities, but named QoS classes (User Interactive, User Initiated, Utility and Background). That abstraction makes much more sense.

We just need more developers to make use of it.
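For reference, the pattern being described looks roughly like this with the libdispatch C API (the same idea is spelled DispatchQueue.global(qos:) in Swift; compiled as Objective-C++ or with clang's -fblocks on Apple platforms). This is a sketch, not a complete app:

    // Heavy work goes to a named-QoS global queue; only the cheap UI update
    // hops back onto the main queue, so the main thread never blocks on it.
    #include <dispatch/dispatch.h>

    void refresh_mailbox(void) {
        dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
            // ...fetch and parse mail off the main thread...
            dispatch_async(dispatch_get_main_queue(), ^{
                // ...apply the results to the UI without blocking it...
            });
        });
    }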


Maybe more developers would make use of it if it was open source, existed for other platforms, and available for languages people actually use on those other platforms.


GCD is. NSOperationQueue is too tightly coupled to Foundation to be useful/relevant on other platforms.


It looks like there's a working GCD port for Linux in https://github.com/apple/swift-corelibs-libdispatch


Gmail and Inbox both hang for me on my Chrome on Linux with almost no load.

It's funny how tides turn. Initially Gmail was the king of performance.


I need 27 seconds on Firefox 52.9 ESR (Debian) and about 20,287 kbyte of data transferred in 148 requests just to reach an idle GMail tab.

What is all this stuff even doing?


I usually append /h/ to the URL (/mail/h/) and get an HTML version. It can work even without JS and it has the classic design with small rows that works well on my small screen. No Material Design and no huge elements with large offsets.


I hate Material Design. I don't want more vacant space, I want density.

I've always loved Japanese website layouts. Content is beauty.


I hope you have noticed that in Gmail you can switch between different "density" modes. In your Inbox, click on the cogwheel in the top right-hand corner and select the second option, "Display density". The "Compact" setting is actually rather dense, although it mostly affects the density of rows (conversations) in your inbox and does not scale e.g. the search box input field above.


do you have any examples or resources on Japanese style web design?


It's everywhere. Simple example: Compare the differences--and even the subtleties--in information density between yahoo.co.jp and Yahoo.com.


Oh wow that is a stark contrast. yahoo.co.uk, yahoo.co.in, yahoo.co.id, etc. are all incredibly low-density as well. I have always loved the Japanese aesthetic, I wonder if any other cultures have a similar preference for websites.


Only downside being that it's common to see people walking around in Japan with their phone about a foot in front of their face.


Sounds like me. On the couch or walking around I'm at 10-12 inches away. I wonder--do people who don't prefer high information density hold their devices farther away?


It's not just Japanese; Chinese too. An article from not long ago theorized some reasoning behind it: https://randomwire.com/why-japanese-web-design-is-so-differe...


I just tried that and noticed promotions in the inbox; the non-HTML version separates "Primary" from "Promotions". Are they merging them to keep the HTML version annoying?


The HTML version is the legacy UI from a few generations ago, before features like inbox categories were implemented.


I always assumed the behaviour and categorization were in their backend. I think it's weird that the view (HTML vs. JavaScript) is not decoupled from the model.


While that is possible, it would make no sense. Why spend time refactoring code to improve a legacy UI for old browsers almost nobody is using? The legacy UI doesn't know anything about categorization because it was added later.

Actually, Gmail's approach is sane. They just keep the old UI working as it did many years ago, although the backend could have changed completely.


Maybe the categorisation is metadata against each email. Maybe even so they could use it without breaking the old legacy systems.


what exactly are they supposed to do in the backend?

they probably have an api in which you can limit the results by category. As the old interface doesn't have any limiter implemented, they're getting all the results back.

you can also disable categories in the newer interface and the effect would be the same (everything in the inbox)


Now this is what I call a sane UI!


this does not work for me. when I type "gmail.com" it redirects to:

    https://mail.google.com/mail/u/0/#inbox
where should I put the /h/?


    https://mail.google.com/mail/u/0/h/



I ended up going to Thunderbird for my main email addresses (work & personal), with Rainmail for quasi-disposable addresses on my various domains. Works much better than Gmail did towards the end of my years using Gmail.


I have to use Thunderbird for work and I wish it was 1% as good as Gmail. The search sucks, it's slow to receive mail, it crashes quite often. Either your Gmail is profoundly broken or you have a magic Thunderbird. In case the latter is true: do you have any tips to optimise Thunderbird?


Have been running Thunderbird for G-Apps based work email for over a year (Fedora 26,27). Using the Provider for Google Calendar extension works well for integrating calendar into Thunderbird.

Other than fighting with the calendar occasionally (sometimes needing to force a manual sync to see coworkers' events), Gmail in Thunderbird has been pretty smooth. I have not noticed slow search, slow mail receipt, or application crashes. In fact, the search is often too good, in that after running it, it matches hundreds more emails than I usually expect. I usually stick with the quick filter, which is a bit less flexible but generally returns more useful results.

Another advantage, on Linux at least, is that if you copy your ~/.thunderbird folder to another machine, all accounts, GUI layouts, settings, search results, tabs, etc move over flawlessly. I think you just have to re-sign in for Google accounts on the new machine and you are good to go.


I would check that Google isn't ratelimiting you, and I would also check that your hard drive/SSD and RAM aren't dying. I've experienced issues like what you describe, but the first time was caused by a dying hard disk, and the 2nd by Google limiting the number of IMAP connections to a hilariously low number.


Thunderbird is quite a piece of crap. Apart from the core, everything else is written in javascript and thus it is a single-core web application.

I have decided to just leave it be and let it hog my computer, because it interfaces decently with Google Calendar.

That being said, I can definitely vouch for clients like Claws Mail (a bit ugly, but does its job) and Evolution (super fast, but it's written in C#/Mono)


Mining your personal data for ad revenue. Or was this a trick question?


Yeah, Inbox basically causes Chrome to beachball on OS X. They control the browser and the web app, and have still managed to put together an experience like this.


And their latest interface broke middle-click to open an email in a new tab. One of the major advantages of using Gmail in a browser vs. in the app, gone. Oddly, ctrl-click still works.


> Initially Gmail was the king of performance.

When, and compared to what? I'm pretty sure it was never faster than Thunderbird let alone something like Mutt.


When it was just released, compared to things like Yahoo Mail and Hotmail.


I just gave up on running Gmail at all. Switched to Thunderbird and don't miss Gmail at all.


Try using Outlook. Gmail is a Ferrari in comparison.


Outlook.com was super snappy when it arrived, today is super slow. "Modern" web development always ends up here.


Oh I wasn’t even going there.

Outlook, the Windows app, deserves to be called the most awful app ever, but Skype for Business worked harder for it!


I have ditched Inbox by Google and gone back to Gmail by Google.


It also eats battery ridiculously fast


That was a lot more interesting than the "man, bloatware" complaint I was expecting from the title.


[flagged]


Why do you say "he is partially responsible for it"?


FTA: "I work on Chrome, on Windows, focused on performance. Investigating this hang was actually my job."


So? The two bugs uncovered were Windows bugs.


The user doesn't care. Chrome is slow, he won't switch OS just because of that.


Over 300 MB per open tab is not a Windows bug. Opera 12 does "legacy" Gmail just fine in ~5 MB of RAM.


It appears the author works on Chrome.


Holy smokes! That is a freakin' awesome deep dive into a bug that has been irritating the crap out of me of late. My Gmail window would just freeze for long periods of time, other windows were fine, and restarting the browser (Chrome) would fix it for a while. I had zero idea how I would figure out what it was doing; now I have a road map for looking at these kinds of things. Clearly some tools to play with there.


So why are we in this mess? Because there are still buffer overflows.

Address space randomization is done because buffer overflows allow exploits. But rather than fixing the underlying problem, we now have complex schemes to spread programs over the entire 64 bit address space to make such exploits unreliable.

Then, apparently Microsoft's Javascript JIT engine has enough problems with buffer overflows that each compiled program is in a different random part of the address space to try to prevent Javascript exploits.


Because the underlying problem cannot be solved until all the programs and systems currently in use are completely rewritten and reimplemented with a widely accepted programming language and methodology that ensure correctness and security.

As long as the C+Unix-era approach to programming and operating systems is still in use, there are only three solutions available...

1. Fix-and-miss: fixing individual bugs as they are found (and missing others), plus safe programming practice.

2. Isolation

3. Mitigation

Isolation is useful to limit the scope of a security breach but cannot stop attackers from exploiting the bugs. The only solution which is able to stop attackers from exploiting existing programs is mitigation - you don't fix-and-miss individual bugs; bugs are always there, so what we need is to stop attackers from exploiting them. Some exploits are easy to stop, hence NX. Others can only be stopped in a probabilistic way, hence ASLR.


> Microsoft's Javascript JIT engine

This is Google's. The problem manifested in Chrome. From the article:

> It turns out that v8 (Chrome’s JavaScript engine) has CodeRange objects for managing code-gen [...] But what if you have multiple CodeRange objects, and what if those get allocated at random addresses and then freed?


Isn't that a bit ignorant when nobody is forced to access their e-mail through a website running JIT-compiled JS in a browser with built-in OS features on a bloated graphical UI of an OS carrying the burden of 25+ years of backwards compatibility?

A fast client running on a lean OS would not exhibit these problems - and AFAIK Gmail still supports IMAP.

(I use Fastmail on macOS & iOS/Safari and have no such issues either.)


Web applications are the best example for Wirth's law. It's really mind-boggling if you think about it. None of the client-side components of a web application were originally designed for what they are used for today:

1. a programming language (Javascript) that turns into a nightmare if you try to write programs with more than a few thousand lines

2. a user interface that requires you to learn two additional declarative languages (HTML and CSS), both of them equally incomplete and crappy.

3. an API between 1 and 2 that is so lacking that you need an external library (e.g. jQuery) to reduce the amount of boilerplate code to a sane level.

4. a network protocol (HTTP) that was designed for static web pages with a few pictures and that has serious performance issues for anything more complex.

5. and finally, the whole thing implemented in a language (C or C++) that is fast but offers you plenty of opportunities to shoot yourself in the foot in obscure ways, security-wise.

Things are changing fortunately (HTTP/2, QUIC, Rust).


> nobody is forced

Pretty certain Google Apps administrators can disable IMAP and POP for their organizations.


So basically it is a problem with CFG (exploit protection), which is not ready for cases where executable memory blocks are allocated and freed many times.


Kinda; mainly that NtQueryVirtualMemory was super slow when scanning over CFG, which was fixed in the April 2018 Windows 10 update.

It also uncovered a "bug" (performance weakness?) in v8 that they were able to fix so that fewer CFG blocks are allocated.

So kind of a win/win in the end, bugs fixed, world a slightly better place.


> It also uncovered a "bug" (performance weakness?) in v8 that they were able to fix so that fewer CFG blocks are allocated.

They implemented a freelist, it's a common workaround for problematic allocators, but has its own issues (https://www.tedunangst.com/flak/post/analysis-of-openssl-fre...)


Normally a freelist is a tradeoff between memory and speed, but in this case there is essentially no tradeoff. If you look at the fix you will see that we don't maintain a freelist of memory in any traditional sense. We just retain a freelist of addresses. These addresses are then used as hints for where to allocate future CodeRange objects. If that address is gone, we'll go somewhere else.

Because the memory is fully freed and reallocated this also avoids security concerns.
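A rough sketch of that idea in Win32 terms (not the actual v8 code; the names here are made up): the memory itself is genuinely released with VirtualFree, and only the address is kept and later passed to VirtualAlloc as a preferred base, falling back to "anywhere" if it's taken.

    #include <windows.h>
    #include <vector>

    static std::vector<void*> g_address_hints;   // addresses of freed code ranges

    void* AllocateCodeRange(size_t size) {
        while (!g_address_hints.empty()) {
            void* hint = g_address_hints.back();
            g_address_hints.pop_back();
            void* p = VirtualAlloc(hint, size, MEM_RESERVE, PAGE_NOACCESS);
            if (p) return p;                     // reused an old address
        }
        return VirtualAlloc(nullptr, size, MEM_RESERVE, PAGE_NOACCESS);
    }

    void FreeCodeRange(void* p) {
        VirtualFree(p, 0, MEM_RELEASE);          // memory is fully released...
        g_address_hints.push_back(p);            // ...only the address is kept
    }

Reusing addresses also sidesteps the CFG bookkeeping growth, since the same CFG region gets touched again instead of ever more of the reservation being committed.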


It didn't strike me as a bug, but as a workaround for Windows brokenness (Windows doesn't release the allocations).


Yes


Maybe it is better to disable CFG then, because right now it causes a memory leak.


That was considered (see the discussion in the bug), but the v8 fix will also fix the memory leak while still retaining the security value of CFG.


I don't mind so much when a single program that I'm using interactively has a pause, even though it's annoying as hell. What really bugs me is that regardless of how many cores I have, there is always one pegged by some annoying background service.

If I start up Windows, you can bet that Windows Update will first peg a core for a few minutes. Then the built-in Windows antivirus takes over (perhaps because Windows Update wrote some files, who knows), pegging a core for a few more minutes. Then my backup program's service (iDrive in this case) pegs a core for a few more minutes. For all of these programs I understand why they are running, but despite me setting "quiet hours" and "please back up only at night" etc., they seem to run for several minutes when I least want it.

After all these programs are done (or, more likely, I have killed their processes and stopped their services), some completely random apps and services seem to always take a core and peg it. SNMP (some network protocol service) often does it for hours on end. Killing it doesn't make anything obvious stop working - so I have no idea why it can use 100% CPU for hours on end. Explorer.exe (the desktop process) often goes into 100% CPU mode. The "sound graph isolation" service is a common culprit.

When doing normal desktop work, this is often barely noticeable. But when I boot my Windows machine, it's usually to play a game, immediately after startup. And despite having many cores to spare, if one core is pegged, the framerate is 20 fps instead of 100 fps. This is presumably not because of CPU starvation, but more likely because of competition for memory/cache/storage resources.

I don't understand why all these services must use 100% CPU for minutes on a fast CPU core, and why they must run ANY logic on startup, which is when the user is most likely to use his machine. Don't even read your app config at process start. Sleep a few hours, THEN wait until the machine is idle, and THEN do logic! Using power-save modes is no better. When you take the machine out of sleep after 12 hours, the services are very eager to check for updates or antivirus or backup again, regardless of time.

I wish Windows could just let me use my machine for what I intend to - which is use ALL my cores for ONE foreground application, ONLY. I'd be happy to boot into a freaking "game mode" or "work mode" which is like safe mode and has ZERO crap running (no unnecessary services, no scheduled tasks can start, and so on).


Analytics about you and your machine won't get collected by themselves. Have you noticed this phenomenon with free software?


I don’t mind analytics/telemetry if done right, just don’t use my CPU when I need it...

As for free software: game titles are rarely free and most don't even run on a free OS, so using Linux (or simply not switching on the gaming machine) "solves" the problem of CPU use only by leaving me unable to run the app I want. It's just not a very good solution.


Great article! The only thing I wish was explained a bit better was the 2 TiB CFG memory reservation. What's that for, again?


Basically, one byte of CFG memory "controls access" to 64-bytes of executable memory - indicating which addresses are valid indirect branch targets. With appropriate compiler and OS support this can help stop some exploits.

Unfortunately it quickly gets really complicated so a bit of hand waving is necessary.
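The back-of-the-envelope arithmetic, assuming the 128 TiB user-mode address space of 64-bit Windows 10 and the 1-byte-per-64-bytes ratio described above:

    #include <cstdint>

    constexpr std::uint64_t kTiB              = 1ull << 40;
    constexpr std::uint64_t kUserAddressSpace = 128 * kTiB;        // x64 Windows 10
    constexpr std::uint64_t kCfgReservation   = kUserAddressSpace / 64;
    static_assert(kCfgReservation == 2 * kTiB,
                  "1 byte of CFG bitmap per 64 bytes of address space ~= 2 TiB");

As I understand it, nearly all of that is just reserved address space rather than committed memory, which is why it's cheap until something has to walk it (which is where the slow NtQueryVirtualMemory scan came in).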


CFG == function pointer checks prior to calling them.

The sparse/virtual bit mask is an optimisation technique so the validation of the target address is quick.


I'm running Manjaro Linux with the Deepin desktop on a dual-core 0.9 GHz CPU with 8 GB of memory.

Boot speed from off to login is less than 5 seconds. From hibernate it's instant.

My girlfriend brought a similar machine to work and people with Windows couldn't believe it was that fast. It runs circles around their new Windows laptops with tons of memory and CPU cores.

Firefox loads in a second. Kingsoft office also loads in less than a second.

I think it's time people started to understand what options are out there now for their laptops. In particular, the Deepin desktop is very fast and beautiful.


So some code has quirks/bugs and your conclusion is that people should switch to Linux...

There's this concept of the right tool for the job. And even if you save 2 seconds of boot time, that is meaningless when you encounter a similar bug in your software and waste a day, or when you simply waste time because you needlessly switched to an unfamiliar OS.

Just as a note, I have Windows 10 booting on a decade-old Lenovo netbook. That's a single-core 1.6 GHz Atom with 1.5 GB RAM. It boots in ~10 s. Much of it is BIOS. And yes, I can type an email with no issues. You should really try switching to Windows 10 and this dirt cheap netbook. :)


I get less than 5 seconds from off to login on a 5-year-old Windows 10 laptop (dual core, 10 GB, SSD). So there are options that can work fast, not just on Linux.


It has nothing to do with Linux though. This morning my Ubuntu 18.04.1 running KDE took 2 minutes to boot and then three and a half minutes to load the desktop. Then it would hang on the menu when typing/searching for an application; I'd wait for the menu to disappear and then it would work. Shrug.


My Windows laptop boots incredibly quickly. I ruthlessly uninstall bloatware and, crucially, I have a fast SSD.


I run Debian Unstable/Testing; with 2000+ tabs in Firefox and three weeks of uptime, the used memory is 3.21 GB out of 15.4 GB.

The year of the Linux desktop was some time in the 00's, just that few people noticed.


0.9 GHz in 2018!? What machine is this?


Intel Core M?

A SoC-like, high-performance, low-power mobile CPU. Its performance-per-MHz ratio is high, which allows reasonable performance at low clock frequencies.


Yep, it is. Also, the computer doesn't have fans, so it's dead quiet at all times. It's an Asus Zenbook UX305.


It could be my 2017 Dell XPS 15.

If I really want to extend battery life, forcing the CPU clock down to 800 MHz is a good idea.


> For some reason most people see either no symptoms or much milder symptoms than I do.

This seems to be right for "most people". But there are definitely a few people who are annoyed by issues like this but aren't in a position to troubleshoot and report it.

Even I, after helping people with computers since the mid-nineties, still can't troubleshoot like that. I'll fall back to latency checking and turning off services one by one, combined with a fair amount of experience plus googling.


Interesting story in particular, but in general, performance and memory behave like any other resource that is plentiful: they get used up until things are slowish again. Like road space. Things get added on top of each other until the reduction in speed becomes visible.

Because these days, unlike in the 90s, it's no help waiting for the next Pentium processor to come out, this usually results in a heavy optimisation cycle in the underlying engine. For example, Firefox has advertised a speed-up several times in the last 10 years, each coming from a focused effort to rewrite or optimise the JS engine, the rendering engine, or something else. Then things accumulate again, until things are too slow.

Obviously the cycles aren't wasted, as you can do things with a few lines of high-level script that would've taken months to implement in the "good old times". But this development inevitably creates bizarre flashbacks where, occasionally, you're doing something simple like typing text on today's monster machines and it takes a few seconds for the screen to catch up with your typing.

When I started programming, things were, more or less, instant. Typing was instant. There was a very short code path from handling the interrupt to updating the screen. The computers reacted more like physical apparatus. Terminals at shops, warehouses, hospitals, with text-based programs to update data, were pretty much instant too. Good clerks could bang their keyboard through multiple subscreens of their program in seconds, and you were checked in, goods reserved, or patient data updated. Later, when multitasking hit the mainstream, flipping between programs was instant. (Not on Windows, though.) Things like I/O took their time, of course, but at one point there was a peak moment where a software machine had nearly as low latencies as a physical machine.

From those times things have mostly gone downhill. Yes, we have immense processing power and near-endless fast memory and disk storage. You can switch between browser tabs of 300 MB each pretty fast, but there is a constant, nagging sense of slowness present all the time. Maybe sometimes switching that tab or bringing up another program takes longer, surprisingly, or there is something simple that just doesn't happen right away. The feeling of instant responsiveness is broken into shards: you can still see a reflection of it if you happen to look at the right angle, but mostly its pieces point elsewhere.

Things like BeOS tried to reach back to that, with varying but notable success. But real success in terms of popularity and market penetration seems to come from piling up stuff until things get too slow. So I doubt we will ever get back to the old world of instant response, even though processing power keeps climbing.


As someone who remembers those times, web apps have been a terrible movement. On the other hand I'm kind of happy that I can do almost everything the same between Linux and Windows, so it's not all bad.


Around what year were those magical times?


I think the dynamic was more that the software was slow when it was written, but had a long shelf life, so after a hardware upgrade or two it seemed very fast. I remember using Norton Commander (in DOS) on an 8086 and it was laggy scrolling a directory with 50 files.


The move from CRT to LCD adds a few milliseconds, depending on the screen. We're so used to it by now that typing on old hardware is almost jarring. It feels almost TOO responsive.


Latency is not everything. Moving from CRT to LCD probably saved my eyesight from degrading 10 years too soon.


It's sure nice to not have your corneas constantly assaulted by charged dust particles.


I've got a 144 Hz LCD display. It's amazing. I feel almost like in the good old days.


Yes. Especially if it has a ULMB function that can strobe the backlight.


On a completely different matter... I have just switched to Gmail's new UI, and now my Mac Mail keeps hanging on threaded emails while consuming very high CPU. Has anyone else encountered this issue?


What browser?

For the last couple of years it seems I'll get some issues on some Google properties if I dare use a different browser than Chrome.

For a while it was search results doing interval training on my CPU, while the page was supposedly idle.

The last time it got to the point where I started troubleshooting, it was Calendar that acted up.

You'd think a company like Google had the resources to verify the UI across at least the 3 or 4 biggest browsers on the three biggest desktop platforms, but it doesn't seem like it.


> You'd think a company like Google had the resources to verify the UI across at least the 3 or 4 biggest browsers

That is, should they ever want to do that. I think I've read multiple discussions in the past about how Google is optimizing exclusively for Chrome while hurting performance and compatibility for any other browser. Which is why Chrome is now basically called the new IE6.


Every other paragraph, the ad box tries to run a full-screen video with unmuted audio by default. I got a headache about halfway through the article and then gave up reading. From the author's work, it doesn't seem this particular blogger needs my two cents from the ads he runs to live a decent life. It's sad what the state of the internet looks like these days ;(


If you hate ads, and you hate the state of the internet, run adblock.

https://chrome.google.com/webstore/detail/ublock-origin/cjpa...

Seriously. Adblock is the way you have to fight back, to change the economics of the internet. Install it, use it indiscriminately, and make sure you tell others to use it.

My company makes a decent chunk of money from ads, btw. Not because we can't make money another, better and cleaner way, but because it makes no sense to leave money on the table when ad-block rates are so low.


and if you really hate ads, install AdNauseum. It "clicks" the ads and can add noise to your consumer profiles.

https://adnauseam.io/


That's not an option for mobile though..


Firefox has extensions on mobile; plus, ads make websites heavier and suck more bandwidth.

_____________________________

Says the person with +500 tabs on chrome android


For some reason firefox on android removed the ability to install on SD card. My cheap phone doesn't have the space to spare on the internal drive.


+500. Whoa. I hit the :D face and sweep all my open tabs into Pocket.


From chrome???


Sure, it's right there as the sharing method


I recommend ublock origin


It is for Samsung and iOS users. Both have the option of installing adblockers.


and Android users willing to use Firefox


Brave browser. It is exactly the same as normal chrome on Android, but it has an ad blocker.


On iOS you have an even better option: the little “readability mode” button in Safari.


You can install a system-wide adblocker if you have a Samsung phone.


Downvotes here are pathetic: "somebody made a mistake on the internet, let's show him how much of an idiot he is!"


Ugh. I'll talk to WordPress. They were supposed to turn off the video ads. I agree, it is sad.


You are a serious author and your articles are of top quality. Don't you think you should invest some time and effort in self-publishing? (Read: run far away from WordPress.)

I too hate side activities when I'm in the zone but you have to admit this situation can eat away at your audience. :(


use an adblocker
