
24-core CPU and I can’t type an email - ghuntley
https://randomascii.wordpress.com/2018/08/16/24-core-cpu-and-i-cant-type-an-email-part-one/
======
bcaa7f3a8bbc
It seems many comments missed the point. The article is not about how bloated
modern software is, or how many useless features and programs waste CPU
cycles on pointless jobs. ( _Yes, modern software is bloated; for this
reason I'm using the MATE desktop on a minimal Gentoo installation, but that
is not what the article is about._)

It describes how a web browser, a piece of software with extremely high
inherent complexity, interacting with the memory allocator of the operating
system, another piece of software with high inherent complexity, combined with
a rarely used feature of Gmail, can trigger complicated interactions and cause
major problems due to hidden bugs in various places. This type of apparently
"simple" lockup requires "the most qualified people to diagnose".

These problematic interactions cannot be avoided by running fewer "gadgets" in
the desktop environment; they can be triggered and cause lockups even if the
system in question is otherwise well-performing. Installing a Linux desktop
doesn't solve this class of problem (though this specific bug doesn't exist there).

The questions worth discussing are: why and how does it happen? How can we
make these problems easier to diagnose? What kind of programming language
design can help? What kind of operating system or browser architecture can
help? How can we manage complexity and the problems that come with it, and
what are its implications for software engineering and parallel programming?

From another perspective, bloated software is also an on-topic question worth
talking about. But instead of the talking points of "useless programs wasting
CPU cycles" or "install minimal Debian", we can ask questions like "does _ALL_
modern software/browser/OS have to be as complex as this?", "what road has led
us to this complexity?", "what encouraged people to make such decisions?",
"can we return to a simpler software design, sometimes?" ( _e.g. a vending
machine near my home, trivially implementable with BusyBox or even a
microcontroller, now comes with a full Windows 7 or Ubuntu desktop! Even
the advertising screens use Windows 8, and sometimes BSoD, despite all they
need to do is show a picture. The same goes for modern personal
computers._ ), or even "was Web 2.0 a mistake?" (and so here we are on Hacker
News, one of the fastest websites in the world!). These topics are also
interesting to talk about.

~~~
DanielBMarkham
I get what you're saying, but it seems you already have a place you want to go
and are using the article to get there -- much like the other commenters.

While these things are important, to me the critical phrase in the article is
this: _...It seems highly improbable that, as one of the most qualified people
to diagnose this bug, I was the first to notice it..._

My system hangs when typing text all of the time. Reading this article
indicates to me that 1) it probably hangs for tens of millions of other
people, and 2) nobody has either the time or the money to do anything about it.

That sucks. Additionally, it appears to be a situation that's only gotten
worse over time (for whatever reason).

You can look for potential answers, as you point out. More important,
however, is the fact that nobody is aware of the scope of these problems.
Millions of hours lost, the situation getting worse, and there's nobody
hearing the users scream and nobody (directly) responsible for fixing things.
In my mind, figure those things out and then we can start talking about
specific patterns of behavior that might limit such problems in the future.

tl;dr Who's responsible for fixing this and how would they ever know it needs
fixing? Gotta have that in place before anything else.

~~~
JoeAltmaier
We've all been trained by web apps that stuttering, jaggy rendering, and hangs
are normal and expected. I've railed against web apps for a decade now, mostly
to deaf ears. They are so broken, unperformant, and unreliable that in a
previous era they would never have been released with problems like those.

But now desktop apps have the same issues. And it's not going back to where we
were. So I guess we'll get used to it.

~~~
piercebot
Jittery rendering, hangs, and high latency are deal breakers in Virtual
Reality. I think things will get better as a matter of necessity for VR and
AR, and then "veterans" from the field will bring back "new ideas" about
performance and user experience.

Or, if you prefer your optimism a bit more dystopian-flavored, some megacorp
will come around with a walled garden whose user experience is just so good
that users will flock to use it, and the rest of the industry will have to
adapt to compete.

In either case, I don't think getting used to it is our only choice :)

~~~
mrguyorama
Except in VR's case they just ask you to purchase a $1000 set of kit, and if
you experience stuttering they just shrug and suggest upgrades.

------
habosa
I just kind of figured we'd have stopped having UI lag by now. I work on a
24-core workstation with 64GB of RAM and things lag all the time. Not slow to
complete, but jittery key entry and non-responses.

Haven't we figured out thread prioritization by now? Can't we make sure
something draws 60 times per second while things are going on in the
background? My Android Studio build should be totally isolated from my inbox.

I know this is a bit orthogonal to the article and that I'm certainly not well
informed about operating systems these days; I'd love to get schooled in the
comments.
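The pattern that answers the question is the one implied above: heavy work lives on background threads, and the foreground loop only ever polls for finished results rather than blocking. A minimal sketch in plain Python threading (purely illustrative, not any real UI framework's API):

```python
import queue
import threading

def slow_build(n):
    # Stand-in for heavy background work (a compile, an indexing pass...).
    return sum(range(n * 100_000))

results = queue.Queue()

def worker(jobs):
    # All of the heavy lifting happens off the foreground thread.
    for job in jobs:
        results.put(slow_build(job))

threading.Thread(target=worker, args=([1, 2, 3],), daemon=True).start()

# The "UI loop" polls with a short timeout instead of blocking, so it
# could keep drawing frames and handling input between checks.
finished = []
while len(finished) < 3:
    try:
        finished.append(results.get(timeout=0.05))
    except queue.Empty:
        pass  # a real loop would render a frame here
```

The catch, as the article shows, is that even this discipline doesn't help when the stall happens below the application, inside the allocator or the kernel.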

~~~
bartread
This isn't meant to be an especially religious comment, but I regularly switch
back and forth between OSX and Windows 10 and wanted to mention the
differences. I also use Ubuntu fairly regularly, but only over ssh, so I'll
discount it from this comparison.

Both OSX and Windows 10 do suffer from UI lag but, in my experience, it is
_far worse_ on Windows 10 to the point where I have come to _absolutely
detest_ Windows 10. It's particularly bad on the login/unlock screen, which
often takes multiple seconds to even appear when you get back to your machine
- frustrating when you're in a hurry.

With that said, some of it is certainly application-specific. Office 365
Outlook, for example, is particularly egregious in this regard: switching
between windows, or between mail and calendar, is awful. Microsoft Teams also
regularly hangs for multiple seconds when switching between teams or between
chats. Extremely aggravating.

~~~
x3sphere
Ubuntu w/Gnome is possibly the worst offender. I actually like Gnome a lot,
but anytime I leave my PC on for more than a day the UI gets incredibly
sluggish. Just moving windows around gets choppy, and there is an annoying
pause whenever I click the application launcher (it happens even without the
animation).

I've also found macOS provides the smoothest experience. I haven't found W10
that bad, but I haven't used it that extensively. I really only boot into
Windows to play games these days.

~~~
ben-schaaf
I've found that alt-f2 then r (i.e. restarting Gnome) puts it back in as good
a state as a fresh boot. Sometimes my extensions won't load, and this restarts
those too.

~~~
suspectdoubloon
This doesn’t work under Wayland. Is there a way to do something similar on
Wayland?

~~~
Grimm665
Using Fedora 27 with Wayland, I have not found a way to do this other than
logging out and back in :/

------
peterept
Have to give it to Apple for acknowledging the difficulty in writing
responsive (as in no lockups) applications and designing multiple developer
framework solutions to solve it (Grand Central Dispatch and NSOperationQueue).

Both those are designed to enable developers to easily offload work to
background threads and prioritise queued work for the user. No open-for-
interpretation thread priorities, but named QoS priorities (User Interactive,
User Initiated, Utility and Background). It makes much more sense for that
abstraction.

We just need more developers to make use of it.
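The named-QoS abstraction is easy to model: tasks carry a semantic class rather than a raw priority number, and the scheduler drains the most user-facing class first. A toy Python sketch (the class names mirror GCD's, but nothing else here resembles the real API):

```python
import queue

# Named QoS classes, most urgent first -- semantic labels instead of
# raw numeric thread priorities (labels borrowed from GCD's naming).
QOS = {"user-interactive": 0, "user-initiated": 1,
       "utility": 2, "background": 3}

work = queue.PriorityQueue()

def submit(qos, seq, task):
    # seq breaks ties so equal-QoS tasks run in submission order.
    work.put((QOS[qos], seq, task))

submit("background", 0, "index mailbox")
submit("user-interactive", 1, "draw keystroke")
submit("utility", 2, "sync feeds")

order = [work.get()[2] for _ in range(3)]
# User-facing work drains first, regardless of submission order.
```

The point of the named classes is that "user-interactive" means the same thing in every library on the system, which is what an open-for-interpretation integer priority never achieves.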

~~~
mort96
Maybe more developers would make use of it if it was open source, existed for
other platforms, and available for languages people actually use on those
other platforms.

~~~
lukeh
GCD is. NSOperationQueue is too tightly coupled to Foundation to be
useful/relevant on other platforms.

~~~
Tsiklon
It looks like there's a working GCD port for Linux in
[https://github.com/apple/swift-corelibs-
libdispatch](https://github.com/apple/swift-corelibs-libdispatch)

------
emilsedgh
Gmail and Inbox both hang for me on my Chrome on Linux with almost no load.

It's funny how tides turn. Initially Gmail was the king of performance.

~~~
beefhash
I need 27 seconds on Firefox 52.9 ESR (Debian) and about 20,287 kbyte of data
transferred in 148 requests just to _reach_ an idle GMail tab.

What is all this stuff even doing?

~~~
codedokode
I usually append /h/ to the URL (/mail/h/) and get the HTML version. It works
even without JS, and it has the classic design with small rows that works well
on my small screen. No Material Design and no huge elements with large offsets.

~~~
bevel
I _hate_ Material Design. I don't want more vacant space, I want density.

I've always loved Japanese website layouts. Content is beauty.

~~~
audiolion
Do you have any examples of or resources on Japanese-style web design?

~~~
themodelplumber
It's everywhere. Simple example: Compare the differences--and even the
subtleties--in information density between yahoo.co.jp and Yahoo.com.

~~~
anonytrary
Oh wow that is a stark contrast. yahoo.co.uk, yahoo.co.in, yahoo.co.id, etc.
are all incredibly low-density as well. I have always loved the Japanese
aesthetic, I wonder if any other cultures have a similar preference for
websites.

~~~
SmellyGeekBoy
Only downside being that it's common to see people walking around in Japan
with their phone about a foot in front of their face.

~~~
themodelplumber
Sounds like me. On the couch or walking around I'm at 10-12 inches away. I
wonder--do people who don't prefer high information density hold their devices
farther away?

------
Semiapies
That was a _lot_ more interesting than the "man, bloatware" complaint I was
expecting from the title.

------
ChuckMcM
Holy smokes! That is a freakin' _awesome_ deep dive into a bug that has been
irritating the crap out of me of late. My gmail window would just freeze for
long periods of time, other windows were fine, and restarting the browser
(Chrome) would fix it for a while. I had zero idea how I would figure out what
it was doing; now I have a road map for looking at these kinds of things.
Clearly some tools to play with there.

------
Animats
So why are we in this mess? Because there are still buffer overflows.

Address space randomization is done because buffer overflows allow exploits.
But rather than fixing the underlying problem, we now have complex schemes to
spread programs over the entire 64 bit address space to make such exploits
unreliable.

Then, apparently, Microsoft's JavaScript JIT engine has enough problems with
buffer overflows that each compiled program is placed in a different random
part of the address space to try to prevent JavaScript exploits.

~~~
lazyjones
Isn't that a bit ignorant, when nobody is forced to access their e-mail
through a website running JIT-compiled JS, in a browser with built-in OS
features, on the bloated graphical UI of an OS with the burden of 25+ years of
backwards compatibility?

A fast client running on a lean OS would not exhibit these problems - and
AFAIK gmail still supports IMAP.

(I use Fastmail on MacOS&IOS/Safari and have no such issues either).

~~~
tralarpa
Web applications are the best example of Wirth's law. It's really mind-
boggling if you think about it. None of the client-side components of a web
application were originally designed for what they are used for today:

1\. a programming language (Javascript) that turns into a nightmare if you try
to write programs with more than a few thousand lines

2\. a user interface that requires you to learn two additional declarative
languages (HTML and CSS), both of them equally incomplete and crappy.

3\. an API between 1 and 2 that is so lacking that you need an external
library (e.g. jQuery) to reduce the amount of boilerplate code to a sane
level.

4\. a network protocol (HTTP) that was designed for static web pages with a
few pictures and that has serious performance issues for anything more
complex.

5\. and finally, the whole thing is implemented in a language (C or C++) that
is fast but offers plenty of opportunities to shoot yourself in the foot in
obscure, security-relevant ways.

Things are changing fortunately (HTTP/2, QUIC, Rust).

------
codedokode
So basically it is a problem with CFG (an exploit protection) which is not
ready for cases where there are many allocations and frees of executable
memory blocks.

~~~
Someone1234
Kind of; mainly that NtQueryVirtualMemory was super slow when scanning over
the CFG region, which was fixed in the April 2018 Windows 10 update.

It also uncovered a "bug" (performance weakness?) in v8 that they were able to
fix so less CFG blocks were allocated.

So kind of a win/win in the end, bugs fixed, world a slightly better place.

~~~
masklinn
> It also uncovered a "bug" (performance weakness?) in v8 that they were able
> to fix so less CFG blocks were allocated.

They implemented a freelist, it's a common workaround for problematic
allocators, but has its own issues
([https://www.tedunangst.com/flak/post/analysis-of-openssl-
fre...](https://www.tedunangst.com/flak/post/analysis-of-openssl-freelist-
reuse))

~~~
brucedawson
Normally a freelist is a tradeoff between memory and speed, but in this case
there is essentially no tradeoff. If you look at the fix you will see that we
don't maintain a freelist of memory in any traditional sense. We just retain a
freelist of addresses. These addresses are then used as hints for where to
allocate future CodeRange objects. If that address is gone, we'll go somewhere
else.

Because the memory is fully freed and reallocated this also avoids security
concerns.
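Roughly, the idea can be modeled like this (a toy Python sketch of an address-hint freelist, not the actual V8 code):

```python
class HintingAllocator:
    """Toy model of a freelist that retains only *addresses*, not memory.

    Freed regions are fully released; their start addresses are kept and
    offered as placement hints for the next allocation, so the same
    address-space slots get recycled instead of consuming fresh ones.
    """

    def __init__(self, base=0x1000_0000, size=0x10000):
        self.size = size
        self.next_fresh = base
        self.hints = []        # addresses only; no memory is cached
        self.live = set()

    def alloc(self):
        while self.hints:
            addr = self.hints.pop()
            if addr not in self.live:  # the hinted slot may be taken
                self.live.add(addr)
                return addr
        addr = self.next_fresh     # no usable hint: take fresh space
        self.next_fresh += self.size
        self.live.add(addr)
        return addr

    def free(self, addr):
        self.live.remove(addr)     # the memory itself is fully freed...
        self.hints.append(addr)    # ...only its address survives, as a hint

a = HintingAllocator()
first = a.alloc()
a.free(first)
second = a.alloc()   # reuses the freed slot instead of growing the range
```

Because only addresses are retained, the address space stops growing without the stale-contents hazards of a traditional memory-caching freelist.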

------
alkonaut
I don't mind so much when a single program that I'm using interactively has a
pause, even though it's annoying as hell. What really bugs me is that
regardless of how many cores I have, there is always one pegged by some
annoying background service.

If I start up Windows, you can bet that Windows Update will first peg a core
for a few minutes. Then the built-in Windows antivirus takes over (perhaps
because Windows Update wrote some files, who knows), pegging a core for a few
more minutes. Then my backup program's service (iDrive in this case) pegs a
core for a few more minutes. I understand why all of these programs are
running, but despite my setting "quiet hours" and "please back up only at
night" etc., they seem to run for several minutes exactly when I least want
it.

After all these programs are done (or, more likely, I have killed their
processes and stopped their services), some completely random apps and
services always seem to take a core and peg it. SNMP (a network protocol
service) often does it for hours on end. Killing it doesn't make anything
obvious stop working, so I have no idea why it can use 100% CPU for hours.
Explorer.exe (the desktop process) often goes into 100%-CPU mode. The "sound
graph isolation" service is a common culprit.

When doing normal desktop work this is often barely noticeable. But when I
boot my Windows machine, it's usually to play a game, immediately after
startup. And despite having many cores to spare, if one core is pegged the
framerate is 20fps instead of 100fps. This is presumably not because of CPU
starvation, but more likely because of competition for memory/cache/storage
resources.

I don't understand why all these services must use 100% of a fast CPU core
for minutes, and why they must run ANY logic at startup, which is when the
user is most likely to be using his machine. Don't even read your app config
at process start. Sleep a few hours, THEN wait until the machine is idle, and
THEN do your logic! Using power-save modes is no better: when you take the
machine out of sleep after 12 hours, the services are all too eager to check
for updates or run antivirus or backup again, regardless of the time.

I wish Windows would just let me use my machine for what I intend to - which
is using ALL my cores for ONE foreground application, ONLY. I'd be happy to
boot into a freaking "game mode" or "work mode" which is like safe mode and
has ZERO crap running (no unnecessary services, no scheduled tasks starting,
and so on).

~~~
madez
Analytics about you and your machine won't collect themselves. Have you
noticed this phenomenon with free software?

~~~
alkonaut
I don’t mind analytics/telemetry if done right, just don’t use my CPU when I
need it...

As for free software: game titles are rarely free, and most don't even run on
a free OS, so using Linux (or not switching on the gaming machine) solves the
problem of CPU use only by making me unable to run the app I want. It's just
not a very good solution.

------
rocky1138
Great article! The only thing I wish was explained a bit better was the 2 TiB
CFG memory reservation. What's that for, again?

~~~
brucedawson
Basically, one byte of CFG memory "controls access" to 64-bytes of executable
memory - indicating which addresses are valid indirect branch targets. With
appropriate compiler and OS support this can help stop some exploits.

Unfortunately it quickly gets really complicated so a bit of hand waving is
necessary.
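The 2 TiB figure falls straight out of that ratio. Assuming the 128 TiB x64 user-mode address space, one CFG byte per 64 bytes of code gives, back-of-the-envelope:

```python
TiB = 2 ** 40

user_va_space = 128 * TiB     # x64 Windows user-mode virtual address space
code_bytes_per_cfg_byte = 64  # one CFG byte describes 64 bytes of code

cfg_reservation = user_va_space // code_bytes_per_cfg_byte
print(cfg_reservation // TiB, "TiB")  # -> 2 TiB
```

The reservation is address space only; pages of the bitmap are committed lazily, which is why allocating lots of scattered executable memory is what makes it expensive to scan.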

------
reitanqild
> For some reason most people see either no symptoms or much milder symptoms
> than I do.

This seems to be right for "most people". But there are definitely a few
people who are annoyed by issues like this but aren't in a position to
troubleshoot and report them.

Even I, after helping people with computers since the mid-nineties, still
can't troubleshoot like that. I fall back to latency checking and turning off
services one by one, combined with a fair amount of experience plus googling.

------
yason
Interesting story in particular, but in general, performance and memory behave
like any other resource that is plentiful: they get used up until things are
slowish again. Like road space. Things get added on top of each other until
the reduction in speed becomes visible.

Because these days, unlike in the 90's, it's no use waiting for the next
Pentium processor to come out, this usually results in a heavy optimisation
cycle in the underlying engine. For example, Firefox has advertised a speed-up
several times in the last 10 years, each coming from a focused effort to
rewrite or optimise the JS engine, the rendering engine, or something else.
Then things accumulate again, until they are too slow.

Obviously the cycles aren't wasted, since you can do things with a few lines
of high-level script that would have taken months to implement in the "good
old times". But this development inevitably creates bizarre flashbacks where,
occasionally, you're doing something simple like typing text on today's
monster machines and it takes a few seconds for the screen to catch up with
your typing.

When I started programming, things were more or less instant. Typing was
instant. There was a very short code path from handling the interrupt to
updating the screen. Computers reacted more like physical apparatus.
Terminals at shops, warehouses, and hospitals, with text-based programs for
updating data, were pretty much instant too. Good clerks could bang their
keyboard through multiple subscreens of their program in seconds, and you were
checked in, goods reserved, or patient data updated. Later, when multitasking
hit the mainstream, flipping between programs was instant. (Not on Windows,
though.) Things like I/O took their time, of course, but at one point there
was a peak moment when a software machine had nearly as low latencies as a
physical machine.

From those times things have mostly gone downhill. Yes, we have immense
processing power and near-endless fast memory and disk storage. You can
switch between browser tabs of 300MB each pretty fast, but there is a
constant, nagging sense of slowness present all the time. Maybe switching
that tab or bringing up another program sometimes takes surprisingly long, or
there is something simple that just doesn't happen right away. The feeling of
instant responsiveness is broken into shards: you can still see a reflection
of it if you happen to look at the right angle, but mostly its crumbs point
elsewhere.

Things like BeOS tried to reach back to that, with varying but acknowledgeable
success. But real success in terms of popularity and market penetration seems
to come from piling up stuff until things get too slow. So I doubt we'll ever
get back to the old world of instant response, even though processing power
keeps climbing.

~~~
rothron
The move from CRT to LCD adds a few milliseconds, depending on the screen.
We're so used to it by now that typing on old hardware is almost jarring. It
feels almost TOO responsive.

~~~
oblio
Latency is not everything. Moving from CRT to LCD probably saved my eyesight
from degrading 10 years too soon.

~~~
rothron
It's sure nice to not have your corneas constantly assaulted by charged dust
particles.

------
avip
On a completely different matter... I have just switched to Gmail's new UI,
and now my Mac Mail keeps hanging on threaded emails while consuming very
high CPU. Has anyone else encountered this issue?

~~~
reitanqild
What browser?

For the last couple of years it seems I'll get some issues on some Google
properties if I dare use a different browser than Chrome.

For a while it was search results doing interval training on my CPU, while the
page was supposedly idle.

Last it got to the point where I started troubleshooting it was calendar that
acted up.

You'd think a company like Google had the resources to verify the UI across
at least the 3 or 4 biggest browsers on the three biggest desktop platforms,
but it doesn't seem like it.

~~~
close04
> You'd think a company like Google had the resources to verify the UI across
> at least the 3 or 4 biggest browsers

That's assuming they ever want to do that. I think I've read multiple
discussions in the past about how Google optimizes exclusively for Chrome
while hurting the performance and compatibility of any other browser. Which
is why Chrome is now basically called the new IE6.

------
joering2
Every other paragraph, the ad box tries to run a full-screen video with audio
unmuted by default. I got a headache about halfway through the article and
then gave up reading. From the author's work, it doesn't seem this particular
blogger needs my two cents from the ads he runs to live a decent life. It's
sad what the state of the internet looks like these days ;(

~~~
scrollaway
If you hate ads, and you hate the state of the internet, run adblock.

[https://chrome.google.com/webstore/detail/ublock-
origin/cjpa...](https://chrome.google.com/webstore/detail/ublock-
origin/cjpalhdlnbpafiamejdnhcphjbkeiagm?hl=en)

Seriously. Adblock is _the_ way to fight back, to change the economics of the
internet. Install it, use it indiscriminately, and make sure you tell others
to use it.

My company makes a decent chunk of money from ads, btw. Not because we can't
make money in another, better and cleaner way, but because it makes no sense
to leave money on the table when adblock rates are so low.

~~~
ax0ar
That's not an option for mobile though..

~~~
O_H_E
Firefox has extensions on mobile; plus, ads make websites heavier and suck
more bandwidth.

_____________________________

Says the person with +500 tabs on chrome android

~~~
themodelplumber
+500. Whoa. I hit the :D face and sweep all my open tabs into Pocket.

~~~
O_H_E
From chrome???

~~~
themodelplumber
Sure, it's right there as the sharing method

------
ndh2
Hi Bruce. This is a fantastic article, but I needed to read it twice to
understand everything. It's very dense in information, and sometimes the order
of things makes it hard to follow. Sometimes you tell us what you did (e.g.
modified the virtual memory scanner) before telling us why (what CFG is, how
it works), which was confusing.

> _It turns out that reproducing the slow scanning from the sampling data was
> quite easy._

This was the first thing that went over my head. Going from one stack trace
to reproducing it is quite the jump. Maybe add a sentence: "The interesting
part of this trace is NtQueryVirtualMemory, which is used to scan process
memory." It might be obvious to you, but for me that trace was "just a bunch
of Windows stuff" at first.

~~~
brucedawson
Thanks for the feedback.

The full investigation covered about two weeks and I was having trouble
condensing the story into a single post. I appreciate having a particular
problematic transition pointed out. Fixed.

------
dasmoth
_Our IT department was running regular WMI scans of our computers_

This seems like a big part of the problem.

While I recognise that IT departments at Google-sized companies do have some
extra worries, I do feel that we’re gradually losing the “personal” from PC,
and that seems pretty unfortunate.

------
bonestamp2
Side note... not related to the same root cause, but to a similar end-user
problem. I've been staying in a lot of hotels lately, using a lot of very
slow internet connections (not by choice; that's just what is available), and
I wish Chrome had some sort of "low bandwidth" mode where it would only allow
requests from the current tab, or maybe the current window (so you could
easily open a new window to control which tabs may use bandwidth). That would
make a big difference for us road warriors, and for nearly everyone when
they're on vacation.

~~~
saltcured
Ironically, I think people on slow connections were early pioneers of using
tabs for asynchronous/background page loading.

I don't remember where I learned the technique first, but when I was roaming
the world and getting work done via GPRS internet, I'd open links in a new tab
and let them slowly load in the background. I'd view the tab later and see the
fully rendered content. It would absolutely kill this use case if the tabs
would not perform their necessary AJAX calls until foregrounded.

For those not familiar with GPRS of those times, imagine a 9600-14.4 modem
speed but with the latency of a satellite connection, so with almost any
activity you start seeing 1 second ping times. Even when on ridiculously fast
connections, I still retain the muscle memory which causes me to browse in
breadth-first mode, opening articles and discussions into tabs in one sweep
and then working through the tab list from left to right and closing each as I
go.

~~~
bonestamp2
Yes, that's funny. I guess for me it's when I open my browser to do something
quickly, but can't do it quickly because I have to wait for all those tabs to
load first.

------
_pmf_
I'm using Sylpheed, the same e-mail program I used 16 years ago. I highly
recommend it.

------
milankragujevic
I don't want to bash Windows 10, but it seems interesting that for me, on a
laptop with 4GB of RAM, Windows 10 is extremely laggy, stuttering, and acting
"dumb" (an app loads an empty window, then I have to wait for it to "Not
responding" its way back to reality, then it loads fully and works). I know,
I know, RAM is the problem, upgrade it, blah blah, get a new PC, etc. But
Windows 8.1 works well enough, and presumably Windows 7 would work even
better if it were willing to boot off a UEFI bootloader... Ah, I'm literally
going to have to switch to some Linux distro if I want to keep using this
machine when Win 8.1 support ends... :/

------
vectorEQ
I love this article; it does show some of the bloat of modern operating
systems etc., but really that bloat can be controlled and tamed. This kind of
technical 'solution' to one problem, which then causes other subtle issues
and eats away at our resources, is a big issue in a lot of modern systems.
There are so many 'hidden' or sneaky mechanisms that can have bad effects on
other programs if they aren't well understood.

------
titzer
Just wanted to say publicly: awesome job root-causing this, Bruce! Your
dedication to the deep-dive is unparalleled.

------
brucedawson
Part two is here: [https://randomascii.wordpress.com/2018/08/22/24-core-cpu-
and...](https://randomascii.wordpress.com/2018/08/22/24-core-cpu-and-i-cant-
type-an-email-part-two/)

------
jwilk
Previous post:
[https://news.ycombinator.com/item?id=14733829](https://news.ycombinator.com/item?id=14733829)

------
agumonkey
Superb article. Call all your colleagues and make a book with all your stories
:)

------
rusk
For me what’s unforgivable is not being able to bring up task manager ...

------
gabrielcsapo
Mo memory, mo problems.

------
KevanM
Wait until this person puts a CD or DVD in the drive.

------
some_account
I'm running Manjaro Linux with the Deepin desktop on a dual-core 0.9GHz CPU
with 8GB of memory.

Boot speed from off to login is less than 5 seconds. From hibernate it's
instant.

My girlfriend brought a similar machine to work and people with Windows
couldn't believe it was that fast. It runs circles around their new Windows
laptops with tons of memory and CPU cores.

Firefox loads in a second. Kingsoft office also loads in less than a second.

I think it's time people started to understand what is out there now as
options for their laptops. In particular, the Deepin desktop is very fast and
beautiful.

~~~
sadamznintern
0.9GHz in 2018!? What machine is this?

~~~
bcaa7f3a8bbc
Intel Core M?

A SoC-like, high-performance, low-power mobile CPU. Its performance-per-MHz
is high, allowing reasonable performance at a low clock frequency.

~~~
some_account
Yep, it is. Also, the computer doesn't have fans, so it's dead quiet at all
times. It's an Asus ZenBook UX305.

------
dingo_bat
If I'd been tasked with debugging that, I'd have given up halfway and gone on
a vacation.

------
the_cat_kittles
Debugging tours de force should be a genre unto themselves. This one is a
classic.

------
Animats
This never happens with Thunderbird or K9 Mail.

~~~
verbatim
Actually, I run Thunderbird on Linux and have frequent issues with it pausing
and hanging on me. (Among other issues related to it seemingly just having
trouble handling large amounts of email -- but that's not important to dive
into here.)

Let's not act like other complex software isn't also likely to end up with
similar issues.

------
megamindbrian2
Malwarebytes and McAfee cause this. Something to do with updating policies.

~~~
jkaplowitz
Those do cause a lot of problems in general, but this particular issue had a
different root cause. He traces it all the way back from the symptoms in the
article. Worth a read!

~~~
rasz
Actually, the reason might be the same: those programs also scan the
allocated memory space of user programs.

------
DoctorOetker
I have also been noticing this for a while now... There was a Hacker News
post recently about banks and other firms mining behavioral biometric data
(cursor freezing, typing speed, phone angle, and probably a load of other
metrics, like how often you backspace and pause to reflect on what you write
or rewrite) for authentication purposes...

Perhaps Google is doing this too?

------
emersonrsantos
“I work on Chrome, on Windows”

...

    
    
      chrome_child.dll (stack base)
      KernelBase.dll!VirtualAlloc
      ntoskrnl.exe!MiCommitVadCfgBits
      ntoskrnl.exe!MiPopulateCfgBitMap
      ntoskrnl.exe!ExAcquirePushLockExclusiveEx
      ntoskrnl.exe!KeWaitForSingleObject (stack leaf)
    

So don’t use a proprietary kernel that performs poorly?

~~~
brucedawson
I'm confused about your suggestion. I work on the Chrome web browser on the
Windows platform. If we want to allocate memory we, ultimately, need to call
VirtualAlloc.

I could work on Chrome for Linux, but then this bug on Windows might never get
found or fixed. I am satisfied with my choices, despite the occasional
glitches.

~~~
dsr_
They failed to distinguish between

"I use the Chrome browser on Windows to get work done"

and

"I work at Google, making Chrome for Windows".

Human languages, what are you going to do?

~~~
brucedawson
Good point. How's this:

I modify the source code of the Chrome browser in order to make the final PE
files that are compiled from the aforementioned source code work more
efficiently.

~~~
ReverseCold
Or how about

"I work at Google, making Chrome for Windows".

