
VS Code uses 13% CPU when idle due to blinking cursor rendering - Kristine1975
https://github.com/Microsoft/vscode/issues/22900
======
artursapek
I'm reminded of this classic:
[https://github.com/npm/npm/issues/11283](https://github.com/npm/npm/issues/11283)

NPM had a progress bar that was so fancy that it slowed down installation time
(basically its only job) by ~50%. Hilarious.

My mantra here is, if you find yourself thinking about implementing a fancy
loading spinner/progress bar, it would be more productive to just spend that
time making it unnecessary - speed up your shit! Obviously that doesn't apply
to VS Code's cursor.

~~~
Touche
I think I don't understand the issue well enough. This looks like a standard
blinking cursor to me. Users expect a blinking cursor in an editable text
field.

I'm not sure why this implementation is slow or why they needed to implement
it themselves and not let the OS handle the blinking cursor. I'm guessing
there must be some reason.

~~~
paulirish
Powerful* text editors built on the web stack cannot rely on the OS text caret
and have to provide their own.

In this case, VSCode is probably using the most reasonable approach to
blinking a cursor: a `step` timing-function with a CSS keyframe animation.
This tells the browser to only change the opacity every 500ms. Meanwhile,
Chrome hasn't completely optimised this yet, hence
[http://crbug.com/361587](http://crbug.com/361587).
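As a sketch of the approach paulirish describes (the class and keyframe names here are made up, not VSCode's actual stylesheet), a step-timed blink looks like:

```css
/* Hypothetical caret blink. With step-end timing, each keyframe
   segment holds its starting value until the segment ends, so the
   opacity only changes every 500ms instead of being interpolated
   every frame. */
.caret {
  animation: caret-blink 1s step-end infinite;
}

@keyframes caret-blink {
  0%, 100% { opacity: 1; }  /* visible for the first half second */
  50%      { opacity: 0; }  /* hidden for the second half second */
}
```

Whether the browser actually deschedules rendering between those steps is exactly the Chromium optimisation discussed in the linked bug.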

So currently, Chrome is doing the full rendering lifecycle (style, paint,
layers) every 16ms when it should be only doing that work at a 500ms interval.
I'm confident that the engineers working on Chrome's style components can sort
this out, but it'll take a little bit of work. I think the added visibility on
this topic will likely escalate the priority of the fix. :)

* Simple text editors, and basic ones built on [contenteditable] can, but those rarely scale to the feature set most want.

(I work on the Chrome team, though not on the rendering engine)

~~~
Mister_Snuggles
> Powerful* text editors built on the web stack cannot rely on the OS text
> caret and have to provide their own.

Is there any reason that Electron couldn't provide an API that would expose
the system caret in an OS-agnostic manner? Windows, for example, has an API[0]
that can arbitrarily show the caret at a given point in the window. Sounds
like something that would be useful to many apps and not get in the way for
those that don't need it.

[0] [https://msdn.microsoft.com/en-us/library/windows/desktop/ms648398\(v=vs.85\).aspx](https://msdn.microsoft.com/en-us/library/windows/desktop/ms648398\(v=vs.85\).aspx)

~~~
mohaine
Probably not easily. Remember, this is what Java AWT did, and it was a complete
mess. Write once, debug everywhere.

My favorite issue was that on one OS (Windows, I think) a panel would only be
visible if pixel 0,0 was on the screen and nothing was on top of it. The panel
could be 99% visible but not be shown at all if the upper left corner was
under another panel.

------
Philipp__
I just can't get my head around the fact that we are building text editors
inside a web browser! I get it, there are many good use cases for Electron and
it's easy to get started with cross-platform support, but why is everybody
going crazy about text editors in them? Because you can write plugins in JS?

Wouldn't it be better to make a native application, especially for code
editors, where developers spend most of their time and where every noticeable
lag and glitch is unwelcome?

Edit: Many people here think that I am attacking this web-based kind of
technology, which I am not, and sorry for not being clear enough, but why
choose something so high up the stack for a dev tool?

Edit2: For non-believers in nested comments, look ->
[https://github.com/jhallen/joes-sandbox/tree/master/editor-perf](https://github.com/jhallen/joes-sandbox/tree/master/editor-perf)

~~~
dheera
Yep. I also can't wrap my head around the fact that we are now constructing
buttons, drop-down boxes, tagged text boxes using dozens of nested <div>
layers instead of a native widget that writes directly to the screen. My 486
rendered UIs with nearly imperceptible lag. Google Docs takes a good 2-3
seconds to spin up a UI on my i7.

~~~
cr0sh
You both do realize that similar arguments could have been made back in the
day when moving from, say, command-line DOS applications to Windows API
applications - right?

Ultimately, computing has always been about abstraction from the lower
"layers". Taken far enough, one could spuriously argue that if you aren't
soldering together the flip-flops that make up your logic and memory, you just
aren't being efficient...

~~~
thechao
Except that the abstraction layer _for the user_ has remained the same.

The extra abstraction layers you're talking about are invisible to the user...
while our GUIs are slower, and our processors faster. It feels like, after 30
years, we should be able to have our GUI cake and eat it, too.

~~~
JustSomeNobody
Where are the Michael Abrashes of today, teaching people how to write tight,
fast code? Seems a lost art...

Yes, I know, he's still around...

~~~
desertrider
Luckily there are a few people that still care, just look at the response
Handmade Hero has gotten.

~~~
swah
Those videos on data oriented design were also very interesting:
[https://github.com/taylor001/data-oriented-design](https://github.com/taylor001/data-oriented-design)

The thing is, using C++ instead of React for mobile development of a simple
application would probably make me miss deadlines... So we just stick to what's
popular.

~~~
flukus
Building UIs in something like Qt, GTK or Swing was never really that time
consuming, especially given the limited number of controls on one screen in a
mobile app.

------
aphextron
I simply cannot understand why Atom and VSCode are so popular. I get that they
are extensible, but is that really worth the slowdown to you? If I need more
features than a text editor, I use an actual IDE.

Someone just posted some really embarrassing benchmark numbers regarding this
issue yesterday:
[https://github.com/jhallen/joes-sandbox/tree/master/editor-perf](https://github.com/jhallen/joes-sandbox/tree/master/editor-perf)

Note that Atom and VSCode are nearly 10x slower than all the other
competition, as well as simply crashing for many of the tests. To be fair, I
do think Electron based desktop apps have their niche. Spotify is a perfect
example. But they have no place in text editing.

~~~
nojvek
I don't know why you're saying they are slow. vscode starts up pretty fast.
Sure, it consumes more resources than vim. However, having code completion,
debugging, linting, and a bunch of IDE-like features is super useful, as is
the fact that it's cross-platform and open source.

It's very hackable. Just last night I fixed an issue that had been bugging me
for a while.

Pages with ads use a lot more of my CPU so I'm not really worried.

This looks like a Chrome problem more than a VSCode one. I do know that they
take perf very seriously, and this will be given some attention.

~~~
juandazapata
100% of vim users will tell you that their vim setups also have code
completion, debugging, linting, etc...

~~~
Spivak
Sure, but I won't pretend that adding those features doesn't add some
significant resource usage and some occasional slowdown.

~~~
temp
>doesn't add some significant resource usage

If by "significant" you mean less resources than opening a blank tab in Atom.

------
ChuckMcM
I love the irony of simulating an XOR gate from a piece of hardware (a serial
terminal) with a billion gates in a processor which renders a square with an
alpha blend function. Sort of like using a 787 to sit on the runway, and run
its engines to blow a windmill to crank a butter churn :-).

~~~
dahart
That's a fun analogy, made me laugh. :)

I see this literally everywhere though, the article at hand is only a
_slightly_ better example than almost everything we do. Browsers consume
gigabytes of memory to render a few basic web pages. We use high level
scripting languages with tons of dynamic memory inside containers that are
running on VMs to run all our cloud infrastructure. If we had the time, these
things could be multiple orders of magnitude faster and smaller. It just isn't
worth our time... :P

Another fun example of this I ran into recently is the controllers for
brushless drone propellers. The hobby motors you buy for $20 usually have a
1 MHz 8-bit CPU running electronic speed control, literally shrink-wrapped
inside the wires. Every single prop. Think about that: a million instructions
per second, running only to make something spin. (To be fair, the CPUs are
under-utilized, but still, it reminds me of churning butter with a jet plane.)

The main difference, of course, is that cpu time & memory are close to free,
and 787's are super expensive. Maybe if 787's were free, we'd use 'em often to
churn butter... ;)

~~~
eeZah7Ux
> cpu time & memory are close to free

Not for 3 billion people. Also energy is not free at all.

~~~
dahart
Right, yes, I know, and it's a great point. Global economics and third-world
access to computing & the internet are in a completely different time zone
from what I was talking about.

But you're right, and on the global scale, we may actually be doing the
equivalent of churning butter with jets. I totally wouldn't be surprised if
the sum total energy expenditure on all computers in the world was greater
than on all the aircraft in the world... and we are most definitely wasting
the vast majority of the energy we use on computation.

Still, in my defense, I said close to free, and compared to the cost of a 787,
cpu time & memory are closer to free than jets, no matter who we're talking
about, right?

------
camgunz
The answer to all this is: "use Qt".

"But camgunz, I only know JavaScript"

That's cool! Look into QML.

"But camgunz, I only know X"

That's cool too! Qt4 has a truly ludicrous number of language bindings:
[https://en.wikipedia.org/wiki/List_of_language_bindings_for_...](https://en.wikipedia.org/wiki/List_of_language_bindings_for_Qt_4).
Qt5 has a fair number too:
[https://wiki.qt.io/Language_Bindings](https://wiki.qt.io/Language_Bindings).

"But camgunz, I need an embedded browser"

I agree, separate windows are for savages. Qt has you covered with Qt
WebEngine.

"I need a native look and feel across all platforms"

Well, that's a pipe dream. BUT, you can get closer with Qt than anything else.
Google for some screenshots.

"I need a bananas style but I don't want to write any code"

Qt supports CSS-like stylesheets!

---

The web isn't a good application platform. Sure, we could spend (and have been
spending) billions of engineering hours and years to get it up to speed with
exactly what we have now, but that's obviously a bad idea. We can figure out
zero-install and sandboxing without shoehorning everything into JS, weird APIs
like localStorage and WebRTC, and the DOM. We just don't need to.

~~~
demarq
but but camgunz isn't the commercial license like a new house, three kidneys,
and your cat?

3,540.00 USD to be exact :P ... each year :P

~~~
jhasse
You don't need the commercial license to use Qt commercially.

------
okket
TL;DR: It seems to be a problem in Chromium: the relevant CSS animation (a
once-per-second change) results in a 60 Hz render cycle.

Workaround:

    "editor.cursorBlinking": "solid"

~~~
artursapek
I recently discovered the same issue in a webapp I develop... a simple CSS
animation for a "loading" spinner was pegging the CPU. Using steps(n) with a
low number basically resolves the issue.
[https://css-tricks.com/snippets/css/keyframe-animation-syntax/#article-header-id-6](https://css-tricks.com/snippets/css/keyframe-animation-syntax/#article-header-id-6)

Kind of ridiculous that it's so easy to make this mistake.
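For a spinner, the fix artursapek mentions might look like this (a hedged sketch with made-up class names; the exact step count is a tradeoff, not anything from the linked article):

```css
/* Hypothetical loading spinner. Replacing a linear timing function
   with steps(12) means the rotation is recomputed 12 times per
   revolution rather than every frame, at the cost of a visibly
   stepped motion. */
.spinner {
  animation: spin 1s steps(12) infinite;
}

@keyframes spin {
  from { transform: rotate(0deg); }
  to   { transform: rotate(360deg); }
}
```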

~~~
gigatexal
With great power (flexibility of JavaScript) comes great responsibility.

------
mmarks
13% CPU usage at the lowest c-state is also very different from 13% at an
elevated c-state. I've recently spent a lot of time analyzing
c-states/p-states and the power management modes of the GPU. After learning
more about the complexity behind the clocks, bus speeds, etc. underlying each
state, whenever I hear someone quote a utilization number for a minor
workload, I want to know at what power state.

Not to take away from the author's point, just an aside that utilization
numbers can be a lot more complicated when there are dozens of energy states,
and the utilization might be utilization at a particular state rather than
utilization at maximum power.

~~~
aarongolliver
You mean p-states in your first sentence, right? Anything but c0 represents
different levels of 'retiring no instructions' (totally idle). The rest of
your comment seems accurate though.

~~~
mmarks
You're correct that I just meant a lower power state.

The author mentioned the blinking cursor, so it reminded me of graphics
issues. A more efficient CPU state has the possibility of slowing an app due
to CPU-GPU sync points. A blocked CPU in an energy-efficient state can reawaken
more slowly from GPU-done notifications, so FPS is lower. So both c-states and
p-states can affect performance. The general point was just that utilization
may not be utilization at max power.

I've worked on problems where utilization was 15% at lower power and it was a
problem. But to compare different workloads, it'd be < 1% at max power.

------
tomc1985
The amount of bad hacking that has to happen for NodeJS to work as a platform
says all I could ever want to know about the quality of NodeJS developers: so
pathetically in love with their trainwreck of a language that they would
rather pile kludge upon hack upon kludge than learn the language and
environment _most appropriate_ and _most computationally efficient_ for the
tasks at hand. Javascript devs would rather just throw JS at it.

~~~
oblio
This arrogance is amusing. The only thing the web and Javascript world proved
is that we are bad. In the past it was much harder to distribute crap since
you needed to, you know, find and install everything. Since websites are so
frictionless, now we get to experience everything.

And guess what, 90% of everything is shit.

The current "JS devs" are the former "PHP/Java devs" and the former "VB/Delphi
devs" and the former "C/Cobol devs".

They're us.

~~~
tomc1985
My problem isn't with the bad JS (or, in the past, PHP/Java and so on) devs
themselves, it's that they insist on building an ecosystem out of a really bad
platform. Why? Because JS and this "leverage existing skillsets" bullshit. How
about leveraging your brain to learn a more appropriate language?

~~~
oblio
Do you know of a "more appropriate" language for creating cross platform apps
that work on Windows, Linux, MacOS, BSDs, iOS, Android, 4k screens and 480p
screens?

I don't know any.

~~~
tomc1985
C? C++? Qt? Java? Any language with GTK hooks? Xamarin? The JUCE framework?
Delphi??

You say 4K and 480p screens like that's hard. Design once, scale forever?

~~~
oblio
C is unsafe and is too low level.

C++ is possibly the only language with more bad parts than Javascript :)

Java ok, but how would you run it on iOS? The last viable option for that
(RoboVM) was taken behind the shed and shot by Xamarin/Microsoft.

GTK doesn't run on mobile and it's barely supported on Windows and MacOS.

Xamarin is ok, but it used to be closed source and cost $1000 per year for any
serious project.

I don't know Juce, but from what I can see it's a C++ framework, so see C++ :)

Delphi? Zombies don't count ;)

The most viable contenders for modern cross platform software were marred by
bad corporate ownership: Java, C#. They've kind of gotten back on track
recently but I'm not sure they can catch up to the web-train.

------
mschaef
This reminds me that as part of the Windows 95 development effort, Microsoft
disabled per-second updates of the taskbar clock to improve performance.
Raymond Chen wrote a bit about it back in 2003:

[https://blogs.msdn.microsoft.com/oldnewthing/20031010-00/?p=...](https://blogs.msdn.microsoft.com/oldnewthing/20031010-00/?p=42203)

~~~
nihonium
On the other hand, iPhone's "Clock" app has an icon that shows the correct
time with a super smooth seconds indicator. Which doesn't drain the battery.

~~~
voxic11
How fast does it run on 1995 hardware though?

~~~
jbmorgado
Pretty sure any _sane_ software implementation of a simple seconds hand would
easily update at 1 Hz or faster while using negligible CPU time on a desktop
processor from the 90's onwards.

~~~
mschaef
The Windows 1.0 clock (1985) had a 1 Hz second hand that ran on a 4.77 MHz
8088. It's really just a couple of clipped line segments redrawn each second.

[http://variableghz.com/wp-content/uploads/2012/12/windows-1.0-clock-app.jpg](http://variableghz.com/wp-content/uploads/2012/12/windows-1.0-clock-app.jpg)

~~~
jbmorgado
Yes, that was the point I was making. This is something computationally
trivial as long as the code is written appropriately.

~~~
mschaef
Keep in mind, though, that the performance optimization the Windows article
talks about is more about reducing the working set in memory than about CPU
consumption.

------
adamnemecek
...21st century desktop application development ladies and gentlemen (not that
I'm proposing anything constructive).

~~~
analognoise
Throw it in the bin and return to the old ways, but with modern testing,
source control, and static analysis?

------
c-smile
In the GDI era, the caret bar (that's not a _cursor_, sic!) was rendered by
simply inverting pixels (an InvertRect() call) in place in the video frame
buffer. A very cheap operation that requires no redrawing and no other code to
run.

With a GPU, the only viable option for rendering a blinking caret is to redraw
the whole window. That's why it takes so much CPU, as Chrome mostly uses a
CPU-based rasterizer.

But redrawing the whole window on the GPU is not that bad if the renderer is
capable of caching GPU objects while rendering.

Here is what happens in Sciter ([https://sciter.com](https://sciter.com))
while rendering a blinking caret on a screen of similar complexity (an editor
with syntax highlighting):

[https://sciter.com/images/sciter-caret-cpu-consumption.png](https://sciter.com/images/sciter-caret-cpu-consumption.png)

As you can see, CPU consumption is near zero.

~~~
jblow
"With the GPU the only viable option for rendering blinking caret is to redraw
the whole window."

Sorry, that is plainly false. There is nothing preventing you from treating an
offscreen buffer just like any other buffer of non-dirty pixels. Treating the
back buffer that way is slightly less conventional but is still just fine.

~~~
c-smile
> There is nothing preventing you from treating an offscreen buffer just like
> any other buffer of non-dirty pixels.

You need:

1. The ability to invert pixels from CSS/JS. No such feature exists, in
principle, for many reasons.

2. Even if you were able to invert those pixels in the offscreen bitmap, you
would need to send that window's offscreen bitmap to the GPU on each caret
blink. You can use tiles - doing a partial CPU->GPU data transfer - but still.

3. If you use an offscreen buffer, you almost always use CPU rasterization.
CPU rasterization is an O(N) operation, where N is the number of pixels.

On high-DPI monitors (200dpi...300dpi) the number of pixels is 4x...9x larger
than on "standard" 96dpi monitors, while CPUs have stayed roughly the same
over the last 4-6 years. So if you want your app to run on modern hardware,
the GPU is the only viable option for rendering - forget about offscreen
bitmaps and the like.

~~~
jblow
None of the 3 things you said are true. I recommend you get some experience in
rendering before you mislead people too much with these kinds of comments.

In reality the problem is trivial, you set up a scissor rect (or explicitly
mask the pixels in your shader) and then render only stuff overlapping that
square. You don't need to invert the pixels for it to be fast; you can render
an arbitrarily nice cursor effect.

~~~
c-smile
I am not sure I understand why you need clipping at all to render a
_rectangular caret_ bar (note: a _cursor_ is a different entity in UI
professional jargon).

What exactly do you want to be clipped out?

~~~
jblow
I am assuming that your caret bar may be overlapping text in some way, or that
there is a background bitmap that you might be alpha-blending against, etc.
Basically I don't want to make an assumption that might break if the UI gets
nicer. The case of a strictly opaque, strictly rectangular, non-antialiased,
non-smoothly-moving bar does not seem very interesting or nice-looking.

~~~
c-smile
Are you speaking about some particular implementation, or is this all just
from your imagination?

~~~
jblow
You are talking to someone who has done 3D rendering professionally for 21
years. What's your background?

------
Mahn
Sounds like this can be fixed by moving away from animating opacity to
something more rudimentary, like flipping the display property in JS. CSS3
animations, in my experience, are a bit taxing in general, and something you
want to use only in small bursts, not constantly running in the background.

~~~
SippinLean
That would, but right now the cursor _fades_ in and out, it doesn't "blink"
exactly, so this suggestion is not a 1:1 replacement.

Sounds like they need to optimize their CSS animations in general.

------
eranation
This is where Bert Bos (who jokingly created the notorious, unofficial, abused
blink tag [https://www.w3.org/Style/HTML40-plus-blink.dtd](https://www.w3.org/Style/HTML40-plus-blink.dtd)) should have a
little "told you someone would need it someday" moment ;)

------
Skywing
Ah yes. The classic text editor debate thread. In this thread you can expect
to find folks claiming that 2 seconds of startup time for editors, like Atom,
is so disruptive to their workflow that they'd rather use notepad.

~~~
SippinLean
I've switched to VSCode as my go-to text editor, and you understate the real
impact the poor performance has on my workflow. It's not just startup; almost
everything has a 250ms-2s delay. It's not a lot each time, but it adds up and
is frustrating.

~~~
equasar
VSCode's best use is not plain text editing; its main target is to work as a
small version of a full-blown IDE. Use the right tool for the job.

------
kibwen
Not to defend Electron (I use neither VSCode nor Atom), but meanwhile I'm over
here watching the JVM consume a constant 10% of my CPU thanks to having a
single 60-line file open in the official Arduino editor (which isn't even an
IDE, it's a glorified GUI for compiler flags with a built-in syntax
highlighter).

------
ohitsdom
The Chromium bug describes the root cause, which is a fixed schedule interval
for CSS animations:

> The JS implementation uses an interval of 500ms between updates while native
> animations will be updating at 60Hz. At the moment we're not smart enough to
> deschedule ourselves during animations with step timing functions.

[https://bugs.chromium.org/p/chromium/issues/detail?id=500259](https://bugs.chromium.org/p/chromium/issues/detail?id=500259)

~~~
petters
The fact that animating a cursor at 60Hz requires non-negligible CPU is still
a little sad.

------
logicallee
You know how there's like, hard real time embedded programming where if you
don't hit your realtime deadline every time without exception forever, your
engine explodes or something?

Well, apparently the web is built on whatever the opposite of that style of
programming is.

~~~
jodrellblank
_whatever the opposite of that style of programming is._

Is it "fun"? I bet it's fun.

Or easy. Or convenient. Or low effort. Or relaxing. Or accessible. Or cheap.
Or quick.

~~~
redial
Or stubborn.

------
davidascher
Reminds me of this ancient Firefox bug about the performance cost of the
throbber:
[https://bugzilla.mozilla.org/show_bug.cgi?id=437829](https://bugzilla.mozilla.org/show_bug.cgi?id=437829)

------
agumonkey
I hope y'all never have the chance to boot Turbo Pascal 7.0 (DOS) on a P5-class
CPU. It's pretty sad. I mean, a 700KB IDE with a decent language compiling to
native code, some form of live checking, instantaneous compilation times,
modular programming, online help, multiple windows and a cult-classic color
scheme. It hurts.

------
makecheck
Developers often work on extremely fast machines for their own
productivity/sanity but this is a reminder that it can help to test on a
slower setup from time to time. Or more generally, test edge cases.

For instance, if you have a graphics tool that briefly flashes screen updates,
the problems are much easier to see. On a fast system, you might not only miss
an unnecessary refresh of “everything”, you may miss a _repeated_ refresh of
the same content.

Also, on slower systems, the cost to generate a frame may delay an entire
sequence. Consider something like “live resizing”: on your spiffy machine it
seems fluid, on a lesser machine it might be stuttering like crazy. Sometimes
you have to cheapen the computations occurring _during_ rendering to make sure
it’s OK.

~~~
JustSomeNobody
This ... isn't an edge case.

------
abrkn
I recently switched from Atom to VS Code and have noticed a considerable
reduction in lag, especially when using CMD+D and searching through project
files.

~~~
lotsoflumens
Same here - the lag with just doing simple things in Atom, like moving the
cursor, became too much.

Since I made the switch I've found VS Code to be quite nice. I miss having
Hydrogen available but I can always run jupyter notebook if I need something
like that.

------
Animats
I had something like this happen with QNX, back in 2004 when we were using it
for the DARPA Grand Challenge. We had an industrial x86 computer system that
was running headless, with no display. But the device had a minimal VGA
controller on the motherboard. So QNX brought up a screen saver on the slow
VGA controller, where reading from display memory was very slow. The screen
saver was reading from display memory to move the screen saver box around.
This used up about 15% of the CPU.

The QNX people were really embarrassed about that and fixed the screen saver.
We just reconfigured to turn off the display entirely.

------
jankotek
It is easy to do screen redrawing wrong. IntelliJ IDEA had a similar problem:

[https://blog.jetbrains.com/idea/2015/08/experimental-zero-latency-typing-in-intellij-idea-15-eap/](https://blog.jetbrains.com/idea/2015/08/experimental-zero-latency-typing-in-intellij-idea-15-eap/)

------
bitwize
An internet points out that Hackernews' favorite text editor takes up more CPU
than the average Hackernews would have had at their disposal 20 years ago just
to make the cursor blink on and off. Hackernews circles the wagons to justify
the stupid engineering design decisions behind said editor based solely on the
fact that embedding a complete Web browser just to draw buttons and text
fields "won" over any sensible GUI implementation, and they can't live without
the crutches VSCode provides when shitting out Go microservices.

Many of these same people will argue vehemently that X11, the shitty GUI layer
for Linux that ran perfectly fine 20 years ago, is "slow" and "bloated" and
needs to be replaced with Wayland, a new, completely different, shitty GUI
layer.

~~~
zbuf
Best bit is that X11 runs really well these days -- responsive, fast,
reliable. Emacs runs great; gitk, xterm, xosview, mplayer, the window manager,
even Firefox are alright. Renoise, Maya, Blender too. Not to mention network
transparency when I need it. It's a great environment to get my work done, and
3x hi-res displays are the stuff I dreamt of 15 years ago.

So it's a good job we're about to throw it all out and start again, eh folks?

~~~
ldev
> X11 runs really well these days -- responsive, fast, reliable

Well, you haven't seen Windows then - the graphics stack is phenomenal and a
marvel of engineering. nVidia drivers crash? I only get a second of black
screen and then resume my work. Yep, that's right - no other GUI program
crashed, I didn't have to do anything, literally just 1 second of black screen.

Oh and you can have one window on two monitors and both parts of window will
have full vsync - insane, huh? :)

It's scary how good Windows is.

~~~
charrondev
I don't know much about graphics stacks, but in my experience, at least from
the perspective of someone using multiple HiDPI displays, macOS is far and
away the best. Many Windows applications don't scale properly, or if they do,
they require special settings to do so. If you have monitors with different
levels of scaling, you're going to have a terrible time on Windows.

~~~
boterock
On the other hand, connecting a 4K display on Windows will default to
configuring it at 200% scaling; on a Mac, it defaults to rendering everything
as tiny as ants.

~~~
rl3
> _... in mac it defaults to render everything tiny as ants._

I'm a fan of no DPI scaling (100%) at 4K, at least on my 27" monitor. It takes
2-3 months of getting used to, but once your brain and eyes adapt,
significantly lower dot pitches become completely unusable. The only thing I
change is bumping up my terminal or editor's default font size a tad.

That said, I'm not sure how people with 24" 4K monitors do it without DPI
scaling. I'd probably even prefer 30" myself.

~~~
rl3
> _...significantly lower dot pitches become completely unusable._

Correction/clarification: significantly _higher_ * dot pitches, as in lower
pixel density. "Completely unusable" was meant in the sense of how it'd feel
to return to 800x600 after being accustomed to 1080p. 4K is four times 1080p,
so it's roughly comparable.

It wasn't my intention to offend anyone with poor eyesight, or suggest that
people ruin theirs. Just that it's possible to get used to really low (dense)
dot pitches, and once you do it's simultaneously really enjoyable and weird at
the same time.

------
jokoon
This comment section deserves to be posted in some kind of drama-oriented
section, a little like /r/subredditdrama

Personally I'm waiting for the second net bubble to burst, so that we can
force everyone to use a stricter markup language.

Everyone is seeing how Android apps are slowly but surely making HTML
completely obsolete, and honestly that's an awesome thing, because it's really
needed.

I hope the tech market realizes that and evolves quickly, instead of waiting
for some battery breakthrough.

Never forget the Gary Bernhardt video about the birth and death of
JavaScript.

Those issues are why I will always target C++/Java jobs and laugh at anything
related to the "web". I am never short of analogies I can invent about
HTML/JS. It's like comparing sticks and stones to a decent steam engine.

------
tete
This is when I think that the following two minute video should be mandatory
for programmers:

(Grace Hopper - Nanoseconds)
[https://www.youtube.com/watch?v=JEpsKnWZrJ8](https://www.youtube.com/watch?v=JEpsKnWZrJ8)

------
k__
I don't understand all the hate.

Java developers have IntelliJ IDEA, Eclipse and NetBeans, all written in Java.

JavaScript developers have Atom and VSCode. Since JS was created for HTML, it
seems logical to me to use browser tech to build these editors.

It allows JS devs to build extensions without the need to learn a different
language. As Java devs can build Eclipse plugins with Java.

Also, the Electron-based editors aren't the only ones available, so it isn't
as if someone is forcing the poor dumb JS developers to use these clunky slow
tools, like when Slack built their client with Electron and you had to run a
monster app for a simple chat.

If they want something faster, there is Sublime, Vim and Emacs...

------
angrygoat
From 2009 – Much hot air over blinking cursors
[https://lwn.net/Articles/317922/](https://lwn.net/Articles/317922/)

Quote from the article:

> The blinking cursor causes the processor and GPU to be woken up frequently.
> On one of my test systems, this causes somewhere in the region of 2 Watts of
> extra power consumption.

This isn't about Electron vs "native" apps. It's just that a smooth cursor
animation rendered at 60Hz costs power; and if you look at the GitHub issue,
one config setting and that's gone, and so is the power usage when idle.

------
qeternity
Just another day in the Electron universe. My personal favorite is resource
waste of another variety: Slack and WhatsApp using multiple gigs of RAM for
unbelievably basic (which is their selling point) chat apps.

~~~
scott_karana
Admittedly, Slack uses tons of memory because _animated GIFs do_ , plus all
the other embeds that it supports.

    /giphy time to burn up ram from everyone in the channel

------
corford
Hilarious. I can reproduce on my Dell Inspiron 15 5000 (core i7 running
Windows 10) but it's not as pronounced as in the bug report. CPU usage bounces
between 0.5% and 2.5% (which is obviously still ridiculous).

I almost want to swap back to using sublime out of protest but vscode's
tighter git integration/workflow is hard to give up :(

Edit: I've always wondered why vscode couldn't be written in portable
C/C++/Rust/D with an embedded V8 engine to power a JavaScript plugin API (a
bit like Sublime does with Python).

------
saghm
I always turn off cursor blinking everywhere I can because I find it
distracting; I never knew I could use "performance" as an excuse!

On another note, why is cursor blinking such a universal thing? I assume that
other people must like it, or else it wouldn't be so common. Do people have
trouble finding their cursor without it, or do they have trouble
distinguishing it from actual text? I've never had either of those problems
with blinking turned off, but I can't think of any other plausible reason.

~~~
funkyy
I assume it is useful to determine where the cursor is. It is easier to spot
a blinking element than a solid one.

------
DonHopkins
Rik Arends, who worked on the Cloud9 IDE, has done some amazing work
implementing a blazingly fast WebGL-based code editor, with text formatting
and rendering done in JavaScript and GPU shaders. He's actually compiling
JavaScript code into WebGL shaders!

Implementing a WebGL based code editor for JavaScript - Rik Arends - GrunnJS
[1]

How do you render text and update a code editor when it only has
vertexbuffers? What can you do with it? What does an AST editor do to help?
Rik Arends (founder of Ajax.org/Cloud9) will blow your mind with the talk.
Note that this editor is work in progress.

[1]
[https://www.youtube.com/watch?v=hM1oLr9G3-Q](https://www.youtube.com/watch?v=hM1oLr9G3-Q)

Here are some other talks about his work:

Rik Arends: Multi-threaded JavaScript inside Makepad [2]

As a web developer since the early 2000s, Rik has lived through the ups and
downs of browsers all the way from IE3 to Chrome 52. From doing interactive
web projects for big brands to working on Cloud9 IDE, the limitations of HTML
have always been there. After leaving Cloud9 four years ago to explore the new
possibilities that WebGL offers for UI, a continuous stream of WebGL-related
projects have appeared, including a JavaScript trace debugger, a compile-to-
JavaScript language and a live coding prototyping framework. Makepad is the
latest from-scratch iteration of this WebGL direction, and it is exciting to
share the progress and lessons learned.

[2]
[https://www.youtube.com/watch?v=tVTWdFE6-O0](https://www.youtube.com/watch?v=tVTWdFE6-O0)

Rik Arends: Beyond HTML and CSS: Fusing Javascript and shaders | JSConf EU
2014

What would the world look like when you can style UI with actual shader
programs? The web could be 60fps on mobile, and we can start to imagine what
lies beyond HTML and CSS

[3] [https://www.youtube.com/watch?v=X8xxz-YeWtk](https://www.youtube.com/watch?v=X8xxz-YeWtk)

Here's a demo! [4]

[4]
[https://makepad.github.io/makepad.html](https://makepad.github.io/makepad.html)

------
pointernil
Thinking about "the stack", runtimes, development processes and the root
causes involved here, I think the best thing that could happen to "the
industry" is some event preventing silicon production from delivering new
CPUs/RAM/GPUs/hard drives for some 5+ years or so.

Constrain the resources to bring back some sane levels of efficiency
regarding RAM/CPU cycles/disk space.

/scnr

------
roryisok
I just checked on Windows 10, an empty instance of VS Code with a blinking
cursor uses 0% most of the time.

Is this a MacOS only issue? Does it affect Atom?

~~~
acdha
There has to be some other factor: on a 2013 iMac it uses under half a percent
when idle, with the same versions of macOS and VS Code.

------
coding123
Thank you to the author of this github issue! I hope it gets sorted soon! I
use VSCode a lot when disconnected from my power outlet.

------
SippinLean
I wonder if adding

    will-change: opacity

to the CSS would improve performance (by switching CSS animation rendering to
the GPU)

[https://medium.com/outsystems-experts/how-to-achieve-60-fps-animations-with-css3-db7b98610108#.2kb4429b9](https://medium.com/outsystems-experts/how-to-achieve-60-fps-animations-with-css3-db7b98610108#.2kb4429b9)

------
kelvin0
Simple fix: ... cursor.blink = False // 13% CPU optimization :-) ...
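The joke aside, VS Code does expose essentially this switch in user settings
via the `editor.cursorBlinking` option (option name as per VS Code's settings
documentation; verify against your version):

```json
{
  "editor.cursorBlinking": "solid"
}
```

`"solid"` disables blinking entirely; other accepted values include
`"blink"`, `"smooth"`, `"phase"` and `"expand"`.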

------
rbanffy
We could call it Egacs: "Eight Gigabytes And Constantly Swapping"...

------
kardashev
I think the reason for using the browser for desktop UI is that graphics and
UI programming is terrible. The computer architecture and operating systems
we use today are built on tech from before graphics existed.

That's why it's trivial to build a terminal app, but god help you if you want
to create a window with a button or draw a line. GLEW, GLUT, Qt, OpenGL, and
the rest are just kludges to deal with the fact that we're still working with
tech from the 70's. Even those kludges are so terrible that companies like
Github and Microsoft have resorted to using the browser to make things like
Atom and Visual Studio Code.

It can be fixed, but I'm not sure anyone has the fortitude to redesign
hardware and operating systems.

~~~
jstarks
What hardware or operating system changes do you think are necessary to
improve the GUI development experience?

~~~
kardashev
Imagine being able to create windows, draw, deal with I/O devices (keyboard,
mouse, controller, etc.) and do OpenGL-style graphics work with system calls.

That would allow (practically) any language to trivially create native apps
with a GUI, at least trivially compared to today.

In Linux we could eliminate the multitude of complicated display servers and
reduce latency in the system (making VR and AR easier to do). Windows has
something similar to what I describe, but it's not very good, and has no
builtin OpenGL-like system calls.

A standardized set of system calls for graphics, GUI, and IO would allow cross
platform native apps to be trivially created. In principle there's no reason
why MacOS, Windows, and Linux could not have such system calls added. The
biggest problem is those systems have become very bloated from adhering to
backwards compatibility.

~~~
coding123
I have also been waiting for the day... Maybe in a few years this Linux
subsystem for Windows will start to mold things in this direction?

------
ericfrederich
Dude... that cursor has a carbon footprint.

I hope when Microsoft fixes this they publish some usage stats and we can see
how much energy/money/carbon this will save annually.

------
scriptproof
It is amazing this article got more than 700 points, because when I open the
task manager while VSC is idle, I see 0% CPU usage. And the cursor is
blinking!!!

------
caseymarquis
Crossing my fingers any related changes don't kill the cursor in the vim
extension. Maybe I'll skip the next few updates...

~~~
xconverge
Hey bud, I am here for you as a developer of the vscodevim extension. I will
make sure it works just for you.

------
joe563323
On a not so different note, xfce had a bug where, when the screen saver got
displayed after the idle timeout, the CPU utilization rose over 100%.

------
GoToRO
I noticed that sometimes when you press TAB at the beginning of a line, it
can take about 1s for the TAB to appear on the screen.

------
CodeWriter23
Why hasn't someone thought of implementing a blinking cursor in hardware, like
we did in the early 80's?

~~~
yeukhon
How would that work in reality?

~~~
CodeWriter23
Back in the day you'd write a value into hardware register and the video
hardware would draw the cursor at the location designated by the written
value. The hardware that was generating the signal to drive the beam would AND
the VRAM contents with the register and XOR that result with a clock that
pulsed at the blink rate. Voila, blinking cursor.
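As a loose software sketch of that scheme (purely illustrative, not modeling
any real chip): the blink clock gates the cursor mask, and the gated mask is
XORed into the character cell's VRAM byte, so the cursor pixels invert only
while the clock is high.

```javascript
// One byte stands in for one row of a character cell. The CPU never
// runs: in hardware this combinational logic sits on the path that
// drives the beam, re-evaluated as pixels are scanned out.
function renderCell(vramByte, cursorMaskByte, blinkClockHigh) {
  // Gate the cursor mask with the blink clock (the hardware AND)...
  const gatedMask = blinkClockHigh ? cursorMaskByte : 0x00;
  // ...then invert the masked pixels (the hardware XOR).
  return (vramByte ^ gatedMask) & 0xff;
}
```

With the clock low the cell shows plain VRAM contents; with it high the
cursor pixels invert, and the 2 Hz clock alternating between the two is the
blink.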

In this day and age, I imagine you write some code that runs in the GPU off a
timer interrupt that pumps the texture of a cursor to a location on screen
based on a value stored in some scratch RAM. Exposing that interface to some
JavaScript running an editor in a browser is left as an exercise for the
jackasses who have taken what we would have called a supercomputer back in
the day and turned it into a wheezing desk heater with a stuttering UI.

------
batmansmk
This bug has nothing to do with technology choices. It is just a bug in
Chromium. The team is investigating.

------
Ace17
No issue, just need a plugin to disable cursor rendering when you launch the
build :-)

------
nebabyte
> Powerful* text editors built on the web stack

I've identified the problem

------
hesarenu
Couldn't Electron use a React Native kind of framework?

------
z3t4
You do not need horses to power your computer.

------
plg
carbon footprint

~~~
betaby
Yes. We quickly forget that all those CPU cycles and that abundance of RAM
consume electricity => heating the planet.

~~~
valarauca1
The real issue is cooling the universe and accelerating us to the inevitable
heat death :(

~~~
rikkus
Quick, produce more heat!

------
excalibur
Lotus Notes FTW

------
am185
can you close VS Code when not in use?

------
knodi
The power of node.js -_-

------
yAnonymous
Why don't all those who complain about Electron apps get together and make a
FOSS, cross-platform editor with all the features and an equal or better
interface than VS Code and Atom?

If you use the time you now spend complaining to code instead, you should have
the basics covered.

~~~
angry-hacker
Because there are good enough tools out there already, foss and commercial.
This problem is solved already.

~~~
yAnonymous
Name them.

------
dingo_bat
Don't see this on my Win 10 ultrabook.

------
vgy7ujm
The answer:

Vim <-- this

------
billconan
Yesterday I wrote a webpage with a Bootstrap template. The webpage has a
progress bar that is updated in realtime over a websocket. The webpage is
extremely slow. I guess every progress bar update rerenders the entire DOM
tree.

I don't think react would do much better, but I will have to try.

~~~
Guillaume86
With a decent implementation, a progress bar will run at 60 fps no problem.
You don't need React for a simple progress bar either; just check that no
unnecessary work is done (you probably just need to update the width of an
element) and use requestAnimationFrame if you receive updates at a very high
rate.
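That advice can be sketched as a small coalescer: touch the DOM at most once
per frame and let the latest value win. The function names here are made up
for illustration; in a browser you would pass `requestAnimationFrame` as the
scheduler.

```javascript
// Coalesce high-frequency progress updates so the DOM is written at
// most once per scheduled frame. `schedule` is injected so the same
// logic works with requestAnimationFrame in the browser or a stub in
// tests.
function makeProgressUpdater(applyWidth, schedule) {
  let pending = null;  // latest value received, not yet painted
  let queued = false;  // is a frame already scheduled?
  return function update(percent) {
    pending = Math.max(0, Math.min(100, percent)); // clamp to [0, 100]
    if (!queued) {
      queued = true;
      schedule(() => {
        queued = false;
        applyWidth(pending); // e.g. bar.style.width = pending + '%'
      });
    }
  };
}
```

Hypothetical browser wiring (element id and socket are assumptions):

```javascript
// const bar = document.getElementById('progress');
// const update = makeProgressUpdater(
//   p => { bar.style.width = p + '%'; },
//   requestAnimationFrame
// );
// socket.onmessage = e => update(Number(e.data));
```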

~~~
billconan
I admit I'm not familiar with frontend development.

I used this template
[https://github.com/secondtruth/startmin](https://github.com/secondtruth/startmin)

I noticed that the Bootstrap progress bar is a div,

[https://www.w3schools.com/Bootstrap/bootstrap_progressbars.a...](https://www.w3schools.com/Bootstrap/bootstrap_progressbars.asp)

that's why I guess updating it triggers a DOM update.

I'm still looking for a solution.

------
onli
Because it hasn't been mentioned yet, as far as I could see: blinking cursors
are horrible and should never exist; they resemble torture.
[http://www.jurta.org/en/prog/noblink](http://www.jurta.org/en/prog/noblink)
is an extensive list of ways to deactivate them in various programs. Try it
out, you will feel the difference.

~~~
juandazapata
"I don't like XXX, so it shouldn't exist" is a great argument.

~~~
onli
No, of course not. It wasn't supposed to be the argument. Are you really
interested in the background? I will just assume so, this is HN after all.

Our brain has some special recognition for specific things. A collection of
those things is bundled under Gestalt psychology. There are also a number of
things that make something pop out. Movement is one of the things that
triggers the pop-out effect; we notice it immediately. What that means in
practice is that a blinking cursor in an otherwise plain image will be
noticed immediately, and it takes conscious effort to ignore it. That is a
strain on your mental capacities, and highly unnerving, even if you are not
completely aware of what it is that annoys you.

Part of this is what motivates those specialized writing editors that remove
everything apart from pure text, to make it as easy as possible to focus on
the task at hand.

Blinking cursors, and this is what the jurta page mentions, also resemble
Chinese water torture. I think that was meant a bit tongue in cheek, but it
is actually not wrong. It comes again and again and again and again ..., and
it is not immediately controllable.

Add to that the useless resource usage like in this bug and yes, I stand by
that and am serious about it: blinking cursors are a very bad habit that
should just die out. And developers who actively implement them without
making them deactivatable should ask themselves what they are doing - I had
that with ShareLaTeX, a platform which I otherwise like a lot.

------
drzaiusapelord
>A workaround for folks who are similarly obsessed about battery life

That's odd wording. Obsessed because I don't want to waste energy, which leads
directly to pollution? Or don't want a short battery life on my laptop? Power
savings shouldn't be seen as irrelevant geekery.

13% is half the capacity of a single core in a four core processor. My 5 year
old 2500k peaks at 120 watts. So 15 watts to render a cursor?

Microsoft needs to stop it with these terrible levels of QC. It's inexcusable.

~~~
StyloBill
It's a Chromium problem, maybe you should read the Github issue before
commenting.

~~~
big_paps
Still, it's one of the most interesting posts. x joules per blink

~~~
StyloBill
The maths may be interesting; the rant is definitely not.

------
LanceH
On one hand, I'm annoyed that javascript and browsers are being used for a
text editor.

On the other hand, I'm reminded of the arguments against emacs using 8 _MEGS_
of memory and how terrible that is.

I've learned to just relax and use what works. People have chosen to
concentrate on extensibility at the expense of current day performance. The
result has been good enough.

There is probably something else to be said about these also making OSX a
first class citizen, which hasn't always been true.

~~~
RandomOpinion
> _On the other hand, I'm reminded of the arguments against emacs using 8
> MEGS of memory and how terrible that is._

Yeah, but back then Moore's Law was running full steam and we were getting
tremendous advances in processor and memory performance every year. Now that
era has ended; what processor performance we have today is likely to remain
more or less the same for the foreseeable future, barring a miracle from the
chipmakers.

It's time to stop pissing away performance by using inefficient software
stacks and web technologies are a big, juicy target for cleanup.

