VS Code uses 13% CPU when idle due to blinking cursor rendering (github.com)
899 points by Kristine1975 38 days ago | 760 comments



I'm reminded of this classic: https://github.com/npm/npm/issues/11283

NPM had a progress bar that was so fancy that it slowed down installation time (basically its only job) by ~50%. Hilarious.

My mantra here is, if you find yourself thinking about implementing a fancy loading spinner/progress bar, it would be more productive to just spend that time making it unnecessary - speed up your shit! Obviously that doesn't apply to VS Code's cursor.


This reminds me of one of my favorite old tech stories.

A long while back (seems like this was the late 90s or early 2000s) I was working on a script that did some data processing on a remote machine. It had to loop through a bunch of text log data and generate some reports. Since I had no idea whether the script was actually working until it completed a while later, I decided to put a neat little ASCII spinner in it for when you ran it with verbose options.

At the time I was on a slow dialup connection as I was on break from school, and something weird would happen. Every time I ran the script to test it, my Internet connection would become nearly unusable. But as soon as the script finished, it would suddenly start working again.

As you can imagine, this was very confusing since the script was running entirely on a remote system. What the hell is going on?

This stumped me for an hour or so until I ran it without the verbose option ... and it didn't happen. Then I finally realized what was happening: I was refreshing the spinner on EACH ROW, and the remote machine was going through the rows so quickly that sending refreshes for the spinner saturated my tiny dial-up connection. Changing it to only update once a second fixed the problem entirely.

And that's how I DoS'd myself with a spinner.
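In JS terms, that fix might look something like the following sketch (the function, frame set, and injectable clock are mine, not the original script): call the updater on every row, but only write to the terminal when at least a second has elapsed:

```javascript
// Throttled spinner: invoked for every processed row, but it only writes a
// new frame to the terminal when `intervalMs` has elapsed since the last one.
const FRAMES = ['|', '/', '-', '\\'];

function makeSpinner(write, intervalMs = 1000, now = Date.now) {
  let last = -Infinity;
  let frame = 0;
  return function tick() {
    const t = now();
    if (t - last < intervalMs) return false;  // too soon: skip the redraw
    last = t;
    write('\r' + FRAMES[frame++ % FRAMES.length]);
    return true;
  };
}

// Hypothetical usage over a real TTY:
// const tick = makeSpinner(s => process.stderr.write(s));
// for (const row of rows) { processRow(row); tick(); }
```

The clock is a parameter only so the throttle logic can be tested without real time passing; the key point is that the write rate is bounded by the interval, not by the row rate.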


Your story reminds me of a current scenario, @peckrob.

For Android application development, we used to connect devices to the computer via USB and watch the device's log in the logcat tool (Android Studio). The device generally spews lots of logcat messages during a debugging session, and that eats up CPU.

It's even worse when the device is connected over wifi (wifi-adb): the tool gets stuck, since data transfer over wifi is a bit slower than over USB.


I've got a similar one. I was working on an app once where the current results were being logged to a text box, nothing too fancy, but I noticed it got a lot slower on larger blocks. Changing "textBox.Text = textBox.Text + newLine" to "textBox.AppendText(newLine)" cut >99% of the CPU time.

On the same project I discovered that "System.Environment.NewLine" was a relatively expensive call in C#; caching the result of that property was another 50% cut to CPU.
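The same quadratic-concatenation trap exists outside WinForms (there, much of the cost was the control re-rendering its whole contents on every Text assignment). A JS sketch of the two approaches, with hypothetical function names:

```javascript
// Rebuilding the whole string for each line copies O(n^2) characters in the
// worst case; buffering the lines and joining once at the end is a single pass.
function logByConcat(lines) {
  let text = '';
  for (const line of lines) text = text + line + '\n';  // full copy each time
  return text;
}

function logByBuffer(lines) {
  return lines.map(line => line + '\n').join('');       // one allocation at the end
}
```

(Modern JS engines often optimise naive string concatenation internally, so the gap is most dramatic when each assignment also triggers work like a UI control re-measuring its contents.)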


Nice foot-gun story. If you'd had a T1 at the time, you would never have noticed such a pitfall.


I often advise writing code on "obsolete" tech. It makes every bit of cruft obvious.


I test all of my apps on an old Moto G, second generation, on throttled mobile internet (64kbps).

That’s the average worst case user.

This also means I immediately notice if an app has hardcoded timeouts or loads massive amounts of data.


Pretty much, yeah. I never noticed it on campus because I was sitting on a 10 megabit connection.

That added an extra dimension to the confusion, because I was sure this never happened when I was on campus, only when I was sitting 300 miles away. It was still happening there, as I'd have seen if I had bothered to look at the network stats; just not enough to entirely saturate the connection.


In the early days of working on my current main project, I found that updating a progress bar was slowing the process it was monitoring. Since there were times when it was useful to see near real-time progress, I added a slider which allows the user to adjust the sampling rate. That slider is affectionately known (by me, anyway) as the Heisenberg Compensator.

P.S. I have sped up my shit. That process originally took days, now it takes an hour.


It's fairly regular in my field (game dev) to put some debugging/instrumentation code in and only enable it for one frame after a key is pressed.
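A minimal sketch of that pattern (names are mine): the key handler arms a flag, and the frame loop consumes it, so the expensive instrumentation runs for exactly one frame:

```javascript
// One-shot debug capture: a key press arms a flag; the next frame consumes
// it, so instrumentation code runs for a single frame and then disables itself.
function makeFrameDebugger() {
  let armed = false;
  return {
    arm() { armed = true; },        // call from the key handler
    shouldCapture() {               // call once at the top of each frame
      const capture = armed;
      armed = false;                // auto-clear: only one frame captures
      return capture;
    },
  };
}

// Hypothetical frame loop wiring:
// const dbg = makeFrameDebugger();
// window.addEventListener('keydown', e => { if (e.key === 'F9') dbg.arm(); });
// function frame() { if (dbg.shouldCapture()) dumpTimings(); /* ... */ }
```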


> Heisenberg Compensator

That's a perfect name, I love it.


There's a Heisenberg Compensator in Star Trek transporters [0]

[0] http://www.startrek.com/database_article/heisenberg-compensa...


Not a coincidence.


IBM spent a humongous number of man-hours on the running man in SMIT under AIX, many of them consumed by making sure it ran correctly on systems of various speeds without the animation going crazy. Alas, I don't have the exact figures, but that's what one of the engineers told me nearly two decades ago now. Hopefully somebody else has more detail on this, as it's one of the earliest examples of a progress animation that still runs today that I'm aware of.

I still hark back to the days of the C64 and the like, which had tape storage, when it was common to have a game as a loading screen. It was a small program that gave the user something to do whilst they waited 20 minutes for the main program to load. Many titles also rewrote the cassette storage code for faster loading, so you'd often load that small game, something like Space Invaders, to play whilst the main game loaded using the quicker tape-handling code.


SMIT running man example: https://www.youtube.com/watch?v=YMWSD69BWqI

If anyone knows where I can get those animation frames, I'd like them for a personal project as a "running" indicator.



Broken Sword let you play Breakout during the installation; it was such fun (I had not played Breakout in a long time at that point) that I was almost sad when the installation finished. ;-)

I have often wondered since why this was not a more popular way to deal with long running installers.


I think it's because Namco had a patent on games in loading screens from 1995 to 2015: http://kotaku.com/the-patent-on-loading-screen-mini-games-is...


The Ghostbusters game on the C64 had it when I bought it used, in 1991 or '92 I think.


It's a common example of a patent that had a lot of prior art but was granted anyway.

I don't know enough about patents to say if there was a reason the prior art didn't apply, but I know that Namco was fairly protective of it and it was somewhat limiting for people working in the game industry.

Although loading screens are less common now than they were back then, and I think that would still be the case even if such a thing had never been patented.


Having minigames during loading or installation was actually patented by Namco for Ridge Racer, I believe. I'm not quite sure about the details since I seem to recall things like invade-a-load on my Commodore 64 at least a decade earlier, but there you go.


Expired in 2015 :)

When I was younger, it seemed so obvious to add a minigame to a long loading screen, I had assumed that there were technical reasons for not doing it.


Not quite the same thing, but DVDFlick lets you play Tetris while your DVD is being authored and burned, which was a nice surprise.


One Linux distribution used to let you play Breakout during installation, but I forgot which one. Caldera, maybe?


Don't most Linux installations just let you use your computer normally during installation?

(And by normally I mean, as booted from a LiveCD.)


I think I don't understand the issue well enough. This looks like a standard blinking cursor to me. Users expect a blinking cursor in an editable text field.

I'm not sure why this implementation is slow or why they needed to implement it themselves and not let the OS handle the blinking cursor. I'm guessing there must be some reason.


Powerful* text editors built on the web stack cannot rely on the OS text caret and have to provide their own.

In this case, VS Code is probably using the most reasonable approach to blinking a cursor: a `steps` timing function with a CSS keyframe animation. This tells the browser to only change the opacity every 500ms. Meanwhile, Chrome hasn't completely optimised this yet, hence http://crbug.com/361587.

So currently, Chrome is doing the full rendering lifecycle (style, paint, layers) every 16ms when it should be only doing that work at a 500ms interval. I'm confident that the engineers working on Chrome's style components can sort this out, but it'll take a little bit of work. I think the added visibility on this topic will likely escalate the priority of the fix. :)

* Simple text editors, and basic ones built on [contenteditable] can, but those rarely scale to the feature set most want.

(I work on the Chrome team, though not on the rendering engine)
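For reference, a stepped blink of the kind described above might look roughly like this (class and animation names are illustrative, not VS Code's actual stylesheet). With steps(1, end), the opacity only changes at the keyframe boundaries, so an engine that fully optimises stepped animations would only need to repaint twice a second:

```javascript
// CSS for a caret blinked by a stepped animation (names are illustrative).
// steps(1, end) holds each opacity value until the next keyframe, so the
// computed style only changes every 500 ms of the 1 s cycle.
const caretBlinkCss = `
.caret {
  animation: caret-blink 1s steps(1, end) infinite;
}
@keyframes caret-blink {
  0%, 100% { opacity: 1; }
  50%      { opacity: 0; }
}`;

// In a browser this would be injected once at startup:
// const style = document.createElement('style');
// style.textContent = caretBlinkCss;
// document.head.appendChild(style);
```

A linear timing function on the same keyframes would instead ask the engine to interpolate opacity every frame, which is exactly the 60 fps work the bug report describes.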


"So currently, Chrome is doing the full rendering lifecycle (style, paint, layers) every 16ms when it should be only doing that work at a 500ms interval"

Only? It shouldn't be necessary to lay out the entire window to redraw a cursor.

Also, if it did update every 500ms:

- it would still use about half a percent of CPU. On a machine whose CPU is easily >200 times as fast as those powering the first GUIs (and those _could_ blink the caret at less than 100% CPU) and that has a powerful GPU, that's bad (yes, those old-hat systems had far fewer pixels to update, but that's not a good excuse)

- implementing smoother blinking (rendering various shades of gray in between) would mean going back to 13% CPU.

I would try and hack this by layering an empty textfield on top of the content, just to get a blinking caret. Or is it too hard to make that the right size and position it correctly?


> Powerful* text editors built on the web stack cannot rely on the OS text caret and have to provide their own.

Is there any reason that Electron couldn't provide an API that would expose the system caret in an OS-agnostic manner? Windows, for example, has an API[0] that can arbitrarily show the caret at a given point in the window. Sounds like something that would be useful to many apps and not get in the way for those that don't need it.

[0] https://msdn.microsoft.com/en-us/library/windows/desktop/ms6...


Probably not easily. Remember, this is what Java AWT did, and it was a complete mess. Write once, debug everywhere.

My favorite issue was that on one OS (Windows, I think) a panel would only be visible if pixel 0,0 was on the screen and nothing was on top of it. The panel could be 99% visible but not be shown at all if its upper-left corner was under another panel.


Why are you even painting at all instead of layerizing it?

(This is a perfect example of what I've been increasingly convinced of lately: that the whole "paint"/"compositing" distinction hurts the Web…)


>> text editors built on the web stack cannot rely on the OS text caret

Can you explain in simple terms why this is the case? Why on earth not?


Well, because, in order to satisfy the most finicky of its users (myself included), VS Code offers no less than 6 styles for its cursor ('block', 'block-outline', 'line', 'line-thin', 'underline' and 'underline-thin') and 5 animations ('blink', 'smooth', 'phase', 'expand' and 'solid'). Also, in a future release themes will be allowed to change the color of the cursor to any of the 16,777,216 colors in the RGB spectrum.

Does that answer your question? :)


Because you can't just ask the OS to "please paint text caret here thankyou", and browsers do not expose a powerful enough native text editing control. So you end up reimplementing one in JS/HTML/CSS, including the caret.


The WinAPI function SetCaretPos seems to do that: https://msdn.microsoft.com/en-us/library/windows/desktop/ms6...


>http://crbug.com/361587

Reported almost 3 years ago and still not fixed...


> I'm not sure why this implementation is slow

I'm guessing the culprit for that is in this part of the bug report:

> Zooming into a single frame, we see that while we render only 2 fps, the main thread is performing some work at 60 fps (every 16 ms)

As for

> why they needed to implement it themselves and not let the OS handle the blinking cursor

They're not editing inside a native text field anymore; they need to blink something in HTML. I guess the reason they're not using the deprecated <blink> tag is that it isn't customizable in any way.


I don't believe that any modern browser still supports the blink tag; IIRC Firefox was the last to get rid of it, a year or two ago. You can hack it up in CSS, though.


Truly, the web has come full circle.


On Win32[0], the blinking caret can be shown anywhere inside a window, not just in a text field. The sample program builds a basic text editor without cheating by using a textbox control.

I'm sure other operating systems have this kind of facility - after all, how does the built in text edit control draw its caret?

[0] https://msdn.microsoft.com/en-us/library/windows/desktop/ms6...


>On Win32[0], the blinking caret can be shown anywhere inside a window, not just on a text field. (...) I'm sure other operating systems have this kind of facility - after all, how does the built in text edit control draw its caret?

The "built in text edit control" IS a native text field already.


My point is that the built-in text edit control needs some way to draw its caret, so surely a universal method to draw a blinking caret exists somewhere.


Well, they could just make a C extension to Electron that draws a blinking colored line (that's all it is) using the OS's arbitrary drawing facilities.

No need for an official OS caret function (especially if you're not a native text field; caret aside, the rest of your text editing will be different/broken in subtle ways compared to the OS anyway).


I'm not aware of any system where one can create that caret without a text edit control to hold it. That means that, if the standard text edit control isn't suited for what you want to do, you can't have the caret.

Certainly, on the original Mac OS (a system for which I worked on OS patches for detecting the location of the caret) the methods applications used for drawing the caret varied widely. I've seen it implemented by drawing a line, or by drawing a rectangle, either in one go or in two parts (that happened in applications that supported a split caret, even if the caret wasn't split), with various transfer modes. Slanted carets typically were application-specific, too.


> I'm not aware of any system where one can create that caret without a text edit control to hold it.

Windows allows this. In a parent comment I posted a link to the API docs. All Windows requires is a window to hold the caret, it doesn't care what kind of window it is.


Thanks, so they are probably not using a contenteditable element. In that case it makes sense that they would need to use a CSS animation to blink the cursor.


> why they needed to implement it themselves and not let the OS handle the blinking cursor.

Probably because they are using electron.


This is exactly the same problem I ran into yesterday.

I made a webpage based on a Bootstrap template.

On this webpage, I have two realtime components. One is a chart, based on dygraphs;

the other component is a Bootstrap progress bar.

The Bootstrap progress bar is made of HTML divs:

https://www.w3schools.com/Bootstrap/bootstrap_progressbars.a...

Both components are updated in realtime by a websocket.

I noticed that the chart by itself is very fast. But as soon as the progress bar is added, the entire webpage and even the entire machine become very slow. I guess this is because on every little progress bar update, the browser re-renders the entire page, as the progress bar is a div.

I don't know how to solve this. I thought about using React for its virtual DOM, but if I eventually need to update the DOM anyway, the speed seems like it would be the same.


There is an HTML5 progress tag supported by all modern browsers: <progress max="100" value="40"></progress>


Can you buffer the incoming websockets data and only update at a small percentage of the inputs or maybe based on time?

Lodash has a debounce function which is useful to throttle UI features hooked to incoming data. https://css-tricks.com/debouncing-throttling-explained-examp...

You never really want the UI repaint rate to be dictated by the data rate; it's better to use timers.
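One way to sketch that timer-driven approach (names are hypothetical): buffer only the latest payload, and let a fixed-rate timer flush it to the DOM, so the repaint rate is independent of the message rate:

```javascript
// Keep only the newest payload; a timer flushes it to the DOM at a fixed rate.
// Intermediate messages are dropped, which is fine for a progress bar.
function makeRenderBuffer(render) {
  let latest = null;
  let dirty = false;
  return {
    push(data) { latest = data; dirty = true; },  // call per websocket message
    flush() {                                     // call per timer tick
      if (!dirty) return false;
      dirty = false;
      render(latest);
      return true;
    },
  };
}

// Hypothetical wiring:
// const buf = makeRenderBuffer(d => { bar.style.width = d.percent + '%'; });
// socket.onmessage = e => buf.push(JSON.parse(e.data));
// setInterval(() => buf.flush(), 250);  // at most 4 repaints per second
```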


Am I the only one around here who prefers a non-blinking cursor? That and the key repeat rate are the first settings I change on any new install.


Minimizing the installation window would prevent the paint event and your installation would finish quickly...


iTunes has the same issue with its spinner while syncing; it takes about 20% of CPU time.


Who said it's the spinner in that case?

After all, it IS syncing at that time, which means it does a lot of stuff.

Whereas the issue here is with the caret shown when the editor is idle.


Activity Monitor allows you to easily profile what the application is spending its CPU time on and it shows that iTunes spends an excessive amount of its time on updating the UI, specifically the spinner.


I just can't get my head around the fact that we are building text editors inside a web browser! I get it, there are many good use cases for Electron and it's easy to get started with cross-platform support, but why is everybody going crazy about text editors in them? Because you can write plugins in JS?

Wouldn't it be better to make a native application, especially for code editors, where developers spend most of their time and where every noticeable lag and glitch is not appreciated?

Edit: Many people here think that I am attacking this web-based kind of technology, which I am not, and sorry for not being clear enough, but why choose something so high up the stack for a dev tool?

Edit2: For non-believers in nested comments, look -> https://github.com/jhallen/joes-sandbox/tree/master/editor-p...


Yep. I also can't wrap my head around the fact that we are now constructing buttons, drop-down boxes, tagged text boxes using dozens of nested <div> layers instead of a native widget that writes directly to the screen. My 486 rendered UIs with nearly imperceptible lag. Google Docs takes a good 2-3 seconds to spin up a UI on my i7.


You both do realize that similar arguments could have been made back in the day when moving from, say, command-line DOS applications to Windows API applications - right?

Ultimately, computing has always been one of abstraction from the lower "layers". Taken far enough, one could spuriously argue that if you aren't soldering together the flip-flops that make up your logic and memory, you just aren't being efficient...


Except that the abstraction layer for the user has remained the same.

The extra abstraction layers you're talking about are invisible to the user... while our GUIs are slower, and our processors faster. It feels like, after 30 years, we should be able to have our GUI cake and eat it, too.


The abstraction layer has only remained the same visually, and only for a loose definition of "visually."

In the "good old days" the user interface was 640px x 480px (original VGA[1], skipping past the original MDA, CGA, and Hercules graphics cards[2] since they predated "modern" GUIs). Then it was expanded to 800x600, etc. The programs of those days were hard-coded to the graphics adapter resolution. If a program did not support the graphics adapter you had, you had to run your GUI in lower resolution compatibility mode. Sometimes it didn't work at all.

In lower resolution compatibility mode, the program's display got uglier and uglier as the screen resolution went up.

The abstraction layer today is adaptive and high DPI. Since the abstraction layer abstracts away the actual display resolution, programs (generally) can take advantage of the resolution that is available on a given display without change, allowing a program to run on small hardware displays (phone/tablet form factor) up to mega 4K++ displays, looking better and better as the screen resolution goes up.

[1] https://en.wikipedia.org/wiki/Vga

[2] https://en.wikipedia.org/wiki/Hercules_Graphics_Card


Precisely. And even Win32, which is being touted in this thread as somehow far superior to the Web stack, was never designed for high DPI apps. It's far worse than the Web stack, with exact pixel hardcoding everywhere. That's why the HiDPI situation on Windows is such an inconsistent mess. Meanwhile, the Web scaled up to HiDPI so seamlessly that most people never even noticed any friction. This is the benefit of the declarative model of CSS.


Win32 was designed around DPI-independence from day one. That's why things like GetSystemMetrics exist. Unfortunately, the app developers chose not to use them.

GDI was originally designed to run on printers as well as screens, where the DPI values are completely different.

The declarative model of CSS has nothing to do with it. You can specify pixel values in CSS if you like.


> You can specify pixel values in CSS if you like.

And many (most?) do... leading to the reinterpretation of what "pixel" means in the CSS spec. Yes, a CSS pixel does not mean a physical pixel on the screen.


> Win32 was designed around DPI-independence from day one. That's why things like GetSystemMetrics exist. Unfortunately, the app developers chose not to use them.

Integer pixel coordinates are right there in the most fundamental functions:

    HWND WINAPI CreateWindow(LPCTSTR lpClassName,
                             LPCTSTR lpWindowName,
                             DWORD dwStyle,
                             int x,
                             int y,
                             int nWidth,
                             int nHeight,
                             HWND hWndParent,
                             HMENU hMenu,
                             HINSTANCE hInstance,
                             LPVOID lpParam);


WPF is perfectly fine for high DPI apps and still beats all this web stuff by orders of magnitude.

CSS people still think a grid is a choice of "small" "medium" and "large" columns/rows. Haha.


> WPF is perfectly fine for high DPI apps and still beats all this web stuff by orders of magnitude.

> CSS people still think a grid is a choice of "small" "medium" and "large" columns/rows. Haha.

CSS Grid is literally Microsoft taking the WPF grid layout and porting it to the Web!


Yes, that is what I'm saying. They wouldn't be porting it if the Web's previous solutions had been just as capable.


But they did port it. That's why CSS Grid exists…


Yes, but his point is that for ages it didn't exist. Like, for two decades and something; first it was tables, then the "floats" BS.

And even now it's still not mature, nor supported in all browsers.


Isn't this a false dichotomy? Why not create more modern, declarative native APIs and libraries, or use them where they already exist?


> the old stuff worked just fine

> we don't need the new stuff

> how about we throw both away to shut everyone up, would that make either side happy?


How about: Take what we learn from new development so that we can improve the older, lower-level stuff, and strip away some unnecessary levels of abstraction?


Yep. I did some Swift development recently, and while I absolutely loved the language I was struck by how much worse the UIKit APIs are compared to React.

There's nothing special about JavaScript the language. The nice thing about web development is the really fast framework iteration cycle. Everyone complains about it, but as a result we have some fantastic APIs for UI development in JavaScript land now. (E.g., React.)

What I want to see is someone port those lovely abstractions across to native languages. For example, a port of choo[1] to Swift on top of UIKit would be gloriously fast, efficient, small and easy to work with.

[1] https://github.com/yoshuawuyts/choo


Because it's less effort and more likely to be successful to just improve Web implementations.


Sure, but it's also more likely to make my laptop with 16 gigs of RAM, 8 simultaneous threads and SSD feel more sluggish than the one I had 10 years ago.

Edit: I'll add, I frequently see people say that Windows XP was the best OS ever made. Why the rose-tinted glasses? It was the tail end of the era when hardware was getting faster, faster than software was getting slower.


Where are the Michael Abrash's of today teaching people how to write tight, fast code? Seems a lost art...

Yes, I know, he's still around...


Luckily there are a few people that still care, just look at the response Handmade Hero has gotten.


Those videos on data oriented design were also very interesting: https://github.com/taylor001/data-oriented-design

The thing is, using C++ instead of React for mobile development of a simple application would probably make me miss deadlines... so we just stick to what's popular.


Building UIs in something like Qt, GTK or Swing was never really that time-consuming, especially given the limited number of controls on one screen of a mobile app.


Except today's machine language is javascript.


> Except that the abstraction layer for the user has remained the same.

This is absolutely not true.

Going to windowing GUIs from character-mode DOS was a massive usability improvement. No more exotic ctl-alt-function key combos (or control-control sequences for WordStar users like me).

Going from desktop apps to webapps changed the abstraction layer of how we access and share documents. No more installing software on the desktop. Docs are available on every computer we log into. Multiple people can edit the same document.

Maybe you don't care about document sharing and would be happy installing old school word processors and spreadsheets? That's a fair statement to make, but the market appears to be voting otherwise.


>No more installing software on the desktop

At least VS Code has to be installed. It is a plus for Google Docs & Co, but not universal. And in most cases, instead of installing you now have to create an account.

>Docs are available on every computer we log into. Multiple people can edit the same document.

Those features are completely independent of the platform the application uses. They are commonly implemented in web apps because for a long time web apps had no other choice, but cloud storage and collaborative editing can be equally well implemented in a desktop application.


> Going to windowing GUIs from character-mode DOS was a massive usability improvement. No more exotic ctl-alt-function key combos

cough Blender ...


Unfortunately - the OSS world has a shortage of good UX people, and the engineers tend to be the ones steering the ship.

I would point to Eclipse as another classic example of "obviously designed by an engineer". There's tons of functionality under the hood and it's a fantastic jumping-off point for further customization... but its layout is intensely non-intuitive in so many ways compared to a purpose-built IDE.

Everything is locked away in menus and "perspectives". If you're writing a Java web app - do you want the Java perspective, the Java EE perspective, the Web perspective, or the Debug perspective?

I hear GIMP's no picnic to work with either but can't confirm personally.


I fear the day "good UX people" come to GIMP and Blender.

It used to be that OSS was developed by people that actually used it. Maybe it wasn't pretty, it had a steep learning curve, but it got things done and was efficient once you learned how. Often you got fresh perspectives on how an interface could be done, since people got fed up with existing solutions.

Nowadays you get UX experts preaching how an interface is supposed to look, which mostly means copying Apple or Google. You get tons of shiny whitespace. Hamburger menus, because that's what "everybody is used to".

GIMP with "good UX people" would turn into a bad copy of Photoshop.


GIMP is pretty easy to pick up if you've used desktop apps before. It follows the sort of conventions you'd expect. Blender, on the other hand, has a very unique approach to UX.


Well, that is true of nearly all 'pro' creative apps. Maya, Illustrator, ...


>You both do realize that similar arguments could have been made back in the day when moving from, say, command-line DOS applications to Windows API applications - right?

No, we don't. With the Windows API there's no sandbox or extra embedded-language overhead (it's still native code, as in DOS, and even better optimized, able to use more memory than DOS would allow, etc.).

Oh, and you also got the benefit of a FULL graphical interface over DOS. Here you have slower performance, extra cruft, bad APIs AND the same final output (as a GUI app).

Not to mention that the Windows API UI toolkit, while bad, is sane compared to the web stack.


There was a big new overhead -- in DOS you could write directly to video memory and control the graphics card's registers directly. In Windows you had to go through libraries. Until DirectX came along, it was basically impossible to write even reasonably performant animations in Windows, hence the lack of games on Windows 2 and 3.


That is not true, before DirectX there was WinG and there were quite a few good games done in WinG.

The lack of games was mostly due to the reluctance of game developers to abandon assembly and direct use of the PC hardware, especially because C and C++ compilers were "too slow".


WinG didn't come around until 1994 though. If you wanted to ship a game in 1990, you had a choice of using GDI or using MS-DOS.

And the "assembly" argument doesn't make any sense. You can program Windows games in assembly if you want.


Windows only became relevant for home users after version 3.0 -- actually 3.1, which was released in 1992 -- so of course no one was shipping Windows games in 1990!

Sure, you can use assembly on Windows, and there were a few books teaching exactly that, but you weren't allowed to touch the graphics hardware any more unless you were writing a graphics driver, so you couldn't do all those graphics card tricks, especially Mode X.


> Not to mention that the Windows API UI toolkit, while bad, is sane compared to the web stack.

Completely disagree. Charles Petzold's HELLO.C is hundreds of lines of code. Hello World on the Web is, well, "Hello world!".

"Sane" environments don't force you to make up a distinction between "long pointers" and "pointers" if you want to conform to the house style in order to match 16-bit x86 real mode. Or route all events through one WndProc, forcing use of "message crackers" to poorly recreate the ergonomics of addEventListener. Or have to recreate the vector graphics stack not once but twice in order to deal with the lack of forward thinking (GDI, GDI+, Direct2D). Or deal with incredible apartment threading complexity to maintain VB6 compatibility. Etc. etc.

If you had said .NET, maybe. But Petzold-style Win32 is bad.


>Completely disagree. Charles Petzold's HELLO.C is hundreds of lines of code. Hello World on the Web is, well, "Hello world!".

It's actually 87 lines of code, with a disclaimer comment, ample empty lines, a callback to play a wav when clicked, and an error message for when it's not run on NT. So, like 30-50 lines of actual Hello World necessary code. And that includes the starting up boilerplate.

That's about as relevant as complaining about the large binary size of a hello world program in a language creating static binaries. It might be larger than expected, but it usually includes a whole runtime. So once you add actual code, the binary's size doesn't scale linearly with the code size.

For comparison, a modern CSS "resetting" file, which just clears the browser's default values, is usually larger than the hello.c example.

This is the problem with trivial/contrived examples. They don't show you how the thing you're discussing scales in actual use.

In this case, the hello world in HTML is not representative of what you need to do to create a medium/large SPA in HTML.

And you could always wrap the old Windows API in higher-order stuff, or even use it as a basic layer to create your own (still native) UI library.

The web stack, because of how it is built, doesn't let you do that.


>It's actually 87 lines of code, with a disclaimer comment, ample empty lines, a callback to play a wav when clicked, and an error message for when it's not run on NT. So, like 30-50 lines of actual Hello World necessary code. And that includes the starting up boilerplate.

Petzold's Hello World is far more than what I'd consider a Hello World in Win32, because in addition to what you noted, it also creates a "full" window with all the associated complexity of managing its drawing yourself. A more suitable Win32 Hello World is not that much more complex than the traditional C one:

    #include <windows.h>
    #include <winuser.h>

    int main() {
        MessageBox(0, "Hello World!", "Hello World!", MB_OK);
    }
From there, one can progress onto "dialog-based applications", where the bulk of the layout is declarative (in the resource file) and the C part is two functions, a callback for messages and a main() which just calls DialogBoxParam(). No WM_PAINT handling is necessary for those. A "full window" application is actually not necessary for many use cases. I've been working with Win32 for over a decade and a half and written at least a dozen little apps for various things, but the vast majority of them are not "full window" apps.
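
To make the declarative part concrete: a minimal sketch of such a dialog, with invented identifiers (IDD_MAIN, IDC_GREETING would normally be #defined in a resource header), might live in a resource script roughly like this:

```
// Hypothetical resource script for a dialog-based app; identifiers are
// made up for illustration. The C side would be little more than
// DialogBoxParam(hInst, MAKEINTRESOURCE(IDD_MAIN), NULL, DlgProc, 0)
// plus a DlgProc handling WM_COMMAND -- no WM_PAINT needed.
IDD_MAIN DIALOGEX 0, 0, 180, 60
STYLE DS_SETFONT | WS_CAPTION | WS_SYSMENU
CAPTION "Hello"
FONT 8, "MS Shell Dlg"
BEGIN
    LTEXT         "Hello, world!", IDC_GREETING, 10, 10, 160, 12
    DEFPUSHBUTTON "OK", IDOK, 65, 38, 50, 14
END
```

The dialog manager lays out and paints the controls declared above, which is what keeps the C side down to a callback and an entry point.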

In other words, I'd say Petzold was at least partially responsible for giving the impression that Win32 is impossibly complex even to start with. Fortunately others have come up with better introductions like http://www.winprog.org/tutorial/ or even https://win32assembly.programminghorizon.com/tutorials.html since then, but it seems that the damage has already been done and a whole generation of programmers have gotten the "Win32 is hard" notion embedded into their minds.


You are forgetting the magic CSS incantations, which are browser and version dependent, to accelerate something that is just granted on any native UI toolkit.


I agree that the paint/compositing distinction is bad (see my reply to Paul Irish), but Win32 has just as opaque a distinction. GDI is much worse for modern hardware acceleration than the Web is.


GDI is a dead horse, already replaced by better solutions for anyone who cares to use them, yet it still scales better than most web stuff.



Complaining that win32 has to do stupid things to get basic features in comparison to the web seems pretty laughable.


Why? The consensus opinion here seems to be that Win32 is a better API than the Web. I'm pointing out that this is an extreme case of rose-colored glasses.


I have had to use the win32 API within the last year. I left that job, I regret nothing.

As much as I want as few layers between me and the hardware as possible, I would rather have many sane layers, like much of the web, than one insane layer like the Win32 API. It doesn't follow good C, C++, standard library, or other conventions. Many functions have multiple poorly documented modes based on which structs you pass are empty or null pointers. It also just fails often for no reason I can understand, and the distinction between a Windows application and a console application is quasi-magical and counterproductive.

POSIX... is a blessing by comparison. Every time I need to touch it, it takes me something like a single hour and I have a function that works reliably and just does what I need. Until we get to X11 and manipulating those windows. Then I just want to punch everything, but at least X11 works once the code is written, even though it's inside out and backwards.

Now I just strongly prefer to use a good library to get at OS facilities. Things like SDL, Boost, or Intel Threading Building Blocks, etc. They are fast and generally tight enough that I can open them and understand down to the hardware when I want.


There is a point to be made here though that is more valid than the argument against moving away from command-line DOS applications to Windows API applications.

Moving from text-based to graphical is a much different shift than what we are talking about here. The idea here is that we are creating many abstractions to get the same basic result. I can't rewrite Atom.io for DOS, but I could rewrite Atom.io to use the Windows API directly rather than being Electron-based.

We're talking about VS Code using 13% of the CPU while idle to render a blinking cursor. What benefit does the user get from this? We could render a blinking cursor with less than 13% of a much less powerful CPU years ago.


Or, as a better example - rewriting from Electron to Qt rather than Win32. Obviously Electron was chosen to be cross platform. But I still don't get the web renderer obsession. We already have high perf cross platform solutions. Use them.


On a related note, it seems crazy to me that for actively maintained, cross-platform, native widget GUI libraries, your options are… Qt.

(Not that Qt is a bad library, but it's bizarre that such an important area is so neglected by our industry).


An interesting observation. But isn't the explanation rather obvious? Writing desktop applications to sell for money, that has quietly disappeared. It still exists, somewhere, but so does horseshoe-fitting (the few experts are probably making decent money, but it's a zombie business nonetheless). Between a few established offerings that won't be looking for a new framework any time soon, desktop-packaged web and all the gratis stuff, where would new GUI toolkits find their footing? Desktop means potentially offline and offline means almost all monetization schemes don't apply. (including those that don't work except for instilling hope in investors)


Yes, I know why it's the case nowadays: at my work, for example, I occasionally touch all the major areas of modern software development — server backends, mobile, Javascript-in-the-browser and meta-software for writing software — but I've never done what the average person, and maybe even I, would think of when they hear the term "software development."

And it does seem very very strange. I'm imagining this Socratic dialogue:

---

Socrates: What is the most popular programming language?

Developer: Java.

Socrates: And why is Java so popular?

Developer: It's the first language most people learn nowadays, so everyone knows it.

Socrates: So it's popular because it's popular?

Developer: Well, it's very similar to C++, which was the most popular language when it first came out, so it was easy for people who already knew C++ to learn.

Socrates: So why not keep using C++?

Developer: C++ had a lot of problems that made it hard to use, and Java solved a lot of those.

Socrates: Such as?

Developer: If you wrote an application in C++ that ran on any given operating system, like Windows, you wouldn't be able to run the same code on another operating system like Linux or Mac; you would have to rewrite much of your work from scratch to support more than one operating system. Java is "write once, run anywhere." Your Java program will run exactly the same way, without having to rewrite anything, on any operating system that has a Java implementation.

Socrates: Interesting! Can you show me?

Developer: (Writes "hello world," demonstrates it runs the same in both cmd.exe and Bash)

Socrates: And that works with real programs too?

Developer: What do you mean? This isn't a very long or useful program, but it is a real program.

Socrates: What's special about what you just showed me? Can't you write these command line things in any language? Look: I learned a bit of Python a few years ago. (enters python -c "print 'Hello, world!'" in both terminals)

Developer: OK, maybe that wasn't the best example, but when you're doing more complicated stuff it gets harder and harder to write code for more than one operating system in a language like C++, so a language that runs the same everywhere, like Java, or Python, is better to have.

Socrates: So like an application that you would install?

Developer: Exactly!

Socrates: OK! So could you show me a real application then? Like, it says "Hello, world!" in a window, with a menu and buttons and stuff? And it will be the same on both Windows and Linux?

Developer: No, that's not possible.

Socrates: Hm?

Developer: You can't do that in Java.

Socrates: So, Java is the most popular programming language, but you can't use the programs you write in it?

Developer: Alright, you got me there. Usually Java is used for writing applications that run on servers, not PCs.

Socrates: Servers?

Developer: Yeah, like if you go to a website, there might be an application on its server to fetch your account information from a database.

Socrates: But why do you need a special application for that? Can't you just ask for the information you need directly?

Developer: Good question. We're moving in that direction. But for some things you really do need some kind of custom logic, and for those things Java is a good solution across different platforms.

Socrates: What different platforms are there in servers?

Developer: There aren't really that many platforms. Almost all servers run Linux. But there's also a few that run Windows, or FreeBSD, which is very similar to Linux.

Socrates: So if you were on a less popular operating system, like Windows, you would use Java to be compatible with Linux?

Developer: Probably not; really, the only reason you would use Windows on a server instead of Linux is if your application were written in C#, which is Microsoft's Java competitor.

Socrates: So Java is the most popular-because-it's-popular programming language you can't write programs in, and it's useful because it runs on every platform, on only one platform?


Besides the fact that you can have a reasonable UI in Swing (look at IDEA), that Java was designed for Solaris initially, and that Java is still way faster than any JS (sockets, compare-and-set primitives, direct access to native memory, shared-memory IPC and the like, which makes it a prime candidate for server applications), it makes for a good example of what an uneducated developer might say.


>Socrates: OK! So could you show me a real application then? Like, it says "Hello, world!" in a window, with a menu and buttons and stuff? And it will be the same on both Windows and Linux?

>Developer: No, that's not possible.

swing? awt? javafx?


I exaggerate somewhat, but all three are low-quality and further development appears to be abandoned.


They are good enough in the hands of those who care to learn how to use their APIs, and way better than the web will ever be.


It's a huge undertaking with a questionable business model, especially as most things have moved to the web and/or mobile. But it is amusing that there are way more free, cross-platform, high-quality game engines than GUI toolkits.


>You both do realize that similar arguments could have been made back in the day when moving from, say, command-line DOS applications to Windows API applications - right?

Ugh, reddit's standard, snarky response: "you do realize... right?"

There's simply no way to get around the fact that by and large, many in-browser apps are poorly performant by any modern standard.

To me, it seems like there's a faction within the web development community whose goal is to take things that worked perfectly fine as native apps and re-implement them with poorer performance in the browser, with seemingly no benefit.

I wish these people all the success in the world, if that's what they enjoy doing. But for the most part I simply cannot tolerate the work they produce.


The benefit is to their paymasters, as they get to extract rent (either directly or via ads) without having to contend with software pirates because the actual software logic is sitting pretty in a server cluster somewhere.

Effectively we are back to the world of time-share terminals.


Ironically, we may yet see a renaissance in native code as more investors realize that the US broadband situation (especially wireless) is not getting better anytime soon. This leads to the realization that always-on, high-speed, low-latency broadband is not a safe assumption. Disconnected operation will become more important as the product space for always-on solutions becomes ever-more crowded.

This realization that always-connected apps might be joined by disconnected apps is emphasized when looking to markets across the world. There are more disconnected people in the world than always-on, high-speed, low-latency connected. There is a burgeoning middle class characterized by sporadic, low-speed, high-latency connections, who are willing to spend some money online, for those times they make it online. They might spend pennies each today, but whoever captures those markets is addressing billions of underserved online customers. I'd take a billion $0.01 payments a day any day.


And they get to use open source libraries without having to share any single piece of their changes.


Except we're taking divs, markup elements originally intended for research documentation, and using them to emulate modern UI techniques. I don't mind abstractions, but we should at least be using something built to task.


The most egregious example of abuse of technology for me is the browser version of Wolf3d, where they used divs to essentially replicate the vertical column drawing of a raycasting engine: http://3d.wolfenstein.com/


That game only works in the US, it seems.

But if you set the cookies

    document.cookie = "age_checker=pass; expires=Thu, 18 Dec 2037 12:00:00 UTC; path=/";
    document.cookie = "is_legal=yes; expires=Thu, 18 Dec 2037 12:00:00 UTC; path=/";
It does work elsewhere, too. (Otherwise it just redirects to your local wolfenstein info site)


Abstraction is about coming up with general designs from many specific implementations. You can layer abstractions but layering is not fundamental to abstractions.

Just because a design introduces extra indirection into a system does not mean that the design is providing any abstraction.

Likewise just because a design changes from one implementation to another does not mean anything has been abstracted.

Your example of moving from DOS to Windows is really good for illustrating this. DOS had almost no hardware abstraction support; you had to include drivers for a lot of hardware in your application. Moving from DOS to Windows 3.1 only provided abstractions over the video drivers; when it came to network cards, for example, you still had to use DOS drivers.

Moving a text editor from DOS to Windows does not provide you with any abstractions for things like the cursor. It is only changing the API for the sake of having your text editor run on a different platform.

Guess what? This is the same with trying to get your cursor and text rendering working on a web browser. There is absolutely no abstraction here, you are just changing your code to work with the very awkward DOM API.


It depends on what efficiency buys you, and what you trade efficiency for.

When Windows was developed, we had enough CPU power to do the basic tasks people wanted done with a computer at a reasonable speed - the idea was that spare capacity was being traded for user friendliness.

For some applications, even today, when maximum efficiency is necessary, purpose built machines ('soldering together the flip-flops that make up your logic and memory') are still used - it's just that it's often an ASIC that is fabricated.


> When Windows was developed, we had enough CPU power to do the basic tasks people wanted done with a computer at a reasonable speed - the idea was that spare capacity was being traded for user friendliness.

And in many cases that led to an instant drop in productivity. Those old green-screen systems were not pretty but they were quite efficient to use.


I've actually witnessed a very large oil & gas company switch over from those old dos programs to a new win32 program, and to a man, every person who had any experience on the old system bitched about how much slower it was to get anything done.

Having worked on the system myself, they were right (this was as a noobie to both systems).

There is something to be said for a small, tight system.


Some time ago HN linked to a blog from a Norwegian tasked with mailing 3.5" floppies to doctors.

This because the doctors insisted on using a DOS based patient journal, as they could operate it completely by keyboard while maintaining a conversation with the patient.

I speculate that one reason for this is that DOS allowed each program to have full keyboard access, while Windows and other GUI has to reserve certain keys for managing the UI (switching between windows etc).

Thus what was earlier a series of single key presses now involves holding down a modifier for the duration. And that is if the developer even remembered to put in a hotkey for said action.


You don't need the Windows reserved keys for anything with an application. You could make a fully keyboard operable UI for an application on Windows.

It's just that software makers don't do it; most everyone thinks everyone loves to do things with mouse et al.


Do you happen to have the link to this blog handy? I'd be interested in reading it.


You got me digging, so here is the HN discussion about it.

https://news.ycombinator.com/item?id=10287889


Yea, but that came with other benefits, like mandatory accessibility and unified look (at least on the mac). Where are they on the web? Does anyone proactively consider ARIA? Consistency of user interface is just a laughable dream.

But hey, thank god we finally have a framework we can inject ads in at will.


Google Docs takes a good 2-3 seconds to spin up a UI on my i7.

I doubt much of that time is spent building the DOM tree for the UI. Google Docs does a lot more than a simple text editing widget - there's a networked file system, a multi-user collaboration engine, a realtime notification system, etc that all get initialised. Instantiating all that over a network in 2-3 seconds is really fast.


> there's a networked file system
> a multi-user collaboration engine
> a realtime notification system

None of these apply to a new, unshared document. Those subsystems can be loaded slowly over the following 10 seconds, that's fine. Is it such a hard thing to ask to make the UI responsive within 0.1 s? Like, be able to type stuff and have it appear on the screen without delay?

It's so bad that I often use the basic HTML version of Gmail because it loads in less than 1 second. The normal "AJAX" version loads in 3-4 seconds. And I have a 200 megabit connection. Who wins at getting me my information faster?


Over the past year or two Gmail has gotten really slow. I'm not sure what happened, because there are no new features I can think of that would cause this.

It's gotten so bad that I'm considering moving to another provider or using a good old email app again.


Outlook.com as well. When it launched, it was blazingly fast. Now it comes with a loading screen and is really laggy.

Bloated single-page apps are the curse of the modern web.


Use the basic HTML version. It's worth the loss of a couple features from regular gmail or inbox for a much snappier UI (even with it fetching full new pages from the server all the time!)


I like my vim-style keyboard shortcuts though :-/.


I've noticed that too! Do you use Inbox? I do and I was thinking about going back to the classic interface to see if it's faster.


I tried Inbox. I thought for a while that Gmail stars == Inbox pins. When I found out that I was missing out on emails I had previously starred, I quit and went back to Gmail.

Also, Inbox did not show me full e-mails on my Android Wear smartwatch while Gmail did. Also, I couldn't figure out how to set up the filters I needed in Inbox.


Oh, I tried Inbox but Gmail was nice and fast compared to that pos.


I would recommend Fastmail. Moved over for similar reasons and never looked back


I imagine it would be hard for Google to run integration tests if the whole app took 15 seconds to load instead of 3. Not to mention that if there's an error loading part of it, people will complain that they wrote a document but it wasn't saved/shared like they expected.

Google Docs is truly amazing technology for the browser. It's just bringing its model to "native" apps that I have very mixed feelings about.


>Instantiating all that over a network in 2-3 seconds is really fast.

I see. In your opinion, do you think Chrome would be even faster if it managed to spin up an entire sandboxed docker container (in case any web site wanted one, to do whatever it wants with), as well as another one with a full Postgresql install (again in case any web site wanted one, to do whatever it wanted with, oh, and also for Chrome's own bookmarks and stuff), yet still launched within a blazing 14 seconds?

Would that be even faster, considering?

Because from where I'm standing, that would be 5-7 times slower.

And 3 seconds to start is 2.95 seconds too slow for me to start typing into a URL box, which could appear within 50 ms if it was done right instead of done wrong.

Chrome does things wrong instead of doing things right, and it really is that simple. I'm a human, not a network share. I use software so I can interact with it. It should do the rest on its own time and as needed.

Loading a bunch of stuff "quickly" does not equal being quick. Google should know better.


...a whole 2-3 seconds!

Sometimes I think that comments like these arise out of not having experienced text-only rendering at 300 baud...

I know that's a generalization, and most likely unfair - but damn, today's phones, to this old man, are pocket super-computers (for that matter from my vantage point, an Arduino is a wonder, and a RasPi is utterly amazing)!


> Sometimes I think that comments like these arise out of not having experienced text-only rendering at 300 baud...

At the same time, we are now living in the future. With "pocket super-computers" that are more powerful than the Crays that existed when you were waiting on 300 baud text rendering, yet we are using 13% of that power just to render a blinking cursor? Does that not seem like we haven't made as much progress as we should have?


That's totally unfair. You should compare it to the startup time of Word from, I don't know, 10-20 years ago. Around the same time (if not less), and Google Docs still can't match the features of the old Word.

The point being, for the past decade or two, we've been burning all hardware performance improvements on things that are neither visible to user, nor enable them to do more with their computer. Surely, there must be ways of spending those CPU cycles and RAM bytes on things that are actually useful and enable people to do more / better / faster work.


On a per-application basis, you are probably correct. But if the additional abstraction layers allow there to be a greater diversity of applications and more tools for more niche cases because development is easier and/or faster, then that is directly, immediately beneficial to the end user. Not to mention faster design iteration, implementation of new features, etc.


> On a per-application basis, you are probably correct. But if the additional abstraction layers allow ...

Windows 7 boots in approx. 5 seconds to the desktop (if no password is set), and stuff like Word or Visio starts instantaneously. Web applications on the very same computer are a whole different story.

Let's just acknowledge that SaaS is not and never was about any kind of benefits for the user or customer, but just about either centralising resources and services back to the vendor's control and/or increasing money extraction.


I have never seen Word open instantly. When I do it there is always a second or two for the obnoxious splash screen, then often 10 to 15 seconds waiting for it to do whatever. Notepad++, yeah, that opens nearly instantly.

I mostly use Ubuntu and there Libreoffice still has a stupid splash screen. After that though the amount of time is too small for me to count.


> But if the additional abstraction layers allow there to be greater diversity of applications and more tools for more niche cases because development is easier and/or faster

Please try to find a single example where this is true. Niche cases and greater diversity all come from a lot of work keeping runtimes up to date and APIs backward-compatible. This is only because of Free Software. Trying to credit this to "abstraction layers" is insulting to all of the programmers working on Free Software libraries, compilers, runtimes and operating systems.


What I'm suggesting is that, e.g., there are more Slack, Atom, VS Code, (and those are just Electron apps) etc. and/or those apps have more features because of the speed of development and iteration afforded by these inefficient abstraction layers.

So, I can't give a specific example but am instead pointing at the diverse ecosystem of applications and rich functionality. It's logically impossible for me to prove that these apps wouldn't exist without inefficient abstraction layers. It's my supposition, and the developers who write electron apps would probably agree.


> It's logically impossible for me to prove that these apps wouldn't exist without inefficient abstraction layers.

The literally millions of applications not written on top of the specific crapware you list are an existence proof. This was the case even with assembly/Pascal/BASIC applications on microcomputers in the 1980s. Your whole argument is that somehow the web stack is easier to write applications on top of because <insert nonsense adjectives like "diverse" and "rich">. To go back to the Pascal example, a lot of people who program in Delphi strongly disagree even today, and Delphi has been around since 1995. What makes you think that the web stack has a higher speed of development than other software tools? Why do you think that high-speed development depends on inefficient abstractions? That is all total nonsense. There are a lot of problems with the web stack. You need to stop making up bullshit rationalizations and learn about other approaches to software development.


Very good questions that I can't answer. And it's not my 'bullshit rationalization' -- I'm not the one who decided to build all these products on this "crapware" stack. I'm just suggesting that this stack was chosen for some (hopefully) logical reason.

If speed of development isn't the reason, then what is the attraction? I'm serious and curious. I looked through your profile and you clearly know your shit. Are we just in a period of a bad stack being popular and used despite there being other, better options?


Why should he compare it to Word? Was it a cloud-based product that synced all files between multiple machines of radically different form-factors from PCs to phones? No?

Google Docs uses the Internet, so did anything on an old modem. Word didn't (especially 20 years ago).


So word processors have to be slow to use the network?

I'm not sure your argument makes any sense.


I can't remember the stand-up routine, but the punchline applies:

"10secs! I was supposed to be at work 15 seconds ago!"



Yeah. I remember when Gmail introduced the floating compose window, with some limited window management, as a big feature. But desktop GUIs have been doing this since the 80's...


There is plenty of available spectrum between native widgets and writing entire applications inside the web browser. Electron is a fashion statement and a convenient short cut to portability, that comes bundled with a mountain of complexity and technical debt. Moving the DOM to a native "server" removes most performance issues and allows applying the full power of a real language. https://github.com/codr4life/libc4dom/blob/master/tests/main...


> instead of a native widget that writes directly to the screen.

"Writing directly to the screen" (by which I assume you mean writing pixels one by one to the framebuffer) is a bad idea for modern graphics hardware. It was fine on the 486, but nowadays you need the ability to do global optimizations for good 2D (or 3D) graphics performance. Ironically, the Web stack is much better positioned to do this than, say, Win32, because of the declarative nature of CSS.

Besides, as some downthread have pointed out, you didn't "write directly to the screen" in Win32. You went through GDI.


It seems reasonable this might be true, but it's not. In video games we went down the road of retained-mode graphics APIs (declarative-type things, so that they can do the kinds of 'global optimization' you mention) but we abandoned them because they are terrible. Video games all render using immediate-mode APIs and this has been true for a very long time now and nobody is interested in going back to the awful retained-mode experiment.


You build custom retained-mode APIs on top of the immediate mode APIs—they're called game engines.

What happens if you try to present an immediate mode API for UIs is the status quo with APIs like Skia-GL. You frequently end up switching shaders and issuing a new draw call every time you draw a rectangle, and you draw strictly in back to front order so you completely lose your Z-buffer.

Imagine if games worked like that: drawing in back to front order and switching shaders every time you drew a triangle. Your performance would be terrible. But that's the API that these '90s style UI libraries force you into. Nobody thought that state changes would be expensive or that Z-buffers could exist when Win32, GTK, etc. were designed. They strictly drew using the painter's algorithm, and they used highly specialized routines for every little widget piece because minimizing memory bandwidth was way more important than avoiding state changes. But the hardware landscape is different now. That requires a different approach instead of blindly copying what the "native" APIs did in 1995.
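
To illustrate the batching point in the terms above, here is a toy sketch (all names invented, no real GPU calls): rectangles that share a shader accumulate into one buffer and go out as a single submission, and only a state change (or a full buffer) forces a flush.

```c
#include <stddef.h>

/* Toy draw-call batcher (names invented for this sketch). */

typedef struct { float x, y, w, h; } Rect;

typedef struct {
    Rect   pending[1024]; /* rects waiting in the current batch */
    size_t count;
    int    shader;        /* shader bound to the pending batch */
    int    draw_calls;    /* how many GPU submissions we would have made */
} Batcher;

static void flush(Batcher *b)
{
    if (b->count > 0) {
        /* a real renderer would upload b->pending here and issue ONE
           draw call (e.g. glDrawArrays) for the whole batch */
        b->draw_calls++;
        b->count = 0;
    }
}

static void draw_rect(Batcher *b, int shader, Rect r)
{
    if (shader != b->shader || b->count == 1024) {
        flush(b);   /* a state change or full buffer ends the batch */
        b->shader = shader;
    }
    b->pending[b->count++] = r;
}
```

Drawing a thousand same-shader rectangles this way costs one submission; interleaving shaders in strict back-to-front order, as painter's-algorithm UI libraries force, degenerates it to a flush per rectangle.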


Ehh, game engines are not really retained-mode in the way you mean. There isn't usually a cordoned-off piece of state that represents visuals only. Rather, much of that state is produced each frame from the mixture of state that serves all purposes (collision detection, game event logic, etc).

"What happens if you try to present an immediate mode API for UIs is the status quo with APIs like Skia-GL."

I don't know what Skia-GL is, but in games, the more experienced people tend to use immediate-mode for UIs. (This trend has a name, "IMGUI". I say 'more-experienced people' because less-experienced people will do it just by copying some API that already exists, and these tend to be retained-mode because that is how UIs are usually done). UIs are tremendously less painful when done as IMGUI, and they are also faster; at least, this is my experience. [There is another case when people use retained-mode stuff, and that's when they are using some system where content people build a UI in Flash or something and they want to repro that in the game engine; thus the UI is fundamentally retained-mode in nature. I am not a super-big fan of this approach but it does happen.]

"and you draw strictly in back to front order so you completely lose your Z-buffer"

That sounds more like a limitation of the way the library is programmed than anything to do with retained or immediate mode. There may also be some confusion about causation here. (Keep in mind that Z buffers aren't useful in the regular way if translucency is happening, so if a UI system wants to support translucency in the general case, that alone is a reason why it might go painter's algorithm, regardless of whether it's retained or immediate).

"But that's the API that these '90s style UI libraries force you into."

90s-style UI libraries are stuff like Motif and Xlib and MFC ... all retained mode!

I don't agree that an IMGUI style forces you into any more shader switches than you already would have. It just requires you to be motivated to avoid shader switches. You could say that it mildly or moderately encourages you to have more shader switches, and I would not necessarily disagree. That said, UI rendering is usually such a light workload compared to general game rendering that we don't worry too much about its efficiency -- which is another reason why game people are so flabbergasted by the modern slowness of 2D applications, they are doing almost no work in principle.

Back to the retained versus IMGUI point ... If anything, there is great potential for the retained mode version to be slower, since it will usually be navigating a tree of cache-unfriendly heap-allocated nodes many times in order to draw stuff, whereas the IMGUI version is generating data as needed so it is much easier to avoid such CPU-bottlenecking operations.


I will also say that this is not an academic argument for me; I am in the middle of writing yet another immediate-mode GUI right now, for the game editor I am working on. Every day I am freshly glad that I am doing things as IMGUI instead of RMGUI.

Here is a (somewhat old) video explaining some of the motivations behind structuring things as IMGUI: https://www.youtube.com/watch?v=Z1qyvQsjK5Y


This argument looks like you and pcwalton are arguing about different definitions of "immediate mode API". I think both of you agree with each other on object-level propositions.

pcwalton seems to be presuming that part of the contract of an "immediate mode API" is that, like old-school ones, it actually draws to the frame buffer by the end of the call.

Whereas you are talking about modern "immediate mode API"s where the calls just add things to an internal data structure that is all drawn at once, avoiding unnecessary shader switches etc. IIRC this is how Conrod (Rust's imgui library) and https://github.com/ocornut/imgui work, although with varying levels of caching.

One point to make about retained mode GUIs is I remember reading an argument that immediate mode is great for visually simple UIs, such as those in video games, but isn't as good for larger scale graphical applications and custom widgets. For example when rendering a large text box, list or table you don't want to have to recalculate the layout every frame so you need some data structure that sticks around between frames specific to the widget type, so that's what retained mode APIs like Qt do for their widgets.

Sure you can do the calculations yourself for exactly which rows of a table are currently in view and render those and the scrollbar with an immediate mode API, but the promise of toolkits like Qt is that you don't have to write calculations and data structures for every table.
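The visible-rows calculation described above fits in a few lines: prefix-sum the row heights once, then binary-search for the first and last rows intersecting the viewport. A hedged, illustrative sketch — the function names are made up, and a real virtualized table would also handle estimated heights and scrollbar sizing:

```javascript
// offsets[i] = y position of row i's top edge; one extra entry at the end
// holds the total height, so offsets[i + 1] is row i's bottom edge.
function buildOffsets(heights) {
  const offsets = [0];
  for (const h of heights) offsets.push(offsets[offsets.length - 1] + h);
  return offsets;
}

// Return [first, last) — the rows intersecting the viewport. Binary search
// keeps this O(log n) per frame, even for millions of rows.
function visibleRange(offsets, scrollTop, viewportHeight) {
  const rowCount = offsets.length - 1;
  // Smallest row index whose bottom edge lies below y.
  const firstBelow = (y) => {
    let lo = 0, hi = rowCount;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (offsets[mid + 1] <= y) lo = mid + 1; else hi = mid;
    }
    return lo;
  };
  const first = firstBelow(scrollTop);
  const last = Math.min(firstBelow(scrollTop + viewportHeight - 1) + 1, rowCount);
  return [first, last];
}

// 100 rows of height 10, viewport of 50 scrolled to y = 25:
const offsets = buildOffsets(Array(100).fill(10));
const range = visibleRange(offsets, 25, 50); // rows 2..7 are in view
```

This is the part a toolkit like Qt does for you; the point of contention is only who owns the bookkeeping.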


"so you need some data structure that sticks around between frames specific to the widget type, so that's what retained mode APIs like Qt do for their widgets."

Immediate mode GUI systems are allowed to keep state around between frames and the most-featureful ones do. The "immediate mode" is just about the API between the library and the user, not about what the library is allowed to do behind the scenes. The argument that retained-mode systems are inherently better at this doesn't hold water; it is kind of an orthogonal issue.


I'm definitely aware of this, it's why I mentioned "varying levels of caching". The Conrod imgui that I mentioned basically uses retained mode GUI data structures behind an immediate mode API through diffing for performance reasons.

This works just as well/quickly as a retained mode API in almost all cases. There are some cases, like extremely long tables with varying row heights and sortable columns, where you need an efficient diff of the table contents, since recalculating layout and sorting every frame is inefficient. Retained mode APIs do this with methods to add and delete rows. It's possible to do with an immediate mode API, but to detect differences in the rows passed in quickly you need a functional persistent map data structure with log(n) symmetric diff. Or you can just have an API that is mostly immediate mode but has some kind of "TableLayout" struct that persists between frames and is modified by add and remove functions.

I'm curious what API you would use for implementing a table with varying row heights (that you only know upon rendering but can guess beforehand), sortable columns and millions of rows. I implemented this in an immediate mode GUI API a few months ago, and I did it with persistent maps and incremental computation in OCaml. Incrementally maintaining a splay tree and a sorted order by symmetric diff of the input maps. This isn't as nice of an API in languages like C++ so I'm wondering if there's a better way.


"I'm curious what API you would use for implementing a table with varying row heights (that you only know upon rendering but can guess beforehand), sortable columns and millions of rows."

In general my policy is that when things get really complicated or specialized, the application knows a lot more about its use case than some trying-to-be-general API does, so it makes sense for the application to do most of the work of dealing with the row heights or whatever. (It's hard for me to answer more concretely since it depends on exactly what is being implemented, which I don't know.)


Motif and Xlib use expose events to handle drawing. Doesn't imply retained mode drawing; you could use either in the handler.


It is a little confusing because we are talking about both rendering and GUIs, but ... "retained mode" in this case refers to the GUI itself, not the method of drawing. Motif and Xlib are "retained mode" in the GUI sense because if you want there to be a button, you instantiate that button and register it with the library, and then if you want it to become visible or invisible or change color you call procedures that poke values on that instantiated button. In IMGUI you don't preinstantiate; you just say "draw a button now" and if you don't want it to be visible, you just don't draw it, etc.
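The distinction reads clearly as code. A toy sketch of both API shapes — every name here is invented for illustration, and real libraries (MFC, dear imgui) differ in detail:

```javascript
// Retained mode: widgets are long-lived objects. You build the tree once,
// keep handles, and mutate them when you want the UI to change.
class RetainedUI {
  constructor() { this.widgets = []; }
  addButton(label) {
    const b = { label, visible: true };
    this.widgets.push(b);
    return b; // the caller keeps this handle and pokes it later
  }
  render() {
    return this.widgets.filter(w => w.visible).map(w => `button:${w.label}`);
  }
}

// Immediate mode: you re-declare the UI every frame. There are no handles
// or registration; not emitting a button is how you hide it.
function imguiFrame(drawList, state) {
  drawList.length = 0;
  if (state.showSave) drawList.push('button:Save');
  drawList.push('button:Quit');
  return drawList;
}

// Retained: hide Save by mutating the stored object.
const ui = new RetainedUI();
const save = ui.addButton('Save');
ui.addButton('Quit');
save.visible = false;

// Immediate: hide Save by simply not drawing it this frame.
const drawList = imguiFrame([], { showSave: false });
```

Note that neither sketch says anything about how pixels get to the screen; both could batch into one draw call behind the scenes.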


This is a fair point. All the mapped XWindows are certainly "retained" from this point of view.


But can you imagine something like Word being written without the "retained-mode" abstraction?


Yes, absolutely, and in fact I think it would be a much better program.


Separating layout from styling and behaviour is something that many GUI toolkit developers have decided is beneficial.

Most "modern" mainstream native toolkits - e.g. GTK+ 3, Qt 5, WPF - encourage this separation into layout - GtkBuilder, QML, XAML - and style - CSS, Qt Style Sheets, XAML Styles.

So, this isn't a "web browser" problem. Or, this style of GUI isn't the problem with Electron. I find GTK and Qt apps to be plenty responsive enough, even when their GUIs are loaded from XML files.


Also compare Google Docs with Microsoft Word and LibreOffice cold start-up times. You will see that 2-3 seconds is fast, and the two mentioned are not even loaded from a remote resource...


Excel loads in ~1 second...and doesn't have UI lag after it does.

Try working with large data sets in Google Docs and you'll have that 2-3 second lag time with _every_ operation you perform.


On the flip side, the new Windows "Metro" style calculator app takes several seconds to load ... and is less usable than the old calc.exe.


I've found this so often. I have a dual Xeon and 16GB of RAM, and a calculator, of all things, taking more than a second is unacceptable.

I got a popup inside the calculator asking for feedback about it once with an inbuilt form and submission. I can only assume it has toooooons of hidden away cruft that does everything but assist in calculating things.


I can see it now.

Manager: All apps have to use this feedback framework now, no exceptions. Getting feedback is super important so we can be more Agile!

Dev: ok... uh, but this is 40x the size of all of calc.exe. Plus it's just a calculator and we've refined it for years so it's pretty good already. Isn't that kinda nuts?

Manager: metrics! Feedback! Agile! Just do it!


IMHO calc.exe has been getting worse since XP:

https://news.ycombinator.com/item?id=10791667

(Note that the "Calculator Plus" mentioned there has --- not surprisingly(!?) --- disappeared from Microsoft's download center, but you can still find the official, signed installer by searching for "CalcPlus.msi".)


It takes maybe half a second on my 4 year old Core I5.


Sometimes it does. I've seen this happen on several machines, some containing recent i7s. And if it does "pop up" quick, it has a loading screen. Let that sink in. It actually has a full-colour screen while it figures out how to render a few numbers and buttons. FFS.


Which is still kinda impressive given that calc.exe starts in milliseconds.


Or try working with it over an intermittent connection, everything goes haywire.


If you don't have plug-ins, Word 2010 (which, even without plugins is far more functional than Google Docs) loads in under half a second on a modern machine.


I really can't reproduce that. I used MS Office since Office '97, and even Office 2000 on a Windows 98 machine loaded in about three seconds. Nowadays, with Office 2010 or newer you won't even see the splash screen anymore. Start menu -> click -> poof, application is there. Google Docs is nowhere near that.


Very simple, there's a couple of orders of magnitude greater number of developers and designers who have skill with web technologies than native UI. Similarly there's a couple of orders of magnitude more options of UI frameworks and design patterns. Add to that the fact that Blink, WebKit, V8 and Chakra have been constantly pushing the bounds on speed, bringing a web-technology-based front end within touching distance of native in terms of speed.

Given all these factors, any product using web technologies for UI can move much faster than products which don't. Like how Sublime and every other programming text editor basically got eaten up by VSCode and Atom in about a year and a half.


It's the typical web story. You can get to "good enough" with blazing speed, but the limitations of the web make it hard to achieve a high level of polish.


Very simple, there's a couple of orders of magnitude greater number of developers and designers who have skill with web technologies than native UI.

There is a very succinct rebuttal to that, and a good explanation for why apps based on "web technology" are they way they are: "quality is not quantity."


> Very simple, there's a couple of orders of magnitude greater number of developers and designers who have skill with web technologies than native UI.

[citation needed] - web application development is incredibly complex and can't be compared to just "web development" (e.g. writing a HTML document or template and styling it).


I still use Sublime. I've also used Atom and I didn't see any features it had that Sublime didn't. They're both very basic text editors, with the difference that Sublime is about 100x faster and less bloated.


I don't get this criticism. HTML/CSS is the closest we have to a universally understood syntax for designing interfaces.

Why invent a new standard? HTML is fine, CSS is fine. Most importantly, everyone understands it and can work immediately with it.

Any issues with performance are due to the implementation of the platform that renders this HTML/CSS interface. It's much more likely Google Docs feels sluggish due to the JavaScript it executes in order to control the rendering of its HTML/CSS, rather than the rendering itself.


HTML and CSS are a bad fit for rendering heavy graphical interfaces because they fundamentally follow a document flow rather than grid-based layouts. Flexbox and css-grid are helping some in this area, but they are not used very often.

(not to say that HTML and CSS aren't useful, but they are far from the ideal means of rendering a UI).

HTML does work fine when you use it for mostly document focused work, and I enjoy the interactivity and connectivity that web browsers have brought to the web, but I'd love to see an improvement on it all.


Flexbox is that improvement.

If you discount flexbox due to "not [being] used very often", then you can't logically argue for burning down the HTML and CSS stack and replacing it with something else that has zero market share.


My point wasn't so much that Flexbox isn't that improvement, but that I haven't yet had a chance to fully learn it, and I suspect that a lot of front-end developers are in a similar space. I plan on rectifying that soon, but it is yet another thing to add atop the large number of other things that comprise understanding modern web browsers.

I'm not saying that we should burn the HTML and CSS stack. It has served the web very well and will continue to do so, but until the last few years, it's been a document focused stack that's been twisted into doing app development, and in some sense, it still is.

HTML and CSS are not the best tool for heavy GUI development, a la Photoshop, Visual Studio (not VS Code) or other large GUI intensive things. Most web apps have yet to replicate the combination of features and/or performance of those types native applications. Flexbox is an improvement there that helps with layout, but that doesn't change the fact that we are working with the DOM under the hood, with all of its various quirks and performance issues. (Not to say other GUI frameworks/APIs are perfect, or necessarily better. Some of them just allow you to optimize a little closer to the metal).

One can point out that large GUI tools like Photoshop aren't being created as much these days, outside of AAA game dev, CAD, or the like, or that many large GUI's use web views to help display documents, a la Steam.

I sincerely hope that Servo's "Web Browsers are essentially AAA game engines" approach catches on.

I'd be interested to see how an event-based html5 canvas GUI library would compare to DOM.


Q: How do you make a video maintain an aspect ratio and fill up the width of a parent? A: Nested div hell.

Q: How do you make an image maintain an aspect ratio and fill up the width or height of the screen, whichever comes first? A: Nested div hell and JavaScript.

Q: How do you make a scaled background image stay put even when the keyboard input pops up on a phone? A: Supreme JavaScript, CSS, and div hell.

Q: How do you center a paragraph of text vertically in a div? A: Nested div hell.

I'd say FlexBox is pretty inadequate. Why can't we have things like:

    #my-video {width:80%;height:calc(width*2/3);}
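Absent such a rule, the workaround is exactly the JavaScript being lamented: measure the parent and set the height yourself. A minimal sketch of the sizing math, pulled out as a pure function so it can run anywhere — all names are invented:

```javascript
// The math behind the wished-for `height: calc(width * 2 / 3)`:
// take a fraction of the parent's width, then derive the height
// from a w:h aspect ratio.
function aspectSize(parentWidth, widthFraction, wRatio, hRatio) {
  const width = parentWidth * widthFraction;         // e.g. 80% of the parent
  return { width, height: width * hRatio / wRatio }; // preserve w:h ratio
}

// Browser wiring sketch (commented out; needs a DOM):
// window.addEventListener('resize', () => {
//   const { width, height } = aspectSize(container.clientWidth, 0.8, 3, 2);
//   video.style.width = `${width}px`;
//   video.style.height = `${height}px`;
// });
```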


CSS object-fit handles your first two complaints, and CSS Variables handles your last one.


Aren't CSS Variables just constants you can reuse? How are they going to let you set width to 80% and height to 2/3 of whatever that ends up being?


Because percentages in heights are usually relative to the width of the containing block.


Just because it's the best we have (if you're looking for a cross platform solution, anyway) doesn't mean that we've reached the peak and can just quit. We can do a hell of a lot better, and doing anything less is tragically underselling ourselves. There are no laws of physics preventing a better solution from existing. No, creating something better isn't easy, but neither have any of the other technological breakthroughs.

HTML/JS/CSS is just a stepping stone like any other, not an endgame. Don't grow complacent with it. Demand something better.


> it's the best we have (if you're looking for a cross platform solution, anyway)

No it's not. If you want a fast full-featured cross-platform GUI toolkit, there are many: GTK+ and Qt are especially great, and have bindings for several languages.


I can't speak for Qt, but GTK+ is not that great outside of the Linux bubble. IMHO, of course.


Qt was not that fun either, at least as of two years ago: old-style class hierarchies, clunky abstractions and an awful build step (qmake). It got better with the use of lambdas; it is no longer necessary to have a class to connect to a signal, a lambda works as well.

However, QML is an improvement. It was a bit quirky to get it to render the way you wanted, and it did not have a native look and feel, but maybe that has gotten better since I used it. I could imagine that TypeScript + QML could be quite pleasant. A big downside is that the install size of your program is quite big.


They aren't that great in the Linux bubble.

I say that from the developer and the user perspective.


Qt isn't native. They're drawing their own widgets that look close to native - that's why you can theme Qt apps.


Factually incorrect, since Qt uses native widgets when possible. Applying Qt stylesheets usually disables use of the operating system's styling engine, i.e. only then Qt starts drawing the widgets by itself. Widgets that are not part of the native widget assortment are drawn by Qt.


Then what is this blog post about? https://blog.qt.io/blog/2017/02/06/native-look-feel/


Qt Quick Controls, which has nothing to do with QtWidgets.

> Posted in Dev Loop, Qt Quick Controls, Styles


"Quitting" is not how I'd describe Atom and VS Code.

You might not like the envelope they are pushing, but they are cutting-edge explorations into web tech, coinciding with other cutting-edge developments like HTML/JS -> Native interfaces.


Are they really cutting edge? Or are they just an excuse to cram more javascript and web tech into unrelated areas.

Just because js is good on web, doesn't - and shouldn't - mean that we should be doing that.


There are now many more expressive languages which compile to HTML, CSS and JS. We are 'stuck' with these 3 technologies that are still very functional and we should just embrace them as the low level language of UIs.

Of the 3, javascript is the most dispensable, there's nothing stopping us writing platforms which have UIs designed in HTML/CSS but have a DOM controlled by python/c#/ruby.

I was not suggesting we stop progressing, but we need to recognise the phenomenal amount of overhead involved in replacing these 3 technologies. What benefit would be served when they can just be abstracted on top of, at least with regards to the web?

Case in point, Assembly Language, we could theoretically replace the standards that have been reached over decades of collaboration with something more suited to our modern leanings. But what would be the point, when we've long since abstracted it out of our minds?


Why invent a new standard? HTML is fine, CSS is fine. Most importantly, everyone understands it and can work immediately with it.

In order to render HTML, CSS and JavaScript, you need an entire web rendering engine. A new standard would let us get by with a lot less.


> In order to render HTML, CSS and JavaScript, you need an entire web rendering engine.

Getting to piggyback on V8 and Blink work is, I suspect, often a benefit rather than a cost in the eyes of developers of Electron-based editors. Sure, it's bigger resource load, but for use cases where the performance is acceptable, it's a lot less developer load to get the functionality out the door.


Until you have to reimplement even blinking cursors.


No, even after that I'm pretty sure it's a net win in developer time.


>Why invent a new standard?

Because they suck?


Care to argue why? Also, there are countless alternative languages to compile down to HTML or CSS available. Feel free to create your own if none agree with your personal leanings.


CSS has many weird corner cases where you have an issue and, after a lot of debugging, trying things and getting mad, you find on Stack Overflow that you should add an illogical rule like "min-width:0". So CSS is OK until you hit a tricky problem where you need to understand how it works under the hood to fix it. The other issue is that CSS is too big and complicated: good layout modes get added, but we still have the old ones, and you need to understand everything because most developers work on existing code, where you hit all kinds of layouts: floating, absolute, relative. Flexbox layout seems better, but still worse than the layouts I have seen in MXML and WPF. So, ignoring JS (which you could replace), a new GUI for the web inspired by MXML/WPF or QML, with a sane subset of CSS, would improve the situation.


> you add a ilogical rule like "min-width:0"

Yeah, like my favorite of these lately, widows and orphans. Chrome changed a default that affected inline-blocks within columns that caused them to wrap prematurely after version 52 or so. It is especially bad because the other popular browsers don't even support these yet and looked fine.


>Care to argue why?

Where do I start?

1) Designed for document presentation, not for apps.

2) Limited widget selection (native forms and that's it).

3) Different implementations between vendors.

4) Bad at layout (20+ years to get to Flexbox and CSS Grid layout, which kind of resembles UI layouts but is still not supported everywhere).

5) Slow.

6) Battery hungry.

7) Too many unneeded UI layers (DOM over native widgets).

8) Extra language layer (JS on top of V8 on top of native execution).

9) JS is not the best language to write large-scale software (to put it mildly).

10) Restricted access/integration to native platform APIs.


Do you want to listen to the reasons, or just tell them to "go make your own if you don't like it"? You can't have it both ways.

I agree that simply saying X sucks is not a valid argument. However, with CSS/HTML the flaws are too numerous and have already been discussed ad nauseam over the past 15 years. Every time a new version of CSS comes out, people go and try it and find out that it sucks, whether it's the broken box model or the broken float layout techniques or the constant fiddling you are forced to do to get anything working. Every web developer I've seen adopts the 'edit and refresh' trial-and-error model of developing, which is the direct result of a bad spec. Which also explains why there isn't even a reference implementation. As far as UI layout is concerned I am fairly sure I could out-compete a web developer in terms of time taken to implement, using something like IMGUI.


> Which also explains why there isn't even a reference implementation.

As someone who has spent years implementing those standards, a reference implementation would not help me at all.

> As far as UI layout is concerned I am fairly sure I could out-compete a web developer in terms of time taken to implement, using something like IMGUI.

Dear imgui's layout model doesn't scale at all, due to the fact that it's immediate mode. It redoes layout from scratch every frame.


>a reference implementation would not help me at all.

Well, considering that the web has been for decades a minefield of subtle rendering differences based on different interpretations of the same standards, it would surely help others...

Also, I'm not sure what you're saying here. That, warts and all, you love the web as a programming platform?

Well, maybe you do.

But then again, you don't program everyday IN it. You program a rendering engine for it in Rust, so you're safely protected from the horrors of web programming.

>Dear imgui's layout model doesn't scale at all, due to the fact that it's immediate mode. It redoes layout from scratch every frame.

Aren't (or at least weren't) most computer games "immediate mode" too, and far more demanding than any web page?


> Dear imgui's layout model doesn't scale at all, due to the fact that it's immediate mode. It redoes layout from scratch every frame.

Which isn't a problem if your layout algorithm is fast/simple. If it's not, then it's a bigger issue.
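The "redoes layout from scratch every frame" objection is also exactly what the caching mentioned upthread addresses: keep an immediate-mode call signature, but retain the layout result between frames and recompute only when the inputs change. A hedged sketch with invented names and a deliberately naive word-wrap standing in for real layout:

```javascript
// Retained data behind an immediate-mode API: the cache persists between
// frames, keyed by a caller-supplied widget id, and is invalidated when
// the inputs (text, width) change.
const layoutCache = new Map();
let layoutRuns = 0; // counts how often layout is actually recomputed

function layoutText(id, text, maxWidth) {
  const key = `${text}\u0000${maxWidth}`;
  const cached = layoutCache.get(id);
  if (cached && cached.key === key) return cached.lines; // cheap path
  layoutRuns++;
  // Naive word wrap, pretending a fixed-width font of 1 unit per character.
  const lines = [];
  let line = '';
  for (const word of text.split(' ')) {
    const candidate = line ? `${line} ${word}` : word;
    if (candidate.length <= maxWidth) line = candidate;
    else { if (line) lines.push(line); line = word; }
  }
  if (line) lines.push(line);
  layoutCache.set(id, { key, lines });
  return lines;
}

// Two "frames" with identical inputs: layout runs once, not twice.
const lines = layoutText('para', 'one two three four', 9);
layoutText('para', 'one two three four', 9); // cache hit
```

So the per-frame cost is a hash lookup, not a full re-layout; the API stays immediate while the expensive state is retained internally.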


>As someone who has spent years implementing those standards, a reference implementation would not help me at all.

I don't quite understand what you meant your comment to be indicative of. That nobody else would consider it useful? That you personally see no value in reference implementations at all?

>Dear imgui's layout model doesn't scale at all, due to the fact that it's immediate mode. It redoes layout from scratch every frame.

Could you detail the UI you are thinking of where IMGUI is inefficient but HTML/CSS isn't? Not asking for a formal spec, just a general use case.

My problem with CSS/HTML is that productivity scales inversely when using them.


There was XUL and it used native widgets.

XUL is dead now, HTML+CSS won.


  > everyone understands it and can work immediately with it.
Only if by "understands" you mean "everyone is capable of throwing things at the wall and seeing what sticks". I have seen a lot of HTML and CSS and let's just say, only a small fraction looked like it was done by someone with an understanding of "what" and "why". Otherwise it was just tortured to the point of "somehow works unless someone changes something".


this person speaks truth


I've done some raw C Win32 GUI programming and I've done some modern electron stuff as well.

I know which I'd rather offer as my toolkit of choice for a project on which I wanted lots of people to contribute code to!


Please don't use Win32 as an example for all native UI coding, it's garbage. Consider Qt or anything else; even X11 looks good compared to Win32.


It would be better to compare it to transferring a state/screen over the wire/internet on a 486 vs. doing the same nowadays ;)


Well, an X server on a 486 was probably also faster...


Besides the X-server point, it's not just the state transfer that's causing web UI to lag.


Javascript is meh, it's HTML and CSS that are the real culprits for the crazy inefficient GUI rendering. HTML was great for what it was designed for, but we're ten years beyond that.

Something will come in and replace HTML, it's just a matter of time. The main driver is mobile. The many layers of abstraction burn battery, one day phone manufacturers will get tired of it and do something.


Something like XUL, XAML or Enyo? Wait for it.


Something like React Native that targets Windows/macOs/Linux rather than iOS/Android? I think that this would be a better way to deal with using Javascript as the dev environment, while at the same time passing on Electron.


By making a text editor out of web technologies, you can reuse the whole ecosystem the web has, and also enjoy its customizability.

For instance I wanted to be able to display PDFs directly in VSCode. I went to look for a plugin, and there was one. People simply used "pdf.js" to integrate PDF support in VSCode. Because it is web based it should have been straightforward to do. Doing the same with native technology would have taken several weeks of coding, and it wouldn't have been cross-platform.

Imagine all the web-based open-source tools that could potentially be integrated in these editors. Integrating a SVG editor in native text editors would be a nightmare. With web-based text editor you can potentially just incorporate an existing tool like [1].

There are lot of other examples like this: Live markdown preview, mini-map, color-picker, integrated VSCode debug panel, …

It is also quite easy to add visual stuff. For instance adding a vertical bar at 80 characters in the background is quite easy to do with web technologies. On the other hand, Emacs has still not managed to make html-mode work nicely with "fill-column-indicator". It is also probable that it would be much easier to integrate web services (trello, github, …) directly within VSCode.

In the end, people wanting performance already have quite a lot of choice in native text editors (vim, emacs, sublime…), and people who prefer functionality can go with web-based text editors (atom, vscode, …)

[1] https://svg-edit.github.io/svgedit/releases/svg-edit-2.8.1/s...


> Doing the same with native technology would have taken several weeks of coding, and it wouldn't have been cross-platform.

Actually, it would take one hour because you would use Poppler, and it would be cross-platform. It would also be faster than pdf.js.


> Actually, it would take one hour because you would use Poppler, and it would be cross-platform. It would also be faster than pdf.js.

I don't really know poppler. If there are Poppler bindings for the language you are developing in, I believe you that this is possible. This is only one use case though, I'm not sure you will find native libraries in your language for all features that VSCode can offer almost for free, like live markdown preview for instance.


> I don't really know poppler.

Exactly. Web developers live in a bubble, and because they don't know much about native alternatives, they assume they don't exist.

Note: I'm a web developer and I don't know poppler.


I don't think the implication you're making here is valid. In Electron you're forced to use JavaScript or a compile-to-JavaScript language. In the desktop it's pretty much the same deal but with C. And we've been doing C FFI since way before JavaScript was conceived.


> For instance I wanted to be able to display PDFs directly in VSCode. I went to look for a plugin, and there was one.

You could have done that 20 years ago with COM components already. There was a thriving ecosystem where you could get a component for just about everything.

Similar technologies were available on other platforms than Windows, like Bonobo (Gnome), Kparts (KDE), and whatever Mac had.

It's a bit of a shame that component frameworks have gotten such a bad reputation (for complexity and insecurity), I think they are very misunderstood.


You also had to pay $100 per copy of the COM component. Most of the web stuff is free, and you can just right click and view source. In NodeJS and NPM there are hundreds of thousands of components/modules that you can use for free.


>By making a text editor out of web technologies, you can reuse the whole ecosystem the web has, and also enjoy its customizability. For instance I wanted to be able to display PDFs directly in VSCode. I went to look for a plugin, and there was one

You know where else you could do the exact same thing AND have a 10x faster and 10x more memory/battery efficient editor?

If a native editor just gave you a webview that can run JS extensions...


> 10x faster and 10x more memory/battery efficient editor?

Except for the startup time --- which I can live with --- I never felt a difference in speed between native text editors and VSCode. There is one, but I don't notice it. Humans are much slower than computers, so as long as it is not taking more time than I need to notice it, I don't care.

Nowadays my computer has enough memory that I can afford not to care about the 116M that VSCode is currently using, especially compared to what my browser consumes, and in view of the wide range of features it offers.

Battery could be problematic indeed, but before looking at my text editor, I would probably first switch from KDE to i3, and then use a lightweight web browser. This would impact it much more than my text editor.

> If a native editor just gave you a webview that can run JS extensions...

But then you start losing all the faster/memory/battery benefits you mentioned.

Anyway, I'll happily try something like this if you develop it ;-)


> Battery could be problematic indeed, but before looking at my text editor, I would probably first switch from KDE to i3, and then use a lightweight web browser. This would impact it much more than my text editor.

This item is about VS Code consuming 13 % CPU when idle, which is bigger than the difference in idle CPU between KDE and i3 (if you don't have fancy widgets, it's about ~nil).


Indeed, this bug is pretty nasty for the battery. I don't have it though, so my remark was about non-buggy battery usage. Web-based text editors consume more battery, but the difference is not significant from my point of view.


>Except for the startup time --- with which I can live with --- I never felt a difference in speed between native text editors and VSCode. There is one, but I don't notice

Try editing anything larger than a simple program file (from a large JSON to a CSV, as devs often do) and you will. Try hex editing a large binary. And many other tasks.


Earlier this evening I had to open a 144'000-line CSV file, 3.5MB. This is not big, but it is the kind of file I have to open from time to time for my work. No noticeable delay to open the file in VSCode, the cursor moves smoothly, and scrolling with the minimap through the whole 144'000 lines is also smooth. The VIM plugin for VSCode starts to have trouble over 10M it seems, but if I have to edit files bigger than that I can open them with spacemacs anyway. I spend most of my time in small text files like Python, CSV or Makefiles, so this is not a problem. Of course if you have other requirements, like spending a lot of time in big files, then VSCode is not the correct tool.

I would suggest you try VSCode out of curiosity, if you have not done so yet. You could be surprised. Maybe it is not the right tool for you, but it is quite well made, and compared to Atom, Brackets or other web-based text editors, it feels like a supersonic jet.


Ema... Ok ok, I will shut up. :)


> Doing the same with native technology would have taken several weeks of coding, and it wouldn't have been cross-platform.

I wrote a PDF viewer in Java(FX) in about one evening using PDFBox. There are also PDF components for practically any other desktop app framework you care to name, most of them older and more mature than pdf.js.

The fact that so many devs express amazement at things considered utterly routine for decades is one of the reasons the entire web dev community is so often treated as a joke.

HTML has very few redeeming features as an app platform. It's way past time it gets killed by something better.


One of the nicest replies here. Thanks! :)


This is wrong. The world on the other side is so much greener.

https://wiki.qt.io/Handling_PDF#Using_QtPDF


I don't think this would have been any harder had it been written in C(++) for someone comfortable in C(++).

But I do believe there are more people comfortable with web technologies these days.


That depends on all the code running locally...


The web is in just a shameful state. Even with 300 mbps fiber internet most websites are not snappy. As in, pages are so slow to render that I'll start reading, but then lose my place when the page reflows as it continues to load. Or click on the wrong thing because the page reflowed while I was trying to click on something.

As far as I can tell I'm actually CPU-limited, because I noticed no real difference from when I had 50 mbps internet. This is on a quad-core Macbook Pro that boosts up to 3.5 GHz...


It takes a while to source all the ads, and the reflow that leads you to click on the wrong thing (e.g. an ad) is most likely intentional.


So you just assume that changing between 50 meg and 300 meg service actually should give you a 6x speed up during browsing? I think that's a very flawed assumption to make. Just because your connection is capable of a certain advertised speed doesn't mean you're getting that speed from any given server as you browse the Internet.


No, I'm assuming that because it didn't give me a noticeable speedup that bandwidth isn't the bottleneck. Also, the bottleneck isn't my connection out to the internet because I can get more than the advertised speed any time of day to speedtest.net servers. I suppose the bottleneck could be on the other end, but aren't these sites all hosted on major platforms these days? Like AWS/Google/etc.?


Building a plugin system is hard. Building one that allows creating complex UI elements, modifying other UI elements (from either the core app or other plugins) or changing the way literally anything is rendered is particularly hard.

You not only have to build the code that supports all this, you also have to create and document an API and/or markup format to build all this out, plus document all your internal integration points.

If you want other developers to really take it up and build plugins, you have to make it easy to get into, so that means not just documentation, but great documentation, plus tutorials, examples and tooling to help.

You get a big chunk of that for free when you use HTML/CSS/JS.

Fire up VS Code, go to Help > Toggle Developer Tools, and poke around for a few minutes. Imagine the amount of time it would take to build a similar experience to just this one aspect if you were doing this from scratch.


Or you could use Lisp, and have the UI be S-Expressions. Everything can edit a list


Can you give me an example of a Lisp UI library like that?


Two come close but aren't quite there...

Seesaw [0] - A nice-ish way of using Swing in Clojure.

Iup [1] - One of the friendliest GUIs I've used, hands down. Just feels like Scheme.

However, I'd expect that QML and X-Expressions could go hand in hand to make something much closer, with a bit more flexibility.

[0] https://github.com/daveray/seesaw

[1] https://wiki.call-cc.org/iup-tutor#hello0scm


> Fire up VS Code, go to Help > Toggle Developer Tools, and poke around for a few minutes. Imagine the amount of time it would take to build a similar experience to just this one aspect if you were doing this from scratch.

Don't do it from scratch then. You can use GtkInspector to poke around with any GTK+ application, by pressing Ctrl+Shift+I.[1]

[1]: https://wiki.gnome.org/Projects/GTK%2B/Inspector


> easy to get started

Yes.

> cross platform support

Yes.

> Because you can write plugins in JS?

Yes.

---

As much as I prefer native programs as a user, it's impossible to ignore the benefits of cross-platform development and plebeian hackability/debuggability.


>> cross platform support

> Yes.

WOW FINALLY SOMETHING that will run on my LINUX and FreeBSD!

ohh.. a lot of plugins don't support linux and it doesn't build on BSD?


Cross platform support means if you want support, you cross over to a supported platform.


Some years ago, cross-platform in Microsoft speak meant it would run on at least two of the following: a version of Windows, Windows CE, Windows Phone or Xbox. That cross-platform now almost includes a non-Microsoft platform is progress.


Don't vim, emacs, Sublime, and IntelliJ IDEA work on FreeBSD?


(neo)vim, emacs and IDEA certainly do. Sublime is not natively ported; it runs only under the Linux compat layer.


Really? I haven't seen this at all... though I don't have a ton of plugins.


Define "a lot". Every plugin I use in Atom runs fine on Linux.


> doesn't build on BSD

doesn't run on ZX Spectrum either


Why bother describing something as cross-platform if this is the attitude held?


You can do all of this in Qt, too, without the overhead of the inner-platform effect.


You can do it, but it's significantly harder. And with Electron, you can leverage the same skills that are used to build web applications to modify your environment and text editor as well. Those are very significant advantages.

Note: I don't use VS Code or any other JS editor, I use emacs. But I can definitely appreciate the major benefits of the architecture.


I build both web apps and Qt apps. TBH, doing something in Qt is about 1/10 to 1/100 the effort of doing it on the web. The web is a morass of confusing standards, none of which work well together. To get guaranteed, predictable behavior which is documented, and will continue to work properly for 10 years after deployment (with only minor maintenance), you simply cannot use the web.


So does it have a NoScript plugin to kill the unavoidable 200 tracking and ad scripts from Google, Facebook and who knows what running in the background? Sorry if I offend someone with this, but the current Web experience is something I want as far away from my dev tools as possible.


While your concern might be valid in general, this is applicable to native software just as much.


Tracking or ad scripts from Facebook or Google are in neither Atom nor VS Code.

Both projects are open source, if you care to verify.


You're asked while installing if you're fine with anonymized usage information being sent to MS for further development of the software.

It's a simple checkbox, and pretty much every actively developed project does this nowadays.


We have 'native' text editors. Sublime Text is very similar to VS Code in many ways. But with Sublime I never got up and running writing, running, and debugging code in various languages the way I have in VS Code.

The whole experience is important. For many, it's more important than the individual 'feature' of being light on resources.

I currently ignore the fact that VS Code is 'heavy' as text editors go, because it's so much lighter than e.g. Visual Studio, leaving me much more RAM free for some of the code I'm running to gobble up and use for its own ends.

I'd prefer lighter 'weight' - in terms of RAM and CPU usage - but I'm not tempted back to Sublime yet.

BTW I'm a vi person, so I'm using vi keybindings in VS, VS Code, Sublime - and anywhere else I can do so. I love Vim's speed, but I can't Get Stuff Done in it like I can in more modern editors. I mastered the keys, not the inbuilt windowing system, scripting language, etc.


I'm always confused when people say this. I've tried a lot of vim modes, and to a one I've never found any that were satisfactory.

I really believe that if you find these vim plugins useful, you don't really use vim all that deeply. That's not a criticism, just an observation.


And that is OK for a notes or todo app, but for a text editor used by developers, who tend to customize it with plugins and whatnot, I think that is not a viable option. But that's just my personal preference; maybe I am wrong...


VS Code IS a viable option. I use it on a daily basis and it works great.


Seriously, coming from Visual Studio, VSCode is a breath of fresh, performant air.

Anecdotally, I was able to reproduce the cursor problem by minimizing/showing the VSCode window while viewing CPU usage sorted descending in Task Manager, but the effect was only ~2% usage for me (i7 processor).

If one invisible (from a UX perspective) bug is VSCode's big performance problem, then I'll gladly let it eat away at 2% of my CPU until it's fixed.


Plus the fact that it's actually possible to use the profiler and debugger built into VSCode to profile and reflect upon itself, then drill down into its own live data structures, source code and css to discover and fix what was slowing it down.

I once wrote a visual PostScript debugger for NeWS, which I primarily used for debugging itself. [1]

[1] http://www.donhopkins.com/drupal/node/97

The PSIBER Space Deck is a programming tool that lets you graphically display, manipulate, and navigate the many PostScript data structures, programs, and processes living in the virtual memory space of NeWS.

The Network extensible Window System (NeWS) is a multitasking object oriented PostScript programming environment. NeWS programs and data structures make up the window system kernel, the user interface toolkit, and even entire applications.

The PSIBER Space Deck is one such application, written entirely in PostScript, the result of an experiment in using a graphical programming environment to construct an interactive visual user interface to itself.


I tried it too... I wasn't satisfied with the performance. I used to install Atom every 2-3 months; when VSCode got released I tried installing it every few months in place of Atom. I still do, but I always uninstall it after a few hours of use. It has many good ideas implemented well, but it is still not worth switching and sacrificing all that performance for a nice git and debugging interface.


What's too slow for you? I recently switched to it from Vim running in a console (+ a bunch of plugins for IDE features) and it's actually more performant. It doesn't freeze when I run ctrl-p for one thing.


Well, I don't run any plugins in (neo)vim besides a colorscheme, FZF and neomake, and I run neovim inside Terminal.app since it is much faster than iTerm2.

One example: I open a 5k LOC file, scroll to the middle, and bam, the colors are there; everything is instant. In VSCode I open the same file: slight delay before the tab opens; I click on the middle of the side code tree: slight delay, and a few seconds for the colors to draw. This is just one example, and there are slight delays like this all over the place that I don't have with Sublime, Emacs or Vim.


VS Code is faster than Atom (by... a lot) tho?

Especially with large files, but just in general. Speed is the main reason I can't stay using Atom for more than a few hours. It's awful.

VS Code is snappier than Sublime Text ffs...


Wait, why? I use Atom on a daily basis, and I don't have a very strong machine, but I've never seen performance issues.


Compared with Sublime Text (what I used before VS Code) Atom was painfully laggy and slow.

Now if I was coming from a larger, probably Java, IDE? Yeah, I can see that Atom would look great.

Just wasn't for me. Glad you like it tho!


VSC is not even close to being as snappy as Sublime Text. Sublime also dominates when opening large files (2GB+) and searching through them.


On my Mac that just wasn't true.

Sublime had the edge in a couple of cases, but when opening large files (esp JS bundles) it crawled. Took over a minute to open one.

Same file in VSCode - maybe a couple of seconds. Maybe.


While I occasionally need to open large files like that, they aren't source code files, and I have no problem using a different tool than my main source editor for that.


Agreed, except for the domination; true on Linux AFAIK, but not on Windows: working with large files is OK, but opening them takes ages. VSC is definitely faster there.


You are probably right. I've used Sublime only on Mac and Linux. On Windows I was using another great native application - Notepad++. It was as fast as (or maybe even faster than) Sublime on other platforms.


My experience on the Mac too. Working with them was OK (mostly) - not great, but OK - but opening was very slow.


Nope, it's not snappier than Sublime.


shrugs

Is for me. Not by a lot, and not in every case, but overall? Is for me.


> but text editor that is used by developers who tend to customize it with plugins and whatnot...

There are a huge number of plugins for VS Code. It's built to be plugin-centric - most functionality is a plugin.


My two cents: you get what you pay for in your IDE.


That depends strongly on the IDE and the user. I used to pay Jetbrains every month, but last July I noticed that I preferred VS Code to PHPStorm for essentially everything, and stopped my subscription and uninstalled it.


Would you mind giving some examples, mostly for the higher tier of "paying"?


I pay a lot of money for my copy of IntelliJ.

...indexing...

...as I was saying...


Emacs is a little harder to write plugins for, but it runs natively and the only impediment is that more people know JS than they do elisp.


Lots of emacs functionality runs via its lisp VM. I'd hardly call that 'native'.

I mean, Emacs had jokes about being bloated decades before Javascript (and Java, another contender for these jokes) even existed. :)


Most of those jokes are completely out of date. "Eight Megs and Constantly Swapping" used to be a big deal, but today it's not.

I don't think running Elisp in Emacs takes away from it being "native." At least insofar as there are no popular text editors (that I know of!) which expect you to compile your plugins and macros to native code. They all have interpreters of one kind or another. What Emacs doesn't have, though, is an inner platform effect: Emacs is the platform, there isn't a second one underneath (no browser engine).


GNU Emacs Lisp code is certainly not native - and won't be until a JIT is widespread. GNU Emacs usually compiles Lisp code to a byte code which is interpreted by a byte code engine written in C.

Common Lisp based editors, like the ones Clozure CL, Allegro CL and LispWorks have, don't use a C-based byte code interpreter. The Lisp code is compiled directly to native code, which means editor extensions run as natively compiled Lisp code.

The advantage of the C-based byte code engine is compact code and improved portability - a C compiler is already available on most platforms, whereas a native Lisp compiler is typically not something provided by a platform (CPU/OS/...) vendor.


I should be more clear. I don't think that just because Emacs contains an Elisp interpreter, we should call it a "non-native" application -- even if Elisp is integral to Emacs' operation.

If forced, I would say that Emacs-the-platform is native, and Emacs the system of Editor MACroS is not, since the macros run on the Emacs platform. But it seems kind of pedantic.


> I don't think that just because Emacs contains an Elisp interpreter,

The number of lines of Elisp my Emacs uses (counting plugins, but also built-ins) is actually more than twice the number of lines of C.

It's not that "Emacs contains an interpreter", rather it's "Emacs has all these features written in Elisp [...a looong list here...] Oh, and it also has an interpreter to execute it all".


The Lisp code is not natively executed.

Some features like memory management (garbage collection) are layered on top of the OS.

The UI is not 'native' - it's based on a portable substrate written in C/Lisp, which works both on WIMP and terminal systems.

The user interaction is not native (commands, buffers, undo, preference dialogs, window/frames, ...).

etc.etc.


I never said the Lisp code was natively executed. I'm not sure where you got that idea.

Let's just agree to disagree. We are talking past each other at this point. Best regards!


I think that the definition of "native" should take into account how the code interacts with the system.

Because compiling code to the native instruction set shouldn't affect the semantics: so why should that be the only yard-stick for "native", right?

What is semantically relevant is: how much of the platform is exposed to the programs directly, versus through abstractions.

Suppose a language like Emacs Lisp or Java or whatever has only thin wrappers around POSIX through which applications interact with the platform. Then those programs are quasi-native POSIX programs, really. They are doing things like fork, waitpid, dup2 and whatever almost directly. And suppose that in the Windows version of that language, programs use functions that mimic CreateProcess or WaitForSingleObject. Then, regardless of the language being interpreted, it's really a native programming language.


Of course - I should have added that I don't necessarily share the sentiment behind those jokes (quite the contrary). For me they are just inevitable (social) indicators for "non-native" software (i.e. I'd say anything with significant code running through an intermediate representation or virtualization at runtime).

As such I simply found emacs an odd choice as an example.

BTW: Notepad++ uses .dlls as plugins. (Which doesn't necessarily make it a better editor than emacs :) )


Which suggests a nice corollary: the definition of "native" isn't fixed, and changes with time.


Well, language is like that. :) But I don't think describing Emacs as "native" here is very different from how the word was used decades ago. To me, "native" is less about how the logic is represented (compiled vs. interpreted code) and more about its execution context. If the host system is an operating system, then the application is native; if the host system is another application, then it's not. I agree that the boundary is blurrier for applications that have scripting interfaces; but I think that most applications don't live very close to that boundary, and are clearly in one camp or the other. (Maybe Emacs is nearer to the edge: there's an old joke that Emacs is a great operating system, it just needs a better text editor.)

Having said all that, I don't think that "non-native" is a pejorative. I use Emacs, but would switch to a "non-native" editor if there were a good reason. I use IDEA (native or not? you decide!) when working on big Java projects, and Emacs for everything else, because Emacs affords me a lot of power that I find lacking in other editors. "Nativity" isn't really part of the equation, it's really about functionality.


GNU Emacs comes with its own portable execution platform (the byte-code Lisp execution engine), where Java applications typically use a provided virtual machine.

As such GNU Emacs is just as 'non-native' as a JVM-based editor.


Almost agree with you! If the Emacs VM were used for applications other than Emacs -- if it were a general purpose VM -- then I'd completely agree.


There are lots of applications written on top of GNU Emacs, but most of the time they use the specific UI and features of an editor, or integrate with the editor.

Example: the calculator of GNU Emacs. https://www.gnu.org/software/emacs/manual/html_mono/calc.htm...


The jokes of GNU Emacs being bloated are from a time when machines with 8MB RAM were modern. The original NeXT computer was introduced in 1988 with 8MB RAM... The Mac II from 1987 started at 1MB, max at 8MB, ... today phones have 2GB RAM.


I bought a phone with 6GB of RAM. It keeps getting bigger.


Not true. A big chunk of Emacs is written in C.


Well, you can write plugins in Python for Sublime Text and NeoVim (and more). JS and Python are both fairly uncomplicated languages when it comes to "plebeian hackability/debuggability".


> it's impossible to ignore the benefits of cross-platform development

Performance & bugs/quirks. I'd much rather have performance.


We have a lot of native GUI text editors. Gedit, Geany, Kate, and Notepad++ are free options, and then you have Sublime Text as a proprietary one.

They all support plugins and extensions, they are all super efficient in CPU usage, etc.

The thing is they aren't new, and because they are all C/C++ codebases developers wanting to add new features to text editors don't want to touch C++98 / ANSI C code from two decades ago.

Then you want to start talking about a C++17 / Go / Rust / etc text editor, but that is starting from scratch, and when you consider the time investment to develop the infrastructure of a text editor today vs just using Electron, the time investment makes less sense for hobbyist devs doing this stuff in their free time.


> when you consider the time investment to develop the infrastructure of a text editor today vs just using Electron, the time investment

...is exactly the same. No matter what gui toolkit you use, you still need to develop the infrastructure. Electron doesn't know how to handle keyboard and mouse events, it doesn't have a text buffer implemented, has no understanding of different text encodings, how to parse different languages, and draw different colored text accordingly, or format it, etc.


> developers wanting to add new features to text editors don't want to touch C++98 / ANSI C code from two decades ago.

As I understand it, sublime has tight enough integration with python, such that python can do literally everything you would ever need


CADT, all the way down...


> It just doesn't go in my head that we are building text editors inside a web browser!

I felt the same until I actually tried it out. That changed my mind: now I'll take any platform that those developers & contributors choose for their cross-platform products. Because for an Electron app, this is a rock-solid 'old-school' (as in, fully as neat, smooth, helpful-yet-staying-out-of-the-way and somehow "ergonomic" as it has been ever since at least, oh, the late 90s, v6 or so) "Visual Studio experience". After a few years of sitting listlessly in front of subjectively inferior editors, I'm prepared & willing to give this Electron stuff more time to further mature, improve and speed up. There's no intrinsic reason it can't get there. Lots of seemingly native apps are just live Lua/Python interpreters under the hood with widget bindings in place of a DOM. In that case, well-engineered JavaScript (terribly time-consuming to produce & pretty rare out there for those who like to rely blindly on a huge pile of unscrutinized 3rd-party snippets/script um-I-mean "repos" --- but not impossible) can fully deliver the same, in principle.

Seeing how VSCode took off, that could even propel MS to invest unprecedented energy & talent into rounding out the JS "rich client app" performance story further. Who knows.


Because Microsoft and Apple have both dropped the ball on their native UI toolkits. Right now I'm doing web frontend with TS+VueJS+nice CSS toolkit. Yes there's plenty of complaining to do about the fragile convoluted toolchain and crappy performance. But I still gladly take it over WPF (promising, abandoned for some reason) or Cocoa (feels a decade out of date). No major OSS community, no Material Design/Bootstrap/etc. toolkits, slow develop/run loop, no good UI automation, crappy/no inspection capacity, etc.

I don't need x-plat UIs, I just need a good UI toolkit period, and that's why I'm looking real hard at Electron for future desktop work.


Cocoa has its issues, but I find it much more pleasant to use than anything based on front end web tech despite that. I find myself wishing I could use it on Windows and Linux.

I find many of the frustrations people have with desktop Cocoa come from the bizarre need to reinvent the wheel with a custom UI theme. If you stop fighting the system and instead go with a native look with well chosen accents, life is much easier. Native can look great with a little attention to detail.


You can also look at JavaFX. It's pretty good.


The current state of Javascript is very much, "just because you can, doesn't mean you should"


For sure.

Half the responses in this thread are people defending the stupid idea of "let's just use JS everywhere, just because we can, and fuck better-suited tools".

JS is not the be-all and end-all of tools; just because it is used a lot on the web doesn't mean it is the best (or appropriate) tool for other spaces.

It sort of seems like the JS ecosystem is built around hacky solutions to things, so I guess it makes sense that the community doesn't see any issue with shoehorning its language of choice into entirely inappropriate spaces.


Yeah! We should go back to native editors, like Eclipse or IDEA!


HN could use a "funny" upvote option...


Seeing the resources taken by a simple text editor, I crave to see what a full-blown IDE (which is what Eclipse and IDEA are) written with web technologies would eat up.


> Wouldn't it be better to make native application

Actually there are such editors. I use vim instead of vscode or atom. And I think my installation of vim is slower than vscode because of some plugin that I've not found yet.

Applications like vscode are very useful because they help find performance and other bugs in browsers, the same way browsers improved when Gmail and sites like this appeared.

I would welcome IDEs, large games, VR, and image and video editors in browsers, as they help improve the web platform.

If you don't like it, just use another option. It could be faster, but maybe not. I'm not sure vscode is slower than Visual Studio for most tasks.


I switched from Sublime to Atom maybe 2 years ago. I'm completely happy with the speed. I see no disturbing lags, and it works/looks the same on every platform. There is a great extension system, and I could develop my own extension quickly if I ever needed to. I believe it was much more productive for the developers to build it with web technology, and therefore I don't see any reason why it shouldn't be done like that. I guess there are people who need this or that to be much faster, and as low a memory footprint as possible, but that is not the average user/developer. I believe for most of us these editors work well.


It just doesn't go in my head how so many people have trouble understanding why things like VSCode or Atom are popular.

They're sexy, they are powerful (extensions for everything), portable, and extremely easy to extend thanks to Javascript being pervasive. I don't know for certain but I'd also assume writing an Electron app is easier than writing a similar app in a lower level language.

Is it really that hard to grasp? Performance has to be perceptible by the average person for it to affect user base. I prefer WebStorm but I've had absolutely no issues using VSCode on my laptop - which feels even faster than WebStorm.


I've played around with VSCode and what it can do seems impressive, but I want to do one simple thing. I want to make the background black.

Every dark theme I can find makes the background dark gray, not black. I actually looked into what it would take to make a new color theme and I simply don't understand all the steps, and definitely don't want to deal with the hassle. It very quickly goes off into the weeds of TextMate themes (huh? Why are we referencing a Mac editor? Yeah, I know it's a de facto standard, but really?), editing XML (complete with hex codes for colors) and installing something called "Yo Code". Dude, I just want to change one friggin' color!

In every native app I've ever used, I can just go into the Options and make the background black, period.

Just because it's a programmer's editor shouldn't mean you need to be a programmer to make the simplest configuration.


I don't know about VSCode, but in Atom you'd just write one line of CSS
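Something like this, for instance (just a sketch; assuming Atom's user stylesheet at ~/.atom/styles.less and its standard atom-text-editor element):

    /* override the theme's editor background with pure black */
    atom-text-editor { background-color: #000; }

Atom watches that file, so the change applies live - no rebuild, no plugin.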


> Wouldn't it be better to make native application

Better in what way? The market has voted with their downloads, they don't agree that the problems with Electron apps are as bad as you feel that they are.


Are you kidding me? Look at the raw performance and benchmarks of Vim/Sublime/Emacs, compare them to VSCode and Atom and you will see. If you don't believe the numbers, then use them side by side: open a 250MB file in all of them and look at the screen.

And I still see native editors used more (for example, the latest StackOverflow survey showed that Notepad++, Vim and Sublime combined are used much, much more than VSCode and Atom combined). I don't want my editor to crash mid-session, or to have to write a bunch of gulp files and npm commands to do one simple modification.


Raw performance means nothing, it's just yet another metric that can be traded off in favor of other aspects that make up a good application. In VSCode's case, it was traded off in favor of ease of development, which spurred an extremely active and ever growing ecosystem of extensions. Was it worth it? The download counter says yes, because despite it being "slow" compared to other editors, the tradeoff is not even noticeable by most of its users.

This all boils down to the art of "it's good enough". Take game development for an example. You could write an engine from scratch using Vulkan APIs and all that jazz and run at 144fps@4k on a toaster. Or, you know, you could trade off the performance and settle for just using Unity and optimizing wherever possible. It's not as fast, but as long as the user is not frustrated by it, who cares? You just saved a lot of development time. Tradeoffs, tradeoffs.

Same thing applies here. The VSCode team did a damn good job of keeping performance just about over the "good enough" threshold of most of its users, <flamebait>unlike other Electron based applications</flamebait>. Of course, that threshold varies based on the user and his machine, but outright dismissing VSCode based solely on the assumption that editors cannot be written in html+js is simply short-sighted.


This might mark the first time in history that Emacs was trotted out as an example of a performant editor.

I say this as an Emacs user.


I am an Emacs user. Have you tried it lately? It is lightning fast compared to Atom/VSCode. It's not as fast as Vim, it struggles with long lines, and all that, but boy was I surprised when I uninstalled VSCode and fired up Emacs after a week of using VSC.


> Have you tried it lately?

I would, but it's still swapping back into RAM.

(For real though: yes, I use it all day)


Compared to what kids these days use, it's as fast as a lightning bolt.


There's a lot of grass-is-greener comparisons happening. I'm a long-time Vim user and if I forget to turn off syntax highlighting before opening a large file it comes to a crawl as well.


Emacs takes 8-15 seconds to open on my new i7.

That matters when I just want to do quick edits.

Vscode takes 3.


It's not "Emacs" itself, it's plugins and Elisp libraries you have loaded. Emacs itself - the GUI, even including (optionally) blinking cursor - is actually quite fast.

To prove this, this command:

    time emacs -Q --eval "(kill-emacs)"
reports ~0.2 sec on my system.

My normal Emacs, just as yours, needs ~12 seconds to start up. But I made it this way by explicitly enabling and requiring things. I could, with some effort, get the time down to 5-8 seconds (byte-compilation and gathering autoloads in one place), and even back to around a second if I was desperate enough to try dumping the image of a running Emacs to disk (I did it once and succeeded, although the process wasn't pretty). I don't do this because I don't care that much, but the option is there.


Yes, this is true. But Emacs without plugins isn't worth much, and I'm too old to enjoy playing the configuration fiddle for more than a few minutes.


export EDITOR="emacsclient -a ''"


Unfortunately the group of plugins I use combined with running in windows just isn't stable enough to keep a server process running.

Admittedly this is a corporate laptop with all their virus crap running, so no editor is fast. But emacs in particular has very bad startup times for me.


For me Emacs opens up in about 2 seconds and it's ready to go and print text input in scratch buffer. I am on 2015 MacBook Pro 13" with i5.


That's been my experience as well, although that's still too slow for my tastes as a vim user.

But I definitely don't ever recall it being 8 seconds, that feels like either an exaggeration or someone working on a potato.


> that feels like either an exaggeration or someone working on a potato.

Haha, no. It depends on what you use Emacs for. Remember the old joke about Emacs being a great OS? The reality is that Emacs is a great computing environment for almost anything that deals with text and even for some completely unrelated things. If you want a cross-platform GUI then Emacs Lisp may be one of the choices available.

This caused "plugins" - Elisp applications - to flourish and over the years a lot of code was written. Long story short, I have ~640000 (not a typo) lines of Elisp in my ~/.emacs.d/ alone, not counting the built-in Elisp libraries. It takes time to load that much code, even if it's byte-compiled beforehand.


fair enough, I've never been able to stay with emacs long enough to collect the plugins.

The 1-2 second startup was slow enough that I couldn't even stay with the Emacs evil mode.


For me, it takes <700ms, but that's because I leave it running all the time and call emacsclient from the command line.


There is something wrong with your system.


I usually never deal with 250MB code files. The right tool for the right task. The Electron programs deal with code in a directory (and btw, VSCode is a totally different beast than Atom) and I also would not edit a 250MB file with either of them. Never had VSCode crash on me here... and I always get better Python, HTML, and JS support than e.g. in Sublime with good plugins.


Clearly a lot of people don't care if their text editor is using "a lot" of memory or "doesn't benchmark well" because all they're doing is writing code and running it occasionally.

There's plenty of competition in this area, so if electron-based text editors have enough downsides, people will use something native (as your comment indicates). If that wasn't the case, and we could only choose from electron text editors, then we would have a problem.

I think we should look at performance and resource usage as features on the same level as other features. Those things have to be balanced against whatever else the tool is bringing to the table.

I used Atom for a while. But as my projects got bigger, I got to the point where I was bothered by its slowness, crashes, and choking on large files (not even that large, honestly). I find sublime much better in these areas, so I switched. Looks like a healthy ecosystem to me!


I am sure something like Visual Studio of any other IDE is better suited for a large project.


>If you don't believe the numbers then use both side by side, open 250MB file in all of them and look at the screen.

Maybe I don't have any 250MB files to open?

If VSCode doesn't fit your use case, don't use it. There are innumerable alternatives. But what purpose does it serve to tell the rest of us (who like it) that it sucks?


I don't know what you are arguing about, I didn't say Electron apps are as performant as C++ written editors. I said that users, as evidenced by their downloads, don't find this to be as big of a concern as you (and many other HN commenters) do.


And xe said in turn, which you completely overlooked, that at least one survey didn't bear out your claim about usage at all. Rather than ignore that inconvenient point, you could have countered with what data you actually have on text editor downloads.

You also appear to be falling into the developers are not users trap.


Raw performance doesn't matter for a text editor nearly as much as it used to. Unless you are using a toaster oven to code. Then I guess raw performance would become important.

but hey, at least you can write code on a toaster oven, eh?

All joking aside -- I use VS Code on a 7 year old laptop, and it doesn't lag, or stutter at all. Performance is just fine. There comes a point where the hardware greatly outstrips the requirements to the point where it doesn't matter if the resources being used seem 'too much' for what it does.


VS Code and Atom aren't really comparable for performance. VS Code is much, much smoother and closer to the experience of Sublime.

> For example, the latest StackOverflow survey showed that Notepad++, Vim and Sublime combined are used much, much more than VSCode and Atom combined

By this metric Notepad++ and Visual Studio (not Code) are the best editors because they topped every category (except Vim for Sysadmin / DevOps). If you look at the "Desktop Developers" tab, Visual Studio Code is actually in 3rd place behind Visual Studio and Notepad++, with Vim and Sublime a few rows down and Atom even further.

There's no way in hell Visual Studio (not Code) is faster than Vim, so how come it dominates it in all but one category?


You should also sum all the percentages of the IntelliJ-based IDEs there; then VSCode drops one place.


My whole point is that "most used" is a terrible, terrible metric for anything except for most used.


Not even for the market "voting with their downloads"?


Ctrl+F "voting with their downloads"

1 post found.

Oh, this comment.


Are you really comparing the number of people with VS Code to the number of people with Sublime, VIM, NotePad++, or Emacs and saying VS Code is greater?


Unless you enjoy the sound of your fan and battery life of only a couple hours, 13% CPU to blink the cursor is a problem.


Wouldn't it be better to make a native application, especially for code editors, where developers spend most of their time and where noticeable lag and glitches are not appreciated?

I agree. However, as someone who has used Visual Studio (the one that costs $$$, not VSCode) which is a native application (AFAIK --- it probably has some .NET and web components too), I can attest to the fact that even native applications can be extremely resource-consuming and slow.


Your question seems backwards. VS Code is an absolute dream to use, for me at least: powerful, hackable, and it performs great. If there are native editors that leave it in the dust, what are they? If there aren't, surely it would make more sense to ask why that is, rather than asking why Code isn't native?


I really like that I can edit my editor. A few times, I've disliked how something looked or wanted to add a feature. So I tweaked the stylesheet, or wrote an Atom plugin.

There's a beautiful poetry to being able to do web dev inside a web application.

I still have Sublime text for when I need to view/edit files with tens of thousands of lines, since Atom chokes on large files, but otherwise I have zero regrets :) I'm as productive in Atom, and it's a more pleasant experience.


> Wouldn't it be better to make a native application, especially for code editors, where developers spend most of their time and where noticeable lag and glitches are not appreciated?

There are plenty of native-application (or JVM/CLR) text editors and IDEs, too. For lots of usage patterns, the browser-engine-based ones have acceptable performance, and the number of people with expertise building for web contributes to the speed of development on those editors and their plugins.

But, sure, if you don't like Electron-based editors, there are plenty of other actively-developed editors for you to choose from.


VSCode runs nicely on my MacBook. It doesn't feel like a web application to me, in the same way that IntelliJ IDEA doesn't feel like it's written in Java. I'm sure that both VSCode and IDEA could run even faster if they were ported to, say, C, but in both cases that would be a large investment for improvements that I wouldn't even detect.

So I guess it makes sense to optimize for maintainability in this case, i.e. not having one codebase per native OS.


> Wouldn't it be better to make a native application, especially for code editors, where developers spend most of their time and where noticeable lag and glitches are not appreciated?

I think it'd be better to improve Electron so that native applications don't have so much of an advantage. WebAssembly is a big part of that. Another useful part would be an alternative layout mode that eschews legacy HTML/CSS cruft, for more predictable and performant GUIs.


Most of these don't support it (yet?), but I personally love the idea of having my editor of choice on the web with all my settings and with no install/permission issues.

The real reason, though, seems to me to be that it's just the easiest way to make a cross-platform UI. And in this web-powered world, everyone knows how to write html/css, so why relearn a bunch of new tools?


Agreed. If you're looking for easy extensibility, a simple core and a flexible GUI, I can't imagine what could put VSCode above emacs, except for the button marked "sort by CPU usage"


Because if we don't build it inside a web browser, we'll have to add plugin support to add a web browser inside of it. Then you have two problems.


Actually then, in accordance with Zawinski's Law, one will be able to use that web browser to read mail. (-:


You could write a native text editor which can use JS plugins. Sublime Text uses python plugins and is nice and snappy.
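
A minimal sketch of that model, assuming a hypothetical hook-based plugin API (this is not Sublime's actual plugin interface): the native core owns the buffer and the hot path, and scripted plugins only register callbacks against narrow extension points.

```python
# Hypothetical sketch of the native-core / scripted-plugin split:
# the core owns the buffer; plugins see only narrow hook points.

class EditorCore:
    def __init__(self):
        self.buffer = ""
        self._on_change = []  # callbacks registered by plugins

    def register_on_change(self, callback):
        self._on_change.append(callback)

    def insert(self, text):
        self.buffer += text
        for callback in self._on_change:
            callback(self.buffer)  # notify plugins after the edit

# A "plugin" is just a callback written in the scripting language.
lengths = []
core = EditorCore()
core.register_on_change(lambda buf: lengths.append(len(buf)))
core.insert("hello")
core.insert(" world")
# lengths is now [5, 11]; core.buffer is "hello world"
```

The point of the split is that rendering and buffer edits stay native and fast, while the (comparatively slow) plugin callbacks run in an embedded interpreter.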


> Wouldn't it be better to make a native application, especially for code editors, where developers spend most of their time and where noticeable lag and glitches are not appreciated?

The lesson here is that our assumed bars of quality for what makes a text editor good are inaccurate. It turns out that the minimum performance bar is lower than you think, and that ease of customization is, in fact, much more important than you think.


I switched to vscode from webstorm, for performance reasons. Building a native editor / ide is a great idea. But since a lot of options in this area had already chosen some cross-platform toolkit for development, why not build it as an electron app?


I used to think the same, until I started using VS code and it completely blew me away!

I don't care if it's written on top of a browser or in assembly. I just know it's the best text editor I've used so far.


Well, if you've ever built a cross-platform native desktop application, you will appreciate Electron.

Java Swing. Never again.

edit: Do people downvoting even know what it's like building cross-platform desktop applications using Java Swing? It's fucking awful, and that's a fact. Even the end-result UI look & feel is butt ugly. Sure, you can spice things up with JavaFX, but why? Do you not realize how masochistic it was back then vs. now with thin web browser clients? Can't believe people are still thumping Java Swing in 2017.


Swing was at least meant to do UI. Web stack is not; webapps are essentially a pile of ugly hacks on top of a document rendering engine, and it really, really shows - especially when you have webapps pretending to be native (e.g. webview-based apps on mobile).

Also, for Java there's JavaFX (a de-facto standard UI toolkit for Java), which is very nice to work with.


> Swing was at least meant to do UI. Web stack is not;

The ancient history of the web, clearly, is different, but the modern web stack, both in terms of specs (WHATWG HTML/W3C HTML5 and related standards) and modern browsers are very much engineered for applications, not just classic documents, as a primary use case.


When you find yourself having to reimplement a blinking cursor, that's when you know you are working on a shitty tech stack.
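
The linked issue is a good illustration of the gap between redrawing every frame and redrawing only on state changes: a 500 ms blink needs two repaints per second, not sixty. A language-agnostic sketch of the cheap approach (illustrative only, not a claim about VS Code's actual fix):

```python
def cursor_visible(now_ms, period_ms=500):
    """Blink state is a pure function of time:
    visible for one period, hidden for the next."""
    return (now_ms // period_ms) % 2 == 0

def repaint_times(start_ms, end_ms, period_ms=500):
    """Only the instants where visibility flips need a repaint."""
    return list(range(start_ms, end_ms, period_ms))

# Over one second only two repaints are needed, vs. 60 frames
# if the cursor is redrawn on every animation tick.
assert len(repaint_times(0, 1000)) == 2
```

Scheduling a wake-up only at each flip (rather than polling every frame) is what lets an idle editor actually idle.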


People who've spent a long time on traditional client-server architectures view JavaScript as a toy and view all the JavaScript frameworks as "over-engineering".

Yet what they fail to realize is that they are comparing a webpage to an application. An application carries information about the state it's in; handling application state has always been complex, and the two-way coupling to the presentation layer has made life more difficult.

The latter is why JavaScript is perceived as "over-engineered" by people who view it as nothing more than "fairy dust on ugly document pages", which is absolutely the wrong way to view application development.

I used to be the biggest JavaScript skeptic, but once I realized the intent behind React/Redux/Vue.js, it changed my perspective, and I have started treating it with more weight and respect.

Once that paradigm shift happened in my perception, I found it a lot easier to navigate and endure the fragmented tooling and endless variations of npm modules.

Because I realized that it's going to get better eventually and it's here to stay. Young people aren't learning Java and using Maven on Eclipse or NetBeans anymore. They're on Atom or VS Code, writing JavaScript.


Why was this downvoted? I expect a rebuttal instead of just silent downvotes. This just solidifies my opinion: there are a lot of dinosaurs on HN, and they are going to find themselves unemployable without JavaScript in the future.

Javascript is essential knowledge, along with AWS. Software engineering has changed in the past 20 years like it or not.


And yet that hacked-together document rendering engine manages to be less painful to use than the current crop of UI toolkits.

I think we'll get there eventually, but until a native toolkit presents an interface as easy to use as the web's, developers are going to take the path of least resistance.


> I think we'll get there eventually, but until a native toolkit presents an interface as easy to use as the web's, developers are going to take the path of least resistance.

Like Qt did with QML? http://doc.qt.io/qt-5/qml-tutorial1.html


Yeah, but Qt has licensing costs for commercial use...


Qt is LGPL, so unless you need to link to it statically or make proprietary modifications to Qt itself, you don't need to pay licensing costs.


Ah right, I wasn't aware of that.


Qt is (largely; a couple of optional components are GPL) LGPLv3, so can be used commercially without getting the commercial license.


Oh, lets be professionals and not pay for our tools like everyone else does, who needs money anyway!


Less painful according to whom? From reading this thread it appears that most devs claiming it's incredibly easy have never actually worked with other UI toolkits at all, so their experience is largely worthless.

I've done web UI dev. I've also written code using GTK, Qt, Windows, Swing and JavaFX. A good modern UI toolkit like JavaFX or Qt blows the web stack out of the water on almost every metric. Developer productivity, correctness, speed ... you name it.


Intention is in the hands of the builder, not in its inherent design.

> webapps are essentially a pile of ugly hacks on top of a document rendering engine

That's your own opinion. You claimed Java Swing & FX were the correct way and that thin clients like Electron are wrong. I disagree.

If Swing was meant to do UI, then it's probably the most awful and inefficient way to do it.

Even if the Web was not meant to do UI, it's faster and more efficient to work with than Java.

It's a matter of opinion, but with starkly different development experiences. Sure, you can build using Java Swing/FX, but you'll get a completely different demographic and developer culture, one still ingrained in the era those toolkits were released.


Why?

With JavaFX where you can customize it using CSS it looks nice, and you don't need to use a scripting language to do it.


Why use JavaFX when you can change the CSS in a web app, which is indistinguishable?

Here's an easy way to get a job done, but people refuse to take it because of philosophical/ideological indoctrination, e.g. "the world is made of objects, therefore our languages and how we build software should mirror it."


I was extremely skeptical of using Swing for a really large hospital application. But after working in it for quite some time, I have to say that with the right approach it is quite manageable. There is plenty of power in there and a lot of good things can be done.

(I do prefer the webstack over Swing)


> Even the end result UI design look & feel is butt ugly.

Only when developed by those devs that never bothered to read books like "Filthy Rich Clients".


Is the IntelliJ UI "butt ugly"? No, but you'll probably say it is as to not contradict yourself. Blame the craftsmen not the tool.

>Even the end result UI design look & feel is butt ugly

You know what else is butt ugly? Programmer art and UI.


No, you are correct. It is insane.


I think their bet is that the web stack won't be "so high up the stack" in the near future.


Have you used VS Code? Feels much more performant than any native editor or IDE I've ever used.


What native editors have you used?


Primarily Sublime Text, Notepad++, PhpStorm and NetBeans.


That's an unusual experience. Based on a handful of benchmarks[0] done by the author of JOE, VS Code is sometimes an order of magnitude slower than Notepad++ and Sublime at some tasks.

I use VS Code pretty much exclusively these days myself, so I'm not picking on it by any means.

[0]: https://github.com/jhallen/joes-sandbox/tree/master/editor-p...


Well, I haven't benchmarked it so it's just how I feel. Maybe there's something in the UX that makes it feel more performant.


I have the same experience on my pretty slow laptop. Granted, my projects aren't big, but I would take vs code over sublime any day. Interestingly, atom feels much slower compared to both.


That's because, unequivocally, atom is slower.

https://pavelfatin.com/typing-with-pleasure/


Surprising part isn't that atom is slow, it's that vs code isn't and they both use electron.


PhpStorm and NetBeans are not 'native'.


Care to elaborate?


There's nothing to elaborate. They're both Java/Swing apps.


Thanks. That's what I meant. I thought that counted as native.


Nope, it doesn't. Although it's probably a good indication that the hairsplitting over 'native' and not is not as important as this thread might make one think.


I guarantee that is virtually impossible.


Maybe he was hosting his native editor via an xwindows server on dialup.


I have used, 6 times, for 1 week period each time. Last time I tried it a month ago.


The benchmark you linked is outdated; I reran those tests just now with VSCode and could perform all the open, edit, and close operations in the test in under 2s.


I believe making a native application for all three platforms would be considerably more effort.


Have you tried VS Code? Your arguments about performance seem theoretical, but in my experience I have run into none of what you're describing, even on a 5-year-old MacBook Air. 13% idle CPU due to a cursor is a bug that will be squashed. The software experience, in practice, is quite impressive.


Actually, I find the consistency and simplicity of the well-established, widely supported, rich third-party ecosystem that comes with an HTML5/CSS/JS-based UI very liberating vs. the incompatible mess of native UIs. Personally, I script all front-ends in web technologies irrespective of platform and back-end tech. The browser rendering engines and JS engines are reasonably fast on most platforms.


As someone who is literally building an IDE in Electron, the biggest reason is JavaScript itself. If you look at the Stack Overflow yearly survey, you can see that JavaScript is currently the most used language. Also, don't forget all the integrations you can do with, for example, the devtools.

An additional benefit is that you can run the IDE in a web browser, so you can have an online code editor. Think, for example, of configuration files on the Azure website, or a cloud IDE with multiple people using it.

Honestly, I think Visual Studio Code is not slow enough for people to switch away from it. A long web page is very similar to a long code document; all the keywords are just spans with a colour. Furthermore, it's not like Visual Studio or IntelliJ are known for low CPU usage or smoothness.
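
To make the "keywords are just spans with a colour" point concrete, here is a toy sketch (the keyword set, the regex, and the `kw` class name are all made up for illustration, not taken from any real editor):

```python
import re

KEYWORDS = {"def", "return", "if", "else"}  # toy keyword set

def highlight(line):
    """Wrap recognized keywords in <span> tags, roughly the way a
    browser-based editor renders a line of code."""
    def wrap(match):
        word = match.group(0)
        if word in KEYWORDS:
            return f'<span class="kw">{word}</span>'
        return word
    return re.sub(r"[A-Za-z_]+", wrap, line)

# highlight("def add(a, b): return a + b")
# → '<span class="kw">def</span> add(a, b): <span class="kw">return</span> a + b'
```

A real highlighter uses a proper tokenizer rather than a single regex, but the rendered output is the same idea: plain text plus styled spans, which the browser already knows how to lay out.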


The SO survey (while super interesting!) conflates "most used" with "most asked about on Stack Overflow"; I'm skeptical of anyone who claims to know which programming language is the most popular. The TIOBE index, which has its own problems, has Javascript coming in 8th, with the top 4 being Java, C, C++ and C#; my experiences and my confirmation bias suggest that's more accurate.

https://www.tiobe.com/tiobe-index/


I agree that TIOBE is more established, especially as it looks at company involvement and the job market. One of the reasons I am interested in JavaScript is that many people are learning it, even non-developers. Think about people without education.


> Think about people without education.

I don't want people without education building my IDEs and other software I might rely on.

The VS Code team have done a great job, and this bug will die. But, I'd say it's more in spite of the platform, than because of it.


JavaScript is literally the reason I would avoid a platform. It's only the de facto language for the web because you can't directly run any other language across different browsers. With web apps being so popular, having a low barrier to entry, and being "quicker" to provide half-assed cross platform apps, it's no wonder JavaScript is technically so popular. Hopefully, web assembly or something similar can change that. But then again, web browsers weren't meant to host proper applications.


I personally wish Github would incorporate a full-feature Electron-based IDE into their system; they do have an online editor, but it is fairly simplistic. Good enough for quick edits, but I wouldn't want to dev in it (then again, I started my professional career in software development sitting in front of a VT220 using a line-based editor; it got bad enough that me and my mentor had a "contest" on who could write a better full-screen code editor - honestly, we both won in a way).


I think you have partly answered it yourself. Creating a highly configurable, UI/UX-predictable cross-platform editor is not an easy task. If you start from scratch on a native platform you have to write a ton of code (pretty sure more than VSCode has) just to get started. Editors on web platforms can be modified with CSS, JS (or TypeScript, as in VSCode), and HTML. Try to be that configurable on a native platform... you would literally have to build something equivalent to CSS, HTML, and JS to reach that level of modularity. It's good to stand on the shoulders of giants and start there ;)


Pray tell, sir, have you heard of the Qt library? There is a whole world outside Javascript :)


So you think hacking Qt internals like this (and not from the outside, I mean internally) and using C++ would make a good community editor? http://doc.qt.io/qt-5/qtwidgets-richtext-syntaxhighlighter-e...

When using Qt for a highly modular editor, be prepared to write Qt components from the lowest level. It's not like you can take a Qt widget and modify it in a simple way. Trust me.

If you think it's easy to write editors, look e.g. at the people who write letters and their custom editor tool: Microsoft Word. Now look at the many competitors this program has had and how many behave super speedily on all platforms.


I've written low-level GUI components for several UI frameworks before, thank you very much. It's not that bad, and the result is a lot smoother for the end user.

The real problem is this new generation of "developers" that only know Javascript. When all you have is a hammer...


I simply cannot understand why Atom and VSCode are so popular. I get that they are extensible, but is that really worth the slowdown to you? If I need more features than a text editor, I use an actual IDE.

Someone just posted some really embarrassing benchmark numbers regarding this issue yesterday: https://github.com/jhallen/joes-sandbox/tree/master/editor-p...

Note that Atom and VSCode are nearly 10x slower than all the other competition, as well as simply crashing for many of the tests. To be fair, I do think Electron based desktop apps have their niche. Spotify is a perfect example. But they have no place in text editing.


I don't know why you're saying they are slow. VSCode starts up pretty fast. Sure, it consumes more resources than vim, but having code completion, debugging, linting, and a bunch of IDE-like features is super useful, as is the fact that it's cross-platform and open source.

It's very hackable. Just last night I fixed an issue that had been bugging me for a while.

Pages with ads use a lot more of my CPU so I'm not really worried.

This looks like a Chrome problem more than a VSCode one. I do know that they take perf very seriously, and this will be given some attention.


100% of vim users will tell you that their vim setups also have code completion, debugging, linting, etc...


And I just proved your point. :)


Sure, but I won't pretend that adding those features doesn't add some significant resource usage and some occasional slowdown.


>doesn't add some significant resource usage

If by "significant" you mean less resources than opening a blank tab in Atom.


>I don't know why you're saying they are slow. vscode starts up pretty fast

It takes "ages" to start up, meaning it interrupts my workflow. I like VSCode, but startup time for the app and for new windows is terrible (~2 seconds), and it's one reason why I still prefer Sublime for anything that doesn't have significantly better language plugins in VSCode.


I so very rarely close Atom, and generally don't use multiple windows in a single app, that those slowdowns never really bite me. It'd be interesting to see their analytics to see how often people actually hit these things in practice.


To be fair, Emacs can easily have code completion, debugging, linting and a bunch of IDE features, it's cross platform and it's open source. It's probably one of the most hackable editors out there.

And I use adblock :)


> I don't know why you're saying they are slow

Because the slowness, when compared to a whole slate of "regular" (== non-web-browser-based) editors, is empirically measurable, and is not just like "twice as slow" or "500% less efficient", but — in many cases, for various operations, as most recently delineated by the joe's own editor benchmarks, but really I mean repeatedly demonstrated over and over for the life of these editors — hundreds of times slower or even infinitely slower (it just gives up and crashes). To a lot of people, that's infuckinsanebro!!!, like a car that gets not n miles per gallon, but needs n gallons per mile.

I get that the extensibility and hackability is appealing. I like that, too.

But at what cost!


Have you used VS Code? I recently switched from Sublime to VS Code. Originally, I hated the slowness. But the slow point in my work (software engineering) is not how fast I open files, but how easily I can transfer what I am thinking to the screen.

VS Code, while slow, has extensions that make me much more productive than when I was using Sublime.

Highlights include:

- auto-formatting my ruby code to fit coding standards. I don't have to spend time mucking with spacing. I write it the fastest way I can, and it fixes it.

- inline git blame. I can easily see who and why a line of code is changed as I select each line. Super useful when debugging. I don't have to kick up the console or fire up github.com in my browser.

- prettify json. I don't have to open up chrome to some json formatting tool to debug my http responses.

edit: I still have to use sublime to open very large files, but that is rare in my work.


Yeah, I have, and I actually use it regularly (along with several other editors). The slowness (of not just opening files, but editing them) drives me up the wall, but in certain workflows (e.g., TypeScript) it can be worth it.


At least while I was using Atom on a not-so-high-end machine, it generally took a few seconds to start and become usable.

Neovim on the other hand is there in a heartbeat, and I do have completion, debugging, linting, file browsing, git integration... available as well :)


Because for certain languages like Rust it is an IDE.

I was a bit surprised at those tests. It doesn't match my gut feel, VSCode is quite snappy for me.


Same for Go and typescript/React, there's no better IDE than VS Code.


As was pointed out numerous times in that thread, they aren't 'embarrassing benchmark numbers', they are 'irrelevant benchmark numbers', because no such 'slowdown' is actually felt by most developers who try VS Code.

> But they have no place in text editing.

Their popularity proves otherwise.


>they are 'irrelevant benchmark numbers', because no such 'slowdown' is actually felt by most developers who try VS Code

I don't think that's true at all. I have tried both VSCode and Atom, and immediately dropped them both after they crashed opening a file of a few thousand lines.


YMMV. I've been a daily user of Code for about a year and I've never seen it crash. Never seen it pause for a perceptible amount of time, that I can recall. Granted, I haven't tried editing any 10MB XML files with it, but then I've only been using it, not benchmarking it.

Sure it uses more memory than nano, but why would I care about that?


That's not the use case of people using full featured editors.

For the same reason I don't open log files with Intellij or Visual Studio, I don't open them with VS Code.

I use VS Code as a code editor, not as a text editor. If I want to open a log file, I use Sublime or Vim.


This. A text editor that can't handle opening a log file. (No need to highlight anything, complete anything, scan the structure, etc. Just show me the damn log!)

With the latest updates they are now able to open files up to 10MB, wow.


Is this a case of a more specialized tool (with fewer but more focused features)? For example, you can view email in vim, and I'm sure you'd get better performance than Gmail for large emails but very few developers I know use vim for that.


There really isn't much competition though. Sublime is commercial. Many people don't like emacs or vim. Most other editors are confined to one OS (Notepad++, Kate, etc).

When it comes to a full featured, modern editor that works quite well out of the box on most modern OSes, I struggle to come up with any answer other than VS Code.


What's wrong with using Sublime? Quite possibly the best text editor I've ever used. It's snappy, elegant, and it has a boatload of features.


I have nothing against Sublime. But I personally don't use closed source text editors. For something that essential to my livelihood, I like knowing should it get abandoned, that there's a chance of others picking it up (even myself).


That benchmark is 7 months old. I just tested the same operations with VSCode and did not see any operation take more than 2 seconds: opening test.xml, scrolling to the end, and closing VSCode in <2s; replacing all instances of "thing" with "thang", <2s. VSCode has made significant improvements since last summer when it was first released; that could be part of the reason for these discrepancies.


Actually, one of the things that's so absolutely incredible about VSCode is that it's both an IDE and a non-IDE. The delightful thing about it is there are no projects. In the future, I hope this happens to all other IDEs out there. I think the notion of a "project" clearly comes from the idea that the source structures must be understood so that proper syntax highlighting, auto-completion, etc... can work correctly. Maybe in this new era of machine learning, our "IDE-less" IDEs can start auto-figuring this stuff out and just know how to build, complete, etc without any of this crazy hassle.


> I simply cannot understand why Atom and VSCode are so popular.

Free, open source, best ide for TypeScript, and better than Eclipse, NetBeans.


> I get that they are extensible, but is that really worth the slowdown to you?

Yes, it is worth it. I use Atom, which people say is slower than VS Code, but it's still fine and definitely worth it.

I used to use (and still use sometimes) vim but mostly just Atom with vim-mode-plus plugin.

It is possible to do all that Atom does in Vim and I have colleagues who are very productive with a heavily customised Vim but somehow, despite using it for years, I only ever used quite basic features. Now I can do the things in Atom that I could not bring myself to learn in Vim.

In any case, editor performance is the least of my worries. Most of my time is spent waiting for stuff to deploy and tests to be done, meetings etc. rather than actually writing code. :(


The benchmarks make it look bad but to be honest aren't that relevant for day-to-day use. I can't remember the last time I saw performance issues in Atom and the plugin infrastructure massively outweighs the occasional possible issue


As plugins go Sublime has quite a few as well. In fact there hasn't been one I've looked for and couldn't find. I used to use Atom and VS Code, but Sublime is just that much quicker even for day to day tasks as you say. Plus plugins can be written in Python so that is a huge plus. But as with anything use what makes you happy.


The only time I ever see performance issues is when I accidentally click on a binary. Atom's large file handling is awful... But how often do you open a 100MB text file?


Isn't VSCode's hard limit more like 30MB? I've tried using it as my only non-IDE text editor recently, and it's infuriating that it's completely impossible to open a lot of log files I deal with.

People here are really down on Sublime Text for some reason, but it can actually do everything I need.


When I'm looking at trace files to figure out why an error occurred, I'm happy when I only get a 100MB one to look at.


I'd always jump to something like (unix) `less` to do something like that. I don't see it as a failure of my main programming 'text editor' that it can't open log files, because I wouldn't even try that in Vim (which I'm also very comfortable with).


It breaks for me when I open a 3MB json file with complex nesting.


VSCode is free and feature-rich with a huge amount of extensions already.


Yeah the integrated node js debugging is awesome. I love how it's an in-between of an IDE and regular text editor. It has some more features than atom but feels lighter than webstorm


It's not slow to me, I find my computer uses far less resources than say one of the million JetBrain IDE's. Combined with a bunch of useful plugins, it's pretty wonderful.


You're comparing a text editor to an IDE. If you are able to do your work with a text editor, maybe you didn't need an IDE in the first place.

IDEs are definitely slower than text editors, but they have benefits that come with tradeoffs (like most things).


vscode kind of walks a thin line between IDE and text editor. It has an integrated debugger, intellisense, linting, and plugins to do everything you'd expect from an IDE.


I love the irony of simulating an XOR gate from a piece of hardware (a serial terminal) with a billion gates in a processor which renders a square with an alpha blend function. Sort of like using a 787 to sit on the runway, and run its engines to blow a windmill to crank a butter churn :-).


That's a fun analogy, made me laugh. :)

I see this literally everywhere though, the article at hand is only a slightly better example than almost everything we do. Browsers consume gigabytes of memory to render a few basic web pages. We use high level scripting languages with tons of dynamic memory inside containers that are running on VMs to run all our cloud infrastructure. If we had the time, these things could be multiple orders of magnitude faster and smaller. It just isn't worth our time... :P

Another fun example of this I ran into recently is the controllers for brushless drone propellers. The hobby motors you buy for $20 usually have a 1 MHz 8-bit CPU running electronic speed control, literally shrink-wrapped inside the wires. Every single prop. Think about that: a million instructions per second, only to make something spin. (To be fair, the CPUs are under-utilized, but still, it reminds me of churning butter with a jet plane.)

The main difference, of course, is that cpu time & memory are close to free, and 787's are super expensive. Maybe if 787's were free, we'd use 'em often to churn butter... ;)


> cpu time & memory are close to free

Not for 3 billion people. Also energy is not free at all.


Right, yes, I know, and it's a great point. Global economics and third-world access to computing and the internet are in a completely different time zone from what I was talking about.

But you're right, and on the global scale, we may actually be doing the equivalent of churning butter with jets. I totally wouldn't be surprised if the sum total energy expenditure on all computers in the world was greater than on all the aircraft in the world... and we are most definitely wasting the vast majority of the energy we use on computation.

Still, in my defense, I said close to free, and compared to the cost of a 787, cpu time & memory are closer to free than jets, no matter who we're talking about, right?


The answer to all this is: "use Qt".

"But camgunz, I only know JavaScript"

That's cool! Look into QML.

"But camgunz, I only know X"

That's cool too! Qt4 has a truly ludicrous number of language bindings: https://en.wikipedia.org/wiki/List_of_language_bindings_for_.... Qt5 has a fair number too: https://wiki.qt.io/Language_Bindings.

"But camgunz, I need an embedded browser"

I agree, separate windows are for savages. Qt has you covered with Qt WebBrowser.

"I need a native look and feel across all platforms"

Well, that's a pipe dream. BUT, you can get closer with Qt than anything else. Google for some screenshots.

"I need a bananas style but I don't want to write any code"

Qt supports CSS-like stylesheets!

---

The web isn't a good application platform. Sure, we could spend (and have been spending) billions of engineering hours over years to get it up to speed with exactly what we have now, but that's obviously a bad idea. We can figure out zero-install and sandboxing, but we just don't need to shoehorn everything into JS, weird APIs like localStorage and WebRTC, and the DOM. We just don't need to.


> use Qt

It has only been a few years since a bug in the file-copy animation in the systray, which caused 100% CPU utilization in KDE Plasma 4, was fixed. It wasn't the first bug of this kind, nor was it the last (IIRC I saw a similar one in Plasma 5 too, though I didn't verify it). Qt is old, but I can't say it is efficient or optimized. Qt5 programs that use classic QtWidgets consume significantly more memory than programs written in other toolkits, and the newfangled Qt Quick widgets are a fucking disaster, as the aforementioned bug shows: they couldn't even get a simple screen animation straight. So it's not at all different from Electron (only a few years ahead of it).


I'm not arguing Qt is perfect; no software is. But you can't argue that a platform built on JS and the DOM will ever perform as well as a native toolkit unless the native implementor makes unbelievable mistakes, or we put a vastly disproportionate amount of work into the JS platform. If you really have a thing against Qt, GTK has a bunch of bindings too (including JS!); go wild.

Or to put it another way, if you can switch your stack and get a "few years" of progress, that actually sounds amazing.

(Plus, and this applies to Electron too, you're free to fix bugs in open source dependencies that impact your project).


In what do you prefer to write cross platform apps? Is there a more productive platform?

The thing with the web is that it's very easy for those familiar with it to try and shove it in everywhere; I do it too.

The amount of useful code one can just borrow for a project is very high, with all the fiddles and bins. Even more importantly, it's very accessible, and sometimes, in the case of GUI code, it looks really good too. So people use Electron.

BTW, they use Electron because they run the same editor in their Azure platform, as an online editor for files in servers, so knowing this, I'd say picking Electron is a reasonable choice.


If I need to throw together a GUI app I usually use Python and PySide. I've had co-workers use Qt Creator and they built way better and better looking apps than I did. There are tons of Qt examples and the docs are pretty good.

Electron is almost never a reasonable choice. As a platform it's one of the slowest and most limited. You have a choice of a single, pretty bad language. It only works on a few platforms, and in practice it's limited to even fewer because of its poor efficiency.

I'm sympathetic to devs that don't want to venture outside of JavaScript, but in the same way we shouldn't pretend that C is a great language to write web backends in, we shouldn't pretend JavaScript is a good choice for anything but the most basic of web scripting. Writing apps in JS, you're going to experience poor performance, high memory usage, a lack of portability, difficulty scaling to a large codebase, and problems with concurrency.

You can usually tell when devs choose the wrong tool. If you write a DB in Java, you'll have problems with the GC impacting your latency (Cassandra). If you build a desktop app in JS you'll have weird problems with resource usage (Atom). If you build a web app in C it'll take 4 times as long, be unstable, and have weird restrictions. We should recognize that every tool has its use, and stop wildly spending resources making one tool passable at everything.


but but camgunz isn't the commercial license like a new house, three kidneys, and your cat?

3,540.00 USD to be exact :P ... each year to be exact :P


You don't need the commercial license to use Qt commercially.


Just don't link your app statically.


TL;DR: It seems to be a problem in Chromium: the relevant CSS animation (a one-second change) results in a 60 Hz update cycle.

Workaround:

  "editor.cursorBlinking": "solid"


I recently discovered the same issue in a webapp I develop... a simple CSS animation for a "loading" spinner was pegging the CPU. Using steps(n) with a low number basically resolves the issue. https://css-tricks.com/snippets/css/keyframe-animation-synta...

Kind of ridiculous that it's so easy to make this mistake.
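For reference, the kind of fix being described amounts to a step timing function so the engine snaps between states instead of interpolating opacity every frame (class name and keyframe values here are illustrative, not any particular app's actual stylesheet):

```css
/* Snap between visible and hidden rather than fading, so there are
   only two states per cycle instead of 60 interpolated frames. */
.cursor {
  animation: blink 1s step-end infinite;
}

@keyframes blink {
  from, to { opacity: 1; }
  50%      { opacity: 0; }
}
```

Note that, per the Chromium bug discussed in this thread, Chrome at the time still ticked even step animations at 60 Hz, so this reduces interpolation and paint work but may not eliminate the wakeups entirely.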


With great power (flexibility of JavaScript) comes great responsibility.


Okay, but checking at 60 Hz whether an animation needs to be updated is negligible work. In no universe should that cause 13% processor load.

Sure maybe you can get smarter and realize that all animations run at 2 hz and then also only check at 2 hz, but it seems the root problem here is an entirely different one.


Interesting, considering that CSS animations were supposed to be much better than JS animations in that regard. I guess a simple setInterval would actually perform better here.


CSS animations and raw JS animations perform equally well. JS had a bad reputation for animation only because the common libraries back then (jQuery, MooTools) had absolutely horrible, incompetent implementations.

This also has nothing to do with the blinking interval. A setInterval would simply not do the same. The CSS animation is fading its opacity at 60 fps not toggling it. Doing the same with a setInterval(.., 1000/60) will use the same amount of CPU.

As any introductory tutorial about CSS animations explains: they interpolate between the percentages you are not providing! So that's a DOM update every ~16 ms. The reason it's smooth is that opacity (like transform) does not trigger a relayout.

Finally, whatever is triggering relayouts is not the opacity change, but some other obviously badly written code that is continuously triggered on requestAnimationFrame and is touching the DOM, even though zero DOM writes should be taking place on idle. (And zero DOM reads, always! Reading from the DOM triggers synchronisation between the rendering thread and the JS thread.)
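For contrast, toggling at the actual blink rate only touches the DOM twice per second. A minimal sketch (not VS Code's actual code; the `setVisible` callback is a hypothetical stand-in for flipping a DOM node's `style.visibility`):

```javascript
// Blink by toggling at 2 Hz instead of animating opacity at 60 fps.
// setVisible(bool) is whatever applies the state to the cursor element.
function makeBlinker(setVisible, intervalMs = 500) {
  let visible = true;
  const tick = () => {
    visible = !visible;
    setVisible(visible); // only two cheap writes per second
  };
  const id = setInterval(tick, intervalMs);
  // Expose tick for testing and stop for cleanup.
  return { tick, stop: () => clearInterval(id) };
}
```

As the parent comment notes, a `setInterval(..., 1000/60)` would be just as expensive as the CSS animation; the saving comes entirely from the 500 ms interval.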


> The CSS animation is fading its opacity at 60 fps not toggling it.

Most of the cursor styles have some sort of animation, but the "blink" style is just an on-off blink with no fade. Chrome updates every animation at 60hz, even step animations like these. This is an acknowledged issue in the Chrome bug tracker as linked in the issue.


You think this is gonna be fixed in Chromium, fixed in VSCode, or working-as-designed?



13% CPU usage at the lowest c-state is also very different from 13% at an elevated c-state. I've recently spent a lot of time analyzing c-states/p-states and the power management modes of the GPU. After learning more about the complexity behind the clocks, bus speeds, etc. underlying each state, whenever I hear someone quote a utilization number for a minor workload, I want to know at what power state.

Not to take away from the author's point, just an aside that utilization numbers can be a lot more complicated when there are dozens of energy states, and the utilization might be utilization at a particular state rather than utilization at maximum power.


You mean p-states in your first sentence, right? Anything but c0 represents different levels of 'retiring no instructions' (totally idle). The rest of your comment seems accurate though.


You're correct that I just meant a lower power state.

The author mentioned blinking cursor, so it reminded me of graphics issues. A more efficient CPU state has the possibility of slowing an app due to CPU-GPU sync points. A blocking CPU in an energy efficient state can reawaken slower from GPU done notifications, so FPS is lower. So both c-state and p-states can affect performance. General point was just utilization may not be utilization at max power.

I've worked on problems where utilization was 15% at lower power and it was a problem. But to compare different workloads, it'd be < 1% at max power.


That's actually even worse: it prevents the CPU from entering deep sleep states and saving more power.

A while ago I was trying to minimize the power usage of my laptop (yeah, slow day) to maximize my battery time.

Armed with powertop, I removed every undesired process until I was left with an otherwise idle emacs (less than 1% CPU) as the last major source of wakeups. Sure enough, disabling the blinking cursor brought that down to nothing, allowing the CPU to stay in deep sleep states much longer.
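For anyone repeating the experiment, the Emacs setting in question is a one-liner for your init file (GUI Emacs):

```elisp
;; Disable the blinking cursor so an idle Emacs stops scheduling wakeups.
(blink-cursor-mode -1)
```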


Even the best-case scenario is 13% of ~900 MHz: over 100 MHz to blink 20 pixels.


The amount of bad hacking that has to happen for NodeJS to work as a platform has told me all I could ever want to know about the quality of NodeJS developers: so pathetically in love with their trainwreck of a language that they would rather pile kludge upon hack upon kludge than learn the language and environment most appropriate and most computationally efficient for the task at hand. JavaScript devs would rather just throw JS at it.


This arrogance is amusing. The only thing the web and Javascript world proved is that we are bad. In the past it was much harder to distribute crap since you needed to, you know, find and install everything. Since websites are so frictionless, now we get to experience everything.

And guess what, 90% of everything is shit.

The current "JS devs" are the former "PHP/Java devs" and the former "VB/Delphi devs" and the former "C/Cobol devs".

They're us.


My problem isn't with the bad JS (or, in the past, PHP/JAVA and so on) devs themselves, it's that they insist on building an ecosystem out of a really bad platform. Why? Because JS and this "leverage existing skillsets" bullshit. How about leveraging your brain to learn a more appropriate language?


Do you know of a "more appropriate" language for creating cross platform apps that work on Windows, Linux, MacOS, BSDs, iOS, Android, 4k screens and 480p screens?

I don't know any.


C? C++? QT? Java? Any language with GTK hooks? Xamarin? Juce framework? Delphi??

You say 4K and 480p screens like that's hard. Design once, scale forever?


C is unsafe and is too low level.

C++ is possibly the only language with more bad parts than Javascript :)

Java ok, but how would you run it on iOS? The last viable option for that (RoboVM) was taken behind the shed and shot by Xamarin/Microsoft.

GTK doesn't run on mobile and it's barely supported on Windows and MacOS.

Xamarin is ok, but it used to be closed source and cost $1000 per year for any serious project.

I don't know Juce, but from what I can see it's a C++ framework, so see C++ :)

Delphi? Zombies don't count ;)

The most viable contenders for modern cross platform software were marred by bad corporate ownership: Java, C#. They've kind of gotten back on track recently but I'm not sure they can catch up to the web-train.


This reminds me that as part of the Windows95 development effort, Microsoft disabled per-second updates of the taskbar clock to improve performance. Raymond Chen wrote a bit on it back in 2003:

https://blogs.msdn.microsoft.com/oldnewthing/20031010-00/?p=...


I've made clocks before, and tried to either have them woken up when it's (almost) time to update - or have them sleep long enough that it'll be (almost) time to update. Only seems polite.


On the other hand, iPhone's "Clock" app has an icon that shows the correct time with a super smooth seconds indicator. Which doesn't drain the battery.


It probably does drain the battery if you leave it on the screen for a while. They can get away with it because people usually aren't looking at their app icons for extended periods of time.


The Windows 95 optimization related to the blinking clock making it difficult to page code out... which was very important on systems at the minimum 4MB RAM required by Windows 95.

I suspect that iPhone's "Clock" app would have difficulties both with smoothness and battery life if constrained to 4MB of RAM and paging...


How fast does it run on 1995 hardware though?


Pretty sure any sane software implementation of a simple seconds hand could easily handle a 1 Hz update while using little to negligible CPU time on a desktop processor from the 90s onwards.


The Windows 1.0 clock (1985) had a second hand sweeping at 1 Hz that ran on a 4.77 MHz 8088. It's really just a couple of clipped line segments each second.

http://variableghz.com/wp-content/uploads/2012/12/windows-1....
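For scale, the per-second math for a clock hand is a couple of trig calls. A sketch (names are illustrative, not the Windows clock's actual code):

```javascript
// Compute the tip of a clock second hand. Angle is measured clockwise
// from 12 o'clock; (cx, cy) is the dial center; screen y grows downward.
function secondHandEndpoint(seconds, cx, cy, radius) {
  const angle = (seconds / 60) * 2 * Math.PI;
  return {
    x: cx + radius * Math.sin(angle),
    y: cy - radius * Math.cos(angle),
  };
}
```

Drawing is then one line from the center to this point, clipped against the previous hand's bounding box: trivially cheap even on a 4.77 MHz 8088.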


Yes, that was the point I was making. This is something computationally trivial as long as the code is written appropriately.


Keep in mind though, that the performance optimization the Windows article talks about is how to reduce the working set in memory more than the CPU consumption.


...21st century desktop application development ladies and gentlemen (not that I'm proposing anything constructive).


Throw it in the bin and return to the old ways, but with modern testing, source control, and static analysis?


In the GDI era, the caret bar (that's not a cursor, sic!) was rendered by simply inverting pixels (an InvertRect() call) in place in the video frame buffer. A very cheap operation that does not require redrawing or running any other code.

With the GPU, the only viable option for rendering a blinking caret is to redraw the whole window. That's why it takes so much CPU, as Chrome mostly uses a CPU-based rasterizer.

But redrawing the whole window in GPU is not that bad if the renderer is capable of caching GPU objects while rendering.

Here is what happens in Sciter (https://sciter.com) while rendering a blinking caret in a screen of similar complexity (an editor with syntax highlighting):

https://sciter.com/images/sciter-caret-cpu-consumption.png

As you can see, CPU consumption is near zero.


"With the GPU the only viable option for rendering blinking caret is to redraw the whole window."

Sorry, that is plainly false. There is nothing preventing you from treating an offscreen buffer just like any other buffer of non-dirty pixels. Treating the back buffer that way is slightly less conventional but is still just fine.


> There is nothing preventing you from treating an offscreen buffer just like any other buffer of non-dirty pixels.

You need:

1. The ability to invert pixels from CSS/JS. No such feature exists, in principle, for many reasons.

2. Even if you were able to invert those pixels in the offscreen bitmap, you would need to send that window's offscreen bitmap to the GPU on each caret blink. You can use tiles and do partial CPU->GPU data transfers, but still.

3. If you use an offscreen buffer you are almost always using CPU rasterization. CPU rasterization is an O(N) operation, where N is the number of pixels.

On high-DPI monitors (200-300 dpi) the number of pixels is 4-9× larger than on "standard" 96 dpi monitors, and CPUs have stayed roughly the same over the last 4-6 years. So if you want your app to run on modern hardware, the GPU is the only viable option for rendering; forget about offscreen bitmaps and the like.
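The pixel-count arithmetic behind that claim, as a quick sketch: since CPU rasterization is O(N) in pixels, cost grows with the square of the DPI ratio.

```javascript
// How many times more pixels a screen has than a 96 dpi baseline,
// assuming the same physical size. Quadratic in the DPI ratio.
const pixelRatio = dpi => (dpi / 96) ** 2;
// pixelRatio(200) ≈ 4.3, pixelRatio(300) ≈ 9.8
```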


None of the 3 things you said are true. I recommend you get some experience in rendering before you mislead people too much with these kinds of comments.

In reality the problem is trivial, you set up a scissor rect (or explicitly mask the pixels in your shader) and then render only stuff overlapping that square. You don't need to invert the pixels for it to be fast; you can render an arbitrarily nice cursor effect.
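The scissor idea can be sketched in software terms: mark the caret's rectangle dirty and touch only those pixels, leaving the rest of the frame untouched. A toy model, not any real renderer's code:

```javascript
// Re-render only the pixels inside a dirty rectangle of a framebuffer
// stored as a flat row-major array. shade(x, y, old) produces the new
// pixel value (it could draw an arbitrarily fancy caret, not just invert).
function redrawRect(buffer, width, rect, shade) {
  for (let y = rect.y; y < rect.y + rect.h; y++) {
    for (let x = rect.x; x < rect.x + rect.w; x++) {
      const i = y * width + x;
      buffer[i] = shade(x, y, buffer[i]);
    }
  }
}
```

The cost is proportional to the caret's area, not the window's, which is the whole point of the scissor rect.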


I am not sure I understand why you need clipping at all to render a rectangular caret bar (note: a cursor is a different entity in professional UI jargon).

What exactly do you want to be clipped out?


I am assuming that your caret bar may be overlapping text in some way, or that there is a background bitmap that you might be alpha-blending against, etc. Basically I don't want to make an assumption that might break if the UI gets nicer. The case of a strictly opaque strictly rectangular non-antialiased non-smoothly-moving bar does not seem very interesting or nice-looking.


Are you speaking about some particular implementation, or is this all just from your imagination?


You are talking to someone who has done 3D rendering professionally for 21 years. What's your background?


>>In GDI era caret bar ( that's not cursor, sic!) was rendered as by simply inverting pixels (InvertRect() call) inplace in video frame buffer.

And on an old 8-bit system, blinking a cursor involved toggling one byte (character code) in video RAM. How far we have come...
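On a typical 8-bit text-mode machine (the C64-style layout below is an assumption for illustration), the blink really is one XOR on the byte of screen RAM under the cursor, flipping between a glyph and its reverse-video twin:

```javascript
// 40x25 text screen full of spaces (0x20), as on many 8-bit micros.
const screen = new Uint8Array(40 * 25).fill(0x20);

// Blink the cursor: bit 7 of the character code selects the inverted
// glyph, so one XOR toggles it on and off.
function toggleCursor(screen, pos) {
  screen[pos] ^= 0x80;
}
```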


Sounds like this can be fixed by moving away from animating opacity to something more rudimentary like flipping the display property in JS. CSS3 animations from my experience are a bit taxing in general, and something you want to use only in small bursts, not constantly running in the background.


That would, but right now the cursor fades in and out, it doesn't "blink" exactly, so this suggestion is not a 1:1 replacement.

Sounds like they need to optimize their CSS animations in general.


This is where Bert Bos (who jokingly created the notorious, unofficial, much-abused blink tag: https://www.w3.org/Style/HTML40-plus-blink.dtd) should have a little "told you someone would need it someday" moment ;)


Ah yes. The classic text editor debate thread. In this thread you can expect to find folks claiming that 2 seconds of startup time for editors, like Atom, is so disruptive to their workflow that they'd rather use notepad.


I've switched to VSCode as my go-to text editor and you understate the real impact the poor performance has on my workflow. It's not just startup, almost everything has a 250ms-2s delay; it's not a lot each time but it adds up and is frustrating.


VSCode's best use is not text editing; its main target is to work as a small version of a full-blown IDE. Use the right tool for the job.


Actually, I use notepad for most things on windows specifically for the startup time... only I replace it with akelpad since it's a little more sensible, but the point remains.


Those kids! On my lawn! When we did it, it was uphill, both ways! I prefer letting butterflies flap their wings to change the atmospheric composition in such a way that the cosmic rays flying through the atmosphere bend to flip bits on this disc.


Not to defend Electron (I use neither VSCode nor Atom), but meanwhile I'm over here watching the JVM consume a constant 10% of my CPU thanks to having a single 60-line file open in the official Arduino editor (which isn't even an IDE, it's a glorified GUI for compiler flags with a built-in syntax highlighter).


The Chromium bug describes the root cause, which is a fixed schedule interval for CSS animations:

> The JS implementation uses an interval of 500ms between updates while native animations will be updating at 60Hz. At the moment we're not smart enough to deschedule ourselves during animations with step timing functions.

https://bugs.chromium.org/p/chromium/issues/detail?id=500259


The fact that animating a cursor at 60Hz requires non-neglible CPU is still a little sad.


You know how there's like, hard real time embedded programming where if you don't hit your realtime deadline every time without exception forever, your engine explodes or something?

Well, apparently the web is built on whatever the opposite of that style of programming is.


whatever the opposite of that style of programming is.

Is it "fun"? I bet it's fun.

Or easy. Or convenient. Or low effort. Or relaxing. Or accessible. Or cheap. Or quick.


Or stubborn.


Reminds me of this ancient Firefox bug about the performance cost of the throbber: https://bugzilla.mozilla.org/show_bug.cgi?id=437829


I hope y'all get the chance to boot Turbo Pascal 7.0 (DOS) on a P5-class CPU some day. It's pretty sad by comparison: a 700 KB IDE with a decent language compiling to native code, some form of live checking, instantaneous compilation times, modular programming, online help, multiple windows, and a cult-classic color scheme. It hurts.


Developers often work on extremely fast machines for their own productivity/sanity but this is a reminder that it can help to test on a slower setup from time to time. Or more generally, test edge cases.

For instance, if you have a graphics tool that briefly flashes screen updates, the problems are much easier to see. On a fast system, you might not only miss an unnecessary refresh of “everything”, you may miss a repeated refresh of the same content.

Also, on slower systems, the cost to generate a frame may delay an entire sequence. Consider something like “live resizing”: on your spiffy machine it seems fluid, on a lesser machine it might be stuttering like crazy. Sometimes you have to cheapen the computations occurring during rendering to make sure it’s OK.


This ... isn't an edge case.


I recently switched from Atom to VS Code and have noticed a considerable reduction in lag, especially when using CMD+D and searching through project files.


Same here - the lag with just doing simple things in Atom, like moving the cursor, became too much.

Since I made the switch I've found VS Code to be quite nice. I miss having Hydrogen available but I can always run jupyter notebook if I need something like that.


I had something like this happen with QNX, back in 2004 when we were using it for the DARPA Grand Challenge. We had an industrial x86 computer system that was running headless, with no display. But the device had a minimal VGA controller on the motherboard. So QNX brought up a screen saver on the slow VGA controller, where reading from display memory was very slow. The screen saver was reading from display memory to move the screen saver box around. This used up about 15% of the CPU.

The QNX people were really embarrassed about that and fixed the screen saver. We just reconfigured to turn off the display entirely.


This comment section deserves to be posted in some kind of drama-oriented section, a little like /r/SubredditDrama.

Personally I'm waiting for the second net bubble to burst, so that we can force everyone to use a stricter markup language.

Everyone is seeing how Android apps are slowly but surely making HTML completely obsolete, and honestly that's an awesome thing, because it's really needed.

I hope the tech market realizes that and evolves quickly, instead of waiting for some battery breakthrough.

Never forget Gary Bernhardt's talk about the birth and death of JavaScript.

Those issues are why I will always target C++/Java jobs and laugh at anything related to the "web". I never run short of analogies to invent about HTML/JS. It's like comparing sticks and stones to a decent steam engine.


It is easy to do screen redrawing wrong. IntelliJ IDEA had a similar problem:

https://blog.jetbrains.com/idea/2015/08/experimental-zero-la...


An internet points out that Hackernews' favorite text editor takes up more CPU than the average Hackernews would have had at their disposal 20 years ago just to make the cursor blink on and off. Hackernews circles the wagons to justify the stupid engineering design decisions behind said editor based solely on the fact that embedding a complete Web browser just to draw buttons and text fields "won" over any sensible GUI implementation, and they can't live without the crutches VSCode provides when shitting out Go microservices.

Many of these same people will argue vehemently that X11, the shitty GUI layer for Linux that ran perfectly fine 20 years ago, is "slow" and "bloated" and needs to be replaced with Wayland, a new, completely different, shitty GUI layer.


You're wrong and don't know what you're talking about.

I dislike the implementation of Atom and have been highly critical of it on HN, even crashing a release thread once by pointing out how harebrained it is to implement complex text layout on top of browser APIs when the browser has access to a much richer text shaper itself - we'd know because we wrote it for Qt originally.

Because I've also worked on KDE for 12 years, wrote a big chunk of Plasma 5 and am one of the people porting it to Wayland. We want Wayland for many of the same reasons that make Atom bad, such as state synchronization problems and overhead with X11 (along with its very dire security story).

The intersection you suggest isn't real or doesn't matter. No one working on Wayland uses Atom.


I don't think one should take obvious satire too seriously. There's not enough time in the day to point out details in posts that are written just for laughs.

However, just to be pedantic ( ;-) ), I'll have to point out that "a big part of the people defending VSCode in this post agree that X11 is slow" does not imply "a big part of the people agreeing that X11 is slow defend VSCode in this post". The implication doesn't reverse, since one set of people is quite likely much larger than the other.

Edit:

Oh, and since you mentioned working on Plasma for over a decade: Thank you for making awesome FOSS! :)


:) Fair enough. I am a bit less cranky after my morning coffee now ;). Thanks for the warm words!


I'd say you absolutely have to point out factual flaws in comedy. Comedy is truth, that's why you probably believe that David Cameron put his penis in a pig's mouth and Donald Trump was peed on by Russian prostitutes. There's no evidence, but it's funny so you believe it anyway.


Best bit is that X11 runs really well these days -- responsive, fast, reliable. Emacs runs great, gitk, xterm, xosview, mplayer window manager, even Firefox is alright. Renoise, Maya, Blender too. Not to mention network transparency when I need it. It's a great environment to get my work done, and 3x hi res displays is the stuff I dreamt of 15 years ago.

So it's a good job we're about to throw it all out and start again, eh folks?


If by "responsive, fast, reliable" you mean awful screen tearing when you mix X11, OpenGL, and/or video playback. Fixing it on any given machine is often possible through some voodoo and a hand-rolled combination of driver and software settings, but that voodoo works only for one specific combination of hardware and driver version, and every update means more rounds of trial and error...


This is not because of X11 the protocol though, just implementation bugs. Thus not a good reason to start from scratch.


How would you suggest we fix the core pointer protocol? (It's not an extension)


It's the GNOME style of development: as soon as a bug appears which takes effort to fix, mark it EWONTFIX, declare it to be a symptom of the inherent brokenness of the current stack, then burn the whole thing to the ground and start over. Don't even let that which was burned fertilize new growth; it is tainted and must be purged entirely.


How have you managed to make an argument about embedding a web browser about GNOME?

As it were, GNOME's IDE (which I wrote) uses 0% CPU at idle.


Gedit is fantastic. Thanks for all the work on it. I used it for years before VS Code, and I still use it outside of work on my personal projects.

Amazing how much work you can get done by just typing the damn code into the editor, sometimes.

However, writing typescript in vscode genuinely changed the way I think about editors. It's a smalltalkish sort of feeling where my code (at least the textual level) lives and breathes in the editor, automatic tooltips that are actually relevant, etc.

Idk. I like both.


I work on Builder, not Gedit. However, Gedit does get various features as we push them up into Gtk/GtkSourceView. The overview map is one such example. The pixelcache in GtkTextView enabling smooth 60fps scrolling is another.


To be fair, the conversation had already shifted to X11/Wayland. GNOME does have a history of starting over from scratch every few years; you probably had a good reason for writing Builder instead of working on Anjuta, and that's just the way the GNOME community seems to do things.

This makes people, especially power users, disproportionately angry because when you rewrite something, it will be different (that's sort of the whole point); this is compounded by the fact that at the beginning it will be feature-poor compared to the older system.

"Why did you replace X with Y? With X, I could configure it to treat mouse-button-3 as mouse-button-2, but only when my USB mouse was not attached and I wasn't holding down any modifier keys."

See also: https://xkcd.com/1172/

Before I did any serious software development, I was on that side of the table being frustrated (I still cry a little inside any time I remember Galeon. Rest in Peace, my favorite browser).

I can also neither confirm nor deny that my windows are being decorated by sawfish and that I wrote a compositor in librep so that I could get modern features while using it.


> GNOME does have a history of starting over from scratch every few years;

The GNOME project will be 20 years old in a couple of months. So while it might feel like "every few years", we really don't change direction all that often.

As you can imagine, those that show up to do work have a great deal of say in where the project goes.

> you probably had a good reason for writing Builder instead of working on Anjuta, and that's just the way the GNOME community seems to do things.

> This makes people, especially power users, disproportionately angry because when you rewrite something

You've sort of made my argument for not changing Anjuta over the years. We knew the amount of stuff that had to be changed would leave both the code-base and UI looking very little like Anjuta. Best to let people continue using it in the mean time.

> (I still cry a little inside any time I remember Galeon.

It lives on as Epiphany and is the primary driver of the WebKitGtk backend.


> Not to mention network transparency when I need it

You realize that no X client draws like this nor has for 20 years right? They all use xshm to upload pixels so this "network transparency" is just buffer copying over the network. Not entirely different from a texture upload to a GPU.

> So it's a good job we're about to throw it all out and start again, eh folks?

Yes, by those of us who have been working on the same platform you claim to love, for the better part of a couple decades.


> You realize that no X client draws like this nor has for 20 years right? They all use xshm to upload pixels [...]

You seem to be misinformed. Of the applications I mentioned, the main ones (gitk, xterm, emacs, xosview) are not doing that; seen clearly by analysing the X traffic.

Of course, some of these could be built with an optional Qt front-end, in which case I would not be surprised.

So you stated clearly "all" applications (for "20 years") which is provably incorrect; and I home in on this because it's a recurrence of another 'well, you never had it before' argument (that really might be better directed at the decaying state of X11 client toolkits)

X11 isn't perfect, I have no doubt something better can be engineered (especially if it involves those who worked on X11). But people are missing the point that to replace X11 is a different matter from designing a better one. It involves acknowledging those who are quietly and successfully using X11 on a daily basis for the majority of their work and existing applications -- not telling us how wrong we are.


That's precisely why I stated 20 years. Those programs were written before that and largely haven't changed since.

If you want anti-aliasing in any of those above (gitk/emacs come to mind) you'll be doing client side rendering of fonts and copying pixels.


> no X client draws like this nor has for 20 years

If something was written a long time ago and hasn't changed in this respect, BUT is still widely used... you can't say that "no X client draws like this nor has for 20 years."


"Copying pixels" is the dominant mode of operation in other remote GUI protocols like RDP, so it's not the end of the world.


Actually, RDP is GDI over the wire, which is not too dissimilar from X.

RDP has efficiency gains because clients send pixmaps over the wire to be stored on the server, and then send draw calls to display those pixmaps in certain places, composing a display. You can do this with X11 too, but the GTK developers don't want to because using the protocol is hard ;_;


gtk+ certainly does use server-side textures and positioned draws. That is why the documentation is so adamant about using API like create_similar_surface().

Even the pixelcache uses server-side textures when available.

This is what Xshm does and why everything modern uses it. You get a server-side texture, but mapped into client address space so you can do direct draws and then XCopyArea() into your final location for double-buffering.
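
A rough analogy in Python (my sketch, not actual X11/Xlib code; the frame geometry and pixel format are made up) of why shared memory beats pushing pixels through a socket: the "client" writes into a shared segment and the "server" reads the same bytes in place, with no per-frame copy over a connection.

```python
# Analogy for the MIT-SHM idea: client and server share one buffer,
# so pixel data never has to be serialized through a socket.
from multiprocessing import shared_memory

WIDTH, HEIGHT, BPP = 640, 480, 4  # hypothetical frame geometry (RGBA)

shm = shared_memory.SharedMemory(create=True, size=WIDTH * HEIGHT * BPP)
try:
    # "Client" side: draw directly into the shared buffer.
    shm.buf[0:4] = bytes([255, 0, 0, 255])  # one red RGBA pixel

    # "Server" side: attach to the same segment by name and read it
    # in place -- no copy crossed a connection to get here.
    view = shared_memory.SharedMemory(name=shm.name)
    pixel = bytes(view.buf[0:4])
    view.close()
finally:
    shm.close()
    shm.unlink()
```

The real Xshm flow then finishes with the XCopyArea() into the final drawable described above; only that blit request, not the pixels, goes over the protocol.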

But gtk+ isn't really in the position to be able to control everything so precisely all the way down the driver stack to network transparency layers. So unless you connect to our process directly, like the HTML5 broadway backend, there is only so much we can do.

The broadway backend does employ various techniques to reduce the amount of content passed using a sort of rolling hash.

https://git.gnome.org/browse/gtk+/tree/gdk/broadway/broadway...
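
The idea behind that kind of content diffing can be sketched in a few lines. This is my simplified per-block-hash version, not the actual gtk+ code (which uses a rolling hash); the block size and frame data are made up.

```python
# Sketch of hash-based frame diffing: hash fixed-size blocks of each
# frame and retransmit only the blocks whose hash changed.
import hashlib

BLOCK = 64  # hypothetical block size in bytes

def block_hashes(frame):
    return [hashlib.sha1(frame[i:i + BLOCK]).digest()
            for i in range(0, len(frame), BLOCK)]

def diff_blocks(old, new):
    """Return (offset, data) pairs for blocks that differ between frames."""
    changed = []
    for i, (h_old, h_new) in enumerate(zip(block_hashes(old),
                                           block_hashes(new))):
        if h_old != h_new:
            changed.append((i * BLOCK, new[i * BLOCK:(i + 1) * BLOCK]))
    return changed

frame1 = bytes(1024)
frame2 = bytearray(frame1)
frame2[100] = 0xFF  # a single byte of the frame changes
updates = diff_blocks(frame1, bytes(frame2))
# Only the one 64-byte block containing offset 100 needs to go over
# the wire, not the whole kilobyte.
```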

I expect forthcoming app/display network transparency layers in GNOME's Wayland compositor to employ a similar strategy.


Indeed. Especially since TCP-based X would require that you stall operations if you drop a packet so that draw-ordering is preserved.


Could have sworn that there exist multiple takes on extensions for that, but none that has been rolled into Xorg proper. This is either because of licensing, or because the current devs have GPU stars in their eyes.

BTW, it is downright funny how just about every Gnome guy I have encountered online seems to come across as a pedantic grump that would not know a joke if it fell on his head...


Interesting how one person is a common denominator in all those interactions ;)


And most of the comment gets ignored for a cheap retort, as expected.


you've mentioned something i've not heard about without linking to it or providing a name to help me find it myself. so i'm not sure what you'd like from me.


If you want to optimize for choppy networks that have too much packet loss for interactive TCP applications, then the current X11 is not a good fit. But that would be a whole different niche.


It reminds me that WebKit is a grandchild of Qt: Qt > KDE > KHTML > WebKit.


In 2000, I don't think many open source X clients (typically Linux stuff) used xshm. Not using X any more, I can't really comment on the current scene. But apart from things like MAME and vlc, what app needs it? And web browsers, of course.

(Accelerating OpenGL in hardware but displaying in an X11 window is a different question.)


> X11 runs really well these days -- responsive, fast, reliable

Well, you haven't seen Windows then - the graphics stack is phenomenal and a marvel of engineering. nVidia drivers crash? I only get a second of black screen and then resume my work. Yep, that's right - no other GUI program crashed, I didn't have to do anything, literally just 1 second of black screen.

Oh and you can have one window on two monitors and both parts of window will have full vsync - insane, huh? :)

It's scary how good Windows is.


I don't know much about graphics stacks, but in my experience, at least from the perspective of someone using multiple HiDPI displays, macOS is far and away the best. Many Windows applications don't scale properly, or if they do, they require special settings to do so. If you have monitors with different levels of scaling, you're going to have a terrible time on Windows.


On the other hand, connecting a 4K display on Windows defaults to configuring it at 200% scaling; on a Mac it defaults to rendering everything tiny as ants.


I think it's only 150%, currently running windows on a 28" 4k monitor makes me think 200% would be too much?

Oh I should also add that I haven't ever had any of the DPI issues the parent's parent is referencing. The only problem with multiple display DPI in Windows 10 is the shockingly bad fuzz you get on your secondary display from the thing being rendered either smaller or larger than normal (depending on whether the 4k is your primary or secondary) and then scaled up or down to fit the monitor.


It may somehow be detecting the size of the display. My 13" laptop likes 250% scaling, while my 27" monitor likes 150%.


That would make sense, your DPI scaling is all to do with readability after all :)


Windows 10 refuses to work properly with my 4K Dell monitor.

After it goes to sleep, then I wake it up, all open windows have been resized into a tiny part of the screen and scaling goes weird.

https://duckduckgo.com/?q=windows+4k+monitor+resize+after+sl...

Terrible.

Having said that on macOS my external 5K LG monitor is causing complete system crashes now and again :(


I have a 43" 4k screen. It's near impossible to make Windows NOT use scaling, even if you disable it everywhere, the next Windows updates usually reset your carefully created registry hacks.

I wish Windows based scaling on DPI instead of resolution, the system seems to be aware of both. On a more general level, I wish there was any hope of passing feedback to Microsoft/Apple/etc.


>... in mac it defaults to render everything tiny as ants.

I'm a fan of no DPI scaling (100%) at 4K, at least on my 27" monitor. It takes 2-3 months of getting used to, but once your brain and eyes adapt, significantly lower dot pitches become completely unusable. The only thing I change is bumping up my terminal or editor's default font size a tad.

That said, I'm not sure how people with 24" 4K monitors do it without DPI scaling. I'd probably even prefer 30" myself.


>...significantly lower dot pitches become completely unusable.

Correction/clarification: significantly higher* dot pitches, as in lower pixel density. "Completely unusable" was meant in the sense of how it'd feel to return to 800x600 after being accustomed to 1080p. 4K is four times 1080p, so it's roughly comparable.
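
Just to ground the "four times" figure:

```python
# 4K UHD (3840x2160) vs. 1080p (1920x1080) pixel counts
uhd = 3840 * 2160   # 8,294,400 pixels
fhd = 1920 * 1080   # 2,073,600 pixels
ratio = uhd / fhd   # exactly 4.0
```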

It wasn't my intention to offend anyone with poor eyesight, or suggest that people ruin theirs. Just that it's possible to get used to really low (dense) dot pitches, and once you do it's simultaneously really enjoyable and weird at the same time.


I once used a 15" MBPr without DPI scaling (2880x1800 native res) for a while. Then I became too worried about my eyes.


As someone who remembers NT4 video driver crashes bringing down the whole system, when I saw Windows 10 recover from a video driver (yes, nVidia) crash I was massively impressed.

There's always room to improve but I'm really happy with the stability improvements they've put in.


FWIW my graphics driver crashes on Ubuntu always recover. Even a complete GPU hang. It sees the driver is hung and restarts it, causing a few seconds of minor glitching.

Configuring X is not for the faint of heart, but in Unity it is basically magical and deals with HiDPI displays, etc. just fine. Feature for feature it is very similar to Win or Mac on the display and GPU driver front. The total package still feels rougher around the edges, but it is still good.


My only issue with configuring X is that 'load "glx"' will not fetch the correct libglx.so with nvidia drivers loaded unless you tell it to go looking in /usr/lib/nvidia-[VERSION]/ :| took me a very very long time to get GLX working on my setup.


Yes, and it only took 30 years, too!


Windows' stack is still a ways from being reliable in a lot of somewhat important cases like (real) fullscreen, but it's certainly miles beyond X11 and Linux.

(I don't have enough experience with Mac's to speak to it).


Yes, but only if you have the Aero compositor turned on... which adds at least one frame of latency to everything. Without Aero, Chrome can't play video without tearing like crazy. Oh, and that white pixel in the top left corner of the screen if you've got Aero turned on but turned off the annoying as crap transparent windows. Yeah, scary good...


I think that's a case of necessity being the mother of invention. IME the windows 10 drivers crash much more frequently than the linux or windows 7 ones ever did.


I don't think it's fair to blame the OS for driver crashes. Those are a result of third parties and can happen on any platform.


That varies on the OS/driver/hardware, but IMO, the new AMD graphics drivers being in the kernel tree are the way to go, or something similar like a partnership between MS/AMD/Nvidia.

As a consumer it's incredibly frustrating to have a buggy driver and not know who is responsible. Is it MS? Windows comes with a lot of drivers so blaming MS seems fair. Is it the hardware manufacturer? Sometimes you can get the latest drivers but the OEM hardware isn't quite standard so you're screwed. Is the OEM to blame? Usually, because they have their own driver update system, but then the question is why can't they use the native windows update system?

The current situation on windows seems to be that no one is responsible.


> then the question is why can't they use the native windows update system?

Cynical answer: I'll grant that GPU vendors are less bad than e.g. printer or smartphone vendors, but Windows Update just distributes drivers, so it doesn't provide all the opportunities to upsell to, advertise to, and lock in users that pairing their bundled crap with the driver in their own installer allows.


Many vendors do use the Windows Update system. Looking at my old Windows 10 box, in the last three months it's received display driver updates from Intel and nVidia (it's an Optimus system so yay, twice the driver update joy). It's also got a mouse driver update from something called ELAN.

Also, Windows has this thing called minidrivers where Microsoft essentially writes a chunk of your driver for you (the generic chunk), and you only have to write the bits specific to your device. The idea is that Microsoft could QA their drivers better than J. Random OEM ever could, and so this'd reduce cost for OEMs and also make the Windows platform more stable.

https://msdn.microsoft.com/en-us/windows/hardware/drivers/ge...


You're making this harder than it needs to be. All Windows drivers are signed. Blame the party that signed the buggy driver.


The point is, users will blame windows because they don't know what a graphics driver is or that they have one, so MS took steps to avoid people thinking windows is buggy. MS has a long history of this, going as far as reproducing bugs.


You sign an executable to attest that it is authentic, not that it is bug-free.


X11 may "run really well these days" if your reference is X11 twenty years ago, but if you compare X11 to e.g. macOS, it's pretty obvious why Wayland is needed.

X11 network transparency was a nice feature when 10 Mbit Ethernet was hot stuff, but these days remote desktop protocols offer a more practical alternative.


To be fair, X11 can't handle multiple displays with mixed DPI.


You sort of can with xrandr scaling.
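
Something along these lines (a hypothetical invocation; output names are machine-specific, so check `xrandr -q` first), treating the low-DPI panel as if it had twice the pixels so its content visually matches a neighboring HiDPI display:

```shell
# Pretend a 1920x1080 output is 3840x2160; the output name
# (HDMI-1 here) varies per machine.
xrandr --output HDMI-1 --scale 2x2
```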


Last time I did that, my CPU was always spinning and the rendering looked blurry and non-crisp. Now I just prefer to reduce my emacs and terminal font size, and keep Chrome in one display forever.


Don't forget to install the plugin that actually allows you to run a web browser inside your web browser-based editor:

https://atom.io/packages/browser-plus


I enjoyed your comment. The non-funny response is that all GUI layers suck, possibly because visual interfaces and abstract code are inherently incompatible. Since no one has managed to make the perfect GUI, we constantly look to improve on the tradeoffs we must make, which are different for different applications. 13% idle sounds egregious but higher battery usage isn't a problem for most if it excels in the areas that matter: responsiveness, design, power, completeness. In contrast, Wayland improves not just idle, but also overall performance, which is the main consideration for a low level API.


"3% idle sounds egregious but higher battery usage isn't a problem for most"

Oh God this is why we can't have nice things. How the heck is someone actually justifying this stupid shit here?


What do you find stupid? The way I see it, HTML/JS is a pretty good higher-level GUI language, and much effort has been spent making it work well. The overhead is worth it if it makes developers more productive.


Is it? The costs which you offset with developer productivity get multiplied by the number of users, which is orders of magnitude more than the number of developers.


And value delivered also gets multiplied by the number of users. VS Code users decided that they like it even after accounting for Electron. Clearly, then, VS Code has certain strengths, and I'd argue that some of those strengths are made possible by Electron. Second, great design requires experimentation, and Electron makes it easier to experiment with editor UI. VS Code could become a place to prototype ideas that eventually improve your more efficient editor.


While I agree that "much effort has been spent", it is very hard to agree that HTML, which was meant for marking up static documents, in combination with an ad-hoc scripting language, is "pretty good" for GUIs. More like "possible to work around a bit".


Fair enough, but double-digit usage of an entire core... when the program is IDLE? how can you justify that? That's just broken.


They will indeed fix this. At minimum, they can freeze the editor when it's out of focus. So there is no need to justify--this is a minor oversight, as someone phrased it over at Slashdot.[1]

I was responding to those who believe that this demonstrates a fundamental flaw in using Electron. My response emphasizes the tradeoffs involved. If they couldn't fix it, 13% background is undesirable, but it's not a dealbreaker for me if there are redeeming qualities. And it seems that VS Code certainly has some redeeming qualities.

[1] https://developers.slashdot.org/comments.pl?sid=10406465&cid...


> higher battery usage isn't a problem for most if it excels in the areas that matter

As engineers it's our responsibility to not spend our users' resources unnecessarily. This thinking is how we got to the bloated web where pages need a megabyte (or several) of code just to render. Could you imagine an architect saying, "yeah, this design costs 2x that other one, but my clients are rich so it doesn't really matter"? The old saw that "anyone can build a bridge that stands up; it takes an engineer to build a bridge that barely stands up" applies.


I'm glad at least we made fun of both sides.

I really wish there was a good option for GUI toolkits. Nearly all are either too primitive to produce more than freshman-college (or tenured-physics-professor) level work, or asymptotically approach web browsers without all that tedious attention to improvement and accessibility.


Qt pretty much nails this balance, WPF isn't bad either. Both will run circles around browsers at rendering performance and are much less tedious to develop in unless you've only done web dev before.


Such a pity that WPF was never ported to other platforms. That's a UI framework that was done right in my opinion. Fast, incredibly flexible and sane.

That it's been relegated to boring internal only enterprise app development is pretty sad. I enjoyed working with it immensely.


Avalonia is a decently nice cross platform C# + XAML GUI toolkit. It doesn't cover every use case WPF does, of course, but I've still found it pleasant to use.


It also suffers similar problems to the one posted here (or at least used to), an idle app will consume quite a bit of CPU.


A WPF app is just a .Net app. I can't recall the problem described ever having been an issue.


AppKit/Cocoa is very good too, if not cross-platform. Consistent, dynamic, extensible, and there's so much your apps get for free, and new features often become retroactively available for past apps, like when NSDocument got autosaving, and now tabs for multi-document apps, the wide-gamut-aware color pickers and so on.


A project I've been keeping my eye on is libui (https://github.com/andlabs/libui), it's basically a thin wrapper around the native libraries (very SWT like) that can be used in a number of languages.


I have done both and I am not at all a fan of QT.


If you learn React, you can write for web, multi-platform desktop and (native) mobile, with React Native. Same can't be said for Qt or WPF.

Also the Qt community is minute compared to the JS community, that is significant.


"If you learn how to build doghouses, you can build skyscrapers, bridges, and nuclear power plants."

Use the right tools for the job. Trying to shoehorn everything into "the web" makes everything just as shitty as the web.


> Use the right tools for the job

i wonder if this expression constitutes a thought-terminating cliché.


It sounds a bit like one, but I think the point is that while the web can be used for everything, that doesn't mean it should. Like if we built XML into filesystems directly, instead of doing it just in the layers above.


It does, inasmuch as objective metrics for "correctness" don't exist in our field. The best we can do is show "incorrectness." It's absurd to say, "My tool is correct and therefore should be used," when all we really can demonstrate is, "My tool lacks these specific deficiencies I can identify in another choice."

This bug comes nowhere near demonstrating that Electron or VS Code or Atom are built incorrectly or are incorrect tools.


> Trying to shoehorn everything into "the web"

'Trying to shoehorn' is just your opinion.

> Use the right tools for the job.

What does that non-sequitur even mean in this context? Use a different native platform to write each native version of the app? That simply isn't viable for anything but very large companies.

Or should I use Qt, with its tiny support community?

Also why is React/React-Native not 'the right tool'?

> [...] just as shitty as the web.

Again your opinion. In my opinion the web is a joy to work on compared to the cesspool that is native app development.


It's unfortunate that your opinion (that two of the most successful programmers' editors in use today are doing something right) is being heavily downvoted by people who object to the "web" invading their territory for reasons unknowable.

"Eating too many CPU cycles while idle" is a problem many, many text editors have faced. It's purely an unfair bias in this case that this is justification for trying to invalidate the project.

By the way, if you'd like to really throw a stick into Qt proponents' spokes, ask them what sort of accessibility story Qt has for custom-rendered components. Ask, "How can I support a colorblind or legally blind person with this component?"

Qt's developer story here is not nonexistent, but it throws into sharp contrast how _complete_ the web is from the perspective of accessibility for people who need auditory assistance or do not use conventional input devices.


Yes it is sad that it is one of those topics that HN users seem incapable of discussing objectively. Meanwhile Electron usage grows and grows, and will continue to until someone comes up with a viable alternative.


If you disagree, tell me what I should use instead of Electron, that is feasible to use for a small team to have a wide platform reach, and with a community comparable to even just the 'React' part of web dev.

The drive-by downvotes really just prove my point.


Yeah, thought so.


If you're meaning to say you can write "multi-platform desktop" apps using Electron... I suppose you're technically correct, but they don't feel native on any platform, are generally slow, massive to distribute, don't integrate well with other applications and generally have weird non-native-app-like behavior. Discord, for example, I can resize so that the minimize/maximize/close buttons overlap the window. Atom and VS Code still can't do DPI scaling right. I wouldn't say you can build good desktop apps with them. Just crappy little ones. Maybe that's necessary to interact with one library or another, but that's a tradeoff. And a big and important one to recognize.

Maybe React Native will work on desktop and fix all this eventually, but IMO React Native is just not there yet, it doesn't even feel native on mobile.

I say this as someone who writes an app with both an Electron/React portion (mainly in order to interact with a specific JS library) and a Qt portion. I really do like the way React does a lot of things, and npm seems like the exact right way to do package management in a programming language - but the JS ecosystem is a mess. While I'll agree the wealth of developers on it has had many good results, it's also created a horrible moving target. There are at least five different and fairly popular module systems, module loaders, etc. There are constant attempts to use language features which aren't official yet, or not even at a final spec, via Babel. The build and packaging systems are a mess: everyone has their own preferred set, and getting a few of them to integrate cleanly can be problematic.


> If you're meaning to say you can write "multi-platform desktop" apps using Electron... I suppose you're technically correct, but they don't feel native on any platform,

You know, people say this, but I'd like to push back on it. "Platform native feel" is not something that I think most folks care about. You may need it if you're looking to ship an app for a pay-for-a-copy model, but the vast majority of computing hours people spend are already mitigated by their web browser of choice, and that seems to not really cause any substantial problems.

Electron is a flawed framework for building an app, sure, but you could argue Cocoa or Windows Universal are equally flawed in many ways. They also require a lot more code to get equivalent layouts, and are almost never very good at reactively resizing (you may say it doesn't matter, I say I unplug an external monitor and expect to have something sane happen). The fact that a bug exists causing repeated simple draws to be more expensive than expected is normal for text editors. You can find issues caused by similar problems in Sublime, Emacs, and even Notepad++.


Maybe it's just me, but I always avoid apps that don't integrate well with my platform. They're just not pleasant to use. I want the same look and feel, I want the same file browser, I want the same DPI scaling my desktop uses and I want the same UI standards followed, not some giant mess of whatever every individual dev decides seems nice that day.

I'm not so sure it's mitigated by the web - people may browse Facebook and a few news sites, but those aren't complex applications at all and they have very simplistic UIs that don't attempt to imitate familiar desktop controls or to show desktop concepts like files and folders.

I have far more problems with Electron app sizing than I do with proper desktop apps - see my examples above.


> I want the same DPI scaling my desktop uses and I want the same UI standards followed, not some giant mess of whatever every individual dev decides seems nice that day.

You must not use OSX native apps with multiple monitors, then. They don't work right (at least not for me!) One of the reasons I use Chrome for so much is that when I change my workspace (as someone who does both coding and project management this is increasingly a requirement) I don't want my UI to clip off the frame unrecoverably or render as a blurry mess.

> I'm not so sure it's mitigated by the web - people may browse Facebook and a few news sites, but those aren't complex applications at all and they have very simplistic UIs

You should look at the Chrome Web Store. But... also... "Facebook?" "Simplistic?" Simplistic is hacker news, where I can't use an Emoji.

> that don't attempt to imitate familiar desktop controls

One last thing: this is much more a sign of the current UI style of the time, and is very much a product of a post iPhone world where unique visual styles are expected. Even then, a text editor is such a specialist piece of work even native apps struggle and cheat, doing things like avoiding sheet animations and providing unique UI and UX components. Native apps have been introducing their own file hierarchy and multi-modal buttons since the time of emacs.


As someone who maintains other people's code, I really don't look forward to dealing with JS a few years from now:

What build system did they use?

Where is that package? Does it even exist anymore?

Etc.

Had similar problems trying to maintain a Delphi project as an almost-fresh-from-school developer. At that point I understood the value of Java and Maven.


It's true, but there is an interesting mitigating effect here. If at some point you come across an old project that appears to be the product of herculean effort, note that these days it's often a matter of a few days' effort.

The platform that JS is based upon moves incredibly fast.


Typically in Javascript, you would use npm's package.json for building, or you would use npm to install some well defined version of some build system, like you do for other packages. Like Maven builds, it works smoothly and predictably.
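
For instance, a minimal package.json (package names and versions here are purely illustrative) pins both the dependencies and the build entry points, so "what build system did they use?" is answered by the file itself:

```json
{
  "name": "example-app",
  "version": "1.0.0",
  "scripts": {
    "build": "webpack --mode production",
    "test": "mocha"
  },
  "devDependencies": {
    "webpack": "^5.0.0",
    "mocha": "^10.0.0"
  }
}
```

A later maintainer can then run `npm install` followed by `npm run build` and get the declared toolchain, without guessing at it.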


The JS community is filled with 20-somethings with ADHD (I'm 26) that reinvent the wheel every 2 weeks instead of helping polish other projects.


Seconding Qt, there's a bunch of bindings for non-C++ languages as well. It hits the uncanny valley sometimes but overall it's much better than the alternatives (like wxWidgets).


wxWidgets has improved a lot, give it a try.


If you could n-gate every article as it's posted, that would be awesome.


For those who missed it, n-gate satirizes some HN posts, to the delight of all: http://n-gate.com/hackernews/


Oh my god... n-gate is pure gold!


They had me at "Hacker" "News". Thank you so much for sharing this! (Hell, I mostly quite like HN and I'm sure I'm often part of the problem, and this is still wonderful.)


bummer n-gate didn't comment on its own hn post


I was really looking forward to that write up, as well. Unfortunately, the author must have some rule about not breaking the fourth wall. Or they avoid softballed summaries?


I think a recursive n-gate situation would be delightful. Someone should make this happen.


Some internets post weekly bitter satires of Hackernews to the internets. Hackernews finds it and misses the point entirely, of course: Hackernews is being laughed at, not with.


An internet finds a bitter satire of his industry and is endlessly amused at taking the piss out of sincere but naive youngsters who debate technical concerns without understanding the appropriate context and history. The mocker misses the point entirely: Hackernews didn't fail, the generation that came before did, in letting their culture die.


One Hackernews asks some Hackernews to ask the internet to make a satire about a satire of some Hackernews. Hackernews becomes Stackoverflow.


Yes, embedding a web browser is wasteful.

But, it is EXTREMELY EASY. There's a reason why GitHub, Microsoft, Facebook, Adobe, etc. converged on this decision, and it's not the bandwagon effect. Faced with the same task recently, I did my research, and it is indeed easy.

Since I work on the JVM, my alternative was JavaFX. That didn't seem so bad, but it was much more work. If you look at say IntelliJ IDEA on some platforms, you'll see the text rendering is much uglier than on Atom. Regrettably, if you want to acquire users, that matters more than conservation of resources.

I'm curious: have you ever tried interfacing with X11?


I mean, I couldn't agree with you more about the asininity of embedding a browser in a 'native' app (for anything other than web browsing). But I use VSCode all day, every day, and subjectively it feels zero-latency on my aging laptop.


It's kind of a crazy concept, especially if you are old enough to remember the days where RAM was counted in kilobytes and you would optimize assembler code to save clockcycles.

On the other hand, this is 2017, and people apparently don't care much about the footprint of their software, as long as it's convenient and it solves their problem. And ultimately that's what counts I guess.


So today I noticed that when you try to select text in a tool window, it really does act like a browser. The window won't scroll. The selection spills into other windows. There's no way to select anything you'd have to scroll to.



You know what would be fast for display updating: ncurses. I wonder why they didn't use that? If only we had an ncurses-based text editor.


Someone should write a webkit based ncurses! Why haven't we seen the light before !? ;)


There is already an ncurses library for JavaScript [1] so one for WebKit shouldn't be far off ;)

[1] https://github.com/chjj/blessed


Ncurses is the worst GUI toolkit there is (from a developer's point of view), although it indeed is lighter on resource usage than most of the others if you do it right.


I was with you until the non-sequitur about Wayland. What exactly is shitty about Wayland? It is way less bloated than X11.


The design of Wayland might be (and I believe is) superior to X11. Although, like every piece of software on this planet in the first years of its existence, it will be incomplete, buggy and not compatible with all the other things that already exist. For some people this might be shitty compared to mature software that is buggy, but at least most of its bugs are already known and worked around.


Love your meta comment

Consider writing a douglas adams style book


I see this argument against Electron again and again. Sure, sometimes it's overkill - I've seen tray notification apps built with it - but sometimes, it's not. Sure, it's not as efficient as something written in .NET, but it's a great, hassle-free way to go cross-platform.

People don't complain that web IDEs are bloated and slow because they need a web browser.


Really, the argument gets less and less valid over time: for every one of these issues that gets discovered, a patch gets made that solves it. The platform evolves and gets better until someone else finds an inefficiency. That seems like pretty typical advancement to me.


You're projecting quite a bit there. There's tons of comments lambasting what's going on in the article.


This is when I think that the following two minute video should be mandatory for programmers:

(Grace Hopper - Nanoseconds) https://www.youtube.com/watch?v=JEpsKnWZrJ8


I don't understand all the hate.

Java developers have IntelliJ IDEA, Eclipse and NetBeans, all written in Java.

JavaScript developers have Atom and VSCode. Since JS was created for HTML, it seems logical for me to use browser tech to build these editors.

It allows JS devs to build extensions without the need to learn a different language. As Java devs can build Eclipse plugins with Java.

Also, the Electron based editors aren't the only ones available, so it isn't as if someone would force the poor dumb JS developers to use these clunky slow tools, like when Slack built their client with Electron and you had to run a monster app for a simple chat.

If they want something faster, there is Sublime, Vim and Emacs...


From 2009 – Much hot air over blinking cursors https://lwn.net/Articles/317922/

Quote from the article;

> The blinking cursor causes the processor and GPU to be woken up frequently. On one of my test systems, this causes somewhere in the region of 2 Watts of extra power consumption.

This isn't about electron vs "native" apps. It's just that a smooth cursor animation rendered at 60Hz costs power; and if you look at the github issue, one config setting and that's gone, and so is the power usage when idle.
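For reference, the workaround discussed in the issue thread is a single user setting; in VS Code's settings.json it looks roughly like this (the accepted values may differ between versions, but "solid" stops the blink entirely):

```json
{
  "editor.cursorBlinking": "solid"
}
```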


Just another day in the Electron universe. My personal favorite is resource waste of another variety: Slack and WhatsApp using multiple gigs of RAM for unbelievably basic (which is their selling point) chat apps.


Admittedly, Slack uses tons of memory because animated GIFs do, plus all the other embeds that it supports.

  /giphy time to burn up ram from everyone in the channel


Hilarious. I can reproduce on my Dell Inspiron 15 5000 (core i7 running Windows 10) but it's not as pronounced as in the bug report. CPU usage bounces between 0.5% and 2.5% (which is obviously still ridiculous).

I almost want to swap back to using sublime out of protest but vscode's tighter git integration/workflow is hard to give up :(

Edit: I've always wondered why vscode couldn't be written in portable C/C++/Rust/D and then embed a V8 engine to power a javascript plugin API (a bit like sublime does with python)?


I always turn off cursor blinking everywhere I can because I find it distracting; I never knew I could use "performance" as an excuse!

On another note, why is cursor blinking such a universal thing? I assume that other people must like it, or else it wouldn't be so common. Do people have trouble finding their cursor without it, or do they have trouble distinguishing it from actual text? I've never had either of those problems with blinking turned off, but I can't think of any other plausible reason.


I assume it is useful for determining where the cursor is. It is easier to spot a blinking element than a solid one.


Rik Arends, who worked on the Cloud9 IDE, has done some amazing work implementing a blazingly fast WebGL based code editor and text formatting and rendering with JavaScript and GPU shaders. He's actually compiling JavaScript code into WebGL shaders!

Implementing a WebGL based code editor for JavaScript - Rik Arends - GrunnJS [1]

How do you render text and update a code editor when it only has vertexbuffers? What can you do with it? What does an AST editor do to help? Rik Arends (founder of Ajax.org/Cloud9) will blow your mind with the talk. Note that this editor is work in progress.

[1] https://www.youtube.com/watch?v=hM1oLr9G3-Q

Here are some other talks about his work:

Rik Arends: Multi-threaded JavaScript inside Makepad [2]

As a web developer since the early 2000s, Rik has lived through the ups and downs of browsers all the way from IE3 to Chrome 52. From doing interactive web projects for big brands to working on Cloud9 IDE, the limitations of HTML have always been there. After leaving Cloud9 four years ago to explore the new possibilities that WebGL offers for UI, a continuous stream of WebGL-related projects have appeared, including a JavaScript trace debugger, a compile-to-JavaScript language and a live coding prototyping framework. Makepad is the latest from-scratch iteration of this WebGL direction, and it is exciting to share the progress and lessons learned.

[2] https://www.youtube.com/watch?v=tVTWdFE6-O0

Rik Arends: Beyond HTML and CSS: Fusing Javascript and shaders | JSConf EU 2014

What would the world look like when you can style UI with actual shader programs? The web could be 60fps on mobile, and we can start to imagine what lies beyond HTML and CSS

[3] https://www.youtube.com/watch?v=X8xxz-YeWtk

Here's a demo! [4]

[4] https://makepad.github.io/makepad.html


Thinking about "the stack", runtimes, development processes and the root causes involved here, I think the best thing that could happen to "the industry" is some event preventing silicon production from delivering new CPUs/RAM/GPUs/hard drives for some 5+ years or so.

Constrain the resources to bring back some sane levels of efficiency regarding RAM/CPU cycles/HD space.

/scnr


I just checked on Windows 10, an empty instance of VS Code with a blinking cursor uses 0% most of the time.

Is this a MacOS only issue? Does it affect Atom?


There has to be some other factor: on a 2013 iMac it uses under half a percent when idle, with the same versions of macOS and VS Code.


Thank you to the author of this github issue! I hope it gets sorted soon! I use VSCode a lot when disconnected from my power outlet.


I wonder if adding

  will-change: opacity
to the CSS would improve performance (by switching CSS animation rendering to the GPU)

https://medium.com/outsystems-experts/how-to-achieve-60-fps-...
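As an illustration (the `.cursor` class name here is made up), the blink could be expressed as a stepped opacity animation with the will-change hint, so the compositor just toggles a pre-rasterized layer instead of repainting:

```css
.cursor {
  will-change: opacity;              /* hint: give the cursor its own layer */
  animation: blink 1s steps(1, end) infinite;
}

@keyframes blink {
  50% { opacity: 0; }                /* hidden for the second half of each cycle */
}
```

The steps() timing function avoids animating intermediate opacity values, so only two states ever get composited.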


Why don't all those who complain about Electron apps get together and make a FOSS, cross-platform editor with all the features and an equal or better interface than VS Code and Atom?

If you use the time you now spend complaining to code instead, you should have the basics covered.


Because there are good enough tools out there already, foss and commercial. This problem is solved already.


Name them.


Simple fix:

  cursor.blink = False  // 13% CPU optimization :-)


We could call it Egacs: "Eight Gigabytes And Constantly Swapping"...


I think the reason for using the browser for desktop UI is that graphics and UI programming is terrible. The computer architecture and operating systems we use today are built on tech that predates graphics.

That's why it's trivial to build a terminal app, but god help you if you want to create a window with a button or draw a line. GLEW, GLUT, Qt, OpenGL, and the rest are just kludges to deal with the fact that we're still working with tech from the 70's. Even those kludges are so terrible that companies like Github and Microsoft have resorted to using the browser to make things like Atom and Visual Studio Code.

It can be fixed, but I'm not sure anyone has the fortitude to redesign hardware and operating systems.


What hardware or operating system changes do you think are necessary to improve the GUI development experience?


Imagine being able to create windows, draw, deal with IO devices (keyboard, mouse, controller, etc.) and do OpenGL-type graphics work with system calls.

That would allow (practically) any language to trivially create native apps with a GUI, at least trivially compared to today.

In Linux we could eliminate the multitude of complicated display servers and reduce latency in the system (making VR and AR easier to do). Windows has something similar to what I describe, but it's not very good, and has no built-in OpenGL-like system calls.

A standardized set of system calls for graphics, GUI, and IO would allow cross platform native apps to be trivially created. In principle there's no reason why MacOS, Windows, and Linux could not have such system calls added. The biggest problem is those systems have become very bloated from adhering to backwards compatibility.


I have also been waiting for the day... Maybe in a few years this Linux subsystem for Windows will start to mold things in this direction?


It is amazing this article got more than 700 points, because when I open the task manager and VS Code is idle, I see 0% CPU usage. And the cursor is blinking!!!


Dude... that cursor has a carbon footprint.

I hope when Microsoft fixes this they publish some usage stats and we can see how much energy/money/carbon this will save annually.


On a not-so-different note, Xfce had a bug where CPU utilization rose over 100% when the screensaver was displayed after the idle timeout.


Crossing my fingers any related changes don't kill the cursor in the vim extension. Maybe I'll skip the next few updates...


Hey bud, I am here for you as a developer of the vscodevim extension. I will make sure it works just for you.


I noticed that sometimes when you press TAB at the beginning of a line, it can take about 1s for the TAB to appear on the screen.


Why hasn't someone thought of implementing a blinking cursor in hardware, like we did in the early 80's?


How would that work in reality?


Back in the day you'd write a value into a hardware register and the video hardware would draw the cursor at the location designated by the written value. The hardware that was generating the signal to drive the beam would AND the VRAM contents with the register and XOR that result with a clock that pulsed at the blink rate. Voila, blinking cursor.

In this day and age, I imagine you write some code that runs in the GPU off a timer interrupt that pumps the texture of a cursor to a location on screen based on a value stored in some scratch RAM. Exposing that interface to some JavaScript running an editor in a browser is left as an exercise for the jackasses who have taken what we would call a SuperComputer back in the day and turned it into wheezing desk heater with a stuttering UI.
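A toy software model of that gating logic (illustrative only, nothing like a real video chip) shows how cheap it is: one AND and one XOR per byte of pixel data, with the blink state coming from a free-running clock rather than any redraw loop:

```javascript
// One byte of a scanline: 8 pixels, 1 bit each.
// cursorMask has 1-bits where the cursor cell covers this scanline.
// blinkClock is 0 or 1, toggling at the blink rate (e.g. ~2 Hz).
function scanlineOut(vramByte, cursorMask, blinkClock) {
  // Gate the cursor mask with the blink clock (the AND), then invert the
  // covered pixels (the XOR). With the clock low, plain VRAM passes through.
  return vramByte ^ (cursorMask & (blinkClock ? 0xff : 0x00));
}
```

For example, `scanlineOut(0b10101010, 0b00001111, 1)` inverts just the four cursor-covered pixels; with the clock low the byte comes out untouched.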


This bug has nothing to do with technology choices. It is just a bug in Chromium. The team is investigating.


No issue, just need a plugin to disable cursor rendering when you launch the build :-)


> Powerful* text editors built on the web stack

I've identified the problem


Couldn't Electron use a React Native kind of framework?


You do not need horses to power your computer.


Don't see this on my Win 10 ultrabook.


carbon footprint


Yes. We quickly forget that those CPU cycles and that abundance of RAM consume electricity => heating the planet.


The real issue is cooling the universe and accelerating us toward the inevitable heat death :(


Quick, produce more heat!


Yeah. Somebody is putting coal on a fire so I can have a cursor blink on my screen. Makes you think :)


Lotus Notes FTW


can you close VS Code when not in use?


The power of node.js -_-


The answer:

Vim <-- this


[flagged]



There might be an insult or two thrown around but detaching the thread removes necessary context. I do not think that this was the appropriate course of moderation.


It was uncivil, unsubstantive, dismissive of others' work, broke the HN guideline against using uppercase for emphasis, took the thread into a language flamewar... Please just don't post like that to HN.


I think the fact that you call Java not-compiled but Python interpreted shows your misunderstanding in this area. Both run exactly the same way: they are compiled into an intermediate bytecode which is run in a "VM". The normal Java VM has a JIT; the normal Python VM does not, but PyPy does, and achieves performance on par with some of your compiled languages, at the cost of reduced C compatibility.

I know C, Java, python, and a limited amount of javascript and go. I prefer writing python. And, for plugins, there are enormous downsides to using something that is compiled (C, go).


For me, a big irritation with Python - intermediate bytecode or not - is that foolish small mistakes in formatting, or missing symbols, etc. will only make themselves apparent when the interpreter (or bytecode interpreter) actually tries to run or use them. In some configurations, it can be preprocessed and checked, but the most typical and default use cases for Python do not promote this.


Honestly, this is not a problem in practice; if some application crashes due to a misspelled variable, then that very clearly indicates that testing coverage is lacking. IDEs also tend to indicate this, as do linters.

Though your

> the most typical and default use cases for Python do not promote this.

might be different from those I have encountered.


Uh, I lumped in Java with the compiled languages?

And yeah, that's right, there are like 80 million Python variants, some of which are compiled.


that was a typo. "not-interpreted" was my intended statement.

And I'd love for you to show me a python variant that is compiled in the same way that C is compiled. The closest is nuitka, which...isn't.


CPython? I know the regular python2.7 installer in Debian does binary compilation, producing little *.pyc files for everything


They are binary files, but they aren't "compiled binaries in the same way that C is compiled", they are intermediate Python byte-code files and they still need the CPython runtime to interpret them.


As the other user mentioned, those are compiled in the same way java is compiled, but not in the same way that C is compiled. They still need an intermediate VM to run.


[flagged]


Please don't post putdowns of entire classes of people. It's unsubstantive and leads to flamewars, as demonstrated by the below.

We detached this subthread from https://news.ycombinator.com/item?id=13940405 and marked it off-topic.


Putdowns? What putdowns exactly? According to the StackOverflow 2016 Developer Survey [1], almost 70% of the developers are self-taught, and also according to that survey, JavaScript is the most popular language, more than twice as much as C++. That pretty much covers my first paragraph.

As for the second, I'm confident about the average JavaScripter not being able to tackle C++. If you find that an insult, that's ok, but it's the reality. Try explaining custom memory allocation algorithms to the vast majority of script kiddies that call themselves "front-end developers" just because they use jQuery daily.

[1] http://stackoverflow.com/insights/survey/2016


[flagged]


This breaks the HN guidelines about civility and not calling names in arguments. Please don't respond to a bad comment with an even worse one.


Fair enough, I'll tone it down next time


Perhaps, but the grandparent has a point. Javascript is the hammer that's turned every kind of software development into a nail, and it simply isn't so.


Yesterday I wrote a webpage with a Bootstrap template. The webpage has a progress bar that is updated in realtime by a websocket. The webpage is extremely slow. I guess every progress bar update re-renders the entire DOM tree.

I don't think react would do much better, but I will have to try.


With a decent implementation, a progress bar will run at 60 fps no problem. You don't need React for a simple progress bar either; just check that no unnecessary work is done (probably you just need to update the width on an element) and use requestAnimationFrame if you receive updates at a very high rate.
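One way to do that coalescing (a sketch: `makeProgressUpdater` and its arguments are made-up names, and in a browser `schedule` would be requestAnimationFrame):

```javascript
// Batch high-rate progress updates so the DOM is written at most once per
// frame: incoming messages only record the latest value, and a single
// scheduled callback applies it.
function makeProgressUpdater(applyWidth, schedule) {
  let pending = 0;     // latest percentage not yet painted
  let queued = false;  // whether a frame callback is already scheduled
  return function update(percent) {
    pending = percent;
    if (!queued) {
      queued = true;
      schedule(() => {
        queued = false;
        applyWidth(pending); // one style write per frame, e.g. width = p + '%'
      });
    }
  };
}
```

In a page this would be wired up as `makeProgressUpdater(p => bar.style.width = p + '%', requestAnimationFrame)`, so a websocket firing hundreds of times a second still costs one style write per frame.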


I admit I'm not familiar with frontend development.

I used this template https://github.com/secondtruth/startmin

I noticed that Bootstrap's progress bar is a div,

https://www.w3schools.com/Bootstrap/bootstrap_progressbars.a...

that's why I guess updating it triggers a DOM update.

I'm still looking for a solution.


Why did this get downvoted? I can't understand it.


Because it hasn't been mentioned yet, as far as I could see: Blinking cursors are horrible and should never exist, it resembles torture. http://www.jurta.org/en/prog/noblink is an extensive list with ways to deactivate it in various programs. Try it out, you will feel the difference.


"I don't like XXX, so it shouldn't exist" is a great argument.


No, of course not. It wasn't supposed to be the argument. Are you really interested in the background? I will just assume so, this is HN after all.

Our brain has some special recognition for specific things. A collection of those things are bundled under Gestalt psychology. There are also a number of things that make something pop out. Now, movement triggers the pop out effect, it is one of those things that we notice immediately. What that means in practice is that a blinking cursor in an otherwise plain image will be noticed immediately, and it takes conscious effort to ignore it. That is a strain on your mental capacities, and highly unnerving, even if you are not completely aware of what it is that annoys you.

Part of this is what motivates those specialized writing editors that remove everything apart from pure text, to make it as easy as possible to focus on the task at hand.

Blinking cursors, and this is what the jurta page mentions, also resemble Chinese water torture. I think that was meant a bit tongue-in-cheek, but it is actually not wrong. It comes again and again and again and again..., and it is not immediately controllable.

Add to that the useless resource usage like in this bug, and yes, I stand by that and am serious about it: blinking cursors are a very bad habit that should just die out. And developers who actively implement them without making them deactivable should ask themselves what they are doing - I had that with ShareLaTeX, a platform which I otherwise like a lot.


Personally I like the blinking cursor in vim and I don't think dripping water is a good comparison.

Without the blinking I lose the file location and have to move it around with "hjkl" so I can find it again.


>A workaround for folks who are similarly obsessed about battery life

That's odd wording. Obsessed because I don't want to waste energy, which leads directly to pollution? Or don't want a short battery life on my laptop? Power savings shouldn't be seen as irrelevant geekery.

13% of a four-core processor is half the capacity of a single core. My 5-year-old 2500K peaks at 120 watts. So 15 watts to render a cursor?

Microsoft needs to stop it with these terrible levels of QC. It's inexcusable.


It's a Chromium problem, maybe you should read the Github issue before commenting.


Still, it's one of the most interesting posts. x joules per blink


The maths may be interesting, the rant is definitively not.


There's nothing saying this keeps the processor at its max power state, so it's not 15 watts to render a cursor.

Your comment is going to be hosted on news.ycombinator.com, replicated through CloudFlare's CDNs, downloaded over and over and spidered by every search engine on the planet, indexed, replicated through all of their worldwide datacenters, backed up onto tape, copied into training sets, and a contender for the result set of every single internet search on every search engine for the rest of eternity or as long as people search with words.

How much energy will that take, all in?


> Power savings shouldn't be seen as irrelevant geekery.

It seems that these days anything that a developer doesn't care about is seen as "irrelevant geekery".


On one hand, I'm annoyed that javascript and browsers are being used for a text editor.

On the other hand, I'm reminded of the arguments against emacs using 8 MEGS of memory and how terrible that is.

I've learned to just relax and use what works. People have chosen to concentrate on extensibility at the expense of current day performance. The result has been good enough.

There is probably something else to be said about these also making OSX a first class citizen, which hasn't always been true.


> On the other hand, I'm reminded of the arguments against emacs using 8 MEGS of memory and how terrible that is.

Yeah, but back then Moore's Law was running full steam and we were getting tremendous advances in processor and memory performance every year. Now that era has ended; what processor performance we have today is likely to remain more or less the same for the foreseeable future, barring a miracle from the chipmakers.

It's time to stop pissing away performance by using inefficient software stacks and web technologies are a big, juicy target for cleanup.



