Hacker News
VS Code uses 13% CPU when idle due to blinking cursor rendering (github.com)
899 points by Kristine1975 on Mar 23, 2017 | 760 comments



I'm reminded of this classic: https://github.com/npm/npm/issues/11283

NPM had a progress bar that was so fancy that it slowed down installation time (basically its only job) by ~50%. Hilarious.

My mantra here is, if you find yourself thinking about implementing a fancy loading spinner/progress bar, it would be more productive to just spend that time making it unnecessary - speed up your shit! Obviously that doesn't apply to VS Code's cursor.


This reminds me of one of my favorite old tech stories.

A long while back (seems like this was the late 90s or early 2000s) I was working on a script that did some data processing on a remote machine. It had to loop through a bunch of text log data and generate some reports. Since I had no idea whether the script was actually working until it completed a while later, I decided to put a neat little ASCII spinner in it when you ran it with verbose options.

At the time I was on a slow dialup connection as I was on break from school, and something weird would happen. Every time I ran the script to test it, my Internet connection would become nearly unusable. But as soon as the script finished, it would suddenly start working again.

As you can imagine, this was very confusing since the script was running entirely on a remote system. What the hell is going on?

This stumped me for an hour or so until I ran it without the verbose option ... and it didn't happen. Then I finally realized what was happening: I was refreshing the spinner on EACH ROW, and the remote machine was going through the rows so quickly that sending refreshes for the spinner saturated my tiny dial-up connection. Changing it to only update once a second fixed it entirely.

And that's how I DoS'd myself with a spinner.
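The fix described above can be sketched like this (a hypothetical reconstruction, not the original script): draw the spinner at a fixed wall-clock interval instead of once per processed row, so fast input can no longer flood the connection with refreshes.

```python
import sys
import time

SPINNER = "|/-\\"

def process_rows(rows, interval=1.0, out=sys.stderr):
    """Process rows, refreshing the spinner at most once per `interval` seconds."""
    last_update = float("-inf")  # force a draw on the first row
    frame = 0
    for row in rows:
        # ... real per-row work would happen here ...
        now = time.monotonic()
        if now - last_update >= interval:
            out.write("\r" + SPINNER[frame % len(SPINNER)])
            out.flush()
            frame += 1
            last_update = now
```

With `interval=1.0` the spinner costs at most one tiny terminal write per second, no matter how fast the rows fly by.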


Your story reminds me of a current scenario, @peckrob ..

For Android application development, we used to connect devices to the computer via USB and watch the device's log in the logcat tool (Android Studio). The device generally spawns lots of logcat messages during a debugging session, which eats up CPU.

It's even worse when the device is connected over wifi (wifi-adb): the tool gets stuck, since data transfer over wifi is a little slower than over USB.


I've got a similar one. I was working on an app once where the current results were being logged to a text box, nothing too fancy, but I noticed it got a lot slower on larger blocks. Changing "textBox.Text = textBox.Text + newLine" to "textBox.AppendText(newLine)" cut >99% of the CPU time.

On the same project I discovered that "System.Environment.NewLine" was a relatively expensive call in C#; caching the result of that property was another 50% cut to CPU.
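A sketch of the same trap, in Python rather than C# (function names here are made up for illustration): concatenating into one string copies the whole accumulated text on every iteration, O(n^2) overall, while writing into a buffer and reading it once at the end stays O(n).

```python
import io

def build_by_concat(lines):
    text = ""
    for line in lines:
        text = text + line + "\n"  # copies all of `text` each time
    return text

def build_by_append(lines):
    buf = io.StringIO()
    for line in lines:
        buf.write(line + "\n")     # amortised O(1) per line
    return buf.getvalue()
```

Both produce identical output; only the append-based builder keeps the per-line cost constant as the log grows.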


Nice foot-gun story. If you had had a T1 at the time, you would never have noticed such a pitfall.


I often advise people to write code on "obsolete" tech. It makes every bit of cruft obvious.


I test all of my apps on an old Moto G, second generation, on throttled mobile internet (64kbps).

That’s the average worst case user.

This also means I immediately notice if an app has hardcoded timeouts, or loads massive amounts of data.


Pretty much, yeah. I never noticed it on campus because I was sitting on a 10 megabit connection.

That added an extra dimension to the confusion because I was sure this never happened when I was on campus, only when I was sitting 300 miles away. It would still have been happening if I had bothered to look at the network stats, just not enough to entirely saturate the connection.


In the early days of working on my current main project, I found that updating a progress bar was slowing the process it was monitoring. Since there were times when it was useful to see near real-time progress, I added a slider which allows the user to adjust the sampling rate. That slider is affectionately known (by me anyway) as the Heisenberg Compensator.

P.S. I have sped up my shit. That process originally took days, now it takes an hour.


It's fairly regular in my field (game dev) to put some debugging/instrumentation code in and only enable it for one frame after a key is pressed.
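That pattern (arm on a keypress, capture exactly one frame, then auto-disarm) can be sketched like so; everything here is hypothetical illustration, not any particular engine's API.

```python
class FrameProfiler:
    """Capture instrumentation for exactly one frame after being armed."""

    def __init__(self):
        self.armed = False
        self.captured = []

    def arm(self):
        # Called from the input handler when the debug key is pressed.
        self.armed = True

    def frame(self, work):
        # Run one frame; record instrumentation only if armed, then disarm.
        if self.armed:
            self.captured.append(f"instrumented: {work.__name__}")
            self.armed = False
        work()
```

Because the flag clears itself, the expensive instrumentation never taints more than the single frame you asked about.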


> Heisenberg Compensator

That's a perfect name, I love it.


There's a Heisenberg Compensator in Star Trek transporters [0]

[0] http://www.startrek.com/database_article/heisenberg-compensa...


Not a coincidence.


IBM spent a humongous amount of man-hours on the running man in SMIT under AIX, many of them consumed with making sure it ran correctly on systems of various speeds without the animation going crazy. Alas, I don't have exact man-hour figures, but that was what one of the engineers told me nearly two decades ago now. Hopefully somebody else has more detail on this, as it's one of the earliest examples of a progress animation that still runs today that I'm aware of.

I still hark back to the days of the C64 and the like, which had tape storage, where it was common to have a small game to play as a loading screen. It was a small program that gave the user something to do whilst they waited 20 minutes for the main program to load. Many also rewrote the cassette storage code for faster storage/loading; often you would load that faster tape-handling code along with a small game like Space Invaders, which you could play whilst the main game loaded.


SMIT running man example: https://www.youtube.com/watch?v=YMWSD69BWqI

If anyone knows where I can get those animation frames, I'd like them for a personal project as a "running" indicator.



Broken Sword let you play Breakout during the installation; it was such fun (I had not played Breakout in a long time at that point) that I was almost sad when the installation finished. ;-)

I have often wondered since why this was not a more popular way to deal with long running installers.


I think it's because Namco had a patent on games in loading screens from 1995 to 2015: http://kotaku.com/the-patent-on-loading-screen-mini-games-is...


The Ghostbusters game on the C64 had it when I bought it used, in 1991 or '92 I think.


It's a common example for a patent that had a lot of prior art but was granted anyway.

I don't know enough about patents to say if there was a reason the prior art didn't apply, but I know that Namco was fairly protective of it and it was somewhat limiting for people working in the game industry.

Although, loading screens are less common now than they were back then, and I think that would still be the case even if such a thing had never been patented.


Having minigames during loading or installation was actually patented by Namco for Ridge Racer, I believe. I'm not quite sure about the details since I seem to recall things like invade-a-load on my Commodore 64 at least a decade earlier, but there you go.


Expired in 2015 :)

When I was younger, it seemed so obvious to add a minigame to a long loading screen, I had assumed that there were technical reasons for not doing it.


Not quite the same thing, but DVDFlick lets you play Tetris while your DVD is being authored and burned, which was a nice surprise.


One Linux distribution used to let you play Breakout during installation, but I forgot which one. Caldera, maybe?


Don't most Linux installations just let you use your computer normally during installation?

(And by normally I mean, as booted from a LiveCD.)


I think I don't understand the issue well enough. This looks like a standard blinking cursor to me. Users expect a blinking cursor in an editable text field.

I'm not sure why this implementation is slow or why they needed to implement it themselves and not let the OS handle the blinking cursor. I'm guessing there must be some reason.


Powerful* text editors built on the web stack cannot rely on the OS text caret and have to provide their own.

In this case, VSCode is probably using the most reasonable approach to blinking a cursor: a `step` timing-function with a CSS keyframe animation. This tells the browser to change the opacity only every 500ms. Meanwhile, Chrome hasn't completely optimised this yet, hence http://crbug.com/361587.

So currently, Chrome is doing the full rendering lifecycle (style, paint, layers) every 16ms when it should be only doing that work at a 500ms interval. I'm confident that the engineers working on Chrome's style components can sort this out, but it'll take a little bit of work. I think the added visibility on this topic will likely escalate the priority of the fix. :)

* Simple text editors, and basic ones built on [contenteditable] can, but those rarely scale to the feature set most want.

(I work on the Chrome team, though not on the rendering engine)


"So currently, Chrome is doing the full rendering lifecycle (style, paint, layers) every 16ms when it should be only doing that work at a 500ms interval"

Only? It shouldn't be necessary to lay out the entire window to redraw a cursor.

Also, if it did update every 500ms:

- it would still use about half a percent of CPU. On a machine whose CPU is easily >200 times as fast as those powering the first GUIs (which _could_ blink the caret at less than 100% CPU) and that has a powerful GPU, that's bad (yes, those old-hat systems had far fewer pixels to update, but that's not a good excuse)

- implementing smoother blinking (rendering various shades of gray in-between) would mean going back to 13% CPU.

I would try and hack this by layering an empty textfield on top of the content, just to get a blinking caret. Or is it too hard to make that the right size and position it correctly?


> Powerful* text editors built on the web stack cannot rely on the OS text caret and have to provide their own.

Is there any reason that Electron couldn't provide an API that would expose the system caret in an OS-agnostic manner? Windows, for example, has an API[0] that can arbitrarily show the caret at a given point in the window. Sounds like something that would be useful to many apps and not get in the way for those that don't need it.

[0] https://msdn.microsoft.com/en-us/library/windows/desktop/ms6...


Probably not easily. Remember, this is what Java AWT did, and it was a complete mess. Write once, debug everywhere.

My favorite issue was that on one OS (Windows, I think) a panel would only be visible if pixel 0,0 was on the screen and nothing was on top of it. The panel could be 99% visible but not be shown at all if the upper-left corner was under another panel.


Why are you even painting at all instead of layerizing it?

(This is a perfect example of what I've been increasingly convinced of lately: that the whole "paint"/"compositing" distinction hurts the Web…)


>> text editors built on the web stack cannot rely on the OS text caret

Can you explain in simple terms why this is the case? Why on earth not?


Because you can't just ask the OS to "please paint text caret here thankyou", and browsers do not expose a powerful enough native text editing control. So you end up reimplementing one in JS/HTML/CSS, including the caret.


The WinAPI function SetCaretPos seems to do that: https://msdn.microsoft.com/en-us/library/windows/desktop/ms6...


Well, because, in order to satisfy the most finicky of its users (myself included), VS Code offers no less than 6 styles for its cursor ('block', 'block-outline', 'line', 'line-thin', 'underline' and 'underline-thin') and 5 animations ('blink', 'smooth', 'phase', 'expand' and 'solid'). Also, in a future release the themes will be allowed to change the color of the cursor to any of the 16,777,216 colors in the RGB spectrum.

Does that answer your question? :)


>http://crbug.com/361587

Reported almost 3 years ago and still not fixed...


> I'm not sure why this implementation is slow

I'm guessing the culprit for that is in this part of the bug report:

> Zooming into a single frame, we see that while we render only 2 fps, the main thread is performing some work at 60 fps (every 16 ms)

As for

> why they needed to implement it themselves and not let the OS handle the blinking cursor

They're not editing inside a text editor field anymore; they need to blink on HTML. I guess the reason they're not using the deprecated <blink> tag is that it's not customizable in any way.


I don't believe that any modern browser still supports the blink tag; IIRC Firefox was the last to get rid of it, a year or two ago. You can hack it up in CSS, though.


Truly, the web has come full circle.


On Win32[0], the blinking caret can be shown anywhere inside a window, not just in a text field. The sample program builds a basic text editor without cheating by using a textbox control.

I'm sure other operating systems have this kind of facility - after all, how does the built in text edit control draw its caret?

[0] https://msdn.microsoft.com/en-us/library/windows/desktop/ms6...


>On Win32[0], the blinking caret can be shown anywhere inside a window, not just on a text field. (...) I'm sure other operating systems have this kind of facility - after all, how does the built in text edit control draw its caret?

The "built in text edit control" IS a native text field already.


My point is that the built-in text edit control needs some way to draw its caret, so surely a universal method to draw a blinking caret exists somewhere.


Well, they could just make a C extension to Electron that draws a blinking colored line (that's all it is) using the OS's arbitrary drawing facilities.

No need to have an official OS caret function (especially if you're not a native text field, and caret aside, the rest of your text editing will be different/broken in subtle ways compared to the OS).


I'm not aware of any system where one can create that caret without a text edit control to hold it. That means that, if the standard text edit control isn't suited for what you want to do, you can't have the caret.

Certainly, on the original Mac OS (a system on which I worked on OS patches for detecting the location of the caret) the method that applications used for drawing the caret varied widely. I've seen it implemented by drawing a line, by drawing a rectangle, either in one go or in two parts (that happened in applications that supported a split caret, even if the caret wasn't split), with various transfer modes. Slanted carets typically were application-specific, too.


> I'm not aware of any system where one can create that caret without a text edit control to hold it.

Windows allows this. In a parent comment I posted a link to the API docs. All Windows requires is a window to hold the caret, it doesn't care what kind of window it is.


Thanks, so they are probably not using a contenteditable element. In that case it makes sense that they would need to use a CSS animation to blink the cursor.


> why they needed to implement it themselves and not let the OS handle the blinking cursor.

Probably because they are using electron.


This is exactly the same problem I ran into yesterday.

I made a webpage based on a Bootstrap template.

On this webpage, I have 2 realtime components. One is a chart, based on dygraphs.

The other component is a Bootstrap progress bar.

The Bootstrap progress bar is made of HTML divs:

https://www.w3schools.com/Bootstrap/bootstrap_progressbars.a...

Both components are updated in realtime by a websocket.

I noticed that the chart by itself is very fast. But as soon as the progress bar is added, the entire webpage and even the entire machine become very slow. I guess this is because on every progress bar update, the browser re-renders the entire page, since the progress bar is a div.

I don't know how to solve this. I thought about using React for its virtual DOM, but if I eventually need to update the DOM anyway, the speed seems like it would be the same.


There is an HTML5 progress tag supported by all browsers: <progress max="100">


Can you buffer the incoming websockets data and only update at a small percentage of the inputs or maybe based on time?

Lodash has a debounce function which is useful to throttle UI features hooked to incoming data. https://css-tricks.com/debouncing-throttling-explained-examp...

You never really want the UI repaint rate to be dictated by the data; it's better to use timers.
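A sketch of that timer-driven approach (not Lodash's actual debounce; all names here are made up): buffer the latest incoming value and let the expensive repaint fire at most once per interval, no matter how fast the websocket pushes data.

```python
import time

class ThrottledRepaint:
    """Remember every incoming value, but repaint at most once per interval."""

    def __init__(self, repaint, interval=0.25, clock=time.monotonic):
        self.repaint = repaint
        self.interval = interval
        self.clock = clock          # injectable for testing
        self.latest = None
        self.last_run = float("-inf")

    def on_data(self, value):
        self.latest = value         # always keep the newest value
        now = self.clock()
        if now - self.last_run >= self.interval:
            self.repaint(self.latest)   # expensive DOM/UI update
            self.last_run = now
```

The data handler stays cheap; the repaint cost is bounded by the timer, not by the message rate.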


Am I the only one around here who prefers a nonblinking cursor? That and the key repeat rate are the first settings I change on any new install.


Minimizing the installation window would prevent the paint event and your installation would finish quickly...


iTunes has the same issue with its spinner while syncing, it takes about 20% CPU time.


Who said it's the spinner in that case?

After all, it IS syncing at that time, which means it does a lot of stuff.

Whereas the issue here is with the caret shown when the editor is idle.


Activity Monitor allows you to easily profile what the application is spending its CPU time on and it shows that iTunes spends an excessive amount of its time on updating the UI, specifically the spinner.


I just can't get my head around the fact that we are building text editors inside a web browser! I get it, there are many good use cases for Electron and it's easy to get started with cross-platform support, but why is everybody going crazy about text editors in them? Because you can write plugins in JS?

Wouldn't it be better to make a native application, especially for code editors, where developers spend most of their time and where every noticeable lag and glitch is unwelcome?

Edit: Many people here think that I am attacking this web-based kind of technology, which I am not, and sorry for not being clear enough, but why choose something so high up the stack for a dev tool?

Edit2: For non-believers in nested comments, look -> https://github.com/jhallen/joes-sandbox/tree/master/editor-p...


Yep. I also can't wrap my head around the fact that we are now constructing buttons, drop-down boxes, tagged text boxes using dozens of nested <div> layers instead of a native widget that writes directly to the screen. My 486 rendered UIs with nearly imperceptible lag. Google Docs takes a good 2-3 seconds to spin up a UI on my i7.


You both do realize that similar arguments could have been made back in the day when moving from, say, command-line DOS applications to Windows API applications - right?

Ultimately, computing has always been one of abstraction from the lower "layers". Taken far enough, one could spuriously argue that if you aren't soldering together the flip-flops that make up your logic and memory, you just aren't being efficient...


Except that the abstraction layer for the user has remained the same.

The extra abstraction layers you're talking about are invisible to the user... while our GUIs are slower, and our processors faster. It feels like, after 30 years, we should be able to have our GUI cake and eat it, too.


The abstraction layer has only remained the same visually, and only for a loose definition of "visually."

In the "good old days" the user interface was 640px x 480px (original VGA[1], skipping past the original MDA, CGA, and Hercules graphics cards[2] since they predated "modern" GUIs). Then it was expanded to 800x600, etc. The programs of those days were hard-coded to the graphics adapter resolution. If a program did not support the graphics adapter you had, you had to run your GUI in lower resolution compatibility mode. Sometimes it didn't work at all.

In lower resolution compatibility mode, the program's display got uglier and uglier as the screen resolution went up.

The abstraction layer today is adaptive and high DPI. Since the abstraction layer abstracts away the actual display resolution, programs (generally) can take advantage of the resolution that is available on a given display without change, allowing a program to run on small hardware displays (phone/tablet form factor) up to mega 4K++ displays, looking better and better as the screen resolution goes up.

[1] https://en.wikipedia.org/wiki/Vga

[2] https://en.wikipedia.org/wiki/Hercules_Graphics_Card


Precisely. And even Win32, which is being touted in this thread as somehow far superior to the Web stack, was never designed for high DPI apps. It's far worse than the Web stack, with exact pixel hardcoding everywhere. That's why the HiDPI situation on Windows is such an inconsistent mess. Meanwhile, the Web scaled up to HiDPI so seamlessly that most people never even noticed any friction. This is the benefit of the declarative model of CSS.


Win32 was designed around DPI-independence from day one. That's why things like GetSystemMetrics exist. Unfortunately, the app developers chose not to use them.

GDI was originally designed to run on printers as well as screens, where the DPI values are completely different.

The declarative model of CSS has nothing to do with it. You can specify pixel values in CSS if you like.


> You can specify pixel values in CSS if you like.

And many (most?) do... leading to the reinterpretation of what "pixel" means in the CSS spec. Yes, CSS pixel does not mean a physical pixel on the screen.


> Win32 was designed around DPI-independence from day one. That's why things like GetSystemMetrics exist. Unfortunately, the app developers chose not to use them.

Integer pixel coordinates are right there in the most fundamental functions:

    HWND WINAPI CreateWindow(LPCTSTR lpClassName,
                             LPCTSTR lpWindowName,
                             DWORD dwStyle,
                             int x,
                             int y,
                             int nWidth,
                             int nHeight,
                             HWND hWndParent,
                             HMENU hMenu,
                             HINSTANCE hInstance,
                             LPVOID lpParam);


WPF is perfectly fine for high DPI apps and still beats all this web stuff by orders of magnitude.

CSS people still think a grid is a choice of "small" "medium" and "large" columns/rows. Haha.


> WPF is perfectly fine for high DPI apps and still beats all this web stuff by orders of magnitude.

> CSS people still think a grid is a choice of "small" "medium" and "large" columns/rows. Haha.

CSS Grid is literally Microsoft taking the WPF grid layout and porting it to the Web!


Yes, that is what I'm saying. They wouldn't be porting it if previous web solutions had been just as capable.


But they did port it. That's why CSS Grid exists…


Yes, but his point is that for ages it didn't exist. Like, for two decades and something it was first tables, then "floats" BS.

And even now it's still not mature and supported in all browsers.


Isn't this a false dichotomy? Why not create more modern, declarative native APIs and libraries, or use them where they already exist?


> the old stuff worked just fine

> we don't need the new stuff

> how about we throw both away to shut everyone up, would that make either side happy?


How about: Take what we learn from new development so that we can improve the older, lower-level stuff, and strip away some unnecessary levels of abstraction?


Yep. I did some Swift development recently, and while I absolutely loved the language, I was struck by how much worse the UIKit APIs are compared to React.

There's nothing special about JavaScript the language. The nice thing about web development is the really fast framework iteration cycle. And everyone complains about it, but as a result we have some fantastic APIs for UI development in JavaScript land now. (E.g., React.)

What I want to see is someone port those lovely abstractions across to native languages. For example, a port of choo[1] to Swift on top of UIKit would be gloriously fast, efficient, small and easy to work with.

[1] https://github.com/yoshuawuyts/choo


Because it's less effort and more likely to be successful to just improve Web implementations.


Sure, but it's also more likely to make my laptop with 16 gigs of RAM, 8 simultaneous threads and SSD feel more sluggish than the one I had 10 years ago.

Edit: I'll add, I frequently see people say that Windows XP was the best OS ever made. Why the rose-tinted glasses? It was the tail end of the era when hardware was getting faster, faster than software was getting slower.


Where are the Michael Abrashes of today, teaching people how to write tight, fast code? It seems a lost art...

Yes, I know, he's still around...


Luckily there are a few people that still care, just look at the response Handmade Hero has gotten.


Those videos on data oriented design were also very interesting: https://github.com/taylor001/data-oriented-design

The thing is, using C++ instead of React for mobile development of a simple application would probably make me miss deadlines... So we just stick to whats popular.


Building UIs in something like Qt, GTK or Swing was never really that time-consuming, especially given the limited number of controls on one screen of a mobile app.


Except today's machine language is javascript.


> Except that the abstraction layer for the user has remained the same.

This is absolutely not true.

Going to windowing GUIs from character-mode DOS was a massive usability improvement. No more exotic ctl-alt-function key combos (or control-control sequences for WordStar users like me).

Going from desktop apps to webapps changed the abstraction layer of how we access and share documents. No more installing software on the desktop. Docs are available on every computer we log into. Multiple people can edit the same document.

Maybe you don't care about document sharing and would be happy installing old school word processors and spreadsheets? That's a fair statement to make, but the market appears to be voting otherwise.


>No more installing software on the desktop

At least VS Code has to be installed. It is a plus for Google Docs & Co, but not universal. And in most cases, instead of installing you now have to create an account.

>Docs are available on every computer we log into. Multiple people can edit the same document.

Those features are completely independent of the platform the application uses. They are commonly implemented in web apps because for a long time web apps had no other choice, but cloud storage and collaborative editing can be equally well implemented in a desktop application.


> Going to windowing GUIs from character-mode DOS was a massive usability improvement. No more exotic ctl-alt-function key combos

cough Blender ...


Unfortunately - the OSS world has a shortage of good UX people, and the engineers tend to be the ones steering the ship.

I would point to Eclipse as another classic example of "obviously designed by an engineer". There's tons of functionality under the hood and it's a fantastic jumping-off point for further customization... but its layout is intensely non-intuitive in so many ways compared to a purpose-built IDE.

Everything is locked away in menus and "perspectives". If you're writing a Java web app - do you want the Java perspective, the Java EE perspective, the Web perspective, or the Debug perspective?

I hear GIMP's no picnic to work with either but can't confirm personally.


I fear the day "good UX people" come to GIMP and Blender.

It used to be that OSS was developed by people that actually used it. Maybe it wasn't pretty, it had a steep learning curve, but it got things done and was efficient once you learned how. Often you got fresh perspectives on how an interface could be done, since people got fed up with existing solutions.

Nowadays you get UX experts preaching about how an interface is supposed to look, which mostly means copying Apple or Google. You get tons of shiny whitespace. Burger menus, because that's what "everybody is used to".

GIMP with "good UX people" would turn into a bad copy of Photoshop.


GIMP is pretty easy to pick up if you've used desktop apps before. It follows the sort of conventions you'd expect.. Blender, on the other hand, has a very unique approach to UX.


Well, that is true of nearly all 'pro' creative apps. Maya, Illustrator, ...


>You both do realize that similar arguments could have been made back in the day when moving from, say, command-line DOS applications to Windows API applications - right?

No, we don't. With Windows API there's no sandbox and extra embedded language overhead (it's still native code like in DOS, and even optimized better, able to use more available memory than DOS would allow, etc.).

Oh, and you also got the benefit of a FULL graphical interface over DOS. Here you have slower performance, extra cruft, bad apis AND the same final output (as a GUI app).

Not to mention that the Windows API UI toolkit, while bad, is sane compared to the web stack.


There was a big new overhead -- in DOS you could write directly to video memory and control the graphics card's registers directly. In Windows you had to go through libraries. Until DirectX came along, it was basically impossible to write even reasonable-performance animations in Windows, hence the lack of games on Windows 2 and 3.


That is not true, before DirectX there was WinG and there were quite a few good games done in WinG.

The lack of games was mostly due to the reluctance of game developers to abandon Assembly and direct use of the PC hardware, especially because C and C++ compilers were "too slow".


WinG didn't come around until 1994 though. If you wanted to ship a game in 1990, you had a choice of using GDI or using MS-DOS.

And the "assembly" argument doesn't make any sense. You can program Windows games in assembly if you want.


Windows only became relevant for home users after version 3.0, actually the 3.1, which was released in 1992, so of course no one was shipping Windows games in 1990!

Sure, you can use Assembly on Windows, and there were a few books teaching exactly that, but you weren't allowed to touch the graphics hardware any more and do all those graphics card tricks, especially mode X, unless you were writing a graphics driver.


> Not to mention that the Windows API UI toolkit, while bad, is sane compared to the web stack.

Completely disagree. Charles Petzold's HELLO.C is hundreds of lines of code. Hello World on the Web is, well, "Hello world!".

"Sane" environments don't force you to make up a distinction between "long pointers" and "pointers" if you want to conform to the house style in order to match 16-bit x86 real mode. Or route all events through one WndProc, forcing use of "message crackers" to poorly recreate the ergonomics of addEventListener. Or have to recreate the vector graphics stack not once but twice in order to deal with the lack of forward thinking (GDI, GDI+, Direct2D). Or deal with incredible apartment threading complexity to maintain VB6 compatibility. Etc. etc.

If you had said .NET, maybe. But Petzold-style Win32 is bad.


>Completely disagree. Charles Petzold's HELLO.C is hundreds of lines of code. Hello World on the Web is, well, "Hello world!".

It's actually 87 lines of code, with a disclaimer comment, ample empty lines, a callback to play a wav when clicked, and an error message for when it's not run on NT. So, like 30-50 lines of actual Hello World necessary code. And that includes the starting up boilerplate.

That's about as relevant as complaining about the large binary size of a hello world program in a language creating static binaries. It might be larger than expected, but it usually includes a whole runtime. So once you add actual code, the binary's size doesn't scale linearly with the code size.

For comparison, a modern CSS "reset" file, which just resets default styles, is usually larger than the hello.c example.

This is the problem with trivial/contrived examples. They don't show you how the thing you're discussing scales in actual use.

In this case, the hello world in HTML is not representative of what you need to do to create a medium/large SPA in HTML.

And the old Windows api you could always wrap in higher order stuff, or even use as a basic layer to create your own UI library (still native).

The web stack, because of how it's designed, doesn't let you do that.


> It's actually 87 lines of code, with a disclaimer comment, ample empty lines, a callback to play a wav when clicked, and an error message for when it's not run on NT. So, like 30-50 lines of actual Hello World necessary code. And that includes the starting up boilerplate.

Petzold's Hello World is far more than what I'd consider a Hello World in Win32, because in addition to what you noted, it also creates a "full" window with all the associated complexity of managing its drawing yourself. A more suitable Win32 Hello World is not that much more complex than the traditional C one:

    #include <windows.h>
    #include <winuser.h>

    int main() {
        MessageBox(0, "Hello World!", "Hello World!", MB_OK);
    }
From there, one can progress onto "dialog-based applications", where the bulk of the layout is declarative (in the resource file) and the C part is two functions, a callback for messages and a main() which just calls DialogBoxParam(). No WM_PAINT handling is necessary for those. A "full window" application is actually not necessary for many use cases. I've been working with Win32 for over a decade and a half and written at least a dozen little apps for various things, but the vast majority of them are not "full window" apps.

In other words, I'd say Petzold was at least partially responsible for giving the impression that Win32 is impossibly complex even to start with. Fortunately others have come up with better introductions like http://www.winprog.org/tutorial/ or even https://win32assembly.programminghorizon.com/tutorials.html since then, but it seems that the damage has already been done and a whole generation of programmers have gotten the "Win32 is hard" notion embedded into their minds.


You are forgetting the magic CSS incantations, which are browser and version dependent, to accelerate something that is just granted on any native UI toolkit.


I agree that the paint/compositing distinction is bad (see my reply to Paul Irish), but Win32 has just as opaque a distinction. GDI is much worse for modern hardware acceleration than the Web is.


GDI is dead, sure; it has already been replaced by better solutions for anyone who cares to use them. Yet it still scales better than most web stuff.



Complaining that win32 has to do stupid things to get basic features in comparison to the web seems pretty laughable.


Why? The consensus opinion here seems to be that Win32 is a better API than the Web. I'm pointing out that this is an extreme case of rose-colored glasses.


I have had to use the Win32 API within the last year. I left that job; I regret nothing.

As much as I want as few layers between me and the hardware as possible, I would rather have many sane layers, like much of the web, than one insane layer like the Win32 API. It doesn't follow good C, C++, standard library or other conventions. Many functions have multiple poorly documented modes based on which of the structs you pass are empty or null pointers. It also just fails often for no reason I can understand, and the distinction between a Windows application and a console application is quasi-magical and counterproductive.

POSIX... is a blessing by comparison. Every time I need to touch it, it takes me something like an hour and I have a function that works reliably and just does what I need. Until we get to X11 and manipulating those windows. Then I just want to punch everything, but at least X11 works once the code is written, even though it's inside out and backwards.

Now I just strongly prefer to use a good library to get at OS facilities. Things like SDL, Boost or Intel Threading Building Blocks, etc. They are fast and generally tight enough that I can open them up and understand them down to the hardware when I want.


There is a point to be made here though that is more valid than the argument against moving away from command-line DOS applications to Windows API applications.

Moving from text-based to graphical is a much different shift than what we are talking about here. The idea here is that we are creating many abstractions to get the same basic result. I can't rewrite Atom.io for DOS, but I could rewrite Atom.io to use the Windows API directly rather than being Electron-based.

We're talking about VS Code using 13% of the CPU while idle to render a blinking cursor. What benefit does the user get from this? We could render a blinking cursor with less than 13% of a much less powerful CPU years ago.


Or, as a better example - rewriting from Electron to Qt rather than Win32. Obviously Electron was chosen to be cross platform. But I still don't get the web renderer obsession. We already have high perf cross platform solutions. Use them.


On a related note, it seems crazy to me that for actively maintained, cross-platform, native widget GUI libraries, your options are… Qt.

(Not that Qt is a bad library, but it's bizarre that such an important area is so neglected by our industry).


An interesting observation. But isn't the explanation rather obvious? Writing desktop applications to sell for money, that has quietly disappeared. It still exists, somewhere, but so does horseshoe-fitting (the few experts are probably making decent money, but it's a zombie business nonetheless). Between a few established offerings that won't be looking for a new framework any time soon, desktop-packaged web and all the gratis stuff, where would new GUI toolkits find their footing? Desktop means potentially offline and offline means almost all monetization schemes don't apply. (including those that don't work except for instilling hope in investors)


Yes, I know why it's the case nowadays: at my work, for example, I occasionally touch all the major areas of modern software development — server backends, mobile, Javascript-in-the-browser and meta-software for writing software — but I've never done what the average person, and maybe even I, would think of when they hear the term "software development."

And it does seem very very strange. I'm imagining this Socratic dialogue:

---

Socrates: What is the most popular programming language?

Developer: Java.

Socrates: And why is Java so popular?

Developer: It's the first language most people learn nowadays, so everyone knows it.

Socrates: So it's popular because it's popular?

Developer: Well, it's very similar to C++, which was the most popular language when it first came out, so it was easy for people who already knew C++ to learn.

Socrates: So why not keep using C++?

Developer: C++ had a lot of problems that made it hard to use, and Java solved a lot of those.

Socrates: Such as?

Developer: If you wrote an application in C++ that ran on any given operating system, like Windows, you wouldn't be able to run the same code on another operating system like Linux or Mac; you would have to rewrite much of your work from scratch to support more than one operating system. Java is "write once, run anywhere." Your Java program will run exactly the same way, without having to rewrite anything, on any operating system that has a Java implementation.

Socrates: Interesting! Can you show me?

Developer: (Writes "hello world," demonstrates it runs the same in both cmd.exe and Bash)

Socrates: And that works with real programs too?

Developer: What do you mean? This isn't a very long or useful program, but it is a real program.

Socrates: What's special about what you just showed me? Can't you write these command line things in any language? Look: I learned a bit of Python a few years ago. (enters python -c "print 'Hello, world!'" in both terminals)

Developer: OK, maybe that wasn't the best example, but when you're doing more complicated stuff it gets harder and harder to write code for more than one operating system in a language like C++, so a language that runs the same everywhere, like Java, or Python, is better to have.

Socrates: So like an application that you would install?

Developer: Exactly!

Socrates: OK! So could you show me a real application then? Like, it says "Hello, world!" in a window, with a menu and buttons and stuff? And it will be the same on both Windows and Linux?

Developer: No, that's not possible.

Socrates: Hm?

Developer: You can't do that in Java.

Socrates: So, Java is the most popular programming language, but you can't use the programs you write in it?

Developer: Alright, you got me there. Usually Java is used for writing applications that run on servers, not PCs.

Socrates: Servers?

Developer: Yeah, like if you go to a website, there might be an application on its server to fetch your account information from a database.

Socrates: But why do you need a special application for that? Can't you just ask for the information you need directly?

Developer: Good question. We're moving in that direction. But for some things you really do need some kind of custom logic, and for those things Java is a good solution across different platforms.

Socrates: What different platforms are there in servers?

Developer: There aren't really that many platforms. Almost all servers run Linux. But there's also a few that run Windows, or FreeBSD, which is very similar to Linux.

Socrates: So if you were on a less popular operating system, like Windows, you would use Java to be compatible with Linux?

Developer: Probably not; really, the only reason you would use Windows on a server instead of Linux is if your application were written in C#, which is Microsoft's Java competitor.

Socrates: So Java is the most popular-because-it's-popular programming language you can't write programs in, and it's useful because it runs on every platform, on only one platform?


Besides the fact that you can have a reasonable UI in Swing (look at IDEA), that Java was designed for Solaris initially, and that Java is still way faster than any JS (sockets, compare-and-set primitives, direct access to native memory, shared-memory IPC and the like, which makes it a prime candidate for server applications), it makes for a good example of what an uneducated developer might say.


> Socrates: OK! So could you show me a real application then? Like, it says "Hello, world!" in a window, with a menu and buttons and stuff? And it will be the same on both Windows and Linux?

> Developer: No, that's not possible.

swing? awt? javafx?


I exaggerate somewhat, but all three are low-quality and further development appears to be abandoned.


They are good enough in the hands of those who care to learn their APIs, and way better than the web will ever be.


It's a huge undertaking with a questionable business model, especially as most things have moved to the web and/or mobile. But it is amusing that there are way more free, cross-platform, high-quality game engines than GUI toolkits.


>You both do realize that similar arguments could have been made back in the day when moving from, say, command-line DOS applications to Windows API applications - right?

Ugh, reddit's standard, snarky response: "you do realize... right?"

There's simply no way to get around the fact that by and large, many in-browser apps are poorly performant by any modern standard.

To me, it seems like there's a faction within the web development community whose goal is to take things that worked perfectly fine as native apps and re-implement them with poorer performance in the browser, with seemingly no benefit.

I wish these people all the success in the world, if that's what they enjoy doing. But for the most part I simply cannot tolerate the work they produce.


The benefit is to their paymasters, as they get to extract rent (either directly or via ads) without having to contend with software pirates because the actual software logic is sitting pretty in a server cluster somewhere.

Effectively we are back to the world of time-share terminals.


Ironically, we may yet see a renaissance in native code as more investors realize that the US broadband situation (especially wireless) is not getting better anytime soon. This leads to the realization that always-on, high-speed, low-latency broadband is not a safe assumption. Disconnected operation will become more important as the product space for always-on solutions becomes ever-more crowded.

This realization that always-connected apps might be joined by disconnected apps is emphasized when looking to markets across the world. There are more disconnected people in the world than always-on, high-speed, low-latency connected. There is a burgeoning middle class characterized by sporadic, low-speed, high-latency connections, who are willing to spend some money online, for those times they make it online. They might spend pennies each today, but whoever captures those markets is addressing billions of underserved online customers. I'd take a billion $0.01 payments a day any day.


And they get to use open source libraries without having to share any single piece of their changes.


Except we're using divs, markup elements originally intended for research documentation, and using them to emulate modern UI techniques. I don't mind abstractions, but we should at least be using something built to task.


The most egregious example of abuse of technology for me is the browser version of Wolf3d, where they used divs to essentially replicate the vertical column drawing of a raycasting engine: http://3d.wolfenstein.com/


That game only works in the US, it seems.

But if you set the cookies

    document.cookie = "age_checker=pass; expires=Thu, 18 Dec 2037 12:00:00 UTC; path=/";
    document.cookie = "is_legal=yes; expires=Thu, 18 Dec 2037 12:00:00 UTC; path=/";
It does work elsewhere, too. (Otherwise it just redirects to your local wolfenstein info site)


Abstraction is about coming up with general designs from many specific implementations. You can layer abstractions but layering is not fundamental to abstractions.

Just because a design introduces extra indirection into a system does not mean that the design is providing any abstraction.

Likewise just because a design changes from one implementation to another does not mean anything has been abstracted.

Your example of moving from DOS to Windows is really good for illustrating this. DOS had almost no hardware abstraction support, you had to include drivers for a lot of hardware into your application. Moving from DOS to Windows 3.1 only provided abstractions over the video drivers, for example when it came to network cards you still had to use DOS drivers.

Moving a text editor from DOS to Windows does not provide you with any abstractions for things like the cursor. It is only changing the API for the sake of having your text editor run on a different platform.

Guess what? This is the same with trying to get your cursor and text rendering working on a web browser. There is absolutely no abstraction here, you are just changing your code to work with the very awkward DOM API.


It depends on what efficiency buys you, and what you trade efficiency for.

When Windows was developed, we had enough CPU power to do the basic tasks people wanted done with a computer at a reasonable speed - the idea was that spare capacity was being traded for user friendliness.

For some applications, even today, when maximum efficiency is necessary, purpose built machines ('soldering together the flip-flops that make up your logic and memory') are still used - it's just that it's often an ASIC that is fabricated.


> When Windows was developed, we had enough CPU power to do the basic tasks people wanted done with a computer at a reasonable speed - the idea was that spare capacity was being traded for user friendliness.

And in many cases that led to an instant drop in productivity. Those old green-screen systems were not pretty but they were quite efficient to use.


I've actually witnessed a very large oil & gas company switch over from those old dos programs to a new win32 program, and to a man, every person who had any experience on the old system bitched about how much slower it was to get anything done.

Having worked on the system myself, they were right (this was as a noobie to both systems).

There is something to be said for a small, tight system.


Some time ago HN linked to a blog from a Norwegian tasked with mailing 3.5" floppies to doctors.

This because the doctors insisted on using a DOS based patient journal, as they could operate it completely by keyboard while maintaining a conversation with the patient.

I speculate that one reason for this is that DOS allowed each program full keyboard access, while Windows and other GUIs have to reserve certain keys for managing the UI (switching between windows, etc.).

Thus what was earlier a series of single key presses now involves holding down a modifier for the duration. And that is if the developer even remembered to put in a hotkey for said action.


You don't need the Windows reserved keys for anything within an application. You could make a fully keyboard-operable UI for an application on Windows.

It's just that software makers don't do it; most everyone thinks everyone loves to do things with mouse et al.


Do you happen to have the link to this blog handy? I'd be interested in reading it.


You got me digging, so here is the HN discussion about it.

https://news.ycombinator.com/item?id=10287889


Yea, but that came with other benefits, like mandatory accessibility and unified look (at least on the mac). Where are they on the web? Does anyone proactively consider ARIA? Consistency of user interface is just a laughable dream.

But hey, thank god we finally have a framework we can inject ads in at will.


Google Docs takes a good 2-3 seconds to spin up a UI on my i7.

I doubt much of that time is spent building the DOM tree for the UI. Google Docs does a lot more than a simple text editing widget - there's a networked file system, a multi-user collaboration engine, a realtime notification system, etc that all get initialised. Instantiating all that over a network in 2-3 seconds is really fast.


> there's a networked file system
> a multi-user collaboration engine
> a realtime notification system

None of these apply to a new, unshared document. Those subsystems can be loaded slowly over the following 10 seconds, that's fine. Is it such a hard thing to ask to make the UI responsive within 0.1 s? Like, be able to type stuff and have it appear on the screen without delay?

It's so bad that I often use the basic HTML version of Gmail because it loads in less than 1 second. The normal "AJAX" version loads in 3-4 seconds. And I have a 200 megabit connection. Who wins at getting me my information faster?


Over the past year or two Gmail has gotten really slow. I'm not sure what happened, because there are no new features I can think of that would cause this.

It's gotten so bad that I'm considering moving to another provider or using a good old email app again.


Outlook.com as well. When it launched, it was blazingly fast. Now it comes with a loading screen and is really laggy.

Bloated single page apps is the curse of the modern web.


Use the basic HTML version. It's worth the loss of a couple features from regular gmail or inbox for a much snappier UI (even with it fetching full new pages from the server all the time!)


I like my vim-style keyboard shortcuts though :-/.


I've noticed that too! Do you use Inbox? I do and I was thinking about going back to the classic interface to see if it's faster.


I tried Inbox. I thought for a while that Gmail stars == Inbox pins. When I found out that I was missing out on emails I had previously starred, I quit and went back to Gmail.

Also, Inbox did not show me full e-mails on my Android Wear smartwatch while Gmail did. Also, I couldn't figure out how to set up the filters I needed in Inbox.


Oh, I tried Inbox but Gmail was nice and fast compared to that pos.


I would recommend Fastmail. Moved over for similar reasons and never looked back


I imagine it would be hard for Google to run integration tests if the whole app took 15 seconds to load instead of 3. Not to mention that if there's an error loading part of it, people will complain that they wrote a document but it wasn't saved/shared like they expected.

Google Docs is truly amazing technology for the browser. It's just bringing its model to "native" apps that I have very mixed feelings about.


>Instantiating all that over a network in 2-3 seconds is really fast.

I see. In your opinion, do you think Chrome would be even faster if it managed to spin up an entire sandboxed docker container (in case any web site wanted one, to do whatever it wants with), as well as another one with a full Postgresql install (again in case any web site wanted one, to do whatever it wanted with, oh, and also for Chrome's own bookmarks and stuff), yet still launched within a blazing 14 seconds?

Would that be even faster, considering?

Because from where I'm standing, that would be 5-7 times slower.

And 3 seconds to start is 2.95 seconds too slow for me to start typing into a URL box, which could appear within 50 ms if it was done right instead of done wrong.

Chrome does things wrong instead of doing things right, and it really is that simple. I'm a human, not a network share. I use software so I can interact with it. It should do the rest on its own time and as needed.

Loading a bunch of stuff "quickly" does not equal being quick. Google should know better.


...a whole 2-3 seconds!

Sometimes I think that comments like these arise out of not having experienced text-only rendering at 300 baud...

I know that's a generalization, and most likely unfair - but damn, today's phones, to this old man, are pocket super-computers (for that matter from my vantage point, an Arduino is a wonder, and a RasPi is utterly amazing)!


> Sometimes I think that comments like these arise out of not having experienced text-only rendering at 300 baud...

At the same time, we are now living in the future. With "pocket super-computers" that are more powerful than the Crays that existed when you were waiting on 300 baud text rendering, yet we are using 13% of that power just to render a blinking cursor? Does that not seem like we haven't made as much progress as we should have?


That's totally unfair. You should compare it to the startup time of Word from, I don't know, 10-20 years ago. Around the same time (if not less), and Google Docs still can't match the features of old Word.

The point being, for the past decade or two, we've been burning all hardware performance improvements on things that are neither visible to user, nor enable them to do more with their computer. Surely, there must be ways of spending those CPU cycles and RAM bytes on things that are actually useful and enable people to do more / better / faster work.


On a per-application basis, you are probably correct. But if the additional abstraction layers allow a greater diversity of applications and more tools for more niche cases because development is easier and/or faster, then that is immediately beneficial to the end user. Not to mention faster design iteration, implementation of new features, etc.


> On a per-application basis, you are probably correct. But if the additional abstraction layers allow ...

Windows 7 boots in approx. 5 seconds to the desktop (if no password is set), and stuff like Word or Visio starts instantaneously. Web applications on the very same computer are a whole different story.

Let's just acknowledge that SaaS is not and never was about any kind of benefits for the user or customer, but just about either centralising resources and services back to the vendor's control and/or increasing money extraction.


I have never seen Word open instantly. When I do it, there is always a second or two for the obnoxious splash screen, then often 10 to 15 seconds waiting for it to do whatever. Notepad++, yeah, that opens nearly instantly.

I mostly use Ubuntu and there Libreoffice still has a stupid splash screen. After that though the amount of time is too small for me to count.


> But if the additional abstraction layers allow there to be greater diversity of applications and more tools for more niche cases because development is easier and/or faster

Please try to find a single example where this is true. Niche cases and greater diversity all come from a lot of work keeping runtimes up to date and APIs backward-compatible. This is only because of Free Software. Trying to credit this to "abstraction layers" is insulting to all of the programmers working on Free Software libraries, compilers, runtimes and operating systems.


What I'm suggesting is that, e.g., there are more Slack, Atom, VS Code, (and those are just Electron apps) etc. and/or those apps have more features because of the speed of development and iteration afforded by these inefficient abstraction layers.

So, I can't give a specific example but am instead pointing at the diverse ecosystem of applications and rich functionality. It's logically impossible for me to prove that these apps wouldn't exist without inefficient abstraction layers. It's my supposition, and the developers who write electron apps would probably agree.


> It's logically impossible for me to prove that these apps wouldn't exist without inefficient abstraction layers.

The literally millions of applications not written on top of the specific crapware you list are an existence proof. This was the case even with assembly/Pascal/BASIC applications on microcomputers in the 1980s. Your whole argument is that somehow the web stack is easier to write applications on top of because <insert nonsense adjectives like "diverse" and "rich">. To go back to the Pascal example, a lot of people who program in Delphi strongly disagree even today, and Delphi has been around since 1995. What makes you think that the web stack has a higher speed of development than other software tools? Why do you think that high-speed development depends on inefficient abstractions? That is all total nonsense. There are a lot of problems with the web stack. You need to stop making up bullshit rationalizations and learn about other approaches to software development.


Very good questions that I can't answer. And it's not my 'bullshit rationalization' -- I'm not the one who decided to build all these products on this "crapware" stack. I'm just suggesting that this stack was chosen for some (hopefully) logical reason.

If speed of development isn't the reason, then what is the attraction? I'm serious and curious. I looked through your profile and you clearly know your shit. Are we just in a period of a bad stack being popular and used despite there being other, better options?


Why should he compare it to Word? Was it a cloud-based product that synced all files between multiple machines of radically different form-factors from PCs to phones? No?

Google Docs uses the Internet, so did anything on an old modem. Word didn't (especially 20 years ago).


So word processors have to be slow to use the network?

I'm not sure your argument makes any sense.


I can't remember the stand-up routine, but the punchline applies:

"10secs! I was supposed to be at work 15 seconds ago!"



Yeah. I remember when Gmail's introduction of a floating compose window, with some limited window management, was a big feature. But desktop GUIs have been doing this since the '80s...


There is plenty of available spectrum between native widgets and writing entire applications inside the web browser. Electron is a fashion statement and a convenient short cut to portability, that comes bundled with a mountain of complexity and technical debt. Moving the DOM to a native "server" removes most performance issues and allows applying the full power of a real language. https://github.com/codr4life/libc4dom/blob/master/tests/main...


> instead of a native widget that writes directly to the screen.

"Writing directly to the screen" (by which I assume you mean writing pixels one by one to the framebuffer) is a bad idea for modern graphics hardware. It was fine on the 486, but nowadays you need the ability to do global optimizations for good 2D (or 3D) graphics performance. Ironically, the Web stack is much better positioned to do this than, say, Win32, because of the declarative nature of CSS.

Besides, as some downthread have pointed out, you didn't "write directly to the screen" in Win32. You went through GDI.


It seems reasonable this might be true, but it's not. In video games we went down the road of retained-mode graphics APIs (declarative-type things, so that they can do the kinds of 'global optimization' you mention) but we abandoned them because they are terrible. Video games all render using immediate-mode APIs and this has been true for a very long time now and nobody is interested in going back to the awful retained-mode experiment.


You build custom retained-mode APIs on top of the immediate mode APIs—they're called game engines.

What happens if you try to present an immediate mode API for UIs is the status quo with APIs like Skia-GL. You frequently end up switching shaders and issuing a new draw call every time you draw a rectangle, and you draw strictly in back to front order so you completely lose your Z-buffer.

Imagine if games worked like that: drawing in back to front order and switching shaders every time you drew a triangle. Your performance would be terrible. But that's the API that these '90s style UI libraries force you into. Nobody thought that state changes would be expensive or that Z-buffers could exist when Win32, GTK, etc. were designed. They strictly drew using the painter's algorithm, and they used highly specialized routines for every little widget piece because minimizing memory bandwidth was way more important than avoiding state changes. But the hardware landscape is different now. That requires a different approach instead of blindly copying what the "native" APIs did in 1995.


Ehh, game engines are not really retained-mode in the way you mean. There isn't usually a cordoned-off piece of state that represents visuals only. Rather, much of that state is produced each frame from the mixture of state that serves all purposes (collision detection, game event logic, etc).

"What happens if you try to present an immediate mode API for UIs is the status quo with APIs like Skia-GL."

I don't know what Skia-GL is, but in games, the more experienced people tend to use immediate mode for UIs. (This trend has a name, "IMGUI". I say 'more-experienced people' because less-experienced people will do it just by copying some API that already exists, and these tend to be retained-mode because that is how UIs are usually done). UIs are tremendously less painful when done as IMGUI, and they are also faster; at least, this is my experience. [There is another case when people use retained-mode stuff, and that's when they are using some system where content people build a UI in Flash or something and they want to repro that in the game engine; thus the UI is fundamentally retained-mode in nature. I am not a super-big fan of this approach but it does happen.]

"and you draw strictly in back to front order so you completely lose your Z-buffer"

That sounds more like a limitation of the way the library is programmed than anything to do with retained or immediate mode. There may also be some confusion about causation here. (Keep in mind that Z buffers aren't useful in the regular way if translucency is happening, so if a UI system wants to support translucency in the general case, that alone is a reason why it might go painter's algorithm, regardless of whether it's retained or immediate).

"But that's the API that these '90s style UI libraries force you into."

90s-style UI libraries are stuff like Motif and Xlib and MFC ... all retained mode!

I don't agree that an IMGUI style forces you into any more shader switches than you already would have. It just requires you to be motivated to avoid shader switches. You could say that it mildly or moderately encourages you to have more shader switches, and I would not necessarily disagree. That said, UI rendering is usually such a light workload compared to general game rendering that we don't worry too much about its efficiency -- which is another reason why game people are so flabbergasted by the modern slowness of 2D applications, they are doing almost no work in principle.

Back to the retained versus IMGUI point ... If anything, there is great potential for the retained mode version to be slower, since it will usually be navigating a tree of cache-unfriendly heap-allocated nodes many times in order to draw stuff, whereas the IMGUI version is generating data as needed so it is much easier to avoid such CPU-bottlenecking operations.


I will also say that this is not an academic argument for me; I am in the middle of writing yet another immediate-mode GUI right now, for the game editor I am working on. Every day I am freshly glad that I am doing things as IMGUI instead of RMGUI.

Here is a (somewhat old) video explaining some of the motivations behind structuring things as IMGUI: https://www.youtube.com/watch?v=Z1qyvQsjK5Y


This argument looks like you and pcwalton are arguing about different definitions of "immediate mode API". I think both of you agree with each other on object-level propositions.

pcwalton seems to be presuming that part of the contract of an "immediate mode API" is that, like old-school ones, it actually draws to the frame buffer by the end of the call.

Whereas you are talking about modern "immediate mode API"s where the calls just add things to an internal data structure that is all drawn at once, avoiding unnecessary shader switches etc. IIRC this is how Conrod (Rust's imgui library) and https://github.com/ocornut/imgui work, although with varying levels of caching.

One point to make about retained mode GUIs is I remember reading an argument that immediate mode is great for visually simple UIs, such as those in video games, but isn't as good for larger scale graphical applications and custom widgets. For example when rendering a large text box, list or table you don't want to have to recalculate the layout every frame so you need some data structure that sticks around between frames specific to the widget type, so that's what retained mode APIs like Qt do for their widgets.

Sure you can do the calculations yourself for exactly which rows of a table are currently in view and render those and the scrollbar with an immediate mode API, but the promise of toolkits like Qt is that you don't have to write calculations and data structures for every table.
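The visible-row calculation mentioned here is small enough to sketch. Assuming fixed row heights (all names hypothetical, not any particular toolkit's API), the per-frame work for a virtualized table is just arithmetic:

```typescript
// Compute the inclusive [first, last] range of rows intersecting the
// viewport, assuming every row has the same fixed height.
function visibleRows(
  scrollY: number, viewportH: number, rowH: number, rowCount: number
): [number, number] {
  const first = Math.max(0, Math.floor(scrollY / rowH));
  const last = Math.min(rowCount - 1, Math.ceil((scrollY + viewportH) / rowH) - 1);
  return [first, last];
}
```

An immediate-mode table would call this every frame and draw only that range; the scrollbar thumb size is just `viewportH / (rowH * rowCount)` of the track.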


"so you need some data structure that sticks around between frames specific to the widget type, so that's what retained mode APIs like Qt do for their widgets."

Immediate mode GUI systems are allowed to keep state around between frames and the most-featureful ones do. The "immediate mode" is just about the API between the library and the user, not about what the library is allowed to do behind the scenes. The argument that retained-mode systems are inherently better at this doesn't hold water; it is kind of an orthogonal issue.


I'm definitely aware of this, it's why I mentioned "varying levels of caching". The Conrod imgui that I mentioned basically uses retained mode GUI data structures behind an immediate mode API through diffing for performance reasons.

This works just as well/quickly as a retained mode API in almost all cases. There are some cases, like extremely long tables with varying row heights and sortable columns, where you need an efficient diff of the table contents, since recalculating layout and sorting every frame is inefficient. Retained mode APIs do this with methods to add and delete rows. It's possible to do with an immediate mode API, but to detect differences in the rows passed in quickly you need to use a functional persistent map data structure with log(n) symmetric diff. Or you can have an API that is mostly immediate mode but has some kind of "TableLayout" struct that persists between frames and is modified by add and remove functions.

I'm curious what API you would use for implementing a table with varying row heights (that you only know upon rendering but can guess beforehand), sortable columns and millions of rows. I implemented this in an immediate mode GUI API a few months ago, and I did it with persistent maps and incremental computation in OCaml. Incrementally maintaining a splay tree and a sorted order by symmetric diff of the input maps. This isn't as nice of an API in languages like C++ so I'm wondering if there's a better way.
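For what it's worth, the persistent "TableLayout"-style approach can be sketched without the persistent-map machinery. This is only an illustrative sketch (names and API invented here, not the OCaml implementation described above): a Fenwick (binary indexed) tree over per-row heights gives O(log n) updates when a row is measured and O(log n) "which row is at scroll offset y" queries:

```typescript
// Per-widget layout state that persists between frames for a table with
// variable row heights. A Fenwick tree stores the heights, so updating one
// row's height and locating the row at a y-offset both cost O(log n).
class TableLayout {
  private tree: number[];
  constructor(private n: number) {            // n >= 1 rows, heights start at 0
    this.tree = new Array(n + 1).fill(0);
  }
  // Total height of rows [0, count).
  prefix(count: number): number {
    let s = 0;
    for (let i = count; i > 0; i -= i & -i) s += this.tree[i];
    return s;
  }
  height(row: number): number {
    return this.prefix(row + 1) - this.prefix(row);
  }
  // Set a single (0-based) row's height, e.g. when it is measured on render.
  setHeight(row: number, h: number): void {
    const delta = h - this.height(row);
    for (let i = row + 1; i <= this.n; i += i & -i) this.tree[i] += delta;
  }
  // 0-based index of the row whose vertical span contains offsetY,
  // via a binary search descending the Fenwick tree.
  rowAt(offsetY: number): number {
    let pos = 0;
    let rem = offsetY;
    for (let step = 1 << (31 - Math.clz32(this.n)); step > 0; step >>= 1) {
      if (pos + step <= this.n && this.tree[pos + step] <= rem) {
        pos += step;
        rem -= this.tree[pos];
      }
    }
    return Math.min(pos, this.n - 1);
  }
}
```

The point is that the API on the surface can stay immediate-mode; the tree is just hidden per-widget state, which (as noted above) immediate-mode libraries are allowed to keep.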


"I'm curious what API you would use for implementing a table with varying row heights (that you only know upon rendering but can guess beforehand), sortable columns and millions of rows."

In general my policy is that when things get really complicated or specialized, the application knows a lot more about its use case than some trying-to-be-general API does, so it makes sense for the application to do most of the work of dealing with the row heights or whatever. (It's hard for me to answer more concretely since it depends on exactly what is being implemented, which I don't know.)


Motif and Xlib use expose events to handle drawing. Doesn't imply retained mode drawing; you could use either in the handler.


It is a little confusing because we are talking about both rendering and GUIs, but ... "retained mode" in this case refers to the GUI itself, not the method of drawing. Motif and Xlib are "retained mode" in the GUI sense because if you want there to be a button, you instantiate that button and register it with the library, and then if you want it to become visible or invisible or change color you call procedures that poke values on that instantiated button. In IMGUI you don't preinstantiate; you just say "draw a button now" and if you don't want it to be visible, you just don't draw it, etc.
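To make the contrast concrete, here's a toy sketch of the two styles (names are hypothetical, not any real library's API):

```typescript
// Retained mode: instantiate the button up front, then poke its state later
// via the object (visible, label, color, ...).
class RetainedButton {
  constructor(public label: string, public visible: boolean = true) {}
}

// Immediate mode: calling the function each frame IS the button. Input for
// the current frame comes in via a context; the return value means "was
// clicked this frame". To hide the button, simply don't call this.
interface UiContext { mouseX: number; mouseY: number; mouseClicked: boolean; }

function button(ui: UiContext, label: string,
                x: number, y: number, w: number, h: number): boolean {
  const hot = ui.mouseX >= x && ui.mouseX < x + w &&
              ui.mouseY >= y && ui.mouseY < y + h;
  // A real library would also queue a draw command for the rect and label here.
  return hot && ui.mouseClicked;
}
```

The retained version needs create/update/destroy calls to track UI changes; in the immediate version, a "state change" is just different code running on a different frame.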


This is a fair point. All the mapped XWindows are certainly "retained" from this point of view.


But can you imagine something like Word being written without the "retained-mode" abstraction?


Yes, absolutely, and in fact I think it would be a much better program.


Separating layout from styling and behaviour is something that many GUI toolkit developers have decided is beneficial.

Most "modern" mainstream native toolkits - e.g. GTK+ 3, Qt 5, WPF - encourage this separation into layout - GtkBuilder, QML, XAML - and style - CSS, Qt Style Sheets, XAML Styles.

So, this isn't a "web browser" problem. Or, this style of GUI isn't the problem with Electron. I find GTK and Qt apps to be plenty responsive enough, even when their GUIs are loaded from XML files.


Also compare Google Docs' cold start-up time with Microsoft Word's and LibreOffice's. You will see that 2-3 seconds is fast, and the two mentioned are not even loaded from a remote resource...


Excel loads in ~1 second...and doesn't have UI lag after it does.

Try working with large data sets in Google Docs and you'll have that 2-3 second lag time with _every_ operation you perform.


On the flip side, the new Windows "Metro" style calculator app takes several seconds to load ... and is less usable than the old calc.exe.


I've found this so often. I have a dual Xeon and 16GB of RAM, and a calculator of all things taking more than a second is unacceptable.

I got a popup inside the calculator asking for feedback about it once with an inbuilt form and submission. I can only assume it has toooooons of hidden away cruft that does everything but assist in calculating things.


I can see it now.

Manager: All apps have to use this feedback framework now, no exceptions. Getting feedback is super important so we can be more Agile!

Dev: ok... uh, but this is 40x the size of all of calc.exe. Plus it's just a calculator and we've refined it for years so it's pretty good already. Isn't that kinda nuts?

Manager: metrics! Feedback! Agile! Just do it!


IMHO calc.exe has been getting worse since XP:

https://news.ycombinator.com/item?id=10791667

(Note that the "Calculator Plus" mentioned there has --- not suprisingly(!?) --- disappeared from Microsoft's download center, but you can still find the official, signed installer by searching for "CalcPlus.msi".)


It takes maybe half a second on my four-year-old Core i5.


Sometimes it does. I've seen this happen on several machines, some containing recent i7s. And if it does "pop up" quick, it has a loading screen. Let that sink in. It actually has a full-colour screen while it figures out how to render a few numbers and buttons. FFS.


Which is still kinda impressive given that calc.exe starts in milliseconds.


Or try working with it over an intermittent connection, everything goes haywire.


If you don't have plug-ins, Word 2010 (which, even without plugins is far more functional than Google Docs) loads in under half a second on a modern machine.


I really can't reproduce that. I have used MS Office since Office '97, and even Office 2000 on a Windows 98 machine loaded in about three seconds. Nowadays, with Office 2010 or newer, you won't even see the splash screen anymore. Start menu -> click -> poof, application is there. Google Docs is nowhere near that.


Very simple, there's a couple of orders of magnitude greater number of developers and designers who have skill with web technologies than native UI. Similarly there's a couple of orders of magnitude more options of UI frameworks and design patterns. Add to that the fact that Blink, WebKit, V8 and Chakra have been constantly pushing the bounds on speed, bringing web-technology-based front ends within touching distance of native in terms of speed.

Given all these factors, any product using web technologies for UI can move much faster than products which don't. Like how Sublime and every other programming text editor basically got eaten up by VSCode and Atom in about a year and a half.


It's the typical web story. You can get to "good enough" with blazing speed, but the limitations of the web make it hard to achieve a high level of polish.


Very simple, there's a couple of orders of magnitude greater number of developers and designers who have skill with web technologies than native UI.

There is a very succinct rebuttal to that, and a good explanation for why apps based on "web technology" are the way they are: "quality is not quantity."


> Very simple, there's a couple of orders of magnitude greater number of developers and designers who have skill with web technologies than native UI.

[citation needed] - web application development is incredibly complex and can't be compared to just "web development" (e.g. writing a HTML document or template and styling it).


I still use Sublime. I've also used Atom and I didn't see any features it had that Sublime didn't. They're both very basic text editors, with the difference that Sublime is about 100x faster and less bloated.


I don't get this criticism. HTML/CSS is the closest we have to a universally understood syntax for designing interfaces.

Why invent a new standard? HTML is fine, CSS is fine. Most importantly, everyone understands it and can work immediately with it.

Any issues with performance are due to the implementation of the platform that renders this HTML/CSS interface. It's much more likely that Google Docs feels sluggish due to the JavaScript it executes in order to control the rendering of its HTML/CSS, rather than the rendering itself.


HTML and CSS are a bad fit for rendering heavy graphical interfaces because they fundamentally follow a document flow rather than grid-based layouts. Flexbox and css-grid are helping some in this area, but they are not used very often.

(not to say that HTML and CSS aren't useful, but they are far from the ideal means of rendering a UI).

HTML does work fine when you use it for mostly document focused work, and I enjoy the interactivity and connectivity that web browsers have brought to the web, but I'd love to see an improvement on it all.


Flexbox is that improvement.

If you discount flexbox due to "not [being] used very often", then you can't logically argue for burning down the HTML and CSS stack and replacing it with something else that has zero market share.


My point wasn't so much that Flexbox isn't that improvement, but that I haven't yet had a chance to fully learn it, and I suspect that a lot of front-end developers are in a similar space. I plan on rectifying that soon, but it is yet another thing to add atop the large number of other things that comprise understanding modern web browsers.

I'm not saying that we should burn the HTML and CSS stack. It has served the web very well and will continue to do so, but until the last few years, it's been a document focused stack that's been twisted into doing app development, and in some sense, it still is.

HTML and CSS are not the best tool for heavy GUI development, a la Photoshop, Visual Studio (not VS Code) or other large GUI intensive things. Most web apps have yet to replicate the combination of features and/or performance of those types native applications. Flexbox is an improvement there that helps with layout, but that doesn't change the fact that we are working with the DOM under the hood, with all of its various quirks and performance issues. (Not to say other GUI frameworks/APIs are perfect, or necessarily better. Some of them just allow you to optimize a little closer to the metal).

One can point out that large GUI tools like Photoshop aren't being created as much these days, outside of AAA game dev, CAD, or the like, or that many large GUI's use web views to help display documents, a la Steam.

I sincerely hope that Servo's "Web Browsers are essentially AAA game engines" approach catches on.

I'd be interested to see how an event-based html5 canvas GUI library would compare to DOM.


Q: How do you make a video maintain an aspect ratio and fill up the width of a parent? A: Nested div hell.

Q: How do you make an image maintain an aspect ratio and fill up the width or height of the screen, whichever comes first? A: Nested div hell and JavaScript.

Q: How do you make a scaled background image stay put even when the keyboard input pops up on a phone? A: Supreme JavaScript, CSS, and div hell.

Q: How do you center a paragraph of text vertically in a div? A: Nested div hell.

I'd say FlexBox is pretty inadequate. Why can't we have things like:

    #my-video {width:80%;height:calc(width*2/3);}


CSS object-fit handles your first two complaints, and CSS Variables handles your last one.


Aren't CSS Variables just constants you can reuse? How are they going to let you set width to 80% and height to 2/3 of whatever that ends up being?


Because percentage padding (the usual aspect-ratio trick) is resolved against the width of the containing block.


Just because it's the best we have (if you're looking for a cross platform solution, anyway) doesn't mean that we've reached the peak and can just quit. We can do a hell of a lot better, and doing anything less is tragically underselling ourselves. There are no laws of physics preventing a better solution from existing. No, creating something better isn't easy, but neither have any of the other technological breakthroughs.

HTML/JS/CSS is just a stepping stone like any other, not an endgame. Don't grow complacent with it. Demand something better.


> it's the best we have (if you're looking for a cross platform solution, anyway)

No it's not. If you want a fast full-featured cross-platform GUI toolkit, there are many: GTK+ and Qt are especially great, and have bindings for several languages.


I can't speak for Qt, but GTK+ is not that great outside of the Linux bubble. IMHO, of course.


Qt was not that fun either, at least as of two years ago: old-style class hierarchies, clunky abstractions, and an awful build step (qmake). It got better with the use of lambdas; it is no longer necessary to have a class to connect to a signal, a lambda works as well.

However, QML is an improvement. It was a bit quirky to get it to render the way you wanted, and it did not have a native look and feel, but maybe that has gotten better since I used it. I could imagine that TypeScript + QML could be quite pleasant. The big downside is that the install size of your program is quite big.


They aren't that great in the Linux bubble.

I say that from the developer and the user perspective.


Qt isn't native. They're drawing their own widgets that look close to native - that's why you can theme Qt apps.


Factually incorrect, since Qt uses native widgets when possible. Applying Qt stylesheets usually disables use of the operating system's styling engine, i.e. only then Qt starts drawing the widgets by itself. Widgets that are not part of the native widget assortment are drawn by Qt.


Then what is this blog post about? https://blog.qt.io/blog/2017/02/06/native-look-feel/


Qt Quick Controls, which has nothing to do with QtWidgets.

> Posted in Dev Loop, Qt Quick Controls, Styles


"Quitting" is not how I'd describe Atom and VS Code.

You might not like the envelope they are pushing, but they are cutting-edge explorations into web tech, coinciding with other cutting-edge developments like HTML/JS -> Native interfaces.


Are they really cutting edge? Or are they just an excuse to cram more javascript and web tech into unrelated areas.

Just because js is good on web, doesn't - and shouldn't - mean that we should be doing that.


There are now many more expressive languages which compile to HTML, CSS and JS. We are 'stuck' with these 3 technologies that are still very functional and we should just embrace them as the low level language of UIs.

Of the 3, JavaScript is the most dispensable; there's nothing stopping us from writing platforms which have UIs designed in HTML/CSS but a DOM controlled by Python/C#/Ruby.

I was not suggesting we stop progressing, but we need to recognise the phenomenal amount of overhead involved in replacing these 3 technologies. What benefit would be served when they can just be abstracted on top of, at least with regards to the web?

Case in point, Assembly Language, we could theoretically replace the standards that have been reached over decades of collaboration with something more suited to our modern leanings. But what would be the point, when we've long since abstracted it out of our minds?


Why invent a new standard? HTML is fine, CSS is fine. Most importantly, everyone understands it and can work immediately with it.

In order to render HTML, CSS and JavaScript, you need an entire web rendering engine. A new standard would let us get by with a lot less.


> In order to render HTML, CSS and JavaScript, you need an entire web rendering engine.

Getting to piggyback on V8 and Blink work is, I suspect, often a benefit rather than a cost in the eyes of developers of Electron-based editors. Sure, it's bigger resource load, but for use cases where the performance is acceptable, it's a lot less developer load to get the functionality out the door.


Until you have to reimplement even blinking cursors.


No, even after that I'm pretty sure it's a net win in developer time.


>Why invent a new standard?

Because they suck?


Care to argue why? Also, there are countless alternative languages to compile down to HTML or CSS available. Feel free to create your own if none agree with your personal leanings.


CSS has many weird corner cases where you have an issue and, after a lot of debugging, trying things, and getting mad, you find on Stack Overflow that you need to add an illogical rule like "min-width:0". So CSS is OK until you end up in a tricky problem where you need to understand how it works under the hood to fix it. The other issue is that CSS is too big and complicated: good layout modes get added, but we still have the old ones, and you need to understand everything because most developers work on existing code, so you hit all kinds of layouts: floating, absolute, relative. Flexbox layout seems better, but still worse than the layouts I have seen in MXML and WPF. So, ignoring JS (which you could replace), a new GUI for the web inspired by MXML/WPF or QML, with a sane subset of CSS, would improve the situation.


> you add an illogical rule like "min-width:0"

Yeah, like my favorite of these lately, widows and orphans. Chrome changed a default that affected inline-blocks within columns that caused them to wrap prematurely after version 52 or so. It is especially bad because the other popular browsers don't even support these yet and looked fine.


>Care to argue why?

Where do I start?

1) Designed for document presentation, not for apps.

2) Limited widget selection (native forms and that's it).

3) Different implementations between vendors.

4) Bad at layout (20+ years to get to Flexbox and Grid layout, which kind of resembles UI layouts but is still not supported everywhere).

5) Slow.

6) Battery hungry.

7) Too many unneeded UI layers (DOM over native widgets).

8) Extra language layer (JS on top of V8 on top of native execution).

9) JS is not the best language to write large-scale software in (to put it mildly).

10) Restricted access/integration to native platform APIs.


Do you want to listen to the reasons, or just tell them to "go make your own if you don't like it"? You can't have it both ways.

I agree that simply saying X sucks is not a valid argument. However, with CSS/HTML the flaws are too numerous and have already been discussed ad nauseam over the past 15 years. Every time a new version of CSS comes out, people go and try it and find out that it sucks, whether it's the broken box model or the broken float layout techniques or the constant fiddling you are forced to do to get anything working. Every web developer I've seen adopts the 'edit and refresh' trial-and-error model of development, which is the direct result of a bad spec. Which also explains why there isn't even a reference implementation. As far as UI layout is concerned I am fairly sure I could out-compete a web developer in terms of time taken to implement, using something like IMGUI.


> Which also explains why there isn't even a reference implementation.

As someone who has spent years implementing those standards, a reference implementation would not help me at all.

> As far as UI layout is concerned I am fairly sure I could out-compete a web developer in terms of time taken to implement, using something like IMGUI.

Dear imgui's layout model doesn't scale at all, due to the fact that it's immediate mode. It redoes layout from scratch every frame.


>a reference implementation would not help me at all.

Well, considering that the web has been for decades a minefield of subtle rendering differences based on different interpretations of the same standards, it would surely help others...

Also, I'm not sure what you're saying here. That, warts and all, you love the web as a programming platform?

Well, maybe you do.

But then again, you don't program everyday IN it. You program a rendering engine for it in Rust, so you're safely protected from the horrors of web programming.

>Dear imgui's layout model doesn't scale at all, due to the fact that it's immediate mode. It redoes layout from scratch every frame.

Aren't (or at least weren't) most computer games "immediate mode" too, and far more demanding than any web page?


> Dear imgui's layout model doesn't scale at all, due to the fact that it's immediate mode. It redoes layout from scratch every frame.

Which isn't a problem if your layout algorithm is fast/simple. If it's not, then it's a bigger issue.
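As a sense of scale for "fast/simple": a single-pass layout such as a vertical stack is O(n) in the number of widgets, so redoing it every frame is cheap at typical UI sizes. A minimal sketch (made-up names):

```typescript
// One layout pass of a vertical stack: each widget reports its height and
// the cursor advances. Called fresh every frame in an immediate-mode UI.
function stackLayout(heights: number[], spacing: number): number[] {
  const ys: number[] = [];
  let cursor = 0;
  for (const h of heights) {
    ys.push(cursor);
    cursor += h + spacing;
  }
  return ys;
}
```

Layouts that need global constraint solving (CSS-style) are where the every-frame cost starts to bite, which is the scaling concern being raised here.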


>As someone who has spent years implementing those standards, a reference implementation would not help me at all.

I don't quite understand what you meant your comment to be indicative of. That nobody else would consider it useful? That you personally see no value in reference implementations at all?

>Dear imgui's layout model doesn't scale at all, due to the fact that it's immediate mode. It redoes layout from scratch every frame.

Could you detail the UI you are thinking of where IMGUI is inefficient but HTML/CSS isn't? Not asking for a formal spec, just a general use case.

My problem with CSS/HTML is that productivity scales inversely when using them.


There was XUL and it used native widgets.

XUL is dead now, HTML+CSS won.


  > everyone understands it and can work immediately with it.
Only if by "understands" you mean "everyone is capable of throwing things at the wall and seeing what sticks". I have seen a lot of HTML and CSS, and let's just say only a small fraction looked like it was done by someone with an understanding of "what" and "why". Otherwise it was just tortured to the point of "somehow works unless someone changes something".


this person speaks truth


I've done some raw C Win32 GUI programming and I've done some modern electron stuff as well.

I know which I'd rather offer as my toolkit of choice for a project on which I wanted lots of people to contribute code to!


Please don't use Win32 as an example for all native UI coding; it's garbage. Consider Qt or anything else; even X11 looks good compared to Win32.


It would be better to compare it to transferring a state/screen over the wire/internet on a 486 vs. doing the same nowadays ;)


Well, an X server on a 486 was probably also faster...


Besides the X-server point, it's not just the state transfer that's causing web UI to lag.


Javascript is meh, it's HTML and CSS that are the real culprits for the crazy inefficient GUI rendering. HTML was great for what it was designed for, but we're ten years beyond that.

Something will come in and replace HTML, it's just a matter of time. The main driver is mobile. The many layers of abstraction burn battery, one day phone manufacturers will get tired of it and do something.


Something like XUL, XAML or Enyo? Wait for it.


Something like React Native that targets Windows/macOs/Linux rather than iOS/Android? I think that this would be a better way to deal with using Javascript as the dev environment, while at the same time passing on Electron.


By making a text editor out of web technologies, you can reuse all the web ecosystem the web has, and enjoy also its customizability.

For instance I wanted to be able to display PDFs directly in VSCode. I went to look for a plugin, and there was one. People simply used "pdf.js" to integrate PDF support into VSCode. Because it is web based, it should have been straightforward to do. Doing the same with native technology would have taken several weeks of coding, and it wouldn't have been cross-platform.

Imagine all the web-based open-source tools that could potentially be integrated into these editors. Integrating an SVG editor into native text editors would be a nightmare. With a web-based text editor you can potentially just incorporate an existing tool like [1].

There are lot of other examples like this: Live markdown preview, mini-map, color-picker, integrated VSCode debug panel, …

It is also quite easy to add visual stuff. For instance, adding a vertical bar at 80 characters in the background is quite easy to do with web technologies. On the other hand, emacs has still not managed to make html-mode work nicely with its "fill-column-indicator". It is also probable that it would be much easier to integrate web services (trello, github, …) directly within VSCode.

In the end, people who want performance already have quite a lot of choice in native text editors (vim, emacs, sublime…), and people who prefer functionality can go with web-based text editors (atom, vscode, …)

[1] https://svg-edit.github.io/svgedit/releases/svg-edit-2.8.1/s...


> Doing the same with native technology would have taken several weeks of coding, and it wouldn't have been cross-platform.

Actually, it would take one hour because you would use Poppler, and it would be cross-platform. It would also be faster than pdf.js.


> Actually, it would take one hour because you would use Poppler, and it would be cross-platform. It would also be faster than pdf.js.

I don't really know poppler. If there are Poppler bindings for the language you are developing in, I believe you that this is possible. This is only one use case though; I'm not sure you will find native libraries in your language for all the features that VSCode can offer almost for free, like live markdown preview for instance.


> I don't really know poppler.

Exactly. Web developers live in a bubble, and because they don't know much about native alternatives, they assume they don't exist.

Note: I'm a web developer and I don't know poppler.


I don't think the implication you're making here is valid. In Electron you're forced to use JavaScript or a compile-to-JavaScript language. In the desktop it's pretty much the same deal but with C. And we've been doing C FFI since way before JavaScript was conceived.


> For instance I wanted to be able to display PDFs directly in VSCode. I went to look for a plugin, and there was one.

You could have done that 20 years ago with COM components already. There was a thriving ecosystem where you could get a component for just about everything.

Similar technologies were available on other platforms than Windows, like Bonobo (Gnome), Kparts (KDE), and whatever Mac had.

It's a bit of a shame that component frameworks have gotten such a bad reputation (for complexity and insecurity), I think they are very misunderstood.


You also had to pay $100 per copy of the COM component. Most of the web stuff is free, and you can just right-click and view source. In NodeJS and NPM there are hundreds of thousands of components/modules that you can use for free.


>By making a text editor out of web technologies, you can reuse all the web ecosystem the web has, and enjoy also its customizability. For instance I wanted to be able to display PDFs directly in VSCode. I went to look for a plugin, and there was one

You know where else you could do the exact same thing AND have a 10x faster and 10x more memory/battery efficient editor?

If a native editor just gave you a webview that can run JS extensions...


> 10x faster and 10x more memory/battery efficient editor?

Except for the startup time --- which I can live with --- I never felt a difference in speed between native text editors and VSCode. There is one, but I don't notice it. Humans are much slower than computers, so as long as it is not taking more time than I need to notice it, I don't care.

Nowadays my computer has enough memory that I can afford not to care about the 116M that VSCode is currently using, especially compared to what my browser consumes, and given the wide range of features it offers.

Battery could be problematic indeed, but before looking at my text editor, I would probably first switch from KDE to i3, and then use a lightweight web browser. This would impact it much more than my text editor.

> If a native editor just gave you a webview that can run JS extensions...

But then you start losing all the faster/memory/battery benefits you mentioned.

Anyway, I'll happily try something like this if you develop it ;-)


> Battery could be problematic indeed, but before looking at my text editor, I would probably first switch from KDE to i3, and then use a lightweight web browser. This would impact it much more than my text editor.

This item is about VS Code consuming 13% CPU when idle, which is bigger than the difference in idle CPU between KDE and i3 (if you don't have fancy widgets, it's about nil).


Indeed, this bug is pretty nasty for the battery. I don't have it though, so my remark was about non-buggy battery usage. Web-based text editors consume more battery, but the difference is not significant from my point of view.


>Except for the startup time --- which I can live with --- I never felt a difference in speed between native text editors and VSCode. There is one, but I don't notice

Try editing anything larger than a simple program file (from a large JSON to a CSV, as devs often do) and you will. Try hex-editing a large binary. And many other tasks.


Earlier this evening I had to open a 144'000-line CSV file, 3.5MB. This is not big, but it is the kind of file I have to open from time to time for my work. No noticeable delay to open the file in VSCode, the cursor moves smoothly, and scrolling with the minimap through the whole 144'000 lines is also smooth. The VIM plugin for VSCode starts to have trouble over 10M it seems, but if I have to edit files bigger than that I can open them with spacemacs anyway. I spend most of my time in small text files like Python, CSV or Makefiles, so this is not a problem. Of course, if you have other requirements, like spending a lot of time in big files, then VSCode is not the correct tool.

I would suggest you try VSCode out of curiosity, if you have not done so yet. You could be surprised. Maybe it is not the right tool for you, but it is quite well made, and compared to Atom, Brackets or other web-based text editors, it feels like a supersonic jet.


Ema... Ok ok, I will shut up. :)


> Doing the same with native technology would have taken several weeks of coding, and it wouldn't have been cross-platform.

I wrote a PDF viewer in Java(FX) in about one evening using PDFBox. There are also PDF components for practically any other desktop app framework you care to name, most of them older and more mature than pdf.js.

The fact that so many devs express amazement at things considered utterly routine for decades is one of the reasons the entire web dev community is so often treated as a joke.

HTML has very few redeeming features as an app platform. It's way past time it gets killed by something better.


One of the nicest replies here. Thanks! :)


This is wrong. The world on the other side is so much greener.

https://wiki.qt.io/Handling_PDF#Using_QtPDF


I don't think this would have been any harder had it been written in C(++) for someone comfortable in C(++).

But I do believe there are more people comfortable with web technologies these days.


That depends on all the code running locally...


The web is in just a shameful state. Even with 300 mbps fiber internet most websites are not snappy. As in, pages are so slow to render that I'll start reading, but then lose my place when the page reflows as it continues to load. Or click on the wrong thing because the page reflowed while I was trying to click on something.

As far as I can tell I'm actually CPU-limited, because I noticed no real difference from when I had 50 mbps internet. This is on a quad-core Macbook Pro that boosts up to 3.5 GHz...


It takes a while to source all the ads, and the reflow that leads you to click on the wrong thing (e.g. an ad) is most likely intentional.


So you just assume that changing between 50 meg and 300 meg service actually should give you a 6x speed up during browsing? I think that's a very flawed assumption to make. Just because your connection is capable of a certain advertised speed doesn't mean you're getting that speed from any given server as you browse the Internet.


No, I'm assuming that because it didn't give me a noticeable speedup that bandwidth isn't the bottleneck. Also, the bottleneck isn't my connection out to the internet because I can get more than the advertised speed any time of day to speedtest.net servers. I suppose the bottleneck could be on the other end, but aren't these sites all hosted on major platforms these days? Like AWS/Google/etc.?


Building a plugin system is hard. Building one that allows creating complex UI elements, modifying other UI elements (from either the core app or other plugins) or changing the way literally anything is rendered is particularly hard.

You not only have to build the code that supports all this, you also have to create and document an API and/or markup format to build all this out, plus document all your internal integration points.

If you want other developers to really take it up and build plugins, you have to make it easy to get into, so that means not just documentation, but great documentation, plus tutorials, examples and tooling to help.

You get a big chunk of that for free when you use HTML/CSS/JS.

Fire up VS Code, go to Help > Toggle Developer Tools, and poke around for a few minutes. Imagine the amount of time it would take to build a similar experience to just this one aspect if you were doing this from scratch.


Or you could use Lisp, and have the UI be S-Expressions. Everything can edit a list


Can you give me an example of a Lisp UI library like that?


Two come close but aren't quite there...

Seesaw [0] - A nice-ish way of using Swing in Clojure.

Iup [1] - One of the friendliest GUIs I've used, hands down. Just feels like Scheme.

However, I'd expect that QML and X-Expressions could go hand in hand to make something much closer, with a bit more flexibility.

[0] https://github.com/daveray/seesaw

[1] https://wiki.call-cc.org/iup-tutor#hello0scm


> Fire up VS Code, go to Help > Toggle Developer Tools, and poke around for a few minutes. Imagine the amount of time it would take to build a similar experience to just this one aspect if you were doing this from scratch.

Don't do it from scratch then. You can use GtkInspector to poke around with any GTK+ application, by pressing Ctrl+Shift+I.[1]

[1]: https://wiki.gnome.org/Projects/GTK%2B/Inspector


> easy to get started

Yes.

> cross platform support

Yes.

> Because you can write plugins in JS?

Yes.

---

As much as I prefer native programs as a user, it's impossible to ignore the benefits of cross-platform development and plebeian hackability/debuggability.


>> cross platform support

> Yes.

WOW FINALLY SOMETHING that will run on my LINUX and FreeBSD!

ohh.. a lot of plugins don't support linux and it doesn't build on BSD?


Cross platform support means if you want support, you cross over to a supported platform.


Some years ago cross platform in Microsoft speak meant it would run on at least two of the following: a version of Windows, Windows CE, Windows Phone or Xbox. That cross platform now almost includes a non-Microsoft platform is progress.


Don't vim, emacs, Sublime, and IntelliJ IDEA work on FreeBSD?


(neo)vim, emacs and IDEA certainly do. Sublime is not natively ported, only under the linux compat layer.


Really? I haven't seen this at all... though I don't have a ton of plugins.


Define a lot. Every plugin I use on Atom runs on Linux fine.


> doesn't build on BSD

doesn't run on ZX Spectrum either


Why bother describing something as cross-platform if this is the attitude held?


you can do all of this in Qt, too. Without the overhead of the inner platform effect.


You can do it, but it's significantly harder. And with Electron, you can leverage the same skills that are used to build web applications to modify your environment and text editor as well. Those are very significant advantages.

Note: I don't use VS Code or any other JS editor, I use emacs. But I can definitely appreciate the major benefits of the architecture.


I build both web apps and Qt apps. TBH, doing something in Qt is about 1/10 to 1/100 the effort of doing it on the web. The web is a morass of confusing standards, none of which work well together. To get guaranteed, predictable behavior which is documented, and will continue to work properly for 10 years after deployment (with only minor maintenance), you simply cannot use the web.


So does it have a NoScript plugin to kill the unavoidable 200 tracking and ad scripts from Google, Facebook and who knows who else running in the background? I am sorry if I offend someone with this, but the current Web experience is something I want as far away from my dev tools as possible.


While your concern might be valid in general, this is applicable to native software just as much.


Tracking or ad scripts by Facebook or Google are in neither Atom nor VSCode.

Both projects are open source, if you care to verify.



You're asked while installing if you're fine with anonymized usage information being sent to MS for further development of the software.

It's a simple checkbox, and pretty much every actively developed project does this nowadays.


We have 'native' text editors. Sublime Text is very similar to VS Code in many ways. But with Sublime I never got up and running writing, running, and debugging code in various languages the way I have in VS Code.

The whole experience is important. For many, it's more important than the individual 'feature' of being light on resources.

I currently ignore the fact that VS Code is 'heavy' as text editors go, because it's so much lighter than e.g. Visual Studio, leaving me much more RAM free for some of the code I'm running to gobble up and use for its own ends.

I'd prefer lighter 'weight' - in terms of RAM and CPU usage - but I'm not tempted back to Sublime yet.

BTW I'm a vi person, so I'm using vi keybindings in VS, VS Code, Sublime - and anywhere else I can do so. I love Vim's speed, but I can't Get Stuff Done in it like I can in more modern editors. I mastered the keys, not the inbuilt windowing system, scripting language, etc.


I'm always confused when people say this; I've tried a lot of vim modes and, to a T, I've never found one that was satisfactory.

I really believe that if you find these vim plugins useful, you don't really use vim all that deeply. That's not a criticism, just an observation.


And that is OK for a Notes or todo app, but for a text editor used by developers, who tend to customize it with plugins and whatnot, I think that is not a viable option. But that's just my personal preference, maybe I am wrong...


VS Code IS a viable option. I use it on a daily basis and it works great.


Seriously, coming from Visual Studio VSCode is a breeze of fresh performant air.

Anecdotally, I was able to reproduce the cursor problem by minimizing/showing the VSCode window while viewing CPU usage DESC in Task Manager, but the effect was only ~2% usage for me (i7 processor).

If one invisible (from a UX perspective) bug is VSCode's big performance problem then I'll gladly let it eat away at 2% of my CPU until it's fixed.


Plus the fact that it's actually possible to use the profiler and debugger built into VSCode to profile and reflect upon itself, then drill down into its own live data structures, source code and css to discover and fix what was slowing it down.

I once wrote a visual PostScript debugger for NeWS, which I primarily used for debugging itself. [1]

[1] http://www.donhopkins.com/drupal/node/97

The PSIBER Space Deck is a programming tool that lets you graphically display, manipulate, and navigate the many PostScript data structures, programs, and processes living in the virtual memory space of NeWS.

The Network extensible Window System (NeWS) is a multitasking object oriented PostScript programming environment. NeWS programs and data structures make up the window system kernel, the user interface toolkit, and even entire applications.

The PSIBER Space Deck is one such application, written entirely in PostScript, the result of an experiment in using a graphical programming environment to construct an interactive visual user interface to itself.


I tried it too... wasn't satisfied with the performance. I used to install Atom every 2-3 months; when VSCode got released I then tried installing it every few months in place of Atom. I still do, but I always uninstall after a few hours of using it. It has many good ideas implemented well, but still not worth switching and sacrificing all of the performance for a nice git and debugging interface.


What's too slow for you? I recently switched to it from Vim running in a console (+ a bunch of plugins for IDE features) and it's actually more performant. It doesn't freeze when I run ctrl-p for one thing.


Well I don't run any plugins in (neo)Vim besides colorscheme, FZF and neomake, and I run neovim inside Terminal.app since it is much faster than iTerm2.

One situation: I open a 5k LOC file, scroll to the middle and bam, colors are there, everything is instant. In VSCode I open the same file: slight delay, opens the tab; I click on the middle of the side codetree: slight delay, and a few seconds for colors to draw. This is just one example. And there are those slight delays all over the place that I don't have with Sublime, emacs or vim.


VS Code is faster than Atom (by... a lot) tho?

Especially with large files, but just in general. Speed is the main reason I can't stay using Atom for more than a few hours. It's awful.

VS Code is snappier than Sublime Text ffs...


Wait, why? I use Atom on daily basis, don't have very strong machine, but I've never seen performance issues.


Compared with Sublime Text (what I used before VS Code) Atom was painfully laggy and slow.

Now if I was coming from a larger, probably Java, IDE? Yeah, I can see that Atom would look great.

Just wasn't for me. Glad you like it tho!


VSC is not even close to being as snappy as Sublime Text. Sublime also dominates when opening large files (2GB+) and searching through them.


On my Mac that just wasn't true.

Sublime had the edge in a couple of cases, but when opening large files (esp JS bundles) it crawled. Took over a minute to open one.

Same file in VSCode - maybe a couple of seconds. Maybe.


While I occasionally need to open large files like that, they aren't source code files, and I have no problem using a different tool than my main source editor for that.


Agreed, except for the domination; true on linux afaik but not on windows: working with the large files is ok but opening them takes ages. VSC is definitely faster there.


You are probably right. I've used Sublime only on Mac and Linux. On Windows I was using another great, native application - Notepad++. It was as fast as (or maybe even faster than) Sublime on other platforms.


My experience on the Mac too. Working with them was ok (mostly) - not great, but ok - but opening was very slow.


Nope, it's not snappier than Sublime.


shrugs

Is for me. Not by a lot, and not in every case, but overall? Is for me.


> but text editor that is used by developers who tend to customize it with plugins and whatnot...

There are a huge number of plugins for VS Code. It's built to be plugin-centric - most functionality is a plugin.


My two cents: you get what you pay for in your IDE.


That depends strongly on the IDE and the user. I used to pay JetBrains every month, but last July I noticed that I preferred VS Code to PhpStorm for essentially everything, so I stopped my subscription and uninstalled it.


Would you mind giving some examples, mostly for the higher tier of "paying"?


I pay a lot of money for my copy of IntelliJ.

...indexing...

...as I was saying...


Emacs is a little harder to write plugins for, but it runs natively and the only impediment is that more people know JS than they do elisp.


Lots of emacs functionality runs via its lisp VM. I'd hardly call that 'native'.

I mean, Emacs had jokes about being bloated decades before Javascript (and Java, another contender for these jokes) even existed. :)


Most of those jokes are completely out of date. "Eight Megs and Constantly Swapping" used to be a big deal, but today it's not.

I don't think running Elisp in Emacs takes away from it being "native." At least insofar as there are no popular text editors (that I know of!) which expect you to compile your plugins and macros to native code. They all have interpreters of one kind or another. What Emacs doesn't have, though, is an inner platform effect: Emacs is the platform, there isn't a second one underneath (no browser engine).


GNU Emacs Lisp code is certainly not native - not until a JIT will be widespread. GNU Emacs usually compiles Lisp code to a byte code which is interpreted by a byte code engine written in C.

If you use a Common Lisp based editor, like the ones Clozure CL, Allegro CL and LispWorks have, it doesn't use a C-based byte code interpreter. The Lisp code is compiled directly to native code, which makes editor extensions run as natively compiled Lisp code.

The advantage of the C-based byte code engine is compact code and improved portability - since a C compiler will already be provided with most platforms, whereas a native Lisp compiler is typically not something provided by a platform (CPU/OS/...) vendor.


I should be more clear. I don't think that just because Emacs contains an Elisp interpreter, we should call it a "non-native" application -- even if Elisp is integral to Emacs' operation.

If forced, I would say that Emacs-the-platform is native, and Emacs the system of Editor MACroS is not, since the macros run on the Emacs platform. But it seems kind of pedantic.


> I don't think that just because Emacs contains an Elisp interpreter,

The number of lines of Elisp my Emacs uses (counting plugins, but also built-ins) is actually more than twice the number of lines of C.

It's not that "Emacs contains an interpreter", rather it's "Emacs has all these features written in Elisp [...a looong list here...] Oh, and it also has an interpreter to execute it all".


The Lisp code is not natively executed.

Some features like memory management (garbage collection) are layered on top of the OS.

The UI is not 'native' - it's based on a portable substrate written in C/Lisp, which works both on WIMP and terminal systems.

The user interaction is not native (commands, buffers, undo, preference dialogs, window/frames, ...).

etc.etc.


I never said the Lisp code was natively executed. I'm not sure where you got that idea.

Let's just agree to disagree. We are talking past each other at this point. Best regards!


I think that the definition of "native" should take into account how the code interacts with the system.

Because compiling code to the native instruction set shouldn't affect the semantics: so why should that be the only yard-stick for "native", right?

What is semantically relevant is: how much of the platform is exposed to the programs directly, versus through abstractions.

Suppose a language like Emacs Lisp or Java or whatever has only thin wrappers around POSIX through which applications interact with the platform. Then those programs are quasi-native POSIX programs, really. They are doing things like fork, waitpid, dup2 and whatever almost directly. And suppose that in the Windows version of that language, programs use functions that mimic CreateProcess or WaitForSingleObject. Then, regardless of the language being interpreted, it's really a native programming language.
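To make the "thin wrappers over POSIX" idea concrete, here is a minimal sketch using Python's os module, whose fork/execvp/waitpid functions map essentially 1:1 onto the syscalls of the same name (illustrative only; POSIX systems, won't run on Windows):

```python
import os

def run(argv):
    # Spawn a command the "native" POSIX way: fork + exec + waitpid,
    # each a thin wrapper over the syscall of the same name.
    pid = os.fork()
    if pid == 0:  # child: replace ourselves with the command
        try:
            os.execvp(argv[0], argv)
        finally:
            os._exit(127)  # only reached if exec failed
    _, status = os.waitpid(pid, 0)  # parent: block until child exits
    return os.WEXITSTATUS(status)

print(run(["true"]))  # exit status of /bin/true
```

An interpreted program written against this interface behaves like a quasi-native POSIX program, which is the point being made above.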


Of course - I should have added that I don't necessarily share the sentiment behind those jokes (quite the contrary). For me they are just inevitable (social) indicators for "non-native" software (i.e. I'd say anything with significant code running through an intermediate representation or virtualization at runtime).

As such I simply found emacs an odd choice as an example.

BTW: Notepad++ uses .dlls as plugins. (Which doesn't necessarily make it a better editor than emacs :) )


Which suggests a nice corollary: the definition of "native" isn't fixed and changes with time.


Well, language is like that. :) But I don't think describing Emacs as "native" here is very different from how the word was used decades ago. To me, "native" is less about how the logic is represented (compiled vs. interpreted code) and more about its execution context. If the host system is an operating system, then the application is native; if the host system is another application, then it's not. I agree that the boundary is blurrier for applications that have scripting interfaces; but I think that most applications don't live very close to that boundary, and are clearly in one camp or the other. (Maybe Emacs is nearer to the edge: there's an old joke that Emacs is a great operating system, it just needs a better text editor.)

Having said all that, I don't think that "non-native" is a pejorative. I use Emacs, but would switch to a "non-native" editor if there were a good reason. I use IDEA (native or not? you decide!) when working on big Java projects, and Emacs for everything else, because Emacs affords me a lot of power that I find lacking in other editors. "Nativity" isn't really part of the equation, it's really about functionality.


GNU Emacs comes with its own portable execution platform (the byte-code Lisp execution engine), where Java applications typically use a provided virtual machine.

As such GNU Emacs is just as 'non-native' as a JVM-based editor.


Almost agree with you! If the Emacs VM were used for applications other than Emacs -- if it were a general purpose VM -- then I'd completely agree.


There are lots of applications written on top of GNU Emacs, but most of the time they use the specific UI and features of an editor, or integrate with the editor.

Example: the calculator of GNU Emacs. https://www.gnu.org/software/emacs/manual/html_mono/calc.htm...



The jokes about GNU Emacs being bloated are from a time when machines with 8MB RAM were modern. The original NeXT computer was introduced in 1988 with 8MB RAM... The Mac II from 1987 started at 1MB, maxed at 8MB... today phones have 2GB RAM.


I bought a phone with 6GB of RAM. It keeps getting bigger.


Not true. A big chunk of Emacs is written in C.


Well, you can write plugins in Python for Sublime Text and NeoVim (and more). JS and Python are both fairly uncomplicated languages when it comes to "plebeian hackability/debuggability".


> it's impossible to ignore the benefits of cross-platform development

Performance & bugs/quirks. I'd much rather have performance.


We have a lot of native GUI text editors. Gedit, Geany, Kate, and Notepad++ are free options, and then you have Sublime Text as a proprietary one.

They all support plugins and extensions, they are all super efficient in CPU usage, etc.

The thing is they aren't new, and because they are all C/C++ codebases developers wanting to add new features to text editors don't want to touch C++98 / ANSI C code from two decades ago.

Then you want to start talking about a C++17 / Go / Rust / etc text editor, but that is starting from scratch, and when you consider the time investment to develop the infrastructure of a text editor today vs just using Electron, the time investment makes less sense for hobbyist devs doing this stuff in their free time.


> when you consider the time investment to develop the infrastructure of a text editor today vs just using Electron, the time investment

...is exactly the same. No matter what gui toolkit you use, you still need to develop the infrastructure. Electron doesn't know how to handle keyboard and mouse events, it doesn't have a text buffer implemented, has no understanding of different text encodings, how to parse different languages, and draw different colored text accordingly, or format it, etc.


> developers wanting to add new features to text editors don't want to touch C++98 / ANSI C code from two decades ago.

As I understand it, sublime has tight enough integration with python, such that python can do literally everything you would ever need


CADT, all the way down...


> It just doesn't go in my head that we are building text editors inside a web browser!

I felt the same until I actually tried it out. That changed my mind: now I'll take any platform that those developers & contributors choose for their cross-platform products. Because, for an Electron app, this is a rock-solid 'old-school' "Visual Studio experience" (as in, fully as neat, smooth, helpful-yet-staying-out-of-the-way and somehow "ergonomic" as it has ever been since at least, oh, the late 90s, v6 or so). After a few years of sitting listlessly in front of subjectively inferior editors, I'm prepared & willing to give this Electron stuff more time to further mature, improve and speed up. There's no intrinsic reason it can't get there. Lots of seemingly native apps are just live Lua/Python interpreters under the hood with widget bindings in place of a DOM. In that case, well-engineered JavaScript (terribly time-consuming to produce & pretty rare out there for those who like to rely blindly on a huge pile of unscrutinized 3rd-party snippets/scripts um-I-mean "repos" --- but not impossible) can fully deliver the same, in principle.

Seeing how VScode took off, that could even propel MS to invest unprecedented energies & talent into rounding up the JS "rich client app" performance story further. Who knows.


Because Microsoft and Apple have both dropped the ball on their native UI toolkits. Right now I'm doing web frontend with TS+VueJS+a nice CSS toolkit. Yes, there's plenty of complaining to do about the fragile, convoluted toolchain and crappy performance. But I still gladly take it over WPF (promising, abandoned for some reason) or Cocoa (feels a decade out of date). No major OSS community, no Material Design/Bootstrap/etc. toolkits, slow develop/run loop, no good UI automation, crappy/no inspection capacity, etc.

I don't need x-plat UIs, I just need a good UI toolkit period, and that's why I'm looking real hard at Electron for future desktop work.


Cocoa has its issues, but I find it much more pleasant to use than anything based on front end web tech despite that. I find myself wishing I could use it on Windows and Linux.

I find many of the frustrations people have with desktop Cocoa come from the bizarre need to reinvent the wheel with a custom UI theme. If you stop fighting the system and instead go with a native look with well chosen accents, life is much easier. Native can look great with a little attention to detail.


You can also look at JavaFX. It's pretty good.


The current state of Javascript is very much, "just because you can, doesn't mean you should"


For sure.

Half the responses in this thread are people defending the stupid idea of "let's just use js everywhere, just because we can and fuck better suited tools".

Js is not the be all and end all of tools, just because it is used a lot in web doesn't mean it is the best (or appropriate) tool for other spaces.

It sort of seems like js/js ecosystem is built around hacky solutions to things, so I guess it makes sense that the community doesn't seem to see any issue with shoehorning in their language of choice into entirely inappropriate spaces.


Yeah! We should go back to native editors, like Eclipse or IDEA!


HN could use a "funny" upvote option...


Seeing the resources taken by a simple text editor, I crave to see what a full-blown IDE (which is what Eclipse and IDEA are) written with web technologies would eat up.


> Wouldn't it be better to make native application

Actually there are such editors. I use vim instead of vscode or atom. And I think my installation of vim is slower than vscode because of some plugin that I've not found yet.

Applications like vscode are very useful because they help find performance and other bugs in browsers, the same way browsers improved when gmail and sites like this appeared.

I would support the appearance of IDEs, large games, VR, and image and video editors in browsers, as they help improve the web platform.

If you don't like it, just use another option. It could be faster, but maybe not. Not sure that vscode is slower than Visual Studio for most tasks.


I switched from Sublime to Atom maybe 2 years ago. I'm completely happy with the speed. I see no disturbing lags and it works/looks the same on every platform. There is a great extension system and I could develop my own extension quickly if I ever needed to. I believe for the developers it was much more productive to create it with web technology, and therefore I don't see any reason why it shouldn't be done like that. I guess there are people that need much faster this or that, and as low a memory footprint as possible, but that is not the average user/developer. I believe for most of us these editors work well.


It just doesn't go in my head how so many people have trouble understanding why things like VSCode or Atom are popular.

They're sexy, they are powerful (extensions for everything), portable, and extremely easy to extend thanks to Javascript being pervasive. I don't know for certain but I'd also assume writing an Electron app is easier than writing a similar app in a lower level language.

Is it really that hard to grasp? Performance has to be perceptible by the average person for it to affect user base. I prefer WebStorm but I've had absolutely no issues using VSCode on my laptop - which feels even faster than WebStorm.


I've played around with VSCode and what it can do seems impressive, but I want to do one simple thing. I want to make the background black.

Every dark theme I can find makes the background dark gray, not black. I actually looked into what it would take to make a new color theme and I simply don't understand all the steps, and definitely don't want to deal with the hassle. It very quickly goes off into the weeds of TextMate themes (huh? Why are we referencing a Mac editor? Yeah, I know it's a de facto standard, but really?), editing XML (complete with hex codes for colors) and installing something called "Yo Code". Dude, I just want to change one friggin' color!

In every native app I've ever used, I can just go into the Options and make the background black, period.

Just because it's a programmer's editor shouldn't mean you need to be a programmer to make the simplest configuration.


I don't know about VSCode, but in Atom you'd just write one line of CSS.
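For reference, a sketch based on Atom's user-stylesheet mechanism (a `styles.less` file in your `~/.atom` directory; the exact selector may vary between Atom versions):

```css
/* ~/.atom/styles.less — user stylesheet, applied on top of any theme */
atom-text-editor {
  background-color: #000; /* pure black editor background */
}
```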


> Wouldn't it be better to make native application

Better in what way? The market has voted with their downloads, they don't agree that the problems with Electron apps are as bad as you feel that they are.


Are you kidding me? Look at the raw performance and benchmarks of Vim/Sublime/Emacs, compare them to VSCode and Atom and you will see. If you don't believe the numbers, then use both side by side, open a 250MB file in all of them and look at the screen.

And I still see native editors used more (for example the latest StackOverflow survey showed that Notepad++, Vim and Sublime combined are used much, much more than VSCode and Atom combined). I don't want my editor to crash mid-session, or have to write a bunch of gulp files and npm commands to do one simple modification.


Raw performance means nothing, it's just yet another metric that can be traded off in favor of other aspects that make up a good application. In VSCode's case, it was traded off in favor of ease of development, which spurred an extremely active and ever growing ecosystem of extensions. Was it worth it? The download counter says yes, because despite it being "slow" compared to other editors, the tradeoff is not even noticeable by most of its users.

This all boils down to the art of "it's good enough". Take game development for an example. You could write an engine from scratch using Vulkan APIs and all that jazz and run at 144fps@4k on a toaster. Or, you know, you could trade off the performance and settle for just using Unity and optimizing wherever possible. It's not as fast, but as long as the user is not frustrated by it, who cares? You just saved a lot of development time. Tradeoffs, tradeoffs.

Same thing applies here. The VSCode team did a damn good job of keeping performance just about over the "good enough" threshold of most of its users, <flamebait>unlike other Electron based applications</flamebait>. Of course, that threshold varies based on the user and his machine, but outright dismissing VSCode based solely on the assumption that editors cannot be written in html+js is simply short-sighted.


This might mark the first time in history that Emacs was trotted out as an example of a performant editor.

I say this as an Emacs user.


I am an Emacs user. Have you tried it lately? It is lightning fast compared to Atom/VSCode. It's not as fast as Vim, struggles with long lines, and all that, but boy was I surprised when I uninstalled VSCode and fired up Emacs after a week of using VSC.


> Have you tried it lately?

I would, but it's still swapping back into RAM.

(For real though: yes, I use it all day)


Compared to what kids these days use, it's as fast as a lightning bolt.


There's a lot of grass-is-greener comparisons happening. I'm a long-time Vim user and if I forget to turn off syntax highlighting before opening a large file it comes to a crawl as well.


Emacs takes 8-15 seconds to open on my new i7.

That matters when I just want to do quick edits.

Vscode takes 3.


It's not "Emacs" itself, it's plugins and Elisp libraries you have loaded. Emacs itself - the GUI, even including (optionally) blinking cursor - is actually quite fast.

To prove this, this command:

    time emacs -Q --eval "(kill-emacs)"
reports ~0.2 sec on my system.

My normal Emacs, just as yours, needs ~12 seconds to start up. But I made it this way by explicitly enabling and requiring things. I could, with some effort, get the time down to 5-8 seconds (byte-compilation and gathering autoloads in one place), and even back to around a second if I was desperate enough to try dumping the image of a running Emacs to disk (I did it once and succeeded, although the process wasn't pretty). I don't do this because I don't care that much, but the option is there.


Yes, this is true. But Emacs without plugins isn't worth much, and I'm too old to enjoy playing the configuration fiddle for more than a few minutes.


export EDITOR="emacsclient -a ''"


Unfortunately the group of plugins I use combined with running in windows just isn't stable enough to keep a server process running.

Admittedly this is a corporate laptop with all their virus crap running, so no editor is fast. But emacs in particular has very bad startup times for me.


For me Emacs opens up in about 2 seconds and it's ready to go and print text input in scratch buffer. I am on 2015 MacBook Pro 13" with i5.


That's been my experience as well, although that's still too slow for my tastes as a vim user.

But I definitely don't ever recall it being 8 seconds, that feels like either an exaggeration or someone working on a potato.


> that feels like either an exaggeration or someone working on a potato.

Haha, no. It depends on what you use Emacs for. Remember the old joke about Emacs being a great OS? The reality is that Emacs is a great computing environment for almost anything that deals with text and even for some completely unrelated things. If you want a cross-platform GUI then Emacs Lisp may be one of the choices available.

This caused "plugins" - Elisp applications - to flourish and over the years a lot of code was written. Long story short, I have ~640000 (not a typo) lines of Elisp in my ~/.emacs.d/ alone, not counting the built-in Elisp libraries. It takes time to load that much code, even if it's byte-compiled beforehand.


fair enough, I've never been able to stay with emacs long enough to collect the plugins.

The 1-2 second startup was slow enough that I couldn't even stay with the Emacs evil mode.


For me, it takes <700ms, but that's because I leave it running all the time and call emacsclient from the command line.


There is something wrong with your system.


I usually never deal with 250MB code files. The right tool for the right task. The Electron programs deal with code in a directory (and btw, VSCode is a totally different beast than Atom), and I also would not edit a 250MB file with either of them. Never had VSCode crash on me here... but I always have better Python, HTML and JS support than e.g. in Sublime with good plugins.


Clearly a lot of people don't care if their text editor is using "a lot" of memory or "doesn't benchmark well" because all they're doing is writing code and running it occasionally.

There's plenty of competition in this area, so if electron-based text editors have enough downsides, people will use something native (as your comment indicates). If that wasn't the case, and we could only choose from electron text editors, then we would have a problem.

I think we should look at performance and resource usage as features on the same level as other features. Those things have to be balanced against whatever else the tool is bringing to the table.

I used Atom for a while. But as my projects got bigger, I got to the point where I was bothered by its slowness, crashes, and choking on large files (not even that large, honestly). I find sublime much better in these areas, so I switched. Looks like a healthy ecosystem to me!


I am sure something like Visual Studio of any other IDE is better suited for a large project.


>If you don't believe the numbers then use both side by side, open 250MB file in all of them and look at the screen.

Maybe I don't have any 250MB files to open?

If VSCode doesn't fit your use case, don't use it. There are innumerable alternatives. But what purpose does it serve to tell the rest of us (who like it) that it sucks?


I don't know what you are arguing about, I didn't say Electron apps are as performant as C++ written editors. I said that users, as evidenced by their downloads, don't find this to be as big of a concern as you (and many other HN commenters) do.


And xe said in turn, which you completely overlooked, that at least one survey didn't bear out your claim about usage at all. Rather than ignore that inconvenient point, you could have countered with what data you actually have on text editor downloads.

You also appear to be falling into the developers are not users trap.


Raw performance doesn't matter for a text editor nearly as much as it used to. Unless you are using a toaster oven to code; then I guess raw performance would become important.

but hey, at least you can write code on a toaster oven, eh?

All joking aside -- I use VS Code on a 7 year old laptop, and it doesn't lag, or stutter at all. Performance is just fine. There comes a point where the hardware greatly outstrips the requirements to the point where it doesn't matter if the resources being used seem 'too much' for what it does.


VS Code and Atom aren't really comparable for performance. VS Code is much, much smoother and closer to the experience of Sublime.

> For example, the latest StackOverflow survey showed that Notepad++, Vim and Sublime combined are used much, much more than VSCode and Atom combined

By this metric Notepad++ and Visual Studio (not Code) are the best editors, because they top every category (except Vim for Sysadmin / DevOps). If you look at the "Desktop Developers" tab, Visual Studio Code is actually in 3rd place behind Visual Studio and Notepad++, with Vim and Sublime a few rows down and Atom even further.

There's no way in hell Visual Studio (not Code) is faster than Vim, but how come it dominates it in all but one category?


You should also sum all the percentages of the IntelliJ-based IDEs there; then VSCode is one place further down.


My whole point is that "most used" is a terrible, terrible metric for anything except for most used.


Not even for the market "voting with their downloads"?


Ctrl+F "voting with their downloads"

1 post found.

Oh, this comment.


Are you really comparing the number of people with VS Code to the number of people with Sublime, VIM, NotePad++, or Emacs and saying VS Code is greater?


Unless you enjoy the sound of your fan and battery life of only a couple hours, 13% CPU to blink the cursor is a problem.


Wouldn't it be better to make a native application, especially for code editors, where developers spend most of their time and where noticeable lag and glitches are not appreciated?

I agree. However, as someone who has used Visual Studio (the one that costs $$$, not VSCode), which is a native application (AFAIK --- it probably has some .NET and web components too), I can attest that even native applications can be extremely resource-consuming and slow.


Your question seems backwards. VS Code is an absolute dream to use, for me at least - powerful, hackable, and performs great. If there are native editors that leave it in the dust, what are they? If there aren't, surely it would make more sense to ask why that is, rather than asking why Code isn't native?


I really like that I can edit my editor. A few times, I've disliked how something looked or wanted to add a feature. So I tweaked the stylesheet, or wrote an Atom plugin.

There's a beautiful poetry to being able to do web dev inside a web application.

I still have Sublime text for when I need to view/edit files with tens of thousands of lines, since Atom chokes on large files, but otherwise I have zero regrets :) I'm as productive in Atom, and it's a more pleasant experience.
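For what it's worth, an Atom plugin of the kind described can be tiny. Here is a hypothetical sketch (the command name and the transform are made up for illustration; `atom.commands.add`, `atom.workspace.getActiveTextEditor`, and `editor.insertText` are the real Atom API, shown only in comments since they need Atom to run):

```javascript
// Pure transform, kept separate from the editor wiring so it is trivially
// testable outside Atom.
function upcaseSelection(text) {
  return text.toUpperCase();
}

// In a real Atom package's main module this would be wired up roughly like:
//
//   atom.commands.add('atom-text-editor', 'demo:upcase', () => {
//     const editor = atom.workspace.getActiveTextEditor();
//     if (editor) editor.insertText(upcaseSelection(editor.getSelectedText()));
//   });

module.exports = { upcaseSelection };
```

Styling tweaks are even simpler: Atom loads the user's `styles.less` on top of its own stylesheets, so changing how something looks is ordinary CSS.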


> Wouldn't it be better to make native application, especially for code editors, where developers spend most of their time, where every noticeable lag and glitches are not appreciated.

There are plenty of native-application (or JVM/CLR) text editors and IDEs, too. For lots of usage patterns, the browser-engine-based ones have acceptable performance, and the number of people with expertise building for web contributes to the speed of development on those editors and their plugins.

But, sure, if you don't like Electron-based editors, there are plenty of other actively-developed editors for you to choose from.


VSCode runs nicely on my MacBook. It doesn't feel like a web application to me, in the same way that IntelliJ IDEA doesn't feel like it's written in Java. I'm sure that both VSCode and IDEA could run even faster if they were ported to, say, C, but in both cases that would be a large investment for improvements that I wouldn't even detect.

So I guess it makes sense to optimize for maintainability in this case, i.e. not having one codebase per native OS.


> Wouldn't it be better to make native application, especially for code editors, where developers spend most of their time, where every noticeable lag and glitches are not appreciated.

I think it'd be better to improve Electron so that native applications don't have so much of an advantage. WebAssembly is a big part of that. Another useful part would be an alternative layout mode that eschews legacy HTML/CSS cruft, for more predictable and performant GUIs.


Most of these don't support it (yet?), but I personally love the idea of having my editor of choice on the web with all my settings and with no install/permission issues.

The real reason to me though seems that it's just the easiest way to make a cross-platform UI. And in this web powered world, everyone knows how to code html/css, so why relearn a bunch of new tools?


Agreed. If you're looking for easy extensibility, a simple core and a flexible GUI, I can't imagine what could put VSCode above emacs, except for the button marked "sort by CPU usage"


Because if we don't build it inside a web browser, we'll have to add plugin support to add a web browser inside of it. Then you have two problems.


Actually then, in accordance with Zawinski's Law, one will be able to use that web browser to read mail. (-:


You could write a native text editor which can use JS plugins. Sublime Text uses python plugins and is nice and snappy.
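A minimal sketch of what that could look like: a (hypothetically native) editor core exposing a tiny JS plugin hook API. All names here are invented for illustration; the point is only that the plugin surface can be small while the core stays fast.

```javascript
// Hypothetical plugin host: plugins register callbacks against editor events,
// and the core threads the buffer text through them.
class PluginHost {
  constructor() {
    this.hooks = new Map();
  }
  // Register a plugin callback for a named event.
  on(event, fn) {
    if (!this.hooks.has(event)) this.hooks.set(event, []);
    this.hooks.get(event).push(fn);
  }
  // Run all callbacks for an event, piping the payload through each in turn.
  emit(event, payload) {
    let out = payload;
    for (const fn of this.hooks.get(event) || []) out = fn(out);
    return out;
  }
}

// Example plugin: strip trailing whitespace on save.
const host = new PluginHost();
host.on('will-save', text => text.replace(/[ \t]+$/gm, ''));
```

Sublime's Python plugins work on a broadly similar event-callback model, just with the host written in C++.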


> Wouldn't it be better to make native application, especially for code editors, where developers spend most of their time, where every noticeable lag and glitches are not appreciated.

The lesson here is that our assumed bars of quality for what makes a text editor good are inaccurate. It turns out that the minimum performance bar is lower than you think, and that ease of customization is, in fact, much more important than you think.


I switched to VS Code from WebStorm, for performance reasons. Building a native editor/IDE is a great idea. But since a lot of options in this area had already chosen some cross-platform toolkit for development, why not build it as an Electron app?


I used to think the same, until I started using VS code and it completely blew me away!

I don't care if it's written on top of a browser or in assembly. I just know it's the best text editor I've used so far.


Well, if you've ever built a cross-platform native desktop application, you will appreciate Electron.

Java Swing. Never again.

edit: Do people downvoting even know what it's like building cross platform desktop applications using Java Swing? It's fucking awful, and that's a fact. Even the end result UI design look & feel is butt ugly. Sure you can spice things up with JavaFX but why? Do you not realize how masochistic it was back then vs now with thin web browser clients? Can't believe people are still thumping Java Swing in 2017.


Swing was at least meant to do UI. Web stack is not; webapps are essentially a pile of ugly hacks on top of a document rendering engine, and it really, really shows - especially when you have webapps pretending to be native (e.g. webview-based apps on mobile).

Also, for Java there's JavaFX (a de-facto standard UI toolkit for Java), which is very nice to work with.


> Swing was at least meant to do UI. Web stack is not;

The ancient history of the web, clearly, is different, but the modern web stack, both in terms of specs (WHATWG HTML/W3C HTML5 and related standards) and modern browsers are very much engineered for applications, not just classic documents, as a primary use case.


When you find yourself having to reimplement a blinking cursor, that's when you know you are working on a shitty tech stack.
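To be fair, the cheap way to do it is only a few lines; the cost reportedly came from a CSS opacity animation that forced a repaint every frame. A hypothetical sketch of the timer approach in plain JavaScript (`cursor` stands in for a DOM element; the stepped-timer idea matches the eventual fix, but this code is illustrative only):

```javascript
// Toggle the cursor's visibility on each tick instead of animating opacity.
// Driven by a coarse timer, this repaints twice a second rather than ~60
// times a second for a keyframe animation.
function makeBlinker(cursor) {
  let visible = true;
  return function tick() {
    visible = !visible;
    cursor.style.visibility = visible ? 'visible' : 'hidden';
  };
}

// Wiring in a browser would be: setInterval(makeBlinker(cursorEl), 500);
```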


People who've spent a long time on traditional client-server architectures view JavaScript as a toy and see all the JavaScript frameworks as "over-engineering".

Yet what they fail to realize is that they are comparing a web page to an application. An application contains information about the state it's in. Handling application state has always been complex, and two-way coupling to the presentation layer makes it even harder.

The latter is why JavaScript is perceived as "over-engineered" by people who view it as nothing more than "fairy dust on ugly document pages", which is absolutely the wrong way to look at application development.

I used to be the biggest JavaScript skeptic, but once I realized the intent behind React/Redux/Vue.js, it changed my perspective and I started treating it with more weight and respect.

Once that paradigm shift happened in my perception, I found it a lot easier to navigate and endure the fragmented tooling and endless variations of npm modules.

Because I realized that it's going to get better eventually and it's here to stay. Young people aren't learning Java and using Maven on Eclipse or NetBeans anymore. They are in Atom or VS Code, writing JavaScript.


Why was this downvoted? I'd expect a rebuttal instead of just silent downvotes. This just solidifies my opinion: there are a lot of dinosaurs on HN, and they are going to find themselves unemployable without JavaScript in the future.

JavaScript is essential knowledge, along with AWS. Software engineering has changed in the past 20 years, like it or not.


And yet that hacked-together document rendering engine manages to be less painful to use than the current crop of UI toolkits.

I think we'll get there eventually, but until a native toolkit presents an interface as easy to use as the web, developers are going to take the path of least resistance.


> I think we'll get there eventually, but until a native toolkit presents an interface as easy to use as the web, developers are going to take the path of least resistance.

Like Qt did with QML? http://doc.qt.io/qt-5/qml-tutorial1.html


Yeah, but Qt has licensing costs for commercial use...


Qt is LGPL, so unless you need to link to it statically or make proprietary modifications to Qt itself, you don't need to pay licensing costs.


Ah right, I wasn't aware of that.


Qt is (largely; a couple of optional components are GPL) LGPLv3, so can be used commercially without getting the commercial license.


Oh, lets be professionals and not pay for our tools like everyone else does, who needs money anyway!


Less painful according to who? From reading this thread it appears that most devs claiming it's incredibly easy have never actually worked with other UI toolkits at all, so their experience is largely worthless.

I've done web UI dev. I've also written code using GTK, Qt, Windows, Swing and JavaFX. A good modern UI toolkit like JavaFX or Qt blows the web stack out of the water on almost every metric. Developer productivity, correctness, speed ... you name it.


Intention is in the hands of the builder, not in its inherent design.

> webapps are essentially a pile of ugly hacks on top of a document rendering engine

That's your own opinion. You claimed Java Swing & FX were the correct way and that thin clients like Electron are wrong. I disagree.

If Swing was meant to do UI then it's probably the most awful and inefficient way to do it.

The web may not have been meant to do UI, but it's faster and more efficient to work with than Java.

It's a matter of opinion, but with starkly different development experiences. Sure, you can build using Java Swing/FX, but you are going to get a completely different demographic and developer culture... one that is still ingrained in the era it was released.


Why?

With JavaFX where you can customize it using CSS it looks nice, and you don't need to use a scripting language to do it.


Why use JavaFX when you can change the CSS in a web app which is indistinguishable?

Here's an easy way to get the job done, but people refuse to do it because of philosophical/ideological indoctrination, e.g. "the world is made of objects, therefore our languages and how we build software should mirror it."


I was extremely skeptical of using Swing for a really large hospital application. But after working in it for quite some time, I have to say that with the right approach it is quite manageable. There is plenty of power in there and a lot of good things can be done.

(I do prefer the webstack over Swing)


> Even the end result UI design look & feel is butt ugly.

Only when developed by those devs that never bothered to read books like "Filthy Rich Clients".


Is the IntelliJ UI "butt ugly"? No, but you'll probably say it is as to not contradict yourself. Blame the craftsmen not the tool.

>Even the end result UI design look & feel is butt ugly

You know what else is butt ugly? Programmer art and UI.


No, you are correct. It is insane.


I think their bet is that the web stack won't be "so high up the stack" in the near future.


Have you used VS Code? Feels much more performant than any native editor or IDE I've ever used.


What native editors have you used?


Primarily Sublime Text, Notepad++, PhpStorm and NetBeans.


That's an unusual experience. Based on a handful of benchmarks[0] done by the author of JOE, VS Code is sometimes an order of magnitude slower than Notepad++ and Sublime at some tasks.

I use VS Code pretty much exclusively these days myself, so I'm not picking on it by any means.

[0]: https://github.com/jhallen/joes-sandbox/tree/master/editor-p...


Well, I haven't benchmarked it so it's just how I feel. Maybe there's something in the UX that makes it feel more performant.


I have the same experience on my pretty slow laptop. Granted, my projects aren't big, but I would take vs code over sublime any day. Interestingly, atom feels much slower compared to both.


That's because, unequivocally, atom is slower.

https://pavelfatin.com/typing-with-pleasure/


Surprising part isn't that atom is slow, it's that vs code isn't and they both use electron.


PhpStorm and NetBeans are not 'native'.


Care to elaborate?


There's nothing to elaborate. They're both Java/Swing apps.


Thanks. That's what I meant. I thought that counted as native.


Nope, it doesn't. Although it's probably a good indication that the hairsplitting over 'native' and not is not as important as this thread might make one think.


I guarantee that is virtually impossible.


Maybe he was hosting his native editor via an X server over dialup.


I have used it 6 times, for a 1-week period each time. The last time I tried it was a month ago.


The benchmark you linked is outdated; I reran those tests just now with VS Code and could perform all the open, edit, and close operations in the test in under 2s.


I believe making native application for the three platforms would be considerably more effort.


Have you tried VS Code? Your arguments about performance seem theoretical, but in my experience I have run into none of what you're describing, on a 5-year-old MacBook Air. 14% CPU at idle due to a cursor is a bug that will be squashed. The software experience, in practice, is quite impressive.


Actually, I find the consistency and simplicity of the well-established, widely supported, and rich third-party ecosystem around HTML5/CSS/JS-based UI very liberating, versus the incompatible mess of native UIs. Personally I script all front ends in web technologies irrespective of platform and back-end tech. The browser rendering engines and JS engines are reasonably fast on most platforms.


As someone who is literally building an IDE in Electron, the biggest reason is JavaScript itself. If you look at the Stack Overflow yearly survey, you can observe that JavaScript is currently the most used language. Also do not forget about all the integrations you could do with, for example, devtools.

An additional feature is that you can run the IDE in the web browser, so that you can have an online code editor. Think for example about configuration files on the Azure website, or a cloud IDE with multiple people using it.

Honestly, I think Visual Studio Code is not slow enough for people to switch. A long web page is very similar to a long code document; all the keywords are just spans with a colour. Furthermore, it is not like Visual Studio or IntelliJ are known for low CPU usage or smoothness.
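The "keywords are just spans with a colour" point can be shown with a toy highlighter. This regex version is only a sketch (a real tokenizer must handle strings, comments, and nesting); the keyword list and class name are arbitrary:

```javascript
// Toy syntax highlighter: wrap a few keywords in styled <span>s, which is
// essentially what an HTML-based editor renders for each line.
const KEYWORDS = ['function', 'return', 'const'];

function highlight(line) {
  const re = new RegExp(`\\b(${KEYWORDS.join('|')})\\b`, 'g');
  return line.replace(re, '<span class="kw">$1</span>');
}
```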


The SO survey (while super interesting!) conflates "most used" with "most asked about on Stack Overflow"; I'm skeptical of anyone who claims to know which programming language is the most popular. The TIOBE index, which has its own problems, has Javascript coming in 8th, with the top 4 being Java, C, C++ and C#; my experiences and my confirmation bias suggest that's more accurate.

https://www.tiobe.com/tiobe-index/


I agree that tiobe is more established, especially as they look to companies being involved and to the job market. One of the reasons why I am interested in Javascript is that many people are learning it, even non-developers. Think about people without education.


> Think about people without education.

I don't want people without education building my IDEs and other software I might rely on.

The VS Code team have done a great job, and this bug will die. But, I'd say it's more in spite of the platform, than because of it.


JavaScript is literally the reason I would avoid a platform. It's only the de facto language for the web because you can't directly run any other language across different browsers. With web apps being so popular, having a low barrier to entry, and being "quicker" to provide half-assed cross platform apps, it's no wonder JavaScript is technically so popular. Hopefully, web assembly or something similar can change that. But then again, web browsers weren't meant to host proper applications.


I personally wish Github would incorporate a full-feature Electron-based IDE into their system; they do have an online editor, but it is fairly simplistic. Good enough for quick edits, but I wouldn't want to dev in it (then again, I started my professional career in software development sitting in front of a VT220 using a line-based editor; it got bad enough that me and my mentor had a "contest" on who could write a better full-screen code editor - honestly, we both won in a way).


I think you have answered it yourself partly. Creating a highly configurable, UI/UX-predictable cross-platform editor is not an easy task. If you start from scratch on a native platform you have to write a ton of code (pretty sure more than VSCode has) just to get started. Editors on web platforms can be modified with CSS, JS (or TypeScript, as in VSCode) and HTML. Try to be that configurable on a native platform... you literally have to build something equivalent to CSS, HTML and JS to reach that level of modularity. It's good to stand on the shoulders of giants and start there ;)


Pray tell, sir, have you heard of the Qt library? There is a whole world outside Javascript :)


So you think hacking Qt libraries like this (and not from the outside, I mean internally) and using C++ would make a good community editor? http://doc.qt.io/qt-5/qtwidgets-richtext-syntaxhighlighter-e...

When using Qt for a highly modular editor, be prepared to code Qt components from the lowest level. It's not like you can take a Qt widget and modify it in a simple way. Trust me.

If you think it is easy to write editors, look e.g. at the people who write letters and their custom editor tool: Microsoft Word. Now look at the many competitors this program has had and how many behave super speedy on all platforms.


I've written low level GUI components for several UI frameworks before, thank you very much. It's not that bad and a lot smoother for the end user.

The real problem is this new generation of "developers" that only know Javascript. When all you have is a hammer...
