Show HN: Lite – A small, fast text editor (github.com/rxi)
607 points by rxi on May 9, 2020 | 256 comments

Looking at the screenshot in full size [1], it looks really cool but the text is visibly blurry (look at the = signs for example). Text rendering is hard, and especially for a text editor it's a good idea to just use the current platform's text rendering instead of rolling your own. Of course SDL "rolls its own" because it focuses on being exactly the same on every platform.

SDL seems to offer font hinting which would somewhat solve the immediate problem, but I'm not sure it's being used properly here. With that said, text rendering is optimized for the current device's DPI, so maybe I'm just reading too much into a screenshot taken at a different DPI.

[1] https://user-images.githubusercontent.com/3920290/81471642-6...

SDL has hinting because it uses FreeType (via SDL_ttf). But this program doesn't use that; it uses stb_truetype, which does not support hinting.

Yes, the rendering implementation uses gray-scale antialiasing.

But the system uses ClearType: compare the rendering in the window caption (rendered by Windows) with the text inside the client area.

Gray-scale may work on high-DPI monitors, but on typical monitors it will be blurry.
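The grayscale-versus-ClearType distinction above can be sketched in C. This is an illustrative model, not any real rasterizer: `coverage` is a hypothetical 1-bit glyph sampled at 3x horizontal resolution, with a vertical edge placed at subsample 7.

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical 1-bit glyph coverage, sampled at 3x horizontal resolution;
   a vertical glyph edge sits at subsample 7 (all values illustrative). */
static int coverage(int x3) { return x3 >= 7 ? 1 : 0; }

/* Grayscale AA: average the 3 subsamples into one alpha per pixel. */
uint8_t gray_pixel(int x) {
    int s = coverage(3 * x) + coverage(3 * x + 1) + coverage(3 * x + 2);
    return (uint8_t)(s * 255 / 3);
}

/* Subpixel (ClearType-style) AA: one subsample per color channel.
   Real implementations also low-pass filter to reduce color fringing. */
void subpixel_pixel(int x, uint8_t rgb[3]) {
    for (int c = 0; c < 3; c++)
        rgb[c] = (uint8_t)(coverage(3 * x + c) * 255);
}
```

The edge pixel gets a single 2/3-gray alpha under grayscale AA, but three distinct channel values under subpixel AA, which is where the extra apparent horizontal resolution (and the color fringing) comes from.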

But Apple and Microsoft still use ClearType even on high-dpi monitors.

For that matter, Sublime Text, which uses a similar architecture (but Python instead of Lua), uses ClearType.

Apple removed all subpixel antialiasing in Mojave [1], and on Windows it isn't used by any 'modern' app from UWP onwards, AFAIK.

[1] https://arstechnica.com/features/2018/09/macos-10-14-mojave-...

well, apple certainly de-emphasized subpixel antialiasing.

i am still using a non-retina monitor for all of my day-to-day work. when i first installed mojave, it made my display look so bad that it gave me headaches. so i wiped my mac entirely and re-installed the previous version of macos. later, i learned that subpixel anti-aliasing is still there, it just takes some fiddling to get it back.


eventually, i will have to get with the program and buy a retina display. but i am thankful for this loophole that allowed me to put it off for a while.

I just purchased an HP Z27 (4K 27”) because I couldn’t deal with the low-res monitors I had at home. The high quality monitors in my office had really spoiled me.

I’m so glad I did. Night and day difference.

It has become a challenge to pick a sane default for text rendering. Many users are still on really bad LCD monitors with low contrast. MacBooks also do 2x scaling. On a 4K monitor on a PC I recommend fractional scaling, e.g. 1.5x, but as an app dev you can't control that. So do you increase the text size to make up for the small pixels, or do you leave that up to the user?

Mea culpa, thanks.

> But Apple and Microsoft still use ClearType even on high-dpi monitors.

This isn't true.

I don't think Apple uses ClearType and OS X defaults to no AA on retina displays.

Yet, there is no support (yet?) for RTL languages.

I have just downloaded it too and witnessed this issue: the text looks weird, a little bit blurry. It would be perfect if it looked like Sublime Text 3 or like this image: https://i.ibb.co/qYCpx3x/1588859182372.gif

If you compare the SDL-rendered text to the window chrome (OS-rendered), they appear to be similar. It just looks like this is a screenshot of a low DPI display. Nonetheless, peeking at the repo's issues shows that high DPI displays haven't quite been figured out either:


That could possibly be a result of the image compression as well, no?

Shouldn't PNG be lossless by default? (You could do a lossy pass to optimise, but in that case you're probably better off using JPEG?)

Sites like github often "optimize" images to lower bandwidth costs. Basically any platform you share images on that is at scale will probably re-compress images. For PNG they often reencode the image in a lossy way.

GitHub doesn’t “optimize” your images, just as it doesn’t format your code. People would storm the barricades if GitHub changed their files.

They do change files that are embedded in readmes, just like most web platforms do. The original file is unchanged, but the version embedded in the readme preview on github.com is changed.

They just proxy it on the fly with camo: https://github.com/atmos/camo

If you are interested in small Lua-based text editors, I advise you to take a look at Textadept: https://foicica.com/textadept/ . Specifically as to minimalism: "Relentlessly optimized for speed and minimalism over the years, the editor consists of less than 2000 lines of C code and less than 4000 lines of Lua code." I believe this is a limit that the editor's author self-imposed and keeps to it with an impressive strictness.

edit: though it uses Scintilla for the "engine" and I would assume its LOC count is not counted towards this limit.

You might also be interested in kilo: https://github.com/antirez/kilo

Or, for something really minimal, two kilo - https://github.com/moon-chilled/Two_Kilo/blob/master/two_kil...

I forked that to add support for Lua-based scripting, multiple buffers, etc. Fun exercise:


Hunter x hunter reference?

No, it was just the obvious "kilo" to "ki-lua" transformation!

Textadept is great. It's my daily driver. That's mostly because I was looking for something lightweight that I could configure to operate similarly on windows and mac.

Textadept lacks a "folder view" and an "open folder" option; if it weren't for that, I'd use it.

For now, lite also lacks "open folder", but at least it shows a tree view of the current directory.

Textadept is also very extensible with little effort.

Also "zile"

Textadept is great; it can handle really large files where most other editors crash or die.

Just tested that premise with a 300MB text file... Kate loaded it no problem in about 3 seconds; Textadept didn't load the file and gave an error.

From Textadept's FAQ:

Q: Why can’t Textadept handle HUGE files very well?

A: Textadept is an editor for programmers. It is unlikely a programmer would be editing a gigantic log file. There are other tools for that case.

I can't speak to the FAQ; I guess by 'gigantic' they mean tens of GBs. I use it for what I consider large files all the time. I just opened a 2GB file to check I am not going mad, and it worked just fine. It took around 10 seconds to open, so I agree it's not the quickest to start, but at least it does. I'm running Textadept 10.7 on a decent Linux machine.

It seems a lot faster than most.

But what it still doesn’t replace is my favorite editor, EmEditor [0] (Windows only). Like every alternative I’ve checked out, Lite blocks for a long time when opening a multi-GB file. It isn’t FOSS and is probably more expensive than any other text editor, but I’d love to know what they do to get such superior large-file performance (the free version is fast; the paid version even supports streaming-loading of parts of the file).

[0]: https://www.emeditor.com/

That's only because it doesn't do what editors have to do. Try writing some right-to-left text in lite and see what happens.

Wow, thank you for mentioning EmEditor. I used to keep it as a must-have app when I used Windows; I preferred it over UltraEdit, EditPlus, etc. Good memories. Nowadays it's all VS Code, though.

Since there are a lot of text editors, I'd like to see more detail on the motivations for yet another one. How does this compare to the current top-five open source editors?

The resident memory was 10MB on my Windows machine when I ran this. A strong reason for me to use this would be on a low-powered device. I want to write Markdown in a tiny editor like this. I want to run multiple projects side by side, and this is good for that. Or I might be a beginner trying to do dev on a machine with less than 4GB of RAM. So many more examples come to mind; I'm pretty sure there are more.

Also, the immediate-mode UI library from the same author is very good, and the examples are amazing.

1. https://github.com/rxi/microui

2. https://floooh.github.io/sokol-html5/index.html

On my GNU/Debian testing 64-bit system, with 10 tabs open it consumed 31 MiB of RAM, and my CPU peaked at 0.75% at its busiest while typing.

To me this is simply mind-blowing.

Not as mind-blowing as SublimeText leaking 3 solid GB of RAM after a few days on my Debian..

It's always the extensions, not Sublime itself. That has been my experience so far.

Wut? Back in the day I opened 12 files in 9MB on a 32MB device like it was nothing.

Right now, I'm editing all the files in a 3kLOC project in neovim, doing completion with an LSP host, in 14.8MB. Keep it going for a few days and you might rack up 40MB. People can keep creating lightweight editors all they like, but you really need to bring more to the table than that when vim already exists.

We used to be able to open 10 files at a time on 4MB systems back in the 90s.

(They weren't big files, of course, but still. Standards have slipped.)

Yes, those were the days...every once in a while, I visit https://kolibrios.org/en/screen to get nostalgic.

Indeed, standards have slipped, or better said, have been sacrificed at the altar of "quantity over quality".

For a simple "Hello, World!" string message, how many MBs of RAM do you need nowadays?

Madness, complete madness!

I upgraded to 64 GB of RAM and now my average usage is around ~40%, so hopefully I will be able to live with that for some time. Browsers, VSCode, and other webapps and Electron apps are using crazy amounts of memory. But... RAM is cheap now, so who cares. :P It's about time to start telling kids legends about the old days when people used memory leak detectors and analyzers.

Out of curiosity, what's your CPU utilization? Every bloated Electron-based app I've ever seen also happened to consume cycles while sitting there doing nothing. I'm willing to give up my RAM, but not my CPU.


What would you say are top open source editors one would consider, and by what criteria should one benchmark the comparison?

While subjective criteria are perhaps most important (usability, integration with favorite programming language, extensibility), for technical quality perhaps there can be some meaningful numerical benchmarks. For example, the lag and memory use when opening a many-GB text file and editing the middle of it with syntax highlighting.

Supposedly editors using ropes are good at this.

Personally, I'm thinking of how it compares to Geany. A wonderful editor, but I have a feeling the first difference is that Geany is going to seem really bloated next to it. Time to see.

Second that about Geany. Wonderful editor that I have been using for years; anytime I find myself on a new system, or need one editor that will handle any type of file in a civilised manner, my first install is Geany.

That said, looking at the author's main.c, this looks really nice as a shell for anyone looking to create a tool with Lua scripting. I'm going to download and study this one!

Geany is lean and fast but has some warts.

E.g. the side panel is too large, and the find-and-replace dialog is modal and confusing.

How are "usability, integration with favorite programming language, extensibility" subjective? These all seem like very objective features.

Usability is only objective if you use the word in a dictionary sense: "it is possible to use this editor". It is very much subjective in its usual use: how easily it can be used by a certain user to do certain things. Say, to me a text editor without Vim key bindings is not usable. To someone else, Ctrl-C / Ctrl-V bindings may be preferable. 'Usable code editor' and 'usable screenplay editor' are completely different categories, even though both are text editors.

Different approaches to extensibility can exist, all of them good in certain areas. Both Emacs and VS Code are extensible; which approach is better? You'll find different opinions.

You'll need, as a minimum, to specify a language to move "integration with favorite programming language" towards objectivity.

They are subjective in the degree to which they are measurable. How would you measure how well a text editor “integrates with a programming language”? There are many ways to measure this, making any single measurement choice subjective to a degree.

There is absolutely an objective process for identifying criteria and measuring how well an editor meets those criteria.

First, an open-ended survey among text editor users to identify the features/requirements that matter to them.

Second, tag and categorize those responses into a standardized list of features/requirements.

Third, survey users to determine both the relative importances of those features/requirements, as well as how well each editor meets their needs for each feature/requirement. Both of these can be done using Likert scales, most commonly giving a score between 1 (does not meet needs at all) to 5 (completely meets needs), with intermediate values being "mostly doesn't", "somewhat", and "mostly". Several hundred randomly chosen survey respondents will generally give you the statistical precision you need.
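The aggregation in the third step can be sketched as an importance-weighted mean of per-feature Likert scores. This is a simplified, illustrative model (real studies aggregate per-respondent data with proper statistics; all names and numbers here are hypothetical):

```c
#include <assert.h>

#define FEATURES 3

/* Both arrays hold mean Likert scores in the range 1..5:
   how much each feature matters, and how well the editor meets it. */
double weighted_score(const double importance[FEATURES],
                      const double satisfaction[FEATURES]) {
    double num = 0.0, den = 0.0;
    for (int i = 0; i < FEATURES; i++) {
        num += importance[i] * satisfaction[i];
        den += importance[i];
    }
    return num / den; /* importance-weighted mean, still on the 1..5 scale */
}
```

Comparing editors then reduces to comparing their weighted scores over the same standardized feature list.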

Companies do this all the time. It's bread and butter for many product managers and user researchers, to justify to execs why a particular feature ought to be built rather than other ones (combined with other factors like cost, risk, strategy, etc.).

And there you have it. To answer your specific question, to measure how well a text editor integrates with a programming language, you just ask its users to rate how well it does. Since user opinion is all that matters in the end, that's the objective answer.

A survey doesn’t eliminate subjectiveness, it merely averages over it.

You're missing the point.

When it comes to products, people's average evaluation is the objective answer. Because people's evaluations are what lead to usage, purchase, subscriptions, etc. There is no other "objective" answer.

There's nothing subjective about it. What users think about your product is your product in the marketplace. That's the entire meaning of "the customer is always right".

Satisfying user demand for a capability isn't a mathematics problem where there's some independently objectively right answer.

"The customer is always right." was a marketing slogan used by retailers in the late 19th century to convince people to shop in their stores. It doesn't mean anything.

That's its origin, but it's certainly been repurposed today in product design to mean a very real thing, which is that the customer will buy what the customer wants, regardless of whether you think they should or not.

In other words, there's no objective product goodness/badness. Only what the customer wants. In that sense, the customer (market) is always right.

Inexperienced restaurateurs often experience the same shock. You don't cook the food you want to make, or that you think people ought to eat -- you cook the food people want to eat. Otherwise you'll go out of business.

It's created an inappropriate sense of entitlement for many consumers. You're absolutely right that a product will fail if people don't want it, but it's also true that people often don't get the products that they want because there's no way to deliver that product to them and turn a profit. In these cases, the customer is most certainly not always right.

It's a bad slogan that only portrays half of the reality of the situation.

And Textadept, which is also written in Lua.

Typing latency, as measured by Typometer, is quite high on KWin (no compositing). Both Kate and Howl do much better. And PyCharm. Great start, but it's not beating VS Code in the one aspect I care about: typing latency.

Kate is a surprisingly good/fast editor nowadays.

Long-time Kate user, though I dropped KDE years back in favor of Fedora XFCE. If one isn't in the KDE world, Kate's paint latency when alt-tabbing back to a Kate window is incredibly, painfully slow.

Kate has always been a really good piece of software. I would have used it a lot back in the late 90s if it had only had good vim emulation; everything else was excellent.

Do you mean VS Code has low latency?

Was a bit skeptical, but this is surprisingly fast & lite & supports multi-tab + tree view + syntax highlighting in a 1MB download. I've not seen any of the reported issues with the font, which looks clean & crisp (on Win 10).

I'm not expecting it to have any of the features I'm used to with VS Code or JetBrains products, but it's definitely going to replace notepad.exe for a fast look at any text file.

Supports `lite <path>` to open any file/folder, e.g. `lite .` opens up the current folder in a tree view with beautiful dark mode by default, single click on each file loads it instantly. Seeing beautiful, matte-style syntax highlighting for all popular formats I've tried: .html, .css, .js, .md.

Perfect minimal distraction-free editor for writing docs.

Firstly, let me say that I'm in no way hating on Lite, these are just some fact-based observations.

Thought I'd give this a quick try on my Windows machines, where my liteweight editor of choice is Notepad3.

A fresh start of Lite uses 10MB of memory, vs 3MB for Notepad3.

After opening a 27.5MB text file in each, Lite used 156MB of memory, vs 68MB for Notepad3.

The functionality available in Notepad3 is also vastly superior. Lite is pretty spartan - it doesn't even appear to have a file/directory selection dialog to open files/folders with. I really like the default colour scheme/theme Lite ships with though.

The memory usage can be explained by the structure it uses for its buffer: a Lua table of lines. It is this splitting of the file into lines that causes the larger memory usage, due to internal fragmentation.
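The effect of one-allocation-per-line can be sketched as back-of-the-envelope arithmetic. `PER_ALLOC_OVERHEAD` below is an assumed, illustrative per-allocation cost (allocator header, size rounding, Lua string-object header); the real figure varies by allocator and Lua version:

```c
#include <string.h>
#include <assert.h>

/* Assumed per-allocation overhead in bytes -- illustrative only. */
#define PER_ALLOC_OVERHEAD 16

/* Estimated footprint when each line gets its own allocation,
   as in a Lua table of line strings. */
size_t lines_footprint(const char *text) {
    size_t total = 0, len = 0;
    for (const char *p = text; ; p++) {
        if (*p == '\n' || *p == '\0') {
            total += len + 1 + PER_ALLOC_OVERHEAD; /* chars + NUL + header */
            len = 0;
            if (*p == '\0') break;
        } else {
            len++;
        }
    }
    return total;
}

/* Estimated footprint when the whole file is one contiguous buffer. */
size_t flat_footprint(const char *text) {
    return strlen(text) + 1 + PER_ALLOC_OVERHEAD;
}
```

The overhead is paid once per line instead of once per file, so files with many short lines expand the most.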

Lite has a slightly smaller binary, however; as someone whose regular text editor on Windows is notepad (the stock one, which is <100KB), I still find the fact that a text editor's binary is >1MB rather disappointing.

I do realise that both Lite and Notepad3 are significantly more featureful, but I'm not sure if the increase in resource consumption is proportional.

(For comparison, regular notepad uses <1MB of memory when holding nothing, and I don't have a 27.5MB text file to test with, but a 6.5MB one takes 17MB when loaded. In other words, the expansion factor is close to Notepad3.)

Notepad is basically a wrapper around a text box provided by the Windows native UI libraries. The binary is so small because the heavy lifting (such as there is any to be done in Notepad) is handled by the platform.

Precisely. And for the same reason, Windows 95's Notepad could only edit 64KB of text, because that was the most text that could be put in an EDIT control.

Very impressive.

Font rendering does not seem to be as crisp as needed for a text editor (on Windows, at least).

I know Sublime uses DirectWrite to render fonts on Win32; this might be something to explore. Also, TrueType with subpixel antialiasing should give good results but might need some tweaking.

I'll follow this project and I might use it as my main editor once it is more mature :)

When I run this editor on Windows 10 the text appears to me as crisp as in any other editor I have available.

Do you have a high-DPI display? Lite does neither subpixel AA nor hinting, which is very noticeable on a 24" 1080p display. The lack of hinting is most noticeable at the tops of letters, where sharp edges look blurry.

It's looking pretty impressive, especially for such a new project. Since you said it was fast, it got my huge-log-file test (1GiB of binary with lots of UTF-8 text). It performed better than most editors do. It did have a few stalls, though. Still, pretty impressive.

Looks nice.

But upon starting it up I lost all window decorations (KDE), had to restart my session, and lost the things I was working on in my terminals.

How can launching an app cripple the whole desktop? (Not that I think this has anything to do with the app; it's a Plasma thing.)

The first thing I looked for was support for Vim keybindings. Could Neovim be used as the backend editor?

KDE disables compositing when SDL apps are opened, to improve performance. It should return once the app is closed. You can disable this behavior in System Settings > Display and Monitor > Compositor, by unchecking "Allow applications to block compositing".

Wow, thank you !

It's not the first time something like that has happened to me, but I would never have had the patience to investigate!

Can confirm unchecking that setting solved that problem.

This will have very bad effects when playing video games and might(?) potentially increase battery usage when watching full screen videos.

Compositing should only provide the fancy effects like Wobbly Windows and window previews in the alt-tab switcher etc. Not sure why this is causing KWin to crash... You can use Alt-Shift-F12 by default to toggle compositing.

But if you're not doing anything more graphically intense than playing turn-based or slow-paced video games then I guess you can use that option to permanently leave on compositing.

Also on KDE, and it works fine, but I have compositing turned off.


No comment is made worse by adding a concrete, actionable suggestion!

You should add one to your comment.

Stop using KDE.

Lite is written by rxi, who has made some excellent game jam entries, with very impressive YouTube videos showing him doing programming, artwork and sound effects, all in record time [1]. His other GitHub repos are also excellent; for example fe, a minimal Lisp, is definitely worth checking out.

[1] https://www.youtube.com/channel/UC1eJk1sWcUYvBwnn7v5RLaQ/vid...

Another one is Microemacs (I use it nearly exclusively):

C version:


D version:


The source is so simple, the "extension language" is just editing the code.

Checking the GitHub repo, it says 92.3% C [1], 2.6% Lua [2]; why does the author claim it's a "text editor written in Lua" instead?

[1]: https://github.com/rxi/lite/search?l=c

[2]: https://github.com/rxi/lite/search?l=lua

Because it has complete copies of Lua 5.2 (src/lib/lua52), stb truetype (src/lib/stb), and SDL2 (winlib/SDL2-2.0.10) checked in.

Excluding those, it's 1369 lines of C (.c or .h), and 5389 lines of Lua (.lua), and 112 lines of other files (license, readme, build scripts...); or 78.4% Lua, 19.9% C, and 1.6% other. I didn't count fonts or images (.ttf, .ico, or .inl).

It seems to include the entirety of Lua - https://github.com/rxi/lite/tree/master/src/lib/lua52

Which is written in C.

Odd that they chose Lua 5.2. The most popular Luas nowadays are 5.1 (the version that LuaJIT implements) and 5.3 (which was for a long time the 'latest and greatest', though it's been superseded by 5.4).

5.4 hasn't officially been released (it's in the RC stage), so 5.3 is still the latest stable/supported version.

They have the whole Lua implementation checked in, which they didn't write.

Most of the implementation is oddly in the "data" folder.

Probably because that C code is the embedded Lua interpreter. See the lua.h file, there's the copyright notice at the bottom.

If you check the C sources you'll find that it's Lua itself and libSDL.

Works on Haiku: http://0x0.st/i_3I.png

In src/main.c at line 33, I had to add a `;`.

Very nice; some basic user documentation would be helpful.

It feels snappier than Sublime! Are there any limitations in the plugin system?

As the editor is written mostly in Lua, with C taking care of the lower-level parts, plugins can typically customise anything that is exposed by Lua and the C API. Beyond adding custom commands, plugins can also do things like patch straight into the DocView's line-drawing function to draw additional content:


Or create their own custom "views":


The treeview at the left of the screen is implemented as a normal plugin, and like any other plugin it can be removed from lite by simply deleting the `treeview.lua` file.

Thank you for the response. I wanted a simple IDE for R because RStudio is very slow. I think this is a nice base to build on top of.

I support this! I have been trying to use VSCodium [1], but RStudio keeps pulling me back in. An alternative with some nice R plugins would be fantastic.

[1] https://vscodium.com/

Looks interesting. I hope plugins make it useful for many languages like Python, Go, etc. Right now I am stuck with VSCode and PyCharm.

So there's no GUI toolkit? Everything is drawn with Simple DirectMedia Layer?

Not even that: SDL just provides a pixel buffer; the application draws everything itself per pixel. Lite uses a technique I refer to as "cached software rendering", which allows the application code to be written as if it's doing a full-screen redraw whenever it wants to update; the renderer cache (rencache.c) then works out which regions actually need to be redrawn at the end of the frame and redraws only those. You can call `renderer.show_debug(true)` to show these redraw regions: https://youtu.be/KtL9f6bksDQ?t=50

I wrote a short article detailing the technique here: https://rxi.github.io/200402.html
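The core of the idea can be sketched as follows. This is a simplified model based on the description above, not the real rencache.c: the command format, grid size, and hash are all assumptions for illustration. Each frame the application pushes all its draw commands; the commands overlapping each grid cell are hashed, and at end of frame only cells whose hash differs from the previous frame count as needing a repaint:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

#define GRID_W 8
#define GRID_H 8
#define CELL   64  /* pixels per cell */
#define FNV_BASIS 2166136261u

/* Hypothetical draw command: a solid rectangle. */
typedef struct { int x, y, w, h; uint32_t color; } Command;

static uint32_t prev_hash[GRID_H][GRID_W];
static uint32_t cur_hash[GRID_H][GRID_W];

static uint32_t fnv1a(uint32_t h, const void *p, size_t n) {
    const uint8_t *b = p;
    while (n--) { h ^= *b++; h *= 16777619u; }
    return h;
}

void begin_frame(void) {
    for (int y = 0; y < GRID_H; y++)
        for (int x = 0; x < GRID_W; x++)
            cur_hash[y][x] = FNV_BASIS;
}

/* The app "draws everything": fold each command's bytes into the
   hash of every grid cell the command touches. */
void push_command(const Command *cmd) {
    int x0 = cmd->x / CELL, y0 = cmd->y / CELL;
    int x1 = (cmd->x + cmd->w - 1) / CELL, y1 = (cmd->y + cmd->h - 1) / CELL;
    for (int y = y0; y <= y1 && y < GRID_H; y++)
        for (int x = x0; x <= x1 && x < GRID_W; x++)
            cur_hash[y][x] = fnv1a(cur_hash[y][x], cmd, sizeof *cmd);
}

/* End of frame: count the cells that actually changed (in the real
   thing these would be merged into dirty rects and repainted). */
int end_frame(void) {
    int dirty = 0;
    for (int y = 0; y < GRID_H; y++)
        for (int x = 0; x < GRID_W; x++) {
            if (cur_hash[y][x] != prev_hash[y][x]) dirty++;
            prev_hash[y][x] = cur_hash[y][x];
        }
    return dirty;
}
```

The application never tracks what changed; an unchanged frame hashes identically everywhere, so nothing repaints.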

That's not any different from the typical GDI, CoreGraphics, or GTK/Cairo way of doing rendering.

Windows, macOS and GTK maintain an internal pixmap buffer for a window.

And when needed you call InvalidateRect(wnd, rc) and receive WM_PAINT with a cumulative rect to update.

Personally, I would create an abstraction that wraps a couple of functions of GDI, CoreGraphics and Cairo, and use it instead of manual pixmap rendering. It would be faster and more flexible. All of that UI can be rendered with just two or three functions: FillRect, DrawText, MeasureText.
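The three-function abstraction being suggested can be sketched as a small vtable; the names and signatures here are illustrative, not from any real API, and each platform (GDI, CoreGraphics, Cairo...) would supply its own implementations:

```c
#include <assert.h>

typedef struct { int x, y, w, h; } Rect;
typedef struct { int w, h; } Size;

/* Backend vtable: the only three operations the UI code ever calls. */
typedef struct Backend {
    void (*fill_rect)(void *ctx, Rect r, unsigned color);
    void (*draw_text)(void *ctx, int x, int y, const char *s);
    Size (*measure_text)(void *ctx, const char *s);
} Backend;

/* UI code stays backend-agnostic: draw a button with a centered label. */
void draw_button(const Backend *b, void *ctx, Rect r, const char *label) {
    Size ts = b->measure_text(ctx, label);
    b->fill_rect(ctx, r, 0xCCCCCCu);
    b->draw_text(ctx, r.x + (r.w - ts.w) / 2, r.y + (r.h - ts.h) / 2, label);
}
```

The widget code above never knows which backend it is drawing through, which is the flexibility being argued for.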

It's quite different, actually, both in application programming logic and in expressive power of drawing primitives.

With dirty rectangles, it's the application's responsibility to minimize much of its drawing; the renderer will at best avoid copying bits where the target of the bits lies outside the drawing rectangle. The renderer can only optimize in situations where the entire call is understood as a full primitive, and it knows that the result will lie outside the dirty rectangle.

With rxi's approach, the application gets to define the commands which update the UI - which may be as complex as desired, as long as they have a calculable rectangle - and the cost of calculating that rendering can be skipped, without needing to query for dirty rectangles or doing any application-side conditional logic, beyond the layer that rxi wrote.

It's particularly powerful if the rendering primitives are higher level than those provided by the native APIs.

Interesting. Do you know any software whose source code is available that uses such an abstraction to paint the screen?

Any classic Windows, MacOS or GTK application does that.

Check https://docs.microsoft.com/en-us/windows/win32/learnwin32/pa...

If your question is about unified wrapper for multiple platforms then wxWidgets will qualify : https://wiki.wxwidgets.org/Painting_your_custom_control

As in my Sciter, where I wrapped Direct2D/DirectX, Skia/OpenGL, CoreGraphics, Cairo, and GDI+ into a graphics abstraction class, so the rest of the code is isolated from the particular platform/backend used: https://github.com/c-smile/sciter-sdk/blob/master/include/be...

FWIW this is basically "dirty rectangles" which was a very common technique for avoiding full screen updates in games back when the hardware wasn't fast enough to do that.

You're equating the final stage of this approach to the entire approach. The point of this technique is that you get the benefits you typically would from dirty rectangles without the burden of the bookkeeping you would traditionally have. Using this technique your application "redraws" everything as if it's drawing it fresh each frame and the renderer cache takes care of determining what's actually changed.

Typically with dirty rectangles you would have to manage this state in the application code, for example, determining that line X was edited and then updating the region for that line, or determining that view Y moved and updating a dirty rectangle based upon its previous and current positions.

From the description at least, it does sound like there is bookkeeping for the tiles, so I'm not sure why you think there isn't such a burden.

It isn't. Normally the application needs to be aware of dirty rectangles; it fetches them from the window compositor, and it needs to limit its drawing calculations based on the dirty rectangle, in order to get the full benefit.

(Dirty rectangles are still perfectly common in desktop apps for when you e.g. drag a window back on screen after overlapping the edge, or if you scroll a window.)

You are thinking about regions; dirty rectangles were a common thing in games to avoid redrawing the entire screen, and were rarely seen outside of them.

Nah, it's tile-based concurrent precompositing, like web browsers do. Each tile knows what's in it (i.e. what set of DOM elements); and subscribes to state-change events for those DOM elements; and any such state-change event will trigger the tile to re-render its cached texture. Then, on each frame, all you have to do to draw everything, is to grab the latest cached texture from each tile, and blit those (or set them up as a grid of flat-projected rects in screen-space, if you're in 3D-semantics land.)

You can get additional benefits from this approach, by doing multiple layers of it (e.g. having scrollable surfaces have their own tiles that precompute the inner-document-viewport-space rather than the outer-viewport-space, such that the inner tiles aren't invalidated by scrolling the outer viewport.) This technique ends up forming a tree of tiles, where tiles higher up the tree, when invalidated, re-render trivially by compositing tiles further down the tree into themselves. Thus, another name for this approach is a "precompositing tree."

The difference between this approach and dirty rects, is in the direction of information flow. In tile-based precompositing, the information only flows in one direction—from the user, through view-controller, into the DOM, to the tiles, and then out the display. Dirty rects, meanwhile, are a signal sent backwards, from the display system to the program, essentially telling it that the display system lost/discarded the information needed to re-draw an area, so could the program please send it over again. (The program doesn't even have to re-render in response; some dirty-rects implementations, like X11's DAMAGE extension, just involve the client application re-transmitting pixbuf data to the server from its own precomposited buffer.)

Also, dirty rects / screen-damage doesn't solve the problem of hardware not being fast enough; it solves the problem of hardware not having enough VRAM to do per-frame compositing from undamaged intermediates. In low-VRAM conditions, you can only keep around the final pre-composited image; and so any time you "damage" / make "dirty" a region of the screen (e.g. by removing an overdrawn element, which should have the semantics of revealing whatever was there before that element overdrew it) then you need to propagate a request back to the renderer to re-draw (and, for efficiency, re-draw just that region), because you don't just have an intermediate texture laying around for that window/stage-layer/etc. to re-source it from. If you did, then dirty-rects would never come into play, since you'd just re-composite everything each frame. (Which is cheap even on old-school CPU-only blitters—you just have to alternate which pixmap pointer you're basing your LOADs off of using either a rect-overlap check, or a mask-bitmap [which gets you 8 pixels' mask-states per LOAD.] Even the Gameboy can do it!)
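The one-directional flow described above can be sketched with a heavily simplified, flat model (a single layer of tiles rather than a precompositing tree; the subscription table and counters are illustrative only): each tile caches its rendered "texture", subscribes to the elements it contains, and only re-renders when one of them changes, while every frame blits all cached tiles:

```c
#include <stdbool.h>
#include <assert.h>

#define TILES 4
#define ELEMS 8

typedef struct {
    bool dirty;
    int renders;          /* how many times this tile re-rendered */
    bool contains[ELEMS]; /* which elements this tile subscribes to */
} Tile;

static Tile tiles[TILES];

/* A state change on an element invalidates only subscribing tiles:
   information flows element -> tile, never backwards from the display. */
void element_changed(int elem) {
    for (int t = 0; t < TILES; t++)
        if (tiles[t].contains[elem]) tiles[t].dirty = true;
}

/* Per frame: re-render dirty tiles into their caches, then blit all. */
int draw_frame(void) {
    int rerendered = 0;
    for (int t = 0; t < TILES; t++) {
        if (tiles[t].dirty) {
            tiles[t].renders++; /* the expensive paint happens here */
            tiles[t].dirty = false;
            rerendered++;
        }
        /* blit tiles[t]'s cached texture regardless (cheap) */
    }
    return rerendered;
}
```

Nothing ever asks the display system what was damaged; invalidation is pushed forward from the model to the tiles.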

Like the other guy, you are thinking about regions (which is what the X11 DAMAGE / WM_PAINT / etc. stuff uses). Dirty rectangles is a method used in older games where the game kept track of (usually) sprites on screen, and whenever something changed (e.g. a sprite moving) that part of the screen was marked as dirty (often implemented as a list of non-overlapping rectangles, hence the name). Dirty rects also flow only in one direction.

Thanks for the write-up. I can see it being beneficial for rust-audio community, which is looking for the right approach to VST GUI.

Just an aside. Everyone should play with SDL and write a simple game that you have to draw your own pixels. It’s extremely satisfying. Do it in C. It’s pretty simple and fun.

> …the application draws everything itself per-pixel

That’s what I meant, yes. Interesting approach and impressive.

Sounds like the pixel grid equivalent of a "virtual DOM"?

I think this is correct, in the sense that React uses a VDOM. When you make changes, you sort of pretend that you are changing everything, but the rendering engine figures out the differences to the real DOM, based on the in-memory changes, and makes minimal edits to it. This is why you can use React with all kinds of things that aren't DOM or even web-related (react-native, react-blessed, react-babylonjs, etc.). I contributed to react-blessed & react-babylonjs, and wrote the main chunk of react-babylonjs's current fiber system. You essentially just use the VDOM to describe the full graph, and that graph doesn't have to be DOM at all.

Not even close.

I'm open to the possibility that I'm that wrong in my understanding, but this didn't help me understand any better at all.

The technique does sound similar to me. Both (as I understand it) maintain a representation in memory of the final rendering and use a diff to determine which parts of the rendering to perform. The "virtual DOM" technique isn't strictly tied to a browser DOM, though the term is a reference to that, and React's (in particular) has been adapted to many other rendering targets.

I'd be happy to learn more if you'd be kind enough to explain what I misunderstood.

virtual DOM implies DOM existence.

DOM-based systems use so-called retained-mode rendering. But this one uses something that can be classified as immediate-mode rendering.

Check https://docs.microsoft.com/en-us/windows/win32/learnwin32/re...

A virtual DOM doesn't necessarily imply DOM existence; I don't buy that. It really depends on how persistent you want your "virtual" to be.

A virtual DOM, to me, means that the application renders by, every time, constructing data structures which are handed off to be reconciled with the display.

If, as in an HTML application, you render by means of a retained mode real DOM, then the reconciliation is via comparison of the virtual with the real. But that's not the only way to handle the output of the construction of a virtual DOM; it could figure out how those structures intersect with the dirty rectangle/s, and only render the subtree of the DOM which applies.

rxi's technique resembles a virtual DOM of depth 2 (1 root and everything is a child) and absolute positioning, though it's even closer to an OpenGL display list or combined vertex & command buffer. For that reason, I think it's a little bit of a stretch; not on the virtual DOM angle, but on the not particularly DOM-like nature of the drawing commands.
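To make the "diff a flat command list" idea concrete, here's a hypothetical sketch (structure and names invented here; this is not lite's actual code): draw commands are hashed into the grid cells they overlap, and only cells whose hash changed since the last frame get redrawn.

```python
CELL = 64  # tile size in pixels (arbitrary choice for this sketch)

def cells_for(x, y, w, h):
    """Yield the (cx, cy) grid cells a rectangle overlaps."""
    for cy in range(y // CELL, (y + h - 1) // CELL + 1):
        for cx in range(x // CELL, (x + w - 1) // CELL + 1):
            yield (cx, cy)

def frame_hashes(commands):
    """Map each cell to a running hash of the commands overlapping it."""
    cells = {}
    for cmd in commands:                  # cmd = (kind, x, y, w, h, data)
        for cell in cells_for(*cmd[1:5]):
            cells[cell] = hash((cells.get(cell, 0), cmd))
    return cells

def dirty_cells(prev, curr):
    """Cells to redraw: hash changed, cell appeared, or cell disappeared."""
    return {c for c in prev.keys() | curr.keys() if prev.get(c) != curr.get(c)}

# One character changes between frames; only the cell containing the
# text command needs repainting, not the 8 cells the background covers.
frame1 = [("rect", 0, 0, 200, 100, "bg"), ("text", 10, 10, 50, 12, "hello")]
frame2 = [("rect", 0, 0, 200, 100, "bg"), ("text", 10, 10, 50, 12, "hellp")]
dirty = dirty_cells(frame_hashes(frame1), frame_hashes(frame2))
```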

> A virtual DOM doesn't necessarily imply DOM existence;

That's wrong. A virtual DOM is always a parallel structure to the real DOM - a projection of it. That is its definition.

From vDOM authors: https://reactjs.org/docs/faq-internals.html

> The virtual DOM (VDOM) is a programming concept where an ideal, or “virtual”, representation of a UI is kept in memory and synced with the “real” DOM

I get the feeling you're reading this too literally. I think eyelidlessness is talking about using a technique that is analogous or similar to that of a virtual DOM. Nobody is talking about an actual DOM.

That was my intention yes.

> virtual DOM implies DOM existence

No, it doesn't. I addressed this in the comment above. React's virtual DOM has been used to render:

- Plain HTML (e.g. server-side rendering) - Native UI framework objects (e.g. React Native) - Text-based interfaces (e.g. Ink) - Smart TV devices (e.g. Netflix's Gibbon) - Browser `<canvas>` elements - Markdown formatted text

And a whole bunch of other targets.

> DOM based systems use so called retained mode rendering. But this one uses something that can be classified as immediate mode rendering.

This seems orthogonal to the question? I'm not trying to be difficult, I sincerely don't understand why this would mean the two are "not even close".

> Native UI framework objects (e.g. React Native)

React Native uses a DOM; the only nuance is that this DOM is a tree of native widgets/windows, which is a perfectly good DOM.

Again, virtual DOM is a projection of real DOM in one form or another. It could be a tree of anything that can be represented by attributed nodes and leaves.

A DOM tree has nothing to do with rendering and pixels; that's why "not even close". By using a virtual DOM you can update (by diffing) a tree that has no visual representation at all - it is a pure data structure. Think of an abstract XML config that can be reconciled with its virtual DOM.

I think you're using a term in a way it wasn't meant to be used. I tried to give this the benefit of the doubt, I did searches for uses of DOM outside of HTML and XML documents, and it's just not a term that's used generally for any tree representing nodes and leaves. There are much more general terms for those kinds of structures, and pretty much any program which maintains or creates structured data has some kind of representation of data with those kinds of relationships. But the DOM, as defined by the W3C:

> The Document Object Model (DOM) is a programming API for HTML and XML documents.

I was unable to find any other usage.

In reality, I think it would be more accurate to refer to the virtual DOM (at least React's, I haven't spent much time familiarizing myself with other implementations with the same naming) as a virtual output data structure, where the output may be rendered to a screen, it may be rendered to a string serialization, or any other output... but the role it plays (when it performs well) is to optimize output over time by minimizing changes pushed to its destination. One of those output targets is the DOM.

You chose to respond to one of my examples among many non-DOM React renderers, but another one very much has everything to do with rendering and pixels, and that's canvas.

And pixels, to software, are just another data structure. Software doesn't emit light from an LED or a diode; it just provides data to hardware, which produces the physical side effects.

Honestly, this has been an enlightening discussion, but primarily because I've been reminded that my instincts for engaging dismissive comments on the internet are there for a reason. I don't hope to convince you, I don't think any further engagement would be productive, have a nice weekend.

> virtual output data structure, where the output may be rendered to a screen.

Sigh. A virtual DOM has nothing to do with rendering.

The virtual DOM was introduced as a lightweight construct for generating and modifying trees of nodes.

Once the structure is generated, it is used as a prototype for updating the "master" DOM (or any other tree of nodes).

Any modern UI system is a tree of widgets/windows - each child has one and only one parent. So a vDOM can be applied to an HTML/XML DOM as well as to a native UI tree of widgets/windows. That's why there are React, React Native, and my native implementation of React in Sciter (https://sciter.com/docs/content/reactor/helloworld.htm) for that matter.

Just treat "DOM" as a short name for a tree of nodes where each node has a) a tag (or type, or class), b) a collection of attributes, and c) a collection of children. Nothing more, nothing less.

In any case, I have no idea where a "grid of pixels" fits into this.

Sounds like curses.

I didn't check but I would assume it's based on https://github.com/rxi/microui by the same author.

Something similar is Revery, written in ReasonML (OCaml) [0].

I saw some comments here about a WASM renderer-based editor. I think Flutter does something similar, where it uses Skia to render its components.

[0] https://github.com/revery-ui/revery

For those who are already proficient with Vim/Emacs, is there any reason to consider this editor?

Also, does it come with vim keybindings? :D

One thing that has mystified me is all these people talking about text editor responsiveness, whether it feels "snappy" or not.

What are they talking about!?

I mean that as someone who has grown up with 3D shooters and is obsessive about tuning networks to the lowest possible ping times to improve latency for competitive gaming. I always turn triple buffering off because I can definitely feel the difference over double buffering.

Practically every editor I use updates with the maximum 60Hz refresh of my monitor, and that literally can't be improved upon any further through software alone. The exception is Microsoft Word, which does about 30Hz and I hate this, but it's a shitty WYSIWYG editor, not a simple fixed-width text editor.

I mean, seriously: I'm playing Doom Eternal at 4K with a constant 60fps, no dips. That game is processing a decent chunk of a terabyte per second of data at that rate.

What is this mysterious difficulty people have with editing ~100KB text files!?

Either this forum is full of people editing insane multi-gigabyte files (By hand? Why!?) or they're doing it on their 486SX PCs for nostalgia reasons.

I seriously don't get it.

I dunno why you haven't encountered it, but it's a real thing.

I remember using one editor perhaps 8 years ago, and if I tried multi-cursor mode with more than ~40 insertion points (totally reasonable to edit 40 similar lines at a time), it took a couple of seconds to register each keypress.

Similarly, other editors wind up choking on syntax highlighting, or large files, or find & replace, or documentation lookup, or whatever.

The "mysterious difficulty" you mention is often literally several seconds of latency with, say, a 30,000-line file, whether it's with opening, scrolling, editing, or the other more advanced features already mentioned.

I'm honestly pretty baffled this isn't something you've encountered before. This isn't about hertz, it's literally about entire seconds or large fractions thereof.

I regularly use Notepad, Notepad++, TextPad, VS Code, Visual Studio, and the PowerShell ISE. I haven't had any issues with any of them, even when block-selecting or multi-cursor editing.

Notably, they're all Windows native apps written in C++, with the exception of VS Code, which is partially JavaScript.

I've noticed that some of them struggle with huge (1 GB) files, but editing such a large file is a somewhat strange thing to do.

I mean, that's great for you. You're just lucky I guess.

But I hope what I described makes sense to you. It's not about 1 GB files at all, it's about regular files. My suspicion is that it's mostly "side features" that start to grind when they get beyond a certain point.

To be more specific with one example: I've used another code editor where a simple "find all" operation populates a results box. If you mess up your regex to be accidentally super-generic and it finds 100,000 results in your 20K-line file, it takes half a minute to load the results into the results box and become responsive again, with no "cancel" button.

Similarly, weird edge cases in a syntax highlighter interacting with a half-finished line of code can cause it to choke up. That kind of thing.

Do you understand now? Again, you seem to just be lucky that you haven't encountered this kind of thing.

I develop on a 2018 dual-core 16GB RAM MacBook Air, which is a lot crappier spec-wise than your gaming rig (but not crappy enough that text editing ought to be a problem). Often, typing in VSCode on a large TypeScript project can take hundreds of milliseconds to register my keystrokes. It may have to do with the machine being overloaded by too many Electron apps (switching from Chrome to Firefox as my main browser already feels better), or maybe the TypeScript language daemon or an errant extension is doing too much work - whatever it is, I haven't figured it out, and dropping in a replacement text editor that claims to be fast might be a quicker way to make my dev setup tolerable than my current approach.

I recently got Neovim working with nvim-typescript but I would be lying if I didn’t say I missed the nice UI/UX of VSCode, nor that I’m annoyed at having to spend so much time configuring my editor and memorizing shortcuts instead of actually getting things done.

So to answer your question I think part of it is that my machine is just worse than yours, and part of it is that editing text is often doing more than just editing text - in my case keystrokes trigger static analysis of a large project.

Did you try Sublime? Also try running Boot Camp, because my personal conspiracy theory is that Apple only optimizes their releases for the latest hardware.

The difference in "snappiness" between Sublime Text and VS Code is extremely noticeable. The startup time in particular is substantially slower on VS Code (not just on the initial startup but also upon loading new windows)

The reason I still use VS Code is because the additional features for coding make it worth the unfortunate performance cost, but whenever I'm just working with basic markdown files I always go to something like Sublime.

When I was running Linux, one metric that mattered to me in my choice of editor was the time it took to launch from the command line.

Anything that spends time loading plugins or whatever before giving you text on screen and responding to commands is frustrating when the only requirement is to quickly tweak a setting in a config file or just view a text file.

I guess it's less noticeable on Windows, since everything is GUI-based and quite unresponsive by default. But when spending your days in a terminal, you get used to a certain snappiness that's noticeable once interrupted.

Edit: Btw an interesting in-depth analysis of typing latency with measurements https://pavelfatin.com/typing-with-pleasure/

While the reason could easily be that not everyone has the horsepower you're used to (I assume 60FPS at 4K requires a certain level of hardware), people often mean something other than high speed when they say "snappy": ease of use, a good descriptive UI that gives you feedback on actions and their results, and certain features that simplify tasks, such as integration with other tools (git, compilers, etc.).

Also add other factors such as remote editing, say on a VPS with 512mb RAM with too many daemons running.

> remote editing, say on a VPS with 512mb RAM with too many daemons running.

I really like the new VS Code remote-editing feature, where the GUI is local but you can work directly on the remote system.

Visual Studio has a vaguely similar "remote debugger" feature.

Generally I avoid remote development like the plague. As you said, the latency is quite noticeable through any kind of remote connection.

Pro tip for anyone using Windows: Modern versions limit RDP to 30Hz by default, irrespective of CPU power or available bandwidth. See this MS article on how to remove the limitation: https://support.microsoft.com/en-au/help/2885213/frame-rate-...

I always set this to 60 Hz on high-performance "workstation" VDI systems used by developers.

I don't think screen frame rate is a good measure of editor responsiveness.

For me the only time screen response comes in to consideration is for flicker.

Editors that flicker badly can be very annoying and very distracting.

My measure of editor responsiveness would be count-based: the number of times you find yourself waiting on the editor.

Now this waiting can happen for very many different editor actions, but it is most annoying for anything that is keyboard input related.

Snappy editors have very few of these wait points, whereas slow editors drive you mad as you find yourself constantly waiting for some action to complete.

> What are they talking about!?

You're missing the point with refresh rate; you could make the slowest implementation in the Universe have a snappy UI thread.

> What is this mysterious difficulty people have with editing

> ~100KB text files!?

Most editors will open the entire file into RAM, which is not such a problem unless it starts trying to index everything for searching, colourizing everything, etc. What feels like a great feature for ~10kB source files starts to chomp away at CPU and RAM for files >1MB as your editor tries to find patterns in some binary file.

> I mean, seriously: I'm playing Doom Eternal at 4K with a

> constant 60fps, no dips. That game is processing a decent

> chunk of a terabyte per second of data at that rate.

That's quite the powerful machine you have. Consider many people will operate with laptops and some of them are low-power, high battery life. Also consider that many will not just be doing that one thing. I know quite a few people now doing serious dev work from tablets... (They use build servers.)

Also, just because you have the latest processor available doesn't mean I expect a simple text editor to consume everything it has.

you may find this interesting:

"Why are 2D vector graphics so much harder than 3D?"


I think I've read that article already! Computer graphics has always been an interest of mine, and I've written 3D engines before.

Admittedly, not a 2D text engine, but I keep up with the research.

The difference between the blog post you linked and a programmer's text editor is night & day.

There's a huge difference between arbitrary 2D graphics and fixed-width text rendering. The former has crazy complex corner-cases, and also has to deal with all of the fun things 3D engines do such as arbitrary transformations and transparency.

A web browser has to deal with the arbitrary case, which is why Firefox's new Rust-based renderer took so many years of hard work to write. It's a complex beast.

A fixed-width text editor is more or less just putting sprites on a grid. Sure, there's subpixel alignment, antialiasing, and maybe even ligatures, but this is nothing really in comparison. They're all "local" issues where typically at most a few hundred pixels are affected per character update.

Or to put it another way, text editor rendering is "embarrassingly parallel". The screen can be split up into lines or char blocks and each can be drawn separately and updated individually when modified.

Compare this to 3D games that are pumping out 4K pixels every frame, updating all of them every time. Web browsers do the same thing, they have to update the entire screen every frame in a lot of scenarios and can manage this at 60 Hz too. Firefox and Chrome both can do this now for much more complex scenarios than text editing.

We are very sensitive when it comes to text. 3D games don't need to be accurate, compared to text, which needs sub-pixel perfection. 3D is closer to the metal and thus faster. I've made a web-based text editor that parses the entire file on each keypress to get language intellisense and semantic coloring; that, however, only takes 1ms. Rendering one full screen of text takes around 10ms. That's how slow text rendering is. Taking into account random GC pauses and graphics-layering overhead, there's little budget left for parenthesis-matching highlighting, auto-completion suggestions, auto quote insertion, spellchecking, etc.

I use vim as my primary editor. I've tried a number of times to pick up VSCode. The functionality is there. The modal keybindings are there. But yet it lags and I can deal with it in the short term but over the course of a day my frustration mounts and I end up going back.

There's a difference as well between refresh/frame rate and input lag. It is very much possible for a game to have a consistent 60Hz frame rate and yet lag. Lag can come from delays in the input subsystem, it can come from buffering before display, and it can come from the display itself.

Here's a good read: https://www.eurogamer.net/articles/digitalfoundry-2017-conso...

The suspicion may also be induced by other software.

Ever tried early RStudio (or even current) on a crappy work-issued laptop? And I don't mean R, I mean the UI elements.

This is a great piece of software, certainly more than a text editor. But it isn't as fast and snappy as a native editor, far from it. It's tolerable now, but it used to be pretty bad on non beefy hardware.

At the same time, native IDEs did not have that issue at all.

This is a thing man.

I'm running a Pentium powered notebook and VS Code runs like a dog.

Another factor to consider is you might be running other things in parallel to editing your files (terminal watching and compiling code, a web browser for viewing output, etc).

Try pretty much any Electron app, or try a web browser on mobile (these days even a simple Google search sometimes freezes my phone for a few seconds with the message "unresponsive script"). If you can't feel it, maybe you've gotten used to it - but try some CLI program and you'll see the difference.

> The exception is Microsoft Word, which does about 30Hz and I hate this, but it's a shitty WYSIWYG editor

Was the shitty there qualifying WYSIWYG (as in WYSIWYG editors are definitionally shitty) or qualifying MS Word (as in MS Word is a shitty editor)?

If the latter, do you have suggestions for good WYSIWYG editors?

MS Word has... many problems, but being WYSIWYG is not one them. I like WYSIWYG editors.

I write a lot of reports these days, and the Word editor interface drives me crazy. Like I said, it's slow, irrespective of the hardware. It can't even maintain a consistent 30 Hz on a high-end gaming rig when editing plain-text paragraphs with no special formatting. I suspect it's throttled internally, but it could just be badly written.

The editor is also very glitchy in the way it handles formatting. Lots of little annoyances just haven't been fixed and have been left there to fester for decades. It's all too easy to corrupt a document's styles to the point that the only reasonable fix is to carefully cut & paste all content into a new document, going through Notepad on the way to guarantee all hints of the original formatting are stripped.

Is there language server support/is it planned?

I will definitely try it, as VS Code has become so slow and has a lot of features I don't need.

See also Howl, written in Lua/Moonscript and damn fast. Both of these seem to have a bit more of a community than Textadept (and no dependency on SciTE).

I used textadept for a while, bought the book to support the author. It didn't work out for me.

I also think Howl is a pretty nice unknown gem.

I have to mention, I do love how this looks! I'd look into using it as my daily driver for quick/short text edits and maybe small projects!

Since I really like the looks of it, in case you do need help with keeping it maintained, I would love to help! Feel free to contact me at contact@taigi100.com

Good luck and I hope it all goes well with this text editor!

I keep thinking about using VS Code as my text editor, but I can't switch away from Notepad++ yet... so I use VS Code as a code editor only.

Any instructions on how to run this on macos?

Someone made a formula: https://github.com/lincerely/homebrew-tools

   brew install lincerely/tools/lite

   brew tap lincerely/tools; brew install lite

On macOS Catalina, I just went

  git clone https://github.com/rxi/lite.git
  cd lite
and an app did start up, but the text gets cut off in the editor window, and I can't quit the app from the menu

The compiler might complain that it needs SDL.

Fixed this with:

  sudo apt install libsdl2-dev
on Ubuntu.

Otherwise it works as tingletech described, also using Mint/Ubuntu.

How do I install this on Mac? apt is not available in mac.

Update: I can install this using brew install sdl2 https://medium.com/@edkins.sarah/set-up-sdl2-on-your-mac-wit...

  brew install sdl2

It seems to compile/run fine for me on Catalina, but everything is absolutely huge (HiDPI compatibility issues?)

Same here; the font is huge on Catalina. It seems like this is the culprit:

  static double get_scale(void) {
    float dpi;
    SDL_GetDisplayDPI(0, NULL, &dpi, NULL);
  #if _WIN32
    return dpi / 96.0;
  #elif __APPLE__
    return dpi / 72.0;
  #else
    return 1.0;
  #endif
  }
I found changing it to dpi / 192.0 to be fairly comfortable. It wouldn't be too hard to add a scale option and change to `return (dpi * scale) / 192.0`. The "right way" is probably to do that but also get scale by checking the screen resolution; I'd go by height due to the increasing adoption of ultra-wide monitors:

  static double get_scale(void) {
    SDL_DisplayMode dm;
    SDL_GetDesktopDisplayMode(0, &dm);
    /* `scale` is the user-configurable factor proposed above */
    return dm.h * scale / 786.0;
  }
Edit: works on my win10 and arch boxes.

Still, after changing get_scale, the fonts don't look like they are rendered at a higher resolution.

There should be a separate feature to handle higher-resolution screens.

Anyway, it feels really easy to change anything in this editor.

Probably adding it to the dock works?

it creates a 348K "Mach-O 64-bit executable x86_64", but I'm not sure how I would add that to the dock. I tried to drag it there with finder, but it would not take it.

You need an App Bundle. This is normally handled by Xcode or your building script, but it's possible to do it manually.

To make App bundles you have to create the following directory structure:

    Lite.app
    \- Contents
        \- MacOS
           \- Lite (that's the executable file)
Here's a script that automates all that: [1]

To add an icon you need a plist and a icns file [2].

I know it looks cumbersome, but it pays off when you need to bundle multiple files with your app.


[1] https://gist.github.com/mathiasbynens/674099

[2] https://stackoverflow.com/questions/1596945/building-osx-app...

Same here, works quite nicely on Mac :) and it's super fast.

Also, fonts are a bit big on a Retina screen.

Nice project. It would be nice to change the key bindings on Mac to use the Command key instead of Control. It's one of those habits that requires a mental shift when using Mac vs. Linux/Windows.

Is there a config file where this can be changed?

Very cool. It looks like it is written in Lua using SDL and some stb single-file libraries.

Looks like a nice start, kind of confusing interface though. Needs an open folder command.

Seriously. I'm all for lightweight text editors, but the ability to open a folder and a way to start a new text file are the minimum viable product.

Very cool! I ran it on repl.it via x11 but not sure it does the perf justice because of the network lag: https://repl.it/@amasad/lite

Interesting that it uses SDL. It would be nice if Vim or Emacs had an SDL backend, so it could be used on more esoteric platforms. I remember there being a distinct desire for a good homebrew text editor for the Switch.

I would really like something as simple as windows notepad which syncs to my devices, where the app is equally simple. A no frills fulltext search feature would be nice too.

I currently use Google Keep, which is incredibly slow

I love the mobile and web versions of Simplenote from Automattic--decent searching, basic markdown, perfect sync. But I do wish the Windows/Linux clients weren't Electron. There's a good native Mac client at least.

Try my Sciter Notes ( https://notes.sciter.com ).

It uses a database for storing stuff, but you can map books to folders on your hard drive. And those folders can be under the control of Dropbox, Google Drive, etc., so you can read shared stuff on any device using a browser.

Have you tried Notion(https://www.notion.so/)?

- No linux client

- Too many frills

- Too slow

Lite has far less functionality than Notepad.

Could you share what data structure you use to keep the text in? I'm curious how text editors work: since you need fast inserts anywhere, is the text a linked list?

My Lua isn't strong, but I think this is what you're looking for:

Data structure initialization: https://github.com/rxi/lite/blob/master/data/core/doc/init.l...

How insertion works here, which illuminates how the table is used: https://github.com/rxi/lite/blob/143f8867a13a35f5688ad7c9771...

Looks like an array of lines, which is a completely reasonable structure to use.
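To make the array-of-lines idea concrete, here's a toy standalone sketch (in Python rather than Lua, and not the linked code itself): the document is a list of strings, and an insert splits one line and splices the resulting pieces back in.

```python
def insert(lines, row, col, text):
    """Insert `text` at (row, col), 0-indexed; `text` may contain newlines."""
    before, after = lines[row][:col], lines[row][col:]
    new = (before + text + after).split("\n")
    lines[row:row + 1] = new   # splice; O(number of lines) worst case
    return lines

doc = ["hello world", "second line"]
insert(doc, 0, 5, ",\nbrave")
# doc is now ["hello,", "brave world", "second line"]
```

Editing within a line is cheap, and the splice only gets expensive on files with a huge number of lines, which is part of why VS Code eventually moved to a piece table.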

There's a good post on the Visual Studio Code blog about why and when they moved on from an array of lines to a new structure based on a piece table.


And if you want to take that a step further down the rabbit hole, here's how Lua allocates the table in memory: https://stackoverflow.com/a/29930168

It's two dynamic arrays, of hashes and values, whose sizes grow as powers of two.
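As a sketch of that doubling strategy (generic dynamic-array behavior, not Lua's actual allocator code):

```python
def grow_capacity(cap, needed):
    """Next power-of-two capacity that fits `needed` elements."""
    if cap == 0:
        cap = 1
    while cap < needed:
        cap *= 2   # double on each growth, so appends amortize to O(1)
    return cap
```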

Check out this paper. It talks about the different ways to manage the data structures for text editors.


That's a very nice piece of work. I'm very interested in learning more about how text editors/IDEs work. How hard would it be to add something like a GUI designer?

Really cool project! At what point in the development did you start to use lite as your editor to develop it? I'm sure that was a fun milestone to hit.

I can't say exactly when, but I was writing it in itself before I made the initial commit on the public repo, so at least 6 months ago. Every change I made since then (in addition to the plugins I wrote) would have been written in lite itself.

Beautiful. Would love to see this front a headless nvim.

A previous comment (I can't seem to find it) said that it didn't respect keyboard layout on Linux. Is this true?

Very cool.

Any plans of getting this into the arch repo?

Man, I love Webstorm. I'd love something like this with Intellisense for JS/Node.

What is the most feature-full modern text editor for the console, with syntax highlighting?

GNU Emacs

(If by modern you mean actively developed)

This software seems pretty cool.

He says it's coded in Lua, but it seems to be completely coded in C?

Does it support LSP?

Ooh - I didn't know it was possible to set an SDL window icon like that in a Linux build. My project... updated :)

Fast and small is how they all start out. Feature complete and fast, now that is impressive.

There was a moment, years ago, when I used VS Code to edit text files because it was faster than Sublime and Notepad++. These days I open it begrudgingly.

That's weird. I have never seen Sublime slow down unless there is a buggy extension.

Any chance on a windows arm64 build?

Yay! Someone else on a Windows ARM machine! I have a Samsung Galaxy Book S.

Nice one OP! looks great!

What's the advantage of Lite over Vi or Emacs?

I like Vim (and I've tried Emacs), but there's certainly still room for a small, hackable text editor that can be used with the same intuition that applies to the rest of a modern computer. Vim and Emacs keybindings require a time investment (that some people find worth it) to learn and get comfortable with.

This is a very nice piece of work, and I think very important: it shows that it's very much possible to make slick interfaces without using javascript/web technologies. It's also just as hackable as something like vscode or vim. I hope there's a shift back to native or semi-native applications as opposed to web-based stuff. It's certainly not perfect yet (it has some scaling/rendering issues and is slow to open large files), but is still a pretty great start. My two cents.

It's written in Lua and doesn't use the OS GUI APIs and widgets anywhere. I'm not sure I understand what makes this more "native" than Javascript.

There’s way less abstraction going on in Lua app that has a hand-rolled GUI versus an app built on top of a web browser like Electron. Hand rolling you’re own UI doesn’t make it less native — is Ableton Live not-native?

Or are you drawing the line because Lua is a scripting language?

You seem to be the one drawing the line, saying Electron isn't native while something like this is.

Is a WASM app that renders to WebGL native?

I guess the point is native should probably mean “using the toolkit the OS provides” and we should just use “performant” in most cases.

I don't know how people are finding ways to contradict your comment.

Native means using the native UI toolkit. Or if not about UIs, it can mean compiled to native machine code ahead of time.

People used to care about native vs non-native because of the look and feel.

Now people are complaining about performance, memory use, and startup times. Everyone would be happy using a non native toolkit if it was fast, used little memory and started quickly.

Why can't we be precise with our language? We're engineers, it seems critical to use precise terminology. If you don't like the look and feels let's discuss that, if you don't like the memory use, let's discuss that, if you don't like the distribution or install size, let's discuss that.

People mainly want their autocomplete, accessibility and double-click text to work the same everywhere.

The wxWidgets guys on IRC used to showcase how their buttons, title bars, and menus blended with Windows XP back in the early 2000s, but I'm sure they're past that at this point.

Electron is like a whole framework, say the JVM or .NET, but for JavaScript GUIs... It's definitely heavier: a few thousand lines of C and Lua versus the amount of code behind both Chrome and Node.

just having the UI definition in native code is already a gigantic boost in terms of latency, since you don't have to set up a whole javascript engine and HTML DOM to reach actual UI code - just pass a pointer to the root of your widget tree to your paint function and watch the magic happen.

e.g. compare Telegram made in Qt Widgets with Signal made with Electron - the latter is incredibly slow to come up (and has broken text rendering) while the former is more or less instant on my machine (which is an overpowered i7...): https://streamable.com/m48hg7

fun fact: approaches such as Qt or Flutter can, from what I've seen, be more performant than some OS-provided UIs - in particular, Android is a pretty slow mess, while Qt has seen a lot of work poured into running on <1GHz devices.

Again, you just hand-waved "native". The UI definition here is in Lua; it then calls a backend that renders to the system, and I'm not sure whether that part is C or not.

This setup may be a lot simpler, but it's no different from Electron in that both are scripting languages that eventually use C to render.

Also, you can’t just cherry pick an example. It’s very possible to write pretty fast interfaces using v8. Certainly going straight html/css is slow, but that’s not necessary and VSCode is a great counter example (not crazy fast, but faster than many “native” apps).

I think the HN crowd has invested so much emotional energy into hating electron they literally can’t think straight about it, including just admitting that this is no more native than any electron app.

In fact, if you take native to mean some combination of “using OS APIs” and “accessibility support and behavior similar to the OS”, then electron wins hands down because of the latter part: font rendering, text selection, input controls, copy and paste, everything mostly just works like native. It’s the Lua app that I’d say is far less native. It’s just a bit faster.

Can you make WASM apps using JS toolkits that compile to native desktop yet? That seems like it would be a huge step forward from Electron apps.

Sure, as native as any language that runs in a VM. There is no reason a WASM app is any more or less native than something running on Lua (like in this case), the JVM, or anything else.

People don't use Electron to get a js runtime, they use it to get a web-like runtime. Any javascript (including WASM) app can run without electron in either node or in the bundled runtimes that OSX/Windows/Gnome/Whatever has, but the point of electron isn't to write JS, it's to write web-like code. For that it requires a browser in a predictable version, which is what electron gives you (albeit at a high cost.)

So, obviously, "native" is a spectrum. On OS X, Cocoa > Qt > Java > Wine > Docker + X11 forwarding.

But one thing I think is being missed here is that Lua, Python, and other scripting languages are just a lot more resource efficient than electron, in terms of runtime size, memory usage, cpu usage, etc. A very highly optimized Electron app like VS Code might outperform a more run-of-the-mill Lua app, but that's an outlier.

You're comparing apples to oranges. Lua is a language runtime. Electron is a full browser and a language runtime.

If you want to compare them then compare node.js or JavaScriptCore to lua. Or compare Electron with a lua runtime that has a webkit/chromium engine integration.

I don't especially like Electron, and sometimes like writing lua, but the two are not solving the same problem or competing in the same space.

Actually, for a while Java was just below Cocoa for "nativeness"; the LAF for it was maintained by Apple itself.

I'd heard that before! I kind of debated whether to put Java or Qt first; it came down to the fact that Qt apps just feel closer to native to me on macOS. Although I don't know how ugly they are under the hood.

Edit: What does LAF stand for?

LAF stands for "look and feel", which is basically Java's name for "UI theme". Another fun fact: Cocoa had official Java bindings at some point!

The Java SWT toolkit uses Cocoa native widgets under the hood. So it is a native UI toolkit. That said, like parent said, native doesn't necessarily translate to a user experience of being performant.

The rest of the application, being in Java, will often mean it has your typical JVM high memory usage and slow startup times.

Yes, and there is no reason you can't have a cow as a pet because they have pointy ears and a tail, like cats. It seems you've lost your way, my friend.

Why is using JS or WASM as a building block for an app any worse than Lua, Python, or Ruby?

What is the material difference?

The material difference is that this editor is an order of magnitude smaller than a browser. Sure, you can go "we have plenty of RAM/storage/battery these days", but consuming 10x more of every resource you can think of is quite the hint that you are not using the right tool for the right job, or that you do not employ the right person for the right job.

That's not because of JS or WASM, that's because of the browser. Using JS or WASM does not mean you need to use a browser, you can for example run quickjs, node or whatever other runtime without bundling a browser.

This editor could have been written in js and use the exact same libs for rendering. If it used one of the smaller js runtimes (like quickjs) I'm guessing it would have been similar in size and pretty close in performance.

I don't get why you are conflating using a certain language with bundling a full browser (like electron does). It's two completely different things.

What would the defining difference be between the two? "Forward" along what dimension?

Yep, it’s not Win32, it is a software renderer (according to renderer.c).

What's Ableton written in? Java/AWT?

Their own C++ UI toolkit mostly

If you look at the source, it's not just Lua; it's small amounts of C to embed Lua, and the renderer is an SDL renderer in C. It's a pretty decent architecture. I've been toying with a similar idea the past few days with C + Chez Scheme + SDL.

>I'm not sure I understand what makes this more "native" than Javascript.

Not using a full blown web rendering engine (if that's the case) but OS painting (not necessarily GUI) APIs would make it much more native in my eyes...

I don't see why. Electron apps, for all their faults, use the OS to render text, so they respect things like system-wide settings on scale, you can use your installed fonts, you can use screenreaders and other accessibility technologies.

The comparable JS architecture would use little more than a canvas. Normally, JS manipulates a retained mode render graph, i.e. DOM+CSS. That enormous abstraction stack both empowers JS - you can get a lot done with very little code - and slows it down.

It's more "native" because the stack is much shorter, much more direct. If it was more common to have plain JS interacting directly with input APIs and plain canvas, the argument for parity would be stronger. But it's just very rarely the case.

> It's written in Lua and doesn't use the OS GUI APIs and widgets anywhere. I'm not sure I understand what makes this more "native" than Javascript.

It is not running on a web browser reshaped to act like a text editor.

Its interpreter is not a web browser.

Because JavaScript is not a native language.

I mean, yes, Javascript isn't x86 assembly code, but neither is Lua. Sure, Javascript was designed to be embedded within other applications, but so was Lua. And neither _has_ to be. People seem to be using "native" to mean "lightweight" or "performant," but if that's what they mean, they should just use those words.

JS isn't native. It's not C, it's not C++, it's not Java or even fucking Python.

It's a web scripting language. Which is totally fine, but it will never be native.

Neither is Lua

I 100% agree with you, but the R&D time for something like this would eclipse an equivalent effort in Javascript/web tech.

There needs to be focus on building tooling that enables rapid development of native applications. GTK is a good example. Glade is a perfectly fine editor, but the underlying tooling for GTK is a mixed bag.

> the R&D time for something like this would eclipse an equivalent effort in Javascript/web tech.

I think this is less true when you don't assume people know all about web tech already, IOW there would be less of a difference for someone coming into software dev completely naive of web tech.

> There needs to be focus on building tooling that enables rapid development of native applications.

Fully agree.

The time and resources users waste on such applications eclipse the difference in development. One could be folding enzymes instead of waiting for syntax highlighting to finish rendering in VS Code.

I'm not sure I agree. There needs to be a focus on building "quality everything" and not "quickest to make anything".

Move fast and break things has become the cancer of software engineering.

I completely dislike the "isn't made in js/web/electron tech" argument

Because as you said "It's certainly not perfect yet"
