Nuklear: A cross-platform GUI library in C (github.com/immediate-mode-ui)
338 points by ducktective 15 days ago | hide | past | favorite | 142 comments

On Mac I finally got the example to run: I had to brew install glew and glfw3 and change the makefile to point to the include and lib paths. A lot of the interactions don't seem native; for example, tap to click doesn't work, I have to do hard clicks on my Mac. Also, dragging doesn't work like in other apps: it stops if I go outside the boundaries of the element. Overall this is pretty epic, and I wonder if serious alternatives to Xcode and JS-based wrappers like Electron are possible.

The tap problem depends on how the input integration is done; I've been running into similar problems in Dear ImGui (and Nuklear too). You need to implement your own little input event queue which tracks button up- and down-events and still creates a "click" even if both happen in the same frame. TBF, it's a common oversight when creating an input polling layer over event-based input, and it usually only shows up with touchpads, because physical buttons are usually too slow to trigger up and down in the same frame.
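A minimal sketch of such an event-accumulating layer (all names are made up; this illustrates the general technique, not Nuklear's or sokol's actual code). Instead of polling "is the button down right now?" once per frame, count transitions, so a tap that begins and ends inside one frame still registers as a click:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct {
    bool down;      /* current physical state */
    int  num_down;  /* down transitions seen this frame */
    int  num_up;    /* up transitions seen this frame */
} ButtonState;

/* Called from the OS event handler for every transition. */
static void on_event(ButtonState *b, bool is_down) {
    b->down = is_down;
    if (is_down) b->num_down++; else b->num_up++;
}

/* Per-frame query: at least one full press-and-release this frame?
   True even when both transitions landed inside the same frame. */
static bool frame_clicked(const ButtonState *b) {
    return b->num_down > 0 && b->num_up > 0;
}

/* Reset the transition counters at the end of each frame. */
static void frame_end(ButtonState *b) {
    b->num_down = b->num_up = 0;
}
```

A naive `if (button_is_down_now)` poll misses the tap entirely when both events arrive between two polls, which is exactly the touchpad case described above.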

Check the sokol-headers integration example here; it should properly detect short touchpad taps:


The "drag doesn't work outside widget boundaries" seems to be a mix of Nuklear-specific and platform-specific problems.

It seems that scrollbars don't lose input when the mouse is not over them, but slider widgets do (that would be a Nuklear-specific problem).

For the problem that an app might lose mouse input when the mouse moves outside the window boundary, this must be handled differently depending on the underlying OS. For instance on Windows you need to call SetCapture() [1]. The platform integration isn't handled by Nuklear itself, so you'll see different problems like this pop up depending on how carefully the platform integration has been implemented (the official samples should be in better shape though I guess).

[1] https://docs.microsoft.com/en-us/windows/win32/api/winuser/n...
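SetCapture() is the Win32-level mechanism; the same idea also has to exist at the widget level inside the UI library. A toy sketch of widget-level capture (hypothetical names, bounds checks reduced to the essentials), where a drag started on a widget keeps feeding it events even after the cursor leaves its bounds:

```c
#include <assert.h>
#include <stddef.h>

typedef struct { int x, y, w, h; int drag_events; } Widget;

/* At most one widget owns the mouse at a time. */
static Widget *captured = NULL;

static int inside(const Widget *wg, int x, int y) {
    return x >= wg->x && x < wg->x + wg->w &&
           y >= wg->y && y < wg->y + wg->h;
}

static void mouse_down(Widget *wg, int x, int y) {
    if (inside(wg, x, y))
        captured = wg;            /* grab input on press */
}

static void mouse_move(Widget *wg, int x, int y) {
    if (captured == wg) {
        wg->drag_events++;        /* drag continues even off-widget */
    } else if (inside(wg, x, y)) {
        /* plain hover handling would go here */
    }
}

static void mouse_up(void) {
    captured = NULL;              /* release capture */
}
```

The scrollbar-vs-slider inconsistency mentioned above is essentially one widget type implementing this capture logic and the other not.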

Based on the gallery of examples it looks to be focused on what you might call "full screen" applications, like games. In these use cases consistent cross-platform interaction is more important than feeling "native" on any given platform.

You may have an issue with your installation of glew and glfw -- I didn't have to make any changes to the makefile to build the examples.

Probably my env paths are not set correctly with brew on my M1 Mac.

Ah -- brew is definitely not mature on the M1 Macs yet as I understand it.

I've been on the fence for a couple of months over this -- I want to grab an M1 Mac for R&D type stuff, but at the same time I want to wait until they release the 16" with a hopefully upgraded M1 of some sort.

You should probably get one anyway, it's incredibly life changing. You can always sell it on eBay after the 16" comes out, they hold on to their value pretty well.

I really should. LOL.

Thing is, I literally just bought this maxed out i9 less than a year ago and it works perfectly!

Single header, no dependencies, cross platform, immediate mode, and the GUI examples actually look good... that’s wild.

Not to mention written in C89. This person knows how to distribute code well. I love libraries like this, I wish we had more.

How relevant is C89 really nowadays? As far as I understand it took MSVC a long time to catch up to C99, but it's been there for a long time now. Is the motivation more related to retrocomputing? I would expect that decades-old compilers for retro platforms might not be reliably C89 conforming either.


Only 7 of the 24 compilers in the list fully implement C99, whereas I would guess all of those compilers implement C89. If you want to write long-lived programs, C89 is a good choice.

Thanks, though those numbers may be a bit misleading. It's probably feasible to have very widely portable programs in C99-except-obscure-pragmas. But MSVC in particular seems to be less far along than I thought. Oh well.

Also, the difference between C89 and C99 is not that big.

It's not just retro computing; new hardware platforms bootstrap themselves by providing a C89 compiler. It's just good form. IMO, C99's additional features are not worth losing nearly universal portability. It's trivial to reimplement most of C99's useful features with a small header if you must, as this library does.
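For example, a few lines of C89 are enough to recover fixed-width-ish types without `<stdint.h>` (illustrative names; Nuklear ships something similar with its nk_* types). The widths below are assumptions about the target platform, verified at compile time with the classic negative-array-size trick, which works in C89:

```c
#include <assert.h>

/* Assumed widths for the target (e.g. 32-bit int); a wrong
   assumption makes the corresponding typedef below an array of
   negative size, which fails to compile. */
typedef unsigned char  u8;
typedef unsigned short u16;
typedef unsigned int   u32;
typedef int            b32;   /* C89 has no _Bool */

typedef char assert_u8_size [(sizeof(u8)  == 1) ? 1 : -1];
typedef char assert_u16_size[(sizeof(u16) == 2) ? 1 : -1];
typedef char assert_u32_size[(sizeof(u32) == 4) ? 1 : -1];
```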

msvc's still not hit C99, but to be fair, they're not aiming for it

(last I checked a year or two ago)

ImGui is nice too. I use it for quick UI hacks in Python :)

Should clarify that you're referring (I presume) to Dear ImGui. "Imgui" in general refers to the immediate-mode GUI paradigm, as opposed to retained-mode GUI.

Yes that one. Though I have only ever seen dear imgui written as ImGui. It’s really sweet. I taught my 14 year old son the basics and he is making all sorts of weird GUIs now lol

It's really nice. I was surprised at how well it works. Downloaded it, ran make inside the example folder, and it built working examples.

Why is single-headerness so important here?

It's not important at all. “single-header-library“ in C basically means #including a .c file in disguise. It's for people who are too lazy to add a source directory to their build system...

As usual, though, an absolute accessibility nightmare. To screen readers, this is just a black box.

Even on the Web, in almost 40 years of developing software, there was one single project where it did matter; the customer was part of a government organization.

Like writing secure software, until this becomes a legal requirement no matter what, it will keep being ignored by the large community of developers.

> until this becomes a legal requirement no matter what, it will keep being ignored by the large community of developers.

"no matter what"? One of the more popular uses for these immediate-mode UI toolkits is to create interfaces for video games, either debug UIs or the menus painted over the game's main scene/content. I don't think people who need a screen reader are going to be playing a twitch-reaction shooter game.

I agree that these shouldn't be used for general use applications, but I strongly disagree with the sentiment that somehow all programs should be forced to work with screen readers. Some domains and applications are primarily visual and don't really translate well to textual interaction. I think these kinds of toolkits work best with those kinds of applications.

You shouldn't use these to write the next Discord or Slack or Firefox or LibreOffice etc. -- but I don't see a problem with making a debug UI or a menu for an action video game with an immediate mode toolkit.

You would be surprised at what people do with screen readers, but yes, that probably won’t work for first person shooters.

Realistically, however, even if this is only/mostly used for games (if so, why doesn’t https://github.com/Immediate-Mode-UI/Nuklear even mention the word game?), many, if not most, of them probably will be turn-based, because it’s much easier to write such games.

Also, accessibility doesn't imply screen reader. It also includes high contrast, larger fonts, tabbing through controls (e.g. to support users with Parkinson's or motor disabilities), etc. Nowadays, a GUI library should pick up settings for those from the OS.

I would think the same thing, but playing video games blindfolded is a thing. Look up ponktus Super Metroid 100% blindfolded and zallard1/sinister1 2p1c blindfolded Punchout for examples.

If it's possible for someone to beat these games blindfolded (at a competitive pace, no less!), then it's possible for a blind person to beat it too.

The old consoles didn't have screen readers, but watching zallard1 play Wii Punchout blindfolded, I can see where they would help. Amazing fights, yet painful to watch when using the menu system.

That’s up to the person or company creating the video game to decide though. It can’t be a legal requirement any more than you could require painters to also produce a 3D model of their work so the visually impaired can enjoy it too.

This seems a bit short sighted. Should we not require accessible entrances to shops, banks, etc in law because it should be up to the people who own the building to decide if it's worth the massive expense of making the building accessible for only a few customers?

A videogame is (almost always) an unnecessary, discretionary waste of time for everyone who uses it. A storefront probably serves a purpose; a bank obviously serves a purpose. It might not be that easy to draw the line but it's obvious that "game" is on one side and "bank" the other.

It absolutely can be a legal requirement, though I am not arguing that it should be one. I was specifically addressing the comment "I don't think people who need a screen reader are going to be playing a twitch-reaction shooter game." Apologies for being unclear.

The people typically playing the games blindfolded had first beaten them countless times with normal sight. You typically will see this on games where core gameplay elements are not influenced by any significant randomness.

Outside of that lack of randomness, I doubt there are any purposeful or even accidental affordances towards blind players in those games.

Woah hang on there buddy, this is an open source project often used for video games. If you want accessibility so bad that you think it should be a “legal requirement” why don’t you write your own accessible GUI tool kit instead of complaining on the internet?

No need to; native GUIs are accessible already.

> Why don't you write your own

I feel there are plenty of reasons to not write your own anything, from prioritizing other projects, not having time, or just not having all of the required expertise. As a student I'm lucky enough to be able to drop basically everything and work on this one cool project idea so long as I get my essays in on time, but that just doesn't seem to be something universally applicable.

> instead of complaining on the Internet

Where else would you prefer they complain? I agree it may be more effective to open an issue on the repo, but does that also count as "complaining on the Internet"? Talking to people is how we change things. In this case, that's contributing to the usability of technology that grows ever more central in our lives.

Personally, I think a focus on accessibility is a great focus to have, and it should be obligatory if it isn't voluntarily universal. There is no reason for our society to provide more opportunities to humans with perfect vision than to humans with impaired vision.

It is a legal requirement in a lot of places.

> Like writing secure software, until this becomes a legal requirement no matter what

For America it already is. The ADA covers software too, afaik.

Are there any options, besides Qt, that are accessibility friendly? I don't consider anything but Qt for this very reason (GTK on Windows/MacOS is not accessibility-aware or whatever you want to call it, to my knowledge.)

And I generally like Qt, but I can see how you might consider it heavy and a bit unwieldy.

I believe that .NET MAUI, when it drops, will be accessibility-friendly from the get-go. Of course, if Qt is heavy for one's needs, then I imagine .NET is, too. And of course there's Electron.

My impression, from looking into this a bit a few months ago, is that cross-platform accessibility is just a huge effort, and may be beyond the reach of projects that lack commercial backing.

What makes a toolkit accessibility-aware? Is it not enough to support native widgets?

Perhaps more technical than you intended, but on Linux it interfaces with screen readers via AT-SPI over DBus (or you can use the older libatk, iirc). Also, as other people mentioned, high contrast, big text, and other specific display settings have to be figured out from the information passed from the OS / display server / WM.

It's mostly for video games, so not really a concern IMO.

It's never openly stated, though, so there are a lot of people who DO use it for other things than games, and then end up with a completely non-accessible app.

There are lots of non-gaming apps that would be near impossible to use as a visually impaired person. Among them: image/video editors, most DAWs (digital audio workstations), etc.

The thing that makes it more unsuitable for video games, imho, is the lack of IME support, which absolutely is a concern for video games, as many gamedevs have found out The Hard Way.

This is the right thing to be looking at. Way more important than screen reader support IMO.

The project README does not say that. Instead, it says:

> It was designed as a simple embeddable user interface for application

so, at least rhetorically, it is more generic/general than for video-game use.

And it should be. Games are played by people with poor eye-sight, or with coordination problems, too.

"Game Maker's Toolkit" does an overview of accessibility in games: https://www.youtube.com/watch?v=RWQcuBigOj0

It really depends on the game. Too many games rely on visuals as the primary form of output in a way that simply cannot be made particularly accessible, outside of the obvious things, which many newer games at least seem to address, to various extents, as shown in the GMTK: colorblind modes, large text[1], on-screen icons for sounds, no quick-time events or other twitch controls.

Having said that, I think The Last of Us 2 needs to be studied more as they appear to have done an amazing job with accessibility, given that a blind player was able to complete it. Maybe other developers can learn from that.

[1] And a personal pet peeve of mine: pixel art games using pixellated fonts. I find most pixel fonts extremely difficult to read. When I posted about that on r/gamedev once I just got yelled at because "artistic vision", but your artistic vision is useless if I can't play the game. At least give me the option to use a normal, crisp font.

I don't know why you are downvoted, as you state a correct viewpoint IMHO. Accessibility is not just about screen reading but also about options that make our contact with the virtual product more pleasant (e.g. resizable UI, non-transparent background under subtitles). There are also many indie, slow-paced games that do not have any kind of voice acting (well, most tycoons), where even Windows Narrator will be enough. Please note, you don't have to be a blind person to use assistive software — there are a lot of people wearing corrective glasses who might use a screen reader.

Will I get scolded for not having implemented accessibility features into my yet-to-be-released game? Pull requests implementing those will be welcome though.

No, you won't. All I'm saying is, games are apps, and they should also acquire accessibility features. Many games do, and we can do better than the current status quo.

"mostly for video games, so not really a concern" should be downvoted, not the people who say that games should think about accessibility.

As usual, though, this statement gets posted with no real references on what actually needs to be done to make a graphical user interface accessible — rules like providing a high-contrast color scheme for people with reduced eyesight. I tried to find resources about accessibility a while back so I could actively work towards it, but couldn't really find anything official, or only things behind paywalls.

Not sure if the commenters posting about accessibility have a disability themselves or are speaking up for people with disabilities. The reality, however, is that in total it is a small percentage of users, and without guidelines, or without having a disability yourself, it is really hard to work towards — especially given the variety and range of disabilities, at least from what I have seen in games, and the lack of good guidelines and accessible interfaces for aiding tools. So requiring a GUI library without funding to have a high level of accessibility, or labeling it worthless, is a somewhat cheap way of judging these libraries.

Good points overall. The reason nobody posts examples or references is because it's way too damn hard right now.

Current accessibility APIs are tightly coupled (conceptually and logically) to APIs that originated in the 80s: Win32 and Cocoa.

If you're only using native widgets it's virtually automatic to have full accessibility. But as soon as you need minimal customisation you have to interact with extremely verbose APIs. Cocoa's API is much better, but MS's Automation API is very arcane and complicated, and even MS employees acknowledge that.

On top of that, even if you're using the APIs as intended, the examples provided by Microsoft are low-quality and are not a good starting point for implementing accessibility on non-native.

Thus, only giant corporations have the resources to fully re-implement accessibility in non-native applications. Google can do it in Flutter and Chromium (Electron). Nokia for Qt. Facebook for React Native. But single developers just don't have the power to do it on their lightweight libraries.

What we need is a smaller lower-level accessibility API that gives accessibility to game engines, non-native UI toolkits, TUIs and command line apps. But I don't think there's much incentive coming from OS makers to do it.

> Current accessibility APIs are tightly coupled (conceptually and logically) to APIs that originated in the 80s: Win32 and Cocoa.

You forgot AT-SPI2: https://www.freedesktop.org/wiki/Accessibility/AT-SPI2/

For accessibility, wouldn't it make more sense to ensure your app has a fully functional CLI? If sight is disabled, why try to use the GUI?


Take for instance a web browser. You could implement one using ncurses etc., like links or lynx. But to the screen reader this is just a terminal window with a bunch of text.

A GUI can help the screen reader know which part to read when, and how it relates to other parts of the GUI.

Also, as a blind person you live in a world of people who see. You cannot expect every developer to take care of and cater to your needs. A GUI toolkit that takes care of this automatically for the developer means the dev can just continue doing their thing, while blind people can benefit from it as well.

Lots of reasons that I can think of, including:

1. Disabled person not the one who controls what gets run.

2. Disabled person not the one who controls what is installed on device.

3. Joint use by two people of the same app.

4. Disabled person wants to be able to ask someone for help who can interact with the app "fully".

Note that screen reader accessibility isn't just for visually impaired people, and even if it were, visual impairment is a wide gamut.

Take features like voice control for example where a motion impaired person can still enjoy visually rich content. https://youtu.be/aqoXFCCTfm4

I've long been curious how non-native UI libraries might integrate with accessibility frameworks.

Does anyone have any good resources?

I have no experience with assistive software, but I suppose non-native UI libraries should be using the native OS accessibility APIs (e.g. [0][1]) or specific API libraries which target NVDA, JAWS, Orca and others (this is the same idea shared in an answer on SO [2]). I guess web browsers and other native GUIs just do that behind the scenes.

[0]: https://en.wikipedia.org/wiki/Microsoft_Active_Accessibility

[1]: https://en.wikipedia.org/wiki/Microsoft_UI_Automation

[2]: https://stackoverflow.com/questions/65168795/make-non-native...

Edit: IAccessible2 [3] seems to be supported on both Windows and Linux. Meanwhile, Apple's AppKit provides specific accessibility-focused UI elements [4]. Flutter also has a similar concept, called Semantics [5].

[3]: https://wiki.linuxfoundation.org/accessibility/iaccessible2/...

[4]: https://developer.apple.com/documentation/appkit/nsaccessibi...

[5]: https://api.flutter.dev/flutter/widgets/Semantics-class.html

Thanks for the links

On macOS one would create a hierarchy of custom accessibility objects representing the application's state via NSAccessibilityElements: https://developer.apple.com/documentation/appkit/nsaccessibi...

Thanks. That makes sense.

Are there any screen readers that can model a system from images? Immediate mode GUIs can deliver multiple frames per second, so it seems like there would be plenty of data points from which to build a dynamic model of the system.

This is possible at least for restricted domains: I've personally written software for image processing for text extraction and application steering from high frequency screenshots of a Windows app that didn't have an automation API.

Also: The DeepMind Starcraft 2 AI plays at a high level in real-time from, AIUI, an image stream.

Has nobody written an AI-based screen reader yet that can work with any software?

iOS has one nowadays, but not sure how well it works.

It works surprisingly well; however, it quickly falls apart if nonstandard controls are introduced. If you have a custom control that does not behave at all like a standard control would, you'll quickly see its limits. Its OCR is also pretty good, but errors do still happen, which make apps unusable. Don't get me wrong, it is actually amazing and works much better than I expected, but it's obviously no match for a proper implementation.

Has anyone made a serious attempt at an immediate-mode frontend to desktop GUI toolkits (as opposed to single-application ones that are rendered by some general-purpose accelerated graphics library)? I've experimented a little bit in the past (https://github.com/blackhole89/instagui/blob/master/main.cpp, whose implementation is based on something pretty close to my understanding of Elm's "virtual DOM" diffing; don't mind the kooky custom macro system), but wound up bumping into a lot of nasty little problems that made hacking on it not a lot of fun.

You might be interested in Mike Dunlavey's Differential Execution demo:



He explains it a bit more here:


Oh, yeah, the Stack Overflow post especially seems to talk about very similar problems to what I have been grappling with. Thanks for the pointer! The code is pretty opaque to me, though; it's been well over a decade since I've last had any interaction with the WINAPI programming style, Hungarian notation and all.

I wonder why he arrives at the conclusion that he needs a full-fledged DSL for what he is doing. I remember that at the time I was working on this, the impression I had was that a lot of my problems would go away if only there were some unique way to identify every distinct invocation of a function (so I could use data along the lines of "you are currently in the 3rd call of Button() in something.cpp"). __FILE__ and __LINE__ get close but don't disambiguate between multiple calls on the same line (and anyhow would need to be baked into the invocations with macro hackery).

Every immediate-mode UI deals with the id namespace issue. What they usually have in common is a hierarchical id namespace where you have an id stack and a child id is derived from the parent id via hashing, child_id = hash(parent_id, widget_type, subid). [1] The subid can be derived from widget arguments that are likely to be unique and stable from frame to frame, e.g. a text edit box's buffer pointer. But there always needs to be a way to provide an explicit subid since the implicit method doesn't always cut it. Alternatively, as long as the default subid is stable you can always make it unique by wrapping the widget (and other nearby related widgets) with an explicit pair of push_id/pop_id calls. You can see an example here: https://github.com/ocornut/imgui/blob/master/imgui_demo.cpp#.... The defaults work most of the time but this is definitely an aspect of immediate-mode UIs that can't just be treated as a hidden implementation detail.

[1] The prevailing use of hashing is a consequence of the popular immediate-mode UI libraries being focused on minimal state per widget. If you already plan to maintain significant state for every widget (as you would need in an immediate-mode interface to win32 controls) you would just do hierarchical interning with sequential id assignment (i.e. the first time a new subid is used with a given parent, it is assigned a sequential global id and put in a table so the association can be memoized across frames) and then hash collisions won't cause id collisions.
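The hierarchical hashing scheme described above can be sketched in a few lines, here with FNV-1a as the hash (function names are made up; real libraries differ in the details):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* FNV-1a over arbitrary bytes, seeded so that hashes chain:
   feeding one hash in as the seed of the next implements
   child_id = hash(parent_id, ...). */
static uint32_t fnv1a(uint32_t seed, const void *data, size_t len) {
    const unsigned char *p = (const unsigned char *)data;
    uint32_t h = seed;
    size_t i;
    for (i = 0; i < len; i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

/* child_id = hash(parent_id, widget_type, subid) */
static uint32_t widget_id(uint32_t parent_id, const char *type,
                          const char *subid) {
    uint32_t h = fnv1a(parent_id, type, strlen(type));
    return fnv1a(h, subid, strlen(subid));
}
```

Two "OK" buttons under different windows get different ids because the parent id participates in the hash, while the same widget keeps the same id from frame to frame.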

Yeah, I essentially copied imgui's ID stack approach for my experiments too. (I've been using imgui for some other projects to great success.) It still seems like a hack; I'm quite surprised that no programming language (I'm aware of) makes it possible to uniquely identify callsites like that. Maybe it hints at a more general blind spot/free real estate in PL design :)

(On the off chance you're curious, I just pushed some previously unpushed updates to that experiment I had sitting around, so now it has labels and text entry too. I guess the real test of the architecture would still be making an alternative "rendering backend" based on win32 widgets or something.)

Focusing too much on the call site is probably misleading, which is why most production-quality immediate-mode UI libraries generally don't rely on __FILE__/__LINE__ or stack walks or anything else like that. The issue isn't just loops. As soon as you wrap code in a function for reuse, the proximate call site no longer has anything to do with the widget ID; in the extreme case, your entire immediate-mode UI is data driven and there's nothing in the code paths that indicates anything at all about the widget IDs.

The real hack is implicit IDs, not the ID stack (which is just a way of implementing a hierarchical namespace like file system paths or URLs). Implicit IDs just work 99% of the time and rarely require manual intervention, so seeking a 100% solution is a tempting siren song. But once you actually start writing UIs like this, the 99% solution is just fine.

> As soon as you wrap code in a function for reuse, the proximate call site no longer has anything to do with the widget ID.

That's a good point.

Two workarounds come to mind:

(a) hash the entire call stack (though that might produce false negatives, i.e. consider two UI elements that should be the same as distinct)?

(b) put the burden on the reusable function to mark itself as such by pushing/popping an identifier of its own call site on the ID stack?
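Option (b) is roughly what the explicit push/pop scheme in existing immediate-mode libraries already does. A toy version of such an ID stack (hypothetical names, FNV-1a hash), where a reusable compound widget disambiguates its instances by pushing an explicit sub-id around its body:

```c
#include <assert.h>
#include <stdint.h>

#define ID_STACK_MAX 64

/* Bottom of the stack holds a root seed (FNV-1a offset basis);
   the rest is filled as ids are pushed. */
static uint32_t id_stack[ID_STACK_MAX] = { 2166136261u };
static int id_top = 0;

static uint32_t hash_str(uint32_t seed, const char *s) {
    uint32_t h = seed;
    while (*s) { h ^= (unsigned char)*s++; h *= 16777619u; }
    return h;
}

static void push_id(const char *sub) {
    id_stack[id_top + 1] = hash_str(id_stack[id_top], sub);
    id_top++;
}

static void pop_id(void) { id_top--; }

static uint32_t current_id(const char *label) {
    return hash_str(id_stack[id_top], label);
}

/* A reusable compound widget: its body is identical at every call
   site, but each instance's children still get distinct ids. */
static uint32_t labeled_ok_button(const char *instance) {
    uint32_t id;
    push_id(instance);
    id = current_id("OK");   /* id of the button inside */
    pop_id();
    return id;
}
```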

> The real hack is implicit IDs, not the ID stack (which is just a way of implementing a hierarchical namespace like file system paths or URLs). The fact that implicit IDs just work 99% of the time and only require manual intervention 1% of the time is a false siren song into letting you believe a 100% solution is desirable (you have to consider the marginal cost of what it would entail).

Well, this is just the standard problem of library design, isn't it? You always have to figure out the appropriate tradeoff between supporting rare cases and making common ones easy. (Of course, you can often do both; in this case, you probably could both give "explicit ID" and "call site ID" versions of each UI element API.)

Yeah, a mixture of implicit and explicit IDs is fine and what everyone does in one form or another. That's what I had in mind as the 99% solution. You don't want the default implicit ID scheme to be too clever or opaque so the programmer can easily diagnose what went wrong when it inevitably does. I was just cautioning against relying too much on ever fancier implicit schemes because of those failure modes; I went down that path a few times when I first started experimenting with immediate-mode UIs years ago.

Yes, I think he finds a DSL useful for that reason: each place where flow of code diverges (FOR, IF) is recorded and serves as the identity of the entire execution path.

The differential execution approach is to "diff" the GUI by "diff"ing the program's control flow. As opposed to something like React where we're diffing the virtual-DOM output of the render().

During a GUI update the program is run comparing its new control flow with the prior run. So for example on an IF statement, if the prior execution took branch A but the new run takes branch B, then we need to run branch A again in "erase" mode to erase everything created by branch A, then run branch B to create the new state.
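A toy model of that erase-on-divergence step, deliberately reduced to a single IF (real differential execution generalizes this to every branch and loop, and the DSL automates the bookkeeping):

```c
#include <assert.h>
#include <string.h>

typedef enum { MODE_SHOW, MODE_ERASE } Mode;

/* Log of GUI actions, so the sequence of show/erase calls is visible:
   "+A " = branch A shown, "-A " = branch A erased, etc. */
static char log_buf[128];

static void branch_a(Mode m) { strcat(log_buf, m == MODE_SHOW ? "+A " : "-A "); }
static void branch_b(Mode m) { strcat(log_buf, m == MODE_SHOW ? "+B " : "-B "); }

/* Which branch the IF took last frame; -1 = no prior frame. */
static int prev_flag = -1;

static void update(int flag) {
    if (prev_flag != -1 && prev_flag != flag) {
        /* control flow diverged from the prior run: re-run the old
           branch in erase mode to undo everything it created */
        (prev_flag ? branch_a : branch_b)(MODE_ERASE);
    }
    (flag ? branch_a : branch_b)(MODE_SHOW);
    prev_flag = flag;
}
```

Calling `update(1)` and then `update(0)` shows A, then erases A before showing B — the "diff" of the program's control flow rather than of its output.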

Not a serious attempt, but I prototyped a single-header library for macOS and will attempt to make it work on Windows too.

example: https://pbs.twimg.com/media/EuQ-6vzXUAca_Ph?format=jpg&name=... (ignore the fact the UI goes from bottom to top, that will be fixed)

A cross-platform GUI library needs consistent layout across all platforms, so it uses a basic flexbox layout algorithm instead of the native macOS constraint system. I think this turns out better anyway because flexbox is a lot simpler in my mind. It decouples the layout algorithm from the UI code, so you don't need to rerun your whole UI whenever the window changes size. It uses pure C++ native calls instead of Objective-C, which is kind of crazy, but the library user never needs to interact with it.

I have used Nuklear in a medium sized hobby project, and it's kinda cool but I will be migrating away from it. I am not aware of it being used in shipped products, unlike Dear Imgui which is a more popular alternative.

The first reason is that it doesn't have a good layout system: it requires manually specifying positions and sizes in quite a few places and doesn't gracefully handle varying font sizes. My GUI code is littered with x*FONT_SIZE to do some kind of scaling to work on my low-res 27" screen and a high-res 13" laptop. I don't need it to do any magic behind the scenes with font sizes (such as when moving from monitor to monitor), just allow setting the GUI font size to a reasonable value and sticking with it, without manually specifying every row height.

The motivation for this probably is that the author wanted to have extremely skinnable UI to be able to do fancy game UI's. However, if you look at a lot of modern game UIs they have a flat "material design" type of UI with flexbox-like layouts, and no fancy skinning, just flat colors and rounded corners. Another motivator is that Nuklear can run on triangle-based GPUs and pixel-pushing APIs. This is not valuable to me.

The second reason is that it's buggy. The developers have gone to extremes to avoid dependencies. This includes things like a home-brewed implementation of printf, which will hang in an infinite loop when you try to print a float with an INF value. With some hacking, I was able to make Nuklear use the standard printf and some other standard library functions instead of the bundled implementations.
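For the curious, the INF hang is a classic failure mode of hand-rolled float printing: a normalization loop like `while (v >= 10) v /= 10;` never terminates because INF/10 is still INF. A sketch of the usual guard (this illustrates the bug class, not Nuklear's actual code):

```c
#include <assert.h>
#include <math.h>
#include <stdio.h>
#include <string.h>

/* Classify the value before any digit-extraction loop. Without the
   isinf/isnan checks, `m /= 10.0` below would spin forever on INF. */
static const char *describe(double v, char *buf, size_t n) {
    if (isnan(v)) { snprintf(buf, n, "nan"); return buf; }
    if (isinf(v)) { snprintf(buf, n, v < 0 ? "-inf" : "inf"); return buf; }
    {
        int exp10 = 0;
        double m = v < 0 ? -v : v;
        while (m >= 10.0) { m /= 10.0; exp10++; }  /* safe: m is finite */
        snprintf(buf, n, "finite, exponent %d", exp10);
    }
    return buf;
}
```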

I appreciate all the effort the developers have put into this, but it's not ready for prime time; expect to spend time fixing bugs if you use it.

I'll probably be moving away from the immediate mode GUI paradigm as a whole (instead of using Dear Imgui or Nanogui), it's a poor fit for the application I'm developing.

Recently I've been seeing a quite polished game UI toolkit used in several published games, like Rise of Industry and Space Haven. Does anyone know what this toolkit is? Something Unity offers or some proprietary library?

In my spare time I've also worked on a retained-mode, flexbox-based UI layout and rendering library that integrates like Nuklear or Dear ImGui. In other words, the GUI library doesn't have any dependencies or side effects. You feed in a tree of GUI elements and the events coming from the windowing system, and as output you get a vertex buffer and a list of triggered events. The concept shows some promise, but unfortunately I don't seem to have the time it takes to turn it into a polished product. I'm happy to talk about it if anyone is interested.
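To make the "pure function" shape of that kind of interface concrete, here is a toy sketch with entirely hypothetical names: an element list goes in along with input events, a vertex buffer and the triggered events come out, and the library touches no global state (bounds checks elided for brevity):

```c
#include <assert.h>
#include <stdint.h>

typedef struct { float x, y; uint32_t color; } Vertex;
typedef struct { int widget_id; } UiEvent;
typedef struct { int id; float x, y, w, h; uint32_t color; } Element;
typedef struct { int num_vertices; int num_triggered; } UiOutput;

/* Toy "run the UI" call: emits one quad (two triangles) per element
   and passes through input events that target a known element id. */
static UiOutput ui_run(const Element *elems, int n,
                       const UiEvent *in, int num_in,
                       Vertex *verts, UiEvent *out) {
    UiOutput r = { 0, 0 };
    int i, j;
    for (i = 0; i < n; i++) {
        float x0 = elems[i].x, y0 = elems[i].y;
        float x1 = x0 + elems[i].w, y1 = y0 + elems[i].h;
        float quad[6][2] = {{x0,y0},{x1,y0},{x1,y1},
                            {x0,y0},{x1,y1},{x0,y1}};
        for (j = 0; j < 6; j++) {
            verts[r.num_vertices].x = quad[j][0];
            verts[r.num_vertices].y = quad[j][1];
            verts[r.num_vertices].color = elems[i].color;
            r.num_vertices++;
        }
        for (j = 0; j < num_in; j++)
            if (in[j].widget_id == elems[i].id)
                out[r.num_triggered++] = in[j];
    }
    return r;
}
```

Because everything the caller needs comes back in the output buffers, the caller owns all state and the library stays trivially embeddable, which is the property being described above.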

Thanks. I came here to ask how it compares to Dear ImGui (which I've used). Nuklear seems to be skinnable to make it more useful as an in-game UI (going by the screenshots), while Dear ImGui is targeted more at built-in tooling. So I was always curious about Nuklear, but I stuck with Dear ImGui because it's more popular and has more stuff available for it.

Your post has made it clear that I should stick with Dear ImGui and figure something else out for in-game UI. I'll probably roll my own simple thing (I'm just playing around for fun, so I can afford to do that).

Wow, reimplementing printf is like pre-heartbleed OpenSSL level of NIH insanity.

Well it seems that the developer wanted to support embedded platforms too.

printf isn't a part of the freestanding C standard, and neither is snprintf. That said, there are portable freestanding snprintf implementations out there with permissive licensing.

There's a demo that works in the browser: http://dexp.in/nuklear-webdemo/

This might provide a slightly better "experience" (it's about 10x smaller, doesn't look blurry on Retina displays, and doesn't suffer from the "touchpad taps are ignored" problem):


The radio and checkboxes are weird. Radio buttons seem intuitively inverted (the empty circle is the selected one), whereas for checkboxes I think the filled square means checked, but I can't say for sure because of the weird radio buttons.

I don't understand... the documentation says it "does not have any dependencies", but it still requires glfw3. Is that right? I cannot compile the examples without glfw3, but maybe I'm doing something wrong.

Nuklear doesn't have any dependencies, but you must provide a backend so it can do its drawing. The examples use a glfw3 backend, but you can provide any other backend; it's just a handful of functions to implement and is usually not too hard.
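To give an idea of what that handful of functions boils down to, here's a per-frame integration sketch (the `nk_*` calls are Nuklear's actual input and draw-command APIs, but `ctx`, `mouse_x`/`mouse_y`/`mouse_down`, and the renderer hooks are hypothetical stand-ins for your own platform code; check the header for exact signatures):

```c
/* Sketch: feed OS input events into Nuklear... */
nk_input_begin(ctx);
nk_input_motion(ctx, mouse_x, mouse_y);
nk_input_button(ctx, NK_BUTTON_LEFT, mouse_x, mouse_y, mouse_down);
nk_input_end(ctx);

/* ...build the UI with nk_begin()/nk_end() here... */

/* ...then walk the generated draw commands and hand each one
   to your own renderer. */
const struct nk_command *cmd;
nk_foreach(cmd, ctx) {
    switch (cmd->type) {
    case NK_COMMAND_RECT_FILLED: /* your_draw_rect(...) */ break;
    /* ...handle the other primitive types similarly... */
    default: break;
    }
}
nk_clear(ctx);
```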

See https://immediate-mode-ui.github.io/Nuklear/doc/nuklear.html...

Interesting. It seems like it would be fairly easy to write a WASM/Canvas2D port of it.

That’s called a dependency.

> Nuklear doesn't have any dependencies, but you must provide a backend so it can do its drawing. The examples use a glfw3 backend, but you can provide any other backend; it's just a handful of functions to implement and is usually not too hard.

The text editor gcc doesn't have any dependency, but you must provide it a backend so it can do its text editing. The example uses a nano backend written in C, but you can provide any other backend like vim or emacs, it's just a handful of functions to implement and is usually not too hard.

The difference is that it's up to you to provide the backend, so presumably you will be using whatever your application is already using, rather than pulling in a new dependency just for Nuklear to work.

It's still a dependency in my book, but it's not the same as saying "this requires DirectX" or whatever: it requires a backend, but you can give it any backend you want, so ideally it wouldn't pull in anything you aren't already using.

I would believe you if Nuklear provided several (say, at least 5) different such backends, easily accessible through different makefiles.

Since it is just a handful of lines, that should be easy to do :) Also, it would be a great way to learn how to add new backends.

But as it is now, it is presented as if glfw3 were a hard dependency.

Dear ImGui did that: https://github.com/ocornut/imgui/tree/master/backends, https://github.com/ocornut/imgui/tree/master/examples You can probably borrow some of that code and fit it to Nuklear.

For these imgui toolkits, a backend is just a few lines of code if you know what you are doing. For the library authors, though, it is quite a bit of work to maintain the different environments and CIs needed to test all these backends.

The library itself doesn't depend on anything, instead it delegates the "platform integration" (rendering and input) to outside code.

The examples somehow need to connect to the underlying operating system, and that's why they depend on GLFW as intermediate layer, but this could also be SDL, or - shameless plug - the sokol headers (https://github.com/floooh/sokol), or (more commonly) a game engine like Unity, Unreal Engine, or your own code.

I think in this context they mean it doesn't have any particular hard dependency, but at some point you need a way to render its output. I guess you could trivially swap out glfw3 for something else, such as using the BIOS video mode directly.

Ah, last I had heard this project had been abandoned. I'm happy to see that a community team has picked it up, dusted it off, and is maintaining it.

Wondering if this was inspired by Wenzel Jakob's nanogui [0] by any chance (because that's what I wanted to do: rewrite nanogui in C so I don't have to rely on a C++ compiler).

The appeal of nanogui to me is that it's built on top of OpenGL.

[0] https://github.com/wjakob/nanogui

About 400 comments worth of prior discussion: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

This looks pretty neat! I briefly looked for immediate mode GUI libraries for Java, just to experiment with. I see this has Java bindings (though 3 years old). I had been planning to try out the JNI wrapper for Dear ImGui, which I have used in C++ before.

Just wondering if anyone has used a decent pure-JVM immediate mode GUI library before, or has suggestions on immediate mode libraries in general to try out?

For what I am thinking of trying it out with, my needs aren't too high so I might just write something simple with libGDX/lwjgl3.

I once did something similar in C++, C# and HLSL: https://github.com/Const-me/Vrmac

With more focus on GPU integration and output quality. Unfortunately, these priorities resulted in more than an order of magnitude more code complexity compared to Nuklear.

For instance, nice but fast anti-aliasing is hard. 16x hardware MSAA is often OK in terms of quality, but too slow on platforms like Raspberry Pi.

How does this compare to Sciter? I understand this is primarily to overlay UIs over full screen applications (like games), but Sciter has that capability as well. Particularly how easy is it to style things, considering Sciter allows CSS?

I did a CTRL+F for "sciter" and usually I find something, but this time, nope.

I'm using this for a game, and it's working great. The layout model takes some getting used to, and if you need complicated layout (I don't), particularly more than one column of widgets, you probably don't want to use the built-in layout (the docs say this and suggest using a constraint solver, which seems like a good approach).

I wonder how this compares with what's currently used in FOSS games. For example... what does Wesnoth use?

More generally - is this really something that's missing? I mean, hasn't something very similar already been implemented (as FOSS, I mean)?

Looks amazing!

The only thing missing for me to use it would be Vulkan support. I'm sure it will land soon and if not I guess I can put in the effort and make a patch/PR. =)

This has been around a few years, it’s stable and used in some AAA-level games IIRC. Very neat project and makes putting a basic UX on a C program easy to do.

Again, this has no integration with accessibility software (AT-SPI/libatk), so please think twice about using it in production software.

Looks cool. How flexible is it? Can this be embedded in an existing C++ application to render a screen?

Edit: I just saw the other comments that it was written in C89 to have better support with C++, so I guess the answer is yes.

Has anyone bundled this inside an Obj-C (/swift) app to see if it's possible to run this on an iPhone?

Curious. Why C89?

C99 support in MSVC was still spotty until very recently. And for single-header libraries in particular it's helpful to use a style of C that can be #included directly into C++ source files without issue.

I thought MSVC supported C99 since like ... 2012? I guess a decade is sort of "very recently" in MSVC terms, but it's been awhile.

A post-release update to VS 2017 was the first version which supported a useful subset of C99. But support isn't a binary thing; I was tangentially involved in diagnosing a critical bug in their implementation of C99 lvalue literals just a few months ago. Minimal repro: https://godbolt.org/z/7rTv1M

Minor nitpick: Most C99 features were "already" in a VS2015 update (initialization features like designated init and compound literals, and a standard compliant snprintf()).

The big missing features were VLAs (those will never be implemented) and _Generic (implemented now in VS2019).

Thanks, I was confusing the VS 2015 update with VS 2017 as the one that got the big pieces like designated initializers and compound literals. I don't think anyone cares about VLAs, but I look forward to being able to rely on _Generic in another 5 years when VS 2019 can be assumed available. :)

VS 2019 supports C11 and C17.

MSVC only supported a somewhat usable C99 subset since a VS2015 update, but it never implemented VLAs, so it can't be called a standard C99 compiler.

_Generic has only been added very recently in VS2019, and MS recently has pledged C11 and C17 support (but it will never be a C99 compiler because VLAs will not be implemented, that's fine though, VLAs should never have made it into the standard).

...also important to note in that context: unlike gcc and clang, the MSVC C++ compiler is stuck at a "sort-of-C95". GCC and clang support most modern C features in C++ as non-standard extensions, but MSVC doesn't.

MSVC++ isn't the only C++ compiler that follows that.

There is no rule that ISO C++ compilers should be C compilers as well.

In fact, I bet they only backtracked on their C compiler position due to WSL, IoT, and devices like Azure Sphere.


Lots of legacy targets have shoddy-at-best support for C99 or newer.

You'll find that a fair amount of single-header C libraries that are explicitly C89-supporting are in use by game developers.

> You'll find that a fair amount of single-header C libraries that are explicitly C89-supporting are in use by game developers.

Is that true? I thought low-level game development was almost entirely in C++. Why is there resistance to using libraries written in the same language as the rest of the program?

C libraries are usually easier to integrate even into C++ projects because there's no "C++ subset incompatibility" (e.g. use of exceptions, RTTI, banned stdlib features etc... - since most of those things don't exist in C in the first place), and "flat" C-APIs are also usually much saner (some C++ library APIs have designed themselves into class-hierarchy- and/or template-hell).

There are easy to integrate C++ libraries like Dear ImGui but they are in the minority, and (for instance) Dear ImGui uses a very restricted set of C++ features (it's essentially "C with namespaces and overloading", so it's not very far removed from a C library).

> I thought low-level game development was almost entirely in C++

And this is the main reason why it is written in a subset of C89: because it is 100% source compatible with C++.

It is not. A very simple example:

    char* bytes = malloc(4);
is valid C89 but not valid C++. You have to restrict yourself to a kind-of subset of C89 if you want it to work with C++.

It's true that C is not a subset of C++, but in reality such implementation details don't matter much as long as the library API is both C and C++ compatible. Compiling a C source file in a C++ project is as simple as using a ".c" file extension; build systems will then compile the file in "C mode" instead of "C++ mode".

This is a header-only library, so the code must be compiled in “C++ mode” to be used in C++ projects, requiring the C++/C89 subset in this case.

It's an STB-style single-file library, which means the implementation is in a separate ifdef block from the interface declarations. This allows compiling the implementation in a different source file (which can be a C file) from the source files that use the library (which can be C++ files).
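Concretely, the split looks something like this (NK_IMPLEMENTATION is Nuklear's actual implementation guard; the file name is made up):

```c
/* nuklear_impl.c -- compiled once, as C, with the implementation enabled */
#define NK_IMPLEMENTATION
#include "nuklear.h"

/* Every other translation unit (C or C++) just does:
       #include "nuklear.h"
   and gets declarations only. */
```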

Here's for example such an "implementation source file" example (using stb_image.h):


...or for a whole collection of related single-file libraries:


Yes, my mistake. I assumed they retained some code in header mode intended for inlining but that was not correct.

What do you mean with legacy target? I mean, this is not going to run on a Nintendo 64. Is it?

This is cool. Has anyone written a C++ "MFC" for this yet?

You mean, having wrapper classes over the C primitives?

Wouldn't this defeat the purpose of having an immediate-mode API ?

It looks cool

C89 yeah :)


Very nice

The name reminds me of the website NuklearPower, and specifically the web comic 8-bit Theater [1] that I followed back in the day.

Off topic, but also a pleasant reminder of something I used to enjoy quite a lot.

[1] http://www.nuklearpower.com/2001/03/02/episode-001-were-goin...

This is only tangentially related but if you are looking for a 2d rendering api for Rust, peep femtovg https://github.com/femtovg/femtovg

Join the discord https://discord.gg/V69VdVu

Quite a few people have been using it to build their own GUI. Add some layout, windowing and event handling and you have yourself a GUI.

This demo UI (https://github.com/femtovg/femtovg/blob/master/assets/demo.p...) is the same one I've seen in the Pathfinder (https://github.com/servo/pathfinder) examples. Is it something common, like the tiger used to demo SVG libraries?

Yeah it’s from nanovg https://github.com/memononen/nanovg. I wonder if it’s even older than that.

Interesting, thanks for letting me know!
