Nuklear: A single-header ANSI C GUI library (github.com)
402 points by mpweiher 7 months ago | 176 comments



I really do not understand why 'single header' is considered a good thing, but I see this more and more often on libraries. What is the reason all the code is put in the header file?


Sean Barrett (who I think popularized the idea) has a FAQ on this (https://github.com/nothings/stb) where he justifies it by pointing at difficulties with deploying libraries on Windows. Which is a fair point, but by going straight to header-only he skips the step where you can also just distribute a bunch of headers and .C files. The convenience of only having to include a single header is nice for quick weekend projects, but for anything bigger you're dealing with dependencies and build issues anyway.

I get some of the reasons that you would initially start out with a header-only implementation, but when your library grows, you probably want to split it at some point. For me personally, that point would be some time before the header reached 25k (!!) lines.


Some advantages of header-only vs .h/.c pair:

- you can build simple tools contained in a single .c file, and you don't need a build system for this (e.g. just call "cc tool.c -o tool" instead of mucking around with Makefiles or cmake)

- the library can be configured with macros before including the implementation (e.g. provide your own malloc, assert, etc...); with the implementation in a .c file, these config macros must be provided via command line args, which implies a build system (see the sketch at the end of this comment)

- you can put the implementations for all header-only libs used in a project into a single .c file instead of compiling the multiple .c implementation files, this might speed up compilation

Single-header libs have some of the advantages that a proper module system would bring to C and C++ (e.g. extremely simple integration into your own project and increased compile speed), but without all the complexity of implanting a module system into C or C++.
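
To make the second point concrete, here's roughly what the usual stb-style pattern looks like from the consumer side ("mylib.h" and the MYLIB_* names are hypothetical, just the common naming convention, not any specific library's API):

    /* tool.c -- the one file that owns the implementation.
       Build it with just:  cc tool.c -o tool */
    #include <stdio.h>
    #include <stdlib.h>
    #include <assert.h>

    /* stand-in for whatever custom allocator you want the library to use */
    static size_t g_bytes_requested;
    static void *my_malloc(size_t sz) { g_bytes_requested += sz; return malloc(sz); }

    /* configure the library *before* pulling in its implementation */
    #define MYLIB_MALLOC(sz)  my_malloc(sz)
    #define MYLIB_FREE(p)     free(p)
    #define MYLIB_ASSERT(x)   assert(x)
    #define MYLIB_IMPLEMENTATION
    #include "mylib.h"   /* hypothetical single-header library */

    int main(void) {
        /* ... call mylib's public API here ... */
        printf("bytes requested so far: %zu\n", g_bytes_requested);
        return 0;
    }

Every other source file just includes "mylib.h" plainly and only sees the declarations.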


For #1: you already have multiple files in your source, so nothing stops you from including the .c file if you feel that compiling a single C file from the command line without a build tool is needed (although with make you can usually have a single generic makefile that does the same job)

For #3: #1 applies here too (although beware of conflicts between static symbols), but in practice C code compiles fast enough for this not to be a problem

I can see #2 being an advantage, but TBH I think the case where you both need a custom malloc, assert, etc. and don't have a build tool through which you can pass the configuration macros is kinda rare.


Make is so simple that it can actually build your single-file project without a makefile, with fewer characters to type:

make tool

Et voilà.


Errr, I actually have to spend 5 minutes googling how to install make on Windows, then getting mingw and hating myself. So no, no it's really not that simple in general.


https://github.com/tom-seddon/b2/blob/master/snmake.exe - standalone build of GNU Make 3.80. Has a couple of hacks in it to better support Windows-style paths with colons in them. I think this came from the SN Systems PlayStation 2 SDK. Thanks to the GPL, you can have it too.

(I have no idea whether this will successfully run a C compiler for you on Windows, though. I mostly use it to run Makefiles with phony targets as a cross-platform replacement for shell scripts.)


Oh no, Make 3.80...

Why not build a more recent version? This thing is 15 years old!


This sounds like something one should ask people who do not write portable makefiles, rather than porters of Make, which has been standardised for quite a while now.


They added some useful stuff in subsequent versions... my recollection (though this is going back a while now) is that even just 3.81 was a worthwhile upgrade over 3.80!

However I have better things to do than try to figure out how to build GNUstuff myself on Windows.


Good idea! Let me know when you’ve done it... I could do with an updated copy.


Not really fair to blame make for your daft OS.


1) A new dependency to install on every build server and dev's machine, unless you're a 'nix-only shop (e.g. not in gamedev), and sometimes even then (yes, I've had to apt-get install make). At the very least this is a boatload of new setup/install instructions; in practice it also involves coordinating with IT, and heaven help you if you let your less programmery coworkers build from source too.

2) Which version of make? I have detailed instructions at work about which make to use with which makefiles so that all the stars align, the OS vars and other GNU tools line up, and our upstream Makefiles work without modifications (well, usually.)

3) Given the complexity of 2, what I'm actually going to do is automatically invoke make from whatever build system we're using internally.

4) Now that I have two sets of build configuration, I have a continual maintenance burden as conflicting dependencies and incompatible build settings need to be resolved. Since I deal with C++ libs and tools, sneezing too hard will cause incompatible libs. In practice, the Makefile will hardcode CCFLAGs, sometimes CC itself, and things like building with/without RTTI will cause incompatible libs I can't link without more Makefile tweaks.

5) When I get sufficiently fed up with the state of affairs in point 4, I'll integrate the tool into our own build system such that we have a single unified build system again and I don't have to make the same change in 5 places (our codebase + a measly 4 dependencies in this example.)

At this point I'm no longer using make and wondering why I didn't just skip directly to step 5 in the first place. Make is simple - so simple it doesn't address my needs nor solve my problems.


> he justifies it by pointing at difficulties with deploying libraries on Windows. Which is a fair point, but by going straight to header-only he skips the step where you can also just distribute a bunch of headers and .C files

That part was justified by deploying libraries for Windows. Going for one file only was justified by this:

"You don't need to zip or tar the files up, you don't have to remember to attach two files, etc."

--

Unrelated, and probably colored by the fact I first learned to program on Windows, but I don't get the problem. Windows applications usually bundle DLLs with them and keep them locally, unlike Linux applications which typically install dependencies globally through a package manager. I don't think I've ever had a big DLL problem developing on Windows, whereas on Linux I've been occasionally bitten by the "oh this software requires X <= v2.1 but you can't have that since something else is already using X v2.3 and that would be a downgrade".


For some reason, problems with DLL libraries are called "DLL hell", not "SO hell". ;-)

If your software needs library foo.so.x while another application needs library foo.so.y, just put both into /usr/lib. Problem solved.


Easy: Windows already had dynamic loading in Windows 3.x, which was the same model used by OS/2, while UNIX was still trying to figure out how to implement dynamic libraries.

The first versions of dynamic linking on UNIX basically required patching a.out files, before ELF was designed.

So of course the expression for compatibility issues with dynamic libraries came to be "DLL hell"; there were no .so files to talk about.


Thanks for the link.

> Why not two files, one a header and one an implementation? The difference between 10 files and 9 files is not a big deal, but the difference between 2 files and 1 file is a big deal. You don't need to zip or tar the files up, you don't have to remember to attach two files, etc.

I'm still not convinced. I am convinced about a .c and .h -- that's how sqlite does it. Going to just a .h seems to provide negligible benefit and confuses the implementation and interface. It probably confuses a lot of tools too, e.g. source navigation tools, code coverage, code instrumentation, etc.


I don't understand why anyone cares. you grab the .h file and call "load a jpeg"/"draw a button" or you grab two files and call "load a jpeg"/"draw a button"

Are we bikeshedding about this?


This is a matter of adhering to a sound engineering principle, and the approach in question has not been generally considered acceptable. To explain why, one of the ideas behind header files was that they could be freely reused in more than one part of a project; therefore, for example, any executable code appearing in a header file might end up existing in multiple copies throughout the executable (perhaps, depending on the linker).


> any executable code appearing in a header file might end up existing in multiple copies throughout the executable

that's why the keyword "inline" exists.


'Inline' does not prevent duplication of the generated code (in fact, it forces it).


... mostly. Except in the case where the inlined version can be optimized away, which is the best time to use inline but not entirely germane.


> and confuses the implementation and interface

Normally you have the implementation inside an "#ifdef IMPLEMENTATION" block, and the API interface (public structs and functions) outside of the implementation block, and all private functions inside the implementation block are defined as 'static' so they are not visible outside the special implementation source file. In the places where you include the header for normal use, only the public interface is accessible.
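
Schematically, a header laid out that way looks something like this (mylib is a made-up name, shown only to illustrate the structure):

    /* mylib.h -- hypothetical single-header library */
    #ifndef MYLIB_H
    #define MYLIB_H

    /* public interface: visible to every file that includes the header */
    int mylib_do_thing(int x);

    #endif /* MYLIB_H */

    #ifdef MYLIB_IMPLEMENTATION

    /* private helper: 'static', so it stays local to the one translation
       unit that defined MYLIB_IMPLEMENTATION before including the header */
    static int mylib__twice(int x) { return 2 * x; }

    int mylib_do_thing(int x) { return mylib__twice(x) + 1; }

    #endif /* MYLIB_IMPLEMENTATION */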


I'm also not convinced by the value of a single file, especially for semi-large libraries like Nuklear, especially as you could #include the .c file just as well as the .h file if you need. That's as many lines as a #define IMPLEMENTATION.

Nuklear was developed as multiple files and hastily merged at the last minute by its author before being tagged 1.0. I think it was a mistake, especially as he copied the entire stb_xxx files inside. And unlike libraries like stb_image.h, if you want to use Nuklear you need to set up or copy a non-trivial backend anyway, so it's not a matter of include and ready-to-use.

(Dear ImGui, which Nuklear is based on, is 7 files including 4 headers, as you know.)

To me the core value of those libs is that they are highly portable, designed to compile everywhere without any specific build system, and designed to be compiled from source code. Because they are designed as such, problems (such as error/warnings on some setup) are caught easily and fast.

Whereas for bigger libraries you either get the headache of binaries, or the headache of figuring out their build systems and building from source, which frequently fails. And as people don't build from source as often, portability problems aren't caught as often.


OK I get that the big .h file is still logically separated into an .h and .c, like you say.

But what about preprocessing times? If you're including a library from many of your source files, then even if it always hits the #if 0 case, the preprocessor still has to parse the implementation. It matters for distributed compilation too -- more preprocessed bytes have to be sent over the network.

I'm sure there are cases where this overhead is negligible. But I'm just as sure there are some where it's not. Not caring about how much text is in your headers seems like a bad habit to get into. Build times are the main reason I don't use C and C++ more.


The C preprocessor is the least of your worries with regard to compile times. Parsing a #if 0 can be done almost as quickly as a memcmp operation. Even my naive implementation can process an #if 0 at around 600 MBps. Even if you had a gigabyte of text in an #if 0, that's only about 2 more seconds on the compile, provided your disk can manage the throughput.


> Even if you had a gigabyte of text in an #if 0, that's only about 2 more seconds on the compile

Are you sure your implementation is correct? How does it handle this:

    #if 0
    "\
    #endif \
    "
    #endif
I'm not saying gcc's and clang's preprocessors are not really fast, but preprocessing is trickier than most people expect. In particular (as you can see from my example), while skipping over an "#if 0" you still have to split the source into tokens and discard the tokens.


Absolutely, great example! I have looked at quite a few implementations of preprocessors, and they're not simple.

I don't trust anyone who thinks it's simple without actually having implemented it -- and tested it on real code.

One older thread: https://news.ycombinator.com/item?id=10945552


I agree with the sentiment, but it should be easy enough to smash line continuations together while you're searching for that #endif (famous last words)


Is your naive preprocessor implementation complete and standards-compliant C99? Or just some bastardization of it? If not, the 600 MBps bench is as good as useless.

What's the C preprocessor speed of GCC/Clang/MSVC? Any benches?


Some tests on my system: including everything at the root of /usr/include: (intel 4770HQ)

    $ for file in /usr/include/*.{h,hpp} ; do echo $file | awk '{ print "#include \"" $1 "\"" }' >> /tmp/test.cpp ; done
    $ (laborious step consisting in removing headers that don't play nice with others... in the end there are still more than 1425 base headers, which end up including most of Qt4 & Qt5, boost, etc)
    $ time g++ -E /tmp/test.cpp -fPIC -std=c++1z -I/usr/include/glib-2.0 -I/usr/include/qt -I/usr/include/glibmm-2.4 -I/usr/include/glib-2.0/glib/  -I/usr/lib/glib-2.0/include -I/usr/include/gdkmm-2.4 -I/usr/include/gdkmm-3.0 -I/usr/include/gtk-3.0 -I/usr/include/pango-1.0 -I/usr/include/cairo -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/atk-1.0 -I/usr/include/qt4/ -I/usr/include/qt/QtCore -I/tmp -I/usr/include/qt/QtWidgets -I/usr/include/qt/QtGui -I/usr/include/KDE -D_FILE_OFFSET_BITS=64 -DPACKAGE=1 -DPACKAGE_VERSION=1 -I /usr/include/raptor2 -w -I/usr/include/rasqal -I/usr/include/lirc/include -Wno-fatal-errors -I/usr/include/lirc -I/usr/include/gegl-0.3 -I/usr/include/libavcodec -I/usr/include/freetype2 -DPCRE2_CODE_UNIT_WIDTH=8 -I/usr/include/python3.6m -I/usr/include/libusb-1.0 > out.txt

    1,82s user 0,18s system 99% cpu 2,013 total
out.txt (the whole preprocessed output) is 710kloc and 23 megabytes.

With clang:

    0,72s user 0,08s system 91% cpu 0,875 total 

so I'd say that in general, preprocessing time is fairly negligible. A few template instantiations will take much more time to compile.


Well, you have to multiply it by the number of translation units.

Also, I like the point made in the sibling comment by moefh. Are you sure you don't need to tokenize?

Can I see your implementation? I have looked at a few implementations of C preprocessors, and they're not simple.


because dependency management and build systems are hard to get right and not standardized. It's a simple distribution model that works everywhere.


Yes, but why a single header file and not a pair of .c and .h files (personally this is what I do for my small libs)? With a single header file you'd need the user to put it in a specially designated "this is the implementation" .c file anyway.

I can see it for C++, which supports placing code in a header as a language feature, but C requires jumping through awkward preprocessor hoops that can be avoided by using a .c file.


With .h-only files, when you need to include needed functionality, you modify only your own source file.

But with .c files, when you have a multiplatform project, you need to add them to the other "3rd party" tools. Think about the whole zoo: Visual Studio, XCode, Code::Blocks, make, NMake, etc.


> But with .c files, when you have multiplatform project, you need to put it in other "3rd party" tools.

No you don't: just act as if you've written the damn thing yourself. Your tools don't have to know. How does being multiplatform change anything?


It changes everything. Each platform brings its own preferred build system with it, which needs a definitive and explicit list of all source files to compile the project. This is cumbersome to maintain. Header files, on the other hand, are picked up by the compiler automatically after seeing the include statement in the source. So this is actually much less time consuming to maintain.

Also, it shows that building C and C++ is a sad affair.


Why you need to do that with .c files? You can just drop the .c/.h pair into the same place as you would put the single .h file and the rest of your C/C++ files and use it like that.


Let's say you need zip lib functionality in your project. If it is available as a set of .h files, you can include it with one line, `#include "zip.h"`, in your .c file.

But libzip is made of .c files. How would you include it in your multiplatform project? Visual Studio, XCode, etc...

In general it would be great if C had an `#include source "some.c"` feature.

So you would add a single file, zip-lib.c, containing `#include source ...` references to all the needed .c files.

But C/C++ has no modules like other languages do, so we need palliatives like makefiles, etc.
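
The closest approximation that already works today is a wrapper .c that plainly #includes the library's sources, so each IDE project or makefile only ever sees one extra file (the file names below are made up for illustration):

    /* ziplib_all.c -- the single file you add to every build.
       Plain #include of .c files already gives a poor man's
       "include source"; the caveat is that 'static' symbols from
       the different files now share one translation unit. */
    #include "zip_read.c"
    #include "zip_write.c"
    #include "zip_crc32.c"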


I am not comparing many-c-files (like your libzip example) with single header, I am comparing the single header with a pair of c/h files.


Easy, one links to the binary library.

It really feels strange that young devs find it so hard to use a linker.


It feels strange that you're unfamiliar with the headaches that brings for cross platform projects.


Those headers have code that most likely requires linking with other libraries, many of which are not available as source code that one can dump into a header file.

So it appears like a little convenience just for not having to deal with installing binary libraries, at the expense of wasting everyone's time.

The time I waste installing a binary library?

Once per platform for the longevity of the project.

The time I waste building software with header only library?

Every single time I do make all.


Dealing with pre-compiled libs in the Visual Studio ecosystem is more complicated than you make it sound.

For a start there are 4 incompatible standard libraries (DLL vs static, debug vs optimized) and that's already a big headache. Add to the mix a bunch of Visual Studio versions with enough differences that a lib compiled for e.g. 2010 frequently doesn't work with 2015 or vice versa.

To me the nature of those small libraries (regardless of the number of source files) is that they are highly portable and designed to always be compiled from source code to prevent those issues.


The point of single-file header libraries is that generally they are relatively small, and do not rely on linking with other libraries (outside the C runtime).


That might be the point, but it doesn't mirror many use cases.


You are free to split it into .h and .c if you want, every single header library I have used has an IMPLEMENTATION block to make doing so very simple.


Header only means you don't have to modify your build system at all, and C++ build systems nearly all suck. A lot. (Only exceptions I've found are Meson and QBS.)

Though I agree a single header and single .c/cpp file is another good option and solves the compile time issues of throwing everything in the header.


Some libraries do go with a pair because they find the single header too spartan or awkward. Either way works; the C preprocessor and compiler should chew through either style effortlessly, and it's trivial to split the header into a pair or merge a pair into a header by hand if you wish.


Especially when that single header is over 13k lines and reimplements half the standard library. This looks pretty cool and I have an idea for an OpenGL project that needs a GUI, so I'll probably try it out at some point, but it doesn't really seem as lean and straightforward as the blurb implies...


There isn't really that much unnecessary stuff in there. It's more like 20k LoC but half of it is comments. And if included as a header most of it will get stripped away by the preprocessor.

Just for context, in C++ including std::string (after preprocessing) will pull in 25k LoC, std::vector around 20k.

Programming in C++ by the book will average around 35k LoC per every compilation unit. And it will compile significantly slower than C code.

With that in mind, and considering that Nuklear doesn't pull in any dependencies, not even parts of the C stdlib, I think the author has done a good job at being lean and fairly straightforward.

UI libs are usually a huge dependency and a hairy mess. This is not really the case here. It's just one ~1 MB, 20k-line header file.

There is some overhead on preprocessing over having a .c/.h combo, but that's it.


Reimplementing parts of the standard library is important for applications where you don't necessarily have access to a standards-compliant libc (read: Windows), or if you need custom malloc/free implementations. Similar techniques are used in automake and similar build generators to provide non-standard libc function implementations. Precompiled headers can help with the fact that good portions of the header will be #ifdef'ed away.


I think you completely missed the point of what you're replying to.

You can put those reimplementations in a different file. You don't have to expose the consumer of the library to them by jamming the whole thing, both interface and implementation, into a single header.

This is like library design by the folks who mistakenly say that what they do with '#include <stdio.h>' is "including a library". E.g. compiling and linking are the same thing in a lot of people's minds, increasingly so these days as C expertise is diminishing.


Yeah it looks like they are afraid of touching a linker.


I develop on linux primarily but occasionally windows also. The stdlib is totally fine! When did you last compile on Windows? 2011?


All Windows compilers have ANSI C compliant libraries.


Once you grasp the concept of immediate mode it is straightforward. You parse system events and feed them to Nuklear, then you write a single function that draws your interface and updates the application state variables you need. I really like the fact that within just a few lines of code I can have a GUI and focus on the main work...


I've actually seen some projects where it's organized into a bunch of files, but then they export/"compile" to a single header file which is what you include in your project.


I would also like an explanation. The author(s) of Musl (libc alternative) mention on this page that it can potentially invoke undefined behavior: https://wiki.musl-libc.org/alternatives.html

However, having used a few of these "single-header" libraries my main concern is navigating a 9000 sloc header file as opposed to a neatly re-factored version...

An explanation of how undefined behaviour is possible would be welcome.


> An explanation of how undefined behaviour is possible would be welcome.

Short summary: compilers that enable strict aliasing assumptions by default can introduce bugs into otherwise-working code by assuming that a pointer to one type will never refer to a variable of another type. This assumption is perilous but considered worthwhile by many, since it allows the compiler to take advantage of early-out optimizations and CPU pipeline scheduling in ways that would otherwise be unavailable at compile time.

More specifically, compilers may make different decisions about the appropriateness of aliasing optimizations depending on the information they have available. If a function that accepts potentially-aliased pointers exists in its own C file, its visibility to the compiler is limited to what the linker can see. So the compiler may not be as aggressive about its aliasing-safety assumptions as it would be if the function and all of its callers were all present in the same file. Under these conditions, a single-header library can result in "riskier" optimizations that the compiler wouldn't attempt if the same code resided in its own translation unit.

My understanding of Sean's position is that this breaches an implicit contract between the programmer and the compiler of a systems-level language. I agree, and I think the standard should have included a keyword -- or the compiler authors a flag -- to allow people to opt in to these sorts of optimizations rather than requiring them to opt out. It is way too late to change the way C works by default by doing stuff like this.

This is an oversimplification but I think it's what the Musl author is getting at. My guess is that if he knew Sean, he'd be a lot slower to accuse him of being ignorant of any particular aspect of what he's doing. Still, while his dismissal amounts to FUD in the absence of any specific examples of "undefined behavior," there are some good points on both sides of the argument.

My own take, which is unfortunately all too easy to back up with specific historical examples, is that it's inappropriate to do anything that makes C/C++ programming more difficult, more error-prone, or less secure than it already is.


I agree with the last part; someone should ask the Musl author(s) to substantiate this assertion.

> [...] the standard should have included a keyword -- or the compiler authors a flag -- to allow people to opt in to these sorts of optimizations rather than requiring them to opt out.

As an amateur Linux kernel hacker, I see hacks in the kernel that circumvent compiler bugs and unexpected behavior stemming from compiler deviations from the standards. The rants on lkml seem to assign most of the blame to the gcc authors. Here is one of Linus' (many) denunciations of gcc:

https://lkml.org/lkml/2003/2/26/158

But also Andrew T: https://stackoverflow.com/a/2771041 who claims - if I understand correctly - that strict-aliasing was already part of the C89/C90 standard but that compiler authors didn't implement the standard correctly.

> It is way too late to change the way C works by default by doing stuff like this.

One thing that I am confused about in your explanation is this:

> Under these conditions, a single-header library can result in "riskier" optimizations that the compiler wouldn't attempt if the same code resided in its own translation unit.

How exactly does a compiler generate "riskier" optimizations from a single-header as opposed to separate translation units? I fail to understand how after the pre-processing phase, this would be less safe.


> How exactly does a compiler generate "riskier" optimizations from a single-header as opposed to separate translation units? I fail to understand how after the pre-processing phase, this would be less safe.

For instance, the optimizer might conclude that it's safe to either elide an inline function call or compile it very differently if it sees that you're referring to the same object in two separate parameters.

If the function is implemented in a separate file, the optimizer has to assume that the function does something it doesn't know about.


Thank you. Please indulge me if you have time with two more questions:

1. In some libraries (SQLite for instance, and libev too I think) the authors have a script that "amalgamates" all sources into a single translation unit. Their reasoning is that a compiler with full visibility of the source can do global / interprocedural optimization that would not be possible otherwise. Is there any sense in this, if it is practical to do so, for a small to moderate size library?

2. Please tell me what I should read so I can reach the same level of understanding that you have. <not a question>


1. It's almost certain that the speed increase you'd get from intentionally merging a lot of source files into one is less than what you could achieve with a more intelligent profile-based refactoring approach. I wouldn't have a very high opinion of the approach you describe, not knowing any more about those libraries or their authors' motivations.

2. Read a lot of C code written by people like Sean Barrett. You would just pick up a lot of bad habits from mine. :)


Perhaps the musl author is referring to Sean Barrett's comments about strict aliasing on his web page: http://nothings.org/

Fun rant by the owner of the company Sean Barrett works for: http://web.archive.org/web/20160309163927/http://robertoconc... (and while we're on the subject, additional UB links: https://news.ycombinator.com/item?id=14170585)

As for how you navigate these files, you use a text editor that understands that you're working with code and not just a list of chars. Example post about this from last time Nuklear was discussed: https://news.ycombinator.com/item?id=11532001

Personally, for Visual Studio I use DPack (http://www.usysware.com/dpack/CodeBrowser.aspx); in Xcode, I use its methods dropdown (Ctrl+6); in VSCode, I use its methods dropdown (Ctrl+2); and in Emacs I use a thing I wrote ages ago that grabs the imenu names list and presents it in an ido menu in the minibuffer. And there are other options for other editors. The key thing is just to unshackle yourself from PgUp/PgDn/Find/mouse wheel.


I think the real reason for this stance is that it imposes a sense of discipline that wouldn't otherwise be there. When you write a library like this one, you're always tempted to overengineer it... maybe it needs to render .PDFs (or render to them)? Wouldn't it be nice if it supported various OS clipboards? How about supporting both OpenGL and DirectWhatever and that homebrew software renderer that you wrote a few years ago? And wow, it sucks to build complex interfaces by aggregating C function calls. Lua support would only take a few more KLOCs, right? ...

And before you know it, you've reinvented Qt. NTTAWWT, I guess, but if you set out to build a lightweight set of GUI controls for a specific purpose, you probably had a different goal in mind, and that's not the way to accomplish it.

In my own work I frequently include stb libraries in separate .c(pp) files that provide a somewhat higher-level C or C++ interface to the features I need, which are then linked in the traditional way with .obj files and minimal headers. This approach works well, maintaining encapsulation and keeping compile times down while avoiding dragging in a ton of random dependencies.


As Ken Thompson [1] (the one from Ken Thompson and Dennis Ritchie (Unix & C)) was one of the original authors of Go, I think it is safe to use the Golang Origins FAQ [2] as a source to answer this question.

> Dependency management is a big part of software development today but the “header files” of languages in the C tradition are antithetical to clean dependency analysis—and fast compilation.

That might be a reason to limit the use of header files to a single file. In addition, I think it is meant to make the adoption of the library easier.

[1]: https://en.wikipedia.org/wiki/Ken_Thompson [2]: https://golang.org/doc/faq#Origins


It's really a godsend. Include it and you're done.


... Include it in two files, and you are really done!


assuming proper guard macros are included in the header, this shouldn't be a problem


I have mostly seen it done by (young) people who come from web dev. They bring their bad habits together with them. And they usually don't get a very warm welcome for that.

I mean, we've been nicely organising our sources in files, modules and libraries for 30 years, and that's been fine: very easy, logical and giving us many benefits (such as quick partial builds, splitting different operations, isolating problems for a much easier resolution, and so on) but suddenly they show up and it's too difficult for them and they prefer to dump thousands and thousands of lines in a single compilation unit and have all things interfere with each other...


On HN, could you please not conduct arguments like this in the flamewar style?

Making this about "young" people who come from "web dev" is flamebait, which predictably takes the discussion into inner circles of hell. That's just what we don't want on HN.

You've done this kind of thing more than once before. That's bad. If you'd (re-)read https://news.ycombinator.com/newsguidelines.html and take the spirit of this site to heart—intellectual curiosity and thoughtful conversation—we'd appreciate it.


I have not heard this 'criticism' ever before and it's absolutely ridiculous on so many levels to me as a person tangentially interested in gamedev, C, C++, runtimes, system programming, retro stuff, the history of gamedev, etc.

1. Why would a web dev suddenly decide to write in such a spartan way an algorithm heavy library in C or C++? This makes no sense except to somehow tie modern web dev failures like left pad with this style of libraries to portray them in negative light...

2. In gamedev neat organization is/was not some golden long standing standard, i.e. in comments under[0] you can find information that with 3D graphics it was customary to use a single file to let the early compilers do a better job since they didn't optimize across translation units and many old codebases have no problem with thousands of lines of spaghetti per file.

3. These libraries are meant to be compiled once ever per project (except when you clean out all your object files) with a define in a single source file (which any modern compiler should handle), otherwise most of their text is thrown away in a single ifdef which should be trivial and speedy on a modern compiler. The resulting header shouldn't be a problem for any modern compiler and is as light or lighter than an entire include tree would be. Most of these libraries are also single purpose and have very small API and C is light compared to template heavy C++ anyway. If you develop such a library you can keep it split up and ship merged files, i.e. SQLite does that. If anything this might speed up the execution (optimization in single TU), compilation (a single TU for entire library) and linking (no dynamic nor static linking but a single object) but to me that tangential compared to convenience itself.

4. I've never seen such libraries be badly received - quite the contrary. SQLite might be the most widely deployed piece of software actually, and its recommended consumable form is a single pair of files (plus an ext file, for a total of 3 in the official 'amalgamation').

5. I've personally yet to see a 'young web dev' create such a library. If anything it's the grizzled veterans of gamedev, runtimes, compression, middleware, cross-platform work, systems programming, people with university education that includes CS-heavy stuff, etc.

Libraries I use or like that use this (single file or single header + source) style are:

- very popular PD libraries by Sean T. Barrett, thanked in this linked repo's README. He arguably popularized this style, maintains a list of such libraries and has a FAQ and a guide about this style of libraries. He is over 50 (or maybe over 60, he was a teen in the 80s), is a gamedev veteran with software and hardware graphics and runtimes - Thief 1, Iggy (Flash runtime) at RAD, etc. His website is also hardly something that a modern web dev would make[1].

- pugixml by Arseny Kapoulkine - gamedev veteran: PS3, physics, rendering, etc. not a web dev, not young (definitely over 30, unless you happen to redefine young to 'under 40' which makes 0 sense in industry as young as ours but considering your other claims is totally likely).

- SQLite which is shipped and recommended to be used like that (and I think Python embeds it like that) - D. Richard Hipp, Tcl, over 50 now, a total world-class veteran between Fossil, Tcl, SQLite, etc., educated and programming since the 90s; his personal website is also hardly modern web dev.

- ClipperLib by Angus Johnson, a hardcore 2D clipping library; it has single/two file versions in C#, Delphi and C++ (are these the languages young web devs know well now too?) - another person who was a programmer in the 90s and comes from a very enterprise-y background (Delphi Object Pascal, yikes), not a web dev, nor with a web-dev-worthy web page[3] (I don't mind it, I even find it nice to look at, but it's another hint of the lack of supposed web dev blood in him).

Honorable mentions are the two people related to imgui concepts and also thanked in this linked repo's README:

- Casey Muratori - programmer in the 90s already, over 30 or 40 years old, another (along STB) RAD tools related person, worked on Granny (animation for games) and Bink (video compression for games), total veteran in low level stuff, not a young web dev.

- Omar Cornut - maker of Dear IMGUI, was an intern in the 90s, has a very non web dev website[4].

All in all I'm truly baffled about where you got the 'mostly young people who come from web dev' bit, especially since it's no secret many people are inspired to write them by STB himself (and mention that in their READMEs, as seen here) who is none of that. Your comment seems like hearsay, dishonest and disingenuous.

If anything I feel like this is a set up of a Yerevan (communist, post communist, Eastern Europe, Eastern Bloc, [5], etc.) joke I happen to know in Polish (translation mine): "Dear Radio Yerevan, is it true they give out free cars on the Red Square in Moscow?" - "That's mostly true but it's bikes, not cars, they are stolen, not given out, and not in the Red Square in Moscow but in the Prague district of Warsaw".

[0] - http://fabiensanglard.net/duke3d/

[1] - http://nothings.org

[2] - http://www.hwaci.com/drh/

[3] - http://www.angusj.com/delphi/clipper.php

[4] - http://www.miracleworld.net/

[5] - https://en.wikipedia.org/wiki/Radio_Yerevan_jokes


That's all fine and dandy. You can quote all the shiny names you want from your domain and write a novel about them as it seems you are in the mood for doing that; it will not change anything about what I witness on every C forum I follow: it is always:

1. young people

2. coming from Javascript

3. coming from the land of IDEs which blur the difference in nature between files, between external and internal files, between copy inside the project and inclusion/linking from outside, etc.

4. who take a try at C, because they have heard C is hardcore or something like that and they want to taste low-level to become real programmers, not just web devs

5. who offer or request single header libraries because they have no idea it could be otherwise, or because they think linking or writing a basic makefile is an enormous amount of work, because they come from the land of language-dependent automatic package managers.

That's somehow the same horrible movement that ends up giving us projects which ship with all the dependencies inside the package. Well, not all dependencies, just almost all, because there will always be a random couple of libraries you'll have to install yourself, without knowing why... And then those 'internalised' libraries will never get patched, never get updated, never get fixed, that's wonderful.

End of story. I do not even know why, throughout your wall of text, you go on fighting things you pretend I have said, when I haven't. I haven't spoken of this library in particular, I haven't spoken of gamedev (I would not touch that with a 10-foot pole), I have spoken of what I witnessed. And that's a true phenomenon that started only 2 or 3 years ago.


You have answered to:

> I really do not understand why 'single header' is considered a good thing, but I see this more and more often on libraries. What is the reason all the code is put in the header file?

With:

> I have mostly seen it done by (young) people who come from web dev. They bring their bad habits together with them. And they usually don't get a very warm welcome for that. (...)

And now you ask why I've listed all these libraries to prove otherwise? This is what I'm disproving with this list of old, experienced, non web devs writing good libraries that are well received and reliable.

You can't make a sweeping statement about an entire paradigm of writing libraries and then claim you just happened to ignore all of the facts that contradicted it, like the existence of SQLite, STB and the entire gamedev industry. Your logic is on the level of saying that "most nuclear power plants go boom" and claiming you just don't care about any except Chernobyl personally.

Where are your examples of these bad libraries made by young web devs that are not well received? Against the few examples I gave you should have at least 10 by now, but instead you tell me that some people learn C because it's 'hardcore' and that new C programmers struggle with C linking (both of which are obvious facts to any C or C++ programmer).

And there's nothing special with single file libraries about avoiding building or lack of updates, people can (and do) dump entire source trees into their projects to avoid having to build a separate library and it works just as well most of the time. People also do things like ship prebuilt binaries (especially C ones, which have a very stable ABI even on Windows, unlike C++) with their C or C++ project which is just as bad from the update standpoint as embedded source is or it's even worse since it tasks anyone trying to replace these with finding out how to build that library (which between CMake, premake, Scons, even changing compiler defaults - which once bit me when building a project with GCC - and God knows what else might be quite an adventure compared to just dumping the source into the project itself and rebuilding that).


What are you talking about, C doesn't have a concept of a module or a library in the first place.

Nothing about C is easy or logical. C only makes sense in 1965 on a PDP-7. All these hacks just work around the inherent shittiness of C/C++.


Not sure why you felt it necessary to create an account to post this message.

It is not funny and yet it is wrong on all counts.

Do I even need to discuss that C doesn't have libraries???

And a module == a compilation unit.


Developers that don't want to bother learning how to properly use a linker.


Apparently, passing "-lmylib" to a compiler/linker has become a terrible effort indeed.


I'm guessing this deals with accessibility the same way most other 'light' UI toolkits do - by not dealing with it at all?


Correct. I started my own subthread on this subject: https://news.ycombinator.com/item?id=16347902


How do you handle accessibility across Win/Mac/Linux?


Well, you could use Qt...


Golang bindings: https://github.com/golang-ui/nuklear

Never used it myself, but looks interesting considering the current state of go's UI programming options (i.e. fairly limited at the moment)


It's even got bindings for Chicken Scheme: http://wiki.call-cc.org/eggref/4/nuklear


Also some for Common Lisp:

https://github.com/borodust/bodge-nuklear

They are purpose-built for a particular engine though, so I'm not sure how generally usable they are.


This one is a generic but complete thin wrapper over Nuklear. It's not tied to cl-bodge in any way :) https://github.com/borodust/nuklear-blob is an asdf system with loadable compiled shared libraries for it. Hopefully, it will soon be in the main quicklisp distribution.


> No global or hidden state

Does that mean that one can detach an input field, save its state somewhere, create a new one and attach it somewhere else again and have it appear exactly as it was before (i.e. cursor position, selection, focus, etc)?

That's one of the pain points with VDOM frameworks (i.e. need to diff around UI components which shouldn't lose state), so I'm curious if this library gets it right.


There is no "input field", the UI state is entirely on your end :).

This is immediate mode UI. It handles inputs and renders simultaneously. Consider a line from the example in the README:

  nk_slider_float(&ctx, 0, &value, 1.0f, 0.1f);
That line is responsible for both drawing the slider (at a place determined by value, with min = 0, max = 1, step = 0.1) and setting the variable `value` to whatever position the user is currently dragging the slider to.

Or:

  if (nk_button_label(&ctx, "button")) {
        /* event handling */
  }
That simultaneously draws the button and executes the conditional if the button was just pressed.

Immediate mode GUIs are used in games, where you think in terms of drawing frames, each visible for a fraction of a second. When using such a GUI, you usually "clear" it at the beginning of the frame, feed it the input for the current frame, then build up the UI at the same time as you're drawing it[0].

So basically, you're in total control of the state; there is no persistent "input field", you recreate it each frame with a function call, and it's up to you whether or not it's meaningfully the same input field as it was the last frame.

--

[0] - most times, like in this case, "drawing" means you "build up" a list of display commands which then you feed to the renderer at the end - this lets an immediate mode GUI be independent of whatever graphics library you're using.
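
Roughly, one frame ends up looking like the sketch below (the event translation and the final render call depend entirely on your backend, so they're only placeholders; see the repo's README/demos for a real setup):

  /* rough shape of one frame; backend-specific parts are placeholders */
  nk_input_begin(&ctx);
  /* ... translate this frame's OS/window events into nk_input_* calls ... */
  nk_input_end(&ctx);

  if (nk_begin(&ctx, "Demo", nk_rect(50, 50, 220, 220), NK_WINDOW_BORDER)) {
      nk_layout_row_dynamic(&ctx, 30, 1);
      if (nk_button_label(&ctx, "button")) {
          /* the button was clicked this frame */
      }
      nk_slider_float(&ctx, 0, &value, 1.0f, 0.1f);
  }
  nk_end(&ctx);

  /* hand the generated command list to your renderer, then reset */
  nk_clear(&ctx);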


> It handles inputs and renders simultaneously.

I'm not sure that's the point. I think it's a good idea to decouple input and output (render).

The point of IMGUI vs traditional toolkits is, I think, rather that you don't have to communicate through an API to manipulate and sync state with the framework/library. The latter imposes lots of communication overhead and second guessing, and can't offer the same level of control.


When you say "draw a button" in an immediate mode GUI, the widget is drawn for this frame, which has not yet been flushed to the monitor. Yet the draw-button function returns true or false depending on whether it was clicked. But how can that be, when this button has not yet been flushed to the display?

The imgui framework handles the previous frame's events when drawing the current frame.

This implies that the only concept you really need to know when using an immediate mode GUI framework is that, internally, the framework needs some mechanism to identify objects between frames.


> the only concept you really need to know using an immediate mode GUI framework, is that internally to the framework it needs some mechanism to identify objects between frames.

So why not allocate the necessary structures on the caller side, and hand them as arguments to update and render calls? I think that's cleaner. It's the way I've started going forward for my own experiments.

I briefly looked into Dear ImGui before, and I was thinking it had quite a few hacks to find the objects from last frame. Lots of best-effort guessing and workaround schemes to keep the API calls "concise".
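
A minimal sketch of the caller-side-state idea (every name here is made up; it's not any existing library's API):

    /* widget state lives wherever the caller wants it to */
    struct rect { float x, y, w, h; };
    struct button_state { int hot, active; };   /* persists across frames */
    struct ui;                                  /* per-frame input + draw list */

    /* the caller allocates the state and hands it in explicitly, so the
       library never has to guess which widget is "the same one" as last frame */
    int do_button(struct ui *ui, struct button_state *st,
                  struct rect bounds, const char *label);

    /* per frame:
         static struct button_state quit_btn;
         if (do_button(ui, &quit_btn, bounds, "Quit")) handle_quit();       */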


I think overall you and I agree quite a bit.

But as far as I can tell, nuklear is designed to be simple to program the implementation, and simple to program against, at the cost of some features.

https://github.com/vurtun/nuklear/issues/50

Dear ImGui may have some hacks, as you mention, to achieve features such as using Tab to switch the active component (I've never looked at the source), but it's still as simple as can be to use.

I'm not quite sure what you're advocating to do, but if you make a better graphical user interface framework than those other two projects, then I can't wait to see it.


Always happy to find people to connect to and talk about how to actually _make_ infrastructure instead of only using it. (Maybe it's not brainy or sexy enough?)

I've only begun making some UI elements. At the moment I have a keyboard-driven menu, and a slider box and an unused button with misplaced caption that react to mouse clicks. There's a lot more infrastructure I've experimented with and the UI just barely works -- but I'll soon need and work on more 2D UI functionality. One problem is that I'm not sure how to structure 2D rendering, having to deal with modern OpenGL. But I need this experience to inform my UI architecture. I will probably study vurtun a bit.

So if you're interested check back in two weeks (and I'm happy to receive feedback).

https://github.com/jstimpfle/learn-opengl/blob/master/ui.c

https://github.com/jstimpfle/learn-opengl/blob/master/meshvi...


Sounds good. I too have a 3d graphics playground app if you're interested

https://github.com/billsix/hurtbox


> Yet the draw button function returns true or false on whether it was clicked. But how can that be, this button has not yet been flush to the display?

A button being clicked doesn't really have anything to do with whether it's been drawn or not. The relevant information is whether the user is hovering the button with their mouse (mouse position), and whether the mouse button is pressed (mouse button state), both of which are usually provided to the system on a per-frame basis. I've not used Nuklear, but this is how dear imgui works. If the GUI system knows where you are clicking, and must know the position of the button it's drawing (in order to render it), then it should be able to tell if it's being pressed.
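
In generic terms (this is a sketch of the idea, not Nuklear's or dear imgui's actual internals):

    /* 'io' is whatever struct the application filled in with this
       frame's mouse state before building the UI */
    struct ui_io { float mouse_x, mouse_y; int mouse_clicked; /* went down this frame */ };

    static int do_button(const struct ui_io *io, float x, float y, float w, float h)
    {
        int hovered = io->mouse_x >= x && io->mouse_x < x + w &&
                      io->mouse_y >= y && io->mouse_y < y + h;
        /* ... emit the draw commands for the button's rect here ... */
        return hovered && io->mouse_clicked;   /* "clicked" this frame */
    }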


I am unsure about where we disagree. What you quoted from me was a set up for my subsequent, unquoted argument.

Buttons don't fly around the screen over time. If a user clicked on the screen 1.0/144 seconds ago (assuming a 144 Hz monitor), the user likely clicked on the button which is about to be redrawn.


I guess I don't understand why this would be true:

> The imgui framework handles the previous frames events when drawing the current frame.

I'm not terribly familiar with the inner workings of these libraries, though, so I could be wrong. Since you pass in the input state for each frame, it's my understanding that they just process the current frame's "events" during that frame. It would be a little weird if you passed some mouse click event into the system and your Button() call returned true the NEXT frame (I think that's what you're suggesting).


That is what I'm suggesting.

Before I used immediate mode GUIs, I was familiar with using Qt, wx, and gtk. Never did X windows. So I never handled events directly, just indirectly programmed what would happen upon the event occuring.

I had been under the likely incorrect assumption that those frameworks, in order to correctly associate say a mouse click to a button, had a mapping of the previous frame's screen-space coordinates of all widgets, and when a click happened, they associated the click with the previously drawn widget. Because that is what was on the screen when the mouse was clicked.

Immediate mode GUIs baffled me. How could the drawing of a button, which has not yet been flushed to the monitor, return true or false depending upon whether or not it was clicked? The button is not yet on the monitor!!! After thinking about it for 10 minutes: buttons don't fly all over the screen. Whatever the bounding box for the button is this frame, it is likely the bounding box it had last frame. So the association is easy, assuming the imgui framework can keep track of object identity across frames.

Perhaps, I am just ignorant of how qt, gtk, and wx, actually work.


Nice, but what about a double-click, long press, or drag?

Is it just about detecting whether an input button is currently down, or is there some way to handle a gesture happening over time?


At least in dear imgui, the input information is provided every frame, so the system can track it over time. If you want to see if there was a double click, then you can call "IsMouseDoubleClicked", for example.


For python fans, I've been working on pyNuklear

https://github.com/billsix/pyNuklear


My first thought: I wonder if I can invoke this with ctypes from Python? I'll check this out, thanks!


My plan of action is result-driven: incrementally port nuklear's demo overview.c file to python. Each new piece of that requires adding more ctypes bindings.

Help/patches are welcome, as I only have limited free time to implement this.


Nice. I also recommend checking out Nanogui (C++) by Wenzel Jakob: https://github.com/wjakob/nanogui


The reason that I love immediate mode GUIs like Nuklear and Dear ImGui is that, unlike callback-based GUIs like NanoGUI, Qt, GTK, wxWidgets, etc., I do not need to learn how the framework works internally in order to use it effectively.

The lack of callbacks in particular means that I do not need to worry about memory management very much. Or, I should say, I don't have to defer memory management to later.

I own books on wxwidgets and on qt. To be honest, although I've read them fully, I still don't quite know how to use them well, nor how would I teach teammates how to even if I did.

Using a tool like dearimgui or nuklear, I have no books, but 95% of the time I can just copy example code from the demo into my app, modify it slightly for my needs, and call it a day.


Before you use this in your next important application, please read the comment on accessibility that I posted the other day on the LCUI thread: https://news.ycombinator.com/item?id=16329640


Technical follow-up: It seems to me that implementing today's platform accessibility APIs (e.g. UI Automation for Windows, AT-SPI for Unix) would be difficult if not impractical for an immediate-mode toolkit like this one.

These accessibility APIs expose a tree of UI objects, which the client (e.g. screen reader) can freely navigate and query. This basically assumes that there's a tree of UI objects in memory, which is the case for all mainstream toolkits as far as I know. But that's not the case for an immediate-mode toolkit. At least with Nuklear, the content and state of the UI (e.g. the label and current checked state of a checkbox) aren't stored anywhere in the toolkit's data structures. So I guess applications would have to play a very active role in implementing accessibility APIs, much more than they would with, say, Qt or even Win32.


>> This basically assumes that there's a tree of UI objects in memory, which is the case for all mainstream toolkits as far as I know.

This is also the reason why they are a pain in the butt to use, at least from the perspective of an IMGUI advocate. Those people usually have a game development background, so they're used to writing bespoke solutions for everything.

Statelessness is a strength but also a big weakness; I doubt IMGUI will ever catch on in its pure form. However, tools like React provide a fairly similar experience and they do produce object trees usable for screen readers. The missing piece here is something like a lightweight DOM for C/C++ projects.


Your note is good in general, but for this particular case, note this is an immediate mode GUI library. I.e. the kind of stuff that's primarily used in videogames. A typical game (or supporting tool) where this is included would already be not accessible (imagine playing e.g. an RTS with a screen reader).


Point taken. Some games, though, could be accessible if not for the way the UI was implemented. For example, repackaged text adventures. I downloaded the Lost Treasures of Infocom app for iOS, and it was completely useless with VoiceOver. Presumably because the dev at Activision who was working on this packaging used the same kind of UI toolkit that they always use for games. The same goes for turn-based strategy games, if it's feasible to provide text for all the graphics.

Accessibility for games is definitely a much lower priority than accessibility for applications that are used in business, education, and everyday communication/social networking. Still, a blind person can feel excluded if their friends are playing a game that they can't play. Sometimes a given game can't be made accessible without changing the whole game mechanic. But that's not always the case. So, something to consider.


I agree. It's good that you bring this up, and also great you provided some extra context about why you're doing this in another comment.


Accessibility is always important to mention, which is why we should thank you for reminding us. In return, you have to understand that not every next important application will be used by blind people or people with disabilities, and as such there is still room for people to develop new UI toolkits if they so please.

Again, thanks for reminding everybody. People need to think about it. Your advice could be toned down a bit to “please think about your audience, if you expect people with disabilities to be part of it think twice when making your own UI toolkit, or at least factor in the costs to build in accessibility features.”


Thanks for your feedback. I do try to control the tone of my comments on this subject, because I get emotional about it, but I know a vitriolic comment isn't helpful. Still, I find it hard to moderate the scope of my advice. Several years ago, a blind acquaintance of mine briefly lost his job because some developer didn't pay enough attention to accessibility in an application that he was required to use. His employer re-hired him shortly afterward, because they found another role for him. But in the interim, he went through all the feelings associated with losing one's job, and he was lucky to regain employment so soon. I don't ever want anyone to go through that again because some developer used an inaccessible GUI toolkit, even if that toolkit was meant only for games. That's why I'm as vocal as I am here.


> Several years ago, a blind acquaintance of mine briefly lost his job because some developer didn't pay enough attention to accessibility in an application that he was required to use

Can you explain in more detail? As a person ignorant of these types of issues, I'm having a hard time understanding the connection between the lack of accessibility in an app to the firing of a blind employee.


Sure. We're getting off on a tangent now, but I guess that's why people can collapse subthreads.

Here's what happened as I remember it. The blind employee, Darrell, was providing technical support for a client company on behalf of his employer. At some point after he had taken on this role, the client added a requirement that anyone who worked with them had to use the Siebel CRM system in "high interactivity" mode. If I recall correctly from talking to Darrell, the high interactivity version of Siebel at this time (2006) used the Microsoft JVM. That, combined with the fact that the app implemented some custom controls with no regard for accessibility, made it completely unusable to him. So his employer laid him off. Luckily for him, they rehired him shortly after that, but only because they found a whole new role for him. One can question whether the real problem was the app developer, the client, or the employer. But the inaccessibility of the app was a critical factor.


That makes a lot of sense. Sucks to go through such a situation. Cheers for trying to make a better world.


> not every next important application will be used by blind people or people with disabilities

If this application is important to people in general, it wouldn't stop being important just because you happen to be blind or otherwise disabled. If everyone uses a particular walled-garden service for some daily task, blind people need to use it too, just the same as everyone else.


If you develop for an audience where you do not know all of your end users, i.e. the general public, I would argue you can never say it will not be used by blind people (and hence you have to develop for that).

I was talking about developers that build applications for jobs that require sight because of the nature of the job, say a medical diagnostic application or an airplane (simulator). You have to develop for the color blind, but for the completely blind, not so much. When selecting a UI toolkit, I do not have to put that requirement up top.


That's reasonable, as long as a developer, once familiar with a toolkit like this one, doesn't go on to use it for applications with a less restricted audience.


There are a couple domains for which I'm pretty sure sight is required. Photoshop for instance. Is there any way this program could be made accessible?

For all the others, I wonder if a command line interface wouldn't just be better—sometimes not just for the blind.


A decent UI toolkit has accessibility built in, so using such a toolkit and adhering to the platform's design language takes you quite far in not obstructing disabled people. Something like an interactive website, for example, is effortlessly disabled-friendly if one follows good practices.


A good UI toolkit with built-in accessibility helps, but in non-trivial applications, it still takes thought to make sure you use the toolkit correctly. In my experience, there are myriad ways to accidentally mess up accessibility. So I wouldn't use words like "effortless".


People with disabilities don’t need random developers determining a priori whether or not they’ll be able to use the same software as everyone else. People with disabilities aren’t an afterthought; they’re not a bonus; it’s a requirement to produce software that everyone can use — not just people like you. If it were more work to produce software for people of another race than yourself, would you be making the same argument?


Did you read my comment below? I made the comment with application developers in mind who build applications for job types where sight is a primary requirement (pilots, doctors, ...).

I'm the last to say blind people are an afterthought, which is why I thanked the original commenter for his reminder. I merely want to bring some nuance: there are always exceptions.


And to reiterate, I agree with that nuance and will do my best to apply it to future comments.


I’m disagreeing with that nuance. I’m saying that blind people should be able to enjoy the same things that sighted people can, rather than developers deciding for them whether or not blind people are in the target market.


Your strident advocacy is admirable, and that's why I upvoted your first comment.

But we have to deal with reality here. Some tasks simply require normal sight. Believe me, as a legally blind person myself, I wish I could drive, but I understand why I can't and accept it. The same applies for some software applications. So all I can ask is that developers pay attention to accessibility unless there's some reason why it just won't work for their application. I think that's what your interlocutor is getting at as well.


Fair enough. I think we’re pretty close together on that.


Additional advice: "don't use this toolkit if you ever want to sell your product to government".


Which is interesting, because I use a dialysis machine, and these are essentially sold to the government instead of the patient, yet they all use a custom GUI like this one. As far as I can tell, there are no accessibility features.


That's a difficult case, because it's a dedicated device and not a general-purpose computer. Adding a screen reader to such a device could be costly. Of course, these days, even a lot of "embedded" projects use Linux, some embedded flavor of Windows (previously CE, now Windows IoT Core), or even Android. Still, the device would need audio output. And to really cover the bases, for deaf-blind people, it would need to connect to a braille device via USB or Bluetooth.

In my ideal world, all user interfaces would run on the user's main portable computer (currently a smartphone), controlling devices remotely over Bluetooth, NFC, or the like. Embedded UIs would then be a thing of the past, and accessibility would simply be a matter of following the recommended practices for mainstream general-purpose platforms.


TBH this looks mainly oriented towards games where usually the only UI accessibility you have to be concerned about is for people with color blindness.


It's the only accessibility developers usually ARE concerned about, but in many games, it would be possible to do much better. Uncharted 4, for instance, is highly accessible: https://dagersystem.com/all-review-list/disability-review-un...


I think your comments here and in the other thread are thoughtful and well-intentioned and are in general the right comments to make.

However, I must admit that the incidents where UC Berkeley and other universities removed public access to many thousands of hours of videos of online undergraduate lectures severely reduced the extent of my sympathy for accessibility advocates.

It sounds like a no-brainer to be sympathetic to the needs of disabled minorities, but in fact I would suggest that if the community of accessibility advocates wants to be successful in their aims, then they need to absolutely avoid being associated with the intellectual vandalism we saw at UC Berkeley and elsewhere.


What Berkeley did with its lectures is not the fault of accessibility advocates. I'll just point you to this comment from a blind friend: https://news.ycombinator.com/item?id=14580342


I am not sure what you mean by "fault" but it was certainly the result of accessibility advocacy.

I'm sorry but I disagree strongly with your blind friend. E.g.

> this continuing trope of "OMG I didn't realize we needed to make our content accessible! Woe, woe are we!" passed through the realm of disingenuous and into that of pure bullshit a long, long time ago.

They are just videos of a lecture. They can be put on the internet.

EDIT: That sounds like I don't understand his point. I think I do: we've had decades to get used to the idea that content should/must be accessible, and of all institutions, a large public university should certainly be doing that now, in 2017.

However, I just can't make myself accept it. They are just videos of a lecture. They can be put on the internet. Certainly they would be better if accessible, but once up it is vandalism to take them down.


I think I understand why the outcome in this case frustrates you. Taking down the videos didn't do anyone any good, except that it was the easiest way out for Berkeley. Now, nobody has access to that information. So, one might argue that it's stupid that Berkeley was required by law to remove inaccessible content. But if that requirement wasn't there, the law would have no teeth, and other content wouldn't be made accessible. What we all wanted, of course, was for Berkeley to make the lectures accessible, not remove them. The fact that Berkeley removed them is Berkeley's fault, not ours. Your problem should be with Berkeley, not with us.


> So, one might argue that it's stupid that Berkeley was required by law to remove inaccessible content. But if that requirement wasn't there, the law would have no teeth, and other content wouldn't be made accessible.

I'll argue that it's stupid that the law requires Berkeley to make its content accessible. If accessibility is something society has determined is worthwhile (I think we can all agree that it is), then it should be funded by society. Unfunded mandates are no way to run society.


I'm just going to be honest and admit that my instinct is that demanding accessibility is too high a barrier. I want to live in a world where a lecturer is about to start a lecture and someone in the class says "hey can I record this it would be cool for others to see" and the lecturer says "sure!" and the simple phone-captured video goes up on the internet for ever.

Now if someone can make technology to make that accessible, post-hoc, awesome. And if someone can make it easy for the content to be born accessible, even better! But I do not agree with making accessibility required.


I understand your opinion, but I think I must still disagree with it. I do think accessibility should be required, unless it's just not possible in some situation. And content should be, as you said, born accessible.

And not just accessibility for blind people, of course. And here I have to be honest and admit that I haven't lived up to this ideal myself. Several months ago, I gave a presentation to a local Meetup group for software developers on making software accessible to blind people. It's on YouTube. But I haven't yet had it captioned so a deaf person can benefit from it. Nobody's coming after me with a lawsuit in hand. But now that I've thought of this, I should get that video captioned.

Or maybe the presentation or lecture itself is an archaic way to share information. You can see it in the word "lecture". It comes from reading, as in reading out loud. But the people who attend today's lectures can read for themselves. Maybe the best way for content to be born accessible is just to stick to electronic text, that everyone can read in their own way and at their own pace.


Let's say I'm implementing an application blind users should be able to use (not an FPS game or drawing program). Aren't GUIs a pain in the butt even when properly implemented with the right accessibility hooks?

How about writing a command line version of the program? Wouldn't that be much more accessible? I'm not even thinking ncurses, I'm thinking about a REPL, or even non-interactive like grep or awk.


Blind programmers and sysadmins sometimes prefer command-line tools. But most blind people are now thoroughly accustomed to GUIs. An accessible GUI can be very usable. IMO, it's much better than using a screen reader with a screen-oriented terminal application (ncurses or the like). If you're skeptical, I can put together a little demo of one or two use cases on Windows and/or the iPhone.


> IMO, it's much better than using a screen reader with a screen-oriented terminal application (ncurses or the like).

I suspected as much, which is why I was thinking purely REPL/batch stuff. On the REPL end, I was thinking programming languages (Python, Lua, Lisp, OCaml…), and stuff like gdb. On the batch end, I was thinking utilities like grep or awk, compilers, or ImageMagick. In other words, stuff that would work well with a printer.

I personally have a concrete use case in mind: file encryption, and password manager. I intend to write a first version for the command line, because it will be smaller and easier to write. Then I may do GUI versions. I was wondering to what extent the accessibility of the GUI version even matters, considering how simple the command line version would be.


Reposting from a previous thread:

I have not had a chance to try it out for myself yet, but I have recently learned that WebKit includes a direct-to-OpenGL renderer, such that one can skip the Qt or GTK backend and write HTML/CSS/JS that outputs to the framebuffer directly.


Check out these alternative C++ GUI libs running in a web browser using WebAssembly:

- AssortedWidgets: https://shi-yan.github.io/AssortedWidgets/

- Dear ImGUI: https://pbrfrat.com/post/imgui_in_browser.html



Most likely re-submitted after being mentioned 2 days ago.

https://news.ycombinator.com/item?id=16327918#16328912


I used it once to provide a simple GUI for some simulated annealing code I had written in C; it was more straightforward than I expected.


Wow! This is the first time I'm seeing a project with two licenses: MIT and the Unlicense. You can choose whichever you want.


I love it because I can write applications for iOS, macOS, Android, and Windows and keep most of the code identical, and it takes hardly any resources to make that work. Almost too good to be true.


I could make an iOS application that is only 1 MB in size thanks to Nuklear. More and more users will not want to try an app if it feels bloated. Nuklear lets you generate tiny binaries, and I appreciate that.


It looks quite nice too. Last time I saw it I wanted to port it to Go (not bindings) but could not find detailed docs about the implementation. Does anyone know how it does the drawing: OpenGL or native?


It is API agnostic; you have to provide that functionality yourself. There are some examples using various APIs here: https://github.com/vurtun/nuklear/tree/master/demo


Nuklear isn't doing any actual rendering. It depends on other, platform-dependent code to do that. Look into the headers inside the demo folder to get an idea of how that works.
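
If it helps, the backend glue usually boils down to walking Nuklear's command list once per frame and translating each primitive into your own renderer's calls. A rough sketch following the pattern in the demo backends (command and field names from memory, so verify against nuklear.h; my_draw_rect and my_draw_text are hypothetical placeholders for whatever you render with):

    #include "nuklear.h"

    /* placeholders for your actual rendering API */
    void my_draw_rect(float x, float y, float w, float h, struct nk_color c);
    void my_draw_text(float x, float y, const char *s, int len);

    void render(struct nk_context *ctx)
    {
        const struct nk_command *cmd;
        nk_foreach(cmd, ctx) {
            switch (cmd->type) {
            case NK_COMMAND_RECT_FILLED: {
                const struct nk_command_rect_filled *r =
                    (const struct nk_command_rect_filled *)cmd;
                my_draw_rect(r->x, r->y, r->w, r->h, r->color);
            } break;
            case NK_COMMAND_TEXT: {
                const struct nk_command_text *t =
                    (const struct nk_command_text *)cmd;
                my_draw_text(t->x, t->y, t->string, t->length);
            } break;
            default: break;  /* lines, circles, images, scissor rects, ... */
            }
        }
        nk_clear(ctx);  /* reset the command buffer for the next frame */
    }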


Amazing. A great candidate for inclusion on suckless.org [1], in my opinion.

[1] https://suckless.org/rocks


Question: How easy or difficult would it be to make responsive UIs with this?

Also, what would the performance be like as compared to native OS GUI controls?


Is there a good reason for having logic in an H file?


Nope, but less typing of #includes? Easier to send by mail? He seems to use defines to keep function and variable definitions down to a single translation unit in the program it's included in.
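
It's the usual stb-style pattern: the header is declarations-only everywhere except the one translation unit that defines the implementation macro before including it. A minimal sketch, assuming Nuklear's NK_IMPLEMENTATION switch works the way I remember (the file name is just an example):

    /* nuklear_impl.c -- the single .c file that carries the implementation */
    #define NK_IMPLEMENTATION
    #include "nuklear.h"

    /* every other .c file includes the header normally and only sees the
       declarations, so nothing is defined twice at link time:
       #include "nuklear.h" */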


As a small criticism, looking at the project page I get no clue whether this library works on Microsoft Windows.


Yes, it works on Windows.

Hopefully this does not come across as rude, but from the readme:

"It was designed as a simple embeddable user interface for application and does not have any dependencies, a default renderbackend or OS window and input handling but instead provides a very modular library approach by using simple input state for input and draw commands describing primitive shapes as output. So instead of providing a layered library that tries to abstract over a number of platform and render backends it only focuses on the actual UI."

Under the demo folder, there are examples.

Nuklear works with OpenGL 2 and 3, both with SDL2 and GLFW (both of which support Windows), and it works with Direct3D 9 and 11 and with GDI (Windows-specific APIs).


Yes, I did see the readme and I do remember thinking "so does this mean it runs on Windows as well?". It's not unheard of for Linux-specific projects to simply ignore the existence of the Microsoft world.

So I cloned the repository and opened up nuklear.h. If this library supports Windows, then CERTAINLY this file has to include "Windows.h", somewhere, right? Uhm.. no, it does not. Bad sign, I thought.

So, still confused, I thought "let's look into this folder called 'example'". I looked into canvas.c and noticed "#include <GL/glew.h>". So I thought "Oh dear, this is gonna be a bother to compile on Windows, forget it". As luck would have it, I never looked into the folder called "demo".

So at that point I honestly believed that this library was only sort of "semi-supported" on Windows, as often happens with these kinds of projects. You know, the usual "if you're a bit of a masochist and you're willing to muck around with MinGW and download glib.tar.gz stuff, you may get this to work". The fact that the GUI has a definite Linux look reinforced that impression.

What I'm saying is, all my confusion could have been solved by just one line in the readme explicitly stating "this builds on Microsoft Windows using Visual Studio and a DirectX backend, etc...". A little bit of redundancy helps sometimes.


That would be a bit inaccurate, though: the core library doesn't have a backend. The example you found implements an OpenGL backend, for instance.

It doesn't reference any external libraries (aside from, perhaps, the C runtime). IIRC it does not even assume you have a malloc() function by default; you either provide one or use a #define before the include to tell the header it can rely on the standard malloc.
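
For what it's worth, the memory setup looks roughly like this as far as I remember from the header (treat the macro and function names as things to verify against nuklear.h rather than gospel):

    #define NK_INCLUDE_DEFAULT_ALLOCATOR  /* opt in to malloc/free... */
    #define NK_IMPLEMENTATION
    #include "nuklear.h"

    static struct nk_context ctx;
    static struct nk_user_font font;  /* glyph metrics filled in elsewhere */

    static char arena[256 * 1024];    /* ...or supply the memory yourself */

    void init_with_malloc(void) { nk_init_default(&ctx, &font); }
    void init_with_arena(void)  { nk_init_fixed(&ctx, arena, sizeof arena, &font); }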


"C89", "no dependencies". That pretty certainly means "yes".


"no depencies" until you actually want to use it. Because then of course you'll have to link with some non-standard library that can actually paint pixels on the screen. That was precisely what confused me.


Yes, you have Win32, Direct3D 9, and Direct3D 11 backends...


And to add to that, this is one of my favourite things about it: it's very easy to get a tiny .exe with no DLL dependencies thanks to that...


C# Wrapper please!


There should be a link on the project page. I started binding it to C# over Christmas out of frustration that one didn't exist, and someone else quickly picked it up and made a cleaner binding that I recommend more. Google NuklearDotNet for the latter.


[flagged]


Could be a pun on Electron, though.


You can't build a "ANSI C GUI library", because ANSI C has no graphical capabilities.


You can if you rely on the library's users to provide the non-ANSI functionality (like Nuklear does: you have to provide the graphics and input functionality).


I would say the requirements of the functionality you have to provide define what kind of library it is. It's a very different thing if it's a raw framebuffer, an OpenGL context, the Carbon API, a terminal, or something like WASM.


It is a GUI library written in ANSI C. Graphical capability is not a property of programming languages. It is an orthogonal matter.


C, the language, doesn't come with e.g. a MySQL driver either, but I'm pretty sure you can write one in C. If you're going to be pedantic at least be correct.


I didn't say it wasn't written in C.



