Httpserver.h: Single header library for writing non-blocking HTTP servers in C (github.com/jeremycw)
332 points by jeremycw on Dec 12, 2019 | 163 comments



Question: I thought any given C code could go in either headers or c files (or rather, split between headers and c files), and that the difference was only a build concern. So why wouldn't a given library be available in both forms, unless one of them just makes no sense at all? Put differently: why isn't this just "Minimal library for writing non-blocking HTTP servers in C" which people understand to mean "this might make sense to put in a header"?


"header only" is C-speak for "up and running in 15 seconds" more or less. One major concern with integrating a new dependency in C land is how much of a pain in the ass it makes your build, especially considering a large chunk of C projects do not target typical desktop environments. After the first time you lost a day trying to get some batshit custom build tool to emit the correct link flags for your exotic ARM board, "header only" carries a lot of weight.

On the intermixing of source and header files, it doesn't quite work the way you mentioned. It's not possible to just cut-paste any old public object definition into a header file without at least applying 'static' to it, and that only works where the object truly is private to each translation unit. It's not so easy e.g. to instantiate a global variable shared across the whole program this way, so 'header only' also implies 'almost certainly free of globals', which is a good sign of hygiene.

edit: yikes, in this case, the implication is totally invalid


Hijacking your comment to ask my question below. Apart from the cases where header-only is impossible (anything that requires a singleton, for example) and the other known issues (build times, ballooning of object files and thus executables), is the main remaining reason you don't see as many header-only libraries in C (unlike C++, where they are ubiquitous) simply that people don't do it in C? That is, a social-landscape reason, not a technical one.


It's partly a historical reason as well. There are many projects with very complex Makefiles. Those are hard to integrate into your own project.

Header-only is the other extreme: No makefile at all. With package managers like Conan and vcpkg, using a build-system like e.g. CMake, it's possible to have very simple and short project files which are easy to integrate.

In this regard, C and C++ are a bit behind the times compared to other languages.


Isn't C++ supposed to be getting proper namespaces soon?


I am not an expert in C++, but some argue that it will not help as much as you might think.

https://cor3ntin.github.io/posts/modules/


The solution for the "singleton problem" is to put the implementation code into a separate part of the header behind an #ifdef XXX_IMPL, and only define this before including the header in one place in the project.

This is also the approach used by this library, see:

https://github.com/jeremycw/httpserver.h#example

This approach also doesn't have the 'build time problems', since the (expensive to compile) implementation code is only seen once by the compiler (unlike many C++ header-only libs, which rely on inline code)


The fact that you cannot define templates outside of headers also explains why C++ has a lot of header-only libraries: if you use templates and have no global references, you are most likely already header-only.


Yeah, templates force many C++ libraries to be header-only. I suspect this came first; only after C++ header-only libraries (which had no choice) existed did people realize there were advantages, and then people wrote them that way even when they didn't have to.


Yes, the social reasons are the only ones, besides the technical reasons :)

An additional technical downside in this case is the code complexity in the client code, from the required modal preprocessor flags.


"header only" is more like "I don't want to learn"-speak in regards to deal with native language toolchains.


The native toolchain on my embedded board is a mess. You don't want to learn to deal with it if you don't have to. This comment applies to every embedded toolchain I've ever worked with. Even doing embedded linux with yocto is a mess of a toolchain and it is the best attempt I've ever seen at creating a good embedded toolchain. The problem is just messy.


Usually that boils down to "we don't want to bother with the vendor's IDE", e.g. MPLAB.

I've known C and C++ since 1992 and used them across multiple OSes; I've hardly seen anything that would motivate the fashion of header-only files.


You've never in 28 years lost a single day to a compiler/linker flags/version mismatch and some random tool? I find that difficult to imagine. The horrifying alternative would be that such situations had been encountered but considered justifiable productive work.


No, because I either used what was provided by the OS vendor, or library providers that were selling libraries for the deployment we wanted to do.


Well, you can do:

    int a;
    int a;

    static int b;
    static int b;
And it's a valid way to define a single global/static symbol called `a`/`b`.


They have different semantics. The static one can only be referenced in the current translation unit, whereas the non-static one is a global which can be referenced from any translation unit.

The problem is that in the header-only case defining "int a" means you can only import the header from a single source file. With "static int a" the visibility is wrong. And if you did "extern int a" linking would fail because the symbol is never assigned to a translation unit.


Header-only libraries sometimes require you to define those externs in exactly one "impl" file, which you compile and link to your artifact. Something like this:

    // libfoo_impl.c
    // (or some other single translation unit, such as main.c)
    #define LIBFOO_IMPL
    #include "vendor/libfoo.h"

    // main.c
    #include <stdio.h>
    #include "vendor/libfoo.h"

    int main() {
      foo_inc();
      printf("%d\n", foo_count);
      return 0;
    }

    // vendor/libfoo.h
    #pragma once
    extern int foo_count;
    void foo_inc(void);

    #ifdef LIBFOO_IMPL
    int foo_count = 0;
    void foo_inc(void) { foo_count += 1; }
    #endif
main.c and libfoo_impl.c both include libfoo.h, which declares foo_count with the right linkage, but the variable is only defined once, in whichever translation unit defines LIBFOO_IMPL.

Occasionally you'll find a header library which supports this _IMPL paradigm as an option to avoid inlining its functions in every translation unit that calls them.


You're misunderstanding. I'm saying you can have multiple identical `int a;` across multiple translation units and it will point by default to a single global variable across the program. So you can include a single header from multiple places, and you just get a single shared global variable, if the header contains `int a`.


This is really two files in one. If you have

  #define HTTPSERVER_IMPL
prior to the inclusion, then the header provides the implementation too, otherwise only the interface. Obviously, you must have this HTTPSERVER_IMPL followed by #include "httpserver.h" in only one file in your program.

If someone doesn't like it, they can split the file into two: the header proper and the "impl" part in a .c file.

Just look for the part starting with #ifdef HTTPSERVER_IMPL. Take that out through to the matching #endif and put it into a httpserver.c file. Then put #include "httpserver.h" at the top of that file, and remove all *_IMPL preprocessor cruft from httpserver.c. Now you have a conventional single-header, single-C-file module.


This type of architecture makes more sense when you use it to let the user give arbitrary names and types to the functions, as with e.g. KHASH_INIT(my_t). This version with #ifdef seems a little cargo-culty since the result isn’t as polymorphic as you’d get with klib (which afaict kicked off the header-only craze) so the advantage over a traditional two-file setup is not obvious. (Of course, I don’t know why you need more polymorphism in a small http server!)

As noted above, it avoids version incompatibility by essentially forcing static linking. But most developers would statically link a small two-file library anyway, so it’s a moot point. C isn’t supposed to have training wheels.


>C isn’t supposed to have training wheels.

And then there came Arduino.

We really need to fix the lack of documentation / warnings for these generic C libraries. If you need to understand token-combining preprocessor magic (looking at you, kbtree) due to a lack of proper documentation, these newbies _will_ just try to wing it and _will_ stop poking as soon as it seems to work, regardless of why/how. Say hello to use-after-free and its cousin, the memory leak.


Thankfully Arduino uses C++; there are no training wheels beyond taking advantage of C++'s improvements over C, plus a beginner-friendly IDE.


For instance, I used this trick to create a shared library wrapper (something that calls dlopen, but provides stubs so the program thinks it's just calling a build-time-linked library).

Macros defined the individual functions themselves, in one place, (what are their names, arguments, and which library). A regular #include of these macros provided the declarations to the program. The shared lib wrapper module #defined the definition version of these macros prior to the #include, which expand to the invocation stubs.

It makes sense because the users who maintain it just have one place to add a new function; they don't have to do copy and paste programming to write the invocation wrapper.


To elaborate on the other responses, “single-header” means “one-line import into your project”, and since C doesn’t have a standard module system, the alternative means hooking up the library’s module system to yours, or otherwise separately building and linking it to your project.


> the alternative means hooking up the library’s module system to yours

What "module system"? The alternative would be one .c file and one .h file. It's negligible effort to add these to any project that already has more than one .c file.


I meant build system, such as cmake. I assume the alternative to a single-header library would be lots of .c and .h files, and something to build them into a dynamically-linked library. The alternative, just dumping them into your project, means that once you get into .c files you typically have to start enumerating those somewhere in your own build system, rather than just including them from one of your own headers.


Besides, even for a single translation unit (such as a quick hello world) you could just #include the .c file to avoid setting up a makefile.


Short term convenience

Long term technical debt


How much of that comparison ignores the time sink that is setting up linkers and dealing with that whole mess?

Header only works for some things (particularly things that require no globals, singletons, etc) and that's a valid concern. Saying header-only = long term technical debt always (or even most of the time) feels like an assertion because I've only heard hypotheticals around why it is bad.


Is it really a time sink? You have to have a build system anyway to link together your project itself, assuming it's bigger than a single-file hello world. One extra line in your build system should be the least time consuming part of adding a new foreign dependency--you still have to vet the code for security and figure out how to use it.

Unless you're using C++20 modules, you also have to deal with possibly including the header multiple times (slowing down builds), namespacing, macros potentially defined by the header, or a bunch of external/internal linkage edge cases. You only ever find out about these problems once it's too late to remove that library for a different one.


> One extra line in your build system should be the least time consuming part of adding a new foreign dependency

Could, should. IRL Docker became a thing mostly because of the hassle it is to do so in C.


It depends. But if that header-only library is now in your hands, good luck modifying it without paying the penalty of recompiling everywhere else it's used.

Also, you now have to prefix the hell out of it, since even an anonymous namespace won't work. And that's not all.


It feels more like being lazy, not wanting to learn the proper way of doing it.


The proper way to do it sometimes is horrendously complex or uses a build system different from the one you use. Header only avoids this usually.


Building a plain .a/.lib or .so/.dll is not horrendously complex.


Package management is often hideously complex and a central sink of labor effort in Linux distros. Building it for yourself is easy; in practice, using dynamically linked libraries is the start of all sorts of troubles.


Providing static libraries is also an option.

Generating rpm or deb files is hardly the end of the world, there are even package generators available.


I just watched a cppcon video where Bjarne himself lamented about the complexity of installing many libraries for use in C++.


I’ve personally had great difficulty building these for large libraries, especially when they use a build tool different than the one I’m familiar with (I use cmake for C++ these days but I find cmake itself horrendously complex and difficult to understand even after investing significant time into it).


Header only fixes the lack of build system/module system as part of the language. Some popular build systems include: make, cmake, Visual Studio, Xcode project. Each has a different format for specifying source/header/libraries - and even if you provide project files for all of the above, then what about the other 1,000 build systems?

Header only is nice and easy, dump file in the project, #include and your job is done.


Beside build consideration, you get a bit more performance out of headers-only libraries and single, large c files.

That technique is used in sqlite. https://www.sqlite.org/amalgamation.html

> Combining all the code for SQLite into one big file makes SQLite easier to deploy — there is just one file to keep track of. And because all code is in a single translation unit, compilers can do better inter-procedure optimization resulting in machine code that is between 5% and 10% faster.


Another key use case for header-only libs is for games and projects where you need to do rapid prototyping. A lot of game utilities are distributed this way, e.g. https://github.com/RandyGaul/cute_headers


Even though I dislike C, I have my share of years coding in C, and of all the issues I have with it, lack of rapid prototyping was never one of them.


This is not the case due to the single definition rule.


In a C compiler (cc command), there are only compilation units (.c -> .o). Headers (.h) were bolted on (cpp command) to deduplicate interfaces (forward declarations).

This header-only containing implementations adds its code to every compilation unit and the linker has to be smart enough to deduplicate identical symbols across many compilation units. Linkers (ld command) are provided either by the platform vendor or the compiler vendor, so they may or may not be able to do this. Binutils on Linux is able to do so IIRC.

It's a terrible practice that promotes cowboy coding. It should just use two files, which would cause fewer problems in the real world.

You can also write C without headers or newlines. IOCCC may welcome it but why would you ship it or encourage people to use it?


> This header-only containing implementations adds its code to every compilation unit and the linker has to be smart enough to deduplicate identical symbols across many compilation units.

That's not how this works. It's actually a pretty clever trick, and it effectively behaves as a .h / .c pair of files. See https://github.com/nothings/stb/blob/master/docs/stb_howto.t...


“Clever trick” should be treated as a pejorative.

Knowing why I have to `#define FOO_IMPL` before I include the header, but only from one of my compilation units, is a “clever trick” that now anybody reading or maintaining my code has to tuck away into their brain as well. Along with all the other tricks they’ve had to memorize because some other asshole thought they were being clever too.


C/C++ lack the niceties of npm/cargo/pip/etc, so header-only is like wget something.h and gcc mystuff.c; voilà, magic.

It's a symptom of the tooling that this is even a thing.


There are mature solutions for building C/C++ programs which check for dependencies and provide a high-quality configuration setup. Think of GNU Autotools. It is often blamed for being too complex, but this complexity is essential, not accidental. You do not want to abstract away details a C/C++ maintainer cares about.


It’s not that autotools is complex - rather, most of what it does hasn’t been useful since the nineties, and when it doesn’t work - which is quite often when it’s being used on a platform a given piece of software hasn’t been tested on - it makes fixing the build harder than hand-written makefiles would.


Right, but that still doesn't solve the problem of actually distributing the library in the first place.

As a library author you basically have to wait for distributions to start shipping it, or make users manually install dependencies (and then make the build work with libraries installed into /usr/local).


Right, this is an extraordinary pain point as a library author who writes libraries in C or C++.

That's assuming the distro even packages it up. Because beyond Archlinux, packaging up a DEB is painful. RPM is somewhat less painful, but you still need to contend with N distros. Your lib depends on, let's say, 4 other libraries, perhaps some of the GTK ecosystem of libraries. Good luck!


Autotools doesn't help you solve the issue of you needing version X.Y when Z.X is installed, neither do RPM/DPKG/etc.

When you go to build C projects, there's usually a long list of instructions, per distro/os to set the environment up correctly.

Compare that to a Rust project where you git clone and cargo build to get going. No instructions needed, no insanely complicated esoteric m4 macro system, or ad-hoc programming language (looking at you CMake), and snowflake package manager needed.

Meson is the only C/C++ build tool I know of that solves the issue of building complex C/C++ programs with dependencies sanely. Especially in the case where both your application and your dependencies are unreleased as of yet, as is the case with Gnome and KDE applications as they are developed.


> niceties of npm

Shudder


It seems more like being lazy not to create an RPM/DEB/MSI/PKG/DEPOT,... to me.

Or even plain old tarball with any sort of build script.

And vcpkg, cargo are already good options for those that need cross-os package manager.


npm for C is apt/brew


Not really, because the amount of provided package versions is very limited. You won't get an update unless the package maintainer (who is likely not the library owner) provides an update.


That's a feature, not a bug. I want to rely on well tested, stable, mature libraries that receive security updates and don't change for the next 3 to 5 years.



npm for js is apt/brew for os


"dnf install" or "apt-get install" work fine.


System package managers rarely offer all of the packages you need.


So .. add them? They can even be added automatically, as we do for Perl and Tex packages in Fedora.


You're right. "Header only" libraries are an anti-pattern in C and the fastest way for me to close the tab when considering a new library. Just put it in two files and it's still dead easy to incorporate into a project.


I don't really understand what your criticisms are. There is literally no functional code difference between a single header library and a two file library. Your aversion is as arbitrary as saying you don't consider libraries if the API uses camel case.


Other than not being able to use pseudo-modules in C, using a gigantic header file instead.


Can you compile a header to an object file so that its definitions don't have to be recompiled every time the including file is recompiled?


Eh... Yes?

GCC [0], Clang [1] and others [2] have support for precompiled headers. CMake also has support for precompiled headers [3].

I would say both the tooling and the architecture to do that are well supported.

[0] https://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html

[1] https://clang.llvm.org/docs/PCHInternals.html

[2] https://www.qnx.com/developers/docs/6.3.2/neutrino/utilities... (-pch flag)

[3] https://cmake.org/cmake/help/latest/command/target_precompil...


I can sort-of understand (and still disagree) for header-only template libraries. But in general yes I agree, "header-only" is an anti-pattern for high quality C and C++ projects.


> "Header only" libraries are an anti-pattern in C

says who? care to elaborate?


Separating the interface from the implementation details? Isolating the server code from the module that uses it? Because the implementation is in the header, including it defines a bunch of macros that could easily collide with macros in the larger project.

Not knocking the overall effort here, but yes, "header only" libs are definitely an anti-pattern in real world C. I assume the author considers it a curiosity, a novelty, or something that would only be used in very limited circumstances that make such a design an advantage.


Only one inclusion expands to produce the implementation.


Still suffers from the isolation problem. For example, if you happen to define HTTP_BODY in your own code, it segfaults.

This can be avoided by being careful to only define HTTP_BODY after including httpserver.h, but avoiding this type of thing to worry about is the entire point of interface/implementation separation.


This only causes a problem if you define HTTP_BODY in the same file that you define HTTPSERVER_IMPL when including httpserver.h


Well yes, it's a small isolation failure, but a failure nonetheless. It may seem like a contrived example, but "HTTP_BODY" would be a perfectly reasonable choice of variable or macro name for someone building a tiny embedded web server, and then you have a user (the user in this case being a developer using your library) having to debug your code because its internal assumptions collide with the outside world in a way that's completely avoidable if you just put it in a separate compilation unit.


How is that different from anything else in C or C++? What does that have to do with single-header libraries? Preprocessor isolation is only just now becoming possible, with C++ modules.


It's different because normally your header files don't contain internal implementation details like this that spill out all over everyone who #includes them.


This only includes the implementation if a certain IMPL symbol is defined, so it doesn't 'spill' the implementation every time it is included. That wouldn't work anyway because you would get multiple definitions. Single header files like this and the commonly used stb libraries include the implementation so that there is a single file to keep track of, but the implementation is still explicitly included in only a single compilation unit.

What you are talking about is not an issue in practice.


It's pretty simple: Is HTTP_BODY part of the interface of the module? No. Is the consumer of this library secretly prevented from using HTTP_BODY itself in the module that includes it? Yes. Is this a big deal? For a toy module, no. Would this pass design review in any company I've ever worked in? No. The feedback would look like this:

  // httpserverlib.c
  #define HTTPSERVER_IMPL
  #include "httpserver.h"
  // TODO: look up definition of "compilation unit"


That's just part of C and C++. When people do 'unity builds' to make larger compilation units the same rule applies. I think it is much more about avoiding the preprocessor as much as possible than some sort of fatal flaw with single header libraries. The advantages far outweigh the downsides since minimal dependencies means that you have real modularity. Pragmatically there is much more reuse with single header libraries like this. Search github for how many things depend on stb libraries and would not be possible without them or would end up with spidering dependencies and build systems.

Even windows.h redefines min and max - blaming single file libraries is ridiculous.

Using a single .c file that includes the file and defines the IMPL symbol would negate what you are saying anyway and in fact is a common way to use them.


> Using a single .c file that includes the file and defines the IMPL symbol would negate what you are saying anyway and in fact is a common way to use them.

Ok, well we're obviously arguing semantics at this point, then, because in my opinion, moving the IMPL definition into a single .c file doesn't negate the complaint about single-header libraries, it makes it no longer a single-header library. :D


100% agree with you. I really think adding a source file to your build tool of choice (Make, CMake, Visual Studio project) is a basic fundamental skill that a C or C++ developer should know.


C is already an anti pattern for a library.

It is soon 2020; there are no reasons to use C in new code, because of security and also convenience.

Maybe it is a nice mental exercise (also try brainfuck and malbolge), but not something for production.


h2o [1] is a web server and C library that not only supports http 1 & 2, but also usually tops the TechEmpower benchmarks [2].

[1] https://github.com/h2o/h2o

[2] https://www.techempower.com/benchmarks/


Is H2O being used in production?


I believe Fastly CDN sponsors some of the development [citation needed]. Not sure if that means they use it in production or not, but I've been playing with it on some of my dev servers and so far it's amazingly stable and configurable.


1100+ lines and barely a comment after the initial block.

It also defines some buffer size names that are likely also declared in other libraries, like `BUF_SIZE`, and will need to be `#undef`ed.


To be fair, most of the actual code doesn't warrant comments, and there's no point in writing comments for the sake of writing comments, or in repeating what the code already tells you explicitly.


Echoing the reply saying comments aren't always warranted but making more verbose yet safer macro names might be a good recommendation.


^Fchunked

No chunked support. Well, if ever I use this (and... I do have a pressing use for something like this), that will be a PR I'll send in, probably using async.h[0] to allow handlers to be asynchronous.

Also, it's not fair to compare this server's performance to nginx if this server has no TLS support: you'll have to set up a reverse proxy, and then what will that be?

The lack of URI parsing is not a big deal for me, but it'd be nice.

[0] https://github.com/naasking/async.h


Yes, there are some pieces missing that are on my radar like chunked support, uri/query parsing and sendfile.


> [...] some pieces missing that are on my radar like chunked support, uri/query parsing and sendfile

I like the idea. Keep it up. The "some things" that are missing are really the basics of any HTTP server. Now it's more like something you could build a minimal REST API on. (With the need of a "real" server as proxy)


The goal is definitely not to replace NGINX or Apache or the like. I don't see handling TLS ever being a goal of this project, same with HTTP2 support. Although allowing the user to plug something in to handle HTTPS may not be out of the question.


Then I'd actually consider renaming your project to better convey its target scope. What you have in mind sounds more like a minimal library for writing non-blocking REST APIs or HTTP-based handlers, and not (full-featured, as usually expected) HTTP servers. I really like it, though.


Speaking of sendfile: take a look at io_uring, considering the asynchronous nature of this.


I like this. I've become a huge fan of using Go for web stuff since it's baked in to the language and I can get solutions really quickly. I'm more likely to go through this code though because it looks interesting.


Go compilers aren't anywhere near as universally available as C compilers...


Technically, from what I've heard, if you compile once on one arch, a Go binary will run on any distro. You can also cross compile. There's also gccgo and I'm sure other efforts. A decent number of Go projects are advertised to run on a Raspberry Pi. I think that's good enough.

I've gone as far as compiling Go from my Android phone.

Edit here's someone else running Termux with Go:

http://rafalgolarz.com/blog/2017/01/15/running_golang_on_and...


Wow awesome. I never knew. Thanks!


No problem, I tried out Termux mostly to test it out, and I was impressed. I was able to run a Web Server in Python... and Go was a similar story. You could open it up on your browser and all. I'm convinced all one needs to hack into any system is a good Android device with enough storage, or iOS device with their Linux emulator.


Oh man, I remember that 10 years ago I used my Symbian (Nokia 3230, later N82) smartphones as web server.[0]

[0] http://www.allaboutsymbian.com/news/item/4192_Raccoon_Apache...


I had good luck with microhttpd. I wrote a little json web service as part of an existing C application for an embedded platform. Was very DIY but this is C after all.


It's non-blocking for its own HTTP request/response handling, but will it allow you to integrate with its event loop and register your own events to react to? Can your request handler make its own network connections as a client to other servers in a non-blocking way?


Yeah I was thinking this too, it doesn't seem like it. It's still blocking on http_server_listen.


Any sort of static analysis to argue this doesn't have undefined behaviour bugs? It's cool, I just don't know a lot of cases where I would want/need an HTTP server and wouldn't want something more weathered.


No, but by virtue of being a single header library it will run through whatever static analysis you have set up on your project. If an http server like NGINX solves your problem then, please, use that, because this is not an http server. It's a library for creating http servers.

I would also love if someone wanted to contribute fuzz testing or other static analysis to the project. PRs are welcome.


Neat, but why? Is it just the novelty that it can be done?

I thought it was poor form to have a lot of code in headers. This seems like it'd be better served as a small C http library.


I think the main reason is simplicity. No need to muck around with custom make files if you can just include the header and call it a day.


A makefile will most likely already have a list of .c files. Why not add one more .c file to that list and call that a day?


It traditionally leads to bigger executables and longer compile times. With modern compilers, both of these are non-issues for "most" reasonably-sized projects.


I can see it either way, but for a tiny program this could be handy. For a larger program that already has multiple source files, it seems like it'd be better to keep it in a separate file (to make the git diffs easier to follow, for example).

It would be better if it was namespace'd in some way, at least by using a common unusual prefix for all of the non-local symbols. It wouldn't be hard to clash with the chosen names.


> I thought it was poor form to have a lot of code in headers.

It is mostly because it makes it harder to reuse compiled objects when you make a change and rebuild. For a drop-in library that you're probably not editing, this shouldn't be an issue.


For tiny things like this I'd prefer a single header file over something that's a compiled dependency.


At first glance this doesn't seem to handle large amounts of body data?

Also I find no handling of slow connections.

This is definitely a toy.


Might go well with embedded hardware.


You forgot to add "for Linux/BSD".


"for *nix/BSD/Mac"


Great job. For me, checks all the boxes:

. C (could have been C++... embedded MCU platform)

. compiles cleanly out of the box

. demo is simple, works out of the box, simple to verify

I contrast this with my experience a few days ago with the "Space Invaders in C" post (too lazy to reference it or find the exact title). I do development on my Mac all the time (command-line, C++, clang/gcc). Tried building Space Invaders. One problem after the next. Gave up after about 20-25 minutes.

Cliché as it is, the "out of box" experience matters enormously. Especially for a commercial product, which I realize this isn't.


I now use mostly Lisp languages (or Haskell), but back in the day I wrote C++ books, acted as a C++ trainer and tech lead. My last professional use of C++ was in the 1990s doing entertainment game, VR, and game AI for SAIC, Nintendo, and Disney.

I then went back to using plain C for awhile, and after the complexity of C++, C was so much fun to use.

Anyway, I enjoyed reading through this C header file, and it took me back. But, I am sticking with Lisp because for me it is such a higher productivity language.


I like that it looks like Node.js http module. But I'm too scared to use C with pointers and malloc, so I'll stick to Node.js. I'm however jealous of people that are competent in C, assembly or any other low level systems language.


Pointers to long lived objects on the heap aren't a big deal in C.

Pointer arithmetic and pointers to ephemeral objects is what sucks.


Try it out! It’s not that complicated to pick up and even a basic understanding of how memory allocation works goes a long way to making you a better programmer.


imo it is not memory allocation that is the problem, but pointer ownership, which C neither helps you understand nor implement


What about Windows?

You can get a 3-for-1 with this kind of library by having a libuv implementation.


The point is "add header, done". If you're going to be using libraries then there are better options to choose from already.


brundolf is asking a similar yet different question, so I'll ask mine: I've heard about modules coming to C++. What are the concerns with headers exactly? I'm aware of the issues with duplication in object files and ballooning build times, but are there other issues? Is it then something that primarily affects very large code bases?


I think different people will have different answers. The biggest complaint is that this slows down builds, of course, but it also requires hacks to avoid re-including the same header twice when you have complex header hierarchies [1][2], bleed-through of macros and pragmas, and of course you have to write forward declarations for everything you intend to publicly expose, as opposed to using a "public" keyword.

Honestly I dislike them because they're a redundant pain in the ass. I tried other solutions before (at one job I had 15 years ago we had automatic header code generation, ooof), and although modules are not that ready for primetime [3] (dependency resolution needs to be done by an external program), I think they'll be the bee's knees.

[1] https://en.wikipedia.org/wiki/Include_guard

[2] (some people will be quick to note that can be solved by pragmas)

[3] https://vector-of-bool.github.io/2019/01/27/modules-doa.html (but he wrote a follow up: https://vector-of-bool.github.io/2019/03/04/modules-doa-2.ht... )


Not a C++ monkey but headers in C/C++ have the problem that you can't parse them independently of the source file they are included in because of the predecessor. Which means you need to re-parse them each and every time they are included in a project.


I think you meant preprocessor, not predecessor, lol. I had to reread that a time or two. I knew what you were trying to say and I still got confused by it.


d'oh!


However, every modern C compiler and tools like cmake allow you to use precompiled headers to avoid that step if it is what you want to do.


Securely?


Surprised to see no CI.


This kind of thing is the reason that header files should die and a new module system should emerge in the world of C/C++. "Header-only" should stop being a novelty. Distributing or consuming libraries should be much more straightforward and easy as with modern languages. I'm not sure C++20 modules are the solution, but this certainly isn't.


A good packaging system for C/C++ would go a long way. CPAN, php pear, pip, go get, etc., etc. all have at least decent package management, support for third-party repos, and ways to add a package without using the manager for things you've manually downloaded / coded yourself.

Trying to get multiple C packages from third parties all working together is rough compared to other languages. Rust has me a little spoiled, I guess.


I think it's easier for Rust (and other languages such as D, Crystal), because they started from scratch. Cargo is accepted as the way to build and distribute stuff on Rust, full stop. There are no endless debates whether Cargo should be used, or something else.

Meanwhile, in C/C++ world, build systems are a mess. You have so many tools. Some folks just use what IDE provides, e.g. MSVC solutions. Some people use CMake. Some people have their handcrafted Makefile solutions. Sure, it works on their platforms, but it's very hard to make it portable. With Cargo and similar, you just go "cargo get" (don't know the exact command, don't use Rust) and you can expect the packages to download and build as needed.


Conan and vcpkg are setting up as the two main winners of C++ package managers.


"php pear", its composer now FWIW.


What’s wrong with apt?


What's wrong with apt is that you can't get what YOU want: you can only get whatever the Debian people decided makes sense for one specific version of the OS - which usually means a 2 years out of date version of the library.

If you need a library of a version other than the Debian approved version, you are back to manually downloading source tarballs from sourceforge and figuring out their arcane build system.

Also, apt doesn't let you have two versions of the same library installed. For SOME things, they have more than one version package, but in general, you can't have STABLE_VERSION installed for your day-to-day OS usage and DEVELOPMENT_VERSION installed for your development needs. You only get one or the other (this isn't exclusive to apt - all Linux package managers do this, AFAIK).

Anyway, contrast that situation with pip (python), where you can just grab whatever version you want, have the package manager solve the dependencies for you, and slap it all into a virtualenv where it won't interfere with your system-level libraries. Heck, you can even grab versions straight from git, and it will (try to) solve the dependencies (if any).

It's a WHOLE different level of convenience.
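For the sake of comparison, the whole pip workflow described above fits in a few lines (package names and versions here are purely illustrative):

```shell
# Create an isolated environment; nothing installed here touches the
# system-level site-packages.
python3 -m venv demo-env
. demo-env/bin/activate

# Inside the venv you can pin exact versions, or pull straight from git:
#   pip install 'requests==2.31.0'
#   pip install git+https://github.com/psf/requests.git

command -v pip    # now resolves to demo-env/bin/pip, not the system pip
```

Throw the env away when you're done and your system is untouched; there's no apt equivalent of that.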


First of all, the most popular apt-based distro is not Debian, it is Ubuntu. Second, you can set your apt sources to whatever you want. Plenty of people publish apt repositories other than the Ubuntu project. Confluent for example makes all their software (Kafka etc) available through their own apt repositories.


Many package managers support an “install root,” and there’s always Docker containers.

Those obviously don’t solve many of the other issues that a real Package manager would, of course.


I agree - but the counter example I'd give is portage which gives you all that and more.


dpkg -i deb_file_I_downloaded_off_the_interwebz

phew, what terrible vendor lock-in, shipping tools that let me install whatever I want!


sigh

That random file you downloaded off the internet was built under a specific set of assumptions - assumptions that only hold true if you are running the specific OS version they were targeting.

IF you download the .deb file for your specific OS, and IF you manually install all missing dependencies, then it works. Otherwise, you are still screwed.

At least you can extract it (a .deb is just an ar archive wrapping a couple of tarballs). But that's no different from going to sourceforge or github or whatever and getting the source tarball... and we are back where we started.

By the way, I was not complaining about "vendor lock-in". I was complaining about Debian's package management policies and how they can affect your software development process in practice - to make the case that apt is a crap replacement for a proper language/library/development/whatever oriented package management.


Yes, it turns out that installation of software has the assumption that you're putting it on the right OS. This is the case for Windows, for Mac, and so forth.

I've never had an issue with a deb package, but I'll tell you it's one of the reasons I stick with Ubuntu for home.


No windows support. No macos support. No "not-debian-based system" support.


Why would a Linux package manager support Windows which has its own very different package format? I can't ever recall installing a Windows package that decided it would go and download some other dependencies - or at least not transparently. The two are vastly different beasts.


That's why I think C and C++ need a OS-agnostic package manager.


Yes, that’s the point :)

People who are suggesting a Debian-specific package manager to solve C/C++ dependency issues are missing the full picture.


This is C, not JavaScript. Does anyone really want an npm of the C world?

Header files work. They've worked for many decades. Yes, they require software authors to _do more work_, but they also help to eliminate a lib/ directory with 10,000 interdependent libraries that quickly becomes untrusted and frankly, ridiculous.


The npm of the C world is the dozen different package management systems that people have crafted and spent countless hours on. They work, but the lack of a standard makes them a bit cumbersome. Building a .deb is like running an American Ninja Warrior course: lots and lots of esoteric obstacles. Sure you can hack a .deb together, but there are about two dozen tools to actually make one that works like the real deal instead of your ad-hoc tarball.


I'm not sure what you are criticizing really. npm is by far better than what we have in the C world, and why npm in particular? Because it's easy to bash?


Far better? Lol nope. In the C world the OS distributions and the general nastiness of shipping libraries outside of an OS distro at least turn away the newbies who think that the world needs yet another module that pads a string to X length.

The hardness of C is its weakness but also its strength. C programmers at least tend to know basic programming and OS management skills while JS programmers... oh hell I'm happy I got out of the mess that is "modern" frontend development.


You still haven’t said a single word about package management.


Oh, but I have: the easier it is to publish code/packages, the more newbies and morons will flood your environment, to the point of unusability (or at least the inability to do any sort of audit).


npm as a package manager is good in theory. But in practice, it does tend to create a culture of "just use a package" for everything. For example foreach [0].

[0] https://www.npmjs.com/package/foreach


While I do easily agree that npm has created some sort of unwanted culture, I still think this culture was born partly because it's easy to publish/import. I can't say the same of the C world, moreover, there isn't only npm in the package management world.


> This is C, not JavaScript. Does anyone really want an npm of the C world?

I don't care for the deep dependency chains of npm, but integrating C libraries which use a mix of different build systems, and making sure your complete project cross-compiles cleanly with different toolchains for different arches is just irritating and time consuming. I don't think the situation is ideal.


npm is an outlier. Package management ecosystems of Python, C#, Go and Rust don’t suffer from such cultural problems or dependency hells.

Arguably, you can attribute some of those issues to JavaScript's immense popularity, but if even Python can manage it, others can too.


What do you mean by 'this kind of thing'? You didn't actually state any arguments against single header libraries.


Its existence is my argument. The only reason single-header libraries exist is that library management in C/C++ is terribly cumbersome. We shouldn't need them at all. There is a reason source only distributions are practically absent in almost all modern languages.



