My unusual application of Docker here is no exception. Most software builds are needlessly complicated and fragile, especially Autoconf-based builds. Ironically, the worst configure scripts I’ve dealt with come from GNU projects. They waste time doing useless checks (“Does your compiler define size_t?”) then produce a build that doesn’t work anyway because you’re doing something slightly unusual. Worst of all, despite my best efforts, the build will be contaminated by the state of the system doing the build.
Completely agree, Autoconf needs to die. C++ builds don't need to be complex and fragile, even if many environments are supported. Just use a minimal CMake build and resist the urge to implement "clever" things in it.
If only CMake weren't just as awful. It seems to have become an industry standard yet I can't begin to tell you how much time I've wasted wrestling with it lately. In an ideal world all projects would use CMake "correctly" but it's just not the case; problems are really hard to debug, no one really seems to know the current "right way" to do things, and the "right way" keeps changing anyway.
Given its awful language, I just can't say that CMake is better, or much better, than autotools. But at least it handles Visual Studio and ninja. So yeah, we use it for the support it provides, which is second to none, but frankly it's a mostly terrible user experience.
The only reason CMake is considered better than automake is because automake is so unimaginably dreadful. CMake is a lot worse than pretty much any competent piece of software.
Qt is moving to CMake too. And the latest versions of CMake are as good, or better, than qmake. Add to that vcpkg and you will have a fantastic solution for managing dependencies.
Maybe CMake is not the best, but it is good enough and it is the new standard in the C++ world.
It’s really not good enough, it’s ok to admit it. Nobody likes CMake; it just somehow won the makefile-generator wars. Having to learn a shitty, esoteric language to compile my code is not something that makes me more productive.
I agree. qmake was much, much better to work with than cmake. I tend to believe the reason cmake won is because qmake was so associated with Qt. The best tech does not always win.
Android also settled on cmake, while keeping the original makefile-based ndk-build, after a couple of failed attempts to switch to something else.
After 10 years they are finally introducing AAR support for NDK projects, which is also built on top of cmake.
Meson is excellent and would have eaten CMake out of house and home if it weren’t for the stupidity of it targeting C/C++ but being written in Python. A lot of C/C++ developers cringe at the thought of requiring Python as a build dependency.
Complete aside: I often find myself agreeing with contrarian or somewhat contentious dev-related posts here and on GitHub (I’m mqudsi there), look to the name of the author, and see yours.
(I’m not sure why I decided to share this right now. Sorry.)
You may want to check build2[1] then. As a bonus you also get a Cargo-like package/project manager (but with a real build system underneath). Full disclosure: I am involved with the project.
For me at least, using Python is the least of my concerns, in fact I like Python so having the build system in a familiar language instead of some esoteric DSL would be wonderful. Unfortunately SCons was too slow and had its own problems, but it wasn't a bad idea imho.
That said one thing I like about autotools is how it generates a build system that doesn't depend on more than shell script and make, and can be packaged that way. It's annoying that CMake produces a build system that requires CMake to be present to build.
I actually philosophically minded SCons less than Meson: SCons never pretended that requiring Python was a minor detail to gloss over or wishfully pretend didn’t matter; it put it out there front and center.
I'm actually just learning about Meson for a small project I'm doing. It's... nice in many ways, but you can tell it's still very young. It's trying to avoid many pitfalls of older projects like CMake by being very strict and opinionated about how it can be used and how you can extend it. This is, I think, a good thing, but that means it also lacks a lot of support that is needed to do certain things. So, it's nice, as long as your build is not too complicated. At least it stops you from building a multiheaded hydra, but at the same time it doesn't cover everything you might need so you can get yourself into a corner if you commit to using it and then discover later that you'll need some functionality it doesn't provide.
You know, what I'd _really_ like, the more I think about it, is a consistent way to treat the "build system" as a separate project from the "source code". It's just silly how coupled the build system can become with the specific project it's been designed for. More thought is needed towards decoupling build systems from their corresponding target code. The last thing I want as a C/C++ developer is to have to maintain multiple build systems AND make sure everything is correct about what "package files" (pkg-config, CMake targets, etc.) they install and where. Lately I find more and more time is spent maintaining build systems than the actual code, and it's very frustrating, and it only gets worse as more and more "solutions" become available that require whole new build systems to be designed and integrated. I have several projects that have at least 2 build systems in their repositories, and we have to constantly test them both and make sure they do the same things.
If you do only minimal things, you will have fragile builds. The hacks are there not because people like adding complex code for no reason, but they're scars from broken builds.
For example, requirements for what has to be static and what dynamic are different on Linux (distros want unbundling) versus macOS and Windows (where you can link to a handful of things that ship with the OS). macOS is especially annoying, because if you're not careful, you'll link dynamically to homebrew's non-permanent locations. `find_package` doesn't do the right thing without clever non-minimal things around it.
Having worked on compilers, I can honestly say that autotools' attempts to be helpful are more often the cause of the trouble than fixing anything broken.
For example, it checks for memcpy by writing this program:
void memcpy(void);
int main() { (void)memcpy(); }
and checking if it compiles. Which is, of course, illegal and completely nonsensical C code that no one would ever write. It is not entirely unreasonable for a compiler to error out if it sees this code, and since we were working on an aggressive pointer analysis for C, we did do so. Of course, since the compiler failed on this code, autoconf concluded that the system didn't have memcpy, and it helpfully provided its own implementation... which immediately crashes the build (after autoconf completes, of course) with multiple redefinitions of memcpy.
To top it all off, it appears that there has never been any system that didn't have memcpy that was capable of running any version of autoconf, let alone any software written this millennium. This check literally has no benefit, and is "complex code [added] for no reason."
I don't understand — why is it illegal C? memcpy is just another function in some library. It might be libc or not. I don't expect the compiler to care about it; just put the function call into the object code and move on.
It's undefined behavior to call a function with a different set of arguments from its definition. The definition of memcpy is void *memcpy(void *restrict, const void *restrict, size_t), and calling it with no arguments is obviously incompatible with the three required arguments here.
> I don't expect compiler to care about it, just put function call into object code and move on.
memcpy actually has a few semantics that cannot be legally expressed in C (specifically, strict aliasing doesn't kick in). All modern compilers actually define memcpy as a compiler-builtin, and they rely on memcpy's semantics for optimization purposes.
Or, put more bluntly, memcpy does not correspond to a function call in object code. C is not a thin wrapper around assembly code, and has not been so for quite some time.
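To illustrate that last point with a small (hypothetical) sketch: the idiomatic way in C to reinterpret the bits of an object is to memcpy it, and compilers that treat memcpy as a builtin typically compile this down to a single move, with no library call in the object code at all. Exact codegen obviously depends on the compiler and optimization level.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* memcpy used for type punning: no pointer casts, no aliasing games.
       A compiler that knows memcpy's semantics usually reduces this to a
       plain register move rather than an actual call. */
    static uint32_t float_bits(float f)
    {
        uint32_t u;
        memcpy(&u, &f, sizeof u);
        return u;
    }

    int main(void)
    {
        /* prints 0x3f800000 on IEEE-754 systems */
        printf("0x%08" PRIx32 "\n", float_bits(1.0f));
        return 0;
    }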
> memcpy actually has a few semantics that cannot be legally expressed in C (specifically, strict aliasing doesn't kick in).
Are you sure? I hear this a lot, but I don't think it's really true. char pointers are allowed to alias pointers to any other type in standard C (C17 6.5p7), so unless I'm missing something, a correct C implementation of memcpy could just cast both its arguments to char pointers and copy them char-by-char.
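For concreteness, a minimal sketch of the kind of implementation being described (name made up; whether this fully satisfies memcpy's required semantics is exactly the question here):

    #include <stddef.h>

    /* Byte-by-byte copy through character pointers, which C17 6.5p7
       allows to access objects of any type. */
    void *my_memcpy(void *restrict dst, const void *restrict src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n--)
            *d++ = *s++;
        return dst;
    }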
I agree with your premise that the call to memcpy() is undefined, but I think that's entirely because of the conflicting definition, and has nothing to do with any special semantics of memcpy, so it would equally apply to any other standard library function.
> Are you sure? I hear this a lot, but I don't think it's really true. char pointers are allowed to alias pointers to any other type in standard C (C17 6.5p7,) so unless I'm missing something, a correct C implementation of memcpy could just cast both its arguments to char pointers and copy them char-by-char.
I'm not completely certain. I have definitely seen this assertion before made by people more well-versed in the C standard than I, but I don't recall the exact argument. I think it may be the case that the write-via-char causes the effective type of memory to change (so it can only be read by char from that point on), but again, I don't trust my judgement here.
I think it might be an out-of-date assertion, but I'm not sure either since I also hear it from people who seem to be well-versed in the C standard. The only place in C17 where I can see memcpy singled out (6.5p6) also mentions copying "as an array of character type." The definition of memcpy itself just describes it as copying characters. It's true that some types can have bit patterns that are "trap representations," that is, they cause undefined behaviour when used, but memcpy can also create trap representations. You could memcpy to copy the bit pattern of a signalling NaN from an int into a float, for example.
memmove on the other hand can't be implemented in standard C, but it can be implemented on most platforms using only implementation-defined behaviour, because casting a pointer to an intptr_t is implementation-defined, but on most platforms with a flat memory model, it gives you the linear memory address.
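Roughly like this (a sketch, leaning on the implementation-defined assumption that converting pointers to uintptr_t yields comparable linear addresses — true on typical flat-memory platforms, not guaranteed by the standard):

    #include <stddef.h>
    #include <stdint.h>

    /* Copy forwards or backwards depending on how the regions might overlap. */
    void *my_memmove(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        if ((uintptr_t)d < (uintptr_t)s) {
            while (n--)
                *d++ = *s++;       /* dst below src: copy forwards */
        } else {
            while (n--)
                d[n] = s[n];       /* dst above src: copy backwards */
        }
        return dst;
    }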
Well, no, in this test program memcpy is declared as having no arguments and it has to be called with no arguments. It's perfectly consistent, unless memcpy is so magical that some specific undefined behaviour is allowed (not likely).
The actual test comes later: the real memcpy with three arguments, if it exists, is mislinked into the test executable; if memcpy cannot be found linking fails.
The test program would hopefully crash badly in case it is actually run, but the test is not about memcpy behaviour or about what the parameters of memcpy are.
It's not the declaration of memcpy that matters, it's the definition. And the definition of memcpy is given by the C standard. In the case of external functions, it's the user's responsibility to ensure that the declaration is compatible with the definition.
If you want to use a non-standard function that coincidentally happens to be memcpy, you have to use special compiler flags to inform the compiler that C library semantics are not in effect (e.g., -ffreestanding). Of course, the entire point of these checks is to figure out if the C library has these functions in the first place, so using these options during the tests would defeat the purpose.
> The actual test comes later: the real memcpy with three arguments, if it exists, is mislinked into the test executable; if memcpy cannot be found linking fails.
The actual test both compiles and links the executable, although it (obviously) doesn't attempt to run it. Whether it fails in the compile or the link step is immaterial to the test, as it fails either way.
I agree. If the compiler authors make some additional "magic", they are also responsible to "unmagic" the code which checks: "if memcpy is defined as having no arguments, and then a call to it is compiled (the call is never executed), will the memcpy routine from the standard library (which must be linkable only using the name) be successfully linked?"
> memcpy is just another function in some library. It might be libc or not. I don't expect compiler to care about it, just put function call into object code and move on.
The compiler has special knowledge of memcpy and may introduce calls to it even if you don't call it anywhere in your code - see this : https://gcc.godbolt.org/z/_w78qd
Hence standard library function names are "reserved for use as identifiers with external linkage", ie unless your implementation is 'freestanding', you're not allowed to re-define them.
The code above does not do so.
Contrary to what other people have suggested, according to my understanding of the C language standard, the word `memcpy` is not magical in any way, and undefined behaviour only comes in because the code will call `memcpy` with incorrect signature.
Because it is a function in the C standard library. Programs are typically prohibited from redefining these functions. You can do so if you don't link with libc and pass the correct compiler flags (gcc will turn some raw loops into a memcpy call as an optimization even without linking libc, afaik).
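As a hedged illustration of that last point: a plain byte-copy loop like the one below is the sort of thing optimizing compilers commonly recognize and turn into a memcpy call (or an inline expansion of it), even though the source never mentions memcpy.

    #include <stddef.h>

    /* No memcpy in sight, yet an optimizer may well emit a call to it. */
    void copy_bytes(unsigned char *dst, const unsigned char *src, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];
    }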
Compilation units are required to declare all standard library functions they call. They usually do so by including the compiler's header files rather than doing more work and risk errors with explicit declarations.
The problem is, autoconf does a terrible job of supporting Windows, and actively rejects patches which would improve things. For example, it copes very badly with spaces in filenames, which are common on Windows ("Program Files", "My Documents").
"My Documents" are called "Documents". You can also install your application into user directory, something like "C:\Users\Vladimir\AppData\Local\YourApplication", so no spaces necessary unless user name contains spaces (not sure if it's even possible).
I'm not claiming that spaces should not be supported, but you can live without them.
autoconf is an attempt to commoditize the insanity that people have experienced over the years. This is the wrong approach. The right approach is to make it easier to add checks for only your particular insanity. CMake does it correctly.
Not that I think autotools are great, but, to the contrary, feature-detection is exactly what all those macros in configure.ac etc. do, as opposed to cmake which tries to guess a platform and is a monolith with magical behaviour that nobody groks in its entirety. cmake, for one, is actually a case of the cure being worse than the illness. Already autotools treats Makefiles as output (and many macros shit gmake-specific if/else cascades into it) when the solution is simply using Makefiles properly, and assuming POSIX headers/defs in this millennium. See git's Makefile, or those on suckless.org, for how to do it properly.
True but I think addressing those inconsistencies in the build system is doing it on the wrong level. Ideally this should happen outside the actual build and any dynamic configuration that is derived from the system environment should simply be passed into the build. CMake makes this easy using toolchains.
Is this really easy? I spent days recently trying to compile a popular library on a Mac for all platforms (Win, Linux), only to figure out that it is MUCH easier to just set up a bunch of Docker containers/VMs and compile it there. I suspect it is close to impossible to compile the library (LevelDB) for Linux on a Mac.
But every C build system will have to deal with some mess. That may be having multiple configurations for multiple OSes (with weird exceptions for weird OSes or WASM), snowflake libraries that just had to have their own pkg-config replacement, fiddling with flags for various flavors and versions of MSVC and non-MSVC compilers, rpaths, sovers, etc.
To me it's inevitable that every project that starts with "just 5 lines of simple CMake/Makefile, look how simple it is!" will end up with the same mess everyone else ends up with.
cat README.txt ..... now go apt install some packages. Except they might be named differently in your distro. Also some of them might be a different version, but my README specifies no version, so it might fail to compile or error out later because of a header mismatch. Also the one available in your distro might not be compiled with the "X_POTATO=1" flag (which you will discover somewhere along the way is required), so it's time to build that dependency from source and restart this whole dependency-management hell one layer down. Have fun :)
There are different versions of compilers supporting different features floating around, but Makefiles or whatever don't specify that. Build dependency management is frequently something like `sudo apt install...`, tying the build process into packages installed in the OS itself rather than being sandboxed which creates all sorts of problems.
Autoconf checks whether the build environment has a sane C compiler, standard C library, shell, commands, and so on. Thirty years ago that may have been useful, but today it's complete garbage. If the build environment turns out to have insane, non-standard-conforming behaviour, there is nothing you can do about it, and autoconf's workarounds almost certainly won't work in such an insane environment today anyway.
What I don't get is that autoconf has failed to adapt to the modern situation and still keeps behaviour that is totally irrelevant and considered harmful today.
autoconf is a typical GNU project. It looks like it would be horrible and crufty but when you make an effort to get to know it it is actually pretty sweet.
Just a little example of how clever it is:
If you want to cross compile to a target, you can't actually run the test programs you compile. So it becomes important to make them fail at compile or link time, not at run-time. If the expression to be tested is constant at compile time, autoconf will compile a test program that uses the expression in the size of an array type. So, let's say you want to check if sizeof(int) is at least 4. Then autoconf would make a test program that declares something like
typedef char foo[1-(sizeof(int)<4)*2];
If sizeof(int) is less than 4, then the array size would become negative, and that produces a compile time error. No need to actually run the program.
That is the place where I learned about this pattern many years ago. I'm still grateful for autoconf for teaching me that. This is how you did compile time assertions in C before C introduced _Static_assert.
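For anyone who hasn't seen it, here's the old trick next to the modern equivalent (a sketch, using the same "sizeof(int) is at least 4" check from above):

    /* Pre-C11: a negative array size makes the typedef a compile error. */
    typedef char int_is_at_least_32_bits[1 - (sizeof(int) < 4) * 2];

    /* C11 and later: say it directly. */
    _Static_assert(sizeof(int) >= 4, "int must be at least 4 bytes");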
Also note that autoconf has lots of other awesome features if you would like to learn about them. For example it can be configured to have a system-wide cache for test results, which speeds up future runs.
But the most important part of autoconf to me is that it is dependably configurable. On my system, I have both /usr/lib and /usr/lib64 so I can develop for both 32-bit and 64-bit worlds at the same time. Run configure with --libdir=/usr/lib64 and you are good to go. There is no single way to reliably do this with cmake. Among the conventions, -DLIB_SUFFIX=64 has apparently crystallized out as de-facto standard, but you can't rely on it. For LLVM you have to set LLVM_LIBDIR_SUFFIX instead.
Or let's say you want cmake to also look for include files in /usr/X11R7/include and for library files in /usr/X11R7/lib64. Good luck with that!
I'm not trying to say that cmake is bad. It is a good solution for what it is trying to achieve. But automake had higher goals and also achieved them.
But I think cmake vs autoconf is a false dichotomy. cmake is not the enemy here, and neither is autoconf.
I'm actually more disappointed in the hubris of all the people deciding they can make better build systems than automake or cmake and then roll their own, but consistently fail in situations that other people already encountered, thought about, and solved. We now have autoconf, cmake, jam, bjam (Boost), qmake (Qt), meson, waf, python and perl have their own stuff, everybody believes they need to ram a package manager down my throat as well.
To me, life time is the only resource that actually matters.
For every 5 minutes you think you saved by not going autoconf, you made your "build from source" users waste 50 minutes per person getting your build scripts to work.
> This is how you did compile time assertions in C before C introduced _Static_assert.
That's the biggest issue with autoconf... it tends to spend a lot of time basically asking the question "are you some weird Unix from 1997?" that no one actually cares about the answer to, because people don't take the time to curate their build systems.
At its core task of figuring out how to compile and install properly on a system, autotools can do a surprisingly bad job. One of its most frustrating habits is dropping flags from the linker invocation ("I don't know what -flto does, but I'll drop it from the linker even though the user explicitly told me to link with it, because unknown flags tend to be bad for the linker").
Yes, build systems are a complicated three-way tussle between the system, the developer, and the user, and we've done a very bad job as a community at finding good tools to mediate this tug-of-war.
YOU may not care about old crappy systems, but you'd be surprised how much cruft is still in use in large organisations. And I for one am glad GNU went through all the trouble to make their code work on non-standard environments, because it means that I can use GNU stuff to bootstrap a non-crufty userland on those systems.
About the LTO stuff: Maybe you did it wrong?
I just tried to reproduce your problem by downloading GNU coreutils and running configure like this:
$ LDFLAGS=-flto CFLAGS="-Os -flto" ./configure
it definitely passes -flto to the compiler. I can't get to the linking stage because binutils is missing a plugin. Probably something wrong with my gcc setup. But autoconf is not removing -flto.
BTW: About non-standard environments: Go ask among your friends for people who do embedded systems development, and let them show you their environment. You'd be surprised.
Support for -flto was added in libtool about the time I was working with this (I want to say 2012 or 2013), which also means you get to run into the fun situation where it's fixed in the upstream codebase, but the repository you're working with has a checked-in stale version of it.
So, my experience is the total opposite, but that is probably because you only want to compile for your one computer, whereas I am constantly trying to compile for twenty platform/architecture combinations with sometimes arbitrary toolchain requirements (like "must use clang" or "must use a compiler newer than X" or "must generate object files capable of being used on this older version of the operating system"). I have _never_ had a hard time compiling something when it uses autoconf: I had to learn how autoconf works and what its expectations about the world are, but they are actually mostly correct... libtool, on the other hand, causes me issues sometimes, but I have largely figured those out over the years.
But when people use CMake? Ugh. I am in for a world of hurt, and probably so many custom assumptions about how to interface with the compiler and the operating system that not only is cross compilation probably off the table, but compiling it at all for one of my target platforms is likely not going to work :/. I pretty much always end up having to just throw away your build system entirely and start over from scratch just to even test the project for my target. OMG: a bunch of projects like using tools like Meson... the entire Meson 0.52.x line entirely broke the ability to cross compile to targets like Android on macOS as it incorrectly detected stuff about the target linker using the host compiler, and as it is largely a black box the only real way to deal with this is to avoid 0.52.x forever (I filed an issue about this right as 0.52 came out but they failed to fix this serious regression until 0.53).
And in languages other than C++? Ha ha ha omg they are all so horrible :(. I have been fighting for a month now trying to get a Rust library compiled to a static archive so I can link it into my project, and have nothing but a long list of bugs filed with upstream and workarounds for their broken build system to show for it, as in the end I finally decided it just wasn't worth dealing with anymore. bindgen passes the wrong target to clang for iOS, with no way to work around it as the person doing the compile, and often this happens in some deep dependency which makes overriding it really hard, as the build system believes it should be in charge of building dependencies. buildcc incorrectly mixes up HOST_CC and TARGET_CC when you are doing a cross compile to a target (such as CentOS 6) from a host (such as Ubuntu bionic) that shares a triple. The MinGW build not only tries to link against some entirely-wrong set of libraries that comes with Rust (they just last month shipped a broken fix for this that works in cases where you aren't passing a custom sysroot... but if you are doing MinGW seriously you have a custom sysroot so you can be compatible with MSYS2), it also embeds into the archives parts of the target's standard libraries (which is just crazy: they apparently do this for MinGW, musl, and one other target). And I can't yet for the life of me get it to generate reproducible builds even for simpler packages (which is normally trivial with C++--yes, even with autoconf--I need to spend more time looking into this problem and filing issues about it, though).
I've been eyeing off the new zig compiler for making portable binaries. I'm tempted to set it up for cross-compiling a native nodejs library I develop. The work Andrew Kelley has done to make zig cross compile is phenomenal.
Advice to young devs: If you don't have a good professional reason why you absolutely need to use gnu tools on windows, and you have no technical reason not to use Visual Studio, use Visual Studio. It's good enough, and for me, even good.
Your advice would have more weight if you provided some arguments. GNU tools are also good enough, and this kit seems like a lot easier way to get started with your C lab exercise than installing Visual Studio and getting lost in it.
I dislike Visual Studio because to set things up you have to compare online screenshots to your settings and click around multiple tabs of configurations. In the end you (as beginner) don't know what libraries get included and if it will work on another computer. GNU tools are text-driven, so easier to replicate and to reason about.
The GNU tools are not fully compatible with the Windows ABI and they don't support some important Windows features. There are great reasons to use the GNU tools, like compiling unix programs that do not otherwise target Windows.
But you are swimming upstream if you use the GNU tools for general Windows development. Windows is a complex, integrated system that has evolved over a long period of time and is very different from unix-like operating systems. The peculiarities that exist in MSVC and associated tools are not just the whimsies of Microsoft devdiv engineers and product managers, but often tie in to operating system functionality. If you try to develop Windows applications with mingw, you will eventually hit weird problems and errors that simply would not exist if you used Microsoft's toolchain.
If you want an alternative to Microsoft's compiler, then consider llvm, which is aiming for full ABI compatibility and even produces PDBs (though I'm skeptical that they're as complete as the ones generated by Microsoft's tools). But the compatibility is incomplete, so ymmv.
Microsoft seems to aim for deep integration of Linux in Windows per WSL. They even showcased that they are working on Wayland support for WSL, which would enable using Linux GUI applications directly on Windows.
We may soon get at a point where Linux applications feel as native on Windows as native Windows applications (which use quite a variety of toolkits anyway), as the integration deepens.
Now WSL still uses rather large distribution images. But very little stops them from supporting thin Linux images, with just the necessary dependencies, that launch in Windows like any other application.
WSL is more a solution for users that want to have Linux software on Windows. It's not a solution for developers that want to target Windows, which is what I take this thread to be about.
I don't know if it will ever come with a vanilla Windows Home edition install but somehow I doubt that.
Not yet, but I think it is very likely that in the future installing a Linux distribution from the store will automatically install/enable WSL. Micro-distributions with specific applications are only a small step from there.
I don't see what they have to lose. Windows is still used widely in business. But their lock-in has drastically reduced with the rise of iOS, Android, and web apps. Making Windows more attractive as a platform for developers to deploy applications, even if it is through the WSL subsystem will make Windows as a platform more competitive to these other ecosystems.
I am currently writing scientific software. But we have stopped building Windows versions, since these programs work great with WSL and it is far less effort than building these applications separately with Visual C++.
I write software that's used by human beings in businesses and building for Windows is trivial compared to maintaining additional docs and training material for managing a WSL install on users machines.
I'm currently in a weird spot with the software I distribute because the majority of the users "know enough to be dangerous" but aren't software engineers/IT professionals. We want them running code and using Linux like a pro, but there's a lot of training/documentation overhead just for our *nix builds and the friction to getting that up and running for WSL is daunting.
Luckily MS understands B2B native more than anyone else so I'm hopeful they'll have a solution eventually, but I'm not holding my breath until then.
WSL is more a solution for users that want to have Linux software on Windows. It's not a solution for developers that want to target Windows,
I am not sure what the difference is, when WSL gets deeper and deeper integration with Windows. WSL2 does not use the personality mechanism anymore, but that's just to make it easier to fully support Linux. But once Linux becomes a personality of Windows in the informal sense, targeting Linux means targeting Windows at the same time.
Depends where one is coming from; when I was a TA at my university, Visual Studio was the official C++ lab exercise tooling for first-year students, on Win 9x with student drives mounted on login.
Sure, Visual Studio builds have been, and up to a point still are, a bit of a miserable experience.
But not as miserable as trying to use GNU tools for building C++ on Windows. I have over a decade of experience with that and it's always equally painful.
There were good reasons to use GNU tools when MSVC did not have thorough enough support for C++11 and GCC did. Now MSVC is pretty good with the latest C++ standard.
Besides, Windows now comes with a real posix virtualized environment - Windows Subsystem for Linux - which works great. Use that, not msys2 or cygwin, if you must have a gnu build system on Windows...
You can use VS for C just fine on Windows, VS 2019 has full support for C89 and almost complete C99 support. Next iteration of VS 2019 will even add support for _Generic. But, if you want to write portable code to other operating systems, I suggest to build your code with both VS and GCC.
Install llvm. Clang can use all of MSVC's headers without breaking compatibility with its ABI the way MinGW does (you can't mix DLLs built against the MSVC ABI with ones built against the MinGW ABI).
The only real downside of using MSVC is the mind boggling amount of storage it hogs with libraries you'll never ever use not even once if your goal is to just build simple C++ programs. MinGW might be the only viable approach if you are storage constrained (LLVM/Clang also supports MinGW of course)
I've done my fair share of professionally maintaining portable programs and my preferred choice nowadays is to extract platform independent and platform dependent code to their own libraries and then just implement what is the most out-of-the box simple way to define a build per each platform.
It tries to embrace it, but this is not necessary (and limited for non-trivial cases - and what isn't non-trivial for serious projects?). CMake itself has had good VS support for a long time. I always try to create VS solutions in parallel with ninja, nmake, jom, whatever, completely from it, without the hand-holding of some integration.
My biggest problem with gcc for Windows is that libstdc++ does not support threads. There is a workaround with a pthread compatibility layer but that is just horrible and awful. I can't bring myself to do that.
This has been an open issue for literally years now. But apparently nobody cares enough to fix it.
If only that were so! At least up to gcc version 9.3 it does not work. I will try gcc 10 soonish but I don't have high hopes.
The whole reason I installed a cross compiler to Windows is so I could cross compile some search engine code I wrote that uses C++ threads. But if you have a copy of the header and run-time files from Visual Studio (there is a free edition of that now) you can actually use clang to cross compile for Windows from Linux just fine, including C++ threads.
gcc on Windows uses winpthreads for C++11 threads, which works just fine (it automatically links with libwinpthread)... I can basically use the same threading code for all platforms (except the stuff that is necessarily platform dependent and therefore not part of C++11). I should note that I use msys2. So I'm wondering what you are missing?
That being said, I often use my own STL-like mutex class which wraps SRWLock on Windows and pthread_mutex_t on Linux/macOS.
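For illustration, a rough plain-C sketch of that kind of wrapper (hypothetical names, no error handling; the real thing would presumably be a C++ class with lock guards):

    /* Minimal portable mutex: SRWLOCK on Windows, pthread_mutex_t elsewhere. */
    #ifdef _WIN32
      #include <windows.h>
      typedef SRWLOCK my_mutex;
      static void my_mutex_init(my_mutex *m)   { InitializeSRWLock(m); }
      static void my_mutex_lock(my_mutex *m)   { AcquireSRWLockExclusive(m); }
      static void my_mutex_unlock(my_mutex *m) { ReleaseSRWLockExclusive(m); }
    #else
      #include <pthread.h>
      typedef pthread_mutex_t my_mutex;
      static void my_mutex_init(my_mutex *m)   { pthread_mutex_init(m, NULL); }
      static void my_mutex_lock(my_mutex *m)   { pthread_mutex_lock(m); }
      static void my_mutex_unlock(my_mutex *m) { pthread_mutex_unlock(m); }
    #endif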
The reason (for me) to use msys in the first place would be so I can create binaries that do not depend on obscure DLLs like winpthread. If depending on 3rd party DLLs was no problem, I could have used Cygwin.
Also, as I heard it, winpthread combines the warts of windows threads with the warts of pthreads, doing neither justice.
You can link both libstdc++ and libwinpthread statically if you want. Both libs come with msys2, so I wouldn't really call them obscure third-party DLLs... In practice, threading is not a problem at all with gcc on Windows. I agree, however, that libwinpthread is not the most efficient implementation (that's why I roll my own), but for most purposes it's good enough.
Portable in the sense of having it on a flash stick in your pocket, all working without needing to install anything.
I once created such a pocket-development setup, throwing in Fossil [0] for version control and project tracking and Notepad2 (notepad2-mod) [1,2] as the editor. Actually, Git can also be set up for the pocket, which then brings in the whole of MinGW for this.
I wanted to fit in Geany [3] for a more IDE-like experience. It's also Scintilla based, but it needed GTK, so I chose not to bother back then.
Android NDK can be set up this way too. It packs LLVM, so there's another capable compiler to do the lab work.
Just learn to program in a portable C way, that is, know the platform-specific ways to do certain things (threads, network) and #ifdef them properly. However, that is a whole other story about how this sort of portability could be handled.
I'm doing all local development on Linux, and since msys2 is there by default, getting the Windows compile done is bliss. You just need to bring msys2 into your path and all the rest (git, g++, linking, etc.) is the same as in your Linux env. E.g. this is my prefix for the cross-build: https://gist.github.com/dominicletz/5e50c49aa1bf30d2485f5bec...
This has been the case for a long time on Appveyor, which is one of the reasons we stick to it for CI. Do you happen to have experience with it? I'm wondering if and why we should switch to GitHub Actions, for the repositories we have on GitHub anyway.
Haven't tried Appveyor so I can't compare. I'm a Travis convert who came over for the Windows support for my open source project. Just glancing at the Appveyor pricing chart, I guess my open source project would only get one parallel job on Appveyor, while on GitHub Actions it's unlimited.
If you install Visual Studio, and then copy together all the files it adds to %PATH%, %INCLUDE% and %LIB%, and create a .bat script to set environment variables to point to your copies of these files, you get a fully portable first-class compiler for Windows with the needed headers and libraries, zipped up to around 75 MB. Add remedybg as a debugger, and your editor of choice, and you never have to touch Visual Studio again.
Which you need the visual studio installer for, which only works half of the time. Just to check if it still is as bad as last I tried, I downloaded the build tools and started the installer. Currently downloading 1 GB of data...
Edit: To nobody's surprise, the installer still doesn't work. The command line shortcuts it installs fail to properly set up the path. At least it looks like they are trying to print a sensible error message in the command prompt now, though. Still not quite sure how they manage to fail this badly.
Not sure exactly what folder you mean, but I literally have a 75 MB zip file (or rather, it's a .7z file, zipping would make it slightly larger). Unzipped its 700 MB though.
In that folder, bin includes files for compiling from x86/x64 to x86/x64, and in practice you only need one of the subfolders (in my case the x64 to x64 toolchain). The lib folder also includes binaries for x86 and onecore. If you remove all of those 7zip compresses the rest down to 25 MB.
You're kidding right? Hostx64\x64 is like 51MB, lib\x64 is 428MB, together they compress to like 116MB. And then you forgot the ATLMFC x64 files which are 465MB and bring it to 230MB compressed.
On the fresh install of the 2019 build tools lib\x64 is 200 MB. I tried using 7zip to compress to .zip instead of to .7z, that took the archive from 25MB to 75MB. I might not have enabled ATLMFC in the installer though, I didn't change the default checkboxes.
Edit: I feel like this whole discussion highlights the issues with the vs installer though. If it wouldn't create hundreds of folders all over the place it would be much easier to have a sane discussion about it.
Note that this kit is using MinGW. That's not necessarily a bad thing but comes with all of the caveats you can read about. I wish the author mentioned it because it should definitely be a choice you make deliberately.
In the same vein I've been using https://github.com/mstorsjo/llvm-mingw + lld + cmake + ninja on windows for over a year with great success, dev experience is much better than with MSVC.
Mmmh, Docker: it's still a problem to run Docker and VMware Workstation peacefully in parallel on Windows. The one prefers Hyper-V, the other the opposite.
If you don't want Hyper-V, you can easily run docker in virtualbox with boot2docker, using docker-machine. Or you can set up a docker linux vm in vmware.
It only becomes more complicated with nested virtualization techniques, especially if you develop close 'to the metal'.
I'm in 3D and image processing software, often with GPU interface requirements involved. I can reconcile some aspects in the virtual environments, but so far never all of them in a single setup. And in my experience, when these things are stacked on top of each other, the difficulties multiply.
For the OP, many projects are a "mess" that don't agree with his opinions. "bash is a mess, git is a mess, cygwin feels wrong..."
And M-Windows is a pristine example of orderly genius?
I support his right to his opinions. But here he is, running Docker and a pile of opinions to build himself a portable C&C++ env on M-Windows. (To use gcc, no less.)
Software in general is a mess. But that doesn't prevent great tools from being developed. There's a reason for the success of GNU and FOSS.