Also, while I love Python, it's helpful to understand why Python packaging is a (manageable) mess. It's because of the non-standardization of build tools for C/C++/Fortran and the immensity of the ecosystem, nothing to do with Python itself. It's partly irreducible complexity.
It’s a miracle it works at all.
For example, cargo for Rust, which is great, can assume it is packaging mostly Rust-only code. And while Rust is compiled, the language "owns" the compiler, which means building from source as a distribution strategy works. I don't know how/if cargo can deal with e.g. Fortran out of the box, but I doubt cargo on Windows would work well if top cargo packages required Fortran code.
The single biggest improvement for the Python ecosystem was the standardisation of a binary package format, the wheel. It is only then that the whole scientific Python ecosystem started to thrive on Windows. But binary compatibility is a huge PITA, especially across languages and CPUs.
Various factors still make the Rust and Python story different (Python uses more mixed-language packages, the Rust demographic is more technically advanced, etc). But a big one is that in Rust, the FFI is defined in Rust. Recompiling just the Rust code gives you an updated FFI compatible with your version of Rust. In Python, the FFI is typically defined in C, so recompiling the Python won't get you a compatible FFI. If Python did all FFI through something like ctypes, it would be much smoother.
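Something like that already works today with no compile step at all; a minimal ctypes sketch (assuming a Unix-like system where find_library can locate libc; names vary by platform):

    import ctypes, ctypes.util

    # Assumption: find_library("c") locates libc; it may return None on some platforms.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    libc.strlen.restype = ctypes.c_size_t
    libc.strlen.argtypes = [ctypes.c_char_p]
    print(libc.strlen(b"hello"))  # -> 5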
It's still its own mess. You'll find plenty of people having problems with openssl, for instance.
No idea how build.rs being able to run any Rust code is an advantage; it's not like setup.py can't run any Python code and shell out. In fact, bespoke code capable of spitting out inscrutable errors during `setup.py install` is the old way that newer tooling tries to avoid. Rust evangelism is puzzling.
If you cargo build, can that run a dependency's build, including trying to compile C and stuff?
And also there's a helper library (the "gcc" crate, since renamed to "cc") which does all the work of figuring out how to call the C or C++ compiler for the target platform, so that build.rs can be very small for simple cases. You don't have to do all the work yourself.
I’ve only begun using it so my expertise is limited, but I think vcpkg aims to help with some of these difficulties by shipping code as source and then running make on dependencies so they are guaranteed ABI compatible because the same compiler builds everything.
- Competing msvc and gnu toolchains and ABIs, with native and Windows-first dependencies working better or exclusively with msvc, and *ix-first dependencies working better or exclusively with gnu, is a uniquely Windows situation. Which is which for a given build is also not clearly labeled most of the time. (You might mention glibc vs musl, but there’s basically nothing uniquely musl, and when you’re compiling for musl you can almost always get/compile everything for musl from the ground up.)
- Confusing coexistence of x86 and x64 is another thing largely unique to Windows. (amd64 and arm64 are much more clearly separated in Apple land.)
- Package management is a complete mess. Choco, scoop, winget, nuget, vcpkg, ad hoc MSI, ad hoc EXE installer, ad hoc zip, etc. etc. There’s no pkg-config telling you where to look and which compiler flags to use. If you want to pick up a user-installed dep, you special-case everything (e.g. looking in the vcpkg path) and/or ask the user to supply the search path(s) in env var(s).
Anyway, shit mostly(tm) just works(tm) on *ix if you follow the happy path. More often than not, there’s no happy path on Windows.
I agree. Some people love to complain about Python packaging. But from one perspective, it's arguably been a solved problem since wheels were introduced 10 years ago. The introduction of wheels was a massive step forward. Only depend on wheel archives; don't depend on packages that need to be built from source and that drag in all manner of exciting compile-time dependencies.
If there's a package you want to depend on for your target platform, and the maintainers don't produce a prebuilt wheel archive for your platform -- well, set up a build server and build some wheels yourself, and host them somewhere, or pick a different platform.
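For what it's worth, pip can already enforce the "wheels only" policy and build wheels for self-hosting; a sketch, with numpy as a stand-in package name:

    pip install --only-binary=:all: numpy   # refuse to build anything from source
    pip wheel numpy -w ./wheelhouse         # build wheels yourself, then host them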
Admittedly I'm not a Python expert, but Julia handles this just fine? It doesn't seem like it's a difficulty inherent to "mixed-language packages". It appears to me that Python's approach is just bad somehow.
TLDR: There's a package called PrecompileTools.jl which allows you to list: I want to compile and cache native code for the following methods on the following types. This isn't complicated stuff.
If the main objective is to reduce time-to-load and time-to-first-task, then PackageCompiler.jl is still the ultimate way to do so.
Because Julia is a dynamic language, there are some complicated compilation issues, such as invalidation and recompilation, that arise. Adding new methods or packages may result in already-compiled code no longer statically dispatching correctly, requiring invalidation and recompilation of that code.
It's slightly more complicated than what you stated. It's "I want to compile and cache native code for the following methods on the following types in this particular environment". PackageCompiler.jl can wrap the entire environment into a single native library, the system image.
Also, all packages must build with the latest version of R, or they are removed from CRAN. This makes the dep problems a lot less severe than we see with Python.
It is too slow to reimplement big pieces of software in it, so people just used bindings to existing code. And productivity rocketed!
I don't want to work with Fortran, C++, Cobol, etc. And I sure as hell don't want to figure out how to integrate such wildly different languages into my existing and modern ecosystem.
Ecosystems like Java, .NET, Golang, Rust, etc do away with this entire problem by virtue of... not calling into C 99.99% of the time, because they're <<fast enough>>.
Python was designed to call into C. It was always the solution to make Python fast: write the really slow parts in C and it might just turn out that will make the whole thing fast enough. Again: this is by design.
The languages and VMs you list were designed to be fast enough without calling into C. If you need that, great, use them.
People saying 'Python is slow' miss the point. It was never meant to be fast, it was always meant to be fast enough without qualifiers like 'no C'. If it isn't fast enough or otherwise not useful, don't use it, you've got plenty of alternatives.
1. The only way to integrate with C or other langs in early Pythons was to write interpreter extensions against the internal API. The cffi module seems to have appeared as late as 2012.
2. The Python interpreter API is not an excellent way to extend it, being as it is just whatever happens to be the internals of CPython specifically. There's now the HPy project that is trying to define a JNI equivalent for Python, i.e. something vendor-neutral, binary-compatible and so on.
A language designed to call into C would have had a much easier to use FFI from day one.
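For contrast with the interpreter-extension style, here is roughly what the "late" easy FFI looks like, a sketch using the third-party cffi module in ABI mode (the library name is Linux-specific; adjust per platform):

    from cffi import FFI

    ffi = FFI()
    ffi.cdef("double cos(double x);")   # declare the C signature as plain C
    libm = ffi.dlopen("libm.so.6")      # Linux-specific soname; assumption
    print(libm.cos(0.0))                # -> 1.0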
I agree. In fact with what seems to be an exponential growth in complexity of software ecosystems, what's keeping it all from eventually getting to a "tower of Babel" catastrophe? Of course, this does not only apply to software, but it is a good example.
There is also the issue that often they were published before adding a LICENSE file was a thing. I've found myself in the position of having to email professors at some random university to ask for permission to redistribute such a routine while packaging a library that depended on it. In one case I asked whether it would be possible to update their code with a license (it was just a zip file on Netlib), and the answer was, "no, but you have my email". So I found myself having to write something like "distributed with permission from the author, private correspondence, pinky swear" in my copyright file. Some of this code is so old that the authors aren't around anymore, and it would get "lost" in terms of being able to get permission to use it. I mean, it's a potential crisis, to be honest, if people really cared to check. (Until the copyrights expire, I guess, which is what, 70 years after the author's death or some such?)
Anyway, I wonder if a potential solution is to autotranslate some of these libraries using LLMs? Maybe AI will save us in the end. Of course you can't trust LLMs so you still need to understand the code well enough to verify any such translation.
«To promote the Progress of Science and useful Arts», my ass. Why do we keep tolerating this Disney-caused bullshit ?
Though I guess that we didn't, and this is what caused the Free Software movement, so it all works out in the end ?
I guess it is a matter of taste.
I'm not saying it's an excuse, but it's just how it got to where it was. Newer languages have a lot of lessons learnt to build upon to be decent from day 0.
I do not have to use Docker or a venv for my Rust, Go or Deno builds.
When I receive a Python program that I'd like to modify, I can.
When I receive a Go program that I'd like to modify, I must beg for the source code.
Do you "vendor" your database into your Go program? If not, you likely still need Docker, or something like it, for your program to work.
It's not a terrible language for sure. It's just the packaging systems and tooling around it that are face-palmingly awful.
If your projects really do benefit from other languages, maybe you're doing network or systems applications that need to be fast, then you might have easier packaging, but those languages require far more work due to fewer libraries or just being harder (Rust).
As usual with these things it's six of one and half a dozen of the other :)
Every single decision point or edge case represents permanent failure for hundreds of people and intense frustration for thousands. Of course, none of this is really to do with Python the language. It's more about the wide userbase, large set of packages and use cases, and overlapping generations of legacy tools. But most of it isn't C/C++/Fortran's fault either.
This link might give you a taste:
The built-in tools venv and pip (together with requirements.txt and constraints.txt) meet 99% of real-life dependency management needs.
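The happy path really is short; something like:

    python -m venv .venv
    . .venv/bin/activate
    pip install -r requirements.txt -c constraints.txt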
It already exists; it's called pyproject.toml, and it existed for years before that in the form of setup.py. requirements.txt means that projects can't be automatically installed, which contributes massively to the difficulty of getting packages to work.
Now that the janky corner case is a proprietary system run by cyber-landlords whose constraints are just... hostility, I feel less positive about the work being done to support it.
On the one hand it's really great that these people care so deeply about making these tools usable for everyone. And I applaud them for doing it, absolutely not suggesting they should change course, just pondering.
But... now instead of thinking "wow I'm so glad this work is happening" I do rather think "wow imagine what those wonderful people could achieve if they didn't have to work on this".
Indeed, most of the story is explaining why SciPy just had to hope someone else would make an open-source Fortran compiler for Windows, and it looks like it was mainly NVIDIA devs that provided salvation.
This whole problem could've been avoided if Python just didn't use proprietary tools in its toolchain.
See, the problem here is that if you want interop with, e.g., Ruby, Erlang, R, Perl, Go and probably a bunch of others (the only other exception I know of is PHP (PECL), which uses the MS toolchain), then you have to produce compatible binaries.
Ideally, it shouldn't be about the flavor of compiler, but be some kind of official documented format... that many compilers can easily implement. But, since de facto this format is "just use GCC", then, in practical terms, either use GCC, or pretend you use GCC.
This sounds like a bug? The point of a build tool is to run the commands you tell it, not to tell you "sorry, Dave".
So there was some extra work needed to create these DLLs. Either in the build description files, or in Meson. The SciPy people didn't want to implement this indirection in either place, and the Meson developers were not eager to help them either (they did help in general, for example with Fortran and Cython support; but they don't want to provide footguns) because it was indeed a hack. It only worked because the Fortran side didn't use files that were opened on the Python/C side, for example.
It does compensate by generally preserving a lot more sanity than its competitors, and having a readable and maintainable description of the build system.
But yeah, I think CMake is still the gold standard despite all its quirks, complexities and problems.
Edit: Caution: this statement is incorrect. It can generate CMakefiles as a backend. It can also generate build files for MSVC. It can also generate standard Makefiles.
Edit: Corrected downstream. CMake can generate Ninja files, which Meson also generates. I got this backwards but kept the edit so people won't get confused.
CMake is NOT a gold standard. CMake is an agglomerative disaster.
For example: try getting CMake to accept zig as your compiler. "Oh, your compiler command has a space in it? So sorry. I'm going to put everything after the space in all manner of weird places. Some correct--some broken--some totally random." If you're lucky CMake crashes with an inscrutable message. If you're unlucky, you wind up with a compiler command that fails in bizarre ways and no way to figure out why CMake is doing what it did.
This is my experience with CMake every damn time--some absolutely inscrutable bug pops up until I figure out how to route around it. If I'm really unlucky, I have to file a bug report with CMake as I can't route around it.
Sure, if some unfortunate soul has beaten CMake into submission and produced a functioning CMakefile, CMake works. If YOU are the poor slob having to create that CMakefile, you are in for worlds and worlds and worlds of pain.
CMake has had Ninja support for a long time. I've used it for at least 7 years at this point.
I know the basic thing I'm trying to do is not going to work, and that I'm going to wind up opening 20 browser tabs that alternate between (1) trying to understand from first principles how to do it properly (and getting frustrated going around in circles through their labyrinthine yet thoroughly incomplete docs), and (2) just desperately searching the rest of the web for the right incantation to whisper (and getting frustrated by all the blog posts and forum answers that describe how to do the thing before they went and changed how everything works).
Feeling the rage and despair build as hours roll by, and you're still staring at a screen full of the most cretinous syntax ever excreted into the world.
Includes GIMP, GTK+, Nautilus, Postgres, QEMU, Wayland…
Programs that need complex build systems, ones that require things like compiling a code generator first and then compiling the rest of the project with that generator, are quite common in the C++ world, to tame the language's shortcomings. Libraries like Qt, Protobuf and gRPC often introduce a crazy amount of build complexity.
CMake's complexity is a direct result of that, and it is currently the only build generator that can cope with it using basically every compiler in existence (including proprietary ones like ARMCC, ICC, MSVC and very limited ones like SDCC). Even Bazel cannot handle the same number of compilers and feature sets. That's what makes CMake the gold standard, not its string-driven scripting language.
CMake shares quite a bit of history with C++, and you hear sentences like "it's the only thing that works for this level of complexity" for both.
I’m no Microsoft fanboy. I’ve been in software dev for roughly 20 years at this point so generally view anything from Microsoft with genuine suspicion, so I get the hesitation to take it seriously. But it works across the big three (Windows, Linux, macOS) and is MIT licensed so I’d definitely recommend giving it a whirl.
The only serious knock against it is that they went the OG Homebrew route with a single Git repo containing all of their ports (equivalent to Homebrew Formulae). And then whoever designed the Git repo approach also knew slightly too much about Git internals and leveraged tree-ish refs as part of the versioning design, which is just weird and confuses anyone who hasn't spent time tearing into Git's object model.
So basically, vcpkg is honestly a good tool that does what it does fairly well. It may not do everything you need, but if it can it’s amazing.
Also, the buried lede here is how vcpkg handles binary caching. Think of it like sccache but at the dependency level. I’ve seen it drop CI runs from over an hour to 10m purely because it helps skip building dependencies without resorting to bespoke caching strategies.
Everything else is mostly statistical noise.
People in the C and C++ ecosystems like choice, so there will never be one single solution.
(I love Make).
Disclaimer: author of a soon-to-be released Meson competitor.
But one obscure, little gem of a rant that taught me what you just said is .
tl;dr: Build systems should run the commands you tell them to run, period. Because sometimes, the programmer actually does know what he's doing.
I am ashamed to admit that before I read that comment, I was thinking about making my build system magical. But after reading that comment, I realized that "magic" is why people hate build systems.
When you are subject to other people's magic, it usually ends in tears.
Your comment is such a concise description of the problem.
I just want a tool that doesn’t make ANY assumptions about compiler flags or compilers or how my project directory structure is laid out. I can do the legwork of inputting all the exact parameters and build configurations into the tool. I just need the tool to incrementally compile my code in parallel, without HACKS.
That's exactly what Ninja is, and it's existed since 2012.
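A hand-written build.ninja is about as assumption-free as it gets; a minimal sketch (the compiler, flags, and file names here are all hypothetical, nothing is inferred for you):

    cflags = -O2
    rule cc
      command = gcc $cflags -MD -MF $out.d -c $in -o $out
      depfile = $out.d
    rule link
      command = gcc $in -o $out
    build main.o: cc main.c
    build app: link main.o

Ninja runs exactly these commands, in parallel, rebuilding only what changed.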
CMake is a flawed generator for Ninja, but you can write your own. I did that for my project, as explained here - https://lobste.rs/s/qnb7xt/ninja_is_enough_build_system#c_tu...
I found it insightful because I didn't know. Also, how angry the commenter was; I'd been struggling to figure out why people hate build systems, and the hate pouring out from that comment was palpable enough to point me in the right direction.
Anyway, I've already read your comments on lobste.rs, and I'm glad it's worked for you.
Ninja is great (my build system will be able to read Ninja files), but there are two major problems.
First, you still need a generator. Most people don't think about reinventing the wheel like you do, so CMake still comes up. And a generator can still have the magic that people hate.
Second, it's still limited in what you can do. Sometimes, you need something more complex to make your build happen.
Ninja can definitely take care of the 95% case, and I actually will encourage people to use it before they use mine.
I'm only going to be targeting the people with the 5% case, or that hate all of the widely-used generators. I fall into the latter category myself.
I think Ninja almost always needs a generator -- it's too low level to write by hand, especially for C/C++ projects.
Meson apparently also generates Ninja, but I haven't used it.
I think the main difficulty with builds is solving the "Windows problem", as I wrote in that thread.
Good example of all that from yesterday:
Anyone who can solve that problem will be a hero, but I also think no one person can solve that problem.
That is, cross-language / polyglot builds and working with the existing tools from each ecosystem is THE problem. Unfortunately most people seem to think that the language they use is the only one worth solving for.
My build system is self-contained. It doesn't just do the configure; it can also do the build.
Also, one of its features is being able to treat external builds as part of the same build.
For example, if one of your dependencies is a CMake project, it will be able to run the CMake configure as a target, import the targets from the Ninja file, and run those targets as part of the build for your project.
It should be able to do the same for Cargo files, Zig build files, etc.
That would solve the polyglot issue, minus details, which I'm working hard on.
I'm not sure why you linked the post we're commenting on...
Edit: I'm also solving the Windows problem; my build system is Windows-native, and I have a design for something to build up command-lines based on the compiler in use, including MSVC.
Edit 2: I also understand why you question whether anyone can solve the polyglot problem. You are right to question whether I can. If your response is "show me the code," that is rational, and my response would be, "I'm almost there; give me three months." :)
Because working in a VM is inconvenient and has poor integration.
Also, in most cases for scientific computing, Mac users are using stuff like Remote SSH in VSCode to work directly on hardware that is running the code, which is pretty much always Linux.
Generally, it's kind of sad that manpower is being wasted on getting things running on Windows or Mac by the software developers. It should be the other way around: have Microsoft or Apple dedicate manpower to port things over and make both systems conform to the Linux standard.
Linux is also the common ground. If you have to choose ONE system, you would choose Linux. Otherwise you are going to need BOTH Mac and Windows, possibly even more. Just getting the hardware to test that would be a major setback for a small oss contributor.
I don’t think you can consider SciPy desktop software anyway. The IDE can by all means be, but just let it communicate with some daemon running in WSL, Docker, or remotely, over a standard interface.
Hey, that's an unpopular statement on wannabe-hacker resources like HN or Reddit. People feel creative using Linux on the desktop and feel in control of the machine, the famous System Administrator of Localhost experts ;)
Use for...what? Windows is primarily used for people who play games. Mac is used by people who want tech jewelry. Neither of which is related to development.
And it would make more sense to develop on a platform that runs the same kernel as the servers. This is the reason why the whole WSL2 exists with VSCode integration. Microsoft quickly realized that if they want to compete in the cloud with Azure, they have to be Linux first.
Or are we still going to pretend that a computer that you don't own, because Apple tells you what software you can and can't install on it, is somehow better for development?
Android/Linux, ChromeOS/Linux, WebOS/Linux also work great for the same reasons.
Other than that, better to leave it on servers and embedded devices as a headless UNIX clone, with cloud and hardware vendors taking the trouble to keep it running.
Yeah so this is indicative of the fact that you don't really have ANY experience with modern Linux. You can take a well supported laptop and install Linux Mint on it and everything will just work, no tweaking required. Try it sometime before making 10 year old arguments that Mac users were making back in the day and apparently still do now.
Furthermore, I'd go even further and argue that as a developer, learning how to configure basic documented things should be something that you know and is fairly straightforward for you to do, just like installing the tooling you need for your development.
As usual we get the answer,
"Have you ever tried distribution XYZ?"
The magical one that will sort out all problems, and then doesn't.
You must be doing something wrong if you are having to tweak kernel parameters.
There are also Framework laptops, and Librem laptops, that are Linux-first.
My current DD is an IdeaPad with Manjaro, which, even being somewhat more bleeding-edge than Debian-based ones, has not only been flawless, but things like Nvidia Optimus work straight out of the box, with external displays.
…except for AMD ones with AMD GPU :( https://www.wezm.net/v2/posts/2020/linux-amdgpu-pixel-format...
or Intel ones with a particular wifi card https://bugzilla.kernel.org/show_bug.cgi?id=203709
I however agree with you that at least for me, the Linux desktop on almost all laptops and desktops "just works". Especially when comparing with Macbooks - you have way less hardware choice there.
Can I install Ubuntu?
So the definitive answer is no, Macs are still pretty much locked down.
I get that people like the battery life and the hardware of Macs, which is fine for personal use, but objectively for a laptop that is going to be used for development, you get much more utility out of buying a "non mac" laptop of your choice in the form factor, and installing Linux on it.
In macOS, I am not prevented from running whatever I want. There are some extra buttons to click to allow certain kinds of software to run, but ultimately, Apple doesn't have a say on what I can and cannot run on my Mac. Macs aren't "still pretty much locked down" because the open source community hasn't been able to make a kernel up to your standards. That's just not a commonly accepted definition of "locked down".
It's been getting way better lately. I don't daily drive it mainly because I've yet to move my stuff from the macOS partition.
This might be true for extremely demanding tasks that need to run on a cluster or cloud but a surprisingly large amount of scientific computing is perfectly manageable (indeed much easier to manage) on a laptop, if there are compatible toolchains for the OS.
That said, I would not be dissatisfied with a world in which Linux was the OS of choice for nearly everyone. Fully agree it would be great if research software developers could focus on the domain, distributions take a wildly disproportionate amount of effort
What I don't use: files on the Windows FS from WSL2 continuously, or vice versa.
Exactly, your "occasionally and once in a blue moon" is close to nothing (but I agree, it's not nothing)
It is exactly a VM, and the WSL2 guest does not have full hardware access. The hypervisor can paravirtualize compatible GPUs, but for other hardware (such as USB) this is not possible. Hardware passthrough is also not possible in WSL2.
Sure, with enough begging and pleading, anything is possible, but that usually requires Conversations.
My (very limited) understanding of a hypervisor is that it is, by definition, used to run VMs.
So I don't get why that makes WSL2 not be counted as a VM, even "technically".
I assume you mean that once you enable the hypervisor, technically both Windows itself and WSL2 run as VMs in parallel?
Correct, I am referring to the traditional usage of VMs via VMware or VirtualBox, which do not have GPU acceleration, while WSL2 does.
It was like that 20 years ago, and it looks quite the same when I visit it for Alumni events.
WSL2 is a godsend for people forced on Windows by blind IT policies, but fortunately IT doesn't have that kind of control in academia.
What I've observed was also dictated by my own choices.
Definitely making the best of a less than ideal situation.
The inverse of κατα is often ανα, though αναστροφή literally means "up-turn", or invert.
So maybe ευστροφη, a "good turn" (eustrophe), would be better coinage.
But arguing with J.R.R. Tolkien on language coinage and being right would be a ... ευκαταστροφη, i.e. good luck!
But in general I love their noticing of the overflowing grace -- that's something that gives joy and happiness in the world
I wonder if they’d just go with libflame instead, if they started today.
Of course there’s lots of other functionality in scipy; iterative stuff, sparse stuff, etc etc, so maybe Fortran is unavoidable (although, Fortran is a great language, I’m glad the tooling situation is at least starting to improve on Windows).
Several key parts using Fortran have been removed once it became possible, for example the FFT stuff.
I can't imagine there are a lot of Fortran folks around maintaining these old libraries -- they must need maintenance, no?
The standard Fortran math libraries just work, and they are fast.
I should clarify that you can write C/C++ code that would have equivalent speed, especially with the C restrict keyword, but putting in an f2c step to translate the existing code will make things significantly worse in many cases.
Can you explain this like I'm 12?
What's an example of an aggressive optimization you can make based on arguments not being aliased and there being no pointers?
/* Without Fortran's no-aliasing guarantee (or C's restrict), the compiler
   must re-read *x before returning it: y might point at the same int. */
int f(int *x, int *y) { *x = 5; *y = 7; return *x; }

With restrict (or in Fortran, where dummy arguments are assumed not to alias), the compiler can just return the constant 5, keep values in registers, and vectorize loops over arrays. Otherwise, it will try to prove that no such aliasing is present by tracking your pointer usage, but that's not always possible; in those cases it will assume the worst and re-read values from memory.
It's awful for application development, but that isn't its niche.
I'm also curious about performance numbers on Windows (though, to first order, it doesn't matter... anything serious is probably running on a Linux machine).
The idea is that different tools may rise and fall in popularity, but they should all be following the standard, so the compatibility breaks should be minimal.
Will it work? No idea, but it's the best attempt to make things work well for everyone yet.
Python 3.12 might be the biggest churn moment, but there are probably a few more down the road, such as dropping legacy version specifiers.
I couldn't track down a more detailed scheduler-specific benchmark. I remember reading one on Phoronix a few months back...
> It is obvious to me. :)
resurrecting fortran: https://ondrejcertik.com/blog/2021/03/resurrecting-fortran/
I wonder if the Python alternatives will also get a similar boost for that reason. If maintaining the language and the ecosystem is that much of a bear, it would save a lot of human hours to do that, even if we started back from scratch for a bit.
The reason we need virtual this and that in Python is that some things work in one and some work in the others. This is particularly problematic on macOS, where sometimes the system falls back to the default (somewhat default) Python interpreter. I even have a script to check versions, and lots of environments for running different programs.
It is a mess.
After this magic, is there a new packaging environment, procedure, ... for the rest of us?
Anyway, the complexity is too damn high!
I still wonder if targeting Python's LIMITED C API wouldn't help in this case. I use a tool (for Nim) which seems to target that Limited API and solves the problem of specific Python version mismatches, at least for my specific code. I never had to upgrade a pure C/Nim extension due to a Python version change!
Using zig cc along with a fixed glibc (2.27) and a generic CPU flag also made it possible to target the Linux ecosystem in cases where the user is not a developer and just uses the compiled extension shipped with the package.
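The glibc pinning is expressed through zig's target triple; a sketch (the file names are hypothetical, and treat the exact flag set as an assumption):

    zig cc -target x86_64-linux-gnu.2.27 -shared -fPIC -o myext.so myext.c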
I can provide some background on the ABI issues in Windows.
The following is personal opinion not company opinion, and I am a product manager not an engineer so may have some technical details wrong. But:
At Embarcadero we're moving our C++Builder toolchain forward to a new Clang, and using a new C and C++ RTL layer [also see 1]. Like SciPy, we're now using UCRT, and the key bits that cause difficulty are the C++ RTL and platform ABI. Boy howdy do we have some stories.
* There is no standard Windows ABI beyond what msvc produces. This results in WinAPI (C-level, in other words!) APIs that could not in the past be reliably used from anything other than msvc because they could throw exceptions, rather than returning error codes, and Win64 SEH is not fully documented (clang-cl, which is open source, disappears into the closed source msvcrt to handle this.)
* Or another issue: cdecl isn't standardised. This may surprise those of you who think -- as I used to think! -- that cdecl is simple and a known calling convention for any platform. We have issues where sret for return values can be different between our toolchain and something built with MSVC. Since we need to change multiple languages, and one of our languages (Delphi) handles some returns (managed types) differently, changing this is more complex than it looks. We already did a lot of ABI compatibility work several years ago between multiple languages.
Back to the ABI: our new Clang is aiming at being fairly compatible with mingw-llvm on Windows. (And MSVC too, but we're starting with mingw-llvm as the basis.) That does not include a C++ ABI which the C++ committee has been resistant to, but it's a known good, working C++ toolchain, open source.
If Windows ever did have a platform ABI, it would likely be based on MSVC, but I would suggest that we -- developers -- should resist that until or unless the entire toolchain including runtime internals that affect the ABI is open sourced, or at least documented so that other toolchains can match it.
 sret is a hidden (?) or special return value used for structs, basically when a large (> register size) memory is required. I think. This is where I hope I don't embarass myself too much in the explanation. See eg https://stackoverflow.com/questions/66894013/when-calling-ff... In other words, returning values is more complex than it seems, even for a plain simple C method returning a struct which should be _incredibly basic_ and becomes a complex ABI issue.
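To make the idea concrete, a small C sketch of my understanding (ABI details vary; on SysV x86-64, for instance, structs larger than 16 bytes are returned through a hidden pointer like this):

    typedef struct { double a, b, c, d; } Big;   /* 32 bytes: too big for registers */

    Big make_big(void) { Big r = {1, 2, 3, 4}; return r; }

    /* Many ABIs lower the function above to roughly:
     *     void make_big(Big *hidden_out);
     * i.e. the caller allocates space and passes a hidden "sret" pointer,
     * and every toolchain must agree on exactly when and how this happens. */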
Isn't msvcrt source code included with Visual Studio? I distinctly remember looking through it many years ago.
And I see it now too on my install: C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\crt\src
But maybe it's not 100% of it and some key parts are missing, and obviously there might be legal issues reading this source code.
Archive link. Website doesn't load for me.
> Meson was going to refuse to accept the MSVC+gfortran combination
Back in the day, I went to Python's core-dev list and asked why. Why would any sane person ever use MSVC for a cross-platform language runtime? And guess what the answer was? Well... The answer was "Microsoft pays us, gives us servers to run CI on, and that's why we will use Microsoft's tools, goodbye!"
For reference, Ruby uses GCC for the same purpose as do plenty of other similar languages for this exact reason.
To give you some context, I ran into this problem when writing bindings to kubectl. For those of you who don't know, in order to interface Go with Python, one needs CGO, and on MS Windows that means MinGW. You could, in principle, build Python itself with GCC (a.k.a. MinGW) (and that's what MSYS2 a.k.a. Cygwin a.k.a. Git Bash does), but this means no ABI compatibility with the garbage distributed from python.org.
So, after I had a proof of concept bindings to kubectl working on Linux, I learned that there will be no way (well, no reasonably simple way) to get that working on Windows. So, the project died. (Btw, there still isn't a good Kubernetes client in Python).
On the subject of packaging. I've decided to write my own Wheel packager, just as a way to learn Ada. This made me read through the "spec" of this format while paying a lot more attention than I ever needed to before. And what a dumpster fire this format is... It's insane that this atrocity is used by millions, and so much of critical infrastructure relies on this insanity to function.
It's very sad that these things are only ever discussed by a very small, very biased, and not very smart group of people. But then their decisions affect so many w/o even the baseline knowledge of the decisions made by those few. I feel like Python users should be picking up pitchforks and torches and marching on PyPA (home-)offices and demand change. Alas, most those adversely affected by their work have no idea PyPA exists, forget the details of their work.
Don't criticise people for making certain decisions years ago when those don't match what you'd choose to do now. Often you'll find that they were very reasonable given the constraints at the time.
Also the spec will have evolved over time with changes that would have been made under constraint of the existing system, which tends to produce things that are not as nice compared to something that was designed from the get-go to support the features. This is something that's seen very often in software engineering, and are probably partly a reason why long-lived codebases tend to be dumpster fires in general.
Calling them 'very biased and not very smart' is not very constructive.
That's not to say that the wheel format isn't a dumpster fire (I'll have to take your word on that), or hasn't morphed into one with time & revisions.
Because, if so, I'm not buying it. Wheel is an iteration after Egg, that was created in a world full of package managers, packages of all sorts and flavors. Wheel authors failed to learn from what was available for... idk some odd thirty years? (I'm thinking CPAN).
But, it has problems that just show how immature the people who designed the format were when it comes to using existing formats. For example, the Wheel authors were completely clueless about multiple gotchas of Zip format (even though they've been using Egg which is also based on Zip for... what a decade? I mean, come on, you had to be blind and deaf not to know about these problems if you had anything to do with packaging).
But, the most important problem is in the name format. And it's not about knowing gotchas of other formats. It's just total lack of planning / ability to predict the next step. For instance, some parts of the Wheel name are defined roughly as "whatever some function in sys module returns on that platform". So, it leaves this part of the name unpredictable and undefined, essentially. Wheel authors cannot make a universal package because in order to do so they need to have knowledge of all existing platforms and all future platforms... which, of course, nobody does.
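You can watch that underspecification happen from inside Python; the platform part of the tag comes from interpreter introspection (the outputs below are just examples and vary per machine):

    import sysconfig
    print(sysconfig.get_platform())   # e.g. 'linux-x86_64' or 'win-amd64'

    # The third-party 'packaging' library enumerates the tags pip will accept:
    from packaging.tags import sys_tags
    print(next(iter(sys_tags())))     # e.g. 'cp312-cp312-manylinux_2_35_x86_64'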
And they've done it because... it was easy to do. Not because it was the right thing to do or the smart thing to do. The consequence of this decision is that implementing a PyPI competitor is virtually impossible because it's a layered crap-sandwich of multiple layers of mistakes that support each other (various parts of the name format were modified multiple times over the course of history, and weren't immediately supported by pip). Similarly, implementing a viable alternative to pip is equally almost impossible because of the same historical crap-pie of multiple mistakes on which Python package publishers built their whole infrastructure.
This led to the situation where today the whole Python packaging is locked into using PyPI, setuptools and pip. Those who are intimately familiar with the subject know that they are broken beyond repair and have no hope of getting better, but the mess is so big that undoing it just seems impossible. And, of course, PyPA is blissfully unaware of all the nonsense that's going on in its tools and keeps adding new worthless features to polish this turd.
Edit: not to imply that the work of the maintainers hasn’t been INCREDIBLE (it has). I just thought this XKCD was a funny take on how complex the python packaging ecosystem is.
after all, if you want linux, you know where to find it; it's right there in wsl2
also microsoft could start shipping a fucking compiler with their sorry malware-ridden excuse for an operating system, like every single other operating system vendor has for sixty years
Visual Studio is a free download. Most users are not developers. Waste of space to include it by default.
And the last time I was forced to install Windows 10, it spent a few hundred megabytes of bandwidth and disk space on Candy Crush Soda Saga, not to mention a bunch of other junk I never asked for, so disk space is not that precious to Microsoft.
You are asking for basic development capability in the base OS installation.
Vast majority of Windows development uses .NET, or uses frameworks etc. The barebones C++ compiler and standard library simply won't work for most development on Windows, so what's the point? You are expecting base functionality to cater to your very niche specific needs which is practically useless for the vast majority of Windows development in general. There's no business case for it. Won't happen.
Even on Linux I need to install a lot of headers and libraries and compilers and SDKs before it can be used for development. Ubuntu base install is practically useless for dev without `apt install <all the things>`.
unix v7 included a compiler and was three megabytes
the compiler was a tiny fraction of that
gcc 9 is about 50 megabytes
windows 11 is over 8 gigabytes; you need a 16 gig usb drive to install it
there are probably individual audio files included in windows that are bigger than gcc
also, tho, this is a lot like not including life jackets on a ship because most passengers don't get shipwrecked
choco install -y visualstudio2019buildtools
choco install -y visualstudio2019-workload-vctools
In this case it's installing Visual Studio 2019.
Even on Linux the OOTB setup is useless for my development. I always need to `apt install` all the compilers, frameworks, libraries, SDKs, utilities, etc., before it's usable.
there are free ones
All of those little `apt-get install ...-dev` runs to spend an afternoon on.
Followed by installing Clion, QtCreator or KDE + KDevelop.
You should be comparing to a C compiler for MS-DOS.
If you want to do a proper comparison, you should include GNU/Linux libraries for all major architectures already compiled, GUI frameworks, IDE, .NET, Python, Node, Java SDK, Azure integration SDKs, device drivers,...
Linking to UCRT using an entirely FOSS toolchain is, alas, nontrivial, but supported by mingw compilers (gcc and clang; no idea about the various FOSS Fortran compilers):
Life jackets are hopefully never needed, but when they are needed, the crew can't go to the warehouse and get them. Big difference.
Secondly, Solaris, AIX, HP-UX and DG/UX were the same in that you had to buy a UNIX developer license for the compilers.
So other UNIX vendors did follow suit, and I can't be bothered to dive into BYTE and DDJ ads from 1980 - 1990's to add others to the list.
like 90% of my links from 8 years ago are 404 now
it may well be obsolete in the sense that the new compiler is more convenient to use and produces more efficient code, but that's irrelevant
software doesn't rot like the potatoes you forgot about in the fridge
your argument is contingent on the presumption that people never do stupid things that cause them damage in the future. but if that were true, nobody would buy cigarettes, or for that matter microsoft windows, in the first place
My point still stands: the compiler should have been kept around if it is required to keep something business-critical on an 8 year old machine running. Whether such old versions of compilers are still provided depends on the goodwill of Microsoft.