Also, as a C++/Python dev - it's increasingly hard not to notice the awesome momentum Rust has garnered.
Apple may have the fastest processor, but Microsoft has the most comfortable tools. Both companies are not perfect, but if we must choose the lesser evil...
It's very fast for a low-power, laptop-focused processor, and even then it only truly excels at single-threaded workloads. It's outclassed by AMD's mobile offerings (the 4900HS and 4800U) in multi-threaded workloads on most tests. If you step up to desktop processors, top-end parts like the AMD 5950X are in a different class of multi-threaded performance.
Don't get me wrong, it's an exceptional processor and incredibly fast for its sub-25W TDP.
 - https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...
 - https://images.anandtech.com/doci/16226/M1.png
I have to add the following: Visual Studio supports Clang on Windows. And CMake - all within the IDE. And it works perfectly, with full support for C++17. Badass.
Even though I prefer administering Linux servers any day over Windows servers, I find myself often missing PowerShell when I use bash. It has some quirks but some of the design decisions are exactly what you'd hope someone would make if they redesigned a command-line shell 40 years later.
I still find it comical that we proudly paste around commands that just wrangle text no differently from what perl programmers did in the 90s, using sed, print, cut, etc, when things like PowerShell moved to piping objects between commands. It just removes a whole class of ambiguity.
In my 25 years of using their tooling and reading their documentation, they've never been more than borderline acceptable.
I booted up VS2019 today for the first time in a while (after waiting 90 minutes for it to install) and it still feels like using a Jetbrains IDE from 15 years ago, and it's still worse than what Borland produced in the 90s
... and it's even slower than IntelliJ IDEA, which just seems amazing as IDEA is written in Java
The documentation is excellent in my experience though. I love it. Visual Studio is really the only Microsoft developer tool I don't like. Even Visual Studio Code is much better.
It's as if the tools were built "by IDE users", "for IDE users".
But maybe I didn't stray too far off the beaten path?
The paradigm of sending text in and out of pipes is ~20 years older than that.
I think perl was created in the early 90s, correct?
 "1987 - Larry Wall falls asleep and hits Larry Wall's forehead on the keyboard. Upon waking Larry Wall decides that the string of characters on Larry Wall's monitor isn't random but an example program in a programming language that God wants His prophet, Larry Wall, to design. Perl is born."
 - http://james-iry.blogspot.com/2009/05/brief-incomplete-and-m...
I understood it as 'shell computing with stdin and stdout streams of characters' instead of structured data coming in and going out.
The single most heavily used Microsoft dev environment is the Microsoft VBA Editor. It has not had any update in nine years and is virtually unchanged in 22 years since the release of Office 2000, incredibly outdated in terms of usability. It also cannot be replaced by using a text editor like other IDEs can. It is anything but solid.
I don't usually use Windows, so perhaps I didn't spend enough time on it, but I was unable to create a colourmap with white background that didn't look horrible with some software. No matter how much I changed the colours, there was always some combination that gave me light grey on white or something like that.
If anyone has a colourmap I can use, that would be really appreciated.
Microsoft publish a tool in the Windows Terminal GitHub repo, ColorTool.exe, which can turn iTerm2 color scheme files into Windows Terminal ones. That might be your best bet because there are huge repositories of good iTerm2 schemes and really slick tools to quickly make your own with live previews.
There used to be a great Firefox plugin that allowed me to change the colours of web pages that use black background, but it doesn't work anymore, and I haven't found a good replacement.
What I tried to explain was the sensation, not the actual effect. It was a really bad example, I agree.
I’m not using Cygwin, but similarly, I decided to trick out my git bash with extra packages from MSYS2. I have all the Linux tools I need, been having a great experience with it.
The amount of resources it takes to run WSL, or a VirtualBox VM, on older hardware can be devastating. You don't hear this mentioned much.
You should not have any issues with WSL1 in terms of system load or "weight" (whatever the hell that is supposed to mean in computer terms.
The term is used all the time and I've never once seen a definition for it. What is the unit of measurement? Where's the border between "lightweight" and "not lightweight"? This industry as a whole ingests far too many hallucinogens.)
Cygwin has its quirks and flaws, like any software. But so does a full distro (Ubuntu, no less) running inside Windows 10.
It doesn't matter how fast the M1 is if Xcode can't keep up with modern development tools. I don't understand how Apple developers produce any software with it; the experience is truly awful compared to nearly every alternative. It's slow, buggy, and inscrutable. How long does it take to onboard a fresh grad at Apple, I wonder?
What one is first exposed to and how they're exposed to it probably makes a big difference.
I am on WSL1; two things hold me back from upgrading:
1. How's networking? I go on and off VPNs quite a bit.
2. How's cross-system access to files, especially performance-wise? I edit with PhpStorm for Windows, I share the files with Slack (also Windows), and I access the same files with WSL git, a LAMP stack, and more.
2. This is my current pain. The Windows file system is slow already, and accessing it from WSL adds overhead. Ideally I'd keep projects in WSL2 storage and IDEs in Windows. I've searched a lot for a solution but haven't found one. On the other hand, WSL2's Linux storage speed runs circles around WSL1.
Publishing Rust crates does require a workaround in WSL2 if it's from a Windows directory. That's annoying but pretty infrequent (for me) and not a difficult workaround.
Except for those two issues, I've not had any problems in either WSL version - certainly nothing that would give me pause to using either one.
If they get open source so much, then not open sourcing what really matters is intentional. And quite frankly, I'd like them to get rid of patents, their litigiousness, and their data collection. But I'm asking too much; I would settle for them just ceasing to suffocate competitors and innovation, and stopping the vendor lock-in. Same deal for their competitors.
Anything that really matters is just like the same old MS you know: DirectX, Office, Xbox, everything SaaS, IDEs, compilers, debuggers, language servers, file formats, UI frameworks, UI patents, GitHub, Windows, Windows Server - you'll find examples in every area. Practices like buying or killing competitors, like the Vulkan-related acquisitions. I get it, they are a company and need to maximize profits, so it's cool.
Microsoft has so many quality projects and good people working for them, it's just so frustrating that it's still like this. This will only get worse as the exploitative behavior and business models of their competitors like Google force their hand to do the same.
Moreover, every single Microsoft patent will now be used to fight against any patent claim concerning Linux, related open source software, and a limited set of codecs. 
Given the recent inclusion of an exFAT driver in Linux, this hurts MS's business even more.
Did Microsoft contribute all of their patents to the OIN? I seem to recall IBM only contributed a specific subset back in the day.
Is there a breakdown somewhere of their patent licensing revenue from Linux licensees, and the legal expenses they have in enforcing it?
Did they give up Linux patent licenses from Android makers? They had billions coming in from Samsung and LG in the past but that was all under NDAs, we don’t know what patents were under discussion.
Apple had "1,996 total patents granted between July 1, 2019 and June 1, 2020, more than any other company in Silicon Valley."
Apple + Microsoft. Expected: Macintosh apps. Actual: Windows
IBM + Microsoft. Expected: OS/2. Actual: MS NT kernel
Sybase + Microsoft. Expected: Sybase SQL server. Actual: MS SQL Server
Sun + Microsoft. Expected: Sun Java. Actual: .NET Framework
OpenGL ARB + Microsoft. Expected: OpenGL. Actual: Direct3D.
Also see what happened with Xamarin and Corel Office for Linux, DR-DOS, etc.
But hey, they bought Github and open sourced an Electron based editor so we have to worship them now.
Imagine you hire someone to do something for you and they end up stealing your business model and market share. That is Microsoft in a nutshell.
Well, they tried doing Java (in their own way); we got .NET because Sun sued them (rightfully so). I'm happy we got .NET, though.
Many years ago I worked developing on Windows 7, using C# and MS SQL Server, and had a satisfactory experience at that time. I can see how that convenience has captivated many users.
But knowing how those technologies came to be makes a difference for me.
For example, Direct3D can be great, but the resulting vendor lock-in prevents other operating systems like Linux from getting game releases. There was a time when OpenGL was the most popular graphics library, but Microsoft frightened OpenGL users by telling them that in future Windows releases OpenGL would go through a compatibility layer with a significant performance cost, and that they should switch to Direct3D. As a result, now everyone uses Direct3D.
Fortunately, projects like dxvk have implemented Direct3D on top of Vulkan and now many projects like Wine and Proton use it to run games using Direct3D on Linux.
Contrary to urban myths, it never had a place on the game consoles.
VSCode may be open source, but the .NET Core plugin bits inside aren't, so in practice the open sourceness is debatable.
Not entirely true: in fact, NT is heavily "inspired" by VMS, as Dave Cutler, the main architect of the NT kernel, used to work at DEC as a technical fellow. This is also one of the reasons the DEC Alpha could run Windows NT out of the box, as NT is quite similar to VMS in nature.
SGI; Nokia; Sendo; Spry; there were many others through the years.
Similar examples can be given for other IT giants.
It was one team splitting half-way and taking everything from the other.
IBM went their own way with OS/2 and Microsoft hired Dave Cutler from Digital to develop NT over several years. Windows NT is not OS/2. It never was.
Microsoft unilaterally changed the OS/2 3.0 API to match the Windows API, IBM did not approve of that, and the project split, with Microsoft's version of OS/2 3.0 becoming Windows NT.
Xerox's decision is considered dumb, but they were told exactly what was going to be done. The executives were stupid enough to agree because they did not want to hear about anything other than photocopiers and toners.
Microsoft on the other hand was initially a close Apple partner, developing the Z-80 SoftCard for Apple II and then helping develop applications for the Macintosh. Once they gained enough trust, they used that trust to clone the Macintosh (Windows 1.0).
I have been hearing this kind of thing for years, and I just don't get it.
Microsoft has turned around completely, becoming a huge open-source contributor. They committed all their patents to the OIN. They make .NET Core available for macOS and Linux (and open source). They are noticeably absent from the congressional hearings of the other huge tech companies who have been bad actors.
And yet we hear that they're "the same old MS". I get that no company is perfect, but in all honesty, what could Microsoft do that would change your perspective on them? And do you hold other companies (FAANG) to the same standard?
I mean it's really complicated. For starters I'd like them to stop forcing people to use their bad products just because they were there first to lock down the market and or abused their position. This is still happening today.
Then I'll be more open to use their good products, and there's plenty of that. I want to be excited when MS announces a new technology, not to be reminded of how bad they behave as a company and the negative impact they have on my life.
>And do you hold other companies (FAANG) to the same standard?
Yes. At least with e.g. Apple and Google I can just not use their products, though with Google it's getting harder and harder as they monopolize the web and lock down Android even more. Google removed "don't be evil" from their motto; MS should change theirs to "We love open source (when it's convenient)". Nothing wrong with doing manipulative PR like everyone else, but don't be surprised when some people don't want to drink it.
>Nothing wrong with doing manipulative PR like everyone else[.]
Just because everyone does something doesn't make it right.
1. I run Arch Linux on Windows via WSL, which provides pacman as a package manager. Pacman is way better than Homebrew.
2. Docker runs much faster on Windows compared to Mac.
3. My Mac will freeze up at times, while Ctrl+Alt+Del always works on Windows.
4. I am no longer limited to the crappy GPU options on Mac and can actually do gaming on my laptop.
(Obviously there's also the Windows Store, which is not bad for GUI programs, but for more developer-type stuff - e.g. installing Python - Chocolatey is great.)
Gave up and built a mini ITX box next to my laptop. Swap DisplayPort, swap one USB-C, done!
All big software corporations use open source strategically: keep the core money-makers closed, release tooling and other trinkets for developers so that they do some free advertising for the company. They also release expensive-to-develop software for free to destroy competitors and expand their influence.
Do you have any specific examples? From my perspective as an app user, rather than developer, the restrictions they've put in place seem to be beneficial to me. I like sandboxed apps, absolutely love that I can tell an app to bugger off when it tries to access some folder that it has no business reading.
I can imagine drivers, but if you stick to only Dell Developer Edition, Lenovo Linux-certified, Purism Librem, System76, or similar (still significantly wider selection than Apple s̶h̶e̶e̶p̶ fans seem satisfied with) hardware, things should work more smoothly than with Windows (drivers are built into the kernel and update with the OS).
Most Windows users haven't got a clue what Linux is, let alone WSL2.
I don't need a huge selection, but I do need polished hardware that I can walk into a store and try out before buying. Just things like the feel of the keyboard or the trackpad can make a system a nightmare to use if they're bad. I also need to be confident that I'm not going to get given the runaround if a piece of hardware (e.g. docking station or monitor) doesn't work with the machine.
> things should work more smoothly than with Windows (drivers are built into the kernel and update with the OS).
That's actually one of the things that worries me the most about Linux - I can't pin a driver to a version that's working, and if the kernel drops support for my hardware then I have to choose between losing support for my hardware or never upgrading my OS. I was excited for GNU/kFreeBSD up until the point where systemd arrived and destroyed all the advantages of free OSes.
Doesn’t this apply to my first two suggestions? The Dell XPS Developer Edition is the same hardware as the Windows version, just with Ubuntu preinstalled. And Dell has upstreamed the drivers. Similar with Lenovo hardware, except IIUC they haven’t started shipping Linux preinstalled yet; you just buy a normal Thinkpad and install your distro of choice. Purism has a basic return policy, though IIUC you have to pay shipping and, if there’s no hardware defect, a 10% restocking fee: https://puri.sm/policies/. System76 has a 30-Day Limited Money Back Guarantee linked in their website's footer: https://system76.com/warranty (^f 30 and hit enter a couple times).
> if the kernel drops support for my hardware
Is there precedent for this? It still supports 32-bit CPUs long after most people have upgraded — indeed certain distros like Ubuntu have stopped supporting them, but there are probably hundreds of other distros that haven’t done that¹, like Devuan, which also champions init freedom and maintains a list of a ~two dozen Free distros/OSes that don’t force systemd: https://www.devuan.org/os/init-freedom
¹Edit: Distrowatch lists 40 active distros that support i386, 102 active that don’t use systemd, and 25 active (and 115 non-active) that meet both of those criteria: https://distrowatch.com/search.php?ostype=All&category=All&o...
Drivers make a lot of difference to the feel of a trackpad, I wouldn't want to buy without testing the actual drivers. I wanted to like the XPS but its keyboard felt too rubbery to me. The idea of buying something and then shipping it back really doesn't appeal to me (I'm not from the US and the idea of just returning stuff isn't so much in our culture); I really want to go to an actual showroom-like store, try out a bunch of different laptops, and then walk away with the one I like, and I accept paying a premium for that. (I'm sure others will feel different, and maybe I'm not being reasonable; just trying to describe where I'm coming from).
> Is there precedent for this?
Yes, I've had three different pieces of hardware go unsupported in Linux (a Logitech QuickCam USB - eventually support reappeared in a different driver; an Asus A730W PDA; an old Hauppauge TV tuner card). Linux is openly hostile to out-of-tree drivers (no stable API, as a matter of policy), which the first two were, but even in-tree drivers are aggressively deprecated (the Hauppauge driver was one of those). I switched to FreeBSD on my home server because I was fed up with all the churn of Linux, and it's been a lot better.
It almost feels like a strategy - be standard enough to bring people in, but idiosyncratic enough to lock them in.
I'll be using gtk-rs thank you very much.
ssh and scp make sense to put into PowerShell, because they're everyday sysops things. curses is pretty POSIX-specific, and apps that use it are likely to need other POSIX stuff, so handle that with WSL rather than unnecessarily reinventing a wheel.
It all feels very vaguely analogous to the West's relations with China and Russia -- both China and Russia appeared very open for a time, and then closed back down after gaining enough leverage.
We've got a square peg, and an operating system that has both a round hole and a square hole. There is nothing nefarious about choosing not to use the round hole.
Please be aware that if you do this, your application won't be accessible with screen readers or other assistive technologies on Windows and Mac. At least not now. Maybe I'll have time to implement GTK accessibility backends for those platforms someday.
I looked the other day, but couldn't figure it out.
I would assume they would simply read the screen. Are they thus not capable of, for instance, reading a picture?
They should not be called "screen readers" but "text to speech" if they do not actually read the screen at the bitmap level.
However, VoiceOver for iOS has a new feature called screen recognition, which is exciting because it overcomes these limitations and provides some level of access to applications that are otherwise inaccessible. Hopefully other platforms will catch up.
Even then, true screen reading will be much more CPU-intensive than what screen readers currently do. And anyway, it's not here yet, except on iOS. So I will continue to warn developers away from toolkits that are inaccessible, in hopes that some blind person somewhere will be spared the pain of being blocked from doing a task because of an inaccessible application.
Certainly there would be significant demand for a sightless man to be able to read the dankest memes from pictures?
Training a computer to solve a captcha is a lot easier than training a computer to understand interface conventions.
There isn't an AI that can look at a jpeg screenshot of an interface and say, "there's two input elements, and it looks like they're grouped together and control the list to the left of them, and one of them is selected, which I can tell because it has some kind of subtle glow effect on it, but not the glow effect you would get if you moused over it."
There's nothing that can realistically do that today, and it wouldn't be fast enough or performant enough for low-powered cell phones and laptops even if it did exist.
If you're just looking at describing pictures themselves... sure, Facebook does auto-generate alt tags for images if you forget to put one in. And Youtube auto-generates captions. Those are valuable services, but they have a lot of glitches and mistakes. If you're a blind reader, you'd prefer not to have that experience when you're using a piece of software, you'd prefer something that just works reliably.
It's the same reason you probably used a keyboard to type this comment instead of speech to text. Speech to text is useful in some cases, but not good enough or accurate enough that you would want to use it as your main input method.
Screenreaders don't just read text, they control the interface itself using standardized keyboard shortcuts and input components within whatever graphical framework you're using, and they communicate what that interface is using a set of standardized terminology.
This seems to be more of an "accessibility suite", of which text-to-speech is a component, rather than what I would call a "screen reader".
If you're trying to do a search online for the kind of tool you're looking for, probably the phrase you would want to search for is "OCR software", short for Optical Character Recognition, or if you're trying to tag images just straight-up "image recognition."
You can see an example next to this very comment, actually: the up- and downvote buttons won't be accessible with OCR, but they have "title" attributes describing what they are. And consider that there's more to understanding a given user interface than raw text: radio buttons are tied to certain labels, there's hierarchy, all sorts of layout cues that would be opaque to a screen reader.
I suppose that ideally you'd want both: use native accessibility data if available, and fall back to OCR when there's no alternative.
I saw no malice there directed at users of assistive technology, but rather at users of non-open platforms.
And it's rather sad that free software - which in theory ought to be a perfect place for exploring assistive options - seems to lag far behind the closed shops.
There's also a separate NodeRT projection for generating Node/Electron modules: https://github.com/NodeRT/NodeRT
How well does gtk-rs work as a cross-platform GUI library? I know it works well on Linux, but I haven't tried it on macOS or Windows.
If anyone has experience using it for cross-platform development, I'd love to hear about it.
But it's a bit telling that the first hurdle you hit in running Python on Windows was the operating systems choosing different forty-year-old terminal emulator escape sequences. :)
Defeating 37 years of separating content from presentation
So we can embed garbage in strings again! Causing effects like this:
"On Linux, if you use ls --color then different file types use ANSI escape sequences as color indicators. If you pipe this output to less, then you get paging while retaining the color information. If you redirect this output to a file, that file contains the ANSI escape sequences. If you then use the cat command on the file, you see the coloring as the ANSI escape sequences are rendered by the terminal."
Interpreting commands from log messages! Because we haven't learned from history:
A glorious future awaits.
Indeed, there are some non-millennial shenanigans here.
And then the one which exploits a terminal for arbitrary command execution with a buffer overflow in the VT escape code parser. Wait, what am I talking about "inb4", that happened already and it didn't even need a buffer overflow: https://www.proteansec.com/linux/blast-past-executing-code-t...
> "mod_rewrite.c in the mod_rewrite module in the Apache HTTP Server 2.2.x before 2.2.25 writes data to a log file without sanitizing non-printable characters, which might allow remote attackers to execute arbitrary commands via an HTTP request containing an escape sequence for a terminal emulator."
Which WinRT call would that be?
Hi, owner of the Windows Console here.
Enabling VT control sequences is a matter of setting the ENABLE_VIRTUAL_TERMINAL_PROCESSING mode on the output handle, and it's available through the same interface as every console mode flag that came before it. SetConsoleMode has a long history--dating back to the nineties--that this just builds on.
It's behind a flag so that applications developed before ca. 2015 that like to emit control characters to the screen don't melt away into gibberish.
As a side note, Windows Terminal (the app) is absolutely fine letting programs emit and handle VT100 escapes without them issuing any particular opt-in call themselves... And upon looking, you're the person who enabled this feature! Thanks for that, but, why is it good for terminal but bad for conhost?
Back when VT parsing was implemented, it was an entirely new output stream parser built into a console that hadn’t been updated in a rather long time. We were careful commensurate with its age, and opt-in made sense. Language runtimes or compatibility layers like Cygwin could handle the decision for all of their hosted applications(1) and everything else would generally continue working properly. Now that we’re working on conhost’s replacement, we get to revisit some of those decisions!
Cases like this are especially acceptable because a user can always fall back to conhost. That escape hatch isn’t one we intend to get rid of.
1) this doesn’t do anything for manual or direct ports, and Cygwin is far from the only provider here. Representative example, etc.
I'm excited for Intel to do the same, and look forward to the lingering suspicion even then.
Being suspicious of their actions and their intentions is a very reasonable stance.
Nothing has changed.
... how much of my privacy I would like to have violated
And every time Windows has changed settings without asking permission and without giving me a clear explanation about the settings and changes.
I do not want my $OS to flout privacy.
I take it that Rust is next after Python, whose development has stagnated and where the mailing lists are now censored.
This may be a consequence of the consent decree they signed in the early 2000's, where it was alleged that they used their control of Windows APIs to further Internet Explorer market share at the expense of other browsers. Since then they have had to be careful not to act like a monopoly.
Munging things around with text is very much not the Windows way. For better or worse.
Well, it turns out Microsoft started a project to also generate Win32 API information in WinMD files, and to generate APIs from them automatically for all native languages! See win32metadata. This could make interfacing with Win32 APIs a lot more convenient!
- Managed languages if you can afford a GC
- C++ with Core Guidelines
Note that there are still some teams, like Azure Sphere and Azure RTOS, that only provide C-based SDKs, so not everyone is on the same wavelength.
Credit where credit is due: Microsoft has really been doing a lot to try and rebuild their credibility when it comes to the developer community. Off the top of my head I can think of TypeScript, open-sourcing .NET, WSL, and now this.
Oh and they haven't done an Oracle or a Cisco (or, let's face it, a Google at this point) with their acquisition of Github by letting it die on the vine or with hostile forced integrations.
I've always found it to be incredibly difficult because of the number of gotchas that can leak through and cause a segfault in Rust; it's immensely frustrating.
They have provided nontrivial wrappers, wrapping xcb data types so as to make them safe and uphold the invariants.
I don't know much of any Windows interfaces, but I doubt this would be impossible there either; they simply did not do so.
This Windows API in Rust seems frankly atrocious in design; most of the functions seem to be simple extern "C" declarations rather than actual attempts at proper wrapping.
This seems to be #1 only for now, which is fair because winapi is enormous. Also, there may be many ways to expose a safe Rust interface, all with different tradeoffs; by leaving #2 open they don't lock themselves into a single strategy prematurely. That said, I am looking forward to a safe wrapper as well.
However, my original reply was to this:
> It's a call out to Windows libraries that long predate Rust, and they are implemented in (mostly) C++. They don't provide any of the safety features on any data structure you pass to it. I don't see how it could be anything other than unsafe.
It very much can be, and often is; it simply isn't (yet) here.
I'm sorry you think the dull finish is ugly, but looking pretty is not its purpose. Its purpose is to bind strongly to the substrate it's on and provide a better surface for that shiny coat you so desire to adhere onto. Primer and language bindings alike.
No, it's simply arguing against the claim initially made that there is no way for it to not be grey.
> I'm sorry you think the dull finish is ugly, but looking pretty is not its purpose. Its purpose is to bind strongly to the substrate it's on and provide a better surface for that shiny coat you so desire to adhere onto. Primer and language bindings alike.
Maybe it isn't its purpose, but the original post claimed that it was impossible for it to be different, which is certainly false.
1) Having everyone generate the bindings means there will be many copies of each type, causing type errors. This only works if Windows is always a private dependency - a concept which isn't even fully implemented - and that is a bad assumption. Public dependencies are the logical default.
2) Putting the proc macro and the non-autogenerated parts in the same crate is cute, but it's a sloppy conflation of concerns and bad for cross-compilation. There is an underlying https://crates.io/crates/windows_macros, thankfully, but that should be the only way to get the macro.
> we could support this in future, but it is not an immediate goal. If this is something that folks would like to do, feel free to chime in on this issue and let us know.
Note that most of Microsoft regards Linux mainly as a server OS. You are not supposed to use it on the desktop; you should use Windows there instead.
Then what's the point of WSL? To allow server software development on a Windows desktop? OK. But then what's the point of trying to bring DX graphics to linux?
So Embrace Linux by providing WSL. Extend by bringing DX to it. Extinguish the standardized APIs for graphics and ML.
Got it. Check. Thanks for the confirmation.
Or LibGNMX, GX2 and NVN?
This new crate is a generator. If it requires access to the .winmd files or .h files from the SDK or whatever, where is a non-Windows builder going to get those, and will the generator look for them there?
For example, I used to maintain a generator that took COM type libraries and generated Rust bindings for them, but it required calling into the Windows API to work with the typelibs, and thus obviously required a Windows builder. The functionality of this crate is split between the windows_gen and windows_macros crates, and from a cursory glance I could not tell how it works wrt the Win32 bindings.
Edit: Answered by https://news.ycombinator.com/item?id=25863581 - the win32 API is now also described by .winmd files.
So the question is, can a non-Windows builder get these files, and then convince this crate to look for them wherever it placed them?
From memory (I haven't done Windows development in forever): there are a few projects that try to provide an open-source version of the Windows API headers. The MinGW project, for example, has a full set of drop-in replacement headers (meaning you can take code that compiled with Visual C++ and compile it directly with MinGW's GCC and it should 'just work'). The LCC-Win64 project has a similar set of headers.
Does that answer your question?
Yes, there's actually a prebuilt Windows.Win32.winmd file in the repo, underneath .windows/.
EDIT: windows-rs doesn't yet support cross compilation as easily as winapi. Here's the issue tracking it: https://github.com/microsoft/windows-rs/issues/142
- MS were already moving to a model where they prefer to stick with the standards-compliant form of a language: vide C++/WinRT
- Rust has enough features built in to facilitate this, specifically being able to hook into the compilation process
What are the APIs you're interested in that are missing or of poor quality?
Microsoft is anti freedom, anti developer.
Windows is only good for games. Real development happens on other platforms.
They bought github and turned it into something against the spirit of free software.
First thing was to add deleting issues and comments.
Microsoft is evil. Always has been, and if you trust them one bit you're a fool.
I clearly remember that this name was squatted.
Ada has something similar: https://github.com/AdaCore/win32ada
gubbins (British): fish parings or refuse; broadly, any bits and pieces; also gadgets, gadgetry ("the gubbins for changing a tire", "all the navigational gubbins")
As the GP used it, it means "and what not", "and such like" etc.
Most recently, I've been playing with eBPF on Linux: the system-call API is terse and pretty impenetrable, whereas the high-level APIs mean either writing gadgets in C with a nearly blind debugging experience, or relying on a library to retain the power of the feature set the kernel exposes.