Hacker News
Rust for Windows (github.com/microsoft)
785 points by dsr12 42 days ago | 293 comments

This is really cool. Kudos to Microsoft for really getting open source lately. I wrote an app (which failed miserably) called zenaud.io. When I started writing the app, Apple was hands-down a better developer experience. Now, it's the exact opposite -- macOS is increasingly painful, throwing up more and more roadblocks and constricting their platform ever more. And Visual Studio is better than Xcode IMO.

Also, as a C++/Python dev - it's increasingly hard not to notice the awesome momentum Rust has garnered.

OT: I must agree about your comparison of macOS and Windows. IMO Microsoft is doing a lot to improve the developer experience. WSL2 is so freaking good. It has its quirks and it has issues with some workflows, but I'm thinking about moving off macOS after having tried it.

Apple may have the fastest processor, but Microsoft has the most comfortable tools. Both companies are not perfect, but if we must choose the lesser evil...

> Apple may have the fastest processor, but Microsoft has the most comfortable tools. Both companies are not perfect, but if we must choose the lesser evil...

It's very fast for a low-power, laptop-focused processor, and even then it only truly excels at single-threaded workloads. It's outclassed by AMD's mobile offerings (4900HS and 4800U) in multi-threaded workloads on most tests[0]. If you step up to desktop processors, top-end parts like the AMD 5950X are in a different class of multi-threaded performance.

Don't get me wrong, it's an exceptional processor and incredibly fast for its sub-25W TDP.

[0] - https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...

This is simply a matter of adding more cores... I mean, I would hope the 5950X with its 16 full-strength cores would be better than an M1 with its big/little design...

I don't think it's as simple as adding more cores. Maybe the M1 doesn't scale up nicely.

Looking at the die shot [1], they have plenty of space for cores and cache if they remove the GPU. Surely it's not that simple, but I believe they should be able to scale to at least 8+4 cores without large interconnect changes, and at that point they are already knocking on the 5900X's door.

[1] - https://images.anandtech.com/doci/16226/M1.png

I would suspect by the time that product releases, AMD will be allowed some 5nm fab capacity for a future Zen product release.

We’ll find out this year.

Indeed, WSL2 is pretty cool. Also, Windows Terminal is actually pretty sweet, and I even gave PowerShell a spin the other day. The crazy thing is you can basically use it as a bash shell, and it gets the job done. My developer experience right now consists of PowerShell, where I do all my regular directory jumping, editing (vim), etc., and a Developer Shell with god awful classic "terminal" which I only use to call conan/cmake/clang-windows.

I have to add the following: MSVC supports Clang on Windows. And CMake - all within Visual Studio. And it works perfectly, with perfect support for C++17. Badass.

MS has always been solid with developer tools, documentation, and dev relations.

Even though I prefer administering Linux servers any day over Windows servers, I find myself often missing PowerShell when I use bash. It has some quirks but some of the design decisions are exactly what you'd hope someone would make if they redesigned a command-line shell 40 years later.

I still find it comical that we proudly paste around commands that just wrangle text no differently from what perl programmers did in the 90s, using sed, print, cut, etc, when things like PowerShell moved to piping objects between commands. It just removes a whole class of ambiguity.
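A tiny illustration of the difference (the `ps`-style output here is faked with `printf` so the example is self-contained; the PowerShell line is only the rough equivalent):

```shell
# Text pipeline: extracting a PID means guessing at column positions,
# which silently breaks if the output format ever changes.
printf '%s\n' '  101 bash' '  202 vim' | awk '$2 == "bash" {print $1}'
# prints: 101

# PowerShell pipes objects instead, so fields are selected by name:
#   Get-Process bash | Select-Object -ExpandProperty Id
```

With objects in the pipe there's nothing to re-parse downstream, which is exactly the class of ambiguity the parent is talking about.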

> MS has always been solid with developer tools, documentation, and dev relations.

In my 25 years of using their tooling and reading their documentation, they've never been more than what just qualifies as borderline acceptable.

I booted up VS2019 today for the first time in a while (after waiting 90 minutes for it to install) and it still feels like using a Jetbrains IDE from 15 years ago, and it's still worse than what Borland produced in the 90s

... and it's even slower than IntelliJ IDEA, which just seems amazing as IDEA is written in Java

I find Visual Studio intolerably slow, I don't quite understand why but it's been bad since at least Visual Studio 2017. Rider and IntelliJ are both much faster and I prefer them.

The documentation is excellent in my experience though. I love it. Visual Studio is really the only Microsoft developer tool I don't like. Even Visual Studio Code is much better.

Agree. Going through the .NET and ASP.NET documentation, none of the classes/methods have usage examples except for the super obvious ones (like the "String" class). They just show method signatures and that's it.

It's as if the tools were built "by IDE users", "for IDE users".

They may have stagnated but MSDN was, for many, many years, some of the best documentation I had seen anywhere. (Java was pretty solid too, and also PHP).

But maybe I didn't stray too far off the beaten path?

MSDN has a lot of documentation, but it is not easy to use.

Why not just install powershell on Linux?

> 90s

The paradigm of sending text in and out of pipes is ~20 years older than that.

I had actually written an earlier date but then I realized I was talking about Perl :)

I think perl was created in the early 90s, correct?

I believe the annals of history say 87

[0] "1987 - Larry Wall falls asleep and hits Larry Wall's forehead on the keyboard. Upon waking Larry Wall decides that the string of characters on Larry Wall's monitor isn't random but an example program in a programming language that God wants His prophet, Larry Wall, to design. Perl is born."

[0] - http://james-iry.blogspot.com/2009/05/brief-incomplete-and-m...

+1 for linking such beautiful annals

I misunderstood your post. You are 100% correct about perl.

I understood it as 'shell computing with stdin and stdout streams of characters' instead of structured data coming in and going out.

> MS has always been solid with developer tools, documentation, and dev relations

The single most heavily used Microsoft dev environment is the Microsoft VBA Editor. It has not had any update in nine years and is virtually unchanged in the 22 years since the release of Office 2000; it is incredibly outdated in terms of usability. It also cannot be replaced by a text editor like other IDEs can. It is anything but solid.

Except many end up installing VB.NET and using Office COM Automation instead.

I think they even ship powershell now for non-windows OSs.

Windows Terminal is very difficult to work with if one has astigmatism that prevents working with dark background.

I don't usually use Windows, so perhaps I didn't spend enough time on it, but I was unable to create a colourmap with white background that didn't look horrible with some software. No matter how much I changed the colours, there was always some combination that gave me light grey on white or something like that.

If anyone has a colourmap I can use, that would be really appreciated.

Here's the colormap I use, which I've made sure never has too-bright colors on the near-white background: https://pastebin.com/raw/AdR3sBSs

Microsoft publish a tool in the Windows Terminal GitHub repo, ColorTool.exe[1], which can turn iTerm2 color scheme files into Windows Terminal ones. That might be your best bet because there are huge repositories of good iTerm2 schemes[2] and really slick tools to quickly make your own with live previews.[3]

[1] https://github.com/Microsoft/Terminal/tree/main/src/tools/Co...

[2] https://github.com/mbadolato/iTerm2-Color-Schemes

[3] https://terminal.sexy/

I guess it depends. I use the Windows Terminal, and I have high astigmatism and myopia. I find it very readable using the Cascadia Code font. Btw I agree about dark backgrounds being less readable but I can’t get away with light backgrounds because of eye floaters.

I'm astigmatic but I never noticed any issue with dark backgrounds, not only in Windows Terminal, I didn't know it could happen

I don't think it happens to everybody with astigmatism. I cannot read any text with black background. After a very short amount of time I'm unable to read anything, and once I take my eyes off the screen it feels as though I've been staring into the sun.

There used to be a great Firefox plugin that allowed me to change the colours of web pages that use black background, but it doesn't work anymore, and I haven't found a good replacement.

This has to do with screen glare. A matte screen might treat you better.

I only use matte screens. It definitely isn't that though. I didn't mean to say that black screens are too bright, that wouldn't make any sense.

What I tried to explain was the sensation, not the actual effect. It was a really bad example, I agree.

Visual Studio has supported Developer PowerShell for a while now, and it’s possible to load it into Windows Terminal. It’s just a set of arguments. This blog post explains them: https://devblogs.microsoft.com/visualstudio/say-hello-to-the...

Wait until you find out Cygwin has been a thing for decades and you could've used a Linux-style environment under Windows all this time.

Not at all the same. Not at all.

I mean, sure it's not, but back in the late 2000s I did everything under it and it made for a great experience. Its little package manager was handy, everything I needed was there, it worked surprisingly well.

I agree. I recently bailed on WSL because my poor crappy laptop was buckling under the weight of the extra resource demand.

I’m not using Cygwin, but similarly, I decided to trick out my git bash with extra packages from MSYS2. I have all the Linux tools I need, been having a great experience with it.

The amount of resources it takes to run WSL, or VirtualBox, on older hardware can be devastating. You don't hear this mentioned much.

WSL1 doesn't use a virtual machine; it implements Linux syscalls on top of NT kernel syscalls. There's no VM or guest-OS overhead; aside from that kernel translation layer, WSL1 is 100% userland software.

You should not have any issues with WSL1 in terms of system load or "weight" (whatever the hell that is supposed to mean in computer terms. This is used all the time and I've never once seen a definition for it. What is the unit of measurement? What's the border between "lightweight" and "not lightweight"? This industry as a whole ingests far too many hallucinogens.)

Cygwin offers excellent out-of-the-box GNU userland functionality. It is fast and open-source, with a generous set of packages and languages. I can pin it and have a stable, internally consistent GNU userland. Very good for a developer using several interpreted languages.

Cygwin has its quirks and flaws, like any software. But so does a full distro (Ubuntu, no less) running inside Windows 10.

WSL1 isn't a full distro, it's only the userland stuff. You can also easily make your own userland in Linux, tar it up and deploy it as a WSL environment. It's really nice.
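As a sketch of that workflow (the rootfs path and distro name here are made up; `wsl --import` is the WSL command for registering a tarball as a new distro):

```shell
# Inside any Linux environment: pack a root filesystem into a tarball.
# (A real rootfs would come from e.g. debootstrap or a container export.)
mkdir -p /tmp/myroot/bin /tmp/myroot/etc
tar -C /tmp/myroot -czf /tmp/myroot.tar.gz .

# Then, from Windows (PowerShell or cmd), register and start it:
#   wsl --import MyDistro C:\wsl\MyDistro myroot.tar.gz
#   wsl -d MyDistro
```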

There was even a period of time when Cygwin's X server supported GLX and I managed to get some OpenGL software I wrote for Linux to run in Cygwin, but for some reason it was removed or stopped working.

You can still accomplish this with the cygwin packages xorg-server and xinit. You can then export DISPLAY=:0 in your bash shell and have working OpenGL, e.g. glxgears assuming you have the necessary packages available in your "remote" session/WSL. Here's a GitHub gist I use for this: https://gist.github.com/andrewmackrodt/b53943185bbbd804ef4b0...
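Condensing the steps above into a sketch (assuming, as the comment does, that the Cygwin X server ends up as display :0 on the Windows side; the `startxwin` invocation is a typical Cygwin/X launch, not taken from the gist):

```shell
# On the Windows/Cygwin side, start the X server listening for TCP clients:
#   startxwin -- -listen tcp
# Inside the "remote" session (e.g. WSL), point X clients at that display:
export DISPLAY=:0
# Any X/OpenGL client should now render through the Cygwin server, e.g.:
#   glxgears
```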

What is windows terminal? cmd?

It's like the terminals found on Linux. I would give it a recommendation, but I still encounter issues with tmux sessions and the mouse (yes, I have the latest from GitHub installed). I recommend the terminal in VSCode, which works as expected with ssh + tmux.

> Apple may have the fastest processor, but Microsoft has the most comfortable tools.

It doesn't matter how fast the M1 is if Xcode can't keep up with modern development tools. I don't understand how Apple developers produce any software with it; the experience is truly awful compared to nearly every alternative. It's slow, buggy and inscrutable. How long does it take to onboard a fresh grad at Apple, I wonder?

I think IDEs may be a bit more subjective than they are often presented. Personally speaking, my experience with Xcode has been just ok (not amazing), while IntelliJ-based IDEs have been clunky and arcane, with some of their "smarts" functioning in ways that aren't expected or intuitive at all. Both are functional, but if you let me pick which to spend a work day in, it'd probably be Xcode.

What one is first exposed to and how they're exposed to it probably makes a big difference.

I have an existing app in Swift and Interface Builder. I hate Xcode. Simple things like deploying the app to a device are hit or miss (usually miss). Is there a development environment that I could use for my app which is better? Happy to use any platform (macOS, Linux, Windows).

AppCode is probably the only thing worth checking out. Swift isn't a useful language outside of the Apple ecosystem, and interface builders come and go, usually go (design in Figma/Adobe products, implement in code rather than a UI WYSIWYG).

> WSL2 is so freaking good.

I am on WSL1; two things hold me back from upgrading:

1. How's networking? I go on and off VPNs quite a bit.

2. How's cross-system access to files, especially performance-wise? I edit with PhpStorm for Windows, I share the files with Slack (also Windows), and I access the same files with WSL git, a LAMP stack, and more.

1. Just works. The only issue I found is that WSL2 will not update its DNS resolver IPs when connecting to the VPN. There's a workaround script. So it's either exit the terminal and re-enter it, or run the script to update them.

2. This is my current pain. The Windows file system is slow already, and accessing it from WSL adds an overhead. Ideally I keep projects in WSL2 storage and IDEs in Windows. I searched a lot for a solution but haven't found one. On the other hand, WSL2 Linux storage speed runs circles around WSL1.
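For what it's worth, the workaround script mentioned for the DNS issue usually boils down to two small config changes (the 10.0.0.2 address below is a placeholder for your VPN's actual resolver):

```
# /etc/wsl.conf  (tells WSL2 not to regenerate resolv.conf at boot)
[network]
generateResolvConf = false
```

```
# /etc/resolv.conf  (pin the VPN's resolver; 10.0.0.2 is a placeholder)
nameserver 10.0.0.2
```

After editing both files, run `wsl --shutdown` from Windows and reopen the terminal so WSL2 picks up the change.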

I never had an issue with WSL1 until some bizarre bug that cropped up seemingly out of nowhere in which I would blue-screen during some Rust compilation. I didn't have the issue after maybe four or six months of using Rust in WSL1 until suddenly I did. Upgrading to WSL2 fixed the problem.

Publishing Rust crates does require a workaround[0] in WSL2 if it's from a Windows directory. That's annoying but pretty infrequent (for me) and not a difficult workaround.

Except for those two issues, I've not had any problems in either WSL version - certainly nothing that would give me pause to using either one.

[0] https://github.com/rust-lang/cargo/issues/8439#issuecomment-...

I use X410 and run tools from inside of WSL2 by exporting the display. Works well for Emacs, JetBrains Rider, and every other GUI application I've tried so far.

Concur. X410 is great and survives where xming and vcxsrv both crash for me. It's not free but the "not crashing" feature makes it well worth the $10 I paid for it.

That sounds excellent: I’ll have to give it a shot. VSCode under WSL2 with its Docker support is neat, but can be slow due to the storage system overhead. This might solve that for my team!

Re 2) try NFS exports on either the Windows host or WSL2. I've used an NFS mount inside WSL2 and it works very well. Don't put it in /etc/fstab, though - for me it caused a WSL2 hard lock on start.

> As a rule of thumb, WSL 2 accessing host (NTFS) files is about 5 times slower than WSL 1 accessing those same files.


Buying Microsoft for its OS and Apple for its hardware? What is this, opposite day?

Throw off all your handcuffs and move directly to Linux. It will be really refreshing!

Please know that networking in WSL2 is a nightmare if you're working behind a VPN. We are currently staying on WSL1 because the internet won't work in WSL2 under the corporate VPN.

Cisco AnyConnect is notorious for that. You have to bump its interface metric above a certain value so the WSL2 network (or whatever does the routing) can do its work.

>This is really cool. Kudos to Microsoft for really getting open source lately

If they get open source so much, it means not open-sourcing what really matters is intentional. And quite frankly, I'd want them to get rid of the patents, their litigiousness, and the data collection too. But I'm asking too much, so I would settle for them to just stop suffocating competitors and innovation, and stop with the vendor lock-in. Same deal for their competitors.

Anything that really matters is just like the same old MS you know. DirectX, Office, Xbox, everything SaaS, IDEs, compilers, debuggers, language servers, file formats, UI frameworks, UI patents, GitHub, Windows, Server: you'll find examples in every area. Practices like buying or killing competitors (e.g. the Vulkan-related acquisitions). I get it, they are a company and need to maximize profits, so it's cool.

Microsoft has so many quality projects and good people working for them, it's just so frustrating that it's still like this. This will only get worse as the exploitative behavior and business models of their competitors like Google force their hand to do the same.

Microsoft joined the Open Invention Network, a defensive patent pool protecting Linux (kernel and distributions). This directly cut into their patent revenue and removed some of their leverage over Android OEMs. This matters a lot to the Android and wider Linux ecosystems.

Moreover, every single Microsoft patent will now be used to fight against any patent claim concerning Linux, related open source software, and a limited set of codecs. [0]

Given the recent inclusion of an exFAT driver in Linux, this hurt MS's business even more.

[0] https://openinventionnetwork.com/linux-system/

> Moreover, every single Microsoft patent will now be used to fight against any patent claim concerning Linux, related open source software, and a limited set of codecs.

Did Microsoft contribute all of their patents to the OIN? I seem to recall IBM only contributed a specific subset back in the day.

According to the zdnet article I posted in a sibling comment, they did contribute all their patents, which is about 60,000 of them. They gave up significant licensing revenue.

Did they?

Is there a breakdown somewhere of their patent licensing revenue from Linux licensees, and the legal expenses they have in enforcing it?

Did they give up the Linux patent licenses from Android makers? They had billions coming in from Samsung and LG in the past, but that was all under NDAs; we don't know what patents were under discussion.

I have no idea what their legal expenses were, but they explicitly said it covered all their patents. The link I posted mentions Android as something that would be covered.

"By joining the Open Invention Network, Microsoft is offering its entire patent portfolio to all of the open-source patent consortium's members."


Apple had "1,996 total patents granted between July 1, 2019 and June 1, 2020, more than any other company in Silicon Valley."


My recollection from previous OIN threads was that it came with a lot of caveats. I can't comment any further from lack of knowledge about patents and its subtleties. Would love to see an analysis of what really happened in practice since they did that.

All core MS products are the result of some level of backstabbing.

Apple + Microsoft. Expected: Macintosh apps. Actual: Windows

IBM + Microsoft. Expected: OS/2. Actual: MS NT kernel

Sybase + Microsoft. Expected: Sybase SQL server. Actual: MS SQL Server

Sun + Microsoft. Expected: Sun Java. Actual: .NET Framework

OpenGL ARB + Microsoft. Expected: OpenGL. Actual: Direct3D.

Also see what happened with Xamarin and Corel Office for Linux, DR-DOS, etc.

But hey, they bought Github and open sourced an Electron based editor so we have to worship them now.

Imagine you hire someone to do something for you and they end up stealing your business model and market share. That is Microsoft in a nutshell.

> Sun + Microsoft. Expected: Sun Java. Expected: .NET Framework

Well, they tried doing Java (in their own way), we got .NET because Sun sued them (rightfully so). I'm happy we got .NET though.

All the technologies listed above are good from a technological standpoint. My critique is about corporate ethics rather than whether or not those technologies are good or convenient.

Many years ago I worked developing on Windows 7, using C# and MS SQL Server, and had a satisfactory experience at that time. I can see how that convenience has captivated many users.

But knowing how those technologies came to be makes a difference for me.

For example, Direct3D can be great, but the resulting vendor lock-in prevents other operating systems like Linux from getting game releases. There was a time when OpenGL was the most popular graphics library, but Microsoft frightened OpenGL users by telling them that in future Windows releases, OpenGL would go through a compatibility layer with a significant performance cost and that they should switch to Direct3D. As a result, now everyone uses Direct3D.

Fortunately, projects like dxvk have implemented Direct3D on top of Vulkan and now many projects like Wine and Proton use it to run games using Direct3D on Linux.

> There was a time where OpenGL was the most popular graphics library

Contrary to urban myth, it never had a place on the game consoles.

> open sourced an Electron based editor

VSCode may be open source, but the .NET Core plugin bits inside aren't, so in practice the open sourceness is debatable.

> IBM + Microsoft. Expected: OS/2. Actual: MS NT kernel

Not entirely true, because NT is in fact heavily "inspired" by VMS: Dave Cutler, the main architect of the NT kernel, used to work at DEC as a technical fellow. This is also one of the reasons the DEC Alpha could run Windows NT out of the box, as NT is quite similar to VMS in nature.

I cannot call those backstabbing. They are more of the results of market competition.

It’s not happening anymore, partly because of reputation and partly because they’re no longer the 800lbs gorilla they once were - but the “Microsoft kiss of death” was a thing - cooperating with Microsoft often resulted in great damage to the other company.

SGI; Nokia; Sendo; Spry; there were many others through the years.

Nokia has itself to blame, with the internal teams competing and the board promising a hefty bonus to Elop if he managed to do what he did: selling the mobile business unit.

Similar examples can be given for other IT giants.

This wasn't team 1 vs team 2.

It was one team splitting half-way and taking everything from the other.

Explain how it was like that in the case of NT.

IBM went their own way with OS/2 and Microsoft hired Dave Cutler from Digital to develop NT over several years. Windows NT is not OS/2. It never was.

That is a mischaracterization of what happened.

Microsoft unilaterally changed the OS/2 3.0 API to match the Windows API, IBM did not approve of that, and then the project split, with the Microsoft version of OS/2 3.0 becoming Windows NT.

to be fair, apple stole the technology that they gave microsoft.

Apple lawfully licensed the technology from Xerox PARC. Xerox knew they were licensing the technology, with the likely objective of it being copied. That's a substantial difference with respect to what Microsoft did.

Xerox's decision is considered dumb, but they were told exactly what was going to be done. The executives were stupid enough to agree because they did not want to hear about anything other than photocopiers and toners.

Microsoft on the other hand was initially a close Apple partner, developing the Z-80 SoftCard for Apple II and then helping develop applications for the Macintosh. Once they gained enough trust, they used that trust to clone the Macintosh (Windows 1.0).

> Anything that really matters is just like the same old MS you know.

I have been hearing this kind of thing for years, and I just don't get it.

Microsoft has turned around completely, becoming a huge open-source contributor. They committed all their patents to OIN. They make .NET Core available for macOS and Linux (including open source). They are noticeably absent from the congressional hearings of the other huge tech companies who have been bad players.

And yet we hear that they're "the same old MS". I get that no company is perfect, but in all honesty, what could Microsoft do that would change your perspective on them? And do you hold other companies (FAANG) to the same standard?

>what could Microsoft do that would change your perspective on them?

I mean, it's really complicated. For starters, I'd like them to stop forcing people to use their bad products just because they were there first to lock down the market and/or abused their position. This is still happening today.

Then I'll be more open to use their good products, and there's plenty of that. I want to be excited when MS announces a new technology, not to be reminded of how bad they behave as a company and the negative impact they have on my life.

>And do you hold other companies (FAANG) to the same standard?

Yes. At least with e.g. Apple and Google I can just not use their products, but with Google it's getting harder and harder as they monopolize the web and close/lock Android even more. Google removed don't be evil from their motto, MS should change theirs to We love open source when it's convenient. Nothing wrong with doing manipulative PR like everyone else, but don't be surprised when some people don't want to drink it.

I agree with everything you've said, except

>Nothing wrong with doing manipulative PR like everyone else[.]

Just because everyone does something doesn't make it right.

I agree with you.

1. I run Arch Linux on Windows via WSL, which provides pacman as a package manager. Pacman is way better than Homebrew.

2. Docker runs much faster on Windows compared to Mac.

3. My Mac will freeze up at times, while Ctrl+Alt+Del always works on Windows.

4. I am no longer limited to the crappy GPU options on Mac and can actually do gaming on my laptop.

Agreed. BTW if you're looking for an apt/brew-like experience on Windows proper, try Chocolatey: https://chocolatey.org/

(Obviously there's also the Windows Store, which is not bad for GUI programs, but for more developer-type stuff - e.g. installing Python - Chocolatey is great.)

I love Chocolatey and use it quite a bit. But I like Arch Linux's AUR more because it contains way more packages. Though Microsoft also seems to be on their way to making their own.


It's a different use case - use Chocolatey to manage native windows installers - it's more like brew casks.

I prefer Scoop myself. Everything is isolated to the user's profile. Check it out.


Hasn't WinGet superseded Choco of late?

WinGet isn't publicly available yet.

Scoop is a good option too IMO.

+1, Docker under WSL2 is way better than macOS

If you find yourself in a Mac again I'd really recommend nix over homebrew

For what it’s worth, the GPU thing can be solved using an eGPU on Intel macs — but the thermal throttling and CPU perf basically killed that dead in the water for me; I was running an RX 590 which worked brilliantly, but gaming itself was lacking sadly.

Gave up and built a mini ITX box next to my laptop. Swap DisplayPort, swap one USB-C, done!

You might enjoy the backstory to the Windows task manager:


Apple are throwing more roadblocks at least partly because developers are becoming more and more deplorable, trying to claw every penny they can by collecting and selling every bit of metadata (or even data) they can get their claws on. Microsoft aren't throwing similar roadblocks at least partly because they're one of the deplorables people need protection from.

All big software corporations use open source strategically: keep the core money-makers closed, release tooling and other trinkets for developers so that they do some free advertising for the company. They also release expensive-to-develop software for free to destroy competitors and expand their influence.

I am an Apple/Linux diehard, but Visual Studio has always been superior to Xcode imho.

> throwing up more and more roadblocks and constricting their platform ever more

Do you have any specific examples? From my perspective as an app user, rather than developer, the restrictions they've put in place seem to be beneficial to me. I like sandboxed apps, absolutely love that I can tell an app to bugger off when it tries to access some folder that it has no business reading.

What about GNU? It seems like most Windows-using devs use WSL2’s Linux VM. What advantages does that have over keeping the MS OS’s forced updates, BSODs, etc. in a VM, while keeping a free OS stably settled on bare metal?

I can imagine drivers, but if you stick to only Dell Developer Edition, Lenovo Linux-certified, Purism Librem, System76, or similar (still significantly wider selection than Apple s̶h̶e̶e̶p̶ fans seem satisfied with) hardware, things should work more smoothly than with Windows (drivers are built into the kernel and update with the OS).

It seems like most Windows users use WSL2’s Linux VM.

Most Windows users haven't got a clue what Linux is let alone WSL2.

Or that Rust isn’t iron oxide. I’ve edited that line to be clearer, and I’ll link definitions for all the other words ~~if someone will show me a pastebin that will hold the 583732 byte file I’ve just created for shell (xargs & dict) practice for free~~ Edit: https://ghostbin.com/paste/kgwWc.

> I can imagine drivers, but if you stick to only Dell Developer Edition, Lenovo Linux-certified, Purism Librem, System76, or similar (still significantly wider selection than Apple s̶h̶e̶e̶p̶ fans seem satisfied with) hardware

I don't need a huge selection, but I do need polished hardware that I can walk into a store and try out before buying. Just things like the feel of the keyboard or the trackpad can make a system a nightmare to use if they're bad. I also need to be confident that I'm not going to get given the runaround if a piece of hardware (e.g. docking station or monitor) doesn't work with the machine.

> things should work more smoothly than with Windows (drivers are built into the kernel and update with the OS).

That's actually one of the things that worries me the most about Linux - I can't pin a driver to a version that's working, and if the kernel drops support for my hardware then I have to choose between losing support for my hardware or never upgrading my OS. I was excited for GNU/kFreeBSD up until the point where systemd arrived and destroyed all the advantages of free OSes.

> I do need polished hardware that I can walk into a store and try out

Doesn’t this apply to my first two suggestions? The Dell XPS Developer Edition is the same hardware as the Windows version, just with Ubuntu preinstalled. And Dell has upstreamed the drivers. Similar with Lenovo hardware, except IIUC they haven’t started shipping Linux preinstalled yet, you just buy a normal Thinkpad and install your distro of choice. Purism has a basic return policy, though IIUC you have to pay shipping and, if there’s no hardware defect, a 10% restocking fee: https://puri.sm/policies/. System76 has a 30-Day Limited Money Back Guarantee linked in their website's footer: https://system76.com/warranty (^F 30 and hit enter a couple times).

> if the kernel drops support for my hardware

Is there precedent for this? It still supports 32-bit CPUs long after most people have upgraded — indeed certain distros like Ubuntu have stopped supporting them, but there are probably hundreds of other distros that haven’t done that¹, like Devuan, which also champions init freedom and maintains a list of a ~two dozen Free distros/OSes that don’t force systemd: https://www.devuan.org/os/init-freedom

¹Edit: Distrowatch lists 40 active distros that support i386, 102 active that don’t use systemd, and 25 active (and 115 non-active) that meet both of those criteria: https://distrowatch.com/search.php?ostype=All&category=All&o...

> Doesn’t this apply to my first two suggestions? The Dell XPS Developer Edition is the same hardware as the Windows version, just with Ubuntu preinstalled. And Dell has upstreamed the drivers. Similar with Lenovo hardware, except IIUC they haven’t started shipping Linux preinstalled yet, you just buy a normal Thinkpad and install your distro of choice.

Drivers make a lot of difference to the feel of a trackpad; I wouldn't want to buy without testing the actual drivers. I wanted to like the XPS but its keyboard felt too rubbery to me. The idea of buying something and then shipping it back really doesn't appeal to me (I'm not from the US and the idea of just returning stuff isn't so much in our culture); I really want to go to an actual showroom-like store, try out a bunch of different laptops, and then walk away with the one I like, and I accept paying a premium for that. (I'm sure others will feel differently, and maybe I'm not being reasonable; I'm just trying to describe where I'm coming from.)

> Is there precedent for this?

Yes, I've had three different pieces of hardware go unsupported in Linux (Logitech QuickCam USB - eventually support reappeared in a different driver; Asus A730W PDA; an old Hauppauge TV tuner card). Linux is openly hostile to out-of-tree drivers (no stable API, as a matter of policy), which the first two were, but even in-tree drivers are aggressively deprecated (the Hauppauge driver was one of those). I switched to FreeBSD on my home server because I was just fed up with all the churn of Linux, and it's been a lot better.

And meanwhile, GNU/Linux continues to be better than both. I love developing on a system made by developers.

Rust needed a GUI and Microsoft provided one. They seem to be very focused on giving developers what they need, but only to a point. I've been doing some system glue stuff, and while it's nice that powershell has ssh and scp, they are missing some options I want. I was going to use curses with python (batteries included!), only to find out it's not supported on Windows.

It almost feels like a strategy - be standard enough to bring people in, but idiosyncratic enough to lock them in.

I'll be using gtk-rs thank you very much.

I do think it is a strategy, but I think it's a rather simpler one than that: basic work triage and scope management.

ssh and scp make sense to put into powershell, because they're everyday sysops things. curses is pretty posix-specific, and apps that use it are likely to need other posix stuff, so handle that with WSL rather than unnecessarily re-inventing a wheel.

Never attribute to malice what can be adequately explained by incompetence, laziness, or limited resources.

It's hard to forget that Microsoft's official, documented policy for a very long time was Embrace, Extend, and Extinguish.

It all feels very vaguely analogous to the West's relations with China and Russia -- both China and Russia appeared very open for a time, and then closed back down after gaining enough leverage.

It's hard for me to see why we are doggedly ignoring the existence of WSL in an effort to manufacture a controversy.

We've got a square peg, and an operating system that has both a round hole and a square hole. There is nothing nefarious about choosing not to use the round hole.

Much as I like WSL, WSL is the natural "embrace" phase of an EEE plan, and a partial implementation with key compatibility differences the "extend" phase. I will be optimistic but forever wary, and continue to do all of my serious work in native Linux.

Trying to bring DX to Linux was also an attempted move in the Extend phase.

> I'll be using gtk-rs thank you very much.

Please be aware that if you do this, your application won't be accessible with screen readers or other assistive technologies on Windows and Mac. At least not now. Maybe I'll have time to implement GTK accessibility backends for those platforms someday.

Yet another reason for them to do this. Not just a GUI for Rust, but the only accessible one. It really is a solid strategic offering to bring Rust developers to their Windows platform. But IMHO developers who do are trading tomorrow for today.

Actually, Qt is also accessible (more or less) on all the desktop platforms, so that's another option.

Is there a good way to use Qt from Rust?

I looked the other day, but couldn't figure it out.

You might want to have a look at this blog post[0]. It presents an up-to-date overview of the state of GUIs in Rust.

0: https://dev.to/davidedelpapa/rust-gui-introduction-a-k-a-the...

Screen readers rely on some kind of toolkit API?

I would assume they would simply read the screen. Are they thus not capable of, for instance, reading a picture?

They should not be called “screen readers” but “text to speech” if they do not actually read the screen at the bitmap level.

Screen readers, despite the name, don't do OCR; they access information provided by the GUI toolkit (which is one of the vastly improved areas in GTK 4, AFAIK).

Some screen readers can do OCR on a specific part of the screen (e.g. an unlabeled image) on request. But while OCR is useful for getting text out of an image, most implementations can't discern the structure of a UI, e.g. which part is a button, which part is an edit box, etc. Also, OCR is typically done just once on-demand, not continuously as the screen changes.

However, VoiceOver for iOS has a new feature called screen recognition, which is exciting because it overcomes these limitations and provides some level of access to applications that are otherwise inaccessible. Hopefully other platforms will catch up.

Even then, true screen reading will be much more CPU-intensive than what screen readers currently do. And anyway, it's not here yet, except on iOS. So I will continue to warn developers away from toolkits that are inaccessible, in hopes that some blind person somewhere will be spared the pain of being blocked from doing a task because of an inaccessible application.

Then why don't actual screen readers exist, when AIs at this point can practically solve captchas by enhancing the reflection in the eyes of a highly compressed JPEG and reading the text in there?

Certainly there would be significant demand for a sightless man to be able to read the dankest memes from pictures?

> Then why don't actual screen readers exist when AIs at this point can practically solve captchas

Training a computer to solve a captcha is a lot easier than training a computer to understand interface conventions.

There isn't an AI that can look at a jpeg screenshot of an interface and say, "there's two input elements, and it looks like they're grouped together and control the list to the left of them, and one of them is selected, which I can tell because it has some kind of subtle glow effect on it, but not the glow effect you would get if you moused over it."

There's nothing that can realistically do that today, and it wouldn't be fast enough or performant enough for low-powered cell phones and laptops even if it did exist.

If you're just looking at describing pictures themselves... sure, Facebook does auto-generate alt tags for images if you forget to put one in. And Youtube auto-generates captions. Those are valuable services, but they have a lot of glitches and mistakes. If you're a blind reader, you'd prefer not to have that experience when you're using a piece of software, you'd prefer something that just works reliably.

It's the same reason you probably used a keyboard to type this comment instead of speech to text. Speech to text is useful in some cases, but not good enough or accurate enough that you would want to use it as your main input method.

Converting bitmaps to text isn't enough to make an interface usable. You need to be able to quickly convey the structure of the interface and what controls are available, and to do that well you need some kind of semantic insight into the interface.

Screenreaders don't just read text, they control the interface itself using standardized keyboard shortcuts and input components within whatever graphical framework you're using, and they communicate what that interface is using a set of standardized terminology.

Indeed, I see now that the term “screen reader” is a misnomer and not what I expected it to be.

This seems to be more so an “accessibility suite” of which text-to-speech is a component rather than what I would call a “screen reader".

Sorry, I guess? It's been an industry standard phrase for a pretty long time. Screenreaders probably have a historical reason why they're named the way they are, but the short version is that's just what everyone started using. A lot of software terminology is like that, it's weird to people who are unfamiliar with it because there's no central committee somewhere that decides what everything should be named.

If you're trying to do a search online for the kind of tool you're looking for, probably the phrase you would want to search for is "OCR software", short for Optical Character Recognition, or if you're trying to tag images just straight-up "image recognition."

What you describe might be more robust in some situations but vastly less for software designed to be accessible. Take "alt" attributes in HTML img tags for instance, or various metadata attached to buttons that use an icon instead of text (like a play button, or a X close button and similar things).

You can see an example next to this very comment actually: the up and downvote buttons won't be accessible with OCR, but they have "title" attributes describing what they are. And consider that there's more to understanding a given user interface that raw text: radio buttons are tied to certain labels, there's hierarchy, all sorts of layout cues that would be opaque to a screen reader.

I suppose that ideally you'd want both: use native accessibility data if available, fallback to OCR when there's no alternative.

This only matters if you care about proprietary platforms like Windows and Mac. And you should not care!

I still do, because people are stuck with these platforms, for reasons beyond our control, and I care more about people's access to applications that they need than about being a purist.


More like “fuck people who use Mac OS or Windows”.

I saw no malice there directed at users of assistive technology, but rather of users of non-open platforms.

If the assistive technology they need to use isn't available on Linux or they don't have the ability to run Linux it's not that meaningful of a difference.

It is available on Linux. Apparently not when using GTK on Windows.

Exactly. It's a difference without a distinction.

And rather sad that free software - which in theory ought to be a perfect place for exploring assistive options - seems to lag far behind the closed shops.

Having discussed this with Drew, I think he was really talking about what I, personally, should choose to work on. And I think he's right that I should work on the accessibility of free software as much as I can, rather than accessibility on proprietary platforms. I don't know yet how soon I can actually start doing this.

Well, they bring the language and its runtime to Windows development via WinRT. They don't bring WinRT to Rust. This is the Windows team writing adapters for their COM API surface. They do the same for C++, C#, JS, and now Rust.

For people who have not lived in the Windows world this would be like GNOME writing Rust Glib bindings.

Didn't they remove the JavaScript WinRT projection?

The docs[1] describing how to call a WinRT component from JS say: Universal Windows Platform (UWP) projects are not supported in Visual Studio 2019. See JavaScript and TypeScript in Visual Studio 2019. To follow along with this section, we recommend that you use Visual Studio 2017. See JavaScript in Visual Studio 2017.

[1] https://docs.microsoft.com/en-us/windows/uwp/winrt-component...

Sort of, it was/is built in to Chakra (the JS engine in IE9+, old Edge, and the original UWP WebView based on those browser engines) and hasn't been ported to the new Chromium-based Edge or the WebView2 based on it.

There's also a separate NodeRT projection for generating Node/Electron modules: https://github.com/NodeRT/NodeRT

JS? can you point me to that?

Search WinJS. However, I agree with the sibling comment that this is a maintenance-only story.

> I'll be using gtk-rs thank you very much.

How well does gtk-rs work as a cross-platform GUI library? I know it works well on Linux, but I haven't tried it on macOS or Windows.

If anyone has experience using it for cross-platform development, I'd love to hear about it.

It works. The Windows theme is a little dated, but Windows users are used to a random mishmash of inconsistent styles, so it likely won't cause complaints. On Mac, well, actually I haven't personally tried it on Mac, just Windows/Linux, but I hear a lot of vocal complaints about GTK not fitting in from Mac users. I'm not sure this means it's worse than on Windows; I think Mac users just expect more.

You'll need to install windows-curses, since cmd.exe didn't support vt100 escape sequences until relatively recently, and still requires a special WinRT call in order to enable them.

But it's a bit telling that the first hurdle you hit in running Python on Windows was the operating systems choosing different forty-year-old terminal emulator escape sequences. :)

Good thing Linux users are dragging Windows back to, checks notes, 1978.


Defeating 37 years of separating content from presentation

https://en.wikipedia.org/wiki/Separation_of_content_and_pres... (LaTeX)

So we can embed garbage in strings again! Causing effects like this:


"On Linux, if you use ls --color then different file types use ANSI escape sequences as color indicators. If you pipe this output to less, then you get paging while retaining the color information. If you redirect this output to a file, that file contains the ANSI escape sequences. If you then use the cat command on the file, you see the coloring as the ANSI escape sequences are rendered by the terminal."

Interpreting commands from log messages! Because we haven't learned from history:


A glorious future awaits.
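To make the quoted behavior concrete, here is a minimal sketch (not taken from any of the tools above): ANSI escapes are just bytes embedded in the string, so they travel with the text into files and logs, and recovering the plain text means parsing them back out.

```rust
/// Wrap `s` in an ANSI escape so it renders red on a VT100-style terminal.
fn red(s: &str) -> String {
    format!("\x1b[31m{}\x1b[0m", s)
}

/// Strip CSI escape sequences (ESC '[' ... final byte in 0x40..=0x7e),
/// recovering the plain text -- roughly what a log sanitizer must do.
fn strip_ansi(s: &str) -> String {
    let mut out = String::new();
    let mut chars = s.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '\x1b' && chars.peek() == Some(&'[') {
            chars.next(); // consume '['
            // skip parameter bytes until a final byte terminates the sequence
            while let Some(&n) = chars.peek() {
                chars.next();
                if ('\x40'..='\x7e').contains(&n) {
                    break;
                }
            }
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    let colored = red("error");
    // The escapes are part of the string: redirect it to a file and
    // `cat` will re-render the color, exactly as described above.
    assert!(colored.contains('\x1b'));
    assert_eq!(strip_ansi(&colored), "error");
    println!("{}", strip_ansi(&colored));
}
```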

> Good thing Linux users are dragging Windows back to, checks notes, 1978.

Indeed, there are some non-millennial shenanigans here.

SQL and VT100 have rather different capabilities.

inb4 the first discovery of a program which "doesn't log plain text passwords" by logging them with the foreground color set to the background color.

And then the one which exploits a terminal for arbitrary command execution with a buffer overflow in the VT escape code parser. Wait, what am I talking about "inb4", that happened already and it didn't even need a buffer overflow: https://www.proteansec.com/linux/blast-past-executing-code-t...

> "mod_rewrite.c in the mod_rewrite module in the Apache HTTP Server 2.2.x before 2.2.25 writes data to a log file without sanitizing non-printable characters, which might allow remote attackers to execute arbitrary commands via an HTTP request containing an escape sequence for a terminal emulator."

"still requires a special WinRT call in order to enable them"

Which WinRT call would that be?

Hi, owner of the Windows Console here.

Enabling VT control sequences is a matter of setting the ENABLE_VIRTUAL_TERMINAL_PROCESSING mode on the output handle, and it's available through the same interface as every console mode flag that came before it. SetConsoleMode has a long history--dating back to the nineties--that this just builds on.

It's behind a flag so that applications developed before ca. 2015 that like to emit control characters to the screen don't melt away into gibberish.
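A minimal sketch of that opt-in from Rust (the constants and signatures are the standard Win32 ones for SetConsoleMode; the non-Windows stub exists only so the example compiles and runs everywhere):

```rust
// On Windows: enable VT processing on stdout by setting the
// ENABLE_VIRTUAL_TERMINAL_PROCESSING console mode flag, as described above.
#[cfg(windows)]
fn enable_vt() -> bool {
    const STD_OUTPUT_HANDLE: u32 = -11i32 as u32;
    const ENABLE_VIRTUAL_TERMINAL_PROCESSING: u32 = 0x0004;
    #[link(name = "kernel32")]
    extern "system" {
        fn GetStdHandle(n: u32) -> *mut core::ffi::c_void;
        fn GetConsoleMode(h: *mut core::ffi::c_void, mode: *mut u32) -> i32;
        fn SetConsoleMode(h: *mut core::ffi::c_void, mode: u32) -> i32;
    }
    unsafe {
        let h = GetStdHandle(STD_OUTPUT_HANDLE);
        let mut mode = 0u32;
        if GetConsoleMode(h, &mut mode) == 0 {
            return false; // not attached to a console
        }
        SetConsoleMode(h, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING) != 0
    }
}

#[cfg(not(windows))]
fn enable_vt() -> bool {
    // Unix terminals interpret VT sequences unconditionally; nothing to do.
    true
}

fn main() {
    if enable_vt() {
        // Only now is it safe to emit VT sequences on a legacy conhost.
        println!("\x1b[32mgreen\x1b[0m");
    }
}
```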

Yes, that'd be the call in question; it requires cross-platform vt100 apps to specifically know about and call a feature on Windows in order to enable it. They can't just emit control characters to the terminal; they must call this WinRT function first. This isn't something you can fix, for the reasons you listed, but it's something that's true.

As a side note, Windows Terminal (the app) is absolutely fine letting programs emit and handle VT100 escapes without them issuing any particular opt-in call themselves... And upon looking, you're the person who enabled this feature! Thanks for that, but, why is it good for terminal but bad for conhost?

Sorry, I had my Microsoft-colored glasses on. We use WinRT almost exclusively to refer to the new COM-based API surface that “modern” applications use. I see now that I’ve misunderstood you :)

Back when VT parsing was implemented, it was an entirely new output stream parser built into a console that hadn’t been updated in a rather long time. We were careful commensurate with its age, and opt-in made sense. Language runtimes or compatibility layers like Cygwin could handle the decision for all of their hosted applications(1) and everything else would generally continue working properly. Now that we’re working on conhost’s replacement, we get to revisit some of those decisions!

Cases like this are especially acceptable because a user can always fall back to conhost. That escape hatch isn’t one we intend to get rid of.

1) this doesn’t do anything for manual or direct ports, and Cygwin is far from the only provider here. Representative example, etc.

Thanks. I wonder why the python module instructions didn't just say to do that. Must not be widely known?

Windows Terminal is the actual solution.

WSL2 has almost everything you need if Windows isn’t enough

Yep, WSL2 is standard enough to draw some in. Its DirectX support is a[n early baby] step in the direction of making it idiosyncratic enough to keep them locked in (we should probably expect more in the future. Right now it otherwise seems to be just a VM, without a lot of non-standard stuff to give it an advantage over running the more stable OS on the bare metal and keeping the BSODing one contained with virtualization).

That's exactly what this is, a way to sink their proprietary claws into Rust and try to influence the market the way they have done with most of their software for decades.

Microsoft has reinvented themselves, but the old hatred won't die easy.

I'm excited for Intel to do the same, and look forward to the lingering suspicion even then.

I disagree that they've reinvented themselves: https://www.computerworld.com/article/3568009/slack-files-eu...

That the monetized IRC clone is unhappy doesn't convince me that Microsoft has done something wrong. But we'll see what the EU commission decides.

The solution for this is rather simple, and until Windows does it, I will always be skeptical of their renaissance as some benevolent contributor to open source: open source windows.

Sure! Presumably you're volunteering to track down the current rights holders for all code derived from third parties and will negotiate the relicensing of their contributions?

Nope, I am volunteering the trillion dollar market cap corporation to do that.

Microsoft are a gigantic corporation, they're not the good guys or the bad guys, they're collectively amoral and profit oriented. That will never change.

Being suspicious of their actions and their intentions is a very reasonable stance.

The old hatred flares up every time Windows 10 asks me if I'm really really sure I don't want to default Edge as my browser, or "accidentally" changes it.

Nothing has changed.

> every time Windows asks me

... how much of my privacy I would like to have violated

And every time Windows has changed settings without asking permission and without giving me a clear explanation about the settings and changes.

I do not want my $OS to flout privacy.

No they have not reinvented themselves. They are ruthlessly taking over OSS software projects by buying the type of developers who play politics and love to command others around.

I take it that Rust is next after Python, whose development has stagnated and where the mailing lists are now censored.

So Microsoft didn't port every single Linux lib in existence and you call it 'strategy'?

> They seem to be very focused on giving developers what they need, but only to a point.

This may be a consequence of the consent decree they signed in the early 2000's, where it was alleged that they used their control of Windows APIs to further Internet Explorer market share at the expense of other browsers. Since then they have had to be careful not to act like a monopoly.


curses specifically is majorly antithetical to how powershell, and by extension Windows/server, has decided to evolve. In fact, text-based UI is the reason CMD.exe cannot and will not ever be improved.

Munging text around is very much not the Windows way. For better or worse.

I was curious how this worked: The previous iteration of this only worked for WinRT API, and this new crate seemed to also work by generating code from WinMD files. But WinMD files only contained definitions for WinRT/COM APIs, so how could this possibly work?

Well, it turns out Microsoft started a project to also generate Win32 API information in WinMD files, to generate APIs from them automatically for all native languages! See win32metadata[0]. This could make interfacing with Win32 APIs a lot more convenient!


So does this generator need access to the .winmd files at build time to be able to generate the bindings? Are they available for non-Windows builders?

The WinMD files are shipped with windows-rs. You can find them here[0]. For cross-compilation use-cases, the main blocker is [1].

[0] https://github.com/microsoft/windows-rs/tree/master/crates/w...

[1] https://github.com/microsoft/windows-rs/issues/142

Does that meant it's compatible with any other language?

Any language that can call C ABI and support the necessary datastructures, yes. That's the point of win32metadata.

Kenny Kerr's blog post on this may also be of interest. In particular, it answers the question I was going to ask about how they're handling Win32 and WinRT in a unified way.


I wonder if Rust is becoming Microsoft's way forward for development rather than C++ (i.e. Rust for Windows rather than C++ for Win32), leaving .NET for higher-level development? The bold introduction in the blog post surprises me, coming from Microsoft themselves, who are right now hard at work on these individual, non-unified technologies.

Here is some of the internal advocacy going on at Microsoft.

- Managed languages if you can afford a GC

- Rust

- C++ with Core Guidelines


Note that there are still some teams, like Azure Sphere and Azure RTOS, which are only providing C-based SDKs, so not everyone is on the same wavelength.

I kind of expect it to be called "Windows for Rust".

Or maybe "Windows API for Rust".

Rusty Windows?


I had the exact same thought. I almost didn't bother following the link, because Rust for Windows is already a thing, but this is essentially a Rust equivalent to C++/WinRT.

To be clear, it's a bit broader than that. WinRT is a specific subset of windows APIs, and the Rust bindings for that have existed for a while. This is for all Windows APIs.

I thought your comment was a tongue-in-cheek reference to Windows Subsystem for Linux and then I clicked the link.

At least they are consistent in using the convention in reverse.

No more backwards than Windows Subsystem for Linux!

As someone who writes Windows software now and then, I’m genuinely excited. I tried using this early, when it was limited to WinRT bindings. It looked promising, but compile times were prohibitive. It seems like they now include a build.rs and have clear recommendations around caching — I hope this solves the problem. Has anyone tried a recent version?

Wow! Almost want to make a Windows app now just for fun.

The lesson from Microsoft I think is that the fish really does rot from the head. Put another way: who the CEO is really does matter. We have night and day here with Ballmer compared to Nadella.

Credit where credit is due: Microsoft has really been doing a lot to try and rebuild their credibility when it comes to the developer community. Off the top of my head I can think of TypeScript, open sourcing .Net, WSL and now this.

Oh and they haven't done an Oracle or a Cisco (or, let's face it, a Google at this point) with their acquisition of Github by letting it die on the vine or with hostile forced integrations.


For Mac fans, the closest you'll have to this in OS-X is core-foundation-rs[1], by the servo team.

[1] https://github.com/servo/core-foundation-rs

Thanks for the link. Do you know of any examples that use this crate that would be easy enough for a beginner to start learning how to use it?

The folder /cocoa/examples in the repo I linked has some very simple ones.

For someone not familiar with the Windows API: why does creating a window need unsafe and other low-level things? I guess it's the same for the C++/C# version?


Anything not defined in Rust needs to be marked as unsafe, because Rust cannot understand non-Rust code. FFI bindings are inherently unsafe.
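As a tiny illustration (using libc's `strlen` as a stand-in for any foreign function, since the principle is the same for Windows APIs): the compiler must trust the declared signature, so the call is `unsafe`, and a safe wrapper restores the guarantees by upholding the preconditions itself.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

extern "C" {
    // Declared, not defined: Rust cannot verify what this function
    // does with the pointer we pass, so calling it is `unsafe`.
    fn strlen(s: *const c_char) -> usize;
}

/// Safe wrapper: the CString guarantees a valid, NUL-terminated buffer,
/// so the unsafe block's preconditions are upheld by construction.
fn c_string_len(s: &str) -> usize {
    let c = CString::new(s).expect("no interior NUL bytes");
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    assert_eq!(c_string_len("windows-rs"), 10);
    println!("{}", c_string_len("windows-rs"));
}
```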

On a side note: do you know of any good resources where someone wraps a non-trivial C library, going over common C idioms and ways to provide a (safer) API inside Rust without too much overhead?

I've always found it to be incredibly difficult because of the number of gotchas which can leak into causing a segfault in Rust; it's immensely frustrating.

My co-author Carol Nichols gave a few talks on this a while back, but there's not a ton of resources, it's true. I would look at big libraries and see what they do. I know it's not the best thing, but it's probably the best that exists right now.

I think maybe the ZeroMQ rust crate as a case study is something that is non-trivial: https://github.com/erickt/rust-zmq

It's more of an example than a resource, but the 'git2' crate that wraps libgit2 does a good job of providing a nice rust interface over a C library.

It's a call out to Windows libraries that long predate Rust, and they are implemented in (mostly) C++. They don't provide any of the safety features on any data structure you pass to it. I don't see how it could be anything other than unsafe.

Yet the xcb bindings in that crate are mostly safe: https://rtbo.github.io/rust-xcb/xcb/xproto/fn.create_window....

They have provided nontrivial wrappers, wrapping xcb datatypes so as to make them safe and uphold the invariants.

I don't know much of any Windows interfaces, but I doubt this would be impossible there too; they simply did not do so.

Does it actually wrap libxcb, or does it generate bindings from the XCB protocol descriptions (XML)? I would think the latter would be less work and higher quality.

I don't know how it's achieved, but as I look at the bindings for this Windows API, they return all sorts of raw pointers and other things, whereas the XCB bindings for Rust wrap the types in a Rust-friendly way so almost all of them are safe.

This Windows API in Rust seems frankly atrocious in design, and most of the functions seem to be simple `extern "C"` declarations rather than actual attempts at proper wrapping.

I don't think that's a fair characterization. Most safe rust wrapper libraries are built in two layers: 1. Map the api's raw interface into raw rust types, usually with a simplistic code generator; this enables the bindings to closely track upstream api changes. 2. Use that raw interface to build a wrapper library that translates the api to use rustlike idioms and expose safe constructs.

This seems to be #1 only for now, which is fair because winapi is enormous. Also there may be many ways to expose a safe rust interface all with different tradeoffs, by leaving #2 open they don't lock in a single strategy prematurely. That said I am looking forward to a safe wrapper as well.
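The two layers can be sketched like this (a hypothetical example using malloc/free as the stand-in foreign resource, not the actual windows-rs design): layer 1 mirrors the raw C interface; layer 2 wraps it in an owning type so `Drop` upholds the free-exactly-once invariant that raw bindings leave to the caller.

```rust
use std::os::raw::c_void;

// ----- Layer 1: raw bindings, the kind a generator can emit -----
extern "C" {
    fn malloc(size: usize) -> *mut c_void;
    fn free(p: *mut c_void);
}

// ----- Layer 2: safe, idiomatic wrapper built on top -----
struct Buffer {
    ptr: *mut c_void,
}

impl Buffer {
    /// Allocate; the null check turns the C error convention into Option.
    fn new(size: usize) -> Option<Buffer> {
        let ptr = unsafe { malloc(size) };
        if ptr.is_null() { None } else { Some(Buffer { ptr }) }
    }
}

impl Drop for Buffer {
    fn drop(&mut self) {
        // Runs exactly once, even on early return or panic.
        unsafe { free(self.ptr) }
    }
}

fn main() {
    let buf = Buffer::new(64).expect("allocation failed");
    assert!(!buf.ptr.is_null());
    println!("allocated; freed automatically when `buf` goes out of scope");
}
```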

That it is atrocious because the project is young, and that it might become less atrocious in the future, is no argument that it not be atrocious.

However, my original reply was to this:

> It's a call out to Windows libraries that long predate Rust, and they are implemented in (mostly) C++. They don't provide any of the safety features on any data structure you pass to it. I don't see how it could be anything other than unsafe.

It very much can be, and is often done; it simply isn't (yet) here.

Hating this is like hating a primer-painted part for being grey: you denounce the purpose and existence of primer and claim that it should have been painted with the finishing coat from the start.

I'm sorry you think the dull finish is ugly, but looking pretty is not its purpose. Its purpose is to bind strongly to the substrate it's on and provide a better surface for that shiny coat you so desire to adhere onto. Primer and language bindings alike.

> Hating this is like hating a primer-painted part for being grey: you denounce the purpose and existence of primer and claim that it should have been painted with the finishing coat from the start.

No, it's simply arguing against the claim initially made that there is no way for it to not be grey.

> I'm sorry you think the dull finish is ugly, but looking pretty is not its purpose. Its purpose is to bind strongly to the substrate it's on and provide a better surface for that shiny coat you so desire to adhere onto. Primer and language bindings alike.

Maybe it isn't its purpose, but the original post claimed that it was impossible for it to be different, which is certainly false.

The Windows API is a sprawling mess.

I really doubt that would have anything to do with whether or not the binding wrote safe, Rust-esque wrappers or not.

There is some good stuff here but also some sloppiness.

1) Having everyone generate the bindings means there will be many copies of each type, causing type errors. This only works if Windows is always a private dependency, a concept which isn't even fully implemented, and that is a bad assumption. Public dependencies are the logical default.

2) Putting the proc macro and non-autogenerated parts in the same crate is cute but a sloppy conflation of concerns and bad for cross-compilation. There is an underlying https://crates.io/crates/windows_macros thankfully, but that should be the only way to get the macro.

Will this work when cross-compiling from Linux? That's supported by Rust.


> we could support this in future, but it is not an immediate goal. If this is something that folks would like to do, feel free to chime in on this issue and let us know.

Note that most of Microsoft regards Linux mainly as a server OS. You are not supposed to use it on the desktop, instead you should use Windows there.

>> Note that most of Microsoft regards Linux mainly as a server OS. You are not supposed to use it on the desktop, instead you should use Windows there.

Then what's the point of WSL? To allow server software development on a Windows desktop? OK. But then what's the point of trying to bring DX graphics to linux?

To run Tensorflow and other ML libraries in WSL.

You dont need DX to run tensorflow on Linux.

So Embrace Linux by providing WSL. Extend by bringing DX to it. Extinguish the standardized APIs for graphics and ML.

Got it. Check. Thanks for the confirmation.

CUDA you mean?

Or LibGNMX, GX2 and NVN?

Even if that's true you really don't want to run a Windows box as a build server - licensing and security and all that.

I have a CentOS DVD to offer you.

I don't see why not, the winapi crate is.

The question is valid. The winapi crate binds the functions in the Windows headers explicitly. You can call `winapi::...::CreateEventW` because winapi has a `pub extern unsafe fn CreateEventW(...)` in its code.

This new crate is a generator. If this new generator requires access to the .winmd files or .h files from the SDK or whatever, where is the non-Windows builder going to get those from? And will this generator look for them there?

For example, I used to maintain a generator that took COM typelibs and generated Rust bindings for them, but it required calling into the Windows API for working with typelibs and thus obviously required a Windows builder. The functionality of this crate is split between the windows_gen and windows_macros crates, and from a cursory glance I could not tell how it works wrt win32 bindings.

Edit: Answered by https://news.ycombinator.com/item?id=25863581 - the win32 API is now also described by .winmd files.

So the question is, can a non-Windows builder get these files, and then convince this crate to look for them wherever it placed them?

> This new crate is a generator. If this new generator requires access to the .winmd files or .h files from the SDK or whatever, where is the non-Windows builder going to get those from? And will this generator look for them there?

From memory (I haven't done Windows development in forever): there's a few projects that try to provide an open source version of the Windows API headers. The MinGW project, for example, has a full set of drop-in replacement headers (meaning you can take code that compiled with Visual C++ and compile it directly with MinGW's GCC and it should 'just work'). The LCC-Win64 project has a similar set of headers.

Does that answer your question?

Headers are irrelevant. We're talking about .winmd files.

> So the question is, can a non-Windows builder get these files, and then convince this crate to look for them wherever it placed them?

Yes, there's actually a prebuilt Windows.Win32.winmd file in the repo, underneath .windows/.

The winapi crate does a lot of work to support that use case. Importantly, it ships (MinGW-format) import libraries to allow linking against Windows libs from Linux trivially[0].

EDIT: windows-rs doesn't yet support cross compilation as easily as winapi. Here's the issue tracking it: https://github.com/microsoft/windows-rs/issues/142

[0]: https://github.com/retep998/winapi-rs/tree/0.3/x86_64/def
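For context, those import libraries are built from module-definition (.def) files, which are just lists of a DLL's exported symbols. A hypothetical minimal one covering a few kernel32 functions would look like:

```
LIBRARY kernel32.dll
EXPORTS
CreateEventW
SetEvent
WaitForSingleObject
```

From memory, MinGW's dlltool can turn a file like that into a .a import library the linker resolves against, so no Windows SDK is needed on the build host.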

Wow, I thought by the name this would be an awkward Windows distribution of Rust packaged in an MSI. I'm pleasantly surprised. Microsoft has become one of the best big tech companies for open source in the past few years.

Really pleasing to see that MS have done this without feeling the need to start nailing proprietary extensions onto the Rust language. I feel Rust adoption is still at a low enough level where a separate windows fork would have been especially harmful. I guess there are a couple of factors helping here:

- MS were already moving to a model where they prefer to stick with the standards-compliant form of a language: vide C++/WinRT

- Rust has enough features built in to facilitate this, specifically being able to hook into the compilation process

Wish there was something like this for Linux too. Rust systems programming on Linux consists of dealing with a dumpster fire of badly implemented and incomplete wrapper crates for the kernel interfaces.

I assume you're talking about more than just libc. Many of the Linux specific facilities are captured in higher-level cross platform implementations, like mio abstracts over kqueue on BSD and epoll on Linux.

What are the APIs you're interested in that are missing or of poor quality?

Since Windows ships a stable ABI, why does this project need to generate the bindings at build time? Couldn't all of the bindings be pre-generated, eliminating the build-dependencies?

P/Invoke, it never went away.

Why would I ever write code on Windows if I don't have to because I'd be writing a Windows program?

Microsoft is anti freedom, anti developer.

Windows is only good for games. Real development happens on other platforms.

They bought github and turned it into something against the spirit of free software. First thing was to add deleting issues and comments.

Microsoft is evil. Always has been, and if you trust them one bit you're a fool.

They managed to secure the crate name "windows" [1], which is always a nice touch.

I clearly remember that this name was squatted.

[1] https://crates.io/crates/windows

D has fairly extensive Windows API support, and the usual PE gubbins. Worth taking a look at.

Rust has had that for a long time as well. This is first-party support from Microsoft for Rust in contrast to community supported options like in D.

Sorry, I had to look up "gubbins":

British: fish parings or refuse; broadly, any bits and pieces. Also: gadgets, gadgetry ("the gubbins for changing a tire", "all the navigational gubbins").

I wasn't aware of the dictionary definition :)

As the GP used it, it means "and what not", "and such like" etc.

Are these D bindings relevant to Rust developers? Or asked another way, is there some reason that the D bindings would be better to use than these native ones in Rust?

When I am writing or choosing (or bindings to) an API I always go and look at other languages (especially what the functional people do) to avoid repeating non-obvious mistakes and seeing where the impedance mismatches are.

Most recently, I've been playing with eBPF on Linux: The system call API is terse and pretty impenetrable, whereas the high-level APIs either mean writing gadgets in C with a nearly-blind debugging experience or relying on a library to retain the power of the featureset the kernel exposes.

This is the same thing the Julia community does: spamming their language in every Python/R-related topic (the D guys target Nim/Rust threads).

Correcting a false statement?

Ah that's true in spirit, but it wasn't entirely false.

It's hackernews not rustnews, though.

Yes but the topic of discussion is rust here. D is awesome, but we are talking about the windows bindings for rust being “just good enough”.
