Rust/WinRT Public Preview (windows.com)
628 points by steveklabnik 37 days ago | 211 comments



https://github.com/robmikh/minesweeper-rs - I just love how they brilliantly avoided mixing up C++/COM things into this. Tough one to pull off when you're dealing with bindings for a foreign language. The code looks fairly standard Rust (except maybe for the winrt::import that looks like Go?).

> If you are familiar with Rust, you will notice this looks far more like Rust than it looks like C++ or C#. Notice the snake_case on module and method names and the ? operator for error propagation.
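The `?` operator the post mentions is ordinary Rust, not anything WinRT-specific; here's a minimal, self-contained sketch of the idiom (the `parse_and_double` function is made up for illustration):

```rust
use std::num::ParseIntError;

// snake_case names and `?` for error propagation, the same idioms
// the projected WinRT APIs expose in the minesweeper sample.
fn parse_and_double(input: &str) -> Result<i32, ParseIntError> {
    let n: i32 = input.trim().parse()?; // on Err, returns early to the caller
    Ok(n * 2)
}

fn main() {
    println!("{:?}", parse_and_double("21")); // Ok(42)
    println!("{}", parse_and_double("oops").is_err()); // true
}
```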

Developing on and now for Windows just keeps getting better.


It seems like they're doing all they can to get developers on their platform. Not to entrench Windows developers as they have in the past, but to have others see Windows as just another platform.

We're approaching a revolution in moving from x86 to ARM, and I think the more Microsoft positions themselves like this, the less likely they are to be left in some weird compatibility limbo.

As weird as this sounds, I feel more confident with Microsoft being able to do a successful transition to ARM than Apple at this point.


In fact, they _have_ transitioned to ARM, way back around the time of the first Surface tablet. Windows and Office have been running on ARM since at least, what, 2012?


Windows NT in 2000? I guess ARM was not a going concern at the time, but Windows was remarkably portable in 2000 before the dark decade of more or less exclusively Wintel.


https://archive.org/details/Microsoft_Windows_NT_Server_Vers...

"Disc contains code to run on Windows NT-compatible x86, Pentium, MIPS, R4x00, Alpha, PowerPC and Pentium Pro systems"


Windows itself has never been the problem; it's the software that's running on it.


Way before that they had ARM-based Pocket PCs running Windows CE (or Windows Mobile).


Windows is my daily driver, and I spend most of my time at work these days building dotnet core applications for Linux on both x64 and ARM. It's a great dev experience

Seeing what was coming with cloud and making .NET cross-platform was very well timed.


Microsoft tried countless times to move to Arm.

The most recent efforts are with their Surface Pro X, their SQ1 CPU and pushing Qualcomm to release Snapdragon 8cx CPU for ARM laptops.

But most software development companies didn't care enough to recompile their software for Arm, and running x86 software in emulation is not exactly a good experience.


It will be very difficult until Windows is POSIX-compatible. Whatever that means, I don't know. But it can't just be the Windows Subsystem for Linux. The whole OS needs to have POSIX compatibility.


Windows NT used to have a POSIX subsystem [1] that applications could use instead of the Win32 subsystem. So the Windows NT kernel is designed with the option of POSIX compatibility. Arguably the Subsystem for Linux is the current successor of the POSIX subsystem, providing POSIX compatible access to the kernel along with a POSIX compatible shell and UNIX commands.

Obviously the Win32 subsystem isn't POSIX compatible, but I'm not sure why we would want that.

1: https://en.wikipedia.org/wiki/Microsoft_POSIX_subsystem


That subsystem was just so they could tick off the “POSIX” box for the Department of Defense.


That was one of many, then came SFU, followed by SUA.

GNU/Linux users should be happy that old Microsoft didn't bother to do much with the POSIX subsystem.

Had they actually invested in keeping it up to date, many of us in PC land would never have bothered with FOSS UNIX clones.


Yeah, that may be true, but on the other hand, had they made it usable, portability with other Unixes would have been a simpler option.

Simpler than Win32 + Unix portability in any case. Or, in some cases, software that never made it to Win32 would have been ported at all.


What UNIX portability?

Back when I was in university, and the Linux kernel was still at 1.0, I had the pleasure of writing C and C++ code across Xenix, DG/UX, AIX, HP-UX, Solaris, BSD and naturally Linux as well.

There is POSIX, there is what each OS actually does for the implementation-defined behaviours, and then there is the whole set of APIs that made each UNIX variant unique and were the selling point of that UNIX clone to start with.

Autoconf and friends did not start out of masochism.


>Yeah, that may be true, but on the other hand, had they made it usable, portability with other Unixes would have been a simpler option.

That's not necessarily the case. See macOS, which uses different GUI libraries, Metal instead of Vulkan, etc.

So you can't recompile macOS apps to use them on Linux.


People forget that Microsoft was one of the biggest UNIX vendors with Xenix, and for a long time Microsoft had the highest-volume AT&T Unix license.


In reality, there are very very few people who care about POSIX compatibility. Because there are no POSIX-compatible systems in the wild.

POSIX-certified systems conform to different POSIX standards. And most of them, such as AIX, HP-UX, and Solaris, are of no use to the average developer.

The rest are "largely" compatible, which means they are not compatible. And yes, that includes every single one of the Linux distributions and FreeBSD.


I am beginning to think that while POSIX, Unix and X11 might have been innovative and very useful in the '70s and '80s, they are now really old and most of the computing landscape has changed.

We need better operating systems metaphors.


How can we bring this to Linux/BSD? Specifically the GUI layer? I’ve lamented that the Linux desktop world is fragmented and alien compared to other major environments. I used to think that things would have been better if we had chased GNUstep and source compatibility with macOS.

I feel like the non-GUI userland in Linux is best in class and is effectively a given for any server deployment. But all the developers I know cobble together a mediocre Linux-like environment on macOS so they can have a first-class desktop environment and an okay development environment.

On the other hand I read about people using Windows as development environment by using WSL.

I feel like Linux will never organically gain critical mass for a desktop environment—substantial source compatibility with some major platform seems like the way to go. How are the Windows APIs vs the Cocoa APIs these days? Assuming source compatibility could help boost desktop Linux’s fortunes, is Windows or Cocoa a better target? Does consideration of iOS change the calculus?


It is already available: it is called GObject, KParts, D-Bus.

If the community hasn't embraced a proper desktop ecosystem around them, like the component vendors on other platforms have, they aren't going to start now.

I surely don't see job offers for Linux/BSD GUI toolkits as I see for Apple, Microsoft and Google ones.

WSL is only for those that use Windows to target GNU/Linux, the demographic that used to buy macOS to do the same.

For traditional Windows developers it doesn't really matter. PowerShell + Windows Terminal is where the action is.


No, the point of peatmoss's comment is that Linux doesn't lack components or even tools; it lacks a cohesive ecosystem. The parts like GObject, KParts, and D-Bus are present but lack cohesion and coordination. The paid teams at Canonical, IBM/Red Hat, etc. all seem to be more concerned with rewriting the core desktop shell every other release than with building a cohesive ecosystem on their chosen tools.

For example, take high DPI. You have two primary competing toolkits, GTK and Qt. It took a while, but Qt/KDE finally added HiDPI support. Great, so I could get HiDPI set up on KDE! Then I open Firefox and it all falls apart because of GTK. Extrapolate to any number of other issues. Unfortunately, Windows 10 isn't much better with respect to HiDPI (at least as of a couple years back when I last used it); I think there are more different UI toolkits on Windows than on Linux.

Counterpoint: macOS has one global toolkit with message-passing-based objects, so it was almost trivial for Apple to add HiDPI and then dark mode. GNUstep/Objective-C would have provided a similar basis on Linux, but it never took off. Now macOS is slowly losing cohesion: instead of Objective-C 3.0, they made Swift, which from my perspective appears to only be good at adding bugs to the Terminal app while trying to avoid message passing (no more user-hot-loadable, unsanctioned third-party plugins for any program on macOS :/ ).

At this point, I vaguely dream that HaikuOS will take off or that someone makes a WebOS DE for Linux with a complete suite of tools... but like peatmoss, I'd settle for a Rust/WinRT on Linux.


Better said than I could do.

One thing I’ve also thought is that if Linux had managed a first-class GNUstep environment with a compelling alternative to macOS in terms of dev tooling, it could have been great leverage in keeping Apple focused on the core developer experience in macOS.


Counterpoint: Windows is an Operating System. macOS is an Operating System. But Linux is not an Operating System†.

Rather, individual distributions of Linux are Operating Systems. As are the other Unices, e.g. the BSDs, Solaris/Illumos, all the old ones like IRIX, etc. And all of those, at first, built their own Desktop Environment from the ground up, just like Windows and macOS did.

As it happens, through the power of FOSS, many of these Desktop Environment projects were gradually merged, or the various distributions dropped their own DE in favor of someone else's.

But the historical path-dependence is still there, and the political lines are still mostly clear.

Don't think of KDE as "a Linux Desktop Environment." It's both not just for Linux (you can run it on FreeBSD, too!) and also isn't available for every Linux distro. KDE is still fundamentally, in some sense, "SUSE's Desktop Environment."

Same with GNOME: it might have been created by GNU, but it was mostly funded by RedHat and IBM for many years, even if Debian et al eventually picked it up as well.

Same with CDE: that's HP's desktop environment.

If you think of each of these companies as providing their own Operating System with a GUI composed solely of the graphical applications written for their own DE, the continued non-merging of these DEs makes a lot more sense. There's no more reason for them to merge with each other than for them to merge with e.g. Android. (These OSes all happen to be based on Linux, but that doesn't really mean anything. macOS is based on BSD; should the macOS DE therefore merge with some DE that runs on BSD?)

† Defining "Operating System" here as "a software ecosystem that tries to deliver a cohesive user experience to the people who deploy it."

---

The thing that makes POSIX DEs different from the ones on Windows and macOS, is that you can install multiple DEs' libraries, and then run applications from one DE "under" another DE.

Except for special one-off internal provisions for their own legacy DEs (Carbon; WoW) you can't run software written for another DE "directly" on Windows or macOS. You can only virtualize such software using e.g. Wine, or an X11 server that runs as a window under your DE.

But the fact that Linux can do this doesn't mean that the software was designed with this in mind. There's no "Linux integrated GUI experience" that all these applications are trying to target (FreeDesktop is a forum for "meet in the middle" consensus solutions to be generated when required, not a working group for creating an encompassing standard for everyone to converge toward). There's just the individual Operating Systems' GUI experiences (plural), where a given GUI app will fit in with the OS it was written for—and maybe with any other OS that happens to share its DE as the primary DE that the system is cohered around.

tl;dr: there is no "Linux." There's just RedHat, Debian, Canonical, openSUSE, Slackware, etc. all with their own goals. And also FreeBSD, NetBSD, Illumos, and whatever wacky proprietary Unices are still in play, who all also run and support these same DEs on very different substrates, with their own even-more-different goals. Each corporate player delivers you their own experience. There's no intent, between these companies, to try to commodify that experience into the same experience. Where would these companies/projects be, if they did?


> WSL is only for those that use Windows to target GNU/Linux, the demography that used to buy macOS to do the same.

Do you know many people that used to do this with macOS and are now using Windows?

Like, oh boy, I tried, but after 4 months I ended up back with macOS.


In my experience, one of the problems is that Macs are hilariously underpowered for many modern dev and analytics use cases. So the workaround is to work off remote machines/clusters over web interfaces/SSH, but for many development use cases that's just slow and clunky; they have more merit in productionising. The series of hardware clusterfucks on Mac hasn't helped either.

So I've seen Mac users who work in these areas get sick of the issues and limitations and switch to Win 10 + sometimes WSL.

The people remaining with Macs are the executive suite (who go for iPad Pros) and account execs who do sales, where it looks cool to pull out a Mac. However, even there people increasingly look dumb when they want to present something and go on a five-minute dongle troubleshooting hunt (because Mac killed the HDMI and VGA ports), starting their presentation appearing incompetent. Sometimes over web screen shares, only the desktop shows and not the applications.

By refusing to focus on hardware/software quality and power users, Apple has lost the game at the high end.


> In my experience, one of the problems is that Macs are hilariously underpowered for many modern dev and analytics use cases.

This is what I do, and it works perfectly. My dev machines are 82-core Xeon Platinums with terabytes of RAM, or nodes with 8 V100 GPUs attached, etc.

I don't know of any workstation, much less laptop, that could be a replacement for any of that.

For the stuff I do on the laptop itself (Word/PowerPoint/Outlook, web browsing, using an editor to edit files remotely, etc.), more power wouldn't hurt, but my MacBook is a 2013 MacBook Air... so I can't imagine that a 2020 MacBook would be underpowered.


The bulk of business analytics and modeling in the world happens in the zone between "regular business laptop" and "82-core Xeon machines". Laptops with Xeons and reasonably powerful NVIDIA GPUs thrive in that environment.


The CPU and GPU are rarely the problem. The problem is the amount of data to analyze. On a laptop with a 1TB SSD, if I need to analyze 200GB of data here and there, I just don't have enough disk space, so I need to SSH somewhere anyway. Once you have to do that, you might as well analyze it there. Bringing it to your laptop makes no sense.


A few years ago I made this transition. I had a MBP that I used for all Unix-based things; after realizing how well Ubuntu runs under Hyper-V, I just began to use that. It wasn't as seamless, but it was no big deal to open up SecureCRT to connect. Then WSL came along and I used that, but found the filesystem access too slow, so back to the Hyper-V VM. Now that WSL2 is effectively a more seamless Hyper-V VM, I am planning to just use that. I truly haven't needed a Mac for anything in at least 4 years. I do a bunch of enterprise Java development along with learning Rust, etc.; VS Code with the Remote extension pack makes it all even better.


So you switched from a MBP to a Windows laptop? (It's unclear from all that you wrote.) What was the advantage?


IMO the main issue with the Linux desktop is that it keeps getting broken. Outside of X11 — which itself doesn't provide much in terms of features, so you need to implement a ton of things yourself — there is no stable API. Both Gtk and Qt intentionally break backwards compatibility every few years, and unintentionally in between (mainly Gtk, though), which means that projects relying on them waste time keeping up with the changes instead of adding features and fixing bugs.

And it isn't just Gtk and Qt. Outside of a very small set of libraries (e.g. libcurl and Cairo), most other libraries do the same thing, and most developers do not even seem to see that as a problem. But if you sit down and consider how much time is wasted in total by the users of those libraries (i.e. the people who would actually make the desktops and the desktop applications), it should be obvious how much of an issue it is. And even then, because many programmers prefer theoretically "clean" code and believe that the only way to get it is to throw out existing working code and replace it with new (and unknowingly broken) code, they have a very strong bias against even trying to accept this.

A very big reason Windows is dominant is that when something lands in the OS itself, chances are you'll be able to use it decades later. macOS much less so, but they have an army of engineers to keep up, and even then, as time moves on, you see many people disliking how macOS breaks things (see the latest 32-bit disaster).

Outside of a few influential outliers, like Linus himself or Keith Packard (who works on Xorg and Cairo), there aren't many who care about not breaking things. And IMO it is a bit disconcerting that it seems to be mainly the "old guard" of Linux developers who care about this. I hope this is only because they've been around long enough to realize that unnecessarily breaking things isn't a good idea; otherwise we can only expect things to become more broken over time (and Linus has held this stance for decades, so it might not have anything to do with wising up over the years).

Honestly, the move-fast-and-break-things culture that the web has fostered doesn't work for anything that other people rely on.

But this is most likely also why you do not see this issue as much in the non-GUI world: most of the non-GUI stuff on Linux is either very long-running projects (e.g. Bash, Perl, etc.) that just do not break, or very young projects mainly used for web work that will be thrown away in a couple of years or so, so any breakage won't be felt as much (and the people working on those are used to broken things anyway). Any exception to that has ramifications that are felt for a long time; see Python 2 vs Python 3 as an example (...of what not to do — and note how many will even consider such breakage natural and unavoidable, as if it were some force of nature).

Unless this mentality and culture of breakage changes so that we can build stuff on solid foundations, things won't change.


> I feel like Linux will never organically gain critical mass for a desktop environment [...]

Indeed. https://media.ccc.de/v/ASG2018-174-2018_desktop_linux_platfo...


Just use Qt binding for Rust and you can have it on Linux.


> except maybe for the winrt::import that looks like Go?

It looks to be a pretty extensive codegen hook, generating bindings to the referenced runtime components on the fly based on the module metadata it finds on the system.

Apparently the C++ version uses IDL and an explicit codegen step instead: https://docs.microsoft.com/en-us/windows/uwp/cpp-and-winrt-a...


It's a really intriguing approach; most use "build.rs" to do codegen.


Seems sensible though; the build.rs solution would require every project using Rust/WinRT to have a build.rs plus code to load and codegen the modules (probably copy/pasted), so it'd be a lot more work for users of Rust/WinRT, though I guess it'd be less magic.
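For comparison, the build.rs route being discussed might look roughly like this; the metadata path and the `generate_bindings` helper are made up for illustration, standing in for a real winmd-to-Rust projection step:

```rust
// Hypothetical build.rs sketch of the alternative described above.
// The winmd path and generate_bindings() are stand-ins, not real APIs.
fn generate_bindings(winmd_path: &str) -> String {
    // a real step would parse the metadata and emit Rust source
    format!("// bindings generated from {}\n", winmd_path)
}

fn main() {
    let metadata = "C:/Windows/System32/WinMetadata/Windows.Foundation.winmd";
    // tells cargo to re-run this script when the metadata file changes
    println!("cargo:rerun-if-changed={}", metadata);
    // a real script would write this into $OUT_DIR for include!()
    print!("{}", generate_bindings(metadata));
}
```

Every consumer crate would carry a copy of this boilerplate, which is exactly the duplication the proc-macro approach avoids.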


Oh yeah, I agree. I’m really interested to see if this approach would work better than build.rs for some other projects too.


Hit the max nesting depth, but to reply about why: winrt::import! is reading (arbitrary?) filesystem paths, and there isn't an equivalent of build.rs emitting 'rerun-if-changed'.

So neither cargo nor the compiler really knows about the paths, and they cannot accordingly rebuild the sources when they change.

Perhaps they can only change when a corresponding version cargo does know about changes anyhow; regardless, bypassing the build/module system with a proc macro is probably not something to be done lightly.


Fun trick regarding the nesting depth, I believe that if you click on the comment permalink itself, it will still let you reply. Regardless, no worries :)

I was totally forgetting rerun-if-changed! This is indeed a barrier in the general case. I think that it doesn't affect this crate, but I am not 100% sure. Regardless, thank you!


Bah... This seems like exactly the kind of thing that makes dependency tracking and sandboxed compilation harder.


Why? I'm not an expert here, but it seems basically the same to me.


> brilliantly avoided mixing up C++/COM

I remember VB6 was also exceptionally good at this.


COM is both an evolution of VBX and a subset of OLE 2.0.

.NET was going to be what UWP is now, an evolution of COM, but then Java happened, and they went with something similar instead.

There are some references to how this all happened in the story of F# evolution, given that Don Syme was part of the team since the early days.


Actually, a lot in COM was made to work with VB. The COM string type is BSTR, which stands for Basic STRing.


I'm starting to understand the Windows team's desire for this WinRT. You can flow through time with these language bindings without being locked into a programming language: .NET, JS, C++ and now Rust. That is seriously cool.


It's been a reality on Windows for over 20 years, though. COM has made it possible to write applications on Windows and integrate with pretty much anything in it in any language with COM bindings.

Glad to see Rust is now one more such language.


Yeah, though... pretty much any natively compiled language, and many interpreted/VM-based languages, can use the C ABI on any modern platform anyway, so what COM provides isn't so much the ability to use the same API from many languages as a dynamic, object-oriented approach for doing so.

After all it isn't like you can't use the Win32 API directly from Rust - or any other language, e.g. Python.
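As a concrete illustration of going straight through the C ABI, here is Rust calling libc's `strlen` with no COM machinery at all; on Windows, the same `extern` mechanism is how you would reach Win32 exports directly:

```rust
// Declare and call a C-ABI function directly; no COM, no bindings layer.
// strlen comes from the C library, which Rust links by default.
extern "C" {
    fn strlen(s: *const u8) -> usize;
}

fn main() {
    let msg = b"hello\0"; // NUL-terminated, as C expects
    let len = unsafe { strlen(msg.as_ptr()) };
    println!("{}", len); // prints 5
}
```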


The type libraries are a big part of why it's so successful - you can take a typelib and auto-generate bindings, or manually generate your own bindings based on what you see. If you look at the typelibs for a given GUID, you are almost guaranteed that the API will match once you get a pointer back from a QueryInterface call (sometimes people are naughty). This is nice because you can compile all that in so the QueryInterface call is the only dynamic part of it - everything else at runtime is just regular calls through a vtable. The rigid nature of it means you can also reverse-engineer compiled code to figure out the shape of a given interface's vtable even without a typelib (I've done that a couple times).

This is why you were able to manipulate COM objects from VBScript and JScript even though they were dynamic languages, and in fact VBScript could define new COM objects on the fly and then expose new methods and properties through a COM interface called IDispatch. It's all just vtables once you do your initial setup.
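The "it's all just vtables" point can be sketched portably: at the ABI level, a COM interface pointer is just a pointer to a table of C-ABI function pointers, with the three IUnknown slots always first. This is only an illustrative layout (the names are mine, the signatures are simplified, and nothing here calls real COM):

```rust
use std::mem::size_of;

// Simplified: real COM methods take the interface pointer as their first
// argument and use stdcall on 32-bit Windows; the slot order is the point.
#[repr(C)]
struct IUnknownVtbl {
    query_interface: unsafe extern "system" fn() -> i32, // slot 0
    add_ref: unsafe extern "system" fn() -> u32,         // slot 1
    release: unsafe extern "system" fn() -> u32,         // slot 2
}

#[repr(C)]
struct ComInterfacePtr {
    vtbl: *const IUnknownVtbl, // every COM object begins with this pointer
}

fn main() {
    // an interface pointer is a single machine word; methods are reached
    // by indexing into the vtable it points to
    println!("{}", size_of::<ComInterfacePtr>() == size_of::<*const ()>());
}
```

A typelib (or reverse engineering, as the parent mentions) tells you the slot order beyond the IUnknown three, which is all a binding generator needs.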

IIRC there was a Windows version of Python for a while that had built-in support for COM interop in both directions - you could define IDispatch-based COM objects in Python and implement interfaces while consuming COM libraries using their typelibs.


I wouldn't call it successful; COM is mainly used by Microsoft, and pretty much nobody else uses it unless they want to interface with Microsoft's products (Office, etc).

In any case, yes, this is what I meant by the dynamic part. But the comment I replied to wrote that it made it "possible to write applications on Windows and integrate with pretty much anything in it in any language with COM bindings", which, as I wrote, doesn't need COM.


WinRT is actually built on COM; Rust/WinRT uses it under the hood.


That's true. But developing apps for Windows seems like a small niche now.

And if you don't use C#, you are in a world of pain with MFC and Win32.

C++ developers asked for a nice GUI framework for ages.


Well, before, it was just selected stuff. Now it is the XML parser element node. That is a bit more than before. And the tooling has changed like hell.


To be fair, you could do this with plain C, then with COM, etc.


While we've been hacking on the vst3-sys crate [0], we wound up forking the mentioned com-rs crate to support COM APIs on non-Win32 targets (notably, there's some stuff with the endianness of the IIDs and the calling convention of the generated vtables) as well as for usability (the com-rs crate only supports up to 5 interfaces implemented at once). We're still chasing down some issues with order mattering when implementing the interfaces. Even without those small changes, it's an impressive piece of macro programming that is very close to being usable for general-purpose, ABI-stable Rust crates.

As a little dose of irony, clippy doesn't like the com-rs crate too much.

[0] https://github.com/RustAudio/vst3-sys.git


Now I'd like some native Rust bindings for Cocoa/UIKit, perhaps as an "official"/higher-level alternative to https://crates.io/crates/objc. There are some crates already attempting to replicate all of Apple frameworks, but they are community-supported/require a bit of `unsafe`.


You won't get around the unsafe.

I've been hacking away at these for a few months now: https://github.com/ryanmcgrath/cacao

Goal is to get a good-enough version to build apps with. I'm dogfooding it for my next product so there's some incentive to keep pushing.


The state of Rust macOS bindings is pretty bad. Last I checked, a lot of crates are built on top of the `core-foundation` crate, which is full of unsound abstractions. And a lot of them seem hard to fix, as the Core Foundation types are all equivalent to `Arc<Data>` with no synchronization of the inner data. This makes mutable collections very complicated to model, for instance.


Never really understood whether WinRT binds me to store apps, sandboxes, appx and so on.

Can I develop a single Windows app with this that runs on the Windows 10 desktop as a Win32 app does (such as a plugin to another application, where I can’t decide the deployment model)?


WinRT, as in the API/ABI shape, at this point really isn't tied to any aspect of the UWP app model (other than that existing UWP-related APIs tend to be newer and more likely to use it). Third-party WinRT component activation used to depend on app packaging but that dependency was removed last year: https://blogs.windows.com/windowsdeveloper/2019/04/30/enhanc...

In general what used to be various facets of the "monolithic" Win32 vs. UWP divide have been refactored to allow a la carte usage, so not only can you activate your WinRT components without a packaged app, but you can also register your app as "packaged" for runtime purposes without having to install in that format ( https://blogs.windows.com/windowsdeveloper/2019/10/29/identi... ), can install in the packaged format without having to use the sandbox features ( https://docs.microsoft.com/en-us/windows/msix/overview ) can distribute sandboxed apps outside the store ( https://docs.microsoft.com/en-us/windows/msix/app-installer/... ), etc.

The only remaining aspect I'm aware of where there's still a binary decision and coupling is in the windowing model and shell integration; a Win32 process's top-level windows can only be HWNDs and a UWP process's top-level windows can only be CoreWindows. Win32 HWNDs can host both Win32 and UWP UI, but UWP CoreWindows can only host UWP UI; on the other hand, only UWP CoreWindows can make use of UWP shell integration features like fullscreen and picture-in-picture. It would make sense to somehow decouple this facet as well but I'm not aware of any plans to do so.


Thanks, this is informative. I think my confusion comes from the fact that many things are introduced as walled-off vertical concepts but later backported at least partially. Also, there is no lack of terminology confusion when things can be an API, an app model, or both.

I’ve been a full-time Windows desktop dev for 15 years now. And I still find these UI and app model frameworks extremely confusing (which is why I haven’t gone near one after WPF).


Yeah, it's definitely confusing for anyone who hasn't been following all the twists and turns (i.e., the vast majority of developers).


WinUI 3 is planned to have support for "Win32 app model" apps. I'm not sure if this is strictly "CoreWindow" or another class with different features, but you'll be able to write "native" UWP-Style apps in C++ and deploy them as an unrestricted .exe. They're going to show a demo at Build 2020, I heard.


My understanding is the WinUI 3 Win32 model is still going to be basically doing what you can already do with XAML Islands, just packaged more nicely with wrapper classes and Visual Studio templates and so on, so by itself it doesn't change the underlying HWND vs. CoreWindow situation (just wraps it). There are already Win32 apps, like Windows Terminal, that implement a "UWP" UI by doing their entire UI in one big XAML Island.


Rust/WinRT has really triggered my interest. For someone who hasn't followed Windows development for almost two decades, what's a good read to understand the different models of app creation and distribution?


Unfortunately it's a bit of a mess right now for reasons alkonaut mentioned in his post upthread. The official Microsoft doc page -> https://docs.microsoft.com/en-us/windows/apps/desktop/ is maybe an ok starting point for someone with a vague memory of Windows desktop development who's curious about what's new, but I could see someone getting lost in a maze of acronyms and links, especially if you're trying to use it with a newly semi-supported language like Rust


> on the other hand, only UWP CoreWindows can make use of UWP shell integration features like fullscreen and picture-in-picture

What are the practical advantages of fullscreening / PIP'ing thanks to UWP as compared to what you can achieve without?


The Shell-managed PIP is entirely unique to UWP. In Win32 you can fake it with an always-on-top window, but you don't get the PIP window-management gestures out of the box, among other things. From what I recall, where this particularly matters is touch gestures and HoloLens and Windows MR support, where Win32 always-on-top is very different from Shell-managed PIP in three dimensions. (Also, Xbox doesn't support Win32 always-on-top.)

I think fullscreening also has some gestures the Shell manages, but I'm not as familiar with them. Similar too that the Shell does very different things in 3D.

There's also the Dual Screen support in CoreWindow that Microsoft has been talking up for Windows "10X".


The future of Win32 is also bound to sandboxing, although we are not yet there.

"How Windows 10X runs UWP and Win32 apps"

https://www.youtube.com/watch?v=ztrmrIlgbIc&list=PLWZJrkeLOr...

WinRT is an improvement of COM, it either runs sandboxed in UWP (its home), or you can access it from Win32 via XAML Islands and Win32/UWP interop.

Right now Win32 applications can still choose between legacy mode, or opt into Win32 sandboxing via MSIX packages.

As Windows 10X shows, it might not be for long.


The minesweeper example from the article is a regular Win32 app. Desktop Win32 apps can call WinRT APIs.

https://github.com/robmikh/minesweeper-rs/blob/master/src/ma...


A lot of WinRT APIs are usable from "normal" desktop win32 apps, but some are restricted to the sandboxed/appx/etc model.

The most prominent of these is probably the XAML GUI stuff, but those are becoming available everywhere in the near future, and being decoupled from the OS itself, to go along with WinUI 3.0


It's a spectrum. You can already use (some) WinRT features on desktop/console win32 apps, even unsandboxed[0].

You can use UWP XAML in win32 desktop apps with a sandbox, and you'll probably be able to use it without a sandbox when WinUI 3 comes out (currently in early preview, they'll probably have a couple sessions about it at Build)

[0] some APIs require your app to have an identity though, and you need the sandbox for that


Well this really confused me as I thought Microsoft had discontinued the Windows RT operating system. It turns out that they also have a platform-agnostic application architecture, ingeniously named WinRT.


Coding in C++/WinRT (similarly, C++/CX) is in my experience a major mistake when targeting WinRT/UWP using C++. The primary use case for WinRT/UWP is making apps for the Microsoft Store, where most of the time you're going to be sharing code also targeting Google Play and the Apple App Store, given those markets are vastly larger (and have viable ad platforms; Microsoft is shutting theirs down on June 1st).

The C++/WinRT and C++/CX language projections are incompatible with compilers that target Android and iOS due to WinRT-specific reference syntax and keywords such as the '^' reference indicator.

As an alternative, it is possible to access all of the same WinRT/UWP platform functionality in pure C++ without the C++/WinRT syntactic sugar extensions, allowing you to cleanly abstract the WinRT/UWP platform APIs relative to Android and iOS. This is not well documented but by far a superior option for folks familiar with COM and Windows internals.

Furthermore, avoiding things like the '^' reference operator syntax gives you tighter control over object lifetime; C++/WinRT and C++/CX hide the mechanics of COM object lifetime to some degree and present a pseudo-environment much more like C# on .NET.

Microsoft does provide the WRL pure C++ template library, which is in many ways a spiritual successor to the ATL COM template library. WRL is in my experience the best way to interact with COM objects surrounding WinRT and UWP.

(Source: developer of several top 10 apps (at times) on Microsoft Store).

Glossary of Microsoft terms:

WinRT/UWP (WinRT (Windows Runtime) the old name used in Windows 8/8.1 for what later was renamed to UWP (Universal Windows Platform) in Windows 10). WinRT/UWP was originally intended to be an object-oriented replacement API for Win32 but hasn't lived up to its charter due to only being useful for Microsoft Store apps and being significantly less capable than Win32.

COM - Component Object Model; an object-oriented API style used by some Win32 APIs, this style was used near-exclusively as the basis for the WinRT APIs. COM can be unwieldy to deal with so there are several C++ template libraries like WRL and ATL to make it easier.

C++/WinRT (and C++/CX) are Microsoft specific C++ extensions that require MSVC and hide the complexity of dealing with COM relative to WinRT/UWP APIs.


C++/WinRT and C++/CX are not the same; they are different enough that Microsoft Docs has a whole page explaining how to move from C++/CX to C++/WinRT: https://docs.microsoft.com/en-us/windows/uwp/cpp-and-winrt-a...

In particular only the older C++/CX has ^ pointers or other Microsoft-specific C++ language extensions. C++/WinRT is standard C++.

Writing your code using WRL (Windows Runtime C++ template Library) is good if you want to learn enough about how the Windows Runtime extensions to COM work in order to debug effectively. However, its close relationship to the underlying COM and Windows Runtime APIs also makes it far too verbose to quickly write good API-consuming and -producing code, in my opinion.


Updated my post - I had forgotten C++/WinRT discarded the ^ syntax, since we skipped C++/WinRT for our development as it suffers from many of the same problems as C++/CX.


> it suffers from many of the same problems of C++/CX

The whole point of C++/WinRT is to be 100% standards-compliant C++. The guy who created it didn't even work at Microsoft at the time.


And now he created the Rust bindings.


Just like Qt depends on moc.

GCC and clang are also full of language extensions.

In fact if it wasn't for the work of Google, to this day Linux would only be compilable with GCC C.

I remember C devs on Apple platforms being happy for their lambdas extensions when they were introduced.

So I never understand why everyone else is allowed to create language extensions, but when Microsoft does it is bad.


I think C++/WinRT is more of a "modern C++" projection than C++/CX, and they aren't an also-known-as, although they meet similar goals. For example, with the ^ reference:

https://docs.microsoft.com/en-us/windows/uwp/cpp-and-winrt-a...


I want to reply to many other comments but I want to make this more visible: Microsoft does not do Embrace-Extend-Extinguish anymore today (or at least, it does a lot lot less).

You know who does Embrace-Extend-Extinguish? Google.


Looks like async/await support isn't ready yet. I wonder how challenging it will be to integrate WinRT's async model with Rust's futures and tasks.


IAsyncOperation is pretty flexible. I don’t think it will be technically hard, though perhaps picking the best fit model will be a time consuming process.


I'm super excited for this! I do wish that they'd resurrect their JavaScript projection for the runtime though. Especially with their focus on React Native, it'd be awesome not to need a "native" module to call Windows APIs. That being said, I also hope Rust/WinRT becomes a valid native module format for React Native (though I suspect it won't since that would likely require React Native developers to have the Rust toolchain setup to compile the module).


Something like this already exists: https://github.com/NodeRT/NodeRT


This is neither an official language projection nor for React Native.


It's not official, but it was endorsed by MSFT on stage at last build. You're right about RN, I did not realize that.


PWAs on Edge were able to do it, when signed.

Hopefully they will do the same with the new version.


You have to hand it to Microsoft: It may be the only large software company that doesn't seem susceptible to not-invented-here syndrome.

It's true that Microsoft historically has been an intensely competitive company, often trying to undermine competing technologies, e.g., with "embrace and extend" strategies.

But whenever a competing technology -- whether a language, or a framework, or an application -- gains adoption with developers or users, Microsoft to its credit will follow along, sooner or later, and do the work necessary for making it a first-class citizen on Windows.

(For those who don't know, Rust is originally a Mozilla project.)


It’s always easy to be an open source proponent when you are the underdog.

And Microsoft, as crazy as it sounds to me, kinda is when it comes to cool dev stuff.

We'll have to see if they have really changed or just waiting to regain ground and going back to the extinguish part.

But what I can confidently say is that, of all the BigCorps, Microsoft is the only one actively trying to make developer's life easier.

Apple is consumer first, so MacOS is getting harder and harder to hack on, for better and worse.

Chromebook was never a serious competitor. And Linux is Linux, hackable to a fault.

But Windows… which I have always despised, is, for the first time in my life, tantalizing… to some extent.


The Microsoft you remember is at a time when dev tools were "the future". They are now the past. Microsoft's ruthlessness is no doubt focused elsewhere now. For example, do they go out of their way to make your code portable b/w Azure and Google? Probably not.


Off the top of my head, here are three different offerings from Microsoft that enable cross-cloud applications:

- https://azure.microsoft.com/en-us/services/azure-arc/

- https://dapr.io/

- https://keda.sh/


The runtime for their serverless functions is also open source and (I guess) fairly portable: https://github.com/Azure/azure-functions-host


That's pretty cool. I'm not an expert, so I can't tell how truly portable it is, or whether it's like "Visual Java" ;-)


From firsthand experience... Microsoft has quite a bit of NIH syndrome. Still does. They've just gotten better about it in recent years.


One good example is Microsoft adding dtrace to Windows. The Linux community has floundered for years with several competing tracing tools, all of which have major deficiencies in one way or another.


Also from firsthand experience: for many (not most) of MS's projects, that's the right decision. A lot of their software is at a unique intersection between consumer and enterprise support which at their scale often has unique requirements.

But then MS's bureaucracy can grind down even the most well thought out projects (Also getting better, though).


TIL Rust is also sponsored by Amazon. (https://aws.amazon.com/blogs/opensource/aws-sponsorship-of-t...)


Right now they seem to have an enormous theme of winning over developers, of all kinds, for all platforms. I would say it's to sell them tools, but most of the recently-headlining MS tools have been free. Maybe it's just to garner general goodwill. Maybe it's to draw people to Azure. I can't quite see the strategy, but it's a very clear theme behind many of their recent (publicly-facing) efforts.


MS has a huge legacy market; they don't want to end up as the next Oracle, where every dev relentlessly argues to move to a different platform.

Making tools/functionality free such that existing devs don't want to jump, and new devs consider Azure, is a reasonable business strategy.


Except two of the biggest platforms they've been focusing on are the web and Android, and they've publicly stated that they don't necessarily see the future as a Windows-world. Azure is the only platform that I see these efforts helping, and maybe that's enough, but I still have to wonder if there's something I'm missing.


It's in their blood - it's the 'Embrace' in EEE.


You know that’s from 3 decades and 2 CEOs ago, right? They’re a large company so there’s sure to be plenty to dislike but I don’t think it’s contributing much without some analysis of their actions in this century.


my view is that embrace/extend has really always been a general platform business thing and not specifically a Microsoft thing. it became infamously associated with Microsoft in '90s-00s because of the monopoly position they had.

since Microsoft is still a platform business, it's reasonable to be wary of the negative potential of embrace/extend in connection with them, for the same reason it is in connection with Google and others.


I don't see Microsoft doing much EEE on their modern non-cost-centre platform (Azure), though. Amazon is a much clearer example these days, with AWS having "embraced and extended" Postgres into Aurora, Redis into ElastiCache, and many other examples. (I don't know if AWS has ever gone all the way and "extinguished" any of the cores they've built upon, though?)


at its most benign embrace/extend can just mean that your product supports the standards people expect and need, and there's also some differentiating aspect in which it's better than the other options (otherwise why does it exist?)

It can be a bit fuzzy defining precisely where that benign sense stops and the negative sense, where the "differentiation" brings harmful lock-in, begins. Then if you have a monopoly position the "extinguish" bit can come in. But actually AFAICT Microsoft rarely (never?) actually succeeded in extinguishing any of the open standards they were infamous for attacking - their successes happened earlier against proprietary competitors like Lotus 1-2-3 and Novell Netware


> But actually AFAICT Microsoft rarely (never?) actually succeeded in extinguishing any of the open standards they were infamous for attacking

SMB pretty well extinguished the use of NFS; and Active Directory pretty well extinguished the use of LDAP.

Neither of these were really Microsoft-driven, though. It was almost the reverse: the ecosystem cloned Microsoft's approach into FOSS, and then liked it better, and replaced their own stuff with it without Microsoft's participation. Sort of like how BitKeeper's approach was cloned as git, which then extinguished most other SCMs.


MemoryStore is GCP; AWS's Redis is named ElastiCache.


EEE did not end 3 decades ago and Balmer was only 1 CEO ago, so... no?


Whether or not you like/dislike EEE, the fact remains that MS openness to Embrace helps them today to quickly implement different technologies into Microsoft's own stack.

They may have stopped the "Extend & Extinguish" part, but they haven't forgotten how to Embrace.


I wrote this below, but I'll mention it here

this stuff also happens in a benign fashion. Emergent behavior. People/groups and their interests bubble up the self-serving outcomes without being a conspiracy.

We have 4 features to support in this release. Feature 1 is P0: it is basic functionality. Feature 2 is P0 because 3 customers are asking for it. Feature 3 is P0 because our team needs it to work with our other product. Feature 4 is compatibility, and we'll get to that as P1.

That's why products get telemetry and linkedin integration, but not compatibility with other software.


Let me introduce you to Steve Ballmer, the CEO before the current one, Satya Nadella:

https://www.theregister.co.uk/2001/06/02/ballmer_linux_is_a_...

He was CEO of Microsoft until 2014, that is, 6 years ago.

Microsoft changed strategies because they tried everything they possibly could to counter open source and Linux and lost. Now they are forced to play nice.

Being forced to be nice is not the same as being nice out of altruism.

If at any moment becoming jerks gives them more shareholder value they will become jerks again.


They _lost_?

They have a 1.36 trillion USD Market cap (today). That's more than Apple's 1.29 trillion USD Market cap. It's hilarious how people on hacker news live in this bubble where Microsoft has faded away into irrelevance.

And they're also very good. I'm running Windows 10 right now on a dual-monitor, dual NVidia GPU, dual Xeon system and everything "just works."


Saying they "lost" in this context means they got to a point where it was clear capitulation to open source was the obvious play.


As if you could not do that on another OS.

You can use Nvidia GPUs in SLI mode in Linux and BSD too. I am using an Nvidia GPU right now and I am not using Windows, and "everything just works" too, no terminals involved. Are you implying that it's not possible to do this outside Windows?

How much of that market capitalization comes from running a de-facto monopoly of office suites for decades? As a consumer, it's hard to see how corporations like Microsoft push governments around the world to buy licenses in bulk because there are no alternatives.


> dual-monitor, dual NVidia GPU

Not exactly thanks to Microsoft.


You can't do it on a Mac (unless you want to run two non-supported graphics cards sharing 4 lanes on a Thunderbolt/USB-C connector instead of each having their own x16 PCIe connection).


To their credit, stability of such a setup might be facilitated by Windows' fine display driver model (WDDM).


Sure let's do that. This century they have bought up LinkedIn, GitHub, npm, and a stake in Facebook. They have replaced Atom with a fauxpen source editor. They've pointed them all at their locked down me-too cloud platform. And don't forget to develop your code on their locked down me-too tablet.

What has changed is that MSDN is no longer delivered on CDs.

That's because "Open Source" is a much cheaper deliver mechanism if you control the platforms that make its freedoms meaningless.

Plus ça change...


> This century they have bought up LinkedIn, GitHub, npm, and a stake in Facebook. They have replaced Atom with a fauxpen source editor.

You left out the part where you explained why any of those acquisitions are a problem: LinkedIn is no worse than it was before, GitHub and npm are both popular services people choose to use without coercion, and whether or not you like it a lot of developers are switching to VSCode because it's a better tool which makes them more productive.

I don't know what you believe to be true about their Azure integration, but I know many people who use GitHub, npm, VSC, etc. and none of them use Azure, so clearly there's some key step missing in that process.


Concentration of power is (or at least I thought it was) obviously bad. This is why there are laws protecting against it, but either they're not enforced or they weren't updated for the internet age.


It is a concern but being popular is not the same as having high power: GitHub has lock-in to the extent that it's a great service but it's not exactly like they're changing Git to prevent you from using your own self-hosted service, Gitlab, BitBucket, etc. Similarly, VSCode is open-source with tons of competitors: where's the angle where they have much ability to dictate anything to users?


>They have replaced Atom with a fauxpen source editor.

I'm gonna counter this with the fact that VSCode was way better maintained and optimized than Atom.

The GitHub before Microsoft seemed to me like a company that would half-ass everything they could, and Atom was one of those things.

When VSCode came, at least to me, it absolutely destroyed Atom with versatility and stability.


I don't think they really have the power to extinguish anymore. Sure, Windows is still dominant in the desktop market, but that's not where the novel software is being written.

Amazon is the king of EEE these days. Microsoft is desperately trying to remain relevant.


> It may be the only large software company that doesn't seem susceptible to not-invented-here syndrome.

.NET?


Many developers don't know that C# began in essence as a response to a lawsuit from Sun, sound familiar?

https://en.wikipedia.org/wiki/Visual_J%2B%2B#Sun's_litigatio...


.NET, ActiveX, JScript, etc.


> that doesn't seem susceptible to not-invented-here syndrome

Really?

Complete opposite of my experience: Microsoft is re-inventing their own version of literally everything, even the smallest thing which doesn't have any strategic benefits. That shows me that they suffer extremely from not-invented-here syndrome.

For example, why does Microsoft need to invent their own Redis cache alternative (=AppFabric)? Why does Microsoft need to invent "Windows containers" which are incompatible with Docker Linux containers (aka normal Docker)? Why do they need to keep re-inventing Internet Explorer? Why does Microsoft need to re-invent their own search engine which nobody uses (Bing in case you have never seen/heard of it)? Why does Microsoft need to invent their own inferior Slack after already nuking Skype, Skype for Business, Yammer, etc.? Why does Microsoft need to invent their own Azure DevOps inferior alternative to GitHub, especially when they already bought the better tool, why compete with it with a shit product which makes every developer's blood boil?


those are product offerings; it is like asking why there are new fast-food chains if McDonald's already exists.

specifically, the alternatives you list are commercial offerings from other companies, that is just wanting a piece of the cake.

and for example, "internet explorer" is now going to be based on Chromium, terrible news for the web, but definitely an invented-elsewhere case.

regarding Rust a not invented here situation would be trying to develop their own rustc/cargo or a new similar language, sort of like apple with swift (not saying this is what motivated apple)


As much as I appreciate what Microsoft is doing here, your NIH praise is not entirely well targeted - look at this research project from Microsoft, which aims to build a slightly different Rust. https://github.com/microsoft/verona


How is a research project proof of NIH? I guess C++ or Java should never have existed because how dare another programming language use that "OOP" concept Smalltalk popularized


That's not at all a point I'm making. OP praised Microsoft for embracing Rust and making it work well on Windows and thanked them for not doing their own thing and instead supporting an existing project. So I'm just pointing out to a very recent endeavour that goes against all of that. I just enjoyed the irony of it all, that's it.

Not saying anything about anything else.


I think you're confusing Microsoft Research with Microsoft more broadly. A language research division (MSR does more than this, but they do this) is going to produce languages, that's their job. They often aren't trying to make a language that folks will use, but are instead exploring possibilities that end up improving other languages. Verona (and it is far too early to tell, IMHO) may end up in fact improving Rust. At the same time, Microsoft can and does support Rust, both as an organization on their own and indirectly via GitHub. These two things aren't in conflict, especially when you're talking about an organization as big as Microsoft is.


[flagged]


> Added their own packages version of every package.

> Added their own package repository (MRAN), replacing CRAN.

How is any of this different than the many incompatible Linux packaging solutions?

How would you address insecurity and trust relations if you weren't a first-class contributor of CRAN? (consider the recent Python package exploit; although amateur, it's still an indication of such issues; I'm sure I remember this was an issue with the Perl packages as well; wasn't there an exploit pushed to the Linux kernel tree at some point years ago?)

The narrative changes from "MS is EEE'ing CRAN" to "MS allowed a package with exploits to be deployed from CRAN".

I am not familiar with any of the issues within the R ecospace, but I am very familiar with corporate culture, security, and liability awareness.


> I am very familiar with corporate culture, security, and liability awareness

I thought the same thing many years ago. I was speaking with a friend (many years ago) who used to work for microsoft.

I said, "so they broke something and didn't support this, they're just developers and how do they know all the issues?" Basically, I was saying the coders were human, and sometimes getting something out is hard.

My friend said, "No, don't be naive. they had meetings. They sat around and said, "how can we own this?""

It might be interesting to revisit the (ancient) halloween documents:

https://catb.org/esr/halloween/halloween1.html

This also works in a benign fashion. Emergent behavior. People/groups and their interests bubble up the self-serving outcomes without being a conspiracy.

We have 4 features to support in this release. Feature 1 is P0: it is basic functionality. Feature 2 is P0 because 3 customers are asking for it. Feature 3 is P0 because our team needs it to work with our other product. Feature 4 is compatibility, and we'll get to that as P1.


And now Google do the same in the browser. They're regularly being called out by Mozilian but most people dismiss it as "the coders are human".


Microsoft R is kinda annoying, OTOH it only happened because Microsoft bought Revolution Analytics, who developed this tool.

AFAIUI, it's mostly just R with better parallelisation of matrix libraries (which is cool).

It's not quite compatible, but I've used both through Conda a bunch, and haven't hit any edge cases that caused a problem.

So yeah, you're right that it's embrace/extend, but the extend was done by a startup which they bought.


Yeah, this seems like a good take. Otherwise it's like saying anyone building alternative or similar runtimes for existing languages is evil. This is common. People don't always contribute to existing languages or runtimes because it's fraught with politics, and it can just be easier to fork and do what you need, rather than trying to get what you need into a project that might not want it.


F# is Microsoft's invented here version of Java


You mean C#? F# is an ML family language on the CLR.

> James Gosling, who created the Java programming language in 1994, and Bill Joy, a co-founder of Sun Microsystems, the originator of Java, called C# an "imitation" of Java; Gosling further said that "[C# is] sort of Java with reliability, productivity and security deleted."

(Wikipedia)


I thought C# was Microsoft's NIH version of Java?


To be fair Microsoft wanted to adopt Java, and with J++ they added Windows-specific base libraries and made it work with COM. One could still compile ordinary Java back then also with J++.

Although they weren't allowed to do that if I remember right, so they basically had to create C# - and C# turned out to be fantastic in my opinion.



Would this be possible for Go too?


It is possible for any language, really. They already have language projections for C++ (three of them, C++/WinRT being the newest, by the same person running this Rust projection), C#, VB, and JavaScript.

Under the hood they call out to (and potentially expose) COM interfaces, which are plain C ABI. So if your language has C interop, or if it could be added, something like this could be built.
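For illustration, here is a hand-written sketch of roughly what the root COM interface looks like at the C ABI level in Rust (names and layout are mine for the example; winrt-rs generates this machinery for you):

```rust
use std::os::raw::c_void;

// GUID layout as used by COM.
#[repr(C)]
struct Guid {
    data1: u32,
    data2: u16,
    data3: u16,
    data4: [u8; 8],
}

// The IUnknown vtable at the C ABI level: three function pointers.
// Every COM/WinRT interface begins with these entries.
#[repr(C)]
struct IUnknownVtbl {
    query_interface:
        unsafe extern "system" fn(this: *mut c_void, iid: *const Guid, out: *mut *mut c_void) -> i32,
    add_ref: unsafe extern "system" fn(this: *mut c_void) -> u32,
    release: unsafe extern "system" fn(this: *mut c_void) -> u32,
}

fn main() {
    // A COM "object" is just a pointer to a pointer to such a table,
    // which is why any language with C interop can consume it.
    assert_eq!(
        std::mem::size_of::<IUnknownVtbl>(),
        3 * std::mem::size_of::<usize>()
    );
}
```

Any language that can describe this struct and call through the function pointers can build a projection on top.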


C interop is slow in golang so I don’t think it would be the best language for it. People do it with SDL though


Sorry for the naive question, but how does this fit into WinUI 3.0?


Yes


How does this projection work with the borrow checker? UI frameworks typically allow you to have multiple mutable reference to the same element.


It works fine- Rust only forbids multiple `&mut T`s to the same object, but `&mut T` doesn't exist outside of Rust, so code in other languages doesn't have to care about that rule at all.

These bindings can just expose non-Rust pointers as (wrapped) raw pointers, leaving all the actual dereferencing and mutation to the other language. WinRT objects don't expose anything but virtual methods on opaque objects anyway, so this isn't even Rust-specific.

Outside of WinRT this sort of thing requires a bit more thought, but you can still use the usual tools- `Rc` and `Cell` or their variants, or raw pointers and `unsafe`.

You can think of `&T` and `&mut T`, and their associated "no shared mutability" rule, as a compiler-checked version of C's `restrict`. Stop using `restrict` and the rules go away.


Is the projection literally "wrapped raw pointers?" How are the raw pointers wrapped - for example is there interop between COM reference counting and Rust Arc? Or something else?

Here's a classic trap: you get a reference to something interior, and then the parent is deallocated. My event handler gets a ref to this button's label and then replaces the button. Is this possible with Rust/WinRT? (No judgement either way!)


Rust's borrow checker solves the problem of "get a reference to something interior, and the parent is deallocated". It knows (by static analysis) when an inner reference may be in use, and won't let the program compile if the parent could deallocate during that time.

This works well even with externally refcounted objects. You can give reference to an inner object without increasing its refcount, and rely on the borrow checker not to allow the parent to decrease the refcount. Or you bump the refcount of the inner object and give it out as an independently owned object.


WinRT is lousy with functions like Window.Current which provide global access to the current window. Any function may get access to any control at any time.


The raw pointers are wrapped in `ComPtr` defined here: https://github.com/microsoft/winrt-rs/blob/master/src/com_pt...

Those `ComPtr`s are then wrapped in convenience types that handle method dispatch, via code generated here: https://github.com/microsoft/winrt-rs/blob/master/crates/win...

There is no integration with Rust Arc, but that's just because in COM/WinRT each object is in charge of its own allocation and ref-counting- AddRef and Release are just methods of IUnknown, the root of the interface hierarchy.

That particular trap doesn't really come up in COM/WinRT- COM objects only expose methods that pass around references to full objects. This is because a given object might exist in another process or even on another machine, in which case you are only holding a proxy created by the runtime.

So any references held by your event handler, to the button or its label, are ref-counted (though they may be weak references), and replacing an object is fine because it just decrements its refcount.

In the general case (e.g. implementing an interface yourself, or some other non-COM-based scenario), pornel's sibling comment is correct- just like with Rust's own Rc/Arc types, you can borrow from some particular ref-counted pointer and the compiler will ensure that it outlives those borrows, keeping the object alive.


Hmm. Let's say I install an event handler:

   Application::Current.OnActivated = move |args| stuff();
and stuff() removes it:

   fn stuff() { Application::Current.OnActivated = None; }
Rust lambdas are not COM objects and are not refcounted. In practice, what prevents the event handler from being deallocated while it is executing?


Right, a Rust lambda is not a COM object on its own, but by the same token you can't just stash one in a COM object's property either- COM object properties don't have a type for "Rust lambda," or "C++ lambda" for that matter.

Instead you need to convert the lambda (from whatever language) into the type of that property- in this case some kind of Delegate, at which point it is ref-counted like any other COM object.

So in this case, installing the handler increments the refcount, reading the handler back out to invoke it increments it again, the handler itself decrements it, and finally whoever invoked it drops their reference and the Delegate gets freed.


As a former MFC/Winforms/WPF/Qt now turned Angular developer, this is really sparking my interest. I would love to go back to writing great desktop applications. Problem is, most of my projects nowadays require a multiplatform GUI. Desktop is not enough; we also need web and mobile.


This is so cool. I should brush up on my Rust; maybe I should make a Python extension or something...


With Windows 7 on its way out slowly, WinRT is becoming more palatable.


Slowly?! Hasn't Win7 been dead for years?

> Latest release Service Pack 1 (6.1.7601) / February 22, 2011; 9 years ago

> Mainstream support for Windows 7 ended on January 13, 2015.

There is:

> Extended support for Windows 7 ended on January 14, 2020.

But that is referring to very specific business contracts, not casual users pretending EOL never happened.


> Slowly?! Hasn't Win7 been dead for years?

No. Microsoft wants to kill it, but... even just checking Steam's hardware survey (users of which are more likely to upgrade so chances are overall PC usage of Win7 is higher) there is a combined ~7% of users on Win7.

With ~25M concurrent users the last 48 hours, that means there were more than 1.7M users running on Win7 just within the last two days.

FWIW these numbers are a bit less than 10 times the combined Linux numbers of all distros.


Casual users still received updates for Windows 7 until January 14th, 2020.

Businesses can get "Windows 7 Extended Security Updates (ESU)" for a maximum of 3 years after January 14th, 2020. https://support.microsoft.com/en-us/help/4527878/faq-about-e...


https://github.com/retep998/winapi-rs they should work with this


These are two different APIs, so it's not clear to me how they would.


I thought there was much overlap


My understanding is that there’s a lot of overlap but also stuff that only one or the other can do.


Would `#[cfg(..)]` be the right way to do this then?
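For context, a minimal sketch of how `#[cfg]` gates alternative implementations at compile time (function name and the winapi-vs-winrt split are hypothetical; in a real crate the two arms would likely be keyed on Cargo features rather than just the target):

```rust
// Exactly one of these definitions is compiled in, depending on the target.
#[cfg(windows)]
fn backend() -> &'static str {
    "winrt" // hypothetical: take the WinRT-based path on Windows
}

#[cfg(not(windows))]
fn backend() -> &'static str {
    "stub" // hypothetical: a portable fallback elsewhere
}

fn main() {
    println!("using backend: {}", backend());
}
```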


I stopped reading at "The Windows Runtime is based on Component Object Model (COM) APIs..."

Is that a bad choice on my part?

I see COM and think of OLE, CORBA and I remember that I'm old and going to die pretty soon (within the next 40-50 years almost assuredly).

https://en.wikipedia.org/wiki/Component_Object_Model


Yes it is.

After the whole political disaster that was Longhorn, the Windows team decided to rebuild the Longhorn ideas, originally based on .NET, and redo them with COM.

So while many outside Windows have considered COM dead, since Vista all major Windows APIs have actually been provided as COM interfaces, while the large majority of the Win32 surface has been frozen since Windows XP.

With WinRT/UA/UAP/UWP they have gone back to the roots, while pursuing this idea one level up.

Basically they replaced COM type libraries with .NET metadata, and added support for generics, structured data types and implementation inheritance, bringing it to what .NET would have been like had they not decided to copy Java.

So while the implementation of this reboot has been somewhat clumsy, COM is not going anywhere on Windows.

Also, this isn't a Windows only thing, Linux has gathered around DCOP, Android has AIDL, Fuchsia FIDL, macOS/iOS has XPC, then there is gRPC and plenty of other variants.


I think the most similar thing to WinRT on Linux would be GObject Introspection since the main point appears to be a way to somewhat automatically surface the platform API in multiple languages using language-specific idioms. D-Bus is instead for RPC and events, which I guess COM also gets used for?


I guess you are kind of right; it would be a mix of GObject, KParts and D-Bus then.

As for COM, yes there are multiple models, in-process, external process and across the network (with DCOM).

GObject and KParts would be the in-process variant, with D-BUS for the other execution models.


For Linux I believe you mean D-Bus instead of DCOP. (That said, it's largely similar in concept.)


You are right, that was the old protocol.


Historical note: first versions of .NET were called "COM+ 3.0".


Well, even before XPC there was Distributed Objects. In the mid-90s NeXT had a bridge between COM and DO called D'OLE, which remoted COM before Microsoft did. Of course, that didn't really take off, but it was a great example of how flexible a dynamic runtime can be. (They also had a lot of very smart developers.)


It’s not that different from things like Protocol Buffers, and for application code it’s mostly a hidden implementation detail. The APIs you’re calling (like the Composition APIs in the Minesweeper example) might be calling into a separate process but you don’t have to care.

Disclosure: I work on the Windows team


COM is a fantastic invention and implementation.

Granted, it was pretty painful in C and somewhat in C++, but all the other languages (VB and C# mostly, but plenty of others) really unlock the power of COM.

Nothing like that remotely exists on Linux or macOS.


My only experience with COM (that I know of) was trying to do MS Office automation from C# >10 years ago.

I recall that it was very awkward, and somehow prone to memory leaks, and unclosed resources. To this day, I still don't really understand what COM is, or is supposed to be. But I still have a negative visceral reaction.


> To this day, I still don't really understand what COM is

Maybe some excerpts from the documentation[1]:

"COM specifies an object model and programming requirements that enable COM objects (also called COM components, or sometimes simply objects) to interact with other objects. These objects can be within a single process, in other processes, and can even be on remote computers. They can be written in different languages, and they may be structurally quite dissimilar, which is why COM is referred to as a binary standard; a standard that applies after a program has been translated to binary machine code."

"The only language requirement for COM is that code is generated in a language that can create structures of pointers and, either explicitly or implicitly, call functions through pointers."

"Besides specifying the basic binary object standard, COM defines certain basic interfaces that provide functions common to all COM-based technologies, and it provides a small number of functions that all components require."

[1]: https://docs.microsoft.com/en-us/windows/win32/com/the-compo...
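To make the "binary standard" bit concrete, here is a toy sketch in Rust (not real COM, and the `Greeter` names are made up): an object is a struct whose first field points to a table of function pointers, and any language that can produce or consume that pointer layout can interoperate.

```rust
// Toy illustration of a COM-like vtable layout. Real COM would put
// QueryInterface/AddRef/Release in the first three vtable slots.

#[repr(C)]
struct GreeterVtbl {
    // A table of function pointers with a fixed binary layout.
    greet: unsafe extern "C" fn(this: *mut Greeter) -> i32,
}

#[repr(C)]
struct Greeter {
    // First field: pointer to the vtable, as the binary standard requires.
    vtbl: *const GreeterVtbl,
    value: i32,
}

unsafe extern "C" fn greet_impl(this: *mut Greeter) -> i32 {
    (*this).value
}

static VTBL: GreeterVtbl = GreeterVtbl { greet: greet_impl };

fn main() {
    let mut obj = Greeter { vtbl: &VTBL, value: 42 };
    // A client only needs the layout: dereference the vtable and call
    // through the function pointer, as any COM consumer would.
    let result = unsafe { ((*obj.vtbl).greet)(&mut obj) };
    println!("{}", result); // prints 42
}
```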


There is a big difference between out-of-process COM (like Office Automation) and in-process COM (like WinRT and UWP or DirectX). Out-of-process was always a little tricky and not implemented well. DCOM was even worse. In-process COM works really well.

Edit: I should add that C# isn’t the best language for COM because it doesn’t release objects when they go out of scope so you either have to release everything manually or wait for the GC to kick in which can cause memory problems. VB and C++ and even PHP are better that way.


That was kind of sorted out with the COM improvements in .NET 4.0 and SafeHandles.


Can you give more detail?


What is the point of "In process COM"? Is the same as any other library?

I thought COM was meant to communicate between processes?

Any enlightenment is appreciated...


COM is primarily used in-process, it exposes an ABI that allows multiple languages to operate on a common object model within the same process. That's why you can import COM controls into a .Net application by referencing the DLL in your project and work with them as if they were native CLR objects - or Python, Ruby, Perl, etc. for that matter.


It lets you use libraries written in numerous supported languages with a richer, more object oriented interface than a C ABI. You can use the same underlying interface to call out of process or distributed objects.


COM is the Component Object Model. What you mean is OLE. In-process COM calls are basically direct DLL calls and therefore very fast.


Are these problems fixed with AOT C# (CoreRT)?


Unlikely, as those problems come from garbage collection, which is a core part of the language. C++/Rust don't have GC, so they don't have those problems.


The bit about memory leaks is probably a comment on the effectiveness of the .net GC more than anything else. COM objects are reference counted. When the .net runtime GCs a wrapped COM object it bumps the count down. My experience from years ago is that can take a long time or not happen at all. There is a manual call that releases the object, after which it is no longer usable from .net.
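A toy sketch of the refcounting side (the names are made up, not real COM): the object is freed only when the count hits zero, so a wrapper that waits for a tracing GC to call Release can keep the native object alive arbitrarily long.

```rust
// Toy model of COM-style AddRef/Release reference counting.
use std::sync::atomic::{AtomicU32, Ordering};

struct ComObject {
    refs: AtomicU32,
}

impl ComObject {
    fn add_ref(&self) -> u32 {
        // Returns the new count, like IUnknown::AddRef.
        self.refs.fetch_add(1, Ordering::SeqCst) + 1
    }
    fn release(&self) -> u32 {
        // Returns the new count; at zero a real object frees itself.
        self.refs.fetch_sub(1, Ordering::SeqCst) - 1
    }
}

fn main() {
    let obj = ComObject { refs: AtomicU32::new(1) };
    obj.add_ref();                 // second owner, e.g. a .NET wrapper
    assert_eq!(obj.release(), 1);  // first owner done; object survives
    assert_eq!(obj.release(), 0);  // last Release: only now can it be freed
    println!("freed");
}
```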


> Nothing like that remotely exists on Linux or macOS.

https://developer.apple.com/library/archive/documentation/Co...


There have been plenty of attempts to emulate what COM does.

All operating systems have some sort of object model and sharing APIs, but neither macOS nor Linux has anywhere near the thriving ecosystem and actual cooperation model between applications that Windows has.


Yea, this is so true. It would be amazing if some standard similar to COM could unify the Linux community.


GObject gets us some of the way, but the ABI and bindings for other languages make it a royal pain in the ass to export types from anything but C rather than just consuming them.

Rather unfortunate, because with GIR you could easily have a program written in a mix of Rust, C, Go, C++, Vala, etc. all playing nicely if they were all able to expose to the common object model.


The COM support in CoreFoundation (such as it is) is... unusual. I have actually used it, but from memory it doesn't give you much beyond a way to locate plugins by UUID and the IUnknown ABI layout.


GObject is somewhat similar but it's nowhere close to first-class (being a Gtk specific tech)


COM brings back a lot of fantastic old memories.

When I was a kid, I used to make VB6 applications, and I'd be able to get custom widgets from the Internet, and import them into my VB6 project seamlessly. It was truly amazing, and mind-expanding.

The JS/React/etc web ecosystem doesn't come anywhere close, in terms of the ease-of-use that VB6 had.


Trouble is, you can't really solve the problem Microsoft had without using something like COM. Good news is you don't have to care: you can use C# or Rust and not worry about COM.


Somehow I think .NET would have been much better had it started as something like .NET Native, but yeah, we cannot turn back time.


It would have been no good for web application servers, which is where the competition was going.

Native would have chased the declining native GUI market.


Sure it would; after all, that is where backend development is headed: back to AOT compilation toolchains.

If Delphi never managed much on the server side, it has more to do with the downfall of Borland than anything else.

So ASP + COM would be ASP.NET with the COM+ Runtime instead.


I just don't buy it. Delphi could have done better, but not good enough. I think on its best day it would have had a hard time competing with Java, much less Python or Ruby, and Rails was on the up as Delphi's .NET strategy was on the wane.

It needed to be an open source cross-platform compiler to be truly viable in the Linux ecosystem, and Linux would be needed to compete with Windows and its licensing fees. Anything other than C# is always a step or two behind on .NET, and .NET was the best route to web apps on Windows.

The only reason AOT is coming back for web apps is FaaS, IMO. FaaS works best with minimal boot-up latency, and FaaS delivers operationally as a way to scale out stateless compute on demand, which optimally means acquiring those compute resources just for the duration of a request or job, and no longer.


The Windows UI and frameworks beyond Win32 are solidly based on COM. Win 10 has worked for many years now with a deployment close to a billion devices. I would say: it works.


"Under the hood" makes it sound like you wouldn't deal with any COM stuff yourself, it's just how the different languages' APIs were implemented to interface with the OS runtime.


I don't think the basic architecture is going to go away. Even some of the newest operating systems have something quite similar:

https://fuchsia.googlesource.com/docs/+/ea2fce2874556205204d...


That's what they called "going native" (vs. using .NET).


Wish they'd open the source and let the Rust community do this kind of stuff instead. No need to waste Windows development time on niche stuff.


The article has a very prominent https://github.com/microsoft/winrt-rs in it, where you will find that this is MIT licensed.


The source is open, as already mentioned. But as for spending developer resources, I think having Microsoft work on things like this helps Rust and Windows alike. It helps Rust by lending a sense of official platform support to the language. And it helps Windows by encouraging Rust users to develop for the platform.


Open what source? Everyone has access to the winmd files necessary, the COM ABI is documented - there’s been nothing stopping anyone from doing this.

All of the work MS has done on these bindings is MIT licensed as well.


In fact there were third-party bindings already which I worked on: https://github.com/contextfree/winrt-rust

the new first-party ones look to be more complete and polished but this is mainly due to time and skill limitations on my part. :)


It's so open source, the developer of C++/WinRT didn't even work for Microsoft when he made it.



