Even if it only replaced Windows LTSB/Embedded, I'd still prefer that any ATM, checkout, or gas station terminal I use be running on managed code. Doubly so for the next generation of nuclear-powered submarines.
Plus, between virtualisation and siloed software management ("Apps"), it seems entirely plausible to move the PC/laptop to the next generation as well.
Singularity would be cool, but I'd prefer a smaller footprint to begin with.
(This is in contrast to Windows IoT Enterprise, which is the more traditional "Windows Embedded" experience with many of the features you're complaining about. This is mostly because Enterprise customers want to be able to slap their coded-for-a-desktop apps onto a kiosk and declare the job done.)
It's really infuriating how many versions MS is cranking out without clear differentiation between them, and even when you talk to MS reps they know nothing about the options.
Your MS reps need to be fired if they do not know about IoT Core, or chose not to tell you.
Also, I personally find it best to never rely on a rep from any company to give me the truth about their product line. I will make time to do preliminary research (at least) myself.
Microsoft will ALWAYS push IoT Enterprise over IoT Core because IoT Core costs nothing, and generates no revenue for Microsoft (other than the associated Azure revenue that they hope you spend connecting those IoT Core devices to infrastructure.)
All that said, I recommend looking into IoT Core if it interests you. It is very well suited for certain kinds of use cases.
IoT Core looks interesting but I am not sure if we are ready for App Store only. Does it even support WPF or only UWP?
It's basically .NET Core and UWP, and all the old familiar Windows stuff is gone.
IoT Core is either the best possible Windows or the worst possible Windows, depending on your point of view.
For some things, it's a perfect fit. But if you need a GUI and can't fit entirely into the .NET Core & UWP ecosystem, it won't be a good fit.
The main thing is that with WPF you can use the same Win32-type logic you've always been using. UWP is its own API entirely. It's not all that different, but it is different. Things are organized better and the APIs make more sense in UWP-land, to me.
So the GUI would probably port over relatively easily. The application logic probably wouldn't.
Presumably, at some point in the future UWP support will extend beyond WPF's support window, and right now they are both supported, so it would be silly to switch solely for the sake of making IoT Core a viable deployment platform for you.
The most recent example is Kenny Kerr complaining that although they made a big effort to migrate from C++/CX to C++/WinRT, the large majority prefers to code in .NET nowadays.
However, many of the Midori outcomes have landed in .NET, namely .NET Native, async/await, TPL, Span<T>, blittable structs, and immutable data in Roslyn, so not everything was completely lost. As a side note, the WP8 Bartok AOT compiler also shows up in Singularity.
So fortunately not everything gets lost.
As far as I am aware the only public information on Midori comes from Joe Duffy's blog which didn't discuss why it got cancelled.
Beyond the general issue of the well known managed vs unmanaged wars within MS, it would appear to be the case that Midori sucked up massive investment within MS and had little to show for it in the end - a nearly 10 year effort to build a fully managed OS, but one that focussed entirely on trying to be competitive performance-wise with Windows, rather than providing entirely new functionality and new ideas. So they spent years doing things like hand-optimising libraries to get rid of vtables and ended up with something that wasn't really .NET (it eschewed threading and .NET compatibility), and wasn't Windows, and whose primary strength was "it's written in a managed language" which isn't very interesting to actual users.
At RustConf he clearly mentioned that, in spite of having the system running in front of them, WinDev wasn't willing to believe in them.
Which, given how WinDev sabotaged Longhorn to reboot the .NET ideas into COM, with the COM runtime reborn as WinRT, I would believe him.
Many of the innovations from Midori were included in other projects. The project may have “failed” but it was far from a failure.
Considering you have absolutely no proof of that, it seems rather disrespectful to act like that's a fact rather than your supposition.
The fact that you think a cancelled project means it is the same as DEC's "No Output" team likely means you don't understand what the "No Output" team even was at DEC or the implication for the people within it.
You absolutely do need to present proof of what you're saying, because it is a serious charge for both Microsoft and those employees alike.
Async/await, TPL, Span<>, blittable structs, safe stackalloc, .NET Native, and a couple of other minor features come from M# (System C#) usage.
Rust is the answer everyone wants, even if they don't really need it.
As an aside, C# on .NET Core is getting very close to native speed. It has passed Java in many benchmarks, mostly due to reified generics and the lack of boxed primitives.
Java is trying to catch up, with active research projects for copying C#'s value types, a "Java on Java" JIT (Graal), and, most interesting to me, fiber-based green threads.
If these projects bear fruit, we may be only a few years away from Java exceeding native code speed in most applications.
Meanwhile at work, a couple of C++ servers have been replaced by managed ones across a couple of projects.
Even with the current stacks, not every business needs the ultimate performance across the whole stack. As an example, Siemens and Zeiss use .NET in their digital imaging solutions, with native code only in the hotspots.
One reason why we're seeing stuff like the [FirecrackerVM](https://firecracker-microvm.github.io) or [AzureIoTEdge](https://github.com/Azure/iotedge) is that the companies behind them know they will attract privacy-/security-minded customers they might not otherwise have.
There's other reasons as well, but this is just what stands out to me first.
Yes, I understand this includes Linux, Windows, BSD, Darwin (iOS, MacOS, watchOS, etc), The Android Runtime, and lots of critical software that runs on top of those systems.
We have the tools to write very efficient managed code. Critical security flaws involving unauthorized access to data will continue to appear until we make the transition. The cost of these security flaws is likely to become more expensive than the cost of replacing the existing software. To make matters worse, we are continuing to write more unmanaged code every day.
A lot of applications, such as video games, don't require the type of security others do, but require performance to a much higher degree.
Rust is a good middle ground, though it does still have some compile-time issues at the moment, which are getting better. But it is incredibly stingy about anything unsafe: even unsafe blocks of code are borrow-checked.
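To make that concrete, here's a minimal sketch (the snippet is mine, not from anything above): `unsafe` only unlocks a handful of extra operations, such as raw-pointer dereference, while the borrow checker keeps running inside the block.

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    v.push(4); // mutation is fine while nothing else borrows `v`

    // `unsafe` permits a few extra operations, e.g. raw-pointer dereference...
    let p = v.as_ptr();
    let first = unsafe { *p };
    println!("first element: {}", first);

    // ...but the borrow checker still runs inside the block. Uncommenting
    // the lines below fails to compile: you cannot keep the shared borrow
    // `r` alive across a mutation of `v`, unsafe block or not.
    // let r = &v[0];
    // unsafe { v.push(5); } // error[E0502]: cannot borrow `v` as mutable
    // println!("{}", r);
}
```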
Considering how slowly things run these days on any OS, and that OSes specifically have performance issues, I doubt the user experience will benefit from a completely managed OS either, but I'm no expert.
Video games often have direct or indirect access to personal and credit card data. Compounding the memory-safety issue, some popular video games encourage user-generated content to be consumed in-game. While less critical than an HTTP server, I still think it would benefit. Unity runs game logic in managed C#. OpenGL is still written in C.
I think that you massively underestimate the cost of rewriting all that code. Yes, the security flaws are expensive. Rewriting the code would be, I'd guess, at least one order of magnitude more expensive.
Expensive to whom? When was Microsoft or RedHat or Oracle or the FreeBSD foundation last fined or sued because of a security flaw - an unmanaged memory one, specifically?
When was a random company or developer fined or sued for same?
Your question is: expensive to whom? The answer is that the expenses are spread across the entire industry. Every time a software author needs to write a patch, the bigger cumulative expense is to all of their customers who need to apply the patch.
What would you write, say, v8 in? Rust doesn't have any provisions against type confusion, JIT bugs, etc, and v8 seems way too complex for a formal verification.
The closest thing you find to unmanaged code is in parts of the garbage collector, where "SystemJava" is used, a dialect that gives access to pointer arithmetic. But everything else, including the JIT compiler itself, is fully managed.
I think a pivot like this is an incredibly complicated thing for a company like Microsoft to perform. It requires multiple years of planning from the ground up to port internal libraries and tooling and to train engineers on Rust best practices.
Not to mention the tricky business of going around and hiring Rust developers at the scale MS needs. :)
I think, simply waking up one day and changing the language / frameworks / tools used by your company is a privilege only smaller companies and startups might enjoy.
 shameless plug: https://github.com/duneroadrunner/SaferCPlusPlus
So it is always going to be something else, not UNIX related.
Fuchsia uses Rust and Go in several areas.
Microsoft is using Rust in VSCode, Azure and IoT Core.
Well there's certainly a chronological argument that it couldn't have, but I agree with the sentiment. It saddens me that our industry is one in which people _do_ let perfect be the enemy of better.
Just notice that on current desktop and mobile OS SDKs, Windows is the only one where C++ still gets a place at the GUI table. Everywhere else it just got pushed down the stack.
So while some divisions would like a world of purely .NET (AOT/JIT compiled), others would rather that it did not exist.
Furthermore, as Microsoft has patched most of the basic memory safety bugs, attackers and bug hunters have also stepped up their game, moving from basic memory errors that spew code into adjacent memory to more complex exploits that run code at desired memory addresses, ideal for targeting others apps and processes running on the system.
Is all we can hope for in the security game a series of mitigations? As soon as we add layers of security (Stack cookies, ASLR, memory "safe" languages with exploitable runtimes, etc.) it seems like new methods are invented to bypass them and those methods gradually become widespread (ROP chains, runtime fuzzing, rowhammer, speculative execution exploits). What is the next "70 percent" of security bugs? Is there an end to this race?
I think the only thing in the future that can ever be as secure as in-person conversation and paper records is devices using zero-intermediary communication, in the style of an ansible https://en.wikipedia.org/wiki/Ansible, and extremely restricted storage which can only store data and not arbitrary application state.
It's sitting at ~25 to 1, proof code to implementation code. I think you could probably get that down to 5 to 1 or so by treating a lot of the work they did as a library. The proof covers a full equivalence from abstract spec to machine code, and you could reuse a lot of that. Sort of how it's not fair to include the LOC of your compiler even though you need that for your program ultimately.
Secondly, Microsoft's own SecureZeroMemory is insecure against those attacks. It only applies a trivial compiler barrier, but no memory barrier, so you cannot use it to securely erase sensitive memory.
And getting down to a 5:1 proof-to-implementation ratio is within the realm of the unit tests you should be writing anyway, so it's not that much of an economic investment.
All software proofs prove correctness of specific theorems about a program given certain axioms. Those axioms must include, at the very least, the conformance of the hardware to some specification. But hardware can, at best, conform to the specification with some probability. For example, when you write `mov rax, [rdx]` it is not true that the computer will, with 100% certainty, move the contents of the address pointed to by rdx into rax, only that it will do so with some high probability. Therefore, any proof about a software system (as opposed to an algorithm, which is an abstract mathematical entity) is a proof of a theorem of the form, "as long as the system behaves according to my assumptions, it will exhibit behaviors consistent with my conclusions." And that is even assuming that your conclusions cover all possible attacks. As it is generally impossible to formalize the concept of an arbitrary attack, you generally prove something far weaker.
But none of this is even the biggest limitation of formal verification. The biggest would be its cost and lack of scalability. seL4 is about 1/5 the size of jQuery, and it is probably the largest piece of software ever verified end-to-end using deductive formal proofs as a verification technique. However much the effort of a similar technique can be reduced, it doesn't scale linearly with the size of the program (it may if the program does the same things, but usually larger programs are much more complex and have many more features).
Sure, you can write an arbitrary application where that's true, but AFAIK it doesn't have to be. IMO, seL4's really cool part isn't just that it was formally verified, but its total design specifically for verifiability.
The argument reminds me of all the people in the late 90s talking about how unit tests would never take off because they tried to introduce it in their legacy spaghetti codebase, and it wasn't amenable to testing.
Watch Xavier Leroy's talks.
> IMO, seL4's really cool part isn't just that it was formally verified, but its total design specifically for verifiability.
Well, they were very selective with their algorithm choices, so that only very simple algorithms were used. Whether that's cool or an unaffordable restriction depends on your perspective.
> The argument reminds me of all the people in the late 90s talking about how unit tests would never take off because they tried to introduce it in their legacy spaghetti codebase, and it wasn't amenable to testing.
Except that I've been both practicing and evangelizing formal methods for some years now. I argue in favor of using them, not against, but as someone with some experience in that field I want to make sure people's expectations are reasonable. Unreasonable expectations all but killed formal methods once before.
No. seL4 proved functional correctness. It eliminates all attacks, not just particular kinds.
Functional correctness means the implementation matches the specification. As a corollary, seL4 has no buffer overflows. Proof: Assume seL4 has a buffer overflow. Exploit it to run arbitrary code. Arbitrary code execution is not part of the specification, hence the implementation does not match the specification, a contradiction. QED.
The above proof applies to use-after-free, or any other exploit enabling arbitrary code execution, including methods which are not discovered yet.
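Put in refinement terms (this restatement and the notation are mine, not the seL4 team's):

```latex
% \mathcal{I}: behaviours of the implementation (the C/binary code)
% \mathcal{S}: behaviours allowed by the specification
% \sqsubseteq: refinement, "every implementation behaviour is allowed by the spec"
\begin{align*}
\text{(1) Proved:} \quad & \mathcal{I} \sqsubseteq \mathcal{S} \\
\text{(2) Assume:} \quad & \exists\, b \in \mathcal{I} \text{ exhibiting arbitrary code execution} \\
\text{(3) By the spec:} \quad & b \notin \mathcal{S} \\
\text{(4) Hence:} \quad & \mathcal{I} \not\sqsubseteq \mathcal{S}, \text{ contradicting (1)}
\end{align*}
```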
According to them, they, sensibly, are indeed assuming some things to be correct without proof: http://sel4.systems/Info/FAQ/proof.pml
This is effectively the same idea behind Rust's `unsafe`. It represents the things you assume to be true. Of course, when compared to seL4, the scales are massively different. :-)
Hardware: we assume the hardware works correctly. In practice, this means the hardware is assumed not to be tampered with, and working according to specification. It also means, it must be run within its operating conditions.
Information side-channels: this assumption applies to the confidentiality proof only and is not present for functional correctness or integrity. The assumption is that the binary-level model of the hardware captures all relevant information channels. We know this not to be the case. This is not a problem for the validity of the confidentiality proof, but means that its conclusion (that secrets do not leak) holds only for the channels visible in the model. This is a standard situation in information flow proofs: they can never be absolute. As mentioned above, in practice the proof covers all in-kernel storage channels but does not cover timing channels
Rowhammer and Spectre exploit hardware not working to spec.
These sorts of attacks using heretofore "unreasonable" exploitation of side-channel and hardware vulnerabilities are exactly what worries me. We have no solutions for these classes of attacks. Complex high-speed systems are still inherently unsolvably vulnerable, it seems.
Even with regard to what you do prove, the conclusions depend on certain assumptions about the hardware which, even if true (and they often aren't -- read some of the seL4 caveats) at best hold with high probability, not certainty.
I am an avid fan of formal methods, and have been using them quite extensively in the past few years, but you should understand what they can and cannot do. You can read about formal methods on my blog: https://pron.github.io/ The paper discussed here also mentions some common misunderstandings, about seL4 in particular: http://verse.systems/blog/post/2018-10-02-Proofs-And-Side-Ef...
Nevertheless, there is no doubt that a kernel such as seL4 has a much lower probability of being hacked than, say, Linux, given similar hacking efforts, and I would trust it much more. But it really isn't an endgame.
If people built with things that weren't made of holes (ie. using memory safe languages), then there wouldn't need to be much effort in the "find and patch" side of things.
I'll agree with the sentiment that memory safe languages are less prone to security holes because by nature they entirely remove a whole class of issues related to managing memory...
but I can't help thinking that finding and patching the remaining holes, the ones that aren't related to memory management, would still break the economics.
Perhaps the one thing I could imagine that might make a difference is regulations coming into force that mandated pentests / fuzzing for companies producing software doing a certain amount in revenue. It's been my experience that that stuff gets done, but only once in a blue moon. And stuff is always turned up. I've yet to work anywhere where it is part of a regular process.
Even if it were I'm always blown away by security researchers who spend months and months attacking something and finally crack it then give a presentation about how they did it. There can't even be that many people on Earth with the level of skillset required.
I dunno... the economics of it seem to play out like "let's just deal with the fallout of an incident when it happens because that's cheaper than investing in security"
The asymmetric nature of finding holes vs not creating them combined with the asymmetric nature of the business value of investing in security is just a mildly awful combination.
If there is, it'll be a combination of two things.
One is you make things simple enough that finding the bugs is more realistic. WireGuard, not OpenSSL.
The other is that you make things modular, and once you have one of the modules right, you mostly leave it alone. Many people still use djbdns and qmail for a reason, and it's not the pace of new feature support.
These are trade-offs. They do have costs. But this approach has proven to be more effective.
Arms races don't really ever end, except for truces/treaties or total victory.
Is Rust guaranteed to be 100% memory-safe? No, there could always be bugs in the compiler and/or unsafe code. (Aside: it annoys me how people consider the presence of an unsafe keyword inherently less trustworthy than a non-verified compiler/runtime system, when there is really little difference between them.) Is it a huge improvement? Yes.
No. They are not. You obviously have not done your homework on the subject.
It's a nice language, and safer than C/C++, but for sure not a safe one, even if thousands of newbies will repeat it constantly.
Now a Windows/browser zero-day requires months of research and can be sold for $100k+.
Page faults are fine. Null pointers are fine; just don't dereference them. Race condition is a much more general term that can cover non-memory races too; perhaps they meant data races.
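A hedged sketch (mine, not from the article, with a toy "balance" example) of why the distinction matters: Rust's type system rules out the data race here, but a plain race condition sails straight through.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // No data race: every access to the shared value goes through the Mutex.
    let balance = Arc::new(Mutex::new(100));

    let handles: Vec<_> = (0..2)
        .map(|_| {
            let balance = Arc::clone(&balance);
            thread::spawn(move || {
                // Race condition anyway: the check and the update are two
                // separate critical sections, so both threads can pass the
                // check before either one withdraws.
                let enough = *balance.lock().unwrap() >= 100;
                if enough {
                    *balance.lock().unwrap() -= 100;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // May print -100 even though no data race ever occurred.
    println!("final balance: {}", *balance.lock().unwrap());
}
```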
Fixed the title. This is not an analysis about general software errors.
The benefits and costs of writing a POSIX kernel in a high-level language
Car accidents, like mistakes in programming, are a risk with a non-zero likelihood. A seatbelt might be a little bit annoying when things go well, but much less so when they don't. Rust is there to stop you in most cases when you try to accidentally shoot yourself in the leg, unless you deliberately go ahead without knowing what you are doing while yelling "hold my beer" (unsafe). And contrary to popular belief, even in unsafe blocks many of Rust's safety guarantees hold, just not all of them.
If the net benefit of an ownership concept like Rust's is high enough, this should be an easy choice for rational actors to make. The odds are on Rust's side here, because humans make mistakes, and if Rust manages to allow productivity while stopping certain classes of these mistakes, there will be a net benefit.
Just like with the seatbelt, there will always be those who don't wear one for their very subjective reasons (e.g. because of edge cases where a seatbelt could trap you in a burning car, or because it is not cool, or because they hate the feeling and think accidents only happen to people who can't drive).
I've been writing Rust for a year now, and I constantly see bugs that far better programmers made in, say, C++, where Rust just wouldn't allow you to leave the failure unhandled. E.g. crashing your software by writing to a file that is locked by another process. In Rust this would mean the deliberate act of ignoring a Result by unwrapping it and moving on. You can ignore it, and Rust will crash, but you know exactly where you took that risk.
Things like these remove a lot of cognitive load from the programmer, which can be put into more pressing topics.
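A rough sketch of what I mean (the file name and error handling here are made up for illustration); the point is just that the failure path is spelled out in the types:

```rust
use std::fs::File;
use std::io::Write;

fn main() {
    // Opening for write can fail (e.g. the file is locked by another
    // process on Windows). The compiler warns if you silently drop the
    // Result, so you either handle it...
    match File::create("report.csv") {
        Ok(mut f) => {
            if let Err(e) = writeln!(f, "hello") {
                eprintln!("write failed: {}", e);
            }
        }
        Err(e) => eprintln!("could not open file: {}", e),
    }

    // ...or you opt into the crash explicitly. If this panics, the
    // `.unwrap()` marks exactly where you accepted that risk.
    let mut f = File::create("report.csv").unwrap();
    writeln!(f, "hello again").unwrap();
}
```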
I like Rust; I’ve done some coding challenges in it. It won’t replace C++ for me for quite some time, but it’s about time there was a safe, powerful language that’s truly a worthy contender.
So what if Rust doesn't prevent all conceivable memory safety issues anyone could possibly write? It's a huge improvement.
They are the only OS vendor (in the consumer space) still offering full stack support for C++, with internal pushes to keep it going.
Sort by Vendor:
In my own research, I have attempted to send Microsoft security bugs only to be told they would be backlogged and reviewed later (which never happened to my knowledge).
Shouldn't the number of bugs then decrease much faster, since they are easier to find? Unless they are being introduced at an even greater rate than the ones in Windows.
* Google Chrome: 36
* Artifex Ghostscript: 1
* ZeroMQ: 1
* macOS CUPS: 2
* Debian: 0
"But, your editor wondered, could we be doing more than we are? The response your editor got was, in essence, that the bulk of the holes being disclosed were ancient vulnerabilities which were being discovered by new static analysis tools. In other words, we are fixing security problems faster than we are creating them. "
Given it's Microsoft, they are probably thinking about .NET.
... you know that this is about what 9/10ths of an operating system's device driver code looks like, right?
Are you suggesting the post I linked above from someone claiming to be an engineer on Office is inaccurate? It may be well be, but it sounds more plausible and supportable than whatever question it is you seem to be posing.
If you have evidence to suggest that Office365 is written in C# these days I'd like to see it please, because I have read on more than one occasion over the last 10 years that MS had attempted this and given up.
Quoting your favourite engineer "The desktop app’s are fully native".
There is also "Office 365 Desktop" which is what I think you'll find most large corporations that have migrated from Office 2016 Desktop apps are using.
Office 365 Mobile/Online are much cut down, more comparable to Google Docs from the sounds of it, and hardly a 1:1 replacement for the Desktop apps. Which are still C/C++.
In addition, if you have any hard evidence that the majority of the Office Mobile/Online web apps are written in C# I would like to see that, because I can't seem to find any.
You still have an explicit outstanding assertion that they are written in C#, though, once we figured out which of the three versions you were actually talking about (i.e. the versions that almost nobody uses), so let's see it.
> How do you think Office 365 works?
> I was referrring to Office 365 on Azure
Everyone can see it, it's there in black and white. Having received your clarification that you meant the "mobile or online" edition, I asked you to provide evidence that it's written in C#, which is apparently your claim, and which you still haven't done. I haven't asserted that it's still written in C or C++; it could be assembled with spit and paper clips for all I know.
So enlighten us, as per your claim, if you can. I'm genuinely interested.
When your worst case latency/perf requirements don't allow you to have a GC, Rust is another great choice.
Async/await is still far away from being figured out, let alone implemented; CSP primitives in the standard library like std::mpsc are basically DOA/obsolete/half-functioning, and tokio/futures are still early alpha.
There's a lot more to concurrency than just compile-time verification of safe usage of a mutex or moving data in/out of a thread. A lot more.
Async/await (and production quality gRPC) is currently what's keeping us from using Rust at dayjob.
> There's a lot more to concurrency than just compile-time verification of safe usage of a mutex or moving data in/out of a thread. A lot more.
Sure, but it's better to have at least that.
I would also emphasise the need for type safety and contracts.
So it's code written in C# or VB or another supported language, compiled to a common bytecode that is then JIT-compiled at runtime on a virtual machine, plus probably a whole slew of libraries that target that VM too.
Okay. I have no idea why I'm struggling so much with that.
Two years ago, .NET in an everyday connotation would imply C# (VB.NET is mainly only used in legacy corporate environments, although there was nothing stopping people from implementing greenfield VB projects until last year or so) and an installation of the .NET Framework (equivalent to Java runtime environment) on a Windows PC.
Today, it is increasingly referring to .NET Core, which is still likely C# (with a smattering of F# from enthusiasts) but without a runtime necessarily pre-installed on the PC, now available as a first-class citizen on (and developed in concert with) Windows 7+, macOS, Linux, and soon FreeBSD. The runtime has been broken down from the monolithic framework into hundreds of individual libraries, compiled to platform- and architecture-independent DLLs and distributed via the NuGet package manager.
Back when .NET was first getting started there were a lot more languages (Microsoft paid language developers to port Java (the language) and the community provided ports of Python, Ruby, and others; while new languages specifically designed for .NET also came and went; I have somewhat fond memories of learning Boo, but I don't think it saw any updates this side of .NET Core 1.0), today it's mostly just C# and F#.
More interestingly, there were some serious attempts at introducing native MSIL (Microsoft Intermediate Language, the assembly-level language executed by the runtime environment, now known as CIL or Common Intermediate Language) in the form of microcontrollers that natively executed MSIL instructions, but besides a few extremely niche (and very expensive; I remember being disheartened at the time) implementations, that ended up going nowhere, and I'm not aware of any modern efforts at revisiting it, although I wouldn't be surprised given the resurgence of .NET in recent years.
Today, .NET Core is being pushed for all desktop, mobile, and web development; and is supported on major consumer platforms. The entirety of .NET Core development is out in the open (and there are now official public committees for furthering its development and making decisions affecting its future) and the code is actually available on GitHub.
I jumped on board the .NET train when I found a letter to the only tech-literate teacher at my high school, who had thrown it away; it offered to send a free sample pack of CDs and basic literature in advance of the release to interested schools back in 2001 or so (when J# was still a thing, C# had just been introduced, .NET was still at the "we don't speak of it" 1.0 mile marker before the hard fork to redo the CLR with support for generics and revisit some poor decisions in the initial release, and the "new" ASP.NET offering still used WYSIWYG to design the layout!), and I managed to get them to send me a copy. It feels like forever ago! C# stagnated for some years, but Microsoft became serious about it once again after the Windows Vista release (and after they failed to port the Windows userland to .NET with the Longhorn project, due to serious performance constraints, in particular pertaining to GC), and it then saw some great updates that made it an incredibly well-designed and efficient language (without even taking the standard library into account).
I swear, they would've named Bill Gates' grandkids .Net Gates if they could have gotten away with it.