Microsoft: 70 percent of all security bugs are memory safety issues (zdnet.com)
153 points by clouddrover 5 days ago | 169 comments





Just a painful reminder that they scrapped Midori[0], their managed operating system based on Microsoft Research's Singularity project[1].

Even if it had only replaced Windows LTSB/Embedded, I'd still prefer that any ATM, checkout, or gas station terminal I used ran on managed code. Doubly so for the next generation of nuclear-powered submarines[3].

Plus, between virtualisation and siloed software management ("Apps"), it seems entirely plausible to move the PC/laptop to the next generation.

[0] https://en.wikipedia.org/wiki/Midori_(operating_system)

[1] https://en.wikipedia.org/wiki/Singularity_(operating_system)

[3] https://mspoweruser.com/uks-nuclear-submarines-runs-windows-...


I'd strongly prefer that my ATM had no start menu, file picker, online help system, interactive parts of update system, or any of the thousands of other UI apps which could be exploited via touchscreen if accidentally activated.

Singularity would be cool, but I'd prefer a smaller footprint to begin with.


Microsoft already has what you're talking about; it's called Windows IoT Core.

(This is in contrast to Windows IoT Enterprise, which is the more traditional "Windows Embedded" experience with many of the features you're complaining about. This is mostly because Enterprise customers want to be able to slap their coded-for-a-desktop apps onto a kiosk and declare the job done.)


I had never heard of IoT Core before, only IoT Enterprise and LTSC. It would actually have been an interesting choice for the project I am working on right now.

It's really infuriating how many versions MS is cranking out without clear differentiation between them; even when you talk to MS reps, they know nothing about the options.


IoT core is not new, not a secret, and Microsoft makes no effort to hide it.

Your MS reps need to be fired if they do not know about IoT Core, or chose not to tell you.

Also, I personally find it best to never rely on a rep from any company to give me the truth about their product line. I will make time to do preliminary research (at least) myself.

Microsoft will ALWAYS push IoT Enterprise over IoT Core because IoT Core costs nothing, and generates no revenue for Microsoft (other than the associated Azure revenue that they hope you spend connecting those IoT Core devices to infrastructure.)

All that said, I recommend looking into IoT Core if it interests you. It is very well suited for certain kinds of use cases.


Our reps wanted to push us into enterprise. We went with LTSC against their advice because the upgrade cycle is much better.

IoT Core looks interesting but I am not sure if we are ready for App Store only. Does it even support WPF or only UWP?


There is NO graphical Win32 stuff; it's UWP only. There are some non-graphical Win32 APIs, but nothing like full Windows.

It's basically .NET Core and UWP, and all the old familiar Windows stuff is gone.

IoT Core is either the best possible Windows or the worst possible Windows, depending on your point of view.

For some things, it's a perfect fit. But if you need a GUI and can't fit entirely into the .NET Core and UWP ecosystem, it won't be a good fit.


UWP is a no-go already. No way we can port our WPF code.

Well, UWP and WPF are close cousins, so it probably isn't as bad as one would think. They're both XAML-based, though they are not quite the same.

The main thing is that with WPF you can keep using the same Win32-type logic you've always used. UWP is its own API entirely. It's not all that different, but it is different; things are organized better and the APIs make more sense in UWP-land, to me.

So the GUI would probably port over relatively easily. The application logic probably wouldn't.


The GUI would be the easy part but our applications have a lot of background logic. Going to UWP would be almost a complete rewrite without actually improving anything. From what I have seen so far UWP is just different but not really better than WPF.

Can't argue with any of that.

Presumably, at some point in the future UWP support will extend beyond WPF's support window, and right now they are both supported, so it would be silly to switch solely for the sake of making IoT Core a viable deployment platform for you.


Midori was a victim of the WinDev vs DevDiv political wars.

The most recent example is Kenny Kerr complaining that, although they made a big effort to migrate from C++/CX to C++/WinRT, the large majority prefers to code in .NET nowadays.

https://kennykerr.ca/2019/01/25/the-state-of-cpp-on-windows/

However, many of the Midori outcomes have landed in .NET, namely .NET Native, async/await, the TPL, Span, blittable structs, and immutable data in Roslyn, so not everything was completely lost. As a side note, the WP8 Bartok AOT compiler also came out of Singularity.



Was it? How do you know that?

As far as I am aware the only public information on Midori comes from Joe Duffy's blog which didn't discuss why it got cancelled.

Beyond the general issue of the well-known managed vs unmanaged wars within MS, it appears that Midori sucked up massive investment and had little to show for it in the end: a nearly 10-year effort to build a fully managed OS, but one that focused entirely on being competitive performance-wise with Windows rather than providing entirely new functionality and new ideas. So they spent years doing things like hand-optimising libraries to get rid of vtables, and ended up with something that wasn't really .NET (it eschewed .NET's threading model and compatibility), wasn't Windows, and whose primary strength, "it's written in a managed language", isn't very interesting to actual users.


Joe Duffy has made a few mentions of it in his keynotes and some tweets.

At RustConf he clearly mentioned that, in spite of having the system running in front of them, WinDev wasn't willing to believe in them.

Given how WinDev sabotaged Longhorn, rebooting .NET's ideas into COM and getting the COM runtime reborn as WinRT, I would believe him.


Midori, as cool as it was, was Microsoft's "no output division" (see https://archive.computerhistory.org/resources/text/DEC/dec.b... for context). It existed so that very senior engineers wouldn't go elsewhere and cause trouble for Microsoft. It had fulfilled its purpose in that regard. I strongly suspect that some of Google's more ambitious endeavors serve the same purpose.

Based on knowing and being firsthand friends with several Midori devs, I’m going to assert that this is simply not true. The majority of them were just reassigned when the project was cancelled.

Many of the innovations from Midori were included in other projects. The project may have “failed” but it was far from a failure.


> Midori, as cool as it was, was Microsoft's "no output division"

Considering you have absolutely no proof of that it seems rather disrespectful to act like that's a fact rather than your supposition.


It always cracks me up when people demand "proof" of something like this. I mean, I know some of the people from there from way back, and the project was eventually canceled, so there literally was "no output". And they had pretty much complete carte blanche on everything and could bikeshed over the most inane and inconsequential things for months on end. It was a fun "job" while it lasted, though.

Now I'm starting to wonder if you even read the "No Output" letter and understood its implication. You essentially called a bunch of Microsoft employees productivity drains on the rest of the company.

The fact that you think a cancelled project is the same as DEC's "No Output" team likely means you don't understand what the "No Output" team even was at DEC, or the implication for the people within it.

You absolutely do need to present proof of what you're saying, because it is a serious charge for both Microsoft and those employees alike.


Surely there was output.

Async/await, the TPL, Span<T>, blittable structs, safe stackalloc, .NET Native, and a couple of other minor features come from M# (System C#).


I don't think this is actually true, since they did run some production services at MS, like the speech service that did voice recognition for Windows Phone.

I hadn't seen that memo before. Turns out there's some previous discussion re: "No Output Division": https://news.ycombinator.com/item?id=10587124

Were they placed there because of negative productivity (as implied from the DEC memo), or so they wouldn't become someone else's competitive advantage? (even at the cost of not letting their unvested shares return to the pool when they left)

I hope Rust comes of age in a few years. Java, Go, and C# are above 50% "native" speed in every benchmark I've run across, but the mantra continues to be "why would I pick something slower?".

Rust is the answer everyone wants, even if they don't really need it.

An aside: C# on .NET Core is getting very close to native speed. It has passed Java in many benchmarks, mostly due to type reification and the lack of boxed primitives.

Java is trying to catch up, with active research projects for copying C#'s value types, a "Java on Java" JIT (Graal), and, most interesting to me, fiber-based green threads.

If these projects bear fruit, we may be only a few years away from Java exceeding native-code speed in most applications.


I have hardly used C++ since 2006, only when I am forced to step outside Java/.NET or happen to do some hobby projects in C++.

Meanwhile at work, a couple of C++ servers have been replaced by managed ones across a couple of projects.

Even with the current stacks, not every business needs ultimate performance across the whole stack; as an example, Siemens and Zeiss use .NET in their digital imaging solutions, with native code only in the hotspots.


Unity's work on HPC# (High Performance C#) seems like a good middle ground: use C# where performance is not critical but you want safety, and HPC# where you absolutely need performance.

It's a pity Modula-2 did not gain wider adoption 30 years ago: it had almost all the memory safety of managed languages and none of the performance penalties. Array bounds were checked by the compiler, and the unsafe operations were sequestered into a separate module which could easily be isolated, audited, and/or banned as appropriate; the only unsafe operation available outside it was DEALLOCATE. Life could have been so much better.

Luckily now we have Rust! :)

30 years of pain, though? Plus however many more years it will take to phase out existing C-based code, so another 30 at least.

It depends how it gets prioritized by the companies that own and contribute to these codebases - both open and private source. Right now there's not much incentive financially to migrate away from any of the existing code, so more often than not it will not happen.

One reason we're seeing projects like Firecracker (https://firecracker-microvm.github.io) or Azure IoT Edge (https://github.com/Azure/iotedge) is that the companies behind them know they will attract privacy- and security-minded customers they might not otherwise have.

There's other reasons as well, but this is just what stands out to me first.


Look to the future and try to build it on the lessons of the past. Don't worry about the missed opportunities; if anyone had known in advance, C and C++ would not have been used the way they were.

I think Ada was type-safe too, agreed re: 30 misspent years, and here's hoping it doesn't take 30 more to get where we need to be.

The more you read about history, the more you feel that 30 years is nothing but the time required for reactions to occur.

As an industry we should deprecate all unmanaged code. We've proven time and time again that even very professional and highly scrutinized unmanaged code can have critical data safety faults.

Yes, I understand this includes Linux, Windows, BSD, Darwin (iOS, MacOS, watchOS, etc), The Android Runtime, and lots of critical software that runs on top of those systems.

We have the tools to write very efficient managed code. Critical security flaws involving unauthorized access to data will continue to appear until we make the transition. The cost of these security flaws is likely to become more expensive than the cost of replacing the existing software. To make matters worse, we are continuing to write more unmanaged code every day.


I would much rather switch to a compiler-restricted memory safe unmanaged alternative such as Rust, before I throw out performance for safety completely.

A lot of applications, such as video games, don't require the type of security others do, but require performance to a much higher degree.

Rust is a good middle ground. It still has some compile-time pain points at the moment, which are getting better, but it is incredibly stingy about anything unsafe; even unsafe blocks of code are borrow-checked.

Considering how slowly things run these days on any OS, and OS's specifically having performance issues, I doubt the user experience will benefit from a completely managed OS either, but I'm no expert.


I'd agree that Rust would be a great alternative to C/C++ in performance critical applications.

Video games often have direct or indirect access to personal and credit card data. Compounding the memory-safety issue, some popular video games encourage user-generated content to be consumed in-game. While less critical than an HTTP server, I still think it would benefit. Unity runs game logic in managed C#; OpenGL is still written in C.


> The cost of these security flaws is likely to become more expensive than the cost of replacing the existing software.

I think that you massively under-estimate the cost of rewriting all that code. Yes, the security flaws are expensive. Rewriting the code would be, I'd guess, at least one order of magnitude more expensive.


If we start getting more recalls and lawsuits due to CVEs, things will change.

> The cost of these security flaws is likely to become more expensive than the cost of replacing the existing software.

Expensive to whom? When was Microsoft or RedHat or Oracle or the FreeBSD foundation last fined or sued because of a security flaw - an unmanaged memory one, specifically?

When was a random company or developer fined or sued for same?


The bandwidth costs GlobalSign incurred revoking SSL certificates in response to Heartbleed were estimated at $400K in a Cloudflare blog post [0], and eWeek estimated the total cost of Heartbleed at $500M [1], although admittedly that price tag is an educated guess.

Your question is: expensive to whom? The answer is that the expenses are spread across the entire industry. Every time a software author needs to write a patch, the bigger cumulative expense falls on all of the customers who need to apply it.

[0] https://blog.cloudflare.com/the-hard-costs-of-heartbleed/

[1] https://www.eweek.com/security/heartbleed-ssl-flaw-s-true-co...


Shutting down servers or stealing credit card info certainly has a cost

Agreed on principle but not sure it can be bootstrapped.

What would you write, say, v8 in? Rust doesn't have any provisions against type confusion, JIT bugs, etc, and v8 seems way too complex for a formal verification.


Does C/C++ have any provisions against type punning or JIT bugs? The fact that you have to use unsafe code for a lot of things in a JIT doesn't change the basic reality that Rust is safe by default, whereas C/C++ cannot practically be made memory-safe except for small programs like seL4.

That's extremely easy to answer.

The answer is you'd write an interpreter for JavaScript in Java, use a partial evaluation framework also written in Java to convert it to a JIT compiler, and then run it on a multi-purpose optimising polyglot virtual machine that's also written in Java. Such a thing would be fully bootstrapped and fully managed.

And usefully, it exists already. That's what happens if you run JavaScript on SubstrateVM using TruffleJS and Graal:

https://www.graalvm.org/

The closest thing you find to unmanaged code is parts of the garbage collector, where "SystemJava" is used, a dialect that gives access to pointer arithmetic. But everything else including the JIT compiler itself is fully managed.


If I understand what you are describing this is how PyPy works.

Rust couldn't have solved all of these bugs, but it certainly would have drastically reduced the total. The code would also be drastically easier to write and debug. C++17 is still a total mess to write, despite what Microsoft might tell you in their docs. Outside of the sunk cost fallacy, it really doesn't make sense to me why Microsoft isn't pivoting to Rust where they would otherwise write C++.

Microsoft probably has tons of internal libraries that are modified and shared between many different teams spanning multiple orgs.

I think a pivot like this is an incredibly complicated thing for a company like Microsoft to perform. It requires multiple years of planning from the ground up to port internal libraries and tooling and to train engineers on Rust best practices.

Not to mention the tricky business of going around and hiring Rust developers at the scale MS needs. :)

I think, simply waking up one day and changing the language / frameworks / tools used by your company is a privilege only smaller companies and startups might enjoy.


Replacing existing unsafe C++ elements with compatible memory safe substitutes[1] might be more expedient. The conversion can even be automated[2] for parts of the code that aren't performance critical.

[1] shameless plug: https://github.com/duneroadrunner/SaferCPlusPlus

[2] https://github.com/duneroadrunner/SaferCPlusPlus-AutoTransla...


I'm not a fan of these types of efforts. You could be putting energy into making Rust better. Memory management aside, C++ has become an incomprehensible mess filled with cruft. Rust does much more than provide safety, it provides better abstractions, a better type system, and actual tooling. Building tools for C++ to keep the ship afloat just means we have to deal with writing C++ longer.

I would imagine that we would at least hear about Microsoft doing some new projects in Rust, or at least using it in some capacity to understand the cost/benefits better. I'm sure the first step would be increasing C++ interop.

> I would imagine that we would at least hear about Microsoft doing some new projects in Rust

We did.


Microsoft is using Rust for greenfield projects: https://github.com/Azure/iotedge

You could ask the same question for every OS, and none chose Rust, even recent ones like Fuchsia. BTW, go ask Linus to switch to Rust; I'll wait.

Fuchsia is using Rust for some pieces. That's what I assume MS would do. MS is a company with tons of products, not just an OS, and certainly not a single monolith. You wouldn't rewrite everything in a day, you would slowly move things over time.

Linus is never going to switch to anything else other than C, UNIX is married to C.

So it is always going to be something else, not UNIX related.

Fuchsia uses Rust and Go in several areas.

Microsoft is using Rust on VSCode, Azure and IoT Core.


Fuchsia implements TCP/IP stack in Go. It is an improvement over others.

> Rust couldn't solve all of these bugs, but it certainly would have drastically reduced the total amount.

Well there's certainly a chronological argument that it couldn't have, but I agree with the sentiment. It saddens me that our industry is one in which people _do_ let perfect be the enemy of better.


Microsoft is a big C++ shop; we would all be running AOT-compiled .NET code after Windows Longhorn if it weren't for the political wars between DevDiv and WinDev.

Just notice that among current desktop and mobile OS SDKs, Windows is the only one where C++ still gets a place at the GUI table. Everywhere else it has been pushed down the stack.

So while some divisions would like a world of purely .NET (AOT/JIT compiled), others would rather that it did not exist.


To quote the article:

```

Furthermore, as Microsoft has patched most of the basic memory safety bugs, attackers and bug hunters have also stepped up their game, moving from basic memory errors that spew code into adjacent memory to more complex exploits that run code at desired memory addresses, ideal for targeting others apps and processes running on the system.

```

Is all we can hope for in the security game a series of mitigations? As soon as we add layers of security (Stack cookies, ASLR, memory "safe" languages with exploitable runtimes, etc.) it seems like new methods are invented to bypass them and those methods gradually become widespread (ROP chains, runtime fuzzing, rowhammer, speculative execution exploits). What is the next "70 percent" of security bugs? Is there an end to this race?

I think the only thing in the future that can ever be as secure as in-person conversation and paper records is devices using zero-intermediary communication, in the style of an ansible (https://en.wikipedia.org/wiki/Ansible), and extremely restricted storage which can only store data and not arbitrary application state.


seL4 shows that an endgame is possible with respect to memory safety: it's formally verified to be memory safe.

It's sitting at roughly 25-to-1 proof code to implementation code. I think you could probably get that down to 5-to-1 or so by treating a lot of the work they did as a library. The proof covers full equivalence from abstract spec to machine code, and you could reuse a lot of that. Sort of like how it's not fair to count the LOC of your compiler, even though your program ultimately needs it.


Nope: seL4 is unsafe against modern Spectre/Meltdown attacks. Only Fiasco was recently hardened against these.

Secondly, Microsoft's own SecureZeroMemory is insecure against those attacks. It only applies a trivial compiler barrier, not a memory barrier, so you cannot use it to securely erase sensitive memory.


That's not an endgame if the resources required to block an attack are significantly greater than those required to make one. Moreover, verification is done with respect to specific properties that ensure no attacks of a particular kind. The more kinds of attacks you need to defend yourself from, the harder you need to work (and you will miss some). Not saying we're not making steps in the right direction, but no one thinks it's an endgame.

It's an endgame with respect to memory safety (like I said): seL4 is memory safe. All the other mitigations (ASLR, canaries, etc.) are just hacks around the fact that proving memory safety of existing codebases is a Sisyphean task.

And getting down to 5-to-1 proof to implementation is within the realm of the unit tests you should be writing anyway, so it's not that much of an economic investment.


I am not at all against formal verification. In fact, I evangelize it and use it myself quite a bit. But it is important to understand that it is very, very far from being a miracle cure.

All software proofs prove correctness of specific theorems about a program given certain axioms. Those axioms must include, at the very least, the conformance of the hardware to some specification. But hardware can, at best, conform to the specification with some probability. For example, when you write `mov rax, [rdx]` it is not true that the computer will, with 100% certainty, move the contents of the address pointed to by rdx into rax, only that it will do so with some high probability. Therefore, any proof about a software system (as opposed to an algorithm, which is an abstract mathematical entity) is a proof of a theorem of the form "as long as the system behaves according to my assumptions, it will exhibit behaviors consistent with my conclusions." And that is even assuming that your conclusions cover all possible attacks. As it is generally impossible to formalize the concept of an arbitrary attack, you generally prove something far weaker.

But none of this is even the biggest limitation of formal verification. The biggest would be its cost and lack of scalability. seL4 is about 1/5 the size of jQuery, and it is probably the largest piece of software ever verified end-to-end using deductive formal proofs as a verification technique. However much the effort of a similar technique can be reduced, it doesn't scale linearly with the size of the program (it may if the program does the same things, but usually larger programs are much more complex and have many more features).


> However much the effort of a similar technique can be reduced, it doesn't scale linearly with the size of the program

[citation needed]

Sure, you can write an arbitrary application where that's true, but AFAIK it doesn't have to be. IMO, seL4's really cool part isn't just that it was formally verified, but its total design specifically for verifiability.

The argument reminds me of all the people in the late 90s talking about how unit tests would never take off because they tried to introduce it in their legacy spaghetti codebase, and it wasn't amenable to testing.


> [citation needed]

Watch Xavier Leroy's talks.

> IMO, sel4's really cool part isn't that it just was formally verified, but it's total design specifically for verifiability.

Well, they were very selective with their algorithm choices, so that only very simple algorithms were used. Whether that's cool or an unaffordable restriction depends on your perspective.

> The argument reminds me of all the people in the late 90s talking about how unit tests would never take off because they tried to introduce it in their legacy spaghetti codebase, and it wasn't amenable to testing.

Except that I've been both practicing and evangelizing formal methods for some years now. I argue in favor of using them, not against, but as someone with some experience in that field I want to make sure people's expectations are reasonable. Unreasonable expectations all but killed formal methods once before.


> Moreover, verification is done with respect to specific properties that ensure no attacks of a particular kind.

No. seL4 proved functional correctness. It eliminates all attacks, not just particular kinds.

Functional correctness means implementation matches specification. As a corollary, seL4 has no buffer overflows. Proof: Assume seL4 has a buffer overflow. Exploit it to run arbitrary code. Arbitrary code execution is not part of specification, hence implementation does not match specification, leading to contradiction. QED.

Above proof applies to use after free, or any other exploits enabling arbitrary code execution, including methods which are not discovered yet.


I read this and thought, "this can't be right, they must be assuming some things."

According to them, they, sensibly, are indeed assuming some things to be correct without proof: http://sel4.systems/Info/FAQ/proof.pml

This is effectively the same idea behind Rust's `unsafe`. It represents the things you assume to be true. Of course, when compared to seL4, the scales are massively different. :-)


From the assumptions page: http://sel4.systems/Info/FAQ/proof.pml

```

Hardware: we assume the hardware works correctly. In practice, this means the hardware is assumed not to be tampered with, and working according to specification. It also means, it must be run within its operating conditions.

Information side-channels: this assumption applies to the confidentiality proof only and is not present for functional correctness or integrity. The assumption is that the binary-level model of the hardware captures all relevant information channels. We know this not to be the case. This is not a problem for the validity of the confidentiality proof, but means that its conclusion (that secrets do not leak) holds only for the channels visible in the model. This is a standard situation in information flow proofs: they can never be absolute. As mentioned above, in practice the proof covers all in-kernel storage channels but does not cover timing channels

```

Rowhammer and Spectre exploit hardware not working to spec.

These sorts of attacks using heretofore "unreasonable" exploitation of side-channel and hardware vulnerabilities are exactly what worries me. We have no solutions for these classes of attacks. Complex high-speed systems are still inherently unsolvably vulnerable, it seems.


Formal proofs prove certain theorems about your programs. If you could formalize the notion of an arbitrary attack you could prove it's impossible, but there is no such formalization. Of course, you don't need to verify the lack of certain attack techniques, only their effect, and so you can certainly verify even with respect to undiscovered techniques, but this still would never cover all attacks regardless of their effect. E.g. you may be able to prove no arbitrary code execution, but you may still fail to verify no data leaks. If you have a data leak, you can steal passwords and certificates, and if you have those you can execute arbitrary code without any arbitrary-code-execution attack or privilege escalation. Note that this could occur even if the kernel is itself impervious to data leaks, due to vulnerabilities in user code.

Even with regard to what you do prove, the conclusions depend on certain assumptions about the hardware which, even if true (and they often aren't -- read some of the seL4 caveats) at best hold with high probability, not certainty.

I am an avid fan of formal methods, and have been using them quite extensively in the past few years, but you should understand what they can and cannot do. You can read about formal methods on my blog: https://pron.github.io/ The paper discussed here also mentions some common misunderstandings, about seL4 in particular: http://verse.systems/blog/post/2018-10-02-Proofs-And-Side-Ef...

Nevertheless, there is no doubt that a kernel such as seL4 has a much lower probability of being hacked than, say, Linux, given similar hacking efforts, and I would trust it much more. But it really isn't an endgame.


Reinforcement learning can be used to improve theorem proving, but it probably needs more research:

https://arxiv.org/abs/1805.07563


But does formal verification even help when you're up against side channel attacks on the hardware? Like the branch predictor or DRAM memory access timing.

With a model of the full system, quite possibly. That's one of the things that makes me so excited for RISC-V.

I don't know of any technique to model or avoid timing attacks that wouldn't drastically slow down execution to the point of being impractical.

Computer security is a fundamentally asymmetric game. Your entire stack needs to contain zero mistakes and an attacker only needs to find one. Also the resources you would expend trying to find and patch every last hole would break the economics. It is what it is.

> Also the resources you would expend trying to find and patch every last hole would break the economics.

If people built with things that weren't made of holes (i.e. using memory-safe languages), then there wouldn't need to be much effort on the "find and patch" side of things.


Are you absolutely certain of that conclusion?

I'll agree with the sentiment that memory safe languages are less prone to security holes because by nature they entirely remove a whole class of issues related to managing memory...

but I can't help thinking that finding and patching the remaining holes, the ones not related to memory management, would still break the economics.

Perhaps the one thing I could imagine making a difference is regulation coming into force that mandated pentests/fuzzing for companies producing software above a certain revenue. In my experience that stuff gets done, but only once in a blue moon, and stuff is always turned up. I have yet to work anywhere where it is part of a regular process.

Even if it were I'm always blown away by security researchers who spend months and months attacking something and finally crack it then give a presentation about how they did it. There can't even be that many people on Earth with the level of skillset required.

I dunno... the economics of it seem to play out like "let's just deal with the fallout of an incident when it happens because that's cheaper than investing in security"

The asymmetric nature of finding holes vs not creating them combined with the asymmetric nature of the business value of investing in security is just a mildly awful combination.


> Is there an end to this race?

If there is, it'll be a combination of two things.

One is you make things simple enough that finding the bugs is more realistic. WireGuard, not OpenSSL.

The other is that you make things modular, and once you have one of the modules right, you mostly leave it alone. Many people still use djbdns and qmail for a reason, and it's not the pace of new feature support.

These are trade-offs. They do have costs. But they have proven to be more effective.


Security is an arms race, in a quite literal sense of the term.

Arms races don't really ever end, except for truces/treaties or total victory.


No, just use Rust...

Well, that wouldn't eliminate everything (row-hammer, speculative execution), but it would probably get rid of the most common culprits - buffer overflows and use after free.

Things like row hammer will not get attention until they are commonly used. Rust does eliminate the most common vulnerabilities and should be seriously considered for any new project that would otherwise be written in C.

Oh, I completely agree. It's just easy to overstate Rust's guarantees, which doesn't do anybody any good.

Nonsense. Rust safeties are massively overhyped. Not enough memory safety (comparable to Java, which is similarly memory unsafe), no type safety (unsafe keyword), not enough concurrency safety (not deadlock free, need to manually set mutexes). While there do exist safe and fast languages, Rust is not one of them.

If we all wrote in languages as memory-safe as Java, almost all these security bugs would go away.

Is Rust guaranteed to be 100% memory-safe? No, there could always be bugs in the compiler and/or unsafe code. (Aside: it annoys me how people consider the presence of an unsafe keyword inherently less trustworthy than a non-verified compiler/runtime system, when there is really little difference between them.) Is it a huge improvement? Yes.


Full agreement, but I think unsafe-phobia is mostly about user-extensible nature of it. unsafe in standard library is okay, because as you said it's no more dangerous than doing it in runtime. unsafe in "your" library is not okay, because I don't trust you to modify runtime either.

unsafe memory and unsafe types are not okay if you widely claim to be memory safe. Thousands of people actually believe these lies, and companies do make decisions based on these claims. It's much more memory unsafe than Java in fact. Java recently removed the unsafe tricks, whilst Rust was once safe and has grown more and more unsafe since.

>> Nonsense. Rust safeties are massively overhyped.

No. They are not. You obviously have not done your homework on the subject.


Obviously not. I've only worked on two different compilers/languages which do provide all three safeties, while you probably wouldn't even be able to name a single one. It's embarrassing.

It's a nice language, and safer than C/C++ but for sure not a safe one, even if thousands of newbies will repeat it constantly.


It's about increasing the costs to the attacker.

Now a Windows/browser zero-day requires months of research and can be sold for $100k+.


Yes, the goal is to increase the costs to the attacker as much as possible while having minimal added costs during development and runtime of the software. Statically proven programs are great security wise but they are expensive to produce, while adding ASLR is not perfect but comparatively easy to pull off.

> Terms like buffer overflow, race condition, page fault, null pointer, stack exhaustion, heap exhaustion/corruption, use after free, or double free --all describe memory safety vulnerabilities.

Page faults are fine. Null pointers are fine; just don't dereference them. Race condition is a much more general term that can cover non-memory races too; perhaps they meant data races.


Yes, I don’t think race conditions should feature in a list of memory safety issues. However, page faults can be not fine; on some hardware they can be hard faults.

Link to the presentation this article is based on but doesn't cite for some reason: https://github.com/Microsoft/MSRC-Security-Research/raw/mast...

In 2015 they became the first gold contributor to OpenBSD

https://www.zdnet.com/article/microsoft-becomes-openbsds-fir...


we should all be using Rust then :)

> Microsoft: 70 percent of OUR security bugs are memory safety issues

Fixed the title. This is not an analysis about general software errors.


Every project with millions of lines of C or C++ is going to be around the same, Firefox is over 50% as well.

Linux is much worse currently, hence the Kernel Self Protection Project being pushed by Google.

This paper's an interesting read, about writing an OS in Go:

The benefits and costs of writing a POSIX kernel in a high-level language

https://www.usenix.org/conference/osdi18/presentation/cutler


Not that Cutler, phew.

He would never write a (mainly) Posix kernel ;)

I knew this comment was coming :)))

It IS why Mozilla sponsored the language. The early "why Rust" presentations had a stat that (paraphrasing) 50% of Mozilla security bugs would have been prevented by the borrow checker.

Yeah. I looked at the comments looking specifically for this comment. I was not disappointed.

or some language with the runtime. for example, c#?

Microsoft did just that, by extending C#: https://en.wikipedia.org/wiki/Singularity_(operating_system)

Maybe you can do a proof of concept by writing a mini-Linux using Rust.


Right because Rust will prevent all the issues of Unsafe usage that an OS uses all the time ...

Say the same thing about seatbelts in a car. If you don’t plan to have accidents, why do you need seatbelts?

Car accidents, like mistakes in programming, are a risk with a non-zero likelihood. A seatbelt might be a little bit annoying when things go well, but much less so when they don’t. Rust is there to stop you in most cases when you try to accidentally shoot yourself in the foot, unless you deliberately opt out while yelling “hold my beer” (unsafe). And contrary to popular belief, even in unsafe blocks many of Rust’s safety guarantees hold, just not all.

If the net benefit of an ownership concept like Rust’s is high enough, this should be an easy choice for rational actors to take. The odds are on Rust’s side here, because humans make mistakes, and if Rust manages to allow productivity despite stopping certain classes of these mistakes, there will be a net benefit.

Just like with the seatbelt, there will be always those that don’t wear one for their very subjective reasons (e.g. because of edge cases where a seatbelt could trap you in a burning car, or because it is not cool, or because they hate the feeling and think accidents only happen to people who can’t drive).

I’ve been writing Rust for a year now, and I constantly see bugs that far better programmers made in, say, C++ that Rust just wouldn’t allow you to leave unhandled. E.g. crashing your software by writing to a file that is locked by another process. In Rust this would mean the deliberate act of ignoring a Result, unwrapping it and moving on. You can ignore it, and Rust will crash, but you know exactly where you took that risk.

Things like these remove a lot of cognitive load from the programmer which can be put into more pressing topics


There’s a similar distinction between most C++ development and dropping to low-level placement new, mallocs/frees, handwritten assembly or heavy use of intrinsics. You rarely need the latter, and, once wrapped, when used by the former is quite safe. The difference is that Rust was designed with this in mind.

I like Rust; I’ve done some coding challenges in it. It won’t replace C++ for me for quite some time, but it’s about time there was a safe, powerful language that’s truly a worthy contender.


You're being sarcastic, but I think you'll find the limited use of unsafe in Philipp Oppermann's series informative: https://os.phil-opp.com/

How many of those unsafe patterns could be written in safe code? (The answer: nearly all of them.)

So what if Rust doesn't prevent all conceivable memory safety issues anyone could possibly write? It's a huge improvement.


This late? With Microsoft's move to C# and the Static Driver Verifier? The latter pretty much fixed this for drivers, which used to be the biggest source of kernel crashes. I thought Microsoft was past this.

There is still a strong C and C++ community at Microsoft.

They are the only OS vendor (in the consumer space) still having full-stack support for C++, with internal pushes to keep it going.

https://kennykerr.ca/2019/01/25/the-state-of-cpp-on-windows/


Where can I find the original presentation from Microsoft's engineer?


First 38% of bugs at Airbnb could have been prevented by using type safety. Now 70% of all security bugs at Microsoft could have been prevented by using memory safety.

Exactly, this explains why my Java code is 108% bug free

Fuzz testing is a useful tool for finding these types of bugs. See https://en.m.wikipedia.org/wiki/Fuzzing

29% SQL Injection?

Linux is also written in C.

Yes, it is, and over 50% of Linux CVEs are memory-safety related.

And it has MORE reported CVEs than Windows: https://www.cvedetails.com/top-50-products.php?year=2018

When the list says Debian or Ubuntu it includes all software in Debian and Ubuntu. That includes software like Google Chrome, Firefox, Python, Ruby, etc. For example, out of the 40 listed in 2019 for Debian, 36(!) are Chrome bugs, not Debian bugs.

Sort by Vendor: https://www.cvedetails.com/top-50-vendors.php


It's also OSS and it is much easier to surface security bugs for Linux than for Windows.

In my own research, I have attempted to send Microsoft security bugs only to be told they would be backlogged and reviewed later (which never happened to my knowledge).


> It's also OSS and it is much easier to surface security bugs for Linux than for Windows.

Shouldn't then the number of bugs decrease much faster, since they are easier to find? Unless they are introduced at even a greater rate than the ones in Windows.


No, because Linux OSes include a lot of software in their repositories, and new packages are added all the time. Look closer at the list. "40 Debian CVEs in 2019" breaks down to this:

* Google Chrome: 36

* Artifex Ghostscript: 1

* ZeroMQ: 1

* macOS CUPS: 2

* Debian: 0


You are right. One current estimate is that Linux is introducing security bugs at a rate faster than they are fixed.

New and better tools are finding bugs in old code, so it isn't really that more and more bugs get into new code:

"But, your editor wondered, could we be doing more than we are? The response your editor got was, in essence, that the bulk of the holes being disclosed were ancient vulnerabilities which were being discovered by new static analysis tools. In other words, we are fixing security problems faster than we are creating them. "

https://lwn.net/Articles/410606/


Maybe because more people can report them? With Windows you'll be lucky if Microsoft doesn't outright deny their existence.

That and saying Debian isn't like saying Windows. Debian is like 50,000 packages. Pretty much all CVEs so far this year listed as Debian CVEs have been in the Google Chrome browser...

90% of these MS memory bugs are related to C++; bad design, I think. C is still good for low-level systems hacking.

Google thinks otherwise, hence the Linux Kernel Self Protection Project.


I always get it wrong. :)

[flagged]


Obsessed much? This has no mentions of Rust anywhere.

Given it's Microsoft, they are probably thinking about .net.


Microsoft haven't even rewritten Office in C#/.Net yet, what makes you think they are ready to rewrite/replace the Windows kernel with it?

https://news.ycombinator.com/item?id=17305332


I wouldn't be surprised if it's easier to rewrite the kernel than to rewrite Office.

Why is that?

Office is somewhere around an order of magnitude or so more code; everything I've heard is that it's nightmare-fuel levels of legacy code that defies even refactoring within the same language, and has a bad habit of slurping up binary formats and doing crazy internal YOLO pointer chasing.

> and has a bad habit of slurping up binary formats and doing crazy internal YOLO pointer chasing.

... you know that this is about what 9/10ths of an operating system's device driver code looks like, right?


Not from externally imported data. That's a privilege escalation vulnerability when the kernel pulls that crap.

It is probably easier to figure out what it should be doing. With Office, sometimes you just have to line break like Word 97 (I am not making this up, it was a bad example of the so-called open format "OOXML"), which is much easier when you can just call the original Word 97 code, bugs and all.

How do you think Office 365 works?

A bunch of tiny pjmlp's running around in a hamster wheel providing electricity maybe?

Are you suggesting the post I linked above from someone claiming to be an engineer on Office is inaccurate? It may be well be, but it sounds more plausible and supportable than whatever question it is you seem to be posing.

If you have evidence to suggest that Office365 is written in C# these days I'd like to see it please, because I have read on more than one occasion over the last 10 years that MS had attempted this and given up.


I was referring to Office 365 on Azure, which happens to run in the browser, in case you missed it.

Quoting your favourite engineer "The desktop app’s are fully native".


So you are talking about "Office 365 Mobile" and/or "Office 365 Online".

There is also "Office 365 Desktop" which is what I think you'll find most large corporations that have migrated from Office 2016 Desktop apps are using.

https://www.howtogeek.com/334597/whats-the-difference-betwee...

Office 365 Mobile/Online are much cut down, more comparable to Google Docs from the sounds of it, and hardly a 1:1 replacement for the Desktop apps. Which are still C/C++.

In addition, if you have any hard evidence that the majority of the Office Mobile/Online web apps are written in C# I would like to see that, because I can't seem to find any.

Thankyou.


If you have any hard evidence that the majority of the Office Mobile/Online web apps are written in C++ I would like to see that, because I can't seem to find any.

Thankyou.


I have not made that assertion at all, not even implicitly. Therefore there is nothing to prove.

You still have an explicit outstanding assertion that they are written in C# though, once we figured out which of the three versions you were actually talking about (ie: it's the versions that almost nobody uses) so let's see it.


Quite the contrary, I explicitly mentioned "I was referring to Office 365 on Azure, which happens to run in the browser, in case you missed it.", but I guess you'd rather push this little number of yours, therefore there is nothing to prove.

First message:

> How do you think Office 365 works?

Second message:

> I was referring to Office 365 on Azure

Everyone can see it; it's there in black and white. Having received your clarification that you meant the "mobile or online" edition, I asked you to provide evidence that it's written in C#, which is apparently your claim, which you still haven't done. I haven't asserted that it's still written in C or C++; it could be assembled with spit and paper clips for all I know.

So enlighten us, as per your claim, if you can. I'm genuinely interested.


Whoa, easy with the personal attack there.

There's a lot of memory safe languages out there that are great choices. Coming from a Microsoft statement, .Net is pretty neat.

When your worst case latency/perf requirements don't allow you to have a GC, Rust is another great choice.


What about when you want a concurrency safe language? You can still have concurrency bugs, like race conditions in languages that are only memory safe.

Of all the stories Rust tries to sell me on, concurrency is by far the weakest of all. Rust concurrency is a mess compared to (yes, I'm going to say it) Go.

Async/await are still far away from being anywhere near figured out nor implemented, CSP primitives in the std. lib like std::mpsc are basically DOA/obsolete/half-functioning and tokio/futures are still early alpha.

There's a lot more to concurrency than just compile-time verification of safe usage of a mutex or moving data in/out of a thread. A lot more.


I actually agree with you. I'll just gladly take any available advances in concurrency safety. In any concurrency model, including shared data.

Async/await (and production quality gRPC) is currently what's keeping us from using Rust at dayjob.

> There's a lot more to concurrency than just compile-time verification of safe usage of a mutex or moving data in/out of a thread. A lot more.

Sure, but it's better to have at least that.


Don't get what you mean by std::mpsc being half-functioning. I'm using it and it works great. Obsolete, maybe. There are alternative implementations out there that have various improvements, but any non-trivial core primitive of a healthy language has that, no?

Then use a concurrency safe language. There are plenty of slow ones (pass by copy) and even some supporting references into shared memory.

I would also emphasize the need for type safety and contracts.


Rust does not solve all race conditions, unfortunately. Even world domination has its limits! It does solve simple data races, which means that use of concurrency within Safe Rust is indeed memory safe. This is better than what some other languages can claim, such as Go (only the _sequential_ subset of Go is memory safe)!

What exactly is .Net in this context? Every time I think I understand it someone throws a curve ball. Is it c#? Is it a set of libraries?

It’s the CLR (common language runtime) running in a JIT environment.

Okay thanks. I could Google that.

So it's code written in C# or VB or another supported language, compiled by JIT to a common bytecode running on a virtual machine and probably a whole slew of libraries that target that VM too.

Okay. I have no idea why I'm struggling so much with that.

Thanks.


No problem. Sorry for the brevity, I was on my phone and have texter's thumb, but you got me to stop being lazy and go to my PC, so hey!

Two years ago, .NET in an everyday connotation would imply C# (VB.NET is mainly only used in legacy corporate environments, although there was nothing stopping people from implementing greenfield VB projects until last year or so) and an installation of the .NET Framework (equivalent to Java runtime environment) on a Windows PC.

Today, it is increasingly referring to .NET Core, which is still likely C# (with a smattering of F# from enthusiasts) but without a runtime necessarily pre-installed on the PC, now available as first-class citizens (and developed in concert with) on Windows 7+, macOS, Linux, and soon FreeBSD. The runtime has been broken down from the monolithic framework to hundreds of individual libraries (all available via a package manager) compiled to platform- and architecture-independent dlls available via the binary nuget package manager.

Back when .NET was first getting started there were a lot more languages (Microsoft paid language developers to port Java (the language) and the community provided ports of Python, Ruby, and others; while new languages specifically designed for .NET also came and went; I have somewhat fond memories of learning Boo, but I don't think it saw any updates this side of .NET Core 1.0), today it's mostly just C# and F#.

More interestingly, there were some serious attempts at introducing native MSIL (Microsoft Intermediate Language, the assembly-level language interpreted by the runtime environment, now known as CIL or Common Intermediate Language) in the form of microcontrollers that natively executed MSIL instructions, but besides a few extremely niche implementations (and very expensive ones - I remember being disheartened at the time) that ended up going nowhere. I'm not aware of any modern efforts at revisiting the idea, although I wouldn't be surprised given the resurgence of .NET in recent years.

Today, .NET Core is being pushed for all desktop, mobile, and web development; and is supported on major consumer platforms. The entirety of .NET Core development is out in the open (and there are now official public committees for furthering its development and making decisions affecting its future) and the code is actually available on GitHub.

I jumped on board the .NET train when I found a letter to the only tech-literate teacher at my high school who had thrown it away; it offered to send a free sample pack of CDs and basic literature in advance of the release to interested schools back in 2001 or so (when J# was still a thing, C# had just been introduced, .NET was still at the "we don't speak of it" 1.0 mile marker before the hard fork to redo the CLR with support for generics and revisit some poor decisions in the initial release, and the "new" ASP.NET offering still used WYSIWYG to design the layout!) and managed to get them to send me a copy. It feels like forever ago! C# stagnated for some years, but then Microsoft became serious about it once again after the Windows Vista release (and after they failed to port the Windows userland to .NET due to serious performance constraints in particular pertaining to GC with the Longhorn project), but then saw some great updates that made it an incredibly well-designed and efficient language (without even taking the standard library into account).


Thank you so much. Your recent timeline of events really helps clear things up. Because a few years ago I did think I got it when I thought of .NET as C# + a monolithic suite of libraries.

Microsoft heavily obfuscated the term for some reason.

I swear, they would've named Bill Gates' grand kids .Net Gates if they could have gotten away with it.


Yeah. I think that's it. I remember .net passport as a kid. Among things. .net server stuff. Etc.

[flagged]


[flagged]


approved for release, we'll test it live!


