
The Inside Story Behind MS08-067 - dsr12
http://blogs.technet.com/b/johnla/archive/2015/09/26/the-inside-story-behind-ms08-067.aspx
======
monopolemagnet
Having supported Windows in production in wild, real-world environments and
worked in security research...

Windows has a giant attack surface because of its sheer indefinitely-
legacy-compatible, multiverse-sized codebase of unsafe, unprovable,
low-level code in C, C# (CLR), and asm.

The last beta copy of Windows code I reviewed looked like it was maintained by
thousands of contract programmers and interns on artificial deadlines who
can't spare a second to clean up nasty technical debt.

With Russinovich and Azure's leader promoted now, I hope they're gonna nut up
and take a stab at incurring temporary customer wrath by remaking Windows from
(mostly) scratch with less duct tape and more lessons learned. It could even
go full open source (for the core OS, Visual Studio, and shared libs) to dig
at Apple (which is already semi-open-source). Given their resources, they
could do a modern, nearly ground-up, respectable-as-OpenBSD OS in a
combination of functional/multiparadigm languages and not fail too terribly
at it if it's previewed and seeks feedback early and often. Maintaining
market share is about taking on hard-but-necessary choices for long-term
gain, not the seemingly safe move of tacking some tweaks and security add-ons
onto the overall status quo.

~~~
MichaelGG
Can you point to remote execution issues in the CLR? I'm having a rather hard
time thinking of any that'd impact an organization. The CLR issues with unsafe
code have almost entirely been related to trying to sandbox binaries (via CAS
or loading .NET apps via browser plugins, same as Java).

I'm unaware of any time the CLR would provide any useful attack surface. The
only ones that come to mind are GDI+ issues that .NET inherits via
passthrough. So, citation needed, I guess?

Agreed that use of C/C++ code in general is bad and that, for all the effort
spent on mitigations, we'd be better off moving to a proper language, but ...
it doesn't seem like that's happening.

A rewrite is rather silly; the NT kernel is well respected and capable. You're
basically suggesting "throw away everything and start a new company" which
certainly doesn't seem like a winning strategy.

~~~
monopolemagnet
You missed the boat and went off on a strawman fallacy based on a fundamental
misunderstanding: Microsoft OS code and libraries are implemented not _just_
in C/C++/some asm but also in CLR languages. All that legacy crap in a
variety of languages is a huge attack surface. Demanding proof that points
into closed source is an impossible, specious fool's errand.

Have you ever done any Windows NT kernel driver development? Compared to OS X
(xnu) kexts or Linux, it's a disaster. XNU and Linux extensions are also not
that great, because they follow the same flawed, insecure methodology of
allowing kernel extensions to modify the kernel, which OpenBSD wisely
eliminated.

The other strawman you raise, trying to put the wrong words in my mouth:
rewriting a massive legacy codebase does not throw everything away. The
lessons learned are the most important part. Instead, OS and app
virtualization can be used to run prior OSes/apps while new apps run in a new
environment, sharing a filesystem as needed. That doesn't start over "from
scratch, with inexperienced people", which is _your_ incorrect presumption,
because that would be foolish. Otherwise, the Microsoft ecosystem will
continue to accrue technical debt and job security which can never be paid
off, because it's stuck in 1993.

The kernel should be the kernel: a tiny, mostly immutable thing which does
just enough to provide process/thread isolation, IPC, and such. Even better is
_capability-granted_ privilege requests to perform _unsafe_ operations, so
that drivers are demoted to least-privilege processes. QNX and MINIX loosely
follow this, and it's a far more secure _and_ robust approach, one of the
benefits of a reasonable microkernel architecture.
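
To make "drivers are demoted to least-privilege processes" concrete, here's a
toy C sketch of the idea. The message format and the socketpair transport are
invented for illustration; this is not QNX's or MINIX's actual API. The
"driver" is an ordinary unprivileged process serving requests over an IPC
channel, so a bug crashes only that process and a supervisor can restart it:

    /* toy_driver.c: a user-space "driver" served over IPC (illustrative) */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    struct request { int op; char data[64]; };   /* invented format */
    struct reply   { int status; char data[64]; };

    static void driver_loop(int chan) {
        struct request req;
        /* Serve requests forever; a crash here kills only this process. */
        while (read(chan, &req, sizeof req) == sizeof req) {
            struct reply rep = { .status = 0 };
            snprintf(rep.data, sizeof rep.data, "handled op %d", req.op);
            write(chan, &rep, sizeof rep);
        }
    }

    int main(void) {
        int chan[2];
        if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, chan) < 0) return 1;

        if (fork() == 0) {             /* the "driver" process */
            close(chan[0]);
            /* A real system would drop privileges/capabilities here. */
            driver_loop(chan[1]);
            _exit(0);
        }

        close(chan[1]);                /* the "client" side */
        struct request req = { .op = 42 };
        struct reply rep;
        write(chan[0], &req, sizeof req);
        if (read(chan[0], &rep, sizeof rep) == sizeof rep)
            printf("driver says: %s\n", rep.data);
        return 0;
    }

If driver_loop() dereferences a bad pointer, the client just sees a closed
channel instead of a kernel panic, which is the robustness argument in a
nutshell.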

~~~
asveikau
> Have you ever done any Windows NT kernel driver development? Compared to OS
> X (xnu) kexts or Linux, it's a disaster.

I have done NT drivers and linux kernel modules. I disagree with your
statement. Can you be more specific?

I'm not saying there are no warts in NT, but your comment doesn't demonstrate
any knowledge of where they are.

~~~
monopolemagnet
What kind of NT drivers (USB, Storage, what)? What kind of Linux kernel
modules?

I've worked on storage drivers for Linux (2.6), Mac (10.6-10.8), and
Windows.

What, like Nt vs. Zw? Are you fucking serious? Do I need a badge and
certification from you too, your majesty? You haven't proven you've done shit
either, but it doesn't matter.

You don't have experience with xnu kexts (which make much more sense, at least
more so than the Linux model), so I don't see how you can offer a broad,
pseudo-expert "opinion" on something you know nothing about.

Regardless, all three approaches are still irrelevant because they're
critically flawed: they grant transfinite trust to the kernel driver developer
not to fuck up. In a real microkernel model, only a process would likely
crash. QNX runs on billions of devices, including BlackBerry 10 since
BlackBerry acquired it: game, set, match.

That's the boat you've missed by going off into the weeds and making it a
personal attack, because you have nothing.

~~~
asveikau
Calm down. I'm not asking for a badge or certification or whatever. I
just don't understand what the complaint is and therefore cannot evaluate it.

[Since you asked, the biggest complaint in my filesystem-centric Windows
experience is that filesystem drivers are very "heavy". There is not really a
layer like Linux VFS which can handle common bug-prone tasks before they get
to your driver. The driver is expected to do a lot. But I don't think drivers
are easy on any platform.]
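
To illustrate what that VFS layer buys you on Linux: by the time the kernel
calls into a driver's file_operations, core code has already done path
lookup, permission checks, and file-position bookkeeping. A minimal sketch of
a read handler (module init/exit and device registration omitted for brevity;
the names are mine):

    #include <linux/fs.h>
    #include <linux/module.h>

    static const char msg[] = "hello from a tiny driver\n";

    /* The VFS hands us a validated struct file; we only move bytes. */
    static ssize_t demo_read(struct file *filp, char __user *buf,
                             size_t len, loff_t *off)
    {
        /* simple_read_from_buffer clamps offset/length and copies to
         * user space: more common bug-prone work done for you. */
        return simple_read_from_buffer(buf, len, off, msg,
                                       sizeof(msg) - 1);
    }

    static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,
    };

An NT filesystem driver, by contrast, receives raw IRPs in its dispatch
routines and owns name parsing, locking, and cache interaction itself, which
is the "heaviness" described above.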

------
farmdve
The article mentioned Conficker. Conficker was the only malware I had, years
ago, that seriously made a mess for me, mainly by making my Internet
horribly slow.

And it was my ISP that notified me of the unusual traffic and that I had
Conficker.

~~~
jlgaddis
Yeah, Conficker was pretty easy to detect at the network level. I worked at an
.edu at the time and set up alarms for it. Fortunately, it was trivial for us
to identify the user who owned the device, kill their network access, then
wait for them to call the help desk.

------
rasz_pl
Can someone explain to me what the MS rationale is behind binding the RPC
mapper (port 135) to public IPs on client computers?

~~~
nuxi7
Same reason unix systems typically did the same with theirs (albeit on port
111).

It's not very useful if other machines can't talk to it.

Keep in mind, the decision to do this in Windows happened back when home
computers were on dialup. (Even RFC 1918 and NAT were young at the time.)
The real question is why Windows didn't have a firewall that blocked it by
default, and the answer is that it has... since XP SP2 was released in 2004.
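
For what "binding to a public IP" amounts to: the endpoint mapper does a
classic wildcard bind, listening on every interface rather than just
loopback. A minimal C sockets sketch (illustrative only, not Microsoft's
actual RPC code; binding to a port below 1024 needs privileges):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(135);

        /* Wildcard bind: the service answers on every interface, which
         * is what exposed the endpoint mapper to the whole Internet on
         * dialup. A loopback-only bind (INADDR_LOOPBACK) would have
         * been conservative, but then remote RPC, the whole point of
         * the service, stops working. */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);

        if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0)
            return 1;
        listen(s, 16);
        /* ... accept() and answer endpoint-map queries ... */
        close(s);
        return 0;
    }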

~~~
MichaelGG
And going a bit further back, this is probably one more reason Windows was
successful. Networking "just worked". It was pretty neat (in the 90s) to just
plug stuff in, turn it on, and hey, there's the other computers, and printers
and so on.

~~~
NickHaflinger
If you mean Windows 3.1 running over Novell Netware and Trumpet Winsock then
it did indeed bring networked ease of use computing to the masses :)

