
Multicore requires OS rework, Windows architect advises - abennett
http://www.itworld.com/hardware/101580/multicore-requires-os-rework-windows-architect-advises
======
Aegean
That's what my startup is about. Leveraging multiple cores using a hypervisor:
<http://www.l4dev.org>

~~~
daeken
Someone's actually commercializing L4? I've been hoping for that for a long
time; good luck! While my own OS is very different (pure managed and all that
jazz), I'll never forget the things I learned from L4, and it's great to see
that someone's actually doing something real with it.

~~~
Aegean
I've been working on it for quite some time. It's nearly ready now, and it
supports the quad-core Cortex-A9, the latest mobile CPU rivaling Intel.

------
xilun666
Either this is bad journalism or the "architect" is really high, because the
proposed re-architecture makes absolutely no sense.

The article starts with an obvious observation: that scheduling (in the
broadest sense) is often crap in real (Windows?) systems.

It concludes with completely meaningless bullshit where applications, now
renamed "runtimes", would each get a dedicated CPU. What's the point? Hard
real-time? What does it even concretely mean for a "runtime" to have a
"dedicated" CPU (which, given the complexity of modern architectures, would
not really be dedicated in any strict sense), or to do resource management via
metadata inserted by the compiler?

What is really needed is more effective ways for applications to send hints to
the kernel about what is going on: what should run at high priority, what
should be cached, what should not be, and so on.
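On Unix-like systems, rough versions of these hints already exist. Here is a
minimal Python sketch of the idea (the filename is invented, and both
`os.nice` and `os.posix_fadvise` are Unix/POSIX-only, hence the guards):

```python
import os

def read_with_hints(path):
    """Read a file while hinting the kernel about our access pattern."""
    fd = os.open(path, os.O_RDONLY)
    try:
        if hasattr(os, "posix_fadvise"):  # POSIX-only
            # We will scan sequentially, so the kernel may prefetch.
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
        data = os.read(fd, os.fstat(fd).st_size)
        if hasattr(os, "posix_fadvise"):
            # We won't read it again: don't let it crowd the page cache.
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        return data
    finally:
        os.close(fd)

# A background job (an antivirus scan, say) can also lower its own
# scheduling priority so foreground applications win the CPU:
if hasattr(os, "nice"):  # Unix-only
    os.nice(10)  # +10 niceness: strictly lower priority than default
```

Nothing exotic: this is exactly the "tell the kernel what matters" interface
the parent is asking for, just coarser-grained than one might like.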

Forbidding some programs from using some cores just because you have many
makes little sense as a way to re-architect an operating system, because it
effectively happens anyway on current designs: nothing is strictly forbidden,
but when a few background services each consume less than 1 percent of one
core and/or run at extremely low priority, they are nearly unnoticeable, and
truly unnoticeable from the user's point of view.

To get a responsive word processor whether or not an antivirus is loaded, a
simple mechanism already works really well: scheduling priorities. Given that
truly background priorities are among the rare things that work fairly well
under Windows (like 10x better than with a vanilla Linux kernel), if the
"architect" is experiencing random slowdowns of Word because of his
"antivirus", it's because his "antivirus" is total utter crap, not because a
new OS "architecture" is needed.

------
rpledge
This will become even more relevant as core counts above 32 become
commonplace. Sadly, most programmers have a hard time dealing with more than
one thread. I agree a new paradigm is needed to solve this problem.

~~~
wheaties
Thankfully, people are pushing forward with STM and actor-based parallelism.
There are already STM implementations for Boost, Clojure, and Scala, and
actor systems such as Erlang's, to name a few.
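For readers who haven't seen STM, here is a toy sketch of the idea in Python
(not any of the libraries named above; real STMs such as Clojure's refs are
far more sophisticated): transactions read versioned cells optimistically,
then validate and commit under a single lock, retrying on conflict.

```python
import threading

_commit_lock = threading.Lock()

class TVar:
    """A transactional cell: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class _Tx:
    def __init__(self):
        self.reads = {}   # TVar -> version observed at first read
        self.writes = {}  # TVar -> pending new value

    def read(self, tvar):
        if tvar in self.writes:          # read-your-own-writes
            return self.writes[tvar]
        if tvar not in self.reads:
            # Record the version *before* reading the value, so any
            # concurrent commit is caught by validation below.
            self.reads[tvar] = tvar.version
        return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value

def atomically(fn):
    """Run fn(tx) as a transaction, retrying until it commits cleanly."""
    while True:
        tx = _Tx()
        result = fn(tx)                  # fn must only touch TVars via tx
        with _commit_lock:
            if all(tv.version == v for tv, v in tx.reads.items()):
                for tv, val in tx.writes.items():
                    tv.value = val
                    tv.version += 1
                return result
        # Validation failed: someone committed under us; retry.

# Example: an atomic transfer between two cells.
a, b = TVar(10), TVar(0)
def transfer(tx):
    tx.write(a, tx.read(a) - 3)
    tx.write(b, tx.read(b) + 3)
atomically(transfer)
print(a.value, b.value)  # 7 3
```

The appeal is that `transfer` contains no locks at all; the cost is that its
body may be re-executed on contention, so it must have no side effects.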

~~~
runT1ME
I'm really not sure Actors are the silver bullet. I write telecom software,
and it's just as easy to get a deadlock (OK, maybe a little harder) using
Actors as with traditional locking.

~~~
dkersten
I also write (or wrote, until recently) telecom software and, while I like
Actors (certainly compared to monitor-based concurrency), I agree that they
aren't really a silver bullet. I spend a lot of time thinking about
concurrency, and I've come to the conclusion that the difficulties of
concurrency all come from the fact that everything is manual: the programmer
has to carefully think about how everything fits together, or risk all kinds
of problems: deadlock, inconsistent data, poor parallelism, etc.

The Actor model certainly makes things easier, but it doesn't work as well for
some problems, and it's still a manual task. STM improves on monitor-based
systems by relieving you of carefully placing locks around critical sections
of code, but you must still decide where to place the transactions, and that
can still be a big task. So far, the only model that, IMHO, lends itself to
implicit, programmer-free concurrency is dataflow. Ultimately, I think we need
a language that exposes all three models (monitor, actor, dataflow) as
language primitives, handling concurrency implicitly and transparently where
possible.
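A toy illustration of the dataflow idea in Python (the `Graph` API here is
invented for illustration): you only declare what each value depends on, and
independent nodes run in parallel with no explicit locks or messages.

```python
from concurrent.futures import ThreadPoolExecutor

class Graph:
    """A tiny dataflow graph: nodes fire once their inputs are ready."""
    def __init__(self):
        self.nodes = {}  # name -> (fn, dependency names)

    def node(self, name, deps=()):
        def register(fn):
            self.nodes[name] = (fn, deps)
            return fn
        return register

    def run(self):
        futures = {}
        with ThreadPoolExecutor() as pool:
            def resolve(name):
                if name not in futures:
                    fn, deps = self.nodes[name]
                    dep_futs = [resolve(d) for d in deps]  # submit deps first
                    # Independent dependencies now run concurrently; this
                    # node waits only on its own inputs.
                    futures[name] = pool.submit(
                        lambda f=fn, dfs=dep_futs: f(*[d.result() for d in dfs]))
                return futures[name]
            return {n: resolve(n).result() for n in self.nodes}

g = Graph()

@g.node("x")
def x():
    return 2

@g.node("y")
def y():
    return 3

@g.node("total", deps=("x", "y"))
def total(a, b):
    return a + b

print(g.run()["total"])  # 5
```

The point is that the dependency structure, not the programmer, decides what
runs concurrently; "x" and "y" parallelize with no code saying so.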

~~~
runT1ME
Interesting, I haven't gotten to hear the thoughts of too many other telecom
engineers when it comes to next generation languages and threading. I'm pretty
convinced it has to be one of the hardest problems to get right.

Speaking of STM and locks, have you seen this epic debate?

http://blogs.azulsystems.com/cliff/2008/05/clojure-stms-vs.html

The problem with the telecom software I'm dealing with now (in my very humble
opinion), and the reason it hurts my head, is that you're essentially dealing
with a distributed system, each call leg being a node (since each action a leg
takes needs to go out over the wire). Not only do you need to see consistent
'state', you also need to make sure your actions are handled correctly.

~~~
dkersten
Ugh, tell me about it... and it's worse still if the nodes aren't all on fast
connections. One thing we had to deal with was a mobile operator with two
sites and multiple nodes per site. Our nodes were connected through a
high-speed network within each site, but the connection between the two sites
was slow and unreliable, so we had to change our software to "prefer"
communicating with other nodes on the same site before trying the second
site.
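That "prefer the local site" policy is simple to express once the topology is
modeled; a hypothetical sketch (the `Node` type and site names are invented):

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    site: str
    up: bool = True

def pick_peer(nodes, local_site):
    """Prefer a live node on our own site; only then cross the slow link."""
    local = [n for n in nodes if n.up and n.site == local_site]
    remote = [n for n in nodes if n.up and n.site != local_site]
    for pool in (local, remote):
        if pool:
            return pool[0]
    raise RuntimeError("no live peers")

cluster = [Node("n1", "site-a"), Node("n2", "site-a"), Node("n3", "site-b")]
print(pick_peer(cluster, "site-a").name)  # n1
```

The hard part, of course, isn't the selection but keeping state consistent
once traffic does have to cross the unreliable link.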

------
aliston
I'm not sure I agree with the "common" use case Probert is talking about.
Sure, most people have multiple programs running at once, but much of the time
the background processes are running at idle or minimal CPU usage, so I'm not
sure that assigning a dedicated CPU per process would speed things up all that
much...
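For what it's worth, a weak form of the "dedicated CPU" idea is already
expressible today via CPU affinity; a sketch (the function name is mine, and
`os.sched_setaffinity` is Linux-only, hence the guard):

```python
import os

def pin_to_one_core():
    """Pin this process to a single allowed core, then restore the old
    affinity set. Purely illustrative; no-op on non-Linux platforms."""
    if not hasattr(os, "sched_setaffinity"):  # Linux-only API
        return None
    old = os.sched_getaffinity(0)        # pid 0 = this process
    os.sched_setaffinity(0, {min(old)})  # "dedicate" one core to us
    pinned = os.sched_getaffinity(0)
    os.sched_setaffinity(0, old)         # be a good citizen: restore
    return pinned

print(pin_to_one_core())
```

Note that affinity only restricts where this process may run; nothing stops
the OS from scheduling other work on that core, which is part of why the
article's proposal would need deeper kernel changes.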

------
Retric
It was my understanding that this is how PS3s operate: the OS gets to use one
of the more limited cores and the running program gets everything else.

