
Cooperative Linux – running Linux as a Windows driver (2011) - ornxka
http://colinux.org/
======
muxator
I used colinux as part of my personal plan to migrate away from Windows back
in ~2005-2006.

I wanted the transition to be as painless as possible, and I also wanted to be
able to use both systems at the same time for a while as I got used to
Linux & its applications.

First step was starting to use multiplatform & open applications. So I left
IE, Outlook and Office for the windows versions of Firefox, Thunderbird and
Open Office.

Once I got accustomed to that, I installed Debian in dual boot and moved all
my files to ReiserFS. (This was before the unfortunate events involving its
creator: eventually I moved to ext4, and years later I made a _super cool_
in-place conversion to btrfs, but that's another story.)

Now I was accustomed to open applications, and my data was on Linux, but I
still found myself frequently needing to start windows for a lot of reasons.
But then all my emails and documents were locked in the Linux partition.

Here coLinux came to the rescue: while on windows, I started a colinux
instance (it was super fast, no way a VM could have served that purpose),
mounted the physical partition in it, and shared to the windows machine via
Samba.

It was a long journey, but it taught my younger self a lot, and above all it
made me fond of how powerful and flexible an open software ecosystem can be.
None of that would have been possible with proprietary technologies.

Many years later, I have never looked back, and I am grateful to all the
wonderful, passionate people who made this possible.

------
da-x
I'm the original author of this project. It warms the heart to see that it
helped so many fellow hackers to introduce themselves to Linux!

Recalling this makes me feel like quite an old hack: I started this back in
December 2003, without Git, through mailing lists, and on SourceForge - oh
boy. It would definitely have been easier maintaining this project with all
the tech, the reach, and the number of potential contributors that we have
today.

~~~
cyansmoker
Man, without you I don't think I would have survived in the corporate world.
If it wasn't for the fact that I could install coLinux to turn my work XP
into an actually functional OS, I don't know what I would have done.

Nothing (almost!) against Windows, but with experience steeped in Xenix,
Solaris and Interactive... let's say that coLinux was also pretty effective as
an antidepressant :D

------
michh
I remember running this (way earlier than 2011, like 2003?) and it was really
cool but it’d slow down the real time clock on the ‘host’ (co-habitating?)
Windows install. I figured that was due to the clock being tied to cpu cycles
and Linux stealing some of those. I’m not so sure about that explanation
anymore, but I don’t have a better hypothesis either. Funny how that little
detail stuck with me for almost 20 years…

~~~
snarfy
I thought it was due to the timer's frequency being programmable, and both
operating systems setting and assuming different values. It's been a long time
though and my memory is fuzzy.

------
alpb
Little trivia: There has been an official UNIX subsystem in Windows since
2000:
[https://en.wikipedia.org/wiki/Windows_Services_for_UNIX](https://en.wikipedia.org/wiki/Windows_Services_for_UNIX)
It came from a company called Interix, and I believe it went unmaintained for
a long time (but continued to ship). IIRC the reason it was there was to
satisfy DoD contracts or something similar that required POSIX compliance.

~~~
pjmlp
Your timeline is a bit off: Windows NT, released in 1993, already had a POSIX
personality.

[https://en.wikipedia.org/wiki/Architecture_of_Windows_NT](https://en.wikipedia.org/wiki/Architecture_of_Windows_NT)

Had Microsoft taken care of it like other OS vendors did with their POSIX
compatibility layers, instead of using it just for DoD contracts and Win32
migrations, FOSS UNIX clones would never have mattered.

The only reason I started tinkering with Linux was that Xenix and DG/UX were
a little hard to come by for work assignments at home, and stuff like Coherent
wasn't really a solution.

~~~
asveikau
> FOSS UNIX clones would never have mattered.

I am not sure about that. NT licenses weren't cheap back in the day, and Linux
was $0. (And didn't come with baggage like a more expensive SKU to support
more CPUs)

~~~
pjmlp
NT licenses were included in the computer price anyway. I was running NT
during university, and eventually dual booting with Linux.

------
DigitallyFidget
I used to use this in the days of XP. It was an amazing way to learn Linux
without losing Windows for me. I was sad when I had to move on without it due
to the limitations of 32-bit and 4GB of RAM (of which I could only actually
use 3.2GB at the time). Strange that this abandoned relic of a time long past
showed up here on HN.

~~~
ornxka
My story with coLinux was that it was also my first experience with Linux,
also around 2008. I wanted to try it but the thought of installing it and dual
booting seemed a bit scary to me, so when I came upon a solution that
supposedly let me run Linux and Windows at the same time, I was thrilled.
However, since I was very young and knew nothing about Linux or even computers
in general, I never managed to actually run any Linux programs with it. I
don't remember if I just didn't install a disk image with a distribution on it
or what, but since I didn't get it to work, I assumed that it was some sketchy
project that amounted to snake oil.

Fast forward to earlier today, after using Linux (proper) for the past 12
years, I came across another comment on HN that mentioned it and realized that
even today I didn't really understand how the promises of coLinux were
possible - how can you run Linux and Windows at the same time? Was it some
kind of userland virtualization, like Wine, or was it just a fancy virtual
machine, or what? After reading about it I realized it was in fact a very real
project that actually did work, and that it used a very novel method of
running two operating systems together without "virtualization" as people
typically know it, where there is a "host" and a "guest" separated by hardware
isolation mechanisms.

Instead, it works by literally just running the Linux kernel inside of a
Windows driver, with some bootstrapping code to allocate memory to it, some
glue for Windows/Linux context switching (with control yielding from Linux to
Windows after a time slice, and control passing from Windows to Linux via a
userland daemon in Windows calling into the coLinux driver on a timer), and a
mechanism for ferrying interrupts between the two kernels. This basically
amounts to "cooperative multitasking", which hadn't really been a thing since
segmentation and paging were introduced at least as far back as the early
90's, and as far as I'm aware hasn't been used as a technique for serious
virtualization since (for probably the obvious reasons).
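The scheme described above can be sketched with a toy model. This is purely illustrative Python (coLinux itself is C code inside a Windows driver), and all names here are made up:

```python
# Toy model of cooperative multitasking between two "kernels" that share
# one CPU: each must voluntarily yield before the other can run at all.
# Illustrative only -- not coLinux's actual implementation.

def kernel(name, slices):
    """Simulates a kernel doing `slices` units of work, yielding control
    back to the scheduler after each one (its time slice ends)."""
    for i in range(slices):
        # ... one time slice of work happens here ...
        yield f"{name}: slice {i}"

def cooperative_schedule(kernels):
    """Round-robin: a kernel runs only when the previous one yields."""
    trace = []
    while kernels:
        for k in list(kernels):
            try:
                trace.append(next(k))
            except StopIteration:
                kernels.remove(k)
    return trace

trace = cooperative_schedule([kernel("windows", 2), kernel("linux", 2)])
print(trace)
# Control alternates strictly; neither side runs until the other cedes.
```

In the real thing, the "yield" is the context switch performed by the driver, and the generator boundary stands in for saving and restoring machine state.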

It was pretty fascinating learning that this thing I'd tried so many years ago
(and hadn't managed to get working, sadly) had such a novel approach to
virtualization, so I thought it was interesting enough to share here.

------
dependenttypes
Some mandatory mentions:

[https://en.wikipedia.org/wiki/MkLinux](https://en.wikipedia.org/wiki/MkLinux)
\- Linux on top of the Mach microkernel; I think it was sponsored by Apple
(its book, especially, was great)

[https://en.wikipedia.org/wiki/L4Linux](https://en.wikipedia.org/wiki/L4Linux)

~~~
xvilka
There is now PureDarwin[1][2] also. It's still actively developed - last
commit was a few hours ago.

[1] [http://www.puredarwin.org/](http://www.puredarwin.org/)

[2]
[https://github.com/PureDarwin/PureDarwin](https://github.com/PureDarwin/PureDarwin)

------
photon12
Reading through the FAQ, looks like the kernel can run on 1 physical core and
SMT/hyperthreading isn't enabled (maybe that's changed).

Seems cute, but WSL has the full support of the Windows scheduler, which is
gonna make the practical choice of tool obvious for folks with Linux workload
requirements.

~~~
throwanem
coLinux being 32-bit only seems like a somewhat bigger problem.

~~~
bityard
Might have something to do with the fact that coLinux is almost as old as
Linux itself. I was surprised to see it on the front page of HN, it was an
iffy solution back when it was a semi-viable option. Now that hardware-
assisted VMs with very low overhead have been commonplace for a decade, it's
pretty much completely obsolete.

Edit: I guess development on it started in 2004, it was only a few years later
that hardware-assisted virtualization started becoming relatively mainstream.

------
throwanem
It's a shame the 64-bit port was never completed. WSL might never have needed
to happen, although I confide it would have happened anyway.

~~~
Koshkin
On a tangential note, I must say that I do not quite appreciate this obsession
with 64 bits, especially for "smaller" use cases, such as a PC for the
"everyday" use. There's still plenty of perfectly usable machines sold with
4GB of RAM. Also, pointers are everywhere, and 64-bit ones take twice as much
space, so the gain is not as big as one might think.

~~~
ornxka
I still use an EeePC on a fairly regular basis, so I agree with your
statement, but also I think that the desire to drop support for older
platforms is itself a concession, that it's too hard to engineer software at
an abstract enough level where ISA doesn't really matter. I understand that
maybe fewer people are using those machines, but outside of some really
platform-dependent code in the deeper parts of the kernel, how much should ISA
really matter? Of course, in reality it does, in the sense that virtually
nobody writes truly "portable" C code and instead encodes all kinds of implicit
assumptions about things, but that doesn't mean we should accept that, or that
the solution is to just bake those assumptions into the system as a whole and
stop supporting architectures that violate those assumptions. Instead,
supporting more architectures and finding more instances where implicit
assumptions about hardware behavior are violated is actually beneficial to
constructing a more abstract and easily-portable system, which incidentally
can benefit all of the rest of the architectures as well.

------
derwiki
I used this to stay sane at my first dev job at IBM--we were allowed to "use
Linux" but most of the required software wouldn't run. Colinux to the rescue!

------
peterwwillis
I loved coLinux! Was super simple to set up and way less overhead than a VM.

People talk a lot about WSL2 as if it's "already here", but WSL2 still
requires the latest update of Windows 10 from two months ago, or a slightly
older build along with an "insider preview" activated copy of Windows, _and_
you have to enable Hyper-V. Not all devices/users will be able to support all
of this and it will break compatibility with some apps.

Given that, your other option is WSL1, which is not really a replacement. If
you want to run Docker on it, you either have to install Docker for Windows
(which may also have conflicts with Hyper-V apps) or use the oldest stable
version of Docker on Linux along with some annoying hacks to be able to run a
container. All of this, and you finally run some apps, and it feels like half
your system's performance is gone, and the apps run at about 1/5th their
normal speed.

Maybe in a year most people will be on the right build of Windows for WSL2,
and maybe we'll have patched all the Hyper-V conflicting apps, and maybe
there'll be a way to use it without a long HOWTO and researching buggy
commands. Until then, a VM is way easier, more functional, faster, and more
reliable.

------
peter_d_sherman
>"How does it work

Unlike in other Linux virtualization solutions such as User Mode Linux (or the
forementioned VMware), special driver software on the host operating system is
used to execute the coLinux kernel in a privileged mode (known as ring 0 or
supervisor mode).

By constantly switching the machine's state between the host OS state and
the coLinux kernel state, coLinux is given full control of the physical
machine's MMU (i.e, paging and protection) in its own specially allocated
address space, and is able to act just like a native kernel, achieving almost
the same performance and functionality that can be expected from a regular
Linux which could have ran on the same machine standalone."

This sounds great and is a noteworthy achievement!

Favorited.

Although ( _and this is just me geeking out here, and doing some imaginary
future engineering in my mind!_ ), wouldn't it be great if someone modified
both Linux and Windows -- to use a common, neutral, small-as-possible
microkernel?

You know, take out the common stuff that both of them do, and put all of that
into a microkernel, then change them so they can happily run alongside one
another, supported by that microkernel at the center?

Note to future self (when I have the time): Look into doing this... what would
be learned about microkernels and microkernel design -- if someone went down
this path?

(Yes, I know about Mach and other microkernels... maybe the job would be
fitting Linux and other OS's to an existing microkernel... that would
certainly be easier than generating the microkernel from scratch -- but not as
educational...<g>)

------
pmontra
I used it for many years up to the end of 2008. Then I realized that I was
using only software also available on Linux and I installed Ubuntu 8.04 with
Windows in VirtualBox (IE was important back then.)

Colinux worked very well. Thanks to the authors.

------
divingdragon
There was a "distribution" called "pubuntu" (aka Portable Ubuntu) that made
use of coLinux, along with an X server (probably Xming) and even PulseAudio
running on the host Windows system. For a while I used it to play around with
Android porting (that was around Gingerbread), compiling kernels and Android
with no issues at all. Fun times.

------
gbraad
Good old times; I made the CentOS, Fedora and OpenSuSE images

Used this a lot for cross-platform development, like C#/.NET and Mono. Used
this with an X server on Windows (Xming?) and ran MonoDevelop or gvim.

------
Wowfunhappy
Why didn't Microsoft go with something like this for WSL, instead of using a
VM for WSL2?

~~~
ornxka
Cooperative multitasking is, in general, less than optimal in terms of
stability. Both kernels run in the same address space, and must voluntarily
cede control to the other, so any problem with either writing to the wrong
place or failing to yield control to the other can cause a system failure. In
addition to that, there is also the latency introduced when Linux has control
and receives a hardware interrupt which it must then ferry to Windows for
processing. In general, they can't both be running at the same time, which is
not the case with all other virtualization methods.
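That fragility shows up even in a toy model. This is illustrative Python (not coLinux internals): under cooperative scheduling, fairness depends entirely on each side yielding promptly.

```python
# Toy illustration of why cooperative scheduling is fragile: how much
# each side gets to run depends entirely on how often the other yields.
# (Illustrative only; not how coLinux is actually implemented.)

def polite(log):
    while True:
        log.append("linux")
        yield                  # yields after every unit of work

def greedy(log):
    while True:
        for _ in range(10):    # does 10 units of work...
            log.append("windows")
        yield                  # ...before yielding once

log = []
tasks = [polite(log), greedy(log)]
for _ in range(4):             # four scheduler rounds
    for t in tasks:
        next(t)                # if a task never yielded, this call would
                               # never return and both "kernels" would
                               # hang -- the failure mode described above

print(log.count("linux"), log.count("windows"))  # 4 40
```

A preemptive hypervisor avoids this by using hardware timers and isolation to take control back, which is exactly what the cooperative design trades away for simplicity.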

The real strength of this approach I think lies in the total time and manpower
required to get it working - the paper on their web site says that from the
day he sat down to start the project, it took him roughly one month until he
was able to run KDE programs, and the total modifications to the Linux kernel
were only a few thousand lines of code. I find this pretty incredible in
itself.

~~~
da-x
That's true. The effort was very concentrated.

Those days in late 2003 were crazy. Waking up at noon to work on it until
9:00pm, and then off to a night shift writing boring test script stuff until
the morning. And also weekends. It was like a full time job with extra hours
during that month. It was full head-on stamina at age 21, and I don't even
drink tea or coffee.

Interestingly, if it wasn't for the boring night shift job I had back then, I
would never have found all that time at home during the day to do all this.
And once I figured out how I wanted to write it, nothing stopped me until it
worked.

------
person_of_color
I used to use andLinux for my CS work before I grew out of gaming for good. It
was great.

------
mixmastamyk
Interesting how this compares with the newer Windows Subsystem for Linux.

~~~
Koshkin
Basically, WSL2 is a VM, whereas coLinux acts like a device driver.

~~~
da-x
These two are not mutually exclusive. To be more exact, coLinux is a VM
running under a device driver. The driver is mainly just a bridge to get into
the privileged execution path and manage the resources it needs.

