Virtualization makes a lot of sense as the basis for an OS. Instead of organizing projects into files and directories and trying to manage those with a master OS, just run a new instance of the OS for each project and give it access to just the relevant resources - most of the stuff on a typical user's desktop machine is irrelevant to the task at hand. A new VM for a specific project lets the user save state: work on the draft of the TPS report for an hour, then shut down that virtual machine; come back a week later and everything is just as you left it - same open files, same running programs, everything just the way you want it.
In some ways it's the equivalent of tabbed browsing or application windowing - virtualization of the OS creates efficiency by letting the user handle multiple divergent contexts at the same time.
Some things will - as they are today - be stored in the cloud [edit or its local equivalent using WCF]. While bookmarks may be one of them, many of the items I bookmark when I'm actually working on a project - rather than procrastinating on HN et al. - are pretty project-specific. And of course, having HN et al. in its own virtual machine would allow all those idea bookmarks to be stored in one place relevant to their useful context, rather than relevant to a file system which I have to keep organized (assuming I bother with organization).
Most people don't keep their filing system very organized, and when it comes to bookmarks even less so. Context is often a more efficient way of recalling what you did than a directory name - particularly over longer periods of time, such as several months.
And of course, you can multitask across virtual machines - I often have two or three open at once, because I need access to software which runs on a legacy version of Windows, and I run Facebook in its own exclusive VM.
At the same time I will be working on a project on the host OS. And there is no need for interconnection between any of them.
And VMs solve a lot of legacy issues and cross-platform compatibility issues (e.g. Windows Phone apps), and Microsoft has already developed methods of integrating VMs with the host (see Windows Virtual PC and XP Mode integration).
I don't disagree. But as we move to a world with more virtualized environments and multiple devices, I think it will be more common. At the very least iCloud will make it standard in the iOS/OSX space. I can't help but believe MS will do the same for Win8.
We are talking VMs... not multiboot. You can have many VMs (applications) open at one time. Those applications have access to the shared disk and other resources. Have you seen Parallels? The user would not change workflow _at all_. They don't know their app is now in a VM.
Again, saved state. In my work, projects often go on hold for months or years - I might need to come back to a project and pick up where I left off even though, in the interim, I've replaced my primary computer, browser, and production software.
And in my writing side projects, I may leave a project for several months. So that's where I realized the value of VMs - the ones I use have survived an upgrade from XP to 7 with the same open windows and without any software reinstallation (and of course without any recreation of bookmarks). I'll add that they are also descendants of previous virtual machines used for the same purpose but different projects. It is more efficient from a workflow perspective to have six copies of Open Office, each pointing to the relevant context, than to reconfigure one copy each time the context switches.
To put it another way, developing a software project from a custom starting point, and having references persist across IDE sessions thanks to saved state, are not unique to software development. They are indicators of the features that make project execution efficient.
Exactly. We were sold "process isolation" and "virtual memory" back with the 386 chip and Windows NT. But the actual effective security was squandered for the sake of convenience and compatibility. OSes didn't really want to share the hardware in any meaningful way.
The current demand for virtualization is, to a significant degree, an attempt by admins to get control of their own hardware back from Microsoft. Putting MS back in charge of the lowest layer hypervisor seems like it could sort of defeat the purpose. Or maybe they'll play nice this time?
Computers are designed to do more than one thing, but traditionally many servers were purchased per-role. Mission critical apps would only run on one version of Windows, or apps might not play nice with others or with OS upgrades.
It turns out that one of the apps people really need to run multiple instances of is Windows itself. This is largely Microsoft's fault for bundling every app including the kitchen sink in the OS platform itself. As a condition of using their clean little high-performance kernel, you had to accept a web browser and home-user-friendly userspace.
Little surprise that people are kicking the whole package off of Ring 0 and substituting something like VMware on their five-figure server hardware.
It's that super-isolation that actually allows multiple apps/roles/data categories to finally share the same hardware.
We should be virtualizing the software, not the machines. Oh wait, we already are: JVM, CLR, Python RT, good-old-fashioned processes etc...
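The "good-old-fashioned processes" point can be illustrated with a small Python sketch (the helper name `child_view` is my own): each OS process gets its own private address space, so a second interpreter can freely redefine state without touching ours.

```python
import subprocess
import sys

counter = 0  # state living in this (the "parent") process's address space

def child_view():
    # Launch a second interpreter: an ordinary OS process with its own
    # private memory. It defines its own `counter` and prints it.
    code = "counter = 100; print(counter)"
    out = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

print(child_view())  # the child process sees its counter as 100 ...
print(counter)       # ... while the parent's counter is untouched: 0
```

That per-process memory wall is exactly the kind of isolation the hypervisor crowd reinvents one layer down.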
Virtualization is just snake oil. I don't see a real use for it TBH, and I work at a place that drinks the VMware Kool-Aid. All it does is cost money, use up resources, and excuse incompetent administrators from having to plan properly up front.
Would this type of virtualization prevent me from needing a separate copy of the OS for each VM? I'd love to have multiple projects completely separated without having 10 Ubuntu Server VMs. Maybe I'm just better off using virtualenv.
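For the virtualenv route, the standard-library `venv` module (Python 3.3+) does the per-project isolation at the interpreter level rather than the hardware level - a rough sketch, where the helper name `make_isolated_env` is my own:

```python
import subprocess
import sys
import tempfile
import venv
from pathlib import Path

def make_isolated_env():
    # Create a throwaway per-project environment with its own interpreter
    # shim, its own site-packages, and a pyvenv.cfg marking the root.
    root = Path(tempfile.mkdtemp(prefix="proj-env-"))
    venv.create(root, with_pip=False)
    py = root / ("Scripts/python.exe" if sys.platform == "win32"
                 else "bin/python")
    # Ask the environment's interpreter where it thinks it lives.
    out = subprocess.run(
        [str(py), "-c", "import sys; print(sys.prefix)"],
        capture_output=True, text=True, check=True,
    )
    return root, out.stdout.strip()

root, prefix = make_isolated_env()
print((root / "pyvenv.cfg").exists())  # the isolation marker is in place
print(prefix != sys.prefix)            # the env reports its own prefix, not ours
```

No second kernel, no second disk image - just a separate package namespace per project.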
To some extent I agree that virtualizing at the hardware level (particularly PC hardware!) is the wrong level. Ordinary Unix processes are another sort of virtualization, and one which is far more efficient.
However there are three facts that get in the way of doing the right thing:
(1) Does it run Windows?
(2) We've got Intel and AMD making PC virtualization acceptably fast nowadays, so it makes sense to just use that.
(3) Isolation between processes is not very strong [not sure about virtualenv], but isolation between machines is a great deal stronger, because people have a lot more experience protecting networked machines from each other.
1. You can virtualize fine on Windows. IIS does a wonderful job of virtualizing network services (as of 2008, that is). And Windows services are a form of secure process-level virtualization, are they not?
2. It's acceptable until you whack several kernels and OSes on a machine, all running lots of processes, at which point everything suffers. Cache locality goes AWOL, cache capacity is shared, bus traffic goes up, so does latency, and performance suddenly plummets.
3. It depends on the environment. Virtualenv just provides a consistent Python software environment which can be isolated from everything else on the machine. As for other things: look at FreeBSD's jails - that's as far as it should go. Linux has ulimit and decent security. Windows has NT, which is actually damn good and provides very good process isolation with respect to memory, CPU and IO, no less.
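The ulimit point is visible from Python via the stdlib `resource` module (POSIX only) - a sketch of a process tightening its own open-file cap; the value 64 here is an arbitrary choice of mine:

```python
import resource  # POSIX-only stdlib wrapper around getrlimit/setrlimit

# RLIMIT_NOFILE is the per-process open-file cap (the `ulimit -n` knob).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# An unprivileged process may lower its own soft limit at will; raising
# it back above the hard limit would require root.
new_soft = 64 if soft == resource.RLIM_INFINITY else min(64, soft)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

print(resource.getrlimit(resource.RLIMIT_NOFILE)[0])  # -> new_soft
```

Same idea as a jail or a VM quota, enforced by the one kernel everybody already shares.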
If we needed virtualization, we wouldn't have processes.