"With larger memories and independent I/O operation came the possibility of more efficient machine utilization. A portion of the machine could be dedicated to the programs which assisted in machine
operation, the operating system. As a result, overall productivity of the system increased."
The author succinctly shows how we went from calculators to hardware with an OS. A wonderful explanation.
If you want to try out MULTICS:
But they are offline ATM; I don't know why...
Or with the emulator (DPS8-M):
> A key concept of the CP/CMS design was the bifurcation of computer resource management and user support. In effect, the integrated design was split into CP and CMS. CP solved the problem of multiple use by providing separate computing environments at the machine instruction level for each user. CMS then provided single user service unencumbered by the problems of sharing, allocation, and protection.
I think we've lost this concept: We run Multics-like OSes as guests under Xen, not CMS-like.
> As an aside, the MULTICS system of M.I.T.’s Project MAC and CP/CMS were both second-generation systems drawing heavily on the CTSS experience with very different architectural results.
And Unix is fundamentally of the Multics mold.
Another thing I want to comment on:
> The design of System/360, in order to facilitate the multiplexed execution of several jobs in a scheduled job environment, provided two instruction execution states: privileged and problem. The instructions available in problem state are those commonly used by application programs. They are innocuous to other programs within the same machine and can be safely executed. However, privileged instructions affect the entire machine as well as report its status. As they are encountered in problem state, the machine blocks their execution and transfers control to a designated program. When using CP, each virtual machine program is actually executed in problem state. The effects of privileged instructions are reproduced by CP within the virtual machines.
Having every privileged instruction trap to the hypervisor when executed in problem state is the core of the Popek & Goldberg virtualization requirements: a machine is efficiently virtualizable only if its sensitive instructions are a subset of its privileged (trapping) ones, so the hypervisor can intercept and emulate them.
(It's nice to know what things are called if you want to do further research.)
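The trap-and-emulate idea in the quoted passage can be sketched as a toy, in the spirit of how CP handles privileged instructions. All the names here (ToyMachine, ToyCP, the instruction mnemonics) are made up for illustration; this is not how any real hypervisor is structured.

```python
# Toy trap-and-emulate sketch: guests run in problem state, and any
# privileged instruction faults to the control program, which emulates
# its effect on per-VM state instead of the real machine.

class PrivilegedOpFault(Exception):
    """Raised when a privileged instruction is issued in problem state."""

class ToyMachine:
    def __init__(self):
        self.problem_state = True      # guest code always runs in problem state
        self.interrupts_enabled = True # real machine state

    def execute(self, insn):
        if insn == "ADD":              # innocuous: safe to run directly
            return "executed ADD"
        if insn == "DISABLE_INTERRUPTS":  # privileged: affects whole machine
            if self.problem_state:
                raise PrivilegedOpFault(insn)  # hardware blocks it, faults
            self.interrupts_enabled = False
            return "interrupts off"
        raise ValueError(insn)

class ToyCP:
    """The 'control program': catches faults and reproduces the effect
    within the virtual machine, leaving the real machine untouched."""
    def __init__(self):
        self.vm_interrupts_enabled = True  # the VM's own virtual state

    def run_guest_insn(self, machine, insn):
        try:
            return machine.execute(insn)
        except PrivilegedOpFault:
            self.vm_interrupts_enabled = False
            return "emulated DISABLE_INTERRUPTS"

cp = ToyCP()
hw = ToyMachine()
print(cp.run_guest_insn(hw, "ADD"))                 # runs directly on "hardware"
print(cp.run_guest_insn(hw, "DISABLE_INTERRUPTS"))  # trapped, then emulated
print(hw.interrupts_enabled)                        # real state untouched: True
```

The key property is the last line: the guest believes it disabled interrupts, but only its virtual machine's state changed.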
Another interesting little note:
> For example, a piece of CP might be implemented within hardware to extend machine capability. Users of virtual machines and their operating systems would see no change in this function except possibly in cost or speed. Conversely, CP might simulate a new hardware feature to be tested. Performance might be poor, but programs using this feature could be run. This was done within IBM to prepare System/370 programs using a System/360.
Moving features from the hypervisor to the hardware is an interesting concept.
> The System/370 virtual machine can be in basic or extended control mode. Basic control does not include the virtual memory hardware. This type of machine is used by CMS and other operating systems that do not themselves utilize the address translation hardware. Extended control mode is selected when an operating system that controls virtual memory, such as CP or OS/VS, is executed in a virtual machine. This might be the case when a new version of CP is to be tested.
That's right: running a hypervisor as a guest of another hypervisor, possibly to debug a newer hypervisor in a safe (and potentially instrumented) environment which is nevertheless exactly like running on bare hardware. Is that possible with non-VM hypervisors on commodity hardware? Or is it another idea we've lost?
It's not an idea we've lost: you can run VM on VM on VM... Wikipedia has a good article on why this historically didn't work well on x86 (sensitive instructions that don't trap in user mode).
It's possible. For example, I run ESXi on top of VMware Workstation.
There can be caveats and limitations. For example, Hyper-V cannot currently be nested on AMD CPUs, although I believe Microsoft is working on this. That creates problems for running things like WSL2 inside a VM.
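On Linux/KVM, nesting is a module option you can inspect and flip. A minimal sketch for an Intel host (the `kvm_intel`/`kvm_amd` module names and the `nested` parameter are the real KVM ones; the conf filename is just a conventional choice):

```shell
# Check whether the KVM module currently allows nested guests ("Y" or "1")
cat /sys/module/kvm_intel/parameters/nested

# Enable it persistently (on AMD, substitute kvm_amd / kvm-amd)
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel
```

With that set, an L1 guest sees the virtualization extensions and can itself run KVM, much like testing a new CP under CP.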