For those who are interested, the IBM mainframe family is a fascinating architecture, with almost nothing in common with modern mid-range or microcomputing.
MVS is the ancestor of a modern operating system, z/OS, which is also capable of running on Hercules, albeit with much heavier hardware requirements (I wouldn't want to run it on a Raspberry Pi). IBM's licensing prohibits the installation of z/OS on anything other than IBM mainframe hardware, which in itself led to a bizarre situation where IBM, ostensibly a defender of open source projects against patent abuse, decided to start asserting patents against an open source project.
IBM had their own P/390 systems that basically put a mainframe on a PCI card to go in a PC or RS/6000 host.
There were also the FLEX-ES systems that were Linux-based mainframe emulators running on Intel hardware. IBM sanctioned and allowed this for long enough for a number of companies (including mine) to buy them, then pulled the plug on the program and forced everyone to buy real mainframe hardware instead.
Long ago I found an IBM 360 emulator written in 2 pages of APL and published in some technical magazine. Anyone recall this and have a copy? (I've found websites with copies of similar articles, but they lacked the actual APL code.)
This is a digression, but do you have experience with z/OS?
When I took an operating systems class a few years back, a comment the professor made about z/OS stuck in my mind. "z/OS is one of the few operating systems that can run a VM of itself with almost no overhead." He went on to mention that he had seen nested VMs on z/OS about twelve steps deep that ran without noticeable delay.
Since he said this, I've always wanted to ask someone with some experience with the OS whether or not this is a valid observation. Additionally, does z/OS act similarly to a BSD jail, where the kernel itself is replicated to improve security? I haven't been able to find much free/open documentation on the operating system.
I have some experience with z/OS, but mainly from a security perspective rather than a day to day admin role.
I wouldn't say that there's no overhead, but the structure of z series mainframes is completely different to any midrange architecture. Everything is designed to be virtualised, parallelised and incredibly redundant. Hard disk fails? No problem, carry on as normal. Motherboard failure? No problem, carry on as normal.
z/OS doesn't have a kernel; it has what's called the nucleus. Effectively, the physical system is divided into LPARs, which then provide, in effect, highly scalable virtualised systems. There's then further isolation through about 3 different methods IIRC (for example, each subsystem, analogous to a long-running process, has its own addressable memory) to the point where everything can be completely isolated. So it's very different to a jail, and slightly like something like Xen (but only slightly).
The security model on z/OS is completely different to Unix/Windows because the entire architecture is completely different (for example, z/OS uses a block-based disk operating system as opposed to a byte stream filesystem, meaning there's no such thing as files in the Unix sense on z/OS - outside of USS which is beyond the scope of this comment).
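To make the block-based vs byte-stream distinction concrete, here is a toy sketch (the class and names are invented purely for illustration, not any real z/OS API): on z/OS a traditional dataset is addressed in whole fixed-length records, not arbitrary byte offsets as with Unix `read()`/`seek()`.

```python
# Illustrative sketch only: contrasting record-oriented datasets
# (as on z/OS) with Unix-style byte streams. Names are invented.

class RecordDataset:
    """A fixed-record-length dataset: all I/O happens in whole records."""
    def __init__(self, lrecl):
        self.lrecl = lrecl      # logical record length, e.g. 80 for card images
        self.records = []

    def write_record(self, data):
        # Records are padded/truncated to the fixed length; there is
        # no notion of a byte offset within the dataset.
        self.records.append(data.ljust(self.lrecl)[:self.lrecl])

    def read_record(self, n):
        # You address "record n", never "byte n".
        return self.records[n]

ds = RecordDataset(lrecl=80)
ds.write_record("HELLO")
print(len(ds.read_record(0)))   # 80: every record is exactly LRECL bytes
```

The point of the sketch: utilities and access methods deal in records and blocks, which is why "a file" in the Unix sense simply doesn't exist for classic datasets.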
Under z series LPAR virtualisation, each LPAR runs its own OS with its own allocated resources. It's about as separate as you can get. You'd never use virtualisation that way on midrange, where you cheat to squeeze more VMs into less space, but on z series you need to absolutely guarantee access to a resource when needed, so you partition the systems up accordingly.
Please note, if anyone knows better than this please correct me, as I say I'm more on the security side than operator so I might be wrong in a couple of places.
The author's comparison doesn't really work. The power of those IBM mainframes was in the entire architecture (SNA - System Network Architecture) not just the processing unit. Terminals such as the 3270 talked to cluster controllers that talked to batch controllers that talked to other controllers that fed the processor etc. (simplified).
Data making its way through the device chain was nothing for a 3 MIPS processor with 64 MB of internal memory and 3 MB IO to handle.
As far as IPL's were concerned, it was more common for the controllers to be IPL'd than it was for the processor. Because of heat and cooling issues, nobody ever wanted to shutdown the processing system or even let those boards deviate much from their operational temperature. Bad things always happened when they powered down and cooled off. Always.
The Raspberry Pi is better compared to a PDP-11, a VAX, or a Data General (Soul of a New Machine).
As chips/boards/etc. cool, they move and contract, and (massive generalisation) this causes mechanical wear. I'm sure someone more versed in such stuff can give a detailed explanation of what can happen. (There are lots more side effects to power cycling gear as well.)
Absolutely correct on all counts. Incidentally, for those with a Raspberry Pi who want to run old hardware emulators (and indeed those of us without who still want to run them), you can't beat SIMH, which will happily emulate a PDP-11, VAX or Data General computer amongst others. You can even run old Unix on it!
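For anyone who wants to try it, a minimal SIMH configuration sketch (the disk image filename is a placeholder; exact CPU and device options are in the SIMH documentation):

```ini
; pdp11.ini -- boot an old Unix from an RK05 disk image
set cpu 11/45          ; pick a CPU model the OS supports
attach rk0 unixv6.dsk  ; attach a disk image (placeholder name) to the first RK05 drive
boot rk0               ; boot from it
```

Run it with `pdp11 pdp11.ini` and the simulator drops you at the emulated machine's boot prompt.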
Couple of years ago I did this on my shiny new N900, and for a brief period owned one of the world's smallest (if not the smallest) MVS systems. Alas, time marches on and that N900 seems rather old-school now.
Guys, if you ever encounter a woman with a t-shirt that says "Talk nerdy to me", I believe she will instantly jump on you and have wild sex with you if you tell her exactly what seclorum just said. Which, by the way, I have absolutely no idea what it means. :|
You should avoid the whole gender/sex angle and just make some comment about extreme nerdiness.
"I'm a total nerd and have been all my life. I've been on HN for over 2 years and I can honestly say there's nowhere else on the entire internet where people make comments that even come close to how nerdy yours does!"
That being said it still isn't that funny and it doesn't really add anything to the conversation, but likely wouldn't have offended anyone either.
EDIT: forgot to mention, kudos for apologizing and not whining about downvotes like most people seem to now.
It's a personal challenge of mine: not to take offense, not to go on the offensive, and not to take things personally. A year ago, I would have reacted in a completely different manner.
But you're right, "That is the nerdiest thing I have ever heard in my life" would probably have been better. And yet, still not constructive in any way. At least this little altercation is constructive, at least for my attitude!
GP2X = open-source handheld gaming platform from a few years back.
EDSAC = one of the first programmable computers ever made, upon which one of the very first video games was programmed: OXO, a game of noughts and crosses. (This fact was referred to culturally in the movie "WarGames".)
Having an EDSAC in my pocket was definitely a Nerd moment, and you are forgiven. It is extremely Nerdy.
I agree it doesn't belong here, but I don't get how it's cyberbullying or anything close to it. Unless you think calling someone 'nerdy' is a grave insult around here, which I also don't really think is the case.
Old school mainframes have poor IO by today's standards, less than a mid-range wireless access point. And while the software and hardware were ridiculously stable by today's standards, if you look at mean time between failures in terms of operations performed, not just time, they once again fall behind.
PS: Still, to give an idea of why mainframes were considered such IO beasts: a high-end PC's IO is about ~1000x as fast (HDMI is 10.2 Gbit/s, plus USB, Gigabit Ethernet, etc.), but it's got ~1,000,000x the processing power.
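A quick back-of-envelope check of that ratio, using only the rough figures quoted above (order-of-magnitude numbers, nothing more):

```python
# Comparing the "3 MB IO" mainframe figure against a modern PC's
# aggregate interface bandwidth, all converted to bits per second.
mainframe_io = 3e6 * 8               # 3 MB/s -> 24 Mbit/s
pc_io = 10.2e9 + 480e6 + 1e9         # HDMI + USB 2.0 + Gigabit Ethernet
ratio = pc_io / mainframe_io
print(round(ratio))                  # 487: same order of magnitude as ~1000x
```

So with these particular interfaces the gap is a few hundred times, which supports the spirit of the ~1000x claim even if the exact multiplier depends on what you count.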
Of course, if you don't pretend that mainframes stopped advancing, you see a different picture. Modern zSeries (or whatever new marketing they've come up with) have multiple 10Gbit interfaces to a network, and can keep them all fed. Not to mention special offload processors for AES and X509 certs, and of course hardware partitioning and virtualization built in. Mainframes are still pretty badass - a lot of stuff that is exciting and new in the server space is stuff that IBM was doing for decades in the Mainframe space.
There is just something very cool about handling something on the order of 10^5 fully ACID transactions per second, while still allowing real time database querying and on-the-fly hardware failure tolerance.
I'm surprised someone hasn't mentioned attempting to supplant IBM's older mainframes with smaller units like Pi, Arduino or Bone. However, maybe it's generally known that ... you can't. Because IBM has the same hardware/software tying licensing arrangement that people decry Apple for having.
If you had been around that generation of systems and toggled in a 30 byte program in hex or octal that loaded a paper tape or card deck that then loaded the actual OS, bootstrap seemed a reasonable term.
Bootstrap actually makes some sense when you're toggling in a loader on a machine that has a bunch of switches, it certainly feels like you're trying to lift yourself up by your laces rather than that you're loading an initial program.
Oh, and one bit wrong and you can start all over again... the scary thing is that once you've done it often enough you start to remember the sequences the same way you remember how to play a piece on the piano, in your muscle memory.
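The front-panel routine can be sketched as a toy model: set an address with the switches, then deposit words one at a time (the octal words below are placeholders for illustration, not a real PDP-11 boot loader):

```python
# Toy model of "toggling in" a bootstrap via front-panel switches.
memory = {}
address = 0

def load_address(switches):
    # LOAD ADRS: latch the switch register into the address register.
    global address
    address = switches

def deposit(word):
    # DEP: store the switch register at the current address, then
    # auto-increment (PDP-11 words are 2 bytes).
    global address
    memory[address] = word
    address += 2

load_address(0o1000)
for word in [0o012700, 0o177406, 0o012710, 0o177400]:  # placeholder words
    deposit(word)

# One wrong switch in any word and the whole sequence must be redone.
print(oct(memory[0o1000]))
```

Doing this a few dozen times is exactly how the sequence ends up in muscle memory.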
It's disappointing :( I work in education and would like to try this thing out for a variety of projects, but the delays in getting them to market mean I probably won't get one. There are alternatives, not least (for the very interesting original article) just setting up a Linux box and using Hercules, etc, on that. It's a real shame that supply problems risk putting a damper on the huge public interest and enthusiasm for the Raspberry Pi.
It's disappointing in the short term, but look at the big picture. All this extra interest and these purchases from the private sector, if handled correctly, will let them buy in bigger bulk, twist more arms in manufacturing, and find and fix more bugs. In the long run, getting these boards out to a wider audience will give you a more stable, feature-rich and cheaper product.
I hope that will be the case. My ultimate motivation is to get students interested in programming and all the other things (control tech, etc) a cheap expendable PC can facilitate. I could easily get 20 of these if they were available but getting one is tricky enough at the moment.
I have only recently started looking at the Raspberry Pi, one thing that struck me was the use of SD memory for storage... I have used USB sticks to boot Linux before and they only lasted a few months of continuous use. Perhaps I'm missing something? Other than that I'm impressed, the device has lots of potential.
Not swapping, getting rid of Linux's ridiculously noisy (for a personal laptop) and duplicative logs, and perhaps mounting with "-o noatime" will help a lot as well.
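For example, a sketch of the relevant /etc/fstab line (the device name here is just an example; check yours with `lsblk` or `mount`):

```
# Mount the SD card root with noatime so every read doesn't cost a write
/dev/mmcblk0p2  /  ext4  defaults,noatime  0  1
```

Without noatime, every file read updates the access timestamp, which turns read-heavy workloads into a steady stream of small flash writes.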
Additionally, the SSD in your 901 isn't just a hunk of flash - it contains additional logic that will perform wear leveling on the physical flash modules, resulting in more write cycles before you see a failure. (Higher-capacity thumb drives also have wear levelers to fight against the OS's propensity to always allocate data sectors in the same order, but they're a different variety than those found in SSDs, from what I've read.)
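The core idea of wear leveling can be sketched in a few lines (a deliberately minimal, invented model; real SSD controllers are far more sophisticated, with block remapping, garbage collection, and so on):

```python
# Minimal sketch of wear leveling: always write to the least-worn
# physical block, so no single block absorbs all the erase cycles.
class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks

    def pick_block(self):
        # Choose the physical block with the fewest erases so far.
        return min(range(len(self.erase_counts)),
                   key=self.erase_counts.__getitem__)

    def write(self):
        block = self.pick_block()
        self.erase_counts[block] += 1
        return block

wl = WearLeveler(4)
for _ in range(8):
    wl.write()
print(wl.erase_counts)  # [2, 2, 2, 2]: wear is spread evenly
```

Without this, a filesystem that keeps rewriting the same logical sectors (journals, FAT tables) would burn out the same physical blocks long before the rest of the device wears.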
I've been booting with old USB sticks for years, not to mention cheap microSD cards. I'm still waiting for one to stop working. What sticks are you using? Are you booting with the stick mounted rw? Are you mounting your userland in RAM?
My old monitor doesn't have HDMI input. Reading text through the composite out on an old TV is painful. They should have used a GPU with DVI-I output, so that it would have digital and analog (VGA) output pins, in my opinion. Nevertheless I'm waiting for mine to arrive :)
I have one of those, but the Rpi HDMI doesn't have analog pins, afaik, which precludes its use with older LCD monitors with only a VGA (analog) input, which was what I meant initially, sorry if I didn't make myself clear.
It's always amazing to see how people computed in the time of Dickens and Edgar Allan Poe.
just kidding...but I can't help but wonder if even the people who are kids now will realize this stuff isn't from the nineteenth century... I mean, just look at the device you're reading this on :) We've come a long, long way.