As a matter of trivia, I learned recently that the System/360, among other things, is probably where software licensing originated. [1] For a variety of reasons, not least of which were antitrust considerations, IBM wanted to unbundle software and hardware and, at the time, it was unclear whether copyright, much less patents, would provide any IP protection for unbundled software.
For anyone unaware, the System/360 was also the subject of the often quoted Mythical Man Month by Fred Brooks.
I believe that there were a few licensed programs before IBM unbundled in 1969. There was a program that would produce flowcharts from source code, called Autoflow, released in 1965, for example. As for Bill Gates, he was 10 years old when Autoflow was released; I doubt he was thinking of selling software.
Not sure it is. It was Bill Gates, not IBM, who really believed in software licensing. IBM went with hardware bundled with a software license.
They were no doubt more structured even then, and very careful about this. They had licensed products, but their focus lay elsewhere.
Still, because their focus was on hardware and its usage, their first antitrust case was about tie-in sales: hardware tied to a monopoly on punch-card purchases (I am not joking, it is my professor's specialty).
Software-wise, they allowed development. Even to this day, MVS 3.8 (that came many years later) can still be run without a license, and so can inventions like VM.
They worried about Amdahl, who basically built S/360-compatible (plug-compatible) hardware, and even now Fuji…
No, they no doubt had software licensing, but it was not their focus for a long time.
That is why, when they found the BIOS could not protect them, they did MCA and totally missed that it's the software, stupid.
My partner writes System/360 assembly language for a living, an enormously complex product that's a bit like a suite of kernel modules for Linux. It's called MVS or z/OS these days but there's still a robust market for big iron.
I wrote assembly language on IBM/360 in ACP (Airline Control Program) and TPF (Transaction Processing Facility) for KLM. It was fun. A totally different world.
My manager/mentor had worked with punch cards (phased out just before I came in) and was known to only get two errors on average in the nightly compile batches.
(People could only compile once per day, and one program could comprise hundreds of cards.)
An interesting aspect of assembly programming on punch cards is that since many lines of code in different programs look the same, you could “write” new code by just reshuffling an old deck!
Channelized I/O, intent-based security policies (e.g. from RACF), multi-layered error handling and recovery. It's a rich garden. However, it's not as simple as "implementing a feature," since the interesting things are all systems of activity with interlocking assumptions and expectations with all the other systems. You can't "just" pluck pieces out of context any more than you can grab a cool phrase from Mandarin or Urdu, or admire a lobster's claw and decide to graft it onto your own arm.
But for anyone interested in evolving systems/OSs, definitely study S/3x0 and Z successors, or the proprietary mainframes and minicomputers in general. In many cases we are now stumbling into reinventing techniques that mainframes or minicomputer teams built many years earlier. Best case in point probably virtual machines (VMs), in which VMware et al started in ~2003 rebuilding a technology capability that had been developed in Z systems in 1967/68.
It's essentially having a small CPU as a DMA controller. For a prototypical DMA controller, you just give it a list of source and destination addresses plus a transfer size, and it runs off and performs those transfers and tells you when it's done. Maybe they can even be chained together (one DMA channel writes to the control registers of another, then starts that channel).
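The descriptor-chain model described above can be sketched as a toy simulation (the field names here are invented for illustration, not any real controller's register layout):

```python
# Toy model of a prototypical DMA controller: it walks a chain of
# descriptors, copying `size` bytes from `src` to `dst` within a flat
# memory array, then reports completion. Field names are made up.
def run_dma(memory, descriptors):
    for d in descriptors:
        src, dst, size = d["src"], d["dst"], d["size"]
        memory[dst:dst + size] = memory[src:src + size]
    return "done"  # in hardware this would be a completion interrupt

mem = bytearray(b"hello world.....")
run_dma(mem, [{"src": 0, "dst": 12, "size": 4}])  # copies "hell" to offset 12
```

Chaining, in this picture, is just one descriptor list whose last entry programs and kicks off another channel.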
For "channelized" IO, you had CPU instructions which would effectively hand off small programs to another processor that was extremely limited in capability compared to your CPU. The channel processor would handle the direct interrupts/events from physical IO devices, do basic processing, then kick off (a) DMA transfer(s) to and from main memory, or as a data stream straight to the CPU.
For some mainframe architectures, you could implement things like text-editors and filesystem drivers that would run on the channel processors so that basic tasks didn't take up core CPU time. The main CPU could allocate memory for a process to be placed in, then send off a channel program to the tape drive and go and do something else for awhile while the tape drive found and loaded the executable completely independent of the CPU.
Probably a more realistic example would be to take something like a database and have a separate CPU processing the on-disk format, or a separate CPU to process your network protocol's wire format and only having the actual data it contains seen by the main CPU.
These days processing power is so ridiculously cheap compared to those days that flexibility rules the day. Might as well have a bunch of dumb IO devices because even a basic CPU core can move and process gigabytes of data per second.
The channel I/O format has conditional jumps in the command stream and lets you, as the application programmer, offload large parts of the filesystem or database directly onto the hard drive controller without main CPU intervention. It's closer to GPU command submissions than what you see out of NVMe/SCSI/ATA. So, for one example, telling the controller to "walk an index tree in this format and retrieve the block that matches this key" is something you can code yourself in the channel I/O command stream as its own sort of ISA.
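The "command stream with conditional jumps" idea can be conveyed with a toy interpreter. To be clear, this is not the real S/360 CCW encoding; the opcodes and block layout are invented, just to show how a controller could search storage for a key without main-CPU intervention:

```python
# Toy "channel program" interpreter: READ fetches a block, JUMP_IF_NO_MATCH
# is a conditional branch in the command stream, DONE hands the matching
# block back to the CPU. NOT the real S/360 CCW format, just the idea.
def run_channel_program(disk, program, key):
    pc, block = 0, None
    while True:
        op, arg = program[pc]
        if op == "READ":                 # transfer a block from "disk"
            block = disk[arg]
        elif op == "JUMP_IF_NO_MATCH":   # branch if this block lacks the key
            if block["key"] != key:
                pc = arg
                continue
        elif op == "DONE":               # matched: return block to the CPU
            return block
        elif op == "FAIL":               # exhausted the search
            return None
        pc += 1

# A program that checks block 0, then block 1, then gives up:
disk = [{"key": 1, "data": "a"}, {"key": 7, "data": "b"}]
prog = [("READ", 0), ("JUMP_IF_NO_MATCH", 3), ("DONE", None),
        ("READ", 1), ("JUMP_IF_NO_MATCH", 6), ("DONE", None),
        ("FAIL", None)]
run_channel_program(disk, prog, 7)  # finds the block holding key 7
```

A real channel program walking an index tree would follow the same shape: read a node, compare, and branch to the next read based on the result.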
From the "roads not taken" perspective, Intel actually made something like a channel IO engine for the 8086/8088 family called the 8089 (https://en.wikipedia.org/wiki/Intel_8089) which supported the 20-bit address space of those systems and proper 8 and 16-bit operations. You could write little programs for it which would communicate with IO devices and respond to interrupts on either a shared or private bus, do some processing, and then move the results to main memory.
IBM, in a cost-saving move, grafted on the DMA engine from the 8080 family (the 8237) and made it mostly work by adding a page register to cover the remaining address bits; on the 286 they set one up in an odd way to get it to do IO <-> memory transfers for 16-bit devices (though it remained unable to do memory-to-memory transfers).
See e.g. https://netfpga.org for pretty much exactly this for networking. I believe it's more of a research platform than something used in production. It's not exactly new, either.
And even a 6502 faster than the main CPU! Yeah, the truly programmable I/O offload you see in the C64 is very similar conceptually, and would be even closer if it could DMA into main RAM to communicate with the main CPU rather than bit-banging serial.
The new "Z on demand" features they offer over the internet remind me a lot of how easy it was to provision virtual S/360 (S/390 at the time) hardware and software with VM/CMS when I worked there in the '90s.
I've been playing with VM/370 and MVS 3.8j on Hercules for some time, but I haven't figured that out yet. The concepts are so alien to a Unix person that I may have gone over the instructions a couple of times without recognizing them as such.
Agree on ISPF being a great editor. It was trivial for new users to pick up, but at the same time had a rich feature set.
My favorite part was its support for folds, or what it called "excluded" lines. You could issue an initial command that excluded lines you wanted to ignore, and then issue subsequent commands to operate on lines not excluded, or "NX". Very nice. I occasionally wish I had ISPF while I'm in the middle of a Vim session.
The most fun thing in VM/CMS is REXX. It's an awesome language invented by Mike Cowlishaw with the express goal of being easy to program above all other considerations. Its PARSE instruction is one of the most powerful things I've ever seen in any language.
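For anyone who hasn't seen it: PARSE matches a string against a template of variable names, where each variable takes one blank-delimited word and the last variable takes everything left over. A rough Python imitation of just that common case (the real instruction also handles literal patterns, column positions, and much more):

```python
# Rough imitation of REXX's blank-delimited PARSE VAR:
# each named variable gets one word; the last one gets
# the remainder of the line (trimmed here for simplicity;
# real REXX preserves the raw remainder).
def parse_var(line, *names):
    words = line.split(None, len(names) - 1)
    words += [""] * (len(names) - len(words))  # pad missing variables
    return dict(zip(names, words))

# Roughly: PARSE VAR line verb target rest
parse_var("COPY A.TXT B TO C", "verb", "target", "rest")
# -> verb="COPY", target="A.TXT", rest="B TO C"
```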
REXX was so great it became the standard scripting language for all IBM system products and was incorporated into the OS command line and the text editor, XEDIT. This meant you could have one language that ran commands and programs in any other language, could do anything at the command line (like create machines, etc), and could edit and save text files. Think about that for a second. It was WAAAAY ahead of its time.
Sadly, REXX predated the internet and never had a browser-savvy release. A miscarriage of REXX and OOPS called Object Rexx was also unsuccessful.
Presumably, most anything worthwhile would still be in Z/OS from whence it would have pretty naturally made its way into IBM's Unix and eventually Linux.
OS/360 was quite different from most modern operating systems--most notably it was batch and designed for very small memory sizes--but different isn't really better in this case.
I've looked at it only briefly, but... From my (mostly Unix-ish) perspective, z/OS looks a bit as if MS-DOS had been continuously developed by a few hundred people since the eighties. There are some nice things, but the overall system architecture is somewhat... nonexistent.
When I first read "It combines microelectronic technology, which makes possible operating speeds measured in billionths of a second", I was thinking "Wow, gigahertz clock speeds back then". But later on you realise they are talking about 200 of those billionths, so only 5MHz clock rates.
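The arithmetic behind that deflation, spelled out:

```python
# "200 billionths of a second" per cycle = 200 ns cycle time.
cycle_time_s = 200e-9          # seconds per cycle
clock_hz = 1 / cycle_time_s    # cycles per second, ~5 MHz
print(clock_hz / 1e6)          # megahertz
```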
"Only"—in 1964! It was another 17 years before the IBM PC was released with a 4.77MHz 8088. Imagine a gap like that today: The collapse of the lag between "fastest CPU available at any price" and "the CPU in your phone" is basically the story of the computing industry to date.
My dad worked for Univac. I remember that he had an instruction card for the 418 (Univac's lower-end system; the high end at the time was the 1108). I remember that addition took 4 microseconds, and multiplication took 6. This was around 1970, maybe a bit after.
So don't sneer at 5MHz in 1964. It's really fast for the day.
Pretty much exactly what we pay for compute time today, if you don't count inflation. Accounting for inflation the System/360 was about 7x more. It was a good time to be IBM.
It was a bet-the-company move and would have bankrupted IBM had it failed. They literally discontinued all other product lines in one swoop and consolidated the entire company's architecture around a single, compatible processor spec. That is standard today and was unheard of at the time.
> They literally discontinued all other product lines
Not quite; as I mention elsewhere, they continued incompatible low-end systems, most prominently the existing business 1401 series, and used 360 technology to make the affordable 1965 1130. I would assume the same for the 1969 System/3, which eventually evolved into the AS/400, or IBM i as they call it now, which internally is still by far the most sophisticated mass-manufactured computer system architecture.
"The company spent US $5 billion (about $40 billion today) to develop the System/360, which at the time was more than IBM made in a year, and it would eventually hire more than 70,000 new workers. Every IBMer believed that failure meant the death of IBM."
In 360 terms, virtual machines are for dividing the machine into multiple smaller ones, each able to run its own OS. At that time, there were multiple OSs for different things.
I know it was a long time ago, but the announcement of the iPhone wasn't terribly significant at the time. We already had smartphones with apps. The iPhone didn't have apps. It had a large screen using a touchscreen technology that worked better than the competitors, but it had no keyboard -- something seen as a disadvantage by many.
The most important thing the iPhone brought imho was low, fixed-price data plans that were affordable by regular folks. This was a true innovation by Jobs and perhaps his single most important achievement, although obviously someone else would eventually have pulled it off.
The 360 was not a single device at a moment in time, but a platform and an architecture. The 360 is not judged by the first machine you could buy, but by how the machine did in the market over the long term.
The iPhone line of products came to blow the doors off the “smart phone” market, like the 360 did with mainframes. You can’t seriously claim that the touch screen was “seen as a disadvantage by many” when the new line sold orders of magnitude more units than any smart phone before it, and became the dominant form factor for all mobile phones within a few years. Like the 360 did with mainframes.
“obviously someone else would eventually have pulled it off” can be said about any innovation, including the 360. What IBM did, like what Apple did decades later, appears obvious in hindsight. But like IBM’s 360 and mainframe computers, Apple is the one that did it with smart phones.
Low price data plans, an app ecosystem several orders of magnitude bigger than anything before it, portable computing in the pocket of billions of people. Nobody achieved that before the iPhone.
Like the impact of the 360, the creation of iPhone eventually enabled a vast number of new businesses and products (for good or ill) including Uber, Snap, and countless others. Pre-existing businesses like Facebook grew significantly after the iPhone was released.
Sure, lots of earlier devices had some of the features that iPhone released with, just like computers existed long before the System/360. But Apple packaged it up in a way that worked, proceeded to own the market for a decade (especially in terms of revenue), and became one of the largest companies in the world on the back of its success.
The app ecosystem on the iPhone (and then Android) certainly became much more significant over time. But the 2007 iPhone announcement didn't upend things over night. I had a Treo at the time and didn't get around to switching over to an iPhone until the 3GS in 2010 or thereabouts--which was probably the model when the iPhone really started taking off.
(The iPod was similar. A lot of people viewed the initial models as just another and not very compelling MP3 player.)
And you're right that the iPhone also hit the market at a time when data plans were becoming more affordable for people not expensing them. My Treo's plan was fairly reasonable as I recall, but then you couldn't really use a huge amount of data anyway.
I totally agree. The iPhone was cool, but not really as useful as a Blackberry at the time. It took two hands to use the iPhone (kinda still does, unless you use voice, which is noisy). The Blackberry thumb-wheel was ridiculously effective at getting around on the phone.
In fact, iPhones weren't even used by many large businesses until later.
The cool factor won, though. When it came out, it reminded me of the first LED watches: you had to push a button on the watch (so now it took two hands to tell time). The red LEDs glowed for a few seconds, then vanished. I first saw one in a James Bond movie.
Android phones toyed with the navigation ball for a little while, but it took a lot of rolls of the thumb to get a small movement on the screen -- it should have felt like a trackball, but ended up being useless compared to flicking the touch screen. The last Android phone I owned that had one was the HTC Magic.
I'm hoping physical controls will make a comeback, but they're often incompatible with water resistance. Maybe some day a touchscreen will emerge with good haptic feedback? I can dream.
The announcement of the iPhone was big because it really defined what a smartphone is. Also, with the phones we have now, it's easy to think it was a small announcement, which is a testament to its influence.
The competition focused on prioritizing phone features and then adding apps that worked on a small screen. The purpose of a screen on a phone wasn’t emphasized at all.
Also, pairing a camera with a large screen eventually made the iPhone the device that takes more pictures than any other.
But the iPhone did have its price slashed by around $200. Also, it took until the iPhone 5 for a great many people to adopt it.
So the announcement and the iPhone itself made it influential and then a few revisions later the iPhone really became dominant.
What a smartphone should do was pretty solidified by then. I used a Sony Ericsson one back then, and we had Palm and Windows CE devices as well. What the iPhone showed was a UX that didn't try to mimic a computer - it was its own thing, with animated transitions and multi-touch. This made all other smartphones look clunky in comparison. The second revolution Apple introduced (later) was the App Store. It extended the zero-pain experience of iTunes to applications - no more downloading .prc files and opening them with a custom tool. With the iPhone it was easy.
Also, the integration with OSX apps didn't hurt either - it made the Mac and the iPhone a mostly seamless continuum.
In retrospect, it defined what smartphones were in terms of market. Before 2007, smartphones were business devices. They weren't marketed to the general consumer, and if you walked into a phone store, they were mostly ignored unless you were a "business person". The high end of consumer phones were "multimedia phones", like the Razr, which was the best seller at that time.
This is partially a result of smartphone design at the time. They were, in design (and sometimes literally), PDAs with phone functionality added. It was assumed that this paradigm was the correct one. Consumers were asking for 'iPod-like' phones at the time, and manufacturers responded by making mp3-player hybrid phones like this: https://en.wikipedia.org/wiki/LG_Chocolate_(VX8500). We now know that this approach was too literal -- too feature-based. What consumers actually wanted wasn't an mp3 player glued to a phone; it was a smartphone designed for non-business use cases first -- one that wasn't a PDA.
At the time, there was actually a lot of commentary that the iPhone was destined to fail: it ignored what was then seen as the most important part of building a smartphone, being able to integrate into business communication systems, e.g. Blackberry Enterprise Server or Exchange ActiveSync.
Although "PC" is hard to localize to a single announcement. Yes, the IBM PC is a convenient hook. But, presumably, something like the IBM PC would have happened even absent IBM skunkworks in Boca given what was already on the market.
I suppose you could say the same thing about the iPhone but it did clearly move things off in a different direction relative to predecessors.
dfox lists some important things; the two most important I'd note are how, through extensive use of microcode, there was a huge range of systems that could run the same software, and the lower-end ones could also emulate older systems.
Going from memory, and not counting the very limited 360/20: from the internally 8-bit 360/30 all the way to the 360/91 supercomputer, you could run basically the same software. That eventually solved a huge problem for IBM and the industry in general, because not much effort had been put into compatibility in the early days, and IBM had two major lines of big computers: decimal ones for business, and binary ones officially for science and engineering but by then often used by businesses.
Although stuff at the low end, somewhat like the 360/20, remained incompatible: the business 1401 series remained, and see also the scientific 1130, which a lot of people including myself got their first start at programming on. It was a very clever design that used 360 technology to make an affordable system. They'd put effort into that niche for quite some time, with the 650 and the CADET -- "Can't Add, Doesn't Even Try", a backronym from how they based it on a big block of core memory (which they were very good at manufacturing by then, vs. the drum used in the 650) and used that with lookup tables etc. to avoid including an ALU.
And leaders of the effort, going all the way up to Tom Watson Jr., made a catastrophic mistake in deciding that virtual memory, which they called dynamic address translation (DAT), was a bad thing. That position hardened so much that the first models of the System/370, which used "monolithic" circuits (ICs), didn't have it (and they didn't do right by their customers who bought those first models).
The affordability of the 360 series, and their ability to make so many (though much fewer than initial demand), rested on their avoiding the then-too-new ICs and using automation to manufacture small logic boards on a ceramic substrate: https://en.wikipedia.org/wiki/Solid_Logic_Technology This was an era in which IBM still massively benefited from its electromechanical wizardry gained in the punched-card data processing days.
This lost them the high-end computer science community, which I believe seriously harmed them in the long term and ultimately made what they now call IBM Z a niche product.
The CADET was in fact the IBM 1620, contemporaneous with the IBM 1401 but aimed more at the 'scientific' market. The '1620 uses these clever in-core tables and iteration to achieve arbitrary-precision addition/subtraction, with different tables for multiplication/division; I implemented the algo in C and it worked great, certainly fast enough for the occasional arb-precision computation.
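The table-lookup scheme can be sketched like this. This is a toy reconstruction in Python, not the actual 1620 table layout: an addition table indexed by two digits yields (carry, sum digit), and multi-digit decimal addition becomes pure lookups, no digit-adding ALU required.

```python
# Toy CADET-style arithmetic: precompute a 10x10 decimal addition table
# (the 1620 held its tables in core) giving (carry, sum_digit), then do
# arbitrary-precision decimal addition purely by table lookup.
ADD_TABLE = {(a, b): divmod(a + b, 10) for a in range(10) for b in range(10)}

def add_decimal(x, y):
    # Operands are strings of decimal digits, most significant first.
    digits_x = [int(c) for c in reversed(x)]
    digits_y = [int(c) for c in reversed(y)]
    result, carry = [], 0
    for i in range(max(len(digits_x), len(digits_y))):
        a = digits_x[i] if i < len(digits_x) else 0
        b = digits_y[i] if i < len(digits_y) else 0
        c1, s = ADD_TABLE[(a, b)]       # digit + digit
        c2, s = ADD_TABLE[(s, carry)]   # fold in the incoming carry
        carry = c1 + c2                 # at most one of these is 1
        result.append(str(s))
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

add_decimal("999", "1")  # -> "1000"
```

Multiplication and division on the real machine worked the same way, just off different in-core tables plus iteration.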
The IBM 1401 could operate on arbitrary strings of digits in hardware; since that was my first machine, I was disappointed when I used these silly 'binary' machines. However, a typical instruction + operand fetch/store took about 100 microseconds on the '1401; yes, that is 10 kilo-instructions per second! There were many programming tricks to minimize instruction fetches, e.g. 'chaining' instructions (using the left-over contents of the regs in the next instr) and using parts of an instr as data, the most common being:
"A *-6,FOO" -- "*-6" is a relative address that points back at the letter 'A' of the instruction itself, which was BCD +1, so the instruction adds 1 to FOO. Equivalent to ++FOO;
The pdp-11/45 I used next took about 2.5 usec per instruction with 2 memory operands -- 400 kilo instructions per second! This "40x" speedup of the 11/45 over the IBM 1401 more than compensated for the hardware arbitrary-precision addition/subtraction! (oh, and on the '1401, the HW 'Multiply/Divide Feature' was another big group of electronic cards, if you bought that) http://ed-thelen.org/comp-hist/ibm-1401.html
1) uses an architecture that has at least user-space binary-compatible descendants today.
2) introduced a 32-bit word size with 8-bit byte-addressable memory.
3) was designed to use ASCII (which is somewhat ironic, as IBM big iron is mostly synonymous with EBCDIC, which was only meant as a temporary solution because ASCII was not finalized in 1964).
Totally agree. The comment I was replying to asked "most important hardware announcement ever made by a tech firm?" and I was noting that in 1964 it was. That said, it's still a great read and I can imagine how cool it was :)
The big news is that IBM is still selling mainframes at all, and even getting rich from it. Doesn't hyperscaler tech nowadays do pretty much everything that we used to need a mainframe for?
We shouldn’t underestimate some companies’ desire to keep running the same software.
However, as for stuff they legitimately do "better": they're mind-bogglingly reliable. We're talking two or three seconds of downtime per year. They're also accurate - every instruction they perform is checked for correctness, and if something goes wrong, that task can be migrated to another CPU core and resumed transparently. Cores can fail, sockets can fail, etc., without the system going down. Hardware can be replaced without downtime; processes and data are just migrated elsewhere. They recover from hardware faults and resume at the same instruction within milliseconds with no data loss (cache contents can be mirrored to other machines in the data center, etc.)
The hyperscaler model would be to eventually recognize tasks sent to some machine aren’t returning and reschedule them from the beginning. VMs would resume from the last snapshot, etc. It takes a few seconds at best. For 99.9% of the world, that’s perfectly reasonable and acceptable. We work off of eventually consistent models.
But there still remains a small audience for which that model isn’t acceptable, and they’ll pay through the nose because there’s not many companies making machines that way anymore.
I'm not convinced that they're paying though the nose.
You'll easily spend a lot more in just salary costs trying to attain the same combination of reliability and throughput on off-the-shelf hardware.
Most shops won't need it, but there's a reason many financial institutions are still using these, and it's not because it's all legacy software due for replacement.
Hardware is only a small part of implementing a new technology in a business. Most businesses only buy new tech when there is a business need that drives it. For big B2Bs that haven't changed how they do business or how they interact with their customers in 30 years, there often hasn't been a need to change their business processes. If the mainframe breaks, it's much cheaper and lower risk just to buy a new one, than it is to get different infrastructure and reimplement 30 years of code for which no SMEs are around anymore.
[1] https://www.create.ac.uk/blog/2018/11/14/the-first-software-...