IBM AS/400: Databases all the way down (2019) [video] (youtube.com)
161 points by twoodfin on May 2, 2023 | 147 comments



I like to joke there are platforms that were so ahead of their time they are still ahead of ours. The AS/400 is one such thing. It concerns me IBM doesn't seem to provide enough entry-level resources (and I believe nobody would do a greenfield project on IBMi that required a company to invest in the platform unless it already had a sizeable investment in it). For all three of their crown jewels, AIX, Z and i, IBM is doing a terrible job of onboarding new clients.

BTW, I've been trying to set up a small AIX box on IBM Cloud to do some software testing (and not make the mistake of making a Unix app dependent on linuxisms), and even when you navigate the confusing UI and metaphors (there is no "give me a POWER10 AIX VM with this many vCPUs, this much memory and this much disk" option), it fails for me for unfathomable reasons while trying to create a storage volume with the default options. On a side note, I can create a Linux VM running on s390x from the same flow I create x86 ones, but I can't select a z/OS disk image, or a POWER processor. If I want a z/OS machine, I have to go through an entirely different workflow (one I went through, only to never be able to figure out the IP addresses I could ssh to).


IBM is doing a terrible job onboarding because they are all focused on offboarding.

This hardware is expensive and not as performant as modern competitors. The software is arcane, confusing, and a pain even for IBM themselves to keep supporting. AS/400 is a costly legacy. AIX has no reason to exist in a world where Linux and BSD exist - IBM owns Red Hat, after all.

It was a cool technology. But it's definitely a thing of the past.


That all may be true for the actual implementation, but the ability to write an application with a built-in relational database that would be available to all my users relatively instantly is still a great idea. The closest thing we get to that today is the web, but the web comes with its own whole thing, not least of which is a dev stack with so many moving parts by comparison.


On an AS400 you wouldn't even get a dev stack. Well, unless you wanted to pay IBM for 2 of them. Dev on production is the name of the game for most AS400 shops.


So AS400 developers would chronically be deploying at night? Code during the day and deploy/fix/hack at night? Actually, that sounds familiar.


Yes - we had a night shift from 00:00 to 08:00 that was oddly popular.


Computers are much faster when the batch jobs finish running and there are no interactive users logged in.


I guess it depends on how long the batch jobs run and how much they impact interactive use compared to the normal dayshift workload.


They usually lock the database completely, so you have to time your work around them.


There were so many attempts to create something similar but modern that it isn't even funny. WinFS was first announced to be released in 1993, and the last I saw, it was announced for Windows 8. This was one of the core ideas of BeOS; AFAIK they never really gave up on it, just de-prioritized it. The KDE people played with it at least twice...

It never works. There are insurmountable problems you notice once you try to design such a system. And yeah, AS/400 is an example of it not working too, as it was only usable as a hidden database layer and never survived contact with the users.

On the other hand, the obvious extension of the concept into development tools is a great idea that has been used to create novice-friendly toolsets all over. The tools always seem to "mature" away from it for some reason, and usually lose popularity in the process.


You don't need to build on Kubernetes when an ECS cluster and an RDS instance will do the job. It is possible to build simple things in the cloud.


I'm not saying a full k8s stack, but consider mobile apps - you can't control when your user will update, if they do at all. So you either have to have a very rigid data model that makes a lot of translations and such on the backend, or a hook to check for updates and pester the user, or just ship late in pursuit of perfection. Plus now you get to deal with the entire Internet and cell networks and VPNs and bad actors.


> you can't control when your user will update, if they do at all

If you design your APIs to be versioned from the start, you don't need to be overly concerned with that. Also, if maintaining a deprecated API is too resource-consuming (either in developers or in API translation logic), you can always make it return a message informing the user that the API will cease to function on a given date (you need to make the app display that, even if not updated) and shut it down after that. Any mobile app should have a way to be placed in a degraded-functionality mode for a number of reasons, API deprecation being just one of them.


Deno has a built-in KV store, but one could argue it belongs to the web ecosystem.


IBM still makes boatloads of money providing support and upgrade paths to its legacy customers. They aren't _actively_ offboarding because it wouldn't make sense to lose that institutional market that they effectively own by the balls and that nobody will fight them for. It'd be hard to sell that tech to new customers, though, and it would also invite comparison to modern solutions, which would not please their current customers. It's like IBM is running an island prison / amusement park for rich old people. Just keep them happy and they'll keep paying.


My anecdata involves a foreign market. Maybe IBM USA is more keen on keeping this rolling, but in Japan we got strong pushback and were led to cloud solutions every time we discussed acquiring new hardware. They were very proactively trying to get us out of AS/400 and into their cloud platform.


Interesting, maybe they feel that they have a strong enough grip on their customers that they can move them away from proprietary hardware to higher-margin proprietary cloud solutions.


Or maybe someone's KPIs are aligned towards cloud growth.


That's true and it really is a major factor in dying technologies: where there is no growth, there is no investment.


z is up to the minute in terms of clock speed, CPU process and such. They have RAIM, which is like a RAID array for RAM; it must cost something in latency, but it lets you replace failing RAM sticks while the machine is running. The rest of the industry will have something like that soon, based on CXL.


Being up to the minute in terms of clock speed doesn't matter that much when, for example, DB2 on the AS/400/iSystem has much higher latency and handles far fewer requests per second than an equivalent Xeon.

Replacing failing RAM sticks live is also something that, while quite cool and impressive, happens to be fairly pointless in today's world of clusters.

Current AS/400 is almost retrofuturistic in both its cool factor and disconnection from current actual needs.


I’ll maintain that Kubernetes makes Parallel Sysplex look user friendly.


This is one of the mainframe's key selling points. Not that I am implying z/VM is user friendly.

But instead of managing a fleet of x86 servers, switches, routers, storage boxes (you may need a few of those), all liable to fail without warning, you have an exquisitely built datacenter that fits in half-a-dozen 19" racks (counting 2 for storage) where redundancy, failover, and easy maintenance are built-in at the hardware level.


Kubernetes makes WebSphere 5 look user friendly, disregarding 6.x improvements on purpose.


Yet the sales keep increasing.

https://blocksandfiles.com/2022/07/19/ibm-storage-revenues-q...

There are still AIX features that aren't available on Linux and BSD, let alone what IBM i and z/OS offer.

As for IBM owning Red Hat, it has come full circle; Linux only started taking off in big shops when IBM, among others, entered the game back in 2000.


Sales are increasing, according to this link, for z, which doesn't run either AIX or IBM i (the AS/400 OS).


No, but it does run z/OS UNIX System Services, which is also not Linux/BSD for that matter.


z/OS USS is a very weird take on Unix, but I'd love to have an affordable way to test my code on it.


IBM still seems to invest a good chunk of money keeping up development of the POWER series... the hardware improvements get shared by all the Linux/AIX/i software.


The closest system that hit it big was Java.

AS/400 was based on the System/38 which had a capabilities-based OS and persistence built into the system from day one. (I remember the bad old days of JDK 1.0 before there was serialization which was awful for writing applets because more than half of your applet might be serDes code. I think Sun panicked when Netscape came out with a serialization library before they did.)

In the 1980s there was a fad for complex architectures like the Lisp machines, stack machines, the iAPX 432, etc. Common Lisp and the JVM put those to rest, showing you could implement a featureful virtual machine on a mainstream CPU. My understanding is that that was what the System/38 was from the beginning.


The difference is that IBM made abstractions of machines work through several generations of hardware. That abstraction has let them maintain software compatibility at a binary level for longer than any other vendor... you can literally take production programs written for and running on the System/38 and migrate them to modern iSeries systems.

IBM's methods have let them maintain compatibility despite moving the iSeries through several generations of hardware. It originally ran on their IMPI CPU, was later migrated to the RS64, and now runs on POWER series hardware, the same as AIX (and in fact virtual instances can share chassis hardware with AIX instances).

So in fact there are services at companies still running today that originally ran on IBM's mainframes, have been migrated and migrated again, and have essentially always been up and available. The companies aren't forced to spend the money and time to plan upgrades because their previous hardware and OS aren't supported any more (contrast with Microsoft).

Lots of companies advertise upward compatibility. Some of them manage it for a single generation of hardware... after all, virtual machines are a thing. Continually migrating upward over a span of 45+ years is a technical achievement not matched by any other company.

The down side to this is that they're bound in some sense by technical decisions made long ago... so they can't modernize their software environment like e.g. Apple can.


> The difference is that IBM made abstractions of machines work through several generations of hardware

Great idea - but with PASE they’ve effectively killed it. With PASE the abstraction is gone and everything compiles to POWER machine code, which makes it no more “abstracted” than AIX or Linux or Windows is. And they keep on encouraging customers and ISVs to do more with PASE, and more of their own products rely on it. Which means IBM i is going to die on POWER, and the whole hardware abstraction stuff, while used to great effect once (the CISC-to-RISC transition), is now pointless


Burroughs B5000 (nowadays ClearPath MCP) and z/OS are also like that.


How is z/OS like that?


The "virtual machine" that's used on IBM z is hardware virtualization, not a foreign system like Java or Infocom's Z-machine.

A mainframe is usually running a hypervisor (VM) and underneath that hypervisor there are various operating systems running; z/OS is one of them. Circa 1989 I was in the Computer Explorers and we got to use VM/CMS on an IBM 3090, which is basically a single-user OS a lot like MS-DOS (though it is really the other way around). To do software development on such a beast you would create a VM and log into it.

Of course you can run Linux or Java applications in a VM on z too so you can consolidate legacy applications on the same server with more mainstream workloads.


z/VM and PR/SM aren't in the same category as the System/38, AS/400, Lisp machines, stack machines, the iAPX 432, Common Lisp, the JVM, Burroughs Large Systems, etc.

z/VM and PR/SM are just virtualising (relatively) mainstream hardware, fundamentally no different from what VMware/Xen/KVM/VirtualBox/Hyper-V/etc. do.

The rest are either (1) hardware architectures whose ISAs are much further from mainstream than 360/370/390/z is - aiming to provide high-level capabilities which are normally provided in software, making the distance between the hardware ISA and HLLs much smaller than normal; or (2) software architectures which could be implemented in hardware (and occasionally were) to become instances of (1).


By having Language Environments.


z/OS LE doesn’t do much more than what the C runtime library on Unix or Windows does. Despite the common name, z/OS LE has little in common with IBM i ILE. They do share some code (CEE), but the code they share is just that “C runtime library” part-which actually runs (or ran) on OS/2 as well. The whole “hardware abstraction” part of ILE (MI bytecode) is not part of z/OS LE


> It concerns me IBM doesn't seem to provide enough entry-level resources

Hell, at one point I worked for IBM and even I couldn't get access to any decent entry-level resources (let alone beyond!).


That's probably the reason why the overwhelming majority of central banks worldwide stick with IBM mainframes. It's nearly impossible to hack, or even access, anonymously, even when connected to the internet. And even then, only a small pool of select people would know how to do anything meaningful with that access.


A little larger pool than you might think.

https://en.m.wikipedia.org/wiki/ES_EVM

I wonder if the Hercules emulator runs their custom OS/360.


Key words: "generally compatible". Which even then I doubt, since I'm fairly certain the Soviet copies didn't support all the features and functionality supported by the most advanced System 370 variants, let alone System 390 or later.


The wiki implies that these models were directly compatible: 1010, 1020, 1030, 1040, and 1050, which were produced between 1969 and 1978.

There were many later models, but compatibility isn't mentioned.


Okay, but how is that relevant? No bank is running anything older than a System 390.


> I like to joke there are platforms that were so ahead of their time they are still ahead of ours. The AS/400 is one such thing

In some ways really advanced, in many ways really backward.

An “object-oriented operating system” in which the OS vendor can define dozens of classes (object types), even for their most obscure add-on products-but third party vendors and end-customers can’t define any at all, they can only use the ones IBM defines (very rarely, third party vendors might have some kind of deal with IBM, where IBM engineering defines a class for them)

An "object-oriented operating system" with no concept of inheritance - except many of the classes have defined some kind of "subtype" field/attribute which amounts to the same thing, except in an ad hoc rather than general way.

Has this great "technology independent machine interface" (TIMI) idea - programs are compiled to bytecode, and the OS translates the bytecode to machine code before running it - so moving a binary to a new CPU architecture doesn't require a recompile. Except it turns out the latest version of that bytecode (the ILE version) is completely undocumented outside of IBM, and IBM won't release that documentation (except possibly under NDA and with a $$$$ license fee) - which means nobody but IBM can write a compiler for it. Documentation is available for the original late-70s/early-80s version of the bytecode (OPM), but it has problems supporting languages like C (it works much better for 1970s versions of COBOL or RPG). Nowadays IBM will say "use PASE instead" - which is an AIX emulation environment, meaning kissing the whole bytecode idea goodbye and compiling to POWER machine code.

A system whose native filesystem doesn’t even support nested directories (which it calls “libraries”)-although it has since added a POSIX filesystem that does, but lots of core functionality uses its native filesystem instead

To say nothing of EBCDIC, 5250 green screens, or RPG

And the OS itself is written in a grab-bag of different languages, including Modula 2, and two different top secret IBM-only dialects of PL/I - although the newer parts are written in C++ instead

But yeah it has also got other cool stuff like capabilities, tagged memory and a single level store (although many apps now run in private address spaces instead)


You seem to know a lot about the system, thanks for sharing. Do you have any idea of the reasoning behind having no nested libraries, and objects only being able to have a 10(?) character name?


Some of it is backward compatibility with earlier systems - S/36 and S/38 had roughly similar limitations.

It is a design decision that dates back to the 1970s, and at the time they made it, they were hardly the only system to have made such a decision. Back then, truly hierarchical file systems were still a new idea - Unix was popularising it, and Unix got the idea from Multics - it was primarily associated with research/academic systems and hadn't yet spread much to commercial ones, which is where they were situating the product. On the other hand, S/38 adopted other highly novel research ideas, so why not this one?

Another factor is the UI: 5250 and 3270 block terminals work really well with fixed-width fields, but they don't have good support for arbitrary-length ones. Fields can't be scrolled - if the field has a maximum value length of 255 bytes, it has to take up that many bytes on the screen. That is likely a factor which discouraged them from longer names and a multi-level hierarchy. OTOH, MVS datasets have max 44-character names. They are pseudo-hierarchical, with 4 levels of "qualifiers" then a name, each max 8 characters, separated by dots. So I don't know why they didn't do something closer to MVS - maybe they thought "we are not a mainframe, we are midrange machines, that's overkill for us".

The 8 or 10 character limit wasn’t uncommon for the 1970s. In those days, most Unix filesystems had a 14 character limit which is only a bit better.

They could have enhanced the core system to support nested libraries and longer library and object names. It would have caused backward compatibility issues for older apps, but there are workarounds for that, like Windows having both short names (8.3) and long names for files; they could have had some system to make nested libraries look like unnested ones on an app-by-app basis. They decided never to make those investments - they decided to just slap a POSIX filesystem on the side and leave the core alone.


That was my guess - the Green Screen thing, though I didn't know the fields couldn't scroll.

It just seemed like with everything else sort of 'overkill' - 128bit address space - it would've been sort of an arbitrary amount.

Thanks again for the info


> It just seemed like with everything else sort of 'overkill' - 128bit address space - it would've been sort of an arbitrary amount.

It is kind of bizarre how they overkilled / went bleeding-edge in some areas but held back in others. It makes for a strange, retrofuturistic system


I don't know why AIX would be compelling. Isn't it just another proprietary Unix?


It all depends on performance per dollar - POWER and AIX, POWER and Linux, x86 and Linux, ARM and Linux, SPARC and Solaris... It's all about what gets you more work done per unit of money until you retire the box. Is it compelling? Not sure - depends on licensing terms, co-location costs, what you are doing, energy costs, AND compute power. As compelling as x86 and ARM with Linux seem to be now, I wouldn't rule out other platforms without an assessment based on the workload.

Once I managed to bring a couple of Itanium boxes we had back from retirement because their then-humongous caches were a perfect fit for our working dataset and, therefore, for that specific workload they were 10x faster than our Xeons.


> Is it compelling? Not sure - depends on licensing terms, co-location costs, what you are doing, energy costs, AND compute power.

It seems like Linux always wins in terms of licensing, so does it currently lose in any of those metrics?

Also, since Linux runs on IBM POWER systems, it looks a lot like IBM's AIX has very little place to stand: Even if the hardware is better (which, judging from experience with other proprietary workstation vendors like Sun, I'd be surprised about) the value proposition of running Linux as opposed to a proprietary OS with less effective support and expensive licensing appears insurmountable.

https://en.wikipedia.org/wiki/PowerLinux

So, I'd be willing to believe that IBM's hardware is better in some ways, but I'm more skeptical about IBM's software in a realm where an apples-to-apples comparison is possible.


You run the IBM POWER stuff mostly because it is the most cost-effective way to run Oracle and other CPU-licensed workloads. You can't use VMware partitioning to avoid paying. But the IBM hardware-based partitioning allows you to segment the workloads.

The other thing is that, like the mainframe, you can lease CPU on demand. So if your business is cyclical, it may be better to increase CPU by 20% from November to January.


POWER is supported on Linux to the point where it runs, but it doesn't come close to fully leveraging the hardware. AIX does. Stuff like hardware accelerators, transactional memory, hardware counters, reliability monitoring and self-healing, etc. Lots of stuff is left on the table because it doesn't perfectly overlap with x86, and it would be a massive undertaking to correct that in the kernel. You'd think that wouldn't be the way it is with IBM and RH... but I suspect there are some market-segmentation ideas informing those decisions.


I would expect AIX to be finely tuned to IBM's hardware and able to exploit the exotic hardware that's bundled with the machine.


My company still runs a lot of stuff on AIX (also mainframes, for that matter) and the reason is that it was set up that way in the 90s and no one feels like investing the sizeable amount needed to move these business-critical applications over to Linux just for the sake of it. Unlike all the other unices that were formerly used here (HP-UX, IRIX, Solaris, Super-UX and others), you can still get AIX support, so there is nothing forcing the hand of this migration. I expect them to still run some stuff on AIX in 10 or 20 years. Nothing new will ever be deployed to AIX and probably hasn't been in 20 years. At some point the AIX systems will only be around for a handful of niche things, and at that point the cost of migrating those over might become lower than the cost of paying IBM off.


I mean, you'd have expected that of Solaris and Sun's hardware, too, but that didn't make Solaris on Sun workstations compelling enough to actually survive. That argument seems like a variation on one of the myths mentioned in these posts:

https://utcc.utoronto.ca/~cks/space/blog/unix/PCsAreUnixWork...

https://utcc.utoronto.ca/~cks/space/blog/unix/WorkstationMyt...

In short, I'm not sure IBM Power machines have any special hardware, and if they do, I'm reasonably sure Linux supports it. It is, after all, a smaller and more stationary target than the weird crap that ended up inside and hooked to commodity PCs that Linux ended up supporting.


Sun abandoned the workstation space well before Oracle finished abandoning SPARC and Solaris.

As for special hardware, the processor drawer of an E1080 looks a lot like the one of a z16 (without the distributed virtual cache of the Telum, or the insane water cooling blocks):

https://power10-ar-experience.com/


Power certainly had exotic hardware with the Cell processor in the Sony PS3.

https://en.m.wikipedia.org/wiki/Cell_(processor)


Generalizing the question a little in case someone passes through with an answer:

Why should a couple of hackers working out of the proverbial garage building the next unicorn consider IBM as their infrastructure vendor?

What are their offerings? How do those stack up against the defaults of aws/Azure + Linux + Postgres/MySql?

Assume said hackers are broke.


On the far opposite side from the startup scenario - the post-company-gobble-up side - I've been pleasantly surprised by how well OpenShift k8s images ran on our inherited zSeries mainframes. A simple tune of the s390x JDKs, and many of the Java-based apps were off to the races. The hardware was already there, so the cost of putting it to work ended up being cheaper than using cloud resources. The I/O, as one might expect, is very solid. Loving the hardware we had was a less bitter pill than pushing out our own instances.

After noodling out how to make my docker images work on aarch64 and amd64, it was trivial.


They wouldn’t. These are “deep pockets” platforms. Ramen eaters need not apply.


This is the point - IBM doesn't seem willing to help. I'd love to be able to use LinuxONE machines as VM hosts (single-thread performance and IO throughput are absolutely ludicrous) but there seems to be no entry-level machine that doesn't imply a very sizeable investment. And I suspect LinuxONE and POWER10 machines would be very cost-effective ways for IBM to provide Linux VMs for public cloud environments, at better price/performance points than could be achieved with x86 or ARM.

To me, it's absolutely nuts they don't have entry paths to their crown jewels. What will IBM's competitive advantage be if everyone migrates from AIX and IBMi to Linux? While there is no IBMi emulator, there are commercial environments that can compile and run COBOL and PL/1 code made for i on x86 Linux machines.


I think that was the point of the Red Hat acquisition. "See all those Linux boxes you run? Do you want to have someone to yell at when they blow up? It's either us or Canonical.".

There's no way z/OS is going to be free or open-source (if for no other reason than if they open sourced it, you'd still need a mainframe, which means you're going to pay IBM for cloud time), so if hobbyists are going to start with something they'll probably start with Linux. Once they are no longer a hobbyist, IBM will be there to help.


> There's no way z/OS is going to be free or open-source (if for no other reason than if they open sourced it, you'd still need a mainframe, which means you're going to pay IBM for cloud time)

This is probably why IBM mainframe OSes until the 1980s are public domain: You can't run MVS without an IBM mainframe, so why bother even copyrighting the source code? The Hercules people are grateful for that bit of pragmatism.

http://www.hercules-390.org/hercfaq.html

https://www.ibiblio.org/jmaynard/

https://wotho.ethz.ch/tk4-/

https://cbttape.org/~jmorrison/mvs38j/index.html


I believe the actual reason old IBM mainframe OSes are in the public domain is that computer programs originally weren't copyrightable. When the law was changed to make them copyrightable, this wasn't retroactive.


MVS 3.8j contained software developed under US Federal contracts, and that is why it is freely available.


> There's no way z/OS is going to be free or open-source

I never said that.

What I said is that there aren't any onboarding routes to z/OS (or AIX, or IBMi). Either you are already running one or more z/OS boxes, or you'll just deploy to cloud, CentOS, Kubernetes, OpenShift, on commodity CPUs (x86 and ARM), or any of the other stacks that rival a mainframe in some capabilities (and carefully avoid business requirements only a mainframe can fulfill).


Sure, I'm a bit sad to see tech lost as well. However, I gave up hope decades ago that someone at IBM might have read "The Innovator's Dilemma". Or considered employing the long-term-thinking kind of executive that would have read it.

Not sure if Big Blue ever had that kind of person in abundance, to be honest. Was reading up on the DEHOMAG thing recently, and, well, probably not. At least Watson tried to give the Nazi medal back, haha.


Makes you wonder what future the platform has, when the only customers willing to pay for it are banks and defense.


You wouldn’t. Most customers tolerate IBM, they aren’t investing.

The only exception is in the defense space: there are some things that, from a segmentation perspective, are cheaper to achieve on mainframes. But that isn't a startup scenario, and those advantages are eroding as well.

IBM has huge margins on this stuff, so the typical play is use margin on the mainframe to win software and services deals. Startups need fast time to market, so it makes sense to overpay AWS by the drink than to overpay for a feast from IBM.


Softlayer was (is?) a good US-based bare-metal hosting provider before IBM acquired it. A company I worked at in the mobile gaming space in 2012 used them for their US backends. Not sure if they're good value these days.


I have worked with these extensively - my employer uses one for work, and I have written software both in the greenscreen / ILE languages and in the PASE environment / IFS to accomplish certain tasks (e.g. setting up an intranet site with PHP to present pretty-fied reports of what's in the databases).

I think one of my favorite things about it is actually IBM's Data Description Specification (DDS) [0]. You use it for the same reason you'd use a `CREATE TABLE` statement in SQL, but the syntax is much more legible / suited for that purpose, and the file is stored permanently (and used to spool up a "physical file" with the given format, where the data is actually stored).

Definitely encourage anyone who is curious about these / esoteric OS's in general to look into them - you can get hands on through IBM's "Cloud for Co-Creation and Enablement" (I believe it's been renamed) or PUB400 [1].

[0]: https://www.ibm.com/docs/en/i/7.3?topic=files-describing-usi...

[1]: https://www.pub400.com/


The AS/400 is, indeed, a truly weird system. Nowadays it runs as a VM guest on an IBM POWER system. Interestingly, it makes use of hardware tagged memory functionality in the IBM POWER CPUs. You can tinker with this stuff yourself if you have a Talos/Blackbird (or other POWER9) system, as I wrote about previously [1] [2].

[1] https://www.devever.net/~hl/power9tags

[2] https://www.devever.net/~hl/ppcas


I worked with AS/400s. Very fond memories of changing the backup tapes, in particular. I loved the server room! Meanwhile on the software side I was mostly dealing with EDI messages and it was all about critically precise placement of characters (including whitespace). It was ... weird. Nice machines though. The whole company's back office systems for around 300 shops ran on two of them in head office. Ah nostalgia...


Interesting tidbit I found the other day: apparently there was a collaboration between Nintendo and IBM to promote the AS/400, which resulted in the online game "Mario Net Quest".

https://www.reddit.com/r/UnreleasedGames/comments/133ukrl/co...


I found this so weird I had to learn more; this link [0] has more details and more links.

FTA: "AS/400 handled the verification and distribution of every order... [of a N64]"

This is a link to an Italian ad for Nintendo+AS/400 Advanced series[1] and the English one [2]

[0] https://www.reddit.com/r/lostmedia/comments/12m6yru/found_19...

[1] https://archive.org/details/lastampa_1997-05-08/page/n3/mode...

[2] https://bashify.io/images/LOIpU7


That seems like a really weird combo; I wonder what the thinking was.


Yup, in the late '80s and '90s I worked on several AS/400s. The video doesn't mention that not only is SQL a part of the OS, the (microcoded) hardware actually has instructions for SQL. It's part of the CPU!


On the original CISC AS/400 systems, there were two layers of "microcode" - horizontal and vertical. The vertical "microcode" was not really microcode, but was essentially the OS kernel (including the database and the native code generator), mostly implemented in a PL/I dialect. The horizontal microcode was the actual microcode - it implemented the CPU instruction set which the vertical microcode's PL/I code compiled down to. While the horizontal microcode implemented some rather high level things such as processor scheduling, I am almost certain that the database logic was implemented in the vertical microcode layer.

Once IBM i was ported to PowerPC, the vertical microcode was mostly rewritten in C++ and became known as the "Licensed Internal Code" and the PPC instruction set essentially replaced the role of the horizontal microcode.


Incidentally, the term "microcode" for the OS kernel originated with the AS/400's predecessor, the System/38, because, as a legacy of '60s-era lawsuits, it was IBM's policy at the time to separate hardware and software sales and development costs.

Bundling microcode with hardware, on the other hand, was standard practice.

The designers' idea was to build and sell the hardware and software stack as an integrated product, never to implement an operating system or RDBMS "in hardware".

For an explanation of this integrated architecture from one of its original designers, see

https://archive.org/details/insideas4000000solt/page/75/mode...

(registration and check-out required)


I've been waiting for that book to be digitized for years.


This [0] is a pretty good intro to the AS/400 etc.

[0] https://www.scss.tcd.ie/SCSSTreasuresCatalog/hardware/TCD-SC...


> Once IBM i was ported to PowerPC, the vertical microcode was mostly rewritten in C++

I heard once from someone who used to work for IBM that although the most important parts of the "vertical microcode" were rewritten in C++, a lot of less important bits stayed in PL/MP (the PL/I dialect which compiled to the CISC IMPI native instruction set) - they just built a new PL/MP compiler which spat out POWER machine code instead of IMPI machine code - and so lots of PL/MP code is still there. And then there is also PL/MI, which is used for the higher-level OS components which run above the "Licensed Internal Code" - it compiles to MI bytecode - and apparently that's still around too. And there are still parts of the OS written in Modula 2. But for brand new stuff, they prefer more mainstream languages such as C++.


Kind of.

Not sure about the CISC-based AS/400, but current ones have bytecode translation (AOT compilation, IIRC) and run on POWER. In hardware terms, a pSeries and an iSeries are the same, with different microcode customizations loaded into the processors. I think it's even possible to run AIX partitions side by side with IBM i ones, if the processors (or TPMs, not sure) have a valid license.


I think they always did AOT compilation from the AS/400 instruction set to the actual hardware. In the '90s I worked for an external company that did some compiler work on a horizontally-microcoded implementation that didn't get close to shipping. (I don't remember many details, and if I did, I expect they'd still be under NDA.)


This is true; at one point, there was even an instruction that deleted the untranslated code ("program template") to save disk space,

https://bitsavers.org/pdf/ibm/system38/GA21-9331-1_System_38...

This instruction is either not present or not documented in the current Machine Interface[1], but I have no idea when or if it was removed.

[1] https://www.ibm.com/docs/en/i/7.5?topic=interface-machine-in...


> This instruction is either not present or not documented in the current Machine Interface

That’s not actually the current Machine Interface, that’s the legacy one. The current one isn’t documented outside of IBM. IBM still supports the legacy MI, which is translated to the current MI - and high-level language programs compiled to current MI can even embed legacy MI calls as a kind of “in-line assembly” - but all that is converted to the new MI as part of the compilation process


> In hardware terms, a pSeries and an iSeries are the same, with different microcode customizations loaded into the processors

I don’t think the microcode is different. It is true there are a few CPU instructions (memory tagging related) which were added for IBM i, but they are documented and there is nothing technically stopping some other OS from using them.

I thought the difference was that AIX, IBM i and Linux use different firmware. The firmware isn’t CPU microcode, it is just ordinary (albeit privileged) POWER machine code. Or to be more accurate, the POWER firmware has this concept of loadable modules (LIDs), and IBM i needs certain special firmware modules loaded which aren’t required for any other OS. This is why, even though QEMU can run Linux for POWER fine, and even AIX, it can’t boot IBM i-no one outside of IBM really knows what those special firmware modules do or their API, but without them the OS can’t boot. It is unlikely to ever happen without some serious reverse engineering, which would likely upset IBM and cause them to unleash their army of lawyers


> no one outside of IBM really knows what those special firmware modules do or their API

This is a thing that always gets me thinking. How is it possible that the firmware has never been dumped? It's not like there wouldn't be some market for a machine that could emulate an IBM i in jurisdictions where requiring IBM hardware to run the OS is illegal.


> It's not like there wouldn't be some market for a machine that could emulate an IBM i in jurisdictions where requiring IBM hardware to run the OS is illegal.

From a business perspective, in most major markets, too much risk of being sued by IBM. There are some countries in which you don’t have to worry about that - but those countries have few or no IBM i systems, so are likely too small a market to make it worthwhile

I’m sure some hobbyist somewhere will build an IBM i emulator eventually - IBM probably wouldn’t bother suing a hobby project, even if the law was on their side. But it is a pretty niche thing, so the intersection of sufficient skills and sufficient interest is likely to be quite small - that eventually may take a long time. I myself think about it sometimes, but I doubt I’d ever do it - don’t have the time, other things I’d rather work on, don’t have any actual IBM i hardware and I think you’d really need that for reverse engineering


It is absolutely possible to run AIX and IBM i in separate LPARs on the same machine (as of Power 7, at least), as long as you have the appropriate hardware entitlement to run IBM i.


pSeries and iSeries were consolidated down to "IBM Power Systems" back in 2008. I think they may install different firmware onto the systems, but the differences are more related to licensing than anything specific to the hardware.


A friend of mine got a job as a junior developer at an insurance company that relies solely on IBMi and RPG programs running on it.

It is not dead.


Sadly. We're using an i currently. It has sort of locked us in because all of the programs that touch the money are written in ancient RPG. You can't just change a table (which IBM calls files...) when you want to add a field, oh no. You have to change the table AND update EVERY program in RPG that uses that table, because RPG has to know about every field in every file it touches. We are slowly working on migrating out to PHP.


The sad part is that the only reason you have this constraint is because your team/company chose to have it. It has nothing to do with the language or system itself, it's 100% self-inflicted.

You can absolutely add new fields to database tables on the system without updating every program - if you architect the software properly. Use embedded SQL, explicitly select field names (i.e. don't do "SELECT *"), and put default values on new fields. I haven't written a new RPG program using the old file specs in years - all new code has been embedded SQL.

Not willing to give up SELECT *? Fine - create views on top of the real database tables to do your SELECT * from. Then you can still add new fields to the database tables without changing everything else.

Edited to add - if you want to add new fields to a table and not change all your old programs to use SQL, create a new table, transfer your data, and then create a view with the old table name using the same fields. You just have to test a little to make sure your view has the same record format id as the old table. Honestly it's pretty easy once you get the first one done and see how to do it.
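
A minimal sketch of that last approach in plain SQL, with hypothetical table and column names (ORDERS, ORDNO, and so on); the record-format check the parent mentions is left out:

    -- Hypothetical example: ORDERS gains a NOTES column without touching
    -- the old RPG programs that expect the original record layout.
    CREATE TABLE ORDERS_NEW (
      ORDNO   DECIMAL(7, 0)  NOT NULL,
      CUSTNO  DECIMAL(7, 0)  NOT NULL,
      AMOUNT  DECIMAL(11, 2) NOT NULL,
      NOTES   VARCHAR(100)   NOT NULL DEFAULT ''   -- the new field, with a default
    );

    -- Transfer the existing data.
    INSERT INTO ORDERS_NEW (ORDNO, CUSTNO, AMOUNT)
      SELECT ORDNO, CUSTNO, AMOUNT FROM ORDERS;

    DROP TABLE ORDERS;

    -- A view with the old name, exposing only the original fields,
    -- so legacy programs keep seeing the layout they expect.
    CREATE VIEW ORDERS AS
      SELECT ORDNO, CUSTNO, AMOUNT FROM ORDERS_NEW;

New code can read ORDERS_NEW and see NOTES, while the old programs keep reading ORDERS and never notice the change.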


Some would say that's great, as you get instant database-integrated type checking. Some people actively look for that.


I don't know, but suspect there are better ways than having to touch all our programs when a table changes.


[He got the job in the year 2022. They were using RPG version 4 on IBM i and are not migrating anywhere. Rather they are hiring!]


The hardware is solid. The development community is aging out.

We have the same thing and are currently migrating off as there are no RPG developers available for hire. Most of them are retiring. There are very few schools teaching RPG 3, 4, and Free.


My company has one as their main database too; these things just run and run and run.


Not only is the AS/400 itself pretty alien, even the terminals are unusual from a modern viewpoint; they were not just dumb character grids or text lines, but had some logic for handling forms etc. on the terminal side.


The AS/400 had a terminal that was similar to but not compatible with the 3270 used on the 360 mainframes. Applications for either one are a lot like web applications from 1999; that is, the mainframe draws a screen with a form in it, the user fills out the form, hits a button, and it gets submitted.

The programming model for transaction managers like CICS on the 360 was similar in some ways to back-end web frameworks; they even had clever code-generation systems (see "X macros") for writing serialization/deserialization in assembly language.


The AS/400 terminal is the 5250. Like you said, it's similar to the 3270 but different.


AS/400 (and contemporary IBM i) actually supports 3270 as well - although I don’t think many people ever used that support


I had a class in community college that was taught on one of these. It was not what I was expecting from a programming class. I had a real bad time of it.

Later on, I briefly came to be in possession of an AS/400 that allegedly originally belonged to WKQX Chicago. I held on to it for a couple months but I didn't even have the slightest idea about how to go about hooking it up, let alone the proper cables, so I eventually let it go to another scraphound.


The AS/400 OS is one of the most arcane systems I've ever used. Power users can do some incredible things, but I think long-term use results in strange things.


We still work with an AS/400 (IBM iSeries) on a daily basis for legacy data, and it's impressive to watch how quickly our older staff members navigate the interface and get work done with it. It's several orders of magnitude faster than our newer web-based ERP, and I don't think I can be convinced that any modern browser UI will ever match the efficiencies offered by a terminal-based interface. That's not to discount the numerous technical benefits you reap by moving to a modern software stack, but strictly from an end-user usability standpoint, the AS/400 still wins in my book.


> I don't think I can be convinced that any modern browser UI will ever match the efficiencies offered by a terminal-based interface.

But you can run a terminal in the browser. It’s a pretty common way to interact with cloud servers. And as long as the server isn’t on the other side of the planet it’s not noticeably slower than a local terminal.

The problem isn't the technology, it's the sad state of UI engineering.


It's comparing apples with oranges. Your typical web-based UI is layers and layers of rasterized graphics, while a terminal is the frontend for a platform's serial console.

I'm sure you can emulate the same performance in a modern web browser; it's just that no one cares, because no system administrator relies solely on web interfaces to get work done.


>> no system administrator relies solely on web interfaces to get work done

Increasingly, they do. And prefer to, because if it's in a browser, it's probably someone else's cloud system that they don't have to administer.


How is that different from a veteran Linux programmer/sysadmin skilled in $whatever shell and language of choice?

Our staff can run our ERP on a browser on a phone at the airport. I imagine it's hard to get 5250 on an iPhone. =D



To me, before I met Unisys' MCP, it was the epitome of a user-hostile OS. Not because it's particularly difficult to navigate, but because everything is deeply alien (same feeling as z/OS, BTW).

Then I met MCP, where everything is both hard to navigate AND deeply alien. ;-)

I'm sure Alan Kay is on my side on that one.


> To me, before I met Unisys' MCP, it was the epitome of a user-hostile OS. Not because it's particularly difficult to navigate, but because everything is deeply alien (same feeling as z/OS, BTW).

Nit, but you're not describing something that's "user-hostile," just something that's unfamiliar to the user that is you.

Alien could actually be very good and user-friendly, since a lot of the stuff we're used to frankly sucks, and we're stuck at an inferior local maximum that's very hard to get out of.


> something that's unfamiliar

True, but unfamiliar and hard to learn combine to make it forbidding to newcomers.

Case in point: https://retrocomputing.stackexchange.com/questions/26398/how...


> True, but unfamiliar and hard to learn combine to make it forbidding to newcomers.

> Case in point: https://retrocomputing.stackexchange.com/questions/26398/how...

Though that's MVS, which probably should not be conflated with OS/400. The former is all kinds of trouble because it maintains compatibility with stuff from really old and limited systems, while the latter is quite a bit newer than UNIX, so it could have alien-advanced "science fiction" features.


True. It's a bit mind-blowing that some metaphors in MVS (that carry over to z/OS) are rooted in decks of punched cards.

OS/400 has much newer ones and some of those are futuristic even now (the single memory map that encompasses fixed storage is a pretty cool one, even though deeply alien for most people).


Perhaps it's bad PR?

The TRON MCP was definitely hostile. Not sure if its victims could have been considered "users", though.

https://www.google.com/search?q=tron+mcp

Master Control Program


It really didn't like users though. It tried to kill Flynn a couple times.


This is also very much like the Pick operating system; currently the biggest multivalue systems are UniVerse, UniData and D3. If you want to try them out, look into scarletdme. The multivalue style is a total mind-bend at the beginning!


When I think of Pick and PickBASIC, I also think of the old MUMPS programming language+database. Where the values in the database are exposed to the programming language as "global variables," and the data structure the database stores is a sparse multi-dimensional array. Still around, mainly for the healthcare and banking industries, in currently-maintained implementations... InterSystems, GT.M, and YottaDB (the latter two being free software).


My first job was supporting an in-house system built in UniVerse. My second job was supporting a commercial ERP that ran on top of UniData.

I like the more modern stuff I work with now, but I truly miss the multivalue world some days.

Every once in a while I install the UniVerse personal edition and build something for fun. I will definitely check out scarletdme!


Because I'm just a straight nerd for BASIC, here are some links!

A quick guide to getting started with scarletdme: https://nivethan.dev/devlog/scarletdme.html

Unfortunately I never ported my editor and shell to scarletdme so this will be UniVerse/UD/D3 specific:

An editor like vim: https://github.com/Krowemoh/eva

A fish like shell: https://github.com/Krowemoh/nsh

scarletdme in the browser(this will be a 500mb download as it runs debian under v86.js): https://nivethan.dev/projects/v86/scarlet.html


Very nice - scarletdme in the browser is brilliant!

The two things I do to start playing in a new MV environment both worked exactly as I hoped they would:

    LIST VOC
And

    ED BP BLAH
    I
    PRINT "THIS IS A TEST"
    
    FIBR
The environment feels very normal and reasonable, as far as a multivalue system goes. The only differences I came across are minor, like compiled programs going in BP.OUT instead of BP.O or the debugger being called DEBUG instead of RAID.

I definitely need to set this up and seriously play with it! I periodically download a new copy of UniVerse PE to play with, but this is something I could actually do real work in.

Believe it or not, I actually like working in ED and/or AE.

At one point (back in the UniVerse job), I wrote my own shell. It was basically a REPL that passed whatever you typed to EXEC, except for one nifty trick. The trick was that there were two REPLs in the program - one which worked normally, the other executed within a BEGIN TRANSACTION/END TRANSACTION block and had commands to commit and rollback. Having two REPLs was a concession to get it to even compile because BASIC required that BEGIN TRANSACTION and END TRANSACTION form a block, you couldn't just arbitrarily call those commands. I didn't even bother building a history mechanism, it was literally just a REPL that could work within the context of a transaction. It was incredibly useful.


That sounds pretty cool. I'm guessing you created a transaction block and then executed commands inside it, giving you a way to mess with the system without borking it. That is brilliant and is giving me all sorts of ideas :)

Honestly the best programmers are out here using ED. 0 syntax highlighting and you can only see 40-60 lines at a time. Insanity.

There is a 64-bit version of scarletdme in the dev branch and the instructions are the same; it should work on both Debian and CentOS.


ibm 4381 running vm/cms - best platform ever. beautiful hardware, wonderful operating system, brilliant documentation.


One of my first ("borrowed") internet accounts was on VM/XA SP around 1991. Yeah you had to hit CTRL-Z to clear your screen, but I was actually amazed at how friendly this machine was (and it would happily spit out pages and pages of helpful documentation), given that it was an "IBM mainframe." It was quite the experience learning to navigate the nascent internet and BITNet of the time on that thing.


We have 2.5 RPG/AS400 devs in my department keeping our old legacy system alive and trucking. They're great devs and they do good work, but every time I get roped into their side of things I feel lost and confused.


IBM i (AS/400) is (I think) the last of the midrange/minicomputers still being developed by the original company, vs. OpenVMS/VAX, HP 3000 - MPE/iX, etc.


Allstate at one point had about 16,000 of them - one for each field office. HQ would pull up new/changed data every night.


RPG, though ...


It's like functional programming in that everything useful an RPG program does is a side effect. A side effect of creating a report, in its case.


RPG is an abomination. It's a paradigm of the old plugboard tabulators from the 40's.

No indentation. Cryptic op codes. Rigid fixed-column OP1/OP2/operation/result. Assembler without the power.

"Repulsive Programming Garbage".


The latest version of RPG in current versions of IBM i is fully freeform! It's pretty wild how much they've evolved the language while still calling it "RPG". If you're maintaining old code you'll mostly run into the fixed-column stuff, but for writing new code at least you can make it somewhat readable now.


Do you still have to add every field in every table you use for your program to work?


yeah, it's pretty nasty. Free looks pretty java-centric. What's weird is you can in-line RPG. Makes for some crazy looking code.


That terminal is beautiful. The colors, typeface, and sparse layout make it incredibly readable for me.


Video author here -- The terminal emulator is tn5250j [0], configured to use the IBM Plex Mono [1] font. Full-screen it on a 1080p display and it really is quite pleasant! And there's something that just feels "right" about using IBM's font in this case.

[0] http://tn5250j.org/ [1] https://github.com/IBM/plex


I wonder what it'd take to make sqlite a filesystem driver for linux. Plop an sqlite blob onto a raw device, interact with it via the "posix filesystem" schema...
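
For a sense of what that might look like - a purely hypothetical, minimal schema (table and column names invented here) for exposing files and directories out of a SQLite database:

    -- One row per inode: metadata plus (for regular files) the contents.
    CREATE TABLE inodes (
        ino    INTEGER PRIMARY KEY,
        mode   INTEGER NOT NULL,   -- file type + permission bits
        uid    INTEGER NOT NULL,
        gid    INTEGER NOT NULL,
        mtime  INTEGER NOT NULL,   -- seconds since the epoch
        data   BLOB                -- NULL for directories
    );

    -- Directory entries: (parent directory, name) -> inode.
    CREATE TABLE dirents (
        dir_ino  INTEGER NOT NULL REFERENCES inodes(ino),
        name     TEXT    NOT NULL,
        ino      INTEGER NOT NULL REFERENCES inodes(ino),
        PRIMARY KEY (dir_ino, name)
    );

    -- Resolving one path component ("etc" under the root inode, say ino 1)
    -- is then a single join:
    SELECT i.* FROM dirents d JOIN inodes i ON i.ino = d.ino
     WHERE d.dir_ino = 1 AND d.name = 'etc';

A FUSE (or kernel) driver would then translate each filesystem call into queries along these lines.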



pwrdwnsys *immed


About 25 years ago I was trying to just stop an HTTP server but instead issued ENDTCPSVR (omitting which type of TCP server); everyone in my department lost their connection, and the sysadmin walked over to my desk saying this is why devs should not have any admin access.

I did a walk of shame around each desk apologizing profusely.


I always feel bad not giving it two minutes just in case! `pwrdwnsys delay(120)` for good luck. ha


REQUIRES QSECOFR


Man what a nightmare. I work at a place that still to this day runs on a mainframe with programs mostly written in assembly. It’s the pure essence of the boomer mentality manifest in software. All the people involved got theirs and retired and left a steaming pile of shit for everyone else to deal with.


Love this!



