
And it is quite a bit better than many $50k servers at handling a high volume of transactions while one of the CPUs and half of the RAM in the system are being physically replaced.



You are paying about 100x on hardware because your software is limited to a single address space.

Even if the mainframe never goes down, the site eventually will: the rack power supply, HVAC, fiber, a natural disaster, a backhoe, etc. will get you even if your CPUs and RAM are redundant and replaced before they fail. Then either your entire business stops (or at least processing for that region does), or your system is resilient to site failure because you built a distributed system anyway.

If you could rewrite your software to be distributed and handle a node or site going down, you could run a single site on 5 servers that together outperform the mainframe by a lot and can be serviced a whole server at a time (though of course expensive x86 servers also have reliability features), or use really cheap hardware without even redundant power supplies but have enough of them not to care.

The modern solutions are better than the mainframe, and the only reason to keep using one is management risk aversion and unwillingness to learn new things.


A single mainframe can easily be located in multiple datacenters. The Hungarian state-owned electricity distributor has one: one half is in Budapest, the other half -- if memory serves -- is in Miskolc, a bit more than a hundred miles away.


Is that a single-address-space system, or merely two systems with DB2 databases and disk volumes on a SAN in replication? I think it's the latter.

100 miles will add about 5 ms (round trip) to your disk flush on commit. So a system like this has the sequential and random IO latencies of a RAID of SSDs but the flush (database commit) times of a 15K RPM spinning-rust disk. People lived with mechanical disks; it's okay.
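Rough math, assuming roughly 160 km of separation and light travelling at about two-thirds of c in fiber (the actual path length, the gear in between, and the number of round trips per commit are all guesses):

    # Back-of-the-envelope propagation delay for ~160 km of fiber.
    # The ~5 ms figure above presumably also covers a non-straight
    # path, equipment hops, and possibly several round trips per commit.
    distance_km = 160              # Budapest to Miskolc, roughly
    fiber_speed_km_s = 200_000     # ~2/3 of c in glass
    one_way_ms = distance_km / fiber_speed_km_s * 1000
    print(f"one way: {one_way_ms:.1f} ms, round trip: {2 * one_way_ms:.1f} ms")
    # -> one way: 0.8 ms, round trip: 1.6 ms (raw propagation only)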

Sync disk replication (in one direction) over a fiber line is not an exclusive feature. Having both sides be active, instead of active plus hot standby, requires some smarts from the software, but modern distributed databases do that, and if you're careful you can get far with batch sync jobs.
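A toy sketch of the trade-off (not any particular product; the function names and the ~5 ms RTT from above are made up for illustration):

    import time

    LINK_RTT_S = 0.005  # assume the ~5 ms round trip discussed above

    def ship_to_standby(record: bytes) -> None:
        # Stand-in for sending the record over the fiber and waiting
        # for the standby to confirm it hit stable storage.
        time.sleep(LINK_RTT_S)

    def commit_sync(record: bytes) -> None:
        # Local log write would happen here (omitted); the client is
        # only told "committed" after the standby has the record, so a
        # site failure loses nothing but every commit pays the RTT.
        ship_to_standby(record)

    def commit_async(record: bytes) -> None:
        # Acknowledge immediately and replicate in the background or in
        # batches; commits are fast but a site failure can lose the
        # most recent ones.
        pass

    # If both sites accepted writes ("active-active"), you would also
    # need conflict resolution or agreement on ordering -- the "smarts"
    # modern distributed databases provide.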


It might just be storage; it was brought up as an example of how mainframe subsystems are essentially their own world, and a single disk or volumes distributed over a long distance are presented the same way to the rest of the system. In the Linux world, DRBD does something similar (just much simpler). The point, however, is that the software knows nothing about this being distributed.



