I see a lot of people knocking these things in the comments...but if you want guaranteed employment, you should be trying to get experience with COBOL. I'm telling you.

Truth is, these things are so much more stable and secure that most projects running on large hardware (think Oracle anything) would end up saving money by running on a mainframe.




> Truth is, these things are so much more stable and secure that most projects running on large hardware (think Oracle anything) would end up saving money by running on a mainframe.

Stable, maybe, but probably not secure. The software stack on these was designed well before anyone had any real interest in network security. I saw a YouTube talk by a security researcher who took a look at one and found all kinds of inadvisable stuff. He also said there were very few people looking at these from a security perspective.


I saw the same presentation. You're right, they could use a penetration team in the release development cycle.

I was referring to the access control model that's built into mainframes. They were pioneers of mandatory access controls and fine-grained security.

Having fewer people working on them is a definite issue, though, when it comes to the kinds of exposures that mainframe integration into a corporate environment entails.


I'll keep my "non-guaranteed" employment any day over COBOL.

Those might be reliable but you could achieve the same reliability on commodity hardware for a lower price if you write distributed software and plan for failure.

My SIP server software (for a startup I hope to launch soon) can have its power cable pulled and nobody will realise, because the server beside it just took the process over. For the price of a single mainframe I can put hundreds of servers all around the world and still come out cheaper.


Right. There are some things that are so important that you only want a single machine running them, and you want that machine to have all of the redundancy built in.

I'm not saying that distributed systems aren't great...I make my living off of them...I'm just saying that there are classes of business problems where the risk calculation still comes down on the side of mainframes...and probably always will.

The COBOL comment...whatever. To each their own. I think it's cool. IBM embraced web services in a huge way, so basically any COBOL function can be turned into a web service automatically. The number of people who know how to work on these things is pretty small and getting smaller. You sound like you are comfortable with the risk involved in a startup. I'm not. Lots of people aren't.

I sincerely hope it works out for you though. Good luck. It sounds like you are working on something really cool.


Since you seem to have more knowledge than I do about this, would you have an idea why COBOL is still around and not being phased out? Is there any particular reason they can't make a mainframe that you can program in Python (my poison of choice)? Or does this built-in fault tolerance require some specific language, and most commodity languages aren't suited for it?

By the way, thanks, I'll definitely need the luck. It will be cool if it succeeds, yes.


The fault-tolerance stuff does not depend on any language. If a fault happens in a CPU (for instance) while running your application, the checkpoint (pre-failure) of that CPU will be copied to a spare CPU and your application will continue as if nothing happened. The application won't even be aware of the sparing.

All mainframes can run Linux. A single mainframe can run thousands of Linux servers. Except for the difference in arch (s390x vs x86_64) you probably wouldn't know you are on a mainframe. So of course any Python code will run just fine.
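
To illustrate how little an ordinary program has to care, here's a minimal Python sketch (purely hypothetical output values): about the only hint you'd get is the architecture string.

  import platform

  # Ordinary Python code is unaware it's on a mainframe.
  print(platform.system())    # "Linux" either way
  print(platform.machine())   # "s390x" on a mainframe Linux guest, "x86_64" on commodity hardware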

COBOL is used with a different OS, z/OS. The combination of hardware, z/OS, and COBOL can make MUCH more efficient use of the hardware than you could ever get with Python. If you would like an explanation I will give you one.

The current language of choice on the mainframe, for new development, is Java.


Great reply, cheers.

EDIT: I would like to hear more about COBOL and z/OS. Can you explain your efficiency statement?


One of the common things that mainframes and COBOL are used for is financial systems, and that means a lot of fixed-point decimal data. So suppose you have a file containing an account number and a balance, like '9876543210 000000123.45', and you want to add to that balance.

In Python, once you have that record read in, you have to split it apart by some means. That takes CPU work. Python will then store the pieces as new objects, and that takes CPU work. Then you have two strings, and one of them (the balance) must be converted to a number that can be operated on, more CPU work. Then you can do the math, using the Decimal module, which is more work. The result then has to be converted back to a string and written back to the file.
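
As a rough sketch of those steps in Python (the field widths and the amount added are made up for illustration, not taken from any real system):

  from decimal import Decimal

  record = "9876543210 000000123.45"

  account = record[:10]             # slice out the account number (new string object)
  balance = Decimal(record[11:])    # another new string, then conversion to a Decimal object

  balance += Decimal("10.00")       # the actual arithmetic, done in the Decimal library

  new_record = f"{account} {balance:012.2f}"   # convert back to a fixed-width string
  print(new_record)                 # 9876543210 000000133.45

Every one of those lines is general-purpose interpreter and library work; none of it maps onto the decimal hardware described below.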

COBOL, on the other hand, understands record formats, understands decimal data, and knows that the hardware supports decimal data. So the same task (add to a decimal number in a record) that in Python may take many thousands of CPU instructions can be done in three: PACK (convert the printable (zoned) decimal number to packed decimal), AP (add packed, performing the addition), and UNPACK (convert back to a printable number). And those three operations are performed in dedicated decimal hardware in the CPU.

The important thing to remember is that in the mainframe world, not only is the hardware priced according to the performance of the machine, the software is also. This is why there are so many performance levels (over 100) to choose from. Users naturally want to buy the smallest machine they can get that will handle their workload, and they aren't interested in wasting any processor power doing things not absolutely required by their business needs. For this reason, you won't find any mainframe shops running traditional mainframe workload in a language like Python.



