Western civilization runs on the mainframe (fosspatents.blogspot.com)
154 points by a2tech on Aug 2, 2010 | 149 comments



As a 25-year professional z/OS programmer working primarily in COBOL and CICS, I think the dearth of new COBOL programmers is not the big problem it is made out to be. Having learned C, Java, SQL, Common Lisp, and Scheme on my own, I have no doubt that programmers will jump to COBOL when the marketplace starts offering rewards (higher salaries) for doing so. The outsourcing trend is holding this back as companies continue to pursue the holy grail of cheap programmers, but all such efforts eventually run afoul of the need for local talent with a deep understanding of the application. A good programmer can get up to speed on COBOL very quickly. Getting up to speed on a large application? That'll take a few years.

The problem with legacy apps on any platform is not the language (unless you're talking about some very obscure unsupported language). The problem in my experience is the tangled mess of new code piled on top of poorly designed old code. In such systems, a small change can have unintended side effects. The solution is exhaustive testing (expensive and time-consuming) or a Hail Mary installation (leading to "testing" in production and yet another reactive fix). COBOL didn't create this problem and changing to another language won't fix it. In my shop, we practice test-driven development and deliver high quality releases. Success on the receiving end of these releases varies along with the established coding/testing/integration practices at each customer site.


Given that you guys are into TDD, I'm assuming you're running Cobol on Cogs?

http://www.coboloncogs.org/INDEX.HTM

(sorry, couldn't help myself)


True, except that a lot of programmers want to follow the "sexy" technology, and COBOL just ain't that.


Working on big iron, I get to work with assembler, CICS, MQ Series, and DB2 in addition to COBOL. No, it's not flashy like web apps, but there's a certain appeal to bit-diddlers like myself. Perhaps it's the same as the difference between driving a PT boat and working in the engine room of an aircraft carrier. I would choose the carrier.

Also, Greenspun's Tenth Rule is in effect in the COBOL world as much as in any other language.

http://en.wikipedia.org/wiki/Greenspuns_Tenth_Rule

The difference in the COBOL world is that you are more likely to work with people who would not know what you are talking about if you mentioned Lisp or Greenspun's Rule, or even the word "predicate", just to pick an unrelated example. So if you are into programming more than you are into business, the mainframe world can serve as a sort of playground for exercising your programming muscles in ways that suit you, even if your colleagues aren't aware that something different is going on.

For instance, I generate a lot of code using a Common Lisp system I wrote for my own use. No one knows or cares--all they know is they seem to get reliable code from me very quickly. If I suggested that more people use the Lisp code generator, it would not fly because it would require training and might cost maintenance dollars. So it remains a personal tool.

More: My job involves a weekly task to analyze a release while it is under development. My predecessor did this manually and took all week. I wrote a Java/MySQL system to do it, and it takes me all of 10 minutes a week. Another win for the intrepid programmer.

More out in the open, I recently led development of a rules-based system. The team did not speak rules, evaluation, predicates, or the general use of collections. The challenge for me was to design the system using these concepts, then present it to developers who will never want to learn the general concepts. Done in record time, by some mysterious process (I broke the design into small pieces w/o reference to the comp sci terms, gave each developer a focused task, and tied it all together myself).

Definitely more Sears foundation garment than Frederick's of Hollywood, but I like it.


You wrote a compiler in Common Lisp that auto-generates COBOL and your co-workers haven't noticed this?


Nay, not a compiler. I wrote a set of libraries in Common Lisp that auto-generate the text of COBOL programs. It uses a combination of template selection and textual substitution, all based on a spec written in a spreadsheet. I work from home, where I am free to Get Things Done with no need to be seen poking away on the standard 3270 terminal to be considered working.
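
To make the template-selection-plus-substitution idea concrete, here is a minimal sketch (in Java rather than the Common Lisp system described above, and with invented template and field names; the real spec would come from a spreadsheet rather than a hard-coded map):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Toy sketch: pick a COBOL code template and fill it in from a "spec".
    public class CobolTemplateDemo {

        // One of several templates; which one is selected would depend on the spec.
        private static final String MOVE_PARAGRAPH =
            "       {PARA-NAME}.\n" +
            "           MOVE {SOURCE-FIELD} TO {TARGET-FIELD}.\n" +
            "           PERFORM {NEXT-PARA}.\n";

        // Plain textual substitution of {PLACEHOLDER} tokens.
        static String expand(String template, Map<String, String> spec) {
            String out = template;
            for (Map.Entry<String, String> e : spec.entrySet()) {
                out = out.replace("{" + e.getKey() + "}", e.getValue());
            }
            return out;
        }

        public static void main(String[] args) {
            Map<String, String> spec = new LinkedHashMap<>();
            spec.put("PARA-NAME", "2100-COPY-CUST-NAME");
            spec.put("SOURCE-FIELD", "IN-CUST-NAME");
            spec.put("TARGET-FIELD", "OUT-CUST-NAME");
            spec.put("NEXT-PARA", "2200-COPY-CUST-ADDR");

            // Emits ready-to-compile COBOL source text.
            System.out.print(expand(MOVE_PARAGRAPH, spec));
        }
    }

The output is ordinary COBOL source text (a paragraph with a MOVE and a PERFORM); the generator's value comes from having many such templates and driving their selection from the spec.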


It's amazing how sexy technology can get when someone's offering you a six or seven figure salary to figure it out.


Really? Because I keep seeing articles on HN about how employees can't be motivated by pay increases, they have to be motivated by things like appreciating their work, sense of accomplishment, etc. etc.

Both of these things can't be true!


The thing is, both things can be true. They're just true for different people.

Those high salaries won't generally motivate an "in it for the tech", startup-friendly developer of the type that commonly frequents HN... but a bank with a legacy mainframe system probably doesn't want to hire that person, and they don't need to. The technology isn't the core of their business.

On the other hand, there are many highly competent developers out there who view their job as a means to an end, a way to support their family/life/hobbies/etc. To these people, learning a highly specialized legacy skill and making lots of money is exactly the right motivator.

As an example, I present my father. He has been writing data management software for hospitals for decades, and does all his work on midrange IBM AS/400 ("System i") systems. He writes all his code in IBM RPG--"report program generator" code. In recent years he's learned Java, C and Unix systems to keep himself fresh, but he's never found the need to actually use any of those skills, or a job using these that would treat him as well as his legacy work.


Could not have said it any better myself. Every time someone mentions that money isn't a motivating factor to get more productivity, I want to scream! For YOU maybe not, but for ME? Yes please with a bonus on top!

I wonder how long it will take before Java becomes the next COBOL (unless it already has...)


I think it's better to say Java is another COBOL: a default choice for business, stable, and likely to hang on through several generations of programmers.

I won't call Java the "next" COBOL until COBOL goes away. Which I predict will happen sometime between 2050 and the death of the sun...


Well, it's an obvious fact that we IT professionals... err, programmers... err, software engineers... err, code monkeys... anyway, we're all the same and are all motivated by the same thing.


There's a difference between motivating people to do their job better, and convincing people to do the job in the first place.

You would need to pay me a lot of money to convince me to work with COBOL full-time. Once you've convinced me to do the job, paying me more won't make me any better at it.


Well there is a certain "wall" that employers need to climb.

For .NET the "wall" is $30-50k. For COBOL the "wall" seems to be $50k+ (I'm conservative about this estimate on purpose). So beyond this "fair" compensation for one's troubles, the pay doesn't scale anymore.

So it's not that people wouldn't want to work with COBOL or perceive it as sexy - it's just that the "sexiness" point is a couple of tens of thousands of dollars higher than for other tech.


One of them is true when money is limited, and it's actually possible to know everyone else in the company. Another is true when you can throw millions of dollars at a problem to make it go away and barely even notice that the money is missing.

The rules a company follows at small sizes and large sizes are not necessarily entirely the same, especially once bureaucracy sets in.


It has nothing to do with COBOL not being sexy; it has everything to do with the applications.

Bookkeeping in any form isn't sexy, but it is where the big money is when you're 'just' an application programmer.


What is sexy today is tomorrow's "ancient" legacy technology. And once you've been in the business for a decade or so, you start seeing the old ideas coming around again, cloaked in some new names, with a lot of newbie fans excited about how sexy it is.


There's no pressing reason for a company to toss its legacy mainframes. What VP would sign off on replacing core systems that run an international corporation? The downsides are infinite! I mean, are you insane? Just throw money at an "expensive" COBOL programmer and get out of my office!

I grilled an employee at a Major Package Shipping Company a few years ago about their infrastructure, and it's pretty stereotypical: a few mainframes that were coded decades ago in COBOL and an expensive team of legacy programmers performing surgical tweaks. They're porting unimportant side systems to modern hardware + languages (ostensibly so the mainframe programmers can focus on the major systems), but there are no plans to replace the core. Ever. Replacement systems just aren't good enough to consider switching the heart of their company. And so it will continue until there's a compelling storyline for dropping mainframes.


These corporations may not see it, because they do not understand it, but having such legacy systems is a risk to their operations. The longer they wait to migrate, the more expensive it will be, and the harder it will be to attract talent to the project. There may come a point where they can no longer find anyone to support their hardware or software, and it'll force them to stagnate against their competition, who may be able to provide better or different offerings because they are not tied to a system they can't modify.

My understanding of business is that you should never stagnate, and always keep advancing.

People that I've talked to who have worked with such systems indicated that they built giant shims around the systems in other languages to allow them to be adaptable. In theory this gives them a clear interface definition (it may be huge, with thousands of method signatures and possibly direct schema access, but it can still be a clear interface) to bootstrap a core system replacement in another language and hardware system.


The laws of supply and demand will make sure that there will always be people available to maintain those systems. The price will go up until some 'modern' programmer is tempted enough to open a 3 decades old book on COBOL and JCL.

Your understanding of business is flawed; businesses advance by marketing, not by shacking up with every hot new tech that comes along in favour of their old, trusted and reliable tech.

The larger, older and more profitable a business is, the more likely it is to be ultra-conservative about using new technology at the core of its business.

Eventually we'll get rid of this stuff but I wouldn't bet on it happening in the next two decades, unless someone finds a way to emulate a mainframe on a cluster with similar reliability. And then we still need to get rid of the software.


You don't think that there is room for a new high-tech bank to step in and eat the lunch of the big banks? If you started a bank from scratch, built with modern technology, you could provide all kinds of valuable services and adapt to change much more quickly. Pretty much every service provided, particularly on the web, is some application that screenscrapes CICS screens or reads data from VSAM. These applications are tricky to build and take much more time than they should.


Starting a bank from scratch will have you worrying about things other than the technology of your IT department, mostly to do with solvency and staying clear of SOX.

Sure, at some level the technology will come into play, but nobody will 'start a high tech bank'. For instance, ING Direct (one of the efforts of the banking industry to leverage the use of technology) was jump-started from the legacy ING systems and as far as I know still ties in to them for all of their back-end stuff.

When I was the 'systems administrator' for the corporate division of a bank (in 1986 or so), we had daily rounds of getting the signatures of the various people responsible that the totals for the night batch runs were correct. That's the sort of responsibility you're talking about. I can't imagine that an MD5 hash or some digital signature would be sufficient today; it's all about procedures and accountability, emphatically not about technology.

I got one of the first PCs the bank bought on account of our project manager being very forward thinking but nobody ever thought of it as more than a curiosity and something that might be a handy replacement for the terminals.

That attitude has changed a bit, but not by much, and that was 25 years ago.


Well, nobody would have built another search engine either, but google did it. Part of the problem is the legacy mindset of the higher ups at financial institutions combined with their extremely conservative nature.

I currently work in a fairly large financial institution, giving them a web presence for various services and the backend system is no longer a mainframe but an AS/400. Though it still runs COBOL-based apps, so it's really no different than the standard mainframe.


I'm not sure I follow you on the google link here.

Google built another search engine, which ran on a cluster of PCs and did so more efficiently than the big competitor at the time, AltaVista, which ran (afaik) on a cluster of DEC Alphas. Because Google needed a lot of horsepower and had a tight budget, they came up with some pretty clever packaging ideas for their cluster.

In that sense both these companies took the best route to a solution. Now a search engine is materially different from a bank in that if something goes wrong in a bank the potential damage (not just to the bank, but also to the customers of the bank) is enormous.

A search engine that spits out 'wrong' results is still a search engine. A bank that doesn't know where your money went is no longer a bank (at least, not for very long).

Financial institutions rely heavily on their image of reliability, much more so than any web company.


I was speaking more about the mentality that led to the business opportunity that google exploited. More so than the technology. The mentality at that time was that "nobody would build another search engine". Any time I hear someone say "nobody would..." I think about how that mentality allows you to exploit that opportunity.

In this case, I'm referring to banking. You've got an industry that is based almost entirely on legacy systems that has not evolved with the progress of technology. As correctly pointed out, you would not want to modify an existing legacy system to move it to a new technology, which is why nobody has done so. What I'm referring to is a new, from scratch, bank with new technology. One that would allow you to disconnect from the current banking model and allow you to offer services that nobody could provide with today's legacy systems.

Combine this with the fact that if you ask any bank customer, they will tell you how much they hate their bank and hate the service. Sounds like it's ripe for the picking with the right investment, investors and people.


I don't think the mentality was that 'nobody would build another search engine'; it may have been around where you were at the time, but I know plenty of people - myself included - that were wondering about building another search engine. Also, AltaVista definitely wasn't the only player. Even today, with Google as established as it is, I'm pretty sure there are at least several hundred people involved in building new search engines. Whether one of them replaces Google at any point in time remains to be seen, but it's definitely not classed as a 'hopeless problem'.

As for the banking bit: you are missing my point. Nobody builds a bank because of the technology; people found new banks because of the money. If that means adapting old, tried and true technology to jump through new hoops, then that's a cost of business. Bankers are conservative by nature, not because they like it but because it makes very good business sense when you're a bank. It's no coincidence that banks, insurance companies and governments are the areas where the mainframe is still dominant.

I'm really not sure what kind of services you could offer with a 'new' software stack that you could not offer with the old one. Bank transfers are routinely done in seconds between banks, you can pay with everything from credit cards to mobile phones and computers, they seem to have done a remarkable job at adapting to new tech while remaining conservative at heart.

People hate their banks mostly for the costs and for the lack of human interest, rarely if ever because of the lack of technology. In fact, I could do with less technology and better human interaction.


I can think of a lot of new products and services a bank could offer. You would have the opportunity to rethink the standard checking/savings account world and hybridize CDs and savings accounts. There has been so little innovation in banking (some would say that it's because of how banks are tied to legacy infrastructure) that a re-think of what exists and what customers want would be very refreshing.

I think the best part would be that because of the legacy technology in place at existing banks, they would have a very hard time making changes to adapt to anything new that would be offered. Just imagine adding a new account type that isn't checking or savings and has new rules. How long would it take for a bank to re-write their backend system to incorporate that change? A long time.

If a disruptor entered banking and did offer this kind of change, I think you'd have banks banging down your door to buy you out, if nothing else.


Interesting. My bank has lots of different account types that aren't checking or savings, and all of them have different rules (and I use a few of those).

There is a big temptation to assume that banks don't do stuff because they're technologically inept, but I don't think that's the case. The bank I worked for was a subsidiary of the Chase Manhattan bank and as far as the hardware went they were cutting edge at the time.

Software driving the products that were the bank's income was extremely well documented and extremely well protected in terms of changes to the code and who could do what with it. Security was tighter than a gnat's ass, both physically and network-wise, with round-the-clock monitoring and so on.

A bank's idea of disruption is not the fear of some new tech; banks know that technology only appears to move fast, and they can afford to wait until something has proven itself.

A bank fears things like PayPal, which obviate the need for banks and their 'products which are the privilege of accessing your own hard earned money'.

Banks have tons of special products in their inventory; the reason you don't actually see them in the wild is that you are mostly looking at the consumer side of the operation. On the business side the special products are the majority. Then there is all the regulatory business; that too is responsible for an enormous amount of code.

For a newcomer the barrier to entry is not technology, it is simply the vast headstart that existing banks have in terms of knowledge, marketing power and financial resources.

I'm only aware of one bank that started in recent times that was not tied to another bank, and unfortunately it went bust.

Thanks for the exchange by the way, that was interesting. Who knows, maybe one day you'll found that disruptive bank based on high-technology!


I think there are new banks starting up and some probably see themselves as high tech operations. But in that sector that means J2EE and not Ruby on Rails.


Personally I think that any new banks should use NoSQL to store their data and use Haskell for their core code.


I don't disagree, I just meant it from a realistic/pessimistic point of view.


It was in jest. A serious answer would have been to use Erlang, which I think is 'hot' new tech (for those that don't know how old it really is, it's 'new' to them so it must be new to everybody) and a suitable replacement because of the design, rather than J2EE.


I imagined reading that aloud in a thick Swedish accent to get in the mood for how serious Erlang is :) (just joking: I'm referring to the famous Erlang demo video)


Why so?


Starting a new bank - if only it were that simple.

I actually looked into the requirements for establishing a new bank (not in the US), and they were so prohibitive (tens of millions in reserve from day one, a virtual obligation that no single shareholder hold more than a 5-10% stake), not to mention regulatory requirements like Basel in Europe and SOX in the US, that the big banks own the market in many countries, with no fear of swifter-moving newcomers "eating their lunch". Not to mention people's resistance to change, especially with something as important as their money. And that is only on the retail banking side. It's even worse in i-banking, with the bigger investment banks making more money than ever on Wall Street, as if the crash of 2008 never happened. They may be crooks, but they have the money and influence to make sure the laws get written in their favour - see https://www.banksimple.net/blog/2010/07/21/banks-strike-back... (there was an article on HN recently around this post.)

I also looked into providing the "valuable services" you mention, a la Mint.com (no equivalent where I live), and this is an area where I believe there is real potential (you would have to build a service that taps into banks' online portals and offers more useful features than the banks themselves, e.g. account aggregation, financial product comparisons, tailored financial advice, etc.). I think the future is one where virtually all ordinary transactions will be done online (paying bills, transferring funds), possibly on smartphones, and where you only go into a branch to get a mortgage or something major - and possibly not even then. Of course you can do all this already online, but the world is still very cash- and cheque-based outside of the US. It will be a while yet until the majority of people are willing to do 100% of their banking with an institution that is 100% web-based.


It looks like at least one company is trying this: https://www.banksimple.net/about/


That doesn't say anything about their tech, and they're not open for business (yet). It will be interesting to see how they fare.


Don't presume that you're operating with sufficient domain knowledge to make those judgements, and don't presume that the folks that are making those judgements are ignorant of the options and alternatives.

When you're operating at the juggernaut scale of business or with industrial-scale production, the rules and the requirements and the competitive landscapes are different.

And the risks can come not from the existing environments - which generally work - but from making the changes to the environment, and from the risks of outages and downtime.

Every vendor that can make even a tangential case for a platform or application or process migration or update or new hardware is certainly pressing that, too; that's an automatic education for the folks involved.

If you can't process customer orders for a week or a month or can't ship orders or your outage slags a whole production line, you could well end up out of business. I know of places that have had similar meltdowns during updates and during migrations, and it's Not Pretty.


You have to remember that they use these systems to run their business, and they've been tested for thousands or millions of hours, so the system is extremely reliable. It is naive to suggest switching to a newer platform or language. Those things are always changing. Also, once you are a seasoned software developer in multiple languages, how long do you really think it will take you to learn COBOL? Not much, I can assure you. You'll probably be familiar with the basics in a couple of days, and maybe even know enough to be dangerous. Switching platforms just for the sake of switching is extremely dangerous and potentially quite costly, especially when from the customer's point of view it really makes no difference whether behind the scenes the business is using COBOL, Fortran, Java, Ruby or whatever new fad of the day. If the system is going to behave exactly the same whether you use Java or COBOL, then what is the point of switching?


A million hours is more than a century, I think that is somewhat unlikely.


No, it is not. Thousands if not millions of people use their systems every year. Every hour someone uses it counts as time testing the system. Notice how quickly the hours add up if you look at it this way.


Ok, I was thinking in terms of the run-time of the software, not in terms of the time people spent using the system. But that's a valid way to look at it.


You could say the same about Windows XP.


He's talking about FedEx. They understand that it's a risk to their business continuity. The guy who hijacked a FedEx plane planned to fly it into their datacenter near the airport, which houses the primary mainframe that runs FedEx's most critical business software. He wasn't an IT employee either, so it's fairly common knowledge. AFAIK, they are now running in a hybrid mainframe / client+server model. Most of the package transactions are processed on both systems simultaneously. Apparently all of the tracking on FedEx.com uses the client+server system to back it. Obviously they are taking it slowly and carefully (it's like a decade-long project) to avoid any screwups.


When you're working at this scale, geographic redundancy of DCs is commonplace.

I know of several cases where entire DCs were destroyed, and production continued unabated.

There are commercial platforms that are explicitly designed for this sort of DC redundancy over 800+ km spans. Where you have 400 km between various volumes in your RAIDset.


Do you have links? That's a scenario I've always been curious about, and would like to read about situations where it actually happened.


http://h71000.www7.hp.com/openvms/brochures/commerzbank/comm...

A moderately technical article on some of the issues that arise, and (for your question) see the end of the following:

http://h71000.www7.hp.com/openvms/journal/v1/disastertol.pdf


Fascinating.

"At 2 a.m. on Dec. 29, 1999, an active stock market trading day, the audio alert on a UPS system alarmed a security guard on his first day on the job. He pressed the emergency power-off switch, taking down the entire datacenter. Because a disaster-tolerant OpenVMS cluster was in place, the cluster continued to run at the opposite site, with no disruption. The brokerage ran through that stock-trading day on the one site alone, and performed a shadow full-copy operation to restore redundancy in the evening, after trading hours. They procured a replacement for the failed security guard by the next day."

The last sentence.


It's a little disturbing that a security guard had the ability to shut the whole thing down. Is that normal?

I would think they'd keep the power systems under lock and key, or at least under the control of someone who knew what to do when a UPS starts making noise.


The big red buttons are often there as a safety measure in case someone starts getting electrocuted. At very least, though, these very tempting buttons should be labeled in very dire language that they're not to be pressed unless either a) someone is about to die or b) you actually know what you're doing.


Nope. Data centers are really easy to shut off.

Emergency Power Off buttons are often required by code to shut down breakers and UPS.

I've heard that this is to prevent loss of life if a tech grabs a live wire, and/or to let the fire department know that power is removed before they start spraying water.


I believe that almost every "Major Package Shipping Company" uses mainframes in their operations, not just FedEx. My friend works as System/z administrator for DHL.


The rules of the free-market game say that someone using cutting-edge technology will come along and sweep all before them. Which doesn't seem to happen. So maybe, as long as it works, it doesn't matter if it's COBOL and the expensive COBOL programmer is being paid peanuts compared to the money being made.


Technology is irrelevant. It doesn't happen because there are so many requirements and regulations that even if you had the world's smartest hackers writing in the purest Clojure/Ruby/etc, your software would still have to contain all this horribly complicated business logic and would take years to develop.


Or compared to the money that could be lost in case of a business interruption.

Some of these companies carry a downtime penalty on the order of hundreds of dollars per second. You really don't want to mess up when there is that kind of money on the line.


By 'the rules of the free market game' I presume you mean the Efficient Market Hypothesis.

The EMH supposes that every actor has total and correct information, every actor is rational and products are fungible.

Whilst we might approach this on, say, a stock exchange, to imagine that most companies compete in an efficient market - when so much of modern business is about branding and advertising (destroying fungibility and corrupting actors' information and rationality) - is, I suggest, somewhat naive.

Essentially modern businesses do not generally compete on product, they compete on their ability to distort the markets they're in.


> The mainframe software market is twice as big as the Linux market

That's one hell of a compliment for Linux then.


I came here to mention that was the most stunning bit of the article. Not the fact. But that someone is using the size of the Linux market to demonstrate how large mainframe's share is.

I'm old and have lived through the growth of Linux. Starting as nothing, through all the FUD, the naysaying pundits, the claims it was communist, MS's many monopolistic attempts to squash it, etc. Comments like that still make my jaw drop.


... and we are talking about "market", that is, about usage of Linux that involves the exchange of money, no? So that's even more impressive.


In fact, western civilization runs on the mainframe AND Excel. As a consultant, I see a lot of both...


More broadly, western civilization runs on entrenched systems which no one has the will to replace.

Sometimes it's easier to stick with what you know works than to try the newer thing, and that's probably the reasoning used by the people making the buying decisions.


Why would Excel need to be replaced? It's good software.


It isn't so much Excel as it is spreadsheets in general. The cycle I've seen is that when a group starts out, they use Excel because it is a rapid tool that they understand. But then they keep building on that until they have a very complicated mess on which they are dependent. Spreadsheets don't tend to have good tools for testing and can be very brittle, so they end up slowing things down and causing problems.

There are some interesting case studies here: http://www.eusprig.org


An argument can be made for replacing brittle, difficult-to-follow, twisty spreadsheet programs (which is a lot of what people do with Excel that they can't do with anything else) with more conventional programs; maintainability benefits could be significant, just to pick one low-hanging fruit. But there's no money in it.


I was talking more about mainframes than Excel. I also posted that before the thread was full of comments that changed my view on mainframes.

As for Excel, I've read about a lot of situations where it's used for things it's not a good fit for, especially situations where a simple database and frontend might be more appropriate.


Anyone know why legacy mainframe software couldn't be emulated/compiled to run on a cluster of Linux boxes? Licensing, patents, legal concerns, etc? Obviously this is not a trivial issue, but it seems as if the amount of data we're talking about is not immense. Banks exchange and process large amounts of information, but how does it really compare to what people are doing with Hadoop or what Google does with MapReduce? I would imagine Google processes orders of magnitude more information than banks.


There can be substantial differences between what's technically feasible, what's economically feasible, and what's politically feasible.

An existing environment is usually a large pile of twisty dependencies on operating system calls, on database or file formats that aren't entirely portable, on language extensions that don't exist elsewhere, on objects built from source code that's been lost, on dependencies on products from vendors that no longer exist, and it's all often inextricably linked with front-ends that would have to be rewritten, data bridges that would have to be rewritten, and users that need to be retrained.

In some cases, this effort can involve re-architecting substantial parts of a manufacturing facility involving six or ten buildings that are each most of a kilometer long. And with many industrial-scale environments, change is not going to happen. You (might) see a new environment implemented when the entire production line is nuked and paved at the end of its useful lifetime, and replaced with a newly-deployed production line. In other cases, you'll probably see the existing system patched and re-used.

For many of these cases, just ramping up a load test would be a substantial effort.

Some hunks of the environment will have test cases and specs and documentation, and some hunks probably won't.

For many of these installations, substantial downtime has SEC-level visibility, or can involve exposures to various legal entanglements, or can involve re-establishing politically- or legally-mandated external certifications, or can cost thousands or millions of dollars per unit time off-line, or requires the production line to be cleaned with a jack-hammer or entirely replaced, or other issues.

If you can afford an incremental port or a big-bang port, well, what's that cost in comparison with the incremental costs of keeping the existing environment going?

C-level folks just don't blindly sign the really big bills involved in keeping these environments going, and most competing vendors are certainly seeking to acquire at least part of these mainframe-scale budgets.


Not sure about running on Linux, but there are COBOL and CICS systems that run on small servers. In fact there are a whole range of mainframe-like machines (e.g. AS/400) from IBM, to support different size businesses. The amount of data in some applications IS immense. If you are managing a multi-million-account credit card portfolio on a single mainframe, which nicely supports all your realtime apps (including a 100-request per second authorizations system) and your 6-hour batch cycle, there is no reason at all to port that to smaller machines. The mainframe is marvelously stable and nowadays the whole machine rarely gets hung up (I haven't seen or heard of this happening to my company or its customers since the early 90's).

Sure, you could move those apps to a bunch of smaller boxes. Then you open yourself up to server failure, admin costs of maintaining multiple boxes, etc.

Mainframes wrote the book on SaaS long before 'net cloud computing came along. IBM and others have operated computing as a utility for years and years. What manager who gets his IT done on rented computer time is going to suggest taking the apps in-house on a bunch of servers? None that I know of.


The simple answer is that you have to have a consistent view of the world. Let's say that 100 Linux boxes in a cluster == 1 mainframe. Let's also say you're a bank. Someone connects to node 1 and makes a withdrawal from a joint account, and someone simultaneously makes a withdrawal from the same account on node 100. How do you check they haven't gone over their overdraft unless you can serialize those transactions? And if you can, and make it scale, congratulations: you're now where IBM was in the '70s.
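
To make the serialization problem concrete, here is a minimal single-node sketch (plain JDBC with row-level locking; the table name and connection string are invented for illustration). The hard part described above is making this hold across 100 boxes, where the shared lock itself becomes the bottleneck:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Toy sketch: two withdrawals against the same account must not both succeed
    // if together they would overdraw it, no matter which node they arrive on.
    public class WithdrawalDemo {

        // Row-level locking (SELECT ... FOR UPDATE) forces concurrent withdrawals
        // against the same account to execute one after the other.
        static boolean withdraw(Connection conn, long accountId, long amountCents)
                throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement lock = conn.prepareStatement(
                    "SELECT balance_cents FROM accounts WHERE id = ? FOR UPDATE")) {
                lock.setLong(1, accountId);
                try (ResultSet rs = lock.executeQuery()) {
                    if (!rs.next() || rs.getLong(1) < amountCents) {
                        conn.rollback();   // no such account, or insufficient funds
                        return false;
                    }
                }
                try (PreparedStatement upd = conn.prepareStatement(
                        "UPDATE accounts SET balance_cents = balance_cents - ? WHERE id = ?")) {
                    upd.setLong(1, amountCents);
                    upd.setLong(2, accountId);
                    upd.executeUpdate();
                }
                conn.commit();
                return true;
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }

        public static void main(String[] args) throws SQLException {
            // Hypothetical connection string; any RDBMS with row locking will do.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://db/bank", "app", "secret")) {
                System.out.println(withdraw(conn, 42L, 10_000L) ? "approved" : "declined");
            }
        }
    }

On one database this is trivial; spread the accounts and the load across a hundred nodes and the serialization point is exactly what you have to engineer, which is the parent's point about where IBM already was in the '70s.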

It's pointless to invoke the almighty Google; they simply don't deal with problems in this class.


> It's pointless to invoke the almighty Google; they simply don't deal with problems in this class

FWIW, I worked at a major bank (top 5 in the U.S.) and listened in on several discussions between their top tech guys. As mentioned in other comments, their number one worry was the supply of COBOL coders.

They were terrified of Google and/or Paypal and what would happen if one of those companies got serious about competing in the banking arena. Most of the top architects _knew_ that it was possible to build a bank on a huge cluster of servers. They also knew that there was no way an existing bank could build the infrastructure and software while also maintaining their existing infrastructure.


Apparently PayPal isn't in technologically fantastic shape either. Word on the street is that they still have their core offering written in C using a CGI interface.


With good unit testing C is very robust and also much cheaper in hardware footprint.


I wasn't as much bashing C as I was the use of CGI. Most of the time when C is paired with CGI it's a pretty good indication that the code is chock full of memory leaks.


If you are using CGI, then those memory leaks will be short lived, limited to one request.


Fork is still expensive. It is not sane to fork per request.


C? I guess that only puts them 30 years ahead of the banks, then...


I would love to see some startup disrupt the banking industry.


Banks are actually kind of a bad example for needing synchronized processing. Banks settle transactions pseudo-asynchronously in batches. While they adjust your "available balance" as well as they can, AFAIK, transactions are officially settled in the old school, nightly batch.

Even then, there are consistent, synchronous databases that run on Linux clusters. Oracle RAC and Sybase ASE come to mind. Linux also supports hardware NUMA implementations, which allow single system image software. An emulation layer could run on top of NUMA hardware.


I am very experienced with RAC and its predecessor OPS. It really is nowhere near as performant or reliable or scalable or manageable as a mainframe. Which is not to say it's not "good enough" for many serious, real-world workloads, of course it is. But you're comparing a Toyota pickup to a JCB earthmover.


Aging mainframe hardware is horribly slow. A fast $5,000 server built today (RAID of SSDs, etc.) is significantly more powerful than any mainframe built prior to 1998. But mainframe software is far more efficient than anything built for a modern PC.

PS: I suspect a large part of this is the mindset and experience of the people building the software. Well-written Java code can be about as fast as well-written COBOL code, but the average Java system is horribly inefficient.


In floating point, yes you are entirely correct. But that old mainframe'd show you a thing or two about I/O.


I respectfully disagree. The only place they come close is with late-90's huge disk arrays, but random access to SSD arrays is amazing, and once you start looking at things like L2 cache bandwidth, old mainframes start looking anemic.

Edit: A 1980 IBM 3081 could access up to 32 megabytes of main memory. A 5400-series "Harpertown" can have 24 megabytes of cache. So modern CPU cache memory vs. mainframe main memory is not an unfair comparison. But that's just overkill; the real shocker is that the 3081 had less bandwidth to its registers than many modern computers have to their local network. (It was below 40MHz and consumed 23 kilowatts of power.)

PS: Feel free to post some actual numbers.


Why do you keep comparing old mainframes to current tech? Why not compare current mainframes to current tech -- it's like those people who try and compare the Java of 1998 with the C of today (you know, "the JVM takes minutes to start, and has x, y and z GC issues...").

Mainframes from IBM are under constant revision and update, just like server tech.


Because they are still being used. People think they need to replace an old mainframe with a cluster but that's just not true. Also, large modern day mainframes have become distributed systems so while you may still need a cluster you are already using one.

PS: We live in an age of horrible custom software, sitting on a monstrous stack of terrible commodity software. If you spend a noticeable amount of time waiting on a modern computer system, it's really just a software problem. (Wait, you actually want to use XML?) But often, upgrading the hardware is really the best option.


I'll definitely yield to your experience. What exactly makes the mainframes so much more capable? How does it compare to something like an SGI Altix cluster with 2048-core / 16TB single system image capacity?


The mainframe is built from the ground up for transaction processing. Have a look around Wikipedia for terms like: CICS, IMS, TPF, Sysplex, WLM. Everything, the hardware, the kernel, the rest of the OS, the programming model, the languages - all for this. It has no pretence of being a general-purpose computing platform.

The mainframe would suck for what you'd use the Altix for, e.g. weather forecasting. The Altix has a lot of processes that want to work on a common dataset, but it would grind to a halt if you tried to use its interconnect to serialize transactions.


Exactly what makes it so much more useful for transaction processing? I'm looking at these things you suggested, but how does CICS compare to something like BEA Tuxedo? Of course there are no benchmarks or talk of performance out in the open on the web because I'm sure their licensing agreements forbid that.

Do mainframes have some kind of crazy low-latency or high bandwidth bus that something like the Altix is missing? It appears they use similar hardware architectures, at least from a high level: super high speed modular backplanes with blade servers that form a single system image. The NUMALink 5 is stated as having 15GB/s of bandwidth (7.5GB/s in each direction) with a ~1µsec latency.


A mainframe is to transaction processing as a GPU is to 3D graphics. Does that make sense?


No, you are being extremely vague. A GPU only performs well at executing a small set of mathematical instructions in a very specific, ultra-parallelized model against limited sets of data. They are not Von Neumann machines. A mainframe runs business software on general purpose CPUs with a normal memory model. It wasn't long ago that Macs and IBM mainframes shared the same CPUs. Yes, they support virtualization in hardware, but so do cheap Intel processors now. They appear to be architecturally no different than large SMP boxes. Please enlighten me on the key hardware differences that make them so apt for transaction processing.


Heh, I suggest you do a little reading around the architecture and the way SAPs interact with the main processor(s).

Or we can just say that all languages that are Turing complete are equivalent...


I know that Google Checkout isn't a "real" financial institution but surely, I'd think those problems were of the same class - perhaps it's just that they don't have to worry about legacy problems (yet).


Google Checkout backs onto a payment processing gateway. Google isn't a bank!


Any large distributed system barely runs on the specific hardware/OS it was designed for and requires constant monitoring and maintenance. Even an OS upgrade (which in theory should stay compatible and "just work") in a large datacenter takes weeks of testing and fixing. Migrating from a mainframe to Linux is very, very, very expensive. On top of that, to achieve the same reliability there will be a need to re-architect some software pieces.

BTW, Google's needs with MapReduce are fundamentally different - it favors scalability and throughput over correctness, vs. banks that need distributed transaction consistency most of all.


There is actually mainframe emulation; it can certainly be done. However, IBM is doing everything it can to keep it unattractive and saddled with a bad image.


I suppose what I'm talking about is a company with real biz clout (like HP) coming up with a solution to compete with IBM.



Priceless.


So that's a $6 billion profit for keeping western civilisation open for business? I'm not sure I'm buying the premise that that is somehow unreasonable.


In fact, at that price it's cheap.


You shouldn't buy that premise, because that's not what the article claimed was unreasonable.


I don’t think that’s the point of the article.


"According to Maclean, comparing mainframe performance against, say, Oracle running on an x86 server platform may yield similar results in a low usage scenario. However, a mainframe will prove its value as that application set scales up in volume. The difference isn’t in the CPU, which (in non-IBM cases) is the same on both sides of the comparison. Rather, the difference is in the architecture of the operating environment."

http://www.processor.com/editorial/article.asp?article=artic...


Obviously there is a business use case for mainframes, these can't all be legacy systems. But what is it? Can someone who has worked with such a system shed some light?


They're powerful, reliable, and compared to writing software for an equally-sized cluster or distributed system, (once you're in the right mindset) easy to write for. Or alternately, not to write for -- you can just keep migrating the same ancient COBOL up to newer systems, without ever changing it, and the stuff just keeps ticking.

The business case is that they work, and have a great track record.

Overall the biggest selling point that I've seen is probably reliability. There's a perception that mainframes are more reliable than commodity-parts systems. In truth I think this is confirmation and sample bias; if you run a business that has a mainframe (very expensive, built for reliability, software has had 20+ years of debugging) and a bunch of servers (inexpensive, cost-optimized, software is constant work-in-progress), it's always going to seem as though the mainframe is unbreakable and everything else is dangerously unreliable. It's as much a mindset issue as an architectural one.

So many businesses will pay the premium for mainframes as their "core" systems, and then hang a lot of commodity systems off of it to provide access or interaction with the mainframe data.


You're exactly right.

I worked on a mainframe for a major Bay Area school district during and shortly after high school, as a computer operator and COBOL programmer.

I was there for a couple of years; rebooting the mainframe was unheard-of. It never seemed to develop memory leaks, it didn't get more quirky after it had been running for a long time. Since it wasn't accessible to the internet (or even most of the network), it didn't need to have any time spent on it trying to secure it against the latest Windows bug. It never required software updates, other than the ones that we coded ourselves and deployed without having to reboot anything. The database didn't fall over when we ran a payroll job or report cards.

But, best of all -- and I love telling this story, because I think it really illustrates something that the techs using all the modern stuff just don't "get" -- it had the best recovery systems I have ever seen, hands down.

So, one day there was a power outage. We had backup power systems, but like a lot of data processing departments we were just a couple of overworked people and they hadn't been checked in a while. They didn't last as long as we were expecting, and the mainframe kicked off. Around that time, I'd been playing around with Linux and such, and I had some idea of just how ugly things got when a complex operating system powered off in the middle of doing things.

I was a little bit stressed out. The other guy, a clean-shaven greybeard, wasn't.

When the power came back on, the mainframe booted up -- quickly -- and resumed its operations from pretty much the exact point at which the power had gone out. It didn't miss a beat, none of the data it produced was erroneous.

That was in '98 or thereabouts.

To say that mainframes are more reliable than the popular alternatives is an understatement. :-)


This is interesting. Clearly it's not just the hardware that makes mainframes this fault-tolerant. I'm curious why other platforms haven't appropriated some of the core features and functionality of the mainframe machines? Is it lack of familiarity? Patents? I'd love for one of my database machines to be this reliable.


I think there's probably a few factors involved:

1. Writing completely solid, fully debugged software is Not Sexy, which is why Linux is getting shinier and shinier but the software still has bugs.

2. You and I have probably been spoiled by working with Unix. Now, I've never used an IBM mainframe, but I have used VMS, which is definitely more businesslike than Unix. Working with VMS feels like going to the DMV. Imagine how it would feel to be living in a mainframe world, right out of IBM's most conservative, straight-laced era--because I'm gonna bet you wouldn't be able to play as fast and loose if you had mainframe-style fault tolerance.


They're powerful, reliable, and compared to writing software for an equally-sized cluster or distributed system, easy to write for.

Of course a big SMP such as System p has the same advantages (plus the greater familiarity of Unix), so in my mind the question stands. Why mainframes?


Right, but a big System p also plays in a similar price class to a mainframe with Linux. You get similar functionality on an easier platform (Unix), but the architecture and middleware functionalities are similar.


1) If you need to do a lot of relatively simple things bloody quickly. There's nothing in the Unix world that can match the TPS of a mainframe. "Eventually consistent" just doesn't cut it for credit card auths.

That is the real difference between mainframes and supercomputers. A supercomputer is about computation, a mainframe is about I/O.

2) Extreme reliability. There are mainframe clusters in production with a continual service availability for decades. Again, nothing in the Unix world can touch this.


1) I refuse to believe banks and insurance companies have I/O that can't be satiated by commodity hardware. InfiniBand QDR can transfer 40Gbit/sec with 1µsec latency. That's easily 200,000 synchronous round-trip messages per second PER link. If you can deal with higher latency, those can be aggregated and you can easily exchange tens of millions per second. Let's take this one step further: ECNs and automated trading systems. Their latency and I/O requirements make banks look like amateurs, yet they almost entirely run on off-the-shelf hardware and systems. Billions are at stake in these systems.

2) When was the last time Google or Amazon or NASDAQ or Arca was actually out of service because of hardware failure or maintenance?


You have to compare like with like. A tickstream is more like writing to an in-memory ring buffer than it is to a database. Or a tick is like a UDP packet compared to an X400 email. Even the ECNs run their backoffices on mainframes.

I would be willing to bet that Amazon do their warehouse and logistics operations on a mainframe too.


ECNs do trade matching, which is very similar to bank transactions in that it requires synchronized, transactional processing. You've got huge lists of bids and asks that are matched as quickly as possible. I know engineers who work(ed) on Archipelago, now known as NYSE Arca. It's UNIX-based, first Sun and now Linux. There are no mainframes at all. These aren't particularly impressive boxes or a ton of them either. The software is fast and well architected.

NYSE also apparently unplugged their last mainframe in 2008[1]. It appears NASDAQ OMX's INET trading platform is based on x86-64/Linux/Oracle stack on commodity hardware[2]. NASDAQ's US exchange has a record of 194,205 orders per second[3]. The latest VisaNet peak message rate I could find was from 2008 and was 8,442 transactions per second[4]. Based on the fact that Visa aggregates transactions over hundreds of major banks, I would guess that each individual bank processes less credit card transactions. Visa's 2009 volume was 9.0 billion transactions[5]. I couldn't find anything substantial on NASDAQ's stock market order volume, but NASDAQ does about 10 billion contracts/year JUST in US & European option derivatives[6].

[1] http://searchdatacenter.techtarget.com/news/article/0,289142...

[2] http://www.nasdaqomx.com/digitalAssets/66/66992_genium_inet....

[3] http://www.nasdaqomxtrader.com/trader.aspx?id=inet

[4] http://investor.visa.com/phoenix.zhtml?c=215693&p=irol-n...

[5] http://www.creditcards.com/credit-card-news/credit-card-indu...

[6] http://www.nasdaq.com/aspx/company-news-story.aspx?storyid=2...


Amazon did run a mainframe briefly, but apparently it didn't last more than a year.

http://blog.seattletimes.nwsource.com/brierdudley/2008/05/am...


Amazon is beyond enterprise scale, it's Web scale.

This stark distinction in order of magnitude of scale is why there may be so much incredulity here as to the technical superiority of mainframes.


LSE went down last year if that helps for (2):

http://blogs.computerworld.com/14876/london_stock_exchange_d...


I haven't worked with mainframes, and I frequently chortle every time someone mentions one even though deep down I know the article is right. But still, lack of experience has never stopped me from making stuff up before, so:

I imagine that the primary reason companies buy mainframes is that their monolithic architecture abstracts away a lot of the hard thinking that goes into a clustered system by putting it all in one chassis. If my 2000 CPUs are on separate machines then I have to write code to coordinate them and move data between them. If there's a super-fast bus connecting them and they can work with shared address space and let the OS deal with scheduling and data flow then even the average corporate code monkey can write naive large-scale apps that run a little slower but get the job done well enough to please their corporate overlords. And maybe the cost of developing a "proper" architecture for these sorts of business jobs is too high compared to the cost of a few hundred thousand in rent-a-mainframes.


I have not worked on mainframes per se. But I was once tasked with investigating why a particular project to replace a mainframe process with a modern Java one failed.

Basically, the mainframe is FAST. Everything is simple there. The process in question was more or less a COBOL stored procedure doing a bunch of lookups in an incredibly denormalized DB2 database. The entire batch job would take less than 15 minutes on the mainframe of which this particular process was only a small fraction.

Now, the Java app to replace the mainframe process was apparently written by some smart dudes who had gone on to make lots of cash doing more exciting things by the time I got there. The replacement system was what lots of naive people drinking the Hibernate Kool-Aid were writing back then. The database was normalized to the extreme; it had a hierarchical object model that was mapped to objects with Hibernate, had a slick web interface, and so on. Well, it ran like balls. It got exponentially slower as data was added. The administrative interface took 30 minutes to save a setting. This new process would increase the runtime of the overall batch by an estimated 10 hours. So I looked at it, and with another guy cleaned up a bunch of bad SQL and Hibernate.
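
To give a flavour of the kind of cleanup described (a reconstruction of the usual pattern, not the project's actual code): the classic failure mode with an over-normalized model behind Hibernate is the N+1 query problem, where each parent row triggers an extra SELECT for its children, and fetching the association in one query is often most of the fix. A minimal JPA sketch, with hypothetical Order/OrderLine entities:

    import jakarta.persistence.Entity;
    import jakarta.persistence.EntityManager;
    import jakarta.persistence.FetchType;
    import jakarta.persistence.Id;
    import jakarta.persistence.ManyToOne;
    import jakarta.persistence.OneToMany;
    import jakarta.persistence.Table;
    import java.util.List;

    // Hypothetical entities standing in for the real (much larger) domain model.
    @Entity
    @Table(name = "orders")
    class Order {
        @Id Long id;
        long batchId;
        @OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
        List<OrderLine> lines;
    }

    @Entity
    @Table(name = "order_lines")
    class OrderLine {
        @Id Long id;
        @ManyToOne Order order;
        long amountCents;
    }

    class BatchLookups {
        // Before: the lazy association fires one extra SELECT per order -- the
        // classic N+1 problem, which is why runtime grows with the data volume.
        static long slowTotal(EntityManager em, long batchId) {
            long total = 0;
            List<Order> orders = em.createQuery(
                    "select o from Order o where o.batchId = :b", Order.class)
                .setParameter("b", batchId).getResultList();
            for (Order o : orders)
                for (OrderLine line : o.lines)      // extra query per order
                    total += line.amountCents;
            return total;
        }

        // After: join fetch brings orders and their lines back in a single query.
        static long fastTotal(EntityManager em, long batchId) {
            long total = 0;
            List<Order> orders = em.createQuery(
                    "select distinct o from Order o join fetch o.lines where o.batchId = :b",
                    Order.class)
                .setParameter("b", batchId).getResultList();
            for (Order o : orders)
                for (OrderLine line : o.lines)      // already loaded, no extra SQL
                    total += line.amountCents;
            return total;
        }
    }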

The administrative interface got shaved down to seconds for a post. But in the end the overhead alone of marshaling and unmarshaling XML for the service calls was enough to push the batch process a full two hours longer than the mainframe one. Which, by the way, was unacceptably long for the 1 hour maximum processing window allotted to the batch.

OK, I know what you're saying. That's when you bring in the cloud and run a bunch of parallel instances and such and such. Or dump XML and use some kind of raw binary message format. But shit. The mainframe already did all that with a single threaded COBOL stored procedure in under 15 minutes. Needless to say, today they're still using that same old COBOL stored procedure and they're still making money hand over fist.

So yeah. Big companies spend lots of money trying to get off the mainframe every year. Believe me, I have personally witnessed tens of millions of USD lost to failed mainframe replacement projects by developers who think they know better. It's not like some manager looked at the estimate and said "hell no." Actually they said, "Hell yes! My top architects say the mainframe is dead. I will write a check for $50 million right now because that is nothing compared to the $10 billion in revenue we make." It's just not as easy as it looks.


I've seen these for at least:

- payment processing

- mail order sending (retail market)

- insurance databases

All on fairly large (world-scale) markets. Never programmed these myself, but had to interface (file or API) with them.

My feeling is that high reliability / data throughput is favored over programmer productivity and data structure elegance (most data I've seen there was fixed-width strings :-).
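To give a flavor of what interfacing with those feeds looks like, here's a minimal Java sketch that slices one fixed-width record by column position. The layout is invented for illustration; the real offsets come from the COBOL copybook describing the file.

    public class FixedWidthRecord {
        public static void main(String[] args) {
            // Invented layout: cols 0-9 account id, 10-39 name (space-padded),
            // 40-50 balance in cents (zero-padded).
            String line = String.format("%010d%-30s%011d", 123456, "JOHN Q PUBLIC", 12345);

            String accountId  = line.substring(0, 10);
            String name       = line.substring(10, 40).trim();
            long balanceCents = Long.parseLong(line.substring(40, 51));

            System.out.println(accountId + " / " + name + " / " + balanceCents);
        }
    }

In practice, EBCDIC-to-ASCII conversion and packed-decimal (COMP-3) fields make the real thing messier, but the basic shape is the same.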

See Host Integration Server (http://www.microsoft.com/biztalk/en/us/host-integration.aspx) for an example of a product that lets you use .NET (I guess even IronRuby) to talk to a mainframe.


One major advantage of a mainframe vs. commodity-built clusters with similar capacity is internal bandwidth. With clusters, you are limited by your network speed. A high-end Ethernet-based network is about 10 Gbit/s, and you can go higher with specialty hardware like Infiniband, but then you're no longer using commodity components. Within a mainframe your interprocess bandwidth is limited only by your bus speed. Problems that things like memcached are meant to solve are no longer a concern.


> Within a mainframe your interprocess bandwidth is limited only by your bus speed.

You're ignoring System p, Superdome, Altix, Beckton, etc. which are also big SMPs but are significantly cheaper and easier to use than mainframes.


How is Infiniband not commodity? It's not GbE cheap, but it's also a lot lower latency and 20x-40x the bandwidth. Infiniband is about $250-$500 per port, which is around what 10GbE costs right now.


I had been dismissing Infiniband out of hand for a few years now, assuming it remained uncompetitively[1] expensive, much as 10GE has.

However, 2-4x the bandwidth of 10GE (and lower latency), for the same price, could make it compelling. Does the price you quote include all of host adapters, cables, and switch ports?

> How is Infiniband not commodity?

How many suppliers are there for the critical components? There may be more than one vendor for adapters, but if they all use the same HBA-on-a-chip, the market hasn't yet been commoditized.

[1] Since my POV is centered around disk I/O, it would compete against SAS. Per 8-lane bundle, it's 24x the bandwidth at about $400 at the host and $100 at the expander, as of the previous generation. The current generation doubles the bandwidth, though I don't have reliable "street" prices yet.


This only works if you exclude the entirety of the commodity and stock exchange business, which runs almost completely on off-the-shelf hardware. I'd say this is a pretty big part of western civilization. Of course, the I/O requirements of these systems massively eclipse those of "puny" banks, logistics/distribution, manufacturing, and insurance companies.


It's hard to think of it as a monopoly when the same services can be performed by a rack of servers running other software. For any type of data processing that mainframes perform, I'm sure you can find examples of similar jobs running in server farms.

You can't define mainframes as a separate market just because they're physically large.


Well, this is kind of a "the web is the whole of computing" mentality.

You pick up the phone, you get a dial tone, not a fail whale. You put in a plug, you get electricity. When a bank's core systems go down, it makes the evening news. It's a whole 'nother world from OMG sharding RoR lol that web kids think is "scalability".


Web companies operate in a different world entirely. If Twitter goes down, it's sad, but nobody really cares. They also don't have very much money, so running lean on development time and hardware is critical to survival. If Twitter had IT budgets like banks and insurance companies do, things would be a little different. Of course scaling isn't that hard when you have billion-dollar budgets. Per byte of content processed, Twitter probably operates at orders of magnitude higher efficiency than banks or insurance companies. Each byte of data HAS to cost almost nothing, or the business can't exist. Nobody would pay for social networking, so it has to be dirt ass cheap.


The fascinating thing is that, including VC, Twitter really did (and does) have a comparable IT budget. They could have just licensed Tibco Rendezvous, which is how banks and exchanges distribute thousands of price updates per second, down to the level of: this desk at this bank is allowed to see that, but that desk isn't, and the other desk over there gets it 20 seconds later. Instead they went off and tilted at windmills, all the while telling themselves that they were breaking new ground.


I don't think the economics quite work out that way. Most enterprise scaling products are extremely expensive, have long sales cycles, and require lots of expertise to install and maintain. They affect your application architecture because of the per-server or per-socket licensing. You're also pretty much committed to using the product once it's purchased and put in place. If it no longer fits your business need, you've just wasted a ton of money. In fast-moving businesses this is absurd. For slow-moving industrial-age Fortune 100s, it makes more sense.

It also really sucks when you run into a problem and can't open the damn thing up. Case in point: I was working with some code that was instrumented with HP OpenView OVPA to collect performance analytics. When the new version of OVPA came out, we were planning a huge rollout to bring all 4k+ servers up to date. Problem is, the new major release had moved to a pthreads library and changed a bunch of the threading code, causing a 2 second lockup on start due to some sloppy code. For the vast majority of server software, this is almost undetectable, but these were CGIs that were invoked millions of times per hour. It took me about a day to locate the problem with truss (Solaris' strace). I had to spend a week in 5am meetings with HP's offshore development team to convince them that it was their bug. Of course, it took another week to get a patch.


That's very aptly put.

Unless you need many thousands of servers, the majority of funded web application companies could very well afford the best of what the OTP world has to offer them. But since that isn't 'sexy tech', they want to re-invent those wheels.

Twitter has since switched to Erlang; I think they've realised that customer satisfaction beats new technology any day.


Twitter actually uses Scala, I believe.


Aye, you're right:

http://www.theregister.co.uk/2009/04/01/twitter_on_scala/

Memory error; I should have my DIMMs checked ;)


The whole point is that you can't.

> You could perform the same services by a rack of servers running OTHER software.

But with the software that is there today, the cost of re-writing it would be prohibitive, and that's why the mainframe market is still as large as it is.

It's nothing to do with them being physically large.

Converting a couple of billion lines of COBOL to something more modern, re-testing the whole thing, and adding a bunch of magic to get the same kind of reliability is not a task any sane CTO at a bank or insurance company is going to sign off on.


Not to mention, the system interface provided by mainframes has inherent failover capabilities. This means the COBOL software can intentionally be less robust in the "what happens if part of the system fails" area, since that rarely happens in a way that actually affects application-level software.


I think people are assuming that if you ported your COBOL app to a cluster you'd use some kind of middleware that provides the same "inherent" reliability that the mainframe environment provides. The cost of such middleware is an open question.


No. You can't replace a mainframe with a rack of x86 boxes. In mission critical applications where reliability (effectively 100%; five nines just isn't good enough) and I/O throughput are of paramount importance, System z is your only choice. These applications underpin our entire monetary system, among other things.


Yes, you can. I know of a delivered project at one of the World's major financial institutions where their mainframe was replaced with x86 hardware. The system architecture was funky, but the hardware was commodity.

Just because one x86 box doesn't have the required level of reliability doesn't mean you can't build a reliable system out of them.


Essentially people are locked into those mainframes. It doesn't make sense to pick them if you are starting from scratch.

I wonder if in 20 years people will be locked into the cloud systems they are investing in now in the same way.


Well, a friend of mine writes software for mainframes. He says the platform has two advantages: customers are totally locked in, and there are enormous possibilities for kickbacks (no market => no fair price => case-by-case pricing => guess what). Otherwise, the technology is totally inferior compared to anything else.


IBM also makes a lot from their consulting (human mainframes).


> There are estimates that 80% of the world's data are processed by mainframes.

Pretty clueless estimates I'd have to think.


Should I learn COBOL 2002, or some earlier version?

Are mainframes COBOL standard compliant?

Is there a free standard document?


One of my friends who works on a Z mainframe told me that the last time their mainframe was 'rebooted' was about 4 years ago. How's that for uptime?


What does Eastern civilization run on?


Redframes.


This belongs on reddit.


This article makes me want to buy some IBM stock. But it's never performed very well...


Well, it's mostly factored into the price; IBM's mainframe division isn't that big a secret.

But like most "blue chips", IBM is probably a reasonably solid income stock. They have an obvious income stream (software licensing) with a locked-in market, in addition to their consulting/services divisions.

Whether or not it's too much to pay for the income (compared to similar companies with license-based revenue models, e.g. Microsoft) is the question for a potential investor. I'm not sure anyone would regard it as a hot speculative play...


Mainframes are very good at their jobs. Where do you think virtual machines, NoSQL databases, SQL databases, etc., got their start -- about 40 years ago?

http://en.wikipedia.org/wiki/VM_(operating_system)

http://en.wikipedia.org/wiki/VSAM

http://en.wikipedia.org/wiki/IBM_System_R

Page-at-a-time remote interfaces, anyone?



