The problem with legacy apps on any platform is not the language (unless you're talking about some very obscure unsupported language). The problem in my experience is the tangled mess of new code piled on top of poorly designed old code. In such systems, a small change can have unintended side effects. The solution is exhaustive testing (expensive and time-consuming) or a Hail Mary installation (leading to "testing" in production and yet another reactive fix). COBOL didn't create this problem and changing to another language won't fix it. In my shop, we practice test-driven development and deliver high quality releases. Success on the receiving end of these releases varies along with the established coding/testing/integration practices at each customer site.
(sorry, couldn't help myself)
Also, Greenspun's Tenth Rule is in effect in the COBOL world as much as in any other language.
The difference in the COBOL world is that you are more likely to work with people who would not know what you are talking about if you mentioned Lisp or Greenspun's Rule, or even the word "predicate" just to pick an unrelated example. So if you are into programming more than you are into business, you can function in the mainframe world as a sort of playground for exercising your programming muscle in ways that suit you, even if your colleagues aren't aware that something different is going on.
For instance, I generate a lot of code using a Common Lisp system I wrote for my own use. No one knows or cares--all they know is they seem to get reliable code from me very quickly. If I suggested that more people use the Lisp code generator, it would not fly because it would require training and might cost maintenance dollars. So it remains a personal tool.
More: My job involves a weekly task to analyze a release while it is under development. My predecessor did this manually and took all week. I wrote a Java/MySQL system to do it, and it takes me all of 10 minutes a week. Another win for the intrepid programmer.
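To give a flavor of it (a toy sketch, not my actual tool - the release_items table, columns, and credentials are all hypothetical), the whole trick is just letting SQL do the aggregation the manual process did by eyeball:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Minimal sketch: summarize a release in one query instead of a week
    // of reading printouts. Assumes the MySQL driver is on the classpath.
    public class ReleaseReport {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:mysql://localhost/releases"; // assumed schema
            try (Connection c = DriverManager.getConnection(url, "user", "pw");
                 Statement s = c.createStatement();
                 ResultSet rs = s.executeQuery(
                     "SELECT module, COUNT(*) AS changes " +
                     "FROM release_items GROUP BY module ORDER BY changes DESC")) {
                while (rs.next()) {
                    System.out.printf("%-30s %d%n",
                        rs.getString("module"), rs.getInt("changes"));
                }
            }
        }
    }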
More out in the open, I recently led development of a rules-based system. The team did not speak rules, evaluation, predicates, or the general use of collections. The challenge for me was to design the system using these concepts, then present it to developers who will never want to learn the general concepts. Done in record time, by some mysterious process (I broke the design into small pieces w/o reference to the comp sci terms, gave each developer a focused task, and tied it all together myself).
Definitely more Sears foundation garment than Frederick's of Hollywood, but I like it.
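If it helps make the "rules, predicates, collections" idea concrete, here is a minimal sketch of the shape I mean (all names hypothetical; the real domain objects are elided):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    public class RuleDemo {
        // A rule is just a named predicate plus an action label.
        record Rule<T>(String name, Predicate<T> when, String then) {}

        // Evaluation: walk the collection, gather the actions whose
        // predicates fire against the given fact.
        static <T> List<String> evaluate(List<Rule<T>> rules, T fact) {
            List<String> actions = new ArrayList<>();
            for (Rule<T> r : rules) {
                if (r.when().test(fact)) {
                    actions.add(r.then());
                }
            }
            return actions;
        }

        public static void main(String[] args) {
            List<Rule<Integer>> rules = List.of(
                new Rule<Integer>("large-order", n -> n > 1000, "route to supervisor"),
                new Rule<Integer>("zero-order",  n -> n == 0,   "reject"));
            System.out.println(evaluate(rules, 1500)); // [route to supervisor]
        }
    }

Each developer only ever saw one small piece like "write the check for a large order"; the predicate/collection framing stayed in my head.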
Both of these things can't be true!
Those high salaries won't generally motivate an "in it for the tech", startup-friendly developer of the type that commonly frequents HN... but a bank with a legacy mainframe system probably doesn't want to hire that person, and they don't need to. The technology isn't the core of their business.
On the other hand, there are many highly competent developers out there who view their job as a means to an end, a way to support their family/life/hobbies/etc. To these people, learning a highly specialized legacy skill and making lots of money is exactly the right motivator.
As an example, I present my father. He has been writing data management software for hospitals for decades, and does all his work on midrange IBM AS/400 ("System i") systems. He writes all his code in IBM RPG--"report program generator" code. In recent years he's learned Java, C and Unix systems to keep himself fresh, but he's never found the need to actually use any of those skills, or a job using these that would treat him as well as his legacy work.
I wonder how long it will take before Java becomes the next COBOL (unless it already has...)
I won't call Java the "next" COBOL until COBOL goes away. Which I predict will happen sometime between 2050 and the death of the sun...
You would need to pay me a lot of money to convince me to work with COBOL full-time. Once you've convinced me to do the job, paying me more won't make me any better at it.
For .NET the "wall" is $30-50k. For COBOL the "wall" seems to be $50k+ (I'm deliberately conservative with this estimate). So beyond this "fair" compensation for one's troubles, the pay doesn't scale anymore.

So it's not that people wouldn't want to work with COBOL or perceive it as sexy - it's just that the "sexiness" point is a couple of tens of thousands of dollars higher than for other tech.
The rules a company follows at small sizes and large sizes are not necessarily entirely the same, especially once bureaucracy sets in.
Bookkeeping in any form isn't sexy, but it is where the big money is when you're 'just' an application programmer.
I grilled an employee at a Major Package Shipping Company a few years ago about their infrastructure, and it's pretty stereotypical: a few mainframes that were coded decades ago in COBOL and an expensive team of legacy programmers performing surgical tweaks. They're porting unimportant side systems to modern hardware + languages (ostensibly so the mainframe programmers can focus on the major systems), but there are no plans to replace the core. Ever. Replacement systems just aren't good enough to consider switching the heart of their company. And so it will continue until there's a compelling storyline for dropping mainframes.
My understanding of business is that you should never stagnate, and always keep advancing.
People I've talked to who have worked with such systems indicated that they built giant shims around them in other languages to keep them adaptable. In theory this gives them a clear interface definition (it may be huge - thousands of method signatures, possibly direct schema access - but it can still be a clear interface) to bootstrap a core system replacement on another language and hardware platform.
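A minimal sketch of what such a shim boundary can look like, assuming hypothetical account operations (real interfaces run to thousands of signatures, and the real implementation would bridge to the mainframe rather than a HashMap):

    import java.util.HashMap;
    import java.util.Map;

    // Callers code against the interface; whether a mainframe or a
    // replacement sits behind it becomes an implementation detail.
    interface AccountService {
        long balanceInCents(String accountId);
        void post(String accountId, long amountInCents);
    }

    // Stand-in implementation for illustration only.
    class InMemoryAccountService implements AccountService {
        private final Map<String, Long> balances = new HashMap<>();
        public long balanceInCents(String id) {
            return balances.getOrDefault(id, 0L);
        }
        public void post(String id, long amountInCents) {
            balances.merge(id, amountInCents, Long::sum);
        }
    }

    public class ShimDemo {
        public static void main(String[] args) {
            AccountService svc = new InMemoryAccountService(); // swappable
            svc.post("12345", 250_00);
            System.out.println(svc.balanceInCents("12345")); // 25000
        }
    }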
Your understanding of business is flawed: businesses advance by marketing, not by abandoning their old, trusted and reliable tech to shack up with every hot new tech that comes along.
The larger, older and more profitable a business is the more likely that they are going to be ultra-conservative about using new technology at the core of their business.
Eventually we'll get rid of this stuff but I wouldn't bet on it happening in the next two decades, unless someone finds a way to emulate a mainframe on a cluster with similar reliability. And then we still need to get rid of the software.
Sure, at some level the technology will come into play, but nobody will 'start a high tech bank'. For instance, ING Direct (one of the banking industry's efforts to leverage technology) was jump-started from the legacy ING systems and as far as I know still ties into them for all of their back-end stuff.
When I was the 'systems administrator' for the corporate division of a bank (in 1986 or so), we made daily rounds collecting signatures from the various people responsible, attesting that the totals for the night batch runs were correct. That's the sort of responsibility you're talking about. I can't imagine that an MD5 or some digital signature would be sufficient today; it's all about procedures and accountability, emphatically not about technology.
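For what it's worth, computing the digest itself is the trivial part - here's a minimal sketch (hypothetical file-based batch, SHA-256 rather than MD5). The point stands: it proves the bytes didn't change, not that the totals were right or that anyone is accountable for them.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    // Prints a SHA-256 fingerprint of a batch output file (Java 17+).
    public class BatchDigest {
        public static void main(String[] args) throws Exception {
            byte[] batch = Files.readAllBytes(Path.of(args[0]));
            byte[] hash = MessageDigest.getInstance("SHA-256").digest(batch);
            System.out.println(HexFormat.of().formatHex(hash));
        }
    }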
I got one of the first PCs the bank bought on account of our project manager being very forward thinking but nobody ever thought of it as more than a curiosity and something that might be a handy replacement for the terminals.
That attitude has changed a bit but not by much and that's 25 years.
I currently work in a fairly large financial institution, giving them a web presence for various services and the backend system is no longer a mainframe but an AS/400. Though it still runs COBOL-based apps, so it's really no different than the standard mainframe.
Google built another search engine, which ran on a cluster of PCs and did so more efficiently than the big competitor at the time, AltaVista, which ran (afaik) on a cluster of DEC Alphas. Because Google needed a lot of horsepower and had a tight budget, they came up with some pretty clever packaging ideas for their cluster.
In that sense both these companies took the best route to a solution. Now a search engine is materially different from a bank in that if something goes wrong in a bank the potential damage (not just to the bank, but also to the customers of the bank) is enormous.
A search engine that spits out 'wrong' results is still a search engine. A bank that doesn't know where your money went is no longer a bank (at least, not for very long).
Financial institutions rely heavily on their image of reliability, much more so than any web company.
In this case, I'm referring to banking. You've got an industry that is based almost entirely on legacy systems that has not evolved with the progress of technology. As correctly pointed out, you would not want to modify an existing legacy system to move it to a new technology, which is why nobody has done so. What I'm referring to is a new, from scratch, bank with new technology. One that would allow you to disconnect from the current banking model and allow you to offer services that nobody could provide with today's legacy systems.
Combine this with the fact that if you ask any bank customer, they will tell you how much they hate their bank and its service. Sounds like it's ripe for the picking with the right investment, investors and people.
As for the banking bit: you are missing my point. Nobody builds a bank because of the technology; people found new banks because of the money. If that means adapting old, tried-and-true technology to jump through new hoops, then that's a cost of business. Bankers are conservative by nature, not because they like it but because it makes very good business sense when you're a bank. It's no coincidence that banks, insurance companies and governments are the areas where the mainframe is still dominant.
I'm really not sure what kind of services you could offer with a 'new' software stack that you could not offer with the old one. Bank transfers are routinely done in seconds between banks, you can pay with everything from credit cards to mobile phones and computers, they seem to have done a remarkable job at adapting to new tech while remaining conservative at heart.
People hate their banks mostly for the costs and for the lack of human interest, rarely if ever because of the lack of technology. In fact, I could do with less technology and better human interaction.
I think the best part would be that because of the legacy technology in place at existing banks, they would have a very hard time making changes to adapt to anything new that would be offered. Just imagine adding a new account type that isn't checking or savings and has new rules. How long would it take for a bank to re-write their backend system to incorporate that change? A long time.
If a disruptor entered banking and did offer this kind of change, I think you'd have banks banging down your door to buy you out, if nothing else.
There is a big temptation to assume that banks don't do stuff because they're technologically inept, but I don't think that's the case. The bank I worked for was a subsidiary of the Chase Manhattan bank and as far as the hardware went they were cutting edge at the time.
Software driving the products that were the bank's income was extremely well documented and extremely well protected in terms of changes to the code and who could do what with it. Security was tighter than a gnat's ass, both physically and network-wise, with round-the-clock monitoring and so on.
A bank's idea of disruption is not the fear of some new tech; banks know that technology only appears to move fast, and they can afford to wait until something has proven itself.
A bank fears things like PayPal, which obviate the need for banks and their 'products which are the privilege of accessing your own hard-earned money'.
Banks have tons of special products in their inventory; the reason you don't actually see them in the wild is that you are mostly looking at the consumer side of the operation. On the business side the special products are the majority. Then there is all the regulatory business; that too is responsible for an enormous amount of code.
For a newcomer the barrier to entry is not technology, it is simply the vast headstart that existing banks have in terms of knowledge, marketing power and financial resources.
I'm only aware of one bank that started in recent times that was not tied to another bank, and unfortunately it went bust.
Thanks for the exchange by the way, that was interesting. Who knows, maybe one day you'll found that disruptive bank based on high-technology!
I actually looked into the requirements for establishing a new bank (not in the US) and they were prohibitive: tens of millions in reserve from day one, and a virtual obligation that no single shareholder hold more than a 5-10% stake. Add regulatory requirements like Basel in Europe and SOX in the US, and the big banks own the market in many countries, with no fear of swifter-moving newcomers "eating their lunch". Not to mention people's resistance to change, especially with something as important as their money. And that is only on the retail banking side. It's even worse in i-banking, with the bigger investment banks making more money than ever on Wall Street, as if the crash of 2008 never happened. They may be crooks, but they have the money and influence to make sure the laws get written in their favour - see https://www.banksimple.net/blog/2010/07/21/banks-strike-back... (there was an article on HN recently around this post.)
I also looked into providing the "valuable services" you mention, a la Mint.com (no equivalent where I live), and this is something I believe has real potential (you would have to build a service that taps into banks' online portals and offers more useful features than the banks themselves, e.g. account aggregation, financial product comparisons, tailored financial advice, etc.). I think the future is one where virtually all ordinary transactions will be done online (paying bills, transferring funds), possibly on smartphones, and where you only go into a branch to get a mortgage or something major - and possibly not even then. Of course you can do all this already online, but the world is still very cash- and cheque-based outside of the US. It will be a while yet until the majority of people are willing to do 100% of their banking with an institution that is 100% web-based.
When you're operating at juggernaut scale of business or with industrial-scale production, the rules and the requirements and the competitive landscapes are different.
And the risks come not from the existing environments - which generally work - but from making changes to them, and from the resulting exposure to outages and downtime.
Every vendor that can make even a tangential case for a platform or application or process migration, or an update, or new hardware, is certainly pressing that case too; that's an automatic education for the folks involved.
If you can't process customer orders for a week or a month or can't ship orders or your outage slags a whole production line, you could well end up out of business. I know of places that have had similar meltdowns during updates and during migrations, and it's Not Pretty.
I know of several cases where entire DCs were destroyed, and production continued unabated.
There are commercial platforms explicitly designed for this sort of DC redundancy over 800+ km spans, where you have 400 km between the various volumes in your RAIDset.
Here's a moderately technical article on some of the issues that arise; for your question, see the end of the following:
"At 2 a.m. on Dec. 29, 1999, an active stock market trading day, the audio alert on a UPS system alarmed a security guard on his first day on the job. He pressed the emergency power-off switch, taking down the entire datacenter. Because a disaster-tolerant OpenVMS cluster was in place, the cluster continued to run at the opposite site, with no disruption. The brokerage ran through that stock-trading day on the one site alone, and performed a shadow full-copy operation to restore redundancy in the evening, after trading hours. They procured a replacement for the failed security guard by the next day."
The last sentence.
I would think they'd keep the power systems under lock and key, or at least under the control of someone who knew what to do when a UPS starts making noise.
Emergency Power Off buttons are often required by code to shut down breakers and UPS.
I've heard that this is to prevent loss of life if a tech grabs a live wire, and/or to let the fire department know that power is removed before they start spraying water.
Some of these companies carry a downtime penalty on the order of hundreds of dollars per second. You really don't want to mess up when there is that kind of money on the line.
The EMH (efficient-market hypothesis) supposes that every actor has total and correct information, that every actor is rational, and that products are fungible.
While we might approach this on, say, a stock exchange, imagining that most companies compete in an efficient market - when so much of modern business is about branding and advertising (destroying fungibility and corrupting actors' information and rationality) - is, I suggest, somewhat naive.
Essentially, modern businesses do not generally compete on product; they compete on their ability to distort the markets they're in.
That's one hell of a compliment for Linux then.
I'm old and have lived through the growth of Linux: starting as nothing, through all the FUD, the naysaying pundits, the claims it was communist, MS's many monopolistic attempts to squash it, etc. Comments like that still make my jaw drop.
Sometimes it's easier to stick with what you know works than to try the newer thing, and that's probably the reasoning used by the people making the buying decisions.
There are some interesting case studies here:
As for Excel, I've read about a lot of situations where it's used for things it's not a good fit for, especially situations where a simple database and frontend might be more appropriate.
An existing environment is usually a large pile of twisty dependencies: on operating system calls, on database or file formats that aren't entirely portable, on language extensions that don't exist elsewhere, on objects built from source code that's been lost, on products from vendors that no longer exist. And it's all often inextricably linked with front-ends that would have to be rewritten, data bridges that would have to be rewritten, and users that would need to be retrained.
In some cases, this effort can involve re-architecting substantial parts of a manufacturing facility involving six or ten buildings, each most of a kilometer long. And with many industrial-scale environments, change is not going to happen. You (might) see a new environment implemented when the entire production line is nuked and paved at the end of its useful lifetime and replaced with a newly-deployed production line. In other cases, you'll probably see the existing system patched and re-used.
For many of these cases, just ramping up a load test would be a substantial effort.
Some hunks of the environment will have test cases and specs and documentation, and some hunks probably won't.
For many of these installations, substantial downtime has SEC-level visibility, or can involve exposure to various legal entanglements, or can involve re-establishing politically- or legally-mandated external certifications, or can cost thousands or millions of dollars per unit time off-line, or requires the production line to be cleaned with a jack-hammer or entirely replaced, or other issues.
If you can afford an incremental port or a big-bang port, well, what's that cost in comparison with the incremental costs of keeping the existing environment going?
C-level folks just don't blindly sign the really big bills involved in keeping these environments going, and most competing vendors are seeking to acquire at least part of these mainframe-scale budgets.
Sure, you could move those apps to a bunch of smaller boxes. Then you open yourself up to server failure, admin costs of maintaining multiple boxes, etc.
Mainframes wrote the book on SaaS long before 'net cloud computing came along. IBM and others have operated computing as a utility for years and years. What manager who gets his IT done on rented computer time is going to suggest taking the apps in-house on a bunch of servers? None that I know of.
It's pointless to invoke the almighty Google; they simply don't deal with problems in this class.
FWIW, I worked at a major bank (top 5 in the U.S.) and listened in on several discussions between their top tech guys. As mentioned in other comments, their number one worry was the supply of COBOL coders.
They were terrified of Google and/or Paypal and what would happen if one of those companies got serious about competing in the banking arena. Most of the top architects _knew_ that it was possible to build a bank on a huge cluster of servers. They also knew that there was no way an existing bank could build the infrastructure and software while also maintaining their existing infrastructure.
Even then, there are consistent, synchronous databases that run on Linux clusters. Oracle RAC and Sybase ASE come to mind. Linux also supports hardware NUMA implementations, which allow single system image software. An emulation layer could run on top of NUMA hardware.
PS: I suspect a large part of this is the mindset and experience of the people building the software. Well-written Java code can be about as fast as well-written COBOL code, but the average Java system is horribly inefficient.
Edit: A 1980 IBM 3081 could access up to 32 megabytes of main memory. A 5400-series "Harpertown" can have 24 megabytes of cache. So modern CPU cache memory vs mainframe main memory is not an unfair comparison. But that's just overkill; the real shocker is that the 3081 had less bandwidth to its registers than many modern computers have to their local network. (It ran below 40MHz and consumed 23 kilowatts of power.)
PS: Feel free to post some actual numbers.
Mainframes from IBM are under constant revision and update, just like server tech.
PS: We live in an age of horrible custom software, sitting on a monstrous stack of terrible commodity software. If you spend a noticeable amount of time waiting on a modern computer system, it's really just a software problem. (Wait, you actually want to use XML?) But often upgrading the hardware really is the best option.
The mainframe would suck for what you'd use the Altix for, e.g. weather forecasting. The Altix has a lot of processes that want to work on a common dataset, but it would grind to a halt if you tried to use its interconnect to serialize transactions.
Do mainframes have some kind of crazy low-latency or high bandwidth bus that something like the Altix is missing? It appears they use similar hardware architectures, at least from a high level: super high speed modular backplanes with blade servers that form a single system image. The NUMALink 5 is stated as having 15GB/s of bandwidth (7.5GB/s in each direction) with a ~1µsec latency.
Or we can just say that all languages that are Turing complete are equivalent...
BTW, Google's needs with MapReduce are fundamentally different - it favors scalability and throughput over correctness, vs. banks, which need distributed transaction consistency most of all.
The business case is that they work, and have a great track record.
Overall the biggest selling point that I've seen is probably reliability. There's a perception that mainframes are more reliable than commodity-parts systems. In truth I think this is confirmation and sample bias; if you run a business that has a mainframe (very expensive, built for reliability, software has had 20+ years of debugging) and a bunch of servers (inexpensive, cost-optimized, software is constant work-in-progress), it's always going to seem as though the mainframe is unbreakable and everything else is dangerously unreliable. It's as much a mindset issue as an architectural one.
So, many businesses will pay the premium for mainframes as their "core" systems, and then hang a lot of commodity systems off of them to provide access to or interaction with the mainframe data.
I worked on a mainframe for a major Bay Area school district during and shortly after high school, as a computer operator and COBOL programmer.
I was there for a couple of years; rebooting the mainframe was unheard-of. It never seemed to develop memory leaks, it didn't get more quirky after it had been running for a long time. Since it wasn't accessible to the internet (or even most of the network), it didn't need to have any time spent on it trying to secure it against the latest Windows bug. It never required software updates, other than the ones that we coded ourselves and deployed without having to reboot anything. The database didn't fall over when we ran a payroll job or report cards.
But, best of all -- and I love telling this story, because I think it really illustrates something that the techs using all the modern stuff just don't "get" -- it had the best recovery systems I have ever seen, hands down.
So, one day there was a power outage. We had backup power systems, but like a lot of data processing departments we were just a couple of overworked people, and they hadn't been checked in a while. They didn't last as long as we were expecting, and the mainframe lost power. Around that time I'd been playing around with Linux and such, and I had some idea of just how ugly things got when a complex operating system powered off in the middle of doing things.
I was a little bit stressed out. The other guy, a clean-shaven greybeard, wasn't.
When the power came back on, the mainframe booted up -- quickly -- and resumed its operations from pretty much the exact point at which the power had gone out. It didn't miss a beat, none of the data it produced was erroneous.
That was in '98 or thereabouts.
To say that mainframes are more reliable than the popular alternatives is an understatement. :-)
1. Writing completely solid, fully debugged software is Not Sexy, which is why Linux is getting shinier and shinier but the software still has bugs.
2. You and I have probably been spoiled by working with Unix. Now, I've never used an IBM mainframe, but I have used VMS, which is definitely more businesslike than Unix. Working with VMS feels like going to the DMV. Imagine how it would feel to be living in a mainframe world, right out of IBM's most conservative, strait-laced era--because I'm gonna bet you wouldn't be able to play as fast and loose if you had mainframe-style fault tolerance.
Of course a big SMP such as System p has the same advantages (plus the greater familiarity of Unix), so in my mind the question stands. Why mainframes?
That is the real difference between mainframes and supercomputers. A supercomputer is about computation, a mainframe is about I/O.
2) Extreme reliability. There are mainframe clusters in production with continuous service availability measured in decades. Again, nothing in the Unix world can touch this.
2) When was the last time Google or Amazon or NASDAQ or Arca was actually out of service because of hardware failure or maintenance?
I would be willing to bet that Amazon do their warehouse and logistics operations on a mainframe too.
NYSE also apparently unplugged their last mainframe in 2008. It appears NASDAQ OMX's INET trading platform is based on an x86-64/Linux/Oracle stack on commodity hardware. NASDAQ's US exchange has a record of 194,205 orders per second. The latest VisaNet peak message rate I could find was from 2008: 8,442 transactions per second. Given that Visa aggregates transactions over hundreds of major banks, I would guess that each individual bank processes fewer credit card transactions. Visa's 2009 volume was 9.0 billion transactions. I couldn't find anything substantial on NASDAQ's stock market order volume, but NASDAQ does about 10 billion contracts/year JUST in US & European option derivatives.
This stark order-of-magnitude difference in scale may be why there is so much incredulity here about the technical superiority of mainframes.
I imagine that the primary reason companies buy mainframes is that their monolithic architecture abstracts away a lot of the hard thinking that goes into a clustered system by putting it all in one chassis. If my 2000 CPUs are on separate machines then I have to write code to coordinate them and move data between them. If there's a super-fast bus connecting them and they can work with shared address space and let the OS deal with scheduling and data flow then even the average corporate code monkey can write naive large-scale apps that run a little slower but get the job done well enough to please their corporate overlords. And maybe the cost of developing a "proper" architecture for these sorts of business jobs is too high compared to the cost of a few hundred thousand in rent-a-mainframes.
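To make that concrete: on a big shared-memory box, "parallel" can be nearly a one-word change, because the runtime schedules across CPUs and the data never has to be partitioned over a network. A toy Java illustration (not mainframe code, obviously):

    import java.util.stream.LongStream;

    // On shared memory, distributing the work is the runtime's problem:
    // remove ".parallel()" and the program is the same, just slower.
    public class SharedMemoryDemo {
        public static void main(String[] args) {
            long sum = LongStream.rangeClosed(1, 1_000_000_000L)
                                 .parallel()   // the entire "distribution story"
                                 .sum();
            System.out.println(sum); // 500000000500000000
        }
    }

Doing the same across 2000 separate machines means partitioning, shipping data, and handling partial failure - which is exactly the hard thinking the chassis lets you skip.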
Basically, the mainframe is FAST. Everything is simple there. The process in question was more or less a COBOL stored procedure doing a bunch of lookups in an incredibly denormalized DB2 database. The entire batch job would take less than 15 minutes on the mainframe of which this particular process was only a small fraction.
Now, the Java app to replace the mainframe process was apparently written by some smart dudes who had gone on to make lots of cash doing more exciting things by the time I got there. The replacement system was what lots of naive people drinking the Hibernate Kool-Aid were writing back then: the database was normalized to the extreme, it had a hierarchical object model mapped to objects with Hibernate, a slick web interface, and so on. Well, it ran like balls. It got exponentially slower as data was added. The administrative interface took 30 minutes to save a setting. This new process would have increased the runtime of the overall batch by an estimated 10 hours. So I looked at it, and with another guy cleaned up a bunch of bad SQL and Hibernate.
The administrative interface got shaved down to seconds per post. But in the end, the overhead alone of marshaling and unmarshaling XML for the service calls was enough to push the batch process a full two hours longer than the mainframe one - which, by the way, was unacceptably long given the one-hour maximum processing window allotted to the batch.
OK, I know what you're saying. That's when you bring in the cloud and run a bunch of parallel instances and such and such. Or dump XML and use some kind of raw binary message format. But shit. The mainframe already did all that with a single threaded COBOL stored procedure in under 15 minutes. Needless to say, today they're still using that same old COBOL stored procedure and they're still making money hand over fist.
So yeah. Big companies spend lots of money trying to get off the mainframe every year. Believe me, I have personally witnessed tens of millions of USD lost to failed mainframe replacement projects by developers who think they know better. It's not like some manager looked at the estimate and said "hell no." Actually they said, "Hell yes! My top architects say the mainframe is dead. I will write a check for $50 million right now, because that is nothing compared to the $10 billion in revenue we make." It's just not as easy as it looks.
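For the curious, the "bad SQL and Hibernate" cleanup was mostly the classic N+1 shape. Not their actual code, but a sketch of the before/after in plain JDBC (table and column names invented):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BatchLookup {
        // Before: called once per account -- N database round trips.
        static long rateFor(Connection c, String accountId) throws Exception {
            try (PreparedStatement ps = c.prepareStatement(
                    "SELECT rate FROM rates WHERE account_id = ?")) {
                ps.setString(1, accountId);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    return rs.getLong(1);
                }
            }
        }

        // After: one set-based join for the whole batch -- one round trip.
        static void ratesForBatch(Connection c, long batchId) throws Exception {
            try (PreparedStatement ps = c.prepareStatement(
                    "SELECT a.account_id, r.rate FROM batch_accounts a " +
                    "JOIN rates r ON r.account_id = a.account_id " +
                    "WHERE a.batch_id = ?")) {
                ps.setLong(1, batchId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // process rs.getString(1), rs.getLong(2)
                    }
                }
            }
        }
    }

That kind of change is what took the admin interface from 30 minutes to seconds; it just wasn't enough to beat the XML overhead in the batch path.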
- payment processing
- mail order sending (retail market)
- insurance databases
All on fairly large (world-sized) markets. Never programmed these myself, but had to interface (file or api) with it.
My feeling is that high reliability / data throughput is favored over programmer productivity and data structure elegance (most data I've seen there was fixed width strings :-).
See Host Integration Server (http://www.microsoft.com/biztalk/en/us/host-integration.aspx) for an example of a product that lets you use .NET (I guess even IronRuby) to talk to a mainframe.
You're ignoring System p, Superdome, Altix, Beckton, etc. which are also big SMPs but are significantly cheaper and easier to use than mainframes.
However, 2-4x the bandwidth (and lower latency) than 10GE, for the same price, could make it compelling. Does the price you quote include all of host adapters, cables, and switch ports?
How is Infiniband not commodity?
How many suppliers are there for the critical components? There may be more than one vendor for adapters, but if they all use the same HBA-on-a-chip, the market hasn't yet been commoditized.
 Since my POV is centered around disk I/O, it would compete against SAS. Per 8-lane bundle, it's 24x the bandwidth at about $400 at the host and $100 at the expander, as of the previous generation. The current generation doubles the bandwidth, though I don't have reliable "street" prices yet.
You can't define mainframes as a separate market just because they're physically large.
You pick up the phone, you get a dial tone, not a fail whale. You put in a plug, you get electricity. When a bank's core systems go down, it makes the evening news. It's a whole 'nother world from OMG sharding RoR lol that web kids think is "scalability".
It also really sucks when you run into a problem and can't open the damn thing up. Case in point: I was working with some code that was instrumented with HP OpenView OVPA to collect performance analytics. When the new version of OVPA came out, we were planning a huge rollout to bring all 4k+ servers up to date. Problem is, the new major release had moved to a pthreads library and changed a bunch of the threading code, causing a 2 second lockup on start due to some sloppy code. For the vast majority of server software, this is almost undetectable, but these were CGIs that were invoked millions of times per hour. It took me about a day to locate the problem with truss (Solaris' strace). I had to spend a week in 5am meetings with HP's offshore development team to convince them that it was their bug. Of course, it took another week to get a patch.
Unless they need many thousands of servers, the majority of funded web application companies could very well afford the best of what the OTP world has to offer. But since that isn't 'sexy tech', they want to re-invent those wheels.
Twitter has since switched to Erlang, I think they've realised that customer satisfaction beats new technology any day.
Memory error, I should have my DIMMs checked ;)
> You could perform the same services by a rack of servers running OTHER software.
But with the software that is there today the cost of re-writing it would be prohibitive and that's why the mainframe market is still as large as it is.
It's nothing to do with them being physically large.
Converting a couple of billion lines of COBOL to something more modern, re-testing the whole thing, and adding a bunch of magic to get the same kind of reliability is not a task any sane CTO at a bank or insurance company is going to sign off on.
Just because one x86 box doesn't have the required level of reliability doesn't mean you can't build a reliable system out of them.
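The back-of-envelope math behind that is simple: with independent failures, availability compounds fast. The numbers below are purely illustrative, and of course the hard part in practice is the failover software, not the arithmetic.

    // Availability of N independent replicas, assuming any one surviving
    // replica keeps the service up (illustrative figures only).
    public class Availability {
        public static void main(String[] args) {
            double singleBox = 0.99; // one box: roughly 3.7 days down/year
            for (int n = 1; n <= 4; n++) {
                double down = Math.pow(1 - singleBox, n);
                System.out.printf("%d replicas: %.6f%% available%n",
                    n, 100 * (1 - down));
            }
            // Two replicas already reach "four nines" -- on paper.
        }
    }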
I wonder if in 20 years people will be locked into the cloud systems they are investing in now in the same way.
Pretty clueless estimates I'd have to think.
Are mainframes COBOL standard compliant?
Is there a free standard document?
But like most "blue chips", IBM is probably a reasonably solid income stock. They have an obvious income stream (software licensing) with a locked-in market, in addition to their consulting/services divisions.
Whether or not it's too much to pay for the income (compared to similar companies with license-based revenue models, i.e. Microsoft) is the question for a potential investor. I'm not sure anyone would regard it as a hot speculative play...
Page at a time remote interfaces anyone?