SQL Server 2012 is nearly completely compatible with SQL Server 2005, and largely compatible with SQL Server 2000. IIS 7.5 will still run classic ASP code written in VBScript in the early 2000s, and that code can live alongside ASP.NET 1.1-4 or ASP.NET MVC 1-3. In many cases you can even have hybrid ASP/ASP.NET/MVC applications.
There are often benefits that make revisiting your old code worthwhile, but rarely are you forced to. And we kept getting security updates back-ported the whole time.
In software we should have a similar mentality, and we should somehow turn this into numbers. If you don't touch your environment, its value keeps going down -- until it reaches zero. To preserve that value, you need to work on the environment constantly: patch operating systems, upgrade libraries, upgrade frameworks.
Each time you do a quick'n'dirty thing, you essentially increase your liabilities. Refactoring amounts to decreasing those liabilities. If you put numbers on these, you could actually draw up a balance sheet for your software every year.
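To make the balance-sheet metaphor concrete, here's a minimal sketch in Ruby. The class, the decay rate, and all the numbers are made up for illustration -- this isn't a real valuation methodology, just the accounting idea from the comment above turned into code:

```ruby
# A toy "balance sheet" for a codebase. Everything here is illustrative:
# the decay rate and the effort/cost units are hypothetical.
class CodebaseLedger
  attr_reader :value, :liabilities

  def initialize(value:)
    @value = value        # what the working software is worth today
    @liabilities = 0.0    # accumulated quick'n'dirty debt
  end

  # An untouched environment loses value over time (bit rot, stale deps).
  def neglect(years:, decay: 0.2)
    @value *= (1 - decay)**years
  end

  # Each shortcut adds to the debt side of the sheet.
  def quick_and_dirty(cost)
    @liabilities += cost
  end

  # Refactoring pays liabilities down (never below zero).
  def refactor(effort)
    @liabilities = [@liabilities - effort, 0.0].max
  end

  def net_worth
    @value - @liabilities
  end
end

ledger = CodebaseLedger.new(value: 100.0)
ledger.quick_and_dirty(15.0)  # shipped a hack
ledger.neglect(years: 2)      # left the environment alone
ledger.refactor(10.0)         # paid some debt back down
puts ledger.net_worth.round(2)
```

Two years of neglect plus one unpaid shortcut leave the "net worth" well below the starting value -- which is the whole point of the metaphor.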
From all the evidence I've seen, it's more likely that they 1) don't care, because people are willing to put up with it, and 2) don't have the right kind of process/system/codebase/culture to allow such a move (and would rather spend that effort on something else).
One of Microsoft's top three selling points in the enterprise is backwards compatibility... They go as far as implementing run-time patches for 20-year-old third-party DOS binaries they don't have the source code to, when crash reports come in and cross a certain threshold.
> We only have to look at Microsoft's third-party ecosystem to see the alternative.
A developer who targets Windows breaks compatibility with a previous version of his own software, and then...? What does that have to do with Windows/Microsoft?
Once you are bitten by a bad update, you will be very shy about doing it again without good cause. As an example, we updated critical server software to (Recent Version - 1) and had it crashing constantly thereafter. So we upgraded to Recent Version, which introduced a new bug that caused equal trouble. Now we were stuck and had to wait for the next version to fix it. It made a really good case for not changing what works.
Just as you test on the client side, you should have a staging cycle which tests releases before you deploy them.
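A staging gate like that can be sketched simply. This is a hypothetical example, not anyone's actual process: the idea is that a release is only promoted when smoke checks against staging all pass. The check function is injectable so the gate itself can be exercised without a live staging host (in real use it might hit staging over HTTP with Net::HTTP):

```ruby
# Promote a release only if every critical path passes its smoke check.
# `checker` is any callable that takes a path and returns true/false.
def promote?(paths, checker)
  failures = paths.reject { |p| checker.call(p) }
  if failures.empty?
    true
  else
    warn "staging checks failed: #{failures.join(', ')}"
    false
  end
end

# Stubs standing in for real HTTP checks against a staging host:
always_up      = ->(_path) { true }
broken_uploads = ->(path)  { path != "/uploads/new" }

puts promote?(["/health", "/login"], always_up)            # true
puts promote?(["/health", "/uploads/new"], broken_uploads) # false
```

The paths and stub names are invented; the point is only that the same "all checks green before deploy" decision runs identically in CI and against a real staging environment.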
Upgrading an ASP.NET Web Forms app to ASP.NET MVC is painful, and in many cases requires a full-scale rewrite due to the tight coupling of frontend and backend code in Web Forms apps.
While Microsoft hasn't abandoned Web Forms, it's clearly behind the times in terms of modern web development, and Microsoft isn't making much of an effort to bring it up to speed.
By comparison, upgrading a Rails 2.3 app to 3.0 is a much better story IMO, and a far more realistic upgrade even for the largest apps.
Edit: missed where tptacek mentioned the name of it. That's how you find it, though.
Anything beyond that, and there are multiple schools of thought. One is to keep upgrading to the bleeding edge in near real time -- this keeps you ahead of a lot of known vulnerabilities, gets you access to new features, etc., but imposes a cost in constantly fixing breakage.
Another is to do big batch upgrades periodically -- e.g. always updating to Ubuntu Stable within a month or two of release. This breaks more stuff all at once and leaves periods of less-current software in production, but it is a reasonable compromise.
Another is to deploy things once, get them working, and then leave them in place (with as few upgrades as possible) for as long as possible, doing what is basically a forklift upgrade when required. This has the lowest ongoing maintenance cost. For something with extensive safety- or security-critical code that can only be audited and certified at great expense, this is commonly done -- but it leads to nice tasty 0-day exploits working well against the most critical systems. It also leads to losing the organizational knowledge, resources, etc. needed to upgrade.
And best of all, most providers that appeal to startups (Rackspace, Amazon EC2, Linode...) offer some kind of pay-for-what-you-use plan, so you can keep costs low in the beginning.
I'd also like to mention Parallels and their product Plesk, which gets updated automatically without any problems -- at least on our server.
Imagine trying to find a blog post explaining why your 4 year old version of an image upload gem doesn't work on JRuby. It's no fun :)
This really depends on whether you are running a hobby on a server or running a business. I update client-side apps because it is exciting, and it doesn't matter to me if something breaks. If it does, we cry foul and the developer hopefully ships a patch within a few days or weeks.
In a production environment, it's much different. There's a reason people say "if it ain't broke, don't fix it." Five minutes of downtime could cost one company thousands and another company millions. If the cash-printing machine is down, a broken image uploader is the least of their concerns.
Yes, you have the source for Rails and can fix it or make changes, but the further you deviate from what everyone else is fixing bugs in, the less you benefit from using open source.
If there's a bug or security hole in Rails 2.3, we go in and fix it for our users and ship it immediately.
Forking off your own special version of something is a recipe for eventual pain: sooner or later, maintaining the old version costs more than upgrading would have.
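In Rails projects this kind of fork usually shows up as a Gemfile pinned to a patched branch. A hedged sketch of what that looks like -- the organization, repository, and branch names here are entirely hypothetical:

```ruby
# Gemfile (illustrative): carrying your own security fix on top of an
# old Rails branch. "yourorg" and "2-3-stable-patched" are placeholders.
source "https://rubygems.org"

gem "rails", git: "https://github.com/yourorg/rails.git",
             branch: "2-3-stable-patched"
```

Bundler will happily resolve against that fork, but every fix you carry there widens the gap from upstream -- which is exactly the slow-burn cost described above.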
??? What does that mean? It seems rather hand-wavy.