The fact that Adobe can get away with this amazes me! With the theoretical engine problem you need to recall/repair each and every engine individually. With software, once you've developed the patch you can distribute it at next to no cost. There's no excuse for this.
Why can software engineers and companies get away with such horrendous practices?
Not forever! Only for 10 years; after that, they are no longer responsible. Software is the same way, only the timeframe is much shorter and there is no set standard.
Although less than a year, as with CS5.5, is too short. I would suggest double or triple the usual time between major versions as a reasonable timeframe; in this case that seems to be about yearly, so Adobe should support old versions for two to three years.
Because most of us who write software believe that the risks associated with effectively zero liability for software failure are far outweighed by the costs of government intervention. General purpose software is far too "easy" to create for mandatory liability to make any sense.
The better solution is for the market to demand simple security fixes in situations like this.
[Edited to clarify that software risk is less costly than government intervention]
Just like I think that this here thing doesn't have anything to do with laziness. It's that Adobe wants you to pirate the crap out of CS6, because they know they won't get money from you anyway. They do know, however, that every cent they don't get directly from you is a cent they'll get from your future employer or customer or small business that is forced to buy Adobe CS6. It's not laziness; it's doing exactly what they need to do to keep their repeat customers upgrading.
Interesting. I've heard this thing before, with Photoshop and Windows too. Is there any evidence for this?
Name one famous Adobe employee you would recognize along the same lines as the tech titans: Gates, Jobs, Ballmer, Brin, Zuck, etc. etc. etc.
Never has an Adobe leader's name been on that list.
The reason this is important is that there is no personal image attached to Adobe's products. This lets them get away with more mediocrity than you would see from any of the above.
You don't have the scorn of the users pointed at one person's identity; you have it pointed at the nebulous "Adobe" as a whole.
If they had a charismatic, public figurehead, I am sure we would view Adobe and its products quite differently.
Adobe isn't lazy. It takes extra work to implement multiple cross-platform UI libraries with twenty different slider widgets, none of which work quite right.
Somewhere I heard a variation on this quote, attributed to Napoleon, that his solution for the dumb and enthusiastic was to "shoot them".
While popular, the comparison between cars and computer programs is not well chosen. In fact, comparing software to any physical object is pointless; the two have anything in common only on the surface.
If you were to put software through the rigorous testing that physical products like cars undergo, you would likely never ship anything, and if you did, customers would not be willing to pay the price.
Software is infinitely more complex than even space shuttles. The number of possible execution paths your program can traverse is so large it defies intuition.
I guarantee you that once you spend the money having your code formally proven, your costs will be so high that no one will buy your software. Instead they'll turn to the competitor who wrote it in VB, accept their EULA, and live with the errors.
The nature of software is not that of physical objects. You can accept this and plan accordingly, or you can delude yourself and keep getting angry about bugs.
I write software for fun, everything from low-level drivers up the stack to web apps. My job is engineering mechanical systems more complex than the space shuttle, and with more lives at stake.
The two are not even remotely comparable.
They did it with an almost insane level of attention to detail.
So it certainly is possible to do.
In a sense, the problem with Adobe seems to me to be the alignment of organizational goals with user benefits, and not software process.
Software is expected to scale by many orders of magnitude in many dimensions. The equivalent would be a vehicle that supports carrying between 1 and 1 million people, can travel anywhere between 1 and 1 million mph, running off fuel between 1 and 200 octane. Physical objects are never expected to support such wide scaling parameters, and yet this is very common in software.
Software is also expected to run on lots of different kinds of hardware with different features and performance characteristics. A rough analogy is a physical design that has to support being constructed from either aluminium or steel.
Since software is more abstract in nature, you'll often hear people saying that they weren't even sure what they were building until version 2. The requirements are also more likely to change during the engineering process. Mechanical things seem more likely to have a well-defined purpose and scope throughout the engineering process.
As for your specific examples, 'different kinds of hardware' is no different than saying my system needs to work at -30F and 130F temperature. Materials behave very differently at different temperatures and we have to account for that. Some metals are weaker in temperatures as high as +25F. That's something you will see all the time.
You are also vastly overrating the complexity of scaling. It's really not that hard. Are you really going to tell me it's harder to figure out how to scale a web site than it is to build a rocket engine? Because there are about 1,000 web sites out there with millions of users, and only about 10 organizations building rockets.
No (which I admitted up-front). But have you ever worked on large, high-availability distributed systems? When you say that scaling is "really not that hard", I suspect the answer is no. It is absurdly more complex than single-machine programming. There may be more people building large websites, but that probably has a lot to do with the fact that a lot more people visit websites than ride on rockets. If you compare the number of support staff needed to run a website like Amazon with the number needed to launch a rocket, I bet they wouldn't be that far off.
I'm not saying mechanical engineering is easy, I'm just saying software isn't easy either. I also don't think you can conclude that because we have 60 years of mechanical engineering process, software should fit into the same processes.
So what's your database system like? Well, we're halfway through the transition between A and B; we don't have a DBA, so Bob wrote something to create build scripts based on changes made in this file. It's buggy and we're starting to try out C, but if you ...
Have you read papers like:
These papers all describe solutions to "hard real world software problems" and have nothing to do with legacy systems. If you think there aren't hard problems in software, you're probably not working on one.
We have fatigue/vibration, corrosion, and wear. What's the equivalent in software? There is a reason they park perfectly good airplanes in the desert: we can't guarantee they won't fall out of the sky, because it's impossible to perfectly predict fatigue.
And I have issues all the time related to things failing 3 or 5 years after they were built, despite a 40-year design lifetime. Metals always seem to find a new way to corrode and bearings find new ways to fail. There is no equivalent to a corrosive, hostile environment in software.
Not to mention the random things thrown at you in the physical world. If you design jet engines, be prepared for birds to get sucked in (hopefully not too many, and if so, hopefully your pilot can land in a nearby river full of ferries to pick up the passengers). If you design buildings, get ready for earthquakes of unknown size, hurricanes of unknown wind speed, and terrorists with various methods of taking your structure down.
We can't guarantee anything. In fact we can barely test most of the complex stuff because it's too expensive; cars are cheap relative to most things. Nobody crashes 737s to find out what happens, or shakes an entire city just to ensure it was built correctly. You have to predict all of this with calculations, and it largely goes untested.
Most mechanical components obey underlying physical principles that have linear or quadratic approximations, at least in certain regimes of environmental and other factors. Therefore, we can model the component and we can know when we are unable to model it.
We manage overall system complexity via physical/mechanical modularization, with things to insulate against thermal, mechanical, chemical, electrical coupling. By testing individual components, we have basic assurances on overall system behavior.
Software attempts to do this with "good design principles", but the truth of the matter is that just about any software component in a typical application can completely jack up the global environment for other components, and processes can make OS and environment modifications that completely break other processes belonging to the same user.
Try issuing performance guarantees on an airplane whose fuel pump can set μ0 and ε0 to -1 if the ground crewman that filled the wing tanks was named "Bob Null".
With unit tests and behavioral tests, we gain basic assurances that individually tested components will work together as a whole.
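As a toy sketch of that idea (the function names and numbers here are mine, not from any real system): pin down a small component's contract with unit tests, then lean on that contract when composing it into something bigger.

```python
# Hypothetical component: clamp a value into an inclusive range.
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

# Unit tests pin down the component's contract...
assert clamp(5, 0, 10) == 5
assert clamp(-3, 0, 10) == 0
assert clamp(42, 0, 10) == 10

# ...so a composite built on top of it inherits those assurances.
def brightness(percent):
    """Map a user-supplied percentage to a 0.0-1.0 level, tolerating junk input."""
    return clamp(percent, 0, 100) / 100

assert brightness(250) == 1.0   # out-of-range input is still handled
assert brightness(50) == 0.5
```

The point isn't that this proves anything globally, only that tested contracts at the component level are the software analogue of testing individual mechanical parts.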
Engineering also has good design principles. One does not make gear teeth perfectly angular (take a look at the Antikythera Mechanism), because that leads to premature wear and poor performance. In fact, there are hundreds if not thousands of kinds of gear teeth, and interchanging them within the same application can have all kinds of long-lasting effects. Look into any vehicle recall of the past two decades and you'll see that nearly every one is an edge-case bug that slipped past QA.
Failing to account for the string "Null" being a valid value is a bad design principle within the domain of software. Just as using frozen water as a bearing surface in high-speed rotating machinery (hey, it's hard and slippery, it's perfect!) is a stupid mistake, not accounting for valid "Bob Null"s will lead to premature failure, if not of the database then of the business.
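A minimal sketch of that bug, with made-up function names: if missing values are encoded as the bare string "Null", a person actually named Null silently vanishes. Using an encoding whose sentinel can't collide with real data (JSON's distinct null type, here) avoids it.

```python
import json

# Naive scheme: the string "Null" doubles as the missing-value sentinel.
def naive_serialize(name):
    return name if name is not None else "Null"

def naive_deserialize(field):
    return None if field == "Null" else field

# A real person named "Null" round-trips to nothing:
assert naive_deserialize(naive_serialize("Null")) is None

# Safer scheme: JSON distinguishes the string "Null" from the null value.
def safe_serialize(name):
    return json.dumps(name)   # "Null" -> '"Null"', None -> 'null'

def safe_deserialize(field):
    return json.loads(field)

assert safe_deserialize(safe_serialize("Null")) == "Null"
assert safe_deserialize(safe_serialize(None)) is None
```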
We've only been at software engineering for less than a hundred years. We've been at mechanical engineering for a good 2000 (see the aforementioned Antikythera). We might need a few more years to iron out best practices as an industry.
With digital computers, however, the size of the state space that the system can occupy grows exponentially with the number of bits of state in the system, and changing a single bit can result in an explosive cascade of changes to the rest of the system. Accumulated random failures of computer software very rarely lead to a nice, smooth, predictable probability distribution. Software failures are not caused by anything remotely resembling wear and tear.
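A quick back-of-the-envelope illustration of both points (the numbers are mine): state count doubles with every bit, and a one-bit change in input can change the output completely, which is nothing like wear and tear.

```python
import hashlib

# The number of distinct states grows exponentially with bits of state.
def state_count(bits):
    return 2 ** bits

# A single 64-bit machine word already has ~1.8e19 possible states:
assert state_count(64) == 18446744073709551616

# And a single-bit change cascades: SHA-256 of inputs differing in one bit
# produces completely unrelated digests (the avalanche effect).
digest_a = hashlib.sha256(b"\x00").hexdigest()
digest_b = hashlib.sha256(b"\x01").hexdigest()
assert digest_a != digest_b
```

Contrast this with a dented fender, where a small perturbation stays a small perturbation.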
 Please excuse and correct any inadequacies in my autodidactically acquired understanding of information complexity.
I humorously submit "The win32 API" and "The JVM garbage collector" as examples of hostile environments :)
And yeah, software and mechanical engineering are tricky in very different ways.
It's still a good example when trying to convey the complexity of software to people who don't understand computers, since most have an idea that space shuttles are very complex (which of course they are).
My point was that testing all possible combinations of how your app can execute is next to impossible unless you are willing to cough up serious money for rigorous mathematical proof, which would then make it too expensive.
All engineers are human. Whether you are working on a space shuttle, an airliner, a nuclear power plant, or an iPhone app, you are a human. Humans make mistakes. Humans overlook things.
So how do we engineer really complex systems with hundreds or thousands of lives at stake to an exacting standard - knowing that the engineers are human?
The answer is to build a process that catches mistakes. I don't think software engineering has really caught up with mechanical engineering in terms of process.
I know a lot of guys who love to wrench on cars. They swap parts, add horsepower, change out the suspension, etc. They can build a really fast car. But that's not mechanical engineering. They are mechanics.
In a lot of ways writing software is like that. Glue together some libraries and APIs the same way a tuner supercharges an engine. But that isn't engineering.
Obviously we don't need the rigour of the space shuttle to make an iPhone app, but if your application calls for that complexity (or your budget/liability is large), then you need to bring in the process mechanical engineers have been using for the last 60 years.
That means multiple people checking all the code. That means a well planned-out arrangement/architecture. That means testing the individual parts thoroughly and the whole system together. And it means very specific configuration management of every dependency.
It's not impossible; it's just not the willy-nilly fun part of hacking stuff together. It's the ugly, paperwork-inducing, lame part of working in a big company. But that process, done correctly, helps catch mistakes.
Although you're right that for some projects, the poor quality is because it's more fun to just hack it together, but for many, it's a matter of business priority. I've worked on projects (avionics software) that had the rigor that you describe. I've also worked on projects where the developers consistently tried to add robustness, but management kept redirecting them to add more features.
I agree. I'm not sure it ever will. But comparing software to a car, and the relationship between buyer and seller, is too simplified. Software has bugs, many more bugs than cars, because it's not tested properly. And we don't test properly because no one would buy software at the price proper testing demands.
You can accept this and write your contract accordingly, or you can sit down, sulk, and be disappointed when it fails.
I'm not saying it's right - it's just how things are.
That is not an inherent property of software. The problem with software is that it makes it all too easy to hide complexity, and some of the costs of complexity are not superficially apparent.
Add on top of that how in the name of 'reuse' we pile more and more layers of complexity, and you end up with systems that are humanly incomprehensible.
But this doesn't mean that writing simple yet functional software is not possible, it just requires much more care, thought and self-discipline.
The two top quotes listed here are worth remembering: http://quotes.cat-v.org/programming/
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies." — C.A.R. Hoare
"The computing scientist's main challenge is not to get confused by the complexities of his own making." — E. W. Dijkstra
Adobe has a fix to a serious vulnerability. Not releasing it when the cost to them is tiny is essentially criminal negligence, especially when they say the fix is available to those who are willing to pay...
This is the same company that owns Flash which runs on >99% of the desktop machines connected to the Internet.
What is being discussed: when a defect is discovered in the product by the end consumer, is it a fair (or proper, or wise) business practice to charge the customer for the software patch?
It was not my intention to side track the discussion. I just don't like the simplification of comparing with cars - but the OP made it clear that I misunderstood his post.
Ok. The possible combinations of ways your application can (theoretically) run far outnumber the estimated number of atoms in the visible universe, even for small programs. You just need a couple of loops in loops. If your program doesn't have them, then I'm sure Node, Apache, Postgres, Rails, whatever have plenty.
While many of these combinations may never happen, you would still have to prove that none of them drive your program into a state you cannot handle.
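To make the "atoms in the universe" claim concrete (my own illustrative numbers): a loop that runs n times with one independent two-way branch per iteration already has 2**n distinct execution paths, and 300 iterations is enough to pass the commonly cited ~10^80 atoms in the observable universe.

```python
# Count execution paths for a loop with an independent branch per iteration.
def path_count(iterations, branches_per_iteration=2):
    return branches_per_iteration ** iterations

# 300 loop iterations with a single two-way branch inside:
assert path_count(300) > 10 ** 80   # exceeds ~10^80 atoms in the universe

# Three-way branching gets there even faster:
assert path_count(170, 3) > 10 ** 80
```

Exhaustively testing every path is hopeless; you either sample (ordinary testing) or reason about whole classes of paths at once (proofs, types, invariants).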
"and then go and make some of your own comparisons with the space shuttle"
This was a comparison of complexity - not a direct comparison between the two.
Can you elaborate on this? I'm not convinced that this is true (but am willing to be proven wrong)
As I see it this is what is going on when your users use a webapp.
The user runs some client code which you wrote, in a browser which other guys wrote, running on an OS made by someone else, sending data back and forth via protocols and network equipment with software that other people wrote.
Your server OS receives the request and passes it to your load balancer, which distributes to Apache, which forwards to PHP, which routes to SQL... and all the way back.
With the millions and billions of lines of code involved in these steps, it could easily be a number of this magnitude.
Actually it's a wonder that it works...
What's wrong with that?