The author brings up a fundamental difference between traditional engineering and software engineering. In many fields, engineers sign on the dotted line and assume professional and financial liability for the correctness of their design. They have to think about things like warranty repairs, recalls, product liability lawsuits and dead citizens. They tend to design very conservatively.
How many software engineers are willing to make their careers dependent on the correctness of their code?
There probably are some that are. Unfortunately, I've never met them or had the privilege of hosting their applications.
> How many software engineers are willing to make their careers dependent on the correctness of their code?
Proving program correctness is a very involved, expensive, and rigorous process. That being said, comprehensive unit tests are a good investment in demonstrating that code works as it should. Software cannot be engineered like a bridge.
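As a small illustration of that investment, here's a minimal, hypothetical example in C (the function under test, clamp_percent, is invented for the sake of the sketch; a real suite would use a test framework):

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical function under test: clamp a value into [0, 100]. */
static int clamp_percent(int v) {
    if (v < 0)   return 0;
    if (v > 100) return 100;
    return v;
}

int main(void) {
    /* Each assertion pins down one piece of intended behavior,
     * including the boundaries where bugs tend to hide. */
    assert(clamp_percent(-5)  == 0);    /* below range */
    assert(clamp_percent(0)   == 0);    /* lower boundary */
    assert(clamp_percent(42)  == 42);   /* in range */
    assert(clamp_percent(100) == 100);  /* upper boundary */
    assert(clamp_percent(250) == 100);  /* above range */
    puts("all tests passed");
    return 0;
}
```

Tests like these don't prove correctness - they only pin down the behaviors someone thought to check - but they're cheap insurance against regressions.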
So, do you believe that proving the structural integrity of an engineer's work is any less expensive or difficult?
No, structures are simply built according to several well-established, well-studied principles, which software engineers not only lack but show little interest in pursuing.
People have been building bridges for a lot, lot longer than they have been building software.
Also, you can go on all you want about how to engineer quality stuff, but if you ignore the economics of the situation, you'll end up pricing yourself out of the market by an order of magnitude in many fields. For instance, the web or mobile phone stuff I have been working on lately: it's important that it mostly works, and is reasonably priced. The cost of making sure it never, ever has any downtime or ever fails would simply not be worth it to my customers and clients.
"The iron triangle refers to the concept that of the three critical factors scope, cost, and time at least one must vary otherwise the quality of the work suffers. Nobody wants a poor quality system, otherwise why build it? Therefore the implication is that at least one of the three vertexes must be allowed to vary. The problem is that when you try to define the exact level of quality, the exact cost, the exact schedule, and the exact scope to be delivered you virtually guarantee failure because there is no room for a project team to maneuver.
Software development projects often fail because the organization sets unrealistic goals for the "iron triangle" of software development:
* Scope (what must be built)
* Schedule (when it must be built by)
* Resources (how much it must cost)"
You are contradicting yourself: nobody thinks it's even remotely realistic to give assurances with 0 margin of error. Of course it's expensive, of course it's difficult. It is likewise unrealistic for traditional engineers to do so - everyone knows that buildings will eventually collapse.
However, we do need some quality assurance more reliable than "this piece of software passes all the tests which guided its development" - a circular statement that somehow TDD fans hail as absolute truth. Stronger assurance would improve our applications and it would improve our skills. I can't see how it's a bad idea, unless taken to the extreme.
> we do need some quality assurance which is more reliable than "this piece of software passes all the tests which guided its development"
We do not necessarily need even that. The web site for the little antique bookstore in downtown Padova really doesn't - it doesn't even handle money. They're happy to get something that mostly works and fix the rare bug that does come up when it's noticed. The software responsible for safely guiding a 747 full of people into an airport at night, on the other hand, probably warrants something more in terms of tests/QA/provable correctness/whatever else makes it safer, even if that makes it significantly more expensive.
There's no contradiction, and I wasn't talking about TDD. My point is merely that you have to consider the economics of these things, and software projects vary to the extremes in terms of their importance and impact on our lives.
I completely agree. This also applies to engineering, though. If you are making a gingerbread house with your child for a school project, you are not going to need any kind of certification. If you are building a shed in your backyard, you may need a permit, but you do not need extensive approval from quality assurance teams on each step of the process. Software is the same way. Large systems require more extensive testing. A space shuttle needs software and engineering with extensive testing. A bridge and software for medical devices need quality assurance beyond the usual scope. However, quick hacks exist in engineering and software, and it makes no sense to 'trust your code with your life' or 'trust your engineering skills with your life' when all you are making is a popsicle stick house.
If it only took you a night to do it, then I guess so. However, it sounds like you are making a shed or garage instead. Still, you are not sending anyone to the moon, I suppose.
If the scope of your statement was that narrow, I will agree with you. Though, the article does point out that eventually, code you've written and perhaps released without claiming any responsibility whatsoever might be used in something more important than what was originally intended.
If someone takes something of mine that's not been built for 'safety critical' use, and wants to use it there, the onus is on them to put the extra effort/money/time in. Same as if I built a one story house and someone wanted to add 5 floors to it - it's not really my fault if it doesn't work out.
> How many software engineers are willing to make their careers dependent on the correctness of their code?
You're assuming that correctness is binary. It isn't.
If you prefer, math is the only opportunity for binary correctness. Everything else is "how likely is undesired behavior" and even that is non-trivial.
Every software engineer's career depends on whether her code satisfies the acceptable likelihood of failure for her circumstance.
How do I know this? Because folks whose code doesn't satisfy the relevant likelihood get fired (or at least moved to other activities), or their company loses the biz.
Yes, software fails. If certain failures (or their likelihood) in certain software are a problem for you, stop using it and stop paying for it.
Many software failures simply aren't worth what it would cost to fix them.
No - you're not entitled to software that has the likelihood of failure that you'd like for the price that you'd like to pay.
Are we talking about my code for a video sharing and live streaming website? Nope.
Are we talking about my code on a microcontroller whose only purpose is to physically monitor and calculate a small number of things? Probably, yes.
There's a huge difference between the two, both in the processes behind them and in the end result of the development cycle. The code on important things where lives are on the line, like a space shuttle, is simple stuff that has been tested and combed over a lot.
Bridges are very singular in purpose and only need to not fail at what they're supposed to do. Most software needs to not fail at what it's not supposed to do, as well as at what it is supposed to do.
Software engineering is still an immature field. Civil engineers have known for about 2000 years how to build a bridge that won't fall down. We still haven't figured out how to write software that doesn't break.
Yes, people know how to build bridges that don't collapse arbitrarily, and people know how to build software that doesn't crash arbitrarily - when it's simple enough, exposed to a limited range of inputs, and built on a solid foundation.
Most software changes as it's used - bridges don't. A bridge never falls down because a lorry drives onto it, then goes into suspend mode because the battery runs low, then when it wakes up the lorry has gone without driving off the end. A bridge never has a painter working on it who then has to make their changes live and accidentally makes one of the lanes invisible.
When you put too many concurrent requests into a web server, nobody can use it and everyone is stuck. With a bridge, it's an everyday traffic jam: people see it long in advance, know the detours around it, and get on with their lives.
If bridges were as complex as software, would they still be as reliable?
> If bridges were as complex as software, would they still be as reliable?
No. As a simple example, if there are n software modules, there are n! possible communication paths. Yes, black-boxing can partition n down but software is not self-healing (yet). Witness the GMail cascading failures.
The bridge supports were undermined because the high-water level flooding generated enormous hydrostatic pressure against the footings (pressure is greatest at the bottom - where the footings meet the earth).
Think of standing in the surf and getting knocked down by a wave - the force, whether you know it or not, is greatest at the bottom (near your feet - which is why you can lose your footing).
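For reference, the standard hydrostatic relation makes the point: with water density $\rho$, gravitational acceleration $g$, and depth $h$ below the surface,

$$P = \rho g h$$

so pressure grows linearly with depth, and the footings at the very bottom see the largest load.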
I am not sure how to measure and compare complexity of buildings and software. But I wouldn't immediately dismiss buildings as being all simpler than all software.
> if there are n software modules, there are n! possible communication paths.
We deliberately build software so it does not join every part to every other. That is fairly evident from top to bottom, and really it couldn't have a distinct coherent function if that wasn't the case. So I think a stronger argument is needed . . . which I would be quite willing to accept if good . . .
Good questions. I tried to answer your question by replying to your other comment a few minutes ago.
The answer to your question lies in the idea that the more software can adhere to rules (assertions, aspects, closures), the more rule-driven the software's virtual model and behavior will be. Rules are good. It's good that we always fall toward the ground. It's bad that a buffer overflow or an exception will gum up a running software process.
You hear this BS from time to time. Usually from some organization, or individual, that wants to make computing a "Profession". They usually neglect to mention the thousands of bridges, buildings, and other structures that collapse every year, or the occasional city that drowns. How many automobile or baby cot recalls were there this year? I suspect that Engineering is no more reliable than computing.
I studied civil engineering in school. All of my civil engineering friends are now licensed professional engineers (most, if not, all of them structural - designing and inspecting bridges).
What that means, besides having a really nice seal embosser, is that when you sign off on a design (embossing it with your professional seal), you take responsibility for that design. You are professionally liable for the failure of your design.
As a side note, to take the Professional Engineering exam in certain U.S. states, you have to submit roughly 4 inches' worth of your actual engineering work notes from your 3 or 4 years of professional work experience (a prerequisite to sit for the exam). The year you sit for the P.E. exam, you pretty much study for it like a part-time job (companies support you because they know the importance of it).
It is not just about reliability, it is about ethics (the Citicorp Center case is studied) and a sense of professional responsibility. Real professions are highly regulated and self-regulating (belonging to a tribe) at the same time.
When I heard my sister was dating a programmer - my future brother-in-law - I had her ask him if he thought software could be engineered. And he gave me the correct response. Having been trained in structural engineering, I hate... really, really abhor the term "software engineer" - I prefer "software developer".
'Changes during construction led to a finished product that was structurally unsound. In 1978, prompted by a question from a Princeton University engineering student, LeMessurier discovered a potentially fatal flaw in the building's construction: The original design's welded joints were changed to bolted joints during construction, which were too weak to withstand 70-mile-per-hour (113 km/h) quartering winds.'
So I have to ask then (and this is really in the spirit of discussion not trolling) - is it that software cannot currently be engineered, or is it that software can never be engineered?
I ask because I feel like there are examples of "bridge-worthy" software, mostly out of NASA, where the code tightly fits the hardware and the defect rate is something like one bug per million lines of code.
It comes down to there being, at the base, a material that is objective, complex or sufficiently interesting, and usefully applicable. Software has this; it is just abstract: instead of atoms, it is bits and operations.
A 'material' means understanding, and rules, can be formed. Instead of force mechanics, software has algorithmic complexity. Sorting's lower bound of Ω(n log n) would be an example of basic knowledge. Of course, with only 50 years of history, understanding of software's material is immature, and there is much to be discovered. But merely assembling pieces in regular, predictable ways is sufficient for a simple kind of engineering status.
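That bound, for what it's worth, follows from a simple counting argument: a comparison sort must distinguish all $n!$ possible orderings of its input, and each comparison has only two outcomes, so any such algorithm needs at least

$$\log_2 n! = \Theta(n \log n)$$

comparisons in the worst case (by Stirling's approximation). That is exactly the kind of material-level fact an engineering discipline can build on.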
Engineering often seems conflated with process, but this misses the essentials. Process is only support: it doesn't tell you about the material, or about design; it only helps you be better organised in what you can already do. Buildings don't stand up because the designers rigidly followed a process; they stand up because there is an understanding of how forces work in structures (etc.).
In a large, industrial, sense, software engineering lacks in certain ways, in certain areas. But at the core, it is essentially engineering.
If the scope is very narrow, software can be engineered to be correct. However, I think it is better to work towards organic, self-healing software systems like Google's (where they leave failed cluster nodes in place, not bothering to find or remove them).
Another reason software cannot be engineered in a cost-effective manner is that most engineering is based on inviolable physical rules, such as the constant g (gravitational acceleration). This allows software (irony, yes) to be developed that helps engineers engineer correctly - there is always a "factor of safety" - think FEA packages like ANSYS. My hat is off to legends like John Roebling, who designed the Brooklyn Bridge before computer aid.
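(For the record, that factor of safety is just a ratio:

$$\mathrm{FoS} = \frac{\text{ultimate capacity}}{\text{design load}}$$

i.e., how much stronger the structure is than its worst expected loading.)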
The pain and cost of engineering software to structural engineering standards (ISO 9000 is a joke; NASA has its own internal standards) usually exceeds the lifetime benefit of correctly engineering that software. A good rule of thumb: will someone's life be jeopardized if the software fails? Think dialysis machines, automotive control systems, nuclear control rods, 747s, spaceships (makes me wonder, actually, how "correct" Burt Rutan's X Prize spaceship software may be).
> most engineering is based on inviolable physical rules
Don't rules such as those of algorithmic complexity have the same status and position? They are as objectively certain -- more so, in fact, since they are purely logical. At its base, software is built on bits and operations, which are entirely determinate. The ramifications might not currently be fully understood or exploited (software is only about 50 years old!), but the potential is there.
As an example: with CPUs like the Z80 it was possible to count clock cycles of instructions, and determine upper/lower/average running times of code. That would be equivalent to determining resultant forces across a whole physical structure. Today, CPU makers have made things difficult, but that is completely contingent: it would in principle be possible to predict performance, in an engineering fashion, for many purposes.
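To make that concrete, here's a toy sketch in C of the kind of worst-case timing arithmetic that used to be routine. The T-state counts below are the documented Z80 values; the loop being timed is hypothetical:

```c
#include <stdio.h>

struct insn { const char *mnemonic; int t_states; };

int main(void) {
    /* Documented Z80 T-state costs for a small memory-scan loop. */
    struct insn loop_body[] = {
        { "LD A,(HL)", 7  },  /* load accumulator from memory */
        { "INC HL",    6  },  /* advance the pointer */
        { "CP B",      4  },  /* compare against a terminator */
        { "JR NZ,e",   12 },  /* taken branch costs 12 T-states */
    };
    int per_iteration = 0;
    for (size_t i = 0; i < sizeof loop_body / sizeof loop_body[0]; i++)
        per_iteration += loop_body[i].t_states;

    /* At a 4 MHz clock, worst case for a 256-iteration scan:
     * T-states divided by clock rate in MHz gives microseconds. */
    double usec = per_iteration * 256 / 4.0;
    printf("%d T-states per iteration, %.1f us worst case\n",
           per_iteration, usec);
    return 0;
}
```

That's a resultant-forces style calculation: deterministic inputs, deterministic answer.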
In the homework we did for engineering classes, we could make simplifications without fundamentally altering the problem. For example, a continuous load on a structure could be modeled as a single force vector, because gravity always acts downward. You can't do that in software with loosely-coupled, autonomous modules. These modules do not obey anything - just an implicit contract that can be broken easily. Unit tests only help to clarify the contract for a module or between modules.

In structural engineering, there are a lot of unknowns, but they can be modeled across the entire range because there are many known constants: the load-bearing capacity of the beam, the moment (the turning force acting on it). Virtual abstractions like software do not obey physical rules, because they are virtual. You can create a Castle in the Air in your program world, if you like.

Hard physical reality has real constraints; virtual reality has flexible constraints. Physical reality is one of the reasons building construction is a solved problem. The irony is that with sophisticated software, architects like S. Calatrava and F. Gehry can begin to create almost Castles in the Air.
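To illustrate what "an implicit contract that can be broken easily" looks like, here's a minimal hypothetical sketch in C (the function and its contract are invented for the example):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical module boundary: the caller promises a non-NULL
 * buffer with capacity `cap` > 0. Nothing in the language enforces
 * this; the assertions only document and spot-check the contract. */
size_t sanitize_name(char *buf, size_t cap) {
    assert(buf != NULL);   /* contract: valid buffer */
    assert(cap > 0);       /* contract: room for the terminating NUL */
    size_t n = 0;
    while (n + 1 < cap && buf[n] != '\0') {
        if (buf[n] == '\n' || buf[n] == '\r')
            buf[n] = ' ';  /* module rule: no line breaks in names */
        n++;
    }
    buf[n] = '\0';         /* guarantee: result is NUL-terminated */
    return n;
}
```

Unlike gravity, nothing stops a caller from passing a `cap` larger than the real buffer - the "constant" here holds only as long as everyone honors it.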
Nonsense. While buildings and bridges certainly do fail, and they require significant expertise to design and build correctly, they do not collapse because of one misplaced bolt. One misplaced *p++? Segmentation fault.
The real problem with people calling programming "software engineering" is that programmers have absolutely no idea which processes actually produce more reliable software. There's a lot of hand-waving and anecdotes, and no solid or reproducible results anywhere. TDD this, agile that, waterfall this other thing. All of it still produces buggy software which crashes, corrupts data, and otherwise makes programs unreliable. Relatively bug-free software is always the product of really smart people who made very few mistakes and tested thoroughly, and never the product of any particular "process" which generalizes well even to another group of smart people. (Just look at the arguments which arise here on HN every time TDD comes up.)
Assume for a moment that there is no difference in error rates between software and bridge engineering. In the case of most software engineering, there is no recall, no liability, and often no admission of fault as a consequence of software failures.
In the case of 'real' engineering, the opposite is true.
If a software vendor whose product deleted my data because of a bug (Apple, for example) were to assume liability, refund my purchase price, and compensate me for my data and recovery costs...
Interesting question. I was just thinking about this yesterday, so I'm glad you brought it up...
Business software, in particular, is often financially-critical (failure of the software leads to loss of money) rather than life-critical.
This may have been true at one time, but not any more. As a business programmer, I've worked on quite a few things where there is much more at stake than just money. Just a few of them:
- distribution of mission critical airline parts with linked certifications
- scheduling & routing of ambulances and firetrucks
- scheduling & routing of trucks carrying time-sensitive medical supplies
- clean-room quality control of medical devices
- distribution of pharmaceutical formularies
- medical claims processing & adjudication
- formulas & recipes for large batch food processing
- medical demographic databases of allergies
- certification of automotive safety devices, including airbags
- building contractor specifications, including electrical & plumbing
- clinic scheduling
Just because something won't hurt you immediately doesn't mean that it can't hurt you eventually. You can see from my examples that so much we program does affect the welfare of many, even if indirectly.
We really have reached the point where software QA is just as important as engineering QA. We programmers aren't the only link in the chain, but we are an important one.
I have looked at horrendous enterprise code that supported critical health and safety issues and thought, "Do you really want to get on that plane?" or "Are you sure you want to take that pill?" (Hopefully QA catches most of the potential culprits.)
Thanks for getting us to think about it a little more. This sort of thing should always be on any good developer's mind.
Technically, yes, but I imagine it's very simple code. Most skydiving rigs these days have an Automatic Activation Device (such as the CYPRES: http://www.cypres-2.com/). You're not supposed to rely on it, but it can save your life if you're knocked unconscious during freefall.
The thing is run by a small microcontroller that monitors your altitude and velocity. If your altitude is less than 750 feet and you're still in freefall, it pulls the chute for you.
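A hypothetical sketch of that trigger logic in C (the 750-foot figure is from above; the freefall descent-rate threshold is my assumption, and the real firmware is proprietary):

```c
#include <stdbool.h>

#define ACTIVATION_ALTITUDE_FT 750.0  /* threshold described above */
#define FREEFALL_RATE_FPS      150.0  /* assumed: descent rate treated as freefall */

/* Called periodically with the latest barometric readings:
 * fire only when low AND still falling at freefall speed. */
bool should_fire(double altitude_ft, double descent_rate_fps) {
    return altitude_ft < ACTIVATION_ALTITUDE_FT
        && descent_rate_fps > FREEFALL_RATE_FPS;
}
```

The point being: the whole safety-critical decision fits in a handful of lines, which is exactly why it can be exhaustively tested.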
Now, would I trust my life to code running on a desktop OS? Hell no! I've seen too many CVEs for that to happen.
No, I wouldn't trust my life to my code, because I don't need to. Quality matters, and defects can have serious financial consequences (for instance, losing data can have a serious and measurable impact). But in the trade-off between innovation and risk, I should probably position myself fairly aggressively. Like most programmers, I'm in a position where it's probably better for me to make and recover from mistakes than to avoid mistakes altogether.
This is why I think the "software is like building a bridge" analogy is so inappropriate. If you're going to hold software accountable for its relative lack of reliability, you should also acknowledge that the innovation from this field has been astounding.
Would licensing help things? Would the space shuttle control system software become very reliable if only those software "engineers" were licensed? I seriously, seriously doubt it.
But I do think that "software engineers" would slowly succeed in choking off a competitive and free environment. In a nightmare scenario, you'd have to major in "software engineering" instead of math or physics to be legally allowed to write code, self study would be banned, and something like Ruby on Rails would be illegal because the people who wrote the EJB specs don't like it.
Look at the activities of the ABA or AMA. That nightmare scenario isn't as impossible as you might think. And I'm pretty sure that it would hurt innovation in software severely, in exchange for a safety and security that 1) it wouldn't deliver anyway, and 2) we don't need in the first place.
> So let me ask a related question. Would you trust your financial assets to your code? How much would you wager that your code is correct?
Anyone who runs a business based on code, whether as an ASP or as a consultant, does this on a daily basis. I trust my code to create value for my client, so my client will pay me, so I can pay my rent.
IMO one of the great things about exploratory programming is that you don't have to trust your life to your code. It may annoy a lot of users, but the trade-off (in being able to explore and experiment) is worth it.
My graduate advisor did some consulting for one of the major airplane manufacturers. He said looking at their systems code put him off flying for six months.