Executing Software Engineers for Bugs (setec.io)
60 points by hlieberman on Nov 1, 2015 | 86 comments



When people bring up a code of ethics, actuaries seem to come up a lot because they can potentially lose their license for mis-evaluating risks. Anecdotally, though, what I've heard is that in reality, if your boss asks you to sign off on something and you refuse, they'll find someone else to sign off on it and just label you as not a team player. At that point you've harmed your career and had no real impact.

I bring this up because when people propose a code of ethics, they assume it will help, but I'm not convinced we will ever get to a point where codes of ethics are widespread and taken seriously enough that they will do anything but harm the altruists among us.


A code of ethics is only one blade of the scissors. For it to cut anything, you need the other: a professional association that acts in unison to back up members who stick their necks out like this, ensures they're taken care of, and sees to it that the unethical company's talent pool dries up.


That's true. Is there value in refusing to do work, even though the refusal might not stop the thing from being built?

If you know, a priori, that refusing to work on something you consider unethical will have /zero/ effect on the overall project but will negatively affect you, is it then ethical to do the work, or are you required to fall on your sword?


I'd contend that you don't know it will have zero effect until you have tried, and how you go about refusing assent is of critical importance. If you simply say "no", your boss will move on to the next person for assent, and you will be left vulnerable. Make it a formal process, and it may well have an effect.

1) Listen to all competing arguments. Make sure you are making a sound decision and your opposition is genuine and not based on a personal bias.

2) Make it explicit that you are saying no in your role as a professional, and that this is not just a casual "no". Ethics require engineers to make decisions only in areas of competency, so make it clear that you are competent to make this decision, and that your decision derives directly from your competency.

3) Write it down. The document should cover 2) above, and set down the defensible reasons why assent is being refused, addressing counterarguments. Consider that the arguments you write down could well be tested in court, and that you will be cross-examined on them. The document/report should be of the same standard as if you were a consultant to your employer.

4) Sign and submit the document (keeping a copy for yourself). You need to be open about your opposition. Keep it at a professional level. Hopefully you will get some respect, as your document should make it clear that you have legitimate concerns and are not just being difficult.

5) Be prepared for the consequences. Your opposition is now in writing and impossible to ignore. No sane boss is going to ignore it, as the legal consequences of doing so are too great if you turn out to be right. You will have heat applied to you, as the alternatives are to either address your concerns or to get you to retract, and the latter option is cheaper.

6) If the final decision is against you, either live with the decision or resign. If the issue is of sufficient gravity your only option may be to resign, whether that be for ethical reasons or your own legal protection. (Yes your Honour, my work killed 100 people, and I was fully aware of it as shown by this document that I wrote.)

If you end up resigning, then you're probably making the right decision anyway, as the events leading up to your resignation will have shown your employer to be a dud.

Granted, reality won't be as cut and dried as above, and will involve a lot of anguish.


Very true. From a philosophical standpoint, I still believe considering the zero effect case is interesting, but from a practical standpoint, I will gladly cede the point to you. This process seems an impressively good balance for a very hard, miserable situation.


You can draw parallels to the people who worked for the Gestapo in Nazi Germany. Was it unethical to follow orders to round up Jews and others for near certain death, if you knew you would be severely punished if you disobeyed? At the same time you know that if you refuse, the next guy won't. So why fall on your own sword?

I think a lot of people flippantly say they would refuse the order, but it's easy to say that when it's not your neck on the line.

Truly a deep moral dilemma.


You're so right!

It's the worst kind of dilemma: the kind that people outside the situation will claim was easy.

Personally, I think the Zero-effect case should not be cast aside lightly. Pragmatism is a legitimate ethical model.


I agree. An ethicist/moralist does not care about the content of the interests that bring people into conflict. He wants to constrain these conflicts with an additional demand for good behavior, so that everyone adds a moral attitude to whatever they do or must do – at work or in the family – that leads to good works instead of evil deeds.

And thinking in this schema is not only a mistake; it produces a theoretical reversal in an almost literal sense. People experience their environment, and the interests at work in it, as conflict, as injury. And they hold this experience not against that environment, but against themselves: their own bad behavior.


>But making the leap from that to "my code can't harm people" is a bridge too far.

Meh. My application is an internal app for a large company. It's basically scheduling software for a part of our business process. To even start to hack it you'd first have to break into the corporate network, and in the end you'd have data you didn't care about. Hell, I'm not even sure the people who use it care.

Worst case, a subtle bug (and it would have to be subtle for my users to miss it) might cost my employer a few thousand bucks.

Again, meh. There are a whole lot of internal applications that fall into this category.


That's not the right way to think about it. Just because you cannot imagine a way your software can be used to attack your company or harm the users, doesn't mean such a thing is impossible or unlikely.


"Executing Software Engineers for Bugs"

Never have I been prouder to be a plain old programmer. Those software engineers, architects, systems analysts, and data scientists can keep the capital punishment for themselves.


You don't need a title to take responsibility.


The idea that "taking responsibility" is how we prevent bugs is what's wrong in the first place.

Quality is systemic, and has its roots across the organization.


I hope you don't have a manager, then...


This author's first sentence is: "There is an apocryphal tale I've heard many times about how, in Ancient Rome (or, in some tellings, Greece), the engineers responsible for the construction of an arch were required to stand underneath it as the final wooden supports were taken out."

This story is derived from Law #229 of the Code of Hammurabi (of Babylon). Babylon was not part of Greece, but it was very briefly part of the Roman empire.

" 229 If a builder build a house for some one, and does not construct it properly, and the house which he built fall in and kill its owner, then that builder shall be put to death. "

http://avalon.law.yale.edu/ancient/hamframe.asp


My Dad told the tale of a farmer coming to the repair shop with a tractor gas tank needing repair. The mechanic refused. The farmer insisted he'd cleaned the tank; no danger of explosion while it was being welded. The mechanic replied "Ok, I'll weld it, if you hold it while I'm doing that." The farmer left.


Good point. However, it was part of the Hellenistic empires for more than 180 years. Alexander the Great even died in Babylon. It was subsequently ruled by the Seleucid dynasty, even after the inhabitants of Babylon were transferred to Seleucia.

Not that it has anything to do with the Greeks putting engineers under their arch.


I'm not a big fan of professional bodies or mandatory qualifications. I think these tend towards rent seeking, with these bodies existing mainly to justify their own existence. In the case of software engineering, I'm especially concerned about academics with little real world experience using valid security and privacy issues as an excuse to force their own view of how software engineering should be done on everyone else.

That said, my employer has standards for security and privacy that go well beyond industry norms, so if I was working elsewhere maybe I would feel the need for better standards across the industry.

In my experience, software engineers tend to be conscientious. Caring about the big picture is a big part of open source, hacker and nerd culture. But knowledge is hard to come by. I learnt from the experts, but I doubt most engineers would be able to build a simple CRUD app from scratch without major security holes.

It would be nice to see some best practices around security and privacy emerge without forcing everyone to write Ada or Coq or completely change their approach to writing software.


We can write software like this.

The reason why we don't has been explained many times: It's too expensive, by many orders of magnitude.


If it's financially impossible to make a product that doesn't harm people, you don't say "gosh, sorry, but your safety is incompatible with our fiscal goals." You cancel the fucking product.

And beyond that, I don't believe for a moment that--to use the example from the article--it would have increased Grindr's costs by 1000x if they had taken a little while to brainstorm possible negative consequences of an app that lets gay people be anonymously tracked and located worldwide. "Too expensive" is code for "we've decided that the safety of our users is less important than maximizing our profit margin."


That's nonsense because it's so binary. Let's say you have a product that takes 5 engineers to build without any consideration for security and safety whatsoever, and it gives you a profit margin of x. Now say you add one engineer, which halves the risk and gives you a margin of (0.9 * x). Then you add another, which halves the risk again; profit is (0.8 * x). And so on.

How many engineers should this company add to act 'ethically'? According to you, they need to cancel the product (I think – though only under an ungenerously rigid interpretation of your comment).

Risk is not a binary 'it harms people or it doesn't'. Knowing whether your product harms people is more difficult still, and the question of who is responsible, more difficult again. (And that's not just between the supplier and the user – other actors too. To take the Grindr example, was it an engineering problem, a product design problem, a user education problem, or a governance problem? I could argue for all of them. Who should act to mitigate the risk? I don't know, and it's not as clear-cut (in the abstract) as some here are letting on.)
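
To make the tradeoff above concrete, here is a minimal sketch in Python using the hypothetical numbers from this comment (each added engineer halves the risk and trims the margin by 0.1x); the harm cost is an invented assumption, not anything from the thread:

    # Illustrative only: hypothetical numbers from the comment above.
    # Each added engineer halves the residual risk and shaves 0.1x off the margin;
    # HARM_COST is an assumed cost (in units of x) if the risk materialises.
    BASE_MARGIN = 1.0
    BASE_RISK = 1.0
    HARM_COST = 0.5

    for added in range(6):
        margin = BASE_MARGIN - 0.1 * added
        risk = BASE_RISK * 0.5 ** added
        expected = margin - HARM_COST * risk
        print(f"{added} extra engineers: margin={margin:.2f}x  "
              f"risk={risk:.3f}  expected value={expected:.3f}x")

Under these made-up numbers the expected value peaks at two extra engineers and falls off on either side, which is the point: the question is where to stop on a curve, not a binary 'ship or cancel'.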


I agree with your sentiment, but I am unclear as to how to actually implement it. An imperative to cancel the product in the absence of a court judgment seems a bit extreme. I think a series of escalation steps is in order.

Escalation steps inevitably involve a 'nuclear option' whose size and expense relegate it to only the most certain and most aggrieved of wronged parties. These days that option is a class-action lawsuit. It seems to work best when deterrence is the primary goal; plaintiffs rarely get anything of value out of it.

It works as you would expect, but one really wants a pathway to punish negligence without having to get lawyers involved. I believe this is usually done with market solutions to make 'buyer beware' less awful on the ignorant.

But the sheer scale of the stated problem – forcing all web service developers everywhere to consider possible lines of attack on their software, under pain of loss of livelihood – seems too general and too unenforceable to be useful.


This is simply false. True systemic quality will lower costs, speed production, and increase value all at once.

The reason is that, beyond a certain organizational complexity, we fail to create the systems needed for the positive feedback loops that let quality, low cost, and fast execution flourish naturally. Instead, we live in dysfunctional organizations where the tradeoffs prevail, and we choose speed and cost over quality in nearly all cases. Most of the time, we don't understand how to even begin to think about quality at an organizational level.

It's fully possible. Quality pioneer W. Edwards Deming explained the full system for creating an organization that is capable of quality in the first place. It mainly involves systems thinking, knowledge of variation, understanding psychology deeply, and creating systems of effective learning—and management and leadership versed in those new leadership competencies.

Instead our instincts are to blame, to punish, to hold accountable, effectively to leave each engineer as an island in charge of his or her own level of quality—when an organization and its production is so far removed from individual control it's not even funny.

We absolutely can write software like this—and we can do it quicker, and for lower cost. We need to work on the systems that produce and influence that quality/speed/cost relationship—not each individual part in isolation. There is a positive cycle to be achieved.

https://en.wikipedia.org/wiki/W._Edwards_Deming


You've stated that it's not more expensive. But everyone in the industry knows otherwise. Unless you can produce some evidence to the contrary, I'm going to insist it's you who is incorrect here.


That's a highly simplistic and incorrect viewpoint—and the correctness of a model is not dependent on popular vote.

When looking at increasing quality (essentially, scope) in isolation from the systems and the environment in which that quality is created, the only way to increase quality is to increase cost. That much is true, but it's also incomplete.

When looking at and eventually optimizing the whole system – from the customer and their needs, to the leadership of the company, to the clarity of purpose, to values and culture, to the intrinsic motivation of your employees, to methods of management, to your learning and education processes, to your processes of production, to your communication links, to your output and how it's measured, and how each is controlled and understood in great detail – you can and will improve quality much more than you could by simply increasing cost in isolation. At the same time, you'll decrease your overall costs in the long term, speed up production, and increase sales and customer satisfaction. The end result is a better product, for lower cost, with more predictability, proud happy employees, and faster delivery to boot. That's what Quality really means.

Looking at quality in terms of cost alone is the naive mistake. Look at the whole system.

And for some evidence, look at Japan circa 1948. These exact methods of systemic quality, "total quality control" and statistical quality control, taught to Japan by W. Edwards Deming, turned the country from the ruins of World War II into the second largest economy in the world, to the point that Japanese companies were putting American ones to shame by the '70s. They did this by following exactly the model I've described above, decreasing total costs, increasing quality, and improving time to market, all by focusing on their total systems across the organization. The proof is in the success of an entire country's economy; I don't know how it can get much clearer than that.

It is not my fault that everyone in our industry has forgotten these lessons on running companies, but the prevailing methods remain incorrect, as does your original statement.


Either you're wrong or literally every software company on the planet is wrong.

I'm betting it's you.

Please do prove us all wrong by starting up a software company that produces bug free software faster and cheaper than every other software company on the planet. There's a big pot of gold at the end of that journey. Go and claim it. If everyone else is as incompetent as you claim, it's going to be a walk in the park.


Not only software companies, but most companies in the US are indeed wrong about the prevailing methods and styles of management and leadership. Is this really that surprising to you? Do companies you work with appear chaotic yet surprisingly successful? Do the same things keep going wrong? Do they get into cycles of cultural upheaval? And do we accept these things as "normal business" and go on with our lives, powerless to control them?

And I'm not saying it's easy. Not at all. It might be the hardest problem in all of business—combining the true systems of work into a cohesive philosophy: systems, statistics, people, and knowledge. I'm just saying (and I'm just repeating the words of some great thinkers in the quality space, mind you) that if we change the way we think about quality and companies in general, we'll get a lot of gigantic unrealized gains in productivity.

This happens naturally in many companies in small pockets, but human nature and psychology tends to break it down eventually.

If I'm wrong, then a great many other people are also wrong, primarily W. Edwards Deming, who, as I noted, single-handedly turned around the entire economy of Japan using these exact methods and ideas.

Here's a great place to start: http://www.amazon.com/Leaders-Handbook-Making-Things-Getting...


If true, this is truly amazing. So here's a quick test:

1. Write a letter outlining the above to a VC. Explain how this works in somewhat more detail; 10-15 pages should get it across. Sure it's work. But you seem resolved, and the reward is great. If you pull this off, not only are you gonna get a Turing award, they're going to mint a whole new award and name it after you. Anyway, give it a decent treatment and mail it off.

2. Tell us how that went over.

Hundreds of thousands, heck, millions of pretty smart people have been thinking of ways to solve "the software problem" for over 60 years. Many of those people are smarter than I am, and probably smarter than most of the readership on HN. The actual improvements have been incremental.

I'm not saying there hasn't been improvement. For instance, we tend to write more secure systems these days, but security remains really, really hard. Saying "you're doing it all wrong" to the incredibly smart and driven people I know who are experts in security is pretty bold.

So, I urge you to make a proposal and report back.


Interesting sarcastic, insulting proposal. I'm tackling just that issue within my own company now, and it's incredibly difficult. I'll explain why.

The main issue is that once you exceed a certain number of developers, the problems become more human than software. It's not the software problem we need to solve, and our solution to the human problem thus far is highly individual and anti-system, riddled with ineffective processes, "communication problems," and "culture issues" that we believe are someone else's problem that we can do very little to fix even as leaders. These are the root cause of quality issues, and without looking at the whole system—and indeed, creating a whole organization capable of looking at the whole system—the quality issues will continue dependably.

The final issue is that when it comes to people problems, everyone has their own beliefs. Ideas are easy – ideas can be explained, processes implemented, software designed – but beliefs are hard. If I were to write to a VC outlining these ideas (which are W. Edwards Deming's ideas circa 1950, Joseph Juran's ideas circa 1980, and Peter Scholtes's ideas continuing into the 90's, and the foundation of the current Lean movement in software, education, manufacturing, and healthcare), what would really change? I can make a convincing argument, but I would be in for a 1-3 year conversation about beliefs about management and people, depending on the person and their adaptability and openness to new ideas. It's much easier to simply do it, and I'm trying to do as much as I can in the context of a growing company at present.

Plus, there are companies who are already into this stuff. Pluralsight integrates Deming's philosophy into their management and onboarding. Check them out.

Someday I will start my own company, and these concepts will be at the foundation. For sure. Until then, I'll try my best to shift some perspectives.


As a boots-on-the-ground guy, it was intended to be a little rattling.

I've seen soooo many snake oil proposals, and wishy-washy garbage that amounted to "just do a better job", and I've kicked out a couple of consultants who wanted money (and one wanted a process patent!) for a scheme that amounted to "track team member progress on a whiteboard in your common area". Too many people with not enough background (or bad motives) think they have the answer.

So, I'm jaded. And allergic to rhetoric, because so much of it over the past 30-40 years has been vacuous bullshit.

This stuff is hard, no fooling. And rhetoric won't solve it. I'm happy you're actually doing something about it, and realize that it's difficult, and I honestly hope that you do well.


Hey, I appreciate this response. It's easy to become jaded. I'm often jaded too, and it takes everything I can muster to keep trying to improve things. Some days it's all I can do just to do my job and shut up the systems thinking part of my brain that wants to look outside it.

There are a lot of great ideas out there. What matters is what we do. Good luck, I wish you the best as well.

For what it's worth, this stuff is really great, and a good mental model for the reality of organizations. I highly recommend Peter Scholtes' book, The Leader's Handbook.


It's amazing you have to justify this viewpoint.

There are books precisely documenting the correlation between quality, speed of execution and lower costs. http://ptgmedia.pearsoncmg.com/images/9780132582209/samplepa...


Exactly. The resistance to change is stunning. It's an ingrained psychology of individuality continuing to dig our management hole for us. Ignore the system at your peril.


> The resistance to change is stunning

If you ask a bunch of analytically trained people to ignore all the evidence in front of them, you should expect this reaction. Especially if you tell them, to paraphrase, that all they need to do is change their culture and all their processes and everything will be rosy.

Pointing to books written by quality consultants isn't going to help your case much either I'm afraid.

Which is a shame. If you've ever been in an organisation that has this deep-seated view of quality, it is quite an eye-opener. Of those that I am familiar with, only GE was really able to take it beyond manufacturing, and it is definitely the only one where I saw it applied to software. It felt a little "religious cult"-y at first but was pretty impressive nonetheless.

Your focus on Edwards Deming is a little unfortunate too, as I would expect that most people with an engineering bent will look at his work on SPC and, rightly, consider it inapplicable to software. It requires a high level of repetition to be valuable and software development doesn't have that.

His work on organisations could be applicable, but it would require most companies to fundamentally change the way they approach almost everything. Since it isn't necessary for them to do this, I can only see it happening if it becomes necessary. A special set of circumstances brought it to prevalence in manufacturing, and by no means universally. If it is to happen in software, I think it would need an equivalent set.


Yes. Good feedback, thanks. Deming himself said that any company can grow in a favorable market, so it is difficult to argue for better when indeed even running a company in a mediocre fashion can be highly successful as compared to the baseline.

What Deming's organizational view promises, instead, is joy in work. Very simply, that's what I'm really after.


For all you detractors, here's my story, maybe it will help you believe that quality is possible in our industry.

I have been doing professional software development since 1997, and from then until two years ago, I have been a developer on waterfall and scrummerfall projects. There have been deathmarches aplenty and lots of failures small and large with some heroic successes here and there. Overall, I'd say the performance of the teams I had been on has been fair to poor, and the code quality has been generally poor because it was always rushed.

Then two years ago, my manager got on a quality kick and the team was largely with him because we were frustrated with some boneheaded defects that had escaped to production. First, we started to require some unit testing here and there, and code quality started to improve, at least to those of us on the team who noticed such things.

Next, we all decided on a new greenfield project that we would enforce a test-first/TDD rule. Our code coverage was suddenly obscene! Months later, we had no escape defects when we delivered to production on-time. No escape defects was unheard of in our division, and a point of pride for the manager.

The TDD we were doing was largely just a gentlemen's agreement, and there was little pairing. Mainly we all still worked in our little heads-down solo developer modes. So, the manager decided to amp up the agile more. We got TDD training (the "obey the testing goat" video series) and we became much more open to the idea of pairing. The QA team was fired; we ended up with one floating QE engineer. He occasionally pairs with us, developing real stories. An agile coach came to visit us and gave some tips. Got scrummaster training, bleh ...

On the next project, also a greenfield, we were much more confident. Pairing happened very dependably whenever we did any new development. We were less intent on metrics such as coverage, and more intent on writing our unit tests like they were a software spec. Spock Framework for Java is a really nice system for doing this; our tests read almost like English paragraphs. Our defect rate is similarly low like it was on the previous project, but now we have a much better comprehension of the code. I feel like the whole team is communicating better and can understand the subtle design direction changes that naturally happen throughout the course of a project. We just completed our first story mapping exercise for this project; it's really neat to see our plans all laid out on the board like that, and this has been very helpful for our product owner.
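
For anyone unfamiliar with the "tests as a spec" style described above, here is a minimal sketch; it is written in Python with pytest purely for illustration (the team above used the Spock Framework for Java), and the Order class and discount rule are invented examples, not anything from this thread:

    # Hypothetical example of specification-style tests: the test names,
    # read top to bottom, double as a rough description of the behaviour.
    import pytest

    class Order:
        def __init__(self, items):
            self.items = items  # list of (name, price) tuples

        def total(self):
            subtotal = sum(price for _, price in self.items)
            # Invented business rule: 10% off orders over 100
            return subtotal * 0.9 if subtotal > 100 else subtotal

    def test_an_empty_order_costs_nothing():
        assert Order([]).total() == 0

    def test_a_small_order_is_charged_at_face_value():
        assert Order([("widget", 40)]).total() == 40

    def test_a_large_order_gets_the_bulk_discount():
        assert Order([("widget", 80), ("gadget", 60)]).total() == pytest.approx(126)

Spock's given/when/then blocks and string method names push this even further, but the idea is the same: the suite reads like a behavioural spec rather than a pile of assertions.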

There's a lot wrong with our situation, but we're impatient to fix things since we don't have defects to mentally burden us constantly. A one-click build would be really nice, and our overall system design handed down to us from on-high is heavy-handed. We're so agile now we can usually devise a long-term plan to address our issues that everyone is comfortable with. Our product owner is struggling to get better definition around aspects of the project, but we make it possible for him to shift directions - again because the codebase is well-factored and largely free from bugs.

My main observation looking back on my 18 years of coding is that software takes a long time to write. It's really excruciating. Trying to turbo boost a project by overcommitting resources and doing death marches usually doesn't shorten the actual time to delivery all that much in my estimation, but it DOES drive up the risk because people are going to fail on you left and right.

This excruciating slowness to develop software has only become more evident to me now that I'm on a team that is genuinely trying to be agile. Every little feature and every task on the board is SOOOO drawn out and painstakingly developed by pairs of developers. DevOps is a big deal that saps tons of the team's time, but it is important for us to own this.

I just never thought I would see proper resource allocation to software projects, but here it is. My company is spending the money it needs to AND the management, product owner, and developers on the team are all willing to work this way. We took no shortcuts and focus on code quality. Effort put into documentation is fairly light, unless it is intended for the user.

I can count on one hand the number of days I was asked to work overtime in the last 2 years.

The question I have is, if high quality software development is possible, is it a lot more expensive than the old death march/waterfall/lie-to-ourselves-constantly style of development? My team no longer has show-stopper production environment defects, nor a QA department anymore. Surely that has to cut our costs a lot! We do have a dependable cadence, and have recently been seeing our velocity accelerate. We do meet our product owner's goals, but I'd say our progress feels achingly slow at times. Maybe I'm just impatient or restless. I'll need to ask my manager if he can gauge whether we're more or less expensive to run than we were in the past.


Thank you. It is incredible how much resistance there is to the idea that quality is a positive feedback loop of value -- not simply a cost.


Is it incredible? Software development is such a young discipline. It's no surprise to me that the vast majority of actors are winging it here. I have a lot of sympathy for your detractors.

Keep on fighting for quality, it's worth battling over and teaching others about.


I was actually thinking just the same thing. It feels young and not quite mature, doesn't it? We have a long way to go. Same to you. Cheers.


"True systemic quality will lower costs, speed production, and increase value all at once."

When you say "true systemic quality" are you speaking far beyond the scope of what any one company is responsible for?

Otherwise, I suspect those replying to you are correct - if bug-free software can be written, quicker and cheaper than software that winds up retaining some bugs, based on work well enough known for 20 years, we'd be seeing someone out there winning with it.


No, absolutely not. I'm talking about the system of a single company; its customers, inputs, processes and people, and finally outputs.

What do you think is the scope any one company is responsible for? Is there something about the quality of their output that they are not responsible for?

I'm not saying bug-free software is the goal, and maybe I'm too far off on a tangent to still be relevant to this conversation – just trying to get across that bugs come from somewhere, that you should think about what all that "somewhere" is by looking at the complex system surrounding it, and that you should not punish your developers or put them under the metaphorical bridge to hold them accountable for not producing bugs. That is ridiculous and ignorant of the system, and the system between and around people – not the individuals themselves – is where bugs come from.


"What do you think is the scope any one company is responsible for? Is there something about the quality of their output that they are not responsible for?"

They are not responsible for the output of other companies. My question was whether you were speaking to what one company could do, or what was prevalent in the development ecosystem. It seems that you were speaking to the former.

I certainly don't hold a position that there are no gains to be had focusing on the system. I do think it is most reasonable to say we don't know how to attain them in a way that achieves quality (or specifically lack of defects) at the level envisioned up-thread. The ways we do already know to attain that level of quality do impose substantial cost.

(And for the record, I certainly agree that individual punishment of developers is unlikely to get us to any better place.)


I don't mean to say that all code has to be bug free; certainly, I hope the life-safety critical code is. But I think we, as hackers and software engineers, need to consider the ramifications that bugs can have, and then make the decision based on that balance of risk.

Take an anonymous currency system, for example. Handwave away whether or not it's actually possible to build one -- let's say you have a way to create electronic, untraceable money, and let's go one step further and say that there's no personal risk to doing so (e.g., it creates no criminal or civil liability, and costs nothing).

Do you build that, or not?


I've thought about this one a lot: Yes. Definitely.

"What about child porn?" We have that already; not having anonymous cash won't stop it. "What about tax fraud?" Pressuring governments to re-think corrupt, unbalanced tax systems might be a very great good. And so on. I have no question that it would be ultimately better to have this than not, although governments (forever forgetting who is really in charge) don't like it.


"Take an anonymous currency system, for example....Do you build that, or not?"

Obviously you do.

I don't understand why that would be a question.


Even knowing it's likely to be used for buying and selling drugs, child porn, and assassinations?

I personally believe that, even given those potential ramifications, the benefits outweigh the costs -- but my personal balance of those is not everyone's.


Yes, because it will also be used by marginalized groups to buy groceries, by sex workers who would otherwise be stuck paying huge commissions to pimps, and to help hide who is buying anti-government printings. Every tool can be used for evil - your drill is just as capable of drilling through someone's skull as through wood.


This is garbage and you know it. You're choosing to be deliberately blind to an accurate assessment of the consequences. If we took the article at its word, you would be put to death.

Proposals for codes of ethics are about this exact issue - you don't get to build something that provides capability, then put up your hands and say you "didn't know what it was for" when it's used for something bad.


There is a major distinction between requiring that you not commit fraud in advertising the capabilities of your construction, and requiring that you ensure your construction is never used by anyone for malicious purposes. To claim otherwise is fraudulent.


OK,

I think we can all agree that programmers almost never begin a project with the intention of causing harm. All the examples cited involved situations where immense pressure applied by higher-ups impelled a programmer to knuckle under and agree to such approaches.

The solution that is offered seems very specific to now - we take the fall guy who caved in to one sort of incentive and we put an opposite incentive on him to force a different behavior (btw without removing the first incentive). How fucked-up is that?

Obviously, the better, saner solution is removing the existing perverse incentives, giving software engineers more leverage in decisions, punishing high-ups if they don't give software engineers autonomy to make decisions. And heck, execute the CEOs when the bridges collapse. The bucks should stop with them, right?


The problem becomes... what if the CEO is unaware of it? Volkswagen, for example. It is highly unlikely in my mind that some engineer somewhere in the trenches decided, in desperation, to add in a few lines of code to make sure the engine passed emissions tests. But, do I think the CEO of Volkswagen signed off on adding the code? No.

There's a diffusion of responsibility that makes it hard to point to any given person after the fact. How can we arm people – software engineers, but also product designers, project managers, QA engineers, CEOs, and the welders on the assembly line – so they can say, "Hold on, stop, this isn't right!"?


I think that shows the limits of the punishment after the fact approach. The point is to demand that companies create an atmosphere where professionals have some autonomy to make decisions based on their expertise rather than the good of the company. Unfortunately, the trend is things going the other way for virtually all professionals, not simply almost-professionals like programmers.


Until it is the CEO, the CTO, and the VC standing under the bridge, nothing will change.


How would they know whether the bridge is designed badly? What's the reasoning behind your underlying suggestion?


> How would they know whether the bridge is designed badly?

It's their responsibility. If it's not possible for them to comprehend the design and execution thoroughly enough, they'd better make sure they had someone check it whom they can trust to understand it. Like ancient builders, who didn't lay every stone themselves, but were still responsible for the outcome.

Perhaps our standards for responsibility of leaders are a little low these days.


The reasoning is, I guess, that it's usually not engineers that introduce serious problems and vulnerabilities to software, but management and sales introducing unreasonable demands and not caring about the product any more than how much money it brings.


Those of you with a membership in ACM or IEEE: I'm interested in hearing your thoughts on why you keep one.

Those of you who don't, would you consider pledging yourself to a code of ethics if they met your requirements? What would those requirements be?


Shipping with bugs is a fact of life. Living with tech debt is an economic necessity because if you don't some other company will and they'll eat your lunch. The comparison with physical infrastructure doesn't really hold up.

I've been a member of ACM primarily because I wanted easier access to ACM library papers, but not any more. I'm uninterested in turning software engineering into a gated discipline with a professional organisation acting as a monopoly keyholder.

Software as a discipline affects too many areas of life for anything much more than a vague platitude as a code. Components may be used for good or evil in ways we can't control. Even something as potentially evil as Metasploit is also almost exactly the same thing you want to use to test for vulnerabilities (personally I would not write a plugin for it).


If it were possible to commit programmers to a code of ethics, it would be possible to do what most professions do and organize to protect the careers and incomes of professionals. On top of the code of ethics would soon come a refusal to work on any team with H-1Bs, just as doctors, lawyers, and accountants don't work with any peers who haven't passed the same qualification hurdles as they have.

I don't like the idea of extensive central regulation, but it might well be better for programmers' lives.


Economically, putting up barriers for the sake of it (aside from valid issues of qualification required for the public good) is bad policy. It's good for the people who can get the qualification, but bad for everyone else. When you average it out, it's worse for society as a whole (this is roughly implied by the welfare theorems of economics).

Your argument that other professions do it is invalid. In fact the government should be more proactive in ensuring that professions only impose valid conditions on employment, and don't add spurious requirements in order to exclude people from the profession, in order to, as you say "protect [their] careers and incomes".


Programmers have to compete with doctors, lawyers, accountants, actuaries, nurses, dentists, schoolteachers, and pharmacists when we buy a house or seek to influence decisions of the companies where we work. We have to compete with them in income, prestige, bargaining power, moral influence, and credibility with financiers. We have to pay the higher wage rates they have achieved when we need their services.

Unilateral disarmament has already led to a situation where 80,000 unregulated immigrants every year are competing with us while those regulated professions see few or none. We're not going to see them de-regulate in any of our lifetimes, because the theorems of economists don't persuade anyone -- except possibly computer programmers and we don't have any power or influence because we're not united to get any. So the best option is obviously to at least organize to get a fair shake. But so far there are few signs of that.


My ethics are my own. I see no reason to make a public spectacle out of it one way or another.


I think you are putting the cart before the horse here, because current practice of software development is in a different universe than the one in which it is done as responsibly as you propose, and a developer's code of ethics is not the tool to move us there.

There have been projects of the sort you propose (the space shuttle software comes to mind), but they are few and far between, and it is simply not possible to quickly change the industry in general without completely replacing it. Putting aside the technical challenges, it simply is not economically feasible - we don't have the resources.

The situation was very different with the Quebec bridge. At that time, the practice of quantitative engineering had advanced to the point where large, safe bridges could be reliably built, and all that was needed to make it happen was to deal with some human problems leading to sloppiness. The desired outcome was within grasp, and the public demanded it. Until the practice of software development reaches those points, a code of ethics is not going to make it happen. The department store collapse shows that engineering ethics need to be supported by society to be effective (and by support, I mean more than just lip service, and the support has to come from the people who set the agenda.)

Edit: Let me explain further what I meant by 'quantitative engineering'. Bridge designs are analytically verified to ensure that they meet a formal set of safety requirements before they are built. In contrast, almost all software is built and then tested (and fixed, and tested...) against an informal specification (even in iterative and agile methods, the building of parts precedes their testing, as it must.) Analysis is relatively much less important, and is generally informal.


If it meant having a job or not, sure I guess.


And this is the problem in a nutshell: "I'm okay with trying not to hurt people, as long as it pays better than hurting people does."

I wonder if the real difference between software engineering and other engineering disciplines is that they have a culture of responsibility--of saying "hey, public safety is more important than money"--and we emphatically don't.


What if sticking to it meant that you'd have to resign? A code of ethics that you have to sign for work is one thing. A code of ethics that you hold above employment is quite another.


Programmers are well-paid and have excellent employment prospects. In the off chance I did encounter an unresolvable ethical issue, it would not be very hard to resign and find new employment elsewhere. Given how much better off I am than most other workers in this country, I think it's a small sacrifice.

To address the thread topic, I'm a P.Eng, ACM member, and IEEE member.


Are you a P.Eng in software, or something else? (I know there are only a few places in the United States where you can even challenge the PE exam as a software engineer).


I'm a Canadian. My degree is electrical, but my field of practice is software engineering.


On the spot? Probably not, no. I would begin looking for work elsewhere though, possibly in a different field.


No! This is the equivalent of saying that kids shouldn't have chemistry sets or be able to play on jungle gyms. Saying that critical infrastructure needs to be "bug-free" is one thing, saying that dating site apps need to be is ridiculous by comparison.

And even in the case of bridges and the like, they do have a tendency to collapse in earthquakes, wars, and the like. Given enough time, most everything has bugs.


> Saying that critical infrastructure needs to be "bug-free" is one thing, saying that dating site apps need to be is ridiculous by comparison.

I think you are right, up to the point of security. The problem comes when the handling of customers' credit cards is treated as if it were just a dating app.


An old observation, but still relevant: We really have no liability in our profession, compared to other licensed professions. Why does software quality suck? Because our customers almost never sue us if the work we do does not meet expectation.


I think this is the problem. Getting people to care about software quality would also make them more likely to prefer more open (e.g. open source) designs.


> Unlike our brothers and sisters in the other engineering disciplines, though, software engineering does not have the same history of mandatory formal education.

Was there a time when other engineering disciplines were not formally schooled? Did they develop into having formal education or were they born that way? Is computer programming moving in the same direction via organizations like the ACM?

Can we learn from other fields who have in fact experienced similar issues on some level? Do we need to reinvent the wheel of creating an effective culture that rewards all the values humans desire? What are those? Success, ability to express oneself, ... ?


If anything, with the advent of bootcamps, software engineering is moving away from formal education and into trade schooling. Personally, I think this is a step backwards for the discipline.


I resent that a little bit.

I'm a boiler-maker / welder by trade (metal fabrication). My trade certificate actually says "Engineering Tradesperson". I operate a laser cutter, and have written scripts to automate some of my computer related tasks, and have a rudimentary understanding of Python. I've also worked at an ISP in NetOps doing physical security and infrastructure.

I live in Australia so I can only speak for the system we have here: trade school is formal education, the on-campus component of trade education is delivered throughout the nation by TAFE[1] campuses. We are formally educated to build things that don't kill people, and can certainly be held responsible if our actions or omission of action leads to injury. We are expected to escalate anything we see on workshop drawings that could be problematic. Each of the tradesmen in our workshops are required to perform a weld test every six months, which is examined using ultrasound techniques, to ensure our work meets or exceeds AS1554[2]

While us tradespeople aren't the ones engineering the bridges, we are the ones building them.

Comparatively, the software development discipline is still in its infancy, whereas the construction trades have been around for millennia. Structural engineers and tradespeople build physical structures to withstand Category 5 cyclones, but my bank (in the top 10 globally by market cap) can hardly go a month without some software system failing.

The advent of bootcamps is probably a side effect of 'coding' becoming relatively easy compared to its more esoteric history, with the advent of high-level languages like Python and Ruby. Or maybe that stuff about the average IQ increasing over time is correct, and more people have the capacity to understand writing code. Probably both.

Another comment here spoke about TDD (test-driven development); that's probably the closest we come to the physical engineering disciplines' destructive testing of random batches of building materials.

1. https://en.wikipedia.org/wiki/Technical_and_further_educatio...

2. https://infostore.saiglobal.com/store/PreviewDoc.aspx?saleIt...


Please accept my apologies.

With the current state of bootcamps in the US, I feel it's a stretch to call them formal education. They are at best short training programs. There's no accreditation, no accrediting body, and no real quality control beyond the press. That the field has few widely accepted standards does not help matters.

Personally, I suspect that the advent of bootcamps in the US is a fad driven by the explosion of the tech sector and the amount of time it takes to train a computer scientist. I suspect that in ten years, they will be much less popular as people who only know Ruby or PHP drop out of the field. It's hard to have a multi-decade long career in computing if you don't actually understand computers.


There's no such thing as bug free code. So yes, I've deployed code I knew had bugs. And if I took twice as long, and spent the extra time checking, it would still have bugs...


An effective code of conduct requires a stable environment, people knowing each other, and so on. There is no way it can be implemented in a profession that's booming at the rate programming is.


Whoever claims that he never saw his software deployed to production with bugs has never written anything of any complexity. Adding "... he knew about" to that statement softens it a little, but not too much.

The problem with comparing software engineers to civil engineers lies in the ability of the latter to constrain their users, e.g. by stating the weight limit for a bridge. Try to do that with software and enforce it.


> If we were required to "stand under the arch", as it were, how many of us would have pushed back against things that our employers asked us to do?

We can't achieve this consistently with privately owned software. Software must be public and transparent to have consistent accountability.

In the case of Grindr, for example, I assume we don't know exactly who wrote the original code. And the company would probably protect that information.

I believe those who contribute to open source software can and do subscribe to the author's desired level of accountability specifically because they know everything they write is available for public scrutiny for eternity.


Regarding the various calls that it should really be the CEO who gets executed, I'd wager those ancient bridge builders probably were the "CEO" of the relevant bridge building operation. Specifically, they probably managed substantial funds obtained from some royal and had plenty of opportunity to trade off between personal profit and structural integrity.

Calls to transfer the idea to today's lowly programmer drone would be akin to an ancient bridge builder absconding with substantial extra profit after convincing his royal that it really should be the bricklaying crew who line up under the bridge.


Some Mexican drug gangs do execute engineers for fucking up. And they recruit by kidnapping family. Hardball capitalism, for sure :(


140ce? What the hell does that mean?




