Microsoft President: We Need a Hippocratic Oath for Software Engineers (capitalandgrowth.org)
490 points by jkuria on Aug 10, 2020 | 515 comments



I’m not a fan of comparisons to the Hippocratic oath. The greatest risk to AI ethics is not the ethics of software engineers but the ethics of the software engineering process. By the time tasks are handed to engineers, most of the ethical decisions have been made by product managers, designers, and business stakeholders who are focused on their own goals. Software engineers are accountable to their bosses before their users, no matter how high-minded we like to pretend to be. To say it’s on the engineer to do no harm puts them in the tenuous position of doing the job or being replaced by someone who will. That isn’t setting us up for success.

The Hippocratic oath sounds more altruistic than the alternatives, but good legislation, including business audits and incentives, will have far more impact than a software engineer swearing they won’t be evil.


In sufficiently complex systems, you can always blame someone else if you're motivated to do so. https://www.youtube.com/watch?v=IssR_J0QWr4

But ultimately there's always a software engineer involved in the creation of software - and that's not true of any of the other roles you mentioned. Since software engineers are necessary and sufficient to produce software, they should always be held responsible, and any oath should fall on engineers.

> To say it’s on the engineer to do no harm puts them in the tenuous position of doing the job or being replaced by someone who will.

Well, yes - if there were no tradeoffs there would be no point in having an oath to begin with. But there are software engineers today, including some on HN, who do things more harmful and unethical than medical malpractice, and they are personally culpable for the decision to do so - just as their replacements would be if they refused. I would also like to see laws criminalizing those individual engineers' conduct - maybe you're alluding to the same thing? - but an oath is a good start.


So management bears no blame for requiring illegal work be done, on pain of termination? Said another way, engineers now need to be technical and legal experts in the business domain?

(Remember employees in the US depend on the company for health insurance. Saying 'no' could cost a lot more than just one's position.)

Most software engineers are not like doctors. We have little autonomy over what is created. Our responsibility is primarily the how. And with devops, sometimes the actual deployment and maintenance itself.


> Said another way, engineers now need to be technical and legal experts in the business domain?

Consider something like the 737 MAX debacle – did the programmers writing the MCAS code actually have enough aviation domain knowledge and understanding of where the component fit in the overall system to realise it was a threat to people's lives?

I don't know, but my guess is the most likely answer is "No".


From my limited information, the MCAS code was primarily causing problems in association with incorrect readings from damaged sensors. Of course, one could argue that this is an engineering failure because the MCAS failed to account for wrong sensor input, but when you consider the legal implications of an MCAS fallback, there is actually not much that can be done on the software side.

The MCAS is an optional component that reduces certification and training costs. It is definitely possible to fly the plane without accidents even with a disabled MCAS. So why can't the MCAS be turned off automatically when sensors fail? Because that changes the classification of the plane and therefore requires pilots to be certified for a new machine and to receive new flight training for both MCAS and non-MCAS modes.

If the software engineer were under a Hippocratic oath, then he would have to refuse to build the MCAS entirely, but not because the idea of an MCAS is inherently unethical. No, he would have to refuse because the company he works at wants to use the MCAS for an unethical purpose (namely, to operate and hide the existence of MCAS even when it is unsafe to do so).

This is basically a reverse audit, but the software engineer has no authority to conduct such an audit, and even if he were allowed to, the business has no obligation to give him the necessary information to determine whether the MCAS will be used unethically.


> he works at wants to use the MCAS for an unethical purpose (namely, to operate and hide the existence of MCAS even when it is unsafe to do so).

You think a programmer, handed a spec and asked to implement it, can be expected to know that their employer (or the employer's customer) wants to use it for an "unethical purpose"?

Again, I can't know for sure, but I doubt the programmers who wrote MCAS (who most likely didn't even work for Boeing, but rather some subcontractor) actually knew, or could have known, how the code fit into Boeing's larger purposes.


With all of that being said, were these software engineers (probably subcontractors) even given access to actual MCAS readouts, or, more likely, simulations of expected readouts? These people probably didn't account for this type of faulty readout because the simulator never produced that type of fault.

Most development in these large companies is so compartmentalized that it's next to impossible to see the whole structure from a software engineer's perspective. You need to be at a management level to understand how most of the pieces really come together, which is the only place where one of these "oaths" might have an influence. At that point, however, the selection is so goal-oriented that I doubt whether people would take that oath.


Generally, there is somebody (typically in software assurance or systems engineering) who is supposed to ensure the fidelity of the simulator. Additionally, the hazard analysis or failure modes effects analysis should trace to specific test cases.

Of course, there are all kinds of pressures that make these fall through the cracks. I vaguely remember an article stating that some of these documents in the case of MCAS were not up to date.


What the actual hazard analysis showed is that Boeing did not have the technical insight at the right level.

The HA listed MCAS as "hazardous" rather than "catastrophic". Meaning those in charge of that process document did not realize MCAS had the ability to down the airplane. I know it's tempting to arm-chair quarterback this, but let's assume they should have realized this hazard.

To your point, maybe the programmer doesn't have the systems knowledge to make those calls, but the process is predicated on somebody having both the technical acumen and the responsibility for those decisions. This process broke down, though.


> You think a programmer, handed a spec and asked to implement it, can be expected to know that their employer (or the employer's customer) wants to use it for an "unethical purpose"?

No, imtringued does not.

imtringued wrote:

> This is basically a reverse audit, but the software engineer has no authority to conduct such an audit, and even if he were allowed to, the business has no obligation to give him the necessary information to determine whether the MCAS will be used unethically.

imtringued is saying that it would be impossible for a software engineer to determine whether what they were asked to do was ethical or not.


> If the software engineer were under a Hippocratic oath, then he would have to refuse to build the MCAS entirely, but not because the idea of an MCAS is inherently unethical. No, he would have to refuse because the company he works at wants to use the MCAS for an unethical purpose (namely, to operate and hide the existence of MCAS even when it is unsafe to do so).

One, it's not going to be clear from the request that the MCAS is going to be used in unethical ways.

Since the Hippocratic oath is the argument here, how many software developers want to work in a system closer to physicians'?

- A national cartel controlling membership and licensure: tough luck if you want to hire more developers, because there's an artificially limited supply.

- Mandatory academic training: goodbye, self-taught developers.

- Follow-on training paid a fifth or less of what attending physicians make: I know residents in specialties where attendings are paid $500k a year to start, and the residents are making $60k a year.

- Brutal shift work: residents easily work 70-80 hours a week.

- Toxic leadership: I've heard horror stories of residents being forced to lie on ACGME forms regarding their hours under penalty of being outright fired from their residency slot, which would make it nearly impossible to get a job as a physician (mainly because you'd have to apply to a different residency program and explain your termination).

I know they're not suggesting bringing the entire medical education & training structure over to tech workers, and everyone here likes to think that they're brilliant and changing lives every day, but most of us are just throwing shitcode JS into a computer for 3-4 hours a day for an ad tech company and not much more. The comparison falls apart pretty quickly.


>actually not much that can be done on the software side.

This isn’t exactly true. There are mitigations (both software and non-software) that are expected to be done depending on hazard analysis. One of the items discovered is that Boeing mischaracterized the MCAS hazard (it should have had a “catastrophic” hazard class). In addition, they didn’t appear to follow their own process for the dual inputs required even for the lower severity class assigned. The “optional” part of MCAS was the secondary sensor reading feeding into the software.
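
To make the dual-input point concrete, here is a minimal sketch of what that kind of sensor cross-check amounts to. The names, structure, and threshold are invented for illustration - this is not Boeing's actual code, just the general technique of inhibiting an automatic function when redundant sensors disagree:

    // Hypothetical dual-AOA cross-check; all names and the threshold
    // are illustrative, not taken from any real avionics codebase.
    interface AoaSensors {
      left: number;  // angle of attack in degrees, left vane
      right: number; // angle of attack in degrees, right vane
    }

    const MAX_DISAGREEMENT_DEG = 5.5; // illustrative threshold

    // Returns an AOA estimate only when both vanes agree. On
    // disagreement it returns null, and the caller must inhibit the
    // automatic trim command and annunciate the fault to the crew
    // rather than act on a single, possibly damaged, vane.
    function crossCheckedAoa(s: AoaSensors): number | null {
      if (Math.abs(s.left - s.right) > MAX_DISAGREEMENT_DEG) {
        return null;
      }
      return (s.left + s.right) / 2;
    }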


>The MCAS is an optional component that reduces certification and training costs

No. The MCAS was a "necessary" component for pitch stability - without it, a 737 MAX in a pitch-up attitude would, in the absence of correcting inputs, pitch up further and further until it stalled. Without it, the airframe is uncertifiable, full stop.


Cite?

I'm certain that's not correct, everything I've read on it has said MCAS was specifically a software modifier put in place to allow the plane to respond substantially the same as a regular 737 without the larger engines, in order to avoid having to have additional training for all 737 pilots worldwide.

Most aircraft, in a "pitch up attitude" will increase their angle of attack as thrust is applied. The issue was that the MAX would do so in a more radical way than the regular 737 did, and so the software was put in place to limit that so it flew like a regular 737 as far as the pilots could see.

Conceptually, MCAS wasn't a bad idea. The execution and using it as a replacement for training and not informing pilots of the flight characteristics changes between the models was stupid.


Sure, a variety of references here: https://aviation.stackexchange.com/questions/73132/did-boein...

Although to be fair my summary wasn't entirely accurate - it wasn't that the MAX was outright dynamically unstable with no control input, as I described, but rather not sufficiently stable to produce a monotonic increase in stick force as AOA increases, which can cause the combined system of pilot + flight dynamics to be unstable, since the pilot relies on stick force as an indicator.

> Most aircraft, in a "pitch up attitude" will increase their angle of attack as thrust is applied

This is both incorrect and irrelevant. Most aircraft will climb when power is applied, but will not change their AOA unless the thrust axis is off-center. To a first approximation, power controls climb rate, and stick input changes AOA. Change in behavior under different power settings has little to do with the problem with the MAX. The problem with the MAX is that at high angles of attack - i.e., when the stick is held back, causing the air to meet the wing (and the engines) at a steeper angle - the engines, which are slung forward, start producing lift of their own and produce a pitch-up moment. This means that the further the pilot pulls the stick back, the less hard they have to pull. This is a dangerous inversion that increases the control order of the system, as it breaks the usual assumption that a given stick force will result in a given AOA, more or less.
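
Putting the same requirement in rough math terms (my notation, paraphrasing rather than quoting FAR 25.173):

    % F_s: stick force (pull positive), V: airspeed, \alpha: angle of attack.
    % Certification demands that pull force grow as speed falls below trim:
    \[ \frac{dF_s}{dV} < 0 \quad \text{near trim} \]
    % At constant load factor, lower V corresponds to higher \alpha,
    % so this is the same as requiring
    \[ \frac{dF_s}{d\alpha} > 0 \]
    % The MAX's forward nacelles add a pitch-up moment at high \alpha,
    % flattening dF_s/d\alpha; MCAS trims nose-down to restore the slope.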


Right -- it didn't give the exact same feedback to the pilot that the regular 737 did, which was why MCAS was created. The aircraft is no more or less unstable than a regular 737.

The original 737 does exactly the same thing the Max does with respect to producing a pitch-up moment -- as does nearly every other aircraft. It's just not nearly as pronounced as the Max is.


I'm sorry, none of that is correct. Did you read my link? The aircraft doesn't meet FAA regulations without the MCAS.

>The Boeing 737 MAX MCAS system is there ONLY to meet the FAA longitudinal stability requirements as specified in FAR Section 25.173, and in particular part (c) which mandates "stick force vs speed curve", and also FAR Section 25.203 — "Stall characteristics".


It's exactly correct. The two points aren't in opposition.


How quickly after refusing work will the engineer be fired?


I can’t imagine software engineers staying in a company long enough to acquire it.

For this to work, software engineer compensation would need to change radically.


Maybe that's the point of this oath? Software engineers will refuse to work on these areas, unless demands of the oath can be met.


A doctor refusing to do a procedure because he worries for his patient is seen as a good guy doing what is right; he is, in his opinion, saving a life. In addition, he is trying to avoid the massive cost of a medical malpractice suit.

A software developer refusing a job because it does not meet his ethical parameters is just an unemployed software engineer.


I think one of the issues is that the domain of software is vast. One developer may be working on a basic CRUD app while the next is working on safety-critical code on a vessel going to Mars.

There are definitely areas where the prudent thing for a developer to do is raise a dissenting opinion, if not halting work. What seems lacking is clear industry consensus standards to back up that decision.


If a doctor says no, it's because of legal liability and risk to licence.

For the same reason, it's harder to hire some rando budget doctor, because the field is gatekept by the requirement of a licence, and liability.

You can't magically make Engineering the same without the same conditions. Add barriers to entry that see my pay rise, or have cheap programmers with no liability.


Engineering already has these barriers, they just aren’t required or enforced.

I’ve never been on a project that requires a software product stamped by a licensed engineer. NCEES dropped the software license because there was so little demand, compared to, say, civil engineers who consider a license a rite of passage to career growth.


You need a licence to practice medicine. If there is something called a "license" for engineering but it isn't required for practice, then it's not the same. If we have the same barriers, but not enforced, then we don't have the same barriers.


That’s not quite right. You actually do need a state license in the US to practice engineering for the public except for a few basic instances:

1) you work for the federal government

2) you work under a licensed engineer

3) you work under an industrial exemption

There are differences depending on the state. There has been a more concerted effort to remove #3 recently, due to both political reasons and the technical issues in this thread. Most people performing engineering work under an industrial exemption don’t actually realize it. Again, this is different from state to state. For example, in some states you can’t start a business with “engineering” in the name unless a certain percentage of owners/principals have an engineering license.[1]

What actually happens is a conflation of terms in common parlance. “Engineer” and “engineer” are not necessarily the same. For example, a computer engineer may work under an industrial exemption (due to working in a manufacturing setting) while a software engineer does not. Legally, an “Engineer” claims an explicit responsibility to public safety.[2] Apropos of the headline article, there is a distinction with this difference.

[1] https://fxbinc.com/wp-content/uploads/2016/10/state-by-state...

[2] https://www.theatlantic.com/technology/archive/2015/11/progr...


From my experience, software engineers don't rotate nearly as often in traditional cyclical and defense businesses (auto, aerospace) as they do in the FAANG technology sector. There are a lot of grey beards who weren't so grey when they started.


That's completely the wrong comparison and context.

The reason MCAS came about was because management wanted to try to fudge a larger engine into an outdated design created for a different purpose rather than do the engineering and certification necessary for the new requirements and to update the system.

Management wanted to save money. Of course the engineering leadership did not want to fudge something -- they wanted to do proper engineering. But the people in charge just wanted to save money, and the engineering leadership could not do anything otherwise, even though they knew that just making the engine larger and compensating did not make sense from an engineering standpoint.

By the time it got to the MCAS, that was far down the line of the decision to not do proper engineering.


Which demonstrates the irrelevancy of a Hippocratic oath for engineers. Because the complexity of the system is such that no single engineer understands it all, and thus no single engineer is (or feels) responsible.

Blaming it on management is also irrelevant, since management merely takes the advice of engineers and makes financial/business trade-offs to maximize profit. If the engineers cannot tell that the MCAS system could fail this way (due to the complexity), management will not question it.


So..

If the engineering leadership said "this is not a good idea", and management said "it saves money"...

Engineering failed to convince management. Management didn't have the understanding that it was a bad idea.

Is it now no one's fault?

If management merely takes the advice of engineers (and others who specialize in the things that they do not), and they choose to ignore it because they do not understand the things they do not specialize in, I believe it's reasonable to assume that management is more at fault than engineering. (I'm not sure there is a situation here where any party is faultless.)


Management isn't going to question things, even with engineering warning them it's a problem every day.

Management likes hearing things they like, and simply doesn't hear things they don't like. Then they act surprised when it becomes public.


Hippocratic oath for engineers is irrelevant, correct. But management does not take the advice of engineers. As I said, the engineers wanted to do proper engineering, but management wanted to save a buck, so they instructed engineering to fudge a cheap solution.

It's very common.


That's like saying burglars who break windows demonstrate the irrelevancy of door locks.


> That's completely the wrong comparison and context.

By the rest of your comment, it looks as though it's an excellent comparison. How would engineers taking an oath help the situation?


I'm not saying engineers taking an oath helps; it's about the executives. Maybe something about the thread structure implies that, but I actually only read the comment above.


> So management bears no blame for requiring illegal work be done, on pain of termination? Said another way, engineers now need to be technical and legal experts in the business domain?

This is a red herring. The oath is not needed merely for illegal work. In fact, the more common use cases will likely be legal. It's a common sentiment, but: don't conflate ethics with the law.

> Most software engineers are not like doctors. We have little autonomy over what is created. Our responsibility is primarily the how. And with devops, sometimes the actual deployment and maintenance itself.

This is not a dichotomy - there can be a spectrum. You can restrict it to those who do know what the product is used for, or at least have good guesses for them.

And while not everyone is this way, I wouldn't really want to work in a job for long if I'm not told what the code I'm writing is for. It's not even an ethical concern for me - it just makes for a boring job. Ideally I want people to tell me the problem they are solving and give me some leeway in crafting a solution. Don't come to me with a solution and ask me to implement it.


> I wouldn't really want to work in a job for long if I'm not told what the code I'm writing is for.

But money, dude.


How much autonomy do you think doctors have? They're not in the operating room inventing new procedures. They are following careful scripts, adapted for the intricacies of one particular human body. There are times where they need to qualitatively improvise, yes, but that's generally only when something has gone horribly wrong.

Also, the Hippocratic Oath is not terribly complicated. It basically says: I will not furtively and maliciously hurt people in an abuse of my authority, and I will try to heal them when they are sick. I don't think it's a lot to ask software engineers to agree not to create knowingly malicious software. It actually addresses exactly the problem you describe. If everyone has taken this oath, and adheres to it, there is no "someone else" to do that evil work.

Last point, there is a wide variety of software engineering work out there. Some of it may be mindless of the bigger picture of what is actually happening, but for any sufficiently advanced behavior to emerge out of a complex software product, some engineer at some level has to have some idea of the path they are going down to create or allow that behavior. And every engineer has the ultimate autonomy over how and what is created because it is our hands on the code. If you don't understand that, you don't understand the power of the profession.


> So management bears no blame for requiring illegal work be done, on pain of termination?

No, I wouldn't say that. In many cases, management and engineering share the blame jointly and severally since they both have an opportunity to stop it.

> Said another way, engineers now need to be technical and legal experts in the business domain?

Engineers should know enough about their business domains to understand the ethical impacts of their work. Ethics and law are orthogonal, so thankfully this is generally much easier than being a legal expert.

> (Remember employees in the US depend on the company for health insurance. Saying 'no' could cost a lot more than just one's position.)

Thankfully, the healthcare safety net in much of the US is far better than it gets credit for, and the pay and availability of opportunities for software engineers in the US has generally been quite good. I'm sympathetic to this argument in general, which is one reason I don't think there should be an oath for, like, Amazon warehouse workers, but I'm far less sympathetic for anyone making 5+ times the median US income.

> Most software engineers are not like doctors. We have little autonomy over what is created. Our responsibility is primarily the how. And with devops, sometimes the actual deployment and maintenance itself.

Your responsibility as framed to you by the business is the how, but upstream of the how is the question of whether or not to do it at all. If you contribute to a piece of software, you've tacitly answered yes to that question.


> Engineers should know enough about their business domains to understand the ethical impacts of their work.

This would be more reasonable in the era of 10-20 years in the same company or industry. Needing to job hop every 2-3 years for a decent raise, and software skills applying to a vast array of industries, makes it less reasonable IMO.

> ...but I'm far less sympathetic for anyone making 5+ times the median US income.

Not everyone here or in software makes that kind of money. Some of us in the Midwest--or who aren't as skilled at negotiating--don't pull down nearly that much.


> This would be more reasonable in the era of 10-20 years in the same company or industry. Needing to job hop every 2-3 years for a decent raise, and software skills applying to a vast array of industries, makes it less reasonable IMO.

Yeah, to be clear, I don't think software engineers have an infinite level of responsibility for understanding the ethical implications of their work. If you were a software engineer at a credit rating agency in 2006, and you didn't see the ethical dilemma because you didn't anticipate that contagion would be exacerbated by the shadow banking system to bring down the global economy, you get a pass. But if your prospective employer is, like, locking children in cages, or spreading disinformation on political candidates, you should probably find that out during the interview process.

> Not everyone here or in software makes that kind of money. Some of us in the Midwest--or who aren't as skilled at negotiating--don't pull down nearly that much.

Good point - I'm also in the Midwest and make less than that, for what it's worth. I've naturally had FAANG in mind as I type these comments, and more generally I think salaries for the more unethical roles tend to skew higher.


> Yeah, to be clear, I don't think software engineers have an infinite level of responsibility for understanding the ethical implications of their work.

Yep. That's more the responsibility of product managers, upper management, and chief architects/engineers.


> Ethics and law are orthogonal, so thankfully this is generally much easier than being a legal expert.

I suspect a lot of philosophers would disagree with you that ethics is much simpler than interpreting laws.


Philosophers would also make a distinction between being ethical and being an ethics expert.


You can derive most ethics in a couple of decades, plus everyday experience. Try deriving the law from that.


That's because it's a couple of decades of dealing with ethics. If your everyday life actually involved dealing with law, you would get good at it.


Ethics and law are not orthogonal. Look at the definition from Wikipedia: "Ethics or moral philosophy is a branch of philosophy that "involves systematizing, defending, and recommending concepts of right and wrong behavior"."

Ethics forms basis on which law is built.

And, given that, it is not simpler to make an ethical decision; it is harder. The decision should be worthy of being the basis for a law; how is that simpler than following the law?

Even if you take the definition of an ethical decision from Wikipedia: "An ethical decision is one that engenders trust, and thus indicates responsibility, fairness and caring to an individual." These words can bear negative connotations - if I beat people, I should be trusted that I will beat people, I should beat people fairly, and I should care to beat an individual thoroughly.

Yes, I fully see you talk about principles. It can be seen that it is easier to make decisions from principles. But you can misguide yourself about the application of these principles.


"I was only following orders."

Edit to add: If you are being asked to perform illegal or unethical acts as part of your employment, then perhaps termination is an ideal course of action? Unless of course your personal enrichment outweighs legalities or ethics in your worldview?


And I wasn't implying engineers should be entirely blameless. Everyone has a limited understanding of legal systems too complex for one person to fully grasp. And workers far below the level of decision makers should be judged according to evidence of their knowledge and responsibility. Likewise, those who give orders should bear more responsibility.

All these "companies take on a life of their own" arguments sound a lot like executives priming the pump of potential jurors with excuses. If decision makers cannot bear responsibility because of a company size or organizational structure then we can make some sizes and structures illegal before they stumble/march into devastating incompetence.


> ...then we can make some sizes and structures illegal before they stumble/march into devastating incompetence.

Was with you until this part. Just hold them personally liable if someone gets hurt should they create an uncontrollable system and predictably fail to control it.


Right. My point was in response to excuses being made elsewhere that the nature of large companies means these executives cannot be personally liable. So if we accept that the nature of huge companies is that no one can be liable (I'm not convinced yet), then it would be time for capping sizes or outlawing structures.

Keep in mind the US already has laws around corporate structures and conflicts of interest. (Even if they're selectively applied.)


The nature of any size corporation is to have one person in charge. In terms of assigning responsibility I'd think that works better than the alternative you'd get by breaking it up. Namely a bunch of cooperating smaller firms only doing part of the job each, and able to point the blame at each other.

We heard the "too complex to understand" excuse a lot regarding the pricing of subprime debt. Except a lot of people did understand it was a problem. It's basically the "I'm too stupid to know what I was doing" defense. If we accept that defense and try to make regulation to protect them from failing (as was done in finance back then), we basically allow stupid people to continue to be in charge rather than being replaced as they need to.


If a person is skirting responsibility at the expense of a structure, it isn't the fault of the structure but of the one skirting responsibility.

Structures can and should be changed in this case. But shouldn't be outlawed.


It may be fine in some countries, but saying that you’ll make some organization sizes and structures illegal, barring other criminal activity, smells like a violation of the freedom of association.


Why is that a good freedom?


It doesn't matter if it's a good freedom, the chances of the US repealing the 1st Amendment any time soon are basically nil. You'd have a better chance of getting Apple/Amazon/Google to voluntarily split up their own companies out of the goodness of their own hearts -- it just isn't going to happen.

The only argument that actually matters here is whether or not restrictions on corporate structure actually do violate freedom of association or not.

I'm reasonably skeptical that they do, given that the 1st Amendment hasn't stopped us from enforcing antitrust and monopoly legislation in the past. Yeah yeah, Citizens United and all that, but we regulate companies all the time.

But I'd still want an actual lawyer to weigh in on that, I wouldn't feel confident saying that there aren't limits on how far we can go in that direction.


Antitrust doesn't violate the first amendment, so clearly limits on corporate scale aren't unconstitutional, so the legal defense is insufficient and the moral question stands.


Like I wrote:

> I'm reasonably skeptical that they do, given that the 1st Amendment hasn't stopped us from enforcing antitrust and monopoly legislation in the past. Yeah yeah, Citizens United and all that, but we regulate companies all the time.

> But I'd still want an actual lawyer to weigh in on that, I wouldn't feel confident saying that there aren't limits on how far we can go in that direction.

It doesn't necessarily hold that because one thing is legal, everything is legal. For example, we have 1st Amendment restrictions on threats and libel, but in the US hate speech is still protected speech. 1st Amendment exceptions are generally pretty narrow and specific in the US.

In the same way, clearly some corporate regulation is OK. It does not follow that there's literally no limit on what the government can dictate about how a company can operate. I would prefer to get input from a lawyer before asserting that so confidently.


Because it allows people to associate with whom they choose to. Remove that and you’ve opened the gate to legal racism, legally institutionalized homophobia, banning of religion; the list is endless. The five freedoms are the pillars of our Constitution. Without them, we are no better than China or Russia or even any third-world hellhole you care to mention.


I don't see that even a little bit; your cause and effect isn't explained.

I should phrase it differently. Why is an absolute freedom of association more important than the freedom from being harmed by large associations with amoral machinations? The original argument asks that if large corporations inherently obscure moral outcomes, maybe they are immoral, which is an argument that puts these two moral axioms in conflict. Simply stating that one side wins is thought-terminating; it's important to argue for why it's better.


Morality is highly variable, depending on the observer's belief system. Legality is the only framework that we can establish in common. Ethics comes in second, as it can be established by a group but does not bind those outside the group.


I’m under the impression we already don’t have freedom of association if it’s inconvenient enough. (Please see Portland moms.)


We can take lessons from other engineering domains. Disregarding exemptions, engineers who design and build for “the public good” have someone who is ultimately responsible. A newly minted civil engineer, for example, may be low on the hierarchy but has to work under the direct supervision of a licensed Professional Engineer. That licensed PE is the one responsible, legally and ethically, for not just the “what” but also the “how”. They may work for a project manager, but ultimately it’s their stamp that allows the build.

As someone who works on safety-critical code, nothing irks me more than when people absolve themselves by saying “I’m just a programmer, that’s not my job/problem”. We need to hold ourselves to a higher professional standard.


> Most software engineers are not like doctors. We have little autonomy over what is created. Our responsibility is primarily the how. And with devops, sometimes the actual deployment and maintenance itself.

If you're an actual engineer of any kind, you always have some choice on this. You make architectural decisions every day, and you generally work for places that do, in fact, take your input into consideration. If you don't work for any of those kinds of places, then you are still responsible because you wrote the code to enable it. You can always say "no". There are consequences for that, for sure. You can always quit as well. And it may still get made. But it won't be by you.

And sometimes, that's still better than the alternative.


You are missing the point. The idea is that an average developer at a major bank has a very poor understanding of exactly what the impact of his code will be. He generally has neither the information (secrecy, need-to-know basis) nor the understanding of the financial system.

On the contrary, a doctor's decision affects the life of an individual patient in a very clear and understandable manner.


Just check out the cases of the VW software devs who added code for emission test cheating. Check out the case of the dev who was copy-pasting code at Toyota.

Illegal stuff is illegal, and not knowing the law is no excuse. For your own safety and good, you'd better have some grasp of the legal stuff in your domain. You don't have to be an expert.


[flagged]


I don't think the drone operator is a good example. There are situations in which unethical things must happen, no matter what. If wars could be fought without killing we'd be doing that already.


That’s called peace, and most nations are doing that already.

Just not the minority of violent ones.


> Since software engineers are necessary and sufficient to produce software, they should always be held responsible, and any oath should fall on engineers.

That's not true in any way. Lots of software is written by people who don't even have a degree, others by some who have a computer science degree, but not an engineering one, etc.

The other issue is that software is rarely unethical. The unethical bit often comes from the way it is used.

And I'd have to agree with OP. In an idealist world you could assume software engineers would all be ready to quit their job at any sight of an unethical affair, even, say, launching something to production with a known vulnerability, or without a rigorous security review process having passed. But in practice, you're not going to achieve this result unless you put in place a framework to incentivize software engineers towards being ethical.

If you allowed them to sue their employer, and made it so they more often win the lawsuit, for asking them to build something unethical, or for insisting that they do so even after the SE said it was unethical, or for retaliating in any way against an SE refusing to build something on grounds of ethics, then you'd maybe start to see results. Otherwise, it won't happen, and you've only created a scapegoat, making it even easier for companies to push for unethical software, since they can now just blame the SEs they coerce into building it anyway.


The problem is, software can be so complex that it's possible to make it so no one programmer can understand the whole picture. The tasks can be divided so the individual programmers are given orders like "do X in condition Y", which by themselves seem harmless and lawful, but combining them leads to malicious behaviours.


This is precisely the problem. Few physicists knew they were all collectively building a nuclear bomb, because the big picture was well compartmentalized. Feynman only found out because he picked the locks of his colleagues to piece it all together.


Just finished “Atomic” by Jim Baggott, a history of the development of the bomb. From my reading, lots of the physicists knew. They pushed forward as they didn’t know the extent of the German project (which had stalled due to lack of resources and a belief that, while possible, a bomb wasn’t practical due to the need for a large amount of material). The safe-cracking was for amusement.


Can you point to the part of the book where it is stated that only a few physicists (not including Feynman) knew this? I remember it very differently. From my memory:

- lock picking was for fun and didn't gain info

- some warehouse workers didn't know about the critical mass of uranium and stored the stuff too close together


Ah, maybe you're right. Memory might be foggy. Time for a fresh re-read!


Maybe you meant the part where it is clarified that the weapon was not developed against Germany but strategically against the USSR (if I remember that part correctly).


This is my recollection too.


I do not think this is accurate. Where did you get this version of events?


I'd take the oath if there are no more dumbass interviews, I get paid like a surgeon and work 20 hours a week.

Until then, management falls on the sword, thanks.


How about a compromise then? Software engineers are responsible for the negative consequences of the software. Any and all responsibility they take is then also shared with every person that is above them in the hierarchy.

E.g., a developer does something, society finds this unethical and punishes them. The developer's boss, the boss's boss, the boss's boss's boss, etc., up to the CEO all get punished in the same way. Furthermore, to avoid companies trying to shield themselves from this by putting their developers into a different company, it will apply to software that you get from someone else too.

Suddenly this doesn't sound very appealing anymore, does it?


> Any and all responsibility they take is then also shared with every person that is above them in the hierarchy.

Currently, if I write software that performs illegal actions -- let's say software that allows me to use unlicensed Adobe products -- at the request of my boss and their boss, all three of us would be legally liable.


It seems you're conflating illegal and unethical.


Illegal and unethical can be the same -- they don't have to be, but it's not wrong to discuss cases where they are.


That makes no sense. Management just needs to decide on an ethics standard for the entire company and install an audit process that maintains that standard. That's the entire problem. Don't ask a handful of employees to do some charity.


I think something should already kick in if you create tracking pixels that read canvas data to identify users, or generally work on fingerprinting. Especially if it is for a purpose as "benign" as advertising, an industry that is notoriously toxic and would have no problem selling every kind of data it gets its hands on. It is fine to generalize it that way, in my opinion. These techniques directly conflict with the spirit of the law in most countries regarding privacy.
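
For anyone unfamiliar with the technique: canvas fingerprinting boils down to something like the minimal sketch below. It uses only standard browser APIs; real trackers combine many more signals, and the drawing commands here are arbitrary examples:

    // Minimal canvas-fingerprinting sketch (standard browser APIs).
    // The same drawing commands rasterize slightly differently across
    // GPU, driver, OS, and font stacks, so a hash of the pixels acts
    // as a quasi-stable device identifier - no cookie required.
    async function canvasFingerprint(): Promise<string> {
      const canvas = document.createElement("canvas");
      const ctx = canvas.getContext("2d")!;
      ctx.textBaseline = "top";
      ctx.font = "14px Arial";
      ctx.fillStyle = "#f60";
      ctx.fillRect(0, 0, 100, 30);
      ctx.fillStyle = "#069";
      ctx.fillText("fingerprint \u{1F600}", 2, 2); // emoji stresses font fallback
      // Hash the rendered pixels into a compact identifier.
      const bytes = new TextEncoder().encode(canvas.toDataURL());
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }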

Aside from that, the quantification of attributes/properties of people can have negative implications for many people. Oversharing is a problem on the net, but at least here people just endanger themselves.


> But ultimately there's always a software engineer involved in the creation of software - and that's not true of any of the other roles you mentioned. Since software engineers are necessary and sufficient to produce software, they should always be held responsible, and any oath should fall on engineers.

Nah, it's not the same at all. The fundamental difference between creating a program and medicine is that creating a program only has to be done once, or at least only by a few.

Medicine, on the other hand, has to be redone with each new patient. If the Hippocratic Oath works to prevent 99.9% of doctors from doing a harmful procedure, then you've hit a home run. Sure, you will never completely stop some bad egg removing a perfectly good limb because a patient suffering from xenomelia offered enough money. But who wouldn't call a thousandfold reduction a huge win?

We demonstrably have the 0.1% of programmers who are willing to break any oath. They make malware, and willingly take out Sony as mercenaries because Kim Jong-Un got pissed off at a movie. All that 0.1% has to do is write the program once. Thereafter you are not trying to discourage hordes of highly skilled professionals from doing it again; you are trying to stop a legion of dark net operators copying the thing and selling it to anyone. An oath is a waste of time under those circumstances.


Here's how this oath thing would play out:

1) New regulation forces some branches of software engineering to have some type of oath.

2) Now some software jobs can be done only by oath takers.

3) A well-paid and powerful new caste of software engineers is born.

4) They are highly paid and have a powerful lobby working for them.

5) The oath takers become very skittish and only work on jobs with minimal risk. The ones that do screw up have an armada of lawyers, because of course they have a new association with deep pockets.

6) Innovation stalls for a while.

7) Big corps start outsourcing some of the oath-taking jobs. These engineers are not bound by the same regulation. Screw ups happen, people die at some point.

8) Maybe we should have the outsourced engineers also take an oath? Back to square 1

This is exactly what I found happened with medical doctors in Canada (I don't know about the US). Not saying doctors are not doing a good job, and I can't imagine the stress and pressure they operate under. But suing for malpractice in Canada can be challenging, to say the least. I have a personal account of a family member who was grossly mistreated, and all the doctor did was change hospitals; nothing more than a slap on the wrist.

https://diamondlaw.ca/blog/how-canadian-law-discourages-pati...


> But there are software engineers today, including some on HN, who do things more harmful and unethical than medical malpractice...

I'm having a hard time trying to find examples of this, outside the field of armament development.

And in those fields where a software failure may result in death, e.g. aircraft development, proof of a software engineer willingly causing it would likely result in jail time already.


The big question is: who has ultimate visibility on the consequences of a particular project? Very frequently software engineers are asked to work on projects where they only know one side of the picture. The executives in the company are the ones who know the ultimate context of what they're doing.


With a combination of local engineers, remote engineers in other countries, mechanical turk and some sleight of hand, I wonder if you could craft a nefarious project where nobody knows the whole picture.


Not sure you understand how software is made. A programmer doesn't decide what to write or when to write it, and they are lucky to be included in how it's made.

Programmers get specs and write programs to match those.

At no point is it the programmer's responsibility to talk about the moral compass of the project and where it fits into society.

An oath to do no harm? You first need to give programmers the power to decide the fate of projects on their own, the way only a doctor can decide medicine or treatment.


The programmer still decides whether that code gets written, since they’re the one writing it! If you write or review a piece of software, even if the spec was written by the PM/business, you’re endorsing whatever that spec says and all of its ethical implications. “Just following orders” is a famously poor defense at this point.


Except that the start and the goal are so far apart that you can legitimately claim you can't see the actual harm in writing something.

You write a tool for, let's say, recognizing faces. Will it be used for logging onto a computer? Tracking dissidents? IDing corpses? Who knows.

What if you start with something 100% ethical, but your company pivots to an unethical application?


> You write a tool for, let's say, recognizing faces. Will it be used for logging onto a computer? Tracking dissidents? IDing corpses? Who knows.

I mentioned this in another comment, but I'll say it again:

Irrespective of any legal/ethical concerns, yes, I would like to know! If my boss just came to me and said "build a facial recognition system" I certainly will ask how it is going to be used. Not because I care about ethics, but it's a basic aspect of the job. You can replace "facial recognition" with "CMS" and I'd still ask.

If they tell me the facial recognition is for logging into computers, and then later decide to use it to track dissidents, that is a different concern. But I'll at least ask!

> What if you start with something 100% ethical, but your company pivots to an unethical application?

If they pivot after my work is done, I won't feel responsible. If they never used it for the original application and pivoted to this, I may get upset and quit, but my conscience would be clear.


> If they pivot after my work is done, I won't feel responsible.

So if you invented dynamite you wouldn't feel responsible for its use?

But, let's make it a bit more personal. You write an awesome OSS yaml parser. It's so good that the GFW of China uses it as a main component, and this gets published in the news.

What would you do? Nothing you did changed, but suddenly your work is powering an unethical component.


Dynamite is a good example of why the philosophy is complete bullshit. How about you blame the stupid evil fuckers who started all of the futile wars to try to get rich in the quagmire that was European geopolitics, instead of the person who made explosives safer?

It is the same sort of stupid blame shifting involved with the "Hippocratic oath for X" nonsense. Oaths are majorly outmoded in the zeitgeist anyway, because everyone recognizes lies are commonplace.


> How about you blame the stupid evil fuckers who started all of the futile wars

And I fully agree. Expecting people to individually bear the burden of "some oath", is a fool's errand.

My point was software on its own, much like a fridge, is amoral. You can use it to store your groceries, or you can use it to store corpses.

That said, there are some extreme cases (like a gun) that have very limited non-violent uses. And IMO, that should be regulated, instead of depending on people Doing The Right Thing™.


> So if you invented dynamite you wouldn't feel responsible for its use?

I would if I were inventing dynamite, but that's not what this scenario is.

A person working for a knife manufacturer need not worry about it being used for murder, as that's not the primary use. And facial recognition is a lot less harmful than even that.

Trust me: I work for a company that produces certain goods used for all kinds of good and nefarious purposes depending on who buys it. My conscience is clear.

> But, let's make it a bit more personal. You write an awesome OSS yaml parser. It's so good that the GFW of China uses it as a main component, and this gets published in the news.

> What would you do? Nothing you did changed, but suddenly your work is powering an unethical component.

I wouldn't do anything:

1. This is milder than the knife scenario above. Of course I don't care if people use it in a poor way - unless there is a straightforward technical mitigation I could do. In your example, given that the source code is available, that is not an option.

2. There's a certain hypocrisy in releasing something as open source and then complaining about how it is used. If it bothers you, then modify your license!


> I would if I were inventing dynamite

> I wouldn't do anything:

That's hypocritical. Nobel didn't invent dynamite because he wanted people to blow themselves up. He invented dynamite because nitro-glycerin was a horrid mess used in mining.

He definitely didn't have an easy technical solution to problem of people misusing dynamite.

You can either say in both cases do nothing, or in both cases do something.


> You can either say in both cases do nothing, or in both cases do something.

If I lived in a world lacking in nuance, I would agree. I do not live in that world.


Why would your boss tell you about the unethical ways he is going to use software you have written?


Right now your boss has no reason not to tell you - people at GitHub knew their software was being used to support ICE during the time in which families were being separated. People at Microsoft knew that Microsoft had contracts with the military. Google engineers knew about Project Dragonfly.

Right now bosses don’t even have incentive to lie about it because no engineer is obligated to give a shit about the society they live in broadly.


This is a good time to point out that a significant chunk of the population isn't opposed to working for the military, for ICE, or for defence contractors who make weapons; they don't view that work as unethical. Moreover, the origin of Silicon Valley, and indeed the entire internet, is DARPA contracts and weapons manufacturing.

Any oath would either not be taken by those people, would be watered down so far as to be meaningless, or would require the entire industry to refuse to make weaponry. The first and second are ineffective and the third is ludicrous.


This is a great point, and reasonable people can disagree about when an application's abuse or potential for abuse crosses the line. The same goes for pivots or for general-purpose code that's used elsewhere. (Is it ethical to contribute to internal tools at Facebook? What if those tools make other engineers way more effective at doing things that ultimately undermine democratic systems?)

My point here isn't to dictate what software is or isn't ethical, but to argue that if a program is unethical, its ethical implications are the responsibility of the engineer(s) who wrote it.


Are you kidding? Do you like to get paid?

Btw I have tried this, and I was just replaced by someone who would follow orders. Then I get to come in after and clean up the mess...


Exactly. I can’t believe all the blame-shifting I’m reading in this thread! It’s as if software engineers are suddenly these powerless victims, lacking agency over their work, only capable of saying “yes, boss, whatever you say, boss!”

If a civil engineer’s manager told them to design an unsafe building or bridge, they’re not going to just say, “Sure thing manager! One death trap coming right up!” It is their ethical duty to build it safely.


A bridge is limited to a single purpose, like an appliance. If you insist on veto power over every outcome of what you build, that means you can only ever build sealed appliances for hapless consumers, not unfettered tools that empower clever human beings who will use them in unanticipated ways. Having sworn a Hippocratic oath, are you allowed to work on LLVM, which half of all evil apps probably depend on?

I could get behind a requirement that code be reliable and fit for purpose, though very few of us have any experience with the formal methods that might get us there, and most don't want to work that way.


Imagine if your manager copied the safe bridge you designed with a magic replicator and now uses that exact same design somewhere else. You tell him that the bridge was not designed for this location and that the bridge will collapse in 5 years. Your boss fires you but you are still responsible for the collapse of the bridge.

Let's go further into absurdity. The engineer kidnaps the daughter of the manager and blackmails the manager into taking the bridge down. Is it ethical to force someone else to be ethical even if it's only possible through unethical means? What if a hero saves the daughter? Will the hero be liable for the collapsed bridge?


Eh, this isn't a great analogy either. If I'm an engineer that develops a single beam in a bridge, is it my fault if someone assembles those beams together in such a way that is dangerous? At which point does a function become unethical? Do you now have to only use software made in your country by ethical developers?

Software isn't a bridge and comparisons fall apart quickly.


Engineers at the beam companies just certify that the beam meets its specs.

I'm not sure why software would be any different. Bridges are complicated and made up of versatile submodules, just like software. Some other software engineer eventually designs the "bridge" and selects "beams" for the structure. If those beams fail to meet their specs, then the engineers who stood by them are at fault. If the bridge fails because the beams weren't used in accordance with their spec, or didn't have a spec at all, then the engineers who approved their use in the bridge are at fault.


What if the engineer isn't allowed to certify his beams, to save on costs? That's basically what happened with the 737 MAX.


Then the engineer designing the bridge should probably choose a different beam supplier who can stand by their product.


Wow, I’m realizing what an unpopular opinion this is on HN! Yes, as a software developer you should absolutely be accountable for the ethical concerns around what that protobuf you’re moving from one API layer to another is used for. You’re not a code monkey. Ask, and refuse if it’s unacceptable. I have quit jobs where the ultimate purpose of what I was building was evil.

EDIT:

From the original OP:

> Software engineers are accountable to their bosses before their users, no matter how high-minded we like to pretend to be.

They are accountable to themselves and their own conscience before both their bosses and their users. I understand this is an uncomfortable line of thinking if your employer asks for ethically questionable project work, but I’d argue that if this is the case for you, it warrants career introspection.


There is a gigantic difference between building an unethical software product and abusing an ethical software product for unethical purposes. The developer is not the user. Do you not understand that?


So the people that wrote java.util.List are liable for all unethical software that makes use of java.util.List?


No, of course they're not. Just like the scientists that developed morphine aren't responsible for a doctor later using it to kill a patient.


Sure, same as the LLVM example someone else pointed out. Good points. I’ll qualify my opinion then. To the extent that the engineer can know the ultimate application of their work, he or she should be responsible for ensuring it is being used ethically.

So, the engineer writing a binary search, knowingly working on “Project Orbital Death Ray” or “Voter Suppression 2.1” should know better. I hope we can at least agree on that one.

The engineer writing a linked list or moving Protobufs around for some open source toolset gets a pass because their project, as they understand it, is ethically neutral. BUT there will be that engineer who then takes those tools and integrates them into “Project Orbital Death Ray”. That’s maybe where accountability should begin.

Everyone’s talking about the managers taking the blame and yes they’re culpable too. But at the end of the day an actual software developer’s fingers type the code in. If that developer knows what he is working on, he needs to bear the responsibility, too.


This argument is more akin to saying it's the builder's responsibility to decide whether a bad architectural decision should be built or not. They might bring it up, but it's not really expected that they get to decide.


It's probably not an "either/or" thing.

Although it would be optimal for the management/leadership to not be pursuing unethical developments, a software engineer having the fallback of "I can't implement this in good faith" is another layer of defence (to society).

It would probably also allow for legal pushback against being terminated for refusing to implement the unethical thing.


> Software engineers are accountable to their bosses before their users, no matter how high minded we like to pretend to be

I've quit jobs in the past because of ethical concerns about the way in which those above me have been acting. In one case this involved bribery of senior government officials to push through a project that put at risk the privacy of hundreds of thousands of people.

If you go along with shit like that, you're an accomplice and share partial responsibility. As professionals we have a responsibility to stand up for what is right. It's not good enough to fall back to the lazy excuse of "just doing my job".


And I mean, it is good of you to have a moral backbone, but unfortunately there are many people behind you who will do the same job, and they can be located anywhere in the world.


In the current state of things, that is factually true.

However the same argument could be applied to labor abuses in the textile/garment and shoe manufacturing industry. Most people who follow news are probably aware that about 20 years ago Nike, Adidas and other brands went through a period of terrible public relations disasters, after the working conditions in some of their shoe factories in developing nations were exposed by journalists.

The argument that could have been made at that time would also have been "well but if we don't employ these people, somebody else will just do the same thing with equally terrible labor/human rights violations somewhere else in the world, with even cheaper factories".

The situation today is not great, but it is significantly improved from how it was twenty years ago. There are third-party neutral inspection/oversight agencies. Companies in the garment industry are forced to make public commitments to labor rights and reasonable working conditions, and to allow external auditing. They can't just hand-wave away the problem and say "but if we don't do it, someone else will..."


And again, the analogs in software are much messier.

A shoe is a physical artifact. It must be made somewhere out of something. Software is much more flexible. If I tell you to build an 'ethical' piece of software with an extensible API, it will only take a tiny amount of work to make it do something unethical.

And much like the shoe companies' labor abuses, that had nothing to do with the engineers who made the product line, but with the management driven to ever lower costs.


Also, unlike a shoe, the management of a company with ethically produced/designed software could fire all of the original developers and sell the company and its IP to an organization that would make API/data storage changes, and it would suddenly become something much more harmful.


It's very easy for management to not disclose unethical intentions. All they have to do is nothing.


The more abstract the problem, the less people care. Police suffocating someone to death > 1,000,000 people being subtly spied on. "I got nothing to hide", after all.


The farther away and more abstract the problem, the more difficult it is to get people to care. Trying to get Americans to care about what's going on with prison camps in Xinjiang province right now is difficult, if not impossible, outside of the subset of people who deeply care about the issues involved.


In this specific example, say we do care: then what? What will it change? (Not theoretical change, actual change.)


> "I got nothing to hide" after all

But that's a thoroughly discredited argument.

https://news.ycombinator.com/item?id=21416115

https://news.ycombinator.com/item?id=21923548


Sorry that’s a /s


Ah, got it :-P


> good of you to have a moral backbone

I don't want other people making decisions on morality for me. There are people who don't believe that software developers should develop software for the U.S. government because they don't like the way immigration law is being enforced. I'll make my own decisions about what is and isn't "moral", thanks.


Where did this come from? The parent comment simply commended the grandparent comment for not bribing a government official to push a project with poor privacy for users. I don’t see why anyone would see that as a challenge to their moral code.


Also once you learn how to ignore the ethical stuff it tends to repeat. Just like bad coding habits.

There is no point wasting energy and time around such people if they don't share the same values.

It's not complicated (it just requires some networking) to find, in any org, the characters who will "do whatever it takes".

Then getting them kicked out, opposing them, sidelining them, subverting them, avoiding them are all choices every Engineer has.


> Software engineers are accountable to their bosses before their users, no matter how high minded we like to pretend to be.

I agree to a very limited extent about the hierarchical nature of a typical corporation, but I also disagree. Software engineers at a certain level of their career and with relatively uncommon skills can pick and choose what companies they want to work at. In my opinion people of good moral character and conscience need to be prepared to refuse to accept a position at companies known to engage in activities against their principles. And further need to be prepared to resign if they are asked to do something clearly unethical.

From my particular specialization in network engineering, I would never accept a role at an ISP in an environment where I had to implement something like the GFW in China, or further walled-garden/censorship of the global Internet. It's directly contradictory to my principles. I sincerely hope that the best and the brightest of my colleagues would never choose to aid and abet internet-fuckery by autocratic regimes. If people from my field look at a project and could reasonably say "Vint Cerf would be really disappointed if he saw me implementing this...", I hope they will choose to walk away.


Would you accept any job under penalty of imprisonment for violating grey ethical rules?


Under which nation-state's legal system are we basing this theoretical scenario?


> Software engineers at a certain level of their career and with relatively uncommon skills can pick and choose what companies they want to work at.

What percent of software engineers does this statement apply to?


Being expected to behave ethically is part of being an Engineer, regardless of what your boss expects. Now that we increasingly see real-world negative consequences of the work of software engineers, I don't see why they should be any different.


I think the point is that we can't just rely on individual ethics to enact change. People have bills to pay and kids to feed, if it's all on the man at the bottom to say no then a lot of bad software is still going to be created.


I think making a statement as an industry helps avoid some of this stuff even being asked. With research you can easily find doctors who are brought in to assist with torture or other heinous crimes, but in general I would expect it to be harder to find a doctor to help with my project of creating a more addictive cigarette versus finding a web developer to help me market them.

There's some (not a lot, but some) power in a profession just claiming to have ethics, even if there is no enforcement and change in the hearts of practitioners.

Plus, at some point, we all just have to own what we do with our lives. It's great to advocate for systemic change but doing your own small part is at least as important.


You would lose the ability to practise medicine as a license is required.

Do you want to license software developers?


I think the way Uncle Bob said it was reasonable. We have a choice: either we commit to doing good ourselves, and get out of licensing. Or we fuck up repeatedly, and eventually someone else will force a licensing process upon us. And it will not be a pretty one, because it's unlikely the person designing it will know the first thing about how we work.


I think we should license some software developers - if you're developing software to be used in industries where the people using your software need to be licensed (i.e medical equipment, structural engineering, industrial control systems) then perhaps the software engineers should be similarly rigorously qualified.

But if you're working on video games, or pizza delivery, or really probably like 90% of software then no.


Medical software systems already have legal obligations that fall on (I believe) the CEO of a firm. In the CRO space for clinical trials for example you're often audited by the FDA, MHRA, and others to ensure your processes are documented and followed, your issues traced and corrected, etc. You also have a requirement to fulfil things like 21 CFR Part 11 for records and signatures tracking/management.

Failures of a company responsible for this kind of thing that lead to injuries or deaths can result in imprisonment of (at least) the CEO and possibly others. I'm not sure licensing is necessary here, that's what documented process is supposed to manage (change tracking, development process, etc.)


The argument made is that it's necessary, not that it's sufficient.


Those are the consequences of the decisions made by Microsoft presidents and their ilk, since they are the ones who actually decide what software will look like. But still, it would be convenient, when MS is found doing unethical things, for a programmer to be scapegoated instead of the president; at least from the president's perspective.


In some engineering societies, maybe; in general you may be liable if you construct a building you shouldn't have.

Software is fundamentally different because until you run it, it has no consequences, and even if you run it, it can be contained. I can write a worm and not release it on the world. In that regard, it is more like engineering _plans_. I can draw up plans for a building that is designed to collapse with X number of persons inside -- in fact, I can imagine either of those two assignments being given as an exercise at university.

No reason to make more laws: it should be immaterial whether I chop down the Christmas tree at the local town square or program a robot to do it.


It should be immaterial whether you chop down the Christmas tree or set a pre-programmed robot capable of doing the same on its way.

The programming of said robot isn't the bad act here, it's the act the robot actually performs.

The danger of being castigated for having written something that wasn't used is that we then get into the area of thought-crimes.


Well there is one major assumption error there - that negative real world consequences are only linked to negative ethics.

That is so wrong it isn't even funny. If the car had been invented powered by a Mr. Fusion, the buggy whip makers going out of business would still have been a negative real-world consequence.


Cars made us wealthier but they also contributed to obesity. It's not exactly clear if reducing obesity by banning cars is worth it.


Is behaving ethically not a part of being a designer or a PM or a director/boss?


And, what's more, isn't behaving ethically part of being human?

Aristotle and a whole host of philosophers certainly thought so.

Extending this, why don't we expect everyone in society to behave ethically? But then we get into arguments about what is ethical, because people disagree on that, and people will naturally disagree on what is ethical in software development, which isn't always a problem. There is a diversity of opinion, though some positions are clearly extreme.

One of the issues with a "Hippocratic oath" for software developers is that software spans the whole spectrum of human activity and thought.

Constraining software is nearly equivalent to constraining human thought in more ways than one.


Yeah, the problem with this idea is that it assumes there's one fixed set of right ethical values everyone should agree on. Take for example the actual, original Hippocratic Oath - it required, amongst other things, doctors not to provide abortions to women. There's still a large number of Americans who hold this ethical belief, but I'd venture a guess that almost all the people pushing for ethics in software development hold the exact diametrically opposite view.

A lot of the demands for software ethics get into even fuzzier and more complicated territory, like whether the companies which control our mass communications should use that control to decide which political views people should be able to share and what they should target, whether this is good or bad for democracy, etc. Also, many of them seem to be things that would've been incredibly niche viewpoints even a decade ago. I've seen people in another thread thinking that Google promised not to use e-mail contents to sell ads originally and then snuck that in, but in reality basing ads on the contents of e-mails was part of their business model all along and it's just that no-one cared when they launched because it was so much less obnoxious than the ads on other providers. Somewhere along the way, we got this meme about big tech selling our personal information, and everyone seems to project it backwards in time onto how people felt before the meme.


Ethics are subjective, across time and space.

Want to see an example? Ask Aristotle whether women are the same as or lower than men.


And relevant to this conversation, software can be written anywhere especially if you need alternative ethics.


That's why Smith also talks about a new DIGITAL GENEVA CONVENTION - not just a code for devs - so there is a defined framework for everyone in the chain.

Pls excuse the caps, but too many people seem to ignore this very important part of his argument here.


I thought everyone ignored it because that is what would happen in practice. Like the actual Geneva convention only applying in an unconditional surrender or complete defeat.


Ethics are subjective. Plenty of people working at facebook will tell you their work is ethical.


Are you expecting engineers to quit FB and IG and Twitter en masse? Because I'm not. People like money more than each other.


It wouldn't surprise me if there are people specializing in ethics violations in exchange for lots of money.


There is the problem that everything can be subdivided into a bunch of menial tasks, so no cog in the system is aware of its impact. It already happens a lot in software.

The idea of a Hippocratic Oath reminds me of Asimov's Three Laws of Robotics in "The Naked Sun" (SPOILERS ahead): the detective realises that the normally quoted First Law of Robotics ("A robot may not injure a human being or, through inaction, allow a human being to come to harm.") is actually just an approximation; he argues that the real Law is "A robot may do nothing that, TO ITS KNOWLEDGE, will harm a human being; nor, through inaction, KNOWINGLY allow a human being to come to harm."

This is important because even though robots really try their best, different robots could perform sub-tasks that look very harmless by themselves, but combined kill a human being:

- A robot is instructed to pour this bottle of poison into a carafe of water and then leave the room

- Another robot is instructed to enter the room, take the carafe of water and give it to a human to drink

The human is poisoned, but none of the robots is directly responsible (in the First Law sense). Is the act of connecting the two dots the evil deed?
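
To make that concrete, here is a minimal sketch in Python (all names hypothetical, invented purely for illustration, not from the book): two routines that are each benign in isolation, where only their composition is harmful.

    # Each routine, reviewed on its own, looks harmless; only the
    # composition is lethal. Hypothetical illustration of the
    # task-subdivision problem described above.

    def fill_carafe(carafe, bottle):
        """Robot 1: pour the bottle's contents into the carafe, then leave."""
        carafe.append(bottle)  # it has no knowledge of what the bottle holds
        return carafe

    def serve_drink(carafe, human):
        """Robot 2: hand the carafe to a human to drink."""
        return f"{human} drinks a carafe containing {carafe}"  # no history known

    # Neither routine "knowingly" harms anyone, yet composed together:
    print(serve_drink(fill_carafe([], "poison"), "victim"))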


> There is the problem that everything can be subdivided into a bunch of menial tasks, so no cog in the system is aware of its impact. It already happens a lot in software.

Precisely the issue with the original proposal. Would it have mattered whatsoever if the PhDs developing the Manhattan Project had taken a Hippocratic oath?

I feel like this MSFT executive may already know that swearing engineers to "do no harm" is fruitless after reading the article, but it's still unfortunate that statements like his divert attention from more meaningful proposals.


> Software engineers are accountable to their bosses before their users, no matter how high minded we like to pretend to be.

Isn't that what a Hippocratic oath would solve? They'd be accountable to the oath before their bosses, and that would give them reasonable grounds to refuse unethical work.


How do you guarantee that the boss will cooperate with the software engineers and tell them about all the unethical business practices that are currently happening?


Well if it's something that the software engineers are expected to implement, then they're going to have to be told about it. If it's not, then it seems out of the realm of software engineering.


Eventually it would have to be told to a CTO, a president or vice president of some technological product, a software architect, etc. Software engineers generally need to understand the use case of a product to be able to develop something effective. It would be exceptionally difficult to convince someone to code something that creates fake profiles of people who don’t use your site without anyone having any idea that that’s what’s happening.


... and be an easy way to get rid of any engineer who fails to do the work asked of them. Think you can just get a new one?


The Hippocratic oath for software engineers should come along with legislation that makes it illegal to fire a software engineer for refusing to break the oath.


Corporate strategists blaming software engineers for the consequences of corporate strategy is a fairly brazen kind of blame-shifting.

A system of ethics within the health system is necessary for customers to retain trust in the health industry. It's also strongly aligned with the selfish interests of the workers who must enact that system of ethics. These properties do not neatly translate to software engineering—mostly because the most difficult ethical dilemmas in technology are rarely obvious when looking at source code. The problems with Facebook (for example) are not always inherent in the code; many are only revealed after deployment at scale, when external groups begin exploiting the system.


> To say it’s on the engineer to do no harm puts them in the tenuous position of doing the job or being replaced by someone who will. That isn’t setting us up for success.

This is where a strict licensing requirement, like Canada's P. Eng, can empower the engineer. If you think what you're being asked to do would violate your professional ethics, not only can you decline to do it, but you have a system to ensure that you won't just get replaced by someone who will do it.


A licensing system is just a form of regulation, as alluded to in the comment you're replying to.

And in the end, if software engineers are to conduct themselves in moral and ethical ways, they must be empowered to do so without having to sacrifice their personal wellbeing or livelihood. Regulation, it seems, is the only way to achieve that end.


The necessary regulation is imposed and enforced via licensing. For an example, simply look at every other professional industry.


Indeed, it is the same with Chartered Accountants. Management can make whatever decisions they think they can make, but an accountant won't enact things that are unethical at the risk of their charter and professional standing.


You really believe many people won't violate their oaths and associations and licenses for enough money? Accountants audited Bernie Madoff and Wirecard for years and "saw nothing". Michael Jackson hired a surgeon to pump him full of drugs for $150K a month. Money > Ethics for a lot of people.


No I don't. But it does stop a lot of blatant rule breaking where external parties can see the rules being broken.

Your comment about auditors is such a common misunderstanding that it has a name, the 'expectation gap'. Auditors are not there to detect fraud.


> Software engineers are accountable to their bosses before their users, no matter how high minded we like to pretend to be.

Software engineers are accountable to themselves before they are accountable to their bosses.


The Hippocratic oath is similarly local in scope. Individual doctors try hard not to cause harm to individual patients, but the medical establishment causes massive amounts of harm, by:

- developing treatments for chronic symptoms instead of curing diseases

- being unprepared for pandemics

- making health care unaffordable except through employer plans

- promoting wrong nutrition guidelines for decades after the evidence was in

and more.

To have good outcomes you need ethics at both individual and system-wide levels.


Agreed, the vast majority of software engineering disasters can be laid at the feet of bad management, not software engineers. Let's clean house in management first, just like Deming did when he straightened out Ford.

https://en.m.wikipedia.org/wiki/W._Edwards_Deming


To me this argument sounds a bit like an "only following orders" defence. I'm sure smart engineers, i.e., all of them, can figure out what the application of their work will be.


The oath should be for everybody involved in the process.

After the 2008 mortgage crisis, the Netherlands required everybody working at banks to take the banker's oath, which is mostly about balancing the interests of the four main stakeholders of the bank: shareholders, customers, employees, and society. It's pretty broad, and it doesn't magically fix everything, but it does make everybody more aware of their responsibilities. Maybe software companies should require something similar, where everybody needs to be aware of their responsibilities towards, well, primarily user data, I guess. And that goes not just for software engineers themselves, but for everybody involved in the process.


Where do you draw the line though? Does everyone employed in any capacity at any kind of business have to take an oath?

It seems to me that if everyone has to promise to do no evil, the meaning of such oaths would become diluted.


Do all those businesses have a problem with unethical behaviour causing serious problems in society?

Doctors, bankers and apparently software engineers have a pretty big impact on society, and often in ways that aren't very transparent to most other people. It's quite possible there are other professions that have a similar impact, but I'm pretty sure it's not all.

I think registered accountants also have some sort of oath, again because various stakeholders including society as a whole has to be able to trust them.


I think people assume the Hippocratic oath actually makes a difference. It's culturally important to many doctors, of course, but how often has it hindered the military or the CIA from hiring enough doctors to facilitate a new interrogation program? Many of the illegal human experiments that happened in the US had doctors working on them as well. It's a good guiding principle, but the idea that it would actually make an impact of any kind is debatable. It doesn't matter if 90% of coders follow the oath if 10% or even 5% are enough to handle the demand for oath-breaking.


Agreed. Let's look at it from a simple empirical standpoint: where does the problem mostly arise, and who has the power to fix it?

Organizations that went through a true iterative process to reduce failure rates, like NASA, figured out that they needed to grant real authority to specific domain experts to blow the whistle without facing reprisal or suffering for it. Oaths fix nothing; you need organizational change, and if someone is going to drive that, it's the management in charge.


> To say it’s on the engineer to do no harm puts them in the tenuous position of doing the job or being replaced by someone who will. That isn’t setting us up for success.

If we were to introduce an oath, we would have to take further inspiration from doctors, e.g. having a certification required to do the job, or, failing that, at least having an industry-wide union/guild protecting the position.


How then do you manage things like outsourcing, or the purchase of overseas companies, or multinational mergers?


Kind of funny to hear this from a company known for harvesting tons and tons of telemetry from customers, with no true way to fully opt out, which is pretty damn unethical and probably feeds their AI.


not to mention that software is reusable: a technology might be invented with all the good intentions and ideals... and later be used for evil purposes.


I agree, however not all ethics can be legislated.

This sort of thing works in other professions like medicine because malpractice can cause doctors to lose their license. Same with civil engineers. This changes things because the choice is now quitting or possibly never being able to work in the field again.

Perhaps principal software engineers in charge of life or death software should be licensed for accountability, “engineer on record”.


Doctors don't typically have bosses directing how they treat patients. Doctors have full agency; front-line grunt workers do not.


I don’t think this is true in America. Doctors are limited to what benefits the system monetarily. Healthcare administrators will decline procedures suggested by doctors due to expense.


But that's not an independent fact. That's one of the major goals of medical oaths; they establish that doctors must be able to determine the right course of action on their own, that ethical doctors won't work for an organization that won't give them that agency.


>To say it’s on the engineer to do no harm puts them in the tenuous position of doing the job or being replaced by someone who will. That isn’t setting us up for success.

Obviously, for a Hippocratic-type oath to work, you need the same kind of system for qualifying engineers that you have for doctors, and you cannot allow anyone who failed the ethics board to work as an engineer.


Maybe the doctor analogy is flawed, but isn’t the ethics of a licensed (capital P) Professional Engineer more apt?

They are responsible to their bosses as well as to the public, who are the end users of their designs/products.


This thread is littered with this attitude, that the developers don't actually have a seat at the table where "what to build" decisions are made.

This was not always the case.

How the hell did we let that happen?


Here's a perspective from someone who became a software dev in my late 20s after other jobs. Relatively speaking, if you can write working code and show up on time, you have a lot of leverage in getting and keeping employment, more than in most other middle to upper-middle-class professions. This means org politics has less effect on you, and there's less incentive to get involved.

As a dev you can, but don't have to, think as much about the politics and operations of your org for all kinds of reasons. You are relatively harder to replace so internal politics tends to matter less, and if the org makes decisions you don't like, you can be confident that you can leave and find something else versus the long and often unfruitful process of trying to change an org from within.

Politics can be and often is messy. How often have you heard something like "I just want to build things"? (It's how I feel, for sure.) If you can get paid well to do that, why get involved with a messy decision-making process?


To extend upon this (though personally I feel a little differently than you do, this is inclusive of how you feel as well):

My personal experience has seen individual contributing software developers have little voice in the matter regardless. Outside of "tech"-forward companies they are of little consequence in the larger political structure of a company.

I've seen it and been it—speak up about a desired direction, voice concerns about a decided direction, concerns about faulty legacy software, etc. Those voices, unless amplified by political clout, mean little to nothing to anyone else.

In a lot of organizations, title is everything when it comes to moving discussion.

So my thinking, if we're continuing the comparison to the medical profession, is that hierarchies must follow similarly.

A head of surgery in a hospital ward is going to be a doctor. Hospital directors are going to be doctors. Sure, the CEO may not be, but they rely on the expertise of their directors. If the directors don't follow the same code as the rest of the professionals under them, then they can theoretically impose any ethics they choose, and the onus falls to the IC/surgeon/etc., which undermines the intended purpose of the oath/license/regulation in the first place.


> Software engineers are accountable to their bosses before their users, no matter how high minded we like to pretend to be. To say it’s on the engineer to do no harm puts them in the tenuous position of doing the job or being replaced by someone who will. That isn’t setting us up for success.

The Nazi officers who committed most of the atrocities used similar arguments: "I was just following orders!"

I expect better from a software engineer on Hacker News. You've single-handedly convinced most here - through your weak logic - that such an oath is necessary.


But we still blame the Nazi leadership the most. And the Nazi officers you're talking about were in fact typically management, too.


Pretty much. Those that take the oath would quickly see themselves not hired for certain positions, in favor of those that didn't take the oath.


This assumes the vast majority of engineers are easily replaceable. Given that several tech companies prefer the risk of not hiring a good fit over the risk of needing to fire someone, I think the cost to a company of a significant subset of its engineers declining unethical work (especially engineers senior enough to have a good idea of the broad scope of the software) would make what you suggest difficult to pull off.


Doctors can have their licenses revoked. Software engineers cannot. There's no downside to being involved in unethical businesses.


Agree.

A software engineer is more like a chemist working for the pharmaceutical industry than a doctor treating patients. And chemists typically don't have a Hippocratic Oath. Pharmacists sometimes have their own version, but it is mostly about giving good advice to patients and respecting them as human beings.

But it doesn't stop the pharmaceutical industry from being heavily regulated, and while their business practices are often criticized, the drugs that come out of it are generally safe and effective. Many countries also have regulations making important drugs (ex: vaccines) accessible to everyone.


Don’t sell out to FB or Google or some other “not-evil(R)” corp and you will never have to follow orders from your boss that don’t meet your ethics.


Exactly. An oath means nothing: what you need is skin in the game.

Some health practitioners are literally bought by Big Pharma, by their hospital accountant, etc. How would an oath fix that? Same with engineers or any other discipline.

You need to make sure that everyone in the process has skin in the game. For me it's less about control (legislation) than about responsibility and accountability (assessments, eating your own dog food).


What does “skin in the game” mean precisely in this context?


Having personal exposure, taking accountability for your actions. From Taleb of course.

The case of Snowden shows how badly it can turn out when not everybody in the loop has skin in the game (asymmetry). Ironically, his behavior tells us he was faithful to some kind of oath, but apparently none of his coworkers or supervisors were.


How about a Hippocratic Oath for business leaders? This is shifting the responsibility from management towards the engineers. It's not the engineers who pulled the trigger at Facebook - or Microsoft. They build the weapons. Management fires them.

This is a hypocritic oath. If somebody is acting unethically at MS, then it is management. Think of all the innovation that is not happening because MS is abusing its position. Twice they have killed a universal software platform to preserve theirs: Java and websites. Ironically, they are pushing websites now that the platform has shifted to mobile with Objective-C and Google's variation of Java.

>According to Brad Smith, just like it is the Pope’s job to bring religion closer to today’s technology, it is the software developer’s job to bring technology closer to the humanities.

The Pope is to religion as the President of the biggest software company is to software development. It is his responsibility, not theirs. Or does he see himself as that software developer? I guess it is more "developer" in the Ballmer sense, and he means software engineers.

He could start by handing out software licenses / EULAs that take full responsibility for any damage the software causes, like any other sold product has to. Then, through business processes, management will take care of the ethical issues to minimize risk.


Why isn't this the top comment?

Microsoft executives seem more in need of lessons in ethics than their engineers. Just one example from last year:

>'We did not sign up to develop weapons' say Microsoft employees protesting $479 million HoloLens army contract

https://www.pcgamer.com/we-did-not-sign-up-to-develop-weapon...

>They build the weapons

Talking of weapons, while we speculate about what AI might be used for, Microsoft executives have literally decided to build actual weapons.


Building weapons is immoral? Tell that to the WW2 industrial complex that supported the war.

Not building weapons for the war effort is not always right. That is an intentional double negative because I think it's clearest if you read it twice. The boolean negation of that statement would be: building weapons for the war effort is sometimes right.

>Microsoft executives have literally decided to build actual weapons.

Yep. Literally they did. I'm guessing you think all US weapons are evil because you disagree with all US weapon usage? You have to combine the argument that they are literally making weapons with the fact that those weapons are being used in a way you don't agree with.

Keep in mind that most of these advanced weapons they are literally making are not designed against the current wars you most likely disagree with. They are built, to include AI, to keep pace with advanced threats from other countries. Allowing us to fall behind technologically, due to perceived moral black/white issues of current wars, could lead to a whole new world in 40 years as you make your arguments in a well protected environment. Not researching advanced topics will lead to an asymmetric fight... not in our favor... if the enemy so chooses.

Reference our usage of nuclear weapons. If you think that was evil, then you wouldn't want an evil country / group of people to gain such an asymmetric advantage. If you think it was necessary, then you want to have an asymmetric advantage when it is necessary against an evil group. Yes I recognize the inherent cyclical issue with the above statement. Either way, allowing all people to gain an asymmetric advantage while we just discard all research in hopes that others will follow is ignorant of history - war theory is a thing.


Crocodile tears, and a lot of cheap virtue signaling.

I have some friends who have worked at MSFT for a long time, about 20 years or so. There was a time when they used to talk about open source as if it was cancer (~2011). When MSFT started embracing the cancer, they didn't really up and leave. Now they are all talking about how great this open source thing is.

But even funnier was when they used to complain about Google's rampant user tracking. And then one day they added targeted ads into Windows 10. Did these people suddenly decide "enough is enough" and go and join the EFF? You already know the answer to that.


The website is called "capital and growth" so of course they won't advocate for business leaders being actually responsible for anything.


If you’re in management at an engineering dept/co and make decisions about what is going to be engineered and how that’s going to be deployed you are in engineering yourself and should obviously take the oath yourself.

Not saying I’m in favor of this oath, just that it seems silly to distinguish different roles in the engineering process.


I don't think the exercise of drawing the line between "engineering" and "not engineering" is a useful one here. The actual decisions and the pressure to perform for the job crosses disciplines up at the top of the management hierarchy.

The broader point is that in most companies engineering decisions don't come purely from the engineering department. They are often made as part of bigger projects or efforts. For example, it's probably not up to engineers in most companies whether any of the tech giants sell to the military. If it is, it's up to people who were engineers at some point and might still sit at the top of the "product" part of the company, but who for all intents and purposes stopped writing any code, or even managing anyone who writes code, a long, long time ago.



Doctors no longer take the Hippocratic Oath (because it is incompatible with a lot of difficult situations doctors are placed in).

But more to the point: Why are we trying to shift focus from the wrongs large multi-nationals do to individual software engineers? Plus what would the result be if this "oath" conflicts with a manager's instructions?

Maybe we should start with Microsoft, Google, Comcast, Oracle, and similar taking an oath to do no harm, before we push engineers under the bus for not fighting hard enough against what they're ordered to do.


This absolutely reeks of the corporate effort to undermine action of climate change or environmental policy in general: advertising campaigns being run with a focus on individual responsibility to consume less, save water etc. as a way to divert attention from systemic action which would have an impact by controlling the biggest contributors.


Wouldn't whistleblower protection laws, and regulators with teeth, be more likely to be effective?


> Doctors no longer take the Hippocratic Oath

You have incorrect information. An abridged or modernized version of the oath is still taken upon graduation at most American MD schools.


At the two ceremonies I have attended, the new MDs took this oath: https://www.hospicepatients.org/modern-physicians-oath-louis...

While it is often referred to as "The Modern Hippocratic Oath", I would argue the Lasagna oath contains significant differences from the original Hippocratic oath, and it is worth treating them as separate things.

P.S. I remember because I was like "Mmmm, Lasagna...." both times.


Your first sentence contradicts your second. You also pocket-quoted the original post, removing key context:

> Doctors no longer take the Hippocratic Oath (because it is incompatible with a lot of difficult situations doctors are placed in).

The original Hippocratic Oath is no longer used (as both the original post and you yourself readily admit). Why it is no longer used is highly relevant to this discussion, because someone is calling for a Hippocratic Oath-like thing in a different area.

The fact doctors have moved to a less idealized Hippocratic Oath should be a historical lesson, not something we should seek to emulate.


Or the graduating class writes their own oath


> Why are we trying to shift focus from the wrongs large multi-nationals do to individual software engineers?

Because software engineers are the ones doing the actual work. By imposing an ethical standard on the people doing the real work, the multinational executives must then either accept that limitation or knowingly accept the risks of intentionally violating the spirit of that limitation.


> accept risks from intentionally violating the spirit of that limitation.

What risk? Since it's the engineers, not the executives, who take any oath of professional conduct, there wouldn't be any risk for any executive. All this would get us is a legal framework for throwing engineers under the bus via an ethics commission if they do something silly like blowing the whistle on an unethical decision higher up in the hierarchy.

I'm very much for greater personal responsibility in the field of software engineering. Until not too long ago, I used to work in a field (medical equipment) where accepting the burden of potentially catastrophic mistakes came with the job. But I also know -- based on that same experience -- that personal accountability is meaningless without organizational ability.

Unless this hypothetical "Hippocratic Oath for engineers" is backed by a "Hippocratic Oath for executives", a "Hippocratic Oath for product managers", and "Hippocratic Oath for engineering managers", (edit: or by a legal framework that requires companies to enable it) all it'll do is reduce the PR effort involved in cleaning up a mess like Volkswagen's emission test scandal to pretty much zero by providing an exceptional -- and very mythical-sounding! -- framework for scapegoating.


> there wouldn't be any risk for any executive

This isn't a real problem. I take it you have never worked outside of software. You don't need a law license to be the business manager of a major law firm, or a medical license to manage a large hospital network. How often do such problems occur in major law firms or large hospital networks?

In other words, this is a hypothetical complaint about some future condition that doesn't exist, made in order to avoid accepting responsibility in the present.

EDIT:

Also see: https://en.wikipedia.org/wiki/Sarbanes%E2%80%93Oxley_Act


If a lawyer does something unethical, they could be disbarred. If a doctor does something unethical, their license may be revoked. These end your career.

There are already professional organizations for developers and engineers. ACM's ethics code even includes, "avoid harm." You don't need to be in any of them to write software. We would need to pass laws dictating that Facebook et al can only employ licensed software engineers. Do we really want to impose that level of gatekeeping on the profession?

The important part is that there's an accountability structure outside the company. Without that an oath just consists of words on paper.


> These end your career.

Agreed: https://news.ycombinator.com/item?id=24106617

> You don't need to be in any of them to write software.

Then they are irrelevant.

> Do we really want to impose that level of gatekeeping on the profession?

Yes, absolutely. Everybody benefits except developers who probably shouldn't be writing software professionally in the first place.


The key to lawyers and doctors losing their licenses to practice is that they have professional bodies that will police/enforce the rules. However, they also have a monopoly to provide those services.

I suspect corporations don't want to limit software engineering to licensed practitioners only, as there would be fewer of them available and the cost of said licensed software engineers would increase.


> I suspect corporations don't want limit software engineers to those with licenses

It comes down to money, which includes liability. I suspect licensing won't ever happen in software unless lawsuits increase and licensing becomes a deterrent, at which point licensed developers will cost less after factoring in risk assessments. There are all kinds of unintended benefits for everybody that come from limiting negligence, which pays for itself. As a counter-argument: who benefits most from allowing the hiring and practicing of negligent developers?


> I take it you have never worked outside of software.

You take it wrong, and reasoning by analogy with other fields is a very risky line of logic, because they're not at all similar in terms of internal regulation and professional autonomy.

First of all, in fields like medicine, that kind of responsibility is necessary because clinical autonomy is far wider than any kind of professional autonomy that software engineers have, at any level. And it works precisely because this autonomy doesn't come only with the responsibility of adequate care; it also comes with the ability to say "no" without fearing for your employment or license. Physicians enjoy a great deal of protection (faltering, and arguably still insufficient!) for defending the interests of their patients, and firing someone for it is a pretty risky and expensive affair. Software engineers enjoy pretty much zero protection in this regard.

If you want engineers to bear that kind of responsibility, that's great, and I'm all for it, too. But in order for it to have any use, you also have to give them the means to defend the standards of professional conduct they must now conform to. Are you also prepared to:

* Defer to an external body of experts to decide if a technology is fit for commercial use or not?

* Forbid an engineer from arbitrarily overriding another engineer's bugfix?

* Have product managers defer to an ethics commission, ran by engineers and engineering ethics specialists, when deciding what features to prioritize or introduce? And if the ethics commission says "no" to a feature, then it's a "no" that not even the board or the executive team can override?

* Make it illegal to ask engineers to introduce a workaround with known flaws, rather than the proper fix, without explicitly informing customers about the known flaws?

* Make it illegal to fire engineers for not executing a manager's order if that order violates the engineers' standards of conduct?

* Defer to an external engineering organization to decide who even gets to call themselves a "software engineer"?

Second, one of the reasons why managing large hospital networks by people without a medical license works (more or less...) is that, for example, there's usually no legal way for the CEO to walk into an OR while cardiac surgery is happening and request the surgeon unplug the CPB machine, even if that means killing the patient. Even if there's no internal regulation being broken, criminal law is more than enough to keep that sort of thing in check. There's hardly anything that would currently prevent the CEO of a software company from walking into an office and requesting that an encryption feature be backdoored, for example.

Without the kind of autonomy and legal protection that would enable software engineers to defend such an oath, its provisions would be completely meaningless. What kind of benefits do you think the software industry, and society in general, would get from an oath that's more risky to defend than to break?


Since absolutely every other professional industry employs some manner of uniform credentialing, the only way opposing it in software, where it has not been tried in a meaningful way, makes sense is as an argument from ignorance:

https://en.wikipedia.org/wiki/Argument_from_ignorance

Your bullet points conflate experimentation with production delivery. Licensed professionals in other industries can violate the ethical standards of practice so long as they are not billing for those violations and not representing a client with such violations.

> Second, one of the reasons why managing large hospital networks by people without a medical license works (more or less...) is that, for example, there's usually no legal way for the CEO to walk into an OR

So? I am a software developer and my bosses don't write software. No distinction.


> Since absolutely every other professional industry employs some manner of uniform credentialing, the only way opposing it in software, where it has not been tried in a meaningful way, makes sense is as an argument from ignorance

Or one from experience ;-).

It may help to read all of an argument, rather than stop mid-sentence, too, but who am I to argue with Wikipedia?


The people who actually decide to do these unethical things are the executives, not engineers. Executives are the ones who should be taking Hippocratic oaths.


So? The goal is to eliminate improper behavior not cry about it and point fingers.


So, there’s always a sucker who needs the cash or health insurance.


I hope no one wonders why it's been so easy to tie a person's health insurance to their employment in the "capitalist US" (as opposed to "communist Russia"), and how frustratingly difficult it has been to effect substantive change now that it's been shown to be exactly as constrictive as it sounds.


Reads like an argument for unionization more than a simple "oath."


I'm sure when someone gives you the choice between continued employment and following the ethical standard you'll choose to follow the ethical standard.

I mean, don't get me wrong, there are clear scenarios where I think many of us would choose to lose the job. For instance, I'll go unemployed vs directly causing someone to die. But those slightly more ambiguous scenarios are where we need to be enforcing it on a legislative level and the onus should be on ALL levels of the company (engineering and management).


I've told upper management directly, in front of their employees, that they're breaking the law. That was a fun day; my director directly told me to not be a trouble-maker and to knock it off if I wanted to keep my job. Ultimately, upper management elected to follow federal law, and I was dismissed for no reason a year later.

It wasn't about whether or not I got to keep my job; upper management was effectively disenfranchising other employees with their policy. Who cares about my job if it must come at the expense of my team's jobs?

Moreover, managers aren't somehow better people than engineers. They don't get to be less moral just because they have less to do.


That is the difference between an industry with ethics and an industry without. When ethics are a condition of employment, an ethics violation prevents employment at any employer.

An example is a lawyer having a disagreement with their law firm: they may lose their job, but they are still employable. On the other hand, if that lawyer loses their law license, they lose their current job and cannot get another job at any law firm. In that very real scenario, when there is a choice between employment and following the ethical standard, the choice is always clearly the latter.


This is not realistic. Any H-1B visa holder who doesn't do what they are told will be fired and must move their family back to their country of origin.

SWEs would need to unionize first, to protect workers from being fired/deported for pushing back, before anything like this is even considered. Or tackle the problem where it begins: with the organization.


That's an unrelated problem. A person who follows an illegal order, whether in violation of law or ethics of practice, probably won't retain visa sponsorship long enough for that argument to matter. It also won't prevent them from suing their sponsor for wrongful termination and fraud.

At any rate ethics is a hard sell to an industry whose professionals are completely unaware of what ethics are or how they apply.


I don't understand your argument. Do you think people who follow unethical behaviour are quickly fired? That is not usually the case if the unethical behaviour was profitable. Do you think the legal system moves quickly enough that people won't be deported within 60 days?

>At any rate ethics is a hard sell to an industry whose professionals are completely unaware of what ethics are or how they apply.

This is simply not true. Most everyone with a CS related degree in Europe or the US has taken a course on ethics and knows plenty about it.


> Do you think people who follow unethical behaviour are quickly fired?

They are fired once the house of cards collapses in epic failure or when a scapegoat is needed. Those are both evidence of systemic failures.


Since you're talking about multinationals, since when have ethical standards not deviated significantly between different countries?

Are you now going to ask for more software import/export laws?


No, we are merely the technical hands of our management's will. AS IF WE HAVE A CHOICE WHAT WE DEVELOP! This is an asinine attempt to blame those not in control. The managers and the owners and the corporate board need to have this Ethics Oath, and it needs to be enforced by law.


You have a choice in how it’s developed, and so ethics applies. In all fairness, doctors don’t choose their patients and legal aides do not choose their clients, and yet they are still ethically bound.

All of this fear and crying about management only demonstrates a gross ignorance of what ethics are.


This seems out of touch with the reality of these large firms. Management is pretty distributed, decision-making is distributed, and your manager is likely also an engineer, or used to be one. Many people work on tools that are general-purpose, in different levels of the stack. Someone working on keeping the data center running doesn't deal with what the applications actually do.

Also, people change positions fairly often and avoid work in areas they consider to be unethical. This means that the people in a position to actually make the call are people who don't find it unethical, because the people who would be concerned about it avoided the whole area. Like, if you don't want to work in ads, you'll probably find a job in some other division, or at least not directly on something you consider unethical. The people who came up with AMP (to pick something controversial on Hacker News) were true believers who sold it to management.

But people still care about the company's reputation as a whole, and as a result you get conflict between the people not actually working on the controversial thing and the people who are, but that mostly results in a lot of drama and cynicism.

The politics is complicated. I can't think of a generic oath that you couldn't rationalize your way out of.


Ironically, the "don't seduce people" bit that got axed is actually still enforced and is one of the most common reasons doctors get their licenses revoked. (More specifically, sexual misconduct.)


Maybe, “don’t be evil”.


Depends how you define evil. Is advertising evil? Then Facebook and Google engineers are committing evil. Is working with repressive governments evil? Then Apple and Microsoft engineers are committing evil. That's a sizeable chunk of engineers right there.


Some employees at Google certainly see working with repressive governments as 'evil', yes. https://www.google.com/amp/s/www.nytimes.com/2018/08/16/tech...


Well, it would be interesting to see MS seriously attempt to implement such an Oath in their own organisation.

Personally, I hope they do try. :)


What is evil?


That's why you leave it in your motto, where it is _not_ legally binding, as a reminder to try to do whatever you think the right thing is...

Unless you don't.


We need an Ethics Oath to be required by law for anyone in the C-Suite or Corporate Board. That's where this needs to begin.


> Why are we trying to shift focus from the wrongs large multi-nationals do to individual software engineers?

This "ethics" business doesn't make sense because the stated goals of the people pushing this idea aren't their actual goals. The stated goal of this "ethics" push is to reduce the harm done by software to society. As you point out, it won't work. The actual goal of the "ethics" push is to entrench a certain politically-contentious ideology in tech by branding this ideology as "ethics" and thereby making it immune to criticism.

The only legitimate binding code of ethics is the law. If a practice is harmful, we can all talk about it together and agree to enact a law against it. The "ethics" people are trying to use elevated moral rhetoric to bypass this democratic process, and we shouldn't let them get away with it.


> Doctors no longer take the Hippocratic Oath

Lol. I'm married to an MD. Doctors definitely still take the Hippocratic Oath. Perhaps some places no longer do it, but I am not aware of any of her peers who have not taken the Hippocratic Oath.


tl;dr

Reality: we have virtue signaling.

.... soap box

Because it's an end run. The idea is that if software developers adopted this as a whole, then corporations could not force them to do whatever was later decided to be against the oath. But it amounts to nothing more than virtue signaling.

The issue with a software developer Hippocratic Oath is that you can damn well bet it will be subject to the whims of whoever is loudest on social media, or of whatever political group wants to use it to damage the other party.

The Hippocratic Oath is protected by history and pretty much settled in its interpretation, but any such oath or rule today is not worth the page it is printed on.


Build it into the corporate charter.


"Just doing my job"


It's hard to see this as anything but a way for executives to foist responsibility upon software engineers when things go wrong (and of course, claim credit and profit when they go right) as other commenters have pointed out.

That said, this might actually work! If a software engineer can suffer personal harm by working for a business with iffy ethics, then they are incentivized to play it safe by avoiding working for those types of businesses -- thus correcting the market by internalizing the externalities. I doubt anyone would work for Facebook in a world with a Hippocratic Oath for Software Engineers that has real teeth.

Put another way: pointing to decision makers instead of individual engineers is a simple rephrasing of the Nuremberg defense, "I was just following orders!" It is obvious that we should hold leaders accountable. The question here is whether we hold individual software engineers accountable too (they're not mutually exclusive) and the answer is probably yes.


Wouldn't offshoring lower liability? If you can blame the developer, why not offshore that, or better yet outsource it, and remove any responsibility from the company?


The ACM Code of Ethics and Professional Conduct has been around for a while and surely is a good start. But if you read through it, you'll quickly come to the conclusion that unless leadership buys into it your only real option is to quit your job if asked to do something you shouldn't.

https://www.acm.org/code-of-ethics


That's the whole point of a professional code of ethics. If you want to protect people for behaving ethically, that's government's job (cf. whistleblower laws et al.).

This sits a layer down in the defense-in-depth stack. And the idea is that if there's a recognized code, and consensus on what constitutes a violation, that employers will conform because if they don't they'll risk not just one "activist" employee leaving but most of them, out of a shared sense of communal ethics.

Would it work? No idea. My experience is that software people tend to be pretty squishy on matters of personal ethics.


The ACM itself also won't put its code on the right side of the kind of wrongs being perpetrated in China.


What do you mean?


They don't want to alienate Chinese members so they go to great lengths to prevent the code of conduct from outright prohibiting participation in the creation of things like the systems used to control and oppress the Uyghurs, etc.


The way I see it, the likely, big ethical issue with AI isn't some Terminator/Butlerian Jihad scenario, or even mass surveillance - it's that the wealth created by the technology will benefit the already wealthy much, much more than everyone else. This is generally true of most technology and even more so of the software industry (high scalability, "low" headcount, IP makes profit shifting and tax evasion trivial). But AI is unique in potentially enabling mass unemployment at a rate and scale never seen before in human history. This should be a joyous event, since humanity is finally free to enjoy the fruit of its labor without the labor part - but the way the world works today means most of us would not be allowed any bites of that fruit anymore.

When Bezos fires every single warehouse employee, what happens if the job they start retraining for also gets automated away before they can even start? And the next one, and the next one. If nobody is making a salary anymore, then it doesn't matter how much lower the prices are on Amazon (due to being produced in automated factories and shipped from automated warehouses) unless Jeff decides to reduce those prices all the way down to 'free'. At that point the assumptions underlying the world's economy would break down in a way that makes a corona shutdown look like a mild hiccup.

No software engineer is going to be able to do anything to help alleviate this. If you want to do something about this, you need to go into politics, not tech.


> There are no common ethics codes to determine how lethal autonomous weapons and systems that are developed for the military should be used once they end up in the hands of civilians.

It's interesting to me that this just presumes developing these autonomous weapons systems in the first place is ethical. I understand there is a difference of opinion on this ethical point, but it immediately frames the discussion pretty far away from the Hippocratic oath's requirement to abstain from causing harm.


"It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter."

- Borenstein


> pretty far away from the Hippocratic oath's requirement to abstain from causing harm

So do abortion and euthanasia, and probably plenty of other practices as well. Both of those are without doubt harm-causing practices, with their related points of controversy primarily revolving around whether the harm that is caused is worthwhile in the context of the alternative being a potentially greater harm.

Putting aside the fact that the Hippocratic oath is not actually a relevant part of modern medicine (modern doctors are accountable to comprehensive, codified sets of ethics), the fact that there is no such thing as a set of common ethics by which people choose to live their lives kinda points out the futility of this idea.

One person could say developing weapons is bad because they cause harm; another could say it's good because they can be used to reduce harm that would have otherwise been caused. Who's right? Neither of them. That's just two people with different opinions. I would personally suggest that establishing moral authorities like this can often be harmful, because lacking any objective truths, it's a topic people should generally be left to make up their own minds about.

Am I right or wrong? Who’s to say? I’m just a person with an opinion, and so is anybody who would want to agree or disagree with me.


I think a different problem is that it's not so clear why these weapons are being made or used.

I think the main motivator in almost all of those things is money.

The reason for wars is money; they just get justified by "the greater good".

Same for all the involved technologies.


> So do abortion and euthanasia, and probably plenty of other practices as well.

A less controversial example would be something like chemotherapy. In fact, a lot of treatments for terminal and chronic ailments are pretty harmful.


All 3 of those examples are actually covered by the “original” Hippocratic Oath (which probably wasn’t written by Hippocrates, incidentally).

> Neither will I administer a poison to anybody when asked to do so, nor will I suggest such a course. Similarly I will not give to a woman a pessary to cause abortion.

Chemo is obviously a bit different though, because the potential harm caused by denying abortion or euthanasia is (generally speaking) the potential to deny somebody the right to exercise a form of personal agency over their body/life. The controversy isn’t really a medical one.

The Oath also doesn’t really address treatments that have potentially harmful side effects, and it’s debated whether the oath allows doctors to perform surgery. It’s basically not fit for purpose in 2020. If you wanted to suggest that software engineers adopt a code of ethics similar to that of doctors, what you’d really be suggesting is something like “We need an AMA Code of Medical Ethics for software engineers”. Which obviously doesn’t have the broad appeal and simplicity of an oath.


That was essentially my point. With euthanasia and abortion you will find plenty of people who would call them unambiguously harmful and think they should be banned outright, regardless of context.

You'd be hard pressed to find people who want to abolish the entire field of oncology on the grounds that the treatments are horrible.

It's absurd to take the above-stated "requirement to abstain from causing harm" as a hard restriction, out of context, without taking into account the main point of the profession, which is to help the sick.

The logical conclusion of considering "do no harm" as inviolable above all else is that doctors would have to restrict their treatments to homeopathy and compassionate smiles.


I like the way you challenge the framing, and I agree, the right first question is "should we develop lethal autonomous weapons at all, and if so what kind is ok, what's the limit on that".

The way it's asked looks like an attempt to shift the Overton window until autonomous weapons of all kinds are treated as a mundane inevitability not worth worrying about, with just the niggling details subject to ethical questioning.

But big shifts like that are exactly the sort of thing serious ethical codes should be used to watch out for. Not the niggling details afterwards.


Well, for one, autonomous weapons are already here. Landmines, for example. Even back in the stone age, snares, meaning /rope/, were autonomous weapons.

There is no human in the loop (no pun intended for snares). The weapon decides when to strike using physics, and the answer is always "yes" if it is triggered.

What makes the new "autonomous" weapons different is that they attempt target differentiation. Mobility becomes useful only when weapons systems can say "no" when presented with a target, since even the Military Industrial Complex, purveyor of unneeded bullshit that wantonly takes lives, would find it impossible to sell a drone that goes around shooting missiles at all targets after launch.


So, reading the comments, I am on the side of having business owners take some kind of oath, not software engineers. If a software engineer was programming IE around the time that MS was hit by regulators years ago, and this "software engineer" oath was in place, would the software engineers being asked to code a web browser be at fault for the wrongdoings of the company?

It just seems wrong for someone that has been asked to "write a web browser" to be at fault for anything.

What about someone asked to code up a voice prompt on something that answers the phone for a telemarketing company. And said company later uses the code to do illegal spam robocalls instead of what they told the developer they were doing with it?

What about a software developer who writes code to turn a sprinkler system on and off by phone and is later convicted of writing the code for a bomb that blows up a building?

What about a software developer who writes code that matches human faces for the purpose of automatically unlocking the door at his home, ends up open sourcing it, but is then arrested because that software was deployed on a drone that murders specific people?

What about a software developer who is asked to install the above face-recognition system on a drone, is told it's designed to take pictures of people it knows at a birthday party, and the payload is later switched out to trigger a machine gun?


There are not two sides, because both management and engineering should face consequences for building harmful software. And I'm not sure it's appropriate to test these "what ifs" as if looking for logical flaws in such a system, because a well-functioning court and jury should easily be able to answer these questions, and I'm sure you had the "correct" answers in mind when you wrote them. Whether or not our courts and juries are currently well-functioning enough to support something like this is a different question, but we should strive for it.

In any case, we already have laws like this! And they're not controversial! If you commit war crimes by killing people with a shovel under the orders of your superiors, both you and your superiors are responsible, but the manufacturer of the shovel obviously isn't responsible for what you did unless they advertised the shovel's skull-bashing capabilities.


But what is harmful software? Today, while there are arguments about Facebook being too addictive, it's really just competition between platforms; it's not everyone's opinion that the software engineers at Facebook should be liable at this point for making an addictive algorithm. At some uncertain point in time, there's the possibility that Facebook will be sued for the addictiveness and lose. Should that happen, I don't see why the software engineers should be partially to blame.


If Facebook gets sued for its addictiveness and loses, then that would be a pretty solid judgment of its harm, wouldn't it? And it'd be up to the court to decide whether the engineers (and which ones) were complicit. There is room for nuance here.

> I don't see why the software engineers should be partially to blame.

Because they chose to build it! Software engineers aren't ignorant and helpless executors of instructions. They are responsible for what they build.


That makes no sense. That's the exact angle I'm trying to argue isn't feasible. It should absolutely be the corporate entity that takes the blame for things like that. If we were to change things to make the engineers directly responsible, we're just adding an entire industry that's not necessary.

We'll end up creating an entire class of insurance for software development liability. It doesn't really change anything except, again, raising costs and shifting burdens.


> a well-functioning court and jury should easily be able to answer these questions,

I'm going to ignore the qualifier of 'well functioning', which is obviously up for debate. The process of being charged with a crime, being put on trial, wondering at the consequences of the outcome, etc., is no joke. It is a tremendously time-consuming, expensive, and stressful process, and even if you are acquitted there is no undoing the damage that has been done. There's a reason doctors spend huge chunks of their income on malpractice insurance, and if we decide that engineers need the same protection in case they get sued, then the biggest beneficiary is going to be the insurance companies.

If insurance companies also had to sign oaths we might make some progress, but the nature of their game is to spread the risk - which is to say they take money from a lot of people and hope they never have to pay them back. There's only so much regulation can do about that.

It shouldn't just be thrown out as 'well if you make a decision in good faith then you are sure to win your court case'. It is not a reasonable burden to put on someone who cannot anticipate all the possible outcomes of decisions they make.


That's exactly it. It also raises the question: if corporations already have liability insurance, what's the point of forcing all of their employees to get similar coverage? It really is just giving more money to the insurance industry.


Brad Smith has been with Microsoft since 1993, and is one of the lawyers that enabled the business practices that made Microsoft notorious, and the target of multiple legal actions. Microsoft only settled with the DOJ in 2002. The company pursued policies of forced upgrades to Windows 10 and aggressive use of telemetry whilst he was president.

Feel free to tell us how your professional ethics as a lawyer enabled you to make change at Microsoft before you were general counsel and president, Brad. Or after you got the top jobs.


So you're saying that since lawyers have been around longer as a profession, they should demonstrate the equivalent oath, and how to make it work, before software engineers do.

Sounds brilliant!


Perhaps Brad Smith can let us know when we live in a time when it's not true that all three branches of the US government, to pick on just one country, are daily disrespecting their oaths to uphold the constitution. And then we can talk about the efficacy of oaths.

In other words, oaths aren't worth much in the real world, apparently, as much as we might like to think they should be.

It would be troubling if Brad Smith isn't aware of this. He wouldn't be the first once-respected person to go off the deep end: https://en.wikipedia.org/wiki/Francis_Collins

I have to give Brad the benefit of the doubt and assume something was lost in translation; possibly he is being misquoted or quoted out of context.

But even assuming we could agree on a shared, always-in-sync definition of harm, and even assuming oaths worked all the time, there are plenty of competent people, whatever label we give them, who will design and implement systems without having sworn any such oath, and countries and organizations (including DAOs) that will employ those people. And the first two assumptions are already a bridge too far.


We need a Hippocratic Oath for those that manage and deploy capital.

A Hippocratic Oath for only Software Engineers is addressing the symptom, not the cause.

Most of us don't wake up and think, "How can I make users' lives worse today?".

We need to fix the incentives if we want to fix the behavior.

We need to give software engineers writing spyware the option to say no and still feed their families. We need saying, "This is unethical and I can't help," to be a valid second option.


The key incentive is cash. As long as doing sinister things can pay, there will be people doing it.


Or maybe, just maybe, ethics isn't at all universal? If my time in other countries has taught me one thing, it's that different people and different cultures have very different ideas on right and wrong.

Ultimately (like, really ultimately), the very idea of ethics is a meaningless fabrication in the absence of God. Without an ultimate arbiter of truth, morality cannot exist.


While ethics are very far from universal, there are things that are universally accepted, e.g. that impaling everyone on Earth is bad.

God is not the only possible foundation of ethics, and not the most sound at that.


> God is not the only possible foundation of ethics, and not the most sound at that.

Absent God (or some sort of other mind-body dualism that's largely suspect otherwise), I don't see how ethics can approach anything that resembles "universals." The ontology of morality in a naturalistic or materialist worldview seems committed to something like emotivism, unless one feels like attempting to bridge the is/ought divide. How can we ascertain what the "universal" is when so many rational beings can hold diametrically opposed moral values? Is it democratically decided?

If one is examining morality as the values of different groups of people, and another is debating what we ought to consider universally moral, there isn't going to be much consensus, and probably a fair bit of confusion.


I think you might find some intro philosophy useful to understand how you can have a universal morality without a deity. Some Kant is a good place to start, with his universal law being a fundamental idea.


I actually majored in philosophy. Kant presupposes the existence of God in his assumption about the specialness of rational (i.e. human) beings.

I never liked my ethics classes. Phenomenology and existentialism is my shit, though I suspect Husserl was the inspiration for GPT-3.


I am a little worried this will sound glib, but how about we have an oath/pledge (whatever you want to call it) for decision makers at the company? Otherwise, putting the onus on engineers is, at best, silly.


Software engineering isn't medicine. It isn't that difficult to learn and you can wield many of the advantages without much training. If software folks refuse to assist with morally questionable tasks, those tasks will be compartmentalized and the unethical components will be handed off to people willing to do the job.

Sadly, just about everyone in ML/AI is contributing to an easily weaponized technology. Every time you make it easier to train a network, every time you make it faster to train a network, every time you improve an image recognition algorithm, every time you improve the latency/jitter of inference... you're contributing to the pile of knowledge which will be leveraged to control populations or enable military action. Most people stay unaware of it and just focus on the benefits. Some of us just grow to accept it.


> Software engineering isn't medicine. It isn't that difficult to learn

I would never hire someone who treats software engineering as something easier than medicine or other branches of engineering.


Easy to say when you've never hired anyone.

The parent is right. Software engineering is advantaged by a much simpler feedback loop. Software's "Hello, World" is validated in milliseconds. Medicine's "Hello, World" could take 30 years of clinical trials to ensure that you haven't killed anyone. It is easy because it is quick, allowing a greater understanding within an equal amount of time.

In fact, the adage about years of experience comes from lessons that actually take years: you need to encounter different circumstances and see the results play out in order to fully understand what you're dealing with. This is almost never a problem for software engineers, especially in the learning phase. Software engineers can gain "years of experience" each day by seeing the results of what they are doing in practically real time.


> Easy to say when you've never hired anyone.

Funny how you can make that statement.


I think a lot of the recent focus on "AI ethics" is just self-aggrandizement.

I also think a lot of people here are overestimating how broad the agreement is on what is ethical and what is not (and thus what would be prevented from happening under such a "Hippocratic Oath").

Examples being: Microsoft/GitHub's work with ICE, Google's China search engine, use of AI in military applications

I personally have a strong opinion on some of those, but I'm not delusional enough to think that there are no competent software engineers that disagree with me.


For each of those examples, there are also positive social justice arguments if you look at them in different directions:

- Microsoft/GitHub's work with ICE [prevents illegal migration, saving those workers from a life of wage slavery]

- Google's China search engine [gives more information than otherwise would've been available]

- use of AI in military applications [can save lives by no longer bombing civilians where it can be avoided]


The relevant difference between medicine and software development is that doctors provide a service and software developers create a product.

A doctor's responsibility is solely to the patient. When the treatment ends, the doctor has done their job. They may be responsible for long-term damage from the treatment, but that's still about something the doctor did themselves. It's like being responsible for your software randomly running amok and deleting random files on a user's computer.

When a software developer has developed some software, that product continues to exist and can be used by anyone who can run it, to do anything the software is capable of.

Turning this around, given that the doctor's "product" is a healthier patient, it would be like making doctors responsible for the evils committed by patients that they have saved.

It's easy to come up with obvious unethical scenarios, but a lot of harm can come from less predictable unethical uses. If you make your software easy to use, the result is that people outside of any ethically-aware, license-controlled bubble can use it. Where does the responsibility end?

The developers of Excel and Access have almost certainly indirectly contributed to evil software.


Is there evidence that doctors and lawyers are, by virtue of their oaths, codes of conduct, and background study of ethics, more ethical, moral, and upright than members of the general population?

Also, whose ethics would such a Hippocratic Oath advance? For every privacy-conscious person saying that encryption-everywhere is good, there is a law enforcement officer speaking of reduced abilities to solve crimes.

Imagine a software engineer who has been asked to place a backdoor in some software. Is there any piece of uncontroversial advice which you can give them?

And like several previous commenters have asked, could this be a way to shift responsibility from institutions and their management cadre to individual developers?


> Imagine a software engineer who has been asked to place a backdoor in some software.

I imagine such an engineer just gets a development plan and doesn't get to see the bigger picture that implies the backdoor. It might only get enabled on integration into a larger codebase, and nobody out of the loop will be able to infer its existence from what they are allowed to know for sure.

Hence I completely agree with the point about shifting responsibility to the developers. Seems like MS is selling more of that eyewash again.
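
A minimal sketch of what that can look like (all names here are invented for illustration): in isolation this reads as routine configurability, and only whoever wires up the flag at integration time knows it amounts to a backdoor.

    // Hypothetical example: an auth check with an innocuous-looking escape hatch.
    // In the development plan this is just "support access"; only the integrator
    // who sets the flag in production config knows what it really enables.
    final class AuthFilter {
        private final boolean supportBypass; // wired up from config at integration time

        AuthFilter(boolean supportBypass) {
            this.supportBypass = supportBypass;
        }

        boolean isAuthorized(String token) {
            if (supportBypass) {
                return true; // skips validation entirely when the flag is set
            }
            return isValid(token);
        }

        private boolean isValid(String token) {
            return token != null && !token.isBlank(); // stand-in for real validation
        }
    }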


I disagree. I only have one data point from past experience (not a backdoor, but working on a potentially unethical system), but I would say most developers know exactly what they are doing, or they know deep down but don't try to clarify, in order to absolve themselves.

If we go back to backdoors, yes, sometimes, in the simplest cases (e.g. a "root" account), it might get in prod through trickery. But anything more complex and you need to know what you are trying to achieve.


I'm guessing Brad Smith hasn't seen the Tethics episode of Silicon Valley.

https://www.youtube.com/watch?v=nfRUQh_EHoQ


It just seems pointless to me. Microsoft collaborates with the PRC, including governmental projects that go well into deeply questionable territory. Presumably Brad Smith would stop those if he could, since he's selling himself as an ethics guru. But if the President of Microsoft himself is unable to scrub out these ethical blackspots, what would software engineers taking an ill-defined oath do?


Microsoft itself imposed its OS by very questionable, unethical means. As a software developer, I will not be lectured about ethics by any Microsoft bigshot to begin with, especially when none of the Microsoft execs have made amends for their little schemes in the '80s and '90s.


I think we should also finally abandon the idea that companies are only here to make a profit and should ignore all other factors when doing business. That doesn't mean they should make no profit at all, but it does mean they should consider external costs as if they were internal ones. This is true for the environment, privacy, data protection, employees, and many other things.


I think something like this would be more easily 'enforced' if the current punishments weren't so low they got factored into the "cost of doing business."


IMO having a hippocratic oath for software engineers is entirely pointless. There are plenty of people who would say the words with no intention of living up to them - not everyone is going to share your values.

I can't say for sure, but I have a feeling that the hippocratic oath is not what's stopping an unethical doctor from behaving unethically. Instead, it's the risk of lawsuits and loss of their license to practice medicine that keeps them in line.

I just don't think software could match the very real consequences to unethical behaviour that doctors face. We don't have a concept of a "licensed software engineer", and even if we did, there's nothing stopping an unethical engineer from working on their own. Doctors can be blackballed from hospitals, clinical trials, etc. Are we going to blackball software engineers from touching computers?

The above being said, I do believe that software engineers should try to be ethical. However, I do not think there's any mechanism that could be implemented that could force this behaviour, and any attempt to do so is simply a way for its promoters to say "look how good I am", and less so an honest attempt at making the world a better place.


For sure, I am one of these people you mention. I feel very little to no loyalty to any company I have ever worked for. The reason is that they can and would fire me at any time if the conditions were right.

Hence, I am ready to move to another position at any time if the conditions are right (pay, benefits, etc.). If they wanted me to take an oath, I would probably not take the job today; but if they paid me a ridiculous amount, for example, I would most likely take the job, say the oath or whatever you have to do, and never care about it again.

Just because I am forced to say something doesn't mean I believe it. Maybe it's because I am a natural rebel who hates being told things and despises "mission statements" or "company values", but I feel like there are a lot of people like me.


It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter. -Nathaniel Borenstein


    static void destroyBaghdad() { destroyCity("Baghdad"); }

Here, it's done.
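
For completeness, a self-contained version of the joke (everything here is hypothetical; destroyCity just prints):

    // Borenstein's "ethically-trained" design: the parameterized procedure,
    // plus the thin wrapper that merely supplies the argument.
    class ProfessionalEthics {
        static void destroyCity(String city) {
            System.out.println("Destroying " + city + "..."); // stand-in side effect
        }

        static void destroyBaghdad() {
            destroyCity("Baghdad"); // Baghdad given as a parameter, as ethics requires
        }

        public static void main(String[] args) {
            destroyBaghdad();
        }
    }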



No, no, it's working as intended. Product Management failed to properly define the spec; the software works as defined.


We already have one that is pretty close - Ted Nelson, from his 1974 book Dream Machines:

The purpose of computers is human freedom.

I am going to help make people free through computers.

I will not help the computer priesthood confuse and bully the public.

I will endeavor to explain patiently what computer systems really do.

I will not give misleading answers to get people off my back, like “Because that’s the way computers work” instead of “Because that’s the way I designed it.”

I will stand firm against the forces of evil.

I will speak up against computer systems that are oppressive, insulting, or unkind, and do the best I can to improve or replace them, if I cannot prevent them from being bought or created in the first place.

I will fight injustice, complication, and any company that makes things difficult on purpose.

I will do all I can to further human understanding, especially through the new visualizing tools of interactive computer graphics.

I will do what I can to make systems easy to understand, interactive wherever possible, and fun for the user.

I will try not to make fun of another user’s favorite computer language, even if it is COBOL or BASIC.


We should aim for having a "Hippocratic Oath" in place for the CEOs, board, and other business leadership too.

They're the ones whose "interesting" ethics are responsible for the things mentioned in the article.

The engineers wouldn't generally even attempt to implement the concerns in the article unless directed by their organisational higher-ups.


Quite. Some of the most egregious ethics lapses come from 'management decisions'. Some were infamously immortalized by the Challenger story ( https://freakonomics.com/2011/06/01/launching-into-unethical... ), where a manager effectively overruled engineers who explicitly told him it was a bad idea. Oh yeah, he called it a 'management decision' too.


The challenge here is the entire software engineering compensation and career model.

Let's consider Boeing. Aviation is complicated and takes a long time to learn. Good software engineers are constantly moving jobs to boost compensation, so Boeing would retain them for 1-2 years at most unless it were willing to radically increase its compensation year over year.

I have spent 1 year in my job at a parking software organization. I have learned very little about how the business works and couldn't tell you much about the software I work on beyond the features built while I have been there. Most of my team has been there just a year or two and they don't know either. Give me all the training you want, but I know very little about how any of this fits into the business model or how it is going to be used and I will probably never know before I leave. And if we use the median tenure of engineers for my organization, I am arguably scheduled to leave.

A friend is at a medium sized tech company. He spent 6 months on one project, spent 6 months on another, and now is interested in moving to a third. He works on "some cloud service, not quite sure which."

Another friend is just fed tickets, completes the tickets, and otherwise isn't even sure what his software is used for and what kind of companies buy it. He has been there a year.

So, sure, you can create an oath, but how many software engineers are ever going to know enough to object to something? Maybe 50%?

To understand the bigger picture in many cases, engineers would need to stay on their projects for 2-4 years. They would need to be there from the design phase through the implementation and deployment phases, as well as the day-to-day use phase. They would need to know a lot more about what the business actually is in many cases. In aviation, they would need to have engineers stay for 6-8 years.

How does that happen when 3 years at a company is considered a long time and plenty stay 1-2 years?


"It should be noted that no ethically-trained software engineer would ever consent to write a `DestroyBaghdad` procedure. Basic professional ethics would instead require him to write a `DestroyCity` procedure, to which `Baghdad` could be given as a parameter.

--- Nathaniel S. Borenstein, https://en.wikipedia.org/wiki/Nathaniel_Borenstein


The way I see this is that someone is trying to shift responsibility onto someone who has even less power and is now in a vulnerable position.

Smells like Dieselgate, where executives claimed that they did not know about the cheating device. It is entirely possible to force subordinates to do things in secret in a way that cannot be traced back to the original decision maker. If there is money to be made, it's going to happen.


What an asinine suggestion. Of all people, the president of Microsoft punting responsibility down to the people simply following orders. This comes across as a scheme to squeeze out newcomers to AI and make it so only well-connected multinationals are able to control this tech.


This is absurd. We should not be putting the onus on the engineers to practice their trade ethically when they often have no control over the decisions being made at their company. Additionally, those software engineers do not profit from unethical practice nearly as much as the decision makers and executives.

This is especially rich coming from a fabulously wealthy executive of a company engaged in many unethical activities.


Why should software developers take the oath? Half of us are already working for morally bankrupt companies. The companies should take those oaths and stop building products that ultimately produce negative externalities.


I think what we should focus on, instead of ethics or humanity, is things like transparency, open discussion, and clear feedback. We can't really tell what's exactly ethical or humane when the situation is changing this quickly. What we can ensure, though, is that a process exists that can spot and fix issues at hand in a timely manner, publicly. It's a bit like OODA for ethics. I think a similar system would be needed for the US police system too. Be it a company or the police, when its secrecy is of utmost importance, ethics cannot be expected.


The Hippocratic Oath is mostly irrelevant, so, no, we don't need it.

If he means we need a professional code of ethics, and the organized professional discipline that makes such a code meaningful, sure, maybe.

But it's odd for a software executive to make that case, since the only reason such a code would be necessary in software would be to provide a countervailing force so that developers resist unethical demands from their employers. IOW, if it weren't for the lack of ethics by Smith’s peer group, their subordinates wouldn’t need new constraints.


"The fish rots from the head", comes to my mind.

What we need is ethics in management.


Well, it sure is tough to know the outcomes of what you create, however many oaths you take. Long ago, someone invented a pencil; people have written a lot of terrible things with pencils. More recently, people invented neural networks to recognize handwritten digits; now people are using networks like that to do terrible things. An oath won't solve the problem; we need laws and governance and more to ensure society uses technology safely and for good.



"Engineering ethics" is a fallacy as it implies that engineers have high level influence on the goals set by organizations. If lockheed martin trained all of its missile engineers in ethics would they suddenly start building bombs that burst with flowers? Obviously not. Engineers are paid to design things against a very specific set of requirements. If an average engineer pushes back on ethical grounds they will simply be replaced.


Ethics are subjective and always evolving. I think engineering ethics is not there to tell you what to build or not to build, but rather to teach you the likely outcome if you decide to build x or not build x. Based on that, it's up to you to decide whether or not to do it.


I'm not sure what your engineering experience is, but I'm pretty certain that >80% of engineers are not in a position to refuse to work and quit their job without a severe impact on their career. And the same goes for managers and directors, all the way up to the top. Fail to explore a new technology your competitor is already working on? You've just damaged your company and all your employees. The only way to deal with these kinds of issues is regulation. Classic tragedy of the commons.


Of course, every decision in life will always have its own trade-off.

Sure, it can have a severe impact on their career, but it's still a choice whether to do it or not, to quit or not.


The same thing would have applied to doctors at some point. Doctors still have line management.


Why should this concept be limited to just AI? How many developers knowingly enabled or directly wrote software for predatory companies like Facebook, (sometimes/mostly?) Google, sometimes Apple, historically often Microsoft, many of the large banks, Verizon and ATT and Comcast (and probably other major telcos), debt collection agencies, payday lenders, and on and on.

I'm quite certain that many HN readers knowingly contribute to companies that do plenty of evil. Heck, some of the HN crowd have created startups that specifically do gray things just for money.

Practically speaking, there's no way to know that your efforts will not somehow result in doing harm to others. Rather than taking some oath which frankly has little meaning without full knowledge of the outcomes, I think it would be more beneficial for everyone to make a concerted effort to get out in the world - see places that are in severe poverty or political turmoil. Meet other people who are very different from our bubble. Then we start to have an idea how even some of our good intentions can result in worse situations for others (such as giving lots of clothes away to have shipped to Africa).


So true. I'd say much, much more harm has been done by regular CRUD programmers than any others. You really think AI is going to introduce regulation around this?

They're just going to treat it like munitions, because that's all the lawmakers know.


I suspect this is somewhat a problem of age and experience. When you are younger (at least typically speaking... ignoring the rare exceptional people), you get caught up in the job at hand. It is new to you, and the environment is exciting. You focus your attention and your effort narrowly.

If you have someone to shake you from your myopia, or you get older and start to gain perspective, you question your efforts and their potential outcomes.

To be fair though, at the lowest CRUD level, it's no worse than someone who builds roads or cooks food. We cannot reasonably consider nor police the use of those roads or the meals we make.


To my mind there are two types of ethics around being a software engineer. The first is just normal ethics, and it applies equally to everyone else - am I working towards something I think is ethical? For this reason someone might choose not to work at PornHub, or Facebook or Boeing or whatever. That's a personal choice. If you want to change the world so PornHub doesn't exist anymore, the solution isn't to force software developers into an oath that forbids it, but to do normal, boring politics.

There are professional ethics, but they are much more tightly bound - more comparable to an accountant's ethics than a doctor's. For me that would be along the lines of:

- I will not use trickery in a demo to deceive others about the progress of a project

- I will report all bugs and mistakes I find and make, regardless of any personal cost this may bring on me.

- I will be honest in my dealings with non-technical people, and do my utmost to accurately represent the work I have done and the work I plan to do.

- I will make decisions and advocate for changes for the benefit of the codebase and organisation, not for the benefit of my own CV

... And so on - you can pretty quickly get into controversial territory I'm sure.


>I will not use trickery in a demo to deceive others about the progress of a project

This is a good starting point. All those who've used dodgy marketing to convince investors that their machine learning is AI, and that we're the 'first generation of humans to endow computers with the ability to make decisions', have lent their words to a culture war with no basis in fact.


I hate to be a Debbie Downer, but this is nothing but virtue signaling.

How can we enforce this? Strong individual ethics without the ability to act on them will simply make engineers feel helpless. And that's just going to hurt happiness in the workplace.


You would need a software professional body that can ban people from practising professionally.


The Order of the Engineer is instructive here: https://order-of-the-engineer.org/

The oath states:

"I am an Engineer. In my profession, I take deep pride. To it, I owe solemn obligations.

As an engineer, I pledge to practice integrity and fair dealing, tolerance and respect, and to uphold devotion to the standards and dignity of my profession. I will always be conscious that my skill carries with it the obligation to serve humanity by making the best use of the Earth's precious wealth.

As an engineer, I shall participate in none but honest enterprises. When needed, my skill and knowledge shall be given, without reservation, for the public good. In the performance of duty, and in fidelity to my profession, I shall give my utmost."

The Order of the Engineer started in Canada in 1925 as the Iron Ring. It was imported into the US in 1970, with many changes.

Worn on the pinky finger of the working hand (depending on dis/ability), the ring is meant to drag on drawings and leave subtle marks as it ages and rusts. Its facets are meant to be noticeable, to remind engineers of their duties and obligations. Though I cannot find the source, I have always heard that the original Canadian Iron Rings were made out of steel from a collapsed bridge whose flawed design had cost lives.

The US version is significantly different, as engineers are licensed very differently in Canada and the US.

The US based Order of the Engineer is fairly accommodating to change and updates. Bioengineers are welcome to take the oath, for instance.

Perhaps the US-based rings for software engineers should be made from the Uber car that killed that poor woman in Arizona, from a Therac-25, or from Theranos machines.


Translation: we need to increase barriers to entry for working in this field by spreading the social cancer that is vocational licensing to software engineering.


How do you hire an architect to design your house? Pick a doctor? Pick a lawyer?


Pay a huge amount of money. And, perhaps unsurprisingly, we have an incredibly expensive healthcare system, a legal system where deeper pockets have a large advantage, and a housing crisis.

All I need to make a website is to know how to do it. To build a house I need to comply with an endless list of regulations that are essentially intended to make me unable to do so. To buy an asthma inhaler (insert medicine here) I must pay a doctor to write me a prescription. It doesn't matter whether I know what I need and how much. I must pay for it. Lawyering I can do on my own for myself though.

Edit: I'm not saying that all of this is necessarily bad, but I just couldn't resist when you mentioned those 3. They're not great examples, because the ethics there are about protecting the person receiving the service. In software development you rarely build something for a specific human individual.


This is different. Anybody can create a website and thus the next Facebook/Twitter/TikTok. As far as I know, I don't need a permit to build a website. Requiring one would stifle freedom of speech.

On the other hand, you can bet that if you work on missile guidance or plane software you'll need some sort of certification. Your comparison just doesn't work. And there are already laws in place that any website dealing with user data or any sort of ecommerce must follow.


That’s a bad-faith question asked in an attempt to set up a straw man argument, and you know it, and I know it, so cut the crap. Worry about the guy writing software for all the nuclear power plants we’re not building, not the guy who makes webshit, word processors, and compilers for a living.

Architects, Doctors and Lawyers are closer to one extreme; you can make as strong a case for licensing them as you can for not licensing them. Software engineers are vocationally with the florists, where licensing is less a matter of ethics or workmanship or safety than an artificial, government-sanctioned means of preventing new competition from entering the market, or of only allowing it in a controlled and revocable manner.


Writers of open source software are winding up to one day spin in their graves reading the comments of the authoritarians demanding software licensing in here.


Agreed. There is no professional class so enlightened that it won’t use whatever means available, governmental or otherwise, to insulate itself from competitive pressures.


> How do you hire an architect to design your house?

Depends where you live and how thoroughly local architects have captured the housing market through regulation. Other than that, building a house is easy.


I'm not against the idea, but analogizing to medicine isn't great. There are a number of challenges that need sorting before an oath is feasible:

* Medicine is thousands of years old. Software is decades old. There's still a lot we don't know.

* Medicine involves a lot of repeat situations, so you can set precedent and learn over time. Software isn't always new, but there are more novel applications than in medicine.

* The patient is, generally speaking, at the center of a doctor's ethical universe. It's not clear who the "patient" is for engineers. Users are often malicious. You can just say engineers must consider "society as a whole" but that's kind of a cop out, and not useful for trickier situations.


The patient is the company.

So the oath would be to do no harm to the company.


I don't think that's what the original article is getting at though. It talks about user privacy rights for instance, and it would be harmful to certain companies to not spy on their users, even if it's the right thing to do for society at large.


"Don't be evil" [until it's inconvenient not to be]


How about we make the executives and managers take an oath?

Engineers mostly do what they are told; it's too easy to let them carry the moral burden while higher-ups can ask whatever they want of them. It's too easy to shift the blame onto developers.


There's the Archimedean oath https://en.wikipedia.org/wiki/Archimedean_Oath

"Considering the life of Archimedes of Syracuse, who illustrated the ambiguous potential of technology since Antiquity,

Considering the growing responsibility of engineers and scientists towards societies and nature,

Considering the importance of the ethical problems stemming from technology and its applications,

Today, I commit to the following statements and shall endeavor to reach towards the ideal that they represent:

I shall practice for the good of humankind, respecting human rights1 and the environment.

I shall recognize the responsibility for my actions, after informing myself to the best of my abilities, and shall in no case discharge my responsibilities on another person.

I shall endeavor to perfect my professional abilities.

When choosing and implementing projects, I shall remain wary of their context and their consequences, notably in their technical, economic, social and ecological aspects. I shall give particular attention to projects with military applications.

I shall contribute, to the extent of my abilities, to promote equitable relationships between people and to support the development of economically weaker countries.

I shall transmit, with rigor and honesty, to discerningly chosen interlocutors, any important information, if it constitutes a gain for society or if its retention constitutes a danger for others. In the latter case, I shall ensure that the communication yields concrete action.

I shall not let myself be governed by the defense of my own interests or those of my corporation.

I shall endeavor, to the best of my abilities, to lead my company to take into account the preoccupations of the present oath.

I shall practice my profession in complete intellectual honesty, with conscience and dignity.

I solemnly take this oath, freely and on my honor."

1. According to the Universal Declaration of Human Rights of the United Nations (10 December 1948)


Here's an anecdote echoing many of the other comments:

I attended an event about AI ethics a couple of years ago. Many of the panelists were getting very excited about the upcoming AI ethics guidelines from the IEEE and how they would set a gold standard for state-level AI development. They were completely unaware that the IEEE already has a general code of ethics[0], which many governments implicitly require engineers in certain roles to ignore.

[0] https://www.ieee.org/about/corporate/governance/p7-8.html


The missing point:

You can't become a surgeon by Googling and slicing things,

while you can become a /Software Engineer/ entirely by yourself (self-taught).


BloodOverflow:

"The inflating red bag stopped inflating. What do I do?"


The oath is going to become the new "Terms And Conditions" checkbox at the end of online courses.

> Tick here to certify that you have read and understood the Techno Oath

"Yeah yeah, whatever"


This seems like "noble virtue signaling" for software executives.

It isn't taking a five-whys approach to the problem of ethical failures in the industry. It won't fix root-cause problems.

We don't need a Hippocratic oath for Software Engineers. We need laws and regulations around some things. And then the government needs to penalize C-level execs, the board, and major shareholders for violating them.

The problem is that laws apply to individuals without significant financial resources, sporadically to the truly wealthy, almost never to corporations, and never to the shareholders.

Trying to fix a broken system with a "Hippocratic oath" for software developers is like trying to fix school shootings by making sure the doctors will save the school shooters as much as the victims.

Microsoft could still be profitable and fix way more bugs in their software than they do - they choose the higher profit margin over "the good of humanity" all day, every day, and it is the executives and major shareholders that own that decision.


In Greece, we do take an engineering oath to graduate: in summary, to nurture and expand science and to improve and protect human welfare. In a similar fashion, in Canada they have the Iron Ring (https://en.wikipedia.org/wiki/Iron_Ring).

IEEE and almost all of the serious engineering organizations have a code of ethics.

Accountability of management is what is missing. If hospital management instructs their doctors to skip costly steps in a procedure, they are going to jail. If an engineering company screws up and lives are hurt, lost, or damaged somehow (e.g. privacy), they issue a public apology; rinse and repeat.

Which is ironic, as "a doctor might screw up and kill a few patients, but if you as an engineer screw up you can kill thousands." The fact that you are not there holding their hands when people get hurt, have their identity or money stolen, or lose their lives due to a bug or critical error does not make you less responsible.

The problem with our field is that i) it has immaterial direct results, and ii) big failures have big money behind them. The first makes it hard, or even impossible, for the public to understand the implications, and does not excite terror if you are not directly involved. How many people run in fear on hearing that "x credit company got hacked"? The latter implies that there is an incentive for no punishment - similar to banking institutions in 2007.

I think we are unfortunately, a lot of the time, swaying between being a statistic (https://quoteinvestigator.com/2010/05/21/death-statistic/) and having an impact too insignificant or incomprehensible for us to gain legal power to push back. Recall that deep learning, distributed ledgers, etc. are either magic or demonized to the public.

P.S. One could argue that not all software developers are trained engineers, etc., but I am not buying that. As many have noted, even if a person is not academically trained they can take an oath, but nothing is going to change in the responsibility realm.


How about an oath for tech CEOs? Or better yet, some strong, comprehensive laws on data collection and ownership?


How about renewed enforcement of anti-trust laws? Tech CEOs love to make a big stand about issues but they are the root cause.


All traditional degrees that have "engineering" in the title are usually regulated professions in every country, same as those of doctors and lawyers. You have to learn the local law that applies to your domain, and after you get a degree you are qualified and recognized by the state to work locally there.

Software engineering is one of the few where the word “engineering” is not really engineering. Software engineers neither know the law nor are trained for that; they work globally and have no idea what laws apply. This has its own benefits and drawbacks (such as less protection: anyone can take your job, no matter the degree; but you can also work anywhere).

For “software engineers” to be real engineers able to take any oath, the education system and degrees for software would need to change to match those of other locally protected engineering degrees, with all the benefits and drawbacks of a state-recognized profession that can be practiced only locally. Today's developers match none of that.

It could be a possible future, but it may also kill talent as we know it in software. The software market dynamics of today exist because everyone who can write code can claim to be a “software engineer”, and they only need to write code to prove it. You cannot claim you are a doctor just because you think you have the skills to heal people, and prove it by healing a few.


Yeah ok. Only if it works as an escape clause for NDAs. If we can’t talk about the chain of management decisions that led to being asked to implement something unethical, then a personal oath is only something to be used against us. If an ethically conflicted engineer ever asks “why” they always get the same answer, “because it’s what the client/department head/CTO wants.” I have never worked at a company that takes any moral implications of technology seriously. Money rules all.


Should probably start with a Hippocratic Oath for the c-suite execs.


Great. If we are going to take software engineers to task over their ethics, let's form a professional regulatory body that licenses them, and a union or advocacy org while we're at it. Tech companies have benefited tremendously from the way the profession is currently set up. I'd love to see how they respond when they can't make SWEs dance like monkeys in interviews to get a job. Suddenly, outsourcing the work would become a lot less feasible, too.


Doctors can’t be told how to practice medicine by non-MD managers.


Sure, let's let the decision makers off the hook because they're untouchable anyway, thanks to the system they created -- a system we keep agreeing to every time we buy their products and vote for their puppets.

But it's not the decision makers who should be targeted. The workers selling the product aren't the problem. The workers marketing the product aren't the problem. No. It's the workers told to make the product.

These workers have such easy, stable, and wealthy lives that they cannot be pressured in any way to make something harmful. They are so intelligent, so smart, so perceptive that the truth can't ever be hidden from them. In a world governed by relative and fuzzy morals and ethics, with few absolutes and a plethora of grays, these guardians of the righteous should take an oath to be better; to take responsibility; to uphold these virtues; to lead by example; to let no more evil pass under their gold hands as they type in the language of the creators (perl).

All you have to do to become one of these divine creatures? Take an online programming course and tick a box stating you read and understood the ter... uh... oath. Yeah. Oath. Because that's what's gonna solve the problem.


> If you look today at who is studying an AP course for computer science in an American high school, what you will see is a group of people that are more male, white, urban, and more affluent than the country as a whole. That is what we need to address

As a male, white, urban, affluent person himself, perhaps he could demand to be replaced by a POC, as some corporate board members have done, instead of endlessly yapping and proudly showing off his self-hating racism.


Maybe we need a Hippocratic oath for Microsoft CEOs. Or maybe there already is one, like the one we have for bankers. But somehow they are still corrupt; big surprise. Let's just accept that people do bad things and that empty promises are worthless. The penalty for oath-breaking in ancient Greece was death or the extinction of the family line (harsh, I know). That's very different from modern 'oaths'.


No, we need no such thing. Any such "Hippocratic Oath" would just become a statement of loyalty to the politics of the people in charge of drafting and administering the oath. The recent and still-raw experience of activists in tech companies damaging their coworkers and their company under the guise of "ethics" should tell us that this word is just a sneaky disguise for politics.




We also need a Hippocratic Oath for CEOs and investors and shareholders. We need a Hippocratic Oath for corporations, for they have personhood too!


It doesn’t work for doctors in the U.S. (see the opioid epidemic, pharmaceutical company deals, etc.), so why would it work for software engineers?


“All the nations of the world may sign up for the kinds of principles and rules we've described now, but will everybody follow them? Will everybody even sign them? Well, we don't yet know. But if you get these kinds of laws or principles adopted, it makes it easier to galvanize everyone to stand up and defend them. It makes it easier to hold the violators accountable.”

It sounds more like a mandatory code of conduct for all software projects.


We need to go far beyond that -- we need an actual profession. It always strikes me as odd that someone can go through a 4 week bootcamp and call themselves a software engineer. Robert Martin has been advocating this point for years -- https://www.youtube.com/watch?v=17vTLSkXTOo


>It always strikes me as odd that someone can go through a 4 week bootcamp and call themselves a software engineer

Why? If four weeks is enough for somebody to learn enough that they can build Joe the Plumber a slightly nicer-looking website that he's happy with, what's wrong with that? A lot of excellent software engineers have zero days of formal software education. Even Donald Knuth doesn't have a software engineering degree; his degrees are in mathematics. Should Donald Knuth be forbidden from developing software because he doesn't have a piece of paper saying "software engineering degree"?


Tech companies are actively against there being a “profession”. They want engineers to be plentiful, cheap, and replaceable.


No thanks. Apart from creating unnecessary barriers to entry, such an oath would immediately be politicized and used to stifle certain views, speech, and work. More broadly, I also think licensing in general can be counter productive, constraining supply and imposing bureaucratic certifications where none are needed.

How about we start with something we need more immediately: updated antitrust laws that are enforced against giant companies like Microsoft? Brad Smith talks big about requiring an oath for engineers but neglects his own role. Before he was President, he was General Counsel for MS. He is still the chief legal officer and chief compliance officer alongside his president title (https://en.m.wikipedia.org/wiki/Brad_Smith_(American_lawyer)). Let's see him and Microsoft come clean on antitrust, privacy, theft of others’ innovation (like Slack) while holding a giant patent war chest, etc.


I suppose the NSA will just disappear overnight, that and the military-industrial complex. /s

People very much disagree on what is ethical, and the people who know they are being unethical rarely let simple oaths stop them. An oath like this simply becomes a nuisance to ethical free thinkers, while becoming one more useless social more for the people who don't care about ethics.


I think I prefer the three permaculture ethical principles. They are much more condensed, more broadly applicable to any technology at any tech level, and much more holistic. These are:

1. Care of Earth

2. Care of People

3. Fair Share

They can be applied to technology as part of the whole ecology, not just human-centrically. It puts life and death in the proper context of the whole, rather than the way modernity has twisted it up.


Caring about ecology is very much human-centric.

Neither the planet nor its biological inhabitants "care" about anything. They just are (as in they exist).

The main driver for protecting the ecosystem is the fact that we - as a species - can't survive (for long) without it.

Granted, there are some who strongly believe that becoming a race of cave-dwelling mole people, or inhabitants of glorified snow globes in an inhospitable barren wasteland, is a desirable prospect somehow. But overall, humans prefer blue skies, green meadows, birdsong, trees, and diverse natural landscapes.


Caring about the ecology is not just about the environment as if it were something separate or abstracted. There is a very practical, pragmatic reason not to sh$t where you eat. At least, not without composting it with charcoal, or using one of those ecovats to clean it first, before feeding it back to the garden.

As for whether the planet or biology “cares”, we will have to disagree and leave it at that. While I think the word “care” is overloaded with a number of meanings, many of them human-centric, I do believe there is a planetary and plant intelligence. I don’t think the Universe is as uncaring as you think, though there are many, many unpleasant things.


Replace Software Engineers with "Anyone involved in the creation of software", and then it goes from a "Definitely not" to an "Eh, maybe."

Until the people directing the software engineering departments are held accountable for their immoral behavior and goals, I don't think the engineers beneath them can really be assessed fairly.



That's the ACM Code of Ethics. Software engineers aren't required to be in the ACM, so it's not, in any meaningful sense, a general code of ethics for software engineers.


The Hippocratic oath isn't very effective — it did not prevent the development of bioweapons, drugs to carry out death sentences, or human destruction during WW2. If doctors can have their oath circumvented in a very tangible and obvious way, why do we think programmers working at a high level of abstraction can't?


The real answer is the same as it has always been: libre software.

Funny how the same topic comes up every time a new issue pops up, even though we had the answer nailed in the '90s.

If you want software you can trust, no amount of oaths, certificates, and overseer bodies will ever be enough. It's only possible to fully trust transparent software.


These are orthogonal topics, though.

This isn't about trust. Is using LibreOffice Calc for operating a death camp more ethical than using MS Excel?

Is the creation and use of an armed autonomous drone justifiable and ethical as long as its firmware is open source?

Are misinformation bots and fake news generators OK, as long as their source code is freely available for use and further modification?


Well, then the Hippocratic Oath is not the right metaphor. Doctors are supposed to take care of their patients and not judge their morals, even if the patient is a death camp commander.


Yup. If I'm a LibreOffice developer, what am I supposed to do to prevent Nazis from using my program? It's absurd to put this weight on a software developer.


It’s unfortunate, perhaps to the point of irrelevance, that a need for ethics was raised and then summarily limited to AI.

Software, all software, needs an ethical standard, and it currently has nothing remotely close in common practice. This will be a hard sell, though, because most developers have no idea what ethics are, why they would be needed, or how they apply to practice. Frequently, when the idea of ethics is raised, the response is hostility, often framed in purely imaginative terms.

When people in other industries ask me what’s required to be a software developer I always tell them it’s just a matter of charming an interviewer. There is no education requirement, no licensing/certification, no standard skill definition, no internship or agency, and certainly no ethical standard. It’s always amusing to watch their response.


Well, the only thing you 'need' at the end of the day to be a software engineer is a computer. It is probably the most accessible field with the term engineer in it. There are no physical artifacts necessary, unlike most other engineering fields. Moreover, only a few software projects actually require any of the things you list above, such as medical devices or software for control systems that affect things in real life.

Does the same set of ethics/regulations even make sense for one person developing a guided missile versus another making a video game that happens to shoot missiles at virtual targets?


Certainly we can be very happy that it is so.


When you don't have anything to compare it to, I am sure it's perfect and faultless.


https://en.wikipedia.org/wiki/Iron_Ring

>The Iron Ring is a ring worn by many Canadian-trained engineers, as a symbol and reminder of the obligations and ethics associated with their profession. The ring is presented to engineering graduates in a private ceremony known as the Ritual of the Calling of an Engineer.[1][2] The concept of the ritual and its Iron Rings originated from H. E. T. Haultain in 1922, with assistance from Rudyard Kipling, who crafted the ritual at Haultain's request.[1][3]

https://en.wikipedia.org/wiki/Engineer%27s_Ring


Wow, coming from Microsoft that is a bit rich. I do understand the problem with developers being a law unto themselves, but that was largely caused by the proprietary mess created by Microsoft and others. So now they want to fix what they broke, to their advantage. No thank you.


No, this is a meaningless sentiment. Floating a feel-good idea like this accomplishes little in the face of how the world actually works. In the real world, the Hippocratic oath does not prevent malpractice, over-prescription of opioids, or whatever your beef with the entire healthcare system may be, let alone doctors' individual actions.

Engineers are hired to make weapons systems that kill civilians. Engineers make software that enables ubiquitous surveillance by whoever wants to use it, for whatever purpose. No, a Hippocratic oath will have a hard time influencing individuals and their organizations; that's what laws and the enforcement thereof are aimed at. I have little hope for oaths or laws, however.


Yeah, how about not abusing their market dominance with lock-in and anti-competitive shenanigans? MS could benefit from sticking to proper business practices first. Preaching ethics while the company engages in shady behavior is hypocritical, not Hippocratic.


Please have a hippocratic oath for management first.


I think we need to go well beyond a Hippocratic oath.

Most software engineering degrees require an ethics course. Yet we still have the issues we have today.

Asking for good intentions isn't going to change outcomes, because Software Engineers already have good intentions. Nothing will change.

How about software engineering licensure? Found to have contributed glaring security issues to widespread code? Lose your license and employment. Used dark patterns to defraud people? Lose your license and employment. Left "debugging" endpoints open or collected unminimized telemetry? Lose your license and employment.

This seems stronger, and much more likely that outcomes will change.

But good luck getting the industry to move.


Ethics is not a hard science.

Also, it's not as if the criminal underbelly would ask their hacker candidates for a license, or as if the military/cops would care about a licensing body.


This doesn't generalize very well, since a lot of software is written by volunteers making open source software. Those people aren't employed, so they're exempt from getting fired. People who own their own software business also can't be fired. Overall, what you propose doesn't make sense.


Or we could just finally admit it: markets/businesses need to be regulated.

And markets/businesses with a high risk of negative effects for society (and software already belongs in this category) should be strictly and proactively regulated.


That's rich coming from a lawyer who has been with Microsoft since the early '90s.

Maybe the oath is yet another hollow ritual, and what software engineers need is what everyone else needs: a sense of morals and a spine.


Looking at history, we rather need a Hippocratic oath for Microsoft CEOs.


We don't need a Hippocratic oath for engineers, we need it for management. It shouldn't be up to the individual engineer to guide the organization towards better morals, it should be upper management.


Software engineers usually aren't in charge of the features they write. They can refuse to do their job at their own peril. Maybe we need a hippocratic oath for product managers or something.


More like an oath for company directors. No single engineer will make the world-destroying AI on their own; it will be made by companies getting more and more desperate for people to click their ads.


A better thing would be for CEOs to take that oath. Software engineers tend to be told what to do; they are more like soldiers than doctors.

What would be good is if we took a leaf out of the real engineers' book and started caring about what we put out there as though lives depend on it. But that starts with getting rid of all of those disclaimers. No other industry gets away with accepting as little liability as the software profession does.


Software engineers rarely get to make those decisions about privacy or AI ethics. From the engineer's point of view, there is little possibility of knowing exactly how their code will be used. There should be an oath for the people making these decisions, but not so much for the people carrying them out. The reason doctors swear an oath is that they have to make calls that can easily decide a person's life or death.


What we need are more ethics courses, and courses on historical business frauds/schemes, as a graduation prerequisite for getting an MBA -- for the people leading the groups of software engineers.

I am honestly much more worried about terrible directives coming from C-suite people that result in ethically questionable apps and software.

That, in addition to ethics training and commitments from the people doing the actual engineering work.


So he suggests something like the “Hambach Declaration on AI”? When it comes to regulation, we are fast in the EU ;) https://ec.europa.eu/futurium/en/institutional-matters/decla...


While I still keep in mind the words from Tron ("fight for the users") and haven't yet accepted a job where people would likely be harmed by my code one way or another (that I knew of), much of the blame still falls on management, as the top comment says.

Still, I'll always try to fight for the users in any code I write and, especially, in any code I control.


The Hippocratic Oath works for doctors because they have power over what happens to their patients. Administrators and managers run the facility but cannot tell doctors what to do.

It would only work for software engineers if we moved to the same model: management runs the facility but cannot tell software developers what to build.

I don't think that's a viable model for managing software development.


I think that could work, but the people working there wouldn't solely be software engineers (i.e. not just code monkeys). They would be people using software engineering to solve a problem.


What about a Hippocratic Oath for management and leadership?

In all seriousness, ultimate decision making, hiring, firing, promoting, rewarding, punishing, and culture building are all on management and leadership. Microsoft's President happens to sit at the top of it. If they think companies use engineers in an unethical way, what about building a management culture that ensures this won't happen?


OK, sure, I'm all for it, but as a business executive, the president of Microsoft can really stay the hell out of the process of coming up with it.


Get into an industry, make a shitload of money, then add regulation and qualifications to make it harder for the next generation.

A tale as old as time itself.


That's the summary for the Boomers & X'ers.


I say no. My boss says "well, then no more salary for you". I say yes. My boss smiles. I feel dirty but my kids are not hungry.


Think more like a member of a guild.

In fact, I am starting to think certifications like PCI DSS are similar to an oath: "I will not leak customer data, I will not leak card information." If the oath is broken, the PCI retaliation is swift: the whole company is completely cut off from processing payments. The company's compliance is ensured under threat of bankruptcy.

So you push back: pushing this through means the company will fail the PCI DSS audit when it comes up, and then we're all out of jobs. No salary for anyone. And the boss thinks again.


This is stupid. It's like asking a blacksmith not to build a specific knife because it'll be used in a murder. You need to prosecute the murderer, not the people building the knife.

Software, as an object, is harmless. It can be used for good or evil, just like anything else. So if anybody needs an oath, it's the people using the software, not the people building it.


A quick retort: I'd roughly sketch out the problems with software as 10% personal ethical issues and 90% leadership/organizational/constraint-related.

Engineers want the time and resources to do a good job, to make sure things are well considered. It is capital that drives us to push for expediency and convenience.


Electrical Engineers, if they join the IEEE, are asked to uphold their code of ethics: https://www.ieee.org/about/corporate/governance/p7-8.html

Mechanical Engineers have a similar one (ASME).

This is not novel.


I'm not a fan of the article's opening comparison to military hierarchy as an example of an organization that deploys weapons with ethics or discretion. As we've seen in (at least) the US, the military industrial complex determines when and where we fight, often at the expense of thousands of innocent lives.


More usefully, we need something like an "engineer's sign-off". A railway engineer is required to sign (literally) a document saying that a new bridge or a changed line is safe and reliable, and if they don't, there is no getting around it by management, etc.

This has lots of implications, but the big one is really very few railway deaths.


Most evil is committed with the perpetrator fully aware that the action is evil. An oath isn't going to stop someone who is aware they are violating social norms in the first place.

A thief knows stealing is wrong. A murderer knows killing is wrong. You don't stop thieves and murderers by making them swear Hippocratic oaths.


This has nothing to do with ethics and everything to do with hyping AI. “It’s so good you need to listen to me. Not just on technical matters but on moral matters. It’s too dangerous to give to just anyone but I can protect you from it. I can harness it for good. If only you would do as I say and give me $1b”


Why are people so obsessed with "don't be evil" when "be good" is a much more effective commandment? This suit had to visit Pope Francis to ask him how to program a soulless machine; his time would have been better spent helping a local charity, or better yet, a struggling family member.


Absolutely not.

No.

No again.

A thousand times, no.

The reason that doctors take an oath is that their practice directly affects people. A patient puts their leg, their heart, their eyes right into the doctor's hands. Any mistake, any "experiments"... these directly harm the patient in an objective way.

A "do no harm" oath doesn't work with software. A software engineer writes a piece of code. This can be used for good, or evil, but by whose standard? Once the code is out there, the software developer cannot be held liable for its abuse, except perhaps in the limited circumstance that the software was designed FOR abuse. (ie: a zero day released without responsible disclosure).

I would never take an oath that I wouldn't release "harmful software". First, because "harmful" today has become incredibly subjective. (Someone might say I'm being harmful in writing this!) Second, because it's difficult to tell how something will be used, and I don't have the time or energy to follow up with everyone using it to make sure they're using it to my liking.

Last year, someone got a lot of attention for taking down their insignificant NodeJS is_even implementation when they discovered that ICE was using it. Can you imagine what would happen if Redis disappeared, because the authors discovered it was being used by some disagreeable activists?

I'll tell you, very simply. People would continue to use it, having archived the source. The activists, ICE, you, me, everyone. Any such oath is useless posturing, and subject to the whims and pressures of what's in vogue today. Gives the appearance of doing good, without having to do any good.


There's checking people using your libs downstream (which, admittedly, is quite hard to do) and there's not writing software for cruise missiles.

> This can be used for good, or evil, but by whose standard?

Your own. The oath is taken by humans, and humans differ in opinion - no one can impose an absolute standard on you, and you follow your oath as you understand it. There's no board to deprive you of your license in IT.

> Any such oath is useless posturing, and subject to the whims and pressures of what's in vogue today.

It's only as good as the will to uphold it. I think that's what's in short supply among the stereotypical "bro" coders.


> cruise missiles

Cruise missiles aren't necessarily evil; quite the opposite, they could be guaranteeing liberty.


"Software Engineers" are not the problem. They'll do what their told. That Microsoft President should look higher up the food chain (stuff rots from the head chief). Also the analogy to having a UCMJ-type legal system for "Software Engineers" is just bonkers.


Those of us with professional Engineering titles already take a kind of engineering oath anyway.

The problem is the lack of value of the word "engineering" in several countries, where everyone feels free to call themselves an engineer after a six-week bootcamp without suffering any consequences.


> The problem is the lack of value of the word "engineering" in several countries

Or the fact that "engineering" has been co-opted and made de facto illegal for all but the small minority who agree to pay the local mafia^Wengineering association. This is especially a problem in North America. In Europe, the problem is different: you can only declare yourself an "engineer" if you've been to the right university.

Anyway, I could be a P.Eng, but paying the yearly racket money... I'll pass.


This is another example of the system trying to shift the failures of the economic elite onto individual workers. It is not the software engineering profession that is putting the world at risk; it is the unscrupulous drive for ever more profit coming from investors and CEOs.


What the Microsoft President said:

"We need a Hippocratic oath for software engineers"

What it would be useful for the Microsoft President to say:

"We need a Hippocratic oath for software engineers, marketing, sales, accounting, customer support, management, senior executive staff, board of directors, ..."


Software Engineers: but your ethics aren’t my ethics, and/or ethics get in the way of my paycheck.


We need a Hippocratic Oath for the medical profession. How many doctors honestly "first do no harm"? Seriously, my father was killed by them, and my best friend was seriously injured. If they had never visited doctors, they would still be alive, or still walking.


As a Microsoft employee, how could I take a Hippocratic oath and continue to work for Microsoft? Are the DOD and ICE not harming people? It's not the engineers who need to take the oath; it's the greedy business folk who will do anything for a dollar.


I think a lot of folks on HN supporting this might be disappointed to find out what it actually covers. While you may find it unethical to work for Palantir or Facebook, or require JS to view a page, this particular law isn't going to be on your side.


Tangential... but who exactly is Microsoft's president? I always thought that "the president" of a company was the CEO (that's how it seems to be used in my native French, anyhow). What's the difference here?


> In the top 10 Computer Science departments in the nation, there is only one that requires taking an ethics course to graduate

I'm not sure how well this meshes with the "you don't need a computer science degree to be a programmer" trend.


Yeah sure, except if I refuse to do something my boss will fire me and find someone who will.

And hey, now that remote work is the new hotness, we’re more replaceable than ever!

Before we can be expected to take an oath of any kind, software engineers need to have agency.


How’s this for a start: the class of people who by their particular education and talent control tech platforms shouldn’t get to determine what political, scientific, and cultural content other people are allowed to consume.


It won't work. There will always be an engineer willing to sell out humans for money or knowledge. Money obviously being the root of all evil.

The surveillance apparatus wouldn't exist without these types of engineers.


It makes more sense to have this oath for all tech workers, not just software engineers. Software engineers may have less visibility into the matters that determine ethics.

Managers control the context in which software is applied.


Software Engineers: We need a Hippocratic Oath for Microsoft Presidents.


As a developer using Microsoft tools, I'll take the "Hippocratic oath for software engineers" when Microsoft releases fully bug-free tools and operating systems. Not until then.


The ACM does have their code of ethics for all members... https://www.acm.org/code-of-ethics


It used to be DON'T BE EVIL until Google turned to the dark side.


I hope AI will make as many people jobless as possible. Especially lazy Europeans, who took too much money away from me via huge taxes. I hate all Europeans very, very much.


Let's assume that the president of Microsoft said this but it was taken out of context. He of all people understands who decides what gets built and how it is used.


We already have something better. It's called the AGPLv3.


No. We need one for software company executives and investors.

A senior executive who wants to push this onto their employees can be safely optimized out of the discussion.


I couldn't disagree more. On the contrary, there needs to be a Hippocratic Oath for Management, Marketing, and CEOs.

Anything else will just be buck-passing!


I think, rather, that software engineering needs to be more formalized, like other mature engineering disciplines such as civil engineering.


"Do not create tools which can be used for evil" is rather hard to do. Metasploit is rather useful but can be used for great evil.


That'd be great; then their employers could ride roughshod over their oath the same way hospitals and insurers do with doctors.



Why not some sort of professional standards body with examinations for admittance, like any other engineering discipline?


This will not work unless the CEOs of all of these companies take the same oath too ... along with the shareholders.


Do no harm, with patents, copyright, and proprietary software considered harmful.

I could get behind that; I don't think you will, though.


Some of these systems are so complicated it is hard for one person to reason about every eventuality.


Stupid bullshit, we need a Hippocratic Oath for CEOs if ANY OF THIS COLOSSAL MESS IS EXPECTED TO IMPROVE!


I just wrote a keylogger in Python, and billion-dollar companies shouldn’t be touted as arbiters of morality.


The minute we do that, all software engineering will be shipped off to countries that don’t require it.


Could a software engineer writing guided missile targeting software be held accountable to such an oath?


The Hippocratic Oath is not for assigning blame or providing accountability towards society's rules - at least not in the first place. It is a reminder of basic ethical and professional standards and contributes to the common understanding of a physician's role.

This may seem ridiculous considering the responsibilities of medical professionals but it works as a baseline independent of the legal and moral standards of the particular society they live in.

So to answer your question:

Yes, the software engineer could be held accountable - yet, that would arguably be up to him- or herself in many cases [1]. There were physicians in history who have done horrible things [2] and - if not by the legal system - they were at least judged by their peers for their actions.

From my perspective, the most valuable aspect of having an oath (and that I regard for professions in general) seems to be the opportunity it offers for reflection, identity and future aspirations.

[1] https://en.wikipedia.org/wiki/Hippocratic_Oath#Violation [2] https://en.wikipedia.org/wiki/Josef_Mengele


No, we need antitrust investigations


That's a lot of words to try to move accountability from the executive level down to the common soldiery.

The medical profession's hierarchy is completely different, to accommodate medics' own agency.

Responsibility without freedom (and without just compensation, dare I say) is just bullshit.


And yet they provide services to ICE


Not gonna do it. Not, gonna, do it.


I feel like a lot of people here are missing the point of professional agreements or organizations like this: protecting the engineer from the consequence of saying "no" to something unethical. Almost every software engineer I talk to has a story about something they did early in their career that was ethically dubious. Hell, I work at a fairly senior level in medical software and I still feel like I'm taking risks by saying "no" to things that are clearly sketchy.

Also, it makes sure we have an ethical framework to work with. Right now, it's every engineer for themselves. You won't do something that sits in a legal gray area? Companies will find someone hungry enough to do it. If we're all staking our professional reputation on it, there will be fewer competent takers.

Personally, I think it will also be a step towards better legitimizing the profession and being more inclusive. Many folks have started equating Software Engineer with hyper-rationalist orthodox libertarianism and... the stereotype isn't far off from what's typically upvoted in these sorts of communities.

Anyway, them's my thoughts on this. I definitely support this sort of notion.


The Hippocratic Oath is the wrong thing. Software engineers need to act like real engineers: certifications, accepted practices, and penalization (revocation of certification) if accepted practices aren't followed. Software engineering needs to live up to its actual name, and software engineers need to stop being just "programmers".


No, please. Let's not take one step forward and ten steps back. Software engineering culture is already light-years ahead of pretty much every other field in every regard, ranging from inclusivity to simple fun.

Certification is meaningless corporate propaganda, and we should eliminate it completely unless it's free and public.


Disagree. Without standards that we as a group hold ourselves to, there's no improvement.


How about we start with a Hippocratic oath for public company executive management?


"I swear to fulfill, to the best of my ability and judgment, this covenant: I will respect the duty to increase shareholder profits by any means necessary."


Doesn't he mean for MBAs?


I think a Hippocratic oath for CXOs and other managers is more needed.


Isn't that what GPL is?


Would someone please remind me how and why the Hippocratic oath began?


When I was taking my CS engineering undergraduate course (in India, ~2008), there was an elective subject (which had to be chosen by the entire class) called 'Engineering Ethics'. It was a preferred elective, as there was no coursework and, I think, no tests either.

I remember the professor starting the class with:

>"If a Structural/Civil Engineer builds a bridge and it goes down, he/she will go to jail; lucky for you guys there are no ethics for computer science".

The last time I said this, I was told there's no way a structural/civil engineer goes to jail if a bridge goes down. Maybe it's more common in India; it could very well be the norm, because every time a bridge goes down, a related engineer gets arrested the very same day, or soon after, for 'causing death by negligence'; perhaps a colonial-era practice still used to pacify the public.

Read:

Another BMC engineer arrested in Mumbai bridge collapse(2019)[1]

Bhubaneswar flyover collapse: Engineer, director of construction firm arrested(2017)[2]

4 Engineers Arrested In Kolkata Flyover Collapse Case(2016)[3]

IIT Roorkee: Two professors arrested in bridge collapse case(2015)[4]

SMC engineer held in bridge collapse case suspended(2014)[5]

Gammon, Hyundai officials arrested, probe ordered(2009)[6]

I'm sure you can pull up such cases going back at least 200 years.

[1]https://www.deccanchronicle.com/nation/current-affairs/02041...

[2]https://www.hindustantimes.com/india-news/bhubaneswar-flyove...

[3]https://www.ndtv.com/kolkata-news/4-engineers-arrested-in-ko...

[4]https://www.indiatoday.in/india/story/iit-roorkee-two-profes...

[5]http://timesofindia.indiatimes.com/articleshow/38044181.cms

[6]https://www.ndtv.com/india-news/kota-bridge-collapse-30-dead...


How about a Hippocratic Oath for the pharmaceutical industry first?


Sounds like hiring competition with FB and Google is hotting up, then.


Software engineers: We Need a Hippocratic Oath for Tech C-Suits.


We Urgently Need a Hippocratic Oath for Software Executives


When are we having a Hippocratic Oath for politicians?


This is like saying “we need a hippocratic oath for hands”.


We Need a Hippocratic Oath for Software Executives


Okay. How about one for politicians and CEOs?


An oath for software engineering companies is more like it.


The audacity of this POS. I'll take it, if we also have a Hippocratic oath for greedy CEOs and board members.


We Need a Hippocratic Oath for CEOs


Doctors can agree on a minimal ethical standard; software engineers can't.


Oh yeah, WE are the problem.


Software engineers do what pays the bills.

The people in charge who run the business need a Hippocratic oath.


Really, Brad? Software Engineers need a hippocratic oath?

This is the most hypocritical, low-minded dirty blow, pass-the-buck mentality I have seen in a long time. That's even counting most of what Trump has said. You should be ashamed of yourself, Brad.

How dare you make that remark when your WHOLE PLATFORM is built on shady business practices and monopoly leveraging? How dare you try to shunt the blame onto software engineers when YOU YOURSELF profit from a lack of ethics and participate in the systematic blacklisting of engineers? You earn seven figures a year AT LEAST, and you are trying to blame people just doing their jobs, telling them they should look at the big picture and "have principles" and basically refuse the work or quit, while at the same time actively preventing these kinds of people from finding ethical work elsewhere?

Put your money where your mouth is or stop spewing Trumpian bullshit, Brad. Anybody who buys this needs to have their head examined. With a cactus.


How about an oath for executives of big corporations?


“First do no spam.”


Do civil engineers, accountants, or lawyers take an oath?


Seriously? How about a Hippocratic Oath for CEOs and other corporate executives? They exist in a world that encourages sociopathic behavior, and they are coming down on engineers?


No


You first, management.


What about the C-suite?


They should start by open sourcing the operating system.


Brad Smith in 2007: Linux violates 235 MS patents, and the FOSS community must pay up. https://web.archive.org/web/20070514143032/https://money.cnn...

Brad Smith in 2020: We Need a Hippocratic Oath for Software Engineers.



